Preface. The article below argues that electric cars aren’t going to replace gas and diesel vehicles enough to lessen greenhouse emissions.
The average electric vehicle requires 30 kilowatt-hours to travel 100 miles — the same amount of electricity an average American home uses each day to run appliances, computers, lights and heating and air conditioning. A U.S. Department of Energy study found that increased electrification across all sectors of the economy, driven in large part by electric vehicles, could boost national consumption of electricity by as much as 38% by 2050 (Brown 2020).
I would argue that since two-thirds of electricity is still generated with natural gas and coal, emissions will certainly go up. Nor will wind and solar put much of a dent in that 66% fossil share, because the best areas for solar and wind power have already been developed, and new transmission lines to more distant, unexploited areas cost far more than the solar and wind power they would deliver. Also, when natural gas and coal are burned to generate electricity, two-thirds of the energy contained in them is lost as heat, so only one-third of their energy makes it onto the transmission grid, where another 6 to 10% is lost over the wires; in the end only about 30% of the fossil energy reaches your electric socket, and less still reaches a car's battery after charging losses. Perhaps it would be better to just burn the natural gas directly in cars.
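That chain of losses is easy to sanity-check. A minimal sketch, taking the quoted one-third plant efficiency and 6 to 10% line losses as assumed inputs (these are the article's round numbers, not measurements):

```python
# Sketch of the generation-to-socket loss chain described above.
# Efficiency figures are the article's assumptions, not measured values.
def delivered_fraction(plant_efficiency, line_loss):
    """Fraction of the fuel's chemical energy that reaches the socket."""
    return plant_efficiency * (1 - line_loss)

# One-third of the fuel energy survives the power plant; another
# 6-10% is lost over the wires:
low = delivered_fraction(1 / 3, 0.10)
high = delivered_fraction(1 / 3, 0.06)
print(f"{low:.0%} to {high:.0%} of the fuel energy reaches the socket")
```

Charging losses of roughly 10 to 15% would shave the delivered fraction down further, which is the basis of the "burn the gas directly" argument.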
And finally, until we have massive energy storage in batteries and pumped hydropower, we simply have to have natural gas to balance intermittent wind and solar power, or they'll bring the grid down.
Do the math: expensive electric cars that only the top 5% can afford are not replacing natural gas and coal.
The number of automobiles on the world’s roads is on pace to double — to more than two billion — by 2030. And more likely than not, most of those cars will be burning carbon-emitting gasoline or diesel fuels.
That is because much of the expansion will be propelled by the rise of the consumer class in industrializing parts of the globe, especially in China and India, as hundreds of millions of new drivers discover the glory of the open road. Those populous and geographically sprawling countries might be hard pressed any time soon to assemble the ubiquitous electricity grid required for recharging electric vehicles; and much of the electricity China and India will produce in coming decades will come from coal-fired power plants that are some of the planet’s biggest emitters of carbon dioxide.
Given the limitations of electric cars so far — including their limited range between charges — many experts predict that most of the billion additional cars predicted to be on the road in 2030 will have internal combustion engines that spew greenhouse gases.
But virtually everyone who studies the issue understands that transportation, which is still 95% reliant on petroleum, is the world’s fastest-growing energy-based contributor to greenhouse gases. About three-quarters of the total comes from motor vehicles.
But optimists argue that even in the case of cars with internal-combustion engines, carbon dioxide emissions can be cut significantly by measures like increasing fuel economy and introducing smart-driving technologies to make cars move about with greater efficiency.
The countries with the most cars today have set aggressive goals for improving fuel mileage. The United States, under President Obama’s fleetwide standards for carmakers, is aiming for an average of 54.5 miles per gallon by 2025, up from about 30 m.p.g. now. China is aiming for 50.1 miles per gallon, and the European Union 60.6.
Still, the math is daunting. If the number of cars doubles and average fuel economy improves by only 50%, the fuel-economy gains would be more than offset by the new vehicles: total fuel burned, and hence emissions, would still rise by about a third.
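That math can be made explicit; a sketch treating both multipliers as round-number assumptions:

```python
# If the fleet doubles while average fuel economy improves 50%,
# how does total fuel burned (and hence CO2) change?
def relative_fuel_use(fleet_multiplier, mpg_multiplier):
    """Total fuel burned relative to today; fuel per mile ~ 1/mpg."""
    return fleet_multiplier / mpg_multiplier

future = relative_fuel_use(2.0, 1.5)  # 2x the cars, 1.5x the mpg
print(f"future fuel use = {future:.2f}x today")  # rises by about a third
```

Only if fuel economy doubled (a 100% improvement) would emissions merely hold steady as the fleet doubles.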
And that assumes the auto industry does its part to comply with the new standards and that national regulators diligently enforce them. Recent revelations that Volkswagen, for one, deliberately misled regulators, and that European Union air-quality standards and enforcement have been far from rigorous, do not inspire confidence.
“But the automakers are attacking these standards as we speak, both in Congress and through a review of the program they demanded from the Obama administration,” Mr. Becker said. “Similar attacks are underway in the E.U.”
Congress, in an effort to make the United States more energy independent, passed a law in 2007 mandating a 35 m.p.g. auto-fleet standard by 2020. But before that, there had been no official change to American fuel-economy standards in more than 30 years.
“The U.S. auto industry was successful between 1975 and 2007 in preventing any improvement for mileage standards for CO2 emissions,” Mr. Becker said. “They exploit every loophole in the standards, making more SUVs, pickups and other light duty trucks than cars because trucks have weaker standards than cars, and more large vehicles because large vehicles have weaker standards than smaller vehicles.”
But Mr. Becker, at the Safe Climate Campaign, points out that electric vehicles are only as environmentally friendly as the electricity that recharges them. China, though it is rapidly adopting nuclear power plants, is still heavily reliant on coal-fired electrical plants.
And India, where the biggest growth in automobile ownership is expected to occur as the country industrializes and its population surpasses China’s by 2030, might actually increase its reliance on coal-fired electrical power plants between now and then.
“At the end of the day, when you talk about transport emissions for transport in general, including for freight transport, they increase when the economy is growing,” he said. “So what are we going to say, we’re going to stop the economy to stop emissions?”
Preface. The four articles below explain why methane from permafrost or hydrates is not likely to erupt abruptly and send Earth into a hothouse hell. In addition, here are some posts debunking Guy McPherson, who believes the world will end in a methane apocalypse:
Researchers studied methane emissions from a period in Earth's history partly analogous to the warming of Earth today. Their research, published in Science, indicates that even if methane is released from these large natural stores in response to warming, very little of it actually reaches the atmosphere.
This finding suggests that methane emissions from future warming will likely not be as large as some have suggested.
This is due to several natural buffers. In the case of methane hydrates, if the methane is released in the deep ocean, most of it is dissolved and oxidized by ocean microbes before it ever reaches the atmosphere.
If the methane in permafrost forms deep enough in the soil, it may be oxidized by bacteria that eat the methane, or the carbon in the permafrost may never turn into methane and may instead be released as carbon dioxide.
A popular theory of a giant methane burp as the killer, known as the "clathrate gun" hypothesis, posits a sudden and massive release of methane from hydrates on land, on ocean shelves, and in the depths of the ocean. There are many reasons to question this, though:
Majorowicz (2014) found methane hydrate reservoirs would have melted over 100 to 400 thousand years in the hothouse Permian world, long before the first extinction pulse.
Hydrates only form in cool oceans like today's, and there isn't much carbon in them, somewhere between 500 and 2,500 GtC (Milkov 2004). Even that is far more than would have existed in the warmer Permian oceans, and it is just a small fraction of the overall 24,000 to 46,000 GtC of Siberian Trap emissions.
Ocean hydrates would also have been far less extensive than they are today, because the supercontinent Pangaea had far fewer miles of hydrate-containing continental shelf than our multiple continents have today (Wignall 2017).
Methane is a far more powerful greenhouse gas than carbon dioxide, but it doesn't last long in the air because it oxidizes to CO2 and water vapor in about 9 years.
It is not likely that methane from deep-water hydrates would reach the atmosphere, since it would be oxidized in the water column (Ruppel 2011).
Even if there were a methane hydrate burp in the first killer pulse, a new suspect has to be found for the second killer pulse, since there was no cooling in the 200,000-year interval between them, and it takes millions of years of cold water for methane hydrates to form again after they have melted.
Milkov (2004) concludes: "A significantly smaller global gas hydrate inventory implies that the role of gas hydrates in the global carbon cycle may not be as significant as speculated previously."
References
Majorowicz, J., et al. 2014. Gas hydrate contribution to Late Permian global warming. Earth and Planetary Science Letters 393:243-253.
Milkov, A. 2004. Global estimates of hydrate-bound gas in marine sediments: How much is really out there? Earth-Science Reviews 66: 183-197.
Ruppel, C. D. 2011. Methane Hydrates and Contemporary Climate Change. Nature Education Knowledge.
Wignall, P. B. 2017. The Worst of Times: How Life on Earth Survived Eighty Million Years of Extinctions. Princeton University Press.
Recent studies (e.g. Whiteman et al) have raised the alarm that methane emissions could occur in the Arctic, especially over the East Siberian Shelf and in Siberian lakes (e.g. Shakhova et al). However, there is a vigorous academic debate on the origin and potential impact of these emissions. As acknowledged by the IPCC: "How much of this CH4 originates from decomposing organic carbon or from destabilizing hydrates is not known. There is also no evidence available to determine whether these sources have been stimulated by recent regional warming, or whether they have always existed since the last deglaciation. More research is therefore urgently needed."
The first uncertainty is the amount of gas hydrates stored on Earth. Global gas-in-place estimates range over an order of magnitude, from 1,000 to 20,000 trillion cubic meters (tcm), with most estimates around 3,000 tcm. Estimates are even more uncertain at the regional level: for instance, there are no models for Antarctic reservoirs, and estimates for Arctic permafrost have only been made recently.
In the permafrost, additional uncertainty arises from the origin of the methane emissions, whereas in the case of ocean sediments, the mechanisms by which methane is released and its ability to reach the atmosphere are also disputed. So are the biochemical and chemical consequences that gas-hydrate releases would have on oxidation mechanisms; for example, there may be resource limitations hindering methane oxidation in the ocean.
Since gas hydrates are only stable under high pressures and at low temperatures, there have been concerns that climate change could result in gas-hydrate dissociation and the release of methane into the atmosphere. The response of gas hydrates to climate change has only been investigated recently; modelling in this field is in its infancy and faces major uncertainties. Nevertheless, it is generally agreed that gas-hydrate dissociation is likely to be a regional phenomenon rather than a global one, and more likely to occur in subsea permafrost and upper continental shelves than in deep-water reservoirs, which make up the majority of gas hydrates. Indeed, the latter are relatively well insulated from climate change because of the slow propagation of warming and the long ventilation time of the ocean. Moreover, the release of methane from gas-hydrate dissociation should be chronic rather than explosive, as was once assumed; and emissions to the atmosphere caused by hydrate dissociation should be in the form of CO2 because of the oxidation of methane in the water column.
Graphs adapted from Archer (2007), "Methane hydrate stability and anthropogenic climate change". In the graph on the right, the ventilation timescale corresponds to the time required for temperature (heat), pressure, and solutes such as methane to diffuse through the sediments.
Ocean thermal response varies according to depth, as highlighted in the graph above (left), but also from place to place, especially in deep-water locations, due to ocean currents. In sediments, the diffusion of heat towards deeper layers takes time and varies primarily according to depth, but also according to the composition of the sediment and to the geothermal gradient. Heat can diffuse approximately 100 meters in about 300 years (point A). Solutes such as dissolved methane diffuse even more slowly, about 100 meters in 30,000 years (point B), while pressure perturbations (e.g. following a sea-level rise) diffuse more quickly, about 100 meters in 3 years (point C).
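Those three figures all follow from the same diffusive scaling, t ≈ L²/D. A sketch with order-of-magnitude diffusivities chosen to reproduce the quoted numbers (the D values are my assumptions, not taken from the source):

```python
# Diffusive timescale t ~ L^2 / D for a perturbation to penetrate a
# depth L into sediment. Diffusivities are assumed order-of-magnitude
# values picked to match the "100 m in ~300 / ~30,000 / ~3 years" text.
SECONDS_PER_YEAR = 3.15e7

def years_to_diffuse(depth_m, diffusivity_m2_s):
    return depth_m ** 2 / diffusivity_m2_s / SECONDS_PER_YEAR

heat = years_to_diffuse(100, 1e-6)      # ~300 years   (point A)
solutes = years_to_diffuse(100, 1e-8)   # ~30,000 years (point B)
pressure = years_to_diffuse(100, 1e-4)  # ~3 years     (point C)
```

Because the timescale goes as depth squared, hydrates a few hundred meters down are insulated from surface warming for millennia.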
As a result of thermal inertia, heat diffusion and the thawing of permafrost take time, and should be slow enough to insulate most hydrate deposits from expected anthropogenic warming over a 100-year timescale. Nevertheless, temperature increases in high latitudes such as the Arctic are expected to be much higher than increases in the mean global temperature, and are therefore more likely to affect gas-hydrate reservoirs. Rises in sea level would increase pressure at the seafloor, which may mitigate further dissociation of offshore gas-hydrate deposits; however, this effect is likely to be insufficient to negate the warming.
Even if warming were to reach the gas hydrate stability zone, the fate of any methane released would be uncertain. Gas could escape if the pressure exceeded the sediment's lithostatic pressure, but it might also remain in place. In addition, since gas-hydrate dissociation will start at the edge of the stability zone, even if gas were able to migrate, it might subsequently be trapped in newly formed hydrates.
Finally, even if methane were able to migrate towards the seafloor, it would probably not reach the atmosphere. Most methane is expected to be oxidized in the water column rather than released by bubble plumes or other “transport pathways” directly into the atmosphere as methane. Nevertheless, the oxidation of methane produces CO2, which will have an impact on ocean acidification and will remain in the atmosphere.
The susceptibility of gas-hydrate deposits to climate-change-induced dissociation varies significantly according to reservoir location. (Moridis et al. 2011. Challenges, uncertainties and issues facing production from gas hydrate deposits.)
The risk of climate change causing gas-hydrate dissociation and methane leaks varies significantly by location. This can be explained by depth differentials, by the existence of mitigation mechanisms such as water-column oxidation, or by the exposure of gas-hydrate deposits to varying regional warming phenomena. High-latitude warming is expected to be much greater than global-mean-temperature warming.
As a rule of thumb, gas hydrates held within subsea permafrost on the circum-Arctic ocean shelves and on upper continental slopes are the most prone to dissociation. Subsea permafrost, which was flooded by relatively warm waters as sea levels rose thousands of years ago, has been exposed to dramatic rises in temperature that have significantly degraded both the permafrost and the gas hydrates within it. Upper continental slopes are believed to store a greater quantity of gas hydrates than subsea permafrost, but methane released from them is less likely to reach the atmosphere directly because of oxidation in the water column.
However, it is very unlikely that climate warming will disturb gas-hydrate deposits held in deep-water reservoirs (around 95% of all deposits) on a millennial timescale. Finally, gas hydrates in seafloor mounds may also dissociate as a result of warming of the overlying water or of pressure perturbation, but these account for a very limited share of gas hydrates in place.
The sensitivity of gas-hydrate deposits in onshore permafrost, especially at the top of the hydrate stability zone, is more uncertain and subject to greater debate.
Archer et al. calculated that between 35 and 940 GtC of methane could escape as a result of global warming of 3 °C, adding at most a further 0.5 °C to global warming. On top of the uncertainty reflected in that range, there are other considerable uncertainties, notably concerning the effectiveness of mitigation mechanisms and the long-term outlook, since methane will continue to be released even if warming stops.
Reagan and Moridis (2007), “Oceanic gas hydrate instability and dissociation under climate change scenarios”;
Maslin et al. (2010), “Gas hydrates: past and future geohazard?”;
Shakhova et al. (2010), “Predicted Methane Emission on the East Siberian Shelf”;
Whiteman et al. (2013), "Climate science: Vast costs of Arctic change"
Do the huge craters pockmarking Siberia herald a release of underground methane that could exceed our worst climate change fears? They look like massive bomb craters. So far 7 of these gaping chasms have been discovered in Siberia, apparently caused by pockets of methane exploding out of the melting permafrost. Has the Arctic methane time bomb begun to detonate in a more literal way than anyone imagined?
The “methane time bomb” is the popular shorthand for the idea that the thawing of the Arctic could at any moment trigger the sudden release of massive amounts of the potent greenhouse gas methane, rapidly accelerating the warming of the planet. Some refer to it in more dramatic terms: the Arctic methane catastrophe or methane apocalypse.
Some scientists have been issuing dire warnings about this. There is even an Arctic Methane Emergency Group. Others, though, think that while we are on course for catastrophic warming, the one thing we don’t need to worry about is the so-called methane time bomb. The possibility of an imminent release massive enough to accelerate warming can be ruled out, they say. So who is right?
Few scientists think there is any chance of limiting warming to 2 °C, even though many still publicly support this goal. Our carbon dioxide emissions are the main cause of the warming, but methane is a significant player.
Methane is a highly potent greenhouse gas, causing 86 times as much warming per molecule as CO2 over a 20-year period. Fortunately, there's very little of it in the atmosphere. Before humans arrived on the scene there was less than 1000 parts per billion. Levels started rising very slowly around 5000 years ago, possibly due to rice farming. They've gone up more since the industrial age began: the fossil fuel industry is by far the single biggest source, followed by farting farm animals, leaking landfills and so on. Only a tiny percentage comes from melting Arctic permafrost.
The level in the atmosphere is now nearing 1900 ppb, but that's still low. CO2 levels were much higher to start with, around 270,000 ppb (270 ppm) before the industrial age, and have now shot up to 400,000 ppb (400 ppm). The main reason is that CO2 persists for hundreds of years, so even small increases in emissions lead to its buildup in the atmosphere, just as water dripping into a bath with the plug left in will eventually fill it.
Methane, by contrast, breaks down after just 12 years, so its level in the atmosphere can only increase if there are big ongoing emissions.
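The bathtub analogy can be captured in a toy one-box model; a sketch in illustrative units (the emission rate and the CO2 "lifetime" are stand-ins for the purpose of contrast, not real atmospheric numbers):

```python
# Toy one-box model: constant annual emissions, first-order decay with
# a given lifetime. Short-lived methane plateaus at a low level, while
# near-inert CO2 keeps accumulating. All numbers are illustrative.
def level_after(years, emission_per_year, lifetime):
    level = 0.0
    for _ in range(years):
        level += emission_per_year
        level -= level / lifetime  # fraction lost to decay each year
    return level

methane_like = level_after(200, 1.0, lifetime=12)  # plateaus within decades
co2_like = level_after(200, 1.0, lifetime=1000)    # still climbing after 200 yr
```

Run for another two centuries, the methane-like level barely moves while the CO2-like level keeps rising, which is why steady methane emissions cap its concentration but steady CO2 emissions do not.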
So for methane to cause a big jump in global warming there not only has to be a massive source, it has to be released very rapidly. Is there such a source?
Yes, claim a few scientists. They point to the Arctic permafrost, and specifically to the East Siberian Arctic shelf. This vast submerged shelf underlies a huge area of the Arctic Ocean, which is less than 100 meters deep in most places. During past ice ages, when sea level dropped 120 meters, the land froze solid.
This permafrost was covered by rising seas as the ice age ended around 15,000 years ago. The upper layer has been slowly melting as the relative warmth of the seawater penetrates down. But the frozen layer is still hundreds of meters thick. No one doubts that there is plenty of carbon locked away in and under it. The questions are, how much is there, how much will come out in the form of methane, and how fast?
Natalia Shakhova of the International Arctic Research Center at the University of Alaska Fairbanks, has been studying the East Siberian Arctic shelf for more than two decades. Her team has made more than 30 expeditions to the region, in winter and in summer, collected thousands of water samples and tons of seabed cores during four drilling campaigns and made millions of measurements of ambient levels of methane in the air.
Her team has estimated that there is a whopping 1750 gigatons of methane buried in and below the subsea permafrost, some of it in the form of methane hydrates – an ice-like substance that forms when methane and water combine under the right temperature and pressure. What’s more, they say that the permafrost is already beginning to thaw in places. “Our results show that… [the] subsea permafrost is perforating and opening gas migration paths for methane from the seabed to be released to the water column,” says Shakhova.
Her team’s work hit the headlines in 2010, when in a letter in the journal Science they reported finding more than 100 hot spots where methane was bubbling out from the seabed. But as others pointed out, it was not clear whether these emissions were something new or had been going on for thousands of years.
More sensational stuff was to follow. In another 2010 paper, the team explored the consequences of 50 gigatons of methane – 3% of their estimated total – entering the atmosphere (Doklady Earth Sciences, vol 430, p 190). If this happened over five years methane levels could soar to 20,000 ppb, albeit briefly. Using a simple model, the team calculated that if the world was on course to warm 2 °C by 2100, the extra methane would lead to additional warming of 1.3 °C, so temperatures would hit 3.3 °C by 2100.
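That 20,000 ppb figure can be sanity-checked from the mass of the atmosphere; a sketch using standard molar masses (the 50-gigaton release is the paper's scenario, not a prediction):

```python
# Convert a methane release in gigatons to an atmospheric mixing ratio
# in ppb (molar, i.e. parts per billion by volume). The atmospheric
# mass and molar masses are standard textbook values.
ATM_MASS_KG = 5.1e18    # total mass of Earth's atmosphere
M_AIR_KG_MOL = 0.029    # mean molar mass of air
M_CH4_KG_MOL = 0.016    # molar mass of methane

def ppb_from_release(gigatons_ch4):
    moles_ch4 = gigatons_ch4 * 1e12 / M_CH4_KG_MOL
    moles_air = ATM_MASS_KG / M_AIR_KG_MOL
    return moles_ch4 / moles_air * 1e9

spike = ppb_from_release(50)  # roughly 18,000 ppb from the release alone
```

Added to today's roughly 1,900 ppb, that lands right around the 20,000 ppb the paper describes.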
This study appeared in an obscure journal and did not get much attention at the time. But then Peter Wadhams of the University of Cambridge and colleagues decided to see how much difference a huge methane release between 2015 and 2025 would make when added to an existing model of the economic costs of global warming. "A 50-gigaton reservoir of methane, stored in the form of hydrates, exists on the East Siberian Arctic shelf," they stated in Nature, citing Shakhova's paper as evidence. "It is likely to be emitted as the seabed warms, either steadily over 50 years or suddenly." Understandably, this was big news.
But in reality the idea that 50 gigatons could suddenly be released, or that there's a store of 1750 gigatons in total, is very far from being accepted fact. On the contrary, Patrick Crill, a biogeochemist at Stockholm University in Sweden who studies methane release from the Arctic, says it is simply untenable. He wants Shakhova's team to be more open about how they came up with these figures. "The data aren't available," says Crill. "It's not very clear how those extrapolations are made, what the geophysics are that lead to those kinds of claims."
Shakhova now says, “We never stated that 50 gigatons is likely to be released in near or distant future.” It is true that the 2010 study explores the consequences of the release of 50 gigatons rather than explicitly claiming that this will happen. However, it has certainly been widely misunderstood both by other scientists and the media. And her team’s papers continue to fuel the idea that we should be worried about dramatic and damaging releases of methane from the Arctic.
But other researchers disagree. “The Arctic methane catastrophe hypothesis mostly works if you believe that there is a lot of methane hydrate,” says Carolyn Ruppel, who heads the gas hydrates project for the US Geological Survey in Woods Hole, Massachusetts. And her team estimates that there are only 20 gigatons of permafrost-associated hydrates in the Arctic (Journal of Chemical and Engineering Data, vol 60, p 429). If this is right, there’s little reason for concern.
The issue is not just how much methane hydrate there is, but whether it could be released rapidly enough to build up to high levels.
This could happen soon only if the hydrates are shallow enough to be destabilized by heat from the warming Arctic Ocean.
But David Archer of the University of Chicago says that hydrates could only exist hundreds of meters below the sea floor. That's far too deep for any surface warming to have a rapid impact. The heat will take thousands of years to work its way down to that depth, he calculated last year, and only then will the hydrates respond (Biogeosciences Discussions, vol 12, p 1). "There is no way to get it all out on a short timescale," says Archer. "That's the crux of my position."
This concerted push back against the idea of an impending methane bomb has led to something of a feud. Commenting on Archer's paper, for instance, Shakhova said he clearly knew nothing about the topic. She has repeatedly pointed out that her team has actual experience collecting data on the East Siberian Arctic shelf, unlike her detractors.
But there is skepticism about Shakhova’s actual measurements, too. For instance, her team has reported that methane levels above some hotspots in the East Siberian shelf were as high as 8000 ppb. Last summer, Crill was aboard the Swedish icebreaker Oden, measuring levels of methane over the East Siberian shelf. Nowhere did he find levels this high. Even when the Oden ventured near the hotspots identified by Shakhova’s team, he never saw levels much beyond 2000 ppb. “There was no indication of any large-scale rapid degassing,” says Crill.
It’s not clear why other teams are finding lower levels than Shakhova’s. But to find out if a catastrophic release of methane is imminent, there is another line of evidence we can turn to. Thanks to ice cores from places like Greenland, we have a record of past methane levels going back hundreds of thousands of years. If there are lots of shallow hydrates in the Arctic poised to release methane as soon as it warms up a little, they should have done so in the past, and this should show up in the ice cores, says Gavin Schmidt of the NASA Goddard Institute for Space Studies in New York.
Around 6000 years ago, although the world as a whole was not warmer, Arctic summers were much warmer thanks to the peculiarities of Earth's orbit. There is no sign of any short-term spikes in methane at this time. "There's absolutely nothing," says Schmidt. "If those methane hydrates were there, they were there 6000 years ago. They weren't triggered 6000 years ago, so it's unlikely they'd be triggered imminently."
During the last interglacial period, 125,000 years ago, when temperatures in the Arctic were about 3 °C warmer than now, methane levels rose a little, as expected in warmer periods, but never exceeded 750 ppb. Again, there’s no sign of the kind of spike a large release would produce.
There is, then, no solid evidence to back the idea of a methane bomb, and past climate records suggest there is no cause for alarm. Extraordinary claims require extraordinary proof; otherwise they undermine credibility and slow down our ability to make the decisions that we as a society are going to have to make.
No one is saying methane is not a concern. Levels are now the highest they've been for at least 800,000 years and climbing. The Intergovernmental Panel on Climate Change's worst-case emissions scenario assumes a big rise in methane, to as much as 4000 ppb by 2100.
What about the gaping craters? They are certainly spectacular and scary-looking. The latest idea is that they are caused by the release of pockets of compressed methane as ice seals melt. But the amount of methane released per crater is minuscule in global terms. Around 20 million craters would have to form within a few years to release 50 gigatons of the gas.
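That last comparison is simple division; a sketch using the figures quoted in the text:

```python
# If a 50-gigaton release had to come out through craters like these,
# how much methane would each of the 20 million craters have to vent?
TOTAL_RELEASE_TONNES = 50e9  # the feared 50-gigaton scenario
CRATER_COUNT = 20e6          # craters required, per the text

tonnes_per_crater = TOTAL_RELEASE_TONNES / CRATER_COUNT
print(f"{tonnes_per_crater:,.0f} tonnes of methane per crater")  # prints "2,500 tonnes of methane per crater"
```

Seven craters venting a few thousand tonnes each is many orders of magnitude short of a climate-scale release.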
Preface. I'm writing a book which concludes that only biomass and biofuels can replace diesel and other fossil fuels. But 3 billion more people are expected by 2050, consuming a good chunk of the biomass, which will also be needed to replace half a million products now made from fossil fuels, to heat homes and buildings, to generate electricity, and for dozens of other fossil fuel uses. Rural roads will take even more of a beating than they already do.
There are two articles below, the first about rural roads falling apart, and the second excerpting a 302-page National Research Council study of superheavy commercial trucks that can weigh 2 million pounds. Ouch. After reading the whole damn thing, I never did find out the cost we taxpayers shoulder. But the superheavy loads themselves were of interest, and I show some of them below.
Picture this: County highway workers in bright safety vests pour scalding liquid rubber bandages on the road to cover the worst gashes. From above, it looks like skywriting — as if the bandages on the highway were spelling out a message for readers in the clouds.
The roads look like losers in a barroom brawl. Thick, jagged cracks run down the asphalt like scars, interrupted at points by bruised bumps. In some places, guardrails are tilted off their moorings like a pair of glasses knocked askew.
Rural roads are falling apart in small agricultural counties and towns across the Midwest and South, with eroding shoulders dangerous to the 80,000 pound trucks full of soybeans careening down the road. Reconstruction costs $300,000 per mile, and short-term patching $17,000 per mile.
Two-thirds of U.S. freight originates in rural areas where traffic volume has increased from heavy supersized tractor-trailers and farm equipment. In the spring thaw, melting will create soft spots easily damaged by heavy trucks.
These behemoths can produce 5,000 to 10,000 times the road damage of one car (TIC 2020). Although only 19% of Americans live in rural areas, those areas contain 68% of total road miles.
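That multiplier follows from the well-known fourth-power rule of thumb for pavements: damage scales roughly with axle load to the fourth power. A sketch with illustrative axle loads (the loads are my assumptions, not figures from TIC):

```python
# Fourth-power rule of thumb: pavement damage ~ (axle load)^4.
# Axle loads below are illustrative assumptions.
def relative_damage(axle_load_lb, reference_load_lb=2000):
    return (axle_load_lb / reference_load_lb) ** 4

# An 80,000 lb truck on 5 axles (~16,000 lb each) versus a 4,000 lb
# car on 2 axles (~2,000 lb each):
truck = 5 * relative_damage(16_000)
car = 2 * relative_damage(2_000)
print(f"one loaded truck ~ {truck / car:,.0f} cars' worth of wear")
```

The result lands in the same ballpark as the 5,000 to 10,000 figure above; the exact multiple depends on the assumed axle loads and pavement type.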
Asphalt roads only have a lifespan of 30 years, and some counties have roads far older than that.
The result is emergency closings and weight limits. Sometimes a farmer can’t easily move equipment from one field to another. Truckers have to make long detours to deliver feed and fertilizers. Trucks break down with broken axles, wrecked suspension systems, and flat tires.
States often don’t have the money to fix roads. For example, in Wisconsin the gas tax hasn’t gone up since 2006. Often there’s no state or federal assistance either.
NRC. 2015. Practices for Permitting Superheavy Load Movements on Highway Pavements. National Research Council, National Academies Press.
Maximum allowed superheavy vehicle weight. 1 kip = 1,000 pounds
This report documents the practices followed in issuing permits for overweight and superheavy commercial vehicles (SHCVs) or “superloads.” These are trucks that exceed the thresholds set for overweight vehicles allowed to operate with annual permits throughout state highway networks. This synthesis collected detail on the practices that U.S. states and Canadian provinces use. It focuses on SHCV issues related to pavements.
The gross vehicle weight (GVW) of the heaviest SHCV ever permitted by some agencies exceeds 2 million pounds.
A 570,000 pound vehicle carrying an electrical anode used in the copper refining process. It was subjected to a bridge analysis, but not to a pavement analysis, and traveled from Nevada through Arizona to Miami, Arizona.
900,000 pound water purification vessel used in oil refining
A massive 900,000 pound water purification vessel used in oil refining was transported from its manufacturing origin in Portland through Oregon, Idaho, and Montana to its final destination in Alberta, Canada. The fee levied was just $4.26/mile for the entire vehicle. No seasonal restrictions were placed on the movement of this load because it was determined that the subgrade soil conditions encountered were relatively dry and therefore not susceptible to frost heave and/or spring thaw. The move took place in November 2013, during which frost and non-frost conditions were encountered. The vessel was delivered by Columbia River barge to Umatilla, Oregon, traveled for a short distance east on I-84, and then followed secondary roads south to the Idaho border near Ontario, Oregon. The vehicle had an overall length of 375 ft, 4 in. and a width of 22 ft, 2 in. Its GVW was 900 kips and its maximum tandem axle load was 44.75 kips. It was equipped with 32 axles and its maximum tire unit load was 604 lb/in. It was propelled by two pusher tractors and one pull tractor. No pavement analysis was conducted for its impact on the I-84 continuous reinforced concrete pavement.
1.2 million pound transformer moving on Texas roads
This 1.2 million pound truck hauled a transformer from East Houston to Flat Rock, Texas, in August 2014. It had a total of 31 axle assemblies and measured 320 ft, 4 in. in length and 20 ft, 3 in. in width. The picture was taken on FM Road 3009 in Bexar County, Texas. Each trailer axle assembly consisted of two 4-tire axles side by side, spanning the width of two adjacent roadway lanes. The heaviest axle assembly of this vehicle was 48,000 lb divided among eight tires. The move involved flag vehicles and police escorts. The permit fee charged for this vehicle was $935 and stipulated that the hauler is liable for any infrastructure damage.
The practices for permitting superheavy commercial vehicles (SHCVs) in the United States vary widely between agencies in the criteria used to define them, the analysis details for evaluating their impact on pavements, and the fees levied for permitting them.
The gross vehicle weight (GVW) thresholds used to define SHCVs vary from 120 kips to 254.3 kips. Axle load limits by configuration also vary, ranging from 20 to 29 kips for single axles on dual tires, from 34 to 60 kips for tandem axles on 8 tires, and from 50 to 81 kips for tridem axles on 12 tires. In addition, some agencies set limits on the tire weight per unit width (which varies between 500 and 800 lb/in.), whereas others do not. This lack of uniformity in weight regulations forces SHCVs traveling through multiple jurisdictions to comply with the most restrictive set of rules in effect along the route and imposes a considerable administrative burden on shipping companies.
The literature review also suggests that SHCV single-trip fees vary considerably among the 62 jurisdictions in North America (i.e., 50 states, the District of Columbia, ten Canadian provinces, and the Yukon Territory):
• Twenty-three (37%) levy SHCV permit fees that are a function of weight-distance, typically in the form of $/ton/mile for GVW exceeding a certain value. Interestingly, some of the states that use weight-distance taxes do not use the same approach for levying SHCV permit fees. This fee ranges from $0.006/ton/mi to $0.2/ton/mi, with an average value of about $0.049/ton/mi.
• Fifteen (24%) levy SHCV permit fees that are related to GVW per axle weight alone and do not consider the distance traveled by the vehicles.
• Eight (13%) levy a flat SHCV permit fee that ranges from $5 to $550, regardless of any pavement usage indicators, that is, the weight of the vehicle or the distance traveled.
• Seven (11%) levy a processing fee and may add an infrastructure usage fee after studying SHCVs on a case-by-case basis.
• Two jurisdictions (3%) levy a flat fee plus the cost of repairing any infrastructure damage, rather than a charge for infrastructure utilization from SHCV movement.
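The weight-distance fee structure described in the first bullet can be sketched as a small function. This is a hypothetical helper, not any jurisdiction's actual schedule; only the rate range and average come from the text:

```python
def weight_distance_fee(gvw_lb, miles, rate_per_ton_mile, threshold_lb=0):
    """Permit fee of the $/ton/mile form, charged on GVW above a threshold.

    Illustrative sketch; real schedules differ by jurisdiction. The rates
    quoted in the text range from $0.006 to $0.2/ton/mile.
    """
    billable_tons = max(gvw_lb - threshold_lb, 0) / 2_000  # short tons
    return billable_tons * miles * rate_per_ton_mile

# A 250-kip load moving 300 miles at the ~$0.049/ton/mile average rate:
fee = weight_distance_fee(250_000, 300, 0.049)
print(f"${fee:,.2f}")  # $1,837.50
```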
Thirty-eight agencies responded as to whether or not they conduct pavement analysis as part of their SHCV permit process. Of those, five (13%) always do (Delaware, Missouri, Louisiana, Tennessee, and Vermont), 15 (40%) do so depending on the circumstances (Arizona, Colorado, Iowa, Illinois, Indiana, North Carolina, North Dakota, Oregon, Washington State, Wisconsin, Wyoming, Texas, Virginia, British Columbia, and Ontario), whereas the remaining 18 agencies (47%) never perform such an analysis. The majority of the agencies that perform pavement analysis do so when dealing with a vehicle exceeding their definition of a SHCV. Details of the pavement analysis performed were provided by 15 states. Most of them use either their own in-house mechanistic-empirical pavement analysis approach or the mechanistic methods developed by industry (i.e., the Asphalt Pavement Association and the Portland Cement Association). Several agencies indicated that they use the 1993 AASHTO Guide for the Design of Pavement Structures and characterize the truck loads in terms of equivalent single axle loads. None of the responding agencies uses the Mechanistic-Empirical Pavement Design Guide for analyzing the impact of SHCVs. Additional details on the pavement analysis performed by the 15 responding states suggest that most use representative thickness and layer/subgrade moduli and consider the entire length of the SHCV. About half consider only one wheel path, the actual number of tires in the wheel path, and the tire inflation pressure, while approximately 25% consider the actual vehicle speed. Furthermore, only four of the 15 responding agencies consider the stability of the pavement subgrade, and of those, one indicated using a Mohr–Coulomb type of analysis and another a slope-stability numerical method. The number of SHCV permits issued annually varies between agencies and to a large extent depends on their definition of SHCVs.
The range is from fewer than 100 to more than 10,000 per year.
There have been regional efforts to establish uniform heavy truck permitting regulations in the United States, whereby a permit issued by one state is accepted for travel in other states. Twelve western states (Arizona, Colorado, Idaho, Louisiana, Montana, New Mexico, Nevada, Oklahoma, Oregon, Texas, Utah, and Washington), under the auspices of the Western Association of State Highway and Transportation Officials (WASHTO), agreed on a uniform set of truck weight regulations that allows trucks permitted in one of these states to legally operate throughout the rest. In summary, these limits consist of a GVW of 160 kips; tire weights of 600 lb/in. of width; overall consecutive axle weight limits governed by the Bridge Formula; and axle configuration weight limits of 21.5, 43, and 53 kips for single, tandem, and tridem axles, respectively.
The definition of a SHCV or “superload” varies significantly among jurisdictions. Sixteen of the responding agencies (41%) define SHCV in terms of GVW alone, five (13%) use GVW and axle loads regardless of axle spacing, and another five (13%) use GVW and axle loads as a function of axle spacing. Interestingly, the remaining 13 responding agencies (33%) use an alternative definition involving vehicle size, tire loading, axle spacing, and roadway condition.
The results of the survey questionnaire confirmed the findings of the literature review on the various methodologies agencies use for computing SHCV permit fees. Fifteen of the 46 responding agencies (33%) use a GVW-distance-traveled approach (Alabama, Florida, Illinois, Ohio, Missouri, Montana, North Dakota, Tennessee, Utah, Vermont, Washington, West Virginia, Wyoming, British Columbia, and Ontario), two use a pavement damage-distance-traveled approach (Arizona and Oregon), another two use a number-of-axles-distance-traveled approach (Idaho and New Jersey), while 19 (41%) use a different methodology.
The findings of this study suggest that the practice of permitting SHCVs could be significantly improved through further study of their impact on pavements and implementation of the results in establishing equitable permit fees that cover pavement utilization and/or damage.
INTRODUCTION
There is an increasing demand for highway transport of very large non-divisible shipments that not only exceed legal gross vehicle weight (GVW) and axle weight limits, but also exceed the special provisions that allow overweight vehicles to operate with routine annual permits. Such vehicles are typically allowed to operate under single-trip permits following an engineering analysis of their impact on the highway infrastructure (pavements and bridges) on a specific route.
State and provincial practices on permitting such vehicles, henceforth to be referred to as superheavy commercial vehicles (SHCVs) or “superloads,” have a significant impact on both transportation efficiency and infrastructure condition.
The condition of the pavement infrastructure is affected where the fees collected for SHCV permitting do not cover the pavement damage cost caused by these vehicles.
The differences in weight limits between jurisdictions, even those that have common borders, are substantial. For example, a vehicle with a GVW between 150 and 199 kips crossing the Florida–Georgia border would require a SHCV permit review in Georgia but not in Florida, and would be required to have a unit tire weight of less than 550 lb/in. only in Florida, since Georgia does not have this requirement.
Similarly, a vehicle with a GVW between 144 and 191 kips crossing the Minnesota–Wisconsin border would require a SHCV permit review in Minnesota but not in Wisconsin, and would face different maximum permitted axle weights (e.g., tandem axle weights of 40 versus 60 kips and tridem axle weights of 60 versus 81 kips, respectively).
Clearly, there is a lack of uniformity in weight regulations for SHCVs between jurisdictions.
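The inconsistency in the two border examples above can be illustrated with a toy lookup. The review thresholds below are inferred from those examples; they ignore the axle-load and tire-width rules that also differ between jurisdictions:

```python
# Hedged sketch of the cross-border inconsistency described above.
# GVW thresholds (kips) above which a SHCV permit review is triggered,
# inferred from the two border examples in the text; actual rules are
# far more detailed.
REVIEW_THRESHOLD_KIPS = {
    "Georgia": 150, "Florida": 199,
    "Minnesota": 144, "Wisconsin": 191,
}

def needs_shcv_review(gvw_kips, state):
    return gvw_kips > REVIEW_THRESHOLD_KIPS[state]

# A 175-kip vehicle crossing the Florida-Georgia border:
print(needs_shcv_review(175, "Georgia"))  # True
print(needs_shcv_review(175, "Florida"))  # False
```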
As mentioned earlier, the U.S. Congress recently authorized a Comprehensive Truck Size and Weight Limits Study (1) under MAP-21 funding (Moving Ahead for Progress in the 21st Century Act; Section 32801), with the following objectives:
• Address the differences in safety risks, infrastructure impacts, and the effect on levels of enforcement between trucks operating at or within federal truck size and weight limits and trucks legally operating in excess of federal limits;
• Compare and contrast the potential safety and infrastructure impacts of alternative configurations (including configurations that exceed current federal limits) to the current federal truck size and weight law and regulations; and
• Estimate the effects of freight diversion resulting from these alternative configurations.
DEFINITION OF SUPERHEAVY COMMERCIAL VEHICLES
This section summarizes the survey results related to background questions and the way SHCVs are defined and permitted in each jurisdiction. Sixteen of the responding agencies (41%) define SHCV in terms of a maximum GVW alone. These limits vary widely from 120 to 500 kips, with the most frequent value being 200 kips. Five of the responding agencies (13%) reported that they define SHCV in terms of GVW and axle group limits regardless of axle spacing. The wide range of GVW and load limits is again evident; GVW limits range from 80 kips to 350 kips and tandem axle loads, for example, range from 34 kips to more than 60 kips. Another five of the responding agencies (13%) define SHCV in terms of GVW and axle group limits as a function of axle spacing. The distribution of these GVW limits, axle group load limits, and corresponding minimum axle spacings is similarly broad: GVWs vary from 100 to 254 kips, tandem load limits from 40 to 50 kips, and minimum tandem axle spacings from 6 to 12 feet.
The potato battery is a type of electrochemical battery, or cell. Certain metals (zinc in the demonstration below) experience a chemical reaction with the acids inside the potato. This chemical reaction creates electrical energy that can power a small device like an LED light or clock (SFF 2018).
The satirical Onion proposes powering cities on Potato Power. But why not — potatoes are renewable, unlike wind, solar, wave, nuclear, and all other contraptions that depend on fossils for every step of their life cycle.
KNOXVILLE, TN—In what many experts are hailing as a game changer in the field of renewable energy, scientists from the University of Tennessee unveiled Friday a 10-story-tall, 800,000-ton potato capable of powering an entire city. “Our tests have demonstrated this single potato can generate more than 3.5 gigawatts of clean, renewable electricity,” said civil engineering professor Lauren Donaldson, explaining that the colossal tuber, when connected to the electrical grid via one zinc and one copper electrode, could provide enough output to illuminate approximately 70 million standard light bulbs for more than a decade. “In theory, the nation’s energy infrastructure could be revolutionized simply by placing one of these gigantic potatoes next to every city in America. We believe it is entirely conceivable that within 20 years, this technology—perhaps supplemented by several similar-sized lemons connected via lengths of wire and paper clips—could be our primary source of electricity. One day, everything from home appliances to cars to factories may be potato-powered.” Donaldson added that her team’s potato also had the benefit of being largely pollution-free, as nearly 98 percent of its waste products would be fried-up and eaten afterward (The Onion 2020).
A new report from the Department of Energy has uncovered an unforeseen source of mechanical kinetic energy: our Founding Fathers spinning in their graves. “For decades, our nation has lamented the fact that John Adams is likely oscillating in his coffin,” said a spokesperson. “But we’re only now discovering that our Founding Fathers’ rotational exasperation at the state of America today is a source of clean, white-hot fuel, comparable to over 15,000 nuclear reactors.” Environmental scientists were quick to remind reporters that from a Constitutional standpoint, of course we should respect the laws on which America was founded. But from a sustainability standpoint, they urged the public to do everything possible to anger the ghost of Benjamin Franklin. “This source of combustion was first ignited during the freeing of slaves, and boosted by women’s suffrage. But if we’re serious about fighting climate change, we recommend kicking the spinning up a notch by permanently banning all firearms, censoring large amounts of speech, and implementing fully-automated luxury space gay communism.”
In tubes and tanks under the chassis, something very like soapy water is being tested: a hydrogen-rich compound called sodium borohydride, similar to the borax found in laundry detergent (Steen 2002). Of course, there are many reasons why this hasn’t worked out and probably never will.
Thermal depolymerization and landfills turn garbage into biogas. But as energy declines, there will be less and less garbage, not only because there won’t be the fuel to take it to a landfill, but because people will be burning anything they can get their hands on to cook and heat with.
Raindrop power
Researchers reported that a single drop can muster 140V, or enough power to
briefly light up 100 small LED bulbs. It’s far from being able to produce
continuous power. This system works with drops of the same size falling from
the same height and may not do as well otherwise, and degradation of surface
charge may reduce the generator’s efficiency with time. Meanwhile maybe someone
can build a miniature Las Vegas for mice (Delbert 2020).
Elon Musk’s hyperloop
Not going to happen, for too many reasons to list. Just one: because temperatures along the route range from 32 to 120°F, the tube would need some 6,000 expansion joints, and if even one failed, disaster: the vacuum would be released. This 28-minute video explains this and much more: https://www.youtube.com/watch?v=RNFesa01llk
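The expansion-joint argument can be sanity-checked with the linear thermal expansion formula ΔL = αLΔT. A minimal sketch, assuming a steel tube (α ≈ 1.2e-5 per °C) and a roughly 350-mile Los Angeles to San Francisco route; only the 32 to 120°F range comes from the text, the rest are my assumptions:

```python
# Hedged check of the expansion-joint argument. Assumptions: steel tube
# and a ~350-mile route; the 32-120 F temperature range is from the text.
ALPHA_STEEL = 1.2e-5          # linear expansion coefficient, 1/degC
ROUTE_M = 350 * 1609.34       # ~350 miles in meters
DELTA_C = (120 - 32) * 5 / 9  # 88 F swing, about 48.9 degC

total_expansion_m = ALPHA_STEEL * ROUTE_M * DELTA_C
print(f"{total_expansion_m:.0f} m of expansion over the route")  # ~330 m

# Spread across ~6,000 joints, each must absorb only a few centimeters:
per_joint_cm = total_expansion_m / 6_000 * 100
print(f"{per_joint_cm:.1f} cm per joint")
```

Roughly 330 m of movement has to be absorbed somewhere along the tube; spread over ~6,000 joints that is only a few centimeters each, which is why so many joints would be needed.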
It’s time to look at what we can gain from muscle power, which will have to
increasingly replace fossil fuels as they decline. This has the added bonus of helping to cope
with the obesity crisis. The authors estimate that an average American has 5
pounds of excess fat, which translates to 133,000 GJ of stored energy. Using
human muscle as an energy source has the added benefit of reducing heart
disease, strokes, and diabetes.
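As a rough check on the stored-energy idea, here is the per-person arithmetic under textbook-style assumptions (about 39 MJ/kg for body fat and ~25% muscle efficiency). These figures are mine, not the authors’, and the sketch makes no attempt to reproduce the 133,000 GJ total:

```python
# Hedged sketch: energy stored in 5 lb of body fat per person. The 39 MJ/kg
# energy density and 25% muscle efficiency are common textbook-style
# assumptions, not figures from the cited paper.
FAT_MJ_PER_KG = 39.0
LB_TO_KG = 0.4536
MUSCLE_EFFICIENCY = 0.25   # rough fraction of food energy -> mechanical work

stored_mj = 5 * LB_TO_KG * FAT_MJ_PER_KG          # ~88 MJ per person
usable_kwh = stored_mj * MUSCLE_EFFICIENCY / 3.6  # MJ -> kWh

print(f"{stored_mj:.0f} MJ stored, ~{usable_kwh:.0f} kWh of mechanical work")
```

On these assumptions, each person’s 5 lb of fat yields only a few kilowatt-hours of mechanical work, which is why any large total has to come from scaling across millions of people.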
Gym members comprise a large potential muscle power workforce. Over 54 million people in the U.S. are members of a fitness center, where their potential electricity-generating exercise is wasted. Instead, members do the opposite and consume electricity, since equipment such as treadmills, ellipticals, stationary bikes, and rowers is electric, and the air conditioning that keeps members cool uses still more.
This study looked at how much electric power could be generated by 40 members at a gym in South Carolina. At best, 3-5% of the gym’s average daily electricity demand could be provided, and at a large cost: converting the rowing machines to generate electricity would take 33 years to pay back, perhaps longer than a rowing machine will last.
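A payback period like the 33 years quoted can be reproduced in spirit with a simple formula: conversion cost divided by annual electricity savings. Every input below is an illustrative assumption, not a figure from the study:

```python
# Hedged payback sketch for converting a rowing machine to generate power.
# All inputs are illustrative assumptions, not values from the study.
def payback_years(conversion_cost, avg_watts, hours_per_day, price_per_kwh):
    kwh_per_year = avg_watts / 1000 * hours_per_day * 365
    return conversion_cost / (kwh_per_year * price_per_kwh)

# e.g. a $350 retrofit, 100 W average output, 2 hours of daily use, $0.12/kWh:
print(f"{payback_years(350, 100, 2, 0.12):.0f} years")  # 40 years
```

With these made-up but plausible inputs the payback lands in the same multi-decade range as the study’s 33 years, because a human rower generates so little salable electricity per hour.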
References
Carbajales-Dale, M., et al. 2018. Human powered electricity generation as a renewable resource. BioPhysical Economics and Resource Quality.
Delbert, C. 2020. The cool way scientists turned falling raindrops into electricity. Popular Mechanics.
Steen, M. 16 Sep 2002. A squeaky clean future for the car? Reuters News Service.
Preface. Before sewage treatment, cities were hell-holes of foul smells from rotting human waste, industrial effluent, and garbage. Few people lived beyond 50 because of the many waterborne diseases. In fact, sewage and water treatment systems are the main reason lifespans nearly doubled (Garrett 2001). Here are just a few of the diseases possible from drinking untreated water: Adenovirus infection, Amebiasis, Campylobacteriosis, Cryptosporidiosis, Cholera, E. coli O157:H7, Giardiasis, Hepatitis A, Legionellosis, Salmonellosis, Vibrio infection, Viral gastroenteritis, free living amoebae (ADHS). For the full list of waterborne diseases, see post Water-borne diseases will increase as energy declines.
Nearly all sewage infrastructure is past its lifetime; replacing it would be a good way to spend the remaining cheap oil before it becomes scarce.
Sewage is also a way to return nutrients back to the soil, especially finite phosphorus, which is eaten and excreted, treated, and lost to oceans and other waterways.
Today, moving sludge from city sewage treatment plants to farms works because of cheap energy. In the future energy crisis, that won’t be possible.
There are two articles below, one about sewage sludge for crops, and the other about sewage corrosion.
Human waste is a nutrient-rich substance that farmers around the world have spread on cropland for centuries. Every day, about 20 million gallons of sewage flows into the city of Tacoma’s wastewater treatment plants. The water is separated, treated, and discharged into the Puget Sound, which leaves behind sludge — a mix of human excrement, industrial waste, and everything else that ends up in sewers. The plants further treat the product to reduce pathogens, bacteria, heavy metals, and odors, and convert it into a fertilizer called biosolids, which is high in phosphorus, nitrogen, and other nutrients that help plants grow.
Over 50 percent of the approximately 130 million wet tons of sludge produced nationally each year is treated and applied to less than 1% of cropland. As a fertilizer, it’s popular because most wastewater treatment plants give it away for free or at prices less than the cost of synthetic fertilizers.
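A back-of-envelope application rate follows from these figures. The ~390 million acres of U.S. cropland is my assumption (roughly the USDA estimate); the other numbers come from the paragraph above:

```python
# Hedged back-of-envelope on the biosolids figures above. The ~390 million
# acres of US cropland is an assumed round number (roughly the USDA estimate).
SLUDGE_WET_TONS = 130e6
APPLIED_FRACTION = 0.50        # "over 50 percent" -> lower bound
CROPLAND_ACRES = 390e6
CROPLAND_FRACTION = 0.01       # "less than 1%" -> upper bound on area

applied_tons = SLUDGE_WET_TONS * APPLIED_FRACTION
receiving_acres = CROPLAND_ACRES * CROPLAND_FRACTION
rate = applied_tons / receiving_acres
print(f"at least {rate:.0f} wet tons per acre per year")  # ~17
```

Because the applied share is a lower bound and the receiving acreage an upper bound, the true application rate on that small slice of cropland is at least this high.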
The Sierra Club notes that it can contain up to 90,000 man-made chemicals, and we don’t know what new chemicals are made synergistically by combining them. It’s not certain that biosolids are safe.
One of the most controversial pieces of the sewage puzzle is the fact that factories, slaughterhouses, and other industrial facilities are allowed to discharge their waste into the taxpayer-funded sewer system.
Before the 1972 Clean Water Act, the waste industry largely burned it, but that often violated the Clean Air Act, Lewis said. Municipalities also tried dumping it in the ocean, but that created large dead zones. Then, in 1993, the EPA approved a proposal to spread it on land after it was treated. Sludge that isn’t turned into biosolids is landfilled or incinerated — both of which are expensive compared to spreading it on farmland.
The EPA only requires nine pollutants — all heavy metals — to be removed from biosolids, as well as living pathogens such as E. coli and Salmonella. Sludge may be treated by air drying, pasteurization, or composting. Lime is often used to raise the pH level to eliminate odors, and about 95 percent of pathogens, viruses, and other organisms are killed in the process, according to waste management industry officials.
Sewer systems are among the most critical infrastructure assets for modern urban societies and provide essential human health protection. Sulfide-induced concrete sewer corrosion costs billions of dollars annually and has been identified as a main cause of global sewer deterioration. Aluminum sulfate addition during drinking water production contributes substantially to the sulfate load in sewage and indirectly serves as the primary source of sulfide. This unintended consequence of urban water management structures could be avoided by switching to sulfate-free coagulants, with no or only marginal additional expenses compared with the large potential savings in sewer corrosion costs.
Sewer systems are corroding at an alarming rate, costing governments billions of dollars to replace. Differences among water treatment systems make it difficult to track down the source of corrosive sulfide responsible for this damage.
Urban sewer networks collect and transport domestic and industrial waste waters through underground pipelines to wastewater treatment plants for pollutant removal before environmental discharge. They protect our urban society against sewage-borne diseases, unhygienic conditions, and noxious odors and so allow us to live in ever larger and more densely populated cities. Today’s underground sewer infrastructure is the result of an enormous investment over the last 100+ years with, for example, an estimated asset value of one trillion dollars in the USA (Brongers et al. 2002). This equates to ~7% of its current gross domestic product. However, these assets are under serious threat with an estimated annual asset loss of around $14 billion in the United States alone. Sulfide-induced concrete corrosion is recognized as a main cause of sewer deterioration in most cases.
Many water utilities will need to upgrade both their water supply and wastewater service infrastructure over the next 10 to 15 years, which will require enormous capital investments.
References
ADHS. Waterborne diseases. Arizona Department of Health Services.
Brongers, M. P. H., et al. 2002. “Drinking water and sewer systems in corrosion costs and preventative strategies in the United States”. Federal Highway Administration Publication FHWA-RD-01-156, U.S. Department of Transportation, Washington, DC.
Preface. There are three articles below on this topic. Plus these articles in the news:
Nakagawa, T., et al. 2021. The spatio-temporal structure of the Lateglacial to early Holocene transition reconstructed from the pollen record of Lake Suigetsu and its precise correlation with other key global archives: Implications for palaeoclimatology and archaeology. Global and Planetary Change.
The team’s data show that the transition from the ice age to the post-glacial age was characterized by alternations between stable and unstable periods. The domestication of plants didn’t start when the warm climate was established in ca. 13,000 BC, but had to wait until the climate stopped oscillating in short intervals and large amplitudes in ca. 12,000 BC. Agriculture is a subsistence practice that requires planning. But to plan in advance, a stable future is important. When the climate was unstable, agriculture was too risky a practice because accurately predicting the weather in the future wasn’t possible, thus making it difficult to select appropriate crops for agriculture. In such climatic conditions, hunting-and-gathering was a more reasonable subsistence strategy than agriculture because the natural ecosystem consists of diverse species from which humans could expect “something” edible, as opposed to the farmlands. These new findings challenge the traditional view that agriculture was a revolutionary step forward for the history of humanity. Instead, agriculture and hunting-and-gathering were equally reasonable adaptation strategies, depending on whether the climate was stable or unstable. https://www.sciencedaily.com/releases/2021/07/210729095215.htm
2022 Dwindling Mississippi Grounds Barges, Threatens Shipments. Bloomberg. A logjam of more than 100 ships, tugboats and their convoys of barges in the shrinking Mississippi River is threatening to grind trade of grains, fertilizer, metals and petroleum to a halt. Drought has reduced water levels along the biggest US waterway by so much that vessels are running aground. The largest US barge operator, Ingram Barge Co, declared force majeure due to “near-historic” low water conditions on the Mississippi, the top route to get US grains and soybeans to the world market. Some 60% of all grain exported from the US is shipped on the Mississippi River. The logjam is coming at the worst time as the soybean and corn harvests are each about one-fifth complete and supplies will start piling up, and the river is also vital to transport fertilizer.
Earth’s dryland ecosystems cover 45% of the world’s land surface and are home to around a third of its population, who depend on these areas for their food and water.
This study found that as aridity increases, dryland ecosystems undergo a series of abrupt changes: first drastic reductions in the capacity of plants to fix carbon from the atmosphere, then substantial declines in soil fertility and the replacement of food crops by drought-tolerant plants, and finally, under the most arid and extreme conditions, the disappearance of vegetation as the land turns into desert.
As aridification grows worse, the land becomes more vulnerable to erosion, the soil biota that maintain the ecosystem decline, pathogens increase, and crops fail.
More than 20% of land may cross these thresholds by 2100 due to climate change.
Climate disruptions to agricultural production have increased in the past 40 years and are projected to increase over the next 25 years. By 2050 and beyond, these impacts will be increasingly negative on most crops and livestock.
Many agricultural regions will experience declines in crop and livestock production from increased stress due to weeds, diseases, insect pests, and other climate change induced stresses.
Current loss and degradation of critical agricultural soil and water assets due to increasing extremes in precipitation will continue to challenge both rain-fed and irrigated agriculture.
The rising incidence of weather extremes will have increasingly negative impacts on crop and livestock productivity because critical thresholds are already being exceeded.
Agriculture has been able to adapt to recent changes in climate; however, increased innovation will be needed to ensure the rate of adaptation of agriculture and the associated socioeconomic system can keep pace with climate change over the next 25 years.
Climate change effects on agriculture will have consequences for food security, both in the U.S. and globally, through changes in crop yields and food prices and effects on food processing, storage, transportation, and retailing. Adaptation measures can help delay and reduce some of these impacts.
The United States produces nearly $330 billion per year in agricultural commodities, with livestock accounting for half of that value. Production of all commodities will be vulnerable to direct impacts (from changes in crop and livestock development and yield due to changing climate conditions and extreme weather events) and indirect impacts (through increasing pressures from pests and pathogens that will benefit from a changing climate). Crop production projections often fail to consider the indirect impacts from weeds, insects, and diseases that accompany changes in both average trends and extreme events, which can increase losses significantly.
Rising average temperatures will increase crop water demand, increasing the rate of water use by the crop. Higher temperatures are projected to increase both evaporative losses from land and water surfaces and transpiration losses (through plant leaves) from non-crop land cover, potentially reducing annual runoff and streamflow for a given amount of precipitation.
By mid-century, when temperature increases are projected to be between 1.8°F and 5.4°F and precipitation extremes are further intensified, yields of major U.S. crops and farm profits are expected to decline. There have already been detectable impacts on production due to increasing temperatures.
One critical period in which temperatures are a major factor is the pollination stage; pollen release is related to development of fruit, grain, or fiber. Exposure to high temperatures during this period can greatly reduce crop yields and increase the risk of total crop failure. Plants exposed to high nighttime temperatures during the grain, fiber, or fruit production period experience lower productivity and reduced quality. These effects have already begun to occur; high nighttime temperatures affected corn yields in 2010 and 2012 across the Corn Belt. With the number of nights with hot temperatures projected to increase as much as 30%, yield reductions will become more prevalent.
Plants have specific temperature tolerances, and can only be grown in areas where their temperature thresholds are not exceeded. As temperatures increase over this century, crop production areas may shift to follow the temperature range for optimal growth and yield of grain or fruit. Temperature effects on crop production are only one component; production over years in a given location is more affected by available soil water during the growing season than by temperature, and increased variation in seasonal precipitation, coupled with shifting patterns of precipitation within the season, will create more variation in soil water availability.
Increasing temperatures cause cultivated plants to grow and mature more quickly. Crops such as cereals would develop faster, leaving less time for the grain itself to mature and reducing productivity. In addition, because the soil may not be able to supply nutrients at the rates faster-growing plants require, plants may be smaller, reducing grain, forage, fruit, or fiber production.
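The link between temperature and development speed is conventionally quantified as growing degree days (GDD): crop development tracks accumulated heat above a base temperature, so a warmer season reaches a variety's maturity total in fewer days. A minimal sketch, using the base (50°F) and cap (86°F) commonly applied to corn; the daily temperatures below are invented for illustration.

```python
# Growing degree days (GDD): a standard agronomic heat-accumulation
# measure, shown here to illustrate why warmer seasons speed maturity.
# The 50°F base and 86°F cap are the values commonly used for corn;
# the daily temperatures below are made-up illustrative data.

def daily_gdd(t_min, t_max, base=50.0, cap=86.0):
    """GDD for one day, in °F, with the usual floor/cap adjustments."""
    t_max = min(t_max, cap)   # growth does not speed up past the cap
    t_min = max(t_min, base)  # no negative contribution below the base
    return max(0.0, (t_max + t_min) / 2.0 - base)

def season_gdd(days):
    """Accumulate GDD over a season of (t_min, t_max) pairs."""
    return sum(daily_gdd(lo, hi) for lo, hi in days)

# A variety needing, say, 2,700 GDD to mature reaches that total in
# fewer days when every day is a few degrees warmer:
normal = [(58, 82)] * 120   # hypothetical normal season
warm   = [(62, 86)] * 120   # same season, 4°F warmer
print(season_gdd(normal), season_gdd(warm))  # → 2400.0 2880.0
```

In the warmer season the same heat requirement is met well before day 120, which is the mechanism behind the shortened grain-filling window described above.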
In vegetables, exposure to temperatures in the range of 1.8°F to 7.2°F above optimal moderately reduces yield, and exposure to temperatures more than 9°F to 12.6°F above optimal often leads to severe if not total production losses.
Temperature and precipitation changes will include an increase in both the number of consecutive dry days (days with less than 0.01 inches of precipitation) and the number of hot nights. The western and southern parts of the nation show the greatest projected increases in consecutive dry days, while the number of hot nights is projected to increase throughout the U.S. These increases in consecutive dry days and hot nights will have negative impacts on crop and animal production. High nighttime temperatures during the grain-filling period (the period between the fertilization of the ovule and the production of a mature seed in a plant) increase the rate of grain-filling and decrease the length of the grain-filling period, resulting in reduced grain yields. Exposure to multiple hot nights increases the degree of stress imposed on animals resulting in reduced rates of meat, milk, and egg production.
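Both metrics in the paragraph above can be computed directly from daily weather records. The sketch below uses the text's own dry-day threshold (less than 0.01 inches of precipitation); the 70°F hot-night cutoff and the daily data are illustrative assumptions, not from the source.

```python
# Two simple climate-stress metrics: the longest run of consecutive
# dry days (< 0.01 inches, the definition used in the text) and a
# count of hot nights. The 70°F night threshold and the sample data
# are illustrative assumptions.

def max_consecutive_dry_days(precip_inches):
    """Longest run of days with less than 0.01 inches of rain."""
    longest = run = 0
    for p in precip_inches:
        run = run + 1 if p < 0.01 else 0
        longest = max(longest, run)
    return longest

def hot_nights(t_min_f, threshold=70.0):
    """Count nights whose minimum temperature stays at or above threshold."""
    return sum(1 for t in t_min_f if t >= threshold)

rain = [0.0, 0.0, 0.3, 0.0, 0.005, 0.0, 0.0, 0.0, 0.5]
lows = [68, 71, 74, 66, 72, 69]
print(max_consecutive_dry_days(rain), hot_nights(lows))  # → 5 3
```

Note that a trace of 0.005 inches still counts as a dry day under this definition, which is why the longest run above spans five days.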
Climate change poses a major challenge to U.S. agriculture because of the critical dependence of the agricultural system on climate and because of the complex role agriculture plays in rural and national social and economic systems (Figure 6.2). Climate change has the potential to both positively and negatively affect the location, timing, and productivity of crop, livestock, and fishery systems at local, national, and global scales. It will also alter the stability of food supplies and create new food security challenges for the United States.
Over time, climate change is expected to increase the annual variation in crop and livestock production because of its effects on weather patterns and because of increases in some types of extreme weather events.
Each crop species has a temperature range for growth, along with an optimum temperature.
Key Message: Extreme Precipitation and Soil Erosion
Current loss and degradation of critical agricultural soil and water assets due to increasing extremes in precipitation will continue to challenge both rainfed and irrigated agriculture unless innovative conservation methods are implemented. Wind erosion could also increase in areas with persistent drought because of the reduction in vegetative cover.
Several processes act to degrade soils, including erosion, compaction, acidification, salinization, toxification, and net loss of organic matter. Several of these processes, particularly erosion, will be directly affected by climate change. Rainfall’s erosive power is expected to increase as a result of increases in rainfall amount in northern portions of the United States, accompanied by further increases in precipitation intensity. Projected increases in rainfall intensity that include more extreme events will increase soil erosion in the absence of conservation practices. Precipitation and temperature affect the potential amount of water available, but the actual amount of available water also depends on soil type, soil water holding capacity, and the rate at which water filters through the soil.
Iowa is the nation’s top corn and soybean producing state. These crops are planted in the spring. Heavy rain can delay planting and create problems in obtaining a good stand of plants, both of which can reduce crop productivity. In Iowa soils with even modest slopes, rainfall of more than 1.25 inches in a single day leads to runoff that causes soil erosion and loss of nutrients and, under some circumstances, can lead to flooding. Figure 6.9 shows that the number of days per year with more than 1.25 inches of rain is increasing.
A few of the ecosystem services provided by soils include:
the provision of food
wood
fiber (e.g., cotton)
raw materials
flood mitigation
recycling of wastes
biological control of pests
regulation of carbon and other heat-trapping gases
physical support for roads and buildings
cultural and aesthetic values
Productive soils are characterized by levels of nutrients necessary for the production of healthy plants, moderately high levels of organic matter, a soil structure with good binding of the primary soil particles, moderate pH levels, thickness sufficient to store adequate water for plants, a healthy microbial community, and the absence of elements or compounds in concentrations that are toxic for plant, animal, and microbial life.
Erosion is managed through maintenance of cover on the soil surface to reduce the effect of rainfall intensity. Studies have shown that a reduction in projected crop biomass (and hence the amount of crop residue that remains on the surface over the winter) will increase soil loss.
Key Message: Weeds, Diseases, and Pests
Many agricultural regions will experience declines in crop and livestock production from increased stress due to weeds, diseases, insect pests, and other climate change induced stresses.
Rising atmospheric CO2 concentrations have a disproportionately positive impact on several weed species. This effect will contribute to increased risk of crop loss due to weed pressure.
Weeds, insects, and diseases already have large negative impacts on agricultural production, and climate change has the potential to increase these impacts. Current estimates of losses in global crop production show that weeds cause the largest losses (34%), followed by insects (18%), and diseases (16%). Further increases in temperature and changes in precipitation patterns will induce new conditions that will affect insect populations, incidence of pathogens, and the geographic distribution of insects and diseases. Increasing CO2 boosts weed growth, adding to the potential for increased competition between crops and weeds. Several weed species benefit more than crops from higher temperatures and CO2 levels.
One concern involves the northward spread of invasive weeds like privet and kudzu, which are already present in the southern states. Changing climate and changing trade patterns are likely to increase both the risks posed by, and the sources of, invasive species. Controlling weeds costs the U.S. more than $11 billion a year, with most of that spent on herbicides. Both herbicide use and costs are expected to increase as temperatures and CO2 levels rise. Also, the most widely used herbicide in the United States, glyphosate, loses its efficacy on weeds grown at CO2 levels projected to occur in the coming decades. Higher concentrations of the chemical and more frequent sprayings thus will be needed, increasing economic and environmental costs associated with chemical use.
Insects are directly affected by temperature: they synchronize their development and reproduction with warm periods and are dormant during cold periods. Higher winter temperatures increase insect populations through greater overwinter survival and, coupled with higher summer temperatures, increase reproductive rates and allow for multiple generations each year. An example of this has been observed in the European corn borer (Ostrinia nubilalis), which produces one generation in the northern Corn Belt and two or more generations in the southern Corn Belt. Changes in the number of reproductive generations coupled with the shift in ranges of insects will alter insect pressure in a given region.
Key Message: Heat and Drought Damage
The rising incidence of weather extremes will have increasingly negative impacts on crop and livestock productivity because critical thresholds are already being exceeded.
Climate change projections suggest an increase in extreme heat, severe drought, and heavy precipitation. Extreme climate conditions, such as dry spells, sustained droughts, and heat waves all have large effects on crops and livestock. The timing of extreme events will be critical because they may occur at sensitive stages in the life cycles of agricultural crops or reproductive stages for animals, diseases, and insects. Extreme events at vulnerable times could result in major impacts on growth or productivity, such as extreme heat striking corn during pollination. By the end of this century, the occurrence of very hot nights and the duration of periods lacking agriculturally significant rainfall are projected to increase. Recent studies suggest that increased average temperatures and drier conditions will amplify future drought severity and temperature extremes. Crops and livestock will be at increased risk of exposure to extreme heat events. Projected increases in the occurrence of extreme heat events will expose production systems to conditions exceeding maximum thresholds for given species more frequently.
California’s Wine, Fruit, & Nut production will begin declining as soon as 2050
In fact, it’s already happening: in 2000, the number of chilling hours in some regions was 30% lower than in 1950. A warmer climate will affect growing conditions, and the lack of cold temperatures may threaten perennial crop production (Figure 6.6); these crops have a winter chilling requirement (expressed as hours when temperatures are between 32°F and 50°F) ranging from 200 to 2,000 cumulative hours. If the chilling requirement is not completely satisfied, flower emergence and viability are low and yields decline. Projections show that chilling requirements for fruit and nut trees in California will not be met by the middle to the end of this century.
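Chilling hours, as defined here, can be accumulated directly from an hourly temperature record. A minimal sketch, using the text's 32°F to 50°F band; the hourly readings and the 500-hour requirement in the example are invented for illustration.

```python
# Cumulative winter chilling hours, with the definition used in the
# text: an hour counts when the temperature falls between 32°F and
# 50°F. The hourly readings below are made up; real use would feed a
# full dormant-season hourly record.

def chilling_hours(hourly_temps_f, low=32.0, high=50.0):
    """Count hours with temperatures inside the chilling band."""
    return sum(1 for t in hourly_temps_f if low <= t <= high)

def requirement_met(hourly_temps_f, required_hours):
    """True if a crop's chilling requirement (200-2,000 h) is satisfied."""
    return chilling_hours(hourly_temps_f) >= required_hours

# A hypothetical warm winter: only 6 of every 10 hours fall in the band.
winter = [55, 48, 44, 38, 31, 36, 45, 52, 58, 49] * 100  # 1,000 hours
print(chilling_hours(winter))  # → 600
```

A hypothetical variety needing 500 hours would be satisfied by this record, while one needing 700 would not, which is the failure mode the projections for California describe.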
Impacts on Animal Production
Animal agriculture is a major component of the U.S. agriculture system. Changing climatic conditions affect animal agriculture in four primary ways: 1) feed-grain production, availability, and price; 2) pastures and forage crop production and quality; 3) animal health, growth, and reproduction; and 4) disease and pest distributions. The optimal environmental conditions for livestock production include temperatures and other conditions for which animals do not need to significantly alter behavior or physiological functions to maintain relatively constant core body temperature.
Optimum animal core body temperature is often maintained within a 4°F to 5°F range, while deviations from this range can cause animals to become stressed. This can disrupt performance, production, and fertility, limiting the animals’ ability to produce meat, milk, or eggs. In many species, deviations in core body temperature in excess of 4°F to 5°F cause significant reductions in productive performance, while deviations of 9°F to 12.6°F often result in death. For cattle that breed during spring and summer, exposure to high temperatures reduces conception rates. Livestock and dairy production are more affected by the number of days of extreme heat than by increases in average temperature. Elevated humidity exacerbates the impact of high temperatures on animal health and performance.
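The combined effect of heat and humidity on livestock is often summarized with a temperature-humidity index (THI). The formula below is one commonly used variant (temperature in °C, relative humidity in percent); it is not taken from this report, and the stress bands in the sketch are illustrative assumptions rather than established thresholds.

```python
# Temperature-humidity index (THI) for livestock heat stress. The
# formula is one commonly used variant (temperature in °C, relative
# humidity in percent); it is not from this report, and the stress
# categories below are illustrative assumptions.

def thi(temp_c, rel_humidity_pct):
    """Temperature-humidity index; higher values mean more heat stress."""
    return 0.8 * temp_c + (rel_humidity_pct / 100.0) * (temp_c - 14.4) + 46.4

def stress_level(index):
    """Rough, hypothetical banding for dairy cattle."""
    if index < 68:
        return "none"
    if index < 72:
        return "mild"
    if index < 80:
        return "moderate"
    return "severe"

# The same 30°C day is much harder on animals at 80% humidity than 40%:
print(round(thi(30, 40), 1), stress_level(thi(30, 40)))  # → 76.6 moderate
print(round(thi(30, 80), 1), stress_level(thi(30, 80)))  # → 82.9 severe
```

The index makes the paragraph's last point concrete: raising humidity alone, with temperature held fixed, moves an animal from a moderate to a severe stress band.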
Animals respond to extreme temperature events (hot or cold) by altering their metabolic rates and behavior. Extreme temperature events are projected to become more frequent, placing animals under conditions where their efficiency in meat, milk, or egg production is affected. Projected increases in extreme heat events will further increase the stress on animals, leading to the potential for greater impacts on production. Meat animals are managed for a high rate of weight gain (high metabolic rate), which increases their potential risk when exposed to high temperature conditions. Exposure to heat stress disrupts metabolic functions in animals and alters their internal temperature. Exposure to high temperature events can be costly to producers, as was the case in 2011, when heat-related production losses exceeded $1 billion.
Livestock production faces additional climate change related impacts that can affect disease prevalence and range. Regional warming and changes in rainfall distribution have the potential to change the distributions of diseases that are sensitive to temperature and moisture, such as anthrax, blackleg, and hemorrhagic septicemia, and lead to increased incidence of ketosis, mastitis, and lameness in dairy cows.
Goats, sheep, beef cattle, and dairy cattle are the livestock species most widely managed in extensive outdoor facilities. Within physiological limits, animals can adapt to and cope with gradual thermal changes, though shifts in thermoregulation may result in a loss of productivity. Lack of prior conditioning to rapidly changing or adverse weather events, however, often results in catastrophic deaths in domestic livestock and losses of productivity in surviving animals.
Key Message: Rate of Adaptation
Agriculture has been able to adapt to recent changes in climate; however, increased innovation will be needed to ensure the rate of adaptation of agriculture and the associated socioeconomic system can keep pace with climate change over the next 25 years.
In the longer term, existing adaptive technologies will likely not be sufficient to buffer the impacts of climate change without significant impacts to domestic producers, consumers, or both. Limits to public investment and constraints on private investment could slow the speed of adaptation. Adaptation may also be limited by the availability of inputs (such as land or water), changing prices of other inputs with climate change (such as energy and fertilizer), and by the environmental implications of intensifying or expanding agricultural production.
In addition to regional constraints on the availability of critical basic resources such as land and water, there are potential constraints related to farm financing and credit availability in the U.S. and elsewhere. Research suggests that such constraints may be significant, especially for small family farms with little available capital.
Farm resilience to climate change is also a function of financial capacity to withstand increasing variability in production and returns, including catastrophic loss. As climate change intensifies, “climate risk” from more frequent and intense weather events will add to the existing risks commonly managed by producers, such as those related to production, marketing, finances, regulation, and personal health and safety factors. The role of innovative management techniques and government policies as well as research and insurance programs will have a substantial impact on the degree to which the agricultural sector increases climate resilience in the longer term.
Key Message: Food Security
Climate change effects on agriculture will have consequences for food security, both in the U.S. and globally, through changes in crop yields and food prices and effects on food processing, storage, transportation, and retailing.
Food security includes four components: availability, stability, access, and utilization of food. Following this definition, in 2011, 14.9% of U.S. households did not have secure food supplies at some point during the year, with 5.7% of U.S. households experiencing very low food security.
In addition to altering agricultural yields, projected rising temperatures, changing weather patterns, and increases in frequency of extreme weather events will affect distribution of food- and water-borne diseases as well as food trade and distribution. This means that U.S. food security depends not only on how climate change affects crop yields at the local and national level, but also on how climate change and changes in extreme events affect food processing, storage, transportation, and retailing, through the disruption of transportation as well as the ability of consumers to purchase food. And because about one-fifth of all food consumed in the U.S. is imported, our food supply and security can be significantly affected by climate variations and changes in other parts of the world. The import share has increased over the last two decades, and the U.S. now imports 13% of grains, 20% of vegetables (much higher in winter months), almost 40% of fruit, 85% of fish and shellfish, and almost all tropical products such as coffee, tea, and bananas. Climate extremes in regions that supply these products to the U.S. can cause sharp reductions in production and increases in prices.
In an increasingly globalized food system with volatile food prices, climate events abroad may affect food security in the U.S. while climate events in the U.S. may affect food security globally. The globalized food system can buffer the local impacts of weather events on food security, but can also increase the global vulnerability of food security by transmitting price shocks globally.
Senate 113-245. February 14, 2013. Drought, fire and freeze: the economics of disasters for America’s agricultural producers. U.S. Senate hearing.
Excerpts from this 195-page document follow.
DEBBIE STABENOW, MICHIGAN. Nobody feels the effect of weather disasters more than our nation’s farmers and ranchers, as we all know, whose livelihoods depend on getting the right amount of rain, the right amount of sunshine, getting it all together the right way at the right time. All too frequently, an entire season’s crop can be lost, as we know. Or an entire herd must be sent to slaughter due to the lack of feed.
The year 2012 was a year of unprecedented destruction, from drought, freezes, wildfires, hurricanes, and tornadoes, including the tornadoes that hit Mississippi and other parts of the South last weekend, and my heart goes out to all the survivors of those devastating storms. Our country experienced two of the most destructive hurricanes on record last year, Isaac and Sandy. We experienced the warmest year on record ever in the contiguous United States, which, coupled with the historic drought, produced conditions that rivaled the Dust Bowl. Wildfires raged in the West. In the Upper Midwest and Northeast, warm weather in February and March caused trees to bloom early, resulting in total fruit destruction when temperatures dropped down to the 20s again in April, and we certainly were hit hard with that in Michigan. California and Arizona experienced a freeze just last month, threatening citrus, strawberries, lettuce, and avocados. We learned last week that our cattle herd inventories are the lowest in over six decades, which has had broad-ranging impacts, including job losses in rural communities as processing facilities and feedlots idle.
The drought has left many of our waterways with dangerously low water levels. Lake Michigan, Lake Huron have hit their all-time lowest water levels. Barge traffic on the Mississippi, our most vital waterway has nearly ground to a halt. We have seen major disruptions and increased transportation costs for commodities and fertilizers. Today, we will hear from officials at the National Oceanic and Atmospheric Administration, NOAA, and the Department of Agriculture about the disasters we faced last year. We also will hear directly from those affected by these disasters. Thanks to our successful Crop Insurance Program, many farmers will be able to recover their losses. For those farmers who did not have access to crop insurance or the other risk management tools we worked so hard to include in our Senate-passed farm bill, the future is less certain. Unfortunately, instead of a farm bill that gave those farmers certainty, we ended up with a partial extension that creates the haves and have-nots. Row crop producers that participate in crop insurance not only get assistance from crop insurance, which is essential, but some will continue to receive direct payments, as well, regardless of whether they have a loss. Meanwhile, many livestock producers and specialty crop growers who suffered substantial losses will not receive any assistance.
We all know that farming is the riskiest business in the world and altogether employs 16 million Americans.
ROGER PULWARTY, Director, National Integrated Drought Information System, NATIONAL OCEANIC & ATMOSPHERIC ADMINISTRATION, BOULDER, COLORADO
Drought is a part of the American experience, from the Southwest in the 13th century to the events of the 1930s and the 1950s to the present. From 2000 to 2010, the annual average land area affected by drought in the United States was 25 percent. Prior to the 2000s, this number stood at 15 percent. 2012 ended as one of the driest years on record, having had five months in which over 60 percent of the country was in moderate to extreme drought. It was also the warmest year on record. Only 1934 had more months with over 60 percent of the U.S. in moderate to severe drought. 1934 was also a warm year.
Drought conditions continue across much of the nation. According to one estimate, the cost of the 2012 drought is in excess of $35 billion, based on agriculture alone.
However, it is important to note the drought-related impacts cross a broad spectrum, from energy, tourism, and recreation in the State of Colorado where I live, to wildfire impacts. According to the National Interagency Fire Center in Boise, over nine million acres were burned last year, which had happened only twice before in the record since 1960, in 2006 and 2007. Low river levels also threaten commerce on the vital Mississippi shipping lanes, affecting transportation of agricultural products. As many of you know, half of the transport on the Mississippi is agriculturally based.
An important feature of conditions in 2012 was the persistence of the area of dryness and warm temperatures, the magnitude of the extremes, and the large area they encompassed.
Twenty-twelve began with about 32 percent of the U.S. in moderate to exceptional drought. The drought reintensified in May, and you can see a jump in the figure there. And by the end of August, the drought had expanded to cover 60 percent of the country, from the Central Rockies to the Ohio Valley and the Mexican to the Canadian borders. Several States had record dry seasons, including Arkansas, Kansas, Nebraska and South Dakota.
The drought years of 1955 and 1956 have the closest geographical pattern to what we have seen to date, and the year 1998, now the second-warmest year on record, and 2006, the third-warmest year on record, have the closest temperature pattern to what we see.
So as of this morning, we have released the U.S. Drought Monitor that gives you present conditions, which people have in front of them. And what we are pointing out in this case is the drought continues across many parts of the Midwest and the West. The physical drivers of drought are linked to sea surface temperatures in the Tropical Pacific and Atlantic Oceans.
As you can see from the last figure on the U.S. Drought Monitor, a dry pattern is expected over the upcoming three months across the South and the Midwest. Prospects are limited for improvement in drought conditions in California, Nevada, and Western Arizona. Drought development and persistence is forecasted for Texas by the end of April. The drought and warm temperatures in the Midwest are firmly entrenched into February, placing a greater need for above-normal spring rains if the region is to recover. This area is now becoming the epicenter of the 2013 drought. Despite some relief, much of the Apalachicola-Chattahoochee-Flint River Basin remains under extreme drought conditions, including low ground water levels, and Georgia is now in its driest two-year period on record.
JOE GLAUBER, CHIEF ECONOMIST, U.S. DEPARTMENT OF AGRICULTURE, WASHINGTON, DC
Row crop producers have generally fared well, despite the adverse weather, in large part due to higher prices and protection from the Federal Crop Insurance Program, which has helped offset many of the yield losses. For uninsured producers, or producers of crops for which insurance is unavailable, however, crop losses have had a more adverse effect. Livestock producers experienced high feed costs and poor pasture conditions this year with limited programs to fall back on, particularly since key livestock disaster programs authorized under the 2008 farm bill are currently unfunded.
What had started out as a promising year for U.S. crop production, with favorable planting conditions supporting high planted acreage and expectations of record or near-record production, turned into one of the most unfavorable growing seasons in decades. Crop production estimates for several major crops declined throughout the summer. By January 2013, final production estimates for corn were down almost 28 percent from our May projections. Sorghum was down 26 percent, while soybeans fell about six percent over the same period.
As a result, prices for grains and oil seeds soared to record highs in the summer. Higher prices and crop insurance indemnity payments helped offset crop losses for many rural crop producers. Roughly 85 percent of corn, wheat, and soybean area, almost 80 percent of rice area, and over 90 percent of cotton area is typically enrolled in the Crop Insurance Program, and for those of you who were around back in 1988, this contrasts sharply with what the experience was in 1988 when we had this massive drought in the Midwest. At that time, only about 25 percent of the area, insurable area, was enrolled in the program. So, again, very, very strong participation has helped offset those losses.
As of February 11, just this Monday, about $14.2 billion in indemnity payments have been made to producers of 2012 crops suffering crop or revenue losses. We think that these indemnity payments will likely go higher. They could be as high as 16 or 17 billion dollars before we are done.
On the other hand, looking at the livestock, dairy, and poultry producers, they faced very high feed costs for most of 2012, and the high prices are likely to persist through much of 2013 until new crops become available in the fall. And in addition to these high feed costs, cattle producers have been particularly hard hit by poor pasture conditions and a poor hay crop. Almost two-thirds of the nation’s pasture and hay crops were in drought conditions, with almost 60 percent of pasture conditions rated poor or very poor for most of July, August, and September 2012. December 1 stocks for hay were at their lowest level since 1957.
The U.S. cattle and calf herd, as was mentioned in your statement, is at its lowest level since 1952. Dryness in the Southern Plains has persisted for over two years and resulted in large liquidation in cattle numbers. The January 1 NASS Cattle Report indicated that total cattle and calf numbers in Kansas, Oklahoma, and Texas alone declined by 3.4 million head between 2011 and 2013. The reduction is a 13.6 percent decline and almost equals the net decline in the U.S. herd over the same period. Likewise, dairy producers have faced high feed costs and poor pasture conditions, and higher temperatures during the summer also adversely affected milk production.
Net cash income is forecast lower in 2013 for the livestock, dairy, and poultry sectors. Feed costs make up 51 percent of expenses for dairy, about 20 percent for beef cattle, 42 percent for hogs, and 35 percent for poultry farm businesses.
Major concerns related to persistent drought conditions remain. Fifty-nine percent of the winter wheat area, 69 percent of cattle production, and 59 percent of hay acreage remains under drought conditions. Forty-three percent of the winter wheat production is located in areas under extreme or exceptional drought conditions, down only slightly from the 51 percent in August.
Chairwoman STABENOW. How long before we are going to have crop insurance available for specialty crop growers?
Mr. GLAUBER. I think we have made some improvements there. As you know, I sit on the Federal Crop Insurance Board. We have seen several products, new products that have come in that have extended crop insurance to some specialty crops. We have made some changes, for example, in the cherry policy with a revenue product. I think the overall liability for specialty crops right now is around 10 to 13 billion dollars. Certainly, we would like to see that improved. The difficulty is that with a lot of these crops, they are very small with not a lot of producers, and sometimes some of the producers are not interested in crop insurance. Now, what we have seen over the last five years, ten years, which is very different than, I would say, 15 years ago, is the fact that a lot of producers now are interested in developing these products.
Our major issue, as you know, in the Midwest and the Southwest, in particular, the Colorado Basin, is that we are having back-to-back dry years, and a third year of that puts our systems completely under stress. The forecast for this season is that, in fact, we are projecting drier conditions.
Senator KLOBUCHAR. Mississippi River transportation is my next question. In 2012, as you know, the barge traffic on the Mississippi was greatly impacted by the drought. It was more difficult to transport grain abroad and more farm inputs up-river to our farmers in Minnesota. We were very scared at the end of the year they were actually going to have to stop barge traffic. Could you talk about that a little and how this could impact our ability to stay competitive, as so many agriculture products go down the Mississippi?
Mr. GLAUBER. Yes. We, too, were very concerned with it because it looked like, particularly late December, early January, that there would be a halt in traffic. Now, understand, the upper part of the Mississippi, as you well know, you stop shipping because of the winter weather. But I think there were a couple of good things. One, the best thing, is that we got rain. The Corps was able to go in and clear out some of the disruptions in the river and then we got adequate rain and barge traffic is moving very well. I will say this. Because of the lower corn harvest and lower soybean harvest and the fact that so much more grain is going to China, it was probably less stress than it might have been under, say, 15 years ago. But, still, the best news is that we have adequate water.
Senator KLOBUCHAR. It is good, but it was a close call and I think it is something that we have to prepare better for next time and have a plan in place. Drought-resistant seeds—what efforts is the USDA taking to speed the adoption of such drought-hardy varieties developed using biotech or conventional breeding?
Mr. GLAUBER. Most of the seed breeding is in private hands these days. They do it better. There are a lot of profits to be made in that industry and they are working very hard. My understanding is that we should be seeing some drought-resistant strains come on the market just in the next few years. We know upstream, as well, about 20 percent of what comes into the basin is coal and 20 percent is about fertilizers, as well.
Senator ROBERTS. We have got two years of sustained drought and another one coming, according to our renowned forecaster here. But Kansas producers, once again, put seeds in the ground. Many will once again fire up their tractor and their planter in another six weeks. They manage their risk and protect their operations from Mother Nature’s destruction through the purchase of crop insurance.
Unfortunately, livestock producers do not have a similar safety net. However, with the support of Secretary Vilsack last year, the Department authorized the emergency haying and grazing of Conservation Reserve Program acres in all Kansas counties, including the emergency grazing on CP-25 for the first time. You do not do that unless you have a very, very serious problem.
According to USDA reports, last year over 9,000 emergency haying and grazing contracts allowed haying and grazing on over 470,000 acres in Kansas. That is a lot of acres. But as we continue to experience conditions like those of the 1950s and back in the 1930s, what consideration has the Department given to allowing emergency haying and grazing of CRP acres for 2013?
Mr. GLAUBER. A lot of these producers have been hanging on with very, very tight or negative margins. And again, I cited these numbers. Over three million, three-and-a-half million head down from just two years ago in your region of the country. And so it is very critical. I think any help that we can get to the producers to help them make it through to better prices, we will be working with your office on that.
Senator ROBERTS. As you know, many ranchers simply culled their herds, lost their genetics, and many are out of business. Northwest Kansas producers irrigating from the Ogallala Aquifer must work to conserve their water, but current RMA practices have no middle ground between fully irrigated and dryland practices, and we need a mechanism to allow limited irrigation to be fairly rated.
Senator BENNET. —we have now had two years in a row, and it sounds like we are going to have a third year of drought in our region. And I wonder if you could talk about the specific challenges that NOAA projects for producers in the water-scarce Western region of our country.
Mr. PULWARTY. I hope I am wrong, as well. The State of Colorado, as you know, in the Front Range, where I live and others do, we get 30 to 40 percent of our water from the Colorado Basin itself. The Colorado Basin came in at 44 percent in the previous water year. So far, the fall snow pack has not been as significant as we would like it. In some places, it is 40 percent, in some places 60 percent, and we hope that picks up in March and April.
However, right now, based on what is happening in the Pacific and Atlantic Oceans, we are not projecting an improved set of conditions in those basins, the Upper Basin, including the San Juan and places like that.
The basin is experiencing lower precipitation and snow pack, and it is also experiencing a combination of high temperatures, however driven. Something else is happening in that basin in some of our rural communities, where there is rain-fed agriculture: the combination of temperature and drought is actually creating the die-off of key vegetation that holds our soils together. The result, then, is dust storms and dust on snow, which lets the runoff and melt occur even earlier than we are accustomed to managing.
From that standpoint, and looking into the future, while we are seeing some improvement in the lower Colorado Basin—Arizona, Southern California, Nevada—we are expecting that to be short- lived into April. From the standpoint of the Upper Basin, and again, I hope I am, in fact, wrong, we are not projecting significant new inputs of snow unless we get heavy rainfall events later in the spring.
One of the reasons why that is the case is when it has been dry for a year before, even when you get significant snow pack, a lot of that disappears because the soil just picks it up. In 2005, we had 100 percent of snow pack, but the runoff was 70 percent of what we expected because the springtime had been warm.
The Colorado is now in its second longest ten-year period of low flows on record. If we average over the last ten years, the flow has been at average or less, and this is in an already over-allocated system, as you know better than I do.
The issue concerning the basin, where 30 million people live and where we have seven States reliant on the water, is very much at the edge. The demand exceeded supply about ten years ago, so it does not take a major drought to put us into areas of contention.
And to be perfectly honest, given the uncertainty, certainly, there are issues in introducing drought-resistant crops. There are issues in introducing risk pooling and insurance. But where the Conservation Reserve Programs come in is the admission that we are uncertain about the future; it leaves us the flexibility to manage for the pieces that we are uncertain about. And I think that is the richest contribution from the standpoint of understanding what the climate is doing, naturally or otherwise, and then what the buffers in our system supply.
Senator COWAN. We also need to be thinking about new threats that our farmers and fishermen are facing. The climate change and more frequent and intense extreme weather events threaten our agricultural economy, and I am pleased that the committee is discussing this important issue today.
According to the Climate Vulnerability Initiative, the U.S. is among the top ten countries that will be most adversely affected by desertification and sea level rise, and this does not bode well for either our farmers or fishermen.
Senator BAUCUS. It is a real honor for me to introduce Leon LaSalle. Leon is a Native American rancher. The real deal, several generations. His grandfather, Frank Billy, was one of the first to found the ranch on the Chippewa Cree Reservation of Montana. It is actually part of the Rocky Boys Reservation. We have got seven reservations in Montana; one of them is Rocky Boys, and the Chippewa Cree are the Tribal members on that reservation. Leon and his family are real stalwarts. They raise Black Angus around the Bears Paw Mountains between Rocky Boys and Havre, Montana. That range is a real standout, a landmark in our State. We are very proud of it. Leon was featured in a book called Big Sky Boots, about the working seasons of a Montana cowboy. He has a great quote in that book: he thinks there is a growing disconnect between the general public and agriculture producers. Well, Leon, I have got to tell you, the same thing is true in Washington, D.C. There is a disconnect between the people here and the people who represent the rest of the country, and maybe you can help connect those dots a little when it comes time for you to testify. We are really very honored to have you here because you are a great credit to the Tribe, to the State of Montana, and to your industry. Leon is also on the Board of Directors of the Montana Stockgrowers and one of the guiding lights there.
LEON LASALLE, RANCHER, HAVRE, MONTANA
We have installed numerous conservation practices specifically designed to preserve and protect our natural resources. Even though we have implemented these conservation measures, there are times when my family’s ranch has been struck so hard by weather- related disasters that we have sought economic assistance. The Federal Livestock Disaster Programs have been that assistance.
The Native American Livestock Feed Program is a great example of a program that helped when feed was short. In drought years, when there is little or no hay to feed our livestock, ranchers like me must purchase hay at a premium. Sometimes by the time the hay reaches the ranch, the freight is more than the cost of the hay itself.
These programs provide the only financial relief available when a rancher is faced with loss of livestock or of the forage to feed them. There is no insurance for catastrophic livestock losses, such as those experienced by Southeastern Montana ranchers during the horrific wildfires of 2012.
I have helped neighbors prepare applications for LIP, and on one sad occasion, I participated as a third-party witness when several cattle fell through the ice and drowned while trying to shelter themselves from a stinging Montana blizzard.
Mother Nature throws a variety of natural events in the path of a Montana rancher. Our weather is uncertain, sometimes severe. We find our markets are even vulnerable to the effects of drought, as well. Drought has reduced the number of cattle available, and processing facilities have closed as a result, thus affecting our price. If weather and markets are not the issue, then many of my fellow ranchers are challenged by the ever- increasing predator losses.
ANNGIE STEINBARGER, FARMER, EDINBURGH, INDIANA
We now farm 1,500 acres of corn and soybeans as well as a small cow-calf operation in the State. We find our association with various farm organizations, such as the Indiana Soybean Alliance, invaluable to the success of our operation. The Indiana Soybean Alliance is an arm of the American Soybean Association, a trade organization that represents our nation’s 600,000 soybean farmers on national and international policy issues.
It has always been our dream to farm. My husband and I both knew that the only way to make our dream a reality was to save our pennies and work off-farm jobs in the hope that, one day, my father would give us the opportunity to participate in the farming operation. Mike worked in the seed, tile ditching, and bulk milk transport businesses while I worked in the fertilizer, chemical, and crop insurance businesses.
We started farming 600 acres and have increased the operation to 1,500 acres. Roughly one-half of our acres are on a share arrangement with our landlords. We continue to work off-farm, as the operation is still not self-supporting. Mike sold the milk truck to buy a school bus, and I continue to work in crop insurance and do the farm recordkeeping.
To manage our thin, light soil types, we started our farming operation employing conservation tillage techniques, using such programs as CRP and NRCS cost share funding. To this day, we are still advocates of no-till farming as a way to preserve our soil and maintain soil moisture. As a result of our conservation efforts, our average yields are 150 bushels of corn and 50 bushels of soybeans.
Our best corn was on the farm with the pivot. Under the pivot, it was 200 bushels to the acre. And outside of the pivot, ten. Needless to say, there was not anything to put in the grain bins. Due to the drought and heat, the grain quality was very poor and we even shipped our grain that was going to be fed for livestock.
The number one barrier to increasing our yields is the lack of water. Dry weather in the months of July and August always limits our yield potential. We find crop insurance an effective tool in managing risk when we experience these weather events. We began using crop insurance in 1991 as a way to maintain our cash flow and prevent us from having to borrow money. I actually have lost money on crop insurance over the last 20-year span. It was not until the last two drought years that it actually paid for us to have crop insurance.
JEFF SEND, CHERRY FARMER, LEELANAU, MICHIGAN
I grew up working my grandfather’s 40 acres. Now, my wife, Anita, and I farm 800 acres of sweet and tart cherries. Putting some of the land into the Federal Farm and Ranch Land Protection Program is one of the tools we use to expand our operation. Our youngest daughter and her husband work with us and someday hope to take over the farm. I have also managed a receiving station for 37 years. I have a working relationship with 35 growers who bring me cherries to be weighed, inspected, and shipped to ten different processors I work with in Michigan, Wisconsin, and New York. I currently serve as Vice Chair of the Cherry Marketing Institute Board. CMI is a national organization for tart cherry farmers. I am also Vice Chair of the National Cherry Growers and Industries Foundation, which is a sweet cherry organization. Year in and year out, Michigan produces 75 percent of the United States’ tart cherries. However, that was not the case in 2012. Last year was the most disastrous year I and the cherry industry have ever experienced. Our winter was much warmer than normal, with little snow and ice on the Great Lakes. In mid-March, there were seven days of 80-degree temperatures, which is unheard of in Northern Michigan. Cherry trees began to come out of dormancy and grow. This left them completely vulnerable to the 13 freezes that followed in April. This extreme weather was one of the worst disasters Michigan had ever seen. Sweet cherries endured the freezes slightly better than tart cherries. But to top things off, we were hit with the worst case of bacterial canker I had ever seen. There is no treatment for this disease, which attacks the fruit buds.
In Michigan, we have the capacity to grow 275 million pounds of tart cherries. In 2012, our total was 11.6 million pounds.
There is no tart cherry insurance available at all for our industry, so my fellow growers and I had no risk management tool to get through this very difficult year. NAP insurance is available, but the policy starts at a 50% loss and then pays out only 50% of that number. Farmers are left with only about 25% coverage, and there is a $100,000 cap. This does not come close to covering our expenses. My costs on my farm alone are between three-quarters of a million and a million dollars.
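The coverage gap described above can be sketched with simple arithmetic. This is a deliberately simplified model of the NAP payout as the testimony characterizes it (a 50% loss threshold, a 50% payment rate, and a $100,000 cap); the function name and the example crop value are hypothetical illustrations, not figures from the hearing:

```python
def nap_payout(expected_value, actual_value,
               loss_threshold=0.5, payment_rate=0.5, cap=100_000):
    """Simplified NAP payout: pays a fraction of the loss beyond a threshold."""
    loss = expected_value - actual_value
    # Only the portion of the loss beyond 50% of expected value is covered
    covered_loss = max(0.0, loss - loss_threshold * expected_value)
    return min(payment_rate * covered_loss, cap)

# A total loss on a hypothetical $300,000 crop yields only $75,000,
# i.e. 25% of the crop's value, matching the testimony's rough figure.
print(nap_payout(300_000, 0))  # 75000.0
```

Note how the cap binds quickly: on a total loss of a $1,000,000 crop the formula would pay only the $100,000 maximum, which is why the witness says the program "does not come close" to covering expenses.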
Tree fruits must be maintained whether there is a crop or not on them. You carry on with the same practices in order to keep them healthy. So expenses remain the same. Imagine working for a year-and-a-half with no paycheck and still having the same expenses.
I worry about our young farmers, who haven’t built up any equity. No income with all the same expenses is a formula for disaster. There needs to be something to help farmers stay in business when natural disaster hits. A few days that we have no control over can put us out of business.
BEN E. STEFFEN, FARMER, STEFFEN AG, INC., HUMBOLDT, NEBRASKA
My family, our employees, and I produce milk, corn, soybeans, wheat, and hay on our farm at Humboldt in Southeast Nebraska. We milk 135 cows on 1,900 acres of non-irrigated dryland farm, and I have family members at home right now caring for and feeding animals so that I can be here today.
This nation has benefited from a food supply that is plentiful, inexpensive, and of the highest quality, and securing that food supply for the future is clearly a responsible public policy. Facing a growing world population, it is a moral imperative. The impact of fire and drought has hit our farming operation and those of our neighbors. The price of high-quality dairy hay has gone up by 50 percent, and the price of lower-quality hay suitable for beef animals has more than doubled. While we appreciated last year’s release of Conservation Reserve Program acres for emergency haying and grazing, we would like to see efforts made for an earlier release date for those acres. This would dramatically improve the quality and the quantity of those forages.
My neighbors in Western Nebraska have been dealt a particularly hard blow by wildfires, and nearly 400,000 acres, equivalent to roughly half the State of Rhode Island, were burned in 2012. On those ranches, feed supplies were wiped out, fences were destroyed, and cattle have been liquidated. I would urge you to consider some tax relief to help those ranchers regain their footing. Ladies and gentlemen, our nation’s cattle herd is at a 61-year low and consumers will feel this damage for years.
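As a quick sanity check on that comparison (assuming the standard conversion of 640 acres per square mile, and Rhode Island's total area of roughly 1,214 square miles; both constants are general reference figures, not from the testimony):

```python
ACRES_PER_SQ_MILE = 640
RHODE_ISLAND_SQ_MILES = 1_214  # total area, approximate

burned_acres = 400_000
burned_sq_miles = burned_acres / ACRES_PER_SQ_MILE   # 625 square miles
fraction_of_ri = burned_sq_miles / RHODE_ISLAND_SQ_MILES

# About 625 square miles burned, i.e. roughly half of Rhode Island
print(f"{burned_sq_miles:.0f} sq mi, {fraction_of_ri:.0%} of Rhode Island")
```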
Livestock contributed $10 billion to Nebraska’s economy in 2011 and crop production contributed $11.7 billion.
Another risk management tool that we employ is diversification. We include both livestock and crops in our business. In order to manage price risk, we constantly watch the changing world markets and the prices for the products we sell, and we accept the challenge of using futures and options contracts. But we, along with thousands of other producers and processors, were victimized by the genius of mismanagement at MF Global when our accounts were frozen in the subsequent bankruptcy. We continue to wait for the return of a slowly rising percentage of our funds.
To further protect our soil and water, we began using cover crops years ago. But participation in the Conservation Security Program gave us a push to go beyond the program requirements, and last year, we planted nearly 60 percent of our acres to cover crops. This practice holds great promise for conserving our soil, saving water, building quality, and sequestering carbon, but we need more research in this area. I urge Congress and this committee to prioritize funding for both basic and applied agricultural research and our Land Grant system of universities created by the Morrill Act of 1862.
Mr. STEFFEN. As I mentioned in my testimony, I would point out again the no-till techniques we have been using for 40 years on our operation to save soil, conserve water, and improve our crops. I would also point out that we are making extensive use of cover crops. Those crops, planted in conjunction with our traditional crops, offer us a way to catch more moisture and snowfall and to improve the way rainfall percolates into the soil and is absorbed, so that we are able to capture and store more water in the soil. It is a way to increase the organic matter levels in the soil, and that makes the soil more productive and increases its ability to hold water.
Senator BAUCUS. Following on Senator Donnelly’s point, it has always struck me how farmers and ranchers have a better perspective on life. They are more philosophical. Why? Because they know they can’t control their fate as much as some people in cities think they can, erroneously. You can’t control the weather. You can’t control price. Cost, you can’t control. You take what you get, but you have got to manage it as well as you possibly can. It is very, very difficult and it is kind of humbling. It gives you a sense of life and the importance of hard work and doing one’s best. Whereas on the other hand, I think a lot of people in the city get a little arrogant and they think they can control everything, and obviously, they can’t.
Preface. If trucks, tractors, ships, locomotives, and airplanes can’t run on electricity, nor the electric grid stay up without natural gas to balance wind and solar (see When Trucks Stop Running), and if cement, steel, and other products requiring the high heat of fossil fuels can’t be electrified, and much more (see Life After Fossil Fuels), what’s the point of fusion or fission electricity? Is it even ethical to do this, since the wastes, toxic for a million years, aren’t being stored? There aren’t even any plans to do that.
Fuels made from biomass are a lot like the nuclear-powered airplanes the Air Force tried to build from 1946 to 1961, for billions of dollars. They never got off the ground. The idea was interesting: atomic jets could fly for months without refueling. But the lead shielding needed to protect the crew, plus several months of food and water, was too heavy for the plane to take off. The weight problem, the ease of shooting this behemoth down, and the consequences of a crash landing were so obvious, it’s amazing the project was ever funded, let alone kept going for 15 years (Wiki 2020).
Although shielding a plane enough to keep the radiation from killing the crew was impossible, some engineers proposed hiring elderly Air Force crews to pilot nuclear planes, because they would die before radiation exposure gave them fatal cancers. Also, the reactor would have to be small enough to fit onto an aircraft, which would release far more heat than a standard one. The heat could risk melting the reactor—and the plane along with it, sending a radioactive hunk of liquid metal careening toward Earth (Ruhl 2019).
Nuclear-Powered Cars
In 1958, Ford came up with a nuclear-powered concept car, the Nucleon, which would have carried a nuclear reactor in the trunk.
In the 1950s and 1960s, there was huge hype around nuclear energy. Many believed it would replace oil and deliver clean power.
Had Ford gone ahead and made an actual working version of the Nucleon, the company says drivers would have fueled it with uranium pellets. Ford never actually built one, though.
The Nucleon would have used an atomic reactor like a nuclear submarine’s, fissioning uranium pellets to heat water into steam that would spin turbines, producing electric power and converting it into mechanical power.
Running low on uranium? Just head to a uranium station for a new nuclear capsule, good for another 5,000 miles, with no emissions.
Not surprisingly, the Nucleon project was scrapped since small-scale nuclear reactors and lightweight shielding materials couldn’t be developed. Just as well not to have 100+ mph nuclear bombs on our roads (Beedham 2020).
Nuclear Tanks (Peck 2020)
Chrysler’s design was essentially a giant pod-shaped turret mounted on a lightweight tank chassis, like a big head stuck on top of a small body. The crew, weapons, and power plant would have been housed in the turret, according to tank historian R.P. Hunnicutt’s authoritative “A History of the Main American Battle Tank, Vol. 2.”
The four-man vehicle would have weighed 25 tons, with closed-circuit TV to protect the crew from the flash of nuclear weapons and to increase the field of vision, running on a vapor-cycle power plant using nuclear fuel.
The Army also considered a nuclear tank to replace the M-48 Patton. The 50-ton tank would have been propelled by a nuclear power plant that created heat to drive a turbine engine. The range of the vehicle would have been more than 4,000 miles.
Obviously, such a tank would have been extremely expensive, and the radiation hazard would have required crew changes at periodic intervals, as well as more ammunition. On top of the usual dangers such as fire or explosion, crews in combat would have worried about being irradiated if their tank was hit. Pity the poor mechanics as well, who would have had to fix or tow a damaged tank leaking radioactive fuel and spitting out radioactive particles.
Most important of all, nuclear-powered tactical vehicles would destroy the whole concept of nuclear non-proliferation. A fleet of atomic tanks would have meant hundreds or thousands of nuclear reactors spread out all over the place.
References
Beedham, M. 2020. Remembering the Nucleon, Ford’s 1958 nuclear-powered concept car that never was. thenextweb.com
Ruhl, C. 2019. Why There Are No Nuclear Airplanes. Strategists considered sacrificing older pilots to patrol the skies in flying reactors. An Object Lesson. The Atlantic.
Preface. One of the huge hurdles to shifting from oil to “something else” is the chicken-or-egg problem of no one buying a new-fuel vehicle with few places to get it, so few are made, so service stations don’t add the new fuel since there are few customers.
This is just one piece of the distribution system. It is also a problem that ethanol can’t flow in oil or gasoline pipelines because it corrodes them, so it has to be transported by truck or rail using diesel fuel (since trucks can’t burn ethanol or diesohol).
This is why it is hard for service stations to add E15, E85, hydrogen, or any new fuel, though of course each has its own unique costs and difficulties.
And heaven forbid you put in the wrong fuel. Gasoline cars cannot burn diesel fuel; doing so can require an engine rebuild. At best, the car chugs and lurches until it is towed, and the owner is billed up to $1,500 to flush the tank, fuel lines, injectors, and fuel pump.
Mr. Shane Karr, Vice President of Federal Government Affairs, the Alliance of Automobile Manufacturers
Only about 2% of gas stations have an E85 pump, and most are concentrated in the Midwest, where most corn ethanol is produced. This makes sense, because keeping production close to the point of sale is the most affordable approach. But even in states where E85 pumps are concentrated, actual sales of E85 have been low and stagnant. For example, in 2009 Minnesota had 351 stations with an E85 pump (the most of any state), but the average flexible fuel vehicle (FFV) in the state used just 10.3 gallons of E85 for the whole year.
Achieving the vehicle production mandates in H.R. 1687 by producing E85 FFVs would cost consumers well more than $1 billion per year by the most conservative estimates. And these conservative estimates are severely understated for two reasons: (1) H.R. 1687 requires a new kind of tri-fuel FFV that can run on gasoline, ethanol, methanol, and any combination of the three fuels, and which does not exist today; and (2) it will be more expensive to produce tri-fuel FFVs that comply with H.R. 1687, especially with the forthcoming California Low Emission Vehicle (LEV III) and federal Tier 3 emissions standards, along with very aggressive fuel economy/GHG emission requirements through 2025.
Serial No. 112–159. July 10, 2012. The American energy initiative part 23: A focus on Alternative Fuels and vehicles. House of Representatives. 210 pages.
Jeffrey Miller, President of Miller Oil Company, Norfolk, VA.
On behalf of the National Association of Convenience Stores (NACS) Before the House Energy and Commerce Committee, Subcommittee on Energy and Power May 5, 2011 Hearing on “The American Energy Initiative”
My name is Jeff Miller, President of Miller Oil Company headquartered in Norfolk, VA. As of December 31, 2010, the U.S. convenience and fuel retailing industry operated 146,341 stores of which 117,297 (80.2%) sold motor fuels. In 2009, our industry generated $511 billion in sales (one of every 28 dollars spent in the United States), employed more than 1.5 million workers and sold approximately 80% of the nation’s motor fuel.
To fully understand how fuels enter the market and are sold to consumers, it is important to know who is making the decision at the retail level of trade. Our industry is dominated by small businesses. In fact, of the 117,297 convenience stores that sell fuel, 57.5% of them are single-store companies – true mom and pop operations. Overall, nearly 75% of all stores are owned and operated by companies my size or smaller – and we all started with just a couple of stores.
Many of these companies – mine included – sell fuel under the brand name of their fuel supplier. This has created a common misperception in the minds of many policymakers and consumers that the large integrated oil companies own these stations. The reality is that the majors are leaving the retail market place and today own and operate fewer than 2% of the retail locations.
Taking a chance by offering a new candy bar is very different from switching my fueling infrastructure to accommodate a new fuel. So when a new fuel product becomes available, our decision to offer it to our customers takes more time. We need to know that our customers want to buy it, that we can generate enough return to justify the investment, and that we can sell the fuel legally. These are the fundamental issues that face the introduction of new renewable and alternative fuels.
Today, most of the fuel sold in the United States is blended with 10% ethanol. The transition to this fuel mix was not complicated, but it was not without challenges. When ethanol became more prevalent in my market, we realized what a powerful solvent it is. Ethanol forced us to clean our storage tanks and change our filters frequently to avoid introducing contaminants into the fuel tanks of our customers’ vehicles. Despite our best efforts, however, there were times when the fuel a customer purchased caused problems with their vehicles. In those situations, it was our responsibility to correct the damage. And while the transition to E10 required no significant changes to equipment or systems, it taught us some lessons that influence our decisions concerning new fuels.
Retailers are now hearing reports from Washington that the use of fuel containing 15% ethanol is authorized.
Currently, there is essentially only one organization that certifies our equipment – Underwriters Laboratories (UL). UL establishes specifications for safety and compatibility and runs tests on equipment submitted by manufacturers for UL listing. Once satisfied, UL lists the equipment as meeting a certain standard for a certain fuel.
Prior to last spring, however, UL had not listed a single motor fuel dispenser (a.k.a. pump) as compatible with any fuel containing more than 10% ethanol. This means that any dispenser in the market prior to last spring – which would represent the vast majority of my dispensers – is not legally permitted to sell E15, E85, or anything above 10% ethanol, even if it is technically able to do so safely.
If I use non-listed equipment, I am in violation of OSHA regulations and may be violating my tank insurance policies, state tank fund program requirements, bank loan covenants, and potentially other local regulations. Furthermore, if my store has a petroleum release from that equipment, I could be sued on the grounds of negligence for using non-listed equipment, which would cost me significantly more than the expense of cleaning up the spill.
So, if none of my dispensers are UL-listed for E15, what are my options?
Unfortunately, UL will not re-certify any equipment. Only those units manufactured after UL certification is issued are so certified – all previously manufactured devices, even if they are the same model, are subject only to the UL listing available at the time of manufacture. This means that no retail dispensers, except those produced after UL issued a listing last spring, are legally approved for E10+ fuels.
In other words, the only legal option for me to sell E15 is to replace my dispensers with the specific models listed by UL. On average, a retail motor fuel dispenser costs approximately $20,000.
It is less clear how many of my underground storage tanks and associated pipes and lines would require replacement. Many of these units are manufactured to be compatible with high concentrations of ethanol, but they may not be listed as such. In addition, the gaskets and seals may need to be replaced to ensure the system does not pose a threat to the environment. If I have to crack open concrete to replace seals, gaskets or tanks, my costs can escalate rapidly and can easily exceed $100,000 per location.
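The cost escalation described above can be put into a back-of-the-envelope sketch using only the figures in the testimony: about $20,000 per replacement dispenser, and tank, pipe, gasket, and seal work that can exceed $100,000 per location if concrete must be opened. The station sizes and the function itself are hypothetical illustrations, not an industry cost model:

```python
DISPENSER_COST = 20_000       # avg cost of a UL-listed replacement dispenser
TANK_AND_LINE_WORK = 100_000  # possible cost if concrete must be cracked open

def retrofit_cost(num_dispensers, needs_tank_work=False):
    """Rough estimate of making one station legally able to sell E15."""
    cost = num_dispensers * DISPENSER_COST
    if needs_tank_work:
        cost += TANK_AND_LINE_WORK
    return cost

# A hypothetical 6-dispenser station: $120,000 for dispensers alone,
# $220,000 if seals, gaskets, or tanks must also be replaced.
print(retrofit_cost(6), retrofit_cost(6, needs_tank_work=True))
```

Even under these rough assumptions, the retrofit cost for a single small station rivals years of fuel-margin profit, which is the economic core of the retailer's objection.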
MISFUELING
The second major issue I must consider is the effect of the fuel on customer engines and vehicles. Having dealt with engine problems associated with fuel contamination following the introduction of E10, I am very concerned about the potential effect a fuel like E15 would have on vehicles. The EPA decision concerning E15 is very challenging. Under EPA’s partial waiver, only vehicles manufactured in model year 2001 or more recently are authorized to fuel with E15. Older vehicles, motorcycles, boats, and small engines are not authorized to use E15.
How am I supposed to prevent the consumer from buying the wrong fuel? I can deal with the responsibility for fuel quality and contamination control, but self-service customer misfueling is a much more difficult challenge to control.
In the past, when we have introduced new fuels – like unleaded gasoline or ultra-low sulfur diesel – they were backwards compatible; i.e. older vehicles could use the new fuel. In addition, newer vehicles were required to use the new fuel, creating a guaranteed market demand.
Such is not the case with E15 – legacy vehicles are not permitted to use the new fuel. Doing so will violate Clean Air Act standards and could cause engine performance or safety issues. Yet, there are no viable options to retroactively install physical countermeasures to prevent misfueling. Consequently, my risk of liability if a customer uses E15 in the wrong engine – whether accidentally or intentionally – is significant.
First of all, I could be fined under the Clean Air Act for misuse of the fuel – this has happened before. When lead was phased out of gasoline, unleaded fuel was more expensive than leaded fuel. To save a few cents per gallon, some consumers physically altered their vehicle fill pipes to accommodate the larger leaded nozzles either by using can openers or by using a funnel while fueling. Retailers had no ability to prevent such behavior, but the EPA often levied fines against retailers for not physically preventing the consumer from bypassing the misfueling countermeasures.
My understanding is EPA has told NACS that the agency would not be targeting retailers for consumer misfueling. But that provides me with little comfort – EPA policy can change in the absence of specific legal safeguards. Further, the Clean Air Act includes a private right of action and any citizen can file a lawsuit against a retailer who does not prevent misfueling. Whether the retailer is found guilty does not change the fact that defending against such claims can be very expensive.
Finally, I am very concerned about the effect of E15 in the wrong engine. Using the wrong fuel could void an engine’s warranty, cause engine performance problems or even compromise the safety of some equipment. A consumer may seek to hold me liable for these situations even if my company was not responsible for the misfueling. Defending my company against such claims is financially expensive, but also expensive from a customer-relations perspective.
GENERAL LIABILITY EXPOSURE
Retailers are also concerned about long-term liability exposure. Our industry has experience with being sued for selling fuels that were approved at the time but later ruled defective. What assurances are there that such a situation will not repeat itself with new fuels being approved for commerce?
For example, E15 is approved only for certain engines and its use in other engines is prohibited by the EPA due to associated emissions and performance issues. What if E15 does indeed cause problems in non-approved engines or even in approved engines? What if in the future the product is determined defective, the rules are changed and E15 is no longer approved for use in commerce? There is significant concern that such a change in the law would be retroactively applied to any who manufactured, distributed, blended or sold the product in question.
Retailers are hesitant to enter new fuel markets without some assurance that our compliance with the law today will protect us from retroactive liability should the law change in the future. It seems reasonable that law-abiding citizens should not be held accountable if the law changes in the future. Congress could help overcome significant resistance to new fuels by providing assurances that market participants will be held to account only for the laws as they exist at the time and not subject to liability for violating a future law or regulation.
MARKET ACCEPTANCE
The final challenge we face is the rate at which consumers will adopt the new fuels. Assume all the other issues are resolved, I have to ask myself: Will my customers purchase the fuel? It is important to note that this is the first fuel transition in which no person is required to purchase the fuel, unlike prior transitions to unleaded gasoline and ultra-low sulfur diesel fuel.
In the situation facing E15, only a subset of the population (about 65% of vehicles) is authorized to buy it. Yet the auto industry is not fully supportive of its use in anything except flexible fuel vehicles (about 3% of vehicles). This situation could dramatically reduce consumer acceptance. The risk of misfueling and potentially alienating customers if E15 causes performance issues also is a serious concern.
With these unknowns, how can I calculate an accurate return on my investment to install E15 compatible equipment? Again, this is not like offering a new candy bar – to sell E15 I will likely have to spend significant resources.
As new fuels enter the market, their compatibility with vehicles and their performance characteristics compared to traditional gasoline will be critically important to determining consumer acceptance. In addition, the cost of entry for retailers will influence the return on investment calculations required to determine whether to invest in the new fuel.
OPTIONS
NACS believes there are options available to Congress to help the market overcome these challenges. I have referenced E15 in this testimony because it is a fuel with which we are all familiar due to its current considerations at EPA. However, E15 alone will not satisfy the renewable fuel objectives of the country. Other products must be brought to market and how they interact with the refueling infrastructure and the consumer’s vehicles should be critical considerations to Congress when deciding whether to support their development and introduction.
Regardless which fuels are introduced in the future, the following recommendations can help lower the cost of entry and provide retailers with greater regulatory and legal certainty necessary for them to offer these new fuels to consumers:
First, because UL will not retroactively certify any equipment, Congress should authorize an alternative method for certifying legacy equipment. Such a method would preserve the protections for environmental health and safety, but eliminate the need to replace all equipment simply because the certification policy of the primary testing laboratory will not re-evaluate legacy equipment. NACS was supportive of legislation introduced in the House last Congress by Reps. Mike Ross (D-AR) and John Shimkus (R-IL) as H.R. 5778. This bill directed the EPA to develop guidelines for determining the compatibility of equipment with new fuels and stipulated that equipment satisfying such guidelines would thereby satisfy all laws and regulations concerning compatibility.
Second, Congress can require EPA to issue labeling regulations for fuels that are authorized for only a subset of vehicles and ensure that retailers who comply with such requirements satisfy their requirements under the Clean Air Act and protect them from violations or engine warranty claims in the event a self-service customer ignores the notifications and misfuels a non-authorized engine. H.R. 5778 also included provisions to achieve these objectives.
Third, Congress can provide market participants with regulatory and legal certainty that compliance with current applicable laws and regulations concerning the manufacture, distribution, storage and sale of new fuels will protect them from retroactive liability should the laws and regulations change at some time in the future.
Finally, Congress should evaluate the prospects for the marketing of infrastructure-compatible fuels and support the development of such fuels. These could aid compliance with the renewable fuels standard and save retailers, engine makers and consumers billions of dollars. Policymakers might consider establishing characteristics that new fuels must possess so that equipment and engines can be manufactured or retrofitted to accommodate whichever new fuel provides the greatest benefit to consumers and the economy.
If Congress takes action to lower the cost of entry and to remove the threat of unreasonable liability, more retailers may be willing to take a chance and offer a new renewable fuel. By lowering the barriers to entry, Congress will give the market an opportunity to express its will and allow retailers to offer consumers more choice. If consumers reject the new fuel, the retailer can reverse the decision without sacrificing a significant investment, but new fuels will be given a better opportunity to successfully penetrate the market.
Serial No. 112–159. July 10, 2012. The American energy initiative part 23: A focus on Alternative Fuels and vehicles. House of Representatives. 210 pages.
Jack Gerard, President and CEO of the American Petroleum Institute. Over the past 7 years, the two RFS laws passed in 2005 and in 2007 have substantially expanded the role of renewables in America. Biofuels are now in almost all gasoline. While API supports the continued appropriate use of ethanol and other renewable fuels, the RFS law has become increasingly unrealistic, unworkable, and a threat to consumers. It needs an overhaul. Most of the problems relate to the law’s volume requirements. These mandates call for blending increasing amounts of renewable fuels into gasoline and diesel. Although we are already close to blending an amount that would result in a 10 percent concentration level of ethanol in every gallon of gasoline sold in America, which is the maximum known safe level, the volumes required will more than double over the next 10 years. The E10, or 10 percent ethanol blend that we consume today could, by virtue of RFS volume requirements, become at least an E20 blend in the future. This would present an unacceptable risk to billions of dollars in consumer investment in vehicles, a vast majority of which were designed, built, and warranted to operate on a maximum blend of E10.
It also would put at risk billions of dollars of gasoline station equipment in thousands of retail outlets across America, most owned by small independent businesses. I believe well over 60 percent of retail establishments in this area are Ma and Pa operations.
Vehicle research conducted by the auto and oil industries’ Coordinating Research Council shows that E15 could damage the engines of millions of cars and light trucks – estimates exceed five million vehicles on the road today. E20 blends may have similar, if not worse, compatibility issues with engines and service station equipment.
The RFS law also requires increasing use of cellulosic ethanol, an advanced form of ethanol that can be made from a broader range of feed stocks. The problem is, you can’t buy the fuel yet because no one is making it commercially. While EPA could waive that provision, it has decided to require refiners to purchase credits for this nonexistent fuel, which will drive up costs and potentially hurt consumers. Mandating the use of fuels that do not exist is absurd on its face and is inexcusably bad public policy.
To date, E85 has faced low consumer acceptance as FFV owners use E85 less than 1% of the time. The fuel economy of an FFV operated on E85 is approximately 25-30% lower than when fueled with gasoline due to ethanol’s lower energy content. Also, less than 2% of retail gasoline stations offer E85, which has high installation costs. In 2010 and 2011, EPA approved the use of E15 for a portion of the motor vehicle fleet in order to accommodate the RFS law’s volume increases. We believe these actions were premature and unlawful, and present an unacceptable risk to billions of dollars in consumer investments in vehicles. They also put at risk billions of dollars of gasoline station pump equipment in scores of thousands of retail outlets across America, most owned by small independent businesses. E15 is a different transportation fuel, well outside the range for which the vast majority of U.S. vehicles and engines have been designed and warranted. E15 is also outside the range for which service station pumping equipment has been listed and proven to be safe and compatible, and it conflicts with existing worker and public safety laws outlined in OSHA regulations and fire codes. EPA should not have proceeded with E15, especially before a thorough evaluation was conducted to assess the full range of short- and long-term impacts of increasing the amount of ethanol in gasoline on the environment, on engine and vehicle performance, and on consumer safety. Research on higher blends was already underway when EPA approved E15 in 2010 and 2011. In response to the passage of EISA in 2007, the oil and natural gas industry, the auto industry, and other stakeholders, including EPA and DOE, recognized in early 2008 that substantial research was needed in order to assess the impact of higher ethanol blends, including the compatibility of ethanol blends above 10% (E10+) with the existing fleet of vehicles and small engines.
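The 25-30% fuel-economy penalty for E85 cited in this testimony follows directly from ethanol’s lower energy content. The sketch below is illustrative only; the per-gallon heating values are typical published figures, not numbers drawn from the hearing record:

```python
# Why E85 fuel economy runs roughly 25-30% below gasoline.
# Heating values are typical lower-heating-value figures (assumptions):
# gasoline ~114,100 BTU/gal, ethanol ~76,100 BTU/gal.
GASOLINE_BTU = 114_100
ETHANOL_BTU = 76_100
ETHANOL_FRACTION = 0.85  # nominal E85 blend

# Energy content of the blend is a simple volume-weighted average.
e85_btu = ETHANOL_FRACTION * ETHANOL_BTU + (1 - ETHANOL_FRACTION) * GASOLINE_BTU
penalty = 1 - e85_btu / GASOLINE_BTU

print(f"E85 energy content: {e85_btu:,.0f} BTU/gal")
print(f"Fuel-economy penalty vs. gasoline: {penalty:.0%}")
```

With these assumed heating values the penalty comes out near 28%, squarely in the 25-30% range quoted.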
Through the Coordinating Research Council (CRC), the oil and auto industries developed and funded a comprehensive multi-year testing program prior to the biofuels industry’s E15 waiver application. API worked closely with the auto and off-road engine industries and with EPA and DOE to share and coordinate research plans. Yet, EPA approved the E15 waiver request before this research effort was finished and the results thoroughly evaluated. The potential for harm from that decision is substantial, as suggested by the results of the research studies, including testing performed by DOE’s National Renewable Energy Laboratory and by the CRC, that have been completed to date. The DOE research shows that an estimated half of existing service station pumping equipment may not be compatible with a 15% ethanol blend. The CRC research shows that E15 could also damage the engines of millions of cars and light trucks.
E20 may have similar, if not worse, compatibility issues with engines and service station equipment.
JOSEPH H. PETROWSKI. Gulf Oil Group.
We are the Nation’s eighth largest convenience retailer of petroleum products and convenience items, operating in over 13 States. Our wholesale oil division, Gulf Oil, carries and merchandises over 350,000 barrels of petroleum products and biofuels across 29 States; $13 billion in revenue places us among the top 50 private companies in the country. We employ 8,000 people.
We do not drill, and we do not refine petroleum products. We sell the products that our customers want to buy and that are most economic for them in achieving their desired transport, heating, and other energy uses in a lawful manner.
In addition to selling petroleum products, our primary business, we blend over 1 million gallons a day of biofuels across our system, and we have recently purchased 24 Class A trucks fueled by natural gas to deliver our fuel products to our stations and stores.
We believe that a sound energy policy rests on four bedrocks. The first is diverse fuel sources, for two reasons. First, the future is unknowable: the shale technology that has transformed the natural gas industry was unheard of two decades ago. Technology and events outrun our ability to foresee where we are going, so betting any of our future on a single fuel source would be a mistake. Second, diversity in all systems ensures health and stability, so we seek diversity in fuel not only by fuel type, but also by making sure we are not concentrated in taking it from any one region, particularly the Middle East and other unstable regions.
I do want to point out to all the members that we have hundreds of billions of dollars invested in terminals, gas stations, barges, and transportation, and we have to live with the realities and particulars of the marketplace.
America’s love affair with the automobile is not going away. Neither is the need for transportation fuels that underpin the economy and create jobs. In a country as vast as ours with a density of 79 people per square mile (as opposed to the Netherlands with 1300 people per square mile), the cost of transport is central to economic health.
When total national energy costs exceed 16% of GDP, a recession or worse is almost always the result. The United States’ trade deficit for all energy products recently exceeded $1 trillion, and while it has since been reduced to half that amount on an annualized basis, we look forward to the day when the United States is a net energy exporter. Not only would that be positive for GDP and job growth, but it would position us to revitalize our industrial production, especially in energy-intensive industries, with an eye toward value-added product exports. And few policies would be more beneficial for the spread of democracy worldwide.
Our industry is dominated by small businesses. In fact, of the 120,950 convenience stores that sell fuel, almost sixty percent of them are single-store companies – true mom and pop operations. Many of these companies sell fuel under the brand name of their fuel supplier. This has created a common misperception in the minds of many policymakers and consumers that the large integrated oil companies own these stations. The reality is that the majors are leaving the retail marketplace and today own and operate fewer than 2% of the retail locations. Although a store may sell a particular brand of fuel associated with a refiner, the vast majority are independently owned and operated like mine. When people pull into an Exxon or a BP station, the odds are good that they are in fact refueling at a small mom-and-pop operation.
THE BLEND WALL AND THE NEED FOR A CONGRESSIONAL FIX. Since the enactment of the Energy Independence and Security Act (EISA) of 2007, we have heard much about the impending arrival of the so-called “blend wall” – the point at which the market cannot absorb any additional renewable fuels. Most of the fuel sold in the United States today is blended with 10% ethanol. If 10% ethanol were blended into every gallon of gasoline sold in the nation in 2011 (133.9 billion gallons), the market would reach a maximum of 13.39 billion gallons of ethanol. However, the 2012 statutory mandate for the RFS is 15.2 billion gallons. Meanwhile, the market for higher blends of ethanol (E85) for flexible fuel vehicles (FFVs) has not developed as rapidly as some had hoped. Clearly, we have reached the blend wall.
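The blend-wall arithmetic can be checked directly. The 13.39-billion-gallon ceiling implies total gasoline sales of about 133.9 billion gallons in 2011; a sketch of the calculation, using the testimony’s figures:

```python
# Blend-wall arithmetic (all volumes in billions of gallons).
GASOLINE_SOLD = 133.9   # U.S. gasoline sold in 2011
BLEND_LIMIT = 0.10      # E10: maximum ethanol fraction for the legacy fleet
RFS_MANDATE = 15.2      # 2012 statutory RFS volume

max_ethanol = GASOLINE_SOLD * BLEND_LIMIT   # the most ethanol E10 can absorb
shortfall = RFS_MANDATE - max_ethanol       # mandated volume with nowhere to go

print(f"Blend wall: {max_ethanol:.2f} billion gallons")
print(f"Mandate exceeds the wall by {shortfall:.2f} billion gallons")
```

The mandate overshoots the E10 ceiling by roughly 1.8 billion gallons, which is why higher blends such as E15 and E85 come into play at all.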
EPA recently authorized the use of E15 in certain vehicles. However, this has so far done very little to expand the use of renewable fuels, due largely to retailers’ liability and compatibility concerns, as well as state and local restrictions on selling E15. Congress can do something immediately to mitigate other obstacles preventing new fuels from entering the market. H.R. 4345, the Domestic Fuels Protection Act of 2012, currently before the Subcommittee on Environment and the Economy, addresses three of these obstacles: infrastructure compatibility, liability for consumer misuse of fuels, and retroactive liability should the rules governing a fuel change in the future.
The reason the retail market is unable to easily accommodate additional volumes of renewable fuels begins with the equipment found at retail stations. By law, all equipment used to store and dispense flammable and combustible liquids must be certified by a nationally recognized testing laboratory. These requirements are found in regulations of the Occupational Safety and Health Administration. Currently, there is essentially only one organization that certifies such equipment, Underwriters Laboratories (UL). UL establishes specifications for safety and compatibility and runs tests on equipment submitted by manufacturers for UL listing. Once satisfied, UL lists the equipment as meeting a certain standard for a certain fuel. Prior to 2010, UL had not listed a single motor fuel dispenser (aka a gas pump) as compatible with any fuel containing more than 10% ethanol. This means that any dispenser in the market prior to early 2010 is not legally permitted to sell E15, E85 or anything above 10% ethanol – even if it is able to do so safely.
If a retailer fails to use listed equipment, that retailer is violating OSHA regulations and may be violating tank insurance policies, state tank fund program requirements, bank loan covenants, and potentially other local regulations. In addition, the retailer could be found negligent per se based solely on the fact that his fuel dispensing system is not listed by UL. This brings us to the primary challenge: if no dispenser prior to early 2010 was listed as compatible with fuels containing greater than ten percent ethanol, what options are available to retailers to sell these fuels? In order to comply with the law, retailers wishing to sell E10+ fuels can only use equipment specifically listed by UL as compatible with such fuels. Because UL did not list any equipment as compatible with E10+ fuels until 2010, only those units produced after that date can legally sell E10+ fuels. All previously manufactured devices, even if they are the exact same model using the exact same materials, are subject only to the UL listing available at the time of manufacture. (UL policy prevents retroactive certification of equipment.)
Practically speaking, this means that the vast majority of retailers wishing to sell E10+ fuels must replace their dispensers. This costs an average of $20,000 per dispenser. It is less clear how many underground storage tanks and associated pipes and lines would require replacement. Many of these units are manufactured to be compatible with high concentrations of ethanol, but they may not be listed as such. Further, if there are concerns with gaskets and seals in dispensers, care must be given to ensure the underground gaskets and seals do not pose a threat to the environment. Once a retailer begins to replace underground equipment, the cost can escalate rapidly and can easily exceed $100,000 per location.
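A rough sketch of how these retrofit costs scale per site. The per-dispenser figure and the $100,000-per-location threshold come from the testimony; the dispenser count and underground-work figure below are assumptions for illustration only:

```python
# Hypothetical per-site retrofit cost for E10+ compatibility.
# $20,000/dispenser is from the testimony; the other figures are assumed.
COST_PER_DISPENSER = 20_000
dispensers_per_site = 4      # assumption: a typical small station
underground_work = 30_000    # assumption: tanks, piping, gaskets, concrete

site_cost = dispensers_per_site * COST_PER_DISPENSER + underground_work
print(f"Estimated retrofit cost per site: ${site_cost:,}")
```

Even with modest assumptions, a single site clears the $100,000 mark once underground work begins, which is consistent with the testimony’s warning.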
The second major issue facing retailers is the potential liability associated with improperly fueling an engine with a non-approved fuel. The EPA decision concerning E15 puts this issue into sharp focus for retailers. Under EPA’s partial waiver, only vehicles manufactured in model year 2001 or more recently are authorized to fuel with E15. Older vehicles, motorcycles, boats, and small engines are not authorized to use E15. For the retailer, bifurcating the market in this way presents serious challenges. For instance, how does the retailer prevent the consumer from buying the wrong fuel? Typically, when new fuels are authorized they are backwards compatible, so this is not a problem. In other words, older vehicles can use the new fuel. When EPA phased lead out of gasoline in the late 1970s and early 1980s, for example, older vehicles were capable of running on unleaded fuel; newer vehicles, however, were required to run only on unleaded. These newer vehicle gasoline tanks were equipped with smaller fill pipes into which a leaded nozzle could not fit – likewise, unleaded dispensers were equipped with smaller nozzles. E15 is very different: legacy engines are not permitted to use the new fuel. Doing so will violate Clean Air Act standards and could cause engine performance or safety issues. Yet there are no viable options to retroactively install physical countermeasures to prevent misfueling.
Retailers could be subject to penalties under the Clean Air Act for not preventing a customer from misfueling with E15. This concern is not without justification. In the past, retailers have been held accountable for the actions of their customers. For example, because unleaded fuel was more expensive than leaded fuel, some consumers physically altered their vehicle fill pipes to accommodate the larger leaded nozzles either by using can openers or by using a funnel while fueling. We may see similar behavior in the future given the high price of gasoline relative to ethanol. As in the past, the retailer will not be able to prevent such practices, but in the case of leaded gasoline the EPA levied fines against the retailer for not physically preventing the consumer from bypassing the misfueling countermeasures. To EPA’s credit, they have asserted in meetings with NACS and SIGMA that they would not be targeting retailers for consumer misfueling. But that provides little comfort to retailers. EPA policy can change in the absence of specific legal safeguards. Additionally, the Clean Air Act includes a private right of action, and any citizen can file a lawsuit against a retailer that does not prevent misfueling. Whether the retailer is found guilty does not change the fact that defending against such claims is very expensive. Further, the consumer may seek to hold the retailer liable for their own actions. Using the wrong fuel could void an engine’s warranty, cause engine performance problems or even compromise the safety of some equipment. In all situations, some consumers may seek to hold the retailer accountable even when the retailer was not responsible for the improper use of the fuel. Once again, defending such claims is expensive.
An EPA decision to approve E15 for 2001 and newer vehicles is not consistent with the terms of most warranty policies issued with these affected vehicles. Consequently, while using E15 in a 2009 vehicle might be lawful under the Clean Air Act, it may in fact void the warranty of the consumer’s vehicle. Retailers have no mechanism for ensuring that consumers abide by their vehicle warranties – it is the consumer’s responsibility to comply with the terms of their contract with their vehicle manufacturer. Therefore, H.R. 4345 stipulates that no person shall be held liable in the event a self-service customer introduces a fuel into their vehicle that is not covered by their vehicle warranty.
GENERAL LIABILITY EXPOSURE. Finally, there are widespread concerns throughout the retail community and among our product suppliers that the rules of the game may change and we could be left exposed to significant liability. For example, E15 is approved only for certain engines and its use in other engines is prohibited by the EPA due to associated emissions and performance issues. What if E15 does indeed cause problems in non-approved engines or even in approved engines? What if in the future the product is determined defective, the rules are changed, and E15 is no longer approved for use in commerce? There is significant concern that such a change in the law would be retroactively applied to anyone who manufactured, distributed, blended or sold the product in question.
Contrary to popular misconception, fuel marketers prefer cheap gasoline. The less the consumer pays at the pump, the more money the consumer has to spend in our stores, where our profit margins are significantly greater.
Preface. The global conventional discovery chart above lists natural gas and oil discoveries since 2013. The fossil fuel that really matters is oil, since it’s the master resource that makes all others available, including natural gas, coal, transportation, and manufacturing.
Source: discoveries, Rystad (2020); consumption, BP Statistical Review of World Energy (2020).
As you can see, in 2019 the world burned 7.7 times more oil than was discovered, leaving a shortfall of 31.74 billion barrels that would need to be discovered to break even. This can’t end well, as anyone whose COVID-19 pantry is emptying can easily grasp.
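The 7.7x ratio and the shortfall can be reconstructed from round numbers. World oil consumption of roughly 100 million barrels a day is a commonly cited 2019 figure; the discovery volume below is back-calculated from the article’s ratio, so treat both as approximations rather than sourced data:

```python
# Reconstructing the 2019 discovery shortfall from round numbers.
DAILY_CONSUMPTION_MB = 99.9   # million barrels/day, approximate 2019 figure
BURN_TO_FIND_RATIO = 7.7      # the article's consumption-to-discovery ratio

annual_consumption = DAILY_CONSUMPTION_MB * 365 / 1000  # billion barrels/year
discovered = annual_consumption / BURN_TO_FIND_RATIO    # implied discoveries
shortfall = annual_consumption - discovered

print(f"Consumed:   {annual_consumption:.1f} billion barrels")
print(f"Discovered: {discovered:.1f} billion barrels")
print(f"Shortfall:  {shortfall:.1f} billion barrels")
```

The implied numbers (about 36.5 billion barrels burned against 4.7 billion discovered) land on the article’s roughly 31.7-billion-barrel gap.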
FYI, from Peak Oil Review, Feb 10, 2020: worldwide oil production was 60.27% conventional on-shore oil, 21.59% conventional offshore shallow-water oil, 8.1% conventional offshore deep-water oil, 6.93% U.S. tight oil (fracking), and 3.10% Canadian oil sands.
And an editorial in oilprice.com notes that: “US oil production has peaked, and it will be difficult to climb back to these levels ever again, given how much capital markets have soured on the industry. The EIA said that the US will once again become a net petroleum importer later this year, ending a brief spell during which the US was a net exporter”.
The 2016 figure shows exploration results only through August. Discoveries were just 230 million barrels in 1947 but skyrocketed the next year when Ghawar was discovered in Saudi Arabia. Ghawar is still the world's largest oil field, though it was recently learned to be declining at 3.5% a year. Source: Wood Mackenzie
Explorers in 2015 discovered only about a tenth as much oil as they have annually on average since 1960. This year, they’ll probably find even less, spurring new fears about their ability to meet future demand.
With oil prices down by more than half since the price collapse two years ago, drillers have cut their exploration budgets to the bone. The result: Just 2.7 billion barrels of new supply was discovered in 2015, the smallest amount since 1947, according to figures from Edinburgh-based consulting firm Wood Mackenzie Ltd. This year, drillers found just 736 million barrels of conventional crude as of the end of last month.
That’s a concern for the industry at a time when the U.S. Energy Information Administration estimates that global oil demand will grow from 94.8 million barrels a day this year to 105.3 million barrels in 2026. While the U.S. shale boom could potentially make up the difference, prices locked in below $50 a barrel have undercut any substantial growth there. A decade from now, this will have significant potential to push oil prices up: given current levels of investment across the industry and decline rates at existing fields, a “significant” supply gap may open up by 2040.
Oil companies will need to invest about $1 trillion a year to continue to meet demand, said Ben Van Beurden, the CEO of Royal Dutch Shell Plc, during a panel discussion at the Norway meeting. He sees demand rising by 1 million to 1.5 million barrels a day, with about 5 percent of supply lost to natural declines every year.
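Van Beurden’s figures imply the scale of new capacity needed each year: natural decline on the existing base plus demand growth must both be replaced. The sketch below assumes a current supply of about 95 million barrels a day (a round number for illustration, not a figure from the article):

```python
# Annual new-capacity requirement implied by Van Beurden's figures.
CURRENT_SUPPLY = 95.0       # million barrels/day (assumed round number)
DECLINE_RATE = 0.05         # ~5% of supply lost each year to field decline
DEMAND_GROWTH = (1.0, 1.5)  # million barrels/day of growth per year

decline_loss = CURRENT_SUPPLY * DECLINE_RATE  # barrels that must be replaced
low = decline_loss + DEMAND_GROWTH[0]
high = decline_loss + DEMAND_GROWTH[1]

print(f"Decline alone: {decline_loss:.2f} million barrels/day per year")
print(f"New capacity needed: {low:.2f}-{high:.2f} million barrels/day per year")
```

Roughly 6 million barrels a day of new capacity every year is why annual investment on the order of $1 trillion is plausible.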
New discoveries from conventional drilling, meanwhile, are “at rock bottom,” said Nils-Henrik Bjurstroem, a senior project manager at Oslo-based consultants Rystad Energy AS. “There will definitely be a strong impact on oil and gas supply, and especially oil.”
Global inventories have been buoyed by full-throttle output from Russia and OPEC, which have flooded the world with oil despite depressed prices as they defend market share. But years of under-investment will be felt as soon as 2025, Bjurstroem said. Producers will replace little more than one in 20 of the barrels consumed this year, he said.
There were 209 wells drilled through August this year, down from 680 in 2015 and 1,167 in 2014, according to Wood Mackenzie. That compares with an annual average of 1,500 in data going back to 1960.
Overall, the proportion of new oil that the industry has added to offset the amount it pumps has dropped from 30 percent in 2013 to a reserve-replacement ratio of just 6 percent this year in terms of conventional resources, which excludes shale oil and gas, Bjurstroem predicted. Exxon Mobil Corp. said in February that it failed to replace at least 100 percent of its production by adding resources with new finds or acquisitions for the first time in 22 years.
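The reserve-replacement ratio referred to here is simply new resources added divided by the volume produced. A minimal sketch; the barrel figures below are hypothetical, chosen only to land near the quoted 6% conventional figure:

```python
# Reserve-replacement ratio: resources added / oil produced in the same period.
def replacement_ratio(added_bbl: float, produced_bbl: float) -> float:
    """Fraction of production offset by newly added resources."""
    return added_bbl / produced_bbl

# Hypothetical: ~2.2 billion barrels of conventional additions against
# ~36.5 billion barrels consumed in a year.
ratio = replacement_ratio(2.2, 36.5)
print(f"Reserve-replacement ratio: {ratio:.0%}")
```

A ratio this far below 100% means the industry is drawing down its conventional resource base roughly sixteen times faster than it is refilling it.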
“That’s a scary thing because, seriously, there is no exploration going on today,” Per Wullf, CEO of offshore drilling company Seadrill Ltd., said by phone.