Nuclear-powered airplanes, cars, and tanks

Preface. If trucks, tractors, ships, locomotives, and airplanes can’t run on electricity, if the electric grid can’t stay up without natural gas to balance wind and solar (see When Trucks Stop Running), and if cement, steel, and other products requiring the high heat of fossil fuels can’t be electrified (see Life After Fossil Fuels), what’s the point of fusion or fission electricity? Is it even ethical to do this when the wastes, toxic for a million years, aren’t being stored? There aren’t even any plans to do that.

Alice Friedemann  www.energyskeptic.com  Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, “When Trucks Stop Running: Energy and the Future of Transportation”, “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology.  Podcasts: WGBH, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity.  Index of best energyskeptic posts

***

Nuclear Airplanes

Fuels made from biomass are a lot like the nuclear-powered airplanes the Air Force tried to build from 1946 to 1961 at a cost of billions of dollars. They never got off the ground.  The idea was interesting: atomic jets could fly for months without refueling.  But the lead shielding needed to protect the crew, plus several months of food and water, made the plane too heavy to take off.  The weight problem, the ease of shooting such a behemoth down, and the consequences of a crash landing were so obvious that it’s amazing the project was ever funded, let alone kept going for 15 years (Wiki 2020).

Although shielding a plane well enough to keep the radiation from killing the crew was impossible, some engineers proposed hiring elderly Air Force crews to pilot nuclear planes, because they would die of old age before radiation exposure gave them fatal cancers. The reactor would also have to be small enough to fit onto an aircraft, and such a compact reactor would release far more heat than a standard one. The heat risked melting the reactor, and the plane along with it, sending a radioactive hunk of liquid metal careening toward Earth (Ruhl 2019).

Nuclear-Powered Cars

In 1958, Ford unveiled a nuclear-powered concept car, the Nucleon, which would have carried a nuclear reactor in the trunk.

In the 1950s and 1960s, there was huge hype around nuclear energy. Many believed it would replace oil and deliver clean power.

Ford never actually made a working version of the Nucleon, but the company says that if it had, drivers would have fueled it with uranium pellets.

The Nucleon would have used an atomic reactor like a nuclear submarine’s, fissioning uranium pellets to heat water into steam that would spin turbines, producing both electric power and mechanical drive.

Running low on uranium? Just head to a uranium station for a new nuclear capsule, good for another 5,000 miles with no emissions.

Not surprisingly, the Nucleon project was scrapped because the small-scale nuclear reactors and lightweight shielding materials it required could not be developed. Just as well not to have 100+ mph nuclear bombs on our roads (Beedham 2020).

Nuclear Tanks (Peck 2020)

Chrysler’s design was essentially a giant pod-shaped turret mounted on a lightweight tank chassis, like a big head stuck on top of a small body. The crew, weapons, and power plant would have been housed in the turret, according to tank historian R.P. Hunnicutt’s authoritative “A History of the Main American Battle Tank, Vol. 2”.

The four-man vehicle would have weighed 25 tons and run on a vapor-cycle power plant using nuclear fuel, with closed-circuit TV to protect the crew from the flash of nuclear weapons and to increase the field of vision.

The Army also considered a nuclear tank to replace the M-48 Patton. The 50-ton tank would have been propelled by a nuclear power plant whose heat drove a turbine engine. The range of the vehicle would have been more than 4,000 miles.

Obviously, such a tank would have been extremely expensive, and the radiation hazard would have required crew changes at periodic intervals. On top of the usual dangers such as fire or explosion, crews in combat would have worried about being irradiated if their tank was hit. Pity the poor mechanics as well, who would have had to fix or tow a damaged tank leaking radioactive fuel and spitting out radioactive particles.

Most important of all, nuclear-powered tactical vehicles would destroy the whole concept of nuclear non-proliferation. A fleet of atomic tanks would have meant hundreds or thousands of nuclear reactors spread out all over the place.

References

Beedham, M. 2020. Remembering the Nucleon, Ford’s 1958 nuclear-powered concept car that never was. thenextweb.com

Peck, M. 2020. Why America’s nuclear-powered tank was a dumb idea.  Nationalinterest.org

Ruhl, C. 2019. Why There Are No Nuclear Airplanes. Strategists considered sacrificing older pilots to patrol the skies in flying reactors. An Object Lesson. The Atlantic.

Wiki. 2020. Nuclear-powered aircraft.


Distribution – why it is so hard to add E15 or E85 at a gas station

Preface. One of the huge hurdles to shifting from oil to “something else” is a chicken-or-egg problem: no one buys a new-fuel vehicle when there are few places to fuel it, so few such vehicles are made, so service stations don’t add the new fuel because there are few customers.

And fueling stations are just one piece of the distribution system. It’s also a problem that ethanol can’t flow in oil or gasoline pipelines, because it corrodes them, so it has to be transported by truck or rail using diesel fuel (since trucks can’t burn ethanol or diesohol).

This is why it is hard for service stations to add E15, E85, hydrogen, or any new fuel, though of course each has its own unique costs and difficulties.  Go here to see where alternative fuels can be found by state.

And heaven forbid you put in the wrong fuel. Gasoline cars cannot burn diesel fuel; doing so can require an engine rebuild.  At best the car chugs and lurches and is towed, and the owner is billed up to $1,500 to flush the tank, fuel lines, injectors, and fuel pump.


Mr. Shane Karr, Vice President of Federal Government Affairs, the Alliance of Automobile Manufacturers

Only about 2% of gas stations have an E85 pump, and most are concentrated in the Midwest, where most corn ethanol is produced. This makes sense, because keeping production close to the point of sale is the most affordable approach. But even in states where E85 pumps are concentrated, actual sales of E85 have been low and stagnant. For example, in 2009 Minnesota had 351 stations with an E85 pump (the most of any state), but the average flexible fuel vehicle (FFV) in the state used just 10.3 gallons of E85 for the whole year.

Achieving vehicle production mandates in H.R. 1687 by producing E85 FFVs would cost consumers well more than $1 billion per year by the most conservative estimates. And these conservative estimates are severely understated for the vehicle mandates of the bill for two reasons: (1) H.R. 1687 requires a new kind of tri-fuel FFV that can run on gasoline, ethanol, methanol, and any combination of the three fuels, and which does not exist today; and (2) it will be more expensive to produce tri-fuel FFVs that can comply with H.R. 1687, especially with the forthcoming California Low Emission Vehicles (LEV III) and federal Tier 3 emissions standards along with very aggressive fuel economy/GHG emission requirements through 2025.

Serial No. 112–159. July 10, 2012. The American energy initiative part 23: A focus on Alternative Fuels and vehicles. House of Representatives. 210 pages.

Jeffrey Miller, President of Miller Oil Company, Norfolk, VA.

On behalf of the National Association of Convenience Stores (NACS) Before the House Energy and Commerce Committee, Subcommittee on Energy and Power May 5, 2011 Hearing on “The American Energy Initiative”

My name is Jeff Miller, President of Miller Oil Company headquartered in Norfolk, VA. As of December 31, 2010, the U.S. convenience and fuel retailing industry operated 146,341 stores of which 117,297 (80.2%) sold motor fuels. In 2009, our industry generated $511 billion in sales (one of every 28 dollars spent in the United States), employed more than 1.5 million workers and sold approximately 80% of the nation’s motor fuel.

To fully understand how fuels enter the market and are sold to consumers, it is important to know who is making the decision at the retail level of trade. Our industry is dominated by small businesses. In fact, of the 117,297 convenience stores that sell fuel, 57.5% of them are single-store companies – true mom and pop operations. Overall, nearly 75% of all stores are owned and operated by companies my size or smaller – and we all started with just a couple of stores.

Many of these companies – mine included – sell fuel under the brand name of their fuel supplier. This has created a common misperception in the minds of many policymakers and consumers that the large integrated oil companies own these stations. The reality is that the majors are leaving the retail market place and today own and operate fewer than 2% of the retail locations.

Taking a chance by offering a new candy bar is very different from switching my fueling infrastructure to accommodate a new fuel. So when a new fuel product becomes available, our decision to offer it to our customers takes more time. We need to know that our customers want to buy it, that we can generate enough return to justify the investment, and that we can sell the fuel legally. These are the fundamental issues that face the introduction of new renewable and alternative fuels.

Today, most of the fuel sold in the United States is blended with 10% ethanol. The transition to this fuel mix was not complicated, but it was not without challenges. When ethanol became more prevalent in my market, we realized what a powerful solvent it is. Ethanol forced us to clean our storage tanks and change our filters frequently to avoid introducing contaminants into the fuel tanks of our customers’ vehicles. Despite our best efforts, however, there were times when the fuel a customer purchased caused problems with their vehicles. In those situations, it was our responsibility to correct the damage. And while the transition to E10 required no significant changes to equipment or systems, it taught us some lessons that influence our decisions concerning new fuels.

Retailers are now hearing reports from Washington that the use of fuel containing 15% ethanol is authorized.

Currently, there is essentially only one organization that certifies our equipment – Underwriters Laboratories (UL). UL establishes specifications for safety and compatibility and runs tests on equipment submitted by manufacturers for UL listing. Once satisfied, UL lists the equipment as meeting a certain standard for a certain fuel.

Prior to last spring, however, UL had not listed a single motor fuel dispenser (a.k.a. pump) as compatible with any fuel containing more than 10% ethanol. This means that any dispenser in the market prior to last spring – which would represent the vast majority of my dispensers – is not legally permitted to sell E15, E85, or anything above 10% ethanol – even if it is technically able to do so safely.

If I use non-listed equipment, I am in violation of OSHA regulations and may be violating my tank insurance policies, state tank fund program requirements, bank loan covenants, and potentially other local regulations. Furthermore, if my store has a petroleum release from that equipment, I could be sued on the grounds of negligence for using non-listed equipment, which would cost me significantly more than the expense of cleaning up the spill.

So, if none of my dispensers are UL-listed for E15, what are my options?

Unfortunately, UL will not re-certify any equipment. Only those units manufactured after UL certification is issued are so certified – all previously manufactured devices, even if they are the same model, are subject only to the UL listing available at the time of manufacture. This means that no retail dispensers, except those produced after UL issued a listing last spring, are legally approved for E10+ fuels.

In other words, the only legal option for me to sell E15 is to replace my dispensers with the specific models listed by UL. On average, a retail motor fuel dispenser costs approximately $20,000.

It is less clear how many of my underground storage tanks and associated pipes and lines would require replacement. Many of these units are manufactured to be compatible with high concentrations of ethanol, but they may not be listed as such. In addition, the gaskets and seals may need to be replaced to ensure the system does not pose a threat to the environment. If I have to crack open concrete to replace seals, gaskets or tanks, my costs can escalate rapidly and can easily exceed $100,000 per location.

MISFUELING

The second major issue I must consider is the effect of the fuel on customer engines and vehicles. Having dealt with engine problems associated with fuel contamination following the introduction of E10, I am very concerned about the potential effect a fuel like E15 would have on vehicles. The EPA decision concerning E15 is very challenging. Under EPA’s partial waiver, only vehicles manufactured in model year 2001 or more recently are authorized to fuel with E15. Older vehicles, motorcycles, boats, and small engines are not authorized to use E15.

How am I supposed to prevent the consumer from buying the wrong fuel? I can deal with the responsibility for fuel quality and contamination control, but self-service customer misfueling is a much more difficult challenge to control.

In the past, when we have introduced new fuels – like unleaded gasoline or ultra-low sulfur diesel – they were backwards compatible; i.e. older vehicles could use the new fuel. In addition, newer vehicles were required to use the new fuel, creating a guaranteed market demand.

Such is not the case with E15 – legacy vehicles are not permitted to use the new fuel. Doing so will violate Clean Air Act standards and could cause engine performance or safety issues. Yet, there are no viable options to retroactively install physical countermeasures to prevent misfueling. Consequently, my risk of liability if a customer uses E15 in the wrong engine – whether accidentally or intentionally – is significant.

First of all, I could be fined under the Clean Air Act for misuse of the fuel – this has happened before. When lead was phased out of gasoline, unleaded fuel was more expensive than leaded fuel. To save a few cents per gallon, some consumers physically altered their vehicle fill pipes to accommodate the larger leaded nozzles either by using can openers or by using a funnel while fueling. Retailers had no ability to prevent such behavior, but the EPA often levied fines against retailers for not physically preventing the consumer from bypassing the misfueling countermeasures.

My understanding is EPA has told NACS that the agency would not be targeting retailers for consumer misfueling. But that provides me with little comfort – EPA policy can change in the absence of specific legal safeguards. Further, the Clean Air Act includes a private right of action and any citizen can file a lawsuit against a retailer who does not prevent misfueling. Whether the retailer is found guilty does not change the fact that defending against such claims can be very expensive.

Finally, I am very concerned about the effect of E15 in the wrong engine. Using the wrong fuel could void an engine’s warranty, cause engine performance problems or even compromise the safety of some equipment. A consumer may seek to hold me liable for these situations even if my company was not responsible for the misfueling. Defending my company against such claims is financially expensive, but also expensive from a customer-relations perspective.

GENERAL LIABILITY EXPOSURE

Retailers are also concerned about long-term liability exposure. Our industry has experience with being sued for selling fuels that were approved at the time but later ruled defective. What assurances are there that such a situation will not repeat itself with new fuels being approved for commerce?

For example, E15 is approved only for certain engines and its use in other engines is prohibited by the EPA due to associated emissions and performance issues. What if E15 does indeed cause problems in non-approved engines or even in approved engines? What if in the future the product is determined defective, the rules are changed and E15 is no longer approved for use in commerce? There is significant concern that such a change in the law would be retroactively applied to any who manufactured, distributed, blended or sold the product in question.

Retailers are hesitant to enter new fuel markets without some assurance that our compliance with the law today will protect us from retroactive liability should the law change in the future. It seems reasonable that law abiding citizens should not be held accountable if the law changes in the future. Congress could help overcome significant resistance to new fuels by providing assurances that market participants will only be held to account for the laws as they exist at the time and not subject to liability for violating a future law or regulation.

MARKET ACCEPTANCE

The final challenge we face is the rate at which consumers will adopt the new fuels. Assume all the other issues are resolved, I have to ask myself: Will my customers purchase the fuel? It is important to note that this is the first fuel transition in which no person is required to purchase the fuel, unlike prior transitions to unleaded gasoline and ultra-low sulfur diesel fuel.

In the situation facing E15, only a subset of the population (about 65% of vehicles) is authorized to buy it. Yet the auto industry is not fully supportive of its use in anything except flexible fuel vehicles (about 3% of vehicles). This situation could dramatically reduce consumer acceptance. The risk of misfueling and potentially alienating customers if E15 causes performance issues also is a serious concern.

With these unknowns, how can I calculate an accurate return on my investment to install E15 compatible equipment? Again, this is not like offering a new candy bar – to sell E15 I will likely have to spend significant resources.

As new fuels enter the market, their compatibility with vehicles and their performance characteristics compared to traditional gasoline will be critically important to determining consumer acceptance. In addition, the cost of entry for retailers will influence the return on investment calculations required to determine whether to invest in the new fuel.

OPTIONS

NACS believes there are options available to Congress to help the market overcome these challenges. I have referenced E15 in this testimony because it is a fuel with which we are all familiar due to its current considerations at EPA. However, E15 alone will not satisfy the renewable fuel objectives of the country. Other products must be brought to market and how they interact with the refueling infrastructure and the consumer’s vehicles should be critical considerations to Congress when deciding whether to support their development and introduction.

Regardless of which fuels are introduced in the future, the following recommendations can help lower the cost of entry and provide retailers with the greater regulatory and legal certainty necessary for them to offer these new fuels to consumers:
First, because UL will not retroactively certify any equipment, Congress should authorize an alternative method for certifying legacy equipment. Such a method would preserve the protections for environmental health and safety, but eliminate the need to replace all equipment simply because the primary testing laboratory’s certification policy will not re-evaluate legacy equipment. NACS was supportive of legislation introduced in the House last Congress by Reps. Mike Ross (D-AR) and John Shimkus (R-IL) as H.R. 5778. This bill directed the EPA to develop guidelines for determining the compatibility of equipment with new fuels and stipulated that equipment satisfying such guidelines would thereby satisfy all laws and regulations concerning compatibility.

Second, Congress can require EPA to issue labeling regulations for fuels that are authorized for only a subset of vehicles and ensure that retailers who comply with such requirements satisfy their requirements under the Clean Air Act and protect them from violations or engine warranty claims in the event a self-service customer ignores the notifications and misfuels a non-authorized engine. H.R. 5778 also included provisions to achieve these objectives.

Third, Congress can provide market participants with regulatory and legal certainty that compliance with current applicable laws and regulations concerning the manufacture, distribution, storage and sale of new fuels will protect them from retroactive liability should the laws and regulations change at some time in the future.

Finally, Congress should evaluate the prospects for the marketing of infrastructure-compatible fuels and support the development of such fuels. These could aid compliance with the renewable fuels standard and save retailers, engine makers and consumers billions of dollars. Policymakers might consider establishing characteristics that new fuels must possess so that equipment and engines can be manufactured or retrofitted to accommodate whichever new fuel provides the greatest benefit to consumers and the economy.

If Congress takes action to lower the cost of entry and to remove the threat of unreasonable liability, more retailers may be willing to take a chance and offer a new renewable fuel. By lowering the barriers to entry, Congress will give the market an opportunity to express its will and allow retailers to offer consumers more choice. If consumers reject the new fuel, the retailer can reverse the decision without sacrificing a significant investment, but new fuels will be given a better opportunity to successfully penetrate the market.

Serial No. 112–159. July 10, 2012. The American energy initiative part 23: A focus on Alternative Fuels and vehicles. House of Representatives. 210 pages.

Jack Gerard, President and CEO of the American Petroleum Institute. Over the past 7 years, the two RFS laws passed in 2005 and 2007 have substantially expanded the role of renewables in America. Biofuels are now in almost all gasoline. While API supports the continued appropriate use of ethanol and other renewable fuels, the RFS law has become increasingly unrealistic, unworkable, and a threat to consumers. It needs an overhaul.

Most of the problems relate to the law’s volume requirements. These mandates call for blending increasing amounts of renewable fuels into gasoline and diesel. Although we are already close to blending an amount that would result in a 10 percent concentration of ethanol in every gallon of gasoline sold in America, which is the maximum known safe level, the volumes required will more than double over the next 10 years. The E10, or 10 percent ethanol blend, that we consume today could, by virtue of RFS volume requirements, become at least an E20 blend in the future. This would present an unacceptable risk to billions of dollars in consumer investment in vehicles, the vast majority of which were designed, built, and warranted to operate on a maximum blend of E10.

It also would put at risk billions of dollars of gasoline station equipment in thousands of retail outlets across America, most owned by small independent businesses. I believe well over 60 percent of retail establishments in this area are Ma and Pa operations.

Vehicle research conducted by the Auto Oil Coordinated Research Council shows that E15 could also damage the engines of millions of cars and light trucks, with estimates exceeding five million vehicles on the road today. E20 blends may have similar, if not worse, compatibility issues with engines and service station equipment.

The RFS law also requires increasing use of cellulosic ethanol, an advanced form of ethanol that can be made from a broader range of feed stocks. The problem is, you can’t buy the fuel yet because no one is making it commercially. While EPA could waive that provision, it has decided to require refiners to purchase credits for this nonexistent fuel, which will drive up costs and potentially hurt consumers. Mandating the use of fuels that do not exist is absurd on its face and is inexcusably bad public policy.

To date, E85 has faced low consumer acceptance as FFV owners use E85 less than 1% of the time. The fuel economy of an FFV operated on E85 is approximately 25-30% lower than when fueled with gasoline due to ethanol’s lower energy content. Also, less than 2% of retail gasoline stations offer E85, which has high installation costs.

In 2010 and 2011, EPA approved the use of E15 for a portion of the motor vehicle fleet in order to accommodate the RFS law’s volume increases. We believe these actions were premature and unlawful, and present an unacceptable risk to billions of dollars in consumer investments in vehicles. They also put at risk billions of dollars of gasoline station pump equipment in scores of thousands of retail outlets across America, most owned by small independent businesses. E15 is a different transportation fuel, well outside the range for which the vast majority of U.S. vehicles and engines have been designed and warranted. E15 is also outside the range for which service station pumping equipment has been listed and proven to be safe and compatible, and conflicts with existing worker and public safety laws outlined in OSHA regulations and fire codes.

EPA should not have proceeded with E15, especially before a thorough evaluation was conducted to assess the full range of short- and long-term impacts of increasing the amount of ethanol in gasoline on the environment, on engine and vehicle performance, and on consumer safety. Research on higher blends was already underway when EPA approved E15 in 2010 and 2011. In response to the passage of EISA in 2007, the oil and natural gas industry, the auto industry, and other stakeholders, including EPA and DOE, recognized in early 2008 that substantial research was needed in order to assess the impact of higher ethanol blends, including the compatibility of ethanol blends above 10% (E10+) with the existing fleet of vehicles and small engines.
Through the Coordinating Research Council (CRC), the oil and auto industries developed and funded a comprehensive multi-year testing program prior to the biofuels industry’s E15 waiver application. API worked closely with the auto and off-road engine industries and with EPA and DOE to share and coordinate research plans. Yet EPA approved the E15 waiver request before this research effort was finished and the results thoroughly evaluated. The potential for harm from that decision is substantial, as suggested by the results of the various research studies completed to date, including testing performed by DOE’s National Renewable Energy Laboratory and by the CRC. The DOE research shows that an estimated half of existing service station pumping equipment may not be compatible with a 15% ethanol blend. The CRC research shows that E15 could also damage the engines of millions of cars and light trucks.

E20 may have similar, if not worse, compatibility issues with engines and service station equipment.
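The 25-30% fuel-economy penalty quoted above translates directly into a break-even pump price: if an FFV gets that many fewer miles per gallon on E85, then E85 must be discounted by at least the same fraction to match gasoline’s cost per mile. A minimal sketch of that arithmetic (the $3.50 gasoline price is a hypothetical figure for illustration, not from the testimony):

```python
# Break-even E85 price per gallon, given the 25-30% fuel-economy
# penalty quoted in the testimony. The gasoline price is hypothetical.
def breakeven_e85_price(gas_price, mpg_penalty):
    """E85 price at which cost per mile equals gasoline's."""
    return gas_price * (1 - mpg_penalty)

gas_price = 3.50  # $/gallon, assumed for illustration only
for penalty in (0.25, 0.30):
    price = breakeven_e85_price(gas_price, penalty)
    print(f"{penalty:.0%} penalty: E85 must sell below ${price:.2f}/gal")
```

In other words, at any plausible gasoline price, E85 has to be substantially cheaper per gallon just to break even per mile, which helps explain the low consumer uptake the testimony describes.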

JOSEPH H. PETROWSKI. Gulf Oil Group.

We are the nation’s eighth largest convenience retailer of petroleum products and convenience items, operating in over 13 states. Our wholesale oil division, Gulf Oil, carries and merchandises over 350,000 barrels of petroleum products and biofuels across 29 states; $13 billion in revenue places us in the top 50 private companies in the country. We employ 8,000 people.

We do not drill, we do not refine petroleum products. What we care to sell are products that our customers want to buy that are most economic for them to achieve their desired transport, heating, and other energy uses in a lawful manner.

In addition to selling petroleum products, our primary product, we blend over 1 million gallons a day of biofuels across our system, and we have just recently purchased 24 Class A trucks that run on natural gas to deliver our fuel products to our stations and stores.

We believe that a sound energy policy rests on four bedrocks. One is that we have diverse fuel sources, and there are two reasons for that. The future is unknowable. The new shale technology that has taken over the industry in natural gas was unheard of more than 2 decades ago. Technology and events are beyond our abilities to understand where we are going, and so to bet any of our future on one single source of fuel would be a mistake. We believe diversity in all systems ensures health and stability. And so we look for diversity in fuel, not only by fuel type, but to make sure that we are not concentrated in taking it from one region, particularly the Middle East and unstable regions.

I do want to point out to all the members that we have billions, hundreds of billions of dollars invested in terminals, gas stations, barges, transportation, and we have to live with the realities of the marketplace and the particulars.

America’s love affair with the automobile is not going away. Neither is the need for transportation fuels that underpin the economy and create jobs. In a country as vast as ours with a density of 79 people per square mile (as opposed to the Netherlands with 1300 people per square mile), the cost of transport is central to economic health.

When total national energy costs exceed 16% of GDP, a recession or worse is almost always the result. The United States’ current accounts trade balance for all energy products recently exceeded $1 trillion, and while it has been reduced to half that amount on an annualized basis, we look forward to the day when the United States is a net energy exporter. Not only will that be positive for GDP and job growth, but it will position us to revitalize our industrial production, especially in energy-intensive industries with an eye toward value-added product exports. And no policy would be more beneficial for the spread of world democracy.

Our industry is dominated by small businesses. In fact, of the 120,950 convenience stores that sell fuel, almost sixty percent of them are single-store companies – true mom and pop operations. Many of these companies sell fuel under the brand name of their fuel supplier. This has created a common misperception in the minds of many policymakers and consumers that the large integrated oil companies own these stations. The reality is that the majors are leaving the retail marketplace and today own and operate fewer than 2% of the retail locations. Although a store may sell a particular brand of fuel associated with a refiner, the vast majority are independently owned and operated like mine. When people pull into an Exxon or a BP station, the odds are good that they are in fact refueling at a small mom-and-pop operation.

THE BLEND WALL AND THE NEED FOR A CONGRESSIONAL FIX. Since the enactment of the Energy Independence and Security Act (EISA) of 2007, we have heard much about the impending arrival of the so-called “blend wall” – the point at which the market cannot absorb any additional renewable fuels. Most of the fuel sold in the United States today is blended with 10% ethanol. If 10% ethanol were blended into every gallon of gasoline sold in the nation in 2011 (133.9 billion gallons), the market would reach a maximum of 13.39 billion gallons of ethanol. However, the 2012 statutory mandate for the RFS is 15.2 billion gallons. Meanwhile, the market for higher blends of ethanol (E85) for flexible fuel vehicles (FFVs) has not developed as rapidly as some had hoped. Clearly, we have reached the blend wall.
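The blend-wall arithmetic in the paragraph above is simple enough to sketch directly, using the 2011/2012 figures from the testimony:

```python
# Blend-wall arithmetic from the testimony's 2011/2012 figures.
gasoline_sold = 133.9e9        # gallons of gasoline sold in the U.S. in 2011
max_ethanol_fraction = 0.10    # E10: the accepted 10% ceiling per gallon
rfs_mandate_2012 = 15.2e9      # 2012 statutory RFS volume, gallons

# Maximum ethanol the market can absorb at E10 (the "blend wall").
blend_wall = gasoline_sold * max_ethanol_fraction
# How far the mandate overshoots what E10 blending can absorb.
shortfall = rfs_mandate_2012 - blend_wall

print(f"Blend wall: {blend_wall / 1e9:.2f} billion gallons")
print(f"Mandate exceeds the wall by {shortfall / 1e9:.2f} billion gallons")
```

The roughly 1.8-billion-gallon gap is what would have to be absorbed by higher blends such as E15 or E85, which is why the testimony turns next to those fuels.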

EPA recently authorized the use of E15 in certain vehicles. However, this has so far done very little to expand the use of renewable fuels, due largely to retailers’ liability and compatibility concerns, as well as state and local restrictions on selling E15. Congress can do something immediately to mitigate other obstacles preventing new fuels from entering the market. H.R. 4345, the Domestic Fuels Protection Act of 2012, currently before the subcommittee on Environment and the Economy, addresses three of these obstacles: infrastructure compatibility, liability for consumer misuse of fuels, and retroactive liability under the rules governing a future fuel change.

The reason the retail market is unable to easily accommodate additional volumes of renewable fuels begins with the equipment found at retail stations. By law, all equipment used to store and dispense flammable and combustible liquids must be certified by a nationally recognized testing laboratory. These requirements are found in regulations of the Occupational Safety and Health Administration. Currently, there is essentially only one organization that certifies such equipment, Underwriters Laboratories (UL). UL establishes specifications for safety and compatibility and runs tests on equipment submitted by manufacturers for UL listing. Once satisfied, UL lists the equipment as meeting a certain standard for a certain fuel. Prior to 2010, UL had not listed a single motor fuel dispenser (aka a gas pump) as compatible with any fuel containing more than 10% ethanol. This means that any dispenser in the market prior to early 2010 is not legally permitted to sell E15, E85 or anything above 10% ethanol – even if it is able to do so safely.

If a retailer fails to use listed equipment, that retailer is violating OSHA regulations and may be violating tank insurance policies, state tank fund program requirements, bank loan covenants, and potentially other local regulations. In addition, the retailer could be found negligent per se based solely on the fact that his fuel dispensing system is not listed by UL. This brings us to the primary challenge: if no dispenser prior to early 2010 was listed as compatible with fuels containing greater than ten percent ethanol, what options are available to retailers to sell these fuels? In order to comply with the law, retailers wishing to sell E10+ fuels can only use equipment specifically listed by UL as compatible with such fuels. Because UL did not list any equipment as compatible with E10+ fuels until 2010, only those units produced after that date can legally sell E10+ fuels. All previously manufactured devices, even if they are the exact same model using the exact same materials, are subject only to the UL listing available at the time of manufacture. (UL policy prevents retroactive certification of equipment.)

Practically speaking, this means that the vast majority of retailers wishing to sell E10+ fuels must replace their dispensers. This costs an average of $20,000 per dispenser. It is less clear how many underground storage tanks and associated pipes and lines would require replacement. Many of these units are manufactured to be compatible with high concentrations of ethanol, but they may not be listed as such. Further, if there are concerns with gaskets and seals in dispensers, care must be given to ensure the underground gaskets and seals do not pose a threat to the environment. Once a retailer begins to replace underground equipment, the cost can escalate rapidly and can easily exceed $100,000 per location.

The second major issue facing retailers is the potential liability associated with improperly fueling an engine with a non-approved fuel. The EPA decision concerning E15 puts this issue into sharp focus for retailers. Under EPA’s partial waiver, only vehicles manufactured in model year 2001 or more recently are authorized to fuel with E15. Older vehicles, motorcycles, boats, and small engines are not authorized to use E15. For the retailer, bifurcating the market in this way presents serious challenges. For instance, how does the retailer prevent the consumer from buying the wrong fuel? Typically, when new fuels are authorized they are backwards compatible, so this is not a problem; in other words, older vehicles can use the new fuel. When EPA phased lead out of gasoline in the late 1970s and early 1980s, for example, older vehicles were capable of running on unleaded fuel; newer vehicles, however, were required to run only on unleaded. These newer vehicles’ gasoline tanks were equipped with smaller fill pipes into which a leaded nozzle could not fit – likewise, unleaded dispensers were equipped with smaller nozzles. E15 is very different: legacy engines are not permitted to use the new fuel. Doing so would violate Clean Air Act standards and could cause engine performance or safety issues. Yet there are no viable options to retroactively install physical countermeasures to prevent misfueling.

Retailers could be subject to penalties under the Clean Air Act for not preventing a customer from misfueling with E15. This concern is not without justification. In the past, retailers have been held accountable for the actions of their customers. For example, because unleaded fuel was more expensive than leaded fuel, some consumers physically altered their vehicle fill pipes to accommodate the larger leaded nozzles either by using can openers or by using a funnel while fueling. We may see similar behavior in the future given the high price of gasoline relative to ethanol. As in the past, the retailer will not be able to prevent such practices, but in the case of leaded gasoline the EPA levied fines against the retailer for not physically preventing the consumer from bypassing the misfueling counter measures. To EPA’s credit, they have asserted in meetings with NACS and SIGMA that they would not be targeting retailers for consumer misfueling. But that provides little comfort to retailers. EPA policy can change in the absence of specific legal safeguards. Additionally, the Clean Air Act includes a private right of action and any citizen can file a lawsuit against a retailer that does not prevent misfueling. Whether the retailer is found guilty does not change the fact that defending against such claims is very expensive. Further, the consumer may seek to hold the retailer liable for their own actions. Using the wrong fuel could void an engine’s warranty, cause engine performance problems or even compromise the safety of some equipment. In all situations, some consumers may seek to hold the retailer accountable even when the retailer was not responsible for the improper use of the fuel. Once again, defending such claims is expensive.

An EPA decision to approve E15 for 2001 and newer vehicles is not consistent with the terms of most warranty policies issued with these affected vehicles. Consequently, while using E15 in a 2009 vehicle might be lawful under the Clean Air Act, it may in fact void the warranty of the consumer’s vehicle. Retailers have no mechanism for ensuring that consumers abide by their vehicle warranties – it is the consumer’s responsibility to comply with the terms of their contract with their vehicle manufacturer. Therefore, H.R. 4345 stipulates that no person shall be held liable in the event a self-service customer introduces a fuel into their vehicle that is not covered by their vehicle warranty.

GENERAL LIABILITY EXPOSURE. Finally, there are widespread concerns throughout the retail community and among our product suppliers that the rules of the game may change and we could be left exposed to significant liability. For example, E15 is approved only for certain engines and its use in other engines is prohibited by the EPA due to associated emissions and performance issues. What if E15 does indeed cause problems in non-approved engines or even in approved engines? What if in the future the product is determined defective, the rules are changed, and E15 is no longer approved for use in commerce? There is significant concern that such a change in the law would be retroactively applied to anyone who manufactured, distributed, blended or sold the product in question.

Contrary to popular misconception, fuel marketers prefer cheap gasoline. The less the consumer pays at the pump, the more money the consumer has to spend in our stores, where our profit margins are significantly greater.


Global oil discovered 7.7 times less than consumption in 2019


Source: Rystad Energy (2020) in “Global oil and gas discoveries reach four-year high in 2019, boosted by ExxonMobil’s Guyana success“.

Preface.  The global conventional discovery chart above lists natural gas and oil discoveries since 2013.  The fossil fuel that really matters is oil, since it’s the master resource that makes all others available, including natural gas, coal, transportation, and manufacturing.


Source: discoveries from Rystad Energy (2020); consumption from the BP Statistical Review of World Energy (2020).

As you can see, in 2019 the world burned 7.7 times more oil than it discovered, leaving a shortfall of 31.74 billion barrels that would have to be discovered just to break even. This can’t end well, as anyone whose covid-19 pantry is emptying can easily grasp.
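As a rough check on these figures, assume world consumption of about 100 million barrels a day in 2019 (an assumption of this sketch); the discovery total is then implied by the stated 31.74-billion-barrel shortfall:

```python
# Check of the discovery-vs-consumption arithmetic for 2019.
annual_consumption = 100e6 * 365      # ~36.5 billion barrels/year (assumed)
shortfall = 31.74e9                   # barrels, as stated in the text

discovered = annual_consumption - shortfall   # implied 2019 discoveries
ratio = annual_consumption / discovered

print(f"Implied discoveries: {discovered / 1e9:.2f} billion barrels")  # ~4.76
print(f"Consumed-to-discovered ratio: {ratio:.1f}")                    # ~7.7
```

In other words, discoveries of under 5 billion barrels against consumption of roughly 36.5 billion reproduce the 7.7-to-1 ratio in the headline.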

FYI from Peak Oil Review Feb 10, 2020: worldwide oil production was 60.27% conventional on-shore oil, 21.59% conventional offshore shallow-water oil, 8.1% conventional offshore deep-water oil, 6.93% U.S. tight oil (fracking), and 3.10% Canadian oil sands.

And an editorial in oilprice.com notes that: “US oil production has peaked, and it will be difficult to climb back to these levels ever again, given how much capital markets have soured on the industry. The EIA said that the US will once again become a net petroleum importer later this year, ending a brief spell during which the US was a net exporter”.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Holter, M. August 29, 2016. Oil Discoveries at 70-Year Low Signal Supply Shortfall Ahead. Bloomberg.


2016 figure only shows exploration results to August. Discoveries were just 230 million barrels in 1947 but skyrocketed the next year when Ghawar was discovered in Saudi Arabia, and it is still the world's largest oil field, though recently it was learned that Ghawar is in decline at 3.5% a year. Source: Wood Mackenzie

Explorers in 2015 discovered only about a tenth as much oil as they have annually on average since 1960. This year, they’ll probably find even less, spurring new fears about their ability to meet future demand.

With oil prices down by more than half since the price collapse two years ago, drillers have cut their exploration budgets to the bone. The result: Just 2.7 billion barrels of new supply was discovered in 2015, the smallest amount since 1947, according to figures from Edinburgh-based consulting firm Wood Mackenzie Ltd. This year, drillers found just 736 million barrels of conventional crude as of the end of last month.

That’s a concern for the industry at a time when the U.S. Energy Information Administration estimates that global oil demand will grow from 94.8 million barrels a day this year to 105.3 million barrels in 2026. While the U.S. shale boom could potentially make up the difference, prices locked in below $50 a barrel have undercut any substantial growth there. Ten years from now, this will have “significant potential to push oil prices up.” Given current levels of investment across the industry and decline rates at existing fields, a “significant” supply gap may open up by 2040.

Oil companies will need to invest about $1 trillion a year to continue to meet demand, said Ben Van Beurden, the CEO of Royal Dutch Shell Plc, during a panel discussion at the Norway meeting. He sees demand rising by 1 million to 1.5 million barrels a day, with about 5 percent of supply lost to natural declines every year.
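Van Beurden’s figures imply a simple sum: the new supply needed each year is the demand growth plus what natural decline takes away. A back-of-the-envelope sketch (the ~95 million barrels/day base-supply figure is an assumption, roughly world output at the time):

```python
# New supply the industry must add each year = natural decline + demand growth.
base_supply_mbd = 95.0            # assumed world output, million barrels/day
decline_rate = 0.05               # ~5% lost to natural decline per year (stated)
demand_growth_mbd = (1.0, 1.5)    # stated range of annual demand growth

decline_loss = base_supply_mbd * decline_rate          # ~4.75 mb/d
needed = [decline_loss + g for g in demand_growth_mbd]
print(f"New supply needed: {needed[0]:.2f}-{needed[1]:.2f} mb/d per year")
```

Roughly 6 million barrels a day of new capacity every year, most of it just to offset decline rather than to meet growth.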

New discoveries from conventional drilling, meanwhile, are “at rock bottom,” said Nils-Henrik Bjurstroem, a senior project manager at Oslo-based consultants Rystad Energy AS. “There will definitely be a strong impact on oil and gas supply, and especially oil.”

Global inventories have been buoyed by full-throttle output from Russia and OPEC, which have flooded the world with oil despite depressed prices as they defend market share. But years of under-investment will be felt as soon as 2025, Bjurstroem said. Producers will replace little more than one in 20 of the barrels consumed this year, he said.

There were 209 wells drilled through August this year, down from 680 in 2015 and 1,167 in 2014, according to Wood Mackenzie. That compares with an annual average of 1,500 in data going back to 1960.

Overall, the proportion of new oil that the industry has added to offset the amount it pumps has dropped from 30 percent in 2013 to a reserve-replacement ratio of just 6 percent this year in terms of conventional resources, which excludes shale oil and gas, Bjurstroem predicted. Exxon Mobil Corp. said in February that it failed to replace at least 100 percent of its production by adding resources with new finds or acquisitions for the first time in 22 years.

“That’s a scary thing because, seriously, there is no exploration going on today,” Per Wullf, CEO of offshore drilling company Seadrill Ltd., said by phone.


Ugo Bardi: Collapse. Where can we find a safe refuge?

Preface. This is from Ugo Bardi’s excellent blog Cassandralegacy.blogspot.com, reposted here. I agree that a small town or city might be best, but only if it is near agriculture; most towns in the desert Southwest of the U.S. are not going to survive. Also, the younger you are, the better it is to be as far from large cities as possible; at some point they will collapse too, since they’re so far over carrying capacity.

Roman times were different in that there was a whole lot more land to retreat to, and most people had not only farming skills but also hunter-gatherer knowledge, fishing skills, or the ability to herd cattle, goats, and sheep, as the book “Against the Grain” argues (it also argues that these pre-fossil civilizations depended on slave labor to a huge extent).


***

Does it make sense to have a well-stocked bunker in the mountains to escape collapse?

Sometimes, you feel that the world looks like a horror story, something like Lovecraft’s “The Shadow Over Innsmouth.” Image from F.R. Jameson.

Being the collapsnik I am, a few years ago I had the idea that I could buy myself some kind of safe haven in the mountains, a place where my family and I could find refuge if (and when) the dreaded collapse were to strike our civilization (as they say, when the Nutella hits the fan). It is a typical idea among collapse-oriented people: run away from the cities, imagined to be the most vulnerable places in a Mad Max-style scenario.

Maybe I was also thinking of Boccaccio’s Decameron, which describes how in the mid-14th century a group of wealthy Florentines found refuge from the plague in a villa outside Florence, passing their leisure telling stories to each other. I don’t own a villa in the countryside, but I took a tour of villages in the Appennini mountains, a few hundred km from Florence, seeking a hamlet of some kind to buy. I was accompanied by a friend of mine who is a denizen of the area and whom I had infected with the collapse meme.

We found several houses and apartments for sale in the area. One struck me as suitable, and the price was also interesting. It was a two-floor apartment with windows opening on the central square of the village where it was located, among wooded hills. It had a wood stove, the kind of heating system you can always manage in an emergency. And it was at a sufficient height that you could be reasonably safe from heat waves, even without air conditioning.

Then, I was looking at the village from one of the windows when a strange sensation hit me. People were walking in the square, and a few of them raised their glance to look at me. And, for a moment, I was scared.

Did you ever read Lovecraft’s short story “The Shadow over Innsmouth”? It tells the story of someone who finds himself stuck in a coastal town named Innsmouth, which he discovers is inhabited by fish-like humanoids, the “Deep Ones,” practicing the cult of a marine deity called Dagon.

Don’t misunderstand me: the people I was seeing in the square were not alien cultists of some monstrous divinity. What had scared me was a different kind of thought. It was that I knew that every adult male in that area owns a rifle or a shotgun loaded with slug ammunition. And every adult male in good health engages in wild boar hunting every weekend. They can kill a boar at 50 meters or more, then they are perfectly able to gut it and turn it into ham and sausages.

Now, if things were to turn truly bad, would some of those people consider me the equivalent of a wild boar? For sure, I couldn’t even dream of matching the kind of firepower they have. I thanked the owner of the place and my friend, and I drove back home. I never went back to that place.

A few years later, with a real collapse striking us in the form of the COVID-19 epidemic, I can see that I did well in not buying that apartment in the mountains. At the time of Boccaccio, wealthy Florentine citizens could reasonably think of moving to a villa in the countryside. These villas were nearly self-sufficient agricultural units, where one could find food and shelter provided by local peasants and servants (at that time, not armed with long-range rifles). But that, of course, is no longer the case.

The current crisis is showing us what a real collapse looks like. And it shows that some science fiction scenarios were totally wrong. The typical trope of a post-holocaust story is that people run away from flaming cities after having stormed the shops and the supermarkets, leaving empty shelves for those who arrive late. That didn’t happen here. At most, people seemed to think that what they needed most in an emergency was toilet paper, and they emptied the supermarket shelves of it. But that was quickly over. Maybe we’ll arrive at that kind of scenario, but what is happening now is not that the supermarkets are running out of goods; everything is available if you have the money to buy it. The problem is that people are running out of money.

In this situation, the last thing the government wants is food riots. And they especially care about cities — if they lose control of the cities, everything is lost for them. So they are acting on two levels: they are providing food certificates for the poor, and, at the same time, clamping down on cities with the police and the army to enforce the lockdown. People are facing criminal charges if they dare to take a walk on the street.

Not an easy situation, but at least we have food and the cities are quiet. Think of what would have happened if I had bought that apartment in the mountains. I wouldn’t even have been able to go there during the coronavirus epidemic. But if somehow I had managed to dodge the police, then I would be stuck there. And no supermarkets nearby: there is a small shop selling food in the village, but would it be resupplied during the crisis? The locals have ways to survive on local food, but a town dweller like me doesn’t. And I have never tried to shoot a wild boar; I don’t think it is easy – to say nothing of gutting it and turning it into sausage. Worse, I am sure that no police would patrol that small village, surely not the woods. So, maybe the local denizens would not shoot me and boil me in a cauldron, but if I were to run out of toilet paper, where could I find some? And, worse, what if I were to run out of food?


So, where can we find refuge from collapse? I can think of scenarios where you could be better off in a bunker somewhere in an isolated area, where you have stocked a lot of supplies. But in most cases, that would be a terribly bad idea. A well-stocked bunker is the ideal target for whoever is better armed than you, and they can always smoke you out. Of course, you can think of a refuge for an entire group of people, with some of them able to shoot intruders, others to cultivate the fields, others to care for you if you get sick. Maybe, but it is a complicated story. It has often been done on the basis of religious ideas, and in some cases it may have worked, at least for a while. You could join the Amish, but would they want you? And never forget the case of Reverend Jim Jones in Guyana.

In the end, I think the best place to be in a time of crisis is exactly where I am: in a medium-sized city. It is the last place the government will stop trying to keep under control, and not a likely target for someone armed with nukes or other nasty things. Why do I say that? Look at the map, here.

This is a map of the Roman Empire at its peak. Note the position of the major cities: the Empire collapsed and disappeared, but most of the cities of that time are still there, more or less with the same name, the new buildings built in place of the old ones, or near them. Those cities were built in specific places for specific reasons, availability of water, resources, or transportation. And so it made sense for the cities to be exactly where they were, and where they still are. Cities turned out to be extremely resilient. And how about Roman villas in the countryside? Well, many are being excavated today, but after the fall of the Empire, they were abandoned and never rebuilt. It must have been terribly difficult to defend a small settlement against all the horrible things that were happening at the time of the fall of the Empire.

So, overall, I think I did well in moving from a home in the suburbs to one downtown. Bad times may come, but I would say that it offers the best chances of survival, even in reasonably horrible times. Then, of course, the best-laid plans of mice and men gang aft agley, as we all know. In any case, collapses are bad, and that doesn’t change for collapsniks.


The U.S. May Soon Have the World’s Oldest Nuclear Power Plants

Preface. This is nuts. Sea level rise threatens many nuclear power plants and drought has shut plants down since they need cooling to operate.

As nuclear reactors age, they require more intensive monitoring and preventive maintenance to operate safely. But reactor owners have not always taken this obligation seriously enough. Given that older reactors require more attention from the regulator, not less, it is perplexing that the NRC wants to scale back its inspections of the aging reactor fleet and its responses to safety violations. Six years ago, the US Government Accountability Office pointed out that “NRC’s oversight will soon likely take on even greater importance as many commercial reactors … are reaching or have reached the end of their initial 40-year operating period.” (Lyman 2019).


***

Natter, A. 2020. The U.S. May Soon Have the World’s Oldest Nuclear Power Plants. Bloomberg.


In December, federal regulators approved Florida Power & Light Co.’s request to let Turkey Point’s twin nuclear reactors remain in operation for another 20 years beyond the end of their current licenses. By that point they’ll be 80, making them the oldest reactors in operation anywhere in the world.

“That’s too old,” said Rippingille, a lawyer and retired Miami-Dade County judge who was wearing a blue print shirt with white sea turtles on it. “They weren’t designed for this purpose.”

With backing from the Trump administration, utilities across the nation are preparing to follow suit, seeking permission to extend the life of reactors built in the 1970s to the 2050s as they run up against the end of their 60-year licenses.

“We are talking about running machines that were designed in the 1960s, constructed in the 1970s and have been operating under the most extreme radioactive and thermal conditions imaginable,” said Damon Moglen, an official with the environmental group Friends of the Earth. “There is no other country in the world that is thinking about operating reactors in the 60 to 80-year time frame.”


Indeed, the move comes as other nations shift away from atomic power over safety concerns.

Critics such as Edwin Lyman, a nuclear energy expert with the Union of Concerned Scientists, argue that older plants contain “structures that can’t be replaced or repaired,” including the garage-sized steel reactor vessels that contain tons of nuclear fuel and can grow brittle after years of being bombarded by radioactive neutrons. “They just get older and older,” he said. If the vessel gets brittle, it becomes vulnerable to cracking or even catastrophic failure. That risk increases if it’s cooled down too rapidly—say in the case of a disaster, when cold water must be injected into the core to prevent a meltdown.


The commission’s decision doesn’t sit well with Philip Stoddard, a bespectacled biology professor who serves as the mayor of South Miami, a city of 13,000 about 18 miles from the Turkey Point plant. He keeps a store of potassium iodide, used to prevent thyroid cancer, large enough to provide for every child in his city should the need arise.


“You’ve got hurricanes, you’ve got storm surge, you’ve got increasing risks of hurricanes and storm surge,” said Stoddard, 62, from the corner office in a biology building on Florida International University’s palm-tree lined campus. All of this not only increases the likelihood of a nuclear disaster, it also complicates a potential evacuation, which could put even more lives at risk. “Imagine being in a radiation cloud in your car and you’re sitting there running out of gas because you’re in a parking lot in the freeway,” he said.


Climate change is also one of the main arguments against extending the life of Turkey Point, said Kelly Cox, the general counsel for Miami Waterkeeper, a six-person environmental group that has joined with the Natural Resources Defense Council and Friends of the Earth to challenge the NRC’s approval in the United States Court of Appeals for the District of Columbia Circuit. New data show sea level rise in the area could reach as high as 4.5 feet by 2070, but regulators at the Nuclear Regulatory Commission didn’t take those updated figures into account, said Cox.

References

Lyman, E. 2019. Aging nuclear plants, industry cost-cutting, and reduced safety oversight: a dangerous mix. Bulletin of the Atomic Scientists.

Posted in Nuclear Power Energy | Tagged , , | Comments Off on The U.S. May Soon Have the World’s Oldest Nuclear Power Plants

High-level nuclear waste storage degrades faster than thought

Preface. Burying nuclear waste ought to be a top priority, now that it appears peak oil may have happened in November of 2018 (Patterson 2019) and perhaps even sooner if covid-19 crashes the world economy (Tverberg 2020). It won’t happen after oil production peaks, when it is rationed to agriculture and other essential services. Our descendants shouldn’t have to cope with nuclear waste on top of all the other destruction we’re causing in the world.


***

OSU. 2020. High-level nuclear waste storage materials will likely degrade faster than previously thought. Ohio State University.

Study finds the materials — glass, ceramics and stainless steel — interact to accelerate corrosion.

The materials the United States and other countries plan to use to store high-level nuclear waste will likely degrade faster than anyone previously knew because of the way those materials interact, new research shows.

The findings, published today in the journal Nature Materials, show that corrosion of nuclear waste storage materials accelerates because of changes in the chemistry of the nuclear waste solution, and because of the way the materials interact with one another.

“This indicates that the current models may not be sufficient to keep this waste safely stored,” said Xiaolei Guo, lead author of the study and deputy director of Ohio State’s Center for Performance and Design of Nuclear Waste Forms and Containers, part of the university’s College of Engineering. “And it shows that we need to develop a new model for storing nuclear waste.”

The team’s research focused on storage materials for high-level nuclear waste — primarily defense waste, the legacy of past nuclear arms production. The waste is highly radioactive. While some types of the waste have half-lives of about 30 years, others — for example, plutonium — have a half-life that can be tens of thousands of years. The half-life of a radioactive element is the time needed for half of the material to decay.
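The half-life arithmetic can be illustrated in a couple of lines (a generic illustration, not from the study: the ~30-year figure matches isotopes like cesium-137, and plutonium-239’s half-life is roughly 24,100 years):

```python
# After time t, the fraction of a radioactive isotope remaining is
# 0.5 ** (t / half_life).
def fraction_remaining(t_years: float, half_life_years: float) -> float:
    return 0.5 ** (t_years / half_life_years)

# ~30-year half-life waste (e.g. cesium-137) vs. plutonium-239 (~24,100 years)
print(f"{fraction_remaining(300, 30):.4%} left after 300 years")      # ~0.1%
print(f"{fraction_remaining(300, 24_100):.1%} left after 300 years")  # ~99.1%
```

This is why storage containers must survive not centuries but geological spans of time: after 300 years the short-lived waste is essentially gone, while the plutonium has barely begun to decay.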

The United States currently has no disposal site for that waste; according to the U.S. Government Accountability Office, it is typically stored near the plants where it is produced. A permanent site has been proposed for Yucca Mountain in Nevada, though plans have stalled. Countries around the world have debated the best way to deal with nuclear waste; only one, Finland, has started construction on a long-term repository for high-level nuclear waste.

But the long-term plan for high-level defense waste disposal and storage around the globe is largely the same. It involves mixing the nuclear waste with other materials to form glass or ceramics, and then encasing those pieces of glass or ceramics — now radioactive — inside metallic canisters. The canisters would then be buried deep underground in a repository to isolate them.

In this study, the researchers found that when exposed to an aqueous environment, glass and ceramics interact with stainless steel to accelerate corrosion, especially of the glass and ceramic materials holding nuclear waste.

The study qualitatively measured the difference between accelerated corrosion and natural corrosion of the storage materials. Guo called it “severe.”

“In the real-life scenario, the glass or ceramic waste forms would be in close contact with stainless steel canisters. Under specific conditions, the corrosion of stainless steel will go crazy,” he said. “It creates a super-aggressive environment that can corrode surrounding materials.”

To analyze corrosion, the research team pressed glass or ceramic “waste forms” — the shapes into which nuclear waste is encapsulated — against stainless steel and immersed them in solutions for up to 30 days, under conditions that simulate those under Yucca Mountain, the proposed nuclear waste repository.

Those experiments showed that when glass and stainless steel were pressed against one another, stainless steel corrosion was “severe” and “localized,” according to the study. The researchers also noted cracks and enhanced corrosion on the parts of the glass that had been in contact with stainless steel.

Part of the problem lies in the Periodic Table. Stainless steel is made primarily of iron mixed with other elements, including nickel and chromium. Iron has a chemical affinity for silicon, which is a key element of glass.

The experiments also showed that when ceramics — another potential holder for nuclear waste — were pressed against stainless steel under conditions that mimicked those beneath Yucca Mountain, both the ceramics and stainless steel corroded in a “severe localized” way.

Reference: “Self-accelerated corrosion of nuclear waste forms at material interfaces” by Xiaolei Guo, et al., 27 January 2020, Nature Materials.
DOI: 10.1038/s41563-019-0579-x


Concentrated Solar Power is dying out in the U.S.

Preface. Concentrated Solar Power (CSP) contributes only 0.06% of U.S. electricity, mainly in California (64%) and Arizona (24%), because it requires extremely dry areas with no humidity, haze, or pollutants. Of the 1,861 MW of generating capacity, only 25% of these plants can also store electricity using thermal energy storage. This is their only advantage over solar panels, the ability to continue generating electricity after the sun goes down, since CSP costs astronomically more than solar PV.

Energy is stored as heat, usually in molten salt, with total CSP storage rated at 510 MW.

CSP is more capital-intensive than any other power generation plant except nuclear. Eight plants cost a total of $9 billion: Solana, Genesis, Mojave, Ivanpah, Rice, Martin, Nevada Solar One, and Crescent Dunes (NREL 2013).

Almost all CSP plants also have fossil backup to diminish night thermal losses, prevent molten salt from freezing, supplement low solar irradiance in the winter, and for fast starts in the morning.

CSP electricity generation in winter is significantly less than other seasons, even in the best range of latitudes between 15° and 35°.

To provide seasonal storage, CSP plants would need to use stone, which is much cheaper than molten salt. A 100 MW facility would need 5.1 million tons of rock taking up 2 million cubic meters (Welle 2010).
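A rough check of the quoted rock-storage figures, plus what they imply for storage capacity. The rock density is implied by the source's own numbers; the specific heat (~0.84 kJ/kg·K) and usable temperature swing (300 K) are our assumptions, not from the source:

```python
# Sanity check on the quoted rock-storage figures (Welle 2010).
rock_mass_t = 5.1e6          # tons of rock for a 100 MW plant (from the text)
density_t_per_m3 = 2.55      # implied by the source's 2 million m3
volume_m3 = rock_mass_t / density_t_per_m3
print(f"volume: {volume_m3:.2e} m3")      # ~2.0e6 m3, matching the source

# Thermal capacity, assuming cp ~0.84 kJ/(kg*K) and a 300 K swing
# (our assumptions; this is thermal energy, before turbine losses).
cp_kj_per_kg_k = 0.84
delta_t_k = 300
energy_kj = rock_mass_t * 1000 * cp_kj_per_kg_k * delta_t_k
energy_gwh = energy_kj / 3.6e9            # 1 GWh = 3.6e9 kJ
hours_at_100mw = energy_gwh * 1000 / 100  # MWh divided by MW
print(f"capacity: {energy_gwh:.0f} GWh thermal, ~{hours_at_100mw:.0f} h at 100 MW")
```

Under these assumptions the pile holds roughly 357 GWh of heat, months of output for a 100 MW plant, which is why a gravel-quarry-sized store is the scale being discussed.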

Since stone is a poor heat conductor, the thick insulating walls required might make this unaffordable (IEA 2011b).

Nevada’s 110 MW Crescent Dunes opened in 2015 with 10 hours of storage and was expected to provide an average of 0.001329 TWh a day. Build roughly 8,366 more Crescent Dunes-scale plants and presto, we’ll have one day of U.S. electrical storage (11.12 / 0.001329 ≈ 8,367 plants).
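The plant-count arithmetic can be reproduced directly from the two figures in the paragraph above:

```python
# One day of U.S. electricity (from the text) divided by one
# Crescent Dunes' expected daily average output.
us_one_day_twh = 11.12
per_plant_twh_per_day = 0.001329
plants_needed = us_one_day_twh / per_plant_twh_per_day
print(round(plants_needed))  # -> 8367
```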

Or maybe not: the $1 billion Crescent Dunes has gone out of business (Martin 2020).

CSP with thermal energy storage is seasonal, so it cannot balance variable power or contribute much power for half the year.

Without storage, solar CSP and solar PV do nothing to keep the grid stable or meet the peak morning and late afternoon demand.

And it appears to be dying out, with just one CSP developer left (Deign 2020).


***

Concentrated Solar Power not only needs lots of sunshine, but no humidity, clouds, dust, smog, or anything else that can scatter the sun’s rays.  Above 35 degrees latitude north or south, the sun’s rays have to pass through too much atmosphere to produce high levels of power, and these regions tend to be too cloudy as well.  Between 15 degrees north and south of the equator is also not ideal: it’s too cloudy, rainy, and humid.  That leaves very dry and hot regions at 15-35 degrees of latitude.  Only deserts are suitable, such as America’s Southwest, southern Africa, the Middle East, north-western India, northern Mexico, Peru, Chile, the western parts of China and Australia, the extreme south of Europe and Turkey, some central Asian countries, and places in Brazil and Argentina.

The problem with arid, dry regions is that CSP needs water for condenser cooling. Dry-cooling of steam turbines can be done but it costs more and lowers efficiency.

CSP doesn’t wean us totally off fossil fuels: nearly all plants use fossil fuel as back-up to remain dispatchable even when the solar resource is low, and to guarantee an alternative thermal source that can compensate for night thermal losses, prevent freezing, and assure a faster start-up in the early morning.

Even in ideal locations, CSP is highly seasonal:

CSP electric production: seasonal low in January, high in May

The average CSP capacity factor in the United States in December 2014 was 5.5%, while in August it was 25% (EIA. 2015. Table 6.7.B. Capacity Factors for Utility Scale Generators Not Primarily Using Fossil Fuels, January 2008-November 2014. U.S. Energy Information Administration).
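Applying those seasonal capacity factors to the fleet capacity quoted in the preface gives a sense of the swing (illustrative only: the 1,861 MW figure is today's fleet, while the capacity factors are from 2014):

```python
# Monthly generation implied by EIA's seasonal capacity factors,
# scaled to the 1,861 MW U.S. CSP fleet from the preface.
fleet_mw = 1861
hours_in_month = 31 * 24           # December and August both have 31 days
gwh_dec = fleet_mw * 0.055 * hours_in_month / 1000
gwh_aug = fleet_mw * 0.25 * hours_in_month / 1000
print(f"December: {gwh_dec:.0f} GWh, August: {gwh_aug:.0f} GWh")
print(f"seasonal swing: {gwh_aug / gwh_dec:.1f}x")  # ~4.5x
```

Roughly 76 GWh in December versus 346 GWh in August: a 4.5-fold seasonal swing that no few-hour battery or molten-salt tank can bridge.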

This means that CSP requires seasonal storage, since it provides almost nothing in winter. Yet CSP with thermal energy storage (TES) IS one of the few ways even a few hours of energy storage can be accomplished, since there’s very limited pumped hydro storage, compressed air energy storage, and battery storage.

“Averages” are irrelevant.  The seasonal nature of CSP with thermal storage makes balancing variable renewables and year-round power on a national grid — or even within the Southwest some days, weeks, or seasons — impossible without months of energy storage.   

Concentrating Solar Power Average Daily Solar Radiation Per Month, 1961]1990 (NREL 2011b)

Concentrating Solar Power Average Daily Solar Radiation Per Month, 1961-1990 (NREL 2011b)

There will be days or weeks when solar radiation is very low.  Below are some minimums and maximums for an East-West Axis Tracking Concentrator Daily solar radiation per month (NREL 2011b).

January minimum

January maximum

July minimum

July maximum

This means, for example, that central Nevada may reach 10 kWh/m2/day or higher during July, but January average values may be as low as 3 kWh/m2/day, or even zero on a given day as a result of cloud cover (NREL 2011a).

The best CSP is in just a few unpopulated, drought-stricken states (AZ, CA, NM, NV)(NREL 2012):

CSP NREL solar resource 2012

The Seasonal Nature of sunshine (International Energy Agency. 2011. Solar Energy Perspectives)

Seasonal storage for CSP plants would require stone storage. The volume of stone storage for a 100 MW system would be no less than 2 million m3, which is the size of a moderate gravel quarry, or a silo of 250 meter diameter and 67 meter high. This may not be out of proportion, in regions where available space is abundant, as suggested by the comparison with the solar collector field required for a CSP plant producing 100 MW on annual average.

Stones are poor heat conductors, so exchange surfaces should be maximized, for example, with packed beds loosely filled with small particles. One option is then to use gases as HTFs from and to the collector fields, and from and to heat exchangers where steam would be generated. Another option would be to use gas for heat exchanges with the collectors, and have water circulating in pipes in the storage facility, where steam would be generated. This second option would simplify the general plan of the plant, but heat transfers between rocks and pressurized fluids in thick pipes may be problematic.

Annual storage may emerge as a useful option, as generation of electricity by CSP plant in winter is significantly less than in other seasons in the range of latitudes – between 15° and 35° – where suitable areas for CSP generation are found. However, skeptics point out the need for much thicker insulation walls as a critical cost factor.

Square miles needed to produce 25,000 TWh/year with CSP

CSP is more efficient than PV per surface of collectors, but less efficient per land surface, so its 25,000 TWh of yearly production would require a mirror surface of 38,610 square miles (100,000 sq km) and a land surface of about 115,831 square miles (300,000 km2).
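As a rough check on these figures (assuming 8,760 hours per year), the scenario implies a very low average power density per unit of land:

```python
# Implied power density of the 25,000 TWh/year CSP scenario above.
annual_twh = 25_000
mirror_km2, land_km2 = 100_000, 300_000
avg_gw = annual_twh * 1000 / 8760             # TWh/yr -> average GW
w_per_m2_land = avg_gw * 1e9 / (land_km2 * 1e6)
print(f"average output: {avg_gw:.0f} GW")      # ~2,854 GW continuous
print(f"land power density: {w_per_m2_land:.1f} W/m2")  # ~9.5 W/m2
```

About 9.5 W per square meter of land on a year-round average, which is why the required land area is the size of a small country.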

Best locations for CSP

Tropical zones thus receive more radiation per surface area on yearly average than the places that are north of the Tropic of Cancer or south of the Tropic of Capricorn. Independent of atmospheric absorption, the amount of available irradiance thus declines, especially in winter, as latitudes increase. The average extraterrestrial irradiance on a horizontal plane depends on the latitude (Figure 2.4).

IEA 2011 figure 2.4 average yearly irradiance by latitude

Irradiance varies over the year at diverse latitudes – very much at high latitudes, especially beyond the polar circles, and very little in the tropics (Figure 2.5).  Seasonal variations are greater at higher latitudes:

IEA 2011 figure 2.5 total daily irradiance on a plane horizontal to earth surface

IEA 2011 figure 2.8 yearly profile mean daily solar radiation

Figure 2.8 The yearly profile of mean daily solar radiation for different locations around the world. The dark area represents direct horizontal irradiance, the light area diffuse horizontal irradiance. Their sum, global horizontal irradiance (GHI) is the black line. The blue line represents direct normal irradiance (DNI). Key point: Temperate and humid equatorial regions have more diffuse than direct solar radiation.

So for solar CSP, the blue line (DNI) is important and needs to be above 6 for a project to be commercially viable.  The South Pacific Islands have too much moisture, and northern Europe likewise, plus not enough irradiance.  Concentrating technologies can be deployed only where DNI largely dominates the solar radiation mix, i.e. in sunny countries where the skies are clear most of the time, over hot and arid or semi-arid regions of the globe. These are the ideal places for concentrating solar power (CSP) and concentrating photovoltaics (CPV).  PV can work fine in humid regions, but not CSP or CPV.

Formulations such as “a daily average of 5.5 hours of sunshine over the year” are casually used, however, to mean an average irradiance of 5.5 kWh/m2/d (2 000 kWh/m2/y), i.e. the energy that would have been received had the sun shone on average for 5.5 hours per day with an irradiance of 1,000 W/m2. In this case, one should preferably use “peak sunshine” or “peak sun hours” to avoid any confusion with the concept of sunshine duration.
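The "peak sun hours" conversion described above is simple arithmetic: daily irradiance in kWh/m2 divided by the 1,000 W/m2 (1 kW/m2) reference intensity:

```python
# "5.5 hours of sunshine" really means 5.5 kWh/m2/day of irradiance,
# i.e. 5.5 hours at the reference intensity of 1 kW/m2.
daily_kwh_per_m2 = 5.5
peak_sun_hours = daily_kwh_per_m2 / 1.0      # kWh/m2/d over 1 kW/m2
annual_kwh_per_m2 = daily_kwh_per_m2 * 365
print(f"{peak_sun_hours} peak sun hours per day")
print(f"{annual_kwh_per_m2} kWh/m2/yr")      # 2007.5, the text's ~2,000
```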

Ground data measurements for 1-2 years before building a CSP plant

Ground measurements are critically necessary for a reliable assessment of the solar energy potential of sites, especially if the technology is CSP or CPV. Satellite data can be used to complement short ground measurement periods of one or two years with a longer-term perspective. Ten years is the minimum necessary to have a real perspective on annual variability, and to get a sense of the actual average potential and the possible natural deviations from year to year. Satellite data should be used only when they have been benchmarked by ground measurements.

All parabolic trough plants currently in commercial operation rely on a synthetic oil as heat-transfer fluid (HTF) from collector pipes to heat exchangers, where water is preheated, evaporated and then superheated. The superheated steam runs a turbine, which drives a generator to produce electricity. After being cooled and condensed, the water returns to the heat exchangers. Parabolic troughs are the most mature of the CSP technologies and form the bulk of current commercial plants. Investments and operating costs have been dramatically reduced, and performance improved, since the first plants were built in the 1980s. For example, special trucks have been developed to facilitate the regular cleaning of the mirrors, which is necessary to keep performance high, using car-wash technology to save water.

Most first-generation plants have little or no thermal storage and rely on combustible fuel as a firm capacity back-up. CSP plants in Spain derive 12% to 15% of their annual electricity generation from burning natural gas. More than 60% of the Spanish plants already built or under construction, however, have significant thermal storage capacities, based on two-tank molten-salt systems, with a difference of temperatures between the hot tank and the cold one of about 100°C.

Salt mixtures usually solidify below 238°C and are kept above 290°C for better viscosity, however, so work is needed to reduce the pumping and heating expenses required to protect the field against solidifying [my comment: so fossil energy to keep the salts hot subtracts from efficiency]

Energy storage

Worldwide energy storage: The volume of electricity storage necessary to make the electricity available when needed would likely be somewhere between 25 TWh and 150 TWh – i.e. from 10 to 60 hours of storage. If 20 TWh are transferred from one hour to another every day, then the yearly amount of variable renewable electricity shifted daily would be roughly 7,300 TWh. Allowing for 20% losses, one may consider 9,125 TWh in and 7,300 TWh out per year.
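The annual round-trip figures quoted above follow directly from the daily shift and the 20% loss assumption:

```python
# Reproducing the IEA storage arithmetic quoted above.
shifted_per_day_twh = 20
per_year_out_twh = shifted_per_day_twh * 365      # delivered from storage
loss_fraction = 0.20                               # 20% round-trip losses
per_year_in_twh = per_year_out_twh / (1 - loss_fraction)
print(per_year_out_twh, round(per_year_in_twh))    # -> 7300 9125
```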

Studies examining storage requirements of full renewable electricity generation in the future have arrived at estimates of hundreds of GW for Europe (Heide, 2010), and more than 1,000 GW for the United States (Fthenakis et al., 2009). Scaling-up such numbers to the world as a whole (except for the areas where STE/CSP suffices to provide dispatchable generation) would probably suggest the need for close to 5,000 GW to 6,000 GW storage capacities. Allowing for 3,000 GW gas plants of small capacity factor (i.e. operating only 1,000 hours per year) explains the large difference from the 2,500 GW of storage capacity needs estimated above. However, one must consider the role that large-scale electric transportation could possibly play in dampening variability before considering options for large-scale electricity storage.

V2G possibilities certainly need to be further explored. They do entail costs, however, as battery lifetimes depend on the number, speeds and depths of charges and discharges, although to different extents with different battery technologies. Car owners or battery-leasing companies will not offer V2G free to grid operators, not least because it reduces the lifetime of batteries. Electric batteries are about one order of magnitude more expensive than other options available for large-scale storage, such as pumped-hydro power and compressed air electricity storage.

IEA 2014. Technology Roadmap. Solar Thermal Electricity. International Energy Agency

Global horizontal irradiance (GHI) is a measure of the density of the available solar resource per unit area on a plane horizontal to the earth’s surface. Global normal irradiance (GNI) and direct normal irradiance (DNI) are measured on surfaces “normal” (i.e., perpendicular) to the direct sunbeam. GNI is relevant for two-axis, sun-tracking, “1-sun” (i.e., non-concentrating) PV devices.

DNI is the only relevant metric for devices that use lenses or mirrors to concentrate the sun’s rays on smaller receiving surfaces, whether concentrating photovoltaics (CPV) or CSP generating STE. All places on earth receive 4,380 daylight hours per year — i.e., half the total duration of a year – but different areas receive different yearly average amounts of energy from the sun.

When the sun is lower in the sky, its energy is spread over a larger area and energy is also lost when passing through the atmosphere, because of increased air mass; the solar energy received is therefore lower per unit horizontal surface area.

Inter-tropical areas should thus receive more radiation per land area on a yearly average than places north of the Tropic of Cancer or south of the Tropic of Capricorn.

However, atmospheric absorption characteristics affect the amount of this surface radiation significantly. In humid equatorial places, the atmosphere scatters the sun’s rays. DNI is much more affected by clouds and aerosols than global irradiance. The quality of DNI is more important for CSP plants than for concentrated photovoltaics (CPV), because the thermal losses of a CSP plant’s receiver and the parasitic consumption of the electric auxiliaries are essentially constant, regardless of the incoming solar flux. Below a certain level of daily DNI, the net output is null (Figure 2 above).

High DNI is found in hot and dry regions with reliably clear skies and low aerosol optical depths, which are typically in subtropical latitudes from 15° to 40° north or south. Closer to the equator, the atmosphere is usually too cloudy, especially during the rainy season. At higher latitudes, weather patterns also produce frequent cloudy conditions, and the sun’s rays must pass through more atmosphere mass to reach the power plant. DNI is also significantly higher at higher elevations, where absorption and scattering of sunlight due to aerosols can be much lower. Thus, the most favorable areas for CSP resource are in North Africa, southern Africa, the Middle East, north-western India, the south-western United States, northern Mexico, Peru, Chile, the western parts of China and Australia. Other areas that are suitable include the extreme south of Europe and Turkey, other southern US locations, central Asian countries, places in Brazil and Argentina, and some other parts of China.

Areas with sufficient direct irradiance for CSP development are usually arid and many lack water for condenser cooling (Box 1). Dry-cooling technologies for steam turbines are commercially available, so water scarcity is not an insurmountable barrier, but it leads to an efficiency penalty and an additional cost. Wet-dry hybrid cooling can significantly improve performance, with water consumption limited to heat waves.

Almost all existing CSP plants use some fossil fuel as back-up, to remain dispatchable even when the solar resource is low and to guarantee an alternative thermal source that can compensate night thermal losses, prevent freezing and assure a faster start-up in the early morning.

Investment costs for CSP plants have remained high, from USD 4,000/kW to USD 9,000/kW, depending on the solar resource and the capacity factor, which also depends on the size of the storage system and the size of the solar field, as reflected by the solar multiple.

Costs were expected to decrease as CSP deployment progressed, following a learning rate of 10% (i.e., 10% cost reduction for each cumulative capacity doubling). This decrease has taken a long time to materialize, however, because market opportunities for CSP plants have diminished and the cost of materials has increased, particularly in the most mature parts of the plants, the power block and balance of plant (BOP). Other causes are the dominance of a single technology (trough plants with oil as heat-transfer fluid).
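The 10% learning rate can be written as a simple formula. The USD 6,000/kW starting point and the 8-fold build-out below are illustrative assumptions, not figures from the report:

```python
import math

def learned_cost(initial_cost, capacity_ratio, learning_rate=0.10):
    """Cost after cumulative installed capacity grows capacity_ratio-fold,
    falling learning_rate per doubling."""
    doublings = math.log2(capacity_ratio)
    return initial_cost * (1 - learning_rate) ** doublings

# e.g. an assumed 8-fold build-out (3 doublings) from USD 6,000/kW:
print(round(learned_cost(6000, 8)))  # 6000 * 0.9**3 -> 4374
```

Even three full doublings of worldwide capacity would only shave about 27% off costs at this learning rate, which is why the expected declines "have taken a long time to materialize."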

The few larger plants that have been or are being built elsewhere are either the first of their kind in the world, with large development costs and technology risks (e.g., in the United States).

Levelized cost of electricity (LCOE) of STE varies widely with the location, technology, design and intended use of plants. The location determines the quantity and quality of the solar resource (Box 1), atmospheric attenuation at ground level, variations in temperature that affect efficiency (e.g., cold at night increases self-consumption, warmth during daylight reduces heat losses but also thermodynamic cycle efficiency) and the availability of cooling water. A plant designed for peak or mid-peak generation with a large turbine for a relatively small solar field will generate electricity at a higher cost than a plant designed for base load generation with a large solar field for a relatively small turbine. LCOE, while providing useful information, does not represent the entire economic balance of a CSP plant, which depends on the value of the generated STE.

A recent CSP plant in the United States secured a PPA at USD 135/MWh, but taking the investment tax credit into account, the actual remuneration is about USD 190/MWh.  The US DoE’s SunShot program expects more rapid cost reductions based on current trends, and even aims for an LCOE of USD 60/MWh as soon as 2020 [dream on…]
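The gap between the PPA price and the actual remuneration is consistent with the 30% U.S. federal investment tax credit. The 30% rate is our assumption here; the text does not state it:

```python
# If a 30% investment tax credit (assumed) effectively covers part of the
# capital cost, the PPA price understates what the generator really earns.
ppa_usd_per_mwh = 135
itc = 0.30                              # assumed ITC rate
effective = ppa_usd_per_mwh / (1 - itc)
print(round(effective))                 # -> 193, close to the quoted ~190
```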

Barriers encountered, overcome or outstanding

Developers have encountered several barriers to establishing CSP plants. These include insufficiently accurate DNI data; inaccurate environmental data; policy uncertainty; difficulties in securing land, water and connections; permitting issues; and expensive financing, leading to difficult financial closure. Inaccurate DNI data can lead to significant design errors. Ground-level atmospheric turbidity, dirt, sand storms and other weather characteristics or events may seriously interfere with CSP technologies. Permits for plants have been challenged in courts because of concerns about their effects on wildlife, biodiversity and water use. Some countries prohibit the large-scale use as HTF of synthetic oil or some molten salts, or both.

The most significant barrier is the large up-front investment required. The most mature technology, PT with oil as HTF, with over 200 cumulative years of running, may have limited room for further cost reductions, as the maximum temperature of the HTF limits the possible increase in efficiency and imposes high costs to thermal storage systems. Other technologies offer greater prospects for cost reductions but are less mature and therefore more difficult to obtain finance for. In countries with no or little experience of the technology, financing circles fear risks specific to each country.

In the United States, the loan guarantee program of the DoE has played a key role in overcoming financing difficulties and facilitating technology innovation.

Medium-term outlook

There are no new CSP projects in Spain, as incentives have been cut.

Plants in the approval process or ready to start construction represent 20 MW in France and 115 MW in Italy, while other projects are under development. The Italian environment legislation does not allow for extensive use of oil in trough plants, limiting the technology options to more innovative designs, such as DSG or molten salts as HTF. Projects that would produce several gigawatts are still under consideration or development in the United States, although not all will succeed in obtaining the required permits, PPAs, connections, and financing.

Current average LCOE is high because most existing plants have been built in Spain, which has relatively weak DNI. [my comment: if there is money for energy projects it’s spent regardless of how expensive and foolish – look at all the fracked natural gas by companies deeply in debt, the massive building of solar PV and CSP in Spain, ethanol subsidies, and all kinds of wasteful projects (and research) across the board.  I think this is why there’s no funding for EROI research — nobody wants to know!  Plus foolish projects provide jobs, it’s more important for democrats to provide “green” jobs than whether or not it’s a good idea. And why not, as long as there is oil we can build cities like Las Vegas in the desert that will be abandoned as soon as 2024 or whenever Lake Mead dries up, parking lots, cheap ugly housing projects, and so on]

As deployment intensifies in the southwestern United States and spreads to North Africa, South Africa, Chile, Australia and the Middle East, better resources will be used, improving performance.

Table 4: Projections of LCOE for new-built CSP plants with storage in the hi-Ren Scenario

The possible role of small-scale CSP devices – from 100 kW to a few MW – off-grid or serving in mini-grids, has not been included in the ETP model. There is too little industrial experience of such systems to make informed cost assumptions, whether the systems are based on PT, LFR, parabolic dishes, Scheffler dishes or small towers, using organic Rankine cycle turbines, micro gas-turbines or various reciprocating engines. If they allow thermal storage or fuel backup, small-scale CSP systems have to compete against PV with battery storage or fuel backup. They may find a role, although the fact that CSP technology seems to benefit more than PV from economies of scale suggests that small-scale CSP systems may face a greater competitive challenge than large-scale ones. Finding local skills for maintenance may also be challenging in remote, off-grid areas.

Storage is a particular challenge in CSP plants that use DSG. Because water evaporation is isothermal, unlike sensible heat addition or removal in the salt, a round-trip storage cycle would result in severe steam temperature and pressure drops, thereby destroying the efficiency of the thermodynamic cycle in discharge mode. Storing latent heat of saturated steam in pressurised vessels is expensive and provides no scale effect on cost.

One option would use three-stage storage devices that preheat the water, evaporate the water and superheat the steam. Stages 1 and 3 would be sensible heat storage, in which the temperature of the storage medium changes. Stage 2 would best be latent heat storage, in which the state of the storage medium changes, using some phase change material. Another option could be to use liquid phase-change materials.

The growing relevance of thermal storage in the context of intense competition from cheap PV favors using molten salts as both the heat transfer fluid and the storage medium (termed “direct storage”). If DSG spares heat exchangers for steam generation, the use of molten salts as HTF spares heat exchangers for storage. Salts are less costly than oil. Using salts allows raising the temperature and pressure of the steam, from 380°C to 530-550°C and from 10 to 12-15 megapascals (MPa) in comparison with oil as HTF, increasing the efficiency of the power block from 39% to 44-45% (Lenzen, 2014). Thanks to higher temperature differences between hot and cold salts (currently used salt mixtures usually solidify below 238°C), plants using molten salts as HTF need three times less salt than trough plants using oil as HTF, for the same storage capacity. This lowers the storage system costs, which represent about 12% of the overall plant cost for seven-hour storage of a trough plant. Also, the “return efficiency” of thermal storage, at about 93% with indirect storage (in which heat exchangers reduce the working temperature), is increased to 98% with direct storage. Finally, another advantage of molten salts as HTF over steam is that heat transfer can be carried out at low pressure with thin-wall solar receivers, which are cheaper and more effective.

Overall, the substitution of molten salts for oil in CSP would allow a 30% LCOE reduction, according to Schott, the lead manufacturer of solar receiver tubes (Lenzen, 2014). Several companies are developing the use of molten salts as HTF in linear systems, and have built or are building experimental or demonstration devices. One challenge is to reduce the expense required to keep the salts warm enough (usually above 290°C) for better viscosity in long tubes at all times and protect the field against freezing.
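The "three times less salt" claim follows from sensible-heat arithmetic: stored energy is mass times specific heat times temperature spread, so for the same energy the salt mass scales inversely with the spread. The 290°C cold and ~550°C hot temperatures are from the text; the ~100°C indirect-storage spread is the figure given earlier for the Spanish two-tank systems:

```python
# E = m * cp * dT, so for fixed E and cp, mass ~ 1/dT.
cold_c, hot_c = 290, 550        # direct molten-salt storage (from the text)
dt_direct = hot_c - cold_c      # 260 C spread
dt_indirect = 100               # two-tank storage behind oil HTF (from the text)
mass_ratio = dt_direct / dt_indirect
print(f"salt mass reduced ~{mass_ratio:.1f}x")  # ~2.6x, i.e. roughly 3x
```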

Apart from the fundamental choice between DSG and molten salts for HTF, towers currently also offer a great diversity of designs – and present various trade-offs. The first relates to the size (and number) of heliostats that reflect the sunlight onto the receivers atop the tower. Heliostats vary greatly in size, from about 1 m2 to 160 m2. The small ones can be flat and offer little surface to winds. The larger ones need several mirrors that are curved to send a focused image of the sun to the central receiver, and need strong support structures and motors to resist winds. For similar collected energy ranges, however, small heliostats need to be grouped by the thousand, multiplying the number of motors and connections. Manufacturers and experts still have divided views about the optimum size. Heliostats need to be distanced from one another to reduce losses arising when a heliostat intercepts part of the flux received (“shading”) or reflected (“blocking”) by another. While linear systems require flat land areas, central receiver systems may accommodate some slope, or even benefit from it as it could reduce blocking and shadowing, and allow increasing heliostat density. Algorithmic field optimization may help reduce environmental impacts and required ground leveling work while maximizing output (Gilon, 2014).

In low latitudes heliostat fields tend to be circular and surround the central receiver, while in higher latitudes they tend to be more concentrated to the polar side of the tower. Larger fields tend to be more circular to limit the maximum receiver heliostat distance and minimise atmospheric attenuation.

Proper aiming strategy must be ensured by the heliostat field’s control system in order to optimise the solar flux map on the receiver, thereby allowing the highest solar input while avoiding any local overheating of the receiver tubes. This is more difficult with DSG receivers. The heat flux on the different types of solar panel of a DSG receiver differs significantly: superheater panels (poorly cooled by superheated steam) receive a much lower flux than evaporator and preheater panels. Another important design choice relates to the number of towers for one turbine. Heliostats that are in the last rows far from the tower need to be very precisely pointed towards it, and lose efficiency as the light must make a long trip near ground level. They also have greater geometrical (“cosine”) optical losses.

At over 1 million m2, the solar field associated with the 110 MW tower built by SolarReserve with 10-hour storage at Crescent Dunes (Nevada, United States) is perhaps close to the maximum efficient size.

The additional costs of building several towers may be made up for by the greater optical and thermal efficiencies of multitower design (Wieghardt et al., 2014). However, the optimal field size and number of towers may depend on the atmospheric turbidity of the site considered, which varies greatly among areas suitable for CSP plants. The Californian company eSolar proposes 100 MW molten salt power plants based on 14 solar fields and 14 receivers on top of monopole towers (similar to current large wind turbine masts) for one central dry-cooled power block with 13-hour thermal storage and 75% capacity factor (Tyner, 2013).

As the share of variable energy increases, base load plants, even if technically flexible (which all are not) will become less economically efficient as their utilization rate diminishes. At the same time, more peaking and mid-merit plants become necessary. Below a certain load factor – about 2,000 full load hours – open-cycle gas turbines become a better economic choice than combined-cycle plants, but they are less energy-efficient as they generate large amounts of waste heat.

Open-cycle gas turbines could be integrated with a CSP plant with storage, however, of which the steam turbine is not being used with a very high capacity factor. When the sun does not shine, the otherwise wasted heat could be collected to a large extent in the hot tank of a two-tank molten-salt system. This energy could afterwards be directed to the steam turbine to deliver electricity whenever requested. If more power is needed when the sun shines sufficiently to run the steam turbine by itself, the heat from the gas turbine could be directed to the thermal storage. In both cases, a large part of the waste heat will be used. This concept differs from the existing ISCC in which solar only provides a complement, as the presence of thermal storage allows for a complete reversal of the proportion of solar and gas, which remains a backup, though a more efficient one (Crespo, 2014). The Hysol project, funded by the European Union’s Seventh Program for research, technological development and demonstration, aims to demonstrate the viability of the concept. Similarly, in areas with both high wind penetration and CSP plants, some thermal storage, which is equipped with electric heaters for security reasons, could be used in winter to reduce curtailment from excess wind power.

Molten salts decompose at higher temperatures, while corrosion limits the temperatures of steam turbines. Higher temperatures and efficiencies could rest on the use of liquid fluoride salts as HTFs, at temperatures up to 700°C to 850°C.

There are a number of potential pathways to solar fuels. The straightforward thermolysis of water is the most difficult, as it requires temperatures above 2,200°C and may produce an explosive mixture of hydrogen and oxygen. The division of the single-step water-splitting reaction into a number of sub-reactions opens up the field of so-called thermochemical cycles for H2 production. The necessary reaction temperature can be decreased, even below 1,000°C, resulting in intermediate solid products like metals (e.g., aluminium, magnesium, or zinc), metal oxides, metal halides or sulphur oxides. The different reaction steps can be separated in time and place, offering possibilities for long-term storage of the solids and their use in transportation. These thermochemical cycles are also able to split CO2 into CO and oxygen. If mixtures of water and CO2 are used, even synthesis gas (mainly H2 and CO) can be produced, which can be further processed to synfuels, for example by the Fischer-Tropsch process.
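As an illustration of how the single-step splitting reaction divides into sub-reactions, consider the much-studied two-step zinc/zinc-oxide cycle (zinc being one of the intermediate metals mentioned above; the temperatures are typical literature values, not figures from this roadmap):

ZnO → Zn + ½ O2 (endothermic solar dissociation step, roughly 1,700–2,000°C)
Zn + H2O → ZnO + H2 (exothermic hydrolysis step, below about 700°C)

The solid zinc from the first step can be stored or transported, and the hydrogen-producing second step run at another time and place – the separation in time and place described above. Cycles based on sulphur compounds or metal halides operate at lower peak temperatures.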

Concentrated solar radiation can also be used to upgrade carbonaceous materials. The most developed process is the steam reforming of methane to produce synthesis gas. Sources are either natural gas or biogas. Methane can also be cracked into hydrogen and carbon, thus producing a gaseous and a solid product. However, the required process temperature is extremely high and a homogeneous carbon product is unlikely to be produced because of the intermittent solar radiation conditions. Additionally, there is a discrepancy between the huge demand for hydrogen and the low demand for high-value carbon, such as carbon black or advanced carbon nano-tubes.

Hydrogen produced in concentrating solar chemical plants could be blended with natural gas and thus used in today’s energy system. Town gas, which prevailed before natural gas became widespread, included up to 60% hydrogen by volume, or about 20% by energy content. This blend could be used for various purposes in industry, households and transportation, reducing emissions of CO2 and nitrogen oxides. Gas turbines in integrated gasification combined cycle (IGCC) power plants can burn gas mixes of up to 90% hydrogen by volume. Many existing pipelines could, with some adaptation, transport such a blend from sunny places to large consumption centres (e.g. from North Africa to Europe).

Solar-produced hydrogen could also find niche markets today by replacing hydrogen made from steam reforming of natural gas in its current uses, such as manufacturing fertilizers and removing sulfur from petroleum products. Using heat from concentrated sunlight to decompose hydrogen sulfide into hydrogen and sulfur could save significant amounts of still gas in refineries for other purposes. Coal could be used together with methane as feedstock to deliver dimethyl ether (DME), after solar-assisted steam reforming of natural gas, coal gasification under oxygen, and two-step water splitting. DME could be used as a liquid fuel; its combustion would entail CO2 emissions similar to those from burning conventional petroleum products, but significantly less than the life-cycle emissions of other coal-to-liquid fuels.

Besides solar fuels, CSP technology could find a great variety of uses in providing high-temperature process heat or steam, such as for enhanced oil recovery and mining applications (where CSP is already in use), smelting of aluminium and other metals, and in industries such as food and beverages, textiles and pharmaceuticals. Various forms of cogeneration with STE can also be considered. For example, sugar plants require high-temperature steam in spring, when the solar resource is maximal but electricity demand minimal. Solar fields providing steam for sugar plants in spring could run a turbine and generate STE for the rest of the year.

STE is not broadly competitive today, and will not become so until it benefits from strong and stable frameworks, and appropriate support to minimise investors’ risks and reduce capital costs.

As with any large industrial project, STE projects require several permits, often issued by many different government jurisdictions at various geographical levels, as well as by many branches or agencies of each – local, regional, state, federal or national. Each may protect different interests, all of them legitimate.

Future values of PV and STE in California

Researchers at the National Renewable Energy Laboratory (NREL) in the United States have studied the future total values (operational value plus capacity value) of STE with storage and of PV plants in California in two scenarios: one with 33% renewables in the mix (the renewable portfolio standard by end 2020), including about 11% PV; another with 40% renewables (under consideration by California’s governor), including about 14% PV. In both cases over 1 GW of electricity storage is available on the grid. The main results indicate that at 33% renewable penetration, the bulk of the gap in favour of STE comes from its greater capacity value, which avoids the costs of building additional thermal generators to meet demand (Table 6). At 40% renewable penetration, the value of STE increases slightly, but the value of PV drops significantly, mostly reflecting the drop of its own capacity value (Jorgenson et al., 2014). For investment decisions and planning, system values are as important as LCOE.

Table 6: Total value in two scenarios of renewables penetration in California (values in USD/MWh)

The built-in storage capability of CSP is cheaper and more effective (with over 95% round-trip efficiency, versus about 80% for most competing technologies) than battery storage and pumped-hydropower storage. Thermal storage allows separating the collection of heat (during the day) from the generation of electricity (at will). This capability has immediate value in countries with a significant increase in power demand when the sun sets, driven in part by lighting requirements. In many such countries, the electricity mix, often dominated by coal during daytime, becomes dominated after sunset by peaking technologies, often based on natural gas or oil products.

The greatest possible expansion of PV, which implies its dominance over all other sources during a significant part of the day, creates difficult technical and economic challenges to low-carbon base-load technologies such as nuclear power and fossil fuel with CCS. Natural gas is more suited to daily “stop-and-go” with rapid ramps up and down, and is more economical for mid-merit operations (between about 2,000 and 4,000 full-load hours).

Changes in the rules applicable to investments already made or in process can have long-lasting deterrent effects if they significantly modify the prospects for economic returns. This is precisely what has happened over the last few years in Spain, where a series of measures reduced the return on investment of existing CSP plants. The high risk of losing investors’ confidence may have been deemed acceptable, as these measures followed the decision to stop CSP deployment. However, it may have detrimental effects on future investments in CSP plants; on other investments in the energy sector; on investments in any other sector that requires government involvement; and on investments in other countries.

Financing. CSP plants, like most renewable energy plants, are very capital-intensive, requiring large upfront expenditures. Financing is thus difficult, especially in new, immature markets, and for new, emerging sub-technologies. In the United States, some private investors have large amounts of money available and might be willing to invest in clean energy for a variety of reasons; but even in this context the risks may have appeared too high for large, innovative CSP projects – costing around USD 1 billion each – to materialize without the loan guarantee program of the US DoE. This program has been essential to the renaissance of CSP in the United States, allowing projects to access low-cost debt from a US government bank and facilitating financial closure of large projects at an acceptable WACC.

In other countries, such as India, Morocco and South Africa, public low-cost lending has been essential for jump-starting the deployment of CSP. In India and South Africa, private banks would not have provided capital for the very long maturities involved. In Morocco, the presence of a government agency as equity partner significantly reduced the perception of policy risks among other partners. In Morocco and South Africa, international finance institutions provided concessional grants that reduced the overall costs of large CSP projects.

Subsidizing renewable energy projects through long-term and/or low-cost debt-related policies could reduce total subsidies compared with per-kWh support. However, this transfers the burden of high capital intensity to governments, which may not have enough money at hand, and this carries a risk of slowing deployment. Interest subsidies and/or accelerated depreciation have much higher one-year budget efficiency.

Research is under way to test and evaluate methods of measuring DNI accurately using lower-cost instrumentation, and for producing long-term, high-quality DNI data sets by merging long-term, satellite-derived data of moderate accuracy with high-quality, highly accurate ground-based measurements that may only cover a year or less. This research also includes important studies on sunshape and circumsolar radiation, and how these factor into both DNI measurements and STE system performance. In addition, satellite-based methods for estimating DNI are constantly improving and represent a reliable and viable way of choosing the best sites for STE plants. Furthermore, the ability to accurately forecast DNI levels – from a few hours ahead to a few days ahead – is constantly improving, and will be an important tool for utilities operating STE systems.

Abbreviations:
ARRA – American Recovery and Reinvestment Act
CCS – carbon capture and storage
CO2 – carbon dioxide
CPI – Climate Policy Initiative
CPV – concentrating photovoltaics
CRS – central receiver system
CSF – concentrated solar fuels
CSP – concentrating solar power
CTF – Clean Technology Fund
DC – direct current
DII – Desertec Industry Initiative
DLR – Deutsches Zentrum für Luft- und Raumfahrt (German Aerospace Center)
DME – dimethyl ether
DNI – direct normal irradiance
DSG – direct steam generation
EDF – Électricité de France
EIB – European Investment Bank
EPC – engineering, procurement and construction
ETP – Energy Technology Perspectives
EU – European Union
EUR – euro
FiP – feed-in premium
FiT – feed-in tariff
G8 – Group of Eight
GHG – greenhouse gas(es)
GHI – global horizontal irradiance
GNI – global normal irradiance
Gt – gigatonnes
GW – gigawatt (1 million kW)
GWh – gigawatt hour (1 million kWh)
Hi-Ren – high renewables (scenario)
HTF – heat transfer fluid
HVDC – high-voltage direct current
IA – implementing agreement
IEA – International Energy Agency
IFI – international financial institution
IGCC – integrated gasification combined cycle
IRENA – International Renewable Energy Agency
ISCC – integrated solar combined-cycle (plant)
JRC – Joint Research Centre
kW – kilowatt
kWh – kilowatt hour
LCOE – levelized cost of electricity
LFR – linear Fresnel reflectors
MW – megawatt (1 thousand kW)
MWe – megawatt electrical
MWh – megawatt hour (1 thousand kWh)
MWth – megawatt thermal
NGO – non-governmental organisation
NREAP – national renewable energy action plan
NREL – National Renewable Energy Laboratory (United States)
OECD – Organization for Economic Co-operation and Development
O&M – operation and maintenance
PPA – power purchase agreement
PT – parabolic trough
TWh – terawatt hour (1 billion kWh)

IEA (2014a), Technology Roadmap: Solar Photovoltaic Energy, 2014 Edition, OECD/IEA, Paris.

IEA (2014b), Energy Technology Perspectives 2014, OECD/IEA, Paris.

IEA (2014c), Technology Roadmap: Energy Storage, OECD/IEA, Paris.

IEA (2014d), Medium-Term Renewable Energy Market Report, OECD/IEA, Paris.

IEA (2014e), The Power of Transformation: Wind, Sun and the Economics of Flexible Power Systems, OECD/IEA, Paris.

IEA (2011), Solar Energy Perspectives, Renewable Energy Technologies, OECD/IEA, Paris.

IEA (2010), Technology Roadmap: Concentrating Solar Power, OECD/IEA, Paris.

Jorgenson, J., P. Denholm and M. Mehos (2014), Estimating the Value of Utility-Scale Solar Technologies in California under a 40% Renewable Portfolio Standard, NREL/TP-6A20-61695, May.

RED electrica de España (REE) (2014), The Spanish Electricity System – Preliminary Report 2013, RED, Madrid, Spain, http://www.ree.es/sites/default/files/downloadable/preliminary_report_2013.pdf.

REFERENCES

Deign, J. 2020. America’s Concentrated Solar Power Companies Have All but Disappeared. greentechmedia.com

DOE/NETL. August 28, 2012. Role of Alternative Energy Sources: Solar Thermal Technology Assessment. Department of Energy, National Energy Technology Laboratory.

Martin, C., et al. 2020. A $1 Billion Solar Plant Was Obsolete Before It Ever Went Online. SolarReserve’s Crescent Dunes received backing from Citigroup and the Obama Energy Department but couldn’t keep pace with technological advances. Bloomberg.

NREL. 2011a. Solar Radiation Data Manual for Flat Plate and Concentrating Collectors. National Renewable Energy Laboratory.

NREL. 2011b. U.S. Solar Radiation Resource Maps: Atlas for the Solar Radiation Data Manual for Flat Plate and Concentrating Collectors. National Renewable Energy Laboratory. Maps: http://www.nrel.gov/gis/solar.html

NREL. 2012. Concentrating solar resource of the united states. National Renewable Energy Laboratory.


Oil consumption of containerships

Preface. Since 90% of international goods move by ship, I was curious how much fuel they burn. It’s a lot: the very large container ship CMA CGM Benjamin Franklin, which can carry 18,000 20-foot containers, holds approximately 4.5 million gallons of fuel oil, which takes up 16,000 cubic meters (FW 2020) – as much fuel as 300,000 cars with 15-gallon tanks.

But these ships can carry 200,000 tons of goods, so they end up being more energy efficient than 300,000 cars (Stopford 2010, UNCTAD 2012).
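The preface arithmetic can be checked in a line or two of Python, using only the figures quoted above:

```python
# Figures from the text: 4.5 million gallons of bunker fuel aboard,
# versus a passenger car's 15-gallon fuel tank.
ship_fuel_gallons = 4_500_000
car_tank_gallons = 15

print(ship_fuel_gallons // car_tank_gallons)  # 300000 car tanks' worth
```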

Pound for pound and mile for mile, today’s ships are the most energy-efficient way to move freight. Table 1 shows the energy efficiency of different modes of transport by kilojoules of energy used to carry one ton of cargo a kilometer (KJ/tkm). As you can see, water and rail are literally tons and tons—orders of magnitude—more energy efficient than trucks and air transportation.

Table 1. Energy efficiency of transportation in kilojoules per ton-kilometer (Smil 2013, Ashby 2015)

Transportation mode ………………….. kJ/tkm (A)
Oil tankers and bulk cargo ships ….. 50
Smaller cargo ships ………………….. 100–150
Trains …………………………………… 250–600
Barge ……………………………………. 360
Trucks ………………………………….. 2,000–4,000
Air freight ……………………………… 30,000
Helicopter ……………………………… 55,000

(A) Kilojoules of energy used to carry one ton of cargo one kilometer.
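To make the orders-of-magnitude gap concrete, here is a small sketch applying the table’s figures to a hypothetical 10,000-ton shipment moved 1,000 km (midpoints are assumed where the table gives a range):

```python
# kJ per ton-kilometer, from Table 1 (midpoints of ranges are assumptions)
KJ_PER_TKM = {
    "oil tanker / bulk ship": 50,
    "train": 425,
    "truck": 3_000,
    "air freight": 30_000,
}

tons, km = 10_000, 1_000
for mode, kj in KJ_PER_TKM.items():
    gigajoules = kj * tons * km / 1e6  # kJ -> GJ
    print(f"{mode}: {gigajoules:,.0f} GJ")
```

The ship needs 500 GJ for the job; air freight needs 300,000 GJ, six hundred times more.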

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Further details

Fuel consumption by a container ship is mostly a function of ship size and cruising speed; above about 14 knots it rises steeply, roughly as a power function of speed. So an 8,000 TEU container ship consumes 225 tons of bunker fuel per day at 24 knots, but at 21 knots consumption drops to 150 tons per day, a 33% decline. While shipping lines would prefer to burn the least fuel by adopting lower speeds, this advantage must be weighed against longer shipping times, as well as the need to assign more ships to a pendulum service to maintain the same port call frequency. The main ship speed classes are (Notteboom 2009):

  • Normal (20-25 knots; 37.0 – 46.3 km/hr). Represents the optimal cruising speed a containership and its engine have been designed to travel at. It also reflects the hydrodynamic limits of the hull to perform within acceptable fuel consumption levels. Most containerships are designed to travel at speeds around 24 knots.
  • Slow steaming (18-20 knots; 33.3 – 37.0 km/hr). Running ship engines below capacity to save fuel, but at the expense of additional travel time, particularly over long distances (a compounding effect). This is likely to become the dominant operational speed: more than 50% of global container shipping capacity was operating under such conditions as of 2011.
  • Extra slow steaming (15-18 knots; 27.8 – 33.3 km/hr). Also known as super slow steaming or economical speed. A substantial decline in speed for the purpose of achieving a minimal level of fuel consumption while still maintaining a commercial service. Can be applied on specific short distance routes.
  • Minimal cost (12-15 knots; 22.2 – 27.8 km/hr). The lowest speed technically possible, since lower speeds do not lead to any significant additional fuel economy. The level of service is however commercially unacceptable, so it is unlikely that maritime shipping companies would adopt such speeds.

In an environment of higher fossil fuel prices, maritime shipping companies are opting for slow steaming to cut costs. The ongoing practice of slow steaming is likely to have an impact on supply chain management, maritime routes and the use of transshipment hubs.
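The speed–fuel figures quoted above imply a near-cubic relation. A short sketch fits a power law to the two data points given for the 8,000 TEU ship (the power-law model is an assumption; the two data points are from the text):

```python
import math

# Data points from the text: tons of bunker fuel per day at a given speed
v1, f1 = 24.0, 225.0   # knots, tons/day
v2, f2 = 21.0, 150.0

# Fit F = k * v**n through both points
n = math.log(f1 / f2) / math.log(v1 / v2)   # fitted exponent
k = f1 / v1 ** n

def fuel_per_day(knots):
    """Estimated daily bunker fuel burn (tons) at a given cruising speed."""
    return k * knots ** n

print(round(n, 2))               # ~3.04: close to the classic "cube rule"
print(round(fuel_per_day(24)))   # reproduces the quoted 225 tons/day
print(round(fuel_per_day(18)))   # extra slow steaming: ~94 tons/day
```

The fitted exponent comes out very close to 3, the cube rule of hull resistance, which is why a modest speed cut yields such large fuel savings.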

REFERENCES

Ashby, M.F. 2015. Materials and sustainable development, table A.14. Oxford: Butterworth-Heinemann.

FW. 2020. How many gallons of fuel does a container ship carry? freightwaves.com

Notteboom, T., et al. 2009. Fuel surcharge practices of container shipping lines: Is it about cost recovery or revenue making? Proceedings of the 2009 International Association of Maritime Economists (IAME) Conference, June, Copenhagen, Denmark.

Smil, V. 2013. Prime Movers of Globalization: The History and Impact of Diesel Engines and Gas Turbines. Cambridge: The MIT Press.

Stopford, M. 2010. How shipping has changed the world and the social impact of shipping. Global Maritime Environmental Congress.

UNCTAD. 2012. Review of maritime transport. United Nations.


Life before Cars: When Pedestrians Ruled the Streets


Preface. The past is our future after fossil fuels, though minus the horses for a while, since before cars they required about a sixth of U.S. farmland for their feed. My grandfather, Francis J. Pettijohn, used to fondly reminisce about how quiet it was before combustion engines in his small town in Minnesota. In cities that wasn’t the case: the clatter of wagon wheels on cobblestones was excruciatingly loud.


***

Clive Thompson. December 2014. When Pedestrians Ruled the Streets. Smithsonian Magazine.

When you visit any city in America today, it’s a sea of cars, with pedestrians dodging between the speeding autos. It’s almost hard to imagine now, but in the late 1890s, the situation was completely reversed. Pedestrians dominated the roads, and cars were the rare, tentative interlopers. Horse-drawn carriages and streetcars existed, but they were comparatively slow.

So pedestrians ruled. “The streets were absolutely black with people,” as one observer described the view in the nation’s capital. People strolled to and fro down the center of the avenue, pausing to buy snacks from vendors. They’d chat with friends or even “manicure your nails,” as one chamber of commerce wryly noted. And when they stepped off a sidewalk, they did it anywhere they pleased.

“They’d stride right into the street, casting little more than a glance around them…anywhere and at any angle,” as Peter D. Norton, a historian and author of Fighting Traffic: The Dawn of the Motor Age in the American City, tells me. “Boys of 10, 12 or 14 would be selling newspapers, delivering telegrams and running errands.” For children, streets were playgrounds.

At the turn of the century, motor vehicles were handmade, expensive toys of the rich, and widely regarded as rare and dangerous. When the first electric car emerged in Britain in the 19th century, the speed limit was set at four miles an hour so a man could run ahead with a flag, warning citizens of the oncoming menace, notes Tom Vanderbilt, author of Traffic: Why We Drive the Way We Do (And What It Says About Us).

Things changed dramatically in 1908 when Henry Ford released the first Model T. Suddenly a car was affordable, and a fast one, too: The Model T could zoom up to 45 miles an hour. Middle-class families scooped them up, mostly in cities, and as they began to race through the streets, they ran headlong into pedestrians—with lethal results. By 1925, auto accidents accounted for two-thirds of the entire death toll in cities with populations over 25,000.

An outcry arose, aimed squarely at drivers. The public regarded them as murderers. Walking in the streets? That was normal. Driving? Now that was aberrant—a crazy new form of selfish behavior.

“Nation Roused Against Motor Killings” read the headline of a typical New York Times story, decrying “the homicidal orgy of the motor car.” The editorial went on to quote a New York City traffic court magistrate, Bruce Cobb, who exhorted, “The slaughter cannot go on. The mangling and crushing cannot continue.” Editorial cartoons routinely showed a car piloted by the grim reaper, mowing down innocents.

When Milwaukee held a “safety week” poster competition, citizens sent in lurid designs of car accident victims. The winner was a drawing of a horrified woman holding the bloody corpse of her child. Children killed while playing in the streets were particularly mourned. They constituted one-third of all traffic deaths in 1925; half of them were killed on their home blocks. During New York’s 1922 “safety week” event, 10,000 children marched in the streets, 1,054 of them in a separate group symbolizing the number killed in accidents the previous year.

Drivers wrote their own letters to newspapers, pleading to be understood. “We are not a bunch of murderers and cutthroats,” one said. Yet they were indeed at the center of a fight that, clearly, could only have one winner. To whom should the streets belong?

***

By the early 1920s, anti-car sentiment was so high that carmakers and driver associations—who called themselves “motordom”—feared they would permanently lose the public.

You could see the damage in car sales, which slumped by 12 percent between 1923 and 1924, after years of steady increase. Worse, anti-car legislation loomed: Citizens and politicians were agitating for “speed governors” to limit how fast cars could go. “Gear them down to fifteen or twenty miles per hour,” as one letter-writer urged. Charles Hayes, president of the Chicago Motor Club, fretted that cities would impose “unbearable restrictions” on cars.

Hayes and his car-company colleagues decided to fight back. It was time to target not the behavior of cars—but the behavior of pedestrians. Motordom would have to persuade city people that, as Hayes argued, “the streets are made for vehicles to run upon”—and not for people to walk. If you got run over, it was your fault, not that of the motorist. Motordom began to mount a clever and witty public-relations campaign.

Their most brilliant stratagem: To popularize the term “jaywalker.” The term derived from “jay,” a derisive term for a country bumpkin. In the early 1920s, “jaywalker” wasn’t very well known. So pro-car forces actively promoted it, producing cards for Boy Scouts to hand out warning pedestrians to cross only at street corners. At a New York safety event, a man dressed like a hayseed was jokingly rear-ended over and over again by a Model T. In the 1922 Detroit safety week parade, the Packard Motor Car Company produced a huge tombstone float—except, as Norton notes, it now blamed the jaywalker, not the driver: “Erected to the Memory of Mr. J. Walker: He Stepped from the Curb Without Looking.”

The use of “jaywalker” was a brilliant psychological ploy. What’s the best way to convince urbanites not to wander in the streets? Make the behavior seem unsophisticated—something you’d expect from hicks fresh off the turnip truck. Car companies used the self-regarding snobbery of city-dwellers against themselves. And the campaign worked. Only a few years later, in 1924, “jaywalker” was so well-known it appeared in a dictionary: “One who crosses a street without observing the traffic regulations for pedestrians.”

Meanwhile, newspapers were shifting allegiance to the automakers—in part, Norton and Vanderbilt argue, because they were profiting heavily from car ads. So they too began blaming pedestrians for causing accidents.

“It is impossible for all classes of modern traffic to occupy the same right of way at the same time in safety,” as the Providence Sunday Journal noted in a 1921 article called “The Jay Walker Problem,” reprinted from the pro-car Motor magazine.

In retrospect, you could have predicted that pedestrians were doomed. They were politically outmatched. “There was a road lobby of asphalt users, but there was no lobby of pedestrians,” Vanderbilt says. And cars were a genuinely useful technology. As pedestrians, Americans may have feared their dangers—but as drivers, they loved the mobility.

By the early ’30s, the war was over. Ever after, “the street would be monopolized by motor vehicles,” Norton tells me. “Most of the children would be gone; those who were still there would be on the sidewalks.” By the 1960s, cars had become so dominant that when civil engineers made the first computer models to study how traffic flowed, they didn’t even bother to include pedestrians.

***

The triumph of the automobile changed the shape of America, as environmentalists ruefully point out. Cars allowed the suburbs to explode, and big suburbs allowed for energy-hungry monster homes. Even in mid-century, critics could see this coming too. “When the American people, through their Congress, voted for a 26-billion-dollar highway program, the most charitable thing to assume is that they hadn’t the faintest notion of what they were doing,” Lewis Mumford wrote sadly in 1958.


15 Nations that Collapsed because of Drought: will we be the 16th?

Preface. Another repercussion of drought may have been the emergence of Islam, as Fleitmann (2022) proposes below.

This post began with 10 civilizations that collapsed due to drought (below), and I’ve added 5 more. Will the American Southwest be #16? Lynn Ingram, a professor at U.C. Berkeley, discusses this possibility in her book The West without Water: What Past Floods, Droughts, and Other Climatic Clues Tell Us about Tomorrow. Since 2000, California and the Southwest have had the worst drought in 1,200 years. Since California’s aquifers and the Ogallala aquifer under eight states produce half of America’s food, the rest of the nation won’t escape…

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Fleitmann D et al (2022) Droughts and societal change: The environmental context for the emergence of Islam in late Antique Arabia. Science 376: 1317-1321

In Arabia, the first half of the sixth century CE was marked by the demise of Himyar, the dominant power in Arabia until 525 CE. Important social and political changes followed, which promoted the disintegration of the major Arabian polities. Using hydroclimate and stalagmite records from around Southern Arabia, we clearly see unprecedented droughts during the sixth century CE, with the worst of it from ~500 to 530 CE. We suggest that such droughts undermined the resilience of Himyar and thereby contributed to the societal changes from which Islam emerged.

Scroxton J (2020) Circum-Indian Ocean hydroclimate at the mid to late Holocene transition: The Double Drought hypothesis and consequences for the Harappan. Climate of the Past Discussions.

The Harappan civilization arose in the Indus valley, in present-day Pakistan and northwest India, about 5,200 years ago, peaking around 2600 BC. Its written script remains undeciphered, but archeology has revealed skilled metallurgy, intricate sewer systems, reservoirs, public baths and urban planning long before the Roman Empire. But by 1300 BC it had collapsed. Scroxton found a sudden drought starting around 2240 BC that affected winter rainfall; many fled to present-day Indian Gujarat, while others coped by switching to millet and other grains that favored summer rain. Then, 300 years later, just as the winter rains began to recover, a tropical drought came, reducing the summer rains for several centuries and greatly reducing the population.

Sinha A et al (2019) Role of climate in the rise and fall of the Neo-Assyrian Empire. Science Advances.

New research suggests it was drought that led to the collapse of the Assyrian Empire (whose heartland was in today’s northern Iraq) – one of the most powerful civilizations in the ancient world. Neo-Assyria was the first superpower in the history of the world. The Neo-Assyrian empire (912–609 BC) was the third and final phase of Assyrian civilization, and by far the largest empire in the region up to that time, controlling much of the territory from the Persian Gulf to Cyprus. The Assyrians were basically like the Empire in Star Wars: an all-devouring machine.

They also had incredible skill as hydro-engineers. The Assyrians are largely responsible for the way the Tigris River Basin drainage now works: they completely remade the natural water flows of that landscape using aqueducts and other hydraulic infrastructure. Some of these features are still functioning today.

Today Iraq is water-challenged, with little fresh water per capita and, until a deluge in the winter of 2019, very little rain since 1988.

Masters J (2016) Ten Civilizations or Nations That Collapsed From Drought.  wunderground

Drought is the great enemy of human civilization. Drought deprives us of the two things necessary to sustain life–food and water. When the rains stop and the soil dries up, cities die and civilizations collapse, as people abandon lands no longer able to supply them with the food and water they need to live. While the fall of a great empire is usually due to a complex set of causes, drought has often been identified as the primary culprit or a significant contributing factor in a surprising number of such collapses. Drought experts Justin Sheffield and Eric Wood of Princeton, in their 2011 book, Drought, identify more than ten civilizations, cultures and nations that probably collapsed, in part, because of drought. As we mark World Water Day on March 22, we should not grow overconfident that our current global civilization is immune from our old nemesis–particularly in light of the fact that a hotter climate due to global warming will make droughts more intense and impacts more severe. So, presented here is a “top ten” list of drought’s great power over some of the mightiest civilizations in world history–presented chronologically.

Collapse #1. The Akkadian Empire in Syria, 2334 BC – 2193 BC. In Mesopotamia 4200 years ago, the great Akkadian Empire united all the indigenous Akkadian-speaking Semites and the Sumerian speakers, and controlled Mesopotamia, the Levant, and parts of Iran, sending military expeditions as far south as present-day Oman. In a 2000 article published in Geology, “Climate change and the collapse of the Akkadian empire: Evidence from the deep sea”, a team of researchers led by Heidi Cullen studied deposits of continental dust blown into the Gulf of Oman in the late 1990s. They discovered a large increase in dust 4200 years ago that likely coincided with a 100-year drought that brought a 30% decline in precipitation to Syria. The drought, called the 4.2 kiloyear event, is thought to have been caused by cooler sea surface temperatures in the North Atlantic. The 4.2 kiloyear event has also been linked to the collapse of the Old Kingdom in Egypt (see below). The paper concluded, “Geochemical correlation of volcanic ash shards between the archeological site and marine sediment record establishes a direct temporal link between Mesopotamian aridification and social collapse, implicating a sudden shift to more arid conditions as a key factor contributing to the collapse of the Akkadian empire.”

Collapse #2. The Old Kingdom of ancient Egypt, 4200 years ago. The same drought that brought down the Akkadian empire in Syria severely shrank the normal floods on the Nile River in ancient Egypt. Without regular floods to fertilize the fields, poor harvests led to reduced tax income and insufficient funds to finance the pharaoh’s government, hastening the collapse of Egypt’s pyramid-building Old Kingdom. An inscription on the tomb of Ankhtifi during the collapse describes the pitiful state of the country when famine stalked the land: “the whole country has become like locusts going in search of food…”

Collapse #3. The Late Bronze Age (LBA) civilization in the Eastern Mediterranean. About 3200 years ago, the Eastern Mediterranean hosted some of the world’s most advanced civilizations. The Mycenaean culture was flourishing in Greece and Crete. The chariot-riding Hittites had carved out a vast empire encompassing a large part of Asia Minor and the Middle East. In Egypt, the New Kingdom was at its height. However, around 1200 BC, these Eastern Mediterranean civilizations declined or collapsed. According to a 2013 study in PLOS ONE, grains of fossilized pollen show that this collapse coincided with the onset of a 300-year drought event. This climate shift caused crop failures and famine, which “precipitated or hastened socio-economic crises and forced regional human migrations at the end of the LBA in the Eastern Mediterranean and southwest Asia.”

Collapse #4. The Maya civilization of 250-900 AD in Mexico. Severe drought killed millions of Maya people due to famine and lack of water, and initiated a cascade of internal collapses that destroyed their civilization at the peak of their cultural development, between 750 – 900 AD. Haug, G.H. et al., in their 2003 paper in Science, “Climate and the collapse of Maya civilization,” documented substantial multi-year droughts coinciding with the collapse of the Maya civilization.

Collapse #5. Another Maya collapse occurred a few centuries later. Mayapan served as the capital to some 20,000 Maya people in the 13th through mid-15th centuries but collapsed and was abandoned after a rival political faction, the Xiu, massacred the powerful Cocom family. Extensive historical records date this collapse to sometime between 1441 and 1461, and plenty of ethnohistorical records support the city’s violent downfall and abandonment around 1458. But new evidence of massacres up to 100 years earlier, together with climate data showing prolonged drought around that time, led researchers to suspect that environmental factors played a role. In particular, they found a significant relationship between a period of drought and substantial population decline from 1350 to 1430.

The Maya depended heavily on rain-fed maize but lacked any centralized long-term grain storage. The impacts of rainfall levels on food production, then, are believed to be linked to human migration, population decline, warfare and shifts in political power, the study states. “It’s not that droughts cause social conflict, but they create the conditions whereby violence can occur, that hardship can become politicized in the worst kind of way,” Masson said. “It creates opportunities for ruthlessness and can cause people to turn on one another violently.” (Kennett 2022)

Collapse #6. The Tang Dynasty in China, 700-907 AD. At the same time as the Mayan collapse, China was also experiencing the collapse of its ruling empire, the Tang Dynasty. Dynastic changes in China often occurred because of popular uprisings during crop failure and famine associated with drought. The Tang dynasty–a golden age of literature and art in Chinese civilization–began to weaken in the eighth century, and it fully collapsed in 907 AD. Sediments from Lake Huguang Maar in China dated to the time of the collapse of the Tang Dynasty indicate a sudden and sustained decline in summertime monsoon rainfall. Agriculture in China depends upon the summer monsoon, which supplies about 70% of the year’s rain in just a few months. A 2007 article in Nature by Yancheva et al. speculated that “migrations in the tropical rain belt could have contributed to the simultaneous declines of both the Tang dynasty in China and the Classic Maya in Central America.”

Collapse #7. The Tiwanaku Empire of Bolivia’s Lake Titicaca region, 300 – 1000 AD. The Tiwanaku Empire was one of the most important South American civilizations prior to the Inca Empire. After dominating the region for 500 years, the Tiwanaku Empire ended abruptly between 1000 – 1100 AD, following a drying of the region, as measured by ice accumulation in the Quelccaya Ice Cap, Peru. Sediment cores from nearby Lake Titicaca document a 10-meter drop in lake level at this time.

Collapse #8. The Ancestral Puebloan (Anasazi) culture in the Southwest U.S. in the 11th-12th centuries AD. Beginning in 1150 AD, North America experienced a 300-year drought called the Great Drought. This drought has often been cited as a primary cause of the collapse of the Ancestral Puebloan (formerly called Anasazi) civilization in the Southwest U.S., and abandonment of places like the Cliff Palace at Mesa Verde National Park in Colorado. The Mississippian culture, a mound-building Native American civilization that flourished in what is now the Midwestern, Eastern, and Southeastern United States, also collapsed at this time.

Collapse #9. The Khmer Empire based in Angkor, Cambodia, 802-1431 AD. The Khmer Empire ruled Southeast Asia for over 600 years, but was done in by a series of intense decades-long droughts interspersed with intense monsoons in the fourteenth and fifteenth centuries that, in combination with other factors, contributed to the empire’s demise. The climatic evidence comes from a seven-and-a-half century reconstruction from tropical southern Vietnamese tree rings presented in a 2010 study by Buckley et al., “Climate as a contributing factor in the demise of Angkor, Cambodia”. They wrote: “The Angkor droughts were of a duration and severity that would have impacted the sprawling city’s water supply and agricultural productivity, while high-magnitude monsoon years damaged its water control infrastructure.”

Collapse #10. The Ming Dynasty in China, 1368-1644 AD. China’s Ming Dynasty–one of the greatest eras of orderly government and social stability in human history–collapsed at a time when the most severe drought in the region in over 4000 years was occurring, according to sediments from Lake Huguang Maar analyzed in a 2007 article in Nature by Yancheva et al. Drought experts Justin Sheffield and Eric Wood of Princeton, in their 2011 book, Drought, speculated that a weakened summer monsoon driven by warm El Niño conditions in the Eastern Pacific was responsible for the intense drought, which led to widespread famine. An inscription found carved on a wall of Dayu Cave in the Qinling Mountains of Central China, dated July 10, 1596, during the 24th year of the Ming Dynasty’s Emperor Wanli, said: “Mountains are crying due to drought.”

Collapse #11. Modern Syria. Syria’s devastating civil war that began in March 2011 has killed over 300,000 people, displaced at least 7.6 million, and created an additional 4.2 million refugees. While the causes of the war are complex, a key contributing factor was the nation’s devastating drought that began in 1998. The drought brought Syria’s most severe set of crop failures in recorded history, which forced millions of people to migrate from rural areas into cities, where conflict erupted. This drought was almost certainly Syria’s worst in the past 500 years (98% chance), and likely the worst for at least the past 900 years (89% chance), according to a 2016 tree ring study by Cook et al., “Spatiotemporal drought variability in the Mediterranean over the last 900 years.” Human-caused emissions of greenhouse gases were “a key attributable factor” in the drying up of wintertime precipitation in the Mediterranean region, including Syria, in recent decades, as discussed in a NOAA press release that accompanied a paper by Hoerling et al., On the Increased Frequency of Mediterranean Drought. A 2016 paper by drought expert Colin Kelley showed that the influence of human greenhouse gas emissions had made recent drought in the region 2 – 3 times more likely. Wunderground’s climate change blogger, Dr. Ricky Rood, gives his take on the current drought in Syria in his March 21 post, Ineffective Resolution: Middle East and Climate Change.

Collapse #12. Mycenaean Greece. Marshall (2012) Climate change: The great civilisation destroyer? War and unrest, and the collapse of many mighty empires, often followed changes in local climes. Is this more than a coincidence? NewScientist. Also see: Five civilisations that climate change may have doomed

What caused the collapse of Mycenaean Greece, and thus had a huge impact on the course of world history? A change in the climate, according to the latest evidence. What’s more, Mycenaean Greece is just one of a growing list of civilizations whose fate is being linked to the vagaries of climate. It seems big swings in the climate, handled badly, brought down whole societies, while smaller changes led to unrest and wars.

Excavating in what is now Syria, archaeologist Harvey Weiss found dust deposits suggesting that the region’s climate suddenly became drier around 2200 BC. The drought would have led to famine, he argued, explaining why major cities were abandoned at this time (Science, vol 261, p 995). A piece of contemporary writing, called The Curse of Akkad, does describe a great famine:

For the first time since cities were built and founded,
The great agricultural tracts produced no grain,
The inundated tracts produced no fish,
The irrigated orchards produced neither syrup nor wine,
The gathered clouds did not rain, the masgurum did not grow.
At that time, one shekel’s worth of oil was only one-half quart,
One shekel’s worth of grain was only one-half quart. …
These sold at such prices in the markets of all the cities!
He who slept on the roof, died on the roof,
He who slept in the house, had no burial,
People were flailing at themselves from hunger.

In 2000, climatologist Peter deMenocal of Columbia University in New York found more evidence. His team showed, based on modern records going back to 1700, that the flow of the region’s two great rivers, the Tigris and the Euphrates, is linked to conditions in the north Atlantic: cooler waters reduce rainfall by altering the paths of weather systems. Next, they discovered that the north Atlantic cooled just before the Akkadian empire fell apart (Science, vol 288, p 2198). “To our surprise we got this big whopping signal at the time of the Akkadian collapse.”

It soon became clear that major changes in the climate coincided with the untimely ends of several other civilizations. Of these, the Maya became the poster child for climate-induced decline. Mayan society arose in Mexico and Central America around 2000 BC.

Then the Mayan civilization collapsed.  Numerous studies have shown that there were several prolonged droughts around the time of the civilisation’s decline. In 2003, Gerald Haug of the Swiss Federal Institute of Technology in Zurich found it was worse than that. His year-by-year reconstruction based on lake sediments shows that rainfall was abundant from 550 to 750, perhaps leading to a rise in population and thus to the peak of monument-building around 721. But over the next century there were not only periods of particularly severe drought, each lasting years, but also less rain than normal in the intervening years (Science, vol 299, p 1731). Monument construction ended during this prolonged dry period, around 830, although a few cities continued on for many centuries.

When the climate becomes less favorable, less food can be grown. Such changes can also cause plagues of locusts or other pests, and epidemics among people weakened by starvation. When it is no longer feasible to maintain a certain population level and way of life, the result can be collapse.

In 2010, though, a study of river deposits in Syria suggested there was a prolonged dry period between 1200 and 850 BC – right at the time of the so-called Greek Dark Ages. In 2012, Brandon Drake analyzed several climate records and concluded that there was a cooling of the Mediterranean at this time, reducing evaporation and rainfall over a huge area.

What’s more, several other cultures around the Mediterranean, including the Hittite Empire and the “New Kingdom” of Egypt, collapsed around the same time as the Mycenaeans – a phenomenon known as the late Bronze Age collapse. Were all these civilizations unable to cope with the changing climate? Or were the invading Sea Peoples the real problem? The story could be complex: civilizations weakened by hunger may have become much more vulnerable to invaders, who may themselves have been driven to migrate by the changing climate. Or the collapse of one civilization could have had knock-on effects on its trading partners.

Around 900, the Tang dynasty began losing its grip on China. At its height, the Tang ruled over 50 million subjects. Woodblock printing meant that written words, particularly poetry, were widely accessible. But the dynasty fell after local governors usurped its authority. A study of lake sediments in China by Haug suggests that this region experienced a prolonged dry period at the same time as that in Central America. He thinks a shift in the tropical rain belt was to blame, causing civilisations to fall apart on either side of the Pacific (Nature, vol 445, p 74).

From 2500 BC until the 20th century, a series of powerful empires like the Tang controlled China. All were eventually toppled by civil unrest or invasions.  When Zhang compared climate records for the last 1200 years to the timeline of China’s dynastic wars, the match was striking. Most of the dynastic transitions and periods of social unrest took place when temperatures were a few tenths of a degree colder. Warmer periods were more stable and peaceful (Chinese Science Bulletin, vol 50, p 137).

Zhang gradually built up a more detailed picture showing that harvests fell when the climate was cold, as did population levels, while wars were more common. Of 15 bouts of warfare he studied, 12 took place in cooler times. He then looked at records of war across Europe, Asia and north Africa between 1400 and 1900. Once again, there were more wars when the temperatures were lower. Cooler periods also saw more deaths and declines in the population.

These studies suggest that the effects of climate on societies can be profound.

Trying to move beyond mere correlations, Zhang began studying the history of Europe from 1500 to 1800 AD. In the mid-1600s, Europe was plunged into the General Crisis, which coincided with a cooler period called the Little Ice Age. The Thirty Years war was fought then, and many other wars. Zhang analyzed detailed records covering everything from population and migration to agricultural yields, wars, famines and epidemics in a bid to identify causal relationships. So, for instance, did climate change affect agricultural production and thus food prices? That in turn might lead to famine – revealed by a reduction in the average height of people – epidemics and a decline in population. High food prices might also lead to migration and social unrest, and even wars.

The Khmer empire, centered in what is now Cambodia, began in 802 AD. It built the astounding temple of Angkor Wat, dedicated to the god Vishnu, in the 12th century. We now know that Angkor Wat was not, as long thought, a lone structure. It was the heart of a teeming city covering 1000 square kilometres, surrounded by even larger suburbs. Before the Industrial Revolution, Angkor was perhaps the world’s largest city. But it was sacked and abandoned in 1431 apart from the temple, which by then had been taken over by Buddhists. What made the Khmer abandon their metropolis? According to Brendan Buckley of Columbia University in New York, changes to the monsoon were a contributing factor. Buckley used tree rings to produce a yearly record of monsoon rainfall from 1250 to 2008. He found that the monsoon was weak in the mid to late 1300s. This was followed by a short but harsh drought in the early 1400s, just before Angkor fell. There were also a few years when the monsoons returned with a vengeance, causing severe floods.

Like many south Asian societies, the Khmer relied on the monsoon to water their crops. Canals and reservoirs channelled water to farms and homes in Angkor. Many are now filled with sand and gravel, carried in by floods, and Buckley showed the deposits in at least one canal date to the time of the collapse. This damage would have made it even harder to manage the water supply, at a time when it was already limited and unpredictable.

Between 300 and 500 AD, a people called the Moche thrived and established cities along the coast of Peru. Their farmers built a network of irrigation canals, and grew maize and lima beans. Their capital boasts the largest adobe structure in the Americas, the Huaca del Sol.   After 560, however, the Moche civilisation began to decline. By the time they abandoned the coastal cities around 600 and moved inland, their irrigation channels had been overrun by sand dunes.  The decline may have been triggered by changes in climate. Studies of ice cores suggest that an especially intense El Niño cycle around this time produced intense rainfall and floods, followed by a long and severe drought.

References

Buckley, B.M. et al., 2010, “Climate as a contributing factor in the demise of Angkor, Cambodia,” Proc. Natl. Acad. Sci. U.S.A. 107, 6748–6752.

Cook, B.I. et al., 2016, “Spatiotemporal drought variability in the Mediterranean over the last 900 years,” JGR Atmospheres, DOI: 10.1002/2015JD023929

Cullen, H.M., and P.B. deMenocal, 2000, “North Atlantic Influence on Tigris-Euphrates Streamflow,” International Journal of Climatology, 20: 853-863.

Cullen et al., 2000, “Climate change and the collapse of the Akkadian empire: Evidence from the deep sea,” Geology 28, 379.

deMenocal, P.B., 2001, “Cultural responses to climate change during the late Holocene,” Science 292, 667–673.

Gleick, P., 2014, Water, Drought, Climate Change, and Conflict in Syria, Weather, Climate, and Society

Haug, G.H. et al., 2003, “Climate and the collapse of Maya civilization,” Science 299, 1731–1735.

Hoerling, Martin, Jon Eischeid, Judith Perlwitz, Xiaowei Quan, Tao Zhang, Philip Pegion, 2012, On the Increased Frequency of Mediterranean Drought, J. Climate, 25, 2146–2161, doi: http://dx.doi.org/10.1175/JCLI-D-11-00296.1

Kaniewski, D. et al., 2012, Drought is a recurring challenge in the Middle East, PNAS 109:10, 3862–3867, doi: 10.1073/pnas.1116304109

Kaniewski, D. et al., 2013, “Environmental Roots of the Late Bronze Age Crisis,” PLOS one, DOI: 10.1371/journal.pone.0071004

Kelley, C.P. et al., 2016, “Climate change in the Fertile Crescent and implications of the recent Syrian drought,” PNAS vol. 112 no. 11, 3241–3246, doi: 10.1073/pnas.1421533112

Kennett DJ et al (2022) Drought-Induced Civil Conflict Among the Ancient Maya, Nature Communications. DOI: 10.1038/s41467-022-31522-x

Ortloff, C.R. and A.L. Kolata, 1993, “Climate and Collapse: Agro-Ecological Perspectives on the Decline of the Tiwanaku State,” Journal of Archaeological Science 20, 195-221.

Wendel, JoAnna, 2015, “Chinese Cave Inscriptions Tell Woeful Tale of Drought,” EOS, 1 October 2015.

Yancheva, G. et al., 2007, “Influence of the intertropical convergence zone on the East Asian monsoon,” Nature 445, 74–77.
