Aging nuclear power plants should be shut down

Preface. Below are my notes from the 146-page Greenpeace report “Lifetime extension of ageing nuclear power plants”.  Even if you don’t understand all the terms, read on anyhow, since it certainly conveys why nuclear plants grow more dangerous with age.  Imagine how fast you’d die after being fried by radiation and heat. Metal and cement suffer too: they eventually crack, corrode, and break.

Reading this makes me want to shut nuclear power plants down as soon as possible. They are clearly not a “solution” to replace fossil energy, especially because their nuclear waste will poison the earth for hundreds of thousands of years. Both of my books explain why there are no alternatives to fossil fuels for transportation, manufacturing high heat, natural gas fertilizers, the half a million products made out of fossil fuels, and the electric grid itself, which needs natural gas to back up intermittent energy when it isn’t generating and to balance it when it is.

Physical ageing. A comprehensive range of physical ageing mechanisms is described in the IAEA safety guide on ageing management: degradation of mechanical components can be caused by radiation embrittlement (affecting the reactor pressure vessel (RPV) beltline region), general corrosion, stress corrosion cracking, weld-related cracking, and mechanical wear and fretting (affecting rotating components). Electrical and instrumentation and control components can be affected by insulation embrittlement and degradation (cables, motor windings, transformers), partial discharges (transformers, inductors, medium and high voltage equipment), oxidation, the appearance of monocrystals, and metallic diffusion.

Civil structures, especially concrete elements, can suffer damage due to aggressive chemical attacks and corrosion of the embedded steel, cracks and distortion due to increased stress levels from settling, and loss of material due to freeze–thaw processes. Pre-stressed containment tendons can lose their pre-stress due to relaxation, shrinkage, creep and elevated temperature.

Ageing of electrical installations.  In the field of instrumentation and control equipment, cables are among the components of most concern in terms of ageing. During the operational lifetime of reactors, the plastics of the cable insulation are exposed to environmental influences that cause deterioration. Oxidation is the dominant ageing mechanism of polymer cable coating, leading to embrittlement of the material, which increases the potential for cracking. Cracked cables can cause short circuits followed by electrical failures or even cable fires. Ageing cables therefore have the potential for serious common-cause failures of instrumentation and control equipment, especially under accident conditions.

Ageing effects on the reactor pressure vessel. The RPV and its internals are the most stressed components in a nuclear power plant. During operation the RPV has to withstand: • neutron radiation that causes increasing embrittlement of the steel and weld seams; • material fatigue due to frequent load cycles resulting from changing operational conditions; • mechanical and thermal stresses from operating conditions, including fast reactor shutdowns (scrams) and other events throughout the operational lifetime; and • different corrosion mechanisms caused by adverse conditions such as chemical impacts or vibrations.

Embrittlement under neutron radiation is of special importance for old reactors. At the time of their construction, knowledge of neutron-induced embrittlement was limited, so sometimes unsuitable materials were used.

Ageing of reactor pressure vessel head penetrations and primary circuit components. Leaks in the primary circuit components of PWRs due to ageing mechanisms such as stress corrosion cracking can lead to accidents involving loss of primary coolant. Systems and components in the primary circuit must therefore meet especially high quality standards to prevent loss of coolant and consequent loss of function.


Aging nuclear plants in the news:

Pécout A (2022) French energy supplier EDF shows concern over corrosion problems at its nuclear plants. Le Monde. Cracked pipes were detected in the safety injection systems of several reactors. As inspections continued, only 30 of France’s 56 reactors were still operating by the end of Wednesday, April 20. The phenomenon of corrosion has been a cause for concern in the industry for several months now, as it causes cracks in reactor pipes, especially in the safety injection system, the critical backup system designed to cool the primary circuit by injecting borated water into it in the event of an accident. Inspections detected cracks in five reactors between the second half of 2021 and the beginning of 2022, and at least four more could be affected, which means the issue might affect all of France’s nuclear power plants, although further evaluation is needed.

Alice Friedemann, www.energyskeptic.com, author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer; “Barriers to Making Algal Biofuels”; and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report.

Greenpeace. 2014.  Lifetime extension of ageing nuclear power plants: Entering a new era of risk. Greenpeace Switzerland.

Summary of major risk arguments

Important aspects of risk with respect to ageing reactors are: • physical ageing; • conceptual and technological ageing; and • ageing of staff and atrophy of knowledge. As of 2014, the average age of European reactors has risen to 29 years. As the number of new-build reactors in the EU has been very limited since the 1990s, European nuclear power plant operators have followed two strategic routes: lifetime extension and power uprating. These two strategies have serious implications for the safety of nuclear power plants, especially with respect to the following aspects:

1) Physical ageing of components in nuclear power plants leads to degradation of material properties. The effects of ageing mechanisms such as crack propagation, corrosion and embrittlement have to be countered by continuous monitoring and timely replacement of components. Nevertheless, an increasing level of material degradation cannot be completely avoided and is accepted to a certain degree, therefore lowering the original safety margins. Particularly under accident conditions that cannot be precisely predicted, an abrupt failure of already weakened components cannot be fully excluded.

2) Power uprating imposes significant additional stresses on nuclear power plant components due to an increase in flow rates, temperatures and pressures. Ageing mechanisms can be exacerbated by these additional stresses. Modifications necessitated by power uprating may additionally introduce new potential sources of failure due to adverse interactions between new and old equipment.

3) Reactor lifetime extension and power uprating therefore decrease originally designed safety margins and increase the risk of failures.

4) Serious problems related to ageing effects have already been encountered in nuclear power plants worldwide, even though they have not yet exceeded their design lifetimes. Typical ageing problems are: • embrittlement, cracks or leaks in the RPV or primary circuit components; • damage to RPV internals such as core shrouds; • degradation of older concrete containment and reactor buildings; and • degradation of electrical cables and transformers.

5) The fundamental design of a nuclear power plant is determined at the time of planning and construction. The science and technology of nuclear reactor safety is continually developing. Subsequent adaptation of a plant’s design to new safety requirements is possible only to a limited degree. Thus, during the lifetime of a facility, the gap between the technology employed and state-of-the-art technology is constantly increasing.

6) To enable lifetime extensions of existing plants, operators must implement enhanced ageing management. Nevertheless, general acceptance criteria for the maximum permitted extent of ageing effects are not defined. Besides technical aspects of ageing, ageing management has to consider loss of experienced staff both in the plant’s workforce and in the supply chain, as well as problems of quality assurance under changing external supply conditions.

7) With increasing lifetime, the radioactive inventory stored in a reactor’s spent fuel pool and, where present, dry storage increases. As the risk associated with the spent fuel pools and dry storage was initially perceived as low, design requirements with respect to cooling and physical protection were weak. New risk perceptions after the 9/11 terrorist attacks and the Fukushima disaster necessitate a considerable improvement in the safety of spent fuel storage.

8) The site specific design basis of older nuclear power plants was usually rather weak concerning external hazards such as earthquakes, flooding and extreme weather. Site-specific reassessments of plants usually result in stricter hazard assumptions due to better knowledge and higher standards. However, comprehensive retrofitting is difficult to implement in older power plants, especially in terms of protection against earthquakes or even terrorist acts such as deliberate aircraft impacts. In the case of multiple-unit sites, the possibility of emergency situations occurring simultaneously in different units had been largely overlooked until the Fukushima disaster.

9) Until now, most evacuation plans for nuclear power plants have covered radii of less than 10 km. No harmonization of country-specific regulations in the EU has yet been achieved. The Chernobyl and Fukushima disasters show that external emergency plans for plants need to include larger evacuation areas.

10) The European Stress Test provided valuable insights into the safety level of European nuclear power plants. Nevertheless, important aspects of ageing were not explicitly addressed and evaluated. ENSREG created a list of good practices and recommended possible safety enhancements. But neither the good practices nor the identified safety enhancements are obligatory for EU nuclear power plants.

***

The heyday of nuclear power plant construction was the 1970s and 1980s. While most of the first generation of reactors have been closed down, the following second generation of reactors are largely still operational. By 11 March 2014, the third anniversary of the Fukushima nuclear disaster, the 25 oldest reactors in Europe (excluding Russia) will be over 35 years old.

Almost half of those are older than their original design lifetime. In Europe excluding Russia, 46 out of 151 operational reactors are older than their original design lifetimes or within three years of reaching that date. However, only a few of those reactors will be closed down in the near future – most have had, or are set to have, their lifetimes extended for a further 20 years or more. In the United States, meanwhile, more than two-thirds of the ageing reactor fleet have received extended licenses to take them to 60 years of operation. As a result, we are entering a new era of nuclear risk.

The design lifetime is the period of time during which a facility or component is expected to perform according to the technical specifications to which it was produced. Life-limiting processes include an excessive number of reactor trips and load cycle exhaustion. Physical ageing of systems, structures and components is paralleled by technological and conceptual ageing, because existing reactors allow for only limited retroactive implementation of new technologies and safety concepts. Together with ‘soft’ factors such as outmoded organizational structures and the loss of staff know-how and motivation as employees retire, these factors cause the overall safety level of older reactors to become increasingly inadequate by modern standards.

Ageing of staff and atrophy of knowledge. The building of new nuclear reactors came to an almost complete halt for many years, beginning in the 1980s. The nuclear sector became less important, the need for personnel declined, and career prospects in the industry deteriorated. Young professionals began to be in short supply. However, the safe operation of nuclear power plants relies on experienced employees in the plants themselves and in the supply chain. Irreplaceable and undocumented knowledge can be lost when older personnel leave. In the near future, first-hand knowledge from the construction phase will no longer be available – a phenomenon that we can already see today. Adverse effects on the safety performance of ageing reactors due to the atrophy of the knowledge base may be expected.

Another aspect of ageing is that in a declining market the number of manufacturers and service providers working exclusively or predominantly in the nuclear field has diminished over time. Specific experience has been lost and cannot be maintained on an equivalent level, especially where the delivery of technology only used in older plants is required. It has become apparent that the extraordinarily high quality standards required for nuclear power plants will no longer be met with the same reliability as before. Manufacturers and subcontractors with insufficient experience in the nuclear field have become a significant factor in the decrease of quality and the increase in failures.

Measures to uprate a reactor’s power output can further compromise safety margins, for instance because increased thermal energy production results in an increased output of steam and cooling water, leading to greater stresses on piping and heat exchange systems, so exacerbating ageing mechanisms. Modifications necessitated by power uprating may additionally introduce new potential sources of failure due to adverse interactions between new and old equipment. Thus, both lifetime extension and power uprating decrease a plant’s originally designed safety margins and increase the risk of failures.

Physical ageing issues include those affecting the reactor pressure vessel (including embrittlement, vessel head penetration cracking, and deterioration of internals), the containment and the reactor building, cable deterioration, and ageing of transformers. Conceptual and technological ageing issues include the inability to withstand a large aircraft impact, along with inadequate earthquake and flooding resistance. Some reactor types, such as the British advanced gas-cooled reactors (AGR) and the Russian-designed VVER-440 and RBMK (Chernobyl-type) reactors, suffer specific problems.

Spent fuel storage presents a special risk for ageing nuclear power plants due to the build-up of large amounts of spent fuel. Examples of problems include inadequate protection against external hazards and the risks of a long-term loss of cooling (due to poor redundancy and low quality standards in spent fuel pool cooling systems), both issues illustrated by the Fukushima catastrophe. The re-racking of spent fuel elements into more compact storage units to increase the space available for the larger than expected amount of spent fuel is a further source of risk.

Site-specific risks change over time. New insights into earthquake risk require higher protection standards which cannot be fully met by modification of older nuclear power plants. The lack of emergency preparedness evident during the Fukushima disaster forces a reassessment of risks, including those of flooding and loss of external infrastructure, especially in the light of the implications of climate change for extreme weather and sea level rise.

The Fukushima disaster also highlighted the risk of an external event compromising multiple reactors at the same time – a situation hardly any multi-unit site is prepared for. Sources of common-cause failures include shared cooling inlets, pumping stations, pipelines, electricity infrastructure and so on – issues that were not sufficiently addressed in, for instance, the post Fukushima EU Stress Test of nuclear reactors. Perceptions of the most suitable locations for nuclear power plants have also changed over time. Many older plants are located in highly populated areas, obviously making emergency preparedness much more complex than for plants situated far from population areas, and greatly increasing the potential for harm.

The EU Stress Test furthermore did not explicitly cover ageing-related issues. The use of the original design basis to determine the robustness of reactors was particularly unsatisfactory, because design deficiencies and differences between different reactors were not fully taken into account. Because beyond design basis events had not been systematically analyzed before, too little documentation was available and expert judgement played too large a part.

ECONOMICS OF NUCLEAR AGEING Prof. Stephen Thomas – University of Greenwich

If the cost of modifications is relatively low, life-extended nuclear power plants can be highly profitable to their owners because the capital cost of the plant (making up most of the cost of a unit of nuclear-generated electricity) will already have been paid off, leaving only the operations and maintenance cost to be paid. Other advantages to the owner include the fact that the plant is a known quantity.

In the USA, reactor retirements have mostly been due to economic reasons (including the prohibitive cost of repair), though some have been because of design reasons. In Germany most closures have stemmed from political decisions, though a few have been design-related. Elsewhere, reasons have been mainly economic (France) or technical and economic (Canada, Spain, the UK), political (Italy, Sweden) or political and design-related (Japan, largely in the wake of the Fukushima disaster).

National regulators are constantly increasing safety requirements, but for ageing reactors these can never be set at the level of the best available technology. For instance, design lessons from the 1975 Browns Ferry accident were applied to most designs developed after that, but those from the 1979 Three Mile Island accident and the Chernobyl (1986) and Fukushima (2011) disasters can only be taken into limited account.

Three plants (Vermont Yankee, Kewaunee and Crystal River) recently closed before lifetime extension was obtained because of excessive costs in the context of low electricity prices. San Onofre in California closed even before an extension was applied for, because of the cost of repairs.

The increasing risk posed by nuclear ageing should lead to an increase in operators’ insurance premiums. With ageing nuclear reactors, adequate financial security to cover the costs of a potential accident becomes even more of a necessity. It is important for society as a whole that objective calculations are made of the damage that a nuclear accident could potentially cause, and on that basis alternative systems of financing the coverage have to be investigated. It is obviously important to accompany this with a mandatory financial security requirement for operators, but the higher costs resulting from such an analysis should not be a reason to limit liability.

LIABILITY OF AGEING NUCLEAR REACTORS Prof. Tom Vanden Borre – University of Leuven; Prof. Michael Faure – University of Maastricht

It is especially important that compulsory insurance protects victims against insolvency of the operator. However, the international nuclear liability conventions, even as revised by their relevant protocols, allow only up to about 1% of the cost of an accident to be compensated.

Legal channeling of all liability to the operator is problematic. From the viewpoint of victims it would be preferable to be able to address a claim against several persons or corporations, as this would increase their chances of receiving compensation. It would also have a preventive effect since all parties bearing a share of the risk would have an incentive to avoid damage. Countries considering plant lifetime extension should end funding part of the liability coverage with public means, extend liability to suppliers, and introduce unlimited liability for operators, while requiring the latter to have third-party liability insurance coverage or other financial security of a realistic level in terms of the actual scope for damage.

Countries should opt for reactor lifetime extension only if arrangements for the compensation of victims in the event of an accident are substantially improved. A higher level of liability would not only benefit the victims of a nuclear accident but would again have an important preventive effect. Pooling unlimited liability across Europe would encourage operators to monitor one another, since they would be reluctant to allow a bad risk into their system.

POLITICS, PUBLIC PARTICIPATION AND NUCLEAR AGEING Ir. Jan Haverkamp – Greenpeace, Nuclear Transparency Watch

As of January 2014, more than 50% of operational reactors worldwide were over 30 years old. Forty-five reactors have exceeded 40 years, 14 of them located in Europe including Russia. Beznau 1 in Switzerland is the oldest operational reactor in Europe and – together with Tarapur-1 and 2 in India – the oldest in the world at nearly 45 years. None of the reactors that have so far been permanently shut down worldwide has reached 50 years of operation since first grid connection. The British Calder Hall and Chapelcross reactors have come closest, reaching 44 and 47 years respectively. The reactors at both sites were small units with a power capacity of 60 MW each. The average age of shut down reactors worldwide is less than 25 years. From these numbers it is evident that little operational experience exists of nuclear reactors with more than 40 years of commercial operation.

Construction of new reactors

Around 1980, more than 200 reactors were simultaneously under construction. In the 1990s and 2000s this figure dropped to well under 50 reactors. Only recently has there been a modest increase in construction start-ups. Enhanced safety requirements, generally decreasing acceptance of nuclear power in many countries and financial risks have prevented the European nuclear industry from building new reactors.

Most reactors under construction today are located in Asia, and over the past 10 years, new reactors have been connected to the grid in China (10), India (7), Japan (4), South Korea (4), Russia (3), Ukraine (2), Iran (1), Pakistan (1) and Romania (1).

In order to maintain nuclear energy output levels, European governments and operators are following two strategic routes, both of which are seen as less expensive and politically more convenient than building new reactors: • Plant lifetime extension (PLEX) of reactors; and • Plant power uprating (PPU) of reactors. Lifetime extension and power uprating allow electrical generating capacity to be maintained or enhanced with comparatively little effort in terms of financing, planning, licensing and technical implementation, compared to building a new reactor.

The term ‘physical ageing’ encompasses the time-dependent mechanisms that result in degradation of a component’s quality. After three or four decades of operation under high pressure, temperature, radiation and chemical impacts as well as changing load cycles, the risk of ageing becomes more and more significant. Unexpected combinations of various adverse effects such as corrosion, embrittlement, crack progression or drift of electrical parameters may result in the failure of technical equipment, leading to the loss of required safety functions. Life-limiting processes include the exceeding of the designed maximum number of reactor trips and load cycle exhaustion.
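To make “load cycle exhaustion” concrete, here is a minimal sketch of how cumulative fatigue usage is commonly tracked under a linear damage (Miner’s rule) assumption. It is not taken from the Greenpeace report, and every cycle count and allowable-cycle figure in it is a made-up illustration.

```python
# Minimal sketch of cumulative fatigue usage under Miner's rule:
# damage = sum(n_i / N_i) over all load-cycle types, where n_i is the
# number of cycles actually experienced and N_i is the allowable number
# of cycles of that type from the design fatigue analysis.
# All numbers below are hypothetical, for illustration only.

load_cycles = {
    # cycle type: (cycles experienced so far, allowable cycles by design)
    "heatup/cooldown":      (120, 200),
    "reactor trip (scram)": (80,  400),
    "loss of load":         (10,  80),
}

usage_factor = sum(n / n_allow for n, n_allow in load_cycles.values())
print(f"Cumulative fatigue usage factor: {usage_factor:.2f}")
# A usage factor approaching 1.0 means the design fatigue allowance is
# nearly exhausted, which is why frequent trips and load cycles are
# life-limiting for components such as the RPV and primary piping.
```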

In addition to plant lifetime extension, operators of nuclear power plants may wish to enhance the power output of their reactors. The process of increasing the maximum power level at which a commercial reactor may operate is called a plant power uprate (PPU). To increase the power output, the reactor will be refueled with either slightly more enriched uranium fuel or a higher percentage of new fuel.

A power uprate forces the reactor to produce more thermal energy, which results in an increased production of the steam that is used for electricity generation. A higher power level thus produces a greater flow of steam and cooling water through the systems, and components such as pipes, valves, pumps and heat exchangers must therefore be capable of accommodating this higher flow. Moreover, electrical transformers and generators must be able to cope with the more demanding operating conditions that exist at the higher power level.
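A back-of-the-envelope scaling sketch (hypothetical numbers, not figures from the report) shows why even a modest uprate matters: if thermal power rises 15% while the heat picked up per kilogram of steam or coolant stays roughly the same, mass flow must also rise about 15%, and friction losses in piping, which grow roughly with the square of flow velocity, rise by about a third.

```python
# Rough scaling for a hypothetical 15% thermal power uprate.
# Assumes the enthalpy rise per kg of steam/coolant is unchanged, so mass
# flow scales linearly with thermal power, and that friction pressure drop
# scales with the square of flow velocity (both simplifications).
uprate = 0.15

flow_increase = uprate                          # steam / cooling water mass flow
pressure_drop_increase = (1 + uprate) ** 2 - 1  # ~32% higher friction losses

print(f"Mass flow increase:     {flow_increase:.0%}")
print(f"Pressure-drop increase: {pressure_drop_increase:.0%}")
# Pipes, valves, pumps, heat exchangers, transformers and generators all
# have to absorb these higher loads, which is how uprating can accelerate
# the ageing mechanisms described above.
```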

While more recent nuclear power plants have equipment hatches for the replacement of large parts already included in the reactor building and containment, in older plants it may be necessary to cut a hole through the concrete, rebar, and steel liner of the reactor building and containment in order to exchange large components such as steam generators. The concrete must first be hydro-blasted, sawn, or chipped away by jackhammer from the rebar and the steel liner of the containment, leaving them exposed to the environment. These methods can weaken the containment and the steel liner severely.  

At Florida’s Crystal River 3 plant, for example, it was planned to cut a large hole in the concrete containment, which was strengthened with hundreds of tensioned vertical and horizontal steel tendons. But after the tension in some of the tendons was relaxed, unexpected stresses inside the concrete occurred, causing delamination and cracking of the containment. The operator Progress Energy’s repair attempts made the situation worse, and the plant was permanently shut down in February 2013. Another example of the pitfalls of heavy component replacement concerns the steam generator replacement in units 2 and 3 of the San Onofre nuclear power plant in California, which resulted in the permanent shutdown of both units. Severe and unexpected degradation of tubes appeared in the newly installed steam generators after only approximately 1.7 years and 1 year respectively of effective full power operation. The excessive tube wear was caused by a combination of flow-induced vibration and inadequate support structures. The risk of the replacement became obvious in January 2012, when a tube in the unit 3 steam generator experienced a coolant leak after only 11 months of operation. Steam generator tube ruptures are severe nuclear incidents which result in radioactivity transfer from the primary circuit into the secondary circuit and can also affect core cooling due to loss of coolant.

The safety concept of nuclear reactors builds upon a systematic approach comprising technical and organizational measures. The following fundamental safety functions must be ensured for all plant states, whatever the type of reactor:

1) Control of reactivity: limiting the insertion of reactivity; ensuring safe shutdown and long-term subcriticality; and ensuring subcriticality during handling and storage of irradiated and new fuel assemblies. 2) Removal of heat from the core and from the spent fuel pool: ensuring a sufficient quantity of coolant and heat sinks; ensuring heat transfer from the core to the heat sink; and ensuring heat removal from the fuel pool. 3) Confinement of radioactive material: confinement of radioactive material by effective barriers and retention functions; shielding of people and the environment against radiation; and control of planned radioactive releases, as well as limitation of accidental radioactive releases.

Replacement of the RPV (like the replacement of the containment) is impossible for economic and practical reasons. Consequently, if ageing mechanisms prevent further safe operation of these components, the reactor will have to be shut down. The risk of loss of RPV integrity increases under accident conditions, as the IAEA explains: If an embrittled RPV were to have a flaw of critical size and certain severe system transients were to occur, the flaw could propagate very rapidly through the vessel, possibly resulting in a through-wall crack and challenging the integrity of the RPV.

The IAEA identifies such severe transients as: • pressurized thermal shocks (PTS), characterized by rapid cooling of the downcomer and internal RPV surface, sometimes followed by repressurization of the RPV (PWR and VVER reactor types); and • cold overpressure (high pressure at low temperature), for example at the end of shutdown situations.

Unidentified degradation of RPVs, such as cracks and flaws, therefore has the potential to escalate an incident into an uncontrollable accident, even though it causes no problems during normal operation. During power operation the RPV is not accessible for inspections or intervention measures, so defects may remain undetected for long periods of time.

Extensive research programs are being conducted in order to gauge the resistance and stability of RPVs. At present there are conflicting scientific opinions concerning the current significance and further progression of ageing. Huge uncertainties are involved in estimating and predicting the progression of ageing and the long-term behavior of materials, especially under accident conditions.

A special problem arises from cracks in the RPV head penetrations – nozzles through which the control rods pass into the core. These nozzles are exposed to the high temperature and pressure of the RPV, the chemically aggressive primary coolant, and intense radiation combined with changes of load.

Ageing of reactor pressure vessel internals. The main function of RPV internals is to keep the nuclear fuel elements in the reactor core in a stable position. Stable reactor core geometry is a prerequisite for reactor shutdown and fuel cooling. Distortion of internals due to cracks, as well as the release of fragments from internals, may affect the function of the control rods and thus prevent safe shutdown, and may also compromise the cooling of fuel elements. Foreign particles or fragments of RPV internals which are released and transported into the primary circuit can damage other important components such as coolant pumps, pipes or vessels connected to the RPV.

Another problem affecting power plant electrical installations arises from the external power supply. The European network of transmission grids for electricity has grown beyond European frontiers in recent years, and its behaviour has changed from static to dynamic. This increasing dynamism and volatility of the electricity network has various causes, of which the input of electricity from variable renewable sources is only one. It also results from increasing electricity transit through countries, changing patterns of consumer behavior and the impact of changing electricity markets. Moreover, the upgrading and extension of the transmission grid has often been neglected or addressed belatedly. The resulting dynamism and volatility produce overloads, frequency deviations and other instabilities.

As a result the electro-technical design and components of a power plant – especially the unit transformers at the interface with the transmission network, but also the network protection equipment, other transformers, rectifiers, circuit breakers and so on – have to meet high quality standards. Otherwise short circuits or overloads can damage electro-technical components and propagate into failures of other plant equipment.

The unit transformers, usually two per unit, are often as old as the reactor itself. Replacement of the transformers is usually not envisaged due to the high costs of necessary power outages. Instead, comprehensive test procedures are conducted on ageing transformers. Nevertheless, ageing unit transformers and their protection systems often give rise to incidents resulting in reactor scrams and even compromising mechanical components of the power plant. Older unit transformers can suffer damage due to network instabilities, which can then result in transformer fires. In many cases, the root causes cannot be identified due to the destruction of the transformer. After several incidents in Germany, most German nuclear power plants have had their unit transformers replaced.

The development of science and technology continuously produces new knowledge about possible failure modes, properties of materials, and verification, testing and computational methodologies. This leads to technological ageing of the existing safety concept in nuclear power plants. At the same time, as a result of lessons learnt from operational experiences such as the major accidents at Three Mile Island, Chernobyl and Fukushima Daiichi, power plants have to fulfil new regulatory requirements. Thus earlier safety concepts are themselves becoming obsolete, in a process of so-called conceptual ageing. Very often, new regulatory requirements are applicable only to new nuclear reactors, while for existing plants different criteria are applied. Changes in safety philosophy can also be driven by the threat of malicious acts. The 9/11 terrorist attacks in the USA showed the need for more robust protection against external hazards. Older nuclear power plants have not been designed to withstand the impact of an aircraft on the reactor building. While an accidental aircraft impact was required to be taken into account in the design of some newer power plants, not one nuclear power plant worldwide has been designed to withstand the intentional impact of a large commercial aircraft like an Airbus A380. Accordingly, it can be questioned whether any existing nuclear power plant would withstand such an attack.

Ageing PWR and BWR design concepts. The fundamental design principles of modern nuclear power plants include redundancy; conceptual segregation of redundant subsystems, unless this conflicts with safety benefits; physical separation of redundant subsystems; a preference for passive over active safety equipment; and a high degree of automation. Reactors such as the two-loop PWRs Beznau 1 and 2, and Doel 1 and 2, have a limited number of safety subsystems. The original basic design of the Beznau reactors has only one emergency feedwater system and two core cooling subsystems (a small degree of redundancy). One common cooling pipe is used instead of the three or four independent subsystems typical of state-of-the-art modern reactors (therefore having no segregation of redundant subsystems). Although many additional installations have been carried out at Beznau to compensate for the design shortcomings, their quality standards would not meet the current high standards for safety systems. Retrofitting additional safety systems where space is short, because main structures cannot be changed, can result in higher complexity and in interface problems between existing and retrofitted systems. Similar problems exist in older BWRs of two-loop design.

A lack of robustness of the reactor building to withstand external hazards is a problem common to many older reactors.

Concerning the only operational German BWRs, Gundremmingen B and C, two former members of the German federal nuclear regulator have produced a list of design deficits. According to their analyses:

• the construction of the reactor vessel does not represent the technical state of the art; • only two of the required three redundancies of the emergency core cooling system are sufficiently qualified as safety systems; • the determination of the design basis earthquake has not been reviewed for decades, and the peak ground acceleration of the current design basis earthquake (a key parameter) does not fulfil the IAEA’s minimum requirements; • some safety-relevant components and subsystems are not qualified to resist the design basis earthquake; • the basic design of the spent fuel pool and its cooling system is outdated; and • the basic plant design does not take into account the possibility of flooding as a result of a breach of a nearby weir on the Danube.

VVER-440. The Russian VVER-440/V-213 PWR design (Dukovany 1–4, Paks 1–4, Bohunice V2 and Mochovce 1–2) suffers design problems concerning the emergency core cooling and emergency diesel generator systems. At Dukovany, external hazards may cause simultaneous loss of offsite power to all four reactors. In these circumstances, the simultaneous loss of function of the Jihlava River raw water pumping station, the raw water conditioning and the cooling towers is unavoidable. As a consequence of the loss of cooling and the subsequent overheating of the essential service water, a loss of the emergency diesel generators could also result. In this event only temporary emergency measures would be available for the cooling of the four reactors and their spent fuel pools. Furthermore, the two pipes that supply the raw water for all four reactors are not protected against any external hazards. Comparable design deficits affect the other European VVER-440/V-213s. To overcome major shortcomings of the design, both Finnish VVER-440/V-213 reactors are equipped with Western-type containment and control systems. The VVER-440 reactors are designed as twin units, sharing many operating systems and safety systems, for example the emergency feedwater system, the central pumping station for the essential service water system, and the diesel generator station. The sharing of safety systems increases the risk of common-cause failures affecting the safety of both reactors at the same time.

All VVER-440 type reactors with the exception of Loviisa in Finland have only a basic level of containment. External hazards such as earthquakes, chemical explosions or aircraft impacts were not taken into account in the original design of these plants.

Despite the defects of the type, it almost seems as though certain European countries are competing with one another to extend the lifetimes and uprate the power of their VVER-440/V-230 and V-213 reactors, as shown in Table 1.3. Finland and Hungary, in particular, intend lifetime extension up to 50 years and power uprating of 18 and 15 per cent respectively, while the Czech Republic and Slovakia are also planning lifetime extension and uprating.

The RBMK (Reaktor Bolshoy Moshchnosti Kanalniy) design from the former Soviet Union is a graphite-moderated reactor. The reactor’s characteristic positive void coefficient and instability at low power levels caused the April 1986 Chernobyl disaster, when the reactor core exploded due to a power excursion and released large amounts of radioactivity across Eastern and Western Europe, contaminating wide areas. There was a consensus during the 1992 G7 summit in Munich to close the last two European RBMK reactors outside Russia, located in Lithuania, due to strong concerns about the design. This decision was implemented as part of Lithuania’s EU accession: Ignalina 1 was closed in December 2004 and Ignalina 2 at the end of 2009, leaving Russia as the only country with operational RBMK reactors. The EU has agreed to pay Lithuania part of the decommissioning costs and some compensation for closure, and extended and increased its financial help in November 2013.

Ageing management as explained so far is explicitly aimed at creating the conditions for the extended operation of old reactors. However, regulatory requirements for extended operation of existing plants take into account the limited capabilities of ageing design features, which means that they do not correspond to the safety requirements for new reactors. Against this background, regulation is intended to allow a large degree of flexibility in the case of lifetime extension, not to set strict limits. Consequently, clear and generally accepted criteria for a maximum permitted degree of ageing are usually lacking, which is a major shortcoming in dealing with ageing effects.

The likelihood of system or component failure is commonly illustrated by the so-called ‘bathtub curve’ (Figure 1.9). A high incidence of early failures (mainly caused during design, manufacturing and installation) is followed by a significant decrease in failure probability. Later, the probability will increase again due to the increasing influence of ageing effects. The objective of ageing management is to keep the failure rate at a low level. Monitoring programs and resulting measures such as maintenance, repair and precautionary replacement of components have to come into effect before the failure rate begins to increase significantly towards the end of the technical lifetime. Ageing plants are thus approaching the edge of the bathtub curve. Technical modifications and changing modes of operation which result in higher loads, especially power uprating, have the potential to increase failure rates. Consequently, for ageing plants even a modest increase in lifetime may cause a significant increase in failure frequency, leading to a loss of safety-related functions.
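A crude way to picture the bathtub curve is to add a decaying “infant mortality” hazard, a constant random-failure hazard, and a wear-out hazard that grows with age. The sketch below is a generic reliability-engineering illustration with arbitrary parameters, not the report’s data or any real plant’s failure statistics.

```python
import math

# Toy bathtub hazard curve: early failures + constant random failures
# + ageing-related wear-out failures. All parameters are arbitrary.
def hazard_rate(t_years: float) -> float:
    early   = 0.05 * math.exp(-t_years / 2.0)           # infant mortality, decays away
    random  = 0.01                                       # constant background rate
    wearout = 0.002 * math.exp((t_years - 30.0) / 5.0)   # grows steeply past ~30 years
    return early + random + wearout

for t in (1, 10, 20, 30, 40, 50):
    print(f"year {t:2d}: hazard ~ {hazard_rate(t):.3f} per year")
# The wear-out term dominates beyond the original design lifetime, which is
# what 'approaching the edge of the bathtub curve' means: a few extra years
# of operation can bring a disproportionate rise in failure frequency.
```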

It is difficult to produce an accurate estimation of the risk of ageing-related failures for an extended reactor lifetime of over 40 years. A simple bathtub curve will probably not reflect the reality. Experience shows that a simple distribution of observed data must be qualified by the awareness of additional influences as follows: • Non-technical ageing effects are not considered within the failure rate as illustrated by the bathtub curve. In principle, it is not possible to show a clear mathematical distribution of these impacts over time. • Operational experience, which is an essential basis for the prediction of ageing-related failure rates, is for most reactor types available for less than 40 years of operation and so does not cover the proposed lifetime extensions. • Underestimated ageing mechanisms or new mechanisms which are constantly being discovered can result in unexpected damage and serious incidents. Additionally, the precautionary replacement of intact components prevents detailed evaluation of potential ageing mechanisms. • Ageing management programs as implemented so far have not proved sufficient to prevent the occurrence of serious ageing effects. Latent failures and damage at an early stage can remain undetected and cannot be observed in the failure rate. • Technical modifications and changing modes of operation result in higher loads. Power uprating in particular may contribute to a more frequent occurrence of ageing-related failures. • With increasing age, uncertainties in the assessment of the present condition and future performance of components may become more and more significant. • As a result of all these factors, the technical limit of a reactor’s lifetime may be exceeded earlier than initially assumed – contrary to the assumptions underlying extended operation.

A basic safety principle is that safety-related equipment must be proven in use. However, the development of technology means that technology originally used in a power plant design will become obsolete. Identical parts for repair and replacement are available only for a limited time. A change of equipment involves inherent risks, because an equivalent proof of satisfactory performance in service is not available.  EXAMPLE: the replacement of hard-wired control devices by digital control technology has triggered controversial discussions about how to guarantee the required reliability of safety-related control functions. Failure mechanisms and procedures for inspection and quality assurance are not transferable from one technology to the other. Susceptibility to faults may increase, and interaction between old and new control technology may cause additional problems.

There is an increasing trend for components to be delivered and installed without adequate quality certification. As a result, retrofitting or refurbishment of equipment carries a risk of introducing new defects into the plant.   EXAMPLE: in the course of a retrofit required for seismic protection, thousands of anchor bolts were wrongly installed in several plants in Germany and had to be replaced. Some manufacturers and suppliers intentionally offer substandard components to increase profitability. Naturally, such components cannot guarantee the required reliability and effectiveness.

EXAMPLES: In Japan between 2003 and 2012, several thousand electrical parts and fittings were delivered with faked certificates. Most of them were at the time of discovery installed in operational nuclear power plants. A significant proportion were used in components with safety-related functions. It has been suggested that around 100 employees of operators and of several suppliers were involved.

Spent fuel storage.  During operation of a nuclear reactor, a large inventory of radioactive fission products and actinides is produced in the reactor core. This radioactive inventory is concentrated in the nuclear fuel. After three to five years in the reactor core, the spent fuel is taken out of the RPV and replaced with new fuel. The spent fuel is then stored in spent fuel pools, to enable continuous cooling and the decay of the radioactive inventory. Spent fuel pools are fundamentally large pools of water. The radioactivity of the spent fuel assemblies inside the pool is shielded by the water above the fuel. A pool cooling system is required to remove residual decay heat from the pool. Spent fuel pools are located either inside the containment within the reactor building (as in many PWRs), inside the reactor building but outside the actual containment (as in BWRs) or even in a separate spent fuel pool building (as in many older PWRs).

After approximately five years, when the heat generation has decreased sufficiently, it is in principle possible to reload the spent fuel elements into dry storage casks, which can then be placed in an interim storage facility. At this stage heat removal from the spent fuel occurs passively via convection – active systems for heat removal are no longer needed.  As a nuclear power plant ages and spent fuel is added to the pool, the radioactive inventory stored there increases, thus increasing the potential level of radioactive contamination in the event of an accident involving the spent fuel pool.

Spent fuel storage policy varies between European countries. The spent fuel from Spain’s reactors is currently stored in the plants’ own pools. The original storage racks have been progressively replaced with significantly more compact units, so expanding the storage capacity. This so-called re-racking is also practised at other countries’ power plants, for example Bohunice in Slovakia. As a result of this approach, the radioactive inventory stored in the fuel pools is increased beyond the initial design values.

The cessation of reprocessing of spent fuel from Belgian reactors has led to stockpiling at the spent fuel pools at Tihange. The operator, Electrabel GDF Suez, has stated that by 2020 the on-site storage capacity for spent fuel will be full.

Risks of spent fuel storage. A loss of cooling to a spent fuel pool while there is spent fuel in the pool will lead to heating of the pool water and increased evaporation. The rate of heating of the pool water will depend primarily on the heat load in the fuel pool. Most heat will be contributed by the youngest spent fuel elements in the pool. The heat emitted by a fuel element depends on various factors such as the fuel type, the burnup and the time since the chain reaction was shut down. Thus, the time taken for the pool to heat by a given amount is not directly related to the quantity of spent fuel in the pool.
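A rough energy balance makes the point that the heat load, not the tonnage of fuel, sets the heat-up rate. The sketch below uses invented pool and decay-heat figures (not values from the report) and ignores evaporation and heat losses.

```python
# Rough grace-time estimate for a hypothetical spent fuel pool after a loss
# of cooling, ignoring evaporation and heat losses. All inputs are assumed
# illustrative values, not data for any real plant.
water_volume_m3 = 1200.0              # assumed pool water volume
water_mass_kg   = water_volume_m3 * 1000.0
cp_j_per_kg_k   = 4186.0              # specific heat of water
decay_heat_w    = 2.0e6               # assumed total decay heat in the pool (2 MW)
temp_rise_k     = 60.0                # e.g. from 40 C up to boiling at ~100 C

heatup_s = water_mass_kg * cp_j_per_kg_k * temp_rise_k / decay_heat_w
print(f"Time to reach boiling: ~{heatup_s / 3600:.0f} hours")
# About 42 hours with these assumptions. Halving the decay heat doubles the
# grace time, however many fuel assemblies sit in the pool: the heat load,
# not the fuel quantity, is what matters.
```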

Given sufficient evaporation of the water in the pool, the spent fuel elements will become uncovered and there is then a risk of them overheating and becoming damaged – in an extreme case a situation similar to a meltdown of the reactor core can develop, associated with the risk of hydrogen production and explosions.

Physical damage to the spent fuel pool could also lead to water being lost, with the spent fuel elements potentially being uncovered rapidly, again leading to fuel damage and a release of radioactivity.

The risks associated with spent fuel storage were initially perceived to be low in comparison to the risks associated with the nuclear reactor core. Reasons for this were the much lower power density of the spent fuel (compared with that of the fuel in the reactor core) and the much lower risk of a critical reaction in the spent fuel pool. Because of the low power density and the large amount of water in a spent fuel pool, considerable grace time is available in the event of a loss of spent fuel pool cooling, as long as the integrity of the fuel pool remains unchallenged.

This perception of low risk led to weaknesses in the safety of spent fuel pools, especially in older power plants, as follows: • Due to the perceived long grace time in the event of a loss of spent fuel pool cooling, cooling systems tend to have a poor level of redundancy in comparison with the emergency cooling systems for the reactor core. • As events involving a loss of external electricity were perceived to be likely to be of only short duration, spent fuel cooling systems are often not supported by emergency power supply systems.

• Spent fuel pools and their cooling systems are often not specifically protected against external hazards, especially in the case of older BWRs and VVER-440 reactors. • The fuel pool is sometimes placed outside the containment (BWRs, some older PWRs and VVER-440), thus making release of radioactivity to the environment possible in the event of fuel damage.

Changed perceptions of risk. Following the 9/11 terrorist attacks in the USA, a renewed discussion of the safety of spent fuel storage took place. It was acknowledged that spent fuel pools located outside the reactor building in dedicated spent fuel pool buildings have a considerably lower degree of protection against terrorist attacks such as a deliberate aircraft impact. Such attacks could lead to a long-term loss of cooling or the immediate destruction of the pool structure itself, thus resulting in fuel damage and consequent large-scale releases of radioactivity to the environment.

The 2011 Fukushima disaster demonstrated powerfully the risks associated with other external hazards to spent fuel storage. Cooling of the spent fuel pools was lost after the earthquake, when external power to the site was lost. In addition, the essential service water systems were destroyed by the subsequent tsunami. When the hydrogen explosions in Unit 1, Unit 3 and Unit 4 destroyed the upper parts of the reactor buildings, the spent fuel pools were uncovered and came into direct contact with the environment.

Furthermore, the integrity of the reactor buildings was compromised as a consequence of the earthquake and the explosions. It was consequently feared that the buildings could at least partly collapse, in which case the integrity of the spent fuel pools would also be lost and cooling of the fuel would no longer be possible. Moreover, large amounts of debris from the heavily damaged reactor buildings – including the heavy structures of the fuel handling crane – had fallen into the spent fuel pools, with the risk that it had destroyed fuel assemblies.

Staff had to attempt to ensure sufficient cooling of both the three reactor cores and the spent fuel pools simultaneously, which complicated matters further. For several days, the necessary cooling of the spent fuel remained a serious emergency challenge. First attempts were conducted with helicopters and water cannon, while later special truck-mounted concrete pumps were used. At the end of 2013, nearly three years after the event, the spent fuel pools, especially that of the badly damaged unit 4, still posed a severe danger to the site and surrounding environment. Full recovery of the spent fuel from all fuel pools is expected to take around another decade.

In the aftermath of the Fukushima disaster, the safety of spent fuel storage has again been keenly debated in many countries in the EU and worldwide.

For example, directly after the Fukushima catastrophe in 2011 the Swiss nuclear regulator ENSI ordered a design reassessment of spent fuel storage with regard to risks from earthquake, external flooding or a combination of the two. One outcome was that retrofitting of the spent fuel pool cooling system was required at the Mühleberg plant. However, the spent fuel pool itself has not been given improved protection against terrorist attacks such as a deliberate aircraft impact.

Improvements to the safety of spent fuel storage discussed in the EU include additional instrumentation to monitor the spent fuel pool temperature and water level, retrofitting of water feed systems to enable refilling the spent fuel pool from external sources in the event of a loss of cooling, and measures to protect against hydrogen explosions in the area of the spent fuel pool.

While these measures are important first steps to enhance the safety of spent fuel storage, other major shortcomings have not yet been addressed. No fundamental improvement of the physical protection of spent fuel pools that are not located inside well-protected reactor buildings has so far been discussed. Neither is the problem of containing possible releases of radioactivity from damaged spent fuel addressed by the improvements mentioned above. While freshly unloaded spent fuel requires several years of cooling in a spent fuel pool, another important step to enhance the safety of spent fuel storage would be the unloading of the older spent fuel from fuel pools into dry cask storage in physically well protected interim storage facilities.

External hazards and siting issues. Several of the lessons of the Fukushima disaster relate to the insufficient consideration of external hazards in the design and siting of the power plant. Furthermore it has become evident that additional problems arise from a severe accident happening in several units on one site at the same time.

Country-specific regulatory requirements may also change considerably due to new operational experience. For example, France is changing its regulatory requirements with respect to the assessment of flooding risks in response to the severe flooding of the Blayais power plant in 1999.

Loss of key external infrastructure as a result of a natural disaster is another important factor. Natural disasters with extensive and long-lasting effects were usually not taken into account as an explicit design basis condition. Today, a more robust degree of plant autonomy is required to cope with situations beyond the original design basis. Unfortunately, some measures to cope with emergency situations are based on conventional installations and infrastructure (external non-nuclear power plants, transportation routes, alternative cooling water resources) which are not as well protected as nuclear installations. This also holds true for some of the emergency preparedness measures for severe accidents that have been specifically introduced in response to the lessons learnt from the Three Mile Island and Chernobyl disasters.

Seismic hazards. Older nuclear power plants were often originally designed to resist a lower magnitude of earthquake than has to be taken into account today. Moreover, in the case of some sites with low seismicity, earthquakes were not considered at all in the original design, or only a very low level of resistance was requested. Today, even for sites with low seismicity, a minimum level of earthquake resistance is required. For several European power plants, this requirement remains to be fulfilled. In addition, new scientific findings require that seismic risk levels of existing plants are redetermined in accordance with the latest methods and data. In several cases, a recalculation of the robustness of existing plants to show consistency with the new standards has been accepted instead of the implementation of expensive retrofits.

Extreme weather conditions and climate change. The development of the risk posed by extreme weather conditions and the associated changes in risk perception are an important example of conceptual ageing.

In general, it is expected that normally occurring extreme weather conditions can be withstood by solidly constructed buildings, especially those designed to withstand extreme external events such as earthquakes, aircraft impacts or chemical explosions.

Scientific research has shown that an increasing intensity and frequency of extreme weather events must be expected. The possibility of nuclear emergencies due to extreme precipitation (including snowfall), sudden icing, storms and tornadoes, heat waves and droughts therefore has to be considered. The effects of these extreme weather conditions, such as flooding, landslides, clogging of cooling water inlets or drains, forest fires or water shortages, can directly compromise a power plant and can cause wide-ranging as well as long-lasting impairment of vital infrastructure. External infrastructure such as electricity and feedwater supplies and access roads is most threatened by natural impacts. It has to be assumed that during an extreme weather event the site will become inaccessible. The effectiveness of fire-fighting and other external assistance, and the delivery of external auxiliary emergency equipment and support, can thus be substantially affected.

Weak protection against natural hazards is a typical problem of ageing power plants, if the design is not adapted to cope with changing risk levels and new scientific findings. Nevertheless, in the context of the European stress test some operators refused a re-evaluation of external hazards. Conversely, some countries such as the Czech Republic admitted that they had underestimated extreme weather conditions up to now.

As reactors need large amounts of cooling water, they are usually located on lakes or rivers or by the sea. Consequently, the risk of flooding of the site has to be taken into account. New assessments according to the state of the art of science and technology often reveal insufficient flood protection missed by previous assessments. Changes of land use in the surrounding area (land sealing, water management, embankment) may influence the flooding risk. These changes may happen over a much shorter timescale than climatic changes and thus have to be re-assessed on a regular basis. As a rule, public flood protection is designed for less significant and more frequent flooding events than nuclear power plants need to be protected against, for example events with return periods of 100 years rather than 10,000 years. Unforeseen combinations of natural hazards including extreme weather (storm and precipitation, sudden icing, landslides) as well as insufficient plant protection (undersized drainage systems, missing sealing, water ingress through underground channels) can exacerbate the consequences of an extreme weather event. Some sites are forced to rely on temporary measures which are not as reliable as permanent flood protection measures, or indeed a location above the level of a design basis flood.

EXAMPLES: In December 2009, as a result of prolonged and heavy rainfall, large quantities of vegetation were washed into the river Rhône. Subsequently, the cooling water intake of the Cruas 4 reactor was blocked, leading to a shutdown of the reactor. After a shutdown, residual heat removal is still required to avoid overheating of the reactor. However, the residual heat removal system depended on the functioning of the same cooling water intake. The operator was forced to take emergency action: it took over five and a half hours to unblock the water intake.

In 2011 a flood had a serious impact on the Fort Calhoun power plant in Nebraska, even though it was less serious than the design basis flood. The site was flooded to a depth of 60cm. A rubber barrier installed as a temporary flood protection measure burst. Simultaneously a fire broke out in the control room. The electricity supply and some of the emergency diesel generators failed due to the flooding. The spent fuel pool cooling system was interrupted until the back-up emergency power supply started successfully. The entire site was inaccessible and some installations could not be reached for needed action. Staff had to remain on site for a prolonged period. Additional fuel had to be delivered rapidly and under difficult conditions to enable the emergency diesel generators to operate for a prolonged time.

Possible effects of climate change are insufficiently addressed, for example, in the safety design of older UK power plants such as Wylfa, Hunterston B and Hinkley Point B. Hunterston B and Hinkley Point B may not tolerate wave overtopping of protection dykes in the event of an extreme storm surge exacerbated by climate change. Flooding of installations may result, especially if the drain water discharge is not as effective as assumed in the safety design, for example due to unforeseen clogging. In this event, the power plants would have to rely on provisional measures, such as the use of fire hydrants to ensure cooling water supply at Hinkley Point, or temporary dams to protect against flooding. Climate change is predicted to result in sea level rise and higher intensity and frequency of extreme storm surge events, as well as increased maximum wave heights. Furthermore it must be acknowledged that dams or dykes do not completely guarantee flood protection. Ageing mechanisms reducing their reliability and efficiency are a common problem. In certain cases it has been shown that these installations are of inadequate size due to incorrect design assumptions and failure to adapt to changing standards. The European stress test report on Hinkley Point B summarized the potential impact of sea level rise there.

However, work subsequent to the second periodic safety review indicated a sea level rise due to climate change of approximately 0.88m at Hinkley Point B over the current century. This indicated that the sea level will be 9.18m AOD [above Ordnance Datum] by 2016. This level is still not adequate to threaten the main Hinkley Point B nuclear island at 10.21m AOD. However, the cooling water pumphouse at 8.08m AOD would be flooded, with consequential loss of the systems inside. The increased flood levels due to climate change do not change the nuclear safety arguments, as the flooding is infrequent and therefore loss of the cooling water systems remains tolerable given that the fire hydrant remains available.

Sites with multiple nuclear power plants and twin units. Until the Fukushima disaster, it had usually been assumed that it was an advantage to have several reactors at one site, as they could support each other with shared equipment, personnel or emergency power supply in the event of an emergency affecting one reactor. The negative impacts on a site's other reactors of a severe accident in one reactor were not appropriately taken into account. In practice, safety-related systems which are connected to multiple units or designed for alternating operation may give rise to adverse interactions. In many cases the shared usage of components and systems such as water reservoirs, pipelines and pumps is intended to compensate for an inadequate capacity of subsystems and/or insufficient redundancies. Multiple units are also often meshed by using cooling water inlets and pumping stations jointly. If a system's function is requested for one unit, its availability for the other unit or units may become insufficient. Switching operations and modifications affecting one unit may also result in unexpected effects on the other unit(s). Moreover, external hazards have the potential to cause simultaneous failures of identical components of several reactors on one site.

EXAMPLES: At Fukushima Daiichi, the site's external power supply was lost as a consequence of the earthquake. The pumping stations of the cooling systems and most of the emergency diesel generators on site were destroyed by the tsunami. The four oldest reactors at Fukushima suffered the greatest destruction. The oldest unit – Fukushima Daiichi 1 – was the first of three units to suffer a core meltdown, leading to a hydrogen explosion that partly destroyed the reactor building. The reactor cores of units 5 and 6, the newest units at the site and located on higher ground, remained undamaged. Fukushima Daiichi units 3 and 4 used a shared chimney as part of the venting system for severe accidents. Hydrogen gas produced by the overheating of fuel in unit 3 was released during venting operations and spread via piping connected to the common chimney into the reactor building of unit 4, leading to a severe hydrogen explosion.

It should be emphasized that the European Stress Test specification did not take specific account of issues facing multi-unit plants, and the risks of common-cause or consequential failures between units were seldom assessed in the Stress Test reports. The operators of multi-unit power plants often describe only a single reactor as a reference for all units, and their reports hardly touch on possible interactions between units or simultaneous problems at several units.

Considering the impact of the July 2007 Chuetsu earthquake off the coast of Japan's Niigata Prefecture on the Kashiwazaki-Kariwa multi-unit power plant, as well as the impacts of the March 2011 earthquake and tsunami on the Fukushima Daiichi site, the IAEA decided in October 2012 to focus on the problem, admitting that it had hitherto been neglected: The number of sites housing multi-unit nuclear power plants (NPPs) and other co-located nuclear installations is increasing. An external event may generate one or more correlated hazards, or a combination of non-correlated hazards arising from different originating events, that can threaten the safety of NPPs and other nuclear installations. The safety assessment of a site with a single-unit NPP for external hazards is challenging enough, but the task becomes even more complex when the safety evaluation of a multi-unit site is to be carried out with respect to multiple hazards… The currently available guidance material for the safety assessment of NPP sites in relation to external events is not comprehensive. The IAEA has not published safety standards in all the areas of this subject.

Development of infrastructure and population. Nuclear power plants are often built near areas of high population density to ensure proximity between power production and consumption, and because they require well-developed road and power supply infrastructure. Moreover, the extension of existing sites has often been given preference since decisions in favor of new sites became more difficult to secure. Of course, the already high population density surrounding sites may increase with time. In the meantime, increasing knowledge about the possible consequences of accidents and radioactive releases shows the need for new assessments of the risks to the public.

The more people are liable to be affected by emergency civil protection measures in the event of a nuclear accident, the more difficult such measures will become to implement. Information provision, monitoring, decontamination, traffic management and medical care, as well as the process of evacuation, will present severe organizational challenges for the civil protection authorities.

Most European countries have evacuation plans covering a radius of less than 10 km around their nuclear power plants. No harmonization of national regulations has yet been achieved. The experiences of Chernobyl and Fukushima, as well as modern computer simulations, show that external emergency plans for nuclear power plants should be extended. Calculations by the Öko-Institut show that an area as large as 10,000 km² could be affected by evacuation and relocation after a severe nuclear power plant accident involving a large and early release of radioactivity. A radius of more than 50 km around the plant may thus be affected.
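As a rough cross-check of these figures (not from the report, and treating the affected area as roughly circular for illustration), an area of 10,000 km² corresponds to a radius of

$$ r = \sqrt{A/\pi} = \sqrt{10\,000\ \mathrm{km^2}/\pi} \approx 56\ \mathrm{km}, $$

which is consistent with the statement that a radius of more than 50 km around the plant may be affected.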

Table 1.4 gives examples of older reactors close to the larger cities of Europe. Notably, all the main cities in Switzerland are in the neighborhood of ageing nuclear power plants and might be subject to evacuation in the event of a major accident. It should be emphasized that the region of Basel is the most seismically active region in Western Europe apart from Italy and Greece (neither of which has any operational nuclear power plants), and also has six of the oldest active reactors in existence. In the area of Fukushima approximately 150,000 people had to leave their homes, while around Chernobyl 116,000 people from the 30 km zone, and subsequently another 240,000 people, were permanently relocated.

Table 1.4 – European urban populations potentially affected by a major nuclear incident involving an older reactor

Older reactor | Country | Affected cities | Population in the area of the cities
Doel 1–4 | Belgium | Antwerp | 5,000,000
Tihange 1–3 | Belgium | Liège, Namur | 860,000
Dukovany 1–4 | Czech Republic | Brno | 800,000
Mühleberg | Switzerland | Bern | 500,000
Beznau 1–2 | Switzerland | Zürich, Basel | 2,000,000
Leibstadt | Switzerland | Zürich, Basel | 2,000,000
Gösgen | Switzerland | Zürich, Basel | 2,000,000
Fessenheim 1–2 | France | Mulhouse, Basel, Freiburg | 1,500,000
Gravelines 1–6 | France | Calais, Dunkirk | 300,000
Bugey 2–5 | France | Lyon | 1,300,000
Blayais 1–4 | France | Bordeaux | 720,000
Dungeness B 1–2 | United Kingdom | London | 14,000,000
Borssele | Netherlands | Ghent | 600,000

Lessons (to be) learnt from Fukushima – the EU Stress Test

Scope of the EU Stress Test. The European Stress Test focused on the ability of nuclear power plants to withstand events beyond the original design basis, sometimes referred to as robustness. To this end, severe events were defined whose consequences had to be investigated by the operators and the national regulators. In the light of the Fukushima disaster, external hazards played a key role in the EU Stress Test, with earthquake, flooding and extreme weather conditions required to be evaluated. Furthermore, as the earthquake and tsunami that caused the Fukushima disaster resulted in the total loss of important safety functions, an investigation of a postulated loss of electrical power and of the ultimate heat sink for the reactor core and the spent fuel pool, independent of the initiating event, was to be conducted.

The pre-planned measures to deal with a severe accident at the Fukushima site were not capable of preventing core meltdown and hydrogen explosions. Accordingly, the severe accident management measures in place in EU nuclear power plants, i.e. measures to secure the cooling of core and spent fuel pool and the integrity of the containment, and to restrict radioactive releases, were also to be investigated.

Shortcomings in the scope of the EU Stress Test

The scope of the EU Stress Test did not include other significant events that could lead to a severe accident, consideration of which is necessary for any comprehensive assessment of the safety of nuclear power plants, such as: • loss-of-coolant accidents; • reactivity-initiated events or anticipated transients without scram; • internal events such as fires or internal flooding; and • anthropogenic events, including terrorist acts such as deliberate aircraft impacts.

The specific topic of the ageing of nuclear power plants was also outside the scope of the EU Stress Test. This is of special importance, as several aspects of ageing discussed in section 3 will have an impact on either the probability of an initiating event or the possible consequences of such an event. For example, the risk of a small-break loss-of-coolant accident will be influenced by the quality of the chosen materials, the manufacturing processes, and the frequency and efficacy of in-service inspections. Ageing mechanisms will increase the risk of piping failures. Moreover, issues of design ageing, such as the absence or insufficient physical separation of redundant systems in older reactors, will increase the risk of common-cause failures in events such as internal fires or internal flooding, compared with the risk faced by a more modern reactor. Particularly with respect to malevolent events, the design requirements for older plants were much less demanding than those for more recent plants.

Thus, because of the restricted scope of the safety assessment and its failure to cover ageing as an important topic, the EU Stress Test cannot be seen as a comprehensive assessment of the safety of EU nuclear power plants as originally requested by the European Council.

The procedure clearly did not focus on important shortcomings in the original design basis of European nuclear power plants, nor on significant differences in the design bases of plants either within one country or in different countries. While the operator and national regulator had to discuss the conformance of the plant with its design basis, they were not required to consider the design’s compliance with modern standards such as the WENRA Safety Objectives for New Power Plants or even with safety standards for existing nuclear power plants such as the WENRA Reference Levels.

As a result, the design deficiencies of older plants were not fully covered by the results of the EU Stress Test. For example, for a loss of electrical power, important factors such as the physical separation or protection of the emergency power supply system were not analyzed in detail, even though the Fukushima disaster clearly showed that design flaws such as placing all emergency diesel generators and switchyards in the basement of the building without protection against flooding of the site can have a severe impact on the safety of a plant.

With respect to the robustness of the nuclear power plant, possible cliff-edge effects were to be identified. At the same time, however, no procedure was defined to assess the robustness of the plant with respect to those possible cliff-edge effects.

The typical schedule for a comprehensive safety assessment such as those that are performed in many countries on a regular, typically ten-year basis, foresees a longer assessment period. Operators prepare their safety assessment documents over several years, and several years more are required by the authorities and their technical support organizations to evaluate the operator’s reports and reach conclusions regarding necessary safety enhancements. Thus it is evident that, especially with respect to beyond design basis events, which have never before been analyzed in detail, only a very limited quantity of validated or even qualified documents was available for the assessment. An important part of the results produced by the Stress Tests thus had to rely on expert judgement. For older plants, the documentation produced during design and construction was not as comprehensive as is required today. Furthermore, first-hand knowledge of people who designed and constructed the plant is often no longer available, as noted in section 3.3. As a result, an in-depth assessment of older plants relying mostly on existing documentation will of necessity be limited in scope. As the number of site visits conducted in the course of the Stress Test was very limited, discrepancies between documentation and the actual status of individual plants could not be realistically assessed. No site visits were conducted for nearly two-thirds of reactors; for example only 3 out of 16 operational reactors in the UK and 12 out of 58 in France were visited. The oldest British reactors, at Wylfa, Hunterston and Hinkley, received no visits from reviewers.

Although a significant number of possible improvements was identified, not a single plant in the EU faced an unplanned shutdown or was permanently shut down as a direct result of the EU Stress Test. While a broad range of safety issues and good practices was identified in the framework of the Stress Test, there is still no unified or harmonized set of minimum requirements at an EU level. The actual level of improvements implemented is decided on a national basis.

Important severe accident response measures (such as hardened filtered vents) that had been developed and promoted well before the Fukushima disaster have still not been implemented in all EU nuclear power plants, and there is still no EU-wide mandatory requirement to implement them. Even in those plants where severe accident measures such as hardened filtered vents have been implemented, they are sometimes not fully protected against external events such as earthquakes. While important safety improvements, such as the installation of a diverse and fully independent secondary heat sink and an emergency control building, are identified by the Stress Test as good practices, there is no general consensus in favor of such retrofits. Some countries already have an additional layer of safety systems to ensure fundamental safety functions, including auxiliary systems (such as emergency diesel supply) in physically separated and/or specially protected buildings. Some countries such as France are preparing requirements to install a so-called 'hardened core' of equipment. Such a hardened core should safeguard all fundamental safety functions, including auxiliary systems, even against external hazards of a much higher impact than has been allowed for by design basis assumptions up until now. A hardened core of this kind would be a very valuable retrofit for all EU nuclear power plants. At the same time, it has to be acknowledged that the implementation of such a hardened core will take a number of years, even in France, where it has already been under discussion for some time.

While all the above aspects can be dealt with individually, the complex interactions between all of them have the potential fundamentally to undermine the safety level of ageing nuclear power plants.

The economics of nuclear power plant lifetime extension

The nuclear power plants that came on line in the 1970s, and which make up a significant proportion of the world’s nuclear generating stock, are now coming to the end of their expected operating life, typically 30–40 years. The replacement of these reactors with new nuclear capacity is highly problematic, for example in terms of cost, finance and siting, so utilities are increasingly looking to extend the lifetime of their existing nuclear power plants as the easiest way to maintain their nuclear capacity. If the cost of modifications were to prove relatively low, life-extended plants could be highly profitable to their owners because the capital cost (which makes up the majority of the cost of a unit of nuclear electricity) will already have been paid off, leaving only the operating and maintenance (O&M) costs to be paid.

The report looks at lifetime extensions of 10 years or more, as opposed to shorter extensions which are often granted on a more ad hoc basis. It focuses on pressurized water reactors (PWRs) and boiling water reactors (BWRs), which accounted for 271 and 84 respectively of the 435 reactors in operation worldwide in November 2013, and which encompass the majority of reactors being considered for lifetime extension. In a number of countries, only one or two reactors are coming up for retirement and the authorities' approach to lifetime extension may be tailored to specific conditions at these reactors. The report therefore focuses on the two countries, the USA and France, which, because they have a significant number of reactors nearing their original licensed lifetime, might be expected to have developed a more systematic process for authorizing lifetime extension.

The case for lifetime extension

The advantages to nuclear power plant owners of lifetime extension are as follows: • The cost is expected to be much lower than that of new-build nuclear or other electricity generation capacity. • Maintaining capacity on an existing site is much less likely to cause public opposition than new-build, even on an existing site. • Upgrading an existing plant represents a low economic risk because it is expected to be much less likely to lead to cost escalation and time overruns than new-build. • Unexpected technical problems are much less likely with a long-established design than with a new, relatively untested design. • If a plant’s capacity represents a large proportion of the country’s nuclear capacity, extending its lifetime will help maintain nuclear skills, which may be lost if the reactor(s) involved are closed. • It may allow upgrades to be carried out to improve the plant’s profitability, for example raising the output by installing a more efficient turbine generator. • It delays the start of decommissioning and reduces the annual provisions needed to fund this process. Decommissioning is technologically largely unproven, raises issues of waste disposal and is expected to be an expensive, challenging and controversial process.

However, the process of lifetime extension is dependent on convincing national nuclear safety regulatory authorities that the reactor's design is safe enough to allow it to be re-licensed for a period of time that represents a significant fraction (up to half) of its original expected lifetime. It is clear that none of the designs that are currently reaching the end of their lifetime could be licensed as new-builds, and even if major safety upgrades are made the plants in question will still fall short of the standards expected of a new plant. However, while the quality of these designs falls short of current requirements, the plants are much more of a known quantity; any major design flaws or construction errors are likely to have emerged after more than 30 years of operation, and the operating workforces are well-established and ought to be competent.

While lifetime extension is clearly an expedient option in many cases, it does raise serious questions. These include the following: • How appropriate is it to re-license facilities that inevitably fall well short of the design standards required for new plants? • How far can regulators be sure that all significant plant deterioration can be identified, especially in parts of the plant that are effectively inaccessible? • How far can regulators be sure that significant construction quality issues, which would be picked up now because of improved quality control technology or more rigorous procedures, do not exist? While regulatory approval is a necessary condition for continued operation, it is far from being a sufficient condition.

Concepts of power plant lifetime

There are at least six different concepts of the lifetime of a power plant, in particular, a nuclear power plant, which are relevant. These include: • design lifetime; • accounting lifetime; • economic lifetime; • political lifetime; • physical lifetime; and • regulatory lifetime.

Nuclear economics. Prior to discussing these concepts, it is useful to outline briefly the main determinants of the economics of nuclear power. A detailed discussion of the subject is beyond the scope of this report, but some basic information is useful. The major element in the cost of a unit of nuclear-generated electricity is the fixed cost, mostly comprising the construction cost. This fixed cost is determined by the construction cost itself and the cost of capital. There is no consensus on the construction cost of a nuclear power plant, and there has been a strong upward trend in real construction costs throughout the history of nuclear power. The cost of capital is highly variable and depends entirely on the circumstances of the plant, specifically the perceived risk of the project to its financiers.

The O&M costs represent the main element of the cost of a unit of nuclear-generated electricity besides the fixed cost. However, reliable data on O&M costs are in the public domain only for the USA. These data are available because the US economic regulatory system will only allow properly audited costs to be recovered from consumers. Even this source of data is becoming less extensive as more US plants recover their costs from a non-regulated, competitive market and are not required to publish accurate costs. In other countries, there is no incentive for utilities to publish O&M costs. Utilities regard this information as commercially confidential and also have good reason to present their investments in nuclear power in a good light, so data from other countries have to be treated with skepticism.

Design lifetime. The plant’s design lifetime is set by the specifications of the materials used and equipment installed, and how long these are expected to remain serviceable. The design lifetime is not a precise measure of how long a power plant will last, because this will depend on a number of factors, in particular the O&M regime. For example, if any thermal power plant is shut down and started up more often than expected, this will impose thermal stresses likely to shorten the life of the plant. If the plant is not maintained as well as expected, its life will be shortened. In the case of nuclear power plants, there is still limited experience of how long materials will last when exposed to radioactive bombardment. In practice, plants are retired not on the basis of the design lifetime but according to other factors, and design lifetime is not considered further in this chapter.

Accounting lifetime. Any capital asset is given an accounting lifetime when it enters service: this represents the period over which the construction cost is to be recovered. Once the initial capital cost has been recovered, the plant is said to be ‘amortised’, and the output can be profitably sold at marginal cost plus a profit margin. In the case of a nuclear power plant, for which the operating costs are expected to be a relatively low proportion, perhaps 30 per cent, of the overall cost of a unit of electricity, once the initial costs have been recovered the plant may be seen as a cheap source of electricity. However, this is not invariably the case: for example, in 2013 the retirement of five US nuclear power plants was announced because the costs of operating them and keeping them in service were too high for them to be profitable.
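As a purely illustrative sketch of why an amortised plant looks cheap (the roughly 30 per cent O&M share comes from the text above; the absolute per-MWh figures are invented for illustration):

```python
# Illustrative only: how amortisation changes the apparent cost of nuclear output.
# The ~30% O&M share is taken from the text; the absolute numbers are hypothetical.
capital_cost_per_mwh = 70.0   # fixed (construction + capital) cost, EUR/MWh (hypothetical)
om_cost_per_mwh = 30.0        # operating and maintenance cost, EUR/MWh (hypothetical)

cost_before_amortisation = capital_cost_per_mwh + om_cost_per_mwh  # 100 EUR/MWh
cost_after_amortisation = om_cost_per_mwh                          # 30 EUR/MWh

print(f"Before amortisation: {cost_before_amortisation:.0f} EUR/MWh")
print(f"After amortisation:  {cost_after_amortisation:.0f} EUR/MWh")
# Whether the amortised plant is actually profitable still depends on O&M and
# upgrade costs staying below the market price - as the 2013 US retirements show.
```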

In theory, whether a plant is amortised or not should not influence decisions on retirement – the initial costs have to be repaid whether or not the plant is operating. The operating costs should be the sole determinant of whether or not to retire a plant. However, whether or not plants are amortised may influence political decisions about their future. In Germany, the utilities are demanding compensation for the government's phase-out policy because closing the plants at about year 30 will prevent the utilities earning large profits from their continued operation. In Belgium, the government was demanding the payment of windfall taxes on the profits made by the utilities as a condition for allowing their plants to be life-extended. Unsurprisingly, the German utilities that filed for compensation for not being allowed to life-extend their plants claimed that their foregone profits would have been high, so as to ensure that their compensation will be high, while the Belgian utilities claimed that the profits of their life-extended plants would be lower than the Belgian electricity regulator's estimate, so as to minimise the windfall taxes payable. However, like design lifetime, accounting lifetime is an ex ante measure and not, generally speaking, a determinant of decisions on lifetime extension, and is therefore not considered further in this report.

Economic lifetime. Any piece of industrial plant is generally only kept in service as long as it is profitable. Once a piece of industrial plant such as a power plant is no longer profitable and there is little realistic prospect of it becoming profitable again, it will be retired. This is particularly relevant in the case of technologies in which progress is rapid, or when the costs of the existing technology or its potential replacements change. For it to be economic to replace a piece of plant, the cost of building and operating its intended replacement must be less than the cost of continuing to operate the existing plant. For example, in the past, old coal-fired power plants were often retired because new coal-fired designs were available that were so much more thermally efficient than their predecessors that the cost savings from lower coal consumption would more than pay for the capital cost of the replacement. Changes in environmental regulations may also help to justify the retirement and replacement of existing capacity. For example, in the 1990s combined cycle gas turbines had such low overall costs, because of low construction costs, low world gas market prices and high thermal efficiencies, that in some cases they were able economically to replace existing coal-fired capacity, helped by the fact that the cost of retrofitting environmental controls to the coal-fired plants was avoided (the environmental performance of the gas-fired plants being intrinsically superior). It should not be overlooked, however, that any unamortised capital costs of a plant that is retired and replaced will have to be met from the revenues of the replacement plant, in addition to its own capital costs.
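The comparison can be written as a simple inequality (a stylised formulation, not taken from the report), with all terms expressed per unit of electricity generated:

$$ c^{\text{existing}}_{\text{O\&M}} + c^{\text{existing}}_{\text{retrofit}} \;<\; c^{\text{new}}_{\text{capital}} + c^{\text{new}}_{\text{O\&M}} $$

As long as the left-hand side (the cost of keeping the existing plant running, including any required retrofits) stays below the right-hand side (the full cost of building and operating a replacement), continued operation is the economic choice; once the inequality reverses, retirement and replacement becomes attractive, ignoring any unamortised capital of the old plant, which has to be repaid regardless.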

Political lifetime. Major pieces of industrial plant may also be subject to considerations of political acceptability: if a process or product is no longer politically acceptable, the plant must be retired. This is clearly illustrated by countries with nuclear ‘phase-out’ policies where plants are retired because they no longer command public acceptance, even if the regulator is prepared to continue to license the plant. In some cases, the political forces are external; this was the case for Eastern European and former Soviet Union countries including Bulgaria, Lithuania, Slovakia and Ukraine, on which the West placed pressure to retire designs of nuclear power plant that it categorized as unsafe.

Physical lifetime. Many components in power plants are readily and quite cheaply replaceable, and a plant all of whose major components can readily be replaced can be seen as effectively having an indefinite lifetime. In practice, the lifetime of such plants will be determined by economic or regulatory factors. A simple analogy is 'your grandfather's axe', which had had three blades and four handles but was still the same axe. However, where there are components that it would clearly not be economically viable to replace – so-called life-limiting components – the plant's lifetime will be determined by the lifetime of those components. A simple analogy is a bicycle: failure of the frame means the bicycle has to be scrapped. The older a plant gets, the lower the value of the repaired plant tends to be, and the more likely it is that the replacement of a given component will turn out to be prohibitively expensive. For nuclear power plants, the most commonly quoted life-limiting component is the reactor vessel. If the integrity of the vessel can no longer be guaranteed, there is a risk of the core being exposed to the environment and the plant has to be retired.

Regulatory lifetime. Before the accident at the Three Mile Island power plant in Pennsylvania, USA, it was assumed that the simultaneous failure of two independent safety systems was so unlikely as to be effectively impossible. Three Mile Island proved that this was not the case, so additional safety requirements had to be introduced.

There is variation between countries in the duration of nuclear power plant licences. In the USA, nuclear plants were given a lifetime of 40 years by the Nuclear Regulatory Commission (NRC), at the end of which the licence must be renewed or the plant shut. At the other end of the spectrum, in the UK, once a nuclear plant has been licensed for operation, that licence remains in force only until the next major maintenance shutdown, usually about a year ahead, after which the regulator (the Office for Nuclear Regulation, ONR) must approve the restart. In France, nuclear power plants are subject to a 10-yearly review by the Autorité de Sûreté Nucléaire (ASN). Even in the USA, the 40-year licence does not give the operator carte blanche to run the plant for 40 years, as it can be withdrawn at any time. For example, in 1987, the NRC found evidence of poor operating practice at the two-unit Peach Bottom site in Pennsylvania. As a result the two reactors were closed for more than two years until the NRC was satisfied that the issues had been resolved. Severe reactor head degradation was found at the Davis-Besse power plant in Ohio, and the plant was kept off-line for two years until repairs had been carried out to the NRC's satisfaction.

Experience of nuclear plant lifetimes. Some of the nuclear power plants that have so far been retired around the world were early designs that had been shown to have design problems. For example, four out of six of the first-generation BWRs were retired around 1980 because their steam generators were causing serious problems. Experience of nuclear technology and of regulatory approval of new designs should mean that serious design errors are less likely now. However, such errors are still possible, particularly in the case of more radical new designs. For example, the N4 design developed by Framatome (predecessor of Areva, the French publicly owned nuclear power corporation) for four reactors built in the 1990s in France contained a number of significant design errors that delayed commercial operation and necessitated significant design changes.

In practice, nuclear power plants may be retired for a combination of reasons; in the following tables the reason for retirement listed is the major one.

Nuclear power plant retirements to date have been dominated by the USA, Germany, Eastern Europe and the countries of the former Soviet Union. By comparison, there have been relatively few retirements in the rest of the world.

In the USA, the dominant reason for plant retirement has been economic, particularly in the 1990s and again in 2013 – both times when the natural gas price was low and nuclear power plants could be economically replaced by gas-fired plants. The NRC had actually given approval in principle for two of the five plants whose retirement was announced in 2013 to continue to operate for a total of 60 years. One study identifies 38 US reactors as being under threat of closure on economic grounds, with 12 under particular threat (see Annex 1). This shows how quickly the outlook for an operating nuclear power plant can alter with changes in fossil fuel prices, the need for significant repairs and the need for significant safety upgrades. The more nuclear plants are exposed to unpredictable wholesale electricity markets, the more economically vulnerable they become. The five plants whose retirement was announced in 2013 deserve further discussion as, while the fundamental issue was cost, there were important differences between them that illustrate the issues involved in lifetime extension.

San Onofre 2 and 3. Units 2 and 3 of the San Onofre plant in California were completed in 1983 and 1984 respectively. They were built and are owned by Southern California Edison (SCE). The retirement of the San Onofre units was related to the cost of replacing the steam generators. The plants had been closed in January 2012 after the discovery of tube wear in the steam generators, which had been replaced as recently as 2010 (Unit 2) and 2011 (Unit 3) at a cost of $602m. SCE claimed in November 2012 that it was safe to continue to operate the units at 70 per cent capacity, but by May 2013 it had been unable to convince the NRC of its case and the plant was shut down. SCE is now trying to recover the cost of the apparently inadequate replacement steam generators from the supplier, Mitsubishi, and from its insurer, and also wants to pass any unrecovered costs on to consumers. The issue facing SCE is how far it will be able to recover both these costs and the replacement power costs from its consumers. California has a regulated energy market, and as of September 2013 there were doubts as to whether the regulator, the California Public Utilities Commission (CPUC), would allow these costs to be recovered. By November 2013, it seemed likely that CPUC would rule that already calculated replacement power costs would have to be refunded to consumers. The closure of the plant therefore seems to have been related more to concerns about the safety of the steam generators and the consequent need to have them replaced, and to uncertainties about recovery of the repair costs and related future costs, than to the cost of gas-fired alternatives.

In Germany, the dominant reason for plant retirements has been the political decision to phase out nuclear power, first taken in 2002 (as a result of which two reactors were retired) and then reconfirmed in 2011 after the Fukushima disaster, whereupon a further eight reactors were retired. The remaining nine reactors will be progressively retired over the period from 2015 to 2022.

Eastern Europe and the former Soviet Union. In Eastern Europe and the former Soviet Union, the dominant reason for plant retirement has been concerns about the safety of some Soviet technologies – especially the RBMK design used at the Chernobyl site, but also the first generation Soviet PWR, the VVER. A condition for entry into the European Union for Bulgaria, Slovakia and Lithuania was that plants using these suspect designs be retired. Russia’s own regulatory process is not open and the reasons for retirement of plants are not publicly disclosed.

The RBMK design uses graphite as a moderator, and if the integrity of the moderator cannot be assumed, safety issues emerge. During the 1990s Russia essentially rebuilt four reactors of the RBMK design at the Leningradskaya site near St. Petersburg, with shutdowns of about two years. The plants were also upgraded to take account of the lessons from the Chernobyl disaster, and after a further 18 month shutdown to repair the graphite, the first unit at the site was returned to service in November 2013. The other three units are now expected to undergo similar repairs. It has not been reported how long these reactors are expected to continue to operate. The six RBMKs built outside Russia, in Lithuania and at Chernobyl, have all been retired. Including the four at Leningradskaya, eleven RBMKs remain in service in Russia but these will not be considered further because the determinants of their lifetime are very different to those of PWRs and BWRs and because there is no reliable information on the standards the Russian authorities require these plants to meet.

In the rest of the world, there has been a mixture of reasons for retirement. The gas-cooled reactors (GCRs) using carbon dioxide as a coolant and graphite as a moderator (installed in the UK, France, Italy, Spain and Japan) were very expensive to operate, and all except those in the UK have now been retired. In the UK, all reactors of the first-generation Magnox design have been closed except for one, expected to close in 2015; but all seven plants using the second-generation UK design, the Advanced Gas-cooled Reactor (AGR), remained in service in 2013. For graphite-moderated reactors, the main life-limiting component is the graphite moderator framework, which thins and distorts with exposure to heat and radiation. The GCRs are not considered further in this report because the determinants of their lifetime are different to those for PWRs and BWRs.

In the Canadian-designed Pressurised Heavy Water Reactors (CANDUs), the fuel is contained in a large number of pressure tubes rather than in a single pressure vessel. Up until 1987, it was assumed that these pressure tubes would leak before breaking so that there would be ample warning of a pressure tube rupture, and tube failure was therefore not seen as a serious safety issue. This assumption was then proved false when it was discovered that rupture could occur unpredictably. Since then, once the integrity of these pressure tubes can no longer be assumed (expected to be after 20–25 years), they must be replaced in a major repair. For three reactors, the cost of this was seen as unjustifiable and they were therefore retired. The special issue of the integrity of the pressure tubes means that the decision-making for CANDUs is somewhat different to that for PWRs and BWRs, and accordingly CANDUs are not considered further in this report.

Following a 1987 referendum, Italy took the decision to close its nuclear plants, and although there were attempts by Prime Minister Silvio Berlusconi to reverse this decision, it was confirmed by a second referendum in 2011. A phase-out decision taken in Sweden following a 1980 referendum led to only two out of 12 of the country's reactors being shut down before the policy was abandoned in 2010. Similarly, a phase-out promise made in 2004 by the Spanish government has led to the closure of only one of the remaining nine units, a very small, old reactor.

The impetus for lifetime extension programmes. Until the last decade, nuclear power plants had an expected lifetime of 40 years or less. As the first wave of commercial nuclear power plants did not enter service until the mid-1960s, plant retirements were few and generally driven by economic factors, design issues or political factors. Table 2.5 shows that for most countries dealing with retirement is still not a major issue. Nearly half (14) of the 31 countries operating nuclear power plants have no reactors aged 35 or older.

Countries with more than 40 per cent of their reactors in service or under construction aged 35 or older, that use PWRs or BWRs and that have three or more reactors aged over 35 (see Table 2.5) include Belgium, Sweden, Switzerland and the USA. The first three of these countries have or have had nuclear phase-out policies, which if carried through would mean that the issue of lifetime extension would have limited relevance.

The USA is by far the most advanced country in terms of its progress towards lifetime extension: the majority of its reactors have been given approval by the NRC to operate for at least 60 years as opposed to the 40-year life for which they were originally licensed. However, this was done before the Fukushima disaster and, as has been demonstrated by the retirements in 2013, the existence of permission to extend a reactor’s lifetime to 60 years is far from a guarantee that it will actually operate for this long.

While France appears to have less need to consider lifetime extension as yet, the scale and speed of the French nuclear power programme from 1977 to 1992 means that the issue is already of importance for planning. Of the 58 reactors in service in 2013, 23 were commissioned in the period 1977–82 (see Table 2.6), representing more than 20GW of capacity. If France were to replace all this capacity with the latest French design, the European Pressurised Water Reactor (EPR), this would require at least 13 new reactors. If we assume that the cost per reactor would be the same as that agreed by the UK government for the Hinkley Point C EPR, €9.5bn, and that the existing reactors were replaced at age 40, the investment needed before 2022 would be in excess of €120bn in present-day terms, a sum that would be difficult for France to finance. To put this figure in perspective, it represents about double the annual turnover of the entire global EDF group.
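The arithmetic behind the €120bn figure can be reconstructed roughly as follows (a sketch assuming an EPR unit size of about 1.6GW, which is not stated in the text; the capacity and unit cost are taken from the paragraph above):

```python
import math

# Rough reconstruction of the replacement-cost estimate for the 1977-82 French reactors.
capacity_to_replace_gw = 20.0   # capacity commissioned 1977-82, from the text
epr_unit_size_gw = 1.6          # assumed rating of one EPR unit
cost_per_epr_bn_eur = 9.5       # unit cost agreed for the UK EPR, from the text

units_needed = math.ceil(capacity_to_replace_gw / epr_unit_size_gw)  # -> 13 reactors
total_cost_bn_eur = units_needed * cost_per_epr_bn_eur               # -> 123.5

print(f"{units_needed} EPRs at EUR {cost_per_epr_bn_eur}bn each "
      f"is roughly EUR {total_cost_bn_eur:.0f}bn")  # in excess of EUR 120bn
```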

However, President François Hollande was elected on a promise to reduce the nuclear contribution to France’s electricity from 75 per cent to 50 per cent, and has promised to close the two oldest reactors, at Fessenheim, by the end of 2016. Moreover, the ASN is requiring an expensive range of upgrades to take account of the lessons from the Fukushima disaster, making lifetime extension less attractive. The French case is therefore complex and highly uncertain.

For the purposes of lifetime extension, it is clear that the technologies under consideration are far short of the standards that would be required for a reactor planned today. By definition, all were designed before the Browns Ferry accident of 1975 and can take only limited account of the lessons learnt there, much less the lessons from the Three Mile Island (1979) accident and the Chernobyl (1986) and Fukushima (2011) disasters. The Browns Ferry accident occurred when a fire in a cable tray disabled the control systems for all three reactors on the site and led to the recognition of the need for a much greater degree of independence of the reactors on a multi-unit site. The first reactors designed post-Chernobyl have yet to enter service, while it is clear that the lessons to be learnt from Fukushima are only now beginning to emerge and that it will be decades before they are fully embodied in the available reactor designs.

Many of these design lessons cannot be applied to existing reactors. For example, the Chernobyl disaster led to a requirement in some jurisdictions that 'core-catchers' be installed to prevent the core burning down into the environment in the event of a reactor vessel failure. Similarly, the 9/11 terrorist attack led to a requirement that reactor containments should be able to stand up to impact from a full-size civil aircraft. It is clear that neither of these requirements could be met in existing reactors, and that the best available technology (BAT) standard cannot therefore be met. So the decision to life-extend inevitably means giving what is essentially a new life of perhaps 20 years to a facility that falls far short of current best practice. Regulators must therefore judge how far short of current standards it is acceptable for facilities to fall.

Conclusions. Very few nuclear reactors have been retired because they have reached the end of their licensed lifetime. Much likelier life-determining factors are: the economics of the plant; the existence of national phase-out policies; serious and unexpected equipment failures; and, for older designs in particular, the existence of design issues that make their continued operation unacceptable in terms of current standards. There seems to be a consensus among regulators that most existing reactors can in principle be safely operated for 60 years, and there are even investigations in the USA into extending lives to 80 years.

However, in the 15 years since lifetime extension began to be adopted, the perception of the risk attached to assuming a significantly longer life has increased. In the USA, the process of obtaining the first lifetime extensions went smoothly, without major plant modifications being required. However, as more problematic plants came up for consideration and safety-related incidents (initially the 9/11 attack) began to play a role in official thinking, the process became more difficult and expensive. It also became clearer, especially after the Fukushima disaster, that in-principle approval for a reactor to operate for 60 years was far from being a guarantee that it actually would complete a 60-year operational life.

The collapse of natural gas prices in the USA also emphasized that there are economic risks to lifetime extension, with two of the four plants retired in the USA in 2013 being closed purely on the grounds that they were expected to become loss-makers.

A longer lifetime gave utilities the opportunity to justify upgrades aimed at improving the economics of a plant, such as power upgrades. However, as the risks and costs of lifetime extension became clearer, the case for this additional discretionary investment was weakened.

Regulators face the difficult task of determining how safe is safe enough. It is clear that the designs of plants now reaching the point where lifetime extension will be considered fall far short of the requirements for a new plant, and that retrofitting to bring them up to today’s new-build standards would be technically and economically infeasible. As a result the required standard for the upgraded technology of a life-extended plant tends to be merely that the risk should be as low as reasonably achievable (ALARA), with the ‘best available technology’ (BAT) standard being unattainable.

There appears to be a significant difference between the requirements of the US regulator NRC, and those of the French regulator ASN, particularly post-Fukushima. The ASN is now requiring an extensive range of upgrades, for example improved seismic resistance and flood protection of back-up power and control rooms. The NRC does not appear to have modified its requirements significantly in the light of Fukushima, and the cost of related modifications appears to be much lower than in France, despite the fact that some US reactors are of comparable type and vintage to Fukushima’s, whereas the French reactors are of a very different design.

Nuclear Liability Of Ageing Nuclear Reactors

The relationship between reactor lifetime extension and nuclear liability is a key issue, which is the particular focus of this chapter. It analyses the possible impact of lifetime extension on nuclear liability and examines to what extent a nuclear operator would be liable for the costs of an incident affecting a life-extended reactor. It addresses the following questions: • Does the current legal framework on nuclear liability address nuclear ageing and lifetime extension of reactors? • Would it be a good idea to have a specific provision addressing nuclear ageing and lifetime extension of reactors? • What is the liability of suppliers of upgrades for life-extended reactors?

According to European Commission figures, the March 2011 Fukushima disaster caused €130bn of damage.

The question now arises whether a nuclear incident in Europe would cause a similar amount of damage. A report by the French Institut de Radioprotection et de Sûreté Nucléaire (IRSN) has indicated that the damage caused by a serious nuclear incident in France would cost between €120bn and €300bn.

The costs of the Fukushima disaster as well as the recent French study demonstrate once again that the amounts provided for under the nuclear liability conventions are far too low. Even assuming that the 2004 Protocols to the Paris Convention and the Brussels Supplementary Convention were in force, this would mean that potentially only half of one per cent of the damage could be compensated for (€1.5bn available against damage of €300bn).
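The 'half of one per cent' follows directly from the figures quoted, using the upper end of the IRSN damage estimate:

$$ \frac{\text{€1.5bn}}{\text{€300bn}} = 0.5\% \qquad \left(\text{or } \frac{\text{€1.5bn}}{\text{€120bn}} \approx 1.25\% \text{ against the lower estimate}\right) $$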

A first consequence of the implicit subsidy created by limited liability is that nuclear operators may enjoy a preferential situation in the energy market compared with other producers that do not receive such a subsidy. Since operators of nuclear plants do not have to internalize the full social cost of their activity, the price of nuclear energy will be artificially lowered compared with energy from other sources, leading to a distortion of competition and reducing the incentive to build other types of power plant.

A consequence of inadequate victim compensation is that it would be very hard to ensure equal treatment of victims. There is a significant risk that victims who file a claim first will be awarded compensation first, while victims who are later in filing a claim (for example because effects on health become apparent only some time after the incident) face the risk of receiving less compensation or no compensation at all, especially once the compensation already awarded exceeds the limited liability amounts. This possibility raises important issues in terms of fairness between victims.

Insurance of nuclear risk

Reactor ageing and lifetime extension may of course have important consequences for the demand for nuclear insurance and financial security, and for the price of the cover provided. To the extent that the probability of a nuclear accident increases with ageing, there are consequences for the premiums charged; to the extent that both the chance of failure and the magnitude of the potential damage (because of decreasing functionality of protection barriers) may increase, there may be consequences for the necessary scope of cover. This prospect threatens to exacerbate the tendency whereby debate on reform of nuclear liability (for example towards unlimited operators' liability) has always been obstructed by the argument that higher levels of liability than currently provided for by the conventions, and certainly unlimited liability, would be uninsurable. As we will argue below, this argument contains serious fallacies. First, policymakers have been too dependent on one-sided information provided by the nuclear industry as to what amounts would be insurable. More recent estimates, for example by nuclear reinsurers, hold that substantially larger amounts could be covered; moreover, as the examples of some EU Member States show, it is not necessary to link the level of nuclear liability to the available level of insurance coverage on the market. Liability could in principle be unlimited (as in Germany), but the required financial cover could be limited to the amount that could be provided by the market. Policymakers need to become much more critical and, rather than relying on one-sided information provided by the nuclear lobby, conduct an objective analysis of the cover available on the financial and insurance markets, taking into account information from relevant stakeholders such as large reinsurers.
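As a minimal illustration of why both factors matter for the cost of cover (an expected-loss sketch with invented numbers, not how nuclear risk is actually priced by insurers or pools):

```python
# Minimal expected-loss illustration: fair premium ~ accident probability x insured damage.
# All numbers are invented; real nuclear insurance pricing is far more complex.
def fair_premium_bn_eur(annual_accident_probability: float, insured_damage_bn_eur: float) -> float:
    """Actuarially fair annual premium, ignoring loadings, pooling and liability caps."""
    return annual_accident_probability * insured_damage_bn_eur

newer_plant = fair_premium_bn_eur(1e-5, 100.0)  # hypothetical newer plant
aged_plant = fair_premium_bn_eur(3e-5, 150.0)   # higher failure chance, weaker barriers

print(f"Newer plant: EUR {newer_plant * 1000:.1f}m per year")  # 1.0m
print(f"Aged plant:  EUR {aged_plant * 1000:.1f}m per year")   # 4.5m
```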

Conclusions

Countries that opt for reactor lifetime extension should do so only in the context of substantially improved arrangements for compensation of victims of a nuclear incident – a higher level of liability will not only benefit the victims of a nuclear incident but will also have an important preventive effect. There seems to be little doubt about the advantages of some of the principles of the international nuclear liability regimes, especially as far as strict liability and compulsory insurance are concerned. There has, however, been much criticism of legal channeling, limited liability and state funding. Strict liability favours victims because they do not need to prove negligence or fault on the part of an operator in order to be compensated. Compulsory insurance guarantees that a certain level of compensation will be available even if, for example, an operator goes bankrupt after a nuclear incident.

The other principles of the conventions were created in favor of the nuclear industry: the limitation of liability is the most striking example. The amount of limited liability was set not as a function of the potential cost of the damage caused by an incident, but as a function of the capacity of operators to buy financial security for their third-party liability. Limited liability is an effective subsidy to the nuclear industry and should be abolished. Nuclear operators should be subject to unlimited liability, just like any other industrial corporation.

Concentration of liability (legal channeling) also clearly favors the wider nuclear industry because suppliers cannot be held liable for damage caused by goods or services they supplied. Closely linked to concentration of liability is the concentration of jurisdiction. The aim of this provision is to guarantee that no judge in a country other than the one where the incident occurred will accept jurisdiction and apply legislation denying limited and concentrated liability. Overall, the balance of the conventions is largely to the advantage of the nuclear industry, which is unsurprising given that their principles are based on studies conducted on behalf of the US Atomic Forum (the previously mentioned Preliminary and Harvard studies).

Given the conclusion that a nuclear operator should not be able to benefit from any limitation of liability, there is little advantage in advocating that the liability levels of power plants whose reactors have been granted lifetime extensions should be higher than those of other nuclear power plants. To allow such a difference would be implicitly to favor limited liability for ‘non-extended’ reactors. There is no reason why non-extended reactors should continue to receive such a subsidy.

The question then arises whether, given its larger risk, a life-extended nuclear reactor should perhaps be subject to a higher level of compulsory liability insurance. Such a proposal is unconvincing. If European operators were pooled in a US-type system of retrospective premiums, the operators would mutually monitor one another, and we can assume that they would not allow a bad risk into their system. If a life-extended reactor represented a higher risk, this would inevitably be reflected in the premium demanded of the operator.

Another severe criticism of the current nuclear compensation system offered by the conventions is that it would potentially compensate only about one per cent of the damage caused by a major nuclear incident. This situation needs to be changed not only in the framework of reactor lifetime extension, but also for all current and newly built nuclear power plants.

Given the clear advantages of the US nuclear liability and insurance system, other countries should envisage the creation of a similar model. It is true that the US system is not perfect either, since for example it also limits operators’ liability. Moreover, the retrospective premium creates a potential insolvency risk, while it is to be feared that the US Government would intervene if damage were to exceed the second tier of coverage. However, the Price-Anderson Act does internalize the costs of a nuclear accident to a much greater extent than the system defined by the nuclear liability conventions.

Politics, public participation and nuclear ageing

This chapter explores the means by which the public can influence decisions on the lifetime extension of nuclear reactors. As described in earlier chapters, the decision to extend the lifetime of an ageing nuclear reactor is made on the basis of interactions between a range of factors. Nuclear safety is one of these, and at least in terms of nuclear public relations it is given priority. In reality, however, economic or political arguments can play an overriding role.

As Chapter 1 explains, in terms of nuclear safety we are entering a new era of risk. Because of the short-lived nuclear construction boom that started in the 1970s, the number of reactors operating beyond their originally foreseen design lifetime of 30 or 40 years is growing rapidly. And after Fukushima, public concerns about nuclear power are growing as well. These anxieties have already brought a de facto end to the nuclear renaissance previously talked up by the industry, with reactor construction worldwide slowing considerably. The industry is also wary of any increase in public concern about old reactors, hiding the reality behind acronyms such as PLEX (plant lifetime extension) or the more recently introduced term LTO (long-term operation). Few people know that these terms denote plans to increase the lifetime of already outdated nuclear designs by 50 or even 100 per cent. If they knew, many might feel that this was an unacceptable gamble on technology.

Ownership status of the operator. In a number of countries, such as Ukraine, the Czech Republic and Hungary, the nuclear operator is a state-owned company and dividends from the operation of nuclear power plants go to the state budget. This can compromise the government’s objectivity concerning lifetime extension of older reactors, because their continued operation will help to meet budget commitments. Because the respective governments also have a seat on the board of their state-owned utilities, the national nuclear regulator has to withstand coordinated pressure from both sides.

Conversely, privatization can also complicate reactor lifetime decisions. We have already mentioned the example of Borssele in the Netherlands, where after privatization of the state-owned utility, the restriction of the lifetime to 40 years (the reactor’s design lifetime) was overturned and the reactor’s lifetime prolonged by 20 years under threat of large compensation claims. The Dutch nuclear regulator, de Kerntechnische Dienst, which is part of the Ministry of Economic Affairs, Agriculture and Innovation, is now under pressure from this political promise of an extended lifetime as it assesses whether to allow prolonged operation after a periodic safety review (PSR).

Political clout of the operator. When Angela Merkel became Chancellor of Germany for the second time in 2009, she had to fulfil her election promise to the four nuclear operators, made in return for their support, that she would reassess the nuclear phase-out law adopted in 2002. This reassessment resulted, in September 2010, in an average extension of reactor lifetimes of 8 years for older reactors and 14 years for newer reactors. The decision was reversed a few months later, after the Fukushima disaster.

Other factors. There are in addition other factors, known from previous nuclear decisions, that may influence a decision to grant a lifetime extension to an ageing nuclear reactor. These include energy security arguments (especially where there is little awareness of potential alternatives), legal complexity, lack of access to information (for example where the operator has an information monopoly on crucial data), and undue influence on the operator’s part on the national media (for example as a major advertiser).

The regulator under pressure. Among the stakeholders in the decision process around lifetime extension, a country’s nuclear regulator holds a key position. Not only can it order the closure of a nuclear reactor that it deems substandard, it can also demand proposals for upgrades, prescribe upgrades or prescribe changes in management and safety culture. In addition to nuclear safety, its decisions will have implications for the economics of the power plant and its operator, as well as for its organizational culture. Given the powerful position most nuclear operators hold in national life – many of them have a significant share of the national electricity market, in some cases amounting to more than half – the regulator’s decisions are also highly political. Accordingly, proven independence is vital to enable the nuclear regulator to maintain a non-negotiable emphasis on nuclear safety.


945 U.S. Superfund sites vulnerable to climate change

Sources: GAO analysis of Environmental Protection Agency, Federal Emergency Management Agency, National Oceanic and Atmospheric Administration, and U.S. Forest Service data; GAO-20-73

Preface. The energy crisis is likely to strike soon since global peak oil production was reached in November 2018 (EIA 2020). Let’s use energy to clean up these Superfund sites and nuclear waste, rather than wasting energy on wind turbines and solar panels. Time is running out. Over 945 Superfund sites (of 1,315) may be affected by climate change due to floods, wildfires, storm surge, or sea level rise in the future.

In the late 1990s, during President Bill Clinton’s second term, the EPA averaged 87 completed cleanups per year; over the first six years of the George W. Bush administration, the number dipped to 40; Obama’s first year in office saw 20 completed cleanups, and in 2014 the number fell to just eight. By the tail end of the Obama years there were still 1,300-plus sites on the Superfund National Priorities List—the worst of the worst—and some 53 million people living within three miles of one. Under Trump, officials deleted seven sites from the Superfund list in 2017, 22 in 2018 and 27 in 2019—the highest single-year total since 2001. Stagnated projects like Butte, Montana’s noxious Berkeley Pit have been reinvigorated and schedules have been accelerated, as at Indiana’s USS Lead site, a former lead ore refinery, and the West Lake Landfill in Missouri (Ferry 2020).

EPA places sites into the following six broad categories based on the type of activity at the site that led to the release of hazardous material:

  1. Manufacturing sites include wood preservation and treatment, metal finishing and coating, electronic equipment, and other types of manufacturing facilities.
  2. Mining sites include mining operations for metals or other substances.
  3. “Multiple” sites include sites with operations that fall into more than one of EPA’s categories.
  4. “Other” sites include sites that often have contaminated sediments or groundwater plumes with no identifiable source.
  5. Recycling sites include recycling operations for batteries, chemicals, and oil recovery.
  6. Waste management sites include landfills and other types of waste disposal facilities.

Superfund in the news:

2020: Biden will inherit hundreds of toxic waste Superfund sites, with climate threats looming. The EPA’s program for cleaning up the nation’s hazardous waste dumps has a backlog of sites that lack funding — the largest in 15 years.


***

GAO. 2019. SUPERFUND. EPA should take additional actions to manage risks from climate change. United States Government Accountability Office.

Climate change may increase the frequency and intensity of certain natural disasters, which could damage Superfund sites—the nation’s most contaminated hazardous waste sites.

Federal data suggests about 60% of Superfund sites overseen by EPA are in areas that may be impacted by wildfires and different types of flooding—natural hazards that may be exacerbated by climate change.

We found that EPA has taken some actions to manage risks at these sites. However, we recommend it provide direction on integrating climate information into site-level decision making to ensure long-term protection of human health and the environment.

*** Notes from the report:

As of September 2019, there were 1,336 active sites on the list, and 421 sites that EPA had determined need no further cleanup action (deleted sites). About 90 percent of these active and deleted NPL sites are nonfederal sites, where EPA generally carries out or oversees the cleanup conducted by one or more potentially responsible parties (PRP). The other NPL sites—approximately 10 percent—are located at federal facilities, and the federal agencies that administer those facilities are responsible for their cleanup.

In a 2007 report, the National Research Council noted that buried contaminated sediments at Superfund sites may be transported during storms or other high-flow events, becoming a source of future exposure and risk.

SEA LEVEL RISE: We identified 110 nonfederal NPL sites—7 percent—located in areas that would be inundated by a sea level rise of 3 feet, based on our analysis of EPA and NOAA data as of March 2019 and September 2018, respectively. Our analysis shows that if sea level in these areas rose by 1 foot, 97 sites would be inundated. If sea level in these areas rose by 8 feet, 158 sites would be inundated. We also identified 84 nonfederal NPL sites that are located in areas that may already be inundated at high tide.
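The analysis GAO describes is essentially a threshold count: overlay site locations on sea level rise scenarios and count the sites that fall below each water level. A toy Python sketch of that idea follows; the site names and elevations are invented for illustration, and GAO’s real analysis used EPA site coordinates overlaid on NOAA inundation data rather than a simple elevation field.

```python
# Hypothetical site list; elevations in feet above current mean sea level.
sites = [
    {"name": "Site A", "elevation_ft": 0.5},
    {"name": "Site B", "elevation_ft": 2.0},
    {"name": "Site C", "elevation_ft": 6.0},
    {"name": "Site D", "elevation_ft": 12.0},
]

# Count sites inundated under each sea level rise scenario (1, 3 and 8 feet,
# mirroring the scenarios in the GAO report quoted above).
for rise_ft in (1, 3, 8):
    inundated = [s["name"] for s in sites if s["elevation_ft"] <= rise_ft]
    print(f"{rise_ft} ft of sea level rise: {len(inundated)} of {len(sites)} sites inundated")
```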

In 2017, Hurricane Harvey dumped an unprecedented amount of rainfall over the greater Houston area, damaging several Superfund sites that contain hazardous substances. At one site on the San Jacinto River in Texas, floodwater eroded part of the structure containing such substances, including dioxins, which are highly toxic and can cause cancer and liver and nerve damage.

And much more at https://www.gao.gov/assets/710/702158.pdf

References

EIA (2020) International Energy Statistics. Petroleum and other liquids. Data Options. U.S. Energy Information Administration. Select crude oil including lease condensate to see data past 2017.

Ferry D (2020) The One Incredibly Green Thing Donald Trump Has Done. Politico.

 


Trucks running on CNG or LNG

Preface. My books “When Trucks Stop Running” and “Life After Fossil Fuels” explain why trucks can’t run on electricity: batteries simply don’t scale up, and they are too heavy, leaving little if any room for cargo. A catenary system has many issues: it is not commercial anywhere, time is running out, and it is too expensive to put on the necessary thousands of miles of federal and state highways. Catenary also requires a second propulsion method for when the truck is not under the overhead wires (picking up and delivering cargo, passing slower trucks, getting around road work and obstacles), which doubles the cost. And besides, as both books show, the electric grid can’t ever be 100% renewable for many reasons, so the grid will someday come down for good, especially when natural gas is in short supply.

The U.S. has very little transportation using compressed or liquefied natural gas (CNG, LNG). While natural gas lasts, it can be compressed (CNG) for local fleets that fill up overnight, and LNG fleets could go longer distances if LNG distribution systems were built. Filling stations are expensive, however: about $1 million for a CNG station and $2 million for an LNG station.

China and other nations have already built millions of natural gas vehicles. CNG and LNG trucks will be an essential backup for when oil shortages begin within the next few years (though natural gas shortages will happen too).  Have Amazon and UPS read my books? Amazon has ordered a mix of 700 class 6 and class 8 CNG trucks, and presumably is building CNG stations. UPS plans to buy 6,000 CNG trucks over the next 3 years (Sanicola 2021).


***

Below is an overview of obstacles to using compressed natural gas (CNG) or liquefied natural gas (LNG) in transportation:

NPC chapter 14: obstacles to truck CNG and LNG (two figures)

Volumetric energy density of chemical fuels in MJ/liter (figure)

One objection that railroads have to CNG and LNG is their low energy density (shown above) compared to diesel fuel; natural gas’s volatility and low energy density also make it difficult to handle. Whatever the technology, gas conditioning incurs high handling costs and has limited flexibility. Unlike oil, for instance, which is fungible, natural gas relies on heavy infrastructure (pressurized pipelines, storage caverns, or cryogenic carriers).

Brief Review Of LNG As A Transportation Fuel

LNG has been used as a transportation fuel since the 1970s, although in limited volumes for heavy-duty and fleet applications. In 2001, LNG vehicles accounted for only about 7.6 million gallons (about 2%) of the 366 million gallons of alternative fuels consumed in the United States, and a fraction of the 30 billion gallons of diesel consumed by freight trucks annually.

There are an estimated 7,000 vehicles with LNG fuel tanks operating in the U.S. today; public transit systems operate hundreds of LNG-fueled buses in Dallas, Phoenix, El Paso, Austin, Los Angeles and Orange County. LNG is also established and growing quickly as a transport fuel for short-haul, heavy-duty fleets. For example, in June 2010, the Ports of Los Angeles and Long Beach announced the replacement of 800+ diesel drayage trucks with LNG trucks and, in April 2011, ordered 200 LNG vehicles for water services operations.

Mining and refuse collection vehicles also represent major existing applications. LNG has also been used to fuel LNG vessels engaged in international trade and, as of 2010, in 20 other marine vessel applications such as ferries, offshore supply vessels and patrol vessels outside the U.S., predominantly in Norway. Increased use of LNG as a marine fuel on inland waterways and in near-sea shipping is expected in the future.

Large vehicles with frame rail mounted tanks can hold up to 300 gallons of LNG. Most natural gas engines can use either LNG or CNG as a fuel source. LNG is typically used in medium/heavy duty applications where the higher fuel density compared to CNG maximizes driving range while minimizing weight and space required for fuel storage.

LNG for transportation fuel is currently supplied throughout the U.S. by imports or local production. Producers contract the transport of LNG fuel, in purpose-built cryogenic trailers, to approximately 65 refueling sites across the country. There are an estimated 170 LNG transportation trailer trucks operating in North America, and each truck has the capacity to deliver 9,000–13,000 gallons per load, limited by maximum payload.

Currently, LNG vehicle use is heavily concentrated in California, with 71% of US refueling facilities located in that state. It is estimated that at least 200,000 gallons per day of LNG were trucked into California in 2006. National consumption in transportation has continued to increase with the addition of new LNG production sites such as Clean Energy’s plant in Boron, CA, which produces 160,000 gallons of LNG per day.

Refueling sites are almost all owned and used by transit fleets.

CNG: lower mileage, heavy and expensive tanks

Mileage will not be nearly as good, not only because the energy density of CNG and LNG is much lower than that of diesel, but also because the tanks needed to store CNG are very heavy and expensive:

Classification and Comparisons of Light-Duty CNG Cylinder Options (figure)

The primary market hurdles for natural gas in heavy-duty vehicles that need to be overcome include:

  • High vehicle costs due to limited volumes of factory finished vehicles and engines, and low volume of demand for natural gas systems.
  • Limited refueling infrastructure currently in-place.
  • A broader range of engine options is required to meet the wide variety of HD vehicle applications.

Natural gas retail refueling infrastructure is at an early stage of development and will require major expansion and investment to meet growing demand for natural gas transportation fuel as the industry commercializes. As of March 2012, there were 988 CNG stations, compared with ~160,000 retail gasoline stations, and 47 LNG stations serving HD vehicles. The transition to a fully scaled and mature retail infrastructure system serving the light- and heavy-duty markets will take time and investment.

The technology opportunities for infrastructure include:

  • Improvements in modular CNG dispensing systems to improve the cost-effectiveness of retail station upgrades.
  • Improved cost and performance of CNG compressor systems.
  • Small-scale LNG technology to support localized HD fleets.

There are approximately 500 trucks distributing LNG in purpose-built cryogenic tank trailers. Major LNG tanker firms move the product for two markets: peak-shaving facilities in the Northeast and the heavy-duty transportation market in the Southwest. LNG distribution is at an economic disadvantage to diesel: a typical trailer carries 10,000 gallons of LNG, or about 6,700 diesel equivalent gallons (DEG), compared with 9,000 gallons for a diesel trailer.
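The trailer comparison above is just an energy-content conversion. A minimal Python sketch follows; the 0.67 LNG-to-diesel energy ratio is simply the ratio implied by the figures quoted above (6,700 DEG per 10,000 gallons of LNG), and actual ratios vary with gas composition.

```python
# Diesel equivalent gallons (DEG) carried per LNG trailer.
LNG_TO_DIESEL_ENERGY_RATIO = 0.67  # implied by 6,700 DEG per 10,000 gal LNG

def diesel_equivalent_gallons(lng_gallons: float) -> float:
    """Convert a volume of LNG to its diesel-energy equivalent."""
    return lng_gallons * LNG_TO_DIESEL_ENERGY_RATIO

print(f"LNG trailer:    {diesel_equivalent_gallons(10_000):,.0f} DEG")
print(f"Diesel trailer: {9_000:,} gallons")
# -> an LNG trailer delivers roughly 25% less usable fuel energy per load
```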

CNG stations are designed to accept incoming fuel from the distribution system and then compress that gas for dispensing at approximately 3,600 pounds per square inch (psi). On-site equipment typically includes dryers to remove moisture from the natural gas, multistage compressors to boost natural gas from distribution/transmission pressures to 4,500 to 5,000 psi, high-pressure storage cylinders that act as pressure buffers for filling vehicles, and dispensers to transfer fuel to vehicles. CNG is pressure-transferred from storage to the lower pressure of the vehicle, typically 3,600 psi at full fill. Incremental land requirements for CNG stations are minimal compared with gasoline stations, since large volumes of fuel do not need to be stored on site thanks to the interconnection with the distribution system.

There are fewer than 10,000 truck stops across the nation providing diesel fuel to the heavy-duty truck fleet. These truck stops sell approximately 32 billion gallons of diesel for on-road heavy-duty trucks.

The majority of engines in the medium and heavy categories are certified using the diesel engine provisions, as they are based on diesel engine platforms. One of the key distinguishing features of the alternative pathways is the useful life:

  • For gasoline Otto engines, this is 10 years or 110,000 miles, whichever occurs first, across all categories.
  • For Diesel engines, the useful life for medium heavy-duty diesel engines is 10 years or 185,000 miles, whichever occurs first, and for heavy heavy-duty diesel engines, useful life rises to 10 years, 435,000 miles, or 22,000 hours, whichever occurs first.

For Class 8b combination trucks running high annual mileage, U.C. Davis estimates fuel can be up to 40% of the total cost.  In an industry with small operating margins, managing the cost of fuel is a key strategic activity, and hence the drive to improve fuel economy or minimize the purchased cost of fuel.

Some of the critical technical pathways for natural gas systems in HD vehicles include: combustion strategy; torque and power; fuel economy and fuel strategies; complexity of changes to the base diesel engine; aftertreatment; fuel storage (CNG and LNG); and system incremental cost.

Compared to the diesel baseline engine, the natural gas variants typically have a reduced thermal efficiency due to throttling and low compression ratio resulting in approximately 7 to 10% lower fuel economy in current applications.

Adapting diesel engines to operate on natural gas using spark ignition technologies similar to gasoline engines has been the prevalent approach to date. The adaptation involves lowering the compression ratio, modifying cylinder heads to incorporate spark plugs, and adding a throttle to modulate airflow, often accompanied by a smaller turbocharger because of the lower air demands relative to diesel.

Typical Operating Cost Breakdown of Class 8b Truck (figure). Source: American Trucking Association, “Is Natural Gas a Viable Alternative to Diesel for the Trucking Industry?”

Because of the low energy density of natural gas compared to diesel, CNG has largely been restricted to vehicle applications that either require only modest operating range or that can accommodate significant numbers of cylinders such as transit buses and refuse collection.
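The range penalty can be roughed out from the energy-density chart referenced earlier plus the 7–10% efficiency loss above. The Python sketch below does this for equal tank volumes; the energy densities are ballpark literature values I am assuming (not taken from the NPC report), and the 8% penalty is simply the midpoint of the cited range.

```python
# Rough driving-range comparison for the same tank volume.
DIESEL_MJ_PER_L = 36.0     # approximate volumetric energy density of diesel (assumed)
CNG_MJ_PER_L = 9.0         # approximate, compressed to ~3,600 psi (assumed)
EFFICIENCY_PENALTY = 0.08  # midpoint of the 7-10% lower fuel economy cited above

def range_relative_to_diesel(fuel_mj_per_l: float, penalty: float) -> float:
    """Driving range for the same tank volume, with diesel = 1.0."""
    return (fuel_mj_per_l / DIESEL_MJ_PER_L) * (1.0 - penalty)

ratio = range_relative_to_diesel(CNG_MJ_PER_L, EFFICIENCY_PENALTY)
print(f"CNG range vs diesel, same tank volume: {ratio:.0%}")
# -> roughly a quarter of the diesel range, before counting cylinder weight
```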

Cost of Renewable Natural Gas (RNG) (figure)

References

NPC (2012) Chapter 14 Natural Gas & Topic Paper #21 An Initial Qualitative Discussion on Safety Considerations for LNG Use in Transportation. National Petroleum Council.

Sanicola L (2021) Exclusive: Amazon orders hundreds of trucks that run on natural gas. Reuters.

DOE (2012) Alternative Fueling Station Total Counts by State and Fuel Type. U.S. Department of Energy, Alternative Fuels and Advanced Vehicles Data Center. http://www.afdc.energy.gov/afdc/fuels/stations_counts.html

 


Was the fall of the Roman Empire due to plagues & climate change?

Preface. Harper (2017) shows the brutal effects of plagues and climate change on the Roman Empire. McConnell (2020) proposes that a huge volcanic eruption in Alaska was a factor in bringing down the Roman Republic and Cleopatra’s Egypt.

In addition, there are other ecological reasons for collapse not mentioned in this book, such as deforestation (A Forest Journey: The Story of Wood and Civilization by John Perlin), topsoil erosion (Dirt: The Erosion of Civilizations by David Montgomery), and barbarian invasions (“The Fall of Rome: And the End of Civilization” and “Empires and Barbarians: The Fall of Rome and the Birth of Europe”).


***

McConnell JR et al (2020) Extreme climate after massive eruption of Alaska’s Okmok volcano in 43 BCE and effects on the late Roman Republic and Ptolemaic Kingdom.  Proceedings of the National Academy of Sciences.

Caesar’s assassination happened at a time of unusually cold, wet weather, as much as 13.3°F cooler than today and with up to 400% more rain, drenching farmland and causing crop failures that led to food shortages and disease. In Egypt the annual Nile flood that agriculture depended on failed. Although an eruption of Mount Etna in Sicily in 44 BC has been blamed, this paper found evidence that it may have been the eruption of the Okmok volcano in Alaska that altered the climate enough to weaken the Roman and Egyptian states. It was one of the largest eruptions of the past few thousand years (Kornei K (2020) Ancient Rome Was Teetering. Then a Volcano Erupted 6,000 Miles Away. Scientists have linked historical political instability to a number of volcanic events. New York Times).

A more nuanced and critical look at this scientific paper can be found here: Middleton G (2020) Did a volcanic eruption in Alaska help end the Roman republic? The Conversation.

Kyle Harper. 2017. The Fate of Rome: Climate, Disease, and the End of an Empire. Princeton University Press.

How the Antonine plague from 165 to 180 AD affected the Pagan cults

To the ancient mind, plague was an instrument of divine anger. The Antonine Plague provoked spectacular acts of religious supplication at the civic level, fired by the great oracular temples of the god Apollo. The emperors started minting a new image on the currency, invoking “Apollo the Healer.” Religious solutions were desperately sought in Rome.

Pagan philosopher Porphyry blamed the insolence of the Christians for this health catastrophe. “And they marvel that the sickness has befallen the city for so many years, while Asclepius and the other gods are no longer dwellers among us. For no one has seen any succor for the people while Jesus is being honored.” Valerian implemented measures that were unequivocally aimed at hunting out Christians.

The rise of Christianity from the Cyprian plague 249-262 (ebola or smallpox)

This plague was instrumental in making Christianity popular, because the pagan religions did nothing for pandemic victims. But because Christianity was able to forge kinship-like networks among perfect strangers based on an ethic of sacrificial love, Christian ethics turned the chaos of pestilence into a mission of aid. The vivid promise of the resurrection helped convince the faithful not to fear death. Priests pleaded for them to show love to the enemy, so they helped everyone, pagans and Christians alike. The compassion was conspicuous and consequential. Basic nursing of the sick can have massive effects on case fatality rates; with Ebola, for instance, the provision of water and food may drastically reduce the incidence of death. The Christian ethic was a blaring advertisement for the faith. The church was a safe harbor in the storm. The traditional civic cults lost favor.

So much death and the alternative of religious life made it hard to find soldiers

The empire’s fortunes reached a low tide in the AD 260s. The cities were never quite the same; even the healthiest late antique cities were smaller than they had formerly been, and in aggregate, even after the recovery, there were simply fewer major towns. The old days when army recruitment could be handled with a light touch were forever gone.

The fourth-century state had to contend with at least one truly novel alternative to military service: the allure of the religious life for men who might have heeded the call to arms. “The huge army of clergy and monks were for the most part idle mouths.” By the end of the fourth century, their total number was perhaps half the size of the actual army, a not inconsiderable drain on the manpower reserves of the empire. The civil service was also an attractive, and safe, career. The vexing issue of military recruitment in the fourth century was not directly a demographic problem.

Supply chains played a role in spreading disease

Supply chains and manufacturing were extensive. For example, consider the accoutrements of soldiers. The Roman soldier carried arms manufactured in over three dozen specialized imperial factories spaced across three continents. Officers wore bronze armor, embellished with silver and gold, made at five different plants. Roman archers would have used bows made in Pavia and arrows made in Mâcon. The foot soldier was dressed in a uniform (shirt, tunic, and cloak) made at imperial textile mills and finished at separate dye-works. He wore boots made at a specialized manufactory. When a Roman cavalryman of the later fourth century rode into battle, he was mounted on a mare or gelding that had been bred on imperial stud farms in Cappadocia, Thrace, or Spain. The troops were fed by a lumbering convoy system that carried provisions across continents in mind-boggling bulk. The emperor Constantius II ordered 3 million bushels of wheat to be stored in the depots of the Gallic frontier and another 3 million bushels in the Alps, before moving his field army to the west.

These extensive supply chains helped to spread the Antonine and Cyprian pandemics, followed by one of the worst pandemics in 542 AD from the plague. The fusion of global trade and rodents led to the greatest disease event human civilization had ever experienced. The plague is an exceptional and promiscuous killer. Compared to smallpox, influenza, or a filovirus, Y. pestis is a huge microbe, lumbering along with an array of weapons. But it is in constant need of a ride.

The plague moved at two speeds: swiftly by sea and slowly by land. The mere sight of ships stirred terror.  Once infected rats made landfall, the diffusion of the disease was accelerated by Roman transportation networks. Carts and wagons carried rodent stowaways along Roman roads. It could spread anywhere that rats could travel.

Climate change and the Huns

The 4th century was a time of mega-drought. The two decades from ca. AD 350 to 370 were the worst multi-decadal drought event of the last two millennia. The nomads who called central Asia home suddenly faced a crisis as dramatic as the Dust Bowl. The Huns became armed climate refugees on horseback. Their mode of life enabled them to search out new pastures with amazing speed. In the middle of the fourth century, the center of gravity on the steppe shifted from the Altai region (on the borders of what is today Kazakhstan and Mongolia) to the west. By AD 370, Huns had started to cross the Volga River. The advent of these people on the western steppe was momentous, terrorizing the tribes north of Italy, who fled to the Roman Empire in great numbers to escape them (for a longer explanation of the effect of the Huns, see my book review of “The Fall of Rome: And the End of Civilization” and “Empires and Barbarians: The Fall of Rome and the Birth of Europe”).

They brought new cavalry tactics that terrorized the inhabitants of the trans-Danubian plains. Their horses were ferociously effective. In the words of a Roman veterinary text, “For war, the horses of the Huns are by far the most useful, by reason of their endurance of hard work, cold and hunger.” What made the Huns overwhelming was their basic weapon, the composite reflex bow.

The Justinian Plague (541 to 749 AD)

Justinian reigned as emperor from AD 527 to 565. Less than a decade into his reign, he had already accomplished more than most who had ever held the title. The first part of his reign was a flurry of action virtually unparalleled in Roman history. Between his accession in AD 527 and the advent of plague in AD 541, Justinian made peace with Persia, reattached vast stretches of the western territories to Roman rule, codified the entire body of Roman law, overhauled the fiscal administration, and executed the grandest building spree in the annals of Roman history. He survived a perilous urban revolt and tried to forge orthodox unity in a fractious church, through his own theological labors.

In the spring of 542 AD the plague (Yersinia pestis) appeared for the first time in the capital, Constantinople. For the next 23 years it became difficult to find and field armies. Taxes rose to unseen heights. There have been two major plague pandemics since then: the Black Death, which began in AD 1346–53 and recurred for nearly 500 years, and the third pandemic, which began in 1855 in Yunnan, China, and spread globally.

The dependence of the imperial system on the transport and storage of grain made the Roman Empire a haven for the black rat.

It required one last twist of fate for the bacterium to make its grand entrance into the Roman world. The Asian uplands had prepared a monster in the germ Y. pestis. The ecology of the empire had built an infrastructure awaiting a pandemic. The silk trade was ready to ferry the deadly package. But the final conjunction, what finally let the spark jump, was abrupt climate change. The year AD 536 is known as a “Year without Summer.” It was the terrifying first spasm in what is now known to be a cluster of volcanic explosions unmatched in the last three thousand years. Again in AD 540–41 there was a gripping volcanic winter. As we will see in the next chapter, the AD 530s and 540s were not just frosty. They were the coldest decades in the late Holocene. The reign of Justinian was beset by an epic, once-in-a-few-millennia cold snap, global in scale.

One thing is certain: the relation between climate and plague is not neat and linear. As with so many biological systems, it is marked by wild swings, narrow thresholds, and frenzied opportunism. Rainy years foster vegetation growth, which in turn sparks a trophic cascade in rodent populations. In excess, water can also flood the burrows of underground rodents and send them scurrying for new ground. Population explosions stir the emigration of rodents in search of new habitats.

Given that there is a strong correlation between volcanism and El Niño, the volcanic eruptions of the AD 530s may have stirred the Chinese marmots or gerbils carrying Y. pestis out of their familiar subterranean colonies, triggering an epizootic that reached the rodents of the seaborne trade routes heading west.

The first victims were the homeless. The toll started to rise. “…the mortality rose higher until the toll in deaths reached 5,000 a day, then 10,000, and then even more.” John’s daily counts are similar: he estimated the toll rising from 5,000 to 7,000, 12,000 and 16,000 dead per day. At first, there remained a semblance of public order. “Men were standing by the harbors, at the crossroads and at the gates counting the dead.” According to John, the grisly tally continued until 230,000 had been numbered. “From then on the corpses were brought out without being counted.” John reckoned that over 300,000 were laid low. A tally of ca. 250,000–300,000 dead within a population of probably 500,000 would fall squarely within the most carefully derived estimates for the death rates in places hit by the Black Death, at 50–60%.

Ancient societies were always tilted toward the countryside. By now some 85–90% of the population lived outside of cities. What set the plague apart from earlier pandemics was its ability to infiltrate rural areas.

Plague had another, even more insidious stratagem in the long run. An obligate human parasite like smallpox lacked an animal reservoir where it could hide between outbreaks. Plague was more patient. As the wave of the first visitation pulled back from a ravaged landscape, small tidal pools were left behind. The plague lurked in any number of rodent species. These biological weapons of the plague—the fact that it does not confer strong immunity and that it has animal reservoirs—allowed the first pandemic to stretch across two centuries and cause repeated mass mortality events.

The social order wobbled and then collapsed. Work of all kinds stopped. The retail markets were shuttered, and a strange food shortage followed. The harvest rotted in the fields. Food was scarce.

The Late Antique Little Ice Age (536 to 660 AD) climate change effects.

AD 536 was the coldest year of the last two millennia. Average summer temperatures in Europe fell instantly by up to 2.5°, a truly staggering drop. In the aftermath of the eruption in AD 539–40, temperatures plunged worldwide. In Europe, average summer temperatures fell again by up to 2.7°.

The decade of 536–545 was the coldest during this time.

Late in AD 589, torrential rains inundated Italy. The Adige flooded. The Tiber spilled its banks and crept higher than Rome’s walls. Whole regions of the city were under water. Churches collapsed, and the papal grain stores were ruined. No one remembered a flood so overwhelming. Then followed the plague again, in early AD 590.

The combination of plague and climate change sapped the strength of the empire.

The Justinian Plague effects on religion

For the first time in history, an apocalyptic mood came to permeate a large, complex society. Gregory’s sense of the approaching end was hardly his alone. The apocalyptic key transcended traditions, languages, and political boundaries in late antiquity. The plague was a last chance to turn from sin. And no sin weighed more heavily on the late antique heart than greed. Anxieties about wealth generated a perpetual moral crisis in late ancient Christianity. Earthly possessions were a trial of faith. Here the plague struck a tender nerve. The most memorable vignettes in John of Ephesus’ history of the plague linger over individuals singled out for punishment because of their greed. From one angle, the plague was God’s final, ghastly effort to pry loose our tight-gripped hold on material things.

Materially and imaginatively, the ascent of Islam would have been inconceivable without the upheavals of nature. The imminent judgment was a call to repentance.

Monotheism and eschatological warning were central to the prophet Muhammad’s religious message. “The coming judgment is in fact the second most common theme of the Quran, preceded only by the call to monotheism.” The Quran proclaims itself to be “a warning like those warnings of old: that Last Hour which is so near draws ever nearer.” “God’s is the knowledge of the hidden reality of the heavens and the earth. And so, the advent of the Last Hour will but manifest itself like the twinkling of an eye, or closer still.” The origins of Islam lie in an urgent eschatological movement, willing to spread its revelation by the sword, proclaiming the Hour to be at hand. Here, the eschatological energy of the seventh century found its most unrestrained development. It was electrifying. The message was the last element in the perfect storm. The southeastern frontier of the empire was erased almost overnight. Political lines of a thousand years were instantaneously and permanently redrawn.

Egypt and the Justinian Plague effects

The Nile valley was the most heavily engineered ecological district in the ancient world. Every year, at the inundation, its divine waters were diverted through an immense network of canals to irrigate the land. The intricate machinery of dikes, canals, pumps, and wheels was a huge symphony of human ingenuity and hard labor. The sudden disappearance of manpower in lands upriver threw the network of water control into disrepair. The controlled flow of water in the valley had been interrupted, and the downstream inhabitants in the fertile delta were overwhelmed. Remarkably, these events were replayed almost exactly in the aftermath of the medieval Black Death.

Famine effects

The twittering climate regime of late antiquity also had an intimate relationship with the pulses of epidemic mortality. Food shortage was a corollary of disease outbreak. Anomalous weather events might trigger explosive breeding of disease vectors. A devastating famine in Italy in AD 450–51 was coincident with a wave of malaria, for instance. Food crisis fanned desperate migrants in search of survival, overwhelming the normal environmental controls embedded in urban order. Food shortages forced the hungry to resort to consuming inedible or even poisonous food, all while depleting the power of their immune systems to resist infection.

A famine and pestilence swept Edessa and its hinterland. In March of AD 500, a plague of locusts destroyed the crops in the field. By April, the price of grain had skyrocketed to about eight times the normal price. An alarmed populace quickly sowed a crop of millet, an insurance crop. It too faltered. People began to sell their possessions, but the bottom fell out of the market. Starving migrants poured into the city. Pestilence – very probably smallpox – followed. Imperial relief came too late. The poor “wandered through the streets, colonnades, and squares begging for a scrap of bread, but no one had any spare bread in his house.” In desperation, the poor started to boil and eat the remnants of flesh from dead carcasses. They turned to vetches and droppings from vines. “They slept in the colonnades and streets, howling night and day from the pangs of hunger.” When the December frosts arrived, the “sleep of death” laid low those exposed to the elements.

The migrants were worst affected, but by spring no one was spared. “Many of the rich died, who had not suffered from hunger.” The loss of environmental control collapsed even the buffers that subtly insulated the wealthy from the worst hazards of contagion.

During a famine that swept Syria in AD 384–85, Antioch found its streets filled with hungry refugees, who had been unable to find even grass to eat and suddenly massed in town to scavenge.

Rise of Slavery

After the dislocations of the third century, the slave system experienced a brutal resurgence.

Melania the Younger, from one of the most blue-blooded lines in Rome, owned over 8,000 slaves.

Slave-ownership on Melania’s scale was rare. More consequential were the elites, late antiquity’s 1 percent, who owned “multitudes,” “herds,” “swarms,” “armies,” or simply “innumerable” slaves, both in their households and in the fields. To own a slave was a standard of minimum respectability. In the fourth century, priests, doctors, painters, prostitutes, petty military officers, actors, inn-keepers, and fig-sellers are found owning slaves. Many slaves owned slaves. All over the empire we find working peasants with households that included slaves.


Biogas from cow manure is not a solution for the energy crisis

Preface. Smil’s article about biogas sums up why it won’t do much to offset energy shortages as fossil fuels decline. Biogas doesn’t scale and is easy to muck up. Hayes (2015) also makes this case, pointing out that even if every ounce of manure were used, it would generate only 3% of U.S. electricity. And electricity provides only 20% of the energy we use, with 64% of it still generated from fossil fuels. Biogas is not renewable either, and it pollutes the air and groundwater.

Biogas also has an extremely low energy return on investment (EROI) of 1.75 to 2.1 (Yazan 2017) or 1.12-1.57 (Wang 2021). Some scientists estimate an EROI of 10:1 or more  is needed to keep modern society functioning (Hall and Cleveland 1981, Mearns 2008, Lambert et al. 2014, Murphy 2014, Fizaine and Court 2016).
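One way to see why those EROI numbers matter is the net energy fraction, 1 − 1/EROI: the share of gross output left over for society after paying the energy cost of production. A minimal Python sketch, using the EROI figures cited above plus the 10:1 threshold some of the cited authors argue for:

```python
# Net energy delivered to society as a fraction of gross output: 1 - 1/EROI.
def net_energy_fraction(eroi: float) -> float:
    return 1.0 - 1.0 / eroi

for label, eroi in [("biogas, low estimate", 1.12),
                    ("biogas, high estimate", 2.1),
                    ("suggested societal minimum", 10.0)]:
    print(f"EROI {eroi:>5.2f} ({label}): {net_energy_fraction(eroi):.0%} net energy")
# -> roughly 11% and 52% net for biogas, versus 90% at an EROI of 10
```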

I summarize four articles below.


***

Smil, Vaclav. 2010. Energy Myths and Realities: Bringing Science to the Energy Policy Debate. AEI Press.

Before modernization, China’s biogas digesters were unable to produce enough fuel to cook rice three times a day, still less every day for four seasons. The reasons are obvious to anyone familiar with the complexities of bacterial processes. Biogas generation, simple in principle, is a fairly demanding process to manage in practice.  Here are some of the pitfalls:

  1. The slightest leakage will destroy the anaerobic conditions required by methanogenic bacteria.
  2. Low temperatures (below 20°C).
  3. Improper feedstock addition.
  4. Poor mixing practices.
  5. Shortages of appropriate substrates will result in low (or no) fermentation rates.
  6. Undesirable carbon-to-nitrogen ratios and pH.
  7. Formation of heavy scum.

Unless assiduously managed, a biogas digester can rapidly turn into an expensive waste pit, which—unless emptied and properly restarted—will have to be abandoned, as millions were in China. Even widespread fermentation would have provided no more than 10% of rural household energy use during the early 1980s, and once the privatization of farming got underway, most of the small family digesters were abandoned.

More than half of humanity is now living in cities, and an increasing share inhabits megacities from São Paulo to Bangkok, from Cairo to Chongqing, and megalopolises, or conglomerates of megacities. How can these combinations of high population, transportation, and industrial density be powered by small-scale, decentralized, soft-energy conversions? How can the fuel for vehicles moving along eight- or twelve-lane highways be derived from crops grown locally?

How can the massive factories producing microchips or electronic gadgets for the entire planet be energized by attached biogas digesters or by tree-derived methanol? And while some small-scale renewable conversions can be truly helpful to a poor rural household or to a small village, they cannot support such basic, modern, energy-efficient industries as iron and steel making, nitrogen fertilizer synthesis by the Haber-Bosch process, and cement production.

Hayes, Denis and Gail. 2015. Cowed: The Hidden Impact of 93 Million Cows on America’s Health, Economy, Politics, Culture, and Environment.    W.W. Norton & Company.

Digesters are more about controlling pollution than generating electricity. If every ounce of manure from 93 million cows were converted to biogas and used to generate electricity, it would produce less than 3% of the electricity Americans currently use  (Cuellar, A.D., et al. 2008. Cow Power: the energy and emissions benefits of converting manure to biogas. Environmental Research Letters 3).
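A rough sanity check on that “less than 3%” figure is sketched below in Python. The per-cow electricity yield is an assumed round number for illustration only (it is not from Cuellar et al. or Hayes), and U.S. generation is taken as roughly 4,000 TWh per year.

```python
# Back-of-envelope check on the "less than 3%" figure cited above.
COWS = 93e6
KWH_PER_COW_PER_DAY = 3.0            # assumed electricity yield from digested manure
US_ELECTRICITY_TWH_PER_YEAR = 4_000  # approximate annual U.S. generation

manure_twh = COWS * KWH_PER_COW_PER_DAY * 365 / 1e9   # kWh -> TWh
share = manure_twh / US_ELECTRICITY_TWH_PER_YEAR
print(f"~{manure_twh:.0f} TWh/yr from manure, about {share:.1%} of U.S. electricity")
# -> on the order of 100 TWh/yr, i.e. roughly 2-3% of U.S. generation
```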

Bergamin A (2021) Turning Cow Poop Into Energy Sounds Like a Good Idea — But Not Everyone Is on Board. Discover.

Methane is a potent heat-trapping gas that is prone to leaking from gas drilling sites and pipelines in addition to cow feedlots. Because the dairy industry accounts for more than half of California’s methane emissions, the state has allocated more than $180 million to digester projects as part of its California Climate Investments program. Another $26.5 million has come from SoCalGas as part of a settlement for a natural gas leak in Aliso Canyon that dumped more than 100,000 tons of methane into the atmosphere.

While biogas, as it’s known, sounds promising, its potential is limited. Fossil gas alternatives could only supply about 13 percent of current gas demand in buildings — a limitation acknowledged by insiders from both the dairy and natural gas industries, whose research provided the data for this figure.

“So-called efforts to ‘decarbonize’ the pipeline with [dairy biogas] are a pipe dream only a gas utility executive could love,” Michael Boccadoro, executive director of Dairy Cares, an advocacy group for the dairy industry, says. “It just doesn’t make good policy sense.”

Biogas also produces the same contaminants as fossil gas when it’s burned, says Julia Jordan, a policy coordinator at Leadership Counsel for Justice & Accountability, which advocates for California’s low-income and rural communities. For that reason, biogas will do little to address the health issues that stem from using gas stoves, which have been shown to generate dangerous levels of indoor pollution.

The biggest beneficiaries of biogas, advocates say, are gas utilities and dairy operations. As California cities look to replace gas heaters, stoves and ovens with electric alternatives, SoCalGas can tout biogas as a green alternative to electrification. Meanwhile, the dairy industry will profit from the CAFO system while Central Valley communities bear the burden of air and water pollution.

“We’re relying on a flawed system that makes manure a money-making scheme for not just the dairies but the natural gas industry,” Jordan says. “And this industrial, animal-feedlot style of agriculture is not working for the people in the Valley.”

Beyond methane, industrial dairies also emit huge amounts of ammonia, which combines with pollution from cars and trucks to form tiny particles of ammonium nitrate that irritate the lungs. The Central Valley has some of the highest rates of asthma in the state, particularly among children. While digesters curb methane and ammonia emissions, they don’t eliminate pollution from feedlots entirely.

Feedlots also contaminate water supplies. A 2019 nitrate monitoring report found elevated nitrate concentrations in groundwater at 250 well sites across dairies in the Central Valley. The report said that nitrates seeping from liquid manure lagoons play a role. Young children exposed to nitrates can develop blue baby syndrome, which starves the body of oxygen and can prove fatal. Some studies have also linked nitrates to cancer and thyroid disease.

Tulare County residents are worried that the use of biogas will encourage the growth of industrial dairies, worsening groundwater pollution, says Blanca Escobedo, a Fresno-based policy advocate with Leadership Counsel for Justice & Accountability. Escobedo’s father worked for a Tulare County dairy.

Digesters are most profitable when fed by larger herds. At least 3,000 cows are needed to make an anaerobic digester financially viable, according to a 2018 study. Dairies that have received state digester funding have an average herd size of 7,500 cattle.

“Because of the tremendous concentration of pollutants in one area, [biogas] isn’t a renewable resource when you’re using it on this scale,” says Jonathan Evans, a senior attorney and the Environmental Health Legal Director at Center for Biological Diversity. “Especially in terms of California’s water supply and the impact on adjacent communities who have to suffer the brunt of increasingly poor air quality.”

Weißbach, D., et al. April 2013. Energy intensities, EROIs, and energy payback times of electricity generating power plants. Energy 52: 210–221

Producing natural gas from growing maize, so-called biogas, is energetically expensive because of the large electricity needs of the fermentation plants, followed by agriculture’s energy demand for fertilizers and machinery.

Biogas-fired plants, even though they need no buffering, have the problem of an enormous fuel provisioning effort, which puts them clearly below the economic limit, with no potential for improvement in reach.

“The Maas brothers decided to set up their Farm Power plant right between the dairies, so the manure wouldn’t need to be trucked long distances to the digester, and the finished product could be piped at reasonable cost to nearby fields. With the farmers lined up, all Farm Power had to do was find $3 million to build a million-gallon tank in which to digest manure, a generator, and tanks to hold the stuff coming in and going out of the digester, which included up to 30% pre-consumer food waste—things like cow blood, dead chickens, and fish waste. Food that has not already been digested by animals contains more energy, allowing the anaerobic bacteria in the digester to pump out more methane. The facility can process 40,000 to 50,000 gallons of manure daily.

This generator and another, which Farm Power operates at Lynden, Washington, generate enough electricity to power 1,000 homes. The liquid material coming out of the digester is a better fertilizer than raw manure because it contains far fewer pathogens and weed seeds and doesn’t stink as much. It first flows into a pit; from there, as a more stable manure slurry, it’s piped to nearby fields where it can be pumped through an irrigation nozzle or injected into the soil. The dry residue is turned into sanitary, comfy cow bedding. After the dry matter is squeezed through a screen, it’s loaded into trucks and hauled back to the farms. In the future, Farm Power plans to pasteurize the bedding product. Kevin scooped up some finished product stored at one of the nearby dairies. He held it out, inviting Denis to examine it. The bedding was still hot, and smelled like soil and hay.

Digesters don’t solve every environmental problem. Certain antibiotics in cow manure can kill off the fermenting and methanogenic bacteria that make the process possible. The heat in digesters probably doesn’t destroy most antibiotics. New research suggests some pathogenic and antibiotic-resistant bacteria survive anaerobic digestion. Installing a scrubber to remove sulfur dioxide from the digester gas wasn’t economically feasible for the Maas brothers, so they got a permit to emit some pollution. More nitrogen, phosphorus, and potassium remain in the final product than is ideal. Carbon dioxide is also put in the air, and the trucks hauling waste and bedding burn fuel.”

References

Fizaine F, Court V (2016) Energy expenditure, economic growth, and the minimum EROI of society. Energy Policy 95: 172-186.

Hall CAS, Cleveland CJ (1981) Petroleum drilling and production in the United States: Yield per effort and net energy analysis. Science 211: 576-579.

Lambert JG, Hall CAS, Balogh S, et al (2014) Energy, EROI and quality of life. Energy Policy 64:153–167.

Mearns E (2008) The global energy crisis and its role in the pending collapse of the global economy. Presentation to the Royal Society of Chemists, Aberdeen, Scotland. http://www.theoildrum.com/node/4712

Murphy DJ (2014) The implications of the declining energy return on investment of oil production. Philosophical Transactions of the Royal Society A. https://doi.org/10.1098/rsta.2013.0126

Wang C et al (2021) Energy return on investment (EROI) of biomass conversion systems in China: Meta-analysis focused on system boundary unification. Renewable and Sustainable Energy Reviews 137.

Yazan DM et al (2017) Cooperation in manure-based biogas production networks: An agent-based modeling approach. Applied Energy 212: 820-833.

See Table 8 here: https://www.researchgate.net/figure/Energy-return-on-investment-EROI_tbl3_322251797

 


The Next Big Thing: Distributed Generation & Microgrids

 

Preface. Last updated 2022-9-5. The first article below explains what microgrids will look like in the future. But first, a brief look at what a microgrid is, as Angwin explains in her book “Shorting the Grid: The Hidden Fragility of Our Electric Grid”.

Today the grid is mostly a one-way street, with huge power plants pushing power to customers. A microgrid will have to be “smart” so that people can both buy and sell electricity, pushing it in two directions. So how will you sell power to your neighbors? Probably not with a wind turbine: even in the unlikely event you have enough wind to justify one, they’re expensive, noisy, and break down a lot. Burn wood? No, you would have to build a wood-fired boiler, raise steam, spin a turbine, attach a generator, and connect the whole thing to the grid. But if you’re a dairy farmer, you can buy a methane digester and small diesel generators attached to it, using the manure-derived gas as fuel. In reality, if the power goes down a lot, the wealthy in suburbia might buy solar panels and batteries for their own homes. In India, where Greenpeace tried to supply electricity via solar power and batteries, the batteries were quickly drained, and the same is true for most home batteries offered today. For most people, the only way to produce electricity to sell is a noisy, polluting diesel generator, delivered to neighbors via jury-rigged and dangerous wires.

This is happening in Beirut. Robert Bryce, author of “Power Hungry” and host of the “Power Hungry” podcast, went to Beirut to ask the locals how this worked. They referred to the electricity “brokers” as the “electricity Mafia.” They paid two electricity bills each month: about $35 to the state-owned power company for the little power it provided 6 hours a day, and around $100 a month to their local “mafia” generator. Bryce asked one man why he didn’t just buy his own generator, since he was paying his neighbor a significant amount of money. The answer was that, if he broke away from the local “mafia” generator, he might be killed. At the very least, the wire to his generator would be cut. Bryce reports how a clash between two generator owners left two people dead and required the Lebanese army to end the violence.

Pedro Prieto’s work has taken him all over the world, and he has seen “Beiruts” in many places, such as Brazil, the Democratic Republic of Congo, and Cuba, to name a few. Tad Patzek wrote that at 45-50°C people have only hours to live, and that in the future giant air-conditioned centers will be essential places for people to retreat to (if they can get there).

The first article below, from Wired magazine, describes Beirut’s diesel generator microgrid in greater detail. It is coming to you some day, as power outages increase when fracked natural gas and imported LNG can no longer keep natural gas plants running to balance wind and solar when they happen to be up, and to back them up 100% when they are not.

The second article below explains why renewables are destabilizing the electric grid. Basically, electricity distribution is designed to flow one way from a centralized system to customers. But Distributed Generation (DG) from solar  PV and wind violates this.

Impacts caused by high penetration levels of intermittent renewable DG can be complex and severe and may include voltage increase, voltage fluctuation, interaction with voltage regulation and control equipment, reverse power flows, temporary over-voltage, power quality and protection concerns, and current and voltage unbalance, to name a few.

There are solutions, but they’re expensive, complicated, and add to the already insane challenges of thousands of utilities, power generators, independent system operators, and other entities trying to coordinate the largest machine in the world when cooperation isn’t always in their best interest.

Lebanon in the news:

Bradstock F (2022) Can Lebanon repair its failing energy sector? oilprice.com. Lebanon has been grappling with severe energy shortages over the past year. On average, the Lebanese population gets just one to two hours of electricity per day. Growing political instability threatens to worsen the country’s ongoing energy crisis. Lebanon has continued to face severe energy shortages due to years of poor investment in infrastructure that has led many to rely on polluting diesel generators for their power. Lebanon has faced rolling blackouts for years due to poor infrastructure spending. Now, with such high debt (495% of GDP), the government can no longer afford to run national power plants. Few external powers have been willing to step in to help. In addition, the rise of the militia group Hezbollah is driving others away.

Jo L (2022) Lebanon’s poorest scavenge through trash to survive. AP. In the dark streets of a Beirut now often without electricity, sometimes the only light that shines is from headlamps worn by scavengers, searching through garbage for scrap to sell. Even trash has become a commodity fought over in Lebanon, mired in one of the world’s worst financial crises in modern history. With the ranks of scavengers growing among the desperately poor, some tag trash cans with graffiti to mark their territory and beat those who encroach on it. Meanwhile, even better-off families sell their own recyclables because it can get them U.S. dollars rather than the country’s collapsing currency. The fight for garbage shows the rapid descent of life in Beirut, once known for its entrepreneurial spirit, free-wheeling banking sector and vibrant nightlife. Instead of civil war causing the chaos, the disaster over the past two years was caused by the corruption and mismanagement of the calcified elite that has ruled Lebanon since the end of its 1975-90 conflict. Thugs roaming the streets on motorcycles sometimes target scavengers at the end of day to steal the recyclables they collected.  “They are ready to kill a person for a plastic bag,” Mohammed said. More than half the population has been plunged into poverty. Banks have drastically limited withdrawals and transfers. Hyperinflation has made daily goods either unaffordable or unavailable.

2022 Professor Jan Blomgren: How are we in time?  https://youtu.be/0Oh_w5KrEVc.  There are 5 kinds of large electric power generators, natural gas, coal, oil, nuclear, and hydropower.  These give a stabilizing effect. You can’t keep the grid up with 1,000+ smaller generators. Solar cells have no generators at all. They do not provide stability to the grid like the larger generators.  Big generators also control and regulate electricity so it gets to the right place.  It can’t be done with small generators. They just don’t have the adjustability. They could have more if we built them that way, but even so, could never be as effective.  This means if we shut down large generators and replace them with many smaller ones, the electricity system will be more unstable and inefficient.
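One way to put rough numbers on the stabilizing effect of big rotating generators is the standard swing-equation estimate of how fast frequency falls after losing a plant: the less spinning inertia on the system, the faster the drop. The sketch below uses that textbook relation with hypothetical figures, and assumes a 50 Hz system since Blomgren is discussing a European grid; inverter-based solar contributes essentially no inertia.

```python
# Rough sketch of why losing large rotating generators destabilizes a grid.
# Initial rate of change of frequency (RoCoF) after losing delta_P of generation:
#   RoCoF ~= delta_P * f0 / (2 * H_sys * S_sys)
# where H_sys is the capacity-weighted inertia constant of all online machines.
# All numbers below are hypothetical.

F0 = 50.0  # Hz, assumed (European/Swedish grid)

def rocof(delta_p_mw, h_sys_s, s_sys_mva, f0=F0):
    """Initial frequency decline in Hz per second after a generation loss."""
    return delta_p_mw * f0 / (2 * h_sys_s * s_sys_mva)

# Case 1: mostly synchronous (rotating) generation, high system inertia
print(rocof(delta_p_mw=1000, h_sys_s=5.0, s_sys_mva=40_000))  # ~0.13 Hz/s

# Case 2: much of that capacity replaced by inverter-based solar and wind,
# so the capacity-weighted inertia constant is much lower
print(rocof(delta_p_mw=1000, h_sys_s=2.0, s_sys_mva=40_000))  # ~0.31 Hz/s, a faster fall
```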

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

* * *

Rosen KR (2018) Inside the Haywire World of Beirut’s Electricity Brokers. When the grid goes out, gray-market generators power up to keep the Wi-Fi running and laptops charged. Wired.

https://www.wired.com/story/beruit-electricity-brokers/

Sometimes the lights do not stay on, even for the power company in Beirut.  Electrical power here does not come without concerted exertion or personal sacrifice. Gas-powered generators and their operators fill the void created by a strained electric grid. Most people in Lebanon, in turn, are often stuck with two bills, and sometimes get creative to keep their personal devices—laptops, cell phones, tablets, smart watches—from going dead. Meanwhile, as citizens scramble to keep their inanimate objects alive, the local authorities are complicit in this patchwork arrangement, taking payments from the gray-market generator operators and perpetuating a nation’s struggle to stay wired.

Lebanon has been a glimmering country ever since the 15-year civil war began in 1975, and the reverberations from that conflict persist. These days there is only one city, Zahle, with electricity 24/7. Computer banks in schools and large air conditioners pumping out chills strain the grid, and daily state-mandated power cuts run from at least three hours to 12 hours or more. Families endure power outages mid-cooking, mid-washing, mid-Netflix binging. Residents rely on mobile phone apps to track the time of day the power will be cut, as it shifts between three-hour windows in the morning and afternoon, rotating throughout the week.

Beirut’s supplementary power needs are effectively under the control of what is known here as the generator mafia: a loose conglomerate of generator owners and landlords who supply a great deal of the country’s power. This group is indirectly responsible for the Wi-Fi, which makes possible any number of WhatsApp conversations—an indispensable lifeline for the country’s refugees, foreign aid workers, and journalists and locals alike.

Electricité du Liban, the Lebanese electricity company, has a meager budget and relies on a patchwork approach—including buying power from neighboring countries and leasing diesel generator barges—to produce power; meanwhile, corruption in local and state politics means that government-allocated funds often do not reach the people or places for which they are intended. The community—or mafia—of generator owners is thus a solution to a widespread problem, and it has grown into a cottage industry, both intractable and necessary.

Sam says he doesn’t buy backup juice for his apartment, which he rented last spring. Somewhere along the electrical wires cast like nets across the city, a bootlegged electrical line running from a generator was spliced in his favor: A single “magic” outlet powers his wireless router during outages. It’s one thing to be kept from doing your laundry, and another thing entirely to be kept from your friends or family. Besides, tracking down the generator owner responsible for this one outlet would be a journey of more than 1,001 nights. In the city of Beirut alone, there are roughly 12,000 generators and their owners. Though it is technically illegal, regulators have a hard time squashing the network, which has grown to cover most of the country. Officials aren’t so much paid off to look the other way; they’re paid because, it is said, they own some of the generators.

In the Mreijeh neighborhood, one of the electricians is known to locals as “the real energy minister.” His wires, strung between generators and the buildings they feed, are so thick that they blot out the sun. In the Bourj al-Barajneh neighborhood, some residents share their power “subscription,” perhaps with magic outlets of their own; the subscription operator and generator owner turns a blind eye. In the district known as Shiah, the “Dons” do not allow any such manipulation—they do, however, have a weakness for European soccer matches and boost power on game nights. And in al-Fanar, it is important that the distributors of this power pay close attention to usage and monitor peak hours, doing their best to keep service operating when the state fails.

“We cover where there is no state,” says Abdel al-Raham, an owner and operator of generators in East Beirut. He began with a small generator, which he used to power his house, around the start of the civil war in 1975. But the generator was loud and noxious, so over time, as a gesture of good faith, he would give his neighbors a lamp connected to his generator. “Just enough for them to light their house and to make up for all the annoying noise,” he says.

But because of his generosity, his wife soon became unable to run the washing machine. He went out and bought a new, bigger generator. Then shop owners nearby needed more power, and his brother came to him and proposed they split profits on the power they could sell to the neighborhood. Self-sufficiency turned into entrepreneurship.

Raham, like other operators, complains about repair costs; under-the-table operating fees—essentially, bribes—to the local municipalities in which they operate; the unpaid bills by some of the country’s Syrian and Egyptian refugees who are using an estimated additional 486 megawatts; and the increasing cost of diesel fuel to run the generators.

But Raham felt a responsibility to his community in which three-quarters of the homes rely on his generators for some portion of their power. In some of those homes, he says, elderly people rely on medical devices 24 hours a day. A lack of electricity would be a threat to their health.

Residents of Lebanon have three basic options: buy a generator subscription, own your own generator, or splurge for what’s known as an uninterruptible power supply.

When you move into an apartment, you will most often connect with the local generator owner who will set up a subscription for 5 amps, 10 amps, 15 amps, or more, depending on your budget and consumption during the scheduled power outages. Residents will also do this with their water providers—one bill and service provider for filtered water, and another bill and service provider for gray water. (Water utilities are likewise a … gray area.) Internet is handled by another ad hoc collection of quasi-legal independent operators, as is trash, which the city is supposed to take care of but often fails to collect. These entities are more than private providers or secret crusaders. They are a necessary convenience to which one is connected through inconvenient terms.

Though they claim they make little money on their ventures, generator owners can net tens of thousands of dollars in monthly revenue. They also undercut one another, vying for customers in any given neighborhood.

Many developing countries suffer from electricity problems, but a World Bank report from 2015 suggested that Lebanon’s problems go beyond technical issues. It would cost the government $5 billion to $6 billion to bring 24-hour power to the country, according to one estimate, and yet the government spends roughly $1.4 billion a year just to cover the cost of fuel.  The report also noted that, on average, Lebanese households spent more than $1,300 a year on electricity in 2016 at a time when gross national income per capita was roughly $9,800.

Haddad pays for 10 amps a month (roughly 2,200 watts, or enough to power an electric kettle and desktop computer concurrently) and also receives a separate bill for the building elevator and hallway lights. It used to be that residents paid $90 for 10 amps, which cost $14 to generate, but Haddad says that today he pays $267 for 5 amps every month—about four times the amount he pays concurrently to Electricité du Liban. Municipalities now regulate the maximum cost the generator owners can charge their clients, though their control over the generator owners is hardly comprehensive. It is a tractor-pull relationship between local officials and generator owners. “The policy by which the municipalities and generator owners are connected is neither legal nor organized,” Antoine K. Gebara, the mayor of an eastern Beirut suburb, told me. “There is no system. … It should not be like this.”
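As a side note, the wattage figures quoted above are just amps times volts; the sketch below reproduces that arithmetic and the cost per available kilowatt using the prices in the article. The 220-volt service voltage is an assumption, and prices vary by neighborhood.

```python
# Sketch of the generator-subscription arithmetic in the paragraph above.
# A nominal 220 V service voltage is assumed; prices are those quoted by Haddad.

VOLTAGE = 220  # volts (assumed nominal service voltage in Lebanon)

def amps_to_watts(amps, voltage=VOLTAGE):
    """Continuous power available on a subscription of a given amperage."""
    return amps * voltage

print(amps_to_watts(10))   # 2200 W: roughly a kettle plus a desktop computer
print(amps_to_watts(5))    # 1100 W

# Cost per available kilowatt for the figure quoted in the article:
monthly_cost_5a = 267                                   # USD per month for 5 amps
usd_per_kw = monthly_cost_5a / (amps_to_watts(5) / 1000)
print(f"${usd_per_kw:.0f} per available kW per month")  # ~$243 per kW-month
```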

The generator owners stepped in when the government could not provide services, but had to be controlled and regulated (as best as one might regulate a network of entrepreneurial privateers) by the same municipalities that couldn’t effectively supply power. Now, the generator owners turn around and pay the municipalities for the pleasure of dominating a market in which other generator owners might come to set up shop.

“They call us criminals, electricity thieves, robbers with generators. How are we the criminals?” Antanios asks me, his voice a rasp. “Yes, it’s extremely expensive. But that’s the government’s fault.”   He reaches into a drawer and pulls out another sheaf of receipts from the municipality and one signed by a local politician, each one totaling around $1,300. He had paid his commission to the politician—a headache Antanios wishes he could avoid (though perhaps it is better than being under the thumb of Hezbollah factions, who at times questioned me while working on this story as I sought answers about generators and their owners)—along with his taxes to the city government. Such a monthly burden meant his business had to generate substantial cash. He tells me he can sometimes get $32,000 a month in revenue. But he is quick to point out that he works hard for the money. For example, Antanios says, the night before he and his electricians spent six hours trying to identify the cause of a shortage throughout the neighborhood.

Just then the room darkens. A loud popping rips through the room, as though someone were stepping on a floor made of light bulbs. From across the street, emerging from a shantytown, from under an umbrella of corrugated metal, several of Antanios’ workers race to the office. The power from Electricité du Liban had cut in his sector, and now the breakers and generators were turning on, feeding into the lines that were cast out from his office and the nearby generators. But the switchover happens smoothly. An oscillating fan in the office hadn’t come to a stop before the power kicked back on, less than 30 seconds later.

Last year, researchers visited the Hamra neighborhood, a popular tourism and shopping district in Beirut, to study the health effects of generator usage. Fifty-three percent of the 588 buildings there had diesel generators. The study, by the American University of Beirut’s Collaborative for the Study of Inhaled Atmospheric Aerosols, found that throughout the city, the 747 tons of fuel consumed during a typical daily three-hour outage resulted in the production of 11,000 tons of nitrogen oxide annually. The territory of Delhi, India, relies heavily on diesel generators too, but Beirut emissions are more than five times worse per capita than those in the Indian capital.

***

IEEE. September 5, 2014. IEEE Report to DOE Quadrennial Energy Review on Priority Issues.

On the distribution system, high penetration levels of intermittent renewable Distributed Generation (DG) creates a different set of challenges than at transmission system level, given that distribution is generally designed to be operated in a radial fashion with one way flow of power to customers, and DG (including PV and wind technologies) interconnection violates this fundamental assumption. Impacts caused by high penetration levels of intermittent renewable DG can be complex and severe and may include voltage increase, voltage fluctuation, interaction with voltage regulation and control equipment, reverse power flows, temporary overvoltage, power quality and protection concerns, and current and voltage unbalance, among others.

Common impacts of DG in distribution grids are described below; this list is not exhaustive and includes operational and planning aspects [50, 51].

  • Voltage increase can lead to customer complaints and potentially to customer and utility equipment damage and service disruption.
  • Voltage fluctuation may lead to flicker issues, customer complaints, and undesired interactions with voltage regulation and control equipment.
  • Reverse power flow may cause undesirable interactions with voltage control and regulation equipment and protection system misoperation.
  • Increased line and equipment loading may damage equipment and disrupt service.
  • Increased losses (under high penetration levels) can reduce system efficiency.
  • Power factor decrease below minimum limits set by some utilities in their contractual agreements with transmission organizations would create economic penalties and losses for utilities.
  • Current unbalance and voltage unbalance may lead to system efficiency and protection issues, customer complaints, and potentially to equipment damage.
  • Interaction with load tap changers (LTCs), line voltage regulators (VRs), and switched capacitor banks due to voltage fluctuations can cause undesired and frequent voltage changes and customer complaints, reduce equipment life, and increase the need for maintenance.
  • Temporary overvoltage (TOV): if accidental islanding occurs and no effective reference to ground is provided, then voltages in the island may increase significantly and exceed allowable operating limits. This can damage utility and customer equipment (e.g., arresters may fail) and cause service disruptions.
  • Harmonic distortion caused by proliferation of power electronic equipment such as PV inverters.

The aggregate effect from hundreds or thousands of inverters may cause service disruptions, complaints or customer economic losses, particularly for those relying on the utilization of sensitive equipment for critical production processes.

  • Voltage sags and swells caused by sudden connection and disconnection of large DG units may cause the tripping of sensitive equipment of end users and service disruptions.
  • Interaction with protection systems, including increase in fault currents, reach modification, sympathetic tripping, miscoordination, etc.
  • Voltage and transient stability: voltage and transient stability are well-known phenomena at transmission and sub-transmission system level but until very recently were not a subject of interest for distribution systems. As DG proliferates, such concerns are becoming more common.

The severity of these impacts is a function of multiple variables, particularly of the DG penetration level and real-time monitoring, control and automation of the distribution system. However, generally speaking, it is difficult to define guidelines to determine maximum penetration limits of DG or maximum hosting capacities of distribution grids without conducting detailed studies.
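As a rough illustration of what a "penetration level" looks like in practice, here is a minimal screening sketch. The feeder data are hypothetical, and the 15% figure is only an often-cited screening rule of thumb, not a hard limit; as the report says, actual hosting capacity requires a detailed, feeder-specific study.

```python
# Minimal screening sketch: express DG penetration on a feeder as installed DG
# capacity relative to the feeder's peak load, and flag feeders that exceed a
# simple screening threshold. Threshold and feeder data are illustrative only.

SCREENING_THRESHOLD = 0.15  # fraction of peak load (rule-of-thumb screen, not a limit)

def penetration(installed_dg_kw, feeder_peak_load_kw):
    return installed_dg_kw / feeder_peak_load_kw

feeders = {
    "feeder_A": {"dg_kw": 300,  "peak_kw": 4000},   # hypothetical data
    "feeder_B": {"dg_kw": 1200, "peak_kw": 5000},
}

for name, f in feeders.items():
    p = penetration(f["dg_kw"], f["peak_kw"])
    flag = "needs detailed study" if p > SCREENING_THRESHOLD else "passes simple screen"
    print(f"{name}: penetration {p:.0%} -> {flag}")
```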

From the utility perspective, high PV penetration and non-utility microgrid implementations shift the legacy, centralized, unidirectional power system to a more  complex, bidirectional power system with new supply and load variables at the grid’s edge. This shift introduces operational issues such as the nature, cost, and impact of interconnections, voltage stability, frequency regulation, and personnel safety, which in turn impact resource planning and investment decisions.

NREL. 2014. Volume 4: Bulk Electric Power Systems: Operations and Transmission Planning. National Renewable Energy Laboratory.

Initial experience with PV indicates that output can vary more rapidly than wind unless aggregated over a large footprint. Further, PV installed at the distribution level (e.g., residential and commercial rooftop systems) can create challenges in management of distribution voltage.

Meier, A. May 2014. Challenges to the integration of renewable resources at high system penetration. California Energy Commission.

3.2 Distribution Level: Local Issues

A significant class of challenges to the integration of renewable resources is associated primarily with distributed siting, and only secondarily with intermittence of output. These site‐specific issues apply equally to renewable and non‐renewable resources, collectively termed distributed generation (DG). However, DG and renewable generation categories overlap to a large extent due to

  • technical and environmental feasibility of siting renewables close to loads
  • high public interest in owning renewable generation, especially photovoltaics (PV)
  • distributed siting as an avenue to meet renewable portfolio standards (RPS), augmenting the contribution from large-scale installations

Motivation exists, therefore, to facilitate the integration of distributed generation, possibly at substantial cost and effort, if this generation is based on renewable resources.

Distributed generation may therefore be clustered, with much higher penetration on individual distribution feeders than the system‐wide average, for any number of reasons outside the utility’s control, including local government initiatives, socio‐economic factors, or neighborhood social dynamics.

The actual effects of distributed generation at high penetration levels are still unknown but are likely to be very location specific, depending on the particular characteristics of individual distribution feeders.

Technical issues associated with high local penetration of distributed generation include

  • Clustering: The local effects of distributed generation depend on local, not system-wide, penetration (percent contribution). The local penetration level of distributed generation may be clustered on individual feeders for reasons outside the utility’s control, such as local government initiatives, socio-economic factors, and neighborhood social dynamics. Clustering density is relative to the distribution system’s functional connectivity, not just geographic proximity, and may therefore not be obvious to outside observers.
  • Transformer capacity: Locally, the relative impact of DG is measured relative to load – specifically, current. Equipment, especially distribution transformers, may have insufficient capacity to accommodate the amounts of distributed generation desired by customers. Financial responsibility for capacity upgrades may need to be negotiated politically.
  • Modeling: From the grid perspective, DG is observed in terms of net load. Neither the amount of actual generation nor the unmasked load may be known to the utility or system operator. Without this information, however, it is impossible to construct an accurate model of local load for purposes of forecasting future load (including ramp rates) and ascertaining system reliability and security in case DG fails. Models of load with high local DG penetration will have to account for both generation and load explicitly in order to predict their combined behavior.
  • Voltage regulation: Areas of concern, explained in more detail in the Background section below, include maintaining voltage in the permissible range, wear on existing voltage regulation equipment, and reactive power (VAR) support from DG.

Areas of concern and strategic interest, explained in more detail in the Background section below, include preventing unintentional islanding, application of the microgrid concept, and variable power quality and reliability.

Overall, the effect of distributed generation on distribution systems can vary widely between positive and negative, depending on specific circumstances that include

  • the layout of distribution circuits
  • existing voltage regulation and protection equipment
  • the precise location of DG on the circuit

3.2.2 Background: Voltage Regulation

Utilities are required to provide voltage at every customer service entrance within a permissible range, generally ±5 percent of nominal. For example, a nominal residential service voltage of 120 V means that the actual voltage at the service entrance may vary between 114 and 126 V. Due to the relative paucity of instrumentation in the legacy grid, the precise voltage at different points in the distribution system is often unknown, but estimated by engineers as a function of system characteristics and varying load conditions.
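To illustrate the band check, here is a small sketch that tests a service voltage against the ±5% range and uses a common first-order approximation of the voltage rise DG can cause through the feeder impedance back to the source. The impedance, injection, and pre-existing voltage values are hypothetical.

```python
# Sketch of the +/-5% service-voltage band check (120 V nominal) plus a common
# first-order estimate of DG-driven voltage rise:  delta_V ~= (P*R + Q*X) / V.
# Impedance and injection numbers are hypothetical.

NOMINAL_V = 120.0
LOW, HIGH = NOMINAL_V * 0.95, NOMINAL_V * 1.05   # 114 V .. 126 V

def in_band(voltage):
    """True if the service-entrance voltage is inside the permissible range."""
    return LOW <= voltage <= HIGH

def voltage_rise(p_watts, q_vars, r_ohms, x_ohms, v_volts):
    """First-order voltage rise at the point of interconnection."""
    return (p_watts * r_ohms + q_vars * x_ohms) / v_volts

rise = voltage_rise(p_watts=2000, q_vars=0, r_ohms=0.1, x_ohms=0.1, v_volts=NOMINAL_V)
service_v = 123.0 + rise     # 123 V before the DG starts exporting (hypothetical)
print(f"~{service_v:.1f} V at the service entrance, within band: {in_band(service_v)}")
```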

Different settings of load tap changer (LTC) or other voltage regulation equipment may be required to maintain voltage in permissible range as DG turns on and off. Potential problems include the following:

  • DG drives voltage out of the range of existing equipment’s ability to control
  • Due to varying output, DG provokes frequent operation of voltage regulation equipment, causing excessive wear

  • DG creates conditions where voltage profile status is not transparent to operators

Fundamentally, voltage regulation is a solvable problem, regardless of the level of DG penetration. However, it may not be possible to regulate voltage properly on a given distribution feeder with existing voltage regulation equipment if substantial DG is added. Thus a high level of DG may necessitate upgrading voltage regulation capabilities, possibly at significant cost. Research is needed to determine the best and most cost‐effective ways to provide voltage regulation, where utility distribution system equipment and DG complement each other.

Legacy power distribution systems generally have a radial design, meaning power flows in only one direction: outward from substations toward customers. The “outward” or “downstream” direction of power flow is intuitive on a diagram; on location, it can be defined in terms of the voltage drop (i.e., power flows from higher to lower voltage).

If distributed generation exceeds load in its vicinity at any one moment, power may flow in the opposite direction, or “upstream” on the distribution circuit. To date, interconnection standards are written with the intention to prevent such “upstream” power flow.
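A minimal sketch of that bookkeeping, using hypothetical hourly load and PV numbers, shows how the net load on a line section flips sign and power flows "upstream" once DG output exceeds the local load:

```python
# Net-load sketch behind reverse power flow: when aggregate DG output on a line
# section exceeds local load, net flow at the substation end goes negative,
# i.e., power flows "upstream." Hourly readings below are hypothetical (kW).

load_kw = [800, 750, 700, 650, 600, 620]   # local load on the section
pv_kw   = [  0, 200, 500, 700, 900, 650]   # aggregate DG (PV) output

for hour, (load, pv) in enumerate(zip(load_kw, pv_kw)):
    net = load - pv                        # positive: normal downstream flow
    direction = "downstream" if net >= 0 else "UPSTREAM (reverse flow)"
    print(f"hour {hour}: net {net:+5d} kW -> {direction}")
```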

The function of circuit protection is to interrupt power flow in case of a fault, i.e. a dangerous electrical contact between wires, ground, trees or animals that results in an abnormal current (fault current). Protective devices include fuses (which simply melt under excessive current), circuit breakers (which are opened by a relay) and reclosers (which are designed to re‐establish contact if the fault has gone away).

The exception is a networked system, where redundant supply is always present. Networks are more complicated to protect and require special circuit breakers called “network protectors” to prevent circulating or reverse power flow. If connected within such a networked system, DG is automatically prevented from backfeeding into the grid. Due to their considerable cost, networked distribution systems are common only in dense urban areas with a high concentration of critical loads, such as downtown Sacramento or San Francisco, and account for a small percentage of distribution feeders in California.

3.2.5 Research Needs Related to Circuit Protection

The presence of distributed generation complicates protection coordination in several ways:

  • The fault must now be isolated not only from the substation (“upstream”) power source, but also from DG.
  • Until the fault is isolated, DG contributes a fault current that must be modeled and safely managed.

Shifting fault current contributions can compromise the safe functioning of other protective devices: it may delay or prevent their actuation (relay desensitization), and it may increase the energy (I²t) that needs to be dissipated by each device. Interconnection standards limit permissible fault current contributions (specifically, no more than 10 percent of total for all DG collectively on a given feeder). The complexity of protection coordination and modeling increases dramatically with the number of connected DG units, and innovative protection strategies are likely required to enable higher penetration of DG.
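To illustrate the two quantities just mentioned, here is a small sketch of the 10 percent aggregate-contribution screen and the I²t let-through energy calculation; all currents and clearing times are hypothetical.

```python
# Sketch of two checks mentioned above: (1) aggregate DG fault-current
# contribution on a feeder staying below 10% of total fault current, and
# (2) the I^2*t energy a protective device must dissipate while a fault
# persists. All currents and times are hypothetical.

def dg_contribution_ok(dg_fault_amps, total_fault_amps, limit=0.10):
    """True if aggregate DG contributes no more than `limit` of total fault current."""
    return sum(dg_fault_amps) <= limit * total_fault_amps

def i2t(current_amps, clearing_time_s):
    """Let-through energy (proportional to heating) a device must withstand."""
    return current_amps ** 2 * clearing_time_s

dg_units = [120, 80, 150]    # fault-current contribution of each DG unit, amps
total_fault = 4000           # total fault current at the fault location, amps

print("within 10% screen:", dg_contribution_ok(dg_units, total_fault))   # 350 <= 400 -> True
print("I^2*t at 4000 A for 0.1 s:", i2t(total_fault, 0.1), "A^2*s")      # 1.6e6 A^2*s
```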

Standard utility operating procedures in the United States do not ordinarily permit power islands. The main exception is the restoration of service after an outage, during which islanded portions of the grid are re-connected in a systematic, sequential process; in this case, each island is controlled by one or more large, utility-operated generators. Interconnection rules for distributed generation aim to prevent unintentional islanding. To this end, they require that DG shall disconnect in response to disturbances, such as voltage or frequency excursions, that might be precursors to an event that will isolate the distribution circuit with DG from its substation source.

Disconnecting the DG is intended to assure that if the distribution circuit becomes isolated, it will not be energized. This policy is based on several risks entailed by power islands:

  • Safety of utility crews: If lines are unexpectedly energized by DG, they may pose an electrocution hazard, especially to line workers sent to repair the cause of the interruption. It is important to keep in mind that even though a small DG facility such as a rooftop solar array has limited capacity to provide power, it would still energize the primary distribution line with high voltage through its transformer connection, and is therefore just as potentially lethal as any larger power source.
  • Power quality: DG may be unable to maintain local voltage and frequency within desired or legally mandated parameters for other customers on its power island, especially without provisions for matching generation to local load. Voltage and frequency departures may cause property damage for which the utility could be held liable, although it would have no control over DG and power quality on the island.
  • Re-synchronization: When energized power islands are connected to each other, the frequency and phase of the a.c. cycle must match precisely (i.e., be synchronized), or else generators could be severely damaged. DG may lack the capability to synchronize its output with the grid upon re-connection of an island.

3.2.7 Research Needs Related to Islanding

In view of the above risks, most experts agree that specifications for the behavior of DG should be sufficiently restrictive to prevent unintentional islanding. Interconnection rules aim to do this by requiring DG to disconnect within a particular time frame in response to a voltage or frequency deviation of a particular magnitude, disconnecting more quickly (down to 0.16 seconds, or 10 cycles) in response to a larger deviation. At the same time, however, specifications should not be so conservative that they prevent DG from supporting power quality and reliability when it is most needed.
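A sketch of that trip logic is below: the farther voltage strays from nominal, the faster the DG must disconnect, down to 0.16 seconds (about 10 cycles at 60 Hz). The specific voltage bands used are illustrative placeholders, not the exact table from any interconnection standard.

```python
# Illustrative anti-islanding trip logic: larger voltage deviations require
# faster disconnection, down to 0.16 s. Bands below are placeholders, not the
# exact values of any standard.

NOMINAL_V = 120.0

def required_clearing_time(voltage):
    """Maximum time (seconds) DG may remain connected, or None if no trip is required."""
    pu = voltage / NOMINAL_V              # voltage in per-unit of nominal
    if pu < 0.50 or pu > 1.20:
        return 0.16                       # large deviation: trip within ~10 cycles
    if pu < 0.88:
        return 2.0                        # moderate undervoltage: trip within 2 s
    if pu > 1.10:
        return 1.0                        # moderate overvoltage: trip within 1 s
    return None                           # within normal band: stay connected

for v in (40, 100, 118, 135, 150):
    t = required_clearing_time(v)
    print(f"{v} V: {'no trip' if t is None else f'trip within {t} s'}")
```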

There is no broad consensus among experts at this time about how best to reconcile the competing goals of minimizing the probability of unintentional islanding, while also maximizing the beneficial contribution from DG to distribution circuits.

As for the possibility of permitting DG to intentionally support power islands on portions of the utility distribution system, there is a lack of knowledge and empirical data concerning how power quality might be safely and effectively controlled by different types of DG, and what requirements and procedures would have to be in place to assure the safe creation and re‐connection of islands. Because of these uncertainties, the subject of islanding seems likely to remain somewhat controversial for some time.

Needed:

  • Modeling of DG behavior at high local penetrations, including
    o prevention of unintentional islanding
    o DG control capabilities during intentional islanding
  • Collaboration across utility and DG industries to facilitate DG performance standardization, reliability, and trust. This means that utilities can depend on DG equipment to perform according to expectations during critical times and abnormal conditions on the distribution system, the handling of which is ultimately the utility’s responsibility.

In the long run, intentional islanding capabilities – with appropriate safety and power quality control – may be strategically desirable for reliability goals, security, and optimal resource utilization. Such hypothetical power islands are related to but distinct from the concept of microgrids, in that they would be scaled up to the primary distribution system rather than limited to a single customer’s premises. A microgrid is a power island on customer premises, intermittently connected to the distribution system behind a point of common coupling (PCC), that may comprise a diversity of DG resources, energy storage, loads, and control infrastructure. Key features of a microgrid include:

  • Design around total system energy requirements: Depending on their importance, time preference, or sensitivity to power quality, different loads may be assigned to different primary and/or back-up generation sources, storage, or uninterruptible power supplies (UPS). A crucial concept is that the expense of providing highly reliable, high-quality power (i.e., very tightly controlled voltage and frequency) can be focused on those loads where it really matters to the end user (or the life of the appliance), at considerable overall economic savings. However, the provision of heterogeneous power quality and reliability (PQR) requires a strategic decision about what service level is desired for each load, as well as the technical capability to discriminate among connected loads and perform appropriate switching operations.
  • Presentation to the macrogrid as a single controlled entity: At the point of common coupling, the microgrid appears to the utility distribution system simply as a time-varying load. The complexity and information management involved in coordinating generation, storage, and loads is thus contained within the local boundaries of the microgrid.
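A minimal sketch of the "single time-varying load at the PCC" idea follows, with hypothetical hourly figures for load, local generation, and storage inside the microgrid:

```python
# At the point of common coupling (PCC) a microgrid looks to the utility like
# one time-varying load: internal loads, generation, and storage net out behind
# the PCC. Hypothetical hourly values in kW.

loads_kw   = [120, 150, 180, 160]   # total load inside the microgrid
gen_kw     = [ 40, 130, 200, 100]   # local DG output (e.g., PV)
storage_kw = [  0, -20,  30,  20]   # + discharging to loads, - charging

for hour, (load, gen, storage) in enumerate(zip(loads_kw, gen_kw, storage_kw)):
    pcc_import = load - gen - storage       # what the utility actually sees
    role = "imports from grid" if pcc_import > 0 else "exports (or is flat)"
    print(f"hour {hour}: PCC exchange {pcc_import:+4d} kW -> microgrid {role}")
```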

Note that the concepts of microgrids and power islands differ profoundly in terms of

  • ownership
  • legal responsibility (i.e., for safety and power quality)
  • legality of power transfers (i.e., selling power to loads behind other meters)
  • regulatory jurisdiction
  • investment incentives

Nevertheless, microgrids and hypothetical power islands on distribution systems involve many of the same fundamental technical issues. In the long run, the increased application of the microgrid concept, possibly at a higher level in distribution systems, may offer a means for integrating renewable DG at high penetration levels, while managing coordination issues and optimizing resource utilization locally.

Research Needs:

  • Empirical performance validation of microgrids
  • Study of the implications of applying microgrid concepts to higher levels of distribution circuits, including
    o time-varying connectivity
    o heterogeneous power quality and reliability
    o local coordination of resources and end-uses to strategically optimize local benefits of distributed renewable generation
  • Study of interactions among multiple microgrids


Interdependencies & supply chain failures in the News

Preface. Joseph Tainter explains in his famous book “The Collapse of Complex Societies” how complexity causes civilizations to collapse. Fossil fuels have created the most complex society that has ever, or will ever, exist, using fossil energy that can’t be replaced (as I explain in “Life After Fossil Fuels”). This is starting to happen. The most complex products we make are microchips, and I predict they will be the first to fail. Their supply chains are so long that just one missing component or one natural disaster in one country can stop production (see posts in microchips and computers, critical elements, and rare earth elements). They are engineered with precision approaching the atomic scale, and so will be the first to fail as energy declines and supply chains break for many reasons. Microchips, sometimes dozens or more, are in every car, computer, phone, laptop, toaster, TV, and other electronic device.

Plastics are also used across many industries; they are made mainly out of oil and were invented only recently. Thwaites tried to build a toaster from scratch for a Masters degree, and plastics were beyond him for many reasons.

Plastics, refineries, and many other chemical plants have to run around the clock or the pipes clog up. On a Power Hungry podcast, Oxer explained that if the power goes down while making styrene (a precursor used in many plastics), workers have about 6 hours to get the plastic out of the pipes; if it isn’t done in that time frame, it takes 6 weeks. Because of this, many factories have their own power plant. But in the Texas power outage, the natural gas stopped flowing to factories and power plants because some of the compressors that keep gas moving in the pipes were electric (to lower emissions) rather than powered by the gas itself, which is the usual way to do it. Doh!

Oxer further explained that 85 power plants came close to damaging the entire transmission system, including interconnect transformers, substations, and power plants; if the grid had crashed, it could have taken 3 months, until May, to get it running again. You Texans can expect this to happen again: many other storms in the past were a problem as well, such as the Panhandle blizzard of 1957, the Houston snowstorm of 1960, the San Antonio snowstorm of 1985, Winter Storm Goliath of 2015, and the North American ice storm of 2017, plus the below-freezing winters of 1899 and 1933. But ERCOT was created to avoid FERC regulation, and so reliability is a low priority for ERCOT.

Alice Friedemann  www.energyskeptic.com Women in ecology  author of 2021 Life After Fossil Fuels: A Reality Check on Alternative Energy best price here; 2015 When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity

***

2022 Lessons From Henry Ford About Today’s Supply Chain Mess. NYT.

Ford’s Rouge auto-making plant is bedeviled by a shortage of a crucial component that would have horrified Mr. Ford, who vertically integrated his company to control all manufacturing supplies and prevent shortages from disrupting the production of cars. Today, Ford cannot buy enough semiconductors, the computer chips that are the brains of the modern-day car. Ford is heavily dependent on a single supplier of chips located more than 7,000 miles away, in Taiwan. With chips scarce throughout the global economy, Ford and other automakers have been forced to intermittently halt production. Yet given the high cost of a chip fabrication plant and the expertise required, Ford and other companies are not likely to ever try to make chips themselves.

The F-150 pickup produced at the Rouge uses more than 800 types of chips, requiring dependence on specialists. And chips have limited shelf lives, making them difficult to stockpile. And chip companies have catered heavily to their investors by limiting their capacity — a strategy to maintain high prices.

2021 Texas Freeze Creates Global Plastics Shortage

First the demand for electronics caused a shortage of microchips, which hit the automotive industry particularly hard. Now, the Texas Freeze has caused a global shortage of plastics. The Wall Street Journal reported this week that the cold spell that shut down oil fields and refineries in Texas is still affecting operations, with several petrochemical plants on the Gulf Coast remaining closed a month after the end of the crisis. This creates a shortage of essential raw materials for a range of industries, from carmaking to medical consumables and even house building.

The WSJ report mentions carmakers Honda and Toyota as two companies that would need to start cutting output because of the plastics shortage, which came on top of an already pressing shortage of microchips. Ford, meanwhile, is cutting shifts because of the chip shortage and building some models only partially. 

Another victim is the construction industry. Builders are bracing for shortages of everything from siding to insulation.

More than 60 percent of polyvinyl chloride (PVC) production capacity in the United States is still out of operation a month after the Texas Freeze, affecting businesses that use piping, roofing, flooring, cable insulation, siding, car windshields, car seat foam, car interiors, adhesives, bread bags, dry cleaner bags, paper towel wrapping, shipping sacks, plastic wrap, pouches, toys, covers, lids, pipes, buckets, containers, cables, geomembranes, flexible tubing, and the lumber and steel industries. Hospitals are experiencing shortages of plastic medical equipment, such as disposable containers for needles and other sharp items (“Going To Get Ugly” – Global Plastic Shortage Triggered By Texas Deep Freeze).

One thing this has made clear is how complex and vulnerable global supply chains are; the other is how dependent we are on plastics. Various kinds of plastic are used in every single industry, and there is no way we can wean ourselves off them.

Seeing the energy transition ahead, Big Oil has shifted big time into plastics, but “Big Plastic” plans could lead to $400 billion in stranded assets because oil companies overestimated plastic demand growth. Yet the current shortage seems to prove the bet on petrochemicals is safe. No economically viable alternative to plastic cable insulation, or to a car interior, a smartphone casing, a laptop, or a thousand other things from everyday life, has yet made an appearance.

2021 Microchip shortages

A chip shortage that started with a surge in demand for personal computers and other electronics for working or schooling from home during the covid-19 pandemic now threatens to snarl car production around the world. Semiconductors are in short supply because of big demand for electronics, shifting business models that include outsourced production, and effects from former President Donald Trump’s trade war with China. Chips are likely to remain in short supply in coming months.

Car production is going down at GM, Ford, Honda, Toyota, Subaru, Volkswagen, Audi, and Fiat Chrysler due to the lack of microchips.

Cars can have thousands of tiny semiconductors, many of which perform functions like power management. Cars also use a lot of microcontrollers, which can control traditional automotive tasks like power steering, or serve as the brain at the heart of an infotainment system. They’re used for in-car dials and automatic braking as well. Car makers also usually use “just-in-time” production, which means they avoid having extra parts in storage. The problem is that even if a single 10-cent chip is missing, you can’t sell your $30,000 car.

 

 


Jason Bradford on reforming the current food system

Preface. Jason Bradford is amazing: He taught ecology for a few years at Washington University in St. Louis, worked for the Center for Conservation and Sustainable Development at the Missouri Botanical Garden, and co-founded the Andes Biodiversity and Ecosystem Research Group (ABERG). After joining with the Post Carbon Institute in 2004 he shifted from academia to sustainable agriculture, had six months of training with Ecology Action (aka GrowBiointensive) in Willits, California, started the Willits Economic LocaLization and hosted The Reality Report radio show on KZYX in Mendocino County. In 2009 he moved to Corvallis, Oregon, as one of the founders of Farmland LP, a farmland management fund implementing organic and mixed crop and livestock systems. He now lives with his family outside of Corvallis on an organic farm.

Below is the Introduction of his book “The Future is Rural” followed by an older piece he wrote back in 2009.

The book “The Future is Rural” is available for “free” as a PDF if you join the mailing list at the Post Carbon Institute here: https://www.postcarbon.org/publications/the-future-is-rural/

You can listen to Bradford at my favorite podcast “Crazy Town”, subscribe or listen here: https://www.postcarbon.org/crazytown/

Organic Agriculture in the news:

2021 Rodale Enlists Cargill in Unlikely Alliance to Increase Organic Farmland. Rodale will help Cargill convert 50,000 acres of corn and soy to organic production.

Eshel G (2021) Small-scale integrated farming systems can abate continental-scale nutrient leakage. PLOS Biology. Eshel calculated how adopting nitrogen-sparing agriculture in the USA could feed the country nutritiously and reduce nitrogen leakage into water supplies. He proposes a shift to small, mixed agricultural farms, each with a core 1.43-hectare intensive cattle facility whose manure supports crops for humans as well as livestock fodder.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Bradford J (2019) The Future is Rural. Food System Adaptations to the Great Simplification. Post Carbon Institute.

Introduction

Today’s economic globalization is the most extreme case of complex social organization in history—and the energetic and material basis for this complexity is waning. Not only are concentrated raw resources becoming rarer, but previous investments in infrastructure (for example, ports) are in the process of decay and facing accelerating threats from climate change and social disruptions.[2] The collapse of complex societies is a historically common occurrence,[3] but what we are facing now is at an unprecedented scale. Contrary to the forecasts of most demographers, urbanization will reverse course as globalization unwinds during the 21st century. The eventual decline in fossil hydrocarbon flows, and the inability of renewables to fully substitute, will create a deficiency of energy to power bloated urban agglomerations and require a shift of human populations back to the countryside.[4] In short, the future is rural.

Given the drastic changes that are unfolding, this report has four main aims:

  • Understand how we got to a highly urbanized, globalized society and why a more rural, relocalized society is inevitable.
  • Provide a framework (sustainability and resilience science) for how to think about our predicament and the changes that will need to occur.
  • Review the most salient aspects of agronomy, soil science, and local food systems, including some of the schools of thought that are adapted to what’s in store.
  • Offer a strategy and tactics to foster the transformation to a local, sustainable, resilient food system.

This report reviews society’s energy situation; explores the consequences for producing, transporting, storing, and consuming food; and provides essential information and potentially helpful advice to those working on reform and adaptation. It presents a difficult message. Our food system is at great risk from a problem most are not yet aware of, i.e., energy decline. Because the problem is energy, we can’t rely on just-in-time innovative technology, brilliant experts, and faceless farmers in some distant lands to deal with it. Instead, we must face the prospect that many of us will need to be more responsible for food security. People in highly urbanized and globally integrated countries like the U.S. will need to reruralize and relocalize human settlement and subsistence patterns over the coming decades to adapt to both the end of cheaply available fossil fuels and climate change.

These trends will require people to change the way they go about their lives, and the way their communities go about business. There is no more business as usual. The point is not to give you some sort of simple list of “50 things you should do to save the planet” or “the top 10 ways to grow food locally.” Instead, this report provides the broad context, key concepts, useful information, and ways of thinking that will help you and those around you understand and adapt to the coming changes.

To help digest the diverse material, the report is divided into five sections plus a set of concluding thoughts:

  • Part One sets the broad context of how fossil hydrocarbons—coal, oil and natural gas—transformed civilization, how their overuse has us in a bind, and why renewable energy systems will fall short of most expectations.
  • Part Two presents ways to think about how the world works from disciplines such as ecology, and highlights the difference between more prevalent, but outdated, mental models.
  • Part Three reviews basic science on soils and agronomy, and introduces historical ways people have fed themselves.
  • Part Four outlines some modern schools of thought on agrarian ways of living without fossil fuels.
  • Part Five brings the knowledge contained in the report to bear on strategies and tactics to navigate the future. Although the report is written for a U.S. audience, much of the content is more widely applicable.

During the process of writing this report, thought leaders and practitioners were interviewed to capture their perspectives on some of the key questions that arise from considering the decline of fossil fuels, consequences for the food system, and how people can adapt. Excerpts from those interviews are given in the Appendix section “Other Voices,” and several of their quotes are inserted throughout the main text.

Globalization has become a culture, and the prospect of losing this culture is unsettling. Much good has arisen from the integration and movements of people and materials that have occurred in the era of globalization. But we will soon be forced to face the consequences of unsustainable levels of consumption and severe disruption of the biosphere. For the relatively wealthy, these consequences have been hidden by tools of finance and resources flows to power centers, while people with fewer means have been trampled in the process of assimilation. In the U.S., our food system is culturally bankrupt, mirroring and contributing to crises of health and the environment. We can rebuild the food system in ways that reflect energy, soil, and climate realities, seeking opportunities to recover elements of past cultures that inhabited the Earth with grace. Something new will arise, and in the evolution of what comes next, many may find what is often lacking in life today—the excitement of a profound challenge, meaning beyond the self, a deep sense of purpose, and commitment to place.

Bradford J (2009) Ecological Economics and the Food System. The oil drum: Campfire.

To get by on ambient energy as much as possible, we have sought alternatives to fossil fuels in every aspect of the food system we participate in. Table 1 considers each type of work done on the farm, to the fork, and back again and contrasts how fossil fuels are commonly used with the technologies we have applied.

Type of Work | Common Fossil-Fuel Inputs | Alternatives Implemented
Soil cultivation | Gasoline or diesel powered rototiller or small tractor | Low-wheel cultivator, broadfork, adze or grub hoe, rake and human labor
Soil fertility | In-organic or imported organic fertilizer | Growing of highly productive, nitrogen and biomass crop (banner fava beans), making aerobic compost piles sufficient to build soil carbon and nitrogen fertility, re-introducing micro-nutrients by importing locally generated food waste and processing in a worm bin, and application of compost teas for microbiology enhancement
Pest and weed management | Herbicide and pesticide applications, flame weeder, tractor cultivation | Companion planting, crop rotation, crop diversity and spatial heterogeneity, beneficial predator attraction through landscape plantings, emphasis on soil and plant health, and manual removal with efficient human-scaled tools
Seed sourcing | Bulk ordering of a few varieties through centralized seed development and distribution outlets | Sourcing seeds from local supplier, developing a seed saving and local production and distribution plan using open pollinated varieties
Food distribution | Produce trucks, refrigeration, long-distance transport, eating out of season | Produce only sold locally, direct from farm or hauled to local restaurants or grocers using bicycles or electric vehicles, produce grown with year-round consumption in mind with farm delivering large quantities of food in winter months
Storage and processing at production end | Preparation of food for long distance transport, storage and retailing requiring energy intensive cooling, drying, food grade wax and packaging | Passive evaporative cooling, solar dehydrating, root cellaring and re-usable storage baskets and bags
Home and institutional storage and cooking | Natural gas, propane or electric fired stoves and ovens, electric freezers and refrigerators | Solar ovens, promotion of eating fresh and seasonal foods, home-scale evaporative cooling for summer preservation and “root cellaring” techniques for winter storage

Table 1. Feeding people requires many kinds of work and all work entails energy. In most farm operations the main energy sources are fossil fuels. By contrast, Brookside Farm uses and develops renewable energy based alternatives.

Our use of food scraps to replace exported fertility also reduces energy by diverting mass from the municipal waste stream. Solid Waste of Willits has a transfer station in town but no local disposal site. Our garbage is trucked to Sonoma County about 100 miles to the south. From there it may be sent to a rail yard and taken several hundred miles away to an out-of-state landfill. We are also installing a rainwater catchment and storage system that will supply about half of the farm's annual water needs, offsetting the use of treated municipal water. The associated irrigation system will be driven by a photovoltaic system instead of the diesel-driven pumps common on many farms.

Let me put the area of lawn from this study into a food perspective. The 128,000 square kilometers of lawns is equivalent to about 32 million acres. A generous portion of fruits and vegetables for a person per year is 700 lbs, or about half the total weight of food consumed in a year.[xviii] Modest yields in small farms and gardens would be in the range of about 20,000 lbs per acre.[xix] Even with half the area set aside to grow compost crops each year, simple math, summarized in the table and the short sketch below, reveals that the entire U.S. population could be fed plenty of vegetables and fruits using two-thirds of the area currently in lawns.

Number of people in U.S.: 300,000,000
Pounds of fruits and vegetables per person per year: 700
Yield per acre in pounds: 20,000
People fed per acre in production: 29
Fraction of area set aside for compost crops: 0.5
Compost-adjusted people fed per acre: 14
Number of acres to feed population: 21,000,000
Acres in lawn: 32,000,000
Percent of lawn area needed: 66%
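To make this arithmetic easy to check, here is a minimal back-of-the-envelope sketch in Python; the variable names are mine, and the inputs are simply the figures quoted in the table above.

```python
# Back-of-the-envelope check of the lawn-to-food arithmetic in the table above.
# All inputs are the figures quoted in the text; rounding matches the text.
population = 300_000_000      # people in the U.S.
lbs_per_person = 700          # generous yearly portion of fruits and vegetables, lbs
yield_per_acre = 20_000       # modest small-farm/garden yield, lbs per acre
compost_fraction = 0.5        # half the area rests in compost crops each year
lawn_acres = 32_000_000       # 128,000 square kilometers of lawn

people_per_acre = yield_per_acre / lbs_per_person                    # ~29
adjusted_people_per_acre = people_per_acre * (1 - compost_fraction)  # ~14
acres_needed = population / adjusted_people_per_acre                 # ~21 million
print(f"{acres_needed / 1e6:.0f} million acres needed, "
      f"{acres_needed / lawn_acres:.0%} of U.S. lawn area")          # ~66%
```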

Labor Compared to Hours of T.V.

For its members Brookside Farm's role is to provide a substantial proportion of their yearly vegetable and fruit needs. Using our farming techniques, we estimate that one person working full time could grow enough produce for ten to twenty people. By contrast, an individual could grow their personal vegetable and fruit needs on a very part-time basis, probably half an hour per day on average, working an area the size of a small home (700 sq ft in veggies and fruits plus 700 sq ft in cover crops). Americans complain that they feel cramped for time and overworked. But is this really true, or just a function of addiction to a fast-paced media culture? According to Nielsen Media Research:[xx]


There are over 300,000 contaminated groundwater sites in the U.S.

Preface. If peak oil did indeed happen in 2018, as the EIA world production data shows, then let's use the oil we still have, before it is rationed, to clean up the 126,000+ sites that threaten to pollute groundwater for thousands of years, as this report from the National Research Council explains. And while we're at it, let's clean up nuclear waste too, which will pollute for hundreds of thousands of years.

Pollution in the news:

Westenhaus B (2022) The environmental consequence of burning rubber. oilprice.com. Have you ever wondered what happens to the rubber tread that wears off a vehicle's tires? On a planet with hundreds of millions of vehicles there has to be quite a lot somewhere. New modeling at the University of British Columbia Okanagan (UBCO) campus suggests that an increasing amount of microplastics, the fragments from tires and roadways, is ending up in lakes and streams. The researchers found that more than 50 metric tons of tire and road wear particles are released into waterways annually in an area like the Okanagan valley of British Columbia. With 1.5 billion tires produced every year globally, that's six million tonnes of tire and road wear particles, plus chemical additives, contaminating fresh water.


***

NRC. 2013. Alternatives for Managing the Nation’s Complex Contaminated Groundwater Sites. National Research Council, National Academies Press.

TABLE 2-6 Rough estimate of the total number of currently known facilities or contaminated sites and estimated costs to complete

CONCLUSIONS AND RECOMMENDATIONS

At least 126,000 sites across the country have been documented that have residual contamination at levels preventing them from reaching closure.

This number is likely to be an underestimate of the extent of contamination in the United States for a number of reasons. For some programs data are available only for contaminated facilities rather than individual sites, and the total does not include sites that likely exist but have not yet been identified, such as dry cleaners or small chemical-intensive businesses (e.g., electroplating, furniture refinishing). Information on cleanup costs incurred to date and estimates of future costs, as shown in Table 2-6, are highly uncertain. Despite this uncertainty, the estimated "cost to complete" of $110-$127 billion is likely an underestimate of future liabilities. Remaining sites include some of the most difficult to remediate sites, for which the effectiveness of planned remediation remains uncertain given their complex site conditions. Furthermore, many of the estimated costs do not fully consider the cost of long-term management of sites that will have contamination remaining in place at high levels for the foreseeable future.

Despite nearly 40 years of intensive efforts in the United States as well as in other industrialized countries worldwide, restoration of groundwater contaminated by releases of anthropogenic chemicals to a condition allowing for unlimited use and unrestricted exposure remains a significant technical and institutional challenge.

Recent estimates by the U.S. Environmental Protection Agency (EPA) indicate that expenditures for soil and groundwater cleanup at over 300,000 sites through 2033 may exceed $200 billion (not adjusted for inflation), and many of these sites have experienced groundwater impacts.

One dominant attribute of the nation’s efforts on subsurface remediation efforts has been lengthy delays between discovery of the problem and its resolution. Reasons for these extended timeframes are now well known: ineffective subsurface investigations, difficulties in characterizing the nature and extent of the problem in highly heterogeneous subsurface environments, remedial technologies that have not been capable of achieving restoration in many of these geologic settings, continued improvements in analytical detection limits leading to discovery of additional chemicals of concern, evolution of more stringent drinking water standards, and the realization that other exposure pathways, such as vapor intrusion, pose unacceptable health risks. A variety of administrative and policy factors also result in extensive delays, including, but not limited to, high regulatory personnel turnover, the difficulty in determining cost-effective remedies to meet cleanup goals, and allocation of responsibility at multiparty sites.

There is general agreement among practicing remediation professionals, however, that there is a substantial population of sites where, due to inherent geologic complexities, restoration within the next 50 to 100 years is likely not achievable. Reaching agreement on which sites should be included in this category, and what should be done with such sites, has proven difficult. A key decision in that Road Map is determining whether or not restoration of groundwater is "likely."

Summary

The nomenclature for the phases of site cleanup and cleanup progress are inconsistent between federal agencies, between the states and federal government, and in the private sector. Partly because of these inconsistencies, members of the public and other stakeholders can and have confused the concept of “site closure” with achieving unlimited use and unrestricted exposure goals for the site, such that no further monitoring or oversight is needed. In fact, many sites thought of as “closed” and considered as “successes” will require oversight and funding for decades and in some cases hundreds of years in order to be protective.

At hundreds of thousands of hazardous waste sites across the country, groundwater contamination remains in place at levels above cleanup goals. The most problematic sites are those with potentially persistent contaminants including chlorinated solvents recalcitrant to biodegradation, and with hydrogeologic conditions characterized by large spatial heterogeneity or the presence of fractures. While there have been success stories over the past 30 years, the majority of hazardous waste sites that have been closed were relatively simple compared to the remaining caseload.

At least 126,000 sites across the country have been documented that have residual contamination at levels preventing them from reaching closure. This number is likely to be an underestimate of the extent of contamination in the United States

Significant limitations with currently available remedial technologies persist that make achievement of Maximum Contaminant Levels (MCL) throughout the aquifer unlikely at most complex groundwater sites in a time frame of 50-100 years. Furthermore, future improvements in these technologies are likely to be incremental, such that long-term monitoring and stewardship at sites with groundwater contamination should be expected.

IMPLICATIONS OF CONTAMINATION REMAINING IN PLACE

Chapter 5 discusses the potential technical, legal, economic, and other practical implications of the finding that groundwater at complex sites is unlikely to attain unlimited use and unrestricted exposure levels for many decades.  First, the failure of hydraulic or physical containment systems, as well as the failure of institutional controls, could create new exposures. Second, toxicity information is regularly updated, which can alter drinking water standards, and contaminants that were previously unregulated may become so. In addition, pathways of exposure that were not previously considered can be found to be important, such as the vapor intrusion pathway. Third, treating contaminated groundwater for drinking water purposes is costly and, for some contaminants, technically challenging. Finally, leaving contamination in the subsurface may expose the landowner, property manager, or original disposer to complications that would not exist in the absence of the contamination, such as natural resource damages, trespass, and changes in land values. Thus, the risks and the technical, economic, and legal complications associated with residual contamination need to be compared to the time, cost, and feasibility involved in removing contamination outright.

New toxicological understanding and revisions to dose-response relationships will continue to be developed for existing chemicals, such as trichloroethene and tetrachloroethene, and for new chemicals of concern, such as perchlorate and perfluorinated chemicals. The implications of such evolving understanding include identification of new or revised ARARs (either more or less restrictive than existing ones), potentially leading to a determination that the existing remedy at some hazardous waste sites is no longer protective of human health and the environment.

Introduction

Since the 1970s, hundreds of billions of dollars have been invested by federal, state, and local government agencies as well as responsible parties to mitigate the human health and ecological risks posed by chemicals released to the subsurface environment. Many of the contaminants common to these hazardous waste sites, such as metals and volatile organic compounds, are known or suspected to cause cancer or adverse neurological, reproductive, or developmental conditions.

Over the past 30 years, some progress in meeting mitigation and remediation goals at hazardous waste sites has been achieved. For example, of the 1,723 sites ever listed on the National Priorities List (NPL), which are considered by the U.S. Environmental Protection Agency (EPA) to present the most significant risks, 360 have been permanently removed from the list because EPA deemed that no further response was needed to protect human health or the environment (EPA, 2012).

Seventy percent of the 3,747 hazardous waste sites regulated under the Resource Conservation and Recovery Act (RCRA) corrective action program have achieved "control of human exposure to contamination," and 686 have been designated as "corrective action completed." The Underground Storage Tank (UST) program also reports successes, including closure of over 1.7 million USTs since the program was initiated in 1984. The cumulative cost associated with these national efforts underscores the importance of pollution prevention and serves as a powerful incentive to reduce the discharge or release of hazardous substances to the environment, particularly when a groundwater resource is threatened. Although some of the success stories described above were challenging in terms of contaminants present and underlying hydrogeology, the majority of sites that have been closed were relatively simple (e.g., shallow, localized petroleum contamination from USTs) compared to the remaining caseload.

Indeed, hundreds of thousands of sites across both state and federal programs are thought to still have contamination remaining in place at levels above those allowing for unlimited land and groundwater use and unrestricted exposure (see Chapter 2). According to its most recent assessment, EPA estimates that more than $209 billion (in constant 2004 dollars) will be needed over the next 30 years to mitigate hazards at between 235,000 and 355,000 sites (EPA, 2004). This cost estimate, however, does not include continued expenditures at sites where remediation is already in progress, or where remediation has transitioned to long-term management.

It is widely agreed that long-term management will be needed at many sites for the foreseeable future, particularly for the more complex sites that have recalcitrant contaminants, large amounts of contamination, and/or subsurface conditions known to be difficult to remediate (e.g., low-permeability strata, fractured media, deep contamination).

According to the most recent annual report to Congress, the Department of Defense (DoD) currently has almost 26,000 active sites under its Installation Restoration Program where soil and groundwater remediation is either planned or under way. Of these, approximately 13,000 sites are the responsibility of the Army, the sponsor of this report. The estimated cost to complete cleanup at all DoD sites is approximately $12.8 billion. (Note that these estimates do not include sites containing unexploded ordnance.)

Complex Contaminated Sites

Although progress has been made in remediating many hazardous waste sites, there remains a sizeable population of complex sites where restoration is likely not achievable in the next 50-100 years. Although there is no formal definition of complexity, most remediation professionals agree that attributes include areally extensive groundwater contamination, heterogeneous geology, large releases and/or source zones, multiple and/or recalcitrant contaminants, heterogeneous contaminant distribution in the subsurface, and long time frames since releases occurred.

Complexity is also directly tied to the contaminants present at hazardous waste sites, which can vary widely and include organics, metals, explosives, and radionuclides. Some of the most challenging to remediate are dense nonaqueous phase liquids (DNAPLs), including chlorinated solvents.

Each of the NRC studies has, in one form or another, recognized that in almost all cases, complete restoration of contaminated groundwater is difficult, and in a substantial fraction of contaminated sites, not likely to be achieved in less than 100 years.

Trichloroethene (TCE) and tetrachloroethene are particularly challenging to restore because of their complex contaminant distribution in the subsurface.

Three classes of contaminants have proven very difficult to treat once released to the subsurface: metals, radionuclides, and DNAPLs, such as chlorinated solvents. The report concluded that "removing all sources of groundwater contamination, particularly DNAPLs, will be technically impracticable at many Department of Energy sites, and long-term containment systems will be necessary for these sites."

An example of the array of challenges faced by the DoD is provided by the Anniston Army Depot, where groundwater is contaminated with chlorinated solvents (as much as 27 million pounds of TCE) and inorganic compounds. TCE and other contaminants are thought to be migrating vertically and horizontally from the source areas, affecting groundwater downgradient of the base including the potable water supply to the City of Anniston, Alabama. The interim Record of Decision called for a groundwater extraction and treatment system, which has resulted in the removal of TCE in extracted water to levels below drinking water standards. Because the treatment system is not significantly reducing the extent or mobility of the groundwater contaminants in the subsurface, the current interim remedy is considered "not protective." Therefore, additional efforts have been made to remove greater quantities of TCE from the subsurface, and no end is in sight. Modeling studies suggest that the time to reach the TCE MCL in the groundwater beneath the source areas ranges from 1,200 to 10,000 years, and that partial source removal will shorten those times to 830–7,900 years.

The Department of Defense

The DoD environmental remediation program, measured by the number of facilities, is the largest such program in the United States, and perhaps the world.

The Installation Restoration Program (IRP), which addresses toxic and radioactive wastes as well as building demolition and debris removal, is responsible for 3,486 installations containing over 29,000 contaminated sites

The Military Munitions Response Program, which focuses on unexploded ordnance and discarded military munitions, is beyond the scope of this report and is not discussed further here, although its future expenses are greater than those anticipated for the IRP.

The CERCLA program was established to address hazardous substances at abandoned or uncontrolled hazardous waste sites. Through the CERCLA program, the EPA has developed the National Priorities List (NPL).  There are 1,723 facilities that have been on the NPL.

As of June 2012, 359 of the 1,723 facilities have been “deleted” from the NPL, which means the EPA has determined that no further response is required to protect human health or the environment; 1,364 remain on the NPL.

Statistics from EPA (2004) illustrate the typical complexity of hazardous waste sites at facilities on the NPL. Volatile organic compounds (VOCs) are present at 78 percent of NPL facilities, metals at 77 percent, and semivolatile organic compounds (SVOCs) at 71 percent. All three contaminant groups are found at 52 percent of NPL facilities, and two of the groups at 76 percent of facilities

RCRA Corrective Action Program

Among other objectives, the Resource Conservation and Recovery Act (RCRA) governs the management of hazardous wastes at operating facilities that handle or handled hazardous waste.

Although tens of thousands of waste handlers are potentially subject to RCRA, currently EPA has authority to impose corrective action on 3,747 RCRA hazardous waste facilities in the United States

Underground Storage Tank Program

In 1984, Congress recognized the unique and widespread problem posed by leaking underground storage tanks by adding Subtitle I to RCRA.

UST contaminants are typically light nonaqueous phase liquids (LNAPLs) such as petroleum hydrocarbons and fuel additives.

Responsibility for the UST program has been delegated to the states (or even local oversight agencies such as a county or a water utility with basin management programs), which set specific cleanup standards and approve specific corrective action plans and the application of particular technologies at sites. This is true even for petroleum-only USTs on military bases, a few of which have hundreds of such tanks.

At the end of 2011, there were 590,104 active tanks in the UST program

Currently, there are 87,983 leaking tanks that have contaminated surrounding soil and groundwater, the so-called “backlog.” The backlog number represents the cumulative number of confirmed releases (501,723) minus the cumulative number of completed cleanups (413,740).

Department of Energy

The DOE faces the task of cleaning up the legacy of environmental contamination from activities to develop nuclear weapons during World War II and the Cold War. Contaminants include short-lived and long-lived radioactive wastes, toxic substances such as chlorinated solvents, "mixed wastes" that include both toxic substances and radionuclides, and, at a handful of facilities, unexploded ordnance. Much like the military, a given DOE facility or installation will tend to have multiple sites where contaminants may have been spilled, disposed of, or abandoned that can be variously regulated by CERCLA, RCRA, or the UST program.

The DOE Environmental Management program, established in 1989 to address several decades of nuclear weapons production, “is the largest in the world, originally involving two million acres at 107 sites in 35 states and some of the most dangerous materials known to man”.

Given that major DOE sites tend to be more challenging than typical DoD sites, it is not surprising that the scope of future remediation is substantial. Furthermore, because many DOE sites date back 50 years, contaminants have diffused into the subsurface matrix, considerably complicating remediation.

More recent reports suggest that about 7,000 individual release sites out of 10,645 historical release sites have been “completed,” which means at least that a remedy is in place, leaving approximately 3,650 sites remaining. In 2004, DOE estimated that almost all installations would require long-term stewardship

As of April 1995, over 3,000 contaminated sites on 700 facilities, distributed among 17 non-DoD and non-DOE federal agencies, were potentially in need of remediation. The Department of Interior (DOI), Department of Agriculture (USDA), and National Aeronautics and Space Administration (NASA) together account for about 70 percent of the civilian federal facilities reported to EPA as potentially needing remediation (EPA, 2004). EPA estimates that many more sites have not yet been reported, including an estimated 8,000 to 31,000 abandoned mine sites, most of which are on federal lands, although the fraction of these that are impacting groundwater quality is not reported. The Government Accountability Office (GAO) (2008) determined that there were at least 33,000 abandoned hardrock mine sites in the 12 western states and Alaska that had degraded the environment by contaminating surface water and groundwater or leaving arsenic-contaminated tailings piles.

State Sites

A broad spectrum of sites is managed by states, local jurisdictions, and private parties, and thus are not part of the CERCLA, RCRA, or UST programs. These types of sites can vary in size and complexity, ranging from sites similar to those at facilities listed on the NPL to small sites with low levels of contamination.

States typically define Brownfields sites as industrial or commercial facilities that are abandoned or underutilized due to environmental contamination or fear of contamination. EPA (2004) postulated that only 10 to 15 percent of the estimated one-half to one million Brownfields sites have been identified.

As of 2000, 23,000 state sites had been identified as needing further attention that had not yet been targeted for remediation (EPA, 2004). The same study estimated that 127,000 additional sites would be identified by 2030.

Dry Cleaner Sites

Active and particularly former dry cleaner sites present a unique problem in hazardous waste management because of their ubiquitous nature in urban settings, the carcinogenic contaminants used in the dry cleaning process (primarily the chlorinated solvent PCE, although other solvents have been used), and the potential for the contamination to reach receptors via the drinking water and indoor air (vapor intrusion) exposure pathways. Depending on the size and extent of contamination, dry cleaner sites may be remediated under one or more state or federal programs such as RCRA, CERCLA, or the state mandated or voluntary programs discussed previously, and thus the total estimates of dry cleaner sites are not listed separately.

In 2004, there were an estimated 30,000 commercial, 325 industrial, and 100 coin-operated active dry cleaners in the United States (EPA, 2004). Despite their smaller numbers, industrial dry cleaners produce the majority of the estimated gallons of hazardous waste from these facilities (EPA, 2004). As of 2010, the number of dry cleaners has grown, with an estimated 36,000 active dry cleaner facilities in the United States—of which about 75 percent (27,000 dry cleaners) have soil and groundwater contamination (SCRD, 2010b). In addition to active sites, dry cleaners that have moved or gone out of business—i.e., inactive sites—also have the potential for contamination. Unfortunately, significant uncertainty surrounds estimates of the number of inactive dry cleaner sites and the extent of contamination at these sites. Complicating factors include the fact that (1) older dry cleaners used solvents less efficiently than younger dry cleaners thus enhancing the amount of potential contamination and (2) dry cleaners that have moved or were in business for long amounts of time tend to employ different cleaning methods throughout their lifetime. EPA (2004) documented at least 9,000 inactive dry cleaner sites, although this number does not include data on dry cleaners that closed prior to 1960. There are no data on how many of these documented inactive dry cleaner sites may have been remediated over the years. EPA estimated that there could be as many as 90,000 inactive dry cleaner sites in the United States.

Department of Defense

The Installation Restoration Program reports that it has spent approximately $31 billion through FY 2010, and estimates for "cost to complete" exceed $12 billion.

Implementation costs for the CERCLA program are difficult to obtain because most remedies are implemented by private, nongovernmental potentially responsible parties (PRPs), and generally there is no requirement for these PRPs to report actual implementation costs.

EPA (2004) estimated that the cost for addressing the 456 facilities that have not begun remedial action is $16-$23 billion.

A more recent report from the GAO (2009) suggests that individual site remediation costs have increased over time (in constant dollars) because a higher percentage of the remaining NPL facilities are larger and more complex (i.e., "megasites") than those addressed in the past. Additionally, GAO (2009) found that the percentage of NPL facilities without responsible parties to fund cleanups may be increasing. When no PRP can be identified, the cost for Superfund remediation is shared by the states and the Superfund Trust Fund. The Superfund Trust Fund has enjoyed a relatively stable budget—e.g., $1.25 billion, $1.27 billion, and $1.27 billion for FY 2009, 2010, and 2011, respectively—although recent budget proposals seek to reduce these levels. States contribute as much as 50 percent of the construction and operation costs for certain CERCLA actions in their state. After ten years of remedial actions at such NPL facilities, states become fully responsible for continuing long-term remedial actions.

In 2004, EPA estimated that remediation of the remaining RCRA sites will cost between $31 billion and $58 billion, or an average of $11.4 million per facility

Underground Storage Tank Program

There is limited information available to determine costs already incurred in the UST program. EPA (2004) estimated that the cost to close all leaking UST (LUST) sites could reach $12-$19 billion, or an average of $125,000 to remediate each release site (this includes site investigations, feasibility studies, and treatment/disposal of soil and groundwater). Based on this estimate of $125,000 per site, the Committee calculated that remediating the 87,983 backlogged releases would require $11 billion. The presence of the recalcitrant former fuel additive methyl tert-butyl ether (MTBE) and its daughter product and co-additive tert-butyl alcohol could increase the cost per site. Most UST cleanup costs are paid by property owners, state and local governments, and special trust funds based on dedicated taxes, such as fuel taxes.
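As a quick sanity check on the backlog and cost figures quoted above and earlier in these notes, here is a minimal sketch in Python; the names are mine, and the numbers are those reported in the text.

```python
# UST "backlog" arithmetic: confirmed releases minus completed cleanups,
# costed at EPA's (2004) average estimate per release site.
confirmed_releases = 501_723
completed_cleanups = 413_740
cost_per_site = 125_000   # EPA (2004) average, dollars per release site

backlog = confirmed_releases - completed_cleanups        # 87,983 sites
total_cost = backlog * cost_per_site                     # ~$11 billion
print(f"backlog: {backlog:,} sites")
print(f"estimated cost to remediate: ${total_cost / 1e9:.0f} billion")
```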

Department of Energy

The Department's FY 2011 report to Congress shows that DOE's anticipated cost to complete remediation of soil and groundwater contamination ranges from $17.3 to $20.9 billion. The program is dominated by a small number of mega-facilities, including Hanford (WA), Idaho National Labs, Savannah River (SC), Los Alamos National Labs (NM), and the Nevada Test Site. Given that the cost to complete soil and groundwater remediation at these five facilities alone ranges from $16.4 to $19.9 billion (DOE, 2011), the Committee believes that the DOE's anticipated cost-to-complete figure is likely an underestimate of the Agency's financial burden; the number does not include newly discovered releases or the cost of long-term management at all sites where waste remains in the subsurface. Data on long-term stewardship costs, including the expense of operating and maintaining engineering controls, enforcing institutional controls, and monitoring, are not consolidated but are likely to be substantial and ongoing.

Stewardship costs for just the five facilities managed by the National Nuclear Security Administration (Lawrence Livermore National Laboratory, CA, Livermore’s Site 300, Pantex, TX, Sandia National Laboratories, NM, and the Kansas City Plant, MO) total about $45 million per year (DOE, 2012c).

Other Federal Sites

EPA (2004) reports that there is a $15-$22 billion estimated cost to address at least 3,000 contaminated areas on 700 civilian federal facilities, based on estimates from various reports from DOI, USDA, and NASA.

States

EPA (2004) estimated that states and private parties together have spent about $1 billion per year on remediation, addressing about 5,000 sites annually under mandatory and voluntary state programs. If remediation were continued at this rate, 150,000 sites would be completed over 30 years, at a cost of approximately $30 billion (or roughly $200,000 per site).
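A minimal sketch of the state-program arithmetic, using only the spending rate and site counts quoted above (the variable names are mine):

```python
# State and private cleanup pace: ~$1 billion per year across ~5,000 sites per year,
# projected forward 30 years as in the EPA (2004) estimate quoted above.
annual_spend = 1_000_000_000   # dollars per year
sites_per_year = 5_000
years = 30

total_sites = sites_per_year * years               # 150,000 sites
total_cost = annual_spend * years                  # $30 billion
print(f"{total_sites:,} sites, ${total_cost / 1e9:.0f} billion total, "
      f"${total_cost / total_sites:,.0f} per site")   # ~$200,000 per site
```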

IMPACTS TO DRINKING WATER SUPPLIES

The Committee sought information on the number of hazardous waste sites that impact a drinking water aquifer, that is, pose a substantial near-term risk to public water supply systems that use groundwater as a source. Unfortunately, program-specific information on water supply impacts was generally not available. Therefore, the Committee also sought other evidence related to the effects of hazardous waste disposal on the nation's drinking water aquifers.

Despite the existence of several NPL and DoD facilities that are known sources of contamination to public or domestic wells (e.g., the San Fernando and San Gabriel basins in Los Angeles County), there is little aggregated information about the number of CERCLA, RCRA, DoD, DOE, UST, or other sites that directly impact drinking water supply systems. None of the programs reviewed in this chapter specifically compiles information on the number of sites currently adversely affecting a drinking water aquifer. However, the Committee was able to obtain information relevant to groundwater impacts from some programs, e.g., the DoD. The Army informed the Committee that public water supplies are threatened at 18 Army installations.

Also, private drinking water wells are known to be affected at 23 installations. A preliminary assessment in 1997 showed that 29 Army installations may possibly overlie one or more sole source aquifers. Some of the best known are Camp Lejeune Marine Corps Base (NC), Otis Air National Guard Base (MA), and the Bethpage Naval Weapons Industrial Reserve Plant (NY).

CERCLA. Each individual remedial investigation/feasibility study (RI/FS) and Record of Decision (ROD) should state whether a drinking water aquifer is affected, although this information has not been compiled. Canter and Sabatini (1994) reviewed the RODs for 450 facilities on the NPL. Their investigation revealed that 49 of the RODs (11 percent) indicated that contamination of public water supply systems had occurred. “A significant number” of RODs also noted potential threats to public supply wells. Additionally, the authors note that undeveloped aquifers have also been contaminated, which prevents or limits the unrestricted use (i.e., without treatment) of these resources as a future water supply.

The EPA also compiles information about remedies implemented within Superfund. EPA (2007) reported that out of 1,072 facilities that have a groundwater remedy, 106 specifically have a water supply remedy, by which we inferred direct treatment of the water to allow potable use or switching to an alternative water supply. This suggests that 10 percent of NPL facilities adversely affect or significantly threaten drinking water supply systems.

RCRA. Of the 1,968 highest priority RCRA Corrective Action facilities, EPA (2008) reported that there is “unacceptable migration of contaminated groundwater” at 77 facilities. Also, 17,042 drinking water aquifers have a RCRA facility within five miles, but without additional information, it is impossible to know if these facilities are actually affecting the water sources.

UST. In 2000, 35 states reported USTs as the number one threat to groundwater quality (and thus indirectly to drinking water). However, more specific information on the number of leaking USTs currently impacting a drinking water aquifer is not available.

Other Evidence That Hazardous Waste Sites Affect Water Supplies

The U.S. Geological Survey (USGS) has compiled large data sets over the past 20 years regarding the prevalence of VOCs in waters derived from domestic (private) and public wells. VOCs include solvents, trihalomethanes (some of which are solvents [e.g., chloroform], but may also arise from chlorination of drinking water), refrigerants, organic synthesis compounds (e.g., vinyl chloride), gasoline hydrocarbons, fumigants, and gasoline oxygenates. Because many (but not all) of these compounds may arise from hazardous waste sites, the USGS studies provide further insight into the extent to which anthropogenic activities contaminate groundwater supplies.

Zogorski et al. (2006) summarized the presence of VOCs in groundwater, private domestic wells, and public supply wells from sampling sites throughout the United States. Using a threshold level of 0.2 µg/L (much lower than current EPA drinking water standards for individual VOCs; see Table 3-1), 14 percent of domestic wells and 26 percent of public wells had one or more VOCs present. The detection frequencies of individual VOCs in domestic wells were two to ten times higher when a threshold of 0.02 µg/L was used (see Figures 2-2 and 2-3). In public supply wells, PCE was detected above the 0.2 µg/L threshold in 5.3 percent of the samples and TCE in 4.3 percent of the samples. The total percentage of public supply wells with either PCE or TCE (or both) above the 0.2 µg/L threshold is 7.3 percent.
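The union figure above also implies, by inclusion-exclusion, how many public wells had both compounds above the threshold; this overlap is my inference from the quoted percentages, not a number stated in the report.

```python
# Inclusion-exclusion on the detection percentages quoted above
# (percent of public supply wells above the 0.2 µg/L assessment level).
pce = 5.3     # PCE detected
tce = 4.3     # TCE detected
either = 7.3  # PCE or TCE (or both) detected

both = pce + tce - either   # wells with both compounds above the threshold
print(f"wells with both PCE and TCE above 0.2 µg/L: {both:.1f} percent")  # ~2.3
```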

FIGURE 2-2 Detection frequencies in domestic well samples for the 15 most frequently detected VOCs at assessment levels of 0.2 and 0.02 µg/L (chloroform, MTBE, PCE, 1,1,1-TCA, CFC-12, toluene, chloromethane, TCE, DBCP, methylene chloride, CFC-11, bromodichloromethane, 1,2-dichloropropane, dibromochloromethane, and 1,2,3-trichloropropane). SOURCE: Zogorski et al. (2006), illustration provided by the USGS National Water Quality Assessment program.

FIGURE 2-3 The 15 most frequently detected VOCs in public supply wells (chloroform, MTBE, PCE, bromoform, dibromochloromethane, TCE, bromodichloromethane, 1,1,1-TCA, 1,1-DCA, CFC-12, cis-1,2-DCE, 1,1-DCE, CFC-11, trans-1,2-DCE, and toluene). SOURCE: Zogorski et al. (2006), illustration provided by the USGS National Water Quality Assessment program.

Further analysis of domestic wells by DeSimone et al. (2009) showed that organic contaminants were detected in 60 percent of 2,100 sampled wells. Wells were sampled in 48 states in parts of 30 regionally extensive aquifers used for water supply. Aquifers were randomly selected for sampling and there was no prior knowledge of contamination.


Toccalino and Hopple (2010) and Toccalino et al. (2010) focused on 932 public supply wells across the United States. The public wells sampled in this study represent less than 1 percent of all groundwater that feeds the nation's public water systems. The samples, however, were widely distributed nationally and were randomly selected to represent typical aquifer conditions. Overall, 60 percent of public wells contained one or more VOCs at a concentration of ≥ 0.02 µg/L, and 35 percent of public wells contained one or more VOCs at a concentration of ≥ 0.2 µg/L.

Overall detection frequencies for individual compounds included 23 percent for PCE, 15 percent for TCE, 14 percent for MTBE, and 12 percent for 1,1,1-TCA (see Figure 2-5). PCE and TCE exceeded the MCL in approximately 1 percent of the public wells sampled.

 

FIGURE 2-4 VOCs (in black) and pesticides (in white) detected in more than 1 percent of domestic wells at a level of 0.02 µg/L.

 

FIGURE 2-5 VOCs and pesticides with detection frequencies of 1 percent or greater at assessment levels of 0.02 µg/L in public wells in samples collected from 1993–2007. SOURCE: Toccalino and Hopple (2010) and Toccalino et al. (2010)

 

Overall, the USGS studies show that there is widespread, very low level contamination of private and public wells by VOCs, with a reasonable estimate being 60 to 65% of public wells having detectable VOCs. According to the data sets of Toccalino and Hopple (2010) and Toccalino et al. (2010), approximately 1% of sampled public wells have levels of VOCs above MCLs. Thus, water from these wells requires additional treatment to remove the contaminants before it is provided as drinking water to the public. EPA (2009b) compiled over 309,000 groundwater measurements of PCE and TCE from raw water samples at over 46,000 groundwater-derived public water supplies in 45 states. Compared to the USGS data, this report gives a lower percentage of water supplies being contaminated: TCE concentration exceeded its MCL in 0.34 percent of the raw water samples from groundwater-derived drinking water supply systems.

There are other potential sources of VOCs in groundwater beyond hazardous waste sites. For example, chloroform is a solvent but also a disinfection byproduct, so groundwater sources impacted by chlorinated water (e.g., via aquifer storage/recharge, leaking sewer pipes) would be expected to show chloroform detections. Another correlation seen in the USGS data is that domestic and public wells in urban areas are more likely to have VOC detections than those in rural areas. This finding is not unexpected given the much higher level of industrial practices in urban areas that can result in releases of these chemicals to the subsurface.

Another way to estimate the number of public water supplies affected by contaminated groundwater is to consider the number of water supply systems that specifically seek to remove organic contaminants. The EPA Community Water System Survey (EPA, 2002) reports that 2.3 to 2.6 percent of systems relying solely on groundwater have "organic contaminant removal" as a treatment goal. For systems that use both surface water and groundwater, 10.3 to 10.5 percent have this as a treatment goal.

 

In summary, it appears that the following conclusions about the contamination of private and public groundwater systems can be drawn: (1) there is VOC contamination of many private and public wells (upwards of 65%) in the U.S., but at levels well below MCLs; the origin of this contamination is uncertain and the proportion caused by releases from hazardous waste sites is unknown; (2) approximately one in 10 NPL facilities is impacting or significantly threatening a drinking water supply system relying on groundwater, requiring wellhead treatment or the use of alternative water sources; and (3) public wells are more susceptible to contamination than private wells, due to their higher likelihood of being in urban areas and their higher pumping rates and hydraulic capture zones.

 

All of these issues suggest that there can be no generalizations about the condition of sites referred to as “closed,” particularly assumptions that they are “clean,” meaning available for unlimited use and unrestricted exposure. Indeed, the experience of the Committee in researching “closed sites” suggests that many of them contain contaminant levels above those allowing for unlimited use and unrestricted exposure, even in those situations where there is “no further action” required.

 

Furthermore, it is clear that states are not tracking their caseload at the level of detail needed to ensure that risks are being controlled subsequent to “site closure.” Thus, reports of cleanup success should be viewed with caution.

 

CONCLUSIONS AND RECOMMENDATIONS

 

The Committee’s rough estimate of the number of sites remaining to be addressed and their associated future costs is presented in Table 2-6, which lists the latest available information on the number of facilities (for CERCLA and RCRA) and contaminated sites (for the other programs) that have not yet reached closure, and the estimated costs to remediate the remaining sites.

 

 


 

TABLE 2-6 Rough estimate of the total number of currently known facilities or contaminated sites that have not reached closure and estimated costs to complete

 

 

At least 126,000 sites across the country have been documented that have residual contamination at levels preventing them from reaching closure. This number is likely to be an underestimate of the extent of contamination in the United States for a number of reasons. First, for some programs data are available only for contaminated facilities rather than individual sites; for example, RCRA officials declined to provide an average number of solid waste management units per facility, noting that it ranged from 1 to “scores.” CERCLA facilities frequently contain more than one individual release site. The total does not include DoD sites that have reached remedy in place or response complete, although some such sites may indeed contain residual contamination. Finally, the total does not include sites that likely exist but have not yet been identified, such as dry cleaners or small chemical-intensive businesses (e.g., electroplating, furniture refinishing).

 

 

Information on cleanup costs incurred to date and estimates of future costs, as shown in Table 2-6, are highly uncertain. Despite this uncertainty, the estimated "cost to complete" of $110-$127 billion is likely an underestimate of future liabilities. Remaining sites include some of the most difficult to remediate sites, for which the effectiveness of planned remediation remains uncertain given their complex site conditions. Furthermore, many of the estimated costs (e.g., the CERCLA figure) do not fully consider the cost of long-term management of sites that will have contamination remaining in place at high levels for the foreseeable future.

 

Remedial Objectives, Remedy Selection, and Site Closure

The issue of setting remedial objectives touches upon every aspect and phase of soil and groundwater cleanup, but perhaps none as important as defining the conditions for "site closure." Whether a site can be "closed" depends largely on whether remediation has met its stated objectives, usually stated as "remedial action objectives." Such determinations can be very difficult to make when objectives are stated in such ill-defined terms as removal of mass "to the maximum extent practicable." More importantly, there are debates at hazardous waste sites across the country about whether or not to alter long-standing cleanup objectives when they are unobtainable in a reasonable time frame. For example, the state of California is closing a large number of petroleum underground storage tank sites that are deemed to present a low threat to the public, despite the affected groundwater not meeting cleanup goals. In other words, some residual contamination remains in the subsurface, but this residual contamination is deemed not to pose unacceptable future risks to human health and the environment. Other states have pursued similar pragmatic approaches to low-risk sites where the residual contaminants are known to biodegrade over time, as is the case for most petroleum-based chemicals of concern (e.g., benzene, naphthalene). Many of these efforts appear to be in response to the slow pace of cleanup of contaminated groundwater; the inability of many technologies to meet drinking water-based cleanup goals in a reasonable period of time, particularly at sites with dense nonaqueous phase liquids (DNAPLs) and complicated hydrogeology like fractured rock; and the limited resources available to fund site remediation.

There is considerable variability in how EPA and the states consider groundwater as a potential source of drinking water. EPA has defined groundwater as not capable of being used as a source of drinking water if (1) the available quantity is too low (e.g., less than 150 gallons per day can be extracted), (2) the groundwater quality is unacceptable (e.g., greater than 10,000 ppm total dissolved solids, TDS), (3) background levels of metals or radioactivity are too high, or (4) the groundwater is already contaminated by manmade chemicals (EPA, 1986, cited in EPA, 2009a). California, on the other hand, establishes the TDS criteria at less than 3,000 ppm to define a “potential” source of drinking water. And in Florida, cleanup target levels for groundwater of low yield and/or poor quality can be ten times higher than the drinking water standard (see Florida Administrative Code Chapter 62-520 Ground Water Classes, Standards, and Exemptions). Some states designate all groundwater as a current or future source of drinking water (GAO, 2011).
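To make the federal criteria concrete, here is an illustrative sketch (not an official EPA tool; the function name and signature are mine) that applies the four exclusion conditions quoted above.

```python
# Hypothetical helper applying the EPA (1986) criteria quoted above for when
# groundwater is NOT considered a potential source of drinking water.
def not_a_potential_drinking_water_source(yield_gallons_per_day: float,
                                          tds_ppm: float,
                                          high_background_metals_or_radioactivity: bool,
                                          already_contaminated: bool) -> bool:
    """Return True if any of the four federal exclusion criteria applies."""
    return (yield_gallons_per_day < 150        # available quantity too low
            or tds_ppm > 10_000                # quality unacceptable (brackish/saline)
            or high_background_metals_or_radioactivity
            or already_contaminated)

# Example: a low-yield, brackish aquifer is excluded under the federal criteria,
# though California's stricter 3,000 ppm TDS cutoff would classify it differently.
print(not_a_potential_drinking_water_source(100, 12_000, False, False))  # True
```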

The Limits of Aquifer Restoration

As shown in many previous reports (EPA, 2003; NRC, 1994, 1997, 2003, 2005), at complex groundwater contamination sites (particularly those with low solubility or strongly adsorbed contaminants), conventional and alternative remediation technologies have not been capable of reducing contaminant concentrations (particularly in the source area) to drinking water standards quickly.

 

 


Book review of Fruits of Eden: David Fairchild & America's Plant Hunters

Preface. Botanist David Fairchild is one of the reasons the average grocery store has 39,500 items. Before he came along, most people ate just a few kinds of food day in day out (though that was partly due to a lack of refrigeration).

Ever since I read this book I have longed to eat a mangosteen, Fairchild's favorite fruit (with mango a close second). But no luck so far.

What wonderful and often adventurous work Fairchild and other botanists had traveling all over the world in search of new crops American farmers could grow. Grains that could grow in colder climates were sought out.

Since 80 to 90% of people in future generations will be farmers after fossil fuels are gone, growing food organically because fertilizer and pesticides are made from natural gas and oil, it would be wise for them to plant as many varieties of crops as possible, not only for gourmet meals but also for biodiversity, pest control, and a higher quality of life.

As usual, what follows are Kindle notes; this isn't a proper book review.


***

Amanda Harris. 2015. Fruits of Eden: David Fairchild and America's Plant Hunters. University Press of Florida.

At the end of the 19th century, most food in America was bland and brown. The typical family ate pretty much the same dishes every day. Their standard fare included beefsteaks smothered in onions, ham with rank-smelling cabbage, or maybe mushy macaroni coated in cheese. Since refrigeration didn’t exist, ingredients were limited to crops raised in the backyard or on a nearby farm. Corn and wheat, cows and pigs dominated American agriculture and American kitchens.

Fairchild transformed American meals by introducing foods from other countries. His campaign began as a New Year’s Resolution for 1897 and continued for more than 30 years, despite difficult periods of xenophobia at home and international warfare abroad. After he persuaded the United States Department of Agriculture to sponsor his project, he sent other smart, curious botanists to Asia, Africa, South America, and Europe to find new foods and plants. They explored remote jungles, desert oases, and mountain valleys and shipped their discoveries to government gardeners for testing across America. Collectively, the plant explorers introduced more than 58,000 items.

Many of their discoveries have been used as breeding material to improve existing plants, and others have become staples of the American table like mangoes, avocados, soybeans, figs, dates, and Meyer lemons.

Fairchild arrived in the nation's capital on July 25, 1889, four months after the inauguration of Benjamin Harrison, a Republican from Indiana. The United States totaled 38 states, although four new ones (North Dakota, Washington, South Dakota, and Montana) would be added in November 1889. The country's population was a little more than 50 million. Farming was an enormously important segment of the economy: the market value of agricultural products was more than $500 million (more than $12.5 billion in current dollars). Young scientists working to improve agriculture were as valuable to the nation as rocket scientists would be 75 years later.

Despite the national importance of farming, the U.S. Department of Agriculture had become a cabinet-level agency—one of seven—only a few months earlier. For decades, presidents had considered creating a separate office to help farmers, but many legislators, especially southerners, vehemently opposed granting the federal government any official role in the family farm, a fiercely independent American institution.  Congress had finally established the office in 1862 only because the southern states had seceded, leaving northern senators and representatives free to approve the legislation without opposition.

After the Civil War ended, Barbour Lathrop's uncle Thomas Barbour Bryan built Graceland Cemetery, a significant urban development that was the city's first landscaped burial ground. He hired his nephew, Bryan Lathrop, to manage the cemetery, a job he apparently did well. Creating Graceland would probably have remained the family's biggest accomplishment if not for the Great Chicago Fire of Sunday, October 8, 1871, a day that created one of the biggest real estate investment opportunities in American history. The fire triggered a chain of events that transformed urban architecture and, in the process, produced the personal fortune that bankrolled America's first plant expeditions.

After Fairchild arrived in Naples he immediately recognized how unexciting American meals had been. "No sooner had I landed in Italy than I began to get a perspective on the limited number of foods which the fare in my home and in American boarding houses had brought to my palate," he wrote later. His education began in a small restaurant where he usually ate lunch. There he sampled his first foreign food: a dried fig, a wickedly sweet morsel for a young man raised on boiled vegetables. He tried vermicelli with a sauce of tomatoes, a fruit whose possibly poisonous qualities were still being debated in America. He enjoyed Italian pasta so much—it was chewy and flavorful, not the mushy kind made with soft American wheat—that he collected 52 shapes and mailed them to friends in Washington.

As he rushed away from Corsica Fairchild stole a few cuttings from citron trees along a road and hid them under his coat. Unequipped with material to protect the branches from drying out on the long voyage between Italy and America, he jammed the sticks into raw potatoes, packaged the lot and mailed them. The potatoes provided enough moisture to nourish the cuttings, which survived the trip to Washington. Officials sent the twigs to California, where they launched a profitable business.

At the end of 1895, Fairchild went to Java. The ship landed on the west coast of Sumatra at the village of Padang, a collection of low buildings strung along the waterfront and backed by thick jungle. Fairchild was finally in the South Seas, on the verge of seeing the world he had dreamed about in Kansas. He never forgot the thrill of his first visit. “The memory of that first tropical night on shore and of the noise of the myriads of insects and the smell of the vegetation and the sensation of being close to wild jungles and wild people sometimes comes back to me even though millions of later experiences have left their traces on my brain.”  

The Visitors' Laboratory at the botanical garden in Buitenzorg, a city now called Bogor, was, like the Zoological Station in Naples, an unusual spot where botanists from around the world worked together. This spirit of shared scientific inquiry among researchers of all nationalities and all specialties stayed with Fairchild for the rest of his life.

“The institution was to discover and bring to light a knowledge of the plant life of the tropical world,” Fairchild wrote later. “Not for the uses of Holland and Netherlands India alone, but for the whole world of plants—a world which knows no national boundaries, a world which constitutes a vast, magnificent realm of living stuff destined to be of interest to the human race for all time.”  

Most remarkable were the unfamiliar, even bizarre tropical fruits. It was in Java, in the summer of 1896, that David Fairchild began his lifelong love affair with one food: the mangosteen. Four years later he launched a lifelong but ultimately unsuccessful push to cultivate them in America. His enthusiasm mirrored the fascination of Queen Victoria, who in 1855 allegedly promised to pay 100 pounds to the first person to bring her a single mangosteen.

After this Fairchild went to Sumatra, and after landing toured the public market in the settlement of Padang. It was a noisy, crowded place that offered a cornucopia of strange cultivated fruits and vegetables. Fairchild was immediately intrigued. The visit “showed me how many new and interesting food plants there were if only we had an established place where they could be sent,” he wrote.

Fairchild’s wealthy supporter, Lathrop, proposed that these strange, foreign plants be sent to America to see which ones would take root, produce fruit, and make money for farmers and merchants. At the time, only about 2% of the world’s edible plants were cultivated in America, and the typical farmer grew only about twenty of them. Lathrop wanted Americans to open their mouths to new foods.

“He began to lay before me his idea of what a botanist could do if he were given the opportunity to travel and collect the native vegetables, fruits, drug plants, grains and all the other types of useful plants as yet unknown in America,” Fairchild wrote later. It was a long evening of lively debate, and in the end, Lathrop won. Fairchild agreed to join his project. He would abandon his cloistered studies in Java and take up the mission of foreign plant introduction. As the clock approached midnight, David Fairchild promised Barbour Lathrop that he would spend his life searching the globe for new foods. “Without Barbour Lathrop to goad him into an entirely different life work,” Douglas wrote later, “to pay his salary and his expenses on their long wanderings, David Fairchild might have become a quiet, little-known if distinguished plant pathologist and entomologist, a scientist-scholar whose life might have been lived almost entirely within the walls of some laboratory.”

“The greatest service which can be rendered any country is to add a useful plant to its culture,” Jefferson wrote in 1800, a remark that later American plant explorers frequently quoted with pride. Jefferson had followed his own advice: he once smuggled grains of rice from Italy to Virginia in his coat pocket even though Italian officials could have executed him if he had been caught.

When Fairchild and Lathrop began the adventures that would change America’s eating habits, they looked like improbable companions. Lathrop was tall, slim, and always well dressed; in bearing he resembled the military men he admired. He carried a cane and wore a hat wherever he went. Fairchild, in contrast, was gawky and uncertain and rarely wore clothes appropriate to the occasion, whatever it was. Lathrop was demanding and critical; Fairchild was constantly frazzled. In the beginning Lathrop, who had flashing dark blue eyes and expressive bushy eyebrows, called Fairchild “my investment,” with a little bit of a sneer. Fairchild, fully aware of the contrast, felt inadequate. “Somehow I could not do anything quite to suit him,” he admitted. Fairchild was so socially awkward that he agreed to one condition of working with Lathrop: he promised not to get married while he was exploring for plants.

Their expedition began immediately with a leisurely cruise to Singapore and Siam. A few days later, he and Lathrop attended a young couple’s wedding dinner. It was a special occasion because the Crown Prince of Siam also attended the feast. Fairchild found the food unfamiliar and the formal etiquette bizarre. “During the 13-course dinner, every dish was strange to us except the rice,” he wrote later. “Each course was noiselessly placed on the table by a servant deferentially crawling on his knees. Not a person stood or walked erect while the prince and his guests were at the table. At the close of the long meal, the wives appeared and even those of royal birth all hitched themselves across the floor like a child who has not yet learned to creep.” As witnesses to the wedding ceremony, Fairchild and Lathrop were obliged by local custom to trickle perfumed water down the bride and groom’s necks as the couple knelt together with their foreheads touching. “If the others poured as much water from the jeweled conch shell as I did,” he wrote later, “the poor bride and groom must have been well soaked.”

The two had a clear plan. First of all, they were only interested in new foods and other useful plants, nothing ornamental or impractical. Also, they needed trained botanists to do the hunting so the government wouldn’t be inundated with worthless material. Next, they wanted experiment gardens prepared to test the foreign plants. Finally, Swingle and Fairchild proposed, the whole operation could be funded by quietly diverting $20,000 (equal to about $500,000 today) from another line in the agriculture department’s budget. It was an audacious scheme from two junior botanists. But by then Fairchild had grown more confident.

Fairchild and Swingle were apprehensive when they entered their new boss’s office at the end of August 1897 even though they had arranged for a senior department employee to go with them to give their idea more credibility. “Secretary Wilson was a tall, gaunt man with a gray beard and deep-set eyes,” Fairchild remembered. “He sat listening to us with his eyes half closed and, at intervals, made use of the nearby spittoon. … I waited breathlessly for his verdict.”

Wilson named it “the section of foreign seed and plant introduction”. No modern government had employed its own team of full-time plant explorers. In England and France, large private companies had sponsored many foreign plant expeditions to increase their profits by selling rare plants, usually showy ornamentals. These private firms were fiercely competitive and proprietary about their discoveries, but the U.S. government would be eager to share its findings with the public and let farmers make money.

Lathrop suddenly arrived in person as Fairchild was engaged in his valuable but sedentary work. Wasting no time, Lathrop tempted him with the offer of another exciting trip to faraway lands, one that would be longer and more interesting than their six-month cruise through the South Seas. When Fairchild protested that he had just started his new job, Lathrop argued that he was too inexperienced to supervise international plant collectors. If the government’s scheme were to succeed, Lathrop insisted, Fairchild couldn’t depend on strangers to send the material he wanted. He needed to visit the places himself and make important contacts with botanists, gardeners, and government officials.

The two-year trip Lathrop had promised turned into a five-year odyssey. It was a remarkable adventure of luxury travel experiences, punctuated by meetings with prominent horticulturalists—few were lowly enough to be called gardeners—and casual, dreamlike botanizing sessions on remote islands.

His visit to Maine in the summer of 1898 was brief. Because Lathrop was paying the bills, traveling was always conducted on his terms: expensive, comfortable, quick, and not always in a straight line. The zigzagging began immediately after the two men left Maine for California, where Fairchild met Luther Burbank, America’s first celebrity nurseryman. Burbank had caused great excitement in horticultural circles by inventing startling new varieties of fruits, vegetables, and flowers in the years before the science of plant breeding was understood.

Trinidad, Jamaica, and Barbados received a little more attention. In Kingston, Fairchild first tasted chayote, a mild-flavored squash that he later tried hard to persuade Americans to appreciate. Fairchild collected 16 varieties of yams and four kinds of sweet potatoes, nutritious staples in the Caribbean diet.

Throughout South America, Fairchild hunted for plants the easiest way possible: he bought them in local markets and took cuttings from plants in botanical gardens. At this point in his travels everything was so new and Fairchild’s interests were so broad that he randomly collected samples of almost everything that was unfamiliar.

He shipped large batches to Washington, often without providing information or advice for the people who were supposed to test the plants. By July 1899 the department had received more than 200 samples of Latin American beans, peppers, squashes, melons, peas, apples, and other fruits and vegetables. Fairchild’s most successful discovery during the first part of the expedition was an alfalfa from Lima, Peru, that eventually flourished as a forage plant in Arizona known as the “Hairy Peruvian”.

In Chile he bought a bushel of avocado seeds that wound up in California; they produced one of the earliest varieties grown there. Many foods Fairchild collected failed; he admitted that a large percentage of the plants he shipped were lost before they got a chance to grow in America.

The men were constantly exposed to illness. When they arrived in Panama in February 1899, a few years after yellow fever had forced French engineers to abort construction of the canal there, Panama was considered the most dangerous place in South America. Death was so common that all hospital patients were fitted for coffins when they were admitted for treatment.

These secret shipments included broccoli, then virtually unknown in America. In Venice Fairchild also discovered zucchini—identified as “vegetable marrow”—for sale in a market.

Before he arrived in Egypt he said he knew the word sesame only as Ali Baba’s famous password; afterward he understood it to be a source of valuable cooking oil. He also collected chickpeas, okra, strawberry spinach, and more hot peppers.

Lathrop encouraged Fairchild to buy as much cotton as possible. He shipped six bushels of seeds of three varieties, material that eventually boosted the lucrative cotton industries in Arizona and California.

Banda was an important source of nutmeg, an especially handsome plant. “There are few fruit trees more beautiful than nutmeg trees with their glossy leaves and pear-shaped, straw-colored fruits,” he recalled. “As the fruits ripen, they crack open and show the brilliant crimson mace which covers the seed or nutmeg with a thin, waxy covering. The vivid color of the fruit and the deep green foliage make the trees among the most dramatic and colorful of the tropical plant world.” Fairchild, who rarely passed up an opportunity to stroll alone among trees, spent hours wandering through nutmeg groves.

In May 1900, Fairchild visited Scandinavia to collect examples of tough-weather fruits and fodder plants.

The Chinese treated Fairchild well, and he had time to introduce himself to John M. Swan, a doctor at a missionary hospital in Canton who helped him collect dozens of peaches, plums, persimmons, and other fruits. Swan also told him how to find the seeds that produce tung oil, the glossy material used to waterproof the exterior of Chinese junks.

Fairchild was able to visit rural areas outside Canton and wander among the small vegetable plots there. “These truck gardens of a city of 2,000,000 people did not contain a single vegetable with which we are familiar in America.”

He watched Chinese farmers control pests the old-fashioned way: they picked off each insect on every plant by hand.

By the time Fairchild finished this two-month detour to the Persian Gulf he had collected 224 date palm offshoots or suckers, each weighing about thirty pounds.

After he arranged to send almost four tons of trees to Washington, Fairchild retraced his route and joined Lathrop in Japan in the summer of 1902. They lived comfortably at the Imperial Hotel in Tokyo where Lathrop relaxed and Fairchild searched for plants. He bought fruits and vegetables at public markets and discovered zoysia, a plant that eventually became a popular ground cover in America. At Lathrop’s insistence he also bought bamboo plants, a purchase that triggered Fairchild’s long love affair with this huge grass.

Japanese flowering cherry trees remained one of Fairchild’s passions.

During his travels with Lathrop, Fairchild constantly hunted for varieties of one particular food, the mango. It was his second favorite fruit after the mangosteen, which, despite its name, is not related.

It was Elbridge Gale’s determination and defiance of conventional, wrong-headed wisdom that inspired Fairchild to search for mangos all over the world. During the four years he spent traveling alone and with Lathrop, Fairchild sent 24 varieties from six countries, each supposedly tastier or hardier than the last.

Hansen, who emigrated from Denmark when he was seven years old, was a young plant breeder who worked in the northern plains, the region that Wilson was trying hardest to help. Hansen had done some traveling before Wilson hired him in spring 1897, having visited Russia and seven other countries for four months in 1894 while he was a student at Iowa State College and Wilson ran the plant experiment station there. Hansen also had another, more important qualification for the job. Unlike many other horticulturalists at the time, he was a plant breeder who understood that it was botanically impossible to acclimate plants to tolerate severe conditions; only cross breeding with proven hardy varieties could produce tough plants. Because Hansen possessed this scientific sophistication, Wilson trusted him to know what to look for in the field.

Hansen was thirty-one in 1897 when Wilson convinced him that the future of American agriculture depended on his returning to Russia to find material that could be introduced in the Dakotas, then a dry, unproductive region where few crops grew. The mission was haphazard and dangerous. Wilson paid him $3,000, a generous salary equal to about $78,000 in current dollars.  Shortly after Hansen arrived in Uzbek province in Turkistan in November 1897, a field of alfalfa with small blue flowers attracted his attention. He believed the plant would survive in South Dakota, where temperatures range from 50 degrees below zero to 114 degrees above, to provide year-round feed for livestock, as well as produce nitrogen to enrich the soil. Before he could recommend the plant to Secretary Wilson, however, he needed to figure out how far north the blue alfalfa grew.

On Christmas 1897 he reached Kopal in southwestern Siberia, a town on the same latitude as South Dakota, where the blue Turkistan alfalfa was still growing. Confident it could thrive on the northern plains, he sent thousands of seeds to Washington. (Years later he returned and discovered a hardier type, an alfalfa with tiny yellow flowers, and brought that one to America, too. As a lasting tribute to Hansen’s work, South Dakota State University selected blue and yellow as its school colors.)

At first the parcels trickled in from Russia; soon, however, hundreds of packages arrived in a deluge. One day in February 1898, twelve tons of seeds of a fodder plant called smooth brome grass from the Volga River district turned up. Fairchild struggled to keep the shipments straight and check for dangerous insects or diseases that might have accompanied the material. The department had organized a system of public and private experiment gardens to test the material, so Fairchild arranged the seeds into 5,000 small packages and shipped them around the country. The enormous workload made him miserable; Fairchild, who hated clerical tasks, soon decided that he would rather be exploring himself. “Hansen felt that he had been sent out to collect, and he collected everything and collected it in quantity,” Fairchild recalled. Later, in an unpublished essay, his criticism was harsher: “Hansen’s collections took on the character of a nightmare.” Nonetheless, Hansen had Secretary Wilson’s support, and Wilson sent him on two more trips to Russia. Fairchild, who may have been jealous of Hansen’s close relationship with his boss, accused Plant Explorer Number One of keeping bad records, overspending, and—perhaps an explorer’s biggest sin—passing off plants he bought in a market as material he found in the wild.

The department’s second staff explorer, who was hired in July 1898, earned Fairchild’s great respect. He was Mark Alfred Carleton, Fairchild’s classmate at Kansas State Agricultural College, who had become a cereal specialist for the department after graduation. Carleton’s great passion was to improve the grains cultivated in America’s wheat belt. Born in Ohio and raised on a farm in Kansas, Carleton spent his childhood and youth watching his neighbors labor constantly to harvest good wheat. Most wheat cultivated in America at this time was a red or white winter variety with soft kernels high in starch and low in protein. America’s earliest settlers had planted it east of the Mississippi River and ground it into flour to make bread and pastry.

As pioneers moved west early in the nineteenth century, they brought seeds of these soft wheats with them, unaware that the varieties couldn’t handle the different growing conditions west of the Mississippi. Midwestern winters are too cold and summers are too hot and dry for most soft wheats. In the prairie fields of Kansas, Carleton learned, they were especially vulnerable to rust, a fungus that shrivels the grain and rots the straw.

Carleton had also learned, however, that not all farmers in Kansas had this problem. The exceptions were Mennonites who had arrived from Russia in 1873. America was the most recent home for these Protestants, who had wandered through Europe for generations. The sect had originally lived in West Prussia, but many members moved to southern Russia about 1770 when Catherine the Great convinced them to settle remote sections of her country in exchange for one hundred years of special privileges, including exemption from military service. The Mennonites were skilled farmers who thrived in the Crimea by developing through trial and error hard wheat varieties that could handle the tough climate there.

In the mid-1800s, as Catherine’s century of protection drew to an end, the Russian government warned the Mennonites that they would soon face conscription despite their pacifist convictions. Many in the community fled Russia and sought religious freedom in the New World.

After exploring for six months, Carleton returned to Washington with several types of wheat, including the hardest of all—durum, often known as macaroni wheat.

While midwestern farmers were pleased with Carleton’s seeds, midwestern millers were not. They didn’t want the trouble and expense of updating their machinery to process harder grains. “Durum, the hardest of hard wheats, met at once with the most violent opposition, chiefly from millers, but also from all grain men,” Carleton wrote later. “Various epithets, such as ‘bastard’ and ‘goose,’ were applied to the wheat without restriction.”

Carleton’s promotional campaign worked. Within a few years, large grain processors relented and modified their mills to grind hard wheat into flour. Carleton’s trip cost the U.S. government about $10,000 (about $250,000 today); by 1905 the new crop was worth $10 million a year (more than $250 million today)—a 1,000 percent increase. America had so much durum wheat that the country exported 6 million bushels a year. By 2011 production rose to about 50 million bushels a year. Because of Mark Carleton, American farmers had more than enough wheat, freeing experts at the end of the nineteenth century to worry about something other than widespread famine.

Americans consumed rice primarily as a pudding, not—like most people in the world—as part of a meal’s main course. Americans demanded kernels with a clean, smooth texture. Farmers in Louisiana and Texas grew mostly long-grain varieties originally imported from Honduras, but the kernel’s length made the rice fragile. When the outer coating was polished to whiten the grains, the only kind most Americans would eat, the rice often shattered. To make the product pretty and smooth enough to attract shoppers, processors coated it with paraffin wax. Of course, this beauty came with a price; buffing removed rice’s nutrients and wax removed its taste.

America’s rice-eating habits appalled Fairchild. “Rice is the greatest food staple in the world, more people living on it than on any other, and yet Americans know so little about it that they are actually throwing away the best part of the grains of rice and are eating only the tasteless, starchy, proteinless remainder,” he wrote in a magazine article. He mocked Americans for demanding rice as shiny as “glass beads.” “A pudding of stewed, sweetened rice, dusted with cinnamon is about as unappetizing to a fastidious Japanese as a sugar-coated beefsteak filled with raisins would be to an American,” Fairchild wrote.

Those glass beads were unhealthy as well. In 1908, a decade after Knapp’s trip, scientists determined that a diet of polished white rice could cause beriberi, a discovery that forced rice growers to enrich the grains with the nutrients removed by milling.

Fairchild had taken hundreds of photographs during his travels, and as he chatted with Grosvenor, he described one unforgettable scene he had captured. In May 1901 he had gone to North Africa to find date palms. When he landed in Tunis, he noticed an astonishing spectacle: strolling through town were young women wearing yards of brilliantly colored silk and tall pointed hats. Each woman weighed about 300 pounds. “I simply could not turn my eyes away from them,” Fairchild wrote later, “and frequently turned my Kodak toward them too, although they did not like it.”

Davidia involucrata is the most interesting and most beautiful of all trees which grow in the north temperate regions.

That spring Meyer set off for Manchuria, his first long trip inside Asia. It was a remote but promising destination because Manchuria’s growing conditions were similar to those of the northern United States, the section of the country that Secretary Wilson wanted most to help. Problems plagued the trip from the beginning, however. Officials wouldn’t let Meyer travel freely because Russian and Japanese soldiers were still skirmishing in the region, a bitter after-effect of the Russo-Japanese War that had ended only seven months earlier. Notorious outlaws called the Hun-hutzes (Red Beards) also menaced the area. Despite these obstacles, Meyer, confident he would be safe, was determined to make the trip. He knew he could be physically intimidating, especially when he wore a heavy sheepskin coat, big boots, and a bearskin hat to survive temperatures that dropped to 30 degrees below zero Fahrenheit. With a revolver and a Bowie knife in his belt, Meyer was prepared to defend himself. He relished the adventure.

He spent only three months in Manchuria, including side trips to northern Korea and Siberia. It was still a rough expedition: he covered 1,800 miles from Liaoyang to Vladivostok almost entirely on foot, averaging twenty miles a day for ninety days. He wore out three pairs of boots in three months. On the way he saw beautiful peonies growing wild and collected many specimens of useful plants, including one that eventually became enormously important to America: the soybean.

Meyer, recognizing that it was a mainstay of the Chinese diet, sent samples to Fairchild: he collected seeds, whole plants, even beans prepared as tofu, which he called cheese. During his travels Meyer shipped more than one hundred varieties—including ones that launched America’s vast soybean oil industry.

Meyer told de Vries that he had wanted to walk across Manchuria to Harbin, but the trip would have been too dangerous, so he took a train. Tigers, panthers, bears, and wolves lurked nearby, but Meyer said he was more afraid of humans than wild animals.

On March 31, 1908, as he was heading to Peking toward the end of his first expedition, Meyer stopped briefly in the small village of Fengtai. In a doorway he noticed something new. It was a small tree bearing about a dozen unusual fruits that looked like a cross between a lemon and an orange. Villagers told him that the strange plant was valuable; rich Chinese paid as much as ten dollars for each tree because it produced fruit all year. “The idea is to have as many fruits as possible on the smallest possible plant,” Meyer explained later. He sliced a thin branch off the tree with his Bowie knife and packed it carefully in damp moss. Meyer delivered it two months later to Fairchild. He gave the cutting an unexciting label—“Plant Introduction No. 23028”—and sent it to the department’s garden in Chico, California, to see if it would grow and, what was more important, produce fruit in America. The experiment lasted seven years, but eventually Fairchild was able to report that the cutting was a success. “Meyer’s dwarf lemon from Peking was producing a high yield,” he said. “It had begun to attract attention as a possible commercial lemon, even though its fruit flesh had an orange tint.”

Six weeks after he spotted the lemon, Meyer boarded a ship in Shanghai for San Francisco. He carried twenty tons of trees, cuttings, seeds, and dried herbarium material as well as, almost as an afterthought, two rare monkeys for the National Zoo. “They cause me as much trouble as babies,” Meyer complained when he arrived in California in June 1908.

Roosevelt, who was battling with Congress over the need for tough conservation laws, wanted a firsthand account of the devastation of Wutaishan. The burly plant explorer, seated in a leather armchair in a large room decorated with moose heads and bearskin rugs, described deforestation in China to the president of the United States. “The Chinese peasants have no regard for the wild vegetation and they cut down and grub out every wild wood plant in their perpetual search for fuel,” Meyer explained.

Four months later, in the leaflet he sent to Congress as his annual State of the Union message, Roosevelt quoted Meyer by name and included his photographs to illustrate the price America could pay if the nation didn’t protect its trees. Meyer’s pictures, Roosevelt told lawmakers, “show in vivid fashion the appalling desolation, taking the shape of barren mountains and gravel and sand-covered plains, which immediately follows and depends upon the deforestation of the mountains.”

While Wilson Popenoe was sidelined in the hospital, his brother Paul and Homer Brett, a U.S. consul in Muscat, set off into the interior of the Arabian Peninsula to buy date palms. They traveled sixty miles through the desert under the sultan of Oman’s protection in a caravan of eleven of the sultan’s best camels, Wilson told his father. They were ambushed twice, yet they escaped unharmed each time. The assignment was not easy. The Popenoe brothers, who both had fair skin, light hair, and bright blue eyes, must have stood out dramatically in the Mideast. “As we passed through the bazaars [in Basra], merchants would spit on the ground and significantly draw their fingers across their throats,” Paul wrote later. “In Baghdad we were chased for a mile by a crowd throwing stones, and in one of the seaports of Persia a native suddenly took a shot at us with his rifle, which fortunately missed.” Despite the risks, they did not disappoint their father. The brothers bought 9,000 date palm offshoots in Baghdad and Basra and another 6,000 in Algeria and arranged to have the huge lot—each healthy offshoot stood about three feet tall and weighed thirty pounds—shipped across the Atlantic Ocean. The trees survived the voyage because Wilson Popenoe gave his portable typewriter to the ship’s captain in exchange for enough fresh water to keep the palms alive. During the last leg of the journey from Galveston, Texas, to California, the offshoots filled seventeen refrigerated railroad cars, a load so remarkable that newspapers reported the shipment in detail. Paul Popenoe’s separate journey home took long enough for him to write three hundred pages about date palms.

The trip’s primary purpose was to finish an assignment that Frank Meyer had started before he died: save the American chestnut tree. At the beginning of the 20th century, America’s native chestnuts thrived along the Eastern Seaboard. An estimated four billion trees—many as tall as 100 feet—covered about a quarter of the region’s forests. Chestnut wood was hard and straight and vital to serve the nation’s growing needs for railroad ties and telephone poles. But in 1904 a scientist at the Bronx Zoo in New York City noticed a canker or fungus spreading on the trees’ bark. Three years later the same disease was evident on chestnut trees growing across the street in the New York Botanical Garden. It was the beginning of the most significant invasion of a foreign plant disease in American history.

Fairchild’s last official day on the agriculture department’s staff was June 30, 1935. As of that date the office he established had introduced 111,857 varieties of seeds and plants to America.

 “Many of the immigrants have their little day or hour and are never again heard from,” he wrote in the 1928 Yearbook of Agriculture. “Others sink out of sight for a time and later achieve great prominence.” He could have added that a few were out-and-out flops and others were impractical curiosities that Fairchild showed off to his friends and relatives. Yet many of David Fairchild’s plant immigrants were great successes of incalculable value. Mark Carleton’s durum wheat and Frank Meyer’s soybeans completely transformed American agriculture in the twentieth century. And by the beginning of the twenty-first century, Walter Swingle’s dates and figs and Wilson Popenoe’s avocados had become staples of the American diet. Meyer’s lemon was a food lovers’ delight. Many other introductions served the important but less visible role of providing essential breeding material to make existing plants hardier or more productive.

When David Fairchild left Washington in 1924, after giving up a job that kept him at a desk for most of 20 years, his weariness suddenly vanished. Overnight, it seems, he acquired enormous energy and enthusiasm that propelled him into a constant series of adventures that filled the rest of his life. “As the fieldmen used to say, DF had it made,” Ryerson said later. His first project took him back to the tropics. While he waited for Allison Armour to outfit his ship for the scientific expedition, Fairchild helped his friends William Morton Wheeler and Thomas Barbour, an entomologist and a zoologist associated with Harvard University, set up a new scientific research center on an island in Panama’s rain forest. Initially called the Barro Colorado Island Biological Laboratory (and now known as the Smithsonian Tropical Research Institute), the facility was modeled after the botanical institutes Fairchild loved in Naples and Java.

In September 1924, David and Marian Fairchild—and sometimes their children and friends—began exploring for plants, often under Allison Armour’s sponsorship. They drove an old American car through Algeria and Morocco, visiting gardens, ancient cities, and souks. They especially enjoyed Mogador, then a drowsy little town on the sea that was home of the rare argan nut trees. Marian Fairchild showed off her firm feminist convictions by driving their Dodge sedan through Fez. “Marian takes every opportunity to run the car around through the narrow streets just to show that she is not in any way under her husband’s thumb,” Fairchild told Grosvenor on April 4, 1925.

Sumatra and nearby islands were full of fascinating, mysterious plants. In April 1926, Fairchild finally took Marian to Java, fulfilling a promise he had made when they married more than twenty years earlier. Soon after they arrived, they visited a penal colony off the coast of Java where they encountered an imprisoned headhunter. He “had failed to get as many heads as his sweetheart demanded before she would marry him,” Fairchild explained, “because the government stopped him and sent him here after his last murder.” He had only five; she wanted six.

The kepel, whose proper name is Stelechocarpus burahol, is related to the cherimoya and the pawpaw, both fruits Fairchild had promoted in America. Local guides told the Fairchilds that sultans had planted the trees and ordered their lovers to eat kepel fruit because it made their bodily fluids smell like violets. They also warned outsiders that stealing the fruit would bring bad luck. Fairchild immediately went to the open market in Djokjakarta to buy some for America. (Kepel was the 67,491st seed or plant to arrive in Washington from the ends of the earth. In 2012 the plant was growing at The Kampong in Coconut Grove.) At the age of 57, in a beautiful, rundown spot far away from home, Fairchild had discovered one of the world’s most romantic fruits.

Between trips he joined Marjory Douglas on Ernest F. Coe’s early campaign to save the Florida Everglades by becoming the first president of the Tropical Everglades Park Association and writing articles about the natural glories of the swamp. “The Everglades of South Florida have a strange and to me appealing beauty,” he said during a speech on February 28, 1929. “Their charm partakes of the charm of the Pacific Islands.” With the authority of a global traveler, he insisted that the Everglades’ natural beauty was unmatched anywhere in the world.

Fairchild’s many books and articles brought attention to his accomplishments and led to the establishment of the Fairchild Tropical Botanic Garden in Coral Gables by Colonel Robert H. Montgomery, yet another wealthy philanthropist who loved nature—he collected trees, large ones—and was charmed by David Fairchild.

The project began by accident. One day in 1936 Montgomery, an accountant and business executive with a home in Florida, was playing bridge with Stanton Griffis, a New York investor and businessman. Griffis said he wanted some land near Miami, so Montgomery obligingly bought twenty-five acres for him. But Griffis backed out of the deal, leaving Montgomery with land he didn’t need. The situation gave Montgomery the opportunity to create a garden of palms. This palmetum soon expanded into the 83-acre site that is now the Fairchild Tropical Garden. The garden officially opened on March 23, 1938. Griffis became one of its first lifetime members. Montgomery and Fairchild’s love of palm trees led to Fairchild’s last big seagoing adventure.

Fairchild bought hundreds of mangosteens in the market at Penang and sent the seeds to Wilson Popenoe, who was setting up the Lancetilla Agricultural Experiment Station in Tela, Honduras.

Popenoe planted the seeds and waited. Mangosteens are difficult plants to grow, for they need the right soil and climate and, most significantly, more time than commercial growers want to give them, especially in America. However, by 1944 the orchard had produced thirty tons of David Fairchild’s favorite fruit.

By the middle of 1954, Fairchild’s own health had deteriorated. He died at home in Coconut Grove on the afternoon of August 6, 1954. He was 85.  
