George W. Bush home in Crawford, Texas

Rose Marie Berger. The Texas Two-Step. George W. and Laura Bush’s new Crawford, Texas home boasts a stunning array of eco-friendly features—perhaps not what you’d expect from one of the least environmentally friendly administrations since…um, creation.

http://www.sojo.net/magazine/index.cfm/action/sojourners/issue/soj0107/article/010722.html

The Bush ranch house in Crawford was designed by Austin environmental architect David Heymann and built by members of a religious community from nearby Elm Mott. George W. and Laura Bush’s dream home is built of a BTU-efficient, honey-toned native limestone quarried from the nearby Edwards Limestone Formation.

The passive-solar house is positioned to absorb winter sunlight, warming the interior walkways and walls. Underground water, which remains a constant 55 degrees year-round, is piped through a heat exchange system that keeps the interior warm in winter and cool in summer. A gray water reclamation system treats and reuses waste water. Rain gutters feed a cistern hooked to a sprinkler system for watering the fruit orchard and grass. Clearly, Bush goes home from the White House to a green house.

Melinda Suchecki. Western White House Turns Green with Innovative Onsite Treatment System 

http://www.nowra.org/?p=186

The George W. Bush 1500-acre ranch is located near Crawford, Texas, about 30 miles west of Waco. Aside from the gray and black water recycling and irrigation systems, the home features geothermal heating, active and passive solar energy, and a rainwater collection system with a 40,000-gallon underground cistern. The cistern and a separate gray water system are used for surface irrigation of the fruit trees.

According to Ron, “We worked with the architects and plumbers to ensure that there was separation of the gray and black water lines and confirmed their separation prior to the pour of the slab. There was resistance at first on the part of the plumbers; however, once they understood what we were trying to do, everything went off without a hitch. One person told me there was ‘no way they would get it all right, it would be too easy to cross the lines.’ My response was, ‘Then how do they keep the hot and cold water separate?'”

The black water system features over 2,000 gallons of pre-treatment and equalization tanks, which meter close to 1,000 GPD into a Hoot Aerobic System. However, the treatment process doesn’t stop there. The effluent leaves the aerobic system through a Polylok Effluent Filter and enters a recirculating media filter, which acts like a sand filter. The effluent passes through a unique medium several times prior to discharge from the filter, where it passes through yet another media filter before entering the pump tank. “With this design, we were able to incorporate the high efficiency of an extended aerobic system with the startup and shock load capability of a sand filter. However, the established aeration system will prevent the potential plugging effect seen in sand filters because the water enters with both BOD and TSS already reduced by 95%.”

The effluent leaves the recirculating filter and is stored in a pump tank. The Hoot Control Center operates the Lighthouse Beacon Filtration System. The filter not only performs effluent filtration, but automatically back-flushes and performs scheduled field flush cycles as well. The effluent is filtered through the 3-dimensional, 100-micron filter before being pumped 350 feet away to a four-zone drip irrigation field. The drip tubing is Netafim Bioline .62 GPH and features a pressure-compensating emitter design. The pressure-compensating design ensures even distribution throughout the entire field. The zones are automatically advanced each time the system doses, ensuring even distribution. If low levels of water usage are observed, the system can utilize just one zone to encourage plant growth.

Further complicating the design was the system location. If the system were to gravity flow, it would require all the treatment equipment to be placed right outside the bedroom of George and Laura, between them and their new 7-acre lake. This proved to be unacceptable.

The system needed both gray and black water lift stations at the main house to pump to the location of the equipment, over 500 feet away behind the garage. The guest house gravity flows to the system. All of the controls are remotely mounted in a specially designed utility room in the middle of the garage. Over two miles of wiring were used to complete the remote location project.

Each tank has duplex pumps and a separate, independent alarm circuit that goes to an alarm system control panel. The system has the ability to remotely alert if one of the duplex pumps fails, latch to the next, then independently alert of a high water situation. This system is in every tank, and works even in the event of a power failure. The system is remotely monitored by an alarm company that can tell service personnel exactly what the problem is and a determination can be made if it requires immediate attention, or if a problem can wait until the next day. For example, if one of the pumps in the recirculation system has failed, then it may not require immediate attention. “If there is a high water level in the lift station on the main house,” Ron asserted, “well, there will be three of us racing to see who gets out there first.”
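
The failover and alarm logic described here is simple enough to sketch. Below is a minimal Python illustration of that behavior, assuming a hypothetical class and alert interface; it is not the actual Hoot control-center firmware.

```python
# A minimal sketch (not the actual Hoot control-center firmware) of the duplex-pump
# failover and independent high-water alarm behavior described above. The class name,
# thresholds, and alert interface are illustrative assumptions.

class DuplexLiftStation:
    def __init__(self, name, high_water_level, alert):
        self.name = name
        self.high_water_level = high_water_level  # e.g. inches above normal
        self.alert = alert                        # callable: remote alarm panel
        self.pumps_ok = [True, True]              # status of pump 1 and pump 2
        self.active = 0                           # index of the lead pump

    def pump_failed(self, index):
        """A pump has failed: alert remotely, then latch to the other pump."""
        self.pumps_ok[index] = False
        self.alert(f"{self.name}: pump {index + 1} failed")
        other = 1 - index
        if self.pumps_ok[other]:
            self.active = other  # latch to the remaining pump
        else:
            self.alert(f"{self.name}: both pumps out of service")

    def check_level(self, level):
        """Independent high-water circuit: alarms even if a pump alarm is already active."""
        if level >= self.high_water_level:
            self.alert(f"{self.name}: HIGH WATER ({level} in.)")


# Example: a failed lead pump followed by rising water produces two separate alerts.
station = DuplexLiftStation("main house lift station", high_water_level=24, alert=print)
station.pump_failed(0)
station.check_level(26)
```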

The Hoot systems, lift stations, and standard as well as custom tanks to complete the project were all pre-cast concrete, made by CPI of Waco, Texas. Mark Kieran of Brazos Wastewater was the installer of the system, with the majority of the hookup being completed by Ron, Jim, and Jim’s father, Frank Prochaska, from Lorena, Texas.

The incorporation of an innovative onsite wastewater strategy is a testament to the acceptance of onsite as a long-term treatment solution. The Bushes’ incorporation of environmentally sensitive approaches to their new home is an example of what individuals can do to create a better place for us all to live.

http://www.hootsystems.com/bush.pdf

http://www.whitehouse.gov/news/releases/2001/08/20010825-2.html

They’ve also got pecan trees, canyons with rivers and woods, wild areas with game, etc.

Posted in Where are the rich going

Automated vehicles will lead to more driving and congestion

Preface. There’s no need to actually worry about how automated vehicles will be used and their potential congestion, energy use, and whether there are enough rare earth minerals to make them possible, because they simply can never be fully automated, as explained in this post, with articles from Science, Scientific American, and the New York Times: “Why self-driving cars may not be in your future“.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Mervis, J. December 15, 2017. Not so fast. We can’t even agree on what autonomous vehicles are, much less how they will affect our lives. Science.

Joan Walker, a transportation engineer at UC Berkeley, designed a clever experiment. Using an automated vehicle (AV) is like having your own chauffeur. So she gave 13 car owners in the San Francisco Bay area the use of a chauffeur-driven car for up to 60 hours over 1 week, and then tracked their travel habits.  There were 4 millennials, 4 families, and 5 retirees.

The driver was free.  The study looked at how they drove their own cars for a week, and how that changed when they had a driver.

They could send the car on ghost trips (errands), such as picking up their children from school, and they didn’t have to worry about driving or parking.

The results suggest that a world with AVs will have more traffic:

  1. the 13 subjects logged 76% more miles
  2. 22% were ghost errand trips
  3. There was a 94% increase in the number of trips over 20 miles and an 80% increase after 6 PM, with retirees increasing the most.
  4. During the chauffeur week, there was no biking, mass transit, or use of ride services like Uber and Lyft.

Three-fourths of the supposedly car-shunning millennials clocked more miles. In contrast to conventional wisdom that older people would be slower to embrace the new technology, Walker says, “The retirees were really excited about AVs. They see their declining mobility and they are like, ‘I want this to be available now.’”

Due to the small sample size, she will repeat this experiment on a larger scale next summer.

Posted in Automobiles

By 2020 it may be clear to everyone that oil decline has begun

Preface. There are two parts to Dittmar’s study. The first concerns production, based on the most recent years of oil production data. Dittmar found a strong pattern of decline once a region leaves its plateau: 3% a year for five years, followed by 6% a year thereafter.

The assumption that OPEC nations (i.e. Saudi Arabia, Iraq, Iran, Kuwait, UAE, and Qatar) can continue producing oil at the current rate is based on potentially exaggerated reserve figures, which went up substantially in 1985 and haven’t budged a barrel down since then.  But for OPEC, and all other regions and nations, Dittmar predicts the maximum possible production based on his model, and says that perhaps the Middle Eastern OPEC nations can continue to produce as much oil as they are now until 2050.

In my opinion, he overestimates the amount of North American tight shale oil and tar sands oil that can be produced given their low EROIs and high energy/monetary cost, but since all his figures are the best possible, he assigns 4.5 million barrels per day (mbd) of production for USA tight oil through 2030 and 3 mbd for Canadian tight oil plus oil sands.

Of course, no matter how accurate the model is, Dittmar points out that it won’t matter if civil war, terrorism, or natural disasters occur in any oil-producing or refining region, which would quickly reduce exports. Plus competition for the remaining oil might increase conflicts among the world’s major powers, with catastrophic consequences. The model only applies to a stable world for the next 30 years.

Here are the nations already declining at 6%: the EU and Norway, Azerbaijan (2017), Asian nations Indonesia, Malaysia, Australia, Thailand, Vietnam (2016), Algeria (2015), and Mexico (2014). All other oil-producing nations will join the 6% club by 2031 except OPEC.  Many are already in their 3% decline state, which starts 5 years earlier: Western Russia & Siberia (2020), Eastern Siberia (2030), Kazakhstan (2029), China (2020), India (2025), Egypt (2026), Nigeria (2025), Angola (2019), Sub-Saharan Africa (2026), Venezuela (2025), Brazil, Ecuador & all other South & Central American nations (2021), United States conventional (2021), Canadian conventional (2018).
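
As a rough illustration of the decline pattern described in this preface, the sketch below applies the 3%-then-6% annual decline to a hypothetical region; the simple exponential form and the example numbers are my assumptions, not Dittmar’s model code.

```python
# A back-of-the-envelope sketch of the decline pattern summarized in this preface:
# production holds at its plateau, then falls about 3% a year for five years, then
# about 6% a year thereafter. The exponential form, the example region, and its
# 10 mbd plateau output are illustrative assumptions, not Dittmar's actual model.

def projected_production(plateau_output_mbd, decline_start_year, year):
    """Modeled output (mbd) for a region whose 3% decline begins in decline_start_year."""
    if year <= decline_start_year:
        return plateau_output_mbd
    years_declining = year - decline_start_year
    early = min(years_declining, 5)      # first five years at ~3%/year
    late = max(years_declining - 5, 0)   # every year after that at ~6%/year
    return plateau_output_mbd * (0.97 ** early) * (0.94 ** late)

# Example: a region producing 10 mbd whose 3% decline starts in 2020
for yr in (2020, 2025, 2030, 2040):
    print(yr, round(projected_production(10.0, 2020, yr), 2))
# -> 10.0, 8.59, 6.30, and 3.39 mbd
```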

Part 2 deals with consumption. It appears to me that Asia is the big winner, especially China and India.  All of the Eastern Siberian Russian oil will go to Asia through existing or planned pipelines. Over 80% of Middle Eastern OPEC oil goes to Asia now—and this is likely to continue since Asia is four times closer than Europe or North America.  Plus Asia makes the goods that the Middle East wants.  Yes, the U.S. could trade for food, but Middle Eastern countries have already bought vast tracts of land in Africa, South America, farms in the U.S., and elsewhere.

Dittmar alludes to a potential financial crash because our economic system depends on continual growth. This too would reduce production and exports from the last nations still producing oil in the Middle East. Nor does he mention that their populations are still growing exponentially and consuming their own oil exponentially as well, leaving less to export.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Dittmar, M. 2017. A Regional Oil Extraction and Consumption Model. Part II: Predicting the declines in regional oil consumption. Physics and Society.

By 2020 it may be clear to almost everyone that the current oil-based way of life in the developed and developing countries has begun a terminal decline.

Aside from the OPEC Middle East region, where a rather stable production is modelled for the next 15 to 20 years, production in essentially all other regions is predicted to be declining by 3 to 5% per year after 2020, and some are already declining at this rate.

Based on the evolution of intercontinental oil exports during the past decade, it is predicted that in the near future Western Europe will not be able to replace steeply declining exports from the FSU countries, and especially from Russia. Hence total consumption in Western Europe is predicted to be about 20% lower in 2020 than it was in 2015. For similar reasons, although the export sources are different, total consumption in the U.S. is predicted to be about 10% lower.

Further, it is predicted that neither India nor China will be able to continue their rapid oil consumption growth. At best both countries might be able to stabilize per capita oil consumption close to their current relatively low levels through 2025 by outcompeting other countries in Asia. To put it mildly, the obtained modeled results for future regional oil consumption in almost every part of the planet disagree strongly with essentially all economic-growth-based scenarios like the one from the IEA in their latest WEO 2016 report. Such scenarios assume ongoing growth and would have us believe that the oil required to support such growth will be discovered and produced. It won’t.

The consequences of the declines in oil production will be felt in all regions but OPEC Middle East countries.

Our model predictions indicate that several of the larger oil consuming and importing countries and regions will be confronted with the economic consequences of the onset of the world’s final oil supply crisis as early as 2020. In particular, during the next few years a reduction of the average per capita oil consumption of about 5%/year is predicted for most OECD countries in Western Europe, and slightly smaller reductions, about 2-3%/year, are predicted for all other oil importing countries and regions. The consequences of the predicted oil supply crisis are thoroughly at odds with business-as-usual, never-ending-global-growth predictions of oil production and consumption.

Other factors affecting global oil production:

  1. The quality of the remaining crude and energy/cost to refine it
  2. Since crude oil can’t be used directly but must be refined, nations with refineries import more oil than they return as refined products to exporting nations that lack refineries. If these poorer oil-producing nations built refineries, they could lower their exports and raise their standard of living.

Whenever terminal decline in all nations begins, one can only hope that people around the globe will be able to learn, quickly, how to live with less and less oil every year, and how to avoid war and other forms of violence, as we travel the path to a future with less and less oil.

Posted in Exports decline to ZERO, How Much Left, Peak Oil

Want to go off-grid? You might need hundreds of Tesla batteries

Preface. Although you may not be as far north as Victoria, British Columbia (48.4° latitude), you’d ideally want to be at 30 degrees or less latitude from the equator to even consider the expense of off-grid solar power.  And even then you’ll need to be wealthy. Keep in mind that the Tesla Powerwall 2 is $5,500 for the battery alone, plus about $1,500 in additional charges for installation and other components.

If you’re getting solar for when TSHTF, you’d better have a lot of spare parts and enough mechanical bent to fix the system yourself until the batteries die…

 — Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

November 23, 2017. Want to go off-grid? You might need hundreds of Tesla batteries. The Climate Examiner, Pacific Institute for Climate Solutions.

Going completely off-grid is infeasible for most households in Western Canada, energy systems modellers conclude, due to the diminished amount of sun at our northern latitude. “Cutting the cables” to the electricity grid requires an impractical number of batteries or solar panels.

Note that:

  • The scenarios below do not account for electricity needs to heat homes or charge electric vehicles
  • Fewer solar panels = you need more batteries
  • Fewer batteries = you need more solar panels

Families in BC use solar panels on their roof and install batteries in their garage because they want to reduce electricity costs or do their part to help reduce emissions. Some have dreams of one day going entirely off-grid. So researchers with the Pacific Institute for Climate Solutions’ 2060 energy future pathways project modeled just how feasible this would be.

They used 2016 data from a typical three-bedroom house in Victoria with an annual load—or average electricity demand—of 9,600 kilowatt hours (kWh). The house uses natural gas for heating and a conventional gasoline vehicle, meaning no extra electrical load from these sources.

A common PV system is 12 kilowatts (kW), as a larger PV system requires more roof area. Researchers found that, given Victoria’s solar irradiance, a 12 kW PV system needs a 1,766 kWh battery to achieve self-sufficiency. This is equivalent to 131 Tesla Powerwalls.

Another option is to reduce the size of battery and buy a larger PV system, as more energy is available and thus less needs to be stored. If a homeowner bought a 30-kW PV system, they could get away with a 289 kWh battery (equivalent to 21 Powerwalls). But this PV system would require an area of roughly 300 square meters (3,200 square feet)—about the size of a tennis court.

They ran the numbers for Vancouver, Kelowna and Calgary. The results for Vancouver and Kelowna were similar to Victoria’s. But Calgary, with its clearer winters, required less PV and battery capacity to be self-sufficient. Calgarians could make do with a 9 kW PV system and about 62 Powerwalls. With a 30 kW PV system, taking up 240 m2 (2,475 square feet), the homeowner needs roughly 10 Powerwalls.
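
For reference, the Powerwall counts above can be checked with a few lines of arithmetic, assuming a usable capacity of about 13.5 kWh per Powerwall 2 (my assumption; the study reports only the required battery sizes and counts):

```python
# A quick check of the Powerwall counts quoted above, assuming the usable capacity
# of a Tesla Powerwall 2 is about 13.5 kWh (an assumption; the article gives only
# the required battery sizes in kWh and the resulting Powerwall counts).
import math

POWERWALL_KWH = 13.5

def powerwalls_needed(storage_kwh):
    return math.ceil(storage_kwh / POWERWALL_KWH)

print(powerwalls_needed(1766))  # Victoria, 12 kW PV -> 131 Powerwalls
print(powerwalls_needed(289))   # Victoria, 30 kW PV -> 22 (the article rounds to 21)
```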

But in these clear, cold places, the electricity demand of the household rises with the electrification of heating and transport, so the prospect of self-sufficiency is even further out of reach. The researchers found that the increased demand from heating via electric baseboards would require at least a 22 kW PV system and 236 Powerwalls. Newer technologies, such as heat pumps, would have a smaller impact on electricity demand.

The projections for the number of batteries seem mind-boggling, but they are in line with storage requirement assessments for other jurisdictions.

Posted in Batteries, Photovoltaic Solar

Hurricanes will lower Gulf and East Coast carrying capacity

Preface. New Orleans and much of the Gulf and East coasts remain vulnerable to severe weather and require help from the federal government to recover.  Clearly, as climate change worsens, this will only become more of a problem and harm more and more people, since 50% of Americans now live within 50 miles of a coast.  Declining energy means that rescues will not be as large and more and more infrastructure will remain unrepaired, forcing migrations inland.  Awareness of limits to growth and finite fossil fuels may be painful to contemplate, but if it inspires you to move to a more sustainable region, you may end up with a longer and happier life.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity

NRC. 2011. Increasing National Resilience to Hazards and Disasters: The Perspective from the Gulf Coast of Louisiana and Mississippi: Summary of a Workshop.  The National Academies Press.  Excerpts below.

***

Natural disasters are having an increasing effect on the lives of people in the United States and throughout the world. Every decade, property damage caused by natural disasters and hazards doubles or triples in the United States. More than half of the U.S. population lives within 50 miles of a coast, and all Americans are at risk from such hazards as fires, earthquakes, floods, and wind. The year 2010 saw 950 natural catastrophes around the world—the second highest annual total ever—with overall losses estimated at $130 billion.

A consequence of the widespread construction of levees was subsidence of the land. When the areas behind levees were drained, the land compacted and lowered, increasing the susceptibility of housing to extreme damage if the levees failed or were overtopped.

The lessons that should have been learned from Betsy and other hurricanes were not heeded before Katrina, and many of these lessons still are not being heeded. Although the levees are under repair and new surge barriers are in place, the city’s footprint has not been fundamentally reduced, even though the corps no longer considers the levees around New Orleans to provide protection against a 100-year flood event. Today, many houses in New Orleans are below sea level, and even some of the houses built after Katrina are ill suited for high water. After a protracted public process, New Orleans adopted a plan that opens the entire city to redevelopment while targeting certain areas for rebuilding, renewal, and redevelopment. Building can occur in most of the areas that were flooded and remain susceptible to future floods.

Hurricanes Katrina and Rita combined caused an estimated $150 billion in damages across the Gulf Coast. The federal government spent an estimated $126 billion on the recovery effort, but much of that money went to such short-term measures as emergency rescue operations and short-term housing. Only about $45 billion of that money went to rebuilding. Private insurance provided about $30 billion for reconstruction, and philanthropies provided about $6 billion—three times as much as for any other event in history. Even with expenditures of that magnitude, a gap of about $70 billion remains.
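
The roughly $70 billion figure is just the sum of the numbers in the paragraph above; a quick check:

```python
# The roughly $70 billion gap is the arithmetic of the figures in the paragraph
# above (all values in billions of dollars, as reported in the workshop summary).
damages      = 150  # Katrina and Rita combined, Gulf Coast
rebuilding   = 45   # portion of the $126 billion federal spend that went to rebuilding
insurance    = 30   # private insurance payouts for reconstruction
philanthropy = 6

gap = damages - (rebuilding + insurance + philanthropy)
print(gap)  # -> 69, i.e. a gap of roughly $70 billion
```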

Renters in the city and suburbs still pay too much of their earnings toward housing. In Orleans Parish, 58% of renters, and 45% of renters in the metropolitan area, pay more than 35% of their pretax household income toward housing, compared with 41% of renters nationally.

Meanwhile, coastal wetlands have continued to erode. More than 23 percent of the land around the New Orleans Metropolitan Area has been lost since measurements began in 1956; the impact of the oil disaster on the wetlands has not yet been measured.

Before Hurricane Hugo hit South Carolina in 1989, the United States had not experienced a single disaster that cost the insurance industry more than $1 billion.

Since then, as more and more development has occurred in hazard-prone areas, the cost of natural disasters has gone up “exponentially,” with losses for 2000–2010 exceeding $800 billion.

Given that the value of property vulnerable to hurricanes from Texas to Maine is an estimated $9 trillion, retrofitting is essential.

During the 1993 flood on the Mississippi River, the Des Moines Water Plant flooded and was out of operation for weeks. “It shut down the city,” said Gerald Galloway, Jr., the Glenn L. Martin Institute Professor of Engineering at the University of Maryland, College Park. “When a major part of the infrastructure that supports a community goes under, the community can go under at the same time.”

The Sewerage and Water Board of New Orleans is responsible for providing drinking water, wastewater, and storm water services for the city of New Orleans. Following the storm, the wastewater treatment plant contained 18 feet of water, and the city cannot exist without viable wastewater treatment. The plant was dewatered within about 10 days of the closure of the federal levee system, and it was doing primary treatment 30 days after that.

The Sewerage and Water Board could not make these and other advances without partners. For example, protecting the city from an incoming storm surge is the responsibility of the U.S. Army Corps of Engineers, and the Sewerage and Water Board is working with the corps to rebuild infrastructures around the levee system. The agency is also responsible for the purification and distribution of drinking water, which requires electrical power. The agency has relied in part on a 1903 25-cycle power plant that is being rebuilt to be more sustainable and reliable.

A major challenge of Katrina was that 80 percent of the agency’s team had lost their homes. The people who were on duty the day of the storm were suddenly homeless.

The agency also had to spend more than $1 billion in restoration and recovery without being able to draw on the capital market, and disaster recovery through the Federal Emergency Management Agency (FEMA) generally involves a reimbursement process. Thus, it was not just the physical and human infrastructure but the financial infrastructure that had to be rebuilt. Future climate change could pose severe challenges to the drinking water system, St. Martin said. If sea level or the volume of water coming down the Mississippi River changes, water quality, the ability to treat water, and the availability of water could all be compromised.

During Katrina, New Orleans lost 31 streetcars, which cost an average of $1.2 million per car to rebuild. It also lost 80% of its bus fleet. That’s not a capital cost you can replace very easily.

In addition, the streetcar network is powered by an electrical grid. In an emergency, the streetcar system needs additional substations that are singly powered for emergency purposes. Public transportation is part of the emergency evacuation system in New Orleans. When government officials tell populations to evacuate, some people will not react.

Operating the public transportation requires people. But drivers and other employees have wives and children who also need to evacuate, and procedures need to be in place to accommodate that process. People are also needed to rebuild the physical infrastructure.

Entergy Corporation is an integrated energy company headquartered in New Orleans that employs nearly 15,000 people. It has about 2.7 million electric customers and 180,000 gas customers in the states of Louisiana, Arkansas, Mississippi, and Texas. It has 15,500 miles of transmission line, 100,000 miles of distribution line, 30 fossil fuel plants, and nine nuclear power plants.

The dependability of other infrastructure functions is critical to the energy industry. Reliable post-storm communications are essential. Transportation systems are needed to recover quickly. Particular components of the infrastructure also require special attention.

Preparing for disasters is a long-term process, which can conflict with the short-term perspectives that are common in government. How can preparations “outlast the 4-year terms of elected officials, the 2-year terms of elected officials, or the 30-second disasters that wreak havoc on our community?” asked Ellis Stanley, director of western emergency management services at Dewberry LLC, who moderated the third panel at the workshop. In addition, governance occurs at multiple levels, from the neighborhood to the federal level, requiring that the various elements of governance be integrated.


Posted in Extreme Weather, Hurricanes, Sea Level Rise

Horse Power

What follows was written by https://civilwartriviajunkie.wordpress.com/about/

“In the June 2015 issue of Civil War Times Magazine is an article by the noted historian James I. Robertson entitled A Dead Horse at Antietam wherein he discusses the role and fate of the noble horse.

The article proceeds to detail numerous statistics regarding the role of horse power in the war. One notable example: the equine requirement for a single six-gun field battery was 72 horses for full efficiency. It should be noted that these horses had to be larger than cavalry mounts and were specified as such. The standard of the day for infantry supply wagons was 12 wagons per 1000 men. Four horses pulled a wagon with 2800 pounds of supplies; a similar hitch of mules could pull a load of 4000 pounds over good roads. Few roads qualified as good. The supply train also carried oats and hay for the livestock, and as in the case of Grant’s “Cracker Line” at Chattanooga, the animals sometimes consumed so much that they could barely bring adequate supply to their destination.

At the time of the Pennsylvania Campaign into Gettysburg, Meade’s army used 4000 wagons and 1100 ambulances. Assuming each ambulance used four animals, that is an estimated total of 26,400 hay burners. Considering the number of animals that were killed or injured, it is understandable how Meade had a problem with pursuit as Lee retreated. It also gives an interesting perspective on the 125-wagon train and 900 mules captured by JEB Stuart at Rockville, Maryland. These brand new wagons and teams, plus the supplies they contained, were a prize no Confederate could easily discount.

As the Civil War began in 1861, no one expected it would last as long as it did. No one ever thought that the 3.4 million horses in the North and the 1.7 million in the Confederacy would not be enough to support the war. The supply of mules was also considered plentiful. The war cost 1.5 million horses and mules their lives, and a million more were returned home lame and broken down from overuse. Some officers were relentless in their efforts and brutally abused the men and animals in their commands. General “Kill Cavalry” Kilpatrick was well known for pushing his command and horses with abandon. Morgan’s Raiders covered 1100 miles in July 1862, and while the cavalrymen could sleep in the saddle, the horses had to keep moving. Stuart’s ride from Carlisle Barracks, Pennsylvania to join Lee at Gettysburg was a similarly taxing ride for the horses, so much so that Lee ordered Stuart out of action to rest his command prior to the cavalry action on 3 July. A detailed account of the condition of his horses might explain the outcome of the day’s action. Numerous other cavalry raids occurred with comparable equine punishment.

Gen. Sherman was aware of the need to care for his livestock on his “March to the Sea”. He required that “every opportunity at a halt during a march should be taken advantage of to cut grass, wheat, or oats and extraordinary care be taken of the horses upon which everything depends”. Sherman’s previous army experience as a logistician served him well.

During the early period of the war the North suffered from poor cavalry stock due to a faulty procurement system rife with graft. General Joe Hooker corrected this during his brief period as Commanding General, when he established an effective depot and procurement system. This began to have a telling effect as the quality of Federal horses improved while the quality of Southern horses declined.

Severe weather conditions affected battlefield performance: forage became scarce, and freezing weather followed by thaw and muddy conditions plagued the livestock. Thomas’ delay at Nashville was a classic example and almost cost him his command. McClellan complained of the condition of his horses, causing President Lincoln to question what he had done to tire his animals. The Civil War was the last war in which muscle power was so heavily depended upon. Later conflicts saw the ever-increasing use of mechanical devices for movement and mobility.

The fall of Vicksburg is often cited as removing the last obstacle to free movement of the Mississippi to the sea. What is not often understood is that once the last stretch of the river was controlled by the Union, the movement of replacement horses and mules from Arkansas and Texas ended. This shortage of horseflesh eventually caused Lee to reduce the number of his artillery batteries, as the required replacement horses were not available.

Horses were deliberately slaughtered at times to keep them out of enemy hands. A lame or disabled horse was regularly killed rather than let it be recovered and returned to health by the enemy. Farriers were in great demand and were paid a higher rate because of the need to keep the stock shod. 2.3 million horse and mule shoes were required annually for every 60,000 animals. Again, having the supply and talent at the right place and time was a clear challenge, and failure caused the stock to suffer and eventually break down.

Horses and mules require considerable roughage in their diet. Feeding only grain causes diseases like colic and other ailments. Grains are easier to transport, but hay is so bulky that often the stock consumes more than it can carry. When there is a shortfall, again the animals suffer. Water is also a necessity for man and animal, and the logistics of water supply are complicated even for today’s army.

All this deals with the health and well-being of the horses and mules. Combat conditions like the one involving Col. Strong’s horse led to large numbers of losses. The Army of the Potomac lost about 881 artillery horses during the three days of combat at Gettysburg. Rufus Ingalls, the Army quartermaster, eventually estimated he would need 5000 replacement horses for the cavalry and artillery. Such losses must have seemed daunting. Then there is the question of disposal of the carcasses. Burning was the only reasonable method, and in some cases, such as after the Battle of Perryville, this horrible chore was left to the civilian population as the armies withdrew.”

Posted in Muscle Power

Human over-consumption causes far more biodiversity loss than climate change

Posted in Biodiversity Loss, Climate Change, Extinction

Toxic algae slime spreading quickly across the earth

2017-8-19. Ocean Slime Spreading Quickly Across the Earth. Craig Welch, National Geographic.

Toxic algae blooms, perhaps accelerated by ocean warming and other climate shifts, are spreading, poisoning marine life and people.

When sea lions suffered seizures and birds and porpoises started dying on the California coast last year, scientists weren’t entirely surprised. Toxic algae is known to harm marine mammals.

But when researchers found enormous amounts of toxin in a pelican that had been slurping anchovies, they decided to sample fresh-caught fish. To their surprise, they found toxins at such dangerous levels in anchovy meat that the state urged people to immediately stop eating them. The algae bloom that blanketed the West Coast in 2015 was the most toxic one ever recorded in that region.

But from the fjords of South America to the waters of the Arabian Sea, harmful blooms, perhaps accelerated by ocean warming and other shifts linked to climate change, are wreaking more havoc on ocean life and people. And many scientists project they will get worse.

“What emerged from last year’s event is just how little we really know about what these things can do,” says Raphael Kudela, a toxic algae expert at the University of California, Santa Cruz.

It’s been understood for decades, for example, that nutrients, such as fertilizer and livestock waste that flush off farms and into the Mississippi River, can fuel harmful blooms in the ocean, driving low-oxygen dead zones like the one in the Gulf of Mexico. Such events have been on the rise around the world, as population centers boom and more nitrogen and other waste washes out to sea.

“There’s no question that we are seeing more harmful blooms in more places, that they are lasting longer, and we’re seeing new species in different areas,” says Pat Glibert, a phytoplankton expert at the University of Maryland.

But scientists also now see troubling evidence of harmful algae in places nearly devoid of people. They’re seeing blooms last longer and spread wider and become more toxic simply when waters warm. And some are finding that even in places overburdened by poor waste management, climate-related shifts in weather may already be exacerbating problems.

Fish kills stemming from harmful algal blooms are on the rise off the coast of Oman. Earlier this year, algae blooms suffocated millions of salmon in South America, enough to fill 14 Olympic swimming pools. Another bloom is a suspect in the death last year of more than 300 sei whales in Chile.

In the north, blooms are on the rise in places like Greenland, where some scientists suspect melting ice is driving the shift. Just this year, scientists showed that domoic acid from toxic algae was showing up in walrus, bowhead whales, beluga, and fur seals in Alaska’s Arctic, where such algae species weren’t believed to be common.

“We expect to see conditions that are conducive for harmful algal blooms to happen more and more often,” says Mark Wells, with the University of Maine. “We’ve got some pretty good ideas about what will happen, but there will be surprises, and those surprises can be quite radical.”

The Birth of a Bloom

If you look at seawater under a microscope, what you see may resemble a weird alphabet soup: tiny photosynthetic organisms that can resemble stacks of slender Lincoln logs, stubby mushrooms, balloons, segmented worms, or mini wagon wheels. Some float about in currents; others propel themselves through the water column. As conditions change, the environment can become perfect for one or two to take over. Suddenly these algae may bloom.

“Every organism on this planet has its ideal temperature,” says Chris Gobler, a professor at Stony Brook University. “In a given water body, as it gets warmer, that’s going to favor the growth of some over others, and in some cases the harmful ones will do better.”

Algae is essential for life, but some species and some blooms can trigger serious harm. Some poison the air people breathe or change the color of the sea. Some accumulate in fish and shellfish, causing seizures, stomach illnesses, even death for the birds, marine mammals, and humans that eat them. Some blooms are so thick that when they finally die they use up oxygen needed by other animals, and leave rafts of dead eels, fish, and crabs in their wake.

In 2015, as a blob of warm water along the U.S. West Coast was breaking temperature records, regular sampling showed that dangerous levels of the biotoxin domoic acid from the algae Pseudo-nitzschia were building up in shellfish. Short-term harvest closures for razor clams and crab aren’t uncommon because, while domoic acid doesn’t hurt shellfish, it can cause seizures and death in people who eat contaminated creatures.

While scientists knew domoic acid accumulates in the head and guts of fish—which are often consumed whole by marine mammals and birds—researchers rarely find these water-soluble toxins in the parts of fish that humans eat. And where most blooms last for weeks, this one dragged on for months. And while most are localized, this one covered vast areas of sea from Santa Barbara to Alaska. So when Kudela and his crew started testing, they found trace amounts of the toxin in the meat of rockfish, halibut, lingcod, and nearly every fish they tested. In anchovies it was far beyond what regulators consider safe.

“Before, even when the fish were toxic, they (regulators) were saying ‘Decapitate it and gut it and it will be fine,’ ” Kudela says. “It definitely raises new questions, like ‘Should we be monitoring things like flatfish on a more routine basis? and ‘Are we really prepared for what’s coming?’ ”

While the heat that drove this massive bloom may or may not be linked to climate change, scientists say a warming climate will make marine heat waves more common in the future.

And climate change isn’t just about temperature. It will also change how storms and melting ice add moisture to the marine world, make the oceans more corrosive, and alter the mixing of deep cold waters with light-filled seas at the surface. All of that can and will affect how harmful algae grow.

It’s just not always easy to see how.

Tracking Changes in the Arabian Sea

Joaquim Goes, a research professor at Columbia University’s Lamont-Doherty Earth Observatory, has been trying to track climate’s role in transforming one of the world’s most rapidly changing marine environments, the Arabian Sea.

In the early 2000s, scientists documented blooms of shimmering bioluminescent Noctiluca scintillans, a beautiful green algae that can make the sea light up and sparkle. Now it shows up every year, in ever larger densities and covering more area.

“Globally, I’ve studied lots of ocean basins, and here the change is just massive—this one species is just taking over,” Goes says.

While it’s clear that rising use of fertilizers and massive population growth without corresponding wastewater treatment in places like Mumbai and Karachi are helping fuel this massive change, Goes and some others think that is not the only factor. Rapid melt of Himalayan glaciers is altering monsoon patterns, he says, intensifying them and helping reduce oxygen levels in surface waters, making them more conducive to Noctiluca. That, in turn, is changing what lives there and what they eat.

“Think of it as looking at a forest and over a period of about a decade, all the species have changed,” says Glibert, at Maryland. “The type of algae that grows at the base of the food web sets the trajectory for what’s growing at the top of the food web.”

Goes fears these changes ultimately could spell disaster for that region’s fisheries, which provide tens of millions of dollars and help support life for 120 million people.

Thus far, the creatures that most seem to like to eat this algae are jellyfish and sea-centipede-like creatures known as salps. Those, in turn, are eaten by animals that can thrive in low-oxygen environments, namely sea turtles and squid. Landings of squid already are on the rise in places like Oman, Goes says, while tuna and grouper catches are down. And the low-oxygen environment itself can have acute effects. Just last fall, low-oxygen water along the coast of Oman killed fish for hundreds of kilometers.

Complex Ocean Physics

Still, it’s not always obvious what the trends really show or how all these pieces fit together.

Charles Trick, with the University of Western Ontario, says the physics of ocean environments are so complicated that climate change is likely to worsen algal blooms in a select few places, but not necessarily as a general rule. He is skeptical about climate impacts on blooms in the Arabian Sea, for example, but believes environments like the U.S. West Coast are prime for more massive blooms.

“Everything in this field is controversial,” Trick says. “There’s a lot of enthusiasm to challenge the big questions, but not a lot of data.”

What information there is often isn’t so clear. Kathi Lefebvre, with the National Oceanic and Atmospheric Administration’s Northwest Fisheries Science Center in Seattle, has been the one tracking the domoic acid in hundreds of marine mammals in Alaska. The discovery in walrus, bowhead, and other Arctic mammals was a surprise, but it’s not clear if it’s part of a new trend—or simply the way things have always been. No one had ever checked before, so there is no past for Lefebvre to compare to.

“It’s a weird thing—we saw domoic acid in every species we looked at, so they are all being exposed to it,” she says. But domoic acid in high doses sometimes leads to seizure and death, which had never been documented in the Arctic. Has it happened all along, but the region is so sparsely populated that no one noticed? Or are these blooms moving north and still building, potentially responding to warming waters and melting ice?

“It’s pretty clear that if you change temperature, light availability and nutrients, that can absolutely change an ecosystem,” Lefebvre says. “But is it just starting? Is it getting worse? Is it the same as always? I have no idea.”

Posted in Biodiversity Loss, Fisheries, Oceans, Water

Vaclav Smil on wood

[ I’ve extracted bits about wood from Smil’s book about materials below; read the book for the larger context.  Enormous amounts of wood were used in former civilizations with much smaller populations than today, so it’s clear we can’t go back to wood as an energy resource as fossils decline without very quickly cutting down the remaining forests. Though we’re already destroying forests at such a huge rate for agriculture and construction that perhaps forests will mostly be gone by the time declining fossils are noticeably reducing population.  Though inaccessible boreal and other forests will remain, if climate change hasn’t already converted them to grasslands.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

Vaclav Smil. 2013. Making the Modern World: Materials and Dematerialization.  Wiley.

The ships that made the first Atlantic crossings were remarkably light: a Viking ship (based on a well-preserved Gokstad vessel built around 890 CE) required the wood of 74 oaks (including 16 pairs of oars).

Fuel-wasting fireplaces and braziers resulted in a huge demand for fuel wood and charcoal to heat the expanding cities of the pre-coal era. In Paris, the demand rose from more than 400,000 loads of wood in 1735 to more than 750,000 loads in 1789 (about 1.6 Mm3) and the same amount of charcoal, prorating to more than a ton of fuel per capita (Roche, 2000).

Wood remained indispensable not only for building houses and transportation equipment (carts, wagons, coaches, boats, ships) but also—as iron smelting rose in parts of Europe—for charcoal production for blast furnaces (substitution by coke began only during the latter half of the eighteenth century and was limited to the UK). And as Europe’s maritime powers (Spain, Portugal, England, France, and Holland) competed in building large ocean-going vessels—both commercial and naval—the increasing number of such ships and their larger sizes brought unprecedented demand for the high-quality timber needed to build hulls, decks, and masts.

With wooden hulls, masts, and spars being as much as 70% of the total mass (the remainder was divided among ballast, supplies, sails, armaments, and crew) these pioneering vessels contained 60–75 tons of sawn timber (Fernández-González, 2006).

Iron production in small blast furnaces required enormous quantities of charcoal and combined with inefficient wood-to-charcoal conversion this led to widespread deforestation in iron-smelting regions: by 1700 a typical English furnace consumed 12,000 tons of wood a year (Hyde, 1977).

All railroad ties (sleepers) installed during the nineteenth century were wooden; concrete sleepers were introduced only around 1900 but remained uncommon until after World War II. Standard construction practice requires the placement of about 1900 sleepers per km of railroad track, and with a single tie weighing between roughly 70 kg (pine) and 100 kg (oak) every kilometer needed approximately 130–190 t of sawn (and preferably creosote-treated) wood. My calculations show that the rail tracks laid worldwide during the nineteenth century required at least 100 Mt of sawn wood for original construction and at least 60 Mt of additional timber for track repairs and replacements (Smil, 2013).
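
Smil’s per-kilometer figure is easy to verify from the numbers he gives; a quick check:

```python
# The 130–190 t/km figure follows directly from the numbers Smil gives:
# about 1,900 sleepers per km, each weighing roughly 70 kg (pine) to 100 kg (oak).
SLEEPERS_PER_KM = 1900

for wood, kg_per_tie in (("pine", 70), ("oak", 100)):
    tonnes_per_km = SLEEPERS_PER_KM * kg_per_tie / 1000
    print(f"{wood}: about {tonnes_per_km:.0f} t of sawn wood per km of track")
# -> pine: about 133 t/km, oak: about 190 t/km
```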

Wooden railway ties, that quintessential nineteenth-century innovation, maintained their high share of the global market throughout the twentieth century. During the 1990s, 94% of America’s ties were wooden.

The energy cost of market-ready lumber (timber) is low, comparable to the energy cost of many bulk mineral and basic construction materials produced by their processing. Tree felling, removal of boles from the forest, their squaring, and air drying will add up to no more than about 500 MJ/t, and even with relatively energy-intensive kiln-drying (this operation may account for 80–90% of all thermal energy) the total ranges from as low as 1.5 GJ/t to more than 3.5 GJ/t (including cutting and planing) for such common dimensional construction cuts as 2 × 4 studs used for framing North American houses.

The low energy cost of wood is also illustrated by the fact that, in Canada, the energy cost of wood products represents less than 5% of the cost of the goods sold (Meil et al., 2009). Energy costs on the order of 1–3 GJ/t are, of course, only small fractions of wood’s energy content that ranges from 15 to 17 GJ/t for air-dry material. Obviously, the energy cost of wood products rises with the degree of processing (FAO, 1990). Particle board (with a density between 0.66 and 0.70 g/cm3) may need as little as 3 GJ/t and no more than 7 GJ/t, with some 60% of all energy needed for particle drying and 20% for hot pressing.
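
To make the comparison concrete, the sketch below takes the 1–3 GJ/t processing range and the 15–17 GJ/t energy content quoted here and computes the ratio (illustration only):

```python
# The comparison being made here, written out: processing energy of roughly 1–3 GJ/t
# against wood's own energy content of 15–17 GJ/t of air-dry material. Figures are
# from the text; the ratio calculation is only an illustration.
processing_gj_per_t = (1.0, 3.0)        # felling, sawing, drying, planing
energy_content_gj_per_t = (15.0, 17.0)  # air-dry wood

low = processing_gj_per_t[0] / energy_content_gj_per_t[1]
high = processing_gj_per_t[1] / energy_content_gj_per_t[0]
print(f"processing takes roughly {low:.0%} to {high:.0%} of the wood's own energy content")
# -> roughly 6% to 20%
```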

The energy cost of paper making varies with the final product and, given the size and production scale of modern papermaking machines (typically 150 m long, running speeds up to 1800 m/min., and annual output of 300 000 t of paper), is not amenable to drastic changes (Austin, 2010). Unbleached packaging paper made from thermo-mechanical pulp is the least energy-expensive kind (as little as 23 GJ/t); fine bleached uncoated paper made from kraft pulp consumes at least 27 GJ/t and commonly just over 30 GJ/t (Worrell et al., 2008). Most people find it surprising that this is as much as for a high-quality steel.

Recycled and de-inked newsprint or tissue can be made with less than 18 GJ/t, but the material is often down-cycled into lower quality packaging materials.

Wooden floors are much less energy intensive than the common alternatives: the total energy per square meter of flooring per year of service was put at 1.6 MJ for wood (usually oak or maple) compared to 2.3 MJ for linoleum and 2.8 MJ for vinyl.

Collection of household waste paper is expensive, and a thorough processing of the material is needed to produce clean fibers for reuse. This includes defibering of paper, cleaning and removal of all nonfiber ingredients (most often adhesive tapes, plastics, and staples), and de-inking is needed if the fibers are to be reprocessed into white paper. Reprocessing shortens the cellulose fibers and this means that paper can be recycled no more than 4 to 7 times.

Late 19th to early 20th century hand-stoked coal stoves converted no more than 20–25% of the fuel’s chemical energy to useful heat, though that’s good compared to the less than 10% efficiency of the wood-burning fireplaces that preceded them. Oil-fired furnace efficiency can be up to 50%, and natural gas home furnaces reach 70–75%.

Posted in Vaclav Smil, Wood

Threats to America’s drinking and sewage treatment infrastructure

[ Here are a few of the points made in this 170 page document about improving the nation’s water system security (excerpts follow):

  • There are many potential threats to water infrastructure, including terrorism, failure of aging infrastructure, flooding, hurricanes, earthquakes, cyber-security breaches, chemical spills, a pandemic causing widespread absenteeism of water treatment employees, and intentional release of chemical, biological, and radiological agents.
  • Preventing a terrorist attack on the nation’s water infrastructure may be impossible because of the number and diversity of utilities, the multiple points of vulnerability, the high number of false positives, and the expense of protecting an entire system.
  • Drinking and sewage water treatment depend on electricity. In a power outage, natural or deliberately started fires would be hard to put out. Explosives could destroy communications.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

NRC. 2007. Improving the Nation’s Water Security: Opportunities for Research.  Committee on Water System Security Research, National Research Council, National Academies Press. 170 pages

An attack on the water infrastructure could cause mortality, injury, or sickness; large-scale environmental impacts; and a loss of public confidence in the safety and quality of drinking water supplies.

An important overarching issue that remains unresolved is making water security information accessible to those who might need it. The problem of information sharing in a security context is one of the most difficult the EPA faces. Currently, some important information on priority contaminants and threats that could improve utilities’ response has been classified and cannot be shared with utilities, even through secure dissemination mechanisms.

While contingency plans have existed for decades within the water and wastewater utilities industry to handle power interruptions or natural events such as flooding, new security concerns include disruption of service by physical attack (e.g., explosives), breaches in cyber security, and the intentional release of contaminants (including chemical, biological, and radiological agents).

Both drinking water and wastewater systems are vulnerable to terrorist attack. The consequences of security threats involve potential mortality, injury, or sickness; economic losses; extended periods of service interruption; and a loss of public confidence in the safety and quality of drinking water supplies—a major concern even without a serious public health consequence.

Flushing a drinking water distribution system in response to intentional chemical contamination could transport contaminants to the wastewater system and, unless removed by wastewater treatment, into receiving waters; thus, large-scale environmental impacts could also result from water security events.

Security threats to wastewater systems, while posing a less direct impact on public health, are nevertheless serious concerns. Chemical or microbial agents added in relatively small quantities to a wastewater system could disrupt the treatment process, and a physical attack on a wastewater collection system could create local public health concerns and potentially large-scale environmental impacts.

Wastewater collection systems (e.g., large-diameter sewer mains) may also serve as conduits for malicious attacks via explosives that could cause a large number of injuries and fatalities.

An attack on a wastewater system could also create public health concerns if untreated wastewater were discharged to a river used as a downstream drinking water supply or for recreational purposes (e.g., swimming, fishing).

Infrastructure Interdependencies: Electricity, firefighting, communications, natural disasters, epidemics

Threats to water security also raise concerns regarding cross-sector interdependencies of critical infrastructures. Water utilities are largely dependent upon electric power to treat and distribute water. Likewise, electric power is essential to collect and treat wastewater.

The firefighting ability of municipalities would be seriously weakened without an adequate and uninterrupted supply of water, and intentional fires could be set as part of a terrorist attack to further exacerbate this impact. Explosive attacks in wastewater collection systems could affect other critical co-located infrastructures, such as communications.

Many of the principles used to prepare for and to respond to water security threats are directly applicable to natural hazards. Hurricane Katrina reminded the nation that natural disasters can cause both physical damage and contamination impacts on water and wastewater systems.

Moreover, natural disasters (e.g., earthquakes, floods) and routine system problems (e.g., aging infrastructure, nonintentional contamination events) are far more likely to occur than a terrorist attack.

An epidemic or pandemic illness could also create failures in smaller water or wastewater utilities if supply chains become compromised due to widespread absenteeism or if essential personnel are incapacitated. Thus, threats from intentional attacks are not the only threats to the integrity of the nation’s water systems.

The municipal wastewater industry has over 16,000 plants that treat a total flow on the order of 32 billion gallons per day (Bgal/d). More than 92% of the total existing flow is handled by about 3,000 treatment plants that have a treatment capacity of 1 million gallons per day (Mgal/d) or greater, although more than 6,000 plants treat a flow of 100,000 gallons per day or less. Nearly all of the wastewater treatment plants provide some form of secondary treatment, and more than half provide some form of advanced treatment using a diversity of treatment processes and configurations. Thus, crafting a wastewater security research strategy that is suitable for all wastewater treatment plants is difficult.

Protecting a very large number of utilities against the consequences of the wide range of possible threats is a daunting, perhaps impossible, task. The development of a workable security system to prevent physical attacks against commercial airline flights is difficult and is still a work in progress, and the comparable problem for water systems is vastly more complex. Security technologies for one type of system might not work for another, and many systems might require custom designs. Further, no systems are immune from concern about an attack. A chemical or biological attack on a system that serves only a few thousand people would still be significant in terms of loss of life, economic damage, or the amount of fear and loss of confidence it would cause. In addition, smaller systems tend to be less protected and more vulnerable to a malicious attack. Approximately 160,000 drinking water systems and 16,000 wastewater systems operate simultaneously 24 hours a day, 7 days a week, with the largest systems each servicing millions of customers, and each is capable of being attacked by many different means requiring different methods of prevention. Expecting utilities to harden water and wastewater infrastructure to eliminate all vulnerabilities is unreasonable. The costs of security for the industry would be borne by the end users, and these users may not be willing to bear the costs of developing and implementing technologies that could prevent even a limited range of terrorist attacks over the entire nation’s water and wastewater systems.

Clearly, the earlier a contaminant is detected, the greater the likelihood that its public health impact can be reduced. Thus, an initial research interest has focused on developing early detection systems for chemical or biological agents that might intentionally be introduced into water or wastewater. Any such effort, however, will have to overcome some significant challenges to fashion advanced technologies into a workable system, considering the challenge of the number and diversity of water and wastewater systems and potential contaminants.

Detecting intruders and chemicals: too many false positive alarms

Let us assume, for example, a very high rate of one such intentional attack per year among the largest 10,000 drinking water systems. To detect such an attack, sensors would have to be placed throughout the systems and take frequent measurements. If a generic intrusion detector samples once every 10 minutes and there are on average 20 detectors per system (a reasonable assumption for one of the 10,000 largest systems, although one might expect more for a very large system and fewer for a very small system), this adds up to roughly a million sampling intervals per system per year. Assuming a false positive rate of one in 10 million measurements (an extraordinarily small rate if also maximizing sensitivity), this would still produce about 1,000 false positives per year among these 10,000 water systems. With only one true positive (the single assumed attack) expected per year against roughly 1,000 false alarms, almost every time the alarm goes off (about 99.9 percent of the time) it is a false positive. As a result, operators are likely to disconnect, ignore, or simply choose not to install the detection system. If detectors are ignored or not maintained, they cannot practically serve their purpose, whether to prevent, warn, or treat.

The problem is compounded when considering the installation of detectors for each of a large number of potential biothreat agents. Meinhardt published a table of 28 selected agents in 8 broad categories identified by multiple governmental, military, and medical sources as possible biowarfare agents that might present a public health threat if dispersed by water. Assuming success in constructing a 100% sensitive and extremely specific detector for each of the eight broad agent categories (e.g., viral pathogen, marine biotoxin), and assuming each broad category has an equal probability of being employed in an attack, the probability that a given alarm is true is reduced by almost another order of magnitude, because eight detectors each producing false alarms at the baseline rate generate roughly eight times as many false positives. In other words, the additional analysis of multiple categories of agents requires an order-of-magnitude reduction in the false positive rate of each detector just to get back to the already unsatisfactory baseline of the generic intrusion detector. The fundamental problem relates to the rarity of an attack on any particular system. Detectors can be made with high sensitivity and specificity (low false negative and false positive rates, respectively), but when applied in situations where the event to be detected is uncommon, the predictive value of an alarm can be very small.

A false positive alarm every few years might conceivably be acceptable to some communities that consider themselves high-risk targets, assuming there is an agreed-upon response plan in place for a positive signal.

(The calculations were conducted as follows: 10,000 water systems × 20 detectors/system × 6 measurements/detector/hour × 8,760 hours/year = 10,512,000,000 measurements/year across all 10,000 systems. Given the scenario’s assumptions of a false positive rate of one in 10 million measurements and an attack rate of one per year across the 10,000 drinking water systems, there will be approximately 1,000 false positives and only one true positive (the single attack) per year.)
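For readers who want to check this arithmetic, the short Python sketch below reproduces it (our illustration, not part of the report). All of the rates are the scenario’s assumptions, and the eight-category extension simply multiplies the false alarms by eight, as discussed above.

```python
# Back-of-envelope check of the false-alarm arithmetic in the scenario above.
# All numbers are the scenario's stated assumptions, not measured data.

systems = 10_000            # largest drinking water systems considered
detectors_per_system = 20   # generic intrusion detectors per system
samples_per_hour = 6        # one measurement every 10 minutes
hours_per_year = 8_760

false_positive_rate = 1e-7  # one false alarm per 10 million measurements
attacks_per_year = 1        # assumed attack rate across all 10,000 systems

measurements = systems * detectors_per_system * samples_per_hour * hours_per_year
false_alarms = measurements * false_positive_rate
ppv = attacks_per_year / (attacks_per_year + false_alarms)

print(f"measurements/year: {measurements:,}")        # ~10.5 billion
print(f"false alarms/year: {false_alarms:,.0f}")     # ~1,000
print(f"share of alarms that are real: {ppv:.4%}")   # ~0.1%

# Extension discussed above: eight broad agent categories, each monitored by a
# detector with the same false positive rate, multiply the false alarms ~8x,
# cutting the already-small predictive value by almost another order of magnitude.
false_alarms_8 = false_alarms * 8
ppv_8 = attacks_per_year / (attacks_per_year + false_alarms_8)
print(f"share of alarms that are real with 8 agent detectors: {ppv_8:.4%}")
```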

Improved event detection architecture could possibly reduce the number of false positives. In this approach, a water system would install an array of sensors linked so that an alarm is triggered only when a statistically significant number of sensors detect abnormal levels. This should reduce or eliminate the false positives caused by independent sensor malfunctions, but it would also increase the false negative rate (i.e., decrease sensitivity) and the cost of the detection system. The cost of purchasing and maintaining such detection instruments over a period of years needs to be considered in evaluating the likelihood of implementation.
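As a rough illustration of how such a voting architecture changes the numbers (our assumption, not a description of any particular product), requiring several co-located sensors to trip in the same sampling window before raising an alarm drives the chance of a spurious system-level alarm down sharply, provided the sensors fail independently:

```python
# Illustrative only: probability that a k-of-n sensor quorum fires by chance,
# assuming each sensor trips independently with probability p per sampling window.
from math import comb

def quorum_false_alarm_prob(n: int, k: int, p: float) -> float:
    """P(at least k of n independent sensors fire spuriously in one window)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p_single = 1e-7  # per-sensor false positive rate from the scenario above
print(quorum_false_alarm_prob(10, 1, p_single))  # ~1e-6: any single sensor trips
print(quorum_false_alarm_prob(10, 3, p_single))  # ~1.2e-19: 3-of-10 quorum
```

The same quorum requirement, of course, also means that an event registering on only one or two sensors goes undetected, which is the loss of sensitivity noted above.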

Disease surveillance systems have been proposed as another method to detect a drinking water contamination event. The detection of a water-related event using a human-disease-based surveillance system with an appropriate epidemiologic follow-up investigation is insensitive to all but the largest outbreak events and would occur too late to prevent illness. However, disease surveillance systems could be used to mitigate further exposure and to implement treatment or prophylaxis (detect to treat), especially if linked to contaminant monitoring systems. The problems associated with in situ detection systems, discussed in the previous section, apply with even more force to disease surveillance systems designed to detect specific syndromes related to bioterror agents, because disease surveillance systems have only modest sensitivities and specificities. The body’s immune system reacts generically to many insults, producing the nonspecific “flu-like symptoms” seen in so many different diseases at first presentation. Implementing enhanced disease surveillance systems is costly, and such systems have inherent false positive and false negative rates; not every case of waterborne disease, for example, will eventually be diagnosed as such. It has therefore been argued that the benefits of such enhanced systems may not outweigh the costs in the general case. Public health researchers have argued that “it is challenging to develop sensible response protocols for syndromic surveillance systems because the likelihood of false alarms is so high, and because information is currently not specific enough to enable more timely outbreak detection or disease control activities” (Berger et al., 2006).

The EPA faces risks in providing water security information and risks in withholding it, and there is no easy solution to a problem that involves risks on both sides. As an example, if research were to find an unforeseen but easy way to contaminate a system, this information might change how utilities protect themselves and improve their ability to recognize that an attack has taken place. At the same time, this information can be used for malicious purposes. As a result, there is a delicate balance between alerting a significant number of water operators of a danger, while minimizing the potential for suggesting a route of attack to a malefactor.

Preventing a terrorist attack on the nation’s water infrastructure may be impossible because of the number and diversity of utilities, the multiple points of vulnerability, and the expense of protecting an entire system.

Overall, the EPA’s efforts in physical and cyber security are limited in scope, reflecting the relatively low priority of the topic to the EPA. The committee is concerned that the potential seriousness of physical attacks on a drinking water system is being overlooked and, therefore, that contingencies and recovery options for physical attacks are not being addressed adequately in the research agenda. The lack of in-house expertise in physical and cyber security further limits the EPA’s ability to take a leadership role in this area, because contract management alone offers limited guidance and oversight of the work being performed.

Two classified reports have been developed that are related to, but not directly associated with, Section 3.2 of the Action Plan: the Threat Scenarios for Buildings and Water Systems Report and the Wastewater Baseline Threat Document. The first report, as described previously in this chapter, ranked the most likely contamination threats to drinking water.

Disaggregation of large water and wastewater systems should be an overarching theme of innovation. Large and complex systems have developed in the United States following the pattern of urban and suburban sprawl. While there are clear economies of scale for large utilities in construction and system management, there are distinct disadvantages as well. The complexity of large systems makes security measures difficult to implement and complicates the response to an attack. For example, locating the source of an incursion within the distribution system and isolating contaminated sections are more difficult in large and complex water systems. Long water residence times are also more likely to occur in large drinking water systems, and, as a result, disinfectant residual may be lacking in the extremities of the system because of the chemical and biological reactions that occur during transport. From a security perspective, inadequate disinfectant residual means less protection against intentional contamination by a microbial agent.
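To make the residence-time point concrete, here is a minimal sketch (ours, not the report’s) of the textbook first-order decay approximation for chlorine residual as a function of water age; the initial dose and decay coefficient are illustrative values only:

```python
# Minimal sketch (illustrative, not from the source text): first-order decay of
# chlorine residual with water age, the common textbook approximation used to
# explain why long residence times in large systems leave little residual at
# the system's extremities.
import math

def residual(c0_mg_per_l: float, k_per_day: float, age_days: float) -> float:
    """Chlorine residual remaining after age_days of transport: C = C0 * exp(-k * t)."""
    return c0_mg_per_l * math.exp(-k_per_day * age_days)

c0 = 1.0  # mg/L leaving the treatment plant (illustrative value)
k = 0.5   # bulk decay coefficient, per day (illustrative value)
for age in (0.5, 2, 5, 10):
    print(f"water age {age:>4} d -> residual {residual(c0, k, age):.2f} mg/L")
```

At these illustrative values, water that has aged about ten days in the far reaches of a large system retains essentially no measurable residual, which is the security concern described above.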
