550,000 abandoned mines, $50 billion to clean up the worst of them

Aerial photo of waste water rushing out of the Gold King mine in Silverton, CO. Photo: Steve Garrison/The Daily Times

[ Abandoned mines can cause soil erosion, contamination from heavy metals and other toxics (e.g., lead, arsenic, mercury, uranium, cyanide), and acid drainage that threatens thousands of streams and rivers. The EPA estimates it would cost $50 billion to reclaim abandoned and inactive mine sites.

Cleanup funds don’t exist because mining companies, unlike gas and oil concerns, do not pay royalties to the federal government for what they extract from public lands. The U.S. Bureau of Land Management began requiring miners to post cleanup funds in 2001, though many doubt there is enough money. Environmental groups are also trying to force the EPA to require mining companies to set aside cleanup money.

Mining waste seen from airplane, probably in Utah

Once oil starts its inevitable decline, mine sites are not likely to be cleaned up, since oil will increasingly be allocated only to the most critical needs. We really ought to be cleaning up these mines now, because future generations won’t have the energy to do so. Yet even if someone wants to clean up a mine for free – a “Good Samaritan” – this is not so easy to do, as you’ll find out below in this U.S. House hearing.

Mining has been going on for nearly 150 years, most of it before any environmental regulation, and mines have been abandoned without any reclamation or effort to protect the environment from damage. No one knows exactly how many mines there are; some estimates put the total at over 550,000 on public and private lands. The Government Accountability Office estimates there are over 160,000 abandoned hardrock mines in the western states and Alaska, and that about 20% of these sites (about 33,000) are harming the surrounding environment (USGAO).

The EPA estimates that about 40% of the headwaters of rivers and streams in the West, which are the source of drinking water for thousands of people, have been impacted by discharges from abandoned hardrock mines, threatening community and agricultural water supplies, increasing drinking water treatment costs, and limiting fishing and recreation. The number of mines that are causing or could cause environmental harm is unknown, but is generally believed to be about 5 to 10% of the half million sites – so somewhere from 25,000 to 50,000 need remediation of some kind.

Additional headlines about mining

2016-03-26 90% of Indigenous in Brazil’s Amazon Suffer Mercury Poisoning

Alice Friedemann  www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation,” 2015, Springer ]

Gold mine near Carlin, Nevada (from Google Earth)

House 109-62. March 30, 2006. Barriers to the cleanup of abandoned mine sites. U.S. House hearing. 142 pages.

Excerpts:

Mr. DUNCAN. This hearing is on barriers to the cleanup of abandoned mine sites around the Country. Hopefully we will hear about potential ways to encourage volunteers to help clean up these sites. Past mining activities, which occurred when mining practices were less sophisticated than today, have disturbed hundreds of thousands of acres of land, altered drainage patterns and generated substantial amounts of waste scattered around the landscape. Today there are several hundred thousand of these old mine sites in the U.S.; the staff briefing memo says about half a million. Many of these mines were abandoned by the owners or operators a long time ago, when the remaining minerals became too difficult or costly to extract.

Although operated consistent with the governing laws at the time, many of these abandoned mines now pose environmental and health threats to surrounding surface and groundwater and to downstream interests. Nationally, tens of thousands of miles of streams are polluted by acid mine drainage and toxic loadings of heavy metals leaching from many of these old mines, impacting fisheries and water supplies.

State and Federal agencies have worked to remedy these problems, but the number of sites and the expense involved have made progress very slow. A lot of these old mine lands lack a viable owner or operator with the resources to remediate them. Many others are truly abandoned, with no identifiable owner or operator to hold responsible. As a result, few of these old mine sites are getting cleaned up.

Public or private volunteers, or Good Samaritans, have been willing to partially remediate many of these sites. Some of these Good Samaritans may be driven by a desire to improve the environment; others may want to improve water quality at their water supply source. Still others may want to clean up an old mine site for the purpose of re-mining the area or developing it in some other way.

However, most Good Samaritans have been deterred from carrying out these projects by the risk of becoming liable for the complete cleanup required by various environmental laws. This is because current Federal law does not allow for partial cleanups. For example, if a Good Samaritan steps in to partially clean up an abandoned mine site, that party could become liable under the Clean Water Act or Superfund for a greater level of cleanup and higher costs than the party initially volunteered for. Because they could face legal consequences if they fall short of complete cleanup, most potential Good Samaritans refrain from attempting to address a site’s pollution problems at all.

Federal policy should encourage and not discourage parties to volunteer themselves to clean up abandoned sites. We should consider whether in some circumstances environmental standards should be made more flexible in order to achieve at least partial cleanup of sites that otherwise would remain polluted.

This is not about letting polluters off the hook. They should remain responsible under existing law. However, if a party unconnected to an abandoned mine site steps forward to help with remediation, everyone wins. I believe there is little disagreement that encouraging volunteers to clean up abandoned mine sites is a worthwhile policy.

However, in exploring the details of such a policy a number of issues arise, such as who should be eligible for a lower standard of cleanup; how should new standards be applied; and how should potential re-mining of these sites be addressed. To help us identify and address these and other issues, we have assembled a number of parties who have been actively involved in the debate over how to address the abandoned mines problem in the Country.

Ms. JOHNSON. I must say that I find it somewhat ironic that we are considering legislative proposals to scale back environmental standards in order to achieve improvements in water quality. But that is where we find ourselves. Existing programs that could address mine runoff are either inadequately funded or inadequately enforced. The Administration and the Republican-led Congress have reduced funding for water quality under the Clean Water Act both for point and non-point pollution controls. They have also refused to reinstate funding for cleanup of toxic releases under the Superfund program, so now we look to volunteers where the Government will not help.

Abandoned, inactive mines are a major source of uncontrolled pollution in America’s waters, particularly in the western States.

This Committee has been talking about doing something to encourage volunteer efforts to improve water quality for nearly 15 years. Hopefully, this is the year we can get something enacted.

BENJAMIN H. GRUMBLES, ASSISTANT ADMINISTRATOR FOR WATER, UNITED STATES ENVIRONMENTAL PROTECTION AGENCY.

The problem is large and complex. There are hundreds of thousands of abandoned or inactive mine sites across the Country, and there are thousands that are contributing to water quality impairments and degrading watersheds.  I will focus first on the threat of liability. It is a barrier, a barrier to Good Samaritans stepping forward. It is a barrier under the Superfund laws because of the operator liability or arranger liability.

Those who do not own the land and want to step forward to help remediate these legacy sites face the very real threat that they won’t be able to rely on protections under the Superfund law. They face the very real threat that they will be liable under the Clean Water Act for permitting responsibilities.

Partial cleanups by Good Samaritans will result in meaningful environmental improvements. In many cases the impaired water bodies may never fully meet water quality standards, regardless of how much cleanup or remediation is done.

If the Good Samaritan is going to be improving water quality, that is the real measure, and that is what we wanted to ensure occurs, but not have a barrier where they are held to the same standard as the industrial polluter or the other polluter who created the problem in the first place.

The thing I really wanted to emphasize in the remaining amount of time is on the legislative front. We applaud the efforts, the bipartisan efforts of members of Congress, both sides of the aisle and both chambers. But we ourselves are also aggressively moving forward with developing legislation on behalf of the Administration that will bring together and help add momentum to the effort to get legislation across the finish line. We would be delighted to share that when we can, hopefully in the very near future. But it is focused on streamlined permitting processes, a targeted approach, realistic and common sense standards and really accelerating watershed restoration and protection.

Mr. FROHARDT.   Abandoned and inactive mines are responsible for many of the greatest threats to water quality in western States. We do have thousands of stream miles that are impacted by drainage from these mines and runoff. And we have encountered a situation where there is often no identifiable financially responsible party to clean up these sites.

Mr. PIZARCHIK.  In regard to the barriers to the cleanup of abandoned mines, I would like to talk about our experiences in Pennsylvania regarding the reclamation of abandoned mine lands under Pennsylvania’s Environmental Good Samaritan Act and under our re-mining program. In my home State, we have had over 200 years of mining that have left a legacy of over 200,000 acres of abandoned mine lands. These abandoned sites include open pits, water-filled pits, spoil piles, waste coal piles, mine openings and subsided surface areas. The water-filled pit shown on the easel there in that photo covers 40 acres and is 238 feet deep. All that water is acid mine drainage. It will cost over $20 million to reclaim it.

We also have thousands of abandoned mine discharges with varying degrees of acid, iron, aluminum, manganese and sulfates in the water. Some of the discharges are small and some are quite large. One such large discharge is a tunnel that drains over a 20 square mile area and discharges 40,000 gallons per minute.

According to an EPA Region III list from 1995, there were 3,158 miles of Pennsylvania streams affected by mine drainage. Over the last 60 years, Pennsylvania has spent hundreds of millions of dollars on abandoned mine problems. It became clear that without help from others, Government efforts alone would take many decades and billions of dollars to clean up all of the problems.

Additional options were needed. One option was re-mining. We found that operators were obtaining mining permits for abandoned sites and were mining the coal that was previously economically and technologically impossible to recover. However, such re-mining and reclamation was not occurring on sites that contained mine drainage.

When Pennsylvania officials tried to leverage the State’s limited resources by working with citizen and watershed groups to accomplish more reclamation, we met significant resistance. Citizen groups and mine operators alike would not tackle sites that had mine drainage on them, because State and Federal law imposed liability on them to permanently treat that discharge.

Our re-mining program has been very successful. Of 112 abandoned surface mines containing 233 pre-existing discharges that were re-mined, 48 discharges were eliminated, 61 were improved, 122 showed no significant improvement and 2 were degraded. Thousands of tons of metals deposited into the streams annually were removed. Approximately 140 miles of our streams were improved. Conventional treatment to remove those metals would have cost at least $3 million every year. The benefits of re-mining are not limited to water quality improvements. Significant amounts of Pennsylvania’s abandoned mine lands have been reclaimed at no cost to the Government. Over the past 10 years, 465 projects reclaimed over 250,000 acres and eliminated 140 miles of dangerous highwall. Abandoned waste coal piles were eliminated. Abandoned pits were filled. And lands were restored to a variety of productive uses, including wildlife habitat.
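A quick tally of the discharge outcomes in the testimony above (a back-of-the-envelope Python sketch; every number comes from the paragraph itself) shows roughly half of the pre-existing discharges were eliminated or improved, while under 1 percent were made worse:

```python
# Outcomes for the 233 pre-existing discharges on re-mined Pennsylvania
# surface mines, as cited in the testimony above.
outcomes = {
    "eliminated": 48,
    "improved": 61,
    "no significant improvement": 122,
    "degraded": 2,
}

total = sum(outcomes.values())
assert total == 233  # sanity check against the cited total

for name, count in outcomes.items():
    print(f"{name}: {count}/{total} ({count / total:.1%})")
# eliminated or improved: (48 + 61) / 233, about 47%
# degraded: 2 / 233, under 1%
```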

On the photo that you see there, those are elk. They are Pennsylvania elk, and they are feeding on a site that was re-mined and reclaimed pursuant to our re-mining program. The estimated value of the reclamation that was accomplished through re-mining in the past 10 years exceeds $1 billion.

Separate from our re-mining laws is Pennsylvania’s Environmental Good Samaritan Act. Like re-mining, only projects approved by our Department are eligible for the protection. Approval is required to ensure that the project is likely to make things better. The project must be an abandoned mine land or abandoned discharge for which there is no liable party. And protections are provided to the land owners, as well as those who are doing the work or providing materials or services.

Pennsylvania has undertaken 34 Good Samaritan projects. Some projects are simple, low-maintenance treatment systems. Others are more complex, like the project in Bittendale, Pennsylvania that transformed an abandoned mine into a park that treats acid mine drainage, celebrates the coal mining heritage, provides recreation facilities for residents and serves to heighten public awareness about the importance of treating mine drainage.

While Pennsylvania’s Good Samaritan Act has been successful, there are concerns. First, the Federal Clean Water Act’s citizen suit provision still poses a potential liability to these Good Samaritans. Recent developments portend action by some who hold a strict, literal view of the permitting requirements and the total maximum daily load requirements of the Federal Clean Water Act. Without a Federal Good Samaritan Act or an amendment to the Clean Water Act providing that Good Samaritan projects and abandoned mine discharges are not point sources or not subject to the permitting requirements, the potential good work of volunteers in Pennsylvania and throughout the Country is at risk. People who would undertake projects to benefit the environment in America could be personally liable for making things better just because they didn’t make them perfect.

Dave Williams, Director, Wastewater, East Bay Municipal Utility District, Oakland, California

I think we need Good Samaritan legislation and what I wanted to do is touch on an example that we had at our district, and then give you some of my thoughts on how the POTW, the publicly owned treatment works, could play a role in this.

Abandoned mines are a big problem. There are over 39,000 in California alone. A lot of waste rock goes along with that, and the acid mine drainage. The East Bay MUD story is that we are a wastewater district serving 1.3 million people in the East San Francisco Bay. Our water supply is on the Mokelumne River in the Sierra Foothills. We have a couple of reservoirs there.

We have a reservoir, and on the south shore of that reservoir is an abandoned mine called Penn Mine. It was a major copper producer during World War II, and it was abandoned in the 1950s. When it was abandoned, there were 400,000 cubic yards of waste rock left in piles on the site. That resulted in 100,000 pounds of copper being discharged into the Mokelumne River every single year from acid mine drainage, and massive fish kills. We were asked by the State in 1978 to help implement an abatement plan. We said okay. Our part of the plan was to build a berm about 100 feet long, 15 feet tall, to basically keep the acid mine drainage from entering the river. We did that on our land. It resulted in dramatic reductions in acid mine drainage and in fish kills. We were then sued in 1990 by the Committee to Save the Mokelumne. They said that the spillway on this berm represented a potential to discharge into the Mokelumne. We argued that in court and lost. When we lost, we were then ordered by EPA to restore the entire site to the pre-mining condition. So we did that at a cost of $10 million. That was completed in 2000.

The story spread like wildfire throughout California, putting a chilling effect on any efforts to clean up abandoned mines, and little has happened since then.

Currently, San Francisco Bay is impaired for mercury. So how did all the mercury get there? Well, it came from mining operations: 26 million pounds of mercury was used to extract gold during the Gold Rush days. Eight million pounds of mercury found its way down into the sediment of San Francisco Bay. So you have the sediment and you have the continued runoff from the abandoned mines. The total maximum daily load report for San Francisco Bay identified two major sources: sediment and abandoned mines. Publicly-owned treatment works were also a contributor. All 40 treatment plants in the Bay Area together contribute 17 kilograms out of the total of 1,220 kilograms that finds its way into San Francisco Bay every year. We are viewed as a de minimis source.
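The “de minimis” characterization follows directly from those two numbers; a minimal sketch of the arithmetic (both figures are taken from the testimony above):

```python
# Annual mercury loads to San Francisco Bay, per the testimony above.
potw_load_kg = 17      # all 40 Bay Area publicly owned treatment works combined
total_load_kg = 1220   # total mercury reaching the Bay each year

share = potw_load_kg / total_load_kg
print(f"POTW share of the Bay's annual mercury load: {share:.1%}")  # ~1.4%
```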

Nonetheless, the Total Maximum Daily Load report is proposing that we cut back the mercury discharge from treatment plants by 40% in 20 years. You can do a little bit with pollution prevention, but you can’t make the 40%. So we need some assistance there. What we are faced with is installing costly tertiary treatment facilities at an estimated cost of $200 million to $300 million per year for ratepayers, in addition to what they are currently paying in the San Francisco Bay area. Doesn’t it make sense to spend much less than that and get much more bang for the buck by creating a mechanism where you could go in, relieve some liability and do some good by cleaning up abandoned mines? It seems to make sense to me.

Mr. WOOD.  Trout Unlimited has about 160,000 members in 36 States across the Country. We have a long history of engaging in watershed restoration projects that improve fisheries and water quality and otherwise improve watershed health. In fact, each of our more than 400 chapters donates well over 1,200 hours a year of volunteer service doing stream cleanups, including a number of abandoned coal and hard rock mine projects. Since the creation of the Office of Surface Mining’s Abandoned Mine Reclamation Fund in 1977, more than $7.5 billion has been collected from the coal industry to help heal Appalachian and western coal fields. In places such as the Kettle Creek watershed of North Central Pennsylvania, our work provides an example of how you can use those resources both to accomplish ecological restoration and to achieve economic opportunities. In some of the places where we work in that State, thanks in large part to Pennsylvania’s Good Samaritan legislation, which you heard about earlier, coal contributing to acid mine drainage is mined as part of a remediation plan.

From a fisheries and watershed health perspective, issues associated with abandoned gold and silver mines and copper mines are very similar to those of coal mines. The enormity and scope of the abandoned mine problem in the western United States has led to a collective sense of futility which I think you have heard a fair amount about today, that has fostered inactivity in many landscapes.

There are hundreds if not thousands of other cleanups across the west that could be conducted if liability and funding issues were addressed. Every commodity developed off public lands has dedicated funding to pay for cleanup associated with production, except for hardrock minerals. Communities and organizations such as ours could get …

Ms. SMITH.  More importantly, we don’t believe that the Good Samaritan projects are the real crux of the problem. There are downsides: it is difficult, as you drew out, Mr. Chairman, to figure out how to craft legislation so you don’t have projects that make mistakes, and so you don’t weaken liability and stop cleanup at sites like Yerington, Nevada or Kennecott in Utah. The inescapable fact is that there is an enormous universe of abandoned mines, perhaps in the range of half a million total, and neither industry nor Government is spending enough money to make a serious dent in the problem. Money is the single most important barrier to cleanup. Congress needs to appropriate more funds for cleanup. States need to contribute more to cleanup. And the hardrock mining industry needs to follow the approach of their coal mining brethren by picking up a share of the cost of cleaning up legacy mining problems.

The other part of the problem is that not all of mining’s problems are in the past. There are many mines, like those in the Copper Basin of Tennessee, that do fit the image of the legacy or historic mine. But there are many more of far more recent vintage. The west is dotted with abandoned mines that date not from the 1880s but from the 1980s, when gold, copper and uranium mining were booming. Too many of those boom projects, once touted as environmental models and economic windfalls, have left large and costly messes. These messes exist today, threats to public health and the environment and drains on the Federal Treasury, because the programs for regulating hard rock mining have failed. There is a desperate need for improvement of mining regulation, for a reasonable and enforceable program to govern disposal of mine waste, and for financial assurance rules that actually assure that cleanup funds are available when mining operations cease.

The pressing need is for improved regulation. The pressing need is for scrutiny and controls that recognize that perpetual pollution can occur at facilities like the Zortman-Landusky Mine in Montana. The pressing need is for a regulatory system that deals with the vast amount of toxic waste produced by this industry, now, while the industry is on the crest of a boom.


There are over 300,000 contaminated soil and water sites in the U.S.

[ There are too many problems that will require too much energy to solve after oil declines; for the sake of future generations, we should have spent our energy on them while it was still bountiful. Along with climate change and nuclear waste, our descendants will be walloped with polluted water that can cause cancer and adverse neurological, reproductive, or developmental conditions, and shorten lifespans in many other ways. Groundwater will be polluted for thousands of years at the over 300,000 sites that are unlikely to be cleaned up by the time energy shortages start to occur.

Alice Friedemann  www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation,” 2015, Springer ]

NRC. 2013. Alternatives for Managing the Nation’s Complex Contaminated Groundwater Sites. National Research Council, National Academies Press. 423 pages

TABLE 2-6 Rough estimate of the total number of currently known facilities or contaminated sites that have not reached closure, and estimated costs to complete

CONCLUSIONS AND RECOMMENDATIONS

The Committee’s rough estimate of the number of sites remaining to be addressed and their associated future costs is presented in Table 2-6, which lists the latest available information on the number of facilities (for CERCLA and RCRA) and contaminated sites (for the other programs) that have not yet reached closure, and the estimated costs to remediate the remaining sites.

At least 126,000 sites across the country have been documented that have residual contamination at levels preventing them from reaching closure. This number is likely to be an underestimate of the extent of contamination in the United States for a number of reasons. First, for some programs data are available only for contaminated facilities rather than individual sites; for example, RCRA officials declined to provide an average number of solid waste management units per facility, noting that it ranged from 1 to “scores.” CERCLA facilities frequently contain more than one individual release site. The total does not include DoD sites that have reached remedy in place or response complete, although some such sites may indeed contain residual contamination. Finally, the total does not include sites that likely exist but have not yet been identified, such as dry cleaners or small chemical-intensive businesses (e.g., electroplating, furniture refinishing).

Information on cleanup costs incurred to date and estimates of future costs, as shown in Table 2-6, are highly uncertain. Despite this uncertainty, the estimated “cost to complete” of $110-$127 billion is likely an underestimate of future liabilities. Remaining sites include some of the most difficult to remediate sites, for which the effectiveness of planned remediation remains uncertain given their complex site conditions. Furthermore, many of the estimated costs (e.g., the CERCLA figure) do not fully consider the cost of long-term management of sites that will have contamination remaining in place at high levels for the foreseeable future.

Despite nearly 40 years of intensive efforts in the United States as well as in other industrialized countries worldwide, restoration of groundwater contaminated by releases of anthropogenic chemicals to a condition allowing for unlimited use and unrestricted exposure remains a significant technical and institutional challenge.

Recent estimates by the U.S. Environmental Protection Agency (EPA) indicate that expenditures for soil and groundwater cleanup at over 300,000 sites through 2033 may exceed $200 billion (not adjusted for inflation), and many of these sites have experienced groundwater impacts.

One dominant attribute of the nation’s subsurface remediation efforts has been lengthy delays between discovery of the problem and its resolution. Reasons for these extended time frames are now well known: ineffective subsurface investigations, difficulties in characterizing the nature and extent of the problem in highly heterogeneous subsurface environments, remedial technologies that have not been capable of achieving restoration in many of these geologic settings, continued improvements in analytical detection limits leading to discovery of additional chemicals of concern, evolution of more stringent drinking water standards, and the realization that other exposure pathways, such as vapor intrusion, pose unacceptable health risks. A variety of administrative and policy factors also result in extensive delays, including, but not limited to, high regulatory personnel turnover, the difficulty in determining cost-effective remedies to meet cleanup goals, and allocation of responsibility at multiparty sites.

There is general agreement among practicing remediation professionals, however, that there is a substantial population of sites where, due to inherent geologic complexities, restoration within the next 50 to 100 years is likely not achievable. Reaching agreement on which sites should be included in this category, and what should be done with such sites, has proven to be difficult. A key decision in that Road Map is determining whether or not restoration of groundwater is “likely.”

Summary

The nomenclature for the phases of site cleanup and cleanup progress are inconsistent between federal agencies, between the states and federal government, and in the private sector. Partly because of these inconsistencies, members of the public and other stakeholders can and have confused the concept of “site closure” with achieving unlimited use and unrestricted exposure goals for the site, such that no further monitoring or oversight is needed. In fact, many sites thought of as “closed” and considered as “successes” will require oversight and funding for decades and in some cases hundreds of years in order to be protective.

At hundreds of thousands of hazardous waste sites across the country, groundwater contamination remains in place at levels above cleanup goals. The most problematic sites are those with potentially persistent contaminants including chlorinated solvents recalcitrant to biodegradation, and with hydrogeologic conditions characterized by large spatial heterogeneity or the presence of fractures. While there have been success stories over the past 30 years, the majority of hazardous waste sites that have been closed were relatively simple compared to the remaining caseload.

Significant limitations with currently available remedial technologies persist that make achievement of Maximum Contaminant Levels (MCL) throughout the aquifer unlikely at most complex groundwater sites in a time frame of 50-100 years. Furthermore, future improvements in these technologies are likely to be incremental, such that long-term monitoring and stewardship at sites with groundwater contamination should be expected.

IMPLICATIONS OF CONTAMINATION REMAINING IN PLACE

Chapter 5 discusses the potential technical, legal, economic, and other practical implications of the finding that groundwater at complex sites is unlikely to attain unlimited use and unrestricted exposure levels for many decades.  First, the failure of hydraulic or physical containment systems, as well as the failure of institutional controls, could create new exposures. Second, toxicity information is regularly updated, which can alter drinking water standards, and contaminants that were previously unregulated may become so. In addition, pathways of exposure that were not previously considered can be found to be important, such as the vapor intrusion pathway. Third, treating contaminated groundwater for drinking water purposes is costly and, for some contaminants, technically challenging. Finally, leaving contamination in the subsurface may expose the landowner, property manager, or original disposer to complications that would not exist in the absence of the contamination, such as natural resource damages, trespass, and changes in land values. Thus, the risks and the technical, economic, and legal complications associated with residual contamination need to be compared to the time, cost, and feasibility involved in removing contamination outright.

New toxicological understanding and revisions to dose-response relationships will continue to be developed for existing chemicals, such as trichloroethene and tetrachloroethene, and for new chemicals of concern, such as perchlorate and perfluorinated chemicals. The implications of such evolving understanding include identification of new or revised ARARs (either more or less restrictive than existing ones), potentially leading to a determination that the existing remedy at some hazardous waste sites is no longer protective of human health and the environment.

Introduction

Since the 1970s, hundreds of billions of dollars have been invested by federal, state, and local government agencies as well as responsible parties to mitigate the human health and ecological risks posed by chemicals released to the subsurface environment. Many of the contaminants common to these hazardous waste sites, such as metals and volatile organic compounds, are known or suspected to cause cancer or adverse neurological, reproductive, or developmental conditions.

Over the past 30 years, some progress in meeting mitigation and remediation goals at hazardous waste sites has been achieved. For example, of the 1,723 sites ever listed on the National Priorities List (NPL), which are considered by the U.S. Environmental Protection Agency (EPA) to present the most significant risks, 360 have been permanently removed from the list because EPA deemed that no further response was needed to protect human health or the environment (EPA, 2012).

Seventy percent of the 3,747 hazardous waste sites regulated under the Resource Conservation and Recovery Act (RCRA) corrective action program have achieved “control of human exposure to contamination,” and 686 have been designated as “corrective action completed”. The Underground Storage Tank (UST) program also reports successes, including closure of over 1.7 million USTs since the program was initiated in 1984. The cumulative cost associated with these national efforts underscores the importance of pollution prevention and serves as a powerful incentive to reduce the discharge or release of hazardous substances to the environment, particularly when a groundwater resource is threatened. Although some of the success stories described above were challenging in terms of contaminants present and underlying hydrogeology, the majority of sites that have been closed were relatively simple (e.g., shallow, localized petroleum contamination from USTs) compared to the remaining caseload.

Indeed, hundreds of thousands of sites across both state and federal programs are thought to still have contamination remaining in place at levels above those allowing for unlimited land and groundwater use and unrestricted exposure (see Chapter 2). According to its most recent assessment, EPA estimates that more than $209 billion (in constant 2004 dollars) will be needed over the next 30 years to mitigate hazards at between 235,000 and 355,000 sites (EPA, 2004). This cost estimate, however, does not include continued expenditures at sites where remediation is already in progress, or where remediation has transitioned to long-term management.

It is widely agreed that long-term management will be needed at many sites for the foreseeable future, particularly for the more complex sites that have recalcitrant contaminants, large amounts of contamination, and/or subsurface conditions known to be difficult to remediate (e.g., low-permeability strata, fractured media, deep contamination).

According to the most recent annual report to Congress, the Department of Defense (DoD)  currently has almost 26,000 active sites under its Installation Restoration Program where soil and groundwater remediation is either planned or under way. Of these, approximately 13,000 sites are the responsibility of the Army, the sponsor of this report. The estimated cost to complete cleanup at all DoD sites is approximately $12.8 billion. (Note that these estimates do not include sites containing unexploded ordnance.)

Complex Contaminated Sites

Although progress has been made in remediating many hazardous waste sites, there remains a sizeable population of complex sites where restoration is likely not achievable in the next 50-100 years. Although there is no formal definition of complexity, most remediation professionals agree that its attributes include areally extensive groundwater contamination, heterogeneous geology, large releases and/or source zones, multiple and/or recalcitrant contaminants, heterogeneous contaminant distribution in the subsurface, and long time frames since releases occurred.

Complexity is also directly tied to the contaminants present at hazardous waste sites, which can vary widely and include organics, metals, explosives, and radionuclides. Some of the most challenging to remediate are dense nonaqueous phase liquids (DNAPLs), including chlorinated solvents.

Each of the NRC studies has, in one form or another, recognized that in almost all cases, complete restoration of contaminated groundwater is difficult, and in a substantial fraction of contaminated sites, not likely to be achieved in less than 100 years.

Trichloroethene (TCE) and tetrachloroethene are particularly challenging to restore because of their complex contaminant distribution in the subsurface.

Three classes of contaminants have proven very difficult to treat once released to the subsurface: metals, radionuclides, and DNAPLs, such as chlorinated solvents. The report concluded that “removing all sources of groundwater contamination, particularly DNAPLs, will be technically impracticable at many Department of Energy sites, and long-term containment systems will be necessary for these sites.”

An example of the array of challenges faced by the DoD is provided by the Anniston Army Depot, where groundwater is contaminated with chlorinated solvents (as much as 27 million pounds of TCE) and inorganic compounds. TCE and other contaminants are thought to be migrating vertically and horizontally from the source areas, affecting groundwater downgradient of the base, including the potable water supply of the City of Anniston, Alabama. The interim Record of Decision called for a groundwater extraction and treatment system, which has resulted in the removal of TCE in extracted water to levels below drinking water standards. Because the treatment system is not significantly reducing the extent or mobility of the groundwater contaminants in the subsurface, the current interim remedy is considered “not protective.” Therefore, additional efforts have been made to remove greater quantities of TCE from the subsurface, and no end is in sight. Modeling studies suggest that the time to reach the TCE MCL in the groundwater beneath the source areas ranges from 1,200 to 10,000 years, and that partial source removal will shorten those times to 830–7,900 years.

The Department of Defense

The DoD environmental remediation program, measured by the number of facilities, is the largest such program in the United States, and perhaps the world.

The Installation Restoration Program (IRP), which addresses toxic and radioactive wastes as well as building demolition and debris removal, is responsible for 3,486 installations containing over 29,000 contaminated sites.

The Military Munitions Response Program, which focuses on unexploded ordnance and discarded military munitions, is beyond the scope of this report and is not discussed further here, although its future expenses are greater than those anticipated for the IRP.

The CERCLA program was established to address hazardous substances at abandoned or uncontrolled hazardous waste sites. Through the CERCLA program, the EPA has developed the National Priorities List (NPL).  There are 1,723 facilities that have been on the NPL.

As of June 2012, 359 of the 1,723 facilities have been “deleted” from the NPL, which means the EPA has determined that no further response is required to protect human health or the environment; 1,364 remain on the NPL.

Statistics from EPA (2004) illustrate the typical complexity of hazardous waste sites at facilities on the NPL. Volatile organic compounds (VOCs) are present at 78 percent of NPL facilities, metals at 77 percent, and semivolatile organic compounds (SVOCs) at 71 percent. All three contaminant groups are found at 52 percent of NPL facilities, and two of the groups at 76 percent of facilities.

RCRA Corrective Action Program

Among other objectives, the Resource Conservation and Recovery Act (RCRA) governs the management of hazardous wastes at operating facilities that handle or handled hazardous waste.

Although tens of thousands of waste handlers are potentially subject to RCRA, currently EPA has authority to impose corrective action on 3,747 RCRA hazardous waste facilities in the United States.

Underground Storage Tank Program

In 1984, Congress recognized the unique and widespread problem posed by leaking underground storage tanks by adding Subtitle I to RCRA.

UST contaminants are typically light nonaqueous phase liquids (LNAPLs) such as petroleum hydrocarbons and fuel additives.

Responsibility for the UST program has been delegated to the states (or even local oversight agencies such as a county or a water utility with basin management programs), which set specific cleanup standards and approve specific corrective action plans and the application of particular technologies at sites. This is true even for petroleum-only USTs on military bases, a few of which have hundreds of such tanks.

At the end of 2011, there were 590,104 active tanks in the UST program.

Currently, there are 87,983 leaking tanks that have contaminated surrounding soil and groundwater, the so-called “backlog.” The backlog number represents the cumulative number of confirmed releases (501,723) minus the cumulative number of completed cleanups (413,740).
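The backlog is simply cumulative confirmed releases minus cumulative completed cleanups; a one-line check of the arithmetic (numbers from the paragraph above):

```python
# UST program cumulative totals, per the report above.
confirmed_releases = 501_723
completed_cleanups = 413_740

backlog = confirmed_releases - completed_cleanups
print(f"{backlog:,} leaking tanks awaiting cleanup")  # 87,983
```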

Department of Energy

The DOE faces the task of cleaning up the legacy of environmental contamination from activities to develop nuclear weapons during World War II and the Cold War. Contaminants include short-lived and long-lived radioactive wastes, toxic substances such as chlorinated solvents, “mixed wastes” that include both toxic substances and radionuclides, and, at a handful of facilities, unexploded ordnance. Much like the military, a given DOE facility or installation will tend to have multiple sites where contaminants may have been spilled, disposed of, or abandoned that can be variously regulated by CERCLA, RCRA, or the UST program.

The DOE Environmental Management program, established in 1989 to address several decades of nuclear weapons production, “is the largest in the world, originally involving two million acres at 107 sites in 35 states and some of the most dangerous materials known to man”.

Given that major DOE sites tend to be more challenging than typical DoD sites, it is not surprising that the scope of future remediation is substantial. Furthermore, because many DOE sites date back 50 years, contaminants have diffused into the subsurface matrix, considerably complicating remediation.

More recent reports suggest that about 7,000 individual release sites out of 10,645 historical release sites have been “completed,” which means at least that a remedy is in place, leaving approximately 3,650 sites remaining. In 2004, DOE estimated that almost all installations would require long-term stewardship.

As of April 1995, over 3,000 contaminated sites on 700 facilities, distributed among 17 non-DoD and non-DOE federal agencies, were potentially in need of remediation. The Department of Interior (DOI), Department of Agriculture (USDA), and National Aeronautics and Space Administration (NASA) together account for about 70 percent of the civilian federal facilities reported to EPA as potentially needing remediation (EPA, 2004). EPA estimates that many more sites have not yet been reported, including an estimated 8,000 to 31,000 abandoned mine sites, most of which are on federal lands, although the fraction of these that are impacting groundwater quality is not reported. The Government Accountability Office (GAO) (2008) determined that there were at least 33,000 abandoned hardrock mine sites in the 12 western states and Alaska that had degraded the environment by contaminating surface water and groundwater or leaving arsenic-contaminated tailings piles.

State Sites

A broad spectrum of sites is managed by states, local jurisdictions, and private parties, and these sites are thus not part of the CERCLA, RCRA, or UST programs. They can vary in size and complexity, ranging from sites similar to those at facilities listed on the NPL to small sites with low levels of contamination.

States typically define Brownfields sites as industrial or commercial facilities that are abandoned or underutilized due to environmental contamination or fear of contamination. EPA (2004) postulated that only 10 to 15 percent of the estimated one-half to one million Brownfields sites have been identified.

As of 2000, 23,000 state sites had been identified as needing further attention that had not yet been targeted for remediation (EPA, 2004). The same study estimated that 127,000 additional sites would be identified by 2030.

Dry Cleaner Sites

Active and particularly former dry cleaner sites present a unique problem in hazardous waste management because of their ubiquitous nature in urban settings, the carcinogenic contaminants used in the dry cleaning process (primarily the chlorinated solvent PCE, although other solvents have been used), and the potential for the contamination to reach receptors via the drinking water and indoor air (vapor intrusion) exposure pathways. Depending on the size and extent of contamination, dry cleaner sites may be remediated under one or more state or federal programs such as RCRA, CERCLA, or the state mandated or voluntary programs discussed previously, and thus total estimates of dry cleaner sites are not listed separately in Table 2-6.

In 2004, there were an estimated 30,000 commercial, 325 industrial, and 100 coin-operated active dry cleaners in the United States (EPA, 2004). Despite their smaller numbers, industrial dry cleaners produce the majority of the estimated gallons of hazardous waste from these facilities (EPA, 2004). As of 2010, the number of dry cleaners has grown, with an estimated 36,000 active dry cleaner facilities in the United States—of which about 75 percent (27,000 dry cleaners) have soil and groundwater contamination (SCRD, 2010b). In addition to active sites, dry cleaners that have moved or gone out of business—i.e., inactive sites—also have the potential for contamination. Unfortunately, significant uncertainty surrounds estimates of the number of inactive dry cleaner sites and the extent of contamination at these sites. Complicating factors include the facts that (1) older dry cleaners used solvents less efficiently than newer dry cleaners, increasing the amount of potential contamination, and (2) dry cleaners that moved or were in business for long periods tended to employ different cleaning methods over their lifetimes. EPA (2004) documented at least 9,000 inactive dry cleaner sites, although this number does not include data on dry cleaners that closed prior to 1960. There are no data on how many of these documented inactive dry cleaner sites may have been remediated over the years. EPA estimated that there could be as many as 90,000 inactive dry cleaner sites in the United States.

Department of Defense

The Installation Restoration Program reports that it has spent approximately $31 billion through FY 2010, and estimates for “cost to complete” exceed $12 billion.

Implementation costs for the CERCLA program are difficult to obtain because most remedies are implemented by private, nongovernmental PRPs and generally there is no requirement for these PRPs to report actual implementation costs.

EPA (2004) estimated that the cost for addressing the 456 facilities that have not begun remedial action is $16-$23 billion.

A more recent report from the GAO (2009) suggests that individual site remediation costs have increased over time (in constant dollars) because a higher percentage of the remaining NPL facilities are larger and more complex (i.e., “megasites”) than those addressed in the past. Additionally, GAO (2009) found that the percentage of NPL facilities without responsible parties to fund cleanups may be increasing. When no PRP can be identified, the cost for Superfund remediation is shared by the states and the Superfund Trust Fund. The Superfund Trust Fund has enjoyed a relatively stable budget—e.g., $1.25 billion, $1.27 billion, and $1.27 billion for FY 2009, 2010, and 2011, respectively—although recent budget proposals seek to reduce these levels. States contribute as much as 50 percent of the construction and operation costs for certain CERCLA actions in their state. After ten years of remedial actions at such NPL facilities, states become fully responsible for continuing long-term remedial actions.

In 2004, EPA estimated that remediation of the remaining RCRA sites will cost between $31 billion and $58 billion, or an average of $11.4 million per facility.

Underground Storage Tank Program

There is limited information available to determine costs already incurred in the UST program. EPA (2004) estimated that the cost to close all leaking UST (LUST) sites could reach $12-$19 billion, or an average of $125,000 to remediate each release site (this includes site investigations, feasibility studies, and treatment/disposal of soil and groundwater). Based on this estimate of $125,000 per site, the Committee calculated that remediating the 87,983 backlogged releases would require $11 billion. The presence of the recalcitrant former fuel additive methyl tert-butyl ether (MTBE) and its daughter product and co-additive tert-butyl alcohol could increase the cost per site. Most UST cleanup costs are paid by property owners, state and local governments, and special trust funds based on dedicated taxes, such as fuel taxes.
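The Committee’s $11 billion figure falls straight out of the per-site average; a minimal sketch (both inputs come from the paragraph above):

```python
# EPA's estimated average cost to remediate one leaking-UST release site,
# and the backlog of releases not yet cleaned up (see above).
cost_per_site = 125_000  # dollars per release site
backlog_sites = 87_983

total_cost = cost_per_site * backlog_sites
print(f"${total_cost / 1e9:.1f} billion")  # ~$11.0 billion
```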

Department of Energy

The Department’s FY 2011 report to Congress shows that DOE’s anticipated cost to complete remediation of soil and groundwater contamination ranges from $17.3 to $20.9 billion. The program is dominated by a small number of mega-facilities, including Hanford (WA), Idaho National Labs, Savannah River (SC), Los Alamos National Labs (NM), and the Nevada Test Site. Given that the cost to complete soil and groundwater remediation at these five facilities alone ranges from $16.4 to $19.9 billion (DOE, 2011), the Committee believes that the DOE’s anticipated cost-to-complete figure is likely an underestimate of the Agency’s financial burden; the number does not include newly discovered releases or the cost of long-term management at all sites where waste remains in the subsurface. Data on long-term stewardship costs, including the expense of operating and maintaining engineering controls, enforcing institutional controls, and monitoring, are not consolidated but are likely to be substantial and ongoing.

Stewardship costs for just the five facilities managed by the National Nuclear Security Administration (Lawrence Livermore National Laboratory, CA; Livermore’s Site 300; Pantex, TX; Sandia National Laboratories, NM; and the Kansas City Plant, MO) total about $45 million per year (DOE, 2012c).

Other Federal Sites

EPA (2004) reports that there is a $15-$22 billion estimated cost to address at least 3,000 contaminated areas on 700 civilian federal facilities, based on estimates from various reports from DOI, USDA, and NASA.

States

EPA (2004) estimated that states and private parties together have spent about $1 billion per year on remediation, addressing about 5,000 sites annually under mandatory and voluntary state programs. If remediation were continued at this rate, 150,000 sites would be completed over 30 years, at a cost of approximately $30 billion (or about $200,000 per site).
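The state-program figures are internally consistent; a short sketch of the rate arithmetic (all inputs from EPA (2004) as summarized above, which is also why the implied per-site cost is about $200,000):

```python
# State and private remediation pace, per EPA (2004) as summarized above.
annual_spend = 1_000_000_000  # about $1 billion per year
sites_per_year = 5_000
years = 30

sites_completed = sites_per_year * years  # 150,000 sites
total_spend = annual_spend * years        # $30 billion
per_site = total_spend / sites_completed  # $200,000 per site
print(f"{sites_completed:,} sites, ${total_spend / 1e9:.0f} billion, "
      f"${per_site:,.0f} per site")
```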

IMPACTS TO DRINKING WATER SUPPLIES

The Committee sought information on the number of hazardous waste sites that impact a drinking water aquifer—that is, that pose a substantial near-term risk to public water supply systems that use groundwater as a source. Unfortunately, program-specific information on water supply impacts was generally not available. Therefore, the Committee also sought other evidence related to the effects of hazardous waste disposal on the nation’s drinking water aquifers.

Despite the existence of several NPL and DoD facilities that are known sources of contamination to public or domestic wells (e.g., the San Fernando and San Gabriel basins in Los Angeles County), there is little aggregated information about the number of CERCLA, RCRA, DoD, DOE, UST, or other sites that directly impact drinking water supply systems. None of the programs reviewed in this chapter specifically compiles information on the number of sites currently adversely affecting a drinking water aquifer. However, the Committee was able to obtain information relevant to groundwater impacts from some programs, e.g., the DoD. The Army informed the Committee that public water supplies are threatened at 18 Army installations.

Also, private drinking water wells are known to be affected at 23 installations. A preliminary assessment in 1997 showed that 29 Army installations may possibly overlie one or more sole source aquifers. Some of the best known are Camp Lejeune Marine Corps Base (NC), Otis Air National Guard Base (MA), and the Bethpage Naval Weapons Industrial Reserve Plant (NY).

CERCLA. Each individual remedial investigation/feasibility study (RI/FS) and Record of Decision (ROD) should state whether a drinking water aquifer is affected, although this information has not been compiled. Canter and Sabatini (1994) reviewed the RODs for 450 facilities on the NPL. Their investigation revealed that 49 of the RODs (11 percent) indicated that contamination of public water supply systems had occurred. “A significant number” of RODs also noted potential threats to public supply wells. Additionally, the authors note that undeveloped aquifers have also been contaminated, which prevents or limits the unrestricted use (i.e., without treatment) of these resources as a future water supply.

The EPA also compiles information about remedies implemented within Superfund. EPA (2007) reported that out of 1,072 facilities that have a groundwater remedy, 106 specifically have a water supply remedy, by which we inferred direct treatment of the water to allow potable use or switching to an alternative water supply. This suggests that 10 percent of NPL facilities adversely affect or significantly threaten drinking water supply systems.

RCRA. Of the 1,968 highest priority RCRA Corrective Action facilities, EPA (2008) reported that there is “unacceptable migration of contaminated groundwater” at 77 facilities. Also, 17,042 drinking water aquifers have a RCRA facility within five miles, but without additional information, it is impossible to know if these facilities are actually affecting the water sources.

UST. In 2000, 35 states reported USTs as the number one threat to groundwater quality (and thus indirectly to drinking water). However, more specific information on the number of leaking USTs currently impacting a drinking water aquifer is not available.

Other Evidence That Hazardous Waste Sites Affect Water Supplies

The U.S. Geological Survey (USGS) has compiled large data sets over the past 20 years regarding the prevalence of VOCs in waters derived from domestic (private) and public wells. VOCs include solvents, trihalomethanes (some of which are solvents [e.g., chloroform], but may also arise from chlorination of drinking water), refrigerants, organic synthesis compounds (e.g., vinyl chloride), gasoline hydrocarbons, fumigants, and gasoline oxygenates. Because many (but not all) of these compounds may arise from hazardous waste sites, the USGS studies provide further insight into the extent to which anthropogenic activities contaminate groundwater supplies.

Zogorski et al. (2006) summarized the presence of VOCs in groundwater, private domestic wells, and public supply wells from sampling sites throughout the United States. Using a threshold level of 0.2 µg/L—much lower than current EPA drinking water standards for individual VOCs (see Table 3-1)—14 percent of domestic wells and 26 percent of public wells had one or more VOCs present. The detection frequencies of individual VOCs in domestic wells were two to ten times higher when a threshold of 0.02 µg/L was used (see Figures 2-2 and 2-3). In public supply wells, PCE was detected above the 0.2 µg/L threshold in 5.3 percent of the samples and TCE in 4.3 percent of the samples. The total percentage of public supply wells with either PCE or TCE (or both) above the 0.2 µg/L threshold is 7.3 percent.
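The combined 7.3 percent is lower than the sum of the individual PCE and TCE rates because some wells contain both; inclusion-exclusion recovers the overlap (a sketch; the roughly 2.3 percent overlap is inferred from the three figures above, not stated in the source):

```python
# Detection rates above the 0.2 µg/L threshold in public supply wells,
# from Zogorski et al. (2006) as summarized above.
pce = 0.053
tce = 0.043
either = 0.073  # wells with PCE or TCE (or both)

# Inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B)
both = pce + tce - either
print(f"Wells with both PCE and TCE: {both:.1%}")  # ~2.3%
```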

FIGURE 2-2 Detection frequencies in domestic well samples for the 15 most frequently detected VOCs at assessment levels of 0.2 and 0.02 µg/L (chloroform, MTBE, PCE, 1,1,1-TCA, CFC-12, toluene, chloromethane, TCE, DBCP, methylene chloride, CFC-11, bromodichloromethane, 1,2-dichloropropane, dibromochloromethane, and 1,2,3-trichloropropane). SOURCE: Zogorski et al. (2006), illustration provided by the USGS National Water Quality Assessment program.

FIGURE 2-3 The 15 most frequently detected VOCs in public supply wells (chloroform, MTBE, PCE, bromoform, dibromochloromethane, TCE, bromodichloromethane, 1,1,1-TCA, 1,1-DCA, CFC-12, cis-1,2-DCE, 1,1-DCE, CFC-11, trans-1,2-DCE, and toluene). SOURCE: Zogorski et al. (2006), illustration provided by the USGS National Water Quality Assessment program.

Further analysis of domestic wells by DeSimone et al. (2009) showed that organic contaminants were detected in 60 percent of 2,100 sampled wells. Wells were sampled in 48 states in parts of 30 regionally extensive aquifers used for water supply. Aquifers were randomly selected for sampling and there was no prior knowledge of contamination.

Toccalino and Hopple (2010) and Toccalino et al. (2010) focused on 932 public supply wells across the United States. The public wells sampled in this study represent less than 1 percent of all groundwater that feeds the nation’s public water systems. The samples, however, were widely distributed nationally and were randomly selected to represent typical aquifer conditions. Overall, 60 percent of public wells contained one or more VOCs at a concentration of ≥ 0.02 µg/L, and 35 percent of public wells contained one or more VOCs at a concentration of ≥ 0.2 µg/L.

Overall detection frequencies for individual compounds included 23 percent for PCE, 15 percent for TCE, 14 percent for MTBE, and 12 percent for 1,1,1-TCA (see Figure 2-5). PCE and TCE exceeded the MCL in approximately 1 percent of the public wells sampled.

FIGURE 2-4 VOCs (in black) and pesticides (in white) detected in more than 1 percent of domestic wells at a level of 0.02 µg/L.

FIGURE 2-5 VOCs and pesticides with detection frequencies of 1 percent or greater at an assessment level of 0.02 µg/L in public wells, in samples collected from 1993–2007. SOURCE: Toccalino and Hopple (2010) and Toccalino et al. (2010).

Overall, the USGS studies show that there is widespread, very low level contamination of private and public wells by VOCs, with a reasonable estimate being 60 to 65% of public wells having detectable VOCs. According to the data sets of Toccalino and Hopple (2010) and Toccalino et al. (2010), approximately 1% of sampled public wells have levels of VOCs above MCLs. Thus, water from these wells requires additional treatment to remove the contaminants before it is provided as drinking water to the public.

EPA (2009b) compiled over 309,000 groundwater measurements of PCE and TCE from raw water samples at over 46,000 groundwater-derived public water supplies in 45 states. Compared to the USGS data, this report gives a lower percentage of water supplies being contaminated: TCE concentration exceeded its MCL in 0.34 percent of the raw water samples from groundwater-derived drinking water supply systems.

There are other potential sources of VOCs in groundwater beyond hazardous waste sites. For example, chloroform is a solvent but also a disinfection byproduct, so groundwater sources impacted by chlorinated water (e.g., via aquifer storage/recharge, leaking sewer pipes) would be expected to show chloroform detections. Another correlation seen in the USGS data is that domestic and public wells in urban areas are more likely to have VOC detections than those in rural areas. This finding is not unexpected given the much higher level of industrial activity in urban areas that can result in releases of these chemicals to the subsurface.

Another way to estimate the number of public water supplies affected by contaminated groundwater is to consider the number of water supply systems that specifically seek to remove organic contaminants. The EPA Community Water System Survey (EPA, 2002) reports that 2.3 to 2.6 percent of systems relying solely on groundwater have “organic contaminant removal” as a treatment goal. For systems that use both surface water and groundwater, 10.3 to 10.5 percent have this as a treatment goal.

In summary, it appears that the following conclusions about the contamination of private and public groundwater systems can be drawn: (1) there is VOC contamination of many private and public wells (upwards of 65%) in the U.S., but at levels well below MCLs; the origin of this contamination is uncertain and the proportion caused by releases from hazardous waste sites is unknown; (2) approximately one in 10 NPL facilities is impacting or significantly threatening a drinking water supply system relying on groundwater, requiring wellhead treatment or the use of alternative water sources; and (3) public wells are more susceptible to contamination than private wells, due to their higher likelihood of being in urban areas and their higher pumping rates and larger hydraulic capture zones.

All of these issues suggest that there can be no generalizations about the condition of sites referred to as “closed,” particularly assumptions that they are “clean,” meaning available for unlimited use and unrestricted exposure. Indeed, the experience of the Committee in researching “closed sites” suggests that many of them contain contaminant levels above those allowing for unlimited use and unrestricted exposure, even in those situations where there is “no further action” required.

Furthermore, it is clear that states are not tracking their caseload at the level of detail needed to ensure that risks are being controlled subsequent to “site closure.” Thus, reports of cleanup success should be viewed with caution.

Remedial Objectives, Remedy Selection, and Site Closure

The issue of setting remedial objectives touches upon every aspect and phase of soil and groundwater cleanup, but perhaps none as important as defining the conditions for “site closure.” Whether a site can be “closed” depends largely on whether remediation has met its stated objectives, usually stated as “remedial action objectives.” Such determinations can be very difficult to make when objectives are stated in such ill-defined terms as removal of mass “to the maximum extent practicable.” More importantly, there are debates at hazardous waste sites across the country about whether or not to alter long-standing cleanup objectives when they are unobtainable in a reasonable time frame. For example, the state of California is closing a large number of petroleum underground storage tank sites that are deemed to present a low threat to the public, despite the affected groundwater not meeting cleanup goals. In other words, some residual contamination remains in the subsurface, but this residual contamination is deemed not to pose unacceptable future risks to human health and the environment. Other states have pursued similar pragmatic approaches to low-risk sites where the residual contaminants are known to biodegrade over time, as is the case for most petroleum-based chemicals of concern (e.g., benzene, naphthalene). Many of these efforts appear to be in response to the slow pace of cleanup of contaminated groundwater; the inability of many technologies to meet drinking water-based cleanup goals in a reasonable period of time, particularly at sites with dense nonaqueous phase liquids (DNAPLs) and complicated hydrogeology like fractured rock; and the limited resources available to fund site remediation.

There is considerable variability in how EPA and the states consider groundwater as a potential source of drinking water. EPA has defined groundwater as not capable of being used as a source of drinking water if (1) the available quantity is too low (e.g., less than 150 gallons per day can be extracted), (2) the groundwater quality is unacceptable (e.g., greater than 10,000 ppm total dissolved solids, TDS), (3) background levels of metals or radioactivity are too high, or (4) the groundwater is already contaminated by manmade chemicals (EPA, 1986, cited in EPA, 2009a). California, on the other hand, establishes the TDS criteria at less than 3,000 ppm to define a “potential” source of drinking water. And in Florida, cleanup target levels for groundwater of low yield and/or poor quality can be ten times higher than the drinking water standard (see Florida Administrative Code Chapter 62-520 Ground Water Classes, Standards, and Exemptions). Some states designate all groundwater as a current or future source of drinking water (GAO, 2011).
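
The classification rules above lend themselves to a quick illustration. Here is a toy Python sketch encoding the four EPA criteria; the function and parameter names are mine, not from any EPA tool, and the TDS threshold is adjustable to show how California’s stricter cutoff changes the answer.

```python
# Toy encoding of the EPA (1986) criteria above for when groundwater does NOT
# count as a potential drinking water source. All names are illustrative.

def is_potential_drinking_water_source(yield_gpd, tds_ppm,
                                       high_background_metals_or_radioactivity,
                                       already_contaminated,
                                       tds_limit_ppm=10_000):
    """True if the aquifer still counts as a potential drinking water source."""
    if yield_gpd < 150:                          # (1) available quantity too low
        return False
    if tds_ppm > tds_limit_ppm:                  # (2) quality unacceptable
        return False
    if high_background_metals_or_radioactivity:  # (3) natural background too high
        return False
    if already_contaminated:                     # (4) manmade contamination present
        return False
    return True

# A 5,000 ppm TDS aquifer passes the federal 10,000 ppm test but fails
# California's stricter 3,000 ppm "potential source" test:
print(is_potential_drinking_water_source(500, 5_000, False, False))                       # True
print(is_potential_drinking_water_source(500, 5_000, False, False, tds_limit_ppm=3_000))  # False
```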

The Limits of Aquifer Restoration

As shown in many previous reports (EPA, 2003; NRC, 1994, 1997, 2003, 2005), at complex groundwater contamination sites (particularly those with low solubility or strongly adsorbed contaminants), conventional and alternative remediation technologies have not been capable of reducing contaminant concentrations (particularly in the source area) to drinking water standards quickly.

 

Posted in Water

Geothermal power is seasonal: more power in winter than summer

Eventually fossil fuels will decline to the point that renewables will have to replace them, and the electric grid will have to become 100% renewable.  One of the factors that will make this difficult is that 50 to 90% of generation will need to come from wind and solar, and both are seasonal. So is geothermal, though the amount of power it does or could provide is so trivial that it doesn’t matter, except perhaps for local areas near geothermal hotspots that also have transmission lines.

What follows comes from: NREL. January 2016. Doubling Geothermal Generation Capacity by 2020: A Strategic Analysis.  National Renewable Energy Laboratory.  Technical Report NREL/TP-6A20-64925

Many geothermal power plants generate less power in the summer.  There are several reasons, but the main one is the smaller temperature differential between the brine and the ambient air.  [ Evaporative cooling can help, though I wonder how much extra power it takes. ]
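
To see why the differential matters, here is a back-of-envelope Carnot calculation in Python. The brine and ambient temperatures are my own illustrative assumptions, not NREL figures, and real binary plants achieve only a fraction of the Carnot limit.

```python
# Back-of-envelope Carnot limit for a binary geothermal plant, showing why a
# smaller brine-to-air temperature differential cuts summer output.

def carnot_efficiency(t_hot_c, t_cold_c):
    """Ideal efficiency between a hot source and cold sink (temps in deg C)."""
    t_hot_k, t_cold_k = t_hot_c + 273.15, t_cold_c + 273.15
    return 1 - t_cold_k / t_hot_k

brine_c = 150       # assumed binary-plant brine temperature
winter_air_c = 5    # assumed ambient (cold sink) in winter
summer_air_c = 35   # assumed ambient in summer

print(f"winter ideal efficiency: {carnot_efficiency(brine_c, winter_air_c):.1%}")  # ~34%
print(f"summer ideal efficiency: {carnot_efficiency(brine_c, summer_air_c):.1%}")  # ~27%
```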

Figure A1. Comparison of Nameplate to Net Summer Capacity, 1990-2013. The figure shows the discrepancy between installed nameplate capacity (design output of installed projects) and net summer capacity (net output of geothermal power available for sale during the summer). As of 2013, EIA survey data from geothermal generators shows nameplate capacity of 3,765 MW versus 2,607 MW of reported net summer capacity. Sources: Energy Information Administration (2015) Nameplate Capacity: Form 860 Generator Data, State Electricity Profiles (July 2015). Summer Capacity: Annual Energy Review (2015).

As you can see in figure 2 below, most geothermal power is found in the west, especially California.

Figure 2. Map of United States geothermal regions.

Posted in Geothermal

How is California’s AB2514 experiment with utility scale battery storage coming along?

[ This is an excellent article by Tod Kiefer about tests of sodium-sulfur batteries, the only kind of battery for which there is enough material on earth to build storage at the scale required.

Battery electric storage is meant to “replace nimble, fast-ramping natural gas plants that are currently required to buffer and back up the intermittent power produced by California’s fleet of wind and solar farms”.  He doesn’t mention it, but natural gas is finite, so long-term a substitute must be found if the grid is to stay up.  At this point, batteries are still far from being cost effective.  And “despite all the hype and giga-promises, there has yet been no breakthrough in electricity storage technology that delivers all the requisite features of high energy density, high power, long life, high round-trip efficiency, safe handling, and competitive cost.”  Kiefer points out many other major technical challenges, though neglects to mention that lithium is also finite and therefore not a good choice for utility scale energy storage.  He concludes with “batteries are still a long way from being a substitute for fossil fuel power plants or any other actual power generators because of physical and economic limits of current technology.”

This article talks only about very short periods of balancing the grid. But given the seasonality of wind and solar, at least 6 weeks of energy storage are needed, mainly from batteries, since there are few places left to build dams for pumped hydro or salt caverns for Compressed Air Energy Storage.  Here is an excerpt from my book “When Trucks Stop Running,” which uses data from the Department of Energy (DOE/EPRI 2013) “Electricity Storage Handbook in collaboration with NRECA” to calculate the cost, size, and weight of sodium-sulfur (NaS) batteries capable of storing one day of U.S. electricity generation (11.12 TWh).  The cost would be $40.77 trillion, the batteries would cover 923 square miles and weigh a husky 450 million tons; the arithmetic is reproduced in the short script after the list below.

Sodium Sulfur (NaS) Battery Cost Calculation:

  • NaS Battery 100 MW. Total Plant Cost (TPC) $316,796,550. Energy
    Capacity @ rated depth-of-discharge 86.4 MWh. Size: 200,000 square feet.
  • Weight: 7,000,000 lbs. Battery replacement every 15 years (DOE/EPRI p. 245).
  • 128,700 NaS batteries needed for 1 day of storage = 11.12 TWh/0.0000864 TWh.
  • $40.77 trillion dollars to replace the battery every 15 years = 128,700 NaS * $316,796,550 TPC.
  • 923 square miles = 200,000 square feet * 128,700 NaS batteries.
  • 450 million short tons = 7,000,000 lbs * 128,700 batteries/2000 lbs.
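
Here is that arithmetic as a short Python script, using only the DOE/EPRI unit figures from the list above:

```python
# The NaS arithmetic above, reproduced from the DOE/EPRI 2013 unit figures.

US_DAILY_GENERATION_TWH = 11.12     # one day of U.S. electricity generation
UNIT_CAPACITY_TWH = 86.4e-6         # 86.4 MWh usable per 100 MW NaS plant
UNIT_COST_USD = 316_796_550         # total plant cost (TPC) per unit
UNIT_AREA_SQFT = 200_000
UNIT_WEIGHT_LBS = 7_000_000
SQFT_PER_SQMILE = 27_878_400
LBS_PER_SHORT_TON = 2_000

units = US_DAILY_GENERATION_TWH / UNIT_CAPACITY_TWH
print(f"{units:,.0f} batteries")                                        # ~128,700
print(f"${units * UNIT_COST_USD / 1e12:.2f} trillion")                  # ~$40.77 trillion
print(f"{units * UNIT_AREA_SQFT / SQFT_PER_SQMILE:,.0f} square miles")  # ~923
print(f"{units * UNIT_WEIGHT_LBS / LBS_PER_SHORT_TON / 1e6:,.0f} million short tons")  # ~450
```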

Using similar logic and data from DOE/EPRI, Li-ion batteries would cost $11.9 trillion, take up 345 square miles, and weigh 74 million tons. Advanced lead-acid batteries would cost $8.3 trillion, take up 217.5 square miles, and weigh 15.8 million tons.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Tod “Ike” Kiefer. November 21, 2016. CAISO Battery Storage Trial.  The Grid Optimization Blog.

The Hope.   In the wake of the massive natural gas leak from Sempra Energy’s Aliso Canyon storage facility in 2015, the California State Assembly and California Public Utility Commission directed the state’s electric utilities to build and deploy electricity storage at an unprecedented scale and pace [in AB2514].  The current requirement is 1,325 MW of battery storage by 2020, with emergency authority to fast-track projects that can be online by 31 December 2016.  This electricity storage capacity is intended to replace nimble, fast-ramping natural gas plants that are currently required to buffer and back up the intermittent power produced by California’s fleet of wind and solar farms.  These natural gas plants are short of fuel reserves for the winter due to the leak, and California legislators also want to move away from fossil fuel plants long-term to reduce CO2 emissions.

The Trial. This week, Pacific Gas and Electric released a report of an 18-month trial of installed utility-scale battery storage on the grid.  The trial encompassed 6 MW of storage split between two sites, both integrated to function as dispatched by the California Independent System Operator (CAISO) that manages operations of the state’s grid and wholesale power market.  The specific storage hardware examined was sodium-sulfur batteries, which are at the high-end of the technology maturity scale and the low end of the cost spread for storage options of similar performance, having been used at utility scale in several nations for 25 years.

The Report. The report contains very valuable details of the various grid services that batteries can provide, and the market prices of those services.  It also illuminates the unique complexities of managing non-generating resources that don’t have a pre-purchased fuel supply and thus a reasonably stable volumetric O&M cost.  In a nutshell, operating electricity storage is to simultaneously play multiple markets: the day-ahead and real-time markets for various grid services that batteries happen to be postured to perform, and the real-time market for the cost of wholesale electricity from the grid from which the batteries must repeatedly pull and push electricity as they perform their grid services.  It is not unlike playing poker and blackjack at the same time and only getting a net payoff when both hands are winners.

The Spin. Various web sites and advocacy groups are trying to spin this PG&E report as purely positive and a sign of battery storage coming of age.  However, a careful reading and some simple mathematical analysis reveal this report is actually a cautionary tale.

My Executive Summary:

1. Batteries are still far from cost-effective. Certain grid services can generate enough revenue to cover operating costs, but none can come close to recouping the capital investment, even within the trial’s very optimistic assumption of 20-year battery life.  Therefore deploying battery storage today has to be for reasons other than intrinsic economics.  A 2 MW/14 MWh sodium-sulfur battery storage array (PG&E’s Vaca site) cost approximately $11 million ($5,500/kW, $783/kWh) to build.  The report included two external studies that found that the cost of battery storage must come down to about $800/kW to achieve economic break-even.  However, that number has two false assumptions baked in: a 20-year service life and only 15 minutes of storage capacity.  To aggressively dispatch the batteries as was done in the trial to maximize revenue requires at least 30 minutes of storage capacity and would consume the 4,500-cycle service life within 10 years.  With these adjustments, the real break-even cost is approximately $200/kW.  Indeed, $197/kW is the estimate PG&E itself empirically found to be the break-even cost for a typical month in 2015.  This is a factor of 27 cheaper than the Vaca system cost of $5,500/kW.
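
A quick Python check of the arithmetic in point 1, using only the numbers quoted above:

```python
# Cost arithmetic for the 2 MW / 14 MWh NaS array at PG&E's Vaca site,
# ~$11 million installed (figures from the text above).

capex_usd = 11e6
power_kw = 2_000
energy_kwh = 14_000

print(f"${capex_usd / power_kw:,.0f}/kW")     # $5,500/kW
print(f"${capex_usd / energy_kwh:,.0f}/kWh")  # ~$786/kWh (the report rounds to $783)

# PG&E's empirical break-even estimate vs. the as-built cost:
break_even_usd_per_kw = 197
print(f"{capex_usd / power_kw / break_even_usd_per_kw:.0f}x too expensive")  # ~28x (the report calls it 27)
```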

2. Charging and discharging batteries for energy arbitrage (charging when electricity is cheap and discharging when it is expensive) is what first comes to mind as an obvious use of electricity storage.   This time-shifting of generation to match consumption peaks involves techniques such as peak shaving and load leveling; these are easy to envision, model, and optimize when looking at yesterday’s load and price curves, but very difficult to do in real time when the load and price are varying stochastically and neither the height nor the timing of the actual load peak can be known or recognized until well after the fact.  In practice, energy arbitrage only generated enough revenue to barely cover operating expenses. The margin achieved in power-cost arbitrage was consumed by the 25% of power lost between cycles due to charging and discharging inefficiencies and by the stream of energy necessary to keep the batteries at operating temperature.  When marketed to CAISO for all possible services including energy arbitrage, the $11 million 2 MW array netted less than $9,000 per month.
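
To make point 2 concrete, here is a minimal sketch of one arbitrage cycle. The prices and parasitic draw are illustrative assumptions, not figures from the PG&E report:

```python
# Why 25% round-trip losses eat the arbitrage margin. Prices and parasitic
# load below are illustrative, not from the PG&E report.

def arbitrage_margin(buy_usd_mwh, sell_usd_mwh, mwh_bought,
                     round_trip_eff=0.75, parasitic_mwh=0.0):
    """Net revenue from one buy-low/sell-high cycle."""
    mwh_sold = mwh_bought * round_trip_eff - parasitic_mwh  # losses + heater load
    return mwh_sold * sell_usd_mwh - mwh_bought * buy_usd_mwh

# Buy 14 MWh at $25/MWh off-peak, sell at $40/MWh on-peak:
print(arbitrage_margin(25, 40, 14))                     # $70 with no parasitic load
print(arbitrage_margin(25, 40, 14, parasitic_mwh=1.0))  # $30 once heating is counted
```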

3. The most lucrative use of batteries on the grid, as evidenced by this trial and the almost universal employment of utility-scale battery storage around the world, is what is called frequency regulation.  In this mode, the batteries are maintained close to 50% charge levels and stand ready to charge or discharge rapidly to damp out momentary dips and spikes in grid frequency that mark mismatches between generation and load.  CAISO monitors grid frequency continuously and sends out automatic generation control (AGC) signals every 4 seconds that tell generators to ramp up or ramp down to chase increasing or decreasing load.  Those resources that can ramp the fastest and most precisely can earn the most money for this service.  Batteries are ideal for this role as they can follow the AGC signal almost instantaneously with their full capacity.   However, the frequent charging and discharging is hard on the cells and causes them to age more quickly. This high stress is also unforgiving of any mechanical failures or design flaws, and batteries used in this role have the most frequent incidence of fires. The relatively low capacity of batteries also limits how much regulation they can do in a particular direction, as they must stay within their charge and discharge limits.  In this case, the guessing game is to predict whether more up-regulation or down-regulation is expected in the next operating period, and to enter that window with the appropriate state of charge (SOC) to allow maximum headroom.  Since SOC must be managed by real-time power purchases and sales, energy arbitrage can work for or against revenue when operating in frequency regulation mode.  When marketed exclusively for frequency regulation, the 2 MW storage array netted less than $35,000 per month; much better than other strategies, but still far short of achieving payback for the expensive capital asset.
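
Here is a toy sketch of the SOC-limited dispatch described in point 3, assuming 2 MW / 14 MWh Vaca-like ratings; CAISO’s actual AGC protocol is far more involved:

```python
# Toy frequency-regulation dispatch: follow a 4-second AGC signal while
# staying inside charge limits. Purely illustrative.

def follow_agc(soc_mwh, agc_mw, capacity_mwh=14.0, power_limit_mw=2.0,
               dt_hours=4 / 3600):
    """One 4-second step: positive agc_mw = discharge (up-regulation)."""
    mw = max(-power_limit_mw, min(power_limit_mw, agc_mw))  # clip to rated power
    # Clip again so SOC stays within 0..capacity (the "headroom" problem):
    mw = min(mw, soc_mwh / dt_hours)                        # can't discharge below empty
    mw = max(mw, -(capacity_mwh - soc_mwh) / dt_hours)      # can't charge past full
    return soc_mwh - mw * dt_hours, mw

soc = 7.0                             # start near 50% charge, as described above
for signal in [1.5, -2.0, 0.8, 2.5]:  # MW requests, one every 4 seconds
    soc, delivered = follow_agc(soc, signal)
    print(f"delivered {delivered:+.1f} MW, SOC now {soc:.3f} MWh")
```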

4. Actual revenue during the trial was less than predicted by CAISO-approved models for storage.  This was due to two main factors: falsely idealized load and price curves that proved less predictable in practice, and over-estimated market price for the various grid services.

5. The trial also revealed how different batteries are from actual generation resources.  To optimally take advantage of day-ahead and real-time market pricing, dispatch (operational control) has to be managed remotely by CAISO, as it does for generators.  However, it proved essential that the dispatcher know the battery SOC at all times, as it affected what types of services the batteries could immediately perform.   Batteries morph in their capabilities and value for specific grid services depending upon SOC, and the dispatcher must be kept abreast of that shifting menu of the moment.  A critical question is who decides when and how much to charge the batteries – the owner/operator (PG&E) or the customer (CAISO)?  A bad decision can prevent the asset from being optimally dispatched for the most lucrative service, or might prevent it from being utilized at all.  Or the energy arbitrage costs of charging and discharging to manage SOC may consume the revenue from the actual services.  Maintenance of SOC and precise dispatch is also complicated by parasitic load, a periodic maintenance task called “string balancing,” and charging rates that differ depending upon SOC.  Optimal use of storage is dependent upon developing finely-tuned algorithms tailored to a specific battery technology and the rules and prices of a particular wholesale market and independent system operator (ISO), and also upon developing the necessary supervisory control and data acquisition (SCADA) linkages to allow robust remote monitoring and dispatch.  These factors exceed in complexity their counterparts for generation resources.

6. Round-trip efficiency for the two systems tested averaged 75%, matching a thumb rule that has been true for decades.

7. Parasitic load for sodium-sulfur batteries averaged 60 kW/MW.  These particular batteries have to be heated to 300°C to operate, and thus consume more electricity for maintenance when they are idle and less when they are generating heat from activity.  Other battery types have to be cooled when they are active and thus have more parasitic load when in use.  Since this parasitic load comes off the same grid the batteries are serving, it changes the batteries’ raw input/output to a net input/output that makes their performance less precise and complicates dispatch.
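
A tiny sketch of the raw-versus-net distinction in point 7, assuming the heater draw applies fully when the array is idle (the real profile varies with activity):

```python
# Net vs. raw output with the 60 kW/MW parasitic load quoted above.

PARASITIC_KW_PER_MW = 60

def net_output_kw(raw_output_kw, rated_mw):
    """Subtract the always-on heating load from whatever the battery delivers."""
    return raw_output_kw - PARASITIC_KW_PER_MW * rated_mw

print(net_output_kw(0, 2))     # idle 2 MW array: -120 kW, a steady drain on the grid
print(net_output_kw(2000, 2))  # full discharge: only 1,880 kW reaches the grid
```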

8. A surprising finding was that wholesale electricity price varied so much by geographic location on the California grid that often it was not economical for these two battery arrays to store surplus power being generated by wind or solar farms.  California now has enough “renewable” energy capacity that it can produce negative locational marginal price (LMP) in the vicinity of the wind and solar farms.  However, these low prices do not necessarily propagate as far as the electricity storage sites.  This is often blamed on “grid congestion” as if to say it is a shortcoming of the pre-existing grid, but in reality this bottlenecking is a predictable consequence of adding large capacities of remote, diffuse, and uncontrollably intermittent generators at the fringes of the grid far from the load centers that consume their power.  If batteries are to be used for energy arbitrage, they would be optimally co-located at the fringes with the wind or solar farms.  However, if they are to be used for frequency regulation, they are better located near the loads in cities and industrial centers.  Since the revenue stream of the latter is much more attractive than the former, it is likely that the utilities would prefer downtown rather than desert locations for assets they own.  That leaves solar and wind developers to install storage at their sites.

PG&E’s Cautionary Summary Statement to the California Assembly:

“The project gained significant real-world data on the financial performance of battery energy storage resources providing energy and ancillary services in CAISO markets that can better inform an assessment of market benefits in cost-effectiveness valuations of future battery storage procurements. Over the course of the 18 months of market participation during this project, the financial revenues from battery participation in CAISO markets were limited. If revenues from market participation are to be the key driver of evaluating the cost-effectiveness of battery storage, it is recommended to be conservative in the forecasting of those revenues. With California Assembly Bill 2514 and its requirements that utilities procure 1.3 gigawatts of energy storage, California ratepayers could expect to pay billions of dollars for the deployment and operations of these resources.”

Other Battery Technologies: While not mentioned in the trial, it is good for comparison purposes to briefly consider alternative battery technologies.  The most common lithium-ion battery storage chemistry in commercial use today, as manufactured by Panasonic and utilized by Tesla, is lithium nickel cobalt aluminum oxide (NCA).  It is good for about 500 cycles, 1/9th the life of sodium-sulfur batteries.  Alternative lithium battery chemistries with 2,000-8,000 cycles of service life are emerging and may be on the verge of becoming price competitive with sodium-sulfur.  Many of the near-term proposals being heard by the California Public Utility Commission are for lithium batteries.  It is telling that ancient lead-acid battery technology continues to be competitive enough in cost and performance to be the starter battery of virtually every automobile on the road, including every state-of-the-art Prius hybrid, and has only recently faded as a grid-storage player.  Despite all the hype and giga-promises, there has yet been no breakthrough in electricity storage technology that delivers all the requisite features of high energy density, high power, long life, high round-trip efficiency, safe handling, and competitive cost.

Conclusion

Batteries are still a long way from being a substitute for fossil fuel power plants or any other actual power generators because of physical and economic limits of current technology.

Posted in Batteries, Battery - Utility Scale, Electricity

Notes from “The Powerhouse: Inside the Invention of a Battery to Save the World” by Steve LeVine

[ My notes from 2015 “The Powerhouse: Inside the Invention of a Battery to Save the World” by Steve LeVine follow this introduction.

I read this book because I’ve done extensive research on batteries and was surprised by all the hype: it appeared to be about a real battery breakthrough.  How could I not have heard about it, and how had they done it, given all the problems with developing new batteries?   But there was no battery breakthrough.  What a huge waste of my time!

LeVine assumed that the battery would be a winner.   Yet he must have been aware of some of the issues, since he says that:

  • “After accounting for the loss of energy in combustion, a kilogram of gasoline contains 1,600 watt-hours of stored energy. State-of-the-art lithium-ion batteries, by comparison, delivered about 140.”
  • Within the periodic table “only so many of the elements that were truly attractive in a battery.”
  • “In 1859, a French physicist named Gaston Planté invented the rechargeable lead-acid battery. … In more than a century, the science hadn’t changed.”
  • In 1966, Ford Motor tried to bring back the electric car. It announced a sodium-sulfur battery that had several disadvantages. “The Ford battery did not operate at room temperature but at about 300 degrees Celsius. The internal combustion engine operates at an optimal temperature of about 90 degrees Celsius. Driving around with much hotter, explosive molten metals under your hood was risky” and not suitable for cars, only for stationary storage.
  • The same electro-chemical reactions that enabled lithium batteries also made them want to explode: the voltage would run away with itself, a cell would ignite, and before you knew it the battery was spitting out flames. But you seemed no better off if you played it safe and used other elements—you’d find that they slowly fell apart on repeated charge and discharge.
  • The public and regulators insisted battery-electric cars must be safe, so of course a battery that was chronically explosive would be rejected.  But a safe battery that could go a long “distance and [with high] acceleration tended to make the battery more dangerous.”
  • “Thackeray’s goal for NMC 2.0 was to double current performance plus cut the cost. But even that would leave batteries still about a sixth the energy density of gasoline.”
  • “The battery race would involve a series of unforeseen, terrible problems that you simply could not recognize in the tiny volumes and coin cells produced in the national labs. You needed a ton of the material and hundreds of cells, and you had to charge and recharge them again and again before the problems surfaced. Only then could you think about the solutions necessary to get the technology into a car.”
  • “Consumer electronics typically wear out and require replacement every two or three years. They lock up, go on the fritz, and generally degrade. They are fragile when jostled or dropped and are often cheaper to replace than repair. If battery manufacturers and carmakers produced such mediocrity, they could be run out of business, sued for billions and perhaps even go to prison if anything catastrophic occurred. Automobiles have to last at least a decade and start every time. Their performance had to remain roughly the same throughout.”

But then LeVine says “When a development is needed badly enough, it comes. Without some drastic change, American cities will eventually become uninhabitable. The electric automobile can stop the trend toward poisoned air. Its details are yet to be decided. But it will come. And it won’t be long.”

Huh? LeVine really believes that after what he knows about batteries above? 

According to George Blomgren, a former senior technology researcher at Eveready, “It’s been more than 200 years and we have maybe 5 different successful rechargeable batteries.”  Yet a better battery has always been just around the corner:

  • 1901: “A large number of people … are looking forward to a revolution in the generating power of storage batteries, and it is the opinion of many that the long-looked-for, light weight, high capacity battery will soon be discovered.” (Hiscox)
  • 1901: “Demand for a proper automobile storage battery is so crying that it soon must result in the appearance of the desired accumulator [battery]. Everywhere in the history of industrial progress, invention has followed close in the wake of necessity” (Electrical Review #38. May 11, 1901. McGraw-Hill)
  • 1974: “The consensus among EV proponents and major battery manufacturers is that a high-energy, high power-density battery – a true breakthrough in electrochemistry – could be accomplished in just 5 years” (Machine Design).
  • 2014 internet search “battery breakthrough” gets 7,710,000 results, including:  Secretive Company Claims Battery Breakthrough, ‘Holy Grail’ of Battery Design Achieved, Stanford breakthrough might triple battery life, A Battery That ‘Breathes’ Could Power Next-Gen Electric Vehicles, 8 Potential EV and Hybrid Battery Breakthroughs.

And there are far more issues with developing batteries than LeVine mentioned which you can read about in my post “Who Killed the Electric Car?” at http://energyskeptic.com/2016/who-killed-the-electric-car/

Since civilization ends if trucks stop running, batteries for TRUCKS is what matters.  Battery electric cars do nothing to solve the liquid fuels transportation energy crisis since diesel engines can’t burn gasoline, so the fuel saved is no big deal. The heavy-duty trucks that do the actual work of civilization (and locomotives and ships) can’t run on batteries because even if batteries were improved 10-fold they’ll still be too heavy (see electric truck posts here).

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Notes from “The Powerhouse”:

Before returning home to Beijing, Wan, China’s minister of science, had asked to visit two places—Argonne National Laboratory, a secure federal research center outside Chicago, and a plant near Detroit where General Motors was testing the Volt, the first new electric car of its type in the world. Jabbing his finger into a book again and again, Chamberlain said that Wan was no mere sightseer. He had a mission, which was to stalk Chamberlain’s team of geniuses, the scientists he managed in the Battery Department at Argonne. They had invented the breakthrough lithium-ion battery technology behind the Volt, and Wan, Chamberlain was certain, hoped to appropriate Argonne’s work. But Chamberlain was not going, and hoped that no one at the lab would explicitly mention nickel manganese cobalt, or NMC, the compound at the core of the Argonne invention contained in the Volt, during Wan Gang’s visit.

Argonne possessed formidable intellectual firepower and inventions, such as the American patent for its NMC breakthrough. It achieved three grand aims—allowing the Volt to travel 40 miles on a single charge, to accelerate rapidly, and to do both without bursting into flames.

The electric age would puncture the demand for oil and thus rattle petroleum powers such as Russia’s Vladimir Putin, Saudi Arabia’s ruling family, and the Organization of the Petroleum Exporting Countries as a whole, which would be stripped of tens of billions of dollars in income. China could put its population in electric cars, shun gasoline propulsion, and clean up its air. Generally speaking, the world might spend less on oil and worry less about climate change.

By 2030, advanced battery companies would swell into a $100 billion-a-year industry, and the electric car business would spawn several $100-billion-a-year behemoth corporations.  When you sought justification for this enthusiasm, you heard a mainstream assumption that hybrid and pure electric vehicles would make up 13 to 15% of all cars produced around the world by 2020; a decade or two later, they would reach about 50 percent.

Volta created his battery while carrying out experiments to disprove Galvani. Benjamin Franklin, a contemporary, had already coined the word to describe a rudimentary electric device he built out of glass panes, lead plates, and wires. But Franklin’s was a battery in name only, while Volta’s was a true electric storage unit. After Volta’s brainchild, scientists kept hooking up batteries to corpses to see if they could be coaxed back to life. Many wondered whether electricity could cure cancer or if it was the source of life itself. What if souls were electric impulses?

To make a battery, you start with two components called electrodes. One is negatively charged, and is called the anode. The other, positively charged electrode is called the cathode. When the battery produces electricity—when it discharges—positively charged lithium atoms, known as ions, shuttle from the negative to the positive electrode (thus giving the battery its name, lithium-ion). But to get there, the ions need a facilitator—something through which to travel—and that is a substance called electrolyte. If you can reverse the process—if you can force the ions now to shuttle back to the negative electrode—you recharge the battery. When you do that again and again, shuttling the ions back and forth between the electrodes, you have what is called a rechargeable battery. But that is a quality that only certain batteries possess.
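
For readers who think in code, here is a toy Python model of the ion shuttle just described; it captures only the bookkeeping of ions moving between electrodes, not any real electrochemistry, and all names are illustrative.

```python
# Toy model of the shuttle above: discharge moves lithium ions from the
# anode (negative) to the cathode (positive); charging reverses the trip.

class LithiumIonCell:
    def __init__(self, total_ions=100):
        self.anode_ions = total_ions   # fully charged: lithium parked in the anode
        self.cathode_ions = 0

    def discharge(self, n):
        moved = min(n, self.anode_ions)    # ions shuttle anode -> cathode
        self.anode_ions -= moved
        self.cathode_ions += moved
        return moved                       # proportional to charge delivered

    def charge(self, n):
        moved = min(n, self.cathode_ions)  # reversing the shuttle recharges the cell
        self.cathode_ions -= moved
        self.anode_ions += moved
        return moved

cell = LithiumIonCell()
cell.discharge(40)
cell.charge(40)          # a rechargeable battery is one that can do this repeatedly
print(cell.anode_ions)   # back to 100
```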

The small number of parts has both helped and hindered the efforts of scientists to improve on Volta’s creation. They had only the cathode, the anode, and the electrolyte to think about, and, to fashion them, a lot of potentially suitable elements on the entire periodic table. Yet this went both ways—there was no way to bypass those three parts and, as it soon became apparent, only so many of the elements that were truly attractive in a battery.

In 1859, a French physicist named Gaston Planté invented the rechargeable lead-acid battery. Planté’s battery used a cathode made of lead oxide and an anode of electron-heavy metallic lead. When his battery discharged electricity, the electrodes reacted with a sulfuric acid electrolyte, creating lead sulfate and producing electric current. But Planté’s structure went back to the very beginning—it was Volta’s pile, merely turned on its side, with plates stacked next to rather than atop one another. The Energizer, commercialized in 1980, was a remarkably close descendant of Planté’s invention. In more than a century, the science hadn’t changed.

In 1966, Ford Motor tried to bring back the electric car. It announced a battery that used liquid electrodes and a solid electrolyte, the opposite of Planté’s configuration. It was a new way of thinking, with electrodes—one sulfur and the other sodium—that were light and could store 15 times more energy than lead-acid in the same space. There were disadvantages, of course. The Ford battery did not operate at room temperature but at about 300 degrees Celsius. The internal combustion engine operates at an optimal temperature of about 90 degrees Celsius. Driving around with much hotter, explosive molten metals under your hood was risky. Realistically speaking, that would confine the battery’s practical use to stationary storage, such as at electric power stations. Yet at first, both Ford and the public disregarded prudence. With its promise of clean-operating electric cars, Ford captured the imagination of a 1960s population suddenly conscious of the smog engulfing its cities. Popular Science described an initial stage at which electric Fords using lead-acid batteries could travel 40 miles at a top speed of 40 miles an hour. As the new sulfur-sodium batteries came into use, cars would travel 200 miles at highway speeds, Ford claimed. You would recharge for an hour, and then drive another 200 miles.

A pair of rival reporters who were briefed along with the Popular Science man were less impressed—despite Ford’s claims, one remarked within earshot of the Popular Science man that electrics would “never” be ready for use. The Popular Science writer went on: They walked out to their cars, started, and drove away, leaving two trains of unburned hydrocarbons, carbon monoxide, and other pollution to add to the growing murkiness of the Detroit atmosphere.

When a development is needed badly enough, it comes. Without some drastic change, American cities will eventually become uninhabitable. The electric automobile can stop the trend toward poisoned air. Its details are yet to be decided. But it will come. And it won’t be long.

For a few years, the excitement around Ford’s breakthrough resembled the commercially inventive nineteenth century all over again. Around the world, researchers sought to emulate and, if they could, best Ford. As it had been on nuclear energy, Argonne sought to be the arbiter of the new age. In the late 1960s, an aggressive electrochemist named Elton Cairns became head of a new Argonne research unit—a Battery Department. Cairns initiated a comprehensive study of high-temperature batteries like Ford’s. Someone suggested a hybrid electric bus assisted by a methane-propelled phosphoric acid fuel cell, and it was examined as well. Welcoming suggestions, the lab director insisted only that any invention be aimed at rapid introduction to the market. To be sure that would happen, he invited companies to embed scientists at Argonne for periods of a few months to a year, and many did so. John Goodenough, a scientist at the Massachusetts Institute of Technology, said that everything suddenly changed. Batteries were no longer boring. Goodenough attributed the frenzy to a combination of the 1973 Arab oil embargo, a general belief that the world was running out of petroleum, and rousing scientific advances on both sides of the Atlantic.

The same electro-chemical reactions that enabled lithium batteries also made them want to explode: the voltage would run away with itself, a cell would ignite, and before you knew it the battery was spitting out flames. But you seemed no better off if you played it safe and used other elements—you’d find that they slowly fell apart on repeated charge and discharge.

In 1980, four years after Goodenough arrived at Oxford, he invented lithium-cobalt-oxide, a breakthrough even bigger than Ford’s sodium-sulfur configuration. It was the first lithium-ion cathode with the capacity to power both compact and relatively large devices, a quality that made it far superior to anything on the market. Goodenough’s invention changed what was possible: it enabled the age of modern mobile phones and laptop computers. It also opened a path to the investigation of a potential resurrection of electric vehicles.

In 1991, Sony, pivoting off Yoshino’s brainchild, released a lithium-ion battery for small electronic devices. Later versions of the Sony battery would contain a better anode made of benign graphite, whose absorptive layers were a perfect temporary burrowing place for lithium ions. But the advance as a whole—the combination of Goodenough’s cathode and a carbon or graphite anode—created an overnight blockbuster consumer product. It enabled several multibillion-dollar-a-year industries of small recording devices and other electronics. It triggered copycat batteries and a frenzy in labs around the world to find even better lithium-ion configurations that would pack more energy in a smaller and smaller space.

If you were thinking about an electric car, the NMC led to a better cathode than Goodenough’s lithium-cobalt-oxide, his lithium-iron-phosphate, or Thackeray’s own manganese spinel. Not only was it cheaper and safer, but Thackeray also calculated that the extra lithium in the system improved its performance. The double lattice let you pull out 60 or 70% of the lithium before collapsing, well over the 50 percent you could withdraw from Goodenough’s lithium-cobalt-oxide. That extra lithium—the added 10 or 20%—meant more energy.

Very few people would settle for a single trait in an electric car. The ability to travel a long distance was important, but it was not sufficient; drivers demanded other qualities, too. They wanted the car to take off—immediately—when they pressed the accelerator, and to keep on accelerating to high speeds. They insisted that their vehicle be safe—consumers, not to mention regulators, would reject any car with a chronically explosive battery. The last quality was possibly the hardest to deliver: pushing for such performance in distance and acceleration tended to make the battery more dangerous.

GM used the NMC and manganese spinel in a combined-formulation battery for the Volt, its first new electrified car, a plug-in hybrid launched in 2010. GM said the battery’s 40-mile distance was ideal for a first-iteration Volt.

Dahn, a blunt and outspoken battery researcher whose own version of the NMC had been patented by the 3M Company just after the Argonne pair, announced a big jump in the material’s performance. It happened when, as an experiment, he juiced the voltage. The capacity surged. If you pack lithium into a battery and apply voltage to move it from the cathode to the anode—the act of charging the battery—the structure puts up fierce resistance. It restricts the lithium’s free movement, thus limiting how fast energy can be extracted, and thus how fast a car could go. Some goes astray along the way, stuck in one or the other side of the battery. In the case of NMC, it had high energy—you could pack in a lot of lithium—but relatively low power, meaning that you could not extract the lithium very fast. What Dahn did was to raise the voltage used to charge the battery above 4.5 volts—to about 4.8 volts, considerably more than the usual 4.3. That boost triggered a race of shuttling electrons. The result was staggering.

Theoretically speaking, Dahn was putting almost all of the lithium into motion between the cathode and the anode. In principle, you should not have been able to extract that much lithium from the cathode, thus removing important walls from the latticework of the cathode—the house of oxygen and metal atoms should collapse. But Dahn discovered that he could do so. Johnson went into the lab and tried to duplicate Dahn’s claims using the Li2MnO3. He pushed the voltage over 4.5 volts. Just as Dahn had reported, the capacity surged. It was an important discovery. The numbers told the tale. Ordinarily, lithium-ion batteries such as Goodenough’s lithium-cobalt-oxide store around 140 milliampere-hours of electric charge per gram, a revolutionary capacity when it was invented but insufficient for the ambitions of the new electric age. By pushing the voltage, Johnson was getting much more—250 milliampere-hours per gram, which was even higher than the 220 that Dahn was reporting. Trying again, Johnson got 280, almost twice lithium-cobalt-oxide’s performance. The experiments suggested that the NMC was even more powerful than they had thought on pioneering it five years earlier—far more. At once Li2MnO3 was not simply a fortifying agent, as had been presumed. At just over 4.5 volts, it came alive in a very muscular manner. At this higher voltage, you activated a new, heretofore unrecognized dimension of NMC. This was NMC 2.0, the breakthrough that could push electric cars over the bar and challenge gasoline-fueled engines.
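
Rough arithmetic on the capacities quoted above, as a small Python sketch. The cell voltages are my own illustrative assumptions (the passage gives only mAh/g), and cathode-material energy is far higher than what a finished cell delivers.

```python
# Energy-per-gram comparison from the capacities in the passage above.
# Voltages are assumed for illustration, not quoted from the book.

lco_mah_g, lco_v = 140, 3.7    # Goodenough's lithium-cobalt-oxide (~3.7 V assumed)
nmc2_mah_g, nmc2_v = 280, 4.6  # "NMC 2.0" at the higher charging voltage (assumed)

# mAh/g x V = mWh/g, numerically equal to Wh/kg of cathode material:
print(f"{lco_mah_g * lco_v:.0f} Wh/kg of cathode")    # ~518
print(f"{nmc2_mah_g * nmc2_v:.0f} Wh/kg of cathode")  # ~1288, roughly 2.5x
# A full cell (anode, electrolyte, current collectors, packaging) delivers far
# less, which is why the book's ~140 Wh/kg figure for whole cells is much lower.
```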

It was his voice that captured attention in meetings. In a room of competing opinions, his basso profundo seemed to prevail. The voice made it impossible to ignore Chamberlain when he began to moralize. Among his gripes was “anti-intellectualism among elected officials.” Another was how Americans were “beholden to the interests of those who produce oil.” Chamberlain would continue to anyone listening: “We are the Saudi Arabia of coal and have nuclear energy. We should aim at energy independence with coal, solar, wind, and nuclear, then use them to charge up electric cars. Use electricity instead of oil—for everything. How do we get there?” He was hokey, which endeared him to the rank and file, scientists who were unmoved by talk of a battery war but gung-ho on the subject of importing less Middle East oil. Their passions rose at the idea that batteries could help stop climate change.

They believed Chamberlain when he said over the following years that many oil despots would be in trouble if drivers turned to electric cars to the degree Obama and Wan Gang both sought and those vehicles were charged with electricity produced by natural gas. Oil prices would fall, undercutting the long-running flood of money to Russia and OPEC, especially members that themselves did not possess gas. Since China would require less foreign oil, a current subtext to tension with outsiders—its colossal need for imported resources—would soften, and its air would be cleaner. When you added up these factors, you also emitted much less carbon. What was to dislike? Chamberlain understood that his boosterism infused the lab with a sense of purpose and that led him to promote the big energy picture even more.

Chamberlain and Schroeder tried another idea. A material known as a dendritic polymer was generating excitement. It was a compound that could be turned into a variety of products. What caught Chamberlain’s and Schroeder’s attention was that it could prevent melting in silicon wafers, a crucial need in computers—you needed to remove as much metal as possible and keep down the heat or your system would go down. A New England inventor had found a way to make dendritic polymers cheaply, and Chamberlain and Schroeder took his idea to Silicon Valley. Here was a certain path to fortune. But no venture capitalist they met felt the same confidence. All the pair heard was, “Do you have anything in energy?” The issue was timing. The smart money was shifting from chips to alternative energy.

So he began talking to American companies that were in the battery game. He pushed them to shift to the NMC. Johnson Controls and Procter & Gamble both said they could in principle manufacture batteries installed with the NMC. But they would have to give it a long think. Configuring factories anew for a different battery would take five years. So he and Sinkula launched their own start-up company. It would center on the NMC and be marketed to carmakers. In the coming years, the move on their own would be the subject of a considerable dispute with Michael Pak, NanoeXa’s CEO. But for now, fortune was with them. As Jeff Chamberlain had found in his own start-up stage, energy was the rage in Silicon Valley. Venture capital firms were competing fiercely for the most promising ideas. They had decided that renewable energy was the next big boom. But their eagerness seemed different from the past manias. It wasn’t just about money. The fever aligned with the Valley’s strain of politics, which generally vilified oil, embraced its technological rivals, and fretted about climate change. Here was a way for the venture capitalists to do well and do good.

Nationally and globally, a similar sentiment took hold about global warming. Barack Obama, at the time an American senator initiating a campaign for president, vowed to promote non–fossil fuel technology and reduce emissions of heat-trapping gases. But it was generally believed that whoever was elected, Democrat or Republican, would push through laws and federal spending to buoy solar, wind, biofuel—and battery companies. Silicon Valley’s venture capital community was prepared for these new policies and the commerce that would follow.

Moroccan-born Khalil Amine unapologetically hired only foreigners. His group included not a single American-born researcher. Over the years, Amine had employed the occasional American and even a Frenchman. But now, apart from two other Moroccans (and himself), his group was entirely Chinese. Over sushi after work, Amine said he had concluded that the job was too demanding for United States–born Americans. And not just for them—some Asians, too, were not up to the task. “I have had Caucasians in my group before. Also Indians, Koreans,” Amine said. “But I will tell you this—I’m very demanding. I come to work at six A.M., five A.M. I work weekends. I have to make sure that we produce. The Chinese work this way, too—they are extremely hardworking. But some of the Caucasians, they don’t like that. It seems like big stress on them.

Amine was not alone in invoking a supposedly unique Asian cultural DNA when it came to science, technology, and the work ethic, in particular one native to Chinese, but he said the results spoke for themselves. If you considered inventions and published papers, his group was the most prolific in the Battery Department. By Amine’s own count, his group had produced 120 or so inventions over the last decade. “The next group is not even close,” he said, which was true. “And if you look at papers—last year we published about forty-seven, forty-eight. Some professors, they publish that many in their entire careers.

The subtext wasn’t merely the view that foreign-born battery guys worked harder but that Americans were simply not a large part of the job pool. The battery guys said that when they advertised a new position, dozens of applicants would respond of whom just two or three typically would be American. The proportions explained why these few Americans, whatever their qualifications, were often outshined by the mountain of overseas competition. There simply did not seem to be many Americans eager to invent the next big battery. Americans trained in the disciplines attacking the battery challenge—in physics, chemical engineering, material science. But their jobs of choice tended to be in other fields. Among the places they landed were Silicon Valley’s high-tech firms. Or, even if they did go into batteries, they rejected basic research, which almost certainly required up to three years of uncertain toil as a postdoctoral assistant, and went into private industry.

One trait of Argonne’s foreign-born staff was traditional personal and family aspirations: they were seeking a new life with greater prospects for their children. “I’m not saying it in a way to degrade the other guys,” Amine said, “but Caucasian Americans—they don’t want to do Ph.D.s. They go for an MBA or something like that. For example, I was invited to give a talk at MIT. I would say seventy percent of the students were Asian. Chinese, Koreans, and Japanese. I went to Berkeley—same thing.” Foreign battery guys in fact often completed not just one postdoctoral assistantship before securing permanent employment, but two or even three three-year stints. A postdoctoral researcher at Argonne earned about $61,000 a year, which was high for such a position. When offered a staff job, the pay was bumped up a bit and rose regularly from there, which became even more attractive in combination with the stability of federal lab work. But it was not high-tech scale. Their determination was distinct not just from Americans’ but also from that of the Silicon Valley immigrants. Once you settled on a life in batteries, a simple calculus made Argonne and the other national labs special magnets for such foreign Ph.D.s—the number of private battery companies was small and with it the possibility of obtaining an H-1B visa. The national labs, on the other hand, could sponsor an unlimited number of H-1Bs—in 2000, Congress had created a working visa exemption for nonprofit, university, and national labs.

“They go an extra length. They’re smart. And they are extremely reliable,” Amine said. Why was his team predominantly Chinese? “That’s why,” he said. Amine said his strategy did not always work in his favor. He had lost numerous military contracts because the Pentagon permitted only American citizens to work on such sensitive projects, and his group lacked them. But he was straightening that out, too. Six years earlier, Amine himself had taken American citizenship. His two Moroccan researchers had as well, and a Chinese scientist was on his way. “I think within five years, all these Chinese will be U.S. citizens,” Amine said. “It’s just a matter of time.” Ultimately, Amine said, his personnel preferences were unimportant. “At Argonne, the policy is you hire people based on capability. Not nationality,” he said. Of course, Amine had determined that there was a difference—he was hiring according to nationality. It was among the reasons why an American victory in the battery race oddly depended on scientists from rival countries.

Government incentives were attracting increasing numbers of Chinese students to repatriate, but this trend largely excluded the staff at Argonne. Of the lab’s foreign researchers, the Chinese were among the least likely to repatriate. The professional conditions in China were a disincentive: you could end up lost in a sprawling lab in your native country, serving an autocratic boss interested not in new ideas but largely in retaining his own position.

Argonne employed some 3,000 scientists, but Amine was appalled at its relatively small intellectual property unit. The lab seemed content to file away strong inventions without seeking publicity. There was no explaining it apart from either a diffidence toward the business of science or plain languor. Whichever it was, Argonne’s IP team was passive when it came to licensing the lab’s inventions. So Amine set out to create his own little Japan. Amine organized his staff along the lines of the Kyoto invention machine where he learned his craft. He whipped his researchers into a cadre that at his direction worked systematically through every possible approach to the solution of a chemical puzzle—hundreds if necessary. The enviable record of papers, patents, and industry interest followed. Of one of his Chinese researchers, Amine said, “When you give him an experiment, he does it fast. He’ll give you the result in two days. With some people it’s like pulling teeth.” Amine’s critics pilloried his record of picking up a promising idea produced elsewhere, blending it with his own flashes of intuition and the work of his efficient staff, and emerging with a patent application or a new paper. They insinuated that it was theft. But in Japan—or any of the big Asian manufacturing economies—his methods would be recognized as fair and even sensible. Japan, China, and South Korea retained their economic edge with a willingness to build on others’ ideas and to spend money for years and years with the confidence that a profitable industry would eventually result. Amine was merely following the Japanese way. As critical as they were of him, Amine was savage toward the usual practices in American industry and labs. Western scientists championed the visionary moment, but that led to “the moon or nothing. So they have nothing,” he said. He was prepared to go step by step. And he winnowed down his group to those who would work the way he saw fit.

That meant only two nationalities—Chinese and Moroccans. On its face, Amine’s hiring sounded racist. His management style was dictatorial. But Amine was neither unethical nor a bigot. Rather, he was opportunistic in noticing others’ advances, uncanny in identifying and resolving a flaw, and ruthless in cutting through to a product bearing his name. That made him no different from countless other successful Americans. Jun Lu, a researcher on futuristic lithium-air batteries, defended Amine’s Japanese notions. Jun and his wife, Temping Yu, who also worked at Argonne, had no relatives in the Chicago area. “So we have more time to focus on research. You work harder” on Amine’s team, he said, but that was only part of the picture. “If you want to be successful, you still have to have the ideas. You have to have common sense.” But there were also pockets of anger in Amine’s group. This was not Japan. Some members of his group did not appreciate serving as cogs in Amine’s machine rather than innovators and thinkers in their own right. Amine held out the coin of the realm—an American visa and the later hope of citizenship. Their names appeared on the papers to which their grunt work contributed. But some of Amine’s best staff bristled at…

There was a divide between the Chinese and the rest of the battery department. The Americans were suspicious of the Chinese and were themselves insular. The old days of Argonne scientists hanging out at one another’s homes were long past—in 2011, five years after he joined the lab, Chamberlain had yet to throw a party. Almost none of the battery guys had ever been to his house. An administrative staff member’s ears perked up when her boss mentioned dinner plans with a colleague—it was the first time she had ever heard of lab executives socializing together. She could only speculate why so little entertaining went on. It wasn’t that the scientists were unfriendly. But there seemed to be an unspoken midwestern distance. Andy Jansen and Kevin Gallagher, both battery guys, threw backyard barbecues for department colleagues, but Asians were rarely present.

Kang moved to Chicago with a position on Khalil Amine’s team at double his Austin pay. It was not long before Kang felt like “a workhorse.” He was carrying out repetitive tasks in which Amine was attempting again and again to advance yet another theory that would produce yet another paper or patent “that doesn’t change anything.” The Moroccan traveled frequently but gave his subordinates no opportunity to attend the same international conferences, mix with peers, or make a name for themselves.

Americans, Kang said, had more potential than almost anyone because they had the fundamentals—from childhood, they were trained to argue and discuss. But they, too, were handicapped: they were not desperate. “They are not prepared to lose everything.” At Argonne itself, senior scientists did too little to prepare their young subordinates for big future breakthroughs.

A typical way to express the economics of a battery was the cost to produce a steady 1,000 watts of electricity for an hour (the amount needed to iron your clothes, for instance). According to Kumar, the Envia cathode lowered the battery cost to $250 per kilowatt-hour at laboratory scale, less than half the prevailing market rate at the time it was built. Envia’s next product promised to shrink the cost further—to $200 per kilowatt-hour, a very large jump. The ultimate aim, if Kumar succeeded with a superbattery on which he was currently working, would be a phenomenal $180 per kilowatt-hour. Kumar told Nissan that he could reach that goal in eighteen or so months. His promises, not to mention the time line, were exceedingly bold given that GM was thought to be spending $650 to $750 per kilowatt-hour on the battery in the Volt, for a total of $12,000 to $14,000. Dave Howell, head of the electric-car battery research effort at the Department of Energy, was challenging researchers to lower costs to $300 a kilowatt-hour by 2014 or 2015. His longer-term objective was $125 a kilowatt-hour by 2022. But Kumar was suggesting he needed a mere year and a half to cut battery costs by three quarters and bring the Volt battery down to around $3,000. Given those numbers, you could understand…
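
[ A rough sanity check of these figures in Python; the roughly 18.5 kWh Volt pack size below is inferred from the quoted totals, not stated in the excerpt:

volt_pack_cost = (12_000, 14_000)   # quoted total cost of the Volt battery, dollars
cost_per_kwh = (650, 750)           # quoted GM spend per kilowatt-hour

for cost, rate in zip(volt_pack_cost, cost_per_kwh):
    print(cost / rate)              # 18.46 and 18.67 -> roughly an 18.5 kWh pack

print(180 * 18.5)                   # 3330 -> Kumar's $180/kWh gives "around $3,000"

The numbers hang together: $180 is roughly a quarter of the $650 to $750 GM was thought to be paying, hence “cut battery costs by three quarters.” ]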

The Obama administration had allotted about $2 billion to build six lithium-ion battery factories largely from scratch. No one could say how many would survive, but most had no intellectual property of their own. In Kumar’s view they ought to be eager to grab Envia’s battery material. But, hearing silence, he said, “I don’t think it’s my job to convince them. I am working to make a product.”

Though it boosted GM’s image, the Volt did not actually sell well. The car cost $41,000 and most motorists were unimpressed by the 40 miles it could travel on a charge.

Studies showed that 40 miles was as far as the average American motorist traveled in a day. But in practice, actual potential buyers wanted to pay less, drive farther, and charge up where and when they wanted. Until these benchmarks were met, most were not buying the Volt or any other electric vehicle.

As for Steven Chu, he felt like a member of the “chosen ones” when he joined Bell in 1978. The atmosphere was “electric,” and “the joy and excitement of doing science permeated the halls,” he said. Chu grew up on Long Island, the son of Chinese immigrants who expected their children to earn Ph.D.s. His maternal grandfather was an American-trained engineer. His father was an MIT-educated chemical engineer and his mother an economist. He earned his doctorate at Berkeley and was hired to stay on as an assistant professor, but before starting the job he was offered a leave of absence to broaden his experience, and he used the time to go to work at Bell. Chu’s first Bell boss admonished him to be satisfied with nothing less than starting a new scientific field. Five years later, he was leading the lab’s quantum electronics research team. Among his first accomplishments was measuring the energy levels of positronium, an atomlike object made of an electron and its antiparticle, the positron. Measurements were hard because positronium has an average lifetime of 125 picoseconds (a picosecond is a trillionth of a second, a scale that is to a second as a second is to 31,700 years). Then Chu puzzled out how to use laser light to cool and trap atoms. “Life at Bell Labs, like Mary Poppins, was practically perfect in every way,” he said. As secretary of energy under Obama, Chu wanted to capture the magic of Bell and its peers, the great industrial labs that had been run by scientific and commercial visionaries like Thomas Edison and T. J. Watson. He wanted to assemble the best minds in one place and focus on a single mission. The objective would be to disrupt the largest industry on the planet—fossil fuels.

He himself could be an exacting boss. When he later was named director of Lawrence Berkeley National Laboratory, he became known for his “Chu-namis,” stormy fits of pique when something had not been carried out to his standard. Chu wanted to replicate this atmosphere at the national labs that the Department of Energy funded.

One day, Jim Greenberger, an outside member of the group with which Chamberlain was speaking, mentioned a vague boyhood link to a close ally of Senator Obama, whose presidential campaign was gaining momentum. Obama seemed to be intensely interested in batteries. Why not pitch the battery Sematech proposal to the senator’s team? Everyone agreed that it was a good idea. The group found itself in a Chicago office before a single economic adviser to Obama. Greenberger described Sematech and the aim of beating the big Asian battery makers. “Why do you think we can compete with the Japanese auto industry?” the adviser asked. Chamberlain said American companies, while currently struggling, could recover and figure large in a reconstituted global industry. But he added that if electrics truly took off, Detroit, with its record of stodginess, “will go the way of the dinosaur.” They would not manage the transition to the new world. “What kind of money do you need?” The group had discussed this question. If they were modeling on Sematech, the sum should be around $500 million. But they wanted a cushion in case expenses were higher. So they decided on $1 billion. It was perhaps a hubristic price, but that was what they would request for the battery Sematech. “Two billion dollars,” Greenberger said. The rest of the group went quiet. Chamberlain could not see the expression on the Obama adviser’s face, and no one could fathom the origin of the new number.

“Okay,” the adviser said. Outside, the group laughed. Why did Greenberger double the figure? “I don’t know,” he said. “It just felt right.” As Obama was elected, the economic landscape transformed. The world was in financial collapse and the country in a panic. On taking office two months later, Obama quickly proposed, and Congress approved, a $787 billion economic stimulus package. It was meant to rescue the economy and plant the seeds of future industries. Chamberlain smiled as he studied the breakdown of spending. It included a $2.4 billion line item—a $2 billion lithium-ion battery manufacturing program plus $400 million for the development of electric-car–manufacturing processes. Rahm Emanuel, Obama’s new chief of staff, had remarked that, politically speaking, no crisis should go to waste. The battery Sematech was a “go.” It was and it wasn’t. The money would fund the creation of an American lithium-ion battery industry, just as Chamberlain and the companies envisioned.

Only now, with the unexpected largesse of a $2.4 billion research-and-development fund, the companies changed their minds about working collaboratively. Johnson Controls received $249 million of the fund, EnerDel won $118 million, and $200 million went to A123. They would compete against one another for the market. There would be no battery Sematech—no industry-government consortium. But the United States would be in the battery game. Steven Chu also saw no reason to squander the crisis. In his case, there was the matter of his dream to recreate Bell Labs. He proposed eight projects, each tasked to solve a single big problem, at a total five-year cost of $1 billion. For those who did not grasp the significance, he said, “We are taking a page from America’s great industrial laboratories in their heyday.” On paper, they would be called “innovation hubs.” But more explicitly, they were “Bell Lablets.” One of Chu’s hubs was to be aimed at revolutionizing batteries.

As impressive as NMC 2.0 was compared with its predecessors, it couldn’t power an electric car competitively with the internal combustion engine. After accounting for the loss of energy in combustion, a kilogram of gasoline delivers about 1,600 watt-hours of usable energy. State-of-the-art lithium-ion batteries, by comparison, delivered about 140.

Thackeray’s goal for NMC 2.0 was to double current performance plus cut the cost. But even that would leave batteries at about a sixth the energy density of gasoline. The Battery Hub’s goal was to make the next big jump after lithium-ion—to 600 or 800 watt-hours a kilogram. Toward that goal, the Battery Hub would receive $25 million of federal funding a year for five years, $125 million in all. A competition would decide which university, national lab, or consortium would host the Hub. Chu advised that those interested stay tuned…
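
[ A quick arithmetic check of the energy-density figures quoted above, in Python:

gasoline_wh_per_kg = 1600        # usable energy per kilogram of gasoline, as quoted
lithium_ion_wh_per_kg = 140      # state-of-the-art lithium-ion, as quoted

print(gasoline_wh_per_kg / lithium_ion_wh_per_kg)         # ~11.4x gap today
print(gasoline_wh_per_kg / (2 * lithium_ion_wh_per_kg))   # ~5.7 -> "about a sixth"

Even the Hub’s 600 to 800 watt-hour target would still be only about a third to a half of gasoline’s usable figure. ]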

John Newman, an electrochemistry professor at UC Berkeley, phoned Thackeray. Newman was an icon who had written the standard university textbook on electrochemical systems. “Why don’t you lead the Battery Hub and we’ll do it with you?” Newman said. The competition had not yet been announced, but Newman was suggesting an interesting head start. He wanted Argonne and Lawrence Berkeley National Laboratory, traditionally bitter rivals in the battery space, to submit a joint bid. The approach was surprising given the jealousy between their two institutions. Argonne and Berkeley never worked together. They harbored a deep well of mutual suspicion. The stakes, however, were enormous—whoever landed the hub would be the undisputed center of American battery research. Therefore, if they joined hands, agreed to divide the research funds, and did not quarrel, Berkeley and Argonne might stand an improved chance of winning the competition. In June 2009, Newman traveled as part of a Berkeley group to Argonne. Crowded into a small conference room, they began to brainstorm what a Battery Hub would look like. So much was already going on in the field—depending on the year, the Department of Energy alone was spending $50 million to $90 million on battery research. What could a hub add? Someone suggested starting over—that they wipe the whiteboard clean and simply construct a chart of a first-rate, industry-leading battery research program. They could then shade in areas where there was already sufficient work. What remained would be the proposed Argonne-Berkeley Battery Hub. The result was a blockbuster, over-the-top plan for a $100-million-a-year, multiyear partnership of companies and scientific institutions. On paper, it was four times the size of Chu’s hubs. Both teams loved it. When Chamberlain described it quietly to a few industry friends, they seemed equally enthusiastic, making clear they were prepared in principle to share the cost fifty-fifty with the Department of Energy. Chamberlain thought he understood the companies’ eagerness. It wasn’t that it looked like Sematech, although the resemblance to Chamberlain’s obsession was more than passing. It was because “it was like Bell,” he said. Genuinely like Bell, and not the lablets that Chu was proposing. The Argonne-Berkeley team called it the National Center for Energy Storage Research, which they pronounced “En-Caesar.”

Congress had to directly approve such spending, and it treated Chu’s proposal with skepticism. Its 2010 budget funded just three of the eight innovation hubs. Worse, it guaranteed the money for only a year rather than five and allocated $22 million for each hub instead of the proposed $25 million. The Battery Hub did not make the cut.

“Oh, crap,” Chamberlain said. He was reading a news bulletin on the Internet—a Chevy Volt had caught fire while undergoing federal crash testing in Wisconsin. The vehicle had been through the usual harsh examinations, which included ramming a pole into its side, and had already achieved the top five-star rating. Three weeks later, as the car sat on the lot, the battery burst into flames. It engulfed the Volt along with three other vehicles parked nearby.

Fox News blamed Obama. Neil Cavuto, a Fox commentator, said the Volt was part of a gigantic social disaster that would lead to divorces “when someone forgets to plug it in,” not to mention a conspiracy. “Someone bought off Motor Trend to say it was car of the year,” Cavuto said. “You have to be a dolt to buy a Volt.” The vehicle had nothing to do with Obama and in fact was conceived during the George W. Bush administration. But by embracing electrics, Obama infuriated the right. The carping grew when two more fires occurred during tests just six months later. The thing about large lithium-ion battery packs was that if you were not going to use them for a long time, you were advised to drain them of electricity. When fully charged, they could be unstable.

Chamberlain said that it wasn’t only his personal connection to the car that decided him. Notwithstanding the opinion of Fox News, he agreed with the assessment of Motor Trend, which was that the Volt was “a game-changer.” The Volt was the future, he said, “something that is amazing.”

Rechargeable lithium-ion batteries became commercial products only a decade later. When Sony commercialized Goodenough’s battery in 1991, it became the go-to formulation for virtually every laptop, smart phone, recorder, or really any battery-enabled consumer device. Goodenough’s batteries lasted longer than the technology they superseded—nickel metal hydride—and did not suffer nearly the severity of capacity loss after long use. Even two decades later, lithium-cobalt-oxide batteries remained the world’s workhorse consumer battery.

The inspiration to use lithium-ion to revive electric cars, though, came later still. Lithium-cobalt-oxide was too expensive—specifically the ingredient cobalt—for serious contemplation in passenger vehicles. It packed a wallop of energy density—the best among any commercial battery—but was economically feasible only for compact purposes, meaning small electronic devices. When Toyota pioneered the modern-day push into electrics in Japan in 1997, its Prius hybrid again contained nickel-metal-hydride batteries.

Riley received an e-mail from a 41-year-old South Korean staff researcher named Young-Il Jang. NMC 2.0, Young said, appeared to have a problem. And not just any problem, but one so substantial as to possibly doom it outright for use in cars. Young told Riley and other colleagues copied in the e-mail that the jolt of voltage that gave NMC 2.0 its potency also seemed to thermodynamically change it. When the high voltage forced much of the lithium to begin shuttling, thus removing the cathode’s pillars, the structure sought to shore itself up and keep its shape. Other atoms rearranged themselves. Nickel took the place of lithium, and cobalt of oxygen. When the lithium returned, its old places were occupied. It had to try to find a new home. Thermodynamics made the atoms seek a new natural balance. The voltage steadily declined. Hence in actual application in an automobile, NMC 2.0 might not provide the consistent potency suggested when Thackeray was working on coin-size test cells in the laboratory. Unless the atomic reorganization could be controlled, Young concluded, the material might never find use in a car, which required reliability. In a gasoline-driven vehicle, the driver expected the engine to deliver more or less the same propulsion each time the accelerator was depressed—the pistons had to push out a smooth flow of power continuously, every time. It could not deliver the acceleration of a Ferrari the first day and a Mini Cooper on the hundredth. Similarly, in an electric system, the voltage in the second cycle could not differ from that of the fiftieth; you could not create a dependable, ten-year propulsion system with such instability.

Riley was suggesting that the parade of companies that had paid to license NMC 2.0—not just Envia, but BASF, GM, LG, and Toda—was holding a seriously flawed product. As his researcher had stated, NMC 2.0 perhaps could not be deployed for the purpose for which it had been purchased—longer-range, cheaper electrified vehicles. At least in its current state, it perhaps could only be used at lesser voltages, which would mean performance not much different from the lithium-cobalt-oxide batteries commercialized two decades before. There might be no reason for anyone to absorb the expense of switching to NMC 2.0. If you asked the battery guys at what stage they understood that there was a problem with NMC 2.0, it prompted a nervous response. They would go quiet, glance around, and provide not quite precise answers. This conveyed the impression that either no one knew the precise answer or no one wanted to disclose it. The reason was that, if you looked at the situation squarely, you could not escape the conclusion that Argonne had in fact sold the companies a faulty invention. Not that the companies themselves were off the hook—the engineers, venture capitalists, and other executives and staff who had signed off on the licenses had to be in some hot water with their bosses, too. If anyone was predominantly responsible, it was the Thackeray team, because their names were on the patent. Chamberlain, who had led the negotiations on Argonne’s behalf, said simply, “We didn’t know about it.” But how was that possible? “Because making a product is not the scientists’ objective. You have to look at a certain data set to notice the fade,” he said. “If you look at a different data set where all of your requirements are for capacity, you can actually miss the voltage curves.” He added, “That is why interaction with industry is so important, because if you are making a product, like a battery that is going into a car, you look at everything like this.”

Department of Energy staff summoned him to Washington. They wanted to hear more about voltage fade. A few days before his departure to Seoul, Kang sat before six Department of Energy officials with his slide deck. His core message resembled A123’s: NMC 2.0 required a fundamental fix. How did some of the best minds in batteries overlook a defect this basic? Voltage fade was deeply pernicious, Kang said. It was what Chamberlain said—if you were employing the standard measuring tools, determining a battery’s stability by checking its capacity, you would notice nothing wrong with the NMC 2.0. From cycle to cycle, you observed a stable composition. That is what Thackeray and Johnson saw and reported in their invention. Voltage fade became conspicuous only when you incorporated gauges of stability that, while familiar in industry, were highly uncommon in research labs. Only then did you understand that NMC 2.0 was profoundly flawed.

Looking further ahead, Faguy saw the problem as a dress rehearsal for nightmares to come. The battery race would involve a series of unforeseen, terrible problems that you simply could not recognize in the tiny volumes and coin cells produced in the national labs. You needed a ton of the material and hundreds of cells, and you had to charge and recharge them again and again before the problems surfaced. Only then could you think about the solutions necessary to get the technology into a car.

Croy said the slides laid out two ways to understand voltage fade: it was either repairable or forever unmanageable, the latter because of the immutable laws of thermodynamics, the most basic physics of energy. The answer, he said, was actually both—voltage fade challenged the limits of fundamental physics, but there could be a fix. To get there, he and Thackeray had used the beam line to explore the bowels of the NMC. They observed that the nickel and manganese had wanderlust. The metals liked to move around through the layers. It was their nature—once the lithium shuttled to the anode, taking a bit of oxygen out of the cathode, the nickel and manganese could not help but shift in order to find a new, comfortable balance. By the time the metals settled down, the material itself was changed—its voltage profile was vastly different. For a carmaker, such a transformation was unacceptable. But how could you stop it?

The extra manganese in NMC 2.0—the Li2MnO3—was largely responsible for the battery’s exceptional performance, but it also contributed to the instability. The manganese settled down and stopped rattling the structure when near nickel. So wherever you had manganese, you wanted to make sure nickel was also present. The flower pattern represented the best depiction of that balance.

In February 2012, about a thousand men and women assembled at an upscale Orlando golf resort called Champions Gate. There are two types of battery conferences—scientific gatherings that attract researchers and technologists attempting to create breakthroughs; and industry events, attended by merchants and salespeople. Orlando was the latter. A pall hung over the assembled businesspeople. Americans were not snapping up electric cars: GM sold just 7,671 Volts the previous year against a forecast of 10,000. There was no reasonable math that got you to the one million electric vehicles that Obama said would be navigating American roads by 2015, even when you threw in the Japanese-made Nissan Leaf, of which 9,674 were sold in 2011. That became even clearer when just 603 Volts sold in January 2012. No one seemed consoled that China was doing even worse, selling just a combined 8,159 across the country, fewer than half the American number.
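
[ The arithmetic behind the pall, in Python, using the sales figures quoted above; comparing one year’s sales against the cumulative one-million goal is a rough gauge only:

volts_2011 = 7_671
leafs_2011 = 9_674
us_total_2011 = volts_2011 + leafs_2011
print(us_total_2011)               # 17,345 plug-ins sold in the U.S. in 2011
print(8_159 / us_total_2011)       # ~0.47 -> China at "fewer than half" the U.S.
print(1_000_000 / us_total_2011)   # ~58 -> the multiple of 2011 sales implied
                                   # by Obama's one million EVs by 2015

]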

There could eventually be the type of market shift that both Obama and Wan Gang had forecast. But it would not be in the current decade. Until at least the 2020s, electric cars would remain at best a niche product.

The Japanese believed the race was already over. They—and their Prius—had won. Toyota was nearing four million cumulative hybrid sales worldwide, including 136,463 Priuses sold in 2011 in the United States alone, the world’s second-largest car market behind China. The Japanese themselves bought 252,000 Priuses that year.

Researchers might achieve a genuine breakthrough in a decade or so, Anderman said. But meanwhile the internal combustion engine would keep improving and raising the bar.

The vice presidents of major industry players like GM, Ford, Bosch, and Nissan were the men who, one step down from the CEO, decided what cars their companies actually produced. They tended not to “put up with any crap,” Hillebrand said. “They are not interested in what sounds interesting and what sounds cool,” he said, but in “things that are really going to happen.” It became evident that they did not foresee a breakout of the electric car for many years to come. Electrics cost too much to produce. There was no indication that the economics were going to significantly improve. Motorists might keep buying 20,000 or 30,000 Leafs and Volts a year, they said, but there was no sign that either model would achieve the hundreds-of-thousands-of-cars-a-year sales that signaled mass appeal. The old guys were right, Hillebrand said.

He himself foresaw internal combustion vehicles that could run automatically on almost any fossil fuel. As it stood, mass-market diesel engines, relying on compression rather than spark plugs to ignite the fuel that drove the car, were probably the most efficient on the planet—fully 45% of the diesel poured into the tank ended up propelling the vehicle; the other 55% burned off as wasted heat in the process of combustion. As for gasoline, just 18% of its energy actually reached the wheels; a whopping 82% went into the ether.

Consumer electronics typically wear out and require replacement every two or three years. They lock up, go on the fritz, and generally degrade. They are fragile when jostled or dropped and are often cheaper to replace than repair. If battery manufacturers and carmakers produced such mediocrity, they could be run out of business and sued for billions, and their executives could even go to prison if anything catastrophic occurred. Automobiles have to last at least a decade and start every time. Their performance has to remain roughly the same throughout. They have to be safe while moving—or crashing—at high speed.

The generally accepted physical limit of a lithium-ion battery using a graphite anode was 280 watt-hours per kilogram. No one had ever created a 400-watt-hour-per-kilogram battery. In all, ARPA-E received some 3,700 submissions for $150 million in awards. Thirty-seven were selected. Envia was among them—Kumar won a $4 million grant.
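
[ The ARPA-E odds, checked in Python:

submissions, awards, pool = 3_700, 37, 150e6
print(awards / submissions)   # 0.01 -> a 1% acceptance rate
print(pool / awards / 1e6)    # ~4.05 -> about $4 million per award on average,
                              # matching the grant Kumar won

]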

The subsequent year, Kumar’s team worked through the handful of silicon anode concepts he had proposed until it settled on one. Kumar said Amine’s anode, a composite of silicon and graphene, a pure carbon material the thickness of an atom, had failed to meet the necessary metrics. Instead, the best anode was made of silicon monoxide particles embedded in carbon. Kumar’s team built pores into this silicon-carbon combination measuring between 50 nanometers and 5 microns in diameter, and filled them with electrolyte. Carbon in the shape of fibers or nano-size tubes was also mixed into the anode, creating an electrically conductive network. The silicon’s expansion was thus redirected and absorbed. Even if the silicon broke apart immediately, the carbon fibers and tubes provided a path across which the lithium ions could pass on their way to and from the cathode. Kumar said the results were excellent.

This path to the better battery was expensive. You started with a vacuum reactor and a costly substrate, sometimes using platinum, a precious metal. Then you grew nanowires and nanotubes. What resulted was like pixie dust—you derived just milligrams of material each time while what was required was bulk powder. The process might decline in cost over time, but for now it could not be justified.  The battery was only a prototype—he had charged and discharged it just 300 times. Experts in the audience knew that Kumar would have to more than triple the number of cycles before the battery could be used in a car.

Dahn was notorious for ripping into the ideas of his colleagues—publicly and usually with precision. He pointed out flaws that most battery guys, knowing how hard it was to make an advance of any type, typically kept to themselves. Dahn was with Anderman in the belief that battery scientists often cherry-picked their results in order to postulate nonexistent advances.

The basic NMC-spinel battery in the GM Volt delivered about 100 watt-hours per kilogram. Since GM over-engineered the battery to maintain a margin for error, about 37% of it went unused—the excess was there just in case added capacity was needed. So it was effectively running at about 66 watt-hours per kilogram. If you now doubled the capacity using the Envia formulation and slimmed down the unused margin, you would triple your range—rather than 40 miles, the Volt would travel more than 120 miles on a single charge. Alternatively, GM could stay with the 40-mile range and cut about $10,000 off the price of the car. “You have your choice,” Dahn said. “This is why people are fighting for higher energy and longer life. It is what it is all about.” Dahn had questions—for example, whether Envia’s 300 cycles would increase. “How long and how fast? Nobody knows,” Dahn said. “But you can bet your bottom dollar it is going to get better.”
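
[ Dahn’s arithmetic, sketched in Python; the 95% usable fraction for the slimmed-down pack is my assumption for illustration, not a figure from the book:

volt_wh_per_kg = 100                 # NMC-spinel pack, as quoted
unused_fraction = 0.37               # over-engineering margin, as quoted
effective = volt_wh_per_kg * (1 - unused_fraction)
print(effective)                     # 63 -> close to the "about 66" quoted

envia = 2 * volt_wh_per_kg           # doubled capacity with the Envia cathode
slimmed = envia * 0.95               # assumed: most of the safety margin removed
print(slimmed / effective)           # ~3x -> 40 miles becomes roughly 120

]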

Canadian energy thinker Vaclav Smil was his favorite writer, and Gates was a seed investor in a molten metal battery prototype invented by Donald Sadoway, a celebrity MIT chemist. Conversing with Chu, Gates said that clean power was perhaps the world’s greatest challenge. It would be far harder than anything he himself had attempted. When you contrasted energy and computer software, Gates said, “people underestimate the difficulty getting the breakthroughs. And they underestimate how long it is going to take.” Crossing from the invention to the marketplace was the longest wait of all—the general adoption of a new energy technology could take five to six decades, he said. That’s right, Chu replied.

A photograph of Kumar and the Envia team went up on the triple screens. The day before, Majumdar said, this start-up company had announced “the world record in energy density of a rechargeable lithium-ion battery.” Its 400-watt-hour-per-kilogram battery, if scaled up, could take a car that entire Washington-to-New York journey in a single charge at half the cost of the current technology. And more was coming, he said.

Envia claimed this could be done for hundreds of cycles, but in fact it went just three cycles before the energy plunged. To be usable in an electric car, the battery would need to withstand 1,000 charge-discharge cycles.

In the audience, the Argonne battery guys cringed. Then they went ballistic. Kevin Gallagher said Majumdar’s claims about Envia were “bullshit,” making him wonder about the other eight start-ups that he showcased. ARPA-E as a whole, with its pressures to deliver big leaps, was “basically set up for companies to lie,” he said.

Chamberlain said that deceit was in the DNA of start-ups and VCs: you needed that quality in order to raise funding, sell your product, and ultimately achieve a successful exit—to flip your company in either an acquisition or an IPO.

He decided that Majumdar’s high-profile announcement was politically driven. Department of Energy investments were a primary target of harsh Obama critics. The furor centered on Solyndra, a California solar power company that was awarded a $535 million stimulus loan and then filed for bankruptcy. Solyndra, critics said, exemplified the folly of “picking winners”—of favoring specific companies rather than general swaths of potential economic prosperity in which any enterprise might emerge a success. The loan, they said, was particularly suspect given that a Department of Energy official handling it was simultaneously a presidential campaign fund-raiser and married to a Solyndra lawyer. In fact, ARPA-E and other programs were picking winners. But that was what they were supposed to do. The question was whether they picked wisely. In any case, while the wisdom of the Solyndra loan was debatable, its origins were in the Bush administration.

Gallagher was still irritated about Envia. He did not desire a public argument over the matter but said again that Kumar’s 400-watt-hour-per-kilogram disclosure was just show. Gallagher was disposed to irritable pessimism—Thackeray said that was to be expected since he was an engineer. But he defended his suspicions on the basis of the girth of Kumar’s electrodes: in order to deliver the performance that Envia claimed—meaning that an electric car could travel three hundred miles on a single charge—he would have had to densely pack the lithium into an unusually thick cathode. That was the only way. The problem was that thick electrodes were a blunt-force method—they could deliver the distance, but only in the lab. They probably could not be placed with confidence into a three-hundred-mile electric car. Being so fat, they would suffer early and fatal maladies and die long before the ten-year life span required for such batteries. They might even shatter. The future, Gallagher said, was slender electrodes—cathodes less than one hundred microns thick, or slimmer than the diameter of a human hair. In its rush to the market, Gallagher said, Envia had unveiled an attention-grabbing but flawed product that still required fundamental improvement.

Lynn Trahey called Gallagher “K-Funk.” She had joined Argonne three years earlier as a postdoc from Berkeley. Scientists in the United States were not only largely foreign born, but also mostly men. So Trahey was an anomaly on both counts—she was the only female staff scientist in the Battery Department. She had been a cheerleader and played varsity doubles tennis in high school. As a graduate student, she wore a purple- and green-dyed ponytail. Trahey’s current toned-down style appeared aimed at reducing her conspicuousness among these mostly plain men. She tied her hair back, unadorned. She dressed like one of the guys in loose-fitting jeans and sneakers.

None of it worked. Trahey still stuck out. The guys behaved bizarrely around her. They spoke inexpressively, almost robotically. Except for Gallagher and Mike Slater, a lot of them simply stayed away. But while colleagues behaved awkwardly around her, she was ideal for public relations exercises. At Berkeley, her professors dispatched her on community-outreach visits to neighborhood schools and senior-citizen groups. She would show up and attract favorable press for the department. Chamberlain employed Trahey to the same advantage. He featured a photograph of her posed in protective glasses on the department’s home page and in a handful of press releases.

“Why don’t we get rid of the old people” at the lab? Gallagher said. “I’d like to see their output. I’ll bet it’s low.” He said that if you calculated the average age of the department’s researchers, you might be surprised how elderly the staff was as a whole. Gallagher and Trahey agreed that their older colleagues were costing too much money. Trahey said, “The reason there are so few jobs is these people won’t leave. These guys suck up all this money that could go to other things.” It particularly galled her that Gruen was paid at the lab’s top salary rank. “He is a 710!” she said. Such grousing poured out of the pair. They suggested that battery science was a young person’s game. But was it the older scientists’ ideas that deserved the scrutiny, or simply their ages?

One reason battery science didn’t produce results was that scientists proposed a new chemistry, got funding, proved or failed to prove that it worked in coin cells, wrote a paper, garnered whatever accolades followed, and moved on to the next thing. At no point was the idea typically tested for practicality—no one checked whether the small coin cells could scale up into a superior battery. Experimentation alone was the final product.

Elon Musk’s Tesla made no battery breakthrough at all – he just strung together existing battery technology: some 8,000 cells made by Panasonic, weighing 1,300 pounds in all. He chose this battery on price; it was the cheapest per kilowatt-hour.

The Argonne scientists disputed the wisdom of Musk’s choice because nickel-cobalt-aluminum was the most volatile of the lithium-ion chemistries and easily caught on fire. If a pure lithium anode could be made that didn’t catch fire, it would be a colossal achievement and bring great recognition to whoever figured out how to do it.

Another thing Kumar at Envia needed to fix was DC resistance in the cathode, which made the car suddenly sluggish over the last 20 miles of a 100-to-200-mile battery.

Envia was not achieving its 400 watt-hours per kilogram – not by a long shot. It did on the 2nd cycle, but by the 5th cycle it was down to 302, by the 100th cycle 267, by the 200th cycle 249, and by the 342nd cycle, at 232, it had lost 42% of its energy.
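
[ The fade, computed in Python from the cycle data quoted above:

fade = {2: 400, 5: 302, 100: 267, 200: 249, 342: 232}   # cycle: Wh/kg
for cycle, wh_per_kg in fade.items():
    loss = 100 * (1 - wh_per_kg / 400)
    print(f"cycle {cycle}: {wh_per_kg} Wh/kg ({loss:.0f}% below the 400 spec)")

The 342nd cycle comes out 42% below spec, matching the figure above. ]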

The GM team didn’t even get 2 cycles at 400. GM insisted that Envia reach 4.4 volts – but at that state of charge, atoms began to move around at an accelerated pace; the cathode expanded and contracted with the shuttling of lithium, and the material could crack.

Envia had contracted with GM and had again missed the milestones on both the Volt and 200-mile car batteries. The 400-watt-hour-per-kilogram material was still not performing as advertised.

The GM men were furious. “The anode material is not Envia’s,” said Matthus Joshua, the automaker’s purchasing executive. Envia had “misrepresented the material. The product claims prior to the contract were inaccurate and misleading.”

The anode was represented as proprietary but had actually been bought from a third party. After Envia admitted it had misrepresented the composition, origin, and intellectual-property content of its prototype battery, it asked for additional time; still the project did not move forward, and Envia proved unable even to replicate its previously reported test results. Given the facts, GM was entitled to terminate the contract and wanted back the $4 million it had paid out.

Was Kumar a con man? Was he looking to cash out before he was found out?  The Argonne guys–all of them skeptics from the time that Kumar began to boast about his big breakthrough–could not decide.

Nor were journalists educated enough in battery technology to catch the problems with Kumar’s technology, even when Kumar and Kapadia showed slides at an ARPA-E Summit, many of which were extremely deceptive (see pages 277-278 for details). These slides depicted only the capacity, giving the impression that the energy density of 400 watt-hours per kilogram was being achieved for hundreds of cycles, even though the energy density was going haywire.

Despite this, the board of investors and executives kept quiet, hoping that Kumar would somehow still improve the battery enough that they could cash out. GM did too, since there was no profit in going public with a fiasco, discrediting the Volt and GM’s ability to develop new technology; Wall Street might also pummel the stock.

When the Envia board refused to depart, a 52-page civil suit was filed in the Alameda County courthouse against Envia and Kumar personally, alleging fraud and other charges. The lawsuit revealed many of the past six years of corporate secrets, and all hope of keeping the sorry story under wraps was blown.

Faguy at the Department of Energy realized that the problem of voltage fade couldn’t be solved simply by throwing money at it. “These kind of problems are intractable.”

 

Posted in Automobiles, Batteries, Energy | 1 Comment

Stansberry on “The End of America”

[ Stansberry has been predicting a market crash and currency collapse for a long time.  But this hasn’t happened yet, so Stansberry re-evaluates his ideas. He notes that the top 20 industrialized nations have pension and retiree obligations that aren’t on their balance sheets, with over $80 trillion coming due in 10 to 20 years.  The unprecedented explosion of debt over the past 20 years is now guaranteed more by governments than by the market.

He says “Over the next decade, the biggest threat to your wealth won’t be the risk of losing your savings to a market crash. The biggest threat, by far, is the risk of losing your wealth to our government via confiscation or devaluation… or both….By guaranteeing so many of these debts and obligations, governments are setting up an unprecedented collapse of not only the banking system, but of the political system itself.  The U.S. government has already pledged a large amount of your wealth to other people …My fear is that the stock market disappears. My fear is that the government defaults. My fear is that no bank will survive.”

“Negative interest rates have become pervasive in two out of the three major developed currency blocs. And we could certainly be next.” Stansberry asks how that could possibly work out in the long-run: “Do you think the public is going to volunteer to buy bonds that not only don’t pay interest, but charge a monthly fee to own? How will life-insurance companies meet death-benefit claims if bonds no longer pay any interest? How much capital would you put into the banking system if the banks begin charging you 2% or 3% a year just to keep your savings with them?”

He doesn’t offer any solutions, and never mentions declining energy or natural resources, which is the true source of our problems.

Alice Friedemann  www.energyskeptic.com author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

Porter Stansberry. April 22, 2016. The End of America. The Stansberry Digest.

“Remember the ‘End of America’?… How the global economy got ‘Enronized’… How negative interest rates work… What happens when doctors won’t take Medicare?…  Longtime readers might remember a documentary we produced back in 2010 called the “End of America.”

The thesis was pretty straightforward: America, having racked up debts (both private and public) so large they could never be repaid in sound money, would inevitably be forced to print its way out of perdition. As a result, our dollar would inevitably lose its position as the world’s leading reserve currency.  For Americans, the days of cheap and easy credit would be over. Going forward, we wouldn’t be allowed to merely print up paper to pay for our foreign loans. Such a development would be catastrophic to a lot of Americans, similar in many ways to the economic and social challenges Great Britain faced after World War II.

In our “End of America” presentation, we predicted several important developments that have since come to pass, such as a general increase in social unrest. See the recent riots in Baltimore and the “Occupy Wall Street” movement. We were right about America’s credit rating, which was downgraded from “AAA” in 2011 by ratings agency Standard & Poor’s. We were right about the rise of new “alternative currencies” like Bitcoin.

We’ve also seen more and more political challenges to the status quo, and even a sharp rise in political violence. You’d have to be completely ignorant of history if “strongmen” like Donald Trump don’t remind you of Mussolini or other similar figures from history. Leaders like this arise as countries go bankrupt because the public doesn’t want to accept the consequences of its profligacy. Wars break out, too… like the kind Trump seems determined to start against Mexico… or the kind that Hillary will probably continue to wage in the Middle East.

But in one important way, our predictions haven’t come to pass (at least, not yet). Incredibly… the currency collapse hasn’t happened – either in America or in any other major developed nation. Sure, the yen and the euro have weakened a lot against the dollar. They’re down 15% and 28% since 2012. But we haven’t seen the kind of panic I know we’ll see sooner or later in the world’s leading paper-money brand – the U.S. dollar.

I (Porter) have been thinking about why that’s so… and how the system could endure for far longer than I believe is possible. Let’s look at the numbers.

Here’s an incredible statistic: Since 2009, total global debt has increased by $57 trillion, according to consulting firm McKinsey. That’s about the same amount of debt as America owed, in total, back in 2009. Said another way, in a little more than six years, the world has added a new pile of debt as big as the one that blew up the American economy.

Meanwhile, total debt (public and private) in the U.S. has increased, too. We’re up to $65 trillion, from around $55 trillion in 2009. Our total debt is up 150% since 2000. Just think about that for a minute. Imagine what our economy would have looked like over the past 15 years without that incredible level of stimulus. Think about what our unemployment figures would look like without all of that debt.
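
[ A quick check of the implied baseline in Python: if $65 trillion is “up 150%,” the 2000 total must have been 65 / 2.5 = $26 trillion.

total_now = 65e12                      # quoted total U.S. debt, public and private
print(total_now / (1 + 1.50) / 1e12)   # 26.0 -> ~$26 trillion back in 2000

]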

What’s the big deal? Who cares about some “hot money” lending? The problem, as McKinsey points out, is that all around the world, debt growth is far outpacing economic growth. As a result, we haven’t had the ability to finance these new obligations. This raises the question: If economic growth can’t finance these new loans (or the old ones), who is foolhardy enough to lend all of this money?

The answer won’t surprise you. It’s the government, of course!

A new report published by the Richmond, Virginia branch of the Federal Reserve says 61% of all liabilities in the U.S. financial system are now implicitly or explicitly guaranteed by the government. That’s way up from 1999, when only 45% of the liabilities of the financial system were guaranteed (mostly Fannie and Freddie). In other words, more and more of our financial institutions rely on the government (aka taxpayers) for access to credit.

These guarantees, however, can’t be found on any U.S. government balance sheet.

Imagine if a publicly traded company did the same. It would be called “Enron” and its leaders would be put in jail. That’s the status of our entire banking system: It has been “Enronized.” It runs on the same financial engineering as Enron. And not just in America. You can find the same problem in every major economy in the world.

Does that sound like a good idea?

Well, it has been fun so far. Over the last 20 years or so, the world has seen an explosion of debt unlike any other period in history. Most of these obligations wouldn’t have been financed by the free market. Individuals investing their own savings would have never agreed to those risks or the tiny interest rates now being offered to lenders in every major economy.

But rather than live within the means of the free market, governments from almost every major nation have engaged in massive currency and interest-rate manipulation. And that’s not all. They haven’t merely guaranteed the availability of capital in more and more ways… They’ve also guaranteed the principal of the loans.

Does that sound sensible?

I know, you’ve heard all of this before… But none of these problems stopped the big bull market we’ve seen since 2012. So even if we’re right that this isn’t sustainable, how can anyone know when the boom will end or when the music will stop? We don’t know, of course. Nobody can know for certain if the next market correction or bear market will be the “big one.”

But here’s an indicator of where things might finally hit a real breaking point: Banking giant Citigroup (C) warned in a recent report that the top 20 industrialized nations have pension and retiree obligations (also held off the balance sheets) that exceed $80 trillion. All of these come due over the next decade or two. And of course, none of these obligations can be financed based on current GDPs or tax rates. The mountains of debt these economies continue to labor under ensure there is no growth.
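
[ A scale check in Python: $80 trillion coming due over a decade or two works out to $4 to $8 trillion a year on top of existing budgets.

obligations = 80e12                     # quoted pension and retiree obligations
for years in (10, 20):
    print(obligations / years / 1e12)   # 8.0 and 4.0 -> trillions per year

]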

How will it all end? I wish I knew exactly… but I have no doubt that it will be far worse and far more violent than anyone could possibly predict.

So I hope that while you’re thinking about what the stock market will do next week or next month, you also spend a little bit of time thinking about the bigger picture. Over the next decade, the biggest threat to your wealth won’t be the risk of losing your savings to a market crash. The biggest threat (by far) is the risk of losing your wealth to our government via confiscation or devaluation… or both.

 

Just think about it… If these loans were purely private, a run on the bank would result in the collapse of the banking system. Depositors would suffer massive losses. We would see the same kind of credit deflation we last saw in the 1930s. Most financial assets and a lot of “hard assets” would be lost to bankruptcy. Prices would decline massively. But the real wealth wouldn’t disappear. All that would happen is a massive transfer of wealth from creditors to debtors.

But that isn’t the only thing that will happen this time. By guaranteeing so many of these debts and obligations, governments are setting up an unprecedented collapse of not only the banking system, but of the political system itself. You might not know it, but the U.S. government has already pledged a large amount of your wealth to other people. And when that bill comes due, we’re going to have a huge problem. Think Detroit, on an international scale.

We’re already so late in the game that the expense of just maintaining the existing debts can’t be honestly financed. Negative interest rate policy (“NIRP”) is the new idea. Charging insurers and big banks negative interest rates might work for a while to keep the music playing because the public generally fears and hates these massive institutions.

But what will happen when the government must finally begin to tax the ultimate guarantor in our debt-backed, global banking system? What will happen when the taxpayers face negative interest rates, huge increases in taxes, enormous cuts in benefits, or crashing currency values?

When I look at the big picture, my fear isn’t that the market will crash… or that default rates will rise… or that interest rates will go up (or down). Those things are all going to happen in the normal course of events. My fear is that the stock market disappears. My fear is that the government defaults. My fear is that no bank will survive.

Sounds a little crazy, I’m sure. But it’s obvious to anyone who looks at the numbers that our current path is not sustainable. It is clearly beginning to completely break down.

Try to explain how negative interest rates will influence the housing market, for example. Will we soon see people applying for a “mortgage” at the Federal Housing Administration or Fannie Mae, and then being paid a monthly stipend in exchange for living in a house, for free, that someone else paid to build?

Does that make any sense?

Or consider the government-bond market itself. Do you think the public is going to volunteer to buy bonds that not only don’t pay interest, but charge a monthly fee to own? How will life-insurance companies meet death-benefit claims if bonds no longer pay any interest? How much capital would you put into the banking system if the banks begin charging you 2% or 3% a year just to keep your savings with them?

None of that stuff makes any sense. And yet, negative interest rates have become pervasive (along with their handmaiden, unsustainable levels of debt) in two out of the three major developed currency blocs. And we could certainly be next.
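
[ For scale, here is what the 2% or 3% annual charge mentioned above would do to savings, compounded in Python; the rates are Stansberry’s rhetorical figures and the $100,000 deposit is illustrative:

for fee in (0.02, 0.03):
    balance = 100_000.0              # an illustrative deposit
    for year in range(10):
        balance *= (1 - fee)         # the bank's annual charge
    print(f"{fee:.0%} fee: ${balance:,.0f} left after 10 years")
# 2% -> $81,707;  3% -> $73,742

]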

So were we right about the End of America? In some ways, yes, and in some ways, no. Like Yogi Berra famously said, “it’s tough to make predictions, especially about the future.” But in the most important way of all, our warnings simply weren’t big enough. We could never have imagined the debt bubble would continue to grow at an even faster pace… or that the government would have agreed to guarantee still more (and lower-quality) obligations, like student loans.

What should you do? The most important thing is to learn to avoid the “normalcy bias.” As these financial pressures build, keep your eye out for things that just don’t look right.

Here’s a good example… About 10,000 doctors each year “opt out” of serving Medicare patients. Thus, according to a new study from the nonprofit Kaiser Family Foundation, more than 20% of all U.S. primary care physicians will not accept Medicare patients. These numbers will continue to get worse as the government can’t afford to pay for the entire Baby Boomer generation’s health care costs. That means no matter what you’ve been promised about health care, actually getting an appointment (or care) keeps getting harder and harder.

Nobody will tell you that doctors abandoning Medicare by the thousands every year is a sign that the value of the dollar is falling. The nightly news will keep telling people that the consumer price index is flat – no matter how much actual living expenses are rising. And most people will believe it. Don’t be one of them.

That’s just one example of how the system is breaking… and it’s a tiny harbinger of what’s to come. If you keep your eyes open, you’ll see dozens more signs like this… the way stuff just doesn’t seem to work like it used to… and there doesn’t seem to be any way to get anything done unless you can afford to spend a lot of money.

Why is this all happening? The system is falling apart because the most important input in capitalism is the cost of money – the cost of capital. The longer the government manipulates the cost of borrowing, the worse all of these problems are going to get… and the slower our economy will grow.

The other sure sign that something is fundamentally broken in our society is that wages haven’t risen in about 40 years – just debts. How is that going to end? I think you know. Again… just keep your eyes on these topics. Look at the numbers. Don’t trust the government. It’s not going to save you… It’s going to try to save itself.

I might not write about these big, “behind the scenes” macroeconomic themes often, but I’m definitely watching them for you.”

Posted in Crash Coming Soon

Book review of “Spiral: Trapped in the forever war”

[ I understand why anyone who might be believed about the energy crisis keeps their mouth shut about peak oil: it would be like shouting “fire” in a crowded theater and could bring down stock markets world-wide. Why? Because there is no business that doesn’t depend on energy to exist and grow, and only in a growing economy can debts be repaid. In a shrinking, post-fossil economy, creditors will no longer be willing to lend money (see the peak oil study done for the German military).

Still, I am annoyed when experts like Mark Danner get a lot of media attention and don’t even mention the word oil. We wouldn’t be torturing people if we didn’t need oil so badly! And now that we are at peak fossil fuels, we won’t be torturing people for long. 

Perhaps I wouldn’t have bothered with this book review if I hadn’t sat through an excruciatingly long interview with Danner at U.C. Berkeley on “weapons of mass destruction” and whether these weapons existed or not.  I kept thinking he would use this opportunity to explain that we didn’t go to war over weapons of mass destruction, but because we depend so much on oil.  But no, the word “oil” didn’t even get mentioned.  Nor is oil mentioned in this book.

If experts don’t dare mention peak oil, there are other things they can do. They could bring up population, and the need for population control and women’s right to control their own bodies and lives with birth control and abortion, both to stop the 6th extinction and because getting population down via one child per woman is one of the only ways left, at this very late date, to soften energy and resource decline. Why doesn’t Danner use his public platform to push for gun control laws, so that when times get hard we don’t all become terrorists of each other in a chaotic civil war of all against all? I’ve read a lot of world history, and it appears to me that only the most brutal and the most cooperative survive hard times, war, and collapse. With over half of Americans owning a gun, surely our destiny is brutal rather than cooperative, dictatorship rather than democracy, local terrorists rather than foreign.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report ]

Mark Danner. 2016. Spiral: Trapped in the Forever War.

The opening quote in this book is “We must define the nature and scope of this struggle, or else it will define us.” Obama 2013

Danner has defined the nature and scope of this struggle as a war on terror.  He says that our presence in Iraq and Afghanistan is a Republican attempt to replace “being tough on communism as a defining cause in their political identity” with a war on terrorism.

To make the case for a “war on terror” as our reason for being there, Danner needs to state why we are NOT there for the 1980 Carter Doctrine, which states “the overwhelming dependence of the Western democracies on oil supplies from the Middle East…[any] attempt by an outside force to gain control of the Persian Gulf region will be regarded as an assault on the vital interests of the United States of America, and such an assault will be repelled by any means necessary, including military force.”

Or the Reagan Corollary to the Carter Doctrine, in which the U.S. guarantees both the territorial integrity and internal stability of Saudi Arabia.

Since then we’ve invaded, occupied, or bombed Iran (1980, 1987–1988); Libya (1981, 1986, 1989, 2011); Lebanon (1983); Kuwait (1991); Iraq (1991–2011, 2014–present); Somalia (1992–1993, 2007-present); Saudi Arabia (1991, 1996); Afghanistan (1998, 2001–present); Sudan (1998); Yemen (2000; 2002-present); Pakistan (2004-present); and now Syria.

The reason Carter said this is because many Americans, Europeans, and Chinese would die if the oil stopped flowing, but especially Americans, since no other nation on earth is as dependent on oil as we are (why we have to be the world’s unpaid policeman is another topic). Just consider a few of the things that would happen if trucks stopped running: by day 6, grocery stores would be out of food; restaurants, pharmacies, and factories would be closed; ATMs out of cash; sewage treatment sludge and slime storage tanks full; gas stations closed; 685,000 tons of trash piling up every day; livestock suffering from lack of feed deliveries. Within 2 weeks clean water would be gone, since purification chemicals couldn’t be delivered. Within 1 to 2 months coal power plants would shut down due to lack of coal, and because much natural gas is pumped through pipelines electrically, natural gas power plants would shut down too. And there goes the financial system – our energy, electricity, and the other 16 vital infrastructures are inter-dependent, which makes us incredibly vulnerable, since many of them can pull each other down (see When Trucks Stop Running: Energy and the Future of Transportation for details).

Michael Breen, of the Truman National Security Project, explained at the 2012 U.S. House of Representatives hearing “The American energy initiative part 23: A focus on Alternative Fuels and vehicles” why we’re doomed to continue to fight wars in the Middle East. He said: “Our dependence on oil as a single source of transportation fuel poses a clear national security threat to the nation. As things now stand, our modern military cannot operate without access to vast quantities of oil. A lack of alternatives means that oil has ceased to be a mere commodity. Oil is a vital strategic commodity, a substance without which our national security and prosperity cannot be sustained. The United States has no choice but to do whatever it takes in order to obtain a sufficient supply of oil. We share that sad and dangerous predicament with virtually every other nation on earth.”

The word “oil” appears just once in the book as an adjective for Iraq (secular, middle-class, urbanized, rich with oil), and the words petroleum, gasoline, and diesel don’t appear at all.  But the words torture, terror, terrorist, and terrorism each appear about 90 times.

If we want to get out of the Middle East, and stop risking that our ghastly activities against citizens of the Middle East will someday be turned on our own citizens in the U.S., then the President needs to educate the public about the need for energy conservation. Right now, Americans rush out to buy gas-guzzling cars every time the price of gasoline goes down. In fact, the New York Times reported today (June 24, 2016) that people are turning in their electric vehicles for gas guzzlers (see “American Drivers Regain Appetite for Gas Guzzlers”). CAFE standards were supposed to go up to 54 mpg, but actual new-vehicle fuel economy has dropped to 24 mpg since gasoline prices began dropping in 2014.

Former President Carter was invited to a 2009 Senate Hearing “Energy Security: Historical perspectives and modern challenges” to advise the Senate.  He said the president has a responsibility to educate the American public about energy, like he did over his four years in office. Memorably, one of his speeches in 1977 began: “Tonight I want to have an unpleasant talk with you about a problem unprecedented in our history. With the exception of preventing war, this is the greatest challenge our country will face during our lifetimes. The energy crisis has not yet overwhelmed us, but it will if we do not act quickly. It is a problem we will not solve in the next few years, and it is likely to get progressively worse through the rest of this century. We must not be selfish or timid if we hope to have a decent world for our children and grandchildren. We simply must balance our demand for energy with our rapidly shrinking resources. By acting now, we can control our future instead of letting the future control us”. This was unpleasant dinner conversation. President Carter was not invited back to serve a second term.

Energy and transportation policy, diesel engine manufacturers, and the trucking companies need to focus on energy efficiency, not endless growth. Conventional oil peaked in 2005 and has been on a plateau since then. That’s why our economy isn’t growing either: try to think of a business that doesn’t use energy. We need to reduce our consumption. Just-in-time delivery, where trucks arrive half empty with just what’s needed and return empty, has to stop.

We’ve traded away energy to gain time. We’ve traded away energy security to get stuff ASAP. Do we really have to have everything RIGHT NOW?

To address some of the comments at Amazon:

This book is not worth reading if the premise is incorrect.

The one good thing about peak oil, peak coal, and peak natural gas is that fossil fuel production is about to decline, possibly starting this year: oil, perhaps coal (we’re near or past peak coal), and natural gas as well. And peak oil means peak everything, since oil is the master resource that makes all other resources possible, including wind, solar, nuclear and other “alternatives”, from mining to diesel-fueled supply chains and delivery.

The premise that climate change is the greatest worry is incorrect. We are on the cusp of an energy crisis, and few see it coming because everyone assumes that solar, wind, biofuels and so on can save us.  Oil, coal, and natural gas replaced our wood/biomass civilization and enabled the human population to grow from 1.5 to 7 billion.

That means possibly starting this year, or within the next decade, carbon dioxide emissions will begin to decline, although 20% of the carbon dioxide already emitted is likely to remain in the atmosphere for millennia. Still, at worst this means only the lowest 4 or so of the IPCC projections will be reached. At energyskeptic I back this up with peer-reviewed science at: 3) Fast Crash, Extinction, But not from climate change: peak fossil fuels. I am not a climate change denier, and I worry that we’ve already set in place some non-linear, irreversible changes.

Low oil prices have led to fracked oil and gas production declining 25%. Fracked oil made up about half of the rise in oil production since the plateau began in 2005, and low prices have also crushed exploration: less oil was discovered in 2015 than in any year of the past 60, and in 2016 we’re finding even less. Only 3 billion new barrels were found in 2015, but globally we burned 30 billion. It won’t help for the price to rise again either; that would drive us back into an even worse depression than the 2008 crash, and oil prices even lower. All we have left is nasty, remote, hard-to-get, expensive oil that takes far more energy (and money) to extract than the cheap oil that fueled our growth from 1.5 to 7 billion people over the past 100 years.
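
The discovery math, as a minimal sketch using only the two figures above:

# Reserve replacement in 2015: barrels found vs. barrels burned.
found_2015 = 3e9         # new barrels discovered (cited above)
burned_2015 = 30e9       # barrels burned globally (cited above)
print(f"replacement ratio: {found_2015 / burned_2015:.0%}")                 # 10%
print(f"deficit: {(burned_2015 - found_2015) / 1e9:.0f} billion barrels")   # 27

In other words, for every barrel we found, we burned ten.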

Clearly the biggest danger is that resource wars will lead to nuclear war and a consequent nuclear winter that would kill billions of people. Preventing nuclear war, and using the remaining fossil energy to bury nuclear and other industrial waste, should clearly be our main priorities. And having let global population grow 5.5 billion beyond what a biomass (wood)-based civilization can support means that our fellow citizens may well be the new terrorists of the future, as the Middle East reverts to the nearly uninhabited desert it was before the brief age of oil.

Posted in Caused by Scarce Resources, Social Disorder, Terrorism, War, Wars

2006 Senate hearing on oil and gas, geothermal, and hydrogen fuel cells

Senate 109-503. June 27, July 11, July 17, 2006. Implementation of the provisions of the Energy Policy Act of 2005: enhancing oil and gas production; geothermal energy and other renewables; and hydrogen and fuel cell research and development. U.S. Senate hearing.

Excerpts from this 221-page document follow.

KATHLEEN CLARKE, DIRECTOR, BUREAU OF LAND MANAGEMENT, DEPARTMENT OF THE INTERIOR, ACCOMPANIED BY DALE HALL, DIRECTOR, U.S. FISH AND WILDLIFE SERVICE

BLM is an agency that is really quite small, but with a huge mission. We manage over 260 million acres of Federal lands in the West, and over 700 million acres of subsurface land. And the vision that we bring to the BLM is that we should manage these lands to sustain and enhance the quality of life for Americans. And we recognize that the multiple-use mission that we have requires that we pay attention to many resource values and to all of the ways that the public relate to those lands and benefit from their uses. And clearly an important element of our mission is managing the energy resources to serve the needs of the public, particularly at this time.

BLM lands produce about 18% of the natural gas that is consumed in this Nation.

The demand for onshore oil and gas is reflected in the dramatic increase in the number of applications for permit to drill (APDs) the BLM receives from one year to the next. The number of APDs received by the BLM has increased every year since 2002, and we anticipate this trend to continue into 2007 and beyond. A recitation of the numbers illustrates this dramatic trend. The BLM received 4,585 APDs in 2002; 5,063 in 2003; 6,979 in 2004; and 8,351 in 2005. Our current projection is that we will receive over 9,300 in 2006 and over 10,500 in 2007.
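
[ My note: a back-of-the-envelope check on the permitting numbers Clarke cites, nothing more:

# Compound growth of drilling-permit applications, 2002-2005.
apds = {2002: 4585, 2003: 5063, 2004: 6979, 2005: 8351}
cagr = (apds[2005] / apds[2002]) ** (1 / 3) - 1
print(f"growth 2002-2005: {cagr:.1%} per year")   # ~22.1% per year
# At that pace, 2007 would see roughly 12,500 APDs, so BLM's own
# projection of 10,500 assumes the boom moderates somewhat. ]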

MEL MARTINEZ, FLORIDA.   Because of the incredible size of reserves and the escalating price of natural gas, applications for permits to drill (APDs) have sky-rocketed, from 4,585 in 2002 to a projected 10,500 by 2007. This rapid increase has concerned many, not just in the environmental community but in the sportsmen groups as well. As a Senator from an environmentally sensitive state, I well understand these concerns when dealing with energy development on federal resources. Florida has very little public land left for hunters and fishermen to enjoy, which is partly a result of the staggering growth the state has experienced.

We also need to remember that our public lands are also our nation’s heritage— our inheritance, if you will. The forests, mountains, rivers, streams, the picturesque vistas and solitary wide-open spaces—as we move forward we need to remember that there is an intrinsic public value that cannot be measured only in Btu’s or kilowatts.

A Government Accountability Office (GAO) Oil and Gas Report from June 2005 identified the concern that increased permitting activity by the BLM has lessened the agency’s ability to meet its environmental protection and liability responsibilities. The report indicates that field managers under pressure to complete permitting processes often shift workloads from inspection and enforcement to application processing. Examples from the report describe how the Buffalo, Wyoming and Vernal, Utah field offices, the two field offices with the largest amount of permitting activity, were each able to meet their annual inspection goals only once in the past six years. Additionally, the report highlights that the Buffalo Field Office was only able to achieve 27 percent of its required environmental inspection goals during the 2004 fiscal year. Clearly it is in the interest of the public, state agencies, the BLM and industry to ensure that the guidelines of leases and permits are being followed.

TOM REED, WYOMING FIELD ORGANIZER, TROUT UNLIMITED, ARLINGTON, VA

Our public lands sustain some of the cleanest water, healthiest habitats, and finest fishing and hunting in North America. More than 50 million Americans hunt and fish; however, too often their voices are lost in the din of controversy that has come to define public land management. A significant and growing concern among sportsmen is the impact of energy development on fish and wildlife habitat on our public lands.

Wyoming is at the forefront of these energy and public land issues and is more than carrying its weight for the energy needs of this country. Oil and gas exploration and development is taking place at an unprecedented rate. It is estimated that 25 percent of the state will be impacted by oil and gas development to help meet our nation’s demands.

More than 26 million acres of public land managed by the Bureau of Land Management (BLM) in Wyoming, New Mexico, Utah, Colorado, and Montana are open for leasing. In a year’s time, BLM approved 5,700 new drilling permits in those states—a 62% increase over the previous year. BLM has a total of nine fisheries biologists in those five states. That’s about 3 million acres of leased land per fisheries biologist. Most people agree that is an impossible responsibility to place on nine people.

At the current rate of oil and gas exploration, scientists, particularly with the Wyoming Game and Fish Department, are having a difficult time keeping up with the pace of development.

Funding for land management agencies such as the Bureau of Land Management needs to be secured specifically for scientists who deal with impacts on wildlife and fisheries. State wildlife agencies like the Wyoming Game and Fish Department also need national funding so that biologists can be hired to deal solely with oil and gas issues.

These biologists should collect data, monitor impacts and design and implement mitigation, working closely with industry and land management agencies. There is a willingness among many in the industry to move in this direction, but Congress also needs to step up with money for these agencies so that our wildlife and fisheries resources are taken care of. We believe that the scale and pace of development on oil and gas fields far outstrips the organizational capacities of both state and federal agencies responsible for managing fish and wildlife and the habitats they depend on.

To clarify how overworked and understaffed our biologists are, consider: there are only three people in the Game and Fish Department’s Cheyenne office who deal with oil and gas issues. When one realizes that just one corner of Wyoming, the Powder River Basin, faces an estimated 60,000 wells, it is clear that is too much, too fast.

The Department estimates that it needs staff biologists that deal with nothing but oil and gas development to study, understand and try to mitigate impacts to crucial big game, sage grouse, sensitive species and fisheries habitats. That tally is as much as $2 million per year. Similar expenditures will be needed in other states and for federal agencies.

The purpose of this hearing is to determine the effects of the Energy Policy Act’s provisions. The fact is that after 11 months it is difficult to determine the effects on fish, wildlife, and water resources from the acceleration of development. As a lifelong hunter and angler, I can say with certainty, it isn’t looking good for game and fish. A biologist within the Wyoming Game and Fish Department told me that wildlife and fisheries are going to lose, and the best we can hope for is to minimize the loss.

Along these lines, one aspect of the Energy Policy Act that I would urge the Committee to look into is implementation of Section 1811 of the Act. That section authorized the National Academy of Sciences (NAS) to prepare a report on the impact of coalbed methane development on water. Unfortunately, because NAS is depending on funding from BLM to get this report together, and the BLM has not provided any money to them to do it, the study has not been initiated.

I mentioned earlier that there are also places where oil and gas development is inappropriate and I’d like to specifically thank our Senator Craig Thomas for his landmark stance against oil and gas drilling on our national forests. We, too, believe that our national forests should be off-limits to oil and gas drilling. These are our headwaters and our hunting grounds. They are places where Wyomingites go to recreate and relax, to spend time with family and friends. These are heirloom places that should be passed down to our children and to their children.

The Wyoming Range in the Bridger-Teton National Forest harbors some of the finest mule deer, moose and elk hunting in the state. It also is home to three important subspecies of native trout: the Colorado River, Bonneville and Snake River cutthroat. People from all over the country come to this region to fish, hunt and relax. Today, we are heavily developing country east of the range for oil and gas. Places like the Pinedale Anticline and the Jonah gas field are helping to fuel this nation, but they are also places that have been historically used as winter range for our big game herds. We are very concerned about the amount of development that is taking place on these winter ranges. It is a virtual certainty that our big game resource, and by extension the quality of big game hunting in this region, is going to decline. If we develop not only winter ranges, but migration routes and summer ranges as well, we believe it will spell the end of quality hunting in western Wyoming.

An example of why some places should be off-limits to energy development is the La Barge Creek drainage in the Wyoming Range. This stream is the site of a large restoration project being undertaken by the Wyoming Game and Fish Department to bring back a native trout, the Colorado River cutthroat. At a cost of an estimated $2 million, some 58 miles of stream are being reclaimed and revitalized for this native, pure fish that has swum these waters for thousands of years. Yet even while fisheries biologists are hard at work with the restoration process, there are daily flights of helicopters doing seismic testing in the backcountry headwaters of La Barge Creek for potential gas field development. Oil and gas development in the headwaters would mean roads, and roads heavily impact fish by flushing sediment into drainages and blocking the passage of spawning fish. These two things, native pure fish swimming in clear, clean water on our national forests and industrial development, cannot make for a happy marriage.

I want to share with you a few more examples from the field that help to explain why state fish and game departments, federal fish and wildlife biologists, and hunters and sportsmen across the Rocky Mountain West are so concerned about energy development.

In the past two years on the Uinta National Forest in Utah, the leasing of National Forest Lands was approved and carried out, and did not take into account the important fisheries restoration work that has occurred or the 2000 Range-wide Conservation Agreement and Strategy for Bonneville Cutthroat Trout. In at least one instance, neither the forest’s fisheries biologist nor District Ranger was aware that the resources they are charged with managing would be facing new threats and challenges resulting from leasing that occurred in the Diamond Fork, a watershed that sustains a Conservation Population of native Bonneville Cutthroat Trout and also in the Strawberry Valley, where Utah’s most popular trout fishery, Strawberry Reservoir, is located.

The Forest Service leased areas of the Wyoming Range. Many of these leases are part of watersheds that sustain core-conservation populations of Colorado River cutthroat trout, a species that is currently regarded as ‘‘sensitive’’ by both State and Federal agencies. However, the Bridger-Teton National Forest is lacking baseline data and inventory information. In addition to other concerns such as air quality, Canada Lynx habitat damage, and cumulative impacts, we don’t think it’s prudent to lease and develop areas in the absence of baseline data.

Preliminary results of an ongoing study on mule deer impacts in the Upper Green River Basin of Wyoming by Western EcoSystems Technology, Inc. (WEST), BLM, the energy industry and Wyoming Fish and Game show:

—Mule deer abundance on the Mesa has declined. The Mesa’s overall mule deer population is down 46 percent since 2002.

—Over-winter fawn survival rates have been slightly lower on the Mesa compared to the control region for four of the five years.

—Mule deer are moving from previously ‘‘high use’’ winter habitat areas into areas that previously had been of ‘‘low use’’, suggesting that drilling and development has displaced mule deer to less suitable habitats.

—Sublette County’s mule deer are among the most migratory in the West, traveling between 60 to 100 miles between summer and winter ranges. Documented migration routes, such as Trapper’s Point Bottleneck, remain important pathways between winter range in the Upper Green and summer range in the surrounding mountains.

KEN SALAZAR, COLORADO. The Energy Policy Act of 2005 contained many provisions to enhance domestic oil and gas production. I think it is necessary to recognize that, as we seek to expand our domestic energy production, land use conflicts are increasing. The search for energy is taking companies to land that is closer to, or neighboring, local communities as well as onto lands that generations of westerners have grown up fishing, hunting, and recreating on. There are also a sizable number of split-estate situations that are affecting family farms and ranches across the west. These lands are essential to our natural heritage and must be treated accordingly. I am increasingly concerned about the BLM’s rush to lease every acre of land as quickly as possible without regard to local communities. This rush is often at the expense of local communities with real, substantive concerns as to how this activity will affect their communities and the natural heritage of their area. I am further alarmed at the BLM’s willingness to brush these concerns aside and the contentious atmosphere that is being created.

In the west, we believe in multiple-use on our lands, but we realize that every use on every acre is not a sustainable approach. It seems, though, that the BLM has elevated energy exploration and development above every other use when multiple uses conflict.

There are two good examples in Colorado I would like to talk about. On Colorado’s Western Slope the City of Grand Junction and the Town of Palisade learned that mineral leases underlying their watersheds were to be leased. Both Grand Junction and Palisade protested the inclusion of these parcels in the lease sale, asking the BLM to delay their leasing so that the local communities could work with the BLM to assess the situation and to address their concerns prior to leasing. Along with Congressman John Salazar, who represents the district, I supported the local governments’ protests. The BLM went ahead anyway, ignoring the legitimate concerns of a pro-growth and pro-development community who simply needed more time to work with the agency. Also in western Colorado is the Roan Plateau. The Roan Plateau has been a contentious topic as the BLM develops the resource management plan for the area that is highly valued by local communities and sportsmen in Colorado. The final EIS is likely to contain provisions that have not been previously addressed in the process. I asked the BLM to commit to re-submit the plan for further public comment, if that proves to be the case, only to be flatly told ‘‘no’’.

As a United States Senator who is having difficulty working with the BLM in his own state, I can empathize with the local communities who feel that their concerns are being brushed aside in a mad rush to lease every acre for oil and gas exploration and development.

STATEMENT OF WESTERN COLORADO CONGRESS; WYOMING OUTDOOR COUNCIL; EARTHJUSTICE; SOUTHERN UTAH WILDERNESS ALLIANCE; OIL & GAS ACCOUNTABILITY PROJECT; WESTERN RESOURCE ADVOCATES; AMIGOS BRAVOS; SUSTAINABLE OBTAINABLE SOLUTIONS; CALIFORNIANS FOR WESTERN WILDERNESS; COLORADO ENVIRONMENTAL COALITION; POWDER RIVER BASIN RESOURCE COUNCIL; SAN JUAN CITIZENS ALLIANCE; THE WILDERNESS SOCIETY; COALITION FOR THE VALLE VIDAL; NORTHERN PLAINS RESOURCE COUNCIL; NATURAL RESOURCES DEFENSE COUNCIL; WESTERN ORGANIZATION OF RESOURCE COUNCILS

Our lives and communities continue to suffer damage from oil and gas activities. We do not oppose all exploration and drilling, but we want it to be done responsibly in the places where it is appropriate. We urge the Senate Energy and Natural Resources Committee to work with the Environmental Protection Agency (EPA), the Bureau of Land Management (BLM), as well as state and local governments, to ensure that pollution from oil and gas development is addressed and not simply ignored.

DAMAGE TO WATER QUALITY—BEST MANAGEMENT PRACTICES SHOULD BE MANDATORY

Landowners and communities across the West are suffering from erosion and runoff of large amounts of sediment from oil and gas activities. Sediment increases water-treatment costs for municipalities responsible for delivering drinking water to their residents. It can cause a loss of storage in reservoirs and increase agricultural ditch maintenance. It impacts recreation. It harms fish and other aquatic life. It decreases property values. The U.S. Environmental Protection Agency has determined that ‘‘siltation is the largest cause of impaired water quality in rivers.’’ National Pollutant Discharge Elimination System—Regulations for Revision of the Water Pollution Control Program Addressing Storm Water Discharges, 64 Fed. Reg. 68722, 68724 (Dec. 8, 1999). We have enclosed additional evidence of the harm excessive erosion and sediment from energy development is causing in the West.

DAMAGE FROM TOXIC CHEMICALS—MONITORING AND DISCLOSURE IS NECESSARY

We urge the Committee to press the Bureau of Land Management and Forest Service to disclose and regulate toxic chemicals used in oil and gas development. Where potentially toxic chemicals are used during oil and gas exploration and development operations, responsible agencies should monitor the levels and effects of these chemicals. The groups believe such complete disclosure and monitoring requirements are necessary for several reasons.

1. Toxic chemicals with known health effects are being used. Many of the products used in the exploration, drilling, and production phases of the natural gas and oil industry contain toxic chemicals with known human health effects. A recent analysis of products and ingredients used in natural gas development in western Colorado shows that oil and gas operators are using toxic chemicals throughout the development process, including during hydraulic fracturing. Of the 192 chemicals on the list, 53 percent are toxic to skin and sense organs, 48 percent cause gastrointestinal and liver damage, and 43 percent are neurotoxins. More than 26 percent of the chemicals are reproductive, kidney, or cardiovascular/blood toxicants, and 22 percent are carcinogens.

2. Toxic chemicals are being released into the environment. Toxic chemical products, as well as harmful hydrocarbons produced during oil and gas production, can and do escape into the environment via a number of pathways. For example, spills release chemicals into the air through volatilization, and spills can enter the water and soil. Additionally, chemicals injected into the ground may come in contact with drinking water aquifers; chemicals may escape from recovery fluids that are stored or placed in pits or tanks on the surface; and flammable chemicals may burn, releasing a host of toxic by-products into the air.

GEOTHERMAL ENERGY AND OTHER RENEWABLES TUESDAY, JULY 11, 2006

LYNN SCARLETT, DEPUTY SECRETARY, DEPARTMENT OF THE INTERIOR

The BLM manages 354 geothermal leases, 55 of which are producing and provide geothermal energy to 35 power plants.

Since 2001, the BLM has processed more than 200 geothermal lease applications, compared to 20 lease applications received in the preceding 5 years.

The USGS is updating a nationwide geothermal resource assessment, which will include estimates of electric power production potential from identified geothermal systems.

GEOTHERMAL ENERGY

Nearly 50 percent of the nation’s geothermal energy production comes from Federal lands. There are currently 354 Federal geothermal leases, 116 on NFS lands. At the present time, there are 5 producing leases on NFS lands, contributing to a 12-megawatt power plant and a 45-megawatt power plant. Generally, one megawatt provides enough electricity for about 1,000 homes.
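
[ My note: a sanity check of the “one megawatt per 1,000 homes” rule of thumb used throughout these hearings:

# 1 MW shared among 1,000 homes is a continuous 1 kW per home.
homes_per_mw = 1000
kw_per_home = 1000 / homes_per_mw          # 1 MW = 1,000 kW
kwh_per_year = kw_per_home * 8760          # hours in a year
print(f"{kwh_per_year:,.0f} kWh per home per year")   # 8,760
# A typical U.S. household uses roughly 10,000-11,000 kWh a year,
# so the rule of thumb is in the right ballpark. ]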

SALLY COLLINS, ASSOCIATE CHIEF, FOREST SERVICE, DEPARTMENT OF AGRICULTURE

Woody biomass is woody materials removed from National Forest System, other Federal, State and private lands as a byproduct of forest management activities. Woody biomass includes tree stems, limbs, tops, needles, leaves and other woody parts. Currently most of this material is underutilized, commercial value is low, markets are small to non-existent and the infrastructure needed to process this material is insufficient or nonexistent in many parts of the country.

JIM WELLS, DIRECTOR, NATURAL RESOURCES AND ENVIRONMENT, GOVERNMENT ACCOUNTABILITY OFFICE

The Energy Policy Act of 2005 (Act) contains provisions that address challenges to developing geothermal resources, including the high risk and uncertainty of developing geothermal power plants, lack of sufficient transmission capacity, and delays in federal leasing.

This testimony summarizes the results of a recent GAO report, GAO-06-629. In this testimony, GAO describes: (1) the current extent of and potential for geothermal development, (2) challenges faced by developers of geothermal resources, (3) federal, state, and local government actions to address these challenges, and (4) how provisions of the Act are likely to affect federal geothermal royalty disbursement and collections.

WHAT GAO FOUND

Geothermal resources currently produce about 0.3 percent of our nation’s total electricity and heating needs and supply heat and hot water to about 2,300 direct-use businesses, such as heating systems, fish farms, greenhouses, food-drying plants, spas, and resorts. Recent assessments conclude that future electricity production from geothermal resources could increase by 25 to 367% by 2017. The potential for additional direct-use businesses is largely unknown because the lower temperature geothermal resources that they exploit are abundant and commercial applications are diverse.

One study identified at least 400 undeveloped wells and hot springs that have the potential for development.

The challenges to developing geothermal electricity plants include a capital- intensive and risky business environment, technological shortcomings, insufficient transmission capacity, lengthy federal review processes for approving permits and applications, and a complex federal royalty system. Direct-use businesses face numerous challenges, including challenges that are unique to their industry, remote locations, water rights issues, and high federal royalties.

Harnessing geothermal energy is not easy. Developers of geothermal energy face many challenges, including the high risk and uncertainty of developing geothermal power plants, lack of sufficient capacity to transmit electricity from these plants to consumers, inadequate technology, and delays in leasing federal lands, which supply about 50 percent of the geothermal resources used to generate electricity.

My testimony today is based on a report we recently completed entitled ‘‘Renewable Energy: Increased Geothermal Development Will Depend on Overcoming Many Challenges.’’

Estimates of future electricity generation from geothermal resources suggest that the current production of 2,500 megawatts of electricity—enough to supply 2.5 million homes—could increase to as much as 12,000 megawatts in 11 years. Although the future potential of other geothermal applications is less known, about 400 undeveloped geothermal wells and hot springs could supply heat and hot water directly to a variety of businesses and other organizations.

The developers of geothermal resources face significant financial, technical, and logistical challenges. Geothermal electric power plant developers face a capital intensive and risky business environment in which obtaining financing and securing a contract with a utility are difficult, where recouping the initial investment takes many years, and where transmission expenses could be costly due to remote locations or capacity constraints on the electric grid. These developers must also use exploration and drilling technologies that are inadequate for the unique attributes of geothermal reservoirs. Developers of electric power plants on federal lands face additional administrative and regulatory challenges and a complicated royalty payment system. Businesses and individuals trying to tap geothermal resources for direct use face unique marketing, financing, and technical challenges and, in some cases, must contend with remote locations, restrictive state water rights, and high royalties.

Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions, GAO-05-414T (Washington, D.C.: March 16, 2005).

BACKGROUND.   Geothermal energy is literally the heat of the earth. This heat is abnormally high where hot and molten rocks exist at shallow depths below the earth’s surface. Water, brines, and steam circulating within these hot rocks are collectively referred to as geothermal resources. Geothermal resources often rise naturally to the surface along fractures to form hot springs, geysers, and fumaroles. For centuries, people have used naturally occurring hot springs as places to bathe, swim, and relax. More recently, some individuals have constructed buildings over these springs, transforming them into elaborate spas and resorts, thereby establishing the first direct use of geothermal resources for business purposes. Businesses have also established other direct uses of geothermal resources by drilling wells into the earth to tap the hot water for heating buildings, drying food, raising fish, and growing plants. Where the earth’s temperature is not high enough to supply businesses with geothermal resources for direct use, people have made use of the ground’s heat by installing geothermal heat pumps. Geothermal heat pumps consist of a heat exchanger and a loop of pipe extending into the ground to draw on the relatively constant temperature there for heat in the winter and air conditioning in the summer.

Geothermal resources can also generate electricity, and this is their most economically valuable use today. Only the highest temperature geothermal resources, generally above 200 degrees Fahrenheit, are suitable for electricity generation.

When companies are satisfied that sufficient quantities of geothermal resources are present below the surface at a specific location, they will drill wells to bring the geothermal fluids and steam to the surface. Upon reaching the surface, steam separates from the fluids as their pressure drops, and the steam is used to spin the blades of a turbine that generates electricity. The electricity is then sold to utilities in a manner similar to sales of electricity generated by hydroelectric, coal-fired, and gas-fired power plants.

In the United States, geothermal resources are concentrated in Alaska, Hawaii, and the western half of the country, primarily on public lands managed by the Bureau of Land Management (BLM).

As of January 2006, 54 geothermal power plants were producing electricity, and companies were constructing 6 additional geothermal power plants in California, Nevada, and Idaho that collectively will produce another 390 megawatts of electricity.

Over half of the nation’s electricity generated from geothermal resources comes from geothermal resources located on federal lands in The Geysers Geothermal Field of northern California; in and near the Sierra Nevada Mountains of eastern California; near the Salton Sea in the southern California desert; in southwestern Utah; and scattered throughout Nevada.

Industry and government estimates of the potential for electricity generation from geothermal resources vary widely, due to differences in the date by which forecasters believe the electricity will be generated, the methodology used to make the forecast, assumptions about electricity prices, and the emphasis placed on different factors that can affect electricity generation. Estimates published since 1999 by the Department of Energy, the California Energy Commission, the Geothermal Energy Association, the Western Governors’ Association, and the Geo-Heat Center at the Oregon Institute of Technology indicate that the potential for electrical generation from known geothermal resources over the next 9 to 11 years is from about 3,100 to almost 12,000 megawatts.

In 2005, over 2,300 businesses and heating districts in 21 states used geothermal resources directly for heat and hot water. Nearly all of these are on private lands. About 85 percent of these users are employing geothermal resources to heat homes, businesses, and government buildings. While most users heat one or several buildings, some users have formally organized heating districts that pipe hot water from geothermal wells to a central facility that then distributes it to heat many buildings. The next most plentiful direct use application is for use by resorts and spas, accounting for over 10 percent of sites. About 244 geothermally heated resorts and spas offer relaxation and therapeutic treatments to customers in 19 states. Two percent of geothermal direct use applications consist of heated greenhouses in which flowers, bedding plants, and trees are grown. Another two percent of geothermal direct use applications are for aquaculture operations that heat water for raising aquarium fishes for pet shops; catfish, tilapia, freshwater shrimp and crayfish for human consumption; and alligators for leather products and food.

Other direct use geothermal applications include dehydrating vegetables, like onions and garlic, and melting snow on city streets and sidewalks.

Geothermal groups reported that most attempts to develop geothermal resources for electricity generation are unsuccessful, that costs to develop geothermal power plants can surpass $100 million, and that it can take 3 to 5 years for plants to first produce and sell electricity.

Although some geothermal resources are easy to find because they produce tell-tale signs such as hot springs, most resources are buried deep within the earth—at depths sometimes exceeding 10,000 feet—and finding them often requires an in-depth knowledge of the area’s geology, geophysical surveys, remote sensing techniques, and at least one test well. The risks and high initial costs associated with exploring for and developing geothermal resources limit financing. Moreover, few lenders will finance a geothermal project until a contract has been signed by a utility or energy marketer to purchase the anticipated electricity. Geothermal industry officials describe the process of securing a contract to sell electricity as complicated and costly.

In addition, lack of available transmission creates a significant impediment to developing geothermal resources for electricity production. In the West where most geothermal resources are located, many geothermal resources are far from existing transmission lines, making the construction of additional lines economically prohibitive, according to federal, state, and industry officials. Finally, inadequate technology adds to the high costs and risky nature of geothermal development. For example, geothermal resources are hot and corrosive and often located in very hard and fractured rocks that wear out and corrode drilling equipment and production casing.

Developing geothermal resources for direct use also faces a variety of business challenges, including obtaining capital, overcoming specific challenges unique to their industry, securing a competitive advantage, distant locations, and obtaining water rights. While the amount of capital to start a direct-use business that relies on geothermal resources is small compared to the amount of capital necessary to build a geothermal power plant, this capital can be substantial relative to the financial assets of the small business owner or individual, and commercial banks are often reluctant to loan them money.

Challenges that are unique to certain industries include avoiding diseases in fish farms; combating corrosive waters used in space heating; and controlling temperature, humidity, and light according to the specifications of the various plant species grown in greenhouses. Even when overcoming these unique challenges, successful operators of direct use businesses may need to secure a competitive advantage, and some developers have done so by entering specialty niches, such as selling alligator meat to restaurants and constructing an ‘‘ice museum’’ in Alaska where guests can spend the night with interior furnishings sculptured from ice.

Furthermore, developing direct uses of geothermal resources is also constrained because geothermal waters cannot be economically transported over long distances without a significant loss of heat. Even when these resources need not be moved, obtaining the necessary state water rights to geothermal resources can be problematic. In areas of high groundwater use, the western states generally regulate geothermal water according to some form of the doctrine of prior appropriations, under which specific amounts of water may have already been appropriated to prior users, and additional water may not be available.

[ My note: this is a fantastic place to stay to see the Aurora Borealis northern lights near Fairbanks, Alaska ]

BERNIE KARL, PROPRIETOR, CHENA HOT SPRINGS RESORT, FAIRBANKS, AK. My name is Bernie Karl. I am the proprietor of Chena Hot Springs outside of Fairbanks, Alaska. Chena Hot Springs will be the site of the only new geothermal power plant installation in the United States this year. It will also be the site of the lowest temperature resource (165 °F) ever used for commercial power generation in the world.

Moderate temperature geothermal resources are by far the most prevalent in the United States and around the world. Estimates indicate there are between 20,000 and 40,000 MW of geothermal electrical energy potential in the U.S. alone in the 190 to 300 degrees Fahrenheit range. In fact, you could hit those temperatures right here underneath Washington DC if a hole 20,000 feet deep were drilled. Heat from the earth, whether used for power generation or heating buildings and homes, is the most reliable form of renewable energy available to us. It doesn’t depend on clear skies, windy days, or rainfall, making geothermal a good base load alternative energy. While using the heat from the earth for heating and cooling is economical throughout the U.S., our best geothermal resources for power generation are in the western states.
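
[ My note: Karl’s 20,000-foot claim checks out if you assume a typical continental geothermal gradient of about 25 degrees C per kilometer; the gradient and surface temperature below are my assumptions, not figures from the hearing:

# Rough temperature at the bottom of a 20,000-foot hole.
depth_km = 20_000 * 0.3048 / 1000     # 20,000 feet is about 6.1 km
surface_c = 15.0                      # assumed mean surface temperature, C
gradient = 25.0                       # C per km, typical stable crust
t_c = surface_c + gradient * depth_km
print(f"{t_c:.0f} C = {t_c * 9 / 5 + 32:.0f} F")   # ~167 C = ~333 F
# At or above the top of the 190-300 F band he cites. ]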

HYDROGEN AND FUEL CELL RESEARCH AND DEVELOPMENT MONDAY, JULY 17, 2006

PETE V. DOMENICI, NEW MEXICO.  The President’s request for a hydrogen initiative was $289 million this year, and we hit that mark in the Energy and Water Appropriations Subcommittee. With this level, over a quarter of a billion dollars, we know we cannot support every possible fuel cell technology for every possible application, and we have to have priorities, and that means we get people and institutions who feel let down and who feel like they have a lot to complain about.

So, with level investment, over a quarter of a billion annually, we know we cannot support everyone, as I indicated, but we’re trying to do it in a way to give it the best chance of success. I’m hopeful that today’s witnesses are going to advise the committee on whether we are achieving the right balance among the technologies, and I know there is an ongoing argument about research on stationary technologies versus mobile technologies as it applies to fuel cells. We can’t resolve that. Both are needed. One could maintain that the Department of Energy’s entire Hydrogen Program is a high-risk research area.

DONALD L. PAUL, VICE PRESIDENT & CHIEF TECHNOLOGY OFFICER, CHEVRON CORPORATION, SAN RAMON, CA

As Chevron’s chief technology officer, I oversee all facets of our company’s new energy technology development and commercialization, including hydrogen generation and hydrogen infrastructure, and can share our experience as well as our views regarding the critical next steps.

Oftentimes, the infrastructure part of the energy equation is ignored. Our current infrastructure took us almost a century to build. The challenge of building an entirely new one is unique, and we haven’t faced that as a Nation for some time. It’s absolutely critical that both the devices that use hydrogen as a fuel for the vehicles and the hydrogen infrastructure be developed simultaneously. This is part of the key challenge.

We believe that central vehicle fleets and transit systems are the most practical means of using hydrogen in the near future, addressing both infrastructure and vehicle challenges. Fleets, such as buses, use a centralized fueling point, and the hydrogen storage challenge can be overcome by vehicle size.

CHALLENGES TO COMMERCIALIZATION

Production and distribution of hydrogen.  Hydrogen must be available when and where it will be needed. Hydrogen is a fuel—not a natural resource. It must be manufactured from other sources, so how the supply system is developed is critical. The two primary sources of hydrogen are water and hydrocarbons. For the past 50 years, Chevron and the industry have been engaged in the large-scale conversion of hydrocarbons to hydrogen through refinery and gasification processes. As you may be aware, oil refineries are the largest current producers and users of hydrogen.

Additional industrial uses are for chemicals, metals, and electronics manufacturing. Approximately 9 million tons of hydrogen is produced for industrial applications in the United States (world-wide production is about 40 million tons).

The core technical and business challenge is to transform and adapt the hydrogen production and distribution system.

Storage of hydrogen

Storing hydrogen in the car, at the refueling station and throughout the delivery infrastructure is a significant critical path challenge. While much attention is given to storing hydrogen on board the vehicles, and rightly so, similar attention is needed in the other critical locations in the hydrogen infrastructure. In particular, cost effective dynamic storage in moderate volume is essential at the production and fueling sites. Today, all hydrogen storage is essentially in high-pressure vessels, typically at 5,000 pounds per square inch. Even at these pressures, the energy stored is far lower than with typical liquid hydrocarbon fuels.

For the evolution to light duty vehicles, most believe that cost effective solid-state storage will be required. This is an important focus area for R&D programs. The bottom line is that the development of the infrastructure for hydrogen as a fuel will require advancements across a full system including production, distribution, and storage.
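
[ My note: a rough calculation of what “far lower” means, using the 5,000 psi figure above; the compressibility correction, heating value, and gasoline figure are my assumptions, not from the testimony:

# Energy per liter of 5,000-psi hydrogen vs. gasoline, back of the envelope.
P = 5000 * 6894.76                  # 5,000 psi in pascals
M, R, T = 0.002016, 8.314, 298      # kg/mol, J/(mol*K), kelvin
rho = P * M / (R * T) / 1.2         # kg/m^3; Z ~ 1.2 for H2 at this pressure
mj_per_liter = rho * 120 / 1000     # 120 MJ/kg lower heating value
print(f"H2: {mj_per_liter:.1f} MJ/L vs. gasoline: ~32 MJ/L")   # ~2.8 vs 32
# Roughly a factor of ten, before counting the weight of the tank itself. ]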

In sum, to develop a commercial-scale infrastructure, the cost of using hydrogen to consumers needs to be competitive in the market with other energy fuels. Large scale deployment requires that energy suppliers be convinced that hydrogen can compete with other fuels in the market. While there is reason for encouragement in special markets, broad commercial applicability has not been demonstrated.

Dr. MCCORMICK.  Given the magnitude of the challenges that we see, both environmentally and in terms of dependence on petroleum, hybrid vehicles won’t get us there. Secretary Garman has testified about the Department of Energy projections, and if you put those kinds of efficiencies on top of what we’re seeing in terms of population growth and other trends, you can’t get there from here. So, clearly, under any circumstances, hybrids are a stopgap. So what we have done is really taken a portfolio approach. In the near term, it’s hybrids, it’s advanced engine technology of more conventional sorts, it’s E85, recognizing, as we look at the world and all the emerging economies and the pressures that are going to be there, both environmentally and energy-wise, that we’re going to need every amount of energy we can get, and we’ve got to use it most efficiently.

Mr. LEULIETTE. Let me echo that from our perspective as a supplier. We see the economics of hybrids and E85, et cetera, being such that they are intermediate solutions. Our biggest concern in the supplier community is that the industry, the Government, or other groups look at these, what we call ‘‘feel-good solutions’’, as solutions, and stop focusing energy on the longer-term scenario. Because if we spend a lot of money on E85 infrastructure, if we promise that hybrids will be the solution, we will all be sitting around this table 3 or 4 years from now facing an even greater challenge, not having spent the money properly to deal with the root cause of the problem.

Posted in Energy Policy, Geothermal

U.S. House looks at how to improve the nation’s highway freight network

[ Like all books and articles I read on transportation for my book, this session assumes endless growth and worries about future congestion, which will not be a problem on the other side of peak oil, which is coming soon. Conventional oil peaked in 2005; over half of the 500 giant oil fields that provide 50% of our oil are declining at 6% a year, a rate that itself increases over time; and unconventional oil (10% of supplies) won’t be able to make up the difference. At least this U.S. House of Representatives session is more concerned with freight than cars, which are wasting what conventional oil remains…
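
A 6% decline rate is easy to underestimate, so here is the compounding as a minimal sketch (assuming, for simplicity, that the rate stays constant):

# What a steady 6% annual decline does to giant-field output.
import math
r = 0.06                                            # decline rate from above
print(f"output halves every {math.log(2) / r:.1f} years")   # ~11.6 years
for years in (10, 20, 30):
    print(f"after {years} years: {math.exp(-r * years):.0%} remains")
# after 10 years: 55%; after 20 years: 30%; after 30 years: 17%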

Also of note is Susan Alt’s comment: “We have electric trucks, but the big, heavy ones we would not be able to haul any load because we would have 50,000 pounds of batteries unfortunately.” Alt is a senior vice president for public affairs at Volvo Group North America.

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation, 2015, Springer]

House 113-55. February 27, 2014. Improving the Nation’s Highway Freight Network. U.S. House of Representatives. 101 pages. 

Excerpts

THOMAS E. PETRI, WISCONSIN.  The Nation’s highway system is an essential part of the broader freight transportation system. Not every community is located adjacent to a railroad, airport, waterway or port, but consumer goods are almost invariably transported along the Nation’s 4 million miles of highways and roads for at least part of the journey.

America’s reliance on the highway system is growing faster than the system itself. The Federal Highway Administration estimates that in the next 30 years there will be 60% more freight that must be moved across the United States.
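
[ My note: the FHWA forecast works out to a modest-sounding annual rate; it is the compounding that makes it big:

# Implied annual growth behind "60% more freight in 30 years".
growth = 1.60 ** (1 / 30) - 1
print(f"{growth:.2%} per year")   # ~1.58% per year
]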

ELEANOR HOLMES NORTON, DISTRICT OF COLUMBIA.  The American people understand all too well what we mean when we say we have got to transport people. They think about the roads and the highways. They think about their transit. They think about their cars, but I am not sure that they understand what makes this country great, and it is the transportation of goods so that those people can use the goods.

MARK GOTTLIEB, P.E., SECRETARY, WISCONSIN DEPARTMENT OF TRANSPORTATION, on behalf of the American Association of State Highway & Transportation Officials

I want to thank you for the opportunity to testify on behalf of AASHTO and the State DOTs on the importance of efficient and safe freight movement to our State’s economies and to provide input on our freight transportation challenges. We support the establishment of an overall national freight transportation policy. However, we believe that designation of highway and freight networks cannot be accomplished through a top-down Federal process. A one-size-fits-all set of designation criteria fails to address unique, state-specific freight considerations. The methodology used to designate the 27,000-mile national highway freight network resulted in critical gaps and omissions and does not reflect many significant freight corridors operating within, between and among the States.

Gerald R. Bennett, mayor, Palos Hills, Illinois, on behalf of the Chicago Metropolitan Agency for Planning.

My agency, the Chicago Metropolitan Agency for Planning, CMAP, elevated freight as a high priority within our region’s award-winning Go to 2040 comprehensive plan. Our region is an unparalleled hub not only of domestic but also international freight. Over a billion tons of freight worth more than $3 trillion move through the Chicago region each year. A quarter of all U.S. freight and nearly all U.S. intermodal freight originates, terminates or passes through Metropolitan Chicago. Nearly half of the freight in the region is through traffic, an indication of our central role in the national freight system.

To address freight congestion, the Chicago Region Environmental and Transportation Efficiency Program, called CREATE, the first in this Nation, was established in 2003. This is a public-private partnership of the U.S. Department of Transportation, the Illinois Department of Transportation, the Chicago Department of Transportation, Amtrak, the region’s Metra transit system, and private railroads. CREATE is dedicated to implementing specific rail improvements in and around the Chicago area. Its 70 projects include new flyovers, grade separations, improved signaling, and equipment modernization; as of November 2013, 20 projects had been completed and 9 more were under construction. Most of the completed projects are rail improvements, many of which are on the belt corridor that circles Chicago to the west and south, with connections to multiple railroads. Eight of the eleven belt corridor projects have been completed and another is under construction. In contrast, relatively few projects have moved forward to mitigate freight’s negative impacts on local communities. Only 3 of CREATE’s 25 highway rail grade separation projects have been completed and only 3 are under construction.

Due to the lack of funding, 13 grade separations have not started at all and not one of the program’s 7 passenger corridor projects was completed in the past 10 years. This is also highly problematic because in a truly intermodal economy, grade separations facilitate the movement of truck traffic through the region.

Susan Alt, senior vice president for public affairs, Volvo Group North America

Volvo Group manufactures heavy trucks under the brand names of Mack Trucks, Volvo Trucks, Volvo construction equipment, Volvo Penta marine engines, Prevost and Nova transit coaches and city buses. The Volvo Group has six manufacturing facilities in the United States, in the States of Virginia, Tennessee, Maryland, Pennsylvania, New York and we are headquartered in North Carolina.

We rely on more than 50,000 truckloads of freight, of material coming into our factories each year. We rely heavily on the Ports of Norfolk and Baltimore to import 25 percent of our production material, and those same ports plus the Port of Charleston, South Carolina, for the export of our finished goods. We rely on the entire Interstate Highway System for the movement of our material, most notably Interstate 81, as four of our factories are located on or very near it. It is America’s infrastructure that makes all of this possible. The health of America’s freight network matters because it is important that our American manufacturing operations remain competitive in a global economy. In recent years the industry has embraced ‘‘just in time’’ or lean manufacturing philosophies that reduce manufacturing material in the production line. This new efficiency has manifested as a substantial benefit to Volvo, our customers and the economy as a whole. However, to be efficient, we have to have the right material at the right place at the right time.

In modern manufacturing, we cannot have excess inventory in our assembly or our delivery process. We deliver parts to the production line just as they are needed for assembly. Our ability to move parts from our suppliers to our factories and finished goods from our factories to our end customers relies on the infrastructure of America. There are disturbances we can plan for, but what we cannot control is unexpected delay due to congestion. This is where we get into real trouble. When, for example, a truck is caught in a traffic jam and cannot make its delivery, the ripple effect of that one delivery can be costly. It means we do not build the product on time, tying up capital. It means the product will have to be reworked outside normal manufacturing quality processes, tying up man-hours. It means sending workers home early. It means not delivering to the customer on time and hurting our competitiveness, all because of that one missed shipment.

Mr. PETRI. Chicago is where the railroad industry came together 150 years ago. Trains went to Chicago, and they went west from Chicago, and you can go to the center of Chicago and the north, south, east and west tracks are at the same grade level. They have to stop and wait for each other. I understand that anyone who sees railroad cars in my part of the country notices they are covered with graffiti, because these trains are stopped for hours or days negotiating their way through Chicago. And we understand they still take freight off one railroad, put it on trucks, and drive it through Chicago to another railroad. In this age of ‘‘just in time’’ delivery and mobility, this is a significant burden on commerce.

Mr. BENNETT. You know, the story was it would take 2 days to go from Los Angeles to Chicago and 2 days through Chicago and then another 2 days to the east coast. Six of the seven major national Class I railroads come through the Chicago metropolitan area.

Ms. NORTON. And yet this is a crossroads of the United States, perhaps dramatically pointing to the need to create a stronger focus. We note that with the TIGER grants, which are probably the only lump sum we have for such intermodal projects, when freight competes with what people experience every day, which is getting in their own cars, freight sometimes loses out. So my question here goes to how do we get the focus on funding freight. When you consider, for example, that MAP–21 scratches the surface, if you will forgive the pun, of just daily transportation across the roads, of course freight uses that, too, but do you think, for example, that there should be a separate set-aside for freight? Do you think there should be a freight-only fund?

Ms. ALT. I do not think the consumers would accept an increase for freight because they do not appreciate the fact that it is the freight that brings them everything that they have every day.

Mr. MAIER. If you look at our business today, the fundamental change that is occurring is e-commerce, which means that, you know, 10 or 15 years ago packages went primarily to businesses. You know, with the growth of the World Wide Web and shopping online, more and more of our packages are going to people’s homes. And to be frank, I mean, that has changed the business. Package weights have come down, for instance, as shipments that used to be destined to a manufacturing facility or a distributor or to a retail store, those packages are now becoming smaller because they are going directly to somebody’s home. And in our business, our volume, and this would be LTL and certainly parcel express or ground, our business goes to where people are. So you have to look at population centers.

FedEx Ground is headquartered just outside of Pittsburgh. Last fall the Pennsylvania Department of Transportation imposed weight limits on approximately 1,000 bridges in the State. Now, they did that to slow deterioration and extend the operational life of the bridges pending the approval of transportation funding legislation that was subsequently signed last November. This requires transportation companies like ours to take alternate routes to go around those bridges, which adds time and cost. We burn more fuel. We create more carbon emissions. It requires us to engineer our network differently based on those changes, and that creates costs that we have to figure out how to cover somehow.

There are only 11 States in the country that allow the use of 33-foot trailers within their borders. We need Congress to change the policy so that we can use them nationwide.

Ms. ALT. Yes. So the Federal excise tax is 12 percent of the purchase price of the vehicle. Taking natural gas aside for a second, since 2010 the cost of the typical truck has gone up from an average of around $100,000 to $125,000, and that $25,000 increase has come from emission reduction control systems. So we have cleaner trucks. They are the cleanest they have ever been, and that is a great thing, but they cost a whole lot more to produce. So in the last 4 years, the Federal excise tax went from $12,000 on a $100,000 truck to $15,000, another $3,000 just to meet emissions. The Federal excise tax has already been dramatically increased because the purchase price of the trucks has gone up so dramatically because of emissions. When we sell a truck with natural gas, primarily because the fuel tanks themselves are very expensive, you are now getting to sometimes as close to $200,000 for the cost of a truck, and even though it is a cleaner, lower emission truck, you are paying 12 percent on the purchase price. So the buyer has to pay that extra tax. So they are being burdened.
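[ Alt’s arithmetic checks out: the federal excise tax is 12 percent of the purchase price, so the $25,000 emissions-driven price increase adds exactly the $3,000 of tax she describes. A minimal Python sketch (prices from her testimony, code my own illustration):

    # Federal excise tax (FET) on heavy trucks: 12% of purchase price
    FET_RATE = 0.12

    for price, label in [(100_000, "typical truck, 2010"),
                         (125_000, "typical truck, 2014, with emissions equipment"),
                         (200_000, "natural gas truck, per Alt's testimony")]:
        print(f"{label}: price ${price:,}, FET ${price * FET_RATE:,.0f}")

    # The $25,000 emissions-driven price increase alone adds 0.12 * 25,000 = $3,000 of tax.

]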

Ms. HAHN. Cargo leaves Los Angeles and takes maybe 48 hours to get to Chicago and then another 30 hours to get through Chicago. What do you think are some proposals out there? What are the best proposals we have for that last mile before our cargo leaves or reaches its destination? And what can we do to really ease congestion, which in my mind will certainly help you with your on-time deliveries? It also reduces pollution. We know that when trucks line up for that last hour queuing to get in and out of ports, that is sometimes the worst pollution in those neighboring communities.  What is a proposal out there, or a recommendation that we could make, to ease congestion in the last mile?

Ms. ALT. We have electric trucks, but the big, heavy ones we would not be able to haul any load because we would have 50,000 pounds of batteries unfortunately.

Mr. BENNETT. Grade crossings are very expensive, around $50 million per grade crossing.

Mrs. NAPOLITANO. People cannot afford $125,000 with the new equipment for environmental purposes. So they buy used ones and so we continue to pollute.

Mr. BARLETTA. I understand completely the impact that freight has on our local roads. And my question to you would be: how can we better assist the States as they support these critical roads and bridges, especially in light of Mr. Maier’s observation that the volume of freight moving by truck is expected to more than double by 2035? And then, putting my hard hat and mayor’s hat back on, being in the construction industry, I also know the difference between an interstate highway and a local road. I know there is up to 12 inches of concrete on an interstate, and I also know there is only a few inches of asphalt on the local road. My question to Mayor Bennett is: can you discuss the impact of freight on the first and last mile? And how do localities bear this burden?

Mr. BENNETT. It is obviously a lot of money, and the situation in our community, and I think it was mentioned in California also, is that the last mile is literally where most of these grade separations need to be fixed: around the intermodal system where freight is transferred from trains to the highways, in and around those rail yards. So it is all tied together. The cost of doing that for a local community is unbearable. It is a $50 million cost. It is not so much the roadway itself. It is the overpass or underpass that costs the huge amounts of money for the local government.

Mr. GOTTLIEB. Thank you. The first and last mile connections are critical and vital to have an effective network, and one of the things we have had happen in our State is we have become a leader in the production of frack sand for hydraulic fracturing, and we are sort of a hub for it in the western part of the State. And one of the things we have found as we have looked at the increasing demand for the transportation of frack sand both by rail and on the highway system is that we do not really have a big problem on our system, but when you get off of the State system and you get close to these facilities, then there can be problems.


Water as a geopolitical threat. U.S. House of Representatives 2014

[ Water scarcity is causing unrest and could lead to war in Asia and the Middle East. 

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts:  KunstlerCast 253, KunstlerCast278, Peak Prosperity]

House 113-127. January 16, 2014. Water as a geopolitical threat. U.S. House of Representatives, 87 pages.

EXCERPTS:

DANA ROHRABACHER, CALIFORNIA.  We examine the topic of water as a strategic resource and its potential use as a threat.

Those of us who have lived around water our whole lives may be unaware of how water may be manipulated maliciously, both for material gain and for political coercion. Yet our country’s history is filled with the issue of water: water wars, people in conflict, and the great accomplishments of people working together.

Our witnesses today will make clear that such conduct, the manipulation of water for power’s sake, is routine when it comes to countries like Communist China. As our witness Gordon Chang will explain, China’s illegal occupation of Tibet puts it in control of the roof of the world and thus the headwaters that serve half the world’s population. We could be confident that the resulting water disputes would be handled responsibly and reasonably, perhaps solved in international forums or agreements as many other countries do, if China were not the world’s worst human rights abuser, a country that has had no political reform whatsoever in the last 20 years, a period in which we have seen such incredible reform in other and former communist countries. Our Congressional Research Service testimony makes clear that most water matters are resolved peaceably through negotiations, and I might say remarkably, these issues are solved by people acting responsibly, providing leadership, and reaching out to find solutions. Some 300 agreements over the last 70 years have unfolded in that way. Today, a warning alarm is sounding about China’s control of such water resources because we have seen that China, even in the last few months, is not so reasonable when making its territorial claims. China isn’t the only flash point for the water issue, however, and water controversies are nothing new.

Water is a volatile issue in the Middle East today, for example, but if you read the history, water played a very significant role in creating the environment that led to the Six Day War back in 1967. Basically, that conflict began when the Syrian Government decided to dam up waters that were flowing into Israel followed by an Israeli air attack which destroyed those dams. Then Egypt and other Arab neighbors were called into the conflict and it almost led to a superpower confrontation which would have been a disaster for the whole planet. And that all began with what, a water controversy over how much water was going to be flowing into Israel and the attempt by Syria to dam up that water. Today, there are heartening signs, however,

The situation involving the basin countries of the Nile River, for example, deserves watching and we need to look at this very closely, because the Nile flows through ten different countries and Egypt is one of the final ones; Egypt views the Nile as its primary national security and economic lifeline. So with so many countries upstream, that is an area where we have got to try to work with these powers to make sure that efforts are again made for cooperation, rather than confrontation. This subcommittee held a hearing in July of last year on the dam controversy between Tajikistan and Uzbekistan, a controversy that is now before the high-level international conference on water cooperation which opened in August. The Uzbeks argue that the proposed Rogun Dam in Tajikistan would cost them some $600 million a year. Since this issue has not been resolved, we will continue to monitor it closely.

I have studied the history of water between California and the other border states and Mexico. And I think we have played pretty hardball with the Mexicans on this. And I think there have been very legitimate complaints on the part of Mexico in the past that the United States was not operating with them with the same type of sincerity and the same type of respect that we should have been doing to a country that is our neighbor that we wanted to maintain a peaceful relationship with.

According to the State Department, nearly 800 million people around the world do not have access to clean water. More than 1.5 billion still lack access to improved sanitation facilities. Each year, more than 4 billion cases of diarrhea cause 2.2 million deaths, most of them in children under the age of 5. In addition to the lives lost, the total economic losses associated with inadequate clean water supply and sanitation are estimated at more than $250 billion annually. The scarcity of clean water and sanitation disproportionately affects women and children. In many countries, women and young girls bear responsibility for meeting the water needs of the entire family. Collecting water can consume up to 5 hours a day, time that could be spent in school or improving their families’ livelihoods.

Mr. BLUMENAUER.  What we are seeing in Syria today, the experts tell us, is in no small measure a result of sustained drought that drove almost 1 million farmers to migrate to urban areas, hungry and jobless; that was a flash point for the initial protests against the regime, as Assad had neither the interest nor the ability to deal with it.

Over the next 20 years, we are going to see more urban instability due to population increase, disease, poverty, and social unrest. The United States and international partners have been making some progress, but we risk reversing the progress we have made due to the explosive population growth that is going to occur in sprawling urban slums, where it is difficult and expensive to provide sanitation, quickly leading to pollution and disease.

JEREMY M. SHARP, SPECIALIST IN MIDDLE EASTERN AFFAIRS, FOREIGN AFFAIRS, DEFENSE, AND TRADE DIVISION, CONGRESSIONAL RESEARCH SERVICE

I will provide an overview of the so-called Red-Dead Canal and its potential implications for U.S. policy. To the surprise of many outside observers, just over a month ago at the World Bank headquarters here in Washington, Israel, the Hashemite Kingdom of Jordan, and the Palestinian Authority signed a trilateral Memorandum of Understanding, or MOU. This MOU outlines a series of water-sharing agreements which include the initial-phase construction of what has been informally referred to as the Red-Dead Canal. The Red-Dead Canal is a decades-old plan to provide fresh water to water-scarce countries in the surrounding area while simultaneously restoring the Dead Sea, which has been shrinking at an alarming rate. The original Red-Dead concept was to pump water from the Red Sea and desalinate it for use by the participating countries. The leftover brine would then be gradually channeled to the Dead Sea, helping restore the sea’s receding water levels. Regional environmentalists have long criticized plans to restore the Dead Sea using Red Sea water. They warn that the transfusion of water from the Red Sea into the Dead Sea could have serious ecological consequences that would negatively impact both Dead Sea tourism and industry. In 2005, the World Bank sponsored what became an 8-year-long feasibility study of the Red-Dead Canal concept. Almost a year ago to the day, various media outlets reported that construction firms involved in the feasibility study had declared that the project was technically feasible, although it would come with a steep price tag, costing at least $10 billion and taking years to construct. The Kingdom of Jordan has vigorously pursued the Red-Dead Canal concept. Jordan is one of the most water-deprived countries in the world and is constantly searching for new water resources. The civil war in neighboring Syria is exacerbating Jordan’s water crisis, as over half a million Syrian refugees have fled to Jordan, increasing the population by 9 percent within just 2 years. In August 2013, the Jordanian Government announced its intent to construct a scaled-down version of the canal entirely on Jordanian territory. In terms of scale and cost, what the Jordanians have announced and agreed on with Israel and the Palestinian Authority is far less ambitious than the initial Red-Dead concept. Estimates suggest that construction of the desalination plant and pipeline under the new MOU may cost between $450 million and $1 billion. However, it is unclear who will pay for the new project.

For Jordan, the MOU could be considered a major diplomatic achievement. Though the current plan is a scaled-down version of the original concept, the Kingdom will receive additional fresh water resources at a time of heightened scarcity, owing to the Syrian civil war. Nevertheless, as the title of this hearing suggests, security and political challenges remain. Arab cooperative infrastructure projects with Israel could be possible targets for extremist violence as has been the case in Egypt, where gas pipelines traversing the Sinai peninsula to Israel and Jordan have been repeatedly sabotaged by terrorists.

Regional environmentalists have long criticized plans to restore the Dead Sea using Red Sea water. They warn that the transfusion of water from the Red Sea into the Dead Sea could have serious ecological consequences, including large-scale growth of algae and formation of gypsum, that would negatively impact both Dead Sea tourism and industry. Some of these environmentalists propose instead that countries should stop diverting water from the Jordan River, which feeds into the Dead Sea.

There are also risks associated with doing nothing, such as potential instability in a water-deprived Jordan. If living conditions in Jordan deteriorated further, one could argue that the stability of a dependable Arab partner for the United States and a reliable peace partner for Israel would be jeopardized. Over the past few years, rural southern Jordan has witnessed repeated protests coming from within tribal communities that serve as the bedrock of the monarchy. These areas require economic development if they are to remain stable.

MAURA MOYNIHAN, AUTHOR & ACTIVIST

Below is the text from her slides: “Climate Change in Tibet: Asia’s Rivers at Risk,” Maura Moynihan

http://docs.house.gov/meetings/FA/FA14/20140116/101658/HHRG-113-FA14-Wstate-MoynihanM-20140116.pdf

The Tibetan Plateau is a unique geomorphic entity; its 46,000 glaciers comprise the Earth’s third largest ice mass. This “Third Pole” is a vital component of the planet’s ecosystem, filled with minerals, timber and above all, water; Tibet is the fount of the Yangtze, Yellow, Indus, Ganges, Brahmaputra, Chenab, Sutlej, Salween and Mekong, which flow through 11 nations, nourishing three billion people from Peshawar to Beijing. The preservation and management of Tibet’s glaciers and the rivers they sustain is one of the greatest challenges facing humanity in the 21st century. Tibet’s waters flow through eleven countries, where population growth and industrial development are projected to double within 50 years. The combined effects of rapid development, desertification and water scarcity have already created extreme cycles of droughts and floods, food shortages and pandemics.

SHRINKING GLACIERS, DEPLETED AQUIFERS

  • In 2009 the United Nations Inter-Governmental Panel on Climate Change reported that the glaciers on the Tibetan Plateau, the source of fresh water for a fifth of the world’s population, are receding at an alarming rate. Temperatures in Tibet are rising 7 times faster than in China. Scientists predict that most Tibetan glaciers could vanish by 2035 if present levels of carbon gas emissions are not reduced. Carbon emissions must be cut by 80% by 2030 to preserve the glaciers of Tibet, the source of water for China, India, Bangladesh, Pakistan, Burma, Thailand, Vietnam and Laos.
  • Asia is now facing a shrinkage of river-based irrigation water supplies, which will disrupt grain and rice harvests. Overpumping is swiftly depleting underground water resources in India and China. Water tables are rapidly falling in the North China Plain, East Asia’s principal grain producing region. In India, wells are going dry in almost every state.
  • The United States’ international climate negotiator Todd Stern stated: “The science is clear, and the threat is real. The facts on the ground are outstripping the worst case scenarios. The costs of inaction, or inadequate action, are unacceptable.”

Industrial Development in an Age of Scarcity

70% of the world’s irrigated farmland is in Asia. China and India, the world’s most populous nations and largest grain producers, have millions of new irrigation projects that are rapidly depleting aquifers. Satellite images released in August 2009 by the National Aeronautics and Space Administration (NASA) of the United States show massive depletion of groundwater storage in Rajasthan, Punjab and Haryana during 2002-2008. Indian government data show that major reservoirs have shrunk by 70% since 2000. Deglaciation on the Tibetan Plateau, combined with depletion of underground water resources, could create “permanent famine conditions”, as described by the environmental scientist Lester Brown in his 1995 Worldwatch Institute report “Who Will Feed China?” China’s growth has pushed its river systems to a dangerous tipping point. Two-thirds of all cities in China are short of water; agricultural runoff from chemical fertilizers, industrial effluent and urban waste have poisoned reservoirs. China’s Environmental Protection Administration reports that environmental protests are rising by 50% a year. Since 1949, two-thirds of the Yangtze Valley lakes have disappeared; the total surface area of lakes in the middle and lower Yangtze Valley has shrunk from 18,000 square kilometers to 7,000 in 50 years.

Today, all but one of Asia’s major rivers, the Ganges, are controlled at their sources by the Chinese Communist Party.

  • In a mere quarter century the People’s Republic of China has risen from poverty and isolation into the 21st century’s emergent superpower. China’s rise as an industrial and military super power has dramatically altered the global balance of power in the quest for what remains of the planet’s resources. The Chinese government dismisses concerns of its own scientists and those of neighboring states, alarmed by a sudden decline in water levels and fish stocks, caused by hydro dams. China has increased militarization of the Tibetan Plateau and strictly controls journalists, scientists and international observers who seek to research conditions in Tibet.
  • Few international agreements exist for sharing data and coordinating usage of these rivers. As developing nations manage water supplies as economic commodities in an age of scarcity, water rights and laws must be reappraised in the context of the climate crisis. The effects of receding glaciers and rivers choked by hydro dams will be felt well beyond the borders of the Tibetan Plateau, with profound impacts over a wide area in Asia and great risks of increased poverty, reduced trade and economic turmoil. In the 1990s China refused to sign the UN treaty on transboundary rivers.
  • Since Chairman Mao invaded Tibet in 1951, China has administered a huge military infrastructure across the Tibetan Plateau, which gives China a continuous border with Thailand, Burma, Bhutan, India, Nepal and Pakistan, and is now filled with military airfields and PLA battalions. In the coming age of “water wars”, China has a firm hand on the water tower of Asia.

THE THREE PHASES of the CHINESE COMMUNIST OCCUPATION of TIBET

PHASE 1: 1950’s – 1960’s: MILITARY INVASION

From 1951-56, Khampa warriors fight back against Chinese aggression. The PLA sends reinforcements; thousands of survivors from Kham and Amdo are driven into Utsang. In 1957 HHDL and Panchen Rinpoche go to Varanasi for Buddha Jayanti: HHDL asks Nehru for refuge to expose Chinese atrocities in Tibet. Chou En-lai tells Nehru to send HHDL back to Tibet. Two years later, the Chushi Gangdruk delivers HHDL to Indian custody. Nehru’s Hindi-Chini Bhai-Bhai policy, which gave China control of Tibet, becomes one of the great blunders of the 20th century. 1959: HHDL escapes to India. PLA troops slaughter Tibetan civilians and commence looting and razing over 6,000 monasteries. The PLA advances to the borders of India, Bhutan, Sikkim, Nepal and Ladakh.

  • In 1962 China invades India from the Tibetan Plateau and occupies large swaths of Indian territory. India is defeated, and China commences its military consolidation of the plateau, unhindered.
  • 1963: Tibet is sealed behind the Bamboo Curtain and caught in the catastrophe of the Great Leap Forward, wherein 60-80 million people die under Mao’s adoption of the Soviet model of collectivized farming. 1.2 million Tibetans, likely more, are killed through armed conflict and famine. No news of conditions inside China and Tibet reaches international governments or media. The US launches the Vietnam War to contain Chinese expansionism, while millions in China are starving to death.
  • Chinese military engineers build roads across the Tibetan Plateau and install military bases and armed encampments. Millions of acres of virgin forest are clear-cut and shipped to the mainland.

CHINA IN TIBET: Phase 2: 1970s-1980s: The DEATH of MAO and the rise of DENG

  • ORPHANS OF THE COLD WAR: The Tibetan people are imprisoned behind the Bamboo Curtain throughout the Cultural Revolution, which is extremely vicious in Tibet.
  • 1976 Mao Zedong dies. 1981 Deng Xiaoping comes to power. Deng launches the policy of “Reform and Opening Up”. China builds the Friendship Highway linking Lhasa and Kathmandu.
  • 1980: Hu Yaobang visits Tibet and writes his famous White Paper condemning China’s treatment of the Tibetan people. The Deng regime relaxes restrictions on Tibetan religion and culture. In 1981, China issues the first tourist visas to Tibet for western travelers.
  • MILITARY ROADS built by the PLA across Tibet in Phase 1 of the occupation, allow massive population transfer of Han Chinese onto the Tibetan Plateau.
  • The roads also facilitate a 2nd exodus of refugees to escape from Tibet: since the 1980’s over 20,000 people have escaped from Tibet.
  • 1987: Anti-Chinese demonstrations break out in Lhasa. For the first time since the Chinese invasion, tourists capture images of extreme military repression.
  • These images reach the international press; China’s Tibet is at last EXPOSED – and CHINA DECLARES MARTIAL LAW

CHINA IN TIBET: Phase 3: 1990s-2000s: MINES, DAMS and WAR GAMES

  • 1988-1989: More demonstrations in Lhasa are captured by tourist cameras. China responds by periodically banning western tourists.
  • 1989: The Berlin Wall goes down, but the Tiananmen Square Massacre follows. The death of Hu Yaobang summons millions of Chinese mourners into the streets of Beijing. Gorbachev arrives in Beijing, and students from Beijing University launch a hunger strike in support of democratic reforms in China’s government. After a month-long stand-off, Deng orders PLA troops into the square to crush the protestors. Thousands of unarmed Chinese citizens are slaughtered.
  • In response to the Tiananmen Square Massacre, HH Dalai Lama is awarded the 1989 Nobel Peace Prize. The true history of China’s rape and pillage of Tibet is exposed. BUT as HHDL’s star rises, China cracks down harder on the people of Tibet.
  • 1995: Despite pressure from the US Congress and rights groups, US President Bill Clinton grants China MFN, Most Favored Nation trading status, removing all trade sanctions imposed on the PRC after the Tiananmen Square Massacre. China implements the “Strike Hard” policy: banning all images of HH Dalai Lama, enforcing Communist re-education at monasteries, and aggressively suppressing Tibetan ethnic identity.

CHINA IN TIBET: Phase 3: 1990s-2000s: Mines, Dams and War Games, continued…

2000: China is granted entry into the World Trade Organization and launches XI BU DAI FA, “The Opening Up of the Western Regions,” a vast industrial development plan to exploit and extract Tibet’s vast natural resources, facilitated by rail and roadway expansion.

2001: 9/11 strikes New York City. China fades from international attention and scrutiny, and accelerates exploitation of Tibet’s natural resources. Chinese engineers launch construction of huge mining operations and hydro dams on Tibet’s rivers, which flow into South and Southeast Asia.

2006: The Qinghai–Xizang railway OPENS in LHASA, bringing millions of tourists into Tibet. The railroad also facilitates the transport of minerals, stone and lumber from Tibet, and brings over 250,000 Chinese engineers into Tibet.

2010: China announces that it has built 6 military airfields in Utsang, and debuts a new fleet of drone aircraft, with technology the US claims has been stolen by Chinese spies. A 2012 US Dept. of Defense report to Congress on China’s military capabilities notes Beijing’s push to develop longer-range unmanned aircraft, including armed drones, “expands China’s options for long-range reconnaissance and strike.”

In 2000 China launched a vast development project entitled “Xi bu dai fa”, the “Opening and development of the Western Regions” of Xinjiang and Tibet, which together comprise half of China’s land mass.

POPULATION TRANSFER: A massive influx of Chinese settlers, urbanization and forced relocation of nomads swiftly followed. The Xizang railway, which opened in 2006, transports Tibet’s vast supplies of minerals, stone and lumber to the mainland and brings in a flood of Chinese engineers and laborers who have built at least 160 hydro dams across Tibet and have plans for hundreds more.

The Chinese government is aggressively re-settling Tibetan nomads and pastoralists into concrete housing complexes. Xinhua, the Chinese state-run media, claims the resettlement is necessary to protect the source area of key Chinese rivers in north-west China’s Qinghai province. Dr. Andreas Schild, the Director General of the International Centre for Integrated Mountain Development, said: “Mountains without mountain people will not be sustainable.”

MINES and DAMS: Chinese engineers now operate multiple dams and mines all across Tibet, polluting the rivers at their source; you can find images on Google Earth and on Michael Buckley’s comprehensive website www.meltdownintibet.com. The Chinese mainland is also imperiled: in April 2013, Yangtze River water flows were at their lowest level on record. Dams and industrial waste have caused the Yellow River to dry up before it reaches the sea. Large swaths of northern China have had no snow or rain since 2008. Nearly half of China’s wheat crop, covering 9.5 million hectares, was afflicted by drought. In 2008 China’s State Council admitted: “By 2030, China will have exploited all its available water supplies to the limit.”

To date, at least 131 people inside Tibet have self-immolated to protest Chinese Communist assaults on Tibetan religion and culture and the desecration of Tibet’s ancestral lands. There is another potent source of this explosion of Tibetan outrage, one that receives negligible international coverage: the covert history of China’s rape and pillage of Tibet’s ancestral lands and waters. The elemental facts about Tibet’s size, wealth of natural resources, and strategic location on the Eurasian continent are not widely understood, but satellite images, maps and environmental studies of the Tibetan Plateau reveal the enormous resource and strategic advantage gained by its capture, and explain why China refuses to enter into dialogue with the Dalai Lama or share information with the nations of South and Southeast Asia about the exploitation of Tibet’s lands and waters. China’s occupation of Tibet has created a looming environmental catastrophe for the nations of South and Southeast Asia, but China refuses to discuss its development plans with neighboring states.

TIME magazine states that the wave of self-immolations in Tibet is the “most under-reported story of 2013.”

CHINA’S ATTACKS on the DALAI LAMA SUBVERT DISCUSSION of the EXPLOITATION of TIBET’S RESOURCES

China has succeeded in its mission to isolate and discredit the Dalai Lama by punishing heads of state who meet with the Tibetan leader and threatening any institution that invites him to speak, thereby stifling any discussion of China’s oppressive and destructive governance of Tibet. A study from the University of Gottingen in Germany of countries whose top leadership met with the Dalai Lama showed that they incurred an average 8.1 percent loss in exports to China in the two years following the meeting. Called the “Dalai Lama Effect,” the study found the negative impact on exports began when Hu Jintao took office in 2002. China’s obsessive demonization of the Dalai Lama, the distinguished Nobel Peace Prize Laureate who has lived in exile in India since 1959, has succeeded in subverting all rational and increasingly urgent discussion of China’s exploitation of Tibet’s resources, and of how Chinese mining and hydro dam projects across Tibet have created a looming environmental catastrophe in Asia, the world’s most populous continent. Despite irrefutable evidence of the dangers of over-exploiting Tibet’s water resources, the Chinese government will not modify or downscale plans for dams, tunnels, railroads and highways across the Tibetan Plateau. Of all the countries which depend on Tibet’s waters, the People’s Republic of China alone can finance any project it chooses without recourse to international lenders.

TIBET IS A WAR ZONE

In 2012, Chinese Defense Minister Liang Guanglie stated: “In the coming five years, our military will push forward preparations for military conflict in every strategic direction…We may be living in peaceful times, but we can never forget war, never send the horses south or put the bayonets and guns away.” In 2009, computer analyst Greg Walton examined computers in the Dalai Lama’s private office in Dharamshala and uncovered “GhostNet”, a massive Chinese cyberespionage hacking system which penetrated 103 countries, reaching as far as the personal laptop of US Defense Secretary Robert Gates. Secretary Gates stated that “Chinese cyber espionage intrusions into US defense networks are nothing less than an act of war.” Tourists who have visited Tibet provide witness. A physician from Boston who went to Tibet in November 2013 observed: “The Tibetan people appeared totally dominated by a chilling degree of militarization and repression. I did not see any ways or means by which the Tibetans could fight back against such overwhelming force. I could see people wanted to talk to me but were too afraid…I have never seen such a ruthless, cruel and effective police state in my life.”

The Chinese Communist leadership is facing a crisis of legitimacy, at home and abroad

  • The Chinese economy is in decline. For decades CCP propaganda has been highly effective in promoting China as the new military and economic super power of the 21st century, but financial analysts are concerned about bad debt, a real estate bubble and declining exports.
  • There are violent uprisings in China EVERY DAY: in 2010 over 100,000 “incidents” occurred. The CCP propaganda machine is weakening. Chinese netizens are subverting Xinhua and censorship: images of police brutality are now widely circulated.
  • China’s “Peaceful Rise” is now seen as a threat to global stability. China has installed a formidable military-industrial infrastructure across the high ground of the Tibetan Plateau, with military roads, airfields, army bases, dams, and mines bordering Burma, Bhutan, Nepal, India, and Pakistan. At the ASEAN Conference in Bali in November 2011, representatives from Vietnam and Cambodia vehemently criticized Chinese aggression in Southeast Asia and asked for American protection from the “Chinese Threat.”
  • In 2013 Chinese Troops made over 200 incursions into Indian territory from TIBET. Chinese soldiers planted the Chinese flag in three regions of Bhutan that border Tibet, and are now claiming sovereignty over “Southern Tibet”, all Tibetan cultural zones in India, Nepal and Bhutan.

THE PRICE OF APPEASEMENT: For six decades the People’s Republic of China has raped and pillaged Tibet without impediment or penalty, but the world will pay a high price for IGNORING the Chinese Communist occupation of Tibet… So goes the old saying:

HE WHO CONTROLS TIBET CONTROLS THE WORLD

Moynihan’s testimony at the House session

This is a NASA astronaut photograph of Tibet. One great success of Chinese propaganda is to persuade the world that Tibet is insignificant, that it is a lot smaller than it is, but it wasn’t until the 20th century, the era of armed warfare, the airplane, and the tank, that Tibet could be conquered. Even Genghis Khan failed. Here is another NASA astronaut photograph of the Tibetan Plateau, which is considered the third pole: it is the third largest ice mass on planet Earth after the North and South Poles. And in Asian folklore it is known as the western treasure house, because it is also one of the world’s largest suppliers of minerals.

Next slide. This is a 1920s British map of independent Tibet, and you can see in the insert just how large the Tibetan Plateau is. The Tibetan Plateau is a unique geomorphic entity with 46,000 glaciers comprising the world’s third largest ice mass, but what is significant in the age of water scarcity is that it is the source of the great rivers of Asia: the Yangtze, the Yellow, the Indus, the Ganges, the Brahmaputra, the Chenab, the Sutlej, the Salween, and the Mekong, which flow through 11 nations, nourishing 3 billion people from Peshawar to Beijing. They all rise in Tibet. The preservation and management of Tibet’s glaciers and the rivers they sustain is one of the greatest challenges facing humanity in the 21st century, because Asia is the most populous continent and its industrial development and population are projected to double within the next 50 years. The combined effects of rapid development, desertification, and water scarcity have already created cycles of droughts and floods, food shortages and pandemics. But what is China doing about this? Shrinking glaciers, depleting aquifers. I am going to skip over some of this in the interest of time.

Asia is now facing a very serious water crisis.

Today, all of Asia’s rivers except one, the Ganges, are controlled at their sources by the Chinese Communist Party. Very few international agreements exist for sharing data and coordinating usage of these rivers. As developing nations manage water supplies as an economic commodity in the age of scarcity, water rights and laws must be reappraised. However, China has refused to engage in any negotiations with the downstream riparian nations on the use of Tibet’s waters. Here is a map which shows where the major rivers come from. There are four that come from eastern Tibet and four that come from western Tibet, from Mount Kailash. Again, the Ganges originates just a few kilometers outside of the control of the Chinese Communist Party. Now, most maps will only show U-Tsang Province, which is in yellow, as being Tibet, but in the 1950s and into the early 1960s the Chinese partitioned Tibet as they moved from east to west. Amdo Province and Kham Province have been partitioned into Qinghai, into Ganze, into all these other provinces, but this is historical Tibet, so you can see how large it is. It comprises almost one third of Communist China’s land mass.

As you can see, this is another important map. It shows China’s grip on Asia; the occupation of Tibet gives China an enormous strategic and resource advantage. The next map I got from a Japanese Web site; next slide, which shows the major ethnic regions. And of course, China learned a lesson from the collapse of the Soviet Union, which my father predicted would happen through the forces of ethnicity. China is, in fact, a multi-ethnic state. The one star of the Han and the four stars of the other groups declare that it is a multi-ethnic state. And as you can see in yellow, that is East Turkestan, the Uighur people; then Tibet, Inner Mongolia, and Manchuria. So there is potential for ethnic conflict, again over exploitation of resources. These are the three main phases of the Chinese Communist occupation of Tibet. Phase 1, 1950s-1960s, military invasion. That is when the deforestation, especially of eastern Tibet, began. Millions upon millions of acres of first-growth forest were destroyed at this time, forest which had for many centuries functioned as a barrier to prevent flooding into Southeast Asia and Southwest China. Phase 2, the death of Mao and the rise of Deng; these are details you can go into later when you have more time.

Now we are into Phase 3, which is mines, dams, and war games. In Phase 2, a lot of military roads were built across Tibet. I have traveled over Tibet several times. As my friend and colleague Paul Berkowitz said, it is very, very remote, and you can see that there is no one to stop the Chinese. There will be no NATO. There will be no NATO troops. There will be no U.N. peacekeeping forces. They control the roof of the world. And now, because of the population transfer of Han Chinese onto the Tibetan Plateau and the military infrastructure that they installed, they have been able in Phase 3 to build thousands upon thousands of hydro-electric dams and mines and military airstrips and military garrisons. In 2000, China launched a vast development project called Xi Bu Dai Fa, the opening and development of the western regions of Xinjiang and Tibet, which together comprise half of Communist China’s land mass.

Here is a hydro dam on the Sengye Kabab, which means mouth of the lion. Before these were Chinese rivers or Indian rivers, they were Tibetan rivers, and there is an enormous body of folklore and mythology associated with all of them. The Sengye Kabab is the Indus, which flows through India and Pakistan. This is one of the many, many—okay, this is one of the most serious sources of conflict between Communist China and democratic India: the diversion of the Yarlung Tsangpo, the Tibetan name for the Brahmaputra, in the north-south water transfer program. The Chinese are building a tunnel to divert the waters of the Brahmaputra to northern China, which has been suffering from extreme drought conditions for many, many years.

Mr. ROHRABACHER. Could you please repeat where you said the water is being diverted from where to where?

Ms. MOYNIHAN. From the bend in the Brahmaputra as it flows down into northern India and into Bangladesh.  Here is a dam on the Mekong. There are over seven hydro-electric dams on the Mekong which is the main source of fresh water for all of Southeast Asia.

Mr. ROHRABACHER. Is that actually affecting the amount of water that flows into Southeast Asia then?

Ms. MOYNIHAN. Absolutely. Water flows on the Mekong are said to be down 40 to 50 percent and fish stocks have also declined dramatically. And I met with several Thai senators who were flown by the Chinese Government to northern Tibet to look at the dam projects of which they are very proud and the Thai senators——

Mr. ROHRABACHER. And that water is going to be used in China?  The water then, rather than flowing into the Mekong, which is a very wide river, is being diverted and is staying in China then?

Ms. MOYNIHAN. Yes. It is being used to create reservoirs that mostly serve southern Tibet and southwestern China and to create hydro-electric power. This is a very important map created by my friend, Michael Buckley, whose Web site, Meltdown in Tibet, I encourage everybody to visit. It shows some of the hydro dams on the Drichu, the Zachu, and the Gyalmo Ngulchu, which are the Yangtze, the Mekong, and the Salween. Just look how many hydro-electric dams. There are dams that are 10 to 15 feet high, and the tallest dam in the world is on the Mekong. The widest dam is at Three Gorges on the Yangtze. But you can see this is creating a looming environmental crisis in all of South and Southeast Asia. Next slide. China has over 300,000 dams. It is the world’s number one dam builder. You can see most of the concentration of dams is in Tibet, on the four rivers of eastern Tibet. Tibet was always called in the nation’s folklore the western treasure house because of its mineral, oil, gas, and salt deposits. Again, you can study these maps in detail. Another important issue is the decline of permafrost in Tibet, which will release methane gas, and the shrinking glaciers are also of tremendous concern. If we go to the next, there is the map of the melting permafrost. Next slide. This is a glacial lake created near the Rongbuk glacier on the northern side of Mount Everest in Chinese-occupied Tibet. In the last 90 years, the glacier’s tail has lost 90 vertical meters in depth.

Why is this one of the most under-reported stories in the world? China spends so much time attacking the Dalai Lama, the distinguished Nobel Peace Prize laureate who has lived for almost 55 years in exile in India. What has this done? It confuses diplomats and subverts all discussion of the exploitation of Tibet’s resources. My dad always said the Chinese have a perverse obsession with the Dalai Lama, but it works, because it diverts everyone’s attention to this strange obsession and we are not talking about what is going on in Tibet; next slide, please; because Tibet is a war zone. In 2012, Chinese Defense Minister Liang Guanglie said, ‘‘In the coming 5 years, our military will push forward with preparations for military conflict in every strategic direction. We may be living in peaceful times, but we can never forget war, never send the horses south or put the bayonets and guns away.’’

The Chinese are not about to engage in any negotiation, which you see are possible in the Middle East and other conflict zones, about the use of Tibet’s waters. There is a map next of China’s military investment and expansion. Tibet is also a strategic launching pad for drones. The Chinese have stolen drone technology from American firms and an American State Department official went to an air show in southern China and was alarmed to see all these drones. And they have installed many of these drones in six new military airports they have built in southern Tibet. They can reach India. They can reach New Delhi in 20 minutes.

What is the price of appeasement? For six decades the People’s Republic of China has raped and pillaged Tibet without impediment or penalty, but the world will pay a high price for ignoring the Chinese Communist occupation of Tibet. Genghis Khan is said to have uttered the famous phrase, ‘‘He who controls Tibet, controls the world.’’


Gordon G. Chang Subcommittee on Europe, Eurasia, and Emerging Threats of the House Committee on Foreign Affairs

I am a writer and live in Bedminster, New Jersey. I worked as a lawyer in Hong Kong from 1981-1991 and Shanghai from 1996-2001. Between these two periods, I frequently traveled to Asia from California. I regularly go there now. I am the author of The Coming Collapse of China (Random House, 2001) and Nuclear Showdown: North Korea Takes On the World (Random House, 2006). I write regularly about China’s relations with its neighbors and the United States.

China’s Water Crisis

The People’s Republic of China, over the course of decades, has grossly misused and mismanaged its lakes, rivers, and streams. The resulting freshwater crisis, in the words of senior Beijing leaders, even threatens the existence of the Chinese state. As Wang Shucheng, a former water minister, tells us, “To fight for every drop of water or die: that is the challenge facing China.” Beijing officials, unfortunately, act as if they believe their overblown rhetoric and are now fighting their neighbors for water. China, the world’s “hydro-hegemon,” is the source of river water to more countries than any other nation, controlling the headwaters needed by almost half of the world’s population, in Central, South, and Southeast Asia as well as Russia. The People’s Republic has 14 land neighbors, 13 of them co-riparians, but is a party to no water-sharing treaties, refusing to even begin negotiations on water-sharing with other capitals. “No other country has ever managed to assume such unchallenged riparian preeminence on a continent by controlling the headwaters of multiple international rivers and manipulating their cross-border flows,” notes Brahma Chellaney in Water, Peace, and War: Confronting the Global Water Crisis. As the noted water expert reports, the Chinese have commandeered Asia’s great rivers by completing on average one large dam a day since 1949. Until recently, those dams were located inside China’s borders. Now, however, Beijing is seeking to harness the water resources of one of its neighbors, Burma, for its own benefit. As it does so, it is encountering local resistance there, and as it encounters local resistance it is blaming the United States for its deteriorating relationships with that once pliant neighbor. The tendency of Chinese leaders to hold us responsible for their own failures can only worsen our ties with them in the years ahead.

The Myitsone Dam

In 2009, a Sino-Burmese consortium controlled by China Power Investment, a Chinese state-owned entity, began work on the Myitsone Dam, located at the headwaters of the Irrawaddy River. It will be the first dam on that vital waterway and part of a seven-dam cascade, a $20 billion undertaking. Myitsone has been called Beijing’s attempt to export the Three Gorges Dam, and it is even more unpopular in Burma than that massive project is in China. The Burmese version has been called “a showcase” for the country’s former military government, which signed the deal with China without public consultation. Therefore, those who disliked the junta, an overwhelming majority in the country, came out against the dam. And to make matters worse for Myitsone’s Beijing backers, the project became a symbol of Chinese exploitation of Burma, which the junta renamed Myanmar. It does not help that, in a power-starved nation, 90% of the dam’s electricity will be exported to southern China. The Burmese have condemned Myitsone for other reasons as well. The dam is located in Kachin State, a minority area, and the Kachins have been uniformly against it, not just the tens of thousands who have been or will be forced to move to avoid the waters. The dam will flood historical and cultural sites, including what is considered to be the birthplace of the country.

The area that will be lost has been called one of the world’s “top biodiversity hotspots and a global conservation priority.” Downstream rice farmers expect that Myitsone will rob the river of crucial sediments. The dam is about 60 miles from a major fault line, and if it failed, it would flood Myitkyina, the largest city in Kachin State. Says Ah Nan of Burma Rivers Network, an environmental…

DAVID GOODTREE, CO–CHAIR AND FOUNDER, SYMPOSIUM ON WATER INNOVATION

I have studied China most of my life, been to China. China is a very wealthy country. It has wrapped its arms around capitalism and loves it. It is still a dictatorship, a brutal country. It constantly violates human rights and has no concern for the environment. It possesses one half of the U.S. external debt, is spending money all over the world (investments, we should call them), is building its military at an unbelievable rate, and is buying gold by the boatload. Given all that, and its 1.3 going on 1.4 billion people, the Communist Party is still very strong, and I think that in my lifetime I will not see that change. What do we do, what do the United States and its allies do, to at least curtail the activities of China on a wide variety of bases, Mr. Chang?

Ms. MOYNIHAN. Well, of course, the hydro dams do produce reservoirs and energy, and in Chinese-occupied Tibet most of that is going to industrial development. And there is one issue I wanted to mention: China is also rapidly building mines at the sources of many of the rivers, so they are creating long-term pollution that will flow downstream to the other riparian nations. And that could be a whole other hearing.

Mr. ROHRABACHER. But that is very relevant, extremely relevant, in the discussion of water in terms of countries that are permitting that type of pollution, which then again eliminates that water as a source for their neighbors, and thank you for bringing that up. I think it is important.
