Extreme Events. CEC 2011. Variable distributed generation from solar and wind increases the chance of large blackouts

Morgan, M., et al. (Pacific Northwest National Laboratory, University of Wisconsin-Madison, Electric Power Research Institute, BACV Solutions, Southern Company, CIEE, University of Alaska – Fairbanks, and KEMA). 2011. Extreme Events. California Energy Commission. Publication number: CEC-500-2013-031.


Figure 18: Blackout frequency (and size; Figure 19, not shown, is similar) increases greatly with highly variable distributed generation and decreases with reliable distributed generation

Summary

This study showed that in some cases, increasing the proportion of variable distributed generation could actually increase the long-term frequency of the largest blackouts. If the decentralized generation is highly variable, as is the case with wind and solar power, the operation of the grid can be severely degraded. This may increase both the probability of large blackouts and the overall frequency of failures.

One potentially problematic scenario is that as the early penetration of distributed generation comes on line, it will actually make the system more reliable and robust since it will effectively be adding to the capacity margin. However, as new distributed generation is added, the system could become much less reliable as the demand grows, the fraction of distributed generation grows, and the capacity margin falls back to historical, mandated levels.

Possible trigger events that can lead to a blackout include short circuits due to lightning, tree contacts, or animals; severe weather; earthquakes; operational or planning errors; equipment failure; or vandalism.

The worst case occurs when highly centralized high-variability generation, such as large wind farms, is added without the necessary increase in generation margins.

Large blackouts pose a substantial risk that must be mitigated to maintain the high overall reliability of an electric power grid. As the control of the power grid becomes far more complex with the increasing penetration of new generation sources such as wind and solar power and new electric loads such as electric cars, maintaining high reliability of the electric grid becomes even more critical.

Generator capacity margin or generation variability leveling mechanisms are critical to reducing the degradation that can be caused by the increased penetration of sustainable distributed generation.

The backbone of electric power supply is the high-voltage transmission grid. The grid serving California is part of the larger Western Interconnection, administered by the Western Electricity Coordinating Council (WECC), which extends from the Mexican border well into Canada and from the Pacific coast to the Rocky Mountains.

The western power grid is an impressively large and complex structure. The full WECC interconnection system comprises 37 balancing authorities (BAs), 14,324 high- and medium-voltage transmission lines, 6,533 transformers, 16,157 buses (8,230 are load buses), and 3,307 generating units. The grid has 62 major transmission paths between different areas.

While the extent of this grid provides it with certain reliability benefits, it also adds vulnerabilities because it provides multiple paths for any local disturbance to propagate. This is the problem of cascading failure: a series of failures occurs, each weakening the system further and making subsequent failures more likely.

System cascading failures may occur due to the loss of several important elements, such as multiple generating units within a power plant, parallel transmission lines or transformers, and common right-of-way circuit outages. The failure of these elements may propagate widely through the interconnected power network and result in a local or wide-area blackout. Failures of this kind that cause severe consequences are the initiating events of a cascading failure.

The electrical transmission system of California, like all interconnected transmission systems, is vulnerable to extreme events in which complicated chains of exceptional events cascade to cause a widespread blackout across the state and beyond.

A reliable transmission grid is essential for enabling transition to renewable energy sources and electric cars, especially as the grid itself evolves toward a “smart” infrastructure.

The high voltage transmission grid for California is part of the larger western power grid, a complicated and intricately coordinated structure with hundreds of thousands of components that support the electrical supply and hence the way of life for California citizens, business, and government.

Although the transmission grid is normally very reliable, extreme events in which disturbances cascade across the grid and cause large blackouts do occasionally occur and result in direct costs to society amounting to billions of dollars.

There is an evident need to expand the list of initiating events to reflect the complexities of modern power systems as well as new factors such as the increasing penetration of variable renewable generation resources, demand-side load management, virtual and actual consolidation of balancing authorities, new performance standards, and other factors.

 

Excerpts from the 85-page report:

These large blackouts always have a substantial impact on citizens, business and government. Although these are rare events, they pose a substantial risk. Much is known about avoiding the first few failures near the beginning of a cascade event series, but there are no established methods for directly analyzing the risks of the subsequent long chains of events. The project objective is to find ways to assess, manage, and reduce the risk of extreme blackout events. Since this is a difficult and complex problem, multiple approaches are pursued, including examining historical blackout data, making detailed models of the grid, processing simulated data from advanced simulations, and developing and testing new ideas and methods. The methods include finding critical elements and system vulnerabilities, modeling and simulation, quantifying cascade propagation, and applying statistical analyses in complex systems. The project team combines leading experts from industry, a national laboratory, and universities.

Although such extreme events are infrequent, statistics show that they will occur. The electric power industry has always worked hard to avoid blackouts, and there are many practical methods to maintain reliability. However, the cascading-failure problem is so complex that there are no established methods that directly analyze the risk of the large blackouts. The overall project objective is to assess the risk of extreme-blackout events and find ways to manage and reduce this risk. Managing the risk of extreme events such as this is particularly important as society moves toward environmental sustainability.

 

Although extreme events only occur occasionally, the NERC data show a substantial risk of extreme events in the WECC region.

From the area of operations, the researchers found that the average fractional load (the load divided by the limit) of the transmission lines is a good proxy for the risk of large failures. If this average is kept below about 50%, the probability of large failures appears to decrease. This in turn has major implications for the ratepayer: operating at less than 50% of line capacity would improve reliability for users but would probably require investment in both transmission capacity and demand-side control.
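
As a rough illustration of this metric, the sketch below computes the average fractional loading for a handful of lines and flags when it rises above the roughly 50% level the study associates with increased risk of large failures. The line names, flows, and limits are hypothetical examples, not data from the report.

```python
# Minimal sketch: average fractional line loading as a coarse risk indicator.
# The line names, flows (MW), and limits (MW) below are hypothetical examples.

lines = {
    "line_A": {"flow_mw": 620.0, "limit_mw": 1000.0},
    "line_B": {"flow_mw": 310.0, "limit_mw": 900.0},
    "line_C": {"flow_mw": 480.0, "limit_mw": 600.0},
}

def average_fractional_loading(lines):
    """Mean of (flow / limit) across all lines."""
    fractions = [v["flow_mw"] / v["limit_mw"] for v in lines.values()]
    return sum(fractions) / len(fractions)

avg = average_fractional_loading(lines)
print(f"average fractional loading: {avg:.2f}")
if avg > 0.5:  # ~50% is the level the study associates with rising risk of large failures
    print("above the ~50% level associated with increased risk of large failures")
```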

 

Researchers found that decentralized generation can greatly improve the reliability of the power transmission grid. However, if the decentralized generation is highly variable, as is the case with wind and solar power, the operation of the grid can be severely degraded. This may increase the probability of large blackouts and the frequency of failures. The project results suggest that one of the critical factors is the generation margin. If high-variability non-centralized generation is brought on-line as an increase in the generation capacity margin, it is likely to improve network robustness; however, if over time that margin declines again (as demand increases) to the standard value, the grid could undergo a distinct decline in reliability. This suggests a need for care in planning and regulation as this decentralization increases. The worst case occurs when highly centralized high-variability generation, such as large wind farms, is added without the necessary increase in generation margins. Increased use of decentralized generation has numerous effects on the ratepayer, ranging from decreased electricity costs and increased reliability, if implemented carefully, to decreased reliability and an accompanying increase in costs, if not.

CHAPTER 1: Introduction

On August 10, 1996, a blackout started in the northwestern United States and cascaded to disconnect power to about 7,500,000 customers across the West Coast, including millions of customers in both northern and southern California. Power remained out for as much as 9 hours, snarling traffic, shutting down airports, and leaving millions in triple-digit heat. An initially small power-system disturbance, a sagging power line, cascaded into a complicated chain of subsequent failures leading to a widespread blackout. Although such extreme events are infrequent, historical statistics show they will occur. The resulting direct cost is estimated to be in the billions of dollars, not including indirect costs resulting from social and economic disruptions and the propagation of failures into other infrastructures such as transportation, water supply, natural gas, and communications.

 

5.2 Line-Trip Data

The transmission line outage data set consists of 8,864 automatic line outages recorded by a WECC utility over a period of ten years. This is an example of the standard utility data reported to NERC for the Transmission Availability Data System (TADS). The data for each transmission line outage include the trip time. More than 96% of the outages are of lines rated 115 kV or above. Processing identified 5,227 cascading sequences in the data. Some of these cascades are long sequences of events, but most are short.
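
The excerpt does not say how the 5,227 cascading sequences were identified, but a common approach is to group successive line trips by the time gap between them. The sketch below is a minimal illustration of that idea, assuming a one-hour gap threshold and hypothetical timestamps; it is not the report's actual processing method.

```python
# Sketch: group time-ordered line-trip records into cascade sequences by gap time.
# The one-hour gap threshold and the timestamps are assumptions for illustration;
# the report's actual grouping rules are not given in this excerpt.
from datetime import datetime, timedelta

trip_times = [                      # hypothetical automatic line-trip timestamps
    datetime(2010, 6, 1, 12, 0),
    datetime(2010, 6, 1, 12, 20),
    datetime(2010, 6, 1, 15, 5),
    datetime(2010, 6, 2, 9, 30),
]
GAP = timedelta(hours=1)            # trips closer together than this join the same cascade

cascades, current = [], [trip_times[0]]
for prev, t in zip(trip_times, trip_times[1:]):
    if t - prev <= GAP:
        current.append(t)
    else:
        cascades.append(current)
        current = [t]
cascades.append(current)

print(f"{len(cascades)} cascades with lengths {[len(c) for c in cascades]}")
```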

CHAPTER 4: Extreme Event Risk. Anatomy of Cascading Failure

Cascading failure can be defined as a sequence of dependent events that successively weakens the power system. The events are often some individual power system component being outaged or damaged or mis-operating, but can also include a device functioning as designed but nevertheless contributing to the cascade, or actions by operators, software, or automatic controls. As shown in Figure 6, cascading failure starts with a trigger event and proceeds with further events. All the events interact with the system state as the cascade proceeds. The occurrence of each event depends on the system state, and the system state is affected by every event that has already occurred, and thus the system state changes throughout the cascade. The progressive weakening of the system as the cascade propagates is characteristic of cascading failure.

Possible trigger events include short circuits due to lightning, tree contacts, or animals; severe weather; earthquakes; operational or planning errors; equipment failure; or vandalism. The system state includes factors such as component loadings, which components are in service, generation margin, hidden failures, situational awareness, and weather.

The triggers and the subsequent propagation of events have different mechanisms, so that different approaches are needed to mitigate the triggers or mitigate the propagation. Moreover, the triggers and the propagation have different effects on the risks of small, medium, and large blackouts, so that managing these risks may require different combinations of mitigations for triggers and/or propagation. Limiting the triggers and initiating events reduces the frequency of all blackouts, but can in some cases actually increase the occurrence of the largest blackouts, whereas limiting the propagation tends to reduce the larger blackouts, but may have no effect on the frequency of the smaller events.

The notions of causes (and blame) often can become murky in complicated cascades. For example, it is possible that automatic or manual control decisions that are advantageous in many standard system operational states and are overall beneficial may occasionally be deleterious.

2. Probabilistic Approach to Simulation of Rare Events

Cascading failure in power systems is inherently probabilistic. There are significant uncertainties in the initial state of the power system, in the triggering events, and in the way that the cascading events propagate or stop. The initial state of the power transmission system is always varying and includes factors such as patterns of generation and loading, equipment in service, weather, and situational awareness. Examples of trigger events are lightning, earthquakes, shorts involving trees and animals, equipment failure, and operational errors. The progress of cascading events depends on exact conditions and thresholds, can be very complicated, and can involve combinations drawn from dozens of intricate mechanisms, some of which involve unusual or rare interactions, that span a full range of physical and operational factors. It is appropriate to understand all these uncertainties probabilistically. Large blackouts are particular samples from an astronomically large set of possible but unusual combinations of failures.

From a modeling perspective, the underlying probabilistic view is driven by several factors. It is impossible to enumerate all the possible large blackouts because of the combinatorial explosion of possibilities. While some selected mechanisms of cascading failure can be usefully approximated in a simulation, it is well beyond the current state of the art to represent all (or even only the physics-based) mechanisms in great detail in one simulation. The full range of power system phenomena involved in cascading failure occurs on diverse time scales, and obtaining the full data (such as fast dynamical data) is difficult for the large-network cases needed to study large cascading blackouts. Most important, such a simulation, even if otherwise feasible, would be too slow.

 

In WECC, one could consider small blackouts to be less than 100 MW load shed, medium blackouts to be between 100 MW and 1000 MW load shed, and large blackouts to be more than 1000 MW load shed. The historical data implies that large blackouts are rarer than medium blackouts, but that the large blackouts are more risky than the medium blackouts because their cost is so much higher.

Based on these cost assumptions, a rough calculation of large and medium blackout risk can be made. The NERC WECC blackouts are divided into small (<100 MW), medium (100–1000 MW), and large (>1000 MW) blackouts. The largest recorded blackout is 30,390 MW. Small blackouts are not systematically covered by the reported data and are put aside. According to the data, the large blackouts have about 1/3 the probability of the medium blackouts. The average large blackout is roughly 8 times the size of the average medium blackout, so its cost is roughly 20 times larger. Since risk is probability times cost, the risk of an average large blackout is roughly 7 times the risk of an average medium blackout.
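
The stated ratios can be checked with a few lines of arithmetic; the sketch below simply multiplies the probability ratio by the cost ratio quoted in the text.

```python
# Worked version of the blackout-risk comparison in the text.
# Inputs are the ratios quoted in the report excerpt, not independent data.

prob_ratio = 1.0 / 3.0   # large blackouts are about 1/3 as probable as medium blackouts
cost_ratio = 20.0        # an average large blackout costs roughly 20x an average medium one
                         # (the ~8x size ratio maps to ~20x cost in the report's cost model)

risk_ratio = prob_ratio * cost_ratio   # risk = probability x cost
print(f"large-blackout risk is roughly {risk_ratio:.0f}x the medium-blackout risk")
# prints roughly 7x, matching the figure quoted in the text
```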

CHAPTER 5: Results, Analysis, and Application to California and the Western Interconnection

1.1 Selection of Initiating Events

Power system cascading failures may occur due to the loss of several important elements, such as multiple generating units within a power plant, parallel transmission lines or transformers, and common right-of-way circuit outages. The failure of these elements may propagate widely through the interconnected power network and result in a local or wide-area blackout. Failures of this kind that cause severe consequences are the initiating events of a cascading failure. Some of the selected initiating events are in NERC Category D. Such events are not routinely analyzed by system planners and operators due to their complexity.

The selection of initiating events is a critical step in accurately simulating and analyzing large-scale cascading failures. Successful identification of initiating events can help effectively identify the most severe disturbances and help system planners propose preemptive system reinforcements that will improve both the security and the reliability of the system. Analyzing too few initiating events may not be sufficient to reveal critical system problems. At the other extreme, scanning all combinations of initiating events in a bulk power system is computationally impossible. As an example, the Western Interconnection contains approximately 20,000 transmission lines. Screening all combinations of N-2 contingencies requires approximately 199,990,000 simulation runs, which is beyond the capability of available simulation tools; for example, if the time per run were 90 seconds, the total run time would be about 570 years.

Currently, only 5-50 contingencies are selected annually to perform extreme event analysis to comply with NERC requirements in the WECC system. The selection of these contingencies is based on the experience of power grid operators and planners, that is, knowing the critical elements in their systems. This limited set of events is included in the list created in this study. In this study, eight categories of initiating events were collected for the entire WECC system from multiple sources such as historical disturbance information, known vulnerable system elements, engineering judgment, transmission sensitivity analysis methods, and others. A large list with more than 35,000 initiating events was created for the full WECC model. The different types of initiating events are summarized below.

1.1.3 Substation Outage. This type of initiating event considers the complete loss of a substation (bus) in the WECC model. It is used to simulate extreme events that result in a complete outage of all elements within a substation. 8,000 initiating events in this category were generated, considering all substations with voltage levels higher than 115 kV.
1.1.4 The Loss of Two Transmission Lines Based on Contingency Sensitivity Study.
1.1.5 Parallel Circuits Transmission Line Outage. Many of the higher-kV lines are made of two or more circuits on a common tower to increase their transmission capacity. However, during catastrophic events such as thunderstorms, lightning strikes, or tornadoes, all the circuits of a multi-circuit transmission line can be out of service, leading to a huge loss of power-transfer capacity. This contingency list considers all the transmission lines that have two or more parallel circuits originating and ending on the same buses. 996 initiating events in this category were collected.
1.1.6 Common Right of Way and Line Crossings Outage. This outage list contains common corridors or common right-of-way (ROW) lines. Common ROW is defined by WECC as “Contiguous ROW or two parallel ROWs with structure centerline separation less than the longest span of the two transmission circuits at the point of separation or 500 feet, whichever is greatest, between the two circuits.” Considering these events is very important since the right-of-way lines generally fall within similar geographical areas and any natural calamity can easily cause the outage of these transmission lines.
1.1.7 Flow Gates between Balancing Authorities. The flow gates between various balancing authorities represent important transmission-path gateways transporting large amounts of power. Loss of a flow gate can cause major problems for a balancing authority, especially if the BA is normally a power importer without sufficient local generation to meet demand. 54 initiating events in this category were collected.
1.1.8 Major Transmission Interfaces in the WECC System. This event considers outages of major transmission interfaces or paths between different major load and/or generation areas as identified in the WECC power-flow base planning case. These interfaces are the backbone of the WECC power grid, and the loss of any of these paths can have a large impact. 62 initiating events in this category were collected.
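
As a quick check on the combinatorial claim in section 1.1 above: with roughly 20,000 lines, the number of distinct N-2 pairs is C(20,000, 2) = 199,990,000, and at an assumed 90 seconds per run the serial run time comes to several hundred years. The sketch below redoes that arithmetic.

```python
# Check the N-2 contingency count and run-time estimate quoted in the text.
from math import comb

n_lines = 20_000                 # approximate number of WECC transmission lines
seconds_per_run = 90             # assumed simulation time per contingency

n_pairs = comb(n_lines, 2)       # distinct N-2 line pairs
total_years = n_pairs * seconds_per_run / (365.25 * 24 * 3600)

print(f"N-2 combinations: {n_pairs:,}")              # 199,990,000
print(f"serial run time:  {total_years:.0f} years")  # about 570 years
```
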
3.1 Critical Events Corridors Analysis

Although no two blackouts follow the same sequence of events, similar partial sequences of cascading outages may exist in a particular power system. Partial patterns in which transmission lines, generators or buses are forced out in a certain order can repeatedly appear across a variety of initiating events and system conditions. These patterns can result from multiple different initiating events, and therefore are seen as parts of different cascading processes. Figure 9 illustrates the hypothesis of these “critical event corridors.” Critical-corridor identification can be used to recommend transmission-system enhancements, protection-system modification, and remedial actions to help eliminate these most frequently observed, and therefore most probable, critical sequences that lead to severe consequences.

Selection of optimal locations for high penetration of renewables to minimize effects on system reliability; if the location choice is not under the control of the BA, the results can point out potential extreme events due to the concentration of renewable resources in a few locations.

4.2 Finding Line Clusters That Are Critical During Propagation

Finding the triggers for a large blackout is only the first step. Most large blackouts have two distinct parts: the trigger/initiating event followed by the cascading failure. The cascade can be made up of as few as one subsequent stage or as many as dozens or even hundreds of stages. The cascading part of the extreme event is critically dependent on the “state” of the system: how heavily the lines are loaded, how much generation margin exists, and where the generation exists relative to the load. However, during large cascading events there are some lines whose probability of overloading is higher than others. Statistical studies of blackouts using the OPA code allow the identification of such lines or groups of lines for a given network model, thereby providing a technique for identifying at-risk (or critical) clusters. These lines play a critical role in the propagation of large events because they are likely to fail during the propagation of the cascade, making it more likely that the cascade will propagate further and turn into an extreme event. Therefore, it is clearly very important to identify them.

4.3 System State Parameters That Correlate With Large Blackouts

In a complex system, extreme events may be triggered by a random event. However, the much-higher-than-Gaussian probability of extreme events (the heavy tail) is a consequence of the correlations induced by operating near the operational limits of the system and has little to do with the triggering events. The result is that the extreme-event distribution is independent of the triggering events. Therefore, trying to control the triggering events does not lead to a change in the power-tail distribution. A careful reduction of triggering events may reduce the frequency of blackouts but will not change the functional form of the size distribution. The process of trying to plan for and mitigate the triggering events can in fact lead to a false sense of security, since one might think one is having an effect on risk by doing so when in reality the unexpected triggers that will certainly occur will lead to the same distribution of blackout sizes.

In these complex systems, an initiating event cannot be identified by just the random trigger event, but by the combination of the triggering event and the state of the system. This “state of the system” can be characterized by different measurements of the parameters of the system. In the case of power systems, for example, the system state includes the distribution and amounts of loads and power flows in the network. A simulation model like OPA is continually changing the network loading and power flows. This, importantly, gives a large sample of initiating events. The statistics of the results reflect many combinations of initial events and system states. It is also important to distinguish between blackout initiating events and general cascade initiating events. In power systems, a cascade, in particular a very short cascade, does not always lead to a blackout. Therefore, those two sets of initiating events are different. Within the OPA simulations, a blackout is defined as any event in which the fraction of load shed is greater than 0.00001. However, for comparison with the reported data, a load-shed fraction greater than 0.002 is used, which is consistent with the NERC reporting requirements from emergency operations planning standard EOP-004-1.

In calculating the probability of a blackout occurring, good measures include the number of lines overloaded in the first iteration, the daily average fractional line loading, the daily variance of the fractional line loading, and the number of lines with a fractional line loading greater than 0.9. They all show strong positive correlation with the probability of a blackout. When a blackout occurs, the size of the blackout correlates strongly with the number of lines overloaded in the initiating state; this is a very clear correlation. The size also has a positive correlation with the daily average fractional line loading, the daily variance of the fractional line loading, and the number of lines with a fractional line loading greater than 0.9 (Figure 16).
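
Because these state parameters are all simple statistics of the fractional line loadings, they are easy to compute from a snapshot of flows and limits. The sketch below shows one way to do so, using hypothetical flows and limits; the 0.9 and 1.0 thresholds follow the text.

```python
# Sketch: compute the loading statistics the study reports as blackout precursors.
# The flow/limit pairs are hypothetical; the 0.9 and 1.0 thresholds follow the text.
import statistics

flows  = [620.0, 310.0, 480.0, 1050.0, 720.0]    # MW, hypothetical snapshot
limits = [1000.0, 900.0, 600.0, 1000.0, 750.0]   # MW, hypothetical line ratings

fractional = [f / l for f, l in zip(flows, limits)]

metrics = {
    "lines_overloaded (>1.0)": sum(1 for x in fractional if x > 1.0),
    "mean_fractional_loading": statistics.mean(fractional),
    "variance_of_loading":     statistics.pvariance(fractional),
    "lines_above_0.9":         sum(1 for x in fractional if x > 0.9),
}

for name, value in metrics.items():
    print(f"{name}: {value:.3f}")
```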

 

Having found a number of system parameters that strongly correlate with blackout probability, and even more importantly with extreme event size, it is possible to consider monitoring these quantities in the real system. The goal would be to see (1) whether they show meaningful variations and whether the same correlations exist, and (2) if so, whether the noise level is low enough to make any of them useful as a precursor measure, the ultimate objective of the work in this section.

4.4 Impact of Distributed Generation

With the increased utilization of local, often renewable, power sources, coupled with a drive for decentralization, the fraction of electric power generation that is “distributed” is growing and is set to grow even faster. It is often held that moving toward more distributed generation would have a generally positive impact on the robustness of the transmission grid. This intuited improvement comes simply from the realization that less power would need to be moved long distances, and the local mismatch between power supply and demand would be reduced. The project approached the issues of system dynamics and robustness with this intuitive understanding in mind and with an underlying question to be answered: “Is there an optimal balance of distributed versus central generation for network robustness?” In the interest of understanding the effects of different factors, the investigation was initiated by intentionally ignoring the differences in the economics of centralized versus distributed generation and approaching the question in a hierarchical manner, starting from the simplest model of distributed generation and then adding more complexity.

Using OPA to investigate the effects of increased distributed generation on the system, it was found that:

  1. Increased distributed generation can greatly improve the overall “reliability and robustness” of the system.
  2. Increased distributed generation with high variability (such as wind and solar) can greatly reduce the overall “reliability and robustness” of the system, causing increased frequency and size of blackouts.
  3. Generator capacity margin or generation variability leveling mechanisms are critical to reducing the degradation that can be caused by the increased penetration of sustainable distributed generation.

Figure 18 shows the blackout frequency as the degree of distribution (a surrogate for the amount of distributed generation) is increased. It can be clearly seen that with reliable distributed generation (the same variability as with central generation) the overall blackout frequency decreases, while Figure 19 shows a concomitant decrease in the load-shed sizes as the degree of distribution increases. However, Figures 18 and 19 show a large increase in both the frequency and size of the blackouts when using distributed generation with realistic variability. In some cases, the distributed generation can make the system less robust, with the risk of large blackouts becoming larger. It is clear that distributed generation can have a range of effects on the system robustness and reliability, arising from the reliability of the generation (wind, solar, and so forth), the fraction that is distributed, and the generation capacity margin. Many more aspects of distributed generation, such as local storage, demand-side control, and so forth, remain to be investigated.

Figure 18: Blackout Frequency Decreases with Increased Reliable Distributed Generation but Increases Greatly With Increased Highly Variable Distributed Generation

One potentially problematic scenario is that as the early penetration of distributed generation comes on line, it will actually make the system more reliable and robust since it will effectively be adding to the capacity margin. However, as new distributed generation is added, the system could become much less reliable as the demand grows, the fraction of distributed generation grows, and the capacity margin falls back to historical, mandated levels.

5.3 Predicting Extent of Blackout Triggered by an Earthquake

This section summarizes the project results about the size of blackouts triggered by earthquakes. Chapter 6.5.5 of the Phase 1 report gives details. If there is a large initial shock to the power system, such as from an earthquake, what is the risk of the failure cascading to other regions of the WECC? This is an important question because the time required to restore electric power and other infrastructure in the region that experienced damaging ground motion depends on how far the blackout extends. Long restoration times would multiply the consequences of the direct devastation, affecting not only conventional measures such as load loss but also the restoration of lifeline services. Since earthquakes can produce orders of magnitude more costly damage than a blackout, any prolongation of earthquake restoration due to the blackout cascading beyond the shaken region has a significant effect. The project made an illustrative calculation of the blackout extent, as measured by the number of lines tripped, resulting from a large shock to the system in which 26 lines were initially outaged, based on a real earthquake scenario. The calculation applied the branching-process model together with observed propagation. Figure 22 shows an initial estimate of the distribution of the total number of lines tripped due to the combined effect of the earthquake and subsequent cascading. The most likely extent is about 90 lines tripped, but there is a one-in-ten chance that more than 150 lines would trip. (The chance of more than 150 lines tripped is the sum of the chances of 151, 152, 153, … lines out.) This initial estimate is illustrative of probable outage scenarios. A detailed examination of actual earthquake initiating failures and line-trip propagation data would be required to improve it. Similar calculations would be feasible for other large disturbances such as extreme weather events, wildfires, or floods.
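
A branching-process estimate of this kind can be sketched in a few lines. The example below starts from the 26 initially tripped lines quoted for the earthquake scenario but uses an assumed propagation rate of 0.6 subsequent trips per tripped line, so the resulting distribution is purely illustrative and will not reproduce the report's figures (a mode near 90 lines and a one-in-ten chance of more than 150), which depend on the report's fitted parameters.

```python
# Illustrative branching-process (Galton-Watson) estimate of the total number of
# lines tripped after a large initial shock. The propagation rate of 0.6 new trips
# per tripped line is an assumed value; the report fits its own parameter to WECC
# data, so this sketch will not reproduce the report's figures exactly.
import numpy as np

rng = np.random.default_rng(0)
INITIAL_TRIPS = 26     # initial line outages in the earthquake scenario from the text
PROPAGATION = 0.6      # assumed mean number of further trips caused by each trip (< 1)
N_SAMPLES = 50_000

def total_trips():
    total = generation = INITIAL_TRIPS
    while generation > 0:
        # each line in the current generation causes a Poisson number of further trips
        generation = int(rng.poisson(PROPAGATION, size=generation).sum())
        total += generation
    return total

samples = np.array([total_trips() for _ in range(N_SAMPLES)])
print("most likely total lines tripped:", np.bincount(samples).argmax())
print("P(total > 150 lines):", (samples > 150).mean())
```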

2.3.2 Additional Types of Initiating Events

There is an evident need to expand the list of initiating events to reflect the complexities of modern power systems as well as new factors such as the increasing penetration of variable renewable generation resources, demand-side load management, virtual and actual consolidation of balancing authorities, new performance standards, and other factors.

3.1.8 Impact of Distributed Generation

The project studied the impact of increased distributed generation on cascading failure risk with the OPA simulation. The results of this work suggest that a higher fraction of distributed generation with no generation variability improves the system characteristics. However, if the distributed generation has variability in the power produced (and this is typical of distributed generation sources such as wind or solar), the system can become significantly less robust, with the risk of large blackouts becoming much larger. It is possible to find an optimal value of the fraction of distributed generation that maximizes the system robustness. Further investigations with different models of the reduced reliability of the distributed generation power and different distributions of the distributed generation would be worthwhile, as would the extension of this work to the larger WECC models.

Historical Data

The North American Electric Reliability Corporation (NERC) has made public data for reportable blackouts in North America. Blackouts in the WECC for the 23 years from 1984 to 2006 have been analyzed. The 298 blackouts in the WECC data occur at an average frequency of 13 per year. The main measures of blackout size in the NERC data used in the project are load shed (MW) and the number of customers affected. Blackout duration information is also available, but the data quality is less certain.

The NERC data follows from government reporting requirements. The thresholds for the report of an incident include uncontrolled loss of 300 MW or more of firm system load for more than 15 minutes from a single incident, load shedding of 100 MW or more implemented under emergency operational policy, loss of electric service to more than 50,000 customers for 1 hour or more, and other criteria detailed in the U.S. Department of Energy forms EIA-417 and OE-417.
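
One way to read these thresholds is as a set of OR-ed criteria. The sketch below encodes the three criteria quoted above; it omits the additional criteria detailed in forms EIA-417 and OE-417, and the parameter names are hypothetical.

```python
# Sketch of the reporting thresholds quoted above, treated as OR-ed criteria.
# Partial list only; DOE forms EIA-417 and OE-417 contain additional criteria,
# and the parameter names here are hypothetical.

def is_reportable(firm_load_lost_mw: float, loss_duration_min: float,
                  emergency_load_shed_mw: float, customers_out: int,
                  outage_duration_hr: float) -> bool:
    # uncontrolled loss of 300 MW or more of firm system load for more than 15 minutes
    if firm_load_lost_mw >= 300 and loss_duration_min > 15:
        return True
    # load shedding of 100 MW or more implemented under emergency operational policy
    if emergency_load_shed_mw >= 100:
        return True
    # loss of electric service to more than 50,000 customers for 1 hour or more
    if customers_out > 50_000 and outage_duration_hr >= 1:
        return True
    return False

print(is_reportable(350, 20, 0, 0, 0))       # True: firm-load criterion
print(is_reportable(0, 0, 0, 60_000, 2.0))   # True: customer-count criterion
```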

 


Electricity & Diesel / Gasoline interdependency

Freight trucks, trains, ships, and airplanes all stop when the electricity is out because the fuel pumps depend on it. Related: Why you should love trucks and When Trucks Stop

Lively, M. February 14, 2014. Pricing Gasoline When the Pumps Are Running on Backup Electricity Supply

At the February 11, 2014 MIT Club of Washington Seminar Series dinner on the topic of “Modernizing the U.S. Electric Grid,” Michael Chertoff gave a talk on “The Vulnerability of the U.S. Grid.”

He said that after a hurricane hit Miami in about 2005, electrical workers couldn’t get to work because they had no gasoline for their cars.  The gas stations had gasoline but no electricity to pump the gasoline. 

Back-up electricity generators would have required an investment of $50,000, which was not justified given the razor-thin margins on which most gas stations operate.

The gas station owners’ thought process was that the sales lost during the blackout would just be gasoline that would be sold after the power came back on. Investment in a back-up generator would not change the station’s revenue and would just hurt its profitability.


Utility Scale Energy Storage Batteries limited by both materials and energy

Stanford study quantifies energetic costs of grid-scale energy storage over time; current batteries the worst performers; the need to improve cycle life by 3-10x

10 March 2013. GreenCarCongress.com

A plot of ESOI for 7 potential grid-scale energy storage technologies. Credit: Barnhart and Benson, 2013.

A new study by Charles J. Barnhart and Sally M. Benson from Stanford University and Stanford’s Global Climate and Energy Project (GCEP) has quantified the energetic costs of 7 different grid-scale energy storage technologies over time. Using a new metric—“Energy Stored on Invested, ESOI”—they concluded that batteries were the worst performers, while compressed air energy storage (CAES) performed the best, followed by pumped hydro storage (PHS). Their results are published in the RSC journal Energy & Environmental Science.

As the percentage of electricity supply from wind and solar increases, grid operators will need to employ strategies and technologies, including energy storage, to balance supply with demand given the intermittency of the renewable supply. The Stanford study considered a future US grid where up to 80% of the electricity comes from renewables.

Only about 3% currently is generated from wind, solar, hydroelectric, and other renewable sources, with most of the electricity produced in the United States coming from coal- and natural gas-fired power plants, followed by nuclear, according to data from the US Energy Information Administration (EIA).

They quantified energy and material resource requirements for currently available energy storage technologies: lithium ion (Li-ion), sodium sulfur (NaS) and lead-acid (PbA) batteries; vanadium redox (VRB) and zinc-bromine (ZnBr) flow batteries; and geologic pumped hydroelectric storage (PHS) and compressed air energy storage (CAES).

The current total energy storage capacity of the US grid is less than 1%, according to Barnhart. What little capacity there is comes from pumped hydroelectric storage, which works by pumping water to a reservoir behind a dam when electricity demand is low. When demand is high, the water is released through turbines that generate electricity.

By introducing new concepts, including energy stored on invested (ESOI), we map research avenues that could expedite the development and deployment of grid-scale energy storage. ESOI incorporates several storage attributes instead of isolated properties, like efficiency or energy density.

Calculations indicate that electrochemical storage technologies will impinge on global energy supplies for scale up—PHS and CAES are less energy intensive by 100 fold. Using ESOI we show that an increase in electrochemical storage cycle life by tenfold would greatly relax energetic constraints for grid-storage and improve cost competitiveness. We find that annual material resource production places tight limits on Li-ion, VRB and PHS development and loose limits on NaS and CAES.

This analysis indicates that energy storage could provide some grid flexibility but its build up will require decades. Reducing financial cost is not sufficient for creating a scalable energy storage infrastructure. Most importantly, for grid integrated storage, cycle life must be improved to improve the scalability of battery technologies. As a result of the constraints on energy storage described here, increasing grid flexibility as the penetration of renewable power generation increases will require employing several additional techniques including demand-side management, flexible generation from base-load facilities and natural gas firming.

—Barnhart and Benson

The first step in the study was to calculate the cradle-to-gate embodied energy—the total amount of energy required to build and deliver the technology—from the extraction of raw materials, such as lithium and lead, to the manufacture and installation of the finished device.

To determine the amount of energy required to build each of the five battery technologies, the authors used data collected by Argonne National Laboratory and other sources. The data revealed that all five battery technologies have high embodied-energy costs compared with pumped hydroelectric storage.

After determining the embodied energy required to build each storage technology, the next step was to calculate the energetic cost of maintaining the technology over a 30-year timescale. To quantify the long-term energetic costs, Barnhart and Benson came up with a new mathematical formula they dubbed ESOI, or energy stored on invested.

ESOI is the amount of energy that can be stored by a technology, divided by the amount of energy required to build that technology. The higher the ESOI value, the better the storage technology is energetically.

—Charles Barnhart

The results showed that CAES had the highest value: 240. In other words, CAES can store 240 times more energy over its lifetime than the amount of energy that was required to build it. (CAES works by pumping air at very high pressure into a massive cavern or aquifer, then releasing the compressed air through a turbine to generate electricity on demand.) PHS followed at 210.

The five battery technologies fared much worse. Lithium-ion batteries were the best performers, with an ESOI value of 10. Lead-acid batteries had an ESOI value of 2, the lowest in the study.

The best way to reduce a battery’s long-term energetic costs would be to improve its cycle life, Barnhart said. Pumped hydro storage can achieve more than 25,000 cycles; none of the conventional battery technologies featured in the study has reached that level. Lithium-ion is the best at 6,000 cycles, while lead-acid technology is at the bottom, achieving a mere 700 cycles.

The most effective way a storage technology can become less energy-intensive over time is to increase its cycle life. Most battery research today focuses on improving the storage or power capacity. These qualities are very important for electric vehicles and portable electronics, but not for storing energy on the grid. Based on our ESOI calculations, grid-scale battery research should focus on extending cycle life by a factor of 3 to 10.

—Sally Benson
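
The link between cycle life and ESOI can be made concrete with a rough calculation. The sketch below treats ESOI as lifetime energy delivered (cycles × depth of discharge × round-trip efficiency) divided by embodied energy per unit of storage capacity; that simplified scaling and every number used are illustrative assumptions, not values from the paper.

```python
# Rough ESOI sketch: lifetime electrical energy delivered divided by embodied energy.
# The scaling (cycles x depth of discharge x efficiency / embodied energy per unit
# capacity) and all numeric inputs below are illustrative assumptions, not figures
# taken from the paper.

def esoi(cycle_life, depth_of_discharge, round_trip_efficiency,
         embodied_kwh_per_kwh_capacity):
    """Lifetime kWh delivered per kWh of capacity, divided by embodied kWh per kWh."""
    delivered = cycle_life * depth_of_discharge * round_trip_efficiency
    return delivered / embodied_kwh_per_kwh_capacity

# hypothetical battery-like device: 6,000 cycles, 80% depth of discharge,
# 90% round-trip efficiency, 450 kWh embodied per kWh of storage capacity
print(f"baseline ESOI:      {esoi(6_000, 0.8, 0.9, 450):.0f}")
print(f"with 3x cycle life: {esoi(18_000, 0.8, 0.9, 450):.0f}")   # scales linearly
```

Under this simple proportionality, tripling cycle life triples ESOI, which is the sense in which the study's call to extend cycle life by 3-10x directly relaxes the energetic constraint.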

In addition to energetic costs, Barnhart and Benson also calculated the material costs of building these grid-scale storage technologies. In general, they found that the material constraints aren’t as limiting as the energetic constraints. However, PHS has a different type of challenge—the number of geologic locations conducive to pumped hydro is dwindling, and those that remain have environmental sensitivities, Barnhart noted.

A primary goal of the study was to encourage the development of practical technologies that lower greenhouse emissions and curb global warming, Barnhart said. Coal- and natural gas-fired power plants are responsible for at least a third of those emissions.

I would like our study to be a call to arms for increasing the cycle life of electrical energy storage. It’s really a basic conservative principle: The longer something lasts, the less energy you’re going to use.

—Charles Barnhart

Resources

  • Charles J. Barnhart and Sally M. Benson (2013) On the importance of reducing the energetic and material demands of electrical energy storage. Energy Environ. Sci., doi: 10.1039/C3EE24040A

Solar Photovoltaics (PV) limited by raw materials

This paper (excerpts below) shows that there are limits to growth — there simply aren’t enough minerals in the world that can be produced physically and/or at a reasonable cost for the most popular kinds of PV being made now. The authors suggest that research ought to focus on solar PV technologies for which enough physical material in the world is available.

The availability of some rare elements may limit the growth of some PV technologies. Of particular concern is tellurium used for cadmium telluride, and indium used for copper indium gallium diselenide. Tellurium is primarily extracted as a byproduct of electrolytic copper refining, and global supply is estimated at approximately 630 MT/yr. Tellurium supply is expected to increase over time based on increasing global copper demand. Indium is primarily extracted as a byproduct of zinc refining, and global supply is estimated at about 1,300 MT/yr. Nearly all of the indium supply is used to make transparent conductive oxide coatings, such as those used for flat-panel liquid crystal displays. Global indium supply is projected to increase to meet demand for non-PV applications, and potentially for PV applications as well. Currently, it takes approximately 60–90 MT of tellurium to make 1 GW of cadmium telluride, and approximately 25–50 MT of indium to make 1 GW of copper indium gallium diselenide.  Competition with non-PV applications for rare materials could significantly restrict supply, particularly for indium, and could increase both material prices and price volatilities. Material feedstocks for crystalline silicon PV are virtually unlimited, and supply constraints are not likely to limit growth. However, crystalline silicon cells typically use silver for electrical contacts, which could be subject to price spikes if there are supply shortages.  Source: 2014. Renewable Electricity Futures Study Exploration of High-Penetration Renewable Electricity Futures. National Renewable Energy Laboratory.
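
The supply and intensity figures quoted above imply an upper bound on annual deployment: dividing each element's estimated annual supply by the tonnes required per gigawatt gives a ceiling in GW per year, before any competition from non-PV uses. The sketch below runs that division with the numbers from the excerpt; it is an illustration, not a forecast.

```python
# Upper-bound deployment ceiling implied by the supply and intensity figures above.
# Ignores competing non-PV demand (e.g., indium for displays), recycling, and
# future supply growth, so this is an illustration rather than a forecast.

supply_mt_per_yr = {"tellurium (CdTe)": 630, "indium (CIGS)": 1_300}   # MT per year
mt_per_gw = {"tellurium (CdTe)": (60, 90), "indium (CIGS)": (25, 50)}  # MT per GW

for element, supply in supply_mt_per_yr.items():
    low, high = mt_per_gw[element]
    # higher material intensity (MT/GW) gives the lower bound on GW/yr, and vice versa
    print(f"{element}: roughly {supply / high:.0f}-{supply / low:.1f} GW/yr ceiling")
```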

Wadia, C. et al. 2009. Materials Availability Expands the Opportunity for Large-Scale Photovoltaics Deployment. Environ. Sci. Technol. 43 2072-2077

Our analysis highlights a photovoltaic future that may not be dependent on either silicon technologies or currently popular thin films.

FIGURE 1. Annual electricity production potential for 23 inorganic photovoltaic materials. Known economic reserves (also known as Reserve Base) and annual production are taken from the U.S. Geological Survey studies (21). Total U.S. and worldwide annual electricity consumption are labeled on the figure for comparison.

Forecasts of the future costs of vital materials have a high-profile history. In 1980, Paul Ehrlich and Julian Simon made a public wager on the future price change of chrome, copper, nickel, tin, and tungsten. Ehrlich and his colleagues wagered a total of $1000, or $200 per metal. In 1990, as Simon had predicted, the inflation-normalized price of all five metals had dropped to ~$430 because cheaper plastics and ceramics replaced more costly metals, lowering demand and subsequently the price of those metals (14).

Today that basket of 5 metals is worth over $1500. Continued demand for higher-purity, and thus more valuable, materials has been the driver of this reversal of the initial Ehrlich-Simon wager (15–19).

For example, the average quality of copper ore has gone from 2.4% to 1% in the last 100 years.

Indium, a secondary metal byproduct of zinc mining, has shot up 400% over the past 5 years due to an increase in demand from the digital display market (20, 21).

We explore the material limits for PV expansion by examining both material supply and least cost per watt for the most promising semiconductors as active photogenerating materials; 23 potential photovoltaic technologies were evaluated. Low-efficiency cell types were not significantly investigated, regardless of cost.

Conclusion

We estimated the electricity contribution and cost impact of material extraction to a finished solar module by calculating the maximum TWh and minimum ¢/W of each of the 23 compounds evaluated (Figures 1 and 2).

FIGURE 2. Minimum ¢/W for 23 inorganic photovoltaic materials. Component cost contribution in ¢/W is a strong indicator of value for future deployment. Calculated values for all 23 compounds evaluated are shown. The range of costs is between 0.327¢/W for Ag2S and 0.000002¢/W for FeS2. While the actual dollar figure per watt for material extraction will appear small compared to the entire cost of an installed PV system, the cost of processing the material for PV-grade applications is a larger cost contributor and should be evaluated further.

PV materials that could achieve extraction costs lower than x-Si at 0.039¢/W and demonstrate equal or greater electricity production potential versus x-Si include FeS2, Zn3P2, and a-Si. Iron pyrite (FeS2) is significantly more attractive in both cost and availability than all other compounds, whereas several of the leading thin-film technologies like CdTe are not able to meet the large-scale needs. The two materials PbS and NiS are both promising, but outside of a quantum-confined system, they will be hampered by disproportionately higher balance-of-system (BOS) and installation costs due to low power conversion efficiencies. Furthermore, some unusual candidate compounds, like ZnO, have a high abundance but fail to meet an acceptable limit on cost, and some compounds, like CdS, show favorable cost but a low production potential, making them candidate technologies primarily for niche markets.

Silicon Comparison. It is important to compare results of these novel material systems to silicon, the second most abundant element in the earth’s crust at 28% of the lithosphere by mass. Despite its abundance, silicon has an annual production that trails that of copper by 145,000 metric tons and a cost of extraction of ~$1.70/kg, as compared to the $0.03/kg for iron (21). This disparity in costs is traced to the energy input of 24 kWh/kg for useable metallurgical-grade silicon from silica (SiO2), as opposed to the 2 kWh/kg for converting hematite (Fe2O3) to iron (31, 32). While both processes are already quite efficient, the Gibbs free energy of processing silica is a fixed thermodynamic barrier that will always be present. Crystalline silicon is further disadvantaged by a weighted photon flux absorption coefficient two orders of magnitude smaller than that for FeS2, thereby requiring a much larger material input to achieve the same absorption properties.

(14) Tierney, J. Betting on the Planet. The New York Times, 1990.

(15) Solow, R. M. Economics of Resources or Resources of Economics. Am. Econ. Rev. 1974.

(16) Slade, M. E. Trends in natural-resource commodity prices: An analysis of the time domain. J. Environ. Econ. Manage. 1982.

(17) Nordhaus, W. D. Allocation of Energy Resources. Brookings Pap. Econ. Activity 1973, (3), 529–570.

(18) Hotelling, H. The Economics of Exhaustible Resources (reprinted from Journal of Political Economy, Vol. 39, pp. 137–175, 1931). Bull. Math. Biol. 1991, 53(1-2), 281–312.

(19) Withagen, C. Untested hypotheses in non-renewable resource economics. Environ. Resour. Econ. 1998, 11(3-4), 623–634.

(20) Gordon, R. B.; Bertram, M.; Graedel, T. E. Metal stocks and sustainability. Proc. Natl. Acad. Sci. U. S. A. 2006, 103(5), 1209–1214.

(21) U.S. Geological Survey. Mineral commodity summaries 2007; U.S. Geological Survey: Washington, DC, 2007.

(31) Green, M. A. Solar cells: operating principles, technology, and system applications; Prentice-Hall: Englewood Cliffs, NJ, 1982.

(32) Chapman, P. F.; Roberts, F. Metal resources and energy; Butterworths: London, 1983.


Limits to Growth


Below are links or excerpts of articles about limits to growth

Scientists vindicate ‘Limits to Growth’ – urge investment in ‘circular economy’

Early warning of civilizational collapse by early to mid 21st century startlingly prescient.

Limits to Growth was right. New research shows we’re nearing collapse

Research from the University of Melbourne has found the book’s forecasts are accurate, 40 years on. If we continue to track in line with the book’s scenario, expect the early stages of global collapse to start appearing soon.

As the MIT researchers explained in 1972, growing population and demands for material wealth would lead to more industrial output and pollution. Resources are being used up at a rapid rate, pollution is rising, and industrial output and food per capita are rising. The population is rising quickly. So far, Limits to Growth checks out with reality. So what happens next? According to the book, to feed the continued growth in industrial output there must be ever-increasing use of resources. But resources become more expensive to obtain as they are used up. As more and more capital goes towards resource extraction, industrial output per capita starts to fall – in the book, from about 2015. As pollution mounts and industrial input into agriculture falls, food production per capita falls. Health and education services are cut back, and that combines to bring about a rise in the death rate from about 2020. Global population begins to fall from about 2030, by about half a billion people per decade. Living conditions fall to levels similar to the early 1900s. It’s essentially resource constraints that bring about global collapse in the book. However, Limits to Growth does factor in the fallout from increasing pollution, including climate change.

The issue of peak oil is critical. Many independent researchers conclude that “easy” conventional oil production has already peaked. Even the conservative International Energy Agency has warned about peak oil. Peak oil could be the catalyst for global collapse. Some see new fossil fuel sources like shale oil, tar sands, and coal seam gas as saviors, but the issue is how fast these resources can be extracted, for how long, and at what cost. If they soak up too much capital to extract, the fallout would be widespread.

Exhaustion of cheap mineral resources is terraforming Earth – scientific report.

Soaring costs of resource extraction require transition to post-industrial ‘circular economy’ to avoid collapse. 

June 4, 2014. Nafeez Ahmed. The Guardian.

A new landmark scientific report drawing on the work of the world’s leading mineral experts forecasts that industrial civilisation’s extraction of critical minerals and fossil fuel resources is reaching the limits of economic feasibility, and could lead to a collapse of key infrastructures unless new ways to manage resources are implemented.

The peer-reviewed study – the 33rd Report to the Club of Rome – is authored by Prof Ugo Bardi of the Department of Earth Sciences at the University of Florence, where he teaches physical chemistry. It includes specialist contributions from fifteen senior scientists and experts across the fields of geology, agriculture, energy, physics, economics, geography, transport, ecology, industrial ecology, and biology, among others.

The Club of Rome is a Swiss-based global think tank founded in 1968 consisting of current and former heads of state, UN bureaucrats, government officials, diplomats, scientists, economists and business leaders.

Limits to Growth–At our doorstep, but not recognized

February 6, 2014. Gail Tverberg

How long can economic growth continue in a finite world? This is the question the 1972 book The Limits to Growth by Donella Meadows sought to answer. The computer models that the team of researchers produced strongly suggested that the world economy would collapse sometime in the first half of the 21st century.

I have been researching what the real situation is with respect to resource limits since 2005. The conclusion I am reaching is that the team of 1972 researchers were indeed correct. In fact, the promised collapse is practically right around the corner, beginning in the next year or two. In fact, many aspects of the collapse appear already to be taking place, such as the 2008-2009 Great Recession and the collapse of the economies of smaller countries such as Greece and Spain. How could collapse be so close, with virtually no warning to the population?

Reaching Limits to Growth: What Should our Response Be?

February 17, 2014 Gail Tverberg

Oil limits seem to be pushing us toward a permanent downturn, including a crash in credit availability, loss of jobs, and even possible government collapse. In this process, we are likely to lose access to both fossil fuels and grid electricity. Supply chains will likely need to be very short, because of the lack of credit. This will lead to a need for the use of local materials.

Time to Wake Up: Days of Abundant Resources and Falling Prices Are Over Forever

April 29, 2011.  Jeremy Grantham, the Chief Investment Officer of GMO Capital (with over $106 billion in assets under management). Mr. Grantham began his investment career as an economist with Royal Dutch Shell and earned his undergraduate degree from the University of Sheffield (U.K.) and an M.B.A. from Harvard Business School. His essay, reformatted for TOD, is below the fold. (Original, on GMO Website, here)

 

Revisiting the Limits to Growth After Peak Oil

In the 1970s a rising world population and the finite resources available to support it were hot topics. Interest faded—but it’s time to take another look

Charles A. S. Hall and John W. Day, Jr. May-June 2009. American Scientist, Volume 97, pp 230-37.

Some excerpts from this excellent paper:

“Despite our inattention, resource depletion and population growth have been continuing relentlessly. Our general feeling is that few people think about these issues today, but even most of those who do so believe that technology and market economics have resolved the problems. The warning in The Limits to Growth —and even the more general notion of limits to growth—are seen as invalid. Even ecologists have largely shifted their attention away from resources to focus, certainly not inappropriately, on various threats to the biosphere and biodiversity. They rarely mention the basic resource/human numbers equation that was the focal point for earlier ecologists.

Although many continue to dismiss what those researchers in the 1970s wrote, there is growing evidence that the original “Cassandras” were right on the mark in their general assessments.

There is a common perception, even among knowledgeable environmental scientists, that the limits-to-growth model was a colossal failure, since obviously its predictions of extreme pollution and population decline have not come true. But what is not well known is that the original output, based on the computer technology of the time, had a very misleading feature: There were no dates on the graph between the years 1900 and 2100. If one draws a timeline along the bottom of the graph for the halfway point of 2000, then the model results are almost exactly on course some 35 years later in 2008 (with a few appropriate assumptions). Of course, how well it will perform in the future when the model behavior gets more dynamic is not yet known. Although we do not necessarily advocate that the existing structure of the limits-to-growth model is adequate for the task to which it is put, it is important to recognize that its predictions have not been invalidated and in fact seem quite on target. We are not aware of any model made by economists that is as accurate over such a long time span.

Technology does not work for free. As originally pointed out in the early 1970s by Odum and Pimentel, increased agricultural yield is achieved principally through the greater use of fossil fuel for cultivation, fertilizers, pesticides, drying and so on, so that it takes some 10 calories of petroleum to generate each calorie of food that we eat. The fuel used is divided nearly equally between the farm, transport and processing, and preparation. The net effect is that roughly 19 percent of all of the energy used in the United States goes to our food system. Malthus could not have foreseen this enormous increase in food production through petroleum.

Together oil and natural gas supply nearly two-thirds of the energy used in the world, and coal another 20 percent. We do not live in an information age, or a post-industrial age, or (yet) a solar age, but a petroleum age.

Most environmental science textbooks focus far more on the adverse impacts of fossil fuels than on the implications of our overwhelming economic and even nutritional dependence on them. The failure today to bring the potential reality and implications of peak oil, indeed of peak everything, into scientific discourse and teaching is a grave threat to industrial society.

The concept of the possibility of a huge, multifaceted failure of some substantial part of industrial civilization is so completely outside the understanding of our leaders that we are almost totally unprepared for it.

There are virtually no extant forms of transportation, beyond shoe leather and bicycles, that are not based on oil, and even our shoes are now often made of oil. Food production is very energy intensive, clothes and furniture and most pharmaceuticals are made from and with petroleum, and most jobs would cease to exist without petroleum. But on our university campuses one would be hard pressed to have any sense of that beyond complaints about the increasing price of gasoline, even though a situation similar to the 1970s gas shortages seemed to be unfolding in the summer and fall of 2008 in response to three years of flat oil production, assuaged only when the financial collapse decreased demand for oil.

No substitutes for oil have been developed on anything like the scale required, and most are very poor net energy performers. Despite considerable potential, renewable sources (other than hydropower or traditional wood) currently provide less than 1 percent of the energy used in both the U.S. and the world, and the annual increase in the use of most fossil fuels is generally much greater than the total production (let alone increase) in electricity from wind turbines and photovoltaics. Our new sources of “green” energy are simply increasing along with (rather than displacing) all of the traditional ones.”

Revisiting The Limits to Growth: Could The Club of Rome Have Been Correct, After All?

October 2000. Matthew R. Simmons

In the early 1970’s, a book was published entitled, The Limits To Growth, a report of the Club of Rome’s project on the predicament of mankind. Its conclusions were stunning. It was ultimately published in 30 languages and sold over 30 million copies. According to a sophisticated MIT computer model, the world would ultimately run out of many key resources. These limits would become the “ultimate” predicament to mankind.

Over the past few years, I have heard various energy economists lambast this “erroneous” work. Often the book has been portrayed as the literal “poster child” of misinformed “Malthusian” thinking that misled so many people into believing the world faced a shortage mania 30 years ago. Obviously, there were no “limits to growth.” The worry that shortages would rule the day as we neared the end of the 20th century became a bad joke. Instead of shortages, the last two decades of the 20th century were marked by glut. The world ended up enjoying significant declines in almost all commodity prices. Technology and efficiency won. The Club of Rome and its “nay-saying” disciples clearly lost!

The critics of this supposedly flawed work still relish pointing out how wrong the theory turned out to be. A Foreign Affairs story published this past January, entitled “Cheap Oil,” forecast two decades of a pending oil glut. In this article, the Club of Rome’s work was scorned as the source document that led an entire generation of wrong-thinking people to believe that energy supplies would run short. In this Foreign Affairs report, the authors stated, “…the ‘sky-is-falling’ school of oil forecasters has been systematically wrong for more than a generation.”

What the Limits to Growth Actually Said

After reading The Limits to Growth, I was amazed. Nowhere in the book was there any mention about running out of anything by 2000. Instead, the book’s concern was entirely focused on what the world might look like 100 years later. There was not one sentence or even a single word written about an oil shortage, or limit to any specific resource, by the year 2000.

The group all shared a common concern that mankind faced a future predicament of grave complexity, caused by a series of interrelated problems that traditional institutions and policies would not be able to cope with, let alone come to grips with in their full context. A core thesis of their work was that long-term exponential growth is easy to overlook: human nature leads people to innocently presume growth rates are linear. The book then postulated that a continuation of the exponential growth of the seventies in the world’s population, industrial output, agricultural and natural resource consumption, and the pollution produced by all of the above would result in severe constraints on all known global resources by 2050 to 2070.

The first conclusion was a view that if present growth trends continued unchanged, a limit to the growth that our planet has enjoyed would be reached sometime within the next 100 years. This would then result in a sudden and uncontrollable decline in both population and industrial capacity.

The second key conclusion was that these growth trends could be altered. Moreover, if proper alterations were made, the world could establish a condition of “ecological stability” that would be sustainable far into the future.

The third conclusion was a view that the world could embark on this second path, but the sooner this effort started, the greater the chance would be of achieving this “ecologically stable” success.

 

Brown, J., et al. January 2011. Energetic Limits to Economic Growth. BioScience, Vol. 61, No. 1.

In just a few thousand years the human population has colonized the entire world and grown to almost 7 billion. Humans now appropriate 20% to 40% of terrestrial annual net primary production, and have transformed the atmosphere, water, land, and biodiversity of the planet (Vitousek et al. 1997, Haberl et al. 2007). For centuries some have questioned how long a finite planet can continue to support near-exponential population and economic growth (e.g., Malthus 1798, Ehrlich 1968, Meadows et al. 1972). Recent issues such as climate change, the global decline in population growth rate, the depletion of petroleum reserves and resulting increase in oil prices, and the recent economic downturn have prompted renewed concerns about whether longstanding trajectories of population and economic growth can continue (e.g., Arrow et al. 2004).

Economic growth and development require that energy and other resources be extracted from the environment to manufacture goods, provide services, and create capital. The central role of energy is substantiated by both theory and data. Key theoretical underpinnings come from the laws of thermodynamics: first, that energy can be neither created nor destroyed, and second, that some capacity to perform useful work is lost as heat when energy is converted from one form to another. Complex, highly organized systems, including human economies, are maintained in states far from thermodynamic equilibrium by the continual intake and transformation of energy (Soddy 1926, Odum 1971, Georgescu-Roegen 1977, Ruth 1993, Schneider and Kay 1995, Hall et al. 2001, Chen 2005, Smil 2008). Empirically, the central role of energy in modern human economies is demonstrated by the positive relationship between energy use and economic growth (Shafiee and Topal 2008, Smil 2008, Payne 2010).

Increased energy supply. The sources of energy that may be used to support future economic growth include finite stocks of fossil fuels as well as nuclear, renewable, and other proposed but unproven technologies. Fossil fuels currently provide 85% of humankind’s energy needs (figure 5), but they are effectively fixed stores that are being depleted rapidly (Heinberg 2003, IEA 2008, Hall and Day 2009). Conventional nuclear energy currently supplies only about 6% of global energy; fuel supplies are also finite, and future developments are plagued by concerns about safety, waste storage, and disposal (Nel and Cooper 2009). A breakthrough in nuclear fusion, which has remained elusive for the last 50 years, could potentially generate enormous quantities of energy, but would likely produce large and unpredictable socioeconomic and environmental consequences. Solar, hydro, wind, and tidal renewable energy sources are abundant, but environmental impacts and the time, resources, and expenses required to capture their energy limit their potential (Hall and Day 2009). Biofuels may be renewable, but ecological constraints and environmental impacts constrain their contribution (Fargione et al. 2008). More generally, most efforts to develop new sources of energy face economic problems of diminishing returns on energy and monetary investment (Hall et al. 1986, Tainter 1988, Allen et al. 2001, Tainter et al. 2003).

The nonlinear, complex nature of the global economy raises the possibility that energy shortages might trigger massive socioeconomic disruption. Again, consider the analogy to biological metabolism: Gradually reducing an individual’s food supply leads initially to physiological adjustments, but then to death from starvation, well before all food supplies have been exhausted. Mainstream economists historically have dismissed warnings that resource shortages might permanently limit economic growth. Many believe that the capacity for technological innovation to meet the demand for resources is as much a law of human nature as the Malthusian-Darwinian dynamic that creates the demand (Barro and Sala-i-Martin 2003, Durlauf et al. 2005, Mankiw 2006). However, there is no scientific support for this proposition; it is either an article of faith or based on statistically flawed extrapolations of historical trends. The ruins of Mohenjo Daro, Mesopotamia, Egypt, Rome, the Maya, Angkor, Easter Island, and many other complex civilizations provide incontrovertible evidence that innovation does not always prevent socioeconomic collapse (Tainter 1988, Diamond 2004).

Conclusions

We are by no means the first to write about the limits to economic growth and the fundamental energetic constraints that stem directly from the laws of thermodynamics and the principles of ecology. Beginning with Malthus (1798), both ecologists and economists have called attention to the essential dependence of economies on natural resources and have pointed out that near-exponential growth of the human population and economy cannot be sustained indefinitely in a world of finite resources (e.g., Soddy 1922, Odum 1971, Daly 1977, Georgescu-Roegen 1977, Cleveland et al. 1984, Costanza and Daly 1992, Hall et al. 2001, Arrow et al. 2004, Stern 2004, Nel and van Zyl 2010). Some ecological economists and systems ecologists have made similar theoretical arguments for energetic constraints on economic systems (e.g., Odum 1971, Hall et al. 1986). However, these perspectives have not been incorporated into mainstream economic theory, practice, or pedagogy (e.g., Barro and Sala-i-Martin 2003, Mankiw 2006), and they have been downplayed in consensus statements by influential ecologists (e.g., Lubchenco et al. 1991, Palmer et al. 2004, ESA 2009) and sustainability scientists (e.g., NRC 1999, Kates et al. 2001, ICS 2002, Kates and Parris 2003, Parris and Kates 2003, Clark 2007).

Excerpts from: Carolyn Lochhead. 4 Jan 2014. Critics question desirability of relentless economic growth. San Francisco Chronicle.

“We are approaching the planet’s limitations. So when I see the media barrage about buying more stuff, it’s almost like a science fiction movie where .. we are undermining the very ecological systems which allow life to continue, but no one’s allowed to talk about it.”  Annie Leonard, founder of the Story of Stuff project, a Berkeley-based effort to curb mass consumption.

Ecologists warn that economic growth is strangling the natural systems on which life depends, creating not just wealth, but filth on a planetary scale. Carbon pollution is changing the climate. Water shortages, deforestation, tens of millions of acres of land too polluted to plant, and other global environmental ills are increasingly viewed as strategic risks by governments and corporations around the world.

Stanford University ecologist Gretchen Daily

As the world economy grows relentlessly, ecologists warn that nature’s ability to absorb wastes and regenerate natural resources is being exhausted. “We’re driving natural capital to its lowest levels ever in human history,” Daily said.

“The physical pressure that human activities put on the environment can’t possibly be sustained,” said Stanford University ecologist Gretchen Daily, who is at the forefront of efforts across the world to incorporate “natural capital,” the value of such things as water, topsoil and genetic diversity that nature provides, into economic decision-making.

For example, scientists estimate that commercial fishing, if it continues at the present rate, will exhaust fisheries within the lifetime of today’s children. The global “by-catch” of discarded birds, turtles, and other marine animals alone has reached at least 20 million tons a year.

Mainstream economists universally reject the concept of limiting growth.

As Larry Summers, a former adviser to President Obama, once put it, “The idea that we should put limits on growth because of some natural limit is a profound error, and one that, were it ever to prove influential, would have staggering social costs.”

Since World War II, the overarching goal of U.S. policy under both parties has been to keep the economy growing as fast as possible. Growth is seen as the base cure for every social ill, from poverty and unemployment to a shrinking middle class.  Last month, Obama offered a remedy to widening income inequality: “We’ve got to grow the economy even faster.”

U.C. Berkeley Energy & Resources Group’s Richard Norgaard: We don’t have to have a free-market economy

Economies are not fixed and unchangeable.  The United States had a centrally planned economy in World War II, then a mixed Cold War economy that built the Interstate Highway System and established social welfare programs like Medicare. Today’s more free-market economy took root in the 1980s.

“Economies aren’t natural,” Norgaard said. “We build them to do what we need to do, and we built the economy we have.”

 

 

 

Cassandra’s curse: how “The Limits to Growth” was demonized

March 9, 2008, Ugo Bardi

In 1972, the LTG study arrived in a world that had known more than two decades of unabated growth after the end of the Second World War. It was a time of optimism and faith in technological progress that, perhaps, had never been so strong in the history of humankind. With nuclear power on the rise, with no hint that mineral resources were scarce, with population growing fast, it seemed that the limits to growth, if such a thing existed, were so far away in the future that there was no reason to worry. In any case, even if these limits were closer than generally believed, didn’t we have technology to save us? With nuclear energy on the rise, a car in every garage, the Moon just conquered in 1969, the world seemed to be all set for a shiny future. Against that general feeling, the results of LTG were a shock.
The LTG study had everything that was needed to become a major advance in science. It came from a prestigious institution, MIT; it was sponsored by a group of brilliant and influential intellectuals, the Club of Rome; it used the most modern and advanced computation techniques; and, finally, the events that took place a few years after publication, the great oil crisis of the 1970s, seemed to confirm the vision of the authors. Yet, the study failed to generate a robust current of academic research and, a couple of decades after publication, the general opinion about it had completely changed. Far from being considered the scientific revolution of the century, in the 1990s LTG had become everyone’s laughing stock: little more than the rumination of a group of eccentric (and probably slightly feebleminded) professors who had really thought that the end of the world was near. In short, Chicken Little with a computer.
With time, the debate veered more and more to the political side. In 1997, the Italian economist Giorgio Nebbia noted that the reaction against the LTG study had come from at least four different fronts. One was from those who saw the book as a threat to the growth of their businesses and industries. A second was that of professional economists, who saw LTG as a threat to their dominance in advising on economic matters. The Catholic world provided further ammunition for the critics, being piqued at the suggestion that overpopulation was one of the major causes of the problems. Then, the political left in the Western world saw the LTG study as a scam of the ruling class, designed to trick workers into believing that the proletarian paradise was not a practical goal. And Nebbia’s is clearly an incomplete list: it leaves out religious fundamentalists, the political right, believers in infinite growth, politicians seeking easy solutions to all problems, and many others. (Source: http://europe.theoildrum.com/node/3551)


The Real Cause Of Low Oil Prices: Interview With Arthur Berman

By James Stafford, 04 January 2015, oilprice.com

With all the conspiracy theories surrounding OPEC’s November decision not to cut production, is it really not just a case of simple economics? The U.S. shale boom has seen huge hype, but the numbers speak for themselves, and such overflowing optimism may have been unwarranted. When discussing harsh truths in energy, no sector is in greater need of a reality check than renewable energy.

In a third exclusive interview with James Stafford of Oilprice.com, energy expert Arthur Berman explores:

• How the oil price situation came about and what was really behind OPEC’s decision
• What the future really holds in store for U.S. shale
• Why the U.S. oil exports debate is nonsensical for many reasons
• What lessons can be learnt from the U.S. shale boom
• Why technology doesn’t have as much of an influence on oil prices as you might think
• How the global energy mix is likely to change but not in the way many might have hoped

OP: The Current Oil Situation – What is your assessment?

Arthur Berman: The current situation with oil price is really very simple. Demand is down because of a high price for too long. Supply is up because of U.S. shale oil and the return of Libya’s production. Decreased demand and increased supply equals low price.

As far as Saudi Arabia and its motives, that is very simple also. The Saudis are good at money and arithmetic. Faced with the painful choice of losing money maintaining current production at $60/barrel or taking 2 million barrels per day off the market and losing much more money—it’s an easy choice: take the path that is less painful. If there are secondary reasons like hurting U.S. tight oil producers or hurting Iran and Russia, that’s great, but it’s really just about the money.

Saudi Arabia met with Russia before the November OPEC meeting and proposed that if Russia cut production, Saudi Arabia would also cut and get Kuwait and the Emirates at least to cut with it. Russia said, “No,” so Saudi Arabia said, “Fine, maybe you will change your mind in six months.” I think that Russia and maybe Iran, Venezuela, Nigeria and Angola will change their minds by the next OPEC meeting in June.

We’ve seen several announcements by U.S. companies that they will spend less money drilling tight oil in the Bakken and Eagle Ford Shale Plays and in the Permian Basin in 2015. That’s great but it will take a while before we see decreased production. In fact, it is more likely that production will increase before it decreases. That’s because it takes time to finish the drilling that’s started, do less drilling in 2015 and finally see a drop in production. Eventually though, U.S. tight oil production will decrease. About that time—perhaps near the end of 2015—world oil prices will recover somewhat due to OPEC and Russian cuts after June and increased demand because of lower oil price. Then, U.S. companies will drill more in 2016.

OP: How do you see the shale landscape changing in the U.S. given the current oil price slump?

Arthur Berman: We’ve read a lot of silly articles since oil prices started falling about how U.S. shale plays can break even at whatever the latest, lowest price of oil happens to be. Doesn’t anyone realize that the investment banks that do the research behind these articles have a vested interest in making people believe that the companies they’ve put billions of dollars into won’t go broke because prices have fallen? This is total propaganda.

We’ve done real work to determine the EUR (estimated ultimate recovery) of all the wells in the core of the Bakken Shale play, for example. It’s about 450,000 barrels of oil equivalent per well counting gas. When we take the costs and realized oil and gas prices that the companies involved provide to the Securities and Exchange Commission in their 10-Qs, we get a break-even WTI price of $80-85/barrel. Bakken economics are at least as good as or better than the Eagle Ford and Permian, so this is a fairly representative range for break-even oil prices.
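As a rough illustration of how a break-even figure of this kind is assembled (and of Berman’s later point that leaving out cost categories produces much lower numbers), here is a minimal sketch. Only the roughly 450,000 BOE EUR comes from the interview; the well cost, operating cost, royalty share and “other full-cycle costs” below are hypothetical placeholders, not figures from Berman or any 10-Q.

    # Hypothetical sketch of a per-well break-even oil price. Only the
    # ~450,000 BOE EUR is from the interview; every other input is an
    # illustrative placeholder, not a number from Berman or any 10-Q.

    def breakeven_price(eur_boe, well_cost, opex_per_boe,
                        other_costs_per_boe, royalty_and_tax_frac):
        """Price ($/BOE) at which lifetime net revenue just covers costs."""
        capital_per_boe = well_cost / eur_boe
        total_cost_per_boe = capital_per_boe + opex_per_boe + other_costs_per_boe
        # Producers keep (1 - royalty_and_tax_frac) of each revenue dollar.
        return total_cost_per_boe / (1.0 - royalty_and_tax_frac)

    EUR = 450_000          # BOE per well, cited in the interview
    WELL_COST = 9_000_000  # assumed drilling + completion cost, $
    OPEX = 12.0            # assumed lease operating expense, $/BOE
    ROYALTY_TAX = 0.25     # assumed royalty + production tax share

    # Drilling-and-completion-only arithmetic gives a deceptively low number...
    print(breakeven_price(EUR, WELL_COST, OPEX, 0.0, ROYALTY_TAX))   # ~$43/BOE
    # ...while adding other full-cycle costs (G&A, interest, land, gathering;
    # assumed here at $30/BOE) lands in the range Berman cites.
    print(breakeven_price(EUR, WELL_COST, OPEX, 30.0, ROYALTY_TAX))  # ~$83/BOE

The point of the sketch is not the particular numbers but the mechanism: which cost categories are included largely determines the break-even price reported.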


But smart people don’t invest in things that break-even. I mean, why should I take a risk to make no money on an energy company when I can invest in a variable annuity or a REIT that has almost no risk that will pay me a reasonable margin?

Oil prices need to be around $90 to attract investment capital. So, are companies OK at current oil prices? Hell no! They are dying at these prices. That’s the truth based on real data. The crap that we read that companies are fine at $60/barrel is just that. They get to those prices by excluding important costs like everything except drilling and completion. Why does anyone believe this stuff?

If you somehow don’t believe or understand EURs and 10-Qs, just get on Google Finance and look at third quarter financial data for the companies that say they are doing fine at low oil prices.

Continental Resources is the biggest player in the Bakken. Their free cash flow—cash from operating activities minus capital expenditures—was -$1.1 billion in the third quarter of 2014. That means that they spent more than $1 billion more than they made. Their debt was 120% of equity. That means that if they sold everything they own, they couldn’t pay off all their debt. That was at $93 oil prices.

And they say that they will be fine at $60 oil prices? Are you kidding? People need to wake up and click on Google Finance to see that I am right. Capital costs, by the way, don’t begin to reflect all of their costs like overhead, debt service, taxes, or operating costs so the true situation is really a lot worse.
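The two screens Berman describes are simple to script against quarterly filings. A minimal sketch; the example inputs are hypothetical placeholders, not Continental Resources’ actual numbers.

    # Minimal sketch of the two screens described above, applied to 10-Q
    # style figures. The example inputs are hypothetical placeholders,
    # not any company's actual filings.

    def free_cash_flow(cash_from_operations, capital_expenditures):
        """Free cash flow = operating cash flow minus capital spending."""
        return cash_from_operations - capital_expenditures

    def debt_to_equity(total_debt, shareholders_equity):
        """Debt as a fraction of book equity (1.2 == 120%)."""
        return total_debt / shareholders_equity

    # Hypothetical quarterly figures, in billions of dollars
    fcf = free_cash_flow(cash_from_operations=0.9, capital_expenditures=2.0)
    leverage = debt_to_equity(total_debt=6.0, shareholders_equity=5.0)

    print(f"Free cash flow: {fcf:+.1f} $B")   # negative means outspending cash flow
    print(f"Debt/equity:    {leverage:.0%}")  # above 100% means debt exceeds book equity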

So, how do I see the shale landscape changing in the U.S. given the current oil price slump? It was pretty awful before the price slump so it can only get worse. The real question is “when will people stop giving these companies money?” When the drilling slows down and production drops—which won’t happen until at least mid-2016—we will see the truth about the U.S. shale plays. They only work at high oil prices. Period.

OP: What, if any, effect will low oil prices have on the US oil exports debate?

Arthur Berman: The debate about U.S. oil exports is silly. We produce about 8.5 million barrels of crude oil per day. We import about 6.5 million barrels of crude oil per day, although we have been importing less every year. That starts to change in 2015, and after 2018 our imports will start to rise again, according to EIA. The same is true of domestic production: 2014 will see the greatest annual rate of increase in production, in 2015 the rate of increase starts to slow down, and production will decline after 2019, again according to EIA.

Why would we want to export oil when we will probably never import less than 37 or 38 percent (5.8 million barrels per day) of our consumption? For money, of course!

Remember, all of the calls for export began when oil prices were high. WTI was around $100/barrel from February through mid-August of this year. Brent was $6 or $7 higher. WTI was lower than Brent because the shale players had over-produced oil, like they did earlier with gas, and lowered the domestic price.

U.S. refineries can’t handle the light oil and condensate from the shale plays so it has to be blended with heavier imported crudes and exported as refined products. Domestic producers could make more money faster if they could just export the light oil without going to all of the trouble to blend and refine it.

This, by the way, is the heart of the Keystone XL pipeline debate. We’re not planning to use the oil domestically but will blend that heavy oil with condensate from shale plays, refine it and export petroleum products. Keystone is about feedstock.

Would exporting unrefined light oil and condensate be good for the country? There may be some net economic benefit but it doesn’t seem smart for us to run through our domestic supply as fast as possible just so that some oil companies can make more money.

OP: In global terms, what do you think developing producer nations can learn from the US shale boom?

Arthur Berman: The biggest take-away about the U.S. shale boom for other countries is that prices have to be high and stay high for the plays to work. Another important message is that drilling can never stop once it begins because decline rates are high. Finally, no matter how big the play is, only about 10-15% of it—the core or sweet spot—has any chance of being commercial. If you don’t know how to identify the core early on, the play will probably fail.

Not all shale plays work. Only marine shales that are known oil source rocks seem to work based on empirical evidence from U.S. plays. Source rock quality and source maturity are the next big filter. Total organic carbon (TOC) has to be at least 2% by weight in a fairly thick sequence of shale. Vitrinite reflectance (Ro) needs to be 1.1 or higher.

If your shale doesn’t meet these threshold criteria, it probably won’t be commercial. Even if it does meet them, it may not work. There is a lot more uncertainty about shale plays than most people think.
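The thresholds Berman lists translate into a simple pass/fail screen. A sketch of that screen as stated (marine source rock, TOC of at least 2 percent by weight, Ro of at least 1.1); the function name and example prospect are mine, and the “fairly thick sequence of shale” criterion is left out because no number is given.

    # Sketch of the threshold screen described above. Passing it is necessary
    # but, as noted in the interview, not sufficient for a commercial play.

    def passes_shale_screen(is_marine_source_rock, toc_weight_pct, vitrinite_ro):
        """Apply the minimum criteria quoted in the interview."""
        return (is_marine_source_rock
                and toc_weight_pct >= 2.0
                and vitrinite_ro >= 1.1)

    # Hypothetical prospect: marine source rock, organically rich, but thermally immature.
    print(passes_shale_screen(is_marine_source_rock=True,
                              toc_weight_pct=3.5,
                              vitrinite_ro=0.9))   # False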

OP: Given technological advances in both the onshore and offshore sectors which greatly increase production, how likely is it that oil will stay below $80 for years to come?

Arthur Berman: First of all, I’m not sure that the premise of the question is correct. Who said that technology is responsible for increasing production? Higher price has led to drilling more wells. That has increased production. It’s true that many of these wells were drilled using advances in technology like horizontal drilling and hydraulic fracturing, but these weren’t free. Has the unit cost of a barrel of oil or gas gone down in recent years? No, it has gone up. That’s why the price of oil is such a big deal right now.

Domestic oil prices were below about $30/barrel until 2004 and companies made enough money to stay in business. WTI averaged about $97/barrel from 2011 until August of 2014. That’s when we saw the tight oil boom. I would say that technology followed price and that price was the driver. Now that prices are low, all the technology in the world won’t stop falling production.

Many people think that the resurgence of U.S. oil production shows that Peak Oil was wrong. Peak oil doesn’t mean that we are running out of oil. It simply means that once conventional oil production begins to decline, future supply will have to come from more difficult sources that will be more expensive or of lower quality or both. This means production from deep water, shale and heavy oil. It seems to me that Peak Oil predictions are right on track.

Technology will not reduce the break-even price of oil. The cost of technology requires high oil prices.

The companies involved in these plays never stop singing the praises of their increasing efficiency through technology—this has been a constant litany since about 2007—but we never see those improvements reflected in their financial statements. I don’t doubt that the companies learn and get better at things like drilling time but other costs must be increasing to explain the continued negative cash flow and high debt of most of these companies.

The price of oil will recover. Opinions that it will remain low for a long time do not take into account that all producers need about $100/barrel. The big exporting nations need this price to balance their fiscal budgets. The deep-water, shale and heavy oil producers need $100 oil to make a small profit on their expensive projects. If oil price stays at $80 or lower, only conventional producers will be able to stay in business by ignoring the cost of social overhead to support their regimes. If this happens, global supply will fall and the price will increase above $80/barrel. Only a global economic collapse would permit low oil prices to persist for very long.

OP: How do you see the global energy mix changing in the coming decades? Have renewables made enough advances to properly compete with fossil fuels or is that still a long way off?

Arthur Berman: The global energy mix will move increasingly to natural gas and more slowly to renewable energy. Global conventional oil production peaked in 2005-2008. U.S. shale gas production will peak in the next 5 to 7 years but Russia, Iran, Qatar and Turkmenistan have sufficient conventional gas reserves to supply Europe and Asia for several decades. Huge discoveries have been made in the greater Indian Ocean region—Madagascar, offshore India, the Northwest Shelf of Australia and Papua New Guinea. These will provide the world with natural gas for several more decades. Other large finds have been made in the eastern Mediterranean.

There will be challenges as we move from an era of oil- to an era of gas-dominated energy supply. The most serious will be in the transport sector where we are thoroughly reliant on liquid fuels today —mostly gasoline and diesel. Part of the transformation will be electric transport using natural gas to generate the power. Increasingly, LNG will be a factor especially in regions that lack indigenous gas supply or where that supply will be depleted in the medium term and no alternative pipeline supply is available like in North America.


Of course, natural gas and renewable energy go hand in hand. Since renewable energy (primarily solar and wind) is intermittent, natural gas backup or base-load generation is necessary. I think that extreme views on either side of the renewable energy issue will have to moderate. On the one hand, renewable advocates are unrealistic about how quickly and easily the world can get off of fossil fuels. On the other hand, fossil fuel advocates ignore the fact that government is already on board with renewables and that, despite the economic issues they raise, renewables are going to move forward, albeit at considerable cost.

Time is rarely considered adequately. Renewable energy accounts for a little more than 2 percent of U.S. total energy consumption. No matter how much people want to replace fossil fuels with renewable energy, we cannot go from 2 percent to 20 or 30 percent in less than a decade, no matter how aggressively we support or even mandate its use. Getting to 50 percent or more of primary energy supply from renewable sources will take decades.
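Berman’s point about time can be checked with compound-growth arithmetic. A minimal sketch; the 2 percent starting share is from the interview, while the target shares, horizons, and the assumption of flat total consumption are illustrative simplifications.

    # Required compound annual growth rate for renewables to move from a
    # ~2% share of primary energy to a larger share, assuming (simplistically)
    # that total energy consumption stays flat.

    def required_annual_growth(start_share, target_share, years):
        return (target_share / start_share) ** (1.0 / years) - 1.0

    for target, years in [(0.20, 10), (0.30, 10), (0.50, 30)]:
        rate = required_annual_growth(0.02, target, years)
        print(f"{target:.0%} in {years} years needs roughly {rate:.0%}/yr sustained growth")

Sustained growth rates of 25 to 30 percent per year for a decade, or around 10 percent per year for three decades, are what the arithmetic implies.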

I appreciate the urgency felt by those concerned with climate change. I think, however, that those who advocate a more-or-less immediate abandonment of fossil fuels fail to understand how a rapid transition might affect the quality of life and the global economy. Much of the climate change debate has centered on who is to blame for the problem. Little attention has been given to what comes next, namely, how we will make that change without extreme economic and social dislocation.

I am not a climate scientist and, therefore, do not get involved in the technical debate. I suggest, however, that those who advocate decisive action in the near term think seriously about how natural gas and nuclear power can make the change they seek more palatable.

The great opportunity for renewable energy lies in electricity storage technology. At present, we are stuck with intermittent power and little effort has gone into figuring out ways to store the energy that wind and solar sources produce when conditions are right. If we put enough capital into storage capability, that can change everything.


Challenges to making California’s grid 33% renewables

Meier, A. May 2014. Challenges to the integration of renewable resources at high system penetration. California Energy Commission.

Energy Research and Development Division Final Project Report, prepared by Alexandra von Meier, California Institute for Energy and Environment, University of California. May 2014.

Excerpts

Successfully integrating renewable resources into the electric grid at penetration levels to meet a 33 percent Renewables Portfolio Standard for California presents diverse technical and organizational challenges.

Renewable and distributed resources introduce space (spatial) and time (temporal) constraints on resource availability and are not always available where or when they are wanted.

Although every energy resource has limitations, the constraints associated with renewables may be more stringent and different from those constraints that baseload power systems were designed and built around.

These unique constraints must be addressed to mitigate problems and overcome difficulties while maximizing the benefits of renewable resources. New efforts are required to coordinate time and space within the electric grid at greater resolution or with a higher degree of refinement than in the past. This requires measuring and actively controlling diverse components of the power system on smaller time scales while working toward long‐term goals. These smaller time scales may be hourly or by the minute, but could also be in the milli‐ or even microsecond range.

To cope with intermittent renewables, the grid needs:

  • reserve generation capacity, available at least a day ahead
  • dispatchable generation with high ramp rates in MW/s
  • generation with regulation capability
  • dispatchable electric storage
  • electric demand response (from customers)
  • direct load control down to a 5-second time scale without impacting end-use (!) (the exclamation mark is theirs, not mine; see http://uc-ciee.org/downloads/Renewable_Energy_2010.pdf)

It is also important to plan and design around the diverse details of local distribution circuits while considering systemic interactions throughout the Western Interconnection. Simultaneously coordinating or balancing these resources in an electric grid across a range of time and distance scales, without presuming any specific enabling technology, is what defines a “smart grid.”

Temporal coordination specifically addresses renewable resources’ time-varying behavior and how this intermittency interacts with other components on the grid, where not only quantities of power but also rates of change and response times are crucially important.

Research needs for temporal coordination relate to:

  • resource intermittence,
  • forecasting and modeling on finer time scales;
  • electric storage and implementation on different time scales;
  • demand response and its implementation as a firm resource;
  • and dynamic behavior of the alternating current grid, including stability and low‐frequency oscillations, and the related behavior of switch‐controlled generation.

Different technologies, management strategies and incentive mechanisms are necessary to address coordination on different time scales.

Spatial coordination refers to how resources are interconnected and connected to loads through the transmission and distribution system. This means connecting remote resources and also addressing the location‐specific effects of a given resource being connected in a particular place. The latter is particularly relevant for distributed generation, which includes numerous smaller units interconnected at the distribution rather than the transmission level.

Research needs for spatial coordination relate to technical, social and economic challenges for:

  • long‐distance transmission expansion;
  • problematic aspects of high‐penetration distributed generation on distribution circuits, including clustering, capacity limitations, modeling of generation and load, voltage regulation, circuit protection, and prevention of unintentional islanding;
  • microgrids and potential strategic development of microgrid concepts, including intentional islanding and variable power quality and reliability.

A challenge to “smart grid” coordination is managing unprecedented amounts of data associated with an unprecedented number of decisions and control actions at various levels throughout the grid.

This report outlined substantial challenges on the way to meeting these goals.

More work is required to move from the status quo to a system with 33 percent intermittent renewables. The complexity of the grid and the need for more refined temporal and spatial coordination represent a profound departure from the capabilities of the legacy, baseload-oriented system. Any “smart grid” development will require time for learning.

Researchers concluded that time was of the essence in answering the many foundational questions about how to design and evaluate new system capabilities, how to re‐write standards and procedures accordingly, how to create incentives to elicit the most constructive behavior from market participants and how to support operators in their efforts to keep the grid working reliably during these transitions. Addressing these questions early may help prevent costly mistakes and delays later on.

CHAPTER 1: Introduction to the Coordination Challenge

Successfully integrating renewable resources in the electric grid at high penetration levels – that is, meeting a 33 percent renewables portfolio standard for California – presents diverse technical and organizational challenges. Some of these challenges have been well-recognized in the literature, while others are emerging from more recent observations. What these challenges have in common is that they can be characterized as a coordination challenge. Renewable and distributed resources introduce space or location (spatial) and time (temporal) constraints on resource availability. It is not always possible to have the resources available where and when they are required.

New efforts will be required to coordinate these resources in space and time within the electric grid.

A combination of economic and technical pressures has made grid operators pay more attention to the grid’s dynamic behaviors, some of which occur within a fraction of an alternating current cycle (one-sixtieth of a second). The entire range of relevant time increments in electric grid operation and planning spans fifteen orders of magnitude: from the microsecond interval on which a solid-state switching device operates, to the tens of years (on the order of a billion seconds) it may take to bring a new fleet of generation and transmission resources online.

CA grid 33 pct renewable time scale (figure from the report, not shown)

In the spatial dimension, power systems have likewise expanded geographically and become strongly interdependent over long distances, while local effects such as power quality are simultaneously gaining importance. About six orders of magnitude are covered, from the very proximate impacts of harmonics (on the scale of an individual building) to the wide-area stability and reliability effects that reach across the Western Interconnect, on the scale of a thousand miles.

Because of their unique properties, any effort to integrate renewable resources to a high penetration level will push outward the time and distance scales on which the grid is operated. For example, it will force distant resource locations to be considered, as well as unprecedented levels of distributed generation on customer rooftops.

The physical characteristics of these new generators will have important implications for system dynamic behavior.

In extending the time and distance scales for grid operations and planning, integrating renewable resources adds to and possibly compounds other, pre-existing technical and economic pressures.

This suggests at least a partial definition for what has recently emerged as a “Holy Grail,” the “smart grid.” The “smart grid” is one that allows or facilitates managing electric power systems simultaneously on larger and smaller scales of distance and time.

Special emphasis falls at the smaller end of each scale, where a “smart grid” allows managing energy and information at higher resolution than the legacy or baseload system.

The fact that solar and wind power are intermittent and non‐dispatchable is widely recognized.

More specifically, the problematic aspects of intermittence include the following:

  1. High variability of wind power. Not only can wind speeds change rapidly, but because the mechanical power contained in the wind is proportional to wind speed cubed, a small change in wind speed causes a large change in power output from a wind rotor (see the sketch after this list).
  2. High correlation of hourly average wind speed among prime California wind areas. With many wind farms on the grid, the variability of wind power is somewhat mitigated by randomness: the most rapid variations, especially, tend to be statistically smoothed out once the output from many wind areas is summed. However, while brief gusts of wind do not tend to occur simultaneously everywhere, the overall daily and even hourly patterns for the best California wind sites tend to be quite similar, because they are driven by the same overall weather patterns across the state.
  3. Time lag between the solar generation peak and the late-afternoon demand peak. The availability of solar power generally coincides well with summer-peaking demand. However, while the highest-load days are reliably sunny, the peak air-conditioning loads occur later in the afternoon due to the thermal inertia of buildings, typically lagging peak insolation by several hours.
  4. Rapid solar output variation due to passing clouds. Passing cloud events tend to be randomized over larger areas, but can cause very rapid output variations locally. This effect is therefore more important for large, contiguous photovoltaic arrays (which a cloud can affect all at once) than for the sum of many smaller, distributed PV arrays. Passing clouds are also less important for solar thermal generation than for PV because the ramp rate is mitigated by thermal inertia (and because concentrating solar plants tend to be built in relatively cloudless climates, since they can use only direct, not diffuse, sunlight).
  5. Limited forecasting ability. Rapid change of power output is especially problematic when it comes without warning. In principle, intermittence can be addressed by firming resources, including reserve generation capacity, dispatchable generation with high ramp rates, generation with regulation capability, dispatchable electric storage, and electric demand response, used in various combinations to offset the variability of renewable generation output. Vital characteristics of these firming resources include not only the capacity they can provide, but also their response times and ramp rates.
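A back-of-the-envelope check on the cube law in item 1; the air density and rotor size below are illustrative assumptions, not values from the CEC report.

    # Power available in the wind scales with the cube of wind speed:
    #     P = 0.5 * rho * A * v**3   (before applying the turbine's power coefficient)
    # so a modest change in wind speed produces a much larger change in power.

    RHO = 1.225          # air density, kg/m^3 (sea level; illustrative)
    ROTOR_AREA = 5027.0  # swept area of an ~80 m diameter rotor, m^2 (illustrative)

    def wind_power_watts(v_mps):
        return 0.5 * RHO * ROTOR_AREA * v_mps ** 3

    p_base = wind_power_watts(8.0)
    p_gust = wind_power_watts(8.8)   # a 10% increase in wind speed
    print(f"10% more wind speed -> {p_gust / p_base - 1:.0%} more power")   # ~33%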

Solar and wind power forecasting obviously hinges on the ability to predict temperature, sunshine and wind conditions. While weather services can offer reasonably good forecasts for larger areas within a resolution of hours to days, ranges of uncertainty increase significantly for very local forecasts. Ideally, advance warning could be provided at the earliest possible time before variations in solar and wind output occur, to provide actionable intelligence to system operators.

Needed:

  • Real-time forecasting tools for wind speed, temperature, total insolation (for PV) and direct normal insolation (for concentrating solar), down to the time scale of minutes
  • Tools for operators that translate weather forecasts into renewable output forecasts and action items to compensate for variations

A related question is the extent to which the variability of renewable resources will cancel or compound at high penetration levels, locally and system‐wide. Specifically, we wish to know how rapidly aggregate output will vary for large and diverse collections of solar and wind resources.

Needed:

  • Analysis of short-term variability for solar and wind resources, individually and in aggregate, to estimate the quantity and ramp rates of firming resources required (see the sketch after this list).
  • Analysis of wide-area deployment of balancing resources such as storage, shared among control areas, to compensate effectively for short-term variability.
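One way to frame the first analysis above is to compute ramp statistics directly from a net-load (load minus renewable output) time series. A minimal sketch, assuming the caller supplies minute-level megawatt samples; the function and variable names are mine, not the report’s.

    # Sketch: estimate firming needs from a net-load (load minus renewable
    # output) time series. net_load_mw is assumed to be sampled at a fixed
    # interval, e.g. one minute; a real study would use measured data.

    def firming_requirements(net_load_mw, minutes_per_sample=1.0):
        ramps = [b - a for a, b in zip(net_load_mw, net_load_mw[1:])]
        return {
            "peak_net_load_mw": max(net_load_mw),   # capacity that must be firm
            "max_up_ramp_mw_per_min": max(ramps) / minutes_per_sample,
            "max_down_ramp_mw_per_min": -min(ramps) / minutes_per_sample,
        }

    # Toy series for illustration only
    print(firming_requirements([900, 905, 930, 980, 1040, 1035, 990]))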

2.1.3 Background: Firming Resources

Resources to “firm up” intermittent generation include

  • reserve generation capacity
  • dispatchable generation with high ramp rates

The various types of firming generation resources are distinguished by the time scale on which they can be called to operate and the rate at which they can ramp power output up or down.
The most responsive resources are hydroelectric generators and gas turbines.

The difficult question is how much of each might be needed.

Electric storage includes a range of standard and emerging technologies:

  • pumped hydro
  • stationary battery banks
  • thermal storage at solar plants
  • electric vehicles
  • compressed air (CAES)
  • supercapacitors
  • flywheels
  • superconducting magnetic (SMES)
  • hydrogen from electrolysis or thermal decomposition of H2O

An inexpensive, practical, controllable, scalable and rapidly deployable storage technology would substantially relieve systemic constraints related to renewables integration.

The spectrum of time scales for different storage applications is illustrated in Figure 5.

  • months: seasonal energy storage (hydro power)
  • 4‐8 hours: demand shifting
  • 2 hours: supplemental energy dispatch
  • 15‐30 minutes: up‐ and down‐regulation
  • seconds to minutes: solar & wind output smoothing
  • sub‐milliseconds: power quality adjustment; flexible AC transmission system (FACTS) devices that shift power within a single cycle
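The list above implies very different energy-to-power ratios for storage. A minimal sketch of that sizing arithmetic; the durations are taken from the list, while the 100 MW rating is an arbitrary illustration.

    # Energy capacity (MWh) implied by the application durations listed above
    # for an arbitrary 100 MW device; the only arithmetic here is E = P * t.

    POWER_MW = 100.0   # illustrative rating

    applications_hours = {
        "demand shifting (4-8 h)":            (4.0, 8.0),
        "supplemental energy dispatch (2 h)": (2.0, 2.0),
        "up/down regulation (15-30 min)":     (0.25, 0.5),
        "solar & wind smoothing (secs-mins)": (1.0 / 3600, 5.0 / 60),
    }

    for name, (lo, hi) in applications_hours.items():
        print(f"{name}: {POWER_MW * lo:.1f}-{POWER_MW * hi:.1f} MWh")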

Given that storing electric energy is expensive compared to the intrinsic value of the energy, the pertinent questions at this time concern what incentives there are for electric storage, at what level or type of implementation, and for what time target.

Alternating-current (a.c.) power systems exhibit behavior distinct from direct-current (d.c.) circuits. Their essential characteristics during steady-state operation, such as average power transfer from one node to another, can usually be adequately predicted by referring to d.c. models. But as a.c. systems become larger and more complex, and as their utilization approaches the limits of their capacity, peculiar and transient behaviors unique to a.c. become more important.
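The “d.c. models” referred to here are the standard linearized (DC) power-flow approximation, in which real power flow over a line is roughly proportional to the voltage-angle difference across it. A minimal sketch for a single line; the per-unit values are illustrative.

    # Linearized (DC) power-flow approximation for a single line: with
    # voltages near 1.0 per unit and small angle differences,
    #     P  ~  (theta_i - theta_j) / x
    # where x is the line's series reactance in per unit.

    def dc_line_flow_pu(theta_i_rad, theta_j_rad, reactance_pu):
        return (theta_i_rad - theta_j_rad) / reactance_pu

    # Illustrative: a 0.05 rad (~2.9 degree) angle difference across a line
    # with x = 0.1 pu corresponds to about 0.5 pu of real power flow.
    print(dc_line_flow_pu(0.05, 0.0, 0.1))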

 

Eto, Joe, et al. 2008. Real Time Grid Reliability Management. California Energy Commission, PIER Transmission Research Program. CEC-500-2008-049.

The increased need to manage California’s electricity grid in real time is a result of the ongoing transition from a system operated by vertically integrated utilities serving native loads to one operated by an independent system operator supporting competitive energy markets. During this transition period, the traditional approach to reliability management—construction of new transmission lines—has not been pursued due to unresolved issues related to the financing and recovery of transmission project costs. In the absence of investments in new transmission infrastructure, the best strategy for managing reliability is to equip system operators with better real-time information about actual operating margins so that they can better understand and manage the risk of operating closer to the edge.

Traditional rotating generators support grid stability by resisting changes in rotational speed, both due to magnetic forces and their own mechanical rotational inertia. Through their inherent tendency to keep rotating at a constant speed, these generators give the entire AC system a tendency to return to a steady operating state in the face of disturbances. Legacy power systems were designed with this inertial behavior in mind.

Large fossil-fuel and nuclear generators naturally promote 60-Hz grid stability because their rotational speed is held nearly constant by magnetic forces and inertia; despite disturbances, they tend to revert to a steady operating state. But the inverters that renewable generators use to supply AC power depend on very rapid on-off switching within solid-state semiconductor materials. It is possible that at some point, when a larger percentage of power comes from renewables, these inverters will destabilize grid voltages and frequencies and aggravate oscillations by not responding well collectively to temporary disturbances, and that we will need to keep large rotating generators online to maintain stability.
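A minimal sketch of the arithmetic behind that inertial tendency, using the textbook swing-equation estimate of the initial rate of change of frequency (ROCOF) after a sudden loss of generation; the imbalance and inertia constants below are illustrative, not values from the CEC report.

    # After a sudden loss of generation, the initial rate of change of
    # frequency is approximately
    #     df/dt = delta_P * f0 / (2 * H)
    # with delta_P the imbalance in per unit of system capacity, f0 the
    # nominal frequency, and H the aggregate inertia constant in seconds.
    # Less rotating inertia (more inverter-based generation, all else equal)
    # means a faster initial frequency decline.

    F0_HZ = 60.0

    def initial_rocof_hz_per_s(imbalance_pu, inertia_h_s):
        return imbalance_pu * F0_HZ / (2.0 * inertia_h_s)

    for h in (5.0, 2.0):   # illustrative aggregate inertia constants
        rocof = initial_rocof_hz_per_s(imbalance_pu=0.05, inertia_h_s=h)
        print(f"H = {h} s: frequency initially falls at about {rocof:.2f} Hz/s")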

Unlike conventional rotating generators, inverters produce alternating current by very rapid on‐off switching within solid‐state semiconductor materials. Inverters are used whenever 60‐Hz AC power is supplied to the grid from

  • d.c. sources such as PV modules, fuel cells or batteries
  • variable-speed generators, such as wind turbines whose output is conditioned by successive a.c.-d.c.-a.c. conversion (this does not include all wind generators, but it does include a significant fraction of newly installed machines).

What we do not understand well are the dynamic effects on a.c. systems of switch-controlled generation:

  • How will switch-controlled generators collectively respond to temporary disturbances, and how can they act to stabilize system voltage and frequency?
  • What will be the effect of switch-controlled generation on wide-area, low-frequency oscillations?
  • Can inverters “fake” inertia, and what would it take to program them accordingly?
  • What is the minimum system-wide contribution from large, rotating generators required for stability?

Needed:

  • Modeling of high‐penetration renewable scenarios on a shorter time scale, including dynamic behavior of generation units that impacts voltage and frequency stability
  • Generator models for solar and wind machines
  • Inverter performance analysis, standardization and specification of interconnection requirements that includes dynamic behavior
  • Synchro‐phasor measurements at an increased number of locations, including distribution circuits, to diagnose problems and inform optimal management of inverters

CHAPTER 3: Spatial Coordination

Relevant distance scales in power system operation span six orders of magnitude, from local effects of power quality on the scale of an individual building to hundreds or even thousands of miles across interconnected systems. A “smart grid” with high penetration of renewables will require simultaneous consideration of small‐ and large‐scale compatibilities and coordination.

3.1 Transmission Level: Long-distance Issues

3.1.1 Background: Transmission Issues

The need for transmission capacity to remote areas with prime solar and wind resources is widely recognized. It is worth noting that renewable resources are not unique in imposing new transmission requirements. For example, a new fleet of nuclear power plants would likely be constrained by siting considerations that would similarly require the construction of new transmission capacity. In the case of solar and wind power, however, we know where the most attractive resources are – and they are not where most people live. Challenges for transmission expansion include social, economic and technical factors.

Social and economic challenges for transmission expansion include:

  • Long project lead times for transmission siting, sometimes significantly exceeding lead times for generation
  • NIMBY resistance to transmission siting based on aesthetics and other concerns (e.g., exposure to electromagnetic fields)
  • Higher cost of alternatives to visible overhead transmission
  • Uncertainty about future transmission needs and economically optimal levels

On the technical side:

  • Long-distance AC power transfers are constrained by stability limits (phase angle separation) regardless of thermal transmission capacity
  • Increased long-distance AC power transfers may exacerbate low-frequency oscillations (phase angle and voltage), potentially compromising system stability and security

Both of the above technical constraints can in theory be addressed with a.c.‐d.c. conversion, at significant cost. The crucial point, however, is that simply adding more, bigger wires will not always provide increased transmission capacity for the grid. Instead, it appears that legacy a.c. systems are reaching or have reached a maximum of geographic expansion and interconnectivity that still leaves them operable in terms of the system’s dynamic behavior. Further expansion of long‐distance power transfers, whether from renewable or other sources, will very likely require the increased use of newer technologies in transmission systems to overcome the dynamic constraints.
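The stability limit mentioned above follows from the sine relationship between power transfer and the voltage-angle separation across a long AC path: transfer cannot usefully grow beyond roughly 90 degrees of separation, no matter how much current the conductors could carry thermally. A minimal sketch with illustrative per-unit values.

    import math

    # Real power across a (lossless) AC path:  P = V1 * V2 * sin(delta) / X.
    # Transfer grows with the angle delta only up to 90 degrees; operating
    # near that angle leaves no synchronizing margin, so practical limits sit
    # well below the conductors' thermal rating.

    def ac_transfer_pu(v1_pu, v2_pu, delta_deg, reactance_pu):
        return v1_pu * v2_pu * math.sin(math.radians(delta_deg)) / reactance_pu

    for delta in (30, 60, 90):
        print(f"delta = {delta:>2} deg -> P = {ac_transfer_pu(1.0, 1.0, delta, 0.5):.2f} pu")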

3.1.2 Research Needs Related to Transmission

On the social‐political and economic side, research needs relate to the problems of deciding how much transmission is needed where, and at what reasonable cost to whom. In addition, options for addressing siting constraints can be expanded by making transmission lines less visible or otherwise less obtrusive. Needed:
  • Analysis of economic costs and benefits to communities hosting rights of way
  • Political evaluation of accelerated siting processes
  • Continuing analysis to identify the optimal investment level in transmission capacity relative to intermittent generation capacity, and to evaluate incentives
  • Public education, including interpretation of findings regarding EMF exposure
  • Continuing R&D on lower‐visibility transmission technologies, including compact designs and underground cables

Needed:
  • Dynamic system modeling on a large geographic scale (WECC), providing analysis of likely stability problems to be encountered in transmission expansion scenarios and the benefit potential of various d.c. link options
  • Continuing R&D on new infrastructure materials, devices and techniques that enable transmission capacity increases, including:
    o dynamic thermal rating
    o power flow control, e.g. FACTS devices
    o fault current controllers
    o intelligent protection systems, e.g. adaptive relaying
    o stochastic planning and modeling tools
    o new conductor materials and engineered line and system configurations

CHAPTER 4: Overarching Coordination Issues

Refinement of both spatial and temporal coordination – in other words, “smartness” – demands a substantial increase of information flow among various components on the electric grid. This information flow has implications for system control strategies, including the role of human operators. Some of this coordination is specifically associated with renewable and distributed resources, requiring increased information volume for
  • mitigating intermittence of renewable resources
  • accommodating siting constraints for renewable and distributed generation

Problematic issues in the context of information aggregation include the following:
  • How much data volume is manageable for both operators and communications systems?
  • What level of resolution needs to be preserved?
  • What data must be monitored continuously, and what opportunities exist to filter data by exceptional events?
  • How can information best be presented to operators to support situational awareness?

Once data have been selected and aggregated into manageable batches, they must be translated or somehow used to frame and inform action items for operators. For example, we might ask what local information goes into an operator’s decision to switch a particular feeder section, or to dispatch demand response, generation or storage. Operating procedures are necessarily based on the particular sets of information and control tools available to operators. The introduction of significant volumes of new data, as well as potential control capabilities on more refined temporal and spatial scales, also forces decisions about how this information is to be used, strategically and practically. Issues concerning actionable items include the following:
  • What new tasks and responsibilities are created for grid operators, especially distribution operators, by distributed resources?
  • How are these tasks defined?
  • What control actions may be taken by parties other than utility operators?

Needed:
  • Modeling of distribution circuit operation with high penetration of diverse distributed resources, including evaluation of control strategies.

4. Locus of Control

A question related to the definition of action items is who, exactly, is taking the action. With large amounts of data to be evaluated and many decisions to be made in potentially a short time frame, it is natural to surmise that some set of decisions would be made and actions initiated by automated systems of some sort, whether they be open‐loop with human oversight or closed‐loop “expert systems” that are assigned domains of responsibility. Such domains may range from small to substantial: for example, automation may mean a load thermostat that automatically resets itself in response to an input (e.g. price or demand response signal); distributed storage that charges or discharges in response to a schedule, signal or measurement of circuit conditions; or it could mean entire distribution feeders being switched automatically.
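
The smallest of those automation domains – a thermostat that resets itself in response to a price or demand-response signal – can be sketched in a few lines. The thresholds and setpoint offsets below are invented for illustration and are not drawn from any utility program.

```python
# Minimal sketch of an automated, price-responsive thermostat (cooling season).
# Price thresholds and setpoint offsets are assumptions for illustration only.

BASE_SETPOINT_F = 74.0

def cooling_setpoint(price_per_kwh: float, dr_event: bool) -> float:
    """Return the thermostat setpoint given a price signal and a DR event flag."""
    setpoint = BASE_SETPOINT_F
    if dr_event:
        setpoint += 4.0            # utility-called demand response: largest offset
    elif price_per_kwh > 0.40:
        setpoint += 3.0            # critical-peak price
    elif price_per_kwh > 0.20:
        setpoint += 1.5            # moderately high price
    return setpoint

print(cooling_setpoint(0.12, False))  # 74.0 - normal operation
print(cooling_setpoint(0.45, False))  # 77.0 - critical-peak price
print(cooling_setpoint(0.12, True))   # 78.0 - demand-response event
```

Even this trivial rule raises the questions posed in the text: who sets the thresholds, who may override them, and what happens when thousands of such devices respond to the same signal at once.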

Finally, it would be naive to expect any substantial innovation in a technical system as complex as the electric grid to proceed without setbacks, or for an updated and improved system to operate henceforth without failures. Rather than wishing away mistakes and untoward events, the crucial question is what corrective feedback mechanisms are available, not if but when failures do occur. This includes, for example, contingency plans in response to failures of hardware, communications or control algorithms, cyber‐security breach, or any other unexpected behavior on the part of a system component, human or machine. A higher degree of spatial and temporal resolution in coordinating electric grids – more information, more decisions, and more actions – means many more opportunities for intervention and correction, but first it means many more opportunities for things to go wrong.

CHAPTER 5: Conclusion

The effective integration of large amounts of new resources, including distributed and renewable resources, hinges on the ability to coordinate the electric grid in space and time on a wide range of scales. The capability to perform such coordination, independent of any particular technology used to accomplish it, can be taken to define a “smart grid.”

Ultimately, “smart” coordination of the grid should serve to
  • mitigate technical difficulties associated with renewable resources, thereby enabling California to meet its policy goals for a renewable portfolio
  • maximize beneficial functions renewable generation can perform toward supporting grid stability and reliability

Much work lies between the status quo and a system with 33 percent of intermittent renewables. Due to the complex nature of the grid, and because the refinement of temporal and spatial coordination represents a profound departure from the capabilities of our legacy system, any “smart grid” development will require time for learning, and will need to draw on empirical performance data as they become available. Time is of the essence, therefore, in answering the many foundational questions about how to design and evaluate new system capabilities, how to re‐write standards and procedures accordingly, how to incentivize the most constructive behavior from market participants, and how to support operators in their efforts to keep the grid working reliably in the face of these transitions. With all the research needs detailed in this white paper, the hope is that questions addressed early may help prevent costly mistakes and delays later on. The more aggressively these research efforts are pursued, the more likely California will be able to meet its 2020 goals for renewable resource integration.

National Renewable Energy Laboratory. Western Wind and Solar Integration Study. May 2010. http://wind.nrel.gov/public/WWIS/

Vittal, Vijay, “The Impact of Renewable Resources on the Performance and Reliability of the Electricity Grid.” National Academy of Engineering Publications, Vol. 40 No. 1, March 2010. http://www.nae.edu/Publications/TheBridge/Archives/TheElectricityGrid/18587.aspx


Distributed Generation is destabilizing the Electric Grid

Electricity distribution is designed so that power flows one way, from a centralized system out to customers. But distributed generation (DG) from solar PV and wind violates this.

Impacts caused by high penetration levels of intermittent renewable DG can be complex and severe and may include voltage increase, voltage fluctuation, interaction with voltage regulation and control equipment, reverse power flows, temporary over-voltage, power quality and protection concerns, and current and voltage unbalance, to name a few.

There are solutions, but they’re expensive, complicated, and add to the already insane challenges of thousands of utilities, power generators, independent system operators, and other entities trying to coordinate the largest machine in the world when cooperation isn’t always in their best interest.

IEEE. September 5, 2014. IEEE Report to DOE Quadrennial Energy Review on Priority Issues. IEEE

On the distribution system, high penetration levels of intermittent renewable Distributed Generation (DG) create a different set of challenges than at the transmission system level, given that distribution is generally designed to be operated in a radial fashion with one-way flow of power to customers, and DG (including PV and wind technologies) interconnection violates this fundamental assumption. Impacts caused by high penetration levels of intermittent renewable DG can be complex and severe and may include voltage increase, voltage fluctuation, interaction with voltage regulation and control equipment, reverse power flows, temporary overvoltage, power quality and protection concerns, and current and voltage unbalance, among others.

Common impacts of DG in distribution grids are described below; this list is not exhaustive and includes operational and planning aspects [50, 51].

  • Voltage increase can lead to customer complaints and potentially to customer and utility equipment damage, and service disruption.
  • Voltage fluctuation may lead to flicker issues, customer complaints, and undesired interactions with voltage regulation and control equipment.
  • Reverse power flow may cause undesirable interactions with voltage control and regulation equipment and protection system misoperations.
  • Increased line and equipment loading may cause equipment damage and service disruption.
  • Increased losses (under high penetration levels) can reduce system efficiency.
  • A power factor decrease below the minimum limits set by some utilities in their contractual agreements with transmission organizations would create economic penalties and losses for utilities.
  • Current unbalance and voltage unbalance may lead to system efficiency and protection issues, customer complaints, and potentially to equipment damage.
  • Interaction with load tap changers (LTCs), line voltage regulators (VRs), and switched capacitor banks due to voltage fluctuations can cause undesired and frequent voltage changes and customer complaints, reduce equipment life, and increase the need for maintenance.
  • Temporary Overvoltage (TOV): if accidental islanding occurs and no effective reference to ground is provided then voltages in the island may increase significantly and exceed allowable operating limits. This can damage utility and customer equipment, e.g., arresters may fail, and cause service disruptions.
  • Harmonic distortion caused by proliferation of power electronic equipment such as PV inverters.

The aggregate effect from hundreds or thousands of inverters may cause service disruptions, complaints or customer economic losses, particularly for those relying on the utilization of sensitive equipment for critical production processes.

  • Voltage sags and swells caused by sudden connection and disconnection of large DG units may cause the tripping of sensitive equipment of end users and service disruptions.
  • Interaction with protection systems, including increase in fault currents, reach modification, sympathetic tripping, miscoordination, etc.
  • Voltage and transient stability: voltage and transient stability are well-known phenomena at transmission and sub-transmission system level but until very recently were not a subject of interest for distribution systems. As DG proliferates, such concerns are becoming more common.

The severity of these impacts is a function of multiple variables, particularly of the DG penetration level and real-time monitoring, control and automation of the distribution system. However, generally speaking, it is difficult to define guidelines to determine maximum penetration limits of DG or maximum hosting capacities of distribution grids without conducting detailed studies.

From the utility perspective, high PV penetration and non-utility microgrid implementations shift the legacy, centralized, unidirectional power system to a more  complex, bidirectional power system with new supply and load variables at the grid’s edge. This shift introduces operational issues such as the nature, cost, and impact of interconnections, voltage stability, frequency regulation, and personnel safety, which in turn impact resource planning and investment decisions.

NREL. 2014. Volume 4: Bulk Electric Power Systems: Operations and Transmission Planning. National Renewable Energy Laboratory.

Initial experience with PV indicates that output can vary more rapidly than wind unless aggregated over a large footprint. Further, PV installed at the distribution level (e.g., residential and commercial rooftop systems) can create challenges in management of distribution voltage.

Meier, A. May 2014. Challenges to the integration of renewable resources at high system penetration. California Energy Commission.

3.2 Distribution Level: Local Issues

A significant class of challenges to the integration of renewable resources is associated primarily with distributed siting, and only secondarily with intermittence of output. These site‐specific issues apply equally to renewable and non‐renewable resources, collectively termed distributed generation (DG). However, DG and renewable generation categories overlap to a large extent due to

  • technical and environmental feasibility of siting renewables close to loads
  • high public interest in owning renewable generation, especially photovoltaics (PV)
  • distributed siting as an avenue to meet renewable portfolio standards (RPS), augmenting the contribution from large‐scale installations

Motivation therefore exists to facilitate the integration of distributed generation, possibly at substantial cost and effort, if this generation is based on renewable resources.

Distributed generation may therefore be clustered, with much higher penetration on individual distribution feeders than the system‐wide average, for any number of reasons outside the utility’s control, including local government initiatives, socio‐economic factors, or neighborhood social dynamics.

The actual effects of distributed generation at high penetration levels are still unknown but are likely to be very location specific, depending on the particular characteristics of individual distribution feeders.

Technical issues associated with high local penetration of distributed generation include

  • Clustering: The local effects of distributed generation depend on local, not system‐wide, penetration (percent contribution). Distributed generation may be clustered on individual feeders for reasons outside the utility’s control, such as local government initiatives or socio‐economic factors, including neighborhood social dynamics. Clustering density is relative to the distribution system’s functional connectivity, not just geographic proximity, and may therefore not be obvious to outside observers.
  • Transformer capacity: Locally, the relative impact of DG is measured relative to load – specifically, current. Equipment, especially distribution transformers, may have insufficient capacity to accommodate amounts of distributed generation desired by customers. Financial responsibility for capacity upgrades may need to be negotiated politically.
  • Modeling: From the grid perspective, DG is observed in terms of net load. Neither the amount of actual generation nor the unmasked load may be known to the utility or system operator. Without this information, however, it is impossible to construct an accurate model of local load for purposes of forecasting future load (including ramp rates) and ascertaining system reliability and security in case DG fails. Models of load with high local DG penetration will have to account for both generation and load explicitly in order to predict their combined behavior.
  • Voltage regulation: Areas of concern, explained in more detail in the Background section below, include maintaining voltage in the permissible range, wear on existing voltage regulation equipment, and reactive power (VAR) support from DG.

  • Areas of concern and strategic interest, explained in more detail in the Background section below, include preventing unintentional islanding, application of the microgrid concept, and variable power quality and reliability.

Overall, the effect of distributed generation on distribution systems can vary widely between positive and negative, depending on specific circumstances that include

  • the layout of distribution circuits
  • existing voltage regulation and protection equipment
  • the precise location of DG on the circuit

3.2.2 Background: Voltage Regulation

Utilities are required to provide voltage at every customer service entrance within a permissible range, generally ±5 percent of nominal. For example, a nominal residential service voltage of 120 V means that the actual voltage at the service entrance may vary between 114 and 126 V. Due to the relative paucity of instrumentation in the legacy grid, the precise voltage at different points in the distribution system is often unknown, but estimated by engineers as a function of system characteristics and varying load conditions.

Different settings of load tap changer (LTC) or other voltage regulation equipment may be required to maintain voltage in permissible range as DG turns on and off. Potential problems include the following:

  • DG drives voltage out of the range of existing equipment’s ability to control
  • Due to varying output, DG provokes frequent operation of voltage regulation equipment, causing excessive wear

  • DG creates conditions where voltage profile status is not transparent to operators

Fundamentally, voltage regulation is a solvable problem, regardless of the level of DG penetration. However, it may not be possible to regulate voltage properly on a given distribution feeder with existing voltage regulation equipment if substantial DG is added. Thus a high level of DG may necessitate upgrading voltage regulation capabilities, possibly at significant cost. Research is needed to determine the best and most cost‐effective ways to provide voltage regulation, where utility distribution system equipment and DG complement each other.
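
A rough sense of why DG pushes voltage around comes from the standard feeder approximation ΔV ≈ (R·P + X·Q)/V, where P and Q are the net real and reactive power injected at the DG location and R, X are the cumulative upstream impedance. The sketch below is a back-of-the-envelope screen with assumed impedances and a hypothetical PV injection; it is no substitute for a proper power-flow study.

```python
# Back-of-the-envelope voltage rise at a DG interconnection point:
#     delta_V ~= (R * P + X * Q) / V
# P > 0 means net injection (generation exceeds local load). All values assumed;
# single-line approximation, illustrative only.

V_NOM = 12470.0   # assumed feeder nominal voltage, volts
R, X = 2.0, 4.0   # assumed cumulative feeder impedance to the DG site, ohms

def voltage_rise_pct(p_watts: float, q_vars: float) -> float:
    delta_v = (R * p_watts + X * q_vars) / V_NOM
    return 100.0 * delta_v / V_NOM

# 2 MW of PV exporting at unity power factor on a lightly loaded feeder:
print(f"{voltage_rise_pct(2e6, 0.0):.2f} % rise")
# The same PV absorbing reactive power (Q < 0) to partially offset the rise:
print(f"{voltage_rise_pct(2e6, -0.5e6):.2f} % rise")
```

The second case hints at why reactive power (VAR) support from DG, listed above as an area of concern, is also part of the potential solution.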

Legacy power distribution systems generally have a radial design, meaning power flows in only one direction: outward from substations toward customers. The “outward” or “downstream” direction of power flow is intuitive on a diagram; on location, it can be defined in terms of the voltage drop (i.e., power flows from higher to lower voltage).

If distributed generation exceeds load in its vicinity at any one moment, power may flow in the opposite direction, or “upstream” on the distribution circuit. To date, interconnection standards are written with the intention to prevent such “upstream” power flow.

The function of circuit protection is to interrupt power flow in case of a fault, i.e. a dangerous electrical contact between wires, ground, trees or animals that results in an abnormal current (fault current). Protective devices include fuses (which simply melt under excessive current), circuit breakers (which are opened by a relay) and reclosers (which are designed to re‐establish contact if the fault has gone away).

The exception to this radial design is a networked system, where redundant supply is always present. Networks are more complicated to protect and require special circuit breakers called “network protectors” to prevent circulating or reverse power flow. If connected within such a networked system, DG is automatically prevented from backfeeding into the grid. Due to their considerable cost, networked distribution systems are common only in dense urban areas with a high concentration of critical loads, such as downtown Sacramento or San Francisco, and account for a small percentage of distribution feeders in California.

3.2.5 Research Needs Related to Circuit Protection

The presence of distributed generation complicates protection coordination in several ways:
  • The fault must now be isolated not only from the substation (“upstream”) power source, but also from DG
  • Until the fault is isolated, DG contributes a fault current that must be modeled and safely managed

Shifting fault current contributions can compromise the safe functioning of other protective devices: it may delay or prevent their actuation (relay desensitization), and it may increase the energy (I²t) that needs to be dissipated by each device. Interconnection standards limit permissible fault current contributions (specifically, no more than 10 percent of the total for all DG collectively on a given feeder). The complexity of protection coordination and modeling increases dramatically with the number of connected DG units, and innovative protection strategies are likely required to enable higher penetration of DG.
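
The 10 percent cap mentioned above lends itself to a simple screening calculation: sum the expected fault-current contribution of every DG unit on the feeder and compare it with the total available fault current. The sketch below uses invented feeder numbers, and the assumption that an inverter contributes roughly 1.2 times its rated current during a fault is itself only a common rule of thumb, not a standard value.

```python
# Screening check of aggregate DG fault-current contribution against a 10% cap.
# The 1.2x rated-current fault contribution and all feeder numbers are assumptions.

UTILITY_FAULT_CURRENT_A = 8000.0   # assumed available fault current at the bus
LIMIT_FRACTION = 0.10              # cap cited in the text: 10% of total for all DG combined

def dg_fault_current(rated_kw: float, voltage_ll: float = 12470.0,
                     multiple: float = 1.2) -> float:
    """Crude inverter fault contribution: a multiple of rated (three-phase) current."""
    rated_amps = rated_kw * 1000.0 / (3 ** 0.5 * voltage_ll)
    return multiple * rated_amps

feeder_dg_kw = [500, 500, 250, 1000, 750]   # hypothetical DG units on one feeder
total_dg_amps = sum(dg_fault_current(kw) for kw in feeder_dg_kw)
total_fault = UTILITY_FAULT_CURRENT_A + total_dg_amps

print(f"DG contribution: {total_dg_amps:.0f} A "
      f"({100 * total_dg_amps / total_fault:.1f}% of total)")
print("Within 10% screen" if total_dg_amps <= LIMIT_FRACTION * total_fault
      else "Exceeds 10% screen - detailed protection study needed")
```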

Standard utility operating procedures in the United States do not ordinarily permit power islands. The main exception is the restoration of service after an outage, during which islanded portions of the grid are re‐connected in a systematic, sequential process; in this case, each island is controlled by one or more large, utility‐operated generators. Interconnection rules for distributed generation aim to prevent unintentional islanding. To this end, they require that DG disconnect in response to disturbances, such as voltage or frequency excursions, that might be precursors to an event that will isolate the distribution circuit with DG from its substation source.

Disconnecting the DG is intended to assure that if the distribution circuit becomes isolated, it will not be energized. This policy is based on several risks entailed by power islands:

  • Safety of utility crews: If lines are unexpectedly energized by DG, they may pose an electrocution hazard, especially to line workers sent to repair the cause of the interruption. It is important to keep in mind that even though a small DG facility such as a rooftop solar array has limited capacity to provide power, it would still energize the primary distribution line with high voltage through its transformer connection, and is therefore just as potentially lethal as any larger power source.
  • Power quality: DG may be unable to maintain local voltage and frequency within desired or legally mandated parameters for other customers on its power island, especially without provisions for matching generation to local load. Voltage and frequency departures may cause property damage for which the utility could be held liable, although it would have no control over DG and power quality on the island.
  • Re‐synchronization: When energized power islands are connected to each other, the frequency and phase of the a.c. cycle must match precisely (i.e., be synchronized), or else generators could be severely damaged. DG may lack the capability to synchronize its output with the grid upon re‐connection of an island.

3.2.7 Research Needs Related to Islanding

 

In view of the above risks, most experts agree that specifications for the behavior of DG should be sufficiently restrictive to prevent unintentional islanding. Interconnection rules aim to do this by requiring DG to disconnect within a particular time frame in response to a voltage or frequency deviation of a particular magnitude, disconnecting more quickly (down to 0.16 seconds, or about 10 cycles) in response to a larger deviation. At the same time, however, specifications should not be so conservative that they prevent DG from supporting power quality and reliability when it is most needed.
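
Functionally, the disconnect requirement is a lookup from deviation magnitude to maximum clearing time. The sketch below mimics that logic; only the 0.16-second figure comes from the text above, while the voltage bands and other times are placeholders – the actual thresholds live in interconnection rules such as IEEE 1547 and California’s Rule 21 and should be taken from those documents.

```python
# Sketch of a deviation-magnitude -> maximum-clearing-time lookup for DG disconnection.
# Only the 0.16 s fastest trip time comes from the text; the voltage bands and the
# other clearing times are placeholders, not values from any actual standard.
from typing import Optional

def max_clearing_time_s(voltage_pu: float) -> Optional[float]:
    """Return the assumed maximum time (seconds) before DG must disconnect,
    or None if voltage is within the assumed normal operating band."""
    if voltage_pu < 0.50 or voltage_pu > 1.20:
        return 0.16   # large deviation: trip within roughly 10 cycles (from the text)
    if voltage_pu < 0.88:
        return 2.0    # placeholder band and time
    if voltage_pu > 1.10:
        return 1.0    # placeholder band and time
    return None       # assumed normal range: stay connected

for v in (1.00, 1.12, 0.70, 0.30):
    t = max_clearing_time_s(v)
    print(f"V = {v:.2f} pu -> " + ("stay connected" if t is None else f"trip within {t} s"))
```

The tension described below is visible even in this toy: widen the bands or lengthen the times and DG rides through disturbances and keeps supporting the grid, but the window for an unintentional island also grows.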

There is no broad consensus among experts at this time about how best to reconcile the competing goals of minimizing the probability of unintentional islanding, while also maximizing the beneficial contribution from DG to distribution circuits.

As for the possibility of permitting DG to intentionally support power islands on portions of the utility distribution system, there is a lack of knowledge and empirical data concerning how power quality might be safely and effectively controlled by different types of DG, and what requirements and procedures would have to be in place to assure the safe creation and re‐connection of islands. Because of these uncertainties, the subject of islanding seems likely to remain somewhat controversial for some time.

Needed:
  • Modeling of DG behavior at high local penetrations, including
    o prevention of unintentional islanding
    o DG control capabilities during intentional islanding
  • Collaboration across utility and DG industries to facilitate DG performance standardization, reliability and trust. This means that utilities can depend on DG equipment to perform according to expectations during critical times and abnormal conditions on the distribution system, the handling of which is ultimately the utility’s responsibility.

In the long run, intentional islanding capabilities – with appropriate safety and power quality control – may be strategically desirable for reliability goals, security and optimal resource utilization. Such hypothetical power islands are related to but distinct from the concept of microgrids, in that they would be scaled up to the primary distribution system rather than limited to a single customer’s premises. A microgrid is a power island on customer premises, intermittently connected to the distribution system behind a point of common coupling (PCC), that may comprise a diversity of DG resources, energy storage, loads, and control infrastructure. Key features of a microgrid include:

  • Design around total system energy requirements: Depending on their importance, time preference or sensitivity to power quality, different loads may be assigned to different primary and/or back‐up generation sources, storage, or uninterruptible power supplies (UPS). A crucial concept is that the expense of providing highly reliable, high‐quality power (i.e., very tightly controlled voltage and frequency) can be focused on those loads where it really matters to the end user (or the life of the appliance), at considerable overall economic savings. However, the provision of heterogeneous power quality and reliability (PQR) requires a strategic decision about what service level is desired for each load, as well as the technical capability to discriminate among connected loads and perform appropriate switching operations.
  • Presentation to the macrogrid as a single controlled entity: At the point of common coupling, the microgrid appears to the utility distribution system simply as a time‐varying load. The complexity and information management involved in coordinating generation, storage and loads is thus contained within the local boundaries of the microgrid.
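
The “single controlled entity” idea can be stated in one line: what the utility sees at the PCC is net load – internal demand minus internal generation, plus storage charging (or minus discharging). The toy time series below uses invented numbers purely to show how onsite PV and a battery reshape what the distribution operator observes.

```python
# Net load seen at a microgrid's point of common coupling (PCC), hour by hour.
#     net = load - generation + storage_charge   (negative storage = discharging)
# All profiles are invented for illustration.

load_kw    = [400, 380, 500, 650, 700, 620]   # internal demand
pv_kw      = [  0, 150, 620, 500, 300,   0]   # onsite generation
storage_kw = [  0,  50, 100,   0, -80, -60]   # + charging, - discharging

pcc_kw = [l - g + s for l, g, s in zip(load_kw, pv_kw, storage_kw)]
print(pcc_kw)   # what the utility operator actually sees; a negative value = export
```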

Note that the concepts of microgrids and power islands differ profoundly in terms of
  • ownership
  • legal responsibility (i.e., for safety and power quality)
  • legality of power transfers (i.e., selling power to loads behind other meters)
  • regulatory jurisdiction
  • investment incentives

Nevertheless, microgrids and hypothetical power islands on distribution systems involve many of the same fundamental technical issues. In the long run, the increased application of the microgrid concept, possibly at a higher level in distribution systems, may offer a means for integrating renewable DG at high penetration levels, while managing coordination issues and optimizing resource utilization locally.

Research Needs:
  • Empirical performance validation of microgrids
  • Study of the implications of applying microgrid concepts to higher levels of distribution circuits, including
    o time‐varying connectivity
    o heterogeneous power quality and reliability
    o local coordination of resources and end‐uses to strategically optimize local benefits of distributed renewable generation
  • Study of interactions among multiple microgrids

 


Geothermal – can it make up for peak fossil fuels?

94% of all known U.S. geothermal resources are located in California.

Only a few urban areas in California and other states with geothermal resources (e.g., volcanoes, hot springs, and geysers) are near enough to exploit them. This is because the cost of adding very long transmission lines to faraway geothermal fields can make a geothermal resource too expensive – geothermal plants are already very expensive even when closer to cities. On top of that, unless the geothermal resource is very large, more of the power is lost over transmission lines than from conventional power plants (CEC 2014, page 73).

Getting the financing

The current environment for financing independent power projects is challenging. These challenges include weak corporate profits, changes in corporate direction, and heightened risk aversion. As a result, a number of the financial institutions that were lead underwriters in the past are either pulling out of the market or are taking a lower profile in project financing.

Biomass and geothermal projects are considered riskier than natural gas, solar, and wind projects. This is seen in the lower leverage, higher pricing, and higher debt service coverage ratios (DSCRs) required than for the other generating technologies. The higher level of project risk for biomass and geothermal projects is partly attributed to the technology and fuel sources. Solid fuel power plants require more project infrastructure than do other fuel types. Geothermal projects have inherently uncertain steam supplies, as has been seen at the Geysers. Some of the risk also is based on the relatively small number of these projects being developed.

The steadily increasing wheeling access charges the California ISO expects to put in place over the next decade represent a growing, significant cost to renewable developers who find their best renewable resources in locations that are distant from demand.

Geothermal projects can also be risky to develop, since they don’t always work out. In June 1980, Southern California Edison (SCE) began operation of a 10 MW experimental power plant at the Brawley geothermal field, also in Imperial County. However, after a few years of operation, further development ceased due to corrosion, reservoir uncertainties, and the presence of high-salinity brines.

How they Work

There are two components to the geothermal resource base: hydrothermal (water heated by Earth) that exists down to a depth of about 3 km, and enhanced geothermal systems (EGS) associated with low-permeability or low-porosity heated rocks at depths down to 10 km.

A National Academy of Sciences study concluded that hydrothermal resources are too small to have a major overall impact on total electricity generation in the United States — at best 13 GW of electric power capacity in identified resources (NAS 2009).

The largest geothermal installation in the world is the Geysers in Northern California, occupying 30 square miles. The 15 power plants have a total net generating capacity of about 725 MW of electricity—enough to power 725,000 homes (Heinberg).

  • Geothermal plants often emit hydrogen sulfide, CO2, and toxic sludge containing arsenic, mercury, sulfur, and silica compounds.
  • Extra land may be needed to dispose of wastes and excess salts.
  • Groundwater and freshwater can be a limiting factor, since both hydrothermal and dry rock systems need water.
  • Maintenance costs are high because the steam is corrosive and deposits minerals, which clog pipes and destroy valves.
  • When you extract energy from just about anything, it declines over time, and the same is true for geothermal, so you endlessly need to keep looking for more prospects. For example, the “Geysers” area of Northern California has gone from 2,000 MWe to 850 MWe since it was first tapped for power (J. Coleman. 15 Apr 2001. Running out of steam: Geothermal field tapped out as alternative energy source. Associated Press).
  • We need a breakthrough in materials that won’t melt to drill deeply enough to get significant power in non-geothermal areas.
  • You can lose a significant amount of steam because the water you pour down the hole is so hot it fractures rocks and escapes into cracks before it can return up the steam vent. Over time, less and less steam for power generation is produced.
  • If you wanted to tap the heat without any geothermal activity, it becomes energy intensive, because you have to drill much deeper (geothermal sources are already near the surface), the rock below has to be fractured (which it already is in geothermal regions) to release steam, and fracturing and keeping the rock fractured takes far more ongoing energy than the initial drilling.
  • No one has figured out how to do hot dry rock economically – time’s running out.
  • Even if Geodynamics succeeds in scaling their experiments into a real geothermal power plant, it will in huge part be due to the location: “This is the best spot in the world, a geological freak,” Geodynamics managing director Bertus de Graaf told Reuters. “It’s really quite serendipitous, the way the elements — temperature, tectonics, insulating rocks — have come together here.”

Although it would be great if we could access the heat 3 to 10 km below the earth, such operations would cool the rock down so much that they’d have to be shut down within 20 to 30 years, and production wells would need to be re-drilled every 4 to 8 years in the meantime. We don’t know how to do this anyhow. Despite oil and gas drilling experience, we don’t have much experience going this deep, nor do we know how to enhance heat-transfer performance for lower-temperature fluids in power production. Another challenge is to improve reservoir-stimulation techniques so that sufficient connectivity within the fractured rock can be achieved. France has been trying to make this work for over two decades, so don’t hold your breath (NAS 2009).

Geothermal Technology Costs

Geothermal technologies remain viable in California, although they are subject to a number of limitations that are likely to reduce the number of sites developed in California.

Geothermal resource costs are driven largely by the highly variable and significant costs of drilling and well development. These costs are unique to each site and represent a significant risk on the part of the developer. While a successful well may be able to produce electricity at low cost, other wells in the same area may require much more investment in time and resources before they are producing efficiently. Costs for new geothermal plants are projected to increase slightly over the coming years. Limitations of location and drilling are unlikely to see improvement in California, while nationally there are very few geothermal projects under development.

Factors Affecting Future Geothermal Development

California’s relative abundance of geothermal resources in comparison to the rest of the United States does not mean that geothermal power production would be viable or cost-effective everywhere in the state. Developers must consider multiple factors of cost and viability when deciding where to locate new geothermal plants. In turn, these considerations drive the estimates of future costs of new geothermal power plants in California. Considerations for developing geothermal power plants in liquid-dominated resources include (Kagel, 2006):

  • Exploration Costs – Exploration and mapping of the potential geothermal resource is a critical and sometimes costly activity. It effectively defines the characteristics of the geothermal resource.
  • Confirmation Costs – These are costs associated with confirming the energy potential of a resource by drilling production wells and testing their flow rates until about 25 percent of the resource capacity needed by the project is confirmed.
  • Site/Development Costs – Covering all remaining activities that bring a power plant on line, including:
  • Drilling – The success rate for drilling production wells during site development averages 70 percent to 80 percent (Entingh, et al., 2012). The size of the well and the depth to the geothermal reservoir are the most important factors in determining the drilling cost.
  • Project leasing and permitting – Like all power projects, geothermal plants must comply with a series of legislated requirements related to environmental concerns and construction criteria.
  • Piping network – A network of pipes is needed to connect the power plant with production and injection wells. Production wells bring the geothermal fluid (or brine) to the surface to be used for power generation, while injection wells return the used fluid back to the geothermal system to be used again.
  • Power plant design and construction – In designing a power plant, developers must balance the size and technology of plant materials with efficiency and cost effectiveness. The power plant design and construction depend on the type of plant (binary or flash) as well as the type of cooling cycle used (water or air cooling).
  • Transmission – Includes the costs of constructing new lines, upgrades to existing lines, or new transformers and substations.

Another important factor contributing to overall costs is O&M costs, which consist of all costs incurred during the operational phase of the power plant (Hance, 2005). Operation costs consist of labor; spending for consumable goods, taxes, royalties; and other miscellaneous charges.

Maintenance costs consist of keeping equipment in good working status. In addition, maintaining the steam field, including servicing the production and injection wells (pipelines, roads, and so forth) and make-up well drilling, involves considerable expense.

Development factors are not constant for every geothermal site. Each of the above factors can vary significantly based on specific site characteristics.

Make-up drilling aims to compensate for the natural productivity decline of the project’s start-up wells by drilling additional production wells. Other factors that drive costs for geothermal plants (not mentioned directly above since they are highly project-specific) are project delays, the temperature of the resource, and plant size.

The temperature of the resource is an essential parameter influencing the cost of the power plant equipment. Each power plant is designed to optimize the use of the heat supplied by the geothermal fluid. The size, and thus cost, of various components (for example, heat exchangers) is determined by the temperature of the resource. As the temperature of the resource increases, the efficiency of the power system increases, and the specific cost of equipment decreases, since more energy is produced with similar equipment. Since binary systems use lower resource operating temperatures than flash steam systems, binary costs can be expected to be higher. Figure 33 provides estimates for cost variance due to resource temperature. As the figure shows, binary systems range in cost from $2,000/kW to slightly more than $4,000/kW, while flash steam systems range from $1,000/kW to just above $3,000/kW (Hance, 2005).

Technology Development Considerations

In addition to the cost factors listed in the previous section of the report addressing geothermal binary plants, for some flash plants a corrosive geothermal fluid may require the use of corrosion-resistant pipes and cement. Adding a titanium liner to protect the casing may significantly increase the cost of the well. This kind of requirement is rare in the United States, found only in the Salton Sea resource in Southern California (Hance, 2005).

Bradley, Robert L., Jr. Geothermal: The Nonrenewable Renewable. National Center for Policy Analysis.

CEC. 2014. Estimated cost of new renewable and fossil generation in California. California Energy Commission.

Hance, C. August 2005. Factors Affecting Costs of Geothermal Power Development, Geothermal Energy Association.

Heinberg, Richard. September 2009. Searching for a Miracle. “Net Energy” Limits & the Fate of Industrial Society. Post Carbon Institute.

Kagel, A. October 2006. A Handbook on the Externalities, Employment, and Economics of Geothermal Energy. Geothermal Energy Association.

Murphy, Tom. 10 Jan 2012. Warm and Fuzzy on Geothermal? Do the Math.

Murphy concludes:

Abundant, potent, or niche? Hmmm. It’s complex. On paper, we have just seen that the Earth’s crust contains abundant thermal energy, with a very long depletion time. But extraction requires a constant effort to drill new holes and share the derived heat.

Globally we use 12 TW of energy. Heat released from all land is 9 TW, but practical utilization of that diffuse flow is impossible. For one thing, the efficiency with which we can produce electricity from low-grade heat dramatically reduces the cap to the 2 TW scale. And for heating just one home, you’d need to capture the heat flowing through an area 100 meters on a side.
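
Murphy’s “100 meters on a side” figure follows from simple arithmetic: average geothermal heat flow through continental crust is on the order of 0.1 watt per square meter, while a home’s heating demand is on the order of a kilowatt. The sketch below reproduces that estimate; both input values are round-number assumptions.

```python
# Order-of-magnitude check on the land area needed to heat one home from average
# geothermal heat flow alone (steady state, no mining of stored heat).
import math

HEAT_FLUX_W_PER_M2 = 0.09     # ~90 mW/m2, rough continental average (assumed)
HOME_HEAT_DEMAND_W = 1000.0   # ~1 kW average heating demand (assumed)

area_m2 = HOME_HEAT_DEMAND_W / HEAT_FLUX_W_PER_M2
side_m = math.sqrt(area_m2)
print(f"~{area_m2:,.0f} m^2, i.e. a square roughly {side_m:.0f} m on a side")
# About 11,000 m^2 -> roughly 100 m x 100 m, matching Murphy's figure.
```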

Clearly, geothermal energy works well in select locations (geological hotspots). But it’s too puny to provide a significant share of our electricity, and direct thermal use requires substantial underground volumes/areas to mitigate depletion. All this on top of requirements to place lots of tubing infrastructure kilometers deep in the rock (do I hear EROEI whimpering?). And geothermal is certainly not riding to the rescue of the imminent liquid fuels crunch.

NAS 2009. America’s Energy Future: Technology and Transformation. 2009. National Academy of Sciences, National Research Council, National Academy of Engineering.


Inage: calculate storage for short-term wind variation

Inage, S. 2009. Prospects for Large-Scale Energy Storage in Decarbonized Power Grid. International Energy Agency.

This paper limits itself to the issue of frequency stability in systems with increasing shares of variable renewable generation assets (for example, wind power in Western Europe (WEU) goes from 9.8% now to 25.4% in 2050; see page 23).

Electric frequency is controlled within a small deviation: for example, in Japan the standard is 0.2-0.3 Hz; in the U.S. it is 0.018-0.0228 Hz; and in the European UCTE it is 0.04-0.06 Hz. As renewables increase, the potential for fatal frequency changes grows, since such generators rarely have frequency control systems and can produce large variations in output as weather conditions change.

The need to ensure supply that matches demand under all circumstances poses particular challenges for variable renewable power options such as wind and solar generation, whose supply heavily depends on season, time and weather conditions. Short-term variations are quite random and difficult to forecast.

Existing regional grids with high shares of variable renewables do not always provide a relevant reference for a future power system with high shares of renewables. The reason is that such grids do not operate as islands; rather, they are well connected to other grids that stabilize their operation.

This is the case for Denmark and Northern Germany. In 2001, the demand and supply of wind power corresponded fairly closely. When excess power was available, it could be exported through interconnections with Norway, Sweden and Germany. Conversely, power could be imported in periods of shortfall. Therefore, in Denmark, no countermeasure would be needed to mitigate short-term and long-term variations, despite an anticipated greater share of wind power. Interconnectors provide a key short- and medium-term option to deal with the variability of renewable power generation, but will not be sufficient to deal with large grids on a continental scale with high renewables penetration.

This paper looks at what’s needed if wind power and solar power provides 12% and 11% of global electricity generation by 2050.

Variable output renewable technologies such as wind and solar are not dispatchable.

The variability characteristics of solar, wind and impoundment hydro power vary substantially from season to season, day to day, and moment to moment. Wind turbines may be shut off during storm conditions that could last for hours. Wind speeds may fall to zero or very low levels over large areas for days. Solar power is not generated at night, and insolation levels may be significantly reduced in winter, especially at higher latitudes. Solar power may also fluctuate depending on cloud levels and the moisture content of the air. Finally, hydro power may be absent in dry years, depending on the water inflow (glaciers or rainfall). These different variability characteristics require different types of response strategies.

With large shares of these technologies, steps would need to be taken to ensure the continued reliable supply of electricity. While related issues include voltage and frequency variations,  this report focuses on frequency stability. Constant balance of demand and supply is essential to achieve this, and, in the majority of today’s power systems, mid load technologies such as coal and gas and in some cases hydro, play the chief role in this regard.

The main focus of this paper is to investigate the storage growth and total global storage capacity needed between 2010 and 2050, to assist in the balancing of power systems with large shares of variable renewables.

Variable renewable energies are associated with weather-related power output variations, which consist of short-term variations on a scale of seconds to several minutes, superimposed on long-term variations on a scale of several hours. Frequency change depends on the short-term variation; therefore, this report focuses on short-term variations.

Although the output of individual wind or solar plants can vary considerably, wide geographical dispersal of wind power and PV plants reduces the net variation of many plants as seen by the system as a whole. The net output variation of renewables is an important parameter in this analysis. To date, the impact of this smoothing effect varies from region to region. If the outputs of individual wind and PV plants are uncorrelated, the extent of variation decreases with the inverse square root of the overall number of plants. On the other hand, over relatively small areas with large numbers of wind and PV plants, plants may show strong correlation with each other. In such situations a significant net variation will remain.
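
The inverse-square-root claim is easy to verify numerically: for N plants with uncorrelated fluctuations of equal size, the relative variation of the aggregate shrinks as 1/√N, while perfectly correlated plants show no smoothing at all. The Monte Carlo sketch below uses synthetic random outputs solely to illustrate that scaling.

```python
# Demonstration that uncorrelated plant-output fluctuations smooth as 1/sqrt(N),
# while fully correlated fluctuations do not smooth at all. Synthetic data only.
import random
import statistics

random.seed(0)
SIGMA = 0.1      # assumed per-plant fluctuation (fraction of rated output)
SAMPLES = 5000

def relative_variation(n_plants: int, correlated: bool) -> float:
    totals = []
    for _ in range(SAMPLES):
        if correlated:
            common = random.gauss(0.0, SIGMA)
            fluct = [common] * n_plants                                # plants move together
        else:
            fluct = [random.gauss(0.0, SIGMA) for _ in range(n_plants)]
        totals.append(sum(fluct) / n_plants)                           # aggregate, per unit of capacity
    return statistics.pstdev(totals)

for n in (1, 4, 16, 64):
    print(f"N={n:3d}  uncorrelated: {relative_variation(n, False):.4f}  "
          f"correlated: {relative_variation(n, True):.4f}  sigma/sqrt(N): {SIGMA / n ** 0.5:.4f}")
```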

The extent to which a power system can accommodate variations in supply is governed to a large extent by its flexibility – a measure of how fast and how much the system can increase or decrease supply or demand to maintain balance at all times. A range of measures exists to increase the flexibility of power systems, and thus the extent to which they can accommodate variable renewables. This paper looks at one of these measures – storage.

Another option is to interconnect among adjacent power systems. For instance, in Western Europe (WEU), interconnected power grid and electricity trading play an important role.

Flexible power plants such as gas and hydro can act as reserves to provide for deficits in wind power generation across the interconnected area, while at the same time the geographic smoothing effect is increased because the total area is larger. At present, in Denmark, where the average share of wind power is approximately 20%, effective balancing of supply and demand is facilitated through electricity trade with other Scandinavian countries.

However, taking for example a cluster of interconnected systems lying under a single weather system, all with a high share of variable renewables, trade of electricity may not be relied upon for fast access to additional electricity during low wind/solar periods, nor to dispose of surpluses, because deficits and surpluses among all such systems will coincide to a large extent. Moreover, reduced flexible power plant capacity over the entire region in 2050, due to partial displacement by renewables and nuclear, as seen in the BLUE Scenario, may lead to a lack of flexible reserves. To provide for such cases, internal solutions need to be in place. Balance will not be maintained by interconnectors alone, and system designers and operators should look at additional measures such as energy storage.

Simulations of wind power variation levels between 5% and 30% yield estimates of energy storage capacity in the WEU ranging from 0 GW to 90 GW in 2050. The balance between demand and supply was calculated for every 0.1 hour (i.e., 6 minutes). To estimate energy storage worldwide, net variations were assumed to be 15% and 30%. The simulations undertaken suggest that worldwide energy storage capacity ranging from 189 GW to 305 GW would be required.
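
The report’s approach – checking the demand-supply balance every 0.1 hour and sizing storage to ride through the resulting shortfalls – can be sketched as follows. The imbalance series here is synthetic; the point is only to show how a power rating and an energy capacity fall out of an interval-by-interval balance.

```python
# Sketch of sizing storage from an interval-by-interval balance (0.1 h = 6-minute steps),
# in the spirit of the report's method. The net-imbalance series is synthetic.
import math
import random

random.seed(1)
DT_H = 0.1   # 6-minute intervals
# Synthetic net imbalance (MW): supply minus demand after renewables; positive = surplus
imbalance_mw = [50 * math.sin(2 * math.pi * i / 240) + random.gauss(0, 20)
                for i in range(240)]   # one day of 6-minute values

required_power_mw = max(abs(x) for x in imbalance_mw)   # worst single-interval gap

# Energy requirement: the deepest cumulative deficit the storage must ride through,
# assuming surpluses recharge it (capped at "full", tracked here as zero).
state_mwh, deepest_mwh = 0.0, 0.0
for x in imbalance_mw:
    state_mwh = min(0.0, state_mwh + x * DT_H)
    deepest_mwh = min(deepest_mwh, state_mwh)

print(f"Storage power rating needed:   ~{required_power_mw:.0f} MW")
print(f"Storage energy capacity needed: ~{-deepest_mwh:.0f} MWh")
```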

As mentioned above, since each storage system has different specifications, the optimal arrangement of these systems depends on circumstances in individual countries. In Annex 1, the current technical potential of NaS cells, pumped hydro, redox flow cells, Compressed Air Energy Storage (CAES), electric double-layer capacitors, Li-ion batteries, Superconducting Magnetic Energy Storage (SMES) and flywheel systems is reviewed. Reducing the costs of such storage technologies may be a key to expanding the use of energy storage to keep pace with the growth of variable renewables.

Grid Operation and Load Curves

Load duration curves can be split into base and peak loads. Base loads are generated by plants whose output is difficult to change; they therefore operate most of the time at full capacity. Base loads are generally served by either high-efficiency fossil-fired or nuclear power plants with low production cost. Peak loads are usually served by natural gas combined-cycle plants, gas turbine generation, or hydropower plants that can change their output in a short time, although with high production cost.

An interesting case of a power system with a high proportion of wind power is found in Spain and Portugal on the Iberian Peninsula. In 2008, there was a day when the share of wind power in the total power supply reached 23% in Spain. This high proportion created power quality problems that have since been resolved through better interconnections within Spain. In addition, Spain has significant pumped hydropower capacity that can mitigate power supply variations during operation.

1.) It is preferable that wind power generation resources be distributed to maximize the smoothing effect, which is the key to reducing net variation of the wind power supply.
2.) Since the necessary capacities of energy storage depend on the net variation of wind power, measuring methods and analytical systems should be established by individual countries or groups of countries. Through an accumulation of these efforts, the necessary countermeasures should be determined.
