Integrating renewable power research

Below are intermittent energy integration posts, workshops, and other research.

Andrew Dodson. 2014. Issues Integrating Renewables.

This is a fairly technical podcast. The link cites some of the most interesting points made; here are a couple of them:

  • “There is not a single transmission expansion project in this country that is not currently being challenged by land owners.” – Pat Hoffman, Assistant Secretary, DOE Office of Electricity Delivery and Energy Reliability
  • Dodson says that $10 trillion would be needed to rebuild the electric grid to integrate solar and wind on a large scale

Savage, W. 2012. The Full Cost of Renewables: Managing Wind Integration Costs in California. Pomona Senior Theses. Paper 57.

[This 71-page paper has some great explanations of how hard it already is to operate the electric grid and issues with wind integration]

The costs of building and operating a renewable power generator do not paint a complete picture. Due to their unpredictable and variable generation profiles, renewable sources of energy such as wind impose a unique burden on the rest of the electric power system. In order to accommodate this less reliable renewable power, the remaining conventional generation units must deviate from their optimal operating profiles, increasing their costs and potentially releasing additional GHG emissions. Although this burden is conceptually understood, it is not explicitly valued in the market today. Thus, when analysts and policymakers discuss the cost-effectiveness of renewable energy as a GHG-reduction strategy, a key element, known as wind integration costs, is missing from the cost side of the equation.

Wind integration costs will only increase with time. Thanks to a diverse resource mix, California should see modest integration costs for the time being. However, as policymakers consider moving beyond the 33% RPS standard to even more ambitious goals, they are more likely to encounter the non-linearities found in most studies. Furthermore, if California truly wants to be a national leader, it needs to demonstrate that its solutions can be replicated at the national scale, not just in areas whose wind resources are balanced by significant solar, geothermal and hydroelectric potential.

The cost of integrating renewable power can be generally defined as the cost of all actions taken to maintain the reliability of the electric grid in response to the uncertainty and variability of renewable power. This chapter will explain, from a physical operations perspective, exactly what those actions are.

Traditional System Operations. The day-to-day job of electric system operators is organized around one central goal: maintaining the reliable flow of electricity to customers, or more colloquially, “keeping the lights on”. In order to do this, groups known as balancing authorities maintain careful control over the electric grid at all times. Each such organization is responsible for maintaining reliability within a certain geographic region; for example, the California Independent System Operator (CAISO) is responsible for maintaining the reliable supply of power to most of California. To maintain reliability, each balancing authority must match the supply and demand for power within its territory at all times. The demand for power is known as “load”, and represents the sum of electricity being drawn by residential, commercial and industrial customers. This power is supplied by electric generation from power plants, such as coal-fired steam power plants, nuclear generation stations, hydroelectric dams or wind turbines. Power can also be imported or exported from one balancing authority to another.

One crucial feature of electric power is that, generally speaking, it cannot be stored. Most consumer goods are produced, put into inventory, and then sold whenever a customer wants to buy them. Electricity has no such “shelf life”. When electricity is generated at a power plant, it must be consumed instantly. Therefore, balancing authorities must make sure that the amount of power being generated is equal to load, not just in the aggregate, but at any given instant.

If generation exceeds load, it will increase the frequency of the alternating current power that flows through transmission lines, and vice versa. By convention, electric devices in the United States are designed to operate using an alternating current at a constant, 60-hertz frequency. Even small deviations from this frequency can cause serious damage to electrical equipment, and can trigger generator trips or load shedding to avoid a system emergency. Even without intermittent renewable technologies, the task of instantaneously balancing load and generation is a significant challenge for system operators.

As a general rule, system operators cannot control the amount of load that customers demand at any given time. Therefore, they must forecast the expected load, and then plan ahead so that enough generation will be available to meet demand. For example, on any given morning, the CAISO will estimate the hourly load profile for the next day. The ISO might estimate a load of 16,000 MW for the hour of 12am to 1am, a load of 15,000 MW for the hour of 1am to 2am, and so forth. Then, power plants can submit bids to provide this energy. The ISO will accept as many bids as necessary to meet projected demand, starting with the lowest-cost bids and moving up the cost curve. Based on the results of this bidding process, the ISO will produce an energy schedule, which specifies which power plants will generate power, when they will generate power, how much power they will produce, and how much they will be paid. It also issues daily unit commitment instructions, so that power plants with long start-up times can turn on or off (CAISO, 2010a).

However, this process is imperfect. The actual load drawn by consumers does not follow the neat forecast assumed during the planning process. For example, the forecast of 16,000 MW for 12am to 1am is almost certainly wrong. Specifically, three types of errors are possible. First, the estimate of load could be biased. For example, during the first hour of the day, customers could use more total energy than expected. Second, the load could rise or fall during the hour, giving it an intra-hour load shape. For example, customers might use 17,000 MW at 12am, and decrease their usage to 15,000 MW by 1 am. Third, the load could fluctuate randomly about the average of 16,000 MW, creating a sawtooth pattern. All three of these possibilities are both realistic and common in normal grid operations.
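The bidding process described above is, at heart, a merit-order selection: accept the cheapest bids first until forecast load is covered. A minimal sketch of that logic, with entirely hypothetical bid prices and quantities:

```python
# Illustrative merit-order selection: accept the cheapest supply bids
# until the forecast load is met (all numbers hypothetical).

def merit_order_dispatch(bids, load_mw):
    """bids: list of (price_per_mwh, quantity_mw); returns accepted awards."""
    awards = []
    remaining = load_mw
    for price, quantity in sorted(bids):      # cheapest bids first
        take = min(quantity, remaining)
        if take > 0:
            awards.append((price, take))
        remaining -= take
        if remaining <= 0:
            break
    return awards

bids = [(30, 5000), (20, 8000), (45, 6000), (25, 4000)]
schedule = merit_order_dispatch(bids, 16000)
# Accepted in price order: 8000 MW @ $20, 4000 MW @ $25, 4000 MW @ $30;
# the $45 bid is not needed and sets no schedule.
```

A real day-ahead market also enforces transmission and unit-commitment constraints, but the cost-curve intuition is the same.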

In addition to load uncertainty, there is also a possibility that expected generation will be unavailable. For example, a fire might damage a transmission line scheduled to provide power from a distant source, or a mechanical failure could force a natural gas plant to shut down. While it is impossible to prepare for every possible contingency, grid operators include the possibility of these unexpected events in their planning process.

In short, “irrespective of current and future levels of wind generation, power systems are already required to cope with significant variability and intermittency concerns” (Fox et al., 2007). The issues of uncertainty and variability, so often associated with renewable power, already exist in electric systems. Operators manage these issues using “ancillary services”, which are used to match generation and load at a more granular level.

A power plant is said to provide ancillary services if a certain portion of its capacity is set aside to be flexible. In addition to simply providing energy, power plants can choose to sell the ability to accommodate changes in demand on short notice. For example, a 100-MW gas-fired power plant might provide 80 MW of steady power, and also offer the ability to increase or decrease its generation by up to 20 MW. The capacity set aside for this purpose is known as the operating reserve. Power plants can offer several different types of operating reserves, differentiated primarily based on how fast they can respond to a dispatch order requiring them to increase or decrease generation.

Unfortunately, there is no single set of ancillary service definitions; the names and exact technical specifications vary among different balancing authorities and countries, largely as a matter of convention. However, there are a few common categories. Almost all balancing authorities will have some kind of fast-responding ancillary service, variously known as frequency regulation or primary control. Regulation service is designed to respond on the order of seconds, and is controlled by an Automatic Generation Control (AGC) system. This allows generation to automatically adjust to small fluctuations in load (Rebours et al., 2007).
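As a rough illustration of what an AGC loop does, the sketch below nudges regulating units in proportion to the frequency error, clipped to the regulation capacity that was procured. The gain and the reserve band sizes are hypothetical illustration values, not parameters of any real balancing authority:

```python
# Minimal sketch of AGC-style frequency regulation (all gains and
# capacities hypothetical): nudge regulating generation in proportion
# to the deviation from nominal frequency.

NOMINAL_HZ = 60.0
GAIN_MW_PER_HZ = 3000.0   # hypothetical system response gain

def regulation_adjustment(measured_hz, reg_up_mw, reg_down_mw):
    """Return the MW adjustment commanded to regulating units,
    clipped to the regulation capacity procured in each direction."""
    error_hz = NOMINAL_HZ - measured_hz    # low frequency -> raise generation
    raw_mw = GAIN_MW_PER_HZ * error_hz
    return max(-reg_down_mw, min(reg_up_mw, raw_mw))

# A large sag to 59.5 Hz saturates a 100 MW regulation-up band.
print(regulation_adjustment(59.5, reg_up_mw=100, reg_down_mw=100))  # prints 100
```

Real AGC responds to an area control error that also includes interchange deviations, but the clip-to-procured-capacity behavior is the reason regulation reserve quantities matter.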

Balancing authorities also have ancillary services that allow manual adjustments to generation, which are generally slower in response time and larger in magnitude.

Generally speaking, there will be different types of operating reserves for load following, imbalance energy and contingencies. Load following refers to the ability to track the shape of the day’s load profile at a greater granularity than hourly schedules, and generally operates on the order of minutes. Imbalance reserves help to compensate for net schedule bias, and contingency reserves are in place to replace generation that could be lost in a system emergency, such as the loss of a major transmission line (Dragoon, 2010).

It would be economically infeasible – to say nothing of physically impractical – to have enough operating reserves to respond to every imaginable contingency. Instead, balancing authorities select a reasonable operating margin to provide a satisfactory level of reliability. The size of this operating margin is based on several factors, including the largest possible single contingency event, the availability of power plants connected to the system, and the expected error in demand forecasts (Freris and Infield, 2008).

Greater operating reserve requirements to maintain grid reliability impose a cost that is ultimately paid by electric ratepayers. These ancillary services are the means through which system operators manage uncertainty and variability. Currently, that operating challenge is driven by the characteristics of load. The addition of variable energy sources, such as wind power, will increase the magnitude of this operating challenge; however, the challenge remains conceptually the same.

Understanding Wind Power’s Impact. Therefore, our first task in evaluating the cost of wind integration is to assess the extent to which wind power increases the requirement for balancing reserves. To do so, it is helpful to think of wind as “negative load”. Since wind power generally cannot be controlled, its behavior is more similar to load than generation. By subtracting the amount of wind generation from load, one creates a new “net load” profile.

Then, balancing authorities must operate traditional power plants so that their generation matches net load, as opposed to raw load. Due to the inclusion of wind power, net load will be more unpredictable and more variable than raw load. However, the techniques used to balance net load are the same ancillary services that are provided in traditional systems. Kirby and Milligan (2008) note that wind has many similar characteristics to load, and that the differences in managing the two are “more of degree than kind,” as wind “add[s] to aggregate variability.”

The crucial question is how much of each type of ancillary service is required, and how much it will cost. In order to determine the impact of wind power on the reserve requirements for net load, it is important to first understand the characteristics of wind power generation.

The power generated by a turbine is a function of wind speed, and has four distinct regions. Light winds will not generate any power at all; the minimum level of wind required to generate electricity is known as the cut-in speed, often around 4 m/s. From there, the wind power increases as a cubic function of wind speed, until the turbine reaches its maximum rated power output. Within this region, the output can change dramatically in response to even small changes in the wind. Once the rated power is reached, usually at 13-14 m/s, the wind speed can continue to increase but output will remain constant. However, if the wind reaches too high a speed, often 25 m/s, the turbines must shut down, or “cut out”, to avoid damaging the equipment. The sudden drop-off of power is another potential source of power variability (Laughton, 2007).
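The four regions can be sketched as a simple power-curve function. The cut-in, rated and cut-out speeds below are the typical figures quoted above; the 2 MW rating is a hypothetical value, and the smooth cubic interpolation is an idealization of real turbine curves:

```python
# Sketch of a four-region wind turbine power curve (rating hypothetical):
# region 1: below cut-in, no output
# region 2: cubic rise between cut-in and rated speed
# region 3: constant rated output
# region 4: above cut-out, turbine shuts down

CUT_IN, RATED_SPEED, CUT_OUT = 4.0, 13.0, 25.0   # m/s
RATED_MW = 2.0                                    # hypothetical 2 MW turbine

def turbine_power(wind_mps):
    if wind_mps < CUT_IN or wind_mps >= CUT_OUT:
        return 0.0                    # regions 1 and 4
    if wind_mps >= RATED_SPEED:
        return RATED_MW               # region 3
    # Region 2: output rises with the cube of wind speed.
    frac = (wind_mps**3 - CUT_IN**3) / (RATED_SPEED**3 - CUT_IN**3)
    return RATED_MW * frac
```

In the cubic region a 1 m/s change in wind speed (say, 9 to 10 m/s) moves this turbine's output by roughly a quarter of a megawatt, which is why small wind fluctuations can produce large output swings, and why the hard drop to zero at cut-out is a distinct reliability concern.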

Incremental Reserve Requirements. These trends of variability and uncertainty help determine the incremental reserve requirements; in other words, how much more balancing capacity is required to maintain reliability on a grid with wind than on one without? One common misconception is to assume that all variability and uncertainty associated with wind power must be counter-balanced by a dedicated flexible power plant. This is simply not true. Kirby and Milligan (2008) describe how “the power system does not need [to] respond to the variability of each individual turbine”; instead, the system must “meet the North American Electric Reliability Corporation (NERC) reliability standards and balance aggregate load-net wind with aggregate generation. Fortunately, wind and load tend to be uncorrelated, so they do not add linearly, greatly reducing the net flexibility required from conventional generation.”

Reliability standards are typically proportional to the standard deviation of the differences between actual load and scheduled load, or the load errors. For example, NERC standards require balancing authorities to maintain sufficient reserves such that 10-minute errors can be contained within certain limits 90% of the time in each month (Dragoon, 2010). In other words, the required balancing reserves depend on the magnitude of the 90th percentile error, which is directly proportional to the standard deviation for approximately normal distributions. As more wind is added to the grid, the standard deviation of net load errors will increase, requiring more incremental reserves. However, as Kirby and Milligan explain above, the standard deviation of net load error is not simply the sum of the standard deviations of load error and wind error.
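The point that uncorrelated errors add in quadrature rather than linearly can be made concrete with a small numeric sketch. The sigma values below are hypothetical; the "three sigma" reserve multiple follows the rule of thumb cited by Holttinen et al. (2008):

```python
# For uncorrelated load and wind errors, net-load error grows in
# quadrature, not linearly (all sigmas hypothetical, in MW).

import math

def net_load_sigma(sigma_load, sigma_wind, correlation=0.0):
    """Std. dev. of net-load error for (possibly correlated) components."""
    return math.sqrt(sigma_load**2 + sigma_wind**2
                     + 2 * correlation * sigma_load * sigma_wind)

sigma_load, sigma_wind = 400.0, 300.0
sigma_net = net_load_sigma(sigma_load, sigma_wind)   # 500 MW, not 700 MW
reserves = 3 * sigma_net                             # ~3-sigma rule of thumb
incremental = 3 * (sigma_net - sigma_load)           # extra reserves due to wind
```

Here wind's 300 MW of error raises the reserve requirement by only 300 MW (3 × 100), not the 900 MW a linear sum would imply, which is the "more of degree than kind" effect Kirby and Milligan describe.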

With a wind penetration level of 20% scheduled using persistence forecasts, the grid would require 7% of wind capacity to be set aside as operating reserves.

Milborrow (2007) estimates that if wind supplies 10% of electricity, the incremental reserve requirements would equal 3-6% of the wind’s rated capacity; that number grows to 4-8% at 20% penetration levels.

Milligan (2003) estimates that with 17% of energy coming from wind, incremental reserve requirements equal 6-11% of rated wind capacity, depending largely on forecast quality.

Gross et al. (2006) find similar results in a review of several studies, with 5-10% reserve requirements at 20% wind. Most studies find that reliability can be achieved by procuring balancing reserves of approximately three times the standard deviation of net load error (Holttinen et al., 2008).

Second, the studies confirmed that costs of additional ultra-fast regulation reserves were minimal. This is consistent with the idea that, aggregated across an entire system, very large swings in power output simply do not happen within seconds, or even a few minutes.

Third, many studies find that costs of integration increase non-linearly as a function of wind penetration level. There are several intuitive reasons for this result. First, as demonstrated in the previous chapter, there are increasing marginal quantities of balancing reserves required to deal with increasing levels of wind. At low levels of wind, the variability of net load only increases by a small fraction of the variability in wind alone. At higher levels of wind, the variability of net load increases at an almost 1:1 rate with the variability of wind alone. Second, the marginal costs of providing these balancing reserves also increase as more wind is added to the grid. In well-functioning markets, economic dispatch systems are used to find the most cost effective way to balance wind power. This means that highly flexible units that can easily provide ancillary services are used first, and more expensive balancing services come later. Third, earlier projects are likely to use the geographic areas with the highest wind speeds and capacity factors, which tend to have a more stable energy output. The addition of inferior project sites can cause integration costs to rise.

Many studies find a key point of inflection in wind integration costs to be on the order of 20% penetration.

Milborrow (2007) reviews several more theoretical studies on high wind penetrations, and finds that double-digit integration costs are likely to begin when wind reaches 20-30% of electric generation on a standard system.

Although California has an aggressive RPS, its current and projected mix of renewable projects is relatively well balanced. Forecasts for the year 2020 suggest that wind will only comprise approximately 30% of California’s RPS goals; solar power will comprise another 35%, geothermal another 20%, and the remaining 15% will come from biomass, biogas and small hydro (CPUC). The existence of legacy contracts in geothermal power from the days of the Public Utility Regulatory Policies Act and excellent solar resources have helped achieve this balance. Generally speaking, geothermal provides baseload power, and solar’s fluctuations are independent of wind. Therefore, it seems that for the time being, wind’s penetration within the entire electric grid will remain below 15%, sparing California from the significantly higher integration costs that seem to begin at around 20%.

California’s on-shore wind resources are clustered in three main areas: Altamont Pass, east of San Francisco; Tehachapi Pass, south of Bakersfield; and San Gorgonio Pass, outside of Palm Springs. Together, these three areas produce over 95% of California’s wind power from over 13,000 turbines (California Energy Commission). Within each area, geographic diversity is limited, as the best resources are tightly clustered. However, the fact that all three areas operate under the same ISO is good for costs, because they are far enough apart to achieve low cross-correlations. Another relevant factor in integration costs is overall grid flexibility, which is influenced by the type and cost of other generating units available to provide balancing services.

In 2010, just over 70% of California’s energy was generated in-state, with the remainder imported; in-state generation is generally what is used for renewables integration. Of in-state generation, over half comes from natural gas, which is a decently flexible resource. Combined-cycle gas turbines that are already on, as well as gas turbines, provide an important source of flexibility for the grid. Approximately 20% of energy comes from “baseload” sources, such as coal, geothermal and nuclear power, which have difficulty with fast cycling. Hydroelectric power, which is physically the most flexible resource when not subject to policy constraints, provides 15% of in-state generation, and the remainder comes from variable renewable sources (CEC 2010).

The physical flexibility of California’s resources is quite good, especially the mix of natural gas and hydroelectric power. This, along with the geographic distance between major wind farms and the fact that wind levels are relatively low, indicates that integration costs have the potential to be comparatively low in California.

California’s Market Design. It is worth understanding the conventions, terminology and market processes used in California’s electric markets to avoid potential confusion. While most modern electric system operators follow the same principles, specific details vary from region to region. Within California, CAISO is responsible for making sure that generation and load are always equal, and it does so in several stages.

The first stage is the day-ahead market (DAM), also known as the integrated forward market (IFM), and is the “first cut” at scheduling energy generation to match demand. The process to schedule energy for any given operating day begins with the submission of energy bids. Generating units submit bid curves for each operating hour, containing several important characteristics. All generators have minimum and maximum physical operating levels; for example, a gas-fired plant may be able to operate between 20 MW and 100 MW. Then, bids may include a portion of capacity that is “self-scheduled”, meaning that the generator is willing to supply that quantity regardless of price. The bid curve then includes minimum prices that the generator is willing to accept for various quantities of energy. Continuing the example, the gas generator may be willing to supply between 20 and 40 MW at any price, so it would submit a self-schedule bid up to 40 MW. Then, it might offer to provide between 40 MW and 70 MW for a minimum price of $20 / MWh, and up to 100 MW for a minimum price of $30 / MWh. Generators may use up to 10 different price-quantity combinations in their bid curves, and may submit different bids for different operating hours.

Finally, every bid contains operational details, including the cost and time required to start up the plant, information about whether the plant is already online, and how quickly the plant can move (“ramp”) from one power level to another. Bids may come from generators within the CAISO or anyone wishing to import power from a neighboring balancing authority. Simultaneously, generation units may also submit bids to provide ancillary services. Specifically, the ISO explicitly procures four types of ancillary services: regulation up, regulation down, spinning reserves, and non-spinning reserves. (Load-following services are not an explicit ancillary service in CAISO, and will be discussed shortly). Regulation up and down are the capacity to adjust output in response to an automatic signal on a near-instantaneous basis, while spinning and non-spinning reserves are reserves that can provide power within 10 minutes in the event of a system contingency. Generators wishing to participate must include, for each hour, the quantity of each ancillary service they wish to provide, their minimum price for doing so, and operational information about their ramp rates. These generators may submit mutually exclusive bids for energy and ancillary services. Third, in the DAM, load-serving entities submit bids to purchase energy. Similar to supply bids, demand bids can either come as self-schedules (i.e. willing to buy a certain quantity of energy at any price) or as price-quantity curves. These demand bids can be used to serve load within CAISO or to export power to a neighboring balancing authority. Finally, based on these load forecasts, CAISO will determine the desired quantity of ancillary services to meet its reliability obligations. The DAM closes at 10:00 AM on the day before any given operating day.
For each operating hour, CAISO uses a co-optimization model to take the energy supply bids, energy demand bids, ancillary service supply bids, ancillary service demand requirements, and any available information such as transmission constraints, and find the least-cost way to dispatch generation units to meet load and ancillary service requirements. Later in the afternoon, the results are published, and generation units can see their schedules for the next day. This DAM process is where most of the work happens: the bulk of energy, non-spinning reserves and spinning reserves are scheduled through the DAM, and all regulation reserves are procured at this stage.

The hour-ahead scheduling process (HASP) market is where the bulk of wind scheduling comes into play. Currently, California uses a program known as PIRP, the Participating Intermittent Resource Program. Under PIRP, CAISO contracts with an external vendor to create generation forecasts for all wind farms under the program. These forecasts are released 105 minutes prior to the start of each operating hour, and participating generators use that forecast as a self-scheduled supply bid quantity during the HASP. Using the officially sanctioned forecast has economic benefits for wind forecasters that will be discussed later. New bids and adjustments for the HASP market must be submitted no later than 75 minutes prior to the start of any given operating hour. CAISO re-runs its optimization software, and publishes the results no later than 45 minutes before the start of the operating hour. By this point, the “baseline” hourly energy schedule is fixed, the energy schedules on the interties between CAISO and other balancing authorities are fixed, and the quantities of available ancillary services are fixed.

The third and final stage involves real-time operations. This stage uses two tools, real-time economic dispatch and regulation reserve, to match generation to the intra-hour variations in load. Real-time economic dispatch (RTED) is how the CAISO provides load-following (LF) services. Suppose that the final hourly energy schedule was 5,000 MW, but load quickly increased to 5,100 MW. In this situation, CAISO would look back at the economic energy supply bids it had received, and award an additional 100 MW to the cheapest available generation, subject to operational and locational constraints. Alternatively, if load fell to 4,900 MW, the CAISO would reduce the most expensive 100 MW of generation that could feasibly make that adjustment. In real-time, CAISO makes these adjustments to its economic dispatch every 5 minutes to provide load-following.
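The 5-minute adjustment logic can be sketched roughly as follows. The unit data is hypothetical, and a real RTED also enforces ramp-rate and network constraints that this sketch omits:

```python
# Illustrative 5-minute dispatch adjustment (all numbers hypothetical):
# raise output from the cheapest remaining bid headroom when load runs
# above schedule, or back down the most expensive scheduled energy
# when load runs below schedule.

def rted_adjust(units, deviation_mw):
    """units: list of dicts with 'price' ($/MWh), 'scheduled' and 'max' (MW).
    Mutates the schedules to absorb deviation_mw; returns the units."""
    if deviation_mw > 0:        # load above schedule: move up the bid stack
        for u in sorted(units, key=lambda u: u["price"]):
            take = min(u["max"] - u["scheduled"], deviation_mw)
            u["scheduled"] += take
            deviation_mw -= take
    else:                       # load below schedule: back down from the top
        for u in sorted(units, key=lambda u: -u["price"]):
            give = min(u["scheduled"], -deviation_mw)
            u["scheduled"] -= give
            deviation_mw += give
    return units

units = [{"price": 20, "scheduled": 3000, "max": 3000},
         {"price": 30, "scheduled": 1500, "max": 2000},
         {"price": 40, "scheduled": 500, "max": 1500}]
rted_adjust(units, 100)   # $20 unit has no headroom; the $30 unit goes to 1600 MW
```

The $20 unit is already at its maximum, so the 100 MW increment goes to the next-cheapest unit with headroom, mirroring the "cheapest available generation" rule above.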
The other tool, regulation reserve, is automatically dispatched to absorb the fluctuations that occur between these 5-minute adjustments. At the end of each 5-minute period, CAISO uses its real-time economic dispatch to compensate for the net change that has occurred since the last adjustment. This way, regulation reserves can be “reset” to their base point, so that this ultra-fast capacity will be fully available in the next 5-minute period.

Wind integration costs come in several forms: from energy imbalance met by load following, from increased requirements for regulation reserve, and from less efficient use of conventional plants.

One commonly mentioned solution to the integration challenge is dedicated energy storage technologies. Proponents argue that energy storage devices, such as large battery arrays, can store excess energy when the wind is producing large amounts of power, and discharge that energy to the grid when the wind stops blowing. Popular media often portrays storage technologies as a “silver bullet” solution, and technology vendors are not shy about echoing that idea. For example, A123 Systems, a manufacturer of lithium ion batteries, published a white paper that showcases how the company’s technology can manage fluctuations in renewable energy, help reduce CO2 emissions, and promote grid reliability (Vartanian).

Despite the appeal of energy storage, the economics simply do not add up for its use in renewables integration. Rittershausen and McDonagh (2010) examine the use of energy storage for intermittent energy smoothing and shaping, an application that could potentially reduce load following requirements, and find that costs exceed benefits by two orders of magnitude. Other potential uses of energy storage, such as providing ancillary services or shifting load from off-peak to on-peak are (1) also not cost effective, and (2) are not linked to renewables integration nearly as directly as industry insiders would argue. However, there are other ways to induce a negative correlation between changes in load and wind generation, apart from dedicated energy storage devices. Demand-side management (DSM) uses devices that are already deployed on the grid, and as a result, can achieve many of the same benefits of storage at considerably lower cost.

Giant energy storage projects are not cost-effective, and CAISO cannot simply “spread out” wind generators.

The inherent flexibility of generating resources is largely fixed, and policymakers have only limited control over the mix of renewable technologies.


CEC. 2008. Transmission technology research for renewable integration. California Inst. for Energy & Env. for California Energy Commission. CEC-500-2014-059. 123 pages.

From a transmission operational dynamics perspective, geothermal and biomass energy behave much like traditional generators, especially base-load plants, and therefore raise little concern about their operational behavior within the power grid, though some biomass resources vary seasonally.

Some types of renewable generation, however, are “fueled” by variable, or intermittent, energy sources like wind and sunshine (insolation), which are controlled by weather and the rotation of the earth. These intermittent renewables can create power plant behaviors for which the grid was not designed, that are quite unfamiliar to grid operators, and that are outside their control. Achieving 20% renewable energy content will require a projected renewable nameplate capacity of over 14,000 MW, with more than 60% of that capacity coming from the intermittent renewable forms of wind and solar. Achieving 33% would require 26,000 MW of renewable nameplate capacity.

Relatively small penetrations of intermittent renewables are expected to have “operational implications significant but manageable” (“California Independent System Operator Integration of Renewable Resources,” David Hawkins & Clyde Loutan, Cal ISO, November 2007). For greater penetration levels, however, transmission infrastructure expansion, improved wind and solar forecasting, increased ancillary services for the grid, and new technologies for a smarter grid will likely be required. Energy storage might also be deployed to mitigate some of the effects of intermittency.

The overall situation is complicated by the current and projected status of the grid over the next few years, even without considering the addition of renewables. Much equipment is aging and planned to be retired during the next 10 years. Prospective once-through cooling regulations may accelerate this trend. Operating margins have been steadily shrinking as transmission investment has not kept pace with increases in demand. Dynamic operating constraints have emerged which prevent major transmission lines from operating at the levels for which they were designed. Increasing levels of imported power have led to a substantially larger, more interconnected regional grid than envisioned when much of the infrastructure was planned.

Excess Total Generation – To achieve the increasing percentages of renewables, a rapid addition of renewable power plants will be required. The needed rate of addition is considerably higher than the growth of demand and is projected to be higher than the sum of demand growth and the retirement of existing equipment. In other words, the addition of the renewable plants may force the retirement or lowered use of some existing thermal plants, even though they are still viable. Cal ISO forecasts 13% less non-renewable generation in 2020 than in 2008.

Congestion Costs – Once connected to the grid, remotely located resources must be brought into major load centers. Lines which currently have adequate capacity are likely to experience increased periods of congestion.

Stability – Over wide areas, the grid can exhibit unstable behavior if power flows exceed dynamic limits. If not controlled, this can trigger large-scale outages. This dynamic grid stability, even without the addition of renewable resources, is a critical issue. To maintain reliability, potential instabilities must be sensed and responded to quickly. While transmission lines have a designed power handling capacity based on thermal limits, instabilities frequently limit maximum transmitted power to significantly lower levels. In particular, this limits both the amount of power that can be imported from out of state and the amounts that can be transferred from one part of the state to another. The addition of significant remote generating facilities, much of it with low inertia, may have undesirable effects.

Local Area Limitations – Within the state, it appears that much of the new renewable energy will be generated in remote areas, while most of the consumption will be concentrated in the population centers such as the Los Angeles Basin. Five load centers comprise 87% of the total load in California. Power flow transmitted into these areas is channeled through key substations called gateways. Many of these gateways are already operating at their limits, typically in the range of 50% of the locally consumed power. If the gateways to an area are limited to 50% of the consumed power, then the balance of the power must be generated within the local area. As a result, even if there is abundant renewable power generated within the state and connected to the transmission system, many existing parts of the system will need significant increases in capacity.
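The gateway constraint reduces to a simple balance: if imports into a load pocket are capped at a fraction of local load, the remainder must come from generation inside the pocket. A minimal sketch (the 50% cap comes from the text; the 20,000 MW load figure is a hypothetical example, not a number from the report):

```python
def required_local_generation_mw(local_load_mw, gateway_limit_fraction):
    """Power that must be generated inside a load pocket when gateway
    imports are capped at a fraction of the locally consumed power."""
    import_capacity_mw = local_load_mw * gateway_limit_fraction
    return local_load_mw - import_capacity_mw

# Hypothetical 20,000 MW load center with gateways limited to 50% of load:
# 10,000 MW must come from generators inside the area, regardless of how
# much renewable power is available elsewhere on the transmission system.
print(required_local_generation_mw(20_000, 0.50))
```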

Limited Bulk Storage – Existing large storage facilities, which could act to shift loads from day to night, are extremely limited and may be constrained by transmission limits. The Helms Pumping Facility, one of the largest in the state, with a maximum pumping capability of 900 MW from 3 pumps, operated at this level less than 250 hours in 2005, primarily due to transmission constraints. New pumping facilities require 10 – 12 years to implement.

Intermittent power sources generally complicate the problem of managing the grid.

Extreme Events can be described as system disturbances characterized by multiple failures of transmission system components, resulting in widespread system collapse through cascading outages. Such large-scale events have always been difficult to analyze, plan for, and manage, but the potential severity of such events has grown with the interconnectedness of the grid, and is likely to grow more with the increasing integration of intermittent renewables in the system. Operators in adjoining systems generally don’t have good visibility of each other’s systems, hindering both the detection of impending or initiating extreme events, and effective countermeasures once an operator becomes aware an extreme event is propagating. Existing tools for operators have not been adequate to respond to these events. Currently there is significant effort focused on real-time system awareness and online analytical tools utilizing phasor measurements; additional areas of potentially beneficial research include advanced planning to better identify critical transmission paths, adaptive protection systems, and strategies for automated islanding of the grid.

Because most new renewable power plants will be located in areas rich in renewable resources but remote from California electricity customers, electric transmission will be crucial for transporting the renewable electricity to load centers, and thus for meeting the state’s renewable energy goals. Consequently, each new renewable power plant must be successfully integrated with the transmission system. To fulfill this mission, transmission must achieve three broad objectives:

  1. provide physical access for each new power plant,
  2. reliably accommodate any unique renewable generator behaviors, and
  3. increase its power carrying capacity to handle the additional electric power flows.

It is reasonable to assume that modest penetrations of renewable generation, perhaps up to 20%, can be successfully integrated into the grid by traditional system investments, such as building new lines and conventional generation for increased capacity and to maintain reliability. However, as the penetration of renewables grows, to perhaps 33% and beyond, and more transmission infrastructure is added to the system, its complexity will grow along with operational difficulties. It also will likely become increasingly difficult to meet the environmental and economic criteria for siting new infrastructure in a timely manner, further reducing the effectiveness of the “build” approach.

As an alternative, new technologies can be deployed in the transmission system to endow it with expanded or new capabilities that, at a minimum, will make renewable integration easier and less costly, and ultimately at some higher renewable penetration level, will probably be required to achieve California’s renewable energy goals. Some transmission stakeholders have expressed the opinion that we are already at the level of renewable penetration in California where new technologies will be required.

For most new renewable power plants, access to the transmission system can be directly translated into acquiring new right of way (ROW), and building new transmission lines between the power plant and an interconnect point on the transmission grid. The siting process for a new transmission project is highly complex and difficult, involves many different stakeholders, and takes many years, typically 10 to 12 years for a major line.

While there are a number of state and national policy changes being pursued to shorten this time, concern remains that it will take longer to build the new transmission extension to a renewable power plant than it will to build the power plant. Two major impediments to timely new ROW approvals are cost/benefit allocation economic debates, and siting challenges, exemplified by "not in my backyard."

From a transmission operational dynamics perspective, some renewable energy plants such as geothermal, biomass and perhaps solar thermal with enough thermal storage will operate benignly, similar to traditional baseload thermal power generators. Wind and some solar renewable generation, however, are intermittent, and exhibit power plant behaviors unfamiliar to grid operators, and for which the grid was not designed.

The Energy Commission Intermittency Analysis Project has projected that meeting the 33% goal by 2020 will result in power production capacity in excess of total demand requirements. Existing conventional plants would need to be closed or operated at lower capacity factors, potentially reducing the availability of system support generation. This situation might be compounded if coastal thermal plants using once-through cooling must be shut down.

Finally, to stimulate the private development of renewable power plants, utility contracts generally include the guaranteed acceptance of power generated.

Any transmission line has physical limits on the amount of power that can be transmitted. Which limit is the dominant factor constraining the capacity of a given line at a given time depends on the conditions of that particular line and the broader wide-area transmission grid.

Thermal Limits: The maximum power a particular line can ever handle is its thermal limit. The primary source of heat comes from the interaction between the electrical resistance of the line material and the electric current flowing through it. Above this limit, a line may excessively sag, creating a safety hazard or an outage, or be physically damaged by excessive temperature.

Stability Limits: Poor voltage support and dynamic or transient instabilities can, in some situations, impose capacity limits substantially below the thermal limits. It is not unusual for a major interconnection path to be operationally limited by instabilities to half its rated static thermal limit. This effect imposes severe limits on the amount of renewable power which can be imported into California, and into major load centers within the state.
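The operational capacity of a line is therefore the lower of its thermal and stability limits. A minimal sketch, using the standard three-phase relation S = √3·V·I for the thermal rating (the 500 kV / 2,000 A line is hypothetical; the half-of-thermal stability limit mirrors the "not unusual" case in the text):

```python
import math

def thermal_limit_mva(line_kv, ampacity_a):
    """Three-phase thermal rating: S = sqrt(3) * V_line-line * I, in MVA."""
    return math.sqrt(3) * line_kv * ampacity_a / 1000.0

def effective_capacity_mva(thermal_mva, stability_mva):
    """The operational limit is the lower of the two constraints."""
    return min(thermal_mva, stability_mva)

# Hypothetical 500 kV line with 2,000 A conductors: roughly 1,732 MVA thermal.
thermal = thermal_limit_mva(500, 2000)
# Stability studies limit the path to half its thermal rating:
print(effective_capacity_mva(thermal, thermal / 2))
```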

The most common way of transporting bulk electric energy is by means of overhead AC transmission lines, which are typically constructed of stranded, bare aluminum or aluminum/steel cables, suspended by insulators from steel lattice towers or wood poles. At some point, the loading limit is reached, and some method must be used to increase the line’s capacity. One way, of course, is to build another line, either as a parallel line or higher capacity replacement in the same corridor, or in a suitable alternate route. Assuming that the existing corridor is the only feasible one and has no additional space, there are a number of technological approaches available for increasing the power carrying capacity within the constraints of the existing ROW.

4.2.2 New Capabilities Addressed

Access Siting Capability #1: To facilitate environmental and societal deliberations, and enhance acceptability of new transmission lines. The addition of substantial amounts of new renewable generation to the electric system will require that new transmission lines be built between the renewables plants and the existing transmission grid, and also likely require significantly increased power-carrying capacity from the transmission gateways to the loads. These overhead transmission technologies can provide the additional needed capacity with reduced visual and environmental impacts compared to conventional overhead lines, potentially simplifying and easing the permitting process.

Reconductoring involves replacing the stranded conductors in the line with new ones of larger diameter. This is the most common upgrading method, with minimal visual impact, since the appearance of the line changes little. Since the current-carrying capacity (and, by corollary, the power transfer capacity) of a conductor is proportional to its cross-sectional area, a conductor of 50% larger diameter can have up to 2.25 times the capacity. This increase in conductor size is not difficult to accommodate; if the tower crossarms do not need strengthening, the only modifications needed are replacement of the suspension clamps that attach the conductor to the insulator string. Even if towers and crossarms need strengthening, the additional costs will still be reasonable, and visual changes to the line will not be significant. The only other issues are possible upgrades to terminal equipment, such as transformers, relays, switches, etc., to handle the additional current, and stability studies to assess the need for greater remedial action for contingencies at the higher current level. In general, this is a mature and cost-effective technology, and is the first and best option for utilities when additional capacity is needed.

Bundling simply means using two or more conductors per phase. Adding a second conductor identical to the first (the usual practice) doubles the current, which doubles the power transfer. Like reconductoring, this is a mature and cost-effective technology that is one of the first alternatives considered by transmission planners, usually involving simple retrofits of suspension clamps, possible replacement of insulators, and possible upgrades to towers and crossarms. Visual impacts are slightly higher than for reconductoring, which may be an issue in the permitting process.

When it is not feasible to increase the current in a transmission line corridor by reconductoring or bundling conductors, the line can be converted to the next voltage level, e.g., from 115 kV to 230 kV. The increase in power is proportional to the increase in voltage, in this case, by a factor of 2. If the existing conductors are used, the only changes to the line itself are new insulators and possibly some strengthening of the towers and crossarms, so the visual impacts are minimal. However, the terminal equipment, including transformers, circuit breakers, relays, switches, etc., must be upgraded, and the costs for this will be significant. This is also a mature technology, the cost parameters of which are well known and included in the transmission planning analysis process.
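The capacity scaling of the three upgrade paths above can be summarized in a few lines. This sketch follows the text's own simplifications: reconductoring capacity scaling with cross-sectional area (diameter squared), bundling scaling with the number of conductors per phase, and voltage uprating scaling with voltage at the same current:

```python
def reconductor_ratio(diameter_ratio):
    """Capacity multiplier from a larger conductor, using the text's
    simplification that ampacity scales with cross-sectional area (d^2)."""
    return diameter_ratio ** 2

def bundle_ratio(n_conductors):
    """Bundling n identical conductors per phase multiplies current by n."""
    return n_conductors

def uprate_ratio(new_kv, old_kv):
    """At the same current, power transfer scales with line voltage."""
    return new_kv / old_kv

print(reconductor_ratio(1.5))   # 50% larger diameter -> 2.25x, as cited
print(bundle_ratio(2))          # second identical conductor -> 2x
print(uprate_ratio(230, 115))   # 115 kV to 230 kV conversion -> 2x
```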

4.2.4 Gaps: Reconductoring, Bundling and Voltage Uprating

These are all mature technologies, well-known to the utility industry, cost-effective, and widely used. Barriers to wider use include issues of cost, cost recovery, and visual and environmental impacts that lead to intervention in the permitting process by various stakeholders.

Conventional underground transmission lines are constructed with copper wires (conductors) encased in an insulating material such as oil-impregnated paper, inside a pipe-type enclosure (conduit), and buried in a trench under special backfill material to dissipate the heat generated in the cables. The inside of the conduit is filled with an insulating oil similar to that used in transformers, or an insulating gas such as SF6, to provide high dielectric strength (insulating ability) between the copper conductors and the conduit, which is at ground potential. Newer types use polyethylene sheathing as the dielectric material, and do not use oil or gas insulating media. The public generally views underground lines as having far fewer negative impacts than overhead lines, although there are still several difficult issues to address:

  1. Construction costs for an underground line can be up to 10 times the cost of an overhead line of the same capacity, and construction can take much longer.
  2. Underground lines are impractical in mountainous areas, where drilling through rock is required.
  3. The biggest environmental impact will be ground disturbance in the immediate vicinity of the trench during construction, which can be significantly disruptive, albeit temporary.
  4. Access to underground lines is more difficult when maintenance is required, which can lengthen outage times.
  5. Underground lines are more susceptible to damage from construction activities, because they are not visible to crews operating equipment.
  6. Joints in the conduit can leak, spilling oil into the surrounding soil, or releasing the insulating gas (SF6 is about 15,000 times more potent as a greenhouse gas than CO2).
  7. Lengths are limited to about 40 miles between substations, because of the high capacitive reactance of transmission cables.
  8. The main barrier to wider use of underground cables is cost: not just the cost of the cables themselves, but also the costs of constructing the trench for the cable. HDPE technology is helping to make the cable cost itself more reasonable over time, but more economical methods for installing the cable are needed.
  9. The environmental effects of current construction and trenching methods are also significant.

4.4 High-Voltage Direct Current (HVDC) Transmission Technologies

4.4.1 Technology Overview

High-Voltage DC (Conventional) HVDC transmission lines, as they have been typically developed and implemented to date, consist of AC-to-DC converters on the sending end, DC-to-AC converters on the receiving end, and an overhead transmission line or an underground cable system as the transmission path. The converters, which can be considered solid-state transformers, rely on high-voltage, high-power thyristors (semiconductors that are triggered by the AC voltage). Since only two conductors (poles) are needed for DC, vs. three phases for AC, the transmission line, insulators and towers can be more compact and less expensive than AC lines, and less space is needed (and less land needs to be acquired) for the ROW. However, the converter terminals for HVDC are very expensive, being based on high-voltage solid-state electronics and requiring large amounts of AC capacitors at both ends to provide reactive support; thus, intermediate substations for stepping down the voltage add significantly to the cost of HVDC transmission systems. HVDC has traditionally been used when large blocks of power need to be transmitted long distances, and has been used at voltages up to 800 kVDC and several thousand MW of power capability. Historically, the breakeven point for AC-vs.-DC overhead lines has been around 400 miles: HVDC is more economic for transmission distances longer than that (where its lower line costs predominate), and AC is more economic for distances shorter than that (where its lower terminal costs predominate). Underground HVDC cables have an additional advantage over AC cables in that they do not have the problem of AC capacitance; therefore their length is not limited to the 40 miles or so that AC cables are.
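The AC-vs.-DC breakeven distance is just the crossover of two linear cost curves: HVDC has expensive terminals but a cheaper line per mile, AC the reverse. A sketch with purely illustrative cost figures (the dollar amounts below are hypothetical, chosen only so the crossover lands near the ~400-mile figure cited in the text):

```python
def breakeven_miles(dc_terminal_cost, dc_per_mile, ac_terminal_cost, ac_per_mile):
    """Distance d where total HVDC cost equals total AC cost:
    dc_terminal + dc_per_mile * d = ac_terminal + ac_per_mile * d."""
    return (dc_terminal_cost - ac_terminal_cost) / (ac_per_mile - dc_per_mile)

# Hypothetical costs in $M: HVDC converter terminals are far more expensive,
# but its two-conductor line is cheaper per mile than a three-phase AC line.
d = breakeven_miles(dc_terminal_cost=600, dc_per_mile=1.0,
                    ac_terminal_cost=200, ac_per_mile=2.0)
print(d)  # beyond this distance, HVDC's lower line cost predominates
```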

The standard HVDC technology as it has been used to date, e.g., in the Pacific HVDC Intertie, the Intermountain Power Project, and many others, is a mature technology that has been continually refined over the last 50+ years, with virtually no research gaps. It is cost-effective for long-distance bulk power transmission when intermediate substations to serve loads along the transmission route are not needed. However, it is likely to be considered too expensive for new line construction for the anticipated power levels of integrating renewables. Conversion of AC lines to DC lines is fairly straightforward, and most utilities are familiar with the technical and cost issues, as well as when it might be considered a feasible alternative. It has not been done much in the US, for the simple reason that additional ROW and upgraded AC lines have almost always been the feasible alternatives and cheaper than conversion to DC. Now that corridors are getting maxed out, this may be a feasible, albeit more costly, alternative to re-building AC lines or building new ones.

The external barriers to wider use of HVDC technologies are greater than the engineering or technical challenges. Research activities focused purely on technical issues with HVDC technologies are unlikely to make a significant difference in terms of implementing HVDC. The principal stumbling block will continue to be the perceived additional cost per MW of capacity compared to the traditional “least-cost” alternative of overhead AC.

Storage has taken on added importance with the increase of renewables plants, given that the intermittency and variability of renewables increases the complexity of the system operator’s job.

Storage systems have several basic characteristics that can vary depending upon the technology and the desired application:

  • Power capability: how many kW or MW the storage plant can discharge. This is usually a direct function of the electrical generating mechanism, be it a rotating machine or a solid-state electronics interface.

  • Bulk energy: how many kWh or MWh of energy can be stored.
  • Charge time: the number of hours or minutes required to fully charge the system.
  • Discharge time: the number of hours or minutes the system can supply its rated kW or MW output.
  • Efficiency: the ratio of energy discharged to the energy required for charging. Also called “round-trip” efficiency. Most storage systems fall into the 60-75 percent range.
  • Capital cost: the total cost to build a storage plant; it is usually given in terms of the power capability and bulk energy components.
  • Maintenance costs: consist of both $/kW and $/kWh components.
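These characteristics combine into simple sizing arithmetic. A sketch using a hypothetical pumped-hydro plant (the 900 MW figure echoes the Helms capability mentioned earlier; the reservoir energy and pumping energy are invented for illustration):

```python
def discharge_hours(bulk_energy_mwh, power_mw):
    """Hours the plant can sustain its rated power output."""
    return bulk_energy_mwh / power_mw

def round_trip_efficiency(mwh_discharged, mwh_charged):
    """Ratio of energy delivered on discharge to energy consumed charging."""
    return mwh_discharged / mwh_charged

# Hypothetical plant: 900 MW power capability, 7,200 MWh usable storage,
# consuming 10,000 MWh of pumping energy per full charge cycle.
print(discharge_hours(7_200, 900))           # 8 hours at rated power
print(round_trip_efficiency(7_200, 10_000))  # 0.72, within the 60-75% range
```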

One application is the use of storage to provide high-quality and highly reliable electric service for one or more adjacent facilities. In case of an intermittent or extended grid outage, the storage system provides enough energy for some combination of the following: an orderly shutdown of customer processes, transfer of customer loads to on-site generation resources, or high-quality power needed for sensitive loads.

Pumped hydro is very site‐dependent, and most of the best sites are already developed; therefore, it can’t always be located where it’s needed in the transmission system. 

Batteries: The energy density of chemically-based battery systems is not as high as desired, requiring a fairly large footprint for even modestly sized battery systems for utility applications. Costs in both per-kW and per-kWh terms are relatively high. There are also significant maintenance requirements, including periodic replacement of internal components, as well as safety issues with the chemicals involved and limited life expectancy.

The AC electric power system, by its nature, does not have a high degree of controllability, in terms of system operators being able to designate which transmission paths the power flows on. The electric system is a giant interconnected network of generating sources, loads (customers) and the transmission and distribution lines that provide the connections among them all. To a great extent, the power flows on the system are determined by the customer loads and the generators that are on the system at any given time; the power then flows over the transmission and distribution lines as determined by the impedance of the lines and paths and Kirchhoff’s laws.

Because of the numerous parallel paths that power can flow on, the contract path for power, defined as the line or path over which the contracted power from a generator to a load is meant to flow, is not necessarily the only path over which that power will flow. For example, Bonneville Power Administration (BPA) can contract with Pacific Gas and Electric (PG&E) to send 3,000 MW of power over the 500 kV Pacific Intertie, but in reality about 20% of that power can flow through parallel paths on the eastern side of the Western Electricity Coordinating Council (WECC) system. This phenomenon, called loop flow or inadvertent flow, frustrates the efficient exchange of power within transmission grids and between utilities, and can result in diseconomies.
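Loop flow follows directly from Kirchhoff's laws: a transfer divides among parallel paths in inverse proportion to their impedances. A sketch using the DC power-flow simplification (the impedance values are hypothetical, chosen so the split matches the roughly-20% figure in the BPA-to-PG&E example):

```python
def parallel_path_flows(total_mw, impedances):
    """Split a scheduled transfer among parallel paths in inverse
    proportion to path impedance (a DC power-flow simplification)."""
    admittances = [1.0 / z for z in impedances]
    total_y = sum(admittances)
    return [total_mw * y / total_y for y in admittances]

# Hypothetical: a 3,000 MW schedule on the direct intertie, with a parallel
# eastern path of 4x the impedance. About 20% of the power loops through
# the parallel path rather than the contract path.
print(parallel_path_flows(3_000, [1.0, 4.0]))  # [2400.0, 600.0]
```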

For controlling real power, system operators have just a few tools at their disposal. They can adjust the output of generators under their direct control; however, in today’s power markets, this control is diminishing. When lines or paths reach their thermal or stability limits, congestion occurs, and generators are forced to adjust their outputs to relieve the line overloads, with congestion payments both to generators who must curtail and to reliability-must-run generators who must generate in their place. Series capacitors in the transmission lines can be switched in or out to reduce or increase, respectively, the impedance of a line or path, increasing or decreasing the power flowing in that path; this is typically not a real-time control option, as most series capacitors are manually switched, usually on a seasonal basis. Devices called phase-shifting transformers are sometimes used to increase the apparent impedance of a line or path, the objective being to shift power flow from a specific line or transmission path to adjacent circuits or paths. The only other options for controlling real power are to change the configuration of the lines in the system, i.e., switch lines in or out, or to change the bus connection arrangements in the substations; neither of these are generally desirable options, and cannot be done feasibly on a real-time basis.
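The series-capacitor mechanism described above can be illustrated with the same DC power-flow simplification: lowering one path's reactance pulls flow onto it. All numbers here are hypothetical:

```python
def two_path_split(total_mw, x1_ohms, x2_ohms):
    """DC power-flow split of a transfer across two parallel reactances."""
    y1, y2 = 1.0 / x1_ohms, 1.0 / x2_ohms
    total_y = y1 + y2
    return total_mw * y1 / total_y, total_mw * y2 / total_y

# 1,000 MW across two equal 10-ohm paths splits evenly, 500 MW each.
print(two_path_split(1_000, 10.0, 10.0))
# Switching in a series capacitor that cancels 40% of path 1's reactance
# (10 ohms -> 6 ohms) shifts flow onto path 1 (about 625 MW vs 375 MW).
print(two_path_split(1_000, 10.0 * (1 - 0.4), 10.0))
```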

HVDC transmission lines (see section 4.4 on HVDC Transmission Technologies), in contrast to AC lines, have the ability to control their power flow due to the power electronics in the converter stations at the terminals of the lines. There are currently only a few HVDC lines in the Western grid, whose purpose is mainly to transmit large blocks of inexpensive but remote generation to load centers in Southern California.

Asynchronous HVDC links, also called back-to-back HVDC links, are sometimes used to provide control and isolation between utility control areas: the power transfer between the areas can be precisely controlled, the system frequencies of the adjoining systems do not have to be in synchronism with each other, and system disturbances do not propagate through the links as they would through AC lines.

Reactive power is much more controllable than real power. Generating plants have the capability to adjust their volt-amps reactive (VAR) outputs automatically to match the reactive demands of the system. Shunt capacitors and inductors can be installed at any substation, and are switched in and out as needed (not always in real time) to control the voltage profile of the system and adjust to the reactive power demands of the loads on a local as well as system level. Series capacitors also help to control voltage levels by reducing the reactive impedance of transmission lines. Devices called synchronous condensers can provide a measure of dynamic voltage control. Synchronous condensers are rotating synchronous generators without prime movers, and appear as reactive power devices only; by adjusting their excitation systems (voltage to the stator coil) they can either produce or consume VARs. Transmission transformers, for the most part, do not have tap changers and can’t control voltage. Distribution transformers (transmission voltage to distribution voltage) can have some measure of voltage control to adjust to the demands of the loads on the distribution side. Other control methods are used in the context of remedial action schemes to control system stability: generator dropping, load dropping, fast reactor insertion, series capacitor switching, and braking resistors, to name the major ones.

While technologies can be used to bring new or enhanced capabilities to the transmission infrastructure for meeting the three major objectives via new hardware measures, technologies can also bring new capabilities for operating the infrastructure in a reliable, economic and integrated fashion. Indeed, given the additional operating uncertainties that renewable generation will likely add, the new operating capabilities will be a necessity, especially those for real-time and wide-area systems operations. This class of technology generally consists of sensors for detecting and measuring system conditions; communication systems; data management; analysis for monitoring, diagnosis, prediction and decision support; visualization for human interface; and instructions for automation. Much of this technology platform is enabled by an emerging sensing technology known as synchrophasors, or, more commonly, phasors.

In addition to enhancing grid reliability and avoiding major blackout conditions, the KEMA study identified Disturbance Detection, Diagnosis and Compliance Monitoring as a phasor application that offers the potential to significantly reduce the capacity derating of key transmission pathways that are critically important for 33 percent and greater renewables integration. This would seek to analyze PMU data from various locations within the regional power grid to detect, diagnose and mitigate low frequency oscillations and, through improved operating tools, free up significant underutilized transmission capacity for importing renewable power into the state and into major urban areas.

With the addition of 4,500 megawatts of new wind generation in Tehachapi in the 2010 timeframe, Cal ISO and other grid operators are likely to experience periods where electricity production from these wind plants will rapidly decline while simultaneously the load is rapidly increasing. Energy ramps of 3,000 MW per hour or more may occur between 7 AM and 10 AM in the 2010 timeframe, with larger ramps over the longer term as progress is made in pursuing the 33% and 50% renewables goals. Fast-ramping generation, such as hydro units, will be essential for the Cal ISO to keep up with the fast energy changes.

There will be other periods, particularly in the winter months, where large Pacific storms will impact the wind parks and their energy production will rapidly ramp up to full output. The Cal ISO Renewable Integration study recommends the development of a new ramp-forecasting tool to help system operators anticipate large energy ramps, both up and down, on the system. The longer the lead-time for forecasting a large ramp, the more options the operators have to mitigate the impact of the ramp. The Cal ISO report also identifies the need for research to analyze the impact of large central station solar power intermittency in producing large energy ramps, within the context of anticipated wind energy ramps as well as load variations and distributed customer-side-of-the-meter solar photovoltaic (PV), small wind turbines and other distributed energy resources.
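At its simplest, a ramp-detection tool flags hour-over-hour changes in output that exceed an operator-set threshold; a forecasting tool would do the same against predicted rather than measured generation. A minimal sketch (the generation profile and 1,000 MW threshold are hypothetical):

```python
def detect_ramps(hourly_mw, threshold_mw):
    """Flag hour-over-hour changes whose magnitude exceeds the threshold.
    Returns (hour_index, delta_mw) pairs; a negative delta is a down-ramp."""
    ramps = []
    for i in range(1, len(hourly_mw)):
        delta = hourly_mw[i] - hourly_mw[i - 1]
        if abs(delta) >= threshold_mw:
            ramps.append((i, delta))
    return ramps

# Hypothetical morning profile: wind output falling off as load rises.
wind_mw = [4200, 4100, 2800, 1500, 1400]
print(detect_ramps(wind_mw, threshold_mw=1000))  # [(2, -1300), (3, -1300)]
```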

There exists today a wealth of methods for short-term prediction of wind generation. An excellent summary of the state-of-the-art in wind power forecasting is available at the following website:

Future PHEVs are anticipated to have expanded battery power for extended electric-only operation, and presumably they will largely recharge overnight when minimum loads traditionally occur. This situation creates a potentially synergistic relationship between wind and PHEVs, i.e., coordinating PHEV electric demand with wind generation in a “smart” infrastructure can mitigate impacts to the grid. In the simplest instance, PHEV load could be switched off to counter drops in wind generation (similar to demand response), and switched on as wind generation increases.
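The "simplest instance" above amounts to switching a controllable charging block on and off against wind output. A sketch with hysteresis added to avoid rapid cycling (the fleet size and thresholds are hypothetical; real demand-response dispatch would be far more sophisticated):

```python
def phev_charging_mw(wind_mw, fleet_mw, on_threshold_mw, off_threshold_mw, was_on):
    """Switch a controllable PHEV charging block on when wind output is
    high and off when it drops, with hysteresis between the thresholds.
    Returns (charging_mw, is_on)."""
    if was_on and wind_mw < off_threshold_mw:
        return 0.0, False            # wind fell away: shed the charging load
    if not was_on and wind_mw > on_threshold_mw:
        return fleet_mw, True        # wind picked up: absorb it with charging
    return (fleet_mw if was_on else 0.0), was_on

# Hypothetical 500 MW controllable fleet tracking wind output:
state = False
for wind in [800, 2200, 1800, 900]:
    load, state = phev_charging_mw(wind, 500, 2000, 1000, state)
    print(wind, load)  # off at 800; on at 2200; stays on at 1800; off at 900
```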

Typical protection systems utilize digital relays, individually or in combination, to protect valuable assets such as transmission lines or generators. Advanced relays incorporate PMU technology directly into the relay. Transmission lines may incorporate redundant primary and backup relays in complex schemes designed to ensure reliable action. Operation of these systems is programmed based on the expectation of a relatively normal operating configuration.

However, under abnormal conditions, such as can occur during a fault, the relay system may operate, or fail to operate, in a manner which was not intended. During major cascading blackouts, protective relays have either been implicated in increasing the severity of the blackout or in failing to slow or stop its spread. In the August 14, 2003 blackout on the East Coast and the July 2 and August 10, 1996 blackouts in the West, zone 3 impedance relays played a major contributing role, as did many transmission and generation protective relays.

In each of these blackouts, due to an unusual and unanticipated set of circumstances, the EHV transmission grid became configured in highly abnormal operational states that were not anticipated or studied by protection and system operating engineers. These protection systems are almost exclusively local in nature. Wider area protection systems – Remedial Action Schemes (RAS) or Special Protection Schemes (SPS) have been created to provide a variety of system protection actions. As these systems grow in scope and complexity, there is the increasing possibility of unintended consequences. The term “intelligent protection systems” is not precisely defined and can be used to mean any of a variety of related concepts. For this report, the term is used to primarily describe protection systems which use phasor data and are adaptive, i.e. which can monitor conditions in real time, and “intelligently” adapt their operation to reflect actual conditions on the power grid. Ultimately, intelligent wide area protection systems can be seen as “protecting” the system by controlling its operation in such a manner as to prevent faults or instabilities from becoming large scale outages.

Major outages such as described here are sometimes referred to as “Extreme Events,” because of the multiple contingencies that occur, and because they are beyond the ability of planning and operations engineers to foresee, and in many cases, to mitigate once they start. There is research currently underway to develop new methodologies for analyzing extreme events and test the methodologies; first in simple network systems, and next in larger, more complex and realistic network systems, modeling the California grid and its western interconnections.

New Accommodation Dynamic Behavior Capability #3: To operate the grid in response to renewable power plant dynamic behaviors. The increasing penetration of renewables with different types of dynamic behavior increases the risk of serious consequences in response to a transient event. Intelligent protection systems offer the possibility of improved mitigation of the consequences of a fault and reduced likelihood of a fault triggering a cascading blackout.

Traditional utility electric power systems were designed to support a one-way power flow from the point of generation through a transmission system to distribution level loads. These systems were not originally intended to accommodate the back-feed of power from distributed solar photovoltaic, small-scale wind turbines and other distributed energy systems at the distribution level.

Current interconnection requirements for residential net-metered PV systems in California require that the system include a UL 1741 certified inverter (meaning it has been tested against IEEE 929-2000, the Institute of Electrical and Electronics Engineers recommended practice for safe utility interface of generating systems) that will disconnect from the utility distribution system if the voltage or frequency deviates from normal ranges. Disconnect switches must meet Article 690 (solar photovoltaic systems) of the National Electrical Code, published by the National Fire Protection Association. When the utility restores electric service on the distribution circuit, the customer is normally responsible for realizing that the distributed energy system has been disconnected from the grid and taking action to restore normal operation.

The IEEE standards for the inverter, along with system design components such as a lockable disconnect switch, are necessary to prevent “Islanding.”

Islanding refers to a situation where the grid power is down and a customer’s generator is still on, creating the potential for power to feed back into the grid. This would cause an unsafe situation for linesmen working on an otherwise non-electrified portion of the power grid. Owners of grid-tied systems should know that their system’s anti-islanding design also prevents them from having power on-site when the grid goes down.
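The anti-islanding trip behavior described above amounts to a window check on voltage and frequency. A minimal sketch follows; the limits used here are illustrative defaults in the spirit of UL 1741 / IEEE 1547-era settings, not the actual tables from the standards.

```python
# Hedged sketch of inverter anti-islanding trip logic.
# The window limits below are illustrative, not normative values.

V_MIN_PU, V_MAX_PU = 0.88, 1.10   # voltage window, per-unit of nominal
F_MIN_HZ, F_MAX_HZ = 59.3, 60.5   # frequency window, Hz (60 Hz system)

def must_disconnect(voltage_pu: float, freq_hz: float) -> bool:
    """Return True if the inverter must cease exporting to the grid."""
    out_of_voltage = not (V_MIN_PU <= voltage_pu <= V_MAX_PU)
    out_of_freq = not (F_MIN_HZ <= freq_hz <= F_MAX_HZ)
    return out_of_voltage or out_of_freq

# Normal grid conditions: stay connected
print(must_disconnect(1.00, 60.0))   # False
# Grid outage / island forming: voltage collapses, inverter trips
print(must_disconnect(0.40, 59.9))   # True
```

When the grid goes down and the inverter's terminal voltage or frequency drifts out of these windows, the inverter ceases to energize the line, which is exactly what protects line workers but also leaves the customer without on-site power.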

Grid operators are concerned that manual restoration of power production by distributed renewable energy systems may not be a workable approach when a significant amount of the customer end-use electricity load is supplied by these distributed systems.

Based on discussions with grid operators and transmission owners there appear to be two interrelated needs:

  1. There is a need for customer-side-of-the-meter interconnection equipment that will permit the automatic restoration of the operation of distributed energy systems if the voltage, frequency and other operating characteristics of the electricity distribution system are within normal operating ranges.
  2. There is a need for reliable information about the operating status of these distributed energy systems to be readily available to grid operators and utilities, within the overall context of customer loads that will be connected when service is restored. These information needs are one of the important evolutionary features of the smart grid.
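Need #1 above can be sketched as a hold-off timer: reconnect automatically only after grid conditions have stayed within normal ranges continuously for some period. The 300-second delay and the voltage/frequency windows below are assumptions for illustration, not values from any interconnection rule.

```python
# Illustrative sketch of automatic restoration for a distributed energy
# system: permit reconnection only after voltage and frequency have been
# within normal ranges for a continuous hold-off period.
# The delay and windows are assumed values for illustration.

RECONNECT_DELAY_S = 300  # required continuous in-range time, seconds

def seconds_until_reconnect(samples):
    """samples: list of (voltage_pu, freq_hz) taken once per second.
    Returns the sample index at which reconnection is permitted,
    or None if the hold-off was never satisfied."""
    in_range_for = 0
    for i, (v, f) in enumerate(samples):
        if 0.917 <= v <= 1.05 and 59.5 <= f <= 60.1:  # enter-service window
            in_range_for += 1
            if in_range_for >= RECONNECT_DELAY_S:
                return i
        else:
            in_range_for = 0  # any excursion restarts the timer
    return None
```

Note that any excursion restarts the timer, so a distribution circuit that is still bouncing during restoration will not see thousands of PV systems reconnect at once.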

The current status of this research is available at the following website:

Load Management Standards Proceeding; more information is available at

DOE is also actively involved in planning and funding research on smart power grid; more information is available at the following DOE website:

Grid planning and operating decisions rely on simulations of the dynamic behavior of the power system. Both technical and commercial segments of the industry must be confident that the simulation models and database are accurate and up to date. If transfer limits are set using overly optimistic models, a grid operator may unknowingly operate the system beyond its capability, thereby increasing the risk of widespread outages, such as occurred during the summer 1996 outages. If the models are pessimistic, a grid operator may be overly conservative and impose unnecessary restrictions on transfer paths, thereby increasing the risk of power shortages in energy-deficient regions. Realistic models are therefore very important for ensuring reliable and economic power system operation. Because accurate end-use load models and renewable generation models are likely to have a significant impact on the capacity derating of major transmission paths carrying renewable energy into and within California, it is vitally important that these models accurately reflect current conditions as well as future changes over the 2009 to 2030 time frame addressed by the 20 percent, 30 percent, and 50 percent renewables goals.

Uncertainty is a persistent theme underlying virtually every aspect of the transmission planning and grid operations. Traditional power system analysis tools do not directly assess the many, inescapable uncertainties that are inherent in all models and in all data they on which they rely. Responsible users of these tools cannot ignore these uncertainties because they routinely have a major influence on the results.   Common

uncertainties in power system analyses used in transmission planning might include estimates of load growth in time, by region and by end-use composition, potential location and generating capacity of wind, solar, other renewable and central station power generation facilities, retirements or upgrades of existing generating facilities, and likelihood that transmission facilities and substations will be approved and constructed in the future. Common uncertainties in analyzing grid operations might include weather impacts on load and renewable generation output, operational status of various transmission pathways, operational status of power generation facilities, possibilities of unplanned outages of generation and transmission equipment, and real-time actions of market players to maximize revenues or reduce costs in generation or utilization of power. This uncertainty has been compounded by the disaggregation of the vertical structured utility, deregulated power markets, and the increased size of the grid interconnections crossing state and national boundaries.
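The probabilistic treatment the text argues traditional deterministic tools lack can be illustrated with a minimal Monte Carlo sketch: sample the uncertain load and wind output and look at the resulting distribution of the net transfer requirement. All figures below are invented for illustration.

```python
# Minimal Monte Carlo sketch of uncertainty analysis for a transfer path.
# Load and wind distributions, and all MW figures, are illustrative only.
import random

random.seed(0)

def net_import_mw():
    load = random.gauss(10_000, 800)           # uncertain regional load, MW
    wind = max(0.0, random.gauss(2_000, 900))  # uncertain wind output, MW
    local_thermal = 6_500                      # firm local generation, MW
    return load - wind - local_thermal         # must be imported over the path

draws = sorted(net_import_mw() for _ in range(10_000))
p50 = draws[len(draws) // 2]
p95 = draws[int(len(draws) * 0.95)]
print(f"median import {p50:.0f} MW, 95th percentile {p95:.0f} MW")
```

A deterministic study produces one number; the distribution here shows how far above the median a planner must look (e.g., the 95th percentile) to set transfer capability with a stated level of confidence.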

6.2.2 Uncertainty Analysis and Probabilistic Forecasting Tools

Access of Renewable Resources to the Transmission Grid

Meeting the 20 percent, 30 percent, and 50 percent renewables goals will require a substantial amount of new transmission development, as most large-scale renewable resources are located in remote areas rather than near the state’s major load centers. The Energy Commission’s IAP study concluded that, for the 2010 Tehachapi case, 74 new or upgraded transmission line segments would be needed, at a first-order estimated cost of $1.2 billion plus $161 million for transformer upgrades and unknown land-use and right-of-way costs. The 2020 case would require 128 new or upgraded transmission line segments, with just over half (66) needed to serve increasing load requirements. For just the 500 kV and 230 kV additions, the first-order estimated cost would be $5.7 billion. In addition, 40 new or improved transformers would be needed at an estimated cost of $655 million (excluding detailed land-use and right-of-way costs).

Wind generation output varies significantly during the course of any given day and there is no predictable day-to-day generation pattern.

Daily patterns of wind power that exhibit a high degree of variability and uncertainty will likely cause more serious congestion, with greater uncertainty.

The following summarizes one near-term operating scenario of interest to the Cal ISO that might be the focus of research on pattern recognition methods applied to real-time grid operations. The Tehachapi Area is expected to have one of the largest installations of wind generation in the State of California: over 5,600 MW of wind generation, consisting of both traditional induction generators and the latest doubly fed induction generators with power electronics controls, are planned for the area. In addition, the Tehachapi Area has one of the largest water pumping operations in the world; through pumping, water is lifted 3,000 feet over the Tehachapi Mountains to serve the greater Los Angeles Area. The combination of large amounts of wind generation and large pumping operations is expected to severely tax the power grid in Southern California, and the area was therefore selected for analysis in this research. A new 500 kV transmission system is planned for the Tehachapi Area; through this research, the extent to which this new facility improves the statistical distribution of power grid parameters in the area can be validated.

DOE. September 30, 2014. Summary of Discussion U.S. Department of Energy Workshop on Estimating the Benefits and Costs of Distributed Energy Technologies. Department of Energy.

DOE did a study on 30% penetration of wind that showed $143 billion of additional transmission would be needed to meet the additional wind.

PV generation is relatively predictable but it is not necessarily coincidental with peak usage.

For avoided transmission investment, we need to determine the relative coincidence of distributed PV production with peaks on the transmission system.
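The coincidence calculation described above can be made concrete: find the system-peak hour and ask what fraction of PV capacity was producing at that hour. The hourly profiles below are made up for illustration.

```python
# Sketch of PV/peak coincidence: distributed PV output at the hour of the
# transmission system peak, as a fraction of installed PV capacity.
# The toy profiles below are invented for illustration.

def coincidence_factor(load_mw, pv_mw, pv_capacity_mw):
    """PV output at the system-peak hour, as a fraction of PV capacity."""
    peak_hour = max(range(len(load_mw)), key=lambda h: load_mw[h])
    return pv_mw[peak_hour] / pv_capacity_mw

# Toy day: load peaks in the early evening (hour 18), after PV has fallen off.
load = [60, 58, 57, 57, 58, 62, 70, 80, 85, 88, 90, 92,
        93, 94, 95, 97, 99, 101, 104, 102, 96, 85, 75, 65]
pv   = [0, 0, 0, 0, 0, 1, 4, 9, 14, 18, 20, 21,
        21, 20, 18, 14, 9, 4, 1, 0, 0, 0, 0, 0]
print(coincidence_factor(load, pv, pv_capacity_mw=25))  # 0.04
```

With an evening peak, only 4% of PV capacity is present at the peak hour here, which is why PV that is "relatively predictable" can still avoid very little transmission investment.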

The way to look at capacity is through the reliability lens. Once you get to high penetration, reliability starts to decline. The system in Hawaii has become less robust against big transient events, so the utility now has to spend millions to enable the grid to respond to transient events as it did before. Adding flexible generation also adds capacity cost. When penetration levels get significant, huge ramp events can occur for which the system was never designed.

Enabling high penetration of DETs will increase the cost of the distribution infrastructure.

Germany paid 56 cents per kilowatt-hour to incentivize rooftop installation, and they face a price tag of a trillion dollars.

There are costs for wear on assets used in ways for which they were not designed

Grid operators have addressed ramping through the same mundane approach for decades, but with penetration of RE, the cost of dealing with ramping increases.

With increased penetration of variable generation, frequency regulation becomes more of a challenge at the bulk system level. Primary and secondary costs are straightforward. States are having individual issues. Most reliability activities are trans-state, and two interconnections have seen increased degradation at the bulk system level. Some of that is from losing inertia. Frequency regulation at the bulk system level is not a resolved issue and will get more complex.

Distribution system impacts are more discrete, which is both good and bad. Extremely granular data are required, at an overwhelming level. With “dumb” inverters, there is a risk of voltage violations and losses of 10% to 30%. We can avoid overloaded feeders. Avoided capacity also has a potential impact on extending the service life of system equipment.

Even at low penetration rates, DER can cause reliability issues. Mr. Fine showed a chart with possible effects at 10% penetration levels.

The current business development model for customer solar PV in Hawaii is not sustainable due to economic, policy and grid-related technical challenges associated with high solar penetration levels. Customers must recognize that the recent rapid pace of customer solar PV interconnections is not sustainable when grid infrastructure mitigations need to be developed and deployed.

Commissioner Champley discussed lessons learned from the experiences of Hawaii’s utilities. The state has had high growth of residential and other solar photovoltaic (PV) over the last five years and is poised for a major thrust in the development of utility-scale PV. As a result, the state faces a number of significant economic, policy, and grid-related technical challenges. Electrically speaking, Hawaii is a collection of island electric grids. There is no interconnection between islands; each island has effectively become a laboratory for renewable resource integration. The Federal Energy Regulatory Commission (FERC) and NERC have no jurisdiction, so the Hawaii Public Utilities Commission can establish its own rules, within state statutes.

Annual renewable energy output in 2013 ranged from 12% on Oahu (the main population center) to 48% on the main island (Hawaii), and renewable energy growth continues. The state leads the nation in penetration of rooftop PV and, as a result, is at the forefront of the integration challenges associated with high distributed PV penetration levels. By 2017, two islands will have over 75% of daytime system load supplied by distributed and utility-scale solar. Solar has seen exponential growth, though growth slowed in 2014. Hawaii is approaching 50,000 solar customers; over 10% of total residential customers have solar PV. Installed customer solar PV capacity represents roughly 23% of annual system peak load. Average residential customer electricity usage has dropped by about 30% over the last ten years due to customer energy efficiency, conservation, and distributed generation (but grid investment did not shrink 30% and in fact increased during this time). On Kauai Island, solar generation is approaching 50 megawatts (MW), while oil will soon be down to around 10 MW. However, solar energy output contributed only 18% of the daily energy used, due to the limited hours of full solar output. Regarding solar penetration at the distribution level, approximately 50% of all distribution circuits for the Hawaiian Electric Companies have greater than 75% solar PV penetration.

Exponential growth in renewables was market-driven, but if the consequences are not anticipated and addressed proactively, such growth will lead to unintended results. Developing renewables makes sense in Hawaii due to its current dependency on oil for electric generation, but with state tax and rate incentives and no penetration-level check points, growth outpaced the utility’s ability to manage the interconnection queue and grid integration issues. As a result, the residential PV industry in Hawaii faces a boom-bust cycle. Commissioner Champley noted that substantial integration challenges are now emerging that are uniquely associated with incremental additions of utility-scale and distributed solar PV, and that the integration costs of solar may exceed those of other forms of renewables, due to less solar energy output over which to spread fixed integration costs and due to PV’s inherent low capacity factor. Other technical issues include:

  • Many issues have arisen that were not initially evident at lower penetration levels.
  • The size of a customer’s PV grid “footprint” matters when excess solar energy is exported.
  • Bulk power system reliability challenges, not distribution circuit issues, have become binding constraints on the island grids.
  • PV inverters are a crucial part of the distributed solar PV integration equation.
  • Inability to curtail customer solar PV output leads to curtailment of utility-scale renewable projects, to the economic detriment of customers without solar PV.
  • Legacy customer and technology issues are an emerging concern.

Most studies indicate that above 10% energy penetration of distributed PV, the capacity credit and capacity value of additional distributed PV is very low.


CEC. April 2012. Summary of Recent Wind Integration Studies: Experience from 2007-2010. California Wind Energy Collaborative for the California Energy Commission. CEC-500-2013-124.

Transmission studies are often neglected or extremely simplified in current wind integration studies. Detailed transmission studies are necessary before each additional wind plant is installed. Transmission elements must be designed specifically for wind generation to ensure reliability. This would entail an AC transmission analysis, as opposed to the DC analysis common to most integration studies. The AC analysis would likely focus on possible electrical issues such as inertial response, reactive power support, and transient stability. Another important aspect of a transmission study is a land use study, which is necessary to ensure that proposed transmission can actually be built. It would need to consider the arrangement of wind projects to ensure that transmission is appropriately sized and that connections to the system are made in the optimal way.

The expected growth of electric vehicles is another aspect to study in relation to wind generation. Electric vehicles are expected to charge at night when demand is low. This could prove beneficial for wind generation, because wind generation in many areas is at its peak at night. It seems as though electric vehicles will be able to absorb wind energy that would not otherwise be needed. There are several concerns with how this will work in practice, such as what happens if the wind dies.

Also, will electric vehicles start charging all at the same time leading to a sudden load spike? Is there a way for the chargers to be responsive to the power grid?

Wind generation is an intermittent, variable and uncertain generating resource. This uncertainty is an important characteristic in that it is in contrast to conventional generation which is available as needed and controllable.

The increase in variability will require system operators to take more and larger control actions to keep the system balanced.

The uncertainty of wind in the power system is the largest concern. The variability introduced is generally manageable but it is made much worse by the uncertainty. Uncertainty will lead to less efficient operation and can lead to reliability problems. The variability and uncertainty of wind generation will cause operators to increase the amount of ancillary services they procure to keep the system balanced.

The regulation reserve is the most affected because it is primarily charged with managing short-term fluctuations. The amount of additional regulation that systems will need to procure varies greatly between studies. Regulation needs increase with higher penetrations of wind generation. It is important for system operators to quantify regulation needs to ensure the system will have the capability to provide them.
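A common way the studies summarized here size incremental regulation is to combine load and wind short-term variability in quadrature, since the two are largely uncorrelated. A minimal sketch, with illustrative standard deviations:

```python
# Sketch of regulation sizing from combined load and wind variability.
# The sigma values and 3-sigma coverage are illustrative assumptions.
import math

def regulation_requirement(sigma_load_mw, sigma_wind_mw, k=3.0):
    """Regulation capacity covering k standard deviations of combined
    short-term variability (load and wind assumed uncorrelated)."""
    return k * math.sqrt(sigma_load_mw**2 + sigma_wind_mw**2)

before = regulation_requirement(100, 0)    # load variability alone
after = regulation_requirement(100, 60)    # with wind on the system
print(f"{before:.0f} MW -> {after:.0f} MW "
      f"(+{after - before:.0f} MW of regulation)")
```

Because the sigmas add in quadrature rather than linearly, the incremental regulation from wind is much smaller than wind's standalone variability at low penetration, but it grows faster as wind's sigma approaches and exceeds the load's. This is one source of the non-linear integration costs the studies report.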

There are many possible ways that wind generation can impact the power system costs. It can affect the energy costs, ancillary service costs, unit commitment costs, congestion costs, uplift costs, transmission costs, and so forth.

The third strategy for managing integration is increasing diversity. Diversity can be increased in a number of ways. Building wind generation in different resource areas is one way to increase diversity. Constructing sufficient transmission to ensure wind power can be moved where it is needed is another. Combining control areas is another or increasing the cooperation between areas. Increasing cooperation would involve increased scheduling frequency across inter-ties and sharing of renewable energy data.

The uncertainty of wind in the power system was the largest concern. The variability introduced was generally manageable but it was made worse by the uncertainty. Uncertainty made it much more difficult to plan generation schedules in an optimal way. The variability and uncertainty of wind generation will cause operators to increase the amount of ancillary services they procure to keep the system balanced. Ancillary services are a subset of a group of services that are necessary to maintain operation of the power grid. They are used to maintain short-term balance of the system and to recover from unexpected outages. Ancillary services included operating, contingency, and regulating reserves. The regulation reserve was the most affected because it was primarily charged with managing short-term fluctuations. The amount of additional regulation that systems will need to procure varied greatly between studies. Regulation needs increased with higher penetrations of wind generation. It was important for system operators to quantify the regulation needs to ensure that the system would have the capability to provide it.

Determining the costs of wind integration was one of the main goals of many studies. These studies used a wide range of methods and assumptions to determine costs. There were many possible ways that wind generation could affect power system costs. Wind generation can affect energy costs, ancillary service costs, unit commitment costs, congestion costs, uplift costs, and transmission costs, among others. Direct comparisons of wind integration costs were difficult because different studies chose to include different factors when making cost calculations. Studies found wind can reduce energy cost by displacing more expensive generation. These savings may be offset by higher costs introduced from other elements such as increased ancillary service costs. The extra cost estimates ranged from $0/megawatt hour (MWh) to $9.35/MWh.
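The way studies back out a $/MWh integration figure can be sketched directly: compare total production cost of an actual run (wind with its variability and uncertainty) against a reference run, then divide the difference by the wind energy delivered. The dollar figures below are invented for illustration.

```python
# Sketch of a wind integration cost calculation, per MWh of wind energy.
# All dollar and energy figures are illustrative, not from any study.

def integration_cost_per_mwh(cost_actual_usd, cost_reference_usd,
                             wind_energy_mwh):
    """Extra system cost attributable to wind variability and uncertainty,
    expressed per MWh of wind energy delivered."""
    return (cost_actual_usd - cost_reference_usd) / wind_energy_mwh

# e.g. $38M actual production cost vs $31M in a reference run where wind
# is treated as perfectly predictable, over 2 TWh of wind energy:
print(integration_cost_per_mwh(38e6, 31e6, 2e6))  # 3.5 ($/MWh)
```

The wide $0 to $9.35/MWh range quoted above arises largely because studies disagree on what belongs in the "actual" and "reference" runs, i.e., which cost components to include.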

Various studies recommended many ways to successfully integrate wind power into the system. The recommendations fell into three basic categories: reducing uncertainty, increasing flexibility, and increasing diversity. Reducing the uncertainty of wind generation was the primary method recommended to facilitate integration. Forecasting for wind generation was the most important strategy for integrating wind into the power grid. Forecasting reduced the uncertainty of wind directly and could potentially result in very large savings for the power system. Forecasts would be designed for each area to fit current operating practices. These forecasts could provide insight into the expected level of generation and variability that wind power will introduce into the system, which will give operators the ability to make adjustments or procure extra capacity as needed. Increasing the flexibility of the power grid was the second strategy for managing wind integration. Increasing the amount of ancillary services, specifically regulation, was a common tactic to increase the system’s flexibility. This would literally increase the amount of capacity that is tasked with following variations between the load and generation. Other methods of increasing flexibility were also suggested but were more dependent on system and operating practices. Increasing diversity was the third strategy for optimally managing integration. Diversity can be increased in a number of ways. Building wind generation in different resource areas was one way to increase diversity. Constructing sufficient transmission to ensure wind power can be moved to where it is needed was another. Combining control areas or increasing the cooperation between areas was another viable strategy. Increasing cooperation would involve increased scheduling frequency across entities and sharing renewable energy data.

Generation is the most controllable element and is relied upon to maintain the balance as usage changes. Too much generation can cause components to overload or burn out. Too little generation will lead to brownouts or blackouts. Load or generation can change rapidly and unexpectedly; as a result, sufficient flexibility must be maintained to quickly rebalance the system. Wind generation, with its intermittent and variable nature, adds another source of variability to balance with controllable resources. Reliable operation of the power system is critical, and maintaining reliability is the primary focus of system operators.

There are six reliability regions within the Eastern Interconnection and one each within the Western Interconnection and the ERCOT Interconnection.

Within the reliability regions are balancing areas. There are over 100 balancing areas within the United States, ranging in size from individual cities, such as Sacramento, to areas that cover several states, such as the PJM interconnection. The balancing areas are responsible for controlling the generation within their area and coordinating with neighbors to control their inter-ties.

1.2.1 System Control. Reliable operation and planning in power systems require consideration of a wide range of timescales. Resource adequacy and capacity planning take place on scales of one year to several years; this includes transmission and generation siting, sizing, and construction. On shorter timescales, in the range of days to months, maintenance planning is done: generation and transmission facilities plan scheduled maintenance far in advance and coordinate with other facilities to minimize grid disturbance. In the range of hours to days, the unit commitment and scheduling processes are carried out; in these processes, generation is selected to serve the forecast load. To adapt to forecast errors or unplanned events, generator dispatch is done on the minutes-to-hours timescale. Automatic Generation Control (AGC), which dispatches generation automatically to keep the system balanced, operates on the seconds timescale. A number of other automatic controls, including generator governors, automatic voltage regulators, power system stabilizers, and special protection and remedial action schemes, operate on the milliseconds-to-seconds timescale. Most planning is done on the longer time frames from years to days, while operations time frames range from seconds to days.

Committing generation to serve load is a very important process for reliability and for minimizing system costs. Generators must be committed in advance of their scheduled operation because it can take many hours for them to start up. If too much generation is committed, it is costly and inefficient, and in extreme cases it can overload system components. If not enough generation is committed in advance, other power will have to be procured or blackouts risked. The resource pool for procuring more generation is diminished because there is insufficient time for many units to respond; the units capable of responding are likely to be expensive gas turbine units. Both under-commitment and over-commitment of generation can lead to higher energy costs.
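The core trade-off described above can be sketched as a toy merit-order commitment: commit the cheapest units first until capacity covers forecast load plus a reserve margin. Real unit commitment also handles start-up times, minimum run levels, ramp rates, and network constraints; this shows only the core idea, with invented units and costs.

```python
# Toy merit-order unit commitment sketch. Unit names, capacities, and
# costs are invented; a real UC is a far richer optimization.

def commit(units, forecast_mw, reserve_margin=0.07):
    """units: list of (name, capacity_mw, cost_usd_per_mwh).
    Commit cheapest units until capacity covers load plus reserve."""
    target = forecast_mw * (1 + reserve_margin)
    committed, capacity = [], 0.0
    for name, cap, cost in sorted(units, key=lambda u: u[2]):
        if capacity >= target:
            break
        committed.append(name)
        capacity += cap
    return committed, capacity

units = [("coal", 500, 25), ("ccgt", 400, 35),
         ("peaker", 150, 90), ("ct2", 150, 95)]
print(commit(units, forecast_mw=900))
# commits coal, ccgt, and peaker (1050 MW against a 963 MW target)
```

Over-forecasting load (or under-forecasting wind) commits the expensive peaker unnecessarily; under-forecasting leaves only the fast but costly units available to fill the gap, which is the cost asymmetry the paragraph describes.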

Dispatch of generation is another important part of operating the power system. In dispatch, the units that are committed are given schedules to follow. There are three basic categories of generation, which determine the extent to which units are dispatched: base load, intermediate load, and peaking generation. Base load generation typically operates at its forward schedule and is rarely dispatched away from that point. Intermediate generators perform most of the changes in output; they will typically ramp to minimum at night, or shut off, and then ramp up with load the next day. Peaking generation is started and used only for extreme conditions. Forward scheduling is done along with the unit commitment process and accounts for the majority of energy schedules. Dispatch of generation away from its forward schedule makes up a small part of the overall energy flows; dispatch away from forward schedules to reflect forecast error is usually called load following.

Load following is an important consideration in many studies, because wind provides uncertainty and variability to the system that needs to be balanced. Keeping the power system balanced and reliable requires more than adjusting the supply energy. Reliability-related services are a group of services that are necessary to maintain operation of the power grid.

There are a wide range of reliability-related services that vary by region. Reserves, regulation, voltage support, and black-start capability are all examples of reliability-related services. Ancillary services are a subset of reliability-related services that includes operating, contingency, and regulating reserves. Ancillary services are used to maintain the short-term balance of the system and to recover from unexpected outages. Providing ancillary services reduces the amount of energy a generator can supply. Operating reserves are made up of unloaded generating capacity that is synchronized to the power grid and capable of responding within a certain amount of time. Operating reserve is a very broad term which includes the ability to provide spinning reserve, regulation, supplemental reserve, and load following. Contingency reserves are power system reserves that can be called on to respond to a contingency event, or interruptible loads that will reduce consumption.

Power systems maintain a few dedicated operating and contingency reserves to meet their reliability needs. They purchase these reserves from generators, who hold that capacity back in case it is needed. Spinning reserve is a common ancillary service used as an operating and contingency reserve. Systems procure an amount of spinning reserve which is synchronized to the power grid and available within 10 minutes. Non-spinning reserve is another type used for contingency reserves: offline capacity that must synchronize and deploy within 10 minutes. The levels of spinning and non-spinning reserves that systems maintain are related to the system size, the largest single contingency, and the makeup of the generation fleet.

The amount of reserves a system maintains depends primarily on system size. All systems maintain some degree of all the reserves but have a wide variety of mechanisms for procuring and implementing them. The power grid must be managed so that a single contingency will not affect the security of the grid. NERC has specific requirements for the amount of spinning and non-spinning reserve that must be maintained by a balancing authority; the requirements depend on system size, contingency size, and the type of generator resources. The system must be able to recover from a contingency within a certain amount of time to be prepared for the next one. Spinning and non-spinning reserves are deployed following contingencies and are not used during normal system operation.
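The single-contingency sizing logic above can be sketched simply: hold at least the most severe single contingency, split between spinning and non-spinning reserve. The 50/50 split used below is a common convention, taken here as an assumption rather than a quotation of any specific NERC requirement.

```python
# Sketch of contingency reserve sizing around the largest single loss.
# The 50% spinning split is an assumed convention for illustration.

def contingency_reserve(largest_unit_mw, spinning_fraction=0.5):
    total = largest_unit_mw                 # cover the single largest loss
    spinning = total * spinning_fraction    # synchronized, 10-minute response
    non_spinning = total - spinning         # offline but quick-start capable
    return spinning, non_spinning

spin, non_spin = contingency_reserve(largest_unit_mw=1_150)
print(f"spinning {spin:.0f} MW, non-spinning {non_spin:.0f} MW")
```

This is why reserve requirements scale with the largest generator or transmission element on the system rather than with average load.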

Wind generation is a variable, intermittent, and uncertain resource.

Another important factor to consider with wind generation is the location of the resource.

Intermittency describes the wind’s tendency to come and go: available to produce electricity at some times and unavailable at others. Most areas have distinct weather patterns for when the wind blows. California tends to have a diurnal wind pattern, with the period of strongest wind occurring at night and lower winds during the day. In addition to the diurnal pattern, there are also seasonal patterns, with the most productive periods occurring in spring and summer and the fall and winter being less productive. Other areas may have significantly different wind patterns. These wind patterns can be very important if wind power is playing a large role in supplying electricity. Intermittency is in contrast to most conventional generation, which has a fuel that can be stored and used on demand. Intermittency can affect the system’s resource adequacy calculations. System operators will need to determine whether wind is likely to be available during system peaks or whether other generation will need to be available.

In addition to this longer-term intermittency, wind power is also variable in nature. Variability describes the wind’s tendency to change speed as it blows. The variability of the wind can occur in seconds as gusts blow through, or over longer time frames as regional weather patterns change. Because wind generation is weather dependent, it is sensitive to the fluctuations of the weather. This variability is in contrast to most conventional generation, which can generally choose its desired generation level and maintain a steady output. Wind variability gives operators another source of variability to consider beyond the load.

Uncertainty of wind generation is caused by the intermittency and variability of wind, though it poses a different set of challenges. Uncertainty relates to unknown future wind conditions. Even with fairly repeatable weather patterns, prediction is not an exact science. The output of wind generation is unknown ahead of time; forecasts can help bound the problem and often give quite good estimates, but compared with generation that can predictably operate at a set output, wind generation presents a challenge.

The electrical generator characteristics of wind generation also differ from conventional generation. Conventional plants use synchronous generators, which operate at fixed speed to produce power. Synchronous generators provide inertia for the grid; the electro-mechanical link helps resist changes in system frequency. Wind generation, however, has typically used induction generators (IG). The electrical differences of wind generation are primarily a concern for transmission design, which typically is not covered in recent wind integration studies.

The location of wind generation is another difference from most conventional generation. With fossil fuel generation it is possible to transport fuel to the generator, so siting is not as constrained by resources. Wind resources are often located far from load centers and far from main transmission pathways. While conventional generators can be located much more flexibly on the transmission system, wind generators are limited to where there is sufficient resource. Remote resources often require large transmission upgrades to connect wind to the system, and those upgrades have a set of design considerations specific to wind generation.

Wind generation in the power system is described as having a penetration level, and there are several ways to define the relative amount of wind generation in a system. The energy penetration is the ratio of energy produced by wind generation to the total demand over the same period, typically expressed on an annual basis. RPS standards are usually defined in terms of energy penetration.

Capacity penetration is the ratio of the installed capacity of wind generation to the historical peak demand of the system. Finally, the instantaneous penetration is the ratio of wind energy production to system demand at a given moment. Instantaneous penetration can also be calculated over short periods of time, for example the hourly time step of a production cost simulation. While penetration is a good way to compare systems of different sizes, there are often significant differences between systems that may drive the impacts to be significantly different at similar penetration levels.
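The three penetration metrics above are simple ratios; the sketch below computes each one with made-up numbers (none of these figures come from the studies discussed here):

```python
# Illustrative calculation of the three wind penetration metrics.
# All input numbers are invented for the example.

annual_wind_energy_mwh = 8_000_000   # wind energy produced over the year
annual_demand_mwh = 40_000_000       # total system demand over the year
installed_wind_mw = 3_500            # installed wind capacity
historical_peak_mw = 9_000           # historical system peak demand
wind_now_mw = 2_100                  # wind output at a given moment
demand_now_mw = 5_000                # system demand at that moment

energy_penetration = annual_wind_energy_mwh / annual_demand_mwh    # 0.20
capacity_penetration = installed_wind_mw / historical_peak_mw      # ~0.39
instantaneous_penetration = wind_now_mw / demand_now_mw            # 0.42

print(f"Energy penetration:        {energy_penetration:.0%}")
print(f"Capacity penetration:      {capacity_penetration:.0%}")
print(f"Instantaneous penetration: {instantaneous_penetration:.0%}")
```

Note that the three numbers can differ substantially for the same system, which is why studies are careful to state which definition they use.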

With higher penetrations of wind generation, unit commitment algorithms take wind forecasts into account to avoid over-committing generation, which could risk over-generation conditions. Generator schedules will be affected as more expensive generation is displaced by less expensive wind energy. The load following process will need to adjust for the load and wind together rather than the load alone.

Among renewable resources, wind is generally the largest contributor to these integration concerns, but solar, geothermal, and hydro can all factor in.

The size of the system is one of the critical factors. Larger systems often have an easier time incorporating wind because they take advantage of aggregation: load variability scales more slowly than the load itself. Larger systems also tend to have a larger and more diverse generation fleet. The physical infrastructure of the power grid is an important consideration as well. The makeup of the generator fleet can also affect the studies. Systems that are hydro-dominated often have many fast-moving generators available; coal, natural gas, and hydro generators have different characteristics, and the overall system capabilities will depend on the mixture of generators. The operations of the system are also important: whether it is a market system, what the scheduling periods are, and so forth. Some systems are net importers and others net exporters, and the ERCOT system is essentially an islanded system.
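The aggregation benefit can be made concrete with a standard statistical argument: if N wind sites are independent with the same mean output and standard deviation, their combined standard deviation grows like the square root of N while combined output grows like N, so relative variability falls as 1/sqrt(N). A sketch with assumed single-site numbers (real sites are partially correlated, so the benefit is smaller in practice):

```python
# Why larger systems have an easier time: for N independent wind sites with
# mean output mu and standard deviation sigma, aggregate std dev grows like
# sqrt(N) while aggregate output grows like N, so relative variability falls
# as 1/sqrt(N). Single-site numbers below are assumed for illustration.
import math

mu, sigma = 100.0, 30.0  # MW mean and MW std dev of one site (assumed)

for n_sites in (1, 4, 16, 64):
    agg_sigma = sigma * math.sqrt(n_sites)   # independent sites add in quadrature
    rel_var = agg_sigma / (mu * n_sites)     # std dev per MW of mean output
    print(f"{n_sites:3d} sites: relative variability {rel_var:.1%}")
```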

Transmission and Reliability. Strong wind resources are often located far from the population centers that consume the bulk of the electricity, so transmission is required to move the energy to where it will be used. Transmission can be one of the most expensive components of integrating wind. Older wind integration studies focused a great deal on the transmission design needed to accommodate wind; the need for transmission analysis in a wind integration study has diminished thanks to lessons learned from previous studies and new wind turbine technology that eliminates some problems. Transmission remains important for wind integration even if it is no longer a prominent focus. The primary transmission considerations for wind resources are sizing, voltage regulation, reactive capability, grid disturbances, control, and frequency response.

There are a few reasons the design of transmission facilities takes a different approach when it comes to integrating wind generation. Transmission facilities include not only transmission lines but also transformers, capacitors, and other hardware. The intermittent nature of wind is one concern: wind generates below its rated power most of the time, so lines may not need to be sized for full rated delivery.

Another issue with transmission design is the location of wind resources. Strong wind resource areas are often located far from load centers, in weak areas of the power grid. Transmission lines for wind may be trunk lines that connect radially to the power grid and therefore have no alternate routes in case of an outage. More recent integration studies haven't emphasized the transmission component as much as in the past; they assume that transmission will be built or upgraded as necessary to accommodate the new wind generation, and that the changes in operating characteristics are more important to focus on.

Systems with more frequent scheduling will have an easier time adjusting to changes than those with longer scheduling blocks.

Use of the hydro system in California will need to increase roughly 50% from current practice to accommodate load growth and wind. The study estimates the additional regulation needed to be 20 MW on a base of 350 MW with 20% renewable generation. Increases in load following capability will also be needed, an increase of about 10 MW/minute, to 130 MW/minute, and the added load following capability needs to be sustained for 5 minutes.

20% Wind Energy by 2030 – July 2008. Performed by the U.S. Department of Energy, this study takes a broad look at the issues the country would face if it were to try to supply 20 percent of electric energy demand from wind power by the year 2030. It is very broad and includes sections examining turbine technology, manufacturing processes, materials, resources, and equipment and O&M costs. It is not a typical wind integration study that looks at the operating changes for specific wind scenarios; rather, it gives very good information about all aspects of wind generation and how it may be able to contribute in the future. The study takes a balanced view of wind plant siting and potential environmental effects, examining the impacts wind generation could have on greenhouse gas emissions, water conservation, energy security and stability, and costs, while also considering potential negative environmental costs such as bird kills and noise.

The study looks at the transmission requirements for integrating wind power throughout the U.S. It takes a national view of the best resource locations and the load centers and considers how wind can best be moved around, evaluating a possible design of 12,650 miles of new transmission at a cost of $60 billion. The study includes analysis of distributed wind as well as offshore wind energy, and it reviews wind integration studies from 2006 and earlier, using them as a basis for analysis. The study concludes that the U.S. possesses sufficient resources to supply 20% of its electricity needs from wind energy by 2030. Doing so would require 300 GW of installed wind capacity, compared to the roughly 11 GW installed by 2006. This would decrease greenhouse gas emissions by 825 million metric tons annually and reduce the electricity sector's water use by 8 percent (4 trillion gallons). The predicted cost differential is a modest 2% increase over a conventional generation build-out. In real dollars it is still a significant sum of $43 billion, but spread over the total generation it represents an increase of only $0.0006 per kWh.

Eastern Wind Integration and Transmission Study – January 2010. The Eastern Wind Integration and Transmission Study (EWITS) looks at the Eastern Interconnection in the US, the largest system considered, with a studied system peak load of 530 GW. The study considers four transmission scenarios that are primarily made up of ultra-high-voltage lines running from the Midwest to the Northeast. The scenarios consist of three different 20% penetration build-outs and one 30% penetration build-out in the year 2024. The 20% wind scenarios consider different utilizations of resources: one emphasizes high-capacity-factor sites, another local resources, and a third more offshore development. The study uses three base years, 2004 through 2006, for the input data sets. These scenarios are compared to a reference case that includes current development and some near-term development.

The results show that 20% and 30% wind energy penetrations are possible in the Eastern Interconnection but will require significant new transmission. Without new transmission, substantial curtailment of wind would be required, so much so that all cases need some amount of new transmission. The study calculates wind integration costs that include transmission and wind capital costs as well as the cost of operating changes. Production costs decline with increased wind penetration, though overall wind integration costs increase with penetration, due largely to the capital costs of transmission and wind generation. Very large increases in regulation will be needed; the regulation changes are calculated for each balancing area individually, and regulation increases exceed 1,000 percent in some areas. Integration costs range from $5.00 to $6.68 per MWh of wind production.

The ERCOT study shows that energy from combined cycle units is offset the most by additional wind generation. Energy from coal is also slightly reduced. Interestingly, energy from gas turbines increases except at the highest penetration of wind generation, likely because of the flexibility gas turbines have: they are able to respond quickly to make up for variability and uncertainty. Similar studies have similar results; which resources are displaced is largely a function of the system configuration, though increased wind generation tends to displace the most expensive units.

EWITS has some very interesting results for how wind affects the energy from different sources. The study shows that the amount of forecast error changes the energy produced by each source: increasing forecast error causes energy to shift from less flexible sources such as coal to more flexible sources such as combined cycle units and gas turbines. Reductions in coal energy were in the range of 3 to 4 percent for the cases with forecasts versus the perfect-forecast cases, while combined cycle generation increases roughly 20% and energy from gas turbines increases 20 to 30 percent. It should be noted that coal energy is roughly 15 times that of combined cycle, while gas turbine energy is roughly 25% of the combined cycle generation.

Reserve Requirements. Determining the proper level of ancillary services, or reserves, required at different wind penetrations is one of the main concerns addressed in each study. There are two questions about reserves: how much reserve is needed, and whether the system has the capability to provide it. The system's ability to provide reserves will also depend on the displacement of other generation. If the uncertainty and variability of wind generation is significant, reserve requirements could increase beyond the system's ability to provide them, so system planning will likely need to account for the ability to provide reserves as well as energy. Regulation is the reserve requirement most affected; it is also the most expensive reserve because it is the most flexible. Spinning reserve could be affected if the wind were concentrated enough to represent a credible contingency. Some systems also carry a replacement reserve product, which could be dispatched either in a contingency or when there are significant schedule deviations; these other reserves can be affected as well. Methods for determining regulation reserve requirements vary significantly between the studies: some rely on statistical techniques to estimate regulation, while others use operational models.

Both the Montana and CAISO studies employ techniques that measure the regulation requirements based on the expected system needs caused both by the variability of the wind power and by the uncertainty in short-term forecasts. This is in contrast to some of the other studies, whose methodologies implicitly assumed perfect forecasts in the short term and therefore measured only the variability component. The CAISO study estimates that regulation with 7 GW of wind capacity would need to increase by 100-500 MW, depending on the hour and season, to maintain the same performance; California normally maintains 350 MW of regulation. The Montana study estimates a 0-241 MW increase in regulation needs for up to 1,450 MW of added wind, a 1- to 3.84-fold increase in the procurement of regulation.
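One common statistical technique for sizing regulation, used in several of these studies in some form, treats short-term load and wind deviations as independent so they combine in quadrature; wind then adds less than its standalone requirement. A sketch with assumed sigma values (not taken from any of the studies above):

```python
# A common statistical sizing approach: if short-term load and wind deviations
# are independent, their standard deviations add in quadrature, and the
# regulation requirement is a coverage factor k times the combined sigma.
# Sigma values below are assumed for illustration.
import math

sigma_load = 110.0   # MW std dev of short-term load deviations (assumed)
sigma_wind = 60.0    # MW std dev of short-term wind deviations (assumed)
k = 3.0              # coverage factor, roughly three-sigma coverage

reg_load_only = k * sigma_load
reg_combined = k * math.sqrt(sigma_load**2 + sigma_wind**2)

print(f"Regulation, load only: {reg_load_only:.0f} MW")   # 330 MW
print(f"Regulation, load+wind: {reg_combined:.0f} MW")    # ~376 MW
print(f"Increment due to wind: {reg_combined - reg_load_only:.0f} MW")
```

The quadrature sum is why the increment (about 46 MW here) is much smaller than three times the wind sigma alone (180 MW).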

Load Following and Ramping. Load following is another aspect often considered with wind integration. It is fairly loosely defined and can vary quite a bit between regions; generally speaking, load following is the dispatch of generation necessary to keep the system balanced, measured as the difference between the forward schedule of generation and the dispatch. It has typically been used to make up for load forecast errors and for the natural differences that occur when scheduling is done in hourly blocks, since using regulation for these larger and longer-term changes is expensive.

Ramping is closely related to load following. Most systems have peak load during the day and minimum load at night. To match the load, the generation in the system ramps up in the morning as the load rises to the peak, then ramps down in the evening toward the minimum load. The magnitude, rate, and duration of these ramps are important to keeping the system balanced, and the generation on the system must have sufficient flexibility to meet the demands of the ramps on a system-wide basis. Wind generation can change the ramp the system perceives, because generation must follow the net ramp on the system. In many regions wind has a diurnal pattern that is out of phase with load: wind generation peaks at night and is at a minimum during the day. This has the effect of increasing the needed ramping as other generation is used to balance the wind.

The CAISO study presents extensive analysis of load following and ramping concerns, using both statistical and operational models. A statistical model considers the potential impacts of wind generation on the morning and evening ramps, with a methodology designed to look at extreme ramps that may potentially occur. The data is separated into seasons, and the maximum seasonal ramps are calculated for load alone and for load minus wind. The CAISO analysis examines the expected maximum net load ramps during the shoulder hours, the hours of the morning when load rises rapidly and the hours of the evening when it declines. It shows that the maximum net ramp could increase over 30% from the baseline values; this result represents the extreme combination for each season and is amplified by the consistent diurnal pattern of wind generation, which is opposite the load shape. The WWSIS uses a similar analysis and shows the largest net ramp increasing 50% from the baseline at the 30% penetration level. The CAISO load following methodology is an operational model that considers short-term dispatch for a simplified system, accounting for net load changes and short-term forecast error. The load following study estimates that the amount of load following needed increases roughly 800 MW from a base of 2,200 MW. A sensitivity analysis with a modest decrease in forecast error shows that forecast improvements can reduce the additional load following requirement by about 50%.
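The ramp comparison above reduces to a simple calculation: take the largest hour-to-hour increase in load alone and in load minus wind. A sketch of a morning rise with invented numbers, where wind falling off as load rises steepens the net ramp:

```python
# Sketch of the net-load ramp analysis: compare the largest hourly ramp of
# load alone against load minus wind. Wind that fades as morning load rises
# steepens the net ramp. All values are invented for illustration.

load_mw = [6000, 6600, 7400, 8100, 8600, 8800]   # morning load rise, hourly
wind_mw = [1500, 1300, 1000,  700,  500,  400]   # wind falling off overnight

net_mw = [l - w for l, w in zip(load_mw, wind_mw)]

def max_ramp(series):
    """Largest single-step increase in the series, in MW per hour."""
    return max(b - a for a, b in zip(series, series[1:]))

load_ramp = max_ramp(load_mw)   # 800 MW/h
net_ramp = max_ramp(net_mw)     # 1100 MW/h
print(f"Max load ramp: {load_ramp} MW/h, max net-load ramp: {net_ramp} MW/h")
```

In this invented case the net ramp is about 38% larger than the load-only ramp, the same order as the "over 30%" increase the CAISO analysis reports.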

Each new facility, when built, will be subject to a transmission study, which will determine in detail the needed transmission and the expected impacts. The studies that have included some transmission analysis have shown that new transmission is useful and sometimes necessary; one interesting pattern is that extra-high-voltage transmission lines are often called for in relation to wind. On the generator side, modern turbines have moved beyond the standard induction generators (IG) of early wind plants to doubly fed induction generators (DFIG). DFIGs are variable-speed generators, allowing them to operate more efficiently over a wider range of wind speeds, and they can control the amount of reactive power consumed or supplied much like a traditional generator. Another common generator for a modern turbine is a full conversion system, in which the generator produces AC power that is converted to DC and then back to grid-synchronized AC power. These AC-DC-AC systems have the benefits of the DFIG design over the standard IG and are also capable of providing inertial response. These new turbine generator systems have alleviated much of the worry with respect to the electrical connection.

Fault tolerance was another area of concern for wind generation on the transmission system. Older wind turbine generators were not fault tolerant and often dropped off line during grid disturbances. Operators encouraged this behavior in the past, when wind penetration was small, because the grid would lose only a small amount of generation that could easily be replaced. As wind penetrations increase, the potential disruption from all the wind dropping off line at once becomes a large concern. Grid codes in many areas now require wind generation to have low voltage ride-through (LVRT) capability, so that turbines stay connected through grid disturbances rather than tripping off line.

Voltage regulation is another concern for wind generation, related to the characteristics of the IG: in order to generate, an IG consumes reactive power, which affects the local voltage. Wind generation has evolved beyond the simple IG to the doubly fed induction generator (DFIG) and full AC-DC-AC conversion. These newer generators do not consume reactive power the way older models did and are able to support voltage; many can even help support voltage when there is insufficient wind to generate power. Supporting the local voltage is important for generation, particularly if there is no other generation in the area.

Frequency response, or inertial response, is another area of concern when integrating wind into the power system. When a large fault happens, the system becomes suddenly imbalanced and the frequency changes as a result. The large amount of rotating mass behind conventional generators helps arrest the frequency change, keeping it in a manageable range while the system is rebalanced. Some wind generators do not provide inertia this way because of their generator types. If large amounts of wind replace conventional generation, care will be needed to make sure frequency changes remain manageable. Conventional generators can also have frequency-responsive governors, which provide an injection of power to counteract a sudden frequency dip. Since wind generation attempts to make maximum use of the available wind, it may not be able to respond to frequency dips. If wind has displaced significant conventional generation, there is concern that a frequency dip may not be arrested, which could cause a cascading blackout as generators shut down to avoid the damage caused by generating under frequency. This is still an area of investigation as national standards for frequency response develop, and modern wind turbines do have some ability to provide inertial response.

Modern supervisory control and data acquisition (SCADA) systems have also helped improve integration. They allow real-time detailed measurements of wind plants to be visible to operators, and they allow wind plants to control their output, where historically output was subject only to the weather. SCADA systems combined with improvements in wind generators can help eliminate many of the transmission and reliability concerns associated with large penetrations of wind power.
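The inertia concern can be put in rough numbers with the swing equation: the initial rate of change of frequency (RoCoF) after losing a block of generation is f0 times the power imbalance, divided by twice the system inertia (inertia constant H times the online synchronous MVA). A back-of-the-envelope sketch, with all numbers assumed:

```python
# Back-of-the-envelope RoCoF from the swing equation:
#   RoCoF = f0 * dP / (2 * H * S)
# Displacing synchronous units with wind that provides no inertia lowers the
# effective system inertia constant H and steepens the initial frequency dip.
# All numbers are illustrative assumptions, not from any cited study.

f0 = 60.0      # Hz, nominal frequency
dP = 1000.0    # MW of generation suddenly lost
S = 50_000.0   # MVA of online synchronous capacity (assumed)

for H in (5.0, 3.0):  # system inertia constant in seconds
    rocof = f0 * dP / (2 * H * S)
    print(f"H = {H} s -> initial RoCoF = {rocof:.2f} Hz/s")
```

Cutting H from 5 s to 3 s here raises the initial RoCoF from 0.12 Hz/s to 0.20 Hz/s, which is the kind of change operators must plan around.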

There are three basic strategies for managing wind integration: reduce uncertainty, increase flexibility, and increase diversity.

Wind forecasting is seen as one of the best ways to reliably accommodate wind power.

Curtailment. Another very common operating practice to manage wind is to curtail the wind generation periodically.

Adjust Scheduling and Dispatch. Scheduling practices vary widely between balancing areas, and both the scheduling lead time and the time step are very important when it comes to integrating wind generation. Many areas do the majority of scheduling well ahead of the operating hour and fix hourly schedules for most units. This puts them at a disadvantage when trying to deal with forecast error and variability within the hour: without a way to change dispatch or adjust schedules within the hour, systems must use expensive regulation to keep the system in balance. Having a real-time market, or a similar process to adjust schedules within the hour, helps prevent flexibility from being stranded and reduces the amount of regulation needed, allowing the flexibility of units to be realized through a more efficient dispatch process. Changing the dispatch process should allow ancillary services, primarily regulation, to be reduced. Systems that operate markets on a five- or ten-minute basis have more flexibility to adjust to wind generation. The Avista study models how market structure can change wind integration costs by considering the addition of a 10-minute market on top of the existing hourly market. The study shows that between 45 and 75 percent of the integration costs are attributable to factors that occur within the hour, meaning that most of the costs arise because the system is locked in for an hour. When the 10-minute market is added to the analysis, it gives the system more flexibility to respond to conditions; for their system, the integration costs are lowered by between 40 and 60 percent.
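The within-hour effect can be illustrated with a toy comparison: under an hourly block, every deviation from the fixed schedule falls on regulation, while a 10-minute market re-dispatches each step and leaves only the step-to-step residual. A sketch with an invented net load trajectory (a simplification of the Avista modeling, not their method):

```python
# Toy comparison of hourly vs 10-minute scheduling. With an hourly block the
# dispatch is fixed for the hour, so regulation must cover every within-hour
# swing; a 10-minute market re-dispatches each step, leaving only one step of
# movement for regulation. Net load values are invented for illustration.

net_load_mw = [5000, 5080, 5190, 5310, 5400, 5450]  # six 10-minute intervals

# Hourly block: one fixed dispatch level for the whole hour.
hourly_schedule = sum(net_load_mw) / len(net_load_mw)
reg_hourly = max(abs(x - hourly_schedule) for x in net_load_mw)

# 10-minute market: dispatch resets to the previous interval's level.
reg_ten_min = max(abs(b - a) for a, b in zip(net_load_mw, net_load_mw[1:]))

print(f"Regulation range needed, hourly block:  {reg_hourly:.0f} MW")
print(f"Regulation range needed, 10-min market: {reg_ten_min:.0f} MW")
```

In this toy case the 10-minute market cuts the regulation range by roughly half, the same direction of effect the Avista study reports.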

Change Ancillary Services. The majority of studies show that the necessary amount of ancillary services will increase. Many focus on regulation and suggest increasing the amount of regulation procured, and while estimates of the increase vary significantly between the studies, all agree that increasing wind generation will increase the amount of ancillary services a region must maintain to keep the same reliability level. Regulation is the ancillary service most affected. Ancillary services are usually more expensive than energy, and the system tries to keep their cost down. The variation in relative ancillary service needs between balancing areas depends in large part on the scheduling time frames. One way to offset increases in ancillary services is to increase the pool of entities that can provide them, which includes having more generation certified to provide ancillary services, including wind generation itself.

Encourage Flexible Generation. Operators are concerned about the increased variability in the system, as well as the additional need for ancillary services. As a way of managing this, operators suggest encouraging the development of more flexible generation. This includes constructing new generation that can meet future needs or retrofitting current generation to operate more flexibly. To encourage the construction of generators with sufficient flexibility, operators will need to compensate generators for the extra benefits they provide; one way to do this is to make resource adequacy payments that consider more than available capacity. There are several attributes system operators consider when thinking about generator flexibility. One way for generation to provide more flexibility is to offer a wider operating range, with lower minimum generation levels and faster ramp rates.

Frequent cycling capability is another desirable feature: generators that can cycle daily or more often are valuable. Faster start and stop capability goes along with frequent cycling. This refers to the amount of time it takes for a generator to turn on or off from when it receives an instruction; this time can be on the order of days, so reducing it to hours or less could greatly increase flexibility.

Demand response stands on its own as a way to enhance the power system. Demand response can allow systems to avoid installing expensive peak generation that is rarely used.

Zoning and Aggregation. Many studies have shown there are benefits to increasing the diversity of wind resources. Diverse wind resources aggregated together have less variability than sites located close together. If systems can encourage wind generation in many areas, they will have to make fewer changes to accommodate the wind.

Another option for aggregation is a larger balancing area. Larger balancing areas reduce the penetration of wind generation, which reduces its effects. Additionally, larger balancing areas have larger generation fleets and more flexibility in how they are dispatched.

Grid Codes. Reliability organizations are actively studying renewable energy and revising grid codes to ensure the reliability of the system. LVRT and reactive control requirements are among the standards that have been introduced. Balancing areas, reliability regions, and NERC will continue to review wind turbine technology and power system performance to assure the reliable operation of the power grid. Grid codes will help ensure that wind does not harm the power system's reliability.

Telemetry. It is very important for system operators to have quality information on the state of the system so they can ensure its reliability, and telemetry of power system elements is crucial to giving operators the information they need. Telemetry at wind sites should provide both meteorological and power data; real-time measurement of wind plant performance has a variety of benefits. Power systems already have real-time monitoring systems that typically measure generation and transmission conditions, and system operators continually monitor the system and make operating decisions based on the measurements. These systems can easily be adapted to include wind production information as well as local meteorological information for wind farms, such as wind speed, wind direction, air temperature, pressure, and humidity. The data can give operators real-time information on wind generation, its variability, and recent trends, and the measurements can also be used by forecasters to better predict wind generation. Telemetry for transmission elements should also be increased to monitor the greater system, especially for upgrades built to integrate wind generation.

Storage. Storage is often discussed in the integration studies, and there are many possibilities for how storage could benefit a system with high penetrations of renewables. Energy storage has the potential to address both the intermittency and the variability of wind, depending on the characteristics of the storage. There is a wide variety of storage technologies with a wide range of operating characteristics; the CAISO study describes many of the technologies and the properties they offer. The predominant technologies are pumped storage hydro, compressed air, batteries, flywheels, supercapacitors, and hydrogen. The ability of storage to mitigate potential problems will depend on its characteristics. A large storage system could charge during windy conditions, if the energy isn't needed then, and discharge when it is calm and more electricity is needed.

For shifting energy, pumped storage hydro is the most practical storage type, and many areas already have some installed. Smaller storage could be used to manage variability, principally within the hour. Under the right conditions storage could contribute to a variety of operational functions: providing reserves and regulation, load following, peak power, added demand in minimum load conditions, reactive support, or inertial response. While storage has many potential benefits for power systems, two things are important to understand. First, storage needs to make sense from the system perspective compared with other operational strategies; if wind adds variability, the system's needs should be evaluated relative to that variability, and storage should not simply be used to return the system to its state before wind was added. Second is the cost and benefit analysis: storage will compete with other generation for all of those possible functions, and the revenue model for storage is often unclear. On cost alone, building storage is more expensive than equivalent capacity in gas turbines. The EWITS study suggests that in certain situations storage can be used instead of transmission upgrades, though further study is needed.
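The energy-shifting role described above amounts to charging on surplus and discharging on deficit, subject to energy capacity and round-trip losses. A minimal sketch with a greedy policy and invented numbers (real storage dispatch is an optimization problem, not this simple rule):

```python
# Minimal sketch of energy shifting with storage: charge when wind exceeds
# demand, discharge when it falls short, limited by energy capacity and a
# round-trip efficiency. The greedy policy and all numbers are illustrative.

capacity_mwh = 400.0
efficiency = 0.80   # round-trip efficiency, applied on charge
soc = 0.0           # state of charge in MWh

surplus_mw = [300, 150, -100, -250, -200, 50]  # wind minus demand, hourly

for hour, s in enumerate(surplus_mw):
    if s > 0:  # windy hour: absorb as much surplus as capacity allows
        charge = min(s, (capacity_mwh - soc) / efficiency)
        soc += charge * efficiency
    else:      # calm hour: serve as much of the deficit as stored energy allows
        discharge = min(-s, soc)
        soc -= discharge
    print(f"hour {hour}: surplus {s:5d} MW, state of charge {soc:6.1f} MWh")
```

Even this toy run shows the two constraints that matter in practice: the round-trip loss shrinks every MWh shifted, and once the store is empty (hour 4 here) the remaining deficit must be met by other generation.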

Coordination. System operators, with a few notable exceptions, are not alone in trying to maintain system reliability: interconnections have dozens of control areas that must interact to stay reliable. Current practices typically lock in the flows between control areas in hourly blocks. More flexibility between balancing areas is seen as a way to increase the diversity in the system and mitigate the effects of wind generation. There are many ways for areas to coordinate their integration efforts. More frequent scheduling on shared lines or connections is one; this allows greater flexibility within the areas if they do not have to maintain fixed flows across their interconnections for a full hour. Data sharing is another: sharing information about weather conditions can help areas coordinate their wind generation and perhaps obtain better forecasts. Coordination could also mean consolidating balancing areas into large areas with one operator.

Extreme Conditions. Another area where studies often suggest future work is extreme weather events. Wind studies consider typical operating conditions and historical years; if a weather pattern doesn't exist in the input data, it won't be considered in the study even though it may be possible. There are extreme scenarios that could pose problems for the operation of the grid, and studying those cases could identify strategies to handle them successfully if they occur. Sub-hourly studies are often necessary to fully consider extreme events and other areas of concern. Extreme events can be based not only on wind generation behavior but also on the simultaneous behavior of the power system.

Along with studies of possible extreme events, work is needed on how to mitigate them. Extreme events may require special attention and solutions that are not typically used.



