Smart Grid Challenges

Meier, A. May 2014. Challenges to the integration of renewable resources at high system penetration. California Energy Commission.

A challenge to “smart grid” coordination is managing unprecedented amounts of data associated with an unprecedented number of decisions and control actions at various levels throughout the grid.

This report outlined substantial challenges on the way to meeting these goals.

More work is required to move from the status quo to a system with 33% intermittent renewables. The complex nature of the grid and the fine temporal and spatial coordination required represent a profound departure from the capabilities of the legacy, baseload-oriented system. Any “smart grid” development will require time for learning.

IEEE. September 5, 2014. IEEE Report to DOE Quadrennial Energy Review on Priority Issues. Institute of Electrical and Electronics Engineers.

A favorable benefit-to-cost ratio is by no means assured. Potential benefits may be overestimated; for example, some of the expectations for smart meters are being scaled back both in the U.S. and in Europe (19). Germany found that while smart metering would be beneficial for a particular group of customers, the majority of consumers would not benefit from a comprehensive rollout of smart meters (20).

19 European Commission. June 2014. Benchmarking smart metering deployment in the EU-27 with a focus on electricity.

20 Ernst & Young. July 2013. Cost-benefit analysis for the comprehensive use of smart metering.

National Institute of Standards and Technology. January 24, 2014. Electromagnetic Compatibility of Smart Grid Devices and Systems. U.S. Department of Commerce.

The Smart Grid will dramatically increase the dependency of the electric grid on microprocessors, and turn the electric system into a giant computer that will monitor itself, optimize power delivery, remotely control and automate processes, and increase communications between control centers, transformers, switches, substations, homes, and businesses.

Smart Grid devices have the potential of making the electric grid less stable: “Many of these devices must function in harsh electromagnetic environments typical of utility, industrial, and commercial locations. Due to an increasing density of electromagnetic emitters (radiated and conducted, intentional and unintentional), the new equipment must have adequate immunity to function consistently and reliably, be resilient to major disturbances, and coexist with other equipment.”

 


Spain Wind Integration

2 articles below:

[energyresources] Digest Number 8957 [altered slightly]

Jan 14, 2015. papp20032000 (Pedro Prieto)

In Spain we have this mix, as of the end of 2014:

INSTALLED POWER AND GENERATION

Source                                   MW       %        GWh       %    Gross load factor (%)
Hydro (incl. mini/micro)               19,893    18.4     43,191    16.7        24.78
Nuclear                                 7,866     7.3     57,179    22.1        82.98
Coal                                   11,482    10.6     46,264    17.9        46.00
Fuel/gas                                3,498     3.2      6,620     2.6        21.60
Combined cycle gas fired               27,206    25.2     25,869    10.0        10.85
Wind power                             23,002    21.3     51,439    19.9        25.53
Solar PV                                4,672     4.3      8,211     3.2        20.06
Concentrated Solar Power (CSP)          2,300     2.1      5,013     1.9        24.88
Thermal renewable & others              8,212     7.6     30,935    12.0        43.00
Pumped hydro & generation consumption                    -12,663    -4.9
Exports to neighbors                                      -3,543    -1.4
TOTAL                                 108,131   100.0    258,515   100.0
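The gross load factors in the last column follow directly from the first two columns: a source's load factor is its annual generation divided by what it would produce running at full installed capacity for all 8,760 hours of the year. A minimal check in Python, using the values from the table above:

```python
# Recompute the gross load factors from installed capacity (MW) and annual
# generation (GWh) for several rows of the 2014 Spanish mix shown above.
HOURS_PER_YEAR = 8760

mix_2014 = {
    # source: (installed MW, generation GWh)
    "Hydro (incl. mini/micro)": (19893, 43191),
    "Nuclear": (7866, 57179),
    "Combined cycle gas fired": (27206, 25869),
    "Wind power": (23002, 51439),
    "Solar PV": (4672, 8211),
}

for source, (mw, gwh) in mix_2014.items():
    max_gwh = mw * HOURS_PER_YEAR / 1000     # MW x hours = MWh; /1000 -> GWh
    print(f"{source}: {100 * gwh / max_gwh:.2f}%")

# Prints 24.78%, 82.98%, 10.85%, 25.53% and 20.06%, matching the table and
# confirming the roughly 10% load factor of the combined cycle fleet that
# the author contrasts with nuclear's ~83% in the conclusions below.
```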

Conclusions

  1. Spain has a huge excess of installed power, with daily maximum peaks of hardly 40 GW, an average demand of about 30 GW, and a total installed power of 108 GW. This is the consequence of the belief, at the end of the 1990s and early 2000s, in infinite growth, and of preparing for it.
  2. Spain has a reasonably good hydro system, which is an excellent buffer for pumped storage to back up the intermittency of the renewables.
  3. The international exchanges (exports, on balance) are basically with France (we have a positive balance), with Morocco through the Strait of Gibraltar, and with Portugal, a country with some renewables of its own that Spain sometimes helps to balance as well. But in the end, from the electrical point of view Spain is basically an island, with even less interconnection with Europe than the UK, as you can see.
  4. Spain has a huge fleet of combined cycle gas fired power plants, which were built some 10-15 years ago in the same belief in eternal growth and are now basically backing up the renewables, as the second resource if hydro has a bad year. This is very good for the renewable system, but an economic and financial tragedy for the operators who invested heavily in combined cycles and are now getting a miserable 10% load factor from plants that were conceived and designed for at least 5,500 hours a year.
  5. CSP has a bigger load factor than solar PV because the law allows some 15% of gas (classified as renewable energy) to back up the plants and thus keep the molten salt storage from solidifying on cloudy days or during the nights.
  6. Self-consumption is high, and pumped storage (5,403 GWh/year) is used not only to back up renewables, but mostly to help nuclear offload at night.
  7. Renewable energies represented in 2014 about 43.7% of the total yearly national demand, exactly the same percentage as their share of installed power. The trick is that in Spain renewables (except big hydro) enter the grid first by law, so other sources (except the non-stop nuclear) have to give way to them and, when possible, regulate (mainly hydro and the combined cycles, or the fuel/gas that is installed basically in the Canary and Balearic archipelagos), since coal and nuclear are not good at backing up fast variations of renewables.
  8. Last but not least, the Control Center you have mentioned in a previous post, located at Red Eléctrica Española (REE, the entity responsible for the high voltage transmission lines in our country), is a world leader in handling and managing intermittent generation. They have a very sophisticated national network of sensors and meteorological devices all around the country and in neighboring countries, plus sophisticated algorithms connected to the national weather system, so they can already predict very accurately how much a given wind or solar farm is going to produce every day at almost every hour, with at least six to seven hours of anticipation. That lets them program the 1-2 hour warm-up or disconnection of the combined cycle plants, which are suffering many more on/off cycles than originally planned.

Neither wind nor solar in its two modalities in Spain (PV and CSP) needs to be added or subtracted to balance the network, since they have priority of entrance into the grid by law. So they deliver as much as they can produce at every instant. Only in a very few exceptional circumstances have they had to be switched off from the grid for a while. This balancing function is reserved basically for hydro and the combined cycle gas fired plants, which are the ones suffering the impact: today they work about 870 hours a year (a 10% load factor) when they were originally designed to work 5,500 hours a year, and they suffer faster degradation over their life cycles because of the much increased number of pre-warmings, post-coolings, and on/off switchings beyond what was originally expected.

There has been a decline in electricity usage in Spain over the last 3 years, obviously due to the international financial and economic crisis that is hitting mainly the Southern European countries. This has not affected renewable generation, but it has stopped the addition of new power plants.

As for the vastly overbuilt capacity, the big mistake was not only installing renewables, which everybody knows demand overcapacity plus storage or other handling to provide a safe and continuous service, but also believing in Kyoto and installing huge amounts of modern combined cycle gas fired plants (Spain has 7 regasification ports, the first in Europe in handling this gas traffic, plus two gas pipelines coming from Algeria). The idea was to burn gas and dismantle coal plants to minimize or avoid penalties (see Germany today and smile). I suggest that those countries and governments that believed in 2000 that economic growth could not be sustained forever, while growing like Spain at 3-4% yearly, should raise their hands. No one had foreseen this, and Spain was trapped in this belief. Only a handful of people, like those in this forum, knew that growth could not go on forever.

Of course, some countries like France (75%+ of electricity coming from nuclear) can claim to have less installed overcapacity, because of the high load factors of nuclear and the policy of also “heating” the country with electricity. This may have some other enormous inconveniences in the future. Germany is another case, with plenty of coal plants and still some nuclear plants running, despite having many more renewables than Spain (but not as high a penetration percentage). The Netherlands can also add a lot of renewables because they have an essential buffer in the neighboring countries, for when there is no wind or sun, or when there is an excess.

But in general, people have to accept that if they want renewables, they will have to build and install a considerable amount of overcapacity, and also, most importantly, a massive energy storage system, which will push the costs of the so-called “renewables” to prices that will always escape so-called grid parity.

Finally, the very high prices Spanish consumers are paying for electricity are not only due to the “overbuilt capacity” of renewables, but also and mainly due to a poor, corrupt, and politically biased energy policy of the government, which is always willing to accept whatever the big electric oligopolies demand so they can continue with their sick profits. The well-known and publicized cases of former political dinosaurs being appointed to the boards of the big electric or energy corporations, with insultingly high salaries, immediately after having regulated in their favor while in government (the so-called revolving-doors scandal), are a very sensitive and painful issue for Spaniards, to such an extent that the traditional bipartisan system will probably explode. So we are not accepting this perverse system AT ALL; we are just suffering it and fighting it as much as we can.

 

NREL. 2012. Integrating Variable Renewable Energy in Electric Power Markets: Best Practices from International Experience. National Renewable Energy Laboratory.

Appendix F. Case Study: Spain. Author: David Pérez Méndez-Castrillón, Ministry of Industry, Energy, and Tourism.

Coordinated and Integrated Planning: Policy and Planning. Spain’s energy situation as well as the policies pursued in the last decades are the direct result of certain challenges: a high degree of energy dependence, a lack of sufficient interconnections (as it is almost an isolated electric system), high energy consumption per unit of gross domestic product, and high levels of greenhouse gas emissions (mostly due to strong growth in electricity generation and to the energy demand in the transport sector).

To face these challenges, energy policy in Spain (and in other European countries) has revolved around three axes: security of supply, enhancement of the competitiveness of Spain’s economy, and a guarantee of sustainable economic, social, and environmental development. The RE policy proposed takes into account that Spain has one of the highest levels of energy dependence in Europe, and that the Iberian Peninsula forms an electric system that is isolated from the rest of Europe.

Energy Demand Coverage: At the end of 2011, RE covered 13.2% of final energy consumption and 33% of the total electricity production in Spain. On November 6, 2011, Spain achieved a new record when wind power provided 59.6% of electricity demand; the previous peak was 54.0%. In 2010, RE covered 11.8% of final energy consumption and 33.3% of the total electricity production in Spain.

The impact of high RE levels on the production required from conventional generation implies that thermal power plants must be able to cope with the variability of RE production. When this is not feasible, the TSO must rely on imports from, and exports to, neighboring systems. However, when the level of interconnection is not sufficient, as is the case in Spain, RE curtailment will be the only solution.

A main objective in the planning studies of the TSO in Spain is to propose mechanisms to minimize those RE curtailments. New pumping stations, new interconnections, and new fast response power plants (i.e., those using open-cycle gas turbine or OCGT technology) can be considered and evaluated. From an electric point of view, Spain has one of the lowest interconnection ratios in the European Union. This lack of sufficient interconnection capacity has prevented the Spanish system from taking advantage of cross-border exchanges for the integration of RE, as cross-border exchanges enable electricity exports when the surplus of renewable production cannot be properly dispatched in the system, thus diminishing RE curtailments and increasing the overall efficiency.

This means special attention must be paid to coordinating, aggregating, and controlling the overall production that is fed into the grid, because a certain volume of non-RE units must also be dispatched to comply with security and technical constraints.

That RE plants tend to be far more distributed and dispersed than conventional power plants complicates this task. In response to this challenge, the system operator in Spain established a control center for special regime generation, the Spanish Control Centre of Renewable Energies (CECRE), whose objective is to monitor and control RE production, maximizing it while ensuring the security of the electrical system. CECRE was established in June 2006, as wind generation started to become a relevant technology in the Spanish electrical system. It is composed of an operational desk where an operator continuously supervises RE production. Renewable energy control centers collect real-time information and channel it to CECRE. To minimize the number of points of contact dealing with the TSO, each renewable energy control center acts as the only real-time interlocutor with the TSO. The control centers also manage the limitations established by set-points, and they are responsible for ensuring that the non-manageable plants comply with them.

The Iberian Peninsula has a very low electricity interconnection capacity compared with the rest of Europe. The existing interconnections between Spain and Portugal under the MIBEL framework do not facilitate the integration of intermittent generation produced in Spain (as Portugal is not interconnected to any other country). For this reason, interconnections between Spain and the rest of Europe through France are essential. The GEMAS information tool was designed taking into account that the operator must be able to create, manage, and activate a plan rapidly, as situations may arise in which returning the system to a balanced, N-1 secure state as soon as possible may be necessary. Because more than 800 wind parks are installed in the Spanish peninsular system, they must be managed as automatically as possible. The reliability of the tool is a crucial issue, as a failure to deliver limitations to the RE control centers could result in a significant decrease in the security of supply.
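As a rough illustration of the set-point workflow described above, the sketch below shows one way a regional generation cap could be shared out across wind parks and confirmed back through a single point of contact. This is only a toy model with assumed names and a pro-rata allocation rule; it is not CECRE's or GEMAS's actual algorithm or interface.

```python
# Illustrative curtailment set-point loop. NOT the real GEMAS/CECRE software;
# the data structures and the pro-rata rule are assumptions for illustration.
from dataclasses import dataclass

@dataclass
class WindPark:
    name: str
    available_mw: float       # what the park could produce right now
    setpoint_mw: float = 0.0  # limit sent down by its renewable control center

def allocate_setpoints(parks, max_admissible_mw):
    """Share a regional generation cap across parks pro rata to availability."""
    total_available = sum(p.available_mw for p in parks)
    for p in parks:
        if total_available <= max_admissible_mw:
            p.setpoint_mw = p.available_mw     # no curtailment needed
        else:
            p.setpoint_mw = p.available_mw * max_admissible_mw / total_available
    return parks

parks = [WindPark("Park A", 120), WindPark("Park B", 80), WindPark("Park C", 50)]
for p in allocate_setpoints(parks, max_admissible_mw=200):
    print(f"{p.name}: limited to {p.setpoint_mw:.1f} MW of {p.available_mw} MW available")
# 250 MW is available against a 200 MW cap, so every park is scaled back by
# the same 80% factor; the aggregating control center would then report
# compliance back to the TSO as the single real-time point of contact.
```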


German wind integration

NREL. 2012. Integrating Variable Renewable Energy in Electric Power Markets: Best Practices from International Experience. National Renewable Energy Laboratory.

Germany has developed a fund to encourage new fossil-fired power plants to use the most flexible technology available to maximize their ability to ramp to meet the system’s balancing need.

The Greennet study determined that additional balancing costs in Germany, at around 10% penetration, would be around €2.5 ($3.3)/MWh (Holttinen et al. 2009).

Germany’s wind industry association believes an additional 25 GW could be installed on land and at sea by 2020, on top of the 29 GW today (GWEC n.d.). ENTSO-E estimates that in the Nordic region as a whole, meanwhile, wind capacity could rise to approximately 15-20 GW in the same year (ENTSO-E 2010), at which point less than half of Nordic wind capacity would be located within Danish borders. Output throughout this northern region is likely to be highly correlated. This means that competition for flexible resources, such as Norwegian hydropower, to balance these large wind power ambitions is going to increase. Denmark may need to increase its domestic flexibility.

Denmark is a small system, heavily interconnected with both Scandinavian neighbors in the Nordic power market and Germany to the south, with a transfer capacity equal to approximately 80% of its peak demand. In other words, surpluses and deficits of power production resulting from a large variable RE share can relatively easily be compensated for. Other systems are likely to have a far smaller potential to trade, relative to their size.

Germany must manage very large flows of wind energy into and around its grid area. Until recently, with the scaling up of solar photovoltaic power plants (PV) in the south of the country, almost all variable renewable energy (RE) generation (i.e., wind power) has been in the middle and north of the country. The imbalance between rural areas with high wind energy shares and the principal consumption areas across Germany has led to transmission congestion between these different areas. The challenge is likely to be compounded by growing flows of variable electricity from outside Germany’s borders. Germany’s immediate neighbor to the north is Denmark, which targets 50% wind power. Moreover, wind penetration is likely to be highest in the Jutland Peninsula, which is part of the same power system as Germany (i.e., the synchronous grid of continental Europe). Instantaneous shares in Jutland can already rise above 100% today. Grid congestion in the border region during times of high wind is likely to increase without reinforcement.

In addition, flows of electricity from Germany to and through Eastern neighbors are already challenging, to the extent that eastern neighbors are considering remedial measures. Finally, fast-growing, distributed solar photovoltaic (PV) installations in the south of the country will increase the complexity of the system operation task, particularly because the distribution grid is managed passively.

The 2010 Energy Concept includes a “Government-Länder Initiative on Wind Energy,” which intends to improve cooperation between federal and state levels in the search for higher quality wind resources on land. This is particularly important as the majority of the best resources may already have been exploited.

Protecting the Revenue of Existing Flexible Resources. As wind and solar PV electricity production increases, and because of their low marginal cost and priority dispatch, less production is needed from existing conventional plants, such as gas and coal, which have higher operating costs (mainly fuel). This is known as the “merit-order effect” (i.e., conventional power plants are pushed down the order in which plants are used). This “missing revenue” problem may adversely affect the economics of those plants to the point that owners no longer consider their continued operation to be profitable and retire them from service. If this were to occur, it would not only reduce the amount of flexible power on the system able to balance fluctuating variable RE output, it might also undermine the adequacy of the system (i.e., its ability to meet its peak power requirements).
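The merit-order effect can be illustrated with a toy dispatch stack: plants are brought on in order of marginal cost until demand is met, and the most expensive plant running sets the price. The capacities and costs below are invented round numbers for illustration, not German market data.

```python
# Toy merit-order dispatch: plants run cheapest-first until demand is met, and
# the most expensive plant actually running sets the clearing price. All
# capacities and marginal costs are invented round numbers for illustration.
plants = [
    # (name, capacity MW, marginal cost EUR/MWh)
    ("Wind + solar", 20000, 0),
    ("Nuclear", 10000, 10),
    ("Lignite", 15000, 30),
    ("Hard coal", 10000, 45),
    ("Combined cycle gas", 15000, 60),
    ("Open cycle gas", 5000, 90),
]

def dispatch(demand_mw, fleet):
    """Return ([(plant, MW dispatched)], clearing price in EUR/MWh)."""
    schedule, remaining, price = [], demand_mw, 0
    for name, cap, cost in sorted(fleet, key=lambda p: p[2]):
        used = min(cap, remaining)
        if used > 0:
            schedule.append((name, used))
            price = cost
            remaining -= used
    return schedule, price

# With 20 GW of zero-marginal-cost wind and solar, 50 GW of demand clears at
# the hard-coal price and gas barely runs; without the renewables, combined
# cycle gas is needed and sets a higher price.
print(dispatch(50000, plants))
print(dispatch(50000, [p for p in plants if p[0] != "Wind + solar"]))
```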

Even if fossil-fueled plants are displaced to some extent by new variable RE output, they will be needed to compensate for the nuclear power plants already retired (nearly 10 GW), alongside imports of electricity from France.

Challenges for Neighboring Countries. Polish and Czech system operators are considering blocking action in the face of large wind-based flows into and through their systems. Poland is considering installing devices to enable this (Platts 2011). Austria, for example, buys wind power to fill its pumped-hydropower reservoirs, and 35% of electricity flowing from Germany to Austria passes through the Czech Republic.

The task of TSOs, which manage the high-voltage grid in areas with very large shares of variable RE electricity, is increasingly complex. Very large amounts of data need to be managed and continually updated, while more dynamic management of power plants, such as re-dispatching or using curtailment, requires high-speed decision-making.

Serious delays to essential grid expansion work are also apparent in the increasing need to curtail wind plants in the north of the country. Though an important system management tool, the curtailment of power plants (or “feed-in management”) leads essentially to the waste of what was wanted in the first place (i.e., clean energy) so it should be minimized. Curtailment in 2010 increased by up to 69% over the previous year. Even if it only amounted to 0.2% – 0.4% (72-150 GWh) of total wind electricity, in some northern wind farms as much as 25% of output was curtailed (Ecofys 2011).

Figure D-1. Development of electricity generation from RE in Germany since 1990. Source: BMU 2011a.

Table D-1 shows the average annual share of wind power in total electricity generation increasing from 1% in 1999 to 6% in 2010. Solar PV, from a much later start, reached nearly 2% in 2010. While these figures still seem quite modest, instantaneous shares can be very challenging.

Table D-1. Shares of Wind and Solar PV in Total Electricity Generation (%).

Table D-2 shows the maximum ratio of solar PV and wind power output to power demand in Germany as a whole and in the four TSO control areas into which it is divided (see Figure 2). Perhaps surprisingly, given the modest annual figures above, penetration reached over 60% on Sunday, May 8, at 1:00 p.m., when demand dropped to a low on a quiet, sunny afternoon. At the same time, in the area managed by TenneT, which stretches from the north to the south of the country and picks up power both in the windy north and in the sunny south, penetration reached 160% of the entire demand of the area. Eastern Germany saw similarly little activity at 6:00 a.m. on January 1, 2011, and the system operator (50Hertz) had to manage wind output amounting to 124% of the area's demand.
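The "maximum ratio" in Table D-2 is simply instantaneous wind-plus-solar output divided by instantaneous demand in the same control area; values above 100% mean the area was exporting the surplus at that moment. A trivial sketch, where the MW figures are hypothetical and chosen only to reproduce the ratios quoted in the text:

```python
def penetration_pct(vre_output_mw: float, demand_mw: float) -> float:
    """Instantaneous wind-plus-solar share of load, in percent."""
    return 100 * vre_output_mw / demand_mw

# Hypothetical operating points chosen only to reproduce the quoted ratios:
print(penetration_pct(16_000, 10_000))  # 160.0 -- like TenneT on May 8, 2011
print(penetration_pct(6_200, 5_000))    # 124.0 -- like 50Hertz on January 1, 2011
```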

It remains to be seen whether the Energy Concept will solve the biggest challenge: rolling out and reinforcing the grid.

Table D-2. Maximum Ratio of Wind and Solar PV to Load, by TSO, in Germany in 2011.

The Market Stimulation Program has provided grants since 2000. These initially included the power sector, but they are now exclusively for the heat sector. Another important driver is the public bank, Kreditanstalt fuer Wiederaufbau (KfW). KfW provides long-term, fixed, low-interest investment loans, and loan guarantees, to projects, amounting to some EUR 10 billion by 2008 (RETD 2008). Recently KfW announced EUR 5 billion ($6.6 billion) of loan guarantees to offshore wind projects up to 2020 (Platts 2011). In 2010 alone, it provided EUR 11 billion ($15.5 billion) “for the construction of facilities using renewable energies,” including heat.

The German Energy Concept’s renewable energy targets are as follows: renewables are to supply 18% of energy consumption by 2020, 30% by 2030, 45% by 2040, and 60% by 2050, and 35% of electricity consumption by 2020, 50% by 2030, 65% by 2040, and 80% by 2050. These targets are highly ambitious. The Energy Concept was updated in summer 2011 following the government’s decision, after the events at the Fukushima Daiichi nuclear plant in Japan in March 2011, to phase out by 2022 nuclear power, which represented approximately 23% of German capacity in 2011. The change of policy resulted in additional promotion of renewable electricity as well as conventional options such as coal power. A recent study, which modeled balancing costs in a number of European countries, found that in Germany, additional balancing costs of wind power at approximately 10% penetration of electricity (i.e., more than present penetration) amounted to approximately EUR 2.5 per MWh of wind.


Electric grid large power transformers take up to 2 years to build

[Large power transformers (LPTs) are essential critical infrastructure for the electric grid, and they are huge, weighing up to 820,000 pounds. If large power transformers are destroyed by a geomagnetic disturbance (GMD), an electromagnetic pulse (EMP), cyber-attack, sabotage, severe weather, floods, or simply old age, the electric grid could be down in a region for 6 months to 2 years. This is because the USA imports 85% of them; there is competition with other nations for limited production capacity and raw materials such as special grade electrical steel; they cost $2.5 to $10 million each (including transport and installation); and they are custom built, requiring close supervision and long lead times to design, bid, manufacture, and deliver. The United States’ large power transformers are aging faster than they’re being replaced, and even more are needed for new intermittent renewable generation, which has the potential to damage them if not integrated carefully into the existing electric grid. There are possibly tens of thousands of LPTs in America, mostly built between 1954 and 1978, so an increasing percentage of these aging LPTs will need to be replaced within the next few decades. Alice Friedemann, www.energyskeptic.com]

Electric grid interdependency with limited raw materials and foreign production/supply chains; long lead times to replace due to financing, design, build, and delivery

DOE. April 2014. Large Power Transformers and the U.S. Electric Grid. Infrastructure Security and Energy Restoration, Office of Electricity Delivery and Energy Reliability, U.S. Department of Energy.

Excerpts from this 55-page document follow.

LPTs have long been a concern for the U.S. Electricity Sector, because the failure of a single unit can cause temporary service interruption and lead to collateral damage, and it could be difficult to quickly replace it. Key industry sources have identified the limited availability of spare LPTs as a potential issue for critical infrastructure resilience in the United States.

The U.S. electric power grid serves one of the Nation’s critical life-line functions on which many other critical infrastructure sectors depend, and the destruction of this infrastructure can have a significant impact on national security and the U.S. economy. The U.S. electric power grid faces a wide variety of threats, including natural, physical, cyber, and space weather. LPTs are large, custom-built electric infrastructure. If several LPTs were to fail at the same time, it could be challenging to quickly replace them.

Large power transformers are a critical component of the transmission system, because they adjust the electric voltage to a suitable level on each segment of the power transmission from generation to the end user. In other words, a power transformer steps up the voltage at generation for efficient, long-haul  transmission of electricity and steps it down for distribution to the level used by customers. Power transformers are also needed at every point where there is a change in voltage in power transmission to step the voltage either up or down.
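The reason voltage is stepped up for long-haul transmission is resistive loss: for a fixed power transfer P, line current is I = P/V and losses go as I^2 R, so raising the voltage tenfold cuts resistive losses roughly a hundredfold. A simplified single-phase illustration with arbitrary line parameters:

```python
# Why transmission runs at high voltage: for a fixed power transfer P, the
# line current is I = P / V and resistive losses are I^2 * R, so a 10x step-up
# in voltage cuts line losses by roughly 100x. The 10-ohm line resistance and
# 500 MW transfer are arbitrary illustrative values (single-phase, unity
# power factor simplification).
P_WATTS = 500e6     # 500 MW transferred
R_OHMS = 10.0       # line resistance

for v_kv in (34.5, 345.0):
    current_a = P_WATTS / (v_kv * 1e3)
    loss_mw = current_a**2 * R_OHMS / 1e6
    print(f"{v_kv:>5} kV: {current_a/1e3:6.1f} kA, about {loss_mw:,.0f} MW lost in the line")

# At 34.5 kV this example line would dissipate far more than it delivers;
# at 345 kV the same 500 MW transfer loses only about 21 MW.
```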

Although prices vary by manufacturer and by size, an LPT can cost millions of dollars and weigh between approximately 100 and 400 tons (or between 200,000 and 800,000 pounds). The procurement and manufacturing of LPTs is a complex process that includes pre-qualification of manufacturers, a competitive bidding process, the purchase of raw materials, and special modes of transportation due to its size and weight.

The result is the possibility of an extended lead time that could stretch beyond 20 months and up to five years in extreme cases if the manufacturer has difficulty obtaining any key inputs, such as bushings and other key raw materials, or if considerable new engineering is needed.

The United States is one of the world’s largest markets for power transformers, with an estimated market value of more than $1 billion in 2010, or almost 20% of the global market. The United States also holds the largest installed base of LPTs in the world. Using certain analysis and modeling tools, various sources estimate the number of EHV LPTs in the United States to be approximately 2,000.

While the estimated total number of LPTs (capacity rating of 100 MVA and above) installed in the United States is unavailable, it could be in the range of tens of thousands, including LPTs that are located in medium-voltage transmission lines with a primary voltage rating of 115 kV.

Two raw materials— copper and electrical steel—account for more than half of the total cost of an LPT. Special grade electrical steel is used for the core of a power transformer and is critical to the efficiency and performance of the equipment; copper is used for the windings.

[Figure: Power transformer electrical steel production]

[Figure: Power transformer electrical steel exports]

In recent years, the price volatility of these two commodities in the global market has affected the manufacturing condition and procurement strategy for LPTs. The rising global demand for copper and electrical steel can be partially attributed to the increased power and transmission infrastructure investment in growing economies, as well as the replacement market for aging infrastructure in developed countries.

The United States is one of the world’s largest markets for power transformers and holds the largest installed base of LPTs, and this installed base is aging.

The average age of installed LPTs in the United States is approximately 38 to 40 years, with 70% of LPTs being 25 years or older. While the life expectancy of a power transformer varies depending on how it is used, aging power transformers are potentially subject to an increased risk of failure. 

Our power transformers are aging far faster than they’re being replaced:

[Figure: Power transformer aging in the USA]

Since the late 1990s, the United States has experienced an increased demand for LPTs; however, despite the growing need, the United States has limited domestic capacity to produce them. In 2010, six power transformer manufacturing facilities existed in the United States, and together they met approximately 15% of the Nation’s demand for power transformers with a capacity rating greater than or equal to 60 megavolt-amperes (MVA). Although exact statistics are unavailable, global power transformer supply conditions indicate that the Nation’s reliance on foreign manufacturers was even greater for extra high-voltage (EHV) power transformers with a maximum voltage rating greater than or equal to 345 kilovolts (kV).

[As more unreliable, intermittent, uncertain, variable wind and solar are added, the risk of damage from line disturbances, overloads, and electrical disturbances increases]:

[Figure: Power transformer failures by cause, 1991-2010]

“Electrical disturbances” included phenomena such as switching surges, voltage spikes, line faults/flashovers, and other utility abnormalities, but excluded lightning.

Although age is not included as a cause of transformer failure in Figure 18, age is certainly a contributing factor to increases in transformer failures. Various sources, including power equipment manufacturers, estimate that the average age of LPTs installed in the United States is 38 to 40 years, with approximately 70% of LPTs being 25 years or older. According to an industry source, some units well over 40 years old, and some over 70 years old, are still operating in the grid. An LPT is subjected to faults that result in high radial and compressive forces, as the load and operating stress increase with system growth. In an aging power transformer failure, typically the conductor insulation has weakened to the degree that it can no longer sustain the mechanical stresses of a fault.

Given the technical evaluation that a power transformer’s risk of failure is likely to increase with age, many of the LPTs in the United States are potentially subject to a higher risk of failure. In addition, according to an industry source, there have also been some bad batches of LPTs from certain vendors. The same source estimated that the failure rate of LPTs is around 0.5 percent. In addition to these traditional threats to power transformers, the physical security of transformers at substations has become a public safety concern due to a coordinated physical attack on the cyber infrastructure of a California substation in 2013.

Throughout this report, the term large power transformer (LPT) is broadly used to describe a power transformer with a maximum capacity rating greater than or equal to 100 MVA unless otherwise noted.

In addition to the need for the replacement of aging infrastructure, the United States has a demand for transmission expansion and upgrades to accommodate new generation connections and maintain electric reliability.

In particular, this study addresses the considerable dependence the United States has on foreign suppliers to meet its growing need for LPTs. The intent of this study is to inform decision makers about potential supply concerns regarding LPTs in the United States. This report provides the following observations: The demand for LPTs is expected to remain strong globally and domestically. Key drivers of demand include the development of power and transmission infrastructure in emerging economies (e.g., China and India) and the replacement market for aging infrastructure in mature economies (e.g., United States), as well as the integration of alternative energy sources into the grid and an increased focus on nuclear energy in light of climate change concerns.

The United States has limited production capability to manufacture LPTs. In 2010, only 15% of the Nation’s demand for power transformers (with a capacity rating of 60 MVA and above) was met through domestic production. Although the exact statistics are unavailable, power transformer market supply conditions indicate that the Nation’s reliance on foreign manufacturers was even greater for EHV power transformers with a capacity rating of 300 MVA and above (or a voltage rating of 345 kV and above).

While global procurement has been a common practice for many utilities to meet their growing need for LPTs, there are several challenges associated with it. Such challenges include: the potential for an extended lead time due to unexpected global events or difficulty in transportation; the fluctuation of currency exchange rates and material prices; and cultural differences and communication barriers. The utility industry is also facing the challenge of maintaining an experienced in-house workforce that is able to address procurement and maintenance issues. The U.S. electric power grid is one of the Nation’s critical life-line functions on which many other critical infrastructure sectors depend, and the destruction of this infrastructure can have a significant impact on national security and the U.S. economy. The electric power infrastructure faces a wide variety of possible threats, including natural, physical, cyber, and space weather.

The failure of a single unit could result in temporary service interruption and considerable revenue loss, as well as incur replacement and other collateral costs. Should several of these units fail at the same time, it will be challenging to quickly replace them.

LPTs are special-ordered machines that require highly skilled workforces and state-of-the-art manufacturing equipment and facilities. The installation of LPTs entails not only significant capital expenditures but also a long lead time due to the intricate manufacturing processes, including the securing of raw materials. As a result, asset owners and operators invest considerable resources to monitor and maintain LPTs, as failure to replace aging LPTs could present potential concerns, including increased maintenance costs, equipment failures, and unexpected power failures.

The workshop considered four risk scenarios concerning the Electricity Sector, including a severe geomagnetic disturbance (GMD) or electromagnetic pulse (EMP) event that damaged a difficult-to-replace generating station and substation equipment causing a cascading effect on the system.

The size of a power transformer is determined by the primary (input) voltage, the secondary (output) voltage, and the load capacity measured in MVA. Of the three, the capacity rating, or the amount of power that can be transferred, is often the key parameter rather than the voltage. In addition to the capacity rating, voltage ratings are often used to describe different classes of power transformers, such as extra high voltage (EHV), 345 to 765 kilovolts (kV); high voltage, 115 to 230 kV; medium voltage, 34.5 to 115 kV; and distribution voltage, 2.5 to 35 kV.

Power Transformers in the Electric Grid. North America’s electricity infrastructure represents more than $1 trillion U.S. dollars in asset value and is one of the most advanced and reliable systems in the world. The U.S. bulk grid consists of approximately 390,000 miles of transmission lines, including more than 200,000 miles of high-voltage lines, connecting to more than 6,000 power plants.

An LPT can weigh as much as 410 tons (820,000 pounds (lb)) and often requires long-distance transport.

Physical Characteristics of Large Power Transformers. An LPT is a large, custom-built piece of equipment that is a critical component of the bulk transmission grid. Because LPTs are very expensive and tailored to customers’ specifications, they are usually neither interchangeable with each other nor produced for extensive spare inventories. According to an industry source, approximately 1.3 transformers are produced for each transformer design. Figure 2 illustrates a standard core-type LPT and its major internal components. Although LPTs come in a wide variety of sizes and configurations, they consist of two main active parts: the core, which is made of high-permeability, grain-oriented silicon electrical steel, and the windings, which are made of copper conductors.

Power transformer costs and pricing vary by manufacturer, market condition, and location of the manufacturing facility. In 2010, the approximate cost of an LPT with an MVA rating between 75 MVA and 500 MVA was estimated to range from $2 million to $7.5 million in the United States; however, these estimates were Free on Board (FOB) factory costs, exclusive of transportation, installation, and other associated expenses, which generally add 25 percent to 30 percent to the total cost (see Table 2).
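Applying the report's 25 to 30 percent adder for transportation, installation, and other expenses to the quoted FOB range reproduces the total installed cost bracket cited in the note at the top of this post:

```python
# Free on Board factory cost range for 75-500 MVA LPTs in 2010, per the text,
# plus the 25-30% typically added for transportation, installation, and other
# associated expenses.
fob_low, fob_high = 2.0e6, 7.5e6

for adder in (0.25, 0.30):
    print(f"+{adder:.0%}: ${fob_low * (1 + adder) / 1e6:.2f}M "
          f"to ${fob_high * (1 + adder) / 1e6:.2f}M installed")

# Roughly $2.5M to $9.75M all-in, consistent with the $2.5-$10 million range
# cited in the bracketed note at the top of this post.
```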

LPTs require substantial capital and a long lead time (in excess of six months) to manufacture, and their production requires large crane capacities, ample floor space, and adequate testing and drying equipment. The following section provides further discussion of the production processes and requirements of LPTs, including transportation and key raw commodities.

LPTs are custom-made equipment that incurs significant capital costs. Utilities generally procure LPTs through a competitive bidding process, in which all interested producers must pre-qualify to be eligible to bid. Pre-qualification is a lengthy process that can take several years. A typical qualification process includes an audit of production and quality processes, verification of certain International Organization for Standardization (ISO) certifications, and inspection of the manufacturing environment. This process can often be rigorous and costly to purchasers; however, it is an important step, because the manufacturing environment and capability can significantly affect the reliability of the product, especially of high-voltage power transformers.

LPTs are custom-designed equipment that entails a significant capital expenditure and a long lead time due to an intricate procurement and manufacturing process. (1) Request for proposal (2 months), (2) Submit Bid (1-2 months), (3) contract negotiation/technical specification (1-2 months), (4) Design (2-4 months), (5) purchase materials (2-4 months), (6) production (2-4 months), (7) Testing (days to weeks), (8) Transportation & Site Set-up (weeks to months).
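Summing the steps in that procurement chain gives a feel for the end-to-end timeline from request for proposal to an energized unit. The figure used below for testing, transport, and site set-up (1 to 3 months combined) is an assumption, since the report gives only "days to weeks" and "weeks to months":

```python
# Rough end-to-end timeline, summing the steps listed above. Testing plus
# transportation and site set-up are counted as 1-3 months combined, an
# assumption, since the text gives only "days to weeks" and "weeks to months".
steps_months = {
    "Request for proposal": (2, 2),
    "Submit bid": (1, 2),
    "Contract negotiation / technical specification": (1, 2),
    "Design": (2, 4),
    "Purchase materials": (2, 4),
    "Production": (2, 4),
    "Testing, transportation, and site set-up": (1, 3),
}
low = sum(lo for lo, hi in steps_months.values())
high = sum(hi for lo, hi in steps_months.values())
print(f"Roughly {low} to {high} months from RFP to an installed unit")  # 11 to 21
# The 5-24 month lead times quoted elsewhere in the report are counted from
# the customer's order rather than from the start of the bidding process.
```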

Bidding Process. A standard bidding process is initiated by a purchaser, who sends commercial specifications to qualified LPT producers. The producers then design LPTs to meet the specifications, estimate the cost, and submit a bid to the purchaser. The bids not only include the power transformer, but also services such as transportation, installation, and warranties. Except for a few municipalities, most utilities do not announce the amount of the winning bid or the identity of the winning bidder. The winning bidder is notified, and bid terms normally require that the results be kept confidential by all parties involved.

Production. The typical manufacturing process of an LPT consists of the following steps:

  1. Engineering and design: LPT design is complex, balancing the costs of raw materials (copper, steel, and cooling oil), electrical losses, manufacturing labor hours, plant capability constraints, and shipping constraints.
  2. Core building: The core is the most critical component of an LPT; it requires a highly trained and skilled workforce and cold-rolled, grain-oriented (CRGO) laminated electrical steel.
  3. Windings production and assembly of the core and windings: Windings are predominantly copper and have an insulating material.
  4. Drying operations: Excess moisture must be removed from the core and windings because moisture can degrade the dielectric strength of the insulation.
  5. Tank production: A tank must be completed before the winding and core assembly finish the drying phase so that the core and windings do not start to reabsorb moisture.
  6. Final assembly of the LPT: The final assembly must be done in a clean environment; even a tiny amount of dust or moisture can deteriorate the performance of an LPT.
  7. Testing: Testing is performed to ensure the accuracy of voltage ratios, verify power ratings, and determine electrical impedances.

In 2010, the average lead time between a customer’s LPT order and the date of delivery ranged from five to 12 months for domestic producers and six to 16 months for producers outside the United States. The LPT market is characterized as a cyclical market with a correlation between volume, lead time, and price. In other words, the average lead time can increase when the demand is high, up to 18 to 24 months. This lead time could extend beyond 20 months and up to five years in extreme cases if the manufacturer has difficulty obtaining any key inputs, such as bushings and other key raw materials, or if considerable new engineering is needed.

Once completed, a power transformer is disassembled for transport, including the removal of oil, radiators, bushings, convertors, arrestors, and so forth. The proper transportation of a power transformer and its key parts is critical to ensuring the high reliability of the product and minimizing the period for onsite installation.

Transporting an LPT is challenging: its large dimensions and heavy weight pose unique requirements to ensure safe and efficient transportation. Current road, rail, and port conditions are such that transportation is taking more time and becoming more expensive. Although rail transport is most common, LPTs cannot be carried on normal railcars, because they cannot be rolled down a hill or bumped into other railcars without damaging the power transformer, and because the heaviest load a railroad normally carries is about 100 tons, or 200,000 lb, whereas an LPT can weigh two to three times that amount. A specialized railroad freight car known as the Schnabel railcar is used to transport extremely heavy and tall loads by rail. There are a limited number of Schnabel cars available worldwide, with only about 30 of them in North America. Certain manufacturers operate a Schnabel car rental program, and access to a railroad is also becoming an issue in certain areas due to the closure, damage, or removal of rail lines.

Photos: 1) A German machine called the Goldhofer, which “looks like a caterpillar with 144 tires and features a hydraulic system” to handle the heavy weight, is another mode of transportation used on the road. 2) Workers move wires, lights, and poles to transport a 340-ton power transformer, causing hours of traffic delay.

Logistics and transportation accounted for approximately 3% to 20% of the total cost of an LPT for both domestic and international producers. While important, this is less significant than the cost of raw materials and the potential sourcing concerns surrounding them. The next section describes some of the issues concerning raw materials vital to LPT manufacturing.

Raw Materials Used in Large Power Transformers. The main raw materials needed to build power transformers are copper conductors, silicon iron/steel, oil, and insulation materials. The cost of these raw materials is significant, accounting for well over 50% of the total cost of a typical LPT. Specifically, manufacturers have estimated that the cost of raw materials accounted for 57% to 67% of the total cost of LPTs sold in the United States between 2008 and 2010. Of the total material cost, about 18% to 27% was for copper and 22% to 24% was for electrical steel. For this reason, this section examines the issues surrounding the supply chain and price variability of the two key raw materials used in LPTs— copper and electrical steel.

Electrical Steel and Large Power Transformers. The electrical steel used in power transformer manufacture is a specialty steel tailored to produce certain magnetic properties and high permeability. A special type of steel called cold-rolled grain-oriented electrical steel (hereinafter referred to as “electrical steel”) makes up the core of a power transformer. Electrical steel is the most critical component, with the greatest impact on the performance of the power transformer, because it is designed to provide low core loss and high permeability, which are essential to efficient and economical power transformers. Electrical steel is produced at different levels of magnetic permeability: conventional and high-permeability. Conventional products are available in various grades from M-2 through M-6, with thickness and energy loss increasing with each higher number (see Figure 5). High-permeability product allows a transformer to operate at a higher level of flux density than conventional products, thus permitting the transformer to be smaller and have lower operating losses. The quality of electrical steel is measured in terms of the loss of electrical current flowing in the core. In general, core losses are measured in watts per kilogram (W/kg), and the thinner the material, the better the quality. An industry source noted that an electrical steel grade of M3 or better is typically used in LPTs to minimize core loss.

The average annual prices of electrical steel ranged from $1.20 to $2.20 per pound between 2006 and 2011, with peak prices occurring in 2008. According to an industry source, the price of electrical steel has been recorded as high as $2.80 per pound (lb). As a reference, approximately 170,000 to 220,000 lb of core steel is needed in a power transformer with a capacity rating between 300 and 500 MVA. Global demand for grain-oriented electrical steel continues to increase, particularly in China and India.
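Combining the quoted quantities and prices gives a rough bracket for the cost of the core steel alone in a large unit, which helps explain why electrical steel is such a large share of total LPT cost:

```python
# Rough cost bracket for the core steel alone in a large unit, using the
# quantities and prices quoted above: 170,000-220,000 lb of grain-oriented
# electrical steel for a 300-500 MVA transformer at $1.20-$2.80 per pound.
lb_low, lb_high = 170_000, 220_000
usd_per_lb_low, usd_per_lb_high = 1.20, 2.80

print(f"${lb_low * usd_per_lb_low:,.0f} to ${lb_high * usd_per_lb_high:,.0f}")
# About $204,000 to $616,000 for the core steel by itself, consistent with
# electrical steel accounting for 22-24% of total LPT cost.
```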

Global Electrical Steel Suppliers. The availability of electrical steel supply sources worldwide is limited. In 2013, there were only two domestic producers, AK Steel and Allegheny Ludlum. In addition to the 2 domestic producers, there were 11 major international companies producing grain-oriented electrical steel. However, only a limited number of producers worldwide are capable of producing the high-permeability steel that is generally used in LPT cores. AK Steel is the only domestic producer of the high-permeability, domain-refined (laser-scribed) core steel used in high-efficiency stacked cores.

In 2009, four Chinese companies produced 35% of the world’s electrical steel, the majority of which was consumed domestically. Conversely, Japan produced 14% of the world’s electrical steel, mainly for export. The two U.S. producers accounted for 14% of the world’s electrical steel production.

According to the USITC, in 2012 a total of 1.5 million metric tons of electrical steel were exported around the world, and exports from three countries (Japan, Russia, and South Korea) accounted for more than half of that total. While China was the largest producer of electrical steel, it contributed only 2% of total global exports in 2012. Japan was the largest exporter of electrical steel with 27%, followed by Russia and South Korea, which exported 15% and 10%, respectively, of total global electrical steel exports in 2012.

The average price of copper more than quadrupled between 2003 and 2013, reaching more than $4.27 per pound in 2011.

In 2012, China was the single largest buyer of steel in the world, consuming more than 45% of the world’s total steel consumption of 1,413 million metric tons that year. Although China’s primary need for steel is in the construction sector, China also has a significant demand for power transmission infrastructure. China’s and India’s demands for steel, including high-efficiency, grain-oriented steel, are expected to continue to affect the availability and price of steel and copper to the rest of the world.

Global Power Generation Capacity. In 2013, the world had more than five trillion watts of power generation capacity, which was growing at an annual rate of 2%. China and the United States had the largest generation capacity, with each holding about 21% and 20% of the world’s total installed capacity, respectively.

The key catalyst for power infrastructure investment in developed countries (e.g., United States) was the replacement market for aging infrastructure. In addition to aging infrastructure, the United States has a need for transmission expansion and upgrade to accommodate new generation connections and maintain electric reliability.

Large Power Transformer Manufacturing Capacity in North America. The United States was not an exception to the global, strategic consolidation of manufacturing bases. By the beginning of 2010, there were only six manufacturing facilities in the United States that produced LPTs. Although certain manufacturers reported having the capability to produce power transformers with a capacity rating of 300 MVA or higher, industry experts cautioned that the capacity to produce does not necessarily translate into actual production of power transformers of that magnitude. Often, domestic producers did not have the required machinery and equipment to produce power transformers of 300 MVA, or 345 kV, and above. A number of firms identified constraints in equipment (e.g., cranes, ovens, testing, winding, and vapor phase systems) and in the availability of trained personnel that set limits on their production capacity.

Ocean and inland transportation, compliance with specifications, quality, testing, raw materials, and major global events (e.g., hurricanes) can significantly influence a supplier’s lead time and delivery reliability. In addition, some railroad companies are removing rail lines due to infrequent use and other lines are not being maintained. This can pose a challenge to moving the LPTs to certain locations where they are needed.

Foreign factories may not understand U.S. standards, such as those of the Institute of Electrical and Electronics Engineers (IEEE) and the National Institute of Standards and Technology (NIST), or may not have appropriate testing facilities. Foreign vendors may not have the ability to repair damaged power transformers in the United States. It is expensive to travel overseas for quality inspections and to witness factory acceptance testing. The utility industry is also facing the challenge of maintaining an experienced, well-trained in-house workforce that is able to address power transformer procurement and maintenance issues.

 


Homeland Security and Dept of Energy: Dams and Energy Sectors Interdependency Study

[Below are excerpts from this 45-page document. Dams not only provide power but also water for agriculture, drinking water, cooling water for thermal power plants, ecosystem health, fisheries, and so on. All dams have a finite lifespan of 50 to 200 years due to siltation and the limited lifespan of concrete. Within the next 20 years, 85% of U.S. dams, which cost taxpayers $2 trillion, will have outlived their average 50-year lifespan.]

DOE and DHS. September 2011. Dams and Energy Sectors Interdependency Study. U.S. Department of Energy and U.S. Department of Homeland Security.

Figure 1: Top 10 Hydropower-Generating States and Their Reliance on Hydro Sources for Electricity, 2009 (total hydroelectric power generation 273 million MWh). These states together produce more than 80% of the Nation’s total hydroelectric power.

ID 80%, WA 71%, OR 59%, MT 35%, NY 21%, CA 14%, TN 11%, AL 8%, AZ 6%, NC 4%

The U.S. Department of Energy (DOE) and the U.S. Department of Homeland Security (DHS) collaborated to examine the interdependencies between two critical infrastructure sectors – Dams and Energy. The study highlights the importance of hydroelectric power generation, with a particular emphasis on the variability of weather patterns and competing demands for water which determine the water available for hydropower production. In recent years, various regions of the Nation suffered drought, impacting stakeholders in both the Dams and Energy Sectors. Droughts have the potential to affect the operation of dams and reduce hydropower production, which can result in higher electricity costs to utilities and customers. Conversely, too much water can further complicate the operation of dams in ways that can be detrimental to hydropower production and to the infrastructure of the dams.

The requirements for providing sufficient water for irrigation, environmental protection, transportation, as well as community and industrial uses are already in conflict in many places. Low water conditions (e.g., drought) and high water conditions (e.g., flood) resulting from extreme weather variability can strain the operation of dams.

Although hydroelectric facilities are a type of asset that falls under the auspices of the Dams Sector, they are also an important element to the Energy Sector because the electric power they generate is critical to maintaining the reliability of the Nation’s electricity supply.

The National Infrastructure Protection Plan (NIPP) provides an overarching framework for the protection and resilience efforts for the Nation’s 18 critical infrastructure sectors.

DOE and DHS support and coordinate the protection and resilience activities for the Dams and Energy Sectors’ critical infrastructure as defined below: Dams Sector assets include dam projects, hydropower generation facilities, navigation locks, levees, dikes, hurricane barriers, mine tailings and other industrial waste impoundments, and other similar water retention and water control facilities. Energy Sector, as delineated by Homeland Security Presidential Directive 7 (HSPD-7), includes the production, refining, storage, and distribution of oil, gas, and electric power, except for hydroelectric and commercial nuclear power facilities.

Chief among these concerns is the fact that hydroelectric power generation is affected by extreme fluctuations of water flow, as well as by long-term issues surrounding the management and uses of the water supply used to generate hydroelectricity. In recent years, various regions of the Nation suffered droughts affecting stakeholders in both the Dams and Energy Sectors. Although recent drought conditions have not caused a serious problem in terms of electricity supply and reliability, they have the potential to affect the operation of dams by decreasing hydropower production.

The report investigates how different variables might affect the operation of hydroelectric facilities and the supply of hydroelectric power, especially in times of drought and other extreme weather events. Such variables include:

  • The relationship between hydroelectric power generation and the variability of hydrology and weather patterns;
  • Operation of major reservoirs and streamflow regulations at these reservoirs; and
  • Management for flood control, fish habitat protection, and power generation.

Importance of Hydroelectric Dams for Power Generation

Historically, hydroelectric sources have been a vital source of electric power generation, accounting for as much as 40% of the Nation's electricity supply in the early 1900s. Although the share of hydropower has declined to 7% of total U.S. electric power generation as production from other types of power plants grew at a faster rate, hydroelectric dams remain an important power source. Hydropower is critical to the national economy and to overall energy reliability.

  • Hydroelectric sources produce 7% of the U.S. total annual electric generation.
  • Hydroelectric generating capacity constitutes 8% of the U.S. total existing generation capacity.
  • The top ten hydropower-generating States produce more than 80% of the U.S. total hydroelectric generation.
  • The 20 largest hydroelectric dams produce almost half of the U.S. total hydroelectric generation.
  • Hydroelectric power generation has declined in most parts of the country during the 2007-2009 period compared to the historical average.

Hydropower is important because it’s:

  1. The least expensive source of electricity, as it does not require fossil fuels for generation;
  2. An emission-free renewable source, accounting for over 65% of the U.S. total annual net renewable generation;
  3. Able to shift loads to provide peaking power (it does not require ramp-up time like combustion technologies); and
  4. Often designated as a black start source that can be used to restore network interconnections in the event of a blackout.

Hydropower serves an essential purpose of enhancing electric grid reliability, and can rapidly adjust output to meet changing real time electricity demands and provide black-start capability to help restore power during a blackout event. Black start capability is defined as the ability to start generation without an outside source of power. Because hydropower plants are the only major generators that can dispatch power to the grid immediately when all other energy sources are inaccessible, they provide essential back-up power during major electricity disruptions such as the 2003 blackout. With black start capability, hydropower facilities can resume operations in isolation without drawing on an outside power source and help restore power to the grid.

Hydroelectric Power Capacity vs. Generation. As seen in figures 2 and 3, hydropower generation capacity has remained steady in the last 20 years, whereas production from hydro sources has fluctuated dramatically year-to-year. According to EIA, hydropower capacity grew at an annual rate of 0.3 percent or a total of 4,600 megawatts (MW) in the past 20 years (1990: 73,925 MW vs. 2009: 78,525 MW).

The interannual variability of hydropower generation in the United States is very high—a drop of 59 million megawatt hours (MWh) (or 21% of the U.S. total hydropower generation) was seen from 2000 to 2001. Sensitivity of hydroelectric power generation to changes in precipitation and river discharge is high, in the range of 1.0 or more (a sensitivity level of 1.0 means that a one percent change in precipitation results in a one percent change in generation). Although it is evident that precipitation is a determining factor in available hydropower generation for a given period of time, the variability of weather patterns imposes uncertainty on the operation of hydroelectric facilities. Hydropower operations are also affected indirectly by changes in air temperatures, humidity, and wind patterns, which alter water quality and reservoir dynamics. For example, reservoirs with large surface areas (such as Lake Mead in the lower Colorado River) are more likely to experience greater evaporation, which affects the availability of water for all uses including hydropower. In addition, changing snowfall patterns and the associated runoff from snowpack melt are a matter of concern, particularly in the Pacific Northwest, where snows are melting earlier and the proportion of precipitation falling as snow is decreasing.
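[A minimal sketch of what a sensitivity of roughly 1.0 implies in practice; the 1.0 value is from the report, while the 10% precipitation change below is purely illustrative:]

# Sensitivity of hydropower generation to precipitation (illustrative sketch only).
# A sensitivity s of 1.0 means a 1% change in precipitation yields roughly a
# 1% change in generation.
def generation_change_pct(precip_change_pct: float, sensitivity: float = 1.0) -> float:
    """Approximate percent change in hydropower generation."""
    return sensitivity * precip_change_pct

# Hypothetical example: a 10% drop in precipitation
print(generation_change_pct(-10.0))  # -> roughly a 10% drop in generation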

A 20-year period from 1990 to 2009 was examined to see the changes in hydropower production at the State level. The results indicate that the national annual average of hydroelectric power generation between 2007 and 2009 was 11 percent less than the 1990-2006 historical average; the top 10 hydropower-generating States all experienced a decline, with certain States losing up to 28% of their normal annual hydropower generation.

Largest Hydro Dams. According to the 2010 Dams Sector-Specific Plan, the total number of dams in the United States is estimated to be around 100,000. However, most dams were constructed solely to provide irrigation and flood control, and only about 2% (or 2,000) of the Nation's dams produce electricity.

Table 1 provides a list of the 20 largest hydroelectric dams in the United States ranked by summer capacity as of December 2009. These 20 hydroelectric facilities account for 40% of the Nation's hydroelectric power capacity; they provided 44% of the hydropower generated in the United States during the 20-year period from 1990 to 2009. The majority of the 20 largest hydroelectric power plants are located in the Columbia River basin in the Pacific Northwest, all of which experienced decreased production in the 2007 to 2009 time span compared to the historical average between 1990 and 2006.

EIA reports that the largest hydroelectric facility in the United States is the Grand Coulee Dam with a summer capacity of 6,765 MW, located in the Columbia River basin. It is also the largest hydropower producer. To put Grand Coulee's magnitude in perspective, the next two largest dams, Chief Joseph and Robert Moses Niagara, each have only about a third of Grand Coulee's capacity. Note, however, that the capacity factor at hydro plants varies significantly, generally in the range of 30 to 80%, with an average capacity factor of about 40 to 45%. To illustrate this varied capacity factor of hydroelectric plants, the capacity factor of the Grand Coulee Dam is about 36%, whereas the Robert Moses Niagara Dam has a relatively high capacity factor of 71%.
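[The capacity factors quoted above follow from the standard definition, annual generation divided by nameplate capacity times hours in a year; a small sketch plugging in the Grand Coulee figures cited in the report, for illustration only:]

# Capacity factor = actual annual generation / (nameplate capacity * hours in a year)
def capacity_factor(annual_generation_mwh: float, capacity_mw: float) -> float:
    return annual_generation_mwh / (capacity_mw * 8760)

# Grand Coulee: 6,765 MW summer capacity at a ~36% capacity factor implies roughly
# 6,765 MW * 8,760 h * 0.36, or about 21.3 million MWh per year (illustrative).
annual_mwh = 6765 * 8760 * 0.36
print(round(annual_mwh / 1e6, 1), "million MWh")     # ~21.3
print(round(capacity_factor(annual_mwh, 6765), 2))   # 0.36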

Table 1. 20 Largest Hydroelectric Dams in the United States [table not reproduced; columns included Plant Name, Owner, and State]

Drought can play a significant role in hydropower production—it can decrease upstream flow and require the diversion or retention of water that would otherwise go to produce electricity or to other water purposes during times of scarcity.

The Columbia River basin is the predominant river system in the Pacific Northwest, encompassing 250 reservoirs and about 150 hydroelectric projects. The system spans seven western States: Washington, Oregon, Idaho, Montana, Wyoming, Nevada, and Utah, as well as British Columbia, Canada.

Today, the Columbia River system operations serve multiple purposes — flood control and mitigation, power production, navigation, recreation, and environmental needs—that are guided by a complex and interrelated set of laws, treaties, agreements, and guidelines. These include the Endangered Species Act, a Federal law that protects threatened or endangered species— protection that can result in setting restrictions on the time and amount of allowed flow and spill—as well as numerous treaties and agreements with Canada dealing with flood control and division of power benefits and obligations.35 Streamflow in the Columbia River system does not follow the region's electricity demand pattern, in which the peak occurs during winter when the region's homes and businesses need heating. Although most of the annual precipitation occurs in the winter from snowfall, most of the natural streamflows occur in the spring and early summer when the snowpack melts. About 60 percent of the natural runoff occurs during May, June, and July (see figure 7). Thus, the objective of reservoir operation is to store snowmelt runoff in the spring and early summer for release in the fall and winter when streamflows are lower and electricity demand is higher.

Hydropower supplies approximately 60 to 70% of the electricity in the Pacific Northwest Region. In the Columbia River system, power generation operations are generally compatible with flood control requirements. However, under the current operating strategy, conflicts between power generation and fish protection are generally resolved in favor of fish protection.

The current strategy requires increased water storage in the fall and winter and increased flows and spill during the spring and summer to benefit migrating juvenile salmon. This approach does not provide an optimal operating strategy for power generation: it results in more water for fish protection but reduced hydropower generation during peak demand periods. As a result, BPA often must purchase power during high-load periods in the winter and sell surplus power in the spring and summer.

The Pacific Northwest has been affected by widespread temperature-related reductions in snowpack, as well as a changing annual runoff pattern. Recent studies indicate 1) a transition to more rain and less snow and 2) a shifting pattern of snowmelt runoff in western North America—contemporary snowmelt runoff has been observed 10 to 30 days earlier in comparison to the period from 1951 to 1980. To adapt to these changes, the ability to modify operational rules and water allocations is critical to ensuring the reliability of water and energy supplies, as well as to protecting the environment and critical infrastructure. However, the current set of laws, regulations, and agreements is intricate and creates institutional and legal barriers to such changes in both the short and long term. In 2010, the Pacific Northwest experienced the third driest year in the last 50 years and the fifth lowest water level on record since 1929, causing low runoff in the lower Columbia River. According to BPA's 2010 Annual Report, BPA's gross power purchases increased 37% from 2009, mainly due to below-normal basin-wide precipitation and stream flows, which resulted in insufficient power generation to fulfill load obligations.

Not only drought but also too much water can create challenges for hydropower operation. After a dry winter, spring 2010 river flows were expected to stay fairly low. However, in June 2010, a strong Pacific storm system brought heavy precipitation that almost doubled the stream flows in the Columbia River.45 During the month of June, dam operators faced the challenges of managing flooding and an oversupply of hydropower and, at the same time, complying with Federal regulations for fish protection that restricted the amount of spill allowed. Because water that passes through the power turbines does not increase dissolved gas levels, and therefore maintains safe conditions for fish, dam operators were forced to produce power for which they could not find a market.46 As a result, BPA disposed of more than 50,000 MWh of electricity for free or for less than the cost of transmission and incurred a total of 745,000 MWh of spill for lack of market in June 2010.47 Figure 10 shows that BPA balancing authority generation significantly exceeded load in early June.

High flows in the Columbia River system are common, resulting from above average snowpack and/or early warming periods that result in rapid snowmelt. However, operating the Columbia River system through those events has become much more complex in recent years due to the following new factors: 1) multiple flow and storage requirements to protect threatened and endangered salmon and steelhead under the Endangered Species Act; 2) changing uses of the transmission system in a deregulated electric power market; and 3) the significant addition of variable, non-dispatchable wind power capacity (3,400 MW as of February 2011) with financial incentives for operation—production tax credits of $21 per MWh and renewable energy credits of $20 per MWh.48

The Colorado River System is considered one of the most legally complex river systems in the world, governed by multiple interstate and international compacts, legal decrees, and prior appropriation allocations, as well as federally-reserved water rights for Native Americans.52 The river basin extends over seven U.S. States— Arizona, California, Colorado, Nevada, New Mexico, Utah, and Wyoming and parts of northwestern Mexico (see figure 11), serving about 25 million people in the Southwest. Its water yield is only 8% of the annual flow of the Columbia River.

In the early 21st century, water use issues intensified as the Colorado River region experienced some of the Nation's highest population growth, as well as the start of a long period of drought considered to be the worst drought in the 100-year recorded history (hereinafter referred to as the "early 21st-century drought"). The Colorado River region is of particular concern because of the continuing trend of rising temperatures seen across the region that contributes to increased evaporative losses from snowpack, surface reservoirs, irrigated land, and vegetated surfaces.

Lakes Mead and Powell comprise approximately 80% of the basin's entire storage capacity.

In October 2010, Lake Mead stood at 39% capacity or 1,084 feet in elevation, curtailing power generation at the Hoover Dam, the region‘s largest hydro facility. For every foot of elevation lost in Lake Mead, Hoover Dam produces 5.7 MW less power. That is because at lower water levels air bubbles flow through with the water causing the turbines to lose efficiency. As a result, electricity available from Hoover Dam declined 29% since 1980, which meant that local utilities had to buy power on the open market where rates were up to four times higher.
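[A back-of-the-envelope reading of the 5.7 MW-per-foot figure above; the elevation drop used below is hypothetical and chosen only to illustrate the scale:]

# Hoover Dam loses roughly 5.7 MW of generating capability for each foot of
# elevation lost in Lake Mead (per the report). Illustrative sketch only.
MW_LOST_PER_FOOT = 5.7

def capacity_loss_mw(elevation_drop_ft: float) -> float:
    return MW_LOST_PER_FOOT * elevation_drop_ft

# Hypothetical example: a 100-foot decline in lake elevation
print(capacity_loss_mw(100))  # -> 570 MW of lost generating capability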

The Tennessee River System territory includes most of Tennessee and parts of Alabama, Georgia, Kentucky, Mississippi, North Carolina, and Virginia, serving more than 8.7 million people. TVA manages the Tennessee River and its reservoirs as a whole, regulating the flow of water through the river system for flood control, navigation, power generation, water quality, and recreation. TVA is also the Nation's largest public power provider, wholly owned by the U.S. Government; it maintains 29 conventional hydroelectric dams.

On average, the Tennessee Valley gets 51 inches of rain a year, which is more than double the average rainfall in the southwestern United States. Nonetheless, the Tennessee Valley experienced water shortages during the 2007-2008 droughts that forced communities around the watershed to restrict water withdrawals and take conservation measures. In December 2010, Gary Springston, TVA program manager for water supply, stated that the present situation was still tenuous and that "even systems connected to the Tennessee River system could face conflicts between instream flow needs to support water quality and aquatic life and withdrawals for offstream uses such as public-water supply, industry, thermoelectric power generation, and irrigation." Water supply concerns continue to increase due to population growth and interbasin transfers, especially since the Tennessee River is surrounded by areas that may require more water to accommodate growing needs.

The 2007-2008 droughts in the TVA region were among the worst on record, during which low reservoir water levels caused TVA to lose almost half of its total hydroelectric generation. At the same time, coal prices more than doubled, forcing TVA to rely on additional natural gas purchases to meet electric generation needs while keeping prices as low as possible. Even with the increased reliance on natural gas as opposed to coal, TVA raised rates by 20% in October 2008 to absorb more than $2 billion of increased costs for coal, natural gas, and purchased power. Costs associated with infrastructure modernization can also become an issue. Financial resources to design and implement facility upgrades generally come through public funds and/or power sales for publicly held hydropower infrastructure, and from rate increases approved by public utility commissions for privately held facilities. Although payback periods could be as short as 3-5 years for technology upgrades, securing the initial investment can be challenging. Some owners have received offers from investors and other utility companies to enter into a variety of energy savings performance contracts that would provide the initial investment for modernization in return for a share of the subsequent increased energy production. None of the participants indicated that they were presently involved in such contracts, and several raised concerns as to whether they could legally enter into such arrangements.

The potential for technology upgrades at some hydropower infrastructure may also be limited or made more expensive due to the age or physical condition of the facility.

Although operators want to retain as much water as possible in the reservoir for hydropower production, storing it in the reservoir during high water conditions may be hard to manage, as it might impact residences surrounding the reservoir.

Many dams have multiple missions; for some, the requirement for flood control takes precedence over hydropower production. Adherence to this primary mission may require passing high volumes of water through the dam turbines even though there may be low power demand. These increased flows may also require downstream dams to pass through water and not be able to sell the resulting power at a reasonable price. Even if flood control is not a facility mission, owners do their best to avoid or minimize downstream harm when they manage high water conditions. Debris buildup associated with flooding can be dangerous to the facility infrastructure and affect operations. Trees, lumber, sheds, animals, and other debris can be swept into rivers from floods and can build up against dams. The cost and personnel resources required to remove this debris can be significant.

Hydroelectric facilities serve multiple purposes that can include flood control, recreation, industrial and community water supply, irrigation, and transportation. The demands for water for these uses can come into conflict with hydropower production in terms of how much water can be used for nonpower generation and the condition of the water associated with power generation. For multifunction facilities, the combination of existing water rights, treaties, contracts, laws, or court cases determine who gets how much water and when they receive it. Modifying these controlling forces to consider reduced water availability can be difficult because they may involve multiple States and parties, and sometimes, international partners. In addition to these legally binding obligations on water delivery, softer forces, such as providing or storing water to protect recreational uses or the value of residences around the reservoir, can also limit the availability of water for hydropower generation. The condition of the water used in producing hydropower may also be heavily controlled through Federal and State laws and regulations, operating permits and licenses, and court cases related to the protection of natural resources and the environment. These controlling forces may stipulate water conditions such as tail water temperature, streamflow, and dissolved oxygen levels. Operating stipulations are primarily designed to protect species designated as threatened or endangered under Federal or State laws. They may also serve to protect downstream banks, channels, and river branches.

Southern Co.85 (2007): "Georgia Power's hydroelectric power generation was down 51% in 2007, forcing the company to spend $33.3 million for purchasing coal and oil to replace lost hydropower generation although hydropower sources account for less than two percent of Georgia Power's generation portfolio." – Nov. 2007, Atlanta Business Chronicle

Manitoba Hydro86 (2003): "A net loss of $436 million was reported in Manitoba Hydro's 53rd annual report for the fiscal year ending March 31, 2004. The loss was primarily due to the prolonged drought conditions that affected normal electricity production at the utility's 14 hydroelectric generating stations." – 2004, Manitoba Hydro

Water is used as the primary coolant in the condensers in both steam and natural gas-fired, combined cycle plants; the amount of water used for cooling in these plants can be significant, depending on the type of cooling system used. Plants that use "once-through" or "open-loop" cooling systems withdraw large amounts of water from nearby surface water sources. This water passes through a condenser as a coolant and, in doing so, transfers heat energy from the hot steam to the coolant water, raising the temperature of the water. After moving through the condenser, the water is released to the original lake, pond, or river source. The increased temperature of the discharge water also increases the rate of evaporation for the body of water. The quantity of water lost from the hydrological system by evaporation caused by elevated temperatures is said to be "consumed." Closed-loop cooling systems, by contrast, withdraw far less water but consume most of what they withdraw through evaporation in cooling towers.

Coal Transport by Barge. Transportation on the inland waterways and Great Lakes is an important element of the domestic coal distribution system, carrying approximately 20% of the Nation's coal, enough to produce 10% of U.S. electricity annually. Barge transport is often used to transfer coal from the initial source to a railroad, from a railroad to the coal-fired power plant, or the entire distance from the mine to the plant. Barge traffic is particularly important in the Midwestern and Eastern States, with 80% of shipments originating in States along the Ohio River. The amount of waterborne transported coal has remained relatively constant over the last two decades. Barge transport and the amount transported on a single barge are dependent upon the depth of the river on which the barge travels. Reducing the barge load is costly. Losing one foot of draft typically means losing 17 tons of cargo on a single barge and 255 tons on a typical 15-barge tow. In addition, idle tow-boats cost shipping companies $5,000 – $10,000 per day. Droughts have the potential to reduce the rate at which all goods, including coal, can be transported by barge. Some river systems, like the Missouri River, have a system of reservoirs that are used to control river depths. When river levels are low, water is released from the reservoirs to increase river depths and permit barge travel. To mitigate the potential for low water levels to significantly disrupt electric power generation, most coal-burning plants with barge access can also receive coal shipments by rail. However, because barge is the cheapest mode of transportation, utilities that must switch to rail pay a higher rate for transportation.
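[The per-tow figure quoted above is simply the per-barge loss scaled up; a quick arithmetic check:]

# Loss of one foot of draft costs about 17 tons of cargo per barge (per the report).
TONS_LOST_PER_FOOT_PER_BARGE = 17
BARGES_PER_TOW = 15

tons_lost_per_tow = TONS_LOST_PER_FOOT_PER_BARGE * BARGES_PER_TOW
print(tons_lost_per_tow)  # -> 255 tons lost per foot of draft on a 15-barge tow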

By affecting the availability of cooling water, drought has had an impact on the production of electricity from thermoelectric power plants. The problem for power plants becomes acute when river, lake, or reservoir water levels fall near or below the level of the water intakes used for drawing water for cooling. A related problem occurs when the temperature of the surface water increases to the point where the water can no longer be used for cooling. The Southeast experienced particularly acute drought conditions in August 2007, which forced the shutdown of some nuclear power plants and curtailed operations at others in order to avoid exceeding environmental limits for water temperature. A similar situation occurred in August 2006 along the Mississippi River, as well as at some plants in Illinois and Minnesota.

Thermoelectric freshwater withdrawals accounted for 41% of all freshwater withdrawals in 2005; however, it is important to note that only 3% of the withdrawn water is consumed and the rest is returned to natural flow.

Limitations of the Study. To maintain the focus of the study, this report is limited to issues that specifically relate to electric power generation at hydroelectric dams. Specifically, this study examines issues pertinent to overall management of reservoirs and stream flows at dams that are affected by the variability of weather patterns. In-depth analysis of certain topics considered outside of the scope of the study is omitted from the report. These include: climate change, new hydropower technologies, renewable energy credits, the value of hydropower's avoided greenhouse gas emissions, and the effects of reduced hydropower generation on the overall power market.

There are three types of hydroelectric power plants: conventional, pumped storage, and diversion facilities. The focus of this report is on the conventional hydroelectric facilities, which are the most common type of hydroelectric power plant. The U.S. Energy Information Administration (EIA) defines a conventional hydroelectric power plant as a plant in which all of the power is produced from natural streamflow as regulated by available storage. Most pumped storage units have closed-loop systems in which water can be stored and reused; therefore, electricity production at pumped storage is more resistant to drought or changing weather patterns. For this reason, the discussion of and data on hydroelectric power generation provided in this report excludes generation from pumped storage, unless noted otherwise.


Posted in Dams, Energy Production, Interdependencies | Tagged , , , | Leave a comment

Solar Thermal ESOI (Energy Stored on Invested)

Barton, N. April 17, 2013. ESOI for solar thermal.

http://sunoba.blogspot.com/2013/04/esoi-for-solar-thermal.html

Published information is available to evaluate the ESOI score for the most common solar thermal storage technology – a molten 60-40 mixture of sodium and potassium nitrates, commonly known as solar salt.

Burkhardt, Heath and Turchi [2] made a life cycle assessment of a hypothetical 100 MW parabolic trough concentrating solar plant at Daggett, California. The storage envisaged is 62,000 tons of solar salt, capable of storing 1,988 MWh of thermal energy, which can be converted into an electrical equivalent by multiplying by the thermal-electric efficiency of the plant.

Many individual items were taken into account by Burkhardt et al. to calculate the embodied energy of the storage component of the plant; these included obvious items like steel, concrete, pumps, heat exchangers, insulation, and solar salt. However, the biggest single item is the energy required to keep the salt molten and stirred for daily operations.

It’s noteworthy that the embodied energy of solar salt is low if it is mined (as assumed to be the case in [2]), but high if it is produced synthetically. In the latter case, which Burkhardt et al. say applies to slightly more than half of all installations, the manufacturing process involves pre-production of ammonia, for which there is a natural gas requirement.

I have also made an as-yet unpublished estimate for the ESOI score for thermal storage in air-blown pebble beds. This estimate is in the context of a new concept for solar thermal power generation entitled BRRIMS, denoting Brayton-cycle, Re-heated, Recuperated, Integrated, Modular and Storage-equipped. Here what needs to be considered is the embodied energy in hardware such as steel tanks, ducts, concrete footings, insulation and pebbles. Heat exchangers, pumps and fans are not required.

Results of Barnhart & Benson [1] can now be extended as follows, with the new entries added. This is a fair comparison ("apples with apples") between storage technologies, since the new figures represent the electrical energy that would be produced from the underlying thermal storage.

Technology ESOI
compressed air energy storage 240
pumped hydro storage 210
pebble bed thermal, BRRIMS 62
solar salt, parabolic trough [2] 47
Li-ion battery 10
Sodium-Sulphur battery 6
Vanadium redox battery 3
Zinc-Bromine battery 3
Lead-acid battery 2

The simple conclusion from the ESOI metric is that geologic storage is excellent, thermal storage is good, whilst electrochemical storage is poor.
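[For readers unfamiliar with the metric, ESOI is, roughly, the electrical energy a storage technology can deliver over its lifetime divided by the energy embodied in building it. The sketch below illustrates that ratio in simplified form; it is not the exact Barnhart & Benson methodology, and all parameter values are hypothetical placeholders:]

# Simplified sketch of the ESOI idea (lifetime electrical energy delivered divided
# by embodied energy), not the exact Barnhart & Benson calculation.
def esoi(cycle_life: int, capacity_mwh: float, depth_of_discharge: float,
         round_trip_efficiency: float, embodied_energy_mwh: float) -> float:
    lifetime_energy_out = (cycle_life * capacity_mwh * depth_of_discharge
                           * round_trip_efficiency)
    return lifetime_energy_out / embodied_energy_mwh

# Hypothetical battery-like example: 3,000 cycles, 1 MWh capacity, 80% depth of
# discharge, 90% round-trip efficiency, 250 MWh embodied energy.
print(round(esoi(3000, 1.0, 0.8, 0.9, 250), 1))  # -> ~8.6, i.e. a single-digit ESOI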

That is not the whole story however. Geological storage is not particularly cheap, and its applicability is limited by the availability of suitable sites. My estimates show that thermal storage is the cheapest option, and I propose to present details of this work at the World Renewable Energy Congress in July.

References
[1] C J Barnhart and S M Benson, “On the importance of reducing the energetic and material demands of electrical energy storage”, Energy Environ. Sci., 6 (2013), 1083.

[2] J J Burkhardt III, G A Heath and C S. Turchi, “Life cycle assessment of a parabolic trough concentrating solar power plant and the impacts of key design alternatives”, Environ. Sci. Technol. 45 (2011), 2457–2464.


Posted in Energy Storage, Solar Thermal | Tagged , , , | Leave a comment

Generating electricity with biomass at utility-scale in California limited to direct combustion in small 50 MW plants

CEC. 2014. Estimated cost of new renewable and fossil generation in California. California Energy Commission. CEC-200-2014-003-SD. 254 pages.

CHAPTER 8: Biomass Technology

Biomass technologies are plants that use biological resources, such as forestry waste or farming by-products, to produce electricity through thermal and chemical processes. Biomass technologies are in limited production in California. While these technologies are designed to harness biological by-products sustainably, they suffer from:

  1. The limitation of requiring large, reliable fuel sources to produce energy economically.
  2. The high cost of transporting the fuel from the origination site to the generation site. This limitation exposes the producer to the volatile market for diesel or other petroleum fuels, which can unexpectedly add significant costs.

Biomass is plant-based material, agricultural vegetation, or agricultural wastes used as fuel and has three primary technology pathways:

  • Pyrolysis – transformation of biomass feedstock materials into fuel (often liquid “biofuel”) through the application of heat in the presence of a catalyst.
  • Combustion – transformation of biomass feedstock materials into useful energy through the direct burning of those feedstocks using a variety of burner/boiler technologies also used for burning materials such as coal, oil, and natural gas.
  • Gasification – transformation of biomass feedstock materials into synthetic gas through the partial oxidation and decomposition of those feedstocks in a reactor vessel.

Of these technology pathways, only direct combustion of biomass is commercially available for utility-scale plants.

Gasification methods are used in some small-scale applications but are not yet viable for utility-scale applications. Active research into pyrolysis for biofuel production is ongoing but is not used for electricity production.

Combustion technologies are widespread and include the following general approaches:

  • Stoker boiler combustion uses technology similar to that of coal-fired stoker boilers to combust biomass materials, using either a traveling grate or a vibrating bed.

  • Fluidized bed combustion uses a special form of combustion where the biomass fuel is suspended in a mix of silica and limestone through the application of air through the silica/limestone bed. This is similar to technology used in newer coal-fired boilers. Fluidized bed combustion boilers are classified as either bubbling fluidized bed (BFB) or circulating fluidized bed (CFB) units.
  • Biomass- cofiring uses biomass fuel burned in conjunction with coal products in current pulverized- coal boiler technology used in utility-scale electricity production.

Recent sources of data and analysis have focused on fluidized bed technology. It is also the most likely biomass technology to be installed in California. The remainder of this chapter will focus on fluidized bed technology.

The inherent fuel versatility of fluidized bed systems provides a plant operator the ability to burn many biomass resource types, including those feedstocks with significant moisture variations.

Biomass fuel type and uniformity – The type and uniformity of delivered biomass fuel supply are a primary cost driver for any biomass technology. Given the variation of the delivered moisture content and heating value of biomass fuel feedstocks, along with fuel processing issues, the handling and processing costs of biomass fuels can vary greatly. As a result, the type and characteristics of the different biomass fuels can have a material impact on the capital cost of the boiler design, as well as the overall fuel handling and operations cost.

Fuel transport and handling costs – The availability of sufficient biomass fuel resources near the plant location is a critical driver for operating cost. Most biomass fuel is transported by truck to a plant site. To maintain commercially reasonable prices, the effective economic radius from the plant location to the aggregate fuel supply is limited to about 100 miles. The varied nature of biomass fuel feedstocks also necessitates special handling equipment and larger numbers of dedicated staff than are needed for coal-fired combustion power plants of equivalent size. As a result, the typical maximum size of biomass plants is limited to about 50 MW in California (McCann et al., 1994).
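[An illustrative sketch of why haul distance drives delivered biomass fuel cost and limits the economic radius; all prices below are hypothetical placeholders, not figures from the CEC report:]

# Delivered fuel cost rises with haul distance. Values are hypothetical.
def delivered_fuel_cost(farmgate_cost_per_ton: float, haul_miles: float,
                        trucking_cost_per_ton_mile: float) -> float:
    return farmgate_cost_per_ton + haul_miles * trucking_cost_per_ton_mile

# Hypothetical: $30/ton fuel at the source, $0.25 per ton-mile trucking
for miles in (25, 50, 100, 150):
    print(miles, "mi:", delivered_fuel_cost(30.0, miles, 0.25), "$/ton")
# Delivered cost climbs steadily with distance, which is why the economic radius
# around a plant is limited (about 100 miles, per the report).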

Small biomass facilities lose a great deal of power over transmission lines.

[Table not reproduced: Interconnection Loss Estimates for Generation Tie-Lines (transmission losses on interconnection tie-lines).]

Boiler island cost – Capital cost of the boiler island is a critical cost driver that can account for roughly 40 to 60 percent of the overall plant cost, depending on the type of biomass combusted and the need for post-combustion pollution controls. The choice of source and type of fuels to be combusted is an important cost driver. In addition, the escalation trends for raw materials used in manufacturing the boiler island, primarily steel cost, are factors that can influence delivered boiler island cost.

Long-term fuel supply contracts – Most current biomass fuel supply contracts are of short-term duration and can entail varying fuel qualities. A key cost barrier to promoting biomass circulating bed combustion in California is the ability to develop and achieve performance on long-term (for example, 5 years duration and longer) fuel supply contracts for available fuel sources.

Plant scale – While current CFB technology has been proven in utility-scale applications of up to 300 MW, fuel supply availability limits potential plant scale. Steam-generator scale economies are substantial, with a 50 MW biomass plant likely to cost substantially more per kW than a 500 MW coal-fired plant of the same technology (McCann et al., 1994).

Emissions control costs – Costs of emission control needed to satisfy air quality and permitting requirements can increase the cost of biomass plants. Post-combustion emissions control technologies, such as selective catalytic reduction/selective noncatalytic reduction technologies for NOx control, and additional particulate matter controls, are important cost drivers that can significantly increase the capital and operating costs of biomass plants.

Posted in Biomass, Electric Grid | Tagged , , | Leave a comment

Integrating renewable power research

Below are intermittent energy integration posts, workshops, and other research on this topic that aren’t integrated and summarized into a post yet.

Savage, W. 2012. The Full Cost of Renewables: Managing Wind Integration Costs in California. Pomona Senior Theses. Paper 57.

[This 71-page paper has some great explanations of how hard it already is to operate the electric grid and issues with wind integration]

The costs of building and operating a renewable power generator do not paint a complete picture. Due to their unpredictable and variable generation profiles, renewable sources of energy such as wind impose a unique burden on the rest of the electric power system. In order to accommodate this less reliable renewable power, the remaining conventional generation units must deviate from their optimal operating profiles, increasing their costs and potentially releasing additional GHG. Although this burden is conceptually understood, it is not explicitly valued in the market today. Thus, when analysts and policymakers discuss the cost-effectiveness of renewable energy as a GHG-reduction strategy, a key element, known as wind integration costs, is missing from the cost side of the equation.

Wind integration costs will only increase with time. Thanks to a diverse resource mix, California should see modest integration costs for the time being. However, as policymakers consider moving beyond the 33% RPS standard to even more ambitious goals, they are more likely to encounter the non-linearities found in most studies. Furthermore, if California truly wants to be a national leader, it needs to demonstrate that its solutions can be replicated at the national scale, not just in areas whose wind resources are balanced by significant solar, geothermal and hydroelectric potential.

The cost of integrating renewable power can be generally defined as the cost of all actions taken to maintain the reliability of the electric grid in response to the uncertainty and variability of renewable power. This chapter will explain, from a physical operations perspective, exactly what those actions are.

Traditional System Operations. The day-to-day job of electric system operators is organized around one central goal: maintaining the reliable flow of electricity to customers, or more colloquially, “keeping the lights on”. In order to do this, groups known as balancing authorities maintain careful control over the electric grid at all times. Each such organization is responsible for maintaining reliability within a certain geographic region; for example, the California Independent System Operator (CAISO) is responsible for maintaining the reliable supply of power to most of California. In order to maintain reliability, each balancing authority must match the supply and demand for power within its territory at all times. The demand for power is known as “load”, and represents the sum of electricity being drawn by residential, commercial and industrial customers. This power is supplied by electric generation from power plants, such as coal-fired steam power plants, nuclear generation stations, hydroelectric dams or wind turbines. Power can also be imported or exported from one balancing authority to another.

One crucial feature of electric power is that, generally speaking, it cannot be stored. Most consumer goods are produced, put into inventory, and then sold whenever a customer wants to buy them. Electricity has no such “shelf life”. When electricity is generated at a power plant, it must be consumed instantly. Therefore, balancing authorities must make sure that the amount of power being generated is equal to load, not just in the aggregate, but at any given instant.

If generation exceeds load, it will increase the frequency of the alternating current power that flows through transmission lines, and vice versa. By convention, electric devices in the United States are designed to operate using an alternating current at a constant, 60-hertz frequency. Even small deviations to this frequency can cause serious damage to electrical equipment, and can trigger generator trips or load shedding to avoid a system emergency. Even without intermittent renewable technologies, the task of instantaneously balancing load and generation is a significant challenge for system operators.

As a general rule, system operators cannot control the amount of load that customers demand at any given time. Therefore, they must forecast the expected load, and then plan ahead so that enough generation will be available to meet demand. For example, on any given morning, the CAISO will estimate the hourly load profile for the next day. The ISO might estimate a load of 16,000 MW for the hour of 12am to 1am, a load of 15,000 MW for the hour of 1am to 2am, and so forth. Then, power plants can submit bids to provide this energy. The ISO will accept as many bids as necessary to meet projected demand, starting with the lowest-cost bids and moving up the cost curve. Based on the results of this bidding process, the ISO will produce an energy schedule, which specifies which power plants will generate power, when they will generate power, how much power they will produce, and how much they will be paid. It also issues daily unit commitment instructions, so that power plants with long start-up times can turn on or off (CAISO, 2010a).

However, this process is imperfect. The actual load drawn by consumers does not follow the neat forecast assumed during the planning process. For example, the forecast of 16,000 MW for 12am to 1am is almost certainly wrong. Specifically, three types of errors are possible. First, the estimate of load could be biased. For example, during the first hour of the day, customers could use more total energy than expected. Second, the load could rise or fall during the hour, giving it an intra-hour load shape. For example, customers might use 17,000 MW at 12am, and decrease their usage to 15,000 MW by 1 am. Third, the load could fluctuate randomly about the average of 16,000 MW, creating a sawtooth pattern. All three of these possibilities are both realistic and common in normal grid operations.
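[A minimal sketch of the bid-stacking logic described above; the bids and load forecast are invented for illustration, and the real market clearing also handles transmission constraints, unit commitment, and co-optimization with ancillary services:]

# Minimal merit-order sketch: accept the cheapest supply bids until the forecast
# hourly load is met. Bid data are invented for illustration only.
bids = [  # (plant name, offered MW, price $/MWh)
    ("hydro_a", 3000, 5.0),
    ("nuclear_a", 4000, 12.0),
    ("gas_cc_a", 6000, 28.0),
    ("gas_peaker_a", 2000, 55.0),
]
forecast_load_mw = 11000

schedule = []
remaining = forecast_load_mw
for name, mw, price in sorted(bids, key=lambda b: b[2]):  # cheapest first
    if remaining <= 0:
        break
    awarded = min(mw, remaining)
    schedule.append((name, awarded, price))
    remaining -= awarded

print(schedule)
# -> hydro_a and nuclear_a fully scheduled, gas_cc_a partially (4,000 MW),
#    and the peaker not needed for this hour.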

In addition to load uncertainty, there is also a possibility that expected generation will be unavailable. For example, a fire might damage a transmission line scheduled to provide power from a distant source, or a mechanical failure could force a natural gas plant to shut down. While it is impossible to prepare for every possible contingency, grid operators include the possibility of these unexpected events in their planning process.

In short, “irrespective of current and future levels of wind generation, power systems are already required to cope with significant variability and intermittency concerns” (Fox et. al, 2007). The issues of uncertainty and variability, so often associated with renewable power, already exist in electric systems. Operators manage these issues using “ancillary services”, which are used to match generation and load at a more granular level.

A power plant is said to provide ancillary services if a certain portion of its capacity is set aside to be flexible. In addition to simply providing energy, power plants can choose to sell the ability to accommodate changes in demand on short notice. For example, a 100-MW gas-fired power plant might provide 80 MW of steady power, and also offer the ability to increase or decrease its generation by up to 20 MW. The capacity set aside for this purpose is known as the operating reserve. Power plants can offer several different types of operating reserves, differentiated primarily based on how fast they can respond to a dispatch order requiring them to increase or decrease generation.

Unfortunately, there is no single set of ancillary service definitions; the names and exact technical specifications vary among different balancing authorities and countries, largely as a matter of convention. However, there are a few common categories. Almost all balancing authorities will have some kind of fast-responding ancillary service, variously known as frequency regulation or primary control. Regulation service is designed to respond on the order of seconds, and is controlled by an Automated Generation Control (AGC) system. This allows generation to automatically adjust to small fluctuations in load (Rebours et. al, 2007).

Balancing authorities also have ancillary services that allow manual adjustments to generation, which are generally slower in response time and larger in magnitude.

Generally speaking, there will be different types of operating reserves for load following, imbalance energy and contingencies. Load following refers to the ability to track the shape of the day’s load profile at a greater granularity than hourly schedules, and generally operates on the order of minutes. Imbalance reserves help to compensate for net schedule bias, and contingency reserves are in place to replace generation that could be lost in a system emergency, such as the loss of a major transmission line (Dragoon, 2010).

It would be economically infeasible – to say nothing of physically impractical – to have enough operating reserves to respond to every imaginable contingency. Instead, balancing authorities select a reasonable operating margin to provide a satisfactory level of reliability. The size of this operating margin is based on several factors, including the largest possible single contingency event, the availability of power plants connected to the system, and the expected error in demand forecasts (Ferris and Infield, 2008).

Greater operating reserve requirements to maintain grid reliability impose a cost that is ultimately paid by electric ratepayers. These ancillary services are the means through which system operators manage uncertainty and variability. Currently, that operating challenge is driven by the characteristics of load. The addition of variable energy sources, such as wind power, will increase the magnitude of this operating challenge; however, the challenge remains conceptually the same.

Understanding Wind Power’s Impact. Therefore, our first task in evaluating the cost of wind integration is to assess the extent to which wind power increases the requirement for balancing reserves. To do so, it is helpful to think of wind as “negative load”. Since wind power can generally not be controlled, its behavior is more similar to load than generation. By subtracting the amount of wind generation from load, one creates a new “net load” profile.

Then, balancing authorities must operate traditional power plants so that their generation matches net load, as opposed to raw load. Due to the inclusion of wind power, net load will be more unpredictable and more variable than raw load. However, the techniques used to balance net load are the same ancillary services that are provided in traditional systems. Kirby and Milligan (2008) note that wind has many similar characteristics to load, and that the differences in managing the two are “more of degree than kind,” as wind “add[s] to aggregate variability.”
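[A small sketch of the "net load" idea described above; the hourly numbers are invented for illustration:]

# Net load = load minus wind generation. With variable wind, net load is what
# conventional generation must follow. Hourly values are invented for illustration.
import statistics

load = [15000, 15500, 16000, 16500, 16000, 15500]   # MW
wind = [1200, 400, 1800, 600, 2000, 300]             # MW
net_load = [l - w for l, w in zip(load, wind)]

print(net_load)
print("load swing:", max(load) - min(load), "MW")
print("net-load swing:", max(net_load) - min(net_load), "MW")
print("stdev load:", round(statistics.stdev(load)),
      "vs net load:", round(statistics.stdev(net_load)))
# Net load is typically more variable than raw load, which is what drives the
# incremental balancing requirement.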

The crucial question is how much of each type of ancillary service is required, and then how much will it cost. In order to determine the impact of wind power on the reserve requirements for net load, it is important to first understand the characteristics of wind power generation.

The power generated by a turbine is a function of wind speed, and has 4 distinct regions. Light winds will not generate any power at all; the minimum level of wind required to generate electricity is known as the cut-in speed, often around 4 m/s. From there, the wind power increases as a cubic function of wind speed, until the turbine reaches its maximum rated power output. Within this region, the output can change dramatically in response to even small changes in the wind. Once the rated power is reached, usually at 13-14 m/s, the wind speed can continue to increase but output will remain constant. However, if the wind reaches too high of a speed, often at 25 m/s, the turbines must shut down, or “cut off”, to avoid damaging the equipment. The sudden drop-off of power is another potential source of power variability (Laughton, 2007).
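[A simplified power-curve sketch using the speeds quoted above (cut-in around 4 m/s, rated at roughly 13 m/s, cut-out at 25 m/s); the cubic interpolation and the 2 MW rating are idealizations, not a specific turbine's curve:]

# Idealized wind turbine power curve using the cut-in / rated / cut-out speeds
# quoted in the text. Real turbine curves differ in detail.
CUT_IN, RATED_SPEED, CUT_OUT = 4.0, 13.0, 25.0   # m/s
RATED_POWER = 2.0                                 # MW, hypothetical turbine size

def power_output(wind_speed: float) -> float:
    if wind_speed < CUT_IN or wind_speed >= CUT_OUT:
        return 0.0                                # below cut-in or shut down
    if wind_speed >= RATED_SPEED:
        return RATED_POWER                        # flat region at rated power
    # Cubic region between cut-in and rated speed
    frac = (wind_speed**3 - CUT_IN**3) / (RATED_SPEED**3 - CUT_IN**3)
    return RATED_POWER * frac

for v in (3, 6, 10, 13, 20, 26):
    print(v, "m/s ->", round(power_output(v), 2), "MW")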

Incremental Reserve Requirements. These trends of variability and uncertainty help determine the incremental reserve requirements; in other words, how much more balancing capacity is required to maintain reliability on a grid with wind than on one without wind? One common misconception is to assume that all variability and uncertainty associated with wind power must be counter-balanced by a dedicated flexible power plant. This is simply not true. Kirby and Milligan (2008) describe how "the power system does not need [to] respond to the variability of each individual turbine"; instead, the system must "meet the North American [Electric] Reliability Corporation (NERC) reliability standards and balance aggregate load-net wind with aggregate generation."

"Fortunately, wind and load tend to be uncorrelated, so they do not add linearly, greatly reducing the net flexibility required from conventional generation." Reliability standards are typically proportional to the standard deviation of the differences between actual load and scheduled load, or the load errors. For example, NERC standards require balancing authorities to maintain sufficient reserves such that 10-minute errors can be contained within certain limits 90% of the time in each month (Dragoon, 2010). In other words, the required balancing reserves depend on the magnitude of the 90th percentile error, which is directly proportional to the standard deviation for approximately normal distributions. As more wind is added to the grid, the standard deviation of net load errors will increase, requiring more incremental reserves. However, as Kirby and Milligan explain above, the standard deviation of net load error is not simply the sum of the standard deviations of load error and wind error.
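[A sketch of how uncorrelated errors combine; the standard deviations below are invented, and the root-sum-square rule is the usual statistical reading of "do not add linearly" (the three-sigma reserve sizing is the rule of thumb cited later in this section):]

# For uncorrelated load and wind forecast errors, standard deviations combine in
# quadrature rather than adding linearly. Values below are illustrative only.
import math

sigma_load = 500.0   # MW, hypothetical std dev of load forecast error
sigma_wind = 300.0   # MW, hypothetical std dev of wind forecast error

sigma_net = math.sqrt(sigma_load**2 + sigma_wind**2)
print(round(sigma_net))                              # ~583 MW, well below 500 + 300 = 800
print("reserve at ~3 sigma:", round(3 * sigma_net), "MW")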

With a wind penetration level of 20% scheduled using persistence forecasts, the grid would require 7% of wind capacity to be set aside as operating reserves.

Millborrow (2007) estimates that if wind supplies 10% of electricity, the incremental reserve requirements would equal 3-6% of the wind’s rated capacity; that number grows to 4-8% at 20% penetration levels.

Milligan (2003) estimates that with 17% of energy coming from wind, incremental reserve requirements equal 6-11% of rated wind capacity, depending largely on forecast quality.

Gross et al. (2006) find similar results in a review of several studies, with 5-10% reserve requirements at 20% wind. Most studies find that reliability can be achieved by procuring balancing reserves of approximately three times the standard deviation of net load error (Holttinen et. al, 2008).

Second, the studies confirmed that costs of additional ultra-fast regulation reserves were minimal. This is consistent with the idea that, aggregated across an entire system, very large swings in power output simply do not happen within seconds, or even a few minutes.

Third, many studies find that costs of integration increase non-linearly as a function of wind penetration level. There are several intuitive reasons for this result. First, as demonstrated in the previous chapter, there are increasing marginal quantities of balancing reserves required to deal with increasing levels of wind. At low levels of wind, the variability of net load only increases by a small fraction of the variability in wind alone. At higher levels of wind, the variability of net load increases at an almost 1:1 rate with the variability of wind alone. Second, the marginal costs of providing these balancing reserves also increase as more wind is added to the grid. In well-functioning markets, economic dispatch systems are used to find the most cost effective way to balance wind power. This means that highly flexible units that can easily provide ancillary services are used first, and more expensive balancing services come later. Third, earlier projects are likely to use the geographic areas with the highest wind speeds and capacity factors, which tend to have a more stable energy output. The addition of inferior project sites can cause integration costs to rise.

Many studies find a key point of inflection in wind integration costs to be on the order of 20% penetration.

Millborrow (2007) reviews several more theoretical studies on high wind penetrations, and finds that double-digit integration costs are likely to begin when wind reaches 20-30% of electric generation on a standard system.

Although California has an aggressive RPS, its current and projected mix of renewable projects is relatively well balanced. Forecasts for the year 2020 shown below suggest that wind will only comprise approximately 30% of California’s RPS goals; solar power will comprise another 35%, geothermal another 20%, and the remaining 15% will come from biomass, biogas and small hydro (CPUC). The existence of legacy contracts in geothermal power from the days of the Public Utility Regulatory Policy Act and excellent solar resources have helped achieve this balance. Generally speaking, geothermal provides baseload power, and solar’s fluctuations are independent of wind. Therefore, it seems that for the time being, wind’s penetration within the entire electric grid will remain below 15%, sparing California from the significantly higher integration costs that seem to begin at around 20%.

California’s on-shore wind resources are clustered in three main areas: Altamont Pass which is east of San Francisco, Tehachapi Pass which is south of Bakersfield, and San Gorgonio Pass outside of Palm Springs; together, these three areas produce over 95% of California’s wind power from over 13,000 turbines (California Energy Commission). Within each area, geographic diversity is limited, as the best resources are tightly clustered. However, the fact that all three areas work under the same ISO is good for costs, because they are far enough apart to achieve low cross-correlations. Another relevant factor to integration costs is overall grid flexibility, which is influenced by the type and cost of other generating units available to provide balancing services.

In 2010, just over 70% of energy was generated inside California, as opposed to imported; in-state generation is generally used for renewables integration. Of in-state generation, over half comes from natural gas, which is a decently flexible resource. Combined-cycle gas turbines that are already on, as well as gas turbines, provide an important source of flexibility for the grid. Approximately 20% of energy comes from “baseload” sources, such as coal, geothermal and nuclear power, which have difficulty with fast cycling. Hydroelectric power, which is physically the most flexible resource when not subject to policy constraints, provides 15% of in-state generation, and the remainder comes from variable renewable sources (CEC 2010).

The physical flexibility of California’s resources is quite good, especially the mix of natural gas and hydroelectric power. This, along with the geographic distance between major wind farms and the fact that wind levels are relatively low, indicates that integration costs have the potential to be comparatively low in California.

California’s Market Design. It is worth understanding the conventions, terminology and market processes used in California’s electric markets to avoid potential confusion. While most modern electric system operators follow the same principles, specific details vary from region to region. Within California, CAISO is responsible for making sure that generation and load are always equal, and it does so in several stages.

The first stage is the day-ahead market (DAM), also known as the integrated forward market (IFM), and is the “first cut” at scheduling energy generation to match demand. The process to schedule energy for any given operating day begins with the submission of energy bids. Generating units submit bid curves for each operating hour, containing several important characteristics. All generators have minimum and maximum physical operating levels; for example, a gas-fired plant may be able to operate between 20 MW and 100 MW. Then, bids may include a portion of capacity that is “self-scheduled”, meaning that the generator is willing to supply that quantity regardless of price. The bid curve then includes minimum prices that the generator is willing to accept for various quantities of energy. Continuing in the example, the gas generator may be willing to supply between 20 and 40 MW at any price, so it would submit a self-schedule bid up to 40 MW. Then, it might offer to provide between 40 MW and 70 MW for a minimum price of $20 / MWh, and up to 100 MW for a minimum price of $30 / MWh. Generators may use up to 10 different price-quantity combinations in their bid curves, and may submit different bids for different operating hours.

Finally, every bid contains operational details, including the cost and time required to start up the plant, information about whether the plant is already online, and how quickly the plant can move (“ramp”) from one power level to another. Bids may come from generators within the CAISO or anyone wishing to import power from a neighboring balancing authority.

Simultaneously, generation units may also submit bids to provide ancillary services. Specifically, the ISO explicitly procures four types of ancillary services: regulation up, regulation down, spinning reserves, and non-spinning reserves. (Load-following services are not an explicit ancillary service in CAISO, and will be discussed shortly.) Regulation up and down are the capacity to adjust output in response to an automatic signal on a near-instantaneous basis, while spinning and non-spinning reserves are reserves that can provide power within 10 minutes in the event of a system contingency. Generators wishing to participate must include, for each hour, the quantity of each ancillary service they wish to provide, their minimum price for doing so, and operational information about their ramp rates. These generators may submit mutually exclusive bids for energy and ancillary services.

Thirdly, in the DAM, load-serving entities submit bids to purchase energy. Similar to supply bids, demand bids can either come as self-schedules (i.e., willing to buy a certain quantity of energy at any price) or as price-quantity curves. These demand bids can be used to serve load within CAISO or to export power to a neighboring balancing authority. Finally, based on these load forecasts, CAISO will determine the desired quantity of ancillary services to meet its reliability obligations.

The DAM closes at 10:00 AM on the day before any given operating day. For each operating hour, CAISO uses a co-optimization model to take the energy supply bids, energy demand bids, ancillary service supply bids, ancillary service demand requirements, and any available information such as transmission constraints, and find the least-cost way to dispatch generation units to meet load and ancillary service requirements. Later in the afternoon, the results are published, and generation units can see their schedules for the next day. This DAM process is where the bulk of the work happens: most of the energy, non-spinning reserves and spinning reserves are scheduled through the DAM, and all regulation reserves are procured at this stage.
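[A minimal data-structure sketch of the bid format described above (a self-scheduled quantity plus up to 10 price-quantity segments); the names and clearing logic are invented for illustration, and the real CAISO bid format has many more fields:]

# Illustrative representation of the hourly bid structure described in the text.
from dataclasses import dataclass, field

@dataclass
class EnergyBid:
    unit: str
    min_mw: float                       # minimum physical operating level
    max_mw: float                       # maximum physical operating level
    self_schedule_mw: float             # quantity offered regardless of price
    segments: list = field(default_factory=list)  # [(up_to_mw, min_price_per_mwh)]

# The gas plant example from the text: a 20-100 MW unit, self-scheduled to 40 MW,
# 40-70 MW at $20/MWh, and 70-100 MW at $30/MWh.
gas_bid = EnergyBid("gas_plant_a", 20, 100, 40,
                    segments=[(70, 20.0), (100, 30.0)])

def awarded_quantity(bid: EnergyBid, clearing_price: float) -> float:
    """Quantity the unit would supply at a given clearing price (simplified)."""
    mw = bid.self_schedule_mw
    for up_to_mw, min_price in bid.segments:
        if clearing_price >= min_price:
            mw = up_to_mw
    return mw

print(awarded_quantity(gas_bid, 25.0))   # -> 70 MW at a $25/MWh clearing price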

The second stage, the hour-ahead scheduling process (HASP), is where the bulk of wind scheduling comes into play. Currently, California uses a program known as PIRP, the Participating Intermittent Resource Program. Under PIRP, CAISO contracts with an external vendor to create generation forecasts for all wind farms under the program. These forecasts are released 105 minutes prior to the start of each operating hour, and participating generators use that forecast as a self-scheduled supply bid quantity during the HASP. Using the officially sanctioned forecast has economic benefits for wind generators that will be discussed later. New bids and adjustments for the HASP market must be submitted no later than 75 minutes prior to the start of any given operating hour. CAISO re-runs its optimization software and publishes the results no later than 45 minutes before the start of the operating hour. By this point, the “baseline” hourly energy schedule is fixed, the energy schedules on the interties between CAISO and other balancing authorities are fixed, and the quantities of available ancillary services are fixed.

The third and final stage involves real-time operations. This stage uses two tools, real-time economic dispatch and regulation reserve, to match generation to the intra-hour variations in load. Real-time economic dispatch (RTED) is how CAISO provides load-following (LF) services. Suppose that the final hourly energy schedule was 5,000 MW, but load quickly increased to 5,100 MW. In this situation, CAISO would look back at the economic energy supply bids it had received and award an additional 100 MW to the cheapest available generation, subject to operational and locational constraints. Alternatively, if load fell to 4,900 MW, CAISO would reduce the most expensive 100 MW of generation that could feasibly make that adjustment. In real time, CAISO makes these adjustments to its economic dispatch every 5 minutes to provide load-following. The other tool, regulation reserve, is dispatched automatically and continuously to absorb the fastest fluctuations. At the end of each 5-minute period, CAISO uses its real-time economic dispatch to compensate for the net change that has occurred since the last adjustment. This way, regulation reserves can be “reset” to their base point, so that this ultra-fast capacity will be fully available in the next 5-minute period.
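
The interaction between 5-minute economic dispatch and regulation can be sketched as a simple loop. The structure and numbers below are illustrative only and do not represent CAISO’s actual software; the 5,000 MW schedule is carried over from the example above.

    # Illustrative 5-minute load-following loop: regulation absorbs the instantaneous
    # imbalance, then real-time economic dispatch (RTED) re-bases generation so the
    # regulation units return to their setpoints for the next interval.

    def run_interval(actual_load_mw, dispatch_base_mw):
        imbalance = actual_load_mw - dispatch_base_mw   # first absorbed by regulation
        # At the interval boundary, RTED moves economic bids up or down to cover the
        # imbalance, "resetting" regulation units back to their base points.
        new_dispatch_base = dispatch_base_mw + imbalance
        return new_dispatch_base, imbalance

    base = 5000.0                            # final hourly energy schedule (MW)
    for load in (5100.0, 5060.0, 4900.0):    # successive 5-minute load observations
        base, reg_used = run_interval(load, base)
        print(f"load={load:.0f} MW  new RTED base={base:.0f} MW  regulation absorbed={reg_used:+.0f} MW")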

Wind integration costs come in several forms: from energy imbalance met by load following, from increased requirements for regulation reserve, and from less efficient use of conventional plants.

One commonly mentioned solution to the integration challenge is dedicated energy storage technologies. Proponents argue that energy storage devices, such as large battery arrays, can store excess energy when the wind is producing large amounts of power, and discharge that energy to the grid when the wind stops blowing. Popular media often portrays storage technologies as a “silver bullet” solution, and technology vendors are not shy about echoing that idea. For example, A123 Systems, a manufacturer of lithium ion batteries, published a white paper that showcases how the company’s technology can manage fluctuations in renewable energy, help reduce CO2 emissions, and promote grid reliability (Vartanian).

Despite the appeal of energy storage, the economics simply do not add up for its use in renewables integration. Rittershausen and McDonagh (2010) examine the use of energy storage for intermittent energy smoothing and shaping, an application that could potentially reduce load following requirements, and find that costs exceed benefits by two orders of magnitude. Other potential uses of energy storage, such as providing ancillary services or shifting energy from off-peak to on-peak periods, are (1) also not cost-effective, and (2) not linked to renewables integration nearly as directly as industry insiders would argue. However, there are other ways to induce a negative correlation between changes in load and wind generation, apart from dedicated energy storage devices. Demand-side management (DSM) uses devices that are already deployed on the grid, and as a result, can achieve many of the same benefits of storage at considerably lower cost.

Giant energy storage projects are not cost-effective, and CAISO cannot simply “spread out” wind generators.

The inherent flexibility of generating resources is largely fixed, and policymakers have only limited control over the mix of renewable technologies.

 

CEC. 2008. Transmission technology research for renewable integration. California Institute for Energy and Environment for the California Energy Commission. CEC-500-2014-059. 123 pages.

From a transmission operational dynamics perspective, geothermal and biomass energy are similar to traditional power generators, especially base-load, and therefore do not pose much concern about their operational behavior within the power grid, though some biomass resources vary seasonally.

Some types of renewable generation, however, are “fueled” by variable, or intermittent, energy sources like wind and sunshine, i.e., insolation, which are controlled by weather and rotation of the earth. These intermittent renewables can create renewable energy power plant behaviors for which the grid was not designed and that are quite unfamiliar to grid operators and outside their control. To achieve a 20% renewable energy content will require a projected renewable nameplate capacity of over 14,000 MW with more than 60% of that capacity coming from the intermittent renewable forms of wind and solar. To achieve 33% would require 26,000 MW of renewable nameplate capacity.

Relatively small penetrations of intermittent renewables are expected to have “operational implications significant but manageable” (“California Independent System Operator Integration of Renewable Resources,” David Hawkins & Clyde Loutan, Cal ISO, November 2007). For greater penetration levels, however, transmission infrastructure expansion, improved wind and solar forecasting, increased ancillary services for the grid, and new technologies for a smarter grid will likely be required. Energy storage might also be deployed to mitigate some of the effects of intermittency.

The overall situation is complicated by the current and projected status of the grid over the next few years, even without considering the addition of renewables. Much equipment is aging and planned to be retired during the next 10 years. Prospective once-through cooling regulations may accelerate this trend. Operating margins have been steadily shrinking as transmission investment has not kept pace with increases in demand. Dynamic operating constraints have emerged which prevent major transmission lines from operating at the levels for which they were designed. Increasing levels of imported power have led to a substantially larger, more interconnected regional grid than envisioned when much of the infrastructure was planned.

Excess Total Generation – To achieve the increasing percentages of renewables, a rapid addition of renewable power plants will be required. The needed rate of addition is considerably higher than the growth of demand and is projected to be higher than the sum of demand growth and the retirement of existing equipment. In other words, the addition of the renewable plants may force the retirement or lowered use of some existing thermal plants, even though they are still viable. Cal ISO forecasts 13% less non-renewable generation in 2020 than in 2008.

Congestion Costs – Once connected to the grid, remotely located resources must be brought into major load centers. Lines which currently have adequate capacity are likely to experience increased periods of congestion.

Stability – Over wide areas, the grid can exhibit unstable behavior if power flows exceed dynamic limits. If not controlled, this can trigger large scale outages. This dynamic grid stability, even without the addition of renewable resources, is a critical issue. To maintain reliability, potential instabilities must be sensed and responded to quickly. While transmission lines have a designed power handling capacity based on thermal limits, instabilities frequently limit maximum transmitted power to levels significantly less. In particular, this limits both the amount of power which can be imported from out of state and amounts which can be transferred from one part of the state to another. The addition of significant remote generating facilities, much of it with low inertia, may have undesirable effects.

Local Area Limitations – Within the state, it appears that much of the new renewable energy will be generated in remote areas, while most of the consumption will be concentrated in the population centers such as the Los Angeles Basin. Five load centers comprise 87% of the total load in California. Power flow transmitted into these areas is channeled through key substations called gateways. Many of these gateways are already operating at their limits, which is typically in the range of 50% of the locally consumed power. If the gateways to an area are limited to 50% of the consumed power, then the balance of the power must be generated within the local area. As a result, even if there is abundant renewable power generated within the state and connected to the transmission system, many existing parts of the system will need significant increases in capacity.

Limited Bulk Storage – Existing large storage facilities, which could act to shift loads from day to night, are extremely limited and may be constrained by transmission limits. The Helms Pumping Facility, one of the largest in the state, with a maximum pumping capability of 900 MW from 3 pumps, operated at this level less than 250 hours in 2005, primarily due to transmission constraints. New pumping facilities require 10 – 12 years to implement.

Intermittent power sources generally complicate the problem of managing the grid.

Extreme Events can be described as system disturbances characterized by multiple failures of transmission system components, resulting in widespread system collapse through cascading outages. Such large-scale events have always been difficult to analyze, plan for, and manage, but the potential severity of such events has grown with the interconnectedness of the grid, and is likely to grow more with the increasing integration of intermittent renewables in the system. Operators in adjoining systems generally don’t have good visibility of each other’s systems, hindering both the detection of impending or initiating extreme events, and effective countermeasures once an operator becomes aware an extreme event is propagating. Existing tools for operators have not been adequate to respond to these events. Currently there is significant effort focused on real-time system awareness and online analytical tools utilizing phasor measurements; additional areas of potentially beneficial research include advanced planning to better identify critical transmission paths, adaptive protection systems, and strategies for automated islanding of the grid.

Because most new renewable power plants will be located in areas rich in renewable resources but remote from California electricity customers, electric transmission will be crucial for transporting the renewable electricity to load centers, and thus for meeting the state’s renewable energy goals. Consequently, each new renewable power plant must be successfully integrated with the transmission system. To fulfill this mission, transmission must achieve three broad objectives:

  1. provide physical access for each new power plant,
  2. reliably accommodate any unique renewable generator behaviors, and
  3. increase its power carrying capacity to handle the additional electric power flows.

It is reasonable to assume that modest penetrations of renewable generation, perhaps up to 20%, can be successfully integrated into the grid by traditional system investments, such as building new lines and conventional generation for increased capacity and to maintain reliability. However, as the penetration of renewables grows, to perhaps 33% and beyond, and more transmission infrastructure is added to the system, its complexity will grow along with operational difficulties. It also will likely become increasingly difficult to meet the environmental and economic criteria for siting new infrastructure in a timely manner, further reducing the effectiveness of the “build” approach.

As an alternative, new technologies can be deployed in the transmission system to endow it with expanded or new capabilities that, at a minimum, will make renewable integration easier and less costly, and ultimately at some higher renewable penetration level, will probably be required to achieve California’s renewable energy goals. Some transmission stakeholders have expressed the opinion that we are already at the level of renewable penetration in California where new technologies will be required.

For most new renewable power plants, access to the transmission system can be directly translated into acquiring new right of way (ROW), and building new transmission lines between the power plant and an interconnect point on the transmission grid. The siting process for a new transmission project is highly complex and difficult, involves many different stakeholders, and takes many years, typically 10 to 12 years for a major line.

While there are a number of state and national policy changes being pursued to shorten this time, concern remains that it will take longer to build the new transmission extension to a renewable power plant than it will to build the power plant. Two major impediments to timely new ROW approvals are cost/benefit allocation economic debates, and siting challenges, exemplified by, “not in my backyard.”

From a transmission operational dynamics perspective, some renewable energy plants, such as geothermal, biomass, and perhaps solar thermal with enough thermal storage, will operate benignly, similar to traditional baseload thermal power generators. Wind and some solar generation, however, are intermittent and exhibit power plant behaviors unfamiliar to grid operators and for which the grid was not designed.

The Energy Commission Intermittency Analysis Project has projected that meeting the 33% goal by 2020 will result in power production capacity in excess of total demand requirements. Existing conventional plants would need to be closed or operated at lower capacity factors, potentially reducing the availability of system support generation. This situation might be compounded if coastal thermal plants using once-through cooling must be shut down.

Finally, to stimulate the private development of renewable power plants, utility contracts generally include the guaranteed acceptance of power generated.

Any transmission line has physical limits on the amount of power that can be transmitted. Which limit is the dominant factor constraining the capacity of a given line at a given time depends on the conditions of that particular line and the broader wide-area transmission grid.

Thermal Limits: The maximum power a particular line can ever handle is its thermal limit. The primary source of heat comes from the interaction between the electrical resistance of the line material and the electric current flowing through it. Above this limit, a line may excessively sag, creating a safety hazard or an outage, or be physically damaged by excessive temperature.

Stability Limits: Poor voltage support and dynamic or transient instabilities can, in some situations, impose capacity limits substantially below the thermal limit. It is not unusual for a major interconnection path to be operationally limited by instabilities to half its rated static thermal limit. This effect imposes severe limits on the amount of renewable power which can be imported into California, and into major load centers within the state.

The most common way of transporting bulk electric energy is by means of overhead AC transmission lines, which are typically constructed of stranded, bare aluminum or aluminum/steel cables, suspended by insulators from steel lattice towers or wood poles. At some point, the loading limit is reached, and some method must be used to increase the line’s capacity. One way, of course, is to build another line, either as a parallel line or higher capacity replacement in the same corridor, or in a suitable alternate route. Assuming that the existing corridor is the only feasible one and has no additional space, there are a number of technological approaches available for increasing the power carrying capacity within the constraints of the existing ROW.

4.2.2 New Capabilities Addressed

Access Siting Capability #1: To facilitate environmental and societal deliberations, and enhance acceptability of new transmission lines. The addition of substantial amounts of new renewable generation to the electric system will require that new transmission lines be built between the renewable plants and the existing transmission grid, and will also likely require significantly increased power-carrying capacity from the transmission gateways to the loads. These overhead transmission technologies can provide the additional needed capacity with reduced visual and environmental impacts compared to conventional overhead lines, potentially simplifying and easing the permitting process.

Reconductoring involves replacing the stranded conductors in the line with new ones of larger diameter. This is the most common upgrading method, with minimal visual impacts due to the new appearance of the line. Since the current-carrying capacity (and by corollary, power transfer capacity) of a conductor is roughly proportional to its cross-sectional area, a conductor of 50% larger diameter can have up to 2.25 times the capacity. This increase in conductor size is not difficult to accommodate; if the tower crossarms do not need strengthening, the only modifications needed are replacement of the suspension clamps that attach the conductor to the insulator string. Even if towers and crossarms need strengthening, the additional costs will still be reasonable, and visual changes to the line will not be significant. The only other issues are possible upgrades to terminal equipment, such as transformers, relays, switches, etc., to handle the additional current; and stability studies to assess the need for greater remedial action for contingencies at the higher current level. In general, this is a mature and cost-effective technology, and is the first and best option for utilities when additional capacity is needed.
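
The 2.25 figure follows directly from the report’s rule of thumb that capacity scales with cross-sectional area; a short worked equation under that assumption:

    % Assuming, per the report's rule of thumb, that capacity scales with cross-sectional area A:
    A \propto d^{2}
    \quad\Longrightarrow\quad
    \frac{\text{capacity}_{\text{new}}}{\text{capacity}_{\text{old}}}
      \approx \left(\frac{1.5\,d}{d}\right)^{2} = 2.25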

Bundling simply means using two or more conductors per phase. Adding a second conductor identical to the first (the usual practice) doubles the current, which doubles the power transfer. Like reconductoring, this is a mature and cost-effective technology that is one of the first alternatives considered by transmission planners, usually involving simple retrofits of suspension clamps, possible replacement of insulators, and possible upgrades to towers and crossarms. Visual impacts are slightly higher than for reconductoring, which may be an issue in the permitting process.

When it is not feasible to increase the current in a transmission line corridor by reconductoring or bundling conductors, the line can be converted to the next voltage level, e.g., from 115 kV to 230 kV. The increase in power is proportional to the increase in voltage, in this case, by a factor of 2. If the existing conductors are used, the only changes to the line itself are new insulators and possibly some strengthening of the towers and crossarms, so the visual impacts are minimal. However, the terminal equipment, including transformers, circuit breakers, relays, switches, etc., must be upgraded, and the costs for this will be significant. This is also a mature technology, the cost parameters of which are well known and included in the transmission planning analysis process.

4.2.4 Gaps Reconductoring, Bundling and Voltage Uprating

These are all mature technologies, well-known to the utility industry, cost-effective, and widely used. Barriers to wider use include issues of cost, cost recovery, and visual and environmental impacts that lead to intervention in the permitting process by various stakeholders.

Conventional underground transmission lines are constructed with copper wires (conductors) encased in an insulating material such as oil-impregnated paper, inside a pipe-type enclosure (conduit), and buried in a trench under special backfill material to dissipate the heat generated in the cables. The inside of the conduit is filled with an insulating oil similar to that used in transformers, or an insulating gas such as SF6, to provide high dielectric strength (insulating ability) between the copper conductors and the conduit, which is at ground potential. Newer types use polyethylene sheathing as the dielectric material, and do not use oil or gas insulating media. The public generally views underground lines as having far fewer negative impacts than overhead lines, although there are still several difficult issues to address:

  1. Construction costs for an underground line can be up to 10 times the cost of an overhead line of the same capacity, and construction can take much longer.
  2. Underground lines are impractical in mountainous areas, where drilling through rock is required.
  3. The biggest environmental impact will be ground disturbance in the immediate vicinity of the trench during construction, which can be significantly disruptive, albeit temporary. Access to underground lines is also more difficult when maintenance is required, which can lengthen outage times.
  4. Underground lines are more susceptible to damage from construction activities, because they are not visible to crews operating equipment.
  5. Joints in the conduit can leak, spilling oil into the surrounding soil, or releasing the insulating gas (SF6 is about 15,000 times more potent as a greenhouse gas than CO2).
  6. Lengths are limited to about 40 miles between substations, because of the high shunt capacitance (and resulting charging current) of underground transmission cables.
  7. The main barrier to wider use of underground cables is cost: not just the cost of the cables themselves, but also the costs of constructing the trench for the cable. HDPE technology is helping to make the cable cost itself more reasonable over time, but more economical methods for installing the cable are needed.
  8. The environmental effects of current construction and trenching methods are also significant.

4.4 High-Voltage Direct Current (HVDC) Transmission Technologies

4.4.1 Technology Overview

High-Voltage DC (Conventional) HVDC transmission lines, as they have been typically developed and implemented to date, consist of AC-to-DC converters on the sending end, DC-to-AC converters on the receiving end, and an overhead transmission line or an underground cable system as the transmission path. The converters, which can be considered solid-state transformers, rely on high-voltage, high-power thyristors (semiconductors that are triggered by the AC voltage). Since only two conductors (poles) are needed for DC, vs. three phases for AC, the transmission line, insulators and towers can be more compact and less expensive than AC lines, and less space is needed (and less land needs to be acquired) for the ROW. However, the converter terminals for HVDC are very expensive, being based on high-voltage solid-state electronics and requiring large banks of AC capacitors at both ends to provide reactive support; thus, intermediate substations for stepping down the voltage add significantly to the cost of HVDC transmission systems. HVDC has traditionally been used when large blocks of power need to be transmitted long distances, and has been used at voltages up to 800 kVDC and several thousand MW of power capability. Historically, the breakeven point for AC-vs.-DC overhead lines has been around 400 miles: HVDC is more economic for transmission distances longer than that (where its lower line costs predominate), and AC is more economic for distances shorter than that (where its lower terminal costs predominate). Underground HVDC cables have an additional advantage over AC cables in that they do not have the problem of AC capacitance; therefore their length is not limited to the 40 miles or so that AC cables are.
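
The 400-mile breakeven reflects the trade-off between DC’s expensive converter terminals and its cheaper line-miles. The sketch below illustrates that trade-off with purely hypothetical cost figures, chosen only to reproduce a 400-mile crossover; actual costs vary widely by project.

    # Illustrative AC-vs-DC breakeven distance. Cost figures are hypothetical placeholders,
    # chosen only to show the structure: DC pays more at the terminals, less per mile.

    def total_cost(terminal_cost_musd, line_cost_musd_per_mile, miles):
        return terminal_cost_musd + line_cost_musd_per_mile * miles

    AC = dict(terminal=50.0,  per_mile=2.0)    # hypothetical $M values
    DC = dict(terminal=450.0, per_mile=1.0)

    # Breakeven distance where the two totals are equal:
    breakeven_miles = (DC["terminal"] - AC["terminal"]) / (AC["per_mile"] - DC["per_mile"])
    print(f"breakeven ~= {breakeven_miles:.0f} miles")   # 400 miles with these placeholders

    for miles in (200, 400, 800):
        ac = total_cost(AC["terminal"], AC["per_mile"], miles)
        dc = total_cost(DC["terminal"], DC["per_mile"], miles)
        print(miles, "mi:", "AC cheaper" if ac < dc else "DC cheaper or equal")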

The standard HVDC technology as it has been used to date, e.g., in the Pacific HVDC Intertie, the Intermountain Power Project, and many others, is a mature technology that has been continually refined over the last 50+ years, with virtually no research gaps. It is cost-effective for long-distance bulk power transmission when intermediate substations to serve loads along the transmission route are not needed. However, it is likely to be considered too expensive for new line construction for the anticipated power levels of integrating renewables. Conversion of AC lines to DC lines is fairly straightforward, and most utilities are familiar with the technical and cost issues, as well as when it might be considered a feasible alternative. It has not been done much in the US, for the simple reason that additional ROW and upgraded AC lines have almost always been the feasible alternatives and cheaper than conversion to DC. Now that corridors are getting maxed out, this may be a feasible, albeit more costly, alternative to re-building AC lines or building new ones.

The external barriers to wider use of HVDC technologies are greater than the engineering or technical challenges. Research activities focused purely on technical issues with HVDC technologies are unlikely to make a significant difference in terms of implementing HVDC. The principal stumbling block will continue to be the perceived additional cost per MW of capacity compared to the traditional “least-cost” alternative of overhead AC.

Storage has taken on added importance with the increase of renewables plants, given that the intermittency and variability of renewables increases the complexity of the system operator’s job.

Storage systems have several basic characteristics that can vary depending upon the technology and the desired application (see the sketch after this list):

  • Power capability: how many kW or MW the storage plant can discharge. This is usually a direct function of the electrical generating mechanism, be it a rotating machine or a solid-state electronics interface.
  • Bulk energy: how many kWh or MWh of energy can be stored.
  • Charge time: the number of hours or minutes required to fully charge the system.
  • Discharge time: the number of hours or minutes the system can supply its rated kW or MW output.
  • Efficiency: the ratio of energy discharged to the energy required for charging. Also called “round-trip” efficiency. Most storage systems fall into the 60-75 percent range.
  • Capital cost: the total cost to build a storage plant; it is usually given in terms of the power capability and bulk energy components.
  • Maintenance costs: consist of both $/kW and $/kWh components.
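
To show how these characteristics relate, here is a minimal sketch using a hypothetical plant and the 60–75 percent round-trip efficiency range quoted above; the plant’s numbers are invented for illustration.

    # Minimal storage-plant model tying together the characteristics listed above.
    # The example plant's numbers are hypothetical.

    from dataclasses import dataclass

    @dataclass
    class StoragePlant:
        power_mw: float            # discharge power capability
        energy_mwh: float          # bulk energy stored when full
        round_trip_eff: float      # discharged energy / charging energy (0-1)

        def discharge_hours(self) -> float:
            return self.energy_mwh / self.power_mw

        def charging_energy_mwh(self) -> float:
            """Energy that must be drawn from the grid to fill the plant."""
            return self.energy_mwh / self.round_trip_eff

    plant = StoragePlant(power_mw=100.0, energy_mwh=400.0, round_trip_eff=0.70)
    print(plant.discharge_hours())              # 4.0 hours at rated output
    print(round(plant.charging_energy_mwh()))   # ~571 MWh drawn to store 400 MWh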

One application is the use of storage to provide high-quality and highly reliable electric service for one or more adjacent facilities. In case of an intermittent or extended grid outage, the storage system provides enough energy for some combination of the following: an orderly shutdown of customer processes, transfer of customer loads to on-site generation resources, or high-quality power needed for sensitive loads.

Pumped hydro is very site‐dependent, and most of the best sites are already developed; therefore, it can’t always be located where it’s needed in the transmission system. 

Batteries: The energy density of chemically based battery systems is not as high as desired, requiring a fairly large footprint for even modestly sized battery systems in utility applications. Costs in both per-kW and per-kWh terms are relatively high. There are also significant maintenance requirements, including periodic replacement of internal components, as well as safety issues with the chemicals involved and limited life expectancy.

The AC electric power system, by its nature, does not have a high degree of controllability, in terms of system operators being able to designate which transmission paths the power flows on. The electric system is a giant interconnected network of generating sources, loads (customers) and the transmission and distribution lines that provide the connections among them all. To a great extent, the power flows on the system are determined by the customer loads and the generators that are on the system at any given time; the power then flows over the transmission and distribution lines as determined by the impedance of the lines and paths and Kirchhoff’s laws.

Because of the numerous parallel paths that power can flow on, the contract path for power, defined as the line or path over which the contracted power from a generator to a load is meant to flow, is not necessarily the only path over which that power will flow. For example, Bonneville Power Administration (BPA) can contract with Pacific Gas and Electric (PG&E) to send 3,000 MW of power over the 500 kV Pacific Intertie, but in reality about 20% of that power can flow through parallel paths on the eastern side of the Western Electricity Coordinating Council (WECC) system. This phenomenon, called loop flow or inadvertent flow, frustrates the efficient exchange of power within transmission grids and between utilities, and can result in diseconomies.
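
The roughly 20 percent loop flow can be understood as power dividing among parallel paths in inverse proportion to their impedance (a simplified DC-power-flow view). The reactances in the sketch below are hypothetical, chosen only to reproduce an 80/20 split like the one described above.

    # Power between two buses divides among parallel AC paths in inverse proportion to
    # path reactance (a simple DC-power-flow view). Reactances here are hypothetical,
    # picked to give roughly the 80/20 contract-path / eastern-WECC split described above.

    def split_flows(total_mw, path_reactances):
        admittances = [1.0 / x for x in path_reactances]
        total_y = sum(admittances)
        return [total_mw * y / total_y for y in admittances]

    paths = {"Pacific Intertie (contract path)": 1.0,
             "parallel eastern WECC paths": 4.0}          # per-unit reactances (hypothetical)
    flows = split_flows(3000.0, list(paths.values()))
    for name, mw in zip(paths, flows):
        print(f"{name}: {mw:.0f} MW")   # ~2400 MW on the contract path, ~600 MW of loop flow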

For controlling real power, system operators have just a few tools at their disposal. They can adjust the output of generators under their direct control; however, in today’s power markets, this control is diminishing. When lines or paths reach their thermal or stability limits, congestion occurs, and generators are forced to adjust their outputs to relieve the line overloads, with congestion payments both to generators who must curtail and to reliability-must-run generators who must generate in their place. Series capacitors in the transmission lines can be switched in or out to reduce or increase, respectively, the impedance of a line or path, increasing or decreasing the power flowing in that path; this is typically not a real-time control option, as most series capacitors are manually switched, usually on a seasonal basis. Devices called phase-shifting transformers are sometimes used to increase the apparent impedance of a line or path, the objective being to shift power flow from a specific line or transmission path to adjacent circuits or paths. The only other options for controlling real power are to change the configuration of the lines in the system, i.e., switch lines in or out, or to change the bus connection arrangements in the substations; neither of these are generally desirable options, and cannot be done feasibly on a real-time basis.

HVDC transmission lines (see section 4.4 on HVDC Transmission Technologies), in contrast to AC lines, have the ability to control their power flow due to the power electronics in the converter stations at the terminals of the lines. There are currently only a few HVDC lines in the Western grid, whose purpose is mainly to transmit large blocks of inexpensive but remote generation to load centers in Southern California.

Asynchronous HVDC links, also called back-to-back HVDC links, are sometimes used to provide control and isolation between utility control areas: the power transfer between the areas can be precisely controlled, the system frequencies of the adjoining systems do not have to be in synchronism with each other, and system disturbances do not propagate through the links as they would through AC lines.

Reactive power is much more controllable than real power. Generating plants have the capability to adjust their volt-amps reactive (VAR) outputs automatically to match the reactive demands of the system. Shunt capacitors and inductors can be installed at any substation, and are switched in and out as needed (not always in real time) to control the voltage profile of the system and adjust to the reactive power demands of the loads on a local as well as system level. Series capacitors also help to control voltage levels by reducing the reactive impedance of transmission lines. Devices called synchronous condensers can provide a measure of dynamic voltage control. Synchronous condensers are rotating synchronous generators without prime movers, and appear as reactive power devices only; by adjusting their excitation systems (voltage to the stator coil) they can either produce or consume VARs. Transmission transformers, for the most part, do not have tap changers and can’t control voltage. Distribution transformers (transmission voltage to distribution voltage) can have some measure of voltage control to adjust to the demands of the loads on the distribution side. Other control methods are used in the context of remedial action schemes to control system stability: generator dropping, load dropping, fast reactor insertion, series capacitor switching, and braking resistors, to name the major ones.

While technologies can be used to bring new or enhanced capabilities to the transmission infrastructure for meeting the three major objectives via new hardware measures, technologies can also bring new capabilities for operating the infrastructure in a reliable, economic and integrated fashion. Indeed, given the additional operating uncertainties that renewable generation will likely add, the new operating capabilities will be a necessity, especially those for real-time and wide-area systems operations. This class of technology generally consists of sensors for detection and measuring of system conditions; communication systems; data management; analysis for monitoring, diagnosis, prediction and decision support; visualization for human interface; and instructions for automation. Much of this technology platform is enabled by an emerging sensing technology known as synchrophasors, or, more commonly, phasors.

In addition to enhancing grid reliability and avoiding major blackout conditions, the KEMA study identified Disturbance Detection, Diagnosis and Compliance Monitoring as a phasor application that offers the potential to significantly reduce the capacity derating of key transmission pathways that are critically important for 33 percent and greater renewables integration. This would seek to analyze PMU data from various locations within the regional power grid to detect, diagnose and mitigate low frequency oscillations and, through improved operating tools, free up significant underutilized transmission capacity for importing renewable power into the state and into major urban areas.

With the addition of 4,500 megawatts of new wind generation in Tehachapi in the 2010 timeframe, Cal ISO and other grid operators are likely to experience periods where electricity production from these wind plants will rapidly decline while the load is simultaneously and rapidly increasing. Energy ramps as high as 3,000 MW per hour or larger may occur between 7 AM and 10 AM in the 2010 timeframe, with larger ramps over the longer term as progress is made toward the 33% and 50% renewables goals. Fast ramping generation, such as hydro units, will be essential for the Cal ISO to keep up with the fast energy changes.

There will be other periods, particularly in the winter months, where large Pacific storms will impact the wind parks and their energy production will rapidly ramp up to full output. The Cal ISO Renewable Integration study recommends the development of a new ramp-forecasting tool to help system operators anticipate large energy ramps, both up and down, on the system. The longer the lead-time for forecasting a large ramp, the more options the operators have to mitigate the impact of the ramp. The Cal ISO report also identifies the need for research to analyze the impact of large central station solar power intermittency in producing large energy ramps, within the context of anticipated wind energy ramps as well as load variations and distributed customer-side-of-the-meter solar photovoltaic (PV), small wind turbines and other distributed energy resources.

There exists today a wealth of methods for short-term prediction of wind generation. An excellent summary of the state of the art in wind power forecasting is available at: http://en.wikipedia.org/wiki/Wind_power_forecasting.
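
As a minimal illustration of the forecasting and ramp-alert ideas in the two preceding paragraphs, the sketch below uses the simplest benchmark in that literature, a persistence forecast, together with an arbitrary ramp threshold; operational tools rely on numerical weather prediction and statistical models, and the wind series here is invented.

    # Minimal persistence forecast with a ramp alert. The numbers and the 1,000 MW/hour
    # threshold are illustrative only; operational tools use NWP-based forecasts.

    def persistence_forecast(history_mw):
        """Next-hour forecast = last observed value (the simplest benchmark method)."""
        return history_mw[-1]

    def ramp_alerts(hourly_mw, threshold_mw_per_hr=1000.0):
        alerts = []
        for hour in range(1, len(hourly_mw)):
            ramp = hourly_mw[hour] - hourly_mw[hour - 1]
            if abs(ramp) >= threshold_mw_per_hr:
                alerts.append((hour, ramp))
        return alerts

    wind_mw = [3200, 3100, 2900, 1700, 900, 1000]   # hypothetical morning ramp-down
    print(persistence_forecast(wind_mw))            # 1000
    print(ramp_alerts(wind_mw))                     # [(3, -1200)]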

Future PHEVs are anticipated to have expanded battery power for extended electric-only operation, and presumably they will largely recharge overnight when minimum loads traditionally occur. This situation creates a potentially synergistic relationship between wind and PHEVs, i.e., coordinating PHEV electric demand with wind generation in a “smart” infrastructure can mitigate impacts to the grid. In the simplest instance, PHEV load could be switched off to counter drops in wind generation (similar to demand response), and switched on as wind generation increases.
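
The switching logic described above can be sketched as a simple controller that scales an aggregated PHEV charging load with wind output; the fleet size and thresholds below are invented for illustration, not taken from any study.

    # Toy PHEV-charging controller: defer charging when wind output sags, restore it when
    # wind recovers (a simple demand-response analogue). All numbers are hypothetical.

    def phev_charging_setpoint(wind_mw, fleet_max_charge_mw, wind_floor_mw, wind_ceiling_mw):
        """Scale fleet charging linearly between a low-wind floor and a high-wind ceiling."""
        if wind_mw <= wind_floor_mw:
            return 0.0
        if wind_mw >= wind_ceiling_mw:
            return fleet_max_charge_mw
        fraction = (wind_mw - wind_floor_mw) / (wind_ceiling_mw - wind_floor_mw)
        return fleet_max_charge_mw * fraction

    for wind in (200.0, 900.0, 1800.0):
        mw = phev_charging_setpoint(wind, fleet_max_charge_mw=500.0,
                                    wind_floor_mw=400.0, wind_ceiling_mw=1600.0)
        print(f"wind={wind:.0f} MW -> fleet charging {mw:.0f} MW")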

Typical protection systems utilize digital relays individually or in combination to protect valuable assets, such as transmission lines or generators. Advanced relays incorporate PMU technology directly into the relay. Transmission lines may incorporate redundant primary relays and backup relays in complex schemes designed to ensure reliable action. Operation of these systems is programmed based on the expectation of a relatively normal operating configuration.

However, under abnormal conditions, such as can occur during a fault, the relay system may operate, or fail to operate, in a manner that was not intended. During major cascading blackouts, protective relays have either been implicated in increasing the severity of the blackout or in failing to slow or stop its spread. In the August 14, 2003 blackout on the East Coast and the July 2 and August 10, 1996 blackouts in the West, zone 3 impedance relays played a major contributing role, as did many transmission and generation protective relays.

In each of these blackouts, due to an unusual and unanticipated set of circumstances, the EHV transmission grid became configured in highly abnormal operational states that were not anticipated or studied by protection and system operating engineers. These protection systems are almost exclusively local in nature. Wider area protection systems – Remedial Action Schemes (RAS) or Special Protection Schemes (SPS) have been created to provide a variety of system protection actions. As these systems grow in scope and complexity, there is the increasing possibility of unintended consequences. The term “intelligent protection systems” is not precisely defined and can be used to mean any of a variety of related concepts. For this report, the term is used to primarily describe protection systems which use phasor data and are adaptive, i.e. which can monitor conditions in real time, and “intelligently” adapt their operation to reflect actual conditions on the power grid. Ultimately, intelligent wide area protection systems can be seen as “protecting” the system by controlling its operation in such a manner as to prevent faults or instabilities from becoming large scale outages.

Major outages such as described here are sometimes referred to as “Extreme Events,” because of the multiple contingencies that occur, and because they are beyond the ability of planning and operations engineers to foresee, and in many cases, to mitigate once they start. There is research currently underway to develop new methodologies for analyzing extreme events and test the methodologies; first in simple network systems, and next in larger, more complex and realistic network systems, modeling the California grid and its western interconnections.

New Accommodation Dynamic Behavior Capability #3: To operate the grid in response to renewable power plant dynamic behaviors. The increasing penetration of renewables with different types of dynamic behavior increases the risk of serious consequences in response to a transient event. Intelligent protection systems offer the possibility of improved mitigation of the consequences of a fault and reduced likelihood of a fault triggering a cascading blackout.

Traditional utility electric power systems were designed to support a one-way power flow from the point of generation through a transmission system to distribution-level loads. These systems were not originally intended to accommodate the back-feed of power from distributed solar photovoltaic, small-scale wind turbines and other distributed energy systems at the distribution level.

Current interconnection requirements for residential net-metered PV systems in California require that the system include a UL 1741 certified inverter (meaning that it has been tested to meet IEEE 929-2000, the Institute of Electrical and Electronics Engineers recommended practice for safe utility interface of generating systems) that will disconnect from the utility distribution system if the voltage or frequency deviates outside normal ranges. Disconnect switches must meet the National Electrical Code’s Article 690 on solar photovoltaic systems, published by the National Fire Protection Association. When the utility is able to restore electric service on the distribution circuit, the customer is normally responsible for realizing that the distributed energy system has been disconnected from the grid and taking action to restore normal operation.

The IEEE standards for the inverter, along with system design components such as a lockable disconnect switch, are necessary to prevent “Islanding.”

Islanding refers to a situation where the grid power is down and a customer’s generator is still on, creating the potential for power to feed back into the grid. This would cause an unsafe situation for linesmen working on an otherwise non-electrified portion of the power grid. Owners of grid-tied systems should know that their system’s anti-islanding design also prevents them from having power on-site when the grid goes down.
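
The disconnect behavior described in the last few paragraphs amounts to simple window checks on voltage and frequency. The sketch below is illustrative only; the threshold values are approximations, not the certified UL 1741 / IEEE 929 trip settings, and the deliberately non-automatic reconnection mirrors the point that restoration today is typically a manual step.

    # Illustrative inverter disconnect logic: trip when grid voltage or frequency leaves a
    # normal window, stay offline, and only reconnect once conditions are restored.
    # Threshold values below are illustrative approximations, not certified UL 1741 settings.

    NOMINAL_V = 240.0

    def within_window(volts, hz, v_low=0.88, v_high=1.10, f_low=59.3, f_high=60.5):
        v_pu = volts / NOMINAL_V
        return v_low <= v_pu <= v_high and f_low <= hz <= f_high

    class GridTieInverter:
        def __init__(self):
            self.online = True

        def update(self, volts, hz):
            if not within_window(volts, hz):
                self.online = False          # anti-islanding trip: stop exporting power
            # Reconnection happens only after the grid is back in range; as the text notes,
            # today the customer is typically responsible for restoring operation.
            return self.online

    inv = GridTieInverter()
    print(inv.update(240.0, 60.0))   # True  - normal grid, stays online
    print(inv.update(0.0, 0.0))      # False - grid outage, inverter trips
    print(inv.update(240.0, 60.0))   # False - remains offline until manually restored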

Grid operators are concerned that manual restoration of power production by distributed renewable energy systems may not be a workable approach when a significant amount of the customer end-use electricity load is supplied by these distributed systems.

Based on discussions with grid operators and transmission owners there appear to be two interrelated needs:

  1. There is a need for customer-side-of-the-meter interconnection equipment that will permit the automatic restoration of the operation of distributed energy systems if the voltage, frequency and other operating characteristics of the electricity distribution system are within normal operating ranges.
  2. There is a need for reliable information about the operating status of these distributed energy systems to be readily available to grid operators and utilities, within the overall context of customer loads that will be connected when service is restored. These information needs are one of the important evolutionary features of the smart grid.

The current status of this research is available at the following website: http://www.energy.ca.gov/research/integration/demand.html

Load Management Standards Proceeding; more information is available at http://www.energy.ca.gov/load_management/index.html.

DOE is also actively involved in planning and funding research on smart power grid; more information is available at the following DOE website: http://www.oe.energy.gov/smartgrid.htm.

Grid planning and operating decisions rely on simulations of the dynamic behavior of the power system. Both technical and commercial segments of the industry must be confident that the simulation models and databases are accurate and up to date. If transfer limits are set using overly optimistic models, a grid operator may unknowingly operate the system beyond its capability, thereby increasing the risk of widespread outages, such as occurred during the summer 1996 outages. If the models are pessimistic, a grid operator may be overly conservative and impose unnecessary restrictions on the transfer paths, thereby increasing the risk of power shortages in energy-deficient regions. Therefore, having realistic models is very important to ensure reliable and economic power system operation. Because accurate end-use load models and renewable generation models are likely to have a significant impact on the capacity derating of major transmission paths carrying renewable energy into and within California, it is vitally important that these models accurately reflect current conditions as well as future changes over the 2009 to 2030 time frame addressed by the 20 percent, 33 percent and 50 percent renewables goals.

Uncertainty is a persistent theme underlying virtually every aspect of transmission planning and grid operations. Traditional power system analysis tools do not directly assess the many inescapable uncertainties that are inherent in all models and in all the data on which they rely. Responsible users of these tools cannot ignore these uncertainties, because they routinely have a major influence on the results. Common uncertainties in power system analyses used in transmission planning include estimates of load growth over time, by region and by end-use composition; the potential location and generating capacity of wind, solar, other renewable and central-station power generation facilities; retirements or upgrades of existing generating facilities; and the likelihood that transmission facilities and substations will be approved and constructed in the future. Common uncertainties in analyzing grid operations include weather impacts on load and renewable generation output, the operational status of various transmission pathways and power generation facilities, possibilities of unplanned outages of generation and transmission equipment, and the real-time actions of market players to maximize revenues or reduce costs in the generation or utilization of power. This uncertainty has been compounded by the disaggregation of the vertically structured utility, deregulated power markets, and the increased size of grid interconnections crossing state and national boundaries.

6.2.2 Uncertainty Analysis and Probabilistic Forecasting Tools

Access of Renewable Resources to the Transmission Grid

Meeting the 20 percent, 33 percent and 50 percent renewables goals will require a substantial amount of new transmission development, as most large-scale renewable resources are located in remote areas rather than near the state’s major load centers. The Energy Commission IAP study concluded that, for the 2010 Tehachapi case, 74 new or upgraded transmission line segments are needed at a first-order estimated cost of $1.2 billion, plus $161 million for transformer upgrades and unknown land-use and right-of-way costs. The 2020 case would require 128 new or upgraded transmission line segments, with just over half (66) needed to serve increasing load requirements. For just the 500 kV and 230 kV additions, the first-order estimated cost would be $5.7 billion. In addition, 40 new or improved transformers would be needed at an estimated cost of $655 million (excluding detailed land-use and right-of-way costs).

Wind generation output varies significantly during the course of any given day and there is no predictable day-to-day generation pattern.

Daily patterns of wind power that exhibit a high degree of variability and uncertainty will likely cause more serious congestion, with greater uncertainty about when it occurs.

The following summarizes one near-term operating scenario of interest to Cal ISO that might be the focus of research on pattern recognition methods applied to real-time grid operations. The Tehachapi Area is expected to have one of the largest installations of wind generation in the State of California. Over 5,600 MW of wind generation, consisting of both traditional induction generators and the latest doubly fed induction generators with power electronics controls, are planned for the Tehachapi Area. In addition, the Tehachapi area has one of the largest water pumping operations in the world. Through pumping, water is lifted 3,000 feet over the Tehachapi Mountains to serve the greater Los Angeles Area. The combination of large amounts of wind generation and a large pumping operation in the Tehachapi Area is expected to severely tax the power grid in the Southern California area, and it was therefore selected for analysis in this research. A new 500 kV transmission system is planned for the Tehachapi Area. This research can validate how the new transmission facility affects the statistical distribution of power grid parameters in the Tehachapi Area.

DOE. September 30, 2014. Summary of Discussion U.S. Department of Energy Workshop on Estimating the Benefits and Costs of Distributed Energy Technologies. Department of Energy.

DOE did a study on 30% penetration of wind that showed $143 billion of additional transmission would be needed to meet the additional wind.

PV generation is relatively predictable, but it is not necessarily coincident with peak usage.

For avoided transmission investment, we need to determine the relative coincidence of distributed PV production with peaks on the transmission system.

The way to look at capacity is through the reliability lens. Once you get high penetration, reliability starts to decline. The system in Hawaii has become less robust against big transient events, so the utility now has to spend millions to enable the grid to respond to transient events as it did before. Adding flexible generation also adds capacity cost. When penetration levels get significant, huge ramp events can occur for which the system was never designed.

Enabling high penetration of DETs will increase the cost of the distribution infrastructure.

Germany paid 56 cents per kilowatt-hour to incentivize rooftop installation, and they face a price tag of a trillion dollars.

There are costs for wear on assets used in ways for which they were not designed.

Grid operators have addressed ramping through the same mundane approach for decades, but with penetration of RE, the cost of dealing with ramping increases.

With increased penetration of variable generation, frequency regulation becomes more of a challenge at the bulk system level. Primary and secondary costs are straightforward. States are having individual issues. Most reliability activities are trans-state, and two interconnections have seen increased degradation at the bulk system level. Some of that is from losing inertia. Frequency regulation at the bulk system level is not a resolved issue and will get more complex.

Distribution system impacts are more discrete, which is both good and bad. Extremely granular data are required–an overwhelming level. With “dumb” inverters, there is a risk of voltage violations and losses of 10% to 30%. We can avoid overloaded feeders. Avoided capacity also has a potential impact on extension of service life for system equipment.

Even at low penetration rates, DER can cause reliability issues. Mr. Fine showed a chart with possible effects at 10% penetration levels.

The current business development model for customer solar PV in Hawaii is not sustainable due to economic, policy and grid-related technical challenges associated with high solar penetration levels. Customers must recognize that the recent rapid pace of customer solar PV interconnections is not sustainable when grid infrastructure mitigations need to be developed and deployed.

Commissioner Champley discussed lessons learned from the experiences of Hawaii’s utilities. The state has had high growth of residential and other solar photovoltaic (PV) over the last five years and is poised for a major thrust in the development of utility-scale PV. As a result, the state faces a number of significant economic, policy, and grid-related technical challenges. Electrically speaking, Hawaii is a collection of island electric grids. There is no interconnection between islands; each island has effectively become a laboratory for renewable resource integration. The Federal Energy Regulatory Commission (FERC) and NERC have no jurisdiction, so the Hawaii Public Utilities Commission can establish its own rules, within state statutes.

Annual renewable energy output in 2013 ranged from 12% on Oahu (the main population center) to 48% on the main island (Hawaii), and renewable energy growth continues. The state leads the nation in penetration of rooftop PV and, as a result, is at the forefront of the integration challenges associated with high distributed PV penetration levels. By 2017, two islands will have over 75% of day-time system load supplied by distributed and utility-scale solar. Solar has seen exponential growth, but growth slowed in 2014. Hawaii is approaching 50,000 solar customers; over 10% of total residential customers have solar PV. Installed customer solar PV capacity represents roughly 23% of annual system peak load. Average residential customer electricity usage has dropped by about 30% over the last ten years due to customer energy efficiency, conservation and distributed generation (but grid investment did not shrink 30%; in fact, it increased during this time). On Kauai Island, solar generation is approaching 50 megawatts (MW), while oil will soon be down to around 10 MW. However, solar energy output contributed only 18% of the daily energy used, due to the limited hours of full solar output. Regarding solar penetration at the distribution level, approximately 50% of all distribution circuits for the Hawaiian Electric Companies have greater than 75% solar PV penetration.

Exponential growth in renewables was market-driven, but if the consequences are not anticipated and appropriately addressed proactively, such growth will lead to unintended results. Developing renewables makes sense in Hawaii due to its current dependency on oil for electric generation, but with state tax and rate incentives and no penetration-level check points, the growth outpaced the utility’s ability to manage the interconnection queue and grid integration issues. As a result, the residential PV industry in Hawaii faces a boom-bust cycle. Commissioner Champley noted that there are now emerging substantial integration challenges uniquely associated with incremental additions of utility-scale and distributed solar PV, and that the integration costs of solar may exceed those of other forms of renewables, due to less solar energy output over which to spread fixed integration costs and due to PV’s inherent low capacity factor. Other technical issues include:

  • Many issues have arisen that were not initially evident at lower penetration levels.
  • The size of a customer’s PV grid “footprint” matters when excess solar energy is exported.
  • Bulk power system reliability challenges, not distribution circuit issues, have become binding constraints on the island grids.
  • PV inverters are a crucial part of the distributed solar PV integration equation.
  • Inability to curtail customer solar PV output leads to curtailment of utility-scale renewable projects, to the economic detriment of customers without solar PV.
  • Legacy customer and technology issues are an emerging concern.

Most studies indicate that above 10% energy penetration of distributed PV, the capacity credit and capacity value of additional distributed PV is very low.

 

CEC. April 2012. Summary of recent wind integration studies. Experience from 2007-2010. California Wind Energy Collaborative for the California Energy Commission. CEC-500-2013-124.

Transmission studies are often neglected or extremely simplified in current wind integration studies. Detailed transmission studies are necessary before each additional wind plant is installed. Transmission elements must be designed specifically for wind generation to ensure reliability. This would entail an AC transmission analysis as opposed to the DC analysis that is common to most integration studies. The AC analysis would likely focus on possible electrical issues such as inertial response, reactive power support, and transient stability. Another important aspect of a transmission study is a land use study. This study is necessary to ensure that proposed transmission can be built. It would need to consider the arrangement of wind projects to ensure that transmission is appropriately sized and that the connections to the system are made in an optimal way.

The expected growth of electric vehicles is another aspect to study in relation to wind generation. Electric vehicles are expected to charge at night when demand is low. This could prove beneficial for wind generation because wind generation in many areas will be at its peak at night. It seems as though electric vehicles will be able to absorb wind energy that won’t otherwise be needed. There are several concerns with how this will work in practice. For example, what happens if the wind dies?

Also, will electric vehicles start charging all at the same time leading to a sudden load spike? Is there a way for the chargers to be responsive to the power grid?

Wind generation is an intermittent, variable, and uncertain generating resource. This uncertainty is an important characteristic because it contrasts with conventional generation, which is available as needed and controllable.

The increase in variability will require system operators to take more and larger control actions to keep the system balanced.

The uncertainty of wind in the power system is the largest concern. The variability introduced is generally manageable but it is made much worse by the uncertainty. Uncertainty will lead to less efficient operation and can lead to reliability problems. The variability and uncertainty of wind generation will cause operators to increase the amount of ancillary services they procure to keep the system balanced.

The regulation reserve is the most affected because it is primarily charged with managing short-term fluctuations. The amount of additional regulation that systems will need to procure varies greatly between studies. Regulation needs increase with higher penetrations of wind generation. It is important for system operators to quantify the regulation needs to ensure the system will have the capability to provide it.

There are many possible ways that wind generation can impact the power system costs. It can affect the energy costs, ancillary service costs, unit commitment costs, congestion costs, uplift costs, transmission costs, and so forth.

The third strategy for managing integration is increasing diversity. Diversity can be increased in a number of ways. Building wind generation in different resource areas is one way. Constructing sufficient transmission to ensure wind power can be moved where it is needed is another. Combining control areas, or increasing cooperation between areas, is yet another; increased cooperation would involve more frequent scheduling across inter-ties and sharing of renewable energy data.

The uncertainty of wind in the power system was the largest concern. The variability introduced was generally manageable but it was made worse by the uncertainty. Uncertainty made it much more difficult to plan generation schedules in an optimal way. The variability and uncertainty of wind generation will cause operators to increase the amount of ancillary services they procure to keep the system balanced. Ancillary services are a subset of a group of services that are necessary to maintain operation of the power grid. They are used to maintain short-term balance of the system and to recover from unexpected outages. Ancillary services included operating, contingency, and regulating reserves. The regulation reserve was the most affected because it was primarily charged with managing short-term fluctuations. The amount of additional regulation that systems will need to procure varied greatly between studies. Regulation needs increased with higher penetrations of wind generation. It was important for system operators to quantify the regulation needs to ensure that the system would have the capability to provide it.

Determining the costs of wind integration was one of the main goals of many studies. These studies used a wide range of methods and assumptions to determine costs. There were many possible ways that wind generation could affect power system costs. Wind generation can affect energy costs, ancillary service costs, unit commitment costs, congestion costs, uplift costs, and transmission costs, among others. Direct comparisons of wind integration costs were difficult because different studies chose to include different factors when making cost calculations. Studies found wind can reduce energy cost by displacing more expensive generation. These savings may be offset by higher costs introduced from other elements such as increased ancillary service costs. The extra cost estimates ranged from $0/megawatt hour (MWh) to $9.35/MWh.

Various studies recommended many ways to successfully integrate wind power into the system. The recommendations fell into three basic categories: reducing uncertainty, increasing flexibility, and increasing diversity. Reducing the uncertainty of wind generation was the primary method recommended to facilitate integration. Forecasting for wind generation was the most important strategy for integrating wind into the power grid. Forecasting reduced the uncertainty of wind directly and could potentially result in very large savings for the power system. Forecasts would be designed for each area to fit current operating practices. These forecasts could provide insight into the expected level of generation and variability that wind power will introduce into the system, which will give operators the ability to make adjustments or procure extra capacity as needed. Increasing the flexibility of the power grid was the second strategy for managing wind integration. Increasing the amount of ancillary services, specifically regulation, was a common tactic to increase the system’s flexibility. This would literally increase the amount of capacity that is tasked with following variations between the load and generation. Other methods of increasing flexibility were also suggested but were more dependent on system and operating practices. Increasing diversity was the third strategy for optimally managing integration. Diversity can be increased in a number of ways. Building wind generation in different resource areas was one way to increase diversity. Constructing sufficient transmission to ensure wind power can be moved to where it is needed was another. Combining control areas or increasing the cooperation between areas was another viable strategy. Increasing cooperation would involve increased scheduling frequency across entities and sharing renewable energy data.

Generation is the most controllable element and is relied upon to maintain the balance as usage changes. Too much generation can cause components to overload or burn out; too little leads to brownouts or blackouts. Load or generation can change rapidly and unexpectedly, so sufficient flexibility must be maintained to quickly rebalance the system. Wind generation, with its intermittent and variable nature, adds another source of variability to balance with controllable resources. Reliable operation of the power system is critical, and maintaining reliability is the primary focus for system operators.

There are six reliability regions within the Eastern Interconnection and one each within the Western Interconnection and ERCOT.

Within the reliability regions are balancing areas. There are over 100 balancing areas within the United States, ranging in size from individual cities, such as Sacramento, to areas that cover several states, such as the PJM interconnection. The balancing areas are responsible for controlling the generation within their area and coordinating with neighbors to control their inter-ties.

1.2.1 System Control. Reliable operation and planning in power systems require consideration of a wide range of timescales. Resource adequacy and capacity planning take place on scales of one to several years; this includes transmission and generation siting, sizing, and construction. On shorter time scales, in the range of days to months, maintenance planning is done: generation and transmission facilities plan scheduled maintenance far in advance and coordinate with other facilities to minimize grid disturbance. In the range of hours to days, the unit commitment and scheduling processes select generation to supply the forecast load. To adapt to forecast errors or unplanned events, generator dispatch is done on the minutes-to-hours timescale. Automatic Generation Control (AGC), which dispatches generation automatically to keep the system balanced, operates on the timescale of seconds. A number of other automatic controls, including generator governors, automatic voltage regulators, power system stabilizers, and special protection and remedial action schemes, operate on millisecond-to-second timescales. Most planning is done on the longer time frames, from years to days, while operations time frames range from seconds to days.

Committing generation to serve load is a very important process for reliability and for minimizing system costs. Generators must be committed in advance of their scheduled operation because it can take many hours for them to start up. Committing too much generation is costly and inefficient and, in extreme cases, can overload system components. If not enough generators are committed in advance, other power will have to be procured or blackouts will be risked. The resource pool for procuring more generation will be diminished, because there is insufficient time for many units to respond; the units capable of responding are likely to be expensive gas turbine units. Both under-commitment and over-commitment of generation can lead to higher energy costs.

Dispatch of generation is another important part of operating the power system. In dispatch, the units that are committed are given schedules to follow. There are three basic categories of generation, which determine the extent to which units are dispatched: base load, intermediate load, and peaking generation. Base load generation typically operates at its forward schedule and is rarely dispatched away from that point. Intermediate generators perform most of the changes in output; they will typically ramp to minimum at night, or shut off, and then ramp up with load the next day. Peaking generation is started and used only for extreme conditions. Forward scheduling is done along with the unit commitment process and accounts for the majority of energy schedules. Dispatch of generation away from its forward schedule makes up a small part of the overall energy flows; dispatch away from the forward schedules to correct forecast error is usually called load following.

Load following is an important consideration in many studies, because wind provides uncertainty and variability to the system that needs to be balanced. Keeping the power system balanced and reliable requires more than adjusting the supply energy. Reliability-related services are a group of services that are necessary to maintain operation of the power grid.

There is a wide range of reliability-related services, and they vary by region. Reserves, regulation, voltage support, and black start capability are all examples. Ancillary services are a subset of reliability-related services that include operating, contingency, and regulating reserves. Ancillary services are used to maintain the short-term balance of the system and to recover from unexpected outages. Providing ancillary services reduces the amount of energy a generator can supply. Operating reserves are made up of unloaded generating capacity that is synchronized to the power grid and capable of responding within a certain amount of time. Operating reserve is a very broad term that includes the ability to provide spinning reserve, regulation, supplemental reserve, and load following. Contingency reserves are power system reserves that can be called to respond to a contingency event, or interruptible loads that will reduce consumption.

Power systems maintain a few dedicated operating and contingency reserves to meet their reliability needs. They purchase these reserves from generators, which hold back that capacity in case it is needed. Spinning reserve is a common ancillary service used as an operating and contingency reserve. Systems procure an amount of spinning reserve that is synchronized to the power grid and available within 10 minutes. Non-spinning reserve, another contingency reserve, is offline capacity that must synchronize and deploy within 10 minutes. The levels of spinning and non-spinning reserves that systems maintain are related to the system size, the largest single contingency, and the makeup of the generation fleet.

The amount of reserves a system maintains depends primarily on the system size. All systems maintain some degree of all the reserves but have a wide variety of mechanisms for procuring and deploying them. The power grid must be managed so that a single contingency will not affect the security of the grid. NERC has specific requirements for the amount of spinning and non-spinning reserve that must be maintained by a balancing authority; the requirements depend on system size, contingency size, and the type of generator resources. The system must be able to recover from a contingency in a certain amount of time to be prepared for the next one. Spinning and non-spinning reserves are deployed following contingencies, and are not used during normal system operation.

Wind generation is a variable, intermittent, and uncertain resource.

Another important factor to consider with wind generation is the location of the resource.

Intermittency describes the wind’s nature to come and go: to be available to produce electricity sometimes, but unavailable at other times. Most areas have distinct weather patterns for when the wind blows. California tends to have a diurnal wind pattern, with the period of strongest wind occurring at night and lower winds during the day. In addition to the diurnal pattern there are also seasonal patterns, with the most productive periods occurring in spring and summer and with fall and winter being less productive. Other areas may have significantly different wind patterns. These wind patterns can be very important if wind power is playing a large role in supplying electricity. Intermittency is in contrast to most conventional generation, which has a fuel that can be stored and used on demand. Intermittency can affect the system’s resource adequacy calculations: system operators will need to determine whether the wind is likely to be available during system peaks or whether other generation will need to be available.

In addition to this longer-term intermittency, wind power is also variable in nature. Variability describes the wind’s tendency to change speed as it is blowing. The variability of the wind can occur in seconds, as gusts blow through, or over longer time frames as regional weather patterns change. Because wind generation is weather dependent, it is sensitive to fluctuations in the weather. This variability is in contrast to most conventional generation, which can generally choose its desired generation level and maintain a steady output. Wind variability gives operators another source of variability to consider besides the load.

Uncertainty of wind generation is caused by the intermittency and variability of wind, though it poses a different set of challenges. Uncertainty relates to unknown future wind conditions. Even with fairly repeatable weather patterns, prediction is not an exact science. The output of wind generation is unknown ahead of time; forecasts can help bound the problem and often give quite good estimates, but compared with generation that can predictably operate at a certain output, wind generation presents a challenge.

The electrical generator characteristics of wind generation also differ from conventional generation. Conventional plants use synchronous generators that operate at a fixed (synchronous) speed to produce power and provide inertia for the grid; the electro-mechanical link helps resist changes in system frequency. Wind generation, however, has historically used induction generators (IG). The electrical differences of wind generation are primarily a concern for transmission design, which typically isn’t covered in recent wind integration studies.

Location of wind generation is another difference from most conventional generation. With fossil fuel generation it is possible to transport fuel to a generator so the location is not as constrained by resources. Wind resources are often located far from load centers and far from main transmission pathways. While conventional generators can be located much more flexibly on the transmission system, wind generators are limited to where there is sufficient resource. Remote resources often require large transmission upgrades to connect wind to the system. The transmission upgrades for wind will have a set of design considerations specific to wind generation.

Wind generation in the power system is described as having a penetration level. There are several different ways to define the relative amount of wind generation in the power system. The energy penetration is the ratio of energy produced by wind generation to the total energy demand over the same period; typically it is expressed on an annual basis. RPS targets are usually defined in terms of energy penetration.

Capacity penetration is the ratio of the installed capacity of wind generation to the historical peak demand of the system. Finally, the instantaneous penetration is the ratio of wind energy production to system demand at a given moment; it can also be calculated over short periods of time, for example the hourly time step of a production cost simulation. While penetration is a good way to compare systems of different sizes, there are often significant differences between systems that may cause the impacts to differ substantially at similar penetration levels.
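To make the three definitions concrete, here is a minimal sketch in Python; all numbers are hypothetical and serve only to illustrate the ratios described above.

```python
# Minimal sketch of the three penetration metrics defined above.
# All numbers are hypothetical and for illustration only.

annual_wind_energy_mwh = 20_000_000      # wind energy produced over a year (MWh)
annual_demand_mwh = 100_000_000          # total energy demand over the same year (MWh)
installed_wind_mw = 9_000                # installed wind capacity (MW)
historical_peak_demand_mw = 20_000       # historical system peak demand (MW)
wind_output_now_mw = 5_500               # wind production at one moment (MW)
system_demand_now_mw = 12_000            # system demand at that same moment (MW)

energy_penetration = annual_wind_energy_mwh / annual_demand_mwh
capacity_penetration = installed_wind_mw / historical_peak_demand_mw
instantaneous_penetration = wind_output_now_mw / system_demand_now_mw

print(f"Energy penetration:        {energy_penetration:.1%}")
print(f"Capacity penetration:      {capacity_penetration:.1%}")
print(f"Instantaneous penetration: {instantaneous_penetration:.1%}")
```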

With higher penetrations of wind generation unit commitment algorithms take wind forecasts into account to avoid over commitment of generation which could risk over generation conditions. Generator schedules will be affected as more expensive generation is displaced for less expensive wind energy. The load following process will need to adjust for the load and wind together rather than just the load.

Wind is generally the largest contributor of variability and uncertainty, but solar, geothermal, and hydro can all factor in.

The size of the system is one of the critical factors. Larger systems often have an easier time incorporating wind because they take advantage of aggregation: load variability scales more slowly than the load itself. Larger systems also tend to have a larger and more diverse generation fleet. The physical infrastructure of the power grid is another important consideration. The makeup of the generator fleet can also affect the studies: systems that are hydro-dominated often have many fast-moving generators available, and coal, natural gas, and hydro generators have different characteristics, so the overall system capability depends on the mixture of generators. The operation of the system matters as well: is it a market system, what are the scheduling periods, and so forth. There are also systems that are net importers or net exporters. The ERCOT system is essentially an islanded system.

Transmission and Reliability. Strong wind resources are often located far from the population centers that consume the bulk of the electricity. Transmission is required to move the energy to where it will be used, and it can be one of the most expensive components of integrating wind. Older wind integration studies focused a great deal on the transmission design needed to accommodate wind. The need for transmission analysis in a wind integration study has diminished thanks to lessons learned from previous studies and new wind turbine technology that eliminates some problems. Transmission remains important for wind integration even if it is no longer a prominent focus. The primary transmission considerations for wind resources are sizing, voltage regulation, reactive capability, grid disturbances, control, and frequency response.

There are a few reasons that design of transmission facilities takes a different approach when it comes to integrating wind generation. Transmission facilities include not only transmission lines, but also transformers, capacitors and other hardware. The intermittent nature of wind is one concern. Wind generates below its rated power most of the time so lines may not need to be sized for full delivery.

Another issue with transmission design is the location of wind resources. Strong wind resource areas are often located far from load centers in weak areas of the power grid. Transmission lines for wind may be trunk lines that connect radially to the power grid and would not have alternate routes in case of an outage. More recent integration studies haven’t emphasized the transmission component as much as in the past; the assumptions are that transmission will be built or upgraded as necessary to accommodate the new wind generation, and that the changes in operating characteristics are the more important thing to focus on.

Systems with more frequent scheduling will have an easier time adjusting to changes than those with longer scheduling blocks.

The hydro system in California will need to be increased roughly 50% from current practice to accommodate load growth and wind. The study estimates the additional regulation needed to be 20 MW on 350 MW with 20% renewable generation. … increases in load following capability will be needed, an increase of about 10 MW/minute to 130 MW/minute. The load following increase needs to be maintained for 5 minutes.

20% Wind Energy by 2030 – July 2008. Performed by the U.S. Department of Energy, this study takes a broad look at the issues the country would face if it were to try to supply 20 percent of electric energy demand from wind power by the year 2030. It is very broad and includes sections examining turbine technology, manufacturing processes, materials, resources, and equipment and O&M costs. It is not a typical wind integration study that looks at the operating changes for specific wind scenarios; rather, it gives very good information about all aspects of wind generation and how it may be able to contribute in the future. The study takes a balanced view of wind plant siting and potential environmental effects. It looks at the impacts wind generation could have on greenhouse gas emissions, water conservation, energy security and stability, and costs, and it also considers potential negative environmental costs such as bird kills and noise. The study examines the transmission requirements for integrating wind power throughout the U.S., taking a national view of the best resource locations and the load centers and considering how wind can best be moved around; one possible design calls for 12,650 miles of new transmission at a cost of $60 billion. The study includes analysis of distributed wind as well as offshore wind energy, and it includes a review of wind integration studies from 2006 and earlier, which it uses as a basis for analysis. The study concludes that the U.S. possesses sufficient resources to supply 20% of its electricity needs with wind energy by 2030. Doing so would require 300 GW of installed wind capacity, compared to the 11 GW installed by 2006. This would decrease greenhouse gas emissions by 825 million metric tons annually and reduce the electricity sector’s water use by 8 percent (4 trillion gallons). The predicted cost differential is a modest 2% increase over a conventional generation build-out. In real dollars it is still a significant sum, $43 billion; spread out over the total generation it represents an increase of $0.0006 per kWh.
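As a quick sanity check of the arithmetic quoted above, the $43 billion and $0.0006 per kWh figures together imply a cumulative generation volume over the study horizon; the interpretation of that implied figure below is mine, not the study’s.

```python
# Back-of-the-envelope check of the quoted cost figures (assumption: the
# $0.0006/kWh is the $43 billion spread over total generation through 2030).
incremental_cost_usd = 43e9       # quoted incremental cost of the 20% wind case
cost_per_kwh = 0.0006             # quoted increase per kWh

implied_generation_kwh = incremental_cost_usd / cost_per_kwh
print(f"Implied cumulative generation: {implied_generation_kwh:.1e} kWh "
      f"(~{implied_generation_kwh / 1e9:,.0f} TWh)")
# ~7.2e13 kWh, i.e. roughly 72,000 TWh, on the order of two decades of
# total U.S. electricity generation at roughly 4,000 TWh per year.
```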

Eastern Wind Integration and Transmission Study – January 2010. The Eastern Wind Integration and Transmission Study (EWITS) looks at the Eastern Interconnection in the U.S., the largest system considered, with a studied system peak load of 530 GW. The study considers four transmission scenarios that are primarily made up of ultra-high-voltage lines from the Midwest to the Northeast. The four scenarios comprise three different 20% penetration build-outs and one 30% penetration in the year 2024. The 20% wind scenarios consider different utilizations of resources: one emphasizes high capacity factors, another local resources, and a third more offshore development. The study uses three base years, 2004 through 2006, for the input data sets. These scenarios are compared to a reference case that includes current development and some near-term development.

The results show that 20% and 30% wind energy penetrations are possible in the Eastern Interconnection but will require significant new transmission. Substantial curtailment of wind would be required without new transmission, so much so that all cases require some amount of new transmission. The study calculates wind integration costs that include transmission and wind capital costs as well as the costs of operating changes. Production costs decline with increased wind penetration, though overall wind integration costs increase with penetration, due largely to the capital costs of transmission and wind generation. Very large increases in regulation will be needed: the regulation changes are calculated for each balancing area individually, with regulation increasing by over 1,000 percent in some areas. Integration costs range from $5.00 to $6.68/MWh of wind production.

The ERCOT study shows that energy from combined cycle units is offset the most by additional wind generation. Energy from coal is also slightly reduced. Interestingly, energy from gas turbines increases except at the highest penetration of wind generation, likely because of the flexibility gas turbines have: they are able to respond quickly to make up for variability and uncertainty. Similar studies have similar results; the resources displaced are largely a function of the system configuration, though increased wind generation tends to displace the most expensive units.

EWITS has some very interesting results for how wind affects the energy from different sources. The study shows that different amounts of forecast error can shift the energy contribution of different sources: increasing forecast error causes energy to shift from less flexible sources such as coal to more flexible sources such as combined cycle and gas turbines. Reductions in coal energy were in the range of 3-4 percent for the cases with forecasts versus the perfect-forecast cases, while combined cycle generation increased roughly 20% and energy from gas turbines increased 20 to 30 percent. It should be noted that coal energy is roughly 15 times that of combined cycle, while gas turbine energy is roughly 25% of combined cycle generation.

Reserve Requirements. Determining the proper level of ancillary services or reserves required at different wind penetrations is one of the main concerns addressed in each study. There are two concerns with reserves: how much reserve is needed, and whether the system has the capability to provide it. The system’s ability to provide reserves will also depend on the displacement of other generation. If the uncertainty and variability of wind generation is significant, reserve requirements could increase beyond the system’s ability to provide them, so system planning will likely need to take into account the ability to provide reserves as well as energy. Regulation is the most affected reserve requirement; it is also the most expensive reserve, as it is the most flexible. Spinning reserve could be affected if the wind were concentrated and represented a credible contingency. Some systems also carry a replacement reserve product, which can be dispatched either in a contingency or when there are significant schedule deviations; these other reserves can also be affected. Methods for determining regulation reserve requirements vary significantly between the studies: some rely on statistical techniques to estimate the regulation, while others use operational models.

Both the Montana and CAISO studies employ techniques that measure the regulation requirements based on expected system needs caused both by the variability of the wind power and by the uncertainty in the short-term forecasts. This is in contrast to some of the other studies, whose methodologies implicitly assumed perfect forecasts in the short term and therefore measured only the variability component. The CAISO study estimates that regulation with 7 GW of wind capacity would need to increase by 100-500 MW, depending on the hour and season, to maintain the same performance; California normally maintains 350 MW of regulation. The Montana study estimates a 0-241 MW increase in regulation needs for up to 1,450 MW of added wind, a 1- to 3.84-fold increase in the procurement of regulation.
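The statistical techniques mentioned above are not spelled out in this summary. One common generic approach, sketched below, treats short-term load and wind deviations as uncorrelated, combines their standard deviations in quadrature, and sizes regulation at roughly three sigma. This is an illustration with synthetic data, not the CAISO or Montana methodology.

```python
import numpy as np

# Minimal sketch of a statistical regulation-requirement estimate.
# Generic 3-sigma quadrature approach on synthetic minute data; not the
# exact method used by the CAISO or Montana studies.

rng = np.random.default_rng(0)
minutes = 60 * 24 * 30                       # one month of 1-minute data
load_mw = 30_000 + 2_000 * np.sin(np.linspace(0, 60 * np.pi, minutes)) \
          + rng.normal(0, 80, minutes)       # synthetic load with noise
wind_mw = 3_000 + 0.02 * rng.normal(0, 60, minutes).cumsum() \
          + rng.normal(0, 40, minutes)       # synthetic, slowly drifting wind

def short_term_sigma(series_mw, window=10):
    """Std. dev. of deviations from a rolling 10-minute trend."""
    trend = np.convolve(series_mw, np.ones(window) / window, mode="same")
    return np.std(series_mw - trend)

sigma_load = short_term_sigma(load_mw)
sigma_wind = short_term_sigma(wind_mw)

# Load and wind deviations are treated as uncorrelated, so they add in quadrature.
regulation_no_wind = 3 * sigma_load
regulation_with_wind = 3 * np.sqrt(sigma_load**2 + sigma_wind**2)

print(f"Regulation without wind: {regulation_no_wind:.0f} MW")
print(f"Regulation with wind:    {regulation_with_wind:.0f} MW")
```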

Load Following and Ramping. Load following is another aspect that is often considered in wind integration. Load following is fairly loosely defined and can vary quite a bit between regions. Generally speaking, load following is the dispatch of generation necessary to keep the system balanced, measured as the difference between the forward schedule of generation and the dispatch. It has typically been used to make up for load forecast errors and for the natural differences that occur when scheduling is done in hourly blocks; using regulation for these larger and longer-term changes is expensive.

Ramping is closely related to load following. Most systems have their peak load in the day and minimum load at night. To match the load, generation in the system ramps up in the morning as load rises toward the peak, then ramps down in the evening toward the minimum. The magnitude, rate, and duration of these ramps are important to keeping the system balanced, and the generation on the system must have sufficient flexibility to meet the demands of the ramps on a system-wide basis. Wind generation affects the net ramp the system sees, and generation must be able to follow that net ramp. In many regions wind has a diurnal pattern that is out of phase with load: wind generation peaks at night and is at a minimum during the day. This has the effect of increasing the needed ramping as other generation is used to balance the wind.

The CAISO study presents extensive analysis of load following and ramping concerns, using both statistical and operational models. A statistical model considers the potential impacts of wind generation on the morning and evening ramps; the methodology is designed to look at extreme ramps that may potentially occur. The data are separated into seasons, and the maximum seasonal ramps are calculated for load alone and for load minus wind. The CAISO analysis examines the expected maximum net load ramps during the shoulder hours, the hours of the morning when load rises rapidly and the hours in the evening when it declines. The CAISO study shows that the maximum net ramp could increase over 30% from the baseline values; this represents the extreme combination for each season and is boosted by the consistent diurnal pattern of wind generation, which is opposite the load shape. The WWSIS uses a similar analysis and shows that the largest net ramp increases 50% from the baseline at the 30% penetration level. The CAISO load-following methodology is an operational model that considers short-term dispatch for a simplified system, accounting for net load changes and short-term forecast error. The load-following study estimates that the amount of load following increases by roughly 800 MW from a base of 2,200 MW. A sensitivity analysis with a modest decrease in forecast error shows that forecast improvements can reduce the additional load-following requirement by about 50%.
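A much-simplified version of the net-load ramp comparison described above might look like the following sketch, which contrasts the steepest hourly ramp of load alone with that of load minus wind. The hourly profiles are synthetic shapes of my own, not CAISO data.

```python
import numpy as np

# Simplified sketch of the net-load ramp comparison described above:
# maximum hourly ramp of load alone vs. load minus wind.
# Hourly profiles are synthetic and illustrative only.

hours = np.arange(24)
load_mw = 25_000 + 10_000 * np.sin((hours - 9) / 24 * 2 * np.pi)    # afternoon peak
wind_mw = 4_000 + 3_000 * np.cos((hours - 3) / 24 * 2 * np.pi)      # night-time peak

net_load_mw = load_mw - wind_mw

max_load_ramp = np.max(np.diff(load_mw))         # steepest hourly rise of load alone
max_net_ramp = np.max(np.diff(net_load_mw))      # steepest hourly rise of net load

print(f"Max hourly load ramp:     {max_load_ramp:.0f} MW/h")
print(f"Max hourly net-load ramp: {max_net_ramp:.0f} MW/h")
print(f"Increase: {(max_net_ramp / max_load_ramp - 1):.0%}")
```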

Each new facility, when built, will be subject to a transmission study, which will determine in detail the needed transmission and the expected impacts. The studies that have included some transmission analysis have shown that new transmission is useful and sometimes necessary; one interesting point is that extra-high-voltage transmission lines are often called for in connection with wind. As noted above, the generator characteristics of wind differ from conventional generation, which relies on synchronous machines that provide inertia through their electro-mechanical link to the grid, whereas wind generation has historically used induction generators (IG). Modern turbines use more advanced doubly fed induction generators (DFIG) instead of standard IGs. DFIGs are variable-speed generators, allowing them to operate more efficiently over a wider range of wind speeds; DFIG generators can also control the amount of reactive power used or supplied, much like a traditional generator. Another common generator for a modern turbine is a full-conversion system, in which the generator produces AC power that is converted to DC and then back to grid-synchronized AC. The AC-DC-AC systems have the benefits of the DFIG system over the standard IG and are capable of providing inertial response. These new turbine generator systems have alleviated much of the worry with respect to the electrical connection.

Fault tolerance was another area of concern for wind generation on the transmission system. Older wind turbine generators were not fault tolerant and often dropped off line during grid disturbances. Operators encouraged this behavior in the past for small amounts of wind, because the grid would drop only a small amount of generation that could be easily replaced. As wind penetrations increase, the potential disruption from all the wind dropping off at once is a large concern. Grid codes in many areas now require wind generation to have low voltage ride-through capability.

Voltage regulation is another concern for wind generation. The concern is related to the characteristics of the IG: in order to generate, an IG consumes reactive power, which affects the local voltage. Wind generation has evolved beyond the simple IG to the doubly fed induction generator (DFIG) and full AC-DC-AC conversion. These newer generators do not consume reactive power the way older models did and are able to support voltage; many of them can even help support voltage when there is insufficient wind to generate power. Supporting the local voltage is important for generation, particularly if there is no other generation in the area.

Frequency response, or inertial response, is another area of concern when integrating wind into the power system. When a large fault happens, the system becomes suddenly imbalanced and the frequency changes as a result. The large amount of rotating mass behind generators helps arrest frequency changes, keeping the frequency in a manageable range while the system is rebalanced. Some wind generators do not provide inertia this way because of their generator types. If large amounts of wind replace conventional generation, care will need to be taken to make sure that frequency changes remain manageable. Conventional generators can also have frequency-responsive governors, which provide an injection of power if frequency dips suddenly. Since wind generation attempts to make maximum use of the available wind, it may not be able to respond to frequency dips. If wind has displaced significant conventional generation, there is concern that a frequency dip may not be arrested, which could cause a cascading blackout as generators shut down to avoid the damage caused by generating under frequency. This is still an area of investigation as national standards for frequency response develop. Modern wind turbines do have some ability to provide inertial response.

Modern supervisory control and data acquisition (SCADA) systems have also helped improve integration. They allow detailed real-time measurements of wind plants to be visible to operators, and they allow wind plants to control their output, whereas historically output was subject only to the weather. SCADA systems combined with improvements in wind generators can help eliminate many of the transmission and reliability concerns associated with large penetrations of wind power.

There are three basic strategies for managing wind integration: reduce uncertainty, increase flexibility, and increase diversity.

Wind forecasting is seen as one of the best ways to reliably accommodate wind power.

Curtailment. Another very common operating practice to manage wind is to curtail the wind generation periodically.

Adjust Scheduling and Dispatch. Scheduling practices vary widely between balancing areas. The scheduling lead time and time steps are both very important when it comes to integrating wind generation. Many areas do the majority of scheduling well ahead of the operating hour and fix hourly schedules for most units, which puts them at a disadvantage when trying to deal with forecast error and variability within an hour. Without a way to change dispatch or adjust schedules within the hour, systems must use expensive regulation to keep the system in balance. Having a real-time market, or a similar process to adjust schedules within the hour, will help prevent flexibility from being stranded and will reduce the amount of regulation needed; it allows the flexibility of units to be realized through a more efficient dispatch process. Changing the dispatch process should allow ancillary services, primarily regulation, to be reduced. Systems that operate markets on a five- or ten-minute basis have more flexibility to adjust to wind generation. The Avista study models how market structure can change wind integration costs by considering the addition of a 10-minute market on top of the existing hourly market. The study shows that between 45 and 75 percent of the integration costs are attributable to factors that occur within the hour, meaning that most of the costs arise because the system is locked in for an hour. When the 10-minute market is added to the analysis, it gives the system more flexibility to respond to conditions; for their system the integration costs are lowered by between 40 and 60 percent.
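The within-hour effect described for the Avista study can be illustrated by comparing the imbalance left for regulation when schedules are fixed for an hour versus re-set every 10 minutes. The sketch below uses a synthetic net-load trace and a naive persistence schedule; it is not the Avista model.

```python
import numpy as np

# Sketch of why sub-hourly scheduling reduces the burden on regulation:
# compare the residual imbalance when dispatch is fixed in hourly blocks
# versus re-set every 10 minutes. Synthetic data; not the Avista study.

rng = np.random.default_rng(1)
minutes = 60 * 24
net_load_mw = 10_000 + np.cumsum(rng.normal(0, 8, minutes))   # random-walk net load

def block_schedule(series, block_minutes):
    """Schedule each block at the mean of the previous block (naive persistence)."""
    sched = np.empty_like(series)
    for start in range(0, len(series), block_minutes):
        prev = series[max(0, start - block_minutes):start]
        sched[start:start + block_minutes] = prev.mean() if len(prev) else series[0]
    return sched

for block in (60, 10):
    residual = net_load_mw - block_schedule(net_load_mw, block)
    print(f"{block:>2}-minute blocks: regulation must cover ~{np.std(residual):.0f} MW (1 sigma)")
```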

Change Ancillary Services. The majority of studies show that the necessary amount of ancillary services will increase. Many studies focus on regulation and suggest increasing the amount of regulation procured. Estimates of the regulation increase vary significantly between studies, but despite the considerable range, all studies agree that increasing wind generation will increase the amount of ancillary services a region must maintain to keep the same reliability level. Regulation is the ancillary service most affected. Ancillary services are usually more expensive than energy, and the system tries to keep their cost down. The variation in relative ancillary service needs between balancing areas depends in large part on the scheduling time frames. One way to offset increases in ancillary services is to increase the number of entities that can provide them, including having more generation certified to provide ancillary services, even wind generation.

Encourage Flexible Generation. Operators are concerned about the increased variability in the system, as well as the additional need for ancillary services. As a way of managing this, operators suggest encouraging the development of more flexible generation. This includes constructing new generation that can meet future needs or retrofitting current generation to operate more flexibly. To encourage generators with sufficient flexibility to be constructed, operators will need to compensate generators for the extra benefits they provide. One way to do this is to make resource adequacy payments that consider more than available capacity. There are several attributes that system operators consider when thinking about generator flexibility.

Frequent cycling capability is another desirable feature: generators that can cycle daily or more often are preferred, and faster start and stop capability goes along with frequent cycling. Start time refers to the amount of time it takes for a generator to turn on or off from when it receives an instruction; this can be on the order of days, so reducing it to hours or less could greatly increase flexibility.

Demand response stands on its own as a way to enhance the power system. Demand response can allow systems to avoid installing expensive peak generation that is rarely used.

Zoning and Aggregation. Many studies have shown there are benefits to increasing the diversity of wind resources. Diverse wind resources aggregated together have less variability than resources located close together, so if systems can encourage wind generation in many areas they will have to make fewer changes to accommodate the wind.

Another option for aggregation is a larger balancing area. Larger balancing areas reduce the penetration of wind generation, which reduces its effects. Additionally, larger balancing areas have larger generation fleets and more flexibility in how they are dispatched.
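The statistical basis for this smoothing benefit is that the variability of the average of partially correlated site outputs is lower than that of any single site. The sketch below assumes N identical sites with a single pairwise correlation, which is a simplification of mine rather than something taken from the studies.

```python
import numpy as np

# Sketch of the aggregation benefit described above: relative variability of
# a combined wind fleet versus a single site, assuming N identical sites whose
# outputs share one pairwise correlation. Numbers are illustrative only.

def relative_variability(n_sites, pairwise_corr, site_sigma=1.0):
    """Std. dev. of the fleet average relative to one site's std. dev."""
    # Variance of the mean of N equally correlated variables:
    #   sigma^2 / N * (1 + (N - 1) * rho)
    var = site_sigma**2 / n_sites * (1 + (n_sites - 1) * pairwise_corr)
    return np.sqrt(var) / site_sigma

for rho in (0.0, 0.3, 0.7):
    summary = ", ".join(f"N={n}: {relative_variability(n, rho):.2f}" for n in (1, 4, 16))
    print(f"pairwise correlation {rho}: {summary}")
```

The output shows that geographically diverse (weakly correlated) sites smooth out far more of the variability than nearby, strongly correlated ones, which is the diversity argument made above.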

Grid Codes. Reliability organizations are actively studying renewable energy and revising grid codes to ensure the reliability of the system. Low voltage ride-through (LVRT) and reactive control are some of the standards that have been introduced. Balancing areas, reliability regions, and NERC will continue to review wind turbine technology and power system performance to assure the reliable operation of the power grid. Grid codes will help ensure that wind won’t harm the power system’s reliability.

Telemetry. It is very important for system operators to have quality information on the state of the system; this enables them to ensure reliability. Telemetry of power system elements is crucial to giving operators the information they need, and telemetry at wind sites should provide both meteorological and power data. Real-time measurement of wind plant performance has a variety of benefits. Power systems have real-time monitoring systems that typically measure generation and transmission conditions; operators continually monitor the state of the system and make operating decisions based on the measurements. These systems can easily be adapted to include wind production information as well as local meteorological information for wind farms, such as wind speed, wind direction, air temperature, pressure, and humidity. The data can give operators real-time information on wind generation, its variability, and recent trends, and the measurements can also be used by forecasters to better predict wind generation. Telemetry for transmission elements should also be increased to monitor the greater system, especially for upgrades made to integrate wind generation.

Storage. Storage is often discussed in the integration studies, and there are many possibilities for how storage could benefit a system with high penetrations of renewables. Energy storage has the potential to address both the intermittency of wind and its variability, depending on the characteristics of the storage. There is a wide variety of storage technologies with a range of operating characteristics; the CAISO study describes many of the technologies and the properties they offer. The predominant technologies are pumped storage hydro, compressed air, batteries, flywheels, supercapacitors, and hydrogen. The ability of storage to mitigate potential problems depends on its characteristics. A large storage system would be able to charge during windy conditions when the energy isn’t needed and then discharge when it is calm and more electricity is needed.

For shifting energy, pumped storage hydro is the most practical storage type, and many areas already have some pumped storage hydro installed. Smaller storage could be used to manage variability, principally within the hour. Under the right conditions storage could contribute to a variety of operational functions: it could provide reserves and regulation, be used for load following, provide peak power or add demand in minimum-load conditions, and supply reactive support or inertial response. While storage has many potential benefits for power systems, two things are important to understand. First, storage needs to make sense from the system perspective compared with other operational strategies: if wind adds variability, the system’s needs should be evaluated relative to that variability, and storage should not be used simply to return the system to its state before wind was added. Second is the cost-benefit analysis. Storage will be competing with other generation for all of those possible functions, and the revenue model for storage is often unclear; on cost alone, building storage is more expensive than equivalent-capacity gas turbines. The EWITS study suggests that in certain situations storage can be used instead of transmission upgrades, though further analysis is needed.
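As a toy illustration of the energy-shifting role described above, the sketch below greedily charges a storage unit when wind exceeds demand and discharges it when wind falls short. The profiles, capacity, and efficiency are invented for illustration and do not come from any of the studies.

```python
# Toy sketch of the energy-shifting role of storage described above:
# charge when wind exceeds demand, discharge when it falls short.
# Hourly profiles and storage parameters are illustrative only.

wind_mwh   = [8, 9, 9, 8, 7, 5, 3, 2, 2, 3, 4, 6]   # hourly wind energy (MWh)
demand_mwh = [4, 4, 4, 5, 6, 7, 8, 9, 9, 8, 6, 5]   # hourly demand (MWh)

capacity_mwh, efficiency = 10.0, 0.8   # storage size; round-trip losses taken at charging
state_mwh = 0.0

for hour, (w, d) in enumerate(zip(wind_mwh, demand_mwh)):
    surplus = w - d
    if surplus > 0:                                   # charge with surplus wind
        charge = min(surplus, capacity_mwh - state_mwh)
        state_mwh += charge * efficiency
        action = f"charge {charge:.1f} MWh"
    else:                                             # discharge to cover the deficit
        discharge = min(-surplus, state_mwh)
        state_mwh -= discharge
        action = f"discharge {discharge:.1f} MWh"
    print(f"hour {hour:2d}: wind {w} MWh, demand {d} MWh, {action}, stored {state_mwh:.1f} MWh")
```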

Coordination. System operators, with a few notable exceptions, are not alone in trying to maintain system reliability: interconnections contain dozens of control areas that must interact to maintain reliability. Current practice typically locks in the flows between control areas in hourly blocks. More flexibility between balancing areas is seen as a way to increase the diversity in the system and mitigate the effects of wind generation. There are many ways for areas to coordinate their integration efforts. More frequent scheduling on shared lines or interconnections is one; it allows greater flexibility within areas if they do not have to maintain fixed flows across their interconnections for a full hour. Data sharing is another: sharing information about weather conditions can help areas coordinate their wind generation and perhaps obtain better forecasts. Coordination could also mean consolidating balancing areas into larger areas with one operator.

Extreme Conditions. Another area where studies often suggest future work is extreme weather events. Wind studies consider typical operating conditions and historical years; if a weather pattern did not occur in the input data it will not be considered in the study, even though it may be possible. There are extreme scenarios that could pose problems for the operation of the grid, and studying those cases could identify strategies to handle them successfully if they occur. Sub-hourly studies are often necessary to fully consider extreme events or other areas of concern. Extreme events can arise not only from wind generation behavior but also from the simultaneous behavior of the power system.

Along with the studies of possible extreme events studies need to be done on how to mitigate them. Extreme events may require special attention and solutions that are not typically used.

 

 

 

Posted in Renewable Integration | Leave a comment

Over 21 essential resources have peaked: Fish, milk, eggs, wheat, corn, rice, soy

Nature summary of this article: “The rates at which humans consume multiple resources such as food and wood peaked at roughly the same time, around 2006. This means that resources could be simultaneously depleted, so achieving sustainability might be more challenging than was thought.

Ralf Seppelt … and his colleagues estimated the peak rate of extraction for 27 resources. For 20 of them, mostly renewables such as meat and rice, the peak-rate years occurred between 1960 and 2010, with many clustering around 2006. Only coal, gas, oil, phosphate, farmed fish and renewable energy have yet to peak.

Humans use multiple resources to generate new ones and to meet basic needs, which could explain the synchronicity of peak usage, the authors suggest.

Seppelt, R., et al. 2014. Synchronized peak-rate years of global resources use. Ecology and Society 19(4): 50.

ABSTRACT

Many separate studies have estimated the year of peak, or maximum, rate of using an individual resource such as oil. However, no study has estimated the year of peak rate for multiple resources and investigated the relationships among them. We exploit time series on the appropriation of 27 global renewable and nonrenewable resources. We found 21 resources experienced a peak-rate year, and for 20 resources the peak-rate years occurred between 1960-2010, a narrow time window in the long human history. Whereas 4 of 7 nonrenewable resources show no peak-rate year, conversion to cropland and 18 of the 20 renewable resources have passed their peak rate of appropriation. To test the hypothesis that peak-rate years are synchronized, i.e., occur at approximately the same time, we analyzed 20 statistically independent time series of resources, of which 16 presented a peak-rate year centered on 2006 (1989-2008). We discuss potential causal mechanisms including change in demand, innovation and adaptation, interdependent use of resources, physical limitation, and simultaneous scarcity. The synchrony of peak-rate years of multiple resources poses a greater adaptation challenge for society than previously recognized, suggesting the need for a paradigm shift in resource use toward a sustainable path in the Anthropocene.

INTRODUCTION

Sustainable appropriation of nonrenewable and renewable resources is required for society’s long-term well-being. Four decades ago, Meadows’ limits to growth model reignited the old Malthusian debate about the limits of the world’s resources (Malthus 1798, Bardi 2000, Griggs et al. 2013). Limits to growth of specific resources such as oil (Hallock et al. 2014) or fossil water (Gleick and Palaniappan 2010) have been analyzed separately, by estimating the peak-rate, or maximum, year, defined as the year of maximum resource appropriation rate. For which renewable and nonrenewable resources can a peak-rate year be identified given the most up-to-date time series of human resource appropriation? Exploring the relation among peak-rate years for multiple resources then raises an important second question: are global peak-rate years synchronized, i.e., occurring at approximately the same time in the long history of human civilization? Calculating the appropriation rate of resources allows the detection of the maximum increase year or peak-rate year, which indicates the timing of scarcity or change in demand (Fig. 1). We analyzed peak-rate years for many of the world’s major resources and found synchrony in the peak-rate years of statistically independent resources by a method that is standardized, nonparametric, generalizable, and allows analysis of nonrenewable and renewable resources (Table 1), and we will conclude by giving clear implications for sustainable development goals (Arrow et al. 1995).

We focused on 27 nonrenewable and renewable resources essential for human well-being and daily needs, e.g., energy and food. These resources are also the focus of global policy bodies such as the United Nations and the World Bank. Nonrenewables include the fossil fuels, i.e., coal, gas, oil, supplying 87% of the energy consumed by the 50 wealthiest nations (Tollefson and Monastersky 2012). Renewables include staple crops, e.g., cassava, maize, rice, soybeans, and wheat, which the Food and Agriculture Organization of the United Nations identified as providing 45% of global caloric intake (FAO 2013). Combined with data on the consumption of animal products, the main sources of food are included in our analysis. We also evaluated resources with a long history of use, e.g., cropland and domesticated species, and renewable energy sources, which may be increasingly important in the future. Furthermore, we considered two global drivers of resource use, population and economic activity (world GDP). The database consists of time series of 27 global resources, 2 global drivers, and 13 national resources/drivers. The data sources are listed in Table 2. All data is accessible at Figshare http://dx.doi.org/10.6084/m9.figshare.929619. The raw data and smoothed times series of the bootstrap resamples (see Methods) are plotted in Figure 2.

METHODS

Peak-rate year estimation

We used a method that is standardized, nonparametric, generalizable, and allows analysis of nonrenewable and renewable resources. Renewable resources regenerate on short, human-scale time scales, i.e., harvest rate and regrowth have comparable time scales, so annual production is the response variable analyzed. Nonrenewables regenerate on geological time scales, so the response variable is the accumulated amount extracted. This choice of response variables allowed all resources to be analyzed with the same mathematical method (Table 1, Fig. 1). To estimate a peak-rate year, the maximum increase rate of the time series must be calculated. It is possible to use a parametric model, e.g., a logistic curve or its derivative. However, nonparametric curve fitting offers advantages regarding bias and does not require parametric assumptions or that a functional model be postulated, e.g., stationarity of the rate of resource appropriation; this means that the different resources and drivers need not follow the same growth process (Gasser et al. 1984). Further, by using bootstrap resamples to estimate the uncertainty of the peak-rate-year estimate, we avoided distributional assumptions. However, no prediction outside the range of the data can be performed.

Time series analysis of peak-rate years and synchrony testing

We provide a summary of the statistical analysis of the time series here; Appendix 1 documents the steps in detail, and Figure A1.1 in the appendix gives a graphical overview.

The time series of the 27 global resources, 2 global drivers, and 13 national resources/drivers (with lengths n = 12 – 112, see Table 2) were fitted with a cubic smoothing spline to find the peak rate of resource appropriation, based on the maximum of the first derivative. This nonparametric method has no distributional assumptions, but does not enable predictions. We performed 5000 bootstrap resamples, and the 50th percentile was taken as the peak-rate year, with the 2.5th and 97.5th percentiles as its uncertainty bounds, unless the 50th percentile was equal to the last year in the time series, in which case we concluded the rate of resource appropriation was still increasing.
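My reading of this procedure, sketched with SciPy on a synthetic logistic-type series, is shown below; this is not the authors' code, and the smoothing-parameter choice and bootstrap variant (residual resampling) are assumptions of mine.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Rough sketch of the peak-rate-year estimate described above: fit a cubic
# smoothing spline, take the year where its first derivative is largest, and
# bootstrap for an uncertainty interval. Synthetic data; not the authors' code.

rng = np.random.default_rng(42)
years = np.arange(1950, 2013)
true_series = 100 / (1 + np.exp(-(years - 1990) / 8))    # logistic-type growth
observed = true_series + rng.normal(0, 2, len(years))     # add observation noise

def peak_rate_year(y):
    spline = UnivariateSpline(years, y, k=3, s=len(years) * 4)  # smoothing is an assumption
    fine = np.linspace(years[0], years[-1], 2000)
    return fine[np.argmax(spline.derivative()(fine))]

estimate = peak_rate_year(observed)

# Residual bootstrap around the spline fit for an uncertainty interval.
base_fit = UnivariateSpline(years, observed, k=3, s=len(years) * 4)(years)
residuals = observed - base_fit
boot = [peak_rate_year(base_fit + rng.choice(residuals, len(years), replace=True))
        for _ in range(500)]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Peak-rate year: {estimate:.0f} ({lo:.0f}-{hi:.0f})")
```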

To test the hypothesis of synchrony, we selected statistically independent time series. We performed ARIMA modelling of the 27 global resource time series and tested the residuals with a Box-Pierce test of white noise. Haugh’s test of dependence was performed on the cross-correlation coefficients of the white-noise residuals for all pairs of resources (Haugh 1976). We selected 20 statistically independent time series of resources, of which 16 presented a peak-rate year. A peak-rate year from the 5000 bootstrap resamples for each of the 16 resources was randomly selected, and the mode of the resulting smoothed distribution of 16 peak-rate years was obtained. This process was repeated 5000 times, and we estimated the synchrony as the median of the 5000 modes. A nonparametric goodness-of-fit test was performed with a uniform distribution as the null hypothesis, i.e., no mode implies no synchrony, with a critical value obtained by Monte Carlo simulation (5000 runs). The statistical tests were performed at a Type I error rate of 0.05.
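A much-simplified illustration of the synchrony test is sketched below. It skips the ARIMA/Haugh independence screening and the repeated resampling of bootstrap peak-rate years, and simply compares how tightly a set of made-up peak-rate years cluster against a uniform no-synchrony null via Monte Carlo.

```python
import numpy as np

# Simplified illustration of the synchrony test described above: compare the
# clustering of peak-rate years against a uniform "no synchrony" null via
# Monte Carlo. The peak-rate years here are made up; this omits the ARIMA/Haugh
# screening and is only a sketch of the idea.

rng = np.random.default_rng(7)
peak_years = np.array([1985, 1988, 1993, 2002, 2004, 2005, 2006, 2006,
                       2007, 2007, 2008, 2008, 2009, 2009, 2010, 2010])
window = (1960, 2010)

def clustering_stat(years, bandwidth=5):
    """Height and location of the mode of a Gaussian-kernel smoothed density."""
    grid = np.linspace(window[0], window[1], 500)
    density = np.exp(-0.5 * ((grid[:, None] - years[None, :]) / bandwidth) ** 2).sum(axis=1)
    return density.max(), grid[density.argmax()]

obs_stat, obs_mode = clustering_stat(peak_years)

# Null hypothesis: peak-rate years drawn uniformly over the study window.
null_stats = [clustering_stat(rng.uniform(*window, size=len(peak_years)))[0]
              for _ in range(5000)]
p_value = np.mean(np.array(null_stats) >= obs_stat)

print(f"Mode of peak-rate years: {obs_mode:.0f}, p-value vs. uniform null: {p_value:.4f}")
```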

RESULTS

We observed that for 21 of the 27 global resources and for the 2 global drivers of resource use, there was a peak-rate year. For the 21 resources that had a peak-rate year (Table 3), all but 1 (cropland expansion) lay between 1960 and 2010 (Fig. 3). Given the long human history, this is a very narrow time window. The available data suggest that peak-rate years for several nonrenewable resources, i.e., coal, gas, oil, and phosphorus, have not yet occurred. This implies a continued acceleration of extraction, which is in accordance with earlier analysis for oil (Hallock et al. 2014) and phosphorus (Cordell et al. 2009).

Individual countries have detectable impacts on the global nonrenewable resource extraction rate. For example, in 2011 the rate of coal extraction for China was 7.2% (5.7-7.4), whereas the rate for the world without China was 3.7 % (3.5-3.8). The values for natural gas in 2011 were 10.1% (7.6-10.3) and 4.4% (4.0-4.4) with and without China, respectively. A peak-rate year for renewable energy has not occurred.

Figure 3 shows that the peak rate of earth-surface conversion to cropland occurred in 1950 (1920-1960), and the expansion of cropland recently stabilized at the highest recorded levels, about 1.8 x 10^9 ha (Ramankutty and Foley 1999). We find that peak-rate years recently passed for many agricultural products: soybeans in 2009 (1977-2011), milk in 2004 (1982-2009), eggs in 1993 (1992-2006), caught fish in 1988 (1984-1999), and maize in 1985 (1983-2007). Two major factors of agricultural productivity, N-fertilizers and the area of irrigated land, show peak-rate years in 1983 (1978-2010) and 1978 (1976-2003), respectively.

Water is a resource that many world policy bodies are concerned with, and it is largely understood as a renewable resource. But not all water is renewable: ‘fossil water’ stocks are isolated water resources that are consumed faster than they are naturally renewed. There is currently a lack of time-series data at the global scale on the status of hydrological resources (Fan et al. 2013). As an example of national trends, the greatest rate of groundwater extraction in the USA occurred in 1975 (1975-2005); water conservation and rationing rules likely reduced the rate of groundwater extraction thereafter (Gleick and Palaniappan 2010).

For maize, rice, wheat, and soybeans, yield per area is stagnating or collapsing in 24-39% of the world’s growing areas (Ray et al. 2012), which may explain why the peak-rate years have passed at the global level. The peak-rate years of renewable resources collectively suggest challenges to achieving global food security (Foley et al. 2011). We identified a sequence in the peak-rate years of resources associated with food production: 1950 for conversion to cropland, 1978 for conversion to irrigated land, and 1983 for fertilizer use. Because all peak-rate years for food resources appeared afterward, we infer that strategies to increase food production shifted from land expansion to intensification of production. Furthermore, the pattern of peak-rate years having occurred for land and food, but not yet for nonrenewable resources, suggests that sustained intensification of agricultural production is limited not by energy but by land.

Following the observation of an apparently simultaneous pattern of peak-rate years in Figure 3, we tested the hypothesis of synchrony among peak-rate years on 20 statistically independent resource time series, of which 16 presented a peak-rate year. We found that peak-rate years appeared clustered around 2006 (1989-2008), given the uncertainty surrounding the peak-rate-year estimate of each resource (Fig. 4). The synchrony is unlikely to be a statistical artefact: there is less than a 1 in 1000 chance that the distribution in Figure 4 would have been obtained by sampling from a uniform distribution, i.e., the null hypothesis of no synchrony is rejected.

DISCUSSION

Why is there a synchrony of peak-rate years? Some explanations follow. The overall hypothesis is that multiple resources become scarce simultaneously, which can be driven by two mechanisms.

First, multiple resources, e.g., land, food, energy, etc., are consumed at the same time to meet different human needs. For example, people require food for nutrition; water for drinking, irrigation, and cleaning; land for housing, recreation, food production, infrastructure; and energy for cooking food, transportation, heating, cooling, etc.

Second, producing one resource requires the use of other resources. For example, increasing food production requires more land and water, whose scarcity in turn limits further increases in food production, as the sequence of peak-rate years associated with food production shows (see above). Furthermore, the continued increase in extraction of less accessible resources results in an increased ecological and economic cost per unit extracted (Davidson and Andrews 2013), thus reducing the availability of the remaining resources. For example, pollution exacerbates water shortages because polluted water is not suitable for use. These two mechanisms provide the most parsimonious explanation for simultaneous scarcity leading to synchrony of peak-rate years.

Are there other factors causing synchronized peak-rate years? Besides scarcity, passing an individual peak-rate year may have two other possible causes: availability of substitutes, or lower demand, e.g., because less of a resource is needed owing to more efficient use, taste changes, or institutional and regulatory changes (Fig. 1). It is unlikely that substitution has a substantial influence on synchronization. Strong support for the hypothesis that substitution synchronized the peaks would require that substitution took place for all or most of the resources with synchronized peak-rate years. However, among the 16 resources with synchronized peak-rate years in our database, which contains most of the critical global resources, only a few resources may have substitutes. For instance, contrary to expectation, there is little evidence that farmed fish substitute for caught fish (Asche et al. 2001). In contrast, poultry products serve as a substitute for beef because they are cheaper and better adapted to changing tastes (Eales and Unnevehr 1988). However, evidence suggests that meat as a category is not being substituted by plant protein on a global scale (Daniel et al. 2011). Finally, there is little evidence that renewable energy, which did not show a peak-rate year, substitutes for fossil energy. In the last 50 years, the general global trend was that a unit of energy sourced from nonfossil fuels substituted for less than one quarter of a unit of fossil fuel-based energy, possibly as a consequence of economic and social complexity (York 2012).

A global synchronous reduction of demand is also an unlikely driver. Despite a declining global population growth rate, i.e., a peak-rate year passed in 1989 in accordance with preceding reports (Lutz and K. C. 2010), the global population continues to grow. In most developed countries, we identified peak-rate years in household intensity, i.e., the number of households per 100 people (Table 2). Additionally, the peak rate of meat consumption in the USA occurred in 1955 (1909-1999). Nevertheless, the rate of resource appropriation is not expected to decline, because consumption in developing countries is increasing with lifestyle changes (Brown 2012, Liu 2014), and the land area used for urban settlements and household numbers continue to increase (Liu et al. 2003, Seto et al. 2011). These shifts toward resource-intensive living likely more than offset the declining rate of population growth. Declining demand would have to come from broad-scale changes in individual preferences for conservation, which continue to seem unlikely.

Finally, constraints on production may not be alleviated unless there is disruptive innovation. For example, although plants show phenotypic plasticity, which agronomic research (e.g., breeding) exploits, particular biochemical mechanisms have so far not been disrupted or constructed de novo in a commercial setting: nitrogen fixation for cereals remains elusive (Charpentier and Oldroyd 2010), and further increases in photosynthetic efficiency are expected to be hard to achieve (Zhu et al. 2010). Further, a basic constraint on breeding is biological diversity. The rate of domesticating species, the biological foundation of food provisioning, began to slow around 2600 B.C. (3600-1500 B.C.), well before our era.

Synchrony among the peak-rate years suggests that multiple planetary resources have to be managed simultaneously, accounting for resource distribution and utilization (Steffen et al. 2011, Liu et al. 2013). Synchrony does not necessarily imply a tipping point that leads to disastrous outcomes, because trade-offs are possible (Seppelt et al. 2013) and adaptation, such as the currently increasing rate of renewable energy generation or shifting diets (Foley et al. 2011), can potentially be accelerated. Synchrony also suggests that the debate about whether humans can devise substitutes for individual natural capital needs to be broadened to assess simultaneous substitutability (Barbier 2011). Whether substitution and recycling will alleviate constraints to future economic growth (Neumayer 2002) remains an open question, especially because maintaining the innovation rate requires increasing expenditures on human capital (Huebner 2005, Fenichel and Zhao 2014). Arrow et al. (2012) estimated that the growth rate of human capital in the United States could be as low as 0.35%, which is 15-44% of the growth rate of conventional reproducible capital, e.g., infrastructure. China’s rate of human capital growth ranges between 1.1% and 2%, but is only 10-17% of its rate of growth in reproducible capital.

The synchronization of peak-rate years of global resource appropriation can be far more disruptive than a peak-rate year for one resource. Peak-rate year synchrony suggests that the relationship among resource appropriation paths needs to be considered when assessing the likelihood of successful adaptation of the global society to physical scarcity.

ACKNOWLEDGMENTS

We are grateful to Anna Cord, Jörg Priess, Nina Schwarz, Dagmar Haase, and Burak Güneralp for providing comments on earlier versions of the manuscript. We also thank Karen Seto and Burak Güneralp for providing access to the urban growth data. The work was funded by grant 01LL0901A “Global Assessment of Land Use Dynamics, Greenhouse Gas Emissions and Ecosystem Services – GLUES” (German Ministry of Research and Technology) under the Helmholtz Program “Terrestrial Environmental Research,” the U.S. National Science Foundation, and Michigan AgBioResearch. This article contributed to the Global Land Project (www.globallandproject.org).

LITERATURE CITED

Arrow, K. J., B. Bolin, R. Costanza, P. Dasgupta, C. Folke, C. S. Holling, B.-O. Jansson, S. Levin, K.-G. Mäler, C. Perrings, and D. Pimentel. 1995. Economic growth, carrying capacity, and the environment. Science 268:520-521. http://dx.doi.org/10.1126/science.268.5210.520

Arrow, K. J., P. Dasgupta, L. H. Goulder, K. J. Mumford, and K. Oleson. 2012. Sustainability and the measurement of wealth. Environment and Development Economics 17:317-353. http://dx.doi.org/10.1017/S1355770X12000137

Asche, F., T. Bjørndal, and J. A. Young. 2001. Market interactions for aquaculture products. Aquaculture Economics and Management 5(5-6):303-318. http://dx.doi.org/10.1080/13657300109380296

Barbier, E. B. 2011. Capitalizing on nature: ecosystems as natural assets. Cambridge University Press, New York, New York, USA. http://dx.doi.org/10.1017/CBO9781139014922

Bardi, U. 2000. The limits to growth revisited. Springer, New York, New York, USA. http://dx.doi.org/10.1007/978-1-4419-9416-5

Brown, L. R. 2012. Full planet, empty plates: the new geopolitics of food scarcity. Earth Policy Institute, Washington, D.C., USA. [online] URL: http://www.earth-policy.org/books/fpep

Charpentier, M., and G. Oldroyd. 2010. How close are we to nitrogen-fixing cereals? Current Opinion in Plant Biology 13:556-564. http://dx.doi.org/10.1016/j.pbi.2010.08.003

Cordell, D., J.-O. Drangert, and S. White. 2009. The story of phosphorus: global food security and food for thought. Global Environmental Change 19:292-305. http://dx.doi.org/10.1016/j.gloenvcha.2008.10.009

Costanza, R., L. Graumlich, W. Steffen, C. Crumley, J. Dearing, K. Hibbard, R. Leemans, C. Redman, and D. Schimel. 2007. Sustainability or collapse: what can we learn from integrating the history of humans and the rest of nature? Ambio 36:522-527. http://dx.doi.org/10.1579/0044-7447(2007)36[522:SOCWCW]2.0.CO;2

Daniel, C. R., A. J. Cross, C. Koebnick, and R. Sinha. 2011. Trends in meat consumption in the United States. Public Health Nutrition 14(4):575-583. http://dx.doi.org/10.1017/S1368980010002077

Davidson, D. J., and J. Andrews. 2013. Not all about consumption. Science 339:1286-1287. http://dx.doi.org/10.1126/science.1234205

Eales, J. S., and L. J. Unnevehr. 1988. Demand for beef and chicken product: separability and structural change. American Journal of Agricultural Economics 70:521-532. http://dx.doi.org/10.2307/1241490

Fan, Y., H. Li, and G. Miguez-Macho. 2013. Global patterns of groundwater table depth. Science 339:940-943. http://dx.doi.org/10.1126/science.1229881

Fenichel, E. P., and J. Zhao. 2014. Sustainability and substitutability. Bulletin of Mathematical Biology May 2014. http://dx.doi.org/10.1007/s11538-014-9963-5

Foley, J. A., N. Ramankutty, K. A. Brauman, E. S. Cassidy, J. S. Gerber, M. Johnston, N. D. Mueller, C. O’Connell, D. K. Ray, P. C. West, C. Balzer, E. M. Bennett, S. R. Carpenter, J. Hill, C. Monfreda, S. Polasky, J. Rockström, J. Sheehan, S. Siebert, D. Tilman, and D. P. M. Zaks. 2011. Solutions for a cultivated planet. Nature 478:337-342. http://dx.doi.org/10.1038/nature10452

Food and Agriculture Organization of the United Nations (FAO). 2013. FAOSTAT. Food and Agriculture Organization of the United Nations, Statistics Division. [online] URL: http://faostat3.fao.org/faostat-gateway/go/to/download/C/CC/E

Gasser, T., H.-G. Müller, W. Köhler, L. Molinari, and A. Prader. 1984. Nonparametric regression analysis of growth curves. Annals of Statistics 12:210-229. http://dx.doi.org/10.1214/aos/1176346402

Gleick, P. H., and M. Palaniappan. 2010. Peak water limits to freshwater withdrawal and use. Proceedings of the National Academy of Sciences 107:11155-11162. http://dx.doi.org/10.1073/pnas.1004812107

Griggs, D., M. Stafford-Smith, O. Gaffney, J. Rockström, M. C. Öhman, P. Shyamsundar, W. Steffen, G. Glaser, N. Kanie, and I. Noble. 2013. Sustainable development goals for people and planet. Nature 495:305-307. http://dx.doi.org/10.1038/495305a

Hallock, Jr., J. L., W. Wu, C. A. S. Hall, and M. Jefferson. 2014. Forecasting the limits to the availability and diversity of global conventional oil supply: validation. Energy 64:130-153. http://dx.doi.org/10.1016/j.energy.2013.10.075

Haugh, L. D. 1976. Checking the independence of two covariance stationary time series: a univariate residual cross-correlation approach. Journal of the American Statistical Association 71(354):378-384. http://dx.doi.org/10.2307/2285318

Huebner, J. 2005. A possible declining trend for worldwide innovation. Technological Forecasting and Social Change 72:980-986. http://dx.doi.org/10.1016/j.techfore.2005.01.003

Liu, J. 2014. Forest sustainability in China and implications for a telecoupled world. Asia and the Pacific Policy Studies 1:230-250. http://dx.doi.org/10.1002/app5.17

Liu, J., G. C. Daily, P. R. Ehrlich, and G. W. Luck. 2003. Effects of household dynamics on resource consumption and biodiversity. Nature 421:530-533. http://dx.doi.org/10.1038/nature01359

Liu, J., V. Hull, M. Batistella, R. DeFries, T. Dietz, F. Fu, T. W. Hertel, R. C. Izaurralde, E. F. Lambin, S. Li, L. A. Martinelli, W. J. McConnell, E. F. Moran, R. Naylor, Z. Ouyang, K. R. Polenske, A. Reenberg, G. de Miranda Rocha, C. S. Simmons, P. H. Verburg, P. M. Vitousek, F. Zhang, and C. Zhu. 2013. Framing sustainability in a telecoupled World. Ecology and Society 18(2): 26. http://dx.doi.org/10.5751/ES-05873-180226

Lutz, W., and S. K. C. 2010. Dimensions of global population projections: what do we know about future population trends and structures? Philosophical Transactions of the Royal Society B: Biological Sciences 365:2779-9. http://dx.doi.org/10.1098/rstb.2010.0133

Malthus, T. R. 1798. An essay on the principle of population as it affects the future improvement of society. Printed for J. Johnson, in St. Paul’s Church-Yard, London, UK.

Neumayer, E. 2002. Scarce or abundant? The economics of natural resource availability. Journal of Economic Surveys 14:307-335. http://dx.doi.org/10.1111/1467-6419.00112

Peterson, N., T. Peterson, and J. Liu. 2013. The housing bomb. Johns Hopkins University Press, Baltimore, Maryland, USA.

Ramankutty, N., and J. A. Foley. 1999. Estimating historical changes in global land cover: croplands from 1700 to 1992. Global Biogeochemical Cycles 13:997-1027. http://dx.doi.org/10.1029/1999GB900046

Ray, D. K., N. Ramankutty, N. D. Mueller, P. C. West, and J. A. Foley. 2012. Recent patterns of crop yield growth and stagnation. Nature Communications 3:1293. http://dx.doi.org/10.1038/ncomms2296

Seppelt, R., S. Lautenbach, and M. Volk. 2013. Identifying trade-offs between ecosystem services, land use, and biodiversity: a plea for combining scenario analysis and optimization on different spatial scales. Current Opinion in Environmental Sustainability 5:458-463. http://dx.doi.org/10.1016/j.cosust.2013.05.002

Seto, K. C., M. Fragkias, B. Güneralp, and M. K. Reilly. 2011. A meta-analysis of global urban land expansion. PLoS ONE 6:e23777. http://dx.doi.org/10.1371/journal.pone.0023777

Steffen, W., Å. Persson, L. Deutsch, J. Zalasiewicz, M. Williams, K. Richardson, C. Crumley, P. Crutzen, C. Folke, L. Gordon, M. Molina, V. Ramanathan, J. Rockström, M. Scheffer, H. J. Schellnhuber, and U. Svedin. 2011. The Anthropocene: from global change to planetary stewardship. Ambio 40(7):739-761. http://dx.doi.org/10.1007/s13280-011-0185-x

Tollefson, J., and R. Monastersky. 2012. The global energy challenge: awash with carbon. Nature 491:654-655. http://dx.doi.org/10.1038/491654a

U.S. Patent and Trademark Office. 2013. U.S. Patent Statistics Chart Calendar Years 1963 – 2013. U.S. Patent and Trademark Office, Washington, D.C., USA. [online] URL: http://www.uspto.gov/web/offices/ac/ido/oeip/taf/us_stat.htm

World Bank. 2014. Indicators. World Bank, Washington, D.C., USA. [online] URL: http://data.worldbank.org/indicator

York, R. 2012. Do alternative energy sources displace fossil fuels? Nature Climate Change 2:441-443. http://dx.doi.org/10.1038/nclimate1451

Zhu, X.-G., S. P. Long, and D. R. Ort. 2010. Improving photosynthetic efficiency for greater yield. Annual Review of Plant Biology 61:235-261. http://dx.doi.org/10.1146/annurev-arplant-042809-112206

Posted in Limits To Growth | Tagged , , | Leave a comment

Extreme Events. CEC 2011. Variable distributed generation from solar and wind increases the chance of large blackouts

Morgan, M., et al. (Pacific Northwest National Laboratory, University of Wisconsin-Madison, Electric Power Research Institute, BACV Solutions, Southern Company, CIEE, University of Alaska – Fairbanks, and KEMA). 2011. Extreme Events. California Energy Commission. Publication number: CEC-500-2013-031.


Figure 18: Blackout frequency and size (Figure 19, not shown, is similar) increase greatly with highly variable distributed generation and decrease with reliable distributed generation.

Summary

This study showed that in some cases, increasing the proportion of variable distributed generation could actually increase the long-term frequency of the largest blackouts. If the decentralized generation is highly variable, as is the case with wind and solar power, the operation of the grid can be severely degraded. This may increase both the probability of large blackouts and the frequency of failures.

One potentially problematic scenario is that as the early penetration of distributed generation comes on line, it will actually make the system more reliable and robust since it will effectively be adding to the capacity margin. However, as new distributed generation is added, the system could become much less reliable as the demand grows, the fraction of distributed generation grows, and the capacity margin falls back to historical, mandated levels.

Possible trigger events that can lead to a blackout include short circuits due to lightning, tree contacts, or animals, severe weather, earthquakes, operational or planning errors, equipment failure, or vandalism.

The worst case occurs when highly centralized high-variability generation, such as large wind farms, is added without the necessary increase in generation margins.

Large blackouts pose a substantial risk that must be mitigated to maintain the high overall reliability of an electric power grid. As the control of the power grid becomes far more complex with the increasing penetration of new generation sources such as wind and solar power and new electric loads such as electric cars, maintaining high reliability of the electric grid becomes even more critical.

Generator capacity margin or generation variability leveling mechanisms are critical to reducing the degradation that can be caused by the increased penetration of sustainable distributed generation.

The backbone of electric power supply is the high-voltage transmission grid. The grid serving California is part of the larger Western Interconnection, administered by the Western Electricity Coordinating Council (WECC), which extends from the Mexican border well into Canada and from the Pacific coast to the Rocky Mountains.

The western power grid is an impressively large and complex structure. The full WECC interconnection system comprises 37 balancing authorities (BAs), 14,324 high-and medium-voltage transmission lines, 6,533 transformers, 16,157 buses (8,230 are load buses), and 3,307 generating units. The grid has 62 major transmission paths between different areas.

While the extent of this grid provides it with certain reliability benefits, it also adds vulnerabilities because it provides multiple paths for any local disturbance to propagate. This is the problem of cascading failure; a series of failures occur, each weakening the system further, making subsequent failures more likely.

System cascading failures may occur due to the loss of several important elements, such as multiple generating units within a power plant, parallel transmission lines or transformers, and common right-of-way circuit outages. The failure of these elements may propagate widely through the interconnected power network and result in a local or wide-area blackout. Failures of this kind that cause severe consequences are the initiating events of a cascading failure.

The electrical transmission system of California, like all interconnected transmission systems, is vulnerable to extreme events in which complicated chains of exceptional events cascade to cause a widespread blackout across the state and beyond.

A reliable transmission grid is essential for enabling transition to renewable energy sources and electric cars, especially as the grid itself evolves toward a “smart” infrastructure.

The high voltage transmission grid for California is part of the larger western power grid, a complicated and intricately coordinated structure with hundreds of thousands of components that support the electrical supply and hence the way of life for California citizens, business, and government.

Although the transmission grid is normally very reliable, extreme events in which disturbances cascade across the grid and cause large blackouts do occasionally occur and result in direct costs to society amounting to billions of dollars.

There is an evident need to expand the list of initiating events to reflect the complexities of modern power systems as well as new factors such as the increasing penetration of variable renewable generation resources, demand-side load management, virtual and actual consolidation of balancing authorities, new performance standards, and other factors.

 

Excerpts from the 85 pages:

These large blackouts always have a substantial impact on citizens, business and government. Although these are rare events, they pose a substantial risk. Much is known about avoiding the first few failures near the beginning of a cascade event series, but there are no established methods for directly analyzing the risks of the subsequent long chains of events. The project objective is to find ways to assess, manage, and reduce the risk of extreme blackout events. Since this is a difficult and complex problem, multiple approaches are pursued, including examining historical blackout data, making detailed models of the grid, processing simulated data from advanced simulations, and developing and testing new ideas and methods. The methods include finding critical elements and system vulnerabilities, modeling and simulation, quantifying cascade propagation, and applying statistical analyses in complex systems. The project team combines leading experts from industry, a national laboratory, and universities.

Although such extreme events are infrequent, statistics show that they will occur. The electric power industry has always worked hard to avoid blackouts, and there are many practical methods to maintain reliability. However, the cascading-failure problem is so complex that there are no established methods that directly analyze the risk of the large blackouts. The overall project objective is to assess the risk of extreme-blackout events and find ways to manage and reduce this risk. Managing the risk of extreme events such as this is particularly important as society moves toward environmental sustainability.

 

Although extreme events only occur occasionally, the NERC data show a substantial risk of extreme events in the WECC region.

From the area of operations, the researchers found that the average fractional load (the load divided by the limit) of the transmission lines is a good representation for the risk of large failures. If this average is kept below about 50%, the probability of large failures appears to decrease. This in turn has major implications for the ratepayer; operating at less than 50% of line capacity would lead to improved reliability for the users but would probably require investment in both the transmission capacity and demand-side control.

 

Researchers found that decentralized generation can greatly improve the reliability of the power transmission grid. However, if the decentralized generation is highly variable, as is the case with wind and solar power, the operation of the grid can be severely degraded. This may increase the probability of large blackouts and a higher frequency of failures. The project results suggest that one of the critical factors is the generation margin. If high-variability non-centralized generation is brought on-line as an increase in the generation capacity margin, it is likely to improve the network robustness; however, if over time that margin declines again (as the demand increases) to the standard value, the grid could undergo a distinct decline in reliability characteristics. This suggests a need for care in planning and regulation as this decentralization increases. The worst case occurs when highly centralized high-variability generation, such as large wind farms, are added without the necessary increase in generation margins. Increased use of de-centralized generation in the system has numerous effects on the ratepayer, from decreased electricity costs and increased reliability, if implemented carefully, to decreased reliability and an accompanying increase in costs, if not.

CHAPTER 1: Introduction

On August 10, 1996, a blackout started in the northwestern United States and cascaded to disconnect power to about 7,500,000 customers over the West Coast, including millions of customers in both northern and southern California. Power remained out for as much as 9 hours, snarling traffic, shutting down airports and leaving millions in triple-digit heat. An initially small power-system disturbance, a sagging power line, cascaded into a complicated chain of subsequent failures leading to a widespread blackout. Although such extreme events are infrequent, historical statistics show they will occur. The resulting direct cost is estimated to be in the billions of dollars, not including indirect costs resulting from social and economic disruptions and the propagation of failures into other infrastructures such as transportation, water supply, natural gas, and communications.

 

5.2 Line-Trip Data

The transmission line outage data set consists of 8864 automatic line outages recorded by a WECC utility over a period of ten years. This is an example of the standard utility data reported to NERC for the Transmission Availability Data System (TADS). The data for each transmission line outage include the trip time. More than 96% of the outages are of lines rated 115 kV or above. Processing identified 5227 cascading sequences in the data. Some of these cascades are long sequences of events, but most are short.
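As a rough sketch of how recorded trip times can be grouped into cascading sequences, the snippet below clusters outages by the gap between successive trips. The 1-hour gap threshold is an assumption chosen for illustration; the report's exact processing rule is not given in the excerpt and may differ.

```python
# Illustrative sketch: group automatic line-outage records into cascading
# sequences by the gap between successive trip times. The 1-hour gap threshold
# is an assumption for illustration, not the report's stated rule.
from datetime import datetime, timedelta
from typing import List

def group_into_cascades(trip_times: List[datetime],
                        gap: timedelta = timedelta(hours=1)) -> List[List[datetime]]:
    """Return a list of cascades, each a list of trip times in order."""
    cascades: List[List[datetime]] = []
    for t in sorted(trip_times):
        if cascades and t - cascades[-1][-1] <= gap:
            cascades[-1].append(t)      # close enough in time: same cascade
        else:
            cascades.append([t])        # otherwise start a new cascade
    return cascades

# Example: three trips within minutes form one cascade; a trip hours later starts another.
trips = [datetime(2005, 7, 1, 14, 0), datetime(2005, 7, 1, 14, 5),
         datetime(2005, 7, 1, 14, 20), datetime(2005, 7, 1, 20, 0)]
print(len(group_into_cascades(trips)))  # -> 2
```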

CHAPTER 4: Extreme Event Risk

Anatomy of Cascading Failure

Cascading failure can be defined as a sequence of dependent events that successively weakens the power system. The events are often some individual power system component being outaged or damaged or mis-operating, but can also include a device functioning as designed but nevertheless contributing to the cascade, or actions by operators, software, or automatic controls. As shown in Figure 6, cascading failure starts with a trigger event and proceeds with further events. All the events interact with the system state as the cascade proceeds. The occurrence of each event depends on the system state, and the system state is affected by every event that has already occurred, and thus the system state changes throughout the cascade. The progressive weakening of the system as the cascade propagates is characteristic of cascading failure.

Possible trigger events include short circuits due to lightning, tree contacts, or animals, severe weather, earthquakes, operational or planning errors, equipment failure, or vandalism. The system state includes factors such as component loadings, which components are in service, generation margin, hidden failures, situational awareness, and weather.

The triggers and the subsequent propagation of events have different mechanisms, so that different approaches are needed to mitigate the triggers or mitigate the propagation. Moreover, the triggers and the propagation have different effects on the risks of small, medium, and large blackouts, so that managing these risks may require different combinations of mitigations for triggers and/or propagation. Limiting the triggers and initiating events reduces the frequency of all blackouts, but can in some cases actually increase the occurrence of the largest blackouts, whereas limiting the propagation tends to reduce the larger blackouts, but may have no effect on the frequency of the smaller events.

The notions of causes (and blame) often can become murky in complicated cascades. For example, it is possible that automatic or manual control decisions that are advantageous in many standard system operational states and are overall beneficial may occasionally be deleterious.

4.2 Probabilistic Approach to Simulation of Rare Events

Cascading failure in power systems is inherently probabilistic. There are significant uncertainties in the initial state of the power system, in the triggering events, and in the way that the cascading events propagate or stop. The initial state of the power transmission system is always varying and includes factors such as patterns of generation and loading, equipment in service, weather, and situational awareness. Examples of trigger events are lightning, earthquakes, shorts involving trees and animals, equipment failure, and operational errors. The progress of cascading events depends on exact conditions and thresholds, can be very complicated, and can involve combinations drawn from dozens of intricate mechanisms, some of which involve unusual or rare interactions, that span the full range of physical and operational factors. It is appropriate to understand all these uncertainties probabilistically. Large blackouts are particular samples from an astronomically large set of possible but unusual combinations of failures. From a modeling perspective, the underlying probabilistic view is driven by several factors. It is impossible to enumerate all the possible large blackouts because of the combinatorial explosion of possibilities. While some selected mechanisms of cascading failure can be usefully approximated in a simulation, it is well beyond the current state of the art to represent all (or even only the physics-based) mechanisms in great detail in one simulation. The full range of power system phenomena involved in cascading failure occurs on diverse time scales, and obtaining the full data (such as fast dynamical data) is difficult for the large-network cases needed to study large cascading blackouts. Most important, such a simulation, even if otherwise feasible, would be too slow.

 

In WECC, one could consider small blackouts to be less than 100 MW load shed, medium blackouts to be between 100 MW and 1000 MW load shed, and large blackouts to be more than 1000 MW load shed. The historical data implies that large blackouts are rarer than medium blackouts, but that the large blackouts are more risky than the medium blackouts because their cost is so much higher.

Based on these cost assumptions, a rough calculation of large and medium blackout risk can be made. The NERC WECC blackouts are divided into small (<100 MW), medium (100-1000 MW), and large (>1000 MW) blackouts. The largest recorded blackout is 30,390 MW. Small blackouts are not systematically covered by the reported data and are put aside. According to the data, the large blackouts have about 1/3 the probability of the medium blackouts. The average large blackout is roughly 8 times the size of the average medium blackout, so its cost is roughly 20 times larger. Since risk is probability times cost, the risk of an average large blackout is roughly 7 times the risk of an average medium blackout.
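The risk comparison above can be reproduced directly from the numbers stated in the report; the two inputs below are taken from the text, not recomputed from data.

```python
# Back-of-the-envelope check of the risk comparison quoted above, using the
# report's stated assumptions.
relative_probability = 1 / 3   # large blackouts occur about 1/3 as often as medium ones
relative_cost = 20             # an ~8x larger blackout is assumed to cost ~20x more

relative_risk = relative_probability * relative_cost  # risk = probability x cost
print(round(relative_risk, 1))  # -> 6.7, i.e., roughly 7 times the risk
```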

CHAPTER 5: Results, Analysis, and Application to California and the Western Interconnection

1.1 Selection of Initiating Events

Power system cascading failures may occur due to the loss of several important elements, such as multiple generating units within a power plant, parallel transmission lines or transformers, and common right-of-way circuit outages. The failure of these elements may propagate widely through the interconnected power network and result in a local or wide-area blackout. Failures of this kind that cause severe consequences are the initiating events of a cascading failure. Some of the selected initiating events are in NERC Category D. Such events are not routinely analyzed by system planners and operators because of their complexity.

The selection of initiating events is a critical step in accurately simulating and analyzing large-scale cascading failures. Successful identification of initiating events can help identify the most severe disturbances and help system planners propose preemptive system reinforcements that will improve both the security and the reliability of the system. Analyzing too few initiating events may not be sufficient to reveal critical system problems. At the other extreme, scanning all combinations of initiating events in a bulk power system is computationally impossible. As an example, the Western Interconnection contains approximately 20,000 transmission lines. Screening all combinations of N-2 contingencies requires approximately 199,990,000 simulation runs, which is beyond the capability of available simulation tools; for example, if the time per run were 90 seconds, the total run time would be about 570 years (a quick arithmetic check of these figures appears after the list below). Currently, only 5-50 contingencies are selected annually to perform extreme event analysis to comply with NERC requirements in the WECC system. The selection of these contingencies is based on the experience of power grid operators and planners, that is, on knowing the critical elements in their systems. This limited set of events is included in the list created in this study. In this study, eight categories of initiating events were collected for the entire WECC system from multiple sources such as historical disturbance information, known vulnerable system elements, engineering judgment, transmission sensitivity analysis methods, and others. A list with more than 35,000 initiating events was created for the full WECC model. The different types of initiating events are summarized below.

1.1.3 Substation Outage. This type of initiating event considers the complete loss of a substation (bus) in the WECC model. It is used to simulate extreme events that result in a complete outage of all elements within a substation. 8,000 initiating events in this category were generated, considering all substations with voltage levels higher than 115 kV.
1.1.4 The Loss of Two Transmission Lines Based on Contingency Sensitivity Study.
1.1.5 Parallel Circuits Transmission Line Outage. Many of the higher-kV lines are made of two or more circuits on a common tower to increase their transmission capacity. However, during catastrophic events such as thunderstorms, lightning strikes or tornadoes, all the circuits of a multi-circuit transmission line can be out of service, leading to a huge loss of power-transfer capacity. This contingency list considers all the transmission lines that have two or more parallel circuits originating and ending on the same buses. 996 initiating events in this category were collected.
1.1.6 Common Right of Way and Line Crossings Outage. This outage list contains common corridors or common right-of-way (ROW) lines. Common ROW is defined by WECC as “Contiguous ROW or two parallel ROWs with structure centerline separation less than the longest span of the two transmission circuits at the point of separation or 500 feet, whichever is greatest, between the two circuits.” Considering these events is very important since the right-of-way lines generally fall within similar geographical areas and any natural calamity can easily cause the outage of these transmission lines.
1.1.7 Flow Gates between Balancing Authorities. The flow gates between various balancing authorities represent important transmission-path gateways transporting large amounts of power. Loss of a flow gate can cause major problems for a balancing authority, especially if the BA is normally a power importer without sufficient local generation to meet demand. 54 initiating events in this category were collected.
1.1.8 Major Transmission Interfaces in the WECC System. This event considers outages of major transmission interfaces or paths between different major load and/or generation areas as identified in the WECC power-flow base planning case. These interfaces are the backbone of the WECC power grid, and the loss of any of these paths can have a large impact. 62 initiating events in this category were collected.
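The combinatorial figures quoted before the list (199,990,000 N-2 runs, roughly 570 years at 90 seconds per run) can be checked in a few lines:

```python
# Reproducing the combinatorial figures quoted above: the number of N-2 line
# pairs among ~20,000 transmission lines, and the total run time at 90 s per run.
from math import comb

n_lines = 20_000
runs = comb(n_lines, 2)                 # all distinct pairs of line outages
print(runs)                             # -> 199990000

seconds = runs * 90                     # assumed 90 seconds per simulation run
years = seconds / (365.25 * 24 * 3600)
print(round(years))                     # -> about 570 years
```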
3.1 Critical Event Corridors Analysis

Although no two blackouts follow the same sequence of events, similar partial sequences of cascading outages may exist in a particular power system. Partial patterns in which transmission lines, generators or buses are forced out in a certain order can repeatedly appear across a variety of initiating events and system conditions. These patterns can result from multiple different initiating events, and therefore are seen as parts of different cascading processes. Figure 9 illustrates the hypothesis of these “critical event corridors.” Critical-corridor identification can be used to recommend transmission-system enhancements, protection-system modification, and remedial actions to help eliminate these most frequently observed, and therefore most probable, critical sequences that lead to severe consequences.

Selection of optimal locations for high penetration of renewables to minimize effects on system reliability; if location choice is not under the control of the BA, the results can point out potential extreme events due to the concentration of renewable resources in a few locations.
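The "critical event corridor" idea, i.e., partial outage sequences that recur across many different cascades, can be illustrated with a simple pattern count. The sketch below tallies how often one line outage is immediately followed by another across simulated cascades; the line names and cascade data are invented, and this is not the report's method, only an illustration of the concept.

```python
# Illustrative sketch of the "critical event corridor" idea: across many simulated
# cascades, count how often one line outage is immediately followed by another.
# Frequently repeated pairs suggest corridors. Cascade data here are invented.
from collections import Counter
from typing import List

def corridor_pairs(cascades: List[List[str]]) -> Counter:
    """Count consecutive (line_a -> line_b) transitions across all cascades."""
    counts: Counter = Counter()
    for cascade in cascades:
        for a, b in zip(cascade, cascade[1:]):
            counts[(a, b)] += 1
    return counts

# Example with made-up line names: the L12 -> L7 transition recurs in different cascades.
cascades = [["L12", "L7", "L33"], ["L5", "L12", "L7"], ["L12", "L7", "L19", "L2"]]
print(corridor_pairs(cascades).most_common(2))  # [(('L12', 'L7'), 3), ...]
```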

4.2 Finding Line Clusters That Are Critical During Propagation

Finding the triggers for a large blackout is only the first step. Most large blackouts have 2 distinct parts, the triggers/initiating event followed by the cascading failure. The cascade can be made up of as few as one subsequent stage or as many as dozens or even hundreds of stages. The cascading part of the extreme event is critically dependent on the “state” of the system: how heavily the lines are loaded, how much generation margin exists, and where the generation exists relative to the load. However, during large cascading events there are some lines whose probability of overloading is higher than others. Statistical studies of blackouts using the OPA code allow the identification of such lines or groups of lines for a given network model, thereby providing a technique for identifying at-risk (or critical) clusters. These lines play a critical role in the propagation of large events because they are likely to fail during the propagation of the cascade, making it more likely that the cascade will propagate further and turn into an extreme event. Therefore, it is clearly very important to identify them.

4.3 System State Parameters That Correlate With Large Blackouts

In a complex system, extreme events may be triggered by a random event. However, the much-higher-than-Gaussian probability of extreme events (the heavy tail) is a consequence of the correlations induced by operating near the operational limits of the system and has little to do with the triggering events. The result is that the extreme-event distribution is independent of the triggering events. Therefore, trying to control the triggering events does not lead to a change of the power-tail distribution. A careful reduction of triggering events may reduce the frequency of blackouts but will not change the functional form of the size distribution. The process of trying to plan for and mitigate the triggering events can in fact lead to a false sense of security, since one might think one is reducing risk by doing so when in reality the unexpected triggers that will certainly occur will lead to the same distribution of blackout sizes.

In these complex systems, an initiating event cannot be identified by just the random trigger event, but by the combination of the triggering event and the state of the system. This “state of the system” can be characterized by different measurements of the parameters of the system. In the case of power systems, for example, the system state includes the distribution and amounts of loads and power flows in the network. A simulation model like OPA is continually changing the network loading and power flows. This, importantly, gives a large sample of initiating events. The statistics of the results reflect many combinations of initial events and system states. It is also important to distinguish between blackout initiating events and general cascade initiating events. In power systems, a cascade, in particular a very short cascade, does not always lead to a blackout. Therefore, those two sets of initiating events are different. Within the OPA simulations, a blackout is defined as any event in which the fraction of load shed is greater than 0.00001. However, for comparison with the reported data we use fraction of load shed being greater than 0.002, which is consistent with the NERC reporting requirements from emergency operations planning standard EOP-004-1.

In calculating the probability of a blackout occurring, good measures include the number of lines overloaded in the first iteration, the average fractional line loading every day, the variance of the fractional line loading every day, and the number of lines with a fractional line loading greater than 0.9. They all show strong positive correlation with the probability of a blackout. When a blackout occurs, the size of the blackout correlates strongly with the number of lines overloaded in the initiating state. This is a very clear correlation. The size also has a positive correlation with the average fractional line loading every day, variance of the fractional line loading every day, and the number of lines with a fractional line loading greater than 0.9 (Figure 16).
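The sketch below illustrates, on synthetic data, the kind of state metrics named above (mean and variance of fractional line loading, and the count of lines loaded above 0.9) and how one might correlate them with blackout size. It is not the OPA model; the data-generating assumptions and the relation between loading and blackout size are invented purely for illustration.

```python
# Synthetic illustration (not OPA): compute the state metrics named above for each
# simulated day and correlate them with blackout size. All numbers are invented.
import numpy as np

rng = np.random.default_rng(0)
n_days, n_lines = 1000, 200

# Fractional line loadings (load / limit) for each day: invented, stress varies by day.
stress = rng.uniform(0.3, 0.9, size=n_days)
loading = np.clip(rng.normal(stress[:, None], 0.15, size=(n_days, n_lines)), 0, 1.2)

mean_loading = loading.mean(axis=1)
var_loading = loading.var(axis=1)
lines_above_090 = (loading > 0.9).sum(axis=1)

# Invented blackout size: grows with the number of heavily loaded lines, plus noise.
blackout_size = np.maximum(0, lines_above_090 * 5 + rng.normal(0, 20, n_days))

for name, metric in [("mean loading", mean_loading),
                     ("loading variance", var_loading),
                     ("lines > 0.9", lines_above_090)]:
    r = np.corrcoef(metric, blackout_size)[0, 1]
    print(f"correlation of {name} with blackout size: {r:.2f}")
```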

 

Having found a number of system parameters that strongly correlate with blackout probability, and even more importantly with extreme-event size, it is possible to consider monitoring these quantities in the real system. The goal would be to see (1) whether they show meaningful variations and the same correlations exist, and (2) if so, whether the noise level is low enough to make any of them useful as a precursor measure, which is the ultimate objective of the work in this section.

4.4 Impact of Distributed Generation

With the increased utilization of local, often renewable, power sources, coupled with a drive for decentralization, the fraction of electric power generation that is “distributed” is growing and is set to grow even faster. It is often held that moving toward more distributed generation would have a generally positive impact on the robustness of the transmission grid. This intuited improvement comes simply from the realization that less power would need to be moved long distances, and the local mismatch between power supply and demand would be reduced. The project approached the issues of system dynamics and robustness with this intuitive understanding in mind and with the underlying question to be answered: “Is there an optimal balance of distributed versus central generation for network robustness?” In the interest of understanding the effects of different factors, the investigation was initiated by intentionally ignoring the differences in the economics of centralized vs. distributed generation and approaching the question in a hierarchical manner, starting from the simplest model of distributed generation and then adding more complexity.

Using OPA to investigate the effects of increased distributed generation on the system, it was found that:

  1. Increased distributed generation can greatly improve the overall “reliability and robustness” of the system.
  2. Increased distributed generation with high variability (such as Wind and Solar) can greatly reduce overall “reliability and robustness” of the system, causing increased frequency and size of blackouts.
  3. Generator capacity margin or generation variability leveling mechanisms are critical to reducing the degradation that can be caused by the increased penetration of sustainable distributed generation.

Figure 18 shows the blackout frequency as the degree of distribution (a surrogate for the amount of distributed generation) is increased. With reliable distributed generation (the same variability as central generation), the overall blackout frequency decreases, and Figure 19 shows a concomitant decrease in load-shed sizes as the degree of distribution increases. However, Figures 18 and 19 show a large increase in both the frequency and size of blackouts when the distributed generation has realistic variability. In some cases, distributed generation can make the system less robust, with the risk of large blackouts becoming greater. It is clear that distributed generation can have a range of effects on system robustness and reliability, arising from the reliability of the generation (wind, solar, and so forth), the fraction that is distributed, and the generation capacity margin. Many more aspects of distributed generation, such as local storage, demand-side control, and so forth, remain to be investigated.

Figure 18: Blackout Frequency Decreases with Increased Reliable Distributed Generation but Increases Greatly With Increased Highly Variable Distributed Generation

One potentially problematic scenario is that as the early penetration of distributed generation comes on line, it will actually make the system more reliable and robust since it will effectively be adding to the capacity margin. However, as new distributed generation is added, the system could become much less reliable as the demand grows, the fraction of distributed generation grows, and the capacity margin falls back to historical, mandated levels.

5.3 Predicting Extent of Blackout Triggered by an Earthquake

This section summarizes the project results on the size of blackouts triggered by earthquakes; Chapter 6.5.5 of the Phase 1 report gives details. If there is a large initial shock to the power system, such as from an earthquake, what is the risk of the failure cascading to other regions of the WECC? This is an important question because the time required to restore electric power and other infrastructure in the region that experienced damaging ground motion depends on how far the blackout extends. Long restoration times would multiply the consequences of the direct devastation, not only in conventional measures such as load loss but also in the restoration of lifeline services. Since earthquakes can produce orders of magnitude more costly damage than a blackout, any prolongation of earthquake restoration caused by the blackout cascading beyond the shaken region has a significant effect. The project made an illustrative calculation of the blackout extent, measured by the number of lines tripped, resulting from a large shock in which 26 lines initially outaged, based on a real earthquake scenario. The calculation applied the branching-process model with the observed propagation. Figure 22 shows an initial estimate of the distribution of the total number of lines tripped due to the combined effect of the earthquake and subsequent cascading. The most likely extent is about 90 lines tripped, but there is a one-in-ten chance that more than 150 lines would trip. (The chance of more than 150 lines tripped is the sum of the chances of 151, 152, 153 … lines out.) This initial estimate is illustrative of probable outage scenarios; a detailed examination of actual earthquake initiating failures and line-trip propagation data would be required to improve it. Similar calculations would be feasible for other large disturbances such as extreme weather events, wildfires or floods.
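A minimal sketch of the kind of branching-process calculation described above is given below: 26 lines trip initially, and each tripped line causes a random number of further trips. The Poisson offspring assumption and the propagation parameter (0.6) are illustrative values, not the report's estimates, so the printed numbers will not reproduce Figure 22.

```python
# Minimal branching-process sketch (illustrative only): 26 lines trip initially,
# and each tripped line causes a Poisson-distributed number of further trips.
# The offspring mean of 0.6 is an assumed value, not the report's estimate.
import numpy as np

def simulate_total_lines(initial=26, offspring_mean=0.6, rng=None, cap=10_000):
    rng = rng or np.random.default_rng()
    total, current = initial, initial
    while current > 0 and total < cap:
        current = rng.poisson(offspring_mean * current)  # next generation of trips
        total += current
    return total

rng = np.random.default_rng(42)
totals = np.array([simulate_total_lines(rng=rng) for _ in range(20_000)])
print("median total lines tripped:", int(np.median(totals)))
print("P(more than 150 lines trip):", np.mean(totals > 150))
```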

2.3.2 Additional Types of Initiating Events

There is an evident need to expand the list of initiating events to reflect the complexities of modern power systems as well as new factors such as the increasing penetration of variable renewable generation resources, demand-side load management, virtual and actual consolidation of balancing authorities, new performance standards, and other factors.

3.1.8 Impact of Distributed Generation

The project studied the impact of increased distributed generation on cascading failure risk with the OPA simulation. The results of this work suggest that a higher fraction of distributed generation with no generation variability improves the system characteristics. However, if the distributed generation has variability in the power produced (and this is typical of distributed generation sources such as wind or solar), the system can become significantly less robust, with the risk of large blackouts becoming much larger. It is possible to find an optimal value of the fraction of distributed generation that maximizes system robustness. Further investigations with different models of the reduced reliability of the distributed generation power and different distributions of the distributed generation would be worthwhile, as would the extension of this work to the larger WECC models.

Historical Data

The North American Electric Reliability Corporation (NERC) has made public the data for reportable blackouts in North America. Blackouts in the WECC for the 23 years from 1984 to 2006 have been analyzed. The 298 blackouts in the WECC data occur at an average frequency of 13 per year. The main measures of blackout size in the NERC data used in the project are load shed (MW) and number of customers affected. Blackout duration information is also available, but the data quality is less certain.

The NERC data follows from government reporting requirements. The thresholds for the report of an incident include uncontrolled loss of 300 MW or more of firm system load for more than 15 minutes from a single incident, load shedding of 100 MW or more implemented under emergency operational policy, loss of electric service to more than 50,000 customers for 1 hour or more, and other criteria detailed in the U.S. Department of Energy forms EIA-417 and OE-417.
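The reporting thresholds listed above can be written as a simple check. The threshold values are taken directly from the text; the function and field names are illustrative, and the actual forms (EIA-417/OE-417) contain additional criteria not encoded here.

```python
# Encoding the reporting thresholds listed above as a simple check. The threshold
# values come from the text; the function and field names are illustrative only.
def is_reportable(firm_load_lost_mw=0, uncontrolled_minutes=0,
                  emergency_load_shed_mw=0, customers_out=0, outage_hours=0):
    return (
        (firm_load_lost_mw >= 300 and uncontrolled_minutes > 15)  # uncontrolled loss of firm load
        or emergency_load_shed_mw >= 100                          # emergency load shedding
        or (customers_out > 50_000 and outage_hours >= 1)         # widespread customer outage
    )

print(is_reportable(firm_load_lost_mw=350, uncontrolled_minutes=20))  # -> True
print(is_reportable(customers_out=60_000, outage_hours=2))            # -> True
```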

 

Posted in Grid instability | Tagged , , , , , , , , , , , , , , | Leave a comment