Structurally Deficient Bridges

Preface.  As I explained in my book “When Trucks Stop Running”, if diesel fuel ran out, civilization would end within a week as grocery shelves, pharmacies, gas stations, and all other businesses ran out of supplies. The millions of miles of roads and tens of thousands of bridges that trucks drive on were built at a time when the energy return on energy invested of oil was 100 to 1.  Since global oil production may have peaked in 2018, the most important bridges need to be fixed ASAP before oil is rationed to agriculture and essential services.

Even London Bridge is falling down (Landler 2020).

Alice Friedemann www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, April 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

* * *

ARTBA (2020) ARTBA Bridge Report. American Road and Transportation Builders Association.

  • There are 178 million daily crossings on over 46,100 structurally deficient U.S. bridges in poor condition.
  • At the current rate, it would take 50 years to fix all of the nation’s structurally deficient bridges.
  • If placed end-to-end, the length of bridges in need of repair would stretch over 6,300 miles, long enough to make a round trip across the country from New York City to Los Angeles and back again to Chicago.
  • 1 in 3 bridges on the Interstate needs repair work.
  • ARTBA publishes maps and state-by-state lists of the worst bridges on its website.

Brady J (2019) New Bridge Data Supports C+ Report Card Grade. American Society of Civil Engineers.

The states with the most structurally deficient bridges, as a percent of their total bridge inventory, are Rhode Island (23%); West Virginia (19.8%); Iowa (19.3%); South Dakota (16.7%); Pennsylvania (16.5%); Maine (13.1%); Louisiana (13%); Puerto Rico (11.7%); Oklahoma (10.9%); and North Dakota (10.7%).

States with the largest number of structurally deficient bridges are Iowa (4,675 bridges); Pennsylvania (3,770); Oklahoma (2,540); Illinois (2,273); Missouri (2,116); North Carolina (1,871); California (1,812); New York (1,757); Louisiana (1,678); and Mississippi (1,603).

The subpar condition of our bridges is a result of an inability to properly fund our current bridge needs, with the most recent estimate putting our nation’s backlog of bridge rehabilitation needs at $123 billion. ASCE recommends that if we want to raise our bridge grade from a “C+”, we must:

  • Fix the federal Highway Trust Fund by raising the federal motor fuels tax by 25 cents.
  • Increase funding from all levels of government to continue reducing the number of structurally deficient bridges, decrease the maintenance backlog, and address the large number of bridges that have passed or are approaching the end of their design life.
  • Have bridge owners consider the costs across a bridge’s entire lifecycle to make smart design decisions and prioritize maintenance and rehabilitation.
  • States should ensure their funding mechanisms (motor fuels taxes or other) are sufficient to fund needed investment in bridges.
  • States and the federal government should consider long-term funding solutions for transportation infrastructure and potential alternatives to the motor fuel taxes, including further study and piloting of mileage-based user fees.

References

Landler M (2020) London’s Bridges Really Are Falling Down. Three major crossings on the Thames are closed to cars — one of them considered too dangerous even to walk across. Even the landmark Tower Bridge was recently shut for two days.  New York Times.


Largest oil spill on earth: Plastics

[Image: dead bird filled with plastic, photographed by Chris Jordan at Midway Atoll, 2009]

Preface. There have been thousands of articles since I published this back in 2003.

Today I read a surprising study that claims glass is more harmful than plastic because it is made from mined raw materials and requires more fossil fuels to produce and ship (Brock 2020). Yet far more plastic is produced each year: 380 million tonnes, versus 209 million tonnes of glass.

Plastic in the news:

2021 ‘Biodegradable’ plastic will soon be banned in Australia—that’s a big win for the environment. Biodegradable plastic implies it is made from plant materials and will break down into natural components after it is discarded. But many are made from fossil fuels with chemical additives that merely make the plastic fragment into micro-plastics, still polluting land and water. And they often don’t break down faster than traditional plastics. And don’t throw “compostable plastic” into your home compost (or recycling bin for that matter); it is meant only for industrial composting, where very high temperatures are reached.

2016-8-23 Stomachs of dead sperm whales found in Germany filled with plastics, car parts

2016-7-28 Is your garden hose toxic?

Below my account I’ve put another old story about plastic: Hayden T (2002) Trashing the Oceans. An armada of plastic rides the waves, and sea creatures are suffering. U.S. News & World Report.


***

Friedemann A. 15 Feb 2003. The largest oil spill on Earth: Plastic in the Oceans.  EnergyResources.

I had a disturbing experience at Cape Canaveral National Seashore last month. I drove to the north end of the park and walked south along the beach towards the enormous towers of the Kennedy Space Center, 11 miles away. On the left, the sparkling blue water was punctuated with the tall white plumes of pelicans dive-bombing the waves.

On the right was Mosquito lagoon, one of the most beautiful places in Florida.

I’d come to find Sea Beans.  These are beautiful exotic seeds of tropical plants from all over the world, with names like Hog-plum, Hamburger Bean, and Moonflower  (http://www.seabean.com/guide/index.htm).

But what I found was plastic trash. Miles and miles of soda bottles, plastic bags, milk jugs, plastic spoons, and the like. At the Ponce Inlet Marine Science Center, the docent was sure most of it came from party boats and cruises offshore. Case solved: cruise ships have been caught dumping their shit, literally, into the pristine waters of Alaska. Not surprising to find out they’re throwing trash overboard as well.

My career as trash detective would have ended then if I hadn’t seen an ad in the paper for a lecture on “How plastic trash finds its way into the ocean” at the Berkeley Public Library on February 11, 2003, where I heard Charles Moore speak.

The problem is huge — what I would call the largest oil spill in the world, since most plastic is made out of oil, not natural gas or coal.

Much of this is out of date in 2021, but for historical purposes, back then:

Between California and Asia there are ten million square miles of plastic swirling in the slow rotation of the North Pacific gyre, an area larger than Africa. A huge mountain of air, heated over the equator, creates the currents as it moves north. The garbage on this marine merry-go-round spends 12 years completing one circle. About half of the plastic made is close to the specific gravity of water, and the half that sinks easily rises again when storms mix the water up.

There’s so much plastic in the Pacific gyre, that six times as much plastic as zooplankton by weight was found there (Marine Pollution Bulletin).   Outside the gyres, the concentration is almost half that amount – still awfully high.

Like diamonds, plastics are forever.  Plastic doesn’t biodegrade.  It takes even longer for the sun to break apart a piece of plastic in the ocean than on land, because the water cools the plastic down.  Although it gets broken into smaller and smaller pieces, it reaches a point where the molecular weight and tight chemical bonds prevent any organism from breaking it down further.

Plastic facts

  • One hundred billion pounds of pre-production plastic resin pellets are produced every year in the US to create consumer plastics.
  • These pellets, also known as nurdles, look just like fish eggs, and are the most common plastic object found in the ocean.  Clearly many of them are escaping the production process.
  • Only 3.3% of plastic is recycled, because reheating plastic reduces its flexibility.  Sixty-three pounds of plastic per person ends up in landfills in the United States.
  • Because plastic is lighter than sand, it may be eroding beaches.
  • Plastic concentrates chemicals and pollutants up to one million times their concentration in the surrounding sea water.  Many of these chemicals are endocrine disruptors.

So – how are plastics getting into the ocean?  About 20% comes from activities at sea, especially when some of the 100 million containers shipped every year get knocked off in storms.  The remaining 80% comes from the land.

References

Brock A, Williams ID (2020) Life cycle assessment of beverage packaging. Detritus 13: 47-61 DOI 10.31025/2611-4135/2020.14025

California and the World Ocean, presentation to the Marine Debris Panel: “A comparison of neustonic plastic and zooplankton abundance in southern California’s coastal waters and elsewhere in the North Pacific”. Captain Charles Moore, Algalita Marine Research Foundation, 30 Oct 2002.

Synthetic Sea: Plastic in the Ocean. Algalita Marine Research Foundation video transcript, 2001.

Rafts of Plastic Debris Stretching over Miles of Open Ocean Discovered by Research Vessel Captain. 22 Oct 2002.

———————————–

U.S. News & World Report 4 NOV 2002

Trashing the Oceans by Thomas Hayden

An armada of plastic rides the waves, and sea creatures are suffering

http://www.mindfully.org/Plastic/Ocean/Trashing-Oceans-Plastic4nov02.htm

At Taco Bell on Main Street in Ventura, Calif., you can take out the chalupa of your choice–Baja, Nacho Cheese, or Supreme, with ground beef, chicken, or steak. But it will always come in a small plastic shopping bag. The bags arrive preprinted from a factory in Asia–usually. One brilliant summer morning in 2000, the small private research vessel Alguita discovered a 10-mile-wide flotilla of the disposable sacks, an estimated 6 million of them destined for Taco Bells around the country, bobbing more than 1,000 miles west of the Ventura store. “We were out in the middle of the Pacific, where you would think the ocean would be pristine,” recalls the Alguita’s captain, Charles Moore. “And instead, we get the Exxon Valdez of plastic-bag spills.”

Most plastic bags end up in landfills, part of the millions of tons of plastic garbage Americans dump each year. But whether jettisoned illegally by ships at sea, washed out from land during storms, or, as in the case of the chalupa bags, accidentally lost overboard from containerships, countless tons of plastic refuse end up drifting on the high seas.

Lethal litter.

Many Americans know about the hazard posed by six-pack rings, the plastic yokes that can grasp a seagull or otter’s neck as tightly as they do a soda can. But researchers are finding that plastic litter doesn’t just strangle wildlife or spoil the view. “Plastic is not just an aesthetic problem,” says marine biologist David Barnes of the British Antarctic Survey. “It can actually change entire ecosystems.”

The largest pieces of plastic, miles-long discarded fishing nets and lines, take an obvious toll. These “ghost nets” snare and drown thousands of seals, sea lions, and dolphins a year. Researchers have also watched in horror as hungry turtles wolf down jellyfish-like plastic bags and seabirds mistake old lighters and toothbrushes for fish, choking when they try to regurgitate the trash for their starving chicks. As Barnes is documenting, tiny marine animals riding rafts of plastic trash are invading polar seas, while Japanese researchers are finding high concentrations of deadly chemicals clinging to floating, tapioca-size plastic pellets called “nurdles.” And Moore, back from a three-month North Pacific voyage last week, is tracking it all and discovering that tiny fragments of plastic are entering the food web right near its bottom.

A member of the prominent Los Angeles-area Hancock Oil family, Moore is anything but a typical researcher. He grew up as an avid surfer and sailor in a comfortable waterfront home in Long Beach and ran a furniture restoration business. But in 1995, at the age of 48, Moore sold his business, set up the Algalita Marine Research Foundation, and designed a unique double-hulled sailing research vessel, the Alguita. Both ship and captain found their true calling after a 1997 yacht race to Hawaii.

On his return voyage, Moore veered from the usual sea route and saw an ocean he had never known. Every time he stepped out on deck, “there were shampoo caps and soap bottles and plastic bags and fishing floats as far as I could see. Here I was in the middle of the ocean, and there was nowhere I could go to avoid the plastic.” Ever since, Moore has dedicated his time, and a small personal fortune, to seeking it out. “It’s an overlooked problem, and this guy is making a really important contribution,” says oceanographer Dale Kiefer of the University of Southern California.

With little scientific training, Moore formed alliances with professional scientists, including chemists, biologists, and a private oceanographer, Curtis Ebbesmeyer, himself a well-known flotsam hunter. Ebbesmeyer’s most famous case involved a 1990 containership spill that dumped 80,000 Nike running shoes into the North Pacific. The errant runners washed up on beaches from British Columbia to California, helping him trace the currents that carried them.

The Alguita’s mission started in earnest in 1999. Moore and his all-volunteer crew–attracted by the chance for meaningful adventure and Moore’s reputation as an excellent chef–returned to the garbage-strewn region he had happened on two years earlier and skimmed the surface with fine collecting nets. Across hundreds of miles of ocean, they counted roughly a million pieces of plastic per square mile, almost all of it less than a few millimeters across.

Trash heap.

The Alguita was sampling water beneath a climate feature called the North Pacific subtropical high–the big “H” on weather maps–that protects Southern California’s enviable weather by pushing storms north or south. The H is the eye of a circle of currents thousands of miles wide called the North Pacific gyre. The high’s weak winds and sluggish currents naturally collect flotsam, earning it the unfortunate nickname of the “Eastern Garbage Patch.” Similar wind and current patterns exist in all the major oceans, and all presumably suffer from similar contamination.

Because most plastics are lighter than seawater, they float on the surface for years, slowly breaking down into smaller and smaller fragments–which often end up in the ocean’s drifting, filter-feeding animals, like jellyfish. Early in his voyages, Moore collected baseball-size gelatinous animals called salps and found their translucent tissues clogged with bits of monofilament fishing line and nurdles (more romantically referred to as “mermaid tears” by beachcombers). A hundred billion pounds of these pellets are produced each year, to be formed into everything from cd cases to plastic pipe. But each one is a perfect plankton’s-eye-view replica of a fish egg. “You rarely find any particles smaller than a millimeter in the water,” says Moore. “They’re all in the jellies.”

That’s not likely to be good for the filter feeders or the things that eat them, notes Moore, and not just because a meal of plastic doesn’t yield much nutrition. A 2001 paper by Japanese researchers reported that plastic debris can act like a sponge for toxic chemicals, soaking up a millionfold greater concentration of such deadly compounds as PCBs and DDE, a breakdown product of the notorious insecticide DDT, than the surrounding seawater. That could turn a bellyful of plastic from a mere stomachache to a toxic gut bomb that can work its way through the food web.

Unhappy hunting.

In Moore’s latest voyage to the garbage patch, he got a close-up view of what happens when life meets floating garbage. The Alguita’s crew found plastic trash bobbing in a thick line from horizon to horizon–everything from tiny particles to 5-inch-thick towing lines, Japanese traffic cones, and yellow quart bottles of American crankcase oil. “We followed the debris for more than a mile, and we never found the end of it,” Moore told U.S. News by satellite phone. The research team had stumbled across what oceanographers call a Langmuir cell, a wind-driven circulation pattern where two masses of water are pushed together, forcing some of the water to sink where they meet; anything that floats stays on the surface.

Normally that means living things. These convergences are favorite hunting grounds of seabirds and other predators, which pick zooplankton, fish eggs, jellyfish, and other delicacies out of the long, frothy windrows. Alien-seeming gelatinous creatures usually float just below, spinning fantastic webs of mucus to sieve out every last particle. Not this time, says Moore. “We found all the refuse of civilization, but there were no zooplankton at all.” He’s at a loss to explain why.

The Alguita team did see albatrosses and tropic birds circling above the line of trash. With little else to choose, they were apparently eating plastic. The birds seemed to be picking and choosing “the reds and pinks and browns. Anything that looks like shrimp,” Moore says. Earlier in the trip, the Alguita had visited the French Frigate Shoals, off Hawaii, home to endangered monk seals and seabird rookeries. In the birds’ gullets, researchers found red plastic particles.

Lines of trash like this one may also help explain the woes of the monk seals, which are usually killed by large masses of nets, more than any one fishing vessel is likely to lose or cut loose at a time. The Alguita’s crew plucked several of these net balls from the Langmuir windrow. The converging currents evidently brought nets together and tangled them into makeshift deathtraps as they rolled in the sinking water.

Expect the trashing of the oceans to continue. An international convention called MARPOL bans the dumping of plastics at sea, but enforcement on the open ocean is nonexistent. Accidental losses are forgiven, notes Moore, and shippers don’t even have to report them. “That means do-gooders like me don’t even get a chance to clean up after the polluters,” says Moore.

Rob Krebs of the American Plastics Council notes that people value plastics for exactly what creates problems at sea: their durability. Manufacturers are not to blame for the trash, he says. “The responsibility is with the people who control the material, not those who produce it.” Moore agrees that greater efforts to prevent spills will help. But, he adds, “there’s no reason why a six-pack ring or a peanut butter jar should have to last for 400 years.” Manufacturers have tried for years to perfect biodegradable packaging, and at least one company, EarthShell, may finally be making some headway. Government agencies like the National Park Service are already using EarthShell’s biodegradable plates and packaging, and hundreds of McDonald’s restaurants have experimented with its clamshell boxes.

Moore, meantime, says he’ll keep hunting marine plastic as long as his money holds out. After all, there is a link between his own advantages and the plastic flotsam he has been tracking. Oil made his grandfather’s fortune–and oil is the raw material for most plastics manufacturing. “In a way, part of all this is remediation for the consequences of my grandfather’s life,” he says. “I guess maybe I need to make amends.”


Dust Bowls

Preface. As if there weren’t enough to worry about, more Great Dust Bowls may be on the way.  The irony is that some of it will likely be due to planting corn and soybeans to produce biofuels, yet another reason they’re so bad for ecology. They use more water, pesticides, and fertilizer than most other crops, and cause more erosion as well due to their wide row spacing. Biofuels have at best a break-even energy return (EROI about 1) and many scientists have found negative energy returns, while our civilization requires an EROI of 10 or more.  More dust bowls will cause further erosion and make crops hard to impossible to grow in the future. Doh!

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

* * *

Fox A (2020) Are the Great Plains Headed for Another Dust Bowl? Smithsonian magazine.

Researchers say atmospheric dust in the region has doubled in the last 20 years, suggesting the increasingly dry region is losing more soil skyward.

A new study shows dust storms have become more common and more severe on the Great Plains, leading some to wonder if the United States is headed for another Dust Bowl, reports Roland Pease for Science. With nearly half the country currently in drought and a winter forecast predicting continued dry weather for many of the afflicted regions, dust storms could become an even bigger threat.

In the 1930s, the Dust Bowl was caused by years of severe drought and featured dust storms up to 1,000 miles long. But the other driving force behind the plumes of dust that ravaged the landscape was the conversion of prairie to agricultural fields on a massive scale—between 1925 and the early 1930s, farmers converted 5.2 million acres of grassland over to farming, reported Sarah Zielinski for Smithsonian magazine in 2012.

Hardy prairie grasses would have likely withstood the drought, but crops covering the newly converted tracts swiftly bit the proverbial dust, which loosened the grip their roots had on the soil. High winds then whipped that loose soil into the huge clouds that blanketed the landscape with dust, including 1935’s Black Sunday which lifted 300,000 tons of the stuff skyward.

Besides blotting out the sun, dust storms strip valuable nutrients from the soils, making the land less productive, and create a significant health hazard at a time when a respiratory illness is sickening people around the world, according to Science.

The new research, published earlier this month in the journal Geophysical Research Letters, used data from NASA satellites and ground monitoring systems to detect a steady increase in the amount of dust being kicked into the atmosphere every year, reports Brooks Hays for United Press International. The researchers found that levels of atmospheric dust swirling above the Great Plains region doubled between 2000 and 2018.

According to the paper, the increasing levels of dust, up to five percent per year, coincided with worsening climate change and a five to ten percent expansion of farmland across the Great Plains that mirrors the prelude to the Dust Bowl. Together, the researchers suggest these factors may drive the U.S. toward a second Dust Bowl.
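A quick consistency check on those two figures (my arithmetic, not from the study or the article):

```python
# If atmospheric dust doubled between 2000 and 2018, what compound annual growth
# rate does that imply, and does it fit within "up to five percent per year"?
years = 2018 - 2000
rate = 2 ** (1 / years) - 1
print(f"Doubling over {years} years implies ~{rate:.1%} per year")  # ~3.9%, within the quoted range
```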

Part of what allowed study author Lambert and his colleagues to tie the added dust in the sky to agriculture were clear regional upticks when and where major crops such as corn and soybeans were planted and harvested, per the statement. Ironically, much of the grassland that was converted to agriculture in recent years was not for food but for corn destined to become feedstock for biofuels intended to reduce reliance on fossil fuels, Lambert tells Science.

Human-caused climate change is also making the Great Plains hotter and drier. In April, a paper published in the journal Science said the Southwestern part of North America may be entering a megadrought worse than anything seen in 1,200 years.

“The current drought ranks right up there with the worst in more than a thousand years, and there’s a human influence on this of at least 30 percent and possibly as much as 50 percent in terms of its severity,” as Jason Smerdon, a paleoclimatologist at Columbia University’s Lamont Doherty Earth Observatory who co-authored the study, told Smithsonian magazine’s Brian Handwerk at the time.

“I think it’s fair to say that what’s happening with dust trends in the Midwest and the Great Plains is an indicator that the threat is real if cropland expansion continues to occur at this rate and drought risk does increase because of climate change,” Lambert says in the statement. “Those would be the ingredients for another Dust Bowl.”

Gaskill M (2012) Climate Change Threatens Long-Term Stability of Great Plains. Scientific American.

Rising temperatures, persistent drought, and depleted aquifers on the southern Great Plains could set the stage for a disaster similar to the Dust Bowl of the 1930s, scientists say.

On October 17–18 drought conditions combined with high winds to create a large dust storm across Colorado, Nebraska, Kansas, Oklahoma and Wyoming, closing major highways. This October’s dust storm, which followed preparation of fields for fall planting, could be just the beginning. “If the drought holds on for two or three more years, as droughts have in the past, we will have Dust Bowl conditions in the farming belt,” says Craig Cox, an agriculture and natural resources expert with the Environmental Working Group.

As of November 6, nearly 60% of the contiguous U.S. was experiencing persistent drought conditions, especially in the Great Plains—North and South Dakota, Nebraska, Kansas, Oklahoma, Texas, Montana, Wyoming and Colorado—where drought is expected to persist or intensify in the foreseeable future.

Katharine Hayhoe, director of the Climate Science Center at Texas Tech University said “we’re seeing major shifts in places and times we can plant, the types of crops we can grow and the pests and diseases we’re dealing with.”

Since the 1940s agriculture on the semiarid southern Great Plains has relied on irrigation. On the high plains of Texas, tens of thousands of wells pumping from the 10-million-year-old Ogallala Aquifer have depleted it by 50 percent; most of the remaining reservoir will likely be useless for irrigation within about 30 years. At the same time, climate change has brought less rain as well as hotter temperatures that increase evaporation—forcing farmers to use even more water for irrigation.

Romm J (2012) My Nature Piece On Dust-Bowlification And the Grave Threat It Poses to Food Security. ThinkProgress.org

Human adaptation to prolonged, extreme drought is difficult or impossible. Historically, the primary adaptation to dust-bowlification has been abandonment; the very word ‘desert’ comes from the Latin desertum for ‘an abandoned place’. During the relatively short-lived US Dust-Bowl era, hundreds of thousands of families fled the region.

Dust-Bowl conditions could stretch all the way from Kansas to California by mid-century. America’s financial future and the health and safety of our people are at serious risk…The food security of all of humanity is at risk.

Which impact of anthropogenic global warming will harm the most people in the coming decades? I believe that the answer is extended or permanent drought over large parts of currently habitable or arable land — a drastic change in climate that will threaten food security and may be irreversible over centuries.

The palaeoclimate record dating back to the medieval period reveals droughts lasting many decades. But the extreme droughts that the United States faces this century will be far hotter than the worst of those: recent decades have been warmer than the driest decade of the worst drought in the past 1,200 years.

To make matters worse, the regions at risk of reduced water supply, such as Nevada, have seen a massive population boom in the past decade. Overuse of water in these areas has long been rife, depleting groundwater stores.

Of course, the United States is not alone in facing such problems. Since 1950, the global percentage of dry areas has increased by about 1.74% of global land area per decade. Recent studies have projected ‘extreme drought’ conditions by mid-century over some of the most populated areas on Earth—southern Europe, south-east Asia, Brazil, the US Southwest, and large parts of Australia and Africa. These dust-bowl conditions are projected to worsen for many decades and be “largely irreversible for 1,000 years after emissions stopped.”

Most pressingly, what will happen to global food security if dust-bowl conditions become the norm for both food-importing and food-exporting countries? Extreme, widespread droughts will be happening at the same time as sea level rise and salt-water intrusion threaten some of the richest agricultural deltas in the world, such as those of the Nile and the Ganges. Meanwhile, ocean acidification, warming and overfishing may severely deplete the food available from the sea.

From an ecological perspective, what will be the effects of dust-bowlification on the global carbon cycle? In the past six years, the Amazon has seen two droughts of the sort expected once in 100 years, each of which may have released as much carbon dioxide from vegetation die-off as the United States emits from fossil-fuel combustion in a year. More frequent wildfires also threaten to increase carbon emissions. And as habitats are made untenable, what will be the effect on biodiversity?

“Drought conditions will prevail no matter what precipitation rates are in the future,” said  Michael Wehner, a climate scientist at the Lawrence Berkeley National Laboratory.  “Even in regions where rainfall increases, the soils will get drier. This is a very robust finding.”


Predicting who will become a violent terrorist

Preface. This study was clever in predicting the political and religious outlook of people using abstract tests that were not political or emotional, such as memorizing visual shapes.

This study of worldviews was able to predict political preferences 4 to 15-fold better than demographic predictors. 

These findings could be used to spot people at risk of radicalization who might be willing to commit violence against innocent people.

So if people are hard-wired to perceive and react to reality in conservative or liberal ways, how do you go about teaching critical thinking skills and keeping people from believing fake news or becoming terrorists? Especially when it is the conservative mind that is most vulnerable, yet these brains are the least able to do the complex thinking needed to make the best assessment of evidence. Or as the paper itself puts it: “ideological worldviews may be reflective of low-level perceptual and cognitive functions”. How do you bring conservative brains, born with lower levels of functioning, to a higher level?

I personally think that there has to be a way to keep them from being exposed to ideas like QAnon, vaccine denial, FOX news and so on in the first place. Oh no, suppression of free speech, the horror! Well then, first try bringing back the fairness doctrine Reagan abolished, whose repeal made Rush Limbaugh and FOX possible. Select for less radical left or right candidates with the top-two primary. Yank FOX and similar channels out of the packages offered by Comcast, Disney, Verizon, and so on, and make people pay more for them. Of course, since the economic conservatives have much to gain by deceiving conservative brains and the wealth to do so, the logical conclusion of all this is more buildings blown up and chaos while the rich hide behind protective gates and thousand-acre guarded estates as limits to growth make all people more desperate and radicalized.

Below are bits and pieces of the paper I’ve extracted, read the full paper via the link.


* * *

Zmigrod L, Eisenberg IW, Bissett PG et al (2021) The cognitive and perceptual correlates of ideological attitudes: a data-driven approach. Phil. Trans. R. Soc. http://doi.org/10.1098/rstb.2020.0424

Abstract

Researchers discovered that “ideological attitudes mirrored cognitive decision-making strategies. Conservatism and nationalism were related to greater caution in perceptual decision-making tasks and to reduced strategic information processing, while dogmatism was associated with slower evidence accumulation and impulsive tendencies. Extreme pro-group attitudes, including violence endorsement against outgroups, were linked to poorer working memory, slower perceptual strategies, and tendencies towards impulsivity and sensation-seeking—reflecting overlaps with the psychological profiles of conservatism and dogmatism. Cognitive and personality signatures were also generated for ideologies such as authoritarianism, system justification, social dominance orientation, patriotism and receptivity to evidence or alternative viewpoints; elucidating their underpinnings and highlighting avenues for future research. Together these findings suggest that ideological worldviews may be reflective of low-level perceptual and cognitive functions.”

What this means is that people with extremist views are less able to do complex mental tasks, research suggests.

Those with extremist attitudes tended to perform poorly on complex mental tasks and tended to think about the world in black and white. They struggled with complex tasks that required intricate planning. Perhaps this is why they’re drawn to authoritarian ideologies that simplify the world.

They tend to be dogmatic, stuck in their ways and relatively resistant to credible evidence, showing a problem with processing evidence at a perceptual level. It took them longer to decide if dots were moving to the left or right on a screen, for example. When asked to respond as quickly and accurately as possible, the politically conservative were slow and steady, while political liberals were faster with a less precise approach.

This is in line with conservatism being known as a synonym for caution. It appears from these tests that they simply treat every stimulus they encounter with caution. Yet they also tend to be impulsive and poor at regulating their emotions.

Introduction

One of the most powerful metaphors in political psychology has been that of elective affinities—the notion that there is a mutual attraction between ‘the structure and contents of belief systems and the underlying needs and motives of individuals and groups who subscribe to them’. With roots in Enlightenment philosophy and Max Weber’s sociology, this metaphor contends that certain ideologies resonate with the psychological predispositions of certain people. So, we can elucidate psycho-political processes by logically tracing these coherences, these elective affinities between ideas and interests. This analogy has inspired rich theories about the epistemic, relational and existential motivations that drive individuals to adhere to political ideologies (e.g. [2]), highlighting the role of needs for coherence, connectedness and certainty in structuring ideological attitudes.

Nonetheless, the methodologies employed to study these questions have been mostly of a social psychological nature, relying primarily on self-report measures of needs for order, cognitive closure, rigidity and others. This has skewed the academic conversation towards the needs and interests that ideologies satisfy, and obscured the role of cognitive dispositions that can promote (or suppress) ideological thinking. In fact, it is only recently that researchers have begun to employ neurocognitive tasks and analytic approaches from cognitive science in order to tackle the question: which cognitive traits shape an individual’s ideological worldviews? In this investigation, we sought to apply cognitive methodologies and analytic tools in order to identify the cognitive and personality correlates of ideological attitudes in a data-driven fashion. Borrowing methods from cognitive psychology, which have established sophisticated techniques to measure and analyse perceptual and cognitive processes in an objective and implicit way, and implementing these in the study of ideology can facilitate the construction of a more wholistic and rigorous cognitive science of ideology. This can push the analogy of ‘elective affinities’ into the realm of perception and cognition to allow us to tackle the question: are there parallels between individuals’ ideologies and their general perceptual or cognitive styles and strategies?

Furthermore, owing to limited resources and siloed research disciplines, many studies in social psychology frequently focus on a single ideological domain (e.g. political conservatism) or a single psychological domain (e.g. analytical thinking). While an in-depth focus on a specific domain is essential for theoretical development, the selection of hypotheses and methodologies can at times suffer from problems of bias and a lack of conceptual integration across different ideological and psychological domains. Indeed, a growing concern has emerged among researchers that psychologists of politics, nationalism and religion generate hypotheses and develop study designs that confirm their prior beliefs about the origins of social discord [712]. It is, therefore, valuable to complement theory-driven research with data-driven approaches, which can help to overcome these methodological challenges, as well as offer a wholistic view of these complex relationships by ‘letting the data speak’. Perhaps most importantly, data-driven research can help validate or challenge theory-driven findings and consequently offer directions for future research.

Discussion

Dogmatic participants were slower to accumulate evidence in speeded decision-making tasks but were also more impulsive and willing to take ethical risks. This combination of traits—impulsivity in conjunction with slow and impaired accumulation of evidence from the decision environment—may result in the dogmatic tendency to discard evidence prematurely and to resist belief updating in light of new information.

Political conservatism was best explained by reduced strategic information processing, heightened response caution in perceptual decision-making paradigms, and an aversion to social risk-taking. These three predictors were consistently implicated in the general political conservatism factor, as well as the specific political-ideological orientations studied, such as nationalism, authoritarianism and social conservatism

The finding that political and nationalistic conservatism is associated with reduced strategic information processing (reflecting variables associated with working memory capacity, planning, cognitive flexibility and other higher-order strategies) is consistent with a large body of literature indicating that right-wing ideologies are frequently associated with reduced analytical thinking and cognitive flexibility.

Additionally, conservative political ideology was characterized by a diminished tendency to take social risks such as disagreeing with authority, starting a new career mid-life and speaking publicly about a controversial topic. This corroborates research showing that political conservatives tend to emphasize values of conformity, ingroup loyalty and traditionalism

Specifically, the caution with which individuals process and respond to politically neutral information was related to the conservatism with which they evaluate socio-political information (figures 4 and 5). It, therefore, appears that caution may be a time-scale independent decision strategy: individuals who are politically conservative may be perceptually cautious as well. This finding supports the idea of ‘elective affinities’ between cognitive dispositions and ideological inclinations and is compatible with the perspective that political conservatism is associated with heightened motivations to satisfy dispositional needs for certainty and security. Nonetheless, to the best of our knowledge, ideological attitudes have never before been investigated in relation to caution as measured with cognitive tasks and drift-diffusion parameters. The present results, therefore, offer a novel addition to this literature by suggesting that political conservatism may be a manifestation of a cautious strategy in processing and responding to information that is both time-invariant and ideologically neutral, and can be manifest even in rapid perceptual decision-making processes. This is relevant to the wealth of novel research on the role of uncertainty in the neural underpinnings of political processes

Social and economic conservatism are not the same though: although social and economic conservatism possessed many overlapping correlates (such as heightened goal-directedness and caution), economic conservatism was associated with enhanced sensation-seeking, whereas social conservatism was not, and in turn, social conservatism was related to heightened agreeableness and risk perception, while economic conservatism was not.

The psychological signature of religiosity consisted of heightened caution and reduced strategic information processing in the cognitive domain (similarly to conservatism), and enhanced agreeableness, risk perception and aversion to social risk-taking, in the personality domain. The finding that religious participants exhibited elevated caution and risk perception is particularly informative to researchers investigating the theory that threat, risk and disgust sensitivity are linked to moral and religious convictions, and that these cognitive and emotional biases may have played a role in the cultural origins of large-scale organized religions. The results support the notion that experiencing risks as more salient and probable may facilitate devotion to religious ideologies that offer explanations of these risks (by supernatural accounts) and ways to mitigate them (via religious devotion and communities).

The present data-driven analysis reveals the ways in which perceptual decision-making strategies can percolate into high-level ideological beliefs, suggesting that a dissection of the cognitive anatomy of ideologies is a productive and illuminating endeavor. It elucidates both the cognitive vulnerabilities to toxic ideologies as well as the traits that make individuals more intellectually humble, receptive to evidence and ultimately resilient to extremist rhetoric. Interestingly, the psychological profile of individuals who endorsed extreme pro-group actions, such as ideologically motivated violence against outgroups, was a mix of the political conservatism signature and the dogmatism signature. This may offer key insights for nuanced educational programs aimed at fostering humility and social understanding


The Texas electric grid outage

Preface. In February of 2021, millions of Texans and Mexicans lost electric power in a hard freeze. Oxer (2021), on the March 2 Power Hungry podcast, said that if the Texas grid had blacked out, it would have taken until May to bring it back up. 85 power plants tripped off when the frequency dropped from the ideal 60 Hz to 59.7 Hz, and load had to be shed or the frequency imbalance could have damaged the entire transmission system: interconnect transformers, substations, and power plants. Everything failed; even piles of coal froze solid, and gas lines dependent on electric compressors to reduce CO2 emissions failed when the grid came down. Not very smart, huh? Better to use compressors powered by the natural gas flowing through the pipes. He also pointed out this could have been foreseen: there were huge freezes in 1899 and 1933, the 1957 Panhandle blizzard, the 1960 Houston snowstorm, the 1985 San Antonio snowstorm, winter storm Goliath in 2015, and the 2017 North American ice storm.

As the grid fails more often from lack of maintenance, weather altered by climate change, and natural gas decline, I have to wonder if the rich won’t move to areas that have the most reliable electricity. During the height of this record winter storm, 4 million Texans lost power, but those who lived on circuits that served hospitals, emergency responders, or downtown commercial buildings and condos were more likely to retain their power. Wealth, income and housing inequality make it much more likely for Black and Latinx families in Texas to live away from densely populated and more expensive parts of the city — and when they do live in urban areas, to reside in places that are not deemed essential to the functioning of the electrical grid. They are more likely to live in areas lacking the robust infrastructure necessary to weather environmental and man-made catastrophes (Joseph PE (2021) What’s happening in Texas and Mississippi has to stop. CNN).

Texas electric grid outage in the news:

2021 Minnesota gasps at the financial damage it faces from the Texas freeze. When Texas’ natural gas supplies froze up, prices soared, and now Minnesota’s customers are looking at an $800 million bill. Washington Post. “…The Texas market is so large — second only to California’s — and its natural gas industry is so predominant that when things go wrong there, the impacts can be felt across the country. And in a state that eschews regulation, driving energy producers to cut costs as deeply as they can to remain competitive, things went spectacularly wrong the week of Valentine’s Day. With its ill-equipped natural gas systems clocked by the cold, Texas’s exports across the Rio Grande froze up and 4.7 million customers in northern Mexico went without electricity — more than in Texas itself. The spot price of gas jumped 30-fold as far west as Southern California. And all the way up by the Canadian border, gas utilities in Minnesota that turned to the daily spot market to meet demand say they had to pay about $800 million more than planned over the course of just five days as the Texas freeze-up pinched off supplies. “The ineptness and disregard for common-sense utility regulation in Texas makes my blood boil and keeps me up at night,” Katie Sieben, chairwoman of the Minnesota Public Utility Commission, said in an interview. “It is maddening and outrageous and completely inexcusable that Texas’s lack of sound utility regulation is having this impact on the rest of the country.”


* * *

From a post by Pedro Prieto on an energy forum:

One of the least studied, most ignored and covert problems in the pro-renewable field is this growing issue, as we advance both toward lower-EROI fossil fuels used in electricity generation plants and, at the same time, toward higher penetration of renewables in a given electric network, especially when they start to look to alternatives.


The problem for a modern society relying more and more on electricity for its infrastructure is not only that of managing an electric network to avoid collapse from frequency or voltage deviations caused by sudden, unexpected variations that are difficult to control, or the ever higher reliance on automation and robotics to make their own decisions to order regional or rolling blackouts and so avoid a global blackout.


The elephant in the room, imo, is the ability of a given energy network to recover from a catastrophic failure in the shortest possible time, as our lives depend more and more on the umbilical cord of the electricity supply and energy security. And here is where the fossil fuel societal infrastructure and the electric network infrastructure have a totally different nature.


As we may expect more frequent and intense catastrophic events, due not only to unexpected climate variations but also to the increasing pressure of population demand or fast and sudden changes in social consumption (i.e. the ICT technologies taking a growing share of the electric cake), the problem is how fast (or slow) a society can recover from a complete shutdown. We have seen this clearly in the case of Puerto Rico, with the hurricanes. They made the whole electric network collapse. After one year, many places still had no electricity and others were abandoned forever. The infrastructure that collapsed was mainly the electric transmission and distribution grid. The wind and solar PV parks and plants were destroyed. The fossil-fueled power plants were basically not affected, but they could not deliver power because the transmission and distribution networks were down for months.

And what type of societal system came to the rescue? Perhaps 100% electric ships and vessels bringing spares? Perhaps 100% electric shovels, cranes and trucks removing the damaged materials and carrying the new equipment into place, and 100% electric pickups transporting the O&M teams? This is the main issue. The gas wells in Texas could have frozen, but when it comes to recovering them, it will be oil- and gas-powered systems that go to the rescue and erect them again. And kerosene-powered helicopters, trucks and vans will also go out to deice wind turbines and check the restarts. Not the other way around. This is the important thing. The asphalt roads are still there for trucks and heavy machinery to run on and to repair them, when necessary. Not the other way around. Fossil-fuel siderurgy and metallurgy will have to work to manufacture new PV modules and wind turbines for Puerto Rico. Not the other way around.

The important thing is that energy storage with fossil fuels is much easier than in a fully 100% electric world. Mountains of coal around the fired plants could power them for a full year. Huge oil and gas tanks can store energy safely, out of reach of most catastrophic failures, and be ready to supply. Medium and small oil and gas storage systems are also quite immune to big catastrophes, down to the level of a car tank, a 2-gallon oil canister, a gas cylinder, or 3,000 kilos of chopped wood, able to solve a huge domestic problem for one or two weeks if necessary. The density and versatility of the energy stored per unit of weight and volume in fossil fuel derivatives has no rival in the disperse, low-energy-density renewables.


In trying to move to a fully 100% electric society, we are moving fast toward the perfect storm, unless the pro-renewables roll up their sleeves and recognize they have to start by presenting credible 100% electric, durable, workable and cheap enough storage systems: in heavy machinery, in transport (not only in BEVs), especially in the heavy trucks that carry food and real essentials, in ships and so on. Then, when we can see that this is possible and achievable, they can continue with home solar PV gadgets and wind turbines and private electric cars. Not the other way around, which is starting the house from the roof. Let’s first build the “green” hydrogen plants at scale and see if this works. Let’s build synthetic green ethanol or methanol in enough volume to store it in tanks as numerous and as big as those storing oil and gas in Houston and in all the big refineries in the world today. And then we can talk, not before.

Afternote: Pedro Prieto is one of my favorite people in the energy/ecology community discussions — original, funny, and his English is poetic, perhaps because his native language is Spanish. He has built massive solar farms and co-wrote “Spain’s Photovoltaic Revolution” with Charles Hall. I especially like his concluding paragraph.


Biocoal from food waste and sewage

Preface. This probably doesn’t have a net energy gain because of the energy needed to collect waste and sewage at common facilities and then transport these wastes from numerous places to the factory, where converting the biomass to coal uses still more energy. Wood biocoal is not economic (Jossi 2018), and biocoal has the low energy content of lignite rather than the more useful, higher energy content of anthracite and bituminous coal.

Nor does it scale up. This article mentions 3 plants that will produce 8,000 tonnes of biocoal a year. You’d need a million more factories to match the 8.5 billion tons of coal consumed per year.
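Here is the back-of-the-envelope arithmetic behind that claim (a sketch using only the two figures quoted above, treating the three plants’ combined 8,000-tonne capacity as one “factory”):

```python
# Rough scale-up check (my arithmetic, not from the article): how many facilities of the
# size described below (8,000 tonnes of biocoal per year) would be needed to replace the
# roughly 8.5 billion tonnes of coal consumed worldwide each year?
world_coal_tonnes_per_year = 8.5e9   # global coal consumption figure used in this preface
biocoal_tonnes_per_year = 8_000      # combined annual output of the three plants in the article

facilities_needed = world_coal_tonnes_per_year / biocoal_tonnes_per_year
print(f"~{facilities_needed:,.0f} such facilities needed")  # ~1,062,500, i.e. roughly a million
```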

Also, when natural gas fertilizers can no longer be made, compost and sewage must become fertilizer instead. If diverted to making biocoal, then crop production will decline. And with peak phosphorus looming, compost and sewage will be especially necessary, though we can always go back to hunting and gathering, which sounds like a better lifestyle than agriculture to me.


* * *

Merrifield R (2020) Making coal from food waste, garden cuttings – and even human sewage. Horizon Magazine.

Food waste, garden cuttings, manure, and even human sewage can be turned into solid biocoal for energy generation, and, if scaled up, could help match the industrial demand for carbon with the need to get rid of organic waste and reduce greenhouse gas emissions.

Europe has a biowaste problem. Rather than using the carbon-rich material for fuel, millions of tonnes of organic waste material are dumped in landfill, where it decomposes and gives off greenhouse gases. At the same time, the EU imports millions of tonnes of coal for industrial use and energy generation.

Efforts to match those imbalances could find a solution in biocoal—a carbon-neutral commodity made from organic waste that can be used as a source of energy, industrial raw materials or even as a way to store carbon, rather than emit it into the atmosphere.

One way to make the coal substitute is a process known as hydrothermal carbonisation (HTC), which uses super-heated water under pressure to produce biocoal in a few hours. It normally takes millions of years for fossil coal to form geologically.

“It’s really a very simple and stable process, because it acts like an acceleration of the natural formation of coal,” said Hernandez Latorre of Ingelia, the Spanish company behind the technology.

Ingelia has developed a proprietary HTC process for three biocoal plants—in Spain, the UK and Belgium, with a total capacity of 8,000 tonnes of biocoal per year. Several more are awaiting regulatory approval and should double capacity in the next couple of years.

“HTC biocoal … not only avoids the use of hard coal in industrial processes, but also the emission of methane from landfill,” Hernandez Latorre said, adding that the technology can recover up to 95% of the carbon from organic waste.

Methane is an even more potent greenhouse gas than carbon dioxide, and a notable source is rubbish dumps. Europe sends millions of tonnes of biowaste to landfill every year, and even where sites have methane-capture systems, a substantial portion of the gas can escape.

Pressure-cooker

Several different HTC methods have been developed, but the process generally works like a pressure-cooker. The feedstocks range from food- and drink-processing residues, agricultural waste, and forestry discards such as woodchips and sawdust, to maize cobs and sewage.

The biowaste is put into a device known as a reactor at temperatures of 180°C to 250°C and pressures on the order of 2 megapascals (MPa), about 20 atmospheres. This means the water in the system stays superheated liquid rather than being converted into steam.

The reactor converts the solids in the organic material into hard biocoal—also known as hydrochar—while the liquids can be collected separately and used as bio-fertiliser and any gases given off are captured and used to power the system.

The biocoal has similar characteristics regardless of the biowaste used, though different raw materials do influence the quality by determining the ash content. Conditions in the reactor destroy pathogens and the resulting products are sterile. The coal slurry can also be processed to remove stones or shards of glass or metal, before being compressed into briquettes or pellets.

Ingelia’s basic HTC process can use food waste, for example, to produce biocoal similar to fossil brown coal, comprising about 60% carbon. This hydrochar can then go through extra steps to make higher-value ‘designer’ biocoal, removing ash and volatiles to raise the carbon content to as much as 90%, able to compete with top-grade hard coal.

“We can use (further processing) to tailor the final product, to recover from the bio-material exactly what they need for the industrial processes, in a circular economy (system),” Hernandez Latorre said.

Greenhouse gases

Hernandez Latorre says that internal Ingelia research shows that between 6.5 and 8.3 tonnes of CO2 equivalent are avoided per tonne of HTC biocoal produced, compared to a landfill operation with or without a methane-recovery system.

She says biocoal can have a market value ranging from €170 per tonne for the most basic hydrochar to more than €400 per tonne for top-grade biocoal with the highest carbon content, depending on its intended use.

Ingelia has combined findings from several research projects into its HTC process and is aiming the technology at coal-reliant industries, at sewage processors that have to deal with organic waste, and at energy producers moving away from coal-fired power generation towards renewables.

With the fall in coal prices and demand in the economic slowdown caused by the COVID-19 pandemic, it may take time for biocoal to displace fossil fuels in industry worldwide. But it offers one solution for those obliged to deal with organic waste and to meet the EU’s plan to become carbon-neutral by 2050.

Hernandez Latorre, who on 12 June was named the EU’s Mission Innovation Champion for her work in clean energy research, sees it playing an increasingly important role in the next 10-15 years.

“The market is really prepared to accept or implement new technologies, the only thing is they need to be sufficiently developed at scale,” she added.

Industries need sufficient market availability of biocoal to plan ahead for substitution of fossil fuels. And investors want to be sure they will have enough biowaste to process—and commitment from users to take their products—before they invest in sophisticated HTC units that could cost hundreds of thousands or even millions of euros.

Low-tech

Those set-up costs are prohibitive in many developing countries, even though biowaste poses a problem worldwide.

But a low-cost, low-tech version that uses human faeces to make biocoal and fertiliser could bring a double benefit to places where people lack sanitary facilities, said South Korean researcher Dr. Jae Wook Chung.

He sees potential to both generate income for communities and address their environmental and health problems caused by untreated excrement, citing WHO estimates that 673 million people have to defecate in the open – in the street, behind bushes or into open water.

Research has shown HTC reactors can be made for less than €20,000, but Dr. Chung aims to use a project called FEET to develop an even simpler, cheaper model that can be used in poor, high-density communities such as the Kibera slum in Kenya’s capital Nairobi.

He envisages a system about the size of an oil barrel, made with stainless-steel tubing available as a building supply in many developing countries. And he wants to monitor temperature and pressure from outside the reactor, avoiding expensive probes.

Dr. Chung will also focus on ways to ensure a sustainable supply of waste for processing—perhaps through organised emptying of pit latrines or portable lavatories—and to demonstrate the economic benefits of the biocoal and liquid fertiliser.

He sees making a sanitation system profitable for the community as key to making it sustainable, and to providing toilets in regions currently lacking them.

“(The) economic benefit would also help those who have a cultural barrier to using conventional toilets move away from open defecation,” he said.

References

Jossi F (2018) Despite promising advances, costs keep wood biocoal on backburner. Energynews.us


The 10 states with the most farms

Preface. Of course there are many considerations: how climate change will affect farming in each state, the cost of the farmland, and other ecological factors discussed in Hall and Day’s book “America’s Most Sustainable Cities and Regions: Surviving the 21st Century Megatrends”.

On top of that, you need to be somewhere you’ll fit in, avoiding red states if you are gay, Muslim, atheist, or just about anything other than a white evangelical. Xenophobia is bound to grow as times get hard, and those of the wrong religion, ethnicity, and more will be jailed, killed, evicted, or forced to migrate. Cambodia, China and others killed the educated or forced them into hard labor; man’s inhumanity to man is part of every nation and tribe. As my ecologist friends lament, “what a species”. But yet another thing to keep in mind…

Alice Friedemann www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, April 2021, Springer, “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

* * *

WorldAtlas. 10 US States With The Highest Number Of Farms.

The US has a large agricultural industry and a large population that relies on it for sustenance, and the country is a net exporter of food. According to the 2007 agricultural census, the country had over 2.2 million farms covering approximately 922 million acres combined, an average of about 418 acres per farm. Almost all states have farming activity; however, according to 2017 data, some states have far more farms than others. Texas leads with 240,000 farms and Missouri comes second with 97,300. The other states with the most farms are Iowa (86,900), Oklahoma (77,200), California (77,100), Kentucky (76,800), Ohio (73,600), Minnesota (73,200), Illinois (71,000), and Wisconsin (68,500).

Texas

“Everything is bigger in Texas” is a common saying that also reflects the size and number of farms in this state. One in every seven Texans has an agriculture-related job. Texas has over 130 million acres of land under farming, approximately 99% of which is family owned. Texas’ farming sector accounts for $115 billion annually. The state mostly produces animal products, cotton, and dairy, which it sells across the state, the US, and worldwide. Other farming activities include commercial feedlots and artificial insemination. Although the state leads in the number of farms, the current count is down from 420,000 in the 1940s. Mechanization, pest and disease control, precision agriculture, improved agricultural and plant engineering, and commercially and scientifically trained farmers continue to improve Texan farms, making the state a breadbasket of the country.

Missouri

The “show-me” state of Missouri has more than 28 million acres dedicated to farming, mainly for soybeans, corn, broiler chickens, hogs, and cattle. In 2014, the state exported agricultural commodities worth $4.35 billion. Missouri has a “one agriculture” policy where the state and farmers encourage collaboration in terms of research, techniques, and marketing. Just like in Texas, families own most of the farms in this state.

[my comment: I have a cousin who married a Mormon and converted. She told me in 2020 that Mormons are being asked to move to Missouri if they can, and many of her in-laws have done so. Mormons see Missouri as the birthplace of the human race, according to Joseph Smith, and the place where Christ will first step down in the second coming. And from a quick internet search, it appears that Mormons have been moving there for many years. ]

Iowa

In Iowa, families control most of the farming industry. Approximately 85% of the land in Iowa is dedicated to farming. Barns and buildings for cattle, hogs, dairy cows, poultry, turkeys, and sheep dot the Iowa countryside. 30,622,731 acres of land in Iowa are under farming, of which 26,256,347 acres are cropland and 1,294,425 acres are pastureland. The average farm is 345 acres. The main farm products in the state are soybeans, corn, pork, and eggs, among others. Farming has long been the dominant livelihood in Iowa, with a history stretching back generations.


Wanted: Math geniuses and power engineers to make a renewable grid possible

Figure 1. OPF solution of original seven-bus system with generator at bus 4.

Preface. In 2019 the U.S. electric grid generated 64% of its electricity from finite fossil fuels and another 20% from nuclear power. Since fossil fuels and uranium are finite, and biomass doesn’t scale up, the electric grid needs to evolve from the 9% of renewables powering the U.S. grid today to 100% renewables, with the majority of the new power coming from wind and solar, since sites for hydropower (including pumped storage), geothermal, utility-scale batteries, and compressed-air energy storage are limited and don’t scale up. (The only utility-scale battery with enough materials on earth to store just ONE day of U.S. electricity generation is sodium-sulfur, which would cost $41 trillion and take up 945 square miles of land; calculations and citations are in When Trucks Stop Running: Energy and the Future of Transportation.)

If supply and demand aren’t kept in exact balance, the grid can crash. So increasing penetration of wind and solar will make the grid less stable, since they are unreliable, variable, and intermittent. Power engineers need to solve this increasing instability. Right now, the only resources that can be dispatched quickly enough to balance wind and solar are natural gas (and limited hydropower); coal and nuclear plants can’t ramp up or down quickly enough without causing damage.
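To see why dispatchable plants matter, here is a minimal sketch of the hour-by-hour balancing problem. All the numbers are hypothetical; the point is only that whatever wind and solar fail to deliver, something controllable must supply within the same hour.

```python
# Minimal sketch of hour-by-hour balancing with hypothetical numbers (MW).
# Dispatchable plants (gas, hydro) must cover whatever wind and solar do not.
demand = [32000, 30000, 35000, 41000, 38000]
wind   = [ 9000, 12000,  4000,  2500,  7000]
solar  = [    0,  1000,  8000,  3000,     0]

for hour, (d, w, s) in enumerate(zip(demand, wind, solar)):
    residual = d - w - s                  # the "net load" the grid must still supply
    if residual >= 0:
        print(f"hour {hour}: dispatch {residual} MW of gas/hydro")
    else:
        print(f"hour {hour}: {-residual} MW surplus -> curtail or store")
```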

Just as difficult is the complete redesign of the grid, which today is mainly a “one-way” grid where power flows outward from about 6,000 very large, centralized power plants, and existing systems can keep good track of it. As millions of home and industrial solar panels and wind farms pushing electricity the “wrong way” are added, the potential for a blackout grows, because this power is invisible to the operators who keep supply and demand in balance.

Distribution grids around the world tend to operate in relative darkness, in terms of lacking sensors and monitors to reveal their point-to-point and moment-to-moment condition to grid operators. That will have to change as distributed energy resources take up an increasing role. Solar PV in particular can cause problems on distribution systems designed for one-way power flows by causing voltage disruptions or tripping protective equipment. Electric vehicle chargers can add significant loads to circuits not designed to handle them, as can electrifying loads that now run on fossil fuels. To manage this shift, network investments will need to increase substantially over the next decade, covering not only traditional grid reinforcement but also “smart solutions,” such as demand-side flexibility (Deign 2020).

Control center room, PJM

New models, new algorithms, new mathematics, and more powerful computers than we have now will need to be invented to cope with tens of millions of future rooftop solar panels, wind turbines, machines and appliances, energy storage devices, automated distribution networks, smart metering systems, and phasor measurement units (PMUs) sending trillions of bits of data every few seconds.

This paper proposes that new institutes staffed with power and other engineers be created, which is easier said than done. Solar panels and wind turbines may be “sexy”, but becoming a power engineer isn’t. Anyone smart enough to become a power engineer can make far more money in other fields, which is why most universities have dropped their power engineering departments.

This can be seen in the coming expertise crisis: for every two electric-sector employees about to retire, the industry has fewer than one replacement (the nuclear power sector alone needs 90,000 trained workers and engineers soon). A lack of specialized workers to maintain and operate the infrastructure will greatly impact affordable, reliable service, since new employees don’t have a lifetime of knowledge. They’re bound to make catastrophic errors, which will raise rates for consumers and increase blackouts (Makansi 2007, NAERC 2006).

And if a new car has hundreds of microchips, imagine how many billions would be needed for a smart grid. Yet there is an enormous shortage of engineers to design and make them; each chip fabrication plant requires thousands of engineers to operate (Lovejoy 2022).

Renewable power also needs genius engineers to solve these issues:

  • The electric grid is interdependent on other systems (transportation, water, natural gas and more). These systems also need to be modeled to make sure there is no impact on them as the grid evolves.
  • As wind and solar grow, placing new unpredictable demands on the grid, better forecasting tools are needed.
  • Better climate change forecasting tools are also needed since climate change will introduce several uncertainties affecting the grid. In addition to higher temperatures requiring increased air conditioning loads during peak hours, shifting rainfall patterns may affect the generation of hydroelectricity and the availability of cooling water for generating plants. The frequency of intense weather events may increase.
  • Modeling and mitigation of high-impact events such as coordinated physical or cyberattack; pandemics; high-altitude electromagnetic pulses; and large-scale geomagnetic disturbances, and so on are especially difficult because few very serious cases have been experienced. Outages from such events could affect tens of millions of people for months.

This report anticipated the need to create fake data in order to hide real data from terrorists who might otherwise find weak points and how and where to attack them. Well, oops, too late! The Russian attack on up to 18,000 government and private networks via SolarWinds likely snagged the black start plans for restoring the electric grid after a cataclysmic blackout. The plans would give Russia a hit list of systems to target to keep power from being restored in an attack like the one it pulled off in Ukraine in 2015, shutting off power for six hours in the dead of winter. Moscow long ago implanted malware in the American electric grid, and the United States has done the same to Russia as a deterrent (Sanger et al 2021).

Understanding this report and the problems that need to be solved requires a power engineering degree and calculus, so I only listed a few of the simplest-to-understand problems above, and excerpted what I could understand. Some of these issues are explained more accessibly in the Pacific Northwest National Laboratory paper “The Emerging Interdependence of the Electric Power Grid and Information and Communication Technology”.

The energy consumed to keep track of the data from billions of sensors and energy-producing and -consuming devices every few seconds, in order to balance a distributed grid, is likely to be too much. After all, global production of oil probably peaked in 2018, and it is the master resource that makes all others possible, including coal, uranium, wind turbines, solar panels, microchips, transportation and more. In addition, the complexity of a distributed grid is just too difficult to manage.

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

NRC. 2016. Analytic Research Foundations for the Next-Generation Electric Grid. Washington, DC: The National Academies Press.  160 pages. Excerpts:

Summary

The electric grid is an indispensable critical infrastructure that people rely on every day.

The next-generation electric grid must be more flexible and resilient than today’s. For example, the mix of generating sources will be more heterogeneous and will vary with time (e.g., contributions from solar and wind power will fluctuate), which in turn will require adjustments such as finer-scale scheduling and pricing. The availability of real-time data from automated distribution networks, smart metering systems, and phasor data hold out the promise of more precise tailoring of services and of control, but only to the extent that large-scale data can be analyzed nimbly.

Today, operating limits are set by off-line (i.e., non-real-time) analysis. Operators make control decisions, especially rapid ones after an untoward event, based on incomplete data.

By contrast, the next-generation grid is envisioned to offer something closer to optimized utilization of assets, optimized pricing and scheduling (analogous to, say, time-varying pricing and decision making in Internet commerce), and improved reliability and product quality. In order to design, monitor, analyze, and control such a system, advanced mathematical capabilities must be developed to ensure optimal operation and robustness; the envisioned capabilities will not come about simply from advances in information technology.

Within just one of the regional interconnects, a model may have to represent the behavior of hundreds of thousands of components and their complex interaction affecting the performance of the entire grid. While models of this size can be solved now, models where the number of components is many times larger cannot be solved with current technology.

As the generating capacity becomes more varied due to the variety of renewable sources, the number of possible states of the overall system will increase. While the vision is to treat it as a single interdependent, integrated system, the complete system is multi-scale (in both space and time) and multi-physics, is highly nonlinear, and has both discrete and continuous behaviors, putting an integrated view beyond current capabilities. In addition, the desire to better monitor and control the condition of the grid leads to large-scale flows of data that must in some cases be analyzed in real time.

Creating decision-support systems that can identify emerging problems and calculate corrective actions quickly is a nontrivial challenge. Decision-support tools for non-real-time tasks—such as pricing, load forecasting, design, and system optimization—also require new mathematical capabilities.

The future grid will rely on integrating advanced computation and massive data to create a better understanding that supports decision making. That future grid cannot be achieved simply by using the same mathematics on more powerful computers. Instead, the future will require new classes of models and algorithms, and those models must be amenable to coupling into an integrated system.

The grid itself and the conditions under which it operates are changing, and the end state is uncertain. For example, new resources, especially intermittent renewable energy such as wind and solar, are likely to become more important, and these place new demands on controlling the grid to maintain reliability.

This report contains the recommendations of the committee for new research and policies to improve the mathematical foundations for the next-generation grid. In particular

  • New technologies for measurement and control of the grid are becoming available. Wide area measurement systems provide a much clearer picture of what is happening on the grid, which can be vital during disruptions, whether from equipment failure, weather conditions, or terrorist attack. Such systems send a huge amount of data to control centers, but the data are of limited use unless they can be analyzed and the results presented in a way suitable for timely decision making.
  • Improved models of grid operation can also increase the efficiency of the grid, taking into account all the resources available and their characteristics; however, a systematic framework for modeling, defining performance objectives, ensuring control performance, and providing multidimensional optimization will be needed. If the grid is to operate in a stable way over many different kinds of disturbances or operating conditions, it will be necessary to introduce criteria for deploying more sensing and control in order to provide a more adaptive control strategy. These criteria include expense and extended time for replacement.
  • Other mathematical and computational challenges arise from the integration of more alternative energy sources (e.g., wind and photovoltaics) into the system. Nonlinear alternating-current optimal power flow (ACOPF) can be used to help reduce the risk of voltage collapse and enable lines to be used within broader limits, and flexible ac transmission systems and storage technology can be used to eliminate stability-related line limits.
  • Transmission and distribution are often planned and operated as separate systems, and there is little feedback between these separate systems beyond the transmission system operator’s knowing the amount of power to be delivered and the distribution system operator’s knowing what voltage to expect. As different types of distributed energy resources, including generation, storage, and responsive demand, are embedded within the distribution network, different dynamic interactions between the transmission and distribution infrastructure may occur. One example is the synchronous and voltage stability issues of distributed generation that change the dynamic nature of the overall power system. It will be important in the future to establish more complete models that include the dynamic interactions between the transmission and distribution systems, including demand-responsive loads.
  • In addition, there need to be better planning models for designing the sustainable deployment and utilization of distributed energy resources. Estimating future demand for grid electricity and the means to provide it entail uncertainty. New distributed-generation technologies move generation closer to where the electricity is consumed.
  • Climate change will introduce several uncertainties affecting the grid. In addition to higher temperatures requiring increased air conditioning loads during peak hours, shifting rainfall patterns may affect the generation of hydroelectricity and the availability of cooling water for generating plants. The frequency of intense weather events may increase. Policies to reduce emissions of carbon dioxide, the main greenhouse gas, will affect generating sources. Better tools to provide more accurate forecasting are needed.
  • Modeling and mitigation of high-impact, low-frequency events (including coordinated physical or cyberattack; pandemics; high-altitude electromagnetic pulses; and large-scale geomagnetic disturbances) is especially difficult because few very serious cases have been experienced. Outages from such events could affect tens of millions of people for months. Fundamental research in mathematics and computer science could yield dividends for predicting the consequences of such events and limiting their damage.

Ten years ago, few people could have predicted the current energy environment in the United States—from the concern for global warming, to the accelerated use of solar and wind power, to the country’s near energy independence [My comment: Ha!!!  Guess power engineers can’t be experts in geology as well…]

Physical Structure of the Existing Grid and Current Trends

Economies of scale resulted in most electric energy being supplied by large power plants. Control of the electric grid was centralized through exclusive franchises given to utilities.

However, the grid that was developed in the 20th century, and the incremental improvements made since then, including its underlying analytic foundations, is no longer adequate to completely meet the needs of the 21st century.

The next-generation electric grid must be more flexible and resilient. While fossil fuels will have their place for decades to come, the grid of the future will need to accommodate a wider mix of more intermittent generating sources such as wind and distributed solar photovoltaics. Some customers want more flexibility to choose their electricity supplier or even generate some of their own electricity, in addition to which a digital society requires much higher reliability.

The availability of real-time data from automated distribution networks, smart metering systems, and phasor measurement units (PMUs) holds out the promise of more precise tailoring of the performance of the grid, but only to the extent that such large-scale data can be effectively utilized. Also, the electric grid is increasingly coupled to other infrastructures, including natural gas, water, transportation, and communication. In short, the greatest achievement of the 20th century needs to be reengineered to meet the needs of the 21st century. Achieving this grid of the future will require effort on several fronts.

The purpose of this report is to provide guidance on the longer-term critical areas for research in mathematical and computational sciences that is needed for the next-generation grid.

Excepting islands and some isolated systems, North America is powered by the four interconnections shown in Figure 1.1. Each operates at close to 60 Hz but runs asynchronously with the others. This means that electric energy cannot be directly transmitted between them. It can be transferred between the interconnects by using ac-dc-ac conversion, in which the ac power is first rectified to dc and then inverted back to 60 Hz.

Any electric power system has three major components: the generator that creates the electricity, the load that consumes it, and the wires that move the electricity from the generation to the load. The wires are usually subdivided into two parts: the high-voltage transmission system and the lower-voltage distribution system. A ballpark dividing line between the two is 100 kV. In North America just a handful of voltages are used for transmission (765, 500, 345, 230, 161, 138, and 115 kV). Figure 1.2 shows the U.S. transmission grid. Other countries often use different transmission voltages, such as 400 kV, with the highest commercial voltage transmitted over a 1,000-kV grid in China.

The transmission system is usually networked, so that any particular node in this system (known as a “bus”) will have at least two incident lines. The advantage of a networked system is that loss of any single line would not result in a power outage.

While ac transmission is widely used, the reactance and susceptance of the 50- or 60-Hz lines without compensation or other remediation limit their ability to transfer power long distances overhead (e.g., no farther than 400 miles) and even shorter distances in underground/undersea cables (no farther than 15 miles). The alternative is to use high-voltage dc (HVDC), which eliminates the reactance and susceptance. Operating at up to several hundred kilovolts in cables and up to 800 kV overhead, HVDC can transmit power more than 1,000 miles. One disadvantage of HVDC is the cost associated with the converters to rectify the ac to dc and then invert the dc back to ac. Also, there are challenges in integrating HVDC into the existing ac grid.

Commercial generator voltages are usually relatively low, ranging from perhaps 600 V for a wind turbine to 25 kV for a thermal power plant. Most of these generators are then connected to the high-voltage transmission system through step-up transformers. The high transmission voltages allow power to be transmitted hundreds of miles with low losses—total transmission system losses are perhaps 3 percent in the Eastern Interconnection and 5 percent in the Western Interconnection.

Large-scale interconnects have two significant advantages. The first is reliability. By interconnecting hundreds or thousands of large generators in a network of high-voltage transmission lines, the failure of a single generator or transmission line is usually inconsequential. The second is economic. By being part of an interconnected grid, electric utilities can take advantage of variations in the electric load levels and differing generation costs to buy and sell electricity across the interconnect. This provides incentive to operate the transmission grid so as to maximize the amount of electric power that can be transmitted.

However, large interconnects also have the undesirable side effect that problems in one part of the grid can rapidly propagate across a wide region, resulting in the potential for large-scale blackouts such as occurred in the Eastern Interconnection on August 14, 2003. Hence there is a need to optimally plan and operate what amounts to a giant electric circuit so as to maximize the benefits while minimizing the risks.

Power Grid Time Scales

Anyone considering the study of electric power systems needs to be aware of the wide range in time scales associated with grid modeling and the ramification of this range on the associated techniques for models and analyses. Figure 1.4 presents some of these time scales, with longer term planning extending the figure to the right, out to many years. To quote University of Wisconsin statistician George Box, “Essentially, all models are wrong, but some are useful. However, the approximate nature of the model must always be borne in mind”. Using a model that is useful for one time scale for another time scale might be either needless overkill or downright erroneous.

The actual power grid is never perfectly balanced. Most generators and some of the load are three-phase systems and can be fairly well represented using a balanced three-phase model. While most of the distribution system is three-phase, some of it is single phase, including essentially all of the residential load. While distribution system designers try to balance the number of houses on each phase, the results are never perfect since individual household electricity consumption varies. In addition, while essentially all transmission lines are three phase, there is often some phase imbalance since the inductance and capacitance between the phases are not identical. Still, the amount of phase imbalance in the high-voltage grid is usually less than 5 percent, so a balanced three-phase model is a commonly used approximation.

While an interconnected grid is just one big electric circuit, many of them, including the North American Eastern and Western Interconnections, were once divided into “groups”; at first, each group corresponded to an electric utility. These groups are now known as load-balancing areas (or just “areas”). The transmission lines that join two areas are known as tie lines.

Power transactions between different players (e.g., electric utilities, independent generators) in an interconnection can take from minutes to decades. In a large system such as the Eastern Interconnection, thousands of transactions can be taking place simultaneously, with many of them involving transaction distances of hundreds of miles, each potentially impacting the flows on a large number of transmission lines. This impact is known as loop flow, in that power transactions do not flow along a particular “contract path” but rather can loop through the entire grid.

Day-Ahead Planning and Unit Commitment

In order to operate in the steady state, a power system must have sufficient generation available to at least match the total load plus losses. Furthermore, to satisfy the N – 1 reliability requirement, there must also be sufficient generation reserves so that even if the largest generator in the system were unexpectedly lost, total available generation would still be greater than the load plus losses. However, because the power system load is varying, with strong daily, weekly, and seasonal cycles, except under the highest load conditions there is usually much more generation capacity potentially available than required to meet the load. To save money, unneeded generators are turned off. The process of determining which generators to turn on is known as unit commitment. How quickly generators can be turned on depends on their technology. Some, such as solar PV and wind, would be used provided the sun is shining or the wind blowing, and these are usually operated at their available power output. Hydro and some gas turbines can be available within minutes. Others, such as large coal, combined-cycle, or nuclear plants, can take many hours to start up or shut down and can have large start-up and shutdown costs.

Unit commitment seeks to schedule the generators to minimize the total operating costs over a period of hours to days, using as inputs the forecasted future electric load and the costs associated with operating the generators. Unit commitment constraints are a key reason why there are day-ahead electricity markets. Complications include uncertainty associated with forecasting the electric load, coupled increasingly with uncertainty associated with the availability of renewable electric energy sources such as wind and solar.
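To make the idea concrete, here is a toy unit-commitment sketch with three hypothetical generators and a single hour of load. It brute-forces every on/off combination and dispatches committed units in merit order; real market software solves a far larger mixed-integer optimization over many hours, with ramp rates, minimum up/down times, and reserve constraints.

```python
from itertools import product

# Toy unit-commitment sketch with hypothetical generator data.
generators = [                      # (name, capacity MW, startup $, $/MWh)
    ("coal",   600, 9000, 25),
    ("ccgt",   400, 2000, 40),
    ("peaker", 150,  300, 90),
]
load = 800                          # MW that must be met this hour

best = None
for on in product([0, 1], repeat=len(generators)):
    capacity = sum(g[1] for g, u in zip(generators, on) if u)
    if capacity < load:
        continue                    # committed units cannot cover the load
    # Dispatch cheapest committed units first (merit order) up to the load.
    cost, remaining = 0.0, load
    for (name, cap, start, mwh_cost), u in sorted(
            zip(generators, on), key=lambda x: x[0][3]):
        if not u:
            continue
        output = min(cap, remaining)
        cost += start + output * mwh_cost   # pay startup even if lightly loaded
        remaining -= output
    if best is None or cost < best[0]:
        best = (cost, on)

print("cheapest commitment (cost, on/off pattern):", best)
```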

The percentage of energy actually provided by a generator relative to the amount it could supply if it were operated continuously at its rated capacity is known as its capacity factor. Capacity factors, which are usually reported monthly or annually, can vary widely, both for individual generators and for different generation technologies. Approximate annual capacity factors are 90% for nuclear, 60% for coal, 48% for natural gas combined cycle, 38% for hydro, 33% for wind, and 27% for solar PV (EIA, 2015). For some technologies, such as wind and solar, there can be substantial variations in monthly capacity factors as well.
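Capacity factor is a simple ratio, shown below with hypothetical numbers for a 100 MW wind farm:

```python
# Capacity factor = energy actually produced / energy at continuous full output.
# Hypothetical example: a 100 MW wind farm producing 289,080 MWh in a year.
rated_mw = 100
energy_mwh = 289_080
hours_per_year = 8760

capacity_factor = energy_mwh / (rated_mw * hours_per_year)
print(f"capacity factor: {capacity_factor:.0%}")   # -> 33%, typical for wind
```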

Planning takes place on time scales ranging from perhaps hours in a control room setting, to more than a decade in the case of high-voltage transmission additions. The germane characteristic of the planning process is uncertainty. While the future is always uncertain, recent changes in the grid have made it even more so. Planning was simpler in the days when load growth was fairly predictable and vertically integrated utilities owned and operated their own generation, transmission, and distribution. Transmission and power plant additions could be coordinated with generation additions since both were controlled by the same utility.

As a result of the open transmission access that occurred in the 1990s, there needed to be a functional separation of transmission and generation, although there are still some vertically integrated utilities. Rather than being able to unilaterally plan new generation, a generation queue process is required in which requests for generation interconnections needed to be handled in a nondiscriminatory fashion. The large percentage of generation in the queue that will never actually get built adds uncertainty, since in order to determine the incremental impact of each new generator, an existing generation portfolio needs to be assumed.

FIGURE 1.18 “Duck” curve. SOURCE: Courtesy of California Independent System Operator (California ISO, 2013). Licensed with permission from the California ISO. Any statements, conclusions, summaries or other commentaries expressed herein do not reflect the opinions or endorsement of the California ISO.

Also there is the question of who bears the risk associated with the construction of new generation. More recently, additional uncertainty is the growth in renewable generation such as wind and solar PV and in demand-responsive load.

Distribution Systems

As was mentioned earlier, the portion of the system that ultimately delivers electricity to most customers is known as the distribution system. This section provides a brief background on the distribution system as context for the rest of the report.

Sometimes the distribution system is directly connected to the transmission system, which operates at voltages above, say, 100 kV, and sometimes it is connected to a subtransmission system, operating at voltages of perhaps 69 or 46 kV. At the electrical substation, transformers are used to step down the voltage to the distribution level, with 12.47 kV being the most common in North America (Willis, 2004). These transformers vary greatly in size, from a few MWs in rural locations to more than 100 MW for a large urban substation.

The electricity leaves the substation on three-phase “primary trunk” feeders. While the distribution system can be networked, mostly it is radial. Hence on most feeders the flow of power has been one-way, from the substation to the customers. The number of feeders varies by substation size, from one to two up to more than a dozen. Feeder maximum power capacity can also vary widely from a few MVA to about 30 MVA. Industrial or large commercial customers may be served by dedicated feeders. In other cases smaller “laterals” branch off from the main feeder. Laterals may be either three phase or single phase (such as in rural locations). Most of the main feeders and laterals use overhead conductors on wooden poles, but in urban areas and some residential neighborhoods they are underground. At the customer location the voltage is further reduced by service transformers to the ultimate supply voltage (120/240 for residential customers). Service transformers can be either pole mounted, pad mounted on the ground, or in underground vaults. Typical sizes range from 5 to 5,000 kVA.

A key concern with the distribution system is maintaining adequate voltage levels for customers. Because the voltage drop along a feeder varies with the power flow on the feeder, various control mechanisms are used. These include load tap changing (LTC) transformers at the substation to change the supply voltage to all the feeders supplied by that transformer, voltage regulators that can change the voltage on individual feeders (and sometimes even individual phases), and switched capacitors that provide reactive power compensation.
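A rough feel for why these voltage controls are needed comes from the textbook feeder-drop approximation ΔV ≈ (R·P + X·Q)/V, which is not from the report itself. The sketch below uses hypothetical impedance and loading values:

```python
# Approximate per-phase voltage drop along a feeder: dV ≈ (R*P + X*Q) / V.
# Hypothetical 12.47 kV feeder segment; values chosen only for illustration.
V_ll = 12_470                      # line-to-line volts
V_ln = V_ll / 3 ** 0.5             # line-to-neutral volts
R, X = 1.2, 2.0                    # ohms for the segment
P, Q = 3.0e6, 1.0e6                # total watts and vars served beyond the segment

dV = (R * P / 3 + X * Q / 3) / V_ln     # per-phase approximation
print(f"drop ≈ {dV:.0f} V per phase ({dV / V_ln:.1%} of nominal)")
# A few percent of drop on one segment shows why regulators and capacitors matter.
```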

Another key concern is protection against short circuits. For radial feeders, protection is simpler if the power is always flowing to the customers. Simple protection can be provided by fuses, but a disadvantage of a fuse is that a crew must be called in the event of it tripping. More complex designs using circuit breakers and re-closers allow for remote control, helping to reduce outage times for many customers.

With reduced costs for metering, communication, and control, the distribution system is rapidly being transformed. Distributed generation sources on the feeders, such as PV, mean that power flow may no longer be just one-way. Widely deployed advanced metering infrastructure systems are allowing near-real-time information about customer usage. Automated switching devices are now being widely deployed, allowing the distribution system to be dynamically reconfigured to reduce outage times for many customers. Advanced analytics are now being developed to utilize this information to help improve the distribution reliability and efficiency. Hence the distribution system is now an equal partner with the rest of the grid, with its challenges equally in need of the fundamental research in mathematical and computational sciences being considered in this report.

Organizations and Markets in the Electric Power Industry

Physically, a large-scale grid is ultimately an electrical circuit, joining the loads to the generators. However, it is a shared electrical circuit with many different players utilizing that circuit to meet the diverse needs of electricity consumers. This circuit has a large physical footprint, with transmission lines crisscrossing the continent and having significant economic and societal impacts. Because the grid plays a key role in powering American society, there is a long history of regulating it in the United States at both the state and federal levels. Widespread recognition that reliability of the grid is paramount led to the development of organizational structures playing major roles in how electricity is produced and delivered. Key among these structures is the Federal Energy Regulatory Commission (FERC), the North American Electric Reliability Corporation (NERC), and federal, regional, and state agencies that establish criteria, standards, and constraints.

In addition to regulatory hurdles, rapidly evolving structural elements within the industry, such as demand response, load diversity, different fuel mixes (including huge growth in the amount of renewable generation), and markets that help to determine whether new capacity is needed, all present challenges to building new transmission infrastructure. With these and many other levels of complexity affecting the planning and operation of a reliable power system, the need for strong, comprehensive, and accurate computational systems to analyze vast quantities of data has never been greater.

HISTORY OF FEDERAL AND STATE REGULATION WITH REGIONAL STANDARDS DEVELOPMENT

Since the creation of Edison’s Pearl Street Station in 1882, electric utilities have been highly regulated. This initially occurred at the municipal level, since utilities needed to use city streets to route their wires, necessitating a franchise from the city. In the late 1800s, many states within the United States formed public utility regulatory agencies to regulate railroad, steamboat, and telegraph companies. With the advent of larger electric power utility companies in the early 1900s, state regulatory organizations expanded their scopes to regulate electric power companies.

Regulatory Development

Almost from their inception, electric utilities were viewed as a natural monopoly. Because of the high cost of building distribution systems and the social impacts associated with the need to use public space for the wires, it did not make sense to have multiple companies with multiple sets of wires competing to provide electric service in the same territory. Electric utilities were franchised initially by cities and later (in the United States) by state agencies. An electric utility within a franchised service territory “did it all.” This included owning the increasingly larger generators and the transmission and distribution system wires, and continued all the way to reading the customer’s meters. Customers did not have a choice of electric supplier (many still do not). Local and state regulators were charged with keeping electric service rates just and reasonable within these franchised service territories.

Reliability Organization Development

On June 1, 1968, the electricity industry formed NERC in response to the Federal Power Commission (FPC) recommendation and the 1965 blackout, when 30 million people lost power in the northeastern United States and southeastern Canada. In 1973, the utility industry formed the Electric Power Research Institute to pool research and improve reliability. After another blackout occurred in New York City in July 1977, Congress reorganized the FPC into the Federal Energy Regulatory Commission and expanded the organization’s responsibilities to include the enactment of a limited liability provision in federal legislation, allowing the federal government to propose voluntary standards. In 1980, the North American Power Systems Interconnection Committee (known as NAPSIC) became the Operating Committee for NERC, putting the reliability of both planning and operation of the interconnected grid under one organization. In 1996, two major blackouts in the western United States led the members of the Western System Coordinating Council to develop the Reliability Management System. Members voluntarily entered into agreements with the council to pay fines if they violated certain reliability standards. In response to the same two western blackout events, NERC formed a blue-ribbon panel and the Department of Energy formed the Electric System Reliability Task Force. These independent investigations led the two groups to recommend separately the creation of an independent, audited self-regulatory electric reliability organization to develop and enforce reliability standards throughout North America.

Both groups concluded that federal regulation was necessary to ensure the reliability of the North American electric power grid. Following those conclusions, NERC began converting its planning policies, criteria, and guides into reliability standards.

On August 14, 2003, North America experienced its worst blackout to that date, with 50 million people losing power in the Midwestern and northeastern United States and in Ontario, Canada. On August 8, 2005, the Energy Policy Act of 2005 authorized the creation of an electric reliability organization and made reliability standards mandatory and enforceable. On July 20, 2006, FERC certified NERC as the electric reliability organization for the United States. From September through December 2006, NERC signed memoranda of understanding with Ontario, Quebec, Nova Scotia, and the National Energy Board of Canada. Following the execution of these agreements, on January 1, 2007, the North American Electric Reliability Council was renamed the North American Electric Reliability Corporation. Following the establishment of NERC as the electric reliability organization for North America, FERC approved 83 NERC Reliability Standards, representing the first set of legally enforceable standards for the bulk electric power system in the United States.

On April 19, 2007, FERC approved agreements delegating its authority to monitor and enforce compliance with NERC reliability standards in the United States to eight regional entities, with NERC continuing in an oversight role.

North American Regional Entities

There are many characteristic differences in the design and construction of electric power systems across North America that make a one-size-fits-all approach to reliability standards across all of North America difficult to achieve. A key driver for these differences is the diversity of population densities within North America, which affects the electric utility design and construction principles needed to reliably and efficiently provide electric service in each different area. There are eight regional reliability organizations covering the United States, Canada, and a portion of Baja California Norte, Mexico (Figure 2.1). The members of these regional entities represent virtually all segments of the electric power industry and work together to develop and enforce reliability standards, while addressing reliability needs specific to each organization.

The largest power flow cases routinely solved now contain at most 100,000 buses… When a contingency occurs, such as a fault on a transmission line or the loss of a generator, the system experiences a “jolt” that results in a mismatch between the mechanical power delivered by the generators and the electric power consumed by the load. The phase angles of the generators relative to one another change owing to the power imbalance. If the contingency is sufficiently large, it can result in generators losing synchronism with the rest of the system, or in the protection system responding by removing other devices from service, perhaps starting a cascading blackout.
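The simplest model behind such studies is the linearized “DC” power flow, in which bus angles follow from net injections through the susceptance matrix. Below is a minimal sketch for a hypothetical three-bus system; real cases couple on the order of 100,000 buses and use full AC models:

```python
import numpy as np

# Tiny DC power-flow sketch (lossless, linearized) for a hypothetical 3-bus system.
# Line susceptances (per unit) between buses 1-2, 1-3, and 2-3.
b12, b13, b23 = 10.0, 8.0, 5.0

# Reduced B matrix with bus 1 as the slack (its angle fixed at 0).
B = np.array([[b12 + b23, -b23],
              [-b23,      b13 + b23]])
P = np.array([1.5, -2.0])          # net injections at buses 2 and 3 (p.u.)

theta = np.linalg.solve(B, P)      # bus voltage angles (radians)
flow_12 = b12 * (0.0 - theta[0])   # flow on line 1-2, from bus 1 toward bus 2
print("angles (rad):", theta, " line 1-2 flow (p.u.):", flow_12)
```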

Stability issues have been a part of the power grid since its inception; Edison had to deal with hunting oscillations on his steam turbines in 1882, when he first connected them in parallel.

In the case of wind farms, the dynamics of the turbine and turbine controls behind the inverter are also important. Because these technologies are developing rapidly and in some cases are manufacturers’ proprietary models, industry standard models with sufficient fidelity for TS lag behind the real-world developments. The development of inverter-based synthetic inertia and synthetic governor response from wind farms, photovoltaic farms, and grid-connected storage systems will create additional modeling complexity.

DS solutions have become more important in recent years as a result of the increased use of renewable sources, which causes concerns about system dynamic performance in terms of frequency and area control error—control area dynamic performance. DS solutions typically rely on IEEE standard models for generator dynamics and simpler models for assumed load dynamics. As with TS solutions, providing accurate models for wind farm dynamics and for proposed synthetic inertial response and governor response is a challenge.

The advent of high penetrations of inverter-based renewable generation (wind farms, solar farms) has led to a requirement for interconnection studies for each new renewable resource to ensure that the new wind farm will not create problems for the transmission system. These interconnection studies begin with load-flow analyses to ensure that the transmission system can accommodate the increased local generation, but then broaden to address issues specific to inverter-based generation, such as analyzing harmonic content and its impact on the balanced three-phase system.

HARMONIC ANALYSIS

The models described in all sections of this report are based on the 60-Hz waveform and the assumption that the waveform is “perfect,” meaning that there are no higher-order harmonics caused by nonlinearities, switching, imperfect machines and transformers, and so on. However, inverters are switching a dc voltage at high frequencies to approximate a sine wave, and this inevitably introduces third, fifth, and higher-order harmonics or non-sine waveforms into the system. The increased use of renewables and also increased inverter-based loads make harmonic analysis—study of the behavior of the higher harmonics—more and more important. While interconnection standards tightly limit the harmonic content that individual inverters may introduce into the system, the presence of multiple inverter-based resources in close proximity (as with a new transmission line to a region having many wind farms) can cause interference effects among the multiple harmonic sources.

Model predictive control (MPC) has been developed extensively in the literature for the automatic generation control (AGC) problem but has rarely been applied in the field. The minor improvements in the system that are not required by NERC standards today do not justify the increased cost and complexity of the software and models needed. However, high penetration by renewables, decreased conventional generation available for regulation, the advent of new technologies such as fast short-term storage (flywheels, batteries), and short-term renewable production forecasting may reopen the investigation of MPC for AGC.

MODELING HIGH-IMPACT, LOW-FREQUENCY EVENTS

An emerging area for which some analytic tools and methods are now becoming available is the modeling of what are often referred to as high-impact, low-frequency (HILF) events—that is, events that are statistically unlikely but still plausible and, if they were to occur, could have catastrophic consequences. These include large-scale cyber or physical attacks, pandemics, electromagnetic pulses (EMPs), and geomagnetic disturbances (GMDs). This section focuses on GMDs since over the last several years there has been intense effort in North America to develop standards for assessing the impact of GMDs on the grid.

GMDs, which are caused by coronal mass ejections from the Sun, can impact the power grid by causing low frequency (less than 0.1 Hz) changes in Earth’s magnetic field. These magnetic field changes then cause quasi-dc electric fields, which in turn cause what are known as geo-magnetically induced currents (GICs) to flow in the high-voltage transmission system. The GICs impact the grid by causing saturation in the high-voltage transformers, leading to potentially large harmonics, which in turn result in both greater reactive power consumption and increased heating. It has been known since the 1940s that GMDs have the potential to impact the power grid; a key paper in the early 1980s showed how GMD impacts could be modeled in the power flow.
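The basic arithmetic behind a GIC estimate is simple, even though real studies solve a dc model of the whole network. Here is a back-of-the-envelope sketch with assumed values (not figures from the report):

```python
# Back-of-the-envelope GIC estimate with assumed values; real studies solve a
# dc model of the entire transmission network and transformer grounding paths.
E_field_v_per_km = 2.0        # assumed quasi-dc geoelectric field during a storm (V/km)
line_length_km = 200          # assumed transmission line length
loop_resistance_ohm = 3.0     # assumed dc resistance of line, transformers and grounds

induced_voltage = E_field_v_per_km * line_length_km   # volts driven along the line
gic = induced_voltage / loop_resistance_ohm           # resulting quasi-dc current (A)
print(f"induced voltage ≈ {induced_voltage:.0f} V, GIC ≈ {gic:.0f} A")
```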

The two key concerns associated with large GMDs are that (1) the increased reactive power consumption could result in a large-scale blackout and (2) the increased heating could permanently damage a large number of hard-to-replace high-voltage transformers.

Large GMDs are quite rare but could have catastrophic impact. For example, a 500 nT/min storm blacked out Quebec in 1989. Larger storms, with values of up to 5,000 nT/min, occurred in 1859 and 1921, both before the existence of large-scale grids. Since such GMDs can be continental in size, their impact on the grid could be significant, and tools are therefore needed to predict them and to allow utilities to develop mitigation methods.

The mathematical sciences provide essential technology for the design and operation of the power grid. Viewed as an enormous electrical network, the grid’s purpose is to deliver electrical energy from producers to consumers. The physical laws of electricity yield systems of differential equations that describe the time-varying currents and voltages within the system. The North American grid is operated in regimes that maintain the system close to a balanced three-phase, 60-Hz ideal. Conservation of energy is a fundamental constraint: Loads and generation must always balance. This balance is maintained in today’s network primarily by adjusting generation. Generators are switched on and off while their output is regulated continuously to match power demand. Additional constraints come from the limited capacity of transmission lines to deliver power from one location to another.
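One minimal expression of that balance is the aggregate swing equation, a standard simplification (not taken from the report) in which system frequency drifts whenever generation and load differ. The numbers below are hypothetical and ignore governor and load response:

```python
# Minimal sketch of system frequency after a sudden generation shortfall, using
# the aggregate swing equation in per unit: 2H * d(delta_f / f0)/dt = delta_P.
# Hypothetical numbers; governor and load response are ignored, so the frequency
# keeps falling, whereas a real system arrests the decline within seconds.
f0, H = 60.0, 4.0        # nominal frequency (Hz), aggregate inertia constant (s)
dP = -0.02               # 2% generation deficit, per unit of system capacity
dt, f = 0.1, f0          # time step (s), starting frequency (Hz)

for step in range(10):   # first second after the event
    f += dt * dP * f0 / (2 * H)
    print(f"t = {dt * (step + 1):.1f} s   f = {f:.3f} Hz")
```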

The character, size, and scope of power flow equations are daunting, but (approximate) solutions must be found to maintain network reliability. From a mathematical perspective, the design and operation of the grid is a two-step process. The first step is to design the system so that it will operate reliably. Here, differential-equation models are formulated, numerical methods are used for solving them, and geometric methods are used for interpreting the solutions. The next section, “Dynamical Systems,” briefly introduces dynamical systems theory, a branch of mathematics that guides this geometric analysis. Stability is essential, and much of the engineering of the system is directed at ensuring stability and reliability in the face of fluctuating loads, equipment failures, and changing weather conditions. For example, lightning strikes create large, unavoidable disturbances with the potential to abruptly move the system state outside its desired operating regime and to permanently damage parts of the system. Control theory, introduced in a later section, “Control,” is a field that develops devices and algorithms to ensure stability of a system using feedback.

More generation capacity is needed than is required to meet demand, for two reasons: (1) loads fluctuate and can be difficult to accurately predict and (2) the network should be robust in the face of failures of network components.

The section “Optimization” describes some of the mathematics and computational methods for optimization that are key aspects of this process. Because these algorithms sit at the center of wholesale electricity markets, they influence financial transactions of hundreds of millions of dollars daily.
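
As a concrete, heavily simplified illustration of the kind of optimization that sits at the center of dispatch and market clearing, the sketch below solves a single-period economic dispatch with linear costs and generator limits and no network model; the costs, limits, and demand are invented for illustration.

```python
# Single-period economic dispatch sketch: minimize total generation cost
# subject to meeting demand, ignoring network constraints and losses.
from scipy.optimize import linprog

cost = [20.0, 35.0, 90.0]      # assumed $/MWh for three generators
p_min = [100.0, 50.0, 0.0]     # MW lower limits
p_max = [400.0, 300.0, 200.0]  # MW upper limits
demand = 650.0                 # MW system load

res = linprog(c=cost,
              A_eq=[[1.0, 1.0, 1.0]], b_eq=[demand],   # generation must equal load
              bounds=list(zip(p_min, p_max)),
              method="highs")

print("Dispatch (MW):", res.x)
print("Total cost ($/h):", res.fun)
# With the HiGHS solver, the dual of the balance constraint (res.eqlin.marginals)
# corresponds, up to sign convention, to the system marginal price.
```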

The electrical grid operates 24/7, but its physical equipment has a finite lifetime and occasionally fails. Although occasional outages in electric service are expected, an industry goal is to minimize these and limit their extent. Cascading failures that produce widespread blackouts are disruptive and costly. Systematic approaches to risk analysis, described in the section “Risk Analysis, Reliability, Machine Learning, and Statistics,” augment physical monitoring devices to anticipate where failures are likely and to estimate the value of preventive maintenance.

The American Recovery and Reinvestment Act of 2009 funded the construction and deployment of many of the phasor measurement units (PMUs) discussed in Chapter 1, so that by 2015 there were approximately 2,000 production-grade PMUs in North America alone, sampling the grid 30 to 60 times per second. This is producing an unprecedented stream of data, reporting currents and voltages across the power system with far greater temporal resolution than the once-every-4-to-6-seconds data previously available from existing Supervisory Control and Data Acquisition (SCADA) systems.

The final section, “Uncertainty Quantification,” introduces mathematical methods for quantifying uncertainty. This area of mathematics is largely new, and the committee thinks that it has much to contribute to electric grid operations and planning. There are several kinds of uncertainty that affect efforts to begin merging real-time simulations with real-time measurements. These include the effects of modeling errors and approximations as well as the intrinsic uncertainty inherent in the intermittency of wind and solar generation and unpredictable fluctuations of loads. Efforts to create smart grids in which loads are subject to grid control and to generation introduce additional uncertainty.

Some of the uncertainty associated with the next-generation grid is quite deep, in the sense that there is fundamental disagreement over how to characterize or parameterize uncertainty. This can be the case in situations such as predictions associated with solar or wind power, or risk assessments for high-impact, low-frequency events.

RISK ANALYSIS, RELIABILITY, MACHINE LEARNING, AND STATISTICS

Power systems are composed of physical equipment that needs to function reliably. Many different pieces of equipment can fail on the power system: generators, transmission lines, transformers, medium- and low-voltage cables, connectors, and more. Each failure can leave customers without power, stress the rest of the system, and raise the risk of cascading failure. The infrastructure of our power system is aging, and it is currently handling loads substantially larger than it was designed for. These reliability issues are expected to persist for the foreseeable future, particularly as the power grid continues to be used beyond its design specifications.

Energy theft

One of the most important goals set by governments in the developing world is universal access to reliable energy. While energy theft is not a significant problem in the United States, some utilities in the developing world cannot provide reliable energy because of rampant theft, which severely depletes the funds available to supply power. Customers steal power by threading cables from powered buildings to unpowered buildings. They also thread cables to bypass meters or tamper with the meters directly, for instance, by pouring honey into them to slow them down. Power companies need to predict which customers are likely to be stealing power and determine who should be examined by inspectors for lack of compliance. Again, each customer can be represented by a vector x that describes the household, and the label y is the result of an inspector’s visit (the customer is either in compliance or not in compliance).
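
The paragraph above describes a standard supervised classification setup. The sketch below is only a schematic of that setup, using synthetic features (billed kWh, a year-over-year usage drop, and tamper alerts are invented stand-ins for real billing and meter data) and a logistic regression to rank customers for inspection.

```python
# Schematic of the theft-detection classification task: each customer is a
# feature vector x (here synthetic), the label y is the inspection outcome.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([
    rng.normal(300, 80, n),          # billed kWh (assumed feature)
    rng.normal(0.0, 0.15, n),        # fractional drop in billed usage (assumed)
    rng.poisson(0.2, n),             # meter-tamper alerts (assumed)
])
# Synthetic ground truth: theft is more likely with big usage drops and alerts.
logit = -3.0 + 6.0 * X[:, 1] + 1.5 * X[:, 2]
y = rng.random(n) < 1 / (1 + np.exp(-logit))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Rank customers for inspection by predicted probability of non-compliance.
scores = clf.predict_proba(X_te)[:, 1]
print("Top 5 inspection candidates (indices):", np.argsort(scores)[-5:][::-1])
```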

UNCERTAINTY IN WHAT LIES AHEAD

The grid of today is changing with the rapid integration of renewable energy resources such as wind and solar photovoltaic (PV) and the retirement of substantial amounts of coal generation. For example, in early 2015 in the United States, there was installed capacity of about 65 GW of wind and 9 GW of solar PV (out of a total of 1,070 GW), up from less than 3 GW of wind and 0.4 GW of solar just 15 years earlier (EIA, 2009). However, this needs to be placed in context by noting that during the natural gas boom in the early 2000s, almost 100 GW of natural gas capacity was added in just 2 years! And solar thermal, which seemed so promising in 2009, has now been mostly displaced by solar PV because of falling prices for PV cells.

Further uncertainty arises because of the greater coupling of the electric grid to other infrastructures such as natural gas, water, and transportation. Finally, specific events can upset the best predictions. An example is the Japanese tsunami in 2011, which (among other factors) dimmed the prospects for a nuclear renaissance in the United States and elsewhere.

Some of the uncertainty currently facing the industry is illustrated in Figure 5.1. The drivers of this uncertainty are manifold: (1) cyber technologies are maturing and are becoming available at reasonable cost—these include sensing, such as phasor measurement units (PMUs), communications, control, and computing; (2) the emergence of qualitatively new resources, such as renewable distributed energy resources (DERs)—PVs, wind generation, geothermal, small hydro, biomass, and the like; (3) a new quest for large-scale storage—stationary batteries, as well as low-cost storage batteries such as those for use in electric vehicles; (4) changing transmission technologies, such as increased use of flexible ac transmission system (FACTS) technologies and/or increased use of high-voltage direct current (HVDC) lines and the integration of other dc technologies; (5) environmental objectives for reducing pollutants; (6) industry reorganization, from fully regulated to service-oriented markets; and (7) the need for basic electrification in developing countries, which affects the priorities of equipment suppliers. Given these drivers, it is hard to predict long-term power grid scenarios with any precision.

TECHNOLOGIES THAT WILL ENHANCE THE OBSERVABILITY OF THE GRID

Since the advent of the electric power grid, measurement technologies have been a necessary component of the system for both its protection and its control. The currents flowing in the power system wires and the bus voltages are two key quantities to measure. The currents are measured using current transformers, which convert the magnetic field of the primary circuit to a proportionally smaller current suitable for input to instrumentation. The voltages are measured using potential transformers (PTs), which use the traditional transformer arrangement of two windings coiled on a common magnetic core to proportionally reduce the line voltage to a level suitable for instrumentation. As transmission voltages rose through the middle of the 20th century, coupling capacitor voltage transformers, which use capacitors as a voltage divider, became a more practical alternative to PTs for extra-high-voltage transmission. Other instruments exploiting either the electric or the magnetic field have been developed. More recently, optical sensors have been developed that measure voltages and currents directly.

Bringing these measurements to a central location has been possible for many decades. Technologies such as Supervisory Control and Data Acquisition (SCADA) use specialized protocols to transmit the information gathered in substations through analog-to-digital conversion in various sensors that are directly connected to remote terminal units (RTUs). A typical SCADA architecture exchanges both measurement and control information between the front end processor in the control center and the RTUs in the substations. Modern SCADA protocols support reporting of exceptions in addition to more traditional polling approaches. These systems are critical to providing control centers with the information necessary to operate the grid and to providing control signals to the various devices in the grid to support centralized control and optimization of the system.

SCADA systems in use today have two primary limitations. First, they are relatively slow: most systems poll once every 4 seconds, with some of the faster implementations gathering data at a 2-second scan rate. Second, they are not time synchronized. Often the data gathered in the substation and passed to the central computer are not timestamped until they are registered into the real-time database at the control center. And because the information is gathered over the course of a polling cycle, pre- and post-event measurements can be mixed together if something happens during the polling cycle itself.

First described in the 1980s, the PMUs mentioned in earlier chapters utilize the precise time available from systems such as the Global Positioning System. The microsecond accuracy available is sufficient for accurate calculation of the phase angles of various power system quantities. More generally, such high-speed time-synchronized measurements are referred to as wide area measurement systems. These underwent significant development beginning in the 1990s and can now provide better measurements of system dynamics, with typical data collection rates of 30 or more samples per second. Significant advances in networking technology within the past couple of decades have enabled wide area networks by which utilities can share their high-speed telemetry with each other, giving organizations better wide area situational awareness of the power system. This addresses one of the key challenges that was identified, and formed into a recommendation, following the August 14, 2003, blackout.
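
As a small illustration of what a PMU computes, the sketch below estimates the fundamental-frequency phasor of one cycle of an ideal 60-Hz waveform with a single-bin DFT; the sample rate and test signal are assumed, and real PMUs add filtering, interpolation, and off-nominal-frequency compensation.

```python
# Minimal phasor-estimation sketch: a one-cycle DFT of a sampled 60-Hz
# waveform returns the RMS magnitude and phase angle that a PMU would report.
import numpy as np

f0 = 60.0                      # nominal frequency, Hz
N = 24                         # assumed samples per cycle (1,440 samples/s)
A, phi = 120.0 * np.sqrt(2), np.deg2rad(-20.0)   # peak value and phase of test signal

n = np.arange(N)
x = A * np.cos(2 * np.pi * n / N + phi)          # one cycle of samples

# Fundamental-frequency phasor (RMS convention)
phasor = np.sqrt(2) / N * np.sum(x * np.exp(-1j * 2 * np.pi * n / N))

print("Magnitude (RMS):", round(abs(phasor), 2))                    # ~120.0
print("Angle (degrees):", round(np.degrees(np.angle(phasor)), 2))   # ~ -20.0
```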

There are several benefits of wide area measurement systems. First, because of the high-speed measurements, dynamic phenomena can be measured. The 0.1- to 5-Hz oscillations that occur on the power system can be compared to simulations of the same events, leading to calibration that can improve the power system models. It is important to have access to accurate measurements corresponding to the time scales of the system. Second, by providing a direct measure of the angle, there can be a real-time correlation between observed angles and potential system stress.

The measurements from PMUs, known as synchrophasors, can be used to manage off-normal conditions such as when an interconnected system breaks into two or more isolated systems, a process known as “islanding.” For example, during Hurricane Gustav in September 2008, system operators from Entergy (the electric utility company serving the impacted area in Louisiana) were able to keep operating a portion of the grid that islanded from the rest of the Eastern Interconnection after storm damage took all of the connecting transmission lines out of service, isolating a pocket of generation and load. The isolated area continued to operate by balancing generation and load, and the system operators credited synchrophasor technology with allowing them to keep this island operational during the restoration process.

Researchers are looking at PMU data to expedite resolution of operating events such as voltage stability and fault location and to quickly diagnose equipment problems such as failing instrument transformers and negative current imbalances. More advanced applications use PMU data as inputs to the special protection systems or remedial action schemes, mentioned in Chapter 3 for triggering preprogrammed automated response to rapidly evolving system conditions.

All telemetry is subject to multiple sources of error. These include but are not limited to measurement calibration, instrumentation problems, loss of communications, and data drop-outs. To overcome these challenges, state estimation, introduced in Chapter 3, is used to compute the real-time state of the system. This is a model-fitting exercise, whereby the available data are used to determine the coefficients of a power system model. A traditional state estimator requires iteration to fit the nonlinear power system model to the available measurements. With an overdetermined set of measurements, the state estimation process helps to identify measurements that are suspected of being inaccurate. Because synchrophasors are time aligned, a new type of linear state estimator has been developed and is now undergoing widespread implementation (Yang and Bose, 2011). The advantage of “cleaning” the measurements through a linear state estimator is that downstream applications are shielded from the data quality errors that can arise in the measurement and communications infrastructure. Additional advances are under way, including distributed state estimation and dynamic state estimation.
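
The linear-algebra core of such a linear state estimator is an ordinary weighted least-squares fit. The toy sketch below, with an invented measurement matrix and noise levels, shows the closed-form estimate and the residuals used to flag suspect measurements.

```python
# Toy linear state estimation: with synchronized phasor measurements the
# measurement model is linear, z = H x + e, and the weighted least-squares
# estimate has the closed form x_hat = (H' W H)^-1 H' W z.
import numpy as np

rng = np.random.default_rng(1)
x_true = np.array([1.02, 0.98, -0.05])           # assumed "true" state (toy)

# Assumed measurement matrix: 6 measurements of a 3-element state,
# i.e. an overdetermined system with some redundancy.
H = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [1, -1, 0],
              [0, 1, -1],
              [1, 0, -1]], dtype=float)

sigma = np.array([0.01, 0.01, 0.02, 0.02, 0.02, 0.05])   # per-measurement noise std
z = H @ x_true + rng.normal(0, sigma)

W = np.diag(1 / sigma**2)                        # weight accurate meters more heavily
x_hat = np.linalg.solve(H.T @ W @ H, H.T @ W @ z)

residuals = z - H @ x_hat                        # large residuals flag suspect data
print("Estimated state:", np.round(x_hat, 4))
print("Normalized residuals:", np.round(residuals / sigma, 2))
```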

One of the more recent challenges has been converting the deluge of new measurements available to a utility, from synchrophasors and other sources, into actionable information. Owing to the many more points of measurement available to a utility from smart meters and various distribution automation technologies, all organizations involved in the operation of the electric power grid are faced with an explosion of data and are grappling with techniques to utilize this information for making better planning and/or operational decisions. Big data analytics is being called on to extract information for enhancing various planning and operational applications.

One challenge is improved management of uncertainty. Whether it is the uncertainty associated with estimating future load or generation availability, or the uncertainty associated with risks such as extreme weather or other natural or man-made disaster scenarios that could overtake the system, more sophisticated tools for characterizing and managing this uncertainty are needed.

Better tools to provide more accurate forecasting are also needed. One promising approach is through ensemble forecasting methods, in which various forecasting methods are compared with one another and their relative merits used to determine the most likely outcome (with appropriate confidence bounds).
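
One simple version of this idea, purely as a sketch with invented numbers, is to weight each forecaster by the inverse of its recent error and report a weighted mean with a rough spread; operational ensemble methods are considerably more sophisticated.

```python
# Simple ensemble combination sketch: weight each forecaster by the inverse
# of its recent mean absolute error and report a weighted mean plus a spread.
import numpy as np

# Assumed next-hour load forecasts (MW) from three different models
forecasts = np.array([1520.0, 1480.0, 1610.0])
# Assumed mean absolute error of each model over recent history (MW)
mae = np.array([25.0, 40.0, 90.0])

weights = (1 / mae) / np.sum(1 / mae)
combined = weights @ forecasts
spread = np.sqrt(weights @ (forecasts - combined) ** 2)   # weighted spread as a rough band

print("Weights:", np.round(weights, 3))
print("Combined forecast (MW):", round(combined, 1))
print("Rough uncertainty band (MW): +/-", round(2 * spread, 1))
```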

Finally, better decision support tools, including intelligent alarm processors and visualization, are needed to enhance the reliability and effectiveness of the power system operational environment. Better control room automation over the years has provided an unprecedented increase in the effectiveness with which human operators handle complex and rapidly evolving events. During normal and routine situations, the role of the automation is to bring to the operator’s attention events that need to be addressed. However, during emergency situations, the role of the automation is to prioritize actions that need to be taken. Nevertheless, there is still room for improving an operator’s ability to make informed decisions during off-normal and emergency situations. More effective utilization of visualization and decision-support automation is still evolving, and much can be learned by making better use of the social sciences and applying cognitive systems engineering approaches.

TECHNOLOGIES THAT WILL ENHANCE THE CONTROLLABILITY OF THE GRID

The value of advanced analytics is only as good as our ability to effect change in the system based on the results of those analytics. Whether it is manual control with a human in the loop or automated control that can act quickly to resolve an issue, effective controls are essential. The power system today relies on primary, secondary, and tertiary hierarchical control strategies to provide various levels of coordinated control. This coordination is normally achieved through temporal and spatial separation of the various controls that are simultaneously operating. For example, high-speed feedback in the form of proportional-integral-derivative controls operates at power plants to regulate the desired voltage and power output of the generators. Supervisory control in the form of set points (e.g., maintain this voltage and that power output) is received by the power plant from a centralized dispatcher. Systemwide frequency regulation of the interconnected power system is accomplished through automatic generation control, which calculates the desired power output of the generating plants every 4 seconds.
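
The sketch below shows, with invented numbers and the common sign convention, the area control error calculation that automatic generation control evaluates each cycle and the resulting set-point adjustments spread across participating units; real AGC adds filtering, ramp limits, and integral action.

```python
# Area control error (ACE) sketch for one balancing area, evaluated each AGC
# cycle (~4 s): tie-line flow deviation plus a frequency-bias term, then a
# correction spread across participating generators.
freq_nominal = 60.0          # Hz
bias = -120.0                # assumed frequency bias, MW per Hz (negative by convention)

f_measured = 59.98           # Hz
tie_scheduled = 400.0        # MW net scheduled export
tie_actual = 415.0           # MW net actual export

ace = (tie_actual - tie_scheduled) - bias * (f_measured - freq_nominal)
# ace = 15 - 2.4 = 12.6 MW of over-generation that the area should remove

participation = {"plant_A": 0.6, "plant_B": 0.4}   # assumed participation factors
gain = 0.5                                          # fraction of ACE corrected per cycle
adjustments = {name: -gain * ace * pf for name, pf in participation.items()}
print("ACE (MW):", round(ace, 1))
print("Set-point adjustments (MW):", {k: round(v, 1) for k, v in adjustments.items()})
```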

Protection schemes that are used to isolate faults rely on local measurements to make fast decisions, supplemented by remote information delivered over communications links to improve the accuracy of those decisions. Various teleprotection schemes and technologies have been developed over the past several decades to achieve improved reliability by leveraging available communications technologies. In addition, microprocessor-based protective relays have improved the selectivity and reliability of fault isolation and added advanced features such as fault location. One example is the ability to exploit traveling-wave phenomena, which provide better accuracy than traditional impedance-based fault location methods.

All of the methods described above have one thing in common: judicious use of communications. For historical reasons, when communications were relatively expensive and unreliable, more emphasis was placed on local measurements for protection and control, and communications were used to augment this local decision making. With the advent of less expensive (and more reliable) communication technologies, such as fiber-optic links installed on transmission towers, new distributed control strategies are beginning to emerge. Additionally, classical control approaches are being challenged by the increased complexity of distribution networks, with more distributed generation, storage, demand response, automatic feeder switching, and other technologies that are dramatically changing the distribution control landscape. It will soon no longer be possible to control the power system with the control approaches that are in use today (Hawaiian Electric Company, Inc., “Issues and Challenges,” http://www.hawaiianelectric.com/heco/Clean-Energy/Issues-and-Challenges).

Perhaps the biggest challenge underlying the mathematical and computational requirements for this research is that any evolution from today’s operating and control practices cannot rely on best-effort methods; instead, guaranteed performance (theoretical and tested) will be required if any new methods unfamiliar to system operators are to be deployed. Today there is very little theoretical foundation for mathematical and computational methods capable of meeting provable performance goals over a wide range of operating conditions. More specifically, to arrive at the new mathematical and computational methods needed for the power system, one must recognize that the power system is a very large-scale, complex, nonlinear dynamic system with multiple time-varying interdependencies.

EFFECTS OF CLIMATE CHANGE

Many of the assumptions associated with the long-term operation of the electricity infrastructure are based on climatic conditions that prevailed in the past century. Climate change appears likely to alter some of those basic planning assumptions. If policy changes are made to mitigate carbon emissions, parallel changes to the entire power generation infrastructure and to the transmission infrastructure connecting our sources of electricity supply will be necessary. This raises institutional issues, such as the availability of capital to accommodate these changes and policies for how to recover the costs of the investments. The traditional utility business model would need to change to accommodate these developments.

If the average intensity of storms increases, or if weather events become more severe (hotter summers and/or colder winters), basic assumptions about the cost effectiveness of design trade-offs underlying the electric power infrastructure would need to be revisited. Examples include how much the system is hardened against wind or water damage, the degree of redundancy included to accommodate extreme events, and the extent to which dual-fueled power plants are required to minimize dependency on natural gas.

MATHEMATICAL AND COMPUTATIONAL CHALLENGES IN GRID ARCHITECTURES

At present, the system is operated according to practices whose theoretical foundations require reexamination. In one such practice, industry often uses linearized models in order to sidestep nonlinear temporal dynamics. For example, local decentralized control relies on linear controls with constant gain. While these designs are simple and straightforward, they lack the ability to adapt to changing conditions and are only valid over the range of operating conditions that their designers could envision. If the grid is to operate in a stable way over large ranges of disturbances or operating conditions, it will be necessary to introduce a systematic framework for deploying more sensing and control to provide a more adaptive, nonlinear-dynamics-based control strategy. Similarly, to cope with nonlinear spatial complexity, the system is often modeled as weakly interconnected subsystems with stable and predictable boundary conditions between them, while assuming that only fast controls are localized. Thus, system-level models used in computer applications to support various optimization and decision-support functions generally assume steady-state conditions subject to linear constraints. As power engineers know, this simplifying assumption is sometimes not valid.

Other open mathematical and computational challenges include integrating more nondispatchable generation into the system and better optimizing the adjustment of devices and control systems. These opportunities for advancing the state of the art in computing could be thought of as “deconstraining technologies”: the nonlinear ac optimal power flow can be used to help reduce the risk of voltage collapse and enable lines to be used within broader limits; FACTS, HVDC lines, and storage technology can be used to eliminate stability-related line limits; and so on.

The problem of unit commitment and economic dispatch subject to plant ramping rate limits needs to be revisited in light of emerging technologies. It is important to recognize that ramping rate limits result from constraints in the energy conversion process in the power plant. But these are often modeled as static predefined limits that do not take into account the real-time conditions in the actual power generating facility. This is similar to the process that establishes thermal line limits and modifies them to account for voltage and transient stability problems.

As the dynamic modeling, control, and optimization of nonlinear systems mature, it is important to model the actual dynamic process of energy conversion and to design nonlinear primary control of energy conversion for predictable input-output characteristics of the power plants.

In closing, instead of considering stand-alone computational methods for enhancing the performance of the power system, it is necessary to understand end-to-end models and the mathematical assumptions made for modeling different parts of the system and their interactions. The interactions are multi-temporal (dynamics of power plants versus dynamics of the interconnected system, and the role of control); multi-spatial (spanning local to interconnection-wide); and contextual (i.e., performance objectives). It will be necessary to develop a systematic framework for modeling and to define performance objectives and control/optimization of different system elements and their interactions.

MATHEMATICAL AND COMPUTATIONAL CHALLENGES IN LOCAL DISTRIBUTION GRID ARCHITECTURES

Today transmission and distribution are often planned and operated as separate systems. The fundamental assumption is that the transmission system will provide a prescribed voltage at the substation, and the distribution system will deliver the power to the individual residential and commercial customers. Historically, there is very little feedback between these separate systems beyond the transmission system operator needing to know the amount of power that needs to be delivered and the distribution system operator knowing what voltage to expect. It has been increasingly recognized, however, that as different types of distributed energy resources, including generation, storage, and responsive demand, are embedded within the distribution network, different dynamic interactions between the transmission and distribution infrastructure may occur. One example is the transient and small-signal stability issues of distributed generation that changes the dynamic nature of the overall power system. It will be important in the future to establish more complete models that include the dynamic interactions between the transmission and distribution systems.

In addition, there is a need for better planning models for designing the sustainable deployment and utilization of distributed energy resources. It is critical to establish such models to support the deployment of nondispatchable generation, such as solar, alongside other types of distributed energy resources and responsive demand strategies. To illustrate the fundamental lack of modeling and design tools for these highly advanced distribution grids, consider a small, real-world, self-contained electric grid on an island. Today’s sensing and control are primarily placed on controllable conventional power plants, since they are considered to be the only controllable components. Shown in Figure 5.2a is the actual grid, comprising a large diesel power plant, a small controllable hydro plant, and a wind power plant. Following today’s modeling approaches, this grid gets reduced to the simplified grid shown in Figure 5.2b, in which the distributed energy resources are balanced with the load. Moreover, if the renewable plants (hydro and wind) are represented as a predictable negative load with superposed disturbances, the entire island is represented as a single dynamic power plant connected to the net island load (Figure 5.2c).

In contrast with today’s local grid modeling, consider the same island grid in which all components are kept and modeled explicitly (see Figure 5.3). The use of what is known as advanced metering infrastructure (AMI) allows information about end-user electricity usage to be collected on an hourly (or more frequent) basis. Different models are needed to exploit this AMI-enabled information to benefit the operating procedures used by the distribution system operator (DSO) in charge of providing reliable, uninterrupted electricity service to the island. Notably, the same grid becomes much more observable and controllable. Designing an adequate SCADA architecture for integrating more PV and wind power generation, and ultimately retiring the main fossil power plants, requires such new models. Similarly, communication platforms and computing for decision making and automation on the island require models that are capable of supporting provable quality-of-service and reliability metrics. This is particularly important for operating the island during equipment failures and/or unexpected variations in power produced by the distributed energy resources. The isolated grid must remain resilient and have enough storage or responsive demand to ride through interruptions in available power generation without major disruptions. Full distribution automation also includes reconfiguration and remote switching.

MATHEMATICAL AND COMPUTATIONAL CHALLENGES IN MANAGING INTERDEPENDENCIES BETWEEN THE TRANSMISSION AND LOCAL DISTRIBUTION GRIDS/MICROGRIDS

Based on the preceding description of representative power grid architectures, it is fairly straightforward to recognize that different grid architectures present different mathematical and computational challenges for existing methods and practices. These new architectures are multi-scale systems that range temporally between relatively fast transient-stability-level dynamics and slower optimization objectives. They are also nonlinear dynamical systems, where today’s practice is to rely on linear approximations, and they exhibit large-scale complexity, making it difficult to completely model or fully understand all of the nuances that can occur, if only infrequently, during off-normal system conditions but that must be withstood in order to maintain reliable operation at all times.

In all these new architectures the tendency has become to embed sensing/computing/control at a component level. As a result, models of interconnected systems become critical to support communications and information exchange between different industry layers. These major challenges then become a combination of (1) sufficiently accurate models relevant for computing and decision making at different layers of such complex, interconnected grids, (2) sufficiently accurate models for capturing the interdependencies/dynamic interactions, and (3) control theories that can accommodate adaptive and robust distributed, coordinated control. Ultimately, advanced mathematics will be needed to design the computational methods to support various time scales of decision making, whether it be fast automated controls or planning design tools.

There is a balance to be struck between the security and financial incentives to keep data confidential on the one hand and researchers’ need for access to data on the other. The path proposed here is to create synthetic data sets that retain the salient characteristics of confidential data without revealing sensitive information. Because developing ways to do this is in itself a research challenge, the committee gives one example of recent work to produce synthetic networks with statistical properties that match those of the electric grid. Ideally, one would like to have real-time, high-fidelity simulations for the entire grid that could be compared to current observations. However, that hardly seems feasible any time soon: computer and communications resources are too limited, loads and intermittent generators are unpredictable, and accurate models are lacking for many devices that are part of the grid. The section “Data-Driven Models of the Electric Grid” discusses ways to use the extensive data streams that are increasingly available to construct data-driven simulations that extrapolate recent observations into the future without a complete physical model. Not much work of this sort has yet been done: most attempts to build data-driven models of the grid have assumed that it is a linear system. However, there are exceptions that look for warning signs of voltage collapse by monitoring generator reactive power reserves.

SYNTHETIC DATA FOR FACILITATING THE CREATION, DEVELOPMENT, AND VALIDATION OF NEW POWER SYSTEM TOOLS FOR PLANNING AND OPERATIONS

Data of the right type and fidelity are the bedrock of any operational assessment or long-range planning for today’s electric power system. In operations, assessment through simulation and avoidance of potentially catastrophic events by positioning a system’s steady-state operating point based on that assessment is the mantra that has always led to reliability-constrained economical operation. In the planning regime, simulation again is key to determining the amount and placement of new generation, transmission, and distribution.

The data used to achieve the power industry’s remarkable record of universal availability of electricity has been relatively simple compared to future data needs, which will be characterized by a marked increase in uncertainty, the need to represent new disruptive technologies such as wind, storage, and demand-side management, and an unprecedented diversity in policy directions and decisions marked by a tension between the rights of states and power companies versus federal authority. The future grid is likely to be characterized by a philosophy of command and control rather than assessment and avoidance, which will mean an even greater dependence on getting the data right.

The U.S. electric power system is a critical infrastructure, a term used by the U.S. government to describe assets critical to the functioning of our society and economy. The Patriot Act of 2001 defined critical infrastructure as “systems and assets, whether physical or virtual, so vital to the United States that the incapacity or destruction of such systems and assets would have a debilitating impact on security, national economic security, national public health or safety, or any combination of those matters.” Although the electric grid is perhaps the most critical of all the critical infrastructures, much of the data needed by researchers to test and validate new tools, techniques, and hypotheses is not readily available to them because of concerns about revealing too much about critical infrastructures.

The electric industry perspective is that actual electric grid data are too sensitive to freely disseminate, a claim that is clearly understandable and justifiable. Network data are especially sensitive when they reveal not only the topology (specific electrical connections and their locations) but also the electrical apparatuses present in the network along with their associated parameters. Revealing these data to knowledgeable persons reveals information an operator would need to know to ensure a network is reliable as well as the vulnerabilities an intruder would like to know in order to disrupt the network for nefarious purposes.

There is also some justifiable skepticism that synthesized data might hide important relations that a direct use of the confidential data would reveal. This makes the development of a feedback loop from the synthetic data to the confidential data essential to develop confidence in the products resulting from synthetic data and to ensure their continuous improvement. A natural question is therefore what, if anything, can be done to alter realistic data so as to obtain synthetic data that, while realistic, do not reveal sensitive details.  Hesitation to reveal too much data might also indicate a view of what problems need to be solved that differs from the committee’s view.

It is clear that the availability of realistic data is pressing, critical, and central to enabling the power engineering community to rely on increasingly verifiable scientific assessments. In an age of Big Data such assessments may become ever more pressing, perhaps even mandatory, for effective decision making.

Recommendation: Given the critical infrastructure nature of the electric grid and the critical need for developing advanced mathematical and computational tools and techniques that rely on realistic data for testing and validating those tools and techniques, the power research community, with government and industry support, should vigorously address ways to create, validate, and adopt synthetic data and make them freely available to the broader research community.

Using recent advances in network analysis and graph theory, many researchers have applied centrality measures to complex networks in order to study network properties and to identify the most important elements of a network. Real-world power grids change continuously, and the most dramatic evolution of the electric grid in the coming 10 to 20 years will probably be seen on both the generation side and the smart-grid demand side. Evolving random-topology grid models would be significantly more useful if, among other things, they included realistic generation and load settings with dynamic evolution features that truly reflect ongoing changes in generation and load.
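
As a toy example of the centrality analyses mentioned above, the sketch below scores a stand-in "grid-like" graph (a sparse ring with a few random shortcuts, not a real or validated synthetic grid model) by betweenness centrality to flag heavily traversed buses and lines.

```python
# Centrality sketch on a stand-in "grid-like" graph: a sparse ring with a few
# long-distance ties, scored by betweenness centrality to flag the buses and
# lines most traversed by shortest paths.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=60, k=4, p=0.05, seed=42)  # stand-in topology

node_bc = nx.betweenness_centrality(G)
edge_bc = nx.edge_betweenness_centrality(G)

top_buses = sorted(node_bc, key=node_bc.get, reverse=True)[:5]
top_lines = sorted(edge_bc, key=edge_bc.get, reverse=True)[:5]
print("Most central buses:", top_buses)
print("Most central lines:", top_lines)
```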

As conditions vary, set points of controllable equipment are adjusted by combining an operator’s insights about the grid response and the results of optimization given an assumed forecast. If done right, system operators do not have to interfere with the automation: Their main task is to schedule set points given the forecasts. Fast dynamic transitions between new equilibria are stabilized and regulated by the primary controllers. Beyond this primary control of individual machines, there are two qualitatively different approaches to ensuring stable and acceptable dynamics in the changing power industry:

  • The first approach meets this goal of ensuring stable and acceptable dynamics via coordinated action of the system operators. Planners will attempt to embed sensing, communications, and controllers sufficient to guarantee system stability for the range of operating conditions of interest. This is an ambitious goal that faces theoretical challenges. For example, maintaining controllability and observability with increased numbers of sensors and controllers is a challenge given the current state of primary control. It seems feasible that current technologies will allow meeting performance objectives, which are now constrained by requirements for synchronization and voltage stabilization/regulation. As mechanically switched transmission and distribution equipment (phase angle regulators, online tap changers, and so forth) is replaced by electronic devices—flexible ac transmission systems, high-voltage dc transmission lines, and the like—the complexity of the control infrastructure needed for provable performance in a top-down manner is likely to become overwhelming. In particular, variable-speed drives for efficient utilization of power are likely to interfere with the natural grid response and the existing control of generators, transmission, and distribution equipment.
  • The second approach is the design of distributed intelligent Balancing Authorities (iBAs) and protocols/standards for their interactions. As discussed in Chapter 1, automatic generation control is a powerful automated control scheme and, at the same time, one of the simplest. Each area is responsible for coordinating its resources so that its frequency is regulated within acceptable limits and deviations from the scheduled net power exchange with the neighboring control areas are corrected accordingly. A closer look at this scheme reveals that it is intended to regulate frequency in response to relatively slow disturbances, under the assumption that the primary control of power plants has done its job in stabilizing the transients.

It is possible to generalize this notion into something that may be referred to as an iBA, which has full responsibility for stabilization and regulation of its own area. Microgrids, distribution networks, portfolios (aggregates) of consumers, portfolios of renewable resources, and storage are examples of such areas. It is up to the grid users to select or form an iBA so that it meets stability and regulation objectives on behalf of its members. The operator of a microgrid is responsible for the distributed energy resources belonging to an area: The microgrid must have sufficient sensing, communications, and control so that it meets the performance standard. This is much more doable in a bottom-up way, and it would resemble the enormously successful Transmission Control Protocol/Internet Protocol (TCP/IP). Many open questions remain about creating a more streamlined approach to ensuring that the emerging grid has acceptable dynamics. For example, there is a need for algorithms to support iBAs by assessing how to change control logic and communications of the existing controllers to integrate new grid members.

The contrast between these two approaches reflects the tension between centralized and distributed control. Because experiments cannot be performed regularly on the entire grid, computer models and simulations are used to test different potential architectures. One goal is to design the system to be very, very reliable, minimizing both the number and the size of power outages. The problem of cascading failures looms large here. The large blackouts across the northeastern United States in 1965, 1977, and 2003 are historical reminders that this is a real problem. Since protective devices are designed to disconnect buses of the transmission network in the event of large fault currents, an event at one bus affects others, especially those connected directly to the first bus. If this disturbance is large enough, it may trigger additional faults, which in turn can trigger still more. The N – 1 stability mandate has been the main strategy to ensure that this does not happen, but it has not been sufficient as a safeguard against cascading failures. The hierarchy of control for the future grid should include barriers that limit the spread of outages to small regions.
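
A deliberately crude way to see how thin margins let one failure propagate is a Motter-Lai-style cascade on a graph, sketched below: betweenness stands in for flow, each node's capacity is a margin above its initial load, and anything pushed over capacity fails in the next round. This ignores the actual physics of power flow entirely and is only meant to illustrate the qualitative point that larger margins contain cascades.

```python
# Simplified cascading-failure sketch (Motter/Lai-style): betweenness stands in
# for flow, each node gets capacity = (1 + margin) * initial load, one node is
# removed, and any node pushed over its capacity fails in the next round.
import networkx as nx

def cascade(margin, seed_failure=0, n=100):
    G = nx.connected_watts_strogatz_graph(n=n, k=4, p=0.05, seed=1)
    load0 = nx.betweenness_centrality(G, normalized=False)
    capacity = {v: (1 + margin) * load0[v] for v in G}
    G.remove_node(seed_failure)
    failed = 1
    while True:
        load = nx.betweenness_centrality(G, normalized=False)
        overloaded = [v for v in G if load[v] > capacity[v]]
        if not overloaded:
            return failed
        G.remove_nodes_from(overloaded)
        failed += len(overloaded)

for margin in (0.05, 0.2, 0.5):
    print(f"margin {margin:.2f}: {cascade(margin)} nodes lost")
```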

PHYSICS-BASED SIMULATIONS FOR THE GRID

How can mathematics research best contribute to simulation technology for the grid? Data-driven models, described in “Data-Driven Models of the Electric Grid” earlier in this chapter, begin with a functioning network. Moreover, they cannot address questions of how the grid will respond when subjected to conditions that have never been encountered. What will be the effects of installing new equipment? Will the control systems be capable of maintaining stability when steam-driven generators are replaced by intermittent renewable energy resources? Simulation of physics-based models is the primary means for answering such questions, and dynamical systems theory provides a conceptual framework for understanding the time-dependent behavior of these models and of the real grid. Simulation is an essential tool for grid planning and for the design of its controls. In normal steady-state operating conditions, these simulations may fade into the background, replaced by a focus on optimization that incorporates constraints based on the time-dependent analysis. Within power systems engineering, this type of modeling and simulation includes transient stability (TS) analysis.
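
The classic starting point for such physics-based simulation is the swing equation. The sketch below integrates a single machine against an infinite bus through a brief fault, with all per-unit parameters invented for illustration; production transient-stability tools solve thousands of coupled equations of this general form together with network and controller models.

```python
# Physics-based simulation sketch: the swing equation for a single machine
# against an infinite bus,  M * delta'' = Pm - Pmax*sin(delta) - D * delta',
# integrated through a brief fault that temporarily lowers Pmax.
import numpy as np
from scipy.integrate import solve_ivp

M, D = 0.1, 0.02          # assumed inertia and damping (per unit)
Pm = 0.8                  # mechanical power input (per unit)

def pmax(t):              # electrical transfer limit: depressed during a fault
    return 0.4 if 1.0 <= t < 1.15 else 1.8

def swing(t, y):
    delta, omega = y                       # rotor angle (rad) and speed deviation
    return [omega, (Pm - pmax(t) * np.sin(delta) - D * omega) / M]

delta0 = np.arcsin(Pm / 1.8)               # pre-fault equilibrium angle
sol = solve_ivp(swing, (0.0, 5.0), [delta0, 0.0], max_step=0.005)

stable = np.max(np.abs(sol.y[0])) < np.pi  # crude check: angle stays bounded
print("Max rotor angle (deg):", round(np.degrees(np.max(sol.y[0])), 1))
print("Rides through the fault:", stable)
```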

Creating Hybrid Data/Human Expert Systems for Operations

When a serious problem occurs on the power grid, operators might be overloaded with alarms, and it is not always clear what the highest priority action items should be. For example, a major disturbance could generate thousands of alarms. Certainly much work has been done over the years in helping operators handle these alarms and more generally maintain situation awareness, with Panteli and Kirschen (2015) providing a good overview of past work and the current challenges. However, still more work needs to be done. The operators need to quickly find the root cause of the alarms. Sometimes “expert systems” are used, whereby experts write down a list of handcrafted rules for the operators to follow.
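
As a toy illustration of the "handcrafted rules" idea, the sketch below applies two invented rules to a handful of invented alarms and returns prioritized findings; real alarm processors handle thousands of alarms along with topology and timing information.

```python
# Toy expert-system sketch: hand-written rules that group and rank a flood of
# alarms so the operator sees a probable root cause rather than raw messages.
alarms = [
    {"device": "XFMR-12", "type": "breaker_open", "station": "Elm"},
    {"device": "LINE-7",  "type": "undervoltage", "station": "Elm"},
    {"device": "LINE-7",  "type": "undervoltage", "station": "Oak"},
    {"device": "XFMR-12", "type": "relay_trip",   "station": "Elm"},
]  # assumed example alarms

RULES = [
    # (condition over the alarm set, diagnosis, priority: lower = more urgent)
    (lambda a: any(x["type"] == "relay_trip" for x in a)
               and any(x["type"] == "breaker_open" for x in a),
     "Probable equipment trip at {station}: inspect protection targets first", 1),
    (lambda a: sum(x["type"] == "undervoltage" for x in a) >= 2,
     "Widespread undervoltage: check for a lost transmission path", 2),
]

def triage(alarms):
    findings = []
    station = alarms[0]["station"] if alarms else "?"
    for cond, msg, prio in RULES:
        if cond(alarms):
            findings.append((prio, msg.format(station=station)))
    return [m for _, m in sorted(findings)]

for msg in triage(alarms):
    print(msg)
```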

CHALLENGES IN MODELING THE ELECTRIC GRID’S COUPLING WITH OTHER INFRASTRUCTURES

A reliable electric grid is crucial to modern society in part because it is crucial to so many other critical infrastructures. These include natural gas, water, oil, telecommunications, transportation, emergency services, and banking and finance (Rinaldi et al., 2001). Without a reliable grid many of these other infrastructures would degrade, if not immediately then within hours or days as their backup generators fail or run out of fuel. However, this coupling goes both ways, with the reliable operation of the grid dependent on just about every other infrastructure, with the strength of this interdependency often increasing.

Rinaldi SM et al (2001) Identifying, understanding and analyzing critical infrastructure interdependencies. IEEE Control Systems Magazine, December 2001: 11-25.

For example, PNNL (2015) gives a quite comprehensive coverage of the couplings between the grid and the information and communication technology (ICT) infrastructure. The coupling between the grid and natural gas systems, including requirements for joint expansion planning, is presented in Borraz-Sanchez et al. (2016). The interdependencies between the electric and water infrastructures are shown in Sanders (2014) with a case study for the Texas grid presented in Stillwell et al. (2011). While some of these couplings are quite obvious, others are not, such as interrelationships between the grid and health care systems in considering the vulnerability of the grid to pandemics (NERC, 2010). The rapidly growing coupling between electricity and electric vehicle transportation is presented in Kelly et al. (2015).

PNNL (2015) The Emerging Interdependence of the Electric Power Grid and Information and Communication Technology. Pacific Northwest National Laboratory, PNNL-24643, August 2015.

Models that represent coupling between the grid and gas, water, transportation, or communication will almost certainly include hierarchical structures characterized by a mixture of discrete and continuous variables whose behavior follows nonlinear, nonconvex functions at widely varying time scales. This implies that new approaches for effectively modeling nonlinearities, formulating nonconvex optimization problems, and defining convex subproblems would be immediately relevant when combining different infrastructures.


Posted in Electric Grid & Fast Collapse, Grid instability, Renewable Integration, Smart Grid | Comments Off on Wanted: Math geniuses and power engineers to make a renewable grid possible

Reducing pesticides with crop diversity

Preface. Pesticides are the main cause of the insect apocalypse, which reverberates up the food chain, leading to loss of biodiversity and extinction. Pesticides are made out of oil, which probably peaked globally in 2018, and a pesticide typically remains effective for only about 5 years before pests develop resistance. So we have to get rid of them — it’s past time we looked for alternatives, especially since they will inevitably stop working or stop being manufactured as petroleum grows scarce. Rachel Carson warned us about this in 1962 in her book Silent Spring, and six decades later we’ve done little to solve it.

About 400 different pesticides are being used in the U.S., and 150 of them are considered hazardous to human health according to the World Health Organization. The U.S. Geological Survey estimated that at least one billion pounds of agricultural pesticides were used in 2017. Of that, about 60%—or more than 645 million pounds—were hazardous to human health, according to the WHO’s data (Acharya 2020).

Another method of reducing pesticides, fertilizer, and water is intercropping — the simultaneous cultivation of multiple crops on a single plot of land, which can significantly increase yields. Farmers have applied intercropping for as long as we can remember. Intercropping appears to give a 16-29% larger yield per unit area than monocultures under the same circumstances, while using 19-36% less fertilizer (Li et al 2020).

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Larsen AE et al (2020) Impact of local and landscape complexity on the stability of field-level pest control. Nature Sustainability.

Larsen and Noack scoured Kern County records from 2005 through 2017, focusing on factors such as field size and the amount and diversity of cropland. They found that more cropland and larger fields generally increase the amount and variety of pesticides applied, while crop diversity has the opposite effect.

As field size increases, the area gets larger more quickly than the perimeter, while smaller fields have proportionally larger perimeters. And a larger perimeter may mean more spillover from nearby predators like birds, spiders and ladybugs that eat agricultural pests.
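
For idealized square fields the arithmetic behind this is simple, as the short sketch below shows with assumed side lengths: doubling the side length quadruples the area but only doubles the perimeter, so the meters of pest-suppressing edge per hectare fall by half.

```python
# Edge-to-interior scaling for idealized square fields: perimeter grows with
# side length L, area with L**2, so the edge-to-area ratio shrinks as 4/L.
for side_m in (100, 200, 400, 800):                 # assumed field side lengths, meters
    area_ha = side_m**2 / 10_000                    # hectares
    edge_per_ha = (4 * side_m) / area_ha            # meters of edge per hectare
    print(f"{side_m:>4} m square field: {area_ha:>4.0f} ha, "
          f"{edge_per_ha:>5.1f} m of edge per hectare")
```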

Smaller fields also create more peripheral habitat for predators and competitors that can keep pest populations under control. And since the center of a smaller field is closer to the edge, the benefits of peripheral land in reducing pests extend proportionally farther into small fields.

Landscapes with diverse crops and land covers also correlated with reduced pesticide variability and overall use. Different crops in close proximity foster a variety of different pests. Though this may sound bad, it actually means that no single species will be able to multiply unimpeded.

When a single crop is grown over a wide area, a large pest outbreak is hard to stop because the pest has access to an almost unlimited food supply.

References

Acharya P (2020) The United States still uses many pesticides banned in other countries. The Counter

Larsen AE et al (2017) Identifying the landscape drivers of agricultural insecticide use leveraging evidence from 100,000 fields. PNAS.  www.pnas.org/cgi/doi/10.1073/pnas.1620674114

Li C et al (2020) Syndromes of production in intercropping impact yield gains. Nature Plants. A simpler explanation is available at phys.org.

Posted in Biodiversity Loss, Farming & Ranching, Pesticides | Comments Off on Reducing pesticides with crop diversity

Coming Food Crises from Drought

Preface. As climate change heats the planet, and groundwater depletes from aquifers that won’t be recharged until after the next ice age, it’s clear that food crises from drought (and many other problems) will be upon us soon. As long as we have diesel fuel, food supplies can be brought in from other parts of the country and world, but at some point of oil decline and localization, drought will be much more of a problem than it is now.

Alice Friedemann   www.energyskeptic.com  author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, 2021, Springer; “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Huning LS, AghaKouchak A (2020) Global snow drought hot spots and characteristics. PNAS 117: 19753-19759.

Researchers studied the effects of snow droughts on water supplies worldwide from 1980 to 2018. Snowmelt provides freshwater to more than a billion people, one sixth of the world’s population. Water from melting snow irrigates the crops of farming regions, including areas that seldom if ever receive any snow during the winter, such as California’s Central Valley. Snow-water deficits increased 28% in the Western U.S. during the second half of the study period, and, to a lesser extent, in eastern Russia and Europe.

Lobell DB, Deines JM, Di Tommaso S (2020) Changes in the drought sensitivity of US maize yields. Nature Food.

The U.S. Corn Belt’s high crop yields conceal a growing vulnerability. Although yields have increased overall due to new technologies and management approaches, crops are becoming significantly more sensitive to drought conditions.

When corn crops succumb to drought, this not only affects food prices and availability, but also ethanol production. One of the reasons crops are becoming more susceptible to drought is that soils are able to hold less water than in the past.

Nabhan GP (2013) Our Coming Food Crisis. New York Times.

Long stretches of triple-digit days out West are getting more common, and that will threaten our food supply. 2012 was the hottest year in American history. Half of all counties in the United States were declared national drought disaster areas, and 90% of those counties were devastated by heat waves as well.

The 17 Western states account for nearly 40% of farm income, and current and future heat waves will reduce the amount of food produced. One cause is that overheated crops need a lot more water. After several years of drought, both surface and groundwater supplies have diminished, and energy costs have gone way up because water must be pumped from much deeper levels.

This means food costs are going to go up at a time when 1 in 6 people are already on food stamps and having a hard time making ends meet.

Strategies to cope, such as promoting locally produced compost to hold moisture in the soil of row crops and orchards, have been blocked from being added to the current farm bill. Compost also adds carbon and increases yields: increasing organic matter from 1 to 5% can increase water storage in root zones from 33 pounds to 195 pounds per cubic meter. Cities could provide enormous amounts of compost, but most green waste ends up in landfills, where it generates the greenhouse gas methane.

Posted in Drought | Comments Off on Coming Food Crises from Drought