American Arsenal: A Century of Waging War by Patrick Coffey


Preface. These are my notes from “American Arsenal: A Century of Waging War” (2013) by Patrick Coffey. Absolutely horrifying, especially the chemical warfare. Here’s the Publishers Weekly blurb of what this book is about:

“Science historian Coffey surveys the history of American military weapons development since WWI, focusing on the interactions between the military, science, industry, and politicians in developing key weapons systems. “Scientists and inventors were active participants” in WWI, an entirely new development in conducting warfare. Coffey highlights several major types of weapons, including chemical munitions, bombers and bomb-sights, nuclear warheads, and the M-16 rifle. He also notes challenges to effective weapons development, such as the exaggerated claims made by the Army Air Force in WWII of pickle-barrel accuracy for its bombers; a lack of comprehensive military understanding of science, as was the case in the early development of chemical weapons; inter-service rivalries that impede effectiveness and efficiency while raising costs; and the influence political expediency has on funding. By no means comprehensive, the book deals with only a handful of weapons systems, some of which are notable due to controversies and problems attached to them. Nonetheless, Coffey delivers an interesting book that introduces the general reader to a little-known perspective on military history.”

These excerpts will give you more of an idea, but they are just bits and pieces; for a coherent narrative, read the book.

Grim as this book may be, the “bright side” is that post-fossil-fuel armies won’t be able to cause such harm as the number of airplanes, tanks, and other diesel-fueled vehicles declines. Let’s hope, though, that nuclear weapons disappear before the worst of the final wars over resources begin.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity

[Let’s start out with the most “amusing” part of the book before the grim stuff.  What could possibly go wrong with this plan?]

In January 1942 Adams sent the idea directly to FDR, beginning with the observation that “the lowly bat is capable of carrying enough incendiary material to ignite a fire”. His plan was to attach small firebombs to millions of bats and release them over Japanese cities, where they would roost in every attic. After a suitable delay, the bombs would detonate, igniting all of urban Japan. It was all thought out: bats hibernate during the winter, so they could be easily collected, equipped with bombs, and warmed up just before release. And since bats weigh less than one-half ounce … approximately 200,000 bats could be transported in just one airplane.

But before this could happen, questions needed to be answered: Could an incendiary bomb be made that was small enough for a bat to carry? How could millions of bombs be attached to bats? Could bats be brought in and out of hibernation at will? How would the time-delay fuses work? What would keep them from triggering early and incinerating the bomber carrying the bats? How would two hundred thousand bat firebombs be stored and then dropped? No one asked any questions about the ethics of capturing millions of bats, stapling firebombs to their chests, dropping them from bombers, and incinerating them.

Von Bloeker volunteered to test the load-carrying capability of a bat; to everyone’s surprise, healthy Mexican free-tail bats, which could be found in the millions in Carlsbad Caverns, could each carry fifteen to eighteen grams—more than their own weight—and still fly. That set an upper limit on the weight of the incendiary. Fieser and his team constructed a small, pencil-like napalm bomb with a delayed chemical trigger that could be set by a syringe injection.

The first tests were scheduled for May 1943 at Muroc Army Air Base (now Edwards Air Force Base) near Los Angeles. The goal was to test bats’ load-carrying capacity and the altitude at which they would come out of hibernation. The plan was to capture 3,000 bats at Carlsbad, fly them in a B-25 bomber to Muroc, keep them in a hibernated state in refrigerated trucks overnight, attach dummy bombs to their chests, and drop them at a series of different altitudes the next morning. Fieser’s report of the test: Everything went off on schedule, and shortly after dinner the bomber flew in loaded with kicking, shrieking bats. … The crates were loaded onto the truck and the refrigeration turned on full tilt. But the howling went on without abate for a couple of hours, and it became evident that the refrigeration unit was not adequate to cope with such a large amount of body heat all of a sudden. So we mounted a series of fans in position to blow air in over cakes of ice. Finally, about midnight, the noise ceased; hibernation had been accomplished. … A first batch of bats in hibernation with weights attached was dumped out of the bomber [the next morning] at low altitude. … Other batches were released from higher and higher altitudes. … Eventually it was clear that the bats were not in hibernation but dead. The cooling had been too efficient.

The next test was at a new auxiliary airfield at Carlsbad Army Air Base, much closer to the source of the bats. Because the test was top-secret, even the colonel who commanded the base was banned from the site, and the CWS ran the test behind locked gates. The bats were packed like eggs in specially designed crates, stacked for release by the bomber. It all went like clockwork. After the bats dropped, they came out of hibernation and flew.

If the test had ended then, it would have been a success. The story according to Couffer: Then Fieser said he wanted the photographic record of bat bombs going off in various realistic situations, “with complete verisimilitude,” as he put it. … [H]e also asked the photographers to shoot some pictures of himself with the bats and their attached bombs. … We attached … unarmed capsules of napalm to half a dozen [hibernating] bats for Fieser to have his fun. Fieser [injected] one capsule after another until all the bats were armed. … Once injected, the capsule became a ticking bomb, a firecracker with a short fuse. Then … all the bats simultaneously came to life. “Hey!” I heard Fieser shout. “They’re becoming hyperactive. Somebody! Quick! Bring a net!” By the time I got there with a hand net, Fieser and the two photographers were staring into the sky. … Exactly fifteen minutes after arming, a barracks burst into flames, minutes later the tall tower erupted into a huge candle visible for miles. Offices and hangars followed in order corresponding to the intervals between Fieser’s chemical injections.” Because the bat-drop bombing tests had been run with dummy bombs, no one had ordered firefighting equipment. The air base’s commanding colonel, who had been shut out from the tests, saw the smoke and appeared with three fire engines at the field’s padlocked gates, where he was told to go away.

Perhaps from embarrassment at the Carlsbad fiasco, Fieser tried to get the project killed. The AAF had had enough of bats. Nonetheless, the CWS persisted and managed to get the bat bomb transferred to the Navy, where it was renamed Project X-Ray. Burning down the airfield at Carlsbad should have been sufficient demonstration of the bat bomb’s effectiveness, but further tests were scheduled for the German-Japanese Village at Dugway.

Although Fieser had at first resisted the bat bomb, in his memoirs he mourned the cancellation of Project X-Ray. He imagined a silent night attack on Tokyo, each plane delivering thousands of bats—no explosions to give warning. Four hours later, “bombs in strategic and not easily detectable locations would start popping all over the city at 4 a.m.”

Nuclear War

The idea that a president can direct or control nuclear war is an illusion. First, if it ever comes to that, the president would very likely be dead or incommunicado. Second, the plans for nuclear war are so complex and intertwined that there are very few options—in 1961, there was only one: unleash every American weapon in what Air Force generals called a “Sunday punch.” Most presidents have shown little interest in nuclear strategy (Carter, trained as a nuclear engineer, was the exception).

Months after Kennedy took office, he asked for a demonstration of the “red telephone” from which he was to respond to a Soviet attack. No one could find it—it had been in Eisenhower’s desk drawer, but Jackie Kennedy had swapped desks when she redecorated the White House.  And the military certainly did not believe in civilian micromanagement. LeMay angrily told Assistant Secretary of Defense John Rubel, “Who needs the president if there’s a war? All we need him for is to tell us there is a war. We’re professional soldiers. We’ll take care of the rest.” Before the Cold War’s nuclear standoff, a president had time to remove incompetent, insubordinate, or unstable commanders, as Lincoln and Roosevelt had done earlier.

But a war with the Soviet Union in the 1960s would have lasted only a few hours, and a “Dr. Strangelove” scenario, in which a rogue general launched a nuclear attack, was entirely possible. General Tommy Power, in charge of the Strategic Air Command (SAC) from 1957 to 1964, was generally seen as the most likely to do so. He gleefully presented SAC’s plans to launch thirty-two hundred warheads to Secretary McNamara; even his superior LeMay called him a sadist, and his subordinate General Horace Wade said of him, “I used to worry that General Power was not stable. I used to worry about the fact that he had control over so many weapons and weapon systems and could, under certain conditions, launch the force. … SAC had the power to do a lot of things, and it was in his hands and he knew it.”

Inter- and intraservice rivalry is a repeated motif throughout the book. The most egregious cases: the Navy refused to release Norden bombsights, for which it had little use, to the Army Air Forces, who were attempting precision bombing of Germany in World War II; and the Army and the Air Force engaged in a wasteful missile race in the 1950s—not with the Soviet Union, but with each other.

Service traditions have often impeded the replacement of old weapons with new. The Navy hung on to its battleships even after they were shown to be vulnerable to bomber attack, because its battleship tradition went back to John Paul Jones and the Bonhomme Richard. The Air Force, whose generals rose from the ranks of combat pilots, resisted developing missiles because they threatened its bombers, and it still resists unmanned aerial vehicles because they threaten to make pilots obsolete altogether. As a result, weapons can persist long after they have been proven to be useless or obsolete.

Effective weapons demand to be used, even if they are unsupported by doctrine. Napalm is an example. Developed by Harvard chemist Louis Fieser in 1942, it made the firebombing of Tokyo and other Japanese cities an option, although attacking civilians was contrary to the Army Air Forces’ precision-bombing doctrine.

Then disaster. From the New York Times, January 16, 1916: “Hydrogen Leak Suspected; Interior of E-2 Wrecked; … Daniels Orders Inquiry.” The E-2 submarine had been rocked by an explosion while it was in dry dock in the Brooklyn Navy Yard. It had been testing the Edison battery. Four men were killed immediately, and another would die a few days later. Ten others were injured. As Hutchison and Edison claimed, the alkaline Edison battery could not emit chlorine. But if a cell of an Edison battery was reversed (that is, after full discharge, it was subjected to an external current in the direction of discharge), the cell’s water would decompose into hydrogen and oxygen—an explosive mixture, especially in the confines of a submarine.
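For reference, the failure mode described here is ordinary water electrolysis: a fully discharged cell driven backward by the rest of the series string acts as an electrolytic cell. A minimal sketch of the net chemistry (standard electrochemistry; the book does not spell out the equation):

$$2\,\mathrm{H_2O} \longrightarrow 2\,\mathrm{H_2} + \mathrm{O_2}$$

The gases evolve in the 2:1 ratio that is exactly stoichiometric for recombination, and hydrogen is flammable in air at concentrations of roughly 4 to 75 percent, so even modest gassing inside a sealed hull is dangerous.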

The Navy was not listening to him. Edison later told a reporter, “I made about 45 inventions during the war, all perfectly good ones, and they pigeon-holed every one of them.” He would send an idea to Daniels, Daniels would send it to someone in the Navy, and nothing would happen. Edison could not get the Navy to even explain what was needed. He complained that he was “pulling ideas out of the air” and wrote Daniels, “I am still without adequate information about submarine warfare in actual practice as no one … has given me any data of real value. Until I get some kind of data, I will have to depend on my imagination.” Edison’s greatest contribution was not his inventions but his common-sense analysis. He asked for information about submarine attacks, for example, and when he was told the data had not been compiled, he put his own analysis team together. In November 1917 he sent Daniels and the British Admiralty a report with graphs, charts, and forty-five maps. The conclusions he drew were straightforward. Most German submarine attacks were near French and British ports; if ships operated there only at night, they would be much less vulnerable. German subs also seemed to be lying in wait in prewar shipping lanes and near lighthouses, so those areas should be avoided. Merchant ships should be equipped with radios so that they could call for help from destroyers if attacked. Moreover, merchant ships’ old (and useless) sailing masts could be sighted by enemy subs from a great distance and should be removed. Smokeless anthracite coal should be burned in danger zones in order to reduce visibility, and lookouts should be stationed not on deck but at portholes near the water’s surface, where they could spot a sub’s periscope in profile against the sky. The Navy proved willing to listen to these, perhaps because Edison had given them to the British as well.

Concerning the E-2 explosion, the Navy was right: Hutchison had negligently ordered a procedure—the deep and rapid discharge of 240 battery cells in series, half the submarine’s complement—that was almost certain to emit hydrogen. Hutchison was right too: the Edison battery was safer than the lead-acid battery and not specifically to blame for the explosion, as a lead-acid battery (or any other wet-cell battery) subjected to that procedure would have emitted hydrogen and exploded in the same fashion. But Hutchison and Edison never seemed to understand that the technical cause of the E-2 explosion was immaterial, at least as far as the Navy was concerned. The Navy knew that Edison considered its officers to be ignorant martinets—he said so often enough. And when Hutchison blamed the explosion on the incompetence of the E-2’s captain, the Navy, which prided itself on its traditions and autonomy, closed ranks. Its officers saw Edison as an irrelevant meddler, Hutchison as a snake, and Daniels as a political hack. Edison was perhaps America’s greatest inventor, but he was woefully ignorant of the ways of the military.

The twentieth century would see this sort of misunderstanding repeated many times.

By the time the United States entered World War I, the Europeans had been gassing one another on the battlefield for two years. The American Army had no experience of chemical weapons. It should have worried about defense against gas attacks—training officers and individual soldiers, providing masks and decontamination gear, and familiarizing its medical staff with treatment of gas casualties—but it did not, and American soldiers would suffer as a result. Rather than concentrate on defense, the Army began a crash program to develop its own poison gas, a secret weapon that would force Kaiser Bill to his knees.

Gas was a horror, beginning with the first attack at Ypres in April 1915, when the Germans released chlorine gas from six thousand cylinders. When chlorine comes into contact with unprotected human tissue, it reacts immediately, burning the skin or the eyes if the exposure is prolonged or concentrated. When chlorine is inhaled, it corrodes the lungs, which fill with fluid. There is no antidote to chlorine poisoning—with moderate exposure, the body may heal itself, but if the exposure is severe, the victim drowns in his own fluid. One soldier described it as “an equivalent death to drowning only on dry land. The effects are these—a splitting headache and terrific thirst (to drink water is instant death), a knife-edge of pain in the lungs and the coughing up of a greenish froth off the stomach and the lungs, ending finally in insensibility and death. The color of the skin from white turns a greenish black and yellow, the tongue protrudes and the eyes assume a glassy stare. It is a fiendish death to die.”

Haber took charge of the German poison gas effort and developed gases that were even more lethal. Phosgene, sixteen times as deadly as the same amount of chlorine, was first used by the Germans and then quickly deployed by both sides. About the time that America entered the war in 1917, Haber developed dichlorodiethylsulfide, which was to become known as “mustard gas” because of its slight mustard-like odor. Unlike chlorine and phosgene, which had their principal effects upon the lungs, mustard was a blistering agent that caused skin burns, blindness, and internal and external bleeding. Soldiers often took four to five weeks to recover or die, putting a further load on the enemy’s medical services, and the pain was so bad that soldiers had to be strapped to their beds. Here was a far more terrifying weapon than chlorine. Because mustard attacked the skin, soldiers had to cover every inch of the body in a poncho during an attack. And mustard had another advantage—whereas phosgene and chlorine dissipated quickly, mustard was actually not a gas but a liquid that was sprayed as an aerosol. It was persistent, poisoning grass, plants, and the earth for days. It could be used to deny territory to the enemy, to support the flanks in an infantry advance, and to cover a retreat. Mustard was by far the most deadly agent used in the Great War.

The gas mask, especially the heavy British single-box respirator, was one more burden for the soldier to carry into battle. Soldiers in the trenches found themselves constantly sniffing for gas, and a soldier in a gas mask, even if it was functioning, was half blinded, unable to aim properly or to see peripherally.

Haber believed that chemical weapons were a natural stage in the evolution of warfare. Advances in the technology of artillery and machine guns had led armies to burrow into trenches; the next step was to develop chemical weapons, which would make those trenches uninhabitable.

Haber believed that gas was of greatest advantage to the most industrialized nations—the Germans were best at it, the British better than the French, and the Russians hopeless. He saw conventional warfare as a game like checkers and gas warfare as a game like chess—gas shells might contain two or even three agents, which forced the combatant armies to develop a new gas mask filter to block each new combination.

While the French and British deplored the Germans’ gas attacks in their propaganda, they were vague about the effects of gas because they did not want to scare the Americans off. By the spring of 1917, the Allies imposed a total news blackout on gas warfare because, in the words of the British assistant secretary of war, it might result in an “unreasonable dread of gases on the part of the American nation and its soldiers.”

The majority of the United States troops entered the European fight during and after the German spring offensive of 1918. The Germans had a field day gassing the green American soldiers, whose casualty rate was extremely high.

More than a year after the United States’ declaration of war, the American Expeditionary Force at last required that gas officers be assigned to each unit.

While mustard gas had proven to be an extremely effective blistering agent, it was considered too persistent to be used on the offensive—it hung around so long that it would poison the attacker’s own troops as they moved into territory that the enemy had abandoned. It had another disadvantage: its physiological action is delayed for hours, like a particularly hellish poison ivy, so enemy troops were often not immediately aware they had been gassed and would continue fighting. Captain Lewis was asked to find a poison gas that would outdo mustard, one that was “(1) effective in small concentrations; (2) difficult to protect against; (3) capable of injuring all parts of the body; (4) easily manufactured in large quantities; (5) cheap to produce; (6) composed of raw materials that were readily available in the United States; (7) easy and safe to transport; (8) stable and hard to detect; and, most importantly, (9) deadly.”

A colleague suggested that Lewis take a look at Father Nieuwland’s doctoral dissertation, in which the chemist-priest had described combining arsenic trichloride and acetylene. The result had made him deathly ill. When Lewis repeated Nieuwland’s experiment, he found that the results matched his goal—immediately painful, more toxic than mustard, and less persistent than mustard because it decomposed in water.

Conant had continued in Harvard’s graduate program and received his Ph.D. in organic chemistry in 1917, just as the United States entered the war. He and two chemist friends could see that many organic chemicals were selling at very high prices because of the war. They decided to manufacture benzoic acid, but they found that producing chemicals in large batches was not the same thing as working in laboratory flasks: they burned down one building and used the insurance settlement to move the business to a second. When Roger Adams, an instructor in organic chemistry at Harvard, moved to the University of Illinois, Harvard offered Conant the open faculty position. He accepted, and the move to Harvard was timely, as the benzoic acid business ended in catastrophe with a second fire two months later.

The government had difficulty convincing chemical companies to produce poison gases. The work was dangerous, and the only customer—the government—would immediately discontinue purchases whenever the war ended.

In the spring of 1918, the Army pushed to take control of all chemical warfare operations, including research and production within the United States. On June 28, President Wilson established the Chemical Warfare Service (CWS). Although Gen. Pershing had earlier removed Gen. William Sibert from command of the 1st Infantry Division before it was deployed in combat, he recommended him to command the CWS, with Gen. Amos Fries reporting to him and running things in France. Because Lewisite was to be America’s secret weapon, it was not produced at Edgewood but assigned its own production site in Willoughby, Ohio, about thirty miles from Cleveland. The similarities between Willoughby in World War I and Los Alamos in World War II are striking: Willoughby was called “the mousetrap,” because soldiers could get in but not out—no one assigned to Willoughby was transferred until after the armistice, and soldiers were told they would be court-martialed if they revealed what was being manufactured or even where they were stationed.

Gas had not broken the deadlock of trench warfare, and against a properly trained force equipped with masks and skin protection, it was not a wonder weapon. For all the war’s combatants, less than 5 percent of casualties were due to gas. For all but the Russians, who never developed a satisfactory mask, less than 5 percent of the gas casualties were fatal.

Douhet proposed a grand thesis, that airpower in future wars would take the battle beyond the trenches, destroying the enemy’s industrial base and with it the will to resist. The bomber and biological or chemical weapons would complement each other: “One need only imagine what power of destruction that nation would possess whose bacteriologists should discover the means of spreading epidemics in the enemy’s country”. Douhet was not the only one with plans to combine chemical weapons and airplanes; shortly after the war’s end, the New York Times quoted an unidentified American military source: “Ten airplanes carrying ‘Lewisite’ would have wiped out … every vestige of life—animal and vegetable—in Berlin.” Mitchell had planned an assault using incendiary bombs and poison gas on the interior of Germany for 1919. But because the first American night bombers did not arrive at the front before the armistice, his ideas remained untested.

Douhet argued that the object of war was not to defeat the enemy’s army but to destroy the enemy’s will and ability to resist, and that this could best be done by striking behind the front. He saw the airplane as the ultimate offensive weapon; it could soar over the trenches and attack anywhere with great rapidity.

Bombing would win a war, the school taught, not by directly attacking enemy forces but by destroying the enemy’s ability and will to resist. Because a modern society was so complex, removing a few key components of its industrial web—rubber, oil, transportation hubs, steel mills, chemical plants, ball-bearing factories—would result in a breakdown of industrial production, an inability to supply troops, and a collapse of civilian and military morale.

The ACTS argument that bombing could win a war was almost entirely theoretical, and events in World War II in Europe would prove it wrong; Britain would survive the Blitz, Germany would maintain war production while under intense and sustained bombing attack, and the Soviet Union would reorganize its economy after abandoning its European industrial base and retreating thousands of miles. In none of these countries did a collapse of either the industrial web or of civilian morale force a surrender or even negotiation.

The ACTS’s precision-bombing doctrine was based on unfounded assumptions, and it ignored problems. First, precision bombing would require visual sighting of targets and would need to be conducted during daytime. But in daylight, bombers would be more vulnerable to enemy fighters. Long-range escort fighters capable of matching the bombers’ range were not seen as technologically feasible, and the Air Corps leadership saw no possibility of getting Congress to simultaneously fund both new bombers and new fighters. So the problem was simply denied: the ACTS assumed that armed bombers flying in tight formation would be able to defend themselves against enemy fighters. Second, daylight precision bombing would require clear weather for targets to be identified. In fact, cloud cover in Europe could last for weeks, as would be seen during World War II. Third, at low altitude, bombers would be vulnerable to enemy anti-aircraft fire. The ACTS solution was to bomb from high altitude. That would admittedly make precision bombing more difficult, but the ACTS instructors assumed that this was a technological problem that could be solved—that an accurate bombsight capable of correcting for aircraft instability, headwinds, tailwinds, and crosswinds would be developed. Fourth, a long-range, high-altitude, high-payload bomber with multiple defensive guns would be required. The Air Corps assumed that it would eventually get such a plane, although it was unlikely that the War Department or Congress would approve purchase even if one were offered, as it did not fit either of the Air Corps’ defined missions of coastal defense and combat support.

Between 1936 and 1940, in clear weather, the Air Corps dropped 115,000 practice bombs from an altitude of fifteen thousand feet. After arbitrarily excluding misses of more than a thousand feet, the average miss was still well over three hundred feet. The Air Corps’ answer was more bombers dropping more bombs—if one bomber could not hit the target, perhaps forty could. Hap Arnold, in charge of the Air Corps’ combat arm, organized his teams into forty-plane formations that would drop their bombs simultaneously. Accuracy improved, but not by enough. One of the founders of the strategic bombing doctrine, Laurence Kuter, began to lose faith. He calculated that destroying the Sault Ste. Marie locks, one of his Air Corps Tactical School textbook examples, would require 120 bombers and a thousand bombs, which would yield the nine hits that would do the job.
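Kuter’s numbers are easy to sanity-check with a standard circular-error model. Here is a minimal sketch in Python; the Rayleigh error model and the 36-foot effective target radius are illustrative assumptions (the radius is chosen to reproduce the book’s ratio), while only the 300-plus-foot average miss and the 9 hits per roughly 1,000 bombs come from the text:

```python
import math

# Back-of-the-envelope version of Kuter's arithmetic. Only the 300-plus-foot
# average miss and the 9-hits-per-~1,000-bombs conclusion come from the book;
# the Rayleigh error model and the target radius are illustrative assumptions.

def hit_probability(mean_miss_ft: float, target_radius_ft: float) -> float:
    """P(a bomb lands within target_radius_ft of the aim point), assuming
    independent Gaussian x/y errors, i.e. a Rayleigh-distributed miss
    distance with mean = sigma * sqrt(pi / 2)."""
    sigma = mean_miss_ft / math.sqrt(math.pi / 2)
    return 1.0 - math.exp(-target_radius_ft ** 2 / (2 * sigma ** 2))

p = hit_probability(mean_miss_ft=335.0, target_radius_ft=36.0)
print(f"per-bomb hit probability: {p:.2%}")              # about 0.90%
print(f"bombs needed for 9 expected hits: {9 / p:.0f}")  # about 1,000
```

At a per-bomb hit probability of about 0.9 percent, roughly a thousand bombs must be dropped before nine hits can be expected, which is why the Air Corps’ answer to inaccuracy was always more bombers.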

Conant asked Fieser to look into explosions that had damaged a DuPont plant that manufactured divinylacetylene, a chemical used in coatings and in the manufacture of neoprene rubber; if the stuff could blow up a chemical plant, there might be a military use for it. Fieser enlisted E. B. Hershberg, a member of his research group who was a reserve Army officer in the CWS. The two of them poked at different batches of divinylacetylene as it dried, and they watched the batches turn from liquids to gels. At the end of each day, they burned the gels and watched them spark and sputter. Even as they burned, however, the gels did not turn liquid but stayed sticky and viscous. This suggested that a bomb composed of the material might scatter globs of burning gel. Hershberg filled tin cans with black powder and divinylacetylene and set them off in deserted areas in the nearby town of Everett. The results, he reported, were promising.

CWS’s official history notes that “supplies of M-69 bombs were becoming available in 1943, when the AAF was giving thought to the strategic bombing of Japan. … What was the best incendiary for the new mission?” That question was answered by experiment, by simulating Japanese (and German) housing as closely as possible. At Dugway Proving Ground in Utah, the NDRC employed Standard Oil Development as the principal contractor in the construction of a “German-Japanese Village” that was repeatedly bombed, burned, and rebuilt. Nothing was overlooked in the village’s design. Brick, wood, and tile structures were outfitted with authentic furniture, bedspreads, rugs, draperies, children’s toys, and clothing hanging in closets.

Standard Oil built two types of Japanese roofs as well—tile-on-sheathing and sheet-metal-on-sheathing. To ignite Japanese homes, the tatami mat—the rice-straw mat that was used in flooring nearly every Japanese home—would be key. Ideally, a bomb that had punctured the roof would stop on the mat. If the bomb went through the floor and embedded itself in the earth, a fire would be less likely than if it sprayed burning gel across the tatami, which would yield impressive results: the mat, the paper-and-wood walls, and the futon and zabuton cushions would all quickly ignite. Standard Oil acquired authentic rice-straw tatami mats from Hawaii and the West Coast.

From May through September 1943, four different incendiary bombs were tested on the German-Japanese Village. The napalm-filled M-69 proved most successful.

Everyone involved in the design, the construction, and the repeated destruction and reconstruction of the German-Japanese Village knew exactly what he was doing, and yet no one expressed ethical objections. Euphemisms such as “de-housing” could not disguise what was being done at great expense and effort. The CWS, with the direct support of the AAF, designed and tested a very effective weapon to do precisely what AAF doctrine precluded: to burn civilians in their homes.

 

Fifteen square miles of Tokyo disappeared that night, and more civilians died in Tokyo than would perish in either Hiroshima or Nagasaki a few months later. The Tokyo bombing of March 9–10, 1945, remains the most devastating air raid in history.

The center of the attack hit the Tokyo flatlands, where the Sumida River passed through thousands of wooden workers’ houses. “Around midnight,” Guillain wrote, “the first Superfortresses dropped clusters of the incendiary cylinders the people called ‘Molotov flower baskets.’” These were cluster bombs dispersing M-69 bomblets filled with napalm, and large fires immediately erupted. “The planes that followed, flying lower, circled and criss-crossed the area, leaving great rings of fire behind them. Soon other waves came in to drop their incendiaries inside the ‘marker’ circles. Hell could be no hotter.” The high winds made fighting the fires impossible when a house could be hit by ten or even more of the M-69s, which “were raining down by the thousands.” As they fell, Guillain noted, the cylinders scattered “a kind of flaming dew that skittered along the roofs, setting fire to everything it splashed.” The “flaming dew,” of course, was napalm. Almost immediately the houses, which were made of wood and paper, caught fire, “lighted from inside like paper lanterns.” The results were nightmarish: The hurricane-force winds puffed up great clots of flame and sent burning planks planing through the air to fell people and set fire to what they touched. … In the dense smoke, where the wind was so hot it seared the lungs, people struggled, then burst into flames where they stood. … [I]t was often the refugees’ feet that began burning first: the men’s puttees and the women’s trousers caught fire and ignited the rest of their clothing. Proper air-raid clothing as recommended by the government consisted of a heavily padded hood … to protect people’s ears from bomb blasts. … The hoods flamed under the rain of sparks; people who did not burn from the feet up burned from the head down. Mothers who carried their babies on their backs, Japanese style, would discover too late that the padding that enveloped the infant had caught fire. … Wherever there was a canal, people hurled themselves into the water; in shallow places, people waited, mouths just above the surface of the water. Hundreds of them were later found dead; not drowned, but asphyxiated by the burning air and smoke. … In other places, the water got so hot that the luckless bathers were simply boiled alive.

Curtis LeMay, whom we have seen as the Eighth Air Force’s most successful commander in Europe, planned and directed the Tokyo attack.

“Drafts from the Tokyo fires bounced our airplanes into the sky like ping-pong balls,” LeMay later wrote. “According to the Tokyo fire chief, the situation was out of control within minutes. It was like an explosive forest fire in dry pine woods. The racing flames engulfed ninety-five fire engines and killed one hundred and twenty-five firemen. … About one-fourth of the city went up in smoke that night anyway. More than 267,000 buildings.” He quoted the Air Force history of the war, and he italicized the quote: “No other air attack of the war, either in Japan or Europe, was so destructive of life and property.”

On March 13, Osaka; March 16, Kobe; March 18, Nagoya again. Five raids in nine days, 32 square miles destroyed in Japan’s four most populous cities—41% of the area the AAF destroyed in all of Germany during the entire war, and at a total cost of only 22 B-29s and their crews.  LeMay quit there, at least for a time—he had run out of napalm.

The idea of destroying Japan with incendiaries was not invented by Curtis LeMay or by Hap Arnold. It had many fathers. Gen. Billy Mitchell had suggested the possibility of burning Japan’s “paper and wood” cities as early as 1924. In November 1941, George Marshall threatened to “set the paper cities of Japan on fire” if war came. Immediately after Pearl Harbor, Churchill recommended “the burning of Japanese cities with incendiary bombs.” President Roosevelt saw in the RAF’s 1943 destruction of Hamburg in an incendiary firestorm “an impressive demonstration” of what might be done to Japan. For the Americans, however, it was important that bombing civilians have the appearance of bombing military targets. A May 1943 request for a bombing plan noted, “It is desired that the areas selected include, or be in the immediate vicinity of, legitimate military targets.”

Vannevar Bush recommended that incendiaries be used against Japan, sending Arnold a report in October 1944 that estimated that they were five times as effective as high explosives by weight. Bush did say that switching to incendiaries would require a decision at a high level, but this did not bother Arnold, who already knew that he had the president’s backing. Arnold kept both Marshall and the president informed about firebombing. While they might not explicitly endorse his actions, they did not raise objections.

Even the atomic bomb did not end the incendiary attacks, which continued between Hiroshima and Nagasaki and then after Nagasaki until the Japanese surrender. The AAF wanted to win its independence by defeating Japan without a land invasion (a hope that was “not for public consumption,” as LeMay wrote to Arnold and Norstad), but it had no plans beyond running its bombing machine, which worked so smoothly that it had its own momentum.

The AAF exulted in the destruction. One press release crowed that a “fiery perfection” of “jellied fire attacks” had “literally burned Japan out of the war,” that the “vaunted Twentieth” had “killed outright 310,000 Japanese, injured 412,000 more, and rendered 9,200,000 homeless.” For “five flaming months … a thousand All-American planes and 20,000 American men brought homelessness, terror, and death to an arrogant foe, and left him practically a nomad in an almost cityless land.” In his final war dispatch, Arnold found a way to make Americans feel the terror of firebombing. He included a map of Japan, with the name of each of the sixty-six firebombed cities paired with the name of an American city of the same size. So much for Roosevelt’s prewar condemnation that bombing civilians “sickened the hearts of every civilized man and woman.”

FDR did not fund the Briggs Committee, so his scientific adviser Vannevar Bush had the Carnegie Foundation, of which he was director, provide funding for the first few months. Briggs eventually scrounged up enough money from the Naval Research Laboratory to buy Szilard and Italian physicist Enrico Fermi some uranium and graphite, and he then waited for further direction from the president, which was not forthcoming. Sachs, Szilard, and Einstein pressed for action, but there seemed to be no urgency. The United States was not yet at war, and many scientists viewed the whole idea of atomic energy as a pipe dream. The required critical mass of uranium-235 might be tons, and other priorities—a peacetime draft, bombers and their bombsights, Navy ships, radar—were more pressing.

With the fall of France in June 1940, defense work acquired a new urgency for Roosevelt, and Vannevar Bush convinced the president to centralize weapons research and development. Roosevelt authorized formation of the National Defense Research Committee, with Bush at its head and the Briggs Uranium Committee reporting to him. Bush recruited Harvard’s president, James Conant, as an NDRC member and put him in charge of all chemical projects, including explosives and poison gas. The Briggs Committee was getting a bad reputation. Karl Compton, the president of MIT and an NDRC member, sent Bush a letter complaining of Briggs’s incompetence in managing atomic research, pointing out that “our English friends are apparently farther ahead than we are, despite the fact that we have the most in number and the best in quality of the nuclear physicists in the world.” He complained that the Briggs Committee “practically never meets.” Bush convened a National Academy of Sciences panel to consider Briggs’s fate. When the panel learned that the British thought they might have a bomb in two years, it recommended “that it would be advisable to have [the Briggs Committee] reconstituted so that a man of action would be the main executive.”

Americans extracted as much information as possible from the British. When Roosevelt gave the order to proceed with development of the bomb in March 1942, the Manhattan Project was born. The Americans had more money, engineering resources, and émigré scientists than did the British and soon took the lead. Bush, Conant, and Gen. Leslie Groves, the Project’s military director, imposed a policy of “restricted interchange,” refusing to give the British scientists any information that would not contribute to developing a weapon during the current war. Security considerations were certainly important in that decision, but both sides also had their eyes on the postwar strategic balance and on the profits to be made from nuclear energy. British scientists and administrators pressed Churchill to demand full information sharing, and Churchill duly pressed Roosevelt. More than a year later, in August 1943, the two leaders signed what would be known as the “Quebec Agreement”: the two nations would pool their resources, and information would be freely exchanged among scientists working in the same field. (This free exchange would eventually allow the Soviet spy Klaus Fuchs, a member of the British team, access to Los Alamos.)

When the Germans were advancing, from 1940 to 1942, they had no interest in gas because it would have slowed them down. From 1943 on, their cities were vulnerable to aerial gas attacks, especially after the Normandy invasion, when the Allies had air superiority. And near the end of the war in Europe, when Hitler might have been willing to use his secret nerve gases in scorched-earth revenge warfare, confusion and the interference of subordinates would have made gas attacks difficult to organize.

Roosevelt deserves much of the credit for the worldwide forbearance. Both he and his predecessor Herbert Hoover detested gas. When Congress passed a bill in 1937 promoting the Chemical Warfare Service to a Corps—with the same status as the infantry or artillery—FDR vetoed it, saying, “It has been and is the policy of this Government to do everything in its power to outlaw the use of chemicals in warfare. Such use is inhuman. … I hope the time will come when the Chemical Warfare Service can be entirely abolished.” He maintained that attitude throughout the war, threatening retaliation against any enemy’s first use. And he kept Churchill, who several times considered using gas, on a short leash.

Gen. Robert Travis, the commander of the nuclear mission to Guam, rode as a passenger on one of the B-29s. Takeoff conditions were ideal, with the wind almost directly head-on at seventeen knots. The pilot ran a full power check and released the brakes for takeoff. Just as he lifted off, his number two engine failed, and he feathered its propeller. Then the landing gear failed to retract, and when he tried to make a 180-degree turn, he could not keep the left wing up. He slid the plane to the left to avoid a trailer court and crash-landed, left wing down, at 120 mph. The crew escaped with minor injuries, but twelve passengers, including Gen. Travis, were killed. Twenty minutes after the crash, the chemical high explosives in the atomic bomb detonated, scattering tamper uranium, killing seven more and injuring 173 others. Only nine atomic bombs arrived in Guam.

Truman and Secretary of Defense Johnson had slashed the pre–Korean War defense budget under the assumption that possessing the atomic bomb would allow the United States to wage war on the cheap. Throughout the Korean War, both the military and the president had considered use of the bomb but had never found the right moment. Oppenheimer summed it up: “Are [atomic bombs] useful in ground combat? … What can we do with them?” Truman had his own ideas, which the Joint Chiefs or even LeMay would have been unlikely to approve. In his diary, Truman imagined giving the Soviet Union a ten-day ultimatum: either withdraw all Chinese troops from Korea or America would use its atomic weapons to destroy every military base in Manchuria, including any ports and cities.

 

 


Department of Homeland Security bioterrorism risk assessment


[So much of this document consisted of probability calculations that very little of it was extracted.

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: KunstlerCast 253, KunstlerCast 278, Peak Prosperity]

NRC. 2008. Department of Homeland Security Bioterrorism Risk Assessment: A Call for Change. Committee on Methodological Improvements to the Department of Homeland Security’s Biological Agent Risk Analysis. National Research Council.

Extracts from this 173-page document:

The threat posed by biological agents employed in a terrorist attack on the United States is arguably the most important homeland security challenge of our era. Whether natural pathogens are cultured or new variants are bioengineered, the consequence of a terrorist-induced pandemic could be millions of casualties—far more than we would expect from nuclear terrorism, chemical attacks, or conventional attacks on the infrastructure of the United States such as the attacks of September 11, 2001. Even if there were fewer casualties, additional second-order consequences (including psychological, social, and economic effects) would dramatically compound the effects. Bioengineering is no longer the exclusive purview of state sponsors of terrorism; this technology is now available to small terrorist groups and even to deranged individuals.

Today the nation is a long way from being able to meet the challenges posed by a bioterrorist attack. The United States currently has little ability to prevent or detect a biological attack, and the nation’s response systems are unproven. Biological weapons are easily concealed and hard to track. Biological attacks are potentially repeatable, and attribution is extremely difficult.

“Advances in biotechnology will augment not only defensive measures but also offensive biological warfare (BW) agent development and allow the creation of advanced biological agents designed to target specific systems—human, animal, or crop” (National Intelligence Council, 2004, p. 36). The report states further that “as biotechnology advances become more ubiquitous, stopping the progress of offensive BW programs will become increasingly difficult” (p. 36).

Serious threats “may consist instead of unannounced attacks by subnational groups using genetically engineered pathogens against American cities” (U.S. Commission on National Security in the 21st Century, 1999, p. 2). Improving the U.S. capability to prevent, detect, and respond to the use of biological weapons is clearly a matter of national urgency. According to recent congressional testimony by the Director of National Intelligence, al-Qaeda and other terrorist groups continue to show interest in these weapons (Negroponte, 2007).

The biotechnology revolution will make even more potent and sophisticated weapons available to small or relatively unsophisticated groups.

One scenario, involving an aerosol anthrax attack in a highly populated U.S. city, begins with a single aerosol anthrax attack delivered by a truck using a concealed improvised spraying device in one densely populated city with a significant commuter workforce. Anthrax spores, delivered by aerosol, result in inhalation anthrax, which develops when the spores are inhaled into the lungs and germinate into vegetative bacteria capable of causing disease. A progressive infection follows. Attacks are made in five separate metropolitan areas in a sequential manner: three cities are attacked initially, followed by two additional cities 2 weeks later. The crisis stresses and breaks the response capabilities of all relevant public and private institutions, rapidly leading to 328,400 exposures; 13,200 fatalities; and 13,300 other casualties. The full political, psychological, social, and economic impacts of the attack adversely affect national financial markets and consumer confidence, devastate the local and regional economy, and cause public faith in government to plummet across the country.

FIGURE 3.1 Biological threat agents as categorized by the Centers for Disease Control and Prevention (CDC). SOURCE: Available at www.bt.cdc.gov/Agent/Agentlist.asp

  • High-priority, Category A agents include organisms that pose a risk to national security because they can be easily disseminated or transmitted from person to person, they result in high mortality rates and have the potential for major public health impacts, they might cause social disruption, and they require special action for public health preparedness.
  • Category B, the second-highest priority, includes agents that are moderately easy to disseminate, that result in moderate morbidity rates and low mortality rates, and that require specific enhancements of CDC’s diagnostic capacity and enhanced disease surveillance.
  • Category C agents include emerging pathogens that could be engineered for mass dissemination in the future because of availability, ease of production and dissemination, and potential for high morbidity and mortality rates and for major health impact. A later CDC-categorized list (CDC, 2007) features the same categories, but with agent entries revised…


Agricultural consequences also need to be considered. Economic activity of U.S. agriculture has been estimated to exceed $1 trillion annually, with exports valued in excess of $50 billion. Protecting U.S. agriculture is critical to the global economy and to the ensuring of an adequate and safe food supply in the United States and other countries. Several assessments of agricultural consequences have shown that livestock and poultry populations are vulnerable to biologic attack. The U.S. Department of Agriculture has identified viruses and bacteria capable of causing wide-scale morbidity and mortality of livestock and poultry that would result in a cessation of international trade and exports costing the United States billions of dollars.

 


Dangers of EMP exaggerated?

Preface. The article by Lewis (2017) below questions whether a nuclear bomb can really ruin our grid and cause societal collapse.  All the other posts in the Fast Crash/Electromagnetic Pulse category think otherwise.

The most hopeful study suggesting that EMP effects from a nuclear bomb may not be as bad as expected is the Electric Power Research Institute (EPRI 2017) study, which looked at the possible effects of one high-altitude EMP on the U.S. fleet of 37,000 bulk power transformers. These large transformers, often found in substations, operate at greater than 69,000 volts and step high-voltage electricity down to the levels distributed around neighborhoods. “We found that there would likely be some failures, but those failures are relatively small in nature and not in the hundreds as had been contemplated from some of the reports in the past,” EPRI’s Manning said.

This article doesn’t question the effects of EMPs, just whether North Korea has a nuclear warhead powerful enough: “Former Department of Defense and intelligence contractor Jack Liu described the threat from EMPs as “grossly overstated” and said North Korea had not developed nuclear warheads powerful enough to be effective” (Porter 2017).

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 Report

***

Warnings that North Korea could detonate a nuclear bomb in orbit to knock out US electrical infrastructure are far-fetched, says arms expert. There is no shame in enjoying dystopian science fiction – it helps us contemplate the ways in which civilisation might fail.

But it is dangerous to take the genre’s allegorical warnings literally, as too many people do when talk turns to a possible electromagnetic pulse (EMP) attack. There have been repeated recent claims that North Korea could use a nuclear bomb in space to produce an EMP to ruin US infrastructure and cause societal collapse. This is silly.

We know a nuclear explosion can cause an EMP – a burst of energy that can interfere with electrical systems – because of a 1962 US test called Starfish Prime.

US nuclear weaponeers wanted to see if such a blast was capable of blacking out military equipment. A bomb was launched to 400 kilometres above the Pacific before exploding with the force of 1.5 megatons of TNT. But it was a let-down for those hoping such blasts could knock out Soviet radar and radio.

The most notable effects on the ground were the visuals. Journalist Dick Stolley, in Hawaii, said the sky turned “a bright bilious green”.

Yet over the years, the effects of this test have been exaggerated. The US Congress was told that it “unexpectedly turned off the lights over a few million square miles in the mid-Pacific. This EMP also shut down radio stations, turned off cars, burned out telephone systems, and wreaked other mischief throughout the Hawaiian Islands, nearly 1,000 miles distant from ground zero.”

It didn’t. That was clear from the light-hearted tone of Stolley’s report. Immediate ground effects were limited to a string of street lights in Honolulu failing. But no one knows if the test was to blame.

Of course, we rely on electronics more today. Those warning of the EMP threat say it would lead to “planes falling from the sky, cars stalling on the roadways, electrical networks failing, food rotting”.

But evidence to back up such claims is lacking. A commission set up by the US Congress exposed 55 vehicles to EMP in a lab. Even at peak exposures, only six had to be restarted. A few more showed “nuisance” damage, like blinking dashboard displays. This is a far cry from the fantasies being aired as tensions with North Korea rise.

Nuclear weapons are scary enough without the fiction.

References

EPRI. 2017. Magnetohydrodynamic Electromagnetic Pulse Assessment of the Continental U.S. Electric Grid: Geomagnetically Induced Current and Transformer Thermal Analysis. Electric Power Research Institute.

Lewis, J. 2017. Would a North Korean space nuke really lay waste to the US? New Scientist.

Porter, T. 2017. Could a North Korean EMP Attack on the U.S. Really Cause Mass Starvation and Societal Collapse? Newsweek.


Civil war coming?

Preface. I read Turchin’s latest book, which attempts to show that his past theories about the rise and fall of agrarian nations also apply to our modern civilization. But there was not one mention of fossil fuels, natural resources, or Limits to Growth. His focus is economics (wages, taxes, and so on), yet I’d expected that, since there has never before been a fossil-fueled society and there will never be one again, he’d add that to his analysis.

But I did find what he had to say about the Civil War of interest.

You can see my summary of his theory on how nations fail at this link: Book review of Turchin’s “Secular Cycles” and “War & Peace & War”

Alice Friedemann, www.energyskeptic.com, author of “When Trucks Stop Running: Energy and the Future of Transportation” (2015, Springer) and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 Report

***

Peter Turchin. 2016. Ages of Discord: A Structural-Demographic Analysis of American History. Beresta Books.

Multi-Secular Cycles in Historical and Modern Societies

Introduction: Human Societies Are Fragile

For the first 80 years of American history, democratic institutions sufficed to resolve the inevitable clashes of interests found in any large society. Political crises were defused within the constitutional framework without violence. But in 1861 democratic institutions failed catastrophically. American political elites had lost their ability to cooperate in finding a compromise that would preserve the commonwealth. And instead of defusing the crisis, popular elections in which Abraham Lincoln won the presidency triggered the conflagration. What is particularly astounding is how myopic the American political leaders and their supporters were on the eve of the Civil War, especially those from the Southern states. They gleefully wrecked the Union, without realizing what a heavy personal cost that would mean for most of them.

One wonders what they thought of their initial eagerness to join in the conflict four years and 620,000 corpses later. In the 1860s, Americans learned that large-scale complex societies are actually fragile, and that a descent into a civil war can be rapid. Today, 150 years later, this lesson has been thoroughly forgotten.

The degree to which cooperation among the American political elites has unraveled during the past decade is eerily similar to what happened in the 1850s, the decade preceding the Civil War. The divisive issues are different, but the vehemence and the disregard for the consequences of failing to compromise are the same. Of course, nobody expects another Civil War. But the political leaders of antebellum America also could not have imagined in their wildest dreams the eventual consequences of the choices they made during the 1850s.

Just because we cannot imagine our actions leading to disaster, it doesn’t mean that such a disaster cannot happen.

How nations fail

My focus is on why we sometimes see waves of sociopolitical instability that may, when extreme, cause state breakdown and collapse. Recent research indicates that the dynamics of sociopolitical instability in pre-industrial states are not purely random; history is not just “one damned thing after another” as Arnold Toynbee famously said.

One of the factors in the fall of a nation is that increasing population drives wages down, which leads to declining living standards for most of the population but increased wealth for the elites. They then gain social status by conspicuous consumption, which further exacerbates inequality as they eat huge portions of the economic pie. At some point this goes too far, and the elites begin competing with one another.

The losing elites grow more likely to fight back with violence to regain their former wealth rather than accept downward mobility, causing those in power to close ranks to keep aspirants out, which makes it all the more likely that the losing elites will team up with the even more miserable and downtrodden populace to take violent action. If the state tries to prevent this by creating employment for more elites, that just tips its finances further into the red and doesn’t solve the problem.

Population growth in excess of the productivity gains of the land strains social institutions, producing price inflation, falling wages, rural misery, urban migration, and more food riots and strikes. Population growth also leads to expansion of the army and the bureaucracy, and to rising real costs.

States have no choice but to seek to expand taxation, despite resistance from the elites and the general populace. Yet, attempts to increase revenues cannot offset the spiraling state expenses. Thus, even if the state succeeds in raising taxes, it is still headed for fiscal crisis. As all these trends intensify, the end result is state bankruptcy and consequent loss of military control; elite movements of regional and national rebellion; and a combination of elite-mobilized and popular uprisings that expose the breakdown of central authority.

Sociopolitical instability resulting from state collapse feeds back on population growth via depressed birth rates and elevated mortality and emigration. Additionally, increased migration and vagrancy spread disease by connecting areas that would have stayed isolated during better times. As a result, epidemics and even pandemics strike disproportionately often during the disintegrative phases of secular cycles.

Instability has a negative impact on the productive capacity of a society. Lacking strong government to protect them, peasants cultivate only fields that are near fortified settlements or other strong points like hilltops. Without a strong state, the population is vulnerable to banditry, civil war, and other threats.

How long does this take?

These data and analyses suggest that a typical historical state goes through a sequence of relatively stable political regimes separated by unstable periods characterized by recurrent waves of internal war. The characteristic length of both stable (or integrative) and unstable (or disintegrative) phases is a century or longer, and the overall period of the cycle is around two to three centuries.

Historians’ time divisions tend to reflect these secular cycles. Roman history is usually separated into Regal (or Kingdom), Republican, Principate, and Dominate periods. Transitions between these periods, in all cases, involved prolonged waves of sociopolitical instability. The Germanic kingdoms that replaced the Roman Empire after it collapsed in the West went through a sequence of secular cycles that roughly corresponded to the dynasties that ruled them.

Secular cycles are also observed in other world regions: in China with its dynastic cycles, in the Middle East, and in Southeast Asia. In fact, it is a general dynamic pattern that is observed in all agrarian states for which the historical record is accurate enough.


Not only are telecom companies screwing us on net neutrality, they refuse to get rid of robocalls

Preface. It’s four years after the 2013 Senate hearing below, and the telecom industry has still refused to block robocalls; Congress and the Federal Trade Commission (FTC) have not demanded it, despite over 10 years of complaints. In fact, unwanted and illegal robocalls are the FTC’s #1 complaint, with more than 1.9 million complaints filed in the first five months of 2017 alone.

That really sucks, because after net neutrality goes away, oversight of internet protections will shift from the FCC to the FTC, which, given this track record of ignoring the public on robocalls, certainly isn’t going to regulate net neutrality either.

The delay isn’t because the FTC and Congress are still trying to find funding for research and implementation: Canadian telecom companies had already blocked robocalls for many years before the 2013 Senate hearing.

You can block robocalls, but only if you have an Internet-based phone (VOIP, Voice over Internet Protocol), via a free service called Nomorobo. This is currently available only for customers with Internet-based phones like Comcast xFinity, Verizon FiOS, or AT&T U-verse. ALL phone companies should offer Nomorobo or a similar free service directly to their customers. Or you can pay for an app on your cell phone — it ought to be free!

We are thrilled with Nomorobo, though the phone still rings once. I’d rather it didn’t ring at all, though the ring is a reminder of the 25 calls a day we get on average. We often shout “NO MO ROBO!” victoriously.

You can try signing this Consumers Union Petition to demand the major telephone companies block robocalls before they get to you, but given the many years they’ve had the petition…

There’s also a lot of advice out there, like at this link: Are robo-calls driving you crazy? Here’s how to block and beat them.

Below are about 8 pages of excerpts from the 88-page Senate hearing.


***

July 10, 2013. Stopping Fraudulent Robocall Scams: Can More Be Done? Hearing before the Subcommittee on Consumer Protection, Product Safety, and Insurance. U.S. Senate S. HRG. 113–117, 88 pages.

Highlights:

  • Canadian telephone companies have successfully blocked robocalls for years (with Primus’s Telemarketing Guard), but American phone companies won’t do it, and it isn’t expensive
  • Law enforcement officials have estimated that telemarketing fraud costs Americans over $40 billion annually.
  • The FTC received 2 million complaints between October 2011 and September 2012.

HON. CLAIRE MCCASKILL, U.S. SENATOR FROM MISSOURI. We have all been subject to the frustrations and annoyances of receiving unwanted telemarketing calls, also known as robocalls. It seems these calls always intrude at a very inconvenient time. Ten years ago, the Federal Trade Commission and the Federal Communications Commission, at the direction of Congress, established a National Do Not Call Registry so that consumers could get some peace and quiet in their homes and stop the torrent of unsolicited telemarketing calls. The idea was simple: voluntarily register your phone number on a centralized list, and telemarketers would be prohibited by law from calling you. The registry has been celebrated across party lines as a successful government program that provides real benefits to consumers.

While the National Do Not Call Registry has been effective at limiting intrusions by legitimate telemarketers, fraudulent robocalls have since filled the void and have become the source of understandable anger and frustration among the public. These automated, prerecorded telemarketing calls that often seek personal information from unsuspecting consumers are an annoyance at best, but they can be devastating for those that are defrauded by them. It is easy to see how consumers can easily be confused by these calls. One common scam involves a call from Rachel from ‘‘Cardholder Services’’ offering an easy way to reduce consumers’ credit card interest rates.

Another common scam involves robocalls warning consumers that their auto warranty is about to expire. In both examples, with the press of a button, the consumer is directed to an individual whose job is to collect financial information in an effort to defraud them. Even pressing the button they claim removes a caller from their list does nothing more than identify a phone number as valid, likely increasing the frequency of unwanted calls in the future.

USA Today outlined an example of this type of scam in a July 4th article called ‘‘Your Money: Seniors Fight Back Against Robocalls.’’ It gave a specific example of an automated voice that implies a doctor or a relative signed the consumer up for a free medical alert system. Authorities said that, in some cases, after consumers press a button to accept the offer, they quickly receive another call asking for personal information, including credit card numbers. This might be con artists trying to get bank or credit card information or a Social Security number to use in ID theft, or it is a way to pressure seniors into paying for equipment or services that they don’t need. The medical alert system scam is in full swing in Michigan, according to the state attorney general’s office, as well as in other states, including Pennsylvania, New York, Texas, Wisconsin, and Kentucky.

Law enforcement officials have estimated that telemarketing fraud costs Americans over $40 billion annually. So it is no wonder that robocalls consistently remain a top consumer complaint at the FTC as well as the FCC. The FTC received 2 million complaints between October 2011 and September 2012. The FTC and FCC have taken important steps to try and stop fraudulent robocalls. Both commissions have issued rules restricting robocalls, and they have taken enforcement actions to protect consumers. Since the National Do Not Call Registry started, the FTC has won more than $250 million in civil penalties and equitable relief for consumers against robocalls. But because these shady companies and individuals are often based overseas and very difficult to locate, the FTC has only been able to collect $15 million out of the $250 million that they have in fact gotten authorization to collect.

Advances in technology have made it cheap and easy for an individual anywhere in the world with a computer and a broadband connection to make thousands and even millions of robocalls at the push of a button.

Similarly, the exceptions to the Do Not Call Registry for charities, political calls, and businesses with which consumers have an existing relationship also remain a nuisance for consumers. In exploring regulatory, statutory, or technological changes to address the problem of robocalls, giving consumers the choice to stop all unwanted calls—whether from charities, political campaigns, or businesses with existing relationships to the consumer—regardless of who places them, should be our ultimate goal. The choice here should rest firmly in the hands of the owner of the phone that rings.

The FTC and the FCC are actively engaged in stopping these illegal robocalls, but they have admitted to the significant challenges they face against new and emerging technologies, including sophisticated Voice-over-Internet-Protocol enabled auto-dialers and the use of fake caller ID systems. Companies using auto-dialers can send out thousands of phone calls every minute at almost no cost. Some of these companies do not screen against the Do Not Call Registry and use this solicitation to scam an individual.

LOIS GREISMAN, Associate Director, Division of Marketing Practices, Bureau of Consumer Protection, FEDERAL TRADE COMMISSION

Fraudsters have also exploited caller ID spoofing, which induces the consumer to pick up the phone, while at the same time enabling the scammer to hide its identity and location. And, of course, with phone calls bouncing from country to country all over the world, it is now easier than ever for the robocaller to hide. With such a cheap and scalable business model, bad actors can blast literally tens of millions of illegal robocalls over the course of a single day at less than 1 cent per minute. These robocalls not only invade consumers’ privacy; quite often they pitch goods and services riddled with fraud. To meet this challenge, we stepped up our law enforcement initiatives. Looking just at the cases we have completed involving robocalls, we have shut down entities that placed billions of such calls.

We have sued entities that afford access to massive dialer or voice-blasting platforms that initiate the calls. We have also sued entities known as payment processors that afford access to the financial system and enable the robocallers to process payments from consumers.

The Registry, which currently includes more than 221 million telephone numbers, has been tremendously successful in protecting consumers’ privacy from the unwanted calls of tens of thousands of legitimate telemarketers who participate in the Registry each year. Some of the Commission’s early robocall cases were against companies with household names such as Dish Network, DIRECTV, and Talbots.

Yet increasingly, robocalls that plague consumers are initiated by fraudsters, who often hide out in other countries in an attempt to escape detection and punishment. One example is the defendants in FTC v. Navestad, who the Commission successfully traced and sued even after they attempted to hide their identities through fake caller IDs, shifting foreign operations, and name changes. The court found that the defendants made in excess of eight million robocalls, and ordered them to pay $30 million in civil penalties and give up more than $1.1 million in ill-gotten gains. Unfortunately the two defendants are currently in hiding overseas.

One recent example is a concerted attack on illegal robocalls purporting to be from ‘‘Rachel’’ or others from ‘‘Cardholder Services,’’ which pitch a supposedly easy way to save money by reducing consumers’ credit card interest rates. The FTC brought five cases against companies that were allegedly responsible for millions of these illegal calls. The Commission simultaneously announced that state law enforcement partners in Arizona, Arkansas, and Florida had filed separate law enforcement actions as part of the same sweep.

First, the Commission aggressively pursues companies that provide the equipment and software necessary to send out millions of calls, sometimes referred to as ‘‘voice broadcasters’’ or ‘‘autodialers.’’ One example is FTC v. Asia Pacific Telecom, Inc., in which the FTC alleged that defendants were responsible for violating the TSR by placing billions of prerecorded phone calls on behalf of unscrupulous telemarketers. These robocalls pitched worthless extended auto warranties and credit card interest rate reduction programs while using spoofed Caller ID names—such as ‘‘SALES DEPT’’—and phone numbers registered to companies with overseas offices in the Northern Mariana Islands, Hong Kong, and the Netherlands.

The Robocall Summit made clear that convergence between the legacy telephone system and the Internet has given rise to massive, unlawful robocall campaigns. The telephone network has its origins in a manual switchboard that allowed a human operator to make connections between two known entities. A small group of well-known carriers were in control and were highly regulated. Placing calls took significant time and money, and callers could not easily conceal their identities. Now, communications technology is universal and standardized such that entrepreneurs can build up a viable telephone services business wherever they find an Internet connection. As a result, the number of service providers has grown exponentially and now includes thousands of small companies all over the world. In addition, VoIP technology allows consumers to enjoy high-quality phone calls with people on the other side of the planet for an affordable price. With this efficiency came other changes: instead of a voice path between one wire pair, the call travels as data; identifying information can be spoofed; many different players are involved in the path of a single call; and the distance between the endpoints is not particularly important. As a result, it is not only much cheaper to blast out robocalls; it is also easier to hide one’s identity when doing so.

New Technologies Have Made Robocalls Extremely Inexpensive.  Until recently, telemarketing required significant capital investment in specialized hardware and labor. Now, robocallers benefit from automated dialing technology, inexpensive long distance calling rates, and the ability to move internationally and employ cheap labor. The only necessary equipment is a computer connected to the Internet. The result is that law-breaking telemarketers can place robocalls for less than one cent per minute. In addition, the cheap, widely available technology has resulted in a proliferation of entities available to perform any portion of the telemarketing process, including generating leads, placing automated calls, gathering consumers’ personal information, selling the products, or doing all of the above. Because of the dramatic decrease in upfront capital investment and overall cost, robocallers—like e-mail spammers—can make a profit even if their success rate is very low.

Technological changes have also affected the marketplace by enabling telemarketers to conceal their identities when they place calls. First, direct connections do not exist between every pair of carriers, so intermediate carriers are necessary to connect the majority of calls. Thus, the typical call now takes a complex path, traversing the networks of multiple different VoIP and legacy carriers before reaching the end user. Each of these carriers knows which carrier passed a particular phone call onto its network, but likely knows little else about the origin of the call. Such a path makes it cumbersome to trace back to a call’s inception. All too often, this process to trace the call fails completely because one of the carriers in the chain has not retained the records that would further an investigation.

New technologies allow callers to manipulate the caller ID information that appears with an incoming phone call. This ‘‘caller ID spoofing’’ has beneficial uses; legitimate companies adjust their caller ID information regularly so that customers will see the most useful corporate number or name, rather than the phone number from which an agent actually placed the call. However, the same functionality allows robocallers to deceive consumers by pretending to be an entity with a local phone number or a trusted institution such as a bank or government agency. In addition, robocallers can change their phone numbers frequently in an attempt to avoid detection. It is generally illegal to transmit misleading or inaccurate caller identification information with the intent to defraud, cause harm, or wrongfully obtain anything of value, but many robocallers flagrantly violate this law.

Finally, new technologies help robocallers operate outside the jurisdiction where they are most likely to face prosecution. Indeed, all of the many different entities involved in the path of a robocall can be located in different countries, making investigations even more challenging.

If you answer a call and hear a recorded sales message—and you haven’t given your written permission to get calls from the company on the other end—hang up. Period.

Aaron Foss, Freelance Software Developer, creator of the free robocall-blocking service Nomorobo

Here is how it works. In real-time, Nomorobo analyzes the incoming caller ID and the call frequency across multiple phone lines, and if it detects a robocaller, the call is automatically disconnected. And all of this happens before the consumer’s phone rings. So as each call is analyzed, a blacklist of robocallers is continually updated. And the more calls that come into the system for analysis, the better the algorithm works. I actually built this system using the same technology that these robocallers are using, so it scales inexpensively to handle millions of calls. And Nomorobo works on landlines, voice-over-IP, and cell phones on all of the major carriers and does not require any additional hardware or software. All that is required of the consumer is a simple, one-time setup to enable a free feature that is already built into the switches called ‘‘simultaneous ring.’’ But, as with all new ideas, there is always some skepticism. Industry players have expressed three major concerns about robocall blocking: spoofing caller ID, violating consumer privacy, and allowing legal robocalls. It is incredibly easy to spoof caller ID to show any phone number, and almost all of the robocallers do that.

But while you can falsify the calling number, you can’t falsify the calling patterns. So it is a red flag, for example, when the same number, whether it is spoofed or not, has made 5,000 calls to different numbers in the past hour. And it is also a red flag when the same number is sequentially calling large blocks of phone numbers. Both of these scenarios indicate robocalling patterns. A static blacklist of known robocallers would only work in a very limited number of situations. But by combining the caller ID, whether it is real or faked, with real-time calling-pattern analysis, robocalls can effectively be detected.
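To make those two red-flag patterns concrete, here is a minimal sketch (Python) of metadata-only detection along the lines Foss describes. The class name, the thresholds, the sliding one-hour window, and the treatment of phone numbers as integers are my own hypothetical illustration, not Nomorobo’s actual code.

    # Hypothetical sketch of metadata-only robocall detection (not Nomorobo's code).
    # Red flag 1: one caller ID (spoofed or not) makes thousands of calls in an hour.
    # Red flag 2: one caller ID sequentially dials a large contiguous block of numbers.
    from collections import defaultdict, deque
    import time

    CALLS_PER_HOUR_LIMIT = 5000  # assumed threshold, from the "5,000 calls" example
    SEQUENTIAL_RUN_LIMIT = 50    # assumed threshold for sequential block dialing

    class RobocallDetector:
        def __init__(self):
            self.call_times = defaultdict(deque)  # caller ID -> recent call timestamps
            self.last_dest = {}                   # caller ID -> last number dialed
            self.run_length = defaultdict(int)    # caller ID -> current sequential run
            self.blacklist = set()                # continually updated, as in the testimony

        def observe(self, caller_id, dest_number, now=None):
            """Record one call attempt; return True if the call should be blocked."""
            now = time.time() if now is None else now
            if caller_id in self.blacklist:
                return True
            # Sliding one-hour window of this caller ID's call volume.
            times = self.call_times[caller_id]
            times.append(now)
            while times and now - times[0] > 3600:
                times.popleft()
            if len(times) > CALLS_PER_HOUR_LIMIT:
                self.blacklist.add(caller_id)
                return True
            # Sequential dialing of a contiguous block of numbers (numbers are
            # modeled as integers here purely for illustration).
            if self.last_dest.get(caller_id) == dest_number - 1:
                self.run_length[caller_id] += 1
            else:
                self.run_length[caller_id] = 0
            self.last_dest[caller_id] = dest_number
            if self.run_length[caller_id] >= SEQUENTIAL_RUN_LIMIT:
                self.blacklist.add(caller_id)
                return True
            return False

Note that the detector uses only the caller ID, the destination number, and a timestamp (the metadata "fingerprint" described in the next paragraph), so no call content is ever inspected.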

Also, with solutions like these that only look at the metadata of a call, there is no need to monitor or listen in to the phone calls, thus assuring customer privacy. The caller ID data, along with the date and time, across many phone lines, gives enough of a fingerprint to detect robocallers without having to analyze the actual content of the call.

And the final concern that has been raised is how to allow legal robocalls, such as schools and emergency notifications, to bypass robocall blocking. And this can be accomplished by building a trusted, real-time whitelist. I have already had the opportunity to speak with some of the legal robocallers, and they are very open to working on a solution that allows them to successfully deliver their calls. They want these illegal robocallers put out of business as much as the consumer does.

Senator MCCASKILL. I don’t understand. We have heard from two good witnesses that the technology is available. So why is it that Mr. Foss’s technology is not quickly being adopted in these commercial markets? And why is it that Mr. Stein’s patented product has not been licensed to an American carrier?

Michael F. Altschul, Senior Vice President and General Counsel, CTIA—The Wireless Association: We have concerns about overreaching and blocking legitimate calls. I am sure you are more familiar than you would like to be with the kind of informational robocalls and text messages you receive from airlines when flights are delayed because of weather or other events. The volume of these calls is unpredictable, and they will flood carrier networks with identical recorded messages and text messages. And they will carry a caller ID. That caller ID, if it is put on a whitelist, can then be spoofed, as I think we all agree how easy it is to spoof a number, and have the same fingerprint or pattern as other messages.

Mr. STEIN. The reality is that the system, the Telemarketing Guard system itself, will only begin to monitor and, therefore, take action once there are reports by enough people that say, this is an unwanted telemarketer. Nobody is going to call and say, the airline let me know my flight was late. That is the initial beginning of the block — a critical mass of people calling and saying, hey, these guys are trying to rip me off or sell me siding.

Mr. ALTSCHUL. But my point is that that number, which is welcome and legitimate and properly described on caller ID, is basically the identifier that the carrier and the customer and Mr. Stein’s system has to track wanted and unwanted calls. Right now, there is no need for scammers to actually pick numbers that consumers would recognize as the source of messages, informational messages, they would like to receive. But there is no limitation on a fraudster’s ability to use an airline’s number to fill out the caller ID field in the robocalls and messages that they send.

Mr. STEIN. Two quick comments. First, the system is quite smart. And over the years that we have tuned it and built and enhanced it, we have built in a great many safeguards to prevent this exact thing from happening. And I won’t elaborate in full detail on all those, but if such a thing were to happen and reports were to start to come in, one would assume that at the same time the airline is using that phone number, too, and therefore a lot of those calls are getting accepted by our customers. So we would be seeing votes going in both directions, and the system becomes increasingly skeptical, and looks for what distinguishes the two types of calls, and then is able to break them down based on many other criteria, no longer using just, say, the caller ID, which is the thing that is easy to spoof. There are a lot of other characteristics in a phone network that are available that we use.
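As a rough sketch of the crowd-voting behavior Stein describes, the logic might look something like the following. The class, the complaint threshold, and the action names are invented for illustration; Primus has not published its Telemarketing Guard implementation.

    # Hypothetical sketch of vote-based caller reputation (not Primus's actual system).
    from collections import defaultdict

    class CallerReputation:
        def __init__(self, complaint_threshold=25):
            self.accepts = defaultdict(int)     # customers accepted calls from this number
            self.complaints = defaultdict(int)  # customers reported this number as unwanted
            self.complaint_threshold = complaint_threshold

        def report(self, caller_id, accepted):
            """Record one customer vote about a caller ID."""
            if accepted:
                self.accepts[caller_id] += 1
            else:
                self.complaints[caller_id] += 1

        def action(self, caller_id):
            """Decide how to treat the next call from this caller ID."""
            complaints = self.complaints[caller_id]
            accepts = self.accepts[caller_id]
            if complaints < self.complaint_threshold:
                return "allow"      # no critical mass of complaints yet
            if accepts == 0:
                return "challenge"  # telemarketer pattern: ask the caller to record a name
            # Votes in both directions: the number may be a spoofed legitimate caller,
            # so grow more skeptical and distinguish the two call streams using network
            # characteristics other than the easily spoofed caller ID.
            return "analyze_other_features"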

Mr. FOSS. The thinking that went into it before everybody had voice-mail was that the call is going to be disconnected, you are going to lose the call forever. But now if we can just divert it to voice-mail, much like spam goes into your spam filter, I think everybody would rather have a voice-mail box with five or six robocalls in it than have the phone ring with five or six robocalls. It is absolutely not going to be 100 percent. But even with spam filters today, certain spam gets through, sometimes real e-mails get into your spam folder. And I think that we need to try it, and I think that we need to start somewhere.

Senator MCCASKILL. If a plane is late, we are talking about maybe 100, 200 people; we are not talking about thousands. I need to know what, if anything, these carriers are doing. And do they feel an obligation to do something?

Mr. RUPY. I think one of the points that was raised earlier by various folks on the panel here is that, under our current legal framework, regardless of whether it is a mass-calling event or sort of a standard calling volume, we are under a legal obligation to complete those phone calls.

Senator MCCASKILL. Are you saying that you legally couldn’t adopt Mr. Stein’s technology? The phone call connects; it just decides whether it goes to voice-mail.

Mr. RUPY. As I understand Mr. Stein’s and Mr. Foss’s technology, to a certain degree the decision is removed from the consumer and is made by the carrier.

Senator MCCASKILL. No, that is not true. That is not true. Mr. Stein, the carrier is not making the decision, is it?

Mr. STEIN. No, the carrier does not make that decision. The system doesn’t block a call under any circumstance, other than if the customer were to say, here is one given number that I don’t want (a blacklist, available on many services). In the case of Telemarketing Guard, it impedes the call and asks the caller to press a digit and record their name. But in all of those cases (those recorded names), the phone call is still made, et cetera.

[Based on the testimony of Stein and Foss, plus Canada having successfully blocked robocalls for several years, I wouldn’t put much credence in the testimony about why American telephone companies haven’t protected their customers from robocalls, whether from Rupy (below) or Altschul (the wireless spokesman, above)]

KEVIN RUPY, SENIOR DIRECTOR, LAW AND POLICY, UNITED STATES TELECOM ASSOCIATION

In addition to the harm they cause consumers, robocalls impact U.S. Telecom’s own member companies. Our companies’ customer service representatives represent the first line of defense on this issue. They must be well-versed in explaining to customers the difference between legal and illegal robocalls, providing them with information on how to file a complaint with the FTC, and pointing them to tools to help them mitigate these calls. Robocalls can also adversely impact our companies’ networks. Mass-calling events are typically highly localized, high-volume, extremely brief, lasting only a matter of minutes. And carriers receive no advance warning of these calls. A severe mass-calling event can result in service degradation and disruptions to phone services in a provider’s impacted area. Moreover, illegal robocalls exacerbate an already troubling problem in our industry known as phantom traffic: calls that evade the established intercarrier compensation regime.

It is unlikely that any single technological silver bullet can permanently address the robocall problem.

Significant Legal Constraints Limit Potential Robocall Deterrents. Two primary legal issues face USTelecom’s member companies with respect to remedying the robocall problem. First, under existing laws to which USTelecom’s members are subject for their provision of legacy voice service, phone companies have a legal obligation to complete phone calls. These companies may not block or otherwise prevent phone calls from transiting their networks or completing such calls. The current legal framework simply does not allow our companies to decide for the consumer which calls should be allowed to go through and which should be blocked. Second, there are substantial privacy issues that arise in any discussion relating to proposed robocall solutions. Robocalls are extremely contextual in nature. Depending on the nature of the call, certain robocalls are permitted under the law, while others are prohibited. Proposed solutions to the robocall dilemma that seek to make phone service providers the arbiter of whether a call should—or should not—be permitted to proceed skirt dangerously close to violating the privacy obligations imposed on us by law. For example, the Wiretap Act (also known as Title I of the Electronic Communications Privacy Act (ECPA) or Title III of the Omnibus Crime Control and Safe Streets Act of 1968) expressly protects wire, oral, and electronic communications while in transit and establishes that service providers are permitted to intercept those communications only as a necessary incident to the rendition of service or to the protection of the rights or property of the provider.

Today’s solution could very well turn into tomorrow’s Maginot Line, and could have unintended adverse consequences. For example, solutions that rely extensively on blocking calls populated by a blacklist could very well result in the blocking of legitimate calls from callers whose own phone numbers have been illegally spoofed. Conversely, solutions implementing call blocking features based upon a whitelist could potentially block an important— albeit unexpected—message from a legitimate caller. Even more perversely, the availability of spoofing technology can easily fool consumers into taking calls they should avoid. For example, spoofing the number of the local municipal hospital could dupe a senior citizen into believing that a fraudulent effort to sell phony medical products is actually a legitimate call from a whitelisted number. Given the open nature of the broadband network, technological solutions can be—and often are—superseded by technological countermeasures. The same increasingly appears to be the case for legislative and regulatory solutions, which regrettably do not seem capable of keeping pace with the evil genius of scammers who continually invent new ways of evading discovery and capture, much less prosecution and punishment. As noted earlier, we have been trying to legislate out of existence the problems of robocalling, spam, autodialing, and caller-ID spoofing for as long as two decades, but new technologies only seem to make the problems grow worse.

I would like to use a screenshot of a text message that I received on Monday to illustrate the difficulties we face in trying to solve this problem. And, by the way, wireless carriers do screen text messages and successfully block millions of them, I believe, every day. Voice calls have to be found at the source to be cut off. As you can see, this text message appears to be an informational text message about my account at a local financial institution. In fact, I have provided my express prior consent to the financial institutions where I have accounts, authorizing them to send me informational text message alerts about fraudulent activity, data breaches, and other time-sensitive account information. But since I do not have an account at this institution, I knew immediately it was a phishing scam that violates both the TCPA and the Truth in Caller ID Act, which prohibits the spoofing of caller IDs. Scammers, especially those outside of the United States, are not deterred from violating the TCPA or the Caller ID Act. For this phishing scam, the fraudster spoofed the caller ID of a local Washington, D.C., phone number. As it turns out, this number is not in service. It happens to be assigned to a CLEC. But I called it and got a recording that the number is not in service. So this is not a real phone number assigned to a user. But the fraudster could just as easily spoof the financial institution’s actual phone number or tumble phone numbers randomly to defeat the use of blacklists and whitelists. And this is why this is such a difficult problem to solve. Carriers do not know which businesses and public agencies the customer has given express prior consent to send informational calls and messages. And even if a carrier did know this information, fraudsters can spoof whitelisted numbers and appear to be a legitimate business sending informational calls and messages to its customers. We appreciate the efforts of the FTC and others who are exploring technologies that may minimize the transmission of illegal robocalls and text messages to our customers. However, as H.L. Mencken famously observed, there is always a well-known solution to every human problem: neat, plausible, and wrong. This wise counsel cautions us that any technical solutions must be subject to careful and complete consideration.



U.S. Senate 2014 Freight rail service: improving the performance of America’s rail system

Preface. This Senate hearing consists mainly of oil and grain industry leaders bashing the rail industry and asking Senators to do something about it. But the railroad industry is four times more energy efficient than trucks, and not guilty of the accusations, as you’ll see in the testimony of Hamberger, head of the Association of American Railroads.

Unlike autos, trucks, aircraft, and shipping, the rail industry is not subsidized by the government, even though it uses less oil per ton than all of these modes but shipping, and reinvests more of its profits than nearly all other industries to maintain its infrastructure. That leaves little extra to add miles of rail track. If we’re going to spend money on infrastructure, then the government should subsidize rail; perhaps it would even pay for itself, since there would be fewer trucks on the road wasting fuel and damaging roads.

But that won’t happen, because businesses and citizens must have things right now. Trucks can deliver just-in-time, unlike rail, but they often arrive half empty, carrying just what’s needed, and often return empty.

After fossil fuels decline, it will be hard to imagine a time when there was too much STUFF, like oil, grain, and iron ore, for railroads to move. Oh how we will rue the waste of diesel fuel some day!


***

Senate 113-616. September 10, 2014. Freight rail service: improving the performance of America’s rail system. U.S. Senate hearing.

Edward R. Hamberger, President & CEO, Association of American Railroads: We have problems. We did not see the surge in traffic coming. Many of our customers did not, either. In fact, last August there were over 50,000 grain cars in storage. And then in the fourth quarter the demand hit. The weather—you mentioned it, Mr. Chairman. Yes, it snows every year. But this particular year Chicago had record cold and record snow. We are a network industry. One-third of our traffic originates, terminates, or transits through Chicago. When Chicago has problems, it ripples through the entire network.

In August 2014, just last month, we moved more merchandise overall than we have since October 2007, before the recession. So the economy is coming back. We hope to move 17 million new automobiles this year. On the intermodal side, we’re going to take 13 million trucks off the road as we grow intermodal at 7% this year.

Cal mentioned average rail rates spiking, and he’s right. They have gone up over the last several years, according to this chart, all the way back to where they were in 1988 in inflation-adjusted terms, 17% below where they were in 1981. So they’ve spiked all the way back to where they were in 1988.

As volumes increase, a number of factors make rail networks exceedingly complex to plan and manage.

  • Trains of a single type can often be operated at similar speeds and with relatively uniform spacing between them. This increases the total number of trains that can operate over a particular rail corridor. This situation, however, is relatively rare.
  • Far more common is for trains of different types—with different lengths, speeds, and braking characteristics—to share a corridor. When this happens, greater spacing is required to ensure safe braking distances and to accommodate different acceleration rates and speeds. As a result, the average speed drops and the total number of trains that can travel over a rail corridor is reduced.
  • Service requirements. Different train types and customers have different service requirements. For example, premium intermodal trains demand timeliness and speed; for bulk trains (e.g., coal or grain unit trains), consistency and coordinated pick-up and delivery is the priority; customers who own their own rail cars will want railroads to implement strategies which help them minimize fleet-related costs, for example by maximizing the number of ‘‘turns’’ (loaded to empty to loaded again) the rail cars make; passenger trains require high speed and reliability within very specific time windows; and so on.
  • The need for safe operations is ever present, and proper line maintenance is essential for safe rail operations. In fact, because of higher rail volumes and a trend toward heavier loaded freight cars, the maintenance of the rail network has become even more important. Railroads have no desire to return to the days when maintenance ‘‘slow orders’’ (speed restrictions below the track’s normal speed limit) were one of the most common causes of delay on the rail network. That’s why maintenance is one of the most important parts of any railroad operating plan. It necessarily consumes track time that otherwise could be used to transport freight.
  • Traffic volumes are not always foreseen. When planning their operations, railroads use past experiences, customer-provided forecasts, economic models, and other sources to produce their best estimate of what demand for their services will be well into the future. Railroads use those traffic forecasts to gauge how much equipment, labor, and other assets they need to have on hand. As with any prediction of future events, these traffic forecasts are imprecise predictors of markets. After a certain amount of traffic growth beyond what was anticipated, available resources will be fully deployed, and additional assets (some requiring long lead times—see below) will be needed.
  • Traffic mix. The U.S. and global economies are constantly evolving. Firms—even entire industries—can and do change rapidly and unexpectedly. The collapse of the construction industry when the housing bubble burst in 2007 and the recent rapid growth in ‘‘new energy’’ production are just two examples. These broad, often unanticipated economic changes are reflected in changes not only in the volumes (see above paragraph) but also in the types and locations of the commodities railroads are asked to haul. If the commodities with rail traffic declines traveled on the same routes as commodities with traffic increases, the challenges these changes presented to railroads’ operating plans would have less impact. However, when traffic changes occur in different areas—as is usually the case and certainly has been the pattern in recent years—the challenges to railroads’ operating plans are magnified.
  • Resource limitations. Like firms in every industry, railroads have limited resources. Their ability to meet customer requirements is constrained by the extent and location of their infrastructure (both track and terminal facilities) and by the availability of appropriate equipment and employees where they are needed. Terminals—where trains are sorted, built, and broken down, similar in certain respects to airline hubs—are a case in point. If a train cannot enter a terminal due to congestion or some other reason, then it must remain out on a main line or in a siding where it could block or delay other traffic. The ability of a terminal to hold trains when necessary and to process them quickly is one of the key elements in preventing congestion and relieving it when it does occur. Thus, one of the most important factors in increasing capacity for the rail network is enhancing the fluidity of terminals. Unfortunately, terminals are often one of the more difficult areas in which to add capacity, in part because they are frequently in, or near, urban areas. Expansion generally means high land and, potentially, high mitigation costs. Even in less urban areas, a rail terminal is rarely considered positive by nearby residents, and its development or expansion to accommodate freight growth is usually the subject of intense debate.
  • Need for long lead times. It’s an unfortunate reality that many of the constraints railroads face—particularly those involving their physical network—usually cannot be changed quickly. For example, it can take close to two years for locomotives and freight cars to be delivered following their order; six months or more to hire, train, and qualify new employees; and several years to plan, permit, and build new infrastructure. Rail managers must use their best judgment as to what resources and assets will be needed, and where, well in the future. Usually, this process works well, but when those judgments are off, serious problems can ensue. When these judgments must also deal with the uncertainties of rapid and historically unstable market changes, such as the recent emergence of energy products moving by rail, the probability of successful forecasting is even further reduced. On a related point, firms in every industry walk a fine line when it comes to capacity. Generally speaking, if firms take too long to bring back idled capacity or to build new capacity, they risk shortages and lost sales. That’s the case in terms of some rail operations right now. On the other hand, if firms build capacity on the hope that demand will increase, they risk that the demand will not materialize and they will be saddled with added, and wasted, costs. Like other firms, railroads must balance these risks, and different railroads may come to different decisions as to how much ‘‘surge capacity’’ is needed and where to locate such capacity on their networks.
  • Railroads are networks. Last, but not least, the significance of the network aspects of rail operations cannot be overemphasized. Disruptions in one portion of the system can quickly spread to distant points. Railroads are not unique among network industries in this regard—weather problems at one airport can quickly cause problems at many other airports, for example. But unlike airline networks, where the overnight hours can usually be used to recover from the previous day’s problems, rail networks operate 24 hours a day, 7 days a week. Thus, incident recovery must be accomplished at the same time that current operations are ongoing and while the other factors mentioned above continue to come into play. That’s why, in extreme cases, recovery in rail networks can take months. The winter of 2013/2014 is one such extreme case that is discussed further below.

Much of the recent increase in crude oil production has occurred in North Dakota, where crude oil production rose from an average of 81,000 barrels per day in 2003 to close to a million barrels per day today. Most of North Dakota’s crude oil output is transported out of the state by rail.

Rail has a critical role in delivering these crucial benefits to our country. As recently as 2008, U.S. Class I railroads originated only 9,500 carloads of crude oil. By 2013, that had grown to 407,761, equal to around 11 percent of U.S. crude oil production.

That said, one must be careful when looking to ascribe blame to crude oil for the service problems railroads are currently facing, which, as discussed below, became especially acute during and after this past winter. As Chart 6 shows, Class I railroads originated 229,798 carloads of crude oil in the first half of 2014, up 11.7% (24,058 carloads) over the 205,740 carloads originated in the first half of 2013. That’s a considerably slower rate of growth compared with 2011 and 2012 trends. Crude oil accounted for just 1.6% of total Class I carload originations in the first half of 2014. Moreover, the 24,058 more originated carloads of crude oil in the first half of 2014 works out to less than 1.5 new train starts per day, on average. Surface Transportation Board data indicates that there are approximately 5,000 train starts per day. Thus, recent new crude oil train starts are a small fraction of total train starts nationwide.
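As a quick sanity check on those figures, here is a short worked calculation (Python). The carload counts and the roughly 5,000 daily train starts come from the testimony; the 100-car unit-train length is my assumption, not stated in the hearing.

    # Checking Hamberger's crude oil arithmetic.
    h1_2013 = 205_740          # Class I crude oil carloads, first half of 2013
    h1_2014 = 229_798          # Class I crude oil carloads, first half of 2014
    extra = h1_2014 - h1_2013  # additional carloads
    print(f"growth: {extra:,} carloads ({extra / h1_2013:.1%})")  # 24,058 (11.7%)

    days = 181             # days in January through June 2014
    cars_per_train = 100   # assumed typical crude oil unit-train length
    new_starts = extra / days / cars_per_train
    print(f"new train starts per day: {new_starts:.2f}")  # ~1.33, i.e., less than 1.5
    print(f"share of ~5,000 daily train starts: {new_starts / 5000:.3%}")  # ~0.027%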

By comparison, in the first half of 2014 Class I railroads originated 182,425 more carloads of ‘‘miscellaneous mixed shipments’’ (most intermodal is in this category), 118,500 more carloads of grain, 84,118 more carloads of coal, 41,310 more carloads of crude industrial sand (this includes frac sand), 24,735 more carloads of motor vehicles and parts, 20,949 more carloads of chemicals, and 18,246 more carloads of dried distillers grains (DDGs, a byproduct of ethanol production used as animal feed) than in the first half of 2013.

Rather than saying that crude oil is crowding out other traffic, it is more accurate to say that, right now, on some railroads, on some lines, rail capacity is a scarce resource. But as noted earlier, infrastructure creation takes time, even for urgent programs. For the time being, on congested rail lines, all commodities railroads are hauling are competing with each other for available capacity.

Coal Traffic Has Been Higher Than Anticipated. In addition to leading to sharply higher crude oil production, the ‘‘shale boom’’ has also led to sharply higher natural gas production and, consequently, lower natural gas prices than what they once were. That has made electricity generated from natural gas much more competitive vis-à-vis electricity generated from coal. However, over the past 18 months or so, not only has the coal share of U.S. electricity generation stopped falling, it’s actually risen, as utilities that had been generating electricity from natural gas switched back to lower-priced coal. According to the U.S. Energy Information Administration, in the first half of 2013, coal accounted for 764 million megawatthours of U.S. electricity generation, equal to 39.1% of the total. In the first half of 2014, coal accounted for 806 million megawatthours, or 40.1% of U.S. electricity generation. This past winter in particular, the price of natural gas spiked, leading to greater than expected demand for coal and sharply higher rail coal volume.
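Those EIA figures imply that total generation rose along with coal’s share, which is why rail coal volumes grew on both counts. A minimal check, using only the numbers quoted above:

    # Implied totals from the EIA figures quoted in the testimony.
    coal_2013, share_2013 = 764, 0.391  # million MWh and share, first half of 2013
    coal_2014, share_2014 = 806, 0.401  # million MWh and share, first half of 2014
    print(f"implied total, H1 2013: {coal_2013 / share_2013:,.0f} million MWh")  # ~1,954
    print(f"implied total, H1 2014: {coal_2014 / share_2014:,.0f} million MWh")  # ~2,010
    print(f"coal generation growth: {coal_2014 / coal_2013 - 1:.1%}")            # 5.5%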

Extreme Weather Wreaked Havoc on Railroads, Especially in Chicago. The railroad ‘‘factory floor’’ is outdoors and nearly 140,000 miles long. As such, railroading is arguably more susceptible to weather-related problems than any other major industry.

Extreme weather events, particularly sustained extreme weather events, can wreak havoc on rail operations. For example, extremely cold weather can force railroads to dramatically shorten the length of their trains, while snow accumulation can make it difficult to keep rail yards functioning. In much of North America, this past winter was one very long, very severe extreme weather event, with both record cold temperatures and record precipitation. While this past winter was unusually harsh in much of the country, it was especially so in the Chicago area. Chicago has been a crucial nexus in the North American rail network for over a century. Today, nearly 1,300 trains (500 freight and 760 passenger) pass through the region each day. In fact, around one-fourth of the Nation’s freight rail traffic passes through or near Chicago. As such, when railroading becomes difficult in Chicago, it quickly becomes difficult throughout the rail network. According to the National Weather Service, Chicago experienced its coldest four-month period on record between December 2013 and March 2014, with an average temperature of 22 degrees and a record number of days (26) at zero degrees or below. Chicago’s 82 inches of snow this past winter was the third-highest in history and well over double the annual average of the previous 20 years. Moreover, during ordinary winters, there is usually time between storms to do some clean-up. Railroads typically ensure that their winter staffing levels are adequate to deal with these problems. However, that was often not the case this year due to short intervals between storms. In Chicago, for example, once the bad weather started, there was never a real opportunity for railroads to get their operations back to normal before the next severe cold spell or winter storm hit. The problems in Chicago and elsewhere in the Midwest were compounded by the fact that the severe weather occurred unusually far south this year so that the geography needing relief was much larger. Usually, the southern regions have served as relief valves during northern disruptions, and early last winter diversion of trains into this region was being planned, where possible. However, that outlet was not generally available for much of the past winter.

For example, a series of ice storms in a band between Atlanta and Memphis made it unsafe, sometimes impossible, for train crews to get to work in this region or for maintenance crews to properly tend to the many day-to-day problems requiring resolution in a properly operating railroad. The result was rail congestion in an area which has typically been available to relieve problems created by winter weather further north. Now, it’s true that, as some rail critics have charged, ‘‘winter comes every year,’’ but to claim that this past winter was typical is to be disingenuous. I respectfully submit to you that, if we had had a ‘‘normal’’ winter this year, the capacity challenges we have seen would likely be at a significantly lower level. We should also remember that the challenges which have faced rail operations in many key areas were further exacerbated by widespread, regional spring flooding that was largely the result of the severe winter. As noted above, when capacity is constrained, disruptive incidents are more common and recovery takes longer than when the network is not fully utilized. In a nutshell, that explains why the events of this past winter continue to affect rail operations today.

Current Service Issues Are Not a Good Reason to Increase Government Control of Rail Operations. It is unfortunate that some groups are seeking to take advantage of the current rail service problems to advocate for far-reaching changes to the regulatory regime under which railroads operate that would result in a much greater government role in freight rail operations. That would be a profound mistake.

Railroads are the best way to meet this demand, and they’re getting ready today to meet the challenge. They will continue to reinvest huge amounts back into their systems, as long as a return to excessive regulation does not prevent them from doing so.

In short, it would force railroads—through what amounts, in one way or another, to price controls—to lower their rates to favored shippers at the expense of other shippers, rail employees, and the public at large. Billions of dollars in rail revenue could be lost each year. Artificially cutting rail earnings in this way would severely harm railroads’ ability to reinvest in their networks. The industry’s physical plant would deteriorate; essential new capacity would not be added; and rail service would become slower, less responsive, and less reliable at the very same time that rail customers are demanding more rail capacity and more reliable rail service. It makes no sense whatsoever to enact public policies that would discourage private investments in rail infrastructure when our Nation needs more of it.

We are also making investments to prevent accidents. In the area of prevention, we are doing increased inspections of rail. We’re putting roadside detectors out so that when a train goes by we can actually detect acoustically if there is a bearing defect. We also have laser beam readers to try to see cracks in the wheel before the wheel splits apart. And obviously maintaining the track and maintaining the bridges is high on the agenda as well.

Senator HOEVEN. The point I want to make is this: The railroads need to bring more resources to meet the needs in North Dakota. We have a growing state and we’re moving not only ag products right now—we’ve got the harvest that’s under way, so we’ve got more coming—but with energy and with growth in other areas, manufacturing and so forth in our State, we need more capacity on the part of the railroads. They need to bring more cars, more locomotives, more people. And they need to build more track. Right now the need is particularly critical for our ag shippers, both because of the current backlog and because we’ve got harvest under way. So we need it for coal and for oil and gas and for other commodities as well, but it is a very acute problem right now for our farmers.

BNSF has put forward a very substantial resource plan to address the need. That includes $5 billion of investment this year all in for the whole system. It means about 500 locomotives, 5,000 new railcars, 250 more workers in North Dakota, about $400 million in additional track in North Dakota. So it is a substantial commitment. So we need to monitor that and make sure that that happens and that that investment does meet the need. They cover about, I would say, 75% of the volume in our state.

CP needs to make that same commitment. I’ve had the CEO of CP in Minot, North Dakota. We had a meeting. They talked about investing $150 million over the next year. But they have not provided us with a specific resource plan. Also, they’re working on changing their ordering system for shippers ordering cars. That may work, but it’s got to be fair. They can’t cancel orders on shippers, and it needs to be a transparent process so that we understand how it works and so that we have accurate reporting.

Senator HEITKAMP. This continues to be a problem in Montana and Minnesota as well; our producers are in dire straits. This isn't just about who gets preference and having your feelings hurt. This is about the very real economic consequences of what's happening in farm country in our state and across the Northern Tier, across the board. Burlington Northern, I think, in many ways gets it: this is a permanent problem. We're going to continue to ship crude by rail, we're going to continue to see bumper crops, and soybeans deteriorate very quickly, so we've got to get them to market. As dire as that pile of wheat is, if those were soybeans you would basically have condemned that crop. If we go into freeze with that pile still on the ground, it has huge economic consequences for those producers.

I think it's a matter of whether the STB believes this is permanent: whether this is a one-time glitch in the system or whether we're going to need a permanent increased buildout. I happen to believe we need a permanent increased buildout. Given the history of siting pipelines in this country, we're going to continue to move oil on the rails. Your committees have already discussed the safety issues. But we're at 1.1 million barrels a day pretty much in North Dakota. We think that's going to grow another 20 to 30 percent. Where is that oil going to move? It's going to move on the tracks. It's going to move in pipelines, but it's also still going to move on the tracks.

I tell you, the big concern I have is that what we're still hearing suggests they don't get that this is a permanent problem, one that needs a huge infusion of capital to solve.

The CHAIRMAN. I initially took an interest in rail policy after hearing from West Virginia shippers who expressed frustration with high rates and poor service. That began 30 years ago, and my progress has been measured in quarter-inch segments; the same complaints shippers made then are being made today. And yet here we are, trying to confront the same issues that have plagued shippers for decades.

The rail industry looks far different than it did 30 years ago. Competition in the industry has decreased: before enactment of the Staggers Act in 1980, there were approximately 40 large railroad companies; today that figure is closer to seven. So competition is down, and profits are up. In passing the Staggers Act, Congress recognized the need for a robust freight rail system. The Act was in many respects a big favor to the industry, because it recognized that railroads had to spend capital to run the system properly. Well, they got the capital, but they haven't necessarily used it properly. That law made sweeping regulatory changes which gave the railroad industry an opportunity to improve its finances and to compete against other transportation modes, which the industry liked a lot. The Staggers Act also sought to provide, and I quote, ''the opportunity for railroads to obtain adequate earnings to restore, maintain, and improve their physical facilities while achieving the financial stability of the national rail system.'' Make no mistake; in that regard, the Staggers Act has worked.

In 2010, I released a Commerce Committee majority staff report which found that four Class I railroads dominate the rail shipping market, and that they are achieving returns on revenue and operating ratios that rank them among the most profitable businesses in the entire United States economy. A follow-up majority staff report I released last November corroborated the 2010 findings: freight railroads continue to set new financial records on a quarterly basis, and these companies continue to raise their dividends and buy back record amounts of stock. So cash is not the problem.

But not everybody is doing as well as the railroads. In this world there is meant to be some kind of balance between those who transport and those who ship. The STB hasn't found a way to achieve it, and we can't get anything passed that would. Many of the witnesses here today have struggled to remain competitive as rail service declines and rates increase, and the situation continues to get worse. For several months now, the agricultural, coal, chemical, and automotive industries, among others, have been experiencing serious rail service delays, sometimes on the order of months. You can't blame everything on the winter; you just can't do that, sorry. And it's not just industry: passengers are also feeling the effects, with Amtrak's long-distance trains around the country severely delayed. Whether the cause has been extreme winter weather, a surge in Bakken crude oil production, a recovering economy, or a combination of factors, we must do more than we currently are to move our grain to market, coal to power plants, automobiles to consumers, and passengers to their destinations.
For many shippers this is their livelihood, and it's too important not to act. Therefore I look forward to hearing from the railroads on what is being done to alleviate these freight logjams as soon as possible, and I hope I don't hear the phrase ''We need more money in order to build better infrastructure for the future,'' because I've already heard that one, buddy.

JOHN THUNE, U.S. SENATOR FROM SOUTH DAKOTA. I only wish we could figure out a way to directionally drill up into the oil in North Dakota to bring it down into South Dakota. But I have often said that North Dakota has oil, Wyoming has coal, Montana has some of both, and in South Dakota we have pheasants. But we also raise a lot of agricultural commodities. We raise corn, wheat, and soybeans, and we have to have a way to get that to the marketplace, and that requires railroads, the most efficient way to move freight like agricultural commodities.

In South Dakota alone, this year's harvest plus what remains of last year's is expected to exceed statewide grain storage capacity by as much as 18%. Grain has already been stored on the ground; that was the wheat harvest that occurred earlier this year. What's so alarming is that this happened early in the crop year, and we've got much larger corn and soybean harvests coming this fall. The U.S. Department of Agriculture projects that South Dakota's 2014 wheat harvest will come in at 108 million bushels, a 14% increase over the three-year average, and the soybean and corn crops are also expected to be unusually large, potentially record-setting. Even with these high yields, the increased negative basis due to inadequate transportation, and the inability of grain handlers to move these crops in a timely way, could result in more than $300 million in lost value to South Dakota corn, wheat, and soybean producers.

As winter approaches, ethanol plants will also become vulnerable to rail delays. Because of the nature of ethanol production, plants cannot simply be shut down during winter months. South Dakota ethanol producers, like Glacial Lakes and Redfield, rely on adequate rail service to avoid shutdowns that could freeze pipes and cause major structural damage to their operations.

In addition, South Dakota’s Big Stone Power plant has indicated that they’re running below capacity because they simply can’t get enough coal to fuel the most efficient operation. Coal stockpiles are alarmingly low and rail service simply hasn’t provided adequate coal supplies.

The Surface Transportation Board has taken several steps to address these rail service challenges, including issuing a number of orders designed to increase transparency. On June 20, the Board issued a grain order to provide additional transparency and ensure both Canadian Pacific and Burlington Northern Santa Fe Railroads had plans for reducing their grain car backlogs. While the STB has been working hard to address the current rail service issues facing South Dakota and other states in the Northern Tier of the United States, this crisis has highlighted some of the inefficiencies that currently exist at the STB. On Monday, Chairman Rockefeller and I introduced Senate Bill 2777, the Surface Transportation Board Reauthorization Act, which is a first step in addressing these inefficiencies so that the STB can better assist shippers and railroads when problems arise.

ARTHUR NEAL, DEPUTY ADMINISTRATOR, TRANSPORTATION AND MARKETING PROGRAM, AGRICULTURAL MARKETING SERVICE, U.S. DEPARTMENT OF AGRICULTURE

USDA's current analysis indicates that grain production and grain stocks this harvest season are expected to exceed permanent grain storage capacity by an estimated 694 million bushels (3.5% of the expected U.S. record harvest) in seven states, which are, in decreasing order of storage capacity shortage: South Dakota, Indiana, Missouri, Illinois, Ohio, Michigan, and Kentucky. This quantity is the equivalent of 173,500 jumbo covered-hopper rail cars, 13,219 barges, 881 15-barge tows, or 762,600 truckloads. This level of storage capacity shortage is higher than in any year since 2010, which had an 805 million bushel shortfall in permanent storage capacity distributed throughout the top 14 grain-producing states. Because 2013 grain is reportedly still in storage and waiting to be moved before the 2014 harvest, it is critical to move as much of the 2013 grain crop as quickly and efficiently as possible. USDA is concerned that railroad service to grain shippers may not recover in time for the 2014 harvest. Should this happen, grain elevators could run out of storage capacity, and grain could be stored on the ground and run the risk of spoiling.
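[A quick back-of-the-envelope check of those equivalences. The per-unit capacities below aren't stated in the testimony; the sketch simply divides the 694-million-bushel total by each count to see what capacity each conversion implies:]

```python
# Sanity check on USDA's equivalences for the 694-million-bushel storage
# shortfall. The per-unit capacities are implied by the testimony's own
# numbers, not quoted from USDA directly.
shortfall_bu = 694_000_000

implied = {
    "jumbo covered-hopper rail cars": shortfall_bu / 173_500,  # ~4,000 bu/car
    "barges": shortfall_bu / 13_219,                           # ~52,500 bu/barge
    "truckloads": shortfall_bu / 762_600,                      # ~910 bu/truck
}
for unit, capacity in implied.items():
    print(f"{unit}: ~{capacity:,.0f} bushels each")

# 881 tows of 15 barges each should roughly match the barge count:
print(881 * 15, "barges in 15-barge tows vs.", 13_219, "barges quoted")
```

[The implied capacities, roughly 4,000 bushels per hopper car and 910 per truck, are plausible industry figures, so the arithmetic hangs together.]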

JERRY D. COPE, PRESIDENT, SOUTH DAKOTA GRAIN AND FEED ASSOCIATION AND MARKETING MANAGER, DAKOTA MILL & GRAIN

South Dakota and ag are very closely linked; it's our number one industry. We rank in the top ten in production of the major crops grown in the United States. However, our state is landlocked. The railroads are our lifeline, our link to the economy. Right now we're served by two railroads, the Burlington Northern Santa Fe and the Rapid City, Pierre, and Eastern. Without them, our farmers don't have an economy, don't have a life.

When it comes to things like quality, we’re having some problems with South Dakota wheat, but if elevators are full we don’t have any room to blend or clean that grain, so that grain faces a risk of not even being marketable.

We could invest in storage, but the problem we run into is that investments of millions of dollars are made based on railroad predictability. If we weigh the costs against the risk and we can't rely on the railroad, do we actually invest the dollars?

Destination markets are often beyond the practical reach of trucks, making rail service a critical lifeline for the livelihood and economic well-being of our state. South Dakota exports the majority of its crop production by rail: to terminals in the Pacific Northwest and the Gulf of Mexico, to livestock feeders in the Southwest, and to flour mills in the eastern half of the United States. Approximately 45% of the corn grown in South Dakota is processed in state; the refined ethanol is shipped by rail to population centers in the west and east, and the corn by-products go to feed markets across the country. Over 75% of the wheat, soybeans, sorghum, sunflowers, and birdseed grains move by rail, either to domestic markets or for export.

CALVIN (CAL) DOOLEY, PRESIDENT AND CHIEF EXECUTIVE OFFICER, AMERICAN CHEMISTRY COUNCIL

The American business of chemistry is the second largest customer of the U.S. rail freight system. Thanks to the shale gas revolution here in the United States, we’re going to see the most dramatic increase in our production in history, and we’re going to be even more reliant on freight rail transportation.  The consolidation among Class I railroads has left only seven in operation today, with four rail companies controlling almost 90% of all shipments. Today, more than three-quarters of U.S. rail stations are served by only one rail company. And unlike the 1980s when many railroads were grappling with bankruptcy, today’s railroads are in a strong financial position. The consolidations are correlated to significant increases in rail rates. Rates increased more than 93 percent between 2002 and 2012, three times the rate of inflation.

Senator KLOBUCHAR. We are a major producer of taconite. As you can imagine, the Great Lakes shipping season is very short because of weather; it's going to close in just a few months. We have 2 million tons of iron ore pellets, which we want to send out to make money for our country and create more jobs, just sitting there in a pile. I hope that you would be willing to look into this, because winter is coming and we have only a finite shipping window.

The CHAIRMAN. Mr. Hamberger, last fall my staff prepared a report, as I indicated, on the financial condition of the largest Class I freight rail companies. It was based on the public financial information that your companies share with your investors. It found that your companies are setting records for earnings and operating ratios almost every quarter. It found that your companies are generating record-high earnings for your shareholders. It also found that your companies are buying back record amounts of stock, which also rewards your shareholders. You pretty much get what you want, and stop what you want, around here; that has been my experience over 30 years. So the question I'm going to ask you is this: you're doing a great job for your shareholders. What about these folks sitting to your right? Why can't your companies do a better job for their customers? Why are shippers not benefiting from the excellent, extraordinary financial condition of freight railroads?

Mr. HAMBERGER. We believe that the appropriate metric of profitability, of how well you're doing economically, is return on invested capital. We are an incredibly asset-rich, asset-based industry, with $180 billion of assets in the ground in the network, and that's just book value. We are at a 7.74% return on invested capital; the Fortune 500 average is 12.93%. So we are a little over halfway to the Fortune 500 average, and we need to be able to improve that return on invested capital.

With respect to dividends and share repurchases (this is material that was just filed last Friday at the STB by Union Pacific, so I'm using Union Pacific data): of their free cash flow, 63.2% goes to capital expenditures, 14.7% to dividends, and 22.1% to share repurchases. For the S&P 500, the corresponding numbers are 44.8% for capital expenditures, 21.7% for dividends (roughly 50% more than ours), and 33.6% for share repurchases versus our 22.1%, again roughly 50% more. So we think that we are in fact spending 63.2% of free cash flow, at least at Union Pacific, on investments to serve our customers.
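[A quick check of the "roughly 50% more" arithmetic, using only the percentages quoted above:]

```python
# Verifying Hamberger's "roughly 50% more" comparison of how Union Pacific
# and the S&P 500 split free cash flow (percentages as quoted above).
up    = {"capex": 63.2, "dividends": 14.7, "buybacks": 22.1}
sp500 = {"capex": 44.8, "dividends": 21.7, "buybacks": 33.6}

for use in ("dividends", "buybacks"):
    print(f"{use}: S&P 500 share is {sp500[use] / up[use]:.2f}x Union Pacific's")
# dividends: 1.48x, buybacks: 1.52x -- i.e. roughly 50% more, as claimed.
```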

The CHAIRMAN.  You can’t go to trucks because it would destroy the highway system. I know that from coal trucks in West Virginia.

NATIONAL RURAL ELECTRIC COOPERATIVE ASSOCIATION

NRECA is the national service organization for more than 850 Distribution and 65 Generation and Transmission (G&T) not-for-profit rural electric utilities that provide electric energy to over 42 million people in 47 states.

The testimony to follow provides background on rail service delivery issues from Dairyland, Sunflower and Arkansas Electric Cooperative Corporation.

Low-sulfur Powder River Basin (PRB) coal is the primary fuel source for Dairyland and a number of other base-load generation facilities.

Reliable delivery service is necessary to ensure coal is available in sufficient quantities to produce power to meet demand. Coal delivery problems require Dairyland to use higher cost generation and/or purchase power on the open market, often at a premium, to meet members’ energy needs. Dairyland currently owns 250 rail cars and leases six more. They lease a full train set (about 125 rail cars) for shipments to the Mississippi River terminal in Iowa. The combined coal deliveries in any given year range from 2.0–2.4 million tons, or roughly 130–160 train loads.

Approximately 90–100 train loads are delivered to JPM annually. Average turnaround time (ATT) is defined as the time it takes for a train to make a round trip from the mine to the offload site and back again to the mine. Prior to 2014, ATT averaged six to eight days, which generally meets the fuel needs for the JPM plant. The station can unload an average train set in about six hours which provides three to four days of generation. In preparation for supply disruptions, the goal is to have between 30 and 50 available days of operation on hand to sustain reliable generation.

Two barges provide one day of generation. In order to meet Dairyland's generation needs for its members throughout the year, it is critical to have reliable rail and barge transportation from carriers. To prepare for supply disruptions, Dairyland's goal is to have 165–195 available days of operation on hand prior to the end of October to provide generation for the winter. Since the Upper Mississippi River usually freezes, the typical barge delivery season is March through October, roughly 30 to 35 weeks.

To equal one train set of coal, 630 truckloads would need to be delivered, equating to 87,000–104,500 truckloads to deliver Dairyland's annual supply.
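[Those truck numbers check out under typical payload assumptions. The per-car and per-truck tonnages below are my assumptions, not figures from the testimony:]

```python
# Rough check of Dairyland's truck-equivalence numbers. Payload assumptions
# (not stated in the testimony) are typical figures: ~115 tons per coal
# hopper, ~125 cars per train set, and ~23 tons per highway truck.
tons_per_car, cars_per_train, tons_per_truck = 115, 125, 23

tons_per_train = tons_per_car * cars_per_train        # ~14,400 tons
trucks_per_train = tons_per_train / tons_per_truck    # ~625, close to the 630 quoted
print(f"trucks per train set: ~{trucks_per_train:,.0f}")

for annual_tons in (2.0e6, 2.4e6):                    # stated annual deliveries
    print(f"{annual_tons / 1e6} M tons -> ~{annual_tons / tons_per_truck:,.0f} truckloads")
# -> ~87,000 and ~104,300 truckloads, matching the 87,000-104,500 range.
```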

Rail shipments to the Southeast Iowa Mississippi River terminal since March had not built inventory at a rate sufficient to keep pace with the barge shipments to Genoa needed to meet power generation. Had this trend continued, Dairyland's Genoa power plant would have run out of coal and been unable to generate power after January 2015.

AECC holds ownership interests in the White Bluff plant at Redfield and the Independence plant at Newark, each of which typically uses in excess of 6 million tons of PRB coal each year. In addition, AECC holds ownership interests in the Flint Creek plant at Gentry and the Turk plant at Fulton, each of which typically uses about two million tons of PRB coal each year.

PORTLAND CEMENT ASSOCIATION.  Cement is to concrete what nails are to wood. It acts as the glue that builds our bridges, roads, dams, schools and hospitals. The distribution of cement often occurs over hundreds of miles, and it must be done with carefully timed precision. A disruption in rail transportation and distribution can greatly influence the efficient delivery of cement; this can result in projects being delayed or cancelled. Rail carriers are vital to the movement of cement, representing approximately 65 percent of cement movements on a per ton basis.

AMERICAN BAKERS ASSOCIATION.  ABA advocates on behalf of more than 1,000 baking facilities and baking company suppliers. The baking industry generates more than $102 billion in economic activity annually and employs more than 706,000 highly skilled people.

Bakers are dramatically affected by the decrease in efficiency, as they depend on timely shipments from millers for their flour needs. Hard Red Spring Wheat is a primary ingredient in most breads and specialty baked goods. The majority of Hard Red Spring Wheat is grown in Montana, North Dakota, South Dakota, and Minnesota, all landlocked states dependent upon the railroads for shipping grain to end users across the country. While shipping wheat by truck is always suggested as an alternative, it would take four trucks to equal the capacity of one grain rail car, making trucking much less efficient than rail service. In addition, there is not enough trucking capacity in the U.S. today to make up for rail inefficiencies, making rail a critical lifeline for the baking industry. Bakers are captive to the railroads because grain millers cannot get access to Hard Red Spring Wheat by any means other than rail.

Bakers typically have only two to three days' worth of flour storage on premises. When shipments of flour from millers are delayed due to backlogs in wheat shipments by rail to the milling facility, bakers struggle to find alternative flour sources. In some cases, bakers have shut down lines and reduced staff to compensate for a lack of flour. Finished product has also been delayed on its way to the marketplace due to delays in fulfilling product orders and in intermodal transport.

M & G Polymers is a leading producer of polyethylene terephthalate (''PET'') resin in North America, with our principal domestic production facility located in Apple Grove, WV. We employ 144 people and generate roughly $500 million in annual revenues at our Apple Grove facility. Unfortunately, our facility is served by a single railroad, the CSX Railroad. Our customers want to receive our PET ''pellets,'' which are used by soft drink manufacturers and others to make plastic bottles, by rail, and they penalize us significantly economically if our products cannot be delivered by rail.

 


China is securing energy resources. A potential threat to Europe and U.S. interests.

Preface. China is vastly expanding its fleet of natural-gas heavy-duty trucks, to 700,000 in 2018, with similar or greater numbers to follow. They are building pipelines to Russia and other Central Asian countries to keep the gas coming. I vote them Richard Heinberg's "Last Nation Standing," with the Russians second, the U.S. third, and Europe among the many energy-resource-deprived nations to fall first.


***

House 113-160. May 21, 2014. The development of energy resources in Central Asia. U.S. House of Representatives.

Mr. ROHRABACHER. Natural resources, including gas and oil, are the building blocks of a nation's economic strength, and we all depend on these energy resources to power industry, heat homes, and bring us our food and other goods. The planet's scarce resources are distributed unevenly around the globe, so history is filled with accounts of nations, states, and businesses engaged in power plays and maneuvers to secure, move, utilize, and sequester natural resources.

A contest of resources is playing out right now in Central Asia.

And so this hearing asks the question, what does the future hold for energy resources in Central Asia? To highlight the importance of this topic, it was just announced today that Russia and Communist China agreed on a natural gas deal worth $400 billion. This is a significant development that takes more gas off the market, and of course this gas otherwise might go to supply Europe.

I have been warning about the growing military and economic power of Communist China for years. China has grown to become the world’s largest energy consumer. This makes Central Asia’s oil and gas essential to the Chinese Communist Party and their plans. The Communist Party rules their country with an iron fist and it also threatens their neighbors.

The Communist regime is now actively engaged in expanding its influence beyond its western borders and throughout Central Asia. Their aim is to secure access to energy resources through long-term contracts, investment loans, building pipelines back to China, and perhaps bribes. Make no mistake, these deals favor the corrupt leaders of the Chinese Communist Party, solidify their grip, and will not necessarily benefit the vast majority of the people of Central Asia. During the last decade, trade between China and the region has increased 30-fold and continues to climb. This is happening as China's worldwide effort to fence off critical natural resources from the West through bribes and intimidation continues. This is quite evident.

 

Mr. KEATING. Today's hearing topic provides us with an opportunity to examine the global impact of climate change, an expanding world population, and the accompanying social unrest.

In March 2013, for the first time Director of National Intelligence, James Clapper, listed competition and scarcity involving natural resources as a national security threat on a par with global terrorism, cyber war, and nuclear proliferation.

Central Asian states have long been pressured by Russia to yield large portions of their energy wealth to Russia, in part because Russia controls most existing export pipelines. Further, Chinese interest in the region is growing as well. Over the past decade, China has dramatically increased its imports from the region. Today, China imports over half of its gas from Turkmenistan. And last week, the Turkmen President presided over the opening of a new processing plant that will further increase the flow of Turkmen gas to China.

 

DENNIS C. SHEA, CHAIRMAN, U.S.-CHINA ECONOMIC AND SECURITY REVIEW COMMISSION

Over the last decade, China’s engagement with its Central Asian neighbors has grown significantly. In a region with a long history of Russian control and influence, China is now the most powerful economic actor and is poised eventually to surpass the United States and Russia as Central Asia’s preeminent foreign power.

The Chinese Government is increasing its economic ties with Central Asia, particularly in the energy sector, for two main strategic reasons. First, Beijing is expanding its energy relationship with Central Asian states as part of a long-term energy security strategy designed to diversify the types and sources of energy in an effort to reduce the risk of supply disruption. Some Chinese policymakers believe this strategy could mitigate China's so-called Malacca dilemma, its vulnerability to other countries imposing a blockade on Chinese trade at critical maritime chokepoints. However, Chinese oil demand is growing so fast that the share of seaborne imports will increase even if all of China's planned overland energy routes are realized. Second, Beijing seeks to promote the security and development of its Xinjiang Autonomous Region. Beijing judges that increased economic ties between China's westernmost region and Central Asia will raise the welfare of ethnic Uyghurs, thereby helping to rein in ethnic unrest in Xinjiang.

Chinese companies own so many projects in Kazakhstan that experts estimate China controls between 25 and 50% of the country’s oil production. Turkmenistan accounts for more than half of China’s natural gas imports, and its future share of imports will likely increase with plans to elevate imports from 20 billion cubic meters per year in 2013 to 65 billion cubic meters by 2016.

Many Central Asian governments welcome China’s increasing economic engagement. Chinese investment, trade deals, and loans have enabled economic growth and development. However, Chinese economic engagement in Central Asia can be a double-edged sword. The region’s overreliance on energy exports to sustain growth can slow the development of competitive industries and democratic institutions. Additionally, at the local level allegations of poor business behavior by Chinese companies have led to protest and violence against Chinese workers and businesses.

The rise of Chinese influence in Central Asia at the expense of Russia coupled with the probable decline in overall U.S. interests in the region after the planned withdrawal of troops from Afghanistan will likely result in a major shift in the balance of power between the major external actors in favor of China.

 

CHARLIE SANTOS, CHAIRMAN, UZBEKISTAN INVESTMENT GROUP, INC.

While we sacrificed more than 3,000 lives and spent more than $1 trillion on a nation-building exercise in Afghanistan, China sought to fill our policy vacuum, focusing on energy and pipelines in Central Asia, taking a page literally out of our policy playbook. So far they have constructed two pipelines, a third to be finished this year, and a fourth expected in 2017.

Our allies in Europe, with even more at stake in pursuing gas resources in countries like Uzbekistan and Turkmenistan, followed the U.S. lead even when it meant losing the possibility of greater energy supply diversification. This has led to greater dependence on Russian gas. With the withdrawal from Afghanistan and growing East-West tensions, 2014 has demonstrated that our disengagement from Central Asia has left the U.S. and its European allies doubly exposed.

Finally, there is no single way to solve Europe's energy dependency or bring stability to the region, particularly Afghanistan. But ignoring the importance of Central Asia, particularly the key countries that border Afghanistan, and forgetting our initial insights about the region will surely make matters worse. When we ignore building broader strategic relationships, as we have during the past 12 years, we make our country and our allies more vulnerable. The confluence of the Afghan withdrawal and growing tension in Europe this year gives us a chance to refocus our policies to help build a stronger and more independent Central Asia. It is an opportunity we should not squander.

 

DAVID MERKEL, FORMER DIRECTOR, EUROPE & EURASIA, NATIONAL SECURITY COUNCIL

If we are going to decouple Central Asia from Russia or from the growing influence of China, we need to join it up with Europe through Azerbaijan.

We need a higher level of engagement in the region. No sitting U.S. President has visited the region, while through bilateral and multilateral engagements the Presidents of China and Russia meet almost on a monthly basis. We shouldn't try to compete with that; we don't need to. But if we had a meeting in Baku with the Presidents of Turkmenistan, Uzbekistan, and Kazakhstan and President Aliyev, it would send a clear signal that the United States is supportive, not of bypassing Russia, not of punishing anybody, but of a very strong message for competition.

 

JEFFREY MANKOFF, DEPUTY DIRECTOR AND FELLOW, RUSSIA & EURASIA PROGRAM, CENTER FOR STRATEGIC AND INTERNATIONAL STUDIES

The major beneficiary of the struggles that both the United States and Russia have faced in this region has of course been China. And the reasons for China’s success are not hard to grasp. It is a growing market with exponentially expanding energy demand.

China's state-owned energy companies do not face the same financial constraints as Western firms. Flush with cash and comparatively insulated from the need to make an immediate return on their investments, they are less sensitive to political and economic risk and more responsive to political direction. China's emergence into the Central Asian energy game represents both an opportunity and a challenge. While the West has talked for two decades about new pipelines, China builds them, and it is pouring significant amounts of money into Central Asia in the process, thereby reducing Russia's hold on the region's economies.

The influx of Chinese state-directed investment does not come with the same demands for transparency and rule of law that Western investors seek. This in turn further entrenches Central Asia’s corrupt, patrimonial political systems.

For now, Chinese investment also gives the Central Asian states an alternative to their dependence on Russia. In the future though the danger exists that these states will end up having traded dependence on Moscow for dependence on Beijing. Under the circumstances, U.S. options are somewhat limited.


If we really cared about CO2, we’d reduce car size and weight, not make electric cars


Preface. My book "When Trucks Stop Running: Energy and the Future of Transportation" makes the case that it's trucks that need to be electrified to keep civilization going, since biofuels don't scale up, natural gas and liquefied coal are finite, and hydrogen is a net energy sink from start to end. Only transportation that keeps supply chains going matters, and trucks, rail, and ships nearly all run on diesel. As for why trucks can't be electrified, in addition to my book, see posts here.

The authors had a rebuttal in the Financial Times that accused this article of cherry-picking the data via an "apples-to-oranges comparison, pitting a luxury, high-power electric model against a subcompact, low-power petrol one". Although that may be true, I have a rebuttal to their rebuttal after the Financial Times article below. I feel it was a huge waste of my time to read the original paper, because its premises and assumptions are absurd: that batteries will continually improve, even though they've improved only 5-fold over 210 years and need to improve 50-100 fold to match gasoline; that 30 to nearly 100% of electric power will be renewable from 2030 to 2050; that monetary price, including subsidies, matters rather than energy costs; and that CO2 is the main criterion by which to judge cars. And the fact is that energy decline from peak fossil fuels will do far more to reduce CO2 than EVs ever could. Conventional oil (90% of our oil) peaked in 2005, and any day now the plateau could end and decline begin.


McGee, P. November 7, 2017. Electric cars’ green image blackens beneath the bonnet. Financial Times.

The humble Mitsubishi Mirage has none of the hallmarks of a futuristic, environmentally friendly car. It is fuelled by petrol, runs on an internal combustion engine and spews exhaust emissions through a tailpipe.

But when the Mirage is assessed for carbon emissions throughout its entire lifecycle — from procuring the components and fuel, to recycling its parts — it can actually be a greener car than a model by Tesla, the US electric vehicle pioneer, in regions with particularly high carbon emissions from electricity.

According to data from the Trancik Lab at the Massachusetts Institute of Technology, a Tesla Model S P100D saloon driven in the US Midwest produces 226 grams of carbon dioxide (or equivalent) per kilometer over its life-cycle, a significant reduction from the 385 g for a luxury 7-series BMW. But the Mirage emits even less, at just 192 g.
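[To put those per-kilometer figures in whole-life terms, here is a quick sketch; the 250,000 km vehicle lifetime is my assumption for illustration, not a number from the article:]

```python
# Lifecycle CO2e totals implied by the FT's per-km figures.
# The 250,000 km lifetime is an assumed round number, not from the article.
g_per_km = {
    "Tesla Model S P100D (US Midwest grid)": 226,
    "BMW 7-series": 385,
    "Mitsubishi Mirage": 192,
}
lifetime_km = 250_000

for car, grams in g_per_km.items():
    tonnes = grams * lifetime_km / 1e6  # grams -> metric tonnes
    print(f"{car}: ~{tonnes:.0f} t CO2e over {lifetime_km:,} km")
# ~57 t, ~96 t, and ~48 t respectively: on this grid mix, the subcompact
# petrol car edges out the large EV over the whole life-cycle.
```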

The MIT data substantiate a study from the Norwegian University of Science and Technology last year: “Larger electric vehicles can have higher lifecycle greenhouse gas emissions than smaller conventional vehicles.”

The point of such comparisons is not to make the argument for one technology over another, or to undermine the case for "zero-emission" cars. But they do raise a central issue about the industry: are governments and carmakers asking the right questions about the next generation of vehicles?

Policymakers are pushing the car industry toward a new era, but neither Europe, America, nor China has actually set up the appropriate regulatory apparatus to differentiate among electric vehicles and judge their environmental merits. The idea that some combustion engine cars can be greener than some "zero-emission" electric vehicles simply does not make sense in the current regulatory environment.

From a government standpoint, all electric vehicles are equally green — regardless of whether they are big or small, produced efficiently or with great waste, or powered by electricity generated by solar energy or coal.

Although multiple studies show electric vehicles to be greener than comparable combustion engine cars, many components of the electric car life-cycle are left out. To capture electric cars' full environmental impact, regulators need to embrace life-cycle analysis that takes into account car production, including the sourcing of rare earth metals for the battery, plus the electricity that powers the car and the recycling of its components. Life-cycle studies show that the idea of "zero emissions" is misleading: too much energy is consumed in manufacturing lithium-ion batteries, and in recharging them, for the environmental impact to be nil.

Also, the lack of regulation differentiating between electric vehicles encourages car makers to sell cars with bigger batteries and longer ranges — features that sound great but are at odds with electric vehicles’ green image, given the amount of lithium and cobalt used in the batteries.

However, the problem for makers of electric vehicles is that their efforts to limit emissions in the supply chain can only go so far. The uncomfortable reality is that battery manufacturing plays a bigger role in life-cycle emissions than anything else the car maker does.

A decade ago, this was not such a problem. Researchers could assume electric vehicles were small cars such as the Smart fortwo, which weighs less than a tonne. But Tesla upended these assumptions with the Model S, its roomy saloon which can weigh up to 2,250kg because of a massive battery that powers its impressive range.  These bigger batteries could damage the green credentials of electric vehicles, even if power grids are fueled by less coal and more renewables, given the poor environmental and ethical standards involved in procuring metals such as cobalt, 60 per cent of which comes from the Democratic Republic of Congo.

Tesla has been credited with accelerating a broader shift into battery-powered cars, but one result of its appeal is that average electric vehicle batteries will double from 20 kilowatt hours today to 40 kWh by 2025, according to UK investment bank Liberum. Peter Mock, managing director for Europe at the International Council on Clean Transportation, says many electric vehicles produced today feature a range that is too high, and the trend is towards even bigger batteries.


The average electric vehicle sold today offers a range of less than 250km, according to EV Volumes, a data provider. But the Renault-Nissan-Mitsubishi alliance announced plans in September to create 12 electric vehicles with at least 600km of range by 2022.

“For 90% of the vehicles it just doesn’t make sense to have such a big battery,” Mr Mock says. “Maybe it’s useful now in the transition phase . . . But rationally it doesn’t make sense. Most of us drive less than 100 km a day.”

 Regulators should take weight into account by taxing heavier vehicles and creating incentives for smaller models in both electric and traditional vehicles.

Mr Meilhan points out that petrol-engine cars weighing just 500kg — such as the French Ligier microcar or some popular “kei cars” in Japan — emit less lifecycle emissions than a mid-sized electric vehicle even when driven in France, where carbon-free nuclear power generates three-quarters of electricity.

“If we really cared about CO2,” he adds, “we’d reduce car size and weight.”

My rebuttal of the authors' rebuttal in the FT (which can be found here).

The original paper is here.  Miotti, M., et al. 2016. Personal Vehicles Evaluated against Climate Change Mitigation Targets. Environ. Sci. Technol 50:10795-10804.

What a waste of time. This paper evaluates fossil-fueled versus electric vehicles using the wrong evaluation criteria:

  • Vehicle cost shouldn't figure in a scientific study of energy efficiency.
  • Only energy use per mile, and energy used over the life cycle, should count.
  • Nor should EV subsidies be included in the cost; the Republicans or the next financial crash will end or reduce them.

Secondly, the paper assumes that the electric grid will use ever more renewable power. But conventional oil, the master resource that makes all others possible, including solar and wind contraptions over their entire life cycles, peaked in global production in 2005, so the electric grid cannot long outlast declining fossil fuels, and the life expectancy of these contraptions is only 20 years (wind turbines) to 30 years (solar). So even if we doubled and doubled and doubled today's solar and wind generation, we'd have to stop making these contraptions at some point of oil decline, because oil will be rationed to agriculture and other essential services, like heavy-duty trucks of all kinds, since they can't run on electricity (see posts here). And they aren't doubling every year, as you'll see below; in fact, at current rates of increase they won't even be 10% of generation by 2030, and likely less, since the best, most profitable sites are already taken (NREL 2013).

Solar and wind contribute a tiny fraction of electricity. So far in 2017, solar provided 0.95%; in 2016, 0.7%; in 2015, 0.48%; in 2014, 0.38%; in 2013, 0.27%; and so on. In every case the year-over-year increase was between one-tenth and one-quarter of a percentage point, not the doubling required. And these low annual percentages hide the fact that solar generation falls by about half in fall and winter (see posts here). Most of the U.S. has very little solar power, even though it's subsidized, because most states are too far north. Solar power is highly skewed: California has 60% of solar generation, Arizona 24% (see post here). It can't grow much more; the best spots are already built out. Sure, there are a lot of sunny places left, but it costs tens of millions of dollars to build transmission lines to them (EIA 2017). That cost is never included in the cost of solar electricity or solar plant construction; it is a huge free subsidy that hides the true cost of solar and wind electricity.
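[Here is the arithmetic on those solar shares; growth is real, but well short of a sustained doubling:]

```python
# Year-over-year growth in solar's share of U.S. generation, using the
# percentages quoted above (EIA 2017; the 2017 figure runs through August).
solar_share = {2013: 0.27, 2014: 0.38, 2015: 0.48, 2016: 0.70, 2017: 0.95}

years = sorted(solar_share)
for prev, curr in zip(years, years[1:]):
    delta = solar_share[curr] - solar_share[prev]
    ratio = solar_share[curr] / solar_share[prev]
    print(f"{prev}->{curr}: +{delta:.2f} points ({ratio:.2f}x)")
# Increments of 0.10-0.25 percentage points and ratios of ~1.3-1.5x per
# year: growth, but nowhere near doubling annually.
```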

Wind contributed more than solar, but still a trivial amount of overall electricity generation: 1.9% in 2014, 2.0% in 2015, 2.5% in 2016, and 2.6% so far in 2017 (through August) (EIA 2017).  But the best sites for wind are already built out as well.  Wind is also highly seasonal, doesn’t blow at commercial scales across most of America in the summer, and never at commercial scale year-round in the South East (see posts here).  We are far from having a national grid, so you’ll need a horse or bicycle to get your groceries half of the year just about anywhere you live.

Another consequence of peak oil that the study doesn't acknowledge is that declining oil, coal, and natural gas will reduce CO2 far more than EVs ever could.

Just as it is absurd to assume the grid will be 30% renewable by 2030, it is absurd to assume that batteries will grow ever more energy dense. The energy density of batteries has increased only 5-fold over the past 210 years, but batteries need to be 50-100 times more energy dense to come close to matching gasoline. That means batteries will always be too heavy to replace diesel in heavy-duty vehicles, and too expensive for 90% of consumers. The reasons it is so damned hard to make batteries more energy dense are here and here.
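[A rough check on that 50-100 fold figure; the specific-energy values below are ballpark numbers from the literature, my assumptions rather than anything in the paper:]

```python
# Comparing the specific energy of gasoline with lithium-ion cells.
# ~12,700 Wh/kg is gasoline's chemical energy; 150-250 Wh/kg spans
# typical current Li-ion cells. Both are assumed ballpark values.
GASOLINE_WH_PER_KG = 12_700

for cell_wh_per_kg in (150, 250):
    ratio = GASOLINE_WH_PER_KG / cell_wh_per_kg
    print(f"{cell_wh_per_kg} Wh/kg cell: gasoline ~{ratio:.0f}x denser")
# ~85x and ~51x, i.e. the 50-100x range. Engines waste most of that energy
# (~25-30% tank-to-wheel vs ~90% for EVs), which narrows the effective gap,
# but the mass penalty for long-haul trucks remains severe.
```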

Worse yet, comparisons are made on the basis of vehicle cost, with the assumption that EV costs will keep falling over time. Ridiculous! We've reached nearly all limits to growth, and lithium is certainly finite (there's lots of it, but most is embedded in minerals from which extraction takes too much chemical and/or heat energy). As noted above, vehicle cost, and especially subsidized cost, doesn't belong in a scientific study of energy efficiency; only energy use per mile and the energy used over the life cycle to make and operate the car, from mining to recycling, should count.

EIA. Monthly Energy Review. November 2017. U.S. Energy Information Administration Office of Energy Statistics U.S. Department of Energy. https://www.eia.gov/totalenergy/data/monthly/pdf/mer.pdf

NREL. 2013. Beyond renewable portfolio standards: An assessment of regional supply and demand conditions affecting the future of renewable energy in the west. Golden: National Renewable Energy Laboratory.


Will the Navy go out with a whimper instead of a bang?

Preface. This House hearing is about the continuing decline of the Navy. There are fewer and fewer ships, and the remaining ships are overused, some past their normal lifespan, and under-maintained.

As the U.S. descends into fascist plutocracy, enabled by the descent of Americans into the ignorant, conspiratorial, biblical, fake-news, and new-age myths that over 70% of Americans subscribe to, including President Trump, I can think of nothing better than letting the Navy atrophy, of going out with a whimper instead of an atomic bang.

Why spend trillions on ships that will be rusting and mothballed 15+ years from now as oil continually declines? If they need to build ships, make them sailing ships, which would also help to keep supply chains going.

Long-term American national security depends far more on stopping topsoil erosion and aquifer depletion than on our navy.


***

House 113-7. February 26, 2013. The future of seapower. U.S. House of Representatives.

In January, the Navy presented to Congress a goal of achieving a fleet of 306 ships, a reduction from the previous goal of 313 ships. The fiscal year 2013–2017 5-year shipbuilding plan contains a total of 41 ships, 16 fewer than the 57 ships projected for the same period in the fiscal year 2012 budget request. Of this 16-ship reduction, 9 ships were eliminated and 7 were deferred to a later time. It should be noted that at its current strength of 286 ships, under the 30-year shipbuilding plan submitted to Congress, the Navy will not achieve its goal of 306 ships until fiscal year 2039. Even worse, the Navy will experience shortfalls at various points in cruisers, destroyers, attack submarines, ballistic missile submarines, and amphibious ships. One would think the number of required ships would have increased rather than decreased, with the Navy now bearing the brunt of missile defense missions and the announced rebalance to the Asia-Pacific.

The Navy has been operating in a sustained surge since at least 2004. We have been burning out our ships more quickly because the demand has been high. Indeed, in the past 5 years roughly 25% of destroyer deployments have exceeded the standard deployment length.

And given our past record of meeting long-term goals, I seriously question the viability of the shipbuilding plans presented in the out-years of the 30-year plan.

Another area of concern is the cost of the plan. The Congressional Budget Office estimates that in the first 10 years of the 30-year shipbuilding plan, costs will be 11 percent higher than the Navy's estimate.

In addition to new construction of ships, I also have concerns on the sustainment of ships already in the fleet. After years of maintenance challenges the Navy has now been forced to cancel numerous ship maintenance availabilities.

A key tenet in the shipbuilding plan is an assumed ship service life for most ships of 35 years. If ships do not get the planned shipyard repairs, attaining this service life will be problematic and ships will be retired prematurely.

In fiscal year 2012, the existing force structure only satisfied 53% of the total combatant commander demand. It has been estimated that to fully support the combatant commander requirements would necessitate a fleet size in excess of 500 ships. Without an increase in force structure this trend would only get worse.

Finally, I think that our Navy needs to place more emphasis on undersea warfare and long-range power projection as part of a strategy to prevent potential adversaries from achieving the benefits offered by anti-access/area-denial strategies.

JOHN LEHMAN, FORMER SECRETARY OF THE NAVY

First you have to reestablish the commonsense framework for why we need a Navy, where we need it, and what kind of Navy can carry out the task. That was relatively easy for the Reagan administration in the bipolar world of the Cold War; the Soviet threat clarified the mind wonderfully and made our task relatively easy. Today you could argue that the world is a more dangerous place because it is so multi-polar: there are now so many more potential disturbers of the peace all over the world, and yet we are more dependent than ever in our history on the free flow of energy and commerce through the Pacific, the Indian Ocean, the Atlantic, the Caribbean, and so forth.

We have to have the capability to maintain stability and freedom of the seas wherever our vital interests are involved. We should not be the world’s policeman, but we must be able to give the rest of the world the confidence to know that we are able to maintain the free flow of a global community of commerce and freedom of travel, and that we don’t have today. We don’t need a 600-ship Navy, as we did when we faced the entire Soviet fleet, but we certainly need a good deal more than the 280 ships we have today.

But even more disturbing is what is going on now in the overuse of the assets we have. It is very unfortunate that the institutional memory in the executive branch and in Congress is so short, because we have been down this road before. Both Admiral Roughead and I were in the Navy when we had the exact same situation in the 1970s, and we ran the fleet into the ground. We added 50% to deployment lengths, from 6 months to 9 months, just as the Administration has decided to do now. And we, the U.S. Government, did not put the money into repairs and overhaul. As a result the Navy dropped to its lowest readiness ever, and the then chief of naval operations testified to this committee that we would lose a war if we ended up going into a conflict, and that was not an assessment lightly taken.

ADM GARY ROUGHEAD, USN (RET.), FORMER CHIEF OF NAVAL OPERATIONS

As we look at the world today, while it is generally conducive to our interests, it is still a messy place, with disorder and disruption in more areas than just 10 or 15 years ago. And as we look out over that world and as the only global navy, you do have to ask yourself what is the size, what is the capability that you want resident in the Navy that is to be provided and maintained by the Congress. I think it is important as we look at building and maintaining a navy that you can’t decouple it from the industrial base of the Nation. And I think that all too often is overlooked. I think the messiness of the world is spreading. We have been able in recent years to essentially be absent in the Mediterranean. I believe the future is not going to give us that luxury. I think North Africa and the Arab awakening, the Levant, Israel, Syria, energy deposits that are expected to be found in the Eastern Mediterranean are going to inject some friction and potential conflict and a presence will be required there.

Even though we talk about a rebalanced Asia, we are not turning away from the Middle East and the Arabian Gulf and the importance that that geographic area has on the global economy. And in a few years the Arctic is going to open, and the Arctic is an ocean. I refer to it as the opening of the fifth ocean. And so what sort of a force do you need there, what are the numbers that you need there? And all of that needs to be taken into account.

In the Air Force the average age of an airplane is something like 28 years.

Mr. LEHMAN. I would not pick a specific risk, because when you are stretched as thin as we already are, when we can't meet the deployments every combatant commander believes are minimally necessary, when we can't protect all of our commercial ships in the Indian Ocean, for instance, and for the first time in history the U.S. Navy has told ships they have to stay 600 miles away from the east coast of Africa because we can't protect them, the danger is that an incident happens. Without enough submarines deploying with a Marine amphibious group, some North Korean submarine gets a shot off, the way they did to the South Koreans, and sinks an entire aircraft carrier of marines and equipment; that is catastrophic. What would that do to world markets, to our economy? We would be in the tank overnight. Nobody sleeps well if they are depending on the North Koreans or the Iranians not doing anything irresponsible. We are there now, so I wouldn't say that you could pick a time when it gets worse. Obviously the fewer ships we have, the more vulnerable we are to unforeseen events. And they happen; as any student of history knows, they will happen.

We are clearly already at the tipping point, as best expressed in the book by Lee Kuan Yew, one of the wisest global observers of this century or the last. He says the U.S. is declining and that people in his neighborhood do not believe they can rely on the U.S. as they have in the past. The perception in Asia is that we are not going to be able to do much in the future, which tempts disturbers of the peace like North Korea to go beyond prudent risk. So we are already there.

Currently the Navy has 286 ships. In order to pay for even drastically reduced current operations, the Administration will be retiring a score or more of modern combat ships (cruisers, amphibious vessels, and frigates) well before the end of their useful lives. In order to reach a 350-ship fleet in our lifetime, we would need to increase shipbuilding to an average of 15 ships every year. The latest budget the Administration has advanced proposes buying just 41 ships over five years. It is anything but certain that the Administration's budgets will sustain even that rate of only eight ships per year, but even if they do, the United States is headed for a Navy of 240-250 ships at best. So how is the Obama administration getting to a 300-ship Navy? It projects a huge increase in naval shipbuilding beginning years down the road, most of which would come after a second Obama term. In other words, the administration is radically cutting the size and strength of the Navy now, while trying to avoid accountability by assuming that a future president will find the means to fix the problem. This compromises our national security. The Navy is the foundation of America's economic and political presence in the world. Other nations, like China, Russia, North Korea, and Iran, are watching what we do, and on the basis of the evidence they are undoubtedly concluding that America is declining in power and resolution. Russia and China have each embarked on ambitious and enormously expensive naval buildups, with weapons designed specifically against American carriers and submarines.

The Department of Defense acquisition process is seriously broken. Under the current system, it takes decades, not years, to develop and field weapons systems. Even worse, an increasing number of acquisition programs are plagued by cost overruns, schedule slips, and failures to perform. The many horror stories, like the F-35, the Air Force tanker scandal, the Navy shipbuilding failures, and the Army armor disasters, are only the visible tip of an iceberg. The major cause has been unbridled bureaucratic bloat (e.g., 690,000 DoD civilians, 250 uniformed joint task forces), resulting in a complete loss of line authority and accountability. As the House Armed Services Committee formally concluded: "Simply put, the Department of Defense acquisition process is broken. The ability of the Department to conduct the large scale acquisitions required to ensure our future national security is a concern of the committee. The rising costs and lengthening schedules of major defense acquisition programs lead to more expensive platforms fielded in fewer numbers." That is, of course, an understatement. We are really engaged in a form of unilateral disarmament through runaway costs. Unless the acquisition system is fixed, it will soon be impossible to maintain a military of sufficient size and sophistication with which to secure our liberties and protect the national interest.

MILITARY COMPENSATION

Just as entitlements are steadily squeezing out discretionary spending in the Federal budget, personnel costs in the Pentagon are squeezing out operations and modernization. There has not been a comprehensive overhaul of military compensation, retirement, and medical care since the original Gates Commission during the Nixon Administration. It is long overdue.

Posted in Infrastructure, Transportation

Is Peak Oil dead? Not by a long shot! Remember Ladyfern?

Preface. Oil is finite. Period. Don't be fooled by news stories claiming that peak oil is dead or that we have reached peak demand. They're all nonsense. Gail Tverberg at ourfiniteworld.com is especially good at explaining this.

Worse yet, what we have left has been, and is being, drained as quickly as possible to pay back the capital invested, and that haste increases the amount of oil that will be left in the ground forever, oil that could have been produced with more responsible methods. But the very nature of capitalism is profits now, not 10 years from now.

This article makes the case that there are lessons to be learned today from the giant Ladyfern natural gas find of 2001 in Northeastern British Columbia.

But due to the tragedy of the commons, with too many companies exploiting the reservoir too quickly, much less was produced than could have been. Like shale gas today, a gigantic surge of production drove gas prices down, financed by "stupid" middle-class money propping up companies that were already effectively bankrupt (the banks prefer to get some money back rather than none, and besides, it's not their money). Whether the gas bubble will be as bad as the subprime mortgage crisis remains to be seen.

Initially Ladyfern was thought to hold a trillion cubic feet of recoverable reserves, but in the end it yielded 400 billion cubic feet (bcf). Some of the "missing" 600 bcf that could have been recovered was lost to greedy drilling, though most of the shortfall was probably due to overestimating the size of the reserve. I've cut and paraphrased much of the article below (select the link in the title to see the original article).

Alice Friedemann   www.energyskeptic.com  author of "When Trucks Stop Running: Energy and the Future of Transportation", 2015, Springer and "Crunch! Whole Grain Artisan Chips and Crackers". Podcasts: Practical Prepping, KunstlerCast 253, KunstlerCast 278, Peak Prosperity, XX2 report

***

Terry Etam. January 19, 2016. The Ladyfern legend: huge reserves, frenzied drilling, and no one made money. Sound familiar? BOE report.

Is shale oil and gas too good to be true? History provides examples of the dangers of getting starry-eyed and banking on seemingly endless natural gas reservoirs. Such cautionary tales usually involve a gold-rush mentality that leads to efforts to extract the entire reservoir all at once.

As an example, consider the legendary Ladyfern field in British Columbia, whose story carries an ugly lesson worth remembering.

The Ladyfern field was a giant gas reservoir estimated to contain up to a trillion cubic feet of recoverable reserves. Some wells produced at initial rates of 70 million cubic feet per day, and it's worth remembering that those rates came from vertical wells without 50-stage fracking technology. The cost of producing this conventional natural gas was very low. After the initial discovery, companies raced to buy up mineral rights in the area, and once those were secured, the race was on.
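[For context, a quick calculation, mine rather than the article's, shows what a 70 MMcf/d well means against the original reserve estimate:]

```python
# What does one 70 MMcf/d well deliver in a year, and what share of the
# original 1 Tcf (1,000 bcf) Ladyfern estimate is that? (My arithmetic.)

DAILY_RATE_MMCF = 70    # initial rate of the best Ladyfern wells
ESTIMATE_BCF = 1000     # initial recoverable-reserve estimate (1 Tcf)

annual_bcf = DAILY_RATE_MMCF * 365 / 1000   # convert MMcf/day to bcf/year
print(f"{annual_bcf:.1f} bcf/year")                        # 25.6 bcf/year
print(f"{annual_bcf / ESTIMATE_BCF:.1%} of the estimate")  # 2.6% per well per year
```

[In other words, a single well at that rate could drain roughly a fortieth of the entire estimated field each year, which is why the race among operators got so frantic.]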

What happened next can best be described as a "tragedy of the commons": companies acting in their own self-interest harmed all parties. Corporate beasts devoured a beautiful gas reservoir like wild pigs upending a garden.

The problem was competitive drainage. Because the Ladyfern reservoir was so porous and prolific, it was in each company's best interest to drain its reserves as fast as possible or lose them to competitors. As noted in the linked article above, had one company owned all the mineral rights, the reservoir would most likely have been developed more cautiously, or at the very least with a plan. Had competitive drainage been avoided, reserve recoveries would almost certainly have been higher, and achieved with far less capital investment, as the toy model sketched below illustrates.
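[To illustrate the competitive-drainage logic, here is a deliberately crude toy model. It is entirely my own construction; the recovery curve is invented for illustration, not fitted to Ladyfern data. The only idea it encodes is that ultimate recovery falls as the aggregate drawdown rate rises, so many operators racing each other recover less than one owner pacing the field.]

```python
# Toy model of competitive drainage (illustrative; the recovery curve below
# is a made-up placeholder, not derived from Ladyfern data).

RESERVOIR_BCF = 1000  # Ladyfern's initial 1 Tcf recoverable estimate

def recovery_factor(total_rate_bcf_per_year: float) -> float:
    """Assumed: the fraction ultimately recovered falls as drawdown speeds up,
    because rapid pressure decline strands gas in the reservoir."""
    return max(0.2, 0.8 - 0.002 * total_rate_bcf_per_year)

def ultimate_recovery(n_operators: int, rate_each_bcf_per_year: float) -> float:
    """Every operator races at the same rate; recovery depends on the total."""
    return RESERVOIR_BCF * recovery_factor(n_operators * rate_each_bcf_per_year)

print(round(ultimate_recovery(1, 30)))   # one owner, measured pace: ~740 bcf
print(round(ultimate_recovery(10, 30)))  # ten operators racing: ~200 bcf
```

[The numbers are arbitrary; the shape of the result, faster aggregate drawdown yielding lower ultimate recovery, is the tragedy-of-the-commons mechanism the article describes.]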

Maximizing recoveries from a reservoir should be the primary concern, not booming discovery wells that generate hysteria and a “shoot first, aim later” mentality.

This lesson should not be lost on shale gas drillers today, now that the latest Utica production test results are generating levels of excitement akin to the Ladyfern era.

While there are obvious differences in reservoir characteristics between shale formations and the Ladyfern, the mechanics and philosophy of ultimate recovery remain the same. In particular, in new or non-homogeneous fields being explored and developed, overall field recovery should be one of the most important considerations. But this parameter can easily be forgotten by (or fail to even enter the minds of) executives under pressure to deliver production growth or meet quarterly expectations. Worse, with the current extreme duress in the industry, pressure mounts to keep drilling wells and bringing them on stream to shore up reserve bases and keep bankers happy. While this strategy can serve as a useful short-term survival tactic, it more often equates to bad news in the long run. On the other hand, it may be the only option for companies trying to stay alive until the next price spike.

Posted in Oil & Gas Fracked, Peak Natural Gas