Why the British don’t like Trump

Someone on Quora asked “Why do some British people not like Donald Trump?” Nate White, an articulate and witty writer from England wrote the following response

Alice Friedemann  www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Nate White. 2019. Why do some British people not like Donald Trump? Quora.com

A few things spring to mind.

Trump lacks certain qualities which the British traditionally esteem.

For instance, he has no class, no charm, no coolness, no credibility, no compassion, no wit, no warmth, no wisdom, no subtlety, no sensitivity, no self-awareness, no humility, no honor and no grace – all qualities, funnily enough, with which his predecessor Mr. Obama was generously blessed.

So for us, the stark contrast does rather throw Trump’s limitations into embarrassingly sharp relief.

Plus, we like a laugh. And while Trump may be laughable, he has never once said anything wry, witty or even faintly amusing – not once, ever. I don’t say that rhetorically, I mean it quite literally: not once, not ever.

And that fact is particularly disturbing to the British sensibility – for us, to lack humor is almost inhuman. But with Trump, it’s a fact. He doesn’t even seem to understand what a joke is – his idea of a joke is a crass comment, an illiterate insult, a casual act of cruelty.

Trump is a troll. And like all trolls, he is never funny and he never laughs; he only crows or jeers.

And scarily, he doesn’t just talk in crude, witless insults – he actually thinks in them. His mind is a simple bot-like algorithm of petty prejudices and knee-jerk nastiness. There is never any under-layer of irony, complexity, nuance or depth. It’s all surface.

Some Americans might see this as refreshingly upfront. Well, we don’t. We see it as having no inner world, no soul.

And in Britain we traditionally side with David, not Goliath. All our heroes are plucky underdogs: Robin Hood, Dick Whittington, Oliver Twist. Trump is neither plucky, nor an underdog. He is the exact opposite of that. He’s not even a spoiled rich-boy, or a greedy fat-cat. He’s more a fat white slug. A Jabba the Hutt of privilege.

And worse, he is that most unforgivable of all things to the British: a bully. That is, except when he is among bullies; then he suddenly transforms into a snivelling sidekick instead.

There are unspoken rules to this stuff – the Queensberry rules of basic decency – and he breaks them all. He punches downwards – which a gentleman should, would, could never do – and every blow he aims is below the belt.

He particularly likes to kick the vulnerable or voiceless – and he kicks them when they are down.

So the fact that a significant minority – perhaps a third – of Americans look at what he does, listen to what he says, and then think ‘Yeah, he seems like my kind of guy’ is a matter of some confusion and no little distress to British people, given that:

  • Americans are supposed to be nicer than us, and mostly are.
  • You don’t need a particularly keen eye for detail to spot a few flaws in the man.

This last point is what especially confuses and dismays British people, and many other people too; his faults seem pretty bloody hard to miss. After all, it’s impossible to read a single tweet, or hear him speak a sentence or two, without staring deep into the abyss.

He turns being artless into an art form; he is a Picasso of pettiness; a Shakespeare of shit. His faults are fractal: even his flaws have flaws, and so on ad infinitum.

God knows there have always been stupid people in the world, and plenty of nasty people too. But rarely has stupidity been so nasty, or nastiness so stupid. He makes Nixon look trustworthy and George W look smart. In fact, if Frankenstein decided to make a monster assembled entirely from human flaws – he would make a Trump. And a remorseful Doctor Frankenstein would clutch out big clumpfuls of hair and scream in anguish: ‘My God… what… have… I… created? If being a twat was a TV show, Trump would be the boxed set.

Posted in Political Books | Tagged , | 4 Comments

Why we need more women leaders

Preface. Hector Garcia makes the case that women make better leaders in an excerpt from his book below. His conclusion is that “scientific literature shows that when women are allowed greater political and economic power, which is inseparable from the power to control their own reproduction, quality of life measurably improves for everyone.”

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Garcia, H. 2019. Sex, Power, and Partisanship. How evolutionary science makes sense of our political divide.  Prometheus.

Historically, men have blocked women from the political process.  It was only recently that women were allowed a voice in US politics—the 19th amendment to the constitution, which granted women equal voting rights, was granted in 1920.  Saudi Arabia was the last nation to give women the right to vote in 2015.

Scholars have observed that women entering political leadership positions often display excessive hawkishness, which may help to establish themselves within the male primate hierarchy that politics has always been.  But most women across all levels of society are less hawkish.  A large body of research shows that women citizens are les likely to support the use of military force.  Research has found that when the ratio of women in legislatures increases, nations are less likely to use military force to solve conflicts with other nations.

Looking at 22 nations from 1970 to 2000 it was found that as the number of women legislators increased, nations were less likely to engage in an extensive list of conflict behaviors with other nations, such as threats, sanctions, demands, or actual military engagements.  The researchers also calculated Right-Left orientation of nations based on the percentage of government seats that parties held. As we might expect, Right-oriented nations spent more on defense overall.  But as the percentage of women legislators increased, defense spending decreased.  This decrease occurred at the same rate across nations that were Right-oriented, such as the U.S., and those that were Left-oriented, such as Norway, and the results were quantifiable.  In 2000, every 1% increase in women legislators in the U.S. produced a $314 million reduction in defense spending (out of $311 billion in total military spending). A 1% increase in women legislators in Norway saw a $3.34 million decrease out of $3.3 billion.

In 2008 Rwanda became the first nation in history to have a female majority in parliament. The shift of power to women resulted in laws to limit make sexual control. Domestic violence became illegal, and harsh prison sentences were legislated for rape.  Further, birth rates and maternal mortality dropped, doors were opened for women to own land and open bank accounts, daughters were allowed to inherit property, and the percentage of women in the labor force surged.  In 2009, the women-led government mandated basic education for all Rwandan children.  In 2016, the World Economic Forum’s global gender gap report ranked Rwanda fifth in the world on gender equality.  The U.S. ranks 45th.

Before male competition destroyed 20% of Rwandan males in the genocide, it oppressed Rwandan women. In the years leading up to the massacre, women lived under patriarchal control. Women’s property ownership was practically unheard of, literacy among women was low, and maternal mortality was high.

A clear conclusion of the scientific literature is that when women are allowed greater political and economic power, which is inseparable from the power to control their own reproduction, quality of life measurably improves for everyone.

Posted in Politics | Tagged , , | 5 Comments

Donald Trump: Sexual Predator

Preface.  This is a book review of “All the President’s Women: Donald Trump and the making of a Predator”.  Trump is clearly as much a sexual predator as Jeffrey Epstein (who he hung out with for a long time) and Harvey Weinstein.  I was so disgusted and angry I only got half way through the book.  How could any woman, or man for that matter, vote for such a bullying, brutal, nasty man?  Below are some excerpts.

Related: Video Trump–is the president a Sex Pest? BBC

Alice Friedemann   www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Levine, B. 2019. All the President’s Women: Donald Trump and the making of a Predator. Hachette Books.

“You know, it doesn’t really matter what the media write as long as you’ve got a young and beautiful piece of ass.” —DONALD TRUMP, 1991, ESQUIRE INTERVIEW

By June 2019, news organizations had documented as many as 24 women who have accused Donald Trump of varying degrees of inappropriate behavior, including sexual harassment or sexual assault. Our investigation found at least 67 separate accusations of inappropriate behavior, including 26 instances of unwanted sexual contact.

Making unwanted physical advances became a Trump trademark, according to many women, and one that has continued to define him. The accounts span nearly four decades and bear a striking resemblance to one another. Reading them in the aggregate, patterns emerge. Forcible kissing. Groping. Genital grabbing. Barging in on sleeping women. And, all too often, an utter indifference to women’s volition or boundaries.

The behavior he has admitted to—grabbing women by the “pussy”—and many of the credible accusations he denies, were they to be proven in a court of law, would qualify as crimes, some of them serious ones.

After considering all the evidence, one cannot but conclude that Donald Trump is, and has been for some time, a full-blown sexual predator.

Models

At the beginning of Trump’s club life he was out every night meeting women. “You had drugs, women, and booze all over the fuckin’ place,” Michael Gross quoted Trump as saying in his book My Generation: Fifty Years of Sex, Drugs, Rock, Revolution, Glamour, Greed, Valor, Faith, and Silicon Chips. Having sex became his “second business.

“The other girls were obviously afraid of him, like they knew he meant it and it wasn’t a joke,” she said. Given the nature of the modeling business, where bikini and lingerie shoots and quick changes at fashion shows are the norm, models are used to being seen when they are wearing very little. For models to be upset about being seen in their underwear, something has to be seriously amiss,

Carr said she and her model friends would see him out in Manhattan all the time in those days. “He was nearly always there, especially any party at Studio 54 or the Plaza, he was there. And he always had his sights set on very young women,” she said. The soirees that Trump attended during that era were guaranteed to have two elements, according to Carr: young models and cocaine—lots of it. “It generally wasn’t done in the open, but it was rampant.

According to interviews conducted for this book with dozens of people in the modeling world who are familiar with his behavior, Trump had a reputation for never missing an opportunity to meet models. “Trump would be at every model party.  Many said that he was a model hound. He was always chasing models.… He was a predator. Absolutely. And he could be intimidating.

One night in the mid- to late-1990s Webber was out at Life, a trendy club on Bleecker Street in Greenwich Village where modeling agencies often held parties. Trump was there, and Webber noticed that a young girl couldn’t get away from him. Webber didn’t know the girl personally, but had seen her at a casting. He believes she could have been as young as 14, he said, and was probably not older than 16. “Our faces met across the room and she mouthed ‘Help’ at me,” he said. “Trump had her pinned against a wall—not physically pinned, but he had surrounded her with his bigness, if you like. He had one hand on the wall to one side and she was against the wall.” Webber moved in to aid her. “I went there and I stepped right in between them with my back to him and grabbed her and said, ‘Oh, there you are, I’ve been looking for you everywhere.’ And I dragged her off into the next room, and she was like, ‘Thank you, thank you.’

Trump seemed none too pleased by the interruption. Later, when Webber was leaving the club, the bouncers, whom he knew, told him that Trump had given them money to beat him up.

Trump was a known type. “I would compare him to the spectrum of model fuckers,” Michael Gross, the author of the bestselling book Model, told us. “The spectrum of model fuckers runs from guy with no money to wealthy guys. And they’re collectors of baubles, they’re trophy hunters.… To some extent, they can be described as predators because they’re not actually going out and looking for a woman; they’re going out collecting, using, and discarding models.” These model-idolizing men may or may not end up sleeping with the women they pursue, but that’s not what it’s really about for many of them. The models are status symbols. “A lot of these guys, they don’t actually have sex with the girls. They just want to be with them to impress other men,” Gross said.

 “With some of the girls he pursued, it was an attention-getting thing. You know, ‘Look at me. Look at the young girl I have on my arm.’ I mean, everyone knew he was married, but it didn’t matter to him.

Trump would sometimes call a friend at one of the agencies and ask him to send specific girls to see his doctor so he could make sure they didn’t have any STDs, sources inside the modeling industry told us.

 “We all have the clear impression that Donald was having sex with these girls, but it was more about the boast—my building is taller, my car is longer, my apartment is older, and my women are prettier and have bigger tits,” Gross told us.

“He was gropey.… he had his hands in the most inappropriate places, always,” Carr said. “When he went in to kiss someone, the hand always went to either the hip or the butt. He was also really good when he did pictures or when he’d side-hug someone. He’d always get his hand on the boob. Every time.” Stories about Trump and his hands circulated within the modeling community. At one modeling event, Trump allegedly went down a line of women feeling their bodies to guess their dress size. Backstage at a lingerie show, he is said to have moved his hands all over a model’s breasts under the guise of inspecting the bra’s fabric.

“I saw this several times: When he met a girl, he’d immediately move in to kiss her, not shake her hand or say ‘Hello, how are you?’ He’d immediately put his mouth on her,” Carr said. “I saw him many times go straight to the mouth, to kiss them on the mouth,

Among modeling insiders, Trump had a reputation at the time for preferring the younger girls. “If you’re over twenty-one you don’t have to worry,” Carr said she was told.

Panagrosso said it’s often difficult for models to stand up for themselves. “It’s psychological manipulation because these men will put these women up against each other,” she said. “It’s like they are the prize to be won and these women will do everything to be selected by the men.” The imbalance of power skews the dynamic. “The women are usually intimidated, and in their minds—Trump and Weinstein—they believe they have the permissions to do what they want,” she said. “The women are afraid to say no.… When you have a predator, a guy with so much power… women caved in.

Pilling eventually excused herself to go to the restroom, where yet another girl was talking about the developer. “She said he grabbed her ass and kept going for her and was all hands.

“It was kind of like a feeding frenzy and the girls were there as consumables,” a fashion industry insider told BBC’s Panorama for the documentary “Trump: Is the President a Sex Pest?

The point of the parties was to “get laid,” he said. “We do know that [Trump] was having sex with [the models] because the next day or days after we’d hear about it,” the insider said. “He’d brag about it to his friends that he scored, maybe one or two girls at a time, which is what he loved to do.

While the BBC documentary on Trump and women found no conclusive evidence that Trump had sex with underage girls, it reported that Trump attended small social gatherings with models who were not yet adults.

The girls were asked to walk one by one down a staircase and dance in front of Trump and Casablancas, who were seated in chairs at the bottom. “It was a very small area and I could see them laughing and making fun of the girls,” most of whom were 15 and 16 years old, she said in an interview for this book. “You know, these men were older than my father at the time, looking girls up and down and objectifying them. It was just kind of gross. I don’t think doing that had anything to do with being a professional model.

Dressing rooms weren’t the only places where Trump barged in on women. In the early 1990s, a world-famous supermodel had flown to New York to attend a fashion event and was staying in a suite at the Plaza hotel. She was seeing another industry insider and they had gone back to her room. The two were in bed together when they heard the door open, according to the man, who gave an interview for this book requesting anonymity because both he and the supermodel still work in the industry and fear retribution. Thinking it was housekeeping, the pair sat up in bed and hollered that they didn’t need anything. A few seconds later, the door to the bedroom opened. “It’s Donald standing there,” the man said in his interview. Trump had let himself in. The supermodel and her companion were stunned. “I said, ‘What the hell are you doing?’” the man said. “We’re naked, so she pulled up the covers and we’re freaked out. I couldn’t believe it. He looks in the room and he took a good look at both of us and he slammed the door really hard, like in anger.

Trump sometimes hunted as part of a pack.  He walked into a big room where there were about 50 models.

Braden and her friends found the party odd. There was no DJ, no food, and no bartender

And then down this large staircase, in front of all of us, there was Donald Trump and behind him there were three actors, forties, maybe fifties. I don’t want to name them because they’re all still around.” The actors were famous, she said. “They came down the stairs and spread out like sharks among the girls,

Ask enough and somebody will say yes,” she said. “We were just pieces of meat.

The foreign women “ were more desperate than us because they were younger and they were controlled by the agents, who controlled their visas, who controlled their money. They had nothing to go back to,

Braden said that the younger models were likely easier quarry for Trump because they didn’t know his reputation. Among the older girls, “he was just known as a pig, to be honest. No one ever liked him who I had ever met. Everyone just took advantage of him and the money he was willing to spend to be there.

When Braden heard about the Access Hollywood tape in 2016, she laughed in recognition. “I’m shocked there’s not hundreds [of tapes],” she said. Trump’s behavior was widely known about, but his power rendered people fearful to talk about it. “They’d have to be hundreds of victims,” Braden said. “Everybody knows. I have lots of connections about him, everybody is terrified of him, have had their lives ruined, or are afraid they’ll be sued by him.

And some women, Braden said, just don’t want to dredge up the painful memories of being the victim of a predatory man. There’s another reason more women don’t come forward, however, another open secret of the modeling industry. Many of the women took money to have sex with the men.

Trump’s social circle in the early 1990s included Jeffrey Epstein, a registered sex offender who allegedly ran a sex ring of underage girls. He pleaded guilty in 2008 to soliciting a minor and in July 2019 was charged with two federal counts of sex trafficking before being found dead in his prison cell in an apparent suicide in August. Trump was friends with him and once said of Epstein: “Terrific guy. He’s a lot of fun to be with. It is even said that he likes beautiful women as much as I do, and many of them are on the younger side.

 “All these men were out trying to lure [models], get with them. It was a predatory world in a predatory market where young girls were preyed upon by these rich men,” Braden said. “Trump, these types of men, are predators, exploiters. They are essentially traffickers. They’re essentially passing girls among each other. We were used as bargaining chips, for sure.

According to a lawsuit, which was later dropped, Trump, too, had a penchant for young girls. On April 26, 2016, a woman using the alias “Katie Johnson” filed a civil lawsuit in California against Trump alleging that he raped her in 1994 at a party in Epstein’s Manhattan home when she was 13. She wasn’t represented by a lawyer and the suit was dismissed because of technical filing errors, but a similar complaint was filed in a federal court in New York on June 20, 2016, this time by a lawyer. It was later withdrawn, but was refiled that September with additional details before being withdrawn again in November. Shortly before the case was withdrawn, Johnson had canceled a scheduled press conference. Her lawyers said she had received death threats and was too afraid to appear.

I understood that both Mr. Trump and Mr. Epstein knew I was 13 years old,” the legal complaint alleged. “Defendant Trump had sexual contact with me at four different parties in the summer of 1994. On the fourth and final sexual encounter with Defendant Trump, Defendant Trump tied me to a bed, exposed himself to me and then proceeded to forcibly rape me. During the course of this savage sexual attack, I loudly pleaded with Defendant Trump to stop but he did not. Defendant Trump responded to my pleas by violently striking me in the face with his open hand and screaming that he would do whatever he wanted.” The suit also said that Trump threatened to hurt the girl and her family if she ever told anyone

Johnson’s suit contained allegations against Epstein as well, including that he raped Johnson both vaginally and anally while hitting her in the head with a closed fist because he was angry that Trump had taken her virginity instead of him.

Eventually Trump found an easier way to surround himself with models than chasing them at bars and parties: He started his own modeling agency. Some of the Trump girls didn’t have work visas, Pilling said, and many of the foreign girls were too young to be working legally in the United States.

In 1996, Trump purchased the Miss Universe Organization, which also operates the Miss USA and Miss Teen USA pageants.

“I made the bikinis smaller and the heels higher,” he told David Letterman in 2010. Despite those viewer-baiting changes, Trump didn’t leave the outcome to chance. On more than one occasion he intervened to get the final candidates he favored.

With Miss Universe he had a bigger and flashier pageant that he could call his own. Once he took it over, he was hands-on. From the very beginning, Trump exercised what he saw as the owner’s prerogative. “I’ll go backstage before a show, and everyone’s getting dressed and ready and everything else,” Trump told Howard Stern during a radio broadcast in 2005. “No men are anywhere, and I’m allowed to go in because I’m the owner of the pageant and therefore I’m inspecting it.… ‘Is everyone okay?’ You know, they’re standing there with no clothes. ‘Is everybody okay?’ And you see these incredible-looking women, and so I sort of get away with things like that.

The pageant contestants were not professional models who were used to being seen in their underwear, and Trump’s appearance created a stir.

Samantha Holvey told CNN that when she was 20 and competing in the 2006 Miss USA pageant, Trump made pointed visual inspections of all the contestants. “He would step in front of each girl and look you over from head to toe like we were just meat, we were just sexual objects, that we were not people,” she said. “You know when a gross guy at the bar is checking you out? It’s that feeling.” Being ogled by Trump made Holvey feel “the dirtiest I felt in my entire life.

She and her fellow contestants were also invited to private parties filled with “old, rich, drunk guys ogling all over us.

The 2013 Miss Washington USA, Cassandra Searles, felt degraded by her involvement in the pageant. In a 2016 post on Facebook, she called Trump a misogynist and said that he treated her and her fellow Miss USA contestants “like cattle,” lining them up “so he could get a closer look at his property.” She later added a comment to her post saying, “He probably doesn’t want me telling the story about that time he continually grabbed my ass and invited me to his hotel room.” Paromita Mitra, Miss Mississippi USA in 2013, added her own comment. “I literally have nightmares about that process,” she wrote.

Trump was also said to eliminate women “who had snubbed his advances.

Did Trump actually have sex with contestants? He bantered about it with Howard Stern in 2005, and suggested that sleeping with the girls might be his “obligation.

Back in the late 1970s, Jessica Leeds was one of the few women who flew alone for business. She had taken her seat in economy on a Braniff Airways flight from Dallas to New York when a flight attendant approached the 38-year-old newsprint saleswoman and asked if she would like to be upgraded to first class. Leeds didn’t need a second invitation, and followed the airline employee to the front of the plane. She slipped into a brown leather seat next to Donald Trump. What allegedly happened next is now well known: After introducing themselves, the two ate their dinners in silence. When the meal service was over, Trump raised the armrest between them and “suddenly turned on me and started groping me and kissing me,” Leeds, now in her seventies, told us during an interview in her apartment on the Upper East Side of Manhattan. “He hadn’t said anything.

When he started putting his hand up my skirt that I just ripped myself out of the seat, stood up, grabbed my purse, and went stomping to the back of the airplane,” where she remained for the rest of the flight. After landing, “I stayed there and waited for the entire plane to clear because I didn’t want to take the chance of running into him,” she said.

In terms of timing, Jessica Leeds’s accusation, dating back to the late 1970s was an outlier.

The first cluster of Trump’s alleged gropings dates to the early 1990s, around the time his marriage to Ivana collapsed.

Anderson, an aspiring model in her early twenties, was perched on a velvet couch in the club when a man sitting next to her slid his hand up her skirt and touched her vagina through her underwear. Shocked, she jumped up and turned to see Donald Trump, she told the Washington Post. She and her friends were “very grossed out and weirded out,” she said. “It wasn’t a sexual come-on. I don’t know why he did it. It was like just to prove that he could do it and nothing would happen. There was zero conversation. We didn’t even really look at each other. It was very random, very nonchalant on his part.

The three had dinner together at the Plaza hotel; Trump was dating Marla Maples at the time, but didn’t bring her along. During dinner, Trump repeatedly put his hands up Harth’s skirt, trying to touch “her intimate private parts,” she alleged in her lawsuit. “You know, there’s going to be a problem,” he told Houraney that night. “I’m very attracted to your girlfriend.” The couple returned to Florida, but Trump continued to call Harth, telling her he wanted to sleep with her.

While giving Harth a tour of the estate that evening, he allegedly pinned her against the wall of his daughter Ivanka’s bedroom.  She was stunned when Trump started kissing and groping her, “touching her intimately,” according to the court filing. “It was a shock,” Harth said in 2016. “I pushed him off me. And I was, I said to him, ‘What are you doing? Why are you doing this?’” Trump was twice her weight, and she was worried he would rape her. She was so fearful that she began vomiting profusely as a defense mechanism. She felt “degraded and humiliated as a female,” the lawsuit said.

Harth thinks Trump couldn’t believe she was resisting him. “Donald gets what he wants,” she said in a 2016 interview. “I believe, in his mind, he was—this was a come-on for him, some kind of romantic overture. Whereas for me, it was unwanted and aggressive, very sexually aggressive.

 “He constantly called me and said: ‘I love you, baby, I’m going to be the best lover you ever had. What are you doing with that loser, you need to be with me, you need to step it up to the big leagues,’” Harth said. His apparent desire for her didn’t stop him from asking her to provide him with “access” to a 17-year-old beauty contestant from Czechoslovakia, the lawsuit alleges. The lawsuit also says that Trump called some of the “Calendar Girls” over a period of several years, offering career advancement in exchange for sexual favors. Harth’s lawsuit said he harassed her for six years.

Entrepreneur Lisa Boyne accepted an invitation from her pal Sonja Morgan (now of Real Housewives of New York fame) in 1996 and found herself at dinner with Trump.  “He was a douche bag,” she told us. “He took all the air out of the limo. He wouldn’t let anyone talk.

Boyne, Morgan, and a handful of models were sandwiched between the two men. Trump started asking Boyne which of the models she thought he should sleep with.  Who do you think the hottest girl is?’ ‘Rate all these women I’m dating.’

If the women wanted to get out of the booth, the men made them walk across the table. Trump “stuck his head right under the women’s skirts” and commented on whether or not they were wearing underwear and on their genitalia, Boyne told the Huffington Post. “It was the most offensive scene I’ve ever been a part of. I wanted to get the heck out of there.” Boyne says she left before the appetizers arrived.

Cathy Heller was at Mar-a-Lago with her husband, her three kids, and stood up, planning to shake Trump’s hand. “He took my hand, grabbed me, and went for the lips,” she said. Heller turned her head when she realized what was happening, so his kiss landed on the side of her mouth. Trump was angry that she had twisted away, and walked off.

In 2003, Melinda “Mindy” McGillivray was working as an assistant for a photographer friend of hers at a Ray Charles concert at Mar-a-Lago and was backstage with a small group that included Trump and Melania. “The next thing you know I feel a grab,” she told Megyn Kelly on NBC’s Today show. “I stand there, I’m stunned. I’m speechless. I don’t even know what to do or say in that moment.” She elaborated to the BBC: “It was like someone was trying to feel whether a fruit was ripe at the store.” It made her feel “violated, entirely violated,” she said. “To see someone who resembled my father grab me like that was just deplorable.” The encounter left her feeling overlooked and unimportant. “He didn’t even acknowledge me,” McGillivray, who was twenty-three at the time, told Kelly. “It made me feel very small, inferior.

Trump told Stoynoff there was a “tremendous” room in the mansion he wanted to show her. “We walked into that room alone, and Trump shut the door behind us,” Stoynoff recounted in a 2016 article in People. “I turned around, and within seconds he was pushing me against the wall and forcing his tongue down my throat.” Trump was big and fast, Stoynoff said. She says she was saved by a butler who came into the room to tell Donald that Melania was on her way down to resume the interview. “I was still in shock and remained speechless as we both followed him to an outdoor patio overlooking the grounds,” Stoynoff wrote. “In those few minutes alone with Trump, my self-esteem crashed to zero. How could the actions of one man make me feel so utterly violated?

Trump seemed oblivious to the emotional damage he had wrought. “You know we’re going to have an affair, don’t you?” he said to Stoynoff while settling on a love seat and waiting for Melania to join him. “Have you ever been to Peter Luger’s for steaks? I’ll take you. We’re going to have an affair, I’m telling you.” Melania returned and Trump went back to playing the devoted husband.

In the hotel suite, he immediately started kissing her with an open mouth, she contends. Zervos walked away from Trump and sat in a chair and tried to strike up a conversation. He asked her to come sit next to him, and when she did he grabbed her shoulder, started kissing her aggressively, and put his hand on her breast, she said at the press conference. She got up, and he tried to pull her into the bedroom, saying, “Let’s lay down and watch some telly telly,” Zervos recounted. He put her in an embrace and she tried to push him away, saying, “Come on, man, get real.” He mimicked Zervos’s words back to her while “thrusting his genitals” at her, she said. When Trump denied her allegations and called her a “liar,” she filed a defamation lawsuit against him, which, as of June 2019, his lawyers continued to fight.

Karen Johnson, who alleged that Trump groped her and grabbed her by the genitals at the New Year’s Eve party at Mar-a-Lago, hesitated to tell anyone about her experience. “I feared that because I had been a dancer many years before they would say to me, ‘Well, you must have asked for it,’” she told us. “What he did was very traumatizing to me,” Johnson added. “And it still is. You know, I didn’t ask for that. I was literally just walking through a room… no matter what my past is I don’t deserve to be treated that way.” Despite her fears about not being believed, she is clear that she didn’t bring the assault on herself. “This is about a monster, an immature child running around who has no respect for anybody but himself and his giant ego,” she said.

When Maples attended the 1987 boxing match between Mike Tyson and Tyrell Biggs at the Trump Plaza in Atlantic City, she did so accompanied by her ex-boyfriend Tom Fitzsimmons, a former New York policeman who worked as Trump’s bodyguard for a while and regularly served as cover for their relationship. Trump also enlisted Alan Lapidus, the architect on the Trump Plaza Hotel and Casino, as a beard, as well as others in his employ. Lapidus recalled having dinner with Marla one evening and then driving off with her in a limousine. After traveling a few blocks, the car pulled up next to an identical one, in which sat Trump. Marla switched cars and Lapidus went home to his wife. “Donald used a lot of us that way,” Jack O’Donnell, former president and chief operating officer of Trump Plaza, said in an interview for The Trump Dynasty documentary on the A&E network.

In her deposition, Ivana said that in 1989, after seeing the results of Ivana’s visit to the plastic surgeon in Los Angeles, Trump decided to visit the same doctor for scalp reduction surgery, a procedure in which a doctor slices out a hairless section of scalp and sews together the remaining skin to cover bald spots. Between the headaches from his newly tightened scalp and the aching suture itself, the surgery left Trump in agonizing pain, according to Ivana’s account in the court deposition. He turned his rage on Ivana. “Your fucking doctor has ruined me,” the documents say Trump shouted at her. He grabbed her and began tearing clumps of her signature platinum locks out of her head. He then ripped her clothes off, unzipped his pants, and forced himself inside her for the first time in more than a year. “According to versions she repeats to some of her closest confidants, ‘he raped me,’” Hurt wrote.

Barbara Res, a former Trump Organization executive, told the Washington Post. “After that I don’t think he was considered a serious businessman… when he broke up with Ivana and did the Playboy and all that. I think that was the beginning of the end of him being a serious businessman… and he moved into being a cartoon.” The transformation took place on a personal level as well. Trump became more sexist, more openly objectifying, Res said. She recounted a meeting they had with a potential architect for a project the company was undertaking in California. Out of the blue, “[Trump] says, ‘I hear that the women of Marina del Rey…’ And he starts talking about women’s bodies. And that was just, it was a shock to me and a shock to the architect. We were just, ‘What is he saying?’” Res said she saw an ugly side of Trump emerge. “He used to be deferential to women,” she wrote in the Guardian.

As Trump became more famous, his behavior toward women worsened. 

Trump is said to have impregnated several women and facilitated the terminations

 “There are women who have had abortions paid for by Donald Trump. I don’t have the medical records to prove that, but they’ve told girlfriends about it,” said former Pulitzer Prize–winning Philadelphia Inquirer reporter David Cay Johnston. “It’s one of many things that’s sort of common knowledge about Donald.” Johnston said he never published the names of the women who received abortions because he was unable to obtain both their permission and their medical records. Johnston said, however, that he knows the identities of the “brand-name” women, whom he says would be familiar figures to the public.

Trump was no more faithful to Marla after taking his vows than he had been before. He continued to harass Jill Harth and the women from her Calendar Girls beauty contest, according to her lawsuit, and before Tiffany’s second birthday he had had an affair with New Zealand model Kylie Bax. And he reportedly continued his old modelizing habits. Author Laurence Leamer wrote in his book Mar-a-Lago that staff said there were often models traveling with Trump on his plane.

Marla got a reckoning of her own. In May 1997 Trump dialed the New York Post and gave them an exclusive story. Marla learned about it the next day, when she opened the door of her apartment and saw the headline: “Donald is Divorcing Marla,” according to an account by the late Trump biographer Robert Slater. The announcement was well timed—for Trump. Had they stayed married longer, he would have had to pay her more in the divorce, according to their prenup.

“Marla’s a good girl, and I had a good marriage with her, but it’s just that I get fuckin’ bored,

“He came in to The Apprentice believing his own hype. He has a problem with that,” Katherine Walker, the show producer for the first five seasons of The Apprentice, told us. “That’s his Achilles’ heel. Once it’s not about him, he can’t function. Trump Organization, Trump Tower, Trump, Trump, Trump. That’s a huge weird psyche thing.

Trump’s objectification of women permeated the set, both in the boardroom and behind the scenes. Summer Zervos, a contestant in season five, accused him of sexual misconduct and filed a defamation lawsuit against him in 2017.

Proximity to the women didn’t deter Trump from discussing their desirability. “We were in the boardroom one time figuring out who to blame for the task, and he just stopped in the middle and pointed to someone and said, ‘You’d fuck her, wouldn’t you? I’d fuck her. C’mon, wouldn’t you?’” a former crew member told the Associated Press, speaking on condition of anonymity because of a nondisclosure agreement. “Everyone is trying to make him stop talking, and the woman is shrinking in her seat.

Pinkett and five other former contestants on the show were so disturbed when Trump announced his candidacy for president that they spoke out against him publicly. “Because our allegiance to our country supersedes our relationship with Donald, we see today as an act of patriotism and not disloyalty,” Pinkett said in a press conference, representing the group. “We believe the American people have a right to be as informed as they can be in this election regarding Donald’s qualifications as the Republican Party’s front-runner and leading candidate to become president. Today we denounce Donald’s campaign of sexism, xenophobia, racism, violence, and hate as a unified team.

There is no doubt that Melania knew exactly what she was getting into when she married Trump. “He was known as a ladies’ man,” she told Barbara Walters in an interview the couple did during the campaign. Trump had cheated publicly on his previous two wives and made his disregard of fidelity clear in his 2000 book, The America We Deserve.

Melania seems to have been willing to accept that. Despite the occasional squall, the Donald-Melania pairing was much less stormy than his time with Marla had been. That’s largely down to Melania’s unflappable disposition. Friends describe her as serene, and Trump’s older children referred to her as “the Portrait” because she spoke so little.  

Trump has also shown a willingness to pay for sex. Drake and McDougal may be the only women who have publicly said that he offered them money, but stories about him doing so have been making the rounds for decades. “Donald has a magic number and I’ve heard it from more than one girl,” Evans said. “Jessica Drake dropped the ten-thousand-dollar number. Other girls that I’ve talked to, that’s the amount of money that he’s offered. Those aren’t going rates for porn girls. Normally if a porn star is an escort, she’s getting like a thousand dollars for an hour. So for someone like him to out of the ballpark offer like ten grand, he knows in a likelihood they’re not going to say no.

The stories about Trump and porn stars date back decades. “It goes back to the eighties,” Evans said. “In my world, Donald Trump is someone who has been talked about prior to being president. There’s friends I’ve worked with who have told me this directly.… I heard it for years.” Evans said she knows of at least three porn stars who claimed they were paid to have sex with Trump—two of whom she said told her directly.

John Tino didn’t just hear about Trump having sex with porn stars—he alleges he saw it firsthand. Between 1981 and 1983 Tino worked in a private brothel in Times Square above a theater that showed porn movies and had live sex shows. The private club—which he said was known as the “VIP Room”—was on the second floor. There was a private entrance around the side so clients could drive right up, enter, and go up the stairs without being seen. Like much of the porn and sex industries in that era, which were controlled by the mafia, Tino’s club was allegedly run by a crime family captain who was killed in a mob hit a few years later.  In each room was a hidden camera.

$1,500 an hour, though it could go higher), to escort the clients to their bedrooms, and then to go sit in a locked room and watch the clients on monitors to make sure none of the girls was being roughed up.

Every night when the VIP Room shut down, Tino would collect the tapes and put them in a bag or a box. The following morning he would go downtown to his boss’s office and deliver the tapes to him. The secret club was frequented by a few celebrities, and a client they called “the real estate guy,” Tino said, referring to Trump.

On January 21, 2017, the day after Donald J. Trump was inaugurated the 45th president of the United States of America, hundreds of thousands of women wearing pink cat-eared “pussy hats” flooded the streets of Washington, D.C. They did so to champion a panoply of issues, but mainly to rally against the elevation to the presidency of a man who had advocated, on tape, sexual assault. The women in Washington were joined by millions of women in other U.S. cities and around the world. In D.C. alone, the gathering represented the largest single-day march in U.S. history. The hats, an allusion to Trump’s now-infamous boast on the so-called Access Hollywood tape that he could “grab ’em by the pussy,

He didn’t always succeed in wowing his dates, though. He took artist Lucy Klebanow out to dinner one night in the early 1970s. He picked her up in a white Cadillac convertible and drove her to the famous Peter Luger’s steakhouse in Brooklyn. When the check came at the cash-only establishment, he didn’t have enough to pay for dinner. So she did. He said he’d pay her back, but never did.

The making of Donald Trump, Sexual Predator

Trump’s father was remote, emotionally abusive, and ruled the household with a metaphorical iron fist and a literal wooden spoon, which he employed for paddlings when deemed necessary. “He was a tough, hard-driving guy who didn’t traffic in emotions except perhaps anger.

That reliance on physical dominance rubbed off on Donald, who exhibited a violent streak from an early age, throwing rocks at the baby next door, pulling the pigtails of the girls in his class, throwing cake at birthday parties, and beating up kids in the neighborhood.

As an adult, such belligerence became a point of pride for Trump. “Even in elementary school, I was a very assertive, aggressive kid. In the second grade I actually gave a teacher a black eye—I punched my music teacher because I didn’t think he knew anything about music and I almost got expelled,” Trump boasted, likely falsely, in The Art of the Deal.  His self-image, his self-definition, was built around the idea that he was one tough son of a bitch.

He may have been right. It was Fred who drilled into his son’s head the vainglorious mantra “you are a killer, you are a king,” and Fred who drove all his sons to be ruthless and combative.  In 1990 he told Donald:  “You can have a thousand mistresses if you want, but you can’t have just one. And whatever you do, you never, ever let yourself get caught.” 

Trump’s tried-and-tested MO of never surrender, always hit back, and then claim victory comes straight out of the Cohn playbook as well. “You don’t admit to wrongdoing. You go full blaze on the offensive and you go after whatever person or whatever government agency is accusing you of something,” Marcus said in describing Cohn’s worldview. “Ultimately, in Roy’s world, you could settle, but you always had to make it look like you won. That was really important to Roy and is important to Trump. You have to declare yourself the winner.

Trumps military school

White recalled a Saturday night dance, to which he took a date from the local area with whom he had been fixed up. “The girl showed up; she was not from the higher echelons, more like middle class. She had on a beautiful dress, but it was handmade,” White told us. “To me, that was very sweet, like she had worked very hard on it.” Trump, though, noticed the difference between her and the fashion plates he regularly brought to campus. Once the cadets were back in the barracks, he began mocking White’s date. “He called her a ‘dog’ and asked me how I could go with her. His derisive mockery of her just would not stop—and this was in public. He said it to ten different people and he made a huge issue out of it,” White said. “This was just a sweet sixteen- or seventeen-year-old girl. Donald made a point of mocking that poor girl, calling her a ‘dog’ over and over again. Then he would do a dog bark—‘woof woof woof!’

McIntosh, a fellow classmate told Frontline. “I think that the things that we talked about at that time in 1964 really are very close to kind of the way he talks now about women and minorities and people of different religions.… When I hear him speak, I hear these echoes of the barracks life that we had and that we grew out of. Our whole idea of what sex was and the proper way to deal with women came from Playboy.

Trump’s understanding of women doesn’t seem to have evolved much since then.

Not having seen Trump since they graduated from high school, White approached him and said hello. Trump made a bit of small talk with him, then used White as a pawn in his gambit. “He grabbed one of the women by the shoulders, turned her in my direction, forcibly pushed her face next to mine, and said, ‘Would you rather go home with me or with him?’’’ White said. “I just walked out.

First wife Ivana

Trump was still unaware that modeling was not Ivana’s primary skill. When they hit the slopes the next day, he skied carefully, and she flew past him. “I disappeared,” Ivana told an interviewer. “Donald was so angry, he took off his skis, his ski boots, and walked up to the restaurant.… He went foot bare up to the restaurant and said, ‘I’m not going to do this shit for anybody, including Ivana.’ He could not take it that I could do something better than he did.” Already, Trump was showing his need to always have the upper hand,

Ivana and Fred—whom she later described as “a really brutal father”—had butted heads early on, when she joined the Trump family for a meal. “We went to Tavern on the Green for the brunch one Sunday and Trump’s father ordered a steak,” Ivana said. “So all the, you know, the sisters and brothers, they ordered a steak. And I said, ‘Waiter, can I have a filet of sole?’ And Fred looked up at the waitress and, ‘No, she’s going to have a steak.’ I look up at the waiter, I said, ‘No, Ivana is going to have a filet of sole’—because if I would let him just [roll] right over me, it would be all my life and I would not allow it.

Trump, the man who had described his subservient mother as the ideal woman, grew to abhor the tough business side of his wife and came to see putting her to work as a mistake. “I think that was the single greatest cause of what happened to my marriage with Ivana,” he said in a 1994 interview with Nancy Collins for Primetime Live. He hated coming home and hearing her shouting on the phone at someone at the casino who had upset her. “A softness disappeared… she became an executive, not a wife,” he said. What he really wanted was someone to cater to his needs. “I don’t want to sound too much like a chauvinist, but when I come home and dinner’s not ready, I go through the roof,” he said.

Early in their marriage, he reportedly told friends, “I would never buy Ivana any decent jewels or pictures. Why give her negotiable assets?” And on Oprah in 1988: “There’s not a lot of disagreement because, ultimately, Ivana does exactly as I tell her to do.

Nor did Trump spare Ivana the weaponized comments about her appearance. She was showing too much cleavage, her breasts were too small, her dress was ugly, she was too skinny: he had a litany of complaints. When she tried to fix the flaws he saw in her with a trip to Steven Hoefflin, Michael Jackson’s plastic surgeon in Santa Monica, he reportedly complained that he couldn’t stand to touch her “plastic breasts.

“Donald began calling Ivana and screaming all the time: ‘You don’t know what you are doing!’” one of Ivana’s assistants told longtime Trump chronicler Marie Brenner in Vanity Fair.

In the end, Ivana’s success was her downfall. She had become too famous. She was growing dangerously close to overshadowing Trump. “He put her there, but he couldn’t stand it,” Oscar de la Renta executive Boaz Mazor said in New York. “The student surpassed her master.

“I create stars,” Trump told Collins. “I love creating stars, and to a certain extent I’ve done that with Ivana.… Unfortunately, after they’re a star, the fun is over for me. It’s like a creation process.

He used his newfound fame to gain access to the kind of women he liked: young models. Some were so young that you could hardly call them women at all. Ivana was friendly with several designers and was a regular at fashion shows both in New York and in Europe, eventually hosting many fashion events at the Plaza hotel. Trump would often attend shows with her. NaKina Carr was working in New York for Oscar de la Renta and was backstage in the models’ dressing room at one of his fashion shows when she heard Trump’s name mentioned for the first time. She was getting ready when all of a sudden she heard someone shout, “Put your robes on, here he comes!

Carr asked another girl what was wrong, and the girl pointed to a man across the room. “She said, ‘He’s the money man. He can do whatever he wants.… unless you’re a gold digger, you avoid him at all costs.’” Trump walked in like he owned the place, according to Carr’s account, with a pregnant Ivana trailing behind him. “He threw his arms wide open and said, ‘Okay now ladies, drop ’em,’” Carr said. “The one thing I’ll always remember is the dejected look on Ivana’s face in the dressing room. I thought, how horrible, that he would treat her in this way.

Posted in Politics | Tagged , , | Comments Off on Donald Trump: Sexual Predator

Republicans way ahead of Democrats on voter data

Preface.  This is a book review and Kindle notes of Nelson’s “Shadow Network:  Media, money and the secret hub of the Radical Right”. It tells the sad history of how Republicans got Trump elected and took over the House, and Senate and Supreme court as well. I don’t cover some of the most interesting parts of what happened in this book review such as how on earth the evangelists went from reviling Trump to voting for him since it would take too many pages to tell, but it’s a good story, buy the book. 

Basically the willingness of fundamentalists, evangelists, and conservative Catholics to vote for Trump was due to being able to reshape the judiciary, roll back abortion rights, gay marriage, gun laws, environmental regulations; abolish federal agencies, assail IRS restrictions on churches’ right to operate as tax-free political platforms, allow gerrymandering and redistricting, and remove the system of checks and balances designed by the founders to guard against extremism. They were keen to slash food stamps, the department of education, department of Agriculture, Social Security, Medicare, Medicaid, Health & Human Services, the National Institutes of Health, Center for Disease Control, State Department, and Environmental Protection Agency in exchange for a few crumbs of tax refunds, and the $2 trillion tax cut that mainly went to the top 1% was designed to cause higher deficits making it easier to cut the social safety net programs.

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Nelson, A. 2019. Shadow Network: Media, money and the secret hub of the Radical Right. Bloomsbury publishing.

Republican data mining

In 2009 the Koch brothers gave $2.5 million to the Themis Trust voter database using a company called i360, which networked the Koch organizations, Council for National Policy, and dozens of affiliated organizations. The target audiences were concentrated in sparsely populated states between the coasts where their votes could tip the Senate Republican.  It was known that people were more likely to respond to digital prompts to act or vote when combined with sustained interactions with members of their social circles.  The i360 data tracked voters’ marital status, interest in weight loss, cholesterol levels, preference for internet ads and outdoor ads, hearing difficulty, home equity, household income, and a category called “Bible” so that canvassers would have an excellent idea of who would answer and offered them a tailored script. 

The digital strategy was integrated into its messaging through extremist radio and TV stations, with no equivalent at all on the Democratic side.

In 2012 the group United in Purpose (UIP) obtained leaked data on 191 million registered voters including names, contact information, and voting records.  Then another online breach produced 18 million individuals with data including religious views, hobbies of hunting, a “bible lifestyle” and more.  This data was made available to church pastors who wanted to know what percent of their congregation was registered to vote. Those that weren’t were called on by other members who asked them to register and reminded them to show up at the polls. 

The UP app assigned points for sympathies from homeschooling to an affinity for NASCAR.  Any score over 600 indicated a religious, conservative person.  These people were then run against the voter registration database, and non-voters especially singled out to target.  Five million unregistered conservative voters of the 25 million known conservatives were found this way, and UIP and partners went door to door to register them. Two-thirds of them lived in the south and Midwest, with a median age of around 60 and mainly white.  Nearly 90% attend a Protestant church and have a biblical view (versus just 1% of the rest of the U.S.) and 90% are married.

In 2015 Ted Cruz developed a sophisticated political app with a database from voter files, the NRA, consumer sources, and Cambridge Analytics full of thousands of data points about each person.  When approached by phone or door-to-door canvassers, the message was crafted depending on each person. If they were in the NRA and neurotic then a pitch emphasizing the menace of home invaders and a firearm was used, or if they held more traditional values then given heart-warming messages about hunting as treasured family time.  When users downloaded it, the app asked for access to the phone’s entire directory of contacts. Those who were already Cruz supporters were then asked to reach out to contacts on this list, since they were likely potential supporters.  By February of 2016, 300,000 potential supporters were matched with already active supporters.  This data was supplemented with political surveys about themselves, their acquaintances, and data culled from their activities on the phone.  The app was gamified and awarded points for actions such as sharing contacts and making phone calls that led to badges ranging from “bald Eagle” to “U.S. Constitution” and rewards of bumper stickers, t-shirts, and tickets to opening-night screens of Star Wars. 

Meanwhile the RNC and Koch operations had made huge advances in data collection to add to their i360 database that they sold to Ted Cruz, Jeb Bush, Marco Rubio, the NRA, and others.  At this point a pool of 38 million born-again Christians who hadn’t voted in the previous two presidential elections were identified, despite 26 million of whom were already registered to vote.  These targeted voters received almost 1 billion digital contacts via social media, emails and more.  Conservative radio and TV stations got 99% of fundamentalist conservatives to believe that the mainstream media reporting on the election was unfair and biased.

On top of that, the NRA had launched a project called Trigger the Vote based on their secret database of gun owners assembled from state and local lists of gun permits and purchasing lists from gun shows and magazines.  They tracked tens of millions of gun owners, most of them without their knowledge.  Many of the non-voters within the NRA weren’t evangelists or cared about same-sex marriage, so the NRA database offered yet another avenue to getting out the Republican vote.

Both the NRA and i360 databases were augmented with Cambridge Analytica data.

In contrast, the MiniVAN app democratic volunteers in Texas, New York and California used had voting history, address, phone number, sometimes party registration, and the last time they voted.  New information was noted on paper forms passed on to be digitally recorded.  Lower-level candidates couldn’t afford to use this app.  Other apps didn’t coordinate, such as Voter Circle, Tuesday Company, Team, Polis and Hootsuite. None could compete with Republican apps, and didn’t have access to Cambridge Analytica data.  Nor did Democrats have a vast network of churches, radio, and TV stations to reach voters like the Republicans.

This culminated in a state-of-the-art app using a database of over 250 million 18+ adults, including the 190 million who registered to vote.  The app was augmented and by grassroots organizers from the NRA, Tea Party, tens of thousands of fundamentalist pastors, right-wing radio and TV stations, and social media.

Three days before the Iowa caucuses, the Ted Cruz app asked users to send out 230, 000 invitations, and share get-out-the-vote messages on Facebook and Twitter. In the final 24 hours the app served out over 850,000 requests to the 11,000 supporters online.  The Family Research Council also had an app that they urged their users to support Ted Cruz.  Since evangelicals made up two-thirds of the Iowa Republican caucuses, Cruz won.

Republicans also learned a vital lesson Democrats apparently had missed: the most effective way to reach a voter was through a printed—not online—guide, delivered by hand, preferably by a member of the community.  Over 112,000 churches, a third of all the churches and other groups distributed more than six million voter guides in 12 swing states to get out the vote.  This resulted in a record-breaking vote in Republican primaries from 1.4 million new voters registered since 2012.

Trump used a very simple app called the uCampaign based on the British Brexit Vote Leave app. The fundamentalists, NRA, and Americans for Prosperity also used uCampaign to get out the vote.  Like the Cruz app, user’s phone directories were mined.  After a download, the app sent pre-scripted messages to the contact list, which is very powerful, family, coworkers, and friends got the texts.  It could match the address books to voter data file and send specific messages to those in other swing states. Over 150,000 Trump supporters downloaded the app and messages were sent to three million contacts.

And there are too many more Republican apps to list for state and local campaigns.   

Meanwhile, the Clinton campaign was making all kinds of mistakes (see “Shattered” Inside Hillary Clinton’s doomed campaign for details).  A critical mistake was to not emphasize the votes of the undecideds and understand why they felt undecided, and all the registered voters who stayed home, while Trump got more voters to turn out.

Council for National Policy (CNP).

Nelson writes: “Through my research, I discovered the rapidly evolving ties connecting the manpower and media of the Christian right with the finances of Western plutocrats and the strategy of right-wing Republican political operatives. Many of their connections were made through a secretive organization called the Council for National Policy, which, as one member has said, brings together the “donors and the doers.” The CNP was founded in 1981 by a small group of archconservatives who realized that the tides of history had turned against them. They represented an American past dominated by white Protestant male property owners. They dreamed of restoring a 19th century patriarchy that limited the civil rights of women, minorities, immigrants, and workers, with no income tax to vex the rich or social safety net to aid the poor.

Now they faced a future in which minorities, women, gays, and atheists were gaining in number, rights, and political influence. If the country abided by a clear-cut democratic process, these constituencies, leaning Democratic, would consolidate their power based on majority rule. So the CNP decided to change the rules. This task would require developing a long-range strategy to target critical districts and activate previously unengaged voting blocs. But, as author David Daley has pointed out, the conservatives faced a deadline: once Democratic-leaning youth and minorities reached a decisive majority—which could be as early as 2031—there might be no turning back. The CNP spent decades building a framework to advance its agenda. One pillar has been its ability to master the basic rules of media and write new ones.

The CNP set its sights on the Republican Party, conducting a decades-long crusade to promote right-wing extremists and drive moderates out of office.”

Groups run by CNP members and their favored candidates benefit from a subsidized, turnkey digital package. Their coordinated apps collaborate across platforms and weave seemingly independent groups into tightly networked operations. These measures played a significant role in the 2016 surprise and continue to affect the electoral landscape today. The CNP’s preferred Republican candidate that year was Senator Ted Cruz, but when Donald Trump won the nomination, the movement turned on a dime, delivering its national network of media and manpower to carry his message, in return for his promise to advance its policy objectives. The impact of this network was borne out again in key races in the 2018 midterm elections, and can be anticipated for 2020.

Digital tools are unlikely to be effective if they are not rooted in social relationships. The movement has benefited from the gradual decline of mainline Protestant denominations and the rapid growth of the evangelical population over the past half century. Pastors have been wooed, pressured, and sometimes bullied to adopt increasingly political stands.

Family Research Council (FRC)

Many pastors continued to be uncomfortable with preaching politics from the pulpit, and the Family Research Council offered them a menu of arguments and workarounds. One video, narrated by Tony Perkins, listed religious figures who challenged authority, including Moses, Elijah, and John the Baptist, demanding, “Were these men of God throughout history being too political?” FRC voter guides scrupulously avoided endorsing candidates in a literal fashion; they simply rated candidates according to their criteria, which led to an inescapable conclusion in favor of Republicans.

The FRC also produced compelling antiabortion videos to show in the church. They even offered a menu of ready-made sermons, including PowerPoint presentations written by Perkins and his partners for download and delivery from the pulpit.

The Family Research Council sponsored regional briefings with names like “Keep God in Texas.” In 2003 Watchmen on the Wall hosted its first national conference, which became an annual event. Pastors and their wives enjoyed a heavily subsidized three-day junket in Washington at the swanky Hyatt Regency on Capitol Hill. There they received FRC policy briefings, training sessions, and a “Spiritual Heritage Tour” of the U.S. Capitol. Then they were dispatched to Capitol Hill to carry the FRC message to their congressmen, whose office addresses were helpfully listed in the conference program. The materials also included painstaking guidelines for the legal boundaries for church politicking, and tips for setting up a “Culture Impact Team” in the home church. The orientation materials included a budget—suggesting that pastors should contribute 1% of their church’s undesignated receipts to the Family Research Council.

The pastors’ training sessions instructed them in methods not only for getting their congregants to the polls but also for extending their influence to family and friends and recruiting their followers to run for political office. Any churchgoer who was misguided enough to support a Democrat was pounded with messaging on the twin virtues of “sanctity of life” (antiabortion) and “sanctity of marriage” (anti-same-sex marriage). The FRC website added a downloadable “Election Prayer Guide” asking worshippers to “pray that America’s Christians will all register to vote” and cast their votes based on candidates’ “biblical values,” in order to elect “godly men and women as leaders who fear the Lord and honor Him.

The iVoter guides (https://ivoterguide.com/  from iVoteValues.org of the FRC & https://erlc.com/ of the

Southern Baptists) defined the issues, and their wording influenced the reactions. At the top of their list they placed appointing conservative [or “originalist”] judges and banning same-sex marriage. Environmental issues were not worthy of mention.

But by making pastors and churches their vehicles of distribution, the iVoter guides gave their recommendations the imprimatur of spiritual leaders—perhaps even an air of divine authority.

The FRC website had a national pastors network that grew from a base of 1,800 pastors to 75,000. Many were located in critical swing states, including Wisconsin (with 891 members), Michigan (1,778), Pennsylvania (2,464), and Florida (7,372). In a tight race, a cohort of pastors who influenced as few as fifty votes apiece could swing an election.

Extremist Radio & TV network

I found that as local and regional newspapers collapsed over the early 2000s, media owned by CNP members rushed to fill the vacuum. They developed a sophisticated strategy, starting with local radio, an old-fashioned but powerful medium that had been written off too soon by the CNP’s opposition.

Three key players dominate this landscape: Salem Media Group, Bott Radio Network, and the American Family Radio networks. Over the years they have connected their holdings to a cohort of pastors, politicians, and tycoons, creating an armada of radio stations and news outlets loyal to the CNP’s political agenda, and selling millions of Americans on its harsh combination of plutocracy and theocracy.

American Family Radio preaches the dangers of modern science and “moral decay”: “The complete absence of transitional fossils disprove evolution,” it tells listeners, and reports that “God agrees … that homosexuality should be against the law.” Because these stations’ audiences have lost or abandoned professional news outlets—and because their interests had been ignored by major national media—they are more vulnerable than ever. Over time, the media empire has expanded its reach into Fox News operations and grown to include fundamentalist television broadcasting, digital platforms, book publishing, and feature-film production. The “wallpaper effect” of wraparound media can have a powerful impact. Abraham Hamilton III, host of American Family Radio’s Hamilton Corner, described the October 1, 2017, mass shooting in Las Vegas as “Satan’s work,” immune to legislation. The Democrats, he complained, were “exploiting” the victims by calling for hearings on gun control. This charge was repeated, often in the same language, by other CNP-affiliated political and media figures across platforms, including the Daily Signal, the Hillsdale Collegian, and Fox News’ Todd Starnes Show.8 The cumulative effect is the creation of a parallel universe of information.

Of Nebraska’s 220 radio stations, at least 50 are religious, and many belong to members of the CNP. By comparison, the state has only eleven NPR stations. Crossing the Great Plains, a driver can go for miles without a public radio signal, but he’ll never be far from fundamentalist broadcasting—or messaging inspired by the CNP. Media played a critical role in the CNP agenda. It was well and good for Weyrich, Viguerie, and Blackwell to recruit millions of evangelical voters. But they needed a way to reach them that complemented their pastors’ sermons, not encroached on them.

Salem found a new way to monetize religion. Other radio outlets depended on advertising for 95% of their revenue, subject to the state of the economy. Less than half of Salem’s revenue came from traditional advertising; most of it came from selling blocks of time to scores of religious organizations that solicited contributions from the listenership. Over time, the definition of “religious” customers evolved to encompass partisan organizations tied to the Council for National Policy.

The Dietary Supplement Health and Education Act of 1994 (DSHEA, pronounced “D-shay”), was promoted by Senator Orrin Hatch. Hatch and his family had extensive involvement in the nutritional supplements industry, which is based in Hatch’s home state of Utah. DSHEA prevented the FDA from regulating harmful or fraudulent supplements before they hit the market. The Los Angeles Times concluded, “The harvest [of DSHEA] has been a public health disaster.” It also created an advertising revenue stream for online and broadcast outlets of various persuasions. American Family Radio run ads for vitamins and medical and dietary supplements, many of them directed at the elderly.

In 2005 journalist Adam Piore published a detailed history of Salem’s strategy called “A Higher Frequency” in Mother Jones magazine. Piore reported that between 1998 and 2004, Atsinger, Epperson, and their company offered $423,000 in federal campaign contributions, 96 percent of it to Republicans. This rendered them the sixth largest donor in the industry.30 In 2000 Atsinger, Epperson, and a colleague donated $780,000 toward a California state ballot initiative to oppose gay marriage.

Evangelicals tended to distrust psychology, but Fundamentalist psycologist, James Dobson, who hosted a radio program called Focus on the Family, embraced the field as his calling.  Like many right-wing spokesmen, he embraced corporal punishment to discipline children using a switch or a paddle to deliver spanking of sufficient magnitude to cause the child to cry genuinely.” Dobson took similarly harsh stands against homosexuality, abortion, and pornography, clinging to positions that were increasingly discredited by the medical establishment. He claimed that “no credible scientific research has substantiated the claim that homosexuality is genetic or innate.” Instead, he held that it was usually the result of “a home where the mother is dominating, overprotective, and possessive while the father rejects or ridicules the child.” Dobson was no fan of feminism. “A good part of my professional life,” he noted, “has been devoted to trying to straighten out some of the feminist distortions about marriage and parenting and to address the relationships between men and our women in our society.

Dorothy Patterson—wife of the Conservative Resurgence leader Paige Patterson—told audiences that “A wife was created from the beginning to be a helper to her husband,” she told his listeners. “That functional role … is one of subjection, it is one of submission.

Radio offered an obvious advantage for the fundamentalist strategists. Over the postwar period, the American landscape was covered by an interstate highway system. Americans commuted in their cars, ate in their cars, courted in their cars—often with the radio on. Epperson and Atsinger systematically expanded the Salem network across the country, station by station.

But the Christian Science Monitor noted that while Pat Robertson’s broadcasts didn’t endorse specific candidates, they could (and did) “insinuate” endorsements on the air. Fundamentalist media was becoming a political force. The Monitor reported that Christian broadcasters ran around 1,300 radio stations in the United States (one out of every seven); a third of commercial publishing was evangelical, and that the outcome of the election “may ultimately depend on the impact of the so-called ‘electronic church,’ the far-reaching Christian broadcast networks.” Viguerie predicted that born-again evangelicals could become “the strongest force in American politics in the next few years.

The advent of cable television—combined with the demise of the Fairness Doctrine—represented a bonanza for the radical right. Many critics have focused on Fox News, launched in 1996, but the fundamentalist broadcasters benefited far earlier. Cable allowed them to both target and grow their audiences on a national level. The traditional networks employed huge teams of professional reporters, gatekeeping editors who checked facts, and vice presidents to enforce standards and practices, but the newly liberated cable broadcasters were unencumbered. Not only did they find ways to “insinuate” their endorsements of candidates, skirting the Johnson Amendment, they also launched an attack on professional news outlets.

“Pat Robertson’s longstanding talk show ‘The 700 Club’ … and others began to address what was happening in the news from a biblical perspective. They claimed they were providing viewers with ‘real’ explanations that media and liberal politicians covered up. These shows also reinforced conservative talking points as objective facts.

Fundamentalist broadcasting, Bivens added, “authorizes a particular, often conspiratorial way of viewing the world.

The architects of the radical right studied the art of the “soft coup d’etat”—not just to take over the Republican Party but to weaken various public institutions that challenged their “biblical values.” These included public schools that taught evolution, universities that advanced climate science, and businesses that supported equal rights for the LGBT community. They also disapproved of the professional news media, which seemed to bear every trait they spurned: urban, liberal, and more secular by the minute. They resolved to break its hold on the nation’s psyche.

Print and broadcast journalism continued to grow in influence and revenue. Newspaper penetration peaked between roughly 1970 and 1990, when the ratio of circulation to American households approached one to one. Network news, launched in the 1940s, reached an apex around the same time, and the evening news expanded from fifteen minutes to half an hour in the early 1960s. By 1980, 75% of American households were tuned to network news programs over the dinner hour. But this news ecosystem, as some journalism professors called it, was already in trouble. Newspapers were advertising-rich, producing returns of 10 to 20%, outstripping most investments in the manufacturing sector.

But family-owned newspapers paid a price for their success; when the patriarchs died, their descendants faced inheritance taxes of up to 70%, prompting many to cash out by selling their papers to corporations. Family owners were answerable to their communities and their peers, but corporations responded to shareholders who were more interested in quarterly earnings than Pulitzer Prizes. By the early 2000s, the new news business was implementing massive cost-saving measures: firing thousands of reporters, slashing circulations in underserved communities with commercially unattractive demographics, and refusing to invest in the vital new technologies that were transforming the culture. The new corporate owners squeezed every last penny from their newspapers, in many cases using their revenues to float their debt.

The result was devastating. Local voices were silenced, local populations abandoned. Newspaper ownership was increasingly concentrated in fewer and fewer hands. By 1990 just 14 companies controlled half of the 1,600 daily papers, and the concentration of ownership would increase.

Newspapers were losing ground to television, but network news divisions were also troubled. Over the late twentieth century networks were acquired by increasingly diversified corporations. CBS’s Viacom, NBC’s General Electric, and ABC’s Disney saw no need to subsidize news divisions, directing them to turn a profit like other divisions. Television news reporting slid into softer stories, shorter soundbites, and more reporting tied to entertainment and human interest. Over the next few decades, the management closed both international and domestic bureaus and laid off legions of reporters. Cable and public broadcasting filled some of the information gaps, but cable channels tended to emphasize opinion, debate, and sensationalism over traditional reporting and cultivated like-minded niche audiences. Public television was worthy but chronically underfunded.

The rest of the country’s newspaper culture suffered a colony collapse. One of the most significant casualties was statehouse reporting, the traditional purview of midsize newspapers in Middle America. Pew Research reported that between 2003 and 2014, the number of full-time statehouse reporters dropped 35%. The press corps in many statehouses dwindled, allowing state lawmakers to go about rewriting laws with less scrutiny.

All of this must have been music to the ears of the Council for National Policy.

Wildmon’s American Family Radio network, for example, produced segments with titles like “Infanticide Adopted by Democrats” and “Homosexuality is the Dividing Line between Light and Darkness.” One considered the question of how Christians should respond to a Muslim call to prayer, and answered, “They should take the call to prayer as a call to arms, to go to war in the Spirit against the demon-god Allah and the spiritual deception of Islam.

The Salem Radio Network was especially aggressive in acquiring new stations. Atsinger and Epperson developed a successful strategy of purchasing leveraged stations in urban markets. But they ran into an obstacle with the Federal Communications Commission, which prohibited a broadcaster from owning too many stations in one market. Epperson and Atsinger—by now members of the CNP board of governors— joined other broadcasters to lobby against the regulations; Salem contributed $74,000 to key legislators. The Telecommunications Act of 1996 was written by industry lobbyists, promoted by Newt Gingrich, and signed into law by Bill Clinton. It eased the ownership regulations, to the benefit of Salem and other large companies. Salem went on an acquisition binge and created a system of station “clusters” to cut costs.

Salem’s “Christian journalism” was a new genre, unhampered by professional practices of multi-sourced reporting, fact-checking, and corrections.  This was not news about Christianity, it was current events filtered through a highly partisan fundamentalist lens.

“Christian radio” had become the third-most-popular format in the United States, following country music and talk.

Eventually the Salem, Bott, and American Family Radio empires extended to at least 46 states. (As of January 2019, they owned stations in every state except Idaho, Nevada, Utah, and Alaska.) Their programming, including political content produced by CNP members, was utilized by other radio networks, including the Christian Satellite Network and Family Life Radio.

ABC, NBC, and CBS began to dump their radio stations, especially in small and midsize markets, and their news programs disappeared. The Great American News Desert grew drier by the year.

In 2008 revenues of the Associated Press fell 65%. It was a cooperative financed by member newspapers, broadcasters, and other outlets to support reporting including breaking news, investigative journalism, and foreign news beyond the resources of individual resources.  Local news across the nation was gathered.  After their decline, national news tilted to the coasts, and Fox, Sinclair, fundamentalist and other right-wing radio and TV took over local audiences.

National Public Radio (NPR)

National Public Radio, founded in 1970, did much to fill in the gap. Serving more than a thousand public radio stations, NPR offered traditional journalism and newscasts that presented multiple perspectives on public issues. Listeners could turn to NPR for detailed, thoughtful interviews with leaders from the leading political parties. Nonetheless, many conservatives in Middle America distrusted NPR as a smugly liberal voice with little interest in their issues, a maddening focus on identity politics, and a propensity for promoting the Democrats’ agenda. NPR tilted urban and coastal for obvious reasons. Its stations, its listeners—and its listener contributions—were concentrated in urban areas, suburbs, and college towns. NPR’s weekly listenership would reach 28.5 million by 2017—but that was still less than 10% of the national population.

Public broadcasting was founded with federal support, but the ongoing assault by Republican administrations whittled that funding down to almost nothing over the years. NPR responded by basing its budgets on listener contributions. But that meant that urban NPR stations—especially major stations in New York, Washington, and San Francisco—had outsize budgets and programming capacity. Stations in conservative, rural areas—the news deserts that needed them most—got by on a fraction of the funding, with part-time employees and spotty local coverage. Many public radio stations are low-budget operations based on college campuses that broadcast from translator stations whose signals vanish a few miles out of town. Those who can’t afford to pay for all of the syndicated news and information programs often substitute light musical offerings.

Oklahoma, for example, has six NPR stations, mostly in cities and college towns, while Bott and American Family Radio have a combined twenty stations blanketing the state. And radio matters: it remains an important part of daily life for millions of Americans, whether in the home, the workplace, or the car.

***

CNP strategists showed an astute grasp of electoral politics, finding hidden pockets of evangelical voters and identifying the issues that could drive them to the polls. They displayed a special talent for pinpointing the districts and swing states that could win them critical victories. The intricate mechanics of the Electoral College and redistricting presented a narrow window to circumvent the popular vote, and they seized the opportunity. The CNP and its allies spent years building party machines at a state level. The Republican control of statehouses supported their gerrymandering efforts, and powerful donors helped them tackle labor unions in Wisconsin, Michigan, and other former Democratic strongholds.

The National Rifle Association, a former gentlemen’s marksmanship club has been weaponized for political purpose.

The movement has also appropriated a vocabulary that it redeploys with Orwellian flair. “Family” is a code word for homophobic, and “defense of marriage” means prohibition of same-sex unions. “Fairness” and “justice” mean lowering taxes for the wealthy and corporations. “Values” means conservative evangelical ideology. “Right to work” means depriving unions of the benefits of collective bargaining. The movement’s brand of “religious freedom” often disparages other beliefs, and would allow fundamentalist churches to support political campaigns while retaining their tax-exempt status. And in the lexicon of Betsy DeVos, crown princess of the movement, “educational reform” means redirecting public school funding to religious schools, charter schools, and homeschooling. All of these euphemisms promote policies that victimize low-income and minority populations.

The figures who would create the Council for National Policy had a fierce allegiance to the white Protestant culture of the past, and presumed it would prevail forever. But the shifting electorate challenged that notion. As the power of the federal government expanded, its courts and agencies reflected national trends and imposed change on regions that had long lived as semiautonomous enclaves. In the late 1960s these tensions came to a head in a bedrock of American Protestantism: the Southern Baptist Convention. This conflict was an essential prologue to the story of the Council for National Policy. It was a key proving ground for some of the council’s founders; it would shape the group’s core and inform its tactics over the next half century.

The counterculture called the 1960s the “Age of Aquarius,” but Southern fundamentalists feared the decade as the eve of the apocalypse. They were rattled by the disturbing images the network news broadcasts brought into their living rooms. The year of reckoning was 1967. Southern society was based on segregation, but in June the Supreme Court struck down all state laws banning interracial marriage, and that October the court installed its first African American justice. Southerners were steeped in military tradition, but that month they watched almost 100,000 protesters march on the Pentagon. The South was still the land of church socials and sock hops, but that year Hair opened off Broadway, celebrating LSD and nudity onstage. Even the Bible was under scrutiny, as a new generation of theologians reviewed the scientific record and suggested that the Good Book was a profound work of literature, not a chronicle of historical fact. The conservative wing of the Southern Baptist Convention was profoundly shaken.

Southern Baptists were heavily concentrated in the states of the former Confederacy. As of 1980 there were more than 2.6 million Southern Baptists in Texas, almost a sixth of the state’s population. Southern Baptists represented over a quarter of all Alabamans, but they were scarce in New England. There were affiliated churches in 41 states as of 2019, but the denomination remains a predominantly southern institution.

One of its tenets was the believers’ right to conduct certain religious practices in the public square. For generations Southern Baptists and other Christians had taken it for granted that public institutions should double as religious venues. Public school days and sports events began with Christian devotions. High school football teams joined the Fellowship of Christian Athletes to pray for victory in the locker room, and county employees installed Christmas crèches on the courthouse lawn. These practices went unquestioned, and for generations few religious minorities or public atheists were around to object. For many communities in Middle America, Protestantism was the organizing principle for society, its various denominations serving as silent markers for tribes, class, and ethnicity. Churches were where housewives displayed their finery and teenagers courted under the watchful eyes of adults. Congregations served as nonstate social agencies, helping the needy and lending a hand to members in trouble. As long as communities were uniformly Christian and the nation’s values were shaped by their ethos, these phenomena were an accepted way of life.

But as America changed, the courts changed with it. They began to respond to the growing population of atheists and adherents of minority religions, who argued that state institutions should not be used to promote one religion over other beliefs. In 1962 the Supreme Court ended public school prayer. The following year it ended devotional Bible study in public schools. The fundamentalists were outraged.

Southerners resented the federal courts’ intrusion into their local affairs. In the same way antebellum Southern Baptists refused to be governed by their northern counterparts, Southerners rejected the imposition of national norms on their society.

Questioning the literal truth of the Bible could open the door to teaching evolution, environmentalism, and cultural relativism.

Some of the early political tactics included reserving blocks of rooms in conference hotels to enfranchise sympathizers, building communication networks, enlisting the media in disinformation campaigns, and spying on enemies, stratagems some saw as “going for the jugular.” Similar tactics would be deployed against moderate Republican congressmen in the years to come.

Social issues were key to organizing the Southern Baptist messengers, but the fundamentalist leaders were equally determined to expand their role in the public sphere. At the core of their political mission was the demand for “religious freedom” to enhance their political influence, using the church as a tax-exempt power base.

Their next step was to extend this strategy from church to state, a plan rooted in the concept of theocracy: the belief that government should be conducted through divine guidance, by officials who are chosen by God. The fundamentalists believed that this concept was written into the country’s founding principles, but this was not true:  The Founding Fathers … stipulated that no religious test would be allowed for federal office holders. The First Amendment proclaimed: “Congress shall make no law respecting the establishment of religion, or prohibiting the free exercise thereof.

Weyrich cofounded three institutions that became crucial building blocks of the radical right (and, eventually, of the Council for National Policy). One was the Heritage Foundation, intended as a counterweight to Brookings and other liberal think tanks, with major funding from beer scion Joseph Coors and Mellon heir Richard Scaife. Weyrich became its first president. Weyrich also cofounded the Republican Study Committee (RSC) to counter the Democratic Study Group, founded in 1959. The RSC would advance the interests of the conservative wing of the Republican Party in Congress, to the detriment of party moderates. Finally, Weyrich founded an influential Republican lunch club on Capitol Hill, with the help of two youngsters named George Will and Trent Lott. The Weyrich Lunch would become a Washington institution

Weyrich cofounded the American Legislative Exchange Council, or ALEC, as a way for the Republican minority to gain the upper hand. Republican state legislators and their spouses were invited to junkets at luxury hotels and resorts, organized and financed by hundreds of lobbyists and corporations. There the lawmakers studied “model” legislation, drafted by the corporations they purported to regulate. The bills were often introduced in states with favorable conditions, such as West Virginia, Oklahoma, and Mississippi. There they were validated in state courts, then leveraged to other states, bringing the advantage of a legal precedent. Extractive industries, Big Pharma, tobacco companies, and others flocked to ALEC conferences, paid their dues, and emerged with their reward. (It would take the Democrats four decades to launch the State Innovation Exchange as a tactical response.)

Weyrich and his allies knew that the Democrats enjoyed a mounting demographic advantage. The coming generations of voters, newly enfranchised minorities, and energized women all leaned Democratic. Much of the national news media also skewed liberal, especially in the era of Watergate and the Vietnam War.

The New South offered Republicans the potential for a new well of untapped voters, and Weyrich embarked on a search for the partners who could turn his dreams of a conservative coalition into a reality. The resurgent Southern Baptists were a logical starting point.

At a mass rally held by the Moral Majority Weyrich suggested to Mr. Reagan that because it was a bipartisan [event] it would be in his best interest, since we could not and would not endorse him as a body, if his opening comment were “I know this is nonpartisan so you can’t endorse me. But I want you to know—I endorse you and what you’re doing.” Reagan delivered his lines to perfection, and the masses leaped to their feet. The throng included men who would guide the movement for decades to come. Mike Huckabee, Robison’s assistant, was in charge of logistics. Meeting Reagan for the first time showed the 24-year-old Arkansan how religion and media could be channeled into political power. “No one had ever given so much attention to, or paid respect for the evangelicals,” Huckabee told the Washington Times. “It was magic, and [the evangelicals were] a major force in Reagan winning.

“I don’t want everybody to vote,” Weyrich told his audience. “Elections are not won by a majority of people. They never have been from the beginning of our country, and they are not now. As a matter of fact, our leverage in the elections, quite candidly, goes up as the voting populace goes down. In other words, suppressing opposition voters was as critical as engaging supporters.

Carter had infuriated the fundamentalists by supporting the Equal Rights Amendment and abortion rights, as well as allowing the IRS to conduct its ongoing audits of segregated institutions. The fundamentalists might do better cutting a deal with a questionable Reagan, they reasoned, than relying on a righteous Carter.

The pastors asked Paul Weyrich how they could leverage the rally into a political movement, given their limitations. Weyrich understood their concerns. “You don’t think your congregations will tolerate your involvement in public policy,” he told them. “Amen—that’s right,” they answered. Many churchgoers believed the church should attend to spiritual life, and render politics “unto Caesar.” There was a lot at stake: evangelism had become big business, and millions of dollars hung in the balance

The group commissioned Tarrance to conduct a poll asking their congregations first whether they would support their pastors’ active involvement in politics, and second, whether they would help pay for it—without cutting back on their usual tithing. The result was affirmative on both counts.

Bill Moyers, a journalist and former Southern Baptist pastor, reported, “In Dallas, the religious right and the political right formally wed … By the mid-1980s, Southern Baptist annual conventions began to look like precinct meetings of the Republican Party.

The harvest of votes was potentially massive. Falwell had noted that only 55% of evangelicals were registered to vote, compared to the national average of 72%. His movement set up tables in church lobbies and parking lots with the mantra, “Number one, get people saved. Number two, get them baptized. Number three, get them registered to vote.”

Surveys show that from 1980 to 1984, the percentage of Southern Baptist clergymen who described themselves as Republican rose from 29 to 66%, while those identifying as Democrats fell from 41 to 25%. Many of their congregants followed.

There was a basic philosophical difference, Pressler wrote, between fundamentalists and their political adversaries; the fundamentalists “believe in the sinfulness of each person … the consistent liberal, on the other hand, believes in the basic goodness of human beings.

The leaders of the Conservative Resurgence refined their other policy priorities. They wanted to impose severe legal restraints on the right to abortion wherever possible, limiting it to cases in which the life of the mother was at stake. This reversed the more liberal position the Southern Baptists had adopted in 1971. They also sought to eliminate IRS restrictions on using their churches to pursue their political agenda while maintaining their tax-exempt status. All of these goals could be blocked by court rulings and federal regulations, so they focused on the mechanics of government: limiting the power of the federal government, strengthening state government, and installing sympathetic judges to the federal courts.

Gazing out at the Dallas rally, Weyrich beheld an army of Southern Baptists who could serve as foot soldiers and an electoral base to fulfill his political agenda. But the Southern Baptists—13.7 million of the U.S. population of 226 million—couldn’t do it alone. The previous year Weyrich told Jerry Falwell of his vision of tens of millions of evangelicals, fundamentalists, Catholics, Mormons, and certain mainline Protestants, who put aside their religious divisions to form a massive voting bloc.

By 1980 Weyrich’s complex machine was under construction, with the Heritage Foundation to program policy, the Republican Study Committee to wrangle congressional votes, ALEC to draft state-level legislation, and the Moral Majority to mobilize the masses. Now the movement needed money. For this Weyrich looked to the business sector. He had already recruited Joseph Coors and Richard Scaife to back the Heritage Foundation,

The nation’s vast business community brimmed with magnates who chafed at corporate taxes, oil barons who resented environmental regulations, and entrepreneurs who wanted to pursue risky ventures without pesky investigations. These individuals sought to curtail the power of the federal government and reassign it to more easily managed statehouses. Weyrich’s political machine was an investment that promised massive returns

 In the 1800s, Alexis de Tocqueville observed that American organizations performed functions that were the purviews of the aristocracy, the church, and the state in European societies. He asked, “But what political power would ever be in a state to suffice for the innumerable multitude of small undertakings that American citizens execute every day with the aid of an association?

Paul Weyrich’s new movement needed associations too. If the Texas fundamentalists and their Washington allies were going to make national inroads, they had to appeal to non-fundamentalists in other regions of the country, based on a new network of seemingly secular organizations.

The creation of effective coalitions, he stated, “takes two things: It takes things to get real bad very quickly, and there has to be some political machinery there to take advantage of that opportunity.” The New Deal was the perfect example. Over the 1930s things got “real bad very quickly,” and FDR’s crack team of advisors assembled the political machinery to consolidate the Democrats’ advantage

The Democratic Party continued to promote a national civil rights agenda—and the southerners continued to resist.  In the eyes of the fundamentalists, things got “real bad very quickly.” Tensions mounted, and civil rights protesters marched across southern cities, met by fire hoses and police dogs.

In 1964, Lyndon Johnson signed the Civil Rights Act, the most extensive civil rights legislation since Reconstruction, barring discrimination in schools and the workplace. This energized the backlash in the South, spurring the Christian academy movement and driving a wedge through the Democrats’ Solid South.

Viguerie prepared to take on the Republican establishment. His list not only anchored an impressive fund-raising operation, it also offered a way to bypass the national news media. “We couldn’t get our candidates on the evening news or our issues talked about,” he stated. Direct mail allowed him to expand the influence of candidates who were otherwise written off.

They decided that the winner of an election is “determined by the number and the effectiveness of the activists and leaders on the respective side.” Third, they declared that “the number and effectiveness of the activists and leaders … is determined by the political technology used by that side.

Phyllis Schlafly, a constitutional lawyer from St. Louis, used her legal training to restrict the rights of working women. A veteran of the John Birch Society and the Goldwater campaign, Schlafly organized a successful national movement to derail the Equal Rights Amendment in 1977. She sowed panic among her followers by warning, misleadingly, that the ERA would cost widows their Social Security benefits and deprive divorced mothers of custody of their children.

“When [Phyllis Schlafly] and I were involved in politics back in the fifties and sixties, the conservative movement rested on a two-legged stool,” he recalled. “The two-legged stool was national defense, which really meant anticommunism, and economic issues. We’d win 40, 45, sometimes 47% of the vote. Very seldom would we ever get 51%.” But under the leadership of Schlafly, Weyrich, and Falwell, “conservatives began to reach out and bring into the conservative movement social issues.

Blackwell was ever attentive to lessons from the left: “How you design a piece of political literature, how you raise funds, how you organize a precinct, how you attract a crowd to a political event, how you communicate to a mass audience online—those techniques can work for anybody,” he wrote. In the process, the Leadership Institute imbued generations of right-wing candidates and their campaign managers with a common ideology, vocabulary, and method.

Blackwell, Weyrich, and Viguerie were ready to consolidate their gains. On May 19, 1981, Viguerie gathered more than 50 conservatives at his handsome brick home in McLean, Virginia, to found the Council for National Policy.

The Johnson Amendment to the tax code became a plague to the fundamentalist movement. It limited tax-exempt status to groups whose activity “does not participate in, or intervene in (including the publishing or distributing of statements), any political campaign on behalf of (or in opposition to) any candidate for public office.” The Revenue Code further ruled that groups could engage in voter education so long as it was “conducted in a non-partisan manner”—but not if it favored one candidate over another. This wording applied to churches as well. Critics noted that Lyndon Johnson slipped his amendment into the bill without debate

Daily life in the United States has been dramatically shaped by the existence of 501(c)(3) status. It has been the hidden state subsidy for art museums and opera companies, whose donors can write off their contributions. It has allowed churches to amass vast real estate holdings and universities to build up huge endowments and fund critical research. The Council on Foreign Relations justified its 501(c)(3) status by serving as a leading think tank on foreign policy. As an educational institution, it offered a portfolio of publications, starting with its journal, Foreign Affairs, as well as numerous online resources and academic partnerships. Its membership was a matter of public record. Most of its frequent meetings were closed, but others were open to the press. The Council on Foreign Relations became a routine stop for every presidential candidate from a major party. The members’ politics ranged from arch-conservative to extremely liberal, making it a venue for spirited debate.

Soon after the Council for National Policy was founded in 1981, its leaders applied to the Internal Revenue Service for 501(c)(3) status, arguing their group’s similarity to the Council on Foreign Relations. They pointed to Foreign Affairs, claiming that they would produce similar educational materials, and the IRS granted their request. The CNP’s 501(c)(3) status benefited the network’s financial strategy. But unlike the Council on Foreign Relations, the CNP and its partners did not promote bipartisan discussion or open-ended policy debates. They functioned to promote the right wing of the Republican Party, skirting the IRS restrictions against partisan campaigning with the airiest of pretenses.

White evangelicals had voted in equal numbers for Carter and Ford in 1976, but they voted two to one for Reagan, and the Republicans took control of the Senate for the first time in 26 years.

The fundamentalists had expected Reagan to show his gratitude by moving full steam ahead on their social issues, ending abortion and quashing IRS challenges to their tax-exempt status. Instead, the White House emphasized economic policy and put the fundamentalists’ issues on the back burner.

In Reagan’s second term, his administration handed the fundamentalists a gift that would galvanize their media and leverage it into an even more powerful political tool: its ruling on the Fairness Doctrine. The doctrine had been in effect since 1949, and required any radio or television broadcaster seeking a license to devote a certain amount of airtime to controversial matters of public interest and to offer opposing views on critical issues. The doctrine also dealt with two other contingencies. If a station aired personal attacks on an individual involved in public issues, it was obliged to notify the party in question and offer a chance to respond. If a station endorsed a candidate, it had to provide other qualified candidates the opportunity to respond over its airwaves.

In August 1987 the four FCC commissioners—all Reagan or Nixon appointees—abolished the doctrine unanimously. Critics argued that the Fairness Doctrine had stopped making sense when cable television burst upon the scene with the birth of CNN in 1980. Television and radio transmissions were no longer captive to “scarce frequencies.” Cable channels (which were not covered by the Fairness Doctrine) proliferated, representing diverse points of view.

These rallying cries to tribalism and paranoia, echoing across the South and the West, would cleave a rift in Americans’ political perceptions that persists to this day.

Right-wing anti-environmentalism to increase corporate profits

While the movement’s public platform preached a return to Judeo-Christian values and regressive social policies, the underlying economic issues were equally critical. Many of these concerned environmental regulations. The Environmental Protection Agency had been founded in 1970 under the Nixon administration, in reaction to a run of national emergencies.

In Texas and Oklahoma, runoff from abandoned oil wells had been quietly poisoning farmland and drinking water for decades. In 1969 DuPont opened a chloroprene plant among the petrochemical facilities on an 85-mile stretch of the Mississippi River in Louisiana that would be dubbed the Chemical Corridor. A 2014 assessment by the EPA found that the five census tracts around the plant had the nation’s highest cancer risk in the country. But the extractive industries treated the Clean Air Act (1963), the Clean Water Act (1972), and the EPA (1970) as existential threats. It didn’t take an oracle to see that environmental regulations would take a bite out of oil, coal, and mining profits.

Opportunists swarmed the countryside, piercing the earth and throwing up shards of shale and toxic brine. When I was a teenager, oil companies operated rigs on the very grounds of the Oklahoma state capitol. There was little talk of preserving the environment; indeed, theology was right on hand to justify the pillage. Nature existed to be conquered and exhausted. The Dominionists cited Genesis 1:26: “And God said, Let us make man in our image, after our likeness: and let them have dominion over the fish of the sea, and over the fowl of the air, and over the cattle, and over all the earth, and over every creeping thing that creepeth upon the earth.” The oil industry of Texas, Oklahoma, and Louisiana was a natural habitat for Dominionist theology.

Confederate roots ran deep in the oil belt. After the war Confederate veterans poured into the underpopulated regions of Texas and Oklahoma, seeking a fresh start and imprinting the states with their politics and their culture. There was also an economic legacy. On a Facebook group called Sons of Confederate Veterans Oklahoma Division, a member offered his perspective on the roots of the conflict: “Y’all are partly right, but it was mostly about the Federal government’s overreach in the south, extra taxes and tariffs put on them. Sort of what’s been happening with all the socialist were voting for office today. You will not recognize it, you have been conditioned to accept it thru over 50 years of public school. Live free while you can.

The DeVos & Prince families’ influence

Richard DeVos was a long-standing member of the Council for National Policy, and the ruling patriarch of an economic and political dynasty. The DeVos and Prince families—united through the marriage of Richard’s son and Betsy Prince—built two vast fortunes through a range of unusual business practices. They have used their massive wealth to erode the state’s power and impose their rigid theology on society. The CNP was central to their mission, and they have served as a cornerstone for it.

In the 19th century the Dutch government decided to liberalize the laws concerning the state Dutch Reformed Church. A small group of conservative farmers resisted and immigrated to America, cleaving to their old ways through the Reformed Church in America. The “Seceders” represented only 2% of the Dutch population, but they made up almost 50% of the Dutch immigrants to America before 1850. In 1858 a third of the Dutch immigrants in Michigan decided that their American church had succumbed to “moral decay” and theological liberalism, and founded the Christian Reformed Church. They blamed the Enlightenment for “idoliz[ing] human reason at the expense of Bible-based faith” and set out to contest the government’s role in public education and labor relations. Education should be the purview of the family and the church, not the government, they argued, and trade unions and collective bargaining undermined divine authority.

Amway products had their fans, but some consumers complained they were overpriced, and disaffected dropouts called their sales force a “cult.” Nonetheless, Amway martialed a vast network of indoctrinated distributors and customers that could be mobilized for political as well as commercial purposes

The dynasty spent a king’s ransom on political operations. As fundamentalists they invested in campaigns against gay marriage and abortion rights. As businesspeople, they resented the federal government, especially its power to regulate business practices and carry out consumer protections. As donors, they contributed massive amounts to political campaigns, candidates, and organizations that advanced their agenda.

Betsy DeVos served as the family’s minister of public education—or rather, against public education. She had worked her way up the Republican Party ladder, primarily as a fund-raiser, to become a member of the Republican National Committee. On her home turf, she labored tirelessly to promote charter schools and school vouchers as ways to divert tax dollars from public schools to private religious schools.  Though the Detroit public schools—troubled as they were—produced better test scores.

Unions

The Rust Belt states were some of the last holdouts from organized labor’s glory days. U.S. union membership peaked in 1954, when nearly 35% of all U.S. wage and salary workers were unionized. By 2014, union membership had fallen to just over 11%. There were many reasons for this decline, among them automation and manufacturers’ decisions to move factories overseas—as well as decades of Republican assaults on unions’ bargaining power.

Labor unions had been instrumental in achieving major reforms: abolishing child labor, advancing occupational safety, raising the standard of living. But they were not immune from corruption, and in some areas they inspired resentment by fostering a two-tier labor market that favored friends and family of members, and winning benefits denied to the self-employed and other workers.

Unions have been closely allied to the Democratic Party, and Republicans have responded by promoting so-called right-to-work legislation on a state level. These laws weaken unions by permitting workers to benefit from a union’s collective bargaining process without paying union dues. Workers have less incentive to join the union, and unions lose the funds and manpower they need to participate in the political arena.

Right-to-work laws reduce Democratic Presidential vote shares by 3.5 percentage points.

Although union membership in the private sector continued to decline, public sector unions grew rapidly, especially at the local level. In 2009 their membership overtook that of private sector unions for the first time. Police, firefighters, and teachers’ unions remained a potent political force. Betsy DeVos had already declared war on the teachers’ unions. The New Deal coalition was already weakened by decades of the Democrats’ dissension and neglect. The coup de grace required only money, media, and strategy. These the CNP had in ample supply.

The NRA

In 1871 journalist William Conant Church came back from the front alarmed by the Union soldiers’ poor marksmanship; the records showed that a thousand rounds were fired for every Confederate hit. Church and his friend General George Wingate decided that the country needed an organization to improve the marksmanship of future soldiers. They launched it in New York City’s fire department headquarters at 155 Mercer Street (now a Dolce & Gabbana boutique), and named it the National Rifle Association. For its first century the NRA concentrated on hosting target practice at shooting ranges and promoting gun safety. These were useful lessons. The West was young, and settlers relied on guns to hunt game and kill the predators raiding their poultry and livestock.

Rifles and shotguns were standard items in the farmers’ toolkit, and lessons in firearm safety counted as a vital public service. The NRA worked closely with the National Guard and supported U.S. military training efforts in World War II. After the war, the NRA returned to educating hunters on safety and conservation measures.

But things changed. As the country urbanized, the rate of violent crime rose, more than doubling between 1960 and 1970. Congress responded by passing the Gun Control Act of 1968, which limited the sale of weapons to felons and minors, barred mail-order purchases, and required new firearms to bear a traceable serial number. The NRA’s vice president wrote in American Rifleman that while he saw parts of the bill as “unduly restrictive, the measure as a whole appears to be one that the sportsmen of America can live with.”6 The NRA and the nation were still on the same page.

On May 21, 1977, at the NRA’s annual meeting, there was dissension in the ranks, and it erupted, an event that came to be known as the Cincinnati Revolt.  Harlon Carter took over and used his position to pioneer a new style of lobbying. He orchestrated national opposition to a 1975 bill that sought to restrict the purchase of handgun ammunition under the Federal Hazardous Substances Act, generating more than 300,000 letters from gun owners to congressmen, some of which included petitions bearing thousands of signatures. The letters supporting the limits, in contrast, numbered 400.

Carter’s campaign was successful, and the limits on handgun ammunition were defeated. Congressmen took heed of the NRA’s new muscle; the NRA-ILA built out its mailing lists and began to deploy them on state and local campaigns as well as national ones.

Under their leadership, NRA membership mushroomed from 980,000 in 1977 to 1,900,000 in 1981. Their assets grew in step with the membership, since fees and contributions provided the lion’s share of the organization’s revenues. The NRA was exempt from federal income tax as a 501(c)(4) organization, defined as one operating “exclusively for the promotion of social welfare … the net earnings of which are devoted exclusively to charitable, educational, or recreational purposes.

One watershed was the battle over California’s Proposition 15 in 1982. This measure called for limiting handgun ownership through a number of measures, including a registration process and a ban on mail-order purchases.

The NRA swung into action, with a budget of over $5 million. It martialed an estimated 30,000 volunteers to distribute flyers and make phone calls, and convinced some 250,000 Californians to register to vote, just to oppose the proposition. The NRA crushed the measure by a two-to-one vote.

The Democrats had been counting on votes from African Africans, Asian Americans, and Hispanics, but their turnout was lower than expected, while the NRA constituency’s turnout was higher. Tom Bradley, the popular African American Democratic mayor of Los Angeles, lost his bid to become governor by less than 100,000 votes—a fraction of the NRA’s hidden pool of 250,000 new voters.

Inspired by the high turnout of NRA members the new model became: identify an invisible, disengaged group of potential voters. Find a hot-button issue to activate them. Keep them riled up with targeted media and direct mail. Facilitate their interactions in gathering places they frequent, to reinforce their commitment with groupthink. Follow up with onsite voter registration and transportation to the polls on Election Day. This tactic would be adopted by various CNP partners and reinforced with digital tools, to serve as a model for elections to come.

In 1991 the NRA elected Wayne LaPierre to the leadership position of executive vice president. LaPierre, a professional lobbyist, brought a new emphasis on advertising and marketing to the job. Membership, which was claimed to be 2.5 million when LaPierre came into office, rose to 3.4 million by 1994.

LaPierre faced his first major test shortly after. With Bill Clinton’s election in 1992, the anti-gun lobby moved swiftly to introduce the Brady Handgun Violence Prevention Act, named after James Brady, the White House press secretary who was gravely wounded in the 1981 assassination attempt on Ronald Reagan. The bill was signed into law in 1993, the first such legislation passed since 1968.

In 1994 the NRA drew up a list of 24 congressional supporters of the Brady Bill and went to work. On election night, 19 of the 24 went down in defeat, and the Republicans won control of the House for the first time in 40 years. Clinton blamed the NRA.

Challenging presidential candidate Al Gore’s calls for gun control after the Columbine massacre, Heston appeared at a 2000 rally, hoisting a rifle and shouting, “From my cold, dead hands!

Demographically, gun owners tended to be older white males in rural areas of the South, Midwest, and West, and were more than twice as likely to be Republicans as Democrats. Furthermore, white evangelicals were more likely than other religious groups to own a gun and to support the NRA. The opportunities for networking were obvious.

The primaries in Iowa, a fundamentalist-heavy state, offered a major opportunity, Silk reported: “The technique of the Robertson campaign was to make caucus attendance a church activity. Tables would be set up for congregants to sign on to caucus for Robertson, and when the day came they showed up en masse. Indeed, the strategy worked so well that it propelled Robertson to a second-place finish ahead not only of Kemp but also of Bush himself.

George W. Bush courts the evangelists

Bush’s Episcopalian background was a disadvantage, but he was coached on the art of fundamentalist fudging.  He learned the language of the conversion experience, at least well enough to sow division. “Methodically,” Silk added, “[Bush’s] people had taken Bush to call on leading Southern pastors, whom the transplanted Yankee convinced that yes, he too was a Christian who had been born again.  He was instructed Bush to “signal early, signal often” to the evangelical community. He noted that the national media was hostile to the fundamentalists, so relations were best established early in the campaign before the coverage intensified. He was given memos on fundamentalists on a state-by-state basis, naming the influential preachers, describing their doctrines, and rating their popularity. The strategy worked.

The evangelicals’ union equivalents in the Republican Party were gathering steam, at the same time the Democrats’ actual labor unions were going off the rails. Between 1980 and 1990, U.S. union membership dropped by almost a third, to only 16% of the workforce. Over the same period, the number of Americans identifying as “evangelical” and “born again” rose to about a third of the population

Once in office, Bush, like Reagan before him, appointed moderates to key positions and focused on economic and foreign policy. “We won three landslide presidential elections in the 1980s, but … we were still burdened by the dead wood of the business-as-usual Republican Party,” fumed the CNP’s master marketer.  So they began to look outside the Republican establishment for new leaders and for a new vehicle to translate their anger and outrage into political action.

In 1996 the bipartisan Federal Election Commission filed a lawsuit against the Christian Coalition, charging the organization—whose membership had grown to 1.7 million—with acting illegally to advance Republican candidates, including CNP members Senator Jesse Helms and Oliver North. As a 501(c)(4) organization, the Christian Coalition was required to be nonpartisan, but the FEC found that over the 1990, 1992, and 1994 elections it had used voter guides, mailings, and telephone banks to campaign for conservative Republicans. The contributions that paid for these efforts should have been reported as campaign contributions.

The New York Times reported, “That same year, the suit said, the coalition coordinated with the National Republican Senatorial Committee to produce and distribute 5 million to 10 million voter guides to help Republican Senate candidates in seven states.” The coalition also worked in “coordination, cooperation and/or consultation” with the 1992 Bush campaign. Its activities included spending funds to identify and transport voters to the polls, and to produce and distribute 28 million voter guides. Oliver North’s unsuccessful 1994 bid for the Senate benefited from 1.7 million voter guides.

Their ensemble of single-issue organizations harmonized like a well-tuned choir. The CNP leadership set the agenda, the donors channeled the funding, the operatives coordinated the messaging, and the media partners broadcast it unquestioningly. Every element of the operation worked toward getting out specific votes in support of hand-picked candidates. They were relentless in helping their friends and punishing their enemies. There was little interest in engaging Democrats in constructive debate or reaching across the aisle. Theirs was a Manichean vision of good versus evil. They were the elect, chosen by God to set the nation on His path. Democrats were demonized.

A 1993 poll showed that only 12% of the voters and 22% of evangelicals considered abortion to be a key issue. Reed was particularly interested in broadening the movement’s appeal to conservative Catholics. The situation called for new tactics. If the electorate wasn’t sufficiently worried about their issues, the issues would need to be refined, reframed, and sold to their voters.

The party ranks still included moderates such as Pennsylvania senator Arlen Specter and New Jersey governor Christine Todd Whitman, both of whom had taken pro-choice positions.

In 1995 James Dobson threatened to bolt both the CNP and the Republican Party on the grounds of insubordination on both fronts. He arrived in Washington with a small entourage, including Ralph Reed and Betsy DeVos, to lecture Republican presidential hopeful Phil Gramm. They sternly informed Gramm that he needed to run on a “morality” platform, but Gramm balked at the idea. The following year, candidate Bob Dole proved equally uncooperative on the question of appointing antichoice judges to the Supreme Court. Dole committed a further offense by suggesting he would make a place in his cabinet for Colin Powell, a moderate Republican with a pro-choice stance.

Dobson and company made it clear that they would rather see the Republicans lose than win with a maverick, and punished Dole by withdrawing their support. In November Dole went down in defeat to Bill Clinton’s bid for a second term. One factor was the evangelical turnout, which dropped 6% from 1992 to 1996.

In February 1998, Dobson returned to the CNP fold with an appearance at its Phoenix meeting.

“Does the Republican Party want our votes—no strings attached—to court us every two years, then to say, ‘Don’t call me. I’ll call you?’ ” he demanded. “If I go, I’ll take as many people with me as possible.” His audience understood that Dobson’s weekly radio audience numbered 28 million—when the combined audience for all three network news broadcasts had dropped to 32 million viewers.

The following month Dobson delivered his ultimatum to 25 House Republicans in the Capitol basement, threatening to pull his support from the party unless it backed his agenda. He detailed his demands in a letter to Rep. Tom Coburn of Oklahoma. The list included defunding Planned Parenthood, eliminating “so-called safe-sex and condom distribution programs,” and cutting off support for the National Endowment for the Arts. It added supporting school choice and “a ban on partial-birth abortion, the defense of traditional marriage, and opposition to any legislation that would add ‘sexual orientation’ to any civil rights law, educational program, or any congressional appropriation.” The CNP would adhere to this menu with astonishing consistency over the next two decades.

The bullying tactics worked. “Keeping Dobson and other Christian-right leaders happy has become the central preoccupation of Republican lawmakers,” CNN reported. “In the House, the legislative agenda is crammed with ‘pro-family’ votes aimed at Dobson’s constituency. But people had to vote; without them, the movement was stalled.

The “Program” amounted to a virtual declaration of war on American culture and governance—shocking in its ruthlessness and antidemocratic spirit. Our movement will be entirely destructive, and entirely constructive. We will not try to reform the existing institutions. We only intend to weaken them, and eventually destroy them. We will endeavor to knock our opponents off-balance and unsettle them at every opportunity … We will use guerrilla tactics to undermine the legitimacy of the dominant regime. We will take advantage of every available opportunity to spread the idea that there is something fundamentally wrong with the existing state of affairs … Most of all, it will contribute to a vague sense of uneasiness and dissatisfaction with existing society. We need this if we hope to start picking people off and bringing them over to our side. We need to break down before we can build up. We must first clear away the flotsam of a decayed culture.

The new movement advocated “intimidating people and institutions” such as Hollywood celebrities and university administrators: “We must be feared, so they will think twice opening their mouths. They must understand that there is some sort of cost in taking a ‘controversial’ stand.

The movement would stoke the flames of alienation: “It is a basic fact that an us-versus-them, insider-versus-outsider mentality is a very strong motivation in human life.” The movement would transform the political culture by laying siege to the popular culture through dedicated organizations. These new associations would watch movies together and “feel part of the group as we watch it.” They would engage in charitable activities, partly to create a positive public image and “partly to create an alternative to government solutions.” The groups “should provide everything that a person could want in terms of social interaction,” other than the office and the church, although some churches would be affiliates. It would include sports leagues to recruit people who were otherwise uninterested.

The essay echoed authoritarian philosophies, emphasizing groupthink to the detriment of independent inquiry and open debate. “The movement should imitate the communist distinction between party members and fellow travelers,” it continued. “There is no medium more conducive to propagandistic purposes than the moving image, and our movement must learn to make use of this medium.” Effective television and movie propaganda would require creative talent and considerable capital, “but these hurdles must be overcome sooner or later.

The evidence suggested that they were losing ground on abortion and gay rights, but there were promising signs that they could make same-sex marriage their next hot-button issue.

Your constituency is the voters, especially the coalition which elected you. You can’t count on the news media to communicate your message to your constituency. You must develop ways to communicate with your coalition which avoid the filters of the media. Focus on your base. Write to them. Meet with them. Honor them. Show yourself to be proud of them. Support their activities. Show up at their events. Help other politicians and activists who share their priorities.

The Council for National Policy’s demographics problem continued. The bedrock of its support, the older white Protestant population, was aging. Younger, more racially diverse voters skewed liberal, especially on social issues, and the causes that mobilized fundamentalist voters didn’t play as well with the new generations. Young women who had come of age with abortion rights weren’t ready to surrender them—especially to a movement that maintained that life began with conception. Millennials had grown up around openly gay friends and relatives, and the sky hadn’t fallen—even when they enlisted, married, or had children.

The manifesto specified that none of these efforts would bear fruit if they didn’t address a vital demographic: “We will accomplish the goal of retaking our country only when large numbers of young people are educated outside of the indoctrinating environment of many public and private schools, universities, and of course, the popular culture. At this point in their lives, many of their ideas are still in the formative stage, the more so the younger they are … College students must be a key audience for our movement, since they are free of excessive time commitments and they find themselves in an environment that (theoretically) encourages activism and exposure to new ideas.

The movement, it argued, needed to establish “alternative fraternities” as well as study groups and book clubs that could “build each other up in every possible way: in terms of public speaking skills, debating skills, physical skills, intellect, manners, aesthetic sense.  But the CNP’s most visible efforts were focused not on fraternities and book clubs but on cultivating entire colleges.

The CNP’s partner media platforms were as networked as its organizations. Hillsdale College enjoyed a cozy relationship with the Daily Caller, founded in 2010 by Tucker Carlson and CNP member Neil Patel, and seeded with $3 million from former CNP president Foster Friess. Described as the radical right’s answer to the Huffington Post, the Daily Caller claims more than 20 million unique readers a month on its home page, and millions more on its partner sites and social media. (As of 2019 its Facebook page has more than five million followers.) The Daily Caller creates and distributes its content through the Daily Caller News Foundation, or DCNF—another tax-exempt 501(c)(3) organization. The foundation shares content with over 250 publishers, and its website states that its content “is available without charge to any eligible news publisher that can provide a large audience.

Falwell was an eager entrepreneur. In 1971 he founded a small Baptist college in Virginia as a subsidiary of his multimillion-dollar televangelism business. But his revenues stumbled in the 1980s with the fallout from the Jim Bakker and Jimmy Swaggart sex scandals, and his college suffered too. Rebranded as Liberty University in 1985, the school made a partial recovery, but it still labored under heavy debt. Liberty started to explore the economic potential of an online curriculum, propelled by the vision of Falwell’s son, Jerry Jr., a bearded version of his father. There were limits to that vision. One was a series of scandals involving a number of for-profit schools with online curricula, which were issuing worthless diplomas while skimming vast amounts of federal scholarship funds. (Liberty is officially “non-profit.”) In 1992 Congress responded by passing the 50% rule, requiring colleges to hold at least half of their courses on a physical campus to qualify for federal support. But in 2006 the Republican Congress quietly passed legislation removing those consumer protections, stealthily inserting eight lines into a vast budget bill.

This benefited a massive number of commercial educational institutions, including many fundamentalist colleges. Liberty University’s fortune was made; it quickly expanded to become the second-largest online college in the United States. As of 2015, its on-campus student body numbered around 15,500, while its online enrollment approached 95,000. The school, like many similar institutions, makes a special effort to recruit military veterans, who have access to additional government funding. By 2016 the university was pulling in more than $1 billion a year, most of it courtesy of U.S. taxpayers, and clearing a net income of $215 million; Falwell Jr.’s salary was set at almost $1 million a year. The university has dismissed faculty concerns and student complaints about the quality of online instruction.

The Council for National Policy has rich hunting grounds in America’s evangelical colleges, which number over a hundred.

The Leadership Institute plays an essential role in TPUSA’s “Professor Watchlist,” a site that publishes photos and denunciations of professors. The accused’s offenses range from joking about Republicans to documenting gender bias in economics textbooks. (Politico recorded 226 “watch-listed” professors at 156 schools in 2018.) The site encouraged students to inform on their professors through the Leadership Institute’s Campus Reform project. Campus Reform works alongside TPUSA to equip and train conservative student activists across the country, through twelve regional field coordinators.

Another Turning Point USA initiative, the Campus Victory Project, consists of a plan to “commandeer the top office of Student Body President at each of the most recognizable and influential American Universities.” In 2017 the New Yorker’s Jane Mayer published the content of a brochure from the project, which outlined the stages of its campaign. “Once in control of student governments,” Mayer wrote, “Turning Point expects its allied campus leaders to follow a set political agenda. Among its planks are the defunding of progressive organizations on campus, the implementation of ‘free speech’ policies eliminating barriers to hate speech, and the blocking of all campus ‘boycott, divestment and sanctions’ movements. Turning Point’s agenda also calls for the student leaders it empowers to use student resources to host speakers and forums promoting ‘American Exceptionalism and Free Market ideals on campus.’

Charles and David Koch were unlikely allies for the fundamentalist right. Religion has played little part in their rhetoric; they preach the free market gospel. Fundamentalists should have been dismayed at the way the Kochs extended their free-wheeling notions to the private sphere. David Koch advocated civil liberties that the fundamentalists bitterly opposed, including same-sex marriage and abortion rights. Outlining his philosophy in a 2014 interview, he explained, “I’m basically a libertarian. And I’m a conservative on economic matters and I’m a social liberal.

David decided to take things a step farther, with an attempt to disrupt the bipartisan status quo. In 1980 he ran as the Libertarian Party’s candidate for vice president, on a platform that can only be described as bizarre. It called for the elimination of all restrictions on immigration and the abolition of the Immigration and Naturalization Service, the repeal of all gun laws, opposition to all taxation, the abolition of the FBI and the CIA, and the repeal of Social Security. It added that no one, no matter how psychotic, should be involuntarily committed to an institution for care. The platform also called for the legalization of homosexuality, prostitution, abortion, and all forms of drug use.

The two brothers reluctantly turned back to the GOP. Like Richard Viguerie and Morton Blackwell, they were dismayed by centrist Republicans. Nixon had founded the Environmental Protection Agency, and moderate Republicans were willing to reach across the aisle to collaborate and compromise with Democrats on taxes and entitlement programs. But Reagan’s “Southern strategy” showed new potential to widen the country’s political divide, and Texas was a key component. It was no coincidence that Reagan’s alliance with the South was launched in Dallas.

Soros began his philanthropic career in 1979, and eventually he assigned more than $32 billion of his fortune to his philanthropic network, the Open Society Foundations (leaving him with over $8 billion). Unlike the Kochs, Soros launched his philanthropy with an international emphasis, and only added U.S. domestic projects after the end of the Cold War.

Three-quarters of the Democracy Alliance partners were coastal, concentrated in three areas: the Boston–New York–Washington corridor, the Bay Area, and Los Angeles. In contrast, almost two-thirds of the Koch Seminar participants lived in the South and the Midwest. In electoral terms, this meant that the Democratic donors’ focused on zones that weighed heavily in the popular vote, while the Koch seminar donors were more likely to inhabit critical swing states that tilted the Electoral College, and sparsely populated states with disproportionate influence in the Senate. Both networks featured a preponderance of donors from the fields of finance, insurance, and real estate. But the Koch seminars were weighted toward the extractive industries and manufacturing, while the Democrats skewed toward the information industries, the legal profession, and entertainment.

The Koch network could, as the Skocpol study states, “nimbly form and revise overall strategies, while [the Democracy Alliance’s] rules have promoted scattering of resources and undercut possibilities for advancing any coherent strategy.

Logically, Democrats should have enjoyed a competitive advantage, given that wealthy liberals are more prevalent in the United States than wealthy conservatives. Nonetheless, their network proved less effective. The Koch seminars “have fueled a tightly integrated political machine” that moved national and state-level Republicans toward the ultra-free-market right. The Democracy Alliance, on the other hand, achieved “more limited results by channeling resources to large numbers of mostly nationally focused and professionally managed liberal advocacy and constituency groups.” These differences would have a dramatic impact on the battle royal to come.

Other American Christians agonized over the conflicts generated by the gaps between the world’s political realities and the ideals of their faith. What was a Christian position on the torture practiced by the U.S. government in the post-9/11 period in pursuit of combating terrorism? How could the biblical commandment “Thou shalt not kill” be reconciled with capital punishment and the epidemic of gun violence? What would a humane refugee policy look like in a world beset by millions of suffering refugees? These matters were absent from the prayer menu for Watchmen on the Wall and the program for the Values Voter Summit. Children’s welfare was only mentioned from conception until birth. The Family Research Council held that fundamentalist Christians “are victims of religious discrimination … From the Senate chamber to a corner bakery, Christians with natural or biblical views of marriage and sexuality have a bullseye on their backs.” Their sense of victimization left little compassion for anyone else.

The Council for National Policy was still racing against time. As of the early 2000s, evangelical Christians remained the largest religious group in the United States, with about a quarter of the population, but their numbers were starting to drop. The Southern Baptists, the largest Protestant denomination in the country, peaked in 2006 at sixteen million members, then went into a precipitous decline. At the same time, the percentage of atheists and unaffiliated Americans rose sharply; it was only a matter of time before the “unchurched” overtook the Southern Baptists.

The premise was that while elected officials may not like all nominees equally, they could agree on common standards of professionalism and impartiality. The ABA review began in 1953 at the request of Republican president Dwight Eisenhower, and every president from Eisenhower to Barack Obama participated in the process except George W. Bush. Conservatives claimed that the ABA ratings had a liberal bias, but the ratings did not adhere to party loyalty: for example, none of George H. W. Bush’s nominees received the ABA’s lowest rating, while four of Bill Clinton’s did.

Sekulow’s enterprises served him well. In 2017 the Guardian obtained tax documents revealing that Sekulow and his family had reaped more than $60 million since 2000 from the ACLJ and an affiliated charity, Christian Advocates Serving Evangelism. Much of the money was extracted as donations from retirees on fixed incomes, susceptible to a finely tuned telemarketer script: “We wanted to make sure you were aware of the efforts to undermine our traditional Christian values” effected by Barack Obama, and so on. The bounty bought Sekulow a private jet, extensive properties, and his own law firm operating for the benefit of the fundamentalists.

He also became a CNP media star. His radio show Jay Sekulow Live! has been carried by more than 1,050 stations, including Salem Communications and the Bott network. Sekulow is telegenic, with expensive suits, a perpetual tan, and an authoritative baritone. He appears as a frequent guest on Robertson’s Christian Broadcasting Network, and his weekly program is carried on the fundamentalist Trinity Broadcasting Network, Daystar, and Sky Angel. Fox News and the three networks made him a regular commentator.

Republican Senate minority leader Mitch McConnell took strong objection to her nomination. McConnell, a graduate of Morton Blackwell’s Leadership Institute, turned to the National Rifle Association, run by CNP member Wayne LaPierre. He requested that the group publicly oppose Sotomayor and “score” the vote to activate its members. “The NRA had never scored a vote on a judicial nomination,” wrote Greenhouse. “Judge Sotomayor had no record on gun issues. But the organization obliged Senator McConnell and announced that it would score the Sotomayor vote. Republicans melted away. Only seven voted for confirmation. The scenario was repeated the following year for the nomination of Elena Kagan, who had no track record on gun cases because she had never been a judge.” The NRA took similar actions against other nominees, with mounting success. In 2016 the NRA pulled out all the stops to derail the confirmation of Obama nominee Merrick Garland to the seat left vacant by Scalia’s death, issuing an “instant and evidence-free denunciation,” Greenhouse wrote. The NRA mobilized its supporters to lobby Congress against Garland.

According to Adam Piore in Mother Jones, in 2005 Frank Wright, president of the National Religious Broadcasters, reported that one hundred million Americans tuned in to Christian stations at least once a month. This was four times the weekly audience of National Public Radio at the time.

Salem’s programming was now available to a third of the U.S. population, and its online publications had an audience of three million. Its news division website described it as “the only Christian-focused news organization with fully-equipped broadcast facilities at the U.S. House, Senate, and White House manned by full-time correspondents,” with news “specifically created for Christian-formatted radio stations.” This meant “news” based on “biblical values”—not fact-based, multi-sourced professional practice.

Citizens for community values, affiliated with the FRC and similar organizations registered nearly 55,000 new voters by hiring a firm to call every home in the state to identify 850,000 Bush supporters, and then call each of them the day before the election, encouraging them to vote. This organization placed nearly three million inserts into church bulletins the Sunday before the election.  Bush won Ohio by 118,457 votes—with 50.8% of the vote. Switching fewer than 60,000 votes [in Ohio] would have given the national election to John Kerry.

Ralph Reed undertook the organization of evangelical activists on a national basis, collecting thousands of fundamentalist church directories across the country and submitting them to the Bush-Cheney campaign (over the objections of many pastors). Their listings were fed into phone banks and registration drives.  The campaign sent the names of unregistered evangelicals back to their local volunteers, who would contact them and encourage them to register. A Bush campaign director estimated that this campaign yielded new voters “in the range of millions.

in the six years leading up to the 2004 elections, Salem Communications and its executives contributed $423,000 to federal candidates, 96% of it to Republicans, making it the sixth-largest donor in the industry.

“Evangelicals had constituted the same portion of the electorate as in 2000, about 25%, but had turned out in higher numbers than in any presidential election for which statistics are available. White evangelicals supplied two of every five Bush votes.

According to journalist Max Blumenthal, the members of the CNP were the “hidden hand” behind McCain’s choice of running mate, having withheld their support—and their fundamentalist base—until he accepted their candidate, fundamentalist Sarah Palin, over his first choice of moderate Joe Lieberman, a decision he later regretted.

Obama’s victory challenged the fundamentalists’ electoral strategy, and they were obliged to assess their weaknesses. Once again, they had to regroup.

In 1983 Weyrich had founded a weekly, by-invitation-only lunch near Capitol Hill. The lunches served as interim meetings for CNP activists to discuss their efforts to lobby for their causes and to purge moderate congressional Republicans, with the lobbying arms of the Family Research Council and the American Family Association as important sponsors.

The grand old man of the CNP spent his twilight years traveling to Moscow, building new alliances between his conservative constituency and the ruling class of the New Russia.

Posted in Politics, Religion | Tagged , , , , , | Comments Off on Republicans way ahead of Democrats on voter data

Scientists’ warning to humanity on insect extinctions

Preface. Below are excerpts from two articles on why the extinction of insects could lead to our own extinction, not to mention all the other species on earth.

Though if peak oil did happen in 2018 (citations chapter 2 of Life After Fossil Fuels: A Reality Check on Alternative Energy), then our ability to affect climate, destroy biodiversity, topsoil, pollute land, air, and water plus all the other existential boundaries we’re crossing will be reduced at least eight-fold over the next two to three decades.

Alice Friedemann  www.energyskeptic.com Women in ecology  author of 2021 Life After Fossil Fuels: A Reality Check on Alternative Energy best price here; 2015 When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Podcasts: Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity

***

Dasgupta K (2021) What could the loss of insects mean for the ecosystems that sustain us? The Indian Express.

E. O. Wilson is among the scientists who believe that the insect decline is among the most cataclysmic aspects of the Sixth Extinction. In a speech for the openin gof the National Zoological Park in Washington DC 34 years ago, Wilson said: “If invertebrates were to disappear, I doubt that the human species could last more than a few months. Most of the fishes, amphibians, birds, and mammals would crash to extinction about the same time. Next would go the bulk of the flowering plants and, with them, the physical structure of the majority of the forests and other terrestrial habitats of the world. The earth would rot. As dead vegetation piled up and dried out, narrowing and closing the channels of nutrient cycles, other complex forms of vegetation would die off, and, with them, the last remnants of the vertebrates. The remaining fungi, after enjoying a population explosion of stupendous proportions, would also perish. Within a few decades, the world would return to a state of a billion years ago, composed primarily of bacteria, algae, and a few other very simple multicellular plants.”

And as University of Pennsylvania entomologist Janzen wrote in a 2019 article in the international journal on conservation science, Biological Conservation: “The loss is very real…and the reasons are very evident: Intense forest and agricultural simplification of very large areas, massive use of pesticides, habitat fragmentation, and, at least since the 1980s, ever-increasing climate change in temperature…if our terrestrial world remains constructed through constant war with the arthropod world, along with the plants, fungi and nematodes, human society will lose very big time. The house is burning. We do not need a thermometer. We need a fire hose,”

Last year, a study in the journal Science reported a nine per cent loss in the population of insects like ants, grasshoppers and butterflies, every decade, since the past 30 years. The decline of ants, butterflies, bees, wasps, grasshoppers, fireflies and dragonflies could have outcomes far beyond their own demise. If not for insects, we wouldn’t get a lot of our plant-based food and without bugs the world would be overrun with rotting material. Insects are the original recyclers: They digest decomposed bodies and dead wood, check the spread of weeds, agricultural pests, disease vectors and other organisms that make lives of humans difficult. They are resources for medicines and indicators of habitat quality. Much like earthworms, ants are ecosystem engineers, who, in tunneling through earth to make their complex mounds, redistribute nutrients in the soil and improve air and water circulation.

Butterflies, moths and bees are amongst the worst hit. The US has lost nearly half of its bee colonies in the past 70 years. The demise of the creatures started immediately after the introduction of DDT in the 1940s and has continued even after America discontinued the insecticide in 1972.

David Wagner and his co-editors of the multidisciplinary American scientific journal PNAS’s seminal issue on insect decline in January this year believe that “many of the butterfly declines in Europe are result of changes in agricultural practices after World War II… when modern tractors and mechanised equipment were employed to accelerate industrialisation of agriculture, insecticides became widely available, and synthetic fertilisers could be manufactured and applied in prodigious volume”. Deforestation, principally for agricultural expansion, is progressing at a rate that has an alarming impact on insects and other arthropods. We do not even know the real scale of this crisis, Wagner and his colleagues fear.

Cardoso, P., et al. 2020. Scientists’ warning to humanity on insect extinctions. Biological Conservation.

Highlights:

  • We are pushing many ecosystems beyond recovery, resulting in insect extinctions.
  • Causes are habitat loss, pollution, invasives, climate change, and over exploitation.
  • We lose biomass, diversity, unique histories, functions, and interaction networks.
  • Insect declines lead to loss of essential, irreplaceable services to humanity.
  • Action to save insect species is urgent, for both ecosystems and human survival.

Abstract

Here we build on the manifesto ‘World Scientists’ Warning to Humanity, issued by the Alliance of World Scientists. As a group of conservation biologists deeply concerned about the decline of insect populations, we here review what we know about the drivers of insect extinctions, their consequences, and how extinctions can negatively impact humanity.

We are causing insect extinctions by driving habitat loss, degradation, and fragmentation, use of polluting and harmful substances, the spread of invasive species, global climate change, direct over-exploitation, and co-extinction of species dependent on other species.

With insect extinctions, we lose much more than species. We lose abundance and biomass of insects, diversity across space and time with consequent homogenization, large parts of the tree of life, unique ecological functions and traits, and fundamental parts of extensive networks of biotic interactions. Such losses lead to the decline of key ecosystem services on which humanity depends. From pollination and decomposition, to being resources for new medicines, habitat quality indication and many others, insects provide essential and irreplaceable services.

1. Introduction

Insect extinctions, their drivers, and consequences have received increasing public attention in recent years. Media releases have caught the interest of the general public, and until recently, we were largely unaware that insects could be imperiled to such an extent, and that their loss would have consequences for our own well-being. Fueled by declining numbers from specific regions, concern over the fate of insects has gained traction in the non-scientific realm.

Current estimates suggest that insects may number 5.5 million species, with only one fifth of these named. The number of threatened and extinct insect species is woefully underestimated because of so many species being rare or undescribed. For example, the IUCN Red List only includes ca. 8400 species out of one million described, representing a possible 0.2% of all extant species (IUCN, 2019). However, it is likely that insect extinctions since the industrial era are around 5 to 10%, i.e. 250,000 to 500,000 species, based on estimates of 7% extinctions for land snails (Régnier et al., 2015). In total at least one million species are facing extinction in the coming decades, half of them being insects (IPBES, 2019).

It is not only their vast numbers, but the dependency of ecosystems and humanity on them, that makes the conservation of insect diversity critical for future generations. A major challenge now and in the coming years is to maintain and enhance the beneficial contributions of nature to all people. Insects are irreplaceable components in this challenge, as are other invertebrates and biodiversity in general.

Here we build on the manifesto World Scientists’ Warning to Humanity, issued by the Union of Concerned Scientists (1992) and re-issued 25 years later by the Alliance of World Scientists (Ripple et al., 2017). The latter warning was signed by over 15,000 scientists and claims that humans are “pushing Earth’s ecosystems beyond their capacities to support the web of life.” (https://www.scientistswarning.org/the-warning/). As a group of conservation biologists deeply concerned about the decline of insect populations worldwide, we here review what we know about the drivers of insect extinctions, their consequences, and how extinctions can negatively impact humanity. We end with an appeal for urgent action to decrease our knowledge deficits and curb insect extinctions.

2. We are causing insect extinctions

Irrespective of the precise trends and their spatial distribution, human activity is responsible for almost all current insect population declines and extinctions. Yet, in order to act, we first need to identify and quantify the different ways we are acting upon them, recognizing that much is still to be understood, and more often than not, several factors contribute synergistically to decline or extinction (Fig. 1).

Fig. 1

2.1. Habitat loss and fragmentation

Habitat loss, degradation, and fragmentation are probably the most relevant threats to biodiversity. Globally, 50% of endemic species of plants and vertebrates are restricted to some 36 biodiversity hotspots covering just 2.5% of the Earth’s surface and arguably, these hotspots likely harbor similar percentages of endemic insect species. Recent modelling suggests that agro-economic pressure for land will reduce the currently very restricted natural intact vegetation by a further 50% by 2050 in one third of the world’s hotspots. Processes associated with deforestation, agricultural expansion, and urbanization are the proximate drivers of loss of natural or semi-natural habitats and their insect assemblages across the world. Mining is particularly relevant for subterranean species, which are often spatially restricted. Freshwater habitats additionally suffer from river flow regulation and damming. Increased siltation in rivers and streams from agricultural runoff, as well as flow regulation, degrade habitats of typical stream dwelling insect larvae. There is also a significant loss of pond ecosystems largely due to agricultural land drainage and urban development.

Habitat loss is often accompanied by habitat fragmentation, and both lead to decreasing connectivity (Fischer and Lindenmayer, 2007; Fletcher Jr. et al., 2018). However, depending on the mobility of the insect species and the degree of habitat fragmentation their relative importance varies. Insects with low mobility may survive in isolated populations (e.g., many flightless Orthoptera). In contrast, many species with a higher mobility – such as butterflies – usually form metapopulations (Hanski, 1999). They depend on a network of suitable habitat patches of sufficient size and in spatial proximity (Eichel and Fartmann, 2008; Stuhldreher and Fartmann, 2014). However, in less fragmented landscapes – even among metapopulation species – habitat connectivity usually plays a minor role for patch occupancy. Here habitat quality is the main driver of insect species occurrence (Krämer et al., 2012; Poniatowski et al., 2018; Münsch et al., 2019). In these times of global warming, habitat connectivity becomes increasingly important for all insect survival. This is because insect range shifts in response to climate change are often constrained by insufficient habitat connectivity in fragmented landscapes (Platts et al., 2019), and so lag behind the increase in temperature, even for mobile species (Devictor et al., 2012; Termaat et al., 2019).

2.2. Pollution

Pesticides are key drivers of insect declines due to their intensive use, as well as inappropriate risk assessment regulations (Brühl and Zaller, 2019). Pesticides impact insect populations via direct toxicity and sub-lethal effects (mainly insecticides), and indirectly through habitat alteration (mainly herbicides). Bioaccumulation, due to chronic exposure and biomagnification along food chains, pose significant additional threats for insect populations (Hayes and Hansen, 2017) that can have undetected harmful effects on insect physiology and behaviour (Desneux et al., 2007).

Many fertilizers (including organic and mineral fertilizers) widely used in agriculture, can affect insect populations indirectly, via impacts on the composition or quality of plant resources, on structural habitat properties or causing soil acidification, and through eutrophication (Fox, 2013; Villalobos-Jiménez et al., 2016). Effects of high-levels of fertilizer use can be positive for a few herbivorous insects in agroecosystems (e.g., aphids; Kytö et al., 1996), but have negative effects on most insects (Kurze et al., 2018; Habel et al., 2019a). Also, the use of anthelmintic substances (e.g. Ivermectin) in livestock systems has a negative impact on the abundance and richness of insects associated with dung decomposition (Verdú et al., 2018).

Industrial pollution (including air pollution, chemicals from factories or mining operations, and heavy metals) also causes insect population declines (Zvereva and Kozlov, 2010). Several economically important insect species (such as pollinators or natural enemies of pests) may be threatened by chronic exposure to pollutants (e.g., heavy metals), but community-wide effects are often not well understood. Freshwater invertebrates (including several insect taxa) are disproportionately affected by pollution, with over 41% of species on the IUCN Red List threatened by water pollution (Darwall et al., 2012). Industrial discharge, sewage, and agricultural and urban run-off as well as increased sediment deposition, all reduce freshwater habitat quality (Jones et al., 2012).

Light and noise pollution are becoming increasingly pervasive globally, and gaining a better understanding of these novel impacts is critical. Nocturnal insects are especially vulnerable to changes in natural light/dark cycles. Light pollution interferes with insects that use natural light (from the moon or stars) as orientation cues for navigation and with communication of insects that use bioluminescent signals, such as fireflies. It desynchronizes activities triggered by natural light cycles, such as feeding and egg-laying, and causes temporal mismatches in mutualistic interactions. Noise pollution greatly changes the acoustic landscape and interferes with acoustic communication of insects and their auditory surveillance of the environment, having significant fitness costs. Finally, the effects of electromagnetic pollution on insects and other life-forms, including humans, are still very badly understood and deserve further exploration (Thielens et al., 2018).

2.3. Invasive species

Invasive alien species (IAS) are anthropogenically introduced species to locations outside of their natural geographical range, which have a demonstrable environmental, ecological, or socio-economic effect (Turbelin et al., 2017). Impacts may be direct (e.g., through predation, competition, or disease vectoring) and/or indirect (e.g., through trophic cascades, co-extinction of herbivore or parasitoid hosts). Species introductions may ultimately lead to local loss of native insects, with those exhibiting narrow geographic distributions or specialist feeding habits being most vulnerable (Wagner and Van Driesche, 2010).

Direct competition by non-native species can drive local populations towards extinction. The degree of ecological overlap with the invasive ladybird, Harmonia axyridis Pallas, 1773, was a main predictor for local extinctions of endemic ladybird fauna in Britain (Comont et al., 2014). Invasive ants (e.g. the Argentine ant, Linepithema humile Mayr, 1868) are perhaps the best example of IAS that challenge native insect fauna. Due to their large numbers and generalist predatory behavior, many invasive ant species are primary threats to native insects (see Wagner and Van Driesche, 2010). The invasive amphipod Dikerogammarus villosus (Sowinsky, 1894) kills significantly greater numbers of aquatic invertebrates than native amphipods, reducing invertebrate diversity and displacing native amphipod species (Dick et al., 2002; Rewicz et al., 2014).

The high biomass and dense structure of invasive plants often has a major impact on insect communities (Strayer, 2010). The monotypic nature of invasive plants reduces the quantity and/or quality of food, and leads to declines in essential resources for many insects (Severns and Warren, 2008; Preston et al., 2012; Havel et al., 2015). Additionally, invasive plants can change matrix composition, adversely affecting insect host-parasitoid relationships (Cronin and Haynes, 2004). Invasive plants may also provide eco-evolutionary traps for native insects. Once an invader has outcompeted and displaced native hosts, it may act as a host that results in poor larval development, or increased larval mortality (Sunny et al., 2015), leading to insect population decline.

Invasive pathogens can also lead to native insect extinctions. European strains of the fungal pathogen, Nosema bombi, are thought to have resulted in the widespread collapse of North American bumblebees (Cameron and Sadd, 2020). Furthermore, the introduced bumblebee Bombus terrestris L., 1758 has caused the disappearance of the Patagonian bumblebee, B. dahlbomii Guérin-Méneville, 1835, across much of its native range, either due to direct competition or the introduction of pathogens to which the native species have no defences (Cameron and Sadd, 2020).

2.4. Climate change

Climate change poses threats to insects and the ecosystems they depend on, whether terrestrial (Burrows et al., 2011), freshwater (Woodward et al., 2010) or subterranean (Mammola et al., 2019b). The complexity of global climate change goes far beyond simply global temperature increase (Walther et al., 2002; Ripple et al., 2019). It also leads to a variety of multifaceted ecological responses to environmental changes, including shifts in species distribution ranges (Chen et al., 2011), phenological displacements (Forrest, 2016), novel interactions among previously isolated species (Krosby et al., 2015), extinctions (Dirzo et al., 2014), and other unpredictable cascading effects at different levels of ecosystem organisation (Peñuelas et al., 2013). Changes in species phenology, distributions, reduction in body size, assemblage structure, and desynchronization of species-specific interactions are all linked to climate change (Scheffers et al., 2016). For example, some British butterflies are emerging earlier than previously recorded, and in some cases, before their nectar plants have flowered (Roy and Sparks, 2000). In addition, changes in functional feeding group diversity can be associated with changes in trophic interactions in food webs (Jourdan et al., 2018).

Aquatic insects are disproportionately affected by climate change, due to the synergistic negative effects on freshwater ecosystems overall (Reid et al., 2019), and these insects having limited dispersal capacity, as well as them confronting barriers to their dispersal, particularly at higher elevations (Bush et al., 2013). There is a need for the development and implementation of bioindicators, and dragonflies are emerging as taxonomic champions for aquatic ecosystems (Chovanec et al., 2015; Dutra and De Marco, 2015; Valente-Neto et al., 2016; Vorster et al., 2020). Bush et al. (2013) dubbed dragonflies as ‘climate canaries’, with dragonfly species assemblages being three times more sensitive to climate variables than macroinvertebrate assemblages at family level. While there is evidence that water quality improvements have offset recent climatic debt for stream macroinvertebrates (Vaughan and Gotelli, 2019), this continued mitigation is not likely to reverse or even halt trends in aquatic insect species declines.

2.5. Overexploitation

Though rarely considered, overexploitation may play a role in insect decline for many groups. It primarily threatens free-living insects and includes unsustainable harvesting for use as pets and decoration (as souvenirs and jewels), or as food resources and traditional medicine. Various insects are kept as pets, but they are especially popular in Japan, where there are many illegally traded insects (Actman, 2019). Ants maintained in commercial farms are probably the most common pet insect in USA, but field crickets, praying mantids, antlions, caterpillars, and mealworms are also reported worldwide as household pets (Smithsonian, 2019).

Ornamental insects as preserved decorations are also numerous, particularly regarding Lepidoptera and Coleoptera. Coloured wings and elytra are used in jewellery, embroidery, and pottery (Prasad, 2007; Lokeshwari and Shantibala, 2010). In regions where market demand is high, ornamental insects are frequently imported in large numbers (Kameoka and Kiyono, 2003), which fuels an illegal export industry in areas where high-demand insects occur naturally (Kameoka and Kiyono, 2003; New, 2005). Unsurprisingly, this demand for ornamental insects has driven declines of sought-after species (Tournant et al., 2012; Huang, 2014).

Entomophagy is another driver of overexploitation. A worldwide inventory listed 2111 edible insect species (Jongema, 2017), with number of collected individuals often exceeding regeneration capacity (Cerritos, 2009). Wild populations are threatened because collection practices became less selective and sustainable, due to the dissipation of indigenous knowledge, which often includes the sustainable use of edible insects and their habitat (Kenis et al., 2006). In many subsistence societies, insects provide protein supplements and can constitute nearly a third of total protein intake during periods of meat protein shortage (Dufour, 1987).

The overexploitation of insects as alternative medicine also poses a risk. Demand for the hundreds of insect species that are used in such practices is reportedly threatening insect biodiversity (Feng et al., 2009). The commercial value of products based on medicinal insects comprises about US$100 million per year (Themis, 1997).

2.6. Co-extinction

Specialisation has led to many insects becoming co-dependent, and therefore, vulnerable to co-extinction (Dunn, 2005; Dunn et al., 2009). Among these, numerous insect lineages have diversified with vertebrates, either as parasites, epizoic mutualists, or commensal coprophages. At least 5000 louse (Phthiraptera) species have been described, of which most (~4000) use avian hosts. About 2500 flea species are recognised and >6000 species of dung beetles are named. Numerous insect lineages have also diversified with invertebrate hosts. Insects of the order Strepsiptera (twisted-wing insects) are obligate parasites of other insects, and >600 species have been described, though they are dwarfed by the parasitic wasps, which are estimated to include as many as 350,000 species (Gaston, 1991). Insects co-dependent on plants are also extremely species rich, with gall-inducing insects alone comprising as many as 211,000 species (Espírito-Santo and Fernandes, 2007). Similarly, mycophagous insects are extremely diverse and often co-dependent on a few fungal hosts.

Co-dependent insects are greatly at risk of extinction through their specialised ecologies (Dunn, 2005; Dunn et al., 2009), even though examples of co-extinctions are rare (Colwell et al., 2012). Models suggest that co-extinction events should be far more common (especially among plant-dependent beetles and bird lice) than present records suggest (Koh et al., 2004a). This is either because of co-extinction events are poorly recorded, or due to unrecognised network resilience owing to the ability of co-dependent insects being able to use many more species than previously assumed (Colwell et al., 2012).

In the case of co-dependent insects, trophic cascades can be particularly relevant (Strona and Bradshaw, 2018). Host species are lost due to habitat loss, as has been shown in Lepidoptera-host plant systems (Pearse and Altermatt, 2013). A historical example of indirect effects of invasive species is the co-extinction of Christmas Island flea (Xenopsylla nesiotes Jordan & Rothschild, 1909), resulting from loss of the Christmas Island rat (Rattus macleari Thomas, 1887) due to the introduced black rat (Rattus rattus L., 1758) (Kwak, 2018). There is evidence that decline of mammals due to synergistic causes (climate change, habitat destruction, hunting, etc.) lead to a pervasive co-decline of dung beetles at continental scales (Bogoni et al., 2019). The overexploitation of birds by the pet trade also threatens their dependent insects (Eaton et al., 2015).

3. We lose much more than species

All species, including insects, are valuable as unique combinations of evolutionary events, have innate value, and so require care and conservation. Yet, as George Orwell put it in Animal Farm, “All animals are equal, but some animals are more equal than others.”, with invertebrates being largely neglected in conservation efforts worldwide (Cardoso et al., 2011), the so-called “institutional vertebratism” (Leather, 2013). There is no reason why an insect species deserves much less attention than a bird or mammal species. However, the importance of insect population declines and consequent extinctions goes way beyond loss of species and their intrinsic value.

Each species represents individuals, biomass, and functions being lost, and therefore not available for other living beings. Each species contributes a unique piece to a complex living tapestry that changes in space and time. Each species represents an unrepeatable part of the history of life. In turn, each species also interacts with others and their environment in distinctive ways, weaving a complex network that sustains other species, including us (Fig. 1).

3.1. Abundance and biomass

Hallmann et al. (2017) documented a loss of biomass of flying insects of about 75% over 30 years. This negative trend occurred in nature reserves in Germany. These results are a warning and stimulated an intense debate on the insect crisis. Also, in other parts of Germany, declining abundances and biomass for a broader set of arthropods have been recorded (Seibold et al., 2019). Similar trends have been recorded for other parts of Europe. Large declines in abundance have also occurred among UK butterflies and moths, dragonflies (Clausnitzer et al., 2009) and carabid beetles (Brooks et al., 2012) in recent years. Negative trends are not restricted to Europe, but also occur in other parts of the world (Wagner, 2019). A global meta-analysis of insect abundances revealed a 45% decline across two-thirds of the taxa evaluated (Dirzo et al., 2014). Yet, the specific trend and strength of the decline or eventual increase is not universal and changes according to taxon and region (Macgregor et al., 2019).

Declines in insect abundance and biomass always precede species extinctions, as this is a continuous, not binary, process. Although critically dependent on the ecological role of the species, numerical loss in abundance, and by extension, biomass, reflect impairment of ecological function and provisioning of ecosystem services. For example, biomass is a measurement of the amount of energy flowing through trophic levels that insects represent. In turn, reduced abundance and biomass affects ecosystem functionality and resilience, food web structure, and species interactions, such as plant-pollinators, population persistence, and many ecosystem services (Biesmeijer et al., 2006; Losey and Vaughan, 2006).

These studies highlight numerical declines in abundance and biomass at the landscape level, but also inform us that declines are not restricted to rare and endangered species only, but are also present for more abundant species (Habel and Schmitt, 2018; Hallmann et al., 2020). While insect conservation often target charismatic, rare, or threatened species, the temporal and spatial trends of common and widespread species are often overlooked (Gaston, 2011). Numerical declines of common and widespread species impact the functioning of ecosystems more severely. As such, safeguarding ecosystem function may be suffering un-noticed, highlighting the need for insect monitoring and conservation beyond rare and threatened species.

3.2. Differences in space and time

Insects and most arthropods are relatively small organisms that often occupy small microhabitats. As we move horizontally across a seemingly homogenous patch, small features, such as dead wood, rocks, or even a single tree can alter conditions, leading to replacement of species and allowing higher richness to persist within the larger patch. Insects also partition themselves vertically, i.e. in a forest, we find soil, ground active, undergrowth, sub-canopy, and canopy species, all of which contribute to the hyper-diversity found in, for example, tropical rainforests (Stork et al., 2016). This way, insect assemblages tend to be composed of few very common and many rare species (Pachepsky et al., 2001; McGill et al., 2007), leading to high levels of beta-diversity. Such high levels of species turnover can be difficult to monitor, as research tends to describe overall arthropod richness and compositional changes driven by the common species. Given their nature, it is much harder to quantify how rare species are responding to anthropogenic pressures (van Schalkwyk et al., 2019).

Processes that homogenize natural systems decrease beta-diversity by removing rare species from the system. These pressures not only remove native species, but also simplify the system, reducing the diversity of resources and biological interactions. Furthermore, they allow secondary invasions from ecologically dominant alien invasive insects that outcompete or simply feed on the native fauna (Silverman and Brightwell, 2008; Roy et al., 2016; see section on invasive species). The edges of transformed areas, including linear structures such as roads, show large edge effects on beta-diversity. This suggests that the presence of dominant species, either native or alien, reduce niche space by outcompeting and effectively replacing rare species (Swart et al., 2019).

Insects do not just partition themselves across space, but also time. Tropical rainforest cicadas and bush-crickets call during different times of the day and night or at different frequencies to avoid overlap (Schmidt and Balakrishnan, 2015). At the other extreme are the periodic cicadas, which only emerge as adults every 13 or 17 years (prime numbers to avoid frequent overlap). One of the major concerns with global climate change is how warmer temperatures might be interfering with arthropod phenology. For example, a population of the 17-year cicada emerged after just 13 years in 2017 (Sheikh, 2017), which is most likely due to the alteration of host tree cycles (Karban et al., 2000).

3.3. Phylogenetic diversity

Phylogenetic diversity takes the evolutionary relationships between taxa into account and reflects the evolutionary history of each species. Communities with identical taxonomic diversity may differ widely with respect to their evolutionary past, depending on the time of divergence of species from their nearest common ancestor (Webb et al., 2002; Graham and Fine, 2008). Studying the effects of species extinction on the phylogenetic tree of life is therefore imperative and provides a complementary view to the loss of taxon diversity.

Insects constitute a major branch of the tree of life, representing ca. 480 million years of evolution (Misof et al., 2014). Preserving this phylogenetic diversity is crucial to protect the evolutionary trajectories of the most successful taxonomic group on our planet. Understanding the phylogenetic relationships among and within species is crucial to avoid detrimental decisions in conservation management, such as neglecting populations with unique evolutionary histories (e.g., Price et al., 2007), (re-)introducing non-native species or mis-adapted evolutionary lineages (Moritz, 1999), or outbreeding depression in captive breeding projects (Witzenberger and Hochkirch, 2011).

Insects comprise many unique evolutionary lineages with some old relict groups, such as the Zoraptera, Mantophasmatodea, Mecoptera, or Grylloblattodea. Among the latter, the Kosu Rock Crawler (Galloisiana kosuensis Namkung, 1974) is listed as Critically Endangered on the IUCN Red List of Threatened Species (Chung et al., 2018). This species is only known from a single cave, whose temperature has risen by >3 °C from increased tourism, reaching 1400 visitors per day. Another example is the Mauritian endemic grasshopper species Pyrgacris relictus Descamps, 1968, which belongs to a distinct family (Pyrgacrididae) with only two species. This species, which only feeds on an endemic palm species is Critically Endangered, and only known from a single locality, imperilled by construction of a golf course (Hugel, 2014). Loss of such distinct evolutionary branches of the tree of life is irreversible and leads to the loss of unique genetic diversity.

3.4. Functional diversity

Functional diversity quantifies the components of biodiversity that influence how an ecosystem operates or functions (Tilman et al., 2001) and reflects the amount of biological functions or traits displayed by species in given communities. Communities with completely different species composition may be characterized by low variation in functional traits, with phylogenetically unrelated species replacing others with similar functional roles (Villéger et al., 2012). The functional diversity and role of insects in maintaining ecological processes are issues of immense interest, and are particularly relevant to landscapes undergoing anthropogenic change and biodiversity loss (Ng et al., 2018). This is because functional diversity provides a direct link between biodiversity and ecosystem processes. Moreover, loss of particular traits can result in changes to key ecological processes promoted by insects, such as pollination (Saunders, 2018) and decomposition (Barton and Evans, 2017).

Threatened species are not a random subset of all the species. Threatened species tend to share biological traits that influence their extinction risk (Chichorro et al., 2019). In general, specialists in either habitat type or feeding regime, very small or very large species, and poor dispersers, are at highest risk. The decline of both habitat and resource specialist species has been documented for bees, beetles, butterflies, dragonflies, and moths (e.g., Kotze and O’Hara, 2003; Koh et al., 2004b; Bartomeus et al., 2013). Species with narrower habitat requirements have less ability to escape from multiple pressures. The resource specialists depend not only on their effective population size, but also on the availability of their resources. When organisms are dependent on only one resource type, co-extinctions might also be more likely to occur.

Demise of both large and the very small species occurs among vertebrates (e.g., Ripple et al., 2017). There are two main reasons explaining the demise of large species: 1) they usually require more resources and therefore exist at lower population densities than smaller species, which in turn increases the risk of local extinction due to unpredictable events; 2) they usually have traits related to slower life cycles and therefore respond slower to environmental change. On the other hand, smaller species often decline in greater proportions than larger ones, due to their lower competitive ability (Powney et al., 2015). However, small insects can be sensitive to fragmentation (Basset et al., 2015) and habitat loss (Jauker et al., 2013) due to poor dispersal ability.

3.5. Ecological networks

Insects are crucial in structuring and maintaining communities, forming intricate networks that can influence species’ coevolution (Guimarães Jr. et al., 2017), coexistence (Bastolla et al., 2009), and community stability (Thébault and Fontaine, 2010; Rohr et al., 2014). Insect extinctions not only reduce species diversity, but also simplify networks, and we may be losing interactions at a higher rate than species (Tylianakis et al., 2008; Valiente-Banuet et al., 2015). The implications of these changes will depend on the role a species plays in the network (Bascompte and Stouffer, 2009; Tylianakis et al., 2010). The more a species shapes a network, the more the architecture will change if it goes extinct. Furthermore, species conferring network structure are most at risk of going extinct (Saavedra et al., 2011). Thus, we should aim to preserve both species and their interactions (Tylianakis et al., 2010).

In mutualistic networks, plants and insects weave nested relations (Bascompte et al., 2003). This leads to specialists interacting with subsets of generalist interaction partners. Nested networks tend to mitigate random extinctions or the loss of specialists (Memmott et al., 2004). In this case, when species are lost, the structure remains. In contrast, the extinction of generalists erodes the nested architecture. In this case, the loss of focal species makes the system more prone to co-extinction cascades (Dunne et al., 2002).

In antagonistic networks, species form intertwined subgroups, where inter-module interactions are rare (Olesen et al., 2007). Connectors and network hubs are important contributors to the modular structure, with beetles, flies, and small bees being the most common connectors (Olesen et al., 2007). Alarmingly, some of these hub species are currently at risk of extinction (Sirois-Delisle and Kerr, 2018). They not only benefit interaction partners, but also give cohesion to the entire community. Their disappearance may result in the fragmentation of networks into isolated modules (Bascompte and Stouffer, 2009; Tylianakis et al., 2010). This endangers communities by making them more susceptible to perturbations (Olesen et al., 2007).

Interactions drive the coevolution of plants and insects (Bronstein et al., 2006). They can result in remarkable trait complementarity, as in the case of pollination or ant protection of plants (Bronstein et al., 2006). Yet, in complex networks, indirect effects steer the evolution of traits (Guimarães Jr. et al., 2017). In species-rich networks, all members influence how traits evolve in the community. This means that extinctions will affect direct partners, and can reduce community-wide trait integration. This could incapacitate entire communities from responding to environmental change.

4. We depend on insects

Insects contribute to the four main types of ecosystem services defined by the Millennium Ecosystem Assessment (2003): i) provisioning services, ii) supporting services, iii) regulating services, and iv) cultural services (Noriega et al., 2018; Table 1). This animal group contributes to the structure, fertility, and spatial dynamics of soil, and they are a crucial element for maintaining biodiversity and food webs (Schowalter et al., 2018). A large number of insects provide medical or industrial products (Ratcliffe et al., 2011), and globally, >2000 insect species are consumed as food. Also, in agroecosystems, insects perform many different functions, such as pollination, nutrient and energy cycling, pest suppression, seed dispersal, and decomposition of organic matter, feces, and carrion (Schowalter et al., 2018). Today, the agricultural sector already actively uses insect antagonists of pests (classical and augmentative biological control) or establishes habitat management practices to promote insects as natural enemies of pests. In this context, as a clear consequence, insect declines can negatively affect the maintenance of food supply and put at risk human well-being.

Table 1. Ecosystem services provided by insects.

Type of service Area Provision
Commercial Provisioning services Medicine New treatments
Engineering Biomimetics
Monitoring Monitoring of habitat quality
Genetic resources New chemicals
Ornaments Insect houses and deadstock
Biocontrol Biocontrol agents
Production Food and fibre
Non-commercial Regulating services Climate Climate regulation
Disease control Burial of dung or carcasses
Erosion Limiting erosion
Invasion resistance Controlling invasive species
Herbivory Nutrient cycling
Natural hazards Protection from hazards
Pollination Reproduction of flowering plants
Plant dispersal Seed dispersal of plants
Water flow Regulating water movement
Water treatment Purification by larvae
Supporting services Nutrient cycling Through saprophagy/coprophagy
Oxygen production Through interaction with plants
Habitat creation Building mounts, nests, and others
Soil formation Breakdown of plants, dung and carcasses
Cultural services Cultural heritage Arts, myths, and stories
Education Connecting with nature
Knowledge systems Models for scientific research
Recreation Nature tourism
Sense of place Endemic species
Spiritual values Views on nature

(Adapted from Samways, 2019)

All described services translate to monetary value. In an initial approach, Costanza et al. (1997) estimated a global value of ecosystem services at US$33 trillion annually. Later, ecosystem services provided by insects were estimated to have a value of $57 billion per year in the United States alone (Losey and Vaughan, 2006), and insect pollination may have an economic value of $235–577 billion per year worldwide (IPBES, 2016). Additionally, the annual contribution of ecosystem services provided by dung beetles to the cattle industry can reach $380 million in the USA (Losey and Vaughan, 2006) and £367 million in the UK (Beynon et al., 2015).

However, there is little knowledge on the functional roles that insects play in many ecosystems, with their values likely greatly underestimated. Absence of detailed information is related to lack of manipulative controlled experiments for several services (Noriega et al., 2018). Also, the few comprehensive studies available are focused on a few iconic groups or functions, such as bees and pollination (e.g., Brittain et al., 2010), ground beetles and pest control (e.g., Roubinet et al., 2017), dung beetles and decomposition (e.g., Griffiths et al., 2016), or aquatic insects and energy flow (e.g., Macadam and Stockan, 2015). This critical shortfall must be addressed to conserve insect diversity for our own survival.

References

See https://www.sciencedirect.com/science/article/pii/S0006320719317823

Posted in Biodiversity Loss, Extinction, Scientists Warnings to Humanity | Tagged , | 4 Comments

Kurt Andersen on Trump & Covid-19 in “Evil Geniuses”

Preface. If you want to know all the economic and political history that got us to the right-wing extremist Trump and Republican party, there’s no more entertaining way to do so than Kurt Anderson’s latest book “Evil Geniuses”.  Better yet, his book “Fantasyland: How America Went Haywire: A 500-Year History” is one of the best books I’ve read.

This book has insights into Trump, health care, and covid-19 I haven’t seen elsewhere, such as how Koch and other billionaires who originated the Tea Party used the same astroturf operations to create a fake movement of angry people who refused to wear masks, which turned into millions of actual people refusing to do so and tens of thousands of unnecessary deaths.  Don’t billionaires have enough money? Can’t they just let folks keep safe until a vaccine is found, and help pay for their rent and wages out of their ill-gotten billions obtained by paying low wages in the first place, rather than sharing profits with their workers? What ever happened to noblesse oblige.

I was frustrated that such a great writer and book were marred by a lack of awareness of biophysical economics, ecology, or limits to growth.  Andersen might have written a better book if he’d read “Energy and the Wealth of Nations: understanding the biophysical economy”, “Limits to Growth”, “When trucks stop running: energy and the future of transportation”, “Living within Limits”, or “Extracted: How the quest for mineral wealth is plundering the planet”, and “One Nation Under God: How Corporate America Invented Christian America”.

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

* * *

Kurt Andersen. 2020. Evil Geniuses: The Unmaking of America: A Recent History. Random House.

It’s too bad your wages haven’t gone up for forty years, goes one common argument from the right and well-to-do concerning the economic condition of the American majority, and that pensions and unions and millions of good jobs disappeared, but, hey, haven’t we let you eat cake? That is, they say, in all seriousness, that income inequality isn’t as bad as it looks because some things, like milk and eggs, are actually less expensive now, and TVs are gigantic and inexpensive, and all the other stuff at the Walmarts and dollar stores is so cheap, thanks to Chinese imports.

And so even if we’re still able to buy low-quality cosmetics and toys and frozen squid cheaply, we’re now definitely paying more than we should for more essential things. As a result of looser, lavishly big-business-friendly government policies, every piece of the U.S. medical-industrial complex became much more concentrated during the 1990s and 2000s—hospitals, health insurance companies, large physicians’ groups—and prices increased as a result of the greater market power. A conservative estimate is that since 1980, government policy changes have caused Americans to spend an extra $130 billion every year for healthcare. For instance, why are prescription drug prices routinely two and three times as expensive in the United States as in other countries? A big reason is that in the 1980s and afterward, Congress and federal antitrust enforcers gave away the store to pharmaceutical companies by letting them control patents longer and set minimum prices.

The obeisance of the rich right and their consiglieri to Trump for the last four years has exposed more nakedly than ever their compact—everything about money, anything for money—and the events of 2020 pushed that along to an even more shameless, grotesque crescendo. In early spring, when COVID-19 had killed only dozens of Americans, Stuart Stevens, a strategist for four of the five previous GOP presidential nominees but now a fierce apostate, wrote that “those of us in the Republican Party built this moment,” because “the failures of the government’s response to the coronavirus crisis can be traced directly to some of the toxic fantasies now dear to the Republican Party….Government is bad. Establishment experts are overrated or just plain wrong. Science is suspect.

He could have also listed Believe in our perfect mythical yesteryear, All hail big business, Short-term profits are everything, Inequality’s not so bad, Universal healthcare is tyranny, Liberty equals selfishness, Co-opt liberals, and Entitled to our own facts as operating principles of the Republican Party and the right. During the first six months of 2020, all those maxims drove the responses (and the non-responsiveness) of the Trump administration and its extended family of propagandists and allies and flying monkeys.

Almost every piece of the crises’ exacerbation by them was inevitable because each one came directly out of the right’s playbook of the last several decades.

Government is bad. A Republican administration uniquely unsuited and unready and really unwilling to deal with such a national crisis? Decades before our latest show-business president defamed his entire executive branch as a subversive Deep State, the cocreator of late Republicanism announced in 1981, a few minutes after becoming our first show-business president, that “in this present crisis, government is not the solution to our problem, government is the problem,” then made a shtick out of warning Americans to consider any offers of help from the government “terrifying.

Believe in our perfect mythical yesteryear. The right twisted and exploited nostalgia in the 1970s and ’80s to get its way, selling people on a restoration of old-time America with storybook depictions that omitted all the terrible parts of the past—including the epidemics we had before we built a public health system and before governments required citizens to get government-funded vaccines; the economic panics and collapses we had before government intervened to help unemployed workers; the phony miracle cures that charlatan showmen marketed to us before government put a stop to them.

Entitled to our own facts. That systematic spread of coronavirus misinformation by Trump and the right through the first pandemic winter couldn’t have happened without the creation in the late 1980s (Rush Limbaugh) and ’90s (Fox News) of big-time right-wing mass media. Their continuous erasure of distinctions between fact and opinion has always served the propaganda purposes of the political party most devoted to serving the interests of big business and investors, and during the COVID-19 crises—Reopen now—they attempted to serve those interests directly.

Short-term profits are everything. For years, reckless financial operators dragged healthy enterprises into leveraged buyouts and piled on excessive debt, making billions personally but the companies weak and barely able to survive in normal times. Then when things got bad in 2020, the LBO’d companies (such as J. Crew and Neiman Marcus) started dying off even faster than others: excessive debt turned out to be a main underlying condition comorbid with the economic effects of the pandemic.

Liberty equals selfishness. The right spent decades turning brat into a synonym for ultra-conservative, forging a tantrum-based politics focused on hating sensible rules that reduce unnecessary deaths and sickness—no gun control! no mandatory vaccinations! no universal health insurance! So in the spring of 2020, of course mobs of childish adults were excited to throw self-righteous tantrums on TV about being grounded by the mean grown-ups. While also playing soldier by carrying semiautomatic rifles in public.

Inequality’s not so bad. The glaring new light of the pandemic vividly showed the results of the system we’ve built. The health risks and the economic burdens are borne disproportionately by people near the financial edge, black and brown people, people with low-paying jobs that can’t be done from home. And on the other hand, we see more clearly than ever how the lucky top tenth, the people who own more than 80 percent of all the stocks and other financial wealth, inhabit an alternate economic universe.

Universal healthcare is tyranny. A healthcare system already fractured, unfair, inefficient, confusing, and anxiety-provoking as a result of its capture by a for-profit medical-industrial complex? Check. And a system unique in the world for making its exceptionally expensive care a fringe benefit of (some) particular jobs—at a moment when tens of millions of jobs suddenly disappeared? Check.

Ten days later, in early March, a House subcommittee held a regular hearing on the CDC’s annual budget, which the administration was trying to cut, as it had tried to do every year—large cuts that the Koch organization Americans for Prosperity had recommended because, as it complained in 2018, “CDC funding has already grown significantly over the last fifteen years.” Trump’s CDC director, Robert Redfield, a conservative, testified. A right-wing Republican congressman, who like Redfield is a physician, used his question time to explain why dealing with “these kind of new viruses” requires the government to continue guaranteeing high profits to the pharmaceutical industry. “On the vaccine front,” he said, prospective laws like the bill the House had just passed to let the government negotiate Medicare drug prices downward “will destroy American innovation” in medicine. He instructed Redfield to agree with him that only “the private sector” can properly deal with COVID-19 and “these kinds of public health threats.

But then Redfield shared with the congressman his surprise and disappointment that the two big U.S. medical testing companies had not, on their own, “geared up sooner,” starting in mid-January, to handle mass testing for the coronavirus. “I anticipated that the private sector would have engaged and helped develop it” and “be fully engaged eight weeks ago” to deal properly with this new disease, said the national director of disease control.

“Here were two men wondering aloud,” the journalist Alex Pareene wrote at the time, “why reality had failed to conform to their ideology. How odd that these companies, whose only responsibility is to their shareholders, had failed to make up for the incompetence of this administration.

Monday, March 16, when the shutdown really started, the conservative Hoover Institution published a piece called “Coronavirus Perspective” recommending against any restrictions on the economy because the pandemic just wasn’t going to be a major public health problem. “In the United States, the current 67 deaths should reach about 500” in all, the Stanford think tank article projected, and in a quick follow-up article called “Coronavirus Overreaction,” the same writer completely showed his ideological cards. “Progressives think they can run everyone’s lives through central planning,” he warned, so don’t let them do it to fight the spread of this no-big-deal disease.

The writer was neither a medical professional nor an economist, but a lawyer named Richard Epstein, a blue-chip economic right-winger from the 1970s and ’80s—influential University of Chicago law professor, early Federalist Society VIP, Cato Institute scholar, editor of the Law and Economics movement’s main journal. Right away, “conservatives close to Trump and numerous administration officials [were] circulating” Epstein’s inexpert pronouncements, The Washington Post reported.

Right around then, according to “a Trump confidant who speaks to the president frequently” and spoke to a Financial Times reporter about those conversations, Jared Kushner was telling his father-in-law “that testing too many people, or ordering too many ventilators, would spook the markets and so we just shouldn’t do it….That advice worked far more powerfully on him than what the scientists were saying.

Rush Limbaugh in late March was still telling his 15 million listeners to doubt the Deep State doctors and scientists advising Trump. “We didn’t elect a president to defer to a bunch of health experts that we don’t know,

“And how do we know they’re even health experts? Well, they wear white lab coats and they’ve been on the job for a while and they’re at the CDC and they’re at the NIH….But these are all kinds of things that I’ve been questioning.

During the previous big economic crisis in 2009, the Kochs used their organizations FreedomWorks in Washington, and Americans for Prosperity just across the Potomac, to harness and amplify grassroots anxiety and confusion in the provinces. From those headquarters they’d executive-produced the politically useful shows of performative anger by Tea Party protesters against the Democratic-led federal government. In 2020 the pandemic provided a reboot opportunity—this time for protests against state and local governments, especially those run by Democrats, that weren’t following the maximalist line on instantly reopening business. They mobilized their militias—old Tea Partiers, gun nuts, antivaxxers, random Trumpists—for demonstrations around the country that began on Easter Monday.

“There’s a massive movement on the right now,” Stephen Moore claimed, “growing exponentially. People are at the boiling point. They are protesting against injustice and a loss of liberties.” He insisted, The Washington Post reported, that “the protests have been spontaneous and organized at the local level,” although he admitted that “his group has been offering them advice and legal support.

So why, according to polls, were two-thirds of Americans in favor of the national quasi-quarantine? Because, this presidential adviser and would-be Fed governor said, “the American people are sheep.

The two Koch-created enterprises and Moore were joined by a newer organization also devoted to promoting right-wing economics, the Convention of States, funded by Robert Mercer—hedge fund billionaire, early Breitbart News investor, Trump’s biggest 2016 donor—and overseen by a cofounder of the Tea Party Patriots and (such a long game) a strategist for David Koch’s 1980 Libertarian vice-presidential campaign. In Michigan, the protests were organized and promoted by existing Republican groups, one connected to the right-wing billionaire DeVos family, and in Idaho by a group funded by a new Coors, the son of the counter-Establishment founder Joseph.

The mission of those demonstrations, as The Washington Post reported, was “making opposition to stay-at-home orders—which had been in place in most states for only a couple of weeks or less—appear more widespread than is suggested by polling.” The shorthand Astroturf for these kinds of protests is a misnomer. Rather, they’re more like sod: real grass but more expensive, centrally produced and harvested, then rolled out by professionals on command to look instantly picturesque. It seemed clear, from the social media posts of nominally local groups all over the country, that talking points and specific language were being issued from headquarters.

FreedomWorks’ protest brand Reopen America became the name for local protests all over the country—Reopen Wisconsin, Reopen Oregon, Reopen Nevada, Reopen Delaware, and many more. Their online national protest calendar stipulated that “these are not FreedomWorks events, but…if you’re interested in planning your own event, click here for our planning guide.

The professional right-wingers on K Street provided photo-op protest tradecraft instructions to the provincials—make sure to “include…nurses, healthcare workers, etc. as much as possible,” and to “keep [signs] homemade.

Americans for Prosperity held an online training session for would-be agitators on how to spread memes that they actually called “Best at Going Viral.

Because the president had been unable to hold any of his MAGA rallies for weeks, then months, the demonstrations also served as ad hoc reelection events, keeping the super enthusiasts excited and acting out their love for the president on TV, where he could see it.

At the end of the first week of protests in April, the country was still in the middle of his government’s “30 Days to Slow the Spread,” as the second phase was called, but the president said fuck that—in four minutes one morning, he posted tweets to rev up the cultists in three swingy states: “LIBERATE MINNESOTA!” and “LIBERATE MICHIGAN!” and “LIBERATE VIRGINIA, and save your great 2nd Amendment. It is under siege!

Testing [people] is somewhat overrated,” he said, and “this is going to go away without a vaccine.” In other words, a reporter asked the president, Americans just had to accept that reopening without enough testing and contact-tracing would cause lots more deaths? Yes. “I call these people warriors, and I’m actually calling now…the nation, warriors. You have to be warriors,” by which he meant, of course, be willing to be killed by COVID-19, fallen soldiers for American capitalism. But apart from that, everything would soon be fantastic.

Posted in Disease, Politics | Tagged , , | 1 Comment

A Strong Case for the Anthropocene: no other species has ever consumed so much of earth’s resources so quickly

Produced energy and the pattern of human population growth from 1750. Utilization of these energy sources, together with the energy used by humans from net primary production, is now approaching the entire energy available to the global ecosystem before human intervention [Barnosky, [1]]. Key to colours: dark blue = coal; dark brown = oil; green = natural gas; purple = nuclear; light blue = hydro; orange brown = biomass (e.g. plants, trees). Data source from http://www.theoildrum.com/node/8936

Figure 1. Produced energy and the pattern of human population growth from 1750. Utilization of these energy sources, together with the energy used by humans from net primary production, is now approaching the entire energy available to the global ecosystem before human intervention [Barnosky, [1]]. Key to colours: dark blue = coal; dark brown = oil; green = natural gas; purple = nuclear; light blue = hydro; orange brown = biomass (e.g. plants, trees). Data source from http://www.theoildrum.com/node/8936 Figure 1. Produced energy and the pattern of human population growth from 1750.

Preface. A few key paragraphs from the article below:

Humans are producing and consuming resources at a geologically unprecedented rate – a rate that needs to be maintained to continue the high level and complexity of the current [fossil-fuel based] civilization.  This high consumption has formed a ‘striking new pattern’ in the planet’s global energy flow.

It is without precedent to have a single species appropriating a quarter of the net primary biological production of the planet and to become effectively the top predator both on land and at sea.

Some of the massive effects humans are having on the planet include mining phosphorus and fixing nitrogen to make fertilizer, burning hundreds of millions of years of fossil fuels, and directing this increased productivity that is well beyond natural levels towards animals re-engineered for our consumption.

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles,Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Williams, M., et al. March 14, 2016. The Anthropocene: a conspicuous stratigraphical signal of anthropogenic changes in production and consumption across the biosphere. Earth’s Future.

Humans are producing and consuming resources at a geologically unprecedented rate – a rate that needs to be maintained to continue the high level and complexity of the current [fossil-fuel based] civilization.  This high consumption has formed a ‘striking new pattern’ in the planet’s global energy flow.

Humans now consume between 25 and 38% of net primary production of the planet. Human modification and appropriation of NPP, and the production of energy over and above NPP, has been developing over thousands of years, but accelerated markedly from the mid-20th century onward (Figure 1)

Professor Zalasiewicz at the University of Lecister said the last times such huge effects were seen happened 2.5 billion years ago when photosynthesis appeared, and again half a billion years ago when the food web grew more complex.  Although the 5 major extinction events were also huge, “ even measured against these events, human-driven changes to production and consumption are distinctly new.”

Co-author Dr Carys Bennett added: “It is without precedent to have a single species appropriating something like one quarter of the net primary biological production of the planet and to become effectively the top predator both on land and at sea.”

Some of the massive effects humans are having on the planet include mining phosphorus and fixing nitrogen to make fertilizer, burning hundreds of millions of years of fossil fuels, and directing this increased productivity that is well beyond natural levels towards animals re-engineered for our consumption.

According to Professor Zalasiewicz: “This refashioning of the relationship between Earth’s production and consumption is leaving signals in strata now forming, and this helps characterize the Anthropocene as a geological time unit.  It also has wider and more fundamental importance in signaling a new biological stage in this planet’s evolution.”

Dr Colin Waters of the British Geological Survey said: “Modern human society is structured around economic production and consumption and our recent perturbation of the balance between the two, notably since the mid-20th century, will leave a signal that will provide a lasting legacy of our existence on this planet.”

Also see ScienceDaily.com’s March 23, 2016 Human impact forms ‘striking new pattern’ in Earth’s global energy flow.

More excerpts:

The human impact on production and consumption in the biosphere is recognizably different from all previous patterns. Humans appropriate a major component of NPP that is augmented by their use of fossil fuels: the combined energy use now approaches that available to the entire terrestrial biosphere prior to human intervention. In addition, humans are poor at recycling compared to the unmodified biosphere, a clear example being the geologically unprecedented rapid increase of carbon in the atmosphere from the consumption of fossil fuels, and concomitant accumulations of plastics—made from hydrocarbons—at the surface.

The influence of humans on mammal populations during the late Pleistocene represents a global, though diachronous, signal of growing human impact. This potentially had an ecosystem engineering effect, as the climax forests of several areas throughout North America may be the result of the removal of megafauna (mammoths and mastodons) in the late Pleistocene, animals that were effective in forest clearance.

However, a key transition in the human remodeling of production and consumption was the origin of farming, moving primary productivity to annual crop plants and shifting primary consumption to domesticated animals. These innovations, which mark the end of the Epi-Paleolithic and the beginning of the Neolithic culture, include the domestication of cattle (pigs, cows, goats, sheep etc.) and development of agriculture from about 10,000 years ago. Once adopted, agriculture sustained a greater population (and standing biomass) of people, and provided the environment in which human specialist activities unrelated to food production could evolve.

The eventual transfer of labor from agriculture to non-agricultural activities is the central component of industrialization, and has led to even greater appropriation of primary production by humans, and to the use of fossil fuels to augment energy supplies to the global ecosystem, with the concomitant rise of humans and their domesticated animals as the principal component of standing terrestrial large-animal biomass. From the 17th- to mid-20th century technological advances in farming, in their initial stages focused on England, the Low Countries and northern Italy, and then spreading globally, helped facilitate increasing appropriation of primary production. These included: improvements in drainage and restoration systems; the development of the Dutch plough in the early 17th century; the mechanization of farming in the early 18th century; developments in breeding and genetic manipulation, scientifically explained by Gregor Mendel in the mid-19th century; and the use of fertilizers, with the discovery that ammonia could be synthesized by a chemical reaction from nitrogen, first demonstrated by Fritz Haber in 1909, representing perhaps the most significant step. This paved the way for overcoming a major natural limiting force on agricultural production—the rate at which plants fix atmospheric nitrogen into soils—in the early 20th century by the German scientists Fritz Haber and Carl Bosch, who used Haber’s earlier discovery to develop the Haber-Bosch process. Their process took atmospheric nitrogen to make nitrogen fertilizers [some 90 million tons of nitrogen-based fertilizer now being produced each year. Through enhancing food production, this single innovation is estimated to sustain some 40% of global human population today. The process is energy-intensive, and is directly supported by the consumption of fossil fuels (fossil NPP). The widespread use of fossil energy to make processing of land (e.g., ploughing) quicker and more efficient, to support a greater number of humans and their domesticated animals, to enable rapid national/international transfer of produce, and to enable more efficient harvesting of the sea and sea floor has further amplified the impact of humans on both production and consumption in the biosphere.

During the 20th century (between 1910 and 2005) the Human Appropriation of Net Primary Productivity doubled from 13 to 25% of the NPP of potential vegetation. These changes involved a doubling of reactive nitrogen and phosphorus in the environment, and the use of vast amounts of fossil energy focused on agricultural production. In 2014 humans extracted 225 million tons of fossil phosphates, and this is projected to rise to 258 million tons by 2018. Phosphates are a limited resource, but nevertheless annual human addition to the phosphorus cycle exceeds the amount of available phosphorus from natural recycling. Future projections, dependent on land-use, suggest between 27 and 44% of NPP might be appropriated by humans by 2050. While it is likely a geologically unique situation for a single species to co-opt or consume such a large percentage of NPP, perhaps more significant from a biosphere perspective is the technology and landscape modification that humans have used to achieve this. This leads to a complex relationship whereby the ultimate biophysical limit to the amount of NPP that humans might appropriate is dependent on the interplay of many parameters in the landscape, a relationship that needs to evolve rapidly to provide stability between production and consumption in the Anthropocene biosphere.

Viewed from another perspective, the large-scale integration of humans and technology has led to a new terrestrial “sphere,” the technosphere, a novel Earth system of global extent, which is characterized by a total mass approaching that of the biosphere, significant rate of energy dissipation (17 TW), and high average density of infrastructure links such as roads [circa 0.4 km of roadway per km2 of land area, CIA, 2015] and of links between mobile communication devices [circa 50 such devices per km2 of land area, PR Newswire, 2014] that help connect together most humans and most in-use technological artifacts. An emergent system, the technosphere comprises the world’s humans, cultures, and technological components and systems, and maintains itself quasi-autonomously via feedback loops that deliver goods and services desired by humans (e.g., entertainment), or essential for their survival (e.g., food and water), in return for human participation in its continued function. There are no analogs for the technosphere in the geological history of life on Earth. Therefore, its myriad ramifications are truly unprecedented.

Human Impact Measured Against Geological Events

Throughout geological history the coupling between the production of biomass and the consumption of that biomass in the biosphere has typically maintained stability, with periods such as the Ordovician and Cretaceous showing patterns of fauna and flora that indicate persistent stable ecosystems over long time frames. Intervals where this stability may have been temporarily disrupted include the mass extinction events of the Neoproterozoic Era and Phanerozoic Eon [there being six of these following the definition of Benton, 2012, of which five were within the Phanerozoic Eon], with many small-scale extinctions operating at intervals of perhaps hundreds of thousands of year timescales or less. More fundamental changes to the functioning of the biosphere are associated with: its expansion to cover much of the globe (with increasing primary production) during the evolution of photosynthesis at circa 2.7 billion years ago [Nisbet and Fowler, 2014; see Figure 2] linked to the development of an oxygenated atmosphere during the Great Oxygenation Event beginning circa 2.5 billion years ago [Pufahl and Hiatt, 2012]; the construction of complex trophic structures between primary producers (e.g., marine phytoplankton), primary consumers (e.g., herbivorous zooplankton), and secondary consumers (e.g., tertiary and apex arthropod predators) during the Cambrian Period [Butterfield, 2011; Perrier et al., 2015], which led to animals typically forming the largest standing biomass in marine ecosystems; and the construction of complex terrestrial ecosystems with plants forming the largest standing biomass, with an increasing impact on the carbon-cycle and climate during the mid-Paleozoic [Kansou et al., 2013] and later. Measured against these changing geological-scale patterns, is the human impact on the biosphere significant?

Certain characteristics of current production and consumption in the biosphere appear entirely unique from a geological perspective, not least in being driven by a single species (Homo sapiens) within a time frame that is dramatically accelerated (decades versus millions of years) relative to past events. These changes have been characterized as defining a new biosphere state [Behrensmeyer et al., 1992; Williams et al., 2015]. They include the widespread transportation of animals and plants around the planet (the “neobiota”), the human-directed evolution of biology and ecosystems, the extraction of energy and material resources from deep in the Earth’s crust, and the huge appropriation of production by humans, which will leave a fossil record in, for example, both the physical and chemical signatures of biomineralized materials [bones, shells, reefs, etc., see Kidwell, 2015].

A profound example of these changing patterns is the Green Revolution of the mid-20th century. This translation of technologies that originated from technological breakthroughs in developed countries, which were transported and adapted to the developing world, included the transfer of technology for fertilizers (principally nitrogen-, phosphate- and potassium-based), new crop varieties, insecticides, pesticides, herbicides and irrigation. The Green Revolution spread across the world from the 1950s onward, dovetailing with the Great Economic Acceleration in industrialized nations [Steffen et al., 2007, 2015]. It led to the doubling of appropriation of NPP by humans through the 20th century [Krausmann et al., 2013] and a concomitant rise in the consumption of fossil NPP to support that. This redirection of resources along different biological paths has led to humans and their domesticated animals comprising 175 million tons of carbon (estimates based on dry mass of 45% carbon) at the end of the 20th century, whilst wild terrestrial mammals represent just 5 million tons of carbon [Smil, 2011]; the total standing biomass of large terrestrial vertebrates in itself has been increased by about an order of magnitude over a “natural” baseline level by the tightly controlled and directed hyper-fertilization of terrestrial primary production [Barnosky, 2008].

Analyses suggest that human influence on the Earth’s biota is promulgating a contemporary mass extinction event [Barnosky et al., 2011, 2012, 2014; Kolbert, 2014; Pimm et al., 2014; Ceballos et al., 2015] comparable to the five most significant events of the Phanerozoic Eon. This potential Anthropocene mass extinction event, if it continues to unfold, would thus immediately succeed (stratigraphically) a major perturbation of the nitrogen cycle (from the Haber-Bosch process) that is leaving a geochemical signal in sedimentary deposits worldwide, and it would also be associated with changes in carbon isotope ratios in marine carbonates as a result of the anthropogenic CO2 emitted from the burning of hydrocarbons (a characteristically depleted 13C signature). These signatures would resemble in magnitude, though not in environmental forcing, patterns of chemical change in the physico-chemical stratigraphic record, in part suggesting changes in the make-up of primary producer versus consumer organisms, and which are features of earlier extinction events such as in the latest Proterozoic [reduced acritarch phytoplankton diversity as a result of surface ocean eutrophication, Nagy et al., 2009], or at the Precambrian-Cambrian boundary [perhaps reflecting changes to surface-ocean primary production as a result of acritarch extinction, see Zhu et al., 2006 for a summary].

The human impact is not restricted to the land. The scale of appropriation of marine biological production by a single species (Homo sapiens) is almost certainly unique in Earth history, far exceeding the grazing of mainly coastal waters by, for example, seabirds (and, before them, flying reptiles), or pinnipeds. The rates of domestication of marine plants and animals are rising rapidly [Duarte et al., 2007]. Although fish farming dates back over 2000 years [e.g., McCann, 1979], with early examples in Australia, East Asia and Europe, it was quantitatively trivial, except locally, until 1970. Since that time aquaculture has become a significant component of fish consumption [Naylor et al., 2002], and this is sometimes referred to as the “blue revolution”: in 2012 total world fisheries amounted to 158 million tons, of which 42% was aquaculture [FAO, 2014]. Having removed most top predators from the oceans, including by some estimates 90% of the largest predatory fish stocks [Jackson, 2008], humans are steadily fishing down the food chain [Pauly et al., 1998; WBGU, 2013]—in aggregate, 38% of marine fish have been lost, and the decline in certain baleen whales is up to 90% [McCauley et al., 2015]. At the same time, humans are continually harvesting, via a massive extension of bottom trawling powered by fossil fuels, the majority of the continental shelf, ranging now down onto parts of the continental slope [Puig et al., 2012]. Regions of the ocean undergoing fishery collapses are incapable of providing a full complement of ecosystem services, including those necessary to sustaining ever-growing human coastal populations [Worm et al., 2006].

Thus, it can be argued that the scale of human change to the biosphere with its transformation of terrestrial and marine ecologies, its use of fossil fuels to elevate the energy available to the global ecosystem, its impact on the standing biomass of terrestrial vertebrates, and its displacement of apex predators in both terrestrial and marine foodwebs, is of the magnitude of past major changes in the biosphere as shown in Figure 2.

****

My comment: Predictably, some of the miracle green revolution plants are evolving into feral weeds that outcompete the specially bred high production varieties. This problem partly springs from hybridizing different varieties with new unexpected and undesirable traits. An unexpected side effect has been a tendency of green revolution crop plants like rice to go feral and become agricultural weeds (Qui, J., et al. 2020. Diverse genetic mechanisms underlie worldwide convergent rice feralization. Genome Biology.)

Posted in ! PEAK EVERYTHING, Climate Change, Human Nature | Tagged , | 10 Comments

EV cars and utility scale energy storage batteries are not likely to materialize

Preface.  Clearly there’s not enough minerals and metals to shift from fossil fuels to electric vehicles and utility scale battery storage, due to peak critical elements, peak platinum group elements, peak precious elements, peak rare earth elements, and peak everything else.

Even now, only the top 5% of Americans can afford electric vehicles. The third and last article explains why battery prices fell one-time only and are likely to rise again, putting EV out of reach for perhaps all but the 1% some day.

The first article points out how much cobalt, neodymium, lithium, and copper just the United Kingdom alone would need to meet electric car targets for 1050.

The second article, from science magazine, also points out what an immense amount of minerals are needed and some of the ecological destruction in obtaining them (the references contain the grim details of amounts and devastation caused).

And the cost! The county of San Diego is spending up to $100 million to store 2 hours of electricity for 110,000 homes. San Diego county has a population of 3,338,000 with an average household size of 2 people (https://www.city-data.com/county/San_Diego_County-CA.html). So 1,669,000 homes with 2 hours of backup power would cost over 15 times more — $1.5 billion dollars — and that’s just homes, there are also restaurants, factories, industry, street lights and more Mohebbi E (2021) San Diego County energy storage project aims to help fight climate crisis. KPBS.

And 142 Tesla Megapacks will cost at least $175 million dollars to power Oxnard during peak demand for 4 hours or Ventura county for 30 minutes. And keep in mind that batteries only last 15 years while a natural gas plants last for 30 to 50 years. 

2023 What is the Cost of Electric Vehicle Batteries?

Alice Friedemann  www.energyskeptic.com  Author of Life After Fossil Fuels: A Reality Check on Alternative Energy; When Trucks Stop Running: Energy and the Future of Transportation”, Barriers to Making Algal Biofuels, & “Crunch! Whole Grain Artisan Chips and Crackers”.  Women in ecology  Podcasts: WGBH, Financial Sense, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 &278, Peak Prosperity,  Index of best energyskeptic posts

***

Herrington, R., et al. 2019.  Leading scientists set out resource challenge of meeting net zero emissions in the UK by 2050. Natural History Museum, London.

For just the United Kingdom alone to meet electric car targets for 2050 requires production of twice the global cobalt production, near all of the world’s neodymium, three quarters the world’s lithium production and at least half of the world’s copper production. Also, the UK grid would need to increase in size by 20% to charge electric cars.  The UK comprises less than 1% of world population (0.87%) so clearly the entire world can’t migrate from gasoline to electric vehicles.

Sovacool, B. K., et al. 2020. Sustainable minerals and metals for a low-carbon future. Science 367: 30-33.

Metals and minerals, including cobalt, copper, lithium, cadmium, and rare earth elements (REEs) will be needed for technologies such as solar photovoltaics, batteries, electric vehicle (EV) motors, wind turbines, fuel cells, and nuclear reactors.

Between 2015 and 2050, the global EV stock needs to jump from 1.2 million light-duty passenger cars to 965 million passenger cars, battery storage capacity needs to climb from 0.5 gigawatt-hour (GWh) to 12,380 GWh, and the amount of installed solar photovoltaic capacity must rise from 223 GW to more than 7100 GW (3). The materials and metals demanded by a low-carbon economy will be immense (un 2019). One recent assessment concluded that expected demand for 14 metals—such as copper, cobalt, nickel, and lithium—central to the manufacturing of renewable energy, EV, fuel cell, and storage technologies will grow substantially in the next few decades (Dominish 2019). Another study projected increases in demand for materials between 2015 and 2060 of 87,000% for EV batteries, 1000% for wind power, and 3000% for solar cells and photovoltaics. Although they are only projections and subject to uncertainty, the World Bank put it concisely that “the clean energy transition will be significantly mineral intensive” (WB 2018).

Many of the minerals and metals needed for low-carbon technologies are considered “critical raw materials” or “technologically critical elements,” terms meant to capture the fact that they are not only of strategic or economic importance but also at higher risk of supply shortage or price volatility (EC 2017).

In addition, mining frequently results in severe environmental impacts and community dislocation. Moreover, metal production itself is energy intensive and difficult to decarbonize. Mining for copper, needed for electric wires and circuits and thin-film solar cells, and mining for lithium, used in batteries, has been criticized in Chile for depleting local groundwater resources across the Atacama Desert, destroying fragile ecosystems, and converting meadows and lagoons into salt flats. The extraction, crushing, refining, and processing of cadmium, a by-product of zinc mining, into compounds for rechargeable nickel cadmium batteries and thin-film photovoltaic modules that use cadmium telluride (CdTe) or cadmium sulfide semiconductors can pose risks such as groundwater or food contamination or worker exposure to hazardous chemicals, especially in the supply chains where elemental cadmium exposures are greatest. REEs, such as neodymium and the less common dysprosium, are needed for magnets in electric generators in wind turbines and motors in EVs, control rods for nuclear reactors, and the fluid catalysts for shale gas fracking. But REE extraction in China has resulted in chemical pollution from ammonium sulfate and ammonium chloride and tailings pollution that now threaten rural groundwater aquifers as well as rivers and streams. Several metals for green technologies are found as “companions” to other ores with differential value and unsustainable supply chains (Nassar 2015).

References (the interesting details that are skimmed over above)

  • Dominish, E. et al. 2019. Responsible minerals sourcing for renewable energy. Institute for Sustainable Futures, University of Technology, Sydney.
  • EC. 2017. Communication from the Commission to the European Parliament, the Council, the European Economic and Social Committee, and the Committee of the Regions on the 2017 list of critical raw materials for the EU. COM/2017/490, European Commission, Brussels.
  • Nassar, N. T., et al. 2015. By-product metals are technologically essential but have problematic supply. Science Advances. Companionality is the degree to which a metal is obtained largely or entirely as a by-product of one or more host metals from geologic ores. The dependence of companion metal availability on the production of the host metals introduces a new facet of supply risk to modern technology. We evaluated companionality for 62 different metals and metalloids, and show that 61% (38 of 62) have companionality greater than 50%. Eighteen of the 38—including such technologically essential elements as germanium, terbium, and dysprosium—are further characterized as having geopolitically concentrated production and extremely low rates of end-of-life recycling. It is this subset of companion metals—vital in current technologies such as electronics, solar energy, medical imaging, energy-efficient lighting, and other state-of-the-art products—that may be at the greatest risk of supply constraints in the coming decades.
  • UN. 2019. Global resources outlook 2019: Natural resources for the future we want. United Nations Environment Programme, Nairobi www.resourcepanel.org/reports/global-resources-outlook.
  • WB. 2018. Climate-smart mining: Minerals for climate action. World bank. www.worldbank.org/en/topic/extractiveindustries/brief/climate-smart-mining-minerals-for-climate-action

Goehring, L. R., Rozencwajg, A.A. 2019. The unintended consequences of high grading. Goehring & Rozencwajg.

We believe what follows will have a significant impact on the potential adoption of both electric vehicles and renewable power generation as we progress through the coming decade. Our research indicates that neither the EV nor renewable power can gain material adoption without further major reductions in battery costs. Yes, battery costs have dropped dramatically over the last decade, but we believe these costs reductions were one time in nature and will be near impossible to repeat. Further cost reductions will be entirely dependent on major advancements in battery technology–which, as of today just don’t exist.

Renewable energy’s major problem is intermittency: the sun doesn’t always shine and the wind doesn’t always blow. As a result, it’s impossible for renewables to provide reliable baseload power at scale without storage. While batteries could provide the necessary buffer to overcome the problem of intermittency, the costs of renewable plus storage remain prohibitive and uncompetitive.

Similarly, the battery pack has become the limiting factor to widespread EV adoption. Analysts estimate that the battery pack on an EV represents one-third of its total cost. Unless the EV reaches cost parity with the combustion engine it will not gain widespread adoption, unless the EV is subsidized or the ICE is outlawed. Materially reducing the cost of the battery is the only way for the EV to become competitive. Many analysts believe that EVs will reach cost parity once lithium-ion batteries can be produced at $100 per kwh. Costs would have to fall further to allow for grid-level storage. Battery proponents argue these thresholds are just around the corner. As recently as 2012, lithium-ion batteries cost more than $750 per kwh. Bloomberg New Energy Finance estimates these costs have now fallen by an impressive 80% to reach $156 per kwh by 2019. The bulls argue that even if cost improvements slowed by half, $100 per kwh will be achieved within three to five years.  Bloomberg New Energy finance reports that per $/kwh, lithium ion battery costs dropped as follows $707 (2012), $663 (2013), $588 (2018), $381 (2015), $293 (2016), $219 (2017), $180 (2018), $156 (2019).

But where did these numbers come from? The data is hard to find.  Many battery commentators spoke about economies of scale, but few were willing to give details. Battery companies also consider their manufacturing process to be their greatest competitive advantage and, as a result, few give information or breakdowns of their cost structure.

Through our research, we came across an excellent book detailing the inner workings of the battery industry. In Powerhouse, Steve LeVine (my review of this 2015 book is here) explores the challenges in developing lithium-ion batteries. He also describes the ground-breaking work conducted at the Argonne National Laboratory outside of Chicago. Levine explains how Argonne maintained meticulous cost models for all major lithium-ion battery formulations over time and regularly released these models into the public domain.

Argonne’s models are invaluable in understanding what caused the 80% fall in battery costs over the last seven years. After carefully analyzing the Argonne data, we now believe costs have come down mostly through a series of one-time improvements. Instead of continuing to fall materially (à la Moore’s Law), we believe that most of the drop in lithium-ion costs is now behind us. The first $600 move from $750 to $156 per kwh was relatively easy– the next $56 move from $156 to $100 will be extremely difficult. If we are correct, lithium-ion batteries will not be able to reach the threshold for mass adoption in either EVs or grid-level storage for the foreseeable future. We should point out that many battery experts privately acknowledge that the trajectory of the past decade is not repeatable.

Four main factors explain the fall in battery costs over the past decade: increased plant utilization, increased battery size, chemical prices and battery chemistry improvements. Beginning in 2008, the battery industry built a large amount of lithium-ion manufacturing capacity to meet the expected surge in demand. While the demand projections ultimately proved correct, the timing was initially far too optimistic and by 2010 the average battery plant only operated at 10% utilization. The low level of throughput resulted in substantial operational inefficiencies and artificially high unit costs. Argonne released a version of its model in late 2011 and we used this as a starting point for our analysis. The Argonne model assumes a 100,000 pack per year facility that operates at full capacity. The first thing we did was adjust the model to reflect a plant that only operated at 10% utilization. The result was a cost of $705 per kwh–within 5% of the battery cost reported by the battery industry for 2012.

Using this as a baseline, we adjusted the plant utilization to 100%–the base case used in the Argonne model. Immediately the costs collapsed by 50% from $705 per kwh to $360. These results have profound consequences: nearly 60% of the total cost savings of the past decade came from simply ramping up underutilized facilities. The cost savings is the result of the fixed or semi-fixed costs (such as capital equipment, land and labor) being amortized over a greater quantity of batteries. Battery manufacturing plants today are operating near full utilization. Going forward, additional demand will be met by building new plants and not by increasing utilization. As a result, the largest driver of cost reduction over the last decade is unrepeatable.

The second source of cost reduction is the size of the battery itself. In 2012 the average lithium-ion battery had much less capacity than today. For example, the benchmark battery from the 2011 Argonne model only had capacity of 11 kwh compared with 65 kwh in the most recent edition. In any battery pack there are significant costs that are incurred only once per battery. These costs include module terminals, gas release valves, bus bars, and pack jackets as well various integration costs. By increasing the capacity of the battery five-fold, these one-time costs are spread over more kilowatt hours. In a typical 2012 vintage battery, these costs made up as much as 20% of the total battery cost. As the capacity increased materially, we estimate these costs came down from $80 to $20 per kwh–a reduction of 75%.

we believe these cost reductions will not be repeated going forward. There is clearly a tradeoff between capacity, unit cost, and total cost. For example, a 2019-vintage battery has a capacity of 65 kwh equating to an EV range of 220 miles. Such a battery is estimated to cost $156 per kwh or $10,170 per battery. If you increased the capacity six-fold (similar to the increase between 2012 and today), the resulting battery would have a range of nearly 1,000 miles and a total cost of $50,000. While its cost per kwh would indeed have come down from $156 to $120, we doubt any consumer would be willing to incur these extra costs for such a ridiculously long range. Clearly there is a right-sizing of the battery that dictates capacity and we believe current EVs are close to optimal.

The third driver of cost reduction over the last several years has been chemical prices. A battery’s “chemistry” typically refers to the active material used in the battery’s cathode. For example, Tesla utilizes a so-called LCA battery where the cathode consists of a compound made of lithium, cobalt, nickel, and aluminum. This compound is purchased from a specialty chemical company which charges a price based upon the cost of the underlying materials and the cost of manufacturing. Over the last several years, the compound price has fallen by nearly 50% as manufacturing costs have declined materially. Our models suggest these cost savings have a limit as the raw material cost becomes a larger and larger percent of the total. For example, we estimate raw material costs made up 40% of chemical price in 2011. By 2018 this had flipped and the raw materials made up 60% of the total chemical price. Moreover, as battery demand picks up we believe metal demand risks exceeding supply in cobalt and nickel. This will put upward pressure on the specialty chemical price. Battery insiders admit metal prices could be a problem going forward. In January 2019, Tesla announced a cobalt offtake agreement with Glencore in an effort to secure long-term supply. These cost pressures are unlikely to be offset by lower manufacturing costs, given they now make up less and less of the total. Over all, we estimate chemical prices have lowered battery costs by $40 per kwh between 2011 and 2019.The remaining cost savings have come from improvements to the underlying battery itself and the manufacturing process. After accounting for cost inputs mentioned above, we believe these additional improvements have resulted in $100 in savings or less than 20% of the total.

Our analysis suggests a full 80% of the cost savings of the last several years have come from one-time sources that cannot be repeated. Battery bulls extrapolate the 20% annual cost savings that took prices from $705 to $157 over the last several years. Instead, we believe it is more appropriate to first back out the one-time cost savings in order to isolate the sustainable cost savings going forward. Instead of falling $550, we believe battery prices fell by less than $100 per kwh over the last seven years, after adjusting for plant utilization, pack size, and chemical cost reduction.  One-time reductions of scale ($343), larger battery ($60), chemical prices ($40), and other prices ($106) based on models from the Argonne National Laboratory.

Late last year the Wall Street Journal reported a spat between Tesla and Panasonic regarding their Gigafactory joint venture. The issue revolved around price with Panasonic claiming it could not operate profitably at current levels. The Gigafactory is the largest battery manufacturing facility in the world, operates at near full utilization, and produces very high capacity batteries. This strongly suggests its costs should be among the lowest in the world – and yet they still are not low enough. If our analysis is correct, it will become harder and harder for battery manufacturers to continue to lower costs. Perhaps the Panasonic headlines are just the start.

Posted in Automobiles, Batteries, Battery - Utility Scale, Peak Lithium | Tagged , , , | 13 Comments

How much oil left in America? Not much

Preface. If you think we have no worries because we can get arctic oil, think again. We can’t because icebergs mow drilling platforms down in the ocean. On land, massive amounts of expensive new drilling rigs, roads, rail lines, platforms, buildings and other infrastructure need to be built, and maintained every year as permafrost soil bucks and heaves like a bronco trying to shake infrastructure off.

In the first two oil shocks in the 1970s, many intelligent people proposed we should buy oil from other nations to keep ours in the ground for when foreign oil declined. But hell no, Texas, Oklahoma, and other oil states said that we need jobs and CEO/shareholder profits more than national security. Over half of all remaining oil is in the Middle East, which China, Russia, and Europe are much closer to than the U.S.

What saved the U.S. and the world, from conventional peak oil and natural gas decline since 2005 is fracking. But fracking began to decline as early as 2020 according the first report below. The second article is about oil discoveries in the U.S. declining.

This just in: John Hess, CEO of Hess Corporation, told his audience that “key U.S. shale fields are starting to plateau” and will not the next Saudi Arabia. U.S. shale oil production has been a major driver in the growth of world oil supplies. Last year the United States accounted for 98% of global growth in oil production. Since 2008 the number is 73%. so a slowdown or decline in U.S. oil production growth would mean trouble for the whole world. With 81 percent of global oil production now in decline, even a plateau in U.S. production would likely result in a worldwide decline (Kobb 2020).

Peak Fracking in the news:

2020 U.S. Shale Oil Production – All That’s Left Is The Permian And That Won’t Last Forever Either.

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Goehring, L. R., et al. 2019. The unintended consequences of high grading. Goehring & Rozencwajg.

…we believe, new underappreciated forces are now at work in the shales. As we progress through 2020, the retrenchment of drilling activity, combined with the inability for drilling productivity to rise because of “high grading” will produce the potential for a significant disappointment in oil production. If rig counts turn much lower or if productivity continues to disappoint, US shale production growth might even start to turn negative as we reach the end of 2020. Most analysts are still very optimistic about US oil growth in 2020. Estimates range from as low as 1.1 mm b/d to as high as 1.7 mm b/d, but we believe these growth estimates are far too aggressive. Several prominent oil industry veterans are calling for growth as low as 400,000 b/d, which could also turn out to be optimistic. Our Oil Markets section will outline our estimates for US shale growth, as well as global oil supply-and-demand balances.

On a longer-term basis, we believe 2019 results are just the beginning. Of the two major shale oil basins, two are showing signs of exhaustion (the Eagle Ford and Bakken). Even the Permian is starting to show its age. Recent comments from Halliburton and Schlumberger reinforce the idea that the shales may be past their prime–something we have been expecting for several quarters. The underlying issues are geological in nature: the industry is running out of high-quality Tier 1 acreage. While there was hope that drilling and completion technologies could overcome these forces, our data suggests this is unlikely.

Philippe Gauthier. May 3, 2019. US Oil Exploration Drops by 95%. Resilience.org

It is well known that oil discoveries are in continuous decline worldwide in spite of ever-increasing investments. What is less known, however, is that spending on oil exploration is fast dropping in the United States. Exploratory drilling has been decreasing year after year and now stands at only five percent of its 1981 peak. In other words, once the currently producing shale oil wells are gone, there won’t be much to take their place.

According to figures derived from US Energy Information Agency (EIA) data by French oil geologist Jean Laherrère, oil exploration has already peaked twice in the United States. The first time was in the mid-1950s, with just over 16,000 wells drilled in a single year. The second major peak dates back to 1981, with 17,573 exploration wells. This number fell to only 847 in 2017.

Another even more revealing phenomenon is the decrease in NFWs. New field wildcats are exploration wells drilled in areas that have never produced oil, as opposed to wells drilled simply to help better delineate already known oil sectors (shown as red and greenlines in the graph). NFWs also declined by 95%, from 9,151 in 1981 to just 450 in 2017. According to Laherrère, this means that the United States have been almost entirely explored for oil and gas since 1859 and that few sites are worth drilling anymore. “There are only a few unexplored areas left offshore”, he notes.

In comparison, the number of operating wells (used to pump oil from previously known fields) was 646,626 in 1985, 597,281 in 2014, and 560,996 in 2017. However, nearly 400,000 of these wells are very old and produce at a marginal rate – fewer than 15 barrels a day and sometimes as little as one. They are described as marginal wells in the graph above.

It should be noted that the number of operating wells – a figure sometimes used to suggest that the oil industry is still running strong – does not account for this sharp decrease in exploration. Once shale oil production starts to decline – and Laherrère expects this to happen within a couple of years – there will remain few reserves to support US production.

The source material for this post is: Jean Laherrère, Updated US primary energy in quad (April 30, 2019) https://aspofrance.files.wordpress.com/2019/04/updateduspe2019-3.pdf

References

Kobb, K. 2020. Peak Shale Could Spark An Offshore Drilling Boom. oilprice.com

Posted in How Much Left, Peak Oil | Tagged , | 6 Comments

Native American enslavement

Preface.  This is a book review of “The Other Slavery: The Uncovered Story of Indian Enslavement in America” by Andrés Reséndez

Slavery is an important postcarbon topic because given our past history, future wood-based civilizations will certainly return to slavery, that’s the kind of species we are. Even hunter-gatherers had slaves.

The main reason we don’t have slavery today is that fossil fuels provide each American with about 500 “energy” slaves each as I write about here.  

It’s clear that slavery has existed since towns and cities began (Scott 2013).  If you read the Old Testament, it is full of slavery (Wikipedia 2020), as I discovered when I tried to read the Bible in High school. I can’t begin to express how sad and angry I was. Plus how women were treated. It’s one of many reasons I became an atheist.

Some key points:

Indian slavery never went away, but rather coexisted with African slavery from the 16th through late 19th century. Until quite recently, we did not have even a ballpark estimate of the number of Natives held in bondage. Since Indian slavery was largely illegal, its victims toiled, quite literally, in dark corners and behind locked doors, giving us the impression that they were fewer than they actually were. Because Indian slaves did not have to cross an ocean, no ship manifests or port records exist.

Slavery had been practiced in Mexico since time immemorial. Pre-contact Indians had sold their children or even themselves into slavery because they had no food. Many Indians had been sold into slavery by other Indians as punishment for robbery, rape, or other crimes. Some war slaves were set aside for public sacrifices and ritual cannibalism. Some towns even had holding pens where men and women were fattened before the festivities. All of these pre-contact forms of bondage operated in specific cultural contexts.

In pre-contact North America … Indian societies that adopted agriculture experienced a sudden population increase and acquired both the means and the motivation to raid other peoples. The Aztecs, Mayas, Zapotecs, Caribs, Iroquois, and many others possessed captives and slaves, as is clear in archaeological, linguistic, and historical records. Nomadic groups also had slaves. But it is possible to find some nomads who were reluctant to accept even individuals who willingly offered themselves as slaves to save themselves from starvation. For some of these groups, taking slaves was simply not economically viable.

The end of native American slavery

The impetus did not originate in abolitionist groups. Instead it came from that much-maligned institution, the United States Congress.  Although the intended beneficiaries of the 13th amendment were African slaves, the term “involuntary servitude” opened the possibility of applying it to Indian captives, Mexican peons, Chinese coolies, or even whites caught in coercive labor arrangements.

It is clear that the introduction of horses and firearms precipitated another cycle of enslavement in North America.  Read all about it in “Thundersticks: Firearms and the Violent Transformation of Native America“ by David J. Silverman 2016.

What follows are my kindle notes of passages I found of interest in this book.

Alice Friedemann www.energyskeptic.com  author of “When Trucks Stop Running: Energy and the Future of Transportation”, 2015, Springer, Barriers to Making Algal Biofuels, and “Crunch! Whole Grain Artisan Chips and Crackers”. Podcasts: Collapse Chronicles, Derrick Jensen, Practical Prepping, KunstlerCast 253, KunstlerCast278, Peak Prosperity , XX2 report

***

Andrés Reséndez. 2016. The Other Slavery: The Uncovered Story of Indian Enslavement in America. Mariner books.

It came as a revelation to many easterners making their way across the continent that there were also Indian slaves, entrapped in a distinct brand of bondage that was even older in the New World, perpetrated by colonial Spain and inherited by Mexico. With the Treaty of Guadalupe Hidalgo at the end of the war, this other slavery became a part of Americans’ existence. California may have entered the Union as a “free-soil” state, but American settlers soon discovered that the buying and selling of Indians was a common practice there.

The first California legislature passed the Indian Act of 1850, which authorized the arrest of “vagrant” Natives who could then be “hired out” to the highest bidder. This act also enabled white persons to go before a justice of the peace to obtain Indian children “for indenture.

According to one scholarly estimate, this act may have affected as many as 20,000 California Indians, including 4,000 children kidnapped from their parents and employed primarily as domestic servants and farm laborers.

Americans learned about this other slavery one state at a time. In New Mexico, James S. Calhoun, the first Indian agent of the territory, could not hide his amazement at the sophistication of the Indian slave market. “The value of the captives depends upon age, sex, beauty, and usefulness,” wrote Calhoun.

Mormons bought slaves

Americans settling the West did more than become familiar with this other type of bondage. They became part of the system. Mormon settlers arrived in Utah in the 1840s looking for a promised land, only to discover that Indians and Mexicans had already turned the Great Basin into a slaving ground. The area was like a gigantic moonscape of bleached sand, salt flats, and mountain ranges inhabited by small bands no larger than extended families. Early travelers to the West did not hide their contempt for these “digger Indians,” who lacked both horses and weapons. These vulnerable Paiutes, as they were known, had become easy prey for other, mounted Indians. Brigham Young and his followers, after establishing themselves in the area, became the most obvious outlet for these captives. Hesitant at first, the Mormons required some encouragement from slavers, who tortured children with knives or hot irons to call attention to their trade and elicit sympathy from potential buyers or threatened to kill any child who went unpurchased.

In the end, the Mormons became buyers and even found a way to rationalize their participation in this human market. “Buy up the Lamanite [Indian] children,” Brigham Young counseled his brethren in the town of Parowan, “and educate them and teach them the gospel, so that many generations would not pass ere they should become a white and delightsome people.” This was the same logic Spanish conquistadors had used in the sixteenth century to justify the acquisition of Indian slaves.

With respect to slavery, the Church of Jesus Christ of Latter-day Saints had no set doctrine. However, Brigham Young, the undisputed Mormon leader, believed that slavery had always been a part of the human condition. “Eve partook of the forbidden fruit and this made a slave of her,” he affirmed in a major speech. “Adam hated very much to have her taken out of the Garden of Eden, and now our old daddy says I believe I will eat of the fruit and become a slave too. This was the first introduction of slavery upon this earth.

Over the next few years, Indians living in Utah made their way to the Mormon settlements, offering captives. These traffickers expected willing customers, but they were prepared to use the hard sell, displaying starving captives to arouse the pity of potential buyers.

Once Young gained more confidence and understood that the Indian slave trade had existed in the region for centuries and was deeply rooted, he changed his mind. By 1850 or 1851, he had become persuaded that the way to move forward was by buying Indians. “The Lord could not have devised a better plan than to have put the saints where they were to help bring about the redemption of the Lamanites and also make them a white and delightsome people,” Young said to the members of the Iron County Missions in May 1851. Other church leaders were no less enthusiastic. “The Lord has caused us to come here for this very purpose,” said Orson Pratt, one of the original Mormon “apostles,” in 1855, “that we might accomplish the redemption of these suffering degraded Israelites.

The passage of the Act for the Relief of Indian Slaves and Prisoners in 1852. The discussions that took place prior to this act reveal that Young and other Mormon leaders did not so much want to do away with Indian slavery as to use it for their own ends. They objected to Indian children and women being left in the hands of Ute captors to be tortured and killed and to allowing them to fall into the “low, servile drudgery of Mexican slavery.” But they were fully in favor of placing Native children and women in Mormon homes to associate them “with the more favored portions of the human race.

In fact, the Act for the Relief of Indian Slaves and Prisoners allowed any white resident of Utah to hold Indians through a system of indenture for a period of up to twenty years—longer than in California or New Mexico. Masters in Utah were required to clothe their indentured Indians appropriately and send youngsters between seven and sixteen years of age to school for three months each year. Other than that, they were free to put them to work.

When the Mormons first reached Utah in 1847, there were an estimated 20,000 Native Americans within the territory. By 1900 the number had plummeted to 2,623. In other words, eighty-six percent of the Indians in Utah vanished in half a century. It would not be until the 1980s that the Indian population there regained its pre-Mormon levels. As usual, it is impossible to disentangle the extent to which biological and man-made factors contributed to this catastrophic decline. But Indian slavery was certainly a major factor.

Historians Juanita Brooks and Michael K. Bennion have established that Native Americans who grew up in Mormon households married at significantly lower rates than the population at large. One would think that in a polygamous society, Indian women would have been readily incorporated as secondary wives, but this occurred rarely. Contemporaries such as John Lee Jones could not hide his astonishment at finding a Mormon man with an Indian wife, calling it “quite a novel circumstance to me.

For Indian males, the situation was dire. Few Native American men are known to have married white women.

Before the Mormons moved to Utah, they never anticipated acquiring Indians and keeping them in their homes as “indentures.” Their curious ideas about the origins of Indians and their impulse to help in their redemption eased their transformation into owners and masters. But even without these notions, they would have become immersed in an extraordinarily adaptable and durable system that had long flourished in the region. In colonial times, Spanish missionaries had acquired Indians to save their souls. In the nineteenth century, the Mormons’ quest to redeem Natives by purchasing them was not too different. Yet both ended up creating an underclass, in spite of their best wishes. Such was the staying power of the other slavery.

Origins of slavery

The beginnings of this other slavery are lost in the mists of time. Native peoples such as the Zapotecs, Mayas, and Aztecs took captives to use as sacrificial victims; the Iroquois waged campaigns called “mourning wars” on neighboring groups to avenge and replace their dead; and Indians in the Pacific Northwest included male and female slaves as part of the goods sent by the groom to his bride’s family to finalize marriages among the elite. Native Americans had enslaved each other for millennia,

Columbus traded in slaves

The earliest European explorers began this process by taking indigenous slaves. Columbus’s very first business venture in the New World consisted of sending four caravels loaded to capacity with 550 Natives back to Europe, to be auctioned off in the markets of the Mediterranean. Others followed in the Admiral’s lead. The English, French, Dutch, and Portuguese all became important participants in the Indian slave trade. Spain, however, by virtue of the large and densely populated colonies it ruled, became the dominant slaving power. Indeed, Spain was to Indian slavery what Portugal and later England were to African slavery.

Spain was the first imperial power to formally discuss and recognize the humanity of Indians. In the early 1500s, the Spanish monarchs prohibited Indian slavery except in special cases, and after 1542 they banned the practice altogether. Unlike African slavery, which remained legal and firmly sustained by racial prejudice and the struggle against Islam, the enslavement of Native Americans was against the law. Yet this categorical prohibition did not stop generations of determined conquistadors and colonists from taking Native slaves on a planetary scale, from the Eastern Seaboard of the United States to the tip of South America, and from the Canary Islands to the Philippines. The fact that this other slavery had to be carried out clandestinely made it even more insidious. It is a tale of good intentions gone badly astray.

Indian slavery never went away, but rather coexisted with African slavery from the 16th all the way through the late 19th century

Because African slavery was legal, its victims are easy to spot in the historical record. They were taxed on their entry into ports and appear on bills of sale, wills, and other documents. Because these slaves had to cross the Atlantic Ocean, they were scrupulously—one could even say obsessively—counted along the way. The final tally of 12.5 million enslaved Africans matters greatly because it has shaped our perception of African slavery in fundamental ways. Whenever we read about a slave market in Virginia, a slaving raid into the interior of Angola, or a community of runaways in Brazil, we are well aware that all these events were part of a vast system spanning the Atlantic world and involving millions of victims. Indian slavery is different. Until quite recently, we did not have even a ballpark estimate of the number of Natives held in bondage. Since Indian slavery was largely illegal, its victims toiled, quite literally, in dark corners and behind locked doors, giving us the impression that they were fewer than they actually were.

Because Indian slaves did not have to cross an ocean, no ship manifests or port records exist.

Historians working on all regions of the New World have found traces of the traffic of Indian slaves in judicial proceedings, official inquiries, and casual mentions of raids and Indian captives in letters and assorted documents.

If we were to add up all the Indian slaves taken in the New World from the time of Columbus to the end of the 19th century, the figure would run somewhere between 2.5 and 5 million slaves.

At the height of the transatlantic slave trade, West Africa suffered a population decline of about 20%.

Native populations were reduced by 70 to 90% through a combination of warfare, famine, epidemics, and slavery. Biology gets much of the blame for this collapse, but as we shall see, it is impossible to disentangle the effects of slavery and epidemics. In fact, a synergistic relationship existed between the two: slaving raids spread germs and caused deaths; deceased slaves needed to be replaced, and thus their deaths spurred additional raids.

Europeans preferred women and children slaves

In stark contrast to the African slave trade, which consisted primarily of adult males, the majority of Indian slaves were actually women and children.  

Indian slave prices from such diverse regions as southern Chile, New Mexico, and the Caribbean reveal a premium paid for women and children over adult males. As noted by the New Mexico Indian agent James Calhoun, Indian women could be worth up to fifty or sixty percent more than males. What explains this significant and persistent price premium? Sexual exploitation and women’s reproductive capabilities are part of the answer. In this regard, Indian slavery constitutes an obvious antecedent to the sex traffic that occurs today. But there were other reasons too. In nomadic Indian societies, men specialized in activities less useful to European colonists, such as hunting and fishing, than women, whose traditional roles included weaving, food gathering, and child rearing. Some early sources also indicate that women were considered better suited to domestic service, as they were thought to be less threatening in the home environment. And just as masters wanted docile women, they also showed a clear preference for children.

Children were more adaptable than grown-ups, learned languages more easily, and in the fullness of time could even identify with their captors. Indeed, one of the most striking features of this form of bondage is that Indian slaves could eventually become part of the dominant society. Unlike those caught up in African slavery, which was a legally defined institution passed down from one generation to the next, Indian slaves could become menials, or servants, and with some luck attain some independence and a higher status even in the course of one life span

Europeans had the upper hand because of their superior war technology—specifically, horses and firearms—which allowed them to prey on Indian societies almost at will. What started as a European-controlled enterprise, however, gradually passed into the hands of Native Americans. As Indians acquired horses and weapons of their own, they became independent providers of slaves. By the 18th and 19th centuries, powerful equestrian societies had taken control of much of the traffic. In the Southwest, the Comanches and Utes became regional suppliers of slaves to other Indians as well as to the Spaniards, Mexicans, and Americans. The Apaches, who had early on been among the greatest victims of enslavement, transformed themselves into successful slavers. In colonial times, Apaches had been hunted down and marched in chains to the silver mines of Chihuahua. But as Spanish authority crumbled in the 1810s and the mining economy fell apart during the Mexican era, the Apaches turned the tables on their erstwhile masters. They raided Mexican communities, took captives, and sold them in the United States.

The other slavery continued through the end of the 19th century and in some remote areas well into the 20th century. Disguised as debt peonage, which stretched the limits of accepted labor institutions and even posed as legal work, this other slavery was the direct forerunner of the forms of bondage practiced today.

At last count, there were more than 15,000 books on African slavery, whereas only a couple of dozen specialized monographs were devoted to Indian slavery.  It is as if each group fits into a neat historical package: Africans were enslaved, and Indians either died off or were dispossessed and confined to reservations.

Such an oversimplification is troublesome, because Indian slavery actually explains a great deal about the shared history of Mexico and the United States and casts new light on even familiar events. If we want to find answers to such varied questions as why the Pueblo Indians launched a massive rebellion in 1680 and drove the Spaniards out of New Mexico; why the Comanches and Utes became so dominant in large areas of the West; why the Apache chief Geronimo hated Mexicans so much; why article 11 of the Treaty of Guadalupe Hidalgo prohibited Americans from purchasing “Mexican captives held by the savage tribes”; why California, Utah, and New Mexico legalized Indian slavery, disguising it as servitude or debt peonage; or why so many Navajos appear in New Mexico’s baptismal records in the aftermath of Colonel Kit Carson’s Navajo campaign of 1863–1864, we have to come to terms with the reality of this other slavery.

I focus on some areas that experienced intense slaving. Thus the story begins in the Caribbean, continues through central and northern Mexico, and ends in the American Southwest—with occasional glimpses of the larger context. And even within this restricted geography, I limit myself to examining moments when the evidence is particularly abundant or when the traffic of Indians underwent significant change.

The second caveat concerns the definition of Indian slavery. Who exactly counts as an Indian slave? The honest answer is that no simple definition is possible.  After the Spanish crown prohibited the enslavement of Indians, owners resorted to a variety of labor arrangements, terms, and subterfuges—such as encomiendas, repartimientos, convict leasing, and debt peonage—to get around the law.

They generally shared four traits that made them akin to enslavement: forcible removal of the victims from one place to another, inability to leave the workplace, violence or threat of violence to compel them to work, and nominal or no pay.

Early chroniclers, crown officials, and settlers all understood the extinction of the Indians as a result of warfare, enslavement, famine, and overwork, as well as disease.

King Ferdinand of Spain—no Indian champion and probably the most well-informed individual of that era—believed that so many Natives died in the early years because, lacking beasts of burden, the Spaniards “had forced the Indians to carry excessive loads until they broke them down.

The documentation suggests that the worst epidemics did not affect the New World immediately. The late arrival of smallpox actually makes perfect sense. Smallpox was endemic in the Old World, which means that the overwhelming majority of Europeans were exposed to the virus in childhood, resulting in one of two outcomes: death or recovery and lifelong immunity. Thus the likelihood of a ship carrying an infected passenger was low. And even if this were to happen, the voyage from Spain to the Caribbean in the sixteenth century lasted five or six weeks, a sufficiently long time in which any infected person would die along the way or become immune (and no longer contagious). There were only two ways for the virus to survive such a long passage. One was for a vessel to carry both a person already infected and a susceptible host who contracted the illness en route and lived long enough to disembark in the Caribbean. The odds of this happening were minuscule—around two percent according to a back-of-the-envelope calculation by the demographer Massimo Livi Bacci.

If I had to hazard a guess using the available written sources, it would be that between 1492 and 1550, a nexus of slavery, overwork, and famine killed more Indians in the Caribbean than smallpox, influenza, and malaria. And among these human factors, slavery has emerged as a major killer.

The Spanish crown never intended to commit genocide or perpetrate the wholesale enslavement of the Native inhabitants of the Caribbean. These outcomes were entirely contrary to Christian morality and to Spain’s most basic economic and imperial interests. Yet a handful of individual decisions, human nature, and the archipelago’s geography led to just such a Dantean scenario. Christopher Columbus’s life offers us entrée into this tragic chain of decisions and circumstances.

Columbus knew these peoples were intelligent but “weaker and less spirited” than Europeans, making them especially suitable as slaves. “They began to understand us, and we them, whether by words or by signs,” Columbus would later write of these first captives, “and these have been of great service to us.” The return ocean passage also afforded him time to develop his economic plans, which included the wholesale export of Native slaves. In his very first letter after his return, addressed to the royal comptroller, Luis de Santángel, he promised gold, spices, cotton, and “as many slaves as Their Majesties order to make, from among those who are idolaters

The Admiral’s plan to ship Natives to Europe was quite understandable given his ideas about the nature of the Indians, his anxieties about making his discovery economically viable, and the one-tenth of the proceeds of the sale of these captives that he would pocket according to the terms of the capitulations.

But even in its early days, Columbus could observe how a European stronghold on another continent could thrive by trading a variety of products, including humans. There is little doubt that the Admiral of the Ocean Sea intended to turn the Caribbean into another Guinea.

Early in his second voyage to America, Columbus sent dozens of Carib Indians back to Spain with the first returning ships. Accompanying them was a candid letter to Ferdinand and Isabella: “May Your Highnesses judge whether they ought to be captured, for I believe we could take many of the males every year and an infinite number of women. A year later, in February 1495, he sent 550 Indians from Española crammed into four caravels bound for the slave market of southern Spain, his largest shipment thus far. The caravels were filled to capacity. The conditions were extreme. During the passage, approximately 200 Natives perished “because they were not used to the cold weather,” Cuneo wrote, “and we cast their bodies into the sea.” Of the remaining Indians, half were ill and very weak when they finally arrived in Spain.

Slavery was a venerable institution in Spain (and throughout the Mediterranean world). Anyone visiting Seville, Valencia, Barcelona, or any other Iberian city in the fifteenth century would have come in contact with a variety of slaves. Many of these people were Muslims who had lived in Spain for centuries and who had been seized as prisoners during the Reconquista, the Christian campaigns to retake the peninsula. Other captives came from the eastern edges of Christendom—Greeks, Bulgarians, Russians, Tartars, Circassians, and others traded by Mediterranean merchants.

Slaves appeared before a Spanish official, who took the depositions of the captors and—crucially—the captives to determine whether they were in fact “enemies of the Catholic church and of the crown” who had been taken in a “good” or “just” war. Therefore the question before the Catholic monarchs was whether the Natives of the New World met this legal standard of “enemy” and thus constituted an enslaveable people. Ferdinand and Isabella appointed a committee of lawyers and theologians to help them reach a final determination. During those five years, however, the monarchs’ reluctance to enslave Natives intensified.

Isabella and Ferdinand freed many Indians and, astonishingly, mandated that many of them be returned to the New World.

Enslavement & gold in Espanola

Most Indians did everything they could to avoid the tribute, including hiding away in the mountains or fleeing Cibao altogether. After three collection periods, the Indians had provided only 200 pesos’ worth of gold out of an anticipated 60,000. Clearly, if the Spaniards wanted gold from Española, they would have to get it themselves.  

Instead of using valuable beasts of burden, the Spaniards compelled Natives to do all the hauling, carrying 60-90 pounds on their backs; horses and mules were devoted to the tasks of conquest and pacification. The Indians were even forced to carry their Christian masters in hammocks. Any refusal led to floggings, beatings, thrashings, punches, curses, and countless other vexations and cruelties.

Despite Ovando’s well-intentioned administration, the gold rush wiped out the island’s population. The mines destroyed the Taínos working there and in the process doomed those left behind in the villages. Caciques who had ruled over hundreds of individuals saw their dependents shrink to a handful of survivors after ten years of unrelenting work.

Las Casas was one of the 2,500 colonists who had arrived in Española with Governor Ovando, and he had received an encomienda in the goldfields of Cibao, where he observed the cataclysmic decline of the Indians. He believed that three million Indians had died in just a few years.

Another knowledgeable contemporary writing a few years later, Pietro Martire d’Anghiera, expressed the same idea. “Let us be strictly truthful and add that the craze for gold was the cause of their destruction,” he wrote to the pope, “for these people who were accustomed as soon as they had sown their fields to play, dance, and sing, and chase rabbits, were set mercilessly to work.

Ovando himself, realizing the depth of the crisis and the failure of his policies, proposed a dramatic and far-reaching solution: bring Indian slaves from the surrounding islands to work in the gold mines and other endeavors of Española. A new chapter in the sad history of the early Caribbean had begun. In the early years of the sixteenth century, Puerto Real and Puerto de Plata were two drab ports on the north shore of Española.

The northern shore of Española opened up to the green-blue waters of the Caribbean and to dozens of islands that were large enough to sustain Native populations but small enough that the people could not hide from Spanish slavers.

The first step for anyone wishing to launch a slaving expedition was to obtain a license. Clandestine slaving was possible, but because captives needed to be certified by crown officials before their legal sale in the markets of Española or Puerto Rico, it was best to get a license. Although King Ferdinand and Queen Isabella had prohibited the enslavement of Indians in 1500, their order was followed by what appeared to them to be three judicious exceptions. In 1503 the crown authorized the enslavement of Indians who were cannibals

In 1504 the monarchy also allowed the capture of Indians taken in “just wars,” extending to the New World the doctrine that had long justified the impressment and bondage of enemies in Europe. And in 1506 the monarchs permitted the colonists to “ransom” Indians who were enslaved by other Indians and whom the Spaniards could then keep as slaves—the logic being that ransomed Indians would at least become Christianized and their souls would be saved.

Of the three, they most often used cannibalism to legitimize their raids. Scholars have argued that early Spaniards had perverse incentives to exaggerate, sensationalize, and even fabricate stories of man-eating Indians, given the legal context.

Slave raiders formed compact groups of around 50 to 60 men. They arrived quietly on their ships; waited until nighttime, “when the Indians were secure in their mats”; and descended on the Natives, setting their thatched huts on fire, killing anyone who resisted, and capturing all others irrespective of age or gender. Once the initial ambush was over, the slavers often had to pursue the Indians who had escaped, unleashing their mastiffs or running the Natives down with their horses. If there were many captives, the slavers took the trouble of building temporary holding pens by the beach, close to where their ships were moored, while horsemen combed the island. The attackers literally carried off entire populations, leaving empty islands in their wake.

Unlike the Middle Passage, which required a month of travel, slaving voyages in the Caribbean lasted only a few days. Yet the mortality rates of these short passages surpassed those of transatlantic voyages. Friar Las Casas reported that “it was never the case that a ship carrying 300 to 400 people did not have to throw overboard 100 to 150 bodies out of lack of food and water”—making for a mortality rate of 25 to 50%.

***

Left to their own devices, the Native peoples of the Caribbean would have limited their exposure to illness, coping like many other human populations before and after them. We will never know how many Indians actually died of disease alone. But even if one-third, or two-thirds, of the Caribbean islanders had died of influenza, typhus, malaria, and smallpox, they would have been able to stem the decline and, in the fullness of time, rebound demographically. In fact, some Indian populations of the New World did just that. But unlike fourteenth-century Europeans, the Natives of the Caribbean were not left to their own devices. In the wake of the epidemics, slavers appeared on the horizon.

A slave woman has “no shadow of law to protect her from insult, from violence, or even from death.” The notion that a slave could sue his or her master to attain freedom would have been laughable to most southerners during the first half of the 19th century. Spain’s slaves lived under an entirely different legal regime. The New Laws not only affirmed that Indians were free vassals but also instructed the audiencias, or high courts, of the New World to “put special care in the good treatment and conservation of the Indians,” to remain informed of any abuses committed against Indians, and “to act quickly and without delaying maliciously as has happened in the past.

Because the Spanish legal system was open to Indians, a class of specialized lawyers that became known as procuradores generales de indios served to represent them. These procuradores assisted indigenous clients in building their cases and navigating the Spanish bureaucracy. In stark contrast to the black slaves of the antebellum South, Indians could rely on these lawyers for at least some representation in the Spanish legal system.

Indians may have been “free vassals” in the eyes of the law, but Spanish masters resorted to slight changes in terminology, gray areas, and subtle reinterpretations to continue to hold Indians in bondage. Still, the larger point remains true: the legal regimes under which African and Indian slaves operated were vastly different.

When we think of the Middle Passage, we immediately imagine adult African males. This image is based on fact. Of all the Africans carried to North America from the 16th through 18th century, males outnumbered females by a ratio approaching two to one, and they were overwhelmingly adults. The “reverse Middle Passage,” from America to Spain, was just the opposite: the slave traffic consisted mostly of children, with a good contingent of women and a mere sprinkling of men.

Most slaves held in Italian and Spanish households in the 14th through 16th centuries—whether Slavs, Tartars, Greeks, Russians, or Africans—were women. Females comprised an astonishing 80% or more of the slaves living in Genoa and Venice, the two leading slave-owning cities in Italy.

Adult Native women in Santo Domingo or Havana cost 60% more than adult males.

Spaniards who wished to transport Indians to Europe after 1542 had to demonstrate that they were taking legitimate slaves—branded and bearing the appropriate documentation from the time when slavery was legal—or were accompanied by “willing” Native travelers. Faced with these circumstances, traffickers went to great lengths to procure “willing” Indians, particularly children, who were more easily tricked and manipulated than adults.

Once these Indians were in Spain, their lives revolved around the master’s house. Occasionally they accompanied their masters on errands or were sent out of the house to fetch water, food, or some other necessity. For the most part, however, they were confined to the home, where their chores were never-ending. They swept floors, prepared food, looked after children, and worked in the master’s trade. On duty at all hours of the day and night, they watched as the days turned into months and years. The major milestones in their lives occurred when they were transferred from one master to the next. In return for their ceaseless work, they received no compensation except room and board.

The minute the lawsuit was filed, their relationship with their master turned decidedly hostile. Since slaves had nowhere else to go, they generally continued to live under the same roof with their masters during their trials, which could last for months or even years, giving masters ample opportunities to punish, torture, or somehow make their slaves desist.

Indians taken to Spain when they were very young often could not speak Native languages or remember much about their homelands. So slave owners’ most common strategy consisted of asserting that their slaves had not come from the Spanish Indies but from the Portuguese Indies (Portuguese colonies)—Brazil, northern and western Africa, and parts of Asia—where the enslavement of Natives was legal.

The New Laws did not end Indian slavery in Spain, but they did initiate the gradual eradication of this peculiar institution in the Iberian Peninsula. After 1542 it became public knowledge that the king of Spain had freed the Indians of the Americas. Word about Indians suing their masters and scoring legal victories spread quickly. By the 1550s, Indian slaves living in small Spanish towns were well aware that they were entitled to their freedom.

The Spanish crown also attempted to end Indian slavery in the New World, but the situation could not have been more different there. Indian slaves constituted a major pillar of the societies and economies of the Americas.

Spanish conquerors also acquired slaves, tens of thousands of them. Many were taken from among those who resisted conquest. They were called esclavos de guerra, or war slaves. According to one of Cortés’s soldiers who later wrote an eyewitness account, before entering an Indian town Spaniards requested its inhabitants to submit peacefully, “and if they did not come in peace but wished to give us war, we would make them slaves; and we carried with us an iron brand like this one to mark their faces.” The crown authorized Cortés and his soldiers to keep these Indians as long as the conquerors paid the corresponding taxes.

For the period between January 1521 and May 1522—that is, a few months before and after the fall of Tenochtitlán—Spaniards paid taxes on around 8,000 slaves taken just in the Aztec capital and its immediate surroundings. Thousands more flowed from Oaxaca, Michoacán, Tututepec, and as far away as Guatemala as these Indian kingdoms were brought into the Spanish fold. “So great was the haste to make slaves in different parts,” commented Friar Toribio de Benavente (also known as Motolinía) some years later, “that they were brought into Mexico City in great flocks, like sheep, so they could be branded easily.

Spaniards also purchased Indians who had already been enslaved by other Indians and were regularly offered in markets and streetsTo distinguish these slaves from those taken in war, the Spaniards used a different type of brand, also applied on the face.

Slavery had been practiced in Mexico since time immemorial. Pre-contact Indians had sold their children or even themselves into slavery because they had no food. Many Indians had been sold into slavery by other Indians as punishment for robbery, rape, or other crimes. Some war slaves were set aside for public sacrifices and ritual cannibalism. Some towns even had holding pens where men and women were fattened before the festivities. All of these pre-contact forms of bondage operated in specific cultural contexts.

In the 1520s, these slaves were so plentiful that their average price was only 2 pesos, far less than the price of a horse or cow. Spaniards typically traded small items such as a knife or piece of cloth in exchange for these human beings.

In Spain the New Laws produced discontent, but in the Spanish colonies they caused outright rebellion. In Peru a group of colonists murdered the official sent from Spain to enforce the laws and then decapitated him. For a time it seemed that Peru might even break away from the empire.

The Spanish envoy agreed to suspend the New Laws until he received further instructions from the king. Charles and the members of the Council of the Indies considered the situation and eventually consented to the granting of more encomiendas. It was a major victory for slave owners. Encomiendas remained in existence for another century and a half, affecting tens of thousands of Indians.

Thus a new regime emerged in the 1540s and 1550s, a regime in which Indians were legally free but remained enslaved through slight reinterpretations, changes in nomenclature, and practices meant to get around the New Laws.

All over Spanish America, Indian slave owners and colonial authorities devised subtle changes in terminology and newfangled labor institutions to comply with the law in form but not in substance.

Throughout the hemisphere, Spaniards chanced upon Indian villages or nomadic bands and snatched a woman or a couple of children to make a tidy profit. While constant, these spur-of-the-moment kidnappings were narrow in scope. The real slavers, the individuals who truly benefited from trafficking humans, operated on a much larger scale. They planned their expeditions carefully, procured investors and funds for weapons and provisions, hired agents to sell the slaves in mines and other enterprises, and—because Indian slavery was illegal—made sure to exploit loopholes and elicit plenty of official protection. Frontier captains were ideally suited for this line of work, as the empire expanded prodigiously during the sixteenth century. For them, slavery was no sideline to warfare or marginal activity born out of the chaos of conquest. It was first and foremost a business involving investors, soldiers, agents, and powerful officials.

Cape Verde’s specialty was supplying African slaves to Spanish America. Because the Spaniards possessed no slaving ports of their own in western Africa, they had to rely on the Portuguese to obtain black slaves. Cape Verde was ideal for this purpose. The archipelago lay in the same latitude as the Spanish Caribbean and was four hundred miles closer to it than the African coast. As in all forms of commerce, time was of the essence. But this was particularly so in a business in which the length of the passage determined the survival rate. Every additional day of travel represented more dead slaves and lost profits. By virtue of being the part of Africa closest to Spanish America, the Cape Verde Islands developed as the preeminent reexport center for slaves.

Spanish gentlemen and ladies gathered at a garden in Texcoco belonging to the viceroy in order to choose their English slaves. “Happy was he that could get soonest one of us,” Phillips observed. Each new owner simply took his or her slave home, clothed him, and put him to work in whatever was needed, “which was for the most part to attend upon them at the table, and to be as their chamberlains, and to wait upon them when they went abroad.” Like the liveried Africans who waited on their wealthy masters around Mexico City, these Englishmen represented conspicuous consumption, meant to be displayed to houseguests and on outings. Ordinary Indian slaves would not have fared so well. Some of the English prisoners were sent to work in the silver mines, but there too they received favorable treatment, as they became “overseers of the negroes and Indians that labored there.” Some of them remained in the mines for three or four years and, in a strange twist of fate, became rich. The experiences of Miles Phillips and the others differed in important respects from those of Indian slaves, but they were still subjected to the slavers’ methods. They traveled from Pánuco in a coffle, were sold in the slave markets of Texcoco, worked in the mines, and witnessed the living conditions of Indian men and women in bondage.

Like any other slaving system, the one in northern Mexico boiled down to pesos. The expeditions into Chichimec lands were expensive undertakings that required up-front outlays of cash. Each soldier needed to pay for horses, weapons, protective gear, and provisions. Experienced Indian fighters estimated that a soldier could not equip himself adequately for less than 1,000 pesos. Yet the crown generally paid a yearly salary of only 350 pesos (which was increased to 450 pesos after 1581). So the first thing a captain had to do in order to attract soldiers and volunteers was to assure them that the campaign would yield Indian captives. Without being offered a chance to capture Natives, few would risk life or horse. Time and again, Carvajal faced this fundamental economic reality.

Punitive expeditions into the Chichimec frontier were economic enterprises. Investors offered loans or equipment to the volunteers, who would repay them through the sale of captives at the end of the campaign.

Encomienda owners in the north were assigned bands of hunter-gatherers who, unlike the agriculturalists of central Mexico, had little to give but their labor. To profit from their encomiendas, encomenderos had to hunt down their “entrusted” Indians, transport them (often at gunpoint) to an estate, and make them work during planting or harvesting time without pay before releasing them again. This system of cyclical enslavement became widespread and quite characteristic of the encomiendas of Nuevo León. Granting nomadic peoples in encomiendas under these conditions was abusive, but it was entirely legal and well within Carvajal’s powers.

The principal shaft of that mine went down 420 feet, more than the length of a football field. The effort needed to make these tunnels is hard to imagine. Workers dug with simple picks, wedges, moils (metal points), and crowbars, toiling from sunrise to sunset. (Explosives were not introduced until the early eighteenth century.) Some of the tools weighed thirty or forty pounds.

Digging the shafts was a major undertaking, but it was only the start of the operation. Unlike much of the gold of the Caribbean, which could be collected as flecks or nuggets, silver was mostly embedded in the rock and combined with other substances. This geological reality added immensely to the work that was necessary to extract it. In Parral, as in many other silver mines throughout Mexico, Indians and black slaves carried the ore to the surface. Carrying leather bags full of rocks, they had to crawl through low passages and ascend by means of notched pine logs, or “chicken ladders.” Since the carrier’s hands were occupied holding the ladder, the heavy bag—which could weigh between 225 and 350 pounds—dangled perilously from his forehead and was propped against his back.

The main work took place on a central patio, where one could see heaps of ore and crews crushing rock and isolating the silver. Most of the haciendas in Parral used the smelting method. After crushing the ore into coarse gravel, workers shoveled it into blast furnaces and combined it with molten lead to get a

higher yield of silver. The ore was crushed to a fine powder, spread on a courtyard or patio, and sprinkled with mercury. Water was added to allow the heavier metals to sink to the bottom of this sludge. In Parral the worst job consisted of walking in shackles over this toxic mud in order to mix it thoroughly. This job invariably resulted in serious health problems, as the poisonous metal would enter the body through the pores and seep into the cartilage in the joints. The last step of the patio process was to heat the amalgam in order to vaporize the mercury and water and leave only the silver behind. Workers involved in this step absorbed the mercury vapors through their mucous membranes, which generally caused uncontrollable shaking of the limbs and death in as little as two or three years.

There were also “Chinese” slaves in Parral. (“Chinese” was a blanket term used for all Asian people.) Although they were never numerous, their presence revealed a network of enslavement that operated across the Pacific Ocean.

Mine owners therefore regarded salaried work not as an ideal form of labor, but as a necessary evil and a first step toward acquiring a more pliable and stable workforce. One strategy to achieve this goal involved advancing wages in pesos or specie (silver coins) to free workers. Since food, clothes, and many other necessities were outrageously expensive in Parral (and often because of gambling and drinking habits), workers frequently incurred debts. In principle these were free individuals who had temporarily fallen on hard times. But the reality was more ominous. Unable to repay their debts, these workers could not leave the mines until they closed their accounts. We may think of debt peonage as a phenomenon of great haciendas in the years leading up to the Mexican Revolution. Yet two centuries earlier, indebted servants and peons proliferated in Parral.

It is clear that many indebted workers were considered part of the mines’ inventories and more or less permanently attached to them. For instance, when Parral owners put a mine up for sale, they specifically listed the number of indebted workers. Evidently the existence of such workers was a major consideration for prospective buyers. Regardless of the exact sequence of events, mine owners ultimately addressed the problem of insufficient workers by bringing Indians to Parral from even farther away. Coastal Natives were hunted down and transported with great difficulty across the Sierra Madre Occidental to Parral.

In 1598 Juan de Oñate arrived there with his men and in short order took possession of this kingdom. Oñate apportioned Indians who submitted peacefully in encomiendas, but he reserved a far worse fate for those who resisted: all males over age 25 had one foot cut off.

By calling for unprovoked attacks on the Indians, Governor Rosas initiated a cycle of reprisals and counter-reprisals that resulted in ideal conditions for obtaining Indian workers, some of whom ended their days in his textile shop.

Clearly by the 1650s, the kingdom of New Mexico had become little more than a supply center for Parral. From the preceding examples and many others, it is possible to reconstruct the overall trajectory of the traffic of Natives from New Mexico. The earliest Spanish settlers began by enslaving Pueblo Indians. But they quickly discovered that keeping Pueblos as slaves was counterproductive, as this bred discontent among the Natives on which Spaniards depended for their very sustenance. Although the occasional enslavement of Pueblos continued throughout the seventeenth century, the colonists gradually redirected their slaving activities to Apaches and Utes. The Spaniards injected themselves into the struggles between different rancherías (local bands) and exploited intergroup antagonisms to facilitate the supply of slaves,

By 1679 so many Indians were flowing out of New Mexico that the bishop of Durango launched a formal investigation into this burgeoning business. Bishop Bartolomé García de Escañuela undertook this inquest less out of a sense of moral or religious duty than out of concern about the church’s declining revenues. Ordinarily, the faithful of Nueva Vizcaya—a province that included the modern states of Chihuahua, Durango, Sonora, and Sinaloa—had to pay a yearly tithe to the bishopric of ten percent of their animals and crops. But ranchers all over this region discovered that they were able to reduce their herds—and consequently their tax liabilities—by trading tithe-bearing animals for Indian slaves, who were tax-free. In effect, the acquisition of Indians amounted to a tax shelter,

Beyond northern Mexico, coerced Indian labor played a fundamental role in the mining economies of Central America, the Caribbean, Colombia, Venezuela, the Andean region, and Brazil. Yet the specific arrangements varied from place to place. Unlike Mexico’s silver economy, scattered in multiple mining centers, the enormous mine of Potosí dwarfed all others in the Andes. To satisfy the labor needs of this “mountain of silver,” Spanish authorities instituted a gargantuan system of draft labor known as the mita, which required that more than two hundred Indian communities spanning a large area in modern-day Peru and Bolivia send one-seventh of their adult population to work in the mines of Potosí, Huancavelica, and Cailloma. In any given year, ten thousand Indians or more had to take their turns working in the mines.

This state-directed system began in 1573 and remained in operation for 250 years.  Although the degree of state involvement and the scale of these operations varied from place to place, they all relied on labor arrangements that ran the gamut from clear slave labor (African, Indian, and occasionally Asian); to semi-coercive institutions and practices such as encomiendas, repartimientos, debt peonage, and the mita; to salaried work.

In the twilight of his life, King Philip came to grips with the failure of his policies as he struggled to save his soul. Yet he died before he could set the Indians of Chile free and discharge his royal conscience.

But Philip was not alone in trying to make things right. His wife, Mariana, was thirty years younger than he, every bit as pious, and far more determined. The crusade to free the Indians of Chile, and those in the empire at large, gained momentum during Queen Mariana’s regency, from 1665 to 1675, and culminated in the reign of her son Charles II. Alarmed by reports of large slaving grounds on the periphery of the Spanish empire, they used the power of an absolute monarchy to bring about the immediate liberation of all indigenous slaves. Mother and son took on deeply entrenched slaving interests, deprived the empire of much-needed revenue, and risked the very stability of distant provinces to advance their humanitarian agenda. They waged a war against Indian bondage that raged as far as the islands of the Philippines, the forests of Chile, the llanos (grasslands) of Colombia and Venezuela, and the deserts of Chihuahua and New Mexico.

In the early days of conquest, European slavers were attracted to some of the most heavily populated areas of the New World, including the large Caribbean islands, Guatemala, and central Mexico. But by the time the antislavery crusade got under way in the 1660s, nearly two centuries after the discovery of America, the slaving grounds had shifted to remote frontiers where there were much lower population densities but where imperial control remained minimal or nonexistent and the constant wars yielded steady streams of captives.

From the coast of Brazil, small parties of bandeirantes—a cross between pathfinders, prospectors, and slavers—also mounted devastating expeditions into the interior. Over the centuries, Brazilians have celebrated the bandeirantes in poems, novels, and sculptures, hailing them as the founders of the nation. Yet the bandeirantes took upwards of 60,000 captives in the middle decades of the seventeenth century, snatching mostly Indians congregated in the Jesuit missions of Paraguay.

The llanos of Colombia and Venezuela, the vast grasslands crisscrossed by tributaries of the Orinoco River, were a third zone of enslavement. Here Spanish traffickers competed with English, French, and above all Dutch networks of enslavement, all of which operated in the llanos. Interestingly, the Carib Indians—whom the Spaniards had long sought to exterminate—emerged as the preeminent suppliers of slaves to all of these European competitors of the Spanish. The Caribs carried out raids at night, surrounding entire villages and carrying off the children. A Spanish report summed up these activities: “It will not be too much to say that the Caribs sell yearly more than three hundred children, leaving murdered in their houses more than four hundred adults, for the Dutch do not like to buy the latter because they well know that, being grown up, they will escape.” The victims of this trade could variously wind up in the Spanish haciendas of Trinidad, the English plantations of Jamaica, the Dutch towns of Guyana, or as far west as Quito, Ecuador, where some of them toiled in the textile sweatshops for which this city was famous.

The last major area of enslavement, and perhaps the largest, was in the Philippines, where Europeans had stumbled on a dazzling world of slaves. “Some are captured in wars that different villages wage against each other,” wrote Guido de Lavezaris seven years after the Spanish had first settled in the Philippines, “some are slaves from birth and their origin is not known because their fathers, grandfathers, and ancestors were also slaves,” and others became enslaved “on account of minor transgressions regarding some of their rites and ceremonies or for not coming quickly enough at the summons of a chief or some other such thing.

Queen Mariana brought renewed energy to the abolitionist crusade. If we had to choose an opening salvo, it would be the queen’s 1667 order freeing all Chilean Indians who had been taken to Peru. Her order was published in the plazas of Lima and required all Peruvian slave owners to “turn their Indian slaves loose at the first opportunity.

In 1672 she freed the Indian slaves of Mexico, irrespective of their provenance or the circumstances of their enslavement.

With the accession of Charles II to the throne in 1675, the antislavery crusade neared its culmination. In 1676 Charles set free the Indian slaves of the Audiencia of Santo Domingo (comprising not only the Caribbean islands but some coastal areas as well) and Paraguay. Finally, on June 12, 1679, he issued a decree of continental scope: “No Indians of my Western Indies, Islands, and Mainland of the Ocean Sea, under any circumstance or pretext can be held as slaves;

In a separate order issued on the same day, el Hechizado freed the slaves of the Philippines, thus completing the project initiated by his father and mother of setting free all Indian slaves within the Spanish empire,

These early crackdowns failed to stop the Indian slave trade, however. Residents continued to buy Indians clandestinely, and slavers continued to supply them. But the crusade certainly made life more difficult for the traffickers.

There were very real limitations of monarchical authority. It worked in places where determined officials such as Governor Roteta and audiencia member Haro y Monterroso upheld the royal decrees. However, in many areas of the empire, the very officials charged with freeing the Indians were also in collusion with the slavers.

It was in the provinces that the situation became truly critical. Native Filipinos faced total ruin, as they had most of their wealth invested in their slaves. Moreover, the slaves supplied much of the rice and other basic foodstuffs of the islands, and now “agitated and encouraged by the recent laws setting them free [they] went to the extremity of refusing to plant the fields.” The greatest threat of all was that “by setting these slaves free, the provinces remote from Manila may be stirred up and revolt,

In the Philippines all branches of the imperial administration, including the governor, the members of the audiencia, the city council of Manila, members of the military, and the ecclesiastical establishment beginning with the archbishop, sent letters to Charles II requesting the suspension of the emancipation decree.

The Spanish campaign also pushed the slave trade further into the hands of Native intermediaries and traffickers, whether in northern Mexico, Chile, or the llanos of Colombia and Venezuela. The crown had some power over Spanish slavers and authorities, but its control over indigenous slavers was extremely tenuous or nonexistent. The late seventeenth and early eighteenth centuries witnessed the emergence of powerful indigenous polities that gained control of the trade. The Carib Indians consolidated their position in the llanos as the preeminent suppliers of slaves to French, English, and Dutch colonists, consistently delivering hundreds of slaves every year. In the far north of Mexico, the Comanche Indians came to play a similar role and began a breathtaking period of empire building.

These runners, keepers of accurate information and athletes of astonishing endurance, ran in the summer heat, pushing as far south as Isleta and as far west as Acoma and the distant mesas of the Hopis. In pairs they snaked through canyons and skirted mountains, trying to remain inconspicuous as they covered hundreds of miles with ruthless efficiency. They were sworn to absolute secrecy. And even though they would convey an oral message, they also carried an extraordinary device: a cord of yucca fiber tied with as many knots as there were days before the insurrection. “[Each pueblo] was to untie one knot to symbolize its acceptance,” observed one medicine man from San Felipe who was implicated in the plot, “and also to be aware of how many knots were left.” The countdown had begun.

As the day of the uprising approached, some pueblos around Santa Fe refused to go through with the plot. They had initially supported the plan even though they would bear the brunt of the fighting against the Spaniards residing in the capital city. But during the waxing moon, they began to reconsider the grave consequences of an all-out war against a foe that possessed firearms and horses. With the moon nearly full and only two knots left in the cord, the Native governors of Tanos, San Marcos, and Ciénega fatefully decided to switch sides. They journeyed to Santa Fe to denounce the conspiracy and, in a more personal and insidious betrayal, alert the Spanish authorities to the whereabouts of two Indian runners, Nicolás Catúa and Pedro Omtuá, who were still making the rounds with the knotted cord.

The revolt swept throughout the kingdom of New Mexico on August 10–11, destroying houses, ranches, and churches and killing some 400 men, women, and children, or about 205 of New Mexico’s Spanish population. The rebels did not engage in wanton destruction or indiscriminate killing. Po’pay and the other leaders gave them clear instructions. They were to destroy missions, churches, and all manner of Christian paraphernalia: “break up and burn the images of the holy Christ, the Virgin Mary, and the other saints, the crosses, and everything pertaining to Christianity.

Religion was clearly a flashpoint of the conflict. Throughout the seventeenth century, missionaries had made every effort to suppress “idolatry” and “superstition” and to subdue the Native medicine men, who had become their main competitors and antagonists. For their part, the medicine men had retained their traditional beliefs and clandestinely practiced their religion inside kivas. When Po’pay descended victorious from his perch in Taos and toured the pueblos, he commanded the Indians to return to their old traditions and beliefs, declaring that Jesus Christ and the Virgin Mary had died.

Near impunity permitted friars to extract unpaid Native labor. Governor Bernardo López de Mendizábal (1659–1661) flatly accused the missionaries of exploiting the Indians under the pretense that it was “for the temples and divine worship” and forcing “all the Indians of the pueblos, men as well as women, to serve them as slaves.” Some of the friars also abused their privileged position to procure sex. Oral traditions from the Hopi villages—which are corroborated at least in part by documentary information—detail how some friars at Oraibi and Shungopovi would send the men to fetch water in distant places so that the friars could be with the women during their absence. Most threatening of all was the missionaries’ capacity to torture and kill in the name of God.

When it came to fighting the Devil, Friar Guerra had few peers. Not only did he beat suspected idolaters and hechiceros, but he also soaked them with turpentine and set them on fire.

There is no question that the religious thesis of the Pueblo Revolt explains a great deal. But, like all historical explanations, it hinges on highlighting certain episodes and personalities while de-emphasizing others.  The causes of the rebellion: long-simmering religious animosities, famine, and illness made the mix even more volatile. But rising levels of exploitation, which can be documented in the archival record, belong at the core of this story. In the course of the seventeenth century, the silver economy expanded, and it was New Mexico’s misfortune to function as a reservoir of coerced labor and a source of cheap products for the silver mines. It did not take the bad behavior of too many Spanish governors, friars, and colonists—compelling Indians to carry salt, robbing their pelts, locking them up in textile sweatshops, and organizing raiding parties to procure Apache slaves—to bring about widespread animosity, resentment, and ultimately rebellion.

Native American Slavery

Native Americans were involved in the slaving enterprise from the beginning of European colonization. At first they offered captives to the newcomers and helped them develop new networks of enslavement, serving as guides, guards, intermediaries, and local providers. But with the passage of time, as Indians acquired European weapons and horses, they increased their power and came to control an ever larger share of the traffic in slaves.

The easternmost pueblos of Pecos and Taos befriended Apache bands that lived farther to the north and east, while the pueblos of Acoma and Jemez, in western New Mexico, developed alliances with groups of Navajos and Utes. Before the arrival of Europeans, such interactions had been common. In the period between 1450 and 1600, Pueblo Indians had enjoyed close trading relationships with outlying nomads. In spite of their strikingly different lifestyles, town dwellers and nomads complemented each other well. The Pueblos exchanged corn and ceramics with hunter-gatherers for bison meat and hides: carbohydrates for protein, and pottery for hides. The Spaniards’ arrival in 1598 severely reduced this trade. The Pueblos now had to surrender their agricultural surplus to encomenderos and missionaries and therefore retained few, if any, items to exchange. The archaeological record shows fewer bison bones and bison-related objects among the Pueblos during the seventeenth century. Additionally, the Spaniards launched raids against outlying hunter-gatherers, further disrupting Pueblo-Plains trading networks. With the Spanish exodus in 1680, the Pueblos had a chance to reestablish their old ties with the nomads. This trade appears to have been reinvigorated in a very short time.

After the Spanish retreat following the Pueblo Revolt of 1680, nomadic Indian traders with newfound access to horses began to muscle their way into the markets of New Mexico. In 1694, barely two years after the Spaniards had retaken control of the province, a group of Navajos arrived with the intention of selling Pawnee children. The Spanish authorities initially refused to acquire the young captives

But the spurned Navajos did not give up easily. To ratchet up the pressure, the traffickers proceeded to behead the captive children within the Spanish colonists’ sight.

Some years later, in 1704–1705, the Navajos, together with other nomads and Pueblo Indians, increased the pressure even more by threatening an all-out anti-Spanish revolt. Interestingly, it was around this time that New Mexican officials began sanctioning the ransoming of Indian captives sold by these groups. In effect, the Navajos, Utes, Comanches, and Apaches forced New Mexican authorities to break the law and accept their captives. Willingly or not, New Mexicans had become their market. By the middle of the eighteenth century, these commercial and diplomatic relations had become normalized. In 1752 Governor Tomás Vélez Cachupín reached peace agreements with the Comanches and Utes. Governor Cachupín understood quite well that the best way to achieve a lasting peace with these equestrian powers was by maintaining open trade relations with them and fostering mutual dependence. Thus New Mexico’s annual trading fairs became choreographed events in the service of diplomacy.

Many servants escaped, banded together, and mustered the courage to ask for recognition and even request land in outlying areas to start new communities.

Many of the signatories were Indians from the plains, including Pawnees, Jumanos, Apaches, and Kiowas. Friar Menchero visited a genízaro community south of Albuquerque in 1744. This crude community called Tomé consisted entirely of “nations that had been taken captive by the Comanche Apaches.” In the 1750s and 1760s, more genízaro settlements came into existence, an indication of the slaving prowess of the Comanches and other providers.

Indian captivity not only transformed New Mexico but also refashioned the Comanches and their principal victims. The quest for loot caused the Comanches to leave the tablelands and mountains of the Colorado Plateau and move to the plains. In the 1720s, merely one generation after having acquired horses, these mounted Indians abruptly shifted their base of operations to the east. They descended onto the immense grasslands, with their rolling hills and abundant herds of bison. But more than the bison, what initially attracted the Comanches to the plains were the isolated Apache villages.

The Apaches already practiced limited forms of agriculture, but the Pueblo refugees introduced new agricultural methods that enabled the Apaches to remain in place all year round. In the fifty-year period between 1675 and 1725—the blink of an eye in archaeological terms—dozens of Apache settlements sprouted up along the streams, lakes, and ponds of the large region between the Rocky Mountains and the 100th meridian, spanning much of modern-day Kansas and Nebraska.

In 1706 a group of Spanish soldiers visited one of these mixed communities of Apaches and Pueblos by the Arkansas River named El Cuartelejo. The residents lived in spacious adobe huts and cultivated small plots of corn, kidney beans, pumpkins, and watermelons, in addition to hunting bison.

In the end, the Comanches prevailed, employing captivity as a primary tool to remake the region. They raided Apache settlements, burning houses and fields, probably deliberately adopting a scorched-earth strategy to permanently dislodge their antagonists. To avoid complications, they generally killed the adult males on the spot, then seized the women and children.

The Comanches took many of their captives to New Mexico, where they exchanged them for horses and knives. In the absence of money or silver, women and children constituted a versatile medium of exchange accepted by Spaniards, Frenchmen, Englishmen, Pueblos, and many other Indian groups of the region.

Comanche males competed with one another by expanding their kinship networks. The Comanches practiced polygyny, so raids allowed men to acquire additional wives. Successful males could have three, four, five, or up to ten or more wives. Their “main instinct,” commented New Mexican governor Tomás Vélez Cachupín in 1750, “was to have an abundance of women, stealing them from other nations to increase their own.” This was not just about prestige, sexual gratification, and reproduction. In an equestrian society, women provided specialized labor.

For instance, a skilled male hunter could bring down several bison in just one hour. But once the exhilaration of the chase was over, hunters faced the daunting task of processing dead animals spread over great distances. Each carcass could weigh a ton or more. Flaying open a bison, cutting the choice meat from the back and around the ribs, removing the inner organs, cleaning the hide, and severing the legs and head required not just skill but above all untold amounts of labor. Captive women spent endless hours stooping over these large carcasses, withstanding the heat, stench, and exhaustion involved in preparing the hides for their many uses; looking after the horses; and doing the myriad chores of life in an encampment and on the move. Circumstances could vary, but enslaved women usually began at the bottom of the hierarchy of wives and were given the most taxing and unpleasant tasks. They were subordinate not only to their Comanche husbands but also to the “first wives,

Captive children faced different circumstances. Older boys, because they could not readily identify with their captors and had difficulty learning the language, were frequently excluded from the Comanche kinship system. These unlucky captives sometimes remained slaves for life. In contrast, younger captives were often adopted into a family and regarded as full-fledged members of it. Comanches showed a marked preference for boys over girls.

They were in high demand primarily because of the relative scarcity of males in Comanche society. Constant battles and raids took a heavy toll on the male population. Reportedly, relatively few Comanche warriors reached old age. The marked preference for boys may also have been a result of the growing number of horses the Comanches came to control. Breaking horses and looking after them became major occupations in Comanche society, and boys were deemed more appropriate for such tasks than girls. At the height of their power in the nineteenth century, the Comanches owned so many horses that each boy was responsible for a herd of as many as 150 animals,

Looking after the horses was the first task assigned to these young captives; it was a way of testing their loyalty to the group. The boys also had to recognize their captors as their parents, learn the ways of the society, and earn sufficient trust to receive more difficult assignments. In the fullness of time, they were allowed to take part in bison hunts and eventually were invited to accompany the warriors in raids against other Indians, including their former kinsmen.

Camps consisted of as few as ten people and seldom exceeded fifty. Atomization was a necessity, given the scarcity of food. These small bands moved from one campsite to another in carefully planned circuits to procure grasses, pine nuts, and other food resources that were available in different locales at different times of year.

The sparse conditions of the Great Basin limited the ability of the Paiutes to acquire horses. Horses consumed great amounts of grass, the very food on which the Paiutes depended for survival.

Thus the Paiutes ate horses instead of keeping them as beasts of burden. As a result, unlike other Numic speakers such as the Utes and Comanches, the Paiutes remained a horseless people, moving on foot in small groups, carrying simple tools, and eking out a living by digging roots and catching animals. Without giving a second thought to the environmental constraints to which the Paiutes were subjected, newcomers to the Great Basin simply assumed that the local Indians were exceedingly backward:

New England explorer Thomas J. Farnham remarked that many of the slaving victims were Paiute and Shoshone Indians living on the Sevier River of Utah—“poor creatures hunted in the spring of the year, when they are weak and helpless . . . and when taken [they are] fattened, carried to Santa Fé and sold as slaves during their minority.” Farnham noted that all ethnicities were already involved in this trade: “New Mexicans capture them for slaves; the neighboring Indians do the same; and even the bold and usually high-minded old [Anglo-American] beaver-hunter sometimes descends from his legitimate labor among the mountain streams, to this mean traffic.

In pre-contact North America, the diffusion of agriculture had given rise to an earlier cycle of enslavement. Indian societies that adopted agriculture experienced a sudden population increase and acquired both the means and the motivation to raid other peoples. The Aztecs, Mayas, Zapotecs, Caribs, Iroquois, and many others possessed captives and slaves, as is clear in archaeological, linguistic, and historical records. Nomadic groups also had slaves. But it is possible to find some nomads who were reluctant to accept even individuals who willingly offered themselves as slaves to save themselves from starvation. For some of these groups, taking slaves was simply not economically viable.

It is clear that the introduction of horses and firearms precipitated another cycle of enslavement in North America.  

The mission was Spain’s first frontier institution. In the early years of colonization, friars boldly ventured into unsettled areas, established contact with Indians, and acted as diplomats, spies, and agents of the crown.  Missions proved inadequate to secure the unsettled frontier. Working alone or in pairs, friars simply lacked the means to control territory or enforce a European-style regime. Missionaries depended on Native leaders to decide whether it was to their peoples’ advantage to live within a mission. In many instances, Indians found that life under the mission bell was too regimented for them and ultimately abandoned their missions. As the friars were powerless to retrieve absconding Indians, they had to rely on Spanish soldiers to help them carry out their work of religious instruction.

The Utes, Comanches, and Apaches, refused to allow missions into their territories. These nations wanted nothing to do with the meddlesome robed men bent on monogamous marriage, a sedentary way of life, and other strictures, and there was nothing the missionaries could do about it.

For the Indians, the presence of missions and presidios represented both opportunity and danger. Indians preferred to engage these outposts intermittently and on their own terms—perhaps to procure goods or food or even to gain temporary employment, but nothing more. However, their very existence made life risky for Natives living in the vicinity, as they increased the Indians’ vulnerability to European labor demands. This was especially true for small nomadic or seminomadic bands that had little else to offer but their labor. The alternatives were stark for them. They could either take to inaccessible areas beyond the pale of Spanish control or strike a bargain with the Devil, so to speak, by joining a mission or presidio while negotiating the best possible arrangement.

The Seris did receive the Jesuit missionaries peacefully, but one important reason was that the padres gave away food liberally. As one missionary noted, it was necessary to win over the Seris “by their mouths.

Out of a total population of around three thousand in the early eighteenth century, perhaps ten to twenty percent chose to settle down. The majority pursued the opposite strategy, avoiding contact with Europeans and retreating deep into their environmental refuge. Tiburón, the largest island in Mexico, lies only about a mile and a half from the continent and is clearly visible from much of the central coast of Sonora. But to get to this island, one has to cross the treacherous Strait of Infiernillo. The Spaniards needed good boats to negotiate the strait’s strong currents, but the desert coast of Sonora had no trees and therefore no wood for boats. The closest sources of wood would have been the Sierra de Bacoachi or Cerro Prieto. But hauling logs for even a medium-size vessel would have been a formidable task. The Seris were well aware of the Spaniards’ difficulties in getting to Tiburón and to the even more remote island of San Esteban, and thus headed there to escape their control.

Negotiating between these two worlds, many Seris chose to straddle them. They would stay in the missions for some time, performing the arduous work of the agriculturalist/stockman, but also frequently flee. Sometimes they would plunder a neighboring mission or nearby ranch, then abscond to the islands. Seri bands also would raid one another’s settlements, “hunt” mission cattle as if they were deer, and plunder corn as if it were a wild plant. Ancient animosities, multigenerational vendettas, and rivalries—exacerbated by the emergence of agricultural/ ranching oases in the middle of the desert—motivated some of these attacks. They also discovered that they could extend their traditional hunting and gathering activities with the resources recently introduced by Europeans.

The padres may have thought that they were “civilizing” the Seris, but the opposite was equally plausible: the Seris had incorporated the missions into their way of life, as they continued to move, hunt, and gather.

The ineffectiveness of the missions eventually prompted Spanish planners to attempt a more forceful approach. As the eighteenth century unfolded, military garrisons and soldiers superseded the missions as the lynchpins of Spain’s efforts to stabilize the frontier.

With the new approach came new forms of coercion. The word “presidio” captures the dual purpose of garrison and prison. Presidial soldiers were professionals who drew a salary from the crown, but they were underpaid. Thus garrison commanders and soldiers supplemented their earnings by catching Indians and selling them to the Spanish colonists or by turning presidios into supply centers based on coerced labor.

The Natives, once inside the presidio, were compelled to work from dawn to dusk. Twenty-two Indians labored in shackles, while the remaining sixty-six did not wear chains but were constantly monitored. Since many of the prisoners were married, their wives and children also lived at the garrison. They made tortillas, ground pinole (a course flour made of corn and seeds), and fetched water in return for food and clothes. Discipline was extreme. Minor infractions such as being late for work could result in forty or fifty lashes. Some guards were sadistic, beating Indians to unconsciousness, burning their armpits with hot wax, and hanging them from their feet with their heads dangling over a fire. Three Indians accused of being hechiceros at the pueblo of Onavas died after suffering horrifying head burns as presidial soldiers attempted to extract their confessions.

Several inmates had been accused of being hechiceros, or sorcerers, and had been sent to the presidio by express orders of the missionaries.

The presidio’s commanders had used the inmates’ labor for private gain. Pitic had been established right next to a large hacienda that belonged to the governor of Sonora and Sinaloa, Agustín de Vildósola. Since the beginning, most of the inmates had been sent to work on his property, building a dam, digging an irrigation ditch, installing fences, and tending the cornfields and wheat fields. Other prisoners had been hard at work carding, spinning, making cloth on looms, and fermenting mescal from the agave plant. Yet others had toiled in the nearby mines.

Rodríguez Gallardo’s solution was to deport all Seri Indians to a place from which they would never return. Rodríguez Gallardo believed that it was possible to remove the entire Seri nation of around three thousand people. All male and female Seris over the age of eight would be sent away, preferably by sea, because “once secured in a boat they will only be able to seek their freedom in their own shipwreck and ruin and without seeing the lay of our continent they would not understand how to return.” Given that the textile sweatshops of central Mexico had not been able to keep the Seris from returning home, Spanish officials decided to ship the Indian prisoners to the “ultramarine islands,” a vague formulation that probably meant the Caribbean islands and quite possibly the Philippines,

The only Seris who would not be shipped away—children younger than eight—would be marched to the Apache frontier to be used as reinforcements.

Spanish colonists and their Opata allies had been clinging precariously to their communities in the face of Apache raids in places such as the Valley of Bacanuchi, Terrenate, and San Francisco Xavier de Cuchuta along the headwaters of the San Pedro River. The Seri children would add to their numbers. The governor of Sonora predicted that “the Spaniards or people of reason among whom they intend to place the Seri children will not only agree to it but wish for the children to help them contain the enemy Apaches.

Adult Seris were led away in ropes and chains, not quite to the Caribbean islands, as originally proposed, but to Guatemala. Even then, some of the men returned.

The extirpation strategy ultimately failed, however. Many Seris remained in their homeland and had even more reason to rebel. Three years after the expedition to Tiburón, a Seri leader named Chepillo had a frank conversation with a missionary. When the Spanish friar urged the Indian leader to surrender, Chepillo replied, “I know that if we continue fighting we are damning ourselves, but there is no other way. We are accustomed to living with women. We do not know where our wives are, whether they are living or dead. You would not marry us to others, and if we take others, you will order us whipped.” Chepillo’s reasoning was unassailable. The Seri mission program, which had lasted for more than seventy years, had given way to extirpation and enslavement.

To prevent disruptions and to keep the silver flowing, Spanish officials subjected the Apaches to some of the same policies tested earlier on the Seris. According to the estimates of historian Paul Conrad, between 1770 and 1816 some three to five thousand Apaches and other Indians from the north were led away in chains, bound for central and southern Mexico. The most dangerous were shipped to Cuba.

Soldiers had an incentive to give the prisoners as little food as possible, in order to profit from the budget set aside for food. They also forced the Indians to walk for hours on end in order to wear them down and prevent any escape attempts. Terrible abuse arose from the fact that the majority of the prisoners were women and children, at the mercy of male soldiers.

These drives moved people living in regions of low demographic density to major urban agglomerations such as Mexico City and Veracruz, which were rife with disease. But it is remarkable that even in the midst of this outbreak, the colleras continued: one in 1780 and three more in 1781. These Indian drives, moving dozens of susceptible indigenous hosts and requiring soldiers to move back and forth between central and northern Mexico, would have been excellent carriers of the disease.

By the early 19th century, Indian slavery had nearly disappeared on the east coast of North America.

During the 18th and 19th centuries, however, the traffic of Natives was replaced almost completely by that of African slaves. Only a few vestiges of the old trade networks remained, notably in Florida.

 The Seminoles took Africans as slaves.

Captives like Abelino sometimes tried to escape while they were still close to their home communities and in relative proximity to other Mexican towns. That is why Indians often bound captives with ropes before going to sleep or even while riding. After crossing the Rio Grande and especially after having reached the Comanchería, such precautions became unnecessary. Lacking horses, weapons, and provisions, it was extremely risky for captives to set out on their own in the immense southern plains.

Captive Fernando González singled out the Yamparicas (a band of Comanches), the Kiowas (a group closely allied with the Comanches), some Apache bands (Lipanes, Mescaleros, and Gileños), and the Sarigtecas (or Sarituhkas, a generic term for Plains Indians used by the Comanches) as the principal captive takers in northern Mexico. These bands often traded their prisoners away, but they also retained many captives who were incorporated into their respective bands and came to comprise significant proportions of their overall populations.

“At least one fourth of the whole number have more or less of captive blood . . . chiefly Mexicans and Mexican Indians, with Indians of other tribes, and several whites taken from Texas when children.” In a census of Comanche families conducted in Oklahoma Territory in 1902, fully forty-five percent turned out to be of Mexican descent.

From published and unpublished sources, Rivaya-Martínez has identified 470 captives taken by Comanches from the 1820s to the 1860s. It is impossible to know how many cases went unrecorded. From this sample, however, it is clear that most of the victims were Hispanics (75%), followed far behind by other Indians (14%) and Anglo-Americans (10%).  Proportionally, the Comanches took few Anglo-American captives, and the ones they did take were often ransomed and released as soon as practicable.

Humble Mexicans, whose lives were changed in an instant when they were captured, and who frequently remained with the Natives for several years, if not forever. Lacking the necessary means and connections, the families of these captives were unable to ransom their children and wives and were otherwise powerless to demand their return.

One-fourth of all Kiowa Indians and nearly half of all Comanches were of Mexican descent, and many of them surely participated in raids against fellow Mexicans.

After independence, Mexico extended citizenship rights to all Indians residing there and abolished slavery. In the absence of slavery, the only way for Mexicans to bind workers to their properties and businesses was by extending credit to them. As a result, debt peonage proliferated throughout Mexico (and in the American Southwest after slavery was abolished there in the 1860s) and emerged as the principal mechanism of the other slavery.

The Indian did not know the amount he still owed or how much money he and his family had earned during their twelve years of forced servitude. But he was certain that peonage was worse than slavery because unlike the Africans with whom he toiled, he was not allowed to wander the streets freely even on Sundays. Over the centuries, debt peonage spread.

States throughout the country enacted servitude and vagrancy laws. The state of Yucatán, for example, regulated the movement of servants through a certificate system. No servant could abandon his master without having fulfilled the terms of his contract and could not be hired by another employer without first presenting a certificate showing that he owed “absolutely nothing” to his previous employer.

In Chiapas the state legislature introduced a servitude code in 1827 allowing owners to retain their workers by force if necessary until they had fulfilled the terms of their contracts.

Peonage in neighboring Nuevo León may have been just as common and was especially galling because it was customary to transfer debts from fathers to sons, thus perpetuating a system of inherited bondage. In these ways, servitude for the liquidation of debts spread all over Mexico.

“We do not consider that we own our laborers; we consider they are in debt to us,” the president of the Agricultural Chamber of Yucatán told Turner. “And we do not consider that we buy and sell them; we consider that we transfer the debt, and the man goes with the debt.

One year ago the price of each man was $1,000.” Obviously, the reason the going rate was uniform was not that all peons were equally in debt, but that there was a market for them irrespective of their debt. “We don’t keep much account of the debt,” clarified one planter, “because it doesn’t matter after you’ve got possession of the man.” After paying the price, Turner was told, he would get the worker along with a photograph and identification papers.

Turner asked candidly about how to treat his workers. “It is necessary to whip them—oh, yes, very necessary,” opined Felipe G. Canton, secretary of the Agricultural Chamber, “for there is no other way to make them do what you wish.

“Peons, you are aware, is but another name for slaves as that term is understood in our Southern States,” he explained in a letter to the commissioner of Indian affairs, adding that the main difference was that the peonage system was not confined to a particular “race of the human family,” but applied to “all colors and tongues.

Indians purchased other Indians, and Mexicans bought other Mexicans, and yet no one seemed to have the slightest objection to being purchasers of their own “kith and kin.

Foreign visitors who ventured out of Don Guadalupe’s home and onto his nearby Rancho Petaluma were able to gain a great deal more insight. At its peak in the early 1840s, this 66,000-acre ranch was tended by seven hundred workers. An entire encampment of Indians, “badly clothed” and “pretty nearly in a state of nature,” lived in and around the property and did all the work.

Faced with dwindling resources and loss of land, former mission Indians had little choice but to put themselves under the protection of overlords like the Vallejos.

Especially after the secularization of the missions in 1833, Mexican ranchers sent out armed expeditions to seize Indians practically every year—and as many as six times in 1837, four in 1838, and four in 1839.

Mexican ranchers pioneered the other slavery in California, but American colonists readily adapted to it. They acquired properties of their own and faced the age-old problem of finding laborers. Their options were limited.

Although the indigenous population of Alta California had been cut by half during the Spanish and Mexican periods—roughly from 300,000 to 150,000—Indians still comprised the most abundant pool of laborers. Short of working the land themselves, white owners had to rely on them.

A Massachusetts doctor named John Marsh offered clearer guidance on how to treat Indian workers: “Nothing more is necessary for their complete subjugation but kindness in the beginning, and a little well timed severity when manifestly deserved.” And even when the latter method became a necessity, Dr. Marsh reassured his readers, the California Indians “submit to flagellation with more humility than the negroes.

Chico’s Bidwell

On one end of the spectrum were the decidedly paternalistic patrones (landowners), such as John Bidwell.  Bidwell regarded Indians as children of nature—credulous, superstitious, and gullible—and sometimes resorted to manipulation. To intimidate them, he carried the paw of a very large grizzly bear and showed it to them, knowing that they viewed grizzlies as especially powerful, and even evil, spirits.

Bidwell regarded Indians as children of nature—credulous, superstitious, and gullible—and sometimes resorted to manipulation.

Bidwell’s need for Indian workers became critical during the gold rush years. He was among the lucky few who struck gold and was able to establish a productive gold-mining camp on the Feather River. During the frantic mining seasons of 1848 and 1849, he and his partners managed to recruit between twenty and fifty Natives from the Butte County area. Bidwell paid his workers with food and clothing rather than cash, but to his credit, he did not use debt or coercion to get his way.

When he served as alcalde at the mission of San Luis Rey a few years earlier, he specifically refused to return fugitive workers to their Mexican masters because of unpaid debts.

Bidwell’s peculiar blend of pragmatism and paternalism was perhaps best expressed at Rancho del Arroyo Chico, a 22,000-acre property east of the Sacramento River and north of Chico Creek (encompassing what is now the town of Chico) that he had acquired with his mining wealth. When he first moved onto the ranch in 1849, there were no Indians on the premises. Therefore his first goal was to convince the Mechoopda Indians living immediately to the south to come to his ranch.

Bidwell gave them work and asked them to stay. He offered the ranch as a refuge where they could hunt, fish, gather acorns, conduct communal grasshopper drives, and generally maintain their way of life and culture at a time of rapid change throughout California. A couple of hundred Mechoopdas resettled in a new ranchería barely one hundred yards from Bidwell’s residence. One visitor commented that Bidwell had found these Indians “as wild as a deer and wholly unclad,” but through his protection and employment, they had built “happy homes with their own gardens, fruit trees, and flowers.

Sutter’s enslavement

Such experiences paved Sutter’s way into the slaving business. But what really pushed him into that traffic was the need to punish hostile Indians and the realization that this could be done in an economically advantageous manner. Sutter’s presence by the Sacramento River had polarized the indigenous inhabitants. Some Miwoks and Nisenans were his allies and laborers—however reluctantly—but many others refused to submit and attempted to steal from Sutter and even murder him. In 1844–1845, when Sutter’s political influence was on the wane and huge payments to the Russians were due, he opted to use an iron fist on the Natives. “I see now how it is,” Sutter wrote to his most trusted agent, who was in the process of developing a new farm; “if they are not Keept strickly under fear, it will be no good.” Sutter’s personal army came alive in those years, persuading unreliable laborers, breaking up bands of hostile Indians, and punishing cattle rustlers. All of these activities became potential sources of slaves. Unguarded private letters reveal the deliberate way in which Sutter approached this line of business. “I shall send you some young Indians,” Sutter wrote to his neighbor and creditor Antonio Suñol in May 1845, “after our campaign against horse-thieves, which will take place after the wheat harvest.

Vallejo

Finally they reached the Kam-dot Indians, who organized a great council in a temescal, or sweathouse, to which Vallejo was invited. The Indians began gathering in the conical structure, about the size of a circus ring, by the lake. The building was completely enclosed except for a small hole at the top to let out the smoke. The only way in or out was through a narrow tunnel that could be used by only one person at a time. The participants set a fire in the middle of the structure, and once they were sweating profusely, they would escape through the tunnel to plunge into the lake. According to Vallejo’s own version of events, he believed that the sweathouse invitation was a ruse. So with half the Indian men inside, naked and unarmed, he and his men set the building on fire while blocking the tunnel. Then the rest of the men and “the squaws and children were made prisoners and driven down into Napa Valley and there compelled to go to work”—a prize of three hundred Indians, young and old, male and female. The American takeover of California forced the Vallejos to consolidate their holdings.

Americans who stayed with Kelsey and Stone reported that their hosts flogged Indians for entertainment and even shot random Natives just for the fun of seeing them jump. Thomas Knight, an American who settled in the Napa Valley in 1845, said that one of the preferred methods of punishment was to hang Indians by their thumbs in the adobe house for two or three days, allowing their toes to just touch the floor. Kelsey and Stone also raped young Indian women. Indeed, according to another white Napa Valley resident, one of their motivations for relocating to remote Clear Lake was to gain the freedom to satisfy “their unbridled lusts among the youthful females.

One morning in December 1849, the Indians charged the adobe house, killing Kelsey and Stone with arrows and striking their heads with rocks.

Although the two American partners may have been unusually (even pathologically) cruel, they were able to enslave these Indians because such activities were common throughout the region and there was a thriving market for Indian slaves. Indeed, their deaths did not stop the trafficking of Clear Lake Indians. The trade resumed in 1850.

***

The next step in the process of formalizing the peonage system was to give teeth to Montgomery’s proclamation, which is exactly what Henry W. Halleck, secretary of state of California, did by introducing a certificate and pass system in 1847. All employers were required to issue certificates of employment to their indigenous workers. If these workers had to travel for any reason, such as to visit friends or relatives or to trade, they also had to secure a pass from the local authorities. These certificates and passes allowed employers and local officials to monitor and control the movements of Indians.

“Any Indian found beyond the limits of the town or rancho in which he may be employed without such certificate or pass,” Halleck ordered, “will be liable to arrest as a horse thief, and if, on being brought before a civil Magistrate, he fail to give a satisfactory account of himself, he will be subjected to trial and punishment.” This system accomplished a number of goals. It allowed ranchers to hold Indians in place, as the certificates typically listed the “advanced wages” that had to be repaid before the certificate bearer would be free to go. This was the very cornerstone of the peonage system. The certificate and pass system also sought to minimize conflict among employers. Understandably, Indians often fled from ranches and mines and took up work with other employers.

With these documents, prospective employers could determine at a glance if an Indian seeking employment had any outstanding debts. And finally, the pass system went beyond previous ordinances in distinguishing between Natives gainfully employed and all others—regardless of where they lived—who were automatically considered vagrants or horse thieves and therefore subject to the labor draft.

The Indian Act of 1850 was like a piñata with something for everyone who wished to exploit the Natives of California. For instance, section 20 stipulated that any Indian who was able to work and support himself in some honest calling but was found “loitering and strolling about, or frequenting public places where liquors are sold, begging, or leading an immoral or profligate course of life” could be arrested on the complaint of “any resident citizen” of the county and brought before any justice of the peace. If the accused Indian was deemed a vagrant, the justice of the peace was required “to hire out such vagrant within twenty-four hours to the best bidder . . . for any term not exceeding four months.” In short, any citizen could obtain Indian servants through convict leasing.

Another section established the “apprenticeship” of Indian minors. Any white person who wished to employ an Indian child could present himself before a justice of the peace accompanied by the “parents or friends” of the minor in question, and after showing that this was a voluntary transaction, the petitioner would get custody of the child and control “the earnings of such minor until he or she obtained the age of majority” (fifteen for girls and eighteen for boys).

The apprenticeship provision worked in tandem with yet another section of the Indian Act of 1850 that gave justices of the peace jurisdiction in all cases of complaints related to Indians, “without the ability of Indians to appeal at all.” And “in no case [could] a white man be convicted of any offense upon the testimony of an Indian, or Indians.” Understandably, these provisions gave considerable latitude to traffickers of Indian children. In northern California, this trade flourished, especially in the mid-1850s, and became so important that some newspapers began writing about the inhumanity of it. In 1857 the newspapers launched what one witness described as “an agitation against the California slave trade

Carson forwarded an extraordinary request to Carleton: It is expected by the Utes, and has, I believe, been customary to allow them to keep the women and children and the property captured by them for their own use and benefit, and as there is no way to sufficiently recompense these Indians for their invaluable services, and as a means of insuring their continued zeal and activity; I ask it as a favor that they be permitted to retain all that they may capture. Carson made this request as a concerned commander who wished to retain his Indian scouts.

The end of native American slavery

The impetus did not originate in abolitionist groups. Instead it came from that much-maligned institution, the United States Congress.  Although the intended beneficiaries of the 13th amendment were African slaves, the term “involuntary servitude” opened the possibility of applying it to Indian captives, Mexican peons, Chinese coolies, or even whites caught in coercive labor arrangements.

Posted in Human Nature, Slavery | Tagged , , | 5 Comments