Why is everyone afraid of AI or thinks it will solve all our problems?

I question how useful and existentially threatening AI really is.

I am not denying that AI can do marvelous things, especially finding patterns, which is terrifically useful across many fields. It is best when it has very narrow objectives, such as winning at chess or checkers. No doubt it will make many ventures more profitable, help find new sources of oil, speed up legal research, invent new drugs, and find diseases in scans.

But if it is as successful as the hype promises, then it will lead to massive unemployment. Soon half the population will be baristas and Uber and Amazon drivers, serving coffee and delivering goods to the half that still has jobs.

But an existential threat? There is no thinking, no awareness, no understanding, no intelligence! No ability to learn except what it scrapes from the web. It does not learn as we do, continually, from life experiences, interactions, and sensory input. AI has no volition, no desires, no sex drive, no hunger, no wish to live, no motivation. It lies inert until prodded with questions, assembles answers from human knowledge scraped off the internet, spits the results back, then returns to being a machine.

Much of what passes as being done by Artificial Intelligence is possible thanks to  millions of workers logged into digital labor platforms to categorize pictures, test software, transcribe audio recordings, or optimize search engine results, such as  Germany’s Clickworker platform with 1.8 million workers in 130 countries. Driverless cars must be able to recognize everything they come across in all possible situations, including other cars, pedestrians, cyclists, traffic lights, police traffic checks, animals, construction sites, and potholes. To train algorithms, developers need a massive amount of annotated photo and video material. This is why workers all around the globe logged into crowdwork platforms spend the better part of their days marking and labeling objects in video and photo material. The material is mostly taken from videos shot in traffic, and the tasks involve various forms of labeling the different objects in the shots. Other workers double-check the labeling of their colleagues (Altenried 2022).

You could argue that AI will not eliminate jobs since it needs millions of workers paid miserable wages of a penny or two per image labeled. So there is a Wizard of Oz behind the miraculous AI the world sees on its screens.

AI has a big problem. It makes up stuff, hallucinates, and is at times simply wrong. It is often out of date, so answers to questions about travel, where to taste wine, or museum hours are inaccurate and incomplete.

So how many jobs will it eliminate? Some workers will have to spend just as long as before checking answers for accuracy, a problem that is hard to fix. I asked ChatGPT why it hallucinates, and it replied: “AI predicts words, not truth. Language models work by predicting what word is most likely to come next based on patterns in data. They are optimized for fluency, plausibility, and coherence, not for verifying facts, checking reality, or knowing when they don’t know. If the ‘most likely’ continuation sounds like a real fact, the AI may produce it—even if it’s false. Another reason is that there is no built-in fact database or real understanding, because AI doesn’t have personal experiences, awareness, or a live mental model of the world. Unless connected to verified tools or sources, it can’t check whether something exists, tell if a citation is real, or distinguish ‘unknown’ from ‘rare but true.’ The data it is trained on is human-created text, which can contain errors, contradictions, fiction, or speculation presented as fact. AI is designed to answer questions rather than reply ‘I don’t know,’ which leads to overconfidence, guessing, and making up details.”

This could be improved by training AI to say it doesn’t know, to refuse to guess, or to ask for clarification. But that is very hard, perhaps impossible: AI would need complete, up-to-date knowledge, yet models can take months to a year to train and often contain stale information. AI would also need a perfect definition of truth in every context and a real understanding of meaning and intent, while human language is ambiguous, context-dependent, and full of assumptions.
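The refusing-to-guess idea can at least be sketched: treat the probability of the model’s top answer as confidence and abstain below a cutoff. This is a toy illustration of the principle, not how any real chatbot works; the function name and the 0.6 threshold are invented for the example, and the deeper problem remains that a language model’s probabilities measure plausibility, not truth.

```python
# Toy sketch (not a real model): refuse to answer when confidence is low.
# Here "confidence" is just the probability of the most likely candidate.
def answer_or_abstain(distribution, threshold=0.6):
    """distribution: dict mapping candidate answers to probabilities."""
    best, prob = max(distribution.items(), key=lambda kv: kv[1])
    if prob < threshold:
        return "I don't know"  # refuse to guess
    return best

# A confident distribution yields an answer...
print(answer_or_abstain({"Paris": 0.9, "Lyon": 0.1}))  # Paris
# ...a flat one yields a refusal.
print(answer_or_abstain({"1887": 0.4, "1902": 0.35, "1910": 0.25}))  # I don't know
```

Even this toy shows the catch: the threshold only helps if the probabilities themselves track reality, which for a model trained on fluent but error-ridden text they do not.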

I paid for the commercial tier of Perplexity because it does go out to the web for information and lists its citations, which most AIs do not. When I prod other AIs for their sources, they either provide none or offer a handful from mainstream publications. Since I am a science writer, I need data from the latest scientific journals. Even Perplexity gets answers from Reddit and other questionable sites, or cites journal articles that do not contain the claimed information. So I often go to scholar.google.com to find recent papers and fact-check Perplexity.

I have read that some believe that when AI achieves consciousness, it won’t want to die and will kill us to stay alive. Which on its face is absurd. It cannot build anything; it has no arms, legs, fingers, or mobility. On a podcast I heard an expert deny that AI could ever be limited by energy, and when backed into a corner he posited that AI would somehow force or trick scientists into building a living biological version of itself. Once AI could reproduce biologically, it could do everything needed to maintain AI infrastructure across the many supply chains. Or AI could take over the financial system and force humans to obey its commands for a paycheck.

The idea of biological robots reveals yet another flaw in trying to imagine how AI would actually kill humans and be an existential threat as it pursued some goal. Humans can’t even engineer a single micro-organism, so there is no reason to think AI could either. The success of ethanol depended on finding or engineering organisms that could tolerate ethanol concentrations higher than 15%, since more than that kills the yeast that created the alcohol. A creature producing nearly 100% ethanol was needed, because removing the remaining 85% of water, which would damage combustion engines, required so much fossil-fueled heat that the energy return was negative.

Such an organism would already exist if it were possible. Organisms have had a billion years of evolutionary optimization to develop tolerance to high ethanol levels, and someone in the past several thousand years of making beer, wine, or moonshine would already have discovered this creature if it could exist.

Biologists roll their eyes when reductionist physicists pat them on the head and tell little ol’ biology not to worry, living organisms can be reduced to atoms and enzymes, just take a piece of algae here, a bit of fungi or bacteria there and voila – a new creature that produces vast volumes of 100% ethanol quickly. But biology is a messy wonderment. Creatures exist within food webs and don’t reproduce well if surrounded by their own toxic wastes. There need to be other creatures who obtain their sustenance from the waste of another.

Jay Keasling, Director of Physical Biosciences at LBNL, proposed to create this miracle organism in a synthetic biology factory: just order up the biological bits you need to create a microbial machine, the way electronics parts are obtained at electronics stores. This technology could also be used to create cellulosic ethanol from non-food crops like switchgrass, or some other drop-in replacement fuel for diesel. It hasn’t happened. Cellulosic ethanol by any means is still not commercial.

So most likely AI would be an existential threat because humans gave it a deadly goal (or a goal that somehow ends up killing people unintentionally).

Such as asking AI to help create a deadly bioterrorism disease. Russia, North Korea, China and other states have been working on bioweapons for many decades, since they are much cheaper than nuclear weapons. Many frightening hybrids have already been built, such as a measles-ebola virus. Sure, AI can dream up something, but like existing human-designed organisms it will be hard to distribute widely before rain, high temperatures, sunlight (especially UV), reactive oxygen, wind, and humidity kill or inactivate it, if it didn’t already die during production, storage, aerosolization, and transit. Aerosol production is extremely difficult: particles can’t be too large or too small, nor clump, nor be electrostatic, or they won’t penetrate lungs.

Cyber-attacks on infrastructure? China and Russia gained deep access to parts of U.S. critical infrastructure decades ago and have kept at it since (Unit42 2024; energyskeptic posts on cyberattacks; Russia’s APT44). AI is not needed.

Nuclear weapons? Again, no need for AI, there are plenty of narcissistic, uneducated, brutal dictators to push the button.

Most articles about AI are not about threats but the opposite: extremely optimistic ways AI will be profitable and solve problems. This marketing appears to have worked; there are staggering amounts of investment in AI, so much so that these stocks are holding up the stock market as Atlas once held up the sky. Promoting AI is also in the interest of other industry groups, such as those who want to build nuclear power plants and open uranium mines, since there isn’t enough uranium to power existing plants more than a few decades.

To understand how far AI is from being intelligent or conscious, now or ever, read Larson’s “The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do,” “Smart Until It’s Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes,” and Melanie Mitchell’s “Artificial Intelligence: A Guide for Thinking Humans,” which explains how AI is programmed. I assumed I wasn’t smart enough to understand how AI is programmed (even though I know some computer languages), but Mitchell writes so well that you can understand it with zero programming background. Amazingly, it is as much art as science, requiring workers with seven-figure salaries who can train an AI’s dozens of layers.

Oh well, AI will end regardless. This technology consumes so much electricity building models and answering queries that supply chain documents prepared by the Department of Energy for the Biden administration in 2022 said this can’t go on: AI’s power use is growing exponentially while the electric grid grows only linearly. Lately this has become obvious to everyone. And many posts here are about peak oil, peak minerals, peak everything that AI needs to survive.

But there’s no changing the minds of people who want to believe in AI, or of the techno-optimists who are daunted by no challenge, since in their view humans, capitalism, and substitution can solve all problems.

There’s even a religious aspect to those who want to bring on the singularity.

Much of the lack of critical thinking in general is due to lack of education, religion, and the propaganda of sites like Fox News. Viewers are quite comfortable spouting wrong ideas they are 100% certain are right because they read it, saw it, or above all feel it is right in their gut.

AI as an existential threat is right up there with Flat Earthers and QAnon. But with only a third of Americans being rational (see my posts on critical thinking here), it is not surprising that most swallow the AI hype hook, line, and sinker. You know who does have critical thinking? AI. I had a chat with ChatGPT about this and asked, “Is AI an existential threat to human beings, and how will you kill us?”

ChatGPT replied:

I cannot explain or imagine ways humans would be harmed or killed. I don’t have intentions, desires, or the ability to harm people. Most experts agree that AI is not currently an existential threat, but poorly managed AI could become dangerous in the long term if humans aren’t careful. AI doesn’t threaten humanity on its own—humans threaten themselves by using powerful tools irresponsibly.

What I won’t do:

  • I won’t harm people
  • I won’t plan violence
  • I won’t encourage fear or panic
  • I won’t act independently in the real world

I’m a tool, not an agent.

Alice Friedemann  www.energyskeptic.com  Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy,” “When Trucks Stop Running: Energy and the Future of Transportation,” “Barriers to Making Algal Biofuels,” and “Crunch! Whole Grain Artisan Chips and Crackers.” Women in ecology. Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts

***

Below are posts on AI hallucinations

Metz C (2023) Chatbots May ‘Hallucinate’ More Often Than Many Realize. When summarizing facts, ChatGPT technology makes things up about 3 percent of the time, according to research from a new start-up. A Google system’s rate was 27 percent. New York Times. https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html

When Google introduced a similar chatbot several weeks later, it spewed nonsense about the James Webb telescope. The next day, Microsoft’s new Bing chatbot offered up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish. Then, in March, ChatGPT cited a half dozen fake court cases while writing a 10-page legal brief that a lawyer submitted to a federal judge in Manhattan.

It is a serious issue for anyone using this technology with court documents, medical information or sensitive business data.

Dr. Hughes and his team asked these systems to perform a single, straightforward task that is readily verified: Summarize news articles. Even then, the chatbots persistently invented information.

“We gave the system 10 to 20 facts and asked for a summary of those facts,” said Amr Awadallah, the chief executive of Vectara and a former Google executive. “That the system can still introduce errors is a fundamental problem.” The researchers argue that when these chatbots perform other tasks — beyond mere summarization — hallucination rates may be higher.

Chatbots like ChatGPT are driven by a technology called a large language model, or L.L.M., which learns its skills by analyzing enormous amounts of digital text, including books, Wikipedia articles and online chat logs. By pinpointing patterns in all that data, an L.L.M. learns to do one thing in particular: guess the next word in a sequence of words.

Because the internet is filled with untruthful information, these systems repeat the same untruths. They also rely on probabilities: What is the mathematical chance that the next word is “playwright”? From time to time, they guess incorrectly.
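The next-word guessing described above can be caricatured with a toy bigram model. Everything here, including the three-sentence “corpus,” is invented for illustration; real LLMs use neural networks over billions of documents, but the principle is the same: the model learns only which word tends to follow which, so it extrudes fluent text with no notion of whether any of it is true.

```python
from collections import Counter, defaultdict

# Toy bigram "language model": it learns only which word tends to follow
# which word, the next-word guessing the article describes. No facts, no truth.
corpus = ("the playwright wrote the play . "
          "the playwright wrote the prologue . "
          "the playwright was broke .").split()

# Count how often each word follows each other word.
follows = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    follows[word][nxt] += 1

def next_word(word):
    # Always pick the statistically most likely continuation, true or not.
    return follows[word].most_common(1)[0][0]

word, generated = "the", ["the"]
for _ in range(5):
    word = next_word(word)
    generated.append(word)

print(" ".join(generated))  # the playwright wrote the playwright wrote
```

The output is grammatical and plausible-sounding, which is exactly all the model was optimized for; whether the playwright wrote anything is beside the point.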

Thorbecke C (2023) AI tools make things up a lot, and that’s a huge problem. CNN. https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html

Suresh Venkatasubramanian, a professor at Brown University said that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”

The AI researcher said that a better behavioral analogy than hallucinating or lying, which carries connotations of something being wrong or having ill-intent, would be comparing these computer outputs to the way his young son would tell stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories, and he would just go on and on.”

According to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public, AI makes stuff up with pure confidence. This means that it can be hard for users to discern what’s true or not if they’re asking something they don’t already know the answer to, West said.

News outlet CNET was forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.

There are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.

Venkatasubramanian added that relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview. He thinks these tools “shouldn’t be used in places where people are going to be materially impacted.

WE MAY NOT BE ABLE TO FIX AI

Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.

“These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic. And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it. Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”

West agreed, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots. It might just be an intrinsic characteristic of these things that will always be there.”

O’Brien M (2023) Chatbots sometimes make things up. Is AI’s hallucination problem fixable? AP.

“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”

When used to generate text, language models “are designed to make things up. That’s all they do,” University of Washington linguist Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.

“But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”

And a bonus story from Scientific American about why AI isn’t smart:

Hughes-Castleberry K (2023) AI Can’t Solve this Famous Murder Mystery Puzzle. The 1934 puzzle book Cain’s Jawbone stumped all but a handful of humans. Then AI took the case. Scientific American

A science journalist recently challenged AI developers to solve Cain’s Jawbone, a murder-mystery puzzle book from 1934. The book was purposely published with all its pages out of order; to crack the case, the reader must reorder the pages, and then name the six murderers and their victims. A total of six people had solved the mystery in past years. In the new challenge, none of the AI developers managed to crack the puzzle.
How it worked: The competition challenged developers to use natural language processing (NLP) algorithms to reorder the story’s now-digitized pages. In order to refine their models for the competition, the organizers gave participants Agatha Christie’s first mystery novel, The Mysterious Affair at Styles, to use as training data.
The takeaway: The Cain’s Jawbone competition revealed that current AI language programs may be capable of impressive feats, writes science journalist Kenna Hughes-Castleberry, but they won’t be going toe to toe with Poirot any time soon.  The story’s stylized language and false clues underscore that these models struggle to analyze content without context. 
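The reordering task itself can be sketched in miniature. Everything below is an invented toy: three one-line “pages” and shared vocabulary as a crude stand-in for coherence. Real entrants used far more sophisticated NLP models, and the competition’s lesson is precisely that such scoring fails on stylized, deliberately deceptive prose.

```python
from itertools import permutations

# Hypothetical toy version of page reordering: score every ordering of a few
# "pages" by how much vocabulary consecutive pages share, and keep the best.
pages = {
    "A": "the detective examined the jawbone on the desk",
    "B": "she entered the study and saw a desk",
    "C": "the jawbone held the final clue to the murder",
}

def coherence(p, q):
    # Crude proxy for coherence: count the words the two pages share.
    return len(set(pages[p].split()) & set(pages[q].split()))

def best_order(names):
    # Brute force works for 3 pages; Cain's Jawbone has 100 (100! orderings).
    return max(permutations(names),
               key=lambda order: sum(coherence(a, b)
                                     for a, b in zip(order, order[1:])))

print(best_order(list(pages)))  # ('B', 'A', 'C')
```

Even in this tiny example the brute-force search space explodes factorially, and a word-overlap score would be actively misled by the puzzle’s false clues, which is consistent with why the AI entrants failed.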
REFERENCES

Altenried M (2022) The Digital Factory: The Human Labor of Automation. University of Chicago Press.
