Preface. AI is not intelligent, it is stupid! AI not only makes stuff up, as explained below, but I have found it to be highly inaccurate and incomplete. For example, I asked about the hours and costs of wine tasting in Paso Robles, and it got about a quarter of them wrong and left out quite a few vineyards. When I’ve asked scientific questions for my research, it can’t tell me what sources the information came from. Once it did list sources, but none were from scientific papers, so they were of no use to me. With no sources at all, how can I trust the reply? Or use it? As you know from reading my posts, I cite everything so the reader can check my work and do additional research, because a citation often contains additional, important ideas I had to leave out, since there is only so much space in a chapter or post.
And AI companies call it a “hallucination”? That’s just a way to make it sound more human, more intelligent. Come on. It is flat-out lying and confabulating.
All AI can do is find patterns: to identify images, to guess which word is likely to come next, or to win at chess, checkers, or Go. But there is no thinking involved, no awareness, no understanding, no intelligence! It can be useful, of course, but it is not an existential threat, and it takes experts to tell useful information from hallucinations.
Like the endlessly optimistic articles on renewable energy, carbon capture, and small nuclear reactors, the endless “AI will solve all our problems” articles exist to generate investment money and drive up stock prices. There is a lot of money at stake: the McKinsey Global Institute says AI could add $2.6 trillion to $4.4 trillion to the global economy.
To understand how terrifically far AI is from being intelligent, now or ever, I highly recommend Erik J. Larson’s “The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do”, Emmanuel Maggiori’s “Smart Until It’s Dumb: Why Artificial Intelligence Keeps Making Epic Mistakes”, and Melanie Mitchell’s “Artificial Intelligence: A Guide for Thinking Humans”, which actually shows you how AI is programmed. Even though I was a programmer/systems engineer, I assumed I wasn’t smart enough to learn this, but that’s not true: anyone can, just by reading her book. Amazingly, it is as much art as science; not just anyone can train AI, because so much human judgment and skill is required.
Oh well, AI will end regardless. This technology consumes so much electricity building models and answering queries that supply chain documents prepared by the Department of Energy for the Biden administration say it can’t go on: AI’s power use is growing exponentially while the electric grid is only growing linearly.
But there’s no changing the minds of people who want to believe in AI, and in the idea that humans can solve all problems.
Most of the lack of critical thinking, though, is due to lack of education, religion, and the propaganda of outlets like Fox News. Viewers are quite comfortable with their spouting of wrong ideas, or their failure to cover events that make Republicans look bad. There’s simply no changing their minds when they don’t even see other information. For many, evidence doesn’t matter, just what their hating guts want to feel. AI is just as crazy as flat-Earthers, schizophrenics having psychotic hallucinations, and Christians who think they can bring back Jesus ASAP by making the prophecies of the Book of Revelation happen, even at the cost of a nuclear holocaust. Well, that certainly would do the trick: it would “rapture” 5 billion or more of us, according to the latest nuclear winter research.
But with only a third of Americans being rational (see my posts on critical thinking here), when AI makes stuff up, the rest either can’t tell or don’t care.
Alice Friedemann, www.energyskeptic.com. Author of “Life After Fossil Fuels: A Reality Check on Alternative Energy”, “When Trucks Stop Running: Energy and the Future of Transportation”, “Barriers to Making Algal Biofuels”, and “Crunch! Whole Grain Artisan Chips and Crackers”. Women in ecology. Podcasts: WGBH, Financial Sense, Jore, Planet: Critical, Crazy Town, Collapse Chronicles, Derrick Jensen, Practical Prepping, Kunstler 253 & 278, Peak Prosperity. Index of best energyskeptic posts.
***
Metz C (2023) Chatbots May ‘Hallucinate’ More Often Than Many Realize. When summarizing facts, ChatGPT technology makes things up about 3 percent of the time, according to research from a new start-up. A Google system’s rate was 27 percent. New York Times. https://www.nytimes.com/2023/11/06/technology/chatbots-hallucination-rates.html
When Google introduced a similar chatbot several weeks later, it spewed nonsense about the James Webb telescope. The next day, Microsoft’s new Bing chatbot offered up all sorts of bogus information about the Gap, Mexican nightlife and the singer Billie Eilish. Then, in March, ChatGPT cited a half dozen fake court cases while writing a 10-page legal brief that a lawyer submitted to a federal judge in Manhattan.
It is a serious issue for anyone using this technology with court documents, medical information or sensitive business data.
Dr. Hughes and his team asked these systems to perform a single, straightforward task that is readily verified: Summarize news articles. Even then, the chatbots persistently invented information.
“We gave the system 10 to 20 facts and asked for a summary of those facts,” said Amr Awadallah, the chief executive of Vectara and a former Google executive. “That the system can still introduce errors is a fundamental problem.” The researchers argue that when these chatbots perform other tasks — beyond mere summarization — hallucination rates may be higher.
Chatbots like ChatGPT are driven by a technology called a large language model, or L.L.M., which learns its skills by analyzing enormous amounts of digital text, including books, Wikipedia articles and online chat logs. By pinpointing patterns in all that data, an L.L.M. learns to do one thing in particular: guess the next word in a sequence of words.
Because the internet is filled with untruthful information, these systems repeat the same untruths. They also rely on probabilities: What is the mathematical chance that the next word is “playwright”? From time to time, they guess incorrectly.
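To make the “guess the next word” idea concrete, here is a minimal sketch in Python of the same principle at toy scale: count which word follows which in a tiny made-up corpus, then pick the next word in proportion to those counts. Real chatbots use huge neural networks instead of a counting table, but the output is still a statistical guess, not a checked fact, which is why the guess is sometimes wrong.

```python
import random
from collections import defaultdict

# Toy illustration of next-word prediction: count which word follows which
# in a tiny made-up corpus, then sample the next word from those frequencies.
corpus = "the playwright wrote the play and the playwright won an award".split()

# Follower counts, e.g. counts["the"]["playwright"] == 2, counts["the"]["play"] == 1.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_word(prev):
    """Sample the next word in proportion to how often it followed `prev`."""
    followers = counts[prev]
    words = list(followers)
    weights = [followers[w] for w in words]
    return random.choices(words, weights=weights)[0]

# "the" is followed by "playwright" two-thirds of the time and "play" one-third,
# so the model sometimes picks the less likely, and possibly wrong, continuation.
print(next_word("the"))
```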
Thorbecke C (2023) AI tools make things up a lot, and that’s a huge problem. CNN. https://www.cnn.com/2023/08/29/tech/ai-chatbot-hallucinations/index.html
Suresh Venkatasubramanian, a professor at Brown University, said that large language models — the technology underpinning AI tools like ChatGPT — are simply trained to “produce a plausible sounding answer” to user prompts. “So, in that sense, any plausible-sounding answer, whether it’s accurate or factual or made up or not, is a reasonable answer, and that’s what it produces,” he said. “There is no knowledge of truth there.”
The AI researcher said that a better behavioral analogy than hallucinating or lying, which carry connotations of something being wrong or ill-intended, would be comparing these computer outputs to the way his young son told stories at age four. “You only have to say, ‘And then what happened?’ and he would just continue producing more stories, and he would just go on and on.”
According to Jevin West, a professor at the University of Washington and co-founder of its Center for an Informed Public, AI makes stuff up with pure confidence. This means that it can be hard for users to discern what’s true or not if they’re asking something they don’t already know the answer to, West said.
News outlet CNET was forced to issue corrections after an article generated by an AI tool ended up giving wildly inaccurate personal finance advice when it was asked to explain how compound interest works.
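For reference (the CNN piece does not reproduce CNET’s numbers), compound interest itself is simple, verifiable arithmetic: A = P(1 + r/n)^(nt). The figures in the sketch below are my own illustrative assumptions, not anything from the CNET article.

```python
# Worked example of the standard compound-interest formula A = P * (1 + r/n)**(n*t).
# The numbers below (a $10,000 deposit at 3% compounded monthly for 5 years)
# are illustrative assumptions, not figures from the CNET article.
principal = 10_000      # P: initial deposit in dollars
rate = 0.03             # r: annual interest rate
n = 12                  # compounding periods per year (monthly)
years = 5               # t: years invested

amount = principal * (1 + rate / n) ** (n * years)
print(f"Balance after {years} years: ${amount:,.2f}")   # roughly $11,616
```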
There are risks stemming from hallucinations when people are turning to this technology to look for answers that could impact their health, their voting behavior, and other potentially sensitive topics, West told CNN.
Venkatasubramanian added that relying on these tools for any task where you need factual or reliable information that you cannot immediately verify yourself could be problematic. And there are other potential harms lurking as this technology spreads, he said, like companies using AI tools to summarize candidates’ qualifications and decide who should move ahead to the next round of a job interview. He thinks these tools “shouldn’t be used in places where people are going to be materially impacted.”
WE MAY NOT BE ABLE TO FIX AI
Large language models are trained on gargantuan datasets, and there are multiple stages that go into how an AI model is trained to generate a response to a user prompt — some of that process being automatic, and some of the process influenced by human intervention.
“These models are so complex, and so intricate,” Venkatasubramanian said, but because of this, “they’re also very fragile.” This means that very small changes in inputs can have “changes in the output that are quite dramatic. And that’s just the nature of the beast, if something is that sensitive and that complicated, that comes along with it. Which means trying to identify the ways in which things can go awry is very hard, because there’s so many small things that can go wrong.”
West agreed, saying, “The problem is, we can’t reverse-engineer hallucinations coming from these chatbots. It might just be an intrinsic characteristic of these things that will always be there.”
O’Brien M (2023) Chatbots sometimes make things up. Is AI’s hallucination problem fixable? AP.
“This isn’t fixable,” said Emily Bender, a linguistics professor and director of the University of Washington’s Computational Linguistics Laboratory. “It’s inherent in the mismatch between the technology and the proposed use cases.”
When used to generate text, language models “are designed to make things up. That’s all they do,” University of Washington linguist Bender said. They are good at mimicking forms of writing, such as legal contracts, television scripts or sonnets.
“But since they only ever make things up, when the text they have extruded happens to be interpretable as something we deem correct, that is by chance,” Bender said. “Even if they can be tuned to be right more of the time, they will still have failure modes — and likely the failures will be in the cases where it’s harder for a person reading the text to notice, because they are more obscure.”
And a bonus story from Scientific American about why AI isn’t smart:
Hughes-Castleberry K (2023) AI Can’t Solve this Famous Murder Mystery Puzzle. The 1934 puzzle book Cain’s Jawbone stumped all but a handful of humans. Then AI took the case. Scientific American.
A science journalist recently challenged AI developers to solve Cain’s Jawbone, a murder-mystery puzzle book from 1934. The book was purposely published with all its pages out of order; to crack the case, the reader must reorder the pages, and then name the six murderers and their victims. A total of six people had solved the mystery in past years. In the new challenge, none of the AI developers managed to crack the puzzle.
How it worked: The competition challenged developers to use natural language processing (NLP) algorithms to reorder the story’s now-digitized pages. In order to refine their models for the competition, the organizers gave participants Agatha Christie’s first mystery novel, The Mysterious Affair at Styles, to use as training data.
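The article doesn’t say how the contestants’ NLP models worked, but a deliberately naive sketch shows why surface pattern matching is not enough. The toy Python below (everything in it, including the example pages, the 30-word window, and the greedy chaining, is made up for illustration) tries to reorder shuffled pages purely by vocabulary overlap, exactly the kind of contextless matching that Cain’s Jawbone is designed to defeat.

```python
# A naive sketch of the page-reordering idea (NOT what the contestants used):
# greedily chain pages by how much vocabulary the end of one page shares with
# the beginning of the next. Stylized language and false clues defeat this
# kind of surface matching, because the real ordering depends on context.

def word_set(text: str) -> set[str]:
    return set(text.lower().split())

def overlap(a: str, b: str) -> float:
    """Jaccard similarity between the last words of page a and first words of page b."""
    tail = word_set(" ".join(a.split()[-30:]))
    head = word_set(" ".join(b.split()[:30]))
    return len(tail & head) / max(len(tail | head), 1)

def greedy_order(pages: list[str]) -> list[int]:
    """Start from page 0 and repeatedly pick the unused page that 'fits' best."""
    order, remaining = [0], set(range(1, len(pages)))
    while remaining:
        best = max(remaining, key=lambda j: overlap(pages[order[-1]], pages[j]))
        order.append(best)
        remaining.remove(best)
    return order

shuffled_pages = [
    "and so the inspector closed the file on the case",
    "the inspector opened the file and began to read",
    "she handed the file to the inspector that morning",
]
print(greedy_order(shuffled_pages))  # prints one guessed ordering, starting arbitrarily from page 0
```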
The takeaway: The Cain’s Jawbone competition revealed that current AI language programs may be capable of impressive feats, writes science journalist Kenna Hughes-Castleberry, but they won’t be going toe to toe with Poirot any time soon. The story’s stylized language and false clues underscore that these models struggle to analyze content without context.