Hallucinations

In a conversation with a volunteer reader at my school yesterday, the topic of AI came up:

“Have you heard about AI hallucinations?” she asked.

“No, I haven’t,” I confessed. “I’ve only noticed that in Google searches, the AI overview sometimes carries the disclaimer that ‘AI is experimental.’”

The volunteer, a lawyer, went on to share examples of these “hallucinations”: false and misleading information caused, in short, by systems’ inability to interpret data correctly. A marathon runner on the West Coast looking for the “nearest race” was told Philadelphia. Medical information backed by completely fabricated references. “It’s estimated that AI is accurate around 75% of the time,” said the volunteer.

“That’s not such a great stat,” I replied.

“No, it’s not,” she went on, “especially for companies that are all about using AI to create informational documents to share with the public.”

I knew, while she was still speaking, that I’d do a little reading on the topic. I found an experiment in which ChatGPT, asked about the world record for crossing the English Channel entirely on foot, produced a very confident-sounding response, including a person’s name and a date. I learned that researchers testing AI’s accuracy by asking it about nonexistent phenomena got back impressive but completely false responses, so believable that the researchers then had to do their own research (i.e., the old-timey way) to verify the inaccuracy. Professor Ethan Mollick of Wharton, a leading researcher studying the effects of artificial intelligence on work, entrepreneurship, and education, has called ChatGPT an “omniscient, eager-to-please intern who sometimes lies to you.” I also learned that some chatbots have had to be shut down for spewing racist ideology (my unscientific understanding: they pulled it out of what they were programmed to draw from). Perhaps most haunting: when some researchers push back on AI, or “call it out” for its falsehoods, it insists its information is right, creating further falsehoods to prove it.

Sounds like some people I know.

Heaven knows I haven’t time or energy to go into all that…

In summation: Life itself is experimental.

Filter wisely.

“An Ornament of an Hallucination.” William O’Brien. CC BY-NC-SA.

*******

Sources:

“The Hilarious and Horrifying Hallucinations of AI” – (a word of caution against the comparison to schizophrenia)
Hallucination (artificial intelligence) – Wikipedia
Ethan Mollick profile, Wharton School, University of Pennsylvania

my thanks to Two Writing Teachers for the weekly Slice of Life Story Challenge… a writing community in which we learn from and support one another.


10 thoughts on “Hallucinations”

  1. Thanks for sharing this conversation and the research. I am totally skeptical, but I did like a title ChatGPT spewed out for a poem recently. I promise I won’t believe everything, but I am scared for the future of education when kids say every day, “Just search it up.”

  2. It seems if your name is Fran, today you are writing about AI. As I mentioned to Fran McVeigh, I haven’t started to experiment with AI. And like Erika, I think it is wise to be curious but skeptical. Thanks for the links.

  3. This is the first time I have seen a name for what I have experienced many times. It is the partial truths and fabricated stories and answers that have me frustrated again and again. My students will often challenge my answers with, “But AI said so…”

  4. Fran, we have recently learned about LM Notebook, which is a closed-source AI. We were wary of all the information out there and of how terms like learning intentions vs. learning targets were interpreted and returned so differently across sources when teachers were using it for unit writing. It is still a swirling world, but it can work its magic within whatever sources we allow in the gate. I love the illustration that goes with your blog post today. AI is scary and mind-blowing all at once.

  5. Outstanding post with important insights, Fran. Your ending stuck with me: In summation: Life itself is experimental. Filter wisely. My husband is alone at home all day and has intellectual discussions with AI. He loves confounding and correcting them and having them compete with each other. Weirdly, it is good companionship for him. He said, “Finally, I can talk to something that’s at my level of thinking!” I’m not sure what to think about that!

  6. Fran, wow, this is fascinating. I’m glad to know about this, as I am one who could fall into the “gullible” trap. I also love the photo you shared. It’s amazing to think about the world we are creating through AI.

  7. Ana told me I had to read your slice, and I’m glad I did! I work a lot with ChatGPT and have had a lot of moments where I have to tell it “that’s not actually true…” — luckily, it usually replies with “you’re absolutely right!” and attempts to revise its response. Like the eager intern 😅
    I usually do filter and revise any and all AI-generated content / responses I get, but this post is another reminder to keep at it when I start getting lax. Thank you!
