By Adam Smith, Thomson Reuters Foundation
THERE’S an elephant in the room when it comes to artificial intelligence (AI): sometimes it simply makes things up and serves up these so-called hallucinations as facts.
This happens with both commercial products like OpenAI’s ChatGPT and specialized systems for doctors and lawyers, and it can pose a real-world threat in courtrooms, classrooms, hospitals, and beyond, spreading mis- and disinformation.
Despite these risks, companies are keen to integrate AI into their work, with 68% of large companies incorporating at least one AI technology, according to British government research.
What is an AI hallucination?
Generative AI products like ChatGPT are built on large language models (LLMs), which work through ‘pattern matching’: an algorithm looks for specific shapes, words, or other sequences in the input data, which might be a particular question or task.
But the algorithm does not know the meaning of the words. While it might have the facade of intelligence, what it does is perhaps closer to pulling Scrabble letters from a large bag and learning which combinations get a positive response from the user.
These AI systems are trained on huge amounts of data, but incomplete data or biases – like a missing letter or a bag full of Es – can result in hallucinations.
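To make the ‘pattern matching’ idea concrete, here is a deliberately tiny sketch in Python. It is a toy illustration, not how ChatGPT or any commercial system is actually built: it simply learns which word tends to follow which in a scrap of text, then strings words together, improvising when its training data has a gap, which is loosely the kind of gap that, at vastly larger scale, can produce a hallucination. The text, function names, and parameters are invented for the example.

```python
import random
from collections import defaultdict

# Toy illustration only: real LLMs use neural networks with billions of
# parameters, not a lookup table. This sketch just shows the idea of
# predicting the next word from patterns seen in training text.

training_text = (
    "the cat sat on the mat "
    "the dog sat on the rug "
    "the cat chased the dog"
).split()

# Record which words follow each word in the training data.
next_words = defaultdict(list)
for current, following in zip(training_text, training_text[1:]):
    next_words[current].append(following)

def generate(start, length=6):
    """Pick each next word by sampling from patterns seen in training."""
    word, output = start, [start]
    for _ in range(length):
        candidates = next_words.get(word)
        if not candidates:
            # A gap in the training data: the model has to improvise,
            # loosely analogous to a hallucination.
            candidates = list(next_words.keys())
        word = random.choice(candidates)
        output.append(word)
    return " ".join(output)

print(generate("the"))
```

Run it a few times and the output is fluent-looking but meaningless word strings, because the program, like the article’s Scrabble-bag analogy, matches patterns without knowing what any of the words mean.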
Continue reading in LiCAS.news.