{"id":12288,"date":"2023-12-01T20:56:08","date_gmt":"2023-12-01T20:56:08","guid":{"rendered":"https:\/\/tokendices.com\/what-are-ai-hallucinations-and-how-to-prevent-them\/"},"modified":"2023-12-01T20:56:08","modified_gmt":"2023-12-01T20:56:08","slug":"what-are-ai-hallucinations-and-how-to-prevent-them","status":"publish","type":"post","link":"https:\/\/tokendices.com\/what-are-ai-hallucinations-and-how-to-prevent-them\/","title":{"rendered":"What Are AI Hallucinations and How to Prevent Them?"},"content":{"rendered":"
Coinspeaker

What comes to mind when you hear the term "hallucinations"? For most of us, it brings up images of insomnia-induced visions, schizophrenia, or some other mental illness. But have you ever heard that artificial intelligence (AI) can also hallucinate?

The truth is that AIs can and do hallucinate from time to time, and this is a real problem for the people and companies that rely on them. In this guide, we'll take you through AI hallucinations, what causes them, and what their implications are.

AI Hallucinations Defined

An AI hallucination is a scenario in which an AI model begins to detect language or object patterns that don't exist, distorting the output it produces. Many generative AIs work by predicting patterns in language and content and generating responses based on them. When an AI produces output based on patterns that don't exist, or that are completely off base from the prompt it was given, we call this an "AI hallucination."

Take, for example, a customer service chatbot on an e-commerce website. Imagine you ask it when your order will be delivered and it gives you a nonsensical answer unrelated to your question. That is a common case of AI hallucination.

Why Do AI Hallucinations Happen?

Essentially, AI hallucinations happen because generative AI is designed to make predictions based on language but doesn't actually "understand" human language or what it is saying. For example, an AI chatbot for a clothing store may be designed so that when a user types the words "order" and "delayed", it checks the status of the customer's order and reports that it is on the way or has already been delivered.
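This kind of surface-level pattern matching can be sketched in a few lines of Python. The rules and canned replies below are hypothetical, not taken from any real product:

```python
# A minimal sketch of a keyword-matching support bot (hypothetical rules and
# replies): it reacts to surface patterns, not to what the user actually means.
def keyword_bot(message: str) -> str:
    text = message.lower()
    if "order" in text and "delay" in text:
        # The bot only "knows" the keyword pattern, not the intent behind it.
        return "Your order is on the way."
    return "Sorry, I didn't understand that."

print(keyword_bot("My order is delayed. Where is it?"))
# prints: Your order is on the way.   (the intended use case)

print(keyword_bot("Please delay my order, I won't be home."))
# prints: Your order is on the way.   (same reply, but it ignores the request)
```

Both messages contain "order" and "delay", so the bot gives the same canned answer to two very different requests.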
The AI doesn't actually "know" what an order is or what a delay is.

So if a user types into the chatbot that they would like to delay their order because they won't be home, such an AI might keep reporting the status of the order without actually answering the query. A human who understands the nuances of language would know that the same words don't mean the same thing in every context. But AI, as we've established, does not. Instead, it learns to predict patterns in language and works from those. Hallucinations also tend to occur when a user gives a prompt that is poorly constructed or too vague, which can confuse the model. AIs will typically become better at language prediction over time, but hallucinations are still bound to happen now and again.

Types of AI Hallucinations

AI hallucinations can take several different forms, from fabricated facts to responses that are nonsensical or unrelated to the prompt.

Now that we understand AI hallucinations better, it is worth exploring their consequences.

Consequences of AI Hallucinations

AI hallucinations can cause many serious problems. Firstly, they can lead to fake news. As a society, we have been trying to combat fake news for years, and AI hallucinations could put a dent in those efforts. People rely on respected news outlets for legitimate reporting, and if AI hallucinations keep producing fake facts, the line between truth and lies will blur even further.

Secondly, AI hallucinations can erode trust in AI. For AI to remain useful to the public, people need to be able to trust it, and that trust is shaken when AI models feed users fake news or incorrect facts. If this happens constantly, users will begin cross-checking every AI response, which defeats the purpose. With this, trust in AI will be diminished.
There's also the fact that AIs giving nonsensical or unhelpful responses will only irritate and alienate users.

Furthermore, many of us turn to AI for advice or recommendations on everything from food to schoolwork. If AI gives incorrect information, people could end up harming themselves, which is a whole other can of worms.

Examples of AI Hallucinations

A prime example of an AI hallucination is Google's Bard chatbot falsely claiming that the first image of a planet outside our solar system was taken by the James Webb Space Telescope. In reality, that first image was taken in 2004, 17 years before the James Webb Space Telescope was even launched.

Another example is ChatGPT making up articles attributed to The Guardian newspaper, complete with invented authors and events that never happened.

Or take Microsoft's Bing AI, which, shortly after its launch in February 2023, insulted users and even threatened one user that it would reveal his personal information and "ruin his chances of finding a job."

Detecting and Preventing AI Hallucinations

Because AI is not infallible, both developers and users need to know how to detect and prevent AI hallucinations in order to avoid their downsides. For users, that means writing clear, specific prompts and cross-checking important AI answers against reliable sources; for developers, it means testing models rigorously and refining them over time.

Final Thoughts

The more we use AI, the more aware we become of its limitations and of the issues that still need to be worked out. AI hallucination is a genuine problem in the tech world, and one that both creators and users of AI need to be aware of. Whether because of a flaw in the model or a poorly constructed prompt, AIs can and do give false or nonsensical responses. It is up to developers to make AIs as close to infallible as possible, and up to users to stay cautious as they use them.
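Prevention details vary by system, but one widely used knob offered by most generative AI services (a general technique, not one this article specifically lists) is the sampling temperature: lowering it makes the model favor its highest-probability tokens, which reduces, though never eliminates, the chance of low-probability, made-up output. A minimal sketch of the underlying math, using made-up token scores:

```python
# Illustrative only: temperature scaling of token scores ("logits").
# Lower temperature concentrates probability on the most likely tokens;
# higher temperature flattens the distribution, making unlikely
# (possibly hallucinated) tokens more probable.
import math

def softmax_with_temperature(logits, temperature):
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores for three candidate tokens
cool = softmax_with_temperature(logits, 0.5)
hot = softmax_with_temperature(logits, 2.0)

print(cool[0] > hot[0])  # prints: True
```

At temperature 0.5 the top-scoring token takes most of the probability mass; at 2.0 the three options are much closer together, so a riskier token is sampled more often.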