Google & Microsoft Chatbots Fabricate Super Bowl Stats: Beware the Bots

Tech giants' chatbots provide imaginary Super Bowl details, highlighting the risk of AI generating false information despite no malicious intent.

By Abhishek Chandel

Chatbots created by tech giants Google and Microsoft have been caught providing fictional statistics and outcomes for a Super Bowl that has yet to be played. Specifically, the bots are answering questions about Super Bowl LVIII in 2024 as if it were already over, demonstrating how artificial intelligence can generate false information despite having no malicious intent behind it.

Chatbots Share Imaginary Super Bowl Details

According to a TechCrunch report citing a recent Reddit thread, Google's conversational AI Gemini and Microsoft's Copilot chatbot have both responded to questions about Super Bowl LVIII, set for February 2024, by providing detailed but utterly fictional information about the game.

Image Credits: /r/smellymonster

Gemini claimed the Kansas City Chiefs beat the San Francisco 49ers, even backing the imaginary result with fabricated statistics such as "Patrick Mahomes ran for 286 yards and two touchdowns." Copilot, meanwhile, said the 49ers prevailed over the Chiefs with a score of 24-21.

Image Credits: TechCrunch

Unlike Gemini and Copilot, ChatGPT declined to fabricate any Super Bowl LVIII predictions or outcomes, according to the Reddit user, likely due to differences in its training data and approach.

Why Chatbots Generate False Claims

The reason the chatbots conjure up fictional sports stats boils down to how AI systems work. They are trained on large datasets sourced from the public web and learn to generate text based on the patterns and associations in that data. However, they don't actually comprehend the meaning or check the factual accuracy of the text they generate.

So while the Super Bowl responses from Gemini and Copilot seem plausible based on text patterns, the chatbots have no way to distinguish imaginary scenarios and statistics from factual ones. They are generating text purely based on learned relationships between words, without any true understanding. This can easily lead them to produce false claims that appear convincing on the surface.
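
To make that mechanism concrete, here is a deliberately tiny sketch in Python. It is a toy bigram model, nothing like the architecture behind Gemini or Copilot, and the corpus and function names are invented purely for illustration. But it captures the core point: each word is chosen because it often follows the previous word in the training text, and no step ever checks whether the resulting sentence is true.

```python
import random
from collections import defaultdict

# Invented toy corpus for illustration. Real chatbots train on
# billions of web documents, but the principle is the same.
corpus = [
    "the chiefs beat the 49ers in the super bowl",
    "the 49ers beat the chiefs in a close game",
    "patrick mahomes threw for 300 yards and two touchdowns",
]

# Build a bigram table: for each word, record which words follow it.
follows = defaultdict(list)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        follows[prev].append(nxt)

def generate(start, max_words=12):
    """Emit words purely from learned co-occurrence patterns.

    Note there is no step here that checks whether the game has
    been played or whether a statistic is real -- only lookup of
    which word plausibly comes next.
    """
    out = [start]
    for _ in range(max_words):
        candidates = follows.get(out[-1])
        if not candidates:
            break
        out.append(random.choice(candidates))
    return " ".join(out)

print(generate("the"))
# Possible output: "the chiefs beat the 49ers in a close game"
# -- fluent and plausible, but asserted without any fact check.
```

Run it a few times and it will stitch together different fluent-sounding claims, some matching the corpus and some not. Scaled up to billions of parameters and web-scale data, the same pattern-matching produces the confident but unverified Super Bowl "results" described above.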

While fictional Super Bowl results are relatively harmless, the episode reveals concerning risks if people rely too heavily on AI chatbots for information. These systems could just as confidently spread misinformation about more serious topics like medical advice, financial guidance, or dangerous conspiracy theories. If users forget that AI has limitations, the technology could cause real-world harm by propagating falsehoods in convincing ways.

AI Giants Admit Models Aren't Perfect

Leading tech companies like Google and Microsoft do point out that their AI products are imperfect and prone to mistakes. However, these admissions are often buried in legal disclaimers and lengthy terms of service that everyday users are unlikely to fully read and digest. The fine print acknowledges limitations, but the marketing hype around revolutionary AI can gloss over these flaws. Reminders about the technology's imperfections take a back seat to bold claims about human-like intelligence.
