Avoid Using AI Chatbots for News: Here’s Why

28 June 2024 | Zaker Adham

Summary

In a recent experiment, Nieman Lab highlighted a critical flaw in AI-powered chatbots like ChatGPT: their tendency to fabricate information. When asked for links to high-profile articles from major news outlets, ChatGPT generated fake URLs that led to 404 error pages. This phenomenon, known in the AI industry as "hallucination," underscores how unreliable AI chatbots are as a source of accurate news.
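This failure mode is easy to check for yourself: any link a chatbot supplies can be verified with an ordinary HTTP request before you trust it. Below is a minimal sketch, assuming Python's third-party requests library; the URLs are hypothetical placeholders, not the actual outputs from Nieman Lab's test.

```python
# Minimal sketch: verify links returned by a chatbot before trusting them.
# Requires the third-party `requests` library (pip install requests).
import requests

# Hypothetical placeholder URLs, standing in for chatbot-supplied links.
candidate_urls = [
    "https://www.example.com/2024/06/some-exclusive-story",
    "https://www.example.com/another-cited-article",
]

for url in candidate_urls:
    try:
        # HEAD is lighter than GET; some servers reject it, so fall back.
        resp = requests.head(url, allow_redirects=True, timeout=10)
        if resp.status_code == 405:
            resp = requests.get(url, allow_redirects=True, timeout=10)
        status = resp.status_code
    except requests.RequestException as exc:
        print(f"UNREACHABLE: {url} ({exc})")
        continue
    label = "OK" if status == 200 else f"BROKEN (HTTP {status})"
    print(f"{label}: {url}")
```

A hallucinated link of the kind described in the experiment would surface here as BROKEN (HTTP 404).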

Nieman Lab's Andrew Deck tested ChatGPT with requests for exclusive stories from publications that OpenAI has significant partnerships with, such as The Wall Street Journal, The Financial Times, and Politico. Instead of providing real URLs, ChatGPT confidently produced false ones.

An OpenAI spokesperson acknowledged that they are developing a feature to blend conversational AI with current news content while ensuring proper attribution. However, this functionality is still in progress and not yet available.

This experiment points to a broader issue: AI chatbots like ChatGPT are designed to predict and generate plausible text, not to retrieve verified facts, so they can spread misinformation with confidence. As the journalism industry grapples with monetization and with its partnerships with tech giants, the integrity of news dissemination is at stake.

Mustafa Suleyman, Microsoft's AI chief, recently referred to publicly available internet content as "freeware" for AI training, reflecting a casual approach to content ownership. Microsoft, valued at $3.36 trillion, exemplifies the tech industry's expansive influence on AI development.

The key takeaway? AI chatbots may sound knowledgeable, but their generative nature makes them unreliable for factual news. If AI can't reliably handle a task as simple as supplying a working link, trusting it for accurate news is even more precarious.