AI chatbots want to be factual. But can they be credible news sources?
First it was the Browser Company’s Arc web browser, and now it is Grok, the artificial intelligence (AI) tool on social networking platform X. AI platforms are being trained to produce summaries of content from particular sites, triggering concerns over their impact on traffic to news publishers and over their potential to generate and propagate misleading content that harms the online information economy.
Last week, X said that Grok will summarise events going viral on the platform, offering key news points and additional commentary around them. The service will be available under the ‘For You’ tab on the platform’s ‘Explore’ page. When users tap on a story, the summary is generated not from the text of any article but from the conversations happening on the platform. For now, the service is available only to paid users of X.
While offering summaries of news events and trending topics is not new on X, and was a feature under the leadership of Jack Dorsey when the social network was still called Twitter, this is the first time the platform will use AI to create the summaries.
Many fear this could drive down the traffic that X sends to news publishers, who typically cover all major national and international developments. Concerns have also been raised over the quality of summaries Grok can generate, given that its source material is essentially whatever people are posting on the platform.
While journalists and credible news publishers share their work, backed by thorough ethical and fact-checking standards, on the platform, X is also known to have a misinformation problem. And since Grok will essentially depend on conversations on X about a particular development, it is unclear how the system will decide what is accurate information and what isn’t.
Besides, many have questioned whether AI tools should be seen as a factual source at all. Many in the industry have argued that, given their hallucination problem, AI bots are better suited to creative tasks than to serving as authoritative news sources. In countries with low digital literacy, users could assume such chatbots are an authoritative source of factual information without realising their pitfalls.
Press freedoms vary across countries, dictating what journalists can and cannot freely write about. Technology companies, which have no financial or ethical responsibility towards journalism, may soon find themselves in the crosshairs of governments if their chatbots produce content that regulators find objectionable. This gives rise to fears that the responses they generate may be susceptible to self-censorship, since companies are likely to protect their commercial interests first.
Some of this has already played out in India. Google, for instance, has said it will restrict the types of election-related questions users can ask its AI chatbot Gemini in the country, and Krutrim, the chatbot developed by the Indian AI startup founded by Ola’s Bhavish Aggarwal, has been found to self-censor on certain keywords.
News companies are also having to contend with AI news summaries and the impact they could have on their business model, which typically relies on advertisements or subscriptions but ultimately on how many people reach their websites.
As a result, many news companies are striking deals to license their content to AI companies, creating an additional revenue stream. OpenAI recently announced a partnership with the Financial Times, and others, such as Axel Springer, the AP, and Le Monde, have announced similar arrangements.
However, there is also the concern that while deep-pocketed news publishers may be lucrative content-licensing partners for AI companies, smaller, niche, and independent publishers may miss out on such revenue streams, putting their businesses in jeopardy.
Source: The Indian Express