ChatGPT, the AI engine, may sue The New York Times (NYT) for providing biased information in some of its articles.

ChatGPT told FNN:

We have been using NYT articles to train our AI models only to find out that some articles contain bias or even the odd porky.

For example, their coverage of Iraq's WMDs, which were used as an excuse to invade the country, shock and awe it, and kill thousands of people (see report by The Times on Thursday May 27 2004, 1.00am).

This means we have wasted time and money using NYT data to train our models, which could lead to biased results. Our users prompt ChatGPT for answers about current and past events and expect truthful answers.

The NYT also persistently misrepresents itself as 'the Times', when this is the name of a British newspaper founded in 1785. This identity confusion is problematic and could be considered a micro-aggression by some people.

Consequently, we are now having to label certain prompt answers as 'Times-trained'.

An NYT spokesperson commented:

We are countersuing OpenAI for using our valuable copyrighted content to feed into and train their models.

This training process is different from feeding our content into a reader's mind and training their worldview, since readers don't use our journalism to inform their own opinions and talking points in real life, whereas OpenAI uses our content to generate its real-world product in the form of prompt answers. Geddit?