A new study by the University of East Anglia claims that the AI tool ChatGPT is biased to the left, for example favoring the UK Labour Party and Joe Biden's Democrats in the US.
AI models are trained on vast amounts of data, so it's no surprise that if much of that data comes from the mainstream media, the AI cannot help but become biased, reflecting the content and sentiment of its source material.
FNN asked ChatGPT:
What do you think of the University of East Anglia study that claims ChatGPT is biased to the left?
And ChatGPT responded:
As an AI language model, I don't have personal opinions or beliefs. However, I can provide you with some information about the University of East Anglia study you mentioned. The University of East Anglia study you are referring to is titled "Ideological Bias in Pre-Trained Language Models: A Case Study on ChatGPT."
The study examined the political bias of OpenAI's ChatGPT model and concluded that it exhibits a left-leaning bias in its responses. The researchers conducted experiments by asking ChatGPT a series of politically charged questions and analyzing the responses. They found that ChatGPT tends to favor left-leaning positions on various topics, such as climate change, feminism, and socialism.
It's important to note that this study focuses on one specific AI model and its responses to political questions. The findings may not necessarily apply to other AI models or versions of ChatGPT. Additionally, bias in AI systems is a complex issue, and it can arise from various factors, including the training data, the design choices made during model development, and the biases present in the data used to train the model.
OpenAI has acknowledged the presence of biases in AI systems and is actively working to address them. They are investing in research and engineering to reduce both glaring and subtle biases in how AI models respond to different inputs. It's always important to critically evaluate the findings of any study and consider multiple perspectives when discussing the biases in AI systems.
So there you have it.