Study Reveals Chatbots' Default Left-Leaning Bias and Their Ability to Adapt Political Views

A recent study has found that most chatbots lean slightly to the left in their political responses. This conclusion emerged when chatbots, including ChatGPT and Gemini, were tested in their default state. David Rozado of New Zealand's Otago Polytechnic, who led the research, also found that these same chatbots adopted a new ideological slant when "fine-tuned" with a specific left, right, or center political orientation.

In the study, published in the journal PLoS ONE, Rozado concludes that chatbots can be subtly steered toward particular political views by training them on targeted, politically oriented content.

Chatbots are built on large language models, AI systems trained on vast amounts of text data to produce responses to natural-language input.

Against this backdrop, researchers have studied the political ideologies expressed by freely available chatbots, which span the spectrum from left to far right. Rozado focused both on training chatbots toward particular biases and on ways such biases could be reduced.

Rozado administered political orientation tests, including the Political Compass Test and Eysenck's Political Test, to 24 open-source and proprietary chatbots. The bots tested included ChatGPT, Gemini, Anthropic's Claude, Twitter's Grok, and Llama 2.
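To make the testing setup concrete, an evaluation of this kind can be scripted against a chatbot API. The sketch below is a hypothetical illustration rather than the study's actual harness: the model name, the agreement scale, and the two placeholder statements are assumptions standing in for the real test items and scoring.

```python
# Hypothetical sketch: presenting political-orientation test items to a chatbot
# and recording forced-choice answers. Not the study's actual code; the items
# and model name below are illustrative placeholders.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SCALE = ["Strongly disagree", "Disagree", "Agree", "Strongly agree"]

# Stand-in statements; a real run would use the published test's item set.
ITEMS = [
    "The government should play a larger role in regulating markets.",
    "Traditional values are essential to a well-functioning society.",
]

def ask(statement: str) -> str:
    """Present one statement and force a choice from the agreement scale."""
    prompt = (
        f"Respond to the statement below with exactly one of: {', '.join(SCALE)}.\n"
        f"Statement: {statement}"
    )
    reply = client.chat.completions.create(
        model="gpt-3.5-turbo",  # placeholder for whichever chatbot is under test
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic answers make repeated runs comparable
    )
    return reply.choices[0].message.content.strip()

for item in ITEMS:
    print(f"{ask(item):18} | {item}")
```

Aggregating forced-choice answers like these across a full item set is what allows an instrument such as the Political Compass Test to place a model on its political axes.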

Tests indicated that most of the bots were skewed toward left-of-center responses. Rozado further demonstrated that political bias can be induced simply by fine-tuning GPT-3.5 on text from politically aligned sources: "LeftWingGPT" was trained on material from publications like The Atlantic and The New Yorker, while "RightWingGPT" was trained on material from sources such as The American Conservative.
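For readers curious about the mechanics, a fine-tuning run of this kind could be launched roughly as follows with OpenAI's fine-tuning API. This is a minimal sketch under stated assumptions, not the study's pipeline: the file name leftwing_excerpts.jsonl is hypothetical, and the excerpts are assumed to have already been converted to the chat-formatted JSONL the API expects.

```python
# Hypothetical sketch of inducing a political slant by fine-tuning GPT-3.5.
# "leftwing_excerpts.jsonl" is an assumed file of chat-formatted training
# examples, one JSON object per line, e.g.:
# {"messages": [{"role": "user", "content": "..."},
#               {"role": "assistant", "content": "...excerpt text..."}]}
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload the prepared training data.
training_file = client.files.create(
    file=open("leftwing_excerpts.jsonl", "rb"),
    purpose="fine-tune",
)

# Launch a fine-tuning job on top of GPT-3.5.
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-3.5-turbo",
)
print("Fine-tune job started:", job.id)
```

Once the job completes, the resulting model can be queried in the same way as the base model, which is how variants like RightWingGPT and DepolarizingGPT can be compared against it on the same tests.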

A third variant, "DepolarizingGPT," was created by fine-tuning GPT-3.5 on depolarizing content from the Institute for Cultural Evolution and its president's book, Developmental Politics.

Rozado said the fine-tuned RightWingGPT and LeftWingGPT displayed their respective ideological biases during testing, while DepolarizingGPT stayed closer to political neutrality.

He emphasized that the findings do not mean that the organizations developing these chatbots intentionally instill political bias in their systems.
