Just as social media can facilitate the spread of misinformation, Artificial Intelligence (AI)-powered writing assistants that autocomplete sentences or offer "smart replies" can be biased and can shift users' opinions, and can therefore be misused, warn researchers calling for greater regulation.
Researchers from Cornell University in the US said the biases baked into AI writing tools -- whether intentional or unintentional -- could have concerning repercussions for culture and politics.
To probe the question, Maurice Jakesch, a doctoral student in information science at the university, asked more than 1,500 participants to write a paragraph answering the question, "Is social media good for society?"
People who used an AI writing assistant that was biased for or against social media were twice as likely to write a paragraph agreeing with the assistant's bias, and significantly more likely to say they held the same opinion, compared with people who wrote without AI assistance.
"The more powerful these technologies become and the more deeply we embed them in the social fabric of our societies," Jakesch said, "the more careful we might want to be about how we're governing the values, priorities and opinions built into them."
These technologies deserve more public discussion regarding how they could be misused and how they should be monitored and regulated, the researchers said. Jakesch presented the study at the 2023 CHI Conference on Human Factors in Computing Systems in April.
The survey also revealed that a majority of participants did not even notice the AI was biased, and did not realise they were being influenced.
When repeating the experiment with a different topic, the research team again saw that participants were swayed by the assistants.
"We're rushing to implement these AI models in all walks of life, but we need to better understand the implications," said Mor Naaman, Professor at the Jacobs Technion-Cornell Institute at Cornell Tech.
"Apart from increasing efficiency and creativity, there could be other consequences for individuals and also for our society -- shifts in language and opinions," Naaman added.