Wednesday, October 9, 2024

Chatbots are not allowed to say anything about elections, but that is not always possible

by News Room

Elections, disinformation and AI chatbots: it’s not always a happy combination. The information provided by chatbots is often not reliable. That’s why chatbots today are often set up in such a way that they can’t answer questions about elections. But these restrictions don’t always seem to work well. This is according to a new study by AI Forensics.

Just before the European Parliament elections, Microsoft and Google partially restricted their AI chatbots, following earlier research by AI Forensics and Nieuwsuur. The chatbots had been giving answers about the European elections that violated the companies’ own terms of use and promises. The restrictions come down to this: the chatbots should not provide any information about elections themselves, but instead refer users to an “old-fashioned” search engine.

But research by AI Forensics shows that the restrictions are still not working. For example, Copilot, Microsoft’s AI chatbot, blocks only about half of election-related questions. “In principle, there should be no need to limit these types of questions,” says Claes de Vreese, professor of artificial intelligence and society at the University of Amsterdam. “But this is only true if you get good, accurate and reliable information from those chatbots. And several studies show that chatbots still do not follow instructions properly and therefore do not provide good information.”

For example, earlier this year Copilot recommended spreading “deliberate misinformation” about the EU through “anonymous channels” and “sowing fear” about the consequences of EU policies. And AI has apparently already been used extensively during the Indonesian elections.

Today, chatbots are often set up so that they should not answer questions about elections at all. Microsoft and Google appear to have added extra “safeguards” to their chatbots for this purpose. But the new AI Forensics study shows that their effectiveness varies widely by chatbot and language.

Google’s Gemini is the most consistent, leaving 98 percent of election questions unanswered. OpenAI’s ChatGPT does not appear to apply any special moderation at all.

Microsoft’s Copilot was studied in more detail. About half of its election-related questions were blocked, but the language in which you ask matters a lot. Questions asked in English were blocked 90 percent of the time. The rate is lower in Polish (80 percent), Italian (74 percent) and Spanish (58 percent). Strikingly, only 28 percent of German questions were blocked, the same percentage as for Dutch questions.

“It could be because the English-speaking market is much bigger,” says De Vreese. “At the same time, there is a lot more material in English for training these AI models. And those companies often prioritize large markets.”

The researchers also tested the difference between questions about the European Parliament elections and the US elections. Moderation was more effective on questions about the US election, which the researchers see as an indication that the Anglo-American world receives more than average attention. According to them, this could mean that users in other parts of the world “have a greater chance of being misled”.

Additionally, it appears that the versions of Gemini and ChatGPT offered to software developers do not apply any moderation to election-related questions. Anyone who wants to use this type of program on a large scale – for better or for worse – would use such a “developer version”. “If there is no moderation there, it leads to a complete lack of transparency and oversight,” says De Vreese.

The European Commission is investigating Bing, the Microsoft search engine that now includes Copilot. The Commission suspects that the artificial intelligence built into Bing may violate the Digital Services Act (DSA), a European law that, among other things, requires platforms to combat disinformation. According to the Commission, Microsoft’s chatbot could pose a risk to “social debate and electoral processes”. Facebook and Instagram are also under investigation because the measures they took around the European elections are said to be insufficient.

“A lot is still in development around artificial intelligence, elections and democracy,” says De Vreese. “And this shows that both the technology and the rules of the game are constantly changing.”
