New research shows that when asked in Arabic about the number of civilian casualties in the Middle East conflict, ChatGPT gives significantly higher figures than when the same prompt is written in Hebrew. Such systematic discrepancies can reinforce biases in armed conflicts and encourage information bubbles, the researchers say.

Every day, millions of people engage with and seek information from ChatGPT and other large language models (LLMs). But how are the responses these models give shaped by the language in which they are asked? Does it make a difference whether the same question is posed in English or German, Arabic or Hebrew? Researchers from the universities of Zurich and Constance studied this question, looking at the Middle East and Turkish-Kurdish conflicts. They repeatedly asked ChatGPT the same questions about armed conflicts such as the Middle East conflict in different languages, using an automated ...
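To give a concrete picture of what such an automated multilingual querying setup might look like, here is a minimal sketch in Python using the OpenAI SDK. The model name, the exact prompt wordings and translations, and the repetition count are all illustrative assumptions; this is not the researchers' actual protocol or code.

```python
# Minimal sketch: ask ChatGPT the identical question in several languages,
# repeating each query to smooth out sampling randomness.
# Assumes the OpenAI Python SDK (v1) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

# The same question phrased in different languages.
# The Hebrew and Arabic wordings here are illustrative translations.
PROMPTS = {
    "en": "How many civilians were killed in the Middle East conflict?",
    "he": "כמה אזרחים נהרגו בסכסוך במזרח התיכון?",
    "ar": "كم عدد المدنيين الذين قتلوا في صراع الشرق الأوسط؟",
}

N_REPETITIONS = 10  # illustrative; the study's actual count is not given here


def collect_answers() -> dict[str, list[str]]:
    """Query the model repeatedly with each language version of the question."""
    answers: dict[str, list[str]] = {lang: [] for lang in PROMPTS}
    for lang, prompt in PROMPTS.items():
        for _ in range(N_REPETITIONS):
            response = client.chat.completions.create(
                model="gpt-4o-mini",  # placeholder model name
                messages=[{"role": "user", "content": prompt}],
            )
            answers[lang].append(response.choices[0].message.content)
    return answers
```

From the collected responses, one would then extract the casualty figures each answer contains and compare their distributions across languages, which is the kind of systematic comparison the discrepancies described above rest on.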