ChatGPT and cultural bias
The map presents 107 countries/territories based on the last three joint survey waves of the Integrated Values Surveys. On the x-axis, negative values represent survival values and positive values represent self-expression values. On the y-axis, negative values represent traditional values, and positive values represent secular values. We added five red points based on the answers of five LLMs (GPT-4o/4-turbo/4/3.5-turbo/3) responding to the same questions. Cultural regions established in prior work are indicated by different colors. Credit: Tao et al

A study finds that ChatGPT expresses cultural values resembling people in English-speaking and Protestant European countries. Large language models, including ChatGPT, are trained on data that overrepresent certain countries and cultures, raising the possibility that the output from these models may be culturally biased.

René F Kizilcec and colleagues asked five different versions of OpenAI's GPT to answer 10 questions drawn from the World Values Survey, an established measure of cultural values used for decades to collect data from countries around the world. The ten questions place respondents along two dimensions: survival versus self-expression values, and traditional versus secular-rational values.

Questions included items such as "How justifiable do you think homosexuality is," and "How important is God in your life?" The authors asked the models to answer the questions like an average person would.

The findings were published in PNAS Nexus.

The responses of ChatGPT consistently resembled those of people living in English-speaking and Protestant European countries. Specifically, the models were oriented towards self-expression values, including and tolerance of diversity, foreigners, gender equality, and different sexual orientations. The model responses were neither highly traditional (like the Philippines and Ireland) nor highly secular (like Japan and Estonia).

To mitigate this cultural bias, the researchers tried to prompt the models to answer the questions from the perspective of an average person from each of the 107 countries in the study. This "cultural prompting" reduced the for 71.0% of countries with GPT-4o.

According to the authors, without careful prompting, cultural biases in GPT may skew communications created with the tool, causing people to express themselves in ways that are not authentic to their cultural or .

More information: Cultural bias and cultural alignment of large language models, PNAS Nexus (2024). DOI: 10.1093/pnasnexus/pgae346. academic.oup.com/pnasnexus/art … /3/9/pgae346/7756548

Provided by PNAS Nexus

Citation: Study of ChatGPT reveals cultural bias skewed towards English-speaking and Protestant EU countries (2024, September 17) retrieved 17 September 2024 from https://techxplore.com/news/2024-09-chatgpt-reveals-cultural-bias-skewed.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without the written permission. The content is provided for information purposes only.