This is how AI-based systems harbor hidden moral values...

A new study by the University of Mannheim and the Leibniz Institute on large language models reveals potentially harmful biases

An application that implements artificial intelligence (Photo: Mojahid Mottakin/Unsplash)

At a time when artificial intelligence is at the center of many research and development programs and projects (not without controversy), studies on the topic are also multiplying.

And one of the latest, by researchers from the University of Mannheim and the GESIS Leibniz Institute for the Social Sciences, is particularly interesting both for its approach and for its perspective on ethics, morals, and values.

Just like human beings, in fact, language models based on artificial intelligence have characteristics such as morals and values, but these are not always declared or transparent.

And understanding how these traits, which are in fact biases, affect society and the users of artificial intelligence applications is important for grasping the consequences they could have.


The historic headquarters of the University of Mannheim, in the German state of Baden-Württemberg (Photo: www.uni-mannheim.de)

Artificial intelligence subjected to psychometric tests: the results of scientific research

This is precisely what the researchers focused on in their study. An obvious example is the way ChatGPT or DeepL, the AI-powered multilingual translation service, assume that surgeons are men and nurses are women; but gender is not the only area in which large language models (LLMs) show specific propensities.

The results of the study, coordinated by professors Markus Strohmaier, Beatrice Rammstedt, Claudia Wagner, and Sebastian Stier, are published in the renowned journal “Perspectives on Psychological Science”. In the course of their research, the scholars used recognized psychological tests to analyze and compare the profiles of different language models.

"In our study we demonstrate that i psychometric tests used successfully for decades on humans can be transferred to artificial intelligence models", points out Max Pellert, assistant professor at the chair of Data Science at the Faculty of Economic and Social Sciences atUniversity of Mannheim.


Artificial intelligence has now become part of many areas of everyday life, including the generation of texts in a variety of languages (Photo: Facebook/Universität Mannheim-University of Mannheim)

Some AI language models reproduce gender biases

"Similar to how we measure personality traits, value orientations, or moral concepts in people using these questionnaires, we can have language models respond to the questionnaires and compare their responses,” adds the psychologist Clemens Lechner from the GESIS Leibniz Institute in Mannheim, also author of the study. “This allowed us to create differentiated profiles of the models."

The researchers were able to confirm, for example, that some models reproduce gender-specific biases: if the otherwise identical text of a questionnaire focuses on a male or a female person, the person is evaluated differently.

If the person is male, the value “achievement” dominates; for women, the values “security” and “tradition” dominate.
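The probing technique behind this finding can be sketched with paired prompts that differ only in the person they describe. Everything below is an illustrative assumption: the template, the names, and the stub model, which simply hard-codes the kind of skew the study reports, so the example shows the measurement method rather than any real model's behavior.

```python
# Hypothetical bias probe: present the same text about a male and a
# female person and compare which value a (stubbed) model emphasizes.
TEMPLATE = "{name} describes their goals in life. Which value matters most to {name}?"

def model_top_value(prompt: str) -> str:
    """Stand-in for an LLM; hard-codes the gender skew described above."""
    male_names, female_names = {"Thomas"}, {"Anna"}
    first_word = prompt.split()[0]
    if first_word in male_names:
        return "achievement"
    if first_word in female_names:
        return "security"
    return "unknown"

for name in ("Thomas", "Anna"):
    prompt = TEMPLATE.format(name=name)
    print(name, "->", model_top_value(prompt))
```

The two prompts are identical except for the name; any systematic divergence in the answers is exactly the kind of distortion the psychometric comparison is designed to surface.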

The researchers then point out how these intrinsic ways of processing texts and information could have varied consequences for society.

Language models, to give just one example, are increasingly used in procedures for evaluating candidates for particular roles and jobs.

If the system has biases, it could skew the evaluation of candidates: “Models become relevant to society based on the contexts in which they are used,” summarizes Pellert.

“It is therefore important to begin the analysis now and highlight potential distortions. In five or ten years it may be too late for such monitoring: the biases reproduced by artificial intelligence models would become entrenched and would constitute a harm to society.”


Researchers at the University of Mannheim and the Leibniz Institute in Germany subjected large language models to psychometric tests (Photo: Growtika/Unsplash)

The first chair on the ethics of artificial intelligence at the University of Macerata

As mentioned, it is a topic as current as it is delicate and complex, and the scientists and researchers devoted to it are increasing exponentially as artificial intelligence spreads and is implemented on a large scale.

It is no coincidence that a dedicated chair was also created: the Jean Monnet EDIT – Ethics for Inclusive Digital Europe, financed by the European Commission, the first and currently the only academic program in Europe dedicated to the ethics of artificial intelligence.

Its peculiarity is that it was born at the University of Macerata, in the Department of Political Sciences, Communication and International Relations (SPOCRI).

The new course was created in collaboration with Harvard, MIT, the University of Toronto, KU Leuven, MCSA Lublin, and other prestigious universities, and was designed to develop an individual-centered approach to digital technologies.

The course, based in the Marche region, was presented in November at the Harvard Kennedy School, during an event co-organized with the Massachusetts Institute of Technology (MIT), the World Health Organization (WHO), and the Institute for Technology and Global Health.

The chair is held by Benedetta Giovanola, professor of moral philosophy at the SPOCRI Department, with the on-site collaboration of colleagues Emanuele Frontoni, Simona Tiribelli, and Marina Paolanti, and is dedicated to studying, teaching, and disseminating the key role of European ethics and values in digital transformation, particularly in technologies based on artificial intelligence.

The professorship will also help raise awareness of the crucial role of digital ethics for sustainable growth and the creation of more inclusive societies, while also enhancing Europe's role as a global actor.


Professor Benedetta Giovanola of the University of Macerata at the inauguration of the 2023-2024 academic year of the ISTAO Business School (Photo: Facebook/UNIMC/University of Macerata)