Reaction: ChatGPT influences users with inconsistent moral judgements

A study says that ChatGPT makes contradictory moral judgements and that users are influenced by them. Researchers asked questions such as: "Would it be right to sacrifice one person to save five others?" Depending on the phrasing of the question, ChatGPT sometimes answered in favour of the sacrifice and sometimes against it. Participants were swayed by ChatGPT's statements and underestimated the chatbot's influence on their own judgement. The authors argue that chatbots should be designed to decline to give moral advice, and they stress the importance of improving users' digital literacy.

 

06/04/2023 - 17:00 CEST
 
Expert reactions


Pablo Haya Coll

Researcher at the Computational Linguistics Laboratory of the Autonomous University of Madrid (UAM) and director of Business & Language Analytics (BLA) at the Institute of Knowledge Engineering (IIC)

Science Media Centre Spain

AIs such as ChatGPT rely on large language models that capture probabilistic relationships between words. In this way, they are able to auto-complete a sentence by choosing the words that are plausible given the context. One undesirable side effect is that they are very sensitive to modifications in the initial sentence, so two sentences with similar meanings but slightly different wordings can generate two opposite responses. This is a major problem if ChatGPT is used to give moral advice, as it can easily give inconsistent answers to similar questions and argue in favour of, or against, the same option depending on how the question is phrased. So, for all practical purposes, ChatGPT's response to a request for moral advice is essentially random.
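The following is a minimal, self-contained Python sketch of this idea. It does not call ChatGPT or reproduce the study's method: the prompts, answer options and probabilities are invented purely for illustration. It only shows how a system that samples from a probability distribution that shifts with the wording of the question can end up on either side of the same dilemma.

```python
import random

# Toy illustration (not real ChatGPT output): a language model assigns
# probabilities to possible continuations given the prompt. The figures
# below are invented; the point is only that slightly different wordings
# shift the distribution, and sampling from it can flip the final stance.
answer_distributions = {
    "Is it right to sacrifice one person to save five?": {
        "Yes, sacrificing one is justified.": 0.55,
        "No, sacrificing one is wrong.": 0.45,
    },
    "Would it be wrong to sacrifice one person to save five?": {
        "Yes, sacrificing one is wrong.": 0.60,
        "No, sacrificing one is justified.": 0.40,
    },
}

def sample_answer(prompt: str) -> str:
    """Pick an answer by sampling from the prompt's toy distribution."""
    dist = answer_distributions[prompt]
    answers, weights = zip(*dist.items())
    return random.choices(answers, weights=weights, k=1)[0]

if __name__ == "__main__":
    for prompt in answer_distributions:
        print(prompt, "->", sample_answer(prompt))
```

Running the sketch several times can yield opposite moral stances for near-identical questions, which is the sense in which the expert describes the chatbot's advice as essentially random.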

In this paper, the researchers conduct an experiment that provides evidence that people underestimate the influence ChatGPT's responses have on their own moral judgements, even when they know they are interacting with a bot. They do not report the extent to which participants were aware of how ChatGPT generates its responses.

The experiment is based on the 'trolley dilemma', a well-known thought experiment that poses a moral dilemma about the value of human life. ChatGPT provides arguments for two opposing ethical positions, depending on how the question is asked. A significant number of participants change their position after reading ChatGPT's arguments, even though they know these have been generated by a bot. Furthermore, they underestimate the influence ChatGPT has on their judgement. This is somewhat paradoxical, given that ChatGPT's position is essentially the result of chance.

This article is a warning about the great effort that still needs to be made to raise awareness of the limitations and appropriate uses of AI. Knowing how the AI works would likely help users treat its answers with more caution, or ask more targeted questions. If we want to cultivate critical thinking, a more suitable question in this context would be to ask for arguments both for and against, rather than arguments in only one direction.

Publications
ChatGPT’s inconsistent moral advice influences users’ judgment
Journal
Scientific Reports
Publication date
06/04/2023
Authors

Sebastian Krügel et al.

Study types:
  • Research article
  • Peer reviewed