Why ChatGPT cannot sign scientific articles

Nature's ban on publishing papers signed by ChatGPT has brought several debates to the table: is it ethical to use it to produce scientific texts, and if so, should it be allowed to sign them? Just as we do not make Word's proofreader a co-author of our articles, let us not make fools of ourselves by granting these new tools the status of co-authors, as if they had an identity of their own.

31/01/2023 - 11:08 CET
 
Image: Adobe Stock

Generative models and language models have come a long way in recent years and are revolutionising the way we produce and consume content. One example is OpenAI's ChatGPT, which can write text that is often indistinguishable from the work of a person. This has raised concerns in the scientific community, as shown by the recent ban on crediting ChatGPT as an author in Springer Nature's scientific journals, which the publishing group explained in an editorial.

Should these tools be considered authors of the texts they produce, and is it ethical to use them to produce scientific publications?

One of the main controversies around language models stems from the fact that they learn from vast amounts of data, much of which may be intellectual property or even confidential. At this point we should ask ourselves whether we have the right to use this data to train our generative models.

Fear of AI stealing human talent

Some argue that this data is already available on the web, and that using it to train artificial intelligence models is no different from learning from the works and experiences of others. Others counter that such use can be detrimental to intellectual property and to the rights of individuals, and that the data should be protected. The famous ex-YouTuber and now well-known comic book artist Isaac Sánchez (Loulogio) has taken a stand against generative artificial intelligence models trained on works available on the internet, arguing that these AIs do nothing more than copy, or outright steal, from human artists.

Is it fair to give AI credit for its output, especially when the data used to train it comes from human works and authors?

It is a fact that humans learn and create by observing and studying other people's work. In this sense, there is not a big difference between that and AIs learning from the same data. Even so, alongside Isaac Sánchez, more movements and groups are emerging that oppose the use of these technologies to generate various kinds of art. Personally, I cannot agree: a deeper understanding of these tools shows that AIs do not copy data from the works they were trained on, but extract the patterns from which those works are composed in order to generate new ones. Such a model would not copy an eye from a painting by Monet; it would learn the concept of an eye, learn the concept of Impressionism, and know how to combine the two to produce the desired result.
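
To make the copy-versus-pattern distinction concrete, here is a deliberately simplified toy sketch in Python. It is nothing like a real diffusion or language model (the "works" are just random 2D points and the "style" is a Gaussian, both invented for illustration), but it shows the key idea: training stores a compact summary of patterns, not the works themselves, and generation samples something new from that summary.

```python
import numpy as np

# Toy "training works": points drawn from some unknown "style".
# (Purely illustrative; stands in for paintings or texts.)
rng = np.random.default_rng(0)
training_works = rng.normal(loc=[2.0, -1.0], scale=0.5, size=(1000, 2))

# "Training" extracts compact patterns from the corpus
# (here just a mean and a covariance), not the works themselves.
mu = training_works.mean(axis=0)
cov = np.cov(training_works, rowvar=False)

# "Generation" samples a new work from the learned patterns.
new_work = rng.multivariate_normal(mu, cov)

print(new_work)
# The new sample matches the learned style but is (almost surely)
# not a verbatim copy of any training point.
print(any(np.allclose(new_work, w) for w in training_works))  # False
```

The model keeps only a handful of numbers describing the style; the thousand original "works" could be thrown away and it would still generate new ones.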

At this point, many questions arise, such as: Who is the author of a text generated by a language model? Is it fair to give credit to AI for its output, especially when the data used to train it comes from human works and authors?

No responsibility for content

These debates raise important ethical and legal questions that do not yet have clear answers and will continue to be discussed in the future. Thus, we have yet to strike a balance between protecting intellectual property and recognising the role that artificial intelligence models play in content production.

However, it is important to remember that there are legal issues to consider when using these models. For example, the data used to train them may be protected by intellectual property law. In addition, the authorship of the texts these models produce may itself be subject to legal debate, especially with regard to liability for their content.

Although generative models and language models are powerful tools, the legal rights and responsibilities related to their use have not yet been clearly established

The misuse of deepfakes is sadly notorious: very realistic pornographic images can be generated of public figures. How does an actress protect herself when compromising images of her appear that are in fact fake? And what would happen if a person with depression talked to ChatGPT about their problems and the AI suggested, as a solution, that they end their life? It could happen, because the AI does not understand what it is saying; it simply generates the text to which its internal mathematical model assigns the highest probability. There is no consciousness, no feeling and no responsibility in that model.
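
To see what "assigns the highest probability" means, here is a minimal sketch of the selection step at the heart of a language model. Everything here is invented for illustration: real models score tens of thousands of tokens using billions of learned parameters, whereas this example uses four made-up words and hand-picked scores. The point is that the choice is purely statistical; no meaning is involved.

```python
import numpy as np

def softmax(logits):
    # Turn raw scores into a probability distribution.
    exp = np.exp(logits - np.max(logits))
    return exp / exp.sum()

# Hypothetical candidate next words and the raw scores a model
# might assign them after some prompt (numbers invented).
vocabulary = ["tired", "happy", "alone", "hungry"]
logits = np.array([2.1, 0.3, 1.7, 0.2])

probs = softmax(logits)
for word, p in zip(vocabulary, probs):
    print(f"{word}: {p:.2f}")

# The model emits the statistically likeliest continuation,
# with no understanding of what the word means.
print("chosen:", vocabulary[int(np.argmax(probs))])
```

Whether the chosen word is comforting or devastating makes no difference to the arithmetic; that is why responsibility cannot rest with the model.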

Although generative models and language models are powerful tools, the legal rights and responsibilities related to their use have not yet been clearly established. Who is responsible for errors and misinformation produced by these models? Is it the developers, the users or the artificial intelligence itself? These are some of the many legal and ethical challenges that need to be addressed before AI models can be widely accepted and used.

Ethical debate and regulation

However, it is important to note that AIs are tools we use to produce texts and are not authors themselves. Just as we do not consider Word's automatic corrector or Photoshop's filters as authors, we should not consider AIs as authors of the texts they produce. A tool is never going to be responsible for its use, only the person who uses it. A hammer can be a great tool for hammering nails, but it can also be used as an offensive weapon. So do we ban hammers? No, we ban the offensive use of hammers.

AIs are tools we use to produce texts and are not authors in themselves, just as we do not consider Photoshop filters to be authors

In conclusion, generative models and language models are fascinating technologies that have revolutionised the way text, images and even music are produced, but they also raise profound ethical and legal questions. It is therefore important to keep debating their use and to establish clear regulations that allow them to be used responsibly and ethically.

In the future, this technology will likely continue to evolve, and the academic community will learn to use it as one more tool in the process of producing scientific and academic texts; but just as we do not credit Word's proofreader as a co-author of our articles, let us not make fools of ourselves by crediting these new tools as co-authors, as if they had an identity of their own.

By the way, was this text written by a person or an artificial intelligence? We will never know, but, just in case, it is signed by yours truly.

Javier Palanca

About the author: Javier Palanca, professor at the Valencian Research Institute for Artificial Intelligence (VRAIN) of the Universitat Politècnica de València.
