Large language models (LLMs) radically speed up text production in a variety of use cases. When fed samples of our individual writing, they can even produce texts that sound as though we wrote them ourselves. In other words, they act as AI ghostwriters, creating texts on our behalf.
As with human ghostwriting, this raises a number of questions on authorship and ownership. A team led by media informatics expert Fiona Draxler at LMU's Institute for Informatics has investigated these questions around AI ghostwriting in a study that was recently published in the journal ACM Transactions on Computer-Human Interaction.
"Rather than looking at the legal side, however, we covered the human perspective," says Draxler. "When an LLM relies on my writing style to generate a text, to what extent is it mine? Do I feel like I own the text? Do I claim that I am the author?"
To answer these questions, the researchers, experts in human-computer interaction, conducted an experiment in which participants wrote a postcard with or without the help of an AI language model that was (pseudo-)personalized to their writing style. The researchers then asked the participants to publish the postcard via an upload form and to provide some additional information about it, including the author and a title.
"The more involved participants were in writing the postcards, the more strongly they felt that the postcards were theirs," explains Professor Albrecht Schmidt, co-author of the study and Chair of Human-Centered Ubiquitous Media. That is to say, perceived ownership was high when they wrote the text themselves, and low when the postcard text was wholly LLM-generated.
However, perceived ownership of the text did not always align with declared authorship. In a number of cases, participants listed themselves as the author of the postcard even though they had not written it and did not feel they owned it. This recalls ghostwriting practices, where the declared author is not the actual text producer.
"Our findings highlight challenges that we need to address as we increasingly rely on AI text generation with personalized LLMs in personal and professional contexts," says Draxler. "In particular, when the lack of transparent authorship declarations or bylines makes us doubt whether an AI contributed to writing a text, this can undermine its credibility and the readers' trust. However, transparency is essential in a society that already has to deal with widespread fake news and conspiracy theories."
The study's authors therefore call for simple, intuitive ways to declare individual contributions that reward disclosure of the generation process.
More information: Fiona Draxler et al., "The AI Ghostwriter Effect: When Users Do Not Perceive Ownership of AI-Generated Text But Self-Declare as Authors," ACM Transactions on Computer-Human Interaction (2023). DOI: 10.1145/3637875