Credit: Pixabay/CC0 Public Domain

A trio of ethicists at the University of Oxford's Oxford Internet Institute has published a paper in the journal Royal Society Open Science questioning whether the makers of LLMs have legal obligations regarding the accuracy of the answers they give to user queries.

The authors, Sandra Wachter, Brent Mittelstadt and Chris Russell, also wonder if companies should be required to add features to results that allow users to better judge whether the answers they receive are accurate.

As LLMs move into the mainstream, their use has become a matter of debate: should students be allowed to use them on homework assignments, for example, and should businesses be able to rely on them for serious work?

These questions have become increasingly salient because LLMs quite often make mistakes, sometimes very big ones. In this new effort, the team at Oxford suggests that the makers of LLMs should be held more accountable for their products, given the seriousness of the issues that have been raised.

The researchers acknowledge that LLM makers cannot currently be legally bound to build models that give only correct and reasonable answers, because doing so is not yet technically feasible. But they also suggest that companies should not get off scot-free.

They wonder whether the time has come to enact a legal duty requiring LLM makers to place a greater emphasis on truthfulness in their products. And if that proves impossible, they suggest LLM makers could at least be required to add features that help users judge whether answers are correct, such as citations, or features that give users some sense of the model's confidence in a given answer.

If a chatbot is not sure about an answer, they note, perhaps it could simply say so rather than generate a ridiculous one.
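As a rough sketch of what such a feature might look like (an illustration, not anything proposed in the paper), a chatbot wrapper could derive a crude confidence score from the model's token log-probabilities and abstain below a threshold. The function name, input format, and threshold below are all hypothetical:

```python
import math

# Hypothetical sketch: abstain when the model's own token log-probabilities
# suggest low confidence. `token_logprobs` stands in for whatever per-token
# scores a real LLM API returns; the 0.75 threshold is arbitrary.

def answer_or_abstain(answer: str, token_logprobs: list[float],
                      threshold: float = 0.75) -> str:
    # Geometric mean of token probabilities as a crude confidence proxy.
    avg_logprob = sum(token_logprobs) / len(token_logprobs)
    confidence = math.exp(avg_logprob)
    if confidence < threshold:
        return "I'm not confident enough to answer that reliably."
    return f"{answer} (confidence ~{confidence:.0%})"

# Example: a fairly confident generation vs. a shaky one.
print(answer_or_abstain("Paris is the capital of France.",
                        [-0.05, -0.02, -0.08, -0.01]))
print(answer_or_abstain("The treaty was signed in 1437.",
                        [-0.9, -1.4, -0.7, -2.1]))
```

A real system would need a better-calibrated signal than raw token probabilities, which are not a reliable measure of factual accuracy, but the sketch shows the basic idea of abstaining rather than guessing.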

The researchers also suggest that LLMs used in high-risk areas should be trained only on genuinely reliable data, thereby greatly increasing their accuracy. They suggest their work offers a pathway to possible improvements in LLM accuracy, an issue that could grow in importance as these models become a more common source of information.

More information: Sandra Wachter et al, Do large language models have a legal duty to tell the truth?, Royal Society Open Science (2024). DOI: 10.1098/rsos.240197

© 2024 Science X Network

Citation: Ethicists wonder if LLM makers have a legal duty to ensure reliability (2024, August 7) retrieved 7 August 2024 from https://techxplore.com/news/2024-08-ethicists-llm-makers-legal-duty.html

This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.