A study co-authored by Associate Professor Juliana Schroeder found that people view customer service agents that make typographical errors—and correct them—as more human and sometimes even more helpful.
"For decades, people worked to make machines smarter and less prone to errors," Schroeder says. "Now that we're living through real-world Turing tests in most of our online interactions, an error can actually be a beneficial cue for signaling humanness."
In a paper published in the Journal of the Association for Consumer Research, Schroeder and colleagues from Yeshiva University, Stanford, and the University of Colorado Boulder developed their own chatbot—named Angela—and conducted five studies involving over 3,000 participants.
Across all studies, participants rated agents that made and corrected typos as more human than those that made no typos or left typos uncorrected. They also viewed them more warmly.
The effect was strongest when participants did not know if the agent was a bot or a human, but interestingly, it held even when participants were told this information. "Seeing an agent correct a typo led people to expect the agent would be more helpful," Schroeder says.
Prior research dating back to the 1960s—dubbed the "Pratfall Effect"—showed that under certain conditions, making mistakes can increase a person's likability. But other studies have shown that communicators who make typos, spelling mistakes, or grammatical errors are seen as less intelligent or competent than those who don't. Schroeder and her co-authors suggest it's what happens after an error is made that can make the difference.
"We suspect that correcting an error is humanizing because it shows an engaged mind," she says. "It's a sign that the communicator cares about how they're perceived."
The researchers, who include Shirley Bluvstein from Yeshiva University, Xuan Zhao from Stanford, and Alixandra Barasch from the University of Colorado Boulder, do not suggest that companies intentionally program typos into their chatbots—a tactic that could be seen as manipulative and raise ethical questions.
Recent policy efforts in some states require bots to disclose their identities or require companies to watermark AI-generated content. Yet if a genuine mistake is made and the chatbot (or person) has the wherewithal to address it, this may impress customers.
Overall, the findings suggest that it may be possible to improve chatbots by implementing humanizing cues—such as fixing mistakes—while remaining transparent with consumers. These cues, the researchers say, "can signal a company's dedication to connecting with consumers, potentially offsetting the impersonal and dehumanizing nature of text-based interactions."
More information: Shirley Bluvstein et al, Imperfectly Human: The Humanizing Potential of (Corrected) Errors in Text-Based Communication, Journal of the Association for Consumer Research (2023). DOI: 10.1086/728412
Citation: To err is human—and in the age of AI, it may be humanizing (2024, August 1) retrieved 1 August 2024 from https://techxplore.com/news/2024-08-err-human-age-ai-humanizing.html