Admit it. You rely on navigation apps to help you get around almost every day, whether you drive, take the bus or train, walk, or hike from point A to B.
In foreign cities, we depend on apps such as Waze and Google Maps to help us discover new places. At home, we use these apps to beat rush-hour traffic and find the quickest routes to school, work, and other places we frequent.
But sometimes we doubt a suggested turn; we question a re-route; we second-guess an arrival-time estimate.
Maybe it's because we're in a hurry. Maybe we just trust our own instincts more. Whatever the reason, there are situations that compel us to turn off the app and go rogue, so to speak.
But what exactly is going on in our heads when we override suggestions or recommendations made by automated systems? Yael Karlinsky-Shichor, a recently appointed assistant professor of marketing at Northeastern, is on a quest to find out.
Karlinsky-Shichor's research focuses on the automation of decision-making and its application to marketing. She also studies the psychological aspects of using automation and artificial intelligence models. Wait, automation and marketing? Absolutely, says Karlinsky-Shichor. The two domains intersect more than you might think.
"Many of the topics that we investigate in marketing today you can also find in information systems," she says. "It was really nice for me to broaden my view on those topics and look at them from a marketing perspective, but also continue to look at the topics that involve technology and user interaction with technology."
Here's a case in point: Karlinsky-Shichor and her research colleagues ran a field experiment in which they tried to assess who could generate a higher profit for a business-to-business company that sells aluminum—humans or machines? They did this by creating an automated system that learned and reapplied every salesperson's pricing decisions.
They found that when salespeople used the prices recommended by the automated system, the company made more money. But interestingly, they also learned that pairing the system with a high-performing sales representative yielded even better results.
"We use machine learning to automatically decide who should make the pricing decision—the salesperson or the model," Karlinsky-Shichor says. "What we find is that a hybrid structure that lets the model price most of the quotes that come into the company but lets the expert salesperson take those cases that are more unique or out of the ordinary actually performs even better."
Here's why. Humans are unpredictable and fickle, but they are also more adept at handling unpredictability. They have the advantage when it comes to meeting new clients, for example, and gauging a client's needs and willingness to pay. Machines, however, have a leg up in technical, repetitive, and scalable tasks, and they avoid the behavioral inconsistencies that people often display. Together, they're an unbeatable duo.
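To make the hybrid idea concrete, here is a minimal, purely illustrative sketch of how such a routing rule might look in code. It is not the researchers' actual model; the features, data, and the use of an outlier detector to flag "out of the ordinary" quotes are assumptions made for illustration.

```python
# Illustrative sketch only: a hybrid "who should price this quote?" router.
# Features, data, and model choices are hypothetical, not the study's method.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor, IsolationForest

rng = np.random.default_rng(0)

# Hypothetical historical quotes: [order size, client tenure, product grade]
X_history = rng.normal(size=(500, 3))
# Hypothetical prices the salespeople actually quoted for those orders
y_history = 100 + 5 * X_history[:, 0] + rng.normal(scale=2, size=500)

# 1) Pricing model: learns to reproduce past salesperson pricing decisions.
pricing_model = GradientBoostingRegressor().fit(X_history, y_history)

# 2) Router: flags quotes that look unlike anything in the history,
#    so the human expert handles the unusual cases.
router = IsolationForest(random_state=0).fit(X_history)

def price_quote(quote_features, ask_salesperson):
    """Model prices typical quotes; the expert prices the outliers."""
    is_typical = router.predict(quote_features.reshape(1, -1))[0] == 1
    if is_typical:
        return float(pricing_model.predict(quote_features.reshape(1, -1))[0])
    return ask_salesperson(quote_features)

# Example: an unusual quote falls back to the (stubbed) salesperson.
unusual_quote = np.array([8.0, -6.0, 7.5])
print(price_quote(unusual_quote, ask_salesperson=lambda q: 250.0))
```

The design choice mirrors the quote above: the model handles the bulk of routine quotes, while anything it has little precedent for is handed back to the human expert.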
"In many cases, people think that AI models are going to replace human jobs," says Karlinsky-Shichor. "What I find—and it's insight that comes up in many domains—is that instead of replacing humans, AI will complement them."
Two things happened after the researchers completed their case study. The company went forward with implementing the pricing process recommended by the automated system. And, the company's chief executive officer came back to Karlinsky-Shichor and her colleagues with an interesting offer.
"He said, 'well, why don't you go and take my best salesperson and create a model based on that salesperson? That model is going to give us the best results,'" she says. "But actually, we found that this is not the case. Even the best salesperson did not necessarily have an expertise that applied to every single case in this company."
The researchers found that, in fact, pooling the expertise of multiple salespeople generated a better outcome for the company's bottom line than relying on the single highest-performing salesperson. They are now working on an automation approach that combines the wisdom of crowds with individual expertise, she says.
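One simple way to picture "pooling the expertise of multiple salespeople" is to train one model per salesperson and average what each would have quoted. The sketch below is a hedged illustration under that assumption; the names, data, and averaging rule are hypothetical, not the approach described in the paper.

```python
# Illustrative sketch only: pooling per-salesperson pricing models
# ("wisdom of crowds") instead of relying on the single best performer.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)

# Hypothetical quote histories for three salespeople: features -> quoted price.
salesperson_histories = {
    name: (rng.normal(size=(200, 3)), 100 + rng.normal(scale=3, size=200))
    for name in ["anna", "ben", "chen"]
}

# One model per salesperson, each learning that person's pricing behavior.
models = {
    name: LinearRegression().fit(X, y)
    for name, (X, y) in salesperson_histories.items()
}

def pooled_price(quote_features):
    """Average the prices every salesperson model would have quoted."""
    quotes = [m.predict(quote_features.reshape(1, -1))[0] for m in models.values()]
    return float(np.mean(quotes))

print(round(pooled_price(np.array([0.2, -1.0, 0.5])), 2))
```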
Karlinsky-Shichor is also tackling a different, but related problem: How do you get people to faithfully follow suggestions or recommendations made by automated models? This issue of compliance is a challenge regularly faced by companies that use such systems, she says.
Again, she points to the business-to-business pricing experiment.
"What we see is that salespeople generally take the price recommended by the model when they either anticipate a low risk in the change, or it seems like there's a big difference in the price when going with the model," she says. "So one of my conjectures is that if they're very confident, or when they have no clue, they use the model's recommendation."
Karlinsky-Shichor will continue exploring this intertwined field of marketing and artificial intelligence as a researcher at Northeastern. She believes she's at the right place for this work.
"For me, Northeastern is a great combination of a school that puts research as a high priority, but also puts a lot of emphasis on the application of the research," she says. "I am generally interested in problems that not just us researchers, but also companies, care about."
More information: Yael Karlinsky-Shichor et al. Automating the B2B Salesperson Pricing Decisions: Can Machines Replace Humans and When?, SSRN Electronic Journal (2019). DOI: 10.2139/ssrn.3368402