
BLOOMINGTON, Ind. – Google’s launch of Bard, its search-integrated, AI-powered chatbot, went wrong when the bot’s first advertisement inadvertently showed it failing to find and present accurate information to users.
Research by professors at the Indiana University Kelley School of Business and the University of Minnesota’s Carlson School of Management explains why it may be harder for the creator of the world’s largest search engine to write off the situation as a temporary issue.
Although it isn’t uncommon for software vendors to release incomplete products and subsequently fix bugs and provide additional features, the research shows this may not be the best strategy for AI.

As seen through a one-day $100 billion decrease in market value for Alphabet, Google’s parent company, a botched demo can cause significant damage. Findings in an article published in the journal ACM Transactions on Computer-Human Interaction indicate that errors occurring early in users’ interactions with an algorithm can have a lasting negative impact on trust and reliance.
Antino Kim and Jingjing Zhang, associate professors of operations and decision technologies at Kelley, are co-authors of the paper, “When Algorithms Err: Differential Impact of Early vs. Late Errors on Users’ Reliance on Algorithms,” with Mochen Yang, assistant professor of information and decision sciences at Carlson. Zhang also is co-director of the Institute for Business Analytics at Kelley. Yang taught at Kelley in 2018-19.
In a phenomenon known as “algorithm aversion,” users tend to avoid algorithms, particularly after encountering an error. The researchers found that giving users more control over AI results can alleviate some of the negative impacts of early errors.
Kim, Yang and Zhang examined the situation through the lens of their research and present their analysis below: