Bender is not against the use of language models for question-answer exchange in all cases. She has a Google Assistant in her kitchen, which she uses to convert units of measurement in a recipe. “There are times when it’s super convenient to be able to use speech to access information,” she says.
But Shah and Bender also give a more disturbing example that surfaced last year, when Google responded to the question “What’s the ugliest language in India?” with the excerpt “The answer is Kannada, a language spoken by approximately 40 million people in South India.”
No easy answers
There is a dilemma here. Direct answers can be helpful, but they are also often incorrect, irrelevant, or offensive. They can hide the complexities of the real world, says Benno Stein of Bauhaus University in Weimar, Germany. In 2020, Stein and his colleagues Martin Potthast of the University of Leipzig and Matthias Hagen of the Martin Luther University in Halle-Wittenberg, Germany, published a paper emphasizing the problems with direct answers. “The answer to most questions is ‘It depends,’” says Hagen. “This is hard to get across to someone who is searching.”
Stein and his colleagues view search technologies as having evolved from organizing and filtering information, through techniques such as providing a list of documents matching a query, to making recommendations in the form of a single answer to a question. And they think that is a step too far.
Again, the problem is not the limitations of existing technology. Even with perfect technology, we wouldn’t get perfect answers, Stein says: “We don’t know what a good answer is because the world is complex, but we stop thinking about that when we see these direct answers.”
Shah agrees. Giving people a single answer can be problematic because the sources of that information and any disagreement between them are hidden, he says: “It really depends on whether we trust these systems completely.”
Shah and Bender suggest some solutions to the problems they anticipate. In general, search technologies should support the various ways people use search engines today, many of which are not served by instant answers. People often use search to research topics they may not even have specific questions about, Shah says. In this case, it would be more helpful to simply provide a list of documents.
It should be clear where information is coming from, especially if an AI pulls pieces from more than one source. Some voice assistants already do this, for example prefixing an answer with “This is what I found on Wikipedia.” Future search engines should also have the ability to say, “That’s a stupid question,” Shah says. This would help the technology keep abusive or biased premises in a query from being repeated back in an answer.
Stein suggests that AI-based search engines can provide reasons for their answers, giving pros and cons from different points of view.
However, many of these suggestions simply emphasize the dilemma that Stein and his colleagues identified. Anything that diminishes convenience will be less appealing to most users. “If you don’t click through to the second page of Google results, you don’t want to read any other arguments,” Stein says.
Google says it is aware of many of the issues raised by these researchers and is working hard to develop technology that people find useful. But Google is the developer of a multi-billion-dollar service. Ultimately, it will build the tools that bring in the most people.
Stein hopes it doesn’t all depend on convenience. “Search is so important to us, to society,” he says.