Google uses AI to better detect searches from people in crisis


In a personal crisis, many people turn to an impersonal source of support: Google. Every day, people search it for topics such as suicide, sexual assault and domestic violence. But Google says it wants to do more to direct people to the information they need, and that new AI techniques that better parse the complexities of language are helping.

Specifically, Google is integrating its latest machine learning model, MUM, into its search engine to more accurately detect a wider range of personal crisis searches. The company unveiled MUM at its I/O conference last year and has since used it to augment search with features that attempt to answer questions connected to the original query.

MUM is able to spot search queries related to difficult personal situations that earlier search tools could not, says Anne Merritt, a Google product manager for health and information quality.

“MUM can help us understand longer or more complex queries like ‘why did he attack me when I said I don’t love him,’” Merritt told The Verge. “It may be obvious to humans that this query is about domestic violence, but long, natural-language questions like these are difficult for our systems to understand without advanced AI.”

Other examples of queries MUM can respond to include “most common ways suicide is completed” (a search that Merritt says earlier systems “may have previously understood as information seeking”) and “Sydney suicide hotspots” (where, again, earlier responses would likely have returned travel information, ignoring the mention of “suicide” in favor of the more popular query for “hotspots”). When Google detects such crisis searches, it responds with an information box telling users that help is available, usually accompanied by a phone number or website for a mental health charity such as Samaritans.
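To make the keyword-versus-intent distinction concrete, here is a minimal sketch of how a language model can weigh a query's overall intent rather than its most popular keyword. It uses an open zero-shot classification model from Hugging Face (facebook/bart-large-mnli) purely for illustration; Google's MUM is not publicly available, and this is not its actual pipeline. The candidate labels below are hypothetical.

# A minimal sketch of intent classification for search queries, using an
# open zero-shot model. This is NOT Google's MUM, which is not public;
# it only illustrates scoring a whole query against candidate intents
# instead of matching individual keywords like "hotspots".
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

# Hypothetical labels; a production system would use carefully curated
# intents and thresholds validated by human evaluators.
labels = ["personal crisis", "travel information", "general information"]

for query in ["Sydney suicide hotspots", "best beach hotspots in Sydney"]:
    result = classifier(query, candidate_labels=labels)
    # Labels come back sorted by score, highest first.
    print(f"{query!r} -> {result['labels'][0]} ({result['scores'][0]:.2f})")

A real system would then gate the crisis information box on the top label and its confidence score, which is one reason human evaluation of thresholds matters.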

In addition to using MUM to respond to personal crises, Google says it is also using an older AI language model, BERT, to better identify searches for explicit content such as pornography. Using BERT, Google says it has “reduced unexpected shocking results by 30%” year on year. However, the company would not share absolute figures for how many “shocking results” users encounter on average, so while this is a comparative improvement, it gives no indication of how big or small the problem actually is.

Google is keen to tell you that AI is helping the company improve its search products, especially at a time when a narrative is building that “Google search is dying.” But integrating this technology also has its drawbacks.

Many AI experts warn that Google’s increasing use of machine learning language models could surface new problems for the company, such as introducing biases and misinformation into search results. AI systems are also opaque, offering engineers limited insight into how they reach certain conclusions.

For example, when we asked how Google verifies in advance which search terms identified by MUM are associated with personal crises, its representatives were either unwilling or unable to answer. The company says it rigorously tests changes to its search products using human evaluators, but that’s not the same as knowing in advance how your AI system will respond to certain queries. For Google, though, such tradeoffs are apparently worth it.
