
How DeepMind thinks it can make chatbots safer


By Shreya Christina, cafe-madrid.com

Some technologists hope that one day we will develop a super-intelligent AI system that people can hold conversations with. Ask it a question, and it will give an answer that sounds like something put together by a human expert. You could use it to ask for medical advice or to plan a vacation. Well, at least that’s the idea.

In reality, we are still a long way from that. Even today’s most advanced systems are pretty stupid. I once got Meta’s AI chatbot BlenderBot to tell me that a prominent Dutch politician was a terrorist. In experiments where AI-powered chatbots were used to offer medical advice, they told patients to kill themselves. Doesn’t fill you with much optimism, does it?

That’s why AI labs are working hard to make their conversational AIs safer and more useful before releasing them into the real world. I just published a story about the latest effort from DeepMind, the Alphabet-owned AI lab: a new chatbot called Sparrow.

DeepMind’s new trick for making a good AI-powered chatbot was to have humans tell it how to behave, and to force it to back up its claims using Google Search. Human participants were then asked to rate how plausible the AI system’s responses were. The idea is to keep training the AI through this human-machine dialogue.
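To make the idea concrete, here is a minimal, hypothetical sketch in Python of that kind of human-in-the-loop setup: the model answers with supporting search snippets, human raters score how plausible and well-supported each answer is, and those ratings become training signal. The `chatbot`, `search`, and `rater` objects and their methods are illustrative assumptions, not DeepMind’s actual Sparrow implementation.

```python
# Hypothetical sketch of feedback collection for a search-grounded chatbot.
# All object interfaces here (chatbot.respond, rater.score, etc.) are
# placeholders, not a real library or DeepMind's API.

from dataclasses import dataclass


@dataclass
class RatedResponse:
    question: str
    answer: str
    evidence: list[str]   # search snippets the model cited to back its claim
    rating: float         # human judgment, e.g. 0.0 (implausible) to 1.0 (plausible)


def collect_feedback(chatbot, search, rater, questions):
    """Run one round of dialogue: fetch evidence, answer, and have a human rate it."""
    examples = []
    for question in questions:
        evidence = search(question)                  # e.g. top search snippets
        answer = chatbot.respond(question, evidence) # answer must cite the evidence
        rating = rater.score(question, answer, evidence)
        examples.append(RatedResponse(question, answer, evidence, rating))
    return examples


def training_loop(chatbot, search, rater, questions, rounds=3):
    """Alternate between human-machine dialogue and model updates."""
    for _ in range(rounds):
        examples = collect_feedback(chatbot, search, rater, questions)
        chatbot.update_from_feedback(examples)       # e.g. favor highly rated answers
    return chatbot
```

The point of the sketch is the loop structure, not the specifics: answers are grounded in retrieved evidence, and human ratings, rather than a fixed benchmark, steer how the model is updated.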

While reporting the story, I spoke to Sara Hooker, who leads Cohere for AI, a nonprofit AI research lab.

She told me that one of the biggest hurdles to safely deploying conversational AI systems is their brittleness: they perform brilliantly until they’re taken into unfamiliar territory, where they can behave unpredictably.

“It’s also a difficult problem to solve because two people may disagree about whether a conversation is inappropriate. And even if we agree that something is appropriate at the moment, it can change over time, or depend on shared context which can be subjective,” says Hooker.

Despite this, DeepMind’s findings underscore that AI safety is not just a technical fix. You need humans in the loop.
