Tuesday, September 26, 2023

AI models spit out photos of real people and copyrighted images


Shreya Christina, cafe-madrid.com

Stable Diffusion is open source, which means anyone can analyze and research it. Imagen is closed, but Google granted the researchers access. Singh says the work is a good example of why research access to these models matters, and he argues that companies should be similarly transparent with other AI models, such as OpenAI’s ChatGPT.

While the results are impressive, there are some caveats. The images the researchers managed to extract appeared multiple times in the training data or were very unusual compared to other images in the dataset, says Florian Tramèr, an assistant professor of computer science at ETH Zurich, who was part of the group.

People who look unusual or have unusual names are more likely to be remembered, says Tramèr.

The researchers were able to extract only a relatively small number of exact copies of individual photos from the AI model: only about one in a million generated images was a copy, according to Webster.
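Counting a generation as a "copy" requires some definition of near-duplication. The paper's actual criterion is more careful than this, but as a rough, hypothetical illustration, a pixel-level distance check against the training set might look like the following sketch (the threshold value and helper name are assumptions, not the researchers' method):

```python
import numpy as np

def is_near_copy(generated, training_images, threshold=0.1):
    """Flag a generated image as a near-copy if it falls within
    `threshold` normalized L2 distance of any training image.
    (Toy criterion for illustration only.)"""
    for train in training_images:
        # Normalize by sqrt(pixel count) so the threshold is
        # a per-pixel RMS difference, independent of image size.
        dist = np.linalg.norm(generated - train) / np.sqrt(generated.size)
        if dist < threshold:
            return True
    return False

rng = np.random.default_rng(0)
train = [rng.random((8, 8)) for _ in range(5)]

# A memorized output: a training image plus tiny noise.
memorized = train[2] + rng.normal(0, 0.01, (8, 8))
# A novel output: unrelated random pixels.
novel = rng.random((8, 8))

print(is_near_copy(memorized, train))  # True
print(is_near_copy(novel, train))      # False
```

In practice such comparisons are usually done in an embedding or perceptual-hash space rather than raw pixels, since trivial shifts or recompression would defeat a pixel-level check.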

But that’s still worrying, says Tramèr: “I really hope no one is going to look at these results and say, ‘Oh, actually these numbers aren’t that bad if it’s just one in a million.'”

“The fact that they are greater than zero is what matters,” he adds.
