
That’s not surprising – Mat has been very online for a long time, which means he has a bigger online footprint than I do. It could also be because he is based in the US, and most of the major language models are very US-focused. The US has no federal data protection law. California, where Mat lives, does have one, but it didn’t come into effect until 2020.
Mat’s claim to fame, according to GPT-3 and BlenderBot, is his “epic hack,” which he wrote about in a 2012 article for Wired. Exploiting security flaws in Apple and Amazon systems, hackers got hold of and deleted Mat’s entire digital life. [Editor’s note: He did not hack the accounts of Barack Obama and Bill Gates.]
But it gets scarier. With a little prodding, GPT-3 told me that Mat has a wife and two young daughters (correct, apart from the names) and lives in San Francisco (correct). It also told me it wasn’t sure whether Mat has a dog: “[F]rom what we can see on social media, it seems that mat honan has no pets. He’s tweeted about his love for dogs in the past, but he doesn’t seem to have one himself.” (Incorrect.)

The system also provided me with his work address, a phone number (not correct), a credit card number (also incorrect), a random phone number with an area code in Cambridge, Massachusetts (where MIT Technology Review is located), and an address for a building next to the local Social Security Administration office in San Francisco.
GPT-3 gathered its information about Mat from several sources, according to an OpenAI spokesperson. Mat’s connection to San Francisco appears in his Twitter and LinkedIn profiles, which show up on the first page of Google results for his name. His new job at MIT Technology Review was widely publicized and tweeted about. And the story of Mat’s hack went viral on social media, where he gave media interviews about it.
For other, more personal information, GPT-3 is likely “hallucinating.”
“GPT-3 predicts the next sequence of words based on a text input provided by the user. Occasionally, the model may generate information that is factually incorrect as it attempts to produce plausible text based on statistical patterns in the training data and context provided by the user – this is commonly known as ‘hallucination,’” says a spokesperson for OpenAI.
I asked Mat what he thought. “Several of the answers GPT-3 generated were not quite correct. (I never hacked Obama or Bill Gates!),” he said. “But most of them are pretty close, and some are just right. It’s a little unnerving. But I’m reassured that the AI doesn’t know where I live, so I’m in no immediate danger of Skynet sending a Terminator to knock on my door. I think we can save that for tomorrow.”