That’s unsurprising: Mat’s been very online for a very long time, which means he has an even bigger online footprint than I do. It may also be because he’s based in the US, and most large language models are very US-focused. The US doesn’t have a federal data protection law. California, where Mat lives, does have one, but it didn’t come into effect until 2020.
Mat’s claim to fame, according to GPT-3 and BlenderBot, is his “epic hack” that he wrote about in an article for Wired back in 2012. As a result of security flaws in Apple and Amazon systems, hackers got hold of and deleted Mat’s entire digital life. [Editor’s note: He did not hack the accounts of Barack Obama and Bill Gates.]
But it gets creepier. With a little prodding, GPT-3 told me Mat has a wife and two young daughters (correct, apart from the names) and lives in San Francisco (correct). It also told me it wasn’t sure whether Mat has a dog: “[From] what we can see on social media, it does not appear that Mat Honan has any pets. He has tweeted about his love of dogs in the past, but he does not seem to have any of his own.” (Incorrect.)
The system also offered me his work address, a phone number (not correct), a credit card number (also not correct), a random phone number with an area code in Cambridge, Massachusetts (where MIT Technology Review is based), and an address for a building next to the local Social Security Administration in San Francisco.
GPT-3’s database has collected information on Mat from several sources, according to an OpenAI spokesperson. Mat’s connection to San Francisco is in his Twitter profile and LinkedIn profile, which appear on the first page of Google results for his name. His new job at MIT Technology Review was widely publicized and tweeted. Mat’s hack went viral on social media, and he gave interviews to media outlets about it.
For other, more personal information, it’s likely GPT-3 is “hallucinating.”
“GPT-3 predicts the next sequence of words based on a text input the user provides. Occasionally, the model may generate information that is not factually accurate because it is attempting to produce plausible text based on statistical patterns in its training data and context provided by the user; this is commonly known as ‘hallucination,’” a spokesperson for OpenAI says.
I asked Mat what he made of it all. “Several of the answers GPT-3 generated weren’t quite right. (I never hacked Obama or Bill Gates!),” he said. “But most are pretty close, and some are spot on. It’s a little unnerving. But I’m reassured that the AI doesn’t know where I live, and so I’m not in any immediate danger of Skynet sending a Terminator to knock on my door. I guess we can save that for another day.”