Black Mirror came out of the gate incredibly hot when its first episode premiered in the United Kingdom in 2011, boasting a grotesquely absurd premise that ultimately turned out to be less far-fetched than you'd think thanks to allegations that surfaced four years after it aired.
The show’s ability to depict a disturbingly plausible vision of the future is a major reason it’s been able to amass a sizeable fanbase.
Showrunner Charlie Brooker has an uncanny ability to keep his finger on the pulse of technological trends, to the point where he's anticipated multiple real-world innovations dreamed up by people with slightly more optimistic views about their applications.
The sixth season of Black Mirror is slated to arrive on Netflix on June 15th. While it was in production long before ChatGPT took the world by storm, the man who's overseen a number of episodes revolving around the potential threat artificial intelligence poses to humanity hasn't been able to resist the urge to see what all of the hype is about.
Plenty of A.I. evangelists have argued ChatGPT and similar platforms could eventually make a number of occupations obsolete, but in a fairly ironic twist, it seems the "Learn to Code" crowd may be facing a bigger existential threat than creative minds like Brooker, who doesn't seem to think his job is in any immediate danger.
The Black Mirror creator addressed that particular topic during an interview with Empire where he revealed he asked ChatGPT to write an episode of the show that failed to meet his standards, saying:
“I’ve toyed around with ChatGPT a bit. The first thing I did was type ‘generate Black Mirror episode’ and it comes up with something that, at first glance, reads plausibly, but on second glance, is s***.
Because all it’s done is look up all the synopses of Black Mirror episodes, and sort of mush them together. Then if you dig a bit more deeply you go, ‘Oh, there’s not actually any real original thought here.'”
Brooker's anecdote sums up one of the biggest issues with ChatGPT and other digital entities that are really just "language models": they learn from (and, in many cases, essentially plagiarize) things written by actual humans, which means their ability to come up with truly original ideas currently leaves a bit to be desired.