Based on some of the advancements in artificial intelligence that have been unveiled in recent years, it seems like way too many people out there watched an episode of Black Mirror and thought, “Hey, that’s a great idea.” You’re kind of missing the point if you treat a show devoted to highlighting the potential pitfalls of our crippling reliance on technology as a source of inspiration.
I’m sure most of the people who’ve devoted their lives to figuring out how to harness the power of artificial intelligence have good intentions, but the same could be said for the researchers who brought Jurassic Park to life (and we all know how well that worked out for them). There’s no denying that it’s wild to live in a world where we can “talk” with people after they’ve died and even have a computer predict when we’re going to die, but as the aforementioned movie taught us, it’s easy to get so preoccupied with whether or not you can do something that you never take a second to ask yourself if you should.
Whenever Boston Dynamics releases a video showcasing a new skill one of its robots has learned, there’s always an avalanche of people who respond by joking about the “robot overlords” that will eventually bring humanity to its knees. However, there’s plenty of evidence suggesting that outcome isn’t a laughing matter, as some people who know more about A.I. than I ever will believe that dystopian future is a very real possibility.
Now, we have even more proof courtesy of researchers at the Center for Humans and Machines at the Max Planck Institute for Human Development, who recently published a paper in The Journal of Artificial Intelligence Research with a fun little title containing the words “Superintelligence Cannot Be Contained,” which is totally, definitely not a cause for concern whatsoever.
The authors of the paper took a closer look at the “Three Laws of Robotics” that author Isaac Asimov famously suggested could prevent an I, Robot scenario from unfolding, laws that appear to be about as foolproof in the real world as they were in that work of fiction.
While some experts have posited that you can control A.I. by limiting its access to the internet or by writing algorithms designed to constrain its behavior, the chances of those strategies actually working become increasingly slim as humans willingly push the limits of what artificial intelligence can do. As researcher Iyad Rahwan put it:
“The ability of modern computers to adapt using sophisticated machine learning algorithms makes it even more difficult to make assumptions about the eventual behavior of a superintelligent AI.”
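The paper’s core argument comes from computability theory: a containment procedure that could perfectly predict whether an A.I.’s behavior would become harmful would also have to solve the halting problem, which Alan Turing proved impossible back in 1936. Here’s a minimal Python sketch of that classic diagonalization argument (the function names are illustrative, not taken from the paper):

```python
def would_halt(program, data):
    """A hypothetical perfect analyzer: returns True if and only if
    program(data) eventually halts. The contradiction below shows no
    such function can exist for all inputs."""
    raise NotImplementedError("provably impossible in general")


def contrarian(program):
    """Does the opposite of whatever the analyzer predicts about
    running `program` on itself."""
    if would_halt(program, program):
        while True:          # predicted to halt -> loop forever
            pass
    return "halted"          # predicted to loop -> halt immediately


# Feeding contrarian to itself is where the logic breaks down:
# if would_halt(contrarian, contrarian) returned True, contrarian
# would loop forever; if it returned False, contrarian would halt.
# Either answer is wrong, so no perfect would_halt can exist -- and,
# by the paper's reduction, neither can a perfect containment check.
```

Limiting an A.I.’s inputs or auditing its code doesn’t escape the argument, the authors note, because any sufficiently general system can still encode a program whose behavior the containment check would need to predict.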
Oh well. At least society had a good run.