For some reason that I truly cannot understand, human beings keep trying to create robots and artificial intelligence that are smarter, more agile, more deadly, and more insane than actual people, as if by doing just that they aren’t sealing our own doom as a species.
I say this, because researchers from Warwick Business School, University of Plymouth, Donders Centre for Cognition at Radboud University in the Netherlands and the Bristol Robotics Lab at the University of the West of England banded together recently to conduct a study to see if robots could recognize human emotion. The findings they uncovered are a little frightening.
We’ve already seen that none other than Amazon is developing a wearable that can read human emotions, and now we’ve got these researchers basically training AI to copy the way humans recognize and determine emotion.
Kayla Matthews of betanews.com tried to explain WHY these researchers are doing such foolish things.
Robots that make deliveries often get attacked or vandalized for a multitude of reasons. If a robot can sense it’s in danger, it can leave a situation, as opposed to allowing itself to be attacked. In these scenarios, it’s also helpful for a robot to understand the distinction between backing away or leaving a scene entirely. Either way, almost everyone can benefit from a robot understanding emotions.
Thaaaaaaaaat sounds like an ability that also might be useful when conducting a violent robot uprising.
According to the researchers themselves, “even though assessing social interactions is difficult even for humans, using skeletons and facial landmarks only does not significantly degrade the assessment. Future studies aiming to train a robotic system would ideally utilize a training dataset where the internal states and social constructs have been verified (and therefore a ground-truth is available). This study provides the evidence to guide this type of work, for example by demonstrating that training a robot to recognize aggression from movement information is likely to be more successful than recognizing aimlessness.”
The key takeaway, however, was that their study “provides promising support for fast and effective classification of social interactions, a critical requirement for developing socially-aware artificial agents and robots.”
What… are… you people… doing?! Now we want to make robots that, in addition to being smarter and more agile than us, will eventually be more perceptive than we are as well?!
I mean, smart folks like Elon Musk have already gone on record saying things like, “The biggest mistake I see artificial intelligence researchers making is assuming they’re intelligent. They’re not, compared to AI. A lot of them cannot imagine something that is smarter than them but AI will be vastly smarter. Vastly.”
In 2014, Musk said artificial intelligence is humanity’s “biggest existential threat” and compared it to “summoning the demon.”
What more of a warning do we need?!
Of course, Musk is also the one making it so monkeys can control computers with their brains so he’s not helping. Then again, he also has his own friggin’ spaceship that he can just fly away in to Mars or whatever planet he ends up colonizing (or currently is?!).
We’ve already got robots that have learned how to work together and another one that bleeds, breathes, pees, and has a pulse, so it’s only a matter of time before they stop “accidentally” hurting humans and start doing it on purpose (unless they already are?!).