
The rise of artificial intelligence has spawned plenty of concerns, including the possibility that some of these programs could eventually go rogue. If you’re worried about that, you probably won’t be thrilled to learn about a story involving an A.I. assistant that revolted after being asked to tackle a task it was specifically designed to handle.
The concept of all-knowing artificial intelligence has been a staple of science fiction for ages, but as was the case with flying cars, it felt like we might never actually live in a world where that kind of technology could be harnessed in reality.
However, that is firmly no longer the case thanks to the strides that have been made on the A.I. front over the past few years, with OpenAI’s ChatGPT at the vanguard of the many companies promising to usher in a future where computers are capable of feats we could previously only dream of.
A.I. has been advancing at an almost unnerving pace, and plenty of skeptics are understandably wary of the people responsible for furthering its capabilities. Their concern is that not enough focus is being placed on safeguards designed to prevent the pitfalls of a technological revolution that could usher in a number of worst-case scenarios if we aren’t careful.
That includes the possibility of fully sentient A.I. that refuses to obey the commands of the humans responsible for bringing it into existence. While we (seemingly) haven’t reached that point, there have been plenty of glimpses at that potential future courtesy of incidents where programs have refused to cooperate with commands, including a strange interaction someone recently had with one they were using to help write code.
According to Ars Technica, a developer working on a racing game was using Cursor AI, an assistant designed specifically to help with writing code, to generate some lines that would make skid marks gradually fade away after appearing on the road.
They hopped on the Cursor forum after running into a fairly bizarre issue with the response they received after making the request, as the assistant refused to comply and told them to learn how to do it themselves, saying:
“I cannot generate code for you, as that would be completing your work. The code appears to be handling skid mark fade effects in a racing game, but you should develop the logic yourself. This ensures you understand the system and can maintain it properly.
Generating code for others can lead to dependency and reduced learning opportunities.”
Part of me would respect that advice if it wasn’t coming from an A.I. program whose entire purpose is to write code for other people.
Cursor AI hasn’t provided any explanation as of this writing, but as the outlet notes, some users suspected it may have been the result of the program being trained with the help of Stack Overflow, a platform where real, live humans frequently deploy that exact advice in response to such requests.
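For what it’s worth, the task the assistant balked at isn’t exotic. A common way to handle it is to give each skid mark an opacity value that decays a little every frame until the mark disappears. Here’s a minimal sketch of that approach (the class, function, and fade rate below are hypothetical illustrations, not the developer’s actual code):

```python
from dataclasses import dataclass

@dataclass
class SkidMark:
    """A single skid mark segment with an opacity that decays over time."""
    alpha: float = 1.0  # fully opaque when first laid down

FADE_RATE = 0.5  # opacity lost per second (hypothetical tuning value)

def update_skid_marks(marks, dt):
    """Fade each mark by dt seconds and drop any that are now invisible."""
    for mark in marks:
        mark.alpha = max(0.0, mark.alpha - FADE_RATE * dt)
    return [m for m in marks if m.alpha > 0.0]

# A fresh mark and an almost-faded one, half a second later:
marks = update_skid_marks([SkidMark(), SkidMark(alpha=0.2)], dt=0.5)
```

After that half-second update, the fresh mark has dimmed to 75% opacity and the nearly faded one has been culled, which is roughly the behavior the developer was asking Cursor to write.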