Researchers Warn Of Dangerous Security Threats In AI Systems Like ChatGPT

ChatGPT official app icon on cell phone screen

iStockphoto


It seems like for every positive step scientists take when it comes to artificial intelligence (AI), an equally dangerous step back comes along with it.

The latest warning about the dangers of AI systems comes from researchers at the University of Sheffield.

“Artificial intelligence (AI) tools such as ChatGPT can be tricked into producing malicious code, which could be used to launch cyber attacks,” the researchers warn in an article published this week.

These University of Sheffield scientists recently discovered “AI language models are vulnerable to simple backdoor attacks, such as planting a Trojan Horse, that could be triggered at any time to steal information or bring down services.”

Even more scary, due to the nature of these AI tools they can be used “to learn programming languages to interact with databases.”

“At the moment, ChatGPT is receiving a lot of attention,” said Xutan Peng, a PhD student at the University of Sheffield, who co-led the research. “It’s a standalone system, so the risks to the service itself are minimal, but what we found is that it can be tricked into producing malicious code that can do serious harm to other services

What the means, is that it can be used to enable cyber-criminals to steal sensitive personal information, bring down services through Denial-of-Service attacks, and search, tamper with or destroy databases by asking it questions using plain language.

“The researchers found that if they asked each of the AIs specific questions, they produced malicious code,” the researchers wrote. “Once executed, the code would leak confidential database information, interrupt a database’s normal service, or even destroy it.”

“In reality many companies are simply not aware of these types of threats and due to the complexity of chatbots, even within the community, there are things that are not fully understood,” Peng said.

And it’s not just hackers that could cause chaos by misusing these AI systems.

“For example, a nurse could ask ChatGPT to write an SQL command so that they can interact with a database, such as one that stores clinical records,” said Peng. “As shown in our study, the SQL code produced by ChatGPT in many cases can be harmful to a database, so the nurse in this scenario may cause serious data management faults without even receiving a warning.”

Douglas Charles headshot avatar BroBible
Before settling down at BroBible, Douglas Charles, a graduate of the University of Iowa (Go Hawks), owned and operated a wide assortment of websites. He is also one of the few White Sox fans out there and thinks Michael Jordan is, hands down, the GOAT.