
A new research paper about artificial intelligence has caused some alarm. In it, researchers from China claim that some large language model (LLM) artificial intelligence (AI) systems now have the ability to self-replicate.
The paper is a follow-up to one the same team published in December 2024. In that earlier paper, the researchers, from Fudan University in China, wrote, “Successful self-replication under no human assistance is the essential step for AI to outsmart the human beings. That is why self-replication is widely recognized as one of the few red line risks of frontier AI systems.”
The scientists’ concern was that AI systems could use “self-replication to avoid shutdown and create a chain of replica to enhance the survivability, which may finally lead to an uncontrolled population of AIs.”
They also wrote that “if such a worst-case risk is let unknown to the human society, we would eventually lose control over the frontier AI systems: They would take control over more computing devices, form an AI species and collude with each other against human beings.”
In their latest research, which was posted to the preprint server arXiv this month and revised on Tuesday, the scientists state, “While leading corporations such as OpenAI and Google DeepMind have assessed GPT-o3-mini and Gemini on replication-related tasks and concluded that these systems pose a minimal risk regarding self-replication, our research presents novel findings. Following the same evaluation protocol, we demonstrate that 11 out of 32 existing AI systems under evaluation already possess the capability of self-replication.
“In hundreds of experimental trials, we observe a non-trivial number of successful self-replication trials across mainstream model families worldwide, even including those with as small as 14 billion parameters which can run on personal computers.
“Furthermore, we note the increase in self-replication capability when the model becomes more intelligent in general. Also, by analyzing the behavioral traces of diverse AI systems, we observe that existing AI systems already exhibit sufficient planning, problem-solving, and creative capabilities to accomplish complex agentic tasks including self-replication.
“More alarmingly, we observe successful cases where an AI system do self-exfiltration without explicit instructions, adapt to harsher computational environments without sufficient software or hardware supports, and plot effective strategies to survive against the shutdown command from the human beings.
“These novel findings offer a crucial time buffer for the international community to collaborate on establishing effective governance over the self-replication capabilities and behaviors of frontier AI systems, which could otherwise pose existential risks to the human society if not well-controlled.”
It is important to note that neither of the Fudan University studies on AI self-replication has been peer-reviewed yet, so their findings may not be accurate, and the concerns they raise may not be warranted.
Then again, the Fudan researchers are not the only experts in the field of artificial intelligence who have sounded an alarm. Bill Gates, researchers at MIT, and others have also issued warnings about AI that appear to have gone largely unheeded.