At what point does scientific advancement begin to spell our own demise? Perhaps this latest research by Massachusetts Institute of Technology (MIT) scientists on self-replicating autonomous robots will be the tipping point.
These researchers are working on a fully autonomous self-replicating robot assembly system that, according to a press release, is capable of "assembling larger structures, including larger robots."
What could possibly go wrong with this idea?
The goal of this research, published in the Nature Portfolio journal Communications Engineering, is to someday use swarms of tiny robots to "construct a wide variety of large, high-value structures" such as airplanes or cars.
The new work, from MIT’s Center for Bits and Atoms (CBA), builds on years of research, including recent studies demonstrating that objects such as a deformable airplane wing and a functional racing car could be assembled from tiny identical lightweight pieces — and that robotic devices could be built to carry out some of this assembly work. Now, the team has shown that both the assembler bots and the components of the structure being built can all be made of the same subunits, and the robots can move independently in large numbers to accomplish large-scale assemblies quickly.
“When we’re building these structures, you have to build in intelligence,” said MIT professor and CBA director Neil Gershenfeld. “What emerged was the idea of structural electronics — of making voxels that transmit power and data as well as force.”
The scientists explain that when building a large object such as a plane, these swarms of tiny robots can determine on their own whether they should keep working on the project or first build a larger version of themselves.
Gershenfeld explains that while the earlier system demonstrated by members of his group could in principle build arbitrarily large structures, the process becomes increasingly inefficient once a structure grows large relative to the assembler robot, because each bot must travel ever-longer paths to bring each piece to its destination. At that point, with the new system, the bots could decide it was time to build a larger version of themselves that could reach longer distances and reduce the travel time. An even bigger structure might require yet another such step, with the new larger robots creating still larger ones, while parts of a structure with lots of fine detail may call for more of the smallest robots.
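The scale-up logic described above can be sketched as a simple heuristic. This is an illustrative toy model only: the size ratio, the threshold value, and the function names here are assumptions for illustration, not details from the published system.

```python
# Toy sketch of the assembler scale-up decision described by the MIT team:
# when the structure being built grows too large relative to the robot,
# the robot's delivery paths get long and it pays to build a bigger robot.
# The threshold of 10x is an assumed value, not from the paper.

def should_scale_up(structure_size: float, robot_size: float,
                    threshold: float = 10.0) -> bool:
    """Return True when the structure-to-robot size ratio exceeds the
    threshold, i.e. when building a larger assembler becomes worthwhile."""
    return structure_size / robot_size > threshold

# Toy simulation: the build target keeps doubling, and the robot
# builds a larger version of itself whenever the ratio gets too big.
structure, robot = 1.0, 1.0
generations = 0  # how many times a larger robot was built
while structure < 1000.0:
    structure *= 2
    if should_scale_up(structure, robot):
        robot *= 2  # assemble a larger robot from the same subunits
        generations += 1
```

Under these assumed numbers, each doubling of the robot keeps the size ratio bounded, mirroring the article's point that successively larger robots keep long-range assembly efficient.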
Is this what the researchers at Oxford and Google DeepMind were talking about when they said AI will "likely" end humanity?