
This MIT Professor Says That Drug Cartels May Soon Have Access To “Slaughterbots”

An MIT artificial intelligence and weapons researcher fears that drug cartels may soon have access to a “slaughterbot.”

According to AI researcher and MIT professor Max Tegmark, the military is already working on making lethal robots a reality.

Max Tegmark (Image: MIT Physics)

Max Tegmark, an MIT professor and co-founder of the Future of Life Institute, voiced some grave predictions about the future applications of military robots in a recent interview with The Next Web, painting a dismal picture of future combat. On the subject of small, weaponized, autonomous drones and robots, Tegmark believes that once the military has finished developing “slaughterbots,” it will only be a matter of time before civilians have access to them, as has happened with so many other weapons. In the wrong hands, such as those of drug cartels, these bots would open the door to cheap and practically unstoppable targeted assassinations of anyone the operators choose, he warned, adding that governments must intervene now, before that nightmarish scenario becomes a reality.

“The biggest losers from this are going to be countries that are militarily dominant, because these weapons are incredibly cheap,” said Max Tegmark.

While Tegmark does not believe a global ban will be enacted anytime soon, he does predict that individual countries will progressively agree to new laws. He thinks that, over time, lethal autonomous weapons (LAWs) will become so stigmatized that every military power will be forced to agree to restrictions. If they don’t, artificial intelligence (AI) weapons of mass destruction may become commonplace.

Major governments are pouring billions into developing powerful AI weapons that can hunt and strike targets without the need for human intervention. According to a UN report, in Libya last year, a Turkish-built kamikaze drone made the world’s first autonomous kill on human targets.

Experts caution, however, that because the technology is advancing so quickly, governments and societies have failed to adequately address the risks. Machines that make their own decisions, they argue, are prone to unanticipated and rapidly spreading errors, driven by algorithms that even their own programmers do not always understand and cannot reliably stop from going wrong.

If AI weapons are one day equipped with biological, chemical, or even nuclear warheads, the result could be an unintended Armageddon.
