The rapid development of AI has created a risk that would have been unimaginable only a few years ago: autonomous AI weapons authorized to kill humans. Today, aerial drones can already select targets at their own discretion. Tomorrow, independent AI could launch missiles, if permitted to. This thought keeps me awake at night.

I hold a Ph.D. in Materials Science, specializing in the fabrication of microelectronics. Currently, I apply my skills to building devices that function like neural networks; in other words, I develop lifelike artificial brains for autonomous weapons.

Lifelike AI was originally intended for civilian use: autonomous means of production that would increase labor productivity and raise the profitability of the manufacturing sector, helping to cope with a declining working population.

Today, autonomous AI is also being considered for military purposes, because artificial brains can independently control aerial and submersible drones and replace human crews in armored fighting vehicles. The military believes that autonomous weapons can open new theaters of war and stimulate the development of new methods of warfare.

The big question now is whether autonomous weapons should be authorized to use deadly force at their sole discretion. I believe it would be a terrible thing to allow, for no one, including these machines’ creators, can predict how lifelike artificial brains will behave in every situation.
