While Prime Minister Dmitry Medvedev and Arkady Volozh were riding a driverless Yandex.Taxi around Skolkovo, military engineers were working out how to adapt driverless-vehicle technology to create new weapons.

In truth, the technology is not quite what it seems. The trouble with all technological evolution is that the line between commercial robots "for everyday life" and military killer robots is incredibly thin, and crossing it costs nothing. Today these machines choose their driving route; tomorrow they may be choosing which target to destroy.

This is not the first time in history that technological progress has called the very existence of mankind into question: first scientists created chemical, biological and nuclear weapons, and now "autonomous weapons", that is, robots. The only difference is that until now weapons of "mass destruction" were considered inhumane precisely because they were indiscriminate: they did not choose whom to kill. Today the perspective has changed: far more immoral, it seems, is a weapon that kills with particular discrimination, selecting victims to its own taste. And while a belligerent power might once have been deterred from using biological weapons by the knowledge that everyone nearby would suffer, with robots things are harder: they can be programmed to destroy a specific class of targets.

In 1942, when the American writer Isaac Asimov formulated his three laws of robotics, it all seemed exciting but completely unrealistic. The laws state that a robot may not harm or kill a human being, and must obey human orders unquestioningly except where those orders conflict with the first imperative. Now that autonomous weapons have become reality and may well fall into the hands of terrorists, it turns out that programmers somehow forgot to build Asimov's laws into their software. Which means robots can be dangerous, and no humane laws or principles will stop them.

A Pentagon-designed missile detects targets on its own thanks to its software; artificial intelligence (AI) identifies targets for the British military; and Russia is showing off unmanned tanks. Colossal sums are being spent in various countries on the development of robotic and autonomous military equipment, though few want to see it in action. Just as most chemists and biologists have no interest in seeing their discoveries eventually turned into chemical or biological weapons, most AI researchers have no interest in weapons built on their work, since a serious public outcry would damage their research programs.

In his speech at the opening of the United Nations General Assembly in New York on September 25, Secretary-General António Guterres called AI technology a "global risk" alongside climate change and rising income inequality. "Let's call a spade a spade," he said. "The prospect of machines deciding who lives is repugnant." Guterres is perhaps the only person in a position to urge military establishments to reconsider: he previously dealt with the conflicts in Libya, Yemen and Syria and served as UN High Commissioner for Refugees.

The problem is that as the technology develops further, robots will be able to decide for themselves whom to kill. And if some countries have such technology while others do not, uncompromising androids and drones will predetermine the outcome of any potential battle. All of this violates every one of Asimov's laws at once. Alarmists may seriously worry that a self-learning neural network will slip out of control and kill not just the enemy but people in general. Yet even the prospect of perfectly obedient killing machines is far from rosy.

The most active work in artificial intelligence and machine learning today is happening not in the military but in the civilian sphere, in universities and at companies like Google and Facebook. Much of this technology, however, can be adapted for military use, which means a potential ban on research in the field would hit civilian development as well.

In early October the Campaign to Stop Killer Robots, a non-governmental coalition, sent a letter to the United Nations demanding international restrictions on the development of autonomous weapons. The UN made clear that it supports the initiative, which Elon Musk and participants in the International Joint Conference on Artificial Intelligence (IJCAI) had joined in August 2017. In practice, however, the US and Russia oppose such restrictions.

The most recent meeting of the 70 member countries of the Convention on Certain Conventional Weapons (the "inhumane weapons" convention) was held in Geneva in August. Diplomats failed to reach consensus on how a global AI policy might be implemented. Some countries (Argentina, Austria, Brazil, Chile, China, Egypt and Mexico) backed a legislative ban on the development of robotic weapons; France and Germany proposed a voluntary system of restrictions; but Russia, the USA, South Korea and Israel said they had no intention of limiting research and development in the area. In September Federica Mogherini, the European Union's top foreign and security policy official, said that such weapons "affect our collective security" and that decisions over life and death must in any case remain in human hands.

Cold War 2018

US defense officials say the United States needs autonomous weapons to maintain its military advantage over China and Russia, which are investing in similar research. In February 2018 Donald Trump requested $686 billion for national defense in the next fiscal year. Defense spending has always been high, declining only under the previous president, Barack Obama; Trump, unoriginally, justified the increase by pointing to technological competition with Russia and China. In 2016 the Pentagon budgeted $18 billion over three years for the development of autonomous weapons. That is not much, but one very important factor must be kept in mind.

Most AI development in the United States is carried out by commercial companies, so the technology is widely available and can be sold to other countries. The Pentagon holds no monopoly on advanced machine learning. The American defense industry no longer runs its own research the way it did during the Cold War; instead it draws on the work of start-ups in Silicon Valley, Europe and Asia. In Russia and China, by contrast, such research sits under the strict control of the defense ministries, which limits the influx of fresh ideas and slows the technology, but guarantees state funding and protection.

The New York Times estimates that military spending on autonomous military vehicles and unmanned aerial vehicles will exceed $120 billion over the next decade. This means that the discussion ultimately comes down not to whether to create autonomous weapons, but to what degree of independence to give them.

Fully autonomous weapons do not yet exist, but General Paul J. Selva of the Air Force, Vice Chairman of the Joint Chiefs of Staff, said back in 2016 that within ten years the United States would have the technology to build weapons capable of deciding on their own whom and when to kill. While countries debate whether or not to limit AI, it may already be too late.

Clearpath Robotics was founded six years ago by three college friends who shared a passion for building things. The company's 80 specialists develop rugged off-road robots like the Husky, a four-wheeled machine used by the US Department of Defense. They also make drones and have even built a robotic boat, the Kingfisher. But there is one thing they will never build: a robot that can kill.

Clearpath is the first and so far the only robotics company to pledge not to build killer robots. The decision was made last year by co-founder and CTO Ryan Gariepy, and it has in fact attracted experts who liked Clearpath's unique ethical stance. The ethics of robotics companies have lately come to the fore: we are standing with one foot in a future that contains killer robots, and we are not at all ready for them.

Of course, there is still a long way to go. South Korea's DoDAAM Systems, for example, builds an autonomous robotic turret called the Super aEgis II. It uses thermal imaging cameras and laser rangefinders to detect and attack targets up to 3 kilometers away. The US is also reportedly experimenting with autonomous missile systems.

Two steps away from the "terminators"

Military drones like the Predator are currently operated by humans, but Gariepy says they will become fully automatic and autonomous very soon. And that worries him. A lot. "Lethal autonomous weapons systems could start rolling off the assembly line right now. But lethal weapons systems built in accordance with ethical standards aren't even on the drawing board."

For Gariepy, the problem is one of international humanitarian law. In war there are always situations where the use of force seems necessary but may also endanger innocent bystanders. How do you build killer robots that will make the right decision in every situation? How do we decide for ourselves what the right decision even is?

We are already seeing similar problems with autonomous transport. Say a dog runs into the road. Should the robot car swerve to avoid the dog but put its passengers at risk? What if it is not a dog but a child? Or a bus? Now imagine a war zone.

"We can't agree on how to write a manual for such a car," says Gariepy. "And now we also want to move to a system that decides on its own whether or not to use lethal force."
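How hard is it to write that manual? A minimal sketch (everything here is invented for illustration: the categories, the probabilities and above all the weights) shows that any implementable swerve decision forces someone to encode explicit numeric judgments about the relative worth of a dog, a child and the passengers:

```python
# Hypothetical sketch of a "swerve or stay" decision. Every weight below is
# an arbitrary moral judgment, not an engineering fact - which is the problem.

HARM_WEIGHTS = {"dog": 1.0, "passenger": 50.0, "child": 100.0, "bus": 500.0}

def expected_harm(outcome):
    """Weighted harm summed over everyone an outcome puts at risk.
    `outcome` is a list of (kind, probability_of_harm) pairs."""
    return sum(HARM_WEIGHTS[kind] * p for kind, p in outcome)

def decide(swerve, stay):
    """Pick the maneuver with the lower expected harm."""
    return "swerve" if expected_harm(swerve) < expected_harm(stay) else "stay"

# A dog runs into the road: swerving risks the passengers, staying risks the dog.
print(decide(swerve=[("passenger", 0.1)], stay=[("dog", 0.9)]))    # -> "stay"
# Replace the dog with a child and the very same code reverses its answer.
print(decide(swerve=[("passenger", 0.1)], stay=[("child", 0.9)]))  # -> "swerve"
```

The code runs, but the numbers in HARM_WEIGHTS are exactly the "manual" Gariepy says we cannot agree on; a lethal autonomous weapon would need the same table, with human lives on both sides of it.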

Make cool things, not weapons

Peter Asaro, founder of the International Committee for Robot Arms Control, has spent the last few years lobbying the international community for a ban on killer robots. The time has come, he believes, for "a clear international ban on their development and use". That, he says, would let companies like Clearpath keep making great things without worrying that their products could be used to violate human rights and threaten civilians.

Autonomous missiles interest the military because they solve a tactical problem. When remote-controlled drones operate in combat, it is not uncommon for the adversary to jam their sensors or network link, leaving the human operator unable to see what is happening or to control the drone.

Gariepy argues that instead of developing missiles or drones that decide for themselves which target to attack, the military should spend the money on better sensors and anti-jamming technology.

"Why don't we take the investment people would like to make in autonomous killer robots and put it into improving the effectiveness of existing technology?" he says. "If we set that goal and clear that bar, we can make this technology work for the benefit of people, not just the military."

Talk of the dangers of artificial intelligence has also grown louder recently. Elon Musk worries that a runaway AI could destroy life as we know it; last month he donated $10 million to research into keeping AI safe. One of the big questions about how AI will shape our world is how it will merge with robotics. Some, like Baidu researcher Andrew Ng, worry that the coming AI revolution will take away people's jobs. Others, like Gariepy, fear it could take away lives.

Gariepy hopes his colleagues, the scientists and machine builders, will think about what they are doing. That is why Clearpath Robotics has taken the side of people. "While we as a company can't bet $10 million on this, we can bet our reputation."

Dmitry Melkin and Pavel and Boris Lonkin never had to wonder whom to recruit for a robot-fighting team. The three knew each other from Baumanka (Bauman Moscow State Technical University), and together they had built and installed solar power plants. One day Dmitry saw an announcement for a robotics competition and applied. His friends backed the idea, and a month later the Solarbot team's first combat robot, Brontosaurus, stood in the garage.

The first robot is always lumpy

Brontosaurus weighed a full hundred kilograms and, as its creators now admit, was distinguished by neither reliability nor ingenious design. No wonder: it was built partly by intuition, partly from fuzzy screenshots of videos of the British Robot Wars competitions.

After Brontosaurus, having recalculated and redesigned the main assemblies several times, Dmitry, Boris and Pavel built their second robot. For its resemblance to a shell it was named Shelby, from the English word "shell". Shelby, the child of hard-won mistakes, first beat all comers at Battle of Robots 2016 in Perm, organized by the Moscow Technological Institute (MTI) and Promobot, and then, together with the machines of two other Russian teams, went on to international competition in China. Its creators explain how the winning robot works and what it took to build it.


Dmitry, ideas man and jack of all trades:

"Our great pride is Shelby's chassis. With its predecessor we fiddled with the running gear after literally every battle. While building Shelby the chassis was machined, stripped down and reassembled many times, but now we can forget about it altogether. In future projects we will only need to keep up that reliability and increase the power. It would be nice, for example, if our new robot could shove not one but two enemy robots at once."

Shelby's chains come from mopeds, its wheels from a racing kart, and its electric motors from radio-controlled model cars. Parts for combat robots are not manufactured commercially, so you have to hunt for them at flea markets and online. Good parts are very expensive, so builders tend to make their own.


Boris, designer and strength engineer:

"Shelby is a flipper type. It carries a pneumatic system that flings its lid upward with great force. This is the robot's main weapon and also its means of self-righting: if tipped over, it can flip back onto its wheels with a single jerk. But we couldn't build up high pressure in the pneumatic cylinder to make the lid strike really hard: the right valves simply weren't available. That left only one option: make the system operate as fast as possible. The solution turned out to be simple: we eliminated excess flow resistance and reworked the factory valves. In future, of course, a high-pressure valve will be needed. An off-the-shelf one is expensive, around 200 thousand rubles, so we are now thinking about a design of our own."


Combat robots are not a cheap hobby: you need at least 200-300 thousand rubles, plus consumables, spare wheels and everything else that breaks and gets replaced in battle. And that is without counting the time and effort. "To build a robot, a team of three has to stop going to work for two months," the Solarbot engineers laugh. Nor can you skimp on the electronics.

Pavel, programmer:

"The main advantage of Shelby's electronics is that there is very little of it. If you don't want to pick up a soldering iron after every fight, you give the robot only the bare minimum of 'brains'. Shelby runs on simple factory controllers; only the valves are driven by a small custom board. It is very hard to knock out. Even in China, when we were given powerful lithium batteries instead of our usual lead-acid ones and the wiring gave out within a couple of minutes, the robot's electronics were unharmed."

Fighting robot Shelby

Speed: up to 25 km/h
Force on the pneumatic cylinder rod: 2 t
Motor power: 2.2 kW
Pneumatic strikes per cylinder charge: 30-35
Control: remote; the body is built entirely from metal profile
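For a sense of scale, the 2 t rod force squares with the team's complaint about low pressure. A back-of-the-envelope check, assuming a working pressure of about 10 bar (our assumption; the article gives no figure):

$$A = \frac{F}{p} = \frac{19.6\ \text{kN}}{1\ \text{MPa}} \approx 196\ \text{cm}^2, \qquad d = \sqrt{\frac{4A}{\pi}} \approx 16\ \text{cm}$$

In other words, at such modest pressure the cylinder bore has to be large; a high-pressure valve would let the same force come from a far more compact system.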

The Solarbot team has built a hardy iron soldier, but even its strength has limits. In China it suffered from the spinning blades of Chinese spinner robots; in Perm, from the claws of the robot Matangi, which shears through metal profile like butter with eight tons of force. Its iron ribs bear the scars. Its creators are preparing it for an exhibit's fate: it will appear at festivals (the summer Geek Picnic is coming up), while a new fighter takes its place in the arena - also a flipper, only faster, more powerful and even more reliable. The lid's lifting force will be double Shelby's, motor power will rise from 2.2 to 2.8 kW, and the speed will grow too. With the new robot the Russian team dreams of reaching Robot Wars in England.

But the future flipper is not Solarbot's ultimate dream. Dmitry is now negotiating with other teams and looking for sponsors: if all goes well, Russia will get its first "megabot", as big and formidable as the multi-ton monsters of Japan, America and China.

Thanks to the support of the Moscow Technological Institute, Russians made it for the first time to an international fighting-robot tournament, the FMB Championship 2017 in China. Fighting for Russia were Shelby, Destructor from Kazan and Energy from St. Petersburg, which reached the semi-finals.

Elon Musk recently made it clear that he strongly opposes the use of AI to create killer robots. These are not yet "Terminators", but robotic systems able to perform tasks that normally fall to soldiers. The military's interest in the topic is understandable, but its far-reaching plans frighten many.

But modern warriors are not the only ones to dream of machines that could each replace ten or even a hundred soldiers. Such thoughts have occurred to figures of many different eras, and sometimes the ideas were realized, and looked rather good.

Da Vinci's robot knight


Leonardo was a genius who succeeded in almost every field he took an interest in. In the 15th century he created a robot knight (the word "robot", of course, was not yet in use).

The machine could sit, stand, walk, and move its head and arms. Its creator achieved all this with a system of levers, pulleys and gears.

The knight was recreated in our own era: in 2002 Mark Rosheim built a working prototype "based on" Da Vinci's design.

Tesla's radio-controlled boat


In 1898 the inventor Nikola Tesla showed the world the first device of its kind: a remotely controlled vehicle, a small boat. The demonstration took place in New York. Tesla steered, and the boat maneuvered and performed its tricks as if by magic.

Later Tesla tried to sell another invention to the US military, something like a radio-controlled torpedo, but for some reason they refused. Notably, he described his creation not as a torpedo but as a robot: a mechanical man able to do complex work in place of its creators.

Radio-controlled tanks of the USSR



Soviet engineers were no slouches either. In 1940 they built radio-controlled combat vehicles based on the T-26 light tank, with a control range of more than a kilometer.

The operators of these military terminators could open fire with the machine guns, the cannon and a flamethrower. The technology's weakness, though, was the lack of feedback: the operator could only watch the tank's actions directly, from a distance, so the effectiveness of his commands was naturally rather low.

This is the first example of a military robot in action.

Goliath


The Nazis created something similar, except that instead of fitting full-size tanks with radio control they built miniature tracked vehicles that could be operated remotely. The Goliaths were packed with explosives. The idea was this: the nimble little machine crept up to an "adult" enemy tank and, once alongside, carried out the operator's command to blow everything up. The Germans built both an electric version and a mini-tank with an internal combustion engine. In all, about 7,000 of these machines were produced.

Semi-automatic anti-aircraft guns


These systems were also developed during World War II, and the founder of cybernetics, Norbert Wiener, had a hand in their creation. He and his team built anti-aircraft systems that corrected their own fire: they were equipped with technology for predicting where an enemy aircraft would be next.
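Wiener's real predictor was statistical, but the core idea can be shown with a much simpler sketch (the function and all numbers here are illustrative, assuming the target merely holds its current velocity): aim not at the aircraft, but at where it will be when the shell arrives.

```python
import math

# Illustrative "lead" computation, the idea behind WWII predictive
# anti-aircraft directors. Assumes a constant-velocity target and a
# constant shell speed; the meeting point is found by fixed-point iteration.

def lead_point(target_pos, target_vel, shell_speed, steps=5):
    """Return the aim point where shell and target should meet."""
    aim = target_pos
    for _ in range(steps):
        t = math.hypot(*aim) / shell_speed          # shell flight time to aim point
        aim = (target_pos[0] + target_vel[0] * t,   # target position after that time
               target_pos[1] + target_vel[1] * t)
    return aim

# Aircraft 3 km away, crossing at 120 m/s; shell speed 800 m/s.
print(lead_point((3000.0, 0.0), (0.0, 120.0), 800.0))  # roughly (3000, 455)
```

Wiener's contribution was to replace the naive constant-velocity assumption with a statistical prediction of where a maneuvering, noisy target was likely to be - work that later grew into filtering theory.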

Smart weapons of our time


Later, seeking to win the Vietnam War, the US military pioneered laser-guided weapons as well as autonomous flying vehicles: in effect, the first drones.

True, they still required human help in choosing a target. But this was already close to what we have now.

Predator


Almost everyone has heard of these drones. The MQ-1 Predator was pressed into service by the US military within a month of the events of 9/11. Predators are now the world's most widely used military drones, and they have a bigger sibling: the MQ-9 Reaper UAV.

Sappers


Besides killer robots there are also sapper robots. They are very common now, having come into use some years ago in Afghanistan and other hot spots. Notably, these robots were developed by iRobot, the same company that makes the world's most popular cleaning robots, the Roomba and Scooba. In 2004, 150 of these robots were built (the sappers, not the vacuum cleaners); four years later the figure was already 12,000.

Now the military has really hit its stride. Artificial intelligence, in its weak form, promises enormous opportunities, and in the US they intend to exploit them to the full. Hence the creation of a new generation of killer robots equipped with cameras, radars, lidars and weapons.

These are the machines that frighten Elon Musk, and with him many other bright minds from all fields.
