The work of Soviet director Anatoliy Petrov, Polygon is set on a remote island, where a military crew is finishing preparations for a weapons testing range: cutting down palm trees, leveling sand, evicting natives from their homes. As they work, a soldier notes that the nearest island is five kilometers away, and that the site lies far from major shipping lanes and airways.
The weapon in question then looms into view — a gigantic tank. A Professor dressed in white also appears, and as he places a proud hand on the tank's armor plating, the scene flashes back to a younger version of the character, beaming as his son runs to him from the family house.
It soon transpires that the tank, the Professor's handiwork, is driven by artificial intelligence: it can read individuals' thoughts and act evasively to dodge incoming fire.
"The enemy essentially controls the tank's movements without realizing it," the Professor explains to the newly arrived committee.
He also explains that the tank is equipped with a "fear impulse": the ability to respond violently to any desire to destroy it that it detects in others.
"The enemy, fearing his destruction, will communicate to the machine its weak points and vulnerabilities, prompting the tank to launch a preemptive attack," he adds.
As the film continues, the Professor is constantly bedeviled by flashbacks of his son's death in an apparently ongoing conflict the military officers keep alluding to.
"This is war — and in war there are casualties," a senior officer says dispassionately.
"Yes, this is war. You like to fight? You like my new weapon? You will test it on yourselves. Try not to think of danger — the tank will read your thoughts. I have nothing to fear — I have no one left on this earth," the Professor responds.
In a closing dream sequence, the Professor tells his son he has taken revenge for his death, though fear and uncertainty have now crept into his mind. In the present, the tank senses the Professor's terror and prepares to attack.
Polygon was classified as "adult viewing" in the Soviet Union due to its controversial message, and was rarely shown to contemporary audiences. In the 21st century, intense debate continues over whether humans can, or indeed ever should, cede decision-making in warfare to AI systems.
"AI programmed to do something devastating, such as kill, could easily cause mass civilian casualties in the hands of the wrong person. An AI arms race could also lead to an AI war that also results in mass casualties. To avoid being thwarted by the enemy, these weapons would be designed to be extremely difficult to simply turn off, so humans could plausibly lose control. This risk is present even with narrow AI, but grows as levels of AI intelligence and autonomy increase," a statement by the Future of Life Institute said.
Conversely, a paper authored by a group of independent scientists belonging to JASON, the organization that counsels the US government on sensitive scientific matters, states that public suspicion of AI is "not always based on fact," especially with respect to military technologies. Machines that can operate entirely independently of human guidance are many years away from being even remotely possible, and that time may never come.
The paper in question is titled "Perspectives on Research in Artificial Intelligence and Artificial General Intelligence Relevant to DoD."
Despite these anxieties, and the possibility that such aspirations may never come to pass, militaries the world over are nonetheless investing billions annually in the development of military robotics and autonomous AI systems for use in warfare.