Illustration: Craig Stephens
Opinion
by Daniel Wagner

As US, China and Russia fight for AI supremacy in new hyperwar battlefield, will good sense prevail?

  • In this ‘who dares wins’ race, the US has a lead in deep-learning algorithms but China and Russia are making greater strides in applications, unencumbered by privacy or rights concerns

Will artificial intelligence (AI) ever possess the nuance, intuition, and gut instinct necessary to be a good military or intelligence analyst? The jury will be out on that question for some time, but the world’s leading governments continue to develop AI with military applications under the presumption that, eventually, machines will possess that ability. China, Russia, and the United States are in a race to determine which country will dominate the AI landscape first.

China and Russia have had a free hand in applying the technology, unburdened by concerns about privacy, civil rights or degrees of presumed acceptability. China has excelled at developing AI via a vast number of well-trained computer engineers who programme machines using an unrivalled amount of data, produced by the country’s 1.4 billion people. And Russia has little hesitation about deploying AI in any number of instruments of war, including hypersonic weapons.
In contrast, the US sees AI principally as a national security tool, to be employed on the battlefield or to thwart terrorist attacks, and trails both countries in some aspects of developing deployable machines. The way it allocates and spends defence dollars is slow and cumbersome, which has hobbled the US in responding swiftly, effectively and proactively to some cyberattacks, and has thwarted the development of cutting-edge AI tools.

That said, US researchers have trained deep-learning algorithms to identify Chinese surface-to-air missile sites roughly 85 times faster than human analysts. The algorithms helped people with no experience of imagery analysis find missile sites scattered across nearly 90,000 square kilometres of southeastern China, for example. The neural network matched the 90 per cent accuracy of human experts while cutting the time needed to analyse potential missile sites from 60 hours to just 42 minutes. This has proven extremely useful, as satellite imagery analysts are drowning in a deluge of data.

AI and robotics – the forces ushering in the era of “hyperwar” – already allow for asymmetric responses that are inexpensive, resilient and globally scalable. AI technologies such as natural language-based dialogue systems consume enormous amounts of information to augment human operators in non-combat situations, such as for maintenance and the remediation of equipment. Such capabilities will eventually be augmented by reality-based information-delivery technologies in combat scenarios.

At the operational level, commanders will be able to “sense”, “see” and engage enemy formations far more quickly by applying machine learning algorithms to collect and analyse huge quantities of information, and direct swarms of complex, autonomous systems to simultaneously attack the enemy.

At the strategic level, the commander supported by this capacity “sees” the strategic environment through sensors operating across the entire operational theatre. The strategic commander’s capacity to ingest petabytes of information – from national technical means to tactical systems – and conduct near-instantaneous analysis provides a qualitatively unsurpassed level of situational awareness and understanding.
America’s adversaries are betting that a new wave of weapons will negate technologies and tactics at the heart of US military might, among them aircraft carriers and high-altitude missiles.
Russia’s interest is well established, and its military has deployed AI capable of conducting independent military operations. Russia is preparing to fight on a roboticised battlefield in the near term, with machines wielding anti-tank weapons, grenade launchers and assault rifles. It is clearly well advanced on the path towards the next generation of autonomous military weapons, and China is following a similar path, aggressively testing hypersonic weapons, unmanned aircraft and advanced submarine detection, among other capabilities.
China has built up a significant satellite manufacturing industry and has managed to develop quantum communications spacecraft with advanced encryption features. China will have major advantages in translating private-sector gains in the AI arena into national security applications, given the heavy integration of government in all aspects of the Chinese economy.
Given the challenge of feeding machines with knowledge and expert-based behaviours, as well as the limitations of perception sensors, it will be many years before AI can truly approximate human intelligence in high-uncertainty settings – as epitomised by the fog of war. The intelligence community’s challenge is to improve source collection across platforms and domains without becoming overwhelmed.

Likewise, the challenge for the world’s leading militaries is to exercise restraint in developing bioweaponry and space-based weapons, even as they leap further into the next generation of AI-powered weapons.


It is perhaps too much to ask that China, Russia and the US pursue their AI-driven ambitions responsibly, for the stakes could not be higher. The truth is that there are too few universally accepted rules governing AI to restrict its development in the intelligence and military spheres.
The most that can be hoped for is that the leading nations’ AI ambitions will be governed by common sense and a realisation that their adversaries will eventually be able to unleash the same weaponry on them. Will that prove to be an incentive to be the first to achieve AI military supremacy, or a clarion call to establish some boundaries in the interim? If history is any guide, the world’s militaries are more likely to shoot first and ask questions later.

Daniel Wagner is CEO of Country Risk Solutions and co-author of AI Supremacy
