The AI revolution in defense technology is happening faster than we think

March 14, 2018

As the legendary Irish playwright George Bernard Shaw mused in his “Maxims for Revolutionists” (1903), “If history repeats itself, and the unexpected always happens, how incapable must Man be of learning from experience!”

We have seen our inability to learn time and again: from Napoleon’s invasion of Russia to Hitler’s over a century later, from the repeated attempts by several countries to occupy Afghanistan, to the failure of Prohibition and the ongoing failure of the War on Drugs… the list goes on.

We can learn from history. If we are to successfully implement the rapidly mushrooming technology of Artificial Intelligence in a national defense capacity, we should study prior revolutions in military technology – aerospace, cyber, biotech, and nuclear – and other cases of transformative technology.

A 2017 report from the Belfer Center for Science and International Affairs at the Harvard Kennedy School, entitled “Artificial Intelligence and National Security,” highlights the need to examine past episodes of unexpected growth in military technology – only through the lens of history can we understand the likely trajectory of Artificial Intelligence.

The study found that “comparing the technology profile of AI with the prior technology cases, we find that it has the potential to be a worst-case scenario. Proper precautions might alter this profile in the future, but current trends suggest a uniquely difficult challenge.”

By examining five characteristics of past military technological revolutions – destructive potential, cost profile, complexity profile, military/civil dual-use potential, and difficulty of espionage – the researchers determined that the potential for things to go wrong with AI as a national defense strategy is very grave, and very real.

Take aerospace technology: its destructive potential was at first very low. Only in huge numbers could airplanes threaten a state’s existence, and attacks were easily defended against. The cost profile was high – in 1945, a fighter plane cost about 50 times as much as a new car.

The complexity profile was high – only very sophisticated organizations could gain an edge in aerospace tech. The military/civil dual-use potential was high – the first passenger airliners were converted WWI bombers. And the difficulty of espionage was high – an airplane factory could be concealed as almost any other industrial facility.

The profile of Artificial Intelligence is almost the opposite. And that’s scary.

Read More: ‘AI will represent a paradigm shift in warfare’: WEF predicts an Ender’s Game-like future

The destructive potential of AI is high. The cost is low. The complexity profile is diverse but potentially low. The military/civil dual-use potential is high. And the difficulty of espionage is high.

The report found “that AI is likely to display some, if not all, of the most challenging aspects of prior transformative military technologies. In examining how national security policymakers responded to these prior technologies we agree with Scott Sagan, who pointed out that our forebears performed worse than we had known but better perhaps than we should have expected. The challenges they faced were tremendous.”

Elon Musk, Steve Wozniak, and the late Stephen Hawking – who passed away today at the age of 76 – endorsed an open letter describing how to mitigate the threat and avoid the pitfalls of an armed Artificial Intelligence defense system.

These prominent experts raised concerns that “if any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow.”

Read More: Stephen Hawking inaugurates artificial intelligence research center at Cambridge

The implications should please only those looking to oppress: “Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group,” the open letter continues.

The writers of the open letter concluded “that AI has great potential to benefit humanity in many ways, and that the goal of the field should be to do so. Starting a military AI arms race is a bad idea, and should be prevented by a ban on offensive autonomous weapons beyond meaningful human control.”

Read More: The ongoing campaign against lethal autonomous weapons systems in warfare

Without international regulation of AI, we could be looking at the end of humanity as we know it. International bodies must act before we open this proverbial Pandora’s box any further. The technology can be used either to maim or to heal, and we must guide research toward the latter.

Just as physicists have balked at the proliferation of nuclear weapons and biochemists at that of chemical weapons, AI industry experts have spoken: the weaponization of AI could have devastating consequences, and it is up to us to stop it before it is too late.
