
DARPA wants to make AI a ‘collaborative partner’ for national defense

September 5, 2018


DARPA is building an Artificial Intelligence Exploration (AIE) program to turn machines into “collaborative partners” for US national defense.

Machines may serve as trusted and collaborative partners

The Defense Advanced Research Projects Agency (DARPA) is looking to make artificial intelligence more trustworthy through a series of projects that test an AI’s capacity for self-evaluation and that make the AI “show its work” for human evaluation.

DARPA’s fundamental and applied R&D AI programs are aimed at shaping a future for AI technology where “machines may serve as trusted and collaborative partners in solving problems of importance to national security.”

Announced on July 20, the AIE program will enable DARPA “to fund pioneering AI research to discover new areas where R&D programs awarded through this new approach may be able to advance the state of the art. AIE will enable DARPA to go from idea inception to exploration in fewer than 90 days.”

The AI Exploration program is one key element of DARPA’s broader AI investment strategy that “will help ensure the U.S. maintains a technological advantage in this critical area,” according to the DARPA announcement.

Making AI Trustworthy

Making AI trustworthy is the outward challenge of DARPA's program, whose underlying goal is to lead AI research for national defense worldwide. The agency wants to test artificial intelligence to ensure that humans can trust its results.

However, is trust a one-way street in this case? Trust between two human beings rests on a mutual level of understanding.

We can also view trust as existing between humans and other living creatures, such as wild animals or pets.

We humans can trust, with a reasonable degree of probability, that our house cats won't slit our throats in the night. Likewise, our pets can trust that we will provide them food and shelter, and we can trust that a lion would probably tear us to shreds if we approached it in the wild. But what about AI?

Is the relationship of trust between humans and AI more like that between humans and animals, or are groups like DARPA trying to make AI more human? If so, is that even possible? Humans, gullible or not, are capable of trusting AI because we are human, but can an AI learn to trust humans? That is a question for another time.

The AI Exploration program is considered DARPA’s “third wave” of Artificial Intelligence advancement.

DARPA’s 3rd Wave of AI Advancement and Investments

The first wave of DARPA's AI advancement was "rule-based," while the second wave centered on "statistical learning" technologies. DARPA-funded research and development enabled some of the first successes in AI, such as expert systems and search, and more recently has advanced machine learning algorithms and hardware.
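To make that contrast concrete, here is a toy Python sketch, not DARPA code: the spam-filtering task, keywords, and scoring scheme are invented for illustration. A first-wave system fires on rules that engineers write by hand, while a second-wave system learns a statistical decision from labeled examples.

```python
# Illustrative only: a toy contrast between "first wave" (hand-written
# rules) and "second wave" (statistical learning) AI. The task, data,
# and keywords here are hypothetical examples.

# First wave: knowledge engineers encode expert rules by hand.
def rule_based_is_spam(message: str) -> bool:
    return "free money" in message.lower() or "winner" in message.lower()

# Second wave: the system learns keyword weights from labeled data.
def learn_keyword_weights(examples: list[tuple[str, bool]]) -> dict[str, float]:
    weights: dict[str, float] = {}
    for text, is_spam in examples:
        for word in text.lower().split():
            weights[word] = weights.get(word, 0.0) + (1.0 if is_spam else -1.0)
    return weights

def statistical_is_spam(message: str, weights: dict[str, float]) -> bool:
    score = sum(weights.get(w, 0.0) for w in message.lower().split())
    return score > 0.0

examples = [("claim your free money now", True),
            ("meeting moved to 3pm", False)]
weights = learn_keyword_weights(examples)
print(rule_based_is_spam("You are a winner!"))            # True: the rule fires
print(statistical_is_spam("free money inside", weights))  # True: learned weights
```

The rule-based filter only catches what its authors anticipated; the statistical one generalizes from data but, as DARPA notes, still lacks the contextual adaptation targeted by third-wave research.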

DARPA is now interested in researching and developing “third wave” AI theory and applications that address the limitations of first and second wave technologies with “contextual adaptation.”

Since the AIE launch, DARPA has been issuing “AIE Opportunities” that focus on technical domains important to DARPA’s goals in pursuing disruptive third wave AI research concepts.

If successful, the project would essentially create a scientist from computer code

The ultimate goal of each AIE Opportunity is to invest in research that leads to prototype development that may result in new, game-changing AI technologies for U.S. national security.

According to Nextgov, DARPA announced the first research opportunity under AIE on August 24 called Automating Scientific Knowledge Extraction (ASKE), “which aims to build AI that can generate, test and refine its own hypotheses. If successful, the project would essentially create a scientist from computer code.”
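Nothing about ASKE's internals is public in the announcement, but the generate-test-refine loop it describes can be sketched in miniature. In this hypothetical Python example, the "experiment" is a noisy toy function with a hidden law, and the loop keeps proposing and testing hypotheses around the best one found so far:

```python
# Illustrative only: a minimal generate-test-refine loop in the spirit
# of what ASKE is described as pursuing. The "experiment" here is a toy
# function; a real system would query data or run simulations.
import random

def run_experiment(x: float) -> float:
    """Stand-in for a real measurement: hidden law y = 3x + noise."""
    return 3.0 * x + random.gauss(0.0, 0.1)

def test_hypothesis(slope: float, trials: int = 20) -> float:
    """Score a hypothesized slope by mean squared error against observations."""
    error = 0.0
    for _ in range(trials):
        x = random.uniform(0.0, 10.0)
        error += (run_experiment(x) - slope * x) ** 2
    return error / trials

# Generate candidate hypotheses, test each, and refine around the best.
best_slope, best_error = 0.0, float("inf")
for _ in range(50):
    candidate = best_slope + random.uniform(-1.0, 1.0)  # refine near current best
    error = test_hypothesis(candidate)
    if error < best_error:
        best_slope, best_error = candidate, error

print(f"Refined hypothesis: y = {best_slope:.2f}x (MSE {best_error:.3f})")
```

The loop converges on a slope near 3, the hidden law, without a human proposing it, which is the essence of the "scientist from computer code" idea, if at a vastly simpler scale.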

DARPA’s Explainable Artificial Intelligence Program

Another DARPA-led AI program is the Explainable Artificial Intelligence (XAI) program, which aims to create a suite of machine learning techniques that produce more explainable models while maintaining a high level of performance. It also aims to help human users understand, appropriately trust and effectively manage the emerging generation of artificially intelligent partners.

On August 28, Raytheon announced a partnership with DARPA under which its Explainable Question Answering System (EQUAS) will allow AI programs to "show their work," increasing the human user's confidence in the machine's suggestions.

“Our goal is to give the user enough information about how the machine’s answer was derived and show that the system considered relevant information so users feel comfortable acting on the system’s recommendation,” said Bill Ferguson, lead scientist and EQUAS principal investigator at Raytheon BBN, in a statement.

EQUAS will show users which data mattered most in the AI decision-making process. Using a graphical interface, users can explore the system's recommendations and see why it chose one answer over another. The technology is still in its early phases of development but could potentially be used for a wide range of applications.
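As an illustration of the general idea only (this is not the EQUAS system; the linear model and feature names below are invented for the example), here is a Python sketch of how a system might rank which pieces of evidence contributed most to a recommendation rather than just reporting a score:

```python
# Illustrative only: a minimal sketch of the kind of attribution EQUAS
# is described as providing. The model is a toy linear scorer and the
# feature names are hypothetical.

# A toy linear model: answer confidence = sum of weight * feature value.
weights = {"keyword_match": 2.5, "source_reliability": 1.2, "recency": 0.4}
evidence = {"keyword_match": 0.9, "source_reliability": 0.5, "recency": 0.1}

# Per-feature contributions explain the score instead of just reporting it.
contributions = {name: weights[name] * evidence[name] for name in weights}
score = sum(contributions.values())

print(f"Recommendation confidence: {score:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name:<20} contributed {value:+.2f}")  # ranked "show your work"
```

Ranking contributions this way lets a user see at a glance that, say, keyword evidence drove an answer far more than source reliability did, which is the kind of transparency the program is after.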

AI Will Represent a Paradigm Shift in Warfare

While DARPA is trying to make AI more trustworthy for national defense, the World Economic Forum (WEF) reported in 2017 that the weaponization of AI will represent a paradigm shift in the way wars are fought.

Since the end of the Second World War, defense systems have prioritized deterring attacks over responding to them after the fact. This has been the model for stability for the past 72 years, but that paradigm is now shifting with the rise of AI and machine learning.

According to the WEF Global Risks Report 2017, that long-held stability will give way to Autonomous Weapons Systems (AWS), whose attacks “will be based on swarming, in which an adversary’s defense system is overwhelmed with a concentrated barrage of coordinated simultaneous attacks.”

What is alarming about this technology is that it disregards the human impulse to prevent attacks before they start, which is key to international diplomacy. Defense systems won't be playing a game of diplomatic chess; they will be responding to constant swarming attacks designed to find every weakness and exploit it to the fullest.

Will DARPA’s research lead to machines that are truly trustworthy, or will we see what the WEF predicts?

