DARPA wants AI to moderate social media groups, mitigate ‘destructive ideas’ during humanitarian efforts

September 9, 2021

DARPA is launching a new research project called Civil Sanctuary to develop multilingual AI moderators to mitigate “destructive ideas” while encouraging “positive behavioral norms.”

With the goal of providing technologies to support humanitarian missions, the Pentagon’s research funding arm is looking to create multilingual AI moderators that will “preserve and promote the positive factors of engagement in online discourse while minimizing the risk of negative social and psychological impacts emerging from violations of platform community guidelines,” according to the project opportunity announcement.

The Defense Advanced Research Projects Agency (DARPA) Civil Sanctuary project aims to provide technologies capable of supporting the Pentagon’s humanitarian assistance efforts “by facilitating online social environments where positive behavioral norms – those linked to the productive sharing of information, particularly during crises – are encouraged locally in user conversations through the use of multilingual AI moderators.”

Is Civil Sanctuary a vehicle for social media censorship?

According to the opportunity announcement, Civil Sanctuary “will exceed current content moderation capabilities by expanding the moderation paradigm from detection/deletion to proactive, cooperative engagement.”

The announcement doesn’t rule out censorship via detection/deletion, but the program’s main focus will be on proactively engaging social media communities with multilingual AI moderators.
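To make the paradigm shift concrete, here is a minimal, purely hypothetical sketch contrasting the two approaches: the current detect-and-delete model versus the proactive engagement model the announcement describes. Every name here (flag_post, moderate_engage, the keyword list, the canned reply) is an illustrative assumption, not anything published by DARPA or any platform.

```python
# Hypothetical sketch only: none of these names come from DARPA or a real platform.

TOXIC_TERMS = {"idiot", "liar"}  # toy stand-in for a real multilingual classifier


def flag_post(text: str) -> bool:
    """Toy detector: flag a post that contains a listed term."""
    return any(term in text.lower() for term in TOXIC_TERMS)


def moderate_detect_delete(text: str) -> str | None:
    """Current paradigm: flagged content is simply removed."""
    return None if flag_post(text) else text


def moderate_engage(text: str) -> tuple[str, str | None]:
    """Expanded paradigm: flagged content stays up, and the AI
    moderator posts a corrective, guideline-citing reply."""
    if flag_post(text):
        reply = ("This thread is for sharing verified relief information. "
                 "Please rephrase your post per the community guidelines.")
        return text, reply
    return text, None


post = "You're a liar, the shelter is closed."
print(moderate_detect_delete(post))  # None: the post is deleted
print(moderate_engage(post))         # post kept, moderator reply attached
```

The point of the contrast is that the second function deletes nothing; its output is an additional message directed at the poster, which is what "proactive, cooperative engagement" appears to mean.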

Why is DARPA launching this program?

According to the Civil Sanctuary announcement, “social media environments often fall prey to disinformation, bullying, and malicious rhetoric, which may be perpetuated through broader social dynamics linked to toxic and uncritical group conformity.”

In other words, the Pentagon’s research arm sees social media groups as prone to a hive mindset that breeds bullying and the spread of “disinformation.”

DARPA’s response is, “New technologies are required to preserve and promote the positive factors of engagement in online discourse while minimizing the risk of negative social and psychological impacts emerging from violations of platform community guidelines.”

Who is the target audience?

At present, the target audience is non-English-speaking social media communities that want to better enforce their community guidelines during humanitarian operations.

When all is said and done, the idea is to “demonstrate novel and generalizable technologies that commercial platforms may leverage via third-party vendors.”

Right now, Civil Sanctuary is focused on training AI moderators in non-English languages, and the work will proceed in two phases.

Phase 1 will involve the initial prototyping of artificial agents for online mediation in at least one non-English language.

Phase 2 will extend Phase 1 systems to:

  1. Multilingual settings involving two or more non-English languages
  2. Changing community guidelines

Although non-English speakers are the target audience for now, the technology being developed could just as easily be applied to an English-speaking audience.

The Pentagon’s humanitarian response efforts consist of both domestic and overseas operations.

What will Civil Sanctuary look like in action?

On the surface, it looks as though content deemed destructive will be flagged, and a multilingual chatbot will try to convince the poster that they are wrong while teaching them the error of their ways.

Here’s what DARPA has to say:

“Civil Sanctuary will scale the moderation capability of current platforms, enabling a quicker response to emerging issues and creating a more stable information environment, while simultaneously teaching users more beneficial behaviors that mitigate harmful reactive impulses, including mitigating the uncritical acceptance and amplification of destructive ideas as a means to assert group conformity.”

Additionally, “Extending current research in computational dialogue and cognitive modeling, artificial agents created under this program will learn best practices for online mediation by observing human experts and then employ these skills to interactively guide user groups to adhere to community guidelines.”
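The announcement does not specify how an agent would "learn by observing human experts." One deliberately simple reading is to store transcripts of expert moderator interventions and answer a new violation with the reply from the most similar past case. The sketch below does exactly that using Python's standard-library difflib; the demonstration pairs and the retrieval approach are assumptions for illustration, not the program's actual computational-dialogue or cognitive-modeling methods.

```python
from difflib import SequenceMatcher

# Hypothetical expert demonstrations: (offending post, expert moderator reply).
EXPERT_DEMOS = [
    ("Everyone who disagrees should leave the group",
     "Let's keep the group open; differing reports help us verify facts."),
    ("That aid convoy story is fake, stop posting it",
     "If you doubt a report, please share your source so we can check it together."),
]


def similarity(a: str, b: str) -> float:
    """Crude text similarity in [0, 1] via difflib's ratio."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()


def mediate(post: str) -> str:
    """Reply with the expert response whose trigger best matches the new post."""
    _trigger, reply = max(EXPERT_DEMOS, key=lambda demo: similarity(post, demo[0]))
    return reply


print(mediate("Stop spreading that fake convoy story!"))
```

In practice, the program's reference to computational dialogue and cognitive modeling suggests learned generative agents rather than simple retrieval, but the observe-then-apply loop is the same in spirit.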

Pros and cons?

The multilingual AI moderators may prove useful in making sure accurate information is seen by those who need it most during emergencies.

“During emergency situations and times of turmoil, these [social media] platforms can provide a crucial forum for discussing time-sensitive, potentially life-saving information,” the announcement reads.

“During DoD [Department of Defense] Humanitarian Assistance and Disaster Response (HA/DR) operations, relief efforts would benefit from a stable and constructive information environment that naturally facilitates informative dialogue.”

However, if the “generalizable” technologies are ever repurposed beyond the scope of certain non-English-speaking social media communities, they run the risk of being used for disinformation campaigns and censorship.

DARPA has been funding research into monitoring social media and online news sources for a long time, and big tech companies like Google, Twitter, and Facebook openly embrace the practice with every “coordinated inauthentic behavior” removal update they publish.

Back in 2011, DARPA launched the Social Media in Strategic Communication (SMISC) program “to help identify misinformation or deception campaigns and counter them with truthful information” on social media.

More recently, DARPA announced its INfluence Campaign Awareness and Sensemaking (INCAS) program, which seeks to “exploit primarily publicly-available data sources including multilingual, multi-platform social media (e.g. blogs, tweets, messaging), online news sources, and online reference data sources” in order to track geopolitical influence campaigns before they gain traction.

Taken together, DARPA’s many research programs delving into social media surveillance and intervention are developing technologies that can be used for positive or negative purposes.

It all depends on who is using them, why, and whether they are trustworthy.
