Social Media

DARPA to ‘exploit social media, messaging & blog data’ to track geopolitical influence campaigns

If abused, the data exploitation could end up serving geopolitical influences in its own right: perspective

Monitoring, tracking, and potentially quashing online campaigns before they become popular is getting a whole lot easier.

With the goal of detecting geopolitical influence campaigns while they are still evolving, DARPA is looking to exploit data from social media, messaging, online blogs, and digital news sources with a new research program.

And today, the Defense Advanced Research Projects Agency (DARPA) held an invite-only proposers day on Zoom to go over its new INfluence Campaign Awareness and Sensemaking (INCAS) program.

“INCAS will exploit primarily publicly-available data sources including multilingual, multi-platform social media (e.g. blogs, tweets, messaging), online news sources, and online reference data sources” — DARPA

The INCAS research program is aimed at detecting, categorizing, and tracking online geopolitical influence campaigns, including those that fly under the radar of most analysts, while simultaneously looking to reduce the influence of cognitive biases, such as confirmation bias, in the process.

To achieve its goals, “INCAS will exploit primarily publicly-available data sources including multilingual, multi-platform social media (e.g. blogs, tweets, messaging), online news sources, and online reference data sources,” according to the INCAS special notice.

If ever politicized, this type of DARPA-funded research could end up becoming its own antithesis — a geopolitical influence campaign in its own right.

DARPA has been funding research into monitoring social media and online news sources for a long time now, and big tech companies like Google, Twitter, and Facebook openly embrace the tactic with every coordinated inauthentic behavior takedown they announce.

Back in 2011, DARPA launched the Social Media in Strategic Communication (SMISC) program “to help identify misinformation or deception campaigns and counter them with truthful information” on social media.

Sound like what’s happening on social media news feeds today?

“The INfluence Campaign Awareness and Sensemaking program will develop techniques and tools that enable analysts to detect, characterize, and track geopolitical influence campaigns with quantified confidence” — DARPA

While DARPA serves to advance the capabilities of the US military, the technology it develops often has a way of breaking into the private sector somewhere down the road.

For example, “DARPA-funded research […] has led to the development of both military and commercial technologies, such as precision guided missiles, stealth, the internet, and personal electronics,” according to a March 17, 2020 Congressional Research Service Overview report.

Curtis Hougland

Recently, it was reported that DARPA-incubated tech — which was originally developed for combating ISIS propaganda — was overtly politicized by a Political Action Committee (PAC) founded by an ex-DARPA contractor to target and monitor the president of the United States, although DARPA said the claim was misleading.

Back in May, the Washington Post reported that the Defeat Disinfo PAC, founded by Curtis Hougland, was “using open-source technology initially incubated with funding from DARPA,” and that it was “in service of a domestic political goal — to combat online efforts to promote President Trump’s handling of the coronavirus pandemic.”

Following publication of the Washington Post story, which was later picked up by FOX News, DARPA issued a statement on Twitter saying that “Hougland’s claim DARPA funded the tech at the heart of his political work is grossly misleading,” and that the agency was “apolitical.”

Additionally, a DARPA spokesperson told FOX News, “Unequivocally, DARPA funding did not help advance the technology with which Hougland now works any more than does his use of other agency technologies like the internet or mobile phone.”

The narrative remains, however, that “Hougland had received funding from DARPA […] to assist in the propaganda fight against ISIS, which had developed a small but sophisticated content machine that exploited social networks to amplify its vision,” according to Vanity Fair.

Hougland would later found an AI startup called Main Street One, along with the Defeat Disinfo Political Action Committee, which leveraged his own startup’s technology in a way that appears very similar to what he allegedly saw at DARPA.

His startup, Main Street One, aims “to win narratives online for campaigns, causes, and companies,” according to a section of its mission statement.

“INCAS is not specifically focused on detecting misinformation or bot activity” — DARPA

Now, DARPA is set to launch the INCAS program, which “will develop techniques and tools that enable analysts to detect, characterize, and track geopolitical influence campaigns with quantified confidence” using automated influence detection across social media, digital media, and other online data sources.

If DARPA’s INCAS program is successful in achieving its goals, the technology it develops would have the power to detect influence campaigns that are often overlooked by analysts because they get so little traffic.

DARPA says that these “‘low and slow’ campaigns are hard to detect early as their message volume may be beneath platform trending thresholds.”
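To make the “low and slow” idea concrete, here is a minimal sketch of how such a detector might distinguish a brief viral burst from a quiet, sustained drumbeat of messaging. This is not DARPA’s actual method; the function, threshold, and window values are invented for illustration.

```python
# Hypothetical sketch: flag activity that never crosses a platform's
# trending threshold but persists far longer than organic chatter would.
# All numeric values below are invented for illustration.

def flag_low_and_slow(daily_counts, trend_threshold=1000, min_active_days=14):
    """Return True if message volume stays below the trending threshold
    on every day, yet activity persists for at least min_active_days."""
    active_days = sum(1 for count in daily_counts if count > 0)
    never_trends = all(count < trend_threshold for count in daily_counts)
    return never_trends and active_days >= min_active_days

# A two-day viral burst trends and then dies off:
burst = [0, 0, 5000, 8000, 0, 0]

# A month-long drumbeat stays beneath the trending threshold every day:
drumbeat = [40] * 30

print(flag_low_and_slow(burst))     # the burst trips the trending threshold
print(flag_low_and_slow(drumbeat))  # the drumbeat is flagged as low and slow
```

The point of the sketch is that trend-based monitoring keyed only to volume spikes would catch the burst but miss the drumbeat entirely, which is exactly the gap INCAS describes.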

The research program “is not specifically focused on detecting misinformation or bot activity, as influence campaigns may exploit a variety of tactics and true information, but should be able to exploit such signals from extant capabilities to aid in detecting influence messaging,” according to the special notice.

Theoretically, DARPA’s INCAS program could create technology that would allow analysts to detect and take action against online movements before they get a chance to grow.

Whether online campaigns are nefarious or benign, monitoring, tracking, and quashing them before they gain popularity is getting a whole lot easier.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
