US spy community wants to identify threats posed by tech behind ChatGPT, large language models

The US Intelligence Advanced Research Projects Activity (IARPA) issues a request for information (RFI) to identify potential threats and vulnerabilities that large language models (LLMs) may pose.

“IARPA is seeking information on established characterizations of vulnerabilities and threats that could impact the safe use of large language models (LLMs) by intelligence analysts”

While not an official research program yet, IARPA’s “Characterizing Large Language Model Biases, Threats and Vulnerabilities” RFI aims “to elicit frameworks to categorize and characterize vulnerabilities and threats associated with LLM technologies, specifically in the context of their potential use in intelligence analysis.”

Many vulnerabilities and potential threats are already known.

For example, you can ask ChatGPT to summarize or make inferences on just about any given topic, and it will draw on patterns learned from its training data to produce an explanation that sounds convincing.

However, those explanations can also be completely false.

As OpenAI describes it, “ChatGPT sometimes writes plausible-sounding but incorrect or nonsensical answers.”
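One commonly discussed baseline for catching such plausible-but-wrong answers is self-consistency: sample the model several times on the same question and flag the answer if the samples don't agree. The sketch below is illustrative only, not an IARPA or OpenAI method; the `sampler` callable is a hypothetical stand-in for any stochastic LLM call.

```python
import itertools
from collections import Counter
from typing import Callable, Tuple

def self_consistency(sampler: Callable[[str], str], prompt: str,
                     n: int = 5, threshold: float = 0.6) -> Tuple[str, float, bool]:
    """Sample the model n times; treat the answer as low-confidence
    if no single answer reaches the agreement threshold."""
    answers = [sampler(prompt) for _ in range(n)]
    best, count = Counter(answers).most_common(1)[0]
    agreement = count / n
    return best, agreement, agreement >= threshold

# Toy sampler standing in for a real model sampled at temperature > 0.
_canned = itertools.cycle(["Paris", "Paris", "Lyon", "Paris", "Paris"])
answer, agreement, confident = self_consistency(lambda p: next(_canned),
                                                "What is the capital of France?")
print(answer, agreement, confident)  # Paris 0.8 True
```

Agreement across samples is at best a proxy: a model can be consistently wrong, which is part of why IARPA is asking for better characterizations.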

But the risks posed by LLMs go well beyond nonsensical explanations, and the research funding arm for US spy agencies is looking to identify threats and vulnerabilities that might not have been fully covered in the OWASP Foundation’s recently published “Top 10 for LLM.”

“Has your organization identified specific LLM threats and vulnerabilities that are not well characterized by prior taxonomies (c.f., “OWASP Top 10 for LLM”)? If so, please provide specific descriptions of each such threat and/or vulnerability and its impacts”

Last week, UC Berkeley professor Dr. Stuart Russell warned the Senate Judiciary Committee about a few of the risks in the OWASP top 10 list, including Sensitive Information Disclosure, Overreliance, and Model Theft.

For example, Russell mentioned that you could be giving up sensitive information simply through the types of questions you ask, and that the chatbot could then spit back sensitive or proprietary information belonging to a competitor.

“If you’re in a company […] and you want the system to help you with some internal operation, you’re going to be divulging company proprietary information to the chatbot to get it to give you the answers you want,” Russell testified.

“If that information is then available to your competitors by simply asking ChatGPT what’s going on over in that company, this would be terrible,” he added.
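One coarse mitigation for the disclosure risk Russell describes is to redact obvious identifiers before a prompt ever leaves the organization. The patterns below are purely hypothetical examples (the `PRJ-` project-code format is made up); a real deployment would rely on a vetted data-loss-prevention policy, not a handful of regexes.

```python
import re

# Illustrative patterns only; real DLP rules would be far more thorough.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PROJECT_CODE": re.compile(r"\bPRJ-\d{4}\b"),  # hypothetical internal format
}

def redact(prompt: str) -> str:
    """Replace sensitive substrings with placeholders before the
    prompt is sent to an external chatbot."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt

print(redact("Ask about PRJ-1234, contact jane.doe@corp.com"))
# Ask about [PROJECT_CODE], contact [EMAIL]
```

Redaction only addresses what goes into the model; it does nothing about what a model trained on leaked data might already reveal.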

If we take what Russell said about divulging company information and apply that to divulging US intelligence information, then we can start to get a better understanding of why IARPA is putting out its current RFI.

But there could also be potential threats and vulnerabilities that are not yet known.

As former US Secretary of Defense Donald Rumsfeld famously quipped, “There are known knowns. These are things we know that we know. There are known unknowns. That is to say, there are things that we know we don’t know. But there are also unknown unknowns. There are things we don’t know we don’t know.”

So, for the current RFI, IARPA is asking organizations to answer the following questions:

  • Has your organization identified specific LLM threats and vulnerabilities that are not well characterized by prior taxonomies (c.f., “OWASP Top 10 for LLM”)? If so, please provide specific descriptions of each such threat and/or vulnerability and its impacts.
  • Does your organization have a framework for classifying and understanding the range of LLM threats and/or vulnerabilities? If so, please describe this framework, and briefly articulate for each threat and/or vulnerability and its risks.
  • Does your organization have any novel methods to detect or mitigate threats to users posed by LLM vulnerabilities?
  • Does your organization have novel methods to quantify confidence in LLM outputs?
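The last question touches an active research area. One widely used baseline, not anything IARPA has endorsed, scores an output by the average log-probability the model assigned to its own tokens, or equivalently by perplexity. The per-token probabilities below are invented for illustration.

```python
import math

def avg_logprob(token_probs):
    """Mean log-probability of the generated tokens; values closer
    to 0 mean the model found its own output more likely."""
    return sum(math.log(p) for p in token_probs) / len(token_probs)

def perplexity(token_probs):
    """exp(-mean log-prob); 1.0 corresponds to total certainty."""
    return math.exp(-avg_logprob(token_probs))

# Hypothetical per-token probabilities from a model's decoder.
confident_run = [0.9, 0.8, 0.95, 0.85]
hedging_run = [0.4, 0.3, 0.5, 0.2]
print(perplexity(confident_run) < perplexity(hedging_run))  # True
```

The known weakness of this baseline is precisely the one OpenAI concedes above: a model can assign high probability to fluent text that is factually wrong, which is why the RFI calls for novel methods.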

The primary point of contact for the RFI is Dr. Timothy McKinnon, who also manages two other IARPA research programs: HIATUS and BETTER.

  • HIATUS [Human Interpretable Attribution of Text Using Underlying Structure]: seeks to develop novel human-useable AI systems for attributing authorship and protecting author privacy through identification and leveraging of explainable linguistic fingerprints.
  • BETTER [Better Extraction from Text Towards Enhanced Retrieval]: aims to develop a capability to provide personalized information extraction from text to an individual analyst across multiple languages and topics. 

Last year, IARPA announced it was putting together its Rapid Explanation, Analysis and Sourcing ONline (REASON) program “to develop novel systems that automatically generate comments enabling intelligence analysts to substantially improve the evidence and reasoning in their analytic reports.”

Additionally, “REASON is not designed to replace analysts, write complete reports, or to increase their workload. The technology will work within the analyst’s current workflow. 

“It will function in the same manner as an automated grammar checker but with a focus on evidence and reasoning.”

So, in December, IARPA wanted to leverage generative AI to help analysts write intelligence reports, and now in August, the US spy agencies’ research funding arm is looking to see what risks large language models may pose.


Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co