Senators ask big tech to explain how they curb child sexual abuse material

Five US senators are asking big tech companies to explain what they are doing to curb child sexual abuse material.

“We write with concern that technology companies have failed to take meaningful steps to stop the creation and sharing of child sexual abuse material (CSAM) on their online platforms,” Senators Josh Hawley, Richard Blumenthal, Lindsey Graham, Mazie Hirono, and John Cornyn wrote to Google CEO Sundar Pichai on Monday.

“We are writing to request information on what your company is actively doing to identify, prevent, and report child sexual abuse material and other forms of child exploitation,” the senators added.

According to a Tuesday Twitter post by New York Times reporter Michael Keller, Google is one of 36 tech companies being asked to explain what they’re doing to curb CSAM on their platforms.

In their letter, the five senators requested that Pichai answer the following questions in writing by December 4 (some questions have been omitted or condensed for brevity; the full list of questions is available here).

  • Do you automatically identify CSAM that is created and uploaded to your platform(s)? Please describe how you identify CSAM.
  • How many reports of CSAM have you provided to the National Center for Missing & Exploited Children (NCMEC) CyberTipline on an annual basis for the past three years?
  • How many pieces of CSAM did you remove from your platform(s) in 2018?
  • What measures have you taken to ensure that steps to improve the privacy and security of users do not undermine efforts to prevent the sharing of CSAM or stifle law enforcement investigations into child exploitation?
  • What are the main obstacles in identifying all CSAM posted to your platform(s) automatically?
  • Do you provide notice to individuals involved in the transmission of CSAM when you report or submit evidence of such activities to NCMEC or law enforcement?
  • What other barriers do you face in receiving or sharing information, hashes, and other indicators of CSAM with other companies?
  • Have you implemented any technologies or techniques to automatically flag CSAM that is new or has not been previously identified, such as the use of machine learning and image processing to recognize underage individuals in exploitative situations?
  • What steps have you taken to ensure that CSAM detection efforts are incorporated in each appropriate product and service associated with your platform(s)?
  • If your platform(s) include a search engine, please describe the technologies and measures you use to block CSAM from appearing in search results.
  • What, if any, proactive steps are you taking to detect online grooming of children?

The five senators cited the New York Times reporting by Keller and Gabriel Dance to back up their concerns.

Tim Hinchliffe

The Sociable editor Tim Hinchliffe covers tech and society, with perspectives on public and private policies proposed by governments, unelected globalists, think tanks, big tech companies, defense departments, and intelligence agencies. Previously, Tim was a reporter for the Ghanaian Chronicle in West Africa and an editor at Colombia Reports in South America. These days, he is only responsible for articles he writes and publishes in his own name. tim@sociable.co
