In response to the “Drunk Pelosi” viral video that did not use deepfake technology, the House Intelligence Committee will examine deepfakes in a hearing on Thursday.
In response to the fake video that appeared to show House Speaker Nancy Pelosi drunkenly slurring her words, the House Permanent Select Committee on Intelligence will hold an open hearing on the national security challenges of Artificial Intelligence (AI), manipulated media, and deepfake technology on June 13.
This is despite the fact that the “Drunk Pelosi” video did not use deepfake technology. Deepfakes use AI capable of creating realistic-looking fake videos; the “Drunk Pelosi” video was simply authentic footage played in slow motion.
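To illustrate how little sophistication that kind of manipulation requires, here is a minimal sketch (illustrative only, not the actual edit that was made): slowing a clip amounts to stretching the presentation timestamps of its frames, with no AI involved at all.

```python
# Minimal sketch (illustrative only): a "slowed" video is just the same
# frames shown with stretched presentation timestamps; no AI is involved.

def retime(timestamps, speed):
    """Map original presentation times to playback at `speed` times
    normal rate (e.g. 0.75 slows the clip to 75% speed, which also
    deepens the perceived voice unless the audio pitch is corrected)."""
    return [round(t / speed, 2) for t in timestamps]

# Three frames originally shown at 0s, 1s, and 2s, slowed to 75% speed:
print(retime([0.0, 1.0, 2.0], 0.75))  # → [0.0, 1.33, 2.67]
```

The pitch side effect is part of why the slowed clip read as slurred speech rather than as an obviously retimed video.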
However it was done, the video made politicians take notice of doctored footage in general — something The Sociable warned about back in 2016 (see link below).
Concerning real deepfakes, House Intelligence Chairman Adam Schiff told CNN that he feared Russia might get involved in a “severe escalation” of its disinformation campaign targeting the US ahead of the 2020 presidential elections.
“And the most severe escalation might be the introduction of a deep fake — a video of one of the candidates saying something they never said,” added Schiff, who was asked by Republicans in March to resign from his “committee post for repeatedly pushing claims of collusion between President Trump’s 2016 campaign and Russian operatives.”
Former US President Barack Obama has also expressed concern about AI-powered fake videos he has seen that bear his likeness and model his voice and movements.
First Hearing Devoted to Deepfakes
In what will be the first House Intel Committee hearing devoted specifically to deepfakes and other types of AI-generated synthetic data, the Committee will examine the national security threats that AI-enabled fake content poses, ways of detecting and combating it, and the roles that the public sector, the private sector, and society as a whole should play in keeping a “potentially grim”, “post-truth” future at bay.
“Advances in machine learning algorithms have made it cheaper and easier to create deepfakes – convincing videos where people say or do things that never happened,” according to the Committee press release.
“Such advances also support the production of fake audio, imagery, and text at scale, and these capabilities are fast becoming more accessible and widely available.
“Deepfakes raise profound questions about national security and democratic governance, with individuals and voters no longer able to trust their own eyes or ears when assessing the authenticity of what they see on their screens,” the press release concludes.
Psychological Impact of Deepfakes
Another topic of interest to the Committee is the enduring psychological impact of deepfakes, as well as the looming counterintelligence risks.
The role of Internet platforms in policing fake content will also be discussed, along with the appropriate role for the US government in addressing the difficult legal challenges that deepfakes raise.
The Committee’s concern over deepfakes is not unfounded. Going forward, regulation and policing may become the norm in video production, an uncomfortable prospect for vloggers and YouTubers.
The Committee also seeks testimony on future advancements in deepfake technology and how they could lead people to deny legitimate media.
As the following tweet shows, we can expect deepfake videos to surface more and more as the 2020 campaign proceeds, with confusion following denials and cross-accusations.
Prediction 1: The 2020 campaign will be muddled by shocking deepfake videos
Prediction 2: Shocking real videos will be called fake by those featured
— Erik Brynjolfsson (@erikbryn) June 5, 2019
If there is a silver lining to the deepfake fears, it’s that our society may finally start to question what it sees and hears instead of accepting everything blindly.
It doesn’t take much to convince people: social media comments on the fake Nancy Pelosi video show that many fell for it.
As deepfake technology becomes more common, it is starting to cause havoc among politicians and the public alike. Even low-quality deepfakes are enough to spread a sentiment, be it panic, hope, or hatred.
People are also sure to start claiming they never said things they actually did. What happens when people begin mistrusting genuine videos?
Easy to Make Fakes
Scientists from Stanford University, the Max Planck Institute for Informatics, Princeton University, and Adobe Research recently demonstrated that it is becoming increasingly easy to edit speech in videos and create realistic fakes.
Their method automatically annotates an input talking-head video with phonemes, visemes, 3D face pose and geometry, reflectance, expression, and scene illumination per frame. To edit the video, the user only has to edit the transcript.
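As a rough sketch of the idea (the names and data below are hypothetical toys, not the researchers’ actual code or pipeline), transcript-driven editing reduces to mapping the new words onto the per-frame annotations already extracted from the source video:

```python
# Toy illustration of transcript-driven video editing. In this
# simplified sketch, "editing" means selecting existing frames whose
# viseme (mouth-shape) annotations match the new transcript; the real
# research system instead synthesizes and blends new face imagery.

from dataclasses import dataclass

@dataclass
class Frame:
    index: int
    viseme: str   # mouth-shape class annotated for this frame
    pose: tuple   # placeholder for 3D head pose parameters

# Hypothetical per-frame annotation of an input talking-head video.
annotated = [
    Frame(0, "AH", (0, 0, 0)),
    Frame(1, "B",  (0, 1, 0)),
    Frame(2, "EE", (0, 2, 0)),
    Frame(3, "F",  (0, 3, 0)),
]

def frames_for_transcript(visemes, library):
    """Pick one source frame per target viseme. A real system would
    synthesize seamless new mouth regions rather than reuse frames."""
    by_viseme = {f.viseme: f for f in library}
    return [by_viseme[v] for v in visemes if v in by_viseme]

# "Editing the transcript" becomes requesting a new viseme sequence:
edited = frames_for_transcript(["EE", "B", "AH"], annotated)
print([f.index for f in edited])  # → [2, 1, 0]
```

The point of the sketch is the inversion of effort: once the expensive per-frame annotation exists, changing what a speaker appears to say is reduced to a text edit.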
Additionally, Adobe’s VoCo is like Photoshop for voice.
Products that allow video and audio manipulation are flooding the market, as vloggers and film editors seek quality in their work.
The creative could use deepfakes to get out of work, speeding tickets, school attendance, or even a crime.
From the very top of power and politics to the everyday person, deepfakes could soon have the guilty screaming conspiracy while innocents suffer for words put in their digital mouths.