Source Alert

“Deepfake” videos go viral: USC experts on risks and solutions

So-called “deepfake” videos use artificial intelligence (AI) to alter images, including swapping faces and voices to create realistic videos. Deepfakes are on the rise, including those depicting U.S. House Speaker Nancy Pelosi and Facebook CEO Mark Zuckerberg, adding to concerns that the technology could be used to disrupt the 2020 presidential election. The U.S. House Intelligence Committee recently held a hearing on the matter, and USC researchers are presenting a new AI detection system at a conference that kicks off today in Long Beach, California. USC experts examine the risks of and countermeasures for deepfakes.

June 17, 2019

Contact: Jenesse Miller, (213) 810-8554 or jenessem@usc.edu

Can AI automatically detect deepfakes?

Wael Abd-Almageed is a computer vision, facial recognition and biometrics expert, and a research team lead and senior scientist at the USC Viterbi School of Engineering’s Information Sciences Institute.

Millions of videos are uploaded to social media daily. Computer scientists are developing tools that can automatically detect fake content at scale.
Abd-Almageed and colleagues recently used AI to develop a forensics tool that detects deepfakes with 96 percent accuracy when evaluated on a large-scale deepfake dataset.

“If you think deep fakes as they are now are a problem, think again. Deep fakes as they are now are just the tip of the iceberg, and manipulated video using artificial intelligence methods will become a major source of misinformation,” Abd-Almageed said.

Contact: (703) 248-6174 or wamageed@isi.edu

 

How do journalists and the public separate fact from fiction?

Jeffrey Pearlman is a professor and the director of the Intellectual Property and Technology Clinic at the USC Gould School of Law.

Fake videos are not new, but technology is putting the ability to create them in the hands of more and more people while making them harder to detect.
Deepfakes could make it even more difficult for both journalists and the public to separate fact from fiction in the 2020 election.

“It is clear there is no silver bullet solution. It will take a combination of existing laws, new technological tools, updated norms for journalists, social media companies and the public, and possibly new regulation to protect democratic institutions,” Pearlman said.

Contact: (214) 684-4533 or jef@law.usc.edu

 

Problems with fake news go deeper than deepfakes

Mark Marino is a professor of writing at the USC Dornsife College of Letters, Arts and Sciences and director of the Humanities and Critical Code Studies Lab.

In teaching a course on fake news, Mark Marino includes a session on how to manipulate images to show students how easy it is. Add artificial intelligence and “the possibilities are endless.”

“It is ironic that people are complaining about deepfakes on platforms designed to track their data and model their behavior. We train our machines to recognize our faces, construct our sentences, and otherwise create our online signatures with every transmission.

“Individuals need to challenge the information we encounter. The hope of our democratic experiment is in disciplined and critical media consumption by an informed and educated public,” Marino said.

Contact: (310) 420-4481 or markcmarino@gmail.com