Law Enforcement in the Era of Deepfakes

 

“I think there is going to be a point where we can throw absolutely
everything that we have at this, at these types of techniques, and
there is still some question about whether it is authentic or not.”
—Dr. David Doermann, speaking about deepfakes
to the U.S. House Intelligence Committee, June 13, 2019

Deepfake media (deepfakes) are hyper-realistic video, image, and audio forgeries created using artificial intelligence (AI) and machine learning algorithms. Types of deepfake videos include face-swapping, puppeting (in which the person depicted mirrors the movements of the video’s creator), and the addition of completely synthetic content to a video. Deepfakes are created using generative adversarial networks (GANs), which pit two machine learning algorithms against each other: a generator (image creator) and a discriminator (image checker). The two algorithms compete to create and refine images until the discriminator can no longer discern that an image is fake.1 Sophisticated deepfakes do not require exotic means; they can be created using off-the-shelf gaming computers and software widely available on the internet.
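
The adversarial training loop can be illustrated with a toy sketch, in which a single number stands in for an image and simple nudging stands in for gradient descent. This is an illustration of the generator-versus-discriminator dynamic only, not a working image GAN:

```python
# Toy sketch of the adversarial loop: TARGET stands in for the
# distribution of genuine images; each "model" is one number.

TARGET = 1.0

def train(steps=2000, lr=0.01):
    gen_output = 0.0      # the generator's current forgery
    disc_boundary = 0.5   # the discriminator's learned notion of "real"
    for _ in range(steps):
        # Discriminator step: pull its notion of "real" toward real data.
        disc_boundary += lr * (TARGET - disc_boundary)
        # Generator step: pull the forgery toward whatever the
        # discriminator currently accepts as real.
        gen_output += lr * (disc_boundary - gen_output)
    return gen_output, disc_boundary

gen_output, disc_boundary = train()
# After enough rounds the forgery converges on the target, so this
# discriminator can no longer tell the forgery from real data.
```

In a real GAN, both players are deep neural networks trained on large image sets, but the dynamic is the same: each improvement in the "checker" trains a better "creator."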

The first deepfakes appeared in pornographic videos in 2017 on Reddit, where celebrities’ faces were used in place of the real faces of pornographic actors.2 In 2018, comedic actor Jordan Peele made news when he posted a deepfake video of former U.S. President Obama insulting former U.S. President Trump and warning of the dangers of deepfake media.3 By 2019, the U.S. House Intelligence Committee was holding hearings on the potential threats to U.S. security posed by deepfakes.4 Concerns about the dangers of manipulated media have continued to grow as deepfakes become even more sophisticated and difficult to detect. For example, in 2021, deepfake videos of Tom Cruise appeared on TikTok, created to show just how far the technology had advanced.5 The videos quickly went viral and were so convincing that many commenters could not tell whether they showed the real Tom Cruise or a deepfake.6 Deepfakes can undermine public trust in video as people struggle to discern whether a video is real or a forgery.7

Consider all of the sources of video law enforcement regularly uses as evidence—surveillance and doorbell cameras, cellphone videos, body-worn cameras, and dash-mounted cameras—and it becomes clear the erosion of trust in video has significant implications for law enforcement. It is important for law enforcement to understand how criminals use deepfake technology, how deepfakes can impact law enforcement’s use of video evidence for investigations and prosecution, and what steps can be taken to prepare for a future where the authenticity and reliability of every video used as evidence in any trial is in question.

The Trouble of Deepfakes

Deepfake media (deepfakes) threaten public trust in video and present challenges for law enforcement with new types of investigations, evidence management, and trials. Deepfake media have already been used to commit crimes from harassment to fraud, and their use in crimes will likely expand.

In March 2021, the mother of a high school cheerleader was arrested for sending apparent deepfake photographs and videos of her daughter’s cheerleading teammates and coaches, reportedly in an attempt to remove her daughter’s rivals from the cheer team. The media showed the teammates engaged in inappropriate conduct and were accompanied by messages encouraging the teammates depicted to kill themselves.8 Although the mother was held to answer on harassment charges after the preliminary hearing, prosecutors said police were unable to prove that the mother created the deepfake video or even that it was falsified.9 The case highlights the difficulty law enforcement faces in investigating and analyzing digital video evidence in such cases.

According to a 2021 FBI Cyber Division bulletin, criminals would “almost certainly” leverage synthetic media to target victims in “spear phishing” attacks in the following 12–18 months.10 There have already been reports of harassment in which the suspects used deepfakes and synthetically generated images, as well as financial crimes using deepfake audio to impersonate a CEO.11 In the United Kingdom, during a custody dispute, a parent attempted to introduce a deepfake audio recording as evidence that the child’s other parent was making violent threats. The deepfake was discovered after analysis of the recording’s metadata.12
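
As the UK case suggests, simple metadata consistency checks can expose some fakes before any waveform- or pixel-level analysis. The sketch below is hypothetical: the field names and values are invented for illustration, and real forensic tools extract far more fields and compare them against known device profiles.

```python
from datetime import datetime

def flag_inconsistencies(meta):
    """Return a list of red flags found in a file's extracted metadata."""
    flags = []
    created = datetime.fromisoformat(meta["created"])
    modified = datetime.fromisoformat(meta["modified"])
    # A file cannot be modified before it existed.
    if modified < created:
        flags.append("file modified before it was created")
    # An encoder that differs from the capture device suggests re-encoding.
    if meta.get("encoder") and meta["encoder"] != meta.get("device_encoder"):
        flags.append("audio re-encoded after capture")
    return flags

# Invented metadata for a suspect recording.
suspect = {
    "created": "2020-01-15T10:30:00",
    "modified": "2020-01-14T22:05:00",   # earlier than creation: a red flag
    "encoder": "desktop_audio_editor",   # hypothetical tool name
    "device_encoder": "phone_voice_memo",
}
print(flag_inconsistencies(suspect))
```

Checks like these are easily defeated by a careful forger, which is why they supplement, rather than replace, full forensic analysis.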

U.S. federal evidence rules were relaxed in 2017 to simplify admission of digital video evidence when the recording process can be reliably authenticated by a digital identification process such as secured video surveillance systems or police body-worn cameras.13 With an increase in video evidence being used in court, though, the potential for deepfakes to be considered authentic rises accordingly. During a 2021 panel discussion on deepfake media, Emeryville, California, Police Captain Oliver Collins commented that the existence of deepfakes will likely have a “higher impact later in the judicial process” where confidence in the reliability of video can impact the weight the evidence is given.14 This is because video is “seen” by the brain in highly impactful ways.

Visual evidence is vivid and activates several areas of the brain, which can make it very persuasive.15 Researchers have even demonstrated that forged video images can induce false witness testimony.16 Law enforcement must be prepared for the challenges faced when the authenticity of influential evidence such as video can be called into question.

Although cases are rare, courts have already had to deal with the possibility of digital forgeries since the widespread use of photo editing software and have developed authentication standards to limit the risk of forged evidence.17 The significant difference today is the speed and ease with which visual media can be manipulated and the quality of the manipulated images, coupled with the lack of a current robust way to analyze, verify, and authenticate potential deepfake media. The erosion of public trust could enable the “liar’s dividend” where a person claims a real video is fake and benefits from the public’s skepticism and knowledge that deepfakes exist.18

The Current State of Deepfakes

Many in policing are still unaware of how sophisticated and easy the creation of deepfake media has become. A recent internet search revealed only a handful of articles on the topic targeted specifically to law enforcement personnel. In a 2021 panel discussion on deepfake technology, Emeryville Police Chief Jeff Jennings discussed the importance of simply being aware of the possibilities that exist as deepfakes increase in sophistication and proliferation.19 Due to this lack of awareness, deepfake-specific laws exist in only a few states. Texas has a law banning deepfakes created to influence elections, Virginia banned deepfake pornography, and California has laws against both malicious deepfakes within 60 days of an election and nonconsensual deepfake pornography.20 Tragically, there are as yet no laws that specifically address issues such as fully photorealistic child pornography.21 In addition to limited specific criminal statutes, law enforcement is not yet capable of reliably detecting deepfakes.

Deepfake Detection

A critical problem facing any potential deepfake detection process is that the very method a program uses to detect a deepfake can be used to “train” new deepfake creation algorithms.22 For example, researchers developed an algorithm to detect deepfakes by tracking their unnaturally inconsistent eye blinking. Shortly after the researchers published their findings, new creation algorithms incorporating more natural eye blinking appeared, and the detection method was obsolete.23 Both private and government-sponsored programs are working to develop deepfake detection technology that can be deployed at scale; however, commercial deepfake detection software is not currently available.24 Companies are working to remedy that deficit.

In September 2019, Facebook, Microsoft, Amazon Web Services, and several universities created a competition to develop deepfake detection capabilities. The Deepfake Detection Challenge offered $1 million in prize money. At the end of the challenge, the winning team was able to accurately detect deepfakes only 65 percent of the time.25 The Defense Advanced Research Projects Agency (DARPA), which helped develop technologies such as the internet, GPS, stealth aircraft, and Siri, currently has a project focused on media forensics and deepfakes (MediFor).26 The MediFor program still has not developed end-to-end technology sufficient to perform a complete forensic analysis on deepfake videos.27 Worse, in February 2021, computer scientists at the University of California San Diego announced they had created a way to defeat deepfake detection algorithms.28 Deepfake technology continues to increase in sophistication rapidly even as law enforcement, the courts, and legislatures struggle to keep up.

The Response to Deepfakes

As no single effective measure to combat deepfakes exists, law enforcement should take a holistic approach that includes education, lobbying, and collaboration.

The first step to combat deepfake media is the education of law enforcement personnel at all levels. An awareness campaign by state agencies and law enforcement associations could reach a wide audience, including patrol officers, evidence technicians, investigators, and command staff. The campaign should include basic information on how deepfakes are created, how criminals can already use deepfake media, and methods to identify deepfake media. State commissions on Peace Officer Standards and Training (POST) could also convene subject matter expert panels to develop best practices for the collection, storage, analysis, and transmission of digital evidence to establish authenticity.

The investigation of deepfake crimes will require technical skill and resources beyond the capabilities of most agencies. Law enforcement personnel can look for other methods of corroborating video or refuting its authenticity such as cross-checking videos with independent sources, collecting cellphone records including data and GPS, and interviewing other people seen in the video.29 Developing methods to verify chain of custody and authenticity will be critical to maintain public trust in video evidence.

The second step is lobbying. As awareness grows, law enforcement leaders should lobby legislatures for laws that can help address the threat posed by deepfake media. Any such laws should be written to account for future advances in technology. For example, future laws could mandate some form of digital authentication embedded in recording devices and close loopholes that allow the proliferation of harmful material such as photorealistic child pornography.30

The third step is collaboration. Individual law enforcement agencies are unlikely to have the funding or personnel to conduct extensive forensic examination of deepfake evidence and will need to identify resources such as regional computer crimes task forces. Task forces that include local, state, and federal agencies and all computer forensics crime labs should become familiar with deepfake creation and detection technology. Law enforcement will need to locate and cultivate partnerships with computer science experts in the private sector or academic research institutions to aid in the forensic analysis of digital video and detection of deepfakes as the technology continues to evolve.

Many private companies and researchers are already at work on technologies that could aid law enforcement in the future by verifying videos as they are created. Digital watermarks embedded within a video and identifying the specific device used to record it could provide authentication of video evidence.31 A consortium of businesses including Adobe and Twitter has created software to track images and verify the creator and any changes that have been made.32 Similarly, blockchain technology could use unalterable electronic ledgers to track and verify the authenticity of videos.33
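
Both the device-watermark and ledger approaches rest on the same primitive: a cryptographic fingerprint computed at capture time that any later edit will break. A minimal sketch follows; the video ID, bytes, and in-memory dictionary are hypothetical stand-ins, as a real system would anchor the fingerprints in tamper-evident hardware or a blockchain rather than ordinary memory.

```python
import hashlib

def fingerprint(data: bytes) -> str:
    """SHA-256 fingerprint of a recording; any change to the bytes changes it."""
    return hashlib.sha256(data).hexdigest()

# Stand-in for an unalterable ledger (e.g., a blockchain entry per video).
ledger = {}

def register(video_id: str, data: bytes) -> None:
    """Called by the recording device at capture time."""
    ledger[video_id] = fingerprint(data)

def verify(video_id: str, data: bytes) -> bool:
    """Called later in court: does the submitted file match the capture-time record?"""
    return ledger.get(video_id) == fingerprint(data)

original = b"\x00\x01 raw video bytes \x02"       # hypothetical recording
register("bodycam-0417", original)                # hypothetical video ID
print(verify("bodycam-0417", original))           # prints True
print(verify("bodycam-0417", original + b"ed"))   # prints False: file was altered
```

The hard problems are operational rather than cryptographic: the fingerprint must be computed on the device before anyone can intercept the file, and the ledger itself must be trustworthy, which is the gap blockchain proposals aim to fill.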

Conclusion

The response to deepfakes will continue to be an escalating game of cat and mouse as methods of detecting deepfakes are quickly superseded by new methods to create and use deepfakes criminally. Deepfakes are the next evolutionary step up from “photoshopping.” Staying ahead of the machine learning curve will require knowledge of what deepfakes are, how to detect them, and where to go when more sophisticated analysis is needed. The police are already behind the innovation curve as video evidence is being submitted with greater frequency in courts. Unless they work to catch up, people may no longer be able to believe what they see.

Notes:

1Mika Westerlund, “The Emergence of Deepfake Technology: A Review,” Technology Innovation Management Review 9, no. 11 (November 2019): 39–52.

2Westerlund, “The Emergence of Deepfake Technology.”

3Jordan Peele, You Won’t Believe What Obama Says in This Video! BuzzFeed Video, YouTube, April 17, 2018.

4Open Hearing on Deepfakes and Artificial Intelligence, Before the House Permanent Select Comm. on Intelligence, 116th Cong. (2019).

5Mark Corcoran and Matt Henry, “The Tom Cruise Deepfake that Set Off ‘Terror’ in the Heart of Washington DC,” ABC News, updated June 27, 2021.

6“No, Tom Cruise Isn’t on TikTok. It’s a Deepfake,” CNN Business, March 2, 2021.

7Robert Chesney and Danielle Keats Citron, “Deep Fakes: A Looming Challenge for Privacy, Democracy, and National Security,” California Law Review 107 (2019): 1753–1819.

8Marlene Lenthang, “Cheerleader’s Mom Created Deepfake Videos to Allegedly Harass Her Daughter’s Rivals,” ABC News, March 13, 2021.

9Megan Sheets, “Cops Who Accused Woman, 50, of Creating ‘Deepfake’ Photos and Videos of Her Daughter’s Cheerleading Rivals Admit They NEVER Found Evidence of Manipulated Images – But Will Still Prosecute Her for Harassment,” Daily Mail, May 18, 2021.

10Federal Bureau of Investigation, Cyber Division, “Malicious Actors Almost Certainly Will Leverage Synthetic Content for Cyber and Foreign Influence Operations,” Private Industry Notification 210310-001 (March 10, 2021).

11Rana Ayyub, “I Was the Victim of a Deepfake Porn Plot Intended to Silence Me,” Huffington Post, updated November 21, 2018; Raphael Satter, “Deepfake Used to Attack Activist Couple Shows New Disinformation Frontier,” Reuters, July 15, 2020; Jesse Damiani, “A Voice Deepfake Was Used to Scam a CEO Out of $243,000,” Forbes, September 3, 2019.

12Bari Weinberger, “Looking Out for Manipulated ‘Deepfake’ Evidence in Family Law Cases,” New Jersey Law Journal, February 22, 2021.

13Riana Pfefferkorn, “‘Deepfakes’ in the Courtroom,” Public Interest Law Journal 29 (September 2020): 245–276.

14Oliver Collins, “Deepfakes Case for Change Panel,” interview by Frederick Dauer, February 4, 2021.

15Yael Granot et al., “In the Eyes of the Law: Perception Versus Reality in Appraisals of Video Evidence,” Psychology, Public Policy, and Law 24, no. 1 (2018): 93–104.

16Kimberly A. Wade, Sarah L. Green, and Robert A. Nash, “Can Fabricated Evidence Induce False Eyewitness Testimony?” Applied Cognitive Psychology 24, no. 7 (October 2010): 899–908.

17John P. LaMonaga, “A Break from Reality: Modernizing Authentication Standards for Digital Video Evidence in the Era of Deepfakes,” American University Law Review 69, no. 6 (2020): 1945–1988.

18Chesney and Citron, “Deep Fakes.”

19Jeff Jennings, “Deepfakes Case for Change Panel,” interview by Frederick Dauer, February 4, 2021.

20David Ruiz, “Deepfakes Laws and Proposals Flood US,” Malwarebytes Labs (blog), January 23, 2020.

21Merritt Baer, “Child Pornography Law Set Legal Standards for Cybercrime. But How Do ‘Deep Fakes’ Factor In?” Fels Institute of Government, April 26, 2018.

22Yuezun Li and Siwei Lyu, Exposing DeepFake Videos by Detecting Face Warping Artifacts (presentation, CVPR 2019: Computer Vision and Pattern Recognition, Long Beach, CA, June 2019).

23James Vincent, “Deepfake Detection Algorithms Will Never Be Enough,” The Verge, June 27, 2019.

24Hany Farid (professor, University of California, Berkeley), email to Frederick Dauer, May 3, 2021.

25Jeremy Kahn, “Facebook Contest Shows Just How Hard It Is to Detect Deepfakes,” Fortune, June 12, 2020.

26Jane McCallion, “10 Amazing DARPA Inventions: How They Were Made and What Happened to Them,” ITPro, June 15, 2020.

27William Corvey, “Media Forensics (MediFor),” DARPA.

28University of California San Diego, “Deepfake Detectors Can Be Defeated, Computer Scientists Show for the First Time,” ScienceDaily, February 8, 2021.

29Weinberger, “Looking Out for Manipulated ‘Deepfake’ Evidence in Family Law Cases.”

30Baer, “Child Pornography Law Set Legal Standards for Cybercrime.”

31Lily Hay Newman, “To Fight Deepfakes, Researchers Built a Smarter Camera,” Wired, May 28, 2019.

32Content Authenticity Initiative website.

33Haya R. Hasan and Khaled Salah, “Combating Deepfake Videos Using Blockchain and Smart Contracts,” IEEE Access 7 (2019): 41596–41606.


Please cite as

Frederick Dauer, “Law Enforcement in the Era of Deepfakes,” Police Chief Online, June 29, 2022.