Personal Privacy and Deepfakes

By: Taylor Toepke & Ross Stokes

In 2017, an anonymous Reddit user posted a video that placed Scarlett Johansson in compromising sexual situations, giving the public one of its first exposures to deepfakes. After videos of famous celebrities “appeared” in pornography through face stitching, researchers demonstrated how far the technique could go, creating a deepfake video of President Barack Obama saying whatever they wanted him to say (Marr). Now there are apps like DeepNude that let someone take an image of a fully clothed woman, remove her clothes, and create nonconsensual pornography.

Deepfakes, a newer and more potent form of fake news, can do serious damage to a person’s personal, ethical, and moral character. A deepfake is a “video created or altered using digital means with the aid of artificial intelligence (AI). With deepfake, persons appear to do or say things that did not happen” (Dixon 2019). This widely available technology can alter images and videos, change people’s facial expressions, and imitate voices. It has also become controversial because of the privacy issues it raises: readily available applications for creating deepfake images and videos open up numerous privacy problems and lead us to question whether anyone is safe from these programs.

Deepfakes may have started with imitations of celebrities, but the technology can also alter speeches, messages, and even tweets of government officials. Deepfake videos of government figures could create serious conflict in the near future. Despite these abuses, generating realistic simulations with artificial intelligence could also change how we view deepfakes and help us build a better future.

Backbone of Deepfakes

Generative adversarial networks, or GANs, are the basis of any deepfake and are composed of two neural networks: a generator and a discriminator. The generator creates new data instances, while the discriminator evaluates those instances for authenticity, deciding whether each one belongs to the actual dataset or is a “fake.” The generator’s goal is to create synthetic images that pass the discriminator’s test; essentially, it is trying not to get caught. Both networks adapt as their losses push against each other, until the generator produces data that can pass for real (Nicholson 2019). Outputs that fool the discriminator are kept, and those that don’t are run through the process again, yielding very realistic images and videos.
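To make the adversarial loop concrete, here is a minimal sketch in PyTorch. The network sizes, learning rates, and the random “real” data are illustrative placeholders, not details from any actual deepfake system:

```python
# Minimal GAN sketch: a generator learns to produce vectors that a
# discriminator cannot tell apart from "real" data.
import torch
import torch.nn as nn

latent_dim, data_dim = 16, 64  # illustrative sizes

# Generator: maps random noise to a synthetic data instance.
G = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(),
                  nn.Linear(128, data_dim))
# Discriminator: scores an instance as real (1) or fake (0).
D = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(),
                  nn.Linear(128, 1), nn.Sigmoid())

opt_G = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_D = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, data_dim)        # stand-in for real samples
    fake = G(torch.randn(32, latent_dim))

    # Discriminator tries to label real data 1 and fakes 0.
    d_loss = (bce(D(real), torch.ones(32, 1)) +
              bce(D(fake.detach()), torch.zeros(32, 1)))
    opt_D.zero_grad(); d_loss.backward(); opt_D.step()

    # Generator tries to make the discriminator say 1 on its fakes:
    # "trying not to get caught."
    g_loss = bce(D(fake), torch.ones(32, 1))
    opt_G.zero_grad(); g_loss.backward(); opt_G.step()
```

In a real deepfake system, `real` would be batches of face images and both networks would be deep convolutional models, but the push and pull between the two losses works the same way.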

Societal Impact of Deepfakes

The public is clearly concerned about deepfakes, as are government officials and others negatively affected by them. Used unethically, a deepfake can violate not only someone’s privacy but also their personal and moral image, creating a ripple effect among internet trolls and anyone who stands to gain from exploiting another person’s likeness. For example, a political figure could use a deepfake to make a competitor “say” things that undermine their campaign.

Because GANs can produce images very similar to real ones, pairing them with other AI systems allows those systems to learn far more in far less time. This is particularly beneficial for medical diagnosis. Alisya Kainth, an innovator and Internet of Things developer, used GANs to generate MRI scans of brain tumors. In around ten minutes, Kainth’s GAN “generated a fairly accurate image of a brain tumor with only 200 samples of real data. …even though the images are fake and these tumors are not real, they can be fed into the networks anyways, and if a person one day is seen to have a similar tumor, diagnosis and treatment become a much more efficient process” (Kainth 2019). This approach could limit the use of private medical data, because a GAN can produce large numbers of images at a time without resurfacing past patients’ brain scans. GANs and AI could thus help diagnose rare diseases, including cancers, blood disorders, and brain tumors.
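A rough sketch of how such synthetic data could be used, assuming a generator already trained in the spirit of Kainth’s experiment; the scan size and architecture here are made up for illustration:

```python
# Sketch: sampling synthetic scans from a trained GAN generator to
# augment a scarce medical dataset. No stored patient images are
# re-used once the generator is trained.
import torch
import torch.nn as nn

latent_dim, scan_dim = 16, 64 * 64  # flattened 64x64 scans, illustrative

# Stand-in for a generator already trained on a few hundred real scans.
G = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(),
                  nn.Linear(256, scan_dim), nn.Tanh())

@torch.no_grad()
def synthesize_scans(n):
    """Sample n synthetic scans from random noise."""
    return G(torch.randn(n, latent_dim)).view(n, 64, 64)

# A downstream diagnostic model can then train on a mix of the few
# real scans and as many synthetic ones as needed.
synthetic = synthesize_scans(200)
print(synthetic.shape)  # torch.Size([200, 64, 64])
```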

Biometrics and Privacy

Voice cloning has become increasingly popular, especially alongside AI voice products such as Apple’s Siri and Amazon’s Alexa. Synthesized voices turn a robotic-sounding chatbot into something more personal, and they can give mute patients, or even businesses, the ability to speak and connect with other people. Rupal Patel, a professor at Northeastern University and CEO of VocalID, developed a voice synthesis platform: “By crowdsourcing speech and capturing vocalization samples from people who aren’t able to speak normally, researchers… can match voices with nonverbal people likeliest to sound similar” (Wiggers 2020). In other words, the technology behind deepfakes can cycle through numerous voices and build one that most resembles a patient who cannot speak. Voice creation could also let physicians and social workers build digital avatars that interact with patients through apps. Biometrics and voice cloning derived from deepfake technology open a new world of possibilities, enabling innovation and giving patients without voices the ability to speak.
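The matching step Patel describes can be pictured as a nearest-neighbor search over voice embeddings. The sketch below uses random placeholder vectors; a real system would derive the embeddings from recorded speech with a trained model:

```python
# Sketch of voice-bank matching: represent each donor voice as an
# embedding vector and pick the donor closest to the patient's own
# residual vocalizations. Embeddings here are random placeholders.
import numpy as np

rng = np.random.default_rng(0)
donor_voices = {f"donor_{i}": rng.normal(size=128) for i in range(1000)}
patient_vocalization = rng.normal(size=128)  # from the patient's own sounds

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# The crowdsourced voice likeliest to sound like the patient.
best = max(donor_voices,
           key=lambda k: cosine(donor_voices[k], patient_vocalization))
print(best)
```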

The other side of biometrics involves anonymizing faces in video. Deepfake technology can obscure a person’s face without losing their expression, which could help protect people’s right to privacy. According to Adam Dachis, “In circumstances where anonymity is vital but expression can make a difference, such as anonymizing the appearance of sources in the news or documentary films that could put a subject at risk by revealing their identity, this method could be very useful and employed today.” Deepfakes can anonymize our voices and our faces, supporting free speech and ensuring that people who give vital information to law enforcement stay safe and keep their identities private.
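For contrast, the conventional anonymization baseline simply detects and blurs faces, which is exactly what the GAN-based method Dachis describes improves on, since blurring destroys expression. A short OpenCV sketch of that baseline, with hypothetical file paths:

```python
# Baseline face anonymization: detect faces and blur them. Unlike a
# GAN-based face swap, this discards the subject's expression.
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

frame = cv2.imread("interview_frame.jpg")  # placeholder input frame
assert frame is not None, "expected a readable image file"
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)

for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 5):
    # Heavy Gaussian blur over each detected face region.
    frame[y:y+h, x:x+w] = cv2.GaussianBlur(frame[y:y+h, x:x+w], (51, 51), 0)

cv2.imwrite("anonymized_frame.jpg", frame)
```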

Deepfake and National Security

Although deepfakes have many practical applications, they also pose a threat to national security. Opponents argue that deepfakes can be used in malicious attempts to defame political figures. For example, a foreign entity could use a deepfake to destroy the credibility of an elected official for its own gain, a real threat considering Russian meddling in the 2016 presidential campaign. Opponents have been taking action to stop the spread of misinformation: according to an article on CNET, three members of the U.S. House of Representatives urged the Director of National Intelligence, Dan Coats, to address these machine-learning-based forgeries (Ng 2018). Yet even though the threat of deepfakes used this way is imminent, Congress has yet to pass any legislation against it.

Deepfake, a WMD?

A weapon of math destruction (WMD) is a mathematical model that claims to quantify important traits but can produce harmful outcomes and often reinforces inequality (O’Neil 2018). The trait that makes a WMD so powerful is scalability: a model is scalable when it can handle a growing amount of data and still process it successfully. For example, models used to measure recidivism risk are highly scalable because they can be applied to almost anybody. Deepfakes have the potential to become a WMD precisely because they are highly scalable. They can be used to harm the character of any celebrity, political figure, or, really, anyone; they can violate privacy at massive scale and distort data and images for a third party’s benefit. For example, an individual could use deepfakes to viciously slander their enemies on social media platforms, where the fakes would spread rapidly and wreak havoc on a large scale.

Conclusion

Since its introduction, deepfake technology has shown real potential for diagnosing diseases and creating voices for people who cannot speak. The technology is still evolving and could power many AI applications in the medical field while reducing privacy concerns and the use of confidential data across the industry. However, there are serious drawbacks: people misuse deepfakes in malevolent ways that violate privacy and ethical norms. To weigh the pros and cons, the table below summarizes both sides.

| Positives | Negatives |
| --- | --- |
| Privatized facial images/videos | Threat to national security |
| Creates images for medical diagnoses | Can be a WMD; when used with bad intentions, can cause harm at a massive scale |
| Less private data used and surfaced in the medical world | Can affect someone’s personal image |
| Creation of voice cloning | Hard to manage/legislate the technology behind deepfakes |
| Can help AI and create more personalized healthcare | |

Our Stance

The technology behind deepfakes is already here and in use today. The many available applications can distort anyone’s voice, face, or overall image, and when used for harm, deepfakes can become a weapon of math destruction. There needs to be more legislation governing the technology. The medical world remains hesitant to incorporate GANs when running tests and scans to identify tumors, because the generator and discriminator can forget learned strategies and produce inaccurate images. We need a deeper understanding of the risks of using the technology, as well as laws against the unethical use of deepfakes. Defamation of character through deepfakes, in particular, needs to be understood and legally addressed, especially for the applications that make the process so simple.

References

Dachis, Adam. “Deepfake Tech Can Now Anonymize Your Face to Protect Privacy.” ExtremeTech, 24 Sept. 2019, http://www.extremetech.com/extreme/298831-deepfake-tech-can-now-anonymize-your-face-to-protect-privacy.

Dixon Jr., Judge Herbert B. (Ret.). “Deepfakes: More Frightening Than Photoshop on Steroids.” Judges’ Journal, vol. 58, no. 3, Summer 2019, pp. 35–37. EBSCOhost, search.ebscohost.com/login.aspx?direct=true&db=asn&AN=138864171&site=ehost-live.

Kainth, Alisya. “Generating MRI Images of Brain Tumors with GANs.” Medium, Towards Data Science, 16 Oct. 2019, towardsdatascience.com/generating-mri-images-of-brain-tumors-with-gans-8cddedbabbe6.

Marr, Bernard. “The Best (And Scariest) Examples Of AI-Enabled Deepfakes.” Forbes, Forbes Magazine, 22 July 2019, http://www.forbes.com/sites/bernardmarr/2019/07/22/the-best-and-scariest-examples-of-ai-enabled-deepfakes/#2155d42b2eaf.

Ng, Alfred. “Deepfakes Are a Threat to National Security, Say Lawmakers.” Cnet, 8 Sept. 2018, http://www.cnet.com/news/deepfakes-are-a-threat-to-national-security-say-lawmakers/.

Nicholson, Chris. “A Beginner’s Guide to Generative Adversarial Networks (GANs).” Pathmind, 2019, pathmind.com/wiki/generative-adversarial-network-gan.

O’Neil, Cathy. Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Penguin Books, 2018.

Wiggers, Kyle. “Voice Cloning Experts Cover Crime, Positive Use Cases, and Safeguards.” VentureBeat, VentureBeat, 30 Jan. 2020, venturebeat.com/2020/01/29/ftc-voice-cloning-seminar-crime-use-cases-safeguards-ai-machine-learning/.
