DeepFake and Why Seeing Isn’t Believing
In an era of immense technological advancement, what we perceive to be reality is often altered; it takes more than a simple glance to discern whether what we see is to be believed. Hollywood has been doing this for ages, whether it be Paul Walker’s face composited onto his brothers’ bodies in Furious 7 or Dwayne Johnson’s face on Sione Kelepi’s body in Central Intelligence. Until now, computer-generated imagery has been used for harmless entertainment because it is easier to create and manipulate than hiring extras to crowd a set, finding the perfect landscape for a shot, or building miniatures to further the plot. What was once a means of making a more realistic film or game is now becoming a weapon used to target politicians and average people alike.
DeepFake is a type of technology that uses artificial intelligence to fabricate videos and audio recordings of anyone doing or saying anything. It is essentially a far more advanced version of the face-swap feature on Snapchat, with the added ability to alter voices to fit the video. That may sound benign, considering face swap is mostly used for entertainment. However, DeepFake has been posing serious problems, enough so that the communities creating the videos, and the videos themselves, have already been banned from major websites such as Reddit. Questions of morality have been raised since its creation. Although most people with access to an app that requires little money or skill would use it the way they use Snapchat’s feature, there are always those who will turn something incredible into something incredibly dangerous.
Issues of national security, identity theft, and protection of intellectual property have come to the forefront of discussions about DeepFake. Because it is immensely difficult to decipher what is real and what has been altered, women like Noelle Martin have already had their images perversely twisted and used on adult websites without their consent, in her case while she was still underage. Currently, there are few laws against this, because public access to the app and technology like it is only just beginning. This means that anyone can have their face stolen and used without their knowledge, raising the question of how far revenge or harassment can go. The adult film industry itself has shunned DeepFake videos for this reason, and because many actresses (in the industry and out of it) are having their likenesses and videos stolen, remade, and profited from.
Other issues that could arise are ones that would stoke the fake-news flames. Although the president may already say things that make us question whether they’re real, the power to make political figures say anything rather seamlessly is perilous. Walking through the world of politics often means walking on eggshells, and a video of U.S. President Trump or Russian President Putin saying something even mildly offensive could spark the next cold war, or world war. As Jordan Peele said, “we’re entering an era in which our enemies can make it look like anyone is saying anything at any point in time, even if they would never say those things.” Imagine the damage done between celebrities over tweets, but on a global scale and with videos for evidence. It’s something worth considering and being a little nervous about, especially when everyone is already so ready to start a political firestorm over something they read on Tumblr.
In the face of all these potential threats stands one company, Truepic, currently known for verifying images and videos on the internet, though it seems they’ve met their match in DeepFake. The company’s founder stated that “the world needs this… to separate wrong from right. It’s high time we take control of digital imagery abuse,” and he couldn’t be more right. His company has also raised roughly eight million dollars to continue authenticating images and videos in question. As of right now, the only advice companies of this sort can offer for detecting DeepFakes is to keep your common sense and take notice of how short a clip is, any flickering, and the blinking. DeepFake is capable of mimicking facial movement in real time, but even the greatest technology has a hamartia, and blinking and lip movement seem to be just that.
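The blinking cue can be made concrete. Researchers have proposed tracking the eye aspect ratio (EAR), a number that drops sharply when the eye closes, across a video’s frames and flagging clips whose blink count is abnormally low. What follows is a minimal sketch of that idea, not any particular company’s method: it assumes the six eye landmarks per frame have already been extracted by some face-landmark tool, and the threshold values are common defaults, not tuned figures.

```python
import math

def eye_aspect_ratio(eye):
    """EAR from six (x, y) eye landmarks, ordered: left corner,
    top-left, top-right, right corner, bottom-right, bottom-left.
    Open eyes give values around 0.25-0.35; closed eyes much lower."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    vertical = dist(eye[1], eye[5]) + dist(eye[2], eye[4])
    horizontal = dist(eye[0], eye[3])
    return vertical / (2.0 * horizontal)

def count_blinks(ear_series, threshold=0.2, min_frames=2):
    """Count blinks in a sequence of per-frame EAR values.
    A blink is a run of at least `min_frames` consecutive frames
    whose EAR falls below `threshold` (illustrative defaults)."""
    blinks = 0
    run = 0
    for ear in ear_series:
        if ear < threshold:
            run += 1
        else:
            if run >= min_frames:
                blinks += 1
            run = 0
    if run >= min_frames:
        blinks += 1
    return blinks

# Synthetic per-frame EAR values: open eyes with two brief closures.
ears = [0.30] * 10 + [0.12] * 3 + [0.30] * 10 + [0.10] * 3 + [0.30] * 5
print(count_blinks(ears))  # prints 2
```

A human on camera typically blinks many times per minute, so a several-second clip with zero blinks counted this way would be one of the red flags the article describes, alongside flickering and odd lip movement.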
The world we live in is becoming more dishonest behind the digital mask, and that mask just became harder to recognize, but there is hope that we’ll adapt to our circumstances as we have with catfishing in the past. Integrity is becoming something of a lost art despite its importance. That said, if we wish to continue advancing our technology this way, we should also take precautions so that our monster doesn’t turn on us. Some laws need to be created and others adapted to fit the changing times, so that we secure whatever measure of safety we can. Until then, it’s our responsibility to protect ourselves online and to keep enough common sense not to believe or trust everything we read on the internet.