The Deepfake World:
1. Barack Obama called Donald Trump a “complete dipshit”.
2. Mark Zuckerberg brags of having “total control of billions of people’s stolen data”.
3. Jon Snow’s moving apology for the dismal ending to Game of Thrones.
4. The Indian PM Modi playing ‘Garba’ with 10 women.
If you have seen any of the above, then you have seen a deepfake. Deepfakes are the 21st century’s answer to Photoshopping: they use a form of artificial intelligence called deep learning to make images of fake events, hence the name.
Deeptrace & Deepfake:
The AI firm Deeptrace found thousands of deepfake videos online, 96% of them pornographic, with the faces of female celebrities mapped onto the performers. As new techniques let unskilled people make deepfakes from a handful of photos, fake videos are likely to spread beyond the celebrity world.
Deepfake technology can create convincing and entirely fictional photos from scratch. A non-existent Bloomberg journalist, “Maisy Kinsley”, who had a profile on LinkedIn and Twitter, was a deepfake.
Audio can also be deepfaked to create “voice skins” or “voice clones”. The accountant of a German energy firm paid £200,000 into a Hungarian bank account after being phoned by a fraudster who mimicked the German CEO’s voice. The company’s insurers believe the voice was a deepfake.
Making a Deepfake:
Researchers and special effects studios have long warned how far video and image manipulation can go. Deepfakes themselves were born in 2017, when a Reddit user of the same name posted doctored porn clips on the site, swapping the faces of celebrities – Taylor Swift, Scarlett Johansson and others – on to porn performers.
It takes a few steps to make a face-swap video.
1. Run thousands of face shots of the two people through an AI algorithm called an ‘encoder’.
2. The ‘encoder’ finds similarities between the two faces, and reduces them to their shared common features, compressing the images in the process.
3. A second AI algorithm called a ‘decoder’ is taught to recover the faces from the compressed images.
4. Because the faces are different, you train one decoder to recover the first person’s face, and another decoder to recover the second person’s face. To perform the face swap, you simply feed encoded images into the “wrong” decoder.
For example, a compressed image of person A’s face is fed into the decoder trained on person B. The decoder then reconstructs the face of person B with the expressions and orientation of face A.
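The steps above can be sketched with a toy linear model. This is a deliberate simplification: real deepfake pipelines train deep convolutional autoencoders on thousands of aligned face crops, whereas here the “faces” are invented 64-number vectors, the shared encoder is a random matrix, and the two decoders are fitted by plain least squares. Only the wiring – one shared encoder, two person-specific decoders, and a swap done by feeding codes into the “wrong” decoder – matches the description.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins for aligned face images: 64-dimensional vectors rather
# than real pixels (hypothetical data, for illustration only).
faces_A = rng.normal(loc=3.0, size=(200, 64))    # "person A" faces
faces_B = rng.normal(loc=-3.0, size=(200, 64))   # "person B" faces

# Steps 1-2: one SHARED linear encoder compresses every face to a
# 16-dimensional code of common features.
encoder = rng.normal(size=(64, 16)) / 8.0

def encode(faces):
    return faces @ encoder

# Step 3: one decoder per person, fitted by least squares to recover
# that person's faces from the shared code.
decoder_A, *_ = np.linalg.lstsq(encode(faces_A), faces_A, rcond=None)
decoder_B, *_ = np.linalg.lstsq(encode(faces_B), faces_B, rcond=None)

# Step 4: the swap - encode a face of person A, but decode it with
# person B's decoder, yielding a "B-styled" version of A's input.
swapped = encode(faces_A[:1]) @ decoder_B
print(swapped.shape)  # one 64-dimensional "face"
```

In a real system the encoder and decoders are deep nonlinear networks trained jointly on a reconstruction loss; the least-squares fit here only mimics that structure.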
Another way to make deepfakes is through generative adversarial network, or Gan. A Gan pits two artificial intelligence algorithms against each other. The first algorithm, known as the ‘generator’, is fed random noise and turns it into an image. This synthetic image is then added to a stream of real images – of celebrities, say – that are fed into the second algorithm, known as the ‘discriminator’. Given enough cycles and feedback, the generator will start producing utterly realistic faces of completely nonexistent celebrities.
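That adversarial loop can be shown numerically at toy scale. In the sketch below, single numbers stand in for images, the generator and discriminator are shrunk to one linear unit and one logistic unit, and all data, learning rates and step counts are invented for illustration; only the alternating generator/discriminator training matches a real Gan.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    # Clip to avoid overflow in exp for extreme inputs.
    return 1.0 / (1.0 + np.exp(-np.clip(x, -60.0, 60.0)))

# "Real" data the generator must imitate: scalars near 4
# (a stand-in for the stream of real celebrity images).
def real_batch(n=32):
    return rng.normal(loc=4.0, scale=1.0, size=n)

a, b = 1.0, 0.0   # generator: noise z -> sample g(z) = a*z + b
w, c = 0.1, 0.0   # discriminator: "real" score d(x) = sigmoid(w*x + c)

lr = 0.05
for step in range(3000):
    z = rng.normal(size=32)
    x_real, x_fake = real_batch(), a * z + b

    # Discriminator step: push d(real) toward 1 and d(fake) toward 0.
    d_real, d_fake = sigmoid(w * x_real + c), sigmoid(w * x_fake + c)
    grad_w = np.mean(-(1 - d_real) * x_real + d_fake * x_fake)
    grad_c = np.mean(-(1 - d_real) + d_fake)
    w, c = w - lr * grad_w, c - lr * grad_c

    # Generator step: push d(fake) toward 1 (fool the discriminator).
    d_fake = sigmoid(w * (a * z + b) + c)
    grad_a = np.mean(-(1 - d_fake) * w * z)
    grad_b = np.mean(-(1 - d_fake) * w)
    a, b = a - lr * grad_a, b - lr * grad_b

# After training, the generator's offset b has drifted toward the
# real data's mean of 4 - it has learned to imitate the real stream.
print(round(float(b), 2))
```

A production Gan replaces both scalars with deep networks and both gradients with backpropagation, but the feedback cycle – discriminator sharpens, generator adapts – is the same one that eventually yields realistic faces of nonexistent people.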
Deepfakes & Technology:
Academic and industrial researchers, amateur enthusiasts, visual effects studios and porn producers – anyone can make a deepfake video. But it is difficult to make a good one on a standard computer: most are created on high-end desktops with powerful graphics cards, or with cloud computing power, which reduces the processing time, and it still takes expertise. Plenty of tools are now available to help people make deepfakes, and several companies will make them to order. There is even a mobile phone app, Zao, that lets users add their faces to a list of TV and movie characters on which the system has trained.
Spotting Deepfake:
Spotting a deepfake gets harder as the technology advances. In 2018, researchers discovered that deepfake faces don’t blink normally. At first, this seemed like a solution for detection, but no sooner had the research been published than deepfakes appeared with blinking. That is the nature of the game: as soon as a weakness is revealed, it is fixed.
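The blink cue can be turned into a toy detector. The sketch below assumes we already have a per-frame eye-openness score (for instance an eye-aspect-ratio from a facial-landmark tracker, which is not implemented here); the threshold and rates are invented for illustration, and, as noted, this particular cue was quickly defeated once deepfakes learned to blink.

```python
# Toy blink-rate check: scan a hypothetical per-frame eye-openness
# signal for dips below a threshold and flag clips that blink too rarely.

def count_blinks(eye_openness, threshold=0.2):
    """Count falling-edge crossings below `threshold` (one per blink)."""
    blinks, closed = 0, False
    for value in eye_openness:
        if value < threshold and not closed:
            blinks += 1
            closed = True
        elif value >= threshold:
            closed = False
    return blinks

def looks_suspicious(eye_openness, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose subject blinks far less than people usually do."""
    minutes = len(eye_openness) / (fps * 60)
    return count_blinks(eye_openness) / minutes < min_blinks_per_minute

# Synthetic 1-minute signals: a "real" track that blinks 15 times
# and a "deepfake" track whose eyes never close.
real = [0.1 if (i % 120) < 4 else 0.35 for i in range(1800)]
fake = [0.35] * 1800
print(looks_suspicious(real), looks_suspicious(fake))  # False True
```

Real detectors use many such cues at once, precisely because any single one can be patched out of the next generation of fakes.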
Poor-quality deepfakes are easier to spot. Bad lip synching, patchy skin tone, flickering around the edges of transposed faces, stray hair strands at the fringe, badly rendered jewelry, strange lighting effects and inconsistent illumination or reflections in the iris all help to give them away.
Governments, universities and tech firms such as Microsoft, Facebook and Amazon are funding research to detect deepfakes, and research teams around the globe are competing for supremacy in the detection game.
Deepfake Havoc:
We can expect deepfakes to harass, intimidate, demean, undermine and destabilize, and even to spark major international incidents. Most nations have their own reliable security imaging systems, but there is still ample room for mischief-making. A convincing deepfake of Elon Musk smoking a joint on a live web show, for example, could send Tesla stock crashing. Deepfakes can shift stock prices, influence voters and provoke religious tensions.
Undermining Trust:
A deeper danger of deepfakes is the creation of a zero-trust society, in which people cannot, or no longer bother to, distinguish truth from falsehood. When trust is eroded, it is easier to raise doubts about specific events. As the technology becomes more accessible, deepfakes could also mean trouble for the courts, where faked events could be entered as evidence. And there is a personal security risk: deepfakes can mimic biometric data, potentially tricking systems that rely on face, voice, vein or gait recognition. The scam potential is huge.
Deepfake Solution:
Artificial intelligence can help to spot fake videos. Many existing detection systems have a serious weakness: they work best for celebrities, because they can train on hours of freely available footage. Tech firms are working on detection systems that aim to flag fakes whenever they appear.
The other side of Deepfakes:
Deepfakes can also be entertaining, and even helpful. Voice-cloning can restore people’s voices when they lose them to disease. For the entertainment industry, the technology can improve the dubbing of foreign-language films and, more controversially, resurrect dead actors.
Still, with deepfakes the mischief-making is likely to increase. The world is becoming increasingly synthetic, and this technology will never go away!
About the Author
Dr. K. Raja Gopal Reddy is a seasoned internationally qualified Insurance professional.
What you are reading here may not answer all the questions we have, but it has the power to ask unsettling questions that deepen our interest in this strange world and reveal the contradictory wonders lying just below the surface of the commonest things of life. Consider this disturbing but beautiful thought from Friedrich Nietzsche: “God is dead. God remains dead. And we have killed him.”
Dr. Reddy can be reached at: raja66gopal@gmail.com


