Constructing and countering deepfakes

(Image: the Tom Cruise deepfakes created by Chris Ume)

Artificial Intelligence (AI) can manipulate photos and videos so convincingly that viewers believe the illusion is real. With the pace of media consumption accelerating, people increasingly accept such manipulated media as genuine without verifying them further.

Concerns about deepfakes have led to a proliferation of countermeasures, including new laws that aim to bar people from creating and promoting them. Social media platforms, too, have been working to ban deepfakes from their networks.

Facebook, for example, mentioned in a blog post that it has been strengthening its policy on misleading manipulated videos that are identified as deepfakes.

The blog mentioned that Facebook will “remove misleading manipulated media if:

  • It has been edited or synthesised – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they didn’t actually say, and 
  • It’s a product of artificial intelligence or machine learning that merges, replaces or superimposes content onto a video, making it appear to be authentic.”

Causes and origin

Deepfakes are videos and images that have been manipulated using sophisticated artificial intelligence, yielding fabricated visuals and sounds that appear real.

Sophisticated deepfakes can now be generated thanks to advances in generative adversarial networks (GANs), in which two AI algorithms are pitted against each other: the first creates the fakes, while the other grades its efforts, driving it to produce ever better fake images and videos. A research paper describes a GAN as two neural networks playing a game with each other. A “discriminator” tries to determine whether the data it sees is real or fake, while the other network, the “generator”, creates data that the discriminator will judge to be real.

“…generative adversarial networks (GANs), a common type of generative models, are designed around the core idea of training a generative model by optimizing an objective which rewards defeating a detector (or discriminator) of fake content.” In short, one AI is trained to defeat another AI that detects fakes.
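To make the generator-versus-discriminator game concrete, here is a minimal GAN sketch in PyTorch. It learns a toy one-dimensional Gaussian distribution rather than faces, and all of its names and hyperparameters are illustrative; production deepfake GANs use deep convolutional networks trained on large image datasets.

```python
# Minimal GAN sketch (PyTorch): the generator learns to produce samples
# from a 1-D Gaussian, while the discriminator learns to tell real from fake.
# Illustrative only -- real deepfake GANs operate on images at far larger scale.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(2000):
    real = torch.randn(64, 1) * 1.5 + 4.0   # "real" data drawn from N(4, 1.5)
    fake = generator(torch.randn(64, 8))    # generator maps noise to samples

    # Discriminator step: reward correct real/fake classification.
    d_opt.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    d_opt.step()

    # Generator step: reward fooling the discriminator into answering "real".
    g_opt.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    g_opt.step()

# The mean of generated samples should drift toward the real mean of 4.0.
print(generator(torch.randn(1000, 8)).mean().item())
```

As the discriminator gets better at spotting fakes, the generator is forced to produce more convincing ones, which is exactly the dynamic that makes deepfakes improve so quickly.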

A report by Sensity further highlights that the number of deepfake videos online roughly doubled every six months between 2018 and 2020, with entertainment, fashion, politics and sports the worst affected.

Deepfakes may also facilitate CEO fraud, which involves hacking into or spoofing legitimate business email accounts and then persuading an employee or manager to make unauthorised transfers of funds. According to FBI statistics, CEO fraud is now a $26 billion scam.

A disgruntled employee could, for instance, create a fake video in which the founder of a company is seen announcing a financial crunch, or something as sensitive as a family member’s death, when nothing of the sort ever happened.

Identifying deepfakes

According to research published via Cornell University’s arXiv, deepfaked faces don’t blink normally in videos. This could be because most training images show people with their eyes open, so the algorithms never learn about blinking. Remarkably, no sooner was the research published than new deepfakes appeared with eyes that blinked properly.
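The blink cue can be checked programmatically. One common approach (a general technique, not necessarily the exact method of that research) is the eye aspect ratio (EAR): the ratio of vertical to horizontal distances between eye landmarks, which drops sharply whenever the eye closes. The sketch below assumes dlib’s publicly downloadable 68-point landmark model and a hypothetical input file name:

```python
# Blink-counting sketch using the eye aspect ratio (EAR) over video frames.
# Assumes dlib's 68-point predictor file has been downloaded separately.
import cv2
import dlib
from scipy.spatial import distance

def eye_aspect_ratio(eye):
    # eye: six (x, y) landmark points; EAR falls toward 0 as the eye closes.
    a = distance.euclidean(eye[1], eye[5])
    b = distance.euclidean(eye[2], eye[4])
    c = distance.euclidean(eye[0], eye[3])
    return (a + b) / (2.0 * c)

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

cap = cv2.VideoCapture("suspect_video.mp4")   # hypothetical input file
blinks, closed = 0, False
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for face in detector(gray):
        pts = predictor(gray, face)
        # Landmarks 36-41 outline the left eye, 42-47 the right eye.
        left = [(pts.part(i).x, pts.part(i).y) for i in range(36, 42)]
        right = [(pts.part(i).x, pts.part(i).y) for i in range(42, 48)]
        ear = (eye_aspect_ratio(left) + eye_aspect_ratio(right)) / 2.0
        if ear < 0.2 and not closed:          # threshold is a heuristic
            blinks, closed = blinks + 1, True
        elif ear >= 0.2:
            closed = False
cap.release()
print(f"Blinks detected: {blinks}")  # an unusually low count is a red flag
```

A healthy adult blinks roughly every two to ten seconds, so a multi-minute clip with almost no detected blinks would warrant a closer look.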

According to an IBM blog, blockchain-based news projects can build a new sense of trust, making it easier to track and verify deepfakes. Benjamin Gievis, co-founder of Block Expert, a Parisian tech startup, stated that it is increasingly difficult to distinguish between the real and the fake, a problem that can be alleviated if a news source is authenticated.

Block Expert, along with IBM’s open source Hyperledger Fabric, launched Safe.press, which allows a member to publish a press release or article with a Safe.press stamp that functions like a digital seal of approval linked to an associated blockchain key. This helps validators reveal whether a news item (or release) has been faked.
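The details of Safe.press’s implementation are not spelled out here, but the general idea behind such a digital seal can be sketched simply: hash the published content, sign the hash with the publisher’s private key, and let anyone verify the signature against the publisher’s public key, which a blockchain can record immutably. A minimal sketch using Python’s cryptography library follows; the names and content are illustrative, not Safe.press’s actual code:

```python
# Conceptual "digital seal" sketch: sign a press release, then verify it.
# Illustrative only -- in a Safe.press-style system the publisher's public
# key would be anchored on a Hyperledger Fabric blockchain.
import hashlib
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

publisher_key = Ed25519PrivateKey.generate()   # held by the news source
verify_key = publisher_key.public_key()        # published/anchored on-chain

article = b"ACME Corp announces Q3 results..."
digest = hashlib.sha256(article).digest()
seal = publisher_key.sign(digest)              # the "stamp" on the article

# A validator recomputes the hash and checks the seal.
try:
    verify_key.verify(seal, hashlib.sha256(article).digest())
    print("Seal valid: content matches what the publisher signed.")
except InvalidSignature:
    print("Seal invalid: altered, or never from this publisher.")

# Any tampering breaks verification:
tampered = article.replace(b"Q3", b"Q4")
try:
    verify_key.verify(seal, hashlib.sha256(tampered).digest())
except InvalidSignature:
    print("Tampered copy detected.")
```

The blockchain’s role is to make the binding between a publisher and its public key tamper-evident, so a forger cannot simply substitute a key of their own.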

Facebook partnered with other industry leaders and academic experts in September 2019 to create the Deepfake Detection Challenge (DFDC), in order to accelerate the development of new ways to detect deepfake videos. As part of the challenge, it created and shared a unique new dataset of 124,000 videos generated with eight facial modification algorithms. The DFDC has enabled experts from around the world to come together, benchmark their deepfake detection models, try new approaches, and learn from each other’s work, according to the blog.
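Entries to the DFDC’s public Kaggle competition were ranked by log loss on the predicted probability that each video is fake, a metric that heavily punishes confident wrong answers. A minimal illustration with toy numbers:

```python
# Log loss, the metric used to score DFDC submissions: each video gets a
# predicted probability of being fake, and confident mistakes cost dearly.
from sklearn.metrics import log_loss

labels = [1, 0, 1, 1, 0]                  # 1 = fake, 0 = real (toy ground truth)
predictions = [0.9, 0.2, 0.7, 0.95, 0.1]  # a model's fake probabilities

print(f"log loss: {log_loss(labels, predictions):.4f}")
```

This scoring choice pushed competitors toward well-calibrated detectors rather than ones that merely guess the right class.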

A Google blog states that AI can help in spotting deepfakes. Deepfakes are built by studying existing real-world imagery and audio, and manipulating them to create fictional content. “However, there are often some telltale signs that distinguish them from reality; in a deepfake video, voices might sound a bit robotic, or characters may blink less or repeat their hand gestures. AI can help us spot these inconsistencies,” the blog mentioned.