Deepfake – video forgery technology that uses artificial-intelligence deep learning to create lifelike moving images – has become available to everyone over the past year, yet has been used surprisingly rarely in influence operations.
Last year, foreign media repeatedly wrote about the growing threat of deepfakes and predicted explosive growth in the use of this technology to create media content and influence people. We also covered it in Propastop. Now, almost a year later, we can see that, although the technology has become increasingly available, there have been surprisingly few cases of deepfakes being used maliciously.
“The threat seems to have been overestimated,” said Keir Giles, a specialist in Russian affairs at the British think tank Chatham House, during a webinar hosted by the NATO Strategic Communications Centre of Excellence in May this year. “We have been waiting for years for some important political event to be influenced with a deepfake. However, that has not happened. Why?”
Giles suggested, as one possible answer, that this weapon may have been held in reserve for a sufficiently important event, so that it would not be wasted on smaller issues. “If so, it was held back too long. Deepfakes no longer have the ability to shock; the public is already aware of the danger and can take the possibility of forgery into account.”
Researcher Tim Hwang, who attended the same webinar and has, among other things, worked on artificial intelligence and machine learning at Google, noted that deepfakes have not spread through influence operations as feared, since it is still cheaper and easier to run effective disinformation campaigns without them. “If you have a technique where you just put a misleading caption on a real photo, why make it complicated?”
Deepfake available to everyone
While governments have probably had the ability to create credible video manipulations for decades, in the last few years this technology has become available to almost everyone. The free tools Faceswap and DeepFaceLab are easy to find on the web, and anyone interested can test their skills at creating deepfakes. However, detecting such simpler deepfakes is not a problem for experts. Social media giants are already developing programs for automatic deepfake detection and removal, so deepfakes are probably moving to more closed parts of the web – private networks, etc.
“The easiest deepfakes to uncover are those of which there are many examples. User-level deepfakes are therefore very easy to detect, because so many have been made,” he said. According to him, more complex deepfakes are out of reach for the average user, but certainly not for national governments. Detecting forgeries at that level would be difficult even for experts.
Since deepfakes are made with the help of machine learning, which requires a large amount of training material, one preventive measure may be to make that material “radioactive” – to add hidden information to the available photo and video material, so that forgeries trained on it can later be traced. However, this method only works for persons and objects whose imagery can be controlled.
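The idea of hiding a traceable signal in imagery can be sketched in a few lines. The following is a toy illustration only, not the actual “radioactive data” method (which embeds a statistical signature that survives model training): it adds an imperceptible pseudo-random pattern to an image and later checks for that pattern by correlation. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def make_mark(shape, seed=42):
    # Fixed pseudo-random +/-1 pattern known only to the content owner.
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=shape)

def embed(image, mark, strength=3.0):
    # Add an imperceptible perturbation to the pixel values.
    return np.clip(image + strength * mark, 0, 255)

def detect(image, mark, threshold=1.0):
    # Correlate the image with the known pattern: a marked image
    # scores near `strength`, an unmarked one near zero.
    centered = image - image.mean()
    return (centered * mark).mean() > threshold

# Mark an image, then check a suspect copy against the original.
img = np.random.default_rng(0).uniform(0, 255, (256, 256))
mark = make_mark(img.shape)
marked = embed(img, mark)
print(detect(marked, mark))  # True  – hidden pattern found
print(detect(img, mark))     # False – no pattern present
```

As the article notes, a scheme like this only helps when the owner controls which recordings circulate: the detector needs the secret pattern, and material published without the mark cannot be traced.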
However, according to both experts, the best weapon against deepfake is simply raising public awareness and educating people.
AI leaves the trolls unemployed
While people are fairly well aware of possible video manipulations, even technologically simpler audio and text-based deepfakes can still be unexpectedly dangerous.
In the autumn of 2019, the media reported a case in which criminals defrauded the CEO of a British energy company of 220,000 euros using a computer-generated deepfake phone call. The CEO was convinced that he was speaking with the head of his parent company in Germany, whom he also knew personally. Inevitably, the question arises whether, for example, a military chain of command could be fooled in the same way.
Rapid advances in technology will soon allow such audio and text-based deepfakes to be generated in real time. On the one hand, this means that artificial intelligence will leave the busy bees of the troll farm on Savushkina Street in St Petersburg unemployed; on the other, it significantly reduces the credibility of any mediated communication.
The original instead of a copy
Deepfake usually refers to copying the identity of a well-known person and placing it in situations in which that person never actually took part. Keir Giles considers the creation of completely artificial deepfake personalities to be a rising trend. The first known completely artificial person is Katie Jones, who was unmasked last April with Giles’s assistance. This redheaded woman in her 30s successfully maneuvered through the corridors of power in Washington and gained the confidence of many influential men through social media – although she did not really exist. “Katie Jones was a harbinger – there will be many more,” Giles predicts.
Photo: deepak pal/Flickr/CC and Katie Jones (deepfake)