Deepfake is a term for technology that combines and superimposes existing images and videos onto source images or videos using a machine learning technique called deep learning. Blending the existing and source footage produces a fake video that shows a person or persons performing an action, or taking part in an event, that never occurred in reality.
The main application of the technique is swapping a person's face in video material with that of another person, so that the face in the new video performs movements mimicking those in the original footage.
The term has no good equivalent in Estonian, and a direct translation does not work. The English word deepfake is a blend of deep learning and fake. Propastop's proposed translation is video fraud.
Amid fears of fake news, fake accounts and influence operations in general, the technique is being discussed more and more, along with the fear that people will use it to manipulate democracy.
The technique allows, for example, ordinary citizens to create credible videos of any head of state, making them say anything the creator wants.
The emergence of such techniques raises the importance of the reliability of distribution channels and of source criticism: a video in which a head of state declares war would be taken far more seriously if published by the BBC than if posted on an anonymous blog.
The term has arguably been the trendiest word of the past half year, used by politicians, influence experts and people simply interested in influence operations.
Photo: screenshot from a CNN video story; opening image from raindance.com