With the author’s permission, Propastop has published Priit Talve’s full article, which appeared in the Delfi Klikimagnet (Click Magnet) section.
Anonymous accounts make no attempt to give a profile name, a picture, or any other hint of the individual behind the account. Fake accounts, by contrast, give the impression of a "real" person, with a profile name and picture, yet no person with that name actually exists and the photos have been stolen from the internet.
These types of accounts take on a life of their own. They appear on people's friends lists, actively participate in discussions and disputes, and shape Social Media content.
It is not difficult to create fake or anonymous accounts. Depending on the Social Media environment it is either easy or somewhat time-consuming, but anyone who wants to can do it.
Creating original content for anonymous or fake accounts is also becoming easier every day. With the aid of artificial intelligence, it has become quite easy to create the materials of an artificial "real" person: a profile picture, a video or an audio file. It is no longer necessary to steal someone's likeness from the internet for a profile picture; an original, unique likeness can be created for a person who has never existed.
Deepfake technology, driven by artificial intelligence, is evolving at a rapid pace, making it hard to know who stands with us, or against us, in Social Media discussions. To grasp this, take a look at thispersondoesnotexist.com: all the faces on that page were created by artificial intelligence and do not exist in reality.
The fight has hit a wall
The fight against fake persons and fake news goes on daily, whether driven by the platforms' own desire to clean things up or by the fear that states may punish them otherwise. However, the practice of recent years has shown that such initiatives tend to end a few months later with an "unfortunately, it did not work" statement.
In the recent past, Facebook tried marking or flagging fake news, which unfortunately turned out to have the opposite effect: the flags drew even more attention to the fake stories.
Facebook's practice of suspending for 30 days an account that spreads false or hateful information has not made the environment any cleaner. It only pushes the distributors to create parallel fake accounts with which to continue their activity, which increases the artificiality even further.
The button for reporting a problem account is like a switch on a wall whose wires lead nowhere. A single person's button-pressing goes unheard, a voice in the desert.
These are just a few examples of failed attempts to clean up the environment. The daily picture clearly shows that no successful solution has yet been found.
Is it dangerous?
People are emotional, not rational, beings. Most of us know not to let a stranger in the front door, and on Social Media you do not friend an unknown person. This does not mean, however, that most people would not do it. Fake accounts acquire friends and friends of friends, and once a fake account has managed that first step, further friends come at a faster pace; soon the fake account is on the friend lists of real people.
It has gotten into the company of people whose opinions, statements, and beliefs we pay attention to and trust. It can participate in discussions, shape the topics of our community, and access information about us and our friends. Yet we do not know who it is, whose agenda it is promoting, or what that agenda is.
It is not uncommon for several fake accounts to steer a debate together, creating the impression that a large number of people think one way and that this is the majority's wish. In reality, a single person or a small group may be behind the fake accounts, posing as a large crowd of participants.
Another reason to run an army of fake accounts is its ability to influence the algorithms that underlie how a Social Media environment works. The system looks for signals that a topic may interest large groups of people, and then gives such topics impetus, pushing them to the forefront for more people. In reality, however, it amplifies the interests of a small circle.
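The amplification mechanism described above can be illustrated with a toy simulation. This is a sketch only: the topic names, event counts, and both ranking functions are invented for illustration and do not reflect any platform's real algorithm. Ranking by raw engagement lets five coordinated fake accounts outrank genuinely popular topics, while ranking by distinct accounts blunts the attack:

```python
# Toy model (illustrative assumptions, not any platform's real system):
# 100 real users spread their engagement across 10 topics, while
# 5 fake accounts each post 30 engagement events on one fringe topic.
from collections import Counter

real_events = [(f"user{i}", f"topic{i % 10}") for i in range(100)]
fake_events = [(f"fake{i}", "fringe_topic")
               for i in range(5) for _ in range(30)]
events = real_events + fake_events

def rank_by_raw_engagement(events):
    """Rank topics by total engagement events (easily gamed)."""
    return Counter(topic for _, topic in events).most_common()

def rank_by_unique_accounts(events):
    """Rank topics by distinct engaging accounts (harder to game)."""
    per_topic = {}
    for account, topic in events:
        per_topic.setdefault(topic, set()).add(account)
    return sorted(((t, len(accs)) for t, accs in per_topic.items()),
                  key=lambda pair: -pair[1])

# Raw counting: the fringe topic dominates with 150 events from 5 accounts.
print(rank_by_raw_engagement(events)[0])
# Per-account counting: real topics (10 distinct users each) rank first.
print(rank_by_unique_accounts(events)[0])
```

Counting distinct accounts rather than raw events is a common-sense mitigation of exactly this kind of sockpuppet amplification, though in practice it only raises the cost of the attack rather than eliminating it.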
What should be done?
There is no magic wand or silver bullet that could solve these problems, and there probably never will be. The challenges facing the digital world should instead be addressed jointly and on several levels.
The first is the level of law, where the potential of the current legal framework should be used in full and, where it is exhausted, regulations modernized to protect us better. People should be more proactive in defending their rights in the digital world. A good example in Estonia is Marika Korolev, who defended her rights in court, and the accused were punished. There is a similar case in Finland, where Jessikka Aro was defamed and harassed by a scandal-mongering news portal that incited hostility. The site was closed; its editor received a large fine and a prison sentence. Aro was protected by criminal law, on which the court based its ruling.
The second level involves the organizations that analyze and describe what is happening in the digital world and can thereby help clean up the information space. A good example is the Atlantic Council's DFRLab (Digital Forensic Research Lab), which analyzes and describes information manipulation in the digital space. Everyone has freedom of speech and opinion, but when technology is deliberately used to distort messages, it is no longer freedom of speech but information manipulation, which should be brought to the public's attention and, if necessary, restricted. DFRLab has done this successfully, stopping dozens of information manipulation operations on Twitter and Facebook.
The third and most important level is raising people's awareness of the possibilities and techniques of manipulation by strengthening their critical thinking and their ability to resist misinformation. It is important to get people to analyze what is happening around them in the digital environment. A good example is Finland, where critical thinking and conscious media consumption are taught in schools starting from kindergarten.
Estonia has a long way to go. There has been talk of former Soviet citizens' ability to see through propaganda and be immune to it. This may hold for a certain age group when it comes to propaganda from Russia, but it certainly does not help in the digital world.
Teaching digital media as an integral part of education
Currently, media education is part of the Estonian language course in high schools, and starting this spring it will also be available as a separate optional subject in the school curriculum. That is not enough. Within the Estonian language course, each teacher decides how much weight media education gets, and unfortunately it is too often reduced to nothing.
Teaching digital media and critical thinking should become a fixed part of basic or even elementary education, as every year people begin living in the digital bubble at an earlier age. Figuratively speaking, today a person starts driving a car at the age of 12, yet we do not teach them the rules of the road until they come of age.
There is certainly a need at the national level for a dedicated media education specialist who would keep the long-term view and coordinate these activities deliberately and within the available funding. This spring's Media Literacy Week was a good start; hopefully it will become a tradition and help keep the focus on media education.
Photos: individuals created with deepfake technology, not real people. Source: https://thispersondoesnotexist.com/. Opening image: Max Pfandl/Flickr/CC