Deepfake technology is a form of artificial fraud that generates realistic audio or video depicting someone saying or doing something that he or she never said or did. As a rule, such digital manipulation is very difficult to detect, yet fairly easy to produce with a computer and the right software, especially in professional hands.
Historically, false or distorted information has served as a weapon against enemies since ancient times. One can recall the “testament” of Mark Antony, which helped provoke Rome’s war against Egypt; Goebbels’ propaganda speeches justifying the Nazi regime and anti-Semitic hysteria; or the US State Department’s “intelligence” on Saddam Hussein’s biological weapons, obtained from non-existent defectors to secure international support and allied participation in the coalition for the Iraq war, and much more. The history of the media likewise contains a vast amount of false information, aimed not only at ideological impact, as in the cases above, but also at commercial gain; the yellow press thrives on it. Today, however, most media outlets value their reputation and therefore submit to editorial control and legal requirements, which significantly narrows the space for shocking fabrications [CITS, 2018].
There are also “scientific fakes”: stories about life on other planets, for example, or about the dangers of GMOs, even though scientists have shown that genetically modified products are harmless and could, moreover, help solve world hunger in the long term. Nevertheless, with the development of new technologies and Internet communications, fake news has moved to a far more advanced level of production, distribution and influence, which is not surprising given the scale of social network audiences and the speed at which information spreads. At the same time, it must be recognized that audio and video recordings of events are far more convincing than written text. They make us feel like direct witnesses of what is happening, almost participants. How can you not trust your own ears and eyes? That is why such video and audio materials carry an enormous charge of incitement and potential for manipulating crowds of people.
Among the “dark” purposes of creating deepfakes are blackmail and intimidation of politicians, businesspeople and celebrities, and even of less famous people who fear for their reputation, through fabricated videos with obscene scenes. Deepfakes can be so convincing that they are indistinguishable from the real thing. Millions of viewers saw a short video that looked like an address by Barack Obama in which he spoke very harshly about Donald Trump; it was a fake, whose author used the example to show people that they should not blindly trust everything that “floats” around the Internet [Savchenko, 2018]. Researchers at the University of Washington went even further in their experiments: using a neural network, they modeled the shape of a person’s mouth, which allows any audio track to be attached to a real interview and makes people appear to say things they never said [BBC, 2017].
Unlike newspapers and magazines, social networks make it difficult to determine the quality, reliability or even the source of information, because messages often arrive through friends, group members or family, without any indication of origin, in so-called “information cascades”. If the source cannot be traced, the veracity of the information can hardly be verified either. This opens enormous scope for disinformation, which is especially dangerous in the political sphere, since it can unpredictably and tendentiously affect internal and external events: electoral campaigns, the securitization and militarization of a country, terrorist and counter-terrorist actions, ethnic and religious conflicts, international relations, and so on [Chesney and Citron, 2018]. In unstable states and regions, such recordings can trigger protests, clashes, armed conflicts and even wars. Sometimes the authors of fakes deliberately seek exactly this effect.
Experts have found that Twitter users are 70% more likely to retweet fake news stories than real ones; true stories reached an average of 1,500 people, six times fewer than the detected fakes. Real news stories were rarely shared by more than a thousand people, while the most popular fakes sometimes spread to 100,000 users [Kavaleryan, 2018]. However, raising public awareness of deepfakes can also increase people’s distrust, including of true news, and this situation does not help to strengthen stability and social well-being. Truth itself becomes something unattainable and unreal, while social tension and irritation gradually accumulate.
What can be set against this new technological adversity of the 21st century? European countries have begun actively creating government bodies to combat fakes, with fines of up to 50 million euros for spreading fake news or “hatred”. The omnipresent Elon Musk promises to develop a “Pravda” service to detect lies [Kavaleryan, 2018]. It is doubtful, however, that governments will confront fakes effectively, given their own interest in using them to justify aggressive state actions against internal and external enemies. Computer programs are already being developed to detect fakes after distribution and to verify “digital authentication”, that is, the authenticity of content before its distribution. But, first, it is not known whether the social networks, these self-proclaimed heirs of traditional media, will want to use such programs and risk losing a significant share of attractive content and advertising; and second, even if a false story is exposed, it still leaves a toxic mark on millions of people, especially if it matches their worst expectations in some respect [Villasenor, 2019].
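The idea behind “digital authentication” of content before distribution can be illustrated with a minimal sketch. The snippet below is a simplified, hypothetical example (the function names and the shared-key scheme are illustrative assumptions, not any platform’s actual mechanism; real provenance systems rely on public-key signatures and metadata standards): a publisher computes a cryptographic tag over a file at release time, and anyone with the key can later check that not a single byte has been altered.

```python
import hashlib
import hmac

# Hypothetical sketch of content authentication. A publisher tags a file
# with a keyed hash at release time; any later edit to the bytes, such
# as a deepfaked frame, makes verification fail.

def sign_content(content: bytes, key: bytes) -> str:
    """Return an HMAC-SHA256 tag binding the content bytes to the key."""
    return hmac.new(key, content, hashlib.sha256).hexdigest()

def verify_content(content: bytes, key: bytes, tag: str) -> bool:
    """Recompute the tag and compare in constant time."""
    expected = sign_content(content, key)
    return hmac.compare_digest(expected, tag)

key = b"publisher-secret"          # illustrative shared secret
original = b"frame data of the original interview"
tag = sign_content(original, key)

assert verify_content(original, key, tag)             # untouched file passes
assert not verify_content(original + b"x", key, tag)  # any tampering fails
```

The limitation the paragraph above points to is visible even here: authentication can only prove that a file is unchanged since signing; it says nothing about content that was never signed, and it helps only if platforms actually check the tags.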
Users of social networks, that is, all of us, should evidently counter this negative influence with the one means that is always at our disposal: our critical thinking. Specialists from the International Federation of Library Associations and Institutions have developed rules for the critical reading of news. They advise checking the place of publication, the site’s mission and contact information; verifying the author, references, arguments and dates; not trusting one’s own prejudices; and turning to experts for confirmation [IFLA, 2019]. This is probably good advice for anyone who does not want to be left out in the cold.
References:
BBC. (2017). “Obama is not real”: how the computer makes fake videos. Retrieved from https://www.bbc.com/russian/av/media-40636007. Accessed on 05.07.2019.
Chesney, R. and Citron, D. (2018). Deepfakes and the New Disinformation War. Retrieved from https://www.foreignaffairs.com/articles/world/2018-12-11/deepfakes-and-new-disinformation-war. Accessed on 04.07.2019.
CITS. (2018). A Brief History of Fake News. Retrieved from https://www.cits.ucsb.edu/fake-news/brief-history. Accessed on 04.07.2019.
IFLA. (2019). How to Spot Fake News. Retrieved from https://www.ifla.org/publications/node/11174. Accessed on 05.07.2019.
Kavaleryan, A. (2018). History and theory of fake news: how they have been produced since Ancient Rome and how to tell them from the truth. Retrieved from https://knife.media/fake-news-history/. Accessed on 04.07.2019.
Savchenko, G. (2018). What the fake news of the future will look like: the example of Obama’s fake speech. Retrieved from https://birdinflight.com/ru/novosti/20180418-video-obama-fake-monologue.html. Accessed on 05.07.2019.
Villasenor, J. (2019). Artificial intelligence, deepfakes, and the uncertain future of truth. Retrieved from https://www.brookings.edu/blog/techtank/2019/02/14/artificial-intelligence-deepfakes-and-the-uncertain-future-of-truth/. Accessed on 04.07.2019.
Note: The views expressed in this blog are the author’s own and do not necessarily reflect the Institute’s editorial policy.
Nadirova Gulnar Ermuratovna graduated from the Oriental Faculty of Leningrad State University. In 1990 she defended her thesis on Algerian literature at the Moscow Institute of Oriental Studies, and in 2006 her doctoral thesis on modern Tunisian literature at the Tashkent Institute of Oriental Studies. She is a Professor.