Nickeled & Dimed

Penny for your thoughts?

We are accepting articles at our new email address: cnes.ju@gmail.com

Deepfakes: Where reality takes a deep dive into fiction!

By Anushka S

Abstract:

In a digital age where fiction masquerades as fact, deepfakes plunge us into a world where reality itself is up for debate. As the boundaries between truth and illusion blur, our challenge is to navigate this murky terrain with sharp policies and a steadfast grip on the truth.

Introduction:

Imagine scrolling through Instagram and coming across a popular video of Prime Minister Narendra Modi playing garba. You shrug it off and move on to another reel, without realizing that what you have just witnessed is a deepfake. Using a form of artificial intelligence called deep learning, one can create legitimate-seeming videos by altering original events as recorded on camera. A plethora of these videos feature celebrities and world leaders: from Russian President Putin declaring peace to MP Manoj Tiwari speaking in Haryanvi and English, such compelling videos have spread across the world. A reel of a prime minister dancing might seem harmless, even humorous, especially in the eccentric world of meme culture. But a deeper look at the kinds of videos created using deepfake technology underlines the dangerous effects they have on people’s perspectives and lives. Not only do these mischievous videos spread false information, but they also violate intellectual property rights. Tackling deepfakes as a policy issue is made even more complex by the fact that there is little governments can do to prevent the production of such videos. Government intervention can only follow the creation of a deepfake, by which time considerable damage has often already been done. 

Deepfakes and Their Impact on Women:

While the political threat from deepfakes is imminent, most of these videos violate women and their bodies. A 2019 study by the cybersecurity firm Deeptrace found that 96% of online deepfake videos are pornographic, with 3% involving Indian subjects. Such videos also attempt to defame individuals, as seen in the case of Indian actress Rashmika Mandanna: her face was superimposed onto the body of another influencer wearing a bodysuit, and the resulting video was shared online, exploiting both the original poster and Mandanna’s image. Deepfake videos of celebrities can be identified and debunked relatively easily, thanks to their large platforms and resources. A deepfake can also be spotted with the help of artificial intelligence, but many current detection systems have a significant flaw: they are most effective for celebrities, because abundant footage of them is available for training. As a result, thousands of videos targeting ordinary women go unnoticed, allowing their creators to escape legal consequences and sometimes enabling a form of revenge porn. Nevertheless, in the Indian context, certain provisions of the Information Technology Act, 2000 (IT Act) can apply to deepfake crimes involving the publication of a person’s image, and prosecution is also possible when sexually obscene deepfakes are created. 

Deepfakes in Politics: A Tool for Manipulation and Misinformation:

Meanwhile, the remaining 4% of deepfakes involve videos of well-known personalities, including political leaders. The spread of deepfake technology into accessible media, especially during elections, can significantly alter public perceptions of leaders. For instance, in March 2022, a deepfake video surfaced showing Ukrainian President Volodymyr Zelenskyy supposedly calling for Ukrainian troops to surrender to Russia. Although this video was quickly debunked, it highlighted how a single deepfake can alter narratives, spread confusion, and serve as a tool for psychological warfare. Another example is the viral deepfake video of U.S. House Speaker Nancy Pelosi that falsely portrayed her slurring her words, spreading misinformation about her mental state. Such videos can heavily influence political campaigns. Tracking the creators of these videos is challenging, but the motivations behind them go beyond individual nefarious intent. Divyendra Singh Jadoun, who uses AI to create Bollywood sequences, noted that during India’s 2024 election, many politicians sought his services for unethical purposes: fabricating audio of opponents, superimposing faces onto explicit images, and creating low-quality fake videos to discredit authentic footage. Such unethical campaign strategies can cause significant disruption, particularly on social media platforms, where deepfakes are shared with ease, gain massive popularity rapidly, and become contentious topics of discussion. Although they are often debunked eventually, the brief period in which they are perceived as real can cause serious harm. Misinformation was initially spread primarily through text, but video manipulation complicates efforts to prevent belief in false information, as people tend to trust what they see. Trust is the very thing deepfakes tamper with. 
Due to the difficulty in distinguishing real from synthetic media, the public often relies on external sources for verification, which can reveal biases favouring negative over positive content. Some public figures exploit this uncertainty through ‘the liar’s dividend,’ using media confusion to cast doubt on authentic information and deflect criticism, thereby making it harder for the public to discern the truth. Studies on deepfakes reveal that interactions with a digital replica can influence opinions and create ‘false’ memories about a person, even if viewers know it’s not real. Negative false memories can damage a person’s reputation, while positive ones may lead to complex and unforeseen interpersonal consequences.

Global Responses to Deepfakes: A Patchwork of Policies:

These fictitious videos have sparked concern among governments around the world, and countries have been quick to respond with rules and regulations on the creation and circulation of deepfakes. China enforces some of the world’s most rigorous deepfake regulations: all deepfake content must be clearly labelled, and users must obtain consent and register with the government before creating or sharing such content. Overseen by the Cyberspace Administration of China (CAC), these rules demand identity verification, record-keeping, and the reporting of illegal deepfakes, effectively controlling every stage of deepfake technology’s use. While these measures may adequately handle the problem of deepfakes, they can also be criticized as unnecessary government intervention: such oversight might limit freedom of expression, stifle the creative use of deepfake technology, and potentially be used to suppress dissent or critique. 

The European Union (EU) has developed an extensive framework to regulate deepfakes, incorporating transparency and accountability measures into broader digital and AI laws, such as the Digital Services Act and the proposed AI Act. Social media platforms face penalties if they fail to remove deepfakes, and synthetic content must be clearly labelled. The EU’s emphasis on transparency and accountability seeks to tackle misinformation, but may place varying burdens on different platforms. The UK government, for its part, has passed legislation to penalize those who create sexually explicit content using deepfake technology: offenders face a criminal record and an unlimited fine, and those who go on to share such content more widely could be sent to jail. 

While the U.S. lacks federal regulations on deepfakes, states like California and New York have enacted laws addressing political and pornographic misuse, imposing fines, jail time, and civil penalties. 

In India, pornographic deepfakes can currently be penalized only under the IT Act, 2000; stronger legislation is expected with the introduction of the Digital India Act. Most of these measures depend on identifying the user who created the video, which is extremely difficult, especially when videos are produced in closed communities and never published on public websites. IP addresses can help trace such individuals, but the government would have to set up special units devoting time and resources to what can prove a tedious task. The second step is to handle the misinformation spread by such videos. India has developed a distinctive approach here: the Deepfakes Analysis Unit (DAU), established by the Misinformation Combat Alliance, runs a WhatsApp tipline that enables users to report audio and video content they suspect to be misleading or harmful and created, wholly or partly, by artificial intelligence. Launched in March 2024, just before the initial phase of the Indian elections, the tipline has handled and assessed hundreds of audio and video files, though not images. The majority of submissions are videos, with varying levels of generative AI involvement detected in the content. 

Conclusion:

Policy on free content and technology available on the internet is bound to be tricky. Investigating every corner of social media and determining what is true or false is close to impossible, and imposing strict restrictions on content publication may invite accusations of stifling freedom of speech. Countries could also suppress public dissent under the guise of solving the deepfake issue. A cautious approach is therefore essential when implementing such policies. The state of Virginia in the United States is a commendable example of how effective policy can be enacted while safeguarding people’s right to freedom of speech: Virginia criminalizes the production and sharing of sexually explicit deepfakes, but the law permits exceptions for parodic or political uses and requires the Attorney General to set up a working group to continue researching deepfakes. This approach illustrates how a balanced policy can combat the dangers of deepfakes while protecting individual freedoms and encouraging creative and political expression. Many nations have introduced policies imposing strict penalties for deepfake offences, but the challenge lies in identifying the perpetrators so that these laws can actually be enforced. Establishing fact-checking units to detect and remove deepfakes, and to help trace their creators, could be beneficial. As AI continues to advance, we must acknowledge that we are not fully prepared for the potential misuse that lies ahead; legislation will need to evolve, and continuous research will be necessary. Deepfakes may feel like a fleeting fever dream we’d rather ignore, but it’s time to wake up. Only through vigilant and timely policy can we ensure justice is served and the truth upheld.

Author’s Bio:

Anushka S is a second-year student at Jindal School of International Affairs, pursuing her Bachelor’s in Political Science (Hons.). Her research interests include the intersection of religious studies and political theory, public policy, social welfare, and developmental growth.
