What do actors Amitabh Bachchan, Anushka Sharma and Shilpa Shetty, cricketer Sachin Tendulkar, National Stock Exchange (NSE) MD and CEO Ashishkumar Chauhan, and Indian industrialists Ratan Tata and Mukesh Ambani have in common? They've all been targets of deepfake videos that used their personas to peddle weight loss drugs and medication for diabetes, and promote fraudulent money-making schemes on social media.
Rapid advancements in artificial intelligence (AI) have ushered in a new era of technological disruption, with deepfakes emerging as a particularly concerning challenge for India's information integrity and public discourse.
Political Misinformation
In the buildup to the recent Lok Sabha elections, significant numbers of people in India had already been targeted by audio and audio-visual deepfakes, though these were mainly attempts to scam them in various ways. There were also initial fears that political deepfakes would be used to try to influence voters.
Those fears appeared justified once the election campaign got underway. Fake clips showing Bollywood actors Aamir Khan and Ranveer Singh criticising Prime Minister Narendra Modi circulated widely. Ultimately, however, while there were instances of deepfakes being used for mudslinging, targeted political attacks and even misinformation about exit polls, parties largely used the technology for their own voter outreach and engagement, and to overcome language barriers.
Deepfakes were also used by parties to 'resurrect' their dead leaders to enhance the emotional connection with voters. In Tamil Nadu, an AI-generated voice message from former chief minister J. Jayalalithaa, who died in 2016, was circulated to criticise the current governing party, the Dravida Munnetra Kazhagam (DMK). The DMK in turn used AI-generated videos of their own deceased leader, M. Karunanidhi, to praise his son and current chief minister, M.K. Stalin.
Cheapfakes vs Deepfakes
While the risk posed by AI-generated content during the elections did not materialise to the extent feared, even conventionally edited videos were often mislabelled as AI-generated or "deepfake" content. For instance, an edited video of Union Home Minister Amit Shah circulated before the polls falsely made it appear that he had promised to scrap reservations for Scheduled Castes (SC), Scheduled Tribes (ST), and Other Backward Classes (OBC) if elected. Although the clip was made using simple video editing tools, the Bharatiya Janata Party (BJP) and several mainstream media organisations mislabelled it as a "deepfake".
It is important to remember that deepfakes are AI-generated media created using machine learning algorithms that can map one person's face onto another's body, and even alter their speech. Creating such media requires significant computational power. In contrast, 'cheapfakes' are manipulated media created using relatively simple editing tools. They are less sophisticated than deepfakes but can still effectively spread misinformation. They continue to be used widely in India.
What India Should Do
From detection algorithms for deepfakes to media literacy for cheapfakes, India can and should develop more targeted countermeasures for the various kinds of manipulated media.
The creation of deepfakes, once a complex and time-consuming process, has become increasingly accessible. Turnaround times have shrunk from days to minutes, even seconds.
Such rapid evolution in technology poses a significant challenge for fact-checkers and content moderators, who struggle to keep up with the pace of misinformation. Fact-checkers in India have found it an uphill battle to debunk deepfake content when it spreads at speed and scale, or during periods of heightened activity such as election seasons. This disparity in the speed of information dissemination underscores the urgent need for more efficient and streamlined verification processes. India requires a multifaceted approach to safeguard the integrity of its democratic processes, one that encompasses technological advancements, regulatory frameworks, and a shift in societal mindsets.
The Way Ahead
While deepfakes have become increasingly difficult to detect, developing AI-powered tools to identify and counter such manipulated content holds promise. Tech giants like Google, Meta, and OpenAI have pledged to work closely with the Indian government and voters to protect access to truthful information. However, the effectiveness of the currently proposed measures, such as labelling and watermarking, remains to be seen.
Continued innovation and collaboration in this domain are paramount. By leveraging AI-powered tools, fact-checkers and content moderators can cut the time it takes to discover false claims, pre-empt them before they cause harm and, in time, even automate deepfake detection, freeing up valuable human resources to focus on more nuanced and contextual analysis. This shift in mindset, from viewing AI as the ultimate verifier to understanding its role in supporting human verification, can lead to more efficient and effective strategies for combating deepfake-driven misinformation.
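To make the idea of AI-assisted verification concrete, here is a minimal, hypothetical sketch in Python of how flagged content might be triaged for human review. It assumes a pretrained real-versus-fake image classifier is available; the model identifier "example-org/deepfake-detector", its label names, and the confidence threshold are placeholders and not a reference to any tool named in this article.

```python
# A minimal sketch of AI-assisted triage for suspected deepfake images.
# Assumption: "example-org/deepfake-detector" is a hypothetical model id;
# any image-classification model trained on real-vs-fake data could be swapped in.
from transformers import pipeline

def triage_images(paths, threshold=0.8):
    """Flag images the classifier scores as likely manipulated,
    so human fact-checkers can prioritise them for review."""
    detector = pipeline("image-classification", model="example-org/deepfake-detector")
    flagged = []
    for path in paths:
        results = detector(path)  # list of {"label": ..., "score": ...}
        top = max(results, key=lambda r: r["score"])
        # Assumption: the model labels manipulated images as "fake".
        if top["label"].lower() == "fake" and top["score"] >= threshold:
            flagged.append((path, top["score"]))
    return flagged

if __name__ == "__main__":
    for path, score in triage_images(["frame_001.png", "frame_002.png"]):
        print(f"Needs human review: {path} (confidence {score:.2f})")
```

The point of such a workflow is not to replace human judgement but to narrow down what fact-checkers look at first, which is precisely the supporting role for AI described above.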
The Deepfakes Analysis Unit (DAU), a first-of-its-kind collaboration between academics, researchers, startups, tech platforms, and fact-checkers, is one notable effort in this area.
Fostering a media-literate population is also crucial to counter deepfakes. Educating the public about the risks and the need to verify information can empower them to navigate digital content with scepticism and resilience.
Learning From Others
Meanwhile, the Indian government's initial reluctance to regulate AI seems to be ebbing as the technology's potential for misuse becomes clearer. Recently, following a controversy over Google's Gemini chatbot, the government issued an advisory reminding intermediaries to take action against AI-generated content that violates the provisions of the Information Technology (IT) Rules, including the spread of patently false or untrue information. The advisory also states that where software allows for the creation of deepfakes, the content should be labelled or embedded with unique identifying data that can be used to determine how it was created, and by which user.
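As a rough illustration of what embedding identifying data could look like, here is a hypothetical Python sketch that attaches provenance details to a generated image as PNG text metadata. The field names, values, and tool and user identifiers are illustrative assumptions, not a format prescribed by the advisory; production systems would more likely rely on standards such as content credentials or robust invisible watermarks.

```python
# A minimal sketch of embedding identifying provenance data into a generated image.
# All field names and values below are illustrative assumptions, not a mandated format.
import uuid
from datetime import datetime, timezone

from PIL import Image
from PIL.PngImagePlugin import PngInfo

def save_with_provenance(image: Image.Image, out_path: str, tool_name: str, user_id: str) -> str:
    """Attach a unique identifier and creation details as PNG text metadata."""
    content_id = str(uuid.uuid4())
    meta = PngInfo()
    meta.add_text("ai_generated", "true")
    meta.add_text("content_id", content_id)
    meta.add_text("created_by_tool", tool_name)
    meta.add_text("created_by_user", user_id)
    meta.add_text("created_at_utc", datetime.now(timezone.utc).isoformat())
    image.save(out_path, pnginfo=meta)
    return content_id

def read_provenance(path: str) -> dict:
    """Read back the embedded text metadata, if any."""
    with Image.open(path) as img:
        return dict(img.text)  # PNG text chunks; empty if none were embedded

if __name__ == "__main__":
    img = Image.new("RGB", (256, 256), "grey")  # stand-in for generated content
    save_with_provenance(img, "generated.png", "example-gen-tool", "user-123")
    print(read_provenance("generated.png"))
```

Plain metadata of this kind is trivial to strip when a file is re-encoded or screenshotted, which is one reason the effectiveness of labelling and watermarking, as noted earlier, remains to be seen.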
The deepfake challenge is not unique to India; it is a global phenomenon that requires a coordinated international response. The European Union will seek to regulate high-risk AI systems through its Artificial Intelligence Act. The UK, too, has tasked its regulators with developing rules for various sectors and use-cases. The US Department of Commerce may also issue guidance on labelling and watermarking deepfakes and develop standards for red-teaming AI models, which can have multiple uses.
India ought to develop a framework suited to its own information environment, one that not only effectively counters harmful deepfakes but also protects free speech and useful innovation (such as generative AI tools that help overcome language barriers).
Don't Drop Your Guard
As many as 86% of Indians surveyed recently said they believe misinformation and deepfakes will affect future elections and that candidates should be barred from using generative AI in their promotional content. While that risk has not yet become as pronounced, the steady proliferation of deepfake videos of celebrities, industrialists, and news anchors promoting false medical and financial solutions shows the technology's potential for perpetrating financial fraud. Amid this, the exploitation of ordinary citizens remains a primary concern.
To navigate this challenge, India must embrace AI's potential to streamline the verification process. By learning from global best practices, India can safeguard the democratic foundations of its elections and empower its citizens to navigate the digital landscape with confidence.
(Jaskirat Singh Bawa leads editorial operations at Logically Facts, an independent, IFCN-accredited fact-checking organisation that operates in 12 countries and 16 languages globally. It is a subsidiary of the AI company Logically.)
Disclaimer: These are the personal opinions of the author