Deepfakes and Misinformation: Tackling Misinformation in the Age of AI

Abstract 

The rise of AI-driven deepfakes and misinformation is reshaping how we perceive truth in the digital age. With tools like Generative Adversarial Networks (GANs), anyone can now create highly convincing fake videos, audio, or images, blurring the line between reality and fabrication. This article explores how such synthetic content is used to spread political propaganda, commit financial fraud, damage reputations, and harass individuals, particularly women, through non-consensual content. Focusing on India, the study examines the country’s legal response, including the Information Technology Act, 2000, the Indian Penal Code, 1860, and the upcoming Digital India Act, 2025. Despite these frameworks, critical gaps remain in enforcement, clarity, and AI-specific safeguards. To provide perspective, the article compares India’s stance with global efforts such as the U.S. Deepfakes Accountability Act and China’s AI content regulations, drawing lessons India can adapt to its own context. In response, the study puts forward clear, actionable recommendations: creating a National AI Governance Authority (NAIGA), mandating watermarking for AI-generated content, and launching widespread digital literacy programs. Ultimately, this research calls for a balanced legal and technological approach, one that keeps pace with innovation while protecting public trust, democratic processes, and individual dignity.

Author: Piyush Chaudhary