In the age of artificial intelligence and deepfake technology, ensuring a safe, trusted, and accountable digital space is a pressing concern for nations worldwide. India, with its vast digital footprint, has recognised the risks posed by the misuse of technology and is actively implementing legal and institutional frameworks to counter cyber threats and misinformation.
The Information Technology Act, 2000 (IT Act), along with its subsequent rules, lays the groundwork for India’s efforts to regulate cyberspace. This legislation criminalises a range of cyber offences such as identity theft, impersonation, privacy violations, and the dissemination of obscene or exploitative material.
Importantly, the IT Act extends to information created using artificial intelligence or other technologies. This ensures that AI-generated content, including deepfakes, is not exempt from accountability under Indian law.
In a move to adapt to the evolving digital environment, the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 were introduced. These rules place clear responsibilities on digital intermediaries, especially social media platforms, to exercise due diligence and to provide users with a grievance redressal mechanism.
The rules empower users to file complaints with platform grievance officers. If dissatisfied with the response, users can escalate the issue to the Grievance Appellate Committees (GAC) via www.gac.gov.in.
Recognising the growing misuse of deepfake technology, the Ministry of Electronics and Information Technology (MeitY) has been proactively engaging with stakeholders and digital platforms. These consultations have led to the issuance of advisories reminding intermediaries of their obligations to counter synthetic and manipulated media.
Such advisories serve to reinforce the compliance expectations outlined in the IT Rules, especially regarding malicious and misleading content created using AI.
The Indian Computer Emergency Response Team (CERT-In) plays a central role in India’s cyber protection strategy, regularly issuing alerts and advisories on emerging cyber threats and vulnerabilities.
In November 2024, CERT-In released a specific advisory on deepfake threats, offering guidance on prevention and mitigation. Additionally, in May 2023, it published a comprehensive advisory on minimising AI-based risks.
CERT-In has also ramped up its public outreach and awareness initiatives.
These efforts aim to build a vigilant digital society where citizens are empowered to identify and report cyber threats.
The Ministry of Home Affairs (MHA) has established the Indian Cyber Crime Coordination Centre (I4C) to streamline enforcement efforts across states and law enforcement agencies.
Citizens can report cybercrimes, including financial fraud and deepfake-related offences, through the National Cyber Crime Reporting Portal (https://cybercrime.gov.in). Additionally, the toll-free number 1930 provides real-time assistance for reporting online frauds.
India’s response to the growing menace of deepfakes and digital misinformation is multifaceted—spanning regulation, collaboration, public awareness, and enforcement. As technology evolves, so too does the government’s approach, ensuring that cyberspace remains secure, trustworthy, and inclusive for all users.
These initiatives reflect a broader commitment to safeguarding digital rights while holding intermediaries accountable, a crucial balance in the digital age.
Published on: Apr 7, 2025, 4:06 PM IST
Team Angel One