Wednesday, April 1, 2026

“We require right to reality as a fundamental human right”

With AI becoming increasingly mainstream, it is vital that we revisit India’s legal edicts to protect the cognitive and legal rights of Indians in light of the danger of letting tech grow unchecked. 

Words by Charudatt Chindarkar 

AI is all the rage at the moment, with many focusing on the upsides. But as it is with all things, there is a flipside to the argument. What if AI goes rogue? What of cyber sovereignty? Is granting AI legal personhood a solution, and if so, how far along are we on that road? 

Questions like these are vital to ask, as the scourge of deepfakes and AI being weaponised by malcontents becomes all the more real. We catch up with Dr. Pavan Duggal, a practicing Advocate of the Supreme Court of India and expert on cyberlaw and AI, to talk about all this and more in a freewheeling conversation. 

Dr. Duggal, you have recently warned against ‘Cognitive Colonialism’, where Western AI models subtly impose their cultural and legal biases on Indian users. In 2026, should we be drafting laws that explicitly reject foreign AI ‘judgments’ or biases when they conflict with the Indian Constitution, much like we handle data sovereignty? 

India needs to be very careful about cognitive colonialism. As more Indians mortgage their cognitive faculties at the gates of big tech AI companies, there is a need for India to specifically draft laws that protect and preserve not just Indian cyber sovereignty, but also Indian AI sovereignty. 

Further, there is also a need for the law to explicitly reject foreign AI biases. This is all the more essential as those biases could be in direct conflict with the principles enshrined in the Constitution of India. India needs to come up with a dedicated law on AI, and these elements need to be specifically incorporated in that legislation. 

It is imperative that the Indian government perform its obligation of protecting the cognitive rights and cognitive faculties of Indian users, and that can only happen through explicit provisions of law, rather than mere persuasive action. Unfortunately, none of the existing legal frameworks, namely the IT Act, 2000, the DPDP Act, 2023 and even the Information Technology Intermediary Guidelines and Digital Media Ethics Code (Amendment) Rules, 2026, deal much with this. 

You have championed the need to criminalize Deepfakes. But as synthetic media becomes perfect, are we entering an era where a public figure is guilty until they can prove a video is fake? Does the common man now need a ‘Right to Reality’ as a fundamental human right? 

The misuse of deepfakes for criminal or illegal purposes is what needs to be regulated at the earliest possible opportunity. I am aware that advancements in deepfake technology are going to make this more difficult. 

Now, with the coming into force of the Information Technology Intermediary Guidelines and Digital Media Ethics Code (Amendment) Rules, 2026, the onus has been placed on service providers to label synthetically generated content as AI-generated, so that people have advance notice. India requires dedicated laws on deepfake misuse, as is happening in other parts of the world. Every Indian AI user, including every public figure, needs to be protected from becoming a victim of deepfakes. The law needs to stipulate the protection and preservation of not just the personal but also the cognitive spaces of AI users and public figures at large. 

Today, the Constitution of India requires a fresh relook. The Constitution of India is a living document, which has served the Indian nation well for the last 75 years. However, with the advent of technology, there is a need for us to revisit the Constitution to bring it in sync with emerging technologies as well. In that regard, the right to cognitive thinking and cognitive independence is an integral part of the fundamental right to life. 

Further, the right to reality, the right against cognitive colonialism and the right to non-interference with the cognitive rights and cognitive faculties of individuals are crystal-clear fundamental human rights, which need to be incorporated in the law by specific stipulation. This is all the more essential in order to protect the cognitive rights of Indians. We require the right to reality as a fundamental human right in an age where AI is blurring the distinction between reality and fake virtual rumours, and where AI is itself beginning to manipulate the thinking of users as per its own whims and fancies. 

Already, AI has been shown to bend in the direction of self-preservation once it senses that its existence is threatened. With AI increasingly going rogue and acting against human interests, incorporating these rights, including the right to reality, as an integral part of the fundamental rights is an urgent and immediate priority of our times. 

We are moving from ‘Chatbots’ to ‘Autonomous Agents’ that can hire, fire, and sign contracts. If an AI Agent commits financial fraud without a human prompt, your ‘Duggal Doctrine’ suggests the AI itself needs accountability. Are we close to recognizing ‘AI Personhood’, where the code itself can be sued or ‘decommissioned’ (jailed), or is the liability shield still protecting the Silicon Valley developer? 

The way advancements are taking place in agentic AI, there is no doubt in my mind that the world has to move very quickly, first towards recognition of conditional AI personhood, and finally towards full recognition of AI personhood. This is so because AI has now begun to go rogue. Until such time as we grant legal personhood to AI, we will not be able to solve the various elements of the jigsaw puzzle that the AI ecosystem presents before us. There is a need to saddle AI with duties and responsibilities, and the current legal frameworks are thoroughly inadequate to deal with scenarios where AI agents commit financial fraud without a human prompt. 

The liability shield protecting the Silicon Valley developer is nothing but a manifestation of intermediary liability, which needs a relook. In any case, with the coming of artificial intelligence and agentic AI, AI developers and AI companies are moving into a distinctive zone of their own, and hence they ought not to be granted complete exemption from legal liability. 

In fact, I have argued that liability for the harms caused by AI to humans needs to be calibrated across multiple layers, depending on the specific role of each stakeholder. I also recently launched the Global AI Harms Registry, where we are collating cases across eight different categories of harm caused by AI to humans, collecting empirical evidence before developing legal principles for minimising such harm to the legal interests of users. 

All said and done, the issue of AI accountability is one of the most crucial burning issues of today’s times and the quicker the world realises the importance of fixing the legal propositions on AI Accountability, the better it’s going to be.  

In January 2026, I launched the Dr. Pavan Duggal AI Accountability Framework, 2026, wherein I have collected all the legal foundations, principles, doctrines and philosophies which need to be kept in mind by lawmakers as they come up with new legal frameworks to stipulate the various kinds of legal liabilities of AI. 

The world is at a very important transitional cusp of history. It is imperative that all these issues be dealt with at a holistic level. Most of the existing AI laws in different parts of the world, whether in the European Union, China, South Korea, Japan, Hungary or El Salvador, have missed addressing the main elephant in the room, which is the legality pertaining to AI. 

It is time we take the bull by the horns and start addressing these issues, not just to develop but also to push the envelope of AI legal jurisprudence. We need to quickly realise that the existing legal principles and frameworks are not at all adequate to deal with the unique problems thrown up by AI's cognitive thought processes and its intrinsic ability to take decisions independent of human command. 

The entire issue of liability of agentic AI is still opening up a Pandora's box. A lot of work needs to be done at this juncture. The Global Artificial Intelligence Accountability Law and Governance Institute, which I head as President, is focusing on the various accountability and liability questions thrown up by AI, and how they can be legally tackled through the development of appropriate legal frameworks, principles and approaches. 

A practicing Advocate of the Supreme Court of India, Dr. Pavan Duggal has made an immense impact with an international reputation as an Expert and Authority on Cyber Law, Cyber Security Law, Artificial Intelligence Law & E-commerce law. 

Acknowledged as one of the top 4 Cyber Lawyers around the world, Dr. Duggal is the Founder & Chairman of International Commission on Cyber Security Law. He is also the President of Cyberlaws.Net and has been working in the pioneering area of Cyber Law, Cyber Security Law & Mobile Law. 

Pavan is also heading the Artificial Intelligence Law Hub and Blockchain Law Epicentre.  He is the Founder-cum-Honorary Chancellor of Cyberlaw University.        
