AI And Legal Safeguards: Enhancing Witness Protection Under the BNSS With Emotion Recognition Technology
- Centre for Advanced Studies in Cyber Law and AI (CASCA)
- Feb 11
Updated: Mar 14
By Sneha Patidar (third-year law student, National Law Institute University, Bhopal) & Nikita Patidar (third-year law student, Institute of Law, Nirma University, Ahmedabad).

INTRODUCTION
“We can only see a short distance ahead, but we can see plenty there that needs to be done.” These words of Alan Turing capture the spirit in which Artificial Intelligence (AI) should be brought into legal systems, particularly for enhancing witness protection. Witness protection is essential if individuals are to testify freely without fear of retaliation, thereby upholding the integrity of justice. However, challenges persist in current protocols, including inadequate threat assessments and protective measures insufficiently tailored to individual circumstances. The Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023 provides that individualized threat assessments should determine the protective measures for witnesses, ensuring their safety and encouraging their participation in the justice process.
Emotion recognition technology refers to systems designed to identify and interpret human emotions through data inputs such as facial expressions, vocal tones, and physiological signals. Its integration into witness protection frameworks marks an important intervention in the safety gaps of judicial processes: real-time detection of distress or fear enables early intervention, making the technology a natural complement to the BNSS, 2023, which sets the full range of witness protection provisions into motion. This not only builds greater confidence among witnesses but also fortifies the integrity of their testimony. At the same time, any application must balance technological advancement against fundamental rights such as privacy and dignity, with ethical safeguards in place, so as to sustain justice and faith in the legal system.
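To make this concrete, the following minimal sketch (in Python) illustrates how such a system might fuse facial, vocal, and physiological inputs into a single distress score. Every feature name, weight, and threshold here is a hypothetical assumption chosen for illustration; it does not describe any deployed system or anything the BNSS prescribes.

```python
from dataclasses import dataclass

# Illustrative feature bundle for one observation window.
# All field names and thresholds are hypothetical, chosen only
# to show the shape of a multimodal pipeline.
@dataclass
class WitnessSignals:
    brow_tension: float       # facial cue, 0.0-1.0 (e.g. from a face model)
    pitch_variability: float  # vocal cue, 0.0-1.0 (e.g. from an audio model)
    heart_rate_bpm: float     # physiological cue (e.g. from a wearable)

def estimate_distress(signals: WitnessSignals) -> float:
    """Combine the three modalities into a single 0.0-1.0 distress score."""
    # Normalize heart rate to 0.0-1.0, treating 60 bpm as calm baseline.
    physio = min(max((signals.heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    # Simple weighted fusion; a deployed system would use a trained model.
    return 0.4 * signals.brow_tension + 0.3 * signals.pitch_variability + 0.3 * physio

if __name__ == "__main__":
    window = WitnessSignals(brow_tension=0.8, pitch_variability=0.7, heart_rate_bpm=110)
    print(f"distress score: {estimate_distress(window):.2f}")  # prints 0.78
```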
NEW PROVISIONS UNDER BNSS
The Mahender Chawla v. Union of India case highlighted significant challenges in witness protection, particularly concerning witness intimidation and tampering. The Supreme Court emphasized that witnesses often face threats and coercion due to their participation in legal proceedings, leading to a reluctance to testify. This systemic inadequacy undermines the integrity of the judicial process and jeopardizes the pursuit of justice. In response to these challenges, Section 398 of the BNSS 2023 is an important step forward in witness protection. This section focuses on the physical safety of witnesses who are at risk of intimidation or threats due to their involvement in criminal proceedings. It acknowledges the vulnerability of witnesses and establishes a framework for the state to provide protection in the form of safe housing, concealment of identity, and monitoring of threats.
OVERVIEW OF EMOTION RECOGNITION TECHNOLOGY
Modern emotion recognition systems rely on advanced algorithms that analyze facial expressions, vocal modulations, and physiological signals. By detecting micro-expressions and subtle tonal variations, such systems can identify emotional states such as fear, anxiety, or happiness. The technology is already applied in law enforcement, in monitoring patients during therapy, and in marketing, where it is used to study how consumers respond to products, a breadth of application that suggests its potential across sectors. In the context of witness protection, there is a growing trend towards employing emotion recognition technology for real-time monitoring of emotional states. Jurisdictions such as the United Kingdom and Canada have begun integrating these technologies into their criminal justice systems to enhance witness support. During witness interviews, emotion recognition systems assess emotional responses and facilitate immediate interventions when distress signals are detected.
Upon identifying distress, trained mental health professionals can be deployed to provide on-site counseling and reassurance. Witnesses exhibiting significant distress may be escorted to designated safe areas to regain composure. Additionally, some jurisdictions utilize virtual reality (VR) environments to simulate courtroom settings, allowing witnesses to practice their testimony and reduce anxiety. Interventions may also include guided breathing techniques and cognitive behavioral strategies designed to help witnesses manage stress and reframe negative thoughts. Ultimately, the integration of emotion recognition technology not only enhances witness safety but also improves the quality of testimony, fostering an environment conducive to open communication and justice.
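The escalation logic described above can be pictured as a simple decision ladder. The sketch below is purely illustrative: the thresholds and intervention labels are assumptions, and in practice any such protocol would be designed and supervised by mental health professionals, not fixed in code.

```python
# Hypothetical escalation ladder for the interventions described above,
# ordered from strongest to mildest response. Thresholds are assumptions.
INTERVENTIONS = [
    (0.8, "escort witness to designated safe area; deploy counsellor on site"),
    (0.5, "pause interview; begin guided breathing exercise"),
    (0.3, "flag for human observer; continue with closer monitoring"),
]

def recommend_intervention(distress_score: float) -> str:
    """Map a 0.0-1.0 distress score to the strongest matching intervention."""
    for threshold, action in INTERVENTIONS:
        if distress_score >= threshold:
            return action
    return "no intervention; routine monitoring"

for score in (0.85, 0.6, 0.1):
    print(f"{score:.2f} -> {recommend_intervention(score)}")
```

Keeping the ladder as plain data, rather than burying thresholds in logic, would let the reviewing professionals adjust the protocol without touching the system itself.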
THE ROLE OF AI IN THE LEGAL CONTEXT
AI is changing how law enforcement and witness protection operate. The BNSS has advanced witness protection by providing for individualized threat assessments, enabling law enforcement to evaluate the specific risks faced by witnesses and implement tailored protective measures. Emotion recognition can further strengthen such measures by ensuring timely protection for these witnesses.
However, AI has its limitations. Emotion recognition tools may be misused, and emotional expressions may be misinterpreted owing to cultural and individual variations, skewing outcomes. In Christian Louboutin SAS & Anr. v. M/s The Shoe Boutique – Shutiq, the Delhi High Court emphasized that AI cannot replace human judgment, particularly in sensitive areas like witness protection. The court’s ruling highlights the necessity of human oversight in interpreting emotional data and in making decisions that affect individuals’ safety and rights. Thus, while AI presents transformative opportunities for law enforcement and witness protection, its implementation must be carefully regulated to ensure fairness and equity.
While the BNSS aims for tailored protection, over-reliance on AI may lead to standardized, one-size-fits-all measures, undermining that aim. India must adopt a robust regulatory framework that ensures ethical AI use, transparency, and safeguards against bias. By doing so, India can effectively integrate AI into legal processes while upholding justice and civil liberties.
INTEGRATING EMOTION RECOGNITION TECHNOLOGY INTO WITNESS PROTECTION: CONSTITUTIONAL SAFEGUARDS AND RECOMMENDATIONS
The integration of artificial intelligence (AI) technologies, particularly emotion recognition technology, into witness protection frameworks under the Bharatiya Nagarik Suraksha Sanhita (BNSS), 2023, must be firmly grounded in constitutional rights. Central to this discussion are Articles 21 and 14 of the Indian Constitution, which guarantee the right to life, personal liberty, and equality before the law. The Supreme Court has interpreted Article 21 to include the right to privacy, a crucial consideration when deploying technologies that monitor emotional states.
The BNSS aims to establish comprehensive witness protection programs to address the pervasive issues of intimidation and coercion faced by witnesses. The Supreme Court has emphasized the urgent need for effective measures to protect witnesses from powerful individuals who may seek to undermine justice through threats or bribery. While emotion recognition technology can enhance witness safety by identifying signs of distress or fear in real time, its application must be accompanied by stringent safeguards to prevent potential abuses, such as unauthorized surveillance or data breaches.
The Witness Protection Scheme (WPS) of 2018 laid important groundwork for safeguarding witnesses by providing measures such as relocation, identity changes, and financial support. The scheme categorizes witnesses based on their threat levels:
- Category A: witnesses facing threats to their lives or the lives of their family members.
- Category B: witnesses facing threats to their safety, reputation, or property.
- Category C: witnesses experiencing moderate threats, such as harassment.
However, the scheme does not adequately incorporate the potential benefits of emerging technologies like AI. By leveraging AI for real-time emotional assessments and tailored interventions, the WPS could significantly improve witness safety and support, addressing vulnerabilities that the current measures fail to consider.
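One way to picture such an integration: an AI-derived distress score could be blended with the witness’s static WPS category to prioritise follow-up, without ever lowering the protection the category already guarantees. The sketch below is a hypothetical illustration only; the baseline values and the blending rule are assumptions, not part of the WPS or the BNSS.

```python
# Illustrative only: combining WPS 2018 threat categories with an
# AI-derived distress score to prioritise protective follow-up.
# The baselines and scoring rule are hypothetical, not the scheme's method.
CATEGORY_BASELINE = {"A": 0.9, "B": 0.6, "C": 0.3}  # from the WPS threat levels

def protection_priority(category: str, distress_score: float) -> float:
    """Blend the static threat category with live emotional-state monitoring."""
    baseline = CATEGORY_BASELINE[category]
    # Live distress can raise, but never lower, the static assessment.
    return max(baseline, 0.5 * baseline + 0.5 * distress_score)

print(protection_priority("C", 0.9))  # 0.6: a Category C witness in acute distress
print(protection_priority("A", 0.1))  # 0.9: the Category A baseline is preserved
```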
Privacy concerns surrounding the use of emotion recognition technology are significant. The Supreme Court’s ruling in Justice K.S. Puttaswamy (Retd.) v. Union of India affirmed that privacy is a fundamental right, necessitating that any intrusion into personal privacy be justified by law and serve a legitimate state interest. Therefore, any monitoring of emotional states must occur with informed consent from witnesses, in line with constitutional stipulations and the best practices outlined in the Digital Personal Data Protection (DPDP) Act, 2023. The Act establishes clear regulations regarding data collection and consent, ensuring that witnesses are informed about how their emotional data will be used. This alignment not only protects individual privacy rights but also enhances trust in witness protection measures.
To ensure accountability and transparency in AI systems used for witness protection, it is essential to establish independent review boards composed of legal experts, ethicists, and technologists. These boards would evaluate the implications of using AI technologies while ensuring compliance with legal standards and mitigating algorithmic biases. The evaluation could include reviewing data sources, assessing algorithmic fairness, and ensuring that informed consent protocols are followed to protect witness privacy and safety. Legislative measures should also mandate that developers disclose how their algorithms function and what data is utilized, fostering public trust in these technologies.
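As one concrete example of such an evaluation, a review board might compare how often genuinely non-distressed witnesses are wrongly flagged across demographic groups. The sketch below uses synthetic records and hypothetical group labels; a large gap between groups would signal exactly the kind of algorithmic bias the text warns against.

```python
from collections import defaultdict

# Minimal sketch of one audit a review board might run: comparing the
# distress-flag false-positive rate across demographic groups.
# Records are (group, flagged_by_model, actually_distressed); data is synthetic.
records = [
    ("group_x", True, False), ("group_x", False, False), ("group_x", True, True),
    ("group_y", True, False), ("group_y", True, False), ("group_y", False, True),
]

def false_positive_rates(rows):
    counts = defaultdict(lambda: [0, 0])  # group -> [false positives, non-distressed total]
    for group, flagged, distressed in rows:
        if not distressed:  # the rate is computed over genuinely non-distressed cases
            counts[group][1] += 1
            if flagged:
                counts[group][0] += 1
    return {g: fp / total for g, (fp, total) in counts.items() if total}

print(false_positive_rates(records))
# {'group_x': 0.5, 'group_y': 1.0} -> a gap this large would warrant review
```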
Training law enforcement personnel to accurately interpret emotional data is critical for effective implementation. Such training should emphasize the importance of human judgment during witness interviews and assessments, particularly given the diverse backgrounds from which witnesses may come. Additionally, public awareness campaigns can educate witnesses about how AI technologies can enhance their safety during trials, addressing concerns about privacy and fostering trust in these innovations.
Finally, ongoing research is necessary to optimize the use of emotion recognition technology in witness protection. Key areas for investigation include identifying biases within algorithms and studying their long-term effects on the quality of witness testimony. Developing systematic approaches for integrating AI-derived evidence into court proceedings while safeguarding individual rights is equally crucial.
As India implements provisions under the BNSS for comprehensive witness protection—including safe housing and identity changes—it is imperative to prioritize these recommendations. The integration of emotion recognition technology must be accompanied by rigorous safeguards that prevent misuse while enhancing the safety and comfort of witnesses during legal proceedings. By adopting a comprehensive approach that combines technological innovation with strong legal protections, India can significantly improve its witness protection framework while addressing contemporary challenges within its judicial system.
CONCLUSION
Bringing AI and emotion recognition technology into the BNSS’s witness protection strategy thus represents a transformational change in improving the safety and comfort of witnesses in legal processes. With effective monitoring of emotional states and timely support, witnesses may feel secure enough to testify without fear of intimidation or retribution. This outcome, however, is contingent upon careful planning, sound regulatory frameworks, and ongoing dialogue among legal professionals, technologists, and policymakers.
Putting India at the forefront of these changes requires serious and urgent attention to ethical issues and human rights during implementation. The success of AI in this domain will depend on collaborative effort that anchors witness protection in principles of justice and fairness rather than in technology for its own sake. Continued research and policy development will remain salient throughout, ensuring that future advances in AI serve justice rather than diminish it.