SC Flags AI-Generated Fake Judgments as Misconduct; Warns Lawyers

The Supreme Court of India has deemed a trial court's reliance on AI-generated non-existent judgments as misconduct, stressing its serious implications for the integrity of the justice system. This follows broader concerns raised by the apex court regarding lawyers using AI tools to draft petitions containing fabricated case laws and quotes, emphasizing the need for rigorous verification.

Key Highlights

  • Supreme Court flags trial court using AI-generated fake judgments.
  • Reliance on fake AI judgments termed 'misconduct', not mere error.
  • Apex court to examine consequences and accountability in detail.
  • Concerns raised over lawyers citing non-existent cases like 'Mercy vs. Mankind'.
  • Judges warn of AI's 'hallucinations' and fabricated quotes.
  • Incident highlights critical need for human oversight and verification in law.
The Supreme Court of India has taken a strong stance against the unverified use of Artificial Intelligence (AI) in legal proceedings, flagging a trial court's reliance on what were found to be non-existent, allegedly AI-generated judgments. Terming such conduct 'misconduct' rather than a mere error of law, the apex court underscored its direct bearing on the integrity of the adjudicatory process and indicated that legal consequences will follow. The development reflects growing concern within the Indian judiciary over the responsible adoption of AI tools by legal professionals.

The issue came to the fore during a hearing before a Bench of Justices Pamidighantam Sri Narasimha and Alok Aradhe on February 27, 2026, concerning a trial court in Andhra Pradesh that had dismissed objections to an Advocate Commissioner's report while relying in part on several purportedly fake judgments. The Supreme Court declared that a decision based on such non-existent judgments is not merely an error in decision-making but amounts to misconduct. The Bench decided to examine the matter in detail, issuing notices to Attorney General R. Venkataramani, Solicitor General Tushar Mehta, and the Bar Council of India, and appointing senior advocate Shyam Divan to assist in the matter.

The incident follows a series of warnings from the Supreme Court about the 'alarming' trend of lawyers and litigants relying on AI tools to draft petitions containing fabricated case laws and invented passages from genuine judgments. Earlier in February 2026, a bench comprising Chief Justice of India Surya Kant, Justice B.V. Nagarathna, and Justice Joymalya Bagchi had expressed grave concern over the practice. Justice Nagarathna recalled an instance in which a lawyer cited a completely fictitious case titled 'Mercy vs. Mankind' as binding authority before her bench.
Chief Justice Kant also highlighted similar occurrences in Justice Dipankar Datta's court, where a series of non-existent judgments had been cited. Judges have lamented that even when genuine Supreme Court judgments are cited, the quoted passages often do not exist in the original text, placing an 'additional burden' on judges to verify the authenticity of every extract. This phenomenon, commonly called 'AI hallucination', in which an AI system generates plausible-sounding but entirely fabricated information, poses a significant threat to the foundational trust in judicial proceedings. The Supreme Court's observations emphasize that while AI offers immense potential for efficiency in legal research and analysis, it is not infallible and requires rigorous human oversight and verification.

The apex court's intervention coincides with a broader national and international discourse on AI governance. While India positions itself as a leader in global AI regulation, as evidenced by the India AI Impact Summit and the New Delhi Declaration, the challenges posed by agentic AI within its own legal system are becoming increasingly apparent. The judicial community stresses that AI can assist with research and case management, but the ultimate responsibility for accuracy lies squarely with lawyers and judges.

Instances of AI misuse in the judiciary are not isolated. In December 2025, the Supreme Court encountered what was described as its first AI misuse case, in which a litigant submitted a response containing hundreds of fabricated legal precedents. The Bombay High Court had earlier imposed costs on a litigant for citing AI-generated fake case laws, and the Delhi High Court noted a petitioner indulging in 'AI hallucination'. These events underscore the critical need for structured guidelines and training so that legal professionals understand AI's capabilities and, more importantly, its limitations.
Legal experts note that the duty of competence has always required advocates to verify every authority presented to a court. AI tools can generate citations and plausible-sounding case names with apparent confidence, but this does not absolve lawyers of their responsibility for independent verification. The Supreme Court's strong warning serves as a reminder that technology should augment, not replace, human judgment, prudence, and ethical considerations in the administration of justice. The judiciary remains committed to ensuring that AI does not overpower the administration of justice, emphasizing instead its cautious and responsible integration.

In conclusion, the Supreme Court of India's decision to treat the citing of AI-generated fake judgments as misconduct marks a pivotal moment at the intersection of technology and law. It sends a clear message to the legal fraternity about the high standards of accuracy and verification expected, particularly as AI tools become more prevalent. The Court's ongoing examination aims to establish clear accountability and consequences, reinforcing the sanctity and integrity of the Indian judicial system.

Frequently Asked Questions

What exactly did the Supreme Court of India flag regarding AI?

The Supreme Court of India flagged a trial court's reliance on non-existent, allegedly AI-generated judgments. The apex court explicitly stated that such reliance constitutes 'misconduct' rather than a mere error of law, emphasizing its severe implications for the integrity of the judicial process.

Why is citing AI-generated fake judgments considered misconduct?

Citing AI-generated fake judgments is considered misconduct because it undermines the foundational trust and integrity of the adjudicatory process. These 'hallucinations' by AI can produce fabricated information, including non-existent case laws or quotes, leading to decisions based on false premises, which compromises justice.

Have there been other instances of AI misuse in the Indian judiciary?

Yes, prior to this specific incident, the Supreme Court had already raised concerns about lawyers submitting AI-drafted petitions containing fake case citations, including a fictitious case titled 'Mercy vs. Mankind'. Other high courts, like the Bombay and Delhi High Courts, have also encountered instances of AI-generated fake citations and 'AI hallucinations' in legal submissions.

What are the implications for lawyers and the legal profession in India?

The Supreme Court's strong warning implies that legal professionals must exercise extreme caution and diligence when using AI tools for legal research and drafting. They are expected to rigorously verify all AI-generated content, especially case citations and quotes, against authentic legal sources. Failure to do so could lead to serious consequences, including disciplinary action for misconduct.

What is the Supreme Court's broader stance on AI in the judiciary?

While acknowledging AI's potential to enhance efficiency in legal work, the Supreme Court maintains that AI cannot replace human judgment, prudence, and ethical discretion in decision-making. The judiciary aims to integrate AI cautiously, ensuring it augments rather than overpowers the justice administration process, and stresses the irreplaceable role of human oversight.
