AI in Indian courts – a slow start

Sep 25, 2023

Generative Artificial Intelligence (“AI”) technologies, such as ChatGPT, are often celebrated as transformative forces across various sectors and are sometimes even perceived to challenge the necessity of human involvement in those sectors. Nevertheless, the legal sector in India has harboured a degree of scepticism towards AI, and adoption has been slow.

In the case of Christian Louboutin SAS & Anr. v. M/s The Shoe Boutique – Shutiq (CS (COMM) 583/2023), the Delhi High Court (“DHC”) asserted that, in its present state of technological development, AI cannot replace human intelligence in the adjudication process.  It emphasized that responses from AI chatbots cannot serve as the basis to adjudicate legal or factual matters in a court of law. 

This article examines the foregoing precedent and delves into the admissibility of AI-generated data as evidence in court proceedings in India.

Background

France-based Christian Louboutin SAS and Clermon ET Associes (the “Plaintiffs”) initiated a suit against an Indian firm, M/s Shoe Boutique (Shutiq) (the “Defendant”), alleging infringement of their trademark rights and unauthorized copying of their distinctive shoe designs. Notably, the Plaintiffs are renowned globally for their iconic “red sole” shoes, which have gained substantial reputation and goodwill, including in India, where they were introduced in 2012. Apart from their signature “red sole” shoes, the Plaintiffs also introduced a unique “spiked shoe style” in 2010.

The Plaintiffs contended that their shoe styles possess inherent distinctiveness and are readily identifiable as their creation. To support this claim, the Plaintiffs cited a response from ChatGPT. The Plaintiffs had asked ChatGPT whether Christian Louboutin was known for spiked men’s shoes, to which it responded affirmatively. However, the DHC posed a different question to ChatGPT, asking for the names of brands that produce spiked shoes, and in response, ChatGPT listed ten (10) different companies, including Christian Louboutin.

Key rulings     

The DHC observed that while the Defendant had undertaken not to imitate and sell the Plaintiffs’ designs, this did not imply that the Plaintiffs held a monopoly on all spiked shoes or coloured soles. For an injunction to be granted, the Defendant’s products needed to be a “colourable or slavish imitation” of the Plaintiffs’ designs. After examining the products of both parties, the DHC determined that the Defendant had a clear intent to imitate the Plaintiffs’ products for financial gain and pass them off as the Plaintiffs’ own. Consequently, the suit was decided in favour of the Plaintiffs.

While the judgment primarily revolved around a trademark dispute, the DHC also made interesting observations about the reliability of chatbots in legal proceedings. In view of ChatGPT’s response to its question, the DHC concluded that ChatGPT cannot serve as the basis for adjudicating legal or factual matters in a court of law. This is because, firstly, the response of a Large Language Model (LLM) based chatbot like ChatGPT depends on various factors, including the nature and structure of the user’s query, its training data, etc., and secondly, such chatbots can generate incorrect responses, fictional case laws, imaginative data, etc., making the accuracy and reliability of AI-generated data a “grey area.”

The DHC asserted that, in its current state of technological development, AI cannot replace human intelligence or the humane element in the adjudicatory process. At most, AI tools can be used for preliminary understanding or research.

Our comments

The DHC’s judgment emphasizes the irreplaceable role of human judgment in the adjudicatory process while advocating a cautious integration of AI into it. Considering this, some key points emerge:

  • The case does not completely prohibit AI in the legal arena but confines its use to preliminary understanding and research. For instance, the High Court of Punjab and Haryana recently employed ChatGPT in a case to assess bail jurisprudence (Jaswinder Singh v. State of Punjab (CRM-M-22496-2022)).
  • The evidential relevance of ChatGPT is low, as it can produce incorrect responses, fictional case laws, and imaginative data. Consequently, the question arises as to who should bear the responsibility for any sanctions that may be imposed. Can a lawyer reasonably argue that he/she was unaware of the tool’s capability to fabricate cases? Therefore, ChatGPT may be used as a starting point for research but should be supplemented by traditionally admissible evidence.

Notably, the DHC decision aligns with recent rulings in the United States (the “US”), where lawyers appearing in court must confirm that generative AI was not the sole author of their legal filings, or, if AI was involved, that a human verified the content. Judge Brantley Starr of the US District Court for the Northern District of Texas recently addressed the potential of generative AI platforms to engage in “hallucinations” and provide inaccurate information such as quotes and citations. Judge Gabriel Fuentes of the US District Court for the Northern District of Illinois issued an order mandating disclosure of generative AI tool usage in the drafting of court documents, including specifying the AI tool and the manner of its application. Parties are also required to disclose whether generative AI was used for conducting the corresponding legal research. Further, Judge Stephen Vaden of the US Court of International Trade has required lawyers to also certify that the use of the AI tool “has not resulted in the disclosure of any confidential or business proprietary information to any unauthorized party.”

As technology continues to advance, participants in the legal industry will continue to experiment with it. AI will inevitably make its way into courtrooms, aiming to simplify tasks and enhance efficiency. The Supreme Court of India is also cautiously embracing AI, as demonstrated by its pilot project using AI and Natural Language Processing (NLP) for live transcription of hearings. It has also used machine learning tools to translate judgments into other languages to enhance accessibility.

In this context, the Indian government should promptly enact laws to regulate the use of AI tools. As discussed in a previous post, other countries have either already implemented draft rules on AI (such as China), are actively soliciting inputs on federal AI legislation (like the US), or are finalizing legislation (such as the European Union’s AI Act). In India, there is currently no law regulating AI tools; there is only a national strategy from the government’s think tank, NITI Aayog, and the recommendations of the Telecom Regulatory Authority of India.

Specific to the legal sector, courts must continue to develop jurisprudence on the responsible use of AI in the adjudicatory process, with a focus on striking a balance between adopting AI and preserving the human element of justice. Taking a cue from the US, Indian courts may also mandate disclosures regarding AI usage, including the name of the AI tool, the manner in which it was employed, and the specific portions that were drafted or researched using it. Further, it is imperative for the legal fraternity as a whole to actively engage in policy discussions concerning the use of AI tools. This involves defining their scope and limitations, ensuring the protection of confidential client data, and addressing the risks of entrenched bias that could adversely affect marginalized groups.
