AI Errors in Court Filings Spark Warnings on AI Risks
Artificial intelligence has become a powerful tool in professional settings, assisting with complex tasks such as data analysis, legal research, and drafting documents. But its rapid integration has also revealed serious risks, particularly when inaccurate content reaches critical institutions like the courts.

On Aug. 13, the Supreme Court of Victoria in Australia confronted such a failure. Rishi Nathwani, a senior defense attorney and King's Counsel, submitted documents during a murder trial that contained fabricated quotations and references to fictitious judicial rulings. The documents had been generated by an AI tool and were not fully checked before being presented in court.

Clerks working for Justice James Elliott discovered the errors when they attempted to verify the citations in official databases but found no record of the cases. After further review, the defense acknowledged the citations were fabricated. The revelation halted proceedings for 24 hours. Nathwani later issued a formal apology, conceding the defense's lack of due diligence.

On Aug. 14, Justice Elliott addressed the incident in open court, describing the sequence of events as unacceptable. He emphasized that all content used in legal proceedings must be accurate and trustworthy and reiterated that any AI-generated material requires independent verification. He also cited the court¡¯s existing policy on responsible AI use in legal practice.
The case in Victoria is part of a broader pattern. In 2023, a federal judge in the United States fined two attorneys and their firm after they filed briefs that relied on fictitious case law generated by ChatGPT. Later that year, lawyers representing Michael Cohen, Donald Trump's former personal attorney, submitted filings that contained similar AI-related errors.

Concerns have also been raised in the United Kingdom. On June 21, 2024, British High Court Justice Victoria Sharp warned that presenting false legal material could amount to contempt of court. In more severe cases, she said, it may be prosecuted as perverting the course of justice, a criminal offense carrying a maximum sentence of life imprisonment.

The incidents underscore a growing tension between technological innovation and professional accountability. While AI can enhance efficiency and broaden access to resources, experts stress that it cannot replace human judgment, ethical responsibility, or thorough fact-checking. In fields defined by accuracy and trust, oversight must remain firmly in human hands.



Sean Jung
R&D Division Director
 
1. What mistake did defense attorney Rishi Nathwani make during the murder trial in Victoria?
2. What immediate impact did the discovery of fabricated citations have on the court proceedings?
3. What warning did British High Court Justice Victoria Sharp issue in June 2024 regarding false legal material?
4. How does the passage describe the balance between AI innovation and professional accountability?
 
1. Do you think lawyers should be allowed to use AI tools in preparing legal documents? Why or why not?
2. If you were the judge, how would you respond to a lawyer submitting fake citations from AI?
3. Should mistakes made by AI be punished the same way as mistakes made by humans? Why?
4. Do you agree that presenting false legal material could be treated as a crime? Why or why not?