{"id":5002,"date":"2025-09-23T19:15:50","date_gmt":"2025-09-23T19:15:50","guid":{"rendered":"https:\/\/cybersecurityinfocus.com\/?p=5002"},"modified":"2025-09-23T19:15:50","modified_gmt":"2025-09-23T19:15:50","slug":"california-court-issues-10000-warning-over-lawyers-chatgpt-brief","status":"publish","type":"post","link":"https:\/\/cybersecurityinfocus.com\/?p=5002","title":{"rendered":"California Court Issues $10,000 \u2018Warning\u2019 Over Lawyer\u2019s ChatGPT Brief"},"content":{"rendered":"<p>A California appeals court has fined Los Angeles attorney Amir Mostafavi $10,000 after he filed a brief riddled with fake case citations generated by ChatGPT.<\/p>\n<p>The sanction, which appears to be the largest of its kind in the state, according to CalMatters and legal researcher Damien Charlotin, came after judges found that 21 of 23 quotes in Mostafavi\u2019s appeal in the opening brief were fabricated. The published opinion warns that no court filing should contain citations that an attorney has not personally verified.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Court calls filing \u2018frivolous\u2019 and a waste of time<\/h2>\n<p>The 2nd District Court of Appeal said Mostafavi\u2019s filing broke court rules and amounted to a frivolous appeal. The judges also faulted him for wasting the court\u2019s time and taxpayer resources.<\/p>\n<p>Their opinion, issued 10 days ago, outlined how the submission was laced with fabricated material and stressed that attorneys must ensure every citation is authentic.<\/p>\n<p>Mostafavi\u2019s penalty appears to be the most costly fine issued by a California state court against an attorney over AI-generated text.<\/p>\n<p>The panel certified the opinion for publication as a warning, making clear that courts will not tolerate unverified material presented as legal authority.\u00a0<\/p>\n<p>In May, a US district court judge in California ordered two law firms to pay $31,100 in fees over \u201cbogus AI-generated research,\u201d saying they nearly cited fabricated material in an order and that \u201cstrong deterrence is needed.\u201d<\/p>\n<h2 class=\"wp-block-heading\">Mostafavi used ChatGPT to \u2018improve\u2019 appeal draft<\/h2>\n<p>Mostafavi told the court he wrote the appeal himself, then turned to ChatGPT in hopes of improving the draft. He acknowledged that he did not review the output before submitting it and stated that he was unaware that the program would insert case citations or fabricate material.<\/p>\n<p>He <a href=\"https:\/\/calmatters.org\/economy\/technology\/2025\/09\/chatgpt-lawyer-fine-ai-regulation\/\" target=\"_blank\" rel=\"noopener\">told CalMatters<\/a> it is unrealistic to expect lawyers to abandon <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-software\/\">AI tools<\/a>, comparing their rise to the way online databases replaced law libraries. Until the technology stops <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-hallucinations\/\">producing false information<\/a>, he said attorneys should proceed with caution.<\/p>\n<p>\u201cI\u2019m paying the price,\u201d Mostafavi said, warning that others could fall into the same trap.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Experts warn of growing wave of AI fabrications<\/h2>\n<p>Damien Charlotin, who tracks global cases of fake AI citations, told CalMatters that court filings containing fabricated case law have jumped from just a few a month to several a day. 
He explained that <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-model-types\/\">AI models<\/a> are especially prone to hallucinating when asked to support complex legal arguments, a tendency that can leave briefs seeded with false material.<\/p>\n<p>Legal scholars echoed that concern. UCLA\u2019s Mark McKenna called relying on ChatGPT an \u201cabdication of your responsibility as a party representing someone.\u201d Meanwhile, Professor Andrew Selbst warned that the technology is being \u201cshoved down all our throats\u201d in law schools and firms without serious consideration of the consequences.\u00a0<\/p>\n<h2 class=\"wp-block-heading\">Too high a price for errors<\/h2>\n<p>AI models have been known to generate false information, with <a href=\"https:\/\/www.eweek.com\/news\/ai-hallucinations-increase\/\">independent tests documenting hallucinations<\/a> even in basic, verifiable tasks. The trend suggests that accuracy in <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/what-is-generative-ai\/\">generative systems<\/a> remains far from guaranteed.<\/p>\n<p>\u201cIn the meantime we\u2019re going to have some victims, we\u2019re going to have some damages, we\u2019re going to have some wreckages,\u201d Mostafavi said after his sanction.<\/p>\n<p>But when justice hangs in the balance, waiting for the technology to improve is not enough. Allowing \u201csome victims, some damages, and some wreckages\u201d because of <a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/what-is-artificial-intelligence\/\">artificial intelligence<\/a> is simply unacceptable.\u00a0<\/p>\n<p><a href=\"https:\/\/www.eweek.com\/artificial-intelligence\/ai-for-lawyers\/\"><strong>Law firms are turning to artificial intelligence<\/strong><\/a><strong> to handle tasks once reserved for junior associates. See how this shift is reshaping client services and redefining the role of legal professionals.<\/strong><\/p>\n<p>The post <a href=\"https:\/\/www.eweek.com\/news\/california-lawyer-fined-over-chatgpt-fake-cases\/\">California Court Issues $10,000 \u2018Warning\u2019 Over Lawyer\u2019s ChatGPT Brief<\/a> appeared first on <a href=\"https:\/\/www.eweek.com\/\">eWEEK<\/a>.<\/p>","protected":false},"excerpt":{"rendered":"<p>A California appeals court has fined Los Angeles attorney Amir Mostafavi $10,000 after he filed a brief riddled with fake case citations generated by ChatGPT.
The sanction, which appears to be the largest of its kind in the state, according to CalMatters and legal researcher Damien Charlotin, came after judges found that 21 of 23 [&hellip;]<\/p>\n","protected":false},"author":0,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[2],"tags":[],"class_list":["post-5002","post","type-post","status-publish","format-standard","hentry","category-news"],"_links":{"self":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5002"}],"collection":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/types\/post"}],"replies":[{"embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=5002"}],"version-history":[{"count":0,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=\/wp\/v2\/posts\/5002\/revisions"}],"wp:attachment":[{"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=5002"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=5002"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cybersecurityinfocus.com\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=5002"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}