
Artificial intelligence, or AI, has become a major source of fascination for the tech industry and the public at large. Generative AI in particular has exploded in popularity in recent months, with many people adopting AI models such as ChatGPT as tools to accomplish a wide variety of tasks, from preparing letters to drafting grocery lists. Because many AI models are known to “hallucinate,” or make up facts when they do not know how to answer a question, AI has also become a source of concern. Courts in particular have grown wary of AI following a rise in AI-generated filings containing fictitious citations. One such example in Missouri was discussed at length in a previous blog article (https://gotlawstl.com/missouri-courts-tackle-artificial-intelligence/). The Illinois Supreme Court also recently published a two-page judicial reference sheet that judges can use to determine whether a litigant is relying on AI technologies that may be hallucinating or that may have been used to alter or distort pictures or video footage.

Concerns about AI have also led the Illinois Supreme Court to announce its own policy on AI. The policy primarily states that “Unsubstantiated or deliberately misleading AI-generated content that perpetuates bias, prejudices litigants, or obscures truth-finding and decision-making will not be tolerated,” and that the Rules of Professional Conduct and the Code of Judicial Conduct apply to the use of AI technologies (https://www.illinoiscourts.gov/News/1485/Illinois-Supreme-Court-Announces-Policy-on-Artificial-Intelligence/news-detail/). Illinois judges and attorneys may use generative AI as a drafting tool, but they are still required to review any final work product for defects before it is filed with a court. Numerous other state supreme courts, including Missouri’s, have begun developing AI policies by forming specialized teams for that purpose. Others, such as Delaware and Illinois, have announced policies allowing the use of AI technologies under certain guidelines and in compliance with ethical rules.

Many states have also begun to enact laws aimed at regulating, and in certain circumstances prohibiting, the use of AI technologies. One example is the Illinois AI Video Interview Act (https://www.ilga.gov/legislation/ilcs/ilcs3.asp?ActID=4015&ChapterID=68), which requires that applicants be notified if AI technologies are used in the hiring process. Another example, also from Illinois, is a recent amendment to the Illinois Human Rights Act, which makes it a civil rights violation for an employer to use artificial intelligence to make decisions with respect to an employee’s “recruitment, hiring, promotion, renewal of employment, or conditions of employment, training or apprenticeship, discharge, discipline, tenure, or the terms, privileges, or conditions of employment.” The amendment also prohibits employers from using artificial intelligence that has the effect of subjecting employees to discrimination on the basis of protected classes identified under the Illinois Human Rights Act, or from using employee zip codes as a proxy for protected classes.

Although Missouri has not yet enacted any laws regulating or prohibiting the use of AI, bills have been introduced in both houses of the state legislature that would require disclaimers on any political advertisements made or edited with AI. It is important to know and understand the regulations on the use of AI technologies in the jurisdiction where a case is being tried. It is especially important to understand both these regulations and the AI technologies themselves when it appears that opposing counsel may be using AI during litigation.