AI Use in Court Is Ethical When Used Correctly | Opinion

The emergence and stunning popularity of artificial intelligence (AI) in the legal sphere raise the question of whether it’s ethical for lawyers to use AI tools in their practices. There is little doubt that AI, like all technology, can be misused. Attorneys citing non-existent cases is but one well-known example.

Innovation has created challenges for the law before. Attorneys were skeptical about technological advances such as electronic legal research and e-discovery, yet those issues have largely been mitigated and are integral to law practice. Artificial intelligence is here to stay, so it’s up to leaders in law to help establish guidelines that ensure AI tools remain ethical. This month, the New York State Bar Association issued a 92-page report on the use of AI in law practice, highlighting many of the challenges and providing recommendations and guidelines to avoid issues.

Attorneys Are Using Multiple Types of AI, Some Better Than Others

AI use in litigation is widely debated because so many AI tools are available and best practices are not yet well understood. To uphold legal ethics, lawyers need to understand the two most popular types of artificial intelligence used in litigation: generative AI and predictive AI.

Generative AI produces new data in all forms of media, including text, code, images, audio, video, and more, based on a provided prompt. Its use of unstructured data generates new and original content. For lawyers, this type of AI can support the generation of legal documents and contracts and client services with legal chatbots for website support.

Generative AI is also where lawyers can get into trouble. Litigators have been caught filing legal documents with AI-generated case citations that turned out to be bogus, a pitfall of generative AI and its emphasis on creating new content.

Predictive AI uses machine learning to identify patterns within large datasets and predict future events and outcomes. Some of the data is traditional legal information, such as case outcomes and case types. But, as with any big data project, it also includes data that doesn’t have an obvious relationship to legal precedent.

Combining this approach with behavioral analytics opens opportunities that are inaccessible through traditional case law research: attorneys can now predict human behavior.

[Image: A courtroom gavel. MediaNews Group/Boston Herald via Getty Images]

Predictive AI is becoming a powerful tool that arms legal teams with intelligence beyond case law and extensive research in half the time, allowing them to better strategize for risk exposure, litigation likelihoods, settlement strategy, and more.

AI Arms Lawyers and Their Clients To Assess Judicial Temperament and Its Effect on Their Case

Previously, attorneys seeking to assess a particular judge’s likelihood of granting their motions relied on parsing basic statistics. This method only offers a rudimentary understanding of the judge’s decision-making.

In this regard, predictive AI tools offer a competitive advantage. Rather than analyzing only basic statistics, this approach collects massive amounts of data that would overwhelm an attorney. Data on a judge’s previous motions in similar cases is a must, but data points on a judge’s background can also help predict judicial temperament. Where a judge went to law school, their political affiliation, gender, net worth, and other biographical details create a more comprehensive outline of their personality. Predictive algorithmic models, behavioral analytics, and AI do the work of identifying authentic patterns of judicial predilection that correlate with the case at hand. This approach yields highly accurate predictions about how the judge will rule and how long it will take a case to resolve.

Judges’ behavioral analytics are essential for lawyers and clients alike. This data offers proof points on how successful litigation efforts are likely to be and how long clients can expect to be in court, details any paying legal customer has a right to know. These predictions give attorneys critical information to inform settlement decisions, motion practice, and risk assessment. Ignoring these metrics places clients at a disadvantage and likely raises ethical issues.

A Blanket Ban on AI Use in Courts Is Too Simplistic and Unrealistic

AI isn’t going anywhere anytime soon. Banning legal AI use would be a lazy approach to sophisticated and complicated technology that is only growing in prominence. Litigators must take a hybrid approach of checks and balances when using AI tools to set the U.S. court system up for success with AI-powered technology. It’s up to lawyers to oversee any work done by AI for completeness and accuracy, ensuring the data used by the AI tool is sound and that the output—either generative or predictive—is appropriate for the case at hand.

AI should be used to better society, enable people to do their jobs with fewer limits, and allow them to better practice law, write briefs, and advise clients. AI is not replacing anyone; instead, it is elevating the human aspect to an artificial level.

Dan Rabinowitz is co-founder and CEO of Pre/Dicta.

The views expressed in this article are the writer’s own.

Uncommon Knowledge

Newsweek is committed to challenging conventional wisdom and finding connections in the search for common ground.

