How do you safely apply a technology that draws from everything and can generate anything?
That is the question companies and industries around the world are facing when integrating generative artificial intelligence (AI) tools into their workflows and processes.
“On a macro level, I think firms are viewing this as an opportunity,” Shaunt Sarkissian, CEO and founder of AI-ID, told PYMNTS.
That sentiment was confirmed in the latest PYMNTS Intelligence report, which showed that 4 in 5 legal professionals believe generative AI will create “transformative efficiencies” in the legal sector alone.
“A lot of legal professionals view [AI] the same as any other tool that can enhance their efficiency, going back to the word processor,” Sarkissian said.
“There’s been technology in the legal field for a long time now,” he added.
But AI is unlike other technologies. And its applications are not merely about automating mundane tasks but about empowering legal professionals to be more analytical and productive.
This brings both opportunities and risks.
“The devil is in the details, and it’s in how you use AI and are transparent around it,” Sarkissian said. “AI is a unique tool that needs to be shown and exposed to the end customer.”
The same PYMNTS Intelligence report found that 72% of lawyers doubt their industry is ready for generative AI, and just 1 in 5 believe that the advantages of using AI surpass the disadvantages.
Sarkissian noted an intrinsic divide between individuals and firms, emphasizing that individual practitioners might initially feel threatened by technologies they have heard could replace them.
“Once they start using the technology, that feeling goes away,” he said. “They see how AI makes their job more efficient … and, if anything, new technologies create new frontiers of law.”
He drew a parallel to other industries, such as accounting, where technology initially raised concerns about job displacement but ultimately led to the emergence of new, more strategic roles.
“From lawyers, to doctors, to accountants, the role [when combined with AI] becomes more about making high-level judgment calls and strategic recommendation,” Sarkissian said.
As firms navigate the implications, challenges and potential strategies for harnessing AI in the legal domain, sometimes the most important first step is simply to start playing around with the tools.
“Lawyers should first play with the tools and see not just what they can do, but also what their shortcomings are,” Sarkissian said. “At the end of the day, a lawyer is responsible for fact defense and knowledge of the law. … If you use a calculator and it told you 10 times 10 is 7,000, then you know not to use that calculator.”
While AI tools can speed up processes, legal professionals must maintain their roles as final arbiters, ensuring that the information presented is accurate and aligns with legal standards. The goal is to view AI tools as complementary, not as end-to-end, final-answer ecosystems.
Sarkissian also stressed the need for transparency in the use of AI tools. Legal professionals should be explicit about when AI is involved in decision-making or document creation. This transparency extends to the end clients, fostering trust and allowing for a more informed relationship between lawyers and those they serve.
“If an element in a legal document was generated by AI, it should be noted — forced to be noted — and tracked throughout the system,” he said. “Just like a lawyer can be in legal jeopardy for their work, so should an AI system be. The client should be able to say, ‘Hey, not only you, Mr. Lawyer, have done this wrong, but this system also has done something wrong.’”
“I think that will mitigate the fear that some have in using AI tools, and it will also make the AI tool providers more accountable,” Sarkissian added.
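Sarkissian did not describe a specific implementation, but as a rough illustration of the kind of tracking he envisions, here is a minimal sketch in Python. The schema is a hypothetical assumption invented for this example; none of the class or field names come from AI-ID or any real product.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical provenance record for one element of a legal document.
# All names here are illustrative assumptions, not a real product's schema.
@dataclass
class ProvenanceTag:
    element_id: str          # which clause, paragraph, or exhibit
    generated_by: str        # "human" or the AI tool that produced it
    reviewed_by: str | None  # the lawyer who signed off, if anyone has
    timestamp: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )

    @property
    def is_ai_generated(self) -> bool:
        return self.generated_by != "human"


def unreviewed_ai_elements(tags: list[ProvenanceTag]) -> list[ProvenanceTag]:
    """Flag AI-generated elements that no lawyer has yet reviewed."""
    return [t for t in tags if t.is_ai_generated and t.reviewed_by is None]


# Example: a contract with one human-drafted and one AI-drafted clause.
tags = [
    ProvenanceTag("clause-1", generated_by="human", reviewed_by="A. Lawyer"),
    ProvenanceTag("clause-2", generated_by="llm-drafting-tool", reviewed_by=None),
]
for tag in unreviewed_ai_elements(tags):
    print(f"{tag.element_id} was AI-generated and still needs attorney review")
```

The point of a scheme like this is auditability: a client, or a court, could ask which elements of a document were machine-drafted and whether a lawyer signed off on each one, which is the accountability Sarkissian describes.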
Compounding the challenges of preparing the legal industry for effective and responsible use of AI is the lack of industrywide consensus about how to govern generative AI more broadly.
“I wouldn’t rely on the vendors to necessarily set the rules,” he said. “It may need a combination of the bar association and even the outcome of legislation where AI was wrongly applied.”
Potential regulatory measures to ensure the ethical use of AI in the legal domain could include setting standards for accuracy and safety in AI outputs, akin to safety standards for other tools and technologies, Sarkissian said.
Ultimately, whether through self-regulation by legal bodies or a federal apparatus, establishing minimum safety standards for AI outputs could mitigate risks and reassure both legal professionals and clients.
Sarkissian also noted another potential danger: the prevalence of AI-generated legal documents, which could harm consumers who look to save money on a boilerplate document or contract and instead receive something that is “junk.”
“The industry needs to identify which are the most effective tools and also be vocal about telling consumers to use them at their own peril,” he said. “There’s always some benefit to having an actual lawyer look things over.”
As the legal industry navigates the evolving landscape of AI, Sarkissian outlined three steps for ensuring a smooth integration: embrace the tools, engaging with them to identify which best suit a firm’s needs; educate the market by transparently communicating the role of AI in document creation and decision-making, including its benefits and limitations; and engage with legislators around the development of regulations.