Are AI Lawyers Coming for Us? What We Need to Know
STORY AT-A-GLANCE: This story is about the implications of the latest developments in legal tech.
Before we get to it, I want to ask a question and just leave it there — for us to keep in mind as we plough through the latest wonders of the AI world. The question is, what is the point of our existence as human beings on Earth?
Chatbot Passed the Uniform Bar Exam
Word on the street is that the latest Microsoft-backed AI has passed the Uniform Bar Exam, an exam that lawyers in the United States have to pass in order to get licensed to practice law. Not only did it pass it, but its score “fell in the top 10% of test takers.” The AI software, made by the Microsoft partner OpenAI, is called “GPT-4.” It is an upgrade from GPT-3.5, which is what the recently famous ChatGPT program is based on. In case you are curious, “GPT” stands for “Generative Pre-trained Transformer,” which is a computer language model that uses “deep learning” to produce text. Deep learning stands roughly for combing through vast amounts of data, algorithmically extracting meaningful characteristics of different types, and then summing them up in a way that makes it look like the computer “understands.” This software falls under the definition of “generative AI,” which is the type of AI that goes beyond analyzing large amounts of data and producing a summary, and that is capable of generating its own “creative” output based on the data it has analyzed. Per OpenAI's GPT-4 Technical Report, the program was tested “on a diverse set of benchmarks, including simulating exams that were originally designed for humans.”
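To make the “pattern-matching” idea concrete, here is a toy sketch of next-word prediction. This is emphatically not OpenAI's actual method — GPT-4 uses deep neural networks trained on vast corpora — and the corpus, function names, and seed below are all invented for illustration. But the core mechanic, predicting the next token from statistical patterns in previously seen text, is the same:

```python
import random
from collections import defaultdict

# A toy bigram "language model": count which word follows which,
# then generate text by repeatedly sampling a plausible next word.
# (Invented miniature corpus, purely for illustration.)
corpus = (
    "the model reads the data and the model predicts the next word "
    "and the next word follows the data"
).split()

# Count the successors of each word.
following = defaultdict(list)
for current_word, next_word in zip(corpus, corpus[1:]):
    following[current_word].append(next_word)

def generate(start, length, seed=0):
    """Generate `length` words starting from `start` by sampling successors."""
    rng = random.Random(seed)
    words = [start]
    for _ in range(length - 1):
        successors = following.get(words[-1])
        if not successors:
            break  # dead end: no observed successor for this word
        words.append(rng.choice(successors))
    return " ".join(words)

print(generate("the", 8))
```

The output is grammatical-looking word salad stitched from observed patterns — no understanding anywhere, just counting and sampling, which is the (vastly scaled-up) family of tricks the article is describing.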
An Interlude
Can I please use this opportunity to express my annoyance with the popularization of the hipster use of the word “humans” instead of “people,” as if there is ever going to be a point in time when robots grow sentient and start participating in society not as a type of technology at the hands of whoever is in charge but as living beings in their own right? This is of course never going to happen (although the patent owners may pretend). This is nonsense and fiction. But calling people “humans” introduces a new way of viewing ourselves through externalized mechanical eyes. It is yet another magical trick by the crazies in high chairs to disconnect us from our innate personality and our souls. It also helps the tyrants to bring about “robot citizens” and give them “rights.” The “robot citizens” may even include “financial actors,” to justify the fraud! And when the actual people complain about the absurdity of it based on the fact that robots aren't actually “human,” they'll be accused of having a phobia of some sort. Using “humans” instead of “people” when describing us all just makes the trickery a tad easier because, are these machines or software products people? Obviously they are not. We know they are not people. But are they maybe a little bit human-like? Trying to be human? Wanting to be human? Deserving to be human? Don't you think they at the very least deserve human-like rights? Etc.
Back to the Topic of AI Passing the Bar Exam
Anyway, according to OpenAI, the company took the necessary precautions to ensure that the AI product didn't just mechanically reproduce already known correct answers to the already known questions on the bar exam. In their own words: “we did no specific training for these exams. A minority of the problems in the exams were seen by the model during training; for each exam we run a variant with these questions removed and report the lower score of the two. We believe the results to be representative.” “Exams were sourced from publicly available materials. Exam questions included both multiple choice and free-response questions; we designed separate prompts for each format, and images were included in the input for questions which required it. The evaluation setup was designed based on performance on a validation set of exams, and we report final results on held-out test exams. Overall scores were determined by combining multiple-choice and free-response question scores using publicly available methodologies for each exam.” “On a simulated bar exam, GPT-4 achieves a score that falls in the top 10% of test takers. This contrasts with GPT-3.5, which scores in the bottom 10%.” Here are the Uniform Bar Exam and LSAT results, according to the GPT-4 Technical Report:

Uniform Bar Exam (MBE+MEE+MPT)
- GPT-4: 298 / 400 (approximately 90th percentile)
- GPT-4 (no vision): 298 / 400 (approximately 90th percentile)
- GPT-3.5: 213 / 400 (approximately 10th percentile)

LSAT
- GPT-4: 163 (approximately 88th percentile)
- GPT-4 (no vision): 161 (approximately 83rd percentile)
- GPT-3.5: 149 (approximately 40th percentile)

Here is what a mainstream review of the software, written by Dr. Lance Eliot and published on law.com, has to say:
Does the passing of the simulated uniform bar exam imply or prove that GPT-4 is legally capable and fully ready to perform legal tasks on par with human lawyers? The answer is a resounding No, namely that despite the wink-wink implication or innuendo, all that can be reasonably said is that GPT-4 was able to use its extensive computational pattern-matching of words related to other words in order to successfully derive answers to the presented exams.
My comment: I think it is important to emphasize the fact that the “wink-wink” component is a big part of the AI myth. We need to remember it when it comes to all AI “work,” not just the tasks that people do in their currently still prestigious career paths. And is it possible that when the overlords came up with their conveyors and “systems management,” they just duped us point-blank?! Back to Dr. Eliot's analysis:
“As I've stated in my prior posted piece entitled “Best Ways To Use Generative AI In Your Law Practice,” care needs to be exercised in overstating what generative AI can attain. Furthermore, the comparison of GPT-4 to “human-level performance” smacks of anthropomorphizing of AI. This is a dangerous slippery slope of taking people down a primrose path that current AI is sentient or human-like in abilities.

Generative AI such as GPT-4 is notably handy as an aid for lawyers and can be a huge leg-up in performing legal tasks. That being said, relying solely on generative AI for legal efforts is unsound and improper.

The key takeaway for lawyers is that you ought to be giving serious and deep consideration to leveraging generative AI such as GPT-4. No doubt about that. GPT-4 is even better at aiding lawyers than ChatGPT. I've said over and over again that lawyers and law practices using generative AI are going to outdo and outperform attorneys and firms that aren't using generative AI.”
Generative AI
Let's talk for a second about generative AI. Here is what Reuters has to say:
“Generative artificial intelligence has become a buzzword this year, capturing the public's fancy and sparking a rush among Microsoft (MSFT.O) and Alphabet (GOOGL.O) to launch products with technology they believe will change the nature of work.”

“The most famous generative AI application is ChatGPT, a chatbot that Microsoft-backed OpenAI released late last year. The AI powering it is known as a large language model because it takes in a text prompt and from that writes a human-like response.”

“GPT-4, a newer model that OpenAI announced this week, is ‘multimodal' because it can perceive not only text but images as well. OpenAI's president demonstrated on Tuesday how it could take a photo of a hand-drawn mock-up for a website he wanted to build, and from that generate a real one.”
Here is my favorite bit: “Cybersecurity researchers have also expressed concern that generative AI could allow bad actors, even governments to produce far more disinformation than before.” Oh no, governments spreading misinformation? Impossible. Never once happened in human history. At least, not our government. Not here and not now, and not against us (phew). A short video explainer by Reuters (narrated by AI?).
Paying Attention to the Upside-Down Language of AI
One of the things to pay attention to in the talk about AI is the use of the word “harm.” What is “harmful language”? On the intuitive level, we know (calls for violence or child abuse, for example, are actually harmful language) — but in the official, robotic context, “harmful language” is whatever they say it is on a given day. In an imaginary honest society, in a society where the leaders wouldn't try to mess with language and with people's heads, the desire to put tight controls around what can come out of a robot's mouth could be a neutral and noble goal. It is a robot, after all. But we don't live in an honest society, and the leaders are already trying to label any “wrongthink” as “hate speech.” So naturally, as the trend with AI proceeds, and as AI replaces educators and bureaucratic decision making, people facing censorship might be squeezed more and more into the torture of “talking to the hand.” A mechanical hand. The weaponization of language and automated interactions have been on my mind a lot. In my interview with Dr. Bruce Dooley, we discussed the weaponization of the word “harm” in medicine by the Federation of State Medical Boards. In my 2022 article, “Who is the Terrorist?” I wrote about the ideas about harm and disinformation put forward by the DHS last year. And a few years before COVID, I wrote an essay titled, “Love & Automation: The Creepy Touch of a Mechanical Mother”:
“The defenders of efficiency typically try to impose it on other people — while allowing themselves to remain just as human and free-roaming as they wish to be. The famous example is how the likes of Steve Jobs and Bill Gates limit their own children's access to technology while telling the rest of the world that technology is universally and ubiquitously amazing.

Factory owners gladly impose efficiency on the former independent craftsmen and their descendants. Office managers impose efficiency on the office slaves. Makers of software sell efficiency to the general population. But in their own shoes, the salesmen of efficiency like to look at the sky and smell the flowers.”
What Can Be Done With AI in the Legal Field?
As of this second, AI can possibly do the work that is usually done by paralegals and junior lawyers. It can customize template-based documents, such as contracts and form letters. It can go through vast piles of documents quickly and summarize the case for more senior lawyers to review. It can also write draft legal documents, citing the laws and the precedents that apply. According to a Brookings Institution review , in the legal field, AI can take care of some of the most time-consuming tasks:
“Consider one of the most time-consuming tasks in litigation: extracting structure, meaning, and salient information from an enormous set of documents produced during discovery. AI will vastly accelerate this process, doing work in seconds that without AI might take weeks. Or consider the drafting of motions to file with a court.

AI can be used to very quickly produce initial drafts, citing the relevant case law, advancing arguments, and rebutting (as well as anticipating) arguments advanced by opposing counsel. Human input will still be needed to produce the final draft, but the process will be much faster with AI.”

“More broadly, AI will make it much more efficient for attorneys to draft documents requiring a high degree of customization — a process that traditionally has consumed a significant amount of attorney time.

Examples include contracts, the many different types of documents that get filed with a court in litigation, responses to interrogatories, summaries for clients of recent developments in an ongoing legal matter, visual aids for use in trial, and pitches aimed at landing new clients.

AI could also be used during a trial to analyze a trial transcript in real time and provide input to attorneys that can help them choose which questions to ask witnesses.”
“DoNotPay”
One of the first popular AI legal tech products, branded as “the world's first robot lawyer,” was created in 2015 by Josh Browder, the founder of DoNotPay. That particular software was designed to help people write effective responses and letters to fight unjustly issued tickets, cancel subscriptions, and so on. Once the word got out, the company was able to receive significant investor funding. In January 2023, the company announced that their AI would act as an informal “attorney” in court, helping the client to fight a speeding ticket.

“A program trained with the help of artificial intelligence is set to help a defendant contest his case in a U.S. court next month … Instead of addressing the court, the program, which will run on a smartphone, will supply appropriate responses through an earpiece to the defendant, who can then use them in the courtroom.”

“Since this is the AI's very first case, DoNotPay is ready to take on the burden of punishment if the AI's advice does not help the client. Since it is a speeding ticket, DoNotPay will pay for the speeding ticket. If it wins though, it will have a massive victory to its credit.”

The AI was supposed to assist the client in court in February 2023. But then it didn't. And then the company was sued for “practicing law without a license.”

“The robot lawyer is facing a proposed class action lawsuit filed by Chicago-based law firm Edelson and published on the website of the Superior Court of the State of California for the County of San Francisco.”

The way it looks to me, the people-facing, cheaply priced version of AI is seen as a threat by the big boys, and so they want to put a stop to that. I mean, can you imagine a world in which lowly peasants use cheaply priced AI (fast calculator) products to benefit themselves — and possibly even without sending all their personal data to the Central Mother Ship? The outrage.
“Ross”
Another AI product, also branded as “the first artificially intelligent attorney,” was “Ross.” “Ross” was the AI product made by Ross Intelligence, a startup founded in 2014 by three Canadian students. The product was based on IBM's Watson series of AI products for business. Spoiler: the company was shut down in 2020. Here is from Futurism (2016):
“Law firm Baker & Hostetler has announced that they are employing IBM's AI Ross to handle their bankruptcy practice, which at the moment consists of nearly 50 lawyers. According to CEO and co-founder Andrew Arruda, other firms have also signed licenses with Ross, and they will also be making announcements shortly.”

“Ross, ‘the world's first artificially intelligent attorney' built on IBM's cognitive computer Watson, was designed to read and understand language, postulate hypotheses when asked questions, research, and then generate responses (along with references and citations) to back up its conclusions. Ross also learns from experience, gaining speed and knowledge the more you interact with it.”
But like I said, in 2020, Ross Intelligence was shut down. That seemingly had to do not with any underlying philosophical issues of using AI in the legal field — but with corporate competition. It was more about the fight of the mobs and the question of who gets to benefit from the potential dollar waterfall. The company's financing was blocked by a lawsuit filed by Thomson Reuters, who claimed that Ross Intelligence used TR data to build their legal tech AI.
“ROSS Intelligence, a company that sought to innovate legal research through the use of artificial intelligence, and that helped to raise awareness of AI throughout the legal industry, is shutting down its operations, as a lawsuit against it by Thomson Reuters has crippled its ability to raise new financing or explore potential acquisition and left it without sufficient funds to operate.”

“Thomson Reuters sued ROSS in May, alleging that it stole content from Westlaw to build its own competing legal research product. ROSS did this, TR alleged, by “intentionally and knowingly” inducing the legal research and writing company LegalEase Solutions to use its Westlaw account to deliver Westlaw data to ROSS en masse.”
The Owner of the AI Sets the Tone
As usual, the devil is in the detail. If AI technology is used to help regular people accomplish tasks that have previously been out of reach for anyone who isn't rich — and without the abusive data offloading to the Central Mother Ship — it could be a useful thing. And this will be the selling point during the initial phases of the technology rollout. The bait is supposed to taste good; this is how it always works. But if a collective habit develops for the use of chatbots and coherent-sounding, language-producing fast calculators in our everyday professional lives, and once sufficient amounts of data have been scooped, the useful stuff will be moved behind a very hefty paywall. And perhaps, at that time, the middle class and even the upper middle class lawyers will get “deprecated,” just like many have been “deprecated” before them. Time will tell!
“Gradually Then Suddenly”: The Conquest
Let's talk about the concept of the bait. Life works in mysterious ways, and history works in long time spans. The attack on human agency and personality and on our relationship with nature and our own emotional richness started thousands of years ago. Today, we are dealing not just with the intentions and the incessant scamming of Klaus Schwab, the alphabets, and their owners upstairs — but also with the consequences of the scams put forth by the tyrants and tricksters of the past. Today, we are paying the price not only for the collective imperfect choices of the people underneath the boot of the big tyrants of today (including our own) — but also for the imperfect choices made by tyrant victims of the past, when some people were possibly startled, or bullied, or tricked, or bribed into accepting soul compromises of their time. And then they passed the compromise on to their children. And they passed it to their children. And so on. See how this works? Even on a smaller scale, it works “gradually then suddenly.” For example, the U.S. pandemic preparedness model that bit us all in 2020 was set in motion in the early 2000s, during the Bush administration. After being prepared, it sat there behind the bushes (a pun!) for 15 years, waiting to enter the stage. And then it entered the stage with a vengeance. And here we are!
The Condemnation of the Algorithmic Model
All these AI developments may be a compliment to the power of technology but they are also a condemnation of the state of our civilization. Remember the question I asked in the beginning? I asked, what is the point of our time on Earth? And what are we doing with our civilization if our lives are so mechanical that even a dumb fast calculator can do our “intellectual work” more efficiently than we do? What is our “work”? Did they — gradually and then suddenly — get us? Have our own “human” cognitive models and angles from which we analyze the world become so algorithmic that a stupid machine can outdo us at the “thinking” task? Not good. And speaking of law, there was a time in history — before our communities and individual people became invisible pawns under various dominating entities — when even “law and order” wasn't algorithmic but was based on the subjective and honest desire to do what's right in the spiritual sense. And I think on a local level, it still happens sometimes, but the “domination” mentality has poisoned our minds, and the algorithm was put in place to keep a semblance of goodness where the goodness had been undermined. I think that the existential role of the horrible Great Reset is to make that algorithmic poison so big, so obvious to our eyes that we simply can't be in denial anymore and have to rebel against the ancient abuse of our souls. Because we are more than a collection of formulas. We are more than algorithms. We are of spirit and water, we have souls, we are capable of navigating subjectivity and thriving at it, we can feel, touch, love, savor our relationships — and that is the point of us being here. The very feeling of being alive — the breath and the skin — is why we are here. And when someone is trying to further reduce us to pattern-matching machines, we have the full right to say, “No.” I would like to end this article with a quote from a sci-fi story that I wrote in 2019 (I only added a line about a pandemic to it in 2020):
“In order to gain control over the economy and human bodies, they needed to first gain control over people's thinking. So they created a strong push to shift all major human activities to the digital domain as digital footprints were initially much easier to track and monetize. They set up breadcrumbs and made the transition look like fun.

Simultaneously, they built strong relationships with some of the most influential citizens and organizations of the time. Tech leaders promised easy surveillance to law enforcement — and free access to education and entertainment to common citizens. Everybody thought they were getting a great deal!

They gave the people previously unseen opportunities to create new worlds — both on the developer side and on the user side — but nobody except the top execs knew that the new worlds came with hidden trackers and treacherous on-off switches that could be activated at any point.”

“Early warnings came from artists who figured out that their work was being used as a bait to attract people to tech platforms. But artistic types were not respected members of society, and their cries were drowned in optimistic speeches about the bright future of everything.

Then came the media. After news companies started crumbling and many journalists found themselves without an income, they realized that the game was rigged. But they, too, were swept out of the way. Some made a bargain and took tech funding, some became “gig economy” workers, and some learned how to code.

Then, at a critical point in time, there was a pandemic of some sort, and powerful technology leaders, including some of the IHT official saints, managed to use their influence on governments to legally mandate digitization of all aspects of life. It was then that unregulated human contact was made illegal and smart wearables and AI assistants became mandatory.”

“By the time lawyers, doctors, bankers, and government officials were personally impacted and practically enslaved on a massive scale, it was too late. Big Tech controlled every aspect of life, tracked everything, and funded every industry. It became the default law enforcement agency and the default news publisher, and thus it had the power to make or break any pundit, academic, or politician.

Everyone — from governments to low-level assistants-to-robots — depended on technology for every life function. Sex and baby permits required impeccable Digital Citizen Scores. No one could even get a low-level job without abiding by algorithms — and most jobs were automated anyway. Municipal councils owed money for smart city maintenance. The grip was total.”

“And while many felt instinctively uneasy about giving up privacy and cognitive autonomy, they also felt alone and helpless. Jobs outside of tech were scarce, competition was harsh, and very few had the luxury to even ponder the big picture. So people kept their heads low and did what they had to do to feed their families — complied, wore mandatory smart masks, and learned how to code if they were allowed.

Developers and other high-level tech industry workers preserved their financial independence and cognitive autonomy the longest — gated coder communities became a fixture on every smart urban hub — but eventually they, too, became obsolete, as AI grew sophisticated enough to produce itself.

Shortly after the institution of biologically compromised governance was deprecated, Big Tech became Interplanetary Holy Tech, and you know the rest.”
Does this sound familiar? If it does, now is a good time for all of us to ponder why we were born on Earth, why we are alive here and now, with all our gifts — and to follow through with the unique, brave and important tasks that we are here to do. With heart.
References:
- https://tessa.substack.com/about
- https://openai.com/blog/openai-and-microsoft-extend-partnership
- https://arxiv.org/abs/2303.08774
- https://www.law.com/legaltechnews/2023/03/17/latest-ai-chatgpt-successor-gpt-4-proffers-both-legal-promise-and-legal-perils/
- https://takecontrol.substack.com/p/mind-of-a-technocrat
- https://www.law.com/legaltechnews/2023/02/13/best-ways-to-use-generative-ai-in-your-law-practice/
- https://www.reuters.com/technology/what-is-generative-ai-technology-behind-openais-chatgpt-2023-03-17/
- https://tessa.substack.com/p/dr-bruce-dooley-fsmb
- https://tessa.substack.com/p/who
- https://tessafightsrobots.com/tessa-lena/love-automation-creepy-mechanical-mother/
- https://www.brookings.edu/blog/techtank/2023/03/20/how-ai-will-revolutionize-the-practice-of-law/
- https://www.law.cornell.edu/wex/discovery
- https://donotpay.com/
- https://www.forbes.com/sites/igorbosilkovski/2020/06/23/stanford-grad-who-created-the-worlds-first-robot-lawyer-raises-12-million-in-series-a/?sh=7ee49ebd3309
- https://interestingengineering.com/innovation/ai-defend-case-us
- https://www.cbsnews.com/news/robot-lawyer-wont-argue-court-jail-threats-do-not-pay/
- https://www.vanguardngr.com/2023/03/robot-lawyer-sued-for-practising-without-license-in-us/
- https://www.ibm.com/watson/products-services
- https://www.lawnext.com/2020/12/legal-research-company-ross-to-shut-down-under-pressure-of-thomson-reuters-lawsuit.html
- https://futurism.com/artificially-intelligent-lawyer-ross-hired-first-official-law-firm
- https://tessa.substack.com/p/deprecating-free-will-a-future-we