Artificial intelligence (AI) and machine learning tools are already disrupting other professions. Journalists are concerned about automation being used to produce basic news and weather reports. Retail staff, financial workers and some healthcare staff are also at risk, according to the US public policy research organization Brookings.
However, it may come as a surprise to learn that Brookings also reports that lawyers have a 38% chance of being replaced by AI services soon. AI is already being used for paralegal work: due diligence, basic research and billing services. A growing number of AI-based legal platforms are available to assist with contract work, case research and other time-consuming but important back-office legal functions. These platforms include LawGeex, RAVN and the IBM Watson-based ROSS Intelligence.
While these platforms may threaten lower-end legal positions, they also free up lawyers to spend more time analyzing results, thinking, and advising their clients with deeper research to hand. Jobs may well be added as law firms seek to hire AI specialists to develop in-house applications.
What about introducing AI into the criminal justice system? This is where the picture becomes more complicated and raises ethical questions. There are those who advocate using AI to select potential jurors. They argue that AI could gather data about jurors, including accident history, whether they have served before and the verdicts of those trials, and, perhaps more controversially, a juror’s political affiliations. AI could also be used to analyze facial reactions and body language to gauge how a potential juror feels about an issue, revealing a positive or negative bias. Proponents of AI in jury selection say it could optimize this process, facilitating greater fairness.
Others worry that rushing into such usage might have the opposite effect. Song Richardson, Dean of the University of California-Irvine School of Law, says that people often view AI and algorithms as objective without considering the origins of the data used in the machine-learning process. “Biased data is going to lead to biased AI. When training people for the legal profession, we need to help future lawyers and judges understand how AI works and its implications in our field,” she told Forbes magazine.
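Richardson’s point can be shown with a minimal, purely hypothetical sketch in Python (the groups, thresholds and data below are invented for illustration, not drawn from any real legal dataset): a model trained on historically biased decisions reproduces that bias, even when the underlying merits of the cases are identical.

```python
# Hypothetical illustration: bias in training data carries through to a model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" records: a group label (0 or 1) and a merit score
# whose distribution is identical for both groups.
group = rng.integers(0, 2, size=n)
merit = rng.normal(size=n)

# Past decisions were biased: group 1 needed a much higher score to be approved.
threshold = np.where(group == 1, 1.0, 0.0)
approved = (merit > threshold).astype(int)

# Train on the biased outcomes, with group membership available as a feature
# (in real data, proxies such as postcode can play the same role).
X = np.column_stack([merit, group])
model = LogisticRegression().fit(X, approved)

# Score new, identical applicants who differ only in group membership.
test_merit = np.full(1000, 0.5)
for g in (0, 1):
    X_test = np.column_stack([test_merit, np.full(1000, g)])
    rate = model.predict(X_test).mean()
    print(f"approval rate for group {g}: {rate:.2f}")
```

The model has simply learned the prejudice baked into its training labels, which is exactly the concern when historical case outcomes are used to train legal AI.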
A good example would be autonomous vehicles. Where does the legal blame lie for an accident: with the driver, the car company, the software vendor or another third party? These are questions best answered by human legal experts who understand the impact of AI and IoT on our changing society.
Perhaps a good way to illustrate the difference between human and AI thinking is the game of Go. AI now usually wins because, while it plays according to the formal rules of Go, it does so in ways no human player would ever choose.
If AI oversaw justice, it might very well “play by the rules” too, but that could mean a strict interpretation of the law in every case, with no room for the nuance and discretion that experienced human lawyers and judges bring. Our jails might fill up very quickly!
Assessing guilt or innocence, cause and motive in criminal cases requires empathy and instinct as well as experience, something only humans can provide. At the same time, it is not unknown for skilled lawyers to win acquittals for guilty parties through their own charisma, theatrics and the resources available to them. Greater involvement of AI could lead to a more fact-based and logical criminal justice system, but it is unlikely that robots will take the place of prosecution or defence lawyers in a courtroom. At some point AI may well be used in court, but its reasoning would still have to be weighed and checked, with a tool such as IBM Watson OpenScale, to validate its results.
For the foreseeable future, AI in the legal environment is best used to enhance research, and even then we should not trust it blindly, but understand what it is doing, whether its results are valid and, as far as possible, how they are reached.
The wider ethical debate around AI in law should not prevent us from using it right now in those areas where it will bring immediate benefit and open up new legal services and applications. Today, AI could already benefit those seeking legal help. Time-saving, AI-based research tools will drive down the cost of legal services, making them accessible to those on lower incomes. It is not hard to envisage AI-driven, cloud-based legal services that provide advice to consumers without any human involvement, whether from startups or as add-ons to traditional legal firms.
For now, the impact of AI on the legal profession is undeniably positive where it reduces costs and frees up lawyers to do more thinking and communicating with clients. With further development, it may soon play a higher-level role in legal environments, in tandem with human legal experts.