AI in Law: A case for caution
By Aditya Panuganti
Winner of Diversity Essay Scholarship
Big Tech has been the vanguard of innovation in the twenty-first century, sweeping across industries and disrupting their business structures, compelling them to embrace change in order to expand their prospects. Artificial Intelligence, or AI, is part and parcel of this new technological revolution: its ambitious goals strive to redefine the world as we know it. While it is enticing to move straight to the opportunities, ethics, and fallout AI presents, it is worth first establishing a description of AI. As Jonah Wu observes in his paper, "AI is a technology that has the capacity to perceive knowledge, make sense of data, generate predictions or decisions, translate information, or otherwise simulate intelligent behaviour".
In this essay, I shall argue that AI does not mitigate bias, increase access to justice, or promote diversity in society, and I shall support this conclusion by reconciling advancements and research in the field with their implications for the Indian legal profession.
AI does not mitigate bias. AI's bedrock is data; AI systems are designed to arrive at conclusions and make decisions based on their datasets. Machine Learning (ML), a sub-field of AI, goes one step further and hones its results by 'learning' from the new data it encounters. Data becomes significant when discussing bias in AI systems: bias in the data inherently translates to bias in the AI's results. A dataset that is not inclusive, or that is partial to certain outcomes, will steer AI towards outputs that propagate this bias. For example, in 2014, tech giant Amazon began trialling AI to review job applications from prospective employees. The company soon realised that its algorithm discriminated against women applicants and preferred male applicants for the same position. The AI had based its decisions on previous years' applications, which had a significantly greater number of male applicants.
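The mechanism behind the Amazon episode can be made concrete with a toy sketch. The dataset and scoring rule below are invented purely for illustration (this is not Amazon's actual system): a naive "model" that scores applicants by historical hire rates will rank identical candidates differently simply because the historical data is skewed.

```python
# A toy illustration of how a skewed training set biases a naive
# "hiring" model. All numbers below are hypothetical.

# Historical applications as (gender, was_hired) pairs -- deliberately
# skewed, mirroring an applicant pool dominated by male hires.
history = (
    [("male", True)] * 80 + [("male", False)] * 20
    + [("female", True)] * 5 + [("female", False)] * 15
)

def hire_rate(records, gender):
    """Fraction of past applicants of this gender who were hired."""
    outcomes = [hired for g, hired in records if g == gender]
    return sum(outcomes) / len(outcomes)

def score(applicant_gender):
    """Naive model: an applicant's score is the historical hire rate
    for their gender -- the bias in the data becomes the prediction."""
    return hire_rate(history, applicant_gender)

# Two otherwise identical candidates receive very different scores
# purely because of the imbalance in the historical data.
print(score("male"))    # 0.8
print(score("female"))  # 0.25
```

Real recruiting systems are far more complex, but the failure mode is the same: the model faithfully reproduces whatever regularities, fair or unfair, its training data contains.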
AI does not increase access to justice. A study published in PNAS (Proceedings of the National Academy of Sciences of the United States of America) shows how incomplete or imbalanced datasets can decrease accuracy and skew the results of AI systems used in medical diagnosis. This is a cautionary tale for AI systems in India and its legal industry. The legal system, and the justice system in particular, is notorious for its lack of diversity across social groups. Currently, only 27.6% of judges in the lower judiciary are women, and the figure is about 10% for the High Courts, as reported by 'Bar and Bench'. By the end of 2019, 90% of undertrials in the country were not graduates and 28% were illiterate. As countries like Australia, the UK, and the US begin to use AI to find relevant information and case law, and to receive reports while judging bail applications, we must be wary of introducing these systems into our judiciary. The numbers above represent our datasets, and this is the data that any prospective AI will be privy to. Social reform and the economic prosperity of citizens have taken a beating during the pandemic, and the mechanisation of the judiciary might deprive those coming before the courts of Lady Justice's compassion when they need it most.
Tech innovators and tech recipients have great expectations of AI and the opportunities it seems to promise, and this is true of the legal industry as well. Products like CaseIQ, Josef Legal, and Ironclad are AI-driven tools built to slice through repetitive and labour-intensive legal work like contract review, discovery review, and case research. Some products go a step further and equip the common man to make sense of legalese in contracts, and to understand a contract's risks and obligations, potentially allowing a lawyer to be bypassed entirely. While some might herald this as a door opening upon justice, it is imperative to note that these products provide a legal service and not necessarily access to justice, much in the same way that food delivery apps like Swiggy provide a service and not food security. Although there is scope for such systems to be of use to the common man, India's diversity may be a setback. In India, with 22 recognised languages and a literacy rate of 74% (64.4% for women), the effect of such technology will be dampened. Further aggravating the problem is the sexist bias displayed by AI systems used in Natural Language Processing when translating between languages: there are instances of AI assigning pronouns to actions based on gender stereotypes.
Applications of AI in the judiciary present us with a question: can India's justice system justify its use of AI? India's adversarial system decides cases based on the evaluation and sufficiency of proof, and ensures the neutrality of the bench during proceedings. The judiciary must be able to account for the results and processes of its reasoning, and, by extension, for any AI it uses. How can it achieve this when the AI's creators themselves do not understand its reasoning? In a system where trial court orders are regularly overturned, and where High Courts themselves are sometimes backed into a corner by their own judgments, the introduction of AI might increase efficiency at the expense of justice.
AI does not promote diversity. In 2018, tech giant Facebook released a line of smart devices under the brand 'Portal'. Built as video chat devices augmented with AI, they revealed problems in their AI during prototype testing: the AI-powered camera tended to focus less on people with darker skin tones, sometimes ignoring them altogether. The AI's data underrepresented people of colour, and consequently the device had trouble recognising them. One must understand that AI is not a tool that deliberately contradicts or undermines human values. AI is simply unable to deliver acceptable results because of blemished datasets that function as a reflection of a blemished society. AI cannot use information it does not have, and it strives to provide results that conform with the data it does have. Such flaws are disturbing in an Indian context: a 2018 survey by IDIA of students at National Law Schools showed that only 3.8% were from Muslim families, while 88% were from Hindu households. The Indian legal profession is a zero-sum game stacked against the marginalised sections. AI in the legal profession may propagate these biases, and possibly set them in stone.
AI is plagued by biased and unrepresentative data, and by decision-making processes that its creators cannot explain, with results they sometimes cannot justify. There are AI success stories, and one cannot deny the endless possibilities it offers or the alluring glimpses of an exciting future. However, there is more than enough evidence to warrant caution and validate apprehension. AI is holding up a mirror to our world, indiscriminately confronting us with the problems in our society. AI is a door to a new world, but not necessarily a better one. AI today is efficacious and successful in augmenting human capabilities, both the desirable and the ugly. AI today cannot replace humans, and it must not, for the world is not perfect and AI could make it disastrously worse. The onus is still upon humans, and the systems of justice and democracy we cherish, to build an inclusive, peaceful, and just society.