Why is AI absent from legal decision-making?

It can reason, plan, learn, and even undertake many impressive feats of creativity; some misleadingly claim it’s sentient, to boot. So, why is this magic solution still under-utilised rather than being applied in areas where we are failing and could do with an innovative angle?

Dora Cee
UX Collective


We tend to focus on AI-spurred disruption in manufacturing and celebrate the productivity gains brought by automation. Yet services, such as those provided by legal bodies and firms, are often ignored when we consider what advanced machines could bring to the table.

At the end of the day, penning and upholding the law seems to be a role meant for humans. Surely, we also know better how to apply our hand-crafted rulebooks to ourselves, without technology’s assistance? Perhaps it’s a much more nuanced discussion, but one still not given enough attention.

Woman holding up scales with a copy of the Universal Declaration of Human Rights laid out next to her.
Image by storyset on Freepik

For the record, I really got into the weeds here, and I appreciate not all of you may share my enthusiasm for investigating the topic from multiple angles. With that in mind, here is a handy set of bookmarks linking to specific sections and addressing relevant questions:

Legal and human rights issues of AI
AI in the private legal sector: traditional barriers
Trade-offs and opportunities for private firms
Implementing AI and UX laws in public & government areas
The takeaway: food for thought

Where there’s smoke — oh wait, we are on fire

I don’t know about you, but from where I’m standing, it seems to be a good time for our computer overlords to start chiming in. Too many people in power just don’t seem to be very good at being rational these days, to the point that constitutional changes can revolve around a single dogmatic belief, casually turning a blind eye to in-built contradictions and even reason. This human-led machine seems to be broken, and at this point, whatever virus is spreading through its veins might be too much for us mere mortals to handle all by ourselves.

If only we could introduce an impartial helper into the equation: something capable of learning, and of making sense of data more efficiently than we can. Imagine if we had some form of technology to aid us in this chaos. That, of course, we do. Artificial Intelligence and machine learning are wonderful inventions, and will undoubtedly outsmart us in due course.

Does this sound a bit frightening? There is a silver lining to consider: if we can make room for AI in our legal frameworks, perhaps a significant share of biased human error could be curbed. But first, let’s consider a few risks, so we know what we are up against.

Human rights and human… wrongs?

Before diving in, we can’t ignore the underlying problems which we (of Team Human) create for ourselves. In their State of AI in 2021 report, McKinsey point to cybersecurity risks as the driving concern in the implementation of AI, although, on a positive note, this concern has declined from the previous year.

A few other matters that organisations consider relevant (and challenging) are:

  • regulatory compliance,
  • the ability to explain how AI models make decisions,
  • privacy issues,
  • impact on reputation,
  • as well as equity and fairness.

AI risks that organisations considered relevant in 2021, shown as the share of respondents in emerging vs. developed economies:

  • cybersecurity: 47% vs. 57%,
  • regulatory compliance: 40% vs. 50%,
  • explainability: 34% vs. 44%,
  • personal/individual privacy: 45% vs. 41%,
  • organisational reputation: 24% vs. 37%,
  • equity and fairness: 30% vs. 30%,
  • workforce/labour displacement: 31% vs. 24%.

The state of AI in 2021 by McKinsey; check out their report for a full list of the concerns mentioned.

Moving on to a more law-specific context: in a 2020 paper, researcher Rowena Rodrigues highlighted legal and human rights issues that could stem from the use of AI. Below are some of the areas that could be negatively impacted:

1) A lack of algorithmic transparency could affect fair trial and due process, social rights and access to public services, rights to free elections, and more. This could play out in people being denied jobs and benefits, or refused loans, to name just a couple of examples, all due to poor system design, regulation, and models.

2) On the topic of poorly designed or secured technology, cybersecurity vulnerabilities could have an impact on the right to privacy, freedom of expression and the free flow of information.

3) Concern around unfairness, bias, and discrimination also signals a potential shortcoming. Think along the lines of equality and equal protection before the law, right to fair trial, and prohibiting discrimination on the basis of disability. In the wrong hands, the opportunities for wrongdoings could be endless.

4) Intellectual property issues could touch on anything from owning property alone or with others, to participating in the cultural life of the community, to protecting your creations and inventions.

5) There could also be an adverse effect on workers: for example, on their right to social security, to favourable work conditions, and to freely choosing one’s occupation, as well as on the rights of people with disabilities to work on an equal basis with others.

Other problems could arise from:

  • the handling of privacy and data protection,
  • restricted access to justice and a lack of contestability,
  • a lack of accountability for harms,
  • and liability issues related to damage caused.

For a more detailed breakdown of these issues and others, you can read the full research article here.

A group of lawyers gathered around a desk at a law firm.
Image by storyset on Freepik

Rooted (and decaying) in tradition

Up next, let’s address the dinosaur in the room. My summary here is based on an article published in 2020 in the Cambridge Journal of Regions, Economy and Society, in which the authors interviewed professionals in the field to gauge current roadblocks.

The (private) legal services sector is shaped by custom and tradition — there is a clear hierarchy and ladder that needs to be climbed to get from associate to the coveted partner status. The resulting ingrained problem is multi-faceted.

First, there is little incentive to challenge the status quo and go through the hassle of adopting new technologies. The all-too-often conservative and risk-averse attitude of those in top positions leads to responding to client needs rather than anticipating them. This adds up to delays in “getting with the times” and generally lagging behind on anything that seems foreign, disruptive and uncomfortable. In other words, the fewer changes, the merrier.

Then there is an issue hiding beneath the surface, one that also runs deep in other power-wielding areas: the time horizons of senior lawyers and the business don’t align. Those who are comfortable with current practices and standards generally won’t be keen on rocking the boat if they will be retiring soon and won’t be affected in the long run. Much like in politics, a personal agenda can outweigh noble changes that could serve a greater good.

Though current practices are outdated and even clients are getting impatient, practitioners in the field are wary not only of the technology itself, but also of the digital skill gap that has gone unaddressed for so long. Another problem is the general confusion around what differentiates AI from simple automation, which makes the benefits of the former seem all the fuzzier.

Finally, there is a financial bottom line that makes professionals drag their feet.

Balancing the scales of trade-offs and opportunities

It shouldn’t be too surprising that clients are becoming increasingly enamoured with the idea of tech adoption. Who wouldn’t love to hear the terms “faster turnaround” and “lower prices” put together? This is exactly the case in legal territory.

Consider this: where the billable-hours approach dominates, what happens when you get the same output and quality within a much shorter timeframe? Obviously, you will see a dip in profits, in return for automating or outsourcing more mundane, labour-intensive tasks to the giant robot.

Whilst this can ring alarm bells for the overall business structure, there are ways to even out the scales. For one, AI tech is likelier to augment and aid functions rather than replace them. Where more junior associates were battling towering piles of paper, the same workforce can be redeployed to look after more “thoughtful” errands and oversee the bigger picture, whilst AI assists with the administrative tasks that cannot be fully automated.

Stepping back into the zone of potential discomfort and going by this logic, law firms will then be expected to become more linear, replacing and challenging their current economic food-chain model. Meanwhile, the creation of new roles will be necessary to bridge the chasm between fresh tech and those who are not quite digitally inclined.

Roles such as legal analysts and legal knowledge engineers can also serve to structure the amounts of data legal services are already sitting on. There is a general air of worry around how to handle this under GDPR, whilst also keeping potential cyber risks or data breaches in mind. For international businesses, complying with different data protection laws in multiple countries poses a further concern. Perhaps by reducing admin-based cognitive load, these mental energies can be shifted towards solving such puzzles instead.

Engineer securing data on laptop.
Image by storyset on Freepik

Going public & enlisting the laws of UX

Venturing into constitutional and government territory, it is worth kicking off by mentioning that outdated work practices and systems are not just a problem in the private sector. In 2016, the Centre for Public Impact reported that 75 percent of the $90 billion spent on technology was invested in maintaining legacy systems.

Here too, AI could serve to reduce administrative workload, resolve resource allocation problems, and take on increasingly complex tasks. This could all help free up time and replace manual labour, leaving employees to focus on the more “human” side of interactions. Nonetheless, the same privacy, security, and ethical concerns remain.

There are six strategies for implementing and applying AI, recommended in a 2017 paper by the Harvard Ash Center for Democratic Governance and Innovation. I may have bolstered these with a view through UX-tinted glasses, but the guidelines remain:

1. Augment employees, do not replace them.

Introducing this shiny new tech could lead to new roles created within the public sector related to its development and supervision.

Since AI works better in collaboration with humans, incorporating it in governments should be a way to augment human work rather than replace it. Nonetheless, fair labour laws should be updated in preparation for expected shifts in the workforce, in this case amongst civil servants.

2. Make AI part of a goals-based, citizen-centric program.

Breaking down a problem into its “what, why and how” components helps define solutions and whether AI is the right answer overall. Just like any other tech, it is a tool to assist us, but forcing it in merely to earn a trendy badge and a round of applause is not the message here.

McKinsey recommend that governments put the emphasis on the citizen’s entire end-to-end journey rather than focusing on individual “touchpoints” in their interactions. They also highlight the general lack of data-driven insights, even where there is an awareness of underlying dissatisfaction with services.

In their research, they found three core needs that drove customer satisfaction:

  • “fast, simple and efficient processes,
  • the availability of online options for completing interactions,
  • and the transparency of information.”

This seems like a job for UX research & design, don’t you think? Let’s keep going — the paper also identified four criteria to prioritise changes.

First off, the reach (or the number of people benefitting from a service) must be identified. Next, its importance to overall satisfaction should be weighed (this they call resonance, but we could also go with desirability, really). Articulating a current performance baseline follows, to identify how well a service is doing right now. Finally, feasibility has to be considered, to check how readily and easily the government could implement changes to a service.
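
To make these four criteria a little more tangible, here is a minimal sketch of how they could be folded into a single priority score. To be clear, this is my own illustration: the service names, the 0-1 scales and the simple multiplication are assumptions, not anything prescribed by McKinsey or the Ash Center paper.

```python
from dataclasses import dataclass

@dataclass
class Service:
    name: str
    reach: float        # share of citizens touched by the service (0-1)
    resonance: float    # how much the service matters to overall satisfaction (0-1)
    baseline: float     # how well the service performs today (0-1)
    feasibility: float  # how easily the government could change it (0-1)

def priority(s: Service) -> float:
    """Crude score: large, resonant, feasible services that currently
    perform poorly float to the top of the backlog."""
    return s.reach * s.resonance * (1 - s.baseline) * s.feasibility

# Hypothetical services, purely for illustration
services = [
    Service("passport renewal", reach=0.6, resonance=0.7, baseline=0.4, feasibility=0.8),
    Service("benefit claims", reach=0.3, resonance=0.9, baseline=0.3, feasibility=0.5),
    Service("parking permits", reach=0.2, resonance=0.3, baseline=0.7, feasibility=0.9),
]

for s in sorted(services, key=priority, reverse=True):
    print(f"{s.name}: {priority(s):.2f}")
```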

3. Get citizen input.

In order to demystify AI, participatory conversations are necessary to educate both policymakers and society as a whole about its use, trade-offs and benefits.

With this knowledge in hand, people could also feel more empowered to address ethics and privacy issues, and even help co-create rules around the use of their data. User feedback is always a good way to bring awareness to pain points and concerns; it also helps prevent major flaws and blind spots from going unnoticed.

4. Build upon existing resources.

With the amount of research around AI and systems already being used, governments don’t have to start from scratch. Currently, nearly 75 percent of companies have integrated AI into their business strategies, and governments can take advantage of such advances.

Non-profits and research institutions also offer the public access to their findings and studies, so there is no need to reinvent the wheel. Government-funded AI research is also a common theme, which means there should be enough material to help structure an implementation strategy.

5. Be data-prepared, and tread carefully with privacy.

How data is collected and managed can quickly become a sensitive point. Governments need to consider what types of data they need, when it expires, and how information can be utilised to provide context for a specific person.

To earn people’s trust and alleviate privacy concerns, these procedures must be transparent and opting in should be a voluntary choice, not a requirement. Consent for using data should always be at the forefront and external datasets should not get mixed up with government sources.
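
As a purely illustrative sketch, with field names and rules that are my own assumptions rather than anything from the paper, an opt-in, purpose-bound consent check might look something like this:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ConsentRecord:
    citizen_id: str
    purpose: str      # what the data may be used for
    source: str       # "government" vs "external" datasets
    opted_in: bool    # opting in must be an explicit, voluntary choice
    expires: date     # data (and the consent to use it) should have a shelf life

def may_use(record: ConsentRecord, purpose: str, today: date) -> bool:
    """Only use data that was explicitly opted in, for the stated purpose,
    from a government-held source, and before it expires."""
    return (record.opted_in
            and record.purpose == purpose
            and record.source == "government"
            and today < record.expires)

# Hypothetical example, purely for illustration
record = ConsentRecord("c-001", "benefits eligibility", "government", True, date(2026, 1, 1))
print(may_use(record, "benefits eligibility", date(2025, 6, 1)))  # True
```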

It is also important to ensure that data is accurate, because algorithms could end up feeding off wrong information, thereby making error-prone decisions and undermining equality.
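
Again purely as an illustration, and with entirely hypothetical field names, a rough pre-training audit could flag incomplete records and show how outcomes are distributed across a sensitive attribute before any model ever sees the data:

```python
from collections import Counter

def audit_records(records, required_fields, group_field, label_field):
    """Very rough pre-training audit: flag incomplete records and report
    how outcomes are distributed across a sensitive group."""
    complete, incomplete = [], []
    for r in records:
        (complete if all(r.get(f) not in (None, "") for f in required_fields)
         else incomplete).append(r)

    outcome_by_group = Counter((r[group_field], r[label_field]) for r in complete)
    return {
        "total": len(records),
        "incomplete": len(incomplete),
        "outcome_by_group": dict(outcome_by_group),
    }

# Hypothetical loan-style records, purely illustrative
records = [
    {"id": 1, "region": "north", "decision": "approved", "income": 41000},
    {"id": 2, "region": "south", "decision": "denied", "income": None},
    {"id": 3, "region": "south", "decision": "denied", "income": 28000},
]
print(audit_records(records, ["income"], "region", "decision"))
```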

6. Mitigate ethical risks and avoid AI decision-making.

Depending on how AI is programmed or trained, it can be susceptible to bias; especially if the data inputs are corrupted.

Basically, it is a beast we tame by feeding it data. This, too, can become a problem if its dinners are subjectively selected or drawn from biased datasets. Even if we were simply making AI read so much legal jargon that it may as well earn a PhD in law, are we sure our current standards are objective and impartial? Hence the caveat: AI also needs to learn to spit its dinner back out when it comes across toxic waste.

AI researcher Matt Chessen recommended a new public policy profession specialising in machine learning and data science ethics to prevent in-built biases. As he writes, “technologists often, consciously or unconsciously, encode laws, policies, and virtues in decision-making machine learning systems.” Until we can guarantee these errors are wiped out, AI should be used for analysis and process improvement rather than outright decision-making.
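
To close the loop on that recommendation, here is a minimal, hypothetical sketch of what “analysis and process improvement rather than outright decision-making” could look like in practice: the model only ever returns advice, with a confidence score and a rationale, and a human makes and owns the final call. None of these names come from Chessen or the Ash Center paper; they are stand-ins for illustration.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Recommendation:
    label: str         # e.g. "likely eligible"
    confidence: float  # the model's own confidence, 0-1
    rationale: str     # the features or rules that drove the score

def review(case: dict,
           model: Callable[[dict], Recommendation],
           decide: Callable[[dict, Recommendation], str]) -> str:
    """The model only ever advises; the `decide` callable (a human reviewer)
    makes the final call with the rationale in front of them."""
    advice = model(case)
    return decide(case, advice)

# Hypothetical stand-ins, purely for illustration
def toy_model(case: dict) -> Recommendation:
    score = 0.9 if case.get("documents_complete") else 0.4
    return Recommendation("likely eligible", score, "documents_complete flag")

def human_decides(case: dict, advice: Recommendation) -> str:
    print(f"Model suggests '{advice.label}' ({advice.confidence:.0%}): {advice.rationale}")
    return input("Approve, deny, or escalate? ")

# review({"documents_complete": True}, toy_model, human_decides)
```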

Lady Justice holding scales in one hand and a sword in the other.
Image by storyset on Freepik

tl;dr — what’s the short answer?

All this being said, there are clearly multiple reasons for people feeling uncomfortable with AI becoming a legal crutch. A general distrust of technology, combined with the human rights and security concerns outlined above, makes for wobbly ground from the get-go.

Then there is the problem of traditional “power models” being more enticing than an equalised distribution. People also often don’t know enough about AI to make the most of it, or to understand how it can benefit or impact them, so conversations need to be facilitated for a better understanding.

Current systems will also need to undergo a major (perhaps challenging) overhaul to accommodate new technology, but for this, new types of professionals will have to be included in the process. A tip? To start with, look towards the AI/machine-learning crowd and invest in a UX team to bridge the gap between different generations and levels of tech-savviness.

AI is going through a Benjamin Button-style ageing process, whereas we… are categorically not. It’s high time to invest in changes that will benefit us long-term instead of pushing personal agendas and shady values.

Thanks for reading! ❤️

If you liked this post, follow me on Medium for more!
