Would you accept being judged by AI in a court of law?

September 11, 2019


In spite of incidents of inaccuracy and bias, the idea of Artificial Intelligence (AI) court judges is starting to gain acceptance.

However, AI has a lot to learn before we allow it to judge our moral behavior.

Ganes Kesari, Co-Founder and Head of Analytics at Gramener, tells The Sociable that AI is not ready to make decisions on cases right now, and that even in the future it would be better suited to an assistant’s role in the courtroom.

Ganes Kesari

“Today, AI is more suited to play the role of a judicial assistant than that of a criminal judge. It is smart at processing details, summarizing cases and looking up references. It is not ready to take decisions on cases just as yet,” he says.

Imagine a criminal case dragging on for months or years as the judiciary investigates in order to deliver the most informed and unbiased judgment possible.

Now imagine the case handed over to an AI algorithm that scans everything related to the case within minutes and delivers a judgment based on what it finds.

Would you be comfortable being judged by an algorithm? Could smart justice become an accepted part of the judicial system?

Read More: A program to keep Prometheus out of machine learning systems

Most judicial systems are archaic, and society is seeking change. Could AI be the answer? Not yet, says Kesari.

“AI needs to acquire skills in ‘understanding’ context and interpreting scenarios.”

After all, who will check that the AI’s decisions are not flawed? How will we determine ethics for AI court judges, when the stakes are as high as sending a person to jail?

AI Robots in Use

China has already started making use of robots in courts, while pursuing a transition to smart justice. These robots retrieve case histories and past verdicts and even have specialties, such as commercial law or labor-related disputes.

The Estonian Ministry of Justice is designing a ‘robot judge’ to go through a backlog of small claims court disputes. The AI judge is programmed to come to a decision after analyzing legal documents and other relevant information.

These live examples might be helping speed up a process that is frustrating in almost every country. Yet, we do not truly trust AI judges to be in charge of something that can result in jail time or heavy fines.

For example, even though Chinese court robots are reducing the workload of court officials, the emphasis is still on helping, rather than replacing, judges.

“The application of artificial intelligence in the judicial realm can provide judges with splendid resources, but it can’t take the place of the judges’ expertise,” Zhou Qiang, head of the Supreme People’s Court and an advocate of smart systems, told the World Government Summit.

Similarly, in Estonia, all decisions made by the AI are to be reviewed by a human judge.

Bias Inheritance

In an ideal world, AI would be free of human bias. Unlike human decision making, a machine should be able to arrive at a decision by looking only at the facts. However, we already know of instances of human bias feeding into AI during training.

Read More: Technology’s impact on society reflects society’s impact on technology

“A lot of decision making with AI revolves around supervised learning, i.e. teaching models using historical patterns. This is where bias and ethical concerns stem from, since human history is anything but fair. When AI can think beyond historical patterns it may be ready for a promotion,” says Kesari.
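
To illustrate what Kesari describes, here is a minimal, hypothetical sketch using synthetic data and scikit-learn’s LogisticRegression (every name and number below is illustrative, not drawn from any real judicial system): a model trained on historical verdicts that were harsher toward one group reproduces that bias in its own predictions, even when the case facts are identical.

```python
# Hypothetical illustration with synthetic data: a supervised model trained on
# biased historical verdicts reproduces that bias in its own "judgments".
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, n)               # stand-in for a demographic attribute
severity = rng.normal(0.0, 1.0, n)          # stand-in for the actual case facts
# Historical verdicts were systematically harsher toward group 1,
# regardless of the facts of the case.
convicted = (severity + 1.5 * group + rng.normal(0.0, 0.5, n) > 1.0).astype(int)

X = np.column_stack([group, severity])
model = LogisticRegression().fit(X, convicted)

# Identical case facts, different group: the learned "judge" assigns
# a much higher conviction probability to group 1.
same_facts = np.array([[0, 0.0], [1, 0.0]])
print(model.predict_proba(same_facts)[:, 1])
```

The point is not the specific numbers but the mechanism: the model learns the historical pattern, bias included, and will keep applying it unless the training data or the features are explicitly corrected.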

AI — a Good Assistant

As an assistant, AI can pull old data from a sea of history far faster than a human, and probably without missing any of it. The Visual Analytics for Sense-making in CRiminal Intelligence analysis (VALCRI) project is an example.

A software tool, VALCRI helps investigators find related or relevant information across several criminal databases. It is already in use by police agencies in various countries, including Austria, Belgium, the UK, and Germany.

Crime prediction is another area where AI could help. Meng Jianzhu, former head of legal and political affairs at the Chinese Communist Party, told the World Government Summit that the Chinese government plans to use machine learning and data modelling to predict where crime and disorder may occur.

However, as AI worms its way into our judicial system, can we truly say that it can recognize good and evil by scanning data? Will we trust AI in the future to pass judgement on our behavior?

As technology progresses for good, its darker side grows alongside it. Couldn’t criminals learn to manipulate an algorithm so that it rules in their favor?

While it’s true that judicial systems in most countries are archaic, AI cannot yet be the answer. However, it can certainly help us get to the answer. After all, there is no denying the growth of the AI market, which is expected to reach $190.61 billion by 2025.

Even so, with the future bound to be AI-powered, it makes sense to use AI as a very thorough court secretary. As Kesari says:

“Even in the distant future, it would be a wise decision to build humans into the decision making loop, to make any decision reviews or overrides.”

Since we are still at the onset of allowing AI into our justice system, we can steer its course toward being bias-free and ethical, and keep it under our control rather than letting it control us.

Disclosure: This article includes a client of an Espacio portfolio company
