For The Record

Illustration by Gordon Schmidt

A criminal defendant awaits a pretrial decision. He has been told he will be sent home either on bail, with conditions, or on his own recognizance.

The judge looks at the defendant before checking the computer-generated algorithm results on the screen in front of her. The “risk assessment algorithm” advises the judge as she makes her decision.

But how can the judge be certain the computer doesn’t have the same biases as the humans who created it? Can the judge really trust the algorithm? And if so, what’s to stop the court system from turning the decision over to AI?

This scenario was raised by Sharon Nelson, president of Sensei Enterprises, at the Richmond Journal of Law and Technology’s annual symposium in February. Each year, the symposium focuses on an area of cybersecurity; this year’s theme was Artificial Intelligence and the Law.

“Everyone left with a higher concern for where this technology is heading in the future,” said Ellie Faust, L’18, the annual survey and symposium editor. “Moving forward and getting into self-driving cars and cybersecurity systems, if the artificial intelligence can create its own algorithm, how are those going to be manipulated [by it] versus having a human in control?”

Another speaker, Ed Walters, CEO and co-founder of Fastcase, described AI’s prevalence in everything from Alexa to stock trading and airplane takeoffs, noting that many people don’t even realize they are using it. More complex examples, such as driverless cars and algorithms in courts, create ambiguities that are difficult to address.

“There are a lot of these black boxes, and do we really want to make societal decisions based on something we can’t see?” Nelson said. “I do think that we have the ethical responsibility as attorneys to make sure that if these systems are to be used in things like sentencing, that we understand how they operate and how they got biases out of the system — because they do seem to be biased.”

JOLT, the first law review in the world published exclusively online, is tackling a murky issue that many lawyers don’t yet know how to address. As the speakers repeated throughout the symposium, this area of the law is constantly evolving.

“Companies trying to hire technology lawyers or just experts in the field of AI feel that there are not enough people out there who truly understand the technology,” said Brian Kuhn, co-creator and co-leader of IBM Watson Legal. “Law is such an old profession, so to bring new attorneys into the field of technology and getting them up-to-date with laws and regulation is really important.”