A photograph of a young mother holding a baby, walking through a war-damaged street in Ukraine

The technology battlefield

Associate professor Rebecca Crootof is on the front lines of considering new legal horizons as technologies evolve.


Professor Rebecca Crootof built her law career on a single question: “How can we make things better?” And the inspiration for this question arose from an unexpected source: science fiction paperbacks.

“I grew up reading everything I could lay my hands on, but particularly science fiction,” Crootof said. “My dad had all these 1950s and 1960s paperbacks up in the attic, and they would put a human being in a new situation and [we’d see them] still act like a human being — for better and for worse. I really liked thinking about humans in new situations.”

What began as a side interest wove its way through Crootof’s career. Her journey to becoming a technology law expert began in a classroom — not as a law student, but as a ninth-grade English teacher in a school confronting a history of racial discrimination. This work motivated her to join a civil rights nonprofit that addressed fair housing and fair employment issues.

“[At the nonprofit,] I kept running up against this J.D. ceiling, where I would develop a fair housing case, hand it over to the lawyers, and then they wouldn’t pay any attention to me afterwards — and probably entirely appropriately,” she said. “But I wanted to keep going with it. So I said, ‘OK, I’m going to go to law school.’”

Law school deepened Crootof’s understanding of civil and human rights. As her interest in human rights tangled with drone strikes and malicious cyberoperations, she noticed, “Oh, I’m back to science fiction world. This is all still about, ‘How do we keep being the best version of humans with crazy new technologies and new situations?’”

These science fiction scenarios focused Crootof on the need for technology law to create better outcomes for everybody. “How do we get better?” she came to ask. “How do we use law to get better results?”

A photo of professor Rebecca Crootof wearing a white blazer and black shirt. She is standing outside in front of a brick colonnade.
Photo by Gordon Schmidt
If your goal is minimizing civilian harm, which is one of my animating interests, just being able to say, ‘Accidents happen,’ is not an adequate answer.
Rebecca Crootof

WHERE LAW AND TECHNOLOGY MEET

“I say ‘technology law’ like it’s a thing,” Crootof said. “I’m working to make technology law a thing. It is currently not a recognized common subject taught in law schools. There are a number of people who teach AI and the law, internet law, and cyberlaw. But just like health law, just like energy law, just like the host of other recognized, established subjects, technology law is an area of the law with a distinct set of repeating problems, its own set of principles, methodologies, and neuroses that are worth distilling and recognizing as a stand-alone subject. And once you see the repeating regulatory patterns, you can apply these methodologies to any kind of technology to figure out how to best regulate it.”

Along with a course called Technology Law, Crootof teaches Torts as well as National Security and New Technologies, so it’s not surprising that she tends to focus on how things can go wrong. “All these new technologies allow for new harms and new types and kinds of accidents, either in ways that we’re not accustomed to or at a greater scale. [We also have to consider] the possibility of malfunction or unintentional misuse,” she said. This is particularly troublesome when it comes to the law of war, another of Crootof’s areas of expertise. “Looking at the international law of armed conflict, there’s no accountability mechanism for accidents. I’m trying to change that.

“But,” Crootof said, “criminal law requires intentionality, which is not why most accidents in war happen. Most accidents in war happen for more systemic reasons, not because any individual acted with criminal intent. And it’s tempting to say, ‘Well, it’s war. Accidents happen.’ But if your goal is minimizing civilian harm, which is one of my animating interests, just being able to say, ‘Accidents happen,’ is not an adequate answer.”

Crootof’s been pushing for what she calls a war torts regime that would explicitly create liability for states for these harms. “The question isn’t, ‘Which individual should be held liable?’ because that can lock us into individual criminal liability,” she said. “The question is, ‘What is the legal structure that’s going to minimize the chances of this harm happening?’ And that’s where tort law comes in. Tort law is all about creating incentives and creating deterrence. So let’s use law to create the right incentives for the state, as the state is the entity that’s best able to minimize the likelihood of the accident happening.”

One goal [of my teaching] is the essential: Impart knowledge. But I also want to share my excitement.
Crootof

EVERYDAY ACCIDENTS

For the majority of Americans, warfare is not our day-to-day. The everyday ways we brush up against the need for technology law are much more subtle than an active battlefield. While we’re increasingly aware of the many ways we are surveilled through the internet, Crootof’s research also digs into how that connectivity enables companies to hurt us physically.

“We already know of situations where a company wants to change its terms of service, maybe to allow them to sell more of your data that they collect,” she said. “They give you a little pop-up that says, ‘We’ve changed our terms of service,’ right? ‘Do you agree?’ And we all click ‘yes,’ and we agree to all sorts of things that we don’t know we’re agreeing to legally. But if we don’t click on that, we can’t keep using that service. So OK, maybe that’s Twitter. Fine. I’ll give up using that form of social media. But it might be the speaker set I bought for my house and installed in every room. And if I don’t click ‘yes,’ they’re like, ‘All right, we’re just going to turn off the service, which means you can’t use the physical speakers that you purchased.’ Sonos did this. This has already happened. In another case, it was a garage door opener that someone left a bad review about, and the CEO turned it off remotely.”

A photograph of a person holding up a sign
Photograph by Sipa via AP Images

Crootof’s interest was piqued by the idea that surveillance and remote deactivation of device features could be used for manipulation or punishment, especially when those acts lead to physical harm.

“Your new speakers no longer working is one type of problem. But if your garage door gets left open, your car could be stolen. Or maybe your garage is connected to your house and an intruder uses that point of access to enter your home. Under current law, it’s not clear the company would be liable for your harm, even though they caused it.”

Ford Motor Co. recently published a patent application that would allow it to repossess cars by activating autonomous driving capabilities. “If you’re leasing a car and you miss a payment, [the company] could drive it out of your driveway, perhaps to a public road where it’s easier to tow away or to a junkyard if they deem it not worth keeping,” she said. “And because I think about accidents, I’m wondering what happens when there’s a kid in the back seat of the car? What happens when a cat is sleeping behind the back wheel? What happens when that car that is being remotely repossessed hits me driving down the road? Who’s responsible?”


CROOTOF IN THE CLASSROOM

Crootof takes this mindset of critical curiosity into her classroom, where her students learn how to address these complex regulatory questions. She especially loves teaching first-semester 1Ls. “It’s incredibly cathartic for me to say, ‘I know this feels like too much. I know this feels impossible. But trust me, we are going to get there together.’ I almost dropped out of law school my first semester because I thought it wasn’t for me. I couldn’t understand how the old cases were relevant to what I wanted to do in the present. So I try to teach my students the things I had to figure out on my own.”

Crootof’s commitment to breaking down the “messiness and confusion” of law, as she put it, into a manageable curriculum has led her to a robust teaching philosophy. “One goal is the essential: Impart knowledge. But I also want to share my excitement. My favorite classes were the ones where the instructor was excited and invigorated by the subject. And it definitely helps that Richmond lets me teach the subjects I’m interested in.”

Then, there’s her way of creating a classroom that’s both challenging and inclusive. “I push my students to get very comfortable discussing disagreements about policy, but in a way where everybody feels like they have a voice. I want everybody feeling like even if every single person in the class disagrees with their policy choice, they are still respected.”

She models this from Day One. “I’m not going to pretend that I’m not biased. I’m open about my biases, and I invite students to acknowledge their own. When I teach, I’ll tell [them] very clearly what the law is, and in policy discussions, I’ll tell them if I think a law has good or bad results. When I think the law is wrong, I explain why, and when I do, I’ll also tell [them] the counterarguments in the most respectful manner I possibly can. I show them the ability to hold different understandings in one’s head at the same time; then I get my students to practice doing it themselves.

“With law, we can be proactive,” she tells both her students and her fellow academics. “We can think ahead about what we want. What world do we want to live in? And we can use law to create that.”