Algorithm Helps New York Decide Who Goes Free Before Trial

Wall Street Journal - September 20, 2020

Corinne Ramey

BROOKLYN, N.Y.—After Yunio Morla was charged with assaulting and choking his ex-girlfriend, a Brooklyn prosecutor asked a judge to hold him on $15,000 bail. His record included a 2007 felony conviction involving the same woman.

The defense attorney pointed to a sheet on the judge’s bench that held a potential key to his client’s freedom: the result of a new algorithm that had crunched Mr. Morla’s data and scored the likelihood the 41-year-old contractor would appear at his future court dates. His score was 25 out of 25.

“You have some scientific evidence that people in similar situations would make it to all their court dates,” Mr. Morla’s lawyer said at the November arraignment.

“He does have a perfect score,” said Judge Hilary Gingold. She released Mr. Morla without bail.

The algorithm is at the center of a real-world experiment New York City began late last year to help decide who stays in jail before a criminal trial and who goes free. The tool, the result of a $2.7 million, five-year process, was designed to bring about better decisions by the 200 or so judges who make snap determinations every day about people’s freedom.

The algorithm—typically called a “risk assessment,” though city officials prefer “release assessment”—is set to return to use later this month after a six-month hiatus because of the coronavirus pandemic.

The tool has been mostly well received, with preliminary data showing its recommendations aligned with the likelihood of defendants showing up for court. Some judges have said they understand the science behind the tool and are therefore likely to trust its recommendations.

Its return comes at a challenging time for the city’s criminal courts, which curtailed operations during the pandemic. There are now 41,000 pending cases, about 40% more than this time last year. Shootings and homicides are up. Judges have been conducting video arraignments without using the assessment and have become accustomed to making decisions without it.

Still, the tool could help alleviate backlogs and avoid warrants, said Aubrey Fox, executive director of the New York City Criminal Justice Agency, a pretrial-services nonprofit that administers the assessment and worked with the city to develop it.

“If anything, the courts are saying ‘We need your help in making sure people come back,’ ” he said.

Jurisdictions across the U.S. have long used algorithms to help make decisions about bail, classify inmates and sentence convicts. The city set out to build a new system to address a criticism of other models: that they recommended lockup for disproportionate numbers of young Black and Latino men. Many critics of the models say they were built using inherently biased data.

“You’re codifying these structural inequalities into the tool,” said Scott Levy, chief policy counsel at the Bronx Defenders, a public-defender organization. “That is particularly pernicious because you are doing it under the guise of science.”

The city’s effort began in 2014, shortly after Democratic Mayor Bill de Blasio took office espousing progressive policy goals, among them shrinking the city jail population, then about 11,000 people. Crime had reached historic lows. “It still seemed that there were a lot of people, too many people, going in for very low-level offenses,” said Elizabeth Glazer, director of the Mayor’s Office of Criminal Justice.

Judges were too conservative, she said, setting bail for defendants statistically likely to return to court.

To build the tool, New York City’s pretrial agency in 2017 hired two research firms and handed them records from more than 1.6 million arrests from 2009 through 2015.

Crime Lab, a University of Chicago research center, used statistical software to analyze the data. Among other things, it found that defendants who reported a phone number and address were more likely to show up for court appointments.

Some of Crime Lab’s results surprised its researchers. A question used in New York City’s old algorithm—“Do you expect someone to come to arraignment?”—turned out not to predict anything, said Gregory Stoddard, a senior research director at the firm. Neither did the question of whether a defendant had a job, once the analysis took into account other factors, including criminal history.

A recent bench warrant, an arrest order issued when a defendant fails to appear in court, did have predictive value: it indicated a defendant was less likely to appear in the future.
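That kind of analysis is commonly done with a regression. The Python sketch below is purely illustrative: the file, column names and data are invented, not Crime Lab’s, but it shows how a logistic regression can flag a factor, such as employment, that adds little predictive weight once criminal history and other features are accounted for.

```python
# Illustrative only: the file, column names and data are invented,
# not Crime Lab's actual dataset of more than 1.6 million arrests.
import pandas as pd
from sklearn.linear_model import LogisticRegression

df = pd.read_csv("arrests.csv")  # hypothetical extract of arrest records

features = [
    "reported_phone_number",  # 1 if the defendant gave a phone number
    "reported_address",       # 1 if the defendant gave an address
    "recent_bench_warrant",   # 1 if a bench warrant issued recently
    "has_job",                # employment status
    "prior_convictions",      # count of prior convictions
]
X, y = df[features], df["appeared_in_court"]

model = LogisticRegression(max_iter=1000).fit(X, y)

# A coefficient near zero suggests the feature adds little once the
# other factors are accounted for, as the study found for employment.
for name, coef in zip(features, model.coef_[0]):
    print(f"{name:>22}: {coef:+.3f}")
```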

The other firm, Luminosity Inc. of St. Petersburg, Fla., built 72 models, said director of data analytics Marie VanNostrand. It ultimately winnowed them to three.

Several months later, the firms compared conclusions—strikingly similar despite their different methods—and began working together to develop the final product. They presented the conclusions to a panel of researchers.

Among the debates was how to balance accuracy and fairness, said Ojmarrh Mitchell, a professor at Arizona State University who served on the panel. An accurate algorithm would do a good job of predicting whether defendants showed up, and thus whether to recommend release, Mr. Mitchell said, and a fair algorithm wouldn’t result in more release recommendations for white defendants than for others.

“There’s a trade-off between accuracy and fairness,” Mr. Mitchell said. “They wanted to maximize both. But maximizing one almost necessarily reduces the other.”
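That tension can be made concrete with two simple measures. In the hedged sketch below, with invented predictions and hypothetical group labels, accuracy is the share of correct appearance predictions and the parity gap is the spread in release-recommendation rates across demographic groups; tuning a model to improve one number typically worsens the other.

```python
import numpy as np

def accuracy(predicted_appear, did_appear):
    """Fraction of appearance predictions that were correct."""
    return np.mean(predicted_appear == did_appear)

def parity_gap(recommend_release, group):
    """Spread in release-recommendation rates across groups;
    0 means every group is recommended for release at the same rate."""
    rates = [recommend_release[group == g].mean()
             for g in np.unique(group)]
    return max(rates) - min(rates)

# Invented outputs for six hypothetical defendants.
predicted_appear  = np.array([1, 1, 0, 1, 0, 1])
did_appear        = np.array([1, 0, 0, 1, 1, 1])
recommend_release = np.array([1, 1, 0, 1, 0, 1])
group             = np.array(["A", "A", "B", "B", "B", "A"])

print("accuracy:  ", accuracy(predicted_appear, did_appear))  # 0.667
print("parity gap:", parity_gap(recommend_release, group))    # 0.667
```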

The advisers also debated policy questions, such as the level of risk at which the tool should or shouldn’t recommend release. For each defendant, the tool would provide a score and a recommendation for the judge.

More than a year of tinkering ensued. To decrease the differences in outcomes for different races and ethnicities, the researchers excluded data on low-level marijuana offenses and “theft of service,” mainly subway turnstile jumping. Removing the fare-beating and marijuana arrests narrowed the racial disparity in the tool’s recommendations by 0.4%, making it slightly fairer but a little less accurate.

Judges didn’t like it. “They said, ‘I was loving this, all this data-driven, evidence-based demonstration of what’s predictive. Now you’re putting a policy thumb on this,’ ” said Susan Sommer, general counsel at the Mayor’s Office of Criminal Justice.

The researchers added those charges back in.

In April 2019, New York state lawmakers threw a wrench into the process. They passed a law—some parts of which were rolled back in July—saying that for almost all misdemeanors and nonviolent felonies, judges can’t set bail and must release defendants.

The law forced the researchers to assess whether the tool they had devised still made accurate predictions, based on both the bail law and a separate law change that expunged past low-level marijuana convictions. After slight adjustments, they said it did.

The tool’s 25-point scoring is based on eight factors, weighted differently. Its creators projected it would recommend release without conditions for 86.7% of Black defendants, 89.7% of Hispanics and 90.7% of whites.
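The city hasn’t published the exact factors or weights, but the arithmetic the tool performs is a weighted sum: points for each factor a defendant satisfies are added into a score from 0 to 25. The sketch below is a hypothetical illustration; the factor names and point values are invented, not the assessment’s actual ones.

```python
# Hypothetical sketch of a weighted point score. The real tool uses
# eight weighted factors summing to a 0-25 scale; these factor names
# and point values are invented for illustration.
FACTOR_POINTS = {
    "reported_phone_number":      4,
    "stable_address":             4,  # e.g., years at past two addresses
    "no_recent_bench_warrant":    4,
    "no_open_cases":              3,
    "no_recent_convictions":      3,
    "no_prior_failure_to_appear": 3,
    "age_over_threshold":         2,
    "prior_court_appearances":    2,
}
assert sum(FACTOR_POINTS.values()) == 25  # scores run from 0 to 25

def release_score(answers: dict) -> int:
    """Sum points for each factor the defendant satisfies."""
    return sum(pts for factor, pts in FACTOR_POINTS.items()
               if answers.get(factor, False))

# A defendant satisfying every factor gets the maximum score,
# the "perfect score" of 25 that Mr. Morla received.
print(release_score({f: True for f in FACTOR_POINTS}))  # 25
```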

This is far different from the city’s prior tool, last updated in 2003. That one called for releasing just 34% of defendants without conditions—30.8% of Blacks and 39.5% of whites.

Data released this month show the new tool’s recommendations didn’t have such racial disparities. From Nov. 12 through March 17, the algorithm recommended releasing without conditions 83.9% of Black defendants, 83.5% of white defendants and 85.8% of Hispanic defendants. Defendants with higher scores returned to court more often than those with lower scores, a sign the algorithm’s predictions were accurate.
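Verifying that pattern is essentially a calibration check: bucket defendants by score and compare each bucket’s actual return rate. A minimal sketch with invented outcome data:

```python
import pandas as pd

# Invented data: one row per defendant, with an assessment score and
# whether the person returned to court. Real figures would come from
# the agency's case records.
df = pd.DataFrame({
    "score":    [25, 25, 22, 18, 18, 14, 10, 10, 6, 6],
    "returned": [1,  1,  1,  1,  0,  1,  0,  1,  0, 0],
})

# Bucket the scores and compare return rates. A well-calibrated tool
# shows higher return rates in the higher buckets, which is what the
# city's preliminary data showed.
df["bucket"] = pd.cut(df["score"], bins=[0, 12, 18, 25])
print(df.groupby("bucket", observed=True)["returned"].mean())
```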

In the case of Mr. Morla, the man who was accused of assaulting his ex-girlfriend and had a past felony yet scored a perfect 25, the algorithm took into account that he had a working phone number and had lived at the same place for four decades. It didn’t weigh his previous conviction, which was for assaulting the mother of his two children, because that occurred more than a dozen years ago.

After his release, Mr. Morla did return to court. His case has since been dismissed. The Legal Aid Society, which represented him, declined to comment.

While most of the tool’s data derive from records held by various city and state agencies, two critical pieces—whether the defendant reports a phone number and the amount of time living at his or her past two addresses—depend on interviews conducted before arraignment.

One afternoon in November, recently arrested defendants crowded into a holding cell in the basement of Brooklyn Criminal Court, waiting to be arraigned. Many looked exhausted and scared.

One by one, they entered a locked booth, where on the other side of a plexiglass window an employee of the New York City Criminal Justice Agency asked questions that could affect their score.

A man wearing a green hoodie balked when the staffer asked for a phone number.

The staffer, knowing the man’s liberty could be on the line, tried to coax it out of him. “It would be really helpful,” he said gently, “if you would give me a contact so the judge can make a better recommendation that you be released, sir.” The man soon gave his phone number.