Robots are learning our worst human biases

Co-authored by Marc Kielburger

Brisha Borden was walking through her suburban Florida neighborhood when she spotted an unlocked bike. She took it for a block-long joy ride before dropping it.

It was too late. The cops were already on their way.

Charged with petty theft, the 18-year-old might have been let off with a warning. Instead, when her file was run through state software designed to predict recidivism rates, Borden was rated high-risk and her bond was set at $1,000.

She didn’t have an adult criminal record. Algorithms predicted her likelihood of reoffending based on her race: Borden is black.

Will a machine dispense blind justice, or can robots be racist?

Since the early 2000s, various state courts have used computer programs and machine learning to inform decisions on bail and sentencing. On paper, this makes sense: with prison populations ballooning across the US, artificial intelligence promises to take human bias out of judgements and create a fairer legal system, at least in theory.

Examining 7,000 risk assessments, ProPublica concluded that the programs mistakenly flagged black defendants as future criminals. Even after controlling for factors like criminal history, age and gender, black defendants were still 77 per cent more likely than white defendants to be labelled at high risk of violent crime.

“We like to think that computers will save us,” says software producer and diversity advocate Shana Bryant. “But we seem to forget that algorithms are written by humans.”

Even code is embedded with social bias.

“The main ingredient [in artificial intelligence] is data,” explains Parinaz Sobhani, director of machine learning for Georgian Partners. The more data an algorithm is fed, the more precise its patterns and predictions become.

“The question is, where is the data coming from?”

We are at the dawn of the age of artificial intelligence. And to make sure machines don’t mimic society’s implicit prejudices, we need people from all backgrounds coding them.

Borden’s case of algorithmic injustice is just one example. Machine learning is heralded as the future of everything from policing to healthcare.

But making fair machines depends on our ability to supply fair data. In state prisons, for instance, African Americans are incarcerated at five times the rate of white people. If we don’t address these kinds of systemic failings, we can’t expect machines trained on the same data to fix them.
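To see how that happens, consider a deliberately simplified sketch. This is not the software used in Borden’s case, whose inner workings are proprietary; every name and number below is invented for illustration. The point is that if one group has historically been policed more heavily, the arrest records used as training labels, and features like “prior arrests,” both carry that disparity, and a model will reproduce it even when race is never given as an input.

    # Hypothetical illustration: biased training labels produce biased "risk" scores.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000
    group = rng.integers(0, 2, n)            # 0 = group A, 1 = group B
    true_reoffend = rng.random(n) < 0.30     # identical 30% true reoffense rate for both groups

    # Historical records: group B is policed more heavily, so the re-arrests we
    # actually observe (our training labels) are recorded more often for the same behaviour.
    rearrest_prob = np.where(group == 1, 0.9, 0.5)
    label = true_reoffend & (rng.random(n) < rearrest_prob)

    # The "prior arrests" feature inherits the same enforcement disparity.
    priors = rng.poisson(lam=np.where(group == 1, 2.0, 1.0))

    X = np.column_stack([priors])            # note: race is never used as a feature
    model = LogisticRegression().fit(X, label)

    risk = model.predict_proba(X)[:, 1]
    print("mean predicted risk, group A:", risk[group == 0].mean().round(3))
    print("mean predicted risk, group B:", risk[group == 1].mean().round(3))
    # Group B is scored as higher risk, even though both groups reoffend at the same rate.

The model is doing exactly what it was asked to do: learn the patterns in the data it was given. The prejudice arrives with the data.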

Socially corrupted data was behind the spectacular failure of early image-recognition software from Google that categorized black people as gorillas. The program, meant to sort photos by their subjects, had been tested exclusively on white people. The tech sector, despite many efforts to the contrary, remains overwhelmingly white and male.

“If we don’t have a diverse group of people building technology, it will only serve a very small percentage of people: those who built it,” explains Melissa Sariffodeen, co-founder and CEO of the non-profit Ladies Learning Code.

That’s why questions about who gets hired in the tech sector are about more than equality in the workforce.

“We are at a nexus point,” explains Bryant. If we don’t prioritize diverse voices in these emerging technologies, the future will have robots—but no less prejudice.

Craig and Marc Kielburger are the co-founders of the WE movement, which includes WE Charity, ME to WE Social Enterprise and WE Day. For more dispatches from WE, check out WE Stories.
