A New Study of Self-Driving Technology Finds a Truly Stunning Flaw. (Really, It’s All Too Human)

The promise of artificial intelligence is twofold: It’s supposed to free us from life’s mundane routines and, ultimately, to offer slivers of improvement over what humans can do.

Repetitive motion? Boring thought processes? Check, AI will handle them.

Bigger numbers, more calculations, faster reactions than humans? That’s part of the promise.

But it turns out that AI-empowered applications can amplify our less attractive qualities rather than remedy them. That makes sense, since we humans are the ones doing the empowering.

Case in point: a new study out of the Georgia Institute of Technology, which finds that self-driving cars appear to be better at not hitting and killing people with lighter skin tones. Put differently, the algorithm involved arguably incorporates its programmers’ implicit racial bias.

It’s an early study, and not yet peer-reviewed, so take it with a grain of salt for now. But it found that, on average, AI models were about 5 percent less likely to identify (and thus avoid hitting) dark-skinned people than lighter-skinned people.

The theory behind this is that programmers are unintentionally including their biases — even selection biases that they don’t recognize — when they create algorithms.

As a Vox article reporting on this suggested: “Since algorithmic systems ‘learn’ from the examples they’re fed, if they don’t get enough examples of, say, black women during the learning stage, they’ll have a harder time recognizing them when deployed.”

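For readers who want to see that mechanism in miniature, here is a hypothetical sketch in Python. It trains a toy “pedestrian detector” (scikit-learn’s logistic regression on made-up features; the groups, shifts, and sample counts are all assumptions for illustration, not anything from the Georgia Tech study), first on data where one group is underrepresented, then on balanced data.

```python
# A deliberately tiny, hypothetical illustration of the mechanism the
# Vox quote describes. Everything here is synthetic: the "features,"
# the group labels, and the sample counts are assumptions for the demo.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def pedestrians(n, shift):
    # Synthetic feature vectors for pedestrians from one group; the
    # 'shift' stands in for group-dependent appearance differences.
    return rng.normal(loc=shift, scale=1.0, size=(n, 5))

def background(n):
    # Synthetic feature vectors for scenes with no pedestrian.
    return rng.normal(loc=0.0, scale=1.0, size=(n, 5))

def detection_rates(n_a, n_b):
    # Train a toy detector on n_a group-A and n_b group-B examples,
    # then measure how often each group is detected on fresh samples.
    X = np.vstack([
        pedestrians(n_a, shift=2.0),  # group A training examples
        pedestrians(n_b, shift=0.8),  # group B training examples
        background(1000),
    ])
    y = np.concatenate([np.ones(n_a + n_b), np.zeros(1000)])
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    rate_a = clf.predict(pedestrians(5000, shift=2.0)).mean()
    rate_b = clf.predict(pedestrians(5000, shift=0.8)).mean()
    return rate_a, rate_b

# Skewed training set: group B supplies only 10 percent of examples.
print("skewed   (900 A / 100 B):", detection_rates(900, 100))
# Balanced training set: group B's detection rate climbs noticeably.
print("balanced (500 A / 500 B):", detection_rates(500, 500))
```

The point is not the specific numbers but the shape of the result: the underrepresented group gets detected less often in the skewed run, and the gap narrows once the training data is balanced. The model is only as even-handed as the examples it learns from.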

We’ve seen this kind of story before. Amazon inadvertently created a hiring algorithm that was trained on 10 years of applicants’ resumes. Unfortunately, those resumes came overwhelmingly from male candidates, which taught the algorithm to value women’s applications less.

So, it’s not that self-driving cars are racist, or that automated hiring systems are sexist. Instead, let’s consider this a good, early opportunity to assess some of the real challenges of using AI, challenges we assumed would simply solve themselves.
