March 24, 2020
Automated speech recognition less accurate for blacks: study
The technology that powers the nation's leading automated speech recognition systems makes twice as many errors when interpreting words spoken by African Americans as when interpreting the same words spoken by whites, according to a new study by researchers at Stanford Engineering.
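Comparisons like "twice as many errors" are conventionally made using word error rate (WER): the word-level edit distance between a system's transcript and a human reference transcript, divided by the length of the reference. A minimal sketch of the metric, assuming the standard definition (the function name and example sentences below are illustrative, not taken from the study):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word error rate: minimum number of word substitutions, insertions,
    and deletions needed to turn the hypothesis into the reference,
    divided by the number of words in the reference."""
    ref = reference.split()
    hyp = hypothesis.split()
    # Classic dynamic-programming edit distance, computed over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution

    return d[len(ref)][len(hyp)] / len(ref)

# One dropped word out of four gives a WER of 0.25.
print(word_error_rate("he is going home", "he going home"))  # → 0.25
```

On this scale, "twice as many errors" means the average WER for one group of speakers is double that of the other.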
While the study focused exclusively on disparities between black and white Americans, similar problems could affect people who speak with regional and non-native-English accents, the researchers concluded.
If not addressed, this transcription gap could have serious consequences for people's careers and even their lives. Many companies now screen job applicants with automated online interviews that rely on speech recognition. Courts use the technology to help transcribe hearings. And for people who can't use their hands, speech recognition is essential for accessing computers.
The findings, published on March 23 in the journal Proceedings of the National Academy of Sciences, were based on tests of systems developed by Amazon, IBM, Google, Microsoft and Apple. The first four companies provide online speech recognition services for a fee, and the researchers ran their tests using those services. For the fifth, the researchers built a custom iOS application that ran tests using Apple's free speech recognition technology. The tests were conducted last spring, and the speech technologies may have been updated since then.

Doesn't this rather argue for better education? Instead of blaming the racist computers, shouldn't we admit we aren't educating children properly?