AI is already learning how to discriminate
What happens when robots take our jobs, take on military roles, or drive our vehicles? When we ask these questions about the rapidly expanding role of AI, we often overlook others, like the subject of a WEF paper released this week: how do we prevent artificial intelligence from discriminating against and marginalizing humans?
Machines are increasingly automating decisions. In New York City, for instance, machine learning systems have been used to decide where garbage gets collected, how many police officers to send to which neighborhoods, and whether a teacher should keep their job. These decision-making technologies raise questions just as urgent as those about robots and self-driving cars.
While using technology to automate decisions isn't a new practice, the nature of machine learning technology, with its ubiquity, complexity, exclusivity, and opacity, can amplify long-standing problems related to discrimination. We have already seen this happen: Google's photo-tagging feature mistakenly labeled Black people as gorillas. Predictive policing tools have been shown to amplify racial bias. And hiring platforms have prevented people with disabilities from getting jobs. The potential for machine learning systems to amplify discrimination is not going away on its own. Companies need to actively teach their technology not to discriminate.
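What that teaching looks like varies, but one common first step, not drawn from the WEF paper itself, is auditing a model's decisions for disparate impact across demographic groups. The sketch below is a minimal, hypothetical illustration in Python: the function names and toy data are invented for this example, and the 0.8 threshold reflects the "four-fifths" rule of thumb used in US employment law.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Compute the fraction of positive decisions per group.

    `decisions` is a list of (group, approved) pairs, where
    `approved` is True if the model made a positive decision.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in decisions:
        totals[group] += 1
        if approved:
            positives[group] += 1
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(decisions):
    """Ratio of the lowest group selection rate to the highest.

    A ratio below 0.8 fails the four-fifths rule of thumb and
    signals that the model's decisions warrant closer review.
    """
    rates = selection_rates(decisions)
    return min(rates.values()) / max(rates.values())

# Hypothetical audit data: (demographic group, model approved?)
audit = [("A", True)] * 60 + [("A", False)] * 40 \
      + [("B", True)] * 30 + [("B", False)] * 70

ratio = disparate_impact(audit)
print(f"Disparate impact ratio: {ratio:.2f}")  # 0.30 / 0.60 = 0.50
if ratio < 0.8:
    print("Potential disparate impact -- review before deployment.")
```

In practice, a check like this would run on held-out production decisions and be paired with other fairness metrics, since no single ratio captures every form of discrimination.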