Algorithmic Discrimination: The World as it is or as it Ought to be?
Machine learning and artificial intelligence are driven by numbers. Pure computation. Yet the rise of artificial intelligence and deep learning has raised concerns about the values embodied in technical systems. As Helen Nissenbaum noted in a 2001 “Computer” article, engineers cannot avoid the fact that the systems they create will invariably affect their users and, in some cases, society. Computational systems can and do have distinct values.
Few debates in society today reflect technology’s embodiment of values as clearly as the current discussion of algorithmic discrimination. This past summer, the New York Times published an article outlining how algorithms can discriminate. Researchers at Harvard, the article explains, found that a particular online advertisement related to arrest records showed up more often in search results for historically black names.
Most people I speak with about algorithmic discrimination have little familiarity with the concept, so I decided to run a quick, informal survey. While writing this blog post, I polled three of my roommates with the question: “Have you heard that algorithms or technological systems can discriminate?” None of them had heard of the idea before, so I showed them a series of pictures designed to help them understand. I collected all of the following pictures from Google searches conducted on October 4, 2015, and showed them three pairs of pictures in order. The test went something like this.
What do you notice about the differences between a search for the term “CEO” and a search for the term “secretary”?
What do you see when we compare searches for the term “doctor” with searches for the term “nurse”?
Finally, for the third comparison, what do you see in “rich kid” versus “poor kid”?
As you might imagine, all three respondents quickly grasped the concept. In the comparison of “CEO” to “secretary,” there is a clear gender divide. The same divide appears between searches for “doctor” and searches for “nurse.” The last example shows a racial and geographic divide: the search for “rich kid” returns mostly Caucasian children, while the search for “poor kid” returns poorly fed children from poorer countries. I encourage you to try these searches yourself.
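If you want to put a rough number on the skew you notice, here is a minimal Python sketch of how you might tally results you have labeled by hand. The counts below are hypothetical placeholders, not measurements from my October 4, 2015 searches; substitute whatever you actually observe.

    # Toy tally: quantify the gender split you see in the top image results.
    # The labels here are hypothetical placeholders -- replace them with your own hand-labeled results.
    from collections import Counter

    def share(labels):
        """Return the fraction of results carrying each label."""
        counts = Counter(labels)
        total = sum(counts.values())
        return {label: count / total for label, count in counts.items()}

    ceo = ["man"] * 17 + ["woman"] * 3          # hypothetical top-20 labels for "CEO"
    secretary = ["woman"] * 18 + ["man"] * 2    # hypothetical top-20 labels for "secretary"

    print("CEO:", share(ceo))                   # {'man': 0.85, 'woman': 0.15}
    print("secretary:", share(secretary))       # {'woman': 0.9, 'man': 0.1}

Even a crude count like this makes the pattern hard to dismiss as a fluke of the first few results.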
Even if these algorithms operate with perfect objectivity, a concept Lorraine Daston describes as functionally separated from human idiosyncrasies, they help solidify entrenched prejudice in society. A child who searches for “CEO” and doesn’t fit the (very narrow) mold of a tall white male may be discouraged, or may come to expect that CEOs are all white males. For adults, our unconscious prejudices are validated and reinforced when we see a collection of women associated with “nurse,” a reflection of social prejudice that has failed to catch up with the near gender parity in medical education for doctors.
It’s true that algorithms are “objective.” They reflect numbers, heuristics, and a form of “absolute” truth. Humans play no role in their processing beyond the initial engineering. But I believe that our most important systems should, as Nissenbaum suggests, reflect our own values. As engineers, politicians, businesspeople, and human beings, we should embed our values in the systems we construct. This will only become more necessary as artificial intelligence and machine learning play a greater role in our everyday lives.
Maybe the “pure computation” embodied by modern algorithms shouldn’t just reflect the world as it is. Maybe the next generation of algorithms should reflect the world as it ought to be. I believe our technical systems should embody the diversity, uniqueness, and equality that we, as a society, should value.