
By UIC Radio

Modern Artificial Intelligence and Algorithmic Systems: Inherently Racist By Design

The unrelenting presence and crisis of racism in the United States is no mystery—it’s loud and clear, especially in the recent age of the coronavirus pandemic. People of color have long been disproportionately targeted by police, convicted by legal systems, and neglected by the country’s healthcare system. What if I told you that racism is perpetuated not only by real people, but also by robots and machines? It might sound like a stretch to some, but an ever-growing body of research confirms that, yes, modern-day technology is one of the biggest culprits behind the perpetuation of racism today. The development and use of artificial intelligence (AI) in recent decades has revolutionized every facet of Western society, but it has simultaneously raised concerns about the unintended consequences it creates. AI relies heavily on deeply flawed systems of classification and standardization, which inevitably produce problematic results within a society structured by racism and white supremacy. Artificial intelligence and algorithms have been shown to be tainted by the same implicit racial biases that plague society, and they subsequently reinforce these biases through their application in a variety of systems, such as search engines, medical technology, facial recognition technology, and databases used by government agencies.

Geoffrey C. Bowker and Susan Leigh Star are two prominent scholars in the history of Science, Technology and Society (STS)—Star is an American sociologist with a focus in this discipline, while Bowker works as a professor of the study of computational systems. In “Sorting Things Out: Classification and Its Consequences,” Bowker and Star analyze classification and its consequences by identifying the ways in which classification systems permeate our lives and broader society. In their broader argument, classification systems are not inherently bad, but dangerous—“dangerous” in the sense that classification is an ethical choice with real-world consequences (Bowker and Star, 1999). For instance, the U.S. Immigration and Naturalization Service classified particular races and classes of people as “desirable” residents, and the end result was a quota system that favored middle-class people of Western European descent over people of African or South American descent. Safiya Noble, a renowned internet studies scholar, adds to this growing body of knowledge addressing the racism perpetuated by modern technology. In her publication, “Algorithms of Oppression: How Search Engines Reinforce Racism,” she actively rejects the notion that technologies are free from racism and prejudice by presenting a number of examples of algorithmic racism as it operates in search engines like Google. She argues that the realm of technology “belongs to Whites and reinforces problematic conceptions of African Americans” (Noble, 2018).

The work of Bacchini and Lorusso similarly explores this issue, but within the context of facial recognition technology and the ways in which it acts as an agent reinforcing racism. Their findings show that this technology, particularly as used by law enforcement agencies, relies on insufficient databases that disproportionately incriminate black people as suspects for a crime. Subsequently, an overwhelming number of people of color are “stopped, investigated, arrested, incarcerated and sentenced as a consequence of face recognition technology” (Bacchini and Lorusso, 2019). Facial recognition technology is flawed not only as used by law enforcement agencies, but also in its general use by everyday people like you and me. Results from a 2018 MIT study report that facial recognition systems identify lighter-skinned men nearly perfectly, with an error rate below 1%. By contrast, their ability to identify darker-skinned women is far less impressive, with an error rate as high as 34.7% (Buolamwini and Gebru, 2018).
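One reason disparities like this go unnoticed is that a single “overall accuracy” number can look reasonable while one group bears most of the failures. Here is a minimal sketch of that arithmetic, using made-up numbers loosely shaped like the gap above (this is hypothetical illustrative data, not the actual Gender Shades benchmark):

```python
def error_rate(predictions, labels):
    """Fraction of predictions that do not match the true labels."""
    wrong = sum(p != t for p, t in zip(predictions, labels))
    return wrong / len(labels)

# Hypothetical classifier output for two demographic groups of 100 people each.
group_a_labels = ["M"] * 100              # e.g. lighter-skinned men
group_a_preds = ["M"] * 99 + ["F"]        # 1 mistake  -> 1% error
group_b_labels = ["F"] * 100              # e.g. darker-skinned women
group_b_preds = ["F"] * 65 + ["M"] * 35   # 35 mistakes -> 35% error

overall = error_rate(group_a_preds + group_b_preds,
                     group_a_labels + group_b_labels)

print(f"overall error: {overall:.1%}")                              # 18.0%
print(f"group A error: {error_rate(group_a_preds, group_a_labels):.1%}")  # 1.0%
print(f"group B error: {error_rate(group_b_preds, group_b_labels):.1%}")  # 35.0%
```

The headline figure (18%) hides the fact that nearly all of the errors fall on one group, which is why audits like the 2018 MIT study report error rates broken out by subgroup rather than in aggregate.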

AI systems are incredibly brilliant in what they have allowed humankind to achieve in recent decades. They have contributed to the development of the coronavirus vaccine, self-driving cars, speech recognition and generation, and quantum computing. That being said, this critique of modern algorithmic and AI systems should not be mistaken for an argument against modern technology. Rather, it seeks to raise awareness that modern technology systems rely on insufficient or flawed databases that act as agents in the perpetuation of racism. Beyond the problem of racism itself, the near-invisibility of the issue makes spreading awareness that much more imperative. It is also important to understand that we have not hit a point of no return with no possible way of eliminating the racial bias and prejudice present in modern AI systems. For instance, black people are overrepresented in databases used by law enforcement agencies as a result of human bias and error, not because people of color are ‘inherently prone to committing crimes.’ People of color are disproportionately targeted by law enforcement agents as a result of the racial biases and prejudices held by the individuals responsible for enforcing the law. Facial recognition systems, and AI systems at large, are capable of being modified and improved; but eliminating racial discrimination from technology cannot be fully realized until we also work toward a society that is free of racism.


Bowker, G. C., & Star, S. L. (1999). Sorting Things Out: Classification and Its Consequences. MIT Press.

Noble, S. U. (2018). Algorithms of Oppression: How Search Engines Reinforce Racism. New York University Press.

Bacchini, F., & Lorusso, L. (2019). Race, Again: How Face Recognition Technology Reinforces Racial Discrimination. Journal of Information, Communication & Ethics in Society, 17(3), 321–335.

Buolamwini, J., & Gebru, T. (2018). Gender Shades: Intersectional Accuracy Disparities in Commercial Gender Classification. In Conference on Fairness, Accountability and Transparency (pp. 77–91). PMLR.
