
Racial Bias in AI: Are You Really Good on any MLK Boulevard with Google Maps?
One day, Allison Bland, a media specialist from Princeton University, was driving through Brooklyn when Google Maps told her to “turn right on Malcolm Ten Boulevard.” The technology had erroneously interpreted the “X” in the street name as the Roman numeral for ten rather than the street’s actual reference to the African-American revolutionary Malcolm X. It is a mundane error, but one that reveals a great deal about the priorities of the program’s engineers. It raises the questions: who are the engineers behind this programming, and what values are being coded into the things we use daily? We need to ensure that technology is crafted by people from a wide variety of backgrounds, so that the norms embedded in its architecture do not have disastrous effects on our daily interactions with it.
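To see how an error like this can arise, consider a minimal sketch of a text-to-speech normalization rule. This is a hypothetical illustration, not Google's actual pipeline: a rule that expands standalone Roman numerals to words, with no awareness that “X” can be part of a proper name.

```python
# A naive normalization rule (hypothetical sketch, not Google's actual code):
# expand tokens that look like Roman numerals into number words.
ROMAN_WORDS = {"I": "One", "II": "Two", "III": "Three", "IV": "Four",
               "V": "Five", "X": "Ten"}

def normalize_street_name(name):
    """Expand Roman-numeral-like tokens, blind to proper names
    such as 'Malcolm X'."""
    tokens = name.split()
    return " ".join(ROMAN_WORDS.get(t, t) for t in tokens)

print(normalize_street_name("Malcolm X Boulevard"))
# A rule like this produces "Malcolm Ten Boulevard"
```

The bug is not a typo but a design choice: the rule encodes an assumption about what an isolated “X” means, and no one tested it against street names honoring Black historical figures.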
Artificial intelligence can be summarized as a digital system that “learns” to perform tasks by identifying patterns across millions of examples rather than by following explicitly programmed instructions, breaking a routine down into simple, repeated steps like a conveyor belt. AI is often presented as a field devoid of emotion or the “faults” of the human mind, especially when it is advertised as a fix for human mistakes. In actuality, AI can reflect its creators’ values and embed them into the foundation of an algorithm. The field of AI ethics interrogates how technology is constructed and what harmful consequences can follow, such as the perpetuation of systemic racism.
Systemic racism describes how an entire social system—in its political, legal, economic, and educational spheres—has maintained, and continues to exacerbate, racial inequality in the United States. Systemic racism looks like perpetuating resource inequality, attitudes that rationalize racial oppression, and microaggressions. If institutions can reproduce white privilege and the subordination of Black people within the socially constructed racial order through both intentionality and indifference, then AI can operate as another institutional lever of systemic racism.
In essence, racism can shape the architecture of artificial intelligence and perpetuate racial prejudice on a macro level. The concept of discriminatory design describes how racism shapes structures in the real world. The architecture of AI requires a vast amount of energy, a demand recently met through the construction of “AI server cities” that emit high levels of light and noise pollution and consume enormous amounts of power; these facilities are disproportionately built near predominantly Black communities, offloading negative externalities onto these vulnerable residents at a higher rate. AI has also been known to influence housing by blocking Black users seeking to move from accessing certain opportunities, a practice known as “algorithmic redlining.” Both the physical infrastructure that delivers AI services across the US and the algorithms themselves can reinforce and reflect social hierarchies.
If software engineers can embed their own biases into AI simply by following the priorities of the companies designing this technology, how do those priorities shape the social landscape, and with what consequences? How can we avoid the discriminatory design latent in existing AI and developing technology?
A simple miscalculation, or the absence of values that address diversity and cultural sensitivity, can have the same effect as reproducing harmful rhetoric. Google Maps, just one example of an AI-powered application, has over one billion users every month, with over five million apps and websites depending on its data. Evidently, a staggering number of people use AI monthly, showcasing the dire need to regulate how these technological capacities are created. The consequences of an error, such as misreading the name of a famous civil rights activist, amplify when it reaches not just 300 million Americans but roughly one-eighth of the entire global population in just 30 days. Furthermore, some law enforcement programs use AI to predict recidivism rates, yet only 61% of those classified as likely to reoffend were actually arrested within two years, and the program falsely flagged Black people at almost twice the rate of White people. Law enforcement also uses AI trained with biased practices to scan faces, misidentifying Asian and Black people at rates up to one hundred times greater than their White counterparts. It could be argued that a lack of diverse engineers creating this technology led to a lack of racial and cultural foresight. Unfortunately, that lack of careful planning can have detrimental consequences for the lives of millions of Americans. A discriminatory design process perpetuates inequities through biased datasets. Numbers are not immune to human bias, and they can contribute to social stigmas.
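The disparity in false flags can be made concrete with a short sketch of how an audit measures it: compare the false positive rate (people flagged as likely to reoffend who did not) across racial groups. The records below are invented solely to illustrate the arithmetic, not drawn from any real dataset.

```python
# Hypothetical audit sketch: measure false positive rates by group.
# Each record is (group, flagged_high_risk, actually_reoffended);
# the data is invented purely to show the calculation.
records = [
    ("Black", True,  False), ("Black", True,  False), ("Black", True,  True),
    ("Black", False, False), ("Black", False, True),
    ("White", True,  False), ("White", True,  True),
    ("White", False, False), ("White", False, False), ("White", False, True),
]

def false_positive_rate(group):
    """Share of people in a group who did NOT reoffend
    but were still flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1]]
    return len(flagged) / len(non_reoffenders)

for g in ("Black", "White"):
    print(g, round(false_positive_rate(g), 2))
# In this toy data the Black false positive rate (0.67) is
# twice the White rate (0.33)
```

An audit of this kind shows why overall accuracy can look acceptable while the burden of the algorithm's mistakes falls disproportionately on one group.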
The ethos of the latest emerging technology will soon shape how the government distributes resources, influence hiring processes, and determine what data is collected on neighborhoods for predictive crime models, which in turn guide law enforcement surveillance, and much more. We should concentrate on how technology is developed, shaping its construction, embedding values into it, and consequently guiding its application. Gone are the days when we could push technologically deterministic narratives that strip humans of our agency to shape technology.
Photo Credit
Christian Anderson, CC BY-SA 2.0, via Wikimedia Commons






