What Role Does AI Play in Systemic Racism?


Technology dominates today’s world, playing a significant and consequential role in the lives of billions of people. Society has been revolutionized by tech industry innovations, a revolution currently being led by artificial intelligence, or AI. AI is characterized by systems of sophisticated algorithms that can synthesize information, complete tasks, and transmit information in a fraction of the time it would take a human to do the same. Yet, despite their wide use, these algorithms are not free from flaws. Over the past decade, AI’s shortcomings have become increasingly clear. An unforeseen consequence of the proliferation of this technology has been the fortification of systemic racial oppression and prejudice within society. AI’s influence is felt in a variety of spheres, from workplaces and financial institutions to real estate and law enforcement. These spheres have long been criticized for enforcing racial inequality, and the addition of AI may be worsening the issue. Discrimination is woven into the algorithms, effectively automating it.

Several prominent tech corporations have been accused of shipping prejudiced AI in their products, causing offense and harm to non-white customers. In 2015, Google faced significant backlash after users discovered that its photo app automatically sorted images of black people into folders titled “gorillas.” Nikon experienced similar fallout after customers found that its cameras misidentified Asian people as blinking. Hewlett-Packard drew negative publicity after customers noticed that its web camera software could not reliably recognize dark-skinned people (Metz).

The misidentification of dark-skinned people in particular has been a common flaw in many algorithms. In October 2016, Joy Buolamwini, a black graduate student at MIT, repeatedly tried to get a facial detection system to identify her face, failing each time. Only when she tried it while wearing a white mask from a Halloween party she had recently attended was the system finally able to detect her (Metz). Following this experience, she tested other face-identification technology from companies such as Microsoft and IBM. In both cases, Buolamwini found high error rates in classifying dark-skinned women, who were frequently misidentified as men, and far lower error rates for light-skinned men, who were only occasionally misidentified as women. In failing to reliably identify and analyze non-white faces, AI replicates the “Other Race Effect” (ORE), the phenomenon of individuals being better at identifying faces of their own race than those of other races (Tian et al.).

In light of the tense relationship in the U.S. between non-white (especially black) communities and law enforcement, this trend of misidentification has raised concerns, because these AI systems are employed by police departments across the country. Since the Los Angeles Police Department (LAPD) first adopted the approach in 2008, U.S. police departments have been experimenting with predictive policing tactics in an effort to anticipate criminal behavior. Predictive policing is the use of algorithms to analyze data in order to preempt crime (Lau). The algorithms draw on historical crime data from cities such as New York, Los Angeles, Miami, and Chicago to identify potential crime hotspots, and police are then sent to those hotspots. This method exacerbates the over-policing of non-white neighborhoods. The historical data is not impartial, yet the AI has not been trained to account for the data’s complexity (Akselrod). Algorithms may rely on past crime data that reflects symptoms of systemic racial discrimination, such as over-policing, thereby embedding racial biases in the data (Crawford).
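
To make the feedback loop concrete, here is a minimal sketch in Python of a naive hotspot model. It is not the LAPD’s or any vendor’s actual system, and the neighborhood names and arrest counts are hypothetical. Because the model ranks areas by past recorded arrests, the areas that were patrolled most heavily in the past keep getting flagged, which in turn generates more recorded arrests there.

    # A naive "hotspot" ranking over historical arrest counts (hypothetical data).
    historical_arrests = {
        "Neighborhood A": 120,  # heavily patrolled in the past
        "Neighborhood B": 35,
        "Neighborhood C": 30,   # similar underlying crime, far less past patrolling
    }

    def predict_hotspots(arrest_counts, top_n=1):
        """Rank neighborhoods by past recorded arrests and flag the top ones."""
        ranked = sorted(arrest_counts, key=arrest_counts.get, reverse=True)
        return ranked[:top_n]

    # Each planning cycle, flagged areas receive extra patrols, which produce
    # more recorded arrests, which raises their ranking again next cycle.
    for cycle in range(3):
        hotspots = predict_hotspots(historical_arrests)
        for name in hotspots:
            historical_arrests[name] += 10
        print(f"Cycle {cycle + 1}: patrols sent to {hotspots}")

Nothing in the sketch references race, yet because past patrol patterns determine where recorded arrests accumulate, the same neighborhoods are flagged cycle after cycle.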

In 2016, ProPublica published a piece that delves into the faults present in AI and how they have rendered it a tool of systemic racial oppression. ProPublica’s researchers found that a popular piece of software used to identify potential future criminals, created by a company called Northpointe, was egregiously biased against black people. The software was twice as likely to incorrectly flag black people as being at high risk of committing future crimes, and twice as likely to incorrectly flag white people as being at low risk of committing future crimes. These risk assessments, produced by Northpointe’s software and similar tools, are currently being used in courts across the nation, influencing bond decisions, criminal sentencing, and rehabilitation. The software is not wholly inaccurate; of all people categorized as high risk, 61% committed crimes within two years of their assessment. However, the potential racial biases ingrained in risk assessments have made them controversial. Following the publication of the study, Northpointe disputed ProPublica’s conclusions: “Northpointe does not agree that the results of your analysis, or the claims being made based upon that analysis, are correct or that they accurately reflect the outcomes from the application of the model.” Northpointe also shared a number of key factors its software uses to create assessments, such as a subject’s education level, job history, and answers to a set of 137 questions, including “How often did you get in fights while at school?” and “Was one of your parents ever sent to jail or prison?” A subject’s race is not included in the questioning (Angwin et al.).
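
The disparity ProPublica reported is, at bottom, a difference in false positive rates between groups. The sketch below shows that calculation on a handful of made-up records of the form (group, flagged as high risk, reoffended within two years); the numbers are illustrative only and are not drawn from ProPublica’s dataset.

    # False positive rate per group: the share of people who did NOT reoffend
    # but were still flagged as high risk. Records are made up for illustration.
    records = [
        ("black", True, False), ("black", True, True), ("black", False, False),
        ("black", True, False), ("white", False, False), ("white", True, True),
        ("white", False, True), ("white", False, False), ("white", True, False),
    ]

    def false_positive_rate(rows, group):
        non_reoffenders = [r for r in rows if r[0] == group and not r[2]]
        flagged = [r for r in non_reoffenders if r[1]]
        return len(flagged) / len(non_reoffenders) if non_reoffenders else 0.0

    for g in ("black", "white"):
        print(f"{g}: false positive rate = {false_positive_rate(records, g):.0%}")

With these toy records, the black group’s false positive rate comes out at twice the white group’s, which is the shape of the gap ProPublica measured on real court data.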

AI is having a similar impact on discrimination against people of color in housing. Automated systems for determining an individual’s access to loans have grown in popularity, and loan rejections are increasing in frequency. A LendingTree study found racial disparities in lending figures: black Americans face a home loan denial rate of 17.4%, compared to just 7.9% for non-Hispanic whites (Sisson). Further, a 2019 study from UC Berkeley’s Haas School of Business found that 6% of rejected loan applications from Latinx and black individuals would likely have been accepted had the applicant been white, amounting to the rejection of up to 1.3 million creditworthy applicants. The study also found that, due to discrimination, minority borrowers are subject to higher interest rates than their white counterparts, costing them an extra $765 million in interest each year (Bartlett et al.).

Among the causes of this prejudice in the algorithms is insufficient examination of the datasets used to track certain variables. Tech workers may not be aware that the variable they are focusing on (e.g., health) may be strongly influenced by secondary variables (e.g., gender, race, class). Without acknowledging how variables can be interconnected and play on one another, prejudice and inequality may be coded into software uncritically, resulting in an “automation of the status quo” (Tian et al.). Furthermore, chosen datasets reflect the biases of the people who create them, whether or not this is intentional. With 63.5% of the tech industry being white, there are often racial blind spots in the resulting algorithms. The misidentification of non-white faces by AI stems from white tech workers training their algorithms on datasets containing mainly white faces rather than racially diverse ones. In 2017, Deborah Raji, a black female employee of the tech startup Clarifai, analyzed the training images the startup was using for its facial recognition software. She found that 80% of the faces were white, and 70% of the white faces were of white men, making the software inadequate for recognizing non-white people and women. After this discovery, Clarifai’s chief executive, Matt Zeiler, stated, “The issue of bias in facial recognition technologies is an evolving and important topic. Measuring bias is an important step” (Metz).
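
The kind of audit Raji performed can be as simple as counting who appears in the training set. The snippet below is a hypothetical version of that tally; the group labels and proportions are invented to mirror the rough figures reported above, not Clarifai’s actual data.

    from collections import Counter

    # Hypothetical demographic labels for a facial recognition training set.
    training_labels = (
        ["white_male"] * 560 + ["white_female"] * 240 +
        ["black_male"] * 60 + ["black_female"] * 40 +
        ["asian_male"] * 50 + ["asian_female"] * 50
    )

    counts = Counter(training_labels)
    total = sum(counts.values())
    for group, n in counts.most_common():
        print(f"{group:13s} {n:4d} images ({n / total:.1%})")

A model trained on a set like this sees white men more often than every other group combined, so its errors concentrate on the people it has seen least.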

Diverse datasets are crucial to eliminating bias. A 2021 study of VGG-Face (a facial recognition model) found that after the AI was trained on a dataset composed of equal numbers of white and Asian faces, the racial bias in identification disappeared. This stands in contrast to the AI’s performance on the prior dataset, which was composed mostly of white faces and resulted in a bias towards identifying white faces (Tian et al.). To mitigate the impact AI technology has on systemic racism moving forward, it is crucial that the tech industry make thoughtful choices in developing algorithms. Preventing bias requires testing algorithm efficacy and promoting the use of diverse datasets, and sectors such as criminal justice and housing must be cautious in their implementation of AI software.
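
Testing algorithm efficacy across groups, as recommended above, amounts to computing accuracy separately for each group before and after rebalancing the training data. The sketch below does that on placeholder results; the accuracy figures are hypothetical and are not taken from the Tian et al. study.

    # Per-group accuracy from (group, was_correct) evaluation records.
    def accuracy_by_group(predictions):
        totals, correct = {}, {}
        for group, ok in predictions:
            totals[group] = totals.get(group, 0) + 1
            correct[group] = correct.get(group, 0) + int(ok)
        return {g: round(correct[g] / totals[g], 2) for g in totals}

    # Hypothetical evaluation results before and after balancing the training set.
    before = [("white", True)] * 95 + [("white", False)] * 5 + \
             [("asian", True)] * 70 + [("asian", False)] * 30
    after = [("white", True)] * 94 + [("white", False)] * 6 + \
            [("asian", True)] * 93 + [("asian", False)] * 7

    print("before balancing:", accuracy_by_group(before))
    print("after balancing: ", accuracy_by_group(after))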

Works Cited

Akselrod, Olga. “How Artificial Intelligence Can Deepen Racial and Economic Inequities | News & Commentary.” American Civil Liberties Union, 13 July 2021, www.aclu.org/news/privacy-technology/how-artificial-intelligence-can-deepen-racial-and-economic-inequities.

Angwin, Julia, et al. “Machine Bias.” ProPublica, 23 May 2016, www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing.

Bartlett, Robert P., et al. “Consumer Lending Discrimination in the FinTech Era.” SSRN Electronic Journal, 2017, https://doi.org/10.2139/ssrn.3063448.

Crawford, Kate. “Artificial Intelligence’s White Guy Problem.” The New York Times, 2016.

Lau, Tim. “Predictive Policing Explained.” Brennan Center for Justice, 1 Apr. 2020, www.brennancenter.org/our-work/research-reports/predictive-policing-explained.

Metz, Cade. “Who Is Making Sure the A.I. Machines Aren’t Racist?” The New York Times, 15 Mar. 2021, www.nytimes.com/2021/03/15/technology/artificial-intelligence-google-bias.html.

Sisson, Patrick. “Housing Discrimination Goes High Tech.” Curbed, 17 Dec. 2019, archive.curbed.com/2019/12/17/21026311/mortgage-apartment-housing-algorithm-discrimination.

Tian, Jinhua, et al. “Multidimensional Face Representation in a Deep Convolutional Neural Network Reveals the Mechanism Underlying AI Racism.” Frontiers in Computational Neuroscience, 2021, https://doi.org/10.3389/fncom.2021.620281. Accessed 20 Nov. 2022.
