[Happy New Year everyone! This essay is a piece I wrote for one of my library school courses about algorithmic bias. I thought I'd share it with you. Though the essay is not optimistic in tone, I have since come to realize that, as an aspiring librarian, I have a responsibility to try to help correct some of the flaws inherent in our knowledge organization systems.]
There is a growing body of multidisciplinary research within critical information and technology studies suggesting that technology and technological progress are not isolated from the socio-historical contexts in which they are situated (Allen, 2019; Noble, 2018; Wajcman, 2010). The algorithms of “big data” that have come to govern so much of our lives, from those underlying search engines to those guiding housing selection, are no exception (Allen, 2019, p. 219; Noble, 2018, p. 27). Multiple scholars have found that algorithms are complicit in “technological redlining,” a carry-over of gendered and racialized discrimination onto the Internet, a “cybertopia” that was meant to be a liberating project (Allen, 2019; Noble, 2018, pp. 1, 47; Wajcman, 2010).
In this post, I will further explore the concept of technological redlining as it relates to hegemonic and meritocratic ideas by examining the introduction and first chapter of information studies scholar Safiya Noble’s book Algorithms of Oppression: How Search Engines Reinforce Racism (2018) and legal scholar James Allen’s journal article “The Color of Algorithms: An Analysis and Proposed Research Agenda for Deterring Algorithmic Redlining” (2019). Both scholars have contributed greatly to our understanding of the harms resulting from the unregulated proliferation of algorithms, and both have called for greater awareness among the public and policymakers, who consume and produce information largely mediated by these algorithms.
Beginning with Algorithms of Oppression, Noble makes it clear that algorithms are technical products reflecting the privileges and biases of their designers, a dominant group in tech that believes in the meritocratic idea that popular technologies acquired their status through an impartial, democratic process of technological development in which the best ideas and practices won out (Noble, 2018). To demonstrate this, she conducts a case study of commercial search engine algorithms (focusing on those employed by Google), searching various terms related to group and individual identity and then contextualizing her interrogation of the value judgments represented in the results within the wider intersectional critical literature. The results were shocking, often surfacing derogatory, pornographic, racist, and sexist content about women and people of color (Noble, 2018).
For example, she highlights the differences in the suggestions generated by Google’s auto-suggestion algorithm for searches beginning with the phrases “why are black women” and “why are white women.” For the former, the suggested completions were “angry,” “loud,” “mean,” “attractive,” “annoying,” and “insecure”; for the latter, they were “pretty,” “beautiful,” “mean,” “easy,” “insecure,” and “fake” (Noble, 2018, p. 21). Note how both black and white women are positioned primarily as sexual objects. Conversely, a search for “professor style,” a professional identity term, returned mostly pictures of white men in nearly identical suit-and-tie and khaki outfits (Noble, 2018, p. 23).
Noble argues that the racist and sexist interpretations manifested in most image results retrieved for these terms, which associate women with reproduction, sex, and the domestic sphere and men with the public and professional sphere, are nothing new. For Noble, this is further evidence of a larger pattern of “technological redlining” and “algorithmic oppression”: harmful stereotypes are merely being reproduced and transmitted within newer modes of communication such as the Internet (Noble, 2018, pp. 1, 4). Not coincidentally, historically marginalized groups such as women and people of color often do not have the economic, political, and social clout to stave off the damage that results from being associated, at an individual or group level, with these popularized portrayals. “Code is a language full of meaning” that imbues technology with the subjective understandings of its creators, rendering normalized narratives about technology’s impartial nature inaccurate (Noble, 2018, p. 26).
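To make this reproduction mechanism concrete, consider a toy autocomplete that ranks completions purely by how often past users typed them. This is a deliberate simplification of my own, not Google's actual (and proprietary) algorithm, and the query log below is invented, but it shows how any popularity-ranked suggestion system can recycle whatever stereotypes dominate its underlying data:

```python
from collections import Counter

# Invented toy query log standing in for what past users actually typed.
query_log = [
    "why are black women angry",
    "why are black women angry",
    "why are black women loud",
    "why are white women pretty",
    "why are white women pretty",
    "why are white women easy",
]

def suggest(prefix, log, k=3):
    """Return the k most frequent completions of `prefix` in the log."""
    completions = Counter(
        q[len(prefix):].strip() for q in log if q.startswith(prefix)
    )
    return [term for term, _ in completions.most_common(k)]

print(suggest("why are black women", query_log))
# ['angry', 'loud'] -- the most-typed stereotypes become the default
# framing shown to every future user who starts the same search.
```

Nothing in this sketch "intends" to discriminate; the harm comes entirely from treating popularity in biased data as a neutral ranking signal, which is precisely the meritocratic assumption Noble challenges.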
While Noble’s methods of exposing the otherwise invisible power relationships embedded in Google’s search algorithms through a black feminist lens are largely qualitative, her insights are nevertheless erudite and evocative, highlighting the power that a dominant minority with specialized skill sets holds over a dependent and less technologically literate majority.
James Allen’s treatment of housing algorithms complements Noble’s argument that search engines are symptomatic of tech’s larger complicity in preserving and promoting the very oppressive social relations its “neutral” algorithms purport to avoid reifying. While Noble conducts her own case study to demonstrate techno-feminist critiques of the harms stemming from the normalization of gendered and racialized stereotypes, Allen conducts an extensive literature review before concluding with a substantive section on possible solutions to the problems treated in his analysis. Moreover, Allen examines Noble’s “algorithmic oppression” more specifically as “algorithmic redlining,” defined as “sets of instructions” that “carry out procedures that prohibit or limit people of color from procuring housing or housing finance, particularly in non-minority neighborhoods,” thereby threatening the “autogenerating” of discrimination (Allen, 2019, pp. 222–223, 230; Noble, 2018, p. 1).
Echoing Noble’s assertion that people must critically examine how their data is used in machine-learning algorithms, Allen finds that most of these algorithms, meant to provide an efficient means of ensuring equitable lending and affordable housing for as many people as possible, draw upon a wealth of discriminatory historical data generated during past periods of segregation, combined with present-day data mining of additional factors such as ethnicity and residential location. For example, in the determination of creditworthiness, a critical first step in securing lending, many otherwise eligible applicants are either denied outright or offered disadvantageous interest rates because of a high concentration of risk factors in their data profiles, such as having lived in a majority-minority neighborhood.
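A minimal sketch can illustrate the proxy mechanism Allen describes. The scoring rule below is hypothetical; the field names, weights, and ZIP codes are all invented for illustration and do not come from Allen's article or any real lender. The point is that a model need never see race directly for location data inherited from segregation-era records to do the discriminating:

```python
# Hypothetical neighborhoods flagged "high risk" in historical
# (redlining-era) data; standing in for the discriminatory datasets
# Allen describes.
HISTORICALLY_REDLINED = {"53206", "60624"}

def credit_decision(income, debt, zip_code):
    """Toy scoring rule: all thresholds and weights are invented."""
    score = 650
    score += 50 if income > 50_000 else -25
    score -= 75 if debt / max(income, 1) > 0.4 else 0
    # The discriminatory step: residential location acts as a proxy
    # for race, penalizing applicants for where segregation placed them.
    if zip_code in HISTORICALLY_REDLINED:
        score -= 100
    return "approve at prime rate" if score >= 650 else "deny or surcharge"

# Two identical applicants, differing only in where they have lived:
print(credit_decision(60_000, 10_000, "53202"))  # approve at prime rate
print(credit_decision(60_000, 10_000, "53206"))  # deny or surcharge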
Both Allen and Noble advocate for algorithmic literacy among the public and policymakers and for more transparency in algorithmic design. Allen pushes more specifically for updating past reforms, such as the Community Reinvestment Act (CRA) and the Fair Credit Reporting Act (FCRA), along with their equivalents in Internet and intellectual property law (Allen, 2019). Essentially, there must be a higher bar for disclosing and auditing both the data used by algorithms and the conclusions they reach, along with strictly enforced human oversight of algorithms throughout the decision-making process (Allen, 2019).
Overall, both scholars provide valuable insights into how power is negotiated between those hegemons who know how to manipulate code in various contexts and those who do not. However, neither goes much beyond establishing a research agenda: they do not elucidate the exact technical mechanisms needed to minimize algorithmic bias, nor do they specifically address how women and people of color in tech negotiate a culture of meritocracy that undervalues their work and attain positions that give them influence over the development process. No doubt this is due in part to the self-reinforcing nature of algorithms, which draw not only on the fallibility of their creators and on data gathered about people by governments and private entities, but also on their own past decisions, and in part to the shield that proprietary and intellectual property law grants an algorithm’s code (Allen, 2019; Noble, 2018). It is difficult to advocate for specific mechanisms to fix code that one cannot access. All in all, it is apparent that reforms are needed at multiple levels of society, but the growing prevalence and profitability of algorithms, combined with the need to address the larger structural systems that fostered their creation, make the prospect of meaningful reform highly unlikely.
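That self-reinforcing quality can be simulated in a few lines. The following toy model is my own illustration, not either author's; all numbers are invented. It shows how a lender that learns only from the loans it actually makes can freeze an inherited bias in place forever, because the people it refuses never generate the data that would correct it:

```python
import random

random.seed(0)
TRUE_DEFAULT = 0.10  # in this toy world, both groups repay identically
CUTOFF = 0.20        # the model lends only where estimated risk is low
est_risk = {"group_a": 0.10, "group_b": 0.30}  # biased historical priors

for _ in range(10):
    for group, est in est_risk.items():
        if est > CUTOFF:
            continue  # denied: no loans, no repayment data, no correction
        # Approved: observe 1,000 actual repayments and refine the estimate.
        defaults = sum(random.random() < TRUE_DEFAULT for _ in range(1000))
        est_risk[group] = 0.5 * est + 0.5 * (defaults / 1000)

print(est_risk)
# group_a's estimate tracks the true 10% rate; group_b's stays frozen at
# the inherited 30%, because the system never gathers evidence about the
# people it refuses to serve.
```

Fixing this loop requires seeing and changing the update rule itself, which is exactly what proprietary protections prevent outside auditors from doing.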
Works Cited:
Allen, J. A. (2019). The color of algorithms: An analysis and proposed research agenda for deterring algorithmic redlining. Fordham Urban Law Journal, 46(2), 219–270. Retrieved from http://search.ebscohost.com.ezproxy.library.wisc.edu/login.aspx?direct=true&AuthType=ip,uid&db=lft&AN=136193121&site=ehost-live&scope=site
Noble, S. U. (2018). Algorithms of oppression: How search engines reinforce racism. Retrieved from https://ebookcentral.proquest.com.
Wajcman, J. (2010). Feminist theories of technology. Cambridge Journal of Economics, 34(1), 143–152. Retrieved from https://academic.oup.com/cje/article-abstract/34/1/143/1689542?redirectedFrom=fulltext.