Eric Sears, Associate Director, Technology in the Public Interest, reflects on work to address and prevent the inequities and harms often built into new technologies.
Last year was bookended by two striking events in the artificial intelligence (AI) field that illuminated core racial justice challenges. In January 2020, a flawed facial recognition tool used by the Detroit Police Department led to the arrest of Robert Julian-Borchak Williams, a Black man who did not commit the crime of which he was accused. It is thought to be the first known case of its kind, but it is a scenario that researchers and civil rights advocates have warned about for years.
The year ended with Dr. Timnit Gebru, a computer scientist and prominent Black woman in the AI field who co-founded the organization Black in AI, being fired from her position as co-lead of Google's Ethical AI team. While the company claimed she resigned, it was widely reported that Dr. Gebru was fired for her advocacy on behalf of marginalized people at the company and for research that critically examined an AI tool intended to improve Google search results. Black people are woefully underrepresented in the AI field and among researchers at Google, and Dr. Gebru's firing demonstrates the risks that minoritized people in the field face for speaking truth to power.
Ironically, Williams’s fate was foreshadowed nearly two years prior to his wrongful arrest when Dr. Gebru and Joy Buolamwini, a computer scientist and founder of the Algorithmic Justice League, published a landmark study demonstrating bias in facial recognition tools. Their research revealed that commercial facial recognition tools sold by companies such as Microsoft and IBM were more likely to correctly identify White people and misidentify Black people like Williams. Research conducted by Buolamwini and fellow computer scientist Inioluwa Deborah Raji on Amazon’s facial recognition tool revealed similar disparities.
The research efforts of Buolamwini, Dr. Gebru, and Raji, coupled with surveillance concerns, helped fuel a movement to regulate facial recognition technology, an issue that took on new urgency in the wake of the 2020 protests against police brutality and anti-Black racism in the United States. While the work continues, a patchwork of local and state laws regulating facial recognition technology is emerging. Some companies have taken action, too. In June 2020, IBM said it would no longer offer general-purpose facial recognition tools, and Microsoft and Amazon temporarily halted the sale of their facial recognition tools to police.
As facial recognition technology demonstrates, beneath the veneer of new and emerging technology is an old story about power and how it operates. The benefits and harms that flow from the advancement and use of technology are unevenly distributed in our society. Left unchecked, technology too often reinforces existing social structures that hinder progress toward a more just and equitable world. It is therefore essential that people whose lived experience places them in closest proximity to the harms technology can cause help guide its development and use.
To that end, the Technology in the Public Interest program has supported Black, Indigenous, and People of Color (BIPOC)-led organizations and networks through multi-year institutional awards that are undertaking foundational research about the social implications of AI-related technologies; advancing efforts to regulate technology in a way that mitigates harms against historically marginalized communities; and diversifying the field of researchers who are shaping AI’s future. Taken together, these interlinking organizations, such as the Algorithmic Justice League, Black in AI, Data for Black Lives, Movement Alliance Project, Center on Privacy and Technology at Georgetown University, and Upturn, are making significant contributions.
Among the many lessons of 2020 is this: the ability to advance racial justice is mediated in part by technology and by those who control its development. The COVID-19 pandemic has accelerated society's dependence on technology at a time when we are beginning to develop a more nuanced understanding of technology's role in buttressing existing systems of inequality and oppression. While most technology may not be inherently bad, it also is not neutral.
As we move further into 2021, there will likely be a growing imperative to use policy and regulatory measures to rein in the harms brought on or deepened by technology, whether biased algorithmic decision-making systems, social media content moderation, or monopolies and outsized power within the technology industry. Such measures will be limited in advancing justice and equity unless the individuals and communities most at risk of harm are central to designing and implementing them. We will continue to deepen our grantmaking to support organizations and people doing this vital work.