Eric Sears
Director, Technology in the Public Interest

Eric Sears, Director, Technology in the Public Interest, shares insights on what is needed to ensure AI governance centers human rights, community, and safety.


Artificial intelligence (AI) has become a significant animating force in national security and geopolitics. Over the last few years, a dangerous “AI arms race” narrative has taken center stage, casting the United States and China as competitors racing to achieve increasingly advanced AI as a means to global superiority. Leading AI companies and technology executives in the U.S. often embrace this narrative, and with good reason: because advanced AI systems are developed and owned by industry, they and their investors stand to profit significantly.

While deep connections between Silicon Valley and Washington, DC, have long existed, ties are growing closer than ever as technology companies and their leaders pursue valuable government contracts and seek to shape AI policy and regulation in their interest. The military, intelligence agencies, and law enforcement are increasingly integrating AI technologies into surveillance, threat assessment, warfare, and other activities. This ensures AI will play an even more active role in shaping foreign and domestic policy moving forward.

Yet we know that AI systems can provide misleading and inaccurate information that could have profound national security implications. Moreover, without proper oversight and protections, AI technologies raise a range of security and human rights risks that could erode, not advance, democracy and national security. The specter of AI working toward authoritarian ends looms large.

Throughout 2024, the intersection of AI, national security, and geopolitics became increasingly complex. There has been some progress in establishing safeguards to help ensure AI systems used for national security purposes uphold democratic values and incorporate safety and rights-based considerations. However, we are entering what could be a time of deep geopolitical instability, fueled by precipitous decision-making. In such a context, there is a real risk that the rules and regulations governing AI that are meant to ensure safety and protect rights will be swept aside.

In response to these dynamics, the Technology in the Public Interest Program is supporting a growing cohort of organizations working at the nexus of AI, national security, and geopolitics. Listening to experts from an array of fields, here is what we have heard the sector needs:

  • Advance a larger marketplace of ideas to guide policymaking and practice that centers public interest considerations.
  • Widen the aperture of what constitutes expertise in the field.
  • Build stronger civil society networks that are guided by democratic and public interest values.
  • Work globally and include expertise from global majority countries.
  • Examine the deepening relationships between the technology industry and governments.
  • Establish a shared lexicon between and among stakeholders that centers humanity.

Even during a period of great uncertainty, it is possible to make progress in the areas outlined above. Through the Council on Foreign Relations and the University of Cambridge, we are supporting efforts to advance new approaches to AI governance and foreign policy that center democratic and public interest considerations. Grants to New America and Kaiji aim to assess and reimagine U.S.-China relations in a way that moves beyond competition and “AI arms race” narratives. Support to the Brennan Center seeks to advance oversight, transparency, and accountability for how AI technologies are used and deployed in a domestic national security context, and to shed light on the AI industry’s influence within the U.S. government. Grantee partner Tech Policy Press produces knowledge and analysis about the geopolitics of technology, with a special focus on AI.

We will continue to build our work in this area and seek to connect these efforts with broader collaborations to advance global AI governance toward the public interest. As always, we invite your ideas and comments about our work.