Eric Sears, Director of Technology in the Public Interest, discusses the need for foundations and civil society actors to support efforts to advance AI in the public interest, addressing its harms and ensuring it benefits people and society.
The arrival of advanced chatbots like OpenAI's ChatGPT, Google's Bard (now Gemini), and Anthropic's Claude, all powered by large-scale artificial intelligence (AI) models, has ignited a new wave of fear, hype, and promise about AI technology globally. Power, money, and ideology act as stimulants in the race within the technology industry to create increasingly advanced AI systems. The private sector and governments have poured billions of new dollars into the AI ecosystem over the last year alone. Big Tech companies are vying to acquire the talent, hardware, and data required to build and further commodify more powerful AI systems. Ideological positions about the technology itself are bound up in this race, with some leaders in Silicon Valley and the AI field warning that the very technology they are eagerly creating, and seek to profit from, poses existential risks to humanity.
A mentality of ‘move fast and break things’ often prevails in Silicon Valley and other centers of AI innovation because technology companies do not yet have the incentives to behave differently. This is poised to change as government actors move to regulate them. In the United States, this is most significantly seen in a sweeping Executive Order issued by the White House on October 30, 2023, titled “Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence,” which addresses civil rights, equity, safety, national security, innovation, competition, privacy, and consumer- and worker-related issues. Although new laws would provide greater opportunity to prevent AI harms, advance accountability, and ensure redress for victims, the Executive Order goes a long way in addressing core civil and human rights concerns public interest researchers and advocates have illuminated over the last several years.
Actions in the United States are not happening in isolation. A variety of countries and jurisdictions are actively engaged in AI regulation, as seen with the landmark European Union AI Act that was agreed upon in December 2023. Debates about AI regulation have helped kick-start global governance conversations, too. The business of governing AI is just getting started, and the next three to five years will be defining. Technology companies are pouring large amounts of money into lobbying efforts and related activities to shape both the terms of the debate and its outcomes. While technology companies have a role to play in informing regulation and implementing their own responsible AI practices, they cannot be relied upon to act alone.
If AI is to advance in the public interest, private foundations will need to provide focused, sustained support for the foreseeable future to the civil society actors helping to shape the governance of these consequential technologies. MacArthur has been investing in efforts to address the social impacts of AI-related technologies for the last several years. As the Technology in the Public Interest Program continues to implement our grantmaking strategy, work related to AI governance will continue to grow. This will include supporting sociotechnical research on generative AI that seeks to inform policy and practice; strengthening network advocacy focused on AI-related policy issues; and advancing initiatives aimed at seeding new work at the intersection of geopolitics, AI, and global governance.
We will also deepen and further align our grantmaking with like-minded funders to ensure AI benefits people and society, centering individuals and communities most at risk of harm. As always, we welcome your ideas that seek to advance AI in the public interest.