Leveraging Diversity to Advance Beneficial AI
Partnership on AI’s 2023 Partner Forum in San Francisco included a panel on inclusive AI design, one of the organization’s priorities. Credit: Partnership on AI.

Partnership on AI convenes representatives from tech companies alongside experts and scientists from civil society and academia to advance governance and best practices in artificial intelligence.


When social media platforms started gaining popularity years ago, their benefits were obvious more quickly than their disruptions and dangers.

Artificial intelligence (AI) is evolving differently. Its risks to society are already becoming as clear as its astonishing capacity for good.

The encouraging news is that one organization is convening a diverse array of experts to better understand and address the risks of AI and to help shape the technology so that it benefits society rather than undermines it.

Partnership on AI (PAI) is an international nonprofit bringing together leading scientists from influential tech companies and experts from civil society and academia to have candid conversations and work to advance responsible governance and best practices in AI.

Many of the players that PAI brings together would rarely meet otherwise and often disagree on issues. But the open, productive space PAI has created allows those knowledgeable participants with diverging views to work together effectively and emerge with stronger ideas.

Established in 2016, PAI hosts workshops, produces research, informs policy and practice, and creates greater public understanding about AI’s impact on society. The group works in four broad areas: inclusive research and design; AI safety, fairness, and transparency; labor and the economy; and AI and media integrity.

“We saw that by collectively creating communities around emerging issues and trends at the intersection of AI and society, we could identify those areas where, with further action, we could drive real impact in a way that other organizations weren’t able to do,” PAI’s Chief Executive Officer Rebecca Finlay said.

That approach started by gathering a Who’s Who of technology companies as PAI founding members. They include Amazon, Apple, Facebook, DeepMind, Google, IBM, and Microsoft. From the outset, the founding partners were also intent on expanding PAI well beyond the tech industry.

“We wanted to have a deep sense of a democratic process in our conversations.”

“We wanted to have a deep sense of a democratic process in our conversations, with lots of different diverse perspectives that were balanced between the civil society groups, industry, academia, and philanthropic organizations,” said Eric Horvitz, Microsoft’s Chief Scientific Officer, who helped establish PAI and is on its board of directors.

Consulting a range of voices early, especially those of civil society groups, can help address stereotyping, discrimination, and similar issues before they are baked into an AI system, he said.

PAI’s 2019 All Partners meeting in London, U.K., exemplified its commitment to gather diverse voices from technology companies, civil society groups, and government. Credit: Partnership on AI.

That lesson was learned during the rise of social media, said Sam Gregory, Executive Director of WITNESS, a civil society organization that helps people use video and technology to protect and defend human rights.

“The failure of inclusion early on, around the development of social media, led to a set of structures that were built and implemented without the input of civil society and human rights defenders,” Gregory said. The consequences have included the widespread circulation of disinformation and misinformation, harassment, hate speech, extremism, and even violence.

PAI has more than 100 partner organizations—the American Civil Liberties Union, The New York Times, UNICEF, the TechEquity Collaborative, and the Center for Democracy and Technology are a few—with expertise in social systems, political institutions, labor markets, privacy protection, and human and civil rights.

Benchmark for AI Safety

One of the more urgent topics in global discussions on AI is the safety of foundation models—AI systems built with enormous amounts of data and computational resources that power applications such as ChatGPT. If developed, deployed, and used responsibly, these transformative systems have the potential to enhance productivity across sectors, speed up scientific discovery, and more.

They also can produce more misinformation, eliminate jobs, and automate criminal activity.

In collaboration with a global community that included the Ada Lovelace Institute, the Alan Turing Institute, Anthropic, Google, and IBM, PAI has produced its Guidance for Safe Foundation Model Deployment. The practical recommendations are aimed at the responsible development and deployment of foundation models.

“It’s a guide for practitioners and provides input for policymakers,” Finlay said, adding that it classifies AI harms and contains protocols for evaluation and disclosure of those potential harms. “PAI’s aim is for it to become a benchmark for AI safety and a catalyst for more work in this area.”

Addressing Synthetic Media Challenges

Another component of PAI’s work centers on the challenges of synthetic media—audiovisual content generated or modified by AI—especially its potential for producing misleading content.

As it did with its guidance on safe foundation models, PAI has worked with a broad range of partners, including WITNESS, Code for Africa, BBC, and OpenAI, to develop Responsible Practices for Synthetic Media. The framework for synthetic media developers, creators, and distributors is built on the concepts of consent, disclosure, and transparency.

It calls on organizations to collaborate against harmful uses of synthetic media, identify responsible and harmful uses of the media, and pursue mitigation when synthetic media causes harm.

“When we released those resources, we also asked for partners to sign on and say, ‘I will disclose how I am using synthetic media,’” Finlay said. “‘I will make sure that I am seeking consent for the use of this media and will list the responsible ways in which I protect privacy.’” Today a growing group of partners has signed on, including Adobe, BBC, CBC/Radio Canada, Google, Meta, Microsoft, OpenAI, and TikTok.

PAI’s framework on how to responsibly develop, create, and share synthetic media was led by Claire Leibowicz, the organization’s Head of AI and Media Integrity.

A Vision of Justice, Equity, Prosperity

Core to PAI’s work is advancing responsible, trustworthy AI policy across the globe. Driven by a 14-member Policy Steering Committee established in October 2023, PAI’s convenings of diverse stakeholders explore challenging AI policy issues to build consensus and share insights with policymakers.

Over time, PAI has provided resources and evidence to the Federal Trade Commission, National Institute of Standards and Technology, Organisation for Economic Co-operation and Development, and the US-EU Trade and Technology Council. The organization also has participated in international forums and convenings on global digital policy.

“Governments at all levels play an important role in fostering AI competition, trust, and accountability,” Finlay said. “We keep policymakers up to speed on our work to enrich their understanding, pressure test our frameworks and, when appropriate, promote coordination across sectors.”

In addition, PAI supports its partners in tracking policy and advocating for positive changes to it.

One example of that support was a webinar PAI hosted with the White House Office of Science and Technology Policy on principles to guide the design, use, and deployment of automated systems. Those principles, known as the White House Blueprint for an AI Bill of Rights, were created to protect the public as artificial intelligence continues to evolve.

All of PAI’s work is geared towards achieving its vision of artificial intelligence that advances justice, equity, and prosperity.

“For AI to help us meet the challenges of today and tomorrow, this must be truly a collective effort.”

“Our work is driving changes in practices across industry and informing innovations in policy that are advancing real-world benefits for people and society,” Finlay said. “It also means that the global reach and local impact of our partner community is growing.

“For AI to help us meet the challenges of today and tomorrow,” she said, “this must be truly a collective effort.”

Since 2017, MacArthur has provided $2.6 million to Partnership on AI for general operating support and event management.

