Press Release
New Research Offers Blueprint to Build Public AI
09.30.2024
New research released as the nation's foremost leaders in AI governance meet in Washington, D.C. at a critical moment in the debate on Big Tech and AI
Washington, D.C. – Economic Security Project, Mozilla, and the Vanderbilt Policy Accelerator (VPA) have come together to push forward a pivotal discussion on the future of public policy around Artificial Intelligence (AI). In conjunction with the convening, VPA and Mozilla are releasing new research proposing ways to design and implement AI infrastructure that intentionally benefits the public. The papers form the foundation of a jointly hosted public AI design workshop in Washington, D.C. on October 2nd with notable leaders from the AI and political economy space. These leaders will be central to building an agenda that policymakers in the next administration can use to respond to the urgent need to govern and encourage the development of AI technologies and solutions. The research is timely, as the digital economy already comprises 10% of the nation's broader economy and grows each day. The convening and research respond to increasing public calls for sharper governance strategies around AI.
New research published today includes:
- “Public AI: Making AI Work for Everyone, By Everyone,” a paper by Nik Marda, Jasmine Sun, and Mark Surman at Mozilla. The paper provides a vision for a robust ecosystem of initiatives that promote public goods, public orientation, and public use throughout every step of AI development and deployment. It explores how to build and support parts of a Public AI ecosystem, such as expanding multilingual access to voice data to train models; supporting startups and innovators with public AI tools; investing in fellowships, data programs, and grants; and partnering with policymakers to advance the field.
- “The National Security Case for Public AI,” authored by Ganesh Sitaraman, Director of the Vanderbilt Policy Accelerator, and Alex Pascal, Senior Fellow at the Harvard Ash Center for Democratic Governance and Innovation, makes a national security case for investing in public AI, which includes public provisioning of AI infrastructure and public-interest regulation of the private AI industry. It demonstrates that reliance on unrestricted AI monopolies is a danger to national security, and that public AI could increase competition and technological innovation while strengthening resilience and independence during crises.
- “Creating a Public Cloud through the Defense Production Act,” authored by Joel Dodge of the Vanderbilt Policy Accelerator, shows that the federal government already has the legal authority under the Defense Production Act to acquire semiconductors and other components needed to build a public option for cloud infrastructure. It encourages Congress to strengthen the Act to ensure that the U.S. can remain at the forefront of tech competitiveness.
“The development of AI is happening at a rapid clip, and with it grows our collective responsibility to ensure the benefits of AI reach everyone. Ultimately, how we build AI—and its impact on society—is a choice we get to make. Do we let the dominant, entrenched companies dictate the means and ends of how AI is deployed, or do we advance a broad set of tools to ensure AI is built for broad-based prosperity?” said Taylor Jo Isenberg, Executive Director of Economic Security Project. “We are excited to partner with Mozilla and Vanderbilt Policy Accelerator to build a framework for AI that not only incorporates accessibility and safety, but also identifies how governance and implementation practices at each layer of the stack will help us build this vital infrastructure while prioritizing public benefit.”
“The AI tech stack is extremely concentrated at critical layers, and this poses serious risks to American national security,” said Ganesh Sitaraman, Director of Vanderbilt Policy Accelerator. “Too often, monopolistic or oligopolistic firms abuse their power to stifle competition and innovation. Concentration also places the government in a dangerous position of dependence on a small number of powerful private actors who might not share America’s national interests. A public option at layers in the AI tech stack, plus public utility-style regulation to prevent abuses of power, will enhance American national security and ensure that the United States remains innovative and competitive. We are excited to collaborate with the Economic Security Project and Mozilla on this public AI workshop in order to design and implement public AI that strengthens national security and serves the public interest.”
“We can’t just rely on a few companies to build everything our society needs from AI, and we can’t afford the risk that they won’t. We know that AI has vast potential benefits, but most AI developers are focused on potentially profitable applications — some good for society, others not. But crucially, there are many other AI applications that could be beneficial, but aren’t being pursued at scale because they can’t generate a profit,” said Nik Marda, Technical Lead for AI Governance at Mozilla. “That’s why we need Public AI to expand who can build AI in the first place, and reduce the friction for everyone to use AI in a trustworthy manner. Mozilla is committed to doing our part by building key parts of the Public AI ecosystem, and we are thrilled to partner with the Economic Security Project and Vanderbilt Policy Accelerator to help mobilize a broad coalition to make Public AI a reality.”
Following the convening, members of the press are invited to attend a cocktail reception featuring remarks from Nabiha Syed, Executive Director of Mozilla Foundation, and Arati Prabhakar, Director of the White House Office of Science and Technology Policy.
Event details: Wednesday, October 2nd, 5:00 – 7:00 p.m. ET, Juniper, at The Fairmont DC. Please contact Jenna Severson, [email protected], if you would like to attend the reception.