Shaping Markets
Building a New Political Economy for AI
06. 04. 2024
How can we govern AI for shared prosperity? ESP convened 35 experts on different aspects of AI and political economy to dig in and explore.
Introduction
In April, the Economic Security Project brought together 35 experts on artificial intelligence (AI) and political economy to answer the pivotal question, “How can we govern technology and AI to deliver on the promise of broad-based prosperity?”
Amidst the growing hype and corresponding interest in regulating AI, we saw the need for deeper exploration of a political economy frame to understand the impact of concentrated power on our economy and democracy. Policy developments like “A Roadmap for Artificial Intelligence Policy in the U.S. Senate,” issued in May by a bipartisan working group led by Senate Majority Leader Chuck Schumer; the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, issued by President Biden in October 2023; and US-European forums exploring AI policy domestically and abroad have highlighted the need for a more expansive imagination, one that interrogates historical and ongoing approaches and treats AI not just as a technology, but as a source of political and economic power. There was a shared sentiment that we have the opportunity to avoid repeating the mistakes that have guided our approach to regulating tech thus far, and can instead set AI on a different course by shaping market structure and embracing a broad set of tools that shift power in the AI ecosystem.
Together, we began workshopping the attributes and considerations that can shape a political economy approach to AI and build toward a more democratic, equitable economy. The workshop broke down silos among people working on political economy and tech policy: technologists, advocates, labor unions, scholars, and funders, all working toward a shared vision for tackling the concentrated power of dominant, entrenched companies in AI. Our goal is to learn from the laissez-faire approach policymakers have deployed when regulating new technological developments and innovation, and instead to confront the challenges of runaway corporate power head-on as we shape this burgeoning technology.
The Moment We’re In
We’re in a widespread re-alignment moment. Shifts in policy mark a serious challenge to 40 years of neoliberalism, and many are fundamentally rethinking—and embracing—the role of civil society and government in actively shaping markets to serve the public. Policymakers nationwide are revisiting the hands-off regulatory approach that has let dominant companies pick winners and losers in the marketplace, wielding outsized power and control to stifle innovation and harm workers, consumers, and small businesses. In this context, policymakers are recognizing the need for evidence-based reforms that meet the promises and perils of the digital economy as AI becomes a central focus.
Ultimately, how we build AI—and its impact on society—is a choice we get to make. Do we let the dominant, entrenched companies dictate the means and ends of how AI is deployed, or do we advance a broad set of tools to ensure AI is built for broad-based prosperity? If we cannot rely on the free market, what is the alternative approach?
This re-alignment is not without its tensions, however. Not everyone agrees on the need for a new political economy approach to governing AI. As organizations and individuals challenge incumbent corporations’ outsized power and control over essential technology that everyday people, workers, and small businesses increasingly rely on in their daily lives, industry is pushing back, mobilizing the significant resources at its disposal to halt any real regulation that would hold it accountable. There are also ongoing, important questions about how to leverage public power democratically, especially in the face of authoritarianism. Still, there may be critical strategies and tactics that civil society can adopt in the short and long term to build consistent, effective countervailing power and contest who decides how this technology will impact society.
An Exploration of the Political Economy of AI
AI is a tool that serves an end. Whose interests AI will serve, and who gets to decide, is still up for debate—but we have a limited window of opportunity to act: to address the immediate, ongoing harms and to rebalance power from these corporations to everyday people, workers, small businesses, and entrepreneurs. Under the status quo, corporations have mostly free rein to develop AI while continuing to extract, exploit, and monetize the three critical inputs powering the AI industry: data, labor, and energy, all for shareholder profit. Historically, U.S. policymakers have embraced regulation when social, political, and economic problems arose in lax regulatory environments in the banking, pharmaceutical, and transportation sectors; it’s time to embrace a similarly comprehensive regulatory approach in the tech sector.
It is clear that we need to tackle concentrated power at its roots by targeting the underlying business model. A growing number of experts and thought leaders, many of whom were in the room, are turning to this important question. AI Now’s Landscape Summary provides an overview of the moment and how any discussion on political economy and tech needs to grapple with the concentrated economic power in the sector (the analysis is expanded in their recent exploration of AI and industrial policy); Vanderbilt Policy Accelerator’s Antimonopoly Tools for Regulating Artificial Intelligence explores the realm of policy levers and solutions, including promoting competition through procurement and building public capacity for AI.
The effects of AI are not limited to the technology sector alone; they have broader implications for our economy, democracy, and climate. Moments of technological change like this one fundamentally shift the distribution and dynamics of political and economic power. In the absence of robust privacy regulations that move us away from a notice-and-consent framework and toward individual agency and control, corporations remain free to collect, aggregate, and monetize vast amounts of data through personal consumer devices, biometric surveillance, and other means. That data is then fed into AI models and applications at scale, fueling business models that profit off of consumers’ data without regard for societal harms and without taking any responsibility for them.
The policy choices we make about AI development have a significant impact on the day-to-day lives of workers in a highly unequal, racialized society. This was a key theme at the workshop, as presenters highlighted that technology was used to control and exploit workers long before AI came along.
Lastly, there are significant environmental costs to AI. Training large language models (LLMs) requires substantial energy resources and produces large amounts of CO2 emissions. This means both that only a small number of firms have the resources necessary to develop LLMs, and that marginalized communities, especially in the Global South, are more likely to bear these negative effects.
Participants shared a sense of the urgency and gravity of this choice point in how we regulate AI. Delivering on the promise of AI would encourage and reward innovation instead of stifling competition. It would open doors to new entrants and ideas, freeing an ecosystem currently locked in and controlled by dominant gatekeepers for the chosen few. It would build broad prosperity for all, instead of entrenching the current extractive system that delivers economic benefits only to the very top.
As we began to uncover these dynamics and explore potential paths forward, we surfaced important questions for the field:
- How do we organize workers when workers employed by AI companies may have strong economic incentives to maintain the status quo? How do we resource and support workers who choose to defect, whistleblow, or organize to halt the use of AI toward problematic ends?
- How can we build broad coalitions to secure policy wins and action—and is there a role for corporations that might share our vision and values? How might we make inroads with organizations and individuals who disagree and persuade or neutralize them?
- What normative role should the government at the federal, state, and local levels have in shaping the AI industry? How do we balance this role against historical, ongoing, and potential government abuses of power that have harmed and disenfranchised marginalized communities? How do we move toward a vision for co-governance that balances community input and democratic control with government involvement?
- Are there existing laws and authorities, including labor and employment, civil rights, antitrust, and more, that could provide an immediate stop-gap for some of the ongoing harms that AI poses? What new laws and authorities are necessary to build a more egalitarian, democratic political economy of AI?
- What is the affirmative vision for AI that we’re organizing toward? What are the success stories that we can lift up to demonstrate the impact and positive potential of AI?
While we don’t have clear, conclusive answers, grappling with these questions will be critical to seeding the next phase of the work.
Emerging Plan for Action
Building Countervailing Power
There is enormous potential for leveraging broader organizing efforts to build countervailing power. Combined efforts across labor organizing, racial justice, immigrant rights, criminal justice, tech industry whistleblowing, and organizing among small and mid-sized businesses, including start-ups and entrepreneurs, are all critical to reclaiming the collective power that tech has captured. For example, labor unions are advocating for including workers from the beginning of the R&D process, which could ensure that innovation is used to create tech that is good for workers. We need everyone to see themselves in this fight—workers, voters, students, those who are most impacted now, and those who are just beginning to have concerns about the future.
Building the Toolbox
To begin building a new political economy of AI, we can embrace tools that shift power from corporations to other actors in the ecosystem: end users, consumers, workers, developers, entrepreneurs, creatives, and government. Tools that target imbalanced structural power or address bad behavior, complemented by robust and responsive government capacity and expertise, can together start to build toward a world where AI enables broad-based prosperity.
We identified and examined a broad, though by no means comprehensive, suite of existing and new policy tools and levers that we can use to collectively harness the power of AI for the public good. These tools and levers generally fall into three broad, sometimes overlapping categories: 1) regulate the AI industry by passing new laws, 2) enforce existing laws on the books, and 3) build up public capacity.
Regulate
- Expand privacy protections to limit the use of consumer and worker data for surveillance and other means
- Expand financial oversight to ensure that AI companies are subject to securities and financial protection laws and regulations
- Expand labor and employment law to provide workers with additional rights and protections
- Expand civil rights laws and protections to protect marginalized communities
- Explore a potential sector-specific regulator for AI, similar to models in finance (Consumer Financial Protection Bureau) and telecom (Federal Communications Commission)
- Mandate structural separation to eliminate conflicts of interest within the AI supply chain
- Mandate data portability and interoperability to decrease barriers to entry for nascent and potential competitors
- Require transparency into AI models and applications
- Strengthen intellectual property and copyright laws to protect original content that can be used to train AI models
- Secure environmental protection laws that address the environmental impact of AI
Enforce
- Utilize existing antitrust law and competition policy (such as unfair methods of competition authority, merger policy, and more) to stop AI companies’ anticompetitive abuses on a case-by-case basis, including by structuring remedies in antitrust cases to maximize public benefits
- Utilize existing consumer protection law (including unfair and deceptive practices authority and state-level laws like the California Consumer Privacy Act) to address harms to consumers, including by structuring remedies in consumer protection cases to maximize public benefits
- Utilize existing civil rights law to protect marginalized communities
- Utilize existing labor law to protect workers
Build
- Invest in public AI infrastructure like the National AI Research Resource (NAIRR) and CalCompute
- Increase government capacity and expertise through knowledge-sharing requirements and talent recruitment and retention
- Leverage public-private partnerships through contracts and procurement (e.g., grants and direct funding) to advance pro-competition goals
- Set industry-wide norms and policies through standards-setting bodies
Potential Next Steps
To pave the way forward, we surfaced important strategies and tactics for the field to explore:
- Organize Labor to Secure Robust Worker Rights and Protections: We must organize workers impacted by AI and workers building AI to ensure they are at the table from the beginning of the conversation when employers want to bring in tech to solve a problem. We need strengthened labor and employment law and protections that go beyond what unions can negotiate on a case-by-case basis.
- Organize Start-Ups and Founders as Allies: As more small AI start-ups get off the ground, we have the opportunity to organize capital and ensure that founders are brought into a vision for AI grounded in broad-based prosperity.
- Build Coalitions: Drawing on lessons learned from other tech fights including net neutrality, privacy, and more, we can build coalitions that shift power and achieve shared goals in AI policy and development.
- Change Narratives: The dominant narrative holds that the current trajectory of AI is inevitable and necessary to compete with other nations, but we know that a different world is possible if we mobilize strategically. We must steer toward narratives that underscore that everyone, workers and regulators included, is capable of understanding and using this technology, and that AI is an industry that is within our reach to change.
- Support Research: We need research capacity to better understand the impacts of AI—especially on specific demographics, geographies, and groups—and the impacts of proposed solutions, so we can make the empirical case for specific interventions. As the public and private sectors rapidly adopt and deploy AI, we have an opportunity to track and research these use cases to inform policy change.
We’re excited to continue and build on this conversation, including through forthcoming workshops focusing on different aspects of the political economy of AI, such as how to build a public option for AI.
The Economic Security Project would like to thank Ganesh Sitaraman at the Vanderbilt Policy Accelerator, Amba Kak at the AI Now Institute, Andrea Dehlendorf, and the team at Omidyar Network for their thought partnership in curating the programming, as well as the 35 participants who generously and thoughtfully engaged in our workshop.