When pondering the future of artificial intelligence (AI), one might be tempted to invoke the imagery of Skynet from the Terminator films, a cautionary tale of technology spiraling out of human control. While our current technological landscape hardly mirrors the cataclysmic events of those movies, the analogy serves a functional purpose, illustrating the imperative need for robust safety protocols as AI continues to integrate deeply into the fabric of society. The saga of California’s Senate Bill 1047 (SB 1047) underscores this need with stark clarity, capturing a moment in legislative and technological evolution where fiction increasingly informs reality.
Introducing the Concept of the Gatekeepers’ Gambit
I like the concept of “The Gatekeepers’ Gambit,” a regulatory strategy that aims to position oversight where it can be most effective, ensuring that those who hold the keys to significant technological power are also the ones shouldering the responsibility for its consequences. This approach not only helps prevent the kind of widespread harms we’ve seen in the past but also sets a standard for future developments in AI and digital technologies. After discussing a recent failure to pass AI Safety legislation in California, I would like to return to this concept and how it might apply not only to AI Safety but perhaps also to other categories of online business, communication, and health.
Recent Example: California’s Failed SB 1047
SB 1047, officially titled the Safe and Secure Innovation for Frontier Artificial Intelligence Models Act, was an ambitious legislative effort aimed at setting regulatory thresholds for the development and deployment of advanced AI systems. The bill sought to establish safeguards by categorizing certain AI models based on their computational power and potential impact. Specifically, it defined “covered models” as those trained with computational resources exceeding 10^26 integer or floating-point operations—a staggering amount of computing power indicative of significant financial investment, with the amended bill tying the threshold to a training cost of more than $100 million. The bill’s scope effectively meant that only the largest tech entities, such as OpenAI, could realistically fall under its purview. These companies, primarily based in technology hubs like Silicon Valley, represent the forefront of AI research and development, making the bill’s focus both geographically and industrially significant.
https://leginfo.legislature.ca.gov/faces/billNavClient.xhtml?bill_id=202320240SB1047
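To put that 10^26 figure in perspective, here is a minimal sketch using the widely cited approximation that dense transformer training costs roughly 6 floating-point operations per parameter per token. The model sizes and token counts below are purely hypothetical assumptions for illustration; they are not drawn from the bill or from any disclosed training run.

```python
# Rough illustration of the SB 1047 "covered model" compute threshold.
# Training compute is approximated with the common ~6 * parameters * tokens
# rule of thumb for dense transformer training; every model figure below is
# a hypothetical assumption, not a number from the bill or any real run.

COVERED_MODEL_THRESHOLD_FLOPS = 1e26  # the bill's 10^26 operations line


def estimate_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Approximate total training FLOPs as ~6 * N * D."""
    return 6.0 * n_parameters * n_tokens


# Hypothetical training runs: (parameter count, training tokens).
hypothetical_runs = {
    "mid-size open model": (70e9, 2e12),      # 70B params, 2T tokens
    "large frontier model": (1.8e12, 15e12),  # 1.8T params, 15T tokens
}

for name, (params, tokens) in hypothetical_runs.items():
    flops = estimate_training_flops(params, tokens)
    covered = flops > COVERED_MODEL_THRESHOLD_FLOPS
    print(f"{name}: ~{flops:.1e} FLOPs -> covered model: {covered}")
```

Under these illustrative assumptions, only a run on the scale of the largest frontier efforts crosses the line, which is precisely the point of the bill’s narrow targeting: anything short of a nine-figure training budget falls outside its scope.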
The journey of SB 1047 to Governor Gavin Newsom’s desk was marked by a series of legislative hurdles and amendments. Introduced by Senator Wiener, the bill went through multiple revisions as it moved through the Senate and Assembly, reflecting the complexities and challenges inherent in regulating a rapidly evolving technology. Each amendment served to refine the bill’s mandates, aiming to balance innovation with accountability. This iterative process highlighted the legislative agility required to keep pace with technological advancements, yet also showcased the difficulties in achieving consensus on the scope and depth of regulatory measures.
Governor Newsom’s decision to veto SB 1047 came as a significant setback for proponents of stringent AI regulation. In his veto message, Newsom expressed concerns about the bill’s potential effectiveness and scope, suggesting that while the intentions behind the legislation were sound, its practical implications could hinder technological progress. The Governor hinted at the possibility of future legislation that might address these concerns more holistically, indicating an ongoing dialogue about the role of governance in tech development.

Behind the scenes, however, there may be deeper economic considerations at play, particularly the impact on California’s tech giants. These companies, driving significant economic activity and innovation, could view such regulation as a constraint, potentially prompting shifts in operations or focus to less regulated environments. This scenario raises profound questions about the balance between fostering innovation and ensuring safety in the digital age.

As we advance, the notion of “gatekeepers” in technology—akin to those in the European Union’s discussions around digital marketplaces—becomes increasingly pertinent. In the EU, entities like Apple and Amazon are scrutinized for their role in shaping market dynamics, with calls for enhanced regulatory frameworks to prevent abuses of power and ensure competitive equity. Transposing this concept to AI, as well as to adjacent domains such as social media and advanced algorithmic decision-making, suggests a pathway where major players are held to higher standards of accountability, potentially mitigating widespread harms.
Fight the Incentives, Not the Companies
In light of these considerations, I advocate for a nuanced, yet firm regulatory approach that recognizes the unique position of major AI developers as gatekeepers of technology’s future. Just as we demand high safety standards from other industries with significant potential impacts on public welfare, so too should the giants of AI be subject to stringent oversight. This is not just about preventing the apocalyptic visions of science fiction from becoming reality. It’s about ensuring that AI develops in a way that genuinely benefits society, safeguarding against the unintended consequences that arise from unchecked technological advancement.

The narrative around SB 1047 and its eventual veto is not merely a legislative anecdote but a critical juncture in our ongoing discourse on technology and governance. It serves as a clarion call for proactive engagement with the ethical, social, and economic dimensions of AI. As we stand on the precipice of significant breakthroughs in AI capabilities, our legislative frameworks must evolve in tandem, ensuring that innovation does not outpace our capacity for responsible stewardship. The future of AI should be guided by thoughtful, informed, and perhaps most crucially, ethically grounded decision-making, ensuring that technology serves humanity’s best interests, rather than leading us towards unforeseen vulnerabilities.
“The Gatekeepers’ Gambit” represents a strategic approach to regulating AI, where pivotal entities in the technology sector are identified and held to higher standards due to their significant influence on the digital ecosystem. This strategy is crucial not only for mitigating potential risks but also for ensuring that the immense power wielded by these entities is channeled towards fostering a safe and equitable technological future. By labeling and recognizing certain organizations as gatekeepers, we can impose a targeted regulatory framework that addresses the specific challenges posed by their operations. This can include rigorous auditing processes, enhanced transparency requirements, and stricter compliance standards for AI deployment.

For instance, consider the potential implementation of such a gambit with companies like Facebook and Amazon. These giants, through their extensive user networks and sophisticated data analytics capabilities, exert considerable influence over public discourse and consumer behavior. By applying the gatekeeper label, regulatory bodies could have mandated that Facebook implement more robust and transparent content moderation systems much earlier, potentially curtailing the spread of misinformation and harmful content. Similarly, Amazon could have been required to ensure greater fairness in its marketplace, preventing algorithmically biased outcomes against smaller retailers, thereby fostering a more competitive environment.
In short, the major players in AI are attracting unprecedented investment because they are seen as having the best chance of exerting the most influence in this space, guiding the industry as a whole and generating value as an intermediary between stakeholders in anything AI-related. That sounds like a gatekeeper to me. So, you can strive for that semi-monopolistic position, but that also means you will be the first asked to innovate for regulation in this space. As long as we set the thresholds to include only the top players, we are not targeting one company, but a class of companies… the gatekeepers.
‘What about China?’ Argument
In the discourse on artificial intelligence (AI) development, the argument that the United States must vie for supremacy to maintain a competitive edge over global rivals like China often steers national strategy towards cultivating singular, colossal entities in AI technology. Advocates of this approach argue that such dominance is crucial for national security and technological leadership. However, this perspective, rooted in a “winner-takes-all” philosophy, overlooks significant risks and underestimates the value of a diverse and competitive AI ecosystem.

Historically, the concentration of power and capital in one or a few entities has led to systemic risks and innovation stagnation across industries. The notion of ‘Too Big to Fail,’ which rose to prominence during the 2007-2008 financial crisis, demonstrated the peril of over-relying on major banks. These institutions engaged in risky financial practices under the assumption that they would receive government bailouts if they faltered, an expectation that helped precipitate economic catastrophe. Similarly, in the automotive industry, the near collapse of giants like General Motors and Chrysler in 2009 required government intervention, revealing the vulnerability of an industry dominated by a few. More recently, tech giants like Google and Amazon have been scrutinized under antitrust laws in both the United States and Europe, highlighting issues such as reduced competition, slowed innovation, and excessive control over consumer data. These examples showcase the dangers of a monopolistic or oligopolistic approach, dangers that could be mirrored in the AI sector if not carefully managed.
To counter the narrative that the U.S. needs to outcompete China by centralizing AI power, it is more prudent to foster a resilient, diverse, and competitive AI landscape. This strategy not only mitigates risks but also spurs innovation. Smaller, agile firms often drive technological breakthroughs by taking risks that larger entities might avoid, and because of their smaller footprint, those risks carry limited exposure for the public. By promoting a multitude of strong AI players, the U.S. can enhance its national security more effectively than it could by placing all its bets on one or two dominant leaders. Encouraging a competitive AI environment aligns with democratic values and prevents the concentration of too much power in the hands of a few.
Move Fast and Prevent Harm
As we navigate the complexities of regulating artificial intelligence, it’s essential to recognize that the most influential players in the field—our “Gatekeepers”—are not just developers of technology but also pioneers of the new digital society. Companies like Google, Amazon, and others that operate at the cutting edge have a profound influence on public discourse, consumer behavior, and even the structure of the market itself. This position of influence comes with a unique set of responsibilities, which justifies a focused regulatory approach that is as dynamic and fast-paced as the sectors it seeks to govern.

The mantra of Silicon Valley has long been “Move fast and break things.” This philosophy has driven innovation at a breakneck speed, but it has also led to significant public harms—data breaches, privacy invasions, and a widening digital divide, to name just a few. As we continue to witness the far-reaching impacts of these technologies, it becomes clear that regulators must also adopt a form of this mantra: Move fast and prevent harm. We must be swift and agile in our approach to regulation, just as the industries we regulate are with their innovations.
Focusing on the top-tier AI companies offers a strategic advantage. By tailoring regulations to address the specific technologies and practices of these industry leaders, we can mitigate risks before they become widespread. This targeted approach allows regulators to manage the complexities of advanced technologies in real-time, crafting rules that address the unique challenges posed by each new development. As these leading companies often set trends that shape the entire sector, early interventions can have a broad and lasting impact, setting standards that trickle down through the industry.

This does not mean stifling innovation by imposing overly burdensome regulations. Instead, it’s about ensuring that innovation progresses with a keen awareness of its societal impacts. By regulating the “Gatekeepers,” we can create a feedback loop where the most powerful players in the field are the test beds for new regulatory frameworks. This method not only ensures that the most impactful technologies are kept in check but also helps craft more inclusive and effective overarching regulations that can adapt as the technology evolves.

Ultimately, the goal of this approach is to protect and empower the public. The rapid advancement of technology should not come at the expense of public welfare. Instead, technological progress should be aligned with the needs and values of society. By moving fast to break bad practices among the gatekeepers of AI, regulators can prevent the public from being “broken” by the unchecked consequences of innovation.
As we stand at the crossroads of unprecedented technological capabilities and potential societal shifts, the need for proactive and responsive regulation has never been more critical. The “Gatekeepers’ Gambit” is not just a strategy but a necessary evolution in how we govern the digital age, ensuring that technology serves humanity’s best interests and fosters a future where innovation thrives alongside ethical responsibility and public trust.
About the Author
Eric Hawkinson
Learning Futurist
erichawkinson.com
Eric Hawkinson is a Learning Futurist at Kyoto University of Foreign Studies, where he focuses on the integration of technology into education. Specializing in the creation of immersive learning environments, Eric employs augmented and virtual reality to enhance learning outcomes. He is an advocate for digital literacy and privacy, promoting open access to information and ethical technology practices. Outside his academic role, Eric is engaged in public outreach and professional development. He has established immersive learning labs, designed online courses, and advised on technology strategies across various sectors. His professional designations include Adobe Education Leader, Google for Education Certified Innovator, and Microsoft Innovative Expert. Eric’s notable projects, such as AR experiences for TEDxKyoto and WebVR for Model United Nations, reflect his commitment to using advanced technologies for global education and collaboration. Eric is dedicated to exploring the challenges and opportunities presented by emerging technologies, contributing significantly to the evolution of educational practices.
Roles
Professor – Kyoto University of Foreign Studies
Research Coordinator – MAVR Research Group
Founder – Together Learning
Developer – Reality Labo
Community Leader – Team Teachers
Co-Chair – World Immersive Learning Labs