Introduction
Late last May, OpenAI CEO Sam Altman testified before the Senate Judiciary Committee on the ascendant technology of generative AI. The putative reason for summoning Altman was to allay senators' concerns, which stem from a mix of ignorance and dystopian fiction. The actual motivations, however, were a blend of protectionism over the technology's impact on domestic labor markets, worry about the spread of misinformation, and frustration that the technology has outpaced the regulatory framework.
Congressional Dynamics
Though congressional treatment of tech firms is often antagonistic, as seen in Senator Josh Hawley's (R-Mo.) belligerent grilling of Google CEO Sundar Pichai, the relationship between corporations and the state is typically one of mutual parasitism, with consumers as the host organism. Given Altman's pleas for regulation, including a recent call for an international AI regulatory agency, it is unsurprising that he found "a friendly audience in the members of the subcommittee" on privacy, technology, and the law.
Concerns and Motivations
Altman was practically courting policymakers for protection, albeit under the guise of concern for the common good. The former president of Y Combinator has no excuse for vague statements like "if this technology goes wrong, it can go quite wrong." If Altman has specific concerns that genuinely affect the public, he should articulate them clearly.
OpenAI’s Transition
Though OpenAI was founded as a nonprofit in 2015, it transitioned to a capped-profit model in 2019. There is nothing inherently wrong with this shift; OpenAI required substantial capital to afford tens of thousands of H100 GPUs (roughly $40,000 per GPU) and to attract talent. To cover these expenses, OpenAI needed to draw in shareholders and strategic investments from corporate competitors.
The Resulting Landscape
The outcome? A remarkably user-friendly generative AI tool, freely accessible to the public. However, now that OpenAI is in the profit-maximizing business, it faces a perverse incentive: to deliver returns to its shareholders through regulatory capture.
Regulatory Capture
Instead of maintaining high profit margins through relentless innovation and iterative improvements to ChatGPT, OpenAI can restrict market entry by government fiat. The proposal Altman advocated at the hearing? According to New York Times reporting, "an agency that issues licenses for the development of large-scale A.I. models, safety regulations and tests that A.I. models must pass before being released to the public."
Barriers to Entry
Read: hurdles, obstacles, and barriers to entry. The capital investment required already makes market entry difficult; regulatory capture makes it virtually impossible.
Historical Context
As Don Lavoie argued in National Economic Planning: What Is Left? (1985), central planning was "nothing more nor less than governmentally sanctioned moves by leaders of the major industries to insulate themselves from risk and the vicissitudes of market competition." Regulation is merely central planning's less ambitious corollary: a means for the corporate elite "to use government power to protect their profits from the threat of rivals." For more on Lavoie and an incisive analysis of his contributions to the knowledge-problem literature, we refer the reader to Cory Massimino's piece for EconLib.
OpenAI’s Strategies
In reality, OpenAI is pursuing both approaches. It is profiting from the first-mover advantage conferred by its research and development efforts, and it has partnered with Apple to bundle ChatGPT with services like Siri. At the same time, it is attempting to maximize rents through regulatory capture. While the first two strategies increase total surplus, the third destroys it.
Regulatory Concerns
One would think that the current neo-Brandeisian FTC regime would sound the alarm about such an obvious bid to restrict market entry and facilitate collusion. Staff in the FTC's Bureau of Competition and Office of Technology even released a statement, "Generative AI Raises Competition Concerns." Unfortunately, the regulators do not articulate any concerns about collusion aided by government intervention.
Conclusion
As AI continues to advance, Luddism spreads, and Congress holds more hearings on regulation, we should regard these ostensible public lashings with a wary eye toward the regulatory capture that inevitably follows.
Samuel Crombie is the co-founder of actionbase.co and a former Product Manager at Microsoft AI.
Jack Nicastro is an Executive Producer with the Foundation for Economic Education and a research intern at the Cato Institute.