AI antitrust law

Shubha Ghosh, J.D., Ph.D., is the Crandall Melvin Professor of Law and director of the IP Commercialization & Innovation Law Curricular Program at the Syracuse Intellectual Property Law Institute. Views are the author’s own.

Recently, the Department of Justice began investigating software companies that provide pricing algorithms to landlords, tools that critics claim are contributing to inflation in the rental market. While the investigation has generated significant attention because of how politically charged the housing market has become in this inflationary environment, it is far from the only example of the DOJ targeting algorithmic AI.

The DOJ seems to believe that AI services, which are now used in nearly every sector of the economy, could coordinate collusive, anticompetitive behavior in violation of the nation’s antitrust laws.

To assist the DOJ’s efforts, the Senate is considering the Preventing the Algorithmic Facilitation of Rental Housing Cartels Act. This legislation would change the procedural and substantive rules of antitrust law to make it easier for enforcers to crack down on algorithms.

The national economy would suffer because of these special rules, which would sharply undercut legal precedent. Moreover, such well-intentioned policies would have adverse effects on innovation in the United States. To address potential anticompetitive behavior, the DOJ and Congress should focus on conduct, not technologies.

Antitrust law focuses on behavior

It is a basic maxim of antitrust law that collusion is unlawful anticompetitive behavior.

In the landmark 1940 case United States v. Socony-Vacuum Oil Co., the Supreme Court considered whether major oil companies that had conspired to artificially raise and fix gasoline prices violated the Sherman Antitrust Act. It concluded that, by purchasing large quantities of gasoline in spot transactions to maintain prices at artificially high levels, the companies violated Section 1 of the Act, because “any combination which tampers with price structures is engaged in an unlawful activity” and “price-fixing agreements are unlawful per se under the Sherman Act, irrespective of the means employed to establish such prices.”

However, the Court distinguished the unlawful behavior, which it defined as any “agreement or plan embraced to raise and maintain prices” among competitors, from the valid mechanisms or tools used to reach and support those price levels.

With respect to price fixing, the Court underscored that regardless of the pricing methods used, whether removing surplus products or establishing purchasing programs, the primary legal concern is whether the entities’ actions constitute a direct interference with the free play of market forces. Put another way, the DOJ must show an actual agreement to fix prices between two or more parties to prove a Sherman Act violation.

The current DOJ investigation and the proposed congressional bill against rental algorithms would sidestep this essential requirement by creating an inference of an agreement from the mere use of an algorithm. This shift would undermine traditional antitrust safeguards for competition.

In the context of housing AI, there must be proof that the AI, or the parties controlling the AI, engaged in an anticompetitive agreement to set prices. Mere allegations of collusion or parallel behavior are not sufficient to establish liability. The court would require evidence of communication, coordination or other indicia of an agreement among the parties involved.

Without such evidence, the court would not infer an agreement based solely on the AI’s behavior or the similarity in prices set by the AI. Allowing plaintiffs to secure rulings of Sherman Act violations through mere allegations would frustrate the doctrine’s key safeguard: the requirement of an agreement.

Impediment to innovation

The DOJ’s investigation and the proposed legislation in support of an AI antitrust crackdown target an emerging technology for, at best, short-term gains at the expense of long-term innovation.

The test the DOJ seems intent on creating would allow courts to impute harm based on the technical capacity of AI rather than on the intent of the parties deploying it. If the DOJ opts to go down this road, the courts should strike down its actions before a dangerous precedent takes hold.

Artificial intelligence and the algorithms that implement it are still in their infancy. They are far from the capacity of human intelligence, especially in their ability to facilitate collusion. The DOJ should not penalize artificial intelligence at this early stage of its development.

Conclusion

An agreement to fix prices requires a meeting of human minds intending to act anticompetitively. Artificial intelligence and algorithms cannot enter into such an agreement or form such intent.

Existing antitrust law requires proof of an agreement to set prices to establish the existence of a cartel. The proposed approach upends this well-recognized requirement by allowing the use of AI technology to serve as proof of an agreement. That profound shift would penalize the use of emerging technologies and circumvent the need for proof of anticompetitive behavior.

The focus should be on unlawful behavior rather than on tools such as those based on artificial intelligence. Artificial intelligence tools are still being developed and have beneficial uses for data analytics and communication. Perhaps they can also be used to coordinate a cartel, but that is true of many technologies, traditional and emerging alike.

If the investigation continues, the DOJ should pursue identifiable conduct rather than tools that might be used for unlawful activity. Nor is there any need for the special legislation under consideration in Congress. Traditional antitrust principles can be applied without interfering with the development of artificial intelligence.