
U.S. AI Action Plan: Stakeholder Insights on Future Policies

In February, the U.S. Office of Science and Technology Policy (OSTP) issued a Request for Information (RFI) aimed at shaping the Trump administration’s new AI Action Plan. Stakeholders from various sectors contributed insights before the March 15 deadline, presenting their visions for U.S. AI policy.

The responses came from a diverse set of organizations, including OpenAI, Google, and the Center for Data Innovation, and largely focused on advancing U.S. technological leadership while encouraging innovation.

Innovation, Workforce Adoption & Economic Impacts

Commenters weighed in on how the federal government can spur AI growth and develop a capable domestic workforce. Many expressed apprehension that a fragmented regulatory landscape at the state level could hinder progress. Major players like Google and OpenAI supported federal preemption of state regulations that could harm America’s innovation drive.

Others, including the News/Media Alliance and the Center for Data Innovation, advocated for measures to bolster competition and support smaller companies engaging with AI technologies.

Export Controls and Global AI Leadership

Many responses highlighted concerns about China’s rapid AI advancements. OpenAI noted that the Chinese government’s support for AI could diminish U.S. advantages. Recommendations included stricter controls on the export of advanced technologies and collaborative measures with allies to secure AI leadership.

Infrastructure and Energy

Stakeholders urged infrastructure reforms to meet AI’s growing power demands, suggesting the establishment of special zones to attract investment in data centers and energy resources.

Government Adoption of AI

A consistent theme was the slow adoption of AI within federal agencies. OpenAI characterized the current government engagement with AI as ‘unacceptably low,’ urging reforms to accelerate AI integration across various governmental functions.

AI Security and Safety

Several organizations, including Anthropic and Google, emphasized the need to weigh national security considerations in AI applications. Recommendations included establishing national standards for AI development to mitigate the risk that increasingly capable models are misused.

In summary, a strong consensus emerged among stakeholders: a cohesive, proactive approach to AI regulation is needed for the U.S. to maintain its leadership in global technology while fostering an innovative environment.