
Introduction

Ahead of the AI safety summit in Seoul, South Korea, the United Kingdom is expanding its efforts in AI safety by opening a second location of the AI Safety Institute in San Francisco.

Why San Francisco?

The Bay Area is home to leading AI companies like OpenAI, Anthropic, Google, and Meta. By having a presence in San Francisco, the UK aims to be closer to the epicenter of AI development and gain better access to these companies.

“By having people on the ground in San Francisco, it will give them access to the headquarters of many of these AI companies,” said Michelle Donelan, the UK Secretary of State for Science, Innovation, and Technology.

Strategic Importance

Being closer to these companies not only helps in understanding what is being built but also increases the UK’s visibility. This is crucial as AI and technology are seen as significant opportunities for economic growth and investment.

Recent Developments

The AI Safety Institute, launched in November 2023, has already made notable strides. One of its significant achievements is the release of Inspect, a set of tools for testing the safety of foundational AI models.
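Inspect is distributed as an open-source Python package (inspect_ai). As a rough illustration of the kind of evaluation it supports, the sketch below defines a tiny task with a single sample, a generation step, and a simple scorer. Module and parameter names follow the public package but may differ across versions, and the model identifier is a placeholder, not a recommendation.

    # Minimal sketch of an Inspect-style evaluation, assuming the
    # open-source inspect_ai package; names may vary between versions.
    from inspect_ai import Task, task, eval
    from inspect_ai.dataset import Sample
    from inspect_ai.solver import generate
    from inspect_ai.scorer import includes

    @task
    def toy_eval():
        # One toy sample: the scorer checks whether the target string
        # appears in the model's output.
        return Task(
            dataset=[Sample(input="What is 2 + 2?", target="4")],
            solver=generate(),   # ask the model for a completion
            scorer=includes(),   # score by substring match against the target
        )

    # Running the task against a model (identifier is a placeholder):
    # eval(toy_eval(), model="openai/gpt-4o")

Real evaluations would swap the toy dataset for benchmark samples and use more sophisticated solvers and scorers, but the task structure is the same.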

Challenges and Future Plans

Despite these advancements, there are challenges. Companies are not legally obligated to have their models vetted, so engagement has been inconsistent. Donelan said the evaluation process is still evolving, and the UK aims to present Inspect to regulators at the Seoul summit.

Long-Term Vision

Donelan believes more AI legislation will eventually be introduced in the UK, but only once the risks of AI are better understood. For now, the focus is on taking an international approach to AI safety: sharing research and working collaboratively with other countries.

“Since day one of the Institute, we have been clear on the importance of taking an international approach to AI safety, sharing research, and working collaboratively with other countries to test models and anticipate risks of frontier AI,” said Ian Hogarth, chair of the AI Safety Institute.