The United States has taken a major step in addressing artificial intelligence safety concerns by bringing together experts from nine nations and the European Commission for the first meeting of the International Network of AI Safety Institutes (AISIs) in San Francisco.
"AI is a technology like no other in human history," declared U.S. Commerce Secretary Gina Raimondo at the gathering. She emphasized two key principles: preventing the release of potentially dangerous models and keeping AI focused on serving humanity rather than the reverse.
The network includes representatives from the U.S., U.K., Australia, Canada, France, Japan, Kenya, South Korea, and Singapore. Their shared mission centers on building technical expertise, understanding AI safety risks, and promoting global cooperation on AI development safeguards.
As part of this initiative, the U.S. announced a new Testing Risks of AI for National Security (TRAINS) Taskforce. This group will bring together multiple government departments to examine AI's implications for national security, including nuclear safety, cybersecurity, and military capabilities.
The timing of this international collaboration is particularly notable given rising tensions between the U.S. and China over AI development. Chinese cyber threats are an increasing concern, and China's absence from the network stands out, especially as the Chinese lab DeepSeek recently announced a new "reasoning" model that competes with OpenAI's technology.
Recent developments have heightened concerns about AI safety. The U.S. and U.K. AISIs recently evaluated Anthropic's Claude 3.5 Sonnet model, finding that its built-in safety measures could be "routinely circumvented." This discovery underscores the challenges in controlling advanced AI systems.
The San Francisco meeting identified three priority areas requiring immediate international cooperation:
- Managing synthetic content risks
- Testing foundation models
- Assessing advanced AI system risks
To support these efforts, $11 million in funding has been allocated for research into mitigating synthetic content risks, including fraud and the generation of abuse material.
Looking ahead, France will host an AI Action Summit in February, following earlier summits in the U.K. and Seoul. These ongoing international gatherings reflect a growing recognition that AI governance requires coordinated global action as the technology's capabilities continue to advance rapidly.
As Raimondo noted, balancing safety with innovation remains paramount: "Safety breeds trust. Trust speeds adoption. Adoption leads to more innovation." This approach aims to create a sustainable framework for AI development that serves humanity's best interests while managing potential risks.