Frontier Model Forum—Power Players Unite to Make AI Safer
By: Craig Kennedy
On Wednesday, July 26th, 2023, Microsoft, OpenAI, Google, and Anthropic jointly announced the launch of the Frontier Model Forum, a new industry body focused on ensuring the safe and responsible development of frontier AI models. The forum defines frontier models as large-scale machine-learning models that exceed the capabilities of the most advanced existing models and can perform a wide variety of tasks.
Four Core Objectives
The announcement included four core objectives for the Frontier Model Forum:
- Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
- Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
- Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
- Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.
Building on contributions already made by the US and UK governments, the EU, the G7, and others, the Frontier Model Forum will focus on the first three of these objectives over the coming year to ensure frontier AI models are developed and deployed responsibly.
The Frontier Model Forum is open to other organizations that meet the membership criteria outlined by the forum.
What’s Next for the Frontier Model Forum?
The four founding companies will fund an executive board and working group to put its charter and governance in place, as well as establish an Advisory Board to help guide its strategy and priorities. The Frontier Model Forum is positioning itself to collaborate with established government and regulatory agencies, in addition to existing industry, civil society, and research efforts.
Bottom Line:
This is a good initial step toward getting some of the leading AI technology providers together to work with government and non-government organizations on establishing sensible guardrails around AI model development. Large-scale machine-learning models are complex, and having the leading companies in this field at the table should make for a healthy, balanced discussion.
See Craig LIVE for “Cybersecurity in the Age of AI: Fighting Fire With Fire”
Airing LIVE on Thursday, August 17th at 10 AM PT | 1 PM ET
Cybercriminals are aggressively weaponizing artificial intelligence (AI) to launch increasingly effective attacks against organizations. They are using AI to mount sophisticated and stealthy cyberattacks: creating realistic deepfakes, generating malware that evades detection systems, crafting convincing phishing emails, and identifying and exploiting vulnerabilities in real time.
In this webinar, you will learn how AI can help you fight fire with fire to combat and survive these AI-powered cybersecurity attacks.
Some key areas we’ll cover:
- How cybercriminals are changing the game with AI
- What solutions are available to combat these threats
- How emerging AI technologies will transform cybersecurity
This blog is a part of the Digital Operations blog series by Aragon Research’s Sr. Director of Research, Craig Kennedy.
Missed an installment? Catch up here!