
Frontier Model Forum—Power Players Unite to Make AI Safer

By: Craig Kennedy

 


On Wednesday, July 26, 2023, Microsoft, OpenAI, Google, and Anthropic jointly announced the launch of the Frontier Model Forum, a new industry body focused on ensuring the safe and responsible development of frontier AI models. The forum defines frontier models as large-scale machine-learning models that exceed the capabilities of the most advanced existing models and can perform a wide variety of tasks.

Four Core Objectives

The announcement included four core objectives for the Frontier Model Forum:

  1. Advancing AI safety research to promote responsible development of frontier models, minimize risks, and enable independent, standardized evaluations of capabilities and safety.
  2. Identifying best practices for the responsible development and deployment of frontier models, helping the public understand the nature, capabilities, limitations, and impact of the technology.
  3. Collaborating with policymakers, academics, civil society, and companies to share knowledge about trust and safety risks.
  4. Supporting efforts to develop applications that can help meet society’s greatest challenges, such as climate change mitigation and adaptation, early cancer detection and prevention, and combating cyber threats.

Building on contributions already made by the US and UK governments, the EU, the G7, and others, the Frontier Model Forum will focus on the first three of its key objectives over the coming year to ensure frontier AI models are developed and deployed responsibly.

The Frontier Model Forum is open to other organizations that meet the membership criteria outlined by the forum.

What’s Next for the Frontier Model Forum?

The four founding companies will fund a working group and executive board to put in place the forum's charter and governance, and will establish an Advisory Board to help guide its strategy and priorities. The Frontier Model Forum is positioning itself to collaborate with established government and regulatory agencies, in addition to existing industry, civil society, and research efforts.

Bottom Line:

This is a good initial step toward getting some of the leading AI technology providers together to work with government and non-government organizations to establish sensible guardrails around AI model development. Large-scale machine-learning models are complex, and giving the leading companies in this field a seat at the table should make for a healthy, balanced discussion.


See Craig LIVE for “Cybersecurity in the Age of AI: Fighting Fire With Fire”

Airing LIVE on Thursday, August 17th at 10 AM PT | 1 PM ET

 

Cybercriminals are aggressively weaponizing artificial intelligence (AI) to launch increasingly effective cyberattacks against organizations. They are using AI to mount sophisticated and stealthy attacks, such as creating realistic deepfakes, generating malware that can evade detection systems, crafting convincing phishing emails, and identifying and exploiting vulnerabilities in real time.

In this webinar, you will learn how AI can help you fight fire with fire to combat and survive these AI-powered cybersecurity attacks.

Some key areas we’ll cover:

Register Here


This blog is a part of the Digital Operations blog series by Aragon Research’s Sr. Director of Research, Craig Kennedy.

Missed an installment? Catch up here!

 

Blog 1: Introducing the Digital Operations Blog Series

Blog 2: Digital Operations: Keeping Your Infrastructure Secure

Blog 3: Digital Operations: Cloud Computing

Blog 4: Cybersecurity Attacks Have Been Silently Escalating

Blog 5: Automation—The Key to Success in Today’s Digital World

Blog 6: Infrastructure—Making the Right Choices in a Digital World

Blog 7: Open-Source Software—Is Your Supply Chain at Risk?

Blog 8: IBM AIU—A System on a Chip Designed For AI

Blog 9: IBM Quantum: The Osprey Is Here

Blog 10: The Persistence of Log4j

Blog 11: AWS re:Invent 2022—Focus on Zero-ETL for AWS

Blog 12: AWS re:Invent 2022—The Customer Is Always Right

Blog 13: How Good is the New ChatGPT?

Blog 14: The U.S. Department of Defense Embraces Multi-Cloud

Blog 15: 2022 Digital Operations—The Year in Review

Blog 16: Lucky Number 13 for Intel—Intel Is Back on Top

Blog 17: Quantum Decryption—The Holy Grail for Cybercriminals

Blog 18: Microsoft and OpenAI—Intelligent Partnership

Blog 19: ChatGPT—The First One Is Free

Blog 20: Bing and ChatGPT—Your Co-Pilot When Searching the Web

Blog 21: ESXiArgs—Ransomware Attack on VMware

Blog 22: The Cost of Supply Chain Security—$250M in Sales

Blog 23: OpenAI Delivers on APIs—Accelerating the Adoption of ChatGPT

Blog 24: OpenAI Delivers on Plugins—Is ChatGPT The New Generative Content Platform?

Blog 25: Microsoft Security Copilot—Defending the Enterprise at the Speed of AI

Blog 26: Operation Cookie Monster Takes a Huge Bite Out of The Dark Web

Blog 27: AWS Bedrock—Amazon’s Generative AI Launch

Blog 28: Google Cloud Security AI Workbench—Conversational Security

Blog 29: World Password Day—Is This the Last Anniversary?

Blog 30: Intel Partners to Enter the Generative AI Race—Aurora genAI

Blog 31: Charlotte AI—CrowdStrike Enters the Generative AI Cybersecurity Race

Blog 32: NICE Catches the Generative AI Wave

Blog 33: AMD Instinct MI300X—A New Challenger to Nvidia

Blog 34: Storm-0558—Chinese Cyber Attack on US Government Organizations

Blog 35: Network Resilience Coalition—Making the Network Safer
