Meta Dissolves Responsible AI Team Amidst OpenAI Shakeup
By Adam Pease
The tech world has been reeling from the unexpected news of OpenAI’s C-Suite shakeup. As OpenAI’s future remains uncertain, Meta made its own pivot, disbanding its team formerly responsible for AI safety.
This blog discusses the news and its implications both for AI safety and for the overall generative AI market.
AI Alignment In a Corporate Landscape
AI alignment, which refers to the process of developing AI systems that effectively understand human preferences and take action in line with human goals, is an increasingly popular topic as large language models continue to advance.
Many experts fear that ‘misaligned’ AI could present dangerous—even existential—risks to human society. Since its inception, OpenAI has made AI alignment a central part of its mission, a choice that has led it to depart from a formerly open-source approach to releasing models, opting instead for a controlled strategy.
At the same time, many have been critical of the focus on alignment and safety, suggesting that it may represent an attempt at regulatory capture, or lead to a world where AI is controlled by a small monopoly of designated experts. Meta has been at the forefront of the open-source language model revolution, with many of its chief scientists arguing that freely accessible models are essential to a democratic future with AI.
Is the Future of AI Open Source?
Employees who had worked on Meta’s Responsible AI team will now pivot to other projects, such as infrastructure. While Meta noted that it still considers AI safety an important value, the move suggests that its priorities are shifting toward the continued release of open-source models, which can fuel development and commercialization.
Whether OpenAI will follow suit in the wake of its shakeup is hard to say for now—it is possible that Altman’s departure will lead to an even more intense focus on safety, with state-of-the-art foundation models kept under tighter control. There is also the possibility that OpenAI will shift in the opposite direction, opting, like Meta, to return to its roots in open-source AI development.
Bottom Line
At the same time as private companies formalize their AI safety policies, the US federal government has made moves to regulate AI, recently announcing an Executive Order on AI that establishes many general guidelines for the market. This changing landscape creates an uncertain future for AI innovators, but no doubt holds many opportunities as well.
Catch Analyst Adam Pease LIVE with VP of Research Betsy Burton for our free webinar!
Airing on Wednesday, November 29th at 10 AM PT | 1 PM ET
The Aragon Research Tech Arc: Emerging Technologies You Need To Know
In this upcoming webinar, we will offer a sneak peek at Aragon Research’s 2023 Emerging Technologies Technology Arc. We will also review several Technology Arcs from 2023, highlighting the most significant technologies and the major trends and disruptions that this year’s set of Technology Arcs reveals. We invite you to join us for a dynamic discussion of the technologies that demand your attention and preparation for the future.
Key topics for discussion will include:
- Identifying the major emerging technologies that demand attention from every organization.
- Analyzing the significant technologies and trends that have surfaced in Aragon’s Technology Arc research for 2023.
- Gaining insights into how organizations should proactively prepare for the technologies of the future.
Don’t miss out on this opportunity to stay ahead of the technology curve and equip your organization for what lies ahead.