OpenAI announced the formation of a new Preparedness team tasked with assessing highly advanced foundation models for catastrophic risks and producing a policy for safe development of these models.
OpenAI also announced a preparedness challenge in which contestants fill out a survey, with the top ten submissions each receiving $25,000 in API credits.
A phrase increasingly used both inside and outside of government when discussing future harms is Frontier AI.
Frontier AI is cutting-edge artificial intelligence that offers the possibility of solving humankind's greatest problems but also carries the potential for great harm.
OpenAI defines Frontier AI as:
“…highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety.
Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model’s capabilities from proliferating broadly.”
OpenAI described the challenges of managing Frontier Models as quantifying the extent of harm should an AI be misused, forming an idea of what a framework for managing those risks would look like, and understanding what harm might come to pass should those with malicious intent get ahold of the technology.
The Preparedness team is tasked with minimizing the risks of Frontier Models and producing a report, called a Risk-Informed Development Policy, that will outline OpenAI's approach to evaluation, monitoring, and oversight of the development process.
OpenAI describes the responsibilities of the team:
“The Preparedness team will tightly connect capability assessment, evaluations, and internal red teaming for frontier models, from the models we develop in the near future to those with AGI-level capabilities.
The team will help track, evaluate, forecast and protect against catastrophic risks spanning multiple categories…”
OpenAI Preparedness Team
Governments around the world are evaluating the current potential for harm, what future harms may be possible from Frontier AI, and how best to regulate AI development.
OpenAI's Preparedness team is a step toward getting ahead of that discussion and finding answers now.
As part of that initiative, OpenAI announced a preparedness challenge, offering $25,000 in API credits to the top ten suggestions for catastrophic misuse prevention.
Read OpenAI’s announcement: