OpenAI has laid out plans to prevent worst-case scenarios arising from the powerful artificial intelligence technology it is developing.
The company behind the mega-viral chatbot ChatGPT this week unveiled a 27-page “Preparedness Framework” document that outlines how it is working to track, evaluate and protect against “catastrophic risks” from cutting-edge AI models.
These risks range from AI models being used to cause a mass cybersecurity disruption to assisting in the creation of biological, chemical or nuclear weapons.
As part of the checks and balances under the new preparedness framework, OpenAI says company leadership holds decision-making power over whether to release new AI models, but the board of directors has the final say and the “right to reverse decisions” made by the OpenAI leadership team.
But even before a decision reaches the point of a board veto, the company says a potentially risky AI model would have to clear a series of safety checks. A dedicated “preparedness” team will lead much of the multi-pronged effort to monitor and mitigate potential risks from advanced AI models at OpenAI.
Massachusetts Institute of Technology professor Aleksander Madry is currently on leave from MIT to spearhead the startup’s preparedness team.
He will oversee a group of researchers tasked with evaluating and closely monitoring potential risks and synthesizing these various risks into scorecards. These scorecards, in part, categorize certain risks as “low,” “medium,” “high” or “critical.”
The preparedness framework states that “only models with a post-mitigation score of ‘medium’ or below can be deployed,” and only models with a “post-mitigation score of ‘high’ or below can be developed further.”
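In practical terms, those thresholds amount to a simple gating rule on a model’s post-mitigation risk tier. The sketch below is illustrative only, assuming an ordered ranking of the four scorecard tiers; the function names and structure are hypothetical and not drawn from OpenAI’s actual tooling.

```python
# Illustrative sketch of the gating rule described in the framework.
# The tier ordering and function names are assumptions for illustration,
# not OpenAI's actual tooling.

RISK_TIERS = ["low", "medium", "high", "critical"]  # ordered least to most severe


def tier_rank(tier: str) -> int:
    """Return the severity rank of a scorecard tier (0 = least severe)."""
    return RISK_TIERS.index(tier)


def can_deploy(post_mitigation_score: str) -> bool:
    """Deployment is allowed only at a post-mitigation score of 'medium' or below."""
    return tier_rank(post_mitigation_score) <= tier_rank("medium")


def can_develop_further(post_mitigation_score: str) -> bool:
    """Further development is allowed only at a post-mitigation score of 'high' or below."""
    return tier_rank(post_mitigation_score) <= tier_rank("high")


if __name__ == "__main__":
    for score in RISK_TIERS:
        print(score, "| deploy:", can_deploy(score), "| develop further:", can_develop_further(score))
```

Under this reading, a model scored “critical” after mitigations could neither be deployed nor developed further, while one scored “high” could be developed but not released.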
Notably, the company said the document is in “beta” and is expected to be updated regularly based on feedback.
The framework throws another spotlight on the unusual governance structure at the powerful artificial intelligence startup, which saw its board overhauled in the wake of a corporate blowup last month that resulted in CEO Sam Altman being ousted and then reinstated over the course of just five days.
The closely watched corporate drama raised fresh questions at the time about Altman’s power over the company he co-founded, and about how much oversight the board really had over him and his leadership team.
The current board, which OpenAI says is “initial” and in the process of being built out, consists of three wealthy, White men who have the tall task of ensuring OpenAI’s most advanced technology accomplishes its mission to benefit all of humanity.
The lack of diversity on the interim board has come under widespread criticism. Some critics have also raised concerns that relying on a company to self-regulate is not enough, and that lawmakers need to do more to ensure the safe development and deployment of AI tools.
The latest proactive safety checks outlined by OpenAI arrive after a year in which the tech sector, and much of the world beyond it, has debated the potential of an AI apocalypse.
Hundreds of top AI scientists and researchers — including OpenAI’s Altman and Google DeepMind chief executive Demis Hassabis — signed a one-sentence open letter earlier this year that said mitigating the “risk of extinction from AI” should be a global priority alongside other risks “such as pandemics and nuclear war.”
The statement drew widespread alarm from the public, though some industry watchers later accused companies of using far-off apocalypse scenarios to deflect attention from the current harms associated with AI tools.