In a plan disclosed on Monday, OpenAI laid out a framework to address safety concerns around its most advanced models, including granting its board the authority to reverse safety decisions.
Backed by Microsoft, OpenAI says it will deploy its latest technology only when it is deemed safe in specific areas such as cybersecurity and nuclear threats. The company is also creating an advisory group to review safety reports, ensuring a thorough evaluation process. Notably, while executives will make deployment decisions, the board retains the power to reverse them, adding a layer of checks and balances.
Since the launch of ChatGPT a year ago, the potential dangers of AI have been top of mind for both AI researchers and the general public. Generative AI has dazzled users with its ability to write poetry and essays, but it has also stoked fears that the technology could spread disinformation and manipulate humans.
In April, a coalition of AI industry leaders and experts called for a six-month pause in developing systems more powerful than OpenAI's GPT-4, citing potential risks to society. A May Reuters/Ipsos poll found that more than two-thirds of Americans are concerned about AI's possible negative effects, and 61% believe it could pose a threat to civilization.