OpenAI, the developer of ChatGPT, AI startup Anthropic, Microsoft, and Google have jointly announced the formation of the Frontier Model Forum. The organization will focus on developing new AI models that are “safe and responsible.”
The Frontier Model Forum was established to oversee the development of advanced AI models. (Photo: The Guardian).
According to Brad Smith, the President of Microsoft, companies creating AI technology have a responsibility to ensure its safety, security, and human oversight. The Frontier Model Forum represents an important step in bringing the technology industry together to enhance the responsibility of AI and address challenges to benefit everyone.
The forum members stated that their main goal is to promote research in AI safety, such as developing evaluation standards for models, encouraging the deployment of responsible advanced AI models, discussing trust and risks of AI with policymakers and scholars, and helping research the applications of AI in areas like climate change mitigation and cancer detection.
The forum’s launch comes at a time when countries are pushing for AI governance processes. On July 21, technology companies, including the founding members of the Frontier Model Forum, agreed to new AI safeguards after a meeting with U.S. President Joe Biden.
The parties committed to watermarking AI-generated content to make it easier to detect things like deepfakes, and to allowing independent experts to verify AI models.
Earlier, on July 18, the United Nations Security Council held its first meeting on AI. During the meeting, UK Foreign Secretary James Cleverly remarked that AI “will fundamentally change all aspects of human life.” He emphasized the urgency to “shape the governance of transformative technologies because AI knows no boundaries.”
@Vietnamnet