According to a written statement from the White House, seven US companies, including OpenAI, Amazon, Google, Meta and Microsoft, have agreed to adhere to security measures before launching their artificial intelligence products.
The measures in question include rules on third-party auditing of artificial intelligence systems used for commercial purposes and the protection of citizens’ personal information.
Four of the companies, along with Anthropic, Inflection and OpenAI, the developer of ChatGPT, voluntarily agreed to perform security tests "to be carried out in part by independent experts".
The security tests are intended to protect against biosecurity and cybersecurity risks, which are among the most significant problems posed by artificial intelligence.
The companies also agreed to disclose the flaws and risks in their technologies to the public within the scope of these measures.
It was noted that the agreed measures pave the way for the US Congress to pass laws regulating artificial intelligence in the future.
It was stated that various countries, including the UK, Japan, South Korea and Australia, were consulted to develop these measures.
Steps toward artificial intelligence regulation around the world
United Nations (UN) Secretary-General Antonio Guterres said on July 18 that a multilateral high-level advisory board would be formed within the UN to establish global standards for artificial intelligence, with a report due at the end of the year.
Senate Majority Leader Chuck Schumer said that Congress must act quickly to regulate artificial intelligence security.
Tech executives calling for regulation in artificial intelligence had gone to the White House in May to meet with US President Joe Biden, Vice President Kamala Harris and other officials.
The European Union and many countries are also looking for ways to regulate artificial intelligence security.