Daily News
G7 to agree AI code of conduct for companies
Published 2 years ago

The Group of Seven (G7) industrial nations is set to establish a voluntary code of conduct for companies involved in the development of advanced Artificial Intelligence (AI) systems. This move comes in response to growing concerns about privacy and security risks associated with AI technology.
This code of conduct is poised to serve as a significant milestone in the regulation of AI across major countries. The G7, consisting of Canada, France, Germany, Italy, Japan, Britain, and the US, together with the European Union, initiated this process in May this year through a ministerial forum known as the “Hiroshima AI process.”
Comprising 11 key points, the code’s objective is to foster a global environment for AI that is safe, secure, and trustworthy. It will offer voluntary guidance for organisations involved in the development of highly advanced AI systems, including foundational models and generative AI systems.
The code’s overarching aim is to strike a balance between reaping the benefits of AI and effectively addressing the associated risks and challenges. It encourages companies to implement measures for identifying, assessing, and mitigating risks throughout the entire AI lifecycle. It also emphasises the need to address incidents and patterns of misuse after AI products have been brought to market.
In addition, the code calls on companies to publish public reports detailing the capabilities, limitations, and appropriate and inappropriate uses of their AI systems. It also stresses the importance of investing in robust security measures to guard against potential vulnerabilities.
By establishing this voluntary code of conduct, the G7 takes a significant step towards governing the development and deployment of advanced AI systems on a global scale, even as the guidance remains non-binding.