Update: Global AI Regulatory Approaches

Sheerin Kalia · Mar 11, 2024

In a previous post here, I provided an overview of the European Union's draft Artificial Intelligence Act ("AIA"). On December 8, 2023, the European Parliament and the Council reached political agreement on the content of the AIA. Next, the agreement will receive formal approval, expected sometime in April 2024. After that, the AIA will be published in the Official Journal of the European Union, the gazette in which binding EU laws appear.

Twenty days after publication, the AIA will enter into force. Its provisions will become enforceable two years after that, with some exceptions: prohibitions will be enforced after 6 months and the rules on general-purpose AI after 12 months. As an interim measure, the EU has created the AI Pact, which invites AI developers around the world to voluntarily implement key legal obligations ahead of the two-year deadline.

Like the EU, Canada and the United States are pursuing dedicated legislation as their artificial intelligence ("AI") governance models. Other countries are regulating AI through existing laws rather than dedicated AI legislation (e.g., Singapore), while still others are taking a hands-off approach, allowing innovators to experiment before committing to a governance model (e.g., India).

Conceptually, differences in AI governance may be attributable to each State's view of whether AI itself should be regulated or only its applications. For example, Canada's proposed Artificial Intelligence and Data Act ("AIDA") regulates both AI and its applications. The goal of AIDA is to regulate the design, development, and deployment of AI in Canada to ensure that AI is safe and non-discriminatory. The White House's executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence, released October 30, 2023, has similar aims. The executive order builds on the White House's earlier Blueprint for an AI Bill of Rights, which sets out five principles to guide the design, use, and deployment of automated systems. Federal legislation, which will likely include legal obligations for businesses, is not expected until after the 2024 U.S. election. In the meantime, multiple AI lawsuits, including discrimination claims and intellectual property litigation by copyright owners, have already begun to make their way through American courts.

While various States sort through their approaches to AI governance, large global organizations that use traditional or generative AI would be well served to establish an AI governance office. The office should be led by a Chief AI Officer and maintain policies that mitigate risks associated with HR, product creation, intellectual property, marketing, data security, and privacy. Policies could also specify when customers and employees will be notified of AI use and set out procedures to ensure that AI supports, rather than replaces, human decision making.

