Regulating AI in Europe

By Catherine Barrett, CEO of Annapolis Analytics

It is increasingly clear that Generative Artificial Intelligence (AI) tools offer innovative ways to expedite work and improve creativity, but also that they come with significant cybersecurity and other risks. In response to these risks, the European Union (EU) is on the cusp of passing comprehensive regulation governing AI products and services that will set a global standard for safe and trustworthy AI.

Specifically, in June 2023, Members of the European Parliament (MEPs) adopted changes to the proposed Artificial Intelligence Act (AI Act) of 2021, moving one step closer to regulation of AI-related goods and services. The AI Act sets forth a risk-based approach to regulating AI, classifying systems as minimal, limited, high, or unacceptable risk, with the greatest degree of requirements and restrictions levied on AI goods and services falling into the high-risk category. The Act also establishes requirements for AI risk management, data and data governance, quality management, and cybersecurity, among others. The EU AI Act is not finalized, however, and negotiations continue among EU countries in the EU Council, with a final version of the regulation expected before the end of 2023 and full enactment of the law sometime thereafter.

Rather than wait for final passage of the law, however, Ireland – specifically, the National Standards Authority of Ireland (NSAI) – is already preparing to implement the EU AI Act. On July 18, 2023, the NSAI published The AI Standards & Assurance Roadmap (hereinafter "the NSAI Roadmap") in accordance with Ireland's first AI strategy, entitled AI – Here for Good: National Artificial Intelligence Strategy for Ireland (2021). The purpose of the NSAI Roadmap is "to ensure Ireland will be ready in good time to implement the EU AI Act" by outlining the roles and responsibilities of the NSAI, the Department of Enterprise, Trade and Employment, the AI national supervisory authority (when designated), and other relevant competent authorities, market surveillance authorities, and notified bodies (when designated).1

The NSAI Roadmap references EU AI standards, including those currently being developed by the European Commission (EC), and the requirements for data, data governance, and cybersecurity for AI systems.2 While these requirements are still in draft form, subsequent EU AI regulations are likely to include directives such as:

Provide organizational and technical solutions to: 

  • Ensure AI systems are resilient against attempts by malicious third parties to exploit vulnerabilities in order to alter their use, behavior, or performance, or to compromise their security properties
  • Prevent and control cyberattacks targeting AI-specific assets, such as training datasets, digital assets, or the underlying information and communications technology (ICT) infrastructure
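
For illustration, below is a minimal sketch, in Python, of one technical control a provider might adopt under such directives: recording cryptographic hashes of training data files so that later tampering with this AI-specific asset can be detected. The directory and manifest names are hypothetical, and a real deployment would pair such checks with access controls and signed manifests.

```python
import hashlib
import json
from pathlib import Path

MANIFEST = Path("dataset_manifest.json")  # hypothetical manifest location


def sha256_of(path: Path) -> str:
    """Compute the SHA-256 digest of a file, reading in chunks."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def record_manifest(data_dir: Path) -> None:
    """Snapshot the current hashes of all dataset files."""
    hashes = {p.name: sha256_of(p) for p in sorted(data_dir.glob("*.csv"))}
    MANIFEST.write_text(json.dumps(hashes, indent=2))


def verify_manifest(data_dir: Path) -> list[str]:
    """Return names of files whose contents no longer match the manifest."""
    expected = json.loads(MANIFEST.read_text())
    tampered = []
    for name, digest in expected.items():
        path = data_dir / name
        if not path.exists() or sha256_of(path) != digest:
            tampered.append(name)
    return tampered


if __name__ == "__main__":
    data_dir = Path("training_data")  # hypothetical dataset directory
    if not MANIFEST.exists():
        record_manifest(data_dir)
        print("Recorded baseline hashes for the training data.")
    else:
        changed = verify_manifest(data_dir)
        if changed:
            print(f"Possible tampering detected in: {changed}")
        else:
            print("All dataset files match the recorded hashes.")
```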

Specifications for:

  • Data governance and data management procedures for AI system providers (specifically, data generation and collection, data preparation operations, design choices, procedures for detecting and addressing biases…) 
  • Quality aspects of datasets used to train, validate, and test AI systems 
  • Quality management systems for AI system providers, including a post-market monitoring process
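
As a rough illustration of the kind of automated check such specifications might call for, the Python sketch below summarizes basic quality aspects of a training dataset: completeness, duplication, and label balance. The file path, label column, and imbalance threshold are all hypothetical; the actual conformity criteria will be defined by the harmonized standards themselves.

```python
import pandas as pd


def dataset_quality_report(df: pd.DataFrame, label_col: str) -> dict:
    """Summarize quality aspects a provider might document for a dataset:
    completeness, duplication, and label balance."""
    label_dist = df[label_col].value_counts(normalize=True).to_dict()
    return {
        "rows": len(df),
        "duplicate_rows": int(df.duplicated().sum()),
        "missing_by_column": df.isna().sum().to_dict(),
        "label_distribution": label_dist,
        # Flag a heavily skewed label distribution (threshold is illustrative).
        "label_imbalance_flagged": max(label_dist.values()) > 0.9,
    }


if __name__ == "__main__":
    # Hypothetical training set with an 'approved' label column.
    df = pd.read_csv("training_data/applications.csv")
    print(dataset_quality_report(df, label_col="approved"))
```

A report like this could feed a provider's quality management system, with the same checks re-run on post-market data to flag drift between the training distribution and real-world inputs.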

As we await final passage of the EU AI Act later this year, and the final versions of the regulations that will follow, these likely requirements give an indication of how organizations can invest carefully to address the rising interest in, and adoption of, AI-related goods and services.