
The EU AI Act

On 6 December 2022, the Council of the European Union announced the adoption of its general approach on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence ('the AI Act').

Key features of the AI Act include:

  • The AI Act is a specific legal framework for AI.
  • Legislation must support AI's potential to drive breakthroughs.
  • The General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR') acts as a 'crystal ball' for the AI Act's likely commercial impact.
  • An AI system's consumer nexus determines its risk profile.
  • Conformity assessments are a pre-market requirement for high-risk AI systems.

In this article, Sean Musch and Michael Borrelli, from AI & Partners, and Charles Kerrigan, from CMS, provide clarity on this ground-breaking piece of legislation on artificial intelligence ('AI') and why firms should take note.

Introduction

An AI system is a machine-based system that can, for a given set of human-defined objectives, generate output, such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.

To understand that statement, it is necessary to know that:

  • AI systems do not operate entirely without human intervention;
  • AI covers a wide range of systems that can be used to deliver multiple outcomes; and
  • self-learning AI systems (i.e. those using self-supervised learning) recognise patterns in training data autonomously, without the need for supervision.

The legal literature on AI includes extensive discussion of the ethical and moral use of AI and of how it should be treated under the laws of different jurisdictions. Under current English law, there is no bespoke framework governing the development, production, and/or operation/use of AI for the benefit of its myriad stakeholders.

Notwithstanding this, the forthcoming AI Act, a European legal framework addressing the fundamental rights and safety risks specific to AI systems, is poised to address these emerging areas of risk.

The AI Act came about because EU law did not:

  • have a specific legal framework for AI;
  • provide a definition of an AI system; or
  • have a set of horizontal rules, built on a single definition of AI, with a single set of requirements and obligations addressing the safety and fundamental rights risks specific to AI technologies in a proportionate, risk-based manner limited to what is strictly necessary.

The development and uptake of AI systems generally takes place in the context of the existing body of EU law, which provides non-AI-specific principles and rules on the protection of fundamental rights, product safety, services, and liability. It is necessary to understand how this body of law influenced the AI Act's design and, crucially, how firms are affected.

Impact on UK businesses

At its core, the AI Act aims to ensure the proper functioning of the European single market by creating the conditions for the development and use of trustworthy AI, that is, how AI systems are made and deployed by businesses for user consumption. AI systems can be viewed in different ways, which affects the way in which they are treated from a legal standpoint.

Firstly, in a technological context, AI systems are typically software-based, but are often also embedded in combined hardware-software systems. Businesses use algorithms in a bimodal way, mainly rule-based and learning-based, which makes AI harder to recognise and to define. Secondly, in a socio-economic context, the use of AI systems has led to important breakthroughs in a multitude of domains, including the ability to support socially and environmentally beneficial outcomes and to provide key competitive advantages to companies. Just as this has been aimed at European-based businesses, third-country firms should expect to understand the legal origins of the AI Act and what it is intending to achieve. Products and services sold are subject to one form of regulation or another, regardless of the industry. Why should AI be any different?

Comparison with the GDPR

Businesses are still feeling the effects of the EU's legislative action to control personal data, otherwise known as the GDPR. The GDPR aimed to protect the fundamental rights and freedoms of natural persons, and in particular their right to the protection of personal data, whenever their personal data is processed. Not only did businesses stand up and take notice of it, they also felt the commercial ramifications if they did not: the reputational, financial, and legal costs of non-compliance were deemed high. Like the AI Act, the GDPR has extremely broad coverage, applying to the processing of personal data through 'partially or solely automated means', which includes any AI system that processes personal data. Comparisons can be drawn both at the level of scope of application and in the granularity with which the provisions apply.

Although compliance costs under the AI Act and the GDPR are potentially not directly comparable (AI Act cost estimates are given per product, whereas GDPR costs are given for the first year), they nevertheless give an idea of the order of magnitude. For example, regarding the GDPR, studies have found that 40% of small- and medium-sized enterprises ('SMEs') spent more than €10,000 on GDPR compliance in the first year, including 16% that spent more than €50,000. Depending on the final form of the AI Act, costs of compliance could also be in this range.

Meaning of 'high risk'

AI systems are considered high-risk where they pose significant risks to the fundamental rights and freedoms of individuals or of whole groups. This remains a contentious point, given the degree of impact perceived to have been caused by AI. One of the discussion points of the AI Act was the need for common criteria and a risk assessment methodology to separate 'high-risk' from 'non-high-risk' AI applications. Knowing the distinction can mean the difference between a lean go-to-market strategy and one filled with complexities and administrative hurdles.

At a high level, it could be reasonable to assume that:

  • AI systems that are safety components of products are high-risk if the product or device in question undergoes a third-party conformity assessment pursuant to the relevant 'New Approach' or 'Old Approach' safety legislation; and
  • for all other AI systems, it should be assessed whether the AI system and its intended use generate a high risk to the health and safety and/or the fundamental rights and freedoms of persons, on the basis of a number of criteria that would be defined in the legal proposal (a sketch of this triage logic follows the list).

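By way of illustration only, the two-pronged test above can be read as a simple triage rule. The sketch below is a minimal, non-authoritative rendering in Python: the field names and the two criteria used for the second prong are hypothetical simplifications for illustration, not terms drawn from the text of the AI Act.

```python
from dataclasses import dataclass

@dataclass
class AISystem:
    """Illustrative record of an AI system under assessment (hypothetical fields)."""
    is_product_safety_component: bool   # safety component of a regulated product?
    needs_third_party_conformity: bool  # product undergoes third-party conformity assessment?
    risks_health_and_safety: bool       # intended use risks health and safety?
    risks_fundamental_rights: bool      # intended use risks fundamental rights and freedoms?

def is_high_risk(system: AISystem) -> bool:
    """Simplified two-pronged triage mirroring the assumptions above; not legal advice."""
    # Prong 1: safety components of products already subject to third-party
    # conformity assessment under 'New Approach'/'Old Approach' legislation.
    if system.is_product_safety_component and system.needs_third_party_conformity:
        return True
    # Prong 2: all other systems are assessed against criteria to be defined
    # in the legal proposal, reduced here to two illustrative flags.
    return system.risks_health_and_safety or system.risks_fundamental_rights

# Example: a stand-alone system with mainly fundamental rights implications.
print(is_high_risk(AISystem(False, False, False, True)))  # True
```

In practice, the criteria in the second prong would come from the final text of the AI Act rather than from two boolean flags; the point of the sketch is only that classification drives everything that follows, including conformity assessment and documentation obligations.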
Again, the message is clear: those AI systems that have an ability to affect the status of an individual, tangible or otherwise, are at the forefront of legislators' minds. 'People, planet, profit', as the recognised saying goes.

Compliance obligations/requirements

Providers and users are first in line. The AI Act proposes horizontal mandatory requirements for high-risk AI systems that would have to be fulfilled for any high-risk AI system to be authorised on the EU market or otherwise put into service. The same requirements would apply irrespective of whether the high-risk AI system is a safety component of a product or a stand-alone application with mainly fundamental rights implications.

As an example, to ensure compliance with the AI requirements, a provider would have to:

  • undertake a conformity assessment to demonstrate compliance with the AI requirements before the system is placed on the market; and
  • re-assess conformity in the event of substantial modifications, to take into account the system's continuous learning capabilities.

For high-risk AI systems, these clear and predictable requirements and obligations placed on all AI value chain participants are mostly common practice for diligent market participants and would ensure a minimum degree of algorithmic transparency and accountability in the development and use of AI systems.

Conclusion

To wrap things up, the AI Act brings wide-scale changes to the development, provision, and use/operation of AI. The obligations it places on firms should not be taken lightly.

Key things to note are:

  • Implementation timeline: Q1 2024 is the expected enforcement date. Pre-emptive actions are strongly recommended.
  • Preparation steps: These depend on the nature, scale, and complexity of the business; putting in place systems and controls to categorise AI systems marks a prudent first step.

Once published, the AI Act would lay down the first landmark regime governing the AI space in a comprehensive and harmonised manner; thus, its breadth would affect the AI industry and could represent a blueprint for other jurisdictions to follow. Therefore, now is a good time to prepare for the main disruptive changes the AI Act is on the point of introducing.

Sean Musch Director [email protected]

Charles Kerrigan Partner [email protected]

Michael Borrelli Director [email protected] AI & Partners, London