Artificial Intelligence
On July 9, 2024, the Federal Trade Commission (FTC) announced that it had issued a proposed order against NGL Labs, LLC and its co-founders for the active marketing of their social media app, NGL App, to children and minors, and for deceptive claims surrounding the use of artificial intelligence (AI) to prevent cyberbullying, in violation of Section 5 of the FTC Act.
On July 8, 2024, the Polish data protection authority (UODO) published a guide to support institutions and organizations in ensuring better protection for children in the digital age. The guide is entitled 'Children's Image on the Internet'.
On July 5, 2024, the Federal Council (Bundesrat) published the draft law on the criminal protection of personal rights against deepfakes. The draft law proposes a provision for the protection of personal rights specifically tailored to deepfakes and similar technical manipulations in the Criminal Code.
On June 21, 2024, the Council of the European Union published a note from the Presidency of the Council of the European Union to the Permanent Representatives Committee on the EU approach to Global Artificial Intelligence (AI) governance.
On July 5, 2024, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) announced that the BfDI along with the Norwegian data protection authority (Datatilsynet) issued a joint statement on the 73rd meeting of the International Working Group on Data Protection in Technology (the Berlin Group).
On June 10, 2024, the cabinet of the United Arab Emirates (UAE) announced that it had approved a charter for the development and use of artificial intelligence (AI) in the UAE.
On July 4, 2024, the Press Information Bureau (PIB) announced the conclusion of the Global IndiaAI Summit. The highlights of the summit include:
On July 4, 2024, the Temporary Commission for Artificial Intelligence in Brazil (the Commission) published its updated report analyzing amendments to Bill No.
On June 19, 2024, the National Center for Artificial Intelligence Development under the Government of the Russian Federation (National Center for AI) announced that the State Parliament (Duma) had passed bill no. 512628-8 on the risks of artificial intelligence (AI) on its second reading.
On July 2, 2024, the Ministry of Information and Communications (MIC) requested public comments on the draft Law on Digital Technology Industry.
What is the scope of the draft Law?
The draft Law provides for its application to the digital technology industry, including:
On July 2, 2024, the Personal Data Protection Authority (KVKK) published its Turkish Journal of Privacy and Data Protection Volume: 6 - Issue: 1 (the Journal).
On July 2, 2024, the Federal Commissioner for Data Protection and Freedom of Information (BfDI) announced that July 6, 2024, will be the end of the term of the BfDI Commissioner Prof.
On July 1, 2024, the Dutch data protection authority (AP) published its annual report for 2023.
Algorithms
The AP highlighted its supervision of algorithms and artificial intelligence (AI). Specifically, the AP noted the ill-considered use of algorithms by government organizations, including:
On July 2, 2024, the Brazilian data protection authority (ANPD) announced that it had published Decision No. 20/2024/PR/ANPD in which it decided to temporarily ban Meta Platform Inc. from processing personal data to train Meta's artificial intelligence (AI), following an ex officio investigation.
On July 1, 2024, the US Senator for Colorado, John Hickenlooper, announced that a bill would be introduced requiring third-party audits for artificial intelligence (AI).
On June 26, 2024, Arkansas Governor, Sarah Huckabee Sanders, announced the launch of a working group to study and offer recommendations for the safe use of artificial intelligence (AI) within the Government. The working group will study, assess, and provide recommendations for policies, guidelines, and best practices for the ethical and effective use of AI.
On February 5, 2024, the Personal Information Protection Commission (PIPC) released the revised 'Guidelines for Processing Pseudonymous Data' (the Guidelines). This revision addresses the limitations of the existing guidelines, which only provided processing standards for structured data.
In this Insight article, Albert Yuen and Jasmine Yung, from Linklaters, discuss the increasing pace of regulatory developments across APAC jurisdictions, particularly focusing on Hong Kong's new Model AI Framework.
We are currently seeing a vast development and deployment of artificial intelligence (AI)-based systems and solutions across sectors and society as a whole.
Artificial intelligence (AI) is rapidly transforming Africa, but harnessing its potential responsibly requires strong governance.
The Information Commissioner's Office (ICO) published a series of chapters highlighting its emerging views on its interpretation of the UK General Data Protection Regulation (GDPR) and Part 2 of the Data Protection Act 2018, in relation to questions around the use, risks, and responsible deployment of artificial intelligence (AI).
In this Insight article, Daniela Schott and Kristin Bauer, from KINAST, elaborate on the Orientation Guide of the Committee of Independent German Federal and State Data Protection Supervisory Authorities - the German Data Protection Conference (DSK) - on artificial intelligence (AI) and data protection.
The Information Commissioner's Office (ICO), the UK data protection authority responsible for enforcing the UK General Data Protection Regulation (UK GDPR), announced earlier this year its series of consultations on how aspects of data protection law should apply to the development and use of generative artificial intelligence (AI) models.
In this Insight article, Dr. Cigdem Ayozger Ongun, Filiz Piyal, and Yaren Kilic, from SRP-Legal, delve into how artificial intelligence (AI) literacy can serve as a crucial tool for privacy protection.
The constant news of the development of artificial intelligence (AI) underscores its sheer prevalence in the world around us.
Generative artificial intelligence (AI) models, that is to say, AI models capable of generating text, images, code, audio, video, and other content as part of their output in response to inputs or prompts, such as OpenAI's ChatGPT and Dall-E, Meta's Llama, and Google's Imagen (accessed via Gemini), require significant volumes of high-quality data.
The rapid pace of development in artificial intelligence (AI) has seemingly only been paralleled by calls for its regulation. Since there is broad consensus regarding the possible impact of unregulated AI, governments globally have responded through policymaking to mitigate this risk.
In this Insight article, Caterina Ravera, Antonia Nudman, and Florencia Fuentealba, from Albagli Zaliasnik, delve into the rapid development of artificial intelligence (AI) systems and the importance of ethical safeguards and accurate media representation.
The EU AI Act
The Council of the European Union announced, on 6 December 2022, the adoption of its general approach on the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence ('the AI Act').
Key features of the AI Act include:
- The AI Act is a specific legal framework for AI.
- Legislation must support AI's potential to drive breakthroughs.
- The General Data Protection Regulation (Regulation (EU) 2016/679) ('GDPR') acts as a 'crystal ball'.
- Consumer nexus determines risk profile.
- Conformity assessments are a pre-market requirement for high-risk AI systems.
In this article, Sean Musch and Michael Borrelli, from AI & Partners, and Charles Kerrigan, from CMS, provide clarity on this ground-breaking piece of legislation on artificial intelligence ('AI') and why firms should take note.
Introduction
An AI system is a machine-based system that can, for a given set of human-defined objectives, generate output, such as content, predictions, recommendations, or decisions influencing real or virtual environments. AI systems are designed to operate with varying levels of autonomy.
To understand that statement, it is necessary to know that:
- AI systems do not operate entirely without human intervention;
- AI covers a wide range of systems that can be used to deliver multiple outcomes; and
- self-learning AI systems (those based on self-supervised learning), a type of AI system, recognise patterns in training data autonomously, without the need for supervision.
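The third bullet can be made concrete with a toy example. The sketch below (illustrative only, standard library, not drawn from the AI Act or any real system) shows unsupervised pattern recognition: a tiny one-dimensional k-means groups unlabelled data points into two clusters without any human-provided labels.

```python
# Minimal illustration of pattern recognition without supervision:
# a 1-D k-means with two centroids, written with the standard library only.

def kmeans_1d(points, iters=20):
    """Cluster 1-D points into two groups; return the two learned centroids."""
    c1, c2 = min(points), max(points)  # naive initialisation
    for _ in range(iters):
        # Assign each point to its nearest centroid (no labels involved).
        g1 = [p for p in points if abs(p - c1) <= abs(p - c2)]
        g2 = [p for p in points if abs(p - c1) > abs(p - c2)]
        # Move each centroid to the mean of its assigned points.
        if g1:
            c1 = sum(g1) / len(g1)
        if g2:
            c2 = sum(g2) / len(g2)
    return c1, c2

# Unlabelled data with two obvious modes; no supervision is supplied.
data = [1.0, 1.2, 0.9, 8.0, 8.3, 7.9]
centroids = sorted(kmeans_1d(data))
print(centroids)  # two learned cluster centres, one near each mode
```

The algorithm discovers the two groupings purely from the structure of the data, which is the essence of the autonomy the bullet describes.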
The legal literature on AI includes extensive discussion of the ethical and moral use of AI in general, and of how it should be treated under the laws of different jurisdictions. Under current English law, there is no bespoke framework governing the development, production, and/or operation/use of AI to benefit the myriad of stakeholders.
Notwithstanding this, the forthcoming AI Act, a European legal framework addressing the fundamental rights and safety risks specific to AI systems, is poised to address these emerging areas of risk.
The AI Act came about because, at the time of its proposal, EU law did not:
- have a specific legal framework for AI;
- provide a definition of an AI system; or
- have a set of horizontal rules providing a single definition of AI and a single set of requirements and obligations that address, in a proportionate, risk-based manner limited to what is strictly necessary, the risks to safety and fundamental rights specific to AI.
The development and uptake of AI systems generally takes place in the context of the existing body of EU law, which provides non-AI-specific principles and rules on the protection of fundamental rights, product safety, services, and liability. It is necessary to understand how this influenced the AI Act's design and, crucially, how firms are affected.
Impact on UK businesses
At its core, the AI Act aims to ensure the proper functioning of the European single market by creating the conditions for the development and use of trustworthy AI, that is, how AI systems are made and deployed by businesses for user consumption. AI systems can be viewed in different ways, which affects the way in which they are treated from a legal standpoint.
Firstly, from a technological perspective, AI systems are typically software-based, but are often also embedded in combined hardware-software products. Businesses use algorithms in a bimodal way, mainly rule-based and learning-based, which complicates recognition of AI and makes it harder to define. Secondly, in a socio-economic context, the use of AI systems has led to important breakthroughs in a multitude of domains, supporting socially and environmentally beneficial outcomes and providing key competitive advantages to companies, to name a few. Just as the AI Act is aimed at European-based businesses, third-country firms should seek to understand its legal origins and what it intends to achieve. Products and services sold are subject to one form of regulation or another, regardless of the industry. Why should AI be any different?
Comparison with the GDPR
Businesses are still feeling the effects of the EU's legislative action to control personal data, otherwise known as the GDPR. The GDPR aimed to protect the fundamental rights and freedoms of natural persons, and in particular their right to the protection of personal data, whenever their personal data is processed. Not only did businesses stand up and take notice of it; they also felt the commercial ramifications if they did not comply, with reputational, financial, and legal costs of non-compliance all deemed high. Like the GDPR, which covers the processing of personal data through 'partially or solely automated means', including by any AI system, the AI Act has extremely broad coverage. Comparisons can be drawn both at the level of its scope of application and the granularity with which its provisions apply.
Although the costs of non-compliance with the AI Act are potentially not directly comparable with those of the GDPR (AI Act costs are given per product, whereas GDPR costs are given for the first year), they nevertheless give an idea of the order of magnitude. For example, regarding the GDPR, studies have found that 40% of small- and medium-sized enterprises ('SMEs') spent more than €10,000 on GDPR compliance in the first year, including 16% that spent more than €50,000. Depending on the final form of the AI Act, compliance costs could also be in this range.
Meaning of 'high risk'
AI systems would be considered high-risk where they pose significant risks to the fundamental rights and freedoms of individuals or whole groups thereof. This remains a contentious point, given the degree of impact perceived to have been caused by AI. One of the discussion points of the AI Act was the need for common criteria and a risk assessment methodology to separate 'high-risk' from 'non-high-risk' AI applications. Knowing the distinction can mean the difference between a lean go-to-market strategy and one filled with a range of complexities and administrative hurdles.
At a high level, it could be reasonable to assume that:
- AI systems that are safety components of products are high-risk if the product or device in question undergoes a third-party conformity assessment pursuant to the relevant new approach or old approach safety legislation; and
- for all other AI systems, it should be assessed whether the AI system and its intended use generates a high risk to the health and safety and/or the fundamental rights and freedom of persons on the basis of a number of criteria that would be defined in the legal proposal.
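The two-limb test above can be sketched as a simple decision function. This is a hypothetical illustration only: the parameter names and criteria are our shorthand, not the AI Act's actual legal test, which turns on detailed criteria defined in the legislation itself.

```python
# Hypothetical sketch of the two-limb high-risk classification described
# in the bullets above. Names and criteria are illustrative shorthand,
# not the AI Act's formal legal test.

def is_high_risk(is_safety_component: bool,
                 needs_third_party_assessment: bool,
                 affects_health_or_safety: bool,
                 affects_fundamental_rights: bool) -> bool:
    # First limb: a safety component of a product subject to third-party
    # conformity assessment under the relevant safety legislation.
    if is_safety_component and needs_third_party_assessment:
        return True
    # Second limb: assess whether the system and its intended use generate
    # a high risk to health, safety, or fundamental rights.
    return affects_health_or_safety or affects_fundamental_rights

print(is_high_risk(True, True, False, False))    # True  (first limb)
print(is_high_risk(False, False, False, True))   # True  (second limb)
print(is_high_risk(False, False, False, False))  # False (neither limb)
```

The point of the sketch is that the classification is structural: a system can be caught by either limb independently, so firms need to check both routes before concluding a system is out of scope.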
Again, the message is clear: those AI systems that have an ability to affect the status of an individual, tangible or otherwise, are at the forefront of legislators' minds. 'People, planet, profit', as the recognised saying goes.
Compliance obligations/requirements
Providers and users are first in line. The AI Act proposes horizontal mandatory requirements for high-risk AI systems that would have to be fulfilled for any high-risk AI system to be authorised on the EU market or otherwise put into service. The same requirements would apply irrespective of whether the high-risk AI system is a safety component of a product or a stand-alone application with mainly fundamental rights implications.
As an example, to ensure compliance with the AI requirements, a provider would have to:
- do a conformity assessment to demonstrate compliance with AI requirements before the system is placed on the market; and
- re-assess conformity in the case of substantial modifications, taking into account the system's continuous learning capabilities.
For high-risk AI systems, these clear and predictable requirements and obligations placed on all AI value chain participants are mostly common practice for diligent market participants and would ensure a minimum degree of algorithmic transparency and accountability in the development and use of AI systems.
Conclusion
To wrap things up, the AI Act brings widescale changes to the development, provision, and use/operation of AI. The obligations for firms should not be taken lightly.
Key things to note are:
- Implementation timeline: Q1 2024 is the expected enforcement date. Pre-emptive actions are strongly recommended.
- Preparation steps: These depend on the nature, scale, and complexity of the business. Putting in place systems and controls to categorise AI systems marks a prudent first step.
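A minimal sketch of that first step, categorising AI systems in an internal inventory, might look as follows. The system names and category labels are hypothetical, chosen for illustration, and the categories are working tags rather than the AI Act's formal taxonomy.

```python
# Hypothetical first-step sketch: an internal inventory that tags each AI
# system with a working risk category so compliance effort can be triaged.
from dataclasses import dataclass

@dataclass
class AISystem:
    name: str
    purpose: str
    category: str  # e.g. "high-risk", "limited-risk", "minimal-risk"

# Illustrative entries; a real inventory would be built from an internal audit.
inventory = [
    AISystem("cv-screening", "recruitment shortlisting", "high-risk"),
    AISystem("support-chatbot", "customer FAQ responses", "limited-risk"),
    AISystem("spam-filter", "email triage", "minimal-risk"),
]

# Surface the systems needing conformity work first.
priority = [s.name for s in inventory if s.category == "high-risk"]
print(priority)  # ['cv-screening']
```

Even a simple register like this lets a firm direct conformity-assessment effort at the systems most likely to attract obligations under the final text.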
Once published, the AI Act would lay down the first landmark regime governing the AI space in a comprehensive and harmonised manner; thus, its breadth would affect the AI industry and could represent a blueprint for other jurisdictions to follow. Therefore, now is a good time to prepare for the main disruptive changes the AI Act is on the point of introducing.
Sean Musch Director [email protected]
Charles Kerrigan Partner [email protected]
Michael Borrelli Director [email protected] AI & Partners, London