We are currently seeing rapid development and deployment of artificial intelligence (AI)-based systems and solutions across sectors and society as a whole. At the time of writing, these deployed AI systems are about to become subject to a detailed and comprehensive regulatory regime of their own, as the EU rolls out its newly finalized AI Act.

While the AI Act will introduce new obligations on AI developers and deployers, it should not be forgotten that medical sector technology is already subject to its own rules. In particular, the EU Medical Devices Regulation (MDR) and the In Vitro Diagnostic Medical Devices Regulation (IVDR) were established to reform the European medical device (MD) regulatory framework and set high safety and performance standards for medical devices in the EU. As AI is increasingly being deployed in the medical sector, one may ask: how do the sector-agnostic AI Act and the sector-specific MD regulations interact? Otto Lindholm, Counsel at Dottir Attorneys Ltd, looks at the overlap between the two regimes and provides some key takeaways for navigating them.

The Information Commissioner's Office (ICO) published a series of chapters setting out its emerging views on the interpretation of the UK General Data Protection Regulation (GDPR) and Part 2 of the Data Protection Act 2018, in relation to questions around the use, risks, and responsible deployment of artificial intelligence (AI).

Part one of this Insight series focused on chapter one of the ICO's guidance, on the lawful basis for web scraping. In part two, James Castro-Edwards, from Arnold & Porter, looks at chapter two of the guidance, which discusses how the purpose limitation principle of the UK GDPR applies to different phases of the generative AI lifecycle.

In this Insight article, Daniela Schott and Kristin Bauer, from KINAST, elaborate on the Orientation Guide on artificial intelligence (AI) and data protection issued by the Committee of Independent German Federal and State Data Protection Supervisory Authorities, the German Data Protection Conference (DSK). The guide, published on May 6, 2024, outlines the data protection criteria necessary for the compliant use of AI applications and serves as a guideline for their selection, implementation, and use.

The Information Commissioner's Office (ICO), the UK data protection authority responsible for enforcing the UK General Data Protection Regulation (UK GDPR), announced earlier this year its series of consultations on how aspects of data protection law should apply to the development and use of generative artificial intelligence (AI) models. The term 'generative AI' refers to AI models that create new content, which includes text, audio, images, or videos. The ICO recognizes that responsible deployment of AI has the potential to make a positive contribution to society, and intends to address any risks so that organizations and the public may reap the benefits generative AI offers.

The ICO guidance responds to a number of requests for clarification made by innovators in the AI field, including the appropriate lawful basis for training generative AI models, how the purpose limitation principle plays out in the context of generative AI development and deployment, and the expectations around complying with the accuracy principle and data subjects' rights.

The ICO has published a series of chapters outlining its emerging views on the interpretation of the UK GDPR and Part 2 of the Data Protection Act 2018 in relation to these questions. The ICO is in the process of seeking the views of stakeholders with an interest in generative AI to help inform its positions. In part one of this Insight series, James Castro-Edwards, from Arnold & Porter, delves into chapter one of the ICO's guidance, focusing on legitimate interests as a lawful basis, the risks involved in web scraping, and measures that developers can take to mitigate such risks.

Generative artificial intelligence (AI) models, such as OpenAI's ChatGPT and DALL-E, Meta's Llama, and Google's Imagen (accessed via Gemini), are AI models capable of generating text, images, code, audio, video, and other content in response to inputs or prompts. These models require significant volumes of high-quality data to train on, enabling them to assimilate information and refine their output through an iterative process. Generative AI models do not 'memorize' or recount their training data per se, but instead learn to predict the appropriate output based on probabilities, having regard to patterns in the training data.
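
To make the prediction point concrete, the sketch below is a deliberately simplified toy, not any vendor's actual training code; the tiny corpus and the predict_next function are invented for illustration. It builds a bigram model that counts which word tends to follow another in a small text and then predicts the statistically most likely continuation, which is, at a very small scale, the same probabilistic pattern-matching idea described above.

    from collections import Counter, defaultdict

    # A toy training corpus, split into word-level tokens.
    corpus = "the cat sat on the mat . the cat slept on the sofa .".split()

    # Count how often each word follows each preceding word (bigram counts).
    following = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        following[prev][nxt] += 1

    def predict_next(word: str) -> str:
        """Return the most probable next word observed during 'training'."""
        counts = following[word]
        return counts.most_common(1)[0][0] if counts else "<unknown>"

    print(predict_next("the"))  # 'cat' - the most frequent continuation of 'the'
    print(predict_next("sat"))  # 'on'

Even in this toy form, the model stores counts of patterns rather than the documents themselves, mirroring the prediction-not-recollection behavior described above.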

According to OpenAI, ChatGPT was developed using 'three primary sources of information': publicly available information on the internet, information licensed from third parties, and information provided by users or human trainers. Meta's Llama 2 was similarly 'pretrained on publicly available online data sources' and trained on '2 trillion tokens.' Tokens are the units of data into which training data is split; each word, punctuation mark, or pixel, for example, would constitute a separate token. Both developers state that they either did not intentionally target sources with high volumes of personal data, or sought to remove such sources from their training data. Web scraping, the process of gathering or extracting data from websites through the use of an automated tool or bot, has legal implications for website operators, developers of AI models, their deployers, and data subjects where the data collected is publicly available data including personal data. Nicola Cain, of Handley Gill Limited, discusses these legal implications for all parties involved in web scraping data.
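
As an illustration of the mechanics at issue, the sketch below is a minimal, hypothetical example; the URL is a placeholder, and this is not the collection pipeline of any developer named above. It uses the widely available requests and BeautifulSoup libraries to fetch a public page with an automated client and extract its visible text, which on real pages may well include names, email addresses, and other personal data.

    import requests
    from bs4 import BeautifulSoup

    URL = "https://example.com/articles"  # hypothetical, publicly accessible page

    # Fetch the page as an automated client (a very simple 'bot').
    response = requests.get(URL, timeout=10)
    response.raise_for_status()

    # Parse the HTML and pull out the human-readable paragraph text.
    soup = BeautifulSoup(response.text, "html.parser")
    paragraphs = [p.get_text(strip=True) for p in soup.find_all("p")]

    # On real pages, the extracted text may incidentally include personal
    # data (names, contact details), which is what raises the legal
    # questions discussed in this article.
    for text in paragraphs[:5]:
        print(text)
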

The legal framework for direct marketing activities in the EU rests on two main legislative instruments, namely the General Data Protection Regulation (GDPR) and the Directive on Privacy and Electronic Communications (2002/58/EC) (as amended) (the ePrivacy Directive).

The GDPR is the general data protection framework applicable to companies and natural persons that are established in the EU or that direct their services towards EU citizens. This is an important consideration for direct marketing because it means that US companies directing services to EU customers and sending them marketing emails will also need to respect the GDPR rules. In terms of material scope, the GDPR applies only to the processing of personal data of natural persons who are identifiable (either directly or indirectly). This means that mailing lists consisting solely of generic professional email addresses are not subject to the strict requirements of EU data protection legislation.

The ePrivacy Directive is the data protection framework applicable in the electronic communications sector. It provides a set of specific rules on data protection in the area of electronic communications, such as the confidentiality of electronic communications, the treatment of traffic data (including data retention), and rules on spam and cookies.

A proposal for an ePrivacy Regulation was published on January 10, 2017, as the ePrivacy Directive is no longer optimally suited to the fast-changing nature of the electronic communications sector. However, discussions on the proposal have been stalled at the Council for almost six years, and it is uncertain whether it will be adopted in the foreseeable future. The ePrivacy Directive therefore remains the law of the land, complementing the GDPR. Jolien Clemens, Attorney-at-Law at Timelex, explores the ePrivacy Directive rules and the GDPR as the currently applicable legal frameworks in the context of direct marketing.

Six years after the go-live of the General Data Protection Regulation (GDPR), covered organizations have become well accustomed to Data Protection Impact Assessments (DPIAs). Seasoned privacy professionals have certainly taken part in many discussions about the difference between DPIAs and Privacy Impact Assessments (PIAs), and whether there is, or should be, any difference.

In a time when everyone is talking about artificial intelligence (AI) and the upcoming EU AI Act (the AI Act), organizations are turning to privacy experts to see whether this new legislative and regulatory focus will lead to a similar level of compliance work (and expense). In particular, they are wondering whether the AI Act's Conformity Assessments (CAs) and Fundamental Rights Impact Assessments (FRIAs) will find their way into every organization's compliance framework.

In this article, Maarten Stassen, of Crowell & Moring LLP, compares the GDPR's DPIAs with the AI Act's CAs and FRIAs, examining key practical considerations and their impact on organizations.

In today's digital age, businesses are constantly seeking innovative ways to connect with their customers and drive growth. One technology that has been making waves in the marketing industry is artificial intelligence (AI). According to the definition laid down in the latest available version of the EU AI Act, an 'AI system' is 'a machine-based system designed to operate with varying levels of autonomy, that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.' Gianluigi Marino and Andrea Cantore, from Osborne Clarke, define AI in marketing and discuss the associated risks and obligations.

Austria has now implemented the EU Whistleblowing Directive (2019/1937) in the Austrian Whistleblower Protection Act (HinweisgeberInnenschutzgesetz) (the Whistleblowing Act), which came into force in 2023 and adds to a number of sector-specific regulations already in place. Dietmar Huemer and Katharina Spreitzhofer, from Huemer | Legal Solutions, give insights into the application of the new Whistleblowing Act.

The artificial intelligence (AI) industry in the UK has experienced significant growth over recent years, putting the UK in a strong position in the global AI market. The UK Government has expressed a 'pro-innovation' stance on AI, in terms of public funding, technology policy, and its regulatory approach, and has favored an iterative, sectoral approach to the regulation of AI. The UK's pro-innovation aim is explicit in the Government's white paper on AI regulation (the AI White Paper) and its response to the AI White Paper consultation, published in February 2024 (the Response). Amy Smyth, Fiona Maclean, and Georgina Hoy, from Latham & Watkins LLP, discuss the key ideas of the AI White Paper and the Response and compare the UK's AI regulatory landscape to other approaches around the globe.

In pursuit of a longstanding governmental objective to converge with EU legislation, notably the General Data Protection Regulation (GDPR), substantial revisions have been made to the Personal Data Protection Law No. 6698 (the Law). Published in the Official Gazette in March 2024, these amendments represent a concerted effort to align the Law with the GDPR principles, particularly focusing on addressing specific contentious issues. Yücel Hamzaoğlu, Partner at Hamzaoğlu Hamzaoğlu Kınıkoğlu Attorney Partnership, takes a look at the amendments and their impact on the current provisions.

In this Insight article, Iain Borner, Chief Executive Officer at The Data Privacy Group, delves into the transformative impact of the EU Artificial Intelligence Act (AI Act), which establishes a regulatory framework aimed at fostering trustworthy artificial intelligence (AI) aligned with European values. With a focus on high-risk AI systems, the AI Act introduces mandatory compliance processes and provisions, setting a precedent for ethical innovation that prioritizes people's rights and safety.