1. Introduction
On 21 April 2021, the European Commission published its proposal for an Artificial Intelligence Regulation[1] (hereinafter “AI Regulation”) with the aim of creating a legal framework for the use of artificial intelligence (AI) that is so far unique worldwide.
As early as February 2020, the European Commission had set out in a White Paper on AI[2] what it considers to be the policy options for the controlled promotion of the use of AI. With the draft AI Regulation now presented, the Commission is responding, not least, to one of several recommendations of the European Parliament[3] to take legislative measures that exploit the potential of AI while respecting ethical principles.[4] The draft was preceded, among other things, by a broad consultation process with stakeholders and close coordination with a specially established expert group on AI[5].
Even though the AI Regulation is still in draft form, it seems advisable even now to address its not inconsiderable effects on future M&A transactions and on investments already made. This article first outlines the main provisions of the AI Regulation (section 2), then discusses the resulting effects on M&A transactions (section 3) and concludes with an outlook (section 4).
2. Essential provisions of the AI Regulation
The AI Regulation aims to create a single legal framework, in particular for the development, marketing and use of AI, in line with EU values. The intention is to implement proportionate regulation that balances the risks associated with the use of AI with the opportunities for innovation.
From a structural point of view, the AI Regulation follows a risk-based approach, distinguishing between AI systems with (i) unacceptable risk, (ii) high risk and (iii) low or minimal risk. The classification into one of these risk groups determines the extent to which prohibitions, requirements and obligations apply to the placing on the market, putting into service and use of AI systems. The highest density of regulation applies to high-risk AI systems.
AI systems with an unacceptable risk are those that infringe EU values, in particular fundamental rights. They are prohibited by the AI Regulation. These include practices that have a significant potential to manipulate persons through subliminal techniques beyond their consciousness, or to influence the behavior of certain vulnerable groups, such as children or persons with disabilities, in such a way that they harm themselves or others. In addition, social scoring by public authorities and the use of “real-time” remote biometric identification systems in publicly accessible spaces for law enforcement purposes are largely prohibited (an exception applies, for example, in the event of a threat of a terrorist attack).
In contrast, high-risk AI systems are characterized by the fact that they pose a high risk to the health and safety or the fundamental rights of natural persons (e.g. AI systems assisting judicial authorities in interpreting the law[6]). A non-exhaustive annex to the AI Regulation lists such high-risk AI systems. The classification is intended to depend not only on the functioning of the AI system, but also on its intended purpose. High-risk AI systems that meet certain requirements – in particular with regard to data governance, documentation, transparency and the provision of information to users as well as human oversight – are to remain permissible on the European market. In order to avoid duplication of testing efforts, for high-risk AI systems that are used as safety components of products covered by the so-called New Legislative Framework[7] (e.g. medical devices or toys), compliance with the legal requirements for the AI is to be verified only in the context of the conformity assessment procedure applicable to those products.
The requirements for high-risk AI systems include, for example, the establishment of a risk management system (Article 9 (1) of the AI Regulation) and ensuring automatic logging of events (Article 12 (1) of the AI Regulation). The AI Regulation generally does not prescribe how these requirements are to be implemented technically; it does, however, establish certain obligations for providers, users and other participants along the AI value chain. For example, providers of high-risk AI systems must also set up a detailed quality management system (Article 17 (1) of the AI Regulation).
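What such automatic logging might look like in practice is left open by the AI Regulation, which does not prescribe any particular technical implementation. Purely by way of illustration, the following minimal Python sketch shows an inference wrapper that records a timestamped event for each use of an AI system; all names used (the log file, the function log_inference, the system identifier) are hypothetical.

```python
import json
import logging
from datetime import datetime, timezone

# Illustrative only: the AI Regulation does not mandate a specific log format.
# Write each event as one JSON line to a dedicated log file.
logging.basicConfig(filename="ai_event_log.jsonl", level=logging.INFO,
                    format="%(message)s")

def log_inference(system_id: str, input_summary: str, output_summary: str) -> None:
    """Record one timestamped inference event as a JSON line."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,            # hypothetical identifier of the AI system
        "input_summary": input_summary,    # e.g. a hash or redacted description
        "output_summary": output_summary,  # the system's decision or score
    }
    logging.info(json.dumps(event))

# Example: log a single (fictitious) credit-scoring decision.
log_inference("credit-scoring-v1", "application #1234 (hashed)", "score=0.82")
```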
Providers of high-risk AI systems that are not related to products covered by other Union harmonization legislation should also be required to register their high-risk AI system in an EU database to be established and managed by the Commission (Article 51 of the AI Regulation).
AI systems with low or minimal risk will remain largely unregulated under the AI Regulation. An exception applies to systems that (i) interact with humans (e.g. chatbots), (ii) are used to recognize emotions or to assign persons to (social) categories on the basis of biometric data, or (iii) generate or manipulate content (“deepfakes”). More extensive transparency obligations apply in these cases.
Comparable to the General Data Protection Regulation, the AI Regulation leaves supervision and enforcement to the individual member states and obliges them to enact sanctioning provisions. The EU again relies on the system of graduated fines already familiar from the General Data Protection Regulation, which here can reach up to EUR 30,000,000 or 6% of the total worldwide annual turnover of the preceding financial year, whichever is higher.
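To give a sense of the magnitudes involved, the following short calculation illustrates this upper fine limit on the basis of the “whichever is higher” mechanism in Article 71 (3) of the draft; the turnover figures are invented for illustration only.

```python
def max_fine_eur(worldwide_annual_turnover_eur: float) -> float:
    """Upper fine limit under the draft AI Regulation for the most serious
    infringements: EUR 30 million or 6% of total worldwide annual turnover
    of the preceding financial year, whichever is higher."""
    return max(30_000_000.0, 0.06 * worldwide_annual_turnover_eur)

# Invented figures: a company with EUR 1 billion turnover faces a cap of
# EUR 60 million (6% exceeds EUR 30 million); for a company with
# EUR 100 million turnover, the EUR 30 million cap remains decisive.
print(max_fine_eur(1_000_000_000))  # 60000000.0
print(max_fine_eur(100_000_000))    # 30000000.0
```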
3. Impact on M&A transactions
3.1 The AI Regulation adds new items to the classic to-do lists in the context of M&A transactions and also forces investors (e.g. in the private equity sector) to take a close look at their portfolio of existing investments, especially when planning a sale/exit.
Before acquiring a company handling AI technologies, the acquirer should obtain as clear an overview as possible, at an early stage, of what role AI systems play at the target company, in particular whether it develops, markets and/or uses them or intends to do so. A similar approach should be taken when analyzing existing holdings (especially portfolio companies).
In the course of this analysis, the basic risk level of these AI systems within the meaning of the AI Regulation should first be determined. Potential acquirers should also check whether the relevant AI systems of the target company/investment correspond to those listed in the annex to the AI Regulation. In this context, it is important to determine the basic functionality and intended purpose of the existing AI systems, which admittedly also requires the relevant technical expertise (technical due diligence). The risk level determined is a decisive factor for which obligations apply to the target company and, in the future, indirectly to the acquirer.
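Purely as a hypothetical triage aid for due diligence – the AI Regulation prescribes no such tool – the three risk tiers and a preliminary mapping of a target’s AI inventory could be captured in a simple data structure along the following lines; all system descriptions are invented examples loosely based on the categories discussed above.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited practice (e.g. social scoring by public authorities)"
    HIGH = "high-risk system listed in the annex; full obligation regime applies"
    LOW_MINIMAL = "low/minimal risk; largely unregulated (transparency duties may apply)"

# Hypothetical inventory of a target company's AI systems, mapping each
# system's intended purpose to a preliminary risk tier for triage purposes.
ai_inventory = {
    "chatbot for customer support": RiskTier.LOW_MINIMAL,
    "AI assisting judicial authorities in interpreting the law": RiskTier.HIGH,
    "tool manipulating behavior through subliminal techniques": RiskTier.UNACCEPTABLE,
}

for system, tier in ai_inventory.items():
    print(f"{system}: {tier.value}")
```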
Once the AI in question has been assessed and the applicable obligation regime has been outlined in principle, the fundamental effects on the acquiring company or its group of companies should be examined: For example, has the acquiring company already taken certain measures itself in accordance with the AI Regulation, and can these also be used for the target company’s AI to be acquired? Does the acquirer already have its own risk management system in place which can be extended to the target company’s AI? Or will the acquisition expose the acquirer to obligations under the AI Regulation for the first time? In the latter case, an attempt should be made to estimate the financial and organizational effort required to fulfil them, which, however, is likely to present an acquirer with not inconsiderable difficulties at this point in time. If the AI system is only a secondary activity of the target company, it should also be considered whether a carve-out of the AI activity from the target company prior to the transaction appears possible and expedient.
3.2 Once the fundamental decision to acquire the target company has been made, the AI systems must be analyzed in more detail. Depending on the extent of the AI activity, it may well be appropriate to examine and evaluate “AI compliance” as an independent section of the due diligence.
When reviewing “AI compliance”, however, one will have to move beyond a purely legal review. It is true that the AI Regulation specifies the requirements to be met by high-risk AI – for example, the automatic recording of processes and events during the operation of the AI system must be ensured. How these requirements are met, however, is a technical question that is difficult to answer without appropriate expertise. Close coordination between legal due diligence and technical due diligence must therefore be ensured.
In terms of content, the target company should first be checked for any prohibited practices in the area of AI. If such practices exist, it must be determined whether the functioning of the AI used can be changed so that it falls into a lower risk group, which is likely to be primarily a technical question. Risks that have already materialized must also be identified and taken into account.
The greatest review effort is likely to arise in the area of high-risk AI systems, as the requirements of the AI Regulation are detailed and broad here. For existing AI systems, it should be examined whether all requirements of the AI Regulation are fulfilled or, if this is not the case, whether and with what effort they can be fulfilled in the future. The analysis should also address the question of whether a proper conformity assessment procedure has been carried out. Where applicable, evidence should also be requested as to how the target company, as provider of the AI system, has complied with its registration obligation under the AI Regulation. In the event of breaches of the AI Regulation, the risk of sanctions would also have to be assessed. This approach should likewise be considered for AI compliance in existing holdings; after all, each holding needs to be evaluated in light of the new requirements, in particular against the background of threatened sanctions and the relevance for the respective business model.
However, precisely because AI systems are also constantly being modified and developed further, the focus should also be on the future: If it is already possible to foresee in which areas existing AI is to be improved or that new AI is to be developed, it should at least be considered in advance what regulatory effects are or could be associated with this.
3.3 The findings obtained in the course of the due diligence must then be reflected accordingly in the purchase agreement, which applies to both the acquirer and the seller. If a certain AI system is to be carved out of the target company beforehand, consideration should be given, for example, to making the implementation of this carve-out a condition to completion.
Particular attention will have to be paid to the formulation of an AI guarantee, the degree of detail of which may vary depending on the complexity and scope of the existing AI. It is advisable to map the existing AI systems in an annex to the guarantee. For these AI systems, it should be guaranteed that they comply (in all material respects) with the requirements of the AI Regulation. In particular, the guarantee should also cover the proper registration of the AI systems as well as the assurance that the conformity assessment has been properly carried out. A purchaser should also obtain a guarantee that no prohibited AI is used. The seller, for its part, should identify any risks on the basis of its own analysis.
The legal issues associated with AI overlap with those of other areas of law. A clear demarcation should be ensured, in particular with regard to other guarantees concerning, for instance, intellectual property, regulatory issues or general compliance.
Should certain risks have become apparent as a result of the due diligence (such as ongoing proceedings for fines), a corresponding indemnification would have to be negotiated.
Depending on the individual case, it may also be advisable to obligate the seller not to modify any AI systems between signing and closing in a way that would change their risk classification.
4. Outlook and practical advice
The AI Regulation is currently going through the EU legislative process. At present, a general transition period of 24 months from its entry into force is envisaged. Sanctioning provisions would have to be implemented by the member states within 12 months.
Even if it is likely to take some time before the AI Regulation finally enters into force, experience with the General Data Protection Regulation teaches that companies in particular, but also investors, should engage with the AI Regulation as soon as possible in order to familiarize themselves with the requirements and develop (technical) implementation methods at an early stage. Finally, it seems advisable – especially for private equity investors – to analyze current investments (the portfolio) in light of the AI Regulation in order to keep a future exit open and to secure the return on investment (ROI). Transaction advisors to companies with an affinity for AI will have no choice but to develop at least a basic technical understanding of how the AI used works.
[1] Proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (artificial intelligence act) and amending certain Union legislative acts (COM/2021/206 final).
[2] White Paper on Artificial Intelligence – A European approach to excellence and trust, COM (2020) 65 final.
[3] E.g. European Parliament Resolution of 20 October 2020 with recommendations to the Commission on a civil liability regime for artificial intelligence, 2020/2014 (INL).
[4] European Parliament Resolution of 20 October 2020 with recommendations to the Commission on a framework of ethical aspects of artificial intelligence, robotics and related technologies, 2020/2012 (INL).
[5] See https://digital-strategy.ec.europa.eu/en/policies/expert-group-ai.
[6] AI Regulation Annex III, no. 8 a).
[7] Regulation (EC) No 765/2008 of the European Parliament and of the Council of 9 July 2008 setting out the requirements for accreditation and market surveillance relating to the marketing of products and repealing Regulation (EEC) No 339/93 (OJ L 218, 13.8.2008, p. 30); Decision No 768/2008/EC of the European Parliament and of the Council of 9 July 2008 on a common framework for the marketing of products and repealing Council Decision 93/465/EEC (OJ L 218, 13.8.2008, p. 82); and Regulation (EU) 2019/1020 of the European Parliament and of the Council of 20 June 2019 on market surveillance and compliance of products, and amending Directive 2004/42/EC and Regulations (EC) No 765/2008 and (EU) No 305/2011 (OJ L 169, 25.6.2019, p. 1).