on safety and liability of the EU Commission to the European Parliament, the Council and the European Economic and Social Committee of 19.2.2020

This report was published by the EU Commission on 19.2.2020 together with the White Paper on Artificial Intelligence – A European approach to excellence and trust. It analyses the relevant current legal framework in the EU and examines where uncertainties arise in its application due to the specific risks posed by AI systems and other digital technologies. The report concludes that current product safety legislation already supports an extended concept of protection against all types of risks posed by a product depending on its use. However, in order to provide greater legal certainty, provisions could be added that explicitly address the new risks associated with the new digital technologies. In summary, the report can be read as an outlook on the legal regulation to be expected at EU level over the next few years in the field of AI systems, in particular with regard to the associated safety and liability issues. The report distinguishes between two main areas of regulation: product safety regulations and questions regarding the existing liability frameworks for digital technologies.

  1. Product safety regulations
    While the report concludes that current product safety legislation already supports an expanded concept of protection against all types of risks posed by a product depending on its use, it is not clear how this is to be achieved. However, to create greater legal certainty, provisions could be included which explicitly address new risks related to the new digital technologies.
    1. The autonomous behaviour of certain AI systems during their life cycle can lead to significant safety-related changes to products, which may require a new risk assessment. In addition, it may be necessary to provide for human oversight from the design phase onwards and throughout the life cycle of AI products and systems as a protective measure.
    2. Explicit obligations for manufacturers could also be considered, where appropriate, in relation to risks to the mental health of users (for example, when working with humanoid robots).
    3. EU-wide product safety legislation could include both specific requirements to address the safety risks posed by incorrect data at the design stage and mechanisms to ensure that the quality of data is maintained throughout the use of AI products and systems.
    4. The opacity of algorithm-based systems – arising, for example, from the ability of some AI products to learn and improve their performance autonomously – could be addressed by setting transparency requirements.
    5. In the case of stand-alone software that is marketed as such or downloaded into a product after that product has been placed on the market, existing requirements may need to be adapted and clarified if the software has safety implications.
    6. Given the increasing complexity of supply chains in new technologies, provisions making cooperation between economic operators in the supply chain and users mandatory could also contribute to legal certainty.

  2. Liability regulations
    The characteristics of new digital technologies such as AI may challenge certain aspects of existing liability frameworks and reduce their effectiveness. Some of these characteristics may make it difficult to trace the damage back to human conduct, which most national rules require in order to establish a fault-based claim. This could significantly increase costs for injured parties and make it more difficult to assert or prove liability claims against actors other than producers.
    1. Persons who have suffered damage as a result of the use of AI systems should enjoy the same level of protection as persons who have been harmed by other technologies. At the same time, there must be enough room for the further development of technological innovation.
    2. All options envisaged to achieve this objective – including a possible amendment of the Product Liability Directive and a possible further targeted harmonization of national liability laws – should be carefully considered. For example, the Commission invites comments on whether and to what extent it might be necessary to mitigate the consequences of complexity by adapting the burden-of-proof rules laid down in national liability law for damage caused by the operation of AI applications.
    3. In the light of the above comments on the liability framework, the Commission concludes that, in addition to the possible adaptation of this existing legislation, new legislation specifically targeted at AI may be necessary to adapt the EU legal framework to current and expected technological and commercial developments.

    The White Paper identifies the following areas as possible additional regulatory points:
    • A clear legal definition of AI
      A risk-based approach should be taken here, i.e. a distinction should be made between AI applications with high and with low risk. Regulatory efforts should be concentrated on high-risk applications, so as not to impose disproportionately high costs on SMEs. The first criterion for the risk class is whether the AI application is used in a sector where, due to the nature of the typical activities, significant risks are to be expected. The second criterion is whether the AI application is used in that sector in such a manner that significant risks are likely to arise.
    • Key features
      The requirements for high-risk AI applications can relate to the following key features: training data; data and record-keeping; information to be provided; robustness and accuracy; human oversight; and specific requirements for certain AI applications, for example remote biometric identification applications.
    • Addressees
      Many actors are involved in the life cycle of an AI system. These include the developer, the operator, and possibly other actors such as the manufacturer, dealer, importer, service provider, and professional or private user. The Commission believes that in a future legal framework, the individual obligations should be addressed to the actor(s) best placed to manage the potential risks. For example, AI developers may be best placed to manage the risks arising from the development phase, while their ability to control risks during the use phase may be more limited. The Commission considers it essential that the requirements apply to all relevant economic operators offering AI-based products or services in the EU, whether or not they are established in the EU.
    • Compliance and enforcement
      Given the high risk that certain AI applications represent overall, the Commission considers at this stage that an objective ex-ante conformity assessment would be necessary to verify and ensure that certain of the above-mentioned mandatory requirements for high risk applications are met. An ex-ante conformity assessment could include procedures for testing, inspection, or certification. This could include a review of the algorithms and data sets used in the development phase.

      a) Governance
      A European governance structure for AI, in the form of a framework for cooperation between the competent national authorities, is necessary to avoid fragmentation of responsibilities, to strengthen the capacities in the Member States and to ensure that Europe gradually equips itself with the capacities needed for the testing and certification of AI-based products and services.
  3. Conclusion
    Even though the considerations set out by the EU Commission in the White Paper and in the report on the impact of artificial intelligence on adapting the existing, nationally divergent legal regulations are still at a very unspecific stage and still in the middle of the political discussion, the following can be stated:
    1. It can be assumed with some certainty that adapted or supplementary legal regulation at EU level on questions of product safety (i.e. market access requirements) as well as on the reorganization of liability issues in connection with AI systems will come in the course of the next few years.
    2. AI vendors in particular should be prepared for the fact that their algorithms will have to be transparent, verifiable and, ultimately, meet certain certification requirements. In addition, extended liability and thus responsibility of the AI provider, going beyond the known extent of product liability, for example with regard to responsibility for supply chains and complex products, is certainly to be expected. As a result, this will be associated with changed, more transparent development processes and extended responsibility, and thus with considerably higher costs for the corresponding insurance cover.

On December 9, 2020, the EU Commission intends to announce a series of new planned competition and antitrust regulations to improve the control of technology groups, particularly the major Internet platforms.

In a report published on November 19, 2020, the EU Court of Auditors also urges improvement of the corresponding EU regulations. In particular, the report criticizes that, although two antitrust proceedings against Internet platforms are already pending under current law, enforcement leaves much to be desired.

Although the EU Commission has opened antitrust proceedings against Google, these have now been pending before the European Court of Justice for more than three years without a decision.

The contents of the planned new EU regulations – the so-called Digital Services Act – were already leaked a few weeks ago, so I would like to give you a short list of the intended regulations below:

1. Exclusive use of data

Under the EU Digital Services Act, large online platforms could be banned from using collected user data if that data is not also made available to smaller platforms. The activities of so-called “gatekeeper” platforms such as Google, Amazon and Facebook are addressed in particular. These large corporations have a disproportionately high degree of economic power and control over the online world and can therefore help decide “at the gate” who may enter the market.

According to the new regulation, gatekeepers are only allowed to use data

  • which is produced on the platform itself
  • or which is generated and collected on other services of the provider

for their own commercial purposes if it is also made available to other commercial users.

2. Rankings

Furthermore, online search engines are to be prohibited from displaying their own services preferentially and in a prominent position. This represents a considerable tightening of the previous EU regulation from July 2019, under which search engines were only obliged to make it clear and transparent if they give preference to their own products and services.

3. Freedom of choice and pre-installation

Equally, e-commerce giants will be prohibited in the future from restricting the ability of business users to offer the same goods and services to consumers under different conditions through other online intermediary services. It will also be prohibited for large companies to pre-install only their own apps on hardware systems. It must also be possible for consumers to uninstall applications that have already been pre-installed by the manufacturer.

4. Introduction of a so-called “grey list”

Furthermore, the EU Commission intends to introduce a so-called “grey list” of activities that it considers unfair and which may therefore require increased supervision by a competent authority in the future. Under this list, the platform giants would not be allowed to prevent third parties from accessing essential information about customers and would be required not to collect personal data beyond what is necessary to provide their services.

It ultimately remains to be seen in what concrete form the EU Commission will present these regulations in December 2020. However, it can certainly be expected that intensive lobbying by the major Internet groups, as has happened several times in the past, will lead to some changes and public discussion before their final adoption. So, as is so often the case in life, it remains exciting.