Report on the impact of artificial intelligence on safety and liability, of the EU Commission to the European Parliament, the Council and the European Economic and Social Committee of 19.2.2020
This report was published by the EU Commission on 19.2.2020 together with the White Paper on Artificial Intelligence – A European approach to excellence and trust. The report analyses the relevant current legal framework in the EU and examines where uncertainties arise in applying this framework to the specific risks posed by AI systems and other digital technologies. In summary, the report can be read as an outlook on the legal regulation to be expected at EU level over the next few years in the field of AI systems, in particular with regard to the associated safety and liability issues. It distinguishes between two main areas of regulation: product safety regulations and questions regarding the existing liability frameworks for digital technologies.
Product safety regulations
While the report concludes that current product safety legislation already supports an expanded concept of protection against all types of risks posed by a product depending on its use, it leaves open how this is to be achieved. To create greater legal certainty, however, provisions could be included which explicitly address new risks related to the new digital technologies:
- The autonomous behavior of certain AI systems during their life cycle can lead to significant safety-relevant changes to products, which may require a new risk assessment. In addition, it may be necessary to provide for human oversight as a protective measure from the design phase onwards and throughout the life cycle of AI products and systems.
- Explicit obligations for manufacturers could also be considered, where appropriate, in relation to mental safety risks to users (for example, when working with humanoid robots).
- EU-wide product safety legislation could include both specific requirements to address the safety risks posed by incorrect data at the design stage and mechanisms to ensure that the quality of data is maintained throughout the use of AI products and systems.
- The issue of opacity of algorithm-based systems – which can arise from the capacity of some AI products for self-directed learning and self-directed performance improvement – could be addressed by setting transparency requirements.
- In the case of stand-alone software that is placed on the market as such or downloaded into a product after that product has been placed on the market, existing requirements may need to be adapted and clarified where the software has safety implications.
- Given the increasing complexity of supply chains in new technologies, provisions making cooperation between economic operators in the supply chain and users mandatory could also contribute to legal certainty.
Liability regulations
The characteristics of new digital technologies such as AI may challenge certain aspects of existing liability frameworks and reduce their effectiveness. Some of these characteristics may make it difficult to trace damage back to a human fault, as would be required under most national rules for fault-based claims. This could significantly increase costs for injured parties and make it difficult to pursue or prove liability claims against actors other than producers.
- Persons who have suffered damage caused by the use of AI systems must enjoy the same level of protection as persons who have been harmed by other technologies. At the same time, enough room must be left for the further development of technological innovation.
- All options envisaged to achieve this objective – including a possible amendment of the Product Liability Directive and a possible further targeted harmonization of national liability laws – should be carefully considered. For example, the Commission invites comments on whether and to what extent it might be necessary to mitigate the consequences of complexity by adapting the rules on the burden of proof required under national liability rules for damage caused by the operation of AI applications.
- In light of the above considerations on the liability framework, the Commission concludes that, in addition to possible adaptations of existing legislation, new legislation specifically targeted at AI may be necessary to adapt the EU legal framework to current and expected technological and commercial developments.
The White Paper identifies the following areas as possible additional regulatory points:
A clear legal definition of AI
A risk-based approach should be taken here, distinguishing between AI applications with high and with low risk. Regulatory efforts should be concentrated on high-risk applications, so as not to impose disproportionately high costs on SMEs. An AI application should be classified as high-risk only if two cumulative criteria are met: first, the application is employed in a sector where, due to the nature of the typical activities, significant risks are to be expected; and second, the application is used in that sector in such a manner that significant risks are likely to arise. The sketch below illustrates the cumulative nature of this test.
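As a purely illustrative aid, and not part of the Commission's proposal, the cumulative test can be expressed as a minimal sketch in Python. The sector set picks up examples the White Paper itself mentions (healthcare, transport, energy, parts of the public sector); the function and variable names are our own assumptions:

```python
# Minimal, purely illustrative sketch of the White Paper's cumulative
# two-criteria test for "high-risk" AI applications. The sector set and
# all names here are assumptions for illustration, not the Commission's
# actual catalogue.

# Example sectors the White Paper mentions as ones where significant
# risks can typically be expected.
HIGH_RISK_SECTORS = {"healthcare", "transport", "energy", "public sector"}

def is_high_risk(sector: str, risky_manner_of_use: bool) -> bool:
    """An AI application is high-risk only if BOTH criteria are met:
    (1) it is employed in a sector with typically significant risks, and
    (2) it is used there in a manner likely to create significant risks.
    """
    return sector in HIGH_RISK_SECTORS and risky_manner_of_use

# An appointment-scheduling assistant in a hospital operates in a
# high-risk sector, but its manner of use creates no significant risk:
print(is_high_risk("healthcare", False))  # False
print(is_high_risk("transport", True))    # True
```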
Key features
The requirements for high-risk AI applications can relate to the following key features:
- training data;
- data and record keeping;
- information to be provided;
- robustness and accuracy;
- human oversight;
- specific requirements for certain AI applications, for example remote biometric identification applications.
Addressees
Many actors are involved in the life cycle of an AI system, including the developer, the operator and possibly other actors such as the manufacturer, dealer, importer, service provider, and professional or private user. The Commission believes that in a future legal framework each obligation should be addressed to the actor(s) best placed to manage the potential risks. For example, AI developers may be best placed to manage risks arising in the development phase, while their ability to control risks in the use phase may be more limited. The Commission considers it essential that the requirements apply to all relevant economic operators offering AI-based products or services in the EU, whether or not they are established in the EU.
Compliance and enforcement
Given the high risk that certain AI applications represent, the Commission considers at this stage that an objective ex-ante conformity assessment would be necessary to verify and ensure that certain of the above-mentioned mandatory requirements for high-risk applications are met. Such an ex-ante conformity assessment could include procedures for testing, inspection or certification, and could extend to a review of the algorithms and data sets used in the development phase.
Governance
A European governance structure for AI, in the form of a framework for cooperation between the competent national authorities, is necessary to avoid a fragmentation of responsibilities, to strengthen capacities in the Member States and to ensure that Europe gradually equips itself with the capacities needed for the testing and certification of AI-based products and services.
Conclusion
Even though the considerations set out by the EU Commission in the White Paper and in the report on the impact of artificial intelligence, concerning the adaptation of the existing, nationally divergent regulations on artificial intelligence, are still at a very unspecific stage and still in the middle of the political discussion, the following can be stated:
- It can be assumed with some certainty that adapted or supplementary legal rules will be adopted at EU level in the course of the next few years, addressing both product safety (i.e. market access requirements) and the reorganization of liability issues in connection with AI systems.
- AI vendors in particular should be prepared for the fact that their algorithms will have to be transparent and verifiable and, ultimately, meet certain certification requirements. In addition, a liability, and thus responsibility, of the AI provider that goes beyond the known extent of product liability, for example with regard to responsibility for supply chains and complex products, is certainly to be expected. As a result, this will be associated not only with changed, more transparent development processes and extended responsibility, but also with considerably higher costs for the corresponding insurance cover.