EU proposal for a Regulation of artificial intelligence
A proposal for a Regulation of the European Parliament and of the Council laying down harmonised rules on artificial intelligence (Artificial Intelligence Act) was published by the European Commission on 21 April 2021.
The draft Regulation introduces provisions that will regulate the development, marketing and placing on the European Union market of AI systems, including AI solutions used in products.
The draft Regulation defines an ‘artificial intelligence system’ (AI system) as software that is developed with one or more of the techniques and approaches listed in Annex I (e.g. machine learning approaches; logic- and knowledge-based approaches; statistical approaches, Bayesian estimation, search and optimisation methods) and that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments it interacts with.
The rules introduced by the draft Regulation are said to be limited to the minimum requirements necessary to protect the safety and fundamental rights of persons in view of the risks and challenges posed by AI systems, without unduly constraining or hindering technological development or otherwise disproportionately increasing the cost of placing AI solutions on the market. These rules differ according to the degree of risk AI systems are considered to present.
The draft Regulation classifies certain AI applications as high-risk. These include, for example:
- Critical infrastructures (e.g. transport)
- Remote biometric identification
- Migration, asylum and border control management
- Administration of justice and democratic processes
- Safety components of regulated products subject to third-party conformity assessment under EU sectoral product safety legislation (e.g. AI applications in robot-assisted surgery)
Requirements for such high-risk systems include, among others:
- Adequate risk assessment and mitigation systems
- High quality of the datasets feeding the system to minimise risks and discriminatory outcomes
- Logging of activity to ensure traceability of results
- Detailed documentation providing all necessary information on the system and its purpose, enabling authorities to assess its compliance
- Clear and adequate information to the user
- Appropriate human oversight measures to minimise risk
- High level of robustness, security and accuracy.
Public and private providers of such high-risk AI systems will have to follow conformity assessment procedures to demonstrate compliance with the abovementioned requirements before those systems can be placed on the Union market or used in the Union.
To facilitate this, the draft Regulation includes provisions on the future drawing up of harmonised technical standards to be adopted by the European standardisation organisations (CEN/CENELEC and ETSI) on the basis of a mandate from the European Commission.
As regards high-risk AI systems which are safety components of products, the draft Regulation will be integrated into the existing sectoral safety legislation to ensure consistency, avoid duplication and minimise additional burdens. In particular, for high-risk AI systems related to products covered by the New Legislative Framework (NLF) legislation (e.g. machinery, medical devices, toys), compliance with the requirements for AI systems set out in the proposal will be checked as part of the existing conformity assessment procedures under the relevant NLF legislation. Manufacturers of products covered by the New Legislative Framework will, among other things, have to include in their technical documentation extensive information about the AI solutions they use.
As regards high-risk AI systems related to products covered by relevant Old Approach legislation (e.g. aviation, cars), the proposal would not apply directly.
If you would like to find out more about the draft Artificial Intelligence Act, please do not hesitate to contact our EFS Product Compliance Team.