REGULATORY SCRUTINY BOARD OPINION Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on Artificial Intelligence (Artificial Intelligence Act) and amending certain Union Legislative Acts
Belongs to cases:
- Main case: Proposal for a Regulation of the European Parliament and of the Council laying down harmonized rules on artificial intelligence (Artificial Intelligence Act) and amending certain Union legislative acts {SEC(2021) 167 final} - {SWD(2021) 84-85 final}
Actors:
1_EN_avis_impact_assessment_part1_v2.pdf
https://www.ft.dk/samling/20211/kommissionsforslag/kom(2021)0206/forslag/1773317/2379083.pdf
EUROPEAN COMMISSION
SEC(2021) 167
22.03.2021
REGULATORY SCRUTINY BOARD OPINION
Proposal for a Regulation of the European Parliament and of the Council
laying down harmonized rules on Artificial Intelligence (Artificial
Intelligence Act) and amending certain Union Legislative Acts
{COM(2021) 206}
{SWD(2021) 84}
{SWD(2021) 85}
The European Affairs Committee (Europaudvalget) 2021
COM (2021) 0206 – SEC document
Public
________________________________
This opinion concerns a draft impact assessment which may differ from the final version.
Commission européenne, B-1049 Bruxelles - Belgium. Office: BERL 08/010. E-mail: regulatory-scrutiny-board@ec.europa.eu
EUROPEAN COMMISSION
Regulatory Scrutiny Board
Brussels,
RSB Opinion
Title: Impact assessment / Proposal for a Regulation laying down requirements for
artificial intelligence
Overall 2nd opinion: POSITIVE
(A) Policy context
Artificial intelligence (AI) can contribute to making the EU ready for the digital age. The EU
approach to AI aims to promote innovation capacity in AI, while supporting the
development and uptake of ethical and trustworthy AI across the economy. The strategy
proposed in the White Paper on AI aims to build ecosystems of excellence and trust for AI.
The ecosystem of excellence consists of measures to support research, foster collaboration
between Member States and increase investment in AI development and deployment. The
ecosystem of trust foresees robust safety requirements for AI-based products and services
that respect fundamental EU values and rights. It would give citizens the confidence to
embrace AI-based solutions, while encouraging businesses to develop them.
(B) Summary of findings
The Board notes the substantial changes and significant improvements made to the
report and the clarifications provided on key issues, such as the interaction with other
initiatives and the content of options.
The Board gives a positive opinion. The Board also considers that the report should
further improve with respect to the following aspect:
(1) The report does not clearly justify the presented cost levels and does not present
their sources. The remaining uncertainty about the costs of the initiative makes it
difficult to judge to what extent the (fixed) costs could create prohibitive barriers
for SMEs or new market entrants.
(C) What to improve
(1) The report should explain the methodology and sources for its cost calculations in the
relevant annex. It should include a detailed discussion of where and why the presented
costs deviate from the supporting study. The report should better discuss the combined
effect of the foreseen support measures for SMEs (lower fees for conformity assessments,
advice, priority access to regulatory sandboxes) and the (fixed) costs, including for new
market entrants.
The Board notes the estimated costs and benefits of the preferred option(s) in this
initiative, as summarised in the attached quantification tables.
(D) Conclusion
The DG may proceed with the initiative.
The DG must take these recommendations into account before launching the
interservice consultation.
If there are any changes in the choice or design of the preferred option in the final
version of the report, the DG may need to further adjust the attached quantification
tables to reflect this.
Full title: Proposal for a Regulation of the European Parliament and of the Council laying down requirements for artificial intelligence
Reference number: PLAN/2020/7453
Submitted to RSB on 23 February 2021
Date of RSB meeting: Written procedure
ANNEX: Quantification tables extracted from the draft impact assessment report
The following tables contain information on the costs and benefits of the initiative on
which the Board has given its opinion, as presented above.
If the draft report has been revised in line with the Board’s recommendations, the content
of these tables may differ from that in the final version of the impact assessment
report, as published by the Commission.
Overview of Benefits (total for all provisions) – Preferred Option
DESCRIPTION | AMOUNT | COMMENTS
Direct benefits
Fewer risks to safety and fundamental rights | Not quantifiable | Citizens
Higher trust and legal certainty in AI | Not directly quantifiable | Businesses
Indirect benefits
Higher uptake | Not directly quantifiable | Businesses
More beneficial applications | Not quantifiable | Citizens

Not quantifiable: impossible to calculate (e.g. the economic value of avoiding fundamental rights infringements).
Not directly quantifiable: could in theory be calculated if many more data were available (or by making a large number of assumptions).
Overview of costs – Preferred option

The draft report quantifies costs only for businesses and administrations; no costs are entered for citizens/consumers, and no indirect costs are quantified.

- Comply with substantial requirements – direct costs (Businesses): € 6 000 – 7 000 per application (one-off); € 5 000 – 8 000 per application (recurrent)
- Verify compliance – direct costs (Businesses): € 3 000 – 7 500 per application; audit of the quality management system (QMS): € 1 000 – 2 000 per day, depending on complexity; renewal of the audit: € 300 per hour, depending on complexity
- Establish competent authorities – direct costs (Administrations): 1 – 25 full-time equivalents (FTE) per Member State; 5 FTE at EU level
EUROPEAN COMMISSION
Regulatory Scrutiny Board
Brussels,
RSB Opinion
Title: Impact assessment / Proposal for a Regulation laying down requirements for
Artificial Intelligence
Overall opinion: NEGATIVE
(A) Policy context
Artificial intelligence (AI) plays a key role in the agenda of making the EU ready for the
digital age. The European approach to AI aims to promote Europe’s innovation capacity in
AI, while supporting the development and uptake of ethical and trustworthy AI across the
economy. The EU strategy proposed in the White Paper on AI aims to build ecosystems of
excellence and trust for AI. The ecosystem of excellence consists of measures to support
research, foster collaboration between Member States, and increase investment in AI
development and deployment. The ecosystem of trust foresees robust safety requirements
for AI-based products and services that respect fundamental EU values and rights. It would
give citizens the confidence to embrace AI-based solutions, while encouraging businesses
to develop them.
(B) Summary of findings
The Board notes the useful additional information provided in advance of the
meeting and the commitments to make changes to the report.
However, the Board gives a negative opinion, because the report contains the
following significant shortcomings:
(1) The report is not sufficiently clear on how this initiative will interact with other
AI initiatives, in particular with the liability initiative.
(2) The report does not discuss the precise content of the options. The options are not
sufficiently linked to the identified problems. The report does not present a
complete set of options and does not explain why it discards some.
(3) The report does not show clearly how big the relative costs are for those AI
categories that will be regulated by this initiative. Even with the foreseen
mitigating measures, it is not sufficiently clear if these (fixed) costs could create
prohibitive barriers for SMEs to be active in this market.
(C) What to improve
(1) The content of the report needs to be completed and reworked. The narrative should be
improved and streamlined, by focusing on the most relevant key information and analysis.
(2) The report should clearly explain the interaction between this horizontal regulatory
initiative, the liability initiative and the revision of sectoral legislation. It should present
which part of the problems will be addressed by other initiatives, and why. In particular, it
should clarify and justify the policy choices on the relative roles of the regulatory and
liability initiatives.
(3) In the presentation of the options, the report focuses mainly on the legal form, but it
does not sufficiently elaborate on the content. The report should present a more complete
set of options, including options that were considered but discarded. Regarding the
preferred option, the report should give a firm justification of the basis on which it selects the four
prohibited practices. There should be a clear substantiation of the definition and list of
high-risk systems. The same applies to the list of obligations. The report should
indicate how high risks can be reliably identified, given the problem drivers of complexity,
continuous adaptation and unpredictability. It should consider possible alternative options
for the prohibited practices, high-risk systems, and obligations. These are choices that
policy makers need to be informed about as a basis for their decisions.
(4) The report should be clearer on the scale of the (fixed) costs for regulated applications.
It should better analyse the effects of high costs on market development and composition.
The report should expand on the costs for public authorities tasked with establishing evolving
lists of risk-rated AI products. It should explain how a changing list of high-risk products is
compatible with the objective of legal certainty. The analysis should consider whether the
level of costs affects the optimal balance with the liability framework. It should reflect on
whether costs could be prohibitive for SMEs to enter certain markets. Regarding
competitiveness, the report should assess the risk that certain high-risk AI applications will
be developed outside of Europe. The report should take into account experiences and
lessons learnt from third countries (US, China, South Korea), for instance with regard to
legal certainty, trust, higher uptake, data availability and liability aspects.
(5) The report should explain the concept of reliable testing of innovative solutions and
outline the limits of experimenting in the case of AI. It should clarify how regulatory
sandboxes can alleviate burden on SMEs, given the autonomous dynamics of AI.
(6) The report should better use the results of the stakeholder consultation. It should better
reflect the views of different stakeholder groups, including SMEs and relevant minority
views, and discuss them in a more balanced way throughout the report.
(7) The report should make clear what success would look like. The report should
elaborate on monitoring arrangements and specify indicators for monitoring and evaluation.
Some more technical comments have been sent directly to the author DG.
(D) Conclusion
The DG must revise the report in accordance with the Board’s findings and resubmit
it for a final RSB opinion.
Full title: Proposal for a Regulation of the European Parliament and of the Council laying down requirements for Artificial Intelligence
Reference number: PLAN/2020/7453
Submitted to RSB on 18 November 2020
Date of RSB meeting: 16 December 2020
Electronically signed on 22/03/2021 14:31 (UTC+01) in accordance with article 11 of Commission Decision C(2020) 4482