Ahead of the upcoming negotiations on the EU AI Act, the German AI Association has published five proposals to improve the regulation.
The AI Association calls for the EU AI Act to keep the regulatory burden and compliance costs for the European AI ecosystem in check in order to avoid competitive disadvantages and a loss of innovation.
The Association proposes five key areas that should be considered in the upcoming trilogue negotiations:
- Foundation models should be subject to transparency and data governance requirements that are proportionate to the risk level of the specific use case.
- The high-risk classification in Annex III should be narrowed further to critical areas, should take the respective provider's or deployer's size and resources more appropriately into account, and should only include use cases not already covered by existing regulatory frameworks.
- The definition of AI in the AI Act should be narrowed to ensure that the AI Act focuses strictly on AI systems and not on any advanced software.
- The EU should facilitate the timely development of harmonized standards in line with the rapid technological evolution of AI. Industry experts should be closely involved in the standardization process to provide more clarity and certainty for stakeholders.
- In addition to regulatory sandboxes, the AI Act should include and be accompanied by other provisions with greater potential to stimulate and support private sector initiatives, particularly European AI start-ups, and SMEs.
Trilogue negotiations on EU AI legislation to start at the end of July
The European Parliament voted on its position on the EU AI Act in plenary on 14 June and adopted it by a large majority (499 votes in favor, 28 against and 93 abstentions). The adopted text represents Parliament’s negotiating position for the inter-institutional negotiations (trilogue) with the Council of the European Union and the European Commission. The first trilogue took place on 14 June, during which the EU institutions presented their positions.
The first operational trilogue meeting is scheduled for 23 July 2023, with the EU aiming to conclude negotiations by the end of 2023. Until then, stakeholders can provide feedback on the Parliament's position – the paper from the German AI Association is one such contribution.
The very definition of “artificial intelligence” shows how complicated this process is:
According to the current position of the Parliament, an “artificial intelligence system (AI system) means a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments.”
This is in line with the OECD definition and differs from the Commission's proposed definition, which limits AI systems to software acting on human-defined goals. By explicitly mentioning "virtual environments", the definition would also cover a possible metaverse.
What is an AI system?
The AI Association criticizes that none of the definitions used focuses on the essential characteristics of an AI system, such as learning, modeling, or reasoning. Instead, the definition of AI systems is so broad that the EU AI Act would become a regulation of advanced software in general rather than an AI regulation.
The AI Association, therefore, proposes that AI systems be defined as follows:
An ‘artificial intelligence system’ is a system that uses an algorithmic model, developed by a training process using external data from data sources or the environment, to analyze data and provide results that can support decision-making.
Other points of contention are the definitions of “significant risk” and “foundation model”. Details can be found in the German AI Association’s position paper.