This project has received funding from the European Union’s Horizon 2020 research and innovation programme under grant agreement No 964220. This website reflects only the author’s view and the Commission is not responsible for any use that may be made of the information it contains.

European Parliament approved the Artificial Intelligence Act

Last week, on Wednesday 13th March, the European Parliament ushered in a new era of artificial intelligence (AI) regulation by approving the Artificial Intelligence Act. This pivotal legislation is designed to safeguard fundamental rights, bolster democracy, uphold the rule of law, and champion environmental sustainability in the face of high-risk AI technologies. At the same time, the Act aims to foster innovation and solidify Europe’s position as a global leader in the AI landscape.

The Act establishes a comprehensive framework that delineates obligations for AI systems based on their potential risks and impact levels. The legislation introduces:

  • Safeguards on general-purpose artificial intelligence
  • Limits on the use of biometric identification systems by law enforcement
  • Bans on social scoring and AI used to manipulate or exploit user vulnerabilities
  • Right of consumers to launch complaints and receive meaningful explanations

Below is a comment from Rossella di Bidino, a Health Technology Assessment expert from the HTA of AI Unit of the Advanced Graduate School of Health Economics and Management (ALTEMS).

Dr. Rossella di Bidino

The approval of the EU AI Act is a great step, an even bigger one when we consider that it happened at almost the same time as the agreement on the European Health Data Space.

The Act addresses high-risk AI systems and aims to manage crucial points such as risk assessment, development and evaluation, as well as regulatory requirements. The potentially crucial role of regulatory sandboxes is highlighted many times in the Act. The EU is aware of the need not only to foster AI innovation but also to accelerate access to markets, and it supports sandboxes as a way of facilitating regulatory learning.

Excluded from the remit of the AI Act is “research, testing or development activity regarding AI systems or models before their being placed on the market or put into service”.

The regulation is still subject to a final check and is expected to enter into force twenty days after its publication in the Official Journal, and to be fully applicable 24 months after its entry into force.

The exceptions include:

  • bans on prohibited practices, which will apply six months after the entry into force date;
  • codes of practice (nine months after entry into force);
  • general-purpose AI rules including governance (12 months after entry into force);
  • obligations for high-risk systems (36 months after entry into force).


The AI-Mind consortium will investigate the Act’s implications for the phase in which the AI-Connector and Predictor will be tested in real-world conditions, and will monitor additional regulations at the national level. Stay connected to learn more about the next steps and how they impact the AI-based tools developed within the AI-Mind project.

Read more about the AI Act here: