The European Artificial Intelligence Act (AI Act), the world's first comprehensive regulation on artificial intelligence, has entered into force. The AI Act ensures that AI developed and used in the EU is trustworthy, with safeguards to protect people's fundamental rights. The regulation aims to establish a harmonised internal market for AI in the EU, encouraging the uptake of this technology and creating a supportive environment for innovation and investment.
"AI has the potential to change the way we work and live and promises enormous benefits for citizens, our society and the European economy. The European approach to technology puts people first and ensures everyone's rights are preserved. With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe", said Margrethe Vestager, Executive Vice-President for a Europe Fit for the Digital Age.
The AI Act introduces a forward-looking definition of AI based on a product-safety and risk-based approach in the EU. Most AI systems, such as AI-enabled recommender systems and spam filters, fall into the 'minimal risk' category. These systems face no obligations under the AI Act because of the minimal risk they pose to citizens' rights and safety. Certain AI-generated content, including deep fakes, must be labelled as such, and users must be informed when biometric categorisation or emotion recognition systems are being used; such risks fall under the 'specific transparency risk' category.
AI systems identified as high-risk will be required to comply with strict requirements, including risk-mitigation systems, high-quality data sets, logging of activity, detailed documentation, clear user information, human oversight, and a high level of robustness, accuracy, and cybersecurity. AI systems considered a clear threat to people's fundamental rights will be banned. This includes AI systems or applications that manipulate human behaviour to circumvent users' free will, such as toys that use voice assistance to encourage dangerous behaviour in minors, systems that allow 'social scoring' by governments or companies, and certain applications of predictive policing.
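The four risk tiers described above can be sketched as a simple mapping. This is an illustrative sketch only, not legal text: the example systems and one-line obligation summaries are informal paraphrases chosen for this example, not classifications drawn from the regulation itself.

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, as described in the text above."""
    MINIMAL = "minimal risk"
    TRANSPARENCY = "specific transparency risk"
    HIGH = "high risk"
    UNACCEPTABLE = "unacceptable risk"

# Hypothetical example systems mapped to tiers, purely for illustration.
EXAMPLES = {
    "spam filter": RiskTier.MINIMAL,
    "recommender system": RiskTier.MINIMAL,
    "unlabelled deep-fake generator": RiskTier.TRANSPARENCY,
    "emotion recognition system": RiskTier.TRANSPARENCY,
    "government social-scoring system": RiskTier.UNACCEPTABLE,
}

def obligations(tier: RiskTier) -> str:
    """Very rough, informal summary of the obligation level per tier."""
    return {
        RiskTier.MINIMAL: "no obligations under the AI Act",
        RiskTier.TRANSPARENCY: "labelling and disclosure obligations",
        RiskTier.HIGH: "strict requirements (risk mitigation, oversight, ...)",
        RiskTier.UNACCEPTABLE: "prohibited",
    }[tier]
```

For instance, `obligations(EXAMPLES["spam filter"])` returns the minimal-risk summary, reflecting that such systems face no obligations under the Act.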
The EU member states have until 2 August 2025 to designate national competent authorities who will oversee the application of the rules for AI systems and carry out market surveillance activities. Three advisory bodies will support the implementation of the rules. The European Artificial Intelligence Board will ensure a uniform application of the AI Act across EU Member States and act as the main body for cooperation between the Commission and the Member States. A scientific panel of independent experts will offer technical advice and input on enforcement. In particular, this panel can issue alerts to the AI Office about risks associated with general-purpose AI models. The AI Office can also receive guidance from an advisory forum composed of diverse stakeholders.
The Commission is also developing guidelines to define and detail how the AI Act should be implemented, and is facilitating co-regulatory instruments such as standards and codes of practice. The Commission opened a call for expressions of interest to participate in drawing up the first general-purpose AI Code of Practice, as well as a multi-stakeholder consultation giving all stakeholders the opportunity to have their say on the first Code of Practice under the AI Act.
The majority of the AI Act's rules will start applying on 2 August 2026. However, prohibitions on AI systems deemed to present an unacceptable risk will apply after just six months, while the rules for so-called general-purpose AI models will apply after 12 months.