The EU's new AI Act establishes comprehensive rules for the regulation of AI, aiming to safeguard fundamental rights and ensure the ethical use of AI within Europe and beyond.
Regulation (EU) 2024/1689 (the AI Act), which entered into force on 1 August 2024, is the first comprehensive legislative instrument globally to regulate artificial intelligence. The AI Act aims to ensure that AI technologies developed, deployed, and used within the EU are safe and uphold fundamental rights. It requires compliance with safety, transparency, and ethical standards before AI products can be marketed or used within the EU’s internal market. As a regulation, it has direct effect across all member states, requiring no national implementing legislation.
Definition of ‘AI Systems’
The AI Act defines an AI system as "a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments". This broad definition covers machine learning models used in image recognition, neural networks in natural language processing, expert systems in medical diagnosis, and rule-based AI in automated customer service chatbots. It likely excludes, however, traditional software that follows fixed instructions without adaptiveness or autonomy, such as basic spreadsheet programs or static webpage scripts.
Not all AI systems fall within the Act's scope. Article 2 excludes AI systems developed and used exclusively for military, defence, or national security purposes, ensuring that the AI Act does not encroach on member states’ ability to deploy AI technology for their defence and security.
Scope of the AI Act
The EU AI Act establishes obligations for providers, deployers, importers, distributors, and operators of AI systems with a link to the EU market. Most of these obligations fall on providers, that is, legal or natural persons who develop AI systems or place them on the EU internal market, regardless of whether they are based in the EU or a third country. End-users, meaning individuals or entities that merely use or interact with AI systems without operating or developing them, are not subject to regulatory obligations under the Act.
The AI Act has extraterritorial application, meaning that non-EU companies providing AI systems within the EU market or affecting EU citizens must also comply with the Act.
How the AI Act works
The AI Act classifies AI systems into four distinct categories: (a) unacceptable risk, (b) high risk, (c) limited risk, and (d) minimal risk. This categorization is based on the application of AI rather than the underlying technology itself, meaning that the same technology might fall into different risk categories depending on its use. The obligations imposed by the Act vary according to the risk level of the AI system.
(a) Unacceptable Risk AI Systems
‘Unacceptable risk’ AI systems include social scoring by governments, technologies that manipulate human behaviour to users' detriment, and real-time remote biometric identification in publicly accessible spaces (subject to narrow law-enforcement exceptions). These systems are deemed unacceptable because they pose significant threats to individual rights and societal values. Accordingly, Article 5 of the Act strictly prohibits placing ‘unacceptable risk’ AI systems on the EU market.
(b) High Risk AI Systems
‘High risk’ AI systems include those used in critical infrastructure such as transport, as well as in education, employment, and biometric identification. Providers of high-risk AI systems must comply with stringent requirements, including comprehensive risk management, robust data governance, and human oversight mechanisms.
(c) Limited Risk AI Systems
‘Limited risk’ AI systems pose a limited potential for harm and are therefore subject only to basic transparency obligations. These typically include AI systems that interact directly with end-users, such as customer service chatbots, and AI systems that generate deepfakes. The AI Act mandates that these systems meet specific transparency obligations, ensuring that end-users are aware they are interacting with an AI system and understand the system’s purpose and functionality.
(d) Minimal Risk AI Systems
The Act recognises that certain AI systems pose ‘minimal risk’, defined as those with little or no potential for harm, typically used in contexts with limited impact on individuals and society. Examples include AI systems used in video games or spam filters. ‘Minimal risk’ AI systems are currently not subject to specific regulations under the AI Act.
Roles and obligations of stakeholders
Providers of AI systems bear the primary responsibility for ensuring their AI systems comply with the Act's requirements. This includes conducting conformity assessments, implementing quality management systems, and registering high-risk AI systems in the EU database. Distributors must verify that providers have fulfilled their obligations and report any non-compliance to national authorities. Deployers are responsible for the proper usage and monitoring of AI systems, particularly high-risk ones. They must ensure these systems are used as intended and take corrective actions if issues arise. Importers must ensure that AI systems from non-EU countries meet EU standards and address any discrepancies.
General purpose AI and associated obligations
The AI Act also regulates General Purpose AI (GPAI) models, defined as models trained on large amounts of data using self-supervision at scale and capable of performing a wide range of distinct tasks. GPAI models can fall into any risk level depending on their application. Providers of GPAI models face regulatory obligations including maintaining up-to-date technical documentation and ensuring compliance with EU copyright and intellectual property laws, with enhanced obligations applying to models that pose systemic risk.
Regulatory governance and enforcement
The enforcement of the AI Act is a collaborative effort between the European Commission’s newly established EU AI Office and national supervisory authorities. Member states must designate "national competent authorities" to oversee the Act’s implementation within their jurisdiction. These authorities will conduct market surveillance and verify the proper performance of AI system conformity assessments.
Non-compliance with the AI Act carries significant penalties, structured according to the severity of the violation. Violations involving prohibited ‘unacceptable risk’ practices incur the highest penalties: EUR 35 million or 7% of the violating entity's worldwide annual turnover, whichever is higher. Breaches of other obligations are subject to lower fines. The Act also imposes penalties for providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities, with maximum penalties of EUR 7.5 million or 1% of the violating entity's worldwide annual turnover, whichever is higher.
Implementation timeline
The AI Act entered into force on 1 August 2024, but its provisions apply in stages. Chapters I and II, covering general provisions, definitions, and rules on prohibited AI practices, apply from 2 February 2025. Certain requirements, including notification obligations, governance, and the rules on GPAI models, apply from 2 August 2025, with most remaining provisions applying from 2 August 2026. Providers of GPAI models placed on the market before 2 August 2025 have until 2 August 2027 to achieve compliance. Member states must designate and communicate their national competent authorities to the Commission by 2 August 2025.
Significance for businesses in Cyprus
For businesses in Cyprus, the AI Act signifies a critical step towards ensuring AI technologies are safe, transparent, and ethically sound. Companies developing or deploying AI systems in Cyprus must comply with the AI Act's stringent requirements, ensuring their products meet EU standards. Such compliance will enhance the credibility of Cypriot businesses in the EU market, fostering trust and competitiveness. Understanding and adhering to these regulations will be essential for businesses navigating the evolving landscape of AI technology.
By Yiolanti Maou
For more information or any inquiries, please feel free to contact us.