Nixu: The AI Act entered into force – Now what?


Artificial intelligence is a topic on which almost everyone has an opinion. Some see it as a solution to foundational societal problems, one that drives human advancement and unlocks new potential in fields like medicine, where it aids groundbreaking research, early disease detection, and personalised treatments. Others perceive AI more as a toy, focusing on its applications in entertainment and everyday convenience while overlooking its potential to address complex challenges. Regardless of what AI may or may not bring us in the future, it is an excellent tool for enhancing efficiency, decision-making, and innovation. That is why organisations should make use of the benefits of AI, but do so in a legally sustainable manner.
What, then, does AI mean from a regulatory compliance perspective? While AI has several legal and compliance dimensions, the most topical piece of legislation relating to it is the EU Artificial Intelligence Act[1] (AI Act). Having entered into force on 1 August 2024, it is the first EU-level law to regulate artificial intelligence specifically, and it affects a large number of organisations. Indeed, we have noticed rising interest in the AI Act among organisations keen to understand how it will impact their legal compliance and operational practices.
So, how should one start to unpack the AI Act?
Like many other recent pieces of EU law (e.g., NIS2), the AI Act emphasises risk management, possible heavy sanctions, and management responsibility. At the same time, understanding the AI Act requires understanding its context.
The AI Act is product safety regulation that aims to protect the fundamental rights, health, and safety of humans. In this respect it differs from, for example, the NIS2 and CER directives. The AI Act follows the logic of other, already established product safety laws, which affects the interpretation of some of its definitions: they are found not in the AI Act itself but in other product safety regulation.
It should also be emphasised that the AI Act primarily applies to organisations that develop and offer AI-based solutions, imposing requirements on those responsible for bringing these systems to market. To a more limited extent, it also applies to companies that use AI tools, such as Microsoft Copilot or ChatGPT, to support their work. In the AI Act, these organisations are called deployers.
In addition, the AI Act takes a risk-based approach. AI systems are classified into four risk categories based on their use: unacceptable risk, high risk, transparency risk, and minimal or no risk. Certain risks, such as cyber security risks specific to AI systems and biases that may lead to discrimination, are specifically mentioned in the Act.
[1] Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act)
As mentioned earlier, the AI Act introduces heavy sanctions.
Although the satisfaction of knowing one's organisation complies with applicable legislation is a good motivator, the fear of sanctions is often an even better one. While penalty regimes vary between member states, the maximum fines are the same in every country. Depending on which article an organisation infringes, fines may reach €35 million or 7% of the organisation's total annual worldwide turnover, whichever is higher. The AI Act's provisions on administrative fines will apply from August 2025, or August 2026 in the case of general-purpose AI models. They will be complemented by national rules on penalties and other enforcement measures.
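As a rough illustration of the "whichever is higher" rule (not legal advice, and the function name and example turnover figures are our own), the maximum fine cap can be expressed as a simple calculation:

```python
def max_ai_act_fine(annual_worldwide_turnover_eur: int) -> int:
    """Illustrative ceiling for the most serious AI Act infringements:
    EUR 35 million or 7% of total annual worldwide turnover,
    whichever is higher. Integer arithmetic avoids float rounding."""
    return max(35_000_000, annual_worldwide_turnover_eur * 7 // 100)

# For a company with EUR 1 billion turnover, 7% (EUR 70 million) exceeds the flat cap:
print(max_ai_act_fine(1_000_000_000))  # 70000000

# For a company with EUR 100 million turnover, the EUR 35 million cap applies:
print(max_ai_act_fine(100_000_000))  # 35000000
```

Note that this is only the upper bound set at EU level; the actual fine in a given case is determined by national authorities within that ceiling.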
Artificial intelligence and its governance will demand both significant time and financial investment from organisations within the Act's scope. It will impact a wide range of industries, including critical infrastructure.
To explore the future of cyber security for critical infrastructure and the effects of AI on it, join Nixu's CEO, Teemu Salmi, on Stage One at Cyber Security Nordic on 30 October 2024, from 11:15 to 11:45. Teemu will share his insights and present our recent cross-industry research findings, shedding light on the current state and future aspirations of critical infrastructure industries when it comes to their cyber security.
Come visit Nixu’s stand S2, where you can also preorder our Maritime and Energy Cyber Priority reports or sign up for our upcoming cross-industry webinar.
You can find the EU AI Act here: https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32024R1689