On Wednesday, March 13th, 2024, the members of the EU Parliament adopted the landmark AI law, the “EU Artificial Intelligence Act.” The EU thereby becomes the first jurisdiction in the world with comprehensive AI legislation.
The regulation establishes obligations for AI based on its potential risks and level of impact. The overall intent of EU lawmakers has been to ensure citizens can trust that the technology will not harm them as it is implemented and updated, while boosting innovation and establishing Europe as a leader in the AI field.
This means that companies will now be obligated to familiarise themselves with the new requirements and prepare for compliance by performing risk assessments, documenting workflows, and meeting certain transparency requirements.
As our Group Head of Legal, Rasmus Lenler-Petersen, notes: “Marketers utilising the benefits from AI tools will need to assess their actual use of AI technologies to determine their level of obligations under the AI Act. With the current technologies utilised by marketers, most use cases will fall into the ‘specific transparency risk’ and non-high risk categories, e.g., when utilising well-known tools such as ChatGPT, Gemini, or other generative AI tools and models. It’s also worth noting that higher compliance levels will apply for AI tech developers compared to AI tech users.” He further notes: “Companies utilising AI now have the opportunity to prepare, assess and adapt their strategies accordingly.”
The EU Commission proposed the new legislation in 2021. Since then, EU countries and the EU Parliament have been in tough negotiations to create an act that protects EU citizens without limiting the innovation and growth of AI companies and AI solutions.
See the European Parliament's coverage of the new AI Act for further details.
Who does it apply to?
The AI Act applies to all public and private actors inside and outside the EU as long as the AI system is placed on the EU market or its use affects people in the EU. The AI Act covers AI system providers, deployers, and users.
Do fines for non-compliance apply?
Yes. Non-compliance with the AI Act can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.
That said, it is important to note that the AI Act introduces a risk-based approach to categorising AI systems. Companies must assess their AI technologies to determine their classification under this framework and adhere to the corresponding compliance requirements. The risk-based approach distinguishes between:

- Unacceptable risk: AI practices that are prohibited outright, such as social scoring.
- High risk: AI systems subject to strict requirements before and after they are placed on the market.
- Specific transparency risk: AI systems, such as chatbots and generative AI tools, that must make users aware they are interacting with AI.
- Minimal risk: all other AI systems, which face no additional obligations under the Act.
The final text of the AI Act must be published in the “Official Journal” of the EU.
The regulation is in the final stages of the lawyer-linguist check. It must also be formally endorsed by the EU Council, and once the final law text is ready, it will be published in the “Official Journal” of the EU. It will come into effect 20 days after publication, with the entire AI Act fully applicable 2 years after that.
Staggered start dates apply for the Act: restrictions on certain AI practices become applicable six months after it comes into effect, codes of practice nine months after, general AI regulations, including governance, after 12 months, and the stringent requirements for high-risk AI systems after 36 months.
So, in summary:
As the EU introduces the AI Act, companies employing AI face a new regulatory landscape. The legislation demands that companies rigorously assess and classify their AI technologies according to potential risks.
The key requirements for companies are:

- Assessing and classifying their AI technologies according to potential risks.
- Documenting AI-related workflows.
- Meeting the applicable transparency requirements.
To prepare for compliance, companies need to invest in internal processes for risk assessment, documentation, and transparency. Developing a comprehensive understanding of the Act's requirements and ongoing engagement with regulatory developments will be essential, similar to what we saw back in 2018 when new GDPR requirements took effect.
Implementing an AI governance strategy could be the starting point. A robust strategy must align with business objectives and identify the areas of the business where AI will most benefit the organisation's strategic goals.
It's still unclear how some parts of the new legislation should be interpreted, how it will work in practice, and how authorities in individual EU countries will enforce the rules once they become effective, for example, whether all AI-generated content must be marked with "created with AI" or similar.
We hope that the major AI model and tool providers will take the lead in adapting to comply with the EU AI Act, enabling European companies to continue using these tools in full legal compliance.
s360 will continue to monitor developments closely, aiming to obtain more information from the AI industry and the EU that we can share with our customers.
Johan Peen, Group COO, [email protected], +45 3063 9366.
Rasmus Lenler-Petersen, VP & Head of Legal, [email protected], +45 2071 2469.
Thomas Toftdahl Jensen, VP & Head of Data Intelligence, [email protected], +45 6060 9106.
It's important to stress that the above cannot replace legal advice. s360 and its employees do not offer legal counselling in any form, including on matters surrounding the setup, integration, use and input/output of AI, generative AI, large language models, APIs, algorithms and analytics software tools. s360 does not accept any responsibility for direct or indirect losses arising from the use of this article, including losses resulting from inadequate or wrongful use of information, evaluations or other conditions. s360 recommends seeking legal counselling from a qualified lawyer if you are in doubt about any legal requirements and conditions, AI compliance and/or use of data.
Join our monthly s360 mail to get industry news on AI, digital marketing, technology and data. We put a lot of effort into our newsletter to provide valuable and actionable insights to you.