
The EU AI Act has been adopted - Here is what marketers and businesses need to know

On Wednesday, March 13th, 2024, members of the European Parliament adopted the landmark AI law, the “EU Artificial Intelligence Act.” The EU thereby becomes the first jurisdiction in the world with comprehensive legislation on AI.
March 14, 2024
Est. 1 minute

The regulation establishes obligations for AI based on its potential risks and level of impact. The overall intent of EU lawmakers has been to ensure citizens can trust that the technology will not harm them as it is implemented and updated, while boosting innovation and establishing Europe as a leader in the AI field.

This means that companies will now be obligated to familiarise themselves with the new requirements and prepare for compliance by performing risk assessments, documenting workflows, and meeting certain transparency requirements.

As our Group Head of Legal, Rasmus Lenler-Petersen, notes: “Marketers utilising the benefits from AI tools will need to assess their actual use of AI technologies to determine their level of obligations under the AI Act. With the current technologies utilised by marketers, most use cases will fall into the ‘specific transparency risk’ and non-high risk categories, e.g., when utilising well-known tools such as ChatGPT, Gemini, or other generative AI tools and models. It’s also worth noting that higher compliance levels will apply for AI tech developers compared to AI tech users.” He further notes: “Companies utilising AI now have the opportunity to prepare, assess and adapt their strategies accordingly.”

The EU Commission proposed the new legislation in 2021. Since then, EU countries and the EU Parliament have been in tough negotiations to create an act that protects EU citizens without limiting the innovation and growth of AI companies and AI solutions.

See the European Parliament's coverage of the new AI Act for further details.

Who does it apply to?
The AI Act applies to all public and private actors inside and outside the EU as long as the AI system is placed on the EU market or its use affects people in the EU. The AI Act covers AI system providers, deployers, and users.

Do fines for non-compliance apply?
Yes, non-compliance with the AI Act can result in fines of up to EUR 35 million or 7% of global annual turnover, whichever is higher.

Introduction of a risk-based tiering approach

The AI Act introduces a risk-based approach, categorising AI systems according to the level of risk they pose.
Companies must assess their AI technologies to determine their classification under this framework and adhere to the corresponding compliance requirements. The risk-based approach distinguishes between:

  • Unacceptable risk (prohibited practices): E.g., manipulation causing physical or psychological harm, social scoring, biometric categorisation based on sensitive characteristics, and emotion recognition in the workplace and schools.
    • Example: A system that scrapes social media to score individuals' trustworthiness without consent would be banned.
  • High risk (subject to strict compliance requirements): E.g., products in education, HR, credit assessment, access to services, and law enforcement. These are systems that could cause significant harm to health, safety, or fundamental rights.
    • Example: An AI system used in hospitals to diagnose diseases must undergo strict risk mitigation and transparency requirements.
  • Specific transparency risk (subject to transparency obligations): E.g., chatbots, generative AI and general-purpose AI tools such as ChatGPT, and deepfake systems. Users must be made aware that they are interacting with a machine or that content is AI-generated.
    • Example: A chatbot used for customer service on a website or in voice calls must inform users they're interacting with AI.
  • Minimal/no risk: E.g., AI applications like spam filters, which are subject to minimal regulation.

What’s next, and when to comply?

The regulation is in the final stages of the lawyer-linguist check. Once the final text is ready, it must be formally approved by the EU Council and published in the “Official Journal” of the EU. It will enter into force 20 days after publication, and the entire AI Act will apply in full two years after that.

The Act's obligations take effect on a staggered timeline: restrictions on certain AI practices apply six months after entry into force, codes of practice nine months after, general-purpose AI rules (including governance) after 12 months, and the stringent requirements for high-risk AI systems after 36 months.

So, in summary:

  • After 6 months: EU countries will be obliged to ban prohibited AI systems;
  • After 1 year: rules for general-purpose AI systems will start to apply; and
  • After 2 years: the entire AI Act will be enforceable.

Where does this leave you if your company utilizes AI in marketing or data science practices?

As the EU introduces the AI Act, companies employing AI face a new regulatory landscape. The legislation demands that companies rigorously assess and classify their AI technologies according to potential risks.

The key requirements for companies are:

  • Risk assessments: Companies must carry out risk assessments to identify the potential risks associated with their AI systems. These risks could include safety risks, bias risks, and security risks.
  • Technical documentation: Companies must create technical documentation for their AI systems. This documentation should explain how the AI system works and how it meets the Act's requirements.
  • Transparency: Companies must be transparent about how their AI systems work. This means providing users with information about how the AI system will make decisions and what data it will use.
  • Data ethics and protection: The AI Act requires companies to implement ethical data processing and protection of personal data in accordance with existing GDPR rules.

To prepare for compliance, companies need to invest in internal processes for risk assessment, documentation, and transparency. Developing a comprehensive understanding of the Act's requirements and ongoing engagement with regulatory developments will be essential, similar to what we saw back in 2018 when new GDPR requirements took effect.

Implementing an AI governance strategy could be the starting point. A robust strategy must align with business objectives and identify the areas of the business where AI will most benefit the organisation's strategic goals.

It's still unclear how some parts of the new legislation should be interpreted, how it will be applied in practice, and how the authorities of individual EU countries will enforce the rules once they become effective; for example, whether all AI-generated content should be marked with "created with AI" or similar.

We hope that the major providers of AI models and tools will take the lead in adapting to comply with the EU AI Act, enabling European companies to continue using these tools in full legal compliance.

s360 will continue to monitor developments closely, aiming to obtain more information from the AI industry and the EU that we can share with our customers.

Contact information

Johan Peen, Group COO, [email protected], +45 3063 9366.
Rasmus Lenler-Petersen, VP & Head of Legal, [email protected], +45 2071 2469.
Thomas Toftdahl Jensen, VP & Head of Data Intelligence, [email protected], +45 6060 9106.

It's important to stress that the above can't replace legal advice. s360 and its employees do not offer legal counselling in any form, including circumstances surrounding the setup, integration, use and input/output of AI, generative AI, large language models, APIs, algorithms and analytic software tools. s360 does not accept any form of responsibility with regard to direct or indirect losses as a consequence of the use of this article, including losses resulting from inadequate or wrongful use of information, evaluations or other conditions. s360 recommends seeking legal counselling from a qualified lawyer if you are in doubt about any legal requirements and conditions, AI compliance and/or use of data.

Newsletter

Stay updated on AI & regulation

Join our monthly s360 mail to get industry news on AI, digital marketing, technology and data. We put a lot of effort into our newsletter to provide valuable and actionable insights to you.