
EU AI Act challenges Sales & Marketing

Europe’s legislators are leading the charge in regulating AI with the passage of the European Union Artificial Intelligence Act.
by Stephan Steiner

The European Parliament approved this groundbreaking set of rules on March 13, 2024, making it the world’s first major regulatory framework governing how companies use AI. The act, first proposed in 2021, categorizes AI systems by risk level, ranging from unacceptable to high, limited, and minimal risk.

Expected to take effect after receiving final endorsement from the Council of the European Union in May 2024, the regulation will be phased in and enforced gradually from 2025 onwards.

What is the EU AI Act?
The EU AI Act is a comprehensive legal framework that imposes transparency and reporting requirements on any company placing AI systems on the EU market, or whose system outputs are used within the EU, irrespective of where those systems are developed or deployed. Proposed by the European Commission in April 2021 and politically agreed by the three EU institutions in December 2023, the act aims to ensure that AI used in the EU complies with EU standards.

Why does it matter?
Virtually every business is either already using AI or planning to. Companies that deploy AI systems in the EU market will be responsible for complying with the EU AI Act, with obligations varying according to the level of risk their AI systems pose across the value chain.

AI holds significant promise for Sales and Marketing, offering personalized chatbots, dynamically generated content, and automation tools that draw on deep knowledge of a user’s interactions and scores. However, a key question arises: if AI-generated content falls into the limited-risk (transparency) category, will end-users still engage with it once it is labeled “AI generated”?

Will transparency and added value create enough trust to win out over human-generated messages? Or will the “AI generated” label be perceived as inauthentic and potentially trigger automatic suppression by email (or other communication) systems, reducing overall response rates? And how will Sales & Marketing teams adapt so they can keep generating affordable demand for their products and services while fostering long-term customer relationships at scale?

Examples of EU AI Act Risk Levels:

  • High risk: AI applications in fields such as robot-assisted surgery, credit scoring systems, law enforcement, and automated visa application processing.
  • Limited risk (transparency obligations): Use cases where AI-generated content is used in human interactions. Such content must be clearly labeled as AI generated to provide full transparency. This applies, for instance, to chatbots: people must be made aware that they are interacting with a machine so they can make an informed decision to continue or exit the conversation. Technically, this also covers Sales and Marketing content generated by AI in automation tools. But will prospects and customers engage when it is labeled “AI generated”, or will email systems simply move those messages to trash or spam (an example for illustration only)? A minimal labeling sketch follows this list.
  • Minimal or no risk: Applications like AI-enabled video games or spam filters, subject to fewer restrictions.
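
To make the transparency obligation tangible, here is a minimal sketch in Python of how a marketing automation pipeline might attach an “AI generated” disclosure to outbound messages. The data model, function name, and label wording are my own illustration, not anything prescribed by the act.

  # Hypothetical sketch: attach a transparency disclosure to AI-generated content.
  # The label text and data model are illustrative, not prescribed by the EU AI Act.
  from dataclasses import dataclass

  AI_DISCLOSURE = "This message was generated with the help of AI."

  @dataclass
  class OutboundMessage:
      recipient: str
      subject: str
      body: str
      ai_generated: bool  # set by the content pipeline that produced the body

  def with_disclosure(message: OutboundMessage) -> OutboundMessage:
      """Append a clearly visible AI disclosure to AI-generated messages."""
      if message.ai_generated and AI_DISCLOSURE not in message.body:
          message.body = f"{message.body}\n\n--\n{AI_DISCLOSURE}"
      return message

  # Usage: label an AI-drafted email before it enters the send queue.
  email = OutboundMessage(
      recipient="prospect@example.com",
      subject="Your personalized offer",
      body="Hi Alex, based on your recent downloads ...",
      ai_generated=True,
  )
  print(with_disclosure(email).body)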


Potential Financial Impact!
As with other European Union laws, non-compliance can result in severe fines and penalties; a rough calculation sketch follows the list below.

  • Up to 7% of total worldwide annual turnover or €35 million (whichever is higher) for severe violations.
  • Up to 3% of total worldwide annual turnover or €15 million (whichever is higher) for most other violations.
  • Additionally, supplying incorrect information to authorities may incur fines of up to 1.5% of total worldwide annual turnover or €7.5 million (whichever is higher).
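
To make the “whichever is higher” rule concrete, here is a rough calculation sketch in Python. The percentages and fixed amounts mirror the figures listed above; the tier names, function name, and example turnover are purely illustrative.

  # Rough sketch of the "whichever is higher" rule for EU AI Act fines.
  # Tier percentages and fixed amounts mirror the figures above; the tier
  # names and the example turnover are illustrative only.
  FINE_TIERS = {
      "severe_violation": (0.07, 35_000_000),       # up to 7% or EUR 35 million
      "other_violation": (0.03, 15_000_000),        # up to 3% or EUR 15 million
      "incorrect_information": (0.015, 7_500_000),  # up to 1.5% or EUR 7.5 million
  }

  def max_fine(annual_turnover_eur: float, violation: str) -> float:
      """Return the maximum possible fine: the higher of the turnover-based
      percentage and the fixed amount for the given tier."""
      pct, fixed = FINE_TIERS[violation]
      return max(pct * annual_turnover_eur, fixed)

  # Example: a company with EUR 2 billion worldwide annual turnover.
  print(max_fine(2_000_000_000, "severe_violation"))       # 140,000,000 (7% applies)
  print(max_fine(2_000_000_000, "incorrect_information"))  # 30,000,000 (1.5% applies)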


What can businesses do next?
This new law is not dissimilar to previous EU regulations (such as the GDPR), and we expect many more countries to follow with comparable AI rules. Much as with privacy and security requirements, companies should take steps such as the following:

  • Foster a culture of understanding and transparency within your organization, partners, and vendors.
  • Assess the risk level of each AI system or use case and categorize it accordingly (a minimal inventory sketch follows this list).
  • Implement governance platforms/tools to manage and mitigate AI risks, especially for high-risk scenarios.
  • Document and retain necessary procedures, processes, and technical specifications to ensure ongoing compliance.
  • Seek expert (legal) advice, particularly for high-risk categories.
  • For Marketing & Sales, continue to listen to your prospects and customers: keep testing different scenarios and be upfront with customers and service providers (for example, about high email rejection rates by ISPs).
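
For the assessment and documentation steps above, a lightweight internal register of AI use cases and their risk categories can be a practical starting point. The sketch below is a minimal illustration in Python: the risk levels follow the categories discussed in this post, while the data model and field names are hypothetical, not a format required by the regulation.

  # Minimal sketch of an internal AI use-case register for risk assessment.
  # The risk levels follow the categories discussed above; the data model and
  # field names are illustrative, not a format required by the EU AI Act.
  from dataclasses import dataclass, field
  from datetime import date
  from enum import Enum

  class RiskLevel(Enum):
      UNACCEPTABLE = "unacceptable"
      HIGH = "high"
      LIMITED = "limited"   # transparency obligations, e.g. AI-generated content
      MINIMAL = "minimal"

  @dataclass
  class AIUseCase:
      name: str
      owner: str
      description: str
      risk_level: RiskLevel
      mitigations: list[str] = field(default_factory=list)
      last_reviewed: date = field(default_factory=date.today)

  register = [
      AIUseCase(
          name="Outbound email personalization",
          owner="Marketing Ops",
          description="AI-drafted nurture emails based on lead score and interactions",
          risk_level=RiskLevel.LIMITED,
          mitigations=["'AI generated' disclosure in footer", "human review before send"],
      ),
  ]

  for use_case in register:
      print(f"{use_case.name} -> {use_case.risk_level.value}")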

What are your thoughts? I look forward to exchanging ideas and best practices.

#StephanSteiner #AIcompliance #AIEUact