AI is becoming an increasingly important part of communication strategies. While some organizations are only starting to explore the technology’s potential, others have already established AI training programs and equipped their teams with AI expertise. As the technology becomes central to modern communication work, professionals need to understand the obligations set out by the EU AI Act. This legislation introduces essential rules on transparency and compliance that affect both providers and deployers.
Join us as we break down the two chapters of the Act most relevant for communicators: Chapter IV on Transparency Obligations and Chapter V on General-Purpose AI Models.
The EU AI Act places a strong emphasis on transparency to ensure that people interacting with AI or consuming AI-generated content are fully informed. These transparency obligations apply to both providers and deployers, with distinct responsibilities for each.
If you are a provider, meaning you develop an AI tool that generates content such as text, images, video, or audio, it is your responsibility to ensure that this content is clearly marked as AI-generated. For instance, if your AI creates marketing visuals or press releases, the output must carry a machine-readable marking indicating that it was produced by AI. This rule is key to preventing misinformation and making sure that users understand when they are engaging with artificially generated material.
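The Act does not prescribe a particular marking technique; in practice, providers use watermarks or provenance metadata such as C2PA Content Credentials. As a minimal sketch of what a machine-readable label can look like, the Python snippet below uses the Pillow library to embed text chunks into a PNG. The key names, values, and file names are illustrative placeholders, not terms mandated by the Act.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Load a generated visual and attach machine-readable provenance fields.
image = Image.open("campaign_visual.png")  # placeholder file name

metadata = PngInfo()
metadata.add_text("ai_generated", "true")           # illustrative key
metadata.add_text("generator", "example-model-v1")  # hypothetical tool name

# Save a labeled copy; the text chunks travel with the file and can be
# read back programmatically by downstream tools.
image.save("campaign_visual_labeled.png", pnginfo=metadata)
```

Note that plain text chunks can be stripped when platforms re-encode uploaded images, which is why robust watermarking and signed provenance standards exist alongside simple metadata.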
On the other hand, if you are a deployer, meaning you use an AI tool to create content for your campaigns or social media, you must ensure that any AI-generated content is accompanied by a clear disclosure. For example, if you use AI systems to create or manipulate images, audio, or video, you must disclose that the content is artificially generated. The same applies to AI-generated text that informs the public on matters of public interest, unless the text has undergone human editorial review.
In addition to content creation, the AI Act requires that users be informed when they are interacting with an AI system. For example, if an organization uses an AI chatbot to handle customer service inquiries, the system must clearly notify users at the start of the conversation that they are engaging with AI. This notification is required unless it is obvious to a reasonably well-informed person, given the context and circumstances, that they are interacting with AI.
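As a rough sketch of how such a notice might be wired into a chatbot, the snippet below prepends a disclosure to the first reply of every new session. The function, session flag, and message wording are our own illustrative assumptions; the Act requires that users be informed, not any particular phrasing.

```python
AI_DISCLOSURE = (
    "Hi! You are chatting with an AI assistant. "
    "You can ask to be connected to a human colleague at any time."
)

def render_reply(model_answer: str, session_is_new: bool) -> str:
    """Prepend the AI disclosure to the first message of a new session."""
    if session_is_new:
        return f"{AI_DISCLOSURE}\n\n{model_answer}"
    return model_answer
```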
General-purpose AI models (GPAI), such as the large language models behind ChatGPT and other content generation tools, are becoming common in communications. These versatile systems can perform a wide range of tasks, which is why the AI Act introduces specific obligations for GPAI.
If you are a provider of GPAI models, your obligations under the AI Act include ensuring that the models are transparent and compliant with EU laws. Providers must maintain comprehensive documentation about the model’s capabilities, its training process, and its limitations. This documentation helps deployers understand how the AI model works and what its potential risks are.
Additionally, providers must ensure that GPAI models comply with EU copyright law, which is especially relevant when these models are used to generate creative content. For instance, because GPAI models are typically trained on large volumes of material that may include copyrighted works, providers are responsible for publishing a sufficiently detailed summary of the content used for training. This helps deployers assess potential copyright issues when using these models.
Let’s consider how these transparency and compliance rules might play out in everyday communication work:
If a communications team uses an AI tool to generate images for a social media campaign, the team must make clear that the images are AI-generated. This applies regardless of whether the AI tool was developed in-house (making the organization a provider) or purchased from a third party (making the organization a deployer). Transparency is particularly important where the content could influence public opinion, such as AI-generated images used in political campaigns, news media, or public announcements.
Similarly, if the AI system creates or manipulates deepfakes (images, audio, or videos), a clear disclosure must be provided. In cases where the content is artistic, satirical, or fictional, the labeling should not interfere with the enjoyment of the work but must still be present.
For deployers using AI to generate text in public communications, such as newsletters, press releases, or policy documents, there are strict transparency rules. Any AI-generated text that informs the public on important matters must be disclosed as AI-generated unless it was overseen by a human. If a human reviews, edits, and takes responsibility for the final content, there is no requirement to disclose that AI was involved in its creation. This exception allows communicators to use AI as a drafting tool while still maintaining control over the content.
However, if the AI-generated text is published without human intervention or review, the fact that it was created by an AI must be made clear to the audience. Transparency helps prevent misinformation and ensures that the public understands the role AI played in generating the content.
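To make this decision rule concrete, here is a small sketch of how a publishing workflow might decide whether a disclosure label is needed. The field names and the simplified logic reflect our reading of the text-disclosure obligation and its human-review exception; they are not an official compliance check.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    ai_generated: bool              # was AI used to produce the text?
    human_reviewed: bool            # has a person reviewed and edited it?
    editorial_responsibility: bool  # does a person or organization take
                                    # responsibility for the publication?

def needs_ai_disclosure(draft: Draft) -> bool:
    """AI-generated text for the public needs a disclosure unless a human
    has reviewed it and someone holds editorial responsibility for it."""
    if not draft.ai_generated:
        return False
    return not (draft.human_reviewed and draft.editorial_responsibility)
```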
The EU AI Act makes it clear that both providers and deployers of AI systems have crucial roles in ensuring transparency and compliance. For communicators, this means being proactively transparent about AI-generated content and adhering to copyright laws.
Whether you are developing AI tools or using them in your day-to-day operations, understanding your obligations will help you stay compliant, build trust with your audience, and avoid potential legal pitfalls.
In the next post of this series, we’ll explore practical do’s and don’ts for communicators using AI, helping you make the most of these tools while navigating the evolving legal landscape.
Have questions about how the EU AI Act impacts your work? Contact us today to learn more.