
The EU AI Act: DOs, DON’Ts, and what’s ahead

October 2024 | 4 minutes
by Savion Ray

We’ve covered the basics of the EU AI Act and what’s important for communicators to know. Now, it’s time to get practical — what should communicators do, avoid, and expect in the future as they navigate AI in their daily operations? Here’s a guide to help you make the most of AI in communications while staying compliant with evolving regulations.

Practical next steps for communicators using AI

It is essential to balance leveraging AI’s capabilities with adhering to the guidelines set out in the EU AI Act. Understanding what to do and what to avoid will help you navigate these new regulations while ensuring the ethical and effective use of AI. Here are some practical steps to guide you in integrating AI responsibly into your communications strategy:

Do stay transparent when using AI

Transparency is one of the fundamental obligations under the EU AI Act. Whether you’re generating AI-created images, videos, or text, it’s important to make it clear that AI played a role in creating the content. However, if AI-generated text is reviewed and approved by a human, there is no need to disclose that AI was involved, since the human editor assumes responsibility for the content. For images and other visual content that are generated or modified with the help of AI, transparency is always required. This approach builds trust and helps avoid potential legal risks, especially when the content could influence public opinion or critical decision-making.

Do train your team on AI best practices

AI is only as effective as the people using it, which is why ongoing training is crucial. The EU AI Act places a strong emphasis on transparency, compliance, and ethical use of AI systems, and every team member involved in deploying or interacting with AI must understand these obligations. Training your team on the Act’s requirements, such as properly disclosing AI-generated content, respecting copyright laws, and knowing when human oversight is required, ensures your organization stays compliant and avoids potential legal pitfalls. Equipping your team with a solid understanding of AI best practices also fosters a culture of responsible AI use, which will only grow in importance as the technology becomes further integrated into communication strategies.

Don’t use AI irresponsibly in sensitive contexts

AI offers powerful tools, but with great power comes great responsibility. Avoid deploying AI in situations where it might exploit vulnerable groups or lead to unintended harm. For example, using AI-generated content in political campaigns or areas involving sensitive personal data could carry significant risks. Always consider the ethical implications of using AI in these contexts, and ensure you follow best practices to avoid exploitation or misinformation.

Don’t rely solely on AI without human oversight

AI can enhance productivity, but it should never completely replace human judgment — especially in areas that affect decision-making, like job applications, public messaging, or customer interactions. Ensure that AI-generated content or recommendations are reviewed and approved by a human before they are used publicly. This not only prevents potential errors but also helps you remain compliant with the EU AI Act.

What to expect moving forward

As the implementation of the AI Act continues, here are some key developments to keep an eye on:

Watermarks and labels becoming standard

A significant change we are seeing is the growing use of watermarks and labels to identify AI-generated content, such as images, videos, and text. These markers help ensure transparency, making it clear to audiences and stakeholders when content has been created or altered by AI.

The AI Office has yet to set specific standards for these practices. One leading approach is the C2PA standard, which attaches important details (such as the author, date, and AI system used) to content and secures them cryptographically. Combined with watermarks, this approach helps keep content authentic and makes tampering detectable. While the responsibility for implementing these markers typically lies with the creators of AI systems, anyone using AI tools should make sure the content they work with complies with these transparency requirements.
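To make the idea more concrete, here is a minimal Python sketch of the kind of provenance record a C2PA-style manifest carries. The field names loosely follow C2PA conventions (a claim_generator, a c2pa.actions assertion, and the IPTC digitalSourceType term for AI-generated media), but the tool and file names are hypothetical, and this is a simplified stand-in rather than the real specification: actual C2PA manifests are binary structures embedded in the asset and signed with certificates, not plain JSON secured with a bare hash.

```python
import hashlib
import json
from datetime import datetime, timezone

# Illustrative sketch only: real C2PA manifests are embedded in the
# asset and signed with certificate-backed signatures, not plain JSON.
manifest = {
    # Tool that produced the claim (field name follows C2PA convention;
    # the tool itself is a hypothetical example)
    "claim_generator": "ExampleAIImageTool/1.0",
    "title": "campaign-visual.png",  # hypothetical asset name
    "assertions": [
        {
            # Standard C2PA assertion recording what was done to the asset
            "label": "c2pa.actions",
            "data": {
                "actions": [
                    {
                        "action": "c2pa.created",
                        "when": datetime.now(timezone.utc).isoformat(),
                        # IPTC vocabulary term for fully AI-generated media
                        "digitalSourceType": (
                            "http://cv.iptc.org/newscodes/digitalsourcetype/"
                            "trainedAlgorithmicMedia"
                        ),
                    }
                ]
            },
        }
    ],
}

# Stand-in for the cryptographic binding: hashing the serialized record
# makes later tampering with the metadata detectable (real C2PA also
# binds signed hashes of the asset itself, so edits to the image surface too).
serialized = json.dumps(manifest, sort_keys=True).encode("utf-8")
print("manifest digest:", hashlib.sha256(serialized).hexdigest())
```

For production use, the open-source c2pa SDKs published by the Content Authenticity Initiative handle the embedding and signing steps, so communicators generally encounter these manifests through the tools they already use rather than building them by hand.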

Development of an industry-wide Code of Practice

As we move forward, codes of practice are set to become a central part of AI regulation. The EU’s Code of Practice for general-purpose AI (GPAI), currently being drafted with input from a range of stakeholders, will be key in setting these standards. While the final version is expected in April 2025, the ongoing discussions already highlight essential focus areas, such as transparency in data use, risk assessments, and the disclosure of training data. Notably, while GPAI providers are required to report on the data used to train their models, the specifics around licensed, scraped, and open data are still being debated.

Room for future adjustments

The EU AI Act is designed to be adaptable as technology evolves. Expect future updates that could expand the list of high-risk AI systems or tighten transparency requirements. As AI continues to develop, so too will the regulatory landscape, which means communicators must stay proactive in monitoring and adapting to these changes.

Preparing for the future

As AI becomes integral to communication, staying ahead of legal obligations and ethical standards is more important than ever. The EU AI Act provides a robust framework for transparency and compliance, but companies shouldn’t wait for industry standards to catch up. Start developing your own internal AI policies now, and ensure that AI is used responsibly and effectively.

Have questions about how the EU AI Act impacts your work? Contact us today to learn more.
