AI Content Governance: Leading with Compliance and Innovation

This article has been kindly contributed by Paul Melcher, visual technology expert and founder of online magazine Kaptur.


As the European Union advances towards adopting a unified stance on comprehensive AI legislation, it’s imperative for content creation and management providers to stay well-informed about the forthcoming changes. This moment presents a unique opportunity to lead by implementing practices today that may become mandatory in the future. By doing so, providers can assist users in establishing appropriate guidelines for managing AI-generated content across various mediums.

The proposed Artificial Intelligence Act classifies AI systems into tiers based on their potential societal impact, ranging from minimal risk to unacceptable risk. High-risk systems will be subject to increased regulatory scrutiny, yet all AI applications will be monitored to some extent. A key requirement is the clear identification of AI-generated content, be it text, images, videos, or audio.

This initiative mirrors an executive order issued by President Biden in the USA, reinforced by the proposed AI Labeling Act, which emphasizes the need for effective labeling and provenance mechanisms to ensure Americans can discern AI-generated content. Although the specifics are yet to be defined, the Department of Commerce has been tasked with proposing standards – due to be made public in early April 2024 – for detecting AI-generated content and authenticating official materials.

In tandem with the swift legislative advancements on content labeling, there are also rapid developments in copyright protection, affecting both the AI training process and its outputs. While DAM vendors may not be directly affected, copyright protection in training data significantly influences the choice of models for tool implementation. Current lawsuits and growing evidence suggest that European and UK legislation will soon make the unauthorized use of scraped data for training purposes illegal. Thus, a DAM company utilizing Stability AI, Midjourney, or OpenAI for its solutions could expose its users to serious legal risks, as those models’ training data was allegedly obtained without proper authorization or compensation to copyright holders.

Moreover, as labeling AI-generated content is on track to become a global standard, ensuring the authenticity of content remains paramount. The emergence of AI-assisted editing tools necessitates a standardized method for tracing a file’s history, a practice that will likely become obligatory. The initiative led by the Coalition for Content Provenance and Authenticity (C2PA), supported by existing metadata standards bodies such as the International Press Telecommunications Council (IPTC), positions Content Credentials to become the digital equivalent of a ‘food label’ for digital files. Compatibility with such standards is poised to become a requisite for all DAM solutions, as evidenced by Fotoware and Aprimo, which already offer this feature, as well as Meta (Facebook and Instagram) and OpenAI (DALL·E 3).
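
As an illustration of what such labeling could look like inside a DAM, below is a minimal Python sketch of a provenance record attached to an asset at ingest. It assumes the IPTC Digital Source Type vocabulary (which includes a ‘trainedAlgorithmicMedia’ term for fully AI-generated media); the class names, fields, and disclosure rule are hypothetical and meant only to show the shape of the data, not any particular vendor’s implementation.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class DigitalSourceType(Enum):
    """A small, illustrative subset of the IPTC Digital Source Type vocabulary."""
    DIGITAL_CAPTURE = "digitalCapture"                      # straight from a camera
    COMPOSITE_WITH_TRAINED_ALGORITHMIC_MEDIA = "compositeWithTrainedAlgorithmicMedia"
    TRAINED_ALGORITHMIC_MEDIA = "trainedAlgorithmicMedia"    # fully AI-generated


@dataclass
class ProvenanceLabel:
    """A minimal, C2PA-style provenance record attached to a DAM asset (hypothetical)."""
    source_type: DigitalSourceType
    generator: Optional[str] = None               # tool or model that produced the file
    credentials_manifest: Optional[bytes] = None  # embedded Content Credentials, if any
    recorded_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

    def requires_ai_disclosure(self) -> bool:
        """True when the asset should surface an 'AI-generated' label to end users."""
        return self.source_type is not DigitalSourceType.DIGITAL_CAPTURE


# Example: labelling a generated image at ingest time
label = ProvenanceLabel(
    source_type=DigitalSourceType.TRAINED_ALGORITHMIC_MEDIA,
    generator="text-to-image model",  # placeholder, not a claim about any specific product
)
assert label.requires_ai_disclosure()
```

In practice, the embedded Content Credentials manifest would be produced and verified by C2PA-aware tooling; the point of the sketch is simply that the label travels with the asset record from the moment of ingest.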

In its repositioning as a business’s ground truth for content governance – the ultimate source of authenticated content – DAM must also ensure that the authenticity its users have worked so hard to establish is preserved beyond its boundaries. Since metadata alone is too brittle, reinforcement via invisible watermarking, image matching, and similarity hashing positions the DAM not only as a repository of trustworthy content but as a reference library for outside users who will increasingly doubt the legitimacy of what they see.
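
To make the ‘similarity hashing’ idea concrete, here is a short Python sketch of a difference hash (dHash), one common perceptual-hashing technique, using the Pillow imaging library. The file names and the distance threshold are illustrative assumptions; a production DAM would typically combine several fingerprinting methods (perceptual hashes, invisible watermarks, embedding-based matching) rather than rely on a single 64-bit hash.

```python
from PIL import Image  # Pillow


def dhash(image: Image.Image, hash_size: int = 8) -> int:
    """Difference hash: encodes horizontal brightness gradients as a 64-bit integer."""
    # Grayscale, then shrink to (hash_size + 1) x hash_size so each row
    # yields hash_size left/right brightness comparisons.
    small = image.convert("L").resize((hash_size + 1, hash_size))
    pixels = list(small.getdata())

    bits = 0
    for row in range(hash_size):
        for col in range(hash_size):
            left = pixels[row * (hash_size + 1) + col]
            right = pixels[row * (hash_size + 1) + col + 1]
            bits = (bits << 1) | (1 if left > right else 0)
    return bits


def hamming_distance(a: int, b: int) -> int:
    """Number of differing bits; a small distance suggests visually similar images."""
    return bin(a ^ b).count("1")


# Example: compare an external copy against a DAM master file (file names are hypothetical)
master = dhash(Image.open("master.jpg"))
candidate = dhash(Image.open("candidate.jpg"))
if hamming_distance(master, candidate) <= 10:  # threshold would be tuned per collection
    print("Likely the same image, or a lightly edited derivative of the DAM master")
```

Because the hash encodes coarse brightness gradients rather than exact pixel values, a lightly edited or recompressed copy tends to land within a small Hamming distance of the DAM master, which is what allows the DAM to serve as the reference library described above.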

DAM systems must pivot to become critical enablers of compliance and ethical stewardship in the face of evolving AI legislation. By focusing on the seamless integration of content labeling, copyright adherence, and authenticity verification, DAM platforms will not only mitigate legal risks but also elevate their role in fostering responsible AI use. This proactive approach will ensure that DAM systems are not merely repositories but active participants in shaping a future where digital content is both trustworthy and transparent. As the landscape of AI regulation unfolds, the success of DAM solutions will be measured by their ability to adapt, innovate, and lead in promoting ethical practices across the digital content ecosystem.


About Paul Melcher

Paul has over 20 years’ experience working in the visual technology sector, and is the Founder and Director of Melcher System LLC.  He also has extensive expertise in generative AI, image recognition and content licensing.  You can discover the latest news, technologies and trends in the visual sector via his online magazine Kaptur.

You can connect with Paul via his LinkedIn profile.
