LinkedIn introduces labels for AI-generated images

LinkedIn is now labelling AI-generated images to improve transparency and curb misinformation. The move is part of a broader trend, following a similar recent step by TikTok, and builds on standards from the Coalition for Content Provenance and Authenticity (C2PA).

Why labels for AI-generated media matter

The labels, visible in the top corner of an image, let users know when it was artificially created. This helps prevent plagiarism and makes viewers aware that AI was used to produce the image. While some AI content is harmless or humorous, other types can be harmful, such as fake images of military actions or distorted portrayals of conflicts.

Ideal and current solutions

The ideal solution would be automated systems that label AI content the moment it is uploaded, before it can spread. For now, however, the focus is on creating consistent labelling practices across platforms.

The role of C2PA

The C2PA is working to establish industry-wide standards for identifying and labelling AI-generated media. Its goal is to help platforms such as LinkedIn detect deepfakes more easily and inform users about a piece of content's origins.
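
To make this concrete, here is a minimal Python sketch of how a platform might check whether an uploaded JPEG carries a C2PA manifest at all. Per the C2PA specification, the manifest is embedded in JPEG files inside APP11 marker segments as JUMBF boxes labelled "c2pa". The sketch only tests for that marker's presence; it does not validate the manifest's cryptographic signature, which a real implementation would do with the C2PA SDKs or the c2patool CLI.

```python
import struct

def has_c2pa_manifest(path: str) -> bool:
    """Heuristically check whether a JPEG carries a C2PA manifest.

    C2PA (Content Credentials) metadata lives in APP11 marker
    segments (0xFFEB) as JUMBF boxes whose manifest store is
    labelled "c2pa". This tests for that marker's presence only;
    it does NOT validate the manifest's signature.
    """
    with open(path, "rb") as f:
        data = f.read()
    if not data.startswith(b"\xff\xd8"):          # SOI marker: is this a JPEG?
        return False
    i = 2
    while i + 4 <= len(data):
        if data[i] != 0xFF:                       # lost sync with the marker stream
            break
        marker = data[i + 1]
        if marker == 0xDA:                        # SOS: compressed image data follows
            break
        length = struct.unpack(">H", data[i + 2:i + 4])[0]
        payload = data[i + 4:i + 2 + length]
        if marker == 0xEB and b"c2pa" in payload:  # APP11 segment with C2PA JUMBF
            return True
        i += 2 + length                           # jump to the next marker
    return False

if __name__ == "__main__":
    print(has_c2pa_manifest("example.jpg"))       # hypothetical input file
```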

Challenges and potential solutions

One of the main challenges is the speed at which deepfakes can be created and shared. Often, by the time a deepfake is identified as fake, it has already reached a large audience. And as deepfakes become more sophisticated, they also become harder to detect.

To address these issues, automated detection tools are being developed to identify AI-generated images.
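
Detection is only half of the pipeline; a platform also has to decide when to show a badge. The following sketch is purely illustrative and none of these helpers are LinkedIn's actual systems: it prefers declared provenance from a C2PA manifest and falls back to an automated detector with a placeholder threshold.

```python
from dataclasses import dataclass

@dataclass
class LabelDecision:
    label: bool   # show the "AI-generated" badge?
    reason: str   # audit trail for the decision

def manifest_declares_ai(path: str) -> bool:
    # Placeholder: a real check would parse the C2PA manifest and look for
    # a generative-AI assertion (digitalSourceType "trainedAlgorithmicMedia"
    # in the vocabulary C2PA borrows from IPTC).
    return False

def detector_score(path: str) -> float:
    # Placeholder for an automated AI-image classifier; returns a fake score.
    return 0.0

def decide_label(path: str) -> LabelDecision:
    """Decide at upload time, so the badge travels with the image."""
    if manifest_declares_ai(path):
        return LabelDecision(True, "C2PA manifest declares generative-AI use")
    if detector_score(path) >= 0.9:               # illustrative threshold
        return LabelDecision(True, "automated detector flagged the image")
    return LabelDecision(False, "no provenance signal, detector below threshold")

print(decide_label("upload.jpg"))                 # hypothetical upload
```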

LinkedIn’s initiative to label AI-generated images is a significant step towards improving transparency and tackling the deepfake problem.

Sources: LinkedIn & Social Media Today.
