What governing bodies say

Abroad, it’s a different story. The EU AI Act, which entered into force in August 2024, aims to stop the spread of misinformation and calls on creators of generative AI models to introduce disclosures.

The act says: “Deployers of generative AI systems that generate or manipulate image, audio or video content constituting deep fakes must visibly disclose that the content has been artificially generated or manipulated. Deployers of an AI system that generates or manipulates text published with the purpose of informing the public on matters of public interest must also disclose that the text has been artificially generated or manipulated.”

However, the AI Act stipulates that content which has been reviewed by humans, and for which a human holds editorial responsibility, does not need to be disclosed. The act also categorizes AI content by risk level, focusing most heavily on “unacceptable” and “high-risk” scenarios (e.g., exploitation, negative impacts on people’s safety and privacy, and the policing of individuals).

While this act could be a step toward universal AI disclosure standards, it still leaves a lot of room for interpretation and needs further clarification—especially for marketers and brands.

Where legislation falls short, consumer expectations (and concerns) can guide brand content creation. For example, the Q2 2024 Sprout Pulse Survey found that 80% of consumers agree that AI-generated content will lead to misinformation on social, and 46% say they are less likely to buy from a brand that posts AI-generated content. These two stats could be correlated, according to Sarney.