
New AI Policies Shift Responsibility to Users on Social Media

Starting January 1, 2025, major social media platforms such as Instagram and Facebook will update their terms of service to incorporate generative artificial intelligence (AI) tools. LinkedIn already made similar updates on November 20, 2024, and others, including X, are expected to follow suit. The trend points to a significant shift: responsibility for inaccuracies in AI-generated content is being passed on to users.

Rather than leaving users to rely on third-party AI tools like ChatGPT or Google Gemini, these social networks are building AI systems directly into their platforms. In doing so, they emphasize a critical point in their updated policies: any content users share, including content produced by the platform’s own AI, remains the users’ responsibility, even when the AI produces unreliable or misleading output.

Meta’s AI terms state, for instance: ‘The accuracy of any content, including outputs, cannot be guaranteed and outputs may be disturbing or upsetting.’ LinkedIn echoes this sentiment, warning that content generated by its AI features may be ‘inaccurate, incomplete, delayed, misleading or not suitable for your purposes.’

Shifting Responsibility

According to Sara Degli-Esposti, a researcher at Spain’s National Research Council (CSIC), the new policies send users an unsettling message: ‘The user is responsible for ensuring that the content complies with community guidelines, even when it may be generated by a potentially flawed system.’ This points to a concerning trend in which social media companies step back from responsibility for misleading information generated by their own AI.

The risks of generative AI, such as misinformation and poor content quality, raise ethical questions about whether everyday users are ready to navigate this landscape. Javier Borràs, a researcher at CIDOB, points out that ‘the current lack of education and culture around generative AI means users may not understand the inherent risks involved.’ While presenting users with new tools, social media platforms need to ensure that users are equipped to use them judiciously.

As user education becomes paramount, some experts suggest that platforms should be clearer about the nature of AI-generated content. ‘Users should be reminded consistently that results may be inaccurate and require verification,’ Borràs argues, emphasizing the need for a more informed user base.

This evolving scenario reflects a broader trend in technology: as tools become more sophisticated, individual accountability grows. In a digital space where misinformation can spread rapidly, the user’s role as a gatekeeper of content integrity becomes increasingly vital.