Technology

Civil advocacy groups press Big Tech on AI-fueled misinformation

AP Photo/File

FILE – This combination of 2017-2022 photos shows the logos of Facebook, YouTube, TikTok and Snapchat on mobile devices. A trade group representing TikTok, Snapchat, Meta and other major tech companies sued Ohio on Friday, Jan. 5, 2024 over a pending law that requires children to get parental consent to use social media apps. (AP…

More than 200 civil advocacy groups urged leading technology companies to increase efforts to combat misinformation fueled by artificial intelligence (AI) ahead of elections across the globe, in a letter published Tuesday.

The groups wrote to top executives at popular technology companies, including Google, Meta, Reddit, TikTok, YouTube and X, requesting “swift action” to reinforce their respective platforms’ safety measures as global extremism and threats to democracy continue.

The civil advocacy organizations claimed tech companies have retreated from necessary protections such as “content moderation, civil-society oversight tools and trust and safety,” which has made the platforms “less prepared to protect users and democracy in 2024.”

“In 2020, Black people and other people of color, women, and non-English speakers were – and continue to be – disproportionately targeted with online election lies,” said the letter, which was first reported by The Washington Post.

The groups asked the tech companies to reinstate election integrity policies, increase staffing for enforcement teams in multiple languages and enforce rules against election lies and hate in advertising.

They asked that humans review political advertisements before they’re published “with enhanced scrutiny and labeling of ads containing generative AI.”

The civil advocacy organizations asked the tech companies to disclose AI-generated political content and prohibit the use of deepfakes in political ads. They also asked the companies to change their algorithms to promote “factual election content” and ensure AI products have monitoring and enforcement teams in place.

The groups requested that influencers, public figures and political candidates be held to the same moderation and enforcement standards as ordinary users.

The civil advocacy organizations also asked the companies to improve transparency by regularly publishing timely reports on AI use and content moderation efforts.

They asked the tech CEOs to respond in writing with their intentions by April 22.


Tags

2024 elections


AI


big tech


Democracy


Google


Meta


misinformation


Copyright 2024 Nexstar Media Inc. All rights reserved. This material may not be published, broadcast, rewritten, or redistributed.