Debate Explorer

Social media sites should be responsible for controlling disinformation on their platforms

Technology
Speech, Technology, Fake News, Facebook, Twitter, Media

Twitter has recently started applying notices of possible disinformation under tweets on its platform. While Twitter has terms of service governing the kinds of content allowed on the platform, it appears to apply these rules differently depending on how high-profile the account is. For example, Twitter determined that a Donald Trump tweet violated its terms of service as a potential incitement to violence, which would normally result in the termination of his account. Twitter said the tweet and the account would remain because the information was of public interest, but it placed a notice on the offending tweet warning users of its content before they could read it.

Mark Zuckerberg responded in a television interview, saying that social media sites should not be fact-checking content on their platforms and that Facebook will not follow Twitter's lead.

Many feel that social media sites must take action to prevent lies and abuse from spreading on their platforms. Others are nervous about massive corporations deciding what kinds of content and speech are acceptable, especially in the modern online landscape where tech giants have little competition.

Further background:
Mark Zuckerberg responds here
Twitter’s decision to label Trump’s tweets was two years in the making | Washington Post
Twitter censors Trump's Minneapolis tweet for 'glorifying violence' | Fox News
Trump's Tweets Force Twitter Into a High-Wire Act | Wired
The Chaser goes viral with provocative post mocking Zuckerberg’s position on Facebook factchecking | The Guardian
Twitter CEO Jack Dorsey responds here
Leaked posts show Facebook employees asking the company to remove Trump’s threat of violence | The Verge

Economic Growth

Pro

AI technologies will dramatically increase productivity across sectors, creating new economic value and opportunities that outweigh job displacement.

Key Evidence

PwC research estimates AI could add $15.7 trillion to global GDP by 2030

42 contributors

Existential Risk

Con

Advanced AI systems could pose existential risks to humanity if they develop goals misaligned with human values or escape human control.

Key Evidence

Open letter signed by 1,000+ AI researchers calling for a pause on advanced AI development

36 contributors

Bias & Fairness

Neutral

AI systems can reflect and amplify existing societal biases, leading to unfair outcomes in areas like hiring, lending, criminal justice, and healthcare.

Key Evidence

Multiple studies show facial recognition systems have higher error rates for women and people with darker skin tones

51 contributors

News

EU Passes Comprehensive AI Act

The European Union has approved landmark legislation to regulate artificial intelligence, establishing the world's first comprehensive legal framework for AI.

June 10, 2025

UN Establishes Global AI Ethics Committee

The United Nations has formed a specialized committee to develop international standards for ethical AI development and deployment.

May 28, 2025

Major Tech Companies Sign AI Safety Pledge

Leading technology firms have jointly committed to a set of principles for responsible AI development, including safety testing and transparency measures.

May 15, 2025

Worldview

2.4M Supporters
196 Countries

Support Distribution

North America: 42%
Europe: 28%
Asia Pacific: 18%
Other Regions: 12%

Weekly Growth: +12.4%
Active Debates: 1,284
Top Supporters
Dr. James Davis, AI Researcher • Stanford
Dr. Michael Kim, Ethics Professor • MIT
Emma Liu, Economist • World Bank
Public Opinion
Key Concerns