Why social media can't keep moderating content in the shadows


In 2016, you could count on one hand the kinds of interventions tech companies were willing to use to rid their platforms of misinformation, hate speech, and harassment. Over the years, crude mechanisms like content blocking and account banning have evolved into a more complex set of tools, including quarantining topics, removing posts from search, barring recommendations, and downranking posts in feeds.

Yet even with more options available, misinformation remains a serious problem. There were plenty of misinformation reports on Election Day: for example, my colleague Emily Dreyfuss found that when Twitter tried to deal with content using the hashtag #BidenCrimeFamily, with tactics including "de-indexing" it by blocking search results, users, including Donald Trump, adapted by using variants of the same tag. But we still don't know much about how Twitter decided to do any of that in the first place, or how it weighs and learns from the ways users react to moderation.


As social media companies suspended accounts and labeled and deleted posts, many researchers, civil society organizations, and journalists scrambled to understand their decisions. The lack of transparency about those decisions and processes means that, for many, this year's election results will carry an asterisk, just as they did in 2016.

What actions have these companies taken? How do their moderation teams work? What is the basis for their decisions? In recent years, platform companies have assembled large task forces dedicated to removing election misinformation and labeling early declarations of victory. Sarah Roberts, a professor at UCLA, has written about the invisible labor of platform content moderators as a shadow industry: a labyrinth of contractors and complex rules that the public knows little about. Why don't we know more?

In the post-election fog, social media has become the terrain for a low-grade war on our cognitive security, with misinformation campaigns and conspiracy theories proliferating. When the broadcast news business served as the information gatekeeper, it came with public interest obligations such as sharing timely, local, and relevant information. Social media companies have inherited a similar position in society, but they have not taken on the same responsibilities. This situation has loaded the cannons for claims of bias and censorship in how they moderate election-related content.

Bearing the costs

In October, I joined a panel of experts on misinformation, conspiracy, and infodemics to testify before the House Permanent Select Committee on Intelligence. I was flanked by Cindy Otis, a former CIA analyst; Nina Jankowicz, a disinformation fellow at the Wilson Center; and Melanie Smith, head of analysis at Graphika.

While I was preparing my testimony, Facebook was struggling to cope with QAnon, a militarized social movement monitored by its dangerous-organizations department and condemned by the House in a bipartisan bill. My team has been studying QAnon for years. This conspiracy theory has become a favorite topic among misinformation researchers because it has remained extensible, adaptable, and resilient in the face of platform companies' efforts to quarantine and remove it.

QAnon has also become a topic for Congress because it is no longer just about people participating in a strange online game: it has landed like a tornado in the lives of politicians, who are now the targets of harassment campaigns that veer from conspiracists' fever dreams into violence. Moreover, this has happened quickly and in new ways. It usually takes years for a conspiracy theory to spread through society and win promotion from prominent political, media, and religious figures. Social media has accelerated that process through ever-evolving forms of content delivery. QAnon followers don't just comment on breaking news; they bend it to their purposes.


Steven Gregory