Social media giants crossed a threshold in banning US President Donald Trump and an array of his supporters — and they now face a quandary over how to remain politically neutral while promoting democracy and free speech.
After the unprecedented violence at the US Capitol, the seat of Congress, Trump was banned for inciting the rioters on platforms including Facebook, Twitter, Google-owned YouTube and Snapchat. The alternative network Parler, which drew many Trump backers, was forced offline by Amazon’s web services unit.
The bans broke new ground for internet firms but also shattered the longstanding notion that they are simply neutral platforms open for all to express any views.
“Banning Donald Trump was a crossing of a Rubicon for social media firms, and they can’t go back,” said Samuel Woolley, a professor and researcher with the University of Texas Center for Media Engagement.
“Up to now their biggest goal was to promote free speech, but recent events have shown they can no longer do this.”
Twitter chief Jack Dorsey last week defended the Trump ban while acknowledging it stemmed from “a failure of ours ultimately to promote healthy conversation” and that it “sets a precedent I feel is dangerous: the power an individual or corporation has over a part of the global public conversation.”
Javier Pallero, policy director for the digital rights nonprofit group Access Now, said the banning of Trump could be just the beginning for social media firms grappling with dangerous content, including from political leaders.
“The companies have reacted to calls for violence by the president in the United States, and that’s a good call. But they have failed in other areas like Myanmar,” where social media has been used to carry out persecution, Pallero said.
Human rights first?
Social platforms in some parts of the world are being forced to choose between following national laws and prioritizing human rights principles, Pallero noted.
“We ask platforms to put human rights first. Sometimes they do, but all decisions on content governance are always a game of frustration,” he said.
In authoritarian regimes with restrictive social media laws, Pallero said the platforms “should stay and give a voice to democracy activists… however if they have to identify dissidents or censor them, they probably should leave, but not without a fight.”
Woolley said social networks that banned Trump are likely to face pressure to take action against similarly styled leaders who abuse the platforms.
“They can’t simply ban a politician in the US without taking similar action around the world,” he said. “It would be seen as prioritizing the United States in a way that would be seen as unfair.”
Platform power
Trump’s ban was a major step for Twitter, which the president used for policy announcements and to connect with his more than 80 million followers. Until recently, platforms had given world leaders leeway when enforcing rules, reasoning that their comments are in the public interest even when they are inflammatory.
The de-platforming of Trump underscored the immense power of a handful of social networks over information flows, noted Bret Schafer, a researcher with the nonprofit Alliance for Securing Democracy.
“One of the things that compelled them to act was that we saw the president’s rhetoric manifest itself into real-world violence,” Schafer said. “That may be where they draw the line.”
But he noted inconsistencies in enforcing these policies in other parts of the world, including in authoritarian regimes.
“There is a legitimate argument on whether leaders in some of these countries should be allowed to have an account when their citizens do not, and can’t take part in the discussion,” Schafer said.
Regulatory conundrum
Internet firms are likely to face heightened calls for regulation following the recent turmoil.
Karen Kornbluh, who heads the digital innovation and democracy initiative at the German Marshall Fund, said any regulatory tweaks should be modest to avoid having the government regulate online speech.
Kornbluh said platforms should have a transparent “code of conduct” that limits disinformation and incitements to violence and should be held accountable if they fail to live up to those terms.
“I don’t think we want to regulate the internet,” she said. “We want to apply offline protection for individual rights.”
Platforms could also use “circuit breakers” to prevent inflammatory content from going viral, modeled on those used on Wall Street to halt trading during extreme swings.
“The code should focus not on content but practices,” she said. “You don’t want the government deciding on content.”
Schafer cited a need for “some algorithmic oversight” of platforms to ensure against bias and amplification of inflammatory content.
He said Section 230, the controversial law that shields platforms from liability for content posted by their users, remains important in enabling platforms to remove inappropriate content, but that it remains challenging “to moderate in a way that protects free speech and civil liberties.”
Daniel Kreiss, a professor and researcher with the University of North Carolina’s Center for Information, Technology, and Public Life, said the major platforms “are going to have to rebuild their policies from the ground up” as a result of the crisis.
“This situation absolutely reveals the power of platform companies to make decisions on who gets heard in the public sphere,” Kreiss said.
“The power they have is not simply free speech, it’s free amplification. But because they are private companies, under the law we give them a fair amount of latitude to set their own policies.”