The 'de-platforming' of Donald Trump and the future of the Internet
This is no time for beer-popping one-liners about free speech; this is deadly serious
"De-platforming."
That's the word of the week in tech-land, and it's about time. After the storming of the U.S. Capitol by a criminal mob on Wednesday, a host of companies have cut off President Trump's digital air supply. His tweets and Facebook posts fell first; next came second-order effects, like payment processors cutting off the money supply. Finally, upstart right-wing social media platform Parler was removed from app stores and then denied web hosting by Amazon.
Let the grand argument of our time begin.
The story of Donald Trump's de-platforming involves a dramatic collision of Net Neutrality, Section 230 "immunity," free speech, surveillance capitalism, and even privacy. I think it's the issue of our time. It deserves much more than a story; it deserves an entire school of thought. But I'll try to nudge the ball forward here.
This is not a political column, though I realize that everything is political right now. In this piece, I will not examine the merits of banning Trump from Twitter; you can read about that elsewhere.
The de-platforming of Trump is an important moment in the history of the Internet, and it should be examined as such. Yes, it's very fair to ask: if this can happen to Trump, if it can happen to Parler, can't it happen to anyone?
But let's not examine it the way teen-agers do in their first year of college. Let's not scream "free speech" or "slippery slope" at each other and then pop open a can of beer, as if that's some kind of mic drop. Kids do that. Adults live on planet Earth, where decisions are complex, and evolve, and have real-life consequences.
I’ll start here. You can sell guns and beer in most places in America. You can't sell a gun to someone who walks into your store screaming, "I'm going to kill someone," and you can't sell beer to someone who's obviously drunk and getting into the driver's seat. You can't keep selling a car -- or even a toaster! -- that you know has a defect which causes fires. Technology companies are under no obligation to allow users to abuse others with the tools they build. Cutting them off is not censorship. In fact, forcing these firms to allow such abuse because someone belongs to a political party IS censorship, the very thing that happens in places like China. Tech firms are, and should be, free to make their own decisions about how their tools are used. (With...some exceptions! This is the real world.)
I'll admit freely: This analogy is flawed. When it comes to technology law -- and even just technology choices -- everyone reaches for analogies, because we want so much for there to be a precedent for our decisions. That takes the pressure off us. We want to say, “This isn't my logic. It's Thomas Jefferson's logic! He's the reason Facebook must permit lies about the election to be published in my news feed!” Sorry, life isn't like that. We're adults. We have to make these choices. They will be hard. They're going to involve a version 1, and a version 2, and a version 3, and so on. The technology landscape is full of unintended, unexpected consequences, and we must rise to that challenge. We can't cede our agency in an effort to find silver-bullet solutions from the past. We have to make them up as we go along.
That's why the best thing I've read in the past few days about the Trump de-platforming was this piece by Techdirt's Mike Masnick. He raises an issue that many tech folks want to avoid: Twitter and Facebook have tried really hard to explain their choices by shoehorning them into standing policy violations. That has left everyone unhappy. (Why didn't they do this months ago? Wait, what policy was broken?) Masnick gets to the heart of the matter quickly:
So, Oremus is mostly correct that they're making the rules up as they go along, but the problem with this framing is that it assumes that there are some magical rules you can put in place and then objectively apply them always. That's never ever been the case. The problem with so much of the content moderation debate is that all sides assume these things. They assume that it's easy to set up rules and easy to enforce them. Neither is true. Radiolab did a great episode a few years ago, detailing the process by which Facebook made and changed its rules. And it highlights some really important things including that almost every case is different, that it's tough to apply rules to every case, and that context is always changing. And that also means the rules must always keep changing.
A few years back, we took a room full of content moderation experts and asked them to make content moderation decisions on eight cases -- none of which I'd argue are anywhere near as difficult as deciding what to do with the President of the United States. And we couldn't get these experts to agree on anything. On every case, we had at least one person choose each of the four options we gave them, and to defend that position. The platforms have rules because it gives them a framework to think about things, and those rules are useful in identifying both principles for moderation and some bright lines.
But every case is different.
For a long time, I have argued that tech firms' main failing is they don't spend anywhere near enough money on safety and security. They have, nearly literally, gotten away with murder for years while the tools they have made cause incalculable harm in the world. Depression, child abuse, illicit drug sales, societal breakdowns, income inequality...tech has driven all these things. The standard "it's not the tech, it's the people" argument is another "pop open a beer" one-liner used by techie libertarians who want to count their money without feeling guilty, but we know it's a dangerous rationalization. Would so many people believe the Earth is flat without YouTube's recommendation engine? No. Would America be so violently divided without social media? No. You built it, you have to fix it.
If a local mall had a problem with teen-age gangs hanging around at night causing mayhem, the mall would hire more security guards. That's the cost of doing business. For years, big tech platforms have tried to get away with "community moderation" -- i.e., they've been cheap. They haven't spent anywhere near enough money to stop the crime committed on their platforms. Why? Because I think it's quite possible the entire idea of Facebook wouldn't exist if it had to be safe for users. Safety doesn't scale. Safety is expensive. It's not sexy to investors.
How did we get here? In part, thanks to that Section 230 you are hearing so much about. You'll hear it explained this way: Section 230 gives tech firms immunity from bad things that happen on their platforms. Suddenly, folks who haven't cared much about it for decades are yelling for its repeal. As many have expressed, it would be better to read up on Section 230 before yelling about it (here's my background piece on it). But in short, repealing it wholesale would indeed threaten the entire functioning of the digital economy. Section 230 was designed to do exactly what I am hinting at here -- to give tech firms the ability to remove illegal or offensive content without assuming business-crushing liability for everything they do. Again, it's a law written in an age before Amazon existed, so it sure could use modernization by adults. But pointing to it as some untouchable foundational document, or throwing the baby out with the bathwater, is the behavior of children, not adults. We're going to have to make some things up as we go along.
Here's the thing about "free speech" on platforms like Twitter and Facebook. As a private company, Twitter can do whatever it wants to do with the tool it makes. Forcing it to carry this or that post is a terrible idea. President Trump can stand in front of a TV camera, or buy his own TV station (as seems likely), and say whatever he wants. Suspending his account is not censorship. As I explain in my Section 230 post, however, even that line of logic is incomplete. Social media firms use public resources, and some are so dominant that they are akin to a public square. We just might need a new rule for this problem! I suspect the right rule isn't telling Twitter what to post, but perhaps making sure there is more diversity in the technology tools available to speakers.
But here's the thing I find myself saying most often right now: The First Amendment guarantees your right to say (most) things; it doesn't guarantee your right to algorithmic juice. I believe this is the main point of confusion we face right now, and one that really sends a lot of First Amendment thinking for a loop. Information finds you now. That information has been hand-picked to titillate you like crack cocaine, or like sex. The more extreme things people say, the more likely they are to land on your Facebook wall, or in your Twitter feed, or wherever you live. It's one thing to give Harold the freedom to yell to his buddies at the bar about children locked in a basement by Democrats at a pizza place in Washington, D.C. It's quite another to give him priority access to millions of people using a tool designed to make them feel like they are viewing porn. That's what some people are calling free speech right now. James Madison didn't include a guaranteed right to "virality" in the Bill of Rights. No one guaranteed that Thomas Paine's pamphlets were to be shoved under everyone's doors, let alone right in front of their eyeballs at the very moment they were most likely to take up arms. We're going to need new ways to think about this. In the algorithmic world, the beer-popping line, "The solution to hate speech is more speech," just doesn't cut it.
I'm less interested in the Trump ban than I am in the ban of services like Parler. Trump will have no trouble speaking; but what of everyday conservatives who are now wondering if tech is out to get them? If you are liberal: Imagine if some future government decides Twitter is a den of illegal activity and shuts it down. That's an uncomfortable intellectual exercise, and one we shouldn't dismiss out of hand.
Parler is a scary place. Before anyone has anything to say about its right to exist, I think you really should spend some time there, or at least read my story about the lawsuit filed by Christopher Krebs. Ask yourself this question: What percentage of a platform's content needs to be death threats, or about organizing violence, before you'll stop defending its right to exist? Let's say we heard about a tool named "Anteroom" which ISIS cells used to radicalize young Muslims, spew horrible things about Americans, teach bomb-making, and organize efforts to…storm the Capitol building in D.C. Would you really be so upset if Apple decided not to host Anteroom on its App Store?
So, what do we do with Parler? Despite all this, I'm still uncomfortable with removing its access to network resources the way Amazon has. I think that feels more like smashing a printing press than forcing books into people's living rooms. Amazon Web Services is much more like a public utility than Twitter is. The standard for removal there should be much higher, I think. And if it makes you uncomfortable that a private company made that decision, rather than some public body that is responsible to voting citizens, it should. At the same time, if you can think of no occasion for banning a service like Parler, then you just aren't thinking.
These are complex issues, and we will need our best, most creative legal minds to ponder them in the years to come. I'd like to hear your framework. But don't hit me with beer-popping lines or stuff you remember from Philosophy 101. This is no time for academic smugness. We have real problems down here on planet Earth. Here's one thoughtful framework, from Matt Stoller's BIG newsletter about tech monopolies:
Should conservative social network Parler be removed from AWS and the Google and Apple app stores? This is an interesting question, because Parler is where some of the organizing for the Capitol Hill riot took place. Amazon just removed Parler from its cloud infrastructure, and Google and Apple removed Parler from their app stores. Removing the product may seem necessary to save lives, but having these tech oligarchs do it seems like a dangerous overreach of private authority. So what to do?
My view is that what Parler is doing should be illegal, because it should be responsible on product liability terms for the known outcomes of its product, aka violence. This is exactly what I wrote when I discussed the problem of platforms like Grindr and Facebook fostering harm to users. But what Parler is doing is *not* illegal, because Section 230 means it has no obligation for what its product does. So we’re dealing with a legal product and there’s no legitimate grounds to remove it from key public infrastructure. Similarly, what these platforms did in removing Parler should be illegal, because they should have a public obligation to carry all customers engaging in legal activity on equal terms. But it’s not illegal, because there is no such obligation. These are private entities operating public rights of way, but they are not regulated as such.
In other words, we have what should be an illegal product barred in a way that should also be illegal, a sort of ‘two wrongs make a right’ situation. I say ‘sort of’ because letting this situation fester without righting our public policy will lead to authoritarianism and censorship. What we should have is the legal obligation for these platforms to carry all legal content, and a legal framework that makes business models fostering violence illegal. That way, we can make public choices about public problems, and political violence organized on public rights-of-way certainly is a public problem.