This is not what is happening on social networks today.
Example: the "lab leak" theory (a controversial theory that is now probably true; I personally believe so) was "censored" at a certain time in the history of the pandemic because the "debate" included massive amounts of horrible behavior, spam-level posting, and abuse that spilled over into the real world - e.g. harassment of public officials and doctors, racially-motivated crimes, etc.
Why is this link not being censored now? Hypocrisy? Because the facts changed?
The Virus Hunting Nonprofit at the Center of the Lab Leak Controversy
It was "censored" not because it was a wrong idea, but because ideas really can - at certain times and places - become lightning rods for actual, physical, kinetic mob behavior.
That is just an unpleasant, inconvenient truth that all of you (regardless of your political leaning) need to accept about speech. Ideas really ARE powerful, and like anything else that is powerful, yes, they can be DANGEROUS.
I'm sorry, it's just true.
It would have been perfectly acceptable if the lab leak theory were being discussed in a rational, evidence-based manner by scientists on Twitter, but that is not what happened.
Replace "lab leak theory" with whatever topic you think has been unfairly censored, and the reason it was censored (or any other action taken against it) is not because of the content of that topic, I ABSOLUTELY ASSURE YOU.
It is because at Certain Times, given Certain Circumstances, humans will Behave Badly when confronted with Certain Ideas, and if you are The Main Platform Where That Idea is Being Discussed, you cannot do NOTHING, because otherwise humans will continue behaving badly.
Here is what I think about Twitter:
I think the last few years of @Jack's administration have been the best years of Twitter's history.
I think Jack really matured as an exec; his prior experience with Twitter, then his success with Square (i.e. doing it wrong, then doing it right), really raised him to a world-class CEO level, and Twitter finally got to be "pretty good."
And "pretty good" is about as good as any social network can possibly be, in my opinion.
(@Jack, if you are reading this, my hat's off to you. Saying this as one of the few people who have ever run a social platform: you showed the world how it should've been done)
There is a reason why Jack has a crazy meditation routine and eats one meal a day and goes on spiritual retreats. Because it takes an INHUMAN level of mentality to be able to run something like this.
Because the problems are NOT about politics, or topics of discussion. They are about all the ways that humans misbehave when there are no immediately visible consequences, when talking to (essentially) strangers, and the endless ingenuity they display trying to get around rules.
These last few years, @Jack did a really good job.
And whoever the midwits were who didn't think so have kicked him out, and now Elon thinks he's going to come in and fix some problems.
Elon is not going to fix some problems. I am absolutely sure of this. He has no idea what he's in for.
(He might hire back Jack, which might be ok, but I don't know if Jack wants the job. Who knows. All the tech titans are buddies, kind of)
Elon is going to try like heck to "fix" the problems he sees. Each problem he "fixes" will just cause 3 more problems.
And the worst part, the part that is going to hurt ALL OF HUMANITY, is that this will distract from his mission at SpaceX and Tesla, because it's not just going to suck up his time and attention, IT WILL DAMAGE HIS PSYCHE.
I mean, it's not like he isn't already an emotionally damaged guy. (Sorry Elon, it's pretty obvious) But he has overcome a lot. And he does not need more trauma from running Twitter.
And I know I'm not just projecting my own traumas from the time of running Reddit, because:
Mark Zuckerberg talks about e-foiling in the mornings to avoid having to think about bad news coming in that's like "being punched in the face."
Ellen Pao was horrifically scarred by her run as Reddit CEO and the active harassment, far beyond merely adjudicating community misbehavior.
Jack has his meditation retreats and unusual diets and spiritual journeys - he's an odd guy, yeah - but I'm pretty sure some of that is so he can cope with All You Fucking Assholes.
Never heard much from Dick Costolo, but I haven't seen him do much stand-up improv since he left Twitter, have you? Dick might still be recovering.
It's not a fun job, and it's not how anyone on the outside imagines. Elon is a very public personality, and he will be faulted by ALL SIDES any time Twitter Does Anything to Solve A Problem, even if he isn't the CEO.
"Why is chairman of the board @elonmusk standing by while @[newtwitterceo] is doing X, which is wrecking Y?"
"@elonmusk, how can you allow X horrible thing to happen? I thought you were against censorship!"
So: my take is this:
@elonmusk, I'm all with you on the Values Of The Old Internet.
This is not The Old Internet. That is gone. It is sad. It's not because the platforms killed it.
It is because we brought all of our old horrible collective dysfunctions onto the internet, and the internet is very fast and everyone can say anything to anyone, and the place where that happens the most is on the social platforms.
(It doesn't happen very often on e.g. Amazon, except when it does, and of course that's when Amazon Censors You!)
It is hard. It is VERY hard. Like eating glass, as Elon would put it.
But it is not as hard as running a social network. And if Elon knows what's good for him AND HUMANITY, he won't do it - he will stick with the Real Atoms, which is what we really need.
Addenda: a few people have interpreted this thread as meaning that I support or that it was a justification for censorship.
(That is a reasonable misinterpretation) but it is not true.
I am very much against censorship. I am, for example, against the censorship of every topic that the social networks blocked, especially during the pandemic. I have personally been harmed by this.
However, I also understand many non-obvious things about the complex dynamics that arise in large social network platforms, and I will tell you this:
Censorship is inevitable on large social network platforms. If you run one of sufficient size, you will be FORCED to censor things. Not by governments, or even by "users," but by the emergent dynamics of the social network itself.
Someone also said something like, "it's unacceptable that anyone be considered the omniscient arbiter of what's true or not" (sorry if I'm misquoting you; there's a lot of replies)
I also agree with that. It is impossible for anyone to do, and also terrible.
Yet, the structure and dynamics of running a large social network will FORCE you to do it.
IIRC, almost every large social platform started out wanting to uphold free speech. They all buckle.
And it's not because certain ideas are good or bad, or true or false. It has to do purely with operational issues that arise with humans that disagree in large numbers on digital platforms.
The social platforms aren't censoring you (or some idea you like) because they disagree with you. They are censoring because they are large social platforms, and ideas are POWERFUL and DANGEROUS.
(That is the whole point. Ideas wouldn't be worth much if they weren't dangerous or powerful. But you can't always control what people are going to do with powerful things)
What they censor has little to do with what is true or false. It has a little bit to do with whatever the current politics are, but not in the way you probably expect.
Let me be clear: if you run a large social network, you will be forced by inexorable circumstance to censor certain things, you will be forced to "arbitrate" on topics you have an (inevitably) limited understanding of, and it will all be really really shitty.
(The alternative is just collapse of the platform, so I guess you do always have a choice - but then you're not a social platform anymore)
The process through which all of that will happen is painful, which is why I don't think Elon should do it. It is not a good use of his time, and I think his time is uniquely valuable and limited.
Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.
Spam actually passes the test of "allow any legal speech" with flying colors. Hell, the US Postal Service delivers spam to your mailbox.
When 1A discussions talk about free speech on private platforms mirroring free speech laws, the exceptions cited are typically "fire in a crowded theater" or maybe "threatening imminent bodily harm."
Spam is nothing close to either of those, yet everyone agrees: yes, it's okay to moderate (censor) spam.
Why is this? Because it has no value? Because it's sometimes false? Certainly it's not causing offline harm.
No, no, and no.
No one argues that speech must have value to be allowed (c.f. shitposting). And it's not clear that content should be banned for being untrue (esp since adjudicating truth is likely intractable). So what gives? Why are we banning spam?
Here's the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:
It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.
(And successful on a social platform usually means a lucrative ads program, which is ironically one of the things motivating spam in the first place)
Theory: all user-generated content sites inevitably have ads on them.
(But that's a digression)
Not only that, but you can usually moderate (identify and ban) spam without understanding the language.
Spam is typically easily identified due to the repetitious nature of the posting frequency, and simplistic nature of the content (low symbol pattern complexity).
Machine learning algorithms are able to accurately identify spam, and it's not because they are able to tell it's about Viagra or mortgage refinancing, it's because spam has unique posting behavior and patterns in the content.
Moreover, AI is able to identify spam about things it hasn't seen before.
This is unlike moderation of other content (e.g. political), where moderators aren't usually able to tell that a "new topic" is going to end up being troublesome and eventually prompt moderation.
But spam about an all-new low-quality scammy product can be picked up by an AI recognizing patterns even though the AI doesn't comprehend what's being said.
It just knows that a message being broadcast with [THIS SET OF BEHAVIOR PATTERNS] is something users don't want.
Spam filters (whether based on keywords, frequency of posts, or content-agnostic-pattern-matching) are just a tool that a social media platform owner uses to improve the signal-to-noise ratio of content on their platform.
That's what you're doing when you ban spam.
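To make the "you don't even need to read the language" point concrete, here's a toy sketch of what a purely behavior-based spam score could look like. To be clear: the thresholds, weights, and field names are all made up for illustration - real platforms use learned models over much richer behavioral signals - but notice that nothing in it looks at what the posts mean.

```python
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    author_id: str
    text: str
    timestamp: float  # seconds since epoch

def spam_score(posts: list[Post]) -> float:
    """Score one author's recent posts using behavior only.

    Nothing here reads the *meaning* of the text - just how often the
    author posts, how repetitive the posts are, and how simple they are.
    """
    if len(posts) < 2:
        return 0.0

    # Posting frequency: posts per minute over the observed window.
    span_minutes = (max(p.timestamp for p in posts) - min(p.timestamp for p in posts)) / 60.0
    rate = len(posts) / max(span_minutes, 1.0)

    # Repetitiveness: share of posts that are exact duplicates of another.
    unique = len(Counter(p.text.strip().lower() for p in posts))
    duplicate_ratio = 1.0 - unique / len(posts)

    # "Low symbol pattern complexity": low character diversity across all the text.
    all_text = "".join(p.text for p in posts)
    diversity = len(set(all_text)) / max(len(all_text), 1)

    # Made-up weights - a real system would learn these from labeled data.
    return 0.5 * min(rate / 10.0, 1.0) + 0.4 * duplicate_ratio + 0.1 * (1.0 - diversity)
```

Run that over any author's last hour of activity and flag anything above some threshold, and it behaves the same whether the posts are about Viagra, mortgage refinancing, or a product in a language the operator has never seen.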
I have said before that it's not topics that are censored, it is behavior:
Because it is not TOPICS that are censored. It is BEHAVIOR.
(This is why people on the left and people on the right both think they are being targeted)
The problem with social networks is the SOCIAL (people) part. Not the NETWORK (company).
So now we move on to the next classes of content on the ladder:
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Letâs say you are in an online discussion about a non-controversial topic. Usually it goes fine, but sometimes one of the following pathologies erupts:
a) ONE particular user gets tunnel-vision and begins to post the same thing over and over, or brings up his opinion every time someone mentions a peripherally-related topic. He just won't shut up, and his tone ranges from annoying to abrasive to grating.
b) An innocuous topic sparks a flamewar, e.g. someone mentions one of John Mulaney's jokes and it leads to a flamewar about whether it's OK to like him now, how DARE he go and... how can you possibly condone... etc
When I SAY those things, they don't sound too bad. But I want you to imagine the most extreme, pathological cases of similar situations you've been in on a social platform:
a guy who floods every related topic thread with his opinion (objectively not an unreasonable one) over and over, and
a crazy flamewar that erupts over a minor comment that won't end and everyone is hating everyone else and new enemy-ships are formed and some of your best users have quit the platform in DISGUST
You remember that time those things happened on your favorite discussion platform? Yeah. Did your blood pressure go up just a tiny bit thinking about that?
Okay. Just like spam, none of those topics ever comes close to being illegal content.
But, in any outcome-based world, stuff like that makes users unhappy with your platform and less likely to use it, and as the platform owner, if you could magically have your druthers, you'd prefer it if those things didn't happen.
Most users are NOT Eliezer Yudkowsky or Scott Alexander, responding to an inflammatory post by thinking, "Hmm, perhaps I should challenge my priors?" Most people are pretty easy to get really worked up.
Events like that will happen, and they can't be predicted, so the only thing to do when it happens is to either do nothing (and have your platform take a hit or die), or somehow moderate that content.
RIGHT NOW RIGHT HERE I want to correct a misconception rising in your mind:
Just because I am saying you will need to moderate that content does NOT mean I am saying that all methods or any particular method employed by someone is the best or correct or even a good one.
I am NOT, right here, advocating or recommending bans, time-limited bans, or hell-banning, or keyword-blocking, or etc etc whatever specific method. I am JUST saying that as a platform owner you will end up having to moderate that content.
And, there will be NO relation between the topic of the content and whether you moderate it, because it's the specific posting behavior that's a problem. What do I mean by that?
It means people will say, "You banned people in the discussion about liking John Mulaney Leaving His Wife but you didn't ban people in the discussion about Kanye West Being Anti-Semitic ARE YOU RACIST HEY I NOTICE ALL YOUR EXECS ARE WHITE!"
No, it's because for whatever reason people didn't get into a flamewar about Kanye West or there wasn't a Kanye-subtopic-obsessed guy who kept saying the same thing over and over and over again.
In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didn't understand the language they were being spoken in?
Here, there is a parallel to the usage of "Lorem Ipsum" in the world of design.
Briefly, when showing clients examples of a proposed webpage design, professional designers usually replace the text with nonsense text, i.e. "Lorem ipsum dolor etc..." because they don't want the client to be subconsciously influenced by the content.
Like if the content says, "The Steelers are the greatest football team in history" then some clients are going to be subconsciously motivated to like the design more, and some will like it less.
(Everyone from Pittsburgh who is reading this has now been convinced of the veracity and utter reasonableness of my thinking on this topic)
Everyone else... let's take another temporary detour into the world of carbon credits.
Carbon credits are great for offsetting your carbon footprint, but are they really helpful to the true climate problem?
Now back to where we were... when we left off, I was talking about how people are subconsciously influenced by the specific content that's being moderated (and not the behavior of the user) when they judge the moderation decision.
When people look at moderation decisions by a platform, they are not just subconsciously influenced by the nature of the content that was moderated, they are heavily - overwhelmingly - influenced by the nature of the content!
Would you think the moderation decision you have a problem with would be fair if the parties involved were politically reversed?
You'd certainly be a lot more able to clearly examine the merit of the moderation decision if you couldn't understand the language of the content at all, right?
People in China look at America and don't really think the parties are so different from each other, they just think it's a disorganized and chaotic system that resulted in a mob storming the capitol after an election.
Even actors trying to cause BAD things to happen see it that way: Russia is just happy that the US had a mob storming the capitol after an election instead of an orderly transfer of power. They don't care who is "the good guy," they just love our social platforms.
You'll notice that I just slippery-sloped my way from #2 to #3:
2: non-controversial topics
3: controversial topics (politics, religion, culture, etc)
Because #2 topics become #3 topics organically - they get culture-linked to something in #3 or whatever - and then you're confronting #3 topics or proxies for #3 topics.
You know, non-controversial #2 topics like... vaccines and wearing masks.
If you told me 10 years ago that people would be having flamewars and deep identity culture divides as a result of online opinions on WEARING MASKS I would have told you that you were crazy.
That kind of thing cannot be predicted, so there's no way to come up with rules beforehand based on any a-priori thinking.
Or some topics NEED to be discussed in a dispassionate way divorced from politics:
Example: the "lab leak" theory (a controversial theory that is now probably true; I personally believe so) was "censored" at a certain time in the history of the pandemic because the "debate" included massive amounts of horrible behavior, spam-level posting, and abuse that spilled over into the real world - e.g. harassment of public officials and doctors, racially-motivated crimes, etc.
Like the AI, human content moderators cannot predict when a new topic is going to start presenting problems that are sufficiently threatening to the operation of the platform.
The only thing they can do is observe if the resultant user behavior is sufficiently problematic.
But that is not something outside observers see, because platforms don't advertise problematic user behavior - if you knew there was a guy spam-posting an opinion (even one you like) over and over and over, you wouldn't use the platform.
All they see is the sensationalized (mainstream news) headlines saying TWITTER/FACEBOOK bans PROMINENT USER for posts about CONTROVERSIAL TOPIC.
This is because old-media journalists always think it's about content. Newspapers don't really run into the equivalent of "relentless shitposting users" or "flamewars between (who? dueling editorialists?)." It's not part of their institutional understanding of "content."
Content for all media prior to social media is "anything that gets people engaged, ideally really worked up." Why would you EVER want to ban something like that? It could only be for nefarious reasons.
Any time an old-media news outlet publishes something that causes controversy, they LOVE IT. Controversy erupting from old-media news outlets is what modern social media might call "subclinical."
In college, I wrote a sort of crazy satirical weekly column for the school newspaper. The satire was sometimes lost on people, and so my columns resulted in more letters to the editor than any other columnist ever. The paper loved me.
(Or it's possible they loved me because I was the only writer who turned in his writing on time every week)
Anyhow, old media controversy is far, far below the intensity levels of problematic behavior that would e.g. threaten the ongoing functioning or continued consumer consumption of that old-media news outlet.
MAYBE sometimes an advertiser will get mad, but a backroom sales conversation will usually get them back once the whole thing blows over.
So we observe the following events:
1: innocuous discussion
2: something blows up and user(s) begin posting with some disruptive level of frequency and volume
2a: maybe a user does something offline as a direct result of that intensity
3: platform owner moderates the discussion to reduce the intensity
4: media reporting describes the moderation as targeting the content topic discussed
5: platform says, "no, it's because they [did X specific bad behavior] or [broke established rules]"
6: no one believes them
7: media covers the juiciest angle, i.e. "Is PLATFORM biased against TOPIC?"
Because, you see, controversial issues always look like freedom of speech issues.
But no one cries freedom of speech when it's spam, or even non-controversial topics. Yeah, you close down the thread about John Mulaney but everyone understands it's because it was tearing apart the knitting group.
"Becky, you were banned because you wouldn't let up on Karen and even started sending her mean messages to her work email when she blocked you here."
Controversial topics are just overrepresented in instances where people get heated, and when people get heated, they engage in behavior they wouldn't otherwise engage in.
But that distinction is not visible to people who aren't running the platform.
One of the things that hamstrings platforms is that unlike judicial proceedings in the real world, platforms do not or cannot reveal all the facts and evidence to the public for review.
In a real-world trial, the proceedings are generally public. Evidence of the alleged wrongdoing is presented and made part of the public record.
Although someone might be too lazy to look it up, an interested critic will be able to look at the evidence in a case before deciding if they want to (or can credibly, without being debunked) whip up an angry mob against the system itself.
At Reddit, we'd have to issue moderation decisions (e.g. bans) on users and then couldn't really release all the evidence of their wrongdoing, like abusive messages or threats, or spamming with multiple accounts, etc.
The justification is that private messages are private, or sometimes compromising to unrelated parties, but whatever the reasons, that leaves fertile ground for unscrupulous users to claim that they were victimized and politically interested parties to amplify their message that the platform is biased against them.
I had long wondered about a model like "put up or shut up" where any users challenging a moderation decision would have to consent to having ALL the evidence of their behavior made public by the platform, including private logs and DMs.
But there are huge privacy issues and having a framework for full-public-disclosure would be a lot of work. Nevertheless, it would go a long way to making moderation decisions and PROCESSES more transparent and well-understood by the general public.
Social platforms actually have much BETTER and more high-quality evidence of user misbehavior than "the real world." In the real world, facts can be obscured or hidden. On a digital platform, everything you do is logged. The truth is there.
And, not only that, the evidence can even be presented in an anonymized way for impartial evaluation.
Strip out identifiers and political specifics, and like my "in a language you don't understand" example: moderators (and armchair quarterbacks) can look at the behavior and decide if it's worthy of curtailment.
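Here's a toy sketch of what "strip out identifiers and political specifics" might look like in practice. The redaction rules and the "evidence packet" shape are my own assumptions for illustration - they're not how Reddit or anyone else actually does it - but the point is that a reviewer can see the pattern of behavior (volume, targeting, escalation) without knowing who was involved or which side they were on.

```python
import re

# Hypothetical redaction rules: swap identifying strings for neutral tokens.
REDACTIONS = [
    (re.compile(r"@\w+"), "@USER"),                     # handles
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "EMAIL"),  # email addresses
    (re.compile(r"https?://\S+"), "URL"),               # links
]

def redact(text: str) -> str:
    for pattern, token in REDACTIONS:
        text = pattern.sub(token, text)
    return text

def evidence_packet(messages: list[dict]) -> list[dict]:
    """Build an anonymized view of one user's flagged messages.

    Each input dict is assumed to have 'sent_to' and 'text' keys; the
    output keeps only a stable per-target alias and the redacted text,
    which is usually enough to show a harassment pattern without
    exposing the target.
    """
    aliases: dict[str, str] = {}
    packet = []
    for m in messages:
        alias = aliases.setdefault(m["sent_to"], f"PERSON_{len(aliases) + 1}")
        packet.append({"to": alias, "text": redact(m["text"])})
    return packet
```

Even this toy version hints at why it's so much work: every new kind of identifier (phone numbers, workplaces, inside references) is one more thing that can leak.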
Again, this is a lot of work. You can't just dump data, because it's a heightened situation of emotional tension: the first time you try, something extra will get accidentally disclosed, and you'll have ANOTHER situation on your hands. Now you have two problems.
So I don't know if that's workable. But what I do know is, people need to think about content moderation differently, because:
1: It is a signal-to-noise management issue
2: Freedom of speech was NEVER the issue (c.f. spam)
3: Could you still moderate if you can't read the language?
Warning: don't over-rotate on #3 and try to do all your content moderation through AI. Facebook tried that, and ended up with a bizarre inhuman dystopia. (I have a bunch more to say about this if people care)
Having said all that, I wish to offer my comments on the (alleged) "war room team" that Elon has apparently put to work at Twitter:
I don't know the other people super well (tho Sriram is cool; he was briefly an investor in a small venture of mine), but I'm heartened to know that @DavidSacks is involved.
Sacks is a remarkably good operator, possibly one of the best ones in the modern tech era. He was tapped to lead a turnaround at Zenefits when that company got into really hot water:
"Content moderation" is the most visible issue with Twitter (the one talking heads love to obsess over) but it's always been widely known that Twitter suffers from numerous operational problems that many CEOs have tried in vain to fix.
If Twitter were operationally excellent, it'd have a much better chance of tackling its Inherently Very Hard Moderation Problems and maybe emerge with novel solutions that benefit everyone. If anyone can do that, it's Sacks.
Twitter employees are about to either be laid off or will look back on this as the time they did the best work of their lives.
Finally, while I've got your attention, I'd like to tell you my personal secret to a positive Twitter experience - a little-known Twitter add-on called Block Party: @blockpartyapp_
One thing that Twitter did well (that I'm surprised FB hasn't copied) is exposing their API for content filtering.
This allows 3rd-party app developers to create specialized solutions that Twitter can't/won't do.
Block Party's founder Tracy Chou understands the weird and subtle nuances of content filtering on the internet: you don't use a cudgel, you need a scalpel (or three).
Block Party doesn't simply wholesale block things, it filters them in an intelligent way based on criteria you set, and uses data across the system to tune itself.
It doesn't just throw away things it filters for you, it puts them in a box so you can go through it later when you want. Because no automated filter is perfect! (Remember the "bizarre inhuman AI dystopia" from above?)
If you're someone who gets a LOT of hate (or just trash) and you don't really WANT to go through it but need to (just in case there's something valuable), you can also authorize a trusted friend to do it for you.
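I don't know Block Party's actual implementation, so don't read the following as their code - it's just a minimal sketch, with made-up criteria and field names, of the "filter into a box instead of throwing away" idea, which is the part I think more filtering tools should copy.

```python
from dataclasses import dataclass, field

@dataclass
class Mention:
    author_followers: int
    author_has_avatar: bool
    text: str

@dataclass
class FilteredFeed:
    """Mentions that pass the rules go to the feed; everything else goes
    into a holding box you (or a trusted friend) can review later.
    Nothing is deleted, because no automated filter is perfect."""
    feed: list = field(default_factory=list)
    box: list = field(default_factory=list)

    def add(self, mention: Mention) -> None:
        # Illustrative criteria a user might set: hide likely low-signal accounts.
        suspicious = mention.author_followers < 10 or not mention.author_has_avatar
        (self.box if suspicious else self.feed).append(mention)
```

So a mention from a zero-follower, no-avatar account lands in the box, and you skim the box on your own schedule instead of having it shoved in your face in real time.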
Overall, it has smoothly and transparently improved the signal-to-noise ratio of my Twitter experience, especially during a period of cultural upheaval when you'd expect MORE crazy crap...
But no, for me, my Twitter experience is great and clean and informative and clever. I've used Twitter more and more ever since installing it.
Disclosure: as a result of these experiences, I'm now an investor in Block Party.
If you enjoyed this and want more spicy takes on social media (and advice on how to fix the climate, or investment tips), follow me!