Yishan Wong on Content Moderation

Former Reddit CEO Yishan Wong wrote about content moderation on his Twitter. It is a subject I have great interest in, and given the current uncertainty around content persistence on Twitter, I de-microblogged it as best I could and am posting it here in its entirety for others. Original link here.

[Hey Yishan, you used to run Reddit, ]

How do you solve the content moderation problems on Twitter?

(Repeated 5x in two days)

Okay, here are my thoughts:

The first thing most people get wrong is not realizing that moderation is a SIGNAL-TO-NOISE management problem, not a content problem. Our current climate of political polarization makes it easy to think it’s about the content of the speech, or hate speech, or misinformation, or censorship, or etc etc. Then you end up down this rabbit hole of trying to produce some sort of “council of wise elders” who can adjudicate exactly what content to allow and what to ban, like a Solomonic compromise.

No, what’s really going to happen is that everyone on the council of wise elders will get tons of death threats and eventually quit, the people you recruit to replace them will ask the first group why they quit and decline your job offer, and you’ll end up with a council of third-rate minds and politically-motivated hacks, and the situation will be worse than when you started.

No, you can’t solve it by making them anonymous, because then you will be accused of having an unaccountable Star Chamber of secret elites (especially if, I dunno, you just took the company private too). No, no, they have to be public and “accountable!”

The fallacy is that it is very easy to think it’s about WHAT is said, but I’ll show you why it’s not.

First, here is a useful framing to consider in this discussion: imagine that you are doing content moderation for a social network and you CANNOT UNDERSTAND THE LANGUAGE. Pretend it’s an alien language, and all you’re able to detect is meta-data about the content, e.g. frequency and user posting patterns. How would you go about making the social network “good” and ensure positive user benefit?

Well, let me present a “ladder” of things often subject to content moderation:

1: spam

2: non-controversial topics

3: controversial topics (politics, religion, culture, etc)

If you launch a social network, the FIRST set of things you end up needing to moderate is #1: spam. Vigorous debate, even outright flamewars, is typically beneficial for a small social network: it generates activity and engages users. It doesn’t usually result in offline harm, which is what typically prompts calls to moderate content. (And: platform owners don’t want to have to moderate content. It’s extra work and they are focused on other things. They kind of care about money, but mostly they wish you would shut up and be civil.)

But that is impossible: they (we) made a platform where anyone can say anything, largely without consequence, so people are going to be their worst selves, and social networking is now The Internet, and everyone is on it (thank you @chamath), saying WHATEVER THE HELL THEY WANT.

But the platforms have to be polite. They have to pretend to enforce fairness. They have to adopt “principles.” Let me tell you: There are no real principles. They are just trying to be fair because if they weren’t, everyone would yell LOUDER and the problem would be worse.

What happens is that because of the fundamental structural nature of social networks, it is always possible for a corner case to emerge where people get into an explosive fight and the company running the social network has to step in.

Again: Omega Events

Because human variability and behavior are infinite. And when that happens, the social network has to make up a new rule, or “derive” it from some prior stated principle, and over time it’s really just a tortured game of Twister.

You really want to avoid censorship on social networks? Here is the solution:

Stop arguing. Play nice. The catch: everyone has to do it at once.

I guarantee you, if you do that, there will be NO CENSORSHIP OF ANY TOPIC on any social network.

Because it is not TOPICS that are censored. It is BEHAVIOR.

(This is why people on the left and people on the right both think they are being targeted)

The problem with social networks is the SOCIAL (people) part. Not the NETWORK (company).

“The best antidote to bad ideas is not to censor them, but to allow debate and better ideas.”

How naive.

“Debate” is a vague term, and what a social network observes that causes them to “censor” something is masses of people engaging in “debate” - that is to say: abusive volumes of activity violating spam and harassment rules, sometimes prompting off-site real-world harm.

This is what you think of when you hear “debate.”

[Image: an old-time public debate on a stage]

This is not what is happening on social networks today.

Example: the “lab leak” theory (a controversial theory that is now probably true; I personally believe so) was “censored” at a certain time in the history of the pandemic because the “debate” included massive amounts of horrible behavior, spam-level posting, and abuse that spilled over into the real world - e.g. harassment of public officials and doctors, racially-motivated crimes, etc.

Why is this link not being censored now? Hypocrisy? Because the facts changed?

[Linked article: “The Virus Hunting Nonprofit at the Center of the Lab Leak Controversy”]

It was “censored” not because it was a wrong idea, but because ideas really can - at certain times and places - become lightning rods for actual, physical, kinetic mob behavior.

That is just an unpleasant, inconvenient truth that all of you (regardless of your political leaning) need to accept about speech. Ideas really ARE powerful, and like anything else that is powerful, yes, they can be DANGEROUS.

I’m sorry, it’s just true.

It would have been perfectly acceptable if the lab leak theory were being discussed in a rational, evidence-based manner by scientists on Twitter, but that is not what happened.

Replace “lab leak theory” with whatever topic you think has been unfairly censored, and the reason it was censored (or any other action taken against it) is not because of the content of that topic, I ABSOLUTELY ASSURE YOU.

It is because at Certain Times, given Certain Circumstances, humans will Behave Badly when confronted with Certain Ideas, and if you are The Main Platform Where That Idea is Being Discussed, you cannot do NOTHING, because otherwise humans will continue behaving badly.

Here is what I think about Twitter:

I think the last few years of @Jack’s administration have been the best years of Twitter’s history.

I think Jack really matured as an exec: his prior experience with Twitter, then his success with Square (i.e. doing it wrong, then doing it right), really raised him to a world-class CEO level, and Twitter finally got to be “pretty good.”

And “pretty good” is about as good as any social network can possibly be, in my opinion.

(@Jack, if you are reading this, my hat’s off to you. Saying this as one of the few people who have ever run a social platform: you showed the world how it should’ve been done)

There is a reason why Jack has a crazy meditation routine and eats one meal a day and goes on spiritual retreats. Because it takes an INHUMAN level of mentality to be able to run something like this.

Because the problems are NOT about politics, or topics of discussion. They are about all the ways that humans misbehave when there are no immediately visible consequences, when talking to (essentially) strangers, and the endless ingenuity they display trying to get around rules.

These last few years, @Jack did a really good job.

And whoever the midwits were who didn’t think so have kicked him out, and now Elon thinks he’s going to come in and fix some problems.

Elon is not going to fix some problems. I am absolutely sure of this. He has no idea what he’s in for.

(He might hire back Jack, which might be ok, but I don’t know if Jack wants the job. Who knows. All the tech titans are buddies, kind of)

Elon is going to try like heck to “fix” the problems he sees. Each problem he “fixes” will just cause 3 more problems.

And the worst part, the part that is going to hurt ALL OF HUMANITY, is that this will distract from his mission at SpaceX and Tesla, because it’s not just going to suck up his time and attention, IT WILL DAMAGE HIS PSYCHE.

I mean, it’s not like he isn’t already an emotionally damaged guy. (Sorry Elon, it’s pretty obvious) But he has overcome a lot. And he does not need more trauma from running Twitter.

And I know I’m not just projecting my own traumas from the time of running Reddit, because:

Mark Zuckerberg talks about e-foiling in the mornings to avoid having to think about bad news coming in that’s like “being punched in the face.”

Ellen Pao was horrifically scarred by her run as Reddit CEO and the active harassment, far beyond merely adjudicating community misbehavior.

Jack has his meditation retreats and unusual diets and spiritual journeys - he’s an odd guy yeah - but I’m pretty sure some of that is so he can cope with All You Fucking Assholes.

Never heard much from Dick Costolo, but I haven’t seen him do much stand-up improv since he left Twitter, have you? Dick might still be recovering.

It’s not a fun job, and it’s not like how anyone on the outside imagines. Elon is a very public personality, and he will be faulted by ALL SIDES any time Twitter Does Anything to Solve A Problem, even if he isn’t the CEO.

“Why is chairman of the board @elonmusk standing by while @[newtwitterceo] is doing X, which is wrecking Y?”

“@elonmusk, how can you allow X horrible thing to happen? I thought you were against censorship!”

So: my take is this:

@elonmusk, I’m all with you on the Values Of The Old Internet.

This is not The Old Internet. That is gone. It is sad. It’s not because the platforms killed it.

It is because we brought all of our old horrible collective dysfunctions onto the internet, and the internet is very fast and everyone can say anything to anyone, and the place where that happens the most is on the social platforms.

(It doesn’t happen very often on e.g. Amazon, except when it does, and of course that’s when Amazon Censors You!)

What Elon does with rockets and cars is hard. It is VERY hard. Like eating glass, as he would put it.

But it is not as hard as running a social network. And if Elon knows what’s good for him AND HUMANITY, he won’t do it - he will stick with the Real Atoms, which is what we really need.

Addenda: a few people have interpreted this thread as meaning that I support censorship, or that it was meant as a justification for censorship.

(That is a reasonable misinterpretation.) But it is not true.

I am very much against censorship. I am, for example, against the censorship of every topic that the social networks blocked during the pandemic especially. I have personally been harmed by this.

However, I also understand many non-obvious things about the complex dynamics that arise in large social network platforms, and I will tell you this:

Censorship is inevitable on large social network platforms. If you run one of sufficient size, you will be FORCED to censor things. Not by governments, or even by “users,” but by the emergent dynamics of the social network itself.

Someone also said something like, “it’s unacceptable that anyone be considered the omniscient arbiter of what’s true or not” (sorry if I’m misquoting you; there’s a lot of replies)

I also agree with that. It is impossible for anyone to do, and also terrible.

Yet, the structure and dynamics of running a large social network will FORCE you to do it.

IIRC, almost every large social platform started out wanting to uphold free speech. They all buckle.

And it’s not because certain ideas are good or bad, or true or false. It has to do purely with operational issues that arise with humans that disagree in large numbers on digital platforms.

The social platforms aren’t censoring you (or some idea you like) because they disagree with you. They are censoring because they are large social platforms, and ideas are POWERFUL and DANGEROUS.

(That is the whole point. Ideas wouldn’t be worth much if they weren’t dangerous or powerful. But you can’t always control what people are going to do with powerful things)

What they censor has little to do with what is true or false. It has a little bit to do with whatever the current politics are, but not in the way you probably expect.

Let me be clear: if you run a large social network, you will be forced by inexorable circumstance to censor certain things, you will be forced to “arbitrate” on topics you have an (inevitably) limited understanding of, and it will all be really really shitty.

(The alternative is just collapse of the platform, so I guess you do always have a choice - but then you’re not a social platform anymore)

The process through which all of that will happen is painful, which is why I don’t think Elon should do it. It is not a good use of his time, and I think his time is uniquely valuable and limited.

Moderating spam is very interesting: it is almost universally regarded as okay to ban (i.e. CENSORSHIP) but spam is in no way illegal.

Spam actually passes the test of “allow any legal speech” with flying colors. Hell, the US Postal Service delivers spam to your mailbox.

When 1A discussions talk about free speech on private platforms mirroring free speech laws, the exceptions cited are typically “fire in a crowded theater” or maybe “threatening imminent bodily harm.”

Spam is nothing close to either of those, yet everyone agrees: yes, it’s okay to moderate (censor) spam.

Why is this? Because it has no value? Because it’s sometimes false? Certainly it’s not causing offline harm.

No, no, and no.

No one argues that speech must have value to be allowed (cf. shitposting). And it’s not clear that content should be banned for being untrue (especially since adjudicating truth is likely intractable). So what gives? Why are we banning spam?

Here’s the answer everyone knows: there IS no principled reason for banning spam. We ban spam for purely outcome-based reasons:

It affects the quality of experience for users we care about, and users having a good time on the platform makes it successful.

(And successful on a social platform usually means a lucrative ads program, which is ironically one of the things motivating spam in the first place)

Theory: all user-generated content sites inevitably have ads on them.

(But that’s a digression)

Not only that, but you can usually moderate (identify and ban) spam without understanding the language.

Spam is typically easily identified due to the repetitious nature of the posting frequency, and simplistic nature of the content (low symbol pattern complexity).

Machine learning algorithms are able to accurately identify spam, and it’s not because they can tell it’s about Viagra or mortgage refinancing; it’s because spam has distinctive posting behavior and patterns in the content.

Moreover, AI is able to identify spam about things it hasn’t seen before.

This is unlike moderation of other content (e.g. political), where moderators aren’t usually able to tell that a “new topic” is going to end up being troublesome and eventually prompt moderation.

But spam about an all-new low-quality scammy product can be picked up by an AI recognizing patterns even though the AI doesn’t comprehend what’s being said.

It just knows that a message being broadcast with [THIS SET OF BEHAVIOR PATTERNS] is something users don’t want.

Spam filters (whether based on keywords, frequency of posts, or content-agnostic-pattern-matching) are just a tool that a social media platform owner uses to improve the signal-to-noise ratio of content on their platform.

That’s what you’re doing when you ban spam.
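To make that concrete, here is a minimal sketch of what “moderating without understanding the language” can look like. This is purely my illustration, not any platform’s real pipeline: the Post shape, the features, and every threshold are made up. It scores an author’s recent posts using only posting frequency, repetition, and character-level complexity.

```python
# Minimal sketch (illustrative only): flag likely spam from posting behavior
# and repetition alone, without understanding or even tokenizing the language.
# Thresholds are invented for the example.
import math
from collections import Counter
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: float  # seconds since epoch
    text: str

def char_entropy(text: str) -> float:
    """Character-level entropy; repetitive, low-complexity text scores low."""
    if not text:
        return 0.0
    counts = Counter(text)
    total = len(text)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def looks_like_spam(posts: list[Post]) -> bool:
    """Score one author's recent posts using metadata and shape only."""
    if len(posts) < 2:
        return False
    timestamps = sorted(p.timestamp for p in posts)
    window_minutes = max((timestamps[-1] - timestamps[0]) / 60, 1.0)
    rate = len(posts) / window_minutes                      # posting frequency
    distinct = len({p.text.strip().lower() for p in posts})
    duplication = 1 - distinct / len(posts)                 # how often text repeats
    avg_entropy = sum(char_entropy(p.text) for p in posts) / len(posts)
    # High frequency + heavy repetition + low complexity: almost certainly spam,
    # regardless of what the posts are "about" or what language they are in.
    return rate > 5 and duplication > 0.5 and avg_entropy < 3.5
```

Nothing in that sketch knows what the posts say; it only sees how they are being posted, which is the whole point.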

I have said before that it’s not topics that are censored, it is behavior:

Because it is not TOPICS that are censored. It is BEHAVIOR.

(This is why people on the left and people on the right both think they are being targeted)

The problem with social networks is the SOCIAL (people) part. Not the NETWORK (company).

So now we move on to the next classes of content on the ladder:

2: non-controversial topics

3: controversial topics (politics, religion, culture, etc)

Let’s say you are in an online discussion about a non-controversial topic. Usually it goes fine, but sometimes one of the following pathologies erupts:

a) ONE particular user gets tunnel-vision and begins to post the same thing over and over, or brings up his opinion every time someone mentions a peripherally-related topic. He just won’t shut up, and his tone ranges from annoying to abrasive to grating.

b) An innocuous topic sparks a flamewar, e.g. someone mentions one of John Mulaney’s jokes and it leads to a flamewar about whether it’s OK to like him now, how DARE he go and… how can you possibly condone… etc

When I SAY those things, they don’t sound too bad. But I want you to imagine the most extreme, pathological cases of similar situations you’ve been in on a social platform:

a guy who floods every related topic thread with his opinion (objectively not an unreasonable one) over and over, and

a crazy flamewar that erupts over a minor comment that won’t end and everyone is hating everyone else and new enemy-ships are formed and some of your best users have quit the platform in DISGUST

You remember that time those things happened on your favorite discussion platform? Yeah. Did your blood pressure go up just a tiny bit thinking about that?

Okay. Just like spam, none of those topics ever comes close to being illegal content.

But, in any outcome-based world, stuff like that makes users unhappy with your platform and less likely to use it, and as the platform owner, if you could magically have your druthers, you’d prefer it if those things didn’t happen.

Most users are NOT Eliezer Yudkowsky or Scott Alexander, responding to an inflammatory post by thinking, “Hmm, perhaps I should challenge my priors?” Most people are pretty easy to get really worked up.

Events like that will happen, and they can’t be predicted, so the only thing to do when it happens is to either do nothing (and have your platform take a hit or die), or somehow moderate that content.

RIGHT NOW RIGHT HERE I want to correct a misconception rising in your mind:

Just because I am saying you will need to moderate that content does NOT mean I am saying that all methods or any particular method employed by someone is the best or correct or even a good one.

I am NOT, right here, advocating or recommending bans, time-limited bans, or hell-banning, or keyword-blocking, or etc etc whatever specific method. I am JUST saying that as a platform owner you will end up having to moderate that content.

And, there will be NO relation between the topic of the content and whether you moderate it, because it’s the specific posting behavior that’s a problem. What do I mean by that?

It means people will say, “You banned people in the discussion about liking John Mulaney Leaving His Wife but you didn’t ban people in the discussion about Kanye West Being Anti-Semitic ARE YOU RACIST HEY I NOTICE ALL YOUR EXECS ARE WHITE!”

No, it’s because for whatever reason people didn’t get into a flamewar about Kanye West or there wasn’t a Kanye-subtopic-obsessed guy who kept saying the same thing over and over and over again.

In fact, once again, I challenge you to think about it this way: could you make your content moderation decisions even if you didn’t understand the language they were being spoken in?

Here, there is a parallel to the usage of “Lorem Ipsum” in the world of design.

Lorem Ipsum

Briefly, when showing clients examples of a proposed webpage design, professional designers usually replace the text with nonsense text, i.e. “Lorem ipsum dolor etc…” because they don’t want the client to be subconsciously influenced by the content.

Like if the content says, “The Steelers are the greatest football team in history” then some clients are going to be subconsciously motivated to like the design more, and some will like it less.

(Everyone from Pittsburgh who is reading this has now been convinced of the veracity and utter reasonableness of my thinking on this topic)

Everyone else… let’s take another temporary detour into the world of carbon credits.

Carbon credits are great for offsetting your carbon footprint, but are they really helpful to the true climate problem?

Now back to where we were… when we left off, I was talking about how people are subconsciously influenced by the specific content that’s being moderated (and not the behavior of the user) when they judge the moderation decision.

When people look at moderation decisions by a platform, they are not just subconsciously influenced by the nature of the content that was moderated, they are heavily - overwhelmingly - influenced by the nature of the content!

Would you think the moderation decision you have a problem with would be fair if the parties involved were politically reversed?

You’d certainly be a lot more able to clearly examine the merit of the moderation decision if you couldn’t understand the language of the content at all, right?

People in China look at America and don’t really think the parties are so different from each other; they just think it’s a disorganized and chaotic system that resulted in a mob storming the Capitol after an election.

And some actors are actively trying to cause BAD things to happen: Russia is just happy that the US had a mob storming the Capitol after an election instead of an orderly transfer of power. They don’t care who is “the good guy,” they just love our social platforms.

You’ll notice that I just slippery-sloped my way from #2 to #3:

2: non-controversial topics

3: controversial topics (politics, religion, culture, etc)

Because #2 topics become #3 topics organically - they get culture-linked to something in #3 or whatever - and then you’re confronting #3 topics or proxies for #3 topics.

You know, non-controversial #2 topics like… vaccines and wearing masks.

If you told me 10 years ago that people would be having flamewars and deep identity culture divides as a result of online opinions on WEARING MASKS I would have told you that you were crazy.

That kind of thing cannot be predicted, so there’s no way to come up with rules beforehand based on any a priori thinking.

Or some topics NEED to be discussed in a dispassionate way divorced from politics:

Example: the “lab leak” theory (a controversial theory that is now probably true; I personally believe so) was “censored” at a certain time in the history of the pandemic because the “debate” included massive amounts of horrible behavior, spam-level posting, and abuse that spilled over into the real world - e.g. harassment of public officials and doctors, racially-motivated crimes, etc.

Like the AI, human content moderators cannot predict when a new topic is going to start presenting problems that are sufficiently threatening to the operation of the platform.

The only thing they can do is observe if the resultant user behavior is sufficiently problematic.

But that is not something outside observers see, because platforms don’t advertise problematic user behavior: if you knew there was a guy spam-posting an opinion (even one you like) over and over and over, you wouldn’t use the platform.

All they see is the sensationalized (mainstream news) headlines saying TWITTER/FACEBOOK bans PROMINENT USER for posts about CONTROVERSIAL TOPIC.

This is because old-media journalists always think it’s about content. Newspapers don’t really run into the equivalent of “relentless shitposting users” or “flamewars between (who? dueling editorialists?).” It’s not part of their institutional understanding of “content.”

Content for all media prior to social media is “anything that gets people engaged, ideally really worked up.” Why would you EVER want to ban something like that? It could only be for nefarious reasons.

Any time an old-media news outlet publishes something that causes controversy, they LOVE IT. Controversy erupting from old-media news outlets is what modern social media might call “subclinical.”

In college, I wrote a sort of crazy satirical weekly column for the school newspaper. The satire was sometimes lost on people, and so my columns resulted in more letters to the editor than any other columnist ever. The paper loved me.

(Or it’s possible they loved me because I was the only writer who turned in his writing on time every week)

Anyhow, old media controversy is far, far below the intensity levels of problematic behavior that would e.g. threaten the ongoing functioning or continued consumer consumption of that old-media news outlet.

MAYBE sometimes an advertiser will get mad, but a backroom sales conversation will usually get them back once the whole thing blows over.

So we observe the following events:

1: innocuous discussion

2: something blows up and user(s) begin posting with some disruptive level of frequency and volume

2a: maybe a user does something offline as a direct result of that intensity

3: platform owner moderates the discussion to reduce the intensity

4: media reporting describes the moderation as targeting the content topic discussed

5: platform says, “no, it’s because they [did X specific bad behavior] or [broke established rules]”

6: no one believes them

7: media covers the juiciest angle, i.e. “Is PLATFORM biased against TOPIC?”

Because, you see, controversial issues always look like freedom of speech issues.

But no one cries freedom of speech when it’s spam, or even non-controversial topics. Yeah, you close down the thread about John Mulaney but everyone understands it’s because it was tearing apart the knitting group.

“Becky, you were banned because you wouldn’t let up on Karen and even started sending her mean messages to her work email when she blocked you here.”

Controversial topics are just overrepresented in instances where people get heated, and when people get heated, they engage in behavior they wouldn’t otherwise engage in.

But that distinction is not visible to people who aren’t running the platform.

One of the things that hamstrings platforms is that unlike judicial proceedings in the real world, platforms do not or cannot reveal all the facts and evidence to the public for review.

In a real-world trial, the proceedings are generally public. Evidence of the alleged wrongdoing is presented and made part of the public record.

Although someone might be too lazy to look it up, an interested critic will be able to look at the evidence in a case before deciding if they want to (or can credibly, without being debunked) whip up an angry mob against the system itself.

At Reddit, we’d have to issue moderation decisions (e.g. bans) on users and then couldn’t really release all the evidence of their wrongdoing, like abusive messages or threats, or spamming with multiple accounts, etc.

The justification is that private messages are private, or sometimes compromising to unrelated parties, but whatever the reasons, that leaves fertile ground for unscrupulous users to claim that they were victimized and politically interested parties to amplify their message that the platform is biased against them.

I had long wondered about a model like “put up or shut up” where any users challenging a moderation decision would have to consent to having ALL the evidence of their behavior made public by the platform, including private logs and DMs.

But there are huge privacy issues and having a framework for full-public-disclosure would be a lot of work. Nevertheless, it would go a long way to making moderation decisions and PROCESSES more transparent and well-understood by the general public.

Social platforms actually have much BETTER and more high-quality evidence of user misbehavior than “the real world.” In the real world, facts can be obscured or hidden. On a digital platform, everything you do is logged. The truth is there.

And, not only that, the evidence can even be presented in an anonymized way for impartial evaluation.

Strip out identifiers and political specifics, and like my “in a language you don’t understand” example: moderators (and armchair quarterbacks) can look at the behavior and decide if it’s worthy of curtailment.
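As a purely illustrative sketch of that idea (not anything Reddit or any other platform actually shipped; the event shape and redaction rules are invented for the example), here is what a first pass at pseudonymizing a moderation case might look like:

```python
# Illustrative only: pseudonymize a moderation case so reviewers can judge
# the behavior (who messaged whom, how often, how abusively) without seeing
# real identities. The redaction rules here are deliberately simplistic.
import re

EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")
URL = re.compile(r"https?://\S+")

def pseudonymize_case(events: list[dict]) -> list[dict]:
    """events: [{'actor': ..., 'action': ..., 'timestamp': ..., 'text': ...}, ...]"""
    actors = sorted({e["actor"] for e in events})
    aliases = {real: f"User {i}" for i, real in enumerate(actors, start=1)}

    redacted = []
    for e in events:
        text = URL.sub("[link]", EMAIL.sub("[email]", e.get("text", "")))
        # Also mask any mention of a participant inside the message body.
        for real, fake in aliases.items():
            text = text.replace(real, fake)
        redacted.append({
            "actor": aliases[e["actor"]],
            "action": e["action"],        # e.g. "reply", "dm", "report"
            "timestamp": e["timestamp"],  # the timing pattern is the evidence
            "text": text,
        })
    return redacted
```

Even this toy version hints at why it’s a lot of work: a couple of regexes will not catch every identifying detail, which is exactly the accidental-disclosure risk described next.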

Again, this is a lot of work. You can’t just dump data, because it’s a heightened situation of emotional tension: the first time you try, something extra will get accidentally disclosed, and you’ll have ANOTHER situation on your hands. Now you have two problems.

So I don’t know if that’s workable. But what I do know is, people need to think about content moderation differently, because:

1: It is a signal-to-noise management issue

2: Freedom of speech was NEVER the issue (cf. spam)

3: Could you still moderate if you can’t read the language?

Warning: don’t over-rotate on #3 and try to do all your content moderation through AI. Facebook tried that, and ended up with a bizarre inhuman dystopia. (I have a bunch more to say about this if people care)

Having said all that, I wish to offer my comments on the (alleged) “war room team” that Elon has apparently put to work at Twitter:

I don’t know the other people super well (tho Sriram is cool; he was briefly an investor in a small venture of mine), but I’m heartened to know that @DavidSacks is involved.

Sacks is a remarkably good operator, possibly one of the best ones in the modern tech era. He was tapped to lead a turnaround at Zenefits when that company got into really hot water.

“Content moderation” is the most visible issue with Twitter (the one talking heads love to obsess over) but it’s always been widely known that Twitter suffers from numerous operational problems that many CEOs have tried in vain to fix.

If Twitter were operationally excellent, it’d have a much better chance of tackling its Inherently Very Hard Moderation Problems and maybe emerge with novel solutions that benefit everyone. If anyone can do that, it’s Sacks.

Twitter employees are about to either be laid off or will look back on this as the time they did the best work of their lives.

Finally, while I’ve got your attention, I’d like to tell you my personal secret to a positive Twitter experience - a little-known Twitter add-on called Block Party: @blockpartyapp_

One thing that Twitter did well (that I’m surprised FB hasn’t copied) is exposing their API for content filtering.

This allows 3rd-party app developers to create specialized solutions that Twitter can’t/won’t do.

Block Party’s founder Tracy Chou understands the weird and subtle nuances of content filtering on the internet: you don’t use a cudgel, you need a scalpel (or three).

Block Party doesn’t simply wholesale block things, it filters them in an intelligent way based on criteria you set, and uses data across the system to tune itself.

It doesn’t just throw away things it filters for you, it puts them in a box so you can go through it later when you want. Because no automated filter is perfect! (Remember the “bizarre inhuman AI dystopia” from above?)
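Here is a minimal sketch of that “quarantine, don’t delete” pattern as I understand it from the outside (this is not Block Party’s actual code or the Twitter API; the types and the predicate are invented for illustration):

```python
# Sketch of a filter that quarantines suspected noise for later review instead
# of deleting it. The Mention type and the predicate are made up for this example.
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Mention:
    author: str
    text: str

@dataclass
class FilteredFeed:
    is_unwanted: Callable[[Mention], bool]         # whatever criteria the user sets
    visible: List[Mention] = field(default_factory=list)
    review_box: List[Mention] = field(default_factory=list)

    def ingest(self, mention: Mention) -> None:
        # No automated filter is perfect, so nothing is thrown away: suspected
        # noise goes into a box the user (or a trusted friend) can empty later.
        if self.is_unwanted(mention):
            self.review_box.append(mention)
        else:
            self.visible.append(mention)

# Example rule: hide mentions that are shouting in all caps.
feed = FilteredFeed(is_unwanted=lambda m: m.text.isupper())
feed.ingest(Mention("friend", "great thread"))
feed.ingest(Mention("egg4821937", "YOU ARE WRONG AND BAD"))
assert len(feed.visible) == 1 and len(feed.review_box) == 1
```

The design choice is that a false positive costs you a delay, not the content.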

If you’re someone who gets a LOT of hate (or just trash) and you don’t really WANT to go through it but need to (just in case there’s something valuable), you can also authorize a trusted friend to do it for you.

Overall, it has smoothly and transparently improved the signal-to-noise ratio of my Twitter experience, especially during a period of cultural upheaval when you’d expect MORE crazy crap…

But no, for me, my Twitter experience is great and clean and informative and clever. I’ve used Twitter more and more ever since installing it.

Disclosure: as a result of these experiences, I’m now an investor in Block Party.

If you enjoyed this and want more spicy takes on social media (and advice on how to fix the climate, or investment tips), follow me!