
Censorship Is Complicated: What Internet History Says about Meta/Facebook

In light of this week’s announcement by Meta (Facebook, Instagram, Threads, etc.), I have been pondering this question: Why am I, a person who has long been a staunch advocate of free speech and encryption, leery of sites that talk about being free speech-oriented? And, more to the point, why am I — a person who has been censored by Facebook for mentioning the Open Source social network Mastodon — not cheering a “lighter touch”?

The answers are complicated, and take me back to the early days of social networking. Yes, I mean the 1980s and 1990s.

Before digital communications, there were barriers to reaching a lot of people. Especially money. This led to a sort of self-censorship: it may be legal to write certain things, but would a newspaper publish a letter to the editor containing expletives? Probably not.

As digital communications started to happen, suddenly people could have their own communities. Not just free from the same kinds of monetary pressures, but free from outside oversight (parents, teachers, peers, community, etc.). When you have a community that the majority of people lack the equipment to access — and wouldn’t understand how to access even if they had the equipment — you have a place where self-expression can be unleashed.

And, as J. C. Herz covers in what is now an unintentional history (her book Surfing on the Internet was published in 1995), self-expression WAS unleashed. She enjoyed the wit and expression of everything from odd corners of Usenet to the text-based open world of MOOs and MUDs. She even talks about groups dedicated to insults (flaming) in positive terms.

But as I’ve seen time and again, if there are absolutely no rules, then whenever a group gets big enough — more than a few dozen people, say — there are troublemakers that ruin it for everyone. Maybe it’s trolling, maybe it’s vicious attacks, you name it — it will arrive and it will be poisonous.

I remember the debates within the Debian community about this. Debian is one of the pillars of the Internet today, a nonprofit project with free speech in its DNA. And yet there were inevitably the poisonous people. Debian took too long to learn that allowing those people to run rampant was causing more harm than good, because having a well-worn Delete key and a tolerance for insults became a requirement for being a Debian developer, and that drove away people that had no desire to deal with such things. (I should note that Debian strikes a much better balance today.)

But in reality, there were never absolutely no rules. If you joined a BBS, you used it at the whim of the owner (the “sysop” or system operator). The sysop might have been a 16-year-old running it from their bedroom, or a retired programmer, but in any case they were letting you use their resources for free and they could kick you off for any or no reason at all. So if you caused trouble, or perhaps insulted their cat, you’d be banned. But, in all but the smallest towns, there were other options you could try.

On the other hand, sysops enjoyed having people call their BBSs and didn’t want to drive everyone off, so there was a natural balance at play. As networks like Fidonet developed, a sort of uneasy approach kicked in: don’t be excessively annoying, and don’t be easily annoyed. Like it or not, it seemed to generally work. A BBS that repeatedly failed to deal with troublemakers could risk removal from Fidonet.

On the more institutional Usenet, you generally got access through your university (or, in a few cases, employer). Most universities didn’t really even know they were running a Usenet server, and you were generally left alone. Until you did something that annoyed somebody enough that they tracked down the phone number for your dean, in which case real-world consequences would kick in. A site could face the Usenet Death Penalty — being delinked from the network — if it repeatedly failed to prevent malicious content from flowing through it.

Some BBSs let people from minority communities such as LGBTQ+ thrive in a place of peace from tormentors. A lot of them let people be themselves in a way they couldn’t be “in real life”. And yes, some harbored trolls and flamers.

The point I am trying to make here is that each BBS, or Usenet site, set its own policies about what its own users could do. These had to be harmonized to a certain extent with the global community, but in a certain sense, with BBSs especially, you could just use a different one if you didn’t like the vibe at a certain place.

That this free speech ethos survived was never inevitable. There were many attempts to regulate the Internet, and it was thanks to the advocacy of groups like the EFF that we have things like strong encryption and a degree of freedom online.

With the rise of the very large platforms — and here I mean CompuServe and AOL at first, and then Facebook, Twitter, and the like later — the low-friction option of just choosing a different place started to decline. You could participate on a Fidonet forum from any of thousands of BBSs, but you could only participate in an AOL forum from AOL. The same goes for Facebook, Twitter, and so forth. Not only that, but as social media came to be synonymous with very large sites, it became impossible for a person with a reasonable amount of skill, funds, and time to just start a site themselves. Instead of needing a few thousand dollars of equipment, you’d need tens or hundreds of millions of dollars of equipment and employees.

All that means you can’t really run Facebook as a nonprofit. It is a business. It should be absolutely clear to everyone that Facebook’s mission is not the one they say it is — “[to] give people the power to build community and bring the world closer together.” If that were their goal, they wouldn’t be creating AI users and AI spam and all the rest. Zuck isn’t showing courage; he’s sucking up to Trump, and those who will pay the price are those who always do: women and minorities.

Really, the point of any large social network isn’t to build community. It’s to make the owners their next billion. They do that by convincing people to look at ads on their site. Zuck is as much a windsock as anyone else; he will adjust policies in whichever direction he thinks the wind is blowing so as to let him keep putting ads in front of eyeballs, and stomp all over principles — even free speech — doing it. Don’t expect anything different from any large commercial social network either. Bluesky is going to follow the same trajectory as all the others.

The problem with a one-size-fits-all content policy is that the world isn’t that kind of place. For instance, I am a pacifist. There is a place for a group where pacifists can hang out with each other, free from the noise of the debate about pacifism. And there is a place for the debate. Forcing everyone that signs up for the conversation to sign up for the debate is harmful. Preventing the debate is often also harmful. One company can’t square this circle.

Beyond that, the fact that we care so much about one company is a problem on two levels. First, it indicates how susceptible people are to misinformation and such. I don’t have much to offer on that point. Secondly, it indicates that we are too centralized.

We have a solution there: Mastodon. Mastodon is a modern, open source, decentralized social network. You can join any instance, easily migrate your account from one server to another, and so forth. You pick an instance that suits you; there are thousands to choose from. Some aggressively defederate with instances known to harbor poisonous people; some don’t.

And, to harken back to the BBS era, if you have some time, some skill, and a few bucks, you can run your own Mastodon instance.

Personally, I still visit Facebook on occasion because some people I care about are mainly there. But it is such a terrible experience that I rarely do. Meta is becoming irrelevant to me. They are on a path to becoming irrelevant to many more as well. Maybe this is the moment to go “shrug, this sucks” and try something better.

(And when you do, feel free to say hi to me at @jgoerzen@floss.social on Mastodon.)

Facebook’s Blocking Decisions Are Deliberate – Including Their Censorship of Mastodon

In the aftermath of my report of Facebook censoring mentions of the open-source social network Mastodon, there was a lot of conversation about whether or not this was deliberate.

That conversation seemed to focus on whether a human specifically added joinmastodon.org to some sort of blacklist. But that’s not even relevant.

OF COURSE it was deliberate, because of how Facebook tunes its algorithm.

Facebook’s algorithm is tuned for Facebook’s profit. That means it’s tuned to maximize the time people spend on the site — engagement. In other words, it is tuned to keep your attention on Facebook.

Why do you think there is so much junk on Facebook? So much anti-vax, anti-science, conspiracy nonsense from the likes of Breitbart? It’s not because their algorithm is incapable of surfacing the good content; we already know it can because they temporarily pivoted it shortly after the last US election. They intentionally undid its efforts to make high-quality news sources more prominent — twice.

Facebook has said that certain anti-vax disinformation posts violate its policies. It has an extremely cumbersome way to report them, but it can be done and I have. These reports are met with either silence or a response claiming the content didn’t violate their guidelines.

So what sort of algorithm allows Breitbart not just to be seen but to thrive on the platform, lets anti-vax disinformation survive even a human review, yet bans mentions of Mastodon?

One that is working exactly as intended.

We may think this algorithm is busted. Clearly, Facebook does not. If their goal is to maximize profit by maximizing engagement, the algorithm is working exactly as designed.

I don’t know if joinmastodon.org was specifically blacklisted by a human. Nor is it relevant.

Facebook’s choice to tolerate and promote the things that service its greed for engagement and money, even if they are the lowest dregs of the web, is deliberate. It is no accident that Breitbart does better than Mastodon on Facebook. After all, which of the two does its algorithm detect keeps people engaged on Facebook itself longer?

Facebook removes the ban

You can see all the screenshots of the censorship in my original post. Now, Facebook has reversed course.

We also don’t know if this reversal was human or algorithmic, but that still is beside the point.

The point is, Facebook intentionally chooses to surface and promote those things that drive engagement, regardless of quality.

Many have wondered whether tens of thousands of people have died unnecessarily of COVID as a result. One whistleblower says “I have blood on my hands” and President Biden said “they’re killing people” before “walking back his comments slightly”. I’m not equipped to verify those statements. But what do they think is going to happen if they prioritize engagement over quality? Rainbows and happiness?

Update 2024-04-06: It’s happened again. Facebook censored my post about the Marion County Record because the same site was critical of Facebook censoring environmental stories.

The Good, Bad, and Scary of the Banning of Donald Trump, and How Decentralization Makes It All Better

It is undeniable that banning Donald Trump from Facebook, Twitter, and similar sites is a benefit for the moment. It may well save lives, perhaps lots of lives. But it raises quite a few troubling issues.

First, as EFF points out, these platforms have privileged speakers with power, especially politicians, over regular users. For years now, it has been obvious to everyone that Donald Trump has been violating policies on both platforms, and yet they did little or nothing about it. The result we saw last week was entirely foreseeable — and indeed, WAS foreseen, including by elements in those companies themselves. (The ACLU also raises some good points.)

Contrast that with how others get treated. Facebook, two days after the coup attempt, banned Benjamin Wittes, apparently because he mentioned an Atlantic article opposed to nutcase conspiracy theories. The EFF has also documented many more egregious examples: taking down documentation of war crimes, childbirth images, black activists showing the racist messages they received, women discussing online harassment, etc. The list goes on; YouTube, for instance, has often been promoting far-right violent videos while removing peaceful LGBTQ ones.

In short, have we simply achieved legal censorship by outsourcing it to dominant corporations?

It is worth pausing at this point to recognize two important principles:

First, that we do not see it as right to compel speech.

Secondly, that there exist communications channels and other services that nobody is calling on to suspend Donald Trump.

Let’s dive into those a little bit.

On the second point: there have been no prominent calls for AT&T, Verizon, Gmail, or whoever provides Trump and his campaign with cell phones or email to suspend their service to him. Moreover, the gas stations that fuel his vehicles and the airports that service his plane continue to provide those services, and nobody has seriously questioned that, either. Even his Apple phone that he uses to post to Twitter remains, as far as I know, fully active.

As to the first point, imagine you were starting up a small web forum focused on raising tomato plants. It is, and should be, well within your rights to keep tomato-haters out, as well as people that have no interest in tomatoes but would rather talk about rutabagas, politics, or Mars. If you are going to host a forum about tomatoes, you have the right to keep it a forum about tomatoes; you cannot be forced to distribute someone else’s speech. Likewise in traditional media, a newspaper cannot be forced to print every letter to the editor in full.

In law, there is a notion of a common carrier: an entity that provides services to the general public without discrimination. Phone companies and ISPs fall under this.

Facebook, Twitter, and tomato sites don’t. But consider what happens if Facebook bans you. You might be using Facebook-owned WhatsApp to communicate with family and friends, and suddenly find yourself unable to ask someone to pick you up. Or your treasured family photos might be in Facebook-owned Instagram, lost forever. It’s not just Facebook; similar things happen with Google, locking people out of their phones and laptops, their emails, even their photos.

Is it right that Facebook and Google aren’t regulated as common carriers? Perhaps, or perhaps we need some line of demarcation between their speech-to-the-public services (Facebook timeline posts, YouTube) and private communication (WhatsApp, Gmail). It’s a thorny issue; should government be regulating speech instead? That’s also fraught. So is corporate control.

Decentralization Helps Dramatically

With email, you get to pick your email provider (yes, there are two or three big ones, but still plenty of others). Each email provider will have its own set of things it considers acceptable, and its own set of other servers and accounts it’s willing to exchange mail with. (It is extremely common for mail providers to choose not to accept mail from various other mail servers based on ISP, IP address, reputation, and so forth.)
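To make that concrete, here is a toy sketch of what such a per-server acceptance policy might look like. All of the names and thresholds are invented for illustration; real mail servers express these rules through configuration files and DNS blocklist lookups rather than code like this, but the decision logic is the same kind of thing:

```python
# Toy sketch of a per-server federation policy (invented names and
# thresholds; not any real mail server's implementation). Each provider
# keeps its own rules about which peer servers it will exchange mail with.

BLOCKED_SERVERS = {"spam.example.net"}   # servers this provider refuses outright
MIN_REPUTATION = 0.5                     # below this score, mail is rejected

def reputation(server: str) -> float:
    """Stand-in for a reputation lookup (e.g., a DNS blocklist query)."""
    scores = {"mail.example.com": 0.9, "shady.example.org": 0.2}
    return scores.get(server, 0.7)       # unknown servers get a neutral default

def accept_from(server: str) -> bool:
    """Decide whether to accept a message from a peer server."""
    if server in BLOCKED_SERVERS:
        return False
    return reputation(server) >= MIN_REPUTATION

for peer in ["mail.example.com", "shady.example.org", "spam.example.net"]:
    print(peer, "->", "accept" if accept_from(peer) else "reject")
```

Each provider tunes these rules independently, and mail still flows between all the servers that accept one another.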

What if we could do something like that for Twitter and Facebook?

You could join whatever instance you like. Maybe one instance is all about art and doesn’t talk about politics. Or another is all about Free Software and doesn’t have advertising. And then there are plenty of open instances that accept anything that’s respectful. And, like email, people on one server can interact with those on another just as easily as if they were using the same one.

Well, this isn’t hypothetical; it already exists in the Fediverse. The most common option is Mastodon, and it so happens that a month ago I wrote about its benefits for other reasons, and included some links on getting started.

There is no reason that we must all let our online speech be controlled by companies with a profit motive to keep hate speech on their platforms. There is no reason that we must all have a single set of rules, or accept strong corporate or government control, either. The quality of conversation on Mastodon is far higher than either Twitter or Facebook; decentralization works and it’s here today.

How the Attention Economy Hurts You via Social Media Sites like Facebook

Note: This post is also available on my website, where it will be periodically updated.

There is a whole science to manipulating our attention. And because there is a lot of money to be made by doing this well, we all encounter attempts to manipulate what we pay attention to each day. What is this, and how is it harmful? This post is the first in a series on the topic.

Why is attention so important?

When people use Facebook, they use it for free. Facebook generally doesn’t even try to sell them anything, yet has billions in revenues. What, then, is Facebook’s product?

Well, really, it’s you. Or, more specifically, your attention. Facebook sells your attention to advertisers. Everything they do is in service to that. They want you to spend more time on the site so they can show you more ads.

(I should say here that I’m using Facebook as an example, but this applies to other social media companies too.)

Seeking to maximize attention

So if your attention is so important to their profit, it follows naturally that they would seek ways to get people to spend more time on their site. And they do. They track all sorts of metrics, including “engagement” (whether you click “like”, comment, share, or otherwise interact with content). They know which sorts of things are likely to capture your (and I mean specifically your!) attention and show you that. Your neighbor may have different interests, and Facebook judges that different things are likely to capture their attention.
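As a crude illustration of where that incentive leads, consider a feed that ranks posts purely by a model’s predicted engagement. The numbers, titles, and field names below are invented; this is not Facebook’s actual system, just a sketch of the optimization such a system performs:

```python
# Toy illustration of engagement-driven ranking (invented data; not
# Facebook's actual algorithm). A feed sorted by predicted engagement
# surfaces whatever keeps each user reacting, regardless of quality.

posts = [
    {"title": "Local garden club meets Tuesday", "predicted_engagement": 0.02},
    {"title": "Outrage bait about the other party", "predicted_engagement": 0.31},
    {"title": "Scary headline about your tap water", "predicted_engagement": 0.27},
]

# Rank purely by what the model predicts will hold attention.
feed = sorted(posts, key=lambda p: p["predicted_engagement"], reverse=True)

for post in feed:
    print(f'{post["predicted_engagement"]:.2f}  {post["title"]}')
```

Notice that nothing in the ranking cares whether a post is true, kind, or useful; fear and outrage win simply because they score higher.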

Manipulating your attention

Attention turning into money isn’t unique to social media. In fact, in the article If It Bleeds, It Leads: Understanding Fear-Based Media, Psychology Today writes:

In previous decades, the journalistic mission was to report the news as it actually happened, with fairness, balance, and integrity. However, capitalistic motives associated with journalism have forced much of today’s television news to look to the spectacular, the stirring, and the controversial as news stories. It’s no longer a race to break the story first or get the facts right. Instead, it’s to acquire good ratings in order to get advertisers, so that profits soar.

News programming uses a hierarchy of if it bleeds, it leads. Fear-based news programming has two aims. The first is to grab the viewer’s attention. In the news media, this is called the teaser. The second aim is to persuade the viewer that the solution for reducing the identified fear will be in the news story. If a teaser asks, “What’s in your tap water that YOU need to know about?” a viewer will likely tune in to get the up-to-date information to ensure safety.

You’ve probably seen fear-based messages a lot on Facebook. They will highlight messages to liberals about being afraid of what Trump is doing, and to conservatives about being afraid of what Biden is doing. They may not even be doing this intentionally; their algorithm predicts that those messages will maximize time and engagement for certain people, so that’s what those people see.

Fear leads to controversy

It’s not just fear, though. Social media also loves controversy. There’s nothing that makes people want to stay on Facebook like anger. Look at something controversial and you’ll see hundreds or thousands of people arguing about it — and in the process, giving Facebook their attention. A quick Internet search will turn up numerous articles on how marketing companies can leverage controversy to get attention and engagement with their campaigns.

Consequences of maximizing fear and controversy

What does it mean to society at large — and to you personally — that large companies make a lot of money by maximizing fear and controversy?

The most obvious consequence is less common ground. If the posts and reactions that show common ground are never seen because they don’t drive engagement, it poisons the well; left and right hate each other with ever more vigor — a profitable outcome for Facebook, but a poisonous one for all of us.

I have had several friendships lost because I — a liberal in agreement with these friends on political matters — still talk to Trump voters. On the other side, we’ve seen people storm the Michigan statehouse with weapons. How did that level of disagreement — and even fear behind it — get so firmly embedded in our society? Surely the fact that social media shows us things designed to stimulate fear and anger must play a role.

What does it do to our ability to have empathy for, and understand, others? The Facebook groups I’ve been in for like-minded people have largely been flooded with memes calling the President “rump” and other things clearly designed to make people angry or fearful. It’s not just a worthless experience; it’s a harmful one.

When our major media — TV and social networks — are all optimizing for fear, anger, and controversy, we have a society beholden to fear, anger, and controversy.

In my next installment, I’m going to talk about what to do about this, including the decentralized social networks of the Fediverse that are specifically designed to put you back in charge of your attention.

Update 2020-12-16: There are two followup articles for this: how to join the Fediverse and non-creepy technology purchasing and gifting guides. The latter references the FSF’s page on software manipulation towards addiction, which is particularly relevant to this topic.