The 11th Reason to Delete your Social Media Account: the Algorithm will Find You

TL;DR: outrage mobs aren’t a bug. They’re a feature.

After the introduction, there are five parts: the algorithm is real; the algorithm wants you online; the algorithm will find you; walk away from the algorithm; no, but seriously.

Update: a new post on charting a post-social media future.

Update: a nice piece by philosopher Anne-Sophie Barwich, who also deleted her social media accounts this year.

Introduction

A few years ago, Jaron Lanier wrote Ten Arguments for Deleting Your Social Media Accounts Right Now. Lanier’s book has the helpful feature of being completely unambiguous in its message (when, Jaron, when should I delete them? Oh). I ended up assigning it as optional reading for my undergraduate class, Bubbles. The Thanksgiving break means that students usually check out that week and miss class, so I run an optional seminar instead. I’ve learned a huge amount from these little liminal-moment seminars each year, and some of them have led to real revisions in my own thinking; see, e.g., my views on University censorship when I was on Jim Rutt’s Currents podcast. In previous years, we read John Locke’s pluralistic Letter Concerning Toleration, but Lanier’s book has the advantage of not needing any coaching in close reading.

That year the near-unanimous response from the students was to reject the book. Only one student (of ten, or so) had sympathy with the view, and wrote a fascinating (again, optional) essay later that semester. I was surprised by the support the students had for their lives on social media, and while a few of them felt that being on Facebook (or similar Facebook-owned systems) wasn’t quite optional, they felt the benefits outweighed the downsides.

Of course, I didn’t follow Lanier’s prescription either. I had deleted my Facebook account a year beforehand, but had an active Twitter habit. While I felt Lanier’s arguments were dead-on, they were not, as the philosophers say, dispositive: they didn’t settle the matter for me.

My views on this have shifted a great deal, however, and quite rapidly. I want to talk now about the reason I deleted my Twitter account a few days ago, pulling me entirely off social media. For me, it’s the 11th reason, since Lanier’s weren’t enough. (I tend to think in terms of reasons, which can be accepted or rejected, rather than arguments, which attempt to persuade.) From here on out, to be clear, I’m speaking not as a researcher, but a private citizen.

The 11th Reason is that, eventually, the algorithm will find you. This is very bad. It may have already happened to you (and you may not know it yet), but if it hasn’t, it’s basically a matter of time.

“The algorithm will find you” has two parts to it. On the one hand, the algorithm will find you meaning that it will discover you as a source for others, and direct them to you, in potentially disturbing ways. On the other hand, the algorithm will find you meaning that it will discover how to keep you online—regardless of the cost.

The Algorithm is real

Being “found” by an algorithm may seem a little science-fictional, but it’s not.

First, a social media site like Twitter or Facebook is gathering extraordinary amounts of data on you. For example, when you type something into a status-update box and then delete it, this information is transmitted to their servers. The location of your cursor on the screen, your hesitations, where you linger as you doom-scroll—all of these things are logged and transmitted.

Second, your identity is constantly tied to other places on the web. You may have noticed this when you make a purchase that violates your ordinary patterns. It’s amusing to discover that the Internet thinks you’re the opposite sex because you’ve purchased a gift (or even simply considered doing so), or that you have some addiction (so half the ads are selling you the addiction, and the other half selling you counseling to get out of it).

Third, you are one of hundreds of millions. Not only do these companies have extraordinary access to your micro-actions, and to your own personal context, but they have an enormous training set to determine how people “like you” behave. They can model you both as an individual human being and as a demographically micro-targeted one. Signals that are invisible to you at the personal level, or even in your entire life experience, are plain as day.

Just as an example: you may have seen a friend go down a bad path, say, alcoholism. You may have been watching the signs of that pattern for a while, with increasing concern. These might even be quite subtle and early on—e.g., that he lingers a bit, hoping for an extra drink, and hesitates a second or two before leaving. You’ve learned to spot these things from personal experience, perhaps a television show, or an online article.

Social media has a database of the micro-actions of (depending on how you define it) millions of people who struggle with alcoholism. Their data includes things that are necessarily below conscious experience, let alone learnable by a human. Although it’s not labelled as such in the algorithm’s internal workings, social media knows your friend is an alcoholic before you do, and probably before he does.

This is real. Social media companies used to give academics access to some of this data. I know that they log what you type but do not “send”, because of an interesting article that was written on how people have second thoughts on what they Tweet. Colleagues on the other side of the corporate wall have talked about the micro-data. The cross-platform tracking is an open secret. At some point, the companies realized that spreading this about was bad PR and largely cut the academics out.

Social media data collection violates every single expectation of privacy and personal sovereignty you have.

And not just you. Everyone else, as well.

The Algorithm wants you online

This is simple. Social media companies make money by selling ads. As far as I can tell, the underlying data is too valuable to sell (LinkedIn may be an exception—this seems like a dual system). The longer they keep you online, the more money they make. The algorithm is fine-tuned by thousands of extraordinarily good people with degrees not just in computer science, but social psychology, behavioral economics, and beyond.

The goal is to figure out how to keep you online, how to create the circumstances under which you are kept online, and how to shift your own preferences and behaviors in order to make achieving the first two goals easier and more decisive. The third goal—making you into a person with different values—doesn’t have to be an explicit goal of the system. It’s just what happens when you build a really good reinforcement algorithm.

One way to think about why this happens is as a post-selection effect. Some sites may have other—even noble—goals. These goals compete with the desire to simply keep people online longer. Revenues decline. They are bought by a more ruthless company. Facebook, for example, according to a calculation by Matt Stoller, is making millions of dollars off of QAnon. QAnon keeps people online.

In short: as has been said many times, you are the product. The longer you’re online, the more use they get out of you.

The Algorithm will find you

You are unique. This is both part of the American Ideology, and actually true. You have built, or are building, a life that is the product of an enormous number of decisions you’ve made and ideas you’ve formed. You’ve done this in a context that may have been more or less oriented towards your flourishing and empowerment, but nobody has navigated it the way you have. Even if you have an “ordinary” life from the outside, you have an inner life that is anything but.

That means that, at first, you’re very hard to model. Social media doesn’t exactly know who you are. They have some idea from what you’re posting, and who you’re following, but you’re not like your colleagues or your friends. Your life trajectory is not like anyone else’s.

But the longer you are on, the more data accumulates. Some of these signals are, as discussed above, extraordinarily weak—things that you can’t even notice because they happen too rapidly for conscious experience (~100s of milliseconds). Others may well be above the awareness threshold for you, but not their cumulative meaning.

At some point, the algorithm finds you—it determines how to increase your time online.

An example: early on in my Twitter use (I was going to say “career”) I saw a Tweet from Stanford Libraries that they had digitized a significant chunk of the transcripts of the French Revolution. I was teaching class that afternoon, so I did a little exploration and put up some preliminary results as an example of how to explore datasets. Three years later, with colleagues in computer science and history, we published an award-winning paper based around that data.

From the point of view of Twitter, however, this is a massive fail. I saw the Tweet, and logged off to work on it. The algorithm, if it was watching right then, learned to give me less of that. (I’ve often wondered if the algorithm is actively choosing what to feed you for epistemic reasons—i.e., not just trying to keep you online, but feeding you things that it thinks will best increase its knowledge of what, in the future, might.)

What keeps you online, of course, may be to your benefit. I’ve learned a lot from staying a bit too long on Twitter—e.g., that there’s a deep relationship between Lorentz transformations in special relativity and (wait for it) logistic regression. I’ve also kept people online to their benefit, with my occasional Tweet storms on (say) Kullback-Leibler divergence, the Many Worlds Hypothesis, the fact that OTC vitamin supplements are almost certainly harmful in every circumstance, and beyond. I’ve worked out some interesting ideas, and I’ve been fed really interesting information by people who know a lot more than I do, and who come from worlds that I don’t usually encounter.

But eventually the algorithm finds a way to push your buttons. It figures out which content is going to cause you to engage in a compulsive fashion. Jung would call this constellating a complex: drawing out what is maladaptive in your psyche.

For some people that might be a rage-spiral of political content; for others, interpersonal conflict or a desire to poke the bear—not in a beneficial way, but in a way that, in retrospect, reveals and magnifies what is aligned against your own flourishing. Less noticeable, but I think no less common a response, is lethargy, passivity, fatalism, anomie; somewhere in between is “aggrieved entitlement”, or the projection of self-scorn, and so on—the list is at least as long as the list of positive qualities, a shadow-list of their inversions and distortions.

In the meantime of course, social media is modeling everyone else as well. Among other things, they’re figuring out how you can be used to keep them online. If you’ve ever pushed someone’s buttons before, you know that’s literally part of the definition of not a good idea. That process is monetized and run on a grand scale. Outrage mobs are the most extreme version. They’re not a bug. They’re a feature.

At this point, you have a branch point. If you recognize what’s happening, you run. If you don’t, you go deeper until something goes really deeply wrong, and you walk.

Personally, I think I got lucky, because the algorithm found me twice in two days. The first time, it found a complex (let’s call it) and used it to keep me online. The second time, it used me to keep others online. It was the conjunction of the two that made it hard to ignore—if there had been some space between them, I might have dismissed both of them in turn. But it had become clear to me that there was a hidden common cause in play (a co-explanatory account).

What those events are isn’t particularly important, but I’ll describe them anyway, in case they ring a bell. I’ve learned from totally well-adjusted and respectable people that similar things have happened to them as well, and it’s been very useful.

In the first case (the algorithm finding my complex), I was confronted with information that was not only wrong, but being (I felt) used for political ends and to (in my opinion) disempower and manipulate people. I knew this information was wrong because I had gone back to the original data and done a statistical analysis. I then spiraled out to other datasets, all of which turned out to be consistent with my original findings.

I never encountered anyone of the opposite opinion who had done this level of work. This made me increasingly aggressive in arguing with others on the point, until I was generating sufficient aggro in myself and others that I was online not to talk about error analysis for a binomial distribution, but about how awful people were. When I finally logged off, I felt exhausted, drained, and (most importantly) ego-dystonic. I couldn’t sleep. This is not the person I am, I thought, but it made me afraid that I might become that way.
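The check itself was mundane. A minimal sketch of the kind of error analysis for a binomial distribution I mean—the counts here are invented purely for illustration—is a confidence interval on an observed proportion, e.g. the Wilson score interval:

```python
import math

def wilson_interval(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """Wilson score interval for a binomial proportion (z=1.96 ~ 95%)."""
    if n <= 0:
        raise ValueError("need at least one trial")
    p_hat = successes / n
    denom = 1 + z**2 / n
    center = (p_hat + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p_hat * (1 - p_hat) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Hypothetical example: 40 events observed in 100 trials.
lo, hi = wilson_interval(40, 100)  # roughly (0.31, 0.50)
```

If a claimed rate falls well outside an interval like this, the claim and the data are in tension—which is all the original dispute required.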

In the second case (the algorithm leading people to me), it was a discussion of climate change and cryptocurrency. For the first half hour, this was a discussion, mildly heated, the (to my mind) rough-and-tumble Twitter thing.

At some point, however, the algorithm discovered that this was an excellent way to keep cryptocurrency proponents online. A large number of these people were channeled by the algorithm to my account, about a thousand of whom sent personal attacks in a chained escalation of counter-speech. Being a (very small-scale) victim of an outrage mob was extraordinarily disturbing in ways I won’t go into, and I hope it doesn’t happen to you. About a hundred of these people also followed my feed, presumably hoping for more opportunities in the future. Twitter, of course, made money off the whole process, selling ads to everyone in between their Tweets.

(As a side note, I reported one Tweet for advocating offline harassment. Twitter rejected the complaint in minutes. My immediate response was—after all those amazing Tweet threads on information theory, you won’t protect me? No benefit of the doubt? But of course not. Twitter is not your publisher. You’re its product.)

In both cases, the algorithm was at work. In the first case, not only was Twitter presenting me with information that was keeping me online, it was also drawing others into the conversation that it thought would keep me online. I followed perhaps six thousand accounts. Might it have found another one that afternoon?

In the second case, for my Tweets to make it into Bitcoin Twitter required not only that they reach one of those people, but that others be presented with that person’s response and be themselves drawn in. (If you feel that I’m talking about you, I’m probably not—regardless, only one piece of writing on Bitcoin of mine is now online, which I very much doubt will be found offensive.)

I’ve talked here largely in the passive voice: as something the algorithm is acting on, rather than as an agent responsible for my own actions. I think that’s OK. Agency is, in part, about (1) avoiding things that push your buttons, and (2) figuring out, reflectively, what those buttons are really about so (1) is less and less necessary. In this context, agency is turning off social media, in part because (2), in the presence of the algorithm, is never-ending. The algorithm not only seeks out your buttons, but learns how to cultivate, and magnify, the ones that you had dealt with, in ways that are essentially invisible.

Walk away from The Algorithm

In retrospect I wonder how long ago the algorithm had found me. A month beforehand? A year? Three years? Of course, I’ll never know. I escaped hitting “rock bottom”, which is the traditional wake-up call. But the algorithm works on super-human scales, at levels of subtlety that we can’t approach.

Has the algorithm found you? For some people the tell might be whether you’ve sought out an argument online that you would never have picked in person. For others, the tell might be quite the opposite—the cultivation of an anomalously submissive personality; as the kids say, you’ve become a simp. Maybe you’ve gotten depressed and increasingly reliant on the online support of strangers far away who just happen to be on the site you use. ADHD? It may not end well. I don’t think we’ve begun to catalog all the different ways in which things go wrong. What’s much more worrying is that you might not notice. My guess is also that the algorithm uses people in different ways: e.g., it likely uses women more as a means to keep others online, using them to push other people’s buttons.

Will the algorithm find you? I think it’s certainly possible that it might take a long time, if you’re on-again/off-again. You may benefit in the meantime, the way I stumbled across the Stanford data, before the algorithm finds you.

There’s one very unambiguous way people discover the algorithm has found them, of course. Somehow, something they’ve said offhand—or, more usually, something someone else says about what they’ve said offhand—goes viral. They lose their jobs, their livelihoods, everything. The other person often does too. That’s when social media hits the jackpot. You’ve probably looked in on a few trainwrecks yourself. They’re made for you. Social media sold you ads while you looked, and learned a little more about you. The people themselves are collateral damage.

People like social media. It reminds me of how people used to smoke, but only at parties. There’s certainly a social benefit to being able to talk to a stranger to ask for a light. But the downside is too high. I’ve come around to the view that social media is like tobacco: there is no safe level of use.

There’s also second-hand smoke. Jaron Lanier talks about collective effects. You being online draws other people online. It’s not, in other words, just that the algorithm harms you. It is also using you to harm other people. So there’s a second important benefit here, to leaving, which is a moral one. Stop harming people.

I appreciate Paul Skallas’s suggestion that Twitter is like a bar, but at some point the experience shifts. It becomes the very (un-Lindy) social form of a highly addictive, ammonia-laced cigarette. There are instructive dis-analogies. Bars have bouncers, for example. They’ll throw people out—including you!—if things get too intense. They have an ambiance, which provides common expectations for behavior. All of these things help both you and the people around you.

No, but seriously

Delete your social media account. Facebook makes it a bit tricky (you have to Google how to do it), but it only takes a few clicks. There’s a waiting period of thirty days during which you can change your mind. (Notably, there’s not a thirty day waiting period to create an account—how odd.)

The only exception I can think of is running an “institutional” account. But if you’re doing that, you’re a communications professional, you are literally paid to do it, and I don’t see it as a harm unless personal boundaries get crossed. The social media companies will be manipulating your business, not you; that’s a different matter. If you’re running a freelance “brand”, I think it makes sense to have an account that posts links to your work off Twitter, but only if that posting is automatic, replies are disabled, and you don’t log on. There is currently no platform-neutral notification system. Meanwhile, and just as an example, I’ve seen advice directed at early-career academics that having a social media presence can be used as a way to communicate your research. I think this is a terrible idea, for the reasons outlined above.

In as much as you’re a person on social media, the algorithm will find you.

I’ve wanted to stay very focused on a single 11th reason. I will say, briefly, that you can get every benefit you get from social media in another form. The most obvious one, for those who like to work out ideas, is writing for a publication. It doesn’t have to be the New Yorker. There are an enormous number of venues online, they have real readerships, and very low barriers to entry. I occasionally write science fiction stories; the most recent is in Teleport Magazine, which was great, and also has a higher acceptance rate than the New Yorker. Writing long form is a much more challenging process in part because you’re not getting the rapid-fire feedback of social media.

The other obvious one is to leave the house. Social media is powerful in part because it creates common, shared experiences. But there are other sources of these—mostly in public spaces and conversations. “It’s a dangerous business, going out of your door.” You don’t have to go totally offline: Slack and IRC provide ways to talk and organize without an Eye of Sauron. Indeed, IRC is probably ideal—it’s an open source system without profit. The only moral dangers are the ones you bring. There used to be RSS, which was platform-neutral but (perhaps unsurprisingly) killed off, in part by Google.

If you’re worried about freaking people out by leaving social media without warning, write a comment thread talking about the reasons why you’re deleting your account. It will take a few minutes, enough time to propagate to enough people that the world won’t think you’re on the lam when you delete ten minutes later. You can turn off comments, so you’re not drawn to engage further.

You’re welcome to link this piece, although I won’t know you have.

I won’t judge you if you stay. The reasons I’ve talked about here turned out to be dispositive for me. They might not be dispositive for you. That’s OK.

The benefits, in the end, I believe, are real. It’s not just that you escape the algorithm. I don’t know what it might be for you. If you leave before it finds you, perhaps not much. But if you leave, whatever does happen next is something that’s up to you, not it.