Designing for Evil
Wherein I give you a “Defense against the Dark Arts” primer for designers.
I want to talk about software design. Specifically, I want to talk about how to design your products to resist the effects of evil.
I need to open this entry with a trigger warning. It isn’t possible to talk about defending against harassment without being exposed to it.
That said, here we go.
I strongly believe that I have a duty to try to prevent harm from coming to those who choose to use the things I design. This means that I need to think about the bad parts of the system, which often isn’t very pleasant.
I want to talk about Anita Sarkeesian and the horrible things that have been happening to her over the past years but first I feel like I need to establish some street credentials.
Back in the year 2011, several employees of the Wikimedia Foundation were put up on the site’s yearly fundraising banners. I was one of these people. I was a very successful banner candidate. I’ve written about this experience before but I wasn’t very expansive about the darker side.
Whenever my banner went up for a test run, I could literally feel the internet turn its attention to me like the fucking Eye of Sauron. Hundreds of tweets, LinkedIn views, Facebook posts. Pow, pow, pow. Lots of it was fun and exciting. Some of it was . . . not.
It’s a bit of a bummer to be told by random strangers that you look like a pedophile. Especially when they don’t know anything about you.
Back to Anita.
I don’t want to write about Gamergate or the state of the art of misogyny on the internet but I need to provide some context.
Anita Sarkeesian is a feminist game critic. She produces a series of educational videos about how sexism pervades the game industry. She does not, in any way, call for censorship or banning of topics or anything like that. She really only says, “just be aware of what’s happening here and maybe try to do better.”
For these statements, she has been continually bombarded with harassment through every possible means available to trolls on the internet.
In early 2015, she posted a blog entry detailing a single week’s worth of harassment. Scrolling through it reveals a seemingly inexhaustible stream of sewage and hatred. Some of it is ironically self-aware.
Let’s scroll through a minuscule amount of Anita’s harassment.
I dare you to click over and scroll through the full list. See if you can get through Monday.
Anita gets thousands of times more hatred than I ever did. I almost buckled under the weight of the sewage directed at me. I can’t imagine how strong she must be to keep going.
I’ve not been very scientific in my investigations but it appears that only about half of these accounts have been suspended or blocked. Not that such action matters much: these shit-goblins simply create a new anonymous account and let the good times roll again.
This is the face of evil. Beelzebub with the thousand eyes and mouths.
It’s a true failure on Twitter’s part. One they have acknowledged in public but (at the time of this writing) have done nothing to address.
When you design a product without understanding how it will be used for evil, you are designing poorly.
Let’s take a moment to understand the basic mindset of internet trolls. There are, as near as I can tell, three primary motivations that any one troll will have at a time.
Understanding these things will help you defend your users against them.
To Defeat the System
These trolls want to break the system just to break it. To do it for the lulz or the thrill of doing it. The desire to defeat systems (hacking or cracking them) is a deep part of hacker psyche. They aren’t necessarily motivated by evil but they often will open the door for others who are.
These people will find holes in your systems. They do it just to find them. But once they’ve found them, they nearly always share these holes with others.
To Subvert the System
Trolls who subvert a system intend to use it against the spirit of the system. This is often for laughs but sometimes it has very, very dark results.
In 2009, Christopher Poole was voted the world’s most influential person in Time magazine’s online poll, beating out Barack Obama, because users of 4chan figured out how to game the voting software. This year’s Hugo Awards have been hijacked because someone figured out how to bend the rules in their favor. No big deal, right? No one is getting hurt, right?
Some horrible people use Secret to disseminate revenge porn and child porn. Secret’s not a great way to do bulk distribution of child porn, though. Embedding zip archives of this material into SVG files and uploading them to a site like Flickr or Wikimedia Commons may be, however. Or stashing it as large attachments in unsent draft emails on any one of a thousand free web-mail services.
To Weaponize the System
This is when your system or design is being used against you or another person in a hostile, damaging manner. This nearly always happens because of “Not Thinking It Through”.
This may not always happen directly in your product, mind. Data leakage may lead to someone being doxxed on another site, which may then lead to a swatting. Or worse.
Consider the proud young parent posting photos of their child at play to Facebook with open privacy settings. Are there things in that photo where a predator could identify the location?
How can you prevent your product or design or system from being abused? How can you deal with it?
Well, there’s no silver bullet here. There are a number of strategies you can employ, though. Many will not apply to your product. You will probably need to use several at once, each at a differing degree of strength or opacity.
Some of these strategies suck, but I’ll include them for completeness’ sake.
Just do jack and shit about it.
This is the worst strategy. You can do it – and some companies appear to remain successful while doing so. This is the way car companies handle recalls: they only act when there’s sufficient blood on the pavement to affect the bottom line.
I personally find this to be odious and unethical.
Shut it Down
Just prevent anyone from doing it at all. This typically means shutting down your application entirely. It’s often a last-resort solution.
PostSecret had a short-lived application that allowed users to post their own photos and captions. It was pulled when people started posting porn and gore, because there were no features to limit this and there was insufficient moderation to work at scale.
This is not a good mitigation strategy because everyone loses.
Make Troll Personas
This is a strategy for understanding your weaknesses. Many design teams create personas for the users they want to serve, the customers they want to have. Good personas are an excellent tool for helping to understand the business needs of your product or market. These personas are almost universally nice, however, and always assume good faith on the part of the persona.
I say to you thus: you must always make at least one “troll” persona. You must learn to think like your enemy. Think about their motivations and how they will subvert your product to aid them.
Limit Feature Strength
This means reducing or intentionally crippling your product’s features in order to protect your users.
Years ago I worked on a site that was intended as a social and games site for children. They wanted to have a chat system. Obviously, we wanted to make sure that foul language wasn’t a part of it.
It would be easy to write a series of regular expressions so that the chat catches and censors Carlin’s magic seven and all variations. It’s not so easy to catch “Hello, little girl, what time do you get out of school?” or “I am going to put you in a wood chipper.”
This is why Nintendo’s chat systems only allow you to pick from canned statements.
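The limits of word-list filtering are easy to demonstrate in code. This is a minimal sketch (the word list and letter-substitution patterns here are hypothetical stand-ins, not a real product’s filter): the regex catches the words it was built for, but a genuinely threatening sentence passes straight through, because no individual word in it is banned.

```python
import re

# Naive profanity filter: a small, hypothetical word list with
# common letter-substitution variants baked into the patterns.
BANNED = [r"d[a4]mn", r"h[e3]ll"]
PATTERN = re.compile(r"\b(?:" + "|".join(BANNED) + r")\b", re.IGNORECASE)

def censor(message: str) -> str:
    """Replace any banned word with asterisks of the same length."""
    return PATTERN.sub(lambda m: "*" * len(m.group()), message)

# The filter handles exactly what it was built for...
print(censor("What the h3ll?"))  # → What the ****?

# ...but a predatory or threatening sentence contains no banned
# words at all, so it sails through untouched.
print(censor("What time do you get out of school?"))
```

This is the core problem: the threat lives in the semantics of the whole sentence, not in any token a regex can match, which is why locked-down designs like canned phrases end up being the robust option.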
Zero Tolerance
Very simple: have a very strong code of conduct and brook exactly zero violations. You must be merciless. You must not allow for rules-lawyering. Identify bad actors and get rid of them.
Wikipedia has some editors who are simply horrible, toxic individuals. The way they conduct themselves and talk to new users drives new users away forever. They are allowed to remain because there is always some bullshit reason why the latest round of bad behavior is “okay”.
This is the type of behavior that creates gender gaps.
Educate Your Users
You can educate users about the bad things that could potentially happen and the steps they can take to reduce their risk.
The biggest problem here is that no one wants to read a bunch of snooze-fest documentation. I didn’t join Facebook to take a class about it. Sometimes you can put up interstitial dialogs (like an end-user license agreement), but are you ever really sure that the user understands them?
Does the proud parent really understand that the photo of their daughter’s recital they just uploaded is geo-tagged? Did they think about the fact that they took it at the school? Do they really understand what “Friends of friends can see this” means?
Eliminate Anonymity
Simply prevent people from posting or using the service completely anonymously. Allowing pseudonymity is fine, even great (and recommended). Just make sure that there is a way to tie any activity back to a specific user account.
Purely anonymous culture is fairly toxic, so you don’t want that anywhere near you. There’s a reason moot stepped down from running 4chan. But you don’t want to force “real names,” either, because that will probably open you up to other harms (like dead-naming transgender people).
Access Control Systems
Give users control over who can contact them and how. This nearly always requires both white lists and black lists working alongside a sensible default setting.
LiveJournal does this very well: my private posts are readable only by those I’ve marked as “friends,” and I can even write elaborate rules to post only to groups, or to specific people.
Facebook has this kind of fine control, too, but it falls apart very quickly. There are too many options and degrees of visibility, and the lack of any serious group support makes managing access difficult.
It should be terribly easy to add someone to a block list. Press-and-hold on a tweet and I can block its author in one tap. Blocking someone on Secret, however, requires me to first read the offending secret (which usually contains a photo of gore or revenge porn), report it, and only then can I block the user.
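The pattern itself is small. Here is a minimal sketch (the class, names, and policy choices are my own invention, not any particular product’s API) of per-user contact settings combining a default policy with explicit allow and block lists, where a block always wins:

```python
from dataclasses import dataclass, field
from enum import Enum

class Default(Enum):
    ANYONE = "anyone"        # open by default; the block list carves out exceptions
    FRIENDS_ONLY = "friends" # closed by default; the allow list grants access

@dataclass
class ContactSettings:
    """Hypothetical per-user contact controls."""
    default: Default = Default.ANYONE
    allowed: set = field(default_factory=set)
    blocked: set = field(default_factory=set)

    def can_contact(self, sender: str) -> bool:
        if sender in self.blocked:       # a block always wins
            return False
        if self.default is Default.ANYONE:
            return True
        return sender in self.allowed    # closed default: allow list only

settings = ContactSettings(default=Default.FRIENDS_ONLY, allowed={"alice"})
print(settings.can_contact("alice"))    # → True
print(settings.can_contact("mallory"))  # → False
```

The design choice worth copying is the precedence order: the block check comes first, so even a “friend” who turns abusive can be cut off without untangling the rest of the rules.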
Shadow Reputation Systems
This is a great method but it requires a lot of research and technology. You’ll need to instrument everything in your product and identify several patterns of behavior used by your bad actors.
When your system sees someone engaging in these behaviors, you silently and secretly drop them into the penalty bucket. This is called shadow-banning or hell-banning.
For example, say your product is one that allows your users to rent out extra rooms in their apartments for short-term stays. If a new user joins your site and then their first several actions are to browse exclusively female profiles, you might be able to determine that they really aren’t there for the rooms but instead to creep on women. The system could then silently prevent messages they sent from arriving at their targets and they themselves may never appear in searches.
In order for shadow-bans to work, you cannot allow anonymous access to your site. Everything must sit behind a log-in wall. The reason is that if a banned user can browse without logging in, they will see that their comments are invisible to everyone else and will know that they’ve been shadow-banned.
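The delivery mechanics can be sketched in a few lines. This is a toy model (the class, user names, and the reputation-system trigger are all hypothetical): a flagged user’s posts appear to succeed and remain visible to that user, but are silently filtered out of everyone else’s view.

```python
class MessageBoard:
    """Toy message board demonstrating a shadow-ban."""

    def __init__(self):
        self.shadow_banned = set()
        self.messages = []  # list of (author, text) tuples

    def flag(self, user):
        """Called by a reputation system when abusive patterns are detected."""
        self.shadow_banned.add(user)

    def post(self, author, text):
        # The post always "succeeds" -- a banned user never sees an error.
        self.messages.append((author, text))

    def visible_to(self, viewer):
        # Everyone always sees their own posts, so the ban stays invisible;
        # other users never see a shadow-banned author's posts.
        return [
            (author, text)
            for author, text in self.messages
            if author == viewer or author not in self.shadow_banned
        ]

board = MessageBoard()
board.post("alice", "hi all")
board.flag("creep")
board.post("creep", "unwanted message")
print(board.visible_to("creep"))  # sees both posts, including their own
print(board.visible_to("alice"))  # sees only alice's post
```

Note that `visible_to` is why the log-in wall matters: the illusion only holds if every view of the site passes through a function that knows who is looking.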
When all is said and done, when you’ve set your ideas to paper, you have to sit down and ask yourself a very specific question:
How could this feature be exploited to harm someone?
Now, replace the word “could” with the word “will.”
How will this feature be exploited to harm someone?
You have to ask that question. You have to be unflinching about the answers, too.
Because if you don’t, someone else will.
Comments on Designing for Evil
“Wikipedia has some editors who are simply horrible, toxic individuals. The way they conduct themselves and talk to new users drives new users away forever.”
Correction: they drive *all* kinds away, long-time users and other contributors, and make Wikimedia Foundation employees dread participating in many discussions. It depresses me the Foundation pays heroic individuals for Community Liaison work ( http://wikimediafoundation.org/wiki/Job_openings/Community_Liaison ) in part to shield staff from the awfulness in an open transparent project that is so valuable to humanity.
The black magic in wizard-level evil-speak is its practitioners have mastered a condescending, demeaning, dismissive, hurtful, insulting tone that isn’t even against community standards and passes any and all AI filtering, while the appropriate response (“F*** you, you abusive s***head, for p***ing all over my best-faith efforts, may you rot in hell for ruining my day for no good reason”) makes the VICTIM the bad guy.
You’re absolutely right. I was going a bit easy on Wikipedia because I love it so much. I wonder if things would be different if I had been more vocal about how horrible the situation is.
Of course, the more elaborate the systems which are in place for silencing and eliminating voices, the more _truly_ evil people will work to get themselves put in charge of managing those systems and defining the offenses. How are you going to keep these tools from falling into the hands of those who wish to control discourse for political reasons or simply for personal aggrandizement?
Telling people that I don’t want them to shit in my yard is not censorship or “silencing and eliminating voices.” That’s kind of the weird thing that so many people – the harassing, men’s rights types – don’t get.
Call it whatever you like, but in return I’d appreciate it if you’d address the point I brought up. These are tools for preventing people from speaking on a particular platform. What stops them from falling into the wrong hands? Say, someone who decides the definition of “harassment” completely depends on the political opinions of the parties involved?
Okay. I’ll address your point: You don’t have one. The exact purpose of many of these techniques is to prevent certain people from interacting with platforms they are deployed on. This isn’t censorship or fascism or reduction of free speech or anything like that. At best, it’s preventing assholes from polluting the conversations that are occurring.
Lmao, I agree. You do look like a pedophile.
Your fake email address is the best, by the way.
Whereas I think you look like Lemmy from Motorhead, except not ugly.
…and still living.
I enjoyed your post and would like to know your thoughts on the appropriate consequences (if any at all) for a machine or person who has not agreed to any terms or policies but is still able to use a site/service to the extent that they could be considered to be breaking the rules (rules which are described in the terms/policies they never agreed to, and which apply only to that site/service)?
Also, could you share Underbridge’s email for my amusement?
Well, I think the only appropriate response to the scenario you’re outlining is banning or blocking. If your product can’t do it, you’ll have to do it at the webserver layer, maybe.
And it was just “[email protected]” – very specifically calling out the fact that they were trying to troll me.
I came this way via Steam…
Good to know you won the fight.
Browsing a bit I hit upon this “talk about”. I don’t personally do Twitter or Facebook, nor any other social media fora. Over the past few years I’ve noticed a noisy rustle amongst the pages of news media sites I traverse. It’s all to do with lowly things, things that brush against grass blades, breath warm dog doings and dodge inclement weather via mole tunnels. Collectively they’ve been termed Trolls.
When a child I loved fairy stories. I knew, of course, it was all make-believe. But sometimes when a new character impressed themselves upon my tiny mind they would become a part of my reality. Bed covers were the ideal way of dispatching them.
You see, pretending something isn’t there is really very easy.
To deny Trolls you must believe they don’t exist. Just like God and Santa, Trolls are fictional representations of need and greed. They truly urge to be fed. Starving them of all calories is the most humane way of destroying them. Don’t acknowledge they exist. Don’t talk about them as that just serves to make them stronger, greedier and more energetic. Ignore them. Everyone, just ignore them. Don’t respond. Don’t talk about them. Don’t write about them. Don’t even whisper or they will manoeuvre to the very edges of nearby shadows.
Make them demons and they will become demonic. They need quenching.
Don’t ascribe them the mythology that they are dangerous, devious paedophiles. This is the myth. The creatures you’re most at risk of will sit on your bed and read you stories, fairy stories. They will have familiar faces and soothing voices.
You will call them, mother, father, brother, sister, gran or gran dad… Uncle, aunt. Even Mr. or Mrs. Doe next door. They will rarely be far away.
Best ignore them.
Night, night. Sleep tight.
Another writer paints the rose-colored picture of what Sarkeesian has done. She didn’t just offer an opinion about sexism in video games. She declared things as fact which were not. She also makes other ridiculous statements from her world view of patriarchy. She got a large sum of donor money for making videos and after 2 years as far as I know did not complete her project.
She lied about being a gamer, she made things up about the games (hitman being one example). She stole game footage from other gamers claiming it as her own. Her academic credentials are pathetic as is her thesis. And if she reads this she would call it harassment rather than simply statements made that reveal her as hypocrite and a liar.
Her package of feminism is morally revolting. Her own proposal for a game broke the same rules she critiqued other games for, e.g. using violence against another gender to solve problems, not having in-depth characters, objectifying men. Everyone should do themselves a favor and watch Thunderf00t’s videos on the continued Sarkeesian scam. Not only that, but Zoe Quinn, another perpetual victim, was sleeping with gaming journalists to get good publicity for her game.
Again trying to make the story about harassment, when really guys get just as much harassment playing Xbox Live for 30 minutes as girls might get over Twitter over a weekend. It’s an even playing field, but some of the players want to be victims, and everywhere they go they ruin things at every turn, making the story about them.
Cool story, bro. Not sure why you’re here, though.
Your response to Mike XD really leaves a bad taste in the mouth. I can’t see any justification for being so rude and derogatory to someone who has taken the time to interact with you over a point made in a public blog post.
Cool story, bro.
I find your comment the most honest and useful content on this entire page. While it’s true that misogyny and hateful bottom-feeding internet trolls are both very real problems today, another significant problem in the world is simply having large groups of people surrender their critical thinking to become mere puppets for a subculture built around rhetoric, lies, and willful ignorance.
Sarkeesian is a manipulator who intentionally incites those bottom feeders, not to improve the situation by shining a light on it, but rather to exploit those naive enough to think she is a victim.
Cool story, bro.
>This is the way that car companies handle recalls: only deal when there’s sufficient blood on the pavement to affect the bottom line.
I find this analogy hyperbolic. How many people have died from communicating? You’re talking about making communication software that
A) Doesn’t allow people to speak freely since they’re forced to use their real identity over it
B) Takes significant cost in order for every single piece of communication to be moderated
C) Only allows content which a certain group in power permits
This kind of software is just as defective to me as a car that doesn’t do what you tell it. Personally, I don’t use any such medium, as there’s generally nothing I care about on these mediums, and I don’t like the idea of having my personal correspondence monitored, whether it’s in person or remote. I use private end-to-end encrypted communications to talk to everyone I know, and will only talk to them in person if they won’t adopt one. Your complaints mostly seem to be about people making negative comments to famous people. This has always happened and always will. PR teams moderate comments on public mediums and always have, mostly to save face, not for ethical reasons.
As you know, famous people receive too much communication to process. It basically becomes noise. Really, there’s no difference between spam and attempts by the public to communicate with famous people. This is why most famous people only talk to people they know or are introduced to by someone they know. Software already supports this concept via private instant messengers which require you to whitelist someone before they can communicate with you.
You also seem to complain about child porn being distributed over some program (page doesn’t load for me so I can’t see what it is). In fact, any content that can be represented in binary can be transmitted over *any* medium. You will never fix this without removing free will or having absolute surveillance over every individual. Again, you claim software should filter this content. It already does. All operators are forced to remove any content they see that’s illegal, and are forced to comply to DMCA notices and similar requests. At the end of the day, people will and always will have exchanged items in person. So your concern is reducible to a plea for removal of free will and/or absolute surveillance.
Basically, it just sounds like you want more surveillance and moderation. Not anything new.
You’ve clearly paid attention and accurately grasped my points and meaning! I’ll be happy to let everyone know that you’re right and have solved the problems!
Sorry, I’m not sure, but I thought the problem here was unmoderated negative opinions? I don’t consider this a problem that can be solved, or one where a solution is desirable. In private, you simply communicate with who you want. If you don’t like someone, you can just not talk to them. In public, anyone can say what they want about you. You can’t control the public’s opinion. You can also construct a public medium with moderation and kick out anyone who says something you perceive as negative. Sorry if these are bad examples, but this is what marketing agencies and North Korea do. The end result is a warped view of reality, as you can only see a certain subset of opinions. This often even alienates readers to the point where they just ignore the medium. Personally, I’ve received “death threats” on my public facets as well, but I just ignore them. I don’t know if everyone else is like me, but I also don’t spend time going through and deleting them, as that would be self-patronizing and I feel like it’s an attempt at manipulating public perception (this was on a medium intended for mature audiences of course, just as Wikipedia is).
You may want to actually read the words I wrote, if you think this is about “unmoderated comments”. I know that Gamergaters like making everything about that, about how Anita should just suck it up and take it, and you see everything through that naive lens, but it’s not.
Feel free to come back when you have something intelligent to say.
You really make clear observations and do supply the current solutions to the problems.
Only they all fall into the same category as democracy: it gives the best options yet, but none of them are really good.
Power corrupts, so who controls the moderators, or decides what needs moderating?
There is no problem in a perfect dictatorship, until the point the dictator proves imperfect. So alas, you give the best solutions yet, but they don’t suffice.
A well-written piece that made me think, thank you.
I’ve never seen someone refer to the deadnaming issue before. I’m surprised, because few people even acknowledge the existence of transgender people. (Now that I think about it, I’m glad that Facebook let me change my name when I transitioned.)
Thanks for voicing it
A bit late to the party, and while I do appreciate the article, I have to admit it’s a bit hilarious that after all the discussion of designing against abuse you ended up leaving comments open, anonymous, and seemingly unfiltered.
Except they’re not anonymous or unfiltered. They’re moderated and approved.