Sell your kids for clicks

There’s a deeply worrying article in The Guardian about the rise of child labour on the internet.

Making videos of your kids might not seem like work, but it is: as one interviewee puts it, “it’s not play if you’re making money”. Child performers are subject to laws designed to protect them from exploitation not just by employers but by their parents. Online, those laws are being evaded or avoided.

Money made online by children – and that money can be significant – goes directly to their parents, because children can’t have social media accounts on the likes of YouTube or Facebook.

We’re easily seduced by technology, and that seduction often blinds us to the distinctly old-fashioned things that technology enables: union-busting, unethical practices and “disruption” not just of industries but of the laws designed to protect individuals from rapacious employers and greedy parents alike. YouTube may be relatively new, but children being exploited by the people behind the cameras is not.


“Let it all burn down”

In an extract from his upcoming book Ruined By Design, Mike Monteiro explains the problem with social media and how it ruined the early promise of the internet.

The people who built Twitter (and other services) were a bunch of young men. This is important.

More accurately, they were a bunch of white guys. Those white guys, and I’ll keep giving them the benefit of the doubt and say they did it with the best of intentions, designed the foundation of a platform that would later collapse under the weight of harassment, abuse, death threats, rape threats, doxxing, and the eventual takeover of the alt-right and their racist idiot pumpkin king.

Women are woefully under-represented in the tech sector; ethnic minorities and LGBT people are barely on the radar. So straight white guys built systems that enabled such horrors because, as cisgender straight white guys, they haven’t experienced the things that women and minorities experience in life and online.

Incidentally, nobody is saying there’s anything wrong with being a cisgender straight white guy. That’s not what Monteiro is saying, and it’s not what I’m saying. The point here is that people build stuff based on what they know.

Monteiro:

All the white boys in the room, even with the best of intentions, will only ever know what it’s like to make decisions as a white boy. They will only ever have the experiences of white boys. This is true of anyone. You will design things that fit within your own experiences. Even those that attempt to look outside their own experiences will only ever know what questions to ask based on that experience. Even those doing good research can only ask questions they think to ask. In short, even the most well-meaning white boys don’t know what they don’t know. That’s before we even deal with the ones that aren’t well-meaning. (I see you, Travis.)

You don’t ask “could this be used maliciously by abusive exes?” if you haven’t fled an abusive ex. You don’t ask “could this be used to target gay people?” if you haven’t been targeted as a gay person. And so on.

Twitter never built in a way to deal with harassment because none of the people designing it had ever been harassed, so it didn’t come up. Twitter didn’t build in a way to deal with threats because none of the people designing it had ever gotten a death threat. It didn’t come up. Twitter didn’t build in a way to deal with stalking because no one on the team had ever been stalked. It didn’t come up.

This is one of the key problems with the internet as it is today: it’s been largely built by and for cisgender straight white guys. So for example Facebook enforces a real-name system that bans pseudonyms because cisgender straight white guys don’t need to hide their identities – but women fleeing abusive exes and LGBT people often do. Again and again we see platforms used maliciously because the people who built those platforms didn’t imagine such abuse, and don’t seem too keen on policing it either.

Technology is often portrayed as an unalloyed good, disrupting moribund industries and giving power to the people. But all too often it gives power to the wrong people: the oppressors, not the oppressed.

We designed and built platforms that undermined democracy across the world. We designed and built technology that is used to round up immigrants and refugees and put them in cages. We designed and built platforms that young, stupid, hateful men use to demean and shame women. We designed and built an entire industry that exploits the poor in order to make old rich men even richer.

One of the most telling signs that something is very wrong with social media is the flood of tech and social media founders and executives who won’t let their own children go online much, or at all. As Monteiro says:

When we refuse to let our own children use the fruits of our labor while still cashing the checks we’re earning by addicting other people’s children — all the while rending our garments over “what’s happening to kids today!” — we need to burn all our work down.
Nothing is happening to the children. We are doing something to the children. Let it all burn down, and let those that come after us sift through the ashes to learn from our mistakes.

The great internet sex war

In the aftermath of the social network Tumblr banning all explicit content, some writers have considered the wider implications. The reasons for the ban are pretty clear – for example, Tumblr has a problem with illegal content and it’s easier and cheaper to ban all potentially problematic content than to moderate it – but the results can be far-reaching.

Steven Thrasher in The Atlantic explains What Tumblr’s Porn Ban Really Means.

But the Tumblr adult-content purge reveals the enormous cultural authority, financial extraction, and what the philosopher Michel Foucault called “biopower” that tech companies wield over our life. As intimate interactions are ever more mediated by tech giants, that power will only increase, and more and more of our humanity is bound to be mediated through content moderation. That moderation is subjective, culturally specific, and utterly political. And Silicon Valley doesn’t have a sterling track record of getting it right.

The problem with such subjectivity is summed up pretty well by one trans person’s question: they’re undergoing transition from male to female. At what point do their nipples become “female-presenting”, which the new Tumblr rules explicitly prohibit? It’s the same issue that means Facebook takes down breastfeeding images: boobs are just for porn, right?

There’s a problem with some explicit content. But not all of it. For some people it’s an opportunity to explore sexuality and identity in a safe environment. Take trans people, for example. Explicit Tumblr blogs are among the very few places where you can see positive portrayals of trans men and trans women as sexually desirable. They’re also among the few places where you can see what your body might look like after hormones, or after surgery. Content bans affect that content too.

Thrasher again:

Using social media intimately in our life hasn’t been all bad. Indeed, as a recent scientific article by Oliver Haimson on some 240 Tumblr gender “transition blogs” showed, social media can play “an important role in adding complexity to people’s experiences managing changing identities during life transitions.”

I can attest to that: before I came out I spent a lot of time reading LGBTI Tumblr blogs that posted what the new rules might well prohibit.

Over at Engadget, Violet Blue describes “the internet war on sex”.

While we were all distracted by the moist dumpster fire of Tumblr announcing its porn ban, Facebook updated its startling, wide-ranging anti-sex policy that is surely making evangelicals and incels cream their jeans (let’s just hope they don’t post about that). Facebook’s astonishing ban on language pertaining to sexuality, among many other things sex-related, is so sweeping and egregiously censorious that it’s impossible to list all its insanity concisely.

It’s called the “Sexual Solicitation” policy. Along with “sexual slang,” the world’s standard-bearing social media company is policing and banning “sex chat or conversations,” “mentioning sexual roles, sexual preference, commonly sexualized areas of the body” and more.

This, remember, is the social network that can’t tell the difference between hardcore pornography and women sharing photos of themselves breastfeeding.

Once again, the rules are designed to address a problem with some content. However:

…the arc of internet sex censorship is long, and it bends as far away from justice (and reason) as possible. Corporations controlling the internet had been steadily (and sneakily, hypocritically) moving this direction all along, at great expense to women, LGBT people, artists, educators, writers, and marginalized communities — and to the delight of bigots and conservatives everywhere.

The Facebook and Tumblr news came after Starbucks announced it will start filtering its WiFi with one of those secret porn blacklists that always screw productivity for anyone researching grown-up topics, and invariably filter out crucial health and culture websites.

The list goes on. Instagram goose-steps for Facebook’s censors; Amazon buries sex books; Patreon, Cloudflare, PayPal, and Square are among many which are tacitly unsafe for anyone whose business comes near sexuality. Google’s sex censorship timeline is bad, YouTube is worse. Twitter teeters on the edge of sex censorship amidst its many uncertainties of trust for its users.

The problem here is that even if you agree with the rationale behind the steps the tech giants take, there is always collateral damage – and that damage tends to affect minorities and creative people and educators.

Here’s an example from a few weeks back in Sweden: a government-run website made a sex education video. Facebook, Instagram and Snapchat blocked it.

Locked out

I’ve been locked out of my Twitter account for a terrible, terrible crime.

No, not being a big old Nazi. Messing with the year of birth in my profile page. This, apparently, is a really bad thing and I can’t currently read anything on Twitter or see other people’s messages to me.

It’s been brilliant.

Being unable to access Twitter has made it clear that my relationship with social media is completely out of whack. I’m following too many people and indulging too many more, and the result is a firehose of fury with precious few of the funny cat pictures and dad jokes I signed up for. It’s become a massive time thief and a drain on my mental health.

I’m not quite ready to bin Twitter altogether – although I’m close – but assuming Twitter decides to let me back in again, I’m going to massively reduce the number of people I follow: not because they’re bad people, because I don’t follow bad people, but because I’ve let myself fall into a situation where there are just too many people talking at once. I can’t hear myself think above the din.

It’s a start

Facebook has taken down much of Alex “Infowars” Jones’ content, as have Apple and Spotify.

(Update, 7/8/18: Apple was the first to move. The others were clearly waiting for somebody else to lead.)

Reuters:

The company [Facebook] said it removed the pages “for glorifying violence, which violates our graphic violence policy, and using dehumanizing language to describe people who are transgender, Muslims and immigrants, which violates our hate speech policies.”

Apple:

Apple does not tolerate hate speech

This stuff is all in the terms and conditions. For example, for Apple’s podcasts there is an outright ban on:

  • Content that could be construed as racist, misogynist, or homophobic
  • Content depicting graphic sex, violence, gore, illegal drugs, or hate themes

Although its enforcement has been patchy, this is Facebook’s policy:

We do not allow hate speech on Facebook… We define hate speech as a direct attack on people based on what we call protected characteristics – race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity and serious disability or disease.

I have some sympathy for these firms, because enforcement is a big job. Facebook again:

Over the last two months, on average, we deleted around 66,000 posts reported as hate speech per week — that’s around 288,000 posts a month globally.

That’s a lot of hate. But the point is, it’s against the rules whether it’s uploaded to Apple, posted on Facebook, streamed on Spotify or tweeted on Twitter. Apple alone is now a $1 trillion company; Facebook $522 billion; Spotify $32 billion; and Twitter $24 billion. If they’re short of moderators, they can afford to hire more.

Facebook doesn’t want to be evil, but it is

This, by Nikhil Sonnad, is a superb analysis of what’s wrong with Facebook and why it’s sending the world to hell in a handcart. Much of it applies equally to Twitter.

Sonnad begins with the story of Antonio Perkins, who was shot dead as he filmed a Facebook video.

Although his death is tragic, the video does not violate the company’s abstruse community standards, as it does not “glorify violence” or “celebrate the suffering or humiliation of others.” And leaving it up means more people will connect to Perkins, and to Facebook, so the video stays. It does have a million views, after all.

The problem is that Facebook doesn’t see people as people. We’re just data.

…the imperative to “connect people” lacks the one ingredient essential for being a good citizen: Treating individual human beings as sacrosanct. To Facebook, the world is not made up of individuals, but of connections between them. The billions of Facebook accounts belong not to “people” but to “users,” collections of data points connected to other collections of data points on a vast Social Network, to be targeted and monetized by computer programs.

There are certain things you do not in good conscience do to humans. To data, you can do whatever you like.

By this reading, Mark Zuckerberg is a modern-day Victor Frankenstein. He’s created a monster and has no idea how to control it, if controlling it is even possible any more.

John Naughton makes the same point in The Guardian.

 This all became evident last week in a revealing interview the Facebook boss gave to the tech journalist Kara Swisher. The conversation covered a lot of ground but included a few key exchanges that spoke volumes about Zuckerberg’s inability to grasp the scale of the problems that his creature now poses for society.

…I can see only three explanations for it. One is that Zuckerberg is a sociopath, who wants to have as much content – objectionable or banal – available to maximise user engagement (and therefore revenues), regardless of the societal consequences. A second is that Facebook is now so large that he sees himself as a kind of governor with quasi-constitutional responsibilities for protecting free speech. This is delusional: Facebook is a company, not a democracy. Or third – and most probably – he is scared witless of being accused of being “biased” in the polarised hysteria that now grips American (and indeed British) politics.

Sonnad again:

Facebook’s value system has diverged from that of the rest of society—the result of its myopic focus on connecting everyone however possible, consequences be damned.

With that in mind, the thread running through Facebook’s numerous public-relations disasters starts to become clear. Its continued dismissal of activists from Sri Lanka and Myanmar imploring it to do something about incitements of violence. Its refusing to remove material that calls the Sandy Hook massacre a “hoax” and threatens the parents of murdered children. Its misleading language on privacy and data-collection practices.

Facebook seems to be blind to the possibility that it could be used for ill.

That blindness is already having terrible consequences. For example, the violence in Myanmar that Sonnad refers to is attempted genocide. Marzuki Darusman, chairman of the UN’s independent fact-finding mission on Myanmar, told reporters that social media had “substantively contributed to the level of acrimony and dissension and conflict, if you will, within the public. Hate speech is certainly of course a part of that. As far as the Myanmar situation is concerned, social media is Facebook, and Facebook is social media.” There are many individual tragedies too, such as people driven to suicide by howling online mobs. And of course social media has been fundamental in the rise of the far right and associated violence.

We’re going to look back on this social media age with horror.

Sympathy for the Devil

This New York Times story about the parents of Noah Pozner, who was murdered in the Sandy Hook massacre, is horrific.

In the five years since Noah Pozner was killed at Sandy Hook Elementary School in Newtown, Conn., death threats and online harassment have forced his parents, Veronique De La Rosa and Leonard Pozner, to relocate seven times. They now live in a high-security community hundreds of miles from where their 6-year-old is buried.

“I would love to go see my son’s grave and I don’t get to do that, but we made the right decision,” Ms. De La Rosa said in a recent interview. Each time they have moved, online fabulists stalking the family have published their whereabouts.

Inevitably, Donald Trump believes that the man responsible for this horror, the snake-oil salesman and human stain Alex Jones, is “amazing”. His channels do big numbers for YouTube and Facebook.

Jones and other demons hide behind the right to free speech, which is enshrined in US law. In our social media age US law is global: the likes of Facebook and Twitter are US companies who take a US approach to the content they publish.

Whether by accident or design, that means they’ve become platforms for some of the worst people on the planet. I think it’s by design, because Facebook and Twitter do make editorial choices. Facebook won’t let you upload a photo of a woman breastfeeding. Twitter won’t let you use the name Elon Musk in your Twitter handle.

That’s beyond the pale. Holocaust denial, targeted attacks on women and minorities, inciting racial hatred, rape and death threats… that’s all fine, it seems. On Twitter, right-wing armies relentlessly attack people without consequence; the people they assault are often the ones who end up banned.

The supposed right to online free speech is starting to resemble the US right to bear arms: something that’s been perverted and used to cause untold misery. What scares me is that we’ve only scratched the surface of its malign power.

Block party

One of the reasons I haven’t binned Twitter is the existence of block lists. These enable you to automate the blocking of various bad people; they can’t see your messages (there’s a way around that, but few bother with it) and more importantly you don’t see theirs.
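
For the curious, here’s roughly how that automation works in practice. This is a minimal sketch, assuming the Python tweepy library and Twitter’s v1.1 blocks/create endpoint; the credential placeholders and the block_list.csv file are illustrative assumptions, not part of any particular block list tool.

    import csv
    import tweepy

    # Placeholder credentials for your own Twitter account (not real keys).
    auth = tweepy.OAuthHandler("API_KEY", "API_SECRET")
    auth.set_access_token("ACCESS_TOKEN", "ACCESS_SECRET")
    api = tweepy.API(auth, wait_on_rate_limit=True)

    # block_list.csv: a hypothetical shared export with one screen name per line.
    with open("block_list.csv", newline="") as f:
        for row in csv.reader(f):
            if not row:
                continue
            screen_name = row[0].strip().lstrip("@")
            try:
                # POST blocks/create: the blocked account disappears from your
                # timeline and, while logged in, can no longer see your tweets.
                api.create_block(screen_name=screen_name)
            except Exception:
                # Suspended, deleted or already-blocked accounts raise errors; skip them.
                continue

The point of sharing lists like this is that one person’s curation protects thousands of others without each of them having to see the abuse first.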

The numbers can be quite terrifying. One of the block lists I use, a list of anti-trans trolls, has thousands of people on it. I’m sure a few of them are falsely listed but for me that’s a small price to pay for relative freedom from online abuse.

One of the most high-profile block lists I’ve seen recently is Repeal Shield, which attempted to filter out the nastiest abuse aimed at Yes supporters in the Irish abortion referendum. Aidan O’Brien discusses the list and the interesting, if unsurprising, patterns that emerged.

Repeal Shield ended up blocking 16,000 people with very few false positives. Many of the troll accounts had clearly been set up purely to harass pro-repeal women; others had been around longer and also shared far right and/or anti-semitic content.

You’ll be shocked – shocked! – to discover that nearly three-quarters of the accounts were American. Some of them were quite clear about that; others claimed to be from Ireland but used US time stamps or only posted when everybody in Ireland had gone to bed.
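
As an illustration of how that kind of pattern gets spotted, here’s a sketch of the timestamp check involved. To be clear, this isn’t how Repeal Shield actually worked; the tweets.csv file, its columns and the 60 per cent threshold are assumptions made purely for the example.

    import pandas as pd

    # Hypothetical export of tweets from "Irish" accounts: screen_name, created_at (UTC).
    tweets = pd.read_csv("tweets.csv", parse_dates=["created_at"])

    # Convert UTC timestamps to Irish local time (Europe/Dublin handles DST).
    local = tweets["created_at"].dt.tz_localize("UTC").dt.tz_convert("Europe/Dublin")
    tweets["hour"] = local.dt.hour

    # Fraction of each account's tweets posted between 01:00 and 05:59 Irish time,
    # when most people in Ireland are asleep.
    overnight = tweets["hour"].between(1, 5)
    overnight_share = overnight.groupby(tweets["screen_name"]).mean()

    # Accounts that claim to be Irish but do most of their tweeting overnight
    # are worth a closer look.
    suspects = overnight_share[overnight_share > 0.6].sort_values(ascending=False)
    print(suspects)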

I’ve written before about the malign influence of US social media users on other countries’ politics; the numbers demonstrate how big a problem it is.

It also demonstrates how big a problem abuse is on Twitter. Of all the accounts blocked by Repeal Shield, just 2.42% – roughly 390 of the 16,000 – have since been suspended by Twitter’s abuse team.

This is important for various reasons. There’s the fact that Twitter is clearly doing next to nothing to curb the abuse that’s a fact of online life for women, members of minority groups and anybody the far right doesn’t like. And there’s the fact that social media is being used to sway elections.

Twitter’s response to the growing problem is typically useless. (And it’s not just here: right now there are concerns over political bots in Malaysia, where over 17,000 bots tweeted over 44,000 pro-government messages in a single week.) It has just announced new rules on political advertising.

The company will require advertisers running political campaign ads for federal elections to identify themselves and certify they are located in the U.S… Twitter said it won’t let foreign nationals target political ads to U.S. residents.

That’s the advertising around tweets, not the tweets themselves. And that means it won’t change a damn thing.

The problem with Twitter has never been the display ads, the electronic equivalents of billboards. It’s the tweets and retweets, the fake news and the vicious abuse.

Social media has been weaponised.

Infowar! Huh! What is it good for?

Profits!

This is disturbing, to say the least. As the Cambridge Analytica scandal rumbles on, here’s Adam Ramsay’s view of “what happens when you privatise military propaganda”:

If you privatise war, don’t be surprised if military firms start using the tools of war on ‘their own’ side. When Eisenhower warned of the Military Industrial Complex, he was thinking about physical weapons. But, just as unregulated semi-automatics invented for soldiers end up going off in American schools, it shouldn’t be any kind of surprise that the weapons of information war are going off in Anglo-American votes.

I’ll take the quiet life

I’m doing something I should probably do more often: unfollowing a lot of people on social media. It’s not that they’re bad people. Quite the opposite. It’s that unfortunately good people often share bad things.

I block or filter out a lot of people on Twitter and other networks: nazis, bigots, people who point at planes, men’s rights activists, accounts sharing overly graphic images of cruelty, and arseholes of various kinds. And the reason I block them is because they post things I don’t want to see or read.

Unfortunately, many of the people I follow take screenshots of those things and post them online, thereby making me look at the very worst examples of the things I don’t want to see.

They’re doing it for good reasons, such as battling bigotry or cruelty. But they’re doing it in a way that forces me to see things I don’t want to see: the way social media works is that when they post it, it’s injected straight into my timeline whether I want it or not.

In effect, it overrides my choice. I’ve said “I don’t want to see this”, and the social network says “I’m going to show it to you anyway, again and again.”

It’s not that I want to live my life in a bubble, free from any bad news. It’s that there’s a limit to how much time you can spend staring into the abyss every day when you’ve got stuff to do. If you’re not careful on social media, the abyss follows you around all day demanding you stare into it again and again and again.