Category: Technology

Shiny gadgets and clever computers

  • A girl called “it”

    Platformer has obtained what it calls “the dehumanizing new guidelines moderating what people can now say about trans people on Facebook and Instagram.” Examples include “trans people aren’t real. They’re mentally ill”, “a trans woman isn’t a woman, it’s a pathetic confused man” and “a trans person isn’t a he or a she, it’s an it.”

    The report says that Meta’s chief marketing officer Alex Schultz, the firm’s highest-ranking gay employee, has suggested that “seeing their queer friends and family members abused on Facebook and Instagram could lead to increased support for LGBTQ rights.”

    It’s not just trans people. It’s pretty much anybody who isn’t MAGA. And it’s not really new, because marginalised people have been trying and failing to get Meta to moderate hate speech for a long time. But what’s different is that this is now policy, and the policy explicitly says that hate speech is fine when directed towards specific minorities.

  • Own your everything

    When Elon Musk bought Twitter, he didn’t just destroy the good parts of a thriving social network. He also did massive damage to many people’s livelihoods, including creative people for whom Twitter was a key part of their marketing and who saw their post engagement – how many people see and interact with their posts – effectively disappear. And now the same’s happening over at Facebook, Instagram and Threads thanks to their parent company Meta’s new policy, “it’s great when you hate”.

    Meta’s moves to emulate Twitter are bad for business – not Meta’s business, but yours.

    It’s not just the open embrace of online hate, with Meta happily saying it’ll allow the online abuse of women, immigrants, LGBTQ+ people and more. Meta’s various properties also already engage in significant censorship, such as hiding posts by LGBTQ+ people and content it deems “political”, while also suppressing links to anything that isn’t hosted on Facebook, Instagram or Threads.

    The very creative people that helped make Instagram so big now have to post “link in bio” because Meta won’t let them include links to their own creations in their own posts if those creations are on their own websites or other social networks. And that’s getting worse as Facebook, Instagram and Threads hide more posts by the people you choose to follow in favour of ads and shoddy AI.

    If you’re a creative type, doing nothing isn’t an option: unless you’re selling AI-generated crap to the bigot market or reinventing yourself as a troll account you’re going to see the reach of your posts diminish as some of your audience leaves and the people who still follow you see fewer and fewer of your posts.

    In the short term that means it’s wise to look at other social networks, if you haven’t already. Bluesky has the juice right now, but that comes with an important caveat: it could easily go to shit too. Many people quit Twitter and attempted to rebuild on Threads; many of them are now facing a repeat as they look for yet another new home.

    Sometimes clichés persist because they’re true, and that’s definitely the case with putting all your eggs in one basket. Just because the eggs are electronic doesn’t change the underlying truth: having everything in one place means there’s a single point of failure.

    With any social network your access can be removed without warning and for no good reason, with few if any rights to appeal. And if it is, the things you’ve posted, the connections you’ve made, the audiences you’ve built… they go too.

    I know several people whose businesses and/or careers have really suffered because of social network policy changes, censorship or bad-faith reports by third parties, and what adds insult to injury is that there really isn’t anything you can do about it because third-party networks don’t give a shit. As Meta’s Mark Zuckerberg famously said of early Facebook users, “They ‘trust me’. Stupid fucks.”

    It’s not just social media. Online spaces for creatives can disappear or remove entire archives overnight: art websites, digital magazines, blogging platforms. I don’t archive my consumer news stories because they’re ephemeral, but everything else I write I archive – and as a result I have copies of features that no longer exist anywhere else because they were never printed and the online versions are long gone.
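    The archiving habit described above is easy to automate. Here’s a minimal sketch in Python (the helper name and file layout are my own invention, not any particular tool or the author’s actual workflow) that files a dated, plain-text copy of anything you publish:

```python
import datetime
import pathlib
import re

def archive_post(title: str, body: str, archive_dir: str = "archive") -> pathlib.Path:
    """Save a local, dated plain-text copy of a piece of writing."""
    folder = pathlib.Path(archive_dir)
    folder.mkdir(parents=True, exist_ok=True)
    # Build a filesystem-safe slug from the title.
    slug = re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-") or "untitled"
    stamp = datetime.date.today().isoformat()  # e.g. 2025-01-10
    path = folder / f"{stamp}-{slug}.txt"
    path.write_text(f"{title}\n\n{body}\n", encoding="utf-8")
    return path
```

    Point it at a folder that’s backed up somewhere you control: the dated filenames make old work easy to find long after the original host has disappeared.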

    I’ve been online for over 30 years now, and in that time countless social networks have risen and fallen: USENET, CompuServe, AOL, MySpace, Friends Reunited, Friendster, Google Plus, Bebo, Vine, Flickr, Twitter, Orkut, Jaiku and many more. And that’s before you factor in the many thousands of user-created content websites and online publishers that have been and gone too.

    User-generated content is a scam, and part of what Cory Doctorow described when he coined the term “enshittification”:

    Each commercial social media service has two imperatives: first, to make it as easy as possible to switch to their service, and second, to make it as hard as possible to leave.

    The harder it is to leave – for example, because you’ve built your entire business on a specific social media platform, and going elsewhere would mean losing most or all of your audience – the more the social network can then exploit you. That potential loss is a “switching cost”. Doctorow:

    When switching costs are high, services can be changed in ways that you dislike without losing your business. The higher the switching costs, the more a company can abuse you, because it knows that as bad as they’ve made things for you, you’d have to endure worse if you left.

    Meta’s social networks aren’t dead, although their hyper-growth is ending. But it’s important to understand that abusing you, the user, isn’t something that happens by accident. It’s the entire strategy. And if you’re using social media for your livelihood, you need to have a strategy of your own for when the abuse becomes intolerable.

  • Zuck’s death cult

    Today, Mark Zuckerberg announced changes to Facebook, Instagram and Threads, initially for the US because EU law won’t allow such changes. The short version: hate speech is back, baby!

    You’ll see the details elsewhere so I won’t repeat them here, but ultimately the goal is to remove safeguards and moderation from Meta’s platforms. That freedom may be new for the US, but it isn’t new for Facebook: we saw it in Myanmar, where Facebook was instrumental in genocide.

    That’s not just my opinion; it’s the opinion of UN investigators and of Amnesty International too. As Amnesty put it: “While the Myanmar military was committing crimes against humanity against the Rohingya, Meta was profiting from the echo chamber of hatred created by its hate-spiralling algorithms.”

    Amnesty:

    Internal studies dating back to 2012 indicated that Meta knew its algorithms could result in serious real-world harms. In 2016, Meta’s own research clearly acknowledged that “our recommendation systems grow the problem” of extremism…

    In one internal document dated August 2019, one Meta employee wrote: “We have evidence from a variety of sources that hate speech, divisive political speech, and misinformation on Facebook… are affecting societies around the world. We also have compelling evidence that our core product mechanics, such as virality, recommendations, and optimizing for engagement, are a significant part of why these types of speech flourish on the platform.”

    Hate speech is the oil of Meta’s business, and Zuckerberg doesn’t care about the human cost.

    As tech journalist and long-time Meta critic Ed Zitron writes on Bluesky:

    Meta will burn down everything in search of growth. They have been doing so in broad daylight for years. They will make people angry and sad and hateful (and have done so before) in search of growth in their dying platform. They will make everything worse to create growth. It’s a death cult

  • Bots and brooms

    I meant to post this a while back: a piece I wrote for Gutter magazine about the “content industry” and what it means for artists.

    Of course there has always been business around art. The music business, the art market, the publishing industry, the comedy circuit, the comic book trade and others have all seen their share of bandwagon boarders and cold-eyed careerists. But for most of that time the art and the business have co-existed, however awkwardly and inequitably. What happens when there’s all business and no art?

  • Perverse incentives

    One of the “keyboard warriors” who fuelled the recent English racist riots, Twitter user WayneGb88, appeared in court yesterday and was jailed for three years. During the trial, he told the court that he earns approximately £1,400 per month from posting hate speech on the former Twitter.

    This is why hate speech is everywhere: it pays very well. US “detransitioner” Chloe Cole recently revealed that she earns roughly $200,000 a year flying around as a guest of the Christian Right trying to get trans people’s healthcare banned; other anti-trans grifters are raking it in too.

    Being hateful is no longer a hobby; it’s a career, and a lucrative one.

  • Fake images, real harms

    Over the last few days, I’ve read about two people who’ve been the subject of faked sexual images. Such images are typically created by grafting a person’s face onto the body of a porn performer, but increasingly this process is being handled by AI apps that can create very convincing fakes with minimal human input.

    Irrespective of the techniques used, the intention is the same: to dehumanise, to degrade. But the response to such abuse depends very much on how much power you have. When the images are of Taylor Swift, even X/Twitter will eventually take action, albeit in a cursory manner after many hours and many more millions of image shares. When you’re 14-year-old schoolgirl Mia Janin, you have no such power.

    Janin killed herself after being bullied at her school by male classmates, some of whom reportedly pasted her and her friends’ faces onto pornography that was then shared around the school via mobile phones. It was part of a wider campaign of abuse against her, and the use of sexual images is a form of abuse that’s increasingly common. According to the latest figures from the National Police Chiefs’ Council, for 2022, some 52% of sexual offences against children were committed by other children; 82% of the offending children were boys; and one quarter of those offences involved the creation and sharing of sexual images. And as ever, these figures are the tip of an iceberg: the NPCC estimates that five out of six offences are never reported.

    As Joan Westenberg writes, when even Taylor Swift isn’t protected from such abuse, what chance do ordinary women and girls and other powerless people have?

    When a platform struggles (or simply refuses) to protect someone with Swift’s resources, it shows the vulnerability of us all. Inevitably, the risks of AI misuse, deepfakes and nonconsensual pornography will disproportionately affect marginalized communities, including women, people of colour, and those living in poverty. These groups lack the resources to fight back against digital abuse, and their voices will not be heard when they seek justice or support.

    There are growing concerns that just as the rise of generative AI apps makes such fakes easier than ever, social networks are cutting back on the very trust and safety departments whose job it is to stop such material from being spread. Today, X/Twitter announced that in response to the Taylor Swift fakes it will create a new trust and safety centre and hire 100 content moderators. Before Musk took over, the social network had more than 1,500. And as this is a Musk announcement, those 100 new moderators may never be hired at all.

    X/Twitter is an extreme example, but the history of online regulation has a recurring thread: tech firms will do the absolute minimum they can get away with doing when it comes to moderating content. Content moderation is difficult, expensive and even with AI help, labour intensive. It’s also a fucking horrible job that leaves people seriously traumatised. But it’s necessary, and as technologies such as AI image generation become more widespread it needs more investment, not less. You shouldn’t need to be Taylor Swift to be protected from online abuse.

  • The Grey Goo

    Following on from yesterday’s post about bots ruining social media, the excellent Ian Betteridge writes about what we can expect when creating crap is much faster than detecting it.

    This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate public reliable sources for information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created.

  • Death by a billion bots

    Via Joan Westenberg on Threads, here’s ReplyGuy. ReplyGuy is a bot that will find conversations on the internet and promote your product automatically by spamming those conversations while pretending to be people.

    Every day we take a step closer to the dead internet, where the bulk of online conversations are bots talking to bots and humans are left in the margins, if they’re there at all. So much of social media is now bot-based rather than people-based.

  • Death should be the end

    There’s a joke I like about technology companies, first posted by Alex Blechman:

    Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

    Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus

    Like the best jokes it’s funny because it’s true: all too often, tech firms care about whether they could do something rather than whether they should. Which is how a supposedly AI-generated comedy routine by George Carlin, who died in 2008, came to be made.

    I say “supposedly” because the whole thing seems awfully fishy. But what’s definitely true is that some people have created a Carlin soundalike, and it’s awful. Ed Zitron:

    AI-Carlin’s jokes feel like they were generated by feeding transcripts of his real work into a generative AI, with the resulting CarlinGPT bot prompted by Sasso and Kultgen and its outputs heavily edited. 

    If this was entirely written by humans, it is even more shameful, both in how terribly unfunny it is and how little they understand Carlin’s work.

    Finding bad examples of AI isn’t difficult: significant parts of the internet seem to be using it to create overly bright images of improbably breasted young women with waists so tiny that if they were real women, they’d snap. But I think there’s one example that is so bad you’d think I’d invented it, and it’s about this painting by Keith Haring.

    The painting is called Unfinished because, as you can see, it’s unfinished. That’s deliberate, because it was the final painting of Haring’s life: the unpainted section represents the many lives lost to AIDS. He died the following year.

    A few days ago, an AI user finished it.

    I thought it was a joke, but it doesn’t appear to be. Somebody has used generative AI to complete the painting, to fill in the space and to remove the very thing that makes it so meaningful and so powerful. The fact that the AI has produced shoddy work is almost irrelevant, because of course it did. The whole exercise is a classic example of someone who could do something, but who should not do it.

    In electronic publishing, a plague of crap AI-generated content is an unintentionally ironic echo of Orwell’s 1984, in which a key character works “in the Fiction Department [in] some mechanical job on one of the fiction-writing machines.”

    She enjoyed her work, which consisted chiefly in running and servicing a powerful but tricky electric motor… She could describe the whole process of composing a novel, from the general directive issued by the Planning Committee down to the final touching-up by the Rewrite Squad. But she was not interested in the final product. She “didn’t much care for reading,” she said. Books were just a commodity that had to be produced, like jam or bootlaces.

    And it’s not just art. Serious people are spending serious money to create AI versions of people, so that in the not-too-distant future you’ll be able to converse with a chatbot that mimics the voice and speaking mannerisms of your dead loved ones – an attempt to cheat the Grim Reaper that literature has described many times over, rarely with a happy ending.

    Rather than building machines to simulate storytellers, tech evangelists might be better off reading some of them. They might want to start with W. W. Jacobs’ story The Monkey’s Paw.

  • Authors who don’t exist

    Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”

    Except you can’t meet him, because he doesn’t exist. He’s an AI-generated character with an AI-generated face credited with writing an AI-generated ebook with an AI-generated cover. Both the cover and the content are likely based on content that’s been plagiarised: most of the large language and content models used for AI generation have been fed with real humans’ work in order for them to emulate it without credit or, of course, payment.

    Once you’ve found Jason, Amazon will recommend another 11 just like him.

    Between the synthetic faces, the use of repetitive and potentially AI-generated text, and art copied from other sources, the 57 books that these authors have published over the last two years may well contain almost no original human-generated content, and Amazon’s algorithms in their current state have the unfortunate effect of worsening the problem by recommending additional inauthentic books or authors once a customer stumbles upon one of them.

    Amazon isn’t the only place this is happening, and books aren’t the only sector it’s happening in: there’s a flood of computer-generated content in everything from music to furniture listings. Just the other day Amazon’s listings were full of products called “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy”. X/Twitter is already full of ChatGPT bots posting, and your search engine results are starting to fill up with AI-generated content too. I’ve been trying to research some products recently and it’s been like swimming through treacle: so much content returned by search engines is completely useless now.

    The odd listings are most likely the result of dropship sellers using ChatGPT to write everything from product descriptions to product names in huge volumes, but they’re a good example of the pernicious creep of AI into almost everything online – partly due to tech platforms’ lack of interest in removing useless content. Sometimes it’s funny – ChatGPT confidently informed me that I died a few years ago – but it’s increasingly replacing actual information in your search results. And then that bad information becomes the source data for the next generation of AI articles.

    That could mean AI is an ouroboros, a snake eating its own tail: the more AI-generated content there is, the more AI will use that content as its source – and that means the very many errors AI systems are currently making will cascade. AI researchers have a name for the potential outcome: model collapse. It means that the language models used by AI are so full of bad data that their results are useless at best and absolute gibberish at worst.
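    Model collapse can be demonstrated with a toy stand-in for a language model. In this deliberately crude sketch (a statistical caricature, not real model training), each “generation” fits a simple Gaussian model to the previous generation’s output, then publishes fresh samples from that fit; with no new human data coming in, the spread of the data – its diversity – decays towards zero:

```python
import random
import statistics

def train_generation(samples, n_out):
    """'Train' a toy model: fit a Gaussian to the samples,
    then 'publish' n_out new samples drawn from the fitted model."""
    mu = statistics.fmean(samples)
    sigma = statistics.pstdev(samples)  # biased low for small samples
    return [random.gauss(mu, sigma) for _ in range(n_out)]

def simulate_collapse(generations=200, n=5, seed=0):
    """Track the spread of the data as models train on model output."""
    random.seed(seed)
    data = [random.gauss(0.0, 1.0) for _ in range(n)]  # real, human-made data
    spreads = [statistics.pstdev(data)]
    for _ in range(generations):
        data = train_generation(data, n)  # next model sees only model output
        spreads.append(statistics.pstdev(data))
    return spreads
```

    Because each fit sees only a small sample of the previous model’s output, estimation errors compound and the distribution narrows generation after generation – the statistical skeleton of the tail-eating snake.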

    There’s a famous saying in tech: garbage in, garbage out. Thanks to AI, we’re currently seeing that happen on an epic scale.