Author: Carrie

  • Fake images, real harms

    Over the last few days, I’ve read about two people who’ve been the subject of faked sexual images. Such images are typically created by grafting a person’s face onto the body of a porn performer, but increasingly this process is being handled by AI-type apps that can create very convincing-looking fakes with minimal human input.

    Irrespective of the techniques used, the intention is the same: to dehumanise, to degrade. But the response to such abuse depends very much on how much power you have. When the images are of Taylor Swift, even X/Twitter will eventually take action, albeit in a cursory manner after many hours and millions more image shares. When you’re 14-year-old schoolgirl Mia Janin, you have no such power.

    Janin killed herself after being bullied at her school by male classmates, some of whom reportedly pasted images of her and her friends’ faces onto pornography that was then shared around the school via mobile phones. It was part of a wider campaign of abuse against her, and the use of sexual images is a form of abuse that’s increasingly common: according to the latest figures from the National Police Chiefs’ Council, covering 2022, some 52% of sexual offences against children were committed by other children; 82% of the offending children were boys, and a quarter of those offences involved the creation and sharing of sexual images. And as ever, these figures are the tip of the iceberg: the NPCC estimates that five out of six offences are never reported.

    As Joan Westenberg writes, when even Taylor Swift isn’t protected from such abuse, what chance do ordinary women and girls and other powerless people have?

    When a platform struggles (or simply refuses) to protect someone with Swift’s resources, it shows the vulnerability of us all. Inevitably, the risks of AI misuse, deepfakes and nonconsensual pornography will disproportionately affect marginalized communities, including women, people of colour, and those living in poverty. These groups lack the resources to fight back against digital abuse, and their voices will not be heard when they seek justice or support.

    There are growing concerns that just as the rise of generative AI apps makes such fakes easier than ever, social networks are cutting back on the very trust and safety departments whose job it is to stop such material from being spread. Today, X/Twitter announced that in response to the Taylor Swift fakes it will create a new trust and safety centre and hire 100 content moderators. Before Musk took over, the social network had more than 1,500. And as this is a Musk announcement, those 100 new moderators may never be hired at all.

    X/Twitter is an extreme example, but the history of online regulation has a recurring thread: tech firms will do the absolute minimum they can get away with when it comes to moderating content. Content moderation is difficult, expensive and, even with AI help, labour intensive. It’s also a fucking horrible job that leaves people seriously traumatised. But it’s necessary, and as technologies such as AI image generation become more widespread it needs more investment, not less. You shouldn’t need to be Taylor Swift to be protected from online abuse.

  • The Grey Goo

    Following on from yesterday’s post about bots ruining social media, the excellent Ian Betteridge writes about what we can expect when creating crap is much faster than detecting it.

    This is the AI Grey Goo scenario: an internet choked with low-quality content, which never improves, where it is almost impossible to locate public reliable sources for information because the tools we have been able to rely on in the past – Google, social media – can never keep up with the scale of new content being created.

  • Death by a billion bots

    Via Joan Westenberg on Threads, here’s ReplyGuy. ReplyGuy is a bot that will find conversations on the internet and promote your product automatically by spamming those conversations while pretending to be people.

    Every day we take a step closer to the dead internet, where the bulk of online conversations are bots talking to bots and humans are left in the margins, if they’re there at all. So much of social media is now bot-based rather than people-based.

  • Faking the news

    There’s an excellent example of how newspapers create and maintain moral panics in the Sunday Times today, when Camilla Long notes with horror that:

    One school in Wales has written to parents saying it will not be providing “litter trays” for children “who identify as cats”.

    The reason for the letter was to debunk the idea that any children were identifying as cats, an anti-trans internet fiction enthusiastically spread by, er, The Times and The Sunday Times on multiple occasions.

    For example: “reports last week of a girl identifying as a cat”, 24 June 2023; “a litter of teenagers who self-identify as cats have begun stalking [a] town”, 10 July 2023; “A friend of mine who runs a nice little café was surprised one day to see an adolescent girl enter his establishment, dressed from whiskers to tail as a cat… the girl identifies as a cat, Mum and Dad [explained]”, 24 December 2023. And so on.

    As I’ve written before, there is a horrific grain of truth to the story: some schools do indeed have litter trays in classrooms. Those schools are in America, where litter trays are provided in case a child needs to go to the toilet during an active shooter drill or active shooting.

    Like most anti-LGBTQ+ bullshit, the “kids are identifying as cats” story was fabricated by the right-wing press – in this case Fox News – before being amplified by Turning Point UK (a hard-right pressure group) and GB News. It then spread via The Telegraph, the Daily Mail, LBC and, inevitably, The Times and Sunday Times. It was then picked up by beleaguered PM Rishi Sunak, who condemned “schools [that] are allowing children to identify as cats, horses and dinosaurs.” None of those things happened.

  • Whispered words and power chords

    Somewhat later than planned, I’m delighted to tell you about the new HAVR EP: Love Will Save Us From Sadness. Which perhaps could have been called “Hey, do you guys ever think about dying?”

    It’s our best work yet, I think, and I’m particularly proud of the lyrics: these are songs from a sad place but there’s a lot of positivity and joy in them as well as meditations on grief and loss. Over the course of the EP you’ll find crashing waves of Fender Strats, hazy pop, huge layers of distortion and, of course, some anthemic rock.

    As ever, the music is on Bandcamp and you can have it for free; I’ll be putting the songs on the usual streaming services shortly.

  • Death should be the end

    There’s a joke I like about technology companies, first posted by Alex Blechman:

    Sci-Fi Author: In my book I invented the Torment Nexus as a cautionary tale

    Tech Company: At long last, we have created the Torment Nexus from classic sci-fi novel Don’t Create The Torment Nexus

    Like the best jokes it’s funny because it’s true: all too often, tech firms care about whether they could do something rather than whether they should. Which is how a supposedly AI-generated comedy routine by George Carlin, who died in 2008, came to be made.

    I say “supposedly” because the whole thing seems awfully fishy. But what’s definitely true is that some people have created a Carlin soundalike, and it’s awful. Ed Zitron:

    AI-Carlin’s jokes feel like they were generated by feeding transcripts of his real work into a generative AI, with the resulting CarlinGPT bot prompted by Sasso and Kultgen and its outputs heavily edited. 

    If this was entirely written by humans, it is even more shameful, both in how terribly unfunny it is and how little they understand Carlin’s work.

    Finding bad examples of AI isn’t difficult: significant parts of the internet seem to be using it to create overly bright images of improbably breasted young women with waists so tiny that if they were real women, they’d snap. But I think there’s one example that is so bad you’d think I’d invented it, and it’s about this painting by Keith Haring.

    The painting is called Unfinished because, as you can see, it’s unfinished. That’s deliberate, because it was the final painting of Haring’s life: the unpainted section represents the many lives lost to AIDS. He died the following year.

    A few days ago, an AI user finished it.

    I thought it was a joke, but it doesn’t appear to be. Somebody has used generative AI to complete the painting, to fill in the space and to remove the very thing that makes it so meaningful and so powerful. The fact that the AI has produced shoddy work is almost irrelevant, because of course it did. The whole exercise is a classic example of someone who could do something, but who should not do it.

    In electronic publishing, a plague of crap AI-generated content is an unintentionally ironic echo of Orwell’s 1984, in which a key character works “in the Fiction Department [in] some mechanical job on one of the fiction-writing machines.”

    She enjoyed her work, which consisted chiefly in running and servicing a powerful but tricky electric motor… She could describe the whole process of composing a novel, from the general directive issued by the Planning Committee down to the final touching-up by the Rewrite Squad. But she was not interested in the final product. She “didn’t much care for reading,” she said. Books were just a commodity that had to be produced, like jam or bootlaces.

    And it’s not just art. Serious people are spending serious money to create AI versions of people, so that in the not-too-distant future you’ll be able to converse with an AI chatbot that mimics the voice and speaking mannerisms of your favourite dead loved ones – an attempt to cheat the Grim Reaper, something we’ve seen described many times over in literature, rarely with a happy ending attached.

    Rather than building machines to simulate storytellers, tech evangelists might be better off reading some of them. They might want to start with W. W. Jacobs’ story The Monkey’s Paw.

  • Dirty tricks, tested on trans

    England’s buffer zone laws, voted in by an overwhelming majority of MPs, will be deliberately undermined by new Home Office guidance. That’s according to the i Paper, which says that despite legislation banning religious groups from harassing women, the guidance says that forced birthers “would still be able to approach women attending clinics, conduct ‘silent prayer’ and offer information to and engage in discussion with patients, all inside the 150m zones.”

    The guidance is currently open for consultation and if you care about women’s rights, you should contribute. Because you can be sure that the forced birthers will.

    This is a particularly twisted trick, because it effectively tells the police and other authorities not to enforce a law designed specifically to protect vulnerable people; it says that those people’s rights don’t matter, irrespective of what the letter and the spirit of the law say. And it’s a trick we’ve seen before, because it’s exactly what Liz Truss has done via the EHRC and its guidance regarding the Equality Act and its protections for trans people.

    Truss has close links with the US Heritage Foundation, a key driver of anti-trans and anti-abortion activity in the US; I wasn’t aware of Home Secretary James Cleverly having similar links, but whether by accident or design he too is serving up something from the evangelical right’s wish list.

    As I’ve written many times, I’ve long since given up on expecting most people to care about trans rights. But if you’re not a straight white Christian man, you should be paying attention to the tricks politicians, media groups and hate groups are using to target us and roll back our rights. Because as we’ve seen again and again, what they test on us is only ever the beginning.

  • A terrible echo

    In 2016, every major political party in Scotland stood on a platform that included gender recognition reform. The Scottish Government then threw the issue open to public consultation in 2017 (and again in 2019), during which social and mainstream media – with significant input from genital-obsessed weirdos – repeatedly lied about the proposed legislation, demonised trans people and defamed them as dangerous to children. Gender recognition reform has still not happened.

    In 2021, every major political party in Scotland stood on a platform that included banning conversion therapy. The Scottish Government then threw the issue open to public consultation in 2024, during which…

    If anything, the vitriol around this consultation is even worse. Although it’s largely coming from the same people as before, there’s no pretence of “reasonable concerns” this time. Just constant abuse online and ridiculous evangelical claims of the “ordinary parents will be jailed for seven years” variety.

    Any time the rights of marginalised people are thrown open to the public, those consultations are flooded by bigots and misrepresented by the conservative press: whether it’s the rights of trans and non-binary people in Scotland, women in Ireland or gay couples in Australia or Romania, consultations have repeatedly been used by the religious and far right to demand that marginalised people receive worse treatment and have fewer human rights than they enjoy.

    If the majority wanted marginalised people to have equality, we wouldn’t need to legislate something so basic as protecting young people from treatment the UN defines as torture. We shouldn’t have to ask permission from the very people who deny us those basic rights.

  • Authors who don’t exist

    Meet Jason N. Martin N. Martin, the author of the exciting and dynamic Amazon bestseller “How to Talk to Anyone: Master Small Talks, Elevate Your Social Skills, Build Genuine Connections (Make Real Friends; Boost Confidence & Charisma)”

    Except you can’t meet him, because he doesn’t exist. He’s an AI-generated character with an AI-generated face credited with writing an AI-generated ebook with an AI-generated cover. Both the cover and the content are likely based on content that’s been plagiarised: most of the large language and content models used for AI generation have been fed with real humans’ work in order for them to emulate it without credit or, of course, payment.

    Once you’ve found Jason, Amazon will recommend another 11 just like him.

    Between the synthetic faces, the use of repetitive and potentially AI-generated text, and art copied from other sources, the 57 books that these authors have published over the last two years may well contain almost no original human-generated content, and Amazon’s algorithms in their current state have the unfortunate effect of worsening the problem by recommending additional inauthentic books or authors once a customer stumbles upon one of them.

    Amazon isn’t the only place this is happening, and books aren’t the only sector it’s happening in: there’s a flood of computer-generated content in everything from music to furniture listings. Just the other day Amazon’s listings were full of products called “I’m sorry but I cannot fulfill this request it goes against OpenAI use policy”. X/Twitter is already full of ChatGPT bots posting, and your search engine results are starting to fill up with AI-generated content too. I’ve been trying to research some products recently and it’s been like swimming through treacle: so much content returned by search engines is completely useless now.

    The odd listings are most likely the result of dropship sellers using ChatGPT to write everything from product descriptions to product names in huge volumes, but they’re a good example of the pernicious creep of AI into almost everything online – partly due to tech platforms’ lack of interest in removing useless content. Sometimes it’s funny – ChatGPT confidently informed me that I died a few years ago – but it’s increasingly replacing actual information in your search results. And then that bad information becomes the source data for the next generation of AI articles.

    That could mean AI becomes an ouroboros, a snake eating its own tail: the more AI-generated content there is, the more AI will use that content as its source – and that means the very many errors AI systems are currently making will cascade. AI researchers have a name for the potential outcome: model collapse. It’s what happens when models are trained on so much bad data that their results are useless at best and absolute gibberish at worst.
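    The feedback loop can be illustrated with a toy sketch – nothing like a real LLM training pipeline, just an assumption-laden illustration – where a “model” learns only the token frequencies of its corpus and then generates the corpus for the next model. Any token that happens not to be sampled in one generation gets zero weight and can never return, so diversity only ever shrinks:

```python
import random
from collections import Counter

random.seed(0)

VOCAB = list(range(200))  # 200 distinct "tokens" in the original human data


def train(corpus):
    # A toy "model": just the empirical token frequencies of its training corpus.
    counts = Counter(corpus)
    return [counts[t] / len(corpus) for t in VOCAB]


def generate(model, n):
    # Sample n tokens from the model's learned distribution. Tokens the model
    # never saw have zero weight, so they can never come back.
    return random.choices(VOCAB, weights=model, k=n)


# Generation 0: genuinely diverse human-written data.
corpus = [random.choice(VOCAB) for _ in range(500)]

for gen in range(10):
    model = train(corpus)
    corpus = generate(model, 500)  # each model trains on the previous one's output
    print(f"generation {gen}: {len(set(corpus))} distinct tokens survive")
```

    Run it and the count of surviving tokens falls generation after generation: the rare tails of the original data disappear first, which is the mechanism model-collapse researchers describe, reduced here to its bare bones.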

    There’s a famous saying in tech: garbage in, garbage out. Thanks to AI, we’re currently seeing that happen on an epic scale.

  • Overreacting

    From the very beginnings of the war on trans people, we’ve been accused of overreacting whenever we report what anti-trans groups and politicians say they want to do to us – which in many cases is the complete elimination of trans people by any means necessary.

    Most UK anti-trans groups and key anti-trans figures have signed a declaration calling for the “elimination of transgenderism”; many talk openly about removing all our human rights, healthcare and legal protections. Some openly wish to see us dead.

    This is something that campaigners for all women’s rights have long experienced: when they tried to raise the alarm about the US Republicans’ openly stated goal of rescinding Roe v Wade, they were told not to be so silly. Roe v Wade, of course, is gone, with abortion and contraception now under sustained attack in multiple states – and Obergefell v Hodges, which enabled equal marriage, and Loving v Virginia, which struck down bans on interracial marriage, are next in the firing line. We know this because the religious right told us, as they usually do.

    One of the tactics that’s been openly discussed for a few years now is to classify the very existence of trans people as a sexual act, and to then use that classification to ban trans people from everyday life. And here’s legislators in West Virginia trying to do just that. In two separate bills, Republican lawmakers propose to ban “obscene matter”; in their definition of such, they include “any transvestite and/or transgender exposure, performances or display to any minor.” In other words, the mere presence of a trans person near a child would be a sexual offence.

    It’s easy to dismiss this as the latest wacky nonsense from crazed US fundamentalists. But the exact same arguments are being advanced over here by anti-trans groups, many of which work closely with US evangelical groups such as the Alliance Defending Freedom, one of the key drivers and drafters of US anti-trans legislation. And the arguments are in the Project 2025 manifesto produced by groups including the Heritage Foundation, which is very close to the current UK government. Many of these documents and strategies make it clear that it’s not just trans people being targeted here but queer people more widely, along with women’s reproductive rights.

    The West Virginian bills aren’t expected to become law. But they are a tiny part of a wave of anti-trans bills in the US, bills that UK anti-trans activists and politicians would like to see in the UK too.

    We’re not overreacting; if anything, we’re underselling the threat to trans people’s lives, to the wider LGBTQ+ community and to reproductive freedom.