AI is our generation’s asbestos. Ban it in the creative industries. Now.

By Holden Sheppard | 24 July 2023

Holden Sheppard is a multi award-winning author.

“Disruption” is my least favourite buzzword. A few years back, you couldn’t swing a cat without hitting a power-suited TED talker delivering an onanistic D-bomb into their Madonna mic.

Although the term made me grind my teeth, I was always fascinated by its connotations. “Disruption” does not imply that a new technology is a threat to be overcome. It carries an optimistic defeatism: this new thing is here, whether we like it or not, and we mere humans are powerless to stop the unrelenting advance of technology. The implication is that the savviest, most aloof way to deal with disruption is to accept it and adapt. And quit yer whining.

I have been thinking about disruption lately in the context of generative AI. As an Australian author – I earn most of my living as a novelist – my spidey senses have been tingling about AI since ChatGPT first became ubiquitous on the internet. Without pretending to be any kind of expert on the matter, I have kept my ear to the ground as random skirmishes over AI use in the publishing world started cropping up in my social media feeds.

And right now, the mood is near-universal: writers are packing our dacks about AI, with good reason. In the past few months, we have seen:

  • US-based speculative fiction magazine Clarkesworld bombarded with AI-written submissions, resulting in the magazine banning all AI-written content;
  • Amazon flooded by incomprehensibly bad AI-written novels, crowding the market and making discoverability of books written by human authors harder;
  • Companies trying to hire writers at a fraction of normal rates to edit AI-generated copy, rather than write their own material; and
  • Stories emerging of publishers trying to force authors to sign contracts with clauses in them which would permit the text of their novels to be fed into AI training systems.

In Australia, our writing competitions are starting to add clauses prohibiting AI-written entries from being eligible for cash prizes (which form tangible and vitally-needed income for short-story writers and poets). Our national peak body, the Australian Society of Authors, has been impressively proactive in rapidly working to develop a policy on the new technology, recognising that the moment of this technology disrupting our world is no longer imminent – it is already here.

The most high-profile battleground is Hollywood. The Writers Guild of America (WGA) has been on strike since 2 May, concerned studios will use AI to replace screenwriters entirely, or hire them for a fraction of their usual fee to polish AI-created scripts.

Last week, the Screen Actors Guild – American Federation of Television and Radio Artists (SAG-AFTRA) also went on strike, citing similar concerns. It is the first time both actors and writers have been on strike in the US at the same time since 1960, bringing Hollywood to a halt. The Alliance of Motion Picture and Television Producers (AMPTP) proposal was for actors to have their faces scanned into a system, paid for one day’s work, then have their likeness used by studios for eternity without consent or payment, supplanting actors with AI-derived content. Commentators pointed out this proposal is so dystopian, it is literally a plotline from an episode of Black Mirror.

In a now-viral speech on July 14, SAG-AFTRA President Fran Drescher delivered an impassioned indictment of the AMPTP, declaring "the jig is up" and ringing alarm bells for artists: "If we don't stand tall right now … we are all going to be in jeopardy of being replaced by machines."

Hollywood’s battle with AI is relevant for writers in Australia, too, because it is a moment of history for the arts, and nobody yet knows how long it will last, or how it will be resolved. And what happens in the States may chart a precedent for how our own creative industries, and unions, respond to this historic – shudder – disruption.

Although financial struggles for writers are nothing new (in Australia, the average annual author income is $18,000), AI presents a new level of degradation beyond just devaluing the artist. We are now staring down the barrel of the elimination of the artist.

I don’t think any editors or publishers I have met want this, but somewhere at the top, executives and/or shareholders are salivating over how profitable art could become if they could just get rid of the artist altogether. TV series with no writers; films with no actors; audiences who would tune in because they wouldn’t have any other choice. Imagine the profits!

Of course, if these people had a handle on what good art is or why audiences tune in to watch it, they would not be acting this way. There is no art without the artists: art is what makes us human. AI cannot create anything new or original, as writers do. AI cannot emote or express, as actors do. All AI-generated content comes from pirating the copyrighted art of other artists and regurgitating it out in warped (and usually terrible) output. AI-generated art is, by its very nature, dutiful copyright infringement by a programmed machine. A world of AI art is a world without art.

It is important for the optimists among us not to think of the strikes as having any chance of changing the hearts and minds of the powerful, either. That is not what this is about. We are talking about executives who would probably happily eliminate the TV show itself, and eliminate televisions, and eliminate the audience at home, if it meant that costs kept going down and the profits kept going up. I imagine their utopia would just be the entire arts world reduced down to a big green button on their corporate desk that lights up and pisses out reams of cash every time they bash their fist on it.

The strikes will not change their minds. The strikes will make that big green button stop lighting up. And that is the only way to get change on this issue.

The challenge ahead for artists – and organisations that care about artists – is to ensure that the only way executives’ big green buttons ever get to light up again is by meeting the ethical demands of artists, and having that mandated by law and policy. And now is absolutely the time to strike, while writers and actors are still imperative in the process of generating profit. If we are all replaced by AI in the next year, it will be too late to do anything about this. AI cannot go on strike for us.

Of course, there are much bigger problems with AI than artists' rights infringements. Many AI experts have been calling for either a pause or a total stop on the development and use of AI, with particular urgency since the release of GPT-4. On 22 March, hundreds of scientists and industry figures signed and released an open letter calling for "all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4". The open letter, signed by CEOs, tech founders, professors of computer science and more (including Yoshua Bengio, Stuart Russell, Steve Wozniak, and Elon Musk) states in no uncertain terms:

AI systems with human-competitive intelligence can pose profound risks to society and humanity, as shown by extensive research and acknowledged by top AI labs. As stated in the widely-endorsed Asilomar AI Principles, Advanced AI could represent a profound change in the history of life on Earth, and should be planned for and managed with commensurate care and resources. Unfortunately, this level of planning and management is not happening, even though recent months have seen AI labs locked in an out-of-control race to develop and deploy ever more powerful digital minds that no one – not even their creators – can understand, predict, or reliably control.

The letter has attracted over 33,000 signatures at the time of writing.

If that sounds alarming, it should. One expert from the Machine Intelligence Research Institute who has worked on AI since 2001 – Eliezer Yudkowsky – wrote a chilling article for TIME magazine a week after the open letter was published, arguing that the letter was understating the seriousness of the situation.

Yudkowsky’s solution for how we should handle AI was dire, but simple and logical:

Shut it all down. We are not ready. We are not on track to be significantly readier in the foreseeable future. If we go ahead on this everyone will die, including children who did not choose this and did not do anything wrong.

These fears echo those of the late Stephen Hawking, whose warnings about AI made headlines several years ago. In his words:

“Unless we learn how to prepare for, and avoid, the potential risks, AI could be the worst event in the history of our civilization. It brings dangers, like powerful autonomous weapons, or new ways for the few to oppress the many. It could bring great disruption to our economy.”

As Hawking pointed out, humans can avoid risks – something often overlooked when "disruption" is spoken about as a fait accompli. Technologies are not inevitable once introduced. If that logic were true, we would still be flying around in explosive hydrogen airships, building fences out of deadly asbestos and splashing them with lead-based paints. Technologies meet their end when they prove too dangerous to humans. To paraphrase historian Yuval Noah Harari in his 2011 book Sapiens, the arrow of history does not always move in a straight line. AI is neither inexorable nor irreversible.

In the writing sector, I have heard people suggest AI ought to be permitted for authors to play with and experiment with in their own writing, on their own terms. In principle, I would generally agree that writers should be free to do whatever we like in our creative process – as long as it doesn’t harm anyone. But there’s the rub. Because all AI is trained using texts that were – and still are being – fed into it without the consent of the artist, any writer using AI for their own work is participating in the infringement of other artists’ copyright, to which they did not consent, do not know about, cannot opt out of and are not compensated for. I am not seduced by any argument that a small amount of private use of AI by writers themselves is acceptable. It is worse than piracy.

And my greater concern is that, if we push simply for some industry regulation, as many are proposing, this will not be enough to avoid abuse of the technology. Leaving the door open for personal use of AI in the creative process is enough for executives to exploit. You can foresee how cheekily the wink-wink, nudge-nudge conversations will flow from executives. “We’re not legally allowed to tell you to use AI, of course, but we are only paying you 1/10 of the fee we used to pay you. So, up to you if you want to use some AI to speed things up for yourself – we can’t stop you.”

Simply regulating the use of large language models like ChatGPT in various industries would be a long, slow process, and one riddled with loopholes. Companies would simply open AI farms in overseas territories that saw an opportunity to leverage an unregulated market for their own profit.

Some, like the Authors Guild in the US, are campaigning for author credit and compensation when works are used to train AI, in a new open letter released this week. I have not signed the letter. Like Yudkowsky, I do not believe mere regulation goes far enough. I do not want to be paid a few cents to have my work pirated by machines and spat back into the market for someone else's profit. I do not want the machine pirating my work at all. I am not going to consent to automated copyright infringement of my work. I am not going to sign a contract with anyone to allow them to do this to my work.

I do not believe any writer should be consenting to this, and I am furious at the suggestion that this is where we ought to be heading with this conversation. It is like letting someone break into your house and steal everything you own – your furniture, your clothes, your money – and instead of reporting it to the police and taking the thief to court, we're being encouraged to meekly grovel to the burglar and say, "Hey man, look, if you can give me my socks back, I won't tell the cops and we can call this even." Writers are being directed to act like burgled victims begging for socks because we are so used to being trampled on, but this milquetoast response is not enough. A courageous response would be to say: don't break into my fucking house, and give me back all of my stuff.

Some arts organisations confess they don’t yet have a clear, finalised policy on generative AI. This is understandable, as the rapid rise of this technology caught us all on the hop. We didn’t know our houses would be broken into; it’s already happened and we can’t undo it. But we know generative AI exists now and we know how it operates, so we can now make decisions about how we interact with AI from this point on. And this new technology does not trump existing copyright law just because it is a “disruption”. I would posit that if your organisation believes in copyright and is against piracy, you already have a baseline position on generative AI: at a starting point, you are against it.

My neo-Luddite position on generative AI in the creative industries may seem reactionary or simplistic. Possibly it is. But it is also a courageous response, and the more I read, the more I am convinced that a courageous response to this technology is the only ethical position that will safeguard the livelihoods of artists around the world. For any minor benefits AI might bring (and I have not been convinced there are any genuinely tangible benefits to any writer), the disadvantages are so catastrophic that any gains it delivers amount to a Pyrrhic victory.

It is also so far from what the twentieth-century vision of future technology was. We were supposed to invent technologies to take away all the hard, boring, horrible work we don’t want to do, freeing us all up for a paradise where we wake up each day free to create art, play sport and make love. That was the dream.

AI is, by contrast, a nightmare. This technology is not analogous to the revolutionary tractors of the early 1900s or the washing machines of the 1950s, taking back-breaking labour away from humans who were all too relieved to see the back of it. AI is removing meaningful, joyful work – art itself, the essence of our humanity – to our collective horror and dismay.

AI is this generation’s asbestos. All evidence and expert advice points to it being dangerous to humans and it is antithetical to the existence of contemporary, working artists. I believe the most ethical position is to ban AI in the arts. Make it illegal in our sector, while we still can.

In the coming weeks, every artist and arts organisation will need to make a decision about their own position on AI. This is not a moment for sitting on either our hands or a fence. We need to take a stand because our livelihoods – and therefore, our lives – literally depend on what happens next. Australian artists need to know what Australia's arts industry stands for. Are we going to be met with the callous greed of Hollywood executives? Or will our arts industry stand with us?

Despite the nomenclature and the discursive framing, this historic moment of AI disruption is not a case of machines versus humans. AI didn’t invent itself. Technology is not some omnipotent deity or alien presence bearing down on humanity like an inevitable tsunami. Technology is – at least currently – made by humans. Technology does not accidentally disrupt our world: wealthy humans disrupt the world to best generate more profit for themselves. This is neoliberal profit-at-any-cost policy reaching its ultimate, utilitarian and nihilistic logical end point.

The advent of AI is humans versus humans. The battle lines are drawn, and the war has already started.

Which side are you on?
