So, I’m well aware that AI is something of a volatile topic these days. Saying the wrong thing can lose you friends, and perhaps even faster: win you enemies. I’m seeking to do neither. But as an organisation that is passionate about supporting people to express their real and true selves… and as workshop facilitators who are passionate about creating safe spaces for said people to do said expressing… I reckon AI should be a topic we’re all willing to discuss. Because I’m pretty sure it ain’t going anywhere soon, other than to ‘bigger and more pervasive places and spaces.’
To be clear from the get-go: for the purposes of this specific AWA Forum discussion, the AI I’m referring to is Generative AI (or GenAI) – the kind of AI that might ‘help or augment’ what would otherwise be considered ‘a human cognitive process.’ Meaning: asking AI to write an email for you, or to write a research paper for you, or to write a novel for you – any and all of these tasks would require Generative AI.
What would not require Generative AI? A spell check. A grammar check. A subtitling service. A transcription service. Let me put it this way. I just did a bunch of artist interviews. I recorded our conversations using the Voice Memos app on my phone. And then I used HappyScribe, an AI transcription app, to deliver the first draft of my transcript. Point is, I could have achieved the exact same result myself – it just would have taken me a lot longer to listen to each phrase and sentence and then type each one up, word by painstaking word. AI gave me speed – I used it as a tool, in the manner that I might use an electric drill to insert or extract a screw from a wall faster than I could by hand. But HappyScribe’s AI transcription service didn’t give me one single ‘additional’ word beyond those that my interviewee and I uttered.
Generative AI, though, goes beyond being a mere tool. If you ask it to write a poem or a short story for you, it’s going to supply you with many more words in response than the few you give it by way of your topic and your instructions. And you know where those ‘extra words’ are coming from? Those words that are putting muscles, flesh, skin tone and facial features on top of the raggedy skeleton you gave the AI in the first place? Those words are coming from the works of writers just like you and me. Writers who for whatever reason had their works sitting in digitally available places. Places where Meta, the makers of ChatGPT, and other companies have simply gone, “Oh! Thanks for that!” while rampantly stealing without permission, and without providing compensation.
(This is not to mention other downsides to GenAI, which include massive harm to the environment by way of exponentially increasing power demands and water consumption.)
An Australian writer colleague of mine, Zanni Louise, who has authored more than forty bestselling and internationally published books for kids, recently put out a wonderfully eloquent post about Generative AI on her Substack. It’s entitled: “Whatever you do, do not feed the beast. From romantasy to AI, it’s a good time to get out of our comfort zone and remember what being human is all about.”
I encourage everyone to read it. She gives great examples, suggests further reading, offers resources for protecting your work, and links for deeper investigation.
I guess the main question I’d like to posit here today is this. Regarding GenAI, where do we go from here as workshop facilitators trying to create and maintain safe, supportive, inclusive spaces? Personally, I don’t want someone handing in a manuscript for review that has been written “with the help of AI.” I’m not a chess player. I didn’t sign up to try and ‘beat the computer’. So I don’t want to begin to try and give feedback on words that did not come 100% from the writer in front of me. And I don’t believe that is where AWA writing should be coming from. But I imagine manuscript review situations aren’t the only GenAI-related ones we’ll encounter in our groups. What thoughts or answers should we have up our sleeves when other GenAI issues come up?
As a famous actor friend of mine responds to wannabe actors when they ask her, “But what’s the short cut to becoming as famous as you?” she answers, “The short cut is to just get on and DO THE WORK. Because there IS no short cut!”
I do not see GenAI as a tool. I see it as a short cut. And I’m interested in people who want to get on and do the work – work that is full of all the messy, awkward, fabulous, harrowing, painful, mind-expandingly imaginative and inspiring words that tend to spill onto ink-blotched scraps of paper when they come from the hands and the brains of fellow human beings.
How about you?
Thanks, Mathew, for the topic and your well-thought-out comments. I will have to check out the Zanni Louise piece.
In some ways I think AWA sessions are safe, as much of the writing is newly birthed during the session, so the chance for AI interference is minimal. If writers have the prompt beforehand, that could be out the window. I guess we could all be fooled on occasion, but I do have some inherent trust in the writers in my groups to be true to the process.
Beyond that, I have found AI an exceptional helpmate when trying to do some quick research to better understand the breadth of a topic I am writing about. I wrote a piece about the impact of guns, and AI-generated statistical information helped me craft a more impactful piece. I could have done the research online or at a library, but this was quick and comprehensive.
I took a workshop recently on using AI, from a writer and editor I trust totally, which was fascinating. It provided some perspective on this issue. I tested AI out, posting a piece to it and telling it not to save it or rewrite it, but to provide feedback and suggestions to raise the tension level. It was a fascinating process. The feedback was very AWA-esque: positive, strength-based, complimentary. The suggestions were interesting – I didn’t use them, because I still believe it is my job to write. But it was fascinating.
I understand the writers’ guild and publishers limit AI in any piece to less than 5%. Not sure how they track that – maybe they have AI do it. I saw a piece recently by James Patterson, where he said he wasn’t worried that AI would replace him as a writer. He was more worried it would destroy readers, who would rely on AI to provide the Cliff Notes versions (showing my age) of everything – that they would rely on AI to think for them. They wouldn’t read anymore, diminishing the powerful language writers provide and sending us careening down the slope toward “Idiocracy” (I used AI to find the title, as I couldn’t find it in my brain). Maybe that’s the other risk: we will stop using our beautiful brains… though the creative folks around our tables (and Zoom screens) may be immunized by all we do. Thanks for posing this topic.
Thanks, Guy – I do appreciate you taking the time to write a thoughtful answer. And I also encourage you (and other readers here): PLEASE ACTUALLY READ THE ZANNI LOUISE PIECE!
I have no doubt it was helpful that AI grabbed those gun statistics for you. Just as I’m sure it was fascinating to use AI in that writing workshop.
Zanni had this (and more) to say on research:
“Using AI to shortcut research is cheapening the beautiful journey you go on when you dive into a topic and walk around, stumbling on serendipitous discoveries, which give you the same jolt of satisfaction you’d get from finding a diamond on the beach.”
Like her, I am concerned with the deeper ramifications of what we’re doing to ourselves, our readers and each other by ‘outsourcing’ our creativity and imagination to a thing that was built on dark and dodgy principles, is bringing extreme wealth to an already obscenely wealthy tech sector, and is doing serious damage to our environment.
I used to remember a zillion phone numbers in my head – international codes included. I remembered them through repeated action – punching in those numbers in specific sequences and patterns – and the little tonal sounds each number played helped me remember them too. I had to DO something. I had to perform a TASK. I had to TAKE ACTION in order to receive the reward of speaking to my friend / lover / family member / colleague on the other end. (And hey, I’m not talking about working in a coal mine here; I’m talking about “dialling a phone number.”)
These days, less and less and less do we have to DO anything to achieve a result. And the more we become beings who simply press one button or make one keystroke to have our ‘desire satiated,’ then yes, I believe that along the way we are losing valuable parts of our humanity. And along the way we’re ceasing to use our immeasurably beautiful brains, especially when it comes to taking care of our immeasurably beautiful planet, and all the extraordinary and wondrous things that live and exist upon it.
Hmm, and now, having thought through and written this response, I think I’ve stumbled across a possible new writing prompt. Gotta go. Thanks, Guy!
Here’s another really interesting perspective, from someone who uses AI a lot for all kinds of ‘tool-like’ assistance. It’s an article entitled, “We need to stop pretending AI is intelligent – here’s how.”
https://theconversation.com/we-need-to-stop-pretending-ai-is-intelligent-heres-how-254090
A few snippets:
• AI has no understanding. No consciousness. No knowledge in any real, human sense. Just pure probability-driven, engineered brilliance — nothing more, and nothing less.
• It has no taste, no instinct, no inner compass. It is bereft of all the messy, charming complexity that makes us who we are.
• And please, don’t come at me with: “You’re too harsh! You’re not open to the possibilities!”
• I use AI every day. It’s the most powerful tool I’ve ever had.
• But it is still a tool — nothing more, nothing less. And like every tool humans have ever invented, from stone axes and slingshots to quantum computing and atomic bombs, it can be used as a weapon. It will be used as a weapon.