What it’s like to bring AI into a human creative practice, with caution and curiosity
A personal perspective from Philip Maughan
Another newsletter about AI: who needs that? A lot of people, it would seem.
I have been fortunate enough to spend time with both design students and young creatives recently, and the dominant theme of our conversations has been how to adapt their career goals and skill sets in The Synthocene Era.
There is no scenario in which AI simply fades into irrelevance: it will irrevocably change the way we live and work. My fear is not that robots are coming for our jobs, but that if we don’t attend to it, the people creating the technology will be solely in charge of how it changes our lives. We urgently need voices from outside the tech industry to help shape its future. With this in mind, we reached out to one of our favourite writers, Philip Maughan, for a personal perspective on how they are integrating AI into their practice.
Helen x
Perhaps the greatest challenge in getting started with AI, for those who haven’t, isn’t intellectual or technical but emotional. In a recent essay for the New Yorker, the historian D. Graham Burnett recalls asking a class of Princeton undergraduates to raise their hand if they used AI. Nobody did. It may be that their response was due to a general prohibition (using ChatGPT or other tools will get you reported to the academic dean), but it could also be a gut response common among students, teachers and graduates of humanities subjects. Burnett sums it up nicely: “Everyone seems intent on pretending that the most significant revolution in the world of thought in the past century isn’t happening.”
It’s been a strange few years. It’s still strange. A familiar yet alien presence has manifested in our midst, one that Burnett goes on to describe as “part sibling, part rival, part careless child-god.” Another writer portrays large language models (LLMs) like ChatGPT, Claude and DeepSeek as “an extremely massive (metaphorical) database of almost all expert knowledge in all fields, with a very smart friend to act as a guide to that database.” The emotional tension rises when we try to reconcile these free-to-use chatbots with the endless media hype around AI, the philosophical polarisation between doom and abundance, the thinly veiled shilling on X and the tasteless slop almost everywhere else. In this brief post I’ll give an overview of how I use AI and explain why it makes sense to get started.
What Is Good Writing?
Weirdly, learning to use AI feels a little like getting good with money or relationships, or finding an accountant. You pick up bits of wisdom here and there. Along the way, you feel equal parts elated and ashamed at not having figured them out sooner. You remain confused as to why the whole thing has to be so opaque. During an interview about writing, OpenAI’s CEO Sam Altman was asked the seemingly obvious question of how he uses ChatGPT personally. He began with the amusing admission that he didn’t use it much for quite some time, adding that through experimentation he kept discovering more and more ways it could be useful.
I use AI every day in my work as a writer, editor and researcher, though it has never written anything for me, nor do I suspect it ever will. Why is that? The simplest way I can put it is that I aspire to write things only I could write. Humans are complex beings with chaotic and singular histories. Good writing is a process in which a complex being makes lots of decisions, maybe thousands, maybe millions, to produce a final text, generally in collaboration with another complex being called an editor. (The quality of the writing and the length of the text will of course affect the number of decisions to be made.)
Ted Chiang makes this argument in his essay “Why A.I. Isn’t Going to Make Art”. Looking at how large language models work, Chiang argues that the number of decisions required to write a novel or paint a painting is so vast it runs counter to the objective of current, consumer-facing AI, which is to produce output based on minimal information. To do this, it generates a response that approximates an average of the data it has been trained on. The result is often derivative, disappointing, cringe, at least as far as art goes. LLMs are word-prediction machines, and good writing is not one of the ways that human beings are predictable.
Good writing is dense with unexpected gestures and resonant observations, with rough edges that trace the outline of the author. To say this is not to denigrate AI. I am overwhelmed most days by what it can do, and it makes me look back at my career up to now and weep.
The Golden Rule
My first editorial job was as an assistant at a magazine. One of my tasks was to sit and transcribe interviews conducted by more senior members of staff. It was dull, slow work, but nowhere near as bad as transcribing my own interviews, which forced me to listen to my awful, nasal voice on repeat for hours on end as I tried to capture the essence of what had been said.
Nowadays I use Otter.ai to transcribe interviews. I then feed the barely legible conversations it produces into Claude with a prompt like “clean up this interview to remove fillers, false starts and repetitions, retaining the original voice without eliding any topics we discussed,” which outputs a readable version of the conversation. It’s pure bureaucratic bliss.
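If you find yourself doing this often enough, the same step can be scripted rather than pasted into a chat window. Here is a minimal sketch using Anthropic’s Python SDK; the filename, model name and exact wording are illustrative placeholders, not my actual setup.

```python
# A minimal sketch of the transcript clean-up step using Anthropic's Python SDK.
# Assumes an ANTHROPIC_API_KEY environment variable; the filename and model
# name are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("interview_transcript.txt") as f:
    transcript = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": (
            "Clean up this interview to remove fillers, false starts and "
            "repetitions, retaining the original voice without eliding any "
            "topics we discussed:\n\n" + transcript
        ),
    }],
)

# The reply arrives as a list of content blocks; the first holds the text.
print(response.content[0].text)
```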
What else? I use Claude like a conversational mega-thesaurus when the mot juste hangs just out of reach. Unlike a normal thesaurus, where the interconnections between words are fixed, I can describe my way towards a word or concept I know but can’t quite remember, or the AI can suggest a better alternative. I feed finished texts I’ve edited into the AI with instructions to proofread or fact-check, though I am very judicious about accepting suggestions. This is probably the golden rule for using AI in writing: never ask it to advise on something you have zero critical aptitude for. You need to be able to evaluate its output. The nice thing, of course, is that you can ask it to teach you whatever you want to learn.
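One way to keep that evaluation step explicit is to ask for a list of suggestions rather than a rewritten text, so every change still passes through your own judgement. A sketch of that request, with the same illustrative placeholders as above:

```python
# A sketch of a proofreading pass that returns suggestions to evaluate,
# not a rewritten text. Filename and model name are illustrative.
import anthropic

client = anthropic.Anthropic()

with open("edited_essay.txt") as f:
    draft = f.read()

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # illustrative model name
    max_tokens=2048,
    messages=[{
        "role": "user",
        "content": (
            "Proofread the following text. Do not rewrite it. Return a "
            "numbered list of suspected errors and factual claims worth "
            "double-checking, quoting each passage, so I can judge every "
            "suggestion myself:\n\n" + draft
        ),
    }],
)

print(response.content[0].text)
```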
As I experiment, I’m not thinking about the future, capitalism, the singularity or whether AI has a theory of mind. I’m just seeing what it can do to help me in the here and now. So far I haven’t used it much for research, aside from pulling together a basic frame or context, like talking to Wikipedia. But after hearing so much lately about the deep research function in ChatGPT, I plan to go there next. Even if it speeds up research, which can be maddeningly inefficient, the work of conducting interviews with other humans, building narrative, and judging relevance, presentation and style will still fall to me, which brings me to my final point.
It’s naive to think that AI won’t change how we work as writers, copywriters, art directors and creative people of all stripes. Much of the drudgery is being sped up or vanquished entirely, and the young editorial assistant inside of me welcomes this with open arms. But it would be equally foolish to ignore the drawbacks. AI still hallucinates. It struggles to create new knowledge, can make basic mistakes and isn’t very original. All of this sounds like a call for more personality, more intention, self-awareness and artistry in our work. It calls for more you, helped along by a mind-like presence that will push you forward and pick up the slack.
Philip Maughan is a writer and researcher based in London, working on creative and commercial projects. His essays on food technology, simulations, fashion, and the evolution of humanity can be read at BBC Future, Noema, Tank, Kaleidoscope and elsewhere. He has worked with a range of brands and institutions, including Prada, MoMA, Stone Island, Antikythera, Modem and Deloitte Ventures, on everything from campaign copy to speculative design.
Thanks for reading. As The GOODStack develops, we will bring you thought pieces from a great selection of voices and writers. We’re also eager to hear your ideas and suggestions to help shape The GOODStack into a resource that truly resonates with and benefits our community. Keep in touch!
The GOODStack’s initial curation is led by Helen Job. Helen has more than 20 years of experience in futures and cultural research, working both in-house and at agencies of various sizes. Formerly the Head of Research at SPACE10, Helen now leads her own creative ecosystem, Neu Futur, centred on deep research and preferable futures.