Science fiction author Ted Chiang's recent New Yorker article raised many intriguing questions about AI—unsurprising, given that Chiang's writing often tackles deep philosophical issues. In this piece, he argues that AI cannot truly create art because it lacks the intentionality, understanding, and ability to make the numerous meaningful choices required for artistic creation.
This topic is relevant even for non-artists. We consume art or creative works daily, whether on YouTube, streaming a movie, or reading a book. Just as we should care about good food for our bodies, we should care about good art for our minds. Moreover, I'd argue that much knowledge work has a strong creative (perhaps even artistic) element. I can spend hours crafting a compelling storyline and creating a convincing piece of work that generates buy-in and leads to important decisions. I've seen data analysts pour immense care into designing reports. While I'm not claiming my work matches the artistic level of Chiang's short stories, his choice to publish in the New Yorker (rather than "Artist's Monthly") invites broader commentary.
Let me first outline my understanding of Chiang's reasoning. I encourage you to read the full article for more context and nuance.
Art requires numerous choices: Chiang argues that creating art involves making vast numbers of choices, both conscious and unconscious. Writing a 10,000-word story involves roughly 10,000 choices. AI, given a brief prompt, must fill in these choices, often resulting in bland or derivative work.
AI lacks intention and understanding: He contends that language models like ChatGPT aren't truly using language because they lack intention to communicate. They can generate coherent sentences but don't understand or feel anything behind those words.
AI-generated content isn't truly creative: Chiang suggests AI appeals to people who think they can express themselves in a medium without actually working in it. He argues true creativity comes from engaging deeply with the medium and its unique expressive potential.
Current AI is skilled but not intelligent: Using François Chollet's distinction, Chiang argues that while AI programs are highly skilled, they aren't particularly intelligent because they're inefficient at gaining new skills and struggle with unfamiliar situations.
AI is dehumanizing: He concludes that generative AI is fundamentally dehumanizing because it treats humans as less than what we are: creators and apprehenders of meaning. It reduces intention in the world and lowers expectations of what we read and write.
This is thought-provoking, as expected. I've already seen one comprehensive critique of Chiang's arguments here, but I'd like to add my perspective, based on personal experience using popular LLMs for writing and storytelling, and MidJourney for over a year on a secret art project.
In essence, Chiang's reasoning rests on one key assumption: Using AI reduces human-made artistic choices, even when you prompt your AI extensively. I'm not convinced this is so black-and-white in practice. To borrow Chiang’s example: Writing a 10,000-word short story without AI doesn't necessarily mean making 10,000 choices. Realistically, you might make hundreds or a few thousand choices—defining structure, crafting arguments, drafting, and iterating. Not every word requires a choice, and not all choices carry equal weight. If your reasoning is muddled, word choice matters little. Conversely, a novel idea can shine even with imperfect execution. Demanding that artists make "10,000 artistic choices" sets an exceptionally high bar for creation. I am sure Chiang meets this requirement, but do all human artists and creators?
I'd therefore make a more nuanced case. Using AI tools doesn't automatically diminish my artistic role, even if I'm not making "10,000 choices." I remain the creator, whether I'm using AI or just a notebook. While LLMs and tools like MidJourney can generate mediocre work autonomously, that's not a good reference point. Getting started at all means gathering the energy and courage to create something, which is itself a fundamentally human, creative act. When I use AI tools, I guide the output at every step. I define the structure, guiding questions or themes, and tone, making numerous intentional choices. To refine the content, I iterate through multiple rounds of prompts, first on the overall topic, then on specific aspects. With Claude, you can even set up a "project" for your art, providing extensive background information, briefings, and examples of good writing. If I dislike the direction, it's my responsibility to redirect. If my LLM doesn't produce enough novel content, I must engage with the storyline, generate better ideas, and feed them back via prompts or direct edits. I'm content with AI as a skilled (but not self-aware) "summer intern" or assistant. It's really quite skilled, and I don't need it to be intelligent or self-aware; that's what I bring to the mix. ☺️
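For the technically curious, here is a minimal sketch of what this iterative, human-guided workflow can look like in code. It assumes the Anthropic Python SDK (the "anthropic" package); the model alias, briefing text, and refinement prompts are illustrative placeholders of my own, not anything Chiang or Anthropic prescribe.

```python
# Minimal sketch of iterative, human-guided drafting with an LLM.
# Assumes the Anthropic Python SDK (pip install anthropic); the model
# alias, briefing, and prompts below are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# My intentional choices live here: structure, themes, and tone.
BRIEFING = (
    "You are my writing assistant. Tone: reflective, first person. "
    "Structure: hook, summary of the argument, counterpoint, conclusion."
)

def ask(prompt: str) -> str:
    """One round trip to the model under my standing briefing."""
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model alias
        max_tokens=2000,
        system=BRIEFING,
        messages=[{"role": "user", "content": prompt}],
    )
    return response.content[0].text

# First pass: the overall topic.
draft = ask("Draft a 500-word reflection on whether AI-assisted writing is art.")

# Later passes: I read, judge, and redirect; each note is another choice.
for note in [
    "Sharpen the counterpoint: guiding the AI is itself a series of choices.",
    "Cut the cliches and tighten the conclusion.",
]:
    draft = ask(f"Revise this draft. {note}\n\n{draft}")

print(draft)
```

The code is beside the point; what matters is that every briefing, prompt, and accepted or rejected draft in this loop is a deliberate human choice.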
To rephrase Chiang's question: "Is it art when I guide the AI?" My answer is yes, if you guide the AI closely. It's a human act of creation, involving hundreds or thousands of choices. Is it more artistic if I make all 10,000 choices? Perhaps. Would I draft a love letter using Claude or ChatGPT? Certainly not. But my AI-assisted work is artistic enough, even if I make only 2,000 choices.
That said, Chiang makes many compelling points, especially about how a lack of intention in our work can dehumanize us and lower our expectations of what we read and write. I'd argue there's already too little intention in the world, so we shouldn't become even less intentional. We often navigate life on autopilot, including in knowledge work. We process emails mechanically, focused solely on emptying our inboxes. Expectations are already low in many aspects of work: companies and individuals often react blindly rather than mindfully setting and executing plans, and apprehending and making meaning don't always seem to matter. In my 19 years of professional experience, I've encountered plenty of mediocre, intellectually lazy reasoning: "I want to do X, but please explain to me why X is good."
None of this is very intentional. So, let's be more intentional with our work, whether with or without AI support.
All opinions are my own. Please be mindful of your company's rules for AI tools and use good judgment when dealing with sensitive data.