LLMs are great writing tools: they help improve style and structure, summarize longer pieces, and, for non-native speakers like me, fix grammar.
However, a major frustration I have (and hear from many others) is that LLMs often completely transform your writing's tone and structure. For me, this is a deal-breaker - I want my writing to reflect my ideas and sound like me. I already get emails and decks where ChatGPT's fingerprints are obvious - that superficial, "marketingy" tone and predictable structure (the world is changing rapidly, here's why this tiny detail matters, and here's my take on it). I can spot these from a kilometer or mile away.
In my experience, ChatGPT is the worst offender here. Gemini does better. Claude.ai also has this issue, though you can at least upload writing samples and ask it to match your tone (which works more or less) - but only within a single chat or project, not across conversations.
However, the Claude product team has been busy. Claude now offers three preset response styles:
Formal: clear and polished responses
Concise: shorter and more direct responses
Explanatory: educational responses for learning new concepts
You can choose these styles for each prompt and even change styles within a conversation.
Even better, you can create custom styles by uploading samples of your writing. Claude analyzes these to create a style summary and gives it a name.
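Custom styles live in the claude.ai interface, but if you work with the API you can approximate the same idea with a system prompt assembled from your own writing samples. A minimal sketch - the `build_style_prompt` helper and the sample texts are my own hypothetical illustration, not part of any Anthropic API; only the commented-out call at the end uses the real SDK:

```python
# Sketch: approximate a "custom style" by folding writing samples into a system prompt.
# build_style_prompt and the sample texts are hypothetical illustrations.

def build_style_prompt(samples):
    """Assemble a system prompt asking the model to match the samples' voice."""
    joined = "\n\n---\n\n".join(samples)
    return (
        "Match the tone, sentence rhythm, and structure of the writing samples below. "
        "Do not add marketing language or restructure the author's argument.\n\n"
        f"WRITING SAMPLES:\n{joined}"
    )

samples = [
    "Short, direct sentences. Occasional asides. No hype.",
    "I test tools hands-on and report what actually worked.",
]
system_prompt = build_style_prompt(samples)

# With the official SDK (pip install anthropic), you would then send:
# import anthropic
# client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment
# reply = client.messages.create(
#     model="claude-3-5-sonnet-latest",
#     max_tokens=1024,
#     system=system_prompt,
#     messages=[{"role": "user", "content": "Rewrite this draft in my style: ..."}],
# )
```

This is roughly what "upload samples, get a style" does behind the scenes conceptually, though Claude's feature also distills the samples into a named summary rather than replaying them verbatim.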


To test this, I shared my posts since August with Claude to see what it would identify as my personal style. According to Claude, I write with a "professional yet conversational tech analysis style", mixing practical insights with engaging storytelling. Not entirely accurate - I mostly write about personal and corporate productivity rather than pure tech. But it's still a useful feature. Once created, this custom style becomes available across all your chats.
Like everything with LLMs, it's not perfect. I tested it by asking Claude to write, in my style, about Google DeepMind's recent essay on AI transforming scientific discovery (a simple prompt, no extra guidance). Here's what it produced:
Google DeepMind's comprehensive position paper "A new golden age of discovery" presents an ambitious vision for how AI could fundamentally transform scientific research and discovery. What stands out immediately is their pragmatic yet optimistic framework for considering AI not just as another digital tool, but as potentially the next great scientific instrument - following in the footsteps of microscopes, telescopes, and statistical methods.
The paper thoughtfully addresses both opportunities and challenges. It identifies five key areas where AI could accelerate scientific progress: knowledge synthesis, data generation, experiment simulation, complex systems modeling, and solution search in vast possibility spaces. What I find particularly insightful is their emphasis on AI as a complement to, rather than replacement for, human scientists. They envision AI helping tackle specific bottlenecks around scale and complexity while preserving the essential creative and intuitive aspects of scientific discovery. The paper's discussion of AlphaFold and other concrete examples helps ground these abstract concepts in practical reality.
Where I think the paper makes its most valuable contribution is in its policy recommendations. Rather than focusing solely on compute resources or technical capabilities, it calls for a holistic approach including the creation of "AI for Science Data Observatories", treating AI as a core scientific instrument in education, and reimagining research institutions for the AI era. These suggestions reflect a sophisticated understanding of how scientific progress actually happens - through the interplay of tools, institutions, incentives and human capital. My main constructive critique would be that while the paper acknowledges equity concerns, it could have devoted more space to concrete proposals for ensuring AI-powered scientific advances benefit historically underserved communities and regions. Still, this represents one of the most thoughtful frameworks I've seen for systematically thinking about AI's role in advancing human knowledge.
The result wasn't quite "me", but it wasn't far off either - much better than the typical overly enthusiastic LLM voice. While I don't write posts this way (starting from an LLM draft, then refining), I know many people do, so this could be really valuable.
For those doing lots of structured, researched writing, Claude remains my top LLM choice. Love it... 😍
Also, as always, I'm interested in your thoughts: Feel free to message me. Or, if you prefer, you can share your feedback anonymously via this form.
All opinions are my own. Please be mindful of your company's rules for AI tools and use good judgment when dealing with sensitive data.