Anyone who spends significant time with LLMs can spot AI-generated writing. For a while, it was even a meme: the word "delve" almost instantly meant that whatever you were reading was written by an LLM. The phenomenon came to be known as "LLM Slop" - the word "slop" clearly signifying low-effort, low-quality content.
This essay isn't about identifying what is AI-generated and what's not - it's an essay I had to write because I've changed my mind about LLM Slop in general.
If I'm changing my mind on something important, it's often the case that smart people still hold similar (or even opposite) beliefs - so hopefully, by the end of this essay, you'll understand why it matters.
Here's how my views have changed -
II.
Whenever I saw an email, a text message, a tweet, or a blog post with LLM-generated text, I instantly lost respect for the person. For some reason, it felt like a betrayal - here I am, spending my limited time reading what you've written, and yet you've chosen to throw LLM slop at me, not even bothering to respect my time.
Even the meta-thought of someone having the audacity to post LLM-generated text made me question their principles.
You've made the conscious choice to post AI-generated text - the output of a prompt - perhaps thinking to yourself that no one is going to know anyway. So you clearly don't deserve my attention, or my time, right?
Now, if you were nodding along to the above lines, I want you to understand that it's perfectly all right to think that way. If you send your friend an emotional text, and he responds with, "Hey, I'm sorry to hear that. I'm deeply moved by what you did. It's not about how much courage you had—but how gracefully you handled it...", I wouldn't blame you for never texting that friend again. If you're reading a product review - a 5-star or a 1-star rating - and you can clearly tell it's AI-written, I wouldn't blame you for ignoring it.
But I think we're in the very small minority who notice these things. And if we don't evolve and adapt fast, we'll be perpetually annoyed by the world at large.
Here's what the usual format of an LLM-generated response looks like -
It's not just about [abstract noun]--it's about the interplay between [modifier] [noun] and the emergent [buzzword] of [gerund phrase].
We're not merely [verb]-ing--we're [adverb]-[participle]-ing the [adjective] [concept] that underlies [domain-specific term]. This isn't a matter of [noun] versus [noun]--it's a liminal negotiation across a spectrum of [pluralized abstraction].
It's less of a [noun] and more of a [noun-as-process] refracted through the lens of [adjacent jargon]. What emerges isn't a conclusion, but a [modifier] [noun] woven from the threads of [vaguely gestured domains]. The question isn't what, or even why--it's how we [verb] with the [plural abstraction] in a way that [vague outcome verb]-s.
Watch this talk and see if you can spot why it's an LLM-generated script. These are employees of Google, intentionally and shamelessly reading an entirely LLM-generated script on stage. Is it ironic, or all the more fitting, that it's at one of the biggest generative AI events of the year? Okay, it's impossible for an AI to have fully generated that script - clearly, a lot of human input went into it - but is an LLM output the best final version of a talk that they could've converged on? Not today, though in the future it might be.
Over time, it is my hope that LLM-written text is not only going to be undetectable, it's going to be better than any human-written text. But there's a small part of me that still believes that because LLMs will write more or less similarly, anyone who uses them enough will be able to spot them.
If the latter is the case, I don't think the world will be a worse place than it is right now. You'll still be reading well-written, grammatically correct prose - so, in some ways, it's even better. As LLM-written text improves, it might just be the case that what's LLM-written feels better to read, and you'll wish everyone else wrote the same way. What's better: reading a sentence like "I has many jobs before but I eats breakfast tomorrow", or a coherent, beautifully written one? Is an argument made by an internet troll better because it now includes nuance - sometimes extremely unnecessary nuance - instead of ad hominem attacks?
I think the aversion to LLM-written text, even right now, isn't because LLMs produce bad writing. People still use LLMs extensively for themselves, so if the output were really low quality, even using ChatGPT would be intolerable.
Thus, the aversion to LLM-written text probably lies in the fact that it's like reading a book written by a ghostwriter, or interacting with a social media page run by a PR intern. You're not peeking into someone's raw thoughts - the intimate structure of their ramblings - but rather being fooled by someone using an LLM.
This "fooling" is perhaps what feels disgusting right now, and everywhere from LinkedIn to X, someone's saving time by copy pasting something from an LLM to post, hoping the general public won't notice or perhaps, not even caring if they do notice.
Once enough people do it, it'll stop being so repulsive - it'll just be another thing you see on the internet. I think that as people keep using LLMs, that's how they'll talk as well. An entire generation is now growing up, and a lot of them will have an AI to ask infinite questions of - and that is a better world than the one we lived in even five years ago.
So even when someone is writing on their own in the future, their writing will be indistinguishable from something produced by an LLM - and it's the imperfections in the writing (a wrong comma, a plural verb instead of a singular one) that may indicate something is human-written.
Over a long enough time horizon, maybe you'll get an essay like this written entirely using LLMs. And instead of reading it, you'll condense the essay down to its summary, and that's how you'll consume it. You think the summary is great, so you instruct an AI agent to leave a positive comment on the essay. The AI agent, which has the personal context of your digital life, goes and writes a comment indistinguishable in meaning from what you would've written.
The above interaction is certainly faster, and it'll get cheaper, but something still feels lost.
III.
I once heard Naval Ravikant say that books can be essays, and essays can be tweets - so just write the tweet. It's more efficient, the act of compression forces you to justify the core message in the first place, and it saves everyone else time.
A lot of books - the ones where a single idea is repeated throughout using examples and metaphors, padded just enough to justify the spine - can certainly benefit from ruthless compression.
Yet there are very few great tweets, but hundreds of great books - even if we confine ourselves to self-help non-fiction. I think this is because, despite our preference for efficiency, we still need the layers of reinforcing tissue around the core idea. We still need the stories that can't be condensed, the emotions that can't be tweeted, the anecdotes that can't be summarised. "Never give up" is an easy moral, but the right story can make you remember it better than any tweet. Ask anyone who reads which books changed their life, and they'll have an answer - but few tweets are remembered for weeks, let alone years or a lifetime.
This is why I think that even with AI-generated content, you shouldn't summarise it unless you really know you can do so and still keep the meaning. It's for you to decide what to consume through an LLM-summarised lens and what to read in full - instead of defaulting to one mode for everything.
Obviously, there will soon be people not using LLMs who you wish would just use them.
And as the Internet swims in perfectly grammatical LLM-written content, there will still be writers who do it without AI. Not entirely without AI - maybe they run their writing through a grammar checker, maybe they brainstorm and outline with an AI - but the true joy of writing lies in what I'm doing right now: just writing. Will people bother seeking out writers who work the old-fashioned way, without LLMs?
I think that shouldn't matter.
If someone can create something better using an LLM than they can without it, then by all means, they should release the AI-written content. If they're transparent about it, even better. I can certainly see myself combining ten deep research reports from o3 into a fascinating essay - one that would certainly have flavors of LLM-generated writing, but would be better than anything an LLM, or a human without one, could produce.
But even when someone isn't doing that kind of merging - maybe they're just going to an LLM, prompting it with “Write me a post about mistakes newlywed couples make so that I can post it to r/relationships”, and copy-pasting the output - that's the exact kind of content I'd have said was killing the Internet. But now? I embrace it. And so should you. More likely than not, the text contains something new - and disdain for the author shouldn't mean you also stop reading the content.
IV.
But also, why feel disdain for the author?
I know I’ve mentioned that I’ve felt it before, but here’s one way to look at it that might shift your perspective. As we all stumble towards the singularity, there might be a moment in the future where everyone’s words - both spoken words and internal thoughts - are filtered through advanced large language models: models so intelligent, and personalized so perfectly, that we won’t even know how we lived without them.
Maybe it’s a future with brain-computer interfaces so cheap, affordable, and safe that everyone has one - where using the model’s capabilities inside your head is more efficient and effective than using your primitive meat brain. In such a scenario, we wouldn’t judge anyone for being assisted or augmented by LLMs - but that scenario is already here. We’re all augmented by the technology we use every day, reaching for our phones when we wake up as if they were an inseparable organ just outside our bodies.
Soon, we'll see policy proposals written entirely with LLMs. We'll see politicians tweet out LLM-generated text - and we should think highly of the politicians who do, because it means they're augmented with an intelligence that will let them craft better policies and take care of their citizens more effectively than politicians still thinking with their meat minds alone. And if that's the case, shouldn't we be happy whenever we spot LLM-generated text on LinkedIn or Twitter, or even in a newsletter?
I think we should. And if you came into this essay as annoyed by AI slop as I used to be, I hope I’ve made you reconsider that perspective. Slop is the future, and we’re all in it together - so we might as well embrace it early.