Claude, actually! Yeah, the content editing was heavily LLM-assisted, as I'm a terrible writer and I wanted the read to be enjoyable. So I compiled all the research and worked with Claude to build the article, then attempted to go through it with a fine-tooth comb and rewrite it in my own words. That is one particular sentence I missed, with a highly recognizable LLM pattern, which I will fix. I simply don't have time to really market the game; I care more about the quality of the software. I know that if the software is great, it will be successful. But I wanted to share the story in a compelling way. Apologies if it was distracting!
Marketing copy has been using this rhetorical artifact for decades, as has journalism with some success. It is like an analogy, but pruned down in meaning by the opposition. It is impactful, and it probably got high marks from the humans doing RLHF, who, I suppose, came mostly from journalism and marketing schools.
Professionally typeset books. Designers have been typing it—and the other dashes—manually using modifiers+hyphen on Mac since 1984. You can type them—plus the bullet character—today on iOS by doing a long press on the hyphen key.
I’m not talking about the em-dash, which is not a great indicator IMO, but about the horrible overuse of binary oppositions with a kind of false surprise, e.g.:
The problem was not em-dashes — but binary opposition!
That sort of thing.
It is a much clearer marker of LLM use than the em-dash. The sad thing is that when searching for info on this, the most convincing reply in the results was itself generated by an LLM, which went on at length about why LLMs do this as some consequence of their internal structure. I have absolutely no idea if that’s true; it sounds a bit trite, and exactly the kind of thing LLMs would confidently assert with no basis. I would want to hear from someone working on LLMs, but their blogs are probably all generated by an LLM nowadays. So this conundrum is a good example of a question where LLMs actively work against clear resolution.
This is, in my view, the most insidious damage word generators are inflicting on our culture: we can no longer assume most writing is honest or well-meaning. Amoral LLMs are fundamentally not wired to distinguish true from untrue or right from wrong (unlike most humans), and many people will use and trust what they generate without question, polluting the online space and the training data until everything is just a morass of half-known facts sprinkled with generated falsehoods that are repeated so often they seem true.
How do we check sources when the sources themselves were generated by LLMs?
My personal feeling (completely subjective) is that during RLHF, humans are incredibly sensitive to this pattern, especially when talking about personal or emotional issues. Any reply in the form of "it's not you, it's them" is such a dopamine hit that the LLMs started applying it to everything else.
The em-dash meme, if it's actually a thing, I find really annoying. That's the style in a lot of places and for a lot of people, with or without spaces on either side. It was house style at a few different places I worked over about 25 years. More broadly, I assume LLMs are training on how a lot of people actually write. I'm not changing my writing style because some people will flag it as LLM-generated <shrug>.
I think it comes from the RLHF. If you haven't interacted with LLMs enough to get turned off by it, I think that kind of speech is seen as powerful and confident.
RLHF = Right Left Hand Foot. It's a technique in Bavarian interpretative folk dance where you jump around, artfully hitting the soles of your feet with your hands, in order to court women who are busy carrying unbelievable numbers of beer steins into the mountains.
That's what came to mind when I saw the abbreviation. Then I looked it up:
It is a staple of marketing and journalism writing. And the people doing the HF on it most probably came from this exact background: marketing and journalism.
Whatever it is that caused "it's not X, it's Y", it's more recent than LLMs as a whole.
As far as I remember, neither GPT3.5, GPT4, nor Claude Instant did it. I think Gemini was the first to really do it, and then out of nowhere, everybody was doing it.