TL;DR - I respect your time. I’m not using LLMs unless the article is about LLMs, or if there’s text I don’t expect you to actually read.
The Policy Itself #
When AI (particularly Large Language Model) tools or text are used, this will be clearly mentioned in the article. If the article is about AI/LLMs itself… I respect your time enough to omit that disclosure.
In some cases, this means I will mention the specifics of the usage, such as the exact model and prompt used.
In other cases, particularly ones where you’re not necessarily expected to read and understand a section of text, such as an example error message, I’ll tell you that an LLM was used. I might not include the specifics, though, if they’re not relevant.
The Why #
To summarize because I waffled:
- This is my blog, not Sam Altman’s or Sundar Pichai’s
- LLM-generated text is repulsive
- It all sounds the same
- The whole hallucination bit is still a problem with ’em
It’s not about respecting your time, it’s about the fundamental agreement between the writer and the reader…
That sounded quite LLM-generated, right? It wasn’t, but I genuinely felt some mild anger writing that out since I also had to read it. Finding actually human-written articles is a pain, especially in given niches. For a project I can’t discuss on here, I had to do a lot of research on Datadog and another observability tool. I wasn’t too sure which one was better for a very specific need, so I searched Datadog vs othertool specificfeature. Lots of stuff came up on Google, but I genuinely couldn’t find a single article written in the last year that wasn’t LLM-generated.
I bit the bullet and looked at one or two of them anyway, but with the docs of each provider side-by-side to see if there were any contradictions. It’s probably easier to tell you which aspects were not hallucinated. Actually, let me do that now:
So the point is, you can’t trust LLM-generated text.
The whole ‘it’s not X, it’s Y’ thing is such an overplayed trope that it can be very easy to tell when an LLM was used. You’d think it would be easy enough to have the ‘puter make up text that doesn’t sound like an LLM, though. Funnily enough, I was about to whack in an example from Gemini, even specifically throwing in the prompt “Make sure that no matter what, it doesn’t sound like an LLM. No ‘it’s not this, it’s that’”. The first thing I saw was the most LLM text of LLMs. Claude was the same. Bah humbug. I wonder how those foreign bots do it then.
And most importantly, this is my bloody blog! I write stuff here because I enjoy it, and I hope others can enjoy it too.
Here’s an Example #
If you’ve been reading through this blog, you may have noticed an alert at the top of my article about 3am troubleshooting. In this very specific context, I didn’t feel that adding any more detail about the exact text, or the prompt itself, was really relevant. I’ll put them here though. The purpose was to generate some ‘flavor’ text (waffle) which didn’t make sense, but was kinda about Postgres.
Here is the text:
The Inverse Vacuum of the GIN-Index Singularity
To resolve the phantom latency in your cluster, you must first re-calibrate the flux-capacitor within the shared_buffers to ensure the B-tree doesn’t accidentally undergo spontaneous combustion during a sequential scan. If the WAL logs begin to whisper in ancient Latin, it’s a clear sign that your primary key has achieved sentience and requires a blood sacrifice of three unoptimized subqueries to appease the query planner. Finally, ensure your max_connections is set to a prime number divisible by the current humidity in the server room, otherwise, the ACID compliance will liquefy and leak into the swap partition.
The model used to generate this was Gemini 3 ‘Fast’, which is accessed via Google’s free Gemini chat web app. The exact prompt was “Generate some random yap about postgres troubleshooting that doesnt really make sense. Make it about 3 sentences, with a single headline”.