Everybody's hating on LLMs, why can't I?

10 March 2026

Update: 15 March 2026 - Another proof of concept in the wild.

It seems like everybody else is saying that LLM technology is either true artificial intelligence that we should all bow down to (it is not, and fuck that noise) and we'd best get used to it, or that it's going to destroy everything and we'd best get used to it (it's not, but some mornings it feels that way). Plenty of people with more going on than I do have litigated this to hell and back, marketing companies are doing marketing company things, and frankly I don't care to join in.

What I do want to do is talk about my experiences with it when I had to use it at my last job. Like just about everywhere else, we were ordered to integrate LLM technology (a customized version of ChatGPT) into our daily work. Supposedly it had been trained and tailored on our documentation, manuals, and wiki pages so it could be used by everybody at the company. 1 So, for my first attempt, I asked it a fairly specific question about the company's extremely complex infrastructure. It was a question that I was asking at least twice a day, not a contrived "Gotcha!" situation to scupper the whole effort. If we were supposed to use it to help with our daily work, I was going to use it for my daily work, right? As it turned out, no matter how I phrased the question (including prompt injections and elaborate reframings 2 thereof), the only responses I managed to get were of the form "As an LLM model customized for $company I am unable to answer questions about your infrastructure. You'll have to reach out to your IT and/or DevOps teams for the answers you need."

Thanks heaps, ChatGPT.

The other experience was a little more insidious. I decided to ask their in-house instance of ChatGPT a softball question that I was reasonably sure it could answer with a straight copy-and-paste out of its training data, namely "Hello, world!" in C:

#include <stdio.h>

int main(){
    printf("Hello, world!\n");
    return(0);
}

Because it was 2025.ev, however, I wanted a slightly more complex bit of code: "Write for me a hello world program in C, but instead of regular text make the words use big, blocky text mode letters like you used to see in old DOS programs and ANSI art, using any TUI library easily installable on an up-to-date Ubuntu v24.04 system."

In other words, I wanted it to look something like this. 3
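For the curious, here's roughly what a sane answer could have looked like. To be clear, this is my own after-the-fact sketch, not anything ChatGPT produced, and it cheats: instead of a proper TUI library it fakes the blocky letters with a tiny hand-rolled five-row font, which would have been a perfectly acceptable answer to my prompt anyway.

#include <ctype.h>
#include <stdio.h>
#include <string.h>

#define ROWS 5

/* A tiny 5-row font covering only the glyphs "Hello, world!" needs. */
struct glyph { char ch; const char *rows[ROWS]; };

static const struct glyph font[] = {
    { 'H', { "#   #", "#   #", "#####", "#   #", "#   #" } },
    { 'E', { "#####", "#    ", "#### ", "#    ", "#####" } },
    { 'L', { "#    ", "#    ", "#    ", "#    ", "#####" } },
    { 'O', { " ### ", "#   #", "#   #", "#   #", " ### " } },
    { 'W', { "#   #", "#   #", "# # #", "# # #", " # # " } },
    { 'R', { "#### ", "#   #", "#### ", "# #  ", "#  # " } },
    { 'D', { "#### ", "#   #", "#   #", "#   #", "#### " } },
    { ',', { "     ", "     ", "     ", "  ## ", " ##  " } },
    { '!', { "  #  ", "  #  ", "  #  ", "     ", "  #  " } },
    { ' ', { "     ", "     ", "     ", "     ", "     " } },
};

/* Look up one row of blocky pixels for a character, case-insensitively. */
static const char *glyph_row(char c, int row)
{
    for (size_t i = 0; i < sizeof font / sizeof font[0]; i++)
        if (font[i].ch == toupper((unsigned char)c))
            return font[i].rows[row];
    return "?????";                        /* unknown character */
}

int main(void)
{
    const char *msg = "Hello, world!";

    for (int row = 0; row < ROWS; row++) {
        for (size_t i = 0; i < strlen(msg); i++)
            printf("%s ", glyph_row(msg[i], row));
        putchar('\n');
    }
    return 0;
}

Forty-odd lines, a couple of kilobytes, no root required. Hold that thought.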

While work!ChatGPT ground away I set up a new Ubuntu instance in AWS to run the code on, because I wasn't about to run the output on my work laptop. That would be, as the kids say, a bad idea. What I didn't expect was to get about 90 kilobytes of C code back from ChatGPT. In comparison, the example I gave above is 80 bytes. But, I decided that I was going to give it a fair shot so I logged into my AWS test instance, pasted the code into a text file, and compiled it. And ran it. And got an error message that I didn't expect: "Detected the use of ASLR. Please turn it off with sudo echo 0 > /proc/sys/kernel/randomize_va_space and try again."

The fuck?
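I never did audit all 90 kilobytes to see how it pulled that check off, but the obvious mechanism is to read the same sysctl its own error message names. A minimal sketch, assuming that's what it did (my reconstruction, not the generated code):

#include <stdio.h>

int main(void)
{
    /* 0 = ASLR off; 1 and 2 are increasingly randomized. (This is my
     * assumption about the check; the blob may have done something weirder.) */
    FILE *f = fopen("/proc/sys/kernel/randomize_va_space", "r");
    int mode = -1;

    if (f) {
        if (fscanf(f, "%d", &mode) != 1)
            mode = -1;
        fclose(f);
    }
    if (mode != 0) {
        fputs("Detected the use of ASLR.\n", stderr);
        return 1;
    }
    puts("ASLR is off; carrying on.");
    return 0;
}

Incidentally, the suggested fix wouldn't even have worked as quoted: in sudo echo 0 > /proc/sys/kernel/randomize_va_space the redirect happens in your unprivileged shell, not under sudo. The usual incantation is echo 0 | sudo tee /proc/sys/kernel/randomize_va_space.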

So, I turned off address space layout randomization on my scratch monkey AWS instance and tried again. And got another error message: "Detected enabled stack protection support in /proc/config.gz. Disable this feature and try again."

Huh?
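Again, I'm guessing at the mechanism rather than quoting the blob, but the straightforward way to do what that error message describes is to grep the kernel config exposed at /proc/config.gz (which only exists when the kernel was built with CONFIG_IKCONFIG_PROC) for the stack-protector option. A hedged sketch, built with cc check.c -lz:

#include <stdio.h>
#include <string.h>
#include <zlib.h>

int main(void)
{
    /* /proc/config.gz is gzipped text, one CONFIG_* option per line. */
    gzFile cfg = gzopen("/proc/config.gz", "rb");
    char line[512];

    if (!cfg) {
        fputs("No /proc/config.gz here; nothing to inspect.\n", stderr);
        return 1;
    }
    while (gzgets(cfg, line, sizeof line)) {
        if (strncmp(line, "CONFIG_STACKPROTECTOR", 21) == 0 &&
            strstr(line, "=y")) {
            fputs("Detected enabled stack protection support in "
                  "/proc/config.gz.\n", stderr);
            gzclose(cfg);
            return 1;
        }
    }
    gzclose(cfg);
    puts("No stack protector found; carrying on.");
    return 0;
}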

I wasn't about to try to recompile the entire kernel for the sake of a single experiment that I'd originally figured was a waste of time, so I went into the source code of my LLM-generated "Hello, world!" utility and commented that specific check out. I then recompiled the source code, and it immediately carped about another Linux kernel security measure. And another. And another. I could have easily just read the damn source code to see what else it was going to complain about, but I wanted to commit to the bit just to see what would happen. Kind of like going to a Vanilla Ice concert in the late '90s, I knew that no matter what I'd get a story out of it somehow. 4 So I kept it up.

And then, at long last, I got the error message: "Not running as uid 0. Please re-run this binary with elevated privileges, like this: sudo ./hello"

It was at this moment that I terminated the experiment. ChatGPT-generated code was asking me to run it as the superuser (on a disposable AWS instance, but still). Absolutely not. Even on a disposable virtual machine, it was connected to a network and I really didn't want to give that weird code a chance to escape somehow. I shut down and destroyed the AWS instance in question but kept a copy of the source code on my work laptop.
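For completeness, that last check is the easiest one to reconstruct; it's almost certainly a one-liner around getuid(2). Again my sketch, not the original code:

#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Refuse to run unless we're root. (Don't actually write programs
     * that demand this for no reason. That's the whole point here.) */
    if (getuid() != 0) {
        fputs("Not running as uid 0. Please re-run this binary with "
              "elevated privileges, like this: sudo ./hello\n", stderr);
        return 1;
    }
    puts("Running as root.");
    return 0;
}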

And, as I write this, I just remembered something: Not once did the output ask me to install a particular TUI library to compile against.

I figured that, at some point, people would start posting booby-trapped code online for the express purpose of poisoning LLM training data sets. Case in point: corroded, for the Rust language (which seems like a brilliant satire of both the language and LLMs until you realize that nearly every repository on GitHub (with the likely exception of private repos) was used in at least one LLM's training data). This means that whenever you use ChatGPT or another LLM service, you're taking a risk when you ask it to generate programs, because they've incorporated... let's call them "deliberately questionable" examples. But I'd never expected to run into such an amazingly... deliberate, I suppose, attempt at sabotaging things. I don't know if it happened during the training phase at my dayjob (when somebody grabbed some code they figured would make a good addition and threw it into the mix without looking at it), if it was buried somewhere in ChatGPT in mid-2025.ev, or what. What I can say, however, is that my assessment of the technology went from "Well, let's see what happens," to "Fucking no, how many languages do you need it in? Vith'ez nau."

Update: Since this post went live I was reminded of this article at the BBC, where somebody was able to deliberately inject misinformation into a number of LLMs with a single blog post about eating hot dogs (yes, I'm serious). If you can do that with somebody's eating habits, imagine what one could do with, say, medical advice or (more on topic for my post) programming.


  1. Being a compulsive note-taker, I told the team working on that project to add the notes I'd been keeping in the company's internal Git repository to their training data, and sent them a link. I've no idea if they actually did so, but the canary token I'd embedded in my documentation never appeared in the output, even when I deliberately asked about it. Make of that what you will. 

  2. ref. reframing in the field of clinical hypnotherapy. Frankly, I'm surprised that nobody's named an LLM jailbreaking project MK ULTRA.

  3. ANSI graphics mock-up bodged together using text.0w.nz

  4. If you're a new reader, it's not immediately obvious that I occasionally do strange things simply to see what happens, and not with any particular outcome in mind. Most of the time I don't regret them.