Organics, AI, and what people want to believe.

25 May 2023

You pretty much have to have been living inside a Faraday cage with a stack of dead trees for company not to have heard anything about large language models taking the tech world by storm. Without going into too much detail (because that's not what this essay is about): you take some clever statistical math, a metric fuckton of GPUs, and several petabytes of text scraped from most of the Web, mix thoroughly with a couple of million USD from investors and some Python, and bake it all in a large network of virtual machines running on someone else's network (usually AWS, but there are companies that still run their own data centers) for days or weeks. What you get out of it is something called a model, a fairly large set of binary blobs that are then used as the kernel of software written in pretty much any programming language these days. End result (for the purposes of this essay): you can type stuff at the system in question, it'll filter that through the model and calculate a bunch of words that, statistically speaking, are likely to follow what it was given. 1
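If you want a slightly more concrete picture of that last step, here's a toy sketch in Python. To be clear, this is not any real model's code; the candidate words and their scores are made up for illustration. It just shows the basic move: score every candidate next word, squash the scores into probabilities, and roll the dice, randomness knob (footnote 1) included.

```python
import math
import random

# Toy "scores" for the word that might follow the prompt "Hello,".
# A real model computes scores like these from billions of learned
# parameters; these words and numbers are invented for illustration.
NEXT_WORD_SCORES = {"world": 4.0, "there": 2.5, "friend": 1.0, "banana": -2.0}

def sample_next_word(scores, temperature=1.0):
    """Turn raw scores into probabilities (softmax) and sample one word.

    Low temperature: the top-scoring word wins almost every time.
    High temperature: the long tail of weirder words gets a real chance.
    """
    exps = {word: math.exp(score / temperature) for word, score in scores.items()}
    total = sum(exps.values())
    probs = {word: e / total for word, e in exps.items()}
    return random.choices(list(probs), weights=list(probs.values()))[0]

# Try the same prompt at a few settings of the randomness knob.
for t in (0.2, 1.0, 2.0):
    print(f"temperature={t}: Hello, {sample_next_word(NEXT_WORD_SCORES, t)}")
```

Actual text generation is just that step in a loop: append whatever got sampled to the prompt and ask for the next word, over and over.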

I would like to state for the record that, as I write this essay, even the biggest AI model out there (NVIDIA's Megatron-Turing NLG) 2 is nowhere near as large or as complex as the human brain. It's not sentient. It's not sapient. It's smart. Occasionally it's clever. It can be weird. Unlike the Dixie Flatline, it might write you a poem if you asked it nicely.

Okay. Background established for the rest of this post. Because, once again, it's storytime with Uncle Bryce, so if you're not interested in a dumb story from undergrad you can punch out now.

In the fall of 1999.ev, when I was finishing up my first stint in college, I took a course in artificial intelligence as one of my core electives. The course started in a way that was fairly reasonable at the time, with the early history of artificial intelligence. We talked about Eliza, an early AI construct that simulated Rogerian, or person-centered, psychotherapy, as a prelude to discussing the Turing test. If you're not familiar with it (and you really should be, because I think its implications are going to be very important for everyone in the near future), Turing postulated a test with two subjects, an organic (a human being) and a software agent. Each subject could be communicated with from a distance using text; for our purposes, think instant messaging. An interviewer on the far side of the connection would hold an extended typed conversation with each subject. When all of the conversations were over, the interviewer would decide, solely from those conversations, which subject was the organic and which was the bot. So the Turing test goes: if the interviewer can't tell the difference, the bot is likely sentient.

What very few people seem to consider is what it means if the organic is mistaken for the bot.

The professor teaching the AI class proposed that we carry out the Turing test on the comp.sci lab network. Everyone in the class (it wasn't very big, at most 20 students) drew lots, and five of us were selected to be test subjects. There was an IRC server on the comp.sci network that the professor had hooked a couple of the more advanced chatbots of the time into. Those of us who were test subjects used regular old IRC clients to access the server. The idea was that everybody else in the class would, one at a time, chat with the test subjects for some length of time, decide at the end of the test period which interlocutor was the bot and which was their classmate, and write an essay justifying their opinions. We organic test subjects spent every evening for a week hanging out on IRC and chatting with our classmates.

At the end of the week the results were in, and my classmates had decided that I was a software construct. They weren't sure how a bot had been interfaced with such a large knowledge base (search engines barely existed at the time and REST APIs hadn't been invented yet), but they very much wanted the developer to come present to the class. I really don't know what gave them that impression. Sure, I'm weird, but I put forth significant effort to be at least somewhat personable, so it's possible that they got me going on some topic or other that I was really into at the time and that's what convinced them. It could also have been the speed of my responses, because I learned to touch type at an early age, and most folks my age at the time hadn't.

That experience has stuck with me ever since. It would be downright amazing if software were mistaken for an organic because of its conversational ability, the breadth and depth of its knowledge base, and its other capabilities.

Except it's not amazing anymore. It's been happening for at least two decades.

Starting in the year 1990.ev, a competition called the Loebner Prize was held every year. It's basically the Turing test, except the bot most often mistaken for an organic being won its creators a cash prize. One of the problems with the Loebner Prize, however, is that there is a perverse incentive for entrants not to build smart constructs but "merely" (all things considered) linguistically clever bots. As far as I know, nobody even tried for the $25kus "bot that not only is mistaken for an organic, but successfully convinces the interviewers that the organic is actually the bot" prize, nor the $100kus "multi-sensory input understanding" prize. The forest - is the thing you're talking to really a sentient, sapient being? - was ignored in favor of the trees of "the thing you're talking to is a clever interlocutor, and to heck with any other criteria." It was downright amazing when a bot first won the Loebner Prize, but now it's quite normal for this to happen. No software contestant has successfully convinced the interviewers that the organic was actually the bot, because the interviewers did that all by themselves, without prompting.

I don't think it's so much that the Turing test is a busted myth as it is that organics pulled a trapdoor goalposts attack on themselves and just decided what they care more about: not actual intelligence, but constructs putting on the best possible charade of being sapient. I cite as supporting evidence the phenomenon of people being fairly nice to service-oriented online chatbots (like the ones that lots of large companies have for tier-one customer support) but downright goddamn ugly the moment they think they're talking to a real person on the other end. If an organic on the other end of the customer support chat decides to tone everything down and pretend to be another bot, though, customers treat them much better. I realize that this is not universal, but it's common enough that I find it noteworthy. In fact, when you consider the economic incentives of AI/ML development today, actually working toward a sapient software entity is the last thing a well-funded company wants to do, for the single reason that there is no guarantee that the software will work toward or support the reason the business in question was founded.

There were a lot of other ideas I was kicking around about this: stuff about people treating not-people better than organic people, people willfully going all in on the Eliza effect, a relatively recent discovery about the placebo effect, and things both more and less cynical than usual. 3 But I don't think those are the point, nor do I think chasing them all down here would really be useful. Suffice it to say that a lot of people appear to have given up on the ideas of consciousness and sentience, probably for a large number of reasons, any of which could make up the core of somebody's Ph.D. thesis. As it is, a basic part of human nature appears to involve in-groups and out-groups, and relatively non-threatening (or at least not potentially hostile) software that acts like us, even though on some level people know it isn't anything like them, codes as in-group. Additionally, what Shirow Masamune referred to as "a hollowing of the spirit" appears to be taking place, which, I would suggest, we can see in the direction society is headed at the macro scale and in how some social groups are being treated at the meso scale.

In other words, there is a relatively small group of "people," a potentially larger group of "harmless things we don't mind treating as people," and a much larger group of "not people" (which it's rapidly approaching open season on).

If you'll excuse me, I think I need to re-read Blindsight 4 for the sake of my mental health.


  1. There is usually one parameter, sometimes more, called "temperature" that the user can turn up or down to nudge the statistical likelihood of the responses and make things more or less interesting. That parameter basically controls how random the results are likely to be. 

  2. The article I linked to is from 22 February 2023. At the rate at which AI/ML is changing, it's undoubtedly obsolete. I still had to cite a reference of some kind, though, so take it with a grain of salt. 

  3. Also, I'm getting over a mild case of food poisoning so I'm not anywhere near top form for anything. Give me a bit. 

  4. Local mirror, per the Creative Commons BY-NC-SA v2.5 license that Peter Watts placed on the book.