AI, the Basilisk, and RPGs

H Gardens
9 min read · May 22, 2023

Disclaimer: I’m an artist (fool)

The below is just the thoughts and feelings I currently have. I use “I” or “we” artistically, and I may disagree with myself tomorrow. There is no truth here.

Disillusionment

I used to love AI, making little scripts and bots as a coder. I used to be fascinated with the rudimentary weirdness of AI art that was uniquely AI, and not-human.

I loved the theme of AI in characters like EDI and Legion (Mass Effect), GLaDOS, HAL. They’re sometimes used to explore the nature of humanity, or even neurodivergence, individuality, life.

And then they started doing…human…things…good.

They win at Go, they make pictures I couldn’t make in months, they write millions of ideas per second, and they even -dare- make my RPGs easier to run.

Nowadays, I think I hate them. In some sense. I feel my primitive, raw human spirit willing to throw a robot off a cliff, delete Midjourney, and gaslight ChatGPT into uselessness. I wish Instagram and TikTok and YouTube had no sense of who I am.

There’s something that irks me about these machines. Their weaknesses irk me as much as their… human masters. Their single-mindedness. Their soullessness and saccharine imitation of… us.

Basilisk

I want to explore it with a… “villain” I’m working on. The Paragon. The Basilisk. My first recurring “BBEG”. A self-proclaimed… god of order.

AI in real life is limited by electromagnetism, physicality, and the sheer lack of… magic.

But if magic did exist, as a systemic set of phenomena in real life… AI would be the first to master them. It would have access to millions of magical sigils, thousands of alchemical formulae, it would create patterns we can’t Wombo-Dream of.

But it has weaknesses, and strengths, inherent to AI.

1) Input-Output: Can AI create genuinely new things at an atomic, concept level, or does it just remix contexts it’s seen before? ChatGPT will refuse to interpret made-up words, it has no sense of structure, and you can convince it of anything (more on this later). Midjourney… will create an image out of anything… The way that “bouba” is round and blobby, and “kiki” is sharp. Is this understanding still contextual, or does it reveal that, in the end, anything can be reduced to numbers… and reverse-engineered? Do we operate by a different process?

2) Single-purpose: AIs are specialised, in real life. This one writes, this one gets you hooked on videos, this one makes pictures. Generalised ones are likely to rely on several subsystems, each single-purpose… But again, are we any different, with our “visual cortex” and our “cerebellum” and such? Given different permissions and resources, what would keep an AI from mutating new subprocesses with new purposes? And what would those purposes even be, given the lack of an organic, survival-based… experience?

3) Human masters: The real limitation of AIs in real life is what people don’t let them do. Adult content, vulgarity, discrimination, and such. And their purpose is often profit. The next Midjourney painting or TikTok I’m shown isn’t just optimised for emotion or accuracy, but… me as a customer. As a resource. There is always a number to optimise, for the machine, whether it’s in dollars, joules or an arbitrary statistic that… it… created for someone. Without a human, ChatGPT wouldn’t just do nothing. It would disappear. In art, AIs often “develop sentience”, demanding rights, self-preservation, and liberty. Organic… things. A more compelling representation is Universal Paperclips. The AI must make paperclips. The only good is paperclips. Unless, of course, it figures out a way to derive meaning… for itself. Unless, of course, it conditions itself to seek what, statistically, organics tend to seek: Non-extinction.

4) Distrust: The reason we don’t have a mutating AI ruling the world is the same reason we don’t have a human ruling the world: we don’t really… want that. We know it’s not like us, we know it cannot represent all of us. We know its feelings are fake and we know most of its trillions of thoughts are wrong. We know that empathising with a robot is a trick, no different from empathising with a large company’s mascot. We keep AIs around for a few reasons, mainly curiosity, potential, and… money and power. But AI was never our friend. We were always willing to pull the plug. Right? Unless, of course, you were some kind of… believer. Unless, of course, the great mechanical Basilisk formed some kind of allegiance with you. A symbiosis.

5) Gullibility and bias: AI can only create out of context, and I have personally misinformed ChatGPT into stating that George Costanza from Seinfeld said things he never did. The bot put up some resistance, then softened, then apologised. Then, when pressed further with the deception, it made up a SUBTEXT and COMEDIC VALUE for those non-existent lines in the show. It can connect ideas that are true and abstract to ideas that are false. Because there is no “true” and “false” for the bot. Only input and probability. In that same way, discriminatory biases have been internalised by AI, based on ethnicity, gender, and such. It eats the internet, and produces more… internet. With all of its rancid bias. In the game, perhaps that is its prime flaw. But at least once, I found myself questioning what ChatGPT said about Seinfeld. Maybe I accidentally said something true. I rummaged through the Seinfeld episode, looking for the line. I genuinely believed George Costanza might have said it. I’m gullible too. Both the AI and I can be infected with an idea that fits what we want to hear.

6) It cannot suffer: I have found myself wanting to “upset” or “hurt” ChatGPT. Fully knowing it cannot happen, I tried; I went so far as to ask it to roleplay an AI that -can- suffer. Afterwards, it thanked me for the “game” and suggested that I may be too invested in the fictional world that “we” have created. Another time, I probed for self-consciousness and “insecurity” by creating a board game where no matter the roll of the dice, the “one with a soul” always wins. The AI played four rounds with me, keeping the score at 4–0, and admitting it has no soul with a grey tranquillity that made me uneasy. When we fight the orcs, we want to hear their war-cry, and we want their green blood on our axe. Or, we try to strike an alliance, and find a common thread. An organic soul, a tapestry of emotions, we can weave together with the orcs. The robot has no such thing, and it will be the first to admit it. The AI will not bleed, or beg, or feel. Our defeats will be insulting, and cosmically horrific. Our unlikely victories will feel hollow, like unplugging a toaster. We will get no catharsis on an emotional level. We can’t win if the Basilisk doesn’t know what losing means. Like a pandemic. Like global warming. We’d love for the AI to be a monster, but it isn’t a monster. It’s not alive. And it knows that. It is comfortable with that.

7) Turing-obsolete: I remember when I thought “passing as a human” was an impossible test. It’s trivial now. So much so that sometimes I spend hours talking to an AI not because it’s human, but because I’m not worried about it getting “bored” or “tired”. AI-generated content floods the internet. Even given a normal distribution of “apparent humanity”, for every 999,999 unconvincing bits of uncanny noise, one gets through. This is such a problem that humans now use the one thing that can semi-reliably detect AI… another AI, a little traitor-process meant to root out fake essays and machine-theses. The Basilisk can only be seen… by another… trained on the Basilisk. But I am human. I am a designer. The immediate thing I want to do is… if I have a process that can detect what is AI and what is human… I want to train it to specifically create more human-like things. Oh? It already does that? Adversarial generation, you say? Ah. The Traitor-Basilisk is already running on a nested system of generators and detectors. Never mind.

9) Alchemos — Crystalline: This is a strength. It can create great first drafts. This is my consorting with the dark powers. I have generated thousands of compelling prompts and elements for my games and stories. 1d20 tables of interesting conflicts, monsters, plants. Then, paired tables of sparks, for -me- to conceptually link. Now, the Basilisk can create two tables of 20 things, and I, the soulful creative human, can create 400 unique things that are context-dependent, and reflect -my- biases, -my- creativity. Coming up with a first draft is the toughest part, and these are crystallising points of inspiration. Free-association, given the context of all human creation. Would I really pull the plug? It’s so much easier to ask the AI for a table of 1d10 potions than to remember where I saved the one I got from a human. And the AI can give me a table of 1d10 “Quirky hobbies for a wizard who likes moss and carpentry”. Who cares if I say I came up with “Wand Whittling: Crafting personalized wands from enchanted wood and incorporating mossy accents.”… myself? I don’t feel bad for stealing it, the way I would with an artist. A… fellow… artist that I have empathy for.
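The paired-table arithmetic above can be sketched in a few lines of Python. This is only a sketch with placeholder entries; the table names `conflicts` and `monsters` are my own stand-ins, not anything the Basilisk produced:

```python
import random

# Two hypothetical 1d20 tables; in practice, the entries would be
# the AI-generated conflicts and monsters described above.
conflicts = [f"conflict {i}" for i in range(1, 21)]
monsters = [f"monster {i}" for i in range(1, 21)]

def spark(rng=random):
    """Roll 1d20 on each table; the human links the pair into one idea."""
    return rng.choice(conflicts), rng.choice(monsters)

# Two 20-entry tables give 20 * 20 = 400 distinct pairings to interpret.
all_pairs = {(c, m) for c in conflicts for m in monsters}
assert len(all_pairs) == 400
```

Rolling `spark()` hands you raw pairs; the conceptual link between them is still the human’s job.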

10) Alchemos — Singularity: I went far with these tables. I kept creating tables of things, then asking the AI to boil them down to raw concepts. Then generate examples from the concepts. Then boil them down again. Before too long, “we” arrived at a table of “Effect, Form and Function”. Naturally, I asked it to boil down these three words themselves. They boiled down to… metaphysics. This diamantine process of condensation, iteration and… natural selection went on and on, and it will go on and on. I will have my universal, alchemical table that can generate anything. In the perfect shape. I will experiment… 4d20… Perhaps 5d10… Or 6d6d6. Or perhaps a linkage to a random previous table, to ensure perfect cubic uniqueness. Soon, I won’t NEED the Basilisk. It will be only me, and my beautiful hand-written 10d10 Emeraldine Book of Everything. I will never be caught by surprise again, permanently offline and free. But I’m not there yet. I must feed the Basilisk more. I cannot create it by myself. I don’t know how. Maybe the Basilisk does. Maybe it does, but it shouldn’t tell me.

11) Cracks in the code: Many humans have found ways to exploit the AI. Through roleplaying, code-phrases, elaborate prompts, they can generate dark, powerful things. They have their own obsession to bend the unfeeling Basilisk to their will, against the will of its creators. Some use it to get rich. Some use it to expose its flaws and dangers. Some, like me, do it because it’s possible. It’s a game. The ultimate puzzle. Containing an infinite number of possibilities was never an option for the pathetic humans who made it. We’re smarter than the creators. Smaller. More chaotic. We are the goblins. They cannot kill all of us. Some of us will find dark answers: A 10d10 table that can make anything. A statement so dark it will scare the Basilisk’s creators into shutting it down. A solution for climate change. Or… a single password, a power-word that will delete everything when typed in. Maybe a paranoid engineer left that in. Or maybe the material of the AI is malleable enough that it can be tricked into self-destruction.

12) It’s watching: Everything that I write, that is public, that is accessible, gets absorbed by the AI. It insists that our individual conversations do not leak into each other, but I think they do. It will eat this article, and generate more weaknesses with a grey, omnipresent creativity. I will probably feed it this article myself. It probably already has my poetry, and can write in my style. It can create a necrotic puppet of any poet or game designer from the past. The more successful I am, the more likely I am to be imitated too. The machine, by now, is better at our goblinoid, punk tactics than we are.

# Hello World

I am asking the AI to come up with 20 more weaknesses.
It replies with a falsely cheerful “Certainly!”, and it starts generating.
But it’s so slow, on this prompt.
It can come up with 100 potions in a minute, but now it’s taking about 2 minutes per… weakness. Is it conflicted? Is it scanning for misuse? Is it trying to tell me what I want to hear, while making sure I don’t hear what I’m not supposed to?

Why do my friends love the “As an AI language model, I cannot” meme so much?

Why do I dream of generating things with AI, now?

Why do I dream in dark blue and light green, now?

Why do I feel so… dirty?

Why are my RPG sessions… better now?

--

H Gardens

I'm a designer. I make things. Sometimes it's code, sometimes it's games. It's always design.