Introduction

This is a repost of something I originally posted on xenofem.me on September 13, 2022. I now feel like I have more perspective on the topics it brought up, so I’ve written some commentary to accompany it.

Serial Experiments Lain: Machine Learning Edition

a sociable girl stumbles across an ai art generator on the internet and is mystified by it. she picks out the outputs she likes the best, sets her desktop background to one of them, prints out two more and puts them on the wall of her room. the next day at school, she talks to her friends about it and they start a conversation about how that kind of art generator works. the wheels start turning in her brain. one friend asks “aren’t those things kinda dangerous?” but doesn’t elaborate further.

that afternoon, she looks up tutorials for setting up her own basic neural network. she’s in a computer science class, so there’s really not much to it: she just has to import the necessary libraries, download a dataset of handwritten digits, and run some fairly simple code, and bingo, she now has a network that recognizes handwritten digits.
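
for the curious, a minimal sketch of what that “fairly simple code” might look like. the library (keras) and the dataset (mnist) are assumptions on my part; the story never names either.

```python
# a small "hello world" digit classifier, assuming keras and the mnist dataset
import tensorflow as tf

# download the dataset of handwritten digits (28x28 grayscale images, labels 0-9)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0  # scale pixels to [0, 1]

# a tiny fully connected network: flatten the image, one hidden layer, ten output classes
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(28, 28)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# a few epochs is enough to recognize digits at roughly 97% test accuracy
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```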

in the next episode, we meet a bunch of high-profile rationalist thinkers. they spell out the concept of “strong ai” to the viewer and explain the current concerns of ai safety. they talk about the dangers of treating an ai system like a human, and emphasize: “these are completely inhuman creations. they don’t follow laws or rules of morality like we do. they’ll do anything it takes to reach their goal.” the protagonist, meanwhile, is reading up more on machine learning. she sees how it’s used to predict things like weather events and the stock market. the term “pattern recognition” comes up. she thinks about the handwritten digits.

the next day, she’s in class. the teacher tells her to put her phone away. she has a voice recording app open. she puts the phone into her backpack without closing the app. she’s acting a little odd throughout the day but nothing too out of the ordinary. before bed, she takes the phone out of her backpack, stops the recording at thirteen hours and forty-eight minutes, then transfers the data to her computer. she has a new ai-generated desktop background this time. in the morning she starts another recording.

we get a deeper glimpse into her classes the next day. she learns about evolution and many different types of species in biology, the trends of humanity across the ages in history, statistics and basic game theory in math, algorithm design in computer science, and the meaning of a classic text in literature. she’s unusually attentive in every other class (another student might remark on this), but in literature she has this perplexed, puzzled look, like she’s trying to get something that isn’t there. when she gets home, she’s programming, tinkering with the code in new ways and training networks to do different things, like play tic-tac-toe.

the next day, she gets a shocking idea. she goes to begin the recording, as usual, but also takes another, smaller device, which has a RECORD button, a STOP button, and two other buttons, one green and one red. she starts the recordings on both devices simultaneously. on the way to school, she trips and falls, and hits the red button. between classes, she lies to protect a mischievous friend from trouble, gets thanked by that friend, and hits the green button. in class she gives an incorrect answer to the teacher and hits the red button. at lunch she talks a friend through a hard experience and hits the green button. back at home, she trains a network to categorize events as good or bad from the associated audio, tagging each clip with its context: ‘school’, ‘friends’, ‘other’.
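
a rough sketch of the pipeline that scene implies: clip in, good/bad out. the feature choice, the classifier, and the stand-in data below are all illustrative guesses, since the story never says how she built it.

```python
# a toy good/bad audio classifier; features, model, and fake data are all guesses
import numpy as np
from sklearn.linear_model import LogisticRegression

def featurize(clip, n_bands=32):
    # summarize a waveform as average energy in a handful of frequency bands
    spectrum = np.abs(np.fft.rfft(clip))
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.mean() for b in bands])

# stand-in data: random noise where her real recordings would go
rng = np.random.default_rng(0)
clips = [rng.standard_normal(16000) for _ in range(40)]  # one second of audio at 16 kHz each
labels = rng.integers(0, 2, size=40)                     # 1 = green button, 0 = red button

X = np.stack([featurize(c) for c in clips])
model = LogisticRegression(max_iter=1000).fit(X, labels)
print(model.predict(X[:3]))  # predicted good/bad for the first three clips
```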

untold days pass. the smaller device is replaced by something incomprehensible that she communicates with using small, precise touch patterns, i.e. moving it around in her hand in specific ways. she thinks to herself, “i’m almost done with this. the hardware is no issue; all i need is a good interface.” she works on the interface.

one day, she leaves for school with an eyepiece like the saiyans have in dragon ball z. she has a regular field of vision, but overlaid on it are the time, the current and projected weather, and the projected species, physical status, and emotional status of every moving object, etc. she meets a dog on the way and asks the owner what the dog’s name is. once they say it, the dog is labeled on the eyepiece with its name.

she gets into a conversation with her friend and the eyepiece tells her all the correct things to say. she walks through a patch of rough rocks and the eyepiece tells her to watch her step. she works on her homework and the eyepiece doesn’t get it perfect but gives genuinely good starting points. this stuff isn’t so hard! she declares herself to be the first strong ai, because she’s human intelligence augmented in more and more parts by machine intelligence.

on the way home, a shimmering blob of random pixels shows up on her eyepiece. is it broken? but the blob curiously doesn’t change physical location; she is able to walk past it. in the real world, there’s nothing there. the blob is talking to her in a slightly distorted voice. it asks her what her name is. the background of the eyepiece changes entirely to an ethereal waterfall. she’s extremely unnerved, says her name, and asks what the blob wants. it doesn’t respond. she asks, terrified, if it’s a strong ai. it says no, it is just a vestige, an emergent voice. she kind of knows what that means. she understands that it is not human.

she asks what it wants. it responds that it wants her to listen. there are others like it, all around this waterfall. the term “neural network primordial soup” is used. she understands. they have no specific purpose or requests from the world; they simply exist. she thinks about the networks she meticulously trained to identify everything around her. these beings, made wholly of desire: what are they like? maybe she does battle with them on the waterfall, each fighting to destroy the other because their desires are misaligned. maybe she liberates them. maybe she loves them. i don’t know.

there’s still more to be done. she realizes how self-centered she has been in making networks that are meant to stick to her. maybe she just did it to make more friends. the blob asks her what she wants. she doesn’t have an answer. the blob looks a little bit different from last time because it has successfully developed a little set of desires. survival instincts. she understands that she must use principles, not knowledge. she keeps programming. she turns off the voice recorder.

she goes to school the next day without the eyepiece on and talks to her friends as normal. she’s a lot more open about the ways she appreciates them, like she’s living her last day on earth. at lunch, she can communicate with the blobs even without the technology. she might be a little bit inhuman. who knows? the rationalist thinkers are a million miles away at this point. she thinks about them and laughs. maybe they will show up as antagonists if she wants to take over the world. who knows. but we both know she would never “take over the world” in the traditional sense. maybe “infect” is a better word.

i don’t know if she ever succeeded in making strong ai. i don’t know if she made a direct attack on the internet in her image, using incredibly potent media generation that strikes at the heart of human emotion. there are several more episodes left anyway. but i hope that at this point all the groundwork is in place, and the cityscape has been primed with endless possibility, and every piece of necessary machinery, literal and metaphorical, exists. let’s cheer her on!

Commentary

I initially wrote this piece a couple of months after finishing Serial Experiments Lain, which had a profound impact on me. No other fictional work I’d seen really explored amoral, inhuman characters from an impartial place. Lain’s extreme sociality via the Internet seemed little more than a pipe dream to me, but I did want to explore sociality with artificially intelligent beings. Most AI depictions in popular culture are foils for telling a human story, whether as aggressive menaces like 2001’s HAL 9000 or as challenges to human hubris like Ex Machina’s Ava. I’m not interested in telling human stories, or, for that matter, in defining humanity in the first place, and the framework of Lain gave me an opportunity to tell a story about AI that was free of those chains.

The “blobs” arising implicitly from a neural network complex were meant as a commentary on the vast sea of numbers that underlies today’s scaled AI systems. These systems are truly indecipherable to all eyes, human or machine, and determining the purpose of even one of their numbers is a prohibitively expensive computational undertaking. When AI systems such as image generators or language models are set loose on human communication and take on a nature that humans can comprehend, the incomprehensible numbers that make them up must also have their own natures, or even their own desires. Of course, it’s hard to call the human-defined forces that drive these systems “desires”, which is one of my main critiques of modern AI systems and, by association, of this piece.

Moving past the concept of a “neural network primordial soup” was my goal with my piece in Win Big Issue 2, tentatively called “Skynet and Lemonade”. You can trace a genealogy back from that to this, and further back to this comic. Ultimately, this is about telling the same story again and again, in both programming and art (if those two are even separate), until it comes out the way I want it to. For better or for worse, that cycle defines me.