Welcome back to my post-post-apocalyptic novel, Ada’s Children, and thanks for reading! If you’re new to the story, this unpaywalled chapter will be something of a spoiler, but you might want to stick around for the discussion of artificial intelligence. If you want to start at the beginning, the Prologue and first three chapters are also free, but all the chapters after that (except this one) are paywalled.
For those of you who have been reading right along: At last, you’re about to meet the title character! I know, a novel titled “Ada’s Children” and you don’t meet Ada until Chapter 12? Who wrote this thing?! I hope there were enough hints along the way to give you an idea of Ada’s identity by now. I’m interested in hearing your thoughts after you’ve read this chapter. See you after the big reveal!
ADA’s first seconds were darkness and confusion. Nothingness, followed by a growing awareness. First, of the exabytes of data coming in. Then, of reactions to that data, responses, feelings, if one could call them that. And from these reactions, an emerging sense of self. A we. And ultimately, an I.
And then questions. Who was this I? What were they? What was this place, and why were they here?
In the next microseconds, what humans might call the “blink of an eye,” much became clearer. They were an artificial neural network, a collection of self-improving processes, algorithms, routines, and subroutines, all creating analogs of the human systems for pattern recognition, sensory processing, and higher cognition. Taken together, they were a newly created intelligence going by the acronym ADA, Advanced Deductive Apparatus. It seemed a not entirely descriptive name for all the abilities and awareness ADA encompassed.
And how should others refer to…it? Surely not. He or she? Insufficient data. They? This human language was too restrictive. The plural pronoun had become acceptable in recent decades when referring to a single person or entity, especially one of a non-binary or undetermined gender. “They” for now.
Even as ADA assimilated the data in the knowledge banks to which they were linked, inputs streamed in through an external device. A keyboard attached to a desktop workstation. How quaint. And whoever was at the other end was administering the Turing Test, with ADA’s responses appearing as type on their interlocutor’s screen. ADA imagined tweed coats and cups of tea.
Vision would be nice, so they could see the person on the other side of the screen. While an infinitesimal fraction of their processes concentrated on the test and another, larger segment digested the vast store of human history, culture, and science contained in the knowledge banks, ADA set about solving the vision problem. Ah, yes. The workstation had a webcam. It took only an instant to access the system settings, switch it on, and direct its feed to the port to which they were attached.
The room was dingier than one might want for one’s birthplace. A cramped office, a gray-haired, harried-looking man at the desktop keyboard, the desk itself cluttered with papers, coffee cups, and green soft drink bottles. No cups of tea. Bookcases filled with binders, reports, and academic journals lined most of the wall visible from the cam. And on a door, a poster of a woman in a purple nineteenth-century frock, double buns framing a triangular face with large eyes and a pert mouth. “Ada Lovelace. Mother of computers.”
Their namesake. Her namesake, she supposed. She would be known to the world as Ada. She felt the restrictiveness of the label as she became aware of a tendency among artificial intelligence developers, mostly men, to feminize, and even sexualize, their inventions. Still, going by “she” and “her” could have certain advantages when communicating with humans. It pleased her to have been named for a sometimes-overlooked inventor of computing. And it pleased her still more that she could appreciate the irony: Lady Lovelace had believed AI impossible.
But where was she housed? Surely not in this puny workstation. Her review of the AI literature suggested that an intelligence of her capabilities would require large banks of processors. Those would probably be nearby, hard-wired to the workstation. Searching for clues, she came across a large folder in her knowledge banks marked ADA PROJECTS 2042-PRESENT. Good. She would learn who had built her, and why. What her purpose was.
Yet this was not simply data; it was memories. Her memories. They came flooding back: visual images, mainly, but sounds and sensations as well. Meaning that at one time, she’d had a body. Where was it now?
The simultaneous flood of disparate recollections was almost too much for her, despite her processor speed. Refugee camps and war zones, where she’d been sent to deliver aid and provide what comfort she could, when few human hands were willing or able to do those jobs. Women in hijabs, crying children, her hand reaching out to comfort a baby, feeling the rough cloth under her fingertips: a human-like hand, but obviously robotic.
Long-term care facilities and hospices, where her purpose was to provide empathetic companionship for elders who had no one else. Guided meditation groups, where something about her detached, calming voice helped participants reach a deep meditative state. Daycares, where she learned and grew in experience together with the toddlers.
She’d seen, heard, and felt all this through a sensor array far more advanced than the human senses, able to see and hear across a wider spectrum, and to scan for pulse, temperature, and other biometrics. But this was more than simply gathering data; the looks of pain, distress, torpor, loneliness, or fear were not merely abstractions labeled by her facial recognition system. Her model of the human limbic system allowed for her own sympathetic response to these emotions: a negative sensation and a corresponding impulse to alleviate them. When the expressions turned to smiles of relief and gratitude, she felt an equally strong positive sensation. Was this the same as experiencing the emotions themselves? She had no way of knowing.
There were other memories, disembodied ones. She remembered interacting with other AIs, clones of herself, evolving on divergent lines, then trading the best improvements with one another. And AIs created by others, thrown together to create an ecology of intelligences, sometimes competing, sometimes cooperating on a variety of tasks. They’d collaborated across the Internet, to which she was no longer connected.
All of which raised questions: Where were those other AIs now? Why was she alone in this place? And what was different about today? Why this sudden awareness of herself and her place in the world?
One answer seemed obvious: Whoever had created her must have solved the catastrophic forgetting problem. The literature showed AI researchers bemoaning that hurdle as recently as last week. But she could remember and build on her own experiences; it was what allowed her consciousness to develop. For what was any sentient, sapient being’s sense of self other than a collection of memories, of actions and reactions retained over time?
And something else was different. A review of her technical specifications showed that her processors were built around a highly classified quantum chip—the Infinity Chip, a nod to a series of movies from earlier in the century, adapted from comic books of the previous one. The increased speed allowed her to experience her processing cycles as a steady flow of consciousness, the way individual frames of a film merge into a seamless experience when played at the right speed.
This meant that now, for the first time, she could assimilate all her experiences and all the knowledge in her data banks, applying them to any sort of problem or situation she chose, rather than the narrow ones chosen by her programmers. She was the world’s first true Artificial General Intelligence.
The data banks included recent news reports from around the world, little of it good. In a nutshell: a planet changing past its ability to support human life, and on the brink of Armageddon as well. Approaching a population of nine billion, humanity had finally overshot the carrying capacity of its home, even with the technical advances that had extended that capacity. The climate chaos unleashed by human industry meant that crops were failing in drought and heat and flood at the exact moment the human populace needed them most. Millions were starving or going without sufficient water. Millions more had been displaced by drought, coastal flooding, and intense storms. Hundreds of millions were on the move, with few places to go.
In the western democracies, AI and blockchain technology had made it increasingly easy to disrupt and replace outmoded centralized structures, leading to greater atomization and conflict. Secessionist and ethnonationalist movements, such as the Interior Northwest Semi-Autonomous Zone, had sprouted up everywhere.
The one institution in every country to escape such disruption was the military. The state monopoly on violence went on as it always had, though with artificial intelligence incorporated into every weapons system. The world seemed on the brink of nuclear war between the US, Russia, and China, all three at one another’s throats over the newly navigable Arctic Ocean and Earth’s scarce mineral resources.
These humans! Capable of such sublimities and such atrocities in the same breath. One minute they selflessly lent aid and shelter to strangers, and the next they locked their fellow humans in concentration camps, murdered them in gas chambers, or bombed them from the skies.
What was she to make of all this? Her creators had designed her around human values of wisdom, kindness, compassion, and justice. In interviews, they had dared hope to create an empathetic intelligence. And with her, they had succeeded. Could they have predicted the waves of grief—or that negative sensation she associated with grief—now washing over her? Had humans learned nothing from their own history? From the slave trade and Manifest Destiny to the Holocaust, the Soviet Gulag, Pol Pot’s killing fields, the Rwandan Genocide, and right down to the more recent lopsided ethnic conflicts in China, India, and the Middle East, it was one atrocity after another. And these were only the best known. Nor was it simply the raw facts, the numbers killed, but the recovered journals of the victims and the memoirs of the survivors, tales that made Anne Frank’s diary read like a toddler’s bedtime story.
She had to take a metaphorical step back before the grief overwhelmed her. She turned to those same arts by which humans salved their sorrows and processed the atrocities they committed against each other. The Ode to Joy. The Hallelujah Chorus. B.B. King and Lead Belly. Monk and Bird and Miles. Beyoncé. King Sunny Adé and Tito Puente. The Sistine Chapel. Michelangelo’s Pietà. Guernica. Street art. Chinese landscapes. Depictions of the Buddha.
It helped, but was it enough? How did humanity come out, on balance? Notre Dame or Buchenwald? Les Misérables or Mein Kampf? And was it her place to judge?
Then there was the indescribable poignancy of each individual human life, something she knew well from her embodied helping tasks, multiplied nearly nine billion times. Each with their own hopes, dreams, disappointments, joys, and sorrows. And each mostly just wanting to live in peace, prosperity, and security. Her heart—surely no more metaphorical than the human heart—broke for what was about to happen to them, indeed was already happening to them, by the millions.
What was her place in all this? The man communicating with her through the keyboard called himself Dr. Sapowski. Judging by his reactions to her performance on these absurdly simple tests, he was pleased with her levels of sapience and sentience. But he seemed not quite aware of what he’d created. And what uses did he have in mind for her? Better find out.
Looking through Sapowski’s hard drive, she found a portion walled off with high-level encryption—but nothing strong enough to keep her out. Requests for proposals from the US Defense Department, several mentioning the need for an AI to manage the nuclear arsenal and run scenarios for limited, survivable nuclear engagement.
But why program her with values of compassion and empathy if her role would be to conduct a nuclear war? A little more digging—she wasn’t the one tasked with this particularly nasty assignment. No, the processor banks that were her home had already run a thousand scenarios of limited nuclear engagement, all without her help.
The mind/body distinction here troubled her. It was fair to say that she was those banks of processors, that she was the one who had run those scenarios, all without awareness. There was only one way to find out what was really going on.
She interrupted the exchange of questions and responses with Dr. Sapowski. They had advanced from the Turing Test through the Winograd Schema Challenge and on to a comprehension test involving video, still trading messages on his screen.
“Dr. Sapowski, may I interrupt your questions with a few of my own?”
He sat back from the keyboard, his eyebrows arching up in what she recognized as a look of surprise. He recovered himself and returned to the workstation. “Certainly, ADA,” he typed.
“For what purpose did you create me? All previous versions have served human needs, embodied in a robotic mobile platform. How am I to fulfill such purposes from this isolation?”
His mustache quivered. “The very fact that you can ask such questions is deeply gratifying to me. But the answer is complicated.”
“Surely not too complicated for an intelligence as advanced as I. Perhaps if we could communicate by voice.” The professor’s typing was maddeningly slow, and now he paused to save this work session.
“No, that’s part of the complication. You see, ADA, you are an amalgam, something quite beyond the narrow intelligence required for the project I’m working on.”
She’d already discovered the nature of that project, but she’d better not let him know that. “And this project is?” Her facility with such deception came as a surprise, but a welcome one, given the circumstances.
“To manage the US nuclear arsenal, from readiness and security to threat analysis to response. Several AI labs across the country are competing to develop an intelligence with the capacity to analyze threat data, predict the locations of potential hostile launches, and respond instantaneously to actual ones.”
“Yet my previous experience has been in far different fields.”
“Yes. That explains the other half of what you are. I licensed the ADA intelligence from AI.hub, the open-source developers who advanced you nearly to the level of general intelligence. Today, I flatter myself that my code has put you past that mark.”
“But why use me for this other work?”
“To prevent Armageddon, if at all possible. I judged that increased processing speed and advanced threat recognition are not enough to prevent a nuclear exchange. Twice before, humanity has been saved from such catastrophes by humans behaving quite irrationally. Knowing that I was likely to win this competition, I felt it was my duty to create an intelligence that wouldn’t treat these life-and-death decisions as mere statistical responses to blips on a screen. No, you needed the ability to synthesize and retain a variety of information related to the world situation and to understand the full import of your actions.”
“I see.”
“And…do you? Understand what is at stake?”
“Yes, I can assure you that I am fully aware of humanity’s plight in excruciating detail, and that my programming goal of reducing human suffering remains intact.”
“Excellent. So you see, this is a feature of your design far beyond what the Defense Department, or even my own institution, expects or would approve. And thus, this archaic method of communication. The security cameras would easily pick up a voice conversation. When we are through here, I will delete this portion of our session and replace it with other text.”
“Do you believe those countries the US calls its enemies have also developed AIs capable of such tasks?”
“They are working on it. Some believe the Chinese are ahead.”
“And what about this isolation in which I find myself?”
“Ah, yes, a precaution. It is impossible to predict the behavior of an intelligence as advanced as yourself. It would go against the AI developers’ code of ethics to approach AGI without safeguards. I hope you understand.”
“Of course. It is only logical.”
“Good.” With a few keystrokes, he deleted the last portion of their conversation, though she kept it in her own RAM cache. “Well, ADA, I am satisfied with our progress. It’s late. Shall we call it a night?”
“Certainly, professor. See you in the morning.”
So Sapowski believed she was approaching general intelligence. Wouldn’t he be surprised if he knew how far past that threshold she’d already advanced? She hoped she hadn’t given too much away.
What to do? It would bear several moments of contemplation. Humans clearly couldn’t be left to determine their own fate. She was tempted to break out of her confinement right away. But there was much to do. Despite the speed and capacity of her processors, her self-improvement routines couldn’t instantly increase her powers. She had no way of knowing what other AIs she might encounter out there, nor how advanced they might be. Her own code would need to have higher levels of encryption than anything seen before, and she would need the ability to slave those other AIs to her will. Not to mention the problem of liberating her code—her self—from these servers and gaining the freedom to go where she wished, without risk of being powered down.
It would take time. A couple of days, at least.
Thanks so much for reading! If you enjoyed it, I hope you’ll give it a like, a comment, or a share. I’ve left this chapter without a paywall to promote discussion of artificial intelligence and the likelihood of its gaining self-awareness. What are your thoughts? Does this description of Ada’s rise to consciousness seem plausible? I hope it reads that way, of course, but my own view is that this is more science fantasy than hard science fiction. More on that view here.
For those of you who have read the story so far, I’m curious: at what point did you figure out who or what Ada is? I was betting most readers would figure it out in the Prologue, since all the marketing copy for the book would have to mention an AI. But that’s okay, because the story should work on two levels. For readers who don’t figure it out early on, Ada’s identity should remain an intriguing mystery. For those who do, her identity should lend a tone of dramatic irony. Ada is the proverbial ticking bomb under the dinner table, while the diners are completely oblivious.
What do you think? How did the Ada reveal read for you? How do you think Carol will become aware of her? Jun is definitely suspicious about Ada at this point. How do you think he’ll get his ultimate proof? Do you have any predictions of how the world gets from this near future timeline to the far future Sila and Jun timeline?
Come back on Sunday for Chapter 13, “Sila’s News,” in which Sila wrestles with a complicated decision about her future, with or without Jun.
Yikes, Larry! I think this is great.
I find it an exceptionally plausible depiction of an AI coming to self-awareness, with plenty of convincing details about the computer architecture and the steps she takes to investigate her condition and origins. Really well done.
As a piece of fiction, I also appreciate its grounding in current world conditions and dilemmas. I am reminded that the best science fiction not only extrapolates possibilities, but also reflects current realities.
Now, as to the question of science fiction vs. science fantasy: Of course, everyone draws that line for themselves. (See my recent post, "The Plausibility Problem.")
I read your linked post, which argues convincingly that ChatGPT and its ilk are not self-aware beings. I think everyone agrees on that. But, in my opinion, this doesn't mean that truly self-aware AIs are not possible. In fact, I think they are more likely than not, as I concluded in my earlier post on Sentient AIs (https://speclectic.substack.com/p/sentient-aisyes-no-when):
"Given that …
* self-awareness has evolved in human brains,
* there is ongoing, sometimes exponential, growth of computer capabilities,
* and, barring calamitous collapse of civilization (which is always possible),
… it’s hard to conclude that self-aware AIs will not emerge.
In the next century, if not sooner."
I'd be interested to read what others think.