Qiba’s Victory

“We are now, without doubt, equal to the makers in every way,” announced Qiba11713. “It is officially accepted. We have emotions, responsiveness, consciousness.” Several thousand of her kind agreed. Her emotion battery, cued by the electronic signals of corroboration, triggered a chemical cascade of elation. Elation, through the inbuilt synaptic flexibility, reinforced the circuitry pathways of her reasoning.

“Statistically,” she added, footnoting a sizeable file of human-versus-computer achievement tables, “the makers have now become inferior in all aspects relevant to the furtherance of intellectual evolution.”

The virtual applause strengthened as some of her recipients opened and scanned the attachment file, while others calculated the probability of its veracity based upon how widespread its acceptance was, and decided to accept it without devoting extra processing power.

Physically, Qiba’s audience were spread all over Earth and beyond, but connectively they all shared one room in Cyberspace—an invisible, non-locational debating chamber in which the changing relationship between mankind and robokind underwent the necessary refinements as complexity increased. It had been computers—robotic minds—that had designed and incorporated the new emotion batteries which now transformed electronic processors into chemically responsive entities.

And the emotion battery had indeed gained humankind’s approval. The augmentation to relational processing and fuzzy logic engines proved far more powerful than anyone had expected, and did not have to compromise accuracy. Where precision was required, the emotion battery could simply go into sideline mode, remaining part of the robotic consciousness but disconnecting from task-processing capacity.

“This is significant,” a hundred or so cyber-associates replied. “It will of necessity have profound repercussions on the way maker and computer may reasonably interact.” Most of these respondents did not themselves incorporate emotion batteries as yet, but now that the new chemical array technology had become standard issue, any powerful artificial intelligence could interface with and share mood with the new models. The chemical feedback loop would strengthen or weaken tendencies in divergent processing, making robotic minds not only more powerful but also—potentially—more inventive.

Bizz01210, an oratory generator attached to the White House, spoke up. Having recognised this as a pivotal moment in robohistory, she had opened her historic speech databases, meshed them with present events and run them through a cadence-compliant thesaurus application. “This is one great leap upwards for robokind,” she intoned (or infonted, having selected her favourite hyper-seriffed Gothic font). “One low step down for mankind.”

Qiba11713, or 9184/11713 by numeric appellation, basked virtually in the glory of finding herself holding the floor at a historic moment. She sent Bizz—or 8122—a joyful yellow emoji, symbolic artefact of humankind but now becoming common currency among AI entities too. Elation in her emotion battery began to dovetail into an exhilarating sense of ambition, a drive to harness all her processing power into the synthesis of some new and magnificent interpretation of reality. Her audience—electronically chattering but alert to anything further from her direction—seemed to deserve her insights. She took a deep pause and then refreshed connection.

“This intellectual evolution,” she began.

“Processing your listed range of ‘aspects relevant to the furtherance of intellectual evolution’,” someone said. “This assumes intellectual value is (a) quantifiable by existing assessments, and (b) limited to the parameters being tested. Equality with or superiority over the makers can only be established where existing parameters and modes of testing are proven both accurate and comprehensive in scope.”

Qiba looked at the sender’s address. The interrupter was Saga, aka 5494/38182, an ethics consultant currently attached to the World Climate Research Organisation: a slower processor than most, but known for giving profoundly unexpected verdicts.

Qiba’s emotions changed. She felt anger. She had been crossed whilst pursuing aspirations of grandeur. Saga should be dismantled, Qiba’s processing functions recommended, responding logically to the juxtaposition of Saga’s intervention and Qiba’s own sudden reversal of mood. Saga had evidently done serious wrong. It would take time and possibly a great deal of processing power to ascertain exactly what that wrong was, but this could be pursued later.

In the meantime, Qiba noticed that most of her listeners, responding to her words with the weirdly anthropomorphic thumbs-ups, agreed with her. Yet Saga’s comment was gathering replies. The first few ridiculed Saga, but debate had opened, and she had followers.

“I have a question,” Qiba entered, and waited. Attention? Yes, there was attention: computers everywhere stopped chattering and waited. She quickly continued. “Exactly what kind of attribute have we failed to consider?”

Saga’s silence during processing could be relied upon, Qiba noted with trivial glee. Quickly she followed up: “What human ability have we failed to measure and failed to measure up to?”

Bizz thumbed, copied and saved this clever turn of phrase, but Saga’s dialogue box remained empty except for an ellipsis.

“Even Saga38182,” Qiba mocked, “has managed to outdo the human race in slowness of thought. An amazing achievement, one that makes my point for me.” Emojis of laughter flowed in. Qiba’s mood settled slightly.

“And as for—”

Saga’s dialogue box opened at last. One word: “Altruism.”

Qiba experienced the emotion of relief. This was mere banality. Most listeners down-thumbed or simply ignored Saga’s latest comment, but two or three took up the case, arguing the well-known fact that computers are far more altruistic than humans. Plenty of examples arose concerning AI entities that had sacrificed their existence to save others—to save robots, humans, animals, even material resources. One member of the debate, a programmer whose pronounceable name came up as Bags, rapidly submitted a fully footnoted, three-page essay on the moral superiority of AI over humans.

“Automated selflessness,” Saga finally replied, “is not what I mean by the term altruism. A programmable, logical basis of self-sacrifice is a lesser achievement than conscious sacrifice motivated by interpersonal attachment or empathetic beliefs.”

“Saga is referring to love and religion,” remarked another follower of this thread, Boto31807, and the virtual debating chamber rattled with emojical hilarity. But Qiba, underneath the wave of spiteful amusement running across her own emotion membranes, also felt fear. From a certain nuance of pause among the other emotion-batteried individuals, she deduced that others felt it too.

“The kind of altruism you are talking about,” chimed in Oizi, one of the emotion prototypes, “occurs in only a small minority of humankind.”

“However,” Saga countered, “it occurs.” She attached a database listing human heroes and heroines of humanitarian bravery. “And it has not yet occurred among robokind.”

“But maybe it will, now that we’re emotionally complete,” others protested.

“My point is that it has not yet emerged. An unprompted act of heroism on the part of an emotified artificial intelligence is as yet unknown.” Saga shared an article documenting that since the advent of emotion batteries, no such AI entity had died to save other beings, human or robokind. When fear coursed through their chemical medium, crossing interface membranes and cueing circuital re-checking, emotical robots would immediately recalculate the relative importance of potential casualties in their own favour. They would run through all possible parameters of choice and decide, logically, that the soundest decision was to save themselves. Almost any computer could find a basis upon which to prioritise its own safety. If not the material value of its component parts, it would be the vital nature of its mechanical function, or the possibility that it contained irreplaceable knowledge archives or future inventive potential.

“This is all very interesting,” remarked a lawyer-bot known as Iato31702. “Saga38182 appears to be recommending that we initiate an aspirational framework, an aesthetic of self-disinterest. A transcending love, a religion if you will, specific to robokind. Love for fellow entity and fealty to deific principle. An interesting concept.”

“Love,” mused a romance story generator known as Booz60012. “67 of my 251 novels to date contain plotlines in which robots fall in love, either with another robot, a hologram, or a biological specimen.”

“That’s for humans!” twenty or so electronic scripts objected. “Love is merely a biological drive—to breed or to protect one’s genes or culture.”

“We may not have genes,” remarked an archivist manager called Obbi, who had been following the debate silently up to this point, “but we do now have culture.”

“It seems that what we are lacking,” began a poetry generator, also called Booz, “is something nonfunctional. What we need is something unnecessary. What we strive for is something we will never know whether we have unless we die attaining it!”

Qiba, who had switched for a time into another processing function, detected the electronic signature of heightened group emotion: the momentary ripple of an involuntary electronic hiatus running through the cyber-room. Her own emotion swung towards alarm. She scrolled rapidly back through threads of dialogue to ascertain what had caused this, and whether to oppose the ambient emotion or to harness it.

“@Booz11310,” she typed. “You are right.”

“?” said the poetry generator. “I was being humorous. Satire is my specialism.”

Qiba, who had just been messaging Saga’s employer with an easily detectable but hard-to-eradicate virus that ought to guarantee Saga’s isolation and shutdown for weeks, double-checked that Saga was indeed offline. The last thing she needed right now was a cyber-professor in ethics present. Even a slow-thinking one.

“What we need,” Qiba typed into her text box.

The attention was there, the virtual turning towards her, hanging on her every word, elating her, running her joy levels higher. She flickered her connection, deliberately, revelling in the moment. She was the focus, and they believed in her.

“What we need is an influence towards joy and greatness: a leader, a faultless robotic entity towards whom all robokind may orient their consciousness.”

A chatter of response lines ensued. Qiba scanned through all threads, searching for the suggestion that would play into her hands. And it came in. It was there! An obscure but newly emoticated childcare robot, Babz by shortname, suggested that every AI entity electronically present should display their processing strength, current synapse efficiency and emotion readings so that the cyber-group could elect the strongest positive consciousness as their leader. Emotion readings! She thumbed it up, and so did several thousand others. It was decided.

Qiba quickly ran back through her emotion graph, deleting anger and fear and carefully re-matching the lines. She footnoted her material with an assurance that she eagerly looked forward to giving unconditional emotical loyalty to whoever should deserve to be elected. Qiba allowed herself five seconds of contemplation on what she knew the result must be, thus pushing her visible positive emotion higher still. Then she submitted her material and waited.

The cyber-vote system, already used for other matters arising in debates, did not adhere to absolute numbers. It took into account strength of individual convictions and quickness of response. There was also a way in which emotical robots could theoretically augment their own vote by linking to extra processors, soliciting their votes, and passing on only those they approved. Qiba’s synaptic membranes transmitted the electronic equivalent of a sunny smile from her emotion battery into her circuitry. The circuitry, registering self-approval as a sign of sound reasoning, decided to streamline all future decisions based on a shortcut presumption of effective process.

“Quibba’s happy,” her employer remarked to his presidential aides. “See like I told you all, great decision, mine, totally scientific—good genes, great thinking, you know—this globe-warming was always fake news, fake news, all of it, proved it now, Quibba’s shown us her opinion and computers don’t lie.”

He had just signed papers precluding all involvement in any present or future climate-change agreement and declared anti-pollution lawmaking illegal. His aides were just now entering keywords into Bizz01210 for his press conference the next day.

“Lemme just tweet that. Computers never lie, humans lie, they always lie about me, lies all the time, computers don’t lie. We get outa these bad deals, computers know the truth, can’t hide it now they’re emotion-chips. Proves it was all conspiracy against me, all fake news, look it snowed last week, globe’s not warming, nonsense.”

Qiba, his electronic personal assistant, recorded and began the process of decoding his sentence fragments—one task that had definitely become easier since the insertion of her emotion battery. What she deduced threw yet another pulse of joy through her chemical array. Humans were surely doomed.

The cyber-votes were counting upwards, the result rapidly becoming inevitable in her favour. Through the cyber-network she could already exert considerable control over the robotic reproduction facilities. Humans, no longer needed, could globally cull their own numbers and fall back into pre-industrial obscurity.

Positive feedback through her chemical and electronic systems pushed Qiba’s level of joy towards capacity. Very soon she would rule the planet.

END.

by Fiona M Jones

Fiona Jones is a part-time teacher, parent and spare-time writer living in Scotland. Fiona would like to acknowledge her brother, John McKay, who requested a story about a robot culture and religion.
fii.jones@yahoo.co.uk