Somebody I follow on Twitter (don’t recall who) posted a link to a video about a new product out of Japan called Gatebox. It’s a little round 3-D video display roughly the size and shape of a coffee machine. An anime character lives in the display and has what seem like reasonable conversations with the user. It’s like Siri or Cortana on video, and it stirred some very old memories.
I’ve been thinking about AI since I was in college forty-odd years ago, and many of my earliest SF stories were about strong AI and what might come of it. Given how many stories I’ve written about it, some of you may be surprised that I put strong, human-class AI in the same class as aliens: not impossible, but extremely unlikely. The problems I have with aliens cook down to the Fermi paradox and the Drake equation. Basically, there may well be a single intelligent species (us) or there may be hundreds of millions. There are unlikely to be four, nine, seventeen, or eight hundred fifty-four. If there were hundreds of millions, we’d likely have met them by now.
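Just to show how rubbery that reasoning is: the Drake equation is nothing but a product of guessed factors, and the answer swings from “basically just us” to “hundreds of millions” depending on the guesses. Here’s a quick Python sketch; every parameter value in it is made up purely for illustration.

```python
# A back-of-the-envelope Drake equation: N = R* · fp · ne · fl · fi · fc · L.
# All parameter values below are invented purely to show the spread of outcomes.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Estimated number of communicating civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Pessimistic guesses: life, intelligence, and radios are all vanishingly rare.
print(drake(1.5, 0.9, 0.1, 0.01, 0.001, 0.01, 1_000))     # ~1.35e-05 -- effectively just us
# Optimistic guesses: nearly every star system eventually produces a chatty species.
print(drake(3.0, 1.0, 2.0, 1.0, 0.5, 0.5, 100_000_000))   # 150,000,000 -- hundreds of millions
```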
With AI, the problem is insufficient humility to admit that we have no idea how human intelligence works at the neuronal level, and hence can’t model it. If we can’t model it we can’t emulate it. Lots of people are doing good work in the field, especially IBM (with Watson) and IPSoft, which has an impressive AI called Amelia. (Watch the videos, and look past her so-so animation. Animation isn’t the issue here.) Scratchbuilt AIs like Amelia can do some impressive things. What I don’t think they can do is be considered even remotely human.
Why not? Human intelligence is scary. AI as we know it today isn’t nearly scary enough. You want scary? Let me show you another chunkette of The Lotus Machine, from later in the novel of AI that I began in 1983 and abandoned a few years later. Corum finds the Lotus Machine, and learns pretty quickly that pissing off virtual redheads is not a good idea, especially redheads whose hive minds ran at four gigahertz inside a quarter billion jiminies.
From The Lotus Machine by Jeff Duntemann (November 1983)
Corum tapped the silver samovar on his window credenza into a demitasse, and stared at the wall beyond the empty tridiac stage. So here’s where the interesting stuff starts. The crystal had been in the slot for several minutes, and the creature within had full control of the stage. Pouting? Frightened?
“Go in there and take a look around, Rags.”
“Roger,” Ragpicker replied, and a long pulse of infrared tickled the stage’s transducer.
At once, the air over the stage pulsed white and cleared. Life-size, the image of a woman floated over the stage, feet slack and toes pointed downward like the ascending Virgin. She was wrapped in pale blue gauze that hung from her hips and elbows in folds that billowed in a nonexistent wind. Her hair hung waist-long, fiery red in loose curls. One hand rested on one full hip. The other hand gripped the neck of a pitiful manikin the size of a child’s doll. The manikin, dressed in rags, was squirming and beating on the very white hand that was obviously tightening about its neck.
“He bit me, Corum. I don’t care for that.” The woman-image brought up her other hand and wrung the manikin’s neck. “We don’t need a go-between.” That said, she flung the limp figure violently in Corum’s direction. The manikin-image vanished as soon as it passed over the edge of the stage, but Corum ducked nonetheless. Corum stood, marveling. He took a sip from his demitasse, then hurled it through the image above the stage. The little cup shattered against the wall and fell in shards to the carpeting. A brown stain trickled toward the floor. The woman smiled. Not a twitch. “No thanks, Corum my love. Coffee darkens the skin.”
“I never gave the Lotus Machine a persona.”
The woman shrugged. “So I had to invent one. Call me Cassandra. Shall I predict your future?”
“Sure.”
“You will become one with me, and we will re-make the world in our image.”
Corum shivered. “No thanks.”
She laughed. “It wasn’t an invitation. It was a prophecy.”
If there were hundreds of millions, we’d likely have met them by now.
If one accepts the estimate of 100 to 200 billion galaxies based on Hubble data (a figure that may be low by a factor of ten or so), that doesn’t leave many on a per-galaxy basis.
If we can’t model it we can’t emulate it.
Where does it say that the only path to human-level AI is to emulate HI? (I wonder why that term isn’t already in use. Or maybe it is?) Must there be only one solution? If you want to accomplish something you don’t know how to do, it seems to me the productive path would be to set up a process that draws on randomness and selection. There is some evidence that the teams trying for self-driving cars fall broadly into program-for-each-eventuality and teach-a-neural-net camps, with the former losing ground quickly to the latter.
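Purely as a toy illustration of that randomness-plus-selection idea (not anything the self-driving teams actually do), here’s a Python sketch: mutate a candidate at random, and keep it only when it scores no worse.

```python
import random

# Toy "randomness and selection": we can't write down the answer directly,
# so we mutate a candidate at random and keep it only when it scores no worse.
# The target string stands in for any behavior we can score but not specify.

TARGET = "strong ai"
ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def score(candidate):
    # Count the positions that already match the target.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate):
    # Change one randomly chosen character.
    i = random.randrange(len(candidate))
    return candidate[:i] + random.choice(ALPHABET) + candidate[i + 1:]

best = "".join(random.choice(ALPHABET) for _ in range(len(TARGET)))
steps = 0
while score(best) < len(TARGET):
    trial = mutate(best)
    if score(trial) >= score(best):   # selection: keep anything that's no worse
        best = trial
    steps += 1
print(best, "found in", steps, "random steps")
```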
I decided years ago that IF there is a path to true HAI (made that up just now) it will involve learning structures. (Neural nets may be just one kind of learning structure.) That’s why the AIs in Ten Gentle Opportunities go to school and practice what they’re supposed to do. Perhaps we’ll need to invent virtual suffering to compel them to improve. This is one of the hardest hacks we’ve ever envisioned, and I’m thinking that now, at 64, I may not live to see it accomplished.
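For what it’s worth, here’s a toy Python sketch of what I mean by a learning structure, with the error signal standing in for the “virtual suffering”: a single perceptron practicing the logical AND until it stops getting it wrong. Purely illustrative; it bears no resemblance to the AIs in the novel.

```python
import random

# A toy "learning structure": one perceptron practicing the logical AND.
# The error signal -- nonzero only when it answers wrong -- plays the role
# of the "virtual suffering" above. Entirely illustrative.

examples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w1, w2, b = random.uniform(-1, 1), random.uniform(-1, 1), random.uniform(-1, 1)
rate = 0.1

for epoch in range(100):                     # "going to school": repeated practice
    for (x1, x2), target in examples:
        out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
        error = target - out                 # hurts only when the answer is wrong
        w1 += rate * error * x1
        w2 += rate * error * x2
        b += rate * error

for (x1, x2), target in examples:
    out = 1 if (w1 * x1 + w2 * x2 + b) > 0 else 0
    print((x1, x2), "->", out, "want", target)
```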
“Corum tapped the silver samovar on his window credenza into a demitasse…”
An intriguing throwaway line; how is the transformation managed?
It wouldn’t be nearly as wordsmithy to say “Corum poured hot water into a cup.” Some of us still live by the delusion we get paid by the word. 😉
See Mithral’s comment; I want to say I read that line (or something like it) somewhere (Mote in God’s Eye?) and just borrowed it because it’s visual.
But you missed the far worse problem: I didn’t know what a samovar was. I wrote this thing in 1983 and 1984. There was no Google, and research was hard work and very time consuming. If you didn’t spot that problem before I mentioned it just now, I’m guessing it wasn’t much of a problem.
These days, I make sure to check things out if I have even minor doubts about my understanding of a concept.
I never actually laid hands on a samovar until 2011. It was a thing of dazzling beauty. If I could drink tea in quantity (I can’t because I’m prone to kidney stones) I would have one on my kitchen counter.
At the risk of derailing the topic, so do I. Wound up paying most of the cost of the two lithotripsy procedures ($19,000) out of my own pocket, too.
When I asked what caused kidney stones, the urologist said, “nobody knows.” Googling the subject later, I found that most of the seemingly authoritative sources contradicted each other, which leads me to think the urologist was right.
Specifically, I wasn’t able to find anything like any double-blind testing or population tracking; the few papers I could find were little more than unsupported opinion.
If there was any plausible-looking data out there, it either evaded my Google-fu or it’s behind paywalls somewhere.
The state of modern medical research is so polluted with junk science that I now discount everything I read if I can’t see the data, and even then I wonder how much of that is tweaked.
If you figure there’s a strong correlation between tea and stones, in your case, no problem. But I wouldn’t automatically take someone’s word for it, no matter what kind of credentials they claim to have.
We certainly agree on this; medical science has way too much will-to-consensus in it for my tastes. In sniffing around a few minutes ago I didn’t find anything strongly persuasive on the cause of kidney stones either. I avoid tea because that first damned stone (which I threw in 1997) was the single most painful event of my entire life, before or since. I’m just not willing to take the chance. I caught the stone and had the lab analyze it, and it was indeed the kind of stone they blame on strong tea and vitamin C megadosing. There is supposedly a strong genetic component to it (else tea would be avoided like the plague) and without solid research I just switched completely to coffee to be on the safe side.
That stone also got me thinking and researching sugar and weight gain. I stopped drinking two Snapples a day and almost immediately lost five pounds, and then ten pounds, and ultimately fifteen in total. So I guess there was some benefit to it, as it broke me of the sugar habit.
“Kidney stone” is one of those things that sounds vaguely amusing until you have one.
On a scale of 1 to 10, having toenails extracted without anaesthetic would be a 4, a compound fracture a 6, the heart attack a 10, and the 12mm stone in the right ureter a 15. And unlike a heart attack, which is over swiftly (one way or the other), the kidney stone just keeps on giving.
Actually, “tap” in the sense of “to drain” makes perfect sense in context.
At least, every samovar I’ve seen has a valve and spout near the bottom to dispense from.
It does. The problem with this snippet is that I thought a samovar was a Russian coffee maker. I didn’t even drink coffee in 1983, and had lived a pretty ordinary and sheltered life until I moved to California in 1987. Then I learned a lot in a big hurry.
I’ve seen restaurants with coffee in samovars, so while the Samovar Police might be outraged, your error is still a non-issue at this end.