(Been a while since I have posted! Lots I could talk about–such as the amazing Charlotte
Both Artificial Intelligence and Artificial Life (AI and AL for the rest of this post) fascinate me. I have been interested in AI since I was a wee lad, when the idea that we could create something that seemed every bit as intelligent and interactive as a human being first intrigued me. Back when I got one of my first books on programming in BASIC (indeed, I am that old), it included a program representing a scaled-down version of ELIZA, which got me thinking about the possibility that one day a created machine might be able to have a plain-language conversation with a human being.
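For those curious how such a program could fit in a BASIC book at all: ELIZA-style programs work by matching the user's input against keyword patterns and filling in canned response templates. Here is a minimal sketch in Python (the rules below are my own illustrative examples, not the actual ELIZA script):

```python
import re
import random

# A few illustrative ELIZA-style rules: (pattern, response templates).
# The real ELIZA used a much larger ranked "script" of keywords.
RULES = [
    (r"I need (.*)", ["Why do you need {0}?",
                      "Would it really help you to get {0}?"]),
    (r"I am (.*)", ["Why do you say you are {0}?",
                    "How long have you been {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your family."]),
]
DEFAULT = ["Please go on.", "I see. Can you elaborate?"]

def respond(user_input: str) -> str:
    """Return a canned reply based on the first matching pattern."""
    for pattern, templates in RULES:
        match = re.search(pattern, user_input, re.IGNORECASE)
        if match:
            # Echo the captured fragment back inside the template.
            return random.choice(templates).format(*match.groups())
    return random.choice(DEFAULT)

print(respond("I need a vacation"))
```

The trick, of course, is that there is no understanding here at all, only reflection of the user's own words, which is part of what made ELIZA's apparent conversational ability so striking.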
And we are making amazing advances in the field of AI, to be sure. In part, many of those advances have come because some researchers have stopped trying to mimic the functioning of human intelligence, which remains to some extent inscrutable. As a result, more and more programs are becoming what would be called, by many measures, “intelligent,” but with an “intelligence” that is increasingly opaque to their programmers. That is, the “thought” processes aren’t always open to being deconstructed in ways we would completely understand, yet the results do generally show some chain of successful reasoning (at least, that’s what I have read on the topic). It seems that as we increasingly create programs and machines that can “learn” more than we program into them, we are losing the ability to describe how they actually come to the “conclusions” they do.
This might be related to some of the weird answers Watson came up with on Jeopardy!, which I mentioned in my post here: “My Dear Watson Isn’t So Elementary” (which also refers to ELIZA).
The practical concerns of this–using “intelligences” that draw conclusions in ways that we don’t exactly understand–are real. For instance, such AI systems have been used, in at least one case that I know of, to help solve a real-world crime. In such a case, if the AI system’s “reasoning” cannot be reconstructed, then the humans involved have to be able to construct valid reasoning on their own. If they cannot, would the AI system’s conclusion be sufficient as evidence? What if lives are at stake? It’s like the mathematician’s dilemma concerning computer-aided proofs of theorems, but with, you know, jail time.
Still, such conundrums aside, I find that a little disappointing — “that” being the idea that in order to make strides in Artificial Intelligence we are abandoning to some extent an attempt to model our own intelligence. Call it sentience-bias, but the intelligence I would most like to see simulated is one that is very much like our own. At the same time, in some cases it seems a matter of abandoning the method in order to embrace the same outcome. Since our own intelligences grow over time through learning and experience, part of the hope of some is to cease trying to recreate intelligence from scratch and to simply create the mechanisms by which intelligence is able to grow. In some cases, the resulting intelligence may be very foreign to what we would think of as human intelligence, while in others it may be very similar.
There are two AI efforts I have read about in the last few months that really intrigue me. One is being driven by the creators of Apple’s Siri in their new company, Viv Labs. I remember the first Siri app, which I had before Apple bought the company behind it (and which stopped working the moment Apple bought the company). And while Siri is a disappointment to many, it actually is pretty impressive in a number of ways. Now, we have others seeking to outshine it, like Microsoft’s Cortana, named after the clothing-challenged AI interface in the popular Microsoft Halo game series. (I think Google has a voice-based assistant, as well, but I do not know if it has a name. Googlette?) All of these, though, pale in comparison to what we really want in an AI assistant, which is, essentially, Tony Stark’s “Jarvis.” We want “someone” who can understand us in real, natural, human language. So far, Siri, Cortana, and their comrades fall short of that and seem less like real “AI” than like simple voice-based interfaces.
But what Viv is working on should change that in an amazing way. Wired had a good article on the operating system that Viv Labs is creating (called “Viv,” itself) if you are interested: “Siri’s Inventors Are Building a Radical New AI That Does Anything You Ask.” If you just want a quick hit instead of reading the whole article (though the whole article is interesting, IMO), the diagram at the bottom showing Viv dissect the statement “On my way to my brother’s house, I need to pick up some cheap wine that goes well with lasagna” and producing (in 0.05 seconds) a helpful list of possibilities is fascinating by itself. Closest thing to a Jarvis that I’ve seen, and light years beyond Siri.
The second research effort that has grabbed my attention is Baby X, being built by New Zealand’s Auckland Bioengineering Institute Laboratory for Animate Technologies. This article on “The Creators Project” blog covers Baby X pretty nicely: “Baby X, The Intelligent Toddler Simulation, Is Getting Smarter Every Day” (the videos below are also featured in that article and may not seem as impressive unless you read the article).
The creators of Baby X were smart, I think, to choose a child’s face, because the features that make it seem child-like also help us to overcome the “uncanny valley” effect (that is, the somewhat ironic revulsion we feel toward things that seem near-human, such as digitally animated faces that seem somehow creepily “off,” as opposed to things that are more obviously non-human, such as C-3PO’s face; the nearer something gets to actual human appearance, the greater our revulsion, until the “valley” is crossed into fully human appearance and we become accepting instead of repulsed–clearly too much for parentheses, so read this if you are curious). Child-like features (big eyes, etc.) make us instinctively want to like the image, which may help overcome the counter-instinct that feels revulsion at something nearly-but-subtly-not-fully human.
Regardless, from what I get from the article and from the Institute’s website, Baby X’s programming is based on a model of actual biological activity (the release of dopamine in rewarding circumstances, etc.) in the hopes that it will learn and respond in a more human way if the manner of learning is related to how we learn and respond.
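Baby X’s actual architecture isn’t spelled out in detail in the article, but the general idea of reward-modulated learning can be sketched in a toy form: a connection gets stronger when activity on it is followed by a “dopamine” reward signal, and slowly decays otherwise. Everything below (the function, rates, and values) is my own illustration, not Baby X’s model:

```python
# Toy reward-modulated learning: a single connection weight is
# strengthened when co-activity is followed by a "dopamine" reward
# signal, and decays slowly otherwise. Purely illustrative.
def update_weight(weight, pre_active, post_active, reward,
                  learning_rate=0.1, decay=0.01):
    if pre_active and post_active and reward:
        # Reinforce toward a ceiling of 1.0.
        weight += learning_rate * (1.0 - weight)
    else:
        # Slow forgetting in the absence of reward.
        weight -= decay * weight
    return weight

w = 0.2
# Repeated rewarded co-activity drives the connection strength up.
for _ in range(20):
    w = update_weight(w, True, True, True)
print(round(w, 3))  # approaches 1.0
```

The hope behind Baby X, as I read it, is that if learning works through mechanisms like this (however much more sophisticated), the resulting behavior will feel more human than behavior that was scripted directly.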
Here are a couple of videos if you are interested:
Clearly, an actual baby’s brain is not being fully modeled in detail, as such modeling is still beyond our ability to accomplish. The vast network of neuronal connections in an adult brain’s connectome exceeds even the Internet in complexity (as discussed in someone’s Tomorrow’s World article: “The Enigmatic Human Brain”). Baby X looks as though it is based on simpler models that attempt to reproduce brain activity without modeling every detail that produces that activity.
Another area of research that has grabbed my attention, however, does model brain activity at every level of detail, and represents, to me, more of an attempt at Artificial Life than Artificial Intelligence. The key factor making it possible is that the brain involved is remarkably simple: that of the tiny worm Caenorhabditis elegans (or C. elegans, for short) — a small (about 1 mm long), non-parasitic roundworm. It is the only living creature to date to have had its entire connectome (the entirety of its neuronal connections) completely mapped out. It helps that the worm’s brain consists of only 302 neurons.
As a result, one can create a virtual creature that behaves just like C. elegans, even though the digital creation is not, in the same sense, alive. Once all the neurons are in place and the connections established, activity within the network begins to act just as it does in the real worm.
In fact, some have gone further. Using LEGO Mindstorms kits — simple robot-building kits using LEGO bricks — people have created robot versions of C. elegans. Certain things have to be modified, to be sure — for instance, where the real creature responds to signals indicating the presence of food, the robot’s sensors might detect the presence of sound, or motor neurons (or their worm equivalent) might be made to activate a wheel instead of a muscle.
However, what is fascinating is that the only real “programming” of the robot worm is the simulation of the neurons and their connections. That is, there is no line of human code saying, “If you run into a wall, turn around and go a different direction.” Rather, the neurons are put into place, they are all connected to each other in the same way as in the worm, and then the robot is turned on. The result? The robot does, indeed, behave just like the worm. No additional programming necessary: just neurons responding to each other and activity moving across the connections from one neuron to another. In order to “program” the worm, it wasn’t necessary to tell it what to do: just create the neurons, arrange them in the same way as the real worm’s, and the robot worm comes to “life” behaving just as its real-life inspiration does.
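The idea that behavior comes purely from the wiring can be sketched in miniature. The four-“neuron” circuit below is entirely invented for illustration (the real map has 302 neurons with far richer dynamics), but it shows the principle: there is no turn logic anywhere, only connections with weights, yet stimulating the touch sensor produces an asymmetric motor response two steps later:

```python
# Toy "connectome" simulation: behavior emerges from the wiring alone,
# not from hand-written rules. This 4-neuron circuit and its weights
# are invented for illustration.
CONNECTIONS = {
    "touch_sensor": [("interneuron", 1.0)],
    "interneuron":  [("motor_left", 1.0), ("motor_right", -1.0)],
    "motor_left":   [],
    "motor_right":  [],
}
THRESHOLD = 0.5  # a neuron "fires" at or above this activation

def step(activations):
    """Propagate one time step of activity through the network."""
    inputs = {name: 0.0 for name in CONNECTIONS}
    for name, level in activations.items():
        if level >= THRESHOLD:
            for target, weight in CONNECTIONS[name]:
                inputs[target] += weight * level
    return inputs

# A "wall hit": the touch sensor fires, and two steps later the motor
# neurons respond asymmetrically -- a turn, with no turn logic coded.
state = {"touch_sensor": 1.0, "interneuron": 0.0,
         "motor_left": 0.0, "motor_right": 0.0}
state = step(state)   # sensor -> interneuron
state = step(state)   # interneuron -> motors
print(state["motor_left"], state["motor_right"])  # 1.0 -1.0
```

The LEGO robot builders do essentially this at full scale: wire up the published connectome, map sensory neurons to the robot’s sensors and motor neurons to its wheels, and let the activity propagate.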
Here’s a video of someone’s LEGO C. elegans in action:
The video does a nice job of showing the “neurons” in action. And, again, though the robot’s behavior may not seem all that impressive, it should be noted that no one gave the robot a bit of human-generated computer code that said, “When you hit a wall, turn around.” Rather, the worm’s neurons and their connections were modeled “as is” and the behavior just occurs. By reproducing the worm’s neuron arrangement in the robot, its builders also reproduced the worm’s behavior. In a sense, this is “Artificial Intelligence” (a worm’s intelligence, to be sure), but it seems to me to be more fully like “Artificial Life” and, in some ways, I find AL more fascinating than AI.
Actually, there is so much more I would like to say, but I have spent more time on this post than I planned — a nice break from the other things I am working on today. So I will leave the questions all of this might generate to you in the comments. (OK, here’s one: Should it ever be possible to completely map a human being’s connectome–his or her neurons and all of their connections–would we expect the human being to be perfectly reproduced, as well? What role would the presence of the human spirit have in that? Would it be a hobbled reproduction lacking in the real “spark” that truly makes us “us”? What would it be? Where are the boundaries? At what point between C. elegans and H. sapiens would such modeling break down? — All right, that’s more than one…)
Fascinating stuff! At least it is for me. 🙂 If you have any thoughts, feel free to share them below.