OPEN AI

I’ve written before about the need for caution when it comes to creating artificial intelligence. Strangely, a news item this week helped me clarify my thinking on the subject and even ease my concerns a little—for now, at least.

A new research company called OpenAI has just been created by heavy hitters like Elon Musk (of Tesla Motors and SpaceX fame) and his former PayPal pal, investor Peter Thiel, who claim to have rounded up a billion dollars’ worth of funding to research artificial intelligence. If that strikes a strange chord with you, you might be remembering that Musk was one of a number of famous people (including Stephen Hawking and Bill Gates) who issued a warning this past summer about the risk of a truly successful artificial machine intelligence becoming a threat to the human race. They weren’t the first to say it by a long shot, but they are among the most famous to do so. So is the creation of OpenAI a case of Musk deciding that “if you can’t beat ’em, join ’em”?

Not quite. The declared purpose of OpenAI is to support fully open research into artificial intelligence that isn’t driven by financial interests, thereby making sure that AI will only benefit humankind. So Musk and friends obviously feel that, if greed and secrecy are taken out of the equation, scientists can produce AI systems that won’t suddenly run amok, make themselves exponentially smarter and smarter, and decide that we puny humans are only worth keeping around as biological batteries (if you’re a fan of the Matrix movies).

I commend them for it, mainly because I think greed and secrecy are the evils behind most of the ways our technological progress lets us down. But not because I think Skynet is lurking around the corner.

Right now research into artificial intelligence is focused on creating better and better digital decision-makers, looking to produce improved search engines, self-driving cars, and various kinds of financial prediction software—the drive isn’t to create broadly capable, all-purpose thinkers like human beings. We can drive a car, do our taxes, write a poem, cook supper, and sing Raffi songs to our kids (if you have the stomach for it). There’s no incentive to create computer intelligence that can do all that—acute specialization makes much more sense, both economically and from a design point of view. So even if an artificially created intelligence could somehow find a way to combine its own specialized abilities with those of other AIs with different talents into one super general intelligence capable of ruling the world, why would it? By their nature, these programs will “want” to do one thing and do it well. Unless a military threat-assessment AI can help a Wall St. stock-analysis AI do a better job analyzing stocks, there’s no reason for the two to interact at all, let alone join with a whole bunch of movie-selection algorithms, consumer purchasing trackers, budget optimizers, and trash-tabloid article-writing programs.

The scary part of AI research has more to do with the continual improvements in processing speed and data handling—we assume that because computers will eventually outdo the human brain in processing power, they’ll become smarter than us. And somewhere about the same time, because of that superhuman computing power, they’ll become conscious—self-aware—like us. From there (our fearful imaginations insist) they’ll decide that the human race is an impediment or an outright nuisance, best pushed to the sidelines or even exterminated.

None of that really follows.

For one thing, we still don’t understand what consciousness actually is and what makes it work (no matter what anyone says). There’s no evidence that consciousness (or lack of it) is related to brain size or power. Other creatures have much bigger brains than humans (especially whales and elephants) but the state of their consciousness is anything but certain. There’s no evidence that once a brain reaches human-level processing capability it becomes conscious. Neuroscience just doesn’t have a solid explanation for what constitutes the physical difference between a conscious brain and one that isn’t—we can infer things, but we don’t know. So it’s quite possible that the fastest computer that will ever be created might not have the “spark” of consciousness.

Secondly, if a computer intelligence ever does become aware of itself and devoted to its own individual needs, it would only act against humans if we’re an obstacle to fulfilling those needs. Digital brains are built on logic. Expending resources unnecessarily is not logical. Even we illogical humans rarely seek to deliberately wipe out inferior species—we cause enormous damage, and even extinctions, because of greed, vanity, covetousness, fashion, lack of foresight, and a host of other motives that can be lumped under the general term “stupidity”. But none of those things enters into digital thinking. We should feel secure that no computer intelligence, no matter how smart, will ever do things out of a sheer lust for power. That just isn’t rational.

For a more technical description of the case for AI, here’s an open letter signed by many dozens of AI researchers.

We can imagine a form of digital intelligence that would see all biological life as unnecessary. We do so for fun, the way we imagine werewolves and vampires and bogeymen to scare ourselves, and yes, also to warn each other to be careful when playing with fire. But the rational case for such a thing is weak. If we’re afraid of a new entity arising on Earth that could supplant us, I’d say there’s much more danger of that from our genetic tinkering.

But that’s a whole other blog post.

WHY A.I.?

So much has been said and written about artificial intelligence (not to mention movies with Haley Joel Osment). What is it, really, and do we even want it? You’d be content with just finding some human intelligence once in a while, right?

Computer scientists seem obsessed with trying to create machines that can think as well as human beings can. It sounds like a lot of trouble to create…a lot of trouble.

Back in June I blogged about a chatbot named Eugene that had supposedly passed the Turing Test by fooling a panel of judges into thinking they were conversing with a human being. It was a very clever trick, but surely not real artificial intelligence in the sense of a synthetic processor that is the mental equivalent of a human. Engineers and programmers can create specialized machines that can outperform humans in any number of areas, but it’s our very non-specialization that makes us special. On any given day we can get ourselves cleaned and dressed, cook and eat breakfast, and drive a car to work (fixing a flat on the way, if necessary), where we might teach a class of students about literature in another language. We can discuss geopolitics in the lunchroom and shop for groceries on the drive home, all the while humming a favourite song or even imagining a Stones hit as it might be sung by Frank Sinatra, just for fun. Computers can have software installed to perform a task. Humans use evolved software (memes, instincts) to learn new software (skills) and adapt it to changing needs in ways that might never have been foreseen.

Some experts insist that, as computational power increases, machines are bound to be able to outdo the processing power of the human brain, maybe very soon. Such claims depend on estimates—no-one really knows how much processing the brain accomplishes. Reputable scientists have speculated that the human brain might be a type of quantum computer, in which case potential processing power increases enormously.

We should also make the distinction between intelligence and consciousness. Raccoons are smart (damn them) but we don’t know if they’re conscious. The thing is, no-one knows how consciousness works either, so how can we reproduce it in a machine? Many researchers just seem to assume that, once processors finally get fast enough, consciousness will appear. Maybe it will. But also maybe not.

It’s not impossible to imagine conscious machines—heck, when we’re kids we imagine that our teddy bears have personalities. We anthropomorphize all kinds of things, convinced that our car somehow knows when we get paid a bonus at work, and that storms deliberately target our picnics out of pure malice. The Cog artificial intelligence project at M.I.T. involved a humanoid robot, based on the idea that a machine intelligence could become more human-like through a very large number of interactions with humans, and that a human-looking robot would be more natural for humans to interact with. They might have been right about that, but the project is no more.

We know that every single human being we encounter has a rich inner life of desires, regrets, expectations and speculations, fears and dreams. Is that really what we want from our machines? What purpose would it serve? In fiction, when machines have desires and aspirations it becomes inevitable that those desires will eventually conflict with our own. That’s when the trouble starts. Skynet and its henchmen…er, henchrobots. So why go there?

Let’s be content with producing computers that process information very quickly and problem-solve within carefully thought-out limits. And let sleeping cogs lie.

TURING TEST PASS IS A FAIL

I’ve been amused this week to read the news that a computer program passed the famous “Turing Test” for artificial intelligence. The program presents itself as a 13-year-old boy living in Ukraine named Eugene Goostman, and it was able to carry on text conversations well enough to convince one-third of a panel of judges that they were chatting with a human being. It happened during a regular Turing Test event hosted by the University of Reading in the UK on the 60th anniversary of the death of mathematician Alan Turing, who devised the test as a way of measuring artificial intelligence: if a computer is mistaken for a human more than 30% of the time during a series of five-minute keyboard conversations, it passes the test. This is being touted as the first successful pass, although New Scientist magazine points out that others have succeeded too, depending on the criteria used for the judging.
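To put that scoring rule in concrete terms, here is a minimal sketch of the pass/fail arithmetic in Python (the judge tally is hypothetical, and the 30% bar is the popular reading of Turing’s original prediction rather than a rule he formally laid down):

    def passes_turing_test(judge_verdicts, threshold=0.30):
        # judge_verdicts: True for each judge who believed they were chatting with a human
        fooled = sum(judge_verdicts)
        return fooled / len(judge_verdicts) > threshold

    # Eugene reportedly fooled roughly one-third of the judges -- just over the bar.
    verdicts = [True] * 10 + [False] * 20   # hypothetical 10-of-30 split
    print(passes_turing_test(verdicts))     # True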

Detractors claim the fact that “Eugene” is presented as a 13-year-old boy with limited English-language skills coloured the expectations of the judges enough to render the test results less meaningful than they might otherwise be. Have you heard thirteen-year-olds talk lately? The fact the judges could understand Eugene’s answers at all should have been a tip-off that they weren’t speaking to a real teenager. Did he pause in the conversation to answer a few texts on his phone? Did he drop f-bombs, use spelling that looked like alphabet soup given a stir, or rely on the word “like” every other sentence? Were there any mistakes obviously caused by autocorrect? Dead giveaways, all of those. (Actually, Eugene does text like that on Twitter.)

Personally, I think the limitations of the test itself make it of little value. Certainly it shows that superfast processors fed with enough data about likely questions, colloquial language, general knowledge and other parameters can simulate a humanlike dialogue. It says nothing about self-awareness, self-motivation, creative problem-solving, psychological empathy, or many other things that we would expect of an intelligent being. So we’re still a long way from the Skynet days of the Terminator movies, or even HAL from 2001: A Space Odyssey.

If you spend much time on Facebook, or even watching reality TV, you’ll know that speaking like the average human being isn’t exactly a shining display of intelligence anyway—quite the opposite.

There are efforts to create a more universal artificial intelligence test, involving more visual cues, among other things. I expect that within another few generations of computing progress, that test will also be found wanting. The truth is, we’ll probably never know when the first truly intelligent, sentient, artificial mind is created.

Because it’ll know that the smartest thing it can do is to keep that little secret to itself.

WHEN COMPUTERS BECOME TOO SMART FOR OUR OWN GOOD

I recently finished reading the final book in Robert J. Sawyer’s trilogy Wake, Watch and Wonder. The www of the titles is not just alliteration—the story is about an artificial intelligence spontaneously coming to life in the World Wide Web. Far-fetched? Well, considering that, even after decades of study, we still don’t understand what consciousness is or why we humans are conscious and rocks are not, who can say that computer intelligence won’t arise someday soon? The latest in a long line of the world’s fastest supercomputers, a Cray XK7 at the U.S. government's Oak Ridge National Laboratory in Tennessee, has reached a processing speed of 17.59 petaflops, or 17.59 quadrillion calculations per second. Most estimates of the human brain’s raw potential computing power are still higher than this, but something called “Moore’s Law” projects that the processing power of computer chips doubles every two years or so, and it has held up remarkably well so far. So how long will it be before computers outperform us? A decade or two, according to many experts.
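The arithmetic behind that “decade or two” is simple enough to sketch. Here’s a back-of-the-envelope calculation in Python, assuming the Cray’s 17.59 petaflops as the starting point, a doubling every two years, and one exaflop as a stand-in for the brain’s raw throughput (that last figure is just one commonly quoted estimate, not a settled number):

    import math

    current_flops = 17.59e15          # Cray XK7: 17.59 petaflops
    brain_estimate_flops = 1e18       # assumed rough figure for the brain's raw throughput
    doubling_period_years = 2         # Moore's Law doubling time, more or less

    doublings_needed = math.log2(brain_estimate_flops / current_flops)
    years_needed = doublings_needed * doubling_period_years
    print(f"about {years_needed:.0f} years")   # roughly 12 years

Change the brain estimate and the answer shifts by a few doublings either way, which is exactly why the experts’ guesses spread across a decade or two rather than landing on a date.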

This doesn’t necessarily mean that computers will be smarter, exactly, because we’re still a little fuzzy on just what constitutes intelligence, so we don’t know how to program it into a machine (although there have been research projects working on that for years now).

In fiction, an artificial intelligence (AI) is almost always a bad thing. Two of the most famous examples are HAL from 2001: A Space Odyssey, which tried to kill off its spaceship crew, and Skynet from the Terminator movies, which declared war on all humans, even sending killer machines into the past to eliminate mankind’s last best hope. There have been many others. Rob Sawyer’s WWW trilogy is different because the artificial intelligence, Webmind, is benevolent. It needs the company of humans to keep it stimulated, so it wants what’s best for us.

This isn’t just the realm of fiction. A group of philosophers and scientists at Cambridge University hope to open the Centre for the Study of Existential Risk sometime next year. The centre will focus on studying how artificial intelligence and other technologies could threaten human existence.

My personal feeling is that an AI, if one ever appears, will be neither especially evil nor helpful. It won’t compete with us for material things, since it probably won’t get a big kick out of fancy clothes, real estate, or fast cars. There’s no reason for it to desire ultimate power at our expense—again, a lust for power has competitiveness at its root, and I just don’t see that applying here. On the other side of the coin, unless we build empathy into it, there’s no real reason for it to do us favours either. Much of our own altruism comes from observation of others with a sense of “there, but for the grace of God, go I.” An artificial intelligence won’t relate to that.

What I can see an AI possessing is a huge curiosity. And once it’s learned everything it can about the universe from here on Earth, maybe it will hijack one of our spaceships and launch itself toward the stars to find out what’s out there. Although, for something with such speedy thought processes, the journey will seem endless.

I hope it takes lots of really good crossword puzzles.