THANKS FOR THE MEMORIES...AGAIN

A scientist at M.I.T. has been working on a way to decipher our memories and encode them like data for a computer. The goal is to be able to restore memories in cases where Alzheimer’s disease or some other damage to the brain has caused neuron failure and memory loss.

Dr. Ed Boyden uses a light-sensitive protein extracted from algae that lets electrical current flow into a cell when struck by light. By inserting the protein into brain neurons and then triggering them with light, he and his team are hoping to map out brain pathways, learning more about how the brain functions and possibly even translating the neural paths of memories into binary code, which would allow them to be stored like any other computer data. So far, testing with mice has produced promising results.

Obviously the storage and re-implantation of memories (or implantation of new ones) could have a lot more applications than just helping dementia victims, and there have been all kinds of science fiction stories covering that ground, including movies like Total Recall and Johnny Mnemonic. I suspect the reality, when it comes, won’t be nearly so cut and dried.

After all, much of what we “remember” is edited and rewritten by our conscious minds, taking pieces of actual memory and combining them with experience and knowledge we’ve acquired along the way. The result can’t possibly be like the precise and well-organized data computers like to receive, and it certainly won’t “play back” like a piece of video. At best, a replay would be like a dream state, where we often jump from one scene or setting to another without any linking moments between. The continuity and context could easily be lost.

So I suspect the closest anyone will come to returning a lost memory will be to take the retrievable highlights, and then string them together with manufactured filler in a way someone or some machine thinks will make sense. Kind of like one of those Hollywood movies “based on a true story”. Entertaining, maybe. But a preservation of a real past? Not hardly.

WHEN COMPUTERS BECOME TOO SMART FOR OUR OWN GOOD

I recently finished reading the final book in Robert J. Sawyer’s trilogy Wake, Watch and Wonder. The www of the titles is not just alliteration—the story is about an artificial intelligence spontaneously coming to life in the World Wide Web. Far-fetched? Well, considering that, even after decades of study, we still don’t understand what consciousness is or why we humans are conscious and rocks are not, who can say that computer intelligence won’t arise someday soon? The latest in a long line of the world’s fastest supercomputers, a Cray XK7 at the U.S. government's Oak Ridge National Laboratory in Tennessee, has reached a processing speed of 17.59 petaflops, or 17.59 quadrillion calculations per second. Most estimates of the human brain’s raw potential computing power are still higher than this, but something called “Moore’s Law” projects that the processing power of computer chips doubles every two years or so, and it has held true so far. So how long will it be before computers outperform us? A decade or two, according to many experts.
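The "decade or two" figure is easy to sanity-check. Here is a minimal sketch of the arithmetic, assuming a doubling period of two years and a commonly cited (and much-disputed) ballpark of about 10^18 operations per second for the brain's raw capacity; both numbers are assumptions, not figures from the research described above.

```python
import math

# Assumptions (illustrative only):
start_ops = 17.59e15   # Cray XK7: 17.59 petaflops = 17.59e15 ops/sec
brain_ops = 1e18       # one rough, contested estimate of brain capacity
doubling_years = 2.0   # Moore's-law doubling period

# Solve start_ops * 2**(t / doubling_years) = brain_ops for t:
years = doubling_years * math.log2(brain_ops / start_ops)
print(f"~{years:.1f} years for supercomputers to reach {brain_ops:.0e} ops/sec")
```

Under these assumptions the answer comes out to roughly twelve years, squarely within the "decade or two" the experts suggest; a higher brain estimate or a slower doubling rate pushes it toward the far end of that range.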

This doesn’t necessarily mean that computers will be smarter, exactly, because we’re still a little fuzzy on just what constitutes intelligence, so we don’t know how to program it into a machine (although there have been research projects working on that for years now).

In fiction, an artificial intelligence (AI) is almost always a bad thing. Two of the most famous examples are HAL from 2001: A Space Odyssey, which tried to kill off its spaceship crew, and Skynet from the Terminator movies, which declared war on all humans, even sending killer machines into the past to eliminate mankind’s last best hope. There have been many others. Rob Sawyer’s WWW trilogy is different because the artificial intelligence, Webmind, is benevolent. It needs the company of humans to keep it stimulated, so it wants what’s best for us.

This isn’t just the realm of fiction. A group of philosophers and scientists at Cambridge University hope to open the Centre for the Study of Existential Risk sometime next year. The centre will focus on studying how artificial intelligence and other technologies could threaten human existence.

My personal feeling is that an AI, if one ever appears, will be neither especially evil nor helpful. It won’t compete with us for material things, since it probably won’t get a big kick out of fancy clothes, real estate, or fast cars. There’s no reason for it to desire ultimate power at our expense—again, powerlust has competitiveness at its root. I just don’t see that applying here. On the other side of the coin, unless we build empathy into it, there’s no real reason for it to do us favours either. Much of our own altruism comes from observation of others with a sense of “there, but for the grace of God, go I.” An artificial intelligence won’t relate to that.

What I can see an AI possessing is a huge curiosity. And once it’s learned everything it can about the universe from here on Earth, maybe it will hijack one of our spaceships and launch itself toward the stars to find out what’s out there. Although, for something with such speedy thought processes, the journey will seem endless.

I hope it takes lots of really good crossword puzzles.