OCTOPUSES AREN'T ALIEN AFTER ALL?

Image from the H.P. Lovecraft Wiki

A number of science-related stories caught my eye this week: a competition to design Elon Musk’s Hyperloop, claims that octopus DNA is alien, and a planned clinical trial to revive people who are brain dead. How to choose? I couldn’t, so read on for a look at each of them.

First, the Hyperloop: You may remember back in 2013 when billionaire Elon Musk of SpaceX and Tesla cars fame announced his idea for an enclosed, near-vacuum, high-speed rail transportation system that would run across the continent between the largest U.S. cities in hours instead of days. Tech lovers jumped on the idea, so much so that Musk had to publish disclaimers denying any connection to the hyperloop companies that sprang up, and a year ago he announced a competition for universities and other organizations to design the ultimate hyperloop transport pod. He even had a test track built at the SpaceX headquarters in Hawthorne, California. The response has been terrific, and so are the designs—you can take a look at them at The Verge. The plan is to have the test pods compete sometime this August, but no date has been confirmed.

The economic benefits of such a high-speed transportation system could be considerable, but I’m more excited about the ecological benefits of getting that many cars and buses off the highways and commuter jets out of the air. The classic science fiction stories I loved to read as a kid (like The City and the Stars by Arthur C. Clarke) often had planet-wide transport systems, maybe running right through a planet, and while the scenery at such high speeds (or underground) might not be much of an attraction, it sounds a lot more environmentally friendly than sub-orbital rockets shooting all over the globe. That’s a win in my book.

You may have seen recent Facebook posts of articles claiming something like “Scientists Say Octopuses Are Alien!” The drift of the story was that researchers had found that “octopuses have a genome that yields an unprecedented level of complexity, composed of 33,000 protein-coding genes”, more than the number found in a human being. Other quotes proclaimed that octopuses are utterly unlike any other creatures on Earth. In other words, the flamboyant octopus must be alien!

Except the original article in the journal Nature didn’t make that claim at all. The point was that octopus DNA can rearrange itself in ways that previously had only been seen in vertebrates, not invertebrates—notable, sure, but hardly alien. And the article was published almost a year ago—why did so many “news” outlets jump on it now? Snopes.com explains the whole charade more extensively. The takeaway is: don’t believe everything you read, especially online. I have to wonder whether this flap speaks to a childhood obsession with Lovecraft’s Cthulhu among web journalists.

So what about bringing the dead back to life? No, it’s not yet another zombie movie or a re-imagining of Frankenstein. A new clinical trial in India will explore the possibilities of using stem cells to repair brain damage in patients who are officially brain-dead because of accidental injuries (kept alive only by life-support machinery). The research, if it goes ahead, will involve the injection of stem cells and peptides, plus transcranial stimulation with infrared lasers. Stem cells are the body’s embryonic-type cells, capable of becoming any of the specialized cells our bodies use for a huge variety of functions. They have already been used in treatments for cancer and autoimmune diseases. Might they be able to replace damaged brain cells and eventually enable a clinically dead person’s brain to “reboot” itself? That’s a simplified explanation, and whether the trial will go ahead at all is a big IF right now, not only because of medical ethics concerns, but also because of doubts that the lead researcher is qualified to conduct that type of study.

It will be interesting to see what happens if the trial goes ahead, but if the process works, what then? The implications for healing brain injury patients are staggering, but it’s unlikely that such research would stop there. Why not revitalize aging brains? Return the next aging Einstein to his youthful mental prime? Or, yes, perhaps even bring the recently dead back to life, as long as decay hasn’t proceeded too far. It might even be a way of preserving the brains of special people beyond the life of their physical bodies.

OK, now I can’t help picturing Richard Nixon’s brain in a jar on the TV show Futurama, and that means it’s time to stop writing. But there’s always lots of juicy stuff to read in the science columns. Just be sure to consult your inner skeptic along the way.

WHOSE DATA IS IT ANYWAY?

You can’t use a computer or other networked device these days without hearing about “the cloud”. Cloud file storage means that your computer, phone, or tablet uploads files to some company’s computer servers via the internet. The advantages include: a) you save storage space on your own device’s hard drive or flash memory, b) you can access your files from other internet-connected devices you own without having to make copies, c) other people can access your files with your permission (like photos you want to share), and d) you can back up your files and not worry about losing them if your computer implodes. Sounds like a good deal, right? Cloud services usually offer free storage up to a certain limit, and then let you buy more space if you need it (because who ever deletes files anyway?—well, actually some cloud services do, but we’ll get to that).
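For the technically curious, here’s roughly what “uploading to the cloud” amounts to: your device ships the file’s bytes to the provider’s server over HTTPS, and any authorized device can fetch them back later. This minimal Python sketch uses the requests library against a made-up endpoint and token; real services like Dropbox or Google Drive each have their own APIs, so treat this purely as illustration.

    import requests

    # Hypothetical endpoint and access token, invented for illustration only.
    UPLOAD_URL = "https://cloud.example.com/files/vacation.jpg"
    AUTH = {"Authorization": "Bearer your-access-token"}

    # Upload: stream the file's bytes to the provider's server.
    with open("vacation.jpg", "rb") as f:
        requests.put(UPLOAD_URL, data=f, headers=AUTH).raise_for_status()

    # Download: any device holding the token can retrieve the same bytes.
    resp = requests.get(UPLOAD_URL, headers=AUTH)
    resp.raise_for_status()
    with open("vacation_copy.jpg", "wb") as f:
        f.write(resp.content)

The point to notice: after that first step, everything happens on hardware someone else owns, under terms someone else wrote.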

More and more software companies are moving away from selling software to you in favour of having you subscribe to their service (like Adobe’s iconic Photoshop), with all of your work-in-progress automatically stored “in the cloud”, of course.

There have been problems. Servers can be damaged or hacked, or shut down entirely if the company goes out of business. Internet services can have outages. But it’s the more insidious features that have kept me away from cloud storage.

If you’ve ever had an Apple iCloud account and wanted to cancel it, change to a new one, or just sign out, you’ll have seen a warning that documents stored in your iCloud account will be deleted from your local computer.

What?? Why? Whose files are they anyway?

Something similar can happen if you subscribe to Apple’s music streaming service, Apple Music. In fact, people who weren’t careful have apparently lost thousands of tunes they purchased, created, or acquired elsewhere, because of the strange way Apple does these things. In the case of iCloud, I’ve read that you can’t actually delete an account—your files all remain on Apple’s servers in case you ever want to sign back in. And Apple isn’t unique—a number of services have had to backpedal because their terms of service seemed to suggest they would own the data they stored. So the biggest players now expressly state that they do not claim ownership…except they still act like they do.

Again, whose files are they? You thought they were yours, but once you’ve uploaded them to the cloud, a company can delete them from your own computer and then hang onto them for as long as they like.

No thanks. Extra hard drives aren’t that expensive.

So where will all this lead? Well, it will take some determined lobbying to stop this trend, and I don’t see anything like that happening. People blindly accept the situation because of the convenience it offers, just like they willingly give companies access to huge amounts of private personal information for “reward points” or other paltry incentives. I don’t understand that either. But since hardly anyone objects, we have to assume it will only get worse, and soon all of the electronic documents, photos, music, and other forms of creativity and entertainment you produce or consume will be under the control of others.

Don’t expect it to stop there.

Eventually our phones and tablets will be replaced by devices that directly interface with our brains. Our minds will have internet connectivity, with the ability to access all of that information and entertainment by the power of thought. Now we upload our photos to the cloud. Maybe by then we’ll depend on it to store our actual memories. And when we do, who will have control over them? I think you know the answer. We’re willing to hand over custody of personal documents and pictures for the sake of a few gigabytes of free storage, so it’s not realistic to expect we’ll balk at such things when we’re offered the ability to practically relive that Bruce Springsteen farewell concert we loved so much, note by note, anytime we feel like it.

Just as long as we don’t opt out of the storage company’s service, or do anything else to cross them, and as long as they don’t go out of business or succumb to a malware attack. Then it’s ‘bye bye memories’.

The two Total Recall movies were based on a Philip K. Dick story called “We Can Remember It For You Wholesale”, but that was about implanting fictional memories for fun. What about when a company makes you subscribe to their service to be able to access your own memories? Or when you’re able to learn specialized job skills using direct information downloads to your brain, but the training company can take those skills back if you stop paying for them? Or if you’re a creative type and you want to keep working on that epic fantasy novel you’re writing but the cloud server is offline, or there’s been a glitch that erased a couple of chapters, or the service wants half the royalties if the novel ever sells…or…or…? Are you getting the picture?

Whose data is it anyway? Unless you’re keeping it totally under your own control, that’s just not so easy to answer anymore.

This blog post doesn’t even touch on the other risks of cloud computing, like cyberattacks and weak security among users. If you want to read more, here are some starters from InfoWorld, Business News, and Information Week.

CAN WE PROGRAM ROBOTS TO MAKE ETHICAL DECISIONS?

Self-driving cars are being tested by Google, Tesla, and other companies around the world. So far their safety record is good, but then they’re programmed to be much more conservative than the average human driver. Such cars are among the first of many robots that could potentially populate our everyday life, and as they do, many of them will be required to make what we’d consider ethical choices—deciding right from wrong, and choosing the path that will provide the most benefit with the least potential for harm. Autonomous cars come with some unavoidable risk—after all, they’re a couple of tons of metal and plastic traveling at serious speed. But the thought of military forces testing robot drones is a lot more frightening. A drone with devastating firepower given the task of deciding which humans to kill? What could possibly go wrong?

Most discussions of robot ethics begin with science fiction writer Isaac Asimov’s famous Three Laws of Robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
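Those laws translate into code almost teasingly well, which is part of their charm. Here’s a toy Python sketch of the three laws as a priority filter; it ignores the First Law’s “through inaction” clause, and every boolean tag on an action stands in for the genuinely hard part, which is perceiving and predicting harm in the first place.

    # Toy encoding of the Three Laws as a priority filter over candidate
    # actions. The boolean tags would, in reality, be the output of the
    # unsolved problem: deciding what counts as "harm".
    def choose_action(candidates):
        # First Law: discard anything that injures a human.
        safe = [a for a in candidates if not a["harms_human"]]
        # Second Law: among safe actions, prefer those obeying human orders.
        obedient = [a for a in safe if a["obeys_order"]] or safe
        # Third Law: among those, prefer self-preservation.
        prudent = [a for a in obedient if not a["self_destructive"]] or obedient
        return prudent[0] if prudent else None  # sometimes no lawful action exists

    actions = [
        {"name": "restrain patient", "harms_human": True,
         "obeys_order": True, "self_destructive": False},
        {"name": "stand by", "harms_human": False,
         "obeys_order": False, "self_destructive": False},
    ]
    print(choose_action(actions)["name"])  # -> "stand by": the First Law wins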

It should be remembered that Asimov created the three laws to provide fodder for a series of stories and novels about scenarios in which the three laws failed. First and foremost, he was looking to tell interesting stories. As good as the laws are for fictional purposes, the reality will be vastly more complicated.

The core value of the three laws is to prevent harm to human beings above all. But how do we define harm? Is it harmful to lie to a human being to spare his or her feelings (one of Asimov’s own scenarios)? And there’s the question of quantifying harm. Harm to whom, and how many? Some recent publications have pointed out that self-driving cars may have to be programmed to kill, in the sense of taking actions that will result in the loss of someone’s life in order to save others. Picture a situation in which the car is suddenly and unavoidably faced with a bus full of children in front of it and cannot brake in time. If it veers to the left it will hit an oncoming family in a van, or it could steer right, into a wall, and kill the car’s own occupants. Other factors might come into play: there’s a chance the van driver would veer away in time, or maybe the bus has advanced passenger-protection devices. Granted, humans would struggle with such choices, too, and different people would choose differently. But the only reason to hand over such control to autonomous robot brains is the expectation that they’ll do a better job than humans do.
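To see how cold-blooded that programming would look, here’s a minimal Python sketch of the scenario above as a harm-minimization calculation. Every probability and passenger count is invented; producing such numbers in real time from sensor data is the real problem, and nothing here says the invented values are the right ones.

    # Invented numbers for the bus/van/wall dilemma described above.
    options = {
        "brake only":   {"p_fatal": 0.9, "people_at_risk": 30},  # the bus of children
        "swerve left":  {"p_fatal": 0.7, "people_at_risk": 4},   # the family in the van
        "swerve right": {"p_fatal": 0.8, "people_at_risk": 2},   # the car's own occupants
    }

    def expected_casualties(option):
        return option["p_fatal"] * option["people_at_risk"]

    choice = min(options, key=lambda name: expected_casualties(options[name]))
    print(choice)  # -> "swerve right": the arithmetic sacrifices the occupants

Notice how much moral weight hides in two made-up numbers per line, and that nothing in the arithmetic distinguishes the car’s owner from a stranger.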

One of the articles I’ve linked to below uses the example of a robot charged with the care of a senior citizen. Grandpa has to take medications for his health but he refuses. Is it better to let him skip the occasional dose or to force him to take his meds? To expect a robot to make such a decision means asking it to predict all possible outcomes of the various actions and rank the benefits vs. the harm of each. Computers act on chains of logic: if this, then that. The reason they can take effective actions at all is that they can process unthinkably long chains of such links with great speed, BUT those links have to be programmed into them in the first place (or, in very advanced models, developed through machine-learning processes like those behind Google’s search results and Amazon’s recommendations).
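Here’s what even a drastically simplified chain of that kind might look like for the medication scenario, sketched in Python. Every condition, threshold, and flag is invented for illustration; a real care robot would need vastly more of them, each one a judgment call somebody had to program.

    # A toy rule chain for the "Grandpa refuses his meds" dilemma.
    def medication_decision(patient):
        if patient["dose_is_critical"]:
            if patient["likely_violent"] or patient["fragile_bones"]:
                return "alert a human caregiver"  # forcing it risks physical harm
            return "insist firmly"
        if patient["doses_skipped_recently"] > 2:
            return "insist firmly"  # cumulative medication needs consistency
        return "allow the skip, log it, retry later"

    grandpa = {"dose_is_critical": False, "likely_violent": True,
               "fragile_bones": True, "doses_skipped_recently": 1}
    print(medication_decision(grandpa))  # -> "allow the skip, log it, retry later"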

A human caregiver would (almost unconsciously) analyze the current state of Grandpa’s health and whether the medicine is critical; whether the medication is cumulative and requires complete consistency; whether Grandpa will back down from a forceful approach or stubbornly resist; whether he has a quick temper and tends to get violent; whether his bones are fragile or he bruises dangerously with rough handling; whether giving in now will win greater compliance from him later; and so on. Is it possible to program a robot processor with all of the necessary elements of every possible scenario it will face? Likely not—humans spend a lifetime learning such things from the example of others and from our own experience, and we still have to make judgments in all-new situations based on past precedents that a computer would probably never recognize as relevant. And we disagree endlessly amongst ourselves about such choices!

So what’s the answer? Certainly for the near term we should significantly limit the decisions we expect such a technology to make. Some of the self-driving cars in Europe have a very basic response when faced with a troublesome scenario: they put on the brakes. The fallback is human intervention. And that may have to be the case for the majority of robot applications, with the proviso that each different scenario (and its resolution) be added to an ever-growing database to inform future robotic decision-making. Yes, the process might be very slow, especially in the beginning, and we’re not a patient species.
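In code terms, that conservative policy might look something like the following Python sketch. The confidence threshold and the logging format are my assumptions for illustration, not any manufacturer’s actual design.

    import json, time

    CONFIDENCE_THRESHOLD = 0.95  # assumed cutoff for acting autonomously

    def handle_scenario(scenario, decide):
        # 'decide' is whatever planner proposes an action plus a confidence score.
        action, confidence = decide(scenario)
        if confidence >= CONFIDENCE_THRESHOLD:
            return action
        # Fallback: brake and hand control back to the human...
        log_for_review(scenario, action, confidence)
        return "brake and request human intervention"

    def log_for_review(scenario, proposed, confidence):
        # ...while adding the unresolved case to the ever-growing scenario
        # database that informs future robotic decision-making.
        record = {"time": time.time(), "scenario": scenario,
                  "proposed": proposed, "confidence": confidence}
        with open("scenario_log.jsonl", "a") as f:
            f.write(json.dumps(record) + "\n")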

But getting it right will be a matter of life and death.

There are some interesting articles on the subject here, here, and here, and lots of other reading available with any Google search (as Google’s computer algorithms decide what you’re really asking!).