In conjunction with Foresight Update 52
Progress in Thinking Machines
by J. Storrs Hall, PhD.
Research Fellow, Institute for Molecular Manufacturing
Back in 2000, partly as a response to the infamous screed by Bill Joy, I wrote an essay entitled “Ethics for Machines”. Some of you may have read it then; if not, you can find it at http://autogeny.org/ethics.html. In the essay I took more or less for granted that AI was coming, not just human-level but super-human in capability. I argued that such an intelligence needs to be judged in at least human terms: if it has a human level of understanding, abilities, and competence, but lacks both a sense of right and wrong and the links that tie such a sense to its motivations, it is a psychopath. It’s clearly a very bad idea to create such things.
One of the first polls taken on the Nanodot site (http://nanodot.org/) was about whether I should expand the essay into a book. (I think the result was in the affirmative.) I’ve been working on this between other projects since then and enough has happened, both on my part and in the field as a whole, to be worth a progress report.
First, evolutionary ethics: when I wrote the essay, the idea seemed somewhat novel, partly because I hadn’t yet done the research into it that I have since. What’s more, the field itself has moved forward considerably. I now have at least a foot of shelf space devoted to books purely on this subject, and more than half of them have publication dates beginning with “2”.
The point of worrying about just what morality is, exactly, has to do with the fact that you can’t just tack a set of formalized rules onto your robot’s head and expect it to do any good. We all know the difference between the letter and the spirit of the law. If the AIs wanted to do something different, being more intelligent than whoever wrote the rule in the first place, they’d find loopholes or reinterpretations. But if morality really is evolutionary, they’ll ultimately wind up inventing it for themselves, and should be smart enough not to throw away what we give them as a valuable first step. And good progress is definitely being made on that front.
Now for the intelligence part. One thing that is always a concern when writing about a subject one doesn’t understand is how much effort to put into the writing and how much into pushing the understanding. (A lot has been written about both morality and AI by people who seem to favor the “writing” side of the equation.) Of course, nobody really understands intelligence — indeed, Marvin Minsky has recently claimed that AI has been “brain dead” since the ’70s. (see http://www.wired.com/news/technology/0,1282,58714,00.html, “AI Founder Blasts Modern Research”)
What Minsky was referring to is the fact that early AI researchers were trying to find some core, general, common-sense intelligence, whereas in the meantime AI has split into a group of subfields such as speech, vision, learning, robotics, theorem proving, and so forth, each of which has developed a certain academic insularity and lost sight, to some extent, of the original goal. And it’s true that reading a typical AI textbook or conference proceedings gives you the feeling of listening to the blind men describing the elephant.
On the other hand, the progress in each of the subfields has been nothing short of spectacular. At last year’s AAAI conference, a robot found its way to the front desk, registered for the conference, went to the appropriate lecture hall, and delivered a paper. (see http://www.palantir.swarthmore.edu/GRACE/) Recently, statistics-based systems made headlines by demonstrating that they could learn to translate between languages in a couple of hours, simply by absorbing large corpora of parallel (pre-translated) texts — and doing better than human-programmed systems such as Babelfish. (http://www.iht.com/articles/104781.html) The latest speech recognition systems do a very good job of understanding spoken English, albeit only within well-circumscribed domains.
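To make the statistical idea concrete: the core trick is to estimate word-translation probabilities from nothing but aligned sentence pairs, with no hand-written grammar at all. The toy sketch below, in Python, follows the style of the classic IBM Model 1 expectation-maximization procedure on an invented three-sentence corpus; it illustrates the principle only, and is not the actual systems mentioned above.

    from collections import defaultdict

    # A tiny parallel corpus, invented purely for illustration.
    corpus = [
        (["the", "house"], ["la", "maison"]),
        (["the", "blue", "house"], ["la", "maison", "bleue"]),
        (["the", "flower"], ["la", "fleur"]),
    ]

    src_vocab = {w for s, _ in corpus for w in s}
    tgt_vocab = {w for _, t in corpus for w in t}

    # Start with uniform translation probabilities P(target word | source word).
    prob = {s: {t: 1.0 / len(tgt_vocab) for t in tgt_vocab} for s in src_vocab}

    for _ in range(10):  # a few EM iterations suffice on a toy corpus
        counts = defaultdict(lambda: defaultdict(float))
        totals = defaultdict(float)
        for src_sent, tgt_sent in corpus:
            for t in tgt_sent:
                norm = sum(prob[s][t] for s in src_sent)
                for s in src_sent:
                    frac = prob[s][t] / norm      # expected alignment count
                    counts[s][t] += frac
                    totals[s] += frac
        for s in src_vocab:                       # re-estimate the probabilities
            for t in tgt_vocab:
                prob[s][t] = counts[s][t] / totals[s]

    def best_translation(word):
        return max(prob[word], key=prob[word].get)

    print(best_translation("house"), best_translation("blue"), best_translation("flower"))
    # -> maison bleue fleur, learned from the aligned sentences alone

Nothing in that program knows any French; the pairings emerge purely from co-occurrence statistics, which is exactly what makes the approach scale with data rather than with programmer effort.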
All of these systems perform well, or well enough, in the specific domains where the authors put in the conceptual legwork. But still none exhibits common sense. Is common sense some really clever algorithm we just haven’t figured out yet? Or is it more like 20 years’ distilled experience (or 40, or 60)? Minsky points to Doug Lenat’s CYC project (http://www.cyc.com/) as an example of the kind of thing he’s looking for. CYC is a general encyclopedia, in logical form, encoding enough facts about the world to do certain kinds of common-sense understanding as logical inferences.
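To give a flavor of what that means in practice, here is a toy knowledge base in a dozen lines of Python: a handful of hand-entered facts plus two general rules, with common-sense conclusions drawn by naive forward-chaining inference. CYC’s actual representation language and inference engine are far richer than this; the sketch is only an illustration of the principle.

    # Hand-entered facts, in (relation, subject, object) form.
    facts = {
        ("isa", "Fido", "Dog"),
        ("isa", "Dog", "Mammal"),
        ("isa", "Mammal", "Animal"),
        ("has-part", "Mammal", "Heart"),
    }

    def infer(facts):
        """Naive forward chaining: apply two common-sense rules to a fixed point."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            new = set()
            for (r1, a, b) in derived:
                for (r2, c, d) in derived:
                    # Rule 1: isa is transitive (a Dog is a Mammal, so a Dog is an Animal).
                    if r1 == "isa" and r2 == "isa" and b == c:
                        new.add(("isa", a, d))
                    # Rule 2: parts are inherited down the isa hierarchy.
                    if r1 == "isa" and r2 == "has-part" and b == c:
                        new.add(("has-part", a, d))
            if not new <= derived:
                derived |= new
                changed = True
        return derived

    print(("has-part", "Fido", "Heart") in infer(facts))  # True: Fido has a heart

The conclusion that Fido has a heart was never entered by hand; it falls out of the facts plus the rules, which is the sense in which such a system “understands” something about dogs.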
But CYC was built by hand, one fact at a time. What’s needed, as mentioned above, is some way to get the facts into the database from experience. My talk at last year’s Foresight Gathering (see the bottom of http://www.foresight.org/SrAssoc/spring2002/index.html) explained the approach I was taking to that end, as well as how the resulting structure made for an overall system where components you might call a “moral sense” and a “conscience” fit in.
The system was based on what I called “sigma units” that could use memories, either specific or abstracted. A sigma unit searched the memory for a recollection of a situation that bore resemblances to the current one, and applied an operation I called “analogical quadrature” to elicit an action to perform in the current situation. In the talk, I passed lightly over the questions of how the memories got there and how they were represented — crucial questions both.
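For the curious, here is roughly the simplest form such a unit could take, assuming situations and actions are coded as numeric feature vectors — an assumption the talk did not commit to. The “analogical quadrature” step is deliberately reduced to the plainest thing that fits the description (carry the remembered situation-to-action offset over to the new situation), so take this as a sketch rather than a specification.

    import numpy as np

    class SigmaUnit:
        def __init__(self):
            self.memory = []  # list of (situation_vector, action_vector) pairs

        def remember(self, situation, action):
            self.memory.append((np.asarray(situation, float),
                                np.asarray(action, float)))

        def act(self, situation):
            situation = np.asarray(situation, float)
            # Retrieve the recollection that most resembles the current situation.
            past_situation, past_action = min(
                self.memory,
                key=lambda pair: np.linalg.norm(pair[0] - situation))
            # "Analogical quadrature", reduced to a parallelogram analogy:
            # past_situation is to past_action as the current situation is
            # to the returned action.
            return past_action + (situation - past_situation)

    unit = SigmaUnit()
    unit.remember(situation=[0.0, 1.0], action=[1.0, 1.0])
    unit.remember(situation=[5.0, 5.0], action=[5.0, 6.0])
    print(unit.act([0.5, 1.0]))  # nearest memory is the first; roughly [1.5, 1.0]

Even in this stripped-down form the two hard problems are visible: where the contents of self.memory come from, and whether a flat vector is anywhere near a rich enough representation of a situation.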
In a counterpoint to the old joke, I’m gratified to be able to report that I’m still ignorant on those points, but I’m satisfied that I’m ignorant at a much lower level, about more detailed phenomena.
And the book should be out next year.
—Dr. J. Storrs Hall is an IMM Research Fellow. He can be reached at josh@imm.org.
IMM would appreciate learning your thoughts on the above article.