AI Expert Newsletter
AI - The art and science of making computers do interesting
things that are not in their nature.
This month, I've written another AI Alphabet. I hope - one of
the entries will explain this - that after reading it, you will
be thinking globally and breadth-first, not locally and depth-first.
Adi Shavit wrote to tell me about the LC++
library, developed by Brian McNamara and Yannis Smaragdakis,
at www.cc.gatech.edu/~yannis/lc++/. The library uses macros to enable
users to write in a Prolog-like syntax inside C++ programs. For
example, here are some fragments of code from the tutorial:
DECLARE(X, int, 10);
DECLARE(Y, int, 11);
string bart="bart", lisa="lisa", maggie="maggie", marge="marge";
lassert( parent(marge,bart) );
lassert( parent(marge,lisa) );
lassert( parent(marge,maggie) );
lassert( ancestor(Par,Kid,1) -= parent(Par,Kid) );
lassert( ancestor(Anc,Kid,X) -= parent(Anc,Tmp) &&
         ancestor(Tmp,Kid,Y) && X.is(plus,Y,1) );
iquery( ancestor(Anc,bart,X) );
These declare the functors
and some logical variables and constants, assert a few rules, and
then do a query.
The site includes a detailed paper
about how LC++ is implemented.
"Proactive". "Synergetic". "Restructuring". "Leveraged"... And
"Agent"? Does it mean anything, or is it just another buzzword?
Franklin and Graesser apply linguistic philosophy to the word and
propose a definition.
This short entry on the Conscious Entities site introduces a paper
by Michael Anderson and Donald Perlis on brittleness, the inability
of robots and programs to cope with unexpected developments. They
mention a contestant in the DARPA Grand Challenge, where robots
had to negotiate their way through a real-world journey. This one
drove into a fence it could not see, and then continued trying to
move forward for the rest of the contest. The robot was brittle.
Sadly, the fence was not.
Making AI fun, exciting and interesting.
Suppose I am an AI agent in a video game: I have to shoot and
disable a swarm of killer bees. Had I been designed around the principles
of Good Old-Fashioned AI, the core of my cognition would be a propositional
world-model continuously updated by interpreting sensory data, and
I would represent each bee as a unique object within a world-centered coordinate frame.
I see a bee. Sense data indicates I haven't seen it before.
So I'll give it a new ID:
see( me, bee-12374 ). is_yellow( bee-12374 ). is_flying( bee-12374 ).
I turn my head. Then I turn back. I see a bee.
Is this new image I see also
bee-12374, or is it a
different object to which I must allocate an ID?
And if the latter, how long must I retain my knowledge about bee-12374?
In contrast, deictic representations try to avoid problems of
instance identification and world-model update by being relative
to the agent and its intentions: this bee in front of me, which
I am aiming at and intend to kill. The paper explains deictic representations,
with references to important prior research, and proposes how the
brain might use body movements such as changes in gaze position
as deictic pointers.
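The contrast can be sketched in a few lines of Python (my sketch, not the paper's): a GOFAI-style world model must solve instance identification on every percept, while a deictic agent just rebinds a small set of agent-relative slots.

```python
import itertools

class WorldModel:
    """GOFAI style: every percept must be matched against all known objects,
    or else allocated a fresh world-centred identity."""
    def __init__(self):
        self.counter = itertools.count(1)
        self.objects = {}                        # id -> last known percept

    def observe(self, percept):
        for oid, known in self.objects.items():
            if known == percept:                 # the hard part: same bee or not?
                return oid
        oid = "bee-%d" % next(self.counter)
        self.objects[oid] = percept
        return oid

class DeicticAgent:
    """Deictic style: no global identities, just a slot bound to whatever
    the agent is currently attending to."""
    def __init__(self):
        self.the_bee_i_am_aiming_at = None

    def attend(self, percept):
        self.the_bee_i_am_aiming_at = percept    # rebinding, not re-identification
        return self.the_bee_i_am_aiming_at
```

The world model's `observe` embodies exactly the turn-my-head-and-turn-back problem: its equality test for "same object" is where all the difficulty hides.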
Many people will know of Norman from his book The Design of
Everyday Things. (The cover of one edition depicts a coffee
pot with its spout on the same side as its handle.) In this Ubiquity
interview on his The Future of Everyday Things, Norman explains
why we should pay more attention to fun:
Norman: If you make something more pleasant, it's easier to
use. (My first article on this subject is going to be published
in the July/August 2002 issue of ACM's Interactions magazine.)
The usability community has not paid enough attention to beauty,
to fun, or to pleasure. I'd like to change that. The theme of
affect and emotion is growing so much within me that I'm considering
changing the title and focus of the book to "Emotion and Design."
Ubiquity: Incidentally, your own Web site, http://www.jnd.org/,
is very nice, especially the trailing balls that follow cursor
movement on the "Gratuitous Graphics" page. Do many Web sites
achieve the right balance of fun and seriousness?
Norman: Not enough. For a lot of Web sites, it's all or none.
It's rare to find a Web site that's really fun.
Emotion may be vital to AI programs as well as to their users.
Without it, they may be unable to adapt correctly to threats and
opportunities. As Norman explains,
the brain changes when we are happy, making pleasant
objects easier to use. We are global, breadth-first thinkers when
happy, local, depth-first thinkers when stressed. Affect is truly
an important factor in how we live in the world. There are a lot
of exciting new findings. I want to bring them to the attention
of designers - and engineers who build large, complex, autonomous systems.
Giving a short explanation of the frame problem, this is one of
many pages to quote the fable of R2D2 from Daniel Dennett's classic
Cognitive wheels: The frame problem of AI:
Once upon a time there was a robot, named R1 by its creators.
Its only task was to fend for itself. One day its designers arranged
for it to learn that its spare battery, its precious energy supply,
was locked in a room with a time bomb set to go off soon. R1 located
the room, and the key to the door, and formulated a plan to rescue
its battery. There was a wagon in the room, and the battery was
on the wagon, and R1 hypothesized that a certain action which
it called PULLOUT(WAGON, ROOM) would result in the battery being
removed from the room. Straightaway it acted, and did succeed
in getting the battery out of the room before the bomb went off.
Unfortunately, however, the bomb was also on the wagon. R1 knew
that the bomb was on the wagon in the room, but didn't realize
that pulling the wagon would bring the bomb out along with the
battery. Poor R1 had missed that obvious implication of its planned act.
Back to the drawing board. "The solution is obvious," said the
designers. "Our next robot must be made to recognize not just
the intended implications of its acts, but also the implications
about their side-effects, by deducing these implications from
the descriptions it uses in formulating its plans." They called
their next model, the robot-deducer, R1D1. They placed R1D1 in
much the same predicament that R1 had succumbed to, and as it
too hit upon the idea of PULLOUT(WAGON, ROOM) it began, as designed,
to consider the implications of such a course of action. It had
just finished deducing that pulling the wagon out of the room
would not change the colour of the room's walls, and was embarking
on a proof of the further implication that pulling the wagon out
would cause its wheels to turn more revolutions than there were
wheels on the wagon . . . when the bomb exploded.
Back to the drawing board. "We must teach it the difference
between relevant implications and irrelevant implications," said
the designers, "and teach it to ignore the irrelevant ones." So
they developed a method of tagging implications as either relevant
or irrelevant to the project at hand, and installed the method
in their next model, the robot-relevant-deducer, R2D1 for short.
When they subjected R2D1 to the test that had so unequivocally
selected its ancestors for extinction, they were surprised to
see it sitting, Hamlet-like, outside the room containing the ticking
bomb, the native hue of its resolution sicklied o'er with the
pale cast of thought, as Shakespeare (and more recently Fodor)
has aptly put it. "Do something!" they yelled at it. "I am," it
retorted. "I'm busily ignoring some thousands of implications
I have determined to be irrelevant. Just as soon as I find an
irrelevant implication, I put it on the list of those I must ignore,
and . . ." the bomb went off.
"Natural selection is a mechanism for generating an exceedingly
high degree of improbability". So runs Ronald Fisher's quote, heading
this resource page for programs that work improbably well in finance
- for example, Evolution
of trading rules for the FX [foreign exchange] market, or, how to
make money out of genetic programming. For those wanting
to go back to the start of it all, the page links to Origin
of Species and The
Descent of Man.
Here's another AAAI page, a subtopic of their Resources
for Students. Tips and suggestions deal with deciding on a topic
and searching for articles, as well as typical questions: "could
you please give me as much information on AI as possible"; "please
send me any information that you have on Artificial Intelligence
about the way it will affect the future"; "what are some threats
and opportunities concerning artificial intelligence?"
In a robot arm, inverse kinematics is the problem of calculating
what angles the joints need to be in order to get the end of the
arm - the "hand" - to a desired position. This is a rather useful
thing to know if the robot wants to reach towards and grab something.
Forward kinematics - finding the hand position from the joint angles
- is easy. Inverse kinematics is not, which is why a lot has been
written about it, such as the nice article by Hugo. In
Virtual Actors Who Can Really Act, Ken Perlin uses inverse kinematics
to make game characters more plausible. Too many game characters
move in exactly the same way whatever their psychological state.
But as any animator knows, movement depends on mood; Wile E. Coyote
may bound along expectantly as he's about to unwrap his new ACME
Make-Your-Own Tornados Kit, but he'll move rather differently when
the tornados turn and come after him. Perlin's work involves fine-tuning
or "shading" body movements to convey such subtleties. He does a
lot of other things too, as the applets on his home
page show. There's some amazing stuff there.
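To make the forward/inverse asymmetry concrete, here is the simplest interesting case, a two-link planar arm (my own sketch, not taken from either article): forward kinematics is two lines of trigonometry, while the inverse needs the law of cosines and has to cope with unreachable targets and multiple solutions.

```python
import math

def forward(l1, l2, t1, t2):
    """Hand position from joint angles: shoulder angle t1, elbow angle t2."""
    x = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
    y = l1 * math.sin(t1) + l2 * math.sin(t1 + t2)
    return x, y

def inverse(l1, l2, x, y):
    """One of the (generally two) joint solutions reaching (x, y), or None."""
    d2 = x * x + y * y
    c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)   # law of cosines
    if not -1.0 <= c2 <= 1.0:
        return None                                  # target out of reach
    t2 = math.acos(c2)                               # "elbow-down" branch
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2
```

Even here the inverse is the awkward direction: a second "elbow-up" solution exists (negate `t2`), and a real arm must also respect joint limits, which this sketch ignores.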
Prolog in JavaScript - Learning and Research Technology,
ioctl.org/logic/jsprolog-history.
Amongst the links on Jan's home page
is all you need to run a successful Computer Science degree course.
The man gives an exceptional demonstration with a Prolog you can
run in your browser. It includes a quicksort: type
qsort( [one,two,three,four,five,six,seven,eight,nine], Sorted ).
and back come the atoms sorted by name:
Sorted = [eight, five, four, nine, one, seven, six, three, two]
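That alphabetical ordering is easy to check against a quicksort in any language; here is a throwaway Python version of the same idea (list-based, first element as pivot - nothing specific to Jan's implementation):

```python
def qsort(atoms):
    """Naive quicksort: take the first element as pivot, partition, recurse."""
    if not atoms:
        return []
    pivot, rest = atoms[0], atoms[1:]
    return (qsort([a for a in rest if a < pivot])
            + [pivot]
            + qsort([a for a in rest if a >= pivot]))

print(qsort(["one", "two", "three", "four", "five",
             "six", "seven", "eight", "nine"]))
# → ['eight', 'five', 'four', 'nine', 'one', 'seven', 'six', 'three', 'two']
```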
Koans, collected by Brewster Kahle, rpcp.mit.edu/~gingold/random/koans.html.
In the days when Sussman was a novice, Minsky once came
to him as he sat hacking at the PDP-6. "What are you doing?" asked Minsky.
"I am training a randomly wired neural net to play Tic-Tac-Toe."
"Why is the net wired randomly?" asked Minsky.
"I do not want it to have any preconceptions of how to play."
Minsky shut his eyes.
"Why do you close your eyes?" Sussman asked his teacher.
"So the room will be empty."
At that moment, Sussman was enlightened.
John von Neumann said that "life is a process which may be abstracted
from other media". If this is so, then just as we can study software
independently of the hardware that runs it, we can study life independently
of the matter that embodies it. One big question is how self-replicating
molecules arose. Did it happen all at once, or over several stages?
Is it logically necessary that if self-replicators arose elsewhere,
they would do so in the "same" way, and if so, how can we characterise
the kind of organisation involved? Artificial chemistry tries to
answer such questions by setting up mathematical models of chemical
reactions, keeping those properties needed to explain biological
organisation while throwing away the rest.
One of the first artificial chemistry models, that of Fontana
and Buss, was based on λ-calculus. Chemical reactions combine
molecules to form new molecules, and we normally think of molecules
as objects. But molecules are also functions: if given alcohol as
an argument, water just mixes with it, but to sodium it does something
more ferocious. (Certain chemists of my acquaintance used to dispose
of sodium waste from their organic syntheses by tossing it into
the Cherwell, presumably to frighten passing punts.) So we want
a system whose elements are objects and at the same time functions
that can act on these objects; which is what λ-calculus provides.
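Fontana and Buss's system used full λ-calculus, with collisions reduced to normal form; the flavour of "molecules as functions" can be had from a toy Python sketch (my own, and drastically simplified - no normalisation, just application):

```python
# Molecules are unary functions; a collision applies one to the other,
# and the result is a new molecule that joins the soup.
I = lambda x: x                  # identity combinator
K = lambda x: (lambda y: x)      # builds constant functions

def react(a, b):
    """One collision: molecule a acts on molecule b, producing a new molecule."""
    return a(b)

# K applied to I yields a molecule that ignores its argument and returns I:
always_I = react(K, I)
```

The point of the original model is that such a soup, iterated, can develop self-maintaining sets of functions - "organisations" - which is the property the biology question needs.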
In 1999, MIT Provost Robert Brown asked a committee of faculty,
students, and administrators to consider how the Internet would
affect education, and how MIT should respond. The committee recommended
that MIT should simply give its course materials away. With funding
from the Andrew W. Mellon Foundation and the William and Flora Hewlett
Foundation, the result was MIT's OpenCourseWare programme. To quote
from one of many testimonials, from Maruf Muqtadir studying in Bangladesh:
Your OpenCourseWare is an amazing and remarkable step!
I am currently a student of computer science at BRAC University
of Dhaka, Bangladesh, and I find it very much useful to learn about
my courses. I have always had a dream to study at MIT, since I came
to know about the institution, its unique teaching methods, but
for many reasons I am not able to do so. This initiative gives me
the opportunity to self-teach myself.
Amongst the AI-related courses are Maths, Linguistics and Philosophy,
Brain and Cognitive Sciences, and Electrical Engineering and Computer
Science - including a downloadable
textbook on inventions and patents, with a section on the future
of American patents. There's much else too - Japanese language,
sailing yacht design, US military budget and force planning - and
more is to be added.
NetLogo, which my social-simulationist friend Edmund
Chattoe recommended to me, is a free Logo designed for modelling
systems containing very large numbers of similar agents. It has
a lot of users and a lot of models built in it, and an active discussion
list where new users can get advice.
I think that I shall never see
A matrix lovely as a tree.
Trees are fifty times as fun
As structures a la PL/I.
Imagine you are an engineering student at an Oxford college (I
mention no names) which admits students good either at engineering
- the Gnomes
- or at rugby - the Hearties.
Suppose there's no correlation at all between being good at rugby
and being good at engineering. However, as a Gnome, you are naturally
biased against your beer-swilling rugby-playing fellows, and you
decide to seek proof that their brains are abnormally small. You
wander the grounds with your laptop and stats software, note the
engineering and rugby ability of each co-student you pass, and then
press "Compute Correlation". And bingo! You will see a negative
correlation coefficient. (You can see why if you draw a scatterplot
for all the students who applied to the college, chop out
the area for those not admitted, and then consider the regression
line for those left on the plot.)
This is Berkson's paradox, also known as "explaining away", and
it's one of many topics discussed in Murphy's excellent paper on
inference and learning with Bayesian networks, which also relates
them to Kalman filters, Hidden Markov Models, and a number of other
models. Bayesian networks have become popular in AI - as one researcher
has said, they offer an efficient way to deal with the lack or ambiguity
of information that has hampered previous systems, and provide an
overarching graphical framework that brings together diverse elements
of AI, increasing its range of application to the real world. Murphy,
writing in 1998, says that the most widely used Bayes Nets are those
embedded in Microsoft's products, including the Answer Wizard of
Office 95, the Paperclip, and over 30 Technical Support Troubleshooters.
Nowadays, they are being used in spam filtering.
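The Gnomes-and-Hearties effect is easy to reproduce numerically. In this sketch (my own, not Murphy's), engineering and rugby ability are independent standard normals, the college admits anyone good at either, and the correlation among those admitted comes out clearly negative:

```python
import random

def pearson(pairs):
    """Sample correlation coefficient of a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)
    sxx = sum((x - mx) ** 2 for x, _ in pairs)
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / (sxx * syy) ** 0.5

def berkson_demo(n=20000, cutoff=1.0, seed=42):
    """Independent abilities plus selective admission => negative correlation."""
    rng = random.Random(seed)
    applicants = ((rng.gauss(0, 1), rng.gauss(0, 1)) for _ in range(n))
    admitted = [(e, r) for e, r in applicants
                if e > cutoff or r > cutoff]     # admit if good at either
    return pearson(admitted)
```

Conditioning on the admission rule is what "explains away": among the admitted, knowing someone is a fine engineer makes rugby skill a less likely explanation of their presence.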
I wonder whether Bullfighter,
Deloitte's freeware program for detecting buzzwords, uses Bayesian
filtering? Deloitte discovered
a direct linkage between clear business talk and good business performance.
In examining Enron's communications during its last three years,
they found that as Enron began to sink, its press releases, financial
reports, letters to shareholders, and speeches by top executives,
became increasingly laden with ambiguous words and sentences.
"Qualia" is the philosophical term for subjective experiences:
seeing red, feeling the sting of a wasp, tasting horseradish sauce.
The Sony Qualia
Man page claims that Sony take a close interest in qualia -
as perhaps should any entertainment company - and that Ken Mogi,
leader of their qualia project, has published a Qualia
Manifesto calling for more of the things.
Many philosophers say qualia are the most important problem for
the philosophy of mind. Why should I believe that you have subjective
experiences? If you do, are they anything like mine, or could it
be that whenever I see red, you see green? Will Artificial Intelligences
have subjective experiences? If so, can we predict them from the
nature of their programs? David Chalmers, who also wrote the Matrix
as Metaphysics feature referenced in my Where am I?
entry, discusses these problems in Absent
Qualia, Fading Qualia, Dancing Qualia.
I predict that if we can ever know an AI's qualia, it won't be
long before someone publishes a book on How to Make Your Computer
A recent entry in this excellent collection is Toddlers
Sing With RUBI - two robots at UCSD are attending nursery school
to teach songs, colors and shapes to one- and two-year old children.
QRIO (for "Quest for Curiosity") from Sony, and RUBI (for "Robot
Using Bayesian Inference"), developed at the Machine Perception
Laboratory of UCSD, are there to study the uses of interactive computers
for early childhood education.
Another entry recalls the rat-brained
fighter pilot I featured last December. According to Lab
Cultures Used to Create a Robotic 'Semi-Living Artist', researchers
at the University of Western Australia and the Georgia Institute
of Technology have created a new class of creative beings - a picture-drawing
robot in Perth, Australia whose movements are controlled by the
brain signals of cultured rat cells in Atlanta. They call it the 'semi-living artist'.
Swarm Intelligence uses many simple agents to generate useful
global behaviour via local interactions, no central controller needed.
This site links to researchers, papers, software, and conferences.
An interview with Eric Bonabeau, who has applied swarm intelligence
to routing in telecoms systems, remembers:
As a kid I'd always been terrified of insects. I remember
with retrospective anguish my holidays in the south of France, when
picnics turned into nightmarish fights against carnivorous wasps
and ferocious ants raiding my sandwich. Sometimes I wonder how on
earth I could dedicate eight years of my life to social insects.
This large scale psychoanalytic phase transition took place in the
early 1990's in Santa Fe, at the foot of the Rocky Mountains, the
southernmost city before the New Mexican desert takes over. As a
France Telecom R&D engineer, I was an unlikely candidate for
such a radical transformation.
He continues by explaining that although one social insect may
not be capable of much, a colony can achieve great things. A colony
of ants can collectively find out where the nearest and richest
food source is located, although no individual ant knows. If a food
source is put near an ant nest, separated from it by a bridge with
two branches, the colony is most likely to find the shorter branch.
By laying and following pheromone trails, the ants perform an emergent
computation, a route optimisation.
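The double-bridge experiment has a very small mean-field model: each branch's pheromone evaporates a little per step and receives a deposit inversely proportional to the branch's length, weighted by the fraction of ants currently choosing it. This Python sketch (mine, not from the interview) converges on the short branch:

```python
def double_bridge(steps=200, short_len=1.0, long_len=2.0, evap=0.05):
    """Mean-field double-bridge: ants split between branches in proportion
    to pheromone; the shorter branch earns larger deposits, so positive
    feedback makes the colony converge on it."""
    pher = {"short": 1.0, "long": 1.0}
    length = {"short": short_len, "long": long_len}
    for _ in range(steps):
        total = pher["short"] + pher["long"]
        for b in pher:
            share = pher[b] / total                      # fraction choosing b
            pher[b] = (1 - evap) * pher[b] + share / length[b]
    return pher
```

No ant compares the branches; the comparison is done by the feedback loop, which is the sense in which the computation is emergent.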
Participants in the original Turing test had to converse about
topics such as arithmetic, weather and poetry; Shrager's computer
must match a human in discussing photosynthesis. Amongst the links
on this page are two excellent sites about the original Turing test.
Ubiquity is ACM's Web-based magazine, dedicated to fostering
critical analysis and in-depth commentary, including book reviews,
on issues relating to the nature, constitution, structure, science,
engineering, cognition, technology, practices and paradigms of the
IT profession. It published the interview with Donald Norman which
I mentioned under E; and here is an interview with Stuart Russell,
co-author with Peter Norvig of Artificial Intelligence: A Modern
Approach. He has a crisp definition of AI:
An intelligent system is one whose expected utility is
the highest that can be achieved by any system with the same computational resources.
This is a nice paper on a course where final-year undergraduates
at Bradford University were taught to build AIs for the real-time
Artificial Life environment Terrarium Academic and the board game
Virus. The authors aren't the first to teach via games - a famous
example during the expert systems boom was Truckin',
a game developed by Mark Stefik and others at Xerox Parc for teaching
LOOPS. Indeed, I've done this too, with Traveller.
So I'm not surprised to read that the Bradford students very much
liked this style of Artificial Intelligence teaching, and that the
authors hope to make freely available the clients and servers they
built to enable different AIs to compete.
Chalmers wrote this paper for the philosophy section of the official
Matrix website. As such, although most is intended for readers with
no background in philosophy, it's a serious work, relevant to central
issues in epistemology, metaphysics, and the philosophy of mind.
XSLT is the language developed
for transforming XML documents into other XML documents. It's an
interesting language, being entirely functional; and although we
often think of XML as representing text, it can in fact represent
general trees, so XSLT is a tree-transformation language. Ben-Kiki's
posting links to an XSLT program for solving the N-queens problem
(place N chess queens on an N × N square board so no queen
threatens another); follow-ups suggest how XSLT could be improved.
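For comparison with the XSLT version, the same N-queens search fits in a few lines of a conventional functional-ish style (plain Python here; nothing below is from Ben-Kiki's program):

```python
def queens(n, placed=()):
    """Yield solutions as tuples: placed[row] is the column of the queen
    in that row. Rows are filled top to bottom, backtracking via recursion."""
    row = len(placed)
    if row == n:
        yield placed
        return
    for col in range(n):
        if all(col != c and abs(col - c) != row - r      # no shared column
               for r, c in enumerate(placed)):           # or diagonal
            yield from queens(n, placed + (col,))
```

Representing a solution as row-to-column assignments makes the row constraint free, which is the usual trick; the XSLT solution has to encode the same backtracking with templates and recursion instead of a generator.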
There aren't too many AI-related words beginning with Y, so this
letter called for a bit of searching. Simulated
Sailing reviews sailing simulators. As Posey
Yacht Design's pages explain - the review rated their Tactics
and Strategy Simulator highly - even a stupid simulated opponent
might have basic collision avoidance. But more intelligence is needed
to handle matters such as getting clear air at the start, maintaining
it against interference from other boats, and balancing against
expected shifts in the wind.
Piss up at a yacht club comes from a slide presentation on constraint
programming, using as an example a multi-boat party subject to complicated
constraints on how crews circulate between boats. The objective
is to minimise the number of boats. The slides stress how important
it is to find the right formulation for constraint problems: increasing
the search space may actually speed up search, if it reduces the
number of variables and propagates constraints sooner.
With contents including links to Agent-Based
Computational Economics, the International
Society of Artificial Life, and Alastair
Channon's Evolutionary Emergence of Intelligent Behaviours,
not to mention Algorithmic Chemistry, this is an
excellent starting point for Artificial Life explorations. Though
if you believe the statistical argument in Are
You Living in a Computer Simulation?, we're probably already living in one.
Getting back to the history lesson, the prospects for the decade
look mostly medical. Progress is expected to speed up shortly,
as the fundamental patents in genomic engineering begin to expire:
the Free Chromosome Foundation has already published a manifesto
calling for the creation of an intellectual-property free genome
with improved replacements for all commonly defective exons.
Experiments in digitizing and running neural wetware under emulation
are well-established; some radical libertarians claim that as
the technology matures, death - with its draconian curtailment
of property and voting rights - will become the biggest civil
rights issue of all.
Some commodities are expensive: the price of crude oil has broken
sixty euros a barrel and is edging inexorably up. Other commodities
are cheap: computers, for example - hobbyists print off weird
new processor architectures on their home inkjets; middle aged
folks wipe their backsides with diagnostic paper that can tell
how their VHDL levels are tending.
The latest casualties of the march of technological progress
are: the high street clothes shop, the flushing water closet,
the Main Battle Tank, and the first-generation of quantum computers.
New with the decade are cheap enhanced immune systems, brain implants
that hook right into the Chomsky organ and talk to you
using your own inner voice, and widespread public paranoia about
limbic spam. Nanotechnology has shattered into a dozen disjoint
disciplines, and skeptics are predicting that it will all peter
out before long. Philosophers have ceded qualia to engineers,
and the current difficult problem in AI is getting software to experience embarrassment.
Fusion power is still, of course, fifty years away.
Quoted from Tourist by Charles Stross: originally published
in Isaac Asimov's Science Fiction Magazine for February 2002,
republished 2005 in his novel Accelerando, and downloadable
under the Creative
Commons License from www.accelerando.org/.
Past newsletters are available at www.ddj.com.
As ever, interesting links and ideas for future issues are very
welcome. Feel free to contact either myself (below) or Jocelyn <email@example.com>
with comments, thoughts and suggestions.
Until next month,
Copyright ©2005 Amzi! inc., CMP, and Jocelyn Paine. All Rights Reserved.