AI Expert Newsletter
AI - The art and science of making computers do interesting
things that are not in their nature.
In our March
issue, I explored some of the neural-net applets around the
Web. One, intriguing because of the slowly growing network of lime
green dots which writhes around inside as if struggling to escape,
was the DemoGNG
applet for competitive learning, written by Hartmut Loos and
Bernd Fritzke at Bochum University. The applet says it implements
"neural gases" and "growing neural gases"; since I'd not come across
these before, I decided to write about them for this issue. There
are also a few items vaguely connected with Lisp, some fun, some
serious; and - inspired by a British comic song - a meander through
qualitative reasoning, cognitive models in education, and economics.
As ever, we welcome news, views, and comments.
So what's a neural gas? The DemoGNG applet page mentioned above
links to the applet source code and documentation, to a user
guide, and to the authors' report Some
Competitive Learning Methods. This explains that a neural
gas is a particular kind of competitive learner. In general, competitive
learning networks learn to cluster or classify their inputs, with
different output nodes representing different clusters or classes.
The outputs compete for inputs to represent. Kohonen self-organising
feature maps are a well-known example. (As the comp.ai.neural-nets
FAQ explains in How
many kinds of Kohonen networks exist?, there are several
different kinds, with subtleties that are often ignored.)
Sometimes when using competitive learners for clustering, we want
the clusters for their own sake, to tell us about the structure
of the data. Sometimes we want to reduce each cluster to a representative
element, as in signal quantisation. This first trains the learner
on a sample of typical input signals, giving a set of clusters.
Hopefully, the number of clusters is much less than the number of
possible signals. We then allocate each cluster a number, and compress
future data by mapping each signal to the number of the cluster
it falls in, transmitting the number in place of the signal. And
sometimes, we want information about the connectivity between clusters.
At the end of this article,
there's a nifty example, skeletonising images of bacterial colonies
to line segments. For more information on competitive learners,
see the Some Competitive Learning Methods report: it's a
nice comparison of competitive learners, using uniform terminology
throughout, and not limited to neural gases.
Let's now turn from competitive learners in general to neural
gases in particular, and a demonstration.
Start by loading DemoGNG.
It has a lot of controls, of which I'll use probability distribution
(just above bottom left); max nodes (to its right); start, stop,
and reset; and the network model menu (at the top). Note that after
a run, you need to press stop, and then reset to discard the current
network and make a new one. There's a speed control on the right, which
you may want to change to suit your browser and computer.
Change the probability distribution to Rectangular. This draws
a pale blue rectangle depicting the space of possible signals. On
starting a run, the applet will create a network, and then train
it by repeatedly feeding it two-dimensional input vectors which
have an equal chance of lying anywhere within the rectangle, i.e.
a uniform probability distribution.
Check that the network model menu is set to its default of "Growing
Neural Gas". Then press start, and see the network grow to the maximum
number of nodes. These will be roughly evenly distributed within the rectangle.
Next, start another run in the rectangle, but before the run ends,
change probability distribution to Ring. The network will flow and
deform to redistribute its nodes evenly within the ring - a good
example because it's highly symmetric. It's fun to see how nodes
just inside the hole move toward and link to those in the body of
the ring, breaking connections with their old neighbours. It reminds
me of how new friendships form and old ones dissolve as one moves
from one city to another.
You can move nodes yourself by left-clicking and dragging them.
You can also move the entire signal distribution: Change probability
distribution to Right Mouse. This will draw a small rectangle for
the input space. Pressing the right mouse button will jump it to
a new position; you can then release the button to leave it there,
or drag and then release. This is a nice way to see how the network
adapts to fast and slow changes in its inputs.
Now we've demonstrated growing neural gases, let's see how they
work. We'll start with neural gases, because growing neural gases
combine these with other components. You can get a feel for what
a neural gas does and how fast it does it from the applet. From
the network model menu, select Neural Gas. Change probability distribution
to Rectangular, and max nodes to 10. If you just ended a run, press
reset to generate a new network. Press start, and watch the nodes
spread out evenly. Then try with 50 nodes.
Here's the essence of the algorithm:
Initialize the network to contain N randomly distributed nodes.
Take the next input, i.e. a vector, which in this
demo is two-dimensional.
Calculate each node's distance from the input.
Move the node nearest to the input - the winning node -
still nearer it.
Move the second-nearest node nearer the input too,
but not so far.
Repeat with all the other nodes. As with the nearest
and second-nearest, they are moved less far as their
distance from the input increases.
And repeat the loop with the next input.
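To pin the loop down, here is a minimal Prolog sketch of a single adaptation step. It's my own sketch, not the applet's Java; dist2/3 is the squared-distance predicate from the quantisation sketch earlier, and Epsilon and Lambda are the current, already-decayed, parameter values:

% one adaptation step: rank all nodes by distance to the input, then
% move the K-th nearest towards it by a fraction Epsilon * exp(-K/Lambda)
adapt_step( Nodes, Input, Epsilon, Lambda, NewNodes ) :-
    findall( D-N, ( member( N, Nodes ), dist2( N, Input, D ) ), Pairs ),
    msort( Pairs, Ranked ),              % nearest node first
    move_ranked( Ranked, Input, Epsilon, Lambda, 0, NewNodes ).

move_ranked( [], _, _, _, _, [] ).
move_ranked( [_-N|Rest], Input, Epsilon, Lambda, K, [N1|Rest1] ) :-
    Fraction is Epsilon * exp( -K / Lambda ),
    move_towards( N, Input, Fraction, N1 ),
    K1 is K + 1,
    move_ranked( Rest, Input, Epsilon, Lambda, K1, Rest1 ).

% move a node (a list of coordinates) towards the input by a fraction F
move_towards( [], [], _, [] ).
move_towards( [W|Ws], [X|Xs], F, [W1|W1s] ) :-
    W1 is W + F * (X - W),
    move_towards( Ws, Xs, F, W1s ).

For example, adapt_step( [[0.1,0.2],[0.9,0.9]], [0.5,0.5], 0.3, 1.0, New ) moves the nearer node three-tenths of the way towards the input, and the farther node exp(-1), roughly a third, as much.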
The complete algorithm as written in the Neural
Gas page of Some Competitive Learning Methods contains
calculations not shown above. These determine how far each node
moves towards the input, and how the movement depends on time. The
distance moved is a product of three factors. The first, the ε
term, decreases with time, and gives a kind of annealing, so that
movements are large when finding an initial solution, but get smaller
as the net refines it. The second, the hλ term, decreases
with time and also with how far a node is from winning. (A quick
qualitative-reasoning check on whether factors increase or decrease
with their arguments helps when reading these algorithms, to avoid
getting lost in details.) The third factor is simply the vector
distance between the node and the input. Note that the report sometimes
uses the subscript i to mean "initial", as well
as an index.
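Written out, the movement each node receives is therefore roughly

  Δw = ε(t) × hλ(k) × (ξ - w),   with hλ(k) = exp(-k/λ(t)),

where w is the node's position, ξ the input, and k the node's rank in order of distance from ξ (0 for the winner). Both ε(t) and λ(t) decay from an initial to a final value over the run; the report's decay schedules are exponential, of the general form ε(t) = εi(εf/εi)^(t/tmax).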
The algorithm doesn't come with much explanation, but there's
a readable introduction in Chapter 2 of Growing
Neural Gas. Experiments with GNG, GNG with Utility and Supervised
GNG, a Master's thesis by Jim Holmström, Uppsala. Some
notation, such as the symbol for an input vector, differs between
this and Some Competitive Learning Methods. It's worth exploring
how the parameters in the bottom line of menus affect learning;
the course notes on Classification
and Clustering Using Artificial Neural Networks by Hamed
Muhammed at Uppsala suggest some experiments.
The growing neural gas algorithm is described on the Growing
Neural Gas page of Some Competitive Learning Methods.
It starts with two nodes and grows its network a node at a time,
using links generated by competitive Hebbian learning to guide their
placement. Since these links weren't part of the plain neural gas, let's look at
competitive Hebbian learning first.
Competitive Hebbian learning, in this sense, trains a set of nodes
on a sequence of inputs. Unlike in the neural gas, the nodes don't
move; the goal is for them to grow links to one another as learning
proceeds. A link between two nodes means the nodes are close together
- neighbours - relative to the average spacing of the inputs. Links
only grow between nodes in regions where inputs occur, so reflect
the distribution of inputs. The mathematics behind this, cited on the
report's Competitive Hebbian Learning page, and described in Topology
representing networks by Thomas Martinetz and Klaus Schulten,
is specialised; but you can get a feel for where links place themselves
by using the applet. Set the network model to Competitive Hebbian
Learning, and then run it on various probability distributions,
remembering that inputs are only generated in the pale blue areas.
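The rule itself, as I understand it, is very simple: for each input, find the nearest and second-nearest nodes and connect them. Here's a minimal Prolog sketch (mine, not the applet's code; dist2/3 as in the earlier sketches):

% one competitive Hebbian learning step: link the two nodes nearest the
% input. Nodes are Id-Position pairs; Edges is a list of Id1-Id2 pairs.
chl_step( Nodes, Input, Edges0, Edges ) :-
    findall( D-Id, ( member( Id-Pos, Nodes ), dist2( Pos, Input, D ) ), Pairs ),
    msort( Pairs, [_-Nearest, _-Second | _] ),
    (   ( member( Nearest-Second, Edges0 ) ; member( Second-Nearest, Edges0 ) )
    ->  Edges = Edges0                        % already linked
    ;   Edges = [Nearest-Second | Edges0]     % grow a new link
    ).

Run over many inputs, only pairs of nodes that are repeatedly each other's two nearest acquire links, which is why the links end up tracing the shape of the input distribution.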
Growing neural gases, as I said, combine the neural gas algorithm
with competitive Hebbian learning, in a network which grows a node
at a time. Where to put this new node is decided by looking at the
existing nodes' error fields. Each node has an error field which sums
the squares of its distance from the inputs. Nodes with larger error
fields will be covering larger regions of the input space, so it's
appropriate to refine this coverage by inserting the new node nearby.
(Intuitively speaking, the more inputs that occur in a region, the
more attention we should pay it, and the smaller are the subregions
we should divide it into when classifying inputs.)
As well as inserting new nodes, we move existing ones each time
the network receives an input; but instead of moving all of them
as in a plain neural gas, we move only two. One of these is the
winner, i.e. the one nearest the input; the other is the one of
its neighbours (i.e. the nodes linked to it) closest to the input.
When nodes move, they drag their links with them. The growing
neural gas has an aging mechanism to remove old links and nodes
that no longer reflect the distribution of inputs.
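To make the insertion step concrete, here's a sketch of how the new node's position might be chosen. It is based on my reading of the report and much simplified - in particular I've left out the error-decay bookkeeping that follows an insertion:

% Nodes is a list of node(Id, Position, Error); Edges a list of Id1-Id2 pairs.
% Insert a new node halfway between the node with the largest accumulated
% error and its largest-error neighbour, splitting the edge between them.
insert_node( Nodes, Edges, NewId, [node(NewId, NewPos, 0.0)|Nodes], NewEdges ) :-
    largest_error( Nodes, Q ),
    findall( N, neighbour( Q, N, Edges ), Neighbours ),
    largest_error_of( Neighbours, Nodes, F ),
    position( Q, Nodes, PQ ),
    position( F, Nodes, PF ),
    midpoint( PQ, PF, NewPos ),
    delete( Edges, Q-F, E1 ), delete( E1, F-Q, E2 ),
    NewEdges = [Q-NewId, NewId-F | E2].

neighbour( Q, N, Edges ) :- member( Q-N, Edges ) ; member( N-Q, Edges ).
position( Id, Nodes, Pos ) :- member( node(Id, Pos, _), Nodes ).
error_of( Id, Nodes, E ) :- member( node(Id, _, E), Nodes ).

largest_error( Nodes, Id ) :-
    findall( E-I, member( node(I, _, E), Nodes ), Pairs ),
    msort( Pairs, Sorted ), last( Sorted, _-Id ).

largest_error_of( Ids, Nodes, Id ) :-
    findall( E-I, ( member( I, Ids ), error_of( I, Nodes, E ) ), Pairs ),
    msort( Pairs, Sorted ), last( Sorted, _-Id ).

midpoint( [], [], [] ).
midpoint( [A|As], [B|Bs], [M|Ms] ) :- M is (A + B) / 2, midpoint( As, Bs, Ms ).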
Growing neural gases are an elegant idea, and tempting because
they don't need to know the input distribution. However, as ever
with machine learning, it's worth testing learning performance,
comparing with alternative methods, and looking at the statistical
basics. Let's now turn to applications.
If you're interested in neural gases, Google will turn up an assortment
of applications. One possibility, suggested in the Weblog of Semantic
Web researcher Leo
Sauermann, is analysing large semantic graphs. This would be
an interesting experiment, as would others using non-numeric inputs.
You'd need a measure of distance between inputs, but once that's
done, the algorithm would probably carry across without trouble.
The Neuromimetic intelligence team at INRIA are working on a project possibly
using neural gases (which they find better than growing neural gases)
for document classification.
I'll finish with an Indian project, which used growing neural gases to
classify filamentous bacteria in sewage, by skeletonising images
of them into line segments. As the authors say in Shape Extraction
of Volumetric Images of Filamentous Bacteria Using a Topology Preserving
Feature Map, the line segments were obtained in a very direct
way, as just the links between nodes.
Filamentous bacteria live in colonies, joining together into long
thin filaments of cells. When sewage is treated by mixing with
air and letting microorganisms clean it by feeding on organic waste,
amongst its inhabitants are assorted species of filamentous bacteria.
In small concentrations, these are useful because they bind together
aggregates of other bacteria living in the mix, meaning, for instance,
that these precipitate out readily, and so are easy to remove. But
large concentrations gum up the system; so it's important to know
what species, and in what numbers, are living in your sewage tanks.
This involves classifying the filaments' species by looking at properties
such as their branching ratio, something almost impossible to do
manually because they are so complex.
The authors first obtained images of the filaments using confocal
laser scanning microscopy. This enables the 3D object to be scanned
as a sequence of 2D sections; all done optically, so the filaments
don't need to be physically sliced. After some preliminary cleaning
up and separation into individual filaments, the images were converted
into lists of voxels (the 3D equivalent of pixels). These voxel
coordinates were then fed one at a time to a modified growing neural
gas algorithm; each voxel caused a change in node positions, and
possibly a new node to be formed. (The paper calls them processors
rather than nodes.) The links amongst the nodes in the final state
of the growing neural gas were the skeleton of the image, a collection
of line segments. It was much easier to search these for properties
indicating their species than it would have been the original image.
Compared with conventional skeleton-finding methods, say the authors,
theirs was much less sensitive to noise, as well as being efficient
and invariant to image rotation. The paper gives details of the
algorithm, and shows some of the images before and after neural-gas skeletonisation.
Neural gases are a nice example of local behaviour generating
global structure. They make me think of how a pack of hunting or
foraging animals might be organised, if the inputs were food: to
benefit the pack as a whole, it makes sense to have a "specialist"
covering the region of most abundant food, and for its neighbours
to pay attention to that area too; but this has to be balanced against
the risk of neglecting the rest of the territory. Or perhaps the
nodes could represent research groups, and the inputs scientific
ideas. I'll leave it to the cynic to decide where research funding
might enter this interpretation.
- The DemoGNG (Version 1.5) applet, with links to source code, applet
manual, and Some Competitive Learning Methods. The latter
is at www.neuroinformatik.ruhr-uni-bochum.de/ini/VDM/research/gsn/JavaPaper/.
- Growing Neural Gas. Experiments with GNG, GNG with Utility
and Supervised GNG, Master's Thesis by Jim Holmström, Uppsala,
2002. Chapter 2 explains the growing neural gas algorithm. See also
the screen shots and explanations in Unsupervised ontogenetic
networks by Bernd Fritzke, at www.ki.inf.tu-dresden.de/~fritzke/papers/ncc2_4.pdf.
- The Self Organizing Map: Unsupervised Competitive Learning,
by Simon Dennis, Queensland, 1997. A nice intro to competitive learning,
with applets and exercises for them.
- Learning Vector Quantisation by Rob Saunders. An applet
demonstrating the Learning Vector Quantisation algorithm, with Voronoi
cells shown. The source can be downloaded. See also Simple Competitive
Learning by Jenny Orr, Willamette, www.willamette.edu/~gorr/classes/cs449/Unsupervised/competitive.html.
This explains vector quantisation, with diagrams.
- "FROG CELLS" by Maeda Mameo. Saunders's page links to a "Frog
Cells" page. That link is broken, but this one is probably the same.
It displays dynamically updated Voronoi and Delaunay diagrams -
a good way to appreciate them - and the code was used by Saunders
in his applet.
- Online version of Topology representing networks by Thomas
Martinetz and Klaus Schulten, from Neural Networks, volume
7, 1994. Introduces and proves the properties of competitive Hebbian learning.
- Classification and Clustering Using Artificial Neural Networks,
by Hamed Muhammed, Uppsala, 2001. Part of a data-mining course;
introduces DemoGNG and sets exercises on the effect of changing its parameters.
- Modular toolkit for Data Processing. An open-source Python
toolkit which views neural nets and other learning algorithms as
processing elements through which data is piped. Processing elements
can be linked, combining them into "flows". Includes a growing neural
gas, as well as principal component analysis, independent component
analysis, and slow feature analysis. Last updated at the end of
- Shape Extraction of Volumetric Images of Filamentous Bacteria
Using a Topology Preserving Feature Map, by U. Bhattacharya,
V. Liebscher, A. Datta, S. K. Parui, K. Rodenacker, and B. B. Chaudhuri,
Indian Statistical Institute, Calcutta, and Neuherberg, Germany, 1999.
For beautiful microphotographs of filamentous bacteria, see www.environmentalleverage.com/Filamentous%20Bacteria%20Photomicrographs.htm
at Environmental Leverage: Turning Liabilities to Leverage!!!!
- Robot Control with the "Neural Gas" Algorithm. Summary
of research by Stanislav Berkovich at the Theoretical and Computational
Biophysics Group, Urbana-Champaign. Uses a neural gas to divide
a robot's visual input space into smaller patches which are more
easily modelled by local maps.
- Leo Sauermann's blog. Sauermann works on the Semantic Web at DFKI
and maintains gnowsis (at www.gnowsis.org/),
DFKI's open-source Semantic Desktop environment. In his blog entry
for 29 April 2005, he proposes DemoGNG for analysing large semantic graphs.
- Project page of the Neuromimetic intelligence team at INRIA, who
are using neural gases for document classification.
- PowerPoint slides on intelligent signal processing, by Léonard
Studer, CERN. A presentation of neural nets (and fuzzy logic and
evolutionary computing) for signal processing, with mention of Kohonen
maps and brief mention of growing neural gases.
- How many kinds of Kohonen networks exist?, in the comp.ai.neural-nets FAQ.
- Neural Network Software Resources for Developers, from
the Portal on forecasting with artificial neural networks. Links
to DemoGNG and other neural net software, in C, C++, Java, Lisp, and other languages.
On the penultimate day of 2001, I took a train to Maastricht to
see a historic event, the introduction of the Euro. The inauguration
ceremony in Maastricht's lovely city square made an excellent and
lavishly EU-funded New Year's party; and after it, I continued with
a trip around Europe. I read articles in the Dutch papers about
coin collectors with no guilders to collect; in the Belgian, about
the ink rubbing off forged Euro notes; in the Spanish, allergies
due to nickel in the Euro coins; in the Portuguese, the lady who
got her conversion factor wrong and charged an entire month's salary
into her mobile phone. But these were small difficulties: all the
hastily-cobbled conversion software ran as it should, and Europe
avoided total monetary collapse.
I ended up in Frankfurt at the Bundesbank's Money
Museum. Amongst its exhibits were a display about the benefits
of monetary union; a Federal Reserve comic strip proclaiming the
virtues of global free markets; and a Virtual Economy gaming console
where you could be Finance Minister, balancing the economy against
the demands of workers and businesses whilst being careful not to
hurl it into a massive inflationary spiral. In fact, the whole museum
is devoted to monetary stability and the avoidance of inflation.
Which, given what happened during the 1920s, is not surprising.
In 1920, a dollar was worth 40 Mark; by November 1923, it was 4.2×10¹² Mark.
Inflation in Britain never soared to such levels, but in 1951,
it must have been sufficient to inspire the comic-song duo Flanders
and Swann to write There's
a Hole in my Budget. The lyrics go like this:
DS: There's a hole in my budget, dear Harold, dear Harold,
There's a hole in my budget, dear Harold, my dear.
MF: Then mend it, dear Healey, dear Dennis, dear Dennis,
Then mend it, dear Chancellor, dear Dennis, my dear.
DS: But how shall I mend it, dear Wilson, dear Wilson,
But how shall I mend it, dear Wilson, my dear?
MF: By building up exports, dear Dennis, dear Dennis,
By increased production, dear eyebrows, my dear.
DS: But that means working harder, dear Harold, dear Harold,
And the workers must have more incentives, my dear.
MF: Then decrease taxation, dear Healey, dear Healey,
And raise all their wages, dear Healey, my dear.
DS: But where is the money to come from, dear Premier,
But where is the money to come from, my dear?
MF: Why, out of your budget, dear Healey, dear Healey,
Why, out of your budget, dear Dennis, my dear.
DS: But there's a hole in my budget, dear Harold, my dear!
The song's site says that Flanders and Swann wrote this in 1951; they
rewrote the version above for the high inflation Britain suffered
during the 1970s. (Harold Wilson was Prime Minister; Dennis Healey,
with his notoriously bushy eyebrows, was Chancellor.) The stage directions
are to perform the song as a "dialogue between the Prime Minister
and Chancellor, who wander round the room in a slow inflationary spiral".
It's fun to translate this into Prolog. There are many possibilities,
depending on whether you're writing a planner, an economic simulator,
or whatever; how general you want it to be; and how much you want
it to explain its conclusions. Here's a simple version, written
thinking of the lines in the song as goals and subgoals:
can_get_money :- can_raise_exports.
can_raise_exports :- can_raise_production.
can_raise_production :- can_raise_workers_incentives.
can_raise_workers_incentives :- can_lower_taxes.
can_raise_workers_incentives :- can_raise_wages.
can_lower_taxes :- can_get_money.
can_raise_wages :- can_get_money.
Of course, this loops if you run it, which, as with There's
a Hole in my Bucket on which it's based, was rather the authors' intention:
1 ?- consult('d:/ainews/flanders_and_swann.pl').
% d:/ainews/flanders_and_swann.pl compiled 0.00 sec, 48 bytes
2 ?- can_get_money.
ERROR: Out of local stack
Exception: (2,794,796) can_raise_workers_incentives ?
Here's a different translation:
direct_subgoal( get_money, raise_exports ).
direct_subgoal( raise_exports, raise_production ).
direct_subgoal( raise_production, raise_workers_incentives ).
direct_subgoal( raise_workers_incentives, lower_taxes ).
direct_subgoal( raise_workers_incentives, raise_wages ).
direct_subgoal( lower_taxes, get_money ).
direct_subgoal( raise_wages, get_money ).
subgoal( A, B ) :- direct_subgoal( A, B ).
subgoal( A, B ) :- direct_subgoal( A, Z ), subgoal( Z, B ).
This makes the causal network explicit, and allows us to interrogate
it by asking which goals are subgoals of other goals, rather than
just executing to find out whether some goal is or is not possible:
8 ?- subgoal( get_money, SG ).
SG = raise_exports ;
SG = raise_wages ;
SG = get_money ;
We often encounter this choice between explicitly representing information
in some data structure or leaving it implicit in executable code.
As any good Prolog textbook will explain, the subgoal
predicate is easily extended to find the path between two actions.
The causal network can also be enhanced, for example by adding probabilities
and Bayesian reasoning.
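Here, for instance, is one way the path-finding extension might look (one of many variants; like subgoal/2 above, it will loop around the cycle on backtracking unless you also carry a list of goals already visited):

path( A, B, [A,B] ) :- direct_subgoal( A, B ).
path( A, B, [A|Rest] ) :- direct_subgoal( A, Z ), path( Z, B, Rest ).

?- path( get_money, raise_wages, Path ).
Path = [get_money, raise_exports, raise_production, raise_workers_incentives, raise_wages]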
The Budget song is funny because it describes an infinite loop.
There's no solution, because you need the resource you're fixing
in order to fix it. This reminded me of an exercise
in the first edition of Patrick Winston's Artificial Intelligence.
(Make sure to look for the first edition - later ones don't have it.)
The exercise, written by Richard Brown, is problem 16-6 in Chapter
17. He conjectures a correspondence between story twists and program
bugs, and gives an example of a story involving recursion with no
stopping case. This is Larry Niven's Convergent Series, in
which a man escapes from a demon by drawing a pentacle on its stomach.
The laws of demonology constrain demons to remain inside pentacles,
and the demon senses the pentacle and shrinks itself to fit. The
pentacle shrinks too; the demon shrinks more; and so on ad infinitum.
The Budget song mirrors the same control-structure bug.
Brown identifies other bugs: the deadly embrace of parallel programming
is mirrored in O. Henry's short story The
Gift of the Magi; the Lisp FUNARG
problem, by the Rosencrantz and Guildenstern business. Other
kinds of bug, Brown says, can be summarised by giving the morals
of fairy tales. The first moral to come to my mind was "Don't count
your chickens before they're hatched". Could this be an admonition
not to dimension dynamic arrays until you know how big they have
to be? Brown ends by setting a problem to choose a bug, characterise
it, and then write a short story with the bug as its twist.
Returning to Flanders and Swann, their song demonstrates a simple
form of qualitative reasoning. Quantitative reasoning calculates
with numerical values; qualitative reasoning abstracts these
away into symbolic values. Thus, the Budget song is only interested
in whether its economic variables go up or down, not by how much
or how fast. Sometimes, a qualitative answer is necessary because
we don't have precise numerical values; sometimes, because it's
all we need in order to understand how something works - e.g. the
direction each wheel turns in a gear train, or whether your Christmas
tree lights will go out when a bulb blows in the circuit.
One kind of qualitative reasoning is what we do when we say "I
want a beer. OK, I'm out of cash; got to go to a cash dispenser
and get out £20; oh blow it, I was overdrawn, and I'll be
even more overdrawn now". In other words, negative plus negative
is negative. We can easily code such rules in a symbolic language
such as Lisp or Prolog:
sum( neg, neg, neg ).
sum( pos, pos, pos ).
mult( pos, pos, pos ).
mult( pos, neg, neg ).
mult( pos, zer, zer ).
mult( neg, neg, pos ).
On top of these, we can then write a symbolic evaluator which takes
a formula, breaks it down into individual functions and their arguments,
and calculates intermediate values and then a result in terms of the
symbolic values of the arguments.
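Here's a minimal sketch of such an evaluator (my own illustration), handling formulae built from + and * over the symbolic constants neg, zer and pos, and using the sum and mult tables above:

qeval( neg, neg ).
qeval( zer, zer ).
qeval( pos, pos ).
qeval( A + B, V ) :- qeval( A, VA ), qeval( B, VB ), sum( VA, VB, V ).
qeval( A * B, V ) :- qeval( A, VA ), qeval( B, VB ), mult( VA, VB, V ).

For instance, qeval( neg + neg, V ) gives V = neg - the cash-dispenser example above - and qeval( pos * neg, V ) also gives V = neg, both using only the facts already listed.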
One problem with qualitative reasoning is that there isn't always
enough information to determine a unique answer. If I tell you I
was overdrawn but have now paid money into my account, you can't
tell - unless I say how much I paid in - whether I am still overdrawn.
It's easy enough to program around this, by making rules return
sets of answers, saying for example that the sum of positive and
negative can be any of negative, zero, or positive:
sum( neg, pos, [neg,zer,pos] ).
This, however, easily leads to a combinatorial explosion of possible answers.
In spite of combinatorial explosion, qualitative reasoning is
extremely useful, and has attracted a lot of research. You can experiment
with it for yourself: the third edition of Ivan Bratko's book PROLOG
Programming for Artificial Intelligence has an excellent chapter
on it. (Make sure to look for the third edition - earlier ones don't
have all the material.) Prolog code for the book's examples can
be downloaded from the book's
Web site, and includes qualitative models of simple circuits,
a bath filling with water, and a block on a spring.
On the Budget
song page, just after the lyrics, Flanders tells us:
Yes, I'm rather proud of that, actually, how in 1951
we explained this involved economic truth in simple revue terms.
Alas, all too true today.
A paper by Chris Riesbeck on Knowledge reorganisation
and reasoning style would suggest that, in explaining this
involved economic truth, Flanders and Swann were thinking like expert
economists rather than novices.
Why do I say that? Riesbeck was interested in how novices reorganise
their memory structures as they learn. He compared how novice and
expert economists solve problems, and developed models of each type
of reasoning and a theory for how novices reorganise their memory
as they gain expertise.
Compare two of his examples:
Q: If the federal budget deficit went down, what would
happen to the rate of inflation?
A: I think inflation would go down, because people would have less
money taken out of their weekly income, they would hopefully do
some more saving, have more money to live on, not be pushing for
higher wages, and that wouldn't be pushing inflation up.
With the resulting structure of taxes and expenditures,
the President is not going to be balancing the Federal budget in
1984 or any other year. With high growth choked off by high interest
rates, budget deficits are going to be bigger, not smaller. The
result: more demands for credit and higher interest rates.
Lester Thurow, Newsweek, 21 September 1981,
The first transcript is from a novice who, Riesbeck claims, viewed
the economy in terms of the goals of economic actors - workers,
businesses, Government - and the subgoals needed to achieve these
goals. For example, a business's goal is to raise income. One way
to do so - a subgoal - is to raise sales. Other goals are to raise
prices and to lower costs. One of the Government's goals is to lower
the budget deficit (even though, in the real world, its actions
might make one think otherwise).
In total, Riesbeck interprets the transcript in terms of these
three actors, each having one top-level goal and two or three subgoals:
GOVERNMENT GOAL: lower the budget deficit
SUBGOAL: raise taxes
SUBGOAL: lower consumer spending
WORKERS' GOAL: raise money
SUBGOAL: lower taxes
SUBGOAL: raise wages
SUBGOAL: lower consumer spending
BUSINESS GOAL: raise money
SUBGOAL: raise sales
SUBGOAL: lower costs
SUBGOAL: raise prices
The full reasoning goes like this: First, the Government, if it can
lower the budget deficit, can afford to lower taxes. Second, workers,
once taxes are lower, can reduce their wage demands. Third, businesses,
if wages aren't going up, have no need to increase prices. Note that
the reasoning is working both up and down the goal tree. Reasoning
from goal to subgoal in the "workers" tree would be forward reasoning
from end to means: IF you need to raise money THEN one way to do so
is to get taxes lowered. Our novice is going in the opposite direction:
BECAUSE taxes will be lower, the increase in wages can also be
less (in fact, zero).
Now look at Riesbeck's second
example, from a Newsweek finance columnist. This is more
abstract, focussing directly on the economic variables and how they
influence one another. Riesbeck claims such reasoning can be modelled
as searching a graph whose nodes are economic variables, connected
by links indicating whether one variable varies directly or inversely
with the other. To answer a question such as "If the federal budget
deficit went down, what would happen to the rate of inflation?"
would involve searching the graph for a chain of influence links
from budget deficit to inflation. This is more efficient than novice
reasoning, because the data structure being searched is simpler,
being a single graph instead of an intertangled forest of different
actors' goal trees.
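This kind of expert model is also easy to sketch in Prolog. The links below are my own illustrative choices, loosely following the Thurow quote, not Riesbeck's actual network; the point is the search, which composes direct and inverse influences along a chain:

% influences( X, Y, Sign ): Y varies directly or inversely with X
influences( budget_deficit, credit_demand, direct ).
influences( credit_demand, interest_rates, direct ).
influences( interest_rates, growth, inverse ).

% chain( X, Y, Sign, Path ): find a chain of influence links from X
% to Y, composing the signs of the links along the way
chain( X, Y, Sign, [X,Y] ) :- influences( X, Y, Sign ).
chain( X, Y, Sign, [X|Rest] ) :-
    influences( X, Z, S1 ),
    chain( Z, Y, S2, Rest ),
    compose( S1, S2, Sign ).

compose( direct,  direct,  direct ).
compose( direct,  inverse, inverse ).
compose( inverse, direct,  inverse ).
compose( inverse, inverse, direct ).

A query such as chain( budget_deficit, growth, Sign, Path ) answers Sign = inverse, Path = [budget_deficit, credit_demand, interest_rates, growth]: in this toy network, bigger deficits choke growth, and a smaller deficit lets it recover.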
Models like Riesbeck's are useful in education, by helping teachers
understand what goes on in their pupils' heads. One of the earliest
and most famous is Young and O'Shea's production
system model of children's subtraction. They say that in standard
educational theory, children's faulty arithmetic is due to misremembering
"number facts", aggravated by problems such as bad setting-out,
fatigue, carelessness, and poor concentration. So standard educational
theory - this was written in 1981 - had concentrated on models for
recall and use of correct number facts.
However, the children's errors may actually be systematic rather than
slips in execution. We should regard the child as faithfully following
wrong algorithms, not wrongly executing the correct one. Of course,
this is a hypothesis; but it's worth taking seriously, because if
true, it gives us more control over teaching.
Young and O'Shea went on to build a production system model of
how children might acquire wrong algorithms rather than correct
ones. They started with production system rules which modelled correct
subtraction, and then showed that by slight changes to some of the
rules, their model could account for more than 2/3 of the errors
observed in their subjects' subtractions. Some of the rules were
faulty because of wrong actions; others, because of wrong conditions.
Consider a subtraction in which one column has a zero on top - say
0 - 7, which the child answers as 7. The error could have been caused
by a rule that was too general: one whose conditions don't discriminate
between having the zero under a digit, or above it.
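The distinction is easy to see in rule form. Here's a tiny sketch (my own illustration, not Young and O'Shea's actual production rules) of a correct column rule beside an overgeneral one:

% Correct: only when the digit underneath is zero is the answer the top digit.
col_diff( Top, 0, Top ).

% Overgeneral: "if either digit is zero, the answer is the other digit".
% The condition no longer checks which row the zero is in, so 0 - 7 gives 7.
buggy_col_diff( Top, 0, Top ).
buggy_col_diff( 0, Bottom, Bottom ).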
Young and O'Shea concluded, amongst other things, that teachers
tend to regard a new skill as something that must be "consolidated"
by repeated practice, as in language books that introduce past tenses
and then set translations with all past tenses and no present tenses.
But the rule model shows we must also train the student to discriminate
when to apply rules, i.e. to get the conditions right. Such training
may actually be hindered by consolidation. In the language example,
old present-tense rules may be lost or changed in favour of past-tense ones.
Researchers have developed such cognitive models in many areas
of education. I mentioned Riesbeck's model of economic learning;
a more recent one, focussing on how experts switch easily between
textual information and X-Y graphs, and why novices have difficulty
doing the same, is How
Does an Expert Use a Graph? A Model of Visual and Verbal Inferencing
in Economics by Herbert Simon and colleagues at Carnegie Mellon.
Such work might help teachers explain to students how macroeconomics
(the aggregate behaviour of economies in terms of variables such
as inflation, Gross National Product, and unemployment) is related
to microeconomics (the behaviour of individual actors). Could it
also help us build artificially-intelligent economists? You wouldn't
want to experiment on an actual economy, and the Money Museum gaming
consoles I mentioned at the start of this article are too heavy
to move; but if anybody wants to try out a computerised Chancellor
of the Exchequer or Finance Minister program, I can point them at
a testbed. It's a Web-based virtual
UK economy which I and colleagues built a few years ago for
teaching introductory economics, and where you play the role of
Chancellor, set taxes and other variables, and then watch how your
changes affect the economy over the next ten years.
If, when reading this article, you hadn't heard of Flanders and
Swann, it may help to think of them as the English equivalent of Tom Lehrer.
I'll end this article with some advice from his song Lobachevsky.
Written for mathematicians, it applies just as well to AI:
Let no one else's work evade your eyes,
Remember why the good Lord made your eyes,
So don't shade your eyes,
But plagiarize, plagiarize, plagiarize...
Only be sure always to call it please, "research".
- There's a Hole in my Budget, from Flanders and Swann Online.
- The song is based on the traditional There's a Hole in my Bucket.
Here's one of many listings, from the Erith and Crayford Scouts
songs site. (For people from Maastricht, there's a version in local
dialect at www.limburgzingt.nl/maast-l.htm;
search for "ummer".)
- Problem 16-6 (by Richard
Brown) in Chapter 17, Problems to think about, from Artificial
Intelligence (1st edition) by Patrick Winston, 1977.
- The Gift of the Magi by O. Henry, at The Literature Network.
- Posting explaining the Funarg problem on comp.lang.lisp by Barry
Margolin, 27 April 2000.
- Chapter on Qualitative Reasoning, from the book PROLOG
Programming for Artificial Intelligence (3rd edition) by Ivan Bratko.
- Student resources page for PROLOG Programming for Artificial
Intelligence, with source code for each chapter. The menu at
the top selects the chapter: qualitative reasoning is near the end.
- Qualitative Simulation with CLP, by Aleksander Bandelj,
Ivan Bratko, and Dorian Šuc, Faculty of Computer and Information
Science, University of Ljubljana, Slovenia.
- Towards a qualitative model of everyday political reasoning,
by Kenneth Forbus and Sven Kuehne, Northwestern University.
- Knowledge reorganization and reasoning
style by Chris Riesbeck, in Developments in expert systems,
edited by Mike Coombs, Academic Press, 1984.
- Errors in children's
subtraction, by Richard Young and Tim O'Shea, from Cognitive
Science, volume 5, issue 2, 1981.
- A Production System Model of Subtraction, by Benedict du
Boulay and Steve Torrance, from the Cognitive Modelling course at
Sussex University, 2002. Shows output from a production system written
in Pop-11 and based on the Young and O'Shea rules.
- How Does an Expert Use a Graph? A Model of Visual and Verbal
Inferencing in Economics by Hermina Tabachneck, Anthony Leonardo,
and Herbert Simon, Carnegie Mellon. See also their CaMeRa: A
Computational Model of Multiple Representations, at www-2.cs.cmu.edu/afs/andrew.cmu.edu/usr0/psycam/public/camera/TLS_95_2/TLS_95_2.html.
- Qualitative Modeling in Education by Forbus and Bredeweg.
See also Bredeweg's thesis at staff.science.uva.nl/~bredeweg/pdf/thesis/03Groen.pdf.
- Virtual Economy, by Andy Beharrell, Keith Church, Jocelyn
Paine, and Graham Stark. Our be-your-own-Chancellor educational
simulation of the British economy. See also Graham Stark's Virtual
Chancellor at www.virtual-worlds.biz/vwc/.
- When Money Flowed Like Water, by Doron Swade, from Inc.
Magazine, September 1995. About the Phillips machine, a water-based
analogue computer model of the economy, built from old Lancaster bomber parts.
- The Lives Behind the Numbers On the Screen: Illustrating the
Social Consequences of Economic Change By Telling Stories On the
Web. Some experiments in generating, from the output of Virtual
Economy, stories about the effects of economic changes on individuals.
- Diary of an Insane Cell Mechanic - A psychologist's descent
into molecular biology, by Jeff Shrager. Shrager worked on a
model of multiple-representation reasoning in gas laser physics,
which partly inspired the model described in Simon et al.'s How
Does an Expert Use a Graph?. He also wrote this Diary,
and - amongst many other things - is principal investigator on the
Web-based programmable biological knowledge-base BioLingua,
built on top of BioLisp.
- Brain parts sort of involved in PreCognition: An fMRI study.
Shrager's parody of the electronic journal Cognitive
- Wikipedia entry for Tom Lehrer.
I found that quote, by Perl creator Larry Wall, in an interview
with Mark Jason Dominus about his book Higher Order Perl,
which tells how to program with higher-order functions in Perl.
It looks like an insult to Lisp, but it's really a compliment: Dominus
has taken the notion of higher-order functional programming, popular
in Lisp, for use in real-world Perl tasks such as typesetting, diagram
generation, and HTML hacking. Higher-order functional programming
passes functions as arguments to other functions, making it easy
to iterate over structures without coding your own loops. In Lisp,
(mapcar #'+ '(1 2 3 4) '(5 6 7 8))
will run through both lists of numbers adding corresponding pairs;
should you want to multiply instead, you pass #'* to
mapcar instead of #'+:
(mapcar #'* '(1 2 3 4) '(5 6 7 8))
Coming from a different culture, many Perl programmers don't realise
they can do the same. But you can, and it's a great way to write general
and reusable code.
Lisp is regarded by Paul
Graham, author of On Lisp and ANSI Common Lisp,
as the basis for his new language Arc,
as he explains in Being
Popular and The
Hundred-Year Language. Arc will be succinct, because "succinctness
is power"; but will not be object oriented, because "there
are five reasons people like object-oriented programming, and three
and a half of them are bad". Other essays on Graham's site include
Theory of VC Suckage, Why
Nerds are Unpopular, and an essay
on essay-writing. Did you know the word "meander" comes from
a river in Turkey? I do now.
But perhaps I should forget Lisp. In his collection More
About Real Programmers, Adrian Hilton tells us "Real programmers
don't use LISP. Only effeminate programmers use more parentheses
than actual code". The collection is one of many on the theme "Real
programmers don't eat quiche", possibly inspired by Real
Programmers Don't Use Pascal. The article is a companion
to the heart-warming story
of Mel, from the days when real computers were made from vacuum
tubes and drums, and to optimise your code, you timed how fast the drum rotated.
Hilton's collection ends with the real history of Unix:
In 1969, AT & T had just terminated their work with the
GE/Honeywell/ AT & T Multics project. Brian and I had just started
working with an early release of Pascal from Professor Nichlaus
Wirth's ETH labs in Switzerland and we were impressed with its elegant
simplicity and power. Dennis had just finished reading 'Bored
of the Rings', a hilarious National Lampoon parody of the great
Tolkien 'Lord of the Rings' trilogy. As a lark, we decided to do
parodies of the Multics environment and Pascal. Dennis and I were
responsible for the operating environment. We looked at Multics
and designed the new system to be as complex and cryptic as possible
to maximize casual users' frustration levels, calling it Unix as
a parody of Multics, as well as other more risque allusions. Then
Dennis and Brian worked on a truly warped version of Pascal, called
'A'. When we found others were actually trying to create real programs
with A, we quickly added additional cryptic features and evolved
into B, BCPL and finally C. We stopped when we got a clean compile
on the following syntax:
- Interview with Mark Jason Dominus, from the Perl Review.
- Review of Higher Order Perl by Damian Conway, explaining
how it will transform your programming.
www.paulgraham.com - Paul
Graham. His essays are linked from the left-hand navigation bar.
See also en.wikipedia.org/wiki/Paul_Graham.
- Real Programmers Don't Use Pascal. See www.pbm.com/~lindahl/mel.html
for The story of Mel.
- More About Real Programmers, jokes collected by Adrian
Hilton, ending with UNIX and C a Hoax.
- The Tao Of Programming.
- Review of Bored of the Rings.
Past newsletters are available at either www.ddj.com
As ever, interesting links and ideas for future issues are very
welcome. Feel free to contact either myself (below) or Jocelyn <firstname.lastname@example.org>
with comments, thoughts and suggestions.
Until next month,
Copyright ©2005 Amzi! inc., CMP, and Jocelyn Paine. All Rights Reserved.