April 2004

Interactive Web-Based Experts

Like all computer programs, expert systems take input, process it, and provide output. For example, an expert system might input symptoms and output a diagnosis; input customer requirements and output a product configuration; or input a robot's situation and output an action.

In many cases the required inputs can be gathered beforehand and sent to the expert system, which then processes the input and produces the output. This might be the case in a telephone pricing module, where the facts of a particular call are known and input to the system that then applies the correct rules to determine the price for that call.

Interactive Dialogs

However, one of the fascinating things about the behavior of rule-based expert systems is the way they can selectively and interactively gather only the data needed for a particular answer. This feature gives them a more human-like quality, partly justifying the broad label of artificial intelligence.

Consider, for example, these rules to help with video monitor problems that might be part of a computer technical support system:

if screen is blank and led light is on, then move the mouse.

if screen is blank and led light is off, then plug it in.

if screen is difficult to read, then adjust contrast.

Such a system will probably have knowledge about the various bits of input data, such as:

Attribute - screen; values = [blank, difficult_to_read]; prompt = What is the problem with the screen?

Attribute - led_light; values = [on, off]; prompt = Is the led light on or off?

The calling program will start by looking for a solution; the expert system will try the first rule, which needs to know if there is a problem with the screen. Since it doesn't yet know what the problem is, it will ask the user.

If the user responds that the screen is blank, then the system will ask if the led light is on or off. But if the user had responded that the screen is difficult to read, then the system wouldn't need to ask about the led light and would go directly to the third rule.

This dialog is in contrast to a more conventional program interface that would require all of the input data up front. But for many expert systems that would mean the user would have to provide a lot of information that wasn't necessary, such as the state of the led light when the problem with the monitor was one of contrast.
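To make the selective questioning concrete, here is a minimal sketch in Python (hypothetical code, not part of any real system; the rule table and function names are invented for illustration) of a consultation that asks only for the attributes a rule actually needs:

```python
# Hypothetical sketch of selective questioning, using the monitor rules
# above. `answer` is any callable that maps a prompt to the user's reply.

RULES = [
    ({"screen": "blank", "led_light": "on"},  "move the mouse"),
    ({"screen": "blank", "led_light": "off"}, "plug it in"),
    ({"screen": "difficult_to_read"},         "adjust contrast"),
]

PROMPTS = {
    "screen": "What is the problem with the screen?",
    "led_light": "Is the led light on or off?",
}

def consult(answer):
    """Try each rule in order, asking for an attribute only when a rule
    actually tests it. Returns (advice, attributes that were asked)."""
    known = {}
    def value_of(attr):
        if attr not in known:                    # ask only when needed
            known[attr] = answer(PROMPTS[attr])
        return known[attr]
    for conditions, advice in RULES:
        # all() short-circuits, so later conditions are never asked
        # once an earlier condition has failed
        if all(value_of(a) == v for a, v in conditions.items()):
            return advice, sorted(known)
    return None, sorted(known)
```

A user who answers that the screen is difficult to read is asked only the one question; the led light never comes up.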

Problem with Web Dialog

Typically this type of expert system has a function that asks the user a question when necessary. It is called, asks the question, records the response and lets the reasoning engine continue.

But a Web server that is calling an expert system likes to be in control of the dialog with a client, and doesn't take kindly to a program that demands an immediate answer to a question. So the expert system needs to take a more passive approach to get data from the user.

When the system needs to get a value from the user it simply returns to the calling program with the fact that it needs an answer to a question, and then it rests. Maybe the program will stay in memory, or maybe it will write the current state of the consultation out to a file.

The Web server then poses the question to the user, and when an answer comes in the server can assert the new information to the expert system and start it again with the query. This cycle continues until the system comes up with an answer that the server then displays to the user.
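That cycle can be sketched in Python (hypothetical names, invented for illustration; a real server would store the state blob in a file, session, or cookie, as described above):

```python
import json

# Hypothetical sketch of one round trip: restore the consultation state,
# assert any new answers, re-run the solver, and hand the state back for
# the server to store until the next request.

def handle_request(state_blob, new_answers, solver):
    """`solver` takes the known facts and returns either an answer or a
    list of questions it still needs answered."""
    state = json.loads(state_blob) if state_blob else {}
    state.update(new_answers)              # assert the user's answers
    result, questions = solver(state)      # re-run the original query
    return result, questions, json.dumps(state)
```

The loop ends when the solver returns a result instead of more questions.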

See the code corner for an example of a simple reasoning engine designed to work on the Web.

Case Study: Vaccination Schedules

The researchers at Stanford, wondering about the use of AI in medicine, noted that one problem with deploying AI is the lack of data for expert systems to reason over. This observation probably applies to the deployment of AI in general.

One can encode all of the rules of an expert system in a domain, but without underlying data or a means to gather input from the user, the rules are worthless. It's kind of like having cars but no roads to drive them on.

The February issue of Dr. Dobb's Journal had an article on a custom rule engine that illustrates this point. The system encoded the rules for pediatric vaccination scheduling. These rules are moderately complex and an ideal candidate for a rule-based automated approach.

The first work in this area was done by researchers at Yale a number of years ago. But there was a problem. The rules require as input the child's vaccination history, which is needed to forecast the next vaccinations. But that data has traditionally been recorded by hand on paper medical records.

So in order to use this system, a doctor would have to look at the paper records, type in the pertinent data, and then ask the expert system to recommend the next vaccinations. It's easier to just study the rules and documents and figure out the vaccinations by hand.

In other words, until the data collection of medical records is machine readable, there is not much point in developing expert systems that provide advice based on an individual's medical record.


Visual Data LLC is the provider of Office Practicum, a software package used to run a pediatric office. It provides all of the services you might expect from such a package: helping with billing, scheduling of visits, and the recording of medical information from each visit, including vaccinations given.

Doctors, who previously might have been reluctant to use an expert system dispensing advice on vaccinations, were suddenly asking Visual Data to provide that service. They knew the data was in the system, and the software was already automating other parts of an office visit; why not also provide a list of the vaccinations a child should get on a given visit, as well as the due dates for upcoming vaccinations?

In other words, now that the data was available, it made tremendous sense to automate the process.

The article in the February Dr. Dobb's describes the particular system in more detail.


It's interesting to note that historically, the availability of data was key to AI success stories. The first commercial successes of AI were in the insurance industry, helping with underwriting decisions. The insurance industry, of course, was one of the first to have a large amount of machine readable data.

Other successes were in online systems where the users could provide information interactively to the system, such as Digital Equipment's configuration system. Note that Digital, being a computer company, had online order/data entry earlier than most companies.

But without either an easy means to communicate with a user, or a database of information to use, expert systems were not of much use.

The Internet now makes practical a whole host of expert systems in a variety of domains.

ChatBot Contest

The Loebner Prize offers a large award for a program that can carry on an intelligent conversation. The Chatbot Challenge is a much smaller contest, looking not for proof of artificial intelligence but simply for the best chatbot. (See the July 2003 issue for a discussion of chatbots and the September 2003 issue for code to implement one, at www.ainewsletter.com.)

The competition features dozens of chatbots that you can try, and then based on your experience you can vote for the ones you think are best in a number of categories. I recommend looking at the past winners first, because they are some of the best. Check it out at http://web.infoave.net/~kbcowart/

Code Corner

Here is a very simple program that illustrates the type of dialog a back-end expert system can have with a Web server. The rules are for a toy identification system using natural Prolog syntax.

Toy Logic Base

This is the toy logic base used in the sample dialogs:

pet(dog) :- lives_in(house), sound(woof).
pet(duck) :- legs(2), sound(quack).
pet(horse) :- eats(hay), lives_in(barn).
pet(hamster) :- lives_in(cage).

lives_in(barn) :- size(large).
lives_in(house) :- house_broken(yes), size(medium).
lives_in(cage) :- size(small).

sound(X) :- ask('What sound? ', sound, X).
legs(X) :- ask('How many legs? ', legs, X).
eats(X) :- ask('What does it eat? ', eats, X).
size(X) :- ask('What size is it? ', size, X).
house_broken(X) :- ask('Is it house broken? ', house_broken, X).

Asking the User

For a stand-alone system, the predicate ask/3 would first check if the value of the attribute was already known, and if not simply prompt the user for an answer, and continue the reasoning process based on that answer. But that isn't practical in a Web server environment.

So ask/3 for this system also first checks if the value is already known, but if not it notes that the question needs to be asked and then fails, triggering backtracking into other possible solutions.

% known/2 and need/2 are asserted at run time
:- dynamic(known/2).
:- dynamic(need/2).

ask(_Prompt, Attr, Val) :-
    known(Attr, X),
    !,
    X = Val.
ask(Prompt, Attr, _Val) :-
    ( need(Attr, Prompt) ->
        true
    ;
        assert( need(Attr, Prompt) )
    ),
    fail.


Because this is intended to be a back-end system, it needs an application program interface (API) for the calling program, probably a Web server, to use. The API has four entry point predicates:

init - Initialize the state of the system for a new consultation.
solve(Attribute, Value) - Call the system to determine a value for an attribute. Using the toy knowledge base above, this would be a call like solve(pet, X).
get_needs(NeedsList) - Get a list of the attributes and associated prompts that need values from the user.
add_known(Attribute, Value) - Assert to the logicbase an attribute and its value.
Here is the implementation of the API predicates:

init :-
    retractall( known(_, _) ),
    retractall( need(_, _) ).

solve(Attr, Val) :-
    Query =.. [Attr, Val],
    call(Query).

get_needs(List) :-
    findall( need(Attr, Prompt), retract(need(Attr,Prompt)), List).

add_known(Attr, Val) :-
    assert( known(Attr, Val) ).

Calling the API

The API would be used by a Web server, or other program, something like this:

while not call_prolog(solve(pet, X))
    call_prolog(get_needs(Needs))
    post queries to user and get answers back
    for each needed attribute
        call_prolog(add_known(Attr, Val))
show user value of X

This implementation of ask/3 and get_needs/1 will cause the system to generate a need for the first required attribute of each rule and send them all together after the first call to solve/2. In other words, it will batch up a number of questions for the user to get things started. This can be a desirable way to start, allowing for more efficient communication between the Web server and the client.

The Toy Dialog

Using the toy logic base, a user would first experience a welcome screen and a start button, which would start the consultation. The first set of questions would come back:

Is it house broken?
How many legs?
What does it eat?
What size is it?

The user might then provide answers to these questions like: yes, 4, dog_food, and medium.

The Web server would then loop, asserting those answers and calling solve/2 again. This time there would be only one question to ask:

What sound?

The user might answer: woof.

And now the system can find an answer and will let the user know the pet is a dog.

This basic architecture can be expanded to allow inclusion of menus and/or edit checks. The loop can be changed so that only one question gets asked at a time. And, for finer control, the descriptions of the askable attributes can include a field that indicates related attributes that should be asked at the same time. In other words, relationships can be specified between askable attributes that will make the Web interaction with the user more reasonable.
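One way to sketch those extensions, in Python (hypothetical code; the field names "menu" and "ask_with" are invented for illustration), is to enrich the askable-attribute descriptions with a menu of legal values for edit checks and a list of related attributes to ask on the same screen:

```python
# Hypothetical sketch: attribute descriptions carrying a menu of legal
# values (for edit checks) and related attributes to ask together.

ATTRIBUTES = {
    "size":  {"prompt": "What size is it?",
              "menu": ["small", "medium", "large"],
              "ask_with": ["house_broken"]},
    "house_broken": {"prompt": "Is it house broken?",
                     "menu": ["yes", "no"],
                     "ask_with": []},
}

def batch_for(attr):
    """One needed attribute expands into the batch of prompts to send."""
    batch = [attr] + ATTRIBUTES[attr]["ask_with"]
    return [(a, ATTRIBUTES[a]["prompt"]) for a in batch]

def valid(attr, value):
    """Edit check: reject answers outside the attribute's menu."""
    menu = ATTRIBUTES[attr]["menu"]
    return not menu or value in menu
```

Asking related attributes together keeps the number of round trips down without sending every question at once.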

Test Harness

It is often convenient to build a test harness in Prolog for a Prolog module that will be used as a service. In this way the Prolog code can be debugged entirely in a Prolog environment.

Here is the code that simulates a Web server calling the API. It doesn't do anything fancy with the I/O, but does accurately capture the dialog between the Web server and the back-end logic base.

main :-
    init,
    dialog.

dialog :-
    solve(pet, X),       % if it fails, the next clause gets tried
    write(pet = X), nl.  % if it succeeded, then we're done
dialog :-
    get_needs(Needs),
    Needs \= [],
    web_prompt(Needs),
    dialog.
dialog :-
    write('no answer'), nl.

web_prompt(Needs) :-
    batch_questions(Needs, Prompts),
    write(Prompts), nl,
    write('Enter Prolog list with all corresponding answers'), nl,
    write(' (ex. [no, 4, seeds, small]. or [woof].)'), nl,
    read(Answers),
    remember_answers(Needs, Answers).

batch_questions([], []).
batch_questions([need(_,Prompt)|Needs], [Prompt|Prompts]) :-
    batch_questions(Needs, Prompts).

remember_answers([], []).
remember_answers([need(Attr,_)|Needs], [Answer|Answers]) :-
    add_known(Attr, Answer),
    remember_answers(Needs, Answers).

Trying it:

?- main.
[Is it house broken? , How many legs? , What does it eat? , What size is it?]
Enter Prolog list with all corresponding answers
    (ex. [no, 4, seeds, small]. or [woof].)
[yes, 4, dog_food, medium].
[What sound? ]
Enter Prolog list with all corresponding answers
    (ex. [no, 4, seeds, small]. or [woof].)
[woof].
pet = dog


http://www.thearchitectjournal.com/Journal/issue1/article5.html - A Microsoft Architect Journal article done by your editor that is a bit long-winded but has, near the end, more details on the vaccination system.

http://www.doctordeluca.com/Library/PublicHealth/Vaccine/ImmGuideKnowMain1-98.pdf - The paper describing Perry Miller's work at Yale on a vaccination system. It has excellent coverage of the issues in building such a system and the problem domain.

http://web.infoave.net/~kbcowart/ - ChatBot Challenge - try dozens of chatbots and vote for the best.

As always, feedback, ideas, and especially interesting links are welcome. Past issues are available at either www.ddj.com or www.ainewsletter.com.

Until next month.