Here follows my abstract and proposal. It is unchanged, except for the removal of email addresses and the addition of small dashes to improve formatting.

Abstract:

LISA (Lisp-based Intelligent Software Agents) currently uses an implementation of the Rete algorithm (a many-to-many matching system) as its inference engine.  The Rete algorithm has excellent performance for certain applications, such as deductive databases and production systems.  However, in many situations its many-to-many matching is overkill.

Implementing backward chaining in LISA will provide a system of one-to-many matching, allowing LISA to be efficiently used for logic programming.

LISA Homepage:
http://lisa.sourceforge.net/

Proposal:

LISA (Lisp-based Intelligent Software Agents) currently uses an implementation of the Rete algorithm as its inference engine. Although this method of forward chaining has excellent performance for certain applications, in many situations its many-to-many matching is overkill.
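
To make the many-to-many matching concrete, here is a rough sketch of a forward-chaining rule in CLIPS-style syntax (illustrative only – the fact names are made up, and this is not necessarily LISA’s exact defrule syntax):

    ;; Illustrative CLIPS-style forward-chaining rule -- a sketch, not
    ;; necessarily LISA's exact syntax.  Each left-hand-side pattern
    ;; matches many facts, and the Rete join node must pair them
    ;; many-to-many before the rule can fire.
    (defrule grandparent
      (parent ?x ?y)
      (parent ?y ?z)
      =>
      (assert (grandparent ?x ?z)))

Whenever a new parent fact is asserted, forward chaining eagerly re-matches it against all existing facts and derives every grandparent conclusion, whether or not any of them is ever queried.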

Implementing backward chaining in LISA will provide a system of one-to-many matching, with performance benefits similar to those gained by using Dijkstra’s single-source algorithm in place of Warshall’s all-pairs algorithm when only one source is of interest. This performance increase will allow for efficient logic programming.

Backward chaining works by recursively determining which facts would be required to prove a goal, continuing until a solution is found. As a reference point, Prolog uses backward chaining in its inference engine.
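
As a minimal sketch of the idea (the rule representation and function names below are hypothetical and are not LISA’s API), a backward chainer unifies a goal against the heads of its rules and recursively proves each matching rule’s body:

    ;; A minimal backward-chaining sketch in Common Lisp.  The rule
    ;; representation and function names are hypothetical, not LISA's API.
    ;; Variables are symbols whose names begin with #\?.  For brevity there
    ;; is no occurs check, and rule variables are not renamed on each use
    ;; (a real implementation must standardize them apart).

    (defun variablep (x)
      (and (symbolp x)
           (plusp (length (symbol-name x)))
           (char= (char (symbol-name x) 0) #\?)))

    (defun unify (a b bindings)
      "Return BINDINGS extended so that A and B match, or :fail."
      (cond ((eq bindings :fail) :fail)
            ((eql a b) bindings)
            ((variablep a)
             (let ((binding (assoc a bindings)))
               (if binding
                   (unify (cdr binding) b bindings)
                   (cons (cons a b) bindings))))
            ((variablep b) (unify b a bindings))
            ((and (consp a) (consp b))
             (unify (cdr a) (cdr b) (unify (car a) (car b) bindings)))
            (t :fail)))

    (defparameter *rules*
      ;; Each rule is (head . body); a fact is a rule with an empty body.
      '(((parent alice bob))
        ((parent bob carol))
        ((grandparent ?x ?z) (parent ?x ?y) (parent ?y ?z))))

    (defun prove (goal bindings)
      "Find every rule whose head unifies with GOAL, prove its body recursively, and return a list of successful binding lists."
      (loop for (head . body) in *rules*
            for b = (unify goal head bindings)
            unless (eq b :fail)
              append (prove-all body b)))

    (defun prove-all (goals bindings)
      (if (null goals)
          (list bindings)
          (loop for b in (prove (first goals) bindings)
                append (prove-all (rest goals) b))))

    ;; Example: (prove '(grandparent alice ?who) nil) succeeds with a
    ;; binding list in which ?who resolves (through ?z) to CAROL.

The point relevant to this proposal is that prove only visits rules and facts that can contribute to the goal at hand; it never derives conclusions that nobody asked for.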

Although both forward-chaining and backward-chaining algorithms are linear in the size of the knowledge base, backward chaining is far more effective at ‘predicting’ which facts are required to derive a conclusion, because it works goal-directed from the query rather than data-driven from the facts, reducing the number of wasted node visits. It has been estimated that backward chaining may be 30-50x faster than forward chaining [1].

Further information about backward chaining in general may be found in chapters 7 and 9 of Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig.

A plan follows:

2-3 Weeks: Investigate feasibility of various approaches, including, but not restricted to:

– Translation-modeled approaches (used in Prolog implementations):
  – Warren Abstract Machine (WAM) [3]
  – Tree-Oriented Abstract Machine (TOAM) [4][5] or TOAM Jr. [6]

– Approaches modeled after existing open-source implementations:
  – Mandarax Inference Engine [7]
  – EulerSharp [8]

Considerations will include performance, clarity, and consistency with LISA’s existing API, informed by analysis of existing Lisp logic-programming APIs [9][10].

Remaining Weeks: Implementation – schedule TBD, depending upon the algorithm selected.

David Hilton is, depending upon one’s point of view, either a Junior or a Senior (or, arguably, both at once) in the Computer Science Department at Brigham Young University. He is Chair of the BYU ACM student chapter.

He has little formal experience with Lisp, but is very interested in learning more. To date, his Lisp projects include a finite state machine evaluator and a probabilistic grammar checker. Despite his limited experience, he has observed that Lisp is very succinct. He currently uses Scheme in his Programming Languages course, the one and only course at BYU that uses a Lisp.

To date, he has done software development at three companies (two related to medical devices, one to web programming), and thinks that this would be an excellent opportunity to experience another side of development.

He is loath to continue the narrative when further information can easily be extracted from his resume [2].

References:

Scott Woodfield (ACM Chapter Advisor)
– removed@cs.byu.edu

Andrew May (Employer)
– removed@gmail.com

David Wilcox (Fellow Student)
– removed@gmail.com

Links:

[1] Peter Lin (woolfel.blogspot.com):
http://blogs.pathf.com/business_rules/2006/04/exagerated_clai.html

[2] Resume (David Hilton)
http://students.cs.byu.edu/~removed-because-of-references (newer, more formal)
http://students.cs.byu.edu/~removed-because-of-references (older, warning: contains statements that may be considered humorous)

[3] Warren’s Abstract Machine: A Tutorial Reconstruction (Hassan Aït-Kaci)
http://www.vanx.org/archive/wam/wam.html

[4] Parameter passing and control stack management in Prolog implementation revisited (Neng-Fa Zhou)
http://portal.acm.org/citation.cfm?id=236120

[5] A High-Performance Abstract Machine for Prolog and its Extensions (Neng-Fa Zhou)
http://www.probp.com/whitepaper/whitepaper.html

[6] A Register-free Abstract Prolog Machine with Jumbo Instructions (Neng-Fa Zhou)
http://www.sci.brooklyn.cuny.edu/~zhou/papers/toamjr.pdf

[7] The Mandarax Inference Engine
http://mandarax.sourceforge.net/

[8] EulerSharp
http://sourceforge.net/projects/eulersharp

[9] LispWorks KnowledgeWorks
http://www.lispworks.com/products/knowledgeworks.html

[10] PLT Scheme Inference Collection
http://planet.plt-scheme.org/display.ss?package=inference.plt&owner=williams

One Comment

  1. Additional References:

    A discussion with David Young about backward chaining – it contains 2 references to potentially valuable resources
    http://www.interactivecode.com/googles-summer-code-17/lisa-request-features-36049-2/

    PAIP (Paradigms of Artificial Intelligence Programming, by Peter Norvig) – implements Prolog in Common Lisp

