Logic Programming
Parallel Logic Programming
Our main objective is to extend the basic Resolution Principle of logic programs in order to exploit both OR- and AND-parallelism. We have been investigating a number of resolution algorithms for logic programs that would facilitate the use of distributed computer architectures. To combine the two types of parallelism, we treat each alternative path of the resolution tree as a fully independent computation, which gives rise to OR-parallel execution. In addition, the conjunction of subgoals along each deterministic path can be resolved in parallel, yielding AND-parallelism.
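The two forms of parallelism described above can be illustrated with a minimal sketch. This is a toy model, not the group's resolution algorithm: the facts, predicate names, and thread-based execution are illustrative assumptions. Alternative bindings for the intermediate variable are explored as independent computations (OR-parallelism), and the two subgoals of a clause body are evaluated concurrently (AND-parallelism).

```python
# Toy illustration of OR- and AND-parallelism for the clause
#   grandparent(X, Z) :- parent(X, Y), parent(Y, Z).
# All names and data are hypothetical examples.
from concurrent.futures import ThreadPoolExecutor

# Toy fact base: parent(X, Y) pairs.
PARENTS = {("tom", "bob"), ("tom", "liz"), ("bob", "ann")}

def parent(x, y):
    return (x, y) in PARENTS

def grandparent(x, z):
    people = {p for pair in PARENTS for p in pair}

    def try_path(y):
        # AND-parallelism: the two subgoals of the clause body,
        # parent(x, y) and parent(y, z), are evaluated concurrently.
        with ThreadPoolExecutor(max_workers=2) as and_pool:
            f1 = and_pool.submit(parent, x, y)
            f2 = and_pool.submit(parent, y, z)
            return f1.result() and f2.result()

    # OR-parallelism: each candidate binding for Y is an alternative
    # path of the resolution tree, explored as an independent computation.
    with ThreadPoolExecutor() as or_pool:
        return any(or_pool.map(try_path, people))
```

In a real system the alternative paths would run on separate processing elements rather than threads, but the independence of the OR-branches and the concurrent evaluation of conjoined subgoals is the same idea.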
A number of abstract machines and architectures have been designed to implement several resolution algorithms that exploit both the OR- and AND-parallelism of logic programs. Our current research concentrates on a distributed-memory architecture, named OASys, in which the processing elements performing the OR-parallel computation possess their own address space, while simpler processing units are assigned the AND-parallel computation and share the same address space. OASys execution is based on distributed scheduling, which allows either environment copying or recomputation of paths, depending on a cost estimated before the distribution of work. A prototype implementation, based on a network of workstations, is currently in progress.
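The copy-versus-recompute decision can be sketched as a simple cost comparison. This is a hedged illustration of the idea only: the cost model, parameter names, and default weights below are assumptions, not the actual OASys scheduler.

```python
# Illustrative sketch of a cost-based scheduling decision: before a branch
# is handed to another processing element, compare the estimated cost of
# copying the current environment against recomputing the path from the
# root. The linear cost model and default weights are hypothetical.

def choose_strategy(env_size_bytes, path_length,
                    copy_cost_per_byte=1.0, recompute_cost_per_step=50.0):
    """Return 'copy' or 'recompute', whichever is estimated cheaper."""
    copy_cost = env_size_bytes * copy_cost_per_byte
    recompute_cost = path_length * recompute_cost_per_step
    return "copy" if copy_cost <= recompute_cost else "recompute"
```

Under this model, a small environment at the end of a long path favours copying, while a large environment reached by a short path favours recomputation.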
Distributed Constraint Logic Programming
The goal of this research is to improve the efficiency of logic programming systems on combinatorial problems by combining parallel execution of logic programs with support for constraints. The focus is on extending logic programming to support constraints and parallelism at the same time. Various applications are also being developed to study the specific requirements that different problems place on the CLP system, such as:
 timetabling
 workforce scheduling
 production line optimization
 traffic lights control
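The CLP idea behind applications such as timetabling can be sketched on a toy instance: logic-style backtracking search over variable bindings, with constraint checks pruning inconsistent branches early. The courses, slots, and conflict relation below are illustrative assumptions, not the group's actual system.

```python
# Minimal sketch of constraint-guided search on a toy timetabling problem:
# assign a slot to each course so that conflicting courses get different
# slots. The instance and names are hypothetical.

def timetable(courses, slots, conflicts):
    """Return a course -> slot assignment satisfying all conflicts, or None."""
    assignment = {}

    def consistent(course, slot):
        # Constraint check: no already-assigned conflicting course
        # may share this slot.
        for a, b in conflicts:
            other = b if a == course else a if b == course else None
            if other is not None and assignment.get(other) == slot:
                return False
        return True

    def search(i):
        if i == len(courses):
            return dict(assignment)
        course = courses[i]
        for slot in slots:
            if consistent(course, slot):  # prune inconsistent branches early
                assignment[course] = slot
                result = search(i + 1)
                if result is not None:
                    return result
                del assignment[course]    # backtrack
        return None
    return search(0)
```

A full CLP system would propagate constraints over domains rather than merely check them, and would run branches of the search in parallel, but the interplay of logical search and constraints is the same in miniature.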
