Overview

  1. Introduction
  2. Network Structure and Communication Protocols
  3. Modular Architecture
  4. Autonomic Computing
  5. Problem Solving


1. Introduction

This page gives a more detailed overview of the system, paying particular attention to its modular construction. The licas system is an open source framework for building service-based networks. The framework comes with a server for running the services on, mechanisms for adding services to the server, mechanisms for linking services with each other, and mechanisms for allowing the services to communicate with each other. The default communication protocol inside licas itself is XML-RPC, but any browser-based client can use REST-style or even simple HTTP GET calls to access the server. Code has been added to the base server classes so that they can recognise the form that a call is received in and then process it accordingly. Through Java, dynamic invocation of external Web Services is also possible.
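
As a rough illustration of this content-based handling, the following Java sketch guesses whether an incoming request is an XML-RPC call or a plain HTTP GET by inspecting the request line and body. The class and method names are invented for the example and are not part of the licas API.

    // Illustrative sketch only: a minimal check that guesses the call style
    // from the raw request, in the spirit of the server behaviour described
    // above. None of these names come from the licas classes.
    public class CallStyleDetector {

        public enum CallStyle { XML_RPC, HTTP_GET, UNKNOWN }

        /** Inspect the first request line and body to decide how to parse the call. */
        public static CallStyle detect(String requestLine, String body) {
            if (requestLine.startsWith("GET ")) {
                return CallStyle.HTTP_GET;                  // REST-style / browser call
            }
            if (requestLine.startsWith("POST ") && body != null
                    && body.contains("<methodCall>")) {
                return CallStyle.XML_RPC;                   // XML-RPC envelope in the body
            }
            return CallStyle.UNKNOWN;
        }

        public static void main(String[] args) {
            String get = "GET /service/MyService/getValue?key=temp HTTP/1.1";
            String post = "POST /xmlrpc HTTP/1.1";
            String body = "<?xml version=\"1.0\"?><methodCall>"
                    + "<methodName>MyService.getValue</methodName></methodCall>";
            System.out.println(detect(get, null));   // HTTP_GET
            System.out.println(detect(post, body));  // XML_RPC
        }
    }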


The system also provides an implementation of the standard autonomic computing framework, including the four main components, behaviours and policy scripts. Only the framework is provided, however: you are expected to write the actual implementations of the main monitoring components yourself, based on your own problem specification. Services are protected with passwords. This can be a single password for the whole service or, through a script, different passwords for different methods. Other scripts can be used as contracts or service-level agreements between services, for which the framework is in place. There are also some basic search and metadata processing capabilities, to allow services to be found.
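
The idea of per-method passwords can be pictured as a simple lookup from method name to password, with a service-wide default. The sketch below is only an illustration of that idea, under invented names; how licas actually reads its security scripts is not shown.

    import java.util.HashMap;
    import java.util.Map;

    // Hedged sketch of per-method passwords: a mapping from method name to the
    // password that unlocks it, with one default password for the whole service.
    public class MethodPasswordSketch {

        private final String servicePassword;
        private final Map<String, String> methodPasswords = new HashMap<>();

        MethodPasswordSketch(String servicePassword) {
            this.servicePassword = servicePassword;
        }

        void protectMethod(String methodName, String password) {
            methodPasswords.put(methodName, password);
        }

        /** A method-specific password wins over the service-wide one. */
        boolean isAllowed(String methodName, String supplied) {
            String required = methodPasswords.getOrDefault(methodName, servicePassword);
            return required.equals(supplied);
        }

        public static void main(String[] args) {
            MethodPasswordSketch security = new MethodPasswordSketch("servicePwd");
            security.protectMethod("shutdown", "adminPwd");
            System.out.println(security.isAllowed("getInfo", "servicePwd"));   // true
            System.out.println(security.isAllowed("shutdown", "servicePwd"));  // false
        }
    }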



2. Network Structure and Communication Protocols

The system is designed to be peer-to-peer (p2p), where any service can both send (client) and receive (server) messages from any other service. Any remote message that is received first passes through the base server, before being directed to the service that it is addressed to. There it can be interpreted as XML-RPC or as a direct HTTP request. The same communication process also allows for direct object invocation, without parsing to or from XML first. The architecture is a typical hybrid p2p architecture and is also the sort of thing that Cloud computing systems might provide. Figure 1 shows the general architecture of a distributed network.





Figure 1. Example of two Licas servers running service networks.


This diagram shows two servers, each running a different network of services. The services in either network can be structured by permanent links, represented by the solid black arrows. Some services may also have created dynamic links between each other, represented by the dashed red lines. The dynamic links can cross over servers as well. Remote communication between the servers is done using XML-RPC. The internal communication protocol is therefore XML-RPC, but the other formats are also possible for clients. The server is still a web server and also has limited capabilities to retrieve HTML pages from the file store, without implementing Java services; the Java services provide much more functionality, however. Figure 2 shows where the XML-RPC protocol is used and where Web Service invocation can be used.
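
To make the routing idea concrete, the following sketch shows a base server that holds its loaded services by name, routes a parsed remote call to the addressed service, and hands out a direct reference for local calls that skip the XML layer. All of the names are invented for the example; the real licas classes differ.

    import java.util.HashMap;
    import java.util.Map;

    // Illustrative sketch only (not the licas API): messages arrive at a base
    // server and are routed to the addressed service; local callers can skip
    // the XML parsing and invoke the service object directly.
    public class RoutingSketch {

        /** Stand-in for a loaded service; the real licas Service class differs. */
        interface LocalService {
            Object invoke(String method, Object... args);
        }

        static class BaseServer {
            private final Map<String, LocalService> services = new HashMap<>();

            void addService(String name, LocalService service) {
                services.put(name, service);
            }

            /** Remote path: the parsed XML-RPC call names the target service. */
            Object routeRemoteCall(String serviceName, String method, Object... args) {
                LocalService target = services.get(serviceName);
                if (target == null) throw new IllegalArgumentException("No service: " + serviceName);
                return target.invoke(method, args);
            }

            /** Local path: hand back a direct reference, no XML parsing involved. */
            LocalService getLocalReference(String serviceName) {
                return services.get(serviceName);
            }
        }

        public static void main(String[] args) {
            BaseServer server = new BaseServer();
            server.addService("Echo", (method, a) -> method + ":" + a[0]);
            System.out.println(server.routeRemoteCall("Echo", "say", "hello"));      // remote style
            System.out.println(server.getLocalReference("Echo").invoke("say", "direct"));
        }
    }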





Figure 2. Example showing the different types of communication.


So internal communication in the licas system is by XML-RPC only, where service communication is done at the level of invoking a method on another service. A client can use either XML-RPC or RESTful-style messages to invoke a service running on a server, and either a client or a service can call an external Web Service dynamically, also through the licas classes.
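
As an example of the browser-style access, a client could issue a plain HTTP GET against the server using standard Java, as sketched below. The host, port and query path are hypothetical; the actual URL scheme is defined by the licas server.

    import java.net.URI;
    import java.net.http.HttpClient;
    import java.net.http.HttpRequest;
    import java.net.http.HttpResponse;

    // Hedged example: calling a service on a licas server with a plain HTTP GET,
    // as a browser-based client might. The endpoint below is invented for
    // illustration; check the licas documentation for the real URL format.
    public class RestStyleClient {
        public static void main(String[] args) throws Exception {
            HttpClient client = HttpClient.newHttpClient();
            HttpRequest request = HttpRequest.newBuilder(
                    URI.create("http://localhost:8888/MyService/getInfo?format=xml")) // hypothetical endpoint
                    .GET()
                    .build();
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println(response.statusCode());
            System.out.println(response.body());
        }
    }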



3. Modular Architecture

The framework can be broken down into several modules, and not all of them are required to use the system. At the lowest level is a Service class. If you extend this with your own class, then you can add your own service to a network with all of the required licas functionality. The Auto class extends the Service class and is slightly more complex, providing for agent-like communications or a continuous behaviour running on a separate thread. Note, however, that this is only a framework and the actual implementation still needs to be written by the programmer. On top of this, there is the possibility of adding metadata to describe the service. All of the metadata is in XML format. The metadata can also be used to describe different security levels, so that the methods of a service can be protected at different access levels, each requiring a different password. Figure 3 shows the modular architecture.






Figure 3. Modular architecture of the licas system. The server and service modules are shown.


The server modules are shown in green. There is a default HTTP server that can manage the metadata, linking and communication mechanisms. You do not have to link the loaded services, for example, but this is the correct way to provide some sort of structure. The dynamic linking mechanism is provided as a utility 'Link' service that you add to your own service and then invoke; there are built-in mechanisms for doing this. The remote communication can be by XML-RPC. If passing complex Java objects, you need to write a parser for those classes. Alternatively, you can serialize your objects, or local calls can use direct references. Web Service invocation is then another additional feature.
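
One common way to pass a complex Java object through an XML-based protocol is to serialize it and embed the encoded text in the message, as the paragraph above mentions. The sketch below shows the general technique with standard Java serialization; how licas itself packages such objects may differ.

    import java.io.*;
    import java.util.Base64;

    // Hedged sketch: serialize a complex parameter object and encode it as text
    // so that it can travel inside an XML-based message, then decode it again.
    public class SerializeForTransport {

        /** A complex parameter type that plain XML-RPC types cannot describe. */
        static class SensorReading implements Serializable {
            private static final long serialVersionUID = 1L;
            final String sensorId;
            final double value;
            SensorReading(String sensorId, double value) { this.sensorId = sensorId; this.value = value; }
            @Override public String toString() { return sensorId + "=" + value; }
        }

        static String toBase64(Serializable obj) throws IOException {
            ByteArrayOutputStream bytes = new ByteArrayOutputStream();
            try (ObjectOutputStream out = new ObjectOutputStream(bytes)) { out.writeObject(obj); }
            return Base64.getEncoder().encodeToString(bytes.toByteArray());
        }

        static Object fromBase64(String text) throws IOException, ClassNotFoundException {
            byte[] raw = Base64.getDecoder().decode(text);
            try (ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw))) {
                return in.readObject();
            }
        }

        public static void main(String[] args) throws Exception {
            String wireText = toBase64(new SensorReading("t1", 21.5)); // e.g. placed in an XML element
            System.out.println("Encoded length: " + wireText.length());
            System.out.println("Decoded: " + fromBase64(wireText));
        }
    }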


The services that are loaded onto the network can be derived from the base licas 'Service' class. Alternatively, the 'Auto' class provides slightly more functionality. You can also load in your own class that is not derived from licas at all; it will be stored in a wrapper before being loaded onto the network, but some of the default functionality might then be missing.
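
As an illustration of the Auto idea, the sketch below shows the general shape of a service whose continuous behaviour runs on its own thread. The stub base class and its method names are invented stand-ins, not the licas Auto class.

    // Illustrative sketch only: an 'Auto'-style service that runs a continuous
    // behaviour on its own thread. The real licas Auto class has its own method
    // names and lifecycle; this stub just mirrors the idea.
    public class AutoBehaviourSketch {

        /** Stand-in for the licas Auto base class. */
        static abstract class AutoStub implements Runnable {
            private volatile boolean running = true;

            public void start() { new Thread(this, getClass().getSimpleName()).start(); }
            public void stop()  { running = false; }

            @Override
            public void run() {
                while (running) {
                    executeBehaviour();               // the agent-like, repeated action
                    try { Thread.sleep(1000); }       // pause between behaviour cycles
                    catch (InterruptedException e) { Thread.currentThread().interrupt(); return; }
                }
            }

            /** The programmer supplies the actual behaviour implementation. */
            protected abstract void executeBehaviour();
        }

        /** Example user service: the behaviour just reports that it ran. */
        static class MonitorTemperature extends AutoStub {
            @Override
            protected void executeBehaviour() {
                System.out.println("Reading sensor and updating metadata...");
            }
        }

        public static void main(String[] args) throws Exception {
            MonitorTemperature service = new MonitorTemperature();
            service.start();
            Thread.sleep(3500);   // let the behaviour run a few cycles
            service.stop();
        }
    }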


4. Autonomic Computing

The system also protects any service that is communicated with by a wrapper object. This makes it difficult to obtain a direct service reference without the correct password, even through a local call. This wrapper object is a 'ServiceWrapper' by default. There is now also the option to add an 'AutonomicManager' wrapper, if the object being wrapped is derived from the licas 'Auto' service. The default implementation of this will allow the service to operate as normal, while also providing a message queue for storing and analysing messages that the service receives, and autonomously monitoring the service itself.
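
The wrapper idea can be sketched as follows: the real service object sits behind an object that checks a password on every call and, in the autonomic case, also records the messages it receives for later analysis. The classes below are invented for illustration and are not the licas ServiceWrapper or AutonomicManager.

    import java.util.ArrayDeque;
    import java.util.Queue;

    // Illustrative sketch only: a wrapper that guards the real service behind a
    // password check and keeps a queue of received messages for monitoring.
    public class WrapperSketch {

        interface WrappedService {
            Object handle(String message);
        }

        static class GuardingWrapper {
            private final WrappedService service;
            private final String password;
            private final Queue<String> messageQueue = new ArrayDeque<>();

            GuardingWrapper(WrappedService service, String password) {
                this.service = service;
                this.password = password;
            }

            /** Every call, even a local one, must present the correct password. */
            Object call(String suppliedPassword, String message) {
                if (!password.equals(suppliedPassword)) {
                    throw new SecurityException("Invalid password for wrapped service");
                }
                messageQueue.add(message);          // store for later monitoring/analysis
                return service.handle(message);
            }

            int queuedMessages() { return messageQueue.size(); }
        }

        public static void main(String[] args) {
            GuardingWrapper wrapper =
                    new GuardingWrapper(msg -> "echo:" + msg, "secret");
            System.out.println(wrapper.call("secret", "hello"));
            System.out.println("Stored messages: " + wrapper.queuedMessages());
        }
    }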

The autonomic manager is made up of monitor, analyser, planner and executor modules that are used to monitor the service in question and take action when there is a fault. Because this sort of activity is very application specific, it cannot be programmed completely and also lies outside the scope of licas. Therefore, only a framework is in place to allow these modules to be loaded and to work together. In the licas system, only the base service has an autonomic manager. Any service that is nested inside another service is taken to be a utility service to the base service and is not monitored by any other module.

The autonomic manager does not control the loaded service's actions, but only monitors it based on scripts or policies that are passed in as an admin document. The service itself invokes each monitoring operation by returning the evaluation of a behaviour execution to the manager. The manager then passes this to the monitoring modules, which evaluate the behaviour; note that a behaviour can be any sort of action or evaluation. So the service starts the monitoring process, but the manager then evaluates the feedback and flags any error. The framework that is in place should be helpful for this process, so it would be worth looking at the code to see how it works if you are going to implement these modules yourself. Figure 4 shows the basic architecture of the autonomic manager. Note that the licas Service class also now has a contract manager for processing contract proposals for certain services; these would be related to the stored metadata.
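
A minimal sketch of the monitor-analyse-plan-execute flow described above is given below, assuming a trivial threshold rule for deciding when a behaviour evaluation counts as a fault. The interfaces and the rule are invented; real modules would implement your own problem-specific logic.

    // Illustrative sketch only: a minimal monitor-analyse-plan-execute pipeline
    // of the kind the autonomic manager framework expects you to implement.
    public class MapeSketch {

        interface Monitor  { double collect(double behaviourResult); }
        interface Analyser { boolean isFault(double observation); }
        interface Planner  { String plan(double observation); }
        interface Executor { void execute(String action); }

        static class AutonomicManagerSketch {
            private final Monitor monitor = result -> result;                       // pass the value through
            private final Analyser analyser = obs -> obs < 0.5;                     // flag low evaluations as faults
            private final Planner planner = obs -> "restart-behaviour";             // trivial recovery plan
            private final Executor executor = action -> System.out.println("Executing: " + action);

            /** Called by the service with the evaluation of a behaviour execution. */
            void reportBehaviourResult(double evaluation) {
                double observation = monitor.collect(evaluation);
                if (analyser.isFault(observation)) {
                    System.out.println("Fault flagged for evaluation " + observation);
                    executor.execute(planner.plan(observation));
                }
            }
        }

        public static void main(String[] args) {
            AutonomicManagerSketch manager = new AutonomicManagerSketch();
            manager.reportBehaviourResult(0.9);   // healthy run, nothing flagged
            manager.reportBehaviourResult(0.2);   // poor evaluation, fault flagged
        }
    }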






Figure 4. Autonomic Manager wrapper with stored object.



5. Problem Solving

The system can be used to execute distributed services, which can run autonomous ‘behaviours’ that have their own internal control algorithms. These features are also integrated with the problem solving, where a distributed network of autonomous services can send their information to a centralised problem solver to perform other calculations. The default package provides a genetic algorithm approach for more centralised processing, for example. One test option is therefore to start a group of distributed services running that would autonomously interact with each other without intervention. The other is to control the tests through a specified number of test runs and a more centralised component. The centralised problem solver allows different base frameworks to be used: there is the more distributed linking option, or the centralised hyper-heuristic ‘grid’ option. The (new) proposal for creating a hyper-heuristic that can be used to match partial or uncertain information is included. That sort of framework allows for a heuristic search over information that is sent from distributed sources to the centralised component. There is also a basic hill-climbing grid approach, but the class structure is extendable, so that your own programs can be included and tested.
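
To give a feel for the kind of search the centralised solver can run, the following standalone sketch hill-climbs over a simple binary configuration. It is not taken from the ai_heuristic package; the scoring function and loop are invented for illustration.

    import java.util.Arrays;
    import java.util.Random;

    // Hedged sketch: a basic hill-climbing search that flips one element at a
    // time and only accepts changes that improve the (invented) score.
    public class HillClimbingSketch {

        /** Invented scoring function: the score is simply the number of ones. */
        static double score(int[] config) {
            return Arrays.stream(config).sum();
        }

        public static void main(String[] args) {
            Random random = new Random(42);
            int[] current = new int[10];                      // start from all zeros
            double currentScore = score(current);

            for (int step = 0; step < 200; step++) {
                int[] neighbour = current.clone();
                int i = random.nextInt(neighbour.length);
                neighbour[i] = 1 - neighbour[i];              // flip one element
                double neighbourScore = score(neighbour);
                if (neighbourScore > currentScore) {          // only accept improvements
                    current = neighbour;
                    currentScore = neighbourScore;
                }
            }
            System.out.println("Best score found: " + currentScore);
        }
    }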

So a unique feature of this system is that a solution for optimising information sources has been integrated with a service-based network, allowing services either to organise themselves based on mutual communications or to be organised by a more centralised control system. The framework can also be used simply as a general problem-solving environment, without considering services or networks, but it would then have more limited functionality. It is worth looking at the ‘ai_heuristic’ javadocs to see what algorithms are provided by default.

Figure 5 shows the basic architecture of the problem solver. The problem solving is performed locally and is not distributed throughout all of the network services or nodes. An information mediator can be used to retrieve the information from the distributed sources. This is then sent to the problem solver, which creates solutions of the specified type and clusters the sources, or solves the problem, as best it can. The resulting clusters can then be turned into dynamic links and used to update the network structure, for example. The information mediator can communicate directly with the services running on a network, with the results viewed in the GUI, or the problem solver can be used by itself without any network or GUI.






Figure 5. Problem Solving Architecture for Organising Information Sources.
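
Following the flow shown in Figure 5, the sketch below walks through the idea in miniature: a mediator's view of the sources is clustered (here by a naive topic match) and each cluster is turned into suggested dynamic links. All names and the clustering rule are invented for the example.

    import java.util.*;

    // Illustrative sketch only: gather information about distributed sources,
    // group similar ones, and suggest dynamic links between cluster members.
    public class ClusterToLinksSketch {

        public static void main(String[] args) {
            // Mediator output: each source name mapped to the topic it reports on.
            Map<String, String> sourceInfo = new LinkedHashMap<>();
            sourceInfo.put("serviceA", "weather");
            sourceInfo.put("serviceB", "traffic");
            sourceInfo.put("serviceC", "weather");
            sourceInfo.put("serviceD", "traffic");

            // "Solver": cluster sources that report the same topic.
            Map<String, List<String>> clusters = new LinkedHashMap<>();
            sourceInfo.forEach((source, topic) ->
                    clusters.computeIfAbsent(topic, t -> new ArrayList<>()).add(source));

            // Turn each cluster into pairwise dynamic-link suggestions.
            for (List<String> members : clusters.values()) {
                for (int i = 0; i < members.size(); i++) {
                    for (int j = i + 1; j < members.size(); j++) {
                        System.out.println("Suggest dynamic link: "
                                + members.get(i) + " <-> " + members.get(j));
                    }
                }
            }
        }
    }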