Image Processing Demonstrator

Overview

The On-The-Fly Image Processing demonstrator emphasizes the benefits of applying On-The-Fly Computing service composition techniques to the image processing domain. A web-based user interface enables users to utilize the composition techniques as a service in order to compose sequences of image processing services for live camera streams, predefined video streams, or predefined still images.

Example from the end-user domain: gradually changing a photo to achieve a desired effect.

In order to achieve a particular result, services can be composed either manually or automatically (with or without a learning component). In the case of manual composition, only the signature information of the services is considered, which ensures that the composed services can be executed. Deciding whether a combination of image processing services is reasonable is entirely up to the user.
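A minimal sketch of such a signature check is shown below, assuming a simple pipeline representation; the service names and types are illustrative and not taken from the demonstrator.

```python
from dataclasses import dataclass

@dataclass
class ServiceSignature:
    name: str
    input_type: str    # e.g. "ColorImage", "GrayscaleImage" (hypothetical types)
    output_type: str

def is_executable(pipeline):
    """A manual composition is executable if each service's output type
    matches the input type of its successor."""
    return all(a.output_type == b.input_type
               for a, b in zip(pipeline, pipeline[1:]))

pipeline = [
    ServiceSignature("LoadImage", "FilePath", "ColorImage"),
    ServiceSignature("ToGrayscale", "ColorImage", "GrayscaleImage"),
    ServiceSignature("EdgeDetect", "GrayscaleImage", "GrayscaleImage"),
]
print(is_executable(pipeline))  # True: the types line up; whether the result
                                # is reasonable remains up to the user
```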

In the case of automatic composition without learning, our adaptive service composition framework (see below) considers, in addition to signature information, semantic information specified as pre- and postconditions. By doing so, more reasonable solutions can be composed. When the learning module of our adaptive service composition framework is additionally incorporated, the composition process improves over an increasing number of composition processes in order to identify the best solutions among the set of reasonable solutions.
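To illustrate how pre- and postconditions rule out combinations that are merely type-correct, consider the following sketch; the service, its condition labels, and the specification format are assumptions for illustration only.

```python
# Hypothetical service specification with semantic pre- and postconditions.
threshold = {
    "name": "Threshold",
    "pre":  {"grayscale", "noise_reduced"},   # semantic preconditions
    "post": {"binarized"},                    # semantic postcondition
}

state_a = {"grayscale"}                     # type-correct, but semantically premature
state_b = {"grayscale", "noise_reduced"}    # preconditions fully satisfied

for state in (state_a, state_b):
    applicable = threshold["pre"] <= state
    print(sorted(state), "-> applicable:", applicable)
```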

Example from the expert/technical domain: gradually changing an image to prepare its content for subsequent information extraction.

Adaptive Service Composition Framework

The Service Composition component controls the overall composition process. It implements a forward search algorithm for planning-based service composition. The algorithm interacts with a Service Repository to obtain the most up-to-date service specifications and the associated executable services that can be applied in the current search state. The integrated Matching operator ensures syntactically correct interconnections based on signature information.
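The following is a hedged sketch of such a planning-based forward search; the repository interface and the state representation are simplified placeholders, not the component's real interfaces.

```python
from collections import deque
from dataclasses import dataclass

@dataclass(frozen=True)
class Service:
    name: str
    pre: frozenset        # facts required in the current search state
    adds: frozenset       # facts added by applying the service
    deletes: frozenset = frozenset()

class Repository:
    """Stand-in for the Service Repository: returns the services applicable
    in a given state (the real component also serves executable services)."""
    def __init__(self, services):
        self.services = services

    def matching(self, state):
        return [s for s in self.services if s.pre <= state]

def forward_search(initial_facts, goal, repo):
    """Breadth-first forward search over composition states."""
    start = frozenset(initial_facts)
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        state, plan = queue.popleft()
        if goal <= state:
            return plan                       # ordered list of composed services
        for s in repo.matching(state):
            nxt = frozenset((state - s.deletes) | s.adds)
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, plan + [s.name]))
    return None
```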

The Learning Recommendation System provides learned knowledge in order to support the composition component. However, the recommendation system does not dictate which search node should be visited next. As the name implies, it only recommends a node selection strategy based on learned knowledge. In contrast to the recommendation system, the composition component is memoryless: each search process starts from scratch without relying on knowledge from previous search processes.

The Composition Rule Manager (CRM) generates and maintains composition rules that were identified by the composition component during all search processes so far. The Temporal Difference Learner (TDL) implements the relevant concepts for reinforcement learning. Based on the CRM and the behavior of the composition component, the TDL automatically constructs, extends, and maintains a Markovian state space. The TDL also maintains and updates Q-values based on the reward given by the Automatic Evaluation component after a composed solution has been executed automatically by means of the Service Execution component. Results are stored in a MySQL database and can be viewed and processed with a separate analysis tool.
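As a rough illustration of the learning mechanics described above, the sketch below shows a standard one-step Q-learning update and an epsilon-greedy recommendation strategy; the state and action encodings, parameter values, and the way the reward arrives from the Automatic Evaluation component are assumptions, not the TDL's actual implementation.

```python
from collections import defaultdict
import random

q_values = defaultdict(float)            # (state, action) -> learned Q-value
alpha, gamma, epsilon = 0.1, 0.9, 0.2    # learning rate, discount, exploration rate

def recommend(state, actions):
    """Epsilon-greedy node-selection strategy derived from the Q-values;
    the composition component remains free to follow or ignore it."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: q_values[(state, a)])

def td_update(state, action, reward, next_state, next_actions):
    """One-step Q-learning update applied after a composed solution has been
    executed and rewarded by the evaluation step."""
    best_next = max((q_values[(next_state, a)] for a in next_actions), default=0.0)
    q_values[(state, action)] += alpha * (reward + gamma * best_next
                                          - q_values[(state, action)])
```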

Contact

The Image Processing Demonstrator has been developed by subproject B2. If you have any questions, please contact the research staff of subproject B2.