Introduction and Goals

In the Proof-of-Concept, the research and development results of the various subprojects flow together into a uniform, integrated software system. With the help of the Proof-of-Concept, typical scenarios of an On-The-Fly market can be played through. The goals of the Proof-of-Concept are:

  • Demonstration of the general feasibility of On-The-Fly Computing
  • Imparting the concepts of On-The-Fly Computing to both a specialist audience and lay people
  • Clarification of the relationships between the subprojects
  • Testing new approaches and reviewing hypotheses

OTF Machine Learning Scenario

The Proof-of-Concept is designed to be domain-agnostic, such that it can be adapted to any kind of OTF problem. For illustration purposes, the Proof-of-Concept focuses on an application scenario that configures tailor-made Machine Learning services on the fly. The main problem addressed is the automatic creation of a learning process that generates a predictive model (e.g., a classifier) on the basis of given training data. Such a learning process consists of different algorithms or software components that have to be appropriately parameterized and combined with each other. The result of a corresponding selection and configuration process is a Machine Learning Pipeline. This OTF Machine Learning Scenario has the characteristics of an OTF problem:

  • end users do not need to know the underlying software architecture or perform manual steps in this process,
  • the user is provided with an executable service in the form of a Machine Learning Pipeline,
  • this service is optimized for the individual requirements (i.e., the training data), and
  • the provision of the service takes place within a short time.
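As a concrete illustration, the following minimal sketch uses scikit-learn (one of the libraries in the scenario) to show the kind of artifact such a configuration process produces: a pipeline whose components and parameters are chosen for the given training data. The component and parameter choices below are invented for illustration; in the Proof-of-Concept they are selected automatically.

```python
# A minimal sketch of a configured Machine Learning Pipeline; the
# chosen components and parameters here are illustrative only.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)  # stands in for the requester's data

# one concrete point in the configuration search space: a preprocessing
# step combined with a parameterized learner
pipeline = Pipeline([
    ("scale", StandardScaler()),
    ("learn", RandomForestClassifier(n_estimators=100, random_state=0)),
])

score = cross_val_score(pipeline, X, y, cv=5).mean()
print(f"mean cross-validated accuracy: {score:.2f}")
```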


Numerous concepts and methods of OTF Computing, developed in the respective subprojects, have already been integrated into the Proof-of-Concept.

Subproject | Contribution
A3, A4 | Self-stabilizing publish-subscribe system for the market and OTF Providers
– | Concepts for the reputation system for rating service compositions
– | Chatbot for user-friendly requirements specification and a matcher for non-functional requirements
– | Configurator for service composition using heuristic search
– | Verification of functional properties within operation sequences
– | Certification and validation of functional properties of basic services
C2, C4 | Authentication of ratings, authorization for buying service compositions, and their access control
– | Deployment of basic services in Compute Centers and execution of service compositions in heterogeneous computing environments
– | Conformance checking of the Proof-of-Concept architecture against the On-the-Fly architecture framework


The Proof-of-Concept has several user interface views for the individual OTF market roles, i.e., Service Requesters, OTF Providers, Service Providers, Compute Centers, and a Market Provider.

The process of the Service Requester consists of four phases: requirements specification, configuration, buying & using, and rating.

Requirements Specification

The process starts with creating a new request. Service Requesters conduct a dialog with a chatbot, in which they describe the functional and non-functional requirements of their tailor-made service composition. The chatbot extracts a machine-readable request specification from the requester's answers and sends it to the Market Provider. The Market Provider broadcasts the service request to all OTF Providers via a self-stabilizing publish-subscribe system. The OTF Providers respond with a confidence score that estimates how well they can solve the problem. Based on this confidence score and the overall reputation of the OTF Providers, the Service Requester chooses the OTF Provider from whom they want to receive offers.
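The broadcast-and-select step can be sketched as follows; all class names, fields, and scores here are illustrative assumptions, not the actual system's API:

```python
# Hypothetical sketch: the Market Provider publishes a request to all
# subscribed OTF Providers, each answers with a confidence score, and
# the requester picks a provider by confidence and reputation.
from dataclasses import dataclass


@dataclass
class OTFProvider:
    name: str
    reputation: float  # aggregated rating from past compositions

    def confidence(self, request: dict) -> float:
        # a real provider would estimate how well it can solve the
        # request; a fixed per-provider score stands in for that here
        return request["confidence"].get(self.name, 0.0)


def choose_provider(request: dict, providers: list) -> OTFProvider:
    """Broadcast the request and pick the provider that maximizes
    confidence * reputation."""
    return max(providers, key=lambda p: p.confidence(request) * p.reputation)


providers = [OTFProvider("P1", reputation=0.9), OTFProvider("P2", reputation=0.6)]
request = {"task": "classification", "confidence": {"P1": 0.5, "P2": 0.95}}
chosen = choose_provider(request, providers)
```

Here P2's high confidence (0.95 × 0.6 = 0.57) outweighs P1's better reputation (0.5 × 0.9 = 0.45), so P2 is chosen.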


Configuration

The configuration process starts and the Service Requester can inspect the progress of the request. The OTF Provider uses templates for Machine Learning Pipelines and fills the placeholders of the templates with basic services using a heuristic search. The configuration search space is visualized as a graph, in which every node represents a concrete Machine Learning Pipeline.
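A hedged sketch of template-based configuration: a template has placeholder slots, and a greedy pass fills each slot with the candidate basic service that scores best under a heuristic. The slot names, services, and scores are invented for illustration; the actual configurator performs a more elaborate heuristic search over the whole graph of pipelines.

```python
# a pipeline template with placeholder slots to be filled
template = ["preprocessing", "feature_selection", "classifier"]

# candidate basic services per slot, each with a heuristic score
# (e.g., an estimate of the validation accuracy it contributes)
candidates = {
    "preprocessing":     {"scaler": 0.7, "normalizer": 0.6},
    "feature_selection": {"pca": 0.8, "identity": 0.5},
    "classifier":        {"random_forest": 0.9, "svm": 0.85},
}


def configure(template, candidates):
    """Fill every placeholder with its highest-scoring candidate service."""
    return [max(candidates[slot], key=candidates[slot].get) for slot in template]


pipeline = configure(template, candidates)  # one node in the search graph
```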

Buying and Using

After the configuration process is done, the OTF Provider offers the Machine Learning Pipelines to the Service Requester. The offers differ in their non-functional properties, and the Service Requester chooses the offer that best fits their needs and buys the service composition. The composition is automatically deployed in the Compute Center, where it is ready to be used. Access control prevents unauthorized third parties from using the service composition.
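A toy sketch of such an access check (user and composition names invented): the Compute Center executes a composition only for users who have bought it.

```python
# record of purchases as (user, composition) pairs; illustrative only
purchases = {("alice", "pipeline-42")}


def invoke(user: str, composition: str) -> str:
    """Run a purchased composition; reject unauthorized third parties."""
    if (user, composition) not in purchases:
        raise PermissionError(f"{user} has not bought {composition}")
    return f"executing {composition} for {user}"
```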


Rating

Finally, Service Requesters can rate their service composition and share their experience with other Service Requesters.

The Market Provider can visualize the self-stabilizing OTF Provider network and can monitor the processes that are ongoing in the market.

OTF Providers can inspect the configuration process internals and view verification results of the basic services.

Service Providers publish libraries to the OTF market. A library consists of several basic services. A basic service is a piece of code that can be executed stand-alone, without relying on any other services. Basic services can be arranged in service compositions. In the example scenario, we use the Machine Learning libraries Weka, scikit-learn, and TensorFlow, as well as two image processing libraries.

Executors are (virtual) computing units. Compute Centers register their executors with OTF Providers. The OTF Providers deploy a subset of the basic services on these executors. When an end user invokes their purchased service composition, it is actually executed on (several) executors where the needed basic services are available.
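This routing can be sketched as follows, with invented executor and service names: each executor hosts a subset of the basic services, and invoking a composition maps every step to an executor on which the needed service is deployed.

```python
# which basic services are deployed on which executor (illustrative)
deployments = {
    "executor-1": {"scale", "pca"},
    "executor-2": {"random_forest"},
}


def route(composition):
    """Map each basic service of the composition to a hosting executor."""
    plan = []
    for service in composition:
        executor = next(e for e, svcs in deployments.items() if service in svcs)
        plan.append((service, executor))
    return plan


plan = route(["scale", "pca", "random_forest"])
```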

Organization, DevOps, and Technologies

The development process of the Proof-of-Concept is controlled by a chief architect and a committee. A six-person development team is responsible for integrating the core components of the subprojects into the Proof-of-Concept's microservice architecture and for developing any components that are not subproject-related, such as a unified user interface.

In addition, the DevOps development process includes Continuous Integration and Continuous Delivery: GitLab is used to manage the program code of the components. The Jenkins build server monitors program code changes, compiles the program code with Gradle, analyzes it for errors with SonarQube, automatically runs JUnit tests, automatically creates Docker containers, and distributes them into a Kubernetes cluster where the components are ultimately executed. All processes can be monitored with Elasticsearch and Kibana.



Gregor Engels

Dr. Christian Soltenborn

Jan Bobolz (C1), Thorsten Götte (A3), David Niehues (C1), Marcel Wever (B2)

Oliver Butterwegge, Jan-Niclas Nutt, Saman Soltani