
Subproject B4

Proof-Carrying Services

Abstract

In this project, we investigate techniques for ensuring the quality of services that are procured on the market and assembled on-the-fly, without trusting the service provider. We aim to develop methods and concepts that allow the consumer of a service to check automatically, formally, and quickly whether the service possesses the desired properties, i.e., whether it adheres to a given specification.

We started our research with Proof-carrying Code (PCC) techniques for software services and reconfigurable hardware modules; for the latter, the technique is referred to as Proof-carrying Hardware (PCH). Our methods enable the service provider to automatically derive formal evidence of the desired quality of a service; deriving this evidence is typically costly. The consumer, in contrast, only validates the derived evidence, which is computationally much cheaper and still yields rigorous, tamper-proof assurance. See Figure 1 for an overview.

Fig. 1: Concept of Proof-carrying Code and Proof-carrying Hardware
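The following Python sketch illustrates this division of labour on a toy reachability problem; the transition system and all names are made up for this example, whereas real certificates concern programs or circuit configurations rather than such a toy system.

```python
# Illustrative sketch of the PCC/PCH division of labour (toy reachability problem).
# All names are made up for this example.

INITIAL = {0}
ERROR = {99}

def successors(state):
    """Toy transition relation: states 0..9 step to state+1 modulo 10."""
    return {(state + 1) % 10}

def provider_produce_certificate():
    """Provider side (expensive): compute an inductive set of reachable
    states by fixed-point iteration; this set serves as the certificate."""
    reached = set(INITIAL)
    frontier = list(INITIAL)
    while frontier:
        state = frontier.pop()
        for succ in successors(state):
            if succ not in reached:
                reached.add(succ)
                frontier.append(succ)
    return reached

def consumer_validate(certificate):
    """Consumer side (cheap): a single pass checks that the certificate covers
    the initial states, is closed under successors, and excludes error states.
    Tampering with the service or the certificate makes one of these checks fail."""
    if not INITIAL <= certificate:
        return False
    if certificate & ERROR:
        return False
    return all(successors(s) <= certificate for s in certificate)

if __name__ == "__main__":
    cert = provider_produce_certificate()
    print("service accepted:", consumer_validate(cert))
```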

Currently, we are pursuing our research in three directions. First, we enhance and optimize our existing techniques. Second, we develop methods that no longer strictly follow the PCC principle. For example, the Programs from Proofs approach retains only the idea of the workload shift but uses a simple verification on the consumer side. Furthermore, we combine (partial) verification with more lightweight techniques like testing or monitoring. Third, we aim to integrate our two branches of research to enable the validation of services covering both hardware and software aspects.

Proof-carrying Code

First results on PCC for assembly programs were published as early as 1997 by Necula and Lee. However, while these works mainly deal with type and memory safety properties, we are interested in less restricted classes of functional properties, e.g., pre- and postconditions, reachability of (error) locations, and protocols describing allowed sequences of actions such as method calls. In particular, subproject B3 relies on grey-box service descriptions, e.g., sequences of method calls, during its analyses. Furthermore, we cannot expect a service provider to be familiar with semi-automatic verification techniques like theorem proving, and therefore also require fully automatic verification of these properties. Finally, the main goal of our research is to retain the idea that the majority of the workload is shifted to the provider; beyond this, we may deviate from the original PCC concept.
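As an illustration, a protocol property can be given as a finite automaton over method calls; the concrete protocol below (open, then arbitrarily many reads, then close) is made up for this example.

```python
# Illustrative protocol property: allowed sequences of method calls on a
# service, given as a finite automaton. The protocol itself is made up.

protocol = {
    ("closed", "open"):  "opened",
    ("opened", "read"):  "opened",
    ("opened", "close"): "closed",
}

def adheres_to_protocol(call_sequence, state="closed"):
    """A sequence of method calls satisfies the property iff every call is
    allowed in the current protocol state."""
    for call in call_sequence:
        if (state, call) not in protocol:
            return False
        state = protocol[(state, call)]
    return True

print(adheres_to_protocol(["open", "read", "read", "close"]))   # True
print(adheres_to_protocol(["open", "close", "read"]))           # False
```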

Strictly following the PCC idea, we developed a configurable certification framework that can be configured to show any of our functional properties. In contrast to Abstraction-carrying Code, developed by Albert et al. in 2004 as a PCC framework for arbitrary abstract interpretations, we support static analyses, abstract interpretation, and model checking over various abstract domains.

The configurable certification framework is based on the existing configurable program analysis framework by Beyer et al. implemented in CPAchecker. In our framework, the provider constructs an abstract state space. In the basic framework, the abstract states are sent to the consumer, who recomputes the abstract successors of each state, proves that all successors are correctly covered, and checks whether the desired property holds. Currently, we are interested in improving this framework:

  • How much information must be stored in the certificate?
  • How can we parallelize our certificate validation?
  • How can we improve the representation of certificates?
  • Can we simplify certificate validation?
  • Can we support further analyses described in the configurable program analysis framework?

We have already developed first approaches addressing the first two questions, resulting in performance improvements of the consumer's tasks. These approaches store a partitioned subset of the abstract state space; they read and validate the certificate in parallel and recompute the missing part of the abstract state space during validation.
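The basic consumer-side check can be pictured as follows. The sketch assumes an interval domain and a hand-written transfer relation (abstract_successors) as stand-ins for the configurable program analysis, and it shows only the sequential validation, not the partitioned, parallel variant.

```python
# Sketch of consumer-side validation over an abstract state space.
# The interval domain, the example program, and abstract_successors() are
# illustrative stand-ins for the configurable program analysis.

from typing import NamedTuple

class AbstractState(NamedTuple):
    location: str   # program location
    low: int        # interval lower bound for a single variable x
    high: int       # interval upper bound

def covered(state, certificate):
    """A state is covered if some certificate state at the same location
    subsumes it (interval containment)."""
    return any(c.location == state.location and c.low <= state.low and state.high <= c.high
               for c in certificate)

def abstract_successors(state):
    """Illustrative transfer relation of a loop 'while x < 10: x += 1'."""
    if state.location == "loop_head":
        succs = []
        if state.low < 10:    # branch into the loop body
            succs.append(AbstractState("loop_head", state.low + 1, min(state.high, 9) + 1))
        if state.high >= 10:  # branch leaving the loop
            succs.append(AbstractState("exit", max(state.low, 10), state.high))
        return succs
    return []

def violates_property(state):
    """Safety property: at 'exit', x must equal 10."""
    return state.location == "exit" and not (state.low == 10 == state.high)

def validate(certificate, initial):
    """Single pass: initial state covered, no property violation, and every
    abstract successor of a certificate state is again covered."""
    if not covered(initial, certificate):
        return False
    for state in certificate:
        if violates_property(state):
            return False
        if not all(covered(s, certificate) for s in abstract_successors(state)):
            return False
    return True

# A (hand-written) certificate the provider could have produced:
certificate = [AbstractState("loop_head", 0, 10), AbstractState("exit", 10, 10)]
print(validate(certificate, AbstractState("loop_head", 0, 0)))
```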

Programs from Proofs

We invented the Programs from Proofs (PfP) framework as an alternative to PCC. Like PCC, PfP is a generic framework that keeps the idea of the workload shift and enables the consumer of a service to automatically, formally, and quickly verify whether the service possesses the desired properties. The basic idea of PfP is that the provider proves the correctness of the original program with respect to a property and uses the proof, e.g., the abstract state space, to transform the original program into an equivalent program whose changed structure makes the same properties easier to verify. The transformed program is sent to the consumer, who benefits from the easier verification.

We started our research on PfP with a concrete proof-of-concept instance. It verifies protocol properties and uses model checking with predicate abstraction on the producer side and a simple dataflow analysis on the consumer side. In our ongoing research, we want to inspect the PfP framework systematically. To this end, we develop a configurable PfP framework. Like the configurable certification framework, it is based on the existing configurable program analysis framework by Beyer et al. implemented in CPAchecker.
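Under strong simplifications, the core of the PfP transformation can be sketched as follows: the abstract reachability graph computed by the producer is mapped back into a program whose control structure encodes the proof. The data structures and the example program are made up for illustration.

```python
# Sketch of the core PfP transformation: the producer's proof (an abstract
# reachability graph, ARG) is mapped back into a program whose control
# structure encodes the proof. All data here is illustrative.

# Original control-flow graph: location -> list of (operation, successor).
cfg = {
    "L0": [("p = acquire()", "L1")],
    "L1": [("[p != 0] use(p)", "L2"), ("[p == 0] skip", "L2")],
    "L2": [],
}

# ARG computed by the producer with predicate abstraction over {p == 0}:
# nodes are (location, predicate); the branch that the proof showed
# infeasible ("[p == 0] skip" after a successful acquire) does not occur.
arg = {
    ("L0", "true"): [("p = acquire()", ("L1", "p != 0"))],
    ("L1", "p != 0"): [("[p != 0] use(p)", ("L2", "true"))],
    ("L2", "true"): [],
}

def programs_from_proofs(arg):
    """Each ARG node becomes a fresh program location; the result is again a
    CFG, but infeasible branches no longer exist, so a simple dataflow
    analysis suffices on the consumer side."""
    new_cfg = {}
    for (loc, pred), edges in arg.items():
        new_loc = f"{loc}#{pred}"
        new_cfg[new_loc] = [(op, f"{l}#{q}") for op, (l, q) in edges]
    return new_cfg

for node, edges in programs_from_proofs(arg).items():
    print(node, "->", edges)
```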

The first version of our configurable PfP framework allows the consumer to use an arbitrary dataflow analysis to verify an invariant. To enable the consumer's analysis, the producer uses a predicated version of that dataflow analysis. Currently, we are extending the properties supported by our framework, e.g., to also include protocol properties, and generalizing the analysis configurations for the producer: the producer now uses a combination of an enabler analysis and a property-checking analysis, and the consumer configures the property-checking analysis to run in dataflow-analysis mode. Furthermore, we want to inspect our configurable PfP framework systematically and are particularly interested in the following research questions:

  • Which enabler analyses can be used in PfP?
  • For which property-checking analyses is PfP practicable?
  • Which combinations of enabler and property-checking analysis are suitable?
  • Should we use property-checking analyses that are not dataflow analyses?
  • Can reachability properties be ensured?
  • Which further properties can be ensured with PfP?
  • Can we combine PfP with PCC?

Combining Static and Dynamic Software Analysis

Strict formal validation of a service's properties can be too restrictive or inefficient for some types of properties and services. To enlarge the set of supported properties and services and to increase the efficiency of the consumer's validation, we also study approaches that combine static techniques like verification with more lightweight dynamic techniques like runtime verification or testing. As a consequence, the combined approaches no longer need to establish a definite correctness result.

Our first approach combines PfP aspects with runtime verification. This approach transforms a partially verified program in such a way that the transformed program halts before it reaches an error. For protocol properties this gives us runtime monitoring without any additional overhead.
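A minimal sketch of this transformation, assuming a control-flow-graph representation and a made-up partial verification result, could look as follows: every edge that the partial verification could not prove safe is redirected into a halting location.

```python
# Sketch of combining PfP with runtime verification: edges that the partial
# verification could not prove safe are redirected through a halt, so the
# transformed program stops before a protocol violation occurs.
# The CFG and the verification result are illustrative.

cfg = {
    "L0": [("open()", "L1")],
    "L1": [("read()", "L1"), ("close()", "L2")],
    "L2": [("read()", "L1")],   # possible protocol violation: read after close
}

# Edges proven safe by the partial verification (made up for this example).
verified_edges = {("L0", "open()"), ("L1", "read()"), ("L1", "close()")}

def insert_halts(cfg, verified_edges):
    """Replace every unverified edge by a transition into a halting location,
    so the transformed program halts right before a possible violation."""
    transformed = {"HALT": []}
    for loc, edges in cfg.items():
        transformed[loc] = [
            (op, succ) if (loc, op) in verified_edges else (f"halt_before({op})", "HALT")
            for op, succ in edges
        ]
    return transformed

for loc, edges in insert_halts(cfg, verified_edges).items():
    print(loc, "->", edges)
```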

In our current research, we combine partial verification with testing. We always start by verifying the service. If the verifier gives up, e.g., because it cannot exclude a spurious error, or a timeout occurs, we construct and test a residual program, which should contain only the unverified program paths. So far, we use Conditional Model Checking by Beyer et al. to obtain a summary of the partial verification effort, a so-called condition, and use one of two techniques to construct the residual program: the first combines the program and the condition, the second identifies unverified assertions and uses them as slicing criteria. A sketch of the overall workflow is shown after the list below. Open research questions are:

  • How to distribute the validation time between verification and testing?
  • Do we profit from running the different techniques in parallel?
  • Which technique should be used when for residual program generation?
  • For which verification methods and services are the techniques suitable and efficient?
  • How does the exploration order of the partial verification affect subsequent testing?
  • Can we use our techniques for other property specifications than assertions?
  • Can we combine our techniques with PfP, especially its consumer validation?
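The following sketch outlines the workflow described above; the functions run_verifier, build_residual_program, and run_tests are hypothetical stand-ins with dummy bodies, and the even split of the time budget merely illustrates the first open question.

```python
# Sketch of the verification-then-testing workflow. run_verifier,
# build_residual_program, and run_tests are hypothetical stand-ins with dummy
# bodies; in our setting the verifier is a conditional model checker whose
# condition summarises the already verified paths.

from dataclasses import dataclass
from typing import Optional

@dataclass
class VerifierResult:
    verdict: str              # "safe", "unsafe", or "unknown"
    condition: Optional[str]  # summary of the verified paths if incomplete

def run_verifier(program: str, time_limit_s: int) -> VerifierResult:
    """Stand-in for a conditional model checker run with a time limit."""
    return VerifierResult("unknown", "all paths avoiding function foo() verified")

def build_residual_program(program: str, condition: str, technique: str) -> str:
    """Stand-in for residual-program construction: either combine program and
    condition ('product') or slice w.r.t. the unverified assertions ('slice')."""
    return f"residual({program}, {condition}, {technique})"

def run_tests(program: str, time_limit_s: int) -> str:
    """Stand-in for test generation and execution on the residual program."""
    return f"no violation found while testing {program}"

def validate_service(program: str, budget_s: int = 900) -> str:
    verification_share = budget_s // 2        # open question: how to split?
    result = run_verifier(program, verification_share)
    if result.verdict in ("safe", "unsafe"):
        return result.verdict                 # verification already decided
    residual = build_residual_program(program, result.condition, "product")
    return run_tests(residual, budget_s - verification_share)

print(validate_service("service.c"))
```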

In the future, we also want to combine incomplete certificate validation with testing. We consider two cases: either the certificate itself is the result of a partial verification, or the consumer does not complete the certificate validation, e.g., due to a timeout. Our research will tackle the following questions:

  • How to integrate these ideas into our configurable certification framework?
  • How to compute the residual program? Can we use the ideas from partial verification?
  • Do we need an order on certificate elements to get small residual programs?
  • Can we use the same techniques for both use cases?

Proof-carrying Hardware

We transferred the idea of Proof-carrying Code to reconfigurable hardware. Proof-carrying Hardware can verify not only functional properties of a hardware service but also non-functional, hardware-related properties.

The functional properties can be verified by formulating test functions that use the primary inputs and outputs of the circuit to determine if the desired properties hold.

Our first approach transforms these test functions into an unsatisfiability (UNSAT) problem over Boolean formulas, which can in many relevant cases be solved quite efficiently by a modern SAT solver. The resolution proof of unsatisfiability becomes the certificate. For our proof of concept, we use functional equivalence between a circuit implementation and its original specification as a catch-all property to implement prototypes for Proof-carrying Hardware. We used this approach in particular for combinational circuits and for sequential circuits over a fixed number of cycles. Due to the closed nature of FPGA vendors' tools, we cannot use the configurations of actual FPGAs directly. Hence, for demonstration purposes we started with configuration bitstreams for an abstract FPGA. To create real prototypes for PCH, we extended our approach to use virtual FPGA overlays on top of real FPGAs, which we can program using open-source tool flows compatible with PCH. To speed up the validation of the resolution proof, we would like to implement the validation in hardware itself.
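The consumer's task in this approach, replaying a resolution proof, can be sketched as follows for a tiny unsatisfiable formula; the clause and proof encodings are illustrative, whereas real certificates refer to the CNF obtained from the test functions over the circuit.

```python
# Sketch of consumer-side validation of a PCH certificate for the toy formula
# (x) AND (not x OR y) AND (not y): the provider's SAT solver derives
# unsatisfiability, and the resolution proof is shipped as the certificate.

# Clauses are frozensets of literals; a positive int v means variable v,
# a negative int -v means its negation.
cnf = [frozenset({1}), frozenset({-1, 2}), frozenset({-2})]

# Certificate: each step names two earlier clauses (by index into the growing
# clause list) and the pivot variable on which they are resolved.
proof = [
    (0, 1, 1),   # resolve (x) and (not x OR y) on x, yielding (y)
    (3, 2, 2),   # resolve (y) and (not y) on y, yielding the empty clause
]

def resolve(c1, c2, pivot):
    """Return the resolvent of c1 and c2 on pivot, or None if not resolvable."""
    if pivot in c1 and -pivot in c2:
        return (c1 - {pivot}) | (c2 - {-pivot})
    if -pivot in c1 and pivot in c2:
        return (c1 - {-pivot}) | (c2 - {pivot})
    return None

def validate_certificate(cnf, proof):
    """Replay every resolution step; accept only if the last step derives the
    empty clause, i.e., the original formula is unsatisfiable."""
    clauses = list(cnf)
    derived = None
    for i, j, pivot in proof:
        derived = resolve(clauses[i], clauses[j], pivot)
        if derived is None:
            return False
        clauses.append(derived)
    return derived == frozenset()

print("equivalence certificate accepted:", validate_certificate(cnf, proof))
```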

Going forward, we will focus more on implementing the validation of non-functional properties. Non-functional properties include the physical aspects of the hardware device and the hardware service that is implemented on the reconfigurable device, for example:

  • Placement constraints
  • Routing constraints
  • Isolation of multiple modules on reconfigurable fabric
  • Min / max / exact clock frequency
  • Assurance of redundant circuits

Reconfigurable platforms can greatly benefit from the verification of these possibly security-critical features. In Figure 2 below, we depict two FPGA chips with three IP cores. Without placement constraints (left), the cores may be placed such that they overlap and the individual modules share resources on the chip, which enables unintended data leaks and malicious data manipulation. The placement and routing on the right are performed under constraints that physically isolate the individual modules and thereby protect the integrity and confidentiality of their data. With isolated IP cores, the FPGA chip can simultaneously run multiple IP cores with different trust levels without compromising the security of the platform or the data.

Fig. 2: FPGA chip without and with physical isolation
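To give a flavour of such a check, the following sketch validates placement and isolation constraints on made-up core regions and tile coordinates; actual checks would operate on the placement information extracted from the configuration of the (virtual) FPGA.

```python
# Sketch of a non-functional check for placement and isolation constraints:
# every IP core must use only tiles inside its assigned region, and the
# regions of different cores must not overlap. Core names, regions, and tile
# coordinates are illustrative.

from typing import NamedTuple

class Region(NamedTuple):
    x0: int
    y0: int
    x1: int
    y1: int

def overlaps(a: Region, b: Region) -> bool:
    """True iff the two (inclusive) tile rectangles share at least one tile."""
    return not (a.x1 < b.x0 or b.x1 < a.x0 or a.y1 < b.y0 or b.y1 < a.y0)

def placement_is_isolated(assigned, used) -> bool:
    """assigned: core -> permitted Region; used: core -> occupied tiles.
    Checks containment per core and pairwise isolation of the regions."""
    for core, tiles in used.items():
        r = assigned[core]
        if not all(r.x0 <= x <= r.x1 and r.y0 <= y <= r.y1 for x, y in tiles):
            return False
    cores = list(assigned)
    return all(not overlaps(assigned[a], assigned[b])
               for i, a in enumerate(cores) for b in cores[i + 1:])

assigned = {"crypto": Region(0, 0, 9, 9), "io": Region(10, 0, 19, 9)}
used = {"crypto": [(1, 1), (5, 5)], "io": [(12, 3)]}
print("placement accepted:", placement_is_isolated(assigned, used))
```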

Our alternative PCH approach was developed mainly to verify sequential circuits. It uses IC3 to build an inductive invariant, an overapproximation of the circuit's reachable state space. This inductive invariant, a logical formula, serves as the certificate.
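The consumer's side of this approach can be sketched as follows: it merely checks that the shipped formula holds initially, is preserved by the transition relation, and implies the property. The two-flip-flop circuit and the invariant are made up, and z3 (pip install z3-solver) is used here only as an off-the-shelf solver standing in for the actual validation engine.

```python
# Sketch of validating an IC3-style certificate: the consumer only checks
# that the shipped formula is an inductive invariant implying the property.
# The circuit and the invariant are illustrative.

from z3 import Bools, Solver, And, Not, Implies, unsat, substitute

b0, b1, b0n, b1n = Bools("b0 b1 b0_next b1_next")

init = And(Not(b0), Not(b1))             # both flip-flops start at 0
trans = And(b0n == Not(b0), b1n == b0)   # b0 toggles, b1 stores the old b0
prop = Not(And(b0, b1))                  # property: never both flip-flops set

invariant = Not(And(b0, b1))             # certificate shipped by the provider

def valid(formula) -> bool:
    """A formula is valid iff its negation is unsatisfiable."""
    solver = Solver()
    solver.add(Not(formula))
    return solver.check() == unsat

inv_next = substitute(invariant, (b0, b0n), (b1, b1n))
accepted = (valid(Implies(init, invariant))                      # initiation
            and valid(Implies(And(invariant, trans), inv_next))  # consecution
            and valid(Implies(invariant, prop)))                 # safety
print("hardware certificate accepted:", accepted)
```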

We have also started to combine PCH with monitoring and enforcement. The idea is to add monitoring and enforcement modules to the circuits to make sure that they exhibit only certain behaviors or properties. PCH is then used to ensure the correct implementation of the added modules, which is simpler than ensuring the correctness of the complete, complex circuit. In the future, we not only want to check the behavior of the additional modules but also want to make sure that they are integrated in a way that they cannot be bypassed.

Integrating Proof-carrying Code and Proof-Carrying Hardware

Our goal is to unify our two research branches into a common proof-carrying technique for services covering both software and hardware aspects. We envision services that execute some parts on standard processor hardware and other parts on (reconfigurable) hardware developed specifically for that service. For example, the software service may execute special, non-standard instructions, so-called custom instructions, or call methods implemented in hardware.

In a first step, we started to connect software and hardware verification. For our scenario, we extract from the software verification the functionality required of the hardware implementation. We considered three techniques, which differ in the effort and the reusability of the hardware verification. First, we prove functional equivalence, i.e., whether the hardware implementation adheres to its specification. Second, we check whether the hardware adheres to the rules of the abstract domain used by the software verification. Third, we extract the hardware requirements from the proof produced by the software verification.
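As an illustration of the first technique, the following sketch compares a made-up software-level requirement for a custom instruction (saturating addition over four bits) against a bit-level model of its hardware implementation by exhaustive enumeration; the real flow would establish this equivalence on the circuit itself, e.g., with the SAT-based techniques from our PCH work.

```python
# Sketch of connecting software and hardware verification: the functionality
# the software verifier assumes for a custom instruction is checked against a
# model of the hardware implementation. Both sides are illustrative.

WIDTH = 4
MASK = (1 << WIDTH) - 1

def software_spec(a: int, b: int) -> int:
    """Functionality the software verification relies on: saturating addition."""
    return min(a + b, MASK)

def hardware_model(a: int, b: int) -> int:
    """Bit-level model of the custom instruction extracted from the circuit."""
    s = (a + b) & MASK
    carry = (a + b) >> WIDTH
    return MASK if carry else s

def equivalent() -> bool:
    """Exhaustive check over the small bit-width; larger instances would use
    a SAT-based equivalence check instead."""
    return all(software_spec(a, b) == hardware_model(a, b)
               for a in range(1 << WIDTH) for b in range(1 << WIDTH))

print("custom instruction matches its software-level requirement:", equivalent())
```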

In our ongoing research we want to systematically study our combined hardware and software verification and extend it to a complete Proof-carrying technique. Therefore, we consider the following research questions:

  • Which kinds of hardware implementations are covered by our approach? For example, can we deal with loops or memory access?
  • Which software analyses can be used and how complex may they become?
  • How can we simplify the hardware requirements produced by the software verification?
  • How to close the gap between the requirements produced by software verification and the representation for hardware verification?
  • How to automate the translation from extracted software requirements to hardware requirements in a sound way?
  • What else must be done to automate the combined approach?
  • How to integrate our existing Proof-carrying Code and Proof-carrying Hardware approaches?

Further Information

For further information, please consult our publications or contact our researchers.