Research and projects by Carlo A. Furia

Formal methods often have the bad reputation of being expensive to deploy, of requiring highly trained experts, and of being justified only for a few safety-critical applications. My research aims to debunk this view by identifying sweet spots where formal methods become cost-effective, usable, and practical.

The proper engineering way to look at the problem of applying formal techniques is in terms of a trade-off between effort and benefits. My research aims at finding favorable trade-offs by changing what sits on the two pans of the scale: reducing the effort (for example, by increasing the level of automation) and boosting the benefits (for example, by finding bugs early during development). Here is the idea in one picture: from dubious benefits and heavy effort (scale on the left) to tangible benefits and moderate effort (scale on the right).

[Figure: two scales representing different trade-offs between the costs and benefits of formal methods]

This page gives an overview of my research in the areas of automatic program repair, intermediate verification, auto-active program verification, specification-based software analysis, collaborative software development, language translation, and formal methods for real-time, hybrid, and cyber-physical systems. The areas partly overlap, as they share common traits such as automation, combination and integration of diverse techniques, leveraging formal specifications, and empirical evaluation of effectiveness. The results are often demonstrated by tools available as open source. Each area lists its projects, main collaborators, and publications (which also give complete credits). Since research in most areas is ongoing, the latest developments may be more accurately reflected in the most recent publications, technical reports, or presentations.

Automatic program repair

"Automatic program repair" refers to techniques capable of suggesting corrections to the source code of faulty programs; applying the corrections permanently removes buggy behavior. Our AutoFix project developed one of the first successful technique and tool for automatic program repair. One of the distinguishing features of AutoFix is that it leverages simple functional contracts in the source code to boost accuracy and performance of bug fixing. An evaluation involving over 200 bugs from code bases of different quality demonstrated that AutoFix is highly successful, as it produces a high number of correct fixes (for 25% of the faults in our experiments) and requires only limited computational resources (under 20 minutes per fix on commodity hardware), which compares very favorably to the state of the art in the area. Jaid is an automatic repair technique that extends some of AutoFix's techniques so that they are applicable to Java programs without contracts or any special annotations. In our experiments, Jaid produced correct repairs, equivalent to those written by programmers, for 25 bugs in the Defects4J curated collection of bugs from real-world Java applications. Driven by state-based dynamic analysis techniques, Jaid can produce fixes that involve "semantic" object-state modifications, and thus are largely complementary to those generated by other automated repair techniques for Java.

Project pages
AutoFix and Jaid, with downloadable tools, screencast demo, and examples of repair
Main collaborators
Yu (Max) Pei
Publications
ASE'17, ICSE'15, TSE'14, FASE'14, ASE'11, ISSTA'10
Intermediate verification — languages and tools

Intermediate verification systems such as Boogie and Why3 act as a layer between high-level program verifiers (our AutoProof, or many other auto-active provers) and back-end provers (SMT solvers), efficiently generating verification conditions from an intermediate representation; as such, they are increasingly popular components of any non-trivial verification toolchain. Our research in this area improves flexibility, performance, and robustness at the intermediate verification layer in several ways:

  • BLT is a verifier for Boogie programs that generates verification conditions for first-order provers (in particular, the Vampire theorem prover) instead of the Z3 SMT solver normally used by the Boogie tool. A first-order prover is often more effective at dealing with quantified formulas, as it does not require user-written low-level "trigger" annotations to guide the proof search.
  • b2w is an automatic translator from Boogie to WhyML (Why3's input language), which enables users of Boogie to easily try out Why3, taking advantage of the latter's features that are not available in the former.
  • Boogaloo is an interpreter and run-time assertion checker for the Boogie intermediate verification language, which helps debug failed verification attempts by providing concrete counterexamples and by supporting incremental development.
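
The value of concrete counterexamples is easy to appreciate even outside Boogie. The following sketch uses plain Java (a hypothetical stand-in, not Boogaloo's actual Boogie input) to show how executing a program on concrete values turns a failed proof attempt into an explicit counterexample:

    // Hypothetical stand-in (plain Java, not Boogie) for what run-time checking adds
    // to verification: executing the program yields a concrete counterexample.
    public class MidpointCheck {
        // Claim under verification: the "obvious" midpoint formula stays within bounds.
        static int mid(int lo, int hi) {
            return (lo + hi) / 2;   // overflows when lo + hi exceeds the int range
        }

        public static void main(String[] args) {
            int lo = 2_000_000_000, hi = 2_000_000_000;
            int m = mid(lo, hi);
            // Run with 'java -ea': the assertion fails and reports the concrete inputs,
            // which is more actionable than a bare "verification failed" message.
            assert lo <= m && m <= hi : "counterexample: lo=" + lo + ", hi=" + hi + ", mid=" + m;
        }
    }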
Try them out!
Boogaloo is online on Comcom
Project pages
BLT, b2w, and Boogaloo, with downloadable tools, documentation, and screencast demos
Main collaborators
YuTing (Jeff) Chen and Nadia Polikarpova
Publications
iFM'17, iFM'16, RV'13
Auto-active program verification

Auto-active verification tools provide a level of automation intermediate between fully automatic tools (for example, static analyzers and abstract interpreters) and interactive ones (for example, proof assistants). In practice, users interact with auto-active provers by annotating the input programs with, for example, contracts (pre- and postconditions) and loop invariants. Our AutoProof project has developed a powerful auto-active prover for object-oriented programs written in Eiffel, supporting many features of a complex real-life programming language. Its flexible invariant methodology makes it possible to reason, with limited annotation overhead, about programs using elaborate object structures idiomatic of object-oriented programming (such as design patterns). At the same time, it includes heuristic techniques that help reduce the annotation overhead and improve the feedback provided to users in case of failed verification attempts. Among other case studies, we have used AutoProof to verify a realistic data-structure library against its complete functional specification, the first verification of this scope carried out entirely with an auto-active tool.
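
As a schematic illustration of the auto-active annotation style (written here in Java with runtime assertions, rather than in AutoProof's Eiffel input), the annotations live directly in the program text and are the only interaction the prover requires:

    // Schematic illustration (Java with runtime assertions, not AutoProof's Eiffel
    // input) of the annotations an auto-active prover consumes: a precondition,
    // a loop invariant, and a postcondition written directly in the program text.
    public class Summation {

        // Computes 0 + 1 + ... + (n - 1).
        static int sumUpTo(int n) {
            assert 0 <= n && n <= 10_000 : "precondition";   // bounds keep int arithmetic exact
            int sum = 0;
            int i = 0;
            while (i < n) {
                // Loop invariant: sum == 0 + 1 + ... + (i - 1)
                assert sum == i * (i - 1) / 2 : "loop invariant";
                sum += i;
                i++;
            }
            assert sum == n * (n - 1) / 2 : "postcondition";
            return sum;
        }
    }

An auto-active prover checks such annotations statically, for all admissible inputs, rather than only on the executions that happen to run.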

Other work of ours in this area includes:

  • QFIS, an auto-active prover for programs annotated in a decidable fragment of the first-order theory of integer sequences, demonstrates how custom decision procedures can simplify annotations and automated proofs in a specialized domain.
  • Javanni, an auto-active verifier for JavaScript programs, introduces some of the overhead-reducing heuristics for simple cases that are also available in AutoProof.
Try them out!
AutoProof, QFIS, and Javanni are online on Comcom
Project pages
AutoProof, EiffelBase2 (verified container library), QFIS, and Javanni, with downloadable tools, documentation, and screencast demos
Main collaborators
Nadia Polikarpova and Julian Tschannen
Publications
STTT'16, FM'15 (best paper), TACAS'15, STTT'14, FM'14, VSTTE'13, FASE'13, LASER'11, ATVA'12, ATVA'10
Practical specification-based software analysis

My work in this category is best described by the broad-brush bottom line common to its research results: writing software specifications is worth every penny, in the sense that their additional cost can be traded off very favorably against the improvements they bring to software quality and reliability.

My work related to the AutoTest framework has demonstrated several such trade-offs in the area of testing. A standard practice in industrial software development, testing is often done manually: programmers write test cases as a routine part of their development activities. However, writing test cases can be traded off against writing simple specifications in the form of contracts (such as pre- and postconditions), which are no more complex than the Boolean expressions used in conditionals. If contracts are available, techniques based on randomized search, such as our stateful testing, can use them to generate, completely automatically, test cases that have exposed hundreds of bugs in production software. But advantageous trade-offs need not end at simple contracts: our work on model-based contracts has shown that writing strong specifications for testing is possible with reasonable additional effort, provided supporting methods and tools, such as those developed in our research, are available. We have also shown that the additional effort pays off manyfold, since many more, and subtler, bugs are found automatically with testing.
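
A minimal sketch of the idea, using plain Java assertions in place of Eiffel contracts and a naive random-input loop in place of AutoTest's search strategies: the contract doubles as a test oracle, so no expected outputs have to be written by hand.

    import java.util.Random;

    // Minimal, hypothetical sketch of contract-based random testing (Java assertions
    // in place of Eiffel contracts; a naive random loop in place of AutoTest's
    // strategies): the contract serves as the test oracle.
    public class RandomContractTesting {

        // Routine under test, annotated with a simple postcondition.
        static int absolute(int x) {
            int result = (x < 0) ? -x : x;
            assert result >= 0 && (result == x || result == -x) : "postcondition";
            return result;
        }

        public static void main(String[] args) {
            Random random = new Random(42);
            for (int test = 0; test < 1_000_000; test++) {
                int x = random.nextInt();   // generate a random input
                absolute(x);                // the contract checks the result automatically
            }
            // If a draw ever produced Integer.MIN_VALUE, the postcondition would fail,
            // exposing the overflow bug that -Integer.MIN_VALUE is still negative.
        }
    }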

Another way of boosting the usefulness of simple specifications is by combining static and dynamic techniques, that is, proofs and testing. The EVE verification environment transparently integrates AutoProof and AutoTest: a verification assistant runs each tool in the background and provides concise, informative feedback to users by combining the tools' output. Seamless integration improves usability for users with little experience in formal techniques, and provides a stepping stone towards more advanced usage of the verification tools.

Specifications differ in how hard they are to write and maintain; preconditions, for instance, tend to be more intuitive and simpler in form than loop invariants. This observation motivates developing tools capable of leveraging the simpler kinds of specifications that can be written by non-experts to infer more complex kinds automatically. The AutoInfer tool implements one of the most powerful dynamic techniques to infer complex postconditions of programs. An evaluation has shown that AutoInfer's inferred postconditions are often sound (94% of the routines) and even complete (75% of the routines) with respect to full functional specifications.
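
The flavor of dynamic inference can be conveyed by a toy sketch (it is not AutoInfer's actual algorithm): evaluate a pool of candidate postconditions on many executions and keep only those that are never violated.

    import java.util.LinkedHashMap;
    import java.util.Map;
    import java.util.Random;
    import java.util.function.BiPredicate;

    // Toy sketch of dynamic postcondition inference (not AutoInfer's actual
    // algorithm): candidate predicates over (input, result) pairs survive only
    // if they hold on every observed execution of the routine under analysis.
    public class DynamicInference {

        static int square(int x) { return x * x; }

        public static void main(String[] args) {
            Map<String, BiPredicate<Integer, Integer>> candidates = new LinkedHashMap<>();
            candidates.put("result >= 0",     (x, r) -> r >= 0);
            candidates.put("result >= x",     (x, r) -> r >= x);
            candidates.put("result == x * x", (x, r) -> r == x * x);
            candidates.put("result == x",     (x, r) -> r.intValue() == x.intValue());

            Random random = new Random(0);
            for (int test = 0; test < 10_000; test++) {
                int x = random.nextInt(200) - 100;
                int result = square(x);
                candidates.values().removeIf(p -> !p.test(x, result));  // drop falsified candidates
            }
            // "result == x" is quickly falsified; the other three candidates survive.
            System.out.println("Surviving candidate postconditions: " + candidates.keySet());
        }
    }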

Developed in collaboration with Zeller's group at Saarland University, the DynaMate project deploys a combination of static (program verifiers) and dynamic (automated testing) techniques to tackle the complex task of automatically inferring loop invariants. One of the key intuitions behind DynaMate is to mutate postconditions (which we assume given) to derive candidate loop invariants. The candidates are then checked, and only those that are provably valid are used to perform automatic correctness proofs. Applied to numerous Java methods of java.util library classes, our DynaMate prototype inferred loop invariants sufficient to automatically discharge 97% of all proof obligations and to prove correct nearly 90% of the methods, outperforming several state-of-the-art tools for fully automatic verification.
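
A textbook-style example of the mutation idea (hypothetical, not one of DynaMate's java.util benchmarks): replacing the constant bound in a postcondition with the loop variable often yields a valid loop invariant.

    // Textbook-style illustration (not one of DynaMate's java.util benchmarks) of
    // how mutating a postcondition can produce a candidate loop invariant.
    public class PostconditionMutation {

        static int maxOf(int[] a) {
            int max = a[0];
            for (int i = 1; i < a.length; i++) {
                // Candidate invariant, obtained from the postcondition below by
                // replacing the constant bound 'a.length' with the loop variable 'i':
                //   max == maximum of a[0 .. i-1]
                if (a[i] > max) max = a[i];
            }
            // Postcondition: max == maximum of a[0 .. a.length-1]
            return max;
        }
    }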

Project pages
AutoTest, EVE, AutoInfer, and DynaMate, with downloadable tools, documentation, and screencast demos
Main collaborators
Nadia Polikarpova, Julian Tschannen, Juan Pablo Galeotti, Andreas Zeller, and Bertrand Meyer
Publications
TSE'15, HVC'14, FM'14, CSUR'14, ICSE'13, ASE'11, SEFM'11, ICSE'11, VSTTE'10
Understanding and supporting collaborative software development

The CloudStudio project seeks to develop sound, empirically based knowledge about how teams of programmers work together, and to use such knowledge to guide the design of tools that can make distributed development more productive.

Issues investigated by means of empirical studies in the course of the CloudStudio project include: the impact of following agile rather than traditional processes during development (with the somewhat surprising conclusion that processes don't seem to matter that much); and the impact of merge conflicts and insufficient awareness on developers' productivity (with the conclusion that lack of awareness is often a bigger deal than merge conflicts). The tools supporting collaborative development are integrated in the web-based CloudStudio development environment, which provides innovative features for unobtrusive awareness, conflict prevention, and collaborative debugging.

Try it out!
CloudStudio for collaborative development
Project pages
CloudStudio and EVE with online tools, documentation, and screencast demos
Main collaborators
H.-Christian Estler and Martin Nordio
Publications
ICGSE'14 (best paper), FM'14, ESE'13, ICGSE'13 (best paper), ASWEC'13, ICGSE'12 (best paper)
Modeling and analyzing real-time, hybrid, and cyber-physical systems

In safety-critical domains such as those of real-time, hybrid, and cyber-physical systems, formal methods are often necessary to avoid the unacceptable costs of faulty development. Still, choosing the "right" formalism, suitable for the system under development, remains challenging because of conflicting requirements: the more expressive formalisms, which can accurately model extensive classes of properties, are often also unintuitive and complex to analyze automatically. My work in this area has focused on rigorously analyzing the trade-off between expressiveness and complexity in many heterogeneous formalisms sharing the capability to describe some form of timed or hybrid behavior.

The book Modeling time in computing is a comprehensive survey of such issues, spanning notations as diverse as dynamical system models, automata and finite-state machines, Petri nets, various flavors of logic, and process algebras. The survey is based on an original taxonomy of "dimensions" relevant in models of timed, hybrid, and cyber-physical systems; its mix of foundational and advanced topics makes it suitable for computer scientists as well as researchers from other disciplines.

Other research in this area has focused on temporal logics with quantitative time that dwell on the boundary of undecidability, in particular variants of the logic MTL. I have characterized the complexity of decidable and undecidable fragments, studied syntactic and semantic compromises that achieve interesting trade-offs between complexity and expressiveness, and developed and implemented decision procedures that are applicable in practice.
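
As a concrete example of the quantitative-time properties such logics express, a standard MTL bounded-response formula states that every request is granted within 5 time units:

    % Bounded response in MTL: every request is followed by a grant within 5 time units.
    \Box \big( \mathit{request} \;\rightarrow\; \Diamond_{[0,5]}\, \mathit{grant} \big)

Seemingly small syntactic choices, such as which intervals may decorate the temporal operators, are precisely the kind of compromise that moves a logic of this family across the decidability boundary.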

Book
Modeling time in computing (some reviews of the book)
Main collaborators
Paola Spoletini, Dino Mandrioli, Angelo Morzenti, Matteo Rossi
Publications
AMAI'16, TIME'14, TIME'12, TIME'11, TOCL'10, ICECCS'10, SEFM'09, ATVA'08, FORMATS'08, ICFEM'08, ICTAC'08, FM'08, FORMATS'07, TCS'07, FORMATS'06, FASE'05
Automatic source-to-source language translation

Rewriting large programs written in a legacy language into a more modern one is a common industrial practice that absorbs substantial resources. Automating this process has great potential for impact, but previous attempts have been only partially successful because of the complexity of programming languages as they are used in practice, including low-level features such as pointer arithmetic, native code, and unstructured control flow.

The C2Eif project has been the first successful attempt at fully automating source-to-source translation in a way that produces functionally equivalent programs that take advantage of the target language's high-level features, such as classes and inheritance. Experiments have shown that the tool can deal with large real-life C projects, such as the editor vim and the library libgsl, producing immediately usable Eiffel code of equivalent functionality. mtSystems, a spin-off company of this research, offers commercial solutions for high-quality translation of C applications into Java and other modern languages.
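
The following sketch, in Java rather than Eiffel and purely hypothetical (it is not C2Eif's or mtSystems' actual translation scheme), conveys the kind of encoding such translators need for low-level C features: a pointer into a buffer becomes an explicit (array, offset) pair, so that pointer arithmetic maps to offset arithmetic.

    // Hypothetical sketch of how a C-to-object-oriented translator might encode
    // pointer arithmetic (not C2Eif's or mtSystems' actual scheme): a C pointer
    // into a buffer becomes an explicit (array, offset) pair.
    public class CPointer {
        final int[] memory;   // the buffer the pointer points into
        int offset;           // current position within the buffer

        CPointer(int[] memory, int offset) {
            this.memory = memory;
            this.offset = offset;
        }

        int dereference()        { return memory[offset]; }                    // *p
        void assign(int value)   { memory[offset] = value; }                   // *p = value
        CPointer plus(int k)     { return new CPointer(memory, offset + k); }  // p + k
        void increment()         { offset++; }                                 // p++
    }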

Project page
C2Eif, J2Eif, and mtSystems
Main collaborator
Marco Trudel
Publications
ECOOP'13, WCRE'12, WCRE'12 demo, TOOLS'11