Monitoring, Runtime Analysis and Testing


Monitoring is an effective technique for developing reliable software. Before deployment, monitoring formal specifications not only brings more rigor to testing, but also enables advanced predictive runtime analyses of concurrent software. After deployment, monitoring provides fundamental support for error diagnosis, on-site testing, and, more importantly, runtime recovery, resulting in more dependable and even provably correct software. Our current research in this area is focused on the following projects:

Monitoring-Oriented Programming

Monitoring-Oriented Programming, abbreviated MOP, is a software development and analysis framework that aims to reduce the gap between formal specification and implementation by allowing the two to together form a system. In MOP, runtime monitoring is supported and encouraged as a fundamental principle for building reliable software. [more ...]
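
For concreteness, the sketch below shows what a property specification can look like in JavaMOP, the Java instance of MOP. It follows the well-known HasNext example from the JavaMOP literature, though the exact syntax varies across versions and should be read as illustrative: the property states that hasNext() must return true before each call to next() on an iterator, and the generated monitor runs alongside the program, executing the handler whenever the pattern is violated.

    // Illustrative JavaMOP-style specification (modeled on the published
    // HasNext example; details may differ across JavaMOP versions).
    import java.util.Iterator;

    HasNext(Iterator i) {
        // A hasNext() call on iterator i that returns true.
        event hasnext after(Iterator i) returning(boolean b) :
            call(* Iterator.hasNext()) && target(i) && condition(b) {}
        // Any next() call on iterator i.
        event next before(Iterator i) :
            call(* Iterator.next()) && target(i) {}

        // Desired behavior: every next is preceded by a true hasNext.
        ere : (hasnext+ next)*

        // Handler executed by the generated monitor on a violation.
        @fail {
            System.err.println("next() called without hasNext() returning true");
        }
    }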

Concurrent Runtime Verification

Concurrent runtime verification aims at detecting concurrency errors in a system by running it and observing its execution traces; errors can be detected even when the observed traces do not actually hit them. Typical concurrency errors include data races and deadlocks, both notoriously hard to detect by ordinary testing, but general safety property violations (expressed using MOP logics) can also be detected. Static analysis can be used to improve the predictive capability of concurrent runtime verification. A small program with a latent data race of this kind is sketched below, after the tool list. [more ...]

  • jPredictor is a concurrent runtime verification tool for Java.
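
To illustrate the kind of error targeted here, consider the hand-written Java fragment below (an illustration, not an example from jPredictor's distribution). Most observed runs happen to serialize the two increments and print 2, yet a predictive analysis of such a trace can still infer the feasible interleaving in which both threads read the same value of counter and one update is lost.

    public class LostUpdate {
        static int counter = 0;  // shared, unsynchronized

        public static void main(String[] args) throws InterruptedException {
            Runnable inc = () -> {
                // Non-atomic read-modify-write with no lock: a data race.
                counter = counter + 1;
            };
            Thread t1 = new Thread(inc);
            Thread t2 = new Thread(inc);
            t1.start(); t2.start();
            t1.join();  t2.join();
            // Usually prints 2, but 1 is possible under an interleaving
            // that a predictive analysis can expose even when the
            // monitored run never exhibits it.
            System.out.println(counter);
        }
    }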

Concurrent Random Test Generation

Concurrent random test generation aims at detecting errors in concurrent programs by increasing the likelihood that erroneous behaviors of the program under test are actually exercised. One way to achieve this is noise making: a noise maker seeds critical events (such as accesses to shared data) with synchronization primitives that may cause context switches or delay threads, choosing where to inject noise using coverage-based decision procedures. This requires gathering information on the program's data and control flow dependencies.

In a typical user scenario, a given functional test t is run repeatedly, without human intervention, against the program under test P. Each run can be checked by MOP on the fly, or analyzed by predictive runtime analysis to detect bugs the run itself does not hit. Because of the injected noise, each run of t potentially manifests a different thread interleaving, and coverage information is produced and collected during execution to guide subsequent runs. In combination with predictive runtime analysis, we expect noise making to become an effective, lightweight runtime verification technique that can detect a large percentage of, if not all, the concurrency errors in a typical concurrent or multithreaded program.
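
As a rough illustration of noise seeding (a hand-rolled sketch, not the actual instrumentation performed by FSL's tools, which operates at the bytecode level and uses coverage-based decisions), the helper below randomly yields or briefly delays the current thread; calls to it would be injected just before critical events such as shared-variable accesses.

    import java.util.Random;

    final class Noise {
        private static final Random RND = new Random();

        /** Randomly perturb the scheduler at an instrumented point. */
        static void maybePerturb() {
            int roll = RND.nextInt(10);
            if (roll < 3) {
                Thread.yield();                    // invite a context switch
            } else if (roll < 5) {
                try {
                    Thread.sleep(RND.nextInt(5));  // briefly delay this thread
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
            // Otherwise inject no noise at this event, keeping runs diverse.
        }
    }

An instrumented version of the racy example above would call Noise.maybePerturb() just before each access to counter, so that repeated runs of the same functional test explore different interleavings.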
