Tuesday, 4 July 2017

CV

Bernhard K. Aichernig is a tenured associate professor at Graz University of Technology, Austria. He investigates the foundations of software engineering for realising dependable computer-based systems. Bernhard is an expert in formal methods and testing. His research covers a variety of areas combining falsification, verification and abstraction techniques. Current topics include the Internet of Things, model learning, and statistical model checking. Since 2006, he has participated in four European projects. From 2004 to 2016, Bernhard served as a board member of Formal Methods Europe, the association that organises the Formal Methods symposia. From 2002 to 2006 he held a faculty position at the United Nations University in Macao S.A.R., China. Bernhard holds a habilitation in Practical Computer Science and Formal Methods, a doctorate, and a diploma engineer degree from Graz University of Technology.

New article on requirements modelling, model-based testing and traceability

Our article on combining requirements modelling, test-case generation and traceability appeared in the Springer STTT journal.

The paper describes how we
  1. model requirements as contracts in the form of assume-guarantee conditions,
  2. generate test cases efficiently from the models via SMT solving and an incremental algorithm (see the sketch below),
  3. add traceability information linking requirements, contracts, generated test cases and test results, and
  4. demonstrate the feasibility of the approach with our industrial partner Infineon on airbag electronics.
All of this comes with solid foundations and precise semantics.
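
To give a concrete flavour of steps 1 and 2, here is a minimal, self-contained sketch in Python using the Z3 SMT solver. It is not our tool chain, and the airbag-style variables (crash_force, ignition_on, deploy) and the threshold are invented for illustration; the sketch only shows the basic idea of encoding a single requirement as an assume-guarantee contract and letting the solver compute a concrete test input together with the expected output.

  # Minimal, hypothetical sketch (not the actual tool chain): one requirement of an
  # airbag-like controller encoded as an assume-guarantee contract, and an SMT query
  # that yields a concrete test input plus the expected output.
  from z3 import Int, Bool, And, Implies, Solver, sat

  crash_force = Int('crash_force')    # sensor input (hypothetical)
  ignition_on = Bool('ignition_on')   # sensor input (hypothetical)
  deploy      = Bool('deploy')        # controller output (hypothetical)

  # Contract: assume a valid sensor range; guarantee deployment above a threshold.
  assumption = And(crash_force >= 0, crash_force <= 100)
  guarantee  = Implies(And(ignition_on, crash_force >= 60), deploy)

  # Test-case generation idea: ask the solver for inputs that satisfy the assumption
  # and trigger the guarantee's premise; the guarantee then fixes the expected output.
  s = Solver()
  s.add(assumption, guarantee)
  s.add(ignition_on, crash_force >= 60)
  if s.check() == sat:
      m = s.model()
      print("test input      :", m[crash_force], m[ignition_on])
      print("expected output :", m[deploy])   # forced to True by the guarantee

The article's incremental algorithm and the traceability links are, of course, beyond this toy example.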

Bernhard K. Aichernig, Klaus Hörmaier, Florian Lorber, Dejan Nickovic, and Stefan Tiran. Require, test, and trace IT. International Journal on Software Tools for Technology Transfer (STTT), 19:409–426, 2017. Open Access. Published online: 29 November 2016. (PDF) (doi:10.1007/s10009-016-0444-z)

The article is open access and can be downloaded from Springer free of charge.

Enjoy!

Thursday, 12 January 2017

Professional Activities in 2017

Here is a summary of my activities in 2017. This list will be updated as tasks come along.
  • Key Researcher in the projects 
  • Invited Speaker at the 11th Alpine Verification Meeting (AVM 2017), Visegrád, Hungary, 18–21 Sep 2017.
  • Member of the appointment committee for the professorship in Information Security. 
  • External PhD examiner of Zhengkui Zhang, Aalborg University, Denmark. Thesis title: "Time and Cost Optimisation of Cyber-Physical Systems by Distributed Reachability Analysis". 
  • PC Member of 
    • A-MOST 2017, 13th Workshop on Advances in Model Based Testing
    • TAP 2017, 11th International Conference on Tests & Proofs
    • TASE 2017, 11th International Symposium on Theoretical Aspects of Software Engineering
    • MBT 2017, 11th International Workshop on Model-Based Testing
    • ICFEM 2017, 19th International Conference on Formal Engineering Methods
    • ICTAC 2017, 14th International Colloquium on Theoretical Aspects of Computing
  • Guest Editor of the Springer journal Formal Aspects of Computing for a special issue on TAP 2016
  • Associate Editor of the open access journal Frontiers in ICT, section Formal Methods.
  • Teaching 
    • Quality Assurance in Software Development, 
    • Software Paradigms, 
    • Model-based Testing, 
    • Logic and Logic Programming, and 
    • Functional Programming.


Wednesday, 19 October 2016

New Journal Article on Test-Case Generation and Fault Propagation

Our new article on fault propagation and its relation to model-based test-case generation has just been published:

Bernhard K. Aichernig, Elisabeth Jöbstl, and Martin Tappler. Does this fault lead to failure? Combining refinement and input-output conformance checking in fault-oriented test-case generation. Journal of Logical and Algebraic Methods in Programming, 85(5, Part 2):806–823, 2016. (PDF) (doi:10.1016/j.jlamp.2016.02.002)

The publisher Elsevier provides a free download of the original article until December 2.

The paper describes how we generate test cases for reactive systems by

  1. injecting faults into the models (mutation testing),
  2. generating a test case that triggers this fault (refinement checking), and
  3. extending the test case until an observable failure can be produced (ioco checking).

We show that this is more efficient than previous approaches that search for observable failures directly, i.e. skipping Step 2.
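
To make the idea tangible, here is a small, self-contained Python sketch. It is not the refinement and ioco machinery of the paper; the two toy state machines, the injected fault and the input alphabet are invented for illustration. The sketch merely shows the core principle: inject a fault into a model and search for an input sequence whose observable outputs expose it.

  # Toy illustration (not the paper's approach): an original model, a mutant with an
  # injected fault, and a search for the shortest input sequence that makes the
  # mutant observably misbehave. All names and behaviours are hypothetical.
  from itertools import product

  INPUTS = ['lock', 'unlock', 'wait']

  def original(state, inp):
      # Car-alarm-like model: the alarm arms after 'lock' followed by 'wait'.
      if state == 'open' and inp == 'lock':
          return 'closed', 'ok'
      if state == 'closed' and inp == 'wait':
          return 'armed', 'armed'
      if inp == 'unlock':
          return 'open', 'ok'
      return state, 'ok'

  def mutant(state, inp):
      # Injected fault: 'wait' in state 'closed' no longer arms the alarm.
      if state == 'closed' and inp == 'wait':
          return 'closed', 'ok'
      return original(state, inp)

  def distinguishing_test(max_len=4):
      # Enumerate input sequences by increasing length and return the shortest one
      # on which the mutant's outputs differ observably from the original's.
      for length in range(1, max_len + 1):
          for seq in product(INPUTS, repeat=length):
              s1 = s2 = 'open'
              for inp in seq:
                  s1, out1 = original(s1, inp)
                  s2, out2 = mutant(s2, inp)
                  if out1 != out2:
                      return seq, out1, out2
      return None

  print(distinguishing_test())
  # -> (('lock', 'wait'), 'armed', 'ok'): after 'lock' and 'wait' the fault is visible.

In the article, step 2 is done by refinement checking and step 3 by an ioco check, rather than by this brute-force enumeration; the toy search only illustrates why a fault may need several additional interactions before it becomes observable at the interface.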

Our case study of a car-alarm controller shows that for some subtle faults we need up to nine additional interactions until a fault becomes visible at the interface.