Research


"If it rained knowledge I'd hold out my hand; but I would not give myself the trouble to go in quest of it."
- Samuel Johnson (from Boswell's
Life of Johnson)

News: This usually out-of-date page is now somewhat superseded by the (also usually out-of-date) page for the Oregon State University Software Testing and Analysis Group. The publications page here is kept up to date, however.

VIEW FROM 30,000 FT.: My general research interest is in software engineering: designing, specifying, implementing, testing, analyzing, verifying, understanding, and debugging software written in widely used programming languages, such as ANSI C and Java. My focus is on approaches and methods that naturally give rise to (semi-)automated tools for programmers, testers, and other software engineers. The power of computer science is in automation -- from design to testing, we need better tools to help us write better programs.

My work to date has usually involved some variety of model checking, but the specifics have varied from full exploration of highly abstracted systems (with MAGIC) to methods more akin to an unusual approach to testing (heuristics with JPF, model-driven verification of C programs with SPIN). Years of frustration when attempting to understand counterexamples produced by model checkers have convinced me that error explanation and fault localization are important (and interesting) topics, and my thesis work was an investigation of how a model checker can be used not only to find an error, but to explain and localize that error. I made use of distance metrics on program executions to find causes for errors, in a manner inspired by the counterfactual theory of causality proposed by David Lewis.
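
To give the flavor of the distance-metric idea, here is a minimal sketch (the trace representation is invented for illustration; the real implementations align executions over SSA unwindings of the program, not a fixed grid of steps and variables):

    #include <stdio.h>

    #define STEPS 4
    #define VARS  3

    /* One execution, drastically simplified: the value of each
       variable at each step. */
    typedef struct { int v[STEPS][VARS]; } execution;

    /* Distance = number of (step, variable) points at which two
       executions disagree. */
    int distance(const execution *a, const execution *b) {
        int d = 0;
        for (int s = 0; s < STEPS; s++)
            for (int i = 0; i < VARS; i++)
                if (a->v[s][i] != b->v[s][i])
                    d++;
        return d;
    }

    int main(void) {
        execution pass = {{{0}}}, fail = {{{0}}};
        fail.v[2][1] = 1;   /* the runs differ at exactly one point */
        printf("distance = %d\n", distance(&pass, &fail));  /* prints 1 */
        return 0;
    }

Given a failing execution, the tool searches for a successful execution minimizing this distance; the few points where the two runs differ are reported as candidate causes -- the counterfactual reading being that the error would not have occurred had just those values been different.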

More recently, at JPL, I have become interested in the connections between model checking and (randomized) testing -- including using the same models and tools for both, reusing simulation infrastructure, and hybrid approaches such as randomized testing with the complete nondeterminism of model checking. More "frivolously" I have also become fascinated by the taxonomy and nature of actual bugs in programs, particularly file systems.

More generally, I see model checking (like well-done testing) primarily as a useful tool for program understanding -- the investigative side of software engineering, if you will. Model checking traditionally answers the question: does my (program, protocol, design) have a trace like this? Testing, in a sense, answers the same question. Even at the earliest stages of software design, thinking about how to make answering that question easier not only eventually aids verification, but helps us think about how a program will actually execute. One interesting possibility to consider is the use of model checking and other state-space exploration methods to help visualize the executions of a program. Recent work at IBM has shown the value of visualizing memory configurations of large systems; it seems that program trace-spaces would also be useful in software engineering and performance debugging.



UPDATE (11/10/10): I'll be chairing SPIN 2011, co-located with CAV 2011 in Snowbird, UT.

UPDATE (7/1/10): Not updating this site as well as I should... faculty life is busy. On the PCs for SAVCBS 2010, NASA Formal Methods 2011, and FASE 2011... The research changes mentioned below mostly involve an interest in end-user testing/debugging for machine-learned software, with Margaret Burnett and fellow CMU-er Weng-Keen Wong.

UPDATE (5/1/09): I'll be on the PC for SAVCBS 2009. I'll probably be changing research directions in some new and exciting ways (though also continuing past work) as a faculty member at Oregon State University, starting later this summer.

UPDATE (1/5/09): I'm on the PC for this year's Workshop on Constraints in Formal Verification. Submit away.

UPDATE (8/5/08): Paper for my invited talk at CFV'08 and VERIFY'08 gives a nice overview of the file system verification & testing that's occupied my time the last couple of years, and a lot of what LaRS has been up to in general. Take our conclusions on the success of various methods with a grain of salt -- one lab, a handful of people -- still, I think it's safe to say that for rich properties depending on data, the automated predicate abstraction tools aren't really ready for prime time (unless you have the authors captive, maybe).

UPDATE (6/26/08): I'll be on the PC for SAVCBS 2008. I just (a week ago) joined the MSL Flight Software Internal Test (FIT) team as lead developer/designer for the test automation framework. I don't know how much "research"y testing will be useful here, but after this I might be more worth listening to when I opine that something does or does not work in testing Actual Important Code.

UPDATE (6/10/08): More papers, first a write-up of some work on automated testing of models from AI planners, at MoChArt 2008. More central to what I've been working on lately, there's an ASE paper, "Random Test Run Length and Effectiveness," mostly based on the file system testing here at JPL. Unfortunately, we don't actually propose a solution to the problem of selecting an optimal (or even good) test run length, but the data does demonstrate pretty convincingly (IMO) that this is a major factor in whether random testing is effective for a system. This paper wouldn't have been written if not for Carlos Pacheco asking an innocent question after my ICSE '07 talk. Makes me wonder what else you have to luck into getting right in order for random testing to work (that we're not investigating)...
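
To make the run-length question concrete, here is a hedged sketch of carving a fixed operation budget into runs -- the system under test is a toy counter standing in for something like the file system, and none of these names come from the actual harness:

    #include <stdio.h>
    #include <stdlib.h>

    /* Toy stand-in for the system under test. */
    static int state;
    static void sut_reset(void)     { state = 0; }
    static void sut_random_op(void) { state += (rand() % 3) - 1; }
    static int  sut_ok(void)        { return state > -10; /* toy invariant */ }

    /* Spend a budget of total operations as runs of length len,
       resetting between runs.  Short runs mean many fresh starts;
       long runs reach deeper states. */
    static int spend_budget(int total, int len) {
        int failures = 0;
        for (int spent = 0; spent < total; spent += len) {
            sut_reset();
            for (int i = 0; i < len && spent + i < total; i++) {
                sut_random_op();
                if (!sut_ok()) { failures++; break; }
            }
        }
        return failures;
    }

    int main(void) {
        /* Same budget, carved five different ways. */
        for (int len = 10; len <= 100000; len *= 10)
            printf("run length %6d: %d failing runs\n",
                   len, spend_budget(1000000, len));
        return 0;
    }

Runs of length 10 almost never wander 10 steps below zero, while very long runs nearly always do -- a cartoon version of the effect the paper measures on real systems.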

UPDATE (5/16/08): Papers at WODA 2008 and SPIN 2008. First is on using SPIN to do both random testing and model checking -- with a little comparison of the two. Also sort of an "advertisement" paper, to see if any other dynamic analysis folks want to use SPIN to do this kind of thing -- I think we do show that it's pretty darn easy to do. The second one's a tool paper on heuristics for finding bugs with SPIN (and they do seem to work surprisingly well, perhaps even better than the paper suggests, says the slightly dubious third author). Other than that, using the ideas in both these papers to test the heck out of the MSL file storage modules, with pretty good results (but we'd like to find EVERY LAST BUG before we head to Mars).
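
In caricature, the trick in the WODA paper looks like the following (hypothetical names; the real work drives the file system through SPIN's embedded-C support rather than CBMC): one harness, with a single choice function that is nondeterministic under the model checker and random otherwise, so the same driver code serves both purposes:

    #include <stdlib.h>

    #ifdef MODEL_CHECKING
    /* Under a model checker: branch on every alternative.  This is
       the CBMC idiom; in SPIN the same role is played by an if/fi
       over the alternatives. */
    int  nondet_int(void);
    void __CPROVER_assume(_Bool cond);
    static int choose(int n) {
        int c = nondet_int();
        __CPROVER_assume(c >= 0 && c < n);
        return c;
    }
    #else
    /* Under random testing: just pick one. */
    static int choose(int n) { return rand() % n; }
    #endif

    /* Shared driver; the fs_* operations are placeholders. */
    void harness(int steps) {
        for (int i = 0; i < steps; i++) {
            switch (choose(3)) {
            case 0: /* fs_create(...) */ break;
            case 1: /* fs_write(...)  */ break;
            case 2: /* fs_delete(...) */ break;
            }
            /* assert(invariants_hold()); */
        }
    }

    int main(void) { harness(100); return 0; }

Everything downstream of choose() -- operations, oracles, logging -- is shared, which is what makes both the head-to-head comparison and the reuse of one harness for two techniques cheap.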

UPDATE (4/14/08): I'll be the invited speaker at the Constraints in Formal Verification 08 workshop in Australia this August, where I'll discuss "Putting Flight Software Through the Paces with Testing, Model Checking, and Constraint-Solving."

UPDATE (2/19/08): Klaus Havelund and I will be teaching CS 119 Reliable Software: Testing and Monitoring at Caltech starting April 1st. Should be enjoyable -- perhaps I'll know more about how to test software after we're done...

UPDATE (2/5/08): Congratulations to my advisor, Ed Clarke, who won this year's ACM Turing Award (along with Emerson and Sifakis) for inventing model checking!

UPDATE (1/12/08): Presented "Extending Model Checking with Dynamic Analysis" at VMCAI '08. Preparing a talk on the same rough topic for this year's SoCal Workshop on Programming Languages and Systems, and giving a talk on some similar themes (at a high level) at Caltech on the 19th. In general, this is all looking at the places where model checking and testing intersect -- when state spaces are too large, and properties too rich (full functional correctness of a file system, for example) for automated abstraction, what can we do? Random testing is useful (I'm looking at how to wisely carve up a test budget into test runs, with Jamie Andrews at Western Ontario and RuGang Xu, summer intern '07 from UCLA), but explicit-state model checking with the expectation that you won't cover the whole state space seems better for programs well suited to a small nondeterministic harness. Where do "design for testability" and "design for verification" lead? Not, probably, to programmers working with temporal logic or theorem provers.


UPDATE (1/5/07): "Randomized Differential Testing as a Prelude to Formal Verification" was accepted for publication at ICSE 2007. I will be on the program committee for SPIN 2007. I was on the PC for the first international workshop on Verification and Debugging, at CAV this year, which was a lot of fun.


UPDATE (12/11/05): "Exploiting Traces in Program Analsyis" was accepted for publication at TACAS 2006.


UPDATE (11/5/05): Still working at JPL; still learning the ropes -- what is involved in this complicated beast they call flight software? I've been working on various things, including (with Rajeev Joshi) analysis (well, bounded model checking in the implementation) of a program given the knowledge that it produced a given trace -- in a sense, slicing with the series of printfs the program produced as the slicing criterion.
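
A hedged sketch of how that encoding might look in the CBMC idiom (the log values and names here are invented; the actual work has its own encoding): every printf in the instrumented program becomes an assumption that the execution printed what the log says, so the checker only reasons about runs consistent with the observed trace:

    /* Check with: cbmc trace.c */
    int  nondet_int(void);                  /* CBMC: unconstrained input */
    void __CPROVER_assume(_Bool cond);      /* CBMC built-in             */
    void __CPROVER_assert(_Bool cond, const char *msg);

    static const int observed[] = { 7, 7, 9 };   /* the log we have */
    static int pos;

    /* Stands in for printf("%d\n", v) in the original program. */
    static void logged(int v) { __CPROVER_assume(v == observed[pos++]); }

    int main(void) {
        int x = nondet_int();   /* input we did not record        */
        logged(x);              /* first log line: 7, so x == 7   */
        int y = x + 1;
        logged(x);              /* second log line                */
        logged(y + 1);          /* third log line: forces y == 8  */
        /* Questions about any run producing this exact log: */
        __CPROVER_assert(y == 8, "y is pinned down by the trace");
        return 0;
    }

The assumptions act like a slicing criterion: states and inputs incompatible with the printed trace are simply pruned from the analysis.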

Enjoying reading Andreas Zeller's Why Programs Fail.

I'll be on the program committee for the 1st International Workshop on Verification and Debugging (the site's a bit preliminary right now), to be held with CAV '06 as part of FLOC.


A list of my publications is available.


I'm directly working on, or my research is associated with, the following tools, which are available for download:

  1. CBMC

    CBMC is a bounded model checker for ANSI-C programs. It provides verification of array bounds, pointer safety, exceptions, and user-specified assertions (a toy example follows this tool list).

    Coming soon: I will be releasing the explain tool for use in understanding counterexamples produced by CBMC.


  2. MAGIC: Modular Analysis of proGrams In C

    MAGIC is a tool for compositional verification of (concurrent) ANSI-C programs, based on a two-level CounterExample-Guided Abstraction Refinement (CEGAR) approach.

    I worked with Sagar Chaki, Ofer Strichman, and Edmund Clarke on the predicate minimization algorithm used in MAGIC, and am now implementing distance metric-based error explanation for MAGIC.


  3. Java PathFinder

    Java PathFinder (JPF) is a tool based on a custom Java Virtual Machine (JVM) that allows model checking and testing of Java bytecode programs. I worked with NASA Ames Research Center's Automated Software Engineering group on JPF over two summers as a participant in the Summer Student Research Program.



  In the past, I also worked on these tools:

  4. The Concurrency Workbench (CWB-NC)

  5. SYMP
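
As promised above, a toy taste of the kind of check CBMC performs (the bug is contrived, and nothing here comes from CBMC's own examples). Running cbmc toy.c --bounds-check reports the out-of-bounds write, with a concrete counterexample trace:

    /* toy.c */
    int nondet_int(void);        /* CBMC: unconstrained input */

    int main(void) {
        int buf[10];
        int n = nondet_int();
        if (n >= 0)              /* forgot the upper bound... */
            buf[n] = 1;          /* violated when n >= 10     */
        return 0;
    }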


UPDATE (4/27/05): I started work at JPL on Monday; so far, so good. STTT journal paper on explanation accepted, and starting work on some new topics now. Lots of very cool things going on at LaRS.

In other news, JPF was open sourced today! It took a while, but Willem and Jon and Peter did it; way to go, guys. The release is here, and the Slashdot story(!) is here. Even I felt the effects -- factor of 10 or more boost in hits on my home page today. I guess Java model checking is famous now?


Pre-JPL Updates & Information


Dr. Edmund Clarke was my advisor at CMU. I research software engineering and formal verification, particularly model checking. I graduated in March 2005, and can now be found working at the Laboratory for Reliable Software (LaRS) at the Jet Propulsion Laboratory.
My undergraduate research involved an ABTA (Alternating Büchi Tableau Automaton) based model checker for the Concurrency Workbench of the New Century, and simplifications to ABTAs produced from tableau rules for GCTL* (a modification of the temporal logic CTL*) and the L2 fragment of the mu-calculus that make ABTA model checking more efficient. The model-checking algorithm is a modification of the one described in Girish Bhat's Ph.D. thesis. Dr. Rance Cleaveland directed this research, and has been astonishingly helpful throughout my career in higher education. Our paper "Efficient Model Checking Via Büchi Tableau Automata" was published in the proceedings of CAV '01.

In the past at CMU I worked some on Sergey Berezin's SyMP system, although Sergey ended up spending more time fixing my code than it would have taken to write it himself, I suspect.

UPDATE (6/16/00): This summer I'm working with Mihalis Yannakakis and Doron Peled at Lucent/Bell Labs on black box checking.

UPDATE (8/29/00): Back at CMU, but continuing work on black-box checking as well as other things.

SOFTWARE MODEL CHECKING

UPDATE (8/16/01): About to leave NASA Ames Research Center, where I've been working on heuristic search for Java PathFinder 2 with Willem Visser and the other members of the Automated Software Engineering group. I plan to continue this work during the year; the results so far have been very good, and there remain many avenues to explore.

UPDATE (1/16/02): A couple of papers to be published: "Adaptive Model Checking," by myself, Doron Peled, and Mihalis Yannakakis, to be published in the proceedings of TACAS 2002 and "Heuristic Model Checking for Java Programs," with Willem Visser, to be published in the proceedings of SPIN 2002.

UPDATE (4/26/02): "Model Checking Java Programs using Structural Heuristics" will appear in ISSTA 2002. As the semester finishes at CMU, I'm working with Sagar Chaki on predicate abstraction, in particular analyzing automatically discovered predicates, before returning to NASA Ames this summer.

UPDATE (2/14/03): A couple more papers to be published: "Modular Verification of Software Components in C," by Sagar Chaki, Edmund Clarke, Alex Groce, Somesh Jha, and Helmut Veith, to be presented at ICSE 2003 and "What Went Wrong: Explaining Counterexamples," with Willem Visser, to appear in SPIN 2003 (SPIN is being held after ICSE this year). I'm currently investigating optimization for predicate abstraction (for Sagar Chaki's MAGIC tool) and methods for automatically explaining counterexamples, possibly in the world of autonomous systems.

UPDATE (5/12/03): Wednesday, May 21st, at 10:00 AM in WH 5409, I will be presenting my thesis proposal, on the topic: "Explaining Counterexamples: Causal Analysis and Comparison of Transition Sequences" (abstract). My committee members are Edmund Clarke, David Garlan, Reid Simmons, and Willem Visser.

"Modular Verification of Software Components in C" was one of two ICSE ACM SIGSOFT Distinguished Paper Award winners. The MAGIC tool has been officially released.

UPDATE (6/13/03): My thesis topic has been approved and I am continuing work on my approach for explaining (and possibly automatically suggesting fixes for) errors in programs.

"Predicate Abstraction with Minimum Predicates" was accepted for publication in CHARME 2003.

UPDATE (8/03/03): Initial implementation of error explanation within the CBMC bounded model-checker for ANSI-C programs is complete, but lacking a number of features. CBMC is available now for download, and I hope to have an alpha release of error explanation capabilities before too long. Better handling of arrays and structures and integration with the GUI are the only major features needed before release.

UPDATE (11/21/03): I'll be attending Dagstuhl Seminar 03491, Understanding Program Dynamics right after Thanksgiving, presenting my work on error explanation with distance metrics.

"Efficient Verification of Sequential and Concurrent C Programs," by Sagar Chaki, Edmund Clarke, Joel Ouaknine, Ofer Strichman, Karen Yorav, and myself, has just been accepted for publication in the special Software Model Checking issue of Formal Methods in System Design. "Heuristics for Model Checking Java Programs" by Willem Visser and myself has been accepted for publication in Software Tools for Technology Transfer.

The explain tool is ready for release, but I haven't gotten around to it yet, and would like to better integrate it with the CBMC GUI before putting it on the website.

UPDATE (12/12/03): "Error Explanation with Distance Metrics" was accepted for publication in TACAS 2004.

UPDATE (3/26/04): "Understanding Counterexamples with explain" was accepted for publication in CAV 2004. I'm off tomorrow to Barcelona for ETAPS to present the distance metric paper. In the meantime, I've implemented distance metric based error explanation for MAGIC, and it's working quite nicely.

I forgot to mention the journal version of the ICSE paper also being accepted (see publications page).

UPDATE (6/11/04): "Explaining Abstract Counterexamples" will appear at FSE (Foundations of Software Engineering) in November. "Making the Most of BMC Counterexamples" has been accepted for BMC'04, the second workshop on Bounded Model Checking.

UPDATE (2/17/05): I've scheduled a thesis defense for March 3rd (two weeks from now) and will be starting work at JPL's Laboratory for Reliable Software (LaRS) as soon as I finish up at CMU (and move to California). It should be a great opportunity to do some serious model checking research with real-world (and other-worldly!) examples.

My thesis draft looks to be in good shape for public release, though I expect to be tweaking it until the defense or shortly thereafter.

UPDATE (3/3/05): Successful thesis defense today! I will be making some changes to address a few points that need to be clarified, and then, well, it's on to Mars, so to speak...


Some of this research was presented at meetings of the Specification and Verification Center at CMU.
Comments & questions to agroce@gmail.com.