However much we like to think of human knowledge progressing in the classroom from expert to neophyte, the reality is that there is quite a lot of lore that is not in the textbooks. It is passed by word of mouth, as practitioners and researchers share with each other their struggles and their solutions. I work on an AJAX development toolkit, so it was great to talk to a few people actually developing AJAX apps. Our team is starting to build software on the Eclipse platform, so it was helpful to talk to other people who have experience with it.
In the other direction, it clarified my own work for me when I had to explain it to others. What we call "runAsync" and "code splitting" in house can be most easily explained as "incremental download of application code". It allows AJAX programs to start up before all their code has downloaded; as the program runs, it downloads the rest of its code. If their eyes haven't glazed over yet, I can then explain that our approach differs from others in that we use control-flow analysis to decide what part of the program goes in the initial download. Competitors tend to use a module system, but with that approach it's unclear how to download only the initially needed part of a module rather than the whole thing.
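The shape of such a split point is easy to sketch in code. Here is a minimal plain-Java mock-up (the names "runAsync" and the callback interface are illustrative, not our actual toolkit API): the call returns immediately, and the callback fires only once the code fragment for that branch has arrived.

```java
// A plain-Java mock-up of a split point; names are illustrative,
// not a real toolkit API.
import java.util.ArrayList;
import java.util.List;

public class SplitPointSketch {
    interface AsyncCallback {
        void onLoaded();             // the code fragment has arrived
        void onFailure(Throwable t); // e.g. a network error
    }

    private static final List<AsyncCallback> pending = new ArrayList<>();

    // Marks a split point: code reachable only through the callback can
    // be left out of the initial download.
    static void runAsync(AsyncCallback cb) {
        pending.add(cb); // a real system would start a download here
    }

    // Stand-in for the rest of the code finishing its download.
    static void fragmentArrived() {
        List<AsyncCallback> ready = new ArrayList<>(pending);
        pending.clear();
        for (AsyncCallback cb : ready) cb.onLoaded();
    }

    static String demo() {
        final StringBuilder log = new StringBuilder();
        runAsync(new AsyncCallback() {
            public void onLoaded() { log.append("settings-ui;"); }
            public void onFailure(Throwable t) { log.append("failed;"); }
        });
        log.append("startup-done;"); // the app is usable before the fragment loads
        fragmentArrived();
        return log.toString();
    }

    public static void main(String[] args) {
        System.out.println(demo()); // startup-done;settings-ui;
    }
}
```

The point of the shape is visible in the output: the application reaches "startup-done" before the deferred code runs, even though the split point was crossed first.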
There's not much from those conversations worth writing up here, though. Instead, here are my picks for the most interesting papers of the conference.
"Analyzing the Performance of Code-copying Virtual Machines", Prokopski and Verbrugge.
I was really impressed by the amount of observation and analysis these guys did. Most of the time when researchers back a technique, they provide one or two case studies that include no surprises. That is more a proof that they actually did some amount of work on the technique than a real experiment designed to increase our understanding of the subject. Prokopski and Verbrugge tried their technique on three virtual machines, and it worked well on two but poorly on the third. Picture the more common scenario, where a researcher tries a technique on one virtual machine and then publishes the results: any one of these modified virtual machines, taken alone, would have given a misleading understanding of the technique as a whole. Indeed, I wonder how many research papers have a basically wrong result that nobody notices because the empirical evidence is so skimpy.
"Sound and Extensible Renaming for Java", Schafer, Ekman, and de Moor.
The title is about Java, but the technique is general. The problem is how to do renaming refactorings, which are one of the most frequent refactorings people do. Their technique uses an inverse of the usual name-lookup function that they call "access". Whereas the lookup function takes a name and a scope and finds the variable that is referred to, access takes a variable and a scope and computes a name (or qualified name) that can be used to access the variable. The main downside is that they did not justify the technique formally. It looks like a deep, general technique that deserves study both in minimal calculi and in a number of languages. I hope they or someone does so.
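To make the inversion concrete, here is a toy sketch of the two directions, under my own simplified model of scopes (a chain of name-to-variable bindings, no qualified names — the paper's access function additionally computes qualifiers, which this sketch only signals by returning null):

```java
// Toy model of lookup and its inverse "access"; my own simplification,
// not the paper's formulation.
import java.util.HashMap;
import java.util.Map;

public class AccessSketch {
    // A scope is a chain of name -> variable bindings.
    static class Scope {
        final Scope parent;
        final Map<String, String> bindings = new HashMap<>();
        Scope(Scope parent) { this.parent = parent; }
    }

    // The usual direction: name + scope -> variable.
    // Inner scopes shadow outer ones.
    static String lookup(String name, Scope s) {
        for (Scope cur = s; cur != null; cur = cur.parent) {
            if (cur.bindings.containsKey(name)) return cur.bindings.get(name);
        }
        return null;
    }

    // The inverse direction: variable + scope -> a name that lookup maps
    // back to that same variable. Returns null when no simple name works,
    // i.e. when the real technique would have to emit a qualified name.
    static String access(String variable, Scope s) {
        for (Scope cur = s; cur != null; cur = cur.parent) {
            for (Map.Entry<String, String> e : cur.bindings.entrySet()) {
                if (e.getValue().equals(variable)
                        && variable.equals(lookup(e.getKey(), s))) {
                    return e.getKey();
                }
            }
        }
        return null;
    }

    public static void main(String[] args) {
        Scope outer = new Scope(null);
        outer.bindings.put("x", "var1");
        Scope inner = new Scope(outer);
        inner.bindings.put("x", "var2"); // shadows the outer x

        System.out.println(access("var2", inner)); // x
        System.out.println(access("var1", inner)); // null: var1 is shadowed
    }
}
```

The round-trip property — lookup(access(v, s), s) == v whenever access succeeds — is what makes a rename sound: after renaming, every reference must still resolve to the variable it referred to before.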
"QVM: An Efficient Runtime for Detecting Defects in Deploymed Systems", Arnold, Vechev, and Yahav.
There is a trade-off in software that the more sanity checking it does as it runs, the slower the program tends to run. Programmers have to decide what things to bother checking for with assertions, and they have to decide in what contexts to enable assertions at all. QVM makes the trade-off less stark by making the level of assertion checking a continuous knob instead of a simple on/off switch. When you run the program, you specify how much CPU time you want to spend on assertion checking, and the infrastructure will turn assertions on and off as it tries to target that amount of CPU time. Moreover, it doesn't simply turn assertions on and off at a global level, but it makes some effort to spread out the assertions that it uses. If you have one assert that is in a frequently used code path, and one that is in a rarely used code path, it will throttle the frequent one more than the rarely-used one in order to increase the likelihood of finding bugs. All in all, it's a clever system, and I hope it becomes the norm for VMs of the future.
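The per-site throttling idea can be sketched in a few lines. This is only a rough illustration of the concept — not QVM's actual algorithm, which targets a CPU-time budget rather than the simple hit-count heuristic I use here:

```java
// Rough sketch of per-site assertion throttling; not QVM's algorithm.
import java.util.HashMap;
import java.util.Map;

public class ThrottledAsserts {
    private static final Map<String, Long> hits = new HashMap<>();

    // Returns true when the assertion body at this site should actually
    // execute. The sampling interval grows as the site gets hotter, so
    // rarely reached asserts still run nearly every time.
    static boolean shouldCheck(String siteId) {
        long n = hits.merge(siteId, 1L, Long::sum);
        long interval = Math.max(1, n / 100); // widens with the hit count
        return n % interval == 0;
    }

    // Simulate one hot call site and one cold one.
    static long[] demo() {
        long hot = 0, cold = 0;
        for (int i = 0; i < 10_000; i++) if (shouldCheck("hotLoop")) hot++;
        for (int i = 0; i < 5; i++)      if (shouldCheck("rarePath")) cold++;
        return new long[] { hot, cold };
    }

    public static void main(String[] args) {
        long[] r = demo();
        System.out.println("hot: " + r[0] + "/10000, cold: " + r[1] + "/5");
    }
}
```

Running the demo, the cold site is checked on every one of its five hits, while the hot site is checked only a few hundred times out of ten thousand — the behavior the paper is after, with the extra step that QVM adjusts the rates to hit a user-specified CPU budget.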
"Verifying Correct Usage of Atomic Blocks with Typestate", Beckman, Bierhoff, and Aldrich.
This paper addresses typestate-based program verification for concurrent programs. I don't know the area well, but they define five kinds of access permissions that a variable might have. An access permission describes the kind of aliasing going on with the referred-to object. The options are: unique (this is the only reference in the program), full (this reference is the only one used to modify the object), immutable (the underlying object never changes), pure (the underlying object might change, but not via this reference), and share (anything goes). I don't know if program verification per se is going to become widely practical any time soon, but labels like these are helpful in figuring out how to better structure and document programs. If you are writing concurrent software in a language fortunate enough to have software transactional memory available, these labels suggest what you should document when you write the comments for a field pointing to a shared object.
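Spelled out in code, that kind of documentation might look like the following. The annotation names are hypothetical — the paper's system actually checks permissions, whereas these markers are documentation only:

```java
// Hypothetical annotations for the five access permissions; in the
// paper a checker enforces them, here they are documentation only.
import java.lang.annotation.ElementType;
import java.lang.annotation.Target;
import java.util.ArrayList;
import java.util.List;

public class PermissionLabels {
    @Target(ElementType.FIELD) @interface Unique {}    // the only reference in the program
    @Target(ElementType.FIELD) @interface Full {}      // the only reference used to modify
    @Target(ElementType.FIELD) @interface Immutable {} // the object never changes
    @Target(ElementType.FIELD) @interface Pure {}      // may change, but not via this reference
    @Target(ElementType.FIELD) @interface Share {}     // anything goes

    static class JobRunner {
        @Unique    StringBuilder scratch = new StringBuilder();  // never escapes this object
        @Share     List<String> sharedQueue = new ArrayList<>(); // mutated by many threads
        @Immutable final String name = "jobs";                   // fixed after construction
    }

    public static void main(String[] args) {
        System.out.println(new JobRunner().name); // jobs
    }
}
```

Even without a checker, a reader of JobRunner now knows which fields need a transaction or lock around them and which are safe to read freely.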