Engineers of all stripes insert extra debugging probes into what they're building. The exterior of an artifact is often silent about what goes on inside, so the extra probes give engineers important information they couldn't otherwise get. They can use that information to test theories more quickly about what the artifact is doing, and they use those tests to improve performance, debug faulty behavior, or gain extra assurance that the artifact is behaving correctly.
Software engineering is both the same and different in this regard. Software certainly benefits from extra internal probes, and for all of the standard reasons. A common way to add such probes is to insert debug or trace messages that get logged to a file. If the messages carry a timestamp, they also help with profiling for performance. For fine-grained profiling, though, log messages can be so slow that they distort the very timing being measured. In that case, the timing data might be gathered internally using cheap arithmetic operations, with summary information emitted at the end. This is all the same, I would imagine, in most any form of engineering.
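To make the summary-at-the-end idea concrete, here is a minimal sketch in Python. The names parse, do_the_real_parsing, and report_timings are hypothetical stand-ins, not anything from a real code base. Instead of logging a timestamped message on every call, the probe accumulates elapsed time in plain variables and prints one summary line when asked.

    import time

    _parse_total = 0.0   # accumulated seconds spent in parse()
    _parse_calls = 0     # number of calls to parse()

    def do_the_real_parsing(text):
        return text.split()          # stand-in for the code being profiled

    def parse(text):
        global _parse_total, _parse_calls
        start = time.perf_counter()
        result = do_the_real_parsing(text)
        _parse_total += time.perf_counter() - start
        _parse_calls += 1
        return result

    def report_timings():
        # One summary line at the end instead of one log message per call.
        avg = _parse_total / _parse_calls if _parse_calls else 0.0
        print("parse(): %d calls, %.3f s total, %.6f s/call"
              % (_parse_calls, _parse_total, avg))

Call report_timings() at shutdown (or register it with atexit) and you get the timing data without paying a logging cost on every call. That is exactly the kind of probe that is cheap to write and just as cheap to delete once the question is answered.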
What's different with software is that it's soft. Software can be changed very easily, and then changed back again. If you're building something physical, then you can change the spec very easily, but building a new physical device to play with involves a non-trivial construction step. With software, the spec is the artifact. Change the spec and you've changed the artifact.
As such, a large number of debugging and profiling probes are not worth checking into version control. Several times now I've watched a software engineer work on a performance problem, develop profiling probes as they go, and then check in those probes when they are finished. The odd thing I've noticed, however, is that the same engineer usually started by disabling or deleting the probes checked in by the previous engineer. Why is that? Why keep checking in code that the next person usually deletes anyway?
This question quantifies over many software projects and many engineers, so it's hard to give a single robust answer. Let me hazard two possible explanations.
One explanation is that it's simply very easy to develop new probes that do exactly what is desired. When the next engineer considers their small build-or-buy decision--build new probes, or buy the old ones--the expense of rebuilding is so small that the buy option is not very tempting.
The other explanation is that it's a problem of premature generalization. If you build a profiling system around the one profiling problem you are facing right now, it is unlikely to support the next profiling problem that comes up. This is only a partial explanation, though: I see new profiling systems built all the time when the old one could have been made to work. Usually it's just easier to build a new one.
Whatever the reason, I am currently not a fan of checking lots of debugging and profiling statements into version control. Keep the checked-in code lean and direct. Add extra probes when you need them, but take them back out before you commit. Instead of committing the probes to the code base, try to get your technique into your group's knowledge base. Write it up in English and post it to a wiki or a forum. If you are fortunate enough to have a Wave server, post it there.