We all make mistakes. But how we go about redressing those mistakes says a lot about our personality, both our strengths and our shortcomings. Bugs are a natural part of software development. Testing, not so much. I attribute this to, among other things, general developer laziness and the constant pressure of ‘Getting Things Done’. And there it is again – that damn word that developers, sysadmins and customers can never seem to agree upon – ‘Done’.
A bugfix without a test is an anti-fix. You heard me – right up there next to the anti-christ himself. After committing the bugfix, the developer thinks they’re ‘Done’ when in reality they’ve just introduced a new bug (and more complexity) into the system.
Bugs are incredibly interesting artifacts. They are indicative of that rare species – source code that is actually used (remember the urban myth that only 20% of your source code is actually used on a daily basis?). If a customer has taken the time to try to get something done with your application, the least you can do is write tests for any bugs they happened to come across. The test is your unspoken agreement with the end-user that this particular bug won’t happen again.
Imagine if we all spent an extra 10 minutes before releasing and thought about boundary cases or function interactions – but that’s another post!
bugfix - test = anti-fix (new bug!)
bugfix + test = fix
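To make the equation concrete, here is a minimal sketch of the test-first bugfix workflow in Python. The `slugify` function and the bug report ("crashes when the title is None") are hypothetical – the point is that the regression test is committed alongside the fix:

```python
import unittest

def slugify(title):
    """Hypothetical function from a bug report:
    'slugify crashes when the title is None'."""
    # The fix: guard against None (previously this line
    # raised AttributeError on None input).
    if title is None:
        return ""
    return title.strip().lower().replace(" ", "-")

class SlugifyRegressionTest(unittest.TestCase):
    def test_none_title_regression(self):
        # Encodes the original bug report; if the guard is ever
        # removed, the suite fails instead of the customer.
        self.assertEqual(slugify(None), "")

    def test_normal_title_still_works(self):
        # Guards against the fix breaking the happy path.
        self.assertEqual(slugify("Hello World"), "hello-world")

if __name__ == "__main__":
    unittest.main()
```

Write the test first, watch it fail, then apply the fix – the failing test is your proof that you actually reproduced the bug.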
How do you handle bugfixes? Share with us below.
12 thoughts on “Bugfixes without Tests are Anti-fixes”
So you have a poor opinion of Selenium? I just checked it out and it seems pretty alright to me. What kinds of problems have you experienced while using it? Or is it just the methodology it uses (e.g. recording clicks) which is distasteful?
I find Selenium testing hard to automate… meaning I need to first procure, install & configure an X-server with the relevant browsers. Then install & configure Selenium there (what drove me nuts about using Selenium years ago was padding the “wait for DOM to load and render” sleep statements). And finally get this server under Hudson command and control.
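For what it’s worth, the sleep-padding problem has a cleaner general answer: poll for a condition with a timeout instead of sleeping a fixed duration (this is the idea behind Selenium’s explicit waits). A minimal sketch of the polling pattern in plain Python, with a hypothetical “page loaded” condition:

```python
import time

def wait_until(condition, timeout=10.0, poll_interval=0.5):
    """Poll condition() until it returns a truthy value or the
    timeout expires, instead of sleeping a fixed amount."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = condition()
        if result:
            return result
        time.sleep(poll_interval)
    raise TimeoutError("condition not met within %.1fs" % timeout)

# Hypothetical usage: pretend the "page" finishes rendering
# at some point, and wait for it rather than sleeping.
state = {"loaded": False}
state["loaded"] = True  # in real life, the browser flips this

print(wait_until(lambda: state["loaded"]))  # True
```

The fast case returns immediately instead of burning a worst-case sleep, and the slow case fails loudly with a timeout instead of flaking.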
While I know this is ultimately a worthwhile endeavor, it’s a helluva hike (uphill both ways in the snow) …
I spend a lot of my time these days writing up reproducible steps for bugs in other people’s code, and heartily agree with everything you said above, especially about how stakeholders have different definitions of “done”.
My workplace uses Pivotal Tracker quite a bit, and one thing I find works quite well is how Pivotal eschews the term “done” altogether in favour of different sign-off states: started -> finished -> delivered -> accepted/rejected.
Accepted/rejected are my favourite states, as you can mandate a process where you require another stakeholder to sign off on the work you’ve completed before it’s “done”.
Luckily, there are solutions to the Selenium hassle Dan describes in the cloud. Both Sauce Labs and Cloud Testing are worth a look.
@Lindsay That’s exactly how we use PivotalTracker, too. Let a stakeholder accept or reject the stories. That really creates a shared understanding of “done”. We really made sure that the stakeholders understand that pressing that green button means “it will go live as it is”. That makes them think twice.
Great article. Totally agree. The problem I find is that you really need a process to surround your “expectations” associated with the removal of issues. A bug fix needs verification (this can be a peer review, regression test, or manual test), and you need an infrastructure to drive this behavior so you do not have to rely on the individual.
We use Parasoft Concerto for this: http://www.parasoft.com/alm
Great article. And one more thing, the links to qunit and jasmine have swapped. 🙂
Thanks for the catch! Now fixed.
Very interesting post showing the value of testing before fixing a bug 🙂
Nice to know that there are more people out there who share my opinion.