The article on Hacknot talks about testers and the grief developers get from their incorrect problem-reporting practices. Of course, developers do exactly the same things when they are the ones reporting problems.
I am talking about Technical Support requests. When a developer (or an operations person, or, worse yet, an operations manager) calls with a support case, they do exactly the same things that Hacknot described for the testers.
As a very common example, the support case description will read "WebLogic crashed without reason." After extensive probing, the following things emerge:
- The WebLogic process hasn't actually exited; it is still around, it just stopped responding. That means WebLogic hung rather than crashed. Equally annoying, yet caused by a completely different set of problems.
- "No reason" turns out to be a new code drop that, in the reporter's opinion, changed nothing important and therefore was not worth mentioning.
- Thread dumps and log/config analysis often reveal a bottleneck and/or an incorrect configuration. At this point, it usually emerges that the server load at the time of the hang was higher than ever before, and no, the server was not load tested in the same configuration (e.g. a cluster behind a proxy) anywhere else.
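The hang-versus-crash distinction above can be checked in seconds before filing a case. A minimal sketch (the PID handling is illustrative; a real server's PID would come from its pid file or `ps`): a crashed process is gone, while a hung one is still alive but unresponsive, and the hung case is the one where a thread dump is worth taking.

```shell
# For illustration, use this shell's own PID; in practice, use the
# server process PID (e.g. from a pid file or ps output).
PID=$$

# kill -0 sends no signal; it only tests whether the process exists.
if kill -0 "$PID" 2>/dev/null; then
  echo "process $PID is alive - likely hung, take a thread dump"
  # For a Java/WebLogic process you would now run: kill -3 "$PID"
  # (SIGQUIT makes the JVM print a thread dump to its stdout log),
  # or, with a JDK on the box: jstack "$PID"
else
  echo "process $PID is gone - it actually crashed, check the logs"
fi
```

Attaching that thread dump to the initial support request, instead of the word "crashed", removes an entire round-trip of probing.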
I have to admit that when I was a developer, I committed all those mistakes and more myself. Working in BEA support has opened my eyes, and the code I produce now is more user friendly, with better error reporting.
As a result, my thought on the QA/Dev separation is to fold developers into the testing team for short rotational periods.
The developer will then discover whether there are real causes of bad problem reports (insufficient logs, misleading dialogs, or just unclear logic) - causes that only get acknowledged as problems once a developer has walked a mile in a tester's shoes.
A final benefit is that developers and testers have different ways of doing similar things, and an exchange of techniques is always beneficial.
BlogicBlogger Over and Out