/home/dhananjay

A personal take on various topics

And Why Exactly Do We Have Such Meaningless Analysis of Developer Satisfaction?

I remember my Market Research classes, where we were made to work extremely hard to come up with the right questions (it could take weeks) before administering a questionnaire. That's because if the questions do not trace back clearly to the objectives of the exercise, they can give totally irrelevant and useless results at the end. For a research study or survey to be useful, an enormous upfront effort was required to work out which answers would actually be useful, and to work backwards from there to the questions.

Here's a good counterexample of why we were taught that rigour. A report titled "Users' Choice: Scripting Language Ratings - A comprehensive user satisfaction survey of over 500 Software developers and IT Pros" from Evans Data Corporation measures "user satisfaction" with scripting languages (registration / personal data sharing required), and it got covered by The Register in "Developers more 'satisfied' with PHP than other codes". Without spending too much time, I will just point out one example: turn to page 23 - Performance. So PHP programmers are more satisfied with its performance than Python and Ruby developers are with theirs? I suspect that if Java ratings had been included, its satisfaction score might have been even lower. Yet the actual runtime performance of these languages is exactly the reverse. So what exactly does "user satisfaction of PHP developers with its performance is higher than that of Python and Ruby programmers with their respective performance" tell me, and how is it useful in even the remotest possible way? Beats me - but the most sensible explanation I can think of is that satisfaction is a function of the challenges and the context, and these are not equal across languages. So any such comparison is pretty meaningless. Even more damning is the abundance of evidence that actual language runtime performance is completely inconsistent with the reported satisfaction levels, which by itself calls into question the relevance of comparing those satisfaction levels at all.

I wouldn't have felt so strongly about it if only the first 14 pages of the report had been published. That would simply reflect user satisfaction - end of story. But the remaining 12 pages, which present the data in a comparative manner, make it an exceptionally meaningless exercise best ignored (or blogged about and then ignored :) ).
