There is a technique sometimes used in retrospectives, sometimes called “taking the temperature”, though in truth it can be used in many settings and I’m sure it goes by various names. Basically, you pose a question to a group, for example:
“How well do we feel the last iteration went?”
“How much do you know about Agile?”
“Are you confident we can meet this objective?”
Everyone takes a piece of paper - usually a post-it note - and writes a number from 1 to 10. Depending on the question, 1 means something like bad, nothing, or unlikely, while 10 means something like brilliant, everything, or certain.
The papers are folded (it’s anonymous), collected, and opened.
The anonymous bit is important: I once sat with a team where everyone answered 6 to the question “How did this sprint go?” In my opinion the sprint was horrid; indeed, the whole initiative was a mess, but I too answered 6. Maybe I answered 6 because the people before me had anchored my answer, maybe I answered 6 to conform with the group, maybe I didn’t want to rock the boat. But probably I answered 6 because, in my opinion, the manager running the team, the initiative, and this meeting was the cause of most of the problems, and I felt that anyone who tried to speak up would be quizzed with “Why do you say that?”, and the answer “Because you really don’t get it, you silly old …” was not very useful.
The thing is, once you have those numbers they are a talking point. I like to post them up on a board, in order, where we can see them and discuss them. If the question is “Are we confident in our ability to meet this goal?” and I am faced with a lot of scores around 6 or 7 I might say: “What could we do to increase our confidence?” and we could have a general discussion about this.
I usually use this technique at the start of a training session to judge the knowledge of Agile in the room. That helps me adjust my level of delivery to the audience.
The thing about this technique that I wanted to bring up today is this: the mean isn’t particularly interesting, i.e. summing all the numbers and dividing by the number of votes to get a mean average isn’t very useful. In part this is because the average doesn’t tell us much, regression to the mean will happen, and it doesn’t really give you anything to talk about.
Sure, it’s a bit of a vanity metric: “Look, we scored 6.8, which is up from 6.65 last iteration!”
Median (the ordered middle) might be more interesting, and mode (the most common value) certainly is. They give you something to talk about. Faced with 8, 6, 6, 6, 5, 3: “Why do most people think this iteration went well, but not everyone?”
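To make the difference concrete, here is a minimal sketch using Python’s standard `statistics` module on the example scores above (the variable name `votes` is my own, not from the text):

```python
# Summarising the example votes from the text: 8, 6, 6, 6, 5, 3.
# Python's standard library gives all three averages directly.
from statistics import mean, median, mode

votes = [8, 6, 6, 6, 5, 3]

print(mean(votes))    # about 5.67 - the "vanity" number
print(median(votes))  # 6.0 - the ordered middle
print(mode(votes))    # 6 - the most common score
```

The mean lands between the scores and matches nobody’s vote, while the median and mode both point at the cluster of 6s - which is exactly the thing worth discussing.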
There may be value in recording an average to track the ups and downs over time. I’m not sure there is, but if I were going to, I’d want to track the median rather than the mean; to my mind it contains more information. (Even here I’d worry about Goodhart’s Law and goal displacement: someone might come to think the aim of an iteration was to score a good number.)
But what is really interesting - and the reason no average is that interesting - is the extremes.
“Why do we think that someone thought the iteration was not very good?”
“What could have caused someone to rate the iteration as high as 8?”
“How could we have a good (8) and a poor (3) in the same iteration?”
The extremes, particularly when there are opposing answers, make a good talking point. Remember, of course, to keep it anonymous: don’t ask “Will whoever scored 3 explain why they dissent?”; ask instead “What do we think made someone rate the iteration as a 3?”
So if you are going to use this technique please:
- Keep it anonymous
- Share the results with the whole group
- Avoid the mean
- Talk about the extremes
- Talk about how you could score higher in future