In my last entry I discussed some of the lessons I learned coding the Agile on the Beach submission and review system, Mimas (feel free to have a play). For completeness I’d like to tell you what the system does and what I want to do next:
For the speaker…
What you get is a fairly basic form to submit a proposal. You receive an e-mail confirmation and at some later date notification of the outcome. You should also receive a link to the scoring and comments your submission received - that is, if the organiser wants to send it.
Before I go on, it is worth pointing out that Mimas is built around the assumption that conferences have tracks, e.g. a software coding track, a product design track, etc. Submissions are tied to a conference track, reviewers are asked to review all of a track’s submissions, and final decisions are made on a track-by-track basis. Tracks are configured when the conference is set up, so they are largely fixed; that could change in future.
For the reviewer…
Reviewers are assigned to one or more tracks and asked to review the submissions in those tracks. They get a list of all the submissions and can score each one from -3 (don’t want) to +3 (really want). That is round one. The conference organisers can then decide which submissions to shortlist for round two.
In round two reviewers are asked to rank the shortlisted submissions - from 1 at the top to… well, however many are shortlisted: 5, 6, 10, 16 - whatever.
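As a hypothetical sketch of the two round types described above (the function names and checks are mine, not anything from Mimas’s internals):

```python
# Hypothetical sketch of the two review round types described above;
# all names here are illustrative, not taken from Mimas itself.

def validate_score(score: int) -> int:
    """Round one: score a submission from -3 (don't want) to +3 (really want)."""
    if not -3 <= score <= 3:
        raise ValueError(f"score {score} is outside the -3..+3 range")
    return score

def validate_ranking(ranking: list) -> list:
    """Round two: rank the shortlist from 1 at the top down to however many
    submissions were shortlisted - each submission appears exactly once."""
    if len(set(ranking)) != len(ranking):
        raise ValueError("each shortlisted submission may appear only once")
    return ranking
```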
For organisers…
Creating a conference goes without saying! They can also configure it: maximum number of submissions per speaker, maximum number of co-speakers, tracks, talk types and durations, expenses options, e-mail templates, e-mail CCs and BCCs, and a few other bits.
But not all organisers are equal. There is a permissions system which lets the conference creator decide just what other organisers are allowed to see and do. They also get to decide who the reviewers are and assign them to tracks.
Most of what the organisers see are reports which slice and dice the submissions in different ways but those with the correct permissions can do some other things:
- Move submissions between tracks
- Review submissions and reviewer scores
- Shortlist submissions from round one to round two
- Decline and accept submissions
- Export submissions data
- Send bulk (personalised) e-mail to submitters
- And of course change permissions for other organisers
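A per-organiser permissions model along the lines described above could be sketched like this - the permission names and functions are entirely my invention, not Mimas’s actual model:

```python
# Hypothetical sketch of per-organiser permissions; the names mirror the
# capabilities listed above but are not Mimas's real identifiers.
ALL_PERMISSIONS = {
    "move_tracks", "review_scores", "shortlist",
    "accept_decline", "export", "bulk_email", "manage_permissions",
}

grants = {}  # organiser name -> set of granted permissions

def grant(organiser, permission):
    """The conference creator grants a capability to another organiser."""
    if permission not in ALL_PERMISSIONS:
        raise ValueError(f"unknown permission: {permission}")
    grants.setdefault(organiser, set()).add(permission)

def can(organiser, permission):
    """Check before showing an action: organisers see only what they may do."""
    return permission in grants.get(organiser, set())
```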
What happens next?
There are two “big” changes I’d like to make…
I want to revise the way the system handles reviews so that rounds become configurable. Right now there are two rounds: a scoring round (-3 to +3) and a ranking round. I want to allow conference organisers to decide how many rounds of voting there should be and to select the type of each round. That will make it possible to configure scoring ranges (e.g. 0 to 5 instead of -3 to +3), add more types of voting (e.g. multi-criteria) and allow speakers to be anonymised.
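One way to picture configurable rounds is as plain data chosen by the organiser rather than logic baked into the system. This is a minimal sketch under my own assumed field names, not Mimas’s actual schema:

```python
# Illustrative sketch of configurable review rounds; the dataclass and its
# fields are assumptions for this example, not Mimas's real design.
from dataclasses import dataclass

@dataclass
class RoundConfig:
    kind: str                # "score" or "rank" (more types, e.g. multi-criteria, later)
    min_score: int = 0       # only meaningful for scoring rounds
    max_score: int = 0
    anonymise: bool = False  # hide speaker identities from reviewers

# Today's fixed setup expressed as data: a -3..+3 scoring round, then a ranking round.
default_rounds = [
    RoundConfig(kind="score", min_score=-3, max_score=3),
    RoundConfig(kind="rank"),
]

# An organiser could instead choose, say, a single anonymised 0-5 scoring round.
custom_rounds = [RoundConfig(kind="score", min_score=0, max_score=5, anonymise=True)]
```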
The other big change, which comes from reviewer feedback, is to allow private comments. At the moment all reviewer comments on submissions are shared with submitters. But reviewers want the ability to make comments that are not shared.
This is a difficult one: I’d like reviewers to be open and share all their thoughts, but some reviewers don’t like that, and because of that they find the system less useful. If I were Apple maybe I’d just say no, that’s the way it is. But I’m not Apple and I need to meet client expectations.
Technically, I want to change the e-mail handling system, again. This is classic refactoring: although the e-mail system is in its third version, each time I’ve changed the way it works I’ve seen a better way of doing it. Submitters and reviewers won’t see much change but improving the architecture will allow me to make enhancements more easily in future.
The other technical change I should make concerns testing. I really should look at ways to bring more of the UI under test - or at least run “bigger” tests. While there are lots of fine-grained unit tests, I increasingly find it necessary to manually test how the system hangs together, e.g. when I make changes to mail handling, because actions in the UI cause quite a lot of stuff to happen below.
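One shape such a “bigger” test could take is driving an action and asserting on its side effects, with a fake standing in for the mail layer. Everything here is hypothetical - the function and the fake outbox are mine, not Mimas’s code:

```python
# Sketch of a coarser-grained test: exercise an action and check its side
# effects (here, that an e-mail gets sent) rather than each unit in isolation.
# The fake outbox stands in for the real mail-handling layer.

class FakeOutbox:
    def __init__(self):
        self.sent = []

    def send(self, to, subject):
        self.sent.append((to, subject))

def accept_submission(submission, outbox):
    """Stand-in for the UI action: accepting a submission notifies the speaker."""
    submission["status"] = "accepted"
    outbox.send(submission["speaker_email"], "Your talk was accepted")

def test_accepting_notifies_speaker():
    outbox = FakeOutbox()
    submission = {"speaker_email": "speaker@example.com", "status": "submitted"}
    accept_submission(submission, outbox)
    assert submission["status"] == "accepted"
    assert outbox.sent == [("speaker@example.com", "Your talk was accepted")]
```

A test like this survives internal refactorings of the mail system, which is exactly when manual checking is currently needed.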
There are many more small changes I want to make to the system but they would probably take longer to describe than to code!
So the last question is: is this worthwhile?
If the system is only ever for Agile on the Beach then probably not. It works.
If the system is used by other conferences then some of these changes, perhaps all, are worth doing. But then, maybe no one will use it until I make those changes!
A classic dilemma.
If the system is my own little plaything then yes, all these changes are worth doing because I enjoy coding it! But enjoying doing something is different from it being commercially worthwhile. Financially I’m certainly better off doing something else. But if I value a hobby then it is worth doing.
Another classic dilemma.
One might ask: has it been worth doing so far?
- Improved the AOTB submission and review process: Definitely!
- Refreshed my programming: Yes
- My own personal learnings: Yes
- Created an option for others: Yes
- Financial returns: No
It’s just a shame that the first four don’t pay the bills!
Another classic problem.