Saturday, March 25, 2017

AOTB 2017 - notes on submission selection & review

Phewww… Agile on the Beach isn’t for another 3 months but in one way it’s just ended for me. So it is a good time to share a few notes, specifically about speaker selection.


My role in Agile on the Beach centres on the speakers. In the early days I hand picked the line-up but for the last 4 (or maybe 5) years we’ve run an open call for papers. I write the CfP, I run the CfP process, I talent spot (and encourage submissions), I respond to (potential) speaker queries, I run the review processes.

This year the CfP opened at the start of December and earlier this month we published the full line-up, so the majority of my work for the conference fell in that period. Most of this post will focus on our selection procedure and provide additional feedback for those who submitted.

Those of you who are thinking of attending AOTB 2017 will also get an insight into what happens behind the scenes. And anyone - all of you I hope! - who is thinking of submitting to a future Agile on the Beach, or any other conference, may well learn something useful.

Everyone who submitted should have received notification by now - accept or decline - please let me know if you haven’t. For those we didn’t accept I’ll shortly send an e-mail with more information on how your submission fared.

It helps to understand the AOTB review and voting procedure (a blog post from a couple of years back). This year I widened the scoring range from -2..+2 to -3..+3 in an effort to spread out the scores a bit; I’m not sure it helped though.

Another change was the addition of four new reviewers who are not on the conference committee. This removed some of the workload from the committee and, I think, made for better reviews. It also meant that no submission had fewer than 4 reviews and some as many as 7.

The big public change to Agile on the Beach 2017 is a move from September to July - a move forced on us by changes in the academic calendar at Falmouth University. But behind the scenes our organizational changes have been even bigger - partly because we had 3 months less to organise 2017.

One big change is Mimas - more information on my Conference Review mini-site.

After several years of an ad hoc system of managing reviews I bit the bullet last summer and created our own conference submission and review system. I’ll blog more about this soon but right now let me say: like every other piece of software development, it seemed to take longer than expected!

However it has massively simplified things. I think next year, with the system in place and far less work needed, my life will be better. But, it has been work - and I’m so glad I did it test first!

Anyway, what can I tell you about the submissions themselves and the reviews….

About the submissions and reviews

There were over 320 submissions from over 220 potential speakers. With so many submissions I’m worried that we will lose out on fresh voices; I’m also worried some of the true experts will be deterred from submitting.

For each track, round 1 reviews reduced this to a shortlist. For each track we decided a cut-off score: submissions which scored above it went on the shortlist and those below were marked for decline.

We then did a double check: there were several sessions below the cut-off which held promise - e.g. an interesting speaker or a novel topic - and these we added to the shortlist. We specifically looked at all the submissions which didn’t make the cut-off but had been given a 3 (the highest possible score) by at least one reviewer. Most of these were then included on the shortlist as well.

And one or two, certainly no more, were removed from the shortlist.
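The shortlisting rule above can be sketched in a few lines of Python. This is purely illustrative - the field names, the cut-off value, and the automatic handling of the “rescue” rule are my assumptions for the sketch; in practice the rescued submissions were reviewed by hand, not included automatically:

```python
def shortlist(submissions, cutoff, top_score=3):
    """Split submissions into a shortlist and declines.

    A submission makes the shortlist if its total score meets the
    track's cut-off, or (the rescue rule, simplified here) if at
    least one reviewer gave it the highest possible score.
    """
    shortlisted, declined = [], []
    for sub in submissions:
        total = sum(sub["scores"])
        rescued = max(sub["scores"]) == top_score
        if total >= cutoff or rescued:
            shortlisted.append(sub)
        else:
            declined.append(sub)
    return shortlisted, declined

# Hypothetical example data: per-reviewer scores in the -3..+3 range.
subs = [
    {"title": "A", "scores": [3, 2, 2, 2]},  # total 9: above cut-off
    {"title": "B", "scores": [1, 1, 2, 1]},  # total 5, no top score: declined
    {"title": "C", "scores": [3, 0, 1, 1]},  # total 5, but a reviewer gave a 3
]
short, out = shortlist(subs, cutoff=7)
```

With a cut-off of 7, submission A is shortlisted outright, C is picked up by the rescue rule, and B is declined.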

We also moved a few sessions between tracks in both rounds when we thought they were better in another track. But, with so many submissions, we rely on those submitting to make intelligent choices about which track is most appropriate.

For 2017 the track shortlist cut-off scores were:

  • Agile Business, 60 submissions, 9 single slots available, shortlist threshold: 8
  • Agile Practices, 57 submissions, 4 single slots available, shortlist threshold: 8
  • Product Design, 22 submissions, 4 single slots available, shortlist threshold: 7
  • Product Management, 38 submissions, 5 single slots available, shortlist threshold: 6
  • Team work, 83 submissions, 5 single slots available, shortlist threshold: 7
  • Software Delivery, 54 submissions, 9 single slots available, shortlist threshold: 9

(When a double is chosen then two single slots are combined.)

I was particularly happy to see more submissions in the Software Delivery track this year.

We had lots of very strong submissions from very strong, experienced, speakers. There are a few names we declined this year where I find it hard to believe we said no: Judith, Karl and Steve spring to mind.

We also had quite a few weak submissions where the potential speaker only gave a few words of synopsis and biography - some even left the bio blank. I marked a lot of submissions down because the synopsis and/or bio didn’t really say much. Some of these weren’t much more than a title and a list of bullet points.

Now quantity shouldn’t be a substitute for quality, less is more and so on, but… in a number of cases there wasn’t enough information to make a judgement so the submission was marked down. It also puts a question in my mind as to how interested the speaker is: I’m probably investing more time in reading and thinking about the proposal than they put into writing it.

If you want to speak the competition is fierce. You need to put real effort and thought into your submission.

Of course some synopses go the other way and are too long; they bore the reader before the end. Similarly for biographies: not too much but not too little either.

Some synopses spend all their time talking about the problem they will examine and say very little about the solution. Others seem to jump into the solution without saying why - that is particularly true of talks which involve a brand-name tool or technique. (Sessions dealing with specific tools, especially commercial tools, are seldom scored highly by AOTB reviewers.)

There is an art to writing a synopsis: describe the problem (not in too much detail) and say something about the solution (without giving it all away), while at the same time being engaging and leaving the reader wanting to know more.

A few speakers - all from the USA interestingly - didn’t seem to read the call for papers. They requested speaking fees and/or proposed sessions which didn’t match our topic areas. I wonder if some people just fire off submissions to every conference?

To be clear: we don’t pay speakers.

We do sometimes give keynotes honorariums but we choose the keynotes; we don’t ask for keynotes in the open submission system. Normally we have already agreed our keynotes before we call for speakers.

(If you know of someone you think would be a good keynote for AOTB please drop me a mail.)

As always those who requested long-haul airfare tended to get marked down. We don’t set a budget for speakers but we do keep an eye on costs, we have paid long-haul fares on occasions in the past but we can’t afford to pay many. This year both our keynotes are long-haul.

I can’t see any way around this. It’s unfair but it is a genuine constraint.

This year we did offer speakers the option of paying their own long-haul airfare, with us paying their in-country travel costs.

And similarly double sessions tended to get marked down. Again unfair but hard to avoid. We are aware of this problem and we do create some doubles space (the bonus track) but it is still a trade-off.

This year we seemed to have a lot of talks about “mechanical scrum” and “failing Agile”. Maybe this is a sign that Agile has come of age. These sessions didn’t score highly largely because they spent most of the synopsis setting out the problem rather than discussing solutions.

Failing Agile is well… a fact of life. It is only interesting when we can learn something from it - something not to do, or something to do differently, something to prevent or rectify the failure.

What will I want to do differently next year?

  • I hope to add a few more independent reviewers.
  • I’m thinking of designating track leads for reviews; they would weed out the clearly unsuitable submissions and help with the shortlisting.
  • Mimas will have a few tweaks but for AOTB it is essentially done.

So, advice for anyone thinking of submitting in future:

  • Read our Call for Papers: yes, I know it is long but we give a lot of information
  • Choose your track carefully: your chances of selection are much better if you submit into one of the more specific tracks (Software Delivery, Product Design and Product Management) rather than the three more general tracks (Business, Practices, and Teams.)
  • Put some effort into the synopsis, especially the short synopsis, tell us a bit about the problem but not too much, and tell us something about what you are going to say. The long synopsis isn’t as important but is useful for giving more detail.
  • Don’t take “long synopsis” too literally: it does not need to be very long. If you have a strong short synopsis then just leave the long one blank.
  • Watch your timings: some speakers gave detailed timing breakdowns (to the minute); on the whole these aren’t believable and people got marked down. However some speakers were just too ambitious in their content and reviewers didn’t believe they could cover it all.
  • Double sessions really need to be outstanding, and they need to be interactive. No lecture-based double has ever been selected.

I hope some of that is helpful to at least some of you.
