Last week I made some predictions for WTC 2016's Teams event based solely on Elo-style rankings calculated from previous years' data. I previously posted a metric for scoring rankings based on the difference from the true position per prediction; my R implementation is available through my package WTCTools.

This metric allows different numbers of predictions to be compared, although as the number of rankings increases, the likelihood of a low (good) score falls. My implementation also allows pundits to predict a country only, in which case the score for that position is the average over all teams from that country. The difficulty of such a ranking is similar to, and therefore comparable with, that of picking individual teams.
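The metric above can be sketched in a few lines. This is an illustrative Python translation, not the WTCTools API: the function name, data layout, and example teams are all invented for demonstration. It computes the mean absolute distance between predicted and actual finishing position, and scores a country-only pick as the average distance over that country's teams.

```python
# Hedged sketch of the ranking-distance metric described above.
# Names and data structures here are illustrative, not WTCTools code.

def ranking_score(predictions, results):
    """Mean absolute distance between predicted and actual position.

    predictions: list of (predicted_position, pick) where pick is a
                 team name or a country name.
    results:     dict mapping team name -> (actual_position, country).
    """
    distances = []
    for predicted_pos, pick in predictions:
        if pick in results:
            # Individual team pick: distance to that team's finish.
            actual_pos, _ = results[pick]
            distances.append(abs(predicted_pos - actual_pos))
        else:
            # Country-only pick: average distance over all teams
            # from that country.
            ds = [abs(predicted_pos - pos)
                  for team, (pos, country) in results.items()
                  if country == pick]
            distances.append(sum(ds) / len(ds))
    return sum(distances) / len(distances)


# Toy example (made-up teams and placings):
results = {
    "Poland White": (1, "Poland"),
    "Poland Red": (5, "Poland"),
    "USA Stars": (2, "USA"),
}
preds = [(1, "USA Stars"),  # team pick, off by 1 place
         (2, "Poland")]     # country pick, averages |2-1| and |2-5|
print(ranking_score(preds, results))  # -> 1.5
```

A lower score is better, and because each prediction contributes one averaged term, slates of different sizes remain comparable.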
Last year I scored a mean distance of 9.8 places for 50 teams. This year my ratings are based on more years of Mark 2 data, but do not have any information about caster strength.
I also found this article by Klaw, who collected predictions from some of the finest minds in Warmachine. These players know the field and may even have played games against some of their rivals. Their knowledge of player skill at the top of the field should give them great intuitive insight into which teams are well placed to win. They also picked dark-horse teams, which I did not include, as I suspect these were meant as under-rated teams rather than literal 7th-place picks. Jeff Galea presented only four picks; everyone else presented six. Martin Hornacek presented only nationalities, so was scored against all matching teams. How do my predictions compare to these illustrious competitors?
| Rank | Pundit | Top 4 | Top 6 |
|------|--------|-------|-------|
| 11 | Don Martin Hornacek | 15.50 | 17.33 |
For the top 4 picks I was in the middle of the pack; for the top 6 I was in the bottom half. My score across all 64 teams was 8.84. There is definitely room for improvement, but it was not an embarrassing showing either. If I am to improve my forecasts I need to keep track of more tournaments and update my ratings wherever possible. I can also use this year's data, plus perhaps some estimates of list performance.