Worked example of player ratings - FRE v MEL R19 2024

Saw this from one of my mutual follows on Twitter and thought it would be a good chance to do something I’ve been wanting to do for a while.

I think there’s a fair bit of misunderstanding of the AFL Player Ratings system, and a lot of that is self-inflicted. It’s also something I have a decent enough understanding of to try to help correct.

First, a few provisos:

  1. I don’t have access to Champion Data’s player rating formulas. I do have a reasonable understanding of the thesis Dr Karl Jackson wrote that is the basis of the system (available here).

  2. I don’t have access to defensive data like pressure acts, tackles, etc., except where they’re represented on the post-game stats sheet.

  3. Most of my work is on chain-by-chain performance, rather than the individual actions that are the basis of player ratings.

That being said, here we go.

Basics

Rating points are built on the idea of field equity. Statistically, AFL is an extremely complicated game: 36 players are on the field at a time, there are very few limits on their positioning, there is no offside rule, and play flows freely. This makes it much harder to determine the effect of any individual action than it is in a sport like baseball.

The best method we have is to look at whether an action makes that player’s team more or less likely to score. Play resets after a score, so we can break the game into chunks of play between scoring events.

For any given scenario on the field, whether it’s taking a mark at half-back, winning the centre ruck contest, or forcing the ball out of bounds in the pocket, we can look at all the data we have from previous matches, identify the similar scenarios we’ve witnessed before, and see which team scored next. By taking the average of those scoring outcomes (weighted by how similar each scenario is to the current situation, usually how close in field position), we arrive at the current field equity.
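To make that concrete, here’s a minimal sketch in Python of how a field equity number could be estimated. This is a toy illustration of the principle only, not Champion Data’s model: the scenarios, the similarity weighting and the numbers are all made up.

```python
# Toy illustration of field equity - NOT Champion Data's actual model.
# Each historical situation records how far from goal the ball was (metres)
# and the net value of the next score from that team's perspective
# (+6 for their goal, -6 for an opposition goal, +/-1 for behinds, 0 if none).

def field_equity(current_distance_to_goal, history, bandwidth=15.0):
    """Similarity-weighted average of next-score outcomes.

    history is a list of (distance_to_goal, next_score_value) tuples from
    comparable situations (e.g. uncontested possession in general play).
    Situations closer in field position to the current one get more weight.
    """
    total_weight = 0.0
    weighted_sum = 0.0
    for distance, next_score in history:
        # Simple distance-based similarity kernel; the real system uses
        # a far richer definition of a "similar scenario".
        weight = max(0.0, 1.0 - abs(distance - current_distance_to_goal) / bandwidth)
        total_weight += weight
        weighted_sum += weight * next_score
    return weighted_sum / total_weight if total_weight else 0.0


# Made-up history: possessions 60-80 metres from goal, with the next score recorded.
history = [(60, 6), (65, 1), (70, -6), (75, 6), (80, 0), (62, 1), (78, -1)]
print(round(field_equity(70, history), 2))  # about +0.17 for this made-up sample
```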

By comparing the next expected score at different times, we can work out how much value each team has added in between. Expected Score (xScore) works on the same principle: what would you expect the outcome to be before the kick is taken, and what was the actual outcome once it was? The difference between the two is the value the player added.

For a centre bounce contest, each team is as likely to score next as the other; it’s a true neutral ball, so both teams’ field equity is zero. Once one team wins possession, the next expected score is roughly +1 point for them, so we can say the value of winning use of the ball at a centre bounce is roughly +1 point (and of losing possession, -1 point).

For a more detailed example: a team has possession in general play just forward of centre on their wing. Let’s say they’re at +0.5 field equity (which, again, means their opponents are at -0.5). They go in-board, but the defending team takes an intercept mark, and the next expected score has now flipped to +1 for the team now in possession (the corridor being a better attacking position than the wing, and holding the ball from a mark being more advantageous than having it in general play). This change of 1.5 points will be shared between the kicker who turned the ball over (as a negative) and the defender who took the intercept mark (as a positive). For some reason, the intercepting player then kicks directly to the boundary, and the ball goes out of bounds on the centre wing: another neutral ball, and the entirety of his team’s change in equity (1 point) is deducted from his rating total.
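Here’s that example laid out as a small ledger. The equity numbers come from the paragraph above; the 50/50 split of credit between the kicker and the intercept marker is an assumption of mine, since I don’t know how Champion Data divides it.

```python
# Walking through the example above as an equity ledger (my arithmetic only;
# the credit/blame split between players is a made-up illustration).

# Equity is always from the perspective of the team in focus.
equity_on_wing         = +0.5   # possession in general play, just forward of centre
equity_after_intercept = -1.0   # opposition now holds a mark in the corridor
equity_out_of_bounds   = 0.0    # ball out of bounds on the wing: neutral again

# The in-board kick is intercepted: a 1.5 point swing against the kicking team.
swing = equity_on_wing - equity_after_intercept        # 0.5 - (-1.0) = 1.5
kicker_share, interceptor_share = 0.5, 0.5             # assumed 50/50 split
kicker_rating      = -swing * kicker_share             # roughly -0.75
interceptor_rating = +swing * interceptor_share        # roughly +0.75

# The intercepting player then kicks straight out of bounds: his team goes
# from +1.0 back to neutral, and the whole -1.0 lands on him.
interceptor_rating += (equity_out_of_bounds - 1.0)     # -1.0

print(kicker_rating, round(interceptor_rating, 2))     # -0.75, -0.25
```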

Now, for the game in question.

Player by Player

I thought I’d start with the lowest-hanging fruit. As per Andrew Whelan’s excellent site WheeloRatings, Michael Frederick had 5 shots for goal at an expected score of 13 points, and his return was 3 behinds. That starts him off at -10 rating points just from his goalkicking.

Forwards are extremely accuracy-dependent when it comes to their rating points. Take a shot with an expected score of 3 (a 50-metre set shot with no angle, for example): if the player kicks the goal he nets 3 rating points from the kick, if he scores a behind he loses 2 rating points, and if he doesn’t score at all he’s at -3.
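As a quick sketch of that arithmetic (treating a shot’s rating contribution as simply points scored minus expected score, and ignoring anything the system might credit for the lead-up play):

```python
# Rough shot-rating arithmetic: points scored minus expected score.

def shot_rating(points_scored, expected_score):
    return points_scored - expected_score

# The 50-metre, no-angle set shot with an expected score of 3:
print(shot_rating(6, 3))   # goal:   +3
print(shot_rating(1, 3))   # behind: -2
print(shot_rating(0, 3))   # miss:   -3

# Michael Frederick, per WheeloRatings: 5 shots, 13 expected points, 3 behinds.
print(shot_rating(3, 13))  # -10 rating points before anything else he did
```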

Jye Amiss had a stat sheet of 4 goals, 0 behinds and 6 marks, which reads pretty well. Jacob van Rooyen kicked 2.2 and took 8 marks, also a healthy outing. Both were rated in the bottom 9 players on the ground.

Amiss and van Rooyen ended up in the minor negatives from their goalkicking (-0.2 for Amiss from his 4 goals from 5 shots, and -0.1 for van Rooyen), so from a ratings perspective you should read their games as if they didn’t hit the scoreboard at all.

It’s worth clarifying here that rating points attempt to measure a player’s contribution against what a theoretical, completely average player would have done in similar situations. If you plucked out the most average (actual average, not “bad” average) kick for goal in the league and gave them the opportunities Amiss and van Rooyen had, you’d expect a slightly better return; for that reason neither gets any ratings contribution from their goalkicking efforts.

What else did Amiss do for the game? He had 2 effective handballs (one going 5 metres in-board, the other 6 metres backwards), and one effective inside 50 kick which resulted in a Walters mark and goal (this is probably where most of his positive rating points came from).

What about his 6 marks? All 6 were uncontested, with one being on the lead. Apart from marks on the lead, player ratings don’t consider taking an uncontested mark to be a valuable contribution; it’s an expectation of an average player. A mark on a lead recognises that the marking player managed to work themselves in front of a direct opponent, so they and the kicker each receive a share of the rating points.

None of Amiss’ possessions were contested, so much like the uncontested marks he receives no credit for them - being able to gather a ball directed to you is an expectation of the average footballer.

Some other actions will also have contributed to Amiss’ rating: he dropped a mark from an O’Meara kick in the first quarter, and he fumbled a handball receive from Bailey Banfield in the fourth; both will have resulted in negative points. He also had 8 pressure acts. I have no further data available on those, but they will have provided him with some points.

van Rooyen took 8 marks; however, only 1 was contested and 3 were on the lead, so he won’t have got a huge number of points for these.

He did have 7 kicks outside of his shots on goal; however, they won’t have helped a lot. Only two resulted in Melbourne retaining possession, both going backwards to uncontested marks by May and Rivers. One was spoiled out of bounds, and the other four all resulted in turnovers. His 12th kick was a ground kick that turned the ball over to Michael Frederick. I’m not 100% certain how the rating system treats ground kick clangers; if it does assign them value, this will be a costly one for van Rooyen, as it gave possession to the opposition deep in Melbourne’s defensive 50.

10 of Josh Treacy’s marks were uncontested, with only 2 of those being on the lead. He took two contested marks, but would have been debited for a fumbled mark from a Corey Wagner kick in the first quarter. His contested marks were complemented by a further 5 ground ball gets, though all of those were loose balls, which are valued lower than hard ball gets.

He got a minor return (+1.6) from his goalkicking and some contribution from 12 pressure acts and 2 tackles.

Let’s have a look at his non-shot kicks.

He gains reasonable ground, but there are a number of factors that play against him from a rating points perspective. All but one of his kicks followed a mark of his own. A mark is considered the easiest context for a possession (the opposition aren’t legally allowed to pressure you), so disposals from a mark are effectively graded the harshest.

Only one of his kicks led to a teammate’s mark. That’s a pure gain: the team keeps the same possession context (a mark), so the rating points impact will be purely positive, based on the improved field position. In this case Treacy moved the ball 40 metres closer to goal.

All his other kicks have, to a greater or lesser degree, traded possession context for field position. The worst of these was the kick that led to an intercept mark for the opposition.

The kicks that led to ground balls will also reflect poorly on him, even though some were eventually retained by teammates. The credit for winning the ball back from a disputed position goes to Treacy’s teammate, while the blame for moving the ball from a set position (the mark) to a disputed one lies with Treacy.

I’ll finish off by looking at the May and Melksham comparison made in the tweet.

Melksham would have received about +3.7 points from his goalkicking.

Both players took 1 contested mark, but May took 2 intercept marks (one being his contested mark). Outside of his one uncontested intercept mark, May will have received no points for his 8 uncontested marks, whereas Melksham would have received points for his two marks on the lead out of his three uncontested marks.

May had one more contested possession than Melksham, so my expectation is the difference will come in how they used the ball.

May had 23 kicks compared to Melksham’s 6 (one of which was his shot on goal). However, many of May’s kicks wouldn’t have rated highly.

12 of the 23 retained the same context (for example, taking a mark and kicking to a teammate who also takes a mark), at a median of 13.5 metres gained, so a fairly limited contribution overall.

8 resulted in a deterioration of context: one going from a mark to an uncontested possession for a teammate, three resulting in a contest, and four resulting in the opposition gaining possession. The remaining three saw May gain the ball in general play and find a teammate’s mark; these will have rated reasonably well, each gaining ~20 metres in the process.

This isn’t meant to be an exhaustive explanation of the player rating system, but I hope it can make some contribution to understanding. I believe player ratings, while far from perfect, are the best single-metric player evaluation tool we currently have. I’m also a very strong believer in the use of next expected score to assess team equity in footy and use it extensively in my work.



