New World Rankings?!

Photo credits courtesy of Disc Golf Pro Tour

Recap of this week's movers

Pictured above are the top 10 MPO disc golfers in the world, as produced by a *new* and superior rankings system (details below). Before I get into the juicy details of the rankings, let’s look at the Dynamic Discs Open and highlight some of the important movers. Starting with the most notable move from last week: Foundation’s own Brodie Smith moved up 9 spots to #46 after his T3 finish. This was Brodie’s first elite series top 10, top 5, and podium all in one, and it gave him 2.5x as many points in one event as his next-highest points event (the Jonesboro Open the previous week), so it stands to reason that he made that big jump. Aaron Gossage, Väinö Mäkelä, Jason Hebenheimer, and Logan Harpool, who all tied for 3rd along with Smith, also had large jumps of 6, 14, 16, and 27 spots respectively. Ricky Wysocki – who was victorious in Emporia – didn’t change his ranking position, as he has now held the #1 spot for 40 straight weeks, but that win does widen his gap over #2-ranked Paul McBeth. Paul is still #2 because bad tournaments don’t hurt you; he simply didn’t gain as many points as he could have, and not nearly as many as Calvin and Chris, who are now right on his heels in points. Finally, the last notable move from DDO is Simon Lizotte’s runner-up finish, which jumped him 9 spots to #36.

Rankings Origin

First off, I’ll introduce myself: my name is Judah Aderhold (yes, I’m Ezra’s brother, but I promise the rankings aren’t too biased) 🙂 and I created these rankings about 16 months ago. So, although they are new to fdnsports, they are in fact older than the rankings done by UDisc. I’ve been posting these rankings to the Instagram account @discgolfillustrated, but I’m excited to now have the opportunity to post them on this blog and hopefully get them more exposure.


Short description of the rankings and its formulas (more in-depth details below):

Players earn points by beating players rated 1000+, and the higher rated the player one beats, the more points they get. After all those initial points are added up, that number gets multiplied by a time decay factor. The time decay factor keeps about 98% of points for the first 3 months, and then points decay almost linearly from there until, 2 years down the road, they are worth nothing. After time decay is factored in, that number is multiplied by a placing factor that changes based on the event tier (e.g., 2x points for a DGPT win, 1.3x for a DGPT 3rd, 1.5x for a Worlds 5th, etc.). Finally, you add up all the points a player accumulated in tournaments over the last 2 years, and that total determines where the player is ranked.
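To make the pipeline concrete, here is a minimal sketch of how one event's contribution and a player's total could be combined. The function and variable names are my own illustration, not the author's actual code; the per-opponent base points, decay, and tier factors come from the sections below.

```python
def event_points(base_points: float, decay_factor: float, tier_multiplier: float) -> float:
    """Points one event contributes: summed base points for players beaten,
    scaled by time decay (0..1) and the event-tier placing multiplier."""
    return base_points * decay_factor * tier_multiplier

def total_points(events: list[tuple[float, float, float]]) -> float:
    """Sum of event points over the two-year window; this total sets the ranking."""
    return sum(event_points(b, d, t) for b, d, t in events)

# Hypothetical example: a fresh DGPT win (no decay yet, 2.0x multiplier)
# plus a year-old A-tier 3rd (roughly half decayed, 1.1x multiplier).
total = total_points([(120.0, 1.0, 2.0), (30.0, 0.5, 1.1)])
```

Because the ranking is a sum rather than an average, extra events can only add to `total`, which matches the "bad tournaments don't hurt you" behavior described above.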


Detailed breakdown

Initial Points Formula

The formula for the points awarded for beating a player of a given rating is Y = 11132180 + (0.5340068 − 11132180) / (1 + (x/1201.756)^92.87543), where x is the rating of the player you beat and Y is the points received for beating them. This may seem convoluted, but here’s a quick story on how the formula came to be. I started by looking at DGPT events, majors, and other large events from previous years and seeing how difficult it was to beat players of different ratings. I made rating brackets (1000–1009, 1018–1026, etc.) and compared everyone against each other to find out: it’s m times easier to beat someone in the first bracket than the second, n times easier to beat someone in the second bracket than the third, and so on (I also compared non-neighboring brackets against each other, not just neighboring ones). Now I had point values for each bracket, but it wasn’t perfect, because it’s tougher to beat a 1009-rated player than a 1000-rated player, and you should be awarded more points accordingly. So I took my point values along with the midpoints of my brackets and plugged them into a best-fit curve calculator, and that’s what led me to the formula above (I ran several of these and chose the one that made the most sense practically and logically). Every rating now gives a different number of points; some examples: 1000 = 0.964, 1025 = 4.796, 1037 = 13.096, and 1053 = 52.611.
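The best-fit curve translates directly into code. This is a sketch using the article's own constants; the function name is mine, and the comment values are the article's example outputs, which the formula reproduces.

```python
def beat_points(rating: float) -> float:
    """Points earned for beating one player of the given rating,
    per the best-fit curve in the article (a steep logistic-style curve)."""
    return 11132180 + (0.5340068 - 11132180) / (1 + (rating / 1201.756) ** 92.87543)

# Article's example values:
#   beat_points(1000) ≈ 0.964
#   beat_points(1025) ≈ 4.796
#   beat_points(1037) ≈ 13.096
#   beat_points(1053) ≈ 52.611
```

Note how sharply the curve bends: a 1053-rated scalp is worth roughly 50x a 1000-rated one, which is what makes field strength the dominant factor in the rankings.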

Graph of the formula used and the original scatterplot points used to determine the formula. Y axis represents points, X axis represents player rating.

Time Decay Factor

This one is modeled closely after the time decay factor of the OWGR (ball golf world rankings), but it’s a little more “fluid”. Ball golf applies no decay for 13 weeks, and then each week after that the points lose about 1.1% of their original value until they hit 0 at the two-year mark. I graphed that out and treated the flat line and the angled line almost as asymptotes to a curve, which gave me this new time decay formula: Y = −(−93b − 50a + 50x + 10·sqrt(25x² − 50ax + 25a² + 93))/93, where a = 11.9255 and b = 100.0908; the result is shown below. As you can see, it is very similar to the OWGR lines on top and makes only a minor difference points-wise, but it allows for one clean mathematical formula and produces slightly more logical results, such as a tournament 13 weeks ago being worth a little less than the exact same one today.
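For the curious, here is the decay curve as a runnable sketch with the article's constants plugged in. The function name is my own; the endpoint values fall out of the formula itself.

```python
import math

A, B = 11.9255, 100.0908  # constants from the article's decay formula

def decay_pct(weeks: float) -> float:
    """Percent of an event's original points still counted `weeks` after it
    was played, per the hyperbola-like curve fitted between the OWGR lines."""
    x = weeks
    return -(-93 * B - 50 * A + 50 * x
             + 10 * math.sqrt(25 * x**2 - 50 * A * x + 25 * A**2 + 93)) / 93

# ~100% at week 0, ~98% at week 13, and near 0% at the two-year mark (week 104).
```

Unlike the OWGR's piecewise flat-then-linear schedule, this single curve is already sloping gently at week 0, so last week's event is worth marginally more than the same event a month ago.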

The purple curve is the time decay formula I use. The black and red lines on top are the time decay lines the owgr uses. Y axis represents % of original points value, X axis represents weeks since the event.

Tier Factor

Unfortunately there’s not as much math behind these numbers; they were chosen roughly to make the points match the distribution given in the OWGR, rounded to values that seemed to make logical sense. They are:

B tier: 1.0x
A tier: 3rd = 1.1x, 2nd = 1.2x, 1st = 1.5x
DGPT/NT: 6th–10th = 1.1x, 4th–5th = 1.2x, 3rd = 1.3x, 2nd = 1.5x, 1st = 2.0x
Non-Worlds Major: 6th–10th = 1.2x, 4th–5th = 1.4x, 3rd = 1.6x, 2nd = 1.8x, 1st = 2.5x
Worlds: 6th–10th = 1.2x, 4th–5th = 1.5x, 3rd = 1.75x, 2nd = 2.0x, 1st = 3.0x

If these seem weighted incorrectly, keep in mind that by far the main factor determining how many points you get each week is the strength of the players you beat. So, while a 3rd at an A tier has the same multiplier as a 6th at a DGPT event, the base points for a 6th at a DGPT event are far above those for a 3rd at an A tier. The multiplier just rewards specific placings with some bonus points on top of the extra points players already earn by beating more people than those below them, and rewards equal placings at higher-tiered events with more bonus points even when the strength of the players beaten is the same.
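The tier factors are a plain lookup table, which could be coded like this. The table layout, key names, and function are my own illustration of the multipliers listed above; placings outside the listed ranges (and all B tiers) default to 1.0x.

```python
# Placing multipliers by event tier, keyed by finishing place.
TIER_MULTIPLIERS = {
    "A":      {3: 1.1, 2: 1.2, 1: 1.5},
    "DGPT":   {10: 1.1, 9: 1.1, 8: 1.1, 7: 1.1, 6: 1.1,
               5: 1.2, 4: 1.2, 3: 1.3, 2: 1.5, 1: 2.0},
    "Major":  {10: 1.2, 9: 1.2, 8: 1.2, 7: 1.2, 6: 1.2,
               5: 1.4, 4: 1.4, 3: 1.6, 2: 1.8, 1: 2.5},
    "Worlds": {10: 1.2, 9: 1.2, 8: 1.2, 7: 1.2, 6: 1.2,
               5: 1.5, 4: 1.5, 3: 1.75, 2: 2.0, 1: 3.0},
}

def tier_multiplier(tier: str, place: int) -> float:
    """Bonus multiplier for a finishing place; 1.0 for B tiers and
    any placing outside the listed ranges."""
    return TIER_MULTIPLIERS.get(tier, {}).get(place, 1.0)
```

So `tier_multiplier("DGPT", 1)` gives the 2.0x elite series win bonus, while an 11th place anywhere simply keeps its base points.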


Other thoughts and points of interest

While I count almost every MPO tournament, some are left out. I do not count C tiers or any tournament that doesn’t have at least five 1000+ rated players. The points someone gets from those events are basically negligible, so it’s not worth factoring them in. One big point of interest is that my ranking is a sum of all the tournaments played, not an average. This is something I’ve wrestled with a great deal, and for where disc golf is right now, I believe a sum is the best way to do it. Too many tournaments that top-level pros choose to play (silver series, A tiers, even B tiers) award far fewer points than elite series events, because the field strength pales in comparison, and I don’t want those players to be unfairly hurt in the rankings by an average dragging them down. A rankings system will never be perfect, but I feel this is the least flawed way to do it for now.

Final Thoughts

Thank you for reading through my rankings! I hope you enjoyed learning about them and are as excited about them as I am! We all know how inadequate UDisc’s rankings are (Eagle jumped Calvin last week despite not playing, while Calvin notched a top 10; any time a top-3 player misses a top 5 they lose a substantial number of points; etc.), so in my opinion these are quite superior: you’re not penalized for bad or average play, you’re only positively affected by good results. I hope you agree and appreciate the time and effort I put into creating them. If not, or if you have suggestions, I’d love to hear them!

DGiWR Top 100. Top 250 is viewable via the embedded link

Judah Aderhold


  • Interesting, keep it up, please

  • This is amazing! Thank you for taking a logical approach and following something that has already been streamlined and proven

  • If I’m understanding this correctly, wouldn’t this rating system heavily bias players that play in all available tournaments over players that maybe miss parts from injury, life, smaller touring schedule (e.g. Simon Lizotte). Since points are only added by playing?

    • That is correct. As far as injuries and smaller touring schedules are concerned, that’s just an unfortunate situation for the ranking. (The only way for that not to be the case is a strict average, which isn’t fair to the roughly 80% of players in the top 50 who tour all the time.) Surprisingly enough, someone who plays every elite series and silver series event doesn’t gain a huge advantage over someone who plays every elite series event and, say, only half the silver series (and the effect is even smaller for A and B tiers). That’s because by far the most points are awarded at elite series events and above, plus select silver series events with great fields such as Belton. Something like Tallahassee or an A tier really doesn’t offer a ton of points, so playing a bunch of extra ones won’t give you much of an advantage.

  • I stopped reading for a min, who’s Jason Hebenheimer?

    • Missouri dude who’s on the up and up. Saw him play at Jonesboro and he got T3 at DDO, he’s got game! Recognizable by his horseshoe/cornhole putting style.

      • Love the article! Keep it up! Just wanted to point it out that it’s Jake Hebenheimer, not Jason (unless there’s another Hebenheimer out there..). Cheers!

      • His name on pdga is Jason but everyone calls him Jake so I’m honestly a little confused, but there’s only one hebenheimer I believe 😅

  • This looks great. I really appreciate the detail that goes into the ranking system. As far as a calculable ranking for disc golf goes, this is probably the best imo.

  • This is awesome. Can you do FPO too? Curious how that looks with the European players having more rounds over here this year.

    • Thanks! I’d think about it depending on how widely accepted these MPO rankings become, but this took me hundreds of hours to develop so it’d take a large following probably 😂

  • We need to see these rankings if we only used the last 12 months as a basis, not 24. I would love to compare how they change the rankings.

    • So, if we consider the last 13 months (easier for me to calculate that), here’s the top 20: Wysocki, McBeth, Dickerson, Heimburg, Conrad, Klein, Jones, Hammes, Freeman, McMahon, Buhr, Gibson, Locastro, Orum, Keith, Gilbert, Clemons, Aderhold, Marwede, and Ellis. So, not too different from how it is now (this is because the time decay affecting tournaments older than 13 months makes them worth a lot less). The biggest change is Locastro dropping 3 because his Waco win doesn’t count now. A couple of the others are just 1 or 2 place moves that make sense for players who either have played well this year or not.

  • The only issue I take is that it is still PDGA ratings based. How can the system be separate while dependent? The second issue is that a 1055 player falling apart in a tourney would award so many points to players that it would dilute the system until those points decay.
    I would recommend a pool of points that decreases the lower in the rankings a player is. Say whoever is sitting at #1 is a 1000-point bounty and gets second place: whoever beat him gets 1000 pts added to their event pool, and it is reflective of both players playing well. Next week the 1st place guy lands in 21st because he had a meltdown. In that case the top 20 players would share that 1000-pt bounty and each get 50 pts. If a top player melts down, it cannot create a situation where 20 players hit the jackpot in pts, as that would be inflationary to all of their results.
    Obviously, any multipliers to smooth it out would have to be added into the equations, but this accomplishes 2 things: it makes you independent of the PDGA rating, and it works as a defense against a very high rated player shitting the bed and dumping millions of pts into an ecosystem that has no method of absorbing that shock.
    If anyone has a csv, json or other formatted data of the season results I can whip up a responsive page to tinker with it.
