
Recruiting Rankings Matter

Especially in the SEC.

Mike MacIntyre and Colorado vastly outperformed their recruiting rankings in 2016. They’re the exception, not the rule. (Photo by Justin Edmonds/Getty Images)

Give me a Power-5 team’s average national recruiting ranking for the classes of 2012 through 2016 and I have better than a 50-50 shot of coming within two wins of their final 2016 win total. Better yet, I’ve got about a 25-percent chance of telling you within one win.

Based solely on the team’s average Rivals ranking from the five classes that made up its roster last year.

Give me just the team’s average recruiting ranking among its conference mates for those five years, and I have a 62.5-percent shot of coming within two games of its final 2016 record.

Why is that? Because (say it with me now): Recruiting Rankings Matter.

Yes, on a player-by-player basis, you can have a Justin Britt, Charles Harris or Michael Sam come out of nowhere and become an elite college player. You can have four- and five-stars completely flame out and amount to basically nothing at the college level.

But when we peel back and look at this from a macro point of view, the teams that win a lot are usually the ones that recruit the best talent.

It sounds like a simple theorem but, against a faint but fairly consistent drumbeat of “recruiting rankings don’t matter,” it’s one that probably bears repeating.

Recruiting. Rankings. Matter.

The top 15 Power-5 teams by 2016 win percentage had rosters built from classes with an average national recruiting rank of 23.7. The bottom 14 by win percentage had an average class rank of 46.3.

Better yet, those top 15 teams ranked in about the 68th percentile among conference mates when it came to average yearly standing within the league. So, an average of placing 3rd for five years out of a 10-team league and 4th in a 12-team or 14-team league. The bottom 14 ranked in the 33rd percentile. So placing an average of 7th for five years in a 10-team league, 8th in a 12-team league and 9th in a 14-team one.
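Those percentile figures follow directly from average finish and league size. Here's a minimal sketch of that arithmetic (the exact formula is my assumption, chosen because it reproduces the article's numbers):

```python
def conference_percentile(place, league_size):
    """Fraction of the league a team finishes ahead of in recruiting,
    given its average recruiting finish (1 = best)."""
    return (league_size - place) / league_size

# Top-15 teams, ~68th percentile:
print(conference_percentile(3, 10))   # 0.7
print(conference_percentile(4, 14))   # ~0.714
# Bottom-14 teams, ~33rd percentile:
print(conference_percentile(7, 10))   # 0.3
print(conference_percentile(8, 12))   # ~0.333
```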

Fitting a regression model to the average national rankings and within-league standings of the 64 Power-5 teams (no Notre Dame wahhhh) from 2012-2016 finds that every 10 spots a team falls in the national rankings costs it about .046 in win percentage.

That’s about a win’s worth of difference. If you’re an average of 34.8 spots lower than a team -- as, ahem, Missouri was to Alabama -- that means about a .162 drop in expected win percentage. That’s about two wins’ worth of difference.
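The national model reduces to a one-line function. The slope comes straight from the article; the intercept is my back-solve from the Missouri example further down (national rank 36 → expected .552), so treat these coefficients as a reconstruction, not the author's published fit:

```python
SLOPE = 0.0046      # ~.046 win-pct drop per 10 spots, per the article
INTERCEPT = 0.7176  # back-solved so that rank 36 -> .552 (Missouri example)

def expected_win_pct(national_rank):
    """Expected 2016 win percentage from a team's average 2012-16
    national recruiting rank, using the article's linear fit."""
    return INTERCEPT - SLOPE * national_rank

# A 34.8-spot gap (Missouri vs. Alabama) costs ~.160 in win pct here;
# the article quotes ~.162 because the published slope is rounded.
print(SLOPE * 34.8)
```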

The teams’ relative recruiting standings within their leagues were even more telling. For every two spots a team improved in, say, a 14-team league, it could expect to gain a win. A team like Alabama (average SEC recruiting finish: 1.00) would expect to enjoy about four wins’ worth of advantage over a team like Missouri (average finish: 12.4).

From recruiting rankings alone.

Now, this obviously doesn’t tell the whole story. But it gives you a pretty good framework of understanding how the final league standings play out in the Power 5.

Let’s take a look at all 64 teams: their average Rivals ranking, their recruiting position within their league, their actual 2016 win percentage, their projected win percentages based on the national and conference recruiting rankings, and how far off (in percentage points and games) the recruiting-based regression model was.

Then we’ll look at which teams were right on the model, which ones exceeded expectations and which ones came up short.

Closest Projected Wins

National Model

Indiana: -0.05
Baylor: 0.18
LSU: -0.19
Kentucky: -0.21
NC State: 0.23
Florida: 0.28
Tennessee: 0.42
Arkansas: -0.57
North Carolina: 0.60
Texas A&M: -0.63

Conference Model

NC State: 0.00
North Carolina: 0.01
Miami: 0.10
LSU: 0.11
Texas A&M: 0.15
Baylor: 0.17
Georgia: -0.20
Auburn: -0.27
Indiana: -0.30
Vanderbilt: 0.33

If your team was one of the above, then congratulations! Your coaches are getting about what they should out of the talent they’ve assembled.

The coaches of the teams on this next list deserve raises:


National Model

Clemson: 3.93
Colorado: 3.90
Wisconsin: 3.85
Washington: 3.84
Alabama: 3.30
Georgia Tech: 3.27
Minnesota: 3.24
Kansas State: 3.15
Penn State: 3.02
West Virginia: 2.93

Conference Model

Colorado: 4.12
Washington: 4.00
Clemson: 3.56
Wisconsin: 3.16
Georgia Tech: 3.05
Kansas State: 3.05
Minnesota: 3.05
Alabama: 3.00
Utah: 2.64
Washington State: 2.63

And the coaches for the teams on this next list...need to figure some things out:


National Model

Virginia: -4.34
UCLA: -3.96
Michigan State: -3.94
Rutgers: -3.75
Arizona: -3.42
Oregon: -3.31
Kansas: -2.91
Texas: -2.90
Missouri: -2.62
Ole Miss: -2.55

Conference Model

Michigan State: -4.70
Virginia: -4.66
UCLA: -4.37
Rutgers: -4.27
Oregon: -3.54
Texas: -3.49
Kansas: -3.04
Arizona: -2.95
Illinois: -2.75
Purdue: -2.43

These trends fit better in some leagues than in others. We found that out by taking each league on its own merits and fitting regression models based on national and conference recruiting rankings.

The ACC had a standard deviation of about two games off for both methods. The Big 12 was about 2.4 games and the Big Ten was about 2.5. The Pac-12...well, it was just weird. It did not follow the trend at all, to the point where, when we fit the regression model for the within-conference rankings, every single team came out with an expected win percentage of .542. In other words, within-conference recruiting rank carried essentially no predictive weight in the Pac-12; the model just handed every team the league-average win percentage.

What a strange season that would be.

This shows that, on the whole, the correlation is stronger when we look at the national population than when we look at it league-by-league.

Except for one league, that is. A league in which it just means more.

Modeling on its own data, the SEC had a standard deviation of only about 1.1-1.2 wins under both the national and conference-standing methods. For the sake of comparison, the national-population models had standard deviations of about 2.3 wins.

So the correlation between recruiting and on-field success in the SEC was, like, twice as strong as it was in the national population.

Our SEC regression models each picked nine out of 14 teams to within one win of their actual win percentages and all but one team (Alabama...jerks) to within two wins.

This is why, in that whole Clemson series, I kept stressing the importance of Missouri being able to compete within the SEC if it’s ever going to have a chance to compete for the national title. And just how tough that is.

Because, while Missouri underachieved our SEC models by .87 and .98 wins, they weren’t even the most “disappointing” team in the league, according to our models. Ole Miss and South Carolina were worse.

Let’s do a little thought exercise, using our national models.

Missouri’s average 2012-2016 recruiting rank (36) gives it an expected win percentage of .552 for 2016. But its average rank within the SEC (12.4) lowers that expected win percentage to .425.

What if the Tigers were in the ACC? Their average conference place would be between North Carolina and Louisville (6.0 average place, .571 percentile). That translates to an expected win percentage of .598.

Big Ten? Between Penn State and Maryland (6.1, .564). A .595 expected win percentage.

Big 12? Between TCU and Oklahoma State (4.7, .530). Expected win percent: .583.

Pac-12? Between California and Arizona (7.0, .417). Expected win percent: .540.

That’s an expected record of 7-6 to 8-5 in a 13-game season.
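The four what-if numbers above all fall on one line: win percentage linear in within-league recruiting percentile. Back-solving that line from two of the quoted points (the SEC and ACC cases) reproduces the other three conferences to within about .001 -- the coefficients below are my reconstruction, not the author's published fit:

```python
def conference_percentile(place, league_size):
    # e.g. average finish 6.0 in a 14-team league -> (14 - 6)/14 = .571
    return (league_size - place) / league_size

# Back-solve the line from two quoted points:
# SEC:  percentile .114 -> .425  |  ACC: percentile .571 -> .598
x1, y1 = conference_percentile(12.4, 14), 0.425
x2, y2 = conference_percentile(6.0, 14), 0.598
slope = (y2 - y1) / (x2 - x1)
intercept = y1 - slope * x1

def expected_win_pct(place, league_size):
    """Expected win pct from average recruiting finish within a league."""
    return intercept + slope * conference_percentile(place, league_size)

# Matches the remaining what-ifs to within ~.001 (the quoted
# inputs are themselves rounded):
print(round(expected_win_pct(6.1, 14), 3))  # Big Ten  (article: .595)
print(round(expected_win_pct(4.7, 10), 3))  # Big 12   (article: .583)
print(round(expected_win_pct(7.0, 12), 3))  # Pac-12   (article: .540)
```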

As it is...well...let’s just substitute the 2017 rankings for 2012 and come up with some SEC predictions for next season using both the national population and SEC-specific regression models, shall we?