When Do Yards per Route Run, Targets per Route Run, and Yards per Target Stabilize?

In Episode 1.75 of The Three Cone Drill Podcast, I argued that Pro Football Focus (PFF) — due to its widespread footprint in mainstream NFL media — has a larger responsibility than most sites to convince consumers that its stats are trustworthy. Furthermore, I lamented that this isn’t already an integral part of what they do; although I welcome any potential web traffic, outsiders like me shouldn’t be the ones carrying PFF’s water. And besides, we can only carry so much water because PFF doesn’t make its raw data publicly available. If and when we start a fire, PFF’s the only one with enough water to put it out.

Recently, Chase Stuart of Football Perspective examined PFF’s Yards per Route Run (YPRR) stat for wide receivers (WRs), and found that YPRR was considerably “stickier” from year to year than Yards per Target (YPT), mainly because the key difference between the two, Targets per Route Run (TPRR), was the stickiest of them all. From a measurement perspective,[1] then, a WR’s “true” yardage-producing ability is more reliably indicated by whether or not he ran a route on a given snap than by whether or not he was targeted. And in the bigger picture, this represents one piece of supporting evidence that PFF’s charting of “routes run” adds value to our WR evaluations.

With that in mind, I decided to look at whether or not Stuart’s year-to-year findings also apply at a more granular level: How reliable are YPRR, TPRR and YPT from game to game?

Methods

For the uninitiated, here’s a synopsis of how my analyses proceeded (a rough code sketch follows the list):

  1. I collected data for all WRs that had at least 8 games played in PFF’s historical database, which currently runs from 2007 to 2013.
  2. To control for team effects, I included only those WRs that played 8+ games for the same team.
  3. Starting with WRs that played 8+ games, I randomly selected two sets of 4 games for each WR, and calculated their YPRR, TPRR, and YPT in both sets.
  4. For each of the three metrics, I calculated its split-half correlation (r) between the two randomly-selected sets of games.
  5. I performed 25 iterations of Steps 3 and 4 so that the average r converged.
  6. I repeated Steps 3-5 for WRs with 16+, 24+, 32+ games, and so on.
  7. In each “games played” group, I calculated the number of games at which the variance explained in each metric, R², would mathematically equal 0.5.[2]
  8. I calculated a weighted average of my Step 7 results.[3]
  9. I calculated the “true” YPRR, TPRR, and YPT for a hypothetical WR that’s performed at a specific level through X number of games.[4]
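
Since PFF doesn’t make its raw data publicly available, the sketch below is only a rough illustration of Steps 3-5 and Step 7, not the actual pipeline; the column names (wr_id, team, routes, targets, rec_yards) are hypothetical stand-ins for a table with one row per WR-team-game.

```python
import numpy as np
import pandas as pd

def split_half_r(games: pd.DataFrame, half_size: int, n_iter: int = 25, seed: int = 0) -> dict:
    """Average split-half correlations of YPRR, TPRR, and YPT for WRs with
    at least two sets of `half_size` games for the same team (Steps 3-5)."""
    rng = np.random.default_rng(seed)
    counts = games.groupby(["wr_id", "team"]).size()
    eligible = counts[counts >= 2 * half_size].index     # WRs with enough games for two sets
    results = {"YPRR": [], "TPRR": [], "YPT": []}

    for _ in range(n_iter):                               # Step 5: iterate so r converges
        halves = ([], [])
        for wr_id, team in eligible:                      # Step 3: two random sets of games
            g = games[(games["wr_id"] == wr_id) & (games["team"] == team)]
            picks = rng.choice(g.index.to_numpy(), size=2 * half_size, replace=False)
            halves[0].append(g.loc[picks[:half_size], ["routes", "targets", "rec_yards"]].sum())
            halves[1].append(g.loc[picks[half_size:], ["routes", "targets", "rec_yards"]].sum())
        h1, h2 = pd.DataFrame(halves[0]), pd.DataFrame(halves[1])
        for name, num, den in [("YPRR", "rec_yards", "routes"),
                               ("TPRR", "targets", "routes"),
                               ("YPT", "rec_yards", "targets")]:
            r = np.corrcoef(h1[num] / h1[den], h2[num] / h2[den])[0, 1]
            results[name].append(r)                       # Step 4: split-half correlation

    return {name: float(np.mean(rs)) for name, rs in results.items()}

def stabilization_point(games_per_half: int, r: float) -> float:
    """Step 7 / Footnote 2: games at which R^2 = 0.5."""
    total_games = 2 * games_per_half                      # games in both randomly-selected sets
    return (total_games / 2) * (1 - r) / r

# Step 6 would simply loop the above over half_size = 4, 8, 12, ...
```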

Results

Let’s start with YPRR because that’s PFF’s “signature stat” of interest here:

| Games | n | r | Games to R² = 0.50 | Avg YPRR | “True” YPRR if observed = 2.00 |
|---|---|---|---|---|---|
| Wtd Average | | | 14 | 1.64 | 1.82 |
| 4 | 465 | 0.20 | 16 | 1.57 | 1.66 |
| 8 | 285 | 0.37 | 13 | 1.62 | 1.76 |
| 12 | 223 | 0.52 | 11 | 1.64 | 1.83 |
| 16 | 136 | 0.58 | 11 | 1.69 | 1.87 |
| 20 | 109 | 0.66 | 10 | 1.71 | 1.90 |
| 24 | 76 | 0.70 | 10 | 1.75 | 1.92 |
| 28 | 52 | 0.62 | 17 | 1.79 | 1.92 |
| 32 | 34 | 0.70 | 14 | 1.84 | 1.95 |
| 36 | 29 | 0.74 | 13 | 1.84 | 1.96 |

As an example of how to read the table, take a look at the “16” row. There were 136 WRs that had (at least) two sets of 16 games for the same team, and those WRs averaged 1.69 YPRR. Given their split-half correlation of 0.58, YPRR stabilized at 11 games for this group. And given their 1.69 average YPRR, we can estimate that a WR with 2.00 YPRR after 16 games has a “true” YPRR of 1.87.

Meanwhile, the “Wtd Average” row tells us that YPRR stabilizes at 14 games and that a WR with 2.00 YPRR after 14 games has a “true” YPRR of 1.82. Although I wasn’t able to do this analysis on a play-by-play basis — again because PFF doesn’t make its raw data publicly available — I can use the weighted average number of routes run per game (26.1) to estimate that 14 games translates to approximately 350 routes run.
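
To make the arithmetic concrete, here’s a quick sketch of how the formulas in footnotes 2 and 4 generate the numbers in the “16” row above. (Matching the table requires reading “Observations” in footnote 2 as the games in both randomly-selected sets combined.)

```python
# Worked example using the "16" row of the YPRR table above.
games, r = 16, 0.58
avg_yprr, obs_yprr = 1.69, 2.00

# Footnote 2: stabilization point, with Observations = games in both halves.
stab = (2 * games / 2) * (1 - r) / r                                # ~11.6, shown as 11 in the table

# Footnote 4: regress the observed rate toward the group average.
true_yprr = (obs_yprr * games + avg_yprr * stab) / (games + stab)   # ~1.87
```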

Now let’s compare these results to those for YPT:

| Games | n | r | Games to R² = 0.50 | Avg YPT | “True” YPT if observed = 9.00 |
|---|---|---|---|---|---|
| Wtd Average | | | 39 | 8.25 | 8.62 |
| 4 | 465 | 0.13 | 26 | 8.08 | 8.20 |
| 8 | 285 | 0.13 | 53 | 8.19 | 8.30 |
| 12 | 223 | 0.16 | 65 | 8.24 | 8.36 |
| 16 | 136 | 0.32 | 34 | 8.36 | 8.56 |
| 20 | 109 | 0.38 | 33 | 8.38 | 8.62 |
| 24 | 76 | 0.47 | 27 | 8.50 | 8.74 |
| 28 | 52 | 0.43 | 37 | 8.63 | 8.79 |
| 32 | 34 | 0.52 | 29 | 8.74 | 8.88 |
| 36 | 29 | 0.57 | 27 | 8.67 | 8.86 |

Here, we see that YPT stabilizes at 39 games, which translates to about 205 targets given the weighted average of 5.2 WR targets per game. Therefore, YPT takes nearly three times as many observations as YPRR before a WR’s yardage performance represents half-skill/half-luck.

Finally, here’s the table for TPRR, i.e., the underlying stat that distinguishes YPRR from YPT:

| Games | n | r | Games to R² = 0.50 | Avg TPRR | “True” TPRR if observed = 25.0% |
|---|---|---|---|---|---|
| Wtd Average | | | 7 | 19.9% | 22.5% |
| 4 | 465 | 0.32 | 9 | 19.4% | 21.2% |
| 8 | 285 | 0.52 | 7 | 19.8% | 22.5% |
| 12 | 223 | 0.67 | 6 | 20.0% | 23.4% |
| 16 | 136 | 0.70 | 7 | 20.2% | 23.6% |
| 20 | 109 | 0.79 | 5 | 20.4% | 24.0% |
| 24 | 76 | 0.79 | 7 | 20.6% | 24.1% |
| 28 | 52 | 0.80 | 7 | 20.8% | 24.2% |
| 32 | 34 | 0.86 | 5 | 21.0% | 24.4% |
| 36 | 29 | 0.89 | 5 | 21.2% | 24.6% |

Once again mimicking Chase Stuart’s findings, TPRR is the “stickiest” of the three metrics: It only takes TPRR 7 games to stabilize, or approximately 185 routes run. This suggests that (a) TPRR is the most reliable indicator of a WR’s “true” yardage-producing ability, and therefore (b) PFF adds value to our WR evaluations by charting routes run.

Discussion

As I think Stuart and I have established in our respective studies, the number of routes a WR runs is more useful than his number of targets when we try to put yardage in context. But why? What’s the conceptual reason for this, from an abstract, football measurement perspective? Again, he and I are on the same page: For any position, opportunity is king; and for WRs, routes run is a fundamental measure of opportunity. The number of snaps may be more important insofar as a WR can’t run a route if he’s not on the field, but the existence of rushing plays means that WRs spend a large proportion of their snaps blocking rather than running routes. Therefore, the best conceptualization of a WR’s performance is that yardage comes from receptions, receptions come from targets, and targets come from routes run.
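
That chain is also just algebra: yards per route run equals targets per route run times yards per target, which is why the stickiest link (TPRR) carries most of the signal. As a back-of-the-envelope check using the weighted averages from the tables above:

```python
# YPRR = (targets / routes) * (yards / targets) = TPRR * YPT
tprr = 0.199        # weighted-average TPRR from the table above
ypt = 8.25          # weighted-average YPT from the table above
yprr = tprr * ypt   # ~1.64, in line with the weighted-average YPRR above
```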

If anything could be added to this line of thinking, it would be that routes run come from an offense’s pass identity, but extending that line of Stuart’s research is best left for a future post.

DT : IR :: TL : DR

PFF hasn’t put enough effort into establishing the reliability of its stats, so it’s (unfortunately) up to people like Chase Stuart and me (and James Keane of Bleeding Green Nation) to do most of the heavy lifting. Previously, Stuart found that YPRR is more reliable than YPT on a season-by-season basis because their underlying difference, TPRR, is the most reliable of all. My research showed the same thing on a game-by-game (or route-by-route) basis.


  1. Pun intended. 

  2. The formula is (Observations/2)*[(1-r)/r], where Observations counts the games in both randomly-selected sets combined.

  3. Weighted by group size. 

  4. The formula is [(Observed Performance * Observations) + (League-Average Performance * Stabilization Point)] / (Observations + Stabilization Point)  
