
Does Yards Per Play Track Total Offensive Strength Linearly?

Yards per play (YPP) is a useful metric for evaluating NFL offenses because it measures efficiency rather than just total output. The question I'm concerned with here is whether YPP tracks offensive strength (scoring ability) linearly, or whether a small jump in YPP over key ranges can have a disproportionately large effect on output.

To start with, we can see that over certain ranges, gains in YPP are virtually worthless. If a team averaged only one yard per play, it would be no better off improving to two yards per play: averaging two yards per play would rarely get you a first down and almost never result in a scoring drive. Similarly, if you averaged 20 yards per play, you'd score every time barring a turnover, so improving to 21 yards per play would change nothing except how quickly you score.
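
To make those flat regions concrete, here is a toy sketch (my own construction, not anything from the original analysis): a perfectly consistent offense that gains exactly the same yardage on every play either always moves the chains or never does, so, turnovers aside, whole swaths of the YPP scale are interchangeable.

```python
def sustains_drives(gain, downs=4, to_gain=10):
    """A perfectly consistent offense gains exactly `gain` yards on every
    play. Under a (downs, to_gain) rule it moves the chains if and only if
    a full series covers the distance; an offense that always converts
    eventually scores on every possession, and one that never converts
    never scores."""
    return downs * gain >= to_gain

for gain in (1, 2, 2.5, 3, 20, 21):
    verdict = "scores every drive" if sustains_drives(gain) else "never scores"
    print(f"{gain:>4} yards per play: {verdict}")
```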

These are extreme scenarios - the worst YPP offense in the league this year was the Rams at 4.7, and the best the Falcons at 6.7 - but they serve to illustrate the point that YPP almost certainly affects scoring non-linearly if we consider all possible ranges. But what happens if we stick to the plausible range for an NFL team - say 4.0 to 8.0? Should we expect that going from 4.0 to 5.0 has the same effect on scoring output as going from 5.0 to 6.0? I'd argue it's unlikely.
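
One way to poke at that question is a crude Monte Carlo drive simulator. To be clear, everything in this sketch is assumption: per-play gains drawn from a normal distribution with an arbitrary 6-yard spread, an offense that always goes for it on fourth down, and no punts, field goals, turnovers, or penalties. It only shows how you would test whether equal YPP jumps buy equal points; the numbers it prints are artifacts of those assumptions.

```python
import random

def simulate_drive(ypp, sd=6.0, downs=4, to_gain=10, start=75):
    """Return 7 if a drive starting `start` yards out reaches the end zone,
    else 0. Per-play gains ~ Normal(ypp, sd): a stand-in distribution, not
    fitted to real play-by-play data."""
    yards_to_goal, line_to_gain, down = start, min(to_gain, start), 1
    while True:
        gain = random.gauss(ypp, sd)
        yards_to_goal -= gain
        line_to_gain -= gain
        if yards_to_goal <= 0:
            return 7                                    # touchdown
        if line_to_gain <= 0:                           # moved the chains
            down, line_to_gain = 1, min(to_gain, yards_to_goal)
        elif down == downs:
            return 0                                    # turnover on downs
        else:
            down += 1

def points_per_drive(ypp, trials=50_000, **rule):
    return sum(simulate_drive(ypp, **rule) for _ in range(trials)) / trials

random.seed(1)
for lo, hi in [(4.0, 5.0), (5.0, 6.0), (6.0, 7.0)]:
    delta = points_per_drive(hi) - points_per_drive(lo)
    print(f"{lo} -> {hi} YPP: worth about {delta:+.2f} points per drive")
```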

For starters, the NFL has a rule quirk of which you might be aware: you get four downs to gain 10 yards in order to reset to 1st-and-10. It's a quirk because it need not be four and 10 - it could be five and 20, three and eight, or anything else. If we changed the requirement to two downs to gain 20 yards, for example, then going from 20 to 21 YPP might actually matter a good deal. The value of each incremental yard (or tenth of a yard) per play differs depending on the down-and-distance rule.
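
Sticking with the same toy model (repeated here so the snippet runs on its own, with all the same caveats), we can swap in a hypothetical down-and-distance rule and watch where the marginal yard matters move around. The (2, 20) rule below is just one made-up alternative.

```python
import random

# Same toy drive model as above, repeated so this snippet runs on its own.
def simulate_drive(ypp, sd=6.0, downs=4, to_gain=10, start=75):
    yards_to_goal, line_to_gain, down = start, min(to_gain, start), 1
    while True:
        gain = random.gauss(ypp, sd)
        yards_to_goal -= gain
        line_to_gain -= gain
        if yards_to_goal <= 0:
            return 7
        if line_to_gain <= 0:
            down, line_to_gain = 1, min(to_gain, yards_to_goal)
        elif down == downs:
            return 0
        else:
            down += 1

def points_per_drive(ypp, trials=50_000, **rule):
    return sum(simulate_drive(ypp, **rule) for _ in range(trials)) / trials

random.seed(2)
for downs, to_gain in [(4, 10), (2, 20)]:
    print(f"rule: {downs} downs to gain {to_gain} yards")
    for ypp in (4.0, 6.0, 8.0):
        delta = (points_per_drive(ypp + 1, downs=downs, to_gain=to_gain)
                 - points_per_drive(ypp, downs=downs, to_gain=to_gain))
        print(f"  +1.0 YPP from {ypp}: about {delta:+.2f} points per drive")
```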

If the value of each incremental increase in a team's per-play average depends on the down-and-distance rule, we can infer that, given a constant requirement (four downs, 10 yards), some ranges of improvement are more likely to fall in the sweet spot than others, i.e., the range calibrated to have maximum impact on first-down conversion and hence on scoring drives.

If that's true, then perhaps an increase from 4.5 YPP to 5.0 causes a team to score x more points per game on average, while a gain from 5.0 to 5.5 causes 1.4x. I would assume that for every down-and-distance rule set, there's a range of optimum sensitivity up or down. It might max out somewhere - for example, going from 5.8 YPP to 6.0 might be worth as much as going from 4.5 to 5.0. If that's the case, we might be underestimating how significantly a team is improving (or regressing) if we look only at the linear net change.
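
In simulator terms, the hypothesis is that the first difference of scoring with respect to YPP is not flat. A sweep over the plausible range (same toy model and caveats as above) is one way to look for that peak of sensitivity:

```python
import random

# Same toy drive model as above, repeated so this sweep runs on its own.
def simulate_drive(ypp, sd=6.0, downs=4, to_gain=10, start=75):
    yards_to_goal, line_to_gain, down = start, min(to_gain, start), 1
    while True:
        gain = random.gauss(ypp, sd)
        yards_to_goal -= gain
        line_to_gain -= gain
        if yards_to_goal <= 0:
            return 7
        if line_to_gain <= 0:
            down, line_to_gain = 1, min(to_gain, yards_to_goal)
        elif down == downs:
            return 0
        else:
            down += 1

def points_per_drive(ypp, trials=50_000):
    return sum(simulate_drive(ypp) for _ in range(trials)) / trials

random.seed(3)
grid = [4.0 + 0.5 * i for i in range(9)]               # 4.0, 4.5, ..., 8.0
pts = {ypp: points_per_drive(ypp) for ypp in grid}
for lo, hi in zip(grid, grid[1:]):
    print(f"{lo:.1f} -> {hi:.1f} YPP: {pts[hi] - pts[lo]:+.3f} points per drive")
```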

One other idea that occurred to me is that YPP might work in combination with something like consistency or success rate. Getting 6.0 YPP is merely an average: gaining 12 yards on half your plays and zero on the other half is not the same as arriving at 6.0 with a tighter distribution of gains and a smaller standard deviation. So it might be that both variables work together to determine the YPP incremental sweet spot, in which case a consistent team gaining half a yard per play in the sweet spot might be vastly more improved than an inconsistent team gaining the same amount outside of it.
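
That interaction is also easy to sketch. Here the per-play gain comes from a caller-supplied sampler, so we can compare offenses with an identical 6.0 mean but different volatility; all three distributions below are invented for illustration, with the usual caveats about the toy drive model.

```python
import random

# Toy drive model once more, but the per-play gain comes from a
# caller-supplied sampler, so offenses can share a mean and differ
# in volatility.
def simulate_drive(sample_gain, downs=4, to_gain=10, start=75):
    yards_to_goal, line_to_gain, down = start, min(to_gain, start), 1
    while True:
        gain = sample_gain()
        yards_to_goal -= gain
        line_to_gain -= gain
        if yards_to_goal <= 0:
            return 7
        if line_to_gain <= 0:
            down, line_to_gain = 1, min(to_gain, yards_to_goal)
        elif down == downs:
            return 0
        else:
            down += 1

def points_per_drive(sample_gain, trials=50_000):
    return sum(simulate_drive(sample_gain) for _ in range(trials)) / trials

random.seed(4)
offenses = {
    "12 or 0, 50/50 (mean 6.0, sd 6.0)": lambda: random.choice((12.0, 0.0)),
    "Normal(6.0, sd 3.0)": lambda: random.gauss(6.0, 3.0),
    "Normal(6.0, sd 6.0)": lambda: random.gauss(6.0, 6.0),
}
for label, sampler in offenses.items():
    print(f"{label}: {points_per_drive(sampler):.2f} points per drive")
```

Passing a sampler rather than a mean keeps the distinction explicit: two offenses can agree on YPP while disagreeing on every other moment of their gain distribution.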

After I posted this piece, Rufus Peabody of Massey-Peabody tweeted a chart showing the relationship between average YPP and points scored. The line is pretty straight, indicating the relationship over the normal range of NFL games is in fact close to linear. I don't have a strong refutation for that. Perhaps the line is straight because teams with different volatility levels optimize at different points, but even if that were the case, I doubt they would do so uniformly. I still think it makes sense that there would be a sweet spot, but Rufus' data, which I have no reason to doubt, suggests otherwise.