HardRightEdge
Guest
Two things. One, pressures are a better indicator of pass-rush performance than sacks. If you don't believe that, then you must believe that Matthews had his worst season in 2011 because he only had 6 sacks (he had 43 hurries and 22 hits, though). Two, Raji graded out positively in pass rush in 2010 (he was negative in run defense) and 2012; his 2011 was terrible. Should be noted that Raji's pass-rush grade was less than half of Mike Neal's in 2012.
As for using PFF stats and grades: I don't know of any other site that grades every player on every snap in an objective fashion. The stats they provide are valuable because they actually give context to a player's performance. I watch the Packers A LOT...I don't watch other teams that often. I can't say how good a safety Burnett is because I don't watch every other safety in the NFL. PFF allows me to compare Burnett to other safeties. Any flaw in their grading should also affect other players in the league, since their system is pretty much uniform. It may not be perfect, but every player should be subject to the same flaws, so they should wash out (not perfectly, but enough to minimize issues).
I understand that's why such stats are popular. But that doesn't respond to the questions I raised, specifically the value of "consequential pressure" vs. the other kind. As noted, I don't limit value to sacks. I like hits a lot, hurried passes that result in interceptions even more, and QBs throwing off-balance incompletions are another happy event. But if all you're doing is pressuring the QB to step up in the pocket, there are several QBs in this league who will eat you alive.
PFF may allow you to compare players, but how reliable is it, really? You assume their "system is pretty much uniform," but how do you know that? Many of those stats involve subjective judgment calls by, as I noted, unpaid volunteers. How consistent are those calls across hundreds(?) of volunteers? They ask for volunteer commitments of about 10 hours per week...that would suggest several people each breaking down some limited aspect of each game. That's a lot of cats to herd in a largely subjective endeavor.
And when we follow these stats, to what degree are they black boxes? What is PFF's definition of a "pressure", for example? If you find one I'd like to see it. Maybe you need to be a paid subscriber to get full access to methodology and interpretive guidelines...I couldn't say...maybe not even then?
Team grades are a lot more valuable...they're developed by professionals who know what the player's assignment is on each play. That might not be so clear to PFF volunteers. Who blew that coverage? Who missed that blitz pick-up? Etc., etc. Unfortunately, teams don't regularly or fully disclose that info...we get a few glimpses here and there. But the fact that PFF data is what's available doesn't make it valuable if it happens to be inconsistent or otherwise flawed.
This is heresy in the fantasy football age, but no stat is better than an inconsistent, inaccurate, misleading or un-vetted stat.
This is not a blanket indictment of PFF. I like the stuff that is measurable. To take one example, the time-in-pocket data recently discussed in this forum raises some interesting questions. But that requires only a stopwatch and a definition, such as time in the tackle box. Even then, though, what is THEIR definition of "time in the pocket"? What do they tell their herd of cats to look for? That's relevant.