Last Updated on December 21, 2016
As we prepare for the third year of the College Football Playoff replacing the Bowl Championship Series, I ask you this: do you really understand how teams are ranked and chosen for the semifinals and the New Year’s bowls?
Although we’re already in the third season of the CFP – boy, does time fly – fans are still adjusting to the new system of ranking. I’ll admit that I thought I understood how the selection committee ranks teams each week, but after having the opportunity to participate in a mock selection committee with the CFP in October, I can tell you I had a lot to learn.
The selection committee’s directive
The night before we went in for our mock selection, we were each given a binder that included an agenda, the selection committee’s protocol, instructions on “how to select the four best teams to compete for the College Football National Championship,” and a one-sheet on each FBS school with key statistics from the season. Our mock committee’s task was to rank teams from the 2010 season as if it were the day after the conference championship games, so each one-sheet covered a 2010 FBS school and included a litany of statistics from the entire season.
A document entitled “College Football Playoff Selection Committee Protocol” began by stating the committee’s mission: “The committee’s task will be to select the best teams, rank the teams for inclusion in the playoff and selected other bowl games and then assign the teams to the sites.”
That statement was followed by a list of “Principles,” which laid out the process for selecting teams:
The committee will select the teams using a process that distinguishes among otherwise comparable teams by considering:
- Conference championships won,
- Strength of schedule,
- Head-to-head competition,
- Comparative outcomes of common opponents (without incenting margin of victory), and,
- Other relevant factors such as key injuries that may have affected a team’s performance during the season or likely will affect its postseason performance.
One of the first questions I asked of the CFP officials in the room and committee chair Kirby Hocutt (athletic director at Texas Tech), who was there to answer our questions throughout the mock selection, was whether the factors are listed in order of importance. I received a resounding “no.”
Asked what’s most important, Hocutt said simply, “Playing a great schedule and winning ballgames.”

The infamous “13th data point”
You’ll note that one of the distinguishing factors on the list is “conference championships won.” And no, the CFP insists that doesn’t mean winning a conference championship game.
Both Hocutt and CFP executive director Bill Hancock emphasized to us over and over again that there is no “13th data point,” a phrase coined by the media to describe a team’s 13th game – the conference championship game, which currently is played by all of the Power Five conferences with the exception of the Big 12.
It’s worth noting that when Baylor and TCU shared the conference title for the 2014 season, neither made it into the semifinals. Last season, however, when Oklahoma won the conference outright, the Sooners did make it into the playoffs.
The Big 12 isn’t taking any chances going forward – the conference is adding a conference championship game in 2017. When it was announced, conference commissioner Bob Bowlsby said, “The addition of a football championship game allows for a 13th data point for our teams under consideration for the College Football Playoff.”
The selection process
The process itself works the same every week, regardless of whether it’s the ninth week of the season (the first time the committee meets) or the last week of the season. Each committee member is asked to arrive with 30 teams selected, in no particular order, which they believe warrant consideration for the Top 25. The process begins anew each week, so it’s not simply a matter of revising the previous week’s rankings.
Committee members enter the room, sit at their assigned station and enter their 30 teams in a process that amounts to a secret ballot. I played the role of Jeff Long for the purposes of this mock selection.




By the time the committee arrives on the Monday following the ninth week of the regular season, committee members have already watched dozens upon dozens of games.
Hocutt said he watches 15-18 games per week, some live, some via recording off his television at home and others from the cuts committee members receive on iPads given to them by the CFP. Those cuts are approximately 45 minutes each, giving committee members the opportunity to watch games without commercials, timeouts, huddles or other breaks in action. Committee members can also request a “coach’s cut,” which has the audio removed.
I had the opportunity to sit down with the committee member I role-played – Arkansas athletic director Jeff Long – a couple of weeks after my mock selection, so I asked him about his watching habits. Long said he’s usually so keyed up after his team plays that he has plenty of energy to watch some games on his DVR at home later that evening and then can dig into more the next day.
Cuts stay on the iPads provided to committee members all season, and Hocutt said he has gone back and watched games from earlier in the season. For example, if Team A has played several contenders this season, he might go back and watch Team A to get a feel for how those contenders played against that common opponent.
At the end of the season, the committee gathers together to watch the conference championship games.
How the ranking works
After the committee members input their 30 teams to kick off each week’s discussion, the lists are compared by the CFP’s proprietary software, created by Code Authority, to determine a consensus as to which 30 teams should be considered for the Top 25. This step, along with each other step where committee members input selections and rankings, is anonymous.
Next, committee members are asked to input their Top 6, in no particular order. The software determines a consensus for which six teams should be considered for the top three spots in the rankings. Then committee members are asked to rank those six teams from #1-6, and the software produces a consensus #1-3.




Hancock told us he believes the biggest misconception about the process is that the committee is simply placing 25 teams on a list and matching them up. And I can now attest that the process is nothing like that at all.
Here’s a look at what the entire process of inputting and ranking looks like (leaving out for now the discussion and revoting that can occur throughout):
- Input 30 teams, in no particular order, to be considered for Top 25
- 30-team consensus determined by the software
- Input six teams, in no particular order, to be considered for #1-3
- Six-team consensus determined by the software
- Rank your top six teams
- Consensus determined for #1-3 by the software
- Input six teams, in no particular order, to be considered for #4-6
- Six-team consensus determined by the software
- Rank those six teams from #4-9
- Consensus determined for #4-6 by the software
- Input six teams, in no particular order, to be considered for #7-9
- Six-team consensus determined by the software
- Rank those six teams from #7-12
- Consensus determined for #7-9 by the software
- Input eight teams, in no particular order, to be considered for #10-17
- Eight-team consensus determined by the software
- Rank those eight teams from #10-17
- Consensus determined for #10-13 by the software
- Input eight teams, in no particular order, to be considered for #14-21
- Eight-team consensus determined by the software
- Rank those eight teams from #14-21
- Consensus determined for #14-17 by the software
- Input eight teams, in no particular order, to be considered for #18-25
- Eight-team consensus determined by the software
- Rank those eight teams from #18-25
- Consensus determined for #18-21 by the software
- Input eight teams, in no particular order, to be considered for #22-25
- Eight-team consensus determined by the software
- Rank those eight teams from #22-29
- Consensus determined for #22-25 by the software
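To make the cadence of those rounds concrete, here is a minimal Python sketch. The round table mirrors the list above; the consensus scoring itself (summing each member’s rank positions, lowest total first) is my own assumption for illustration only, since the actual software built by Code Authority is proprietary and its method isn’t public.

```python
# Illustrative sketch of the round structure described above.
# The consensus method (sum of rank positions across ballots,
# lowest total wins) is an assumption for demonstration only.

from collections import Counter

# (pool size, ranks assigned in the round, ranks that become final)
ROUNDS = [
    (6, range(1, 7),   range(1, 4)),   # rank 6, lock in #1-3
    (6, range(4, 10),  range(4, 7)),   # rank 6, lock in #4-6
    (6, range(7, 13),  range(7, 10)),  # rank 6, lock in #7-9
    (8, range(10, 18), range(10, 14)), # rank 8, lock in #10-13
    (8, range(14, 22), range(14, 18)), # rank 8, lock in #14-17
    (8, range(18, 26), range(18, 22)), # rank 8, lock in #18-21
    (8, range(22, 30), range(22, 26)), # rank 8, lock in #22-25
]

def consensus_order(ballots):
    """Order teams by total rank position across all ballots (lowest wins)."""
    totals = Counter()
    for ballot in ballots:              # each ballot lists teams best-first
        for position, team in enumerate(ballot):
            totals[team] += position
    return sorted(totals, key=lambda team: totals[team])

# Example: three members rank a six-team pool from the 2010 season;
# the top three of the consensus become #1-3.
ballots = [
    ["Auburn", "Oregon", "TCU", "Stanford", "Wisconsin", "Ohio State"],
    ["Oregon", "Auburn", "TCU", "Wisconsin", "Stanford", "Ohio State"],
    ["Auburn", "TCU", "Oregon", "Stanford", "Ohio State", "Wisconsin"],
]
top_three = consensus_order(ballots)[:3]
print(top_three)  # ['Auburn', 'Oregon', 'TCU'] under this toy scoring
```

Even this toy version shows why the committee’s output isn’t a simple 1-to-25 ballot: each round only locks in the top few slots, and the rest of the pool carries forward into the next round of discussion.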
But, of course, it’s not that simple. There’s discussion between each round. Large television monitors placed in front of committee members around the room display data from SportSource Analytics with everything from detailed offensive and defensive stats to various strength of schedule measurements. Teams can be viewed individually or in side-by-side comparisons.




Any member with a connection to a school being discussed – whether it’s their alma mater, their current employer, a school where their child works, etc. – is recused from both voting and discussion when that school is being considered, and leaves the room.
There can also be revotes, so nothing is set in stone until the committee reaches its consensus Top 25 and leaves the room on Tuesdays. To trigger a revote, a committee member must find three other supporters in the room, and the revote must cover at least three contiguous ranking slots.
For example, our mock committee wound up revoting #3-6 after we ranked #7-9. The revote resulted in no change. However, another revote we initiated for #21-25 at the end of the day did result in a change. And then, #3-7 were revoted and resulted in a change, even though the #3-6 revote had failed to produce a change earlier.
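The revote trigger amounts to a small rule check: the mover plus three supporters (four members total), over a span of at least three contiguous slots. A hedged sketch, with the function name and structure my own rather than anything from the CFP:

```python
# Sketch of the revote trigger described above. Illustrative only;
# the thresholds come from the article, the code shape is assumed.

def revote_allowed(supporters, first_slot, last_slot):
    """Check whether a proposed revote meets the stated requirements."""
    enough_support = supporters >= 4          # mover plus three others
    span = last_slot - first_slot + 1         # contiguous slots covered
    return enough_support and span >= 3

print(revote_allowed(4, 3, 6))    # True: four members, slots #3-6
print(revote_allowed(3, 3, 6))    # False: only three members on board
print(revote_allowed(4, 5, 6))    # False: only two contiguous slots
```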
With all of the inputting, ranking, discussion and revoting, you can see how it takes the better part of two full days each week for the committee to produce a Top 25 – and why it’s probably a little more complicated than you thought!