Saturday, September 17, 2011


          Each week, the NCAA college football rankings come out, and they attract intense interest from college football partisans.  Since 1936, these rankings have been assembled by the Associated Press, and for many years they were the only way to judge who was better than whom.  Then, with the advent of the Bowl Championship Series, the stakes of the rankings escalated, and a complex computer-assisted algorithm joined the opinions of sportswriters.

            The squabbling about the rankings, whatever their source, is unending.  No doubt, it’s part of the fun of college football season.  Different people have different views about what should go into the assessment, and how much weight each factor should get.  In a sense, however, there is a reasonable empirical test of the accuracy of the rankings.  When ranked teams play, the higher ranked team should defeat the lower ranked team.  Rankings are, after all, just predictions about who will win when ranked teams go at it.  I looked at the results of games played between the number 1 and number 2 ranked teams over the 75 years that ranking has been done.  Number 1 has played Number 2 41 times, and Number 1 has been victorious in 24 of those games.  Better than 50%, but not a lot better than 50%.  So ranking is hard.  There is error in measurement, and there is error in the weights assigned to different things being measured.  Almost certainly, when teams ranked closely together are compared, the error in measurement and weighting exceeds the difference in quality between the teams.  Error could be reduced if, say, we just presented a list of the 20 best teams, in no particular order, without identifying any of them as Number 1.  But that would be much less fun.
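The "better than 50%, but not a lot better" point can be checked with a quick back-of-the-envelope calculation.  The sketch below (in Python, using only the standard library; the 24-of-41 figure is taken from the paragraph above) runs an exact two-sided binomial test: how surprising would a 24-17 record be if the higher ranking carried no predictive information at all?

```python
from math import comb

# No. 1 beat No. 2 in 24 of their 41 meetings (figures from the post).
wins, games = 24, 41

win_rate = wins / games  # roughly 0.59

# Exact two-sided binomial test against a fair coin:
# P(X >= 24) for X ~ Binomial(41, 0.5), doubled for the two-sided version.
p_upper = sum(comb(games, k) for k in range(wins, games + 1)) / 2**games
p_two_sided = min(1.0, 2 * p_upper)

print(f"win rate: {win_rate:.3f}")
print(f"two-sided p-value vs. a coin flip: {p_two_sided:.3f}")
```

On these numbers the test cannot rule out a coin flip at any conventional threshold, which is consistent with the point above: when closely ranked teams meet, the error in the rankings may well exceed the real difference in quality.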

            I don’t really care about college football—a game played by pros disguised as students, overseen by coaches and administrators who are almost completely corrupt.  What I do care about—a great deal—is rankings of the quality of colleges, universities, and professional schools.  If ranking football teams is hard, ranking school quality is impossible.  Not only are there many more factors to consider, but the criteria are totally ambiguous.  No one is suggesting that if Williams College is the number 1 liberal arts college and Swarthmore is number 2, Williams would “beat” Swarthmore if they played.  Played what?  It is, in short, a fool’s errand to rank colleges and universities.  Rankings convey a false precision that is extremely misleading.  Yet the rankings go on, and U.S. News and World Report coins money by publishing them, and high school kids and their parents are driven by the rankings in deciding where to apply and where to go, while the schools themselves strategize to try to move up the ladder (“Don’t admit that spectacular student.  She’s too good.  We’re just a safety school.  She’ll never come, and that will make our ‘yield’ look bad in the U.S. News rankings.”  “Let’s try to get every alum to give us something—even five dollars.  That way, our alumni participation will look good in the U.S. News rankings.”)  I wish I were making this up, but I’m not.

            If college rankings were harmless, however inaccurate, we could let schools have their fun, U.S. News make its money, and just ignore the foolish enterprise.  But unlike the rankings of college football teams, they aren’t harmless.  They lead students to ask “what’s the best school?”—the wrong question, rather than “what’s the best school for me?”—the right question.  Much time, money, and angst are wasted in pursuit of a non-existent objective.  This nonsense really must stop.

            What would a sensible alternative be?  Well, one alternative that would reduce the damage is for the rankers to identify the top twenty schools in each category, then the next twenty, and so on, and present each tier unordered.  This would not eliminate foolishness.  The schools ranked 21-25 would struggle to game the system so that they made it into the top tier, and the schools ranked 16-20 would game the system to maintain their position against the competition.  But it would certainly reduce the game playing.  And it would eliminate the false claims to precision that such rankings imply.  Is there any chance that U.S. News would do this on its own?  Not a chance.  Sales would suffer.  The buzz surrounding the annual ratings issue would die down.  Then is there any chance that U.S. News could be pressured into adopting this more sensible approach?  Maybe.

            What if the “elite” in every category banded together and refused to cooperate unless and until U.S. News mended its ways?  You can’t stop the magazine from producing the rankings, but the rankings might quickly lose credibility if the numbers on which they were based were known to be unreliable.  Is there any chance that the elite might band together in this way?  Well, getting them to cooperate won’t be easy, and here’s why.  There is almost certainly a relation between the rankings of the schools and their quality.  It’s just that the direction of causality is opposite to what you, and U.S. News, imagine.  What really makes one school better than another is the quality of the students who attend.  Rank almost any good school number 1, and it will become number 1, because it will attract the best students, who then teach one another.  In other words, rankings differences cause quality differences rather than reporting them.  It is worth noting that the economist John Maynard Keynes made this point years ago, in discussing the picking of stocks.  What matters, he said, is not which company is best.  What matters is which company you think other people will think is best.  That’s what will drive the share price up, and that’s the bandwagon you want to be on.  In other words, thinking Acme Widgets is the best company is what makes it the best company, at least for investment purposes.  Similarly, thinking that Swarthmore is the best college is what makes it the best college—because the best students will go there.

             But if Williams, Amherst, Swarthmore, Pomona, and Wesleyan band together and stop cooperating, and the rankings as we know them go away, the quality of the students they attract will go down, as high school seniors distribute themselves across other institutions.  The same holds if Yale, Harvard, Princeton, and Stanford stop cooperating.  And unless the top schools stop cooperating, nothing will change.  If a school ranked number 12 refuses to play the game, it will be interpreted as nothing but sour grapes.

            So what can possibly induce the elite to use what leverage they have to get U.S. News to change its practices?  Why would any institution act against what seem to be its own best interests?  Why would any school do anything that might reduce the overall quality of its student body? My answer, stimulated more by hope than by experience, is that elite schools might do it just because it’s right.  It’s right because colleges and universities are our bastions of truth-seeking, and as defenders of truth, they should not participate in anything that seriously distorts truth.  And it’s right because they are interested in doing whatever they can to help students make wise decisions, and rankings are to a large extent the enemies of wise decisions.  So what I would love to see is movements by faculty and students on campuses across the land to convince their administrations to say “enough!”  If administrators can’t be persuaded to do the right thing, perhaps they can be shamed into it.
