The flaws in college rankings
Deciding where to apply to college is not easy. Having experts evaluate colleges seems like a great alternative to figuring it out ourselves. It is hardly surprising, then, that students and parents turn to rankings such as those published by U.S. News and The Princeton Review.
With the college admissions landscape becoming increasingly competitive over the past two decades, applicants have often relied on rankings with blind faith. Notwithstanding critics’ opinions that rankings are elitist and imprecise, about two-thirds of high school students use college rankings when making decisions. College rankings offer the illusion of precision, but they simply do not measure the most important factor: the educational experience of an individual student.
The rankings can be helpful, but students should make an informed decision by examining the metrics that go into the ranking methodology and then determining how much those metrics matter to their own requirements.
To understand how a rank is determined, let us dive into the methodology of U.S. News & World Report, which publishes one of the most influential rankings. Although U.S. News refines its methodology based on user feedback, literature reviews, and ongoing engagement with higher education institutions, its core components have remained the same for many years.
About one-quarter of a college’s rank is based on the reputational ratings it receives in an annual poll that U.S. News conducts of college presidents, provosts, admissions deans, and a small group of college counselors. The remaining three-quarters are made up of data collected in categories including retention and graduation rates, faculty resources, student selectivity, financial resources, alumni giving, and graduate indebtedness. The weight each category carries in the final calculation can vary from year to year.
The U.S. News methodology is the product of years of research, and the publication claims that only thoroughly vetted academic data from its surveys and reliable third-party sources are used to calculate each factor. While all this looks neatly tied together, the data sources themselves are a little less transparent.
For the reputation rating, administrators are asked to rate the academic quality of undergraduate programs at colleges with the same mission as their own (e.g., research universities, liberal arts colleges, regional universities, or regional colleges). Many respondents acknowledge that they do not know enough about other colleges to provide a meaningful response. The response rate is fairly low: less than 50 percent for college administrators and less than 10 percent for high school counselors.
Critics express concern that the methodology does not include any factors that directly measure educational quality. There are only indirect measures, such as small classes, which may mean more personal attention, and higher faculty salaries, which may translate to more motivated teaching.
Under pressure, though, colleges have tried to improve their performance on factors in the methodology that have little bearing on educational quality. For many years, colleges have taken the relatively harmless approach of producing flamboyant booklets that highlight their programs, facilities, and ambitious plans for the future. These booklets, which are intended primarily for prospective students, are also sent to college presidents at other campuses in hopes of raising awareness and nudging reputation ratings higher.
The rankings have also been distorted when colleges misrepresented data on some factors in the ranking formula to enhance their standing. U.S. News has had to penalize such colleges by giving them an “Unranked” status. Washington University in St. Louis, Scripps College, and UC Berkeley are among the colleges that have been “Unranked” in an edition; the status lasts a year, until the college confirms the accuracy of its next data submission. Until the misreporting came to light, however, these colleges enjoyed their high positions on the list.
In reality, how relevant is a reputation rating (one college’s view of another, similar college) to a student’s view of a college? Hardly at all. And yet it affects the ranking significantly, accounting for almost 25 percent of the total. Social mobility, a metric that measures the graduation rate of Pell Grant recipients, is weighted at 5 percent. Should this metric matter to a student who is not seeking any financial aid? Clearly, the answer is no.
Moreover, every year students desperately seek that acceptance letter from Harvard, Princeton, or Yale without really knowing why, other than to impress family and friends. It is fine to have prestige as one of the selection factors, but it becomes a problem when it is the only factor.
Students should use the rankings as tip sheets. In other words, do not assume that one college is better than another simply because it appears higher on the list.
As a prospective student, you should decide what is most important to you. Some colleges have a better social scene, while others are environmentally friendly. An aggregated, standardized ranking cannot replace your own judgment of whether you will thrive and learn at a prospective college. Forming that judgment requires personal observation through a campus visit (or a virtual tour and interviews with current students), a check on the availability of your desired courses and programs, and an understanding of your personal traits and goals.
With such a large investment of time and money in a college degree, picking a school that fits who you are matters more than the rank it happens to hold in a given year.
Anuradha Shenoy is an independent college guidance counselor who teaches a college prep elective – AVID – at Eastlake High School. She also serves as a copy editor for the Sammamish Independent.