Coase-Sandor Working Paper Series in Law and Economics

Publication Date

2019

Abstract

There are obvious benefits to ranking academic departments based on objective measures of faculty research output. However, there are considerable difficulties associated with producing reliable and accurate rankings. In this short comment, we offer an evaluation of Heald and Sichelman's recent foray into the project of ranking law schools. Heald and Sichelman are to be commended for the transparency and rigor of their rankings effort. At the same time, it is important to note that their rankings involve a series of contestable discretionary choices and could give rise to counterproductive gaming by law schools seeking to improve their place in the rankings. In particular, Heald and Sichelman's system places a thumb on the scale on behalf of more senior faculty who publish in traditional law reviews and write in popular substantive areas like constitutional law. This raises the concern that rankings of this type could discourage law schools from hiring faculty who are young, who produce interdisciplinary scholarship, and who write in otherwise underrepresented fields. Nonetheless, Heald and Sichelman have taken an important step forward, and other scholars should look to build profitably upon their work.

CITATION: Adam Chilton & Jonathan Masur, What Should Law School Rankings Measure and How Should We Measure It: A Comment on Heald and Sichelman’s Rankings, 60 JURIMETRICS J. __–__ (2019).

There are obvious benefits to ranking academic departments based on objective measures of faculty research output. However, there are considerable difficulties associated with producing reliable and accurate rankings. Paul Heald and Ted Sichelman deserve substantial credit for undertaking a project to improve upon existing efforts to rank the academic impact of law school faculties, particularly given that U.S. News & World Report now claims that it will incorporate some type of citation ranking into its overall assessment of law schools.1 Moreover, Heald and Sichelman’s work is careful and thorough, and they are admirably honest about its limitations and shortcomings. They also deserve considerable credit for their commitment to transparency and replicability.

At the same time, their rankings raise many issues that deserve further discussion. The reason is simple: if rankings of academic impact based on citations are incorporated into U.S. News’s overall rankings of law schools, schools may try to maximize their ranking, even at the cost of other academic or social values. Decisions regarding how to construct rankings will thus directly shape the kinds of scholarship, and scholars, that are produced and rewarded. These decisions should therefore be carefully debated by the legal academy and made only after thoughtful consideration of exactly what the rankings should measure and how best to measure it. To contribute to that necessary debate, we pursue two goals in this short comment: to describe and decompose some of those necessary decisions about what to measure, and to highlight some problems that may result from using HeinOnline to measure faculty impact.2

Number

893

