THE VALIDITY OF THE U.S. NEWS AND WORLD REPORT

RANKING OF ABA LAW SCHOOLS1

 

 

Stephen P. Klein, Ph.D. and Laura Hamilton, Ph.D.

 

February 18, 1998

 

1This report was commissioned by the Association of American Law Schools. The views expressed in this report are those of the authors and do not necessarily reflect the views of the AALS or any of the organizations with which the authors are affiliated. Prof. Joseph Hoffmann of Indiana University School of Law was kind enough to provide us with some of the data that we analyzed for this report.


 

THE VALIDITY OF THE U.S. NEWS AND WORLD REPORT RANKING OF ABA LAW SCHOOLS

Summary
Introduction
Comprehensiveness
Review of Each Factor
Overall Ranking
Conclusions

 

SUMMARY

U.S. News and World Report uses data on 12 factors to make its annual evaluation of law schools. Two of these factors (ratings by academics and ratings by lawyers and judges) involve subjective judgments of school quality. The other 10 factors are based on objective actuarial data, such as the school’s median LSAT score for beginning students and its bar passage rate. US News uses the 12 factors to rank the top 50 schools and assign the remaining 124 schools to three "tiers" of overall quality (with about 40 schools per tier). US News also publishes the data on each school for a few of the 12 factors.

There are several problems with the US News evaluation system. One of the most important is that US News does not consider many factors, such as the educational benefits of attending a certain school or the quality of its faculty, that are just as important as the ones it does include. There also are problems related to the accuracy of the data US News relies on to measure a factor, intentional and unintentional biases in the subjective assessments of school quality, and the use of variables that may foster inappropriate school practices. For example, survey respondents may rate down some schools in order to make their own school look better, and schools may try to raise their score on the "rejection rate" factor by encouraging applications from students who have virtually no chance of being admitted. In addition, the method US News uses to combine the values on different components (such as LSAT scores and undergraduate grades) into an overall factor score (such as "student selectivity") does not actually assign the components the weights US News says they carry (and no rationale is provided for those weights). Other concerns relate to whether the persons who respond to the surveys are truly representative of their respective populations and to how US News imputed the values for missing data on certain variables.

Statistical analyses of the data that were available to us revealed that virtually all of the differences in the overall ranks among schools could be explained by the combination of two of the US News factors. These factors are student selectivity (which is driven by the school’s median LSAT score) and academic reputation. The other ten factors are superfluous. However, because the US News ranking system inflates small differences in quality among schools, the addition of other factors (and/or slightly changing their weights) could shift a school from the bottom of one broad category of overall quality to the top of another (such as from the second to the third tier). Unfortunately, because of problems with all the factors in the US News system, these changes could just as easily decrease as increase the validity of the overall rankings.


THE VALIDITY OF THE U.S. NEWS AND WORLD REPORT RANKING OF ABA LAW SCHOOLS

INTRODUCTION

For the past eight years, U.S. News and World Report has published its annual ranking of American Bar Association approved law schools. These ranks receive considerable attention, and they may affect important decisions: students selecting schools, law schools establishing policies, and law firms hiring new attorneys. Many people rely on the ranks because US News enjoys a national reputation; it considers several factors that appear to be relevant in comparing the quality of different schools; it combines its findings into a single, easy-to-use numerical value; and there is no generally accepted competing set of school rankings.

This paper examines the validity of the US News ranks. We do this by assessing the following assumptions on which the ranks are based: (1) US News considers all the factors (or at least all the important ones) that are needed to assess a law school’s quality, (2) US News measures these factors in a reasonably precise and unbiased way, and (3) there is consensus about how much weight each factor should carry in determining a school’s quality (and these weights are reflected in the ranks US News publishes). We also examine some of the policy implications and consequences of the criteria and procedures US News uses, and we consider the appropriateness of condensing the assessment of a school’s quality into a single numerical value.

COMPREHENSIVENESS

US News considers 12 factors in evaluating a law school’s quality. While a case could be made for including most of these factors, there are certainly many others that US News does not assess but which are arguably just as important as the ones it does measure. Moreover, there is no reason to assume the variables it measures are adequate proxies for the ones it does not assess. Consequently, inserting additional relevant factors into the US News evaluation system could change its overall ranking of schools. In this section, we list the factors US News considers and note some that it does not measure but probably should if it is to provide a comprehensive assessment of school quality.

The following 12 factors are in the US News evaluation system: (1) reputation among academics (law school deans and faculty), (2) reputation among lawyers and judges, (3) median Law School Admission Test (LSAT) score, (4) median undergraduate grade point average (UGPA), (5) percentage of applicants not accepted (i.e., rejection rate), (6) expenditures per student for instruction, library, and supporting student services, (7) expenditures per student for financial aid, indirect costs, and overhead, (8) total number of volumes, microfilm, microfiche, and titles in the law library, (9) student-to-faculty ratio, (10) percentage of students employed at time of graduation, (11) percentage employed nine months later, and (12) bar passage rate. The data for factors 1 and 2 are obtained from annual surveys conducted by US News. The data for the other factors are obtained from a questionnaire that is completed by each school.

These factors do not include any direct assessment of the caliber of a school's faculty. Although it is difficult to create objective measures of faculty scholarship and teaching ability, these factors are critical to the quality of the education students receive. Also conspicuous by its absence is any student assessment of school quality. Where would recent graduates place their law school relative to other schools? Did they find their courses stimulating and interesting? Were there problems getting into the courses they wanted or obtaining access to professors? Are there serious concerns about housing, personal safety, or parking? Is the learning environment friendly and cooperative or is it highly competitive? Do they feel prepared to function effectively in their practice environment? Information about these and a plethora of other factors is important to students choosing a school, but it is not accounted for in the US News system.

There are, of course, many other indicators of a school’s quality that US News does not consider, such as the cultural diversity of its student body, the design of its curriculum, opportunities for students to participate in legal "clinics," and its students' summer employment opportunities. The US News rankings also do not consider the educational or career benefits of attending a certain school, such as would be indicated by the percentage of its graduates who receive judicial clerkships or land a good job after graduation relative to the general caliber of the students coming into the school and other factors. Information about deriving these types of benefits from attending a given school certainly would be important for students concerned about their entry into the legal profession.

To sum up, the US News ranks are based on several factors, many of which are arguably relevant indicators of school quality. However, it is not clear whether the factors it measures are the most important ones to assess or even whether all of them are truly indicative of a law school’s quality. There are certainly several other facets of quality that US News ignores but which are critical for many audiences.

REVIEW OF EACH FACTOR

This section reviews how US News measures each of the 12 factors in its evaluation system and discusses the problems that are associated with its procedures. Some of these problems are methodological; e.g., its ranking of schools suggests much greater differences in quality among them than actually exist. Other problems are more policy relevant, such as those that may lead to undesirable law school practices.

Reputation Among Academics. In the fall of 1996, US News sent a questionnaire to the following four people at each ABA law school: the dean, the academic dean, the head of the faculty hiring committee, and the most recently tenured faculty member. This questionnaire listed 174 ABA law schools and asked respondents to assign each one to a quartile of overall school quality (top 25%, next 25%, and so on). Respondents could omit schools they did not feel qualified to rate. Consequently, some schools were evaluated by more respondents than were other schools. The quartile assignments for a school were averaged across all the respondents who evaluated it. For example, if 100 of the 200 respondents who evaluated a given school put it in the first quartile and the remaining 100 put it in the second quartile, then the school would have an average rating that was exactly half-way between the first and second quartiles. Finally, US News ranked the average ratings from 1 (the best) to 174 (the worst). Schools with the same average rating were given the same rank.
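To make the arithmetic concrete, the short Python sketch below reproduces the averaging-and-ranking procedure as we understand it; the schools and quartile assignments are invented for illustration.

    from statistics import mean

    # Quartile assigned by each respondent who chose to rate the school
    # (1 = top 25%).  Respondents may omit schools, so the lists differ
    # in length.  All values here are hypothetical.
    ratings = {
        "School A": [1, 1, 1, 1],   # every rater puts it in the top quartile
        "School B": [1, 1, 1, 2],   # a single rater puts it in the second
        "School C": [2, 1, 2, 3, 2],
    }

    # Average the quartile assignments for each school.
    averages = {school: mean(qs) for school, qs in ratings.items()}

    # Rank from 1 (best); schools with equal averages share a rank, and
    # the next distinct average receives a rank that skips the tied slots.
    ranks = {school: 1 + sum(1 for a in averages.values() if a < avg)
             for school, avg in averages.items()}

    print(averages)  # {'School A': 1.0, 'School B': 1.25, 'School C': 2.0}
    print(ranks)     # {'School A': 1, 'School B': 2, 'School C': 3}

Note that a single rater's choice separates School A from School B in the final ranking, even though the two schools are otherwise rated identically. This sensitivity is the subject of the next paragraph.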

The major problem with this method of evaluating academic reputation is that it produces artificially large differences among schools and even creates differences where none truly exist. For example, US News reported that five schools (Yale, Harvard, Chicago, Columbia, and Michigan) tied for first place. It said the next four schools (Stanford, Berkeley, NYU, and Virginia) tied for sixth. This result could be obtained if all the respondents who evaluated the first five schools put them in the first quartile and all the respondents but one who evaluated the other four schools also put them in the first quartile. In other words, just one of the more than 400 persons who returned questionnaires could change a school’s rank on the "academic reputation" factor from first to sixth, which in turn could change its overall rank (such as knocking it out of the top five).

A respondent who did not place Stanford, Berkeley, NYU, and Virginia among the top 44 schools in the country must have been careless (e.g., marked the wrong choice by accident), misinformed, or dishonest. The first two explanations suggest the ratings are affected by random error. The third explanation suggests that some respondents made their school look better under the US News system by rating down the competition. In other words, by strategically placing some schools in a lower or much lower quartile than they deserve, a handful of respondents made their school look better than it should.

US News changed its methodology for the 1997 rankings. Instead of being asked to place schools into quartiles, respondents are now told to rate each school on a scale of 1 to 5 (with 5 being "distinguished"). This is an improvement over the quartile system, but it does not eliminate the problems discussed above. Respondents still have to make a judgment about each school in relation to all other schools. Respondents will undoubtedly have varying thresholds for determining if a school is a 1, 2, 3, 4, or 5. Some respondents will assign a given rating to many schools while others will assign that rating to just a few schools. The ratings will continue to be influenced by the kinds of schools with which a respondent has had the most contact. Furthermore, as in the quartile system, a single respondent seeking to improve his or her own school's standing could assign a 4 rather than a 5 to a school that is clearly among the very best in the nation, thereby lowering that school's ranking. There also is nothing in the US News system to prevent respondents from giving their own school a higher rating than it deserves.

This type of "strategic rating" occurs in the US News rankings of professional schools in other fields. Hence, there is no reason to believe the law school rankings are immune to this problem. In short, the fact that some schools are not placed in the top quartile by everyone is symptomatic of a much deeper and more serious problem. Moreover, by using ranks, US News inflates small differences in quality among very similar schools or even creates differences where none actually exist.

Reputation ratings are highly impressionistic and there is no way to know what factors different respondents consider in their ratings. It is unlikely, however, that most law school faculty have sufficient knowledge of most schools to provide accurate assessments of their quality. Serving on several review and accreditation committees may provide the best opportunity to become familiar with the broad range of schools. However, one law school faculty member who did this advised us that "even so extensive an immersion into evaluative materials covering a broad range of law schools does not equip one to make the fine judgments about all the law schools on which US News asks us to vote."

Respondents are unlikely to be representative of the nation's law school faculty. Except for the most recently tenured faculty member, respondents are likely to be older than the faculty as a whole, and their impressions may differ in systematic ways from other faculty members. Biases also may be introduced by who chooses to respond. Only 70% of those who were sent surveys completed them. US News did not provide any data on how the non-respondents differed from those who did respond. US News also would not give us the data needed to examine the degree of agreement among the respondents. However, our statistical analyses suggested that with the important exception of the "strategic ratings" problem noted above, rater reliability is probably not a major concern.1

Reputation Among Lawyers and Judges. US News also sent a survey to a sample of "1,310 practicing lawyers, hiring partners, and senior judges." No information is given about how this sample was chosen or how many people were surveyed in each of the three job categories, but we do know that only 33% of those surveyed returned their questionnaires. The survey asked respondents "to rate each school by quartiles based on their appraisals of the work of recent graduates of that school." As with the academic reputation survey, the quartile method was eliminated in 1997 and replaced by a 1-5 rating system.

Law school ranks on this index were computed in the same way as they were for the ratings by academics. Consequently, the problems with the "Lawyers and Judges Reputation" factor parallel those with the academic ratings factor. Specifically, the use of ranks exaggerates small differences in quality among schools, ranks are overly sensitive to careless error, a handful of respondents can rate strategically so as to give a school a higher (or lower) rating than it deserves, and the persons responding may not be representative of the population of lawyers and judges. This latter problem is an especially serious concern when only one-third of those surveyed actually respond and there is no information about how the respondents differ from the non-respondents.

To evaluate a school, respondents have to make a judgment about that school in relation to all of the other schools in the country. This is so even if they have no first-hand experience with graduates of the vast majority of the nation’s law schools or do not know the schools from which other attorneys graduated. The trustworthiness and validity of the respondents’ judgments are therefore seriously open to question. Hence, the ratings appear to be more like a popularity contest (e.g., have they ever heard of the school?) than a meaningful assessment of school quality.

Student Selectivity. This variable is a combination of median LSAT score, median undergraduate grade point average (UGPA), and rejection rate. Each of these factors is discussed in turn below.

Median LSAT score. US News converted each school’s median LSAT score for beginning students to a percentile in the overall distribution of LSAT scores. Next, it put this index on a 0 to 1 scale by dividing the school’s percentile by the percentile of the school with the highest median LSAT. The conversion to percentiles does not change the relative standings of the schools on LSAT, but it does spread them out much more. In other words, it inflates very small differences in median LSAT scores among schools. The consequences of this increase in score spread are discussed at the end of this section. Because law schools differ in both admission and retention policies (e.g., some graduate a much larger proportion of their entering class than do others), there is some concern about relying solely on the median LSAT score of the entering class.
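The mechanics of this normalization are simple; the sketch below uses invented percentile values to show the division step.

    # Hypothetical median-LSAT percentiles for three schools.  US News
    # derives the percentile from the national distribution of LSAT
    # scores; these values are invented for illustration.
    percentiles = {"School A": 99.0, "School B": 97.0, "School C": 75.0}

    top = max(percentiles.values())
    lsat_index = {school: p / top for school, p in percentiles.items()}

    print(lsat_index)
    # {'School A': 1.0, 'School B': 0.9797..., 'School C': 0.7575...}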

UGPA. US News divides the median UGPA for beginning students at a school by the highest median UGPA that was reported by any school. One problem with UGPAs is the large differences in grading standards across undergraduate institutions; i.e., a given UGPA from one college may indicate a very different level of proficiency than that same UGPA from another institution. To compound this problem, the students who attend a given law school are unlikely to come from a representative sample of all undergraduate schools. Consequently, although the students entering one school may have the same median UGPA as those entering another school, they may differ greatly in the average grading standards of their respective undergraduate colleges. This situation may explain why UGPA is usually a moderately good predictor of first-year GPA within a law school, but a very poor predictor of success on the bar exam. Thus, a good argument can be made for excluding UGPA entirely.

Rejection Rate. US News divides the number of applicants who were not offered admission by the total number who applied. It then converts the resulting proportion to a 0 to 1 scale by dividing each school’s proportion by the proportion at the school that rejects the largest percentage of its applicants.

On average, schools that receive many applications relative to the number of seats available are probably more desirable than other schools. However, there are many factors besides desirability that affect rejection rate. These include where the school is located and its recruitment policies. For example, a school can inflate its rejection rate by encouraging rather than discouraging applications from students who have no real chance of being accepted. It also is not clear what additional insight about school quality is provided by rejection rate that is not already accounted for by LSAT and UGPA.

Overall Selectivity Rank. US News says it assigns the following weights to the factors above to create an overall selectivity rank for each school: 50% for LSAT, 40% for UGPA, and 10% for rejection rate. However, when values on different factors are added together to obtain an overall total, the weight each one actually carries is driven by the relative sizes of their "standard deviations" (i.e., how much the values on an index spread out around its mean value). The larger the spread, the greater the standard deviation, and the greater the weight.

Before they were weighted, the standard deviations of the US News index values for LSAT, UGPA, and rejection rate were .18, .05, and .16, respectively. Consequently, when the US News weights were applied, about 70% of the overall selectivity ranking came from LSAT alone while UGPA and rejection rate essentially split the remaining 30%. In short, because US News did not control for differences in standard deviations among the components before it weighted them, they did not carry their intended weights.
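One common way to approximate the weight each component actually carries is to note that, ignoring correlations among the components, a component's influence on the weighted sum is roughly proportional to the product of its intended weight and its standard deviation. The sketch below applies this rule to the standard deviations reported above and reproduces the roughly 70/30 split.

    # Intended selectivity weights and the (unweighted) standard
    # deviations of the US News index values reported in the text.
    # Ignoring correlations among the components, each one's influence
    # on the weighted sum is roughly weight x standard deviation.
    weights = {"LSAT": 0.50, "UGPA": 0.40, "rejection rate": 0.10}
    sds     = {"LSAT": 0.18, "UGPA": 0.05, "rejection rate": 0.16}

    influence = {k: weights[k] * sds[k] for k in weights}
    total = sum(influence.values())
    actual_pct = {k: round(100 * v / total) for k, v in influence.items()}

    print(actual_pct)  # {'LSAT': 71, 'UGPA': 16, 'rejection rate': 13}

Dividing each component by its standard deviation before applying the weights (i.e., standardizing) would have made the actual and intended weights coincide.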

Whether a discrepancy between actual and intended weights is a serious concern depends on one’s notion of how much weight each factor should carry and how much effect the weights had on a school’s overall rank. Whether weights affect results depends on the correlation among the factors. For example, the weights would not have any influence if the rank ordering of schools on LSAT corresponded perfectly with their rank ordering on UGPA and rejection rate. Our analyses of the US News data revealed that, when the 50/40/10 weights were actually applied, the overall selectivity rank at several schools differed by several places from the way US News ranked them. These differences were enough to move some schools between "tiers" in the US News system.

Faculty Resources. This variable is a combination of direct expenditures per student, financial aid and indirect expenditures per student, library resources, and student-to-faculty ratio. Each of these factors is discussed in turn below. US News converts the values on each factor to a 0 to 1 scale by dividing a school’s index value by the one obtained at the school with the highest value.

Direct Expenditures Per Student. US News obtains the amount a school spends on instructional salaries, summer salaries, administrative and student services salaries, library salaries, other salaries not included elsewhere, fringe benefits, library operations, and law school expenses (excluding library). It then divides the sum of these expenditures by the school’s enrollment to produce a per-student figure.

Some of the variables US News uses in computing this index, such as faculty salaries, may reflect on school quality. For example, higher salaries may attract better faculty in some labor markets. However, the US News index does not consider geographical differences in the ability to hire qualified faculty or the trade-offs young faculty may make between institutional prestige and salary. Other items, such as library salaries, are even more questionable.

Financial Aid and Indirect Expenditures Per Student. US News obtains the amount a school spends per student on financial aid, other direct expenditures, and total indirect expenditures and overhead. It then divides this amount by the school’s enrollment. We agree that schools that spend more on financial aid should receive recognition for their generosity, but it is not clear how this factor relates to the quality of instruction provided or even to overall school quality. Furthermore, a school could inflate its tuition and simultaneously increase aid to students, thereby improving its standing on the financial aid factor. Such a move would obviously have no effect on students or on the quality of their education. The rationale is even less clear for considering overhead and indirect expenditures. These factors are certainly not good indicators of the quality of the facilities or their ambiance.

Library. US News index values on this factor represent the total number of titles in the school’s library and the total volumes and volume equivalents that are held at the end of the fiscal year. This measure is likely to favor large schools, which tend to have larger libraries. While a law school certainly needs a basic and substantial library, it is not clear why one school should have a higher rank than another simply because it has a few more books filling its shelves.

Student-to-Faculty Ratio. US News converts the school’s student-to-faculty ratio to a 0 to 1 scale so that schools with fewer students per faculty member have higher index values. Unfortunately, this index is only a very rough proxy for student access to professors (or average class size in lower or upper division courses) because schools differ in their policies and practices regarding faculty members receiving released time for research and other activities.

Overall Resources Rank. US News said it assigned the following weights to the factors above to create an overall resources rank for each school: 65% for direct expenditures, 10% for financial aid and indirect expenditures, 5% for library volumes, and 20% for student-to-faculty ratio. However, as with selectivity, US News did not ensure that these variables had equal standard deviations before it weighted them. Consequently, the published resource rankings do not reflect the intended weights. We do not know by how much the actual weights differed from the intended ones, or the consequences of the disparity for the overall ranks of school quality, because US News did not release the data on the individual components.

If we make the heroic assumption that the actual weights generally correspond to the intended ones, then total expenditures (direct plus indirect) account for the lion’s share (75%) of the overall resource rankings. In other words, the more a school spends per student, the higher its ranking. Thus, if different schools produce students of equal competence (as indicated by bar passage rate, employment, etc.), then under the US News system, the school that spends the most to do this gets the highest rank. This does not make sense. Expenditures per student also are insensitive to geographical differences in the costs of purchasing certain services and products, and to any economies or diseconomies of scale that are associated with school size.

Placement Success. This variable is a combination of the percentage of students who were employed by the time they graduated, the percent employed nine months later, and bar exam passage rate. Each of these factors is discussed in turn below. US News converts the values on each factor to a 0 to 1 scale by dividing a school’s index value by the one obtained at the school with the highest value.

Placement At Graduation. Law schools reported the percentage of their students who had secured employment at the time of graduation. This figure included graduates who were studying for another degree as well as those who got jobs outside of the legal profession. The failure to distinguish between legal and non-legal jobs raises serious questions about the validity of this index. The placement rate also included graduates who kept jobs that they had before beginning school. By including jobs that were not acquired as a result of a student's law degree, this measure may artificially inflate values for certain schools, especially those where large numbers of students work to pay their tuition. In addition, only 70% of law schools provided this placement figure. This led US News to "impute" (estimate) the employment rates for the missing schools by applying a discount rate to their nine-month placement rates. It is not clear what effect this imputation had on the rankings.
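US News has not published the discount rate it applied, but the imputation presumably amounts to something like the following sketch (the discount factor shown is our invention):

    # Hypothetical sketch of the imputation: a school that did not
    # report an at-graduation placement rate is assigned a discounted
    # version of its nine-month rate.  The 0.85 factor is invented;
    # US News has not disclosed the factor it used.
    nine_month_rate = 0.90
    discount = 0.85
    imputed_at_graduation = discount * nine_month_rate
    print(round(imputed_at_graduation, 3))  # 0.765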

Nine-Month Placement. This variable is the percentage of graduates who had secured jobs or graduate study as of nine months after graduation. Again, no effort was made to distinguish between jobs that were and were not in the legal profession or between those that were newly acquired and those that students had before beginning their legal studies.

Bar Passage Rate. US News created this index by dividing the bar exam passing rate of a school’s graduates in the state where the greatest number of its students took the exam by the overall passing rate in that state. It appears US News used this procedure in an attempt to adjust for the large differences in pass/fail standards among states. However, the relationship between a school’s and a state’s pass rate is complex, and it varies across states. For example, if most of those taking the bar exam in a state come from one school, then, mathematically, that school’s passing rate will be nearly identical to the state’s rate. Consequently, schools in states with few, if any, other law schools cannot receive an especially high or low value on this index. They are automatically stuck in the middle.
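A small numerical example (all rates invented) shows why a school that supplies most of a state's examinees is pinned near the middle of this index:

    # Bar-passage index: school pass rate divided by the state's overall
    # pass rate.  All rates below are invented for illustration.
    def bar_index(school_rate, state_rate):
        return school_rate / state_rate

    # Suppose one school supplies 95% of a state's examinees.  The state
    # rate is then almost entirely the school's own rate, so the index
    # can barely move away from 1.0 no matter how well the school does.
    school_rate = 0.80
    others_rate = 0.60
    state_rate = 0.95 * school_rate + 0.05 * others_rate

    print(round(bar_index(school_rate, state_rate), 3))  # 1.013

Even with a pass rate 20 points above the rest of the examinee pool, the school's index barely exceeds 1.0.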

In addition, graduates of a given school who take the bar exam in one state may not be representative of all the students from that school. For example, suppose the majority of students in a state’s main law school remain in that state upon graduation, but the top 25% go to states where they can compete in more lucrative labor markets. Under these conditions, the school’s passing rate will underestimate the success of its students.

US News uses the state’s overall passing rate rather than the rate for graduates of ABA-accredited schools. In California, for example, these rates are 73% and 83% respectively. The ABA rate would be more appropriate.

Finally, US News relied on schools to provide the state bar passage rates. It did not verify the accuracy of these rates. As a result, disparate rates appear in the published version of the rankings. For example, the passing rate in California was 60% for the calculation of this index for the University of Southern California and the University of San Diego, but 73% for other California schools. US News acknowledged the error and will undoubtedly verify the state rates in future surveys. However, the fact that these inconsistent statistics appeared in the published rankings (and still appear in the rankings on the US News web site) suggests that the US News data collection and reporting practices need much more stringent quality control checks.

Overall Placement Rank. US News said it assigned the following weights to the factors above to create an overall placement rank for each school: 30% for placement at graduation, 60% for placement nine months later, and 10% for bar passage rate. But again, it is unlikely these percentages reflect how much weight each factor actually carried in determining a school’s overall placement rank, because US News did not ensure that the three component indexes had equal standard deviations before the weights were applied.

OVERALL RANKING

Perhaps the most controversial aspect of the US News evaluations is that the ranks on different factors are combined to generate overall ranks. US News does this by computing a weighted sum of the ranks on the five factors (which is about the same number of factors Consumer Reports uses to evaluate hair dryers, fast food hamburgers, and luggage). The weights US News assigns to its five factors are as follows: 25% reputation among academics, 15% reputation among lawyers and judges, 25% student selectivity, 15% faculty resources, and 20% placement success.
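In outline, and with invented index values for a single school, the final combination step is a straightforward weighted sum:

    # The published overall weights, applied to hypothetical 0-to-1
    # factor values for one school.  Schools are then ranked on this
    # total.  The factor values below are invented for illustration.
    OVERALL_WEIGHTS = {
        "academic reputation":     0.25,
        "lawyer/judge reputation": 0.15,
        "student selectivity":     0.25,
        "faculty resources":       0.15,
        "placement success":       0.20,
    }

    school = {
        "academic reputation":     0.92,
        "lawyer/judge reputation": 0.88,
        "student selectivity":     0.95,
        "faculty resources":       0.70,
        "placement success":       0.85,
    }

    overall = sum(w * school[f] for f, w in OVERALL_WEIGHTS.items())
    print(round(overall, 4))  # 0.8745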

To our knowledge, US News has not provided any justification for the weights it assigns to the five major factors or their components. We also are not aware of any study that was done to determine what weights would be appropriate. More importantly, there are likely to be very large differences in the weights assigned to these (and other) factors by various groups of readers, such as students, faculty members, and deans. For example, in the US News system, bar exam passage rate accounts for 10% of the weight assigned to the placement factor which in turn accounts for 20% of the weight in the overall rankings. Consequently, a school’s bar exam passage rate drives only 2% of the weight in its overall ranking. We suspect this is substantially less influence than what students would find appropriate. Similarly, faculty members might be more concerned about resources, expenditures, and some of the many other factors that US News did not even consider, such as working conditions. In short, different audiences have different priorities, and at least in theory, a single ranking system cannot do justice to all of them.

Whether differences in priorities are truly important depends on how well the factors are correlated with each other. If one or two factors can serve as good proxies for the others (i.e., if the factors are highly correlated with each other), then weighting is not a major issue, and that is exactly what we found with the data that were available to us. Specifically, even by itself, the student selectivity factor explained about 90% of the differences in overall ranks among schools (i.e., percent of total variance). Since LSAT is the major driver of student selectivity (and is highly correlated with UGPA), ranking schools on LSAT alone will do a very good job of replicating the overall ranks US News publishes. The combination of student selectivity with academic reputation explains virtually all of the variance in the overall ranks. The same is true for the combination of student selectivity with lawyer and judge reputation. In short, most of what US News does to produce its overall ranks is unnecessary. It could generate them by a much shorter and cheaper (albeit seemingly less scientific and credible) route.
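The variance-explained figures come from regressing the overall ranks on one or more factors. The sketch below shows the computation on synthetic data constructed so that one factor explains about 90% of the variance and two factors together explain essentially all of it, mirroring the pattern we found in the actual US News data.

    import numpy as np

    # Synthetic stand-ins for 174 schools, constructed so that
    # "selectivity" alone explains roughly 90% of the variance in the
    # overall score and selectivity plus "reputation" explains all of it.
    rng = np.random.default_rng(0)
    noise = rng.normal(size=174)
    selectivity = rng.normal(size=174)
    reputation = 0.7 * selectivity + 0.7 * noise  # correlated with selectivity
    overall = selectivity + 0.33 * noise          # driven mostly by selectivity

    def r_squared(y, predictors):
        # R^2 from an ordinary least-squares fit with an intercept.
        X = np.column_stack([np.ones(len(y))] + list(predictors))
        beta, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ beta
        return 1 - resid.var() / y.var()

    print(r_squared(overall, [selectivity]))              # about 0.90
    print(r_squared(overall, [selectivity, reputation]))  # essentially 1.0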

Although the overall US News rankings are fairly impervious to the weights assigned to different factors and even to the elimination of several of them, how these factors are combined can still change a school’s rank enough to make a difference in its perceived quality. For example, an eleventh ranked school could easily move into the top ten and a school at the top or bottom of one "tier" in the US News system could move to another tier (as in fact occurred when we applied the weights US News intended for the student selectivity factor). However, given the many concerns discussed above regarding all of the US News factors, it is not clear whether including more of them and/or changing their weights would result in a more valid index of a school’s overall quality. It could just as easily result in the overall ranks being even more corrupted by biases and errors in the evaluation system.

CONCLUSIONS

There are many serious problems with the US News system for evaluating law schools. These problems include concerns about: (1) important aspects of law school quality that are not assessed by US News; (2) the accuracy of the data US News used to create the index values (such as obvious errors in the computation of bar passage rate and failure to control for regional cost of living differences); (3) the effects of chance, multiple interpretations, and systematic biases on survey responses (such as whether respondents are representative of those sent surveys and whether strategic ratings led to some schools receiving a higher or lower rank than they deserved); (4) the methods US News used to handle missing data; and (5) the use of variables that could lead to inappropriate school practices (such as schools raising their "rejection rate" index by encouraging applications from students who have virtually no chance of being admitted).

There also are problems with how the 12 factors are weighted, because they do not really carry the weights US News says they carry. Moreover, no rationale is provided for these weights. However, weighting only matters to the few schools that are near an important cut point, such as being in the top 10, 25, or 50. This is so because about 90% of the overall differences in ranks among schools can be explained solely by the median LSAT score of their entering classes, and essentially all of the differences can be explained by the combination of LSAT and academic reputation ratings. Consequently, the other 10 factors US News measures (such as placement of graduates) have virtually no effect on the overall ranks, and, because of measurement problems, what little influence they do have may reduce rather than increase the validity of the results.


Footnote

1 The extremely high correlation between the Academic Reputation ranks and the Lawyers and Judges Reputation ranks indicates that there must have been adequate agreement among most raters.