What's Wrong With MBA Ranking Surveys?
by Martin Schatz, Ph.D.
(reprinted by permission of Management Research News 16(7), 1993, pp. 15-18)
The ranking of business schools has been a controversial subject for a number of years. It is only recently, however, that such rankings have become popular and generally accepted. The Carter Report, the Ladd & Lipset Survey, and the survey by the now-defunct MBA Magazine all appeared in 1977. Carter ranked the schools by the frequency with which their faculty published in academic journals; Ladd & Lipset questioned business school faculty about which schools they thought were best; and MBA Magazine had the deans of the business schools vote on the best programs. Although those who were knowledgeable about business schools at the time were skeptical of the procedures, the results were not generally available to the public and therefore made little difference to the schools. In recent years, however, the ranking of all colleges has become popular with the press and has been highly publicized. In fact, one of the principal reasons for the rankings has been the ability of the articles to boost the circulation of the magazines.
Basically, there are two problems associated with the popular rankings of MBA programs. The two seem self-contradictory, but they nevertheless coexist. The first problem is that people foolishly tend to believe that there is significance to the order in which the schools appear. The second problem is that the rankings have a tendency to become self-fulfilling prophecies. Taken to their logical conclusion, these two problems will create a list of elite schools through the redirection of qualified students and faculty to those institutions, followed closely by recruiters from the corporate world. It won't matter what, or how well, the schools teach.
Let us take a look at the two most popular rankings: Business Week and U.S. News & World Report. Business Week reportedly bases its rankings on two factors: a survey of recent graduates of the schools being evaluated, and a survey of corporate executives. It also reports some objective data about each school, such as GMAT (Graduate Management Admission Test) scores, salaries of graduates, percentage of applicants accepted, and size. The magazine does not, however, reveal what role these data play in the ranking. Its claim to credibility is the statistical analysis of this questionable data. But to borrow the computer term GIGO (garbage in, garbage out), a good statistical analysis of the wrong data still yields bad results. The methodology used by Business Week would not earn a passing grade at any of the schools the survey ranks, nor even at those deemed not "good" enough to be included in the analysis. Of the some 700 colleges and universities in this country that offer the MBA degree, Business Week pre-selected forty-four schools for its 1994 ranking. But how did it determine which forty-four schools to include? Anything done with the data after that first decision is irrelevant if there is no valid way of selecting the initial set of schools. And if there is some validity in the pre-selection process, perhaps the magazine could rank all 700 schools, instead of just the top twenty, or the forty-four it started with.
The second flaw in Business Week's methodology rests on the premise that business executives know anything at all about the quality of business schools. In fact, it is probably safe to say that most of what the executives do know comes from reading earlier issues of Business Week. It is also likely that, at best, the executives evaluate the education received by the graduates of these schools on the basis of one or two individual graduates whom they happen to know, rather than on any extensive research. That raises the question of whether a school's reputation for quality should be based on the quality of an individual graduate of that school. Such a system would lead to a "king of the hill" challenge process, not unlike the David and Goliath battle in which each side chose one warrior to determine which army would be victorious. In modern-day competition, perhaps we could just have an MBA version of the popular quiz programs Jeopardy or College Bowl. The rankings would then be determined by the order in which the schools' representatives finished in the competition.
Judging the quality of a school by the apparent popularity of its graduates raises even more questions. Isn't it likely that a very large school that is not particularly distinguished will turn out more visible, successful graduates than a very small school that has higher standards and expectations for its students but far fewer graduates? Also, how many of the executive respondents knew graduates from all of the schools included in the survey? If the unlikely answer to this latter question is "a substantial number," then another question becomes relevant: why not base the ranking of schools on the percentage of graduates from each school that meets some level of acceptable performance? Facetiously, we could devise a scoring system that awards points based on the percentage of graduates who are subsequently rated to be outstanding or good employees, and even subtracts points for those who turn out to be duds. Then, of course, we would really have to penalize those schools that mass-produce graduates who are unemployable or, at best, under-employed.
To carry the absurdity a little further: if we are really basing the quality of the school on how good its graduates are, let's get to the heart of the matter and talk about who is admitted to the schools. It has long been believed by a number of people that the top business schools ensure the success of their graduates by admitting only those students who are guaranteed to be successful no matter where they go to school. For that matter, they would be successful whether or not they receive an MBA at all. Actually, both Business Week and U.S. News & World Report border on recognizing this by using student selectivity as a screening criterion. It is interesting that they seem to place more significance on the number of students who are rejected than on the quality of the students who are accepted. In both cases, what they are really measuring is not only the difficulty of the school's admission standards, but also the school's ability to attract a large number of applications from unqualified applicants.
U.S. News & World Report added to its survey procedure a ranking of the schools by the people who should know -- the deans of the 270 MBA programs accredited by the American Assembly of Collegiate Schools of Business. There is one problem with that logic: these people don't know much more about each other's schools than does the man on the street. First, given the very high turnover rate among business school deans (approximately ten percent each year), a good many deans are new to the job and don't even know very much about their own school. Second, except for a few visits a given dean may make to other schools for the purpose of evaluating them for continued accreditation, even long-standing deans don't really know much about more than a handful of schools. And finally, there are no criteria. Even if some deans can say, "I know that School A has a more productive faculty than School B when it comes to academic publication," those same deans may feel that School B has a better student body than School A. The U.S. News & World Report survey did not allow for such fine tuning.
What all the rankings want to do is understandable; it may just not be possible. They want to measure a number of factors that they consider important, statistically weight those factors, and emerge with a single number that depicts the overall quality of the school. U.S. News & World Report includes undergraduate grade point average, average GMAT score, the school's acceptance rate, and enrollment yield as components of its Student Selectivity criterion. To determine Placement Success, it measures the percentage of students employed after graduation, the ratio of graduates to employers recruiting on campus, and the average starting salary. Each of these factors can be objectively measured, and each of them might well tell something interesting about the school, but none of them reveals the quality of research and teaching that goes on in the school, or the quality and use of its facilities and equipment. Most of all, they do not measure the culture and values of the school, which are of critical importance.
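To make the arithmetic concrete, here is a minimal sketch of the weight-and-combine approach described above. The factor names echo those mentioned in the text, but the sample figures, weights, and min-max normalization are illustrative assumptions; neither magazine publishes its actual formula.

    # Illustrative weight-and-combine scoring, in the spirit of the
    # magazine rankings described above. All school names, figures, and
    # weights are hypothetical; the normalization scheme is an assumption.

    def normalize(values):
        """Rescale raw factor values to 0..1 so different units are comparable."""
        lo, hi = min(values.values()), max(values.values())
        return {school: (v - lo) / (hi - lo) for school, v in values.items()}

    def composite_scores(factors, weights):
        """Combine normalized factors into a single number per school."""
        normed = {name: normalize(vals) for name, vals in factors.items()}
        schools = next(iter(factors.values()))
        return {s: sum(weights[f] * normed[f][s] for f in factors) for s in schools}

    # Hypothetical data for three schools (not taken from the article).
    factors = {
        "gmat":            {"A": 650, "B": 600, "C": 580},
        "salary":          {"A": 70000, "B": 62000, "C": 55000},
        "acceptance_rate": {"A": 0.25, "B": 0.40, "C": 0.60},
    }
    # Negative weight: a higher acceptance rate lowers the composite score.
    weights = {"gmat": 0.4, "salary": 0.4, "acceptance_rate": -0.2}

    for school, score in sorted(composite_scores(factors, weights).items(),
                                key=lambda kv: kv[1], reverse=True):
        print(f"{school}: {score:.2f}")

Whatever weights are chosen, the output is a single scalar per school -- precisely the one-dimensionality this paper objects to.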
A New Methodology
It is easy to criticize what the rankings do wrong, and presumably more difficult to offer corrective suggestions. The remainder of this paper deals not only with such suggestions, but with a demonstration that the proposed improvements are relevant and can work.
Data were gathered about all of the accredited MBA programs in three ways. Primary data were obtained by sending each school a questionnaire requesting detailed information. These were supplemented with secondary information from the articles that appeared in Business Week and U.S. News & World Report, and from the standard guidebooks, including Barron's, Peterson's, and the GMAC guide. College catalogs from some of the schools were also used. More than two-thirds of the schools returned the questionnaires, but since some questions were not always answered, not all of the desired comparisons could be made. The results conclusively indicate, however, that the Business Week and U.S. News & World Report results could easily be duplicated by combining just two of the eighteen attributes on which the schools were compared. This strongly suggests that the published rankings are at best superficial and probably based on the "halo effect" of these two attributes.
The questionnaire used for this research consisted of 42 questions, several of which had multiple parts. Ninety-six specific pieces of data were requested. Since the computer program developed for this analysis could weight each of 18 school attributes on a scale of one to ten and combine any two or more of them, the schools could be ranked by literally millions of combinations of attributes. Because schools were excluded from a specific ranking if any of the relevant data were missing, however, the combinations of attributes were kept rather basic. Various combinations were examined, but they proved unnecessary for duplicating the results of the two published rankings. The simplest of combinations -- a simple average of the relative scores for GMAT and starting salary -- delivered a ranking that is essentially the same as the 1994 Business Week and 1993 U.S. News & World Report rankings combined. A study of Table 1 reveals that, using these two criteria alone, two of the top three schools are included in the top three of either of the published rankings. A continued analysis shows a similar relationship for 8 of the top 10 and 19 of the top 20. Only in the case of Georgetown, ranked 16th by the combined criteria of GMAT score and salary, is a top-twenty school not included in the top twenty of either Business Week or U.S. News & World Report. Even here, however, Georgetown was ranked number 22 by U.S. News & World Report and listed as a runner-up to the top twenty by Business Week. Thus, from among the 700 MBA programs offered, the schools included in the top twenty by all three measures are virtually identical.
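As a rough illustration, the following sketch reproduces the kind of two-attribute calculation just described. "Relative score" is assumed here to mean a school's value as a fraction of the best value in the set; the figures are hypothetical stand-ins, not the study's data.

    # A simple average of relative scores for GMAT and starting salary,
    # as described in the text. "Relative score" is assumed to mean the
    # value as a fraction of the best school's value. Hypothetical data.

    schools = {
        # school: (mean GMAT score, mean starting salary in dollars)
        "School X": (650, 75000),
        "School Y": (630, 80000),
        "School Z": (600, 60000),
    }

    best_gmat = max(gmat for gmat, _ in schools.values())
    best_salary = max(salary for _, salary in schools.values())

    combined = {
        name: (gmat / best_gmat + salary / best_salary) / 2
        for name, (gmat, salary) in schools.items()
    }

    ranking = sorted(combined.items(), key=lambda kv: kv[1], reverse=True)
    for rank, (name, score) in enumerate(ranking, start=1):
        print(f"{rank}. {name}: {score:.3f}")

That two such crude inputs reproduce the published rankings is the crux of the argument: the magazines' elaborate surveys add little beyond admissions statistics and paychecks.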
There are some who would claim that this study merely verifies the two magazine surveys, showing that there is relevance in the methods they used. Another conclusion, however, is that surveys are fine for predicting elections or gauging perceptions, but very superficial as a method of measuring facts. Because the press in general is interested only in reporting the difficulty of gaining admission and the salaries that graduates receive, it is the relative success in these areas that gives a school its reputation. As the comparisons made here show, that reputation is now being taken as a proxy for academic program quality. But the MBA program of a school cannot be treated like a "black-box" holding tank where students reside for two years while their credential portfolios appreciate. It is what happens during those two years that will make a person a better or worse manager and leader when presented with the appropriate opportunity. Therefore, schools need to be judged on what goes on within the black box, and not just on the quality of the inputs and outputs themselves.
The ninety-two pieces of data collected for each school provided information about the students, the facilities, the classes and curriculum, the faculty, and placement. Each of these is important to prospective students and employers, but not all of them are pertinent to everyone. That, essentially, is the point of this paper. It is fine to rank-order each attribute separately, because each conveys some important information. But combining the attributes into a single ranking should be done only on an individualized basis, depending on one's own needs and perceptions; it is very unlikely that two people would put the same weights on all 18 attributes. A sketch of such an individualized ranking follows.
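One way to read that conclusion is as a small program: let each reader supply their own weights over the attributes and compute a ranking for that reader alone. The sketch below uses hypothetical attribute names and data, and drops a school from a given ranking when it is missing a weighted attribute, as the study did.

    # An individualized ranking: each reader weights the attributes
    # themselves, so different readers get different "best" schools.
    # Attribute names and figures are hypothetical; schools missing a
    # weighted attribute are excluded from that ranking, as in the study.

    def personal_ranking(schools, weights):
        """Rank schools by a reader's own weights over relative attribute scores."""
        # Keep only schools reporting every attribute this reader cares about.
        usable = {s: attrs for s, attrs in schools.items()
                  if all(a in attrs for a in weights)}
        scores = {}
        for school, attrs in usable.items():
            total = 0.0
            for attr, weight in weights.items():
                hi = max(x[attr] for x in usable.values())
                lo = min(x[attr] for x in usable.values())
                relative = (attrs[attr] - lo) / (hi - lo) if hi > lo else 1.0
                total += weight * relative
            scores[school] = total
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    schools = {
        "School A": {"gmat": 640, "salary": 68000, "placement_rate": 0.92},
        "School B": {"gmat": 610, "salary": 71000},  # placement not reported
        "School C": {"gmat": 590, "salary": 60000, "placement_rate": 0.97},
    }

    # Two readers, two different weightings, two different rankings.
    print(personal_ranking(schools, {"gmat": 10, "placement_rate": 3}))
    print(personal_ranking(schools, {"placement_rate": 10, "salary": 5}))

The same data produce a different "best" school for each reader, which is precisely the individualized match this paper advocates.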
The results of this analysis show that there is indeed a high correlation between the schools ranked as the "best" MBA programs in the popular press and certain input-output measures of those programs. The author suggests that the high survey rankings of these schools are due to the "halo" effect of these statistics, and that the statistics themselves do not make them the "best" programs. The data do not imply that these schools are not indeed among the very best. It is just that the rankings resulting from the surveys are too one-dimensional to be taken seriously. Most likely, no single MBA program is best for everyone, and almost every program is best for someone. The match has to be individualized.
Attributes of Business Schools Examined in Study
Comparative Rankings of Business Schools
Rank   Business Week      U.S. News & World Report   GMAT + Salary
----   ----------------   ------------------------   -------------------
  1    Pennsylvania       Harvard                    Harvard
  2    Northwestern       Stanford                   Stanford
  3    Chicago            Pennsylvania               Columbia
  4    Stanford           Northwestern               Dartmouth
  5    Harvard            Michigan                   Pennsylvania
  6    Michigan           MIT                        Chicago
  7    Indiana            Duke                       Northwestern
  8    Columbia           Dartmouth                  MIT
  9    UCLA               Chicago                    Yale
 10    MIT                Columbia                   Berkeley
 11    Duke               Virginia                   UCLA
 12    Virginia           Cornell                    Cornell
 13    Dartmouth          Carnegie-Mellon            Duke
 14    Carnegie-Mellon    Berkeley                   Virginia
 15    Cornell            UCLA                       Carnegie-Mellon
 16    NYU                NYU                        Georgetown
 17    Texas              Yale                       Michigan
 18    North Carolina     Texas                      North Carolina
 19    Berkeley           North Carolina             NYU
 20    Purdue             Indiana                    Texas
 21                       Southern California        Emory
 22                       Georgetown                 Southern California
 23                       Purdue                     Southern Methodist
 24                       Rochester                  Rochester