Department Rankings: An Alternative Approach
Author(s): Arthur H. Miller, Charles Tien and Andrew A. Peebler
Source: PS: Political Science and Politics, Vol. 29, No. 4 (Dec., 1996), pp. 704-717
Published by: American Political Science Association
Stable URL: http://www.jstor.org/stable/420798
The Profession
Department Rankings: An Alternative Approach*
Arthur H. Miller, Charles Tien, and Andrew A. Peebler, University of Iowa
Department rankings are important.
The amount of space in the June
1996 PS devoted to presenting and
analyzing the 1995 National Re-
search Council (NRC) rankings
bears witness to this importance.
The NRC reputational rankings pro-
vide more than mere bragging rights
(see Magner 1995 for some of the
institutional implications). For exam-
ple, the NRC rankings, and those
provided by U.S. News and World
Report, are incorporated into the
strategic plans of universities which
are subsequently used by administra-
tors to distribute and redistribute
scarce resources. Students examine
these rankings when applying to
graduate programs, and better stu-
dents apply primarily to more highly
ranked departments, thereby perpet-
uating the rankings of the top pro-
grams. No doubt rankings also have
more subtle and indirect effects on
the resources and quality of gradu-
ate programs. It is not farfetched to
expect that department rankings
could influence peer review of re-
search proposals for funding, or
manuscripts submitted to journals
for review and publication.
Given the importance and impact
of rankings, anyone using them must
remember that all approaches to
ranking have some limitations. For
example, when evaluating the NRC
rankings people frequently forget
that rankings are based upon mail-
back survey responses provided by a
relatively small number of evaluators
for a given field. While the overall
sample in the NRC study is some
8,000 respondents, only 208 individu-
als provided the evaluations of polit-
ical science programs. This sample
of 208 respondents (produced by a
response rate of only 55%) has an
overall sampling error of roughly
±7.1%. Given this large sampling er-
ror, it is statistically impossible to
differentiate the rank ordering of
many schools because the mean
scores used to assign the ranks are
not significantly different from one
another.1 In short, reputational rank-
ings suggest more difference between
one school and another than is war-
ranted by the data.
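As a rough check on that figure, the half-width of a 95% confidence interval for a proportion estimated from a simple random sample of 208 respondents comes out near the value quoted above. The sketch below assumes the worst-case proportion of .5 and a simple random sample; any design effects or corrections the NRC actually applied would move the number slightly.

```python
import math

def ci_half_width(n, p=0.5, z=1.96):
    """Half-width of a 95% confidence interval for a proportion
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# 208 political science evaluators, as reported in the NRC study.
print(round(ci_half_width(208) * 100, 1))  # about 6.8 percentage points, close to the 7.1% cited
```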
As has been previously argued
(Klingemann 1986), reputational
rankings may not reflect the best
criteria for judging the academic and
scholarly quality of the various de-
partments rated. As the analysis
pieces in the June 1996 PS demon-
strate, while reputational rankings
have some relationship to the quality
of scholarly output, they are domi-
nated by the size of faculty, the
number of Ph.D.’s produced and the
reputation of the university (Jack-
man and Siverson 1996; Katz and
Eagles 1996; Lowry and Silver 1996).
Even the NRC report itself acknowl-
edges that “reputational measures
provide only one tool for reviewing
the relative standing of doctoral pro-
grams in a field” (NRC 1995, 23).
An Alternative Approach
Previous work has suggested that
more objective measures can be
used as alternatives to reputational
surveys. Two more objective mea-
sures that have been recommended are the
number of publications and citations
(Robey 1982 preferred the number
of articles published; Klingemann
1986 used citations). The NRC
should be commended for their 1995
report which presented additional
information that goes beyond the
reputational rankings, such as data
on the number of publications and
citations per department. Those in-
terested in a somewhat more objec-
tive ranking system can use the NRC
information to determine such a
ranking.
Nevertheless, every evaluation ap-
proach has some limitation. Welch
and Hibbing (1983) have persua-
sively argued that the sheer number
of publications is too crude an indi-
cator of a department’s productivity
or quality because it fails to consider
the quality of the publisher. As sev-
eral authors have previously sug-
gested, the number of articles pub-
lished should be weighted by the
prestige of the journal if the number
of publications is to be used as an
indicator of program quality (Ga-
rand 1990, Christenson & Sigelman
1985). Unfortunately, the NRC
count of publications does not
weight for journal quality. Moreover,
the number of publications per de-
partment was counted for only the
period 1988-92, a very limited pe-
riod of time (NRC 1995, 25 and
Appendix L, 312).
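As an illustration of the weighting these authors recommend, the sketch below applies prestige weights to a department's article counts. The journal names and weights are hypothetical placeholders, not the schemes actually used by Garand (1990) or Christenson and Sigelman (1985).

```python
# Hypothetical prestige weights -- illustrative only.
JOURNAL_WEIGHTS = {
    "American Political Science Review": 1.00,
    "American Journal of Political Science": 0.85,
    "Regional journal": 0.40,
}

def weighted_publication_count(pubs):
    """pubs: list of (journal, number_of_articles) pairs for one department."""
    return sum(JOURNAL_WEIGHTS.get(journal, 0.25) * n for journal, n in pubs)

dept = [("American Political Science Review", 12), ("Regional journal", 20)]
print(weighted_publication_count(dept))  # 12*1.00 + 20*0.40 = 20.0
```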
The NRC report, however, goes
beyond the sheer number of publica-
tions by reporting information on
the number of citations that those
publications received. Again, this is a
step in the right direction, for the
citation count indicates the extent to
which others in the profession see
the scholarly output of a program as
substantively important. Moreover,
the approach that the NRC used in
compiling the citation information
from the data provided by the Insti-
tute of Scientific Information (ISI)
appears quite sound. A detailed de-
scription of the NRC approach to
compiling the citation data is pro-
vided in Appendix G, page 143, of
the NRC report. Briefly, NRC used
the list of faculty members provided
by each university to locate, by last
name, Zip Code, and program sub-
stantive area (political science as
opposed to sociology, anthropology
or any other field), the articles pro-
duced by each department between
1988 and 1992 and the number of
citations these same publications
produced during the 1988-1992 pe-
riod (NRC 1995, 25 and 312).
On the surface, the NRC compila-
tion of citations appears above re-
proach. The combination of last
name, Zip Code, and substantive
field appears to solve the problem of
misattributing citations when more
than one individual has the same
last name and first initials. Yet the rather nar-
row field designation used by NRC
may result in undercounting citations
for individuals who publish in inter-
disciplinary areas. Moreover, faculty
lists could be incomplete, a problem
noted by others (Magner 1995, Fen-
ton 1995). Also, the fact that the
University of Houston is listed in the
NRC report (Appendix, Table P-36)
as having no citations should have
alerted someone at the NRC to the
possibility that a problem existed in
data reporting.2 That this obvious
error did not set off alarms raises
questions about how carefully the data were checked.
Perhaps even more important than
these shortcomings is the limited
timeframe used for the citation
counts. Normally, there is a lag time
in citations. It takes time for the
profession to read a publication and
then incorporate the research into
later work through citation or a
more direct response to the work.
The time period between 1988 and
1992 is a rather limited one and thus
may not reflect the enduring quality
of the research, but rather what is
most topical at the time.3
Despite these limitations, our pur-
pose is not to critique the NRC re-
port. Rather, we applaud the NRC’s
efforts to provide more objective
data for evaluating graduate pro-
grams. We follow in the footsteps of
Hans-Dieter Klingemann who, ten
years ago, also presented an alterna-
tive to the 1982 NRC ranking.
Klingemann’s (1986) alternative
ranking was based on citations, an
indicator not included in the 1982
NRC report. Our approach utilizes
both citations and the number of
articles published in the American
Political Science Review. By focusing
on APSR publications, we control for
the quality of the publication. More-
over, since we seek to chart change
in the profession rather than merely
rank departments, we take a broader
historical approach to the publica-
tions and citations by examining
these over the 40-year period from
1954-1994.
The Data
Each author published in the
APSR from 1954 through 1994 is
represented in our data set. For
each author, we collected data on
the number of APSR publications in
the last forty years and the number
of citations listed in the Social Sci-
ence Citations Index. Needless to say,
collecting so much data was not a
trouble-free task. For a detailed de-
scription of our data collection of
APSR publications and citations and
the pitfalls we encountered, see our
earlier article on the APSR Hall of
Fame (Miller, Tien, and Peebler
1996). For our first report, we col-
lected citation data on all authors
with two or more publications, and
28% of authors with one publication.
For this project, we collected cita-
tions data for the remaining authors
in the data set (the total number of
authors is 1,628).4
We also collected biographical
information on the authors. We
wanted to know what year the au-
thors completed their Ph.D.’s, the
schools that granted their degrees,
the authors’ institutional affiliation
when the APSR article was pub-
lished, and where the authors cur-
rently work if the publication oc-
curred between 1974 and 1994. For
authors publishing between 1954 and
1973, we researched where they
worked in 1973. We divided the data
analysis into two different time peri-
ods to look at how departments have
changed over the last twenty years.
The institutional affiliations of the
authors at the point of publication
were easily obtained from the
APSR-they are listed on the first
page of each article.
To collect the remaining biograph-
ical information, we searched seven
different sources. For authors pub-
lishing between 1974 and 1994, we
cross-checked three different sources
when gathering their biographical
data: the 1994-1996 APSA Directory
of Members; the 1995-97 APSA
Graduate Faculty and Programs in
Political Science; and the 1993-95
APSA Directory of Undergraduate
Political Science Faculty. For authors
publishing between 1954 and 1973,
we used the 1973 APSA Biographical
Directory, and the 1976 Guide to
Graduate Study in Political Science.5
The final source we used to track
down biographical data on the au-
thors was the Dissertation Abstracts
Ondisc (DAO), which told us where
and when authors received degrees,
but did not tell us where they
worked in 1973 or 1994.6 We were
able to collect biographical data on
74% of the authors (765 authors)
publishing in the 1974 to 1994 pe-
riod, and 65% (444 authors) from
the earlier period using this exten-
sive search process.7 Since we sought
to evaluate political science depart-
ments, we excluded authors from
other disciplines and those not work-
ing in political science departments.8
Some basic frequencies from these
data provide a fascinating portrait of
the discipline. Table 1 breaks down
the number of departments by raw
number of faculty members pub-
lished in APSR for both twenty year
periods. Between 1974 and 1994, a
total of 206 different departments
had faculty publishing in APSR, a
49% increase from the 138 depart-
ments publishing in the previous
twenty year period. Despite this in-
crease in the number of departments
publishing in APSR, the distributions
of departments by the number of
APSR published faculty for the two
twenty-year periods are similar.
Roughly 25% of the departments
with APSR published faculty in both
periods have five or more faculty
members with publications in APSR
(see Table 1). Approximately 45% of
departments with APSR published
faculty in both periods have only one
faculty member with a publication in
the field’s leading journal. Yet the
increase in the size of political sci-
ence departments in recent years has
apparently brought an accompanying
increase in the percentage of depart-
ments with 10 or more APSR au-
thors (this percentage was 4.9% and
9.9% of departments in the earlier
and more recent 20 years respective-
ly). Nevertheless, a majority of polit-
ical science departments around the
country still have only one or two
faculty members who have ever pub-
lished in APSR. Thus, as the number
of authors publishing in APSR rises,
it is clear that this increase in au-
thors does not occur only among the
schools that previously had a rela-
tively larger number of faculty pub-
lishing in APSR.
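The Table 1 distribution is a two-step tabulation of an author-to-department file. The sketch below illustrates the counting with made-up records rather than the actual University of Iowa APSR School Data Set.

```python
from collections import Counter

# One record per APSR author, giving that author's department (hypothetical records).
author_departments = ["Michigan", "Michigan", "Iowa", "Rochester", "Iowa", "Houston"]

# Step 1: number of APSR authors in each department.
authors_per_dept = Counter(author_departments)

# Step 2: how many departments have exactly k APSR authors (the Table 1 rows).
depts_per_author_count = Counter(authors_per_dept.values())
for k in sorted(depts_per_author_count):
    share = 100 * depts_per_author_count[k] / len(authors_per_dept)
    print(k, depts_per_author_count[k], f"({share:.1f})")
```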
Performance Evaluated
Through Publications
and Citations
We evaluated departments by the
performance of current faculty as
indicated by the number of publica-
tions in APSR. First, we ranked de-
partments by the raw number of fac-
ulty members in each department
with any publications in the APSR.
Departments with more than two
APSR authors in 1994 are listed in
the left half of Table 2, while the
right half lists departments with
more than one APSR author for
1973. The University of Michigan
leads the way with 23 faculty mem-
bers publishing in APSR between
1974 and 1994. However, the differ-
ences among the top departments
are relatively small-the fourth
through seventh ranked departments
are only six members short of tying
Michigan. What is even more note-
worthy than Michigan’s lead in both
twenty year time periods, is the dra-
matic change that occurred in
UCLA’s rank. UCLA ranked second
with 21 current faculty having pub-
lished in APSR during the more re-
cent twenty years, whereas only
three UCLA faculty had published
in APSR as of 1973, though faculty
size was roughly the same in both
periods (see Table 2). Michigan
State and the University of Mary-
land also experienced dramatic in-
creases in the number of faculty
publishing in APSR (they went from
2 faculty in 1973 to 16 and 13 re-
spectively in 1994, again with almost
no change in faculty size).
On the other hand, some depart-
ments dropped significantly. For ex-
ample, the number of Columbia
University faculty publishing in
APSR dropped from a rank of 10 in
1973 to 49 in 1994, despite a 26%
increase in the size of the faculty
(see Table 2). Due to the departure
of some very productive faculty
(such as Burnham and Hibbs), MIT
also experienced marked decline in
the number of authors publishing in
APSR, and thus is absent from the
set of schools in Table 2 for 1994.
Total number of publications at-
tributed to each department can also
be used in ranking departments (list-
ed under ‘APSR Articles’ in Table
2).9 Using this measure changes
rankings by shifting the order for
many of the top ten schools, but
there is very little movement in or
out of the top ten (see Table 3 for
the ranked list of the top 25 based
on the number of articles). Indiana
and UC San Diego drop from the
top 10 list as determined by the
number of authors, to be replaced
among the top 10 by Rochester and
North Carolina (see Table 3). Given
the relative stability in the top 10
when ranked by either the number
of APSR authors or the number of
articles, it may be that this ranking
reflects no more than the size of the
faculty. Yet looking down the list in
Table 2, it is evident that some
smaller departments produce more
articles than some larger depart-
ments.
To measure the relative productivity of APSR authors from different
departments, we also provide in Ta-
ble 2 a measure of productivity for
each department: the average num-
ber of articles produced by each
APSR author (the number of articles
divided by the number of authors).
Rochester, despite its relatively small faculty size, had the most productive authors in 1973 and remained so in
1994 (P = 3.86 in 1973 and 5.33 in
1994, see Table 2). Other highly pro-
TABLE 1
Number of APSR Authors Per Department by Number of Departments

APSR Published Faculty in Department    Departments, 1994    Departments, 1973
 1                                      93 (45.1)            66 (47.8)
 2                                      33 (16.0)            11 (8.0)
 3                                      14 (6.8)             20 (14.5)
 4                                      13 (6.3)             11 (8.0)
 5                                      11 (5.3)              3 (2.2)
 6                                       7 (3.4)              8 (5.8)
 7                                       6 (2.9)              9 (6.5)
 8                                       2 (1.0)              1 (0.7)
 9                                       7 (3.4)              2 (1.4)
10                                       3 (1.5)              0 (0.0)
11                                       1 (0.5)              2 (1.4)
12                                       3 (1.5)              1 (0.7)
13                                       2 (1.0)              1 (0.7)
14                                       2 (1.0)              1 (0.7)
15                                       1 (0.5)              0 (0.0)
16                                       1 (0.5)              0 (0.0)
17                                       4 (1.9)              0 (0.0)
18                                       0 (0.0)              0 (0.0)
19                                       0 (0.0)              1 (0.7)
20                                       1 (0.5)              1 (0.7)
21                                       1 (0.5)
22                                       0 (0.0)
23                                       1 (0.5)
Total                                   765 authors, 206 (100) depts.    444 authors, 138 (100) depts.
Source: The University of Iowa APSR School Data Set
Figures in parentheses are percentages.
ductive authors in 1994 were found
at Stanford, California Institute of
Technology, and UC Santa Barbara.
Many programs saw an increase in
the productivity level of their APSR
authors over the 20-year period, but
the University of Wisconsin at Madi-
son was a noticeable exception (they
fell from P = 3.15, second highest in
1973, to a relatively low P = 2.00 in
1994). Also, while the Wisconsin fac-
ulty size increased significantly be-
tween 1973 and 1994, their number
of publications dropped by 40% (see
Table 2).
To determine if the productivity
measure is an accurate portrayal of
the number of APSR articles pro-
duced by each author in the depart-
ment rather than just a few out-
standing individuals, we calculated a
Gini coefficient for each department
based on publications. The Gini co-
efficient indicates the extent to which
a distribution deviates from a per-
fectly uniform distribution which
would be obtained if all authors in a
department produced the same num-
ber of publications (see Lambert
1989 for a detailed explanation). As
explained by Jackman and Siverson
(1996), a Gini coefficient “reflect[s]
variations across programs in the
degree to which overall productivity
for individual programs stems from
the activity of a minority of faculty
members within them.” The Gini
coefficient as computed here is
bounded between zero and one,
where zero indicates that all authors
are contributing an equal number of
articles. The larger Gini coefficients
in Table 2 indicate that many of the
APSR publications from a depart-
ment are coming from a minority of
the APSR authors.10 For example,
the University of California at Santa
Barbara has one of the highest Gini
coefficients in the list, with a value
of .52, that resulted from one author
publishing 11 articles, another au-
thor has two articles, while the two
remaining authors contributed one
article each.
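A minimal sketch of the computation, assuming the usual mean-absolute-difference form of the Gini coefficient (the text points to Lambert 1989 for the formal treatment): applied to the Santa Barbara distribution just described (11, 2, 1, and 1 articles), it reproduces the reported .52.

```python
def gini(counts):
    """Gini coefficient via the mean absolute difference; 0 means all authors
    contribute the same number of articles."""
    n = len(counts)
    mean = sum(counts) / n
    total_abs_diff = sum(abs(x - y) for x in counts for y in counts)
    return total_abs_diff / (2 * n * n * mean)

# UC Santa Barbara APSR articles per author, as described above.
print(round(gini([11, 2, 1, 1]), 2))  # 0.52
```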
The Gini coefficients in Table 2
for most schools, including the top
schools (as determined by the num-
ber of APSR authors), are relatively
small. These low coefficients indicate
that most authors within a depart-
ment are contributing roughly com-
parable numbers of publications. Of
course, it must be remembered that
the distribution for the number of
published articles for each depart-
ment excludes those faculty who
have zero APSR publications,
thereby limiting the extent to which
the distributions can be skewed.11
Among the 25 schools with the larg-
est number of APSR articles (see
Table 3), two schools stand out as
having the lowest Gini coefficients in
Table 2 (Texas A & M and SUNY-
Stony Brook), while four depart-
ments have relatively high coeffi-
cients (University of Arizona,
Arizona State, California Institute
of Technology, and New York Uni-
versity) in the most recent period.
Departments receiving high coeffi-
cients may be somewhat less bal-
anced in strength as the high coeffi-
cients indicate that most of the
publications are coming from one or
two individuals.
Evaluating departments on the
basis of citations provides yet an-
other way of assessing program per-
formance. The extent to which a de-
partment is able to publish in the
field’s most prestigious journal pro-
vides some measure of the scholarly
quality of the work produced by the
faculty. Assessments based on cita-
tions, on the other hand, reflect the
acknowledgment of intellectual im-
portance through the use of the de-
partments’ research by others. A sig-
nificant number of citations over a
period of time demonstrates an es-
tablished track record for a depart-
ment thereby indicating that the
overall research of the department
has made an enduring contribution
to the discipline. Recent publications
are more likely to give emphasis to
novel ideas that may, or may not,
eventually find acceptance among
others in the discipline.
Rankings based on the number of
publications and the number of cita-
tions may be correlated, but these
two indicators may not necessarily
produce the same ranking of depart-
ments. The citation rankings of Ta-
ble 3 show a somewhat different list
of departments from the APSR arti-
cle rankings-four departments that
were absent from the publications
rankings appear in the top 25 cita-
tions rankings (UC Irvine, American
University, Cornell and Duke).
Many of these additions are due in
large part to single individuals who
have high citation counts. For exam-
ple, Cornell (with 16 articles) did
not make the top 25 based on the
number of articles, but they are
ranked 20th based on citations be-
cause Ted Lowi has over three thou-
sand citations himself (the next high-
est person at Cornell had 450
citations).
Given that the NRC report had
the University of Houston listed with
very few publications (only 8 total
publications for the 1988-1992 pe-
riod for the entire department), and
no citations, it is noteworthy where
Houston ranks in Table 3. Based on
the number of APSR articles, Hous-
ton ranks 17th, although they do fall
to 23rd when only citations are used
for the ranking. Nonetheless, it is
quite clear that Houston is among
the top 25 departments when the
number of publications and citations
are the relevant criteria for ranking
departments.
Comparing the Gini coefficients
for citations presented in Table 3
with those from Table 2 is also quite
revealing about most of these top
ranked departments. The relatively
large values for the coefficients in
Table 3, as compared with the lower
values in Table 2, demonstrate that,
while most departments have a num-
ber of authors contributing APSR
articles, they have only one or two
individuals who are getting cited fre-
quently. Thus, it appears that getting
published in APSR is easier than
making a significant impact on the
discipline. A department like Ohio
State, on the other hand, has a rela-
tively low Gini for both publications
and citations, thereby indicating con-
siderable uniformity across the fac-
ulty in both productivity and peer
recognition.
In our previous paper on the
Table 2. Departments Ranked by Number of APSR Authors
School in 1994 FS APSR APSR P G School in 1973 FS APSR APSR P G
Authors Articles Authors Articles
1 Michigan, U of 44 23 56 2.43 .35 Michigan, U of 60 20 46 2.30 .36
2 UCLA 57 21 41 1.95 .34 UC Berkeley 46 19 39 2.05 .37
3 Harvard 48 20 56 2.80 .36 Harvard 44 14 31 2.21 .26
4 Ohio State Uni. 33 17 48 2.82 .31 Wisconsin, U of (Mad) 37 13 41 3.15 .30
5 UC Berkeley 41 17 43 2.53 .44 Princeton 30 12 23 1.92 .32
6 UC San Diego 36 17 36 2.12 .30 Stanford 30 11 22 2.00 .32
7 Indiana Uni. 27 17 32 1.88 .31 Chicago, U of 24 11 20 1.82 .26
8 Michigan State Uni. 28 16 49 3.06 .44 Yale 37 9 21 2.33 .38
9 Stanford 28 15 64 4.27 .32 Ohio State Uni. 28 9 14 1.56 .22
10 Yale 29 14 38 2.71 .43 Columbia Uni. 38 8 11 1.38 .22
11 North Carolina, U of 28 14 37 2.64 .38 Rochester, U of 14 7 27 3.86 .28
12 Maryland, U of 40 13 25 1.92 .33 Hawaii, U of – 7 18 2.57 .33
13 Texas A & M 37 13 19 1.46 .24 Iowa, U of 21 7 14 2.00 .24
14 Minnesota, U of (Mnpls) 30 12 32 2.67 .35 North Carolina, U of 35 7 14 2.00 .29
15 Wisconsin, U of (Mad.) 49 12 24 2.00 .37 Massachusetts, U of 39 7 12 1.71 .33
16 Princeton 49 12 23 1.92 .31 Johns Hopkins Uni. 27 7 10 1.43 .17
17 Iowa, U of 22 11 27 2.45 .44 Washington Uni. 21 7 9 1.29 .16
18 Arizona, U of 20 10 33 3.30 .49 MIT – 7 8 1.14 .11
19 Chicago, U of 28 10 19 1.90 .31 Florida State Uni. 26 7 7 1.00 .00
20 Colorado, U of (Boulder) 26 10 17 1.70 .24 Cornell Uni. 28 6 9 1.50 .24
21 Rochester, U of 18 9 48 5.33 .46 Georgia, U of 30 6 9 1.50 .17
22 SUNY (Stony Brook) 18 9 27 3.00 .26 Minnesota, U of (Mnpls) 28 6 9 1.50 .24
23 Texas, U of (Austin) 53 9 21 2.33 .33 Illinois, U of 29 6 7 1.17 .12
24 Duke 29 9 18 2.00 .40 Indiana Uni. 38 6 7 1.17 .12
25 Georgetown 31 9 13 1.44 .22 Washington, U of 20 6 7 1.17 .12
26 Florida State Uni. 19 9 12 1.33 .17 Missouri, U of (Columbia) 23 6 6 1.00 .00
27 Illinois, U of 33 9 12 1.33 .22 Northwestern Uni. 24 6 6 1.00 .00
28 Houston, U of 28 8 25 3.13 .45 Brandeis Uni. 15 5 12 2.40 .30
29 Pittsburgh, U of 29 8 20 2.50 .42 Rutgers (New Bmswk) 46 5 7 1.40 .17
30 Cornell Uni. 39 7 16 2.29 .29 Pennsylvania, U of 25 5 5 1.00 .00
31 Washington Uni. 18 7 16 2.29 .30 Carnegie-Mellon Uni. 35 4 10 2.50 .30
32 Emory Uni. 22 7 12 1.71 .31 Duke 25 4 8 2.00 .25
33 Loyola Uni., Chicago 18 7 12 1.71 .21 New York, City College – 4 7 1.75 .32
34 Penn State Uni. 19 7 11 1.57 .29 SUNY (Stony Brook) – 4 7 1.75 .32
35 Wayne State Uni. 24 7 8 1.14 .11 Syracuse Uni. 26 4 7 1.75 .11
36 Cal. Tech. 9 6 24 4.00 .51 New Mexico, U of 14 4 6 1.50 .25
37 Arizona State 16 6 20 3.33 .52 Pittsburgh, U of 31 4 6 1.50 .17
38 New York Uni. 21 6 19 3.17 .52 Dartmouth – 4 5 1.25 .15
39 Oregon, U of 18 6 15 2.50 .46 Penn State Uni. 20 4 5 1.25 .15
40 Virginia, U of 34 6 11 1.83 .26 UC Santa Barbara 24 4 5 1.25 .15
41 Florida, U of 28 6 10 1.67 .27 UC Davis 29 4 4 1.00 .00
42 Washington, U of 30 6 7 1.17 .12 Connecticut, U of 28 3 6 2.00 .33
43 UC Irvine 23 5 18 3.60 .36 Purdue Uni. 22 3 6 2.00 .22
44 Louisiana State U 19 5 11 2.20 .33 Claremont Grad. School 31 3 5 1.67 .27
45 UC Davis 21 5 11 2.20 .29 Northern Illinois Uni. 35 3 5 1.67 .13
46 North Texas, U of 22 5 10 2.00 .24 Oregon, U of 19 3 5 1.67 .13
47 American Uni. 22 5 9 1.80 .36 Swarthmore College – 3 5 1.67 .13
48 Purdue Uni. 29 5 9 1.80 .22 Texas, U of (Austin) – 3 5 1.67 .13
49 Columbia Uni. 48 5 8 1.60 .30 Vanderbilt Uni. 16 3 5 1.67 .13
50 South Carolina, U of 15 5 7 1.40 .17 Florida, U of 25 3 4 1.33 .17
51 Cincinnati, U of 14 5 6 1.20 .13 Kentucky, U of 19 3 4 1.33 .17
52 Kansas, U of 19 5 6 1.20 .13 SUNY (Buffalo) 26 3 4 1.33 .17
53 New Mexico, U of 16 5 6 1.20 .13 UC Riverside 16 3 4 1.33 .17
54 UC Santa Barbara 22 4 15 3.75 .52 Virginia, U of 35 3 4 1.33 .17
55 Missouri, U of (St. Louis) 23 4 10 2.50 .45 Brown – 3 3 1.00 .00
56 Northwestern Uni. 20 4 10 2.50 .40 Denver, U of 5 3 3 1.00 .00
57 George Washington Uni. 30 4 9 2.25 .28 Georgetown 25 3 3 1.00 .00
58 Illinois, U of (Chicago) 22 4 9 2.25 .19 SUNY (Binghamton) 22 3 3 1.00 .00
59 SUNY (Binghamton) 17 4 8 2.00 .31 SUNY (Brockport) 24 3 3 1.00 .00
60 Carnegie-Mellon Uni. 44 4 7 1.75 .25 Toronto, U of – 3 3 1.00 .00
Table 2. Departments Ranked by Number of APSR Authors (cont.)
School in 1994 FS APSR APSR P G School in 1973 FS APSR APSR P G
Authors Articles Authors Articles
61 Johns Hopkins Uni. 19 4 7 1.75 .25 UCLA 55 3 3 1.00 .00
62 Toronto, U of 56 4 6 1.50 .25 Arizona, U of 29 2 4 2.00 .25
63 Wisconsin, U of (Milw.) 20 4 6 1.50 .17 CUNY Grad Center 70 2 4 2.00 .00
64 Brown 19 4 5 1.25 .15 Michigan State Uni. 25 2 4 2.00 .25
65 Kentucky, U of 17 4 4 1.00 .00 Temple Uni. 27 2 4 2.00 .00
66 Pennsylvania, U of 23 4 4 1.00 .00 Oakland Uni. – 2 3 1.50 .17
67 Georgia, U of 30 3 7 2.33 .25 California State Uni. 29 2 2 1.00 .00
68 Auburn Uni. 29 3 6 2.00 .33 Maryland, U of 41 2 2 1.00 .00
69 Denver, U of 19 3 6 2.00 .22 San Diego State Uni. 25 2 2 1.00 .00
70 Tulane 14 3 6 2.00 .33 SUNY (Albany) 23 2 2 1.00 .00
71 UC Santa Cruz 20 3 6 2.00 .22 West Virginia Uni. 25 2 2 1.00 .00
72 Bryn Mawr College 5 3 5 1.67 .27 York Uni. – 2 2 1.00 .00
73 Iowa State Uni. 20 3 5 1.67 .27
74 UC Riverside 19 3 5 1.67 .13
75 Claremont Grad. School 9 3 4 1.33 .17
76 Northeastern Uni. 20 3 4 1.33 .17
77 Alabama, U of 15 3 3 1.00 .00
78 British Columbia, U of 3 3 1.00 .00
79 Cleveland State Uni. 14 3 3 1.00 .00
80 Notre Dame, U of 33 3 3 1.00 .00
Source: The University of Iowa APSR School Data Set.
FS = Faculty size. Faculty size for 1994 is as reported in the NRC Report or the 1995-97 APSA Graduate Faculty and Programs in Political Science; faculty size for 1973 is as reported in the APSA Guide to Graduate Study in Political Science.
APSR Authors = Number of faculty publishing in APSR between 1954 and 1994 for the 1994 rankings; number of faculty publishing in APSR between 1954 and 1973 for the 1973 rankings.
APSR Articles = Number of APSR articles between 1954-1994 for faculty in the 1994 rankings; number of APSR articles between 1954 and 1973 for faculty in the 1973 rankings.
P = Productivity of APSR authors in department (APSR articles/APSR authors).
G = Gini coefficient for APSR publications.
APSR Hall of Fame, we proposed a
new measure of performance that
combined publications in APSR and
citation lines, which we called the
Professional Visibility Index (PVI).12
We argued that publications in
APSR and citations were somewhat
different measures of performance
and visibility. To keep up with the
current literature, most political sci-
entists read APSR. Yet high quality
work tends to be noticed and cited
frequently regardless of where it is
published-for example, Philip E.
Converse’s much cited “The Nature
of Belief Systems in Mass Publics,”
published in Ideology and Discontent,
edited by David E. Apter. We be-
lieve this same logic applies to de-
partment performance. Citation lines
and publications in APSR are each
indications of good scholarship, and
departments benefit from having fac-
ulty that rate highly on these mea-
sures because individuals become
identified with their departments
over time. We also argue that rank-
ings of departments based on the
combination of these two indicators
should be more valid and reliable
than rankings based on only one of
these measures.13
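A minimal sketch of the index as defined in the table notes, which give the PVI as (APSR publications x citations)/1000; the department counts below are hypothetical, and the size adjustment mirrors the final Table 3 column, which divides the PVI by faculty size.

```python
def pvi(apsr_articles, citations):
    """Professional Visibility Index: (publications * citations) / 1000,
    following the definition in the table notes."""
    return apsr_articles * citations / 1000

def pvi_per_faculty(apsr_articles, citations, faculty_size):
    """PVI adjusted for department size."""
    return pvi(apsr_articles, citations) / faculty_size

# Hypothetical department: 40 APSR articles, 5,000 citation lines, 30 faculty members.
print(pvi(40, 5000))                   # 200.0
print(pvi_per_faculty(40, 5000, 30))   # about 6.7
```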
Table 3 also lists the top 25 de-
partments according to the PVI. All
the schools listed in the PVI top 25
appeared somewhere in the rankings
by number of APSR articles or cita-
tions. The final ranking in Table 3
simply adjusts the PVI ranking from
the previous column for the faculty
size of each department. Some nota-
ble shifts occur when faculty size is
controlled. For example, the Univer-
sity of Wisconsin, Texas, and Cornell
drop out of the top 25 because they
have relatively large faculty. Wash-
ington University in St. Louis, be-
cause of a relatively small faculty,
makes the final top 25. Similarly,
Stony Brook comes back into the
final listing although it did not make
the PVI top 25 prior to controlling
for faculty size.
Many of the departments that ap-
pear in the top 25 as determined by
the PVI list in Table 3 have been
recognized as highly productive de-
partments for some time. Yet a com-
parison of the Table 3 rankings with
comparable rankings for the 1954-
1973 period (presented in Table 4)
reveals that major changes occurred
in the scholarly quality of various
graduate programs. Some depart-
ments appear in the top 25 in Table
3, but did not appear as a top-
ranked department in 1973. Perhaps
most notable among highly improved
departments are Michigan State,
UCLA, Maryland and UC San Diego, none of which appear in Table 4, though all are among the most productive departments in Table 3. A number of departments also disappeared from the top 25 over the twenty years between 1973 and 1994. Among those schools listed in Table 4, but not in Table 3, are Hawaii, Syracuse, Brandeis, Georgia and Johns Hopkins. The shift in the ranking, whether up or down, was generally due to turnover in faculty.

Performance Evaluated Through Teaching

One responsibility of faculty is to teach. One aspect of this responsibility is to train graduate students in the profession. Evaluating the quality of graduate programs, therefore, should include an assessment of the creativity and scholarship produced by graduates of each department. The NRC acknowledges that evaluating the scholarly accomplishments of graduates should be an important component of assessing the quality of education provided by those programs (NRC 1995, 26). Yet this is missing from both the 1982 and 1995 NRC reports.

Because we have collected information on the school from which all authors in APSR over the past 40 years received their degrees, we can evaluate the effectiveness of departments in preparing research scholars. Table 5 presents the ranking of departments by the number of Ph.D.'s publishing in APSR and the number of APSR articles these graduates produced (Table 5 is restricted ... The most noteworthy increase occurs for the California Institute of Technology, which did not make the list for the earlier twenty years but ends up 13th during the more recent period (see Table 5). ...
Table 4. Departments Ranked by APSR Articles, Citations, and PVI (Schools in 1973)

Ranked by APSR Articles: 1 Michigan, U of (46); 2 Wisconsin, U of (Mad) (41); 3 UC Berkeley (39); 4 Harvard (31); 5 Rochester, U of (27); 6 Princeton (23); 7 Stanford (22); 8 Yale (21); 9 Chicago, U of (20); 10 Hawaii, U of (18); 11 Iowa, U of (14); 12 North Carolina, U of (14); 13 Ohio State Uni. (14); 14 Brandeis Uni. (12); 15 …; 16 Columbia Uni. (11); 17 Carnegie-Mellon Uni. (10); 18 Johns Hopkins Uni. (10); 19 Cornell Uni. (9); 20 Georgia, U of (9); 21 Minnesota, U of (Mnpls) (9); 22 Washington Uni. (9); 23 Duke (8); 24 MIT (8); 25 Florida State Uni. (7); 26 Illinois, U of (7); 27 Indiana Uni. (7); 28 New York, City College (7); 29 Rutgers (New Brunswick) (7); 30 SUNY (Stony Brook) (7); 31 Syracuse Uni. (7); 32 Washington, U of (7).

Ranked by Citations (G in parentheses): 1 Harvard 13,480 (.65); 2 UC Berkeley 4,705 (.69); 3 Yale 4,499 (.70); 4 Chicago, U of 4,095 (.66); 5 Stanford …; 6 Syracuse Uni. 2,8…; 7 Washington, U of 2,6…; 8 Wisconsin, U of (Mad) 2,615 (.43); 9 Michigan, U of 2,408 (.60); 10 Princeton 1,807 (.70); 11 Rochester, U of 1,270 (.53); 12 North Carolina, U of 1,245 (.57); 13 Brandeis Uni. 916 (.33); 14 Minnesota, U of (Mnpls) 891 (.52); 15 …; 16 Hawaii, U of 768 (.31); 17 Georgia, U of 748 (.63); 18 MIT 622 (.66); 19 Carnegie-Mellon Uni. 596 (.63); 20 Temple Uni. 565 (.44); 21 Columbia Uni. 545 (.76); 22 Ohio State Uni. 522 (.69); 23 Johns Hopkins Uni. 501 (.64); 24 Duke 489 (.64); 25 Rutgers (New Bruns) 471 (.55).

Ranked by PVI: 1 Harvard 417.89; 2 UC Berkeley 183.50; 3 Michigan, U of …; 4 Wisconsin, U of (Mad) …; 5 …; 6 …; 7 …; 8 Princeton 41.56; 9 Rochester, U of 34.30; 10 Syracuse Uni. 20.12; 11 Washington, U of 18.49; 12 North Carolina, U of 17.43; 13 Hawaii, U of 13.83; 14 Brandeis Uni. 10.99; 15 …; 16 Ohio State Uni. 7.30; 17 Washington Uni. 6.97; 18 Georgia, U of 6.73; 19 Iowa, U of 6.21; 20 Columbia Uni. 5.99; 21 Carnegie-Mellon Uni. 5.96; 22 Johns Hopkins Uni. 5.01; 23 MIT 4.97; 24 Massachusetts, U of 3.93; 25 Duke 3.91.

Ranked by PVI/Fac: 1 Harvard 9…; 2 UC Berkeley …; 3 …; 4 …; 5 …; 6 …; 7 …; 8 Michigan, U of 1.85; 9 Princeton …; 10 Washington, U of 0.92; 11 Syracuse Uni. 0.77; 12 Brandeis Uni. …; 13 North Carolina, U of …; 14 Washington Uni. 0.33; 15 …; 16 Minnesota, U of (Mnpls) …; 17 Ohio State Uni. …; 18 Georgia, U of 0…; 19 Johns Hopkins Uni. 0.19; 20 Carnegie-Mellon Uni. 0.17; 21 Columbia Uni. …; 22 Duke 0.16; 23 New Mexico, U of 0…; 24 Cornell Uni. …; 25 Massachusetts, U of 0.10.

Source: The University of Iowa APSR School Data Set. Includes schools with more than one APSR author.
APSR Articles = Number of APSR articles between 1954-1973 by the department's faculty.
Citations = Citations listed in SSCI between 1956 and 1976 for faculty publishing in APSR.
G = Gini coefficient for citations.
PVI = Professional Visibility Index ((APSR Publications*Citations)/1000).
PVI/Fac = PVI controlling for faculty size (PVI/Faculty size).
* Schools such as Hawaii and MIT might be absent from this column because of missing data on faculty size. However, they might be excluded even if we had the data.
Table 5. Departments by Number of PhD Graduates Published in APSR
Degree 1974 or Later GS APSR APSR P G Degree Before 1974 GS APSR APSR P G
Authors Articles Authors Articles
1 Michigan, U of 173 43 76 1.77 .33 Harvard 112 75 170 2.27 .37
2 UC Berkeley 143 34 63 1.85 .34 Yale 83 69 170 2.46 .41
3 Harvard 172 29 44 1.52 .22 Chicago, U of 129 54 103 1.91 .34
4 Minnesota, U of (Mnpls) 87 21 38 1.81 .28 UC Berkeley 273 37 79 2.14 .39
5 Chicago, U of 191 20 34 1.70 .31 Columbia Uni. 281 35 52 1.49 .23
6 Indiana Uni. 100 17 30 1.76 .33 Wisconsin, U of (Mad) 140 34 69 2.03 .36
7 Yale 69 17 28 1.65 .26 Michigan, U of 254 33 88 2.67 .45
8 Rochester, U of 31 13 33 2.54 .34 Princeton 80 31 58 1.87 .33
9 Iowa, U of 30 12 23 1.92 .37 North Carolina, U of 75 27 66 2.44 .46
10 Princeton 59 12 22 1.83 .36 Stanford 99 26 68 2.62 .38
11 Stanford 89 12 19 1.58 .29 Northwestern Uni. 45 25 62 2.48 .41
12 Washington Uni. 45 11 15 1.36 .22 Illinois, U of 100 20 38 1.90 .39
13 Cal. Tech. 19 10 22 2.20 .25 Indiana Uni. 139 17 33 1.94 .29
14 Northwestern Uni. 70 10 17 1.70 .26 Minnesota, U of (Mnpls) 75 16 30 1.88 .30
15 North Carolina, U of 103 8 12 1.50 .23 UCLA 155 16 27 1.69 .28
16 UCLA 177 8 11 1.38 .17 Cornell Uni. 80 13 25 1.92 .31
17 Cornell Uni. 86 8 8 1.00 .00 Syracuse Uni. 35 12 23 1.92 .37
18 Michigan State Uni. 41 7 14 2.00 .29 Duke 54 11 20 1.82 .29
19 Columbia Uni. 355 7 11 1.57 .23 MIT 95 10 21 2.10 .40
20 Johns Hopkins Uni. 148 6 12 2.00 .36 Rochester, U of 27 9 42 4.67 .49
21 Washington, U of 59 6 10 1.67 .27 Johns Hopkins Uni. 52 9 9 1.00 .00
22 MIT 70 6 9 1.50 .24 New York Uni. 100 8 11 1.38 .17
23 Florida State Uni. 155 6 8 1.33 .17 Iowa, U of 44 7 31 4.43 .47
24 Wisconsin, U of (Madison) 138 6 8 1.33 .21 Michigan State Uni. 30 7 9 1.29 .16
25 Ohio State Uni. 145 6 7 1.17 .12 Pennsylvania, U of 42 7 8 1.14 .11
26 Houston, U of 42 5 7 1.40 .23 Washington, U of 60 6 13 2.17 .45
27 SUNY (Stony Brook) 38 5 6 1.20 .13 Washington Uni. 42 5 9 1.80 .31
28 Illinois, U of 74 5 5 1.00 .00 Tulane 29 5 8 1.60 .25
29 Texas, U of (Austin) 89 4 7 1.75 .32 Oxford, U of (England) – 5 5 1.00 .00
30 Wisconsin, U of (Milwaukee) 16 4 7 1.75 .25 Oregon, U of 36 4 8 2.00 .19
31 Syracuse Uni. 40 4 6 1.50 .17 Kentucky, U of 27 4 6 1.50 .25
32 New York Uni. 16 4 5 1.25 .15 Virginia, U of 65 4 6 1.50 .17
33 Oxford, England, U of – 3 8 2.67 .25 London Sch. of Econ. – 4 5 1.25 .15
34 Maryland, U of 139 3 6 2.00 .22 UC Santa Barbara 97 4 5 1.25 .15
35 SUNY (Buffalo) 34 3 6 2.00 .33 Ohio State Uni. 150 4 4 1.00 .00
36 Toronto, U of – 3 5 1.67 .13 Texas, U of (Austin) 40 3 7 2.33 .19
37 Oregon, U of 34 3 4 1.33 .17 Connecticut, U of 25 3 3 1.00 .00
38 South Carolina, U of 32 3 4 1.33 .17 Illinois, U of (Chicago) 100 2 7 3.50 .07
39 Georgia, U of 48 3 3 1.00 .00 American Uni. 50 2 4 2.00 .00
40 Oklahoma, U of 58 3 3 1.00 .00 Missouri, U of 48 2 4 2.00 .00
41 Wayne State Uni. 42 3 3 1.00 .00 Penn State Uni. 62 2 4 2.00 .25
42 Pittsburgh, U of 53 2 5 2.50 .30 Vanderbilt Uni. 20 2 4 2.00 .00
43 UC Irvine 34 2 5 2.50 .30 Australian National – 2 2 1.00 .00
44 Duke 89 2 4 2.00 .00 Claremont Grad. School 160 2 2 1.00 .00
45 Carnegie-Mellon Uni. – 2 3 1.50 .17 Georgetown 80 2 2 1.00 .00
46 Rice Uni. 30 2 3 1.50 .17 Maryland, U of 56 2 2 1.00 .00
47 York Uni. 18 2 3 1.50 .17 Oslo, Norway, U of – 2 2 1.00 .00
48 Boston College – 2 2 1.00 .00
49 Cincinnati, U of 47 2 2 1.00 .00
50 Colorado, U of (Boulder) 27 2 2 1.00 .00
51 Pennsylvania, U of 51 2 2 1.00 .00
52 SUNY (Binghamton) 83 2 2 1.00 .00
Source: The University of Iowa APSR School Data Set.
APSR Authors = Number of the department's Ph.D. graduates publishing in APSR between 1954 and 1994 for the 1974-or-later rankings; number of the department's Ph.D. graduates publishing in the APSR between 1954-1973 for the earlier rankings.
APSR Articles = Number of APSR articles between 1954-1994 by the department's Ph.D. graduates.
GS = Graduate students enrolled in program as reported in the NRC Report or the 1995-97 Graduate Faculty and Programs in Political Science. For the earlier rankings we obtained the graduate student size from the 1976 Guide to Graduate Study in Political Science.
P = Productivity of the department's Ph.D. graduates publishing in APSR (APSR articles/APSR authors).
G = Gini coefficient for APSR publications.
Table 6. Department Rankings by Performance of Faculty and their Ph.D.s

Degree 1974 or Later, PVI: 1 Harvard 3,661.24; 2 Michigan, U of 2,915.75; 3 Stanford 1,983.58; 4 Yale 1,514.45; 5 UC Berkeley 1,505.32; 6 Rochester, U of 1,303.86; 7 UCLA 875.73; 8 Indiana Uni. 687.64; 9 Minnesota, U of (Mnpls) 593.13; 10 Maryland, U of 443.66; 11 Michigan State Uni. 346.48; 12 Princeton 330.59; 13 Ohio State Uni. 321.70; 14 Iowa, U of 308.91; 15 UC San Diego 305.21; 16 Cal. Tech. 285.29; 17 Arizona, U of 252.50; 18 Chicago, U of 222.76; 19 North Carolina, U of 192.69; 20 Wisconsin, U of (Madison) 161.87; 21 New York Uni. 120.25; 22 Texas, U of (Austin) 117.28; 23 Houston, U of 116.13; 24 Washington Uni. 111.76; 25 Northwestern Uni. 106.86.

Degree 1974 or Later, PVI/(F+G)*: 1 Rochester, U of 26.61; 2 Stanford 16.95; 3 Harvard 16.64; 4 Yale 15.45; 5 Michigan, U of 13.44; 6 Cal. Tech. 10.19; 7 UC Berkeley 8.18; 8 Iowa, U of 5.94; 9 Indiana Uni. 5.41; 10 Minnesota, U of (Mnpls) 5.07; 11 Michigan State Uni. 5.02; 12 UCLA 3.74; 13 UC San Diego 3.28; 14 New York Uni. 3.25; 15 Princeton 3.06; 16 Arizona, U of 2.66; 17 Maryland, U of 2.48; 18 Ohio State Uni. 1.78; 19 Washington Uni. 1.77; 20 UC Irvine 1.74; 21 Houston, U of 1.66; 22 North Carolina, U of 1.47; 23 SUNY (Stony Brook) 1.40; 24 Northwestern Uni. 1.19; 25 Chicago, U of 1.02.

Degree before 1974, PVI: 1 Harvard 17,568.50; 2 Yale 13,627.35; 3 Chicago, U of 5,236.78; 4 Michigan, U of 2,482.09; 5 UC Berkeley 1,858.56; 6 Wisconsin, U of (Madison) …; 7 North Carolina, U of 1,378.36; 8 Princeton 1,301.22; 9 Columbia Uni. 1,236.43; 10 Stanford 1,000.02; 11 Northwestern Uni. 710.55; 12 Rochester, U of 582.10; 13 Iowa, U of 274.51; 14 Illinois, U of 250.62; 15 Minnesota, U of (Mnpls) 241.44; 16 Indiana Uni. 206.79; 17 MIT 170.38; 18 Duke 159.22; 19 UCLA 155.23; 20 Cornell Uni. 132.44; 21 Syracuse Uni. 128.38; 22 Washington …; 23 New York Uni. 57.66; 24 Washington Uni. 46.75; 25 UC Davis 37.51.

Degree before 1974, PVI/(F+G): 1 Yale …; 2 Harvard …; 3 Chicago, U of …; 4 Rochester, U of 14.20; 5 North Carolina, U of 12.53; 6 …; 7 Northwestern Uni. …; 8 Wisconsin, U of (Madison) 8.21; 9 Michigan, U of 7.90; 10 Stanford 7.75; 11 UC Berkeley …; 12 Iowa, U of 4…; 13 Columbia Uni. 3…; 14 Minnesota, U of (Mnpls) …; 15 Syracuse Uni. 2.10; 16 Duke 2.02; 17 Illinois, U of 1…; 18 MIT 1.79; 19 Cornell Uni. …; 20 Indiana Uni. 1.17; 21 Washington, U of …; 22 …; 23 UCLA …; 24 Vanderbilt Uni. …; 25 New …

Source: The University of Iowa APSR School Data Set.
APSR Articles = For the 1974 or later rankings, the number of APSR articles for the department's faculty publishing between 1974 and 1994 and for the department's Ph.D. graduates; for the before 1974 rankings, the number of articles equals the number of articles between 1954 and 1994 for the Ph.D. graduates.
Citations = For the 1974 or later rankings, citations equals the citations listed in SSCI between 1956 and 1993 for the department's faculty and Ph.D. graduates published in the APSR. For the before 1974 rankings, citations equals the citations listed in SSCI for the department's faculty between 1956 and ... and for Ph.D. graduates between 1956 and ...
PVI = Professional Visibility Index ((APSR Publications*Citations)/1000) of faculty and Ph.D. graduates.
PVI/(F+G) = Professional Visibility Index controlling for the size of the faculty and graduate student body.
* Schools might be absent from this column because of missing data on faculty or graduate student size.
Because our rankings are based on publications in the leading journal and citation
counts, it is possible to directly com-
pare these objective rankings with
the NRC reputational rankings. Ta-
ble 7 presents this comparison for
the top 50 departments. In many
respects, the reputational and objec-
tive rankings are similar. Only two
departments ranked in the NRC top
25 (MIT and the University of
Washington) do not appear in the
objective listing using the faculty
PVI, and one of these schools
(Washington) enters the list when
the combined faculty and graduate
PVI is used for the rankings in the
third column of Table 7. Among the
second 25 departments, there are
nine in the NRC ranking that do not
appear in the objective ranking-
although, again, one of those (Wis-
consin at Milwaukee) appears in the
objective listing that uses the com-
bined faculty and graduate PVI (see
Table 7). Overall, 78% of the NRC
top 50 departments are the same as
those listed in the more objectively
based ratings. Moreover, the correla-
tion between the NRC reputational
ranking, for the 98 departments in-
cluded in the NRC report, and the
faculty PVI is very significant (r =
.60), thus indicating much similarity
in the two types of rankings.
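The sketch below shows the kind of correlation being reported, computed as a plain Pearson coefficient (the article does not specify the form of correlation used) on a handful of Table 7 rows. It is illustrative only; the reported r = .60 is computed across all 98 departments in the NRC report.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A few departments from Table 7: NRC93Q score and faculty PVI
# (Harvard, Michigan, Stanford, Ohio State, Washington Uni., Houston).
nrc_score   = [4.88, 4.60, 4.50, 3.69, 3.29, 2.96]
faculty_pvi = [1390.03, 950.10, 1419.07, 280.18, 43.65, 86.23]
print(round(pearson_r(nrc_score, faculty_pvi), 2))  # correlation for this small slice only
```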
Despite the general similarity in
the reputational and objective rank-
ings, there are noteworthy discrepan-
cies between the two rankings.
Clearly some departments rate much
higher in the reputational listing
than in the objective listing-for ex-
ample, MIT, Chicago, Wisconsin,
Duke, Cornell and Columbia (see
Table 7). Some other departments
receive a lower ranking on the basis
of reputation than they deserve ac-
cording to objective indicators-for
example, Cal Tech, Maryland, Michi-
gan State, or Houston. In most
cases, these latter departments are
programs that have experienced an
improvement in productivity levels in
recent years, so their reputation may
not yet reflect this improvement.
Perhaps the reason why reputation
lags behind objective indicators and
why reputation may be relatively sta-
ble over time, is that reputational
rankings are largely influenced by
factors that are fairly obvious to
those doing the ranking. For exam-
ple, the larger the department the
more visible that department is to
the profession as a whole. Larger
departments send more faculty to
conferences, publish more articles,
and produce more graduate stu-
dents. This does not mean that large
departments lack quality. After all,
when we controlled for faculty size
in Table 3, the rank ordering among
the top departments changed rela-
tively little. Yet if reputational rank-
ings are partially a reflection of what
is most apparent, then we would ex-
pect that reputational rank is more a
reflection of the number of publica-
tions than a reflection of citations,
because citations are less visible than
are publications. Indeed, this is ex-
actly what we find when we regress
reputational ranking on the number
of APSR publications and the num-
ber of citations controlling for fac-
ulty size. The regression explains
70% of the variance (adjusted R
squared) in the NRC reputational
ranking, with the following Beta co-
efficients and T values for the three
independent variables:
Beta T
Faculty Size .45 6.48
APSR Publications .46 4.20
Citations .06 .58
In short, while reputational rankings
reflect the scholarly quality of the
faculty, they are based on obvious
indicators of that quality rather than
more subtle indicators. The size of a
department and the number of pub-
lications produced by a department
make that department more visible,
but the number of citations or the
quality of the graduates adds little to
reputational rankings.
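A sketch of a regression of this form, assuming the department-level variables are available as arrays and that the reported betas are standardized OLS coefficients. The generated data are synthetic stand-ins, so the estimates will not match the published values.

```python
import numpy as np
import statsmodels.api as sm

def standardized_betas(y, X):
    """OLS of z-scored y on z-scored columns of X; returns betas, t values, adjusted R squared."""
    zy = (y - y.mean()) / y.std(ddof=1)
    zX = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    fit = sm.OLS(zy, sm.add_constant(zX)).fit()
    return fit.params[1:], fit.tvalues[1:], fit.rsquared_adj

# Synthetic stand-ins for the 98 departments' faculty size, APSR publications, and citations.
rng = np.random.default_rng(0)
faculty_size = rng.integers(10, 60, size=98).astype(float)
apsr_pubs = 0.8 * faculty_size + rng.normal(0, 5, 98)
citations = 40 * apsr_pubs + rng.normal(0, 300, 98)
nrc_rating = 0.04 * faculty_size + 0.03 * apsr_pubs + rng.normal(0, 0.4, 98)

betas, tvals, adj_r2 = standardized_betas(
    nrc_rating, np.column_stack([faculty_size, apsr_pubs, citations]))
print(betas, tvals, adj_r2)
```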
Collaboration by
Department Rank
Collaboration has been increasing
in the profession during the past 40
years. Between 1954 and 1960 only
10% of the articles in APSR were
co-authored, whereas half of APSR
articles published from 1989 to 1994
were co-authored. Over the entire 40
year period between 1954 and 1994
some 30% of all APSR articles were
co-authored (our data set has 580
co-authored articles). In the earlier
years, most of these co-authored ar-
ticles were written by collaborators
at the same university. Between 1954
and 1963, only 36% of collaborators
were at different schools; however,
between 1964 and 1983 this figure
rose to 60%, and from 1984 to 1994,
70% of co-authored APSR articles
involved collaborators from different
schools. Over the entire 40 year pe-
riod, 63% of all co-authored articles
involved collaborators from different
universities. In short, when collabo-
ration occurs it is far more likely to
be between different universities
rather than within the same school.
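The shares quoted in this paragraph can be tabulated from an article-level file along the following lines; the record layout is hypothetical, not the actual format of the University of Iowa APSR School Data Set.

```python
# Hypothetical article records: (publication year, list of author schools).
articles = [
    (1958, ["Harvard"]),
    (1961, ["Yale", "Yale"]),
    (1979, ["Michigan", "Rochester"]),
    (1991, ["Iowa", "Iowa"]),
    (1993, ["UCLA", "Houston"]),
]

def collaboration_shares(articles, start, end):
    """Share of articles in a period that are co-authored, and the share of those
    co-authored articles whose authors sit in different schools."""
    in_period = [a for a in articles if start <= a[0] <= end]
    coauthored = [a for a in in_period if len(a[1]) > 1]
    cross_school = [a for a in coauthored if len(set(a[1])) > 1]
    co_share = len(coauthored) / len(in_period) if in_period else 0.0
    cross_share = len(cross_school) / len(coauthored) if coauthored else 0.0
    return co_share, cross_share

print(collaboration_shares(articles, 1954, 1960))
print(collaboration_shares(articles, 1984, 1994))
```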
While the extent of collaboration
does not vary significantly across de-
partments of different rank, the pat-
tern of collaboration does change for
schools of different scholarly rank.
Departments in the highest quartiles,
as determined by the number of
APSR publications and citation
counts, are somewhat more likely to
have collaboration among members
of the same department than are
lower ranked departments (for ex-
ample, 41% of collaborators are
within the same department among
the highest ranked departments as
compared with 31% among the
schools in the lowest quartile).
Moreover, when collaboration occurs
between departments, it tends to be
between departments of a similar
rank. Among the top ranked depart-
ments, 79% of collaboration was
with a department of the same rank
or only one quartile lower (62% in
the same quartile, 17% in the sec-
ond highest quartile). Similarly,
among the lowest ranked schools,
56% of collaboration occurred with
a school of the same rank and an-
other 14% was with a department in
the next highest rank.
If collaboration reflects an attempt
to share resources, it is clearly not
benefiting the lesser ranked depart-
ments. Higher-ranked departments
appear to have more research re-
sources at their disposal. It would be
reasonable, therefore, if faculty at
lower-ranked departments collabo-
rated with faculty from higher-
ranked departments to increase the
resources at their disposal, but such
collaboration is rare. Given this out-
come, cross-department collabora-
tion appears to be motivated by
something other than an effort to
Table 7. A Direct Comparison of Reputational and Objective Department Rankings
School in 1994  NRC93Q        School in 1994  Faculty PVI        School in 1994  Faculty & Student PVI
1 Harvard 4.88 Stanford 1,419.07 Harvard 3,661.24
2 UC Berkeley 4.66 Harvard 1,390.03 Michigan, U of 2,915.75
3 Michigan, U of 4.60 Michigan, U of 950.10 Stanford 1,983.58
4 Yale 4.60 Yale 754.19 Yale 1,514.45
5 Stanford 4.50 UCLA 590.85 UC Berkeley 1,505.32
6 Chicago, U of 4.41 Rochester, U of 417.13 Rochester, U of 1,303.86
7 Princeton 4.39 Maryland, U of 348.60 UCLA 875.73
8 UCLA 4.25 UC Berkeley 345.81 Indiana Uni. 687.64
9 UC San Diego 4.13 UC San Diego 296.96 Minnesota, U of (Mnpls) 593.13
10 Wisconsin, U of (Madison) 4.09 Ohio State Uni. 280.18 Maryland, U of 443.66
11 Rochester, U of 4.01 Arizona, U of 244.37 Michigan State Uni. 346.48
12 MIT 3.96 Michigan State Uni. 229.13 Princeton 330.59
13 Minnesota, U of (Mnpls) 3.95 Indiana Uni. 226.79 Ohio State Uni. 321.70
14 Duke 3.94 Princeton 156.54 Iowa, U of 308.91
15 Cornell Uni. 3.85 Minnesota, U of (Mnpls) 142.08 UC San Diego 305.21
16 Columbia Uni. 3.84 Iowa, U of 115.74 Cal. Tech. 285.29
17 Ohio State Uni. 3.69 Wisconsin, U of (Madison) 103.61 Arizona, U of 252.50
18 North Carolina, U of 3.54 New York Uni. 92.46 Chicago, U of 222.76
19 Texas, U of (Austin) 3.49 Houston, U of 86.23 North Carolina, U of 192.69
20 Indiana Uni. 3.45 North Carolina, U of 81.37 Wisconsin, U of (Madison) 161.87
21 Johns Hopkins Uni. 3.37 UC Irvine 75.56 New York Uni. 120.25
22 Northwestern Uni. 3.35 Chicago, U of 70.58 Texas, U of (Austin) 117.28
23 Washington, U of 3.34 Cal. Tech. 69.19 Houston, U of 116.13
24 Washington Uni. 3.29 Texas, U of (Austin) 68.87 Washington Uni. 111.76
25 Iowa, U of 3.25 Cornell Uni. 66.29 Northwestern Uni. 106.86
26 Virginia, U of 3.24 Duke 61.86 Cornell Uni. 99.94
27 Rutgers Uni. (New Brunsw.) 3.24 SUNY (Stony Brook) 59.88 UC Irvine 99.06
28 Michigan State Uni. 3.24 Arizona State 48.89 Duke 80.42
29 Maryland, U of 3.23 Washington Uni. 43.65 SUNY (Stony Brook) 78.42
30 Illinois, U of 3.20 UC Santa Barbara 38.88 Johns Hopkins Uni. 67.63
31 Pittsburgh, U of 3.15 Texas A & M 32.03 Arizona State 48.89
32 UC Irvine 3.14 Colorado, U of (Boulder) 31.80 Pittsburgh, U of 42.15
33 Houston, U of 2.96 Northwestern Uni. 31.48 UC Santa Barbara 41.52
34 SUNY (Stony Brook) 2.92 American Uni. 31.21 Oregon, U of 39.46
35 Arizona, U of 2.89 Oregon, U of 27.80 Colorado, U of (Boulder) 35.82
36 Emory Uni. 2.88 Pittsburgh, U of 23.49 Washington, U of 35.72
37 Georgetown 2.85 Georgetown 22.32 American Uni. 34.68
38 Florida State Uni. 2.82 Penn State Uni. 22.20 Texas A & M 33.72
39 Colorado, U of (Boulder) 2.78 Johns Hopkins Uni. 21.58 Columbia Uni. 32.39
40 Syracuse Uni. 2.77 George Washington Uni. 20.34 Carnegie-Mellon Uni. 30.73
41 UC Santa Barbara 2.74 Carnegie-Mellon Uni. 19.21 Georgetown 24.46
42 Pennsylvania, U of 2.68 UC Davis 17.44 Georgia, U of 22.71
43 Arizona State 2.67 Georgia, U of 15.82 Penn State Uni. 22.20
44 Georgia, U of 2.66 North Texas, U of 13.69 George Washington Uni. 20.34
45 Notre Dame, U of 2.66 Emory Uni. 13.33 Wisconsin, U of (Milwaukee) 19.79
46 UC Davis 2.61 Purdue Uni. 11.68 UC Davis 17.44
47 George Washington Uni. 2.57 Columbia Uni. 11.50 Emory Uni. 17.16
48 CUNY Grad Center 2.57 Illinois, U of 9.58 SUNY (Buffalo) 17.01
49 Tufts Uni. 2.51 Marquette Uni. 7.91 Illinois, U of 16.12
50 Wisconsin, U of (Milwaukee) 2.48 Louisiana State U 7.35 North Texas, U of 13.69
Source: The University of Iowa APSR School Data Set.
NRC93Q = Score from 1995 National Research Council Rankings.
Faculty PVI = (Faculty Publications * Faculty Citations)/1000. Data from 1954 to 1994 are used for faculty publishing between 1974 and 1994.
Faculty & Student PVI = ((Faculty + Student Publications)*(Faculty + Student Citations))/1000. Student data are from 1974 to 1994 for graduates receiving their degrees after 1973.
share resources; more likely, collabo-
ration reflects a similarity of substan-
tive interests and methodological
expertise among the collaborators.
Regardless of the motivation for
cross-department collaboration, the
pattern of collaboration by rank of
departments suggests that collabora-
tion, in general, does not provide a
mechanism for improving the rela-
tive ranking of lower ranked depart-
ments. Rather, given the pattern
with which cross-department collabo-
ration occurs, collaboration is far
more likely to maintain the rank or-
der of departments than to change
that order.
Conclusion
There is a substantial relationship
between reputational rankings of the
quality of departments and more
objective indicators of department
quality. Particularly important is the
number of publications that depart-
ments have in the leading journals.
Less substantial, but still important,
are the number of citations pro-
duced by the department faculty and
the quality of the research con-
ducted by the graduates of the de-
partment. But despite the overlap in
reputational and objective ratings,
enough difference remains between
the two approaches to warrant using
both the objective rankings and the
reputational rankings.
The NRC has moved in the right direction by adding more objective data to their report. Despite the limitations in the NRC publication and citation data, there is a significant correlation between their objective measures and those reported here (the correlation between the NRC number of publications and the number of APSR articles is .66, and the two sets of citation counts are correlated at .71). To improve the validity of their objective measures in future reports, the NRC should weight the number of publications by journal quality, utilize a longer time period for citations, and check the accuracy of the data.
The comparison of objective measures of program quality over the past 40 years demonstrates that departments can increase program effectiveness and, in turn, benefit their reputational standing. Similarly, the quality of graduate programs can be drastically changed by the departure of very productive faculty members. The rankings by objective measures of both current faculty and graduates should be useful to any department hiring new faculty in the future.
Finally, we had thought that collaborative research and publishing might be a mechanism for improving the quality of scholarship among lower ranked departments. If collaboration occurred between departments of differing rank, lower-ranked schools would benefit by sharing in the greater resources of the higher-ranked departments, thus improving the visibility and quality of the initially lower-ranked departments. The results demonstrate, however, that there is little collaboration across departments of differing rank. As collaboration in political science increases, it does not change either the reputational or objective rankings of departments.
Notes
*This effort has been, perhaps more than anything else, an exercise in data set construction. We wish to thank those individuals whose countless hours of data collection, rechecking, coding and entry have made this article possible: Megan Lutz, Graham Fuller, Michelle Ucci, Scott Fitzgerald, Jeremy Johnson and Chris Hipschen. We also wish to thank Chia-Hsing Lu for technical assistance, Karen Mazaika for editorial assistance and Peggy Swails for secretarial assistance.
1. The NRC is certainly aware of the sampling error issue. They do present the mean ratings of departments within confidence intervals, but this information appears in an appendix to the report (for Political Science see Appendix, Figure Q-36, pages 688-89 in the report). However, a closer look at Figure Q-36 reveals that only 10 broad categories of ratings can be differentiated when statistical significance is taken into consideration. Statistically speaking, the ranking of schools that fall into the 10 different broad categories of ratings can be differentiated from one another, but schools falling into the same broad category cannot be statistically differentiated. Figure Q-36 does confirm that Harvard receives a statistically higher reputational rating than the remaining schools. Beyond that clear difference, however, it is statistically impossible to precisely differentiate the rankings among various subsets of schools. For example, due to sampling error, it is statistically impossible to differentiate among the following six schools for the second place ranking: Berkeley, Yale, Michigan, Stanford, Chicago and Princeton. Despite the imprecision that arises from the large sampling error, the NRC reports mean ratings with two decimal places, thereby implying more precision than the data warrant.
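The point of note 1 can be checked with a back-of-the-envelope calculation: two mean ratings are statistically distinguishable only when the confidence interval for their difference excludes zero. The sketch below illustrates that check; the ratings and standard errors in it are hypothetical and are not taken from the NRC report.

```python
# Rough sketch of the differentiation test implied by note 1. The mean
# ratings and standard errors below are hypothetical examples.
import math

def distinguishable(mean_a, se_a, mean_b, se_b, z=1.96):
    """True if two mean ratings differ at roughly the 95% level."""
    diff = abs(mean_a - mean_b)
    se_diff = math.sqrt(se_a ** 2 + se_b ** 2)  # SE of a difference of independent means
    return diff > z * se_diff

# Two hypothetical schools rated 4.45 and 4.30, each mean with SE = 0.08.
print(distinguishable(4.45, 0.08, 4.30, 0.08))  # False: 4.45 vs. 4.30 cannot be separated
print(distinguishable(4.45, 0.08, 3.90, 0.08))  # True: a half-point gap can be separated
```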
2. Neither ISI nor NRC could give us an explanation for the erroneous reporting of the Houston citation and publication data. We were told by NRC, however, that they did not have the resources to check the accuracy of any of the citation and publication count data presented in their report. Moreover, NRC did not check for misspelled names, a problem that can arise either in the lists of faculty supplied by the included universities or in the citation data base.
3. The time period from which the NRC selected publication counts and citations is somewhat confusing in their report. On page 143, the NRC report refers to the ISI publications and citations data set for the period 1981 to 1992. Yet, on pages 25 and 312, the NRC report refers only to publications during the 1988-1992 period. Again, NRC confirmed that only publications for the 1988-1992 period were used in the count of publications and citations.
4. As indicated in our earlier report, it is sometimes difficult to determine from an individual's name which citations actually belong to that individual. This occurs for such common names as Brown, Jones and Smith. Given that the Social Science Index lists authors by last name and first initial, and on occasion middle initial, and given that a number of individuals in the social sciences share the same last name, we spent a good deal of time checking and rechecking the citation counts for authors with common names. In a small number of cases we were still not confident that we could properly allocate the citations to the right individuals, so we eliminated those individuals from the analyses and presentations that utilize citation counts. The five names with which we had problems were as follows: C. Brown, R. Brown, W. Dixon, E. Jones and J. Smith. In most cases, these names would have fallen out of our analyses anyway because they do not meet other criteria (such as a minimum number of publications or a clear-cut department affiliation). Nevertheless, we apologize to individuals with these names and initials if they feel slighted by exclusion from the departmental evaluations. The same apology goes to any department that may have a faculty member with one of these names and initials.
Moreover, the reader should be aware of the updated Table 6 from our earlier PS article (March 1996, p. 80) published in the June 1996 PS (page 192). Also, Seymour Martin Lipset was inadvertently missing from the Table 5 list of citation leaders (March 1996 PS, p. 79). He has a total of 12,930 citations for the period 1956-1993 and three APSR publications, one of which occurred in the most recent twenty years. Excluding R. Brown and Norman Nie, who receives a large number of citations for the SPSS manual, Lipset is the profession's most frequently cited individual. Because Norman Nie receives a huge number of citations to the SPSS manual, we did not attribute his citations in the most recent twenty years to Chicago, nor were they added to Stanford when assessing the quality of Ph.D. graduates.
5. We thank the APSA office for generously allowing us to use the older directories.
6. Dissertation Abstracts Ondisc is a single database that combines information from the Comprehensive Dissertation Index, Dissertation Abstracts International, American Doctoral Dissertations and Masters Abstracts International. Dissertations from 1861 to the present are in the database.
7. The authors on whom we were unable to locate biographical data fall into four major categories: they are either from other disciplines, from foreign universities, from non-academic institutions, or they have recently retired. The larger percentage of missing biographical information for the earlier twenty-year period arises because, during that time, fewer authors were members of the APSA and more of them appeared to be from outside the United States.
8. For example, we identified a total of sixty-three authors from DAO who received their Ph.D.'s in other disciplines. Our data also showed a total of forty-two authors listed at non-academic institutions (e.g., the Brookings Institution) in 1994 or 1973.
9. A school receives credit for a publication when a faculty member publishes in the APSR, regardless of whether it was a single-authored or multi-authored article. Thus, if a team of four collaborators are all from the same school, that school gets credit for four publications.
10. The equation for the Gini coefficient is G = 1 + 1/N - [2(x_N + 2x_{N-1} + 3x_{N-2} + ... + Nx_1)] / (N^2 * x̄), where N equals the number of APSR authors in the department, x_N is the highest number of APSR publications in the department, x_1 is the smallest, and x̄ is the mean number of APSR publications in the department.
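For readers who want to compute the coefficient themselves, here is a short sketch of the formula in note 10, applied to hypothetical per-author APSR publication counts (the counts are not from the article's data).

```python
# Gini coefficient as reconstructed in note 10:
# G = 1 + 1/N - 2*(x_N + 2*x_{N-1} + ... + N*x_1) / (N^2 * mean),
# with x_N the largest count and x_1 the smallest.

def gini(counts):
    x = sorted(counts)          # ascending: x[0] = x_1 (smallest), x[-1] = x_N (largest)
    n = len(x)
    mean = sum(x) / n
    # Iterate in ascending order so the smallest count gets weight N
    # and the largest gets weight 1, matching the formula above.
    weighted = sum((n - i) * xi for i, xi in enumerate(x))
    return 1 + 1 / n - 2 * weighted / (n ** 2 * mean)

print(gini([1, 1, 1, 1]))   # 0.0   -- perfectly even distribution
print(gini([5, 1, 1, 1]))   # 0.375 -- one author accounts for most publications
```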
11. The purpose here was to determine to what extent the APSR publications were uniformly distributed across the authors who had contributed to the Review, rather than to determine to what extent the articles were distributed across all the members of each department. If we included all members of each department who have no APSR publications, the coefficients would be much higher. The coefficient values are also suppressed by the fact that very few individuals in the profession publish five or more APSR articles. Because it is so difficult to publish in the Review, few departments will ever have a highly skewed distribution for the number of APSR publications contributed by those who published at least once in the Review; hence, Gini coefficients for the number of articles in the APSR should be relatively low.
12. The PVI is calculated by multiplying publications by citations and then dividing by 1,000.
13. It might be argued that the PVI, as we calculated it (number of APSR articles times the number of citations), is dominated by the weight of the citations. To examine this possibility we produced another ranking after setting publication counts equal to citation counts and adding the two numbers together. Setting publications equal to citations was accomplished by dividing the mean number of citations by the mean number of publications, then multiplying the number of publications by the resulting number (185.64). The new ranking with equally weighted publication and citation counts is virtually the same as our original ranking, no doubt because the number of publications and the number of citations are correlated.
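The rescaling in note 13 amounts to multiplying each department's publication count by the ratio of the mean citation count to the mean publication count and then adding the result to the citation count. A minimal sketch follows; the four-department figures are hypothetical, and only the procedure (not the reported 185.64 scale factor) comes from the note.

```python
# Sketch of the equal-weighting check described in note 13, using
# hypothetical department-level publication and citation counts.

def equal_weight_scores(pubs, cites):
    scale = (sum(cites) / len(cites)) / (sum(pubs) / len(pubs))  # mean cites / mean pubs
    return [scale * p + c for p, c in zip(pubs, cites)]

pubs  = [40, 25, 10, 5]           # APSR articles per hypothetical department
cites = [9000, 4000, 2500, 400]   # citations per hypothetical department

scores = equal_weight_scores(pubs, cites)
pvi    = [p * c / 1000 for p, c in zip(pubs, cites)]

# Compare the two rank orders; in this toy case they agree, echoing the
# note's finding that the equal-weight ranking closely tracks the PVI ranking.
print(sorted(range(len(pubs)), key=lambda i: -scores[i]))  # [0, 1, 2, 3]
print(sorted(range(len(pubs)), key=lambda i: -pvi[i]))     # [0, 1, 2, 3]
```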
14. The number of APSR articles for the faculty comes from Tables 3 and 4. The number of articles produced by the graduates of a department comes from Table 5. To compute the number of citations used to calculate the PVI in Table 6, add the number of articles from Tables 3 (or 4, depending on the time period) and 5, then divide the PVI value in Table 6 (after multiplying by 1,000) by the number of articles. For example, to calculate the combined number of citations for Harvard, add 43 articles from Table 3 for 1994 and 44 articles from Table 5 for a total of 87 articles. Multiply the Table 6 PVI (3661.24) by 1,000 and divide by 87 for a total of 42,083 citations. Interested readers can write the senior author to request these values and the PVI values for the fuller set of schools included in the data.
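The Harvard arithmetic in note 14 is easy to reproduce. The helper below simply encodes that back-calculation; the figures used (43 and 44 articles, a Table 6 PVI of 3661.24) are the ones reported in the note.

```python
# Back-calculating combined citations from a Table 6 PVI value, as in note 14.

def combined_citations(articles_table3_or_4, articles_table5, pvi_table6):
    articles = articles_table3_or_4 + articles_table5
    return pvi_table6 * 1000 / articles

# Harvard: 43 + 44 = 87 articles; 3661.24 * 1000 / 87 ~= 42,083 citations.
print(round(combined_citations(43, 44, 3661.24)))  # 42083
```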
References
Christensen, James A. and Lee Sigelman. 1985. "Accrediting Knowledge: Journal Stature and Citation Impact in Social Science." Social Science Quarterly 66:964-76.
Fenton, David W. 1995. "Apparent Anomalies in Reported Data for the Field of Music in the Report of the National Research Council." This report can be accessed at the following World Wide Web site: http://www.bway.net/~dfenton/nrc-report/nrc-rept.html.
Garand, James C. 1990. "An Alternative Interpretation of Recent Political Science Journal Evaluations." Political Science and Politics 23:444-51.
Jackman, Robert W. and Randolph M. Siverson. 1996. "Rating the Rating: An Analysis of the National Research Council's Appraisal of Political Science Ph.D. Programs." Political Science and Politics 29 (June): 155-60.
Katz, Richard S. and Munroe Eagles. 1996. "Ranking Political Science Programs: A View from the Lower Half." Political Science and Politics 29 (June): 149-54.
Klingemann, Hans-Dieter. 1986. "Ranking the Graduate Departments in the 1980s: Toward Objective Qualitative Indicators." Political Science and Politics 19:651-61.
Lambert, Peter J. 1989. The Distribution and Redistribution of Income: A Mathematical Analysis. Cambridge: Basil Blackwell.
Lowery, Robert C. and Brian D. Silver. 1996. "A Rising Tide Lifts All Boats: Political Science Department Reputation and the Reputation of the University." Political Science and Politics 29 (June): 161-67.
Magner, Denise. 1995. "Ratings War: A New Ranking of Doctoral Programs Spurs a Flurry of Departmental Damage Control." Chronicle of Higher Education, October 27, A19.
Miller, Arthur H., Charles Tien and Andrew Peebler. 1996. "The American Political Science Review Hall of Fame: Assessments and Implications for an Evolving Discipline." Political Science and Politics 29 (March): 73-83.
National Research Council (NRC). 1995. Research Doctorate Programs in the United States: Continuity and Change. Washington, DC: National Academy Press.
Robey, J. S. 1982. "Reputation vs. Citations: Who Are the Top Scholars in Political Science?" Political Science and Politics 15:199-200.
Welch, S. and J. R. Hibbing. 1983. "What Do the New Ratings of Political Science Departments Measure?" Political Science and Politics 16:532-40.
A Political Scientist Rides the Talk Radio Circuit
James G. Gimpel, University of Maryland
Old geezers sitting around in barber shops listening to cattle market and farm commodity reports, grousing about community problems, and bragging about their latest hunting and fishing expeditions. That's my vision of AM talk radio listeners formed by my childhood upbringing in a small western Nebraska town. Growing up, I figured the only reason why people listened to talk radio was because there were only three radio stations on the dial in my remote part of an out-of-the-way state. So when my publisher, Allyn and Bacon, decided to hire a publicist to promote my book on the first 100 days of the 104th Congress and the