"A Comparative Analysis of Quantitative and Qualitative Measures of Undergraduate Political Science Studentsí Learning Outcomes"
 
 
 
 
Edward S. Malecki and J. Theodore Anagnoson
Department of Political Science
California State University, Los Angeles
Los Angeles, CA 90032-8226
emaleck@calstatela.edu or tanagno@calstatela.edu

 

 

 

 

Abstract

This paper reports the results from a two-fold assessment of the learning outcomes from the undergraduate major of the Department of Political Science, California State University, Los Angeles, a Masters-level urban, comprehensive institution. A group of 26 term papers were externally assessed by faculty from another comparable political science department, and 66 sophomores, juniors and seniors in both lower division and upper division classes took a sample version of the GRE in Political Science. Two models are posited: a "growth model" in which students learn more as they take more courses and complete more college level work, and a "garbage in/garbage out" model, in which students with weak writing skills and ability to learn exit with those same deficiencies (and those with strong skills exit with a strong content base and skills). Results show that students who have taken more courses in the major do better both in writing and the GRE, even when controlling for GPA and the grade in the English writing class that is prerequisite to political science writing classes. The evidence supports the "growth model" consistently.

 

 

 

 

 

 

Prepared for presentation at the 1999 Annual Meetings of the American Political Science Association, Atlanta, Georgia, September 2-5, 1999. Copyright 1999 by the American Political Science Association.

There is a nationwide movement for all academic disciplines to develop policies, procedures, and measures for assessing student learning outcomes in their disciplines. As Julian, Chamberlain, and Seay (1991) note, the roots of this movement are varied (e.g., wider access to higher education, reports of illiterate graduates, the consumer movement's push for truth in advertising). However, a common theme voiced by the federal government and many state governments in the 1970s was a concern that accrediting agencies were no longer doing an adequate job in making sure that institutions of higher education spent their money wisely or met their stated goals. The United States Department of Education, the Council on Postsecondary Accreditation, and many regional accrediting agencies including the Western Association of Schools and Colleges (ACSCU, 1998) "have endorsed the concept of outcomes assessment as an appropriate tool for evaluating institutional effectiveness" (Julian, Chamberlain, and Seay 1991, p. 206). The nation's governors, in their report "Time for Results: The Governors' 1991 Report on Education," have also strongly endorsed the idea of assessing outcomes.

Given these demands from agencies external to the discipline, one of the central reasons for a political science department to assess student outcomes is to provide data to external constituencies that show the department is meeting its goals and objectives. This, however, is not the only reason for assessing outcomes and it need not be the most important reason. Many departments are concerned about declining enrollment in political science classes. After a long period of nationwide growth in the number of students majoring in political science, many departments are experiencing a decline in both enrollment and majors (Mann, 1996). Because majors and enrollment are the life-blood of any department, there are strong internal reasons to consider the systematic assessment of a political science program that is experiencing significant decline in either majors or enrollment.

Systematic programmatic assessment involves more than simply assessing how much content was learned by students majoring in political science at your university. Systematic assessment also includes studying why students at your university choose political science as a major and why they leave the major. Systematic assessment is also concerned with the evaluation of teaching (Fox and Keeter, 1996) and with how students evaluate the curriculum (e.g., what do they believe is useful even if they did not like learning it, and what do they like even if they don't think it is necessary to learn). Julian, Chamberlain, and Seay (1991, p. 207) point out that programmatic assessment can also include the study of alumni to see how they fare after graduation (e.g., placement in law school, employment opportunities) as well as evaluations of political science interns and/or graduates made by agencies, employers and/or graduate schools. Although the CSLA Department of Political Science engages in many of these forms of programmatic assessment, the primary focus of the present study is on learning outcomes of our majors. The reason for this narrow focus is largely pragmatic: this is one area of programmatic assessment with which the department has little experience.

Typically the first step in developing an assessment plan involves the articulation of a mission statement. The mission statement states the goals the department hopes to achieve and specifies how these goals are related to the goals of the university (Lasher and Kitts, 1998). These goals are in turn subdivided into more specific objectives (Diamond, 1998). Even when all these steps have been completed, there frequently is no clear statement of what the department specifically expects students to learn. Instead of specifying learning outcomes, most departments define the outcome for the major as the completion of so many units of political science courses. For example, at California State University, Los Angeles (CSLA) students are required to take courses in American politics, comparative politics, international relations, political theory, public policy and administration, and public law. Since students do not have to take the same courses to complete these requirements, there is no guarantee that they will complete the degree having been exposed to (let alone having learned) a common content.

Unfortunately, this nebulous set of expectations about what learning outcomes to expect from our majors is not unique to the CSLA department of political science. The mission and curriculum of political science departments at colleges and universities across the country vary significantly (Wahlke 1991). The curricular diversity within the discipline complicates the process of developing useful assessment standards for the discipline and makes it very doubtful that one measure of assessment will fit the needs of all political science departments.

Nevertheless, there are certain common themes that permeate all political science departments. One such theme involves an emphasis on writing skills. Most political science departments serve as significant feeders to law schools (Mann, 1996). This long-standing relationship has affected the curriculum and mode of teaching in political science departments. Because essay examinations are standard in law school, most political science instructors favor the use of essay examinations in their classes. This typically means that political science students get a lot of practice in writing and organizing essays as they progress through the major. Repetition in writing essays should show up as a value-added skill in students who graduate from political science departments. To the extent that political science departments are effective in developing these writing and organizing skills, political science faculty are promoting skills which will be useful in virtually any field of employment for which a college education is desirable.

Political science also has a common set of distinctive concepts, which are emphasized throughout the curriculum. In particular, political science students hear about federalism, separation of powers, checks and balances, constitutionalism, and limited government in a variety of classes as they progress through the major. Because these concepts are used in a variety of different contexts, political science students should develop a much better understanding of the nuances of these concepts by the time they graduate. Since teaching disciplinary content to majors is central to what political scientists do in the classroom, this is an obvious focus for assessment of learning outcomes.

Background of the Study. This paper reports the results of a study of learning outcomes of a beginning and an exiting cohort of political science students at California State University, Los Angeles (CSLA). CSLA is an urban comprehensive university with a predominantly minority student population (81%). The department is midsize, with 12 full-time faculty and 212 undergraduate majors. It also has two separate graduate programs, an MA in Political Science and an MS in Public Administration, but students in neither of these graduate programs are involved in the present study. Given the great variety of political science departments across the nation, it should be obvious that research institutions with much larger faculties and more selectively admitted students will need to develop different benchmarks for their students than those reported in the present study.

Most models of assessment are based on the assumption that students enter the university as freshmen who have already decided that they will be majors in the discipline to be assessed and that they will graduate four years later. Indeed, that is how curricula are written. Students in the first two years of a political science major are typically expected to concentrate on completing their general education requirements and to spend their last two years completing course work in their major and/or minor fields. Assessment models typically assume that students entering the university are prepared to begin university level work. As at most universities, the curriculum of the CSLA Department of Political Science is based on the assumption that students can complete all requirements for the degree within four years.

These assumptions are increasingly unrealistic, especially for students at CSLA.

Despite the fact that the California State University system requires freshmen to be in the upper third of their class based on a combination of grade point average (GPA) and ACT or SAT scores (CSLA General Catalog 1997-1999, p. 37), almost three-fourths of the CSLA freshmen are required to take remedial courses in either mathematics or English writing (Weiss, 1998). Moreover, English is a second language for almost two-thirds of the CSLA student population. Finally, like students across the nation, many CSLA students graduating in political science have started college as undecided or in other majors. Of course, many students who start as political science majors end up changing majors and graduating in other fields so they generally are not included in assessment studies.

Because most CSLA students have extensive work commitments, the typical political science undergraduate goes to school only three out of four quarters a year, taking approximately 10 units per quarter. Given the nature of the CSLA student population, the time to degree varies considerably with relatively few students graduating within four years (typically two or three students per year out of over 200 majors). Moreover, given light course loads and time constraints, some majors end up taking courses out of sequence (e.g., completing lower division requirements after having taken a large number of upper division courses in the major). This means that the time between courses varies considerably from student to student thereby making it difficult to evaluate learning outcomes.

Almost two-thirds of all CSLA students including political science majors are transfers from community college. All of the feeder community colleges offer at least two core political science courses, introduction to American government and introduction to comparative politics. Some offer three to five political science courses. Thus, undergraduate political science majors at CSLA frequently have had a significant introduction to the discipline before they have taken a single course at the university. This complicates the problem of assessing the learning outcomes for these students.

Procedures and Assumptions

The research for the study is based in part on student papers written for credit in an entry-level required lower division writing course (POLS 203) and a required exit-level upper division writing course (POLS 491). Papers were collected and photocopied before being graded by the instructors. Clean copies of the papers were then sent to San Francisco State University (SFSU) to be assessed by SFSU political science faculty according to writing rubrics developed for this purpose. A parallel but independent assessment of student knowledge of disciplinary content was conducted through the administration of a sample version of the Revised Political Science Test for the Graduate Record Examination (GRE). According to the Committee of Examiners for the GRE, the Political Science test is capable of providing sub-scores in American government and politics as well as in comparative politics/international relations. The Department reproduced this sample test and administered it to students involved in the writing assessment study and to a larger set of entering and exiting majors.

The study assumes that the average writing and content scores for the exiting cohort will be higher than the average scores for the beginning cohort. In order to control for differences in experience and ability between the two cohorts, additional data were collected, such as the number of political science courses each student had completed prior to the class in which he or she had written the paper. Other control variables include the students' overall GPA at CSLA, GPA in political science classes, and grades in both the required English writing course (which is prerequisite to all Department writing courses) and the basic American government course (which is prerequisite to all major upper division classes).

Of course, there are many other variables besides student abilities and program quality that can affect learning outcomes. For example, curriculum change both within the major and in general education, turnover in Department faculty, changes in teaching modalities, external factors such as wars or economic conditions affecting student motivation, and changes in student culture can all influence learning outcomes. Although the exiting students experienced faculty turnover and curricular change during their studies at CSLA, these changes were relatively modest and incremental. At any rate, the present study was not able to control for the effect of all these possible variables. It is assumed, perhaps erroneously, that changes in such variables did not create systematic differences between the entering and exiting cohort of students.

The sample GRE was administered to all political science majors in the lower and upper division political science writing classes in fall 1998. In addition, the examination was administered to students in other lower division political science classes as well as another senior class in order to reach entry level and exiting students who were not enrolled in either writing class. No student took the examination more than once. In order to get participation of the instructors, the examinations were not counted toward the course grade and the length of time to administer the examination was controlled. The 1996-1997 Descriptive Booklet contains a sample version of the test. To limit administration time to 30 minutes, only the first 28 of 37 questions were used in this study.

In addition to the examination questions, students were asked how many political science courses they had previously taken. They were also asked if they had taken the basic American government course, and if so where (i.e., Cal State LA, a community college). Students were also asked to identify their option in political science (i.e., general, pre-legal, public administration, or world politics) and to identify writing courses and requirements they had completed.

Students were informed that the test was part of a study of the effectiveness of the political science program and that their score on the test would not affect their grade in the course. They were also told that nobody would be able to answer all the questions correctly. Because the examination is intended for seniors who are applying to graduate school, and because the lower division courses included seniors (who still had not completed the lower division requirements), there was no fair way for the examinations to be used in the grading process. The examinations were collected by a student and placed in an envelope addressed to the principal investigator of the study.

Students were asked to supply their identification number on the examination, but 16 of the 66 students participating in the study did not supply this information. This created missing data problems for some types of analyses. Information from student records was used to check the accuracy of student responses to the questionnaire and to obtain additional information such as the Grade Point Average (GPA).

The authors assumed that as students completed more courses in the major, scores on their papers and their GRE examinations would improve. We did not, however, assume that there would be a strong relationship between the evaluation scores for the paper and scores on the GRE. In the first place, the GRE is a measure of general knowledge about the discipline, in contrast to the papers, which focus on a narrow aspect of the discipline. In the second place, the score on the papers is not simply a reflection of how much content students have learned. It is also a reflection of how well the students have written the papers. Since the GRE does not measure writing ability, there is no reason to assume that GRE scores will be strongly related to the paper scores.

Hypotheses:

The basic model underlying these hypotheses is a growth or value added model. In other words, it is assumed that as students complete courses in the major they will become more knowledgeable about disciplinary content and will become more skillful writers.

This is not, however, the only available model. GIGO or "garbage in and garbage out" is an alternative model. In the GIGO model it is assumed that students who are admitted with weak writing skills and/or little ability to learn and to retain content will exit with these same deficiencies (probably sooner than later). Conversely, those with strong skills and ability will exit the university with a strong content base and strong writing skills (i.e., QIQO or "quality in and quality out"). In the GIGO/QIQO model, entry level essay scores and test scores as well as GPA are the best predictors of GRE and paper scores. For example, in the GIGO/QIQO model, controlling for these variables should cause any relationship between GRE scores and the number of courses taken to disappear. In contrast, the growth model predicts that controlling for GPA should strengthen the relation between GRE scores and the number of prior major courses completed since it is assumed that as knowledge accumulates student performance as measured by GPA will improve.

Writing Sample Findings

The study reports findings on two different, but overlapping, sets of students. The larger group includes 66 students who took the sample GRE political science test. For this group the primary focus will be on the analysis of the scores on the sample test. In addition, there is a smaller group of 26 students whose sample papers were evaluated. Most of these 26 students are part of the larger group, but separate analysis is justified because of the writing evaluations. Moreover, because the identity of these 26 students was known, the principal investigator was able to get a complete set of data for all these students from student records. Ironically, for this smaller group the only variable with significant missing data is the sample GRE test score, since some of these students did not put their identification number on the answer sheet. Nevertheless, the authors were able to track down the identity of most of these students from their answers to the questions preceding the examination. Therefore, GRE scores are missing for only 4 of the 26 students in the writing sample; some of these students may not have been in class the day the examination was administered.

The papers were given an overall rating and were also rated on four separate dimensions: argument and topic, evidence/analysis, expression, and form/citation. The rating scale used three categories with three to five letter grades for each category:

Excellent/Proficient: A, A-, B+
Satisfactory/Competent: B, B-, C+, C
Unsatisfactory: C-, D+, D, D-, F

For purposes of analysis, these ratings were converted into numeric scores with 0 points for an F and 11 points for an A. Thus, the highest score a student could achieve on any dimension of the paper was 11 and the lowest was zero.
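The conversion from letter ratings to numeric scores is mechanical; the following minimal sketch in Python illustrates it under the twelve-step scale described above. The dictionary and function names are ours for illustration and are not the instrument used in the study.

```python
# Sketch of the letter-grade-to-score conversion described in the text.
# The twelve-step scale (F = 0 ... A = 11) follows the paper; names are illustrative.

GRADE_POINTS = {
    "F": 0, "D-": 1, "D": 2, "D+": 3,
    "C-": 4, "C": 5, "C+": 6,
    "B-": 7, "B": 8, "B+": 9,
    "A-": 10, "A": 11,
}

def score(letter_grade: str) -> int:
    """Convert a rater's letter grade on one dimension to a 0-11 score."""
    return GRADE_POINTS[letter_grade.strip()]

# Example: an "Excellent/Proficient" rating of B+ maps to 9 points.
assert score("B+") == 9
```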

Table 1

                        Lower Division    Upper Division    Significance Level
                        Mean Score        Mean Score        (two-tailed F test)

Overall Paper                6.93              8.75                .070
Argument                     7.21              8.83                .113
Evidence                     7.21              8.83                .104
Expression                   6.50              7.31                .134
Form                         7.21              8.75                .206
Sample Size                 14                12
 

The writing sample consisted of 14 students who submitted papers in POLS 203, the introductory political science writing class, and 12 in POLS 491, one of the upper division writing classes. Student papers were randomly assigned an identification number from one to twenty-six. Clean copies of the papers were sent to San Francisco State without student names or any indication of the course number or class level. Therefore, the SFSU evaluators did not know whether they were scoring a paper from a lower division class or an upper division class.

Hypothesis 1: "Seniors in the major will have higher scores on their writing sample than entry level students, even when controlling for variables such as GPA and the grade in the English writing class that is prerequisite to political science writing classes."

The mean scores on the papers are reported in Table 1. In addition, Table 1 indicates the significance levels for the difference in means between the two classes. In POLS 203, the mean overall score on the paper was 6.93 (B-/C+) with a standard deviation of 2.59. For POLS 491, the mean overall score on the paper was 8.75 (B/B+) with a standard deviation of 2.26. The distribution of paper scores was essentially rectangular. The difference in overall paper scores was in the predicted direction, but the F for the one-way ANOVA was only significant at the .070 level. Although none of the scores on "argument," "evidence," "expression" or "form" were significant (P>0.10), all the mean differences were in the predicted direction: 7.21 < 8.83, 7.21 < 8.83, 6.5 < 8.25, and 7.21 < 8.75 respectively for the students in the lower division and upper division classes.
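The significance levels reported in Table 1 are based on a one-way analysis of variance comparing the two classes. A minimal sketch of such a comparison is shown below; the score lists are placeholders with the correct group sizes, not the study's actual ratings.

```python
# Sketch of the one-way ANOVA behind Table 1: overall paper scores for the
# lower division (POLS 203) and upper division (POLS 491) groups.
# The lists below are placeholders, not the study's actual ratings.
from scipy.stats import f_oneway

pols_203_scores = [5, 6, 7, 7, 8, 9, 6, 7, 8, 5, 7, 8, 6, 8]   # 14 placeholder scores
pols_491_scores = [8, 9, 10, 7, 9, 8, 9, 10, 8, 9, 9, 9]       # 12 placeholder scores

f_stat, p_value = f_oneway(pols_203_scores, pols_491_scores)
print(f"F = {f_stat:.2f}, p = {p_value:.3f}")
# With only two groups, this F test is equivalent to a two-tailed t test on the means.
```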

Granted that the differences in mean paper scores are in the predicted direction (albeit not significant because of the very small sample sizes), could these differences simply be due to the fact that the cohort of students in POLS 491 was made up of better students than the cohort in POLS 203? In other words, could it be that the exiting cohort came to the major with better writing skills, more ability, and more content knowledge than the entering cohort?

The findings in Table 2 clearly indicate that the differences in paper scores are not due to significant differences between the exiting and entering cohorts in entry level writing skills as measured by the grade in the standard English writing course, overall ability as measured by GPA, or content knowledge as measured by GRE scores.

Table 2

                        Lower Division    Upper Division    Significance Level
                        Mean              Mean              (two-tailed F test)

ENGL 101 grade               3.00              3.00               1.00
GRE score                   13.46             14.33               .639
GPA                          2.93              2.89               .859
Sample Size                 14                12

Both cohorts have comparable ability and entry level writing skills. Although the students in the upper division class appear to have learned more political science content than the students in the lower division class, this difference is not significant and by itself is unlikely to account for the differences in the writing scores. In short, the findings in Table 1 and Table 2 are consistent with Hypothesis 1.

Growth Model. Are the differences in the mean paper scores consistent with a growth model? The growth model is based on the assumption that students learn more as they take more classes and complete more college level work and more work in the major. The findings in Table 3 show that the exiting cohort of students has completed significantly more courses in political science, more total units, and more units at CSLA than the entering cohort of majors.

Table 3

                        Lower Division    Upper Division    Significance Level
                        Mean              Mean              (two-tailed F test)

Prior POLS courses           6.29             14.25               .000
CSLA Units                  43.79            101.71               .004
Total Units                136.25            169.67               .010
Sample Size                 14                12

But the exiting and entering cohorts do not differ significantly in their entry level writing skills, GPA, or knowledge of the content. This suggests that it is the significantly greater number of political science courses and overall greater experience at the college level that contribute to the exiting cohort's stronger writing skills, as reflected in higher paper scores. Thus, the findings presented in Tables 1, 2 and 3 are more consistent with a growth model than with the GIGO/QIQO model.

While the previous findings provide some support for the growth model of student learning, basing the analysis on the comparison of students in the lower and upper division writing classes understates that support because neither class is a pure sample of entering and exiting political science majors. On the one hand, one of the students in the upper division writing course was a senior but only a minor in political science. That student had completed six prior political science courses. If this student were excluded from the upper division writing class, each of the remaining students would have completed twelve or more political science classes prior to the quarter in which they turned in their sample paper and took the sample GRE examination. On the other hand, two of the fourteen students in the lower division course had already completed more than twelve political science classes. If these two students were excluded from the lower division class, none of the remaining students would have completed more than nine political science courses prior to the quarter in which they turned in their sample paper and took the sample GRE examination.

Even if these changes were made, it would still be over-reaching to call the students in the lower division political science writing class strictly entry level. The average number of total quarter units completed by students in the lower division class is 136 and no student had completed fewer than 85 units prior to that class. Since junior status at CSLA is equivalent to the completion of 93 quarter units, all but two of the students in the lower division writing class are at least juniors. Given that over two-thirds of all political science majors at CSLA are transfer students, these findings are not very surprising. They do, however, suggest that another mode of analyzing the entering and exiting cohorts may be more fruitful.

In particular, it suggests a comparison of political science students who have already completed a large number of political science courses (e.g., ten or more) with students who have completed a relatively smaller number (e.g., fewer than ten). For the next stage of analysis, the 26 students in the writing sample were divided into two equal groups. One group consisted of students who had completed 10 or more political science classes prior to the quarter in which they turned in their sample paper and took the sample GRE examination; the other group consisted of students who had completed fewer than 10 such classes prior to that quarter.
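The grouping and the dimension-by-dimension comparisons reported in Table 4 can be expressed compactly. The sketch below assumes the 26 writing-sample records are in a DataFrame with illustrative column names; this is not the authors' actual data file.

```python
# Sketch of the grouping used for Table 4: split the writing sample at ten prior
# POLS courses and compare mean ratings on each dimension of the paper.
# Column names (prior_pols_courses, overall, argument, ...) are illustrative.
import pandas as pd
from scipy.stats import f_oneway

# papers = pd.read_csv("writing_sample.csv")   # hypothetical input file

def compare_groups(papers: pd.DataFrame, cutoff: int = 10) -> pd.DataFrame:
    """Compare mean paper ratings for students below vs. at/above the course cutoff."""
    many = papers[papers["prior_pols_courses"] >= cutoff]
    few = papers[papers["prior_pols_courses"] < cutoff]
    rows = []
    for dim in ["overall", "argument", "evidence", "expression", "form"]:
        f_stat, p = f_oneway(few[dim], many[dim])
        rows.append({"dimension": dim, "mean_few": few[dim].mean(),
                     "mean_many": many[dim].mean(), "p": p})
    return pd.DataFrame(rows)
```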
 

Table 4

                        Fewer than ten        Ten or more           Significance Level
                        prior POLS courses    prior POLS courses    (two-tailed F test)
                        Mean Score            Mean Score

Overall Paper                6.77                  8.77                   .044
Argument                     7.00                  8.92                   .056
Evidence                     6.85                  9.08                   .021
Expression                   6.54                  8.08                   .189
Form                         6.54                  9.31                   .017
Sample Size                 13                    13
 

Findings from this perspective are reported in Table 4. Although completing more political science courses is associated with higher mean scores on all dimensions of the paper, the "expression" scores are not significantly better for the students with more political science courses. Overall these findings suggest that as students complete more political science courses (and presumably more writing assignments in the discipline), they incrementally learn how to use political science evidence more effectively and also learn to use the proper forms of documenting this evidence. This suggests that the contribution of additional political science course work to writing skills is primarily discipline specific (e.g., learning how to use and cite disciplinary evidence) as opposed to generic improvement of writing skills.

The inference that the influence of political science course work on writing skills is primarily discipline specific is reinforced by the findings in Table 5. Students with fewer than ten prior political science courses do not differ significantly from students with ten or more political science courses in their grades in the basic English writing course. Thus, both sets of students started the major with approximately the same basic writing skills. Moreover, they started with approximately the same ability levels as measured by GPA. Although the group with more political science courses has a higher mean "expression" score, essentially an evaluation of basic writing skills, this difference, unlike the differences in the average "evidence" and "form" scores, is not significant (Table 4).

Finally, the higher average GRE score in the group with more political science classes (but lower GPA) is also consistent with the notion that the influence of these major courses is discipline specific.

Table 5

                        Fewer than ten        Ten or more           Significance Level
                        prior POLS courses    prior POLS courses    (two-tailed F test)
                        Mean                  Mean

ENGL writing grade           3.07                  2.92                   .546
Overall GPA                  3.00                  2.82                   .370
POLS GPA                     2.90                  2.88                   .943
Am Gov't grade               2.87                  2.69                   .583
GRE score                   13.18                 14.45                   .484
Sample Size                 13                    13
 

Hypothesis 2: "Scores on the writing samples will either be unrelated or very weakly related to scores on the sample GRE political science test, even when controlling for variables such as GPA."

Although it might be argued that the GRE score provides an indirect measure of a student's ability to use political science evidence, it clearly measures neither a student's ability to use documentation and proper forms of citation nor a student's ability to write. Moreover, unlike a writing assignment, which asks students to narrow their topic to a very limited segment of disciplinary content, the GRE is aimed at assessing students' general knowledge of the discipline's content. Thus, there is no reason to assume a strong connection between writing scores and scores on the GRE, and that is the result in the present study. The Pearson correlation coefficients between scores on the sample GRE examination and the scores on "argument," "evidence," "expression," "form" and the overall evaluation are not significant and quite modest (r < .20 except for "expression," where r = .32, p > .14). Controlling for both overall GPA and the student's GPA in political science does not change these results. Thus, these findings are consistent with Hypothesis 2.
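For readers who wish to replicate this check, the sketch below computes the zero-order Pearson correlation and then a simple partial correlation that removes the linear effect of one control (GPA) by correlating residuals. The array names are illustrative; the study's raw data are not reproduced here.

```python
# Sketch of the correlation check behind Hypothesis 2: Pearson r between GRE
# scores and a paper rating, then the same relationship with GPA partialled out.
# Variable names are illustrative assumptions, not the authors' data set.
import numpy as np
from scipy.stats import pearsonr

def partial_corr(x, y, control):
    """Correlate x and y after removing the linear effect of `control` from each."""
    x, y, control = map(np.asarray, (x, y, control))
    x_resid = x - np.polyval(np.polyfit(control, x, 1), control)
    y_resid = y - np.polyval(np.polyfit(control, y, 1), control)
    return pearsonr(x_resid, y_resid)

# Usage (hypothetical arrays of equal length):
# r, p = pearsonr(gre_scores, expression_scores)            # zero-order correlation
# r_part, p_part = partial_corr(gre_scores, expression_scores, gpa)
```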

Assessment Implications. The evidence on writing skills taken as a whole provides more support for a growth model than for a GIGO/QIQO model. Nevertheless, the findings on the students' "expression" scores are not inconsistent with the GIGO/QIQO model. It is clear that the political science program at CSLA is more effective in getting students to learn how to use political science "evidence" and "forms" than it is in getting students to learn how to improve their general ability to construct arguments and to express themselves effectively in written English. In order to achieve significant improvement in learning outcomes involving these basic writing skills, the political science department at CSLA may want to consider making changes in the curriculum, in the way the existing curriculum is taught, or both.

Any change in curriculum and/or in teaching which places greater emphasis on improving basic writing skills is likely to reduce the amount of time and energy devoted to getting political science students to learn the content of the discipline. Therefore, in order to make an informed decision about such a change in emphasis, the department needs to know whether or not its political science program is helping its majors significantly improve their knowledge of the discipline's content as they move through the program. The basic aim of the program should be to help students at all ability and skill levels to improve their knowledge of political science. In other words, even though students receiving "A" grades presumably know more than students who receive "C" grades, both types of students should show growth in their knowledge of political science content as they progress through the program. If a department has evidence that indicates its majors are not showing significant growth in their knowledge of the discipline as they complete their program, then it is unlikely that the department will devote more time and effort to teaching basic writing skills.

GRE Sample Findings

As noted above, 66 students in required political science classes at both the lower and upper division levels took a sample GRE political science test in fall 1998. Only the first 28 of the 37 questions on the sample test were used. The GRE scores are presented in raw-score form, that is, as the number of questions that the student answered correctly. The authors also divided the 28 test questions into subsections of American politics, international relations, comparative politics, and political theory. Raw scores were then calculated for each subsection.

The assessment study was based on the assumption that the students in the lower division classes could be treated as an entering cohort of majors and the students in the upper division classes could be treated as an exiting cohort of majors. This assumption held true for the upper division classes, in which all but one of the 19 students (a senior who was a political science minor) had completed ten or more political science classes prior to taking the examination. However, more than one-fourth of the 47 students in the lower division classes who took the GRE had already completed ten or more political science courses. Given the large number of upperclassmen in the lower division classes, it simply was not valid to treat the students in the lower division classes as an entering cohort of majors to be compared with the students in the upper division classes. It was possible, however, to compare the students by dividing them into two groups based on the number of political science courses they had completed prior to taking the sample GRE examination.

Number of Prior Major Courses. As with the previous analysis of the grades on the term papers, students were divided into two groups. Thirty-five students, including 34 of the 47 in the lower division classes, were placed in the group that had completed fewer than 10 courses in political science prior to taking the sample GRE. Thirty-one, including 18 of the 19 students in the upper division classes, were placed in the group that had completed 10 or more political science courses. A comparison of the two groups in terms of mean scores for the GRE overall and the four subsections of the test is shown in Table 6. Significance levels are based on F values from a one-way analysis of variance.

Table 6

GRE Component (range)        Fewer than 10       10 or More          Significance Level
                             POLS Courses        POLS Courses        (two-tailed test)
                             Mean Score          Mean Score

Overall GRE (0-28)               12.2                13.9                 0.11
American (0-9)                    4.5                 4.8                 0.53
IR (0-8)                          4.2                 4.8                 0.16
Comparative (0-7)                 2.3                 2.8                 0.22
Theory (0-4)                      1.1                 1.5                 0.10
Sample Size (N)                  35                  31

As was true with the paper scores above, the differences between those who have completed 10 or more political science courses and those who have taken fewer than 10 courses are in the predicted direction. In the case of the political theory subsection scores, the difference is significant at p = .10, and in the case of the overall GRE score, the difference is significant at p = .11.

Table 7

Type of GRE Questions        Fewer than nine      Nine or more         Significance Level
and Range of Scores          prior POLS courses   prior POLS courses   (two-tailed F test)
                             Mean Score           Mean Score

Overall (0-28)                   11.61                14.20                  .01
American (0-9)                    4.39                 4.91                  .21
IR (0-8)                          3.94                 4.91                  .02
Comparative (0-7)                 2.23                 2.83                  .08
Theory (0-4)                      1.06                 1.54                  .04
Sample Size (n)                  31                   35

However, unlike the case of the writing sample (N = 26), where marginal shifts in the cutting point would not have affected the results, in this case (N = 66) marginal shifts do affect the results. For example, if the division were made at nine classes rather than ten, most of the differences between the two groups become statistically significant (see Table 7). Given such differences in findings based on a modest move of the cutting point used to divide the sample into two groups, an alternative analysis seems preferable.
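The sensitivity of the two-group results to the choice of cutting point can be checked directly. The following sketch re-runs the overall GRE comparison at several cutoffs; the DataFrame and column names are illustrative assumptions, not the authors' data file.

```python
# Sketch of the cutting-point sensitivity check: re-run the two-group comparison
# of overall GRE scores at several cutoffs for the number of prior POLS courses.
# `students` is assumed to hold one row per examinee with illustrative columns
# `prior_pols_courses` and `gre_score`.
import pandas as pd
from scipy.stats import f_oneway

def cutpoint_sensitivity(students: pd.DataFrame, cutoffs=(8, 9, 10, 11)):
    for c in cutoffs:
        low = students.loc[students["prior_pols_courses"] < c, "gre_score"]
        high = students.loc[students["prior_pols_courses"] >= c, "gre_score"]
        f_stat, p = f_oneway(low, high)
        print(f"cutoff {c:2d}: n = {len(low)}/{len(high)}, "
              f"means = {low.mean():.1f}/{high.mean():.1f}, p = {p:.3f}")
```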

Since both the GRE score and the number-of-prior-major-courses variable are interval/ratio level, the GRE score can be regressed on the number-of-prior-major-courses variable. Although the results of this regression show that only 7% of the variance is explained, the number-of-prior-major-courses variable is significant, with a t-value of 2.19 (p < .032). A more elaborate model using this and other variables is shown in Table 8.

Hypothesis 3: Seniors in the major will have higher scores on the sample GRE political science test than entry level students, even when controlling for variables such as GPA.

For this test, the number-of-prior-major-courses variable is used as a proxy for class standing (i.e., senior, junior, sophomore, or freshman status). GPA is used as a control variable. Using the GRE score as the dependent variable, the results are shown in column 1 of Table 8 (equation 1, hypothesis 3). In general, the data support this hypothesis, with both the number-of-prior-major-courses and the GPA variable significant and in the hypothesized direction. The F statistic is significant, and R2 is in the .26 range.
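A regression along the lines of equation 1 in Table 8 can be sketched as follows, assuming a `students` DataFrame with illustrative columns `gre_score`, `prior_pols_courses`, and `gpa` (as in the sketches above); this is not the authors' estimation code.

```python
# Sketch of the regressions discussed above: first the simple regression of GRE
# score on prior POLS courses, then Table 8 equation 1, which adds GPA as a
# control (the test of Hypothesis 3). Column names are illustrative.
import pandas as pd
import statsmodels.api as sm

def table8_equation1(students: pd.DataFrame):
    """Regress GRE score on prior POLS courses, then add GPA (Table 8, eq. 1)."""
    X1 = sm.add_constant(students[["prior_pols_courses"]])
    simple = sm.OLS(students["gre_score"], X1, missing="drop").fit()

    X2 = sm.add_constant(students[["prior_pols_courses", "gpa"]])
    eq1 = sm.OLS(students["gre_score"], X2, missing="drop").fit()
    return simple, eq1

# Usage (hypothetical file): simple, eq1 = table8_equation1(pd.read_csv("gre_sample.csv"))
```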

Similar regressions (not shown) were run for the American politics, international relations, comparative politics, and political theory subsections of the GRE. The number-of-prior-major-courses variable is most significant with the theory score, perhaps because students wait until their senior year to take the required political theory course. The scores for the other subsections are less predictable than for either theory or the overall GRE score. In fact, in the equation predicting the comparative politics subsection score, neither of the independent variables is significant.

Hypothesis 4: Students in the World Politics option will have higher GRE scores on the comparative politics and international relations questions than students in the other options.

The GRE sample includes 18 students in the General political science option, 21 in the Pre-legal, 15 in the Public Administration, 10 in the World Politics, and 2 in the American Politics option. The two students in the American Politics option are not included in Table 9 because of their small number. Table 9 shows the GPA as well as the mean scores on the GRE and on the four subsections of the GRE for majors in the General, Pre-legal, Public Administration, and World Politics options.

 
Table 8
Regressions of Graduate Record Exam Score on Various Independent Variables
For Purposes of Testing Hypotheses 3, 4, and 5 as well as Growth Model

Equation                                             1                 2                 3                 4                 5

Hypothesis                                         3                 4                 5                 G1                   G2

N                                                       53                53               53               15                47

Dependent                                     GRE         Comparative     GRE           GRE            GRE
Variable                                         Score           Score           Score          Score          Score

Independent
Variables:                                     1, 2             1, 2, 6-8         1, 2, 3           1, 4           1, 2, 5

1) Number of prior
POLS courses                                 0.23**           0.06*            0.22**        0.38*          0.27***
                                                      (2.43)             (1.79)            (2.38)         (1.99)          (2.76)

2) GPA                                           2.94****        0.61*            2.88****                       2.30**
                                                      (3.50)             (1.81)            (3.39)                             (2.27)

3) Transfer Student =1;                                                                0.63
Other = 0.                                                                                   (0.61)

4) English Placement Test,                                                                                0.12
Total Score                                                                                                     (0.89)

5) GPA in English 101                                                                                                        -0.08
                                                                                                                                         (-0.10)

Student's option within the BA
     in Political Science:
6) World politics option =1;                                     0.33
     Other = 0.                                                         (0.51)
7) Public administration = 1;                                   -0.89*
     Other = 0.                                                       (-1.72)
8) Pre-legal option =1;                                           -0.59
     Other = 0.                                                       (-1.20)

F                                                      8.79****       1.97              5.91***         2.92*          4.22***
Prob > F                                      (0.0006)           (0.10)            (0.01)            (0.09)          (0.01)
R2                                                   .26                  .17                .24                 .33               .23
Adjusted R2                                   .23                   .09               .22                 .21                .17

Notes:           *=significant at 0.10. **= significant at 0.05. *** = significant at 0.01.

  **** = significant at 0.001.
 
  t-values in parentheses.
  Comparative Score is the score on the comparative politics subsection of the sample GRE exam.
 
The last column of Table 9 shows that students in the Pre-legal option have the highest GPA, followed in order by students in the Public Administration, World Politics, and General options. The difference in GPA between options is statistically significant (p < .02) using a two-tailed F test.
Table 9
GRE and Subsection scores by Option within the Major

Option                                 GRE         American         IR         Comp.         Theory         GPA
                                                               Politics                        Politics

General                               12.5             4.6                 4.0           2.7                 1.2             2.4
Pre-legal                              13.0            4.7                  4.4           2.4                 1.5             3.1
Public Administration         13.5             5.2                 4.9            2.1                1.3             2.8
World Politics                     12.7             4.0                 4.4            3.1                 1.2             2.6

Note: highest score in each column is underlined.

The differences in GPA between the options are not, however, replicated in GRE scores. World Politics students, despite a relatively low GPA, are highest in the comparative politics subsection of the GRE and second highest on the international relations subsection. Public Administration students have the highest mean score on the overall GRE and the highest mean score in both the American politics and international relations subsections. The option with the highest GPA, the Pre-legal, has the highest mean test score only in the political theory subsection.

A regression (Table 8, equation 2, hypothesis 4) of the comparative politics subsection score on GPA, the number-of-prior-major courses variable, and dichotomous variables for the Pre-legal, Public Administration, and World Politics options resulted in no significant coefficients at the 0.05 level. Three coefficients, however, were significant at the 0.10 level, but the dichotomous variable for majoring in the World Politics option was not among them. Overall this equation yielded a poorly fitting regression. In other words, after controlling for both the number-of-prior-major-courses variable and GPA, there is no significant difference between the students in the World Politics option and students in the other options on the comparative politics subsection score. In addition, none of the differences between the options in total GRE score or other subsection scores approaches statistical significance.

Overall the data do not provide much support for Hypothesis 4. Although students in the World Politics option did score higher than other options on questions dealing with comparative politics, this difference is not statistically significant. Moreover, in terms of GRE questions dealing with international relations, Public Administration students scored higher than students in the World Politics option. If it is assumed that students in the World Politics option have more exposure to the content of international relations than students in other options, their scores on the international relations questions of the GRE do not reflect this advantage.

Hypothesis 5: Controlling for the total number of political science courses that students have completed, the writing and GRE scores of transfer students will not be significantly different from those of native students.

To test this hypothesis for the 66 students in the GRE sample, a dichotomous variable for transfer status (1 = a transfer student, 0 = a native CSLA student) was added to the number-of-prior-major-courses variable and the student's GPA. The GRE score is regressed on all three variables in equation 3 of Table 8. Because some of the 66 students who took the GRE exam chose to remain anonymous, GPA information is available for only 53 students. Within this group 22 are native students and 31 are transfers. Twenty-seven of the 31 transfers are from community colleges. The results in Table 8 show that both the number-of-prior-major-courses variable and the GPA variable are virtually unchanged from the results reported for equation 1, and the dichotomous variable for transfer status is insignificant. The GRE scores of transfer students are indeed not significantly different from those of native students.

The results for the 26 students in the writing sample are not entirely consistent with the findings above, perhaps because only five of the 26 students in the writing sample are native CSLA students. Although the 21 transfer students in the writing sample have higher mean scores on the overall evaluation of the paper and on "argument," "evidence," and "form" (8.2 vs. 7.67, 8.6 vs. 7.8, 8.8 vs. 7.76, and 8.8 vs. 7.71, respectively), none of these differences is statistically significant. The five native CSLA students have a slightly higher mean score on "expression" (7.38 vs. 7), but this too is not statistically significant. The native CSLA and transfer students in the writing sample do, however, differ significantly in terms of the mean number of prior major courses completed, 13.8 and 9 respectively (t = 1.8188, two-tailed p = .08).

In the writing sample, controlling for the number of prior major courses in regression equations that include the transfer status variable yielded mixed results for the paper scores. On the one hand, the native CSLA students and the transfer students are not significantly different in terms of scores on "argument," "expression," and the overall evaluation of the paper. On the other hand, there are significant differences between native CSLA and transfer students on "evidence" and "form" scores (p = .042 and .052, respectively). For "evidence" scores the regression explains over 17% of the variance; the t-value for the number-of-prior-major-courses variable is 2.547 (p = .018). For "form" scores the regression explains 16% of the variance; the t-value for the number-of-prior-major-courses variable is 2.480 (p = .021).

In short, the findings for the GRE scores and the writing sample scores on "argument," "expression," and overall evaluation of the paper are consistent with hypothesis 5: there are no statistically significant differences between transfer students and native CSLA students controlling for the number of prior major courses. However, for the writing sample the findings on "evidence" and "form" scores are inconsistent with hypothesis 5: controlling for the number of prior major courses, transfer students do have significantly higher scores on "evidence" and "form."

Growth Model. In the GIGO/QIQO model, entry level essay scores and test scores as well as GPA are the best predictors of GRE scores. If GIGO/QIQO is the correct model, controlling for these variables should make any relationship between GRE scores and the number-of-prior-major-courses variable disappear.

Unfortunately, SAT scores were available for only seven students, so this variable was not used to test this hypothesis. English Placement Test (EPT) scores were available for 15 students, and grades in English 101 were available for 47. Both of the latter variables are used as indicators of "entry level essay scores and test scores." Again referring to Table 8 (equation 4, hypothesis G1), the number-of-prior-major-courses variable is significant at 0.07 (but with only 15 observations). The EPT total score variable is not significant at all. A stronger test of this model is also presented in Table 8 (equation 5, hypothesis G2). This equation uses the GPA in English 101 as an indicator for "entry level essay scores and test scores." Comparing this equation with equation 1 in Table 8 reveals that this test produces little change in the relationships between GRE scores and the two main independent variables: GPA and the number-of-prior-major-courses variable. Both relationships remain statistically significant, but the English 101 GPA variable is not significant.

In contrast to the GIGO/QIQO model, the growth model predicts that controlling for GPA should strengthen the relationship between GRE scores and the number of prior major courses. That prediction is based on the assumption that as knowledge accumulates, student performance as measured by GPA should improve. This is, of course, exactly the finding made under Hypothesis 3 above. All in all, the evidence from the sample GRE data set, like the evidence from the writing sample, tends to support the growth model rather than the GIGO/QIQO model. Although these tests indicate that the undergraduate political science program at CSLA is producing some growth in learning in both writing skills and overall content, the analysis to this point has not identified the content areas in which the department has been especially successful.

Content Differences. The differences in scores for the various subsections of the GRE can be used to point out content areas where the department has done well and "not quite as well." The mean and median scores for the exam and its subsections are shown in Table 10:
 

Table 10

                  Number of     Mean      Median     % of Correct     Sample Size
                  Questions     Score     Score      Responses        (N)

GRE                   28        12.98       13          46.4%             66
American               9         4.67        5          51.9              66
IR                     8         4.45        5          55.7              66
Comparative            7         2.55        2          36.4              66
Theory                 4         1.32        1          33.0              66

Although the political theory raw scores are the lowest, they are also based on the fewest questions, only four. The percentage of correct responses for each subsection of the GRE is a more useful statistic for determining how well the department has done in different content areas. Using this standard, the department at CSLA has done best in international relations and worst in political theory. The differences in the mean percentage of correct responses for the four subsections of the GRE are statistically significant using an analysis of variance (p<.001).
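One simple way to reproduce the subsection comparison is sketched below: each subsection raw score is converted to a percent correct, and the four sets of percentages are compared with a one-way ANOVA. The column names and question counts follow Table 10; the paper does not say whether its analysis of variance adjusted for the fact that all four scores come from the same 66 students, so this should be read only as an approximation of the reported test.

```python
# Sketch of the subsection comparison discussed above: convert raw subsection
# scores to percent correct and test whether the four subsection means differ.
# Column names are illustrative; question counts per subsection follow Table 10.
import pandas as pd
from scipy.stats import f_oneway

N_QUESTIONS = {"american": 9, "ir": 8, "comparative": 7, "theory": 4}

def subsection_anova(students: pd.DataFrame):
    """One-way ANOVA on percent-correct scores across the four GRE subsections."""
    pct = [100 * students[sub] / n for sub, n in N_QUESTIONS.items()]
    return f_oneway(*pct)
```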

Surprisingly, American politics (rather than comparative politics) is the subsection that is not statistically different from the international relations subsection. The comparative politics and the theory subsections are significantly different from both the American politics and international relations subsections but not from one another. Since the same faculty teach classes in both comparative politics and international relations, it is unlikely that the difference in percent correct for comparative politics and international relations subsections is faculty related. Infrequent offerings of courses in certain areas such as Europe and Africa may be a more important cause of the low percentage of correct answers in the comparative politics subsection.

The low percentage of correct answers in the political theory subsection of the test is disappointing, but not unexpected. Because students in the department have traditionally struggled in political theory courses, the department recently added a lower division core course in theory to the curriculum. The evidence from this study indicates that this was an appropriate move.

It is not surprising that CSLA students did relatively well in the American politics subsection of the GRE test. Most of the courses in the curriculum focus on various aspects of American politics. What is surprising is that our students apparently learned a little more about international relations than about American politics. At this point the authors do not have data that can explain this result, which suggests that another project investigating student knowledge in the different subfields of political science in more depth is warranted. Further study is especially warranted in light of the fact that the CSLA undergraduate major has Public Administration and Pre-legal options in addition to the General option.

Conclusions:

1) Seniors in the major do have higher scores on their writing sample than entry level students, even when controlling for variables such as GPA and the grade in the English writing class that is prerequisite to political science writing classes. The program at CSLA is apparently more effective in getting students to learn how to use political science "evidence" and "forms" than in improving basic writing skills.

2) Scores on the writing sample are very weakly related to scores on the sample GRE political science test, even when controlling for variables such as GPA. A comparative analysis of quantitative and qualitative measures indicates that writing skills cannot be usefully measured by a quantitative test such as the GRE and that research papers are not useful for measuring student knowledge of general disciplinary content.

3) Seniors in the major do have higher scores on the sample GRE political science test than entry level students, even when controlling for variables such as GPA. Thus, requiring more courses in the major is likely to increase the amount of disciplinary content students learn.

One of the hypotheses, Hypothesis 4, is not confirmed, and the results for the fifth hypothesis are mixed. Finally, the findings in this study support an underlying growth model rather than a GIGO/QIQO model of student learning. Thus, requiring more courses in the major is likely to increase both the amount of disciplinary content and the amount of discipline-specific writing skill students learn.

In addition to these findings, the study unintentionally revealed that over one-fourth of the students enrolled in lower division major courses are juniors and seniors. Since these students should have completed the lower division classes earlier in their academic careers, the department has been stimulated by these findings to explore the possibility of using computer-generated enrollment restrictions to strengthen advising aimed at getting students to take these classes earlier in their academic careers.

 
References

Accrediting Commission for Senior Colleges and Universities (ACSCU). 1998. Eight Perspectives on How to Focus the Accreditation Process on Educational Effectiveness. Oakland, CA: Western Association of Schools and Colleges (WASC).

Diamond, Robert M. 1998. Designing and Assessing Courses and Curricula: A Practical Guide. Rev. Ed. San Francisco: Jossey-Bass Publishers.

Fox, J. Clifford and Keeter, Scott. 1996. "Improving Teaching and Its Evaluation: A Survey of Political Science Departments." PS: Political Science and Politics, 29: 174-180.

Jordan, Larry and Ko, Vivian. 1999. "Passing Rates: Writing Proficiency Exam 93/94 TO 97/98." Draft document AS 98-24. Analytical Studies & Data Administration, California State University, Los Angeles.

Julian, Frank H., Chamberlain, Don H., and Seay, Robert A. 1991. "A National Status Report on Outcomes Assessment by Departments of Political Science." PS: Political Science and Politics, 24: 205-208.

Lasher, Kevin J. and Kitts, Kenneth. 1998. "Coming Soon to Your Department: Institutional Effectiveness Plans." PS: Political Science and Politics, 31: 69-73.

Mann, Sheilah. 1996. "Political Science Departments Report Declines in Enrollments and Majors in Recent Years." PS: Political Science and Politics, 29: 527-533.

Wahlke, John C. 1991. "Liberal Learning and the Political Science Major: A Report to the Profession." PS: Political Science and Politics, 24: 48-60.

Weiss, Kenneth R. 1998. "Write of Passage." Los Angeles Times. (June 3), B2.