Every regional accreditation body in the United States is changing its standards to incorporate student learning assessment (McMurtrie 2000, A29). While each accrediting body takes its own approach, all agree “measuring what students are learning will continue to gain importance” (McMurtrie 2000, A30).
Colleges and universities are responding in varying ways and to varying degrees to these new learning assessment standards, with some faculty and administrators viewing the change positively and others negatively. Interest in learning assessment within political science major programs prompted the research reported in this paper.
The goal of this project is to develop the most comprehensive description possible of student learning assessment in political science departments across the United States. To that end, we designed a five-part survey. The survey included both closed- and open-ended questions covering topics such as the size and type of the department; what learning objectives, if any, had been established by the department and how they were developed; what assessment instruments, if any, were being used; the means of analyzing the data; the conclusions drawn; any changes made as a result of assessment findings; and the types of resources available to the department for student learning assessment programs. This basic structure, of course, conforms loosely to the “ideal type” learning assessment model. Following this ideal model, one first develops a specific set of learning objectives for majors (or students in the general education program, etc.). Next, one designs and applies an appropriate assessment instrument or instruments. The next step is systematic and regular collection and analysis of data. That analysis then feeds back into the curriculum, informing changes to the curriculum as needed. This would be a continuous cycle.1
1. For a discussion of the steps in the learning assessment process, see, for example, James O. Nichols, The Departmental Guide and Record Book for Student Outcomes Assessment and Institutional Effectiveness (New York: Agathon Press, 1995).
In June 2000, this survey was sent to chairpersons at 1,253 departments across the United States—those listed by the APSA as four-year or graduate institutions with some type of political science department (even if combined with another discipline, such as other social sciences). Between June and October 1, 2000, 213 surveys were returned, for a return rate of approximately 17%. We then converted the survey responses into a set of variables that were coded and analyzed using SPSS. The following is a partial report on the findings of this survey.
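As an illustration of the kind of coding step described here, the sketch below uses Python and pandas rather than SPSS, which the authors actually used; the variable names and example rows are invented assumptions, not the survey's real codebook.

```python
# A minimal sketch, assuming a pandas workflow; the authors' coding and
# analysis were done in SPSS, and the columns and rows below are invented
# for illustration only.
import pandas as pd

# Each returned survey becomes one row of coded, closed-ended answers.
surveys = pd.DataFrame(
    [
        {"dept_type": "undergrad_only_small", "objectives_stage": "adopted"},
        {"dept_type": "combined", "objectives_stage": "discussion"},
        {"dept_type": "graduate", "objectives_stage": "reviewing"},
    ]
)

# Return rate reported in the text: 213 surveys returned of 1,253 mailed.
print(f"Return rate: {213 / 1253:.1%}")

# Frequency distributions analogous to the SPSS descriptive output.
print(surveys["objectives_stage"].value_counts(normalize=True))
```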
We received responses from a variety of different sized departments, with varying numbers of majors, and from all six regional accrediting bodies operating in the United States. Before reviewing the characteristics of responding departments, which are summarized in Table 1, it should be noted that we suspect a bias in the pattern of responses. Departments that have been at least somewhat focused on learning assessment are probably much more likely to have returned the survey than departments that have not had such a focus; respondents who are interested in learning assessment, or who believe their department has done good work in this area, are presumably more eager to participate in a study of this kind. However, this suspicion cannot be confirmed on the basis of the data presented here.
Turning to Table 1, then, the majority of our responses (64.2%) came from departments offering only undergraduate programs. Departments offering graduate programs accounted for 34.4% of our respondents. The largest group of respondents (30.2%) was departments offering only an undergraduate major and having five or fewer faculty members. Another 18.9% of the respondents were departments where political science is combined with other disciplines. A further 15.1% offer undergraduate programs only but have more than five faculty members. A small majority of our respondents (51.5%) had more than 75 political science majors.
The respondents came from all six regional accreditation areas, but with large differences in the distribution among the accrediting regions. A full 33.5% came from the North Central Association of Colleges and Schools. By contrast, as Table 1 illustrates, only 4.7% came from the Northeastern Association of Schools and Colleges. To some degree this reflects differences in the number of four-year colleges and universities that fall within the boundaries of each accrediting region, but it may also reflect the degree to which the accrediting body for the region in question has made assessment of student learning a priority.
The survey asked respondents to indicate whether or not the department had formally adopted a set of specific learning objectives for its students. Just over 16% stated that there was not even a discussion about learning objectives taking place within the department. Another 12.3% said that there was a department-level discussion taking place, 16.1% said that learning objectives were being formulated, 38.9% said that learning objectives had been adopted, and 14.2% said that a previously adopted set of learning objectives was under review.
What is the association between the type of department and that department's stage in this process? The largest single group (17%) of responding departments that had formally adopted (or had adopted and were now reviewing) learning objectives consists of departments that offer only an undergraduate program and have five or fewer faculty members. The second largest group (10.5%) consists of combined departments (where political science is but one discipline under the umbrella of a single department). The third largest group (9%) is the remaining exclusively undergraduate category—departments with more than five faculty members. To summarize, 58.5% (79 out of 135) of undergraduate-only departments are at the advanced stages of the learning objective development process. Just over 45% of departments offering graduate work are at the same stage in the process. Such a breakdown is not surprising; one would expect departments exclusively focused on undergraduate work to be more focused on the assessment of undergraduate learning, and the survey did not ask questions about the assessment of student learning at the graduate level.
Thus far, this paper has focused, more or less, on the context of learning assessment. What about the specifics of learning assessment being carried out by the respondents? The data shown in Table 2 begin to illustrate these specifics.
Table 2 presents the percentage of respondents that have adopted the various specific learning objectives listed. Each objective was treated as a separate nominal variable, thus allowing for multiple objectives. The objective adopted by more respondents than any other is “writing skills” at 57.1%. In other words, 57.1% of the responses were from departments that have formally (but see below) adopted the objective “students should develop writing skills.” Critical thinking received the next highest percentage of “yes” responses at 55.7%. Fifty-four percent of responses came from departments that have as a learning objective “familiarity with major theories and analytical approaches in political science.” No other single objective has been formally adopted by a majority of the responding departments. Objectives such as becoming familiar with the major subfields of political science, understanding the international dimensions of political problems and policies, and the ability to design and conduct political science research projects all received between 40 and 46% “yes” responses. Exclusive of the “other” category (8%), the least frequently adopted learning objectives were “develop a fundamental understanding of cognate disciplines” (15.6%) and “students should acquire practical experience in politics or government” (22.2%). This means, of course, that the most popular learning objectives are not those specific to the discipline of political science.
Finally, 18.9% of respondents indicated that the question did not apply to them. The assumption is that this 18.9% came primarily, if not exclusively, from two groups of respondents: those that have not yet adopted any learning objectives and those departments offering only graduate programs. Actually, this percentage seems somewhat low given that a full 28.1% of our respondents indicated that they were not yet even in the formulation stages of developing a set of departmental learning objectives. This discrepancy is probably the result of some respondents answering the set of learning objectives survey questions based upon implied rather than formally adopted objectives (or even the individual respondent's sense of good objectives for political science majors).
A similar set of survey questions focused on the development and type of assessment instruments being used by the respondents. Table 3 shows the percentage of departments using each of the listed learning assessment instruments. Again, each possible instrument was treated as a separate nominal variable to allow for the use of multiple methods.
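To make this multiple-response coding concrete, the following is a minimal sketch in Python and pandas (the authors' analysis was done in SPSS); the column names and rows are illustrative assumptions, not the survey's actual variable list.

```python
# A minimal sketch of the multiple-response coding described above, assuming
# a pandas workflow rather than the SPSS setup the authors used. Each listed
# objective or instrument becomes its own 0/1 nominal variable, so one
# department can register several at once. All names and rows are invented.
import pandas as pd

objectives = pd.DataFrame(
    [
        {"writing_skills": 1, "critical_thinking": 1, "major_theories": 0, "cognate_disciplines": 0},
        {"writing_skills": 1, "critical_thinking": 0, "major_theories": 1, "cognate_disciplines": 0},
        {"writing_skills": 0, "critical_thinking": 1, "major_theories": 1, "cognate_disciplines": 1},
    ]
)

# Percentage of departments answering "yes" for each item (an analogue of
# the percentages reported in Tables 2 and 3).
print((objectives.mean() * 100).round(1))
```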
The assessment instrument used by the largest percentage of departments is the senior capstone course: over 39% of respondents require such a course for student learning assessment purposes. Twenty-five percent of respondents use faculty observations (either alone or in conjunction with other tools) to assess student learning. The tool receiving the lowest percentage of positive responses (i.e., the department uses the tool) is the pre-test/post-test (9.9%). A somewhat larger group of respondents (14.2%) uses a post-test only.
The percentage of respondents (17.5%) who indicated that this set of questions was not applicable to them is particularly interesting. This figure seems low given that over 31% of respondents indicated that their departments were not yet at the formulation stage of establishing learning assessment tools. A follow-up study would be needed to fully explore this discrepancy, but two explanations might account for it. First, tools that the survey listed as assessment instruments might be in place in a department but not used for assessment purposes in any formal way. Thus, a respondent might have indicated that his or her department was not even talking about learning assessment tools yet still have indicated that the department requires a senior seminar (rather than simply marking “not applicable”). Second, assessment tools may be in place and may be used by a department for assessment purposes, but without the knowledge of the department member responding to the questionnaire. This second explanation seems less plausible, since the surveys were sent to department chairs, but it must be noted as one possibility.
The final set of closed-ended questions on the survey concerned the availability of resources to support departmental learning assessment efforts. Overall, 5.2% of responding departments stated that substantial resources were available to support learning assessment, 32.5% said that some resources were available, and 31.6% stated that a few resources were available. Another 25.9% of respondents stated that no resources were available for this purpose. On-campus workshops are the most widely available (39.6%) and most widely utilized (25.9%) resource in support of learning assessment. Off-campus workshops and on-campus teaching centers follow, with 29.7% of respondents stating that off-campus workshops were available and 28.8% indicating that on-campus teaching centers were available. However, only 17.9% of respondents indicated that they or their departments had utilized any off-campus resources, and only 16% stated that the available on-campus teaching center was utilized.
Interestingly, but perhaps not surprisingly, the least widely available types of resources, course releases and financial rewards, were also the most likely to be used by departments. All of the 5.7% of respondents who indicated that course releases were available also indicated that they used that option. Of the 7.1% of departments that indicated that some sort of financial reward was available in support of learning assessment, 5.2% indicated that they had taken advantage of that resource. In addition, 11.8% of respondents indicated that the question of resource use was not applicable to them.
To further explore the relationship between resource availability and departmental assessment efforts, we conducted two crosstabulations. There is a substantial association between the amount of resources available and a department's stage in the process of developing learning objectives.
In 91% (10 out of 11) of the cases where substantial resources were available, the department had adopted, or had adopted and was reviewing, a set of formal learning objectives. Of the 68 cases where the respondent indicated that some resources were available, 65% also indicated that they were in one of these two final stages of learning objective development. Conversely, in the 55 instances where the respondent indicated that there were no resources available at all, 38.25% were not even having discussions about learning objectives, while only 18% had actually adopted learning objectives and 13% were reviewing a formally adopted set of objectives.
There is a similar association between the availability of resources and a department's stage in developing learning assessment instruments. One hundred percent of departments (n = 11) that reported substantial resources also reported that they had either adopted, or had adopted and were now reviewing, an assessment instrument or instruments. Conversely, of the 55 departments that reported no resources available to support learning assessment, 36.25% stated there was no discussion of assessment instruments, 20% stated there was a discussion but no adoption, 9% stated they were formulating instruments, 27% reported that they had adopted instruments, and 4% reported that they were reviewing a previously adopted instrument or instruments. While no causal relationship has been formally established here, these data do suggest that increased resources in support of learning assessment are associated with more fully developed learning assessment efforts in departments.
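For readers who wish to replicate this kind of analysis, the sketch below shows a pandas equivalent of the crosstabulations described above; the rows are invented for illustration, and the actual SPSS output underlying Tables 1 through 3 is not reproduced here.

```python
# An illustrative pandas version of the crosstabulation described above,
# under the assumption of a simple two-column data frame; the departments
# listed here are invented solely to show the shape of the analysis.
import pandas as pd

df = pd.DataFrame(
    {
        "resources": ["substantial", "substantial", "some", "some", "none", "none"],
        "objectives_stage": ["adopted", "reviewing", "adopted", "formulating",
                             "no discussion", "discussion"],
    }
)

# Row-normalized crosstab: for each level of resource availability, the share
# of departments at each stage of learning objective development.
table = pd.crosstab(df["resources"], df["objectives_stage"], normalize="index")
print((table * 100).round(1))
```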
Respondents were also asked a series of open-ended questions. These questions asked for summaries of the most significant conclusions departments had reached after analyzing data generated by their learning assessment instruments, and summaries of changes that departments had made in majors, course offerings, or course-level pedagogy as a result of learning assessment. Respondents whose departments have not developed or implemented learning assessment programs generally did not answer these questions. Also, departments in the early stages of developing and implementing learning assessment programs frequently did not respond to these questions. A total of 127 departments (59.75%) provided responses to the open-ended questions.
Departments were asked to indicate the three most significant conclusions they had reached as a result of analyzing data generated by learning assessment. It is striking that departments reported a wide range of conclusions based on their analyses of data generated from learning assessment instruments. A substantial number of departments reported either that it was too early to report conclusions (17.5%) or that the question was not applicable to them (4.75%). Fifteen percent of departments indicated conclusions that either no other department listed or that were not directly related to learning objectives; such responses were coded as “other.” A few somewhat surprising results appear here. First, although critical thinking is a very common learning objective, few departments have reached conclusions about their students' critical thinking skills, with two reporting satisfaction with student achievement and one reporting dissatisfaction. One might imagine that learning assessment programs would provide more frequent conclusions about such a popular learning objective. Two hypotheses might account for this observation. One is that, although many departments value critical thinking as a learning objective, few have been able to develop instruments to measure student achievement in this area. The other is that departments may be evaluating critical thinking skills when they evaluate their students' writing skills.
It is also noteworthy that relatively few departments (4%) have reached conclusions about their students' information technology skills, despite the popularity of information technology as a learning objective and the frequency (discussed later) with which departments have adapted course-level pedagogy to emphasize information technology.
A second open-ended question asked departments to list significant changes they have made in their majors as a result of learning assessment. A large number of departments (37%) report no changes in their major programs, because it is too early in the assessment process to implement changes, because they have not implemented learning assessment, or because what they have learned from learning assessment has not led to changes. For departments that have made learning-assessment-related changes in their programs, a number of results are prominent. First, learning assessment has led several departments either to a general revision of major requirements (19%) or to the creation of new majors, tracks, or emphases (4%). Second, several departments have tried to address perceived student deficiencies in analytical techniques and research methods. Some departments have done this by adding a required methods course or by requiring students to take a methods course earlier, usually as sophomores. Some of the departments (16.5%) adding or revising a senior seminar or capstone course are doing so to address deficiencies in analytical techniques and research methods.2
Of course, senior seminar and capstone courses are popular instruments for assessing student achievement of learning objectives. It is not possible to distinguish whether a department is using a senior seminar or capstone course to help students address analytical and research deficiencies, to assess student learning, or both.
A third open-ended question asked departments to list significant changes made in course offerings as a result of learning assessment. For example, had departments added courses to address gaps in student achievement? A first observation is that departments were less likely to report that they had made no changes in course offerings than to report that they had made no changes in their major programs. Departments were more likely to report that they had added courses (31.5%) than that they had dropped courses (12.75%). When departments observe deficiencies in student learning, adding courses to address the deficiency may be almost a reflex response. For example, several departments reported that their students had gaps in international relations and comparative politics and added courses in those areas to address the deficiencies. Dropping courses may result from a perception that students overemphasize a subfield, most likely American politics.
Another common response to this question is that departments have added or significantly revised a research methods course. This is consistent with responses to the previous two questions, which also revealed departments' concerns about their students' deficiencies in research and analysis.
Finally, departments were asked to list three significant changes in specific courses. The number of departments reporting that learning assessment has not, or has not yet, led to change drops off again. It would seem that the results of learning assessment are first realized at the level of the individual course. Generally, this results in a greater emphasis on skills: research and analytical skills (21.5%), writing skills (14%), communication skills (4%), and critical thinking skills (1.5%). The order of priority is consistent with responses to other questions: departments are most likely to emphasize analytical and writing skills and least likely to emphasize critical thinking skills. Again, the disparity between the popularity of critical thinking as a learning objective and departmental efforts to emphasize critical thinking skills stands out.
Other course-level changes concern how courses are delivered. By far the most popular change in course presentation (24.5%) is an increased use of information technology. Responses to this question do not make clear to what extent information technology is being deployed to help students enhance their own information technology skills, as opposed to being used as a way to convey content or to help students achieve other learning objectives.
Results from the four open-ended questions indicate ongoing uncertainty about the process of learning assessment and a concern with students' learning skills. Many departments that have implemented learning assessment programs have also implemented changes at the levels of major programs, course offerings, and individual courses. Changes are most commonly directed at more complete coverage of the discipline and development of student learning skills, especially analytical and research skills.
The results of this survey indicate that the most recent wave of the learning assessment movement is having an impact on political science departments across the United States. Over 50% of respondents from undergraduate-only departments and 45% of departments offering graduate work are involved at some stage of the learning assessment cycle.

What the authors have found surprising is that the development of learning assessment strategies by departments does not seem to follow the “ideal type” learning assessment model. While 52.4% of respondents indicated that their departments have formally adopted learning objectives (perhaps now under review), 62.8% indicated that they are either formulating assessment instruments or have formally adopted them (again, perhaps now under review). In theory (as described earlier in this paper), a department is supposed to assess student learning against a set of established learning objectives, so one would not expect more departments to be engaged in the formulation or use of assessment instruments than have adopted learning objectives. Apparently, as in the world of public policy and elsewhere, the theory does not capture the true nature of the process.

It may be that the ideal model of learning assessment described in this paper needs to be questioned. Departments engaged in meaningful learning assessment may, during the course of that process, redefine their stated learning objectives slightly or significantly. The process of reflecting on student learning can help departments further define the student outcomes they desire. Perhaps for some departments it is necessary to focus on what students are currently learning in order to reach a consensus about what the department hopes to achieve. A follow-up study involving in-depth interviews with department chairs would be necessary to fully explore this issue. Certainly, the expectation is that once a department has been engaged in learning assessment for a significant amount of time, a set of learning objectives, and tools for measuring student performance against those objectives, would be well developed. But perhaps when a department is still developing its thinking about learning assessment, the linkage between objectives and measurement tools needs to be loosely conceived.
Similarly, an interview-based follow-up study will help the authors develop a more in-depth understanding of learning assessment in political science departments. Which specific instruments do departments believe help to measure which specific learning objectives? How have departments introduced curricular changes? Do early indicators suggest that those changes are having an impact on student learning? Does the availability of resources to support assessment explain why some departments are further along in the assessment cycle? Or does the availability of resources simply reflect the real driving force behind departmental use of learning assessment: the prioritization of assessment by accrediting bodies and central university administrators? This initial study has provided data on the scope of learning assessment among political science departments in the United States and on the types of objectives and assessment tools most frequently adopted. The authors believe that departments in the early stages of developing a serious learning assessment strategy would benefit from a second study that answers some of the questions raised above.
Table 1. Characteristics of Respondents
Table 2. Adopted Learning Objectives
Table 3. Assessment Instruments