
APSA Teaching and Learning Conference: A Summary of Four Tracks

Track Three: Assessment

Published online by Cambridge University Press: 01 July 2004

John Ishiyama
Truman State University

The Teacher

© 2004 by the American Political Science Association

Assessment in Political Science

From February 19 to 21, 2004, the inaugural APSA Teaching and Learning Conference was held at American University in Washington, D.C. The conference was organized to emulate the “European” conference model, in which a set of working groups (or “tracks”) convened, each devoted to one of four issue areas that the conference program committee considered important to the political science discipline. The tracks did not follow the standard political science conference format, in which individual papers are presented, a “discussant” comments on them, and five minutes are left at the end for general discussion. Rather, the tracks were working groups: each participant presented over the course of two days, and the group discussed the papers collectively, not to ferret out criticisms but to build on the themes presented. At the end of the conference, the groups made recommendations to a plenary session, suggesting what APSA as a professional association could do to facilitate advances in each of the four issue areas.

Our track focused on “Assessment and Learning Outcomes.” At first, none of us really knew what to expect. We all came from very different backgrounds. Some came from very large departments, with faculty numbering 25 or more; others were from “departments” made up of a single political scientist. There were full professors as well as graduate students, tenured and untenured faculty. Some were from public institutions, others from private schools. Research universities were represented, as were primarily undergraduate institutions and community/junior colleges. Two participants were from Historically Black Colleges and Universities (HBCUs). Still others were from departments integrated with other disciplines (such as sociology or history, or housed in a social science department or division), while others came from stand-alone departments. It was indeed a very diverse group of 10 participants.

Nonetheless, it became almost immediately apparent that, although most of us had never met before the conference, we had much in common. What was particularly striking about Track Three was that every represented program that had adopted assessment was motivated first and foremost by the faculty's desire to improve student learning, not by pressure from an external accrediting organization. At all the institutions represented, assessment began with the simple realization that students (even those with high GPAs) were not learning the skills and content that the political science faculty believed were important; that is, students were graduating without the requisite skills and knowledge that would make them marketable.

Thus we began with the “radical” notion that assessment is really about how we can demonstrate empirically that our students are actually learning what we would like them to learn. As skeptical empirical political scientists, we are naturally unconvinced by “show and tell” literature claiming that certain teaching techniques or curricular structures promote student learning. Such claims hardly meet the standard of evidence we use in our own substantive work, and they should not, therefore, constitute the standard of evidence for demonstrating that our teaching techniques or curricular structures work. For us, assessment's primary purpose is the empirical evaluation of pedagogy and curriculum (beyond just seeing “A's” on student transcripts) so that we can improve what we do as educators.

Over the course of the next two days, each of the participants made a presentation on some aspect of assessment. On the first day, the presentations focused primarily on program assessment. One presentation traced the history of a “successful” assessment program instituted some 30 years ago; others described how to build a program from scratch, recounting the challenges they faced along the way. Although most of the participants who had established assessment programs described setting goals first (particularly emphasizing skills rather than merely content) and then developing a program to assess progress toward those goals in a planned, deductive way, at least one participant noted that it was possible to build a program from the “bottom up” through the efforts of individual faculty members.

On the second day, the focus shifted to the particular techniques individuals employed to assess student learning. Some involved in-class techniques (such as variations on the minute paper to elicit student feedback) as well as projects that examined, in a quasi-experimental way, the effectiveness of certain pedagogical techniques. One project compared female students' class participation in an online format with that in a traditional format. Another examined ways to assess the development of critical thinking using problem-solving exercises. Still another explored the value of peer evaluation techniques for assessing student learning. Over the course of this session we also discussed the merits of both quantitative and qualitative techniques, ranging from standardized, nationally normed tests like the ETS-produced Major Field Test in Political Science to content analysis of online discussions, electronic portfolios, locally developed instruments, senior seminar capstone courses, and exit interviews.

We were also aware of the common criticisms of assessment posed by many of our colleagues in the discipline, some of which have a great deal of merit. For instance, many question the use of standardized tests to assess student learning and are suspicious of “externally imposed” authority, viewing assessment as a threat to academic freedom. Participants noted, however, that many of these criticisms rest on the somewhat faulty notion that assessment relies only on standardized tests. As with any other research agenda, multiple methodologies and multiple indicators are always better than reliance on a single measure (whether quantitative or qualitative).

Nonetheless, we all acknowledged that the discipline faces ever-growing pressure from external audiences (such as state legislatures and accrediting organizations) and that assessment is a reality with which we must now deal. One theme mentioned again and again was that if we as political scientists do not devise our own assessment programs, tailored to political science and designed by political scientists, someone else, whether from higher education or the humanities, will do it for us. We would prefer to devise a plan ourselves: who is in a better position to design such assessment strategies than political scientists? Who is better able to engage in “policy evaluation”? Who is better able to use multiple methodologies?

Further, to avoid the perception that standards are externally imposed, we unanimously concluded that it is not for APSA (or any other organization) to set standards, but rather to showcase model programs from a variety of institutions as a “menu for choice” for departments inclined to consider adopting an assessment program. Hence nothing is imposed, and the choice remains the departments', but at least there would be models for those interested in conducting assessment. At the same time, APSA can play a crucial role in providing resources and information to departments interested in devising an assessment program.

Another concern commonly expressed by some of our political science colleagues goes something like this: assessment techniques are labor intensive and will take time away from more pressing academic responsibilities; besides, is it in my individual professional interest to conduct assessment? Regarding time and effort, it is true that assessment demands considerable time and energy, and that most faculty are already stretched thin by scholarly commitments, service commitments, and the like. However, if we take seriously the notion that student learning is at the core of what we do, then it is in our interest to gauge whether our students are actually learning. There is also a potential professional payoff in terms of publication: a growing number of journals publish work on the “scholarship of assessment/teaching and learning,” including PS and the newly launched Journal of Political Science Education (the journal of the Undergraduate Education Division of the APSA).

In sum, there were a number of lessons that we took away from this meeting:

  1. It is true that no two institutions are the same, so a one-size-fits-all approach does not work. There are, however, common themes that we can learn from one another.
  2. Assessment is meant to improve our teaching and promote student learning, not just to please others.
  3. If you are thinking about instituting an assessment program, expect to encounter a considerable amount of politics; battles over turf and techniques are inevitable. Map out the terrain. Who are the skeptics, and why are they skeptical? If you want to persuade, do it in a non-threatening way (a theme mentioned by all). Work with those who are willing to participate.
  4. Assessment fits what we as social scientists do in our own scholarship; in fact, it is a natural fit.
  5. There are multiple indicators (qualitative and quantitative) that can be adapted for use in multiple settings.
  6. Assessment can benefit faculty and programs, and the process of assessment can benefit students too (a common theme mentioned by all). Student understanding of, and participation in, assessment is a good thing.

Finally, the workshop produced a set of recommendations that were forwarded to APSA on how to promote assessment in political science. First, we made the following appeal, outlining why APSA should play a greater role in assessment:

  1. In an effort to promote the scholarship of teaching and learning, we need to move from a culture of unexamined assumptions to a culture of evidence.
  2. If we do not set the agenda in establishing assessment in political science, others less qualified with different agendas will do so.
  3. Our discipline is distinctively qualified to lead in the assessment/policy evaluation field.
  4. This is an opportunity for APSA to help enhance the quality of political science education.
  5. We can increase membership by increasing the relevance of APSA to the teaching function of the profession, especially for those in undergraduate education. Only APSA has the legitimacy and the capacity to coordinate and disseminate information on effective assessment practices.

Second, we suggested that APSA could provide essential services to political scientists by:

  1. Supporting and promoting the publication of the Scholarship of Teaching and Learning (SOTL) in political science;
  2. Publicizing model assessment programs/exemplary practices at the program and classroom levels;
  3. Revisiting the Association of American Colleges and Universities (AAC&U)/Wahlke report on the undergraduate political science major;
  4. Providing a clearinghouse of external program reviewers;
  5. Providing a clearinghouse of assessment literature;
  6. Promoting the Scholarship of Teaching and Learning in graduate education;
  7. Establishing and nurturing an APSA workshop on assessment training, using faculty members from model political science programs;
  8. Creating a standing APSA working group on assessment that would assist in the implementation of the above.

However, beyond the recommendations themselves, perhaps the most rewarding aspect of the conference was the realization that we were not alone: others in the discipline were struggling with the same issues and concerns. At last, we had a forum to exchange ideas and explore new intellectual horizons regarding teaching and learning in political science. Indeed, perhaps the most significant and long-lasting consequence of the TLC in general was the creation of a “community” of scholars dedicated to the study of teaching and learning in political science. The creation of this community was a crucial first step on the road to change in the APSA, a change that will better prepare all of us for the challenges facing higher education in the near future.