
What a Long, Strange Trip It’s Been: Three Decades of Outcomes Assessment in Higher Education

Published online by Cambridge University Press:  26 January 2016

E. Fletcher McClellan
Affiliation: Elizabethtown College

Profession Symposium: Assessment in Political Science Redux

Copyright © American Political Science Association 2016

A 30-year reform movement in higher education—assessment of student learning outcomes—has become “an unavoidable condition of doing business” (Ewell 2002, 22) for colleges and universities. The majority of undergraduate institutions have established common learning goals and processes for gathering evidence of students’ academic achievement (Kuh et al. 2014). A “scholarship of assessment” and a community of assessment professionals, coordinators, and committees have materialized (Ewell 2002; Kinzie and Jankowski 2015).

Despite its success in entering the higher education mainstream, outcomes assessment activity on most campuses is wrapped in a “culture of compliance” (Kuh et al. 2015, 5). Skepticism about the purposes and value of assessment pervades academia, especially the arts and sciences (Ewell, Paulson, and Kinzie 2011). In political science—which has accomplished as much as any non-accredited discipline to promote understanding and best practices of assessment (Deardorff, Hamann, and Ishiyama 2009)—the dominant stance of most programs is acquiescence to outside demands rather than faculty commitment to improving curriculum and instruction (Young, Cartwright, and Rudy 2014).

This article examines outcomes assessment in higher education as a case study of policy implementation, taking three separate approaches to analyze why the assessment movement has encountered difficulty in achieving campus involvement and acceptance. Moreover, it shows that despite obstacles to effective implementation, assessment activity and use have risen in the past decade. The result is that outcomes assessment is here to stay, presenting challenges and opportunities for political scientists and the discipline.

THE ACCOUNTABILITY–IMPROVEMENT PARADOX AND THE MAKING OF OUTCOMES ASSESSMENT POLICY

At the root of higher education’s ambivalence about outcomes assessment—defined as “the gathering and use of evidence of student learning in decision making and in strengthening institutional performance and public accountability” (Kuh et al. 2015, 2)—are the dual purposes that assessment activities are supposed to serve. By providing information about what students are learning through the college experience, institutions are held accountable to the government, accreditors, and public for the results they achieve. The same evidence can help institutions, academic programs, faculty, and staff improve teaching and learning processes and, consequently, raise student achievement.

Although some scholars believe that both accountability and improvement can be accomplished through assessment (Kuh et al. 2015), there is undeniable tension (Ewell 2009). Public accountability incentivizes institutions and programs to identify and present only positive evidence, whereas improvement depends on information gathered through critical inquiry. Assessment for accountability leads external stakeholders to make summative judgments about performance, which affects key decisions about the future of institutions, programs, and individuals. When conducted in a non-threatening environment, assessment for improvement encourages faculty engagement, formative feedback, and risk taking (Ewell 2009).

The accountability–improvement paradox emerged in the process of adopting outcomes assessment as national higher education policy. The assessment movement resulted from a convergence of separate streams of activity in the 1980s (McClellan 2009). In the wake of alarmist reports about the decline of US educational achievement in the midst of rising global economic competition, politicians wanted greater accountability and results from educational institutions. At the same time, reformers within the academy drew attention to the need to improve undergraduate curriculum and teaching, focusing on what students gained from the college experience (McClellan 2009).

Observing the apparent success of outcomes-based models, such as performance funding in the states and competency-based education in four-year colleges, both camps perceived assessment of student learning as a cost-effective strategy, and policy makers in higher education took notice (Ewell 1993, 2002). Near the end of the Reagan administration, Secretary of Education William Bennett issued an executive order that directed accreditation bodies—which serve as gatekeepers determining institutions’ eligibility for federal financial aid—to include assessment criteria in their standards for member institutions. Amendments to the Higher Education Act in 1992, enacted by a Democratic Congress and signed into law by President George H.W. Bush, codified outcomes assessment as federal policy.

ACTIVITY WITHOUT ACCEPTANCE: THREE DECADES OF ASSESSMENT POLICY IMPLEMENTATION

In the years since Secretary Bennett’s executive order, the implementation of student learning outcomes assessment in higher education has been neither rapid nor harmonious. The “culture of compliance” on most campuses stems from the belief that governmental overseers and regional accrediting agencies are the primary drivers of assessment (Kuh and Ikenberry 2009; Kuh et al. 2014). Complying with accreditor expectations is the most frequently cited use of assessment evidence (Kuh et al. 2014).

Regarding the impact of assessment efforts on campus practices and student learning, the results are disappointing. In a 1997 national survey of postsecondary institutions, student assessment was reported to have a positive impact on only three of 15 student, faculty, and institutional areas of policy and practice (Peterson et al. 1999). According to Banta and Blaich (2011), only 6% of the almost 150 college best-practice assessment programs that they examined presented evidence of improved student learning. Blaich and Wise (2011) observed that campuses involved in the Wabash National Study of Liberal Arts Education put too much energy into obtaining perfect data rather than acting on the volumes of data already gathered.

To be sure, progress was made on several fronts at the institutional and program levels. The 1997 survey found that 65% of institutions had developed formal assessment plans, 13% had pursued informal plans, and the remainder were developing plans or had no plan at all (Peterson et al. 1999). By 2009, almost 75% of provosts reported that their institutions had adopted plans with common student learning outcomes, a figure that increased to 84% in 2013 (Kuh and Ikenberry 2009; Kuh et al. 2014).

The volume and variety of assessment activity also have increased. In 2009, colleges and universities employed an average of three assessment tools, particularly national student surveys such as the National Survey of Student Engagement (Kuh and Ikenberry 2009). Four years later, the norm was five assessment approaches, with significant gains at the program level in the use of rubrics, portfolios, capstone experiences, and other direct measures of student learning (Kuh et al. 2014).

In another shift, significantly more colleges and universities reported that institutional commitment to improvement and faculty interest in improving student learning were important drivers of assessment activity (Kuh et al. 2014). Department chairs and program directors stated that faculty interest in program improvement was the top motivator for assessment (Ewell, Paulson, and Kinzie 2011). Use of assessment data grew in areas beyond accreditation, including program review, curriculum revision, and revision of learning goals (Kinzie, Hutchings, and Jankowski 2015; Kuh et al. 2014).

EXPLAINING STUDENT OUTCOMES ASSESSMENT IMPLEMENTATION

By examining outcomes assessment as a case study in policy implementation, we gain greater insight into why it has taken three decades for the assessment movement in higher education to attain extensive institutional and program involvement without widespread enthusiasm. The focus here is on assessment as national policy, while recognizing that states, accreditors, and higher education institutions carry the primary implementation responsibility.

Intrusions on Professional Accountability

From an institutional perspective, the higher education industry must manage diverse and sometimes contradictory expectations from the democratic polity (Romzek and Dubnick 1987). Traditionally, higher education was regarded as a public good, and government’s role—expressed in the Higher Education Act of 1965—was to increase access and provide sufficient resources to educational institutions. Similar to the deference given to the legal, accounting, and medical professions, government and society granted faculty and higher education officials “professional accountability” (Romzek and Dubnick 1987) to manage their own affairs and serve the public.

The autonomy granted the higher education sector was not unlimited, however. As deLeon (1998, 550) explained, the compact between a profession and society can be modified depending on changes in the “(a) importance to society of the function over which the profession has control, (b) public confidence that the profession has special expertise relevant to its function, or (c) public trust in the rectitude of the professionals.” The efforts of governments and regional accreditors to assert greater oversight through outcomes assessment and other areas of compliance (e.g., Title IX, the Family Educational Rights and Privacy Act, and the Clery Act) naturally encountered resistance from higher education providers. However, increased compliance demands reflected greater societal concern about higher education performance, cost, value, and equity (Pew Research Center 2011).

Ultra-Complexity of Joint Action

Another explanation of how policy implementation can be impeded is the “complexity of joint action” (Pressman and Wildavsky 1973). Administering assessment policy requires cooperation among federal and state governments, institutional and program accrediting agencies, and higher education providers. If only as a matter of random error, difficulties with any one actor and roadblocks at any stage of the process are likely to occur.

Among the most conflicted actors are the regional accrediting agencies, which must promote as well as enforce outcomes assessment. Promoting assessment is aligned with accreditors’ traditional emphasis on institutional improvement through self-study and peer review. As gatekeepers for the federal financial-aid system, however, accrediting agencies have imposed more sanctions on institutions for failing to meet assessment standards (Gannon-Slater et al. 2014). Even so, these enforcement efforts were insufficient in the perception of Bush II education officials (Lederman 2007) and Secretary of Education Arne Duncan, who called the regional accreditors “the watchdogs that don’t bark” (US Department of Education 2015).

Working with state governing bodies or accrediting agencies, higher education providers have leeway to develop assessment programs appropriate to their needs. As a result, there is tremendous diversity in how the thousands of educational institutions in the United States respond to assessment mandates (Kuh and Ikenberry 2009; Kuh et al. 2014). The type of institution (e.g., size, degree-granting, or mission) has a significant effect; for example, there is an inverse relationship between a college’s admissions selectivity and the frequency of assessment activity and use of results (Kuh et al. 2014).

Internally, the prevailing response of higher education institutions to assessment mandates is to separate assessment offices and committees not only from the core of the academic enterprise but also from institutional decision making and planning (Kuh et al. 2015). An exception is the associate’s-degree–granting sector, which is most likely to use assessment results for strategic planning and resource allocation (Kuh et al. 2014). At the program level, professional studies and accredited programs have embraced assessment (Ewell, Paulson, and Kinzie 2011). Conversely, the humanities and social sciences are more likely to “decouple” assessment from their daily work (Young, Cartwright, and Rudy 2014).

(Lack of) Conditions for Successful Implementation

To the extent that implementation of outcomes assessment policy has proceeded in “top-down” fashion, the “conditions for successful implementation” approach of Mazmanian and Sabatier (1989) helps explain how obstacles can arise. Four variables pertain especially to assessment.

Goal Conflict: Attainment and Employability versus Learning

Policy goal conflict can frustrate implementation efforts (Mazmanian and Sabatier 1989, 41–2). Learning is the domain of professionals in higher education, but degree attainment is paramount to policy makers in government. Connecting college performance to US competitiveness in the global marketplace, both the George W. Bush and Obama administrations specified attainment as a primary goal of federal higher-education policy (The Secretary of Education’s Commission 2006; US Department of Education 2015). States also focused on degree completion, linking funding of public institutions to their performance on attainment overall and by underrepresented groups (Jones 2013). In response, academics acknowledge the importance of graduation rates but question the government’s completion agenda, arguing that educational quality and rigor will suffer (Humphreys 2012).

Similarly, the federal emphasis on “gainful employment” as an outcomes measure, reflected in new regulations of career-focused institutions (Grasgreen 2015), has received mixed reactions from the higher-education community. Although traditional providers praise efforts to crack down on irresponsible for-profits, an employability standard—implied in “skin-in-the-game” proposals that hold colleges and universities accountable for student debt (Stratford 2015)—has the potential to hold all providers accountable for results that educators claim depend in large part on the preparation of students before college, individual student talent and effort, and the state of the economy (Rawlings 2015). In addition, governmental pressure on institutions to produce employable graduates can interfere with the learning domain, exemplified by the reaction from liberal-arts defenders to President Obama’s offhand remark about the value of an art-history major (Jaschik 2014).

Ambiguous Means: What Kind of Learning Should Be Promoted and How?

Implementation also is hindered when there is ambiguity about which methods or remedies should be applied to the problem under review (Mazmanian and Sabatier 1989, 21). As a technical matter, there is growing consensus on how to “do” assessment (Banta and Palomba 2014; Kinzie, Hutchings, and Jankowski 2015; Suskie 2009). In a broader sense, however, the impact of assessment scholarship remains limited. As the debate over the relevance of the liberal arts illustrates, society is divided on what students should know and be able to do (Humphreys 2011). Vigorous discussions continue among researchers about which aspects of curriculum, instruction, and college involvement best promote learning (e.g., the effectiveness of collaborative-learning and student-engagement strategies) (Arum and Roksa 2011; McCormick, Gonyea, and Kinzie 2013). Moreover, there is disagreement about the extent to which the college experience itself contributes to student learning and success (Pascarella and Terenzini 2005).

Market-Driven Change: Theory and Efficacy

Another condition of successful policy implementation is the existence of a sound theory relating changes in target-group behavior to the achievement of program objectives (Mazmanian and Sabatier 1989, 25–6). Faith in markets to influence college policies and practices—and, in turn, promote student attainment and success—has been an underlying principle of the Bush II and Obama higher education agendas. Following up on his predecessor’s recommendation to create a “consumer-friendly information database on higher education” (The Secretary of Education’s Commission 2006, 21), President Obama released a College Scorecard in early 2013 that facilitates comparisons of colleges on cost, graduation rate, loan-default rate, and median borrowing (US Department of Education 2013).

Nevertheless, evidence suggests that students choose colleges mainly on the basis of cost and location (Espinosa, Crandall, and Tukibayeva 2014). Only a small percentage of students use national transparency initiatives such as the Voluntary System of Accountability; of those who do, most search for the cost calculator and graduation rates rather than actual information about student learning (Jankowski et al. 2012). Thus, the incongruence of the market theory of policy change with consumer behavior calls into question the soundness of federal strategy.

Passive Support (and Active Opposition) by Constituency Groups

Active support by constituency groups, a few key legislators, and/or the chief executive will facilitate implementation (Mazmanian and Sabatier 1989, 31–4). Despite strong support from federal and state officials, outcomes assessment has not been immune from partisan and ideological conflict. Democrats scrutinized the for-profit sector more severely (Grasgreen 2015), for instance, whereas Republicans in Congress disdained strong executive regulation of higher education—even when coming from a GOP administration (Lederman 2007; Stratford 2015). The higher education community has exploited these differences in resisting specific assessment-related initiatives, while also proclaiming support for outcomes assessment as a means of improvement (Fain 2015; Lederman 2007).

For example, President Obama’s announcement in August 2013 that the College Scorecard would include governmental ratings of colleges and universities received heavy criticism from higher education organizations. The administration’s proposal ignored the differences among institutions that serve vastly dissimilar student populations, opponents claimed, and underestimated the challenge of obtaining comparable data and metrics (Fain 2015). Senator Lamar Alexander (R-TN), chair of the Senate Committee on Health, Education, Labor, and Pensions and an advocate of deregulation (Stratford 2015), objected to direct governmental competition with private-ratings services (Fain 2015). The US Department of Education ultimately withdrew the ratings plan in June 2015—a consequence, Secretary Duncan stated, of the higher-education lobby’s interest in “propping up the status quo” (Fain 2015). Instead, economic metrics were added to the Scorecard, including the median salary of students 10 years after they first enrolled in a college (Blumenstyk 2015).

ACCELERATING OUTCOMES ASSESSMENT ACTIVITY AND THE CHALLENGE FOR POLITICAL SCIENCE

As pressures on higher education to produce results intensify and the scholarship of assessment advances, the pace and uses of outcomes assessment activity are likely to accelerate (Kuh et al. 2014). The tension between the accountability and improvement paradigms of assessment (Ewell 2009) will continue, but improvement has become a driver and consequence of assessing academic programs (Kuh et al. 2014). In this sense, implementation of outcomes assessment in higher education is moving toward a “bottom-up” strategy “from compliance to ownership” (Kuh et al. 2015). Applying Matland’s (1995, 165–8) typology of implementation processes, assessment has entered a phase of “experimental implementation” in which inter-institutional exchange and learning are taking place.

Although implementation barriers remain, the terms of debate over assessment on college campuses have changed. According to assessment scholar and political scientist Peter Ewell (2009, 6), “the question has become more about what kinds of assessment to engage in and under whose control than about whether to engage in it at all.” For political science, the issue now is how to make assessment work for our own purposes. Many political science programs have successfully used outcomes assessment to improve curriculum and instruction and, consequently, foster student learning (Deardorff, Hamann, and Ishiyama 2009; Ishiyama, Miller, and Simon 2015). As a discipline, how can we marshal convincing evidence of student learning? How can we forge stronger links between political science learning and post-undergraduate student success?

Furthermore, how can assessment evidence, in turn, be leveraged to promote political science at institutional, regional, and national levels? The programmatic, institutional, and cultural diversity of our discipline has prevented consensus on the priority and goals of political science education (Ishiyama, Breuning, and Lopez 2006). However, the stakes for higher education—and political science in particular—have never been higher. Advancing from passive acceptance of outcomes assessment to integrative use is an important pathway toward shaping our individual and collective destiny.

REFERENCES

Arum, Richard, and Roksa, Josipa. 2011. Academically Adrift: Limited Learning on College Campuses. Chicago: University of Chicago Press.
Banta, Trudy W., and Blaich, Charles. 2011. “Closing the Assessment Loop.” Change: The Magazine of Higher Learning 43 (1): 22–7.
Banta, Trudy W., and Palomba, Catherine A. 2014. Assessment Essentials: Planning, Implementing and Improving Assessment in Higher Education. 2nd ed. San Francisco: Jossey-Bass.
Blaich, Charles F., and Wise, Kathleen. 2011. From Gathering to Using Assessment Results: Lessons from the Wabash National Study. Champaign, IL: National Institute for Learning Outcomes Assessment.
Blumenstyk, Goldie. 2015. “White House Unveils College Scorecard That Replaces Its Scuttled Ratings Plan.” Chronicle of Higher Education. September 12. Available at http://m.chronicle.com/article/White-House-Unveils-College/233073/?cid=at&utm_source=at&utm_medium=en. Accessed September 24, 2015.
Deardorff, Michelle D., Hamann, Kerstin, and Ishiyama, John, eds. 2009. Assessment in Political Science. Washington, DC: American Political Science Association.
deLeon, Linda. 1998. “Accountability in a ‘Reinvented’ Environment.” Public Administration 76 (3): 539–48.
Espinosa, Lorelle L., Crandall, Jennifer R., and Tukibayeva, Malika. 2014. Rankings, Institutional Behavior and College and University Choice. Washington, DC: American Council on Education.
Ewell, Peter T. 1993. “The Role of States and Accreditors in Shaping Assessment Practice.” In Making a Difference: Outcomes of a Decade of Assessment in Higher Education, ed. Banta, Trudy W. and Associates, 339–56. San Francisco: Jossey-Bass.
Ewell, Peter T. 2002. “An Emerging Scholarship: A Brief History of Assessment.” In Building a Scholarship of Assessment, ed. Banta, Trudy W. and Associates, 3–25. San Francisco: Jossey-Bass.
Ewell, Peter T. 2009. Assessment, Accountability and Improvement: Revisiting the Tension. Champaign, IL: National Institute for Learning Outcomes Assessment.
Ewell, Peter, Paulson, Karen, and Kinzie, Jillian. 2011. Down and In: Assessment Practices at the Program Level. Champaign, IL: National Institute for Learning Outcomes Assessment.
Fain, Paul. 2015. “Ratings Without…Rating.” Inside Higher Education. June 25. Available at www.insidehighered.com/news/2015/06/25/education-department-says-rating-system-will-be-consumer-tool-rather-comparison. Accessed July 26, 2015.
Gannon-Slater, Nora, Ikenberry, Stanley, Jankowski, Natasha, and Kuh, George. 2014. Institutional Assessment Practices across Accreditation Regions. Champaign, IL: National Institute for Learning Outcomes Assessment.
Grasgreen, Allie. 2015. “Barack Obama Pushes For-Profit Colleges to the Brink.” Politico. July 1. Available at www.politico.com/story/2015/07/barack-obama-pushes-for-profit-colleges-to-the-brink-119613.html. Accessed July 29, 2015.
Humphreys, Debra. 2011. Steve Jobs and Bill Gates on Higher Education and How to Prepare Students for Success (blog). Association of American Colleges and Universities. March 23. Available at www.aacu.org/leap/liberal-education-nation-blog/steve-jobs-and-bill-gates-higher-education-and-how-prepare. Accessed July 31, 2015.
Humphreys, Debra. 2012. “What’s Wrong with the Completion Agenda—And What We Can Do About It.” Liberal Education 98 (1): 158.
Ishiyama, John, Breuning, Marijke, and Lopez, Linda. 2006. “A Century of Continuity and (Little) Change in the Undergraduate Political Science Curriculum.” American Political Science Review 100 (November): 659–65.
Ishiyama, John, Miller, William J., and Simon, Eszter. 2015. Handbook on Teaching and Learning in Political Science and International Relations. Cheltenham, UK: Edward Elgar.
Jankowski, Natasha A., Ikenberry, Stanley O., Kinzie, Jillian, Kuh, George D., Chenoy, Gloria F., and Baker, Gianina R. 2012. Transparency & Accountability: An Evaluation of the VSA College Portrait Pilot. Champaign, IL: National Institute for Learning Outcomes Assessment.
Jaschik, Scott. 2014. “Obama vs. Art History.” Inside Higher Education. January 31. Available at www.insidehighered.com/news/2014/01/31/obama-becomes-latest-politician-criticize-liberal-arts-discipline. Accessed July 23, 2015.
Jones, Dennis P. 2013. Outcomes-Based Funding: The Wave of Implementation. Boulder, CO: National Center for Higher Education Management Systems.
Kinzie, Jillian, Hutchings, Pat, and Jankowski, Natasha A. 2015. “Fostering Greater Use of Assessment Results: Principles for Effective Practice.” In Using Evidence of Student Learning to Improve Higher Education, ed. Kuh, George D., Ikenberry, Stanley O., Jankowski, Natasha A., et al., 51–72. San Francisco: Jossey-Bass.
Kinzie, Jillian, and Jankowski, Natasha A. 2015. “Making Assessment Consequential: Organizing to Yield Results.” In Using Evidence of Student Learning to Improve Higher Education, ed. Kuh, George D., Ikenberry, Stanley O., Jankowski, Natasha A., et al., 73–91. San Francisco: Jossey-Bass.
Kuh, George D., and Ikenberry, Stanley O. 2009. More Than You Think, Less Than We Need: Learning Outcomes Assessment in American Higher Education. Champaign, IL: National Institute for Learning Outcomes Assessment.
Kuh, George D., Ikenberry, Stanley O., Jankowski, Natasha A., Cain, Timothy R., Ewell, Peter T., Hutchings, Pat, and Kinzie, Jillian. 2015. Using Evidence of Student Learning to Improve Higher Education. San Francisco: Jossey-Bass.
Kuh, George D., Jankowski, Natasha, Ikenberry, Stanley O., and Kinzie, Jillian. 2014. Knowing What Students Know and Can Do: The Current State of Student Learning Outcomes Assessment in U.S. Colleges and Universities. Champaign, IL: National Institute for Learning Outcomes Assessment.
Lederman, Doug. 2007. “Key GOP Senator Warns Spellings.” Inside Higher Education. May 29. Available at www.insidehighered.com/news/2007/05/29/alexander. Accessed July 30, 2015.
Matland, Richard E. 1995. “Synthesizing the Implementation Literature: The Ambiguity–Conflict Model of Policy Implementation.” Journal of Public Administration Research and Theory 5 (2): 145–74.
Mazmanian, Daniel, and Sabatier, Paul A. 1989. Implementation and Public Policy. Revised edition. Lanham, MD: University Press of America.
McClellan, E. Fletcher. 2009. “An Overview of the Assessment Movement.” In Assessment in Political Science, ed. Deardorff, Michelle D., Hamann, Kerstin, and Ishiyama, John, 39–58. Washington, DC: American Political Science Association.
McCormick, Alexander C., Gonyea, Robert M., and Kinzie, Jillian. 2013. “Refreshing Engagement: NSSE at 13.” Change: The Magazine of Higher Learning 45 (3): 6–15.
Pascarella, Ernest, and Terenzini, Patrick T. 2005. How College Affects Students: Volume 2: A Third Decade of Research. San Francisco: Jossey-Bass.
Peterson, Marvin W., Einerson, Marne K., Augustine, Catherine H., and Vaughan, Derek S. 1999. Institutional Support for Student Assessment: Methodology and Results of a National Survey. Stanford, CA: National Center for Postsecondary Improvement.
Pew Research Center. 2011. Is College Worth It? College Presidents, Public Assess Value, Quality, Mission of Higher Education. May 15. Available at www.pewsocialtrends.org/2011/05/15/is-college-worth-it. Accessed July 25, 2015.
Pressman, Jeffrey L., and Wildavsky, Aaron. 1973. Implementation: How Great Expectations in Washington Are Dashed in Oakland. Berkeley: University of California Press.
Rawlings, Hunter. 2015. “College Is Not a Commodity. Stop Treating It Like One.” Washington Post. June 9. Available at www.washingtonpost.com/posteverything/wp/2015/06/09/college-is-not-a-commodity-stop-treating-it-like-one. Accessed July 23, 2015.
Romzek, Barbara S., and Dubnick, Melvin J. 1987. “Accountability in the Public Sector: Lessons from the Challenger Tragedy.” Public Administration Review 47 (3): 227–38.
Stratford, Michael. 2015. “Alexander’s Higher Ed Act Agenda.” Inside Higher Education. March 24. Available at www.insidehighered.com/news/2015/03/24/alexander-weighing-new-accountability-tools-better-data-higher-ed-act-rewrite. Accessed July 30, 2015.
Suskie, Linda A. 2009. Assessing Student Learning: A Common Sense Guide. 2nd ed. San Francisco: Jossey-Bass.
The Secretary of Education’s Commission on the Future of Higher Education. 2006. A Test of Leadership: Charting the Future of U.S. Higher Education. Washington, DC: US Department of Education.
US Department of Education. 2013. “Obama Administration Launches College Scorecard.” February 14. Available at www.ed.gov/blog/2013/02/obama-administration-launches-college-scorecard. Accessed July 29, 2015.
US Department of Education. 2015. “Toward a New Focus on Outcomes in Higher Education: Remarks by Secretary Arne Duncan at the University of Maryland–Baltimore County.” July 27. Available at www.ed.gov/news/speeches/toward-new-focus-outcomes-higher-education. Accessed July 28, 2015.
Young, Candace C., Cartwright, Debra A., and Rudy, Michael. 2014. “To Resist, Acquiesce, or Internalize: Departmental Responsiveness to Demands for Outcomes Assessment.” Journal of Political Science Education 10: 3–22.