
Round and round we go: an ‘action’ ride on the rehearsing and performing cycle

Published online by Cambridge University Press:  02 April 2012

Mark Pulman*
Affiliation:
University of Huddersfield, University Campus Barnsley, Church Street, Barnsley, S70 2AN, UK
m.pulman@hud.ac.uk

Abstract

This article discusses the use of action research cycles on a course involving groupwork rehearsing and performance. The aim was to explore various pedagogic aspects of the activities and improve tutor practice. This account of four action research spirals, taking place over a 10-year period of rehearsing and performing, considers their management and operation including activities, interventions, data collection, reflection and re-planning. Conducting action research spirals of this length can raise issues about combining the roles of tutor and researcher, balancing action and reflection, and managing an extensive and varied data corpus. Employing several cycles in an action research inquiry however, allows the retracing of previous interventions, and a fine-tuning of the process. A spiral of cycles, incorporating interventions, together with informal and structured reflection, can be effective in improving practice and adding strength and depth to the inquiry. This study also offers a contribution towards filling the gap in the literature on lengthy cyclical action research studies.

Research Article

Copyright © Cambridge University Press 2012

Introduction

As a tutor involved in rehearsing various performance groups, I work with ensembles in developing their musical material and providing feedback on their rehearsal techniques and performances. It is an activity that, typically, we might repeat at various times during the academic year, and from year to year also.

This recurring activity of rehearsing and performing could be described as a cycle or, perhaps, a rehearsing and performing cycle. Such cycles might, for example, comprise: an introduction to the task or assignment, including an explanation of how it will be marked; a number of rehearsals; a performance; feedback and grading. Our music curriculum often requires us to repeat this or other patterns of activities and, should we be planning changes to the next iteration, this process is similar to an action research cycle. Tim Cain, writing in this journal about the use of action research in music education (Cain, 2008), provides a helpful explanation of how and why the action research cycle is relevant to us as practitioners working in the music classroom. In order to improve their practice, action researchers in music will ‘plan and carry out interventions . . . and evaluate the consequences of these interventions, interrogating data in order to ground their evaluations in evidence. They reflect on each stage in order to generate new plans, thus starting the cycle again’ (p. 284). Cain's informative review examines a large number of action research investigations involving music and analyses each in terms of the typical stages of the action research cycle: planning–action–evaluation–reflection. The motivation for the present article arises from Cain's review and his finding that, despite action research being regarded as a cyclical process (Somekh, 2006; Norton, 2009), there was a surprising lack of music action research projects that employed more than a single cycle. In fact most were ‘temporally truncated’ (Conway & Borst, 2001); that is, there was only one turn of the action research cycle. This paper, in its examination of a large number of cyclical ‘rides’, as the title suggests, is an attempt to help fill that gap. The focus of the article, therefore, is not on the substantive findings of the research as such (some of these are reported in Pulman, 2009), but on the process and management of the action research cycles themselves.

Kurt Lewin (1946), often considered the father of action research, was the first to describe the process of fact-finding, planning, exploratory action and evaluation as being cyclical and spiral (Somekh & Zeichner, 2009). Many authors describe four stages of the cycle: planning, action (implementing the plans), observing and reflecting. Re-planning can then take place and so round it goes again (Wadsworth, 1997). Spirals of action research can indicate the presence of several cycles which, as O'Brien and Moules (2007) explain, spiral upwards as the investigation lengthens. Heale (2003) provides a helpful illustration of cycles and spirals in his action research pack designed for practitioner researchers; Figs 1 and 2 illustrate these within a rehearsing and performing context.

Fig. 1 Stages of an action research cycle. 1. Planning the activities. 2. Action (rehearsing and performing). 3. Collecting data (assessments, observations, interviews). 4. Reflecting (evaluating learning and tutor practice)

Fig. 2 Illustration of an action research spiral

Some forms of inquiry using action research spirals focus on a single research question while others focus simultaneously on multiple research questions (Gallas, 1998). There are also many unique forms of action research that can occur ‘with main cycles and branching sub-cycles emerging spontaneously’ (Somekh & Zeichner, 2009, p. 17). With educational action research being suited to local contexts involving, for example, a syllabus or course, cyclical forms that are responsive and flexible to local agency may be more appropriate than those with a more fixed methodological design (Robson, 2002). Indeed, there can be flexibility also in the duration of these ‘curriculum inquiry cycles’ (Al-Qura'n et al., 2001): ‘tiny cycles’ (O'Brien & Moules, 2007) lasting one minute only, two-hour cycles (Baskerville & Pries-Heje, 1999) or, in the case of the present study, cycles of six weeks' duration. Deciding, in advance, how many cycles might be required for an investigation is not always possible, or even desirable, if interesting developments are uncovered in the final turn of the spiral. Indeed, to assume the spiral of planning–action–observation–reflection is relatively unfettered would be naïve; projects might end with a change of tutor, and ‘spiralling within a tight time frame might limit the space for reflection needed to formulate new plans and try out different approaches’ (Bowl et al., 2008, p. 91).

Rehearsing and performing

Returning to my involvement in the rehearsing and performing cycles, this project describes an action research study involving undergraduate popular music students. The year one group performance module, Performance Management, involves students rehearsing in bands and preparing for gigs. It is a year-long module that includes, among other activities, three rehearsing and performing assignments: a ‘Christmas Party’ entertainment at a public venue; a ‘Venues and Audiences’ daytime gig in a school, café or street busking; and a ‘Decade Tribute’ evening at a public venue. Four medium-sized rehearsal studios are used and bands are assigned two hours of rehearsal time per week plus an additional two hours of unsupervised rehearsing time. They are supervised by me and, occasionally, by another tutor. I work with each band in turn, moving from studio to studio, developing their song material and providing feedback on their rehearsing and performing activities.

This annual course cycle, established for over 10 years, has seen many developments including changes to the processes and activities of the assignments and in tutor practice. Each rehearsing and performing assignment within it usually comprises the following cycle of activities: introduction and discussion about the assignment; establishing the assessment criteria with training activities (often involving peer assessment); band rehearsing; peer assessment of the rehearsing; performance; assessment of the performance; and feedback.

The appropriation of these 32 rehearsing and performing cycles (Fig. 3) for action research was framed by three aims:

  1. Involving students in the ownership of assessment by inviting them to devise assessment criteria based on their personal attributes arising from their rehearsing.

  2. Understanding the processes involved in employing a peer assessment system; for example, the activity of peer marking, including where, within the cycle, this should be conducted.

  3. Improving my practice and confidence in using peer assessment techniques.

Fig. 3 The rehearsing and performing cycle

First-year cohorts between 2000 and 2011 were involved in the research, comprising 260 students and 108 bands in total. Prior approval for conducting action research into this module, initially taught on the BA (Hons) Popular Music delivered at Barnsley College before subsequently transferring to The University of Huddersfield, was obtained from the relevant department at each institution.

Educational action research is commonly associated with the researcher's values (Somekh & Zeichner, 2009) and it was important for me to consider these in relation to my involvement with the rehearsing and performing cycles taking place in the module. In this context, my values position included: a belief in the potential for peer learning and assessment techniques in developing students’ knowledge of themselves and their band members; respect for popular music students’ commitment to their music, their bands, their songs and their culture; trust in the student–tutor rehearsal relationship, and similarly the relationship between band members; and faith, also, in my ability to help ensure that the peer assessment process would be conducted with fairness and honesty. It is hoped that the presence of each of the above might be recognised in the account of the action research appearing later in the article. This study, then, attempted to be grounded in the values and culture of our popular music students and to be flexible also to local agency in bringing about change.

New knowledge and understanding about rehearsing and performing, for students of popular music and tutors alike, can be created in the participatory and social context of the activity itself (Green, 2002; Pulman, 2008). Band members and tutors each learn by doing, through their engagement in the social learning setting that characterises popular group music making. From a theoretical perspective, tutor practice in the rehearsing and performing cycles in this study is situated within social constructivist views of learning (Vygotsky, 1978; Bruner, 1983; Guba & Lincoln, 1994). The cycle of rehearsing and performing that was appropriated in this study as part of an action research cycle (McNiff, 1988; McNiff et al., 1996) is illustrated in Fig. 4.

Fig. 4 The action research cycle applied to the rehearsing and performing cycle

Referring to Fig. 4, the starting point of each cycle, the rehearsing and performing assignment, was the place at which the assignment was discussed, band membership and assessment criteria were considered (including training activities) and any changes or interventions to these were carried out (this is discussed further in the next section). Following the band rehearsing activity of the cycle, issues involving peer assessment of rehearsing were examined (with further interventions carried out). After the final rehearsal and sound-check had taken place, bands entertained a public audience at a gig: the band performing part of the cycle, in which their performances were also assessed. Further interventions took place in order to explore and evaluate various approaches to the, sometimes delicate, issue of communicating peer feedback. An analysis of the data arising from the activities of the cycle (which included peer assessment marking, observations and interviews) was subsequently considered. The conclusion of each cycle was a natural place to reflect on what had been learned about teaching, learning and assessment practice, the action research methodology, and the overall direction of the research. Such reflection allowed particular aspects of the development, practice and management of the cycle, or of the entire spiral, to be considered, changed and compared. Finally, the revision and re-application of the action research cycle restarted the next loop of the spiral. Naturally, there were occasions during the cycles where it was appropriate to reflect on the rehearsing activities and students’ responses before completion of the cycle. The end of each cycle, however, generally coincided with an end-of-term break: a natural point at which reflection on the completed cycle, and planning for the next, could take place.

The action research process of intervention, indicating a change of activity during a cycle, featured extensively in this study. Identifying which ‘intervention and change’ (Robson, 2002, p. 219) to make in the rehearsing and performing cycles was guided by the literature (McNiff et al., 1996) and given focus by the set of values described previously. Interventions included, for example, changing the method by which the assessment criteria were formulated, or altering the sequence or content of an activity. In deciding on an intervention, it was thought important to consider whether the change might disadvantage the bands or their members. There were several occasions, however, where no interventions were made, preferring instead to repeat and monitor the cycle unchanged, simply in order to clarify, compare or corroborate any effects.

Data collection and analysis procedures

Although the most common data collection methods for action research involving music, as reported by Cain, were qualitative, including reflective journals, interviews and participant observations, he also found some studies that employed quantitative methods. Indeed, the cycles in my study produced information and responses that were both qualitative and quantitative. These data were collected eclectically using three methods: quantitative data arising from assessments, interviews with band members, and direct tutor participant observation during the activities. No evaluative checklists or observational recording methods were employed when carrying out an intervention, as it was considered that the use of such instruments could be intrusive to the naturalistic setting of the cycles. Indeed, deciding against employing observational recording methods in this context is not unusual; action research differs from other traditions in its focus upon the researcher's values, which are considered to be as important as methodological factors (McNiff et al., 1996; McNiff & Whitehead, 2006). Observations were recorded afterwards using pencil and paper to note any events considered significant. This latter method is an illustration of ‘Observing’ and ‘Noticing’ (Reis-Jorge, 2007, p. 414), an occasional process of focusing on the activities and making interpretations without systematic evidence as a basis for reflection in the action research cycles. Although methodologically untidy, it was considered a pragmatic and reactive process arising from noticing events (McDonough & McDonough, 1997) that seemed appropriate for a tutor involved ‘in the moment’ of band rehearsing.

Analysis procedures were as follows: correlation analysis and statistical measurements were performed on the quantitative marking data; interviews were analysed using an adapted grounded theory approach (Charmaz, 2000, 2006); and observational data were analysed thematically (Boyatzis, 1998). Multi-methodological approaches such as this are increasingly regarded as suited to educational and pedagogical action research (Gorard & Taylor, 2004; Norton, 2009).
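The article does not specify which correlation procedure was applied to the marking data, so the following is an illustrative sketch only, with entirely hypothetical marks: a Pearson correlation between the marks the same band members received in two successive cycles, the kind of calculation that could support cycle-to-cycle comparisons of peer assessment data.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length mark lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical peer marks for the same five band members in two cycles
cycle_8 = [62, 70, 55, 68, 74]
cycle_9 = [65, 72, 58, 66, 78]
r = pearson(cycle_8, cycle_9)  # close to 1 when rankings are stable
```

A high coefficient between cycles would suggest that peers rate individuals consistently; a sharp drop might prompt a closer look at what changed between the two assignments.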

Spirals of cycles

Whereas the largest number of cycles reported in Cain's review was three, in this study four lengthy action research spirals were conducted across a total of 32 cycles of rehearsing and performing spanning over ten years. The first spiral was concerned with searching for assessment criteria that would be suitable for peer assessment of group rehearsals. It involved interventions in a substantial number of rehearsing and performing cycles across a nine-year period. Shortly after the peer assessment system was introduced, it became necessary to address the problem of how students marked each other; this was the concern of spiral two. Interventions, and subsequent monitoring of these, were carried out over a four-year period. Spiral three similarly arose out of the early difficulties with peer marking; this had become problematic due to a lack of thought about when to conduct the marking itself. Interventions and monitoring of these were carried out over a three-year period. Finally, following a number of refinements to the peer assessment criteria, it became apparent that the resulting peer feedback, although valuable, could also be delicate and required sensitivity in its handling. Communicating this feedback was proving to be problematic; this issue, requiring interventions and monitoring extending over a six-year period, was the concern of spiral four. Table 1 presents a summary of these four action research spirals and Table 2 illustrates the 32 rehearsing and performing cycles, over which the action research spirals were superimposed, by assignment title (Christmas Party; Venues & Audiences; Decade Tribute). The following section contains a brief summary of the development of the four action research spirals which, as author, tutor and researcher, I thought more appropriate to present separately, rather than in a rigid chronological order, given the focus of this paper.

Table 1 The four action research spirals

Table 2 The 32 rehearsing and performing cycles

Spiral One: Deciding on appropriate peer assessment criteria (commenced 2000; discontinued 2009)

As a researcher I thought it important, during the initial action research, to obtain experience in the management of the action research spiral. Similarly, I also considered it necessary to acquire competence, as a tutor, in the practice of peer assessment (Carr & Kemmis, 1986; Guba & Lincoln, 1994; Brown, 1998) and, in view of the primacy of these dual aims, the interventions or changes that took place at the outset were necessarily modest.

This action research spiral, from which the other three spirals subsequently ‘branched’ (Somekh & Zeichner, 2009), proved to be the lengthiest of the entire study. It was used to help solve the problem of which type of assessment criteria might be appropriate for assessing students’ band rehearsals. I was interested in employing criteria that, in addition to providing a mark for the rehearsing, could also act as a focus for learning in the rehearsal.

During the introduction, discussion and class-based activities (Fig. 4), I discussed with the bands the idea that they might assess each other, in each of these cycles, using a simple contribution to rehearsals criterion, with marks calculated using the so-called ‘zero-sum’ method (Footnote 1). This method was employed in the three rehearsing and performing cycles of 2000–2001; although it provided a contribution percentage mark, it yielded little about individuals’ rehearsing skills or how these might have developed in the rehearsals. In order to help develop students’ skills and awareness in rehearsing, the idea of using their personal attributes as assessment criteria formed while observing those cycles. Putting this into practice, the new cohort of bands were invited, during the introductory sessions, to formulate personal attributes that they considered important for rehearsing and which they agreed could be used as their assessment criteria. Examples of these group-agreed criteria in cycle 4 included Confidence; Creative input; Speaking out; and Staying focused. My evaluation of this was that it provided a more informative assessment in which peers rated each other across specific aspects of rehearsing that they felt were important.

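The footnote defining the ‘zero-sum’ method is not reproduced in this excerpt. As an illustration only, one common fixed-pool variant has each member divide a fixed pool of points among their bandmates, with an individual's contribution percentage being their share of all the points awarded; the band members, marks and pool size below are entirely hypothetical.

```python
def contribution_percentages(awards):
    """awards: dict mapping each assessor to the points they distribute
    among their peers (each assessor spends the same fixed pool).
    Returns each member's share of all points awarded, as a percentage."""
    totals = {}
    for assessor, marks in awards.items():
        for member, points in marks.items():
            totals[member] = totals.get(member, 0) + points
    grand_total = sum(totals.values())
    return {m: 100 * t / grand_total for m, t in totals.items()}

# Hypothetical three-piece band: each member splits a pool of 100 points
# between the other two members.
awards = {
    "drummer":   {"guitarist": 60, "vocalist": 40},
    "guitarist": {"drummer": 55, "vocalist": 45},
    "vocalist":  {"drummer": 50, "guitarist": 50},
}
shares = contribution_percentages(awards)
```

Because every assessor distributes the same fixed pool, one member's gain is necessarily another's loss, which is what gives the method its ‘zero-sum’ character, and why it says little about how any individual's rehearsing skills are developing.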
Although I felt this was an improvement, what still appeared to be absent were criteria that, in addition to being relevant to individuals, also focused on what they, as individuals, perceived to be skills that could be further developed. Consequently, to help support the development of these, students were asked, in the introductory activities, to reflect on their personal strengths and weaknesses. For cycle 7, therefore, individuals were invited to suggest two of their rehearsal strengths and one weakness, on which they agreed to be assessed by their band members. Their generally positive response to this activity gave encouragement, in cycle 8, to expand the number of personal weakness criteria from one to three. This, it was hoped, might further increase individuals’ awareness of, and need to develop, these qualities. Students would remain in the same bands, using the same personal attribute criteria, for cycle 9, which would allow comparisons to be made between the marks awarded from cycle to cycle.

The qualitative observational data arising from the cycles were supplemented by six semi-structured interviews with individual band members about their responses to the various peer assessment activities. Students’ interview feedback was generally positive, and some discussed the idea that the bands themselves might suggest each individual band member's personal attribute criteria, based on their previous rehearsal experiences with them. It was planned to do this in the three cycles of the next academic year.

Being a little more confident of my practice in using personal attributes as assessment criteria, my thoughts turned towards how these might be refined, in future cycles, to develop individuals’ rehearsal awareness further. Reflecting on the interviews and observations of the various interactions between band members, it was becoming apparent that trust was important for them to be able to respond appropriately to giving or receiving assessment feedback. Indeed, deciding on an appropriate feedback mechanism was a problem in itself; this issue is the concern of Spiral Four. Observations of the new 2003–2004 class in their introductory workshops suggested that they were exhibiting good levels of groupwork skills and trust. On this basis, I decided to bring forward the activity in which individuals formulated their personal weakness attributes to the first cycle of the academic year (cycle 10), in order to ascertain whether engaging in this activity would be beneficial earlier in the academic year. It was also decided to take forward the idea, which emerged during the student interviews, of inviting each band to suggest the three personal weaknesses that they considered most appropriate for each of their members. As students had already spent considerable time rehearsing with their bands, it was thought not unreasonable that the band with which students had rehearsed during cycle 10 should also formulate the personal weakness attributes for each member in cycle 11. It was important to plan an activity for this that was fair, transparent and agreeable to the class. Cycle 12 included a refinement which, it was hoped, would motivate individuals further in bringing about improvements: a single group attribute criterion, general contribution to rehearsing, replacing the usual three group-agreed attributes. The purpose of this change was to increase the focus on, and weighting towards, individuals’ personal weakness attributes.

Consolidation of what had already been learned guided the planning for the 2004–2005 course cycle: retracing and ‘fine-tuning’ (Kember et al., 2000) previous activities and monitoring students’ responses (peer marking data, observations, interviews) rather than introducing further changes. I thought it unwise, however, to assume that activities which had previously been successful would, with each new cohort, continue to be so. Consequently, bands with which each student had previously rehearsed continued to determine students’ individual personal weakness criteria.

One intervention, however, was reversed: the general contribution to rehearsing criterion, introduced in cycle 12, was dropped for cycle 15. This was because students routinely allocated the same mark to every band member, rather than developing the fine discriminatory skills about individuals’ rehearsal contributions that had been intended. Interventions do not always lead to improvements.

Consequently, the assessments in cycle 15 (the final cycle of the academic year 2004–2005) were based solely on individuals’ three personal weakness attributes which, it was hoped, would motivate them further towards bringing about improvements in these areas.

Ensuing cycles, from 2005 onwards, enabled the operation and responsiveness of the peer assessment criteria process to be monitored with subsequent cohorts. The outcome of this action research spiral was a solution to the problem of formulating peer assessment criteria that enabled each band member to develop and improve key qualities in themselves thought important for achieving a productive rehearsal. In moving through group-agreed criteria, self-selected personal weaknesses, and finally personal weaknesses identified by the other band members, it provided a graduated process of engagement with peer assessment criteria. The experiences of these more recent cycles were not always positive, however. For example, the activity whereby bands determined the personal weakness criteria of their members was not successful during the cycles of 2006–2007, as trust between students had not developed as readily as expected. Further monitoring took place during cycles 23 and 26, and another tutor in the department agreed to lead these assignments, thus enabling the peer assessment criteria process to be independently evaluated. This tutor reported that no difficulties had been encountered, bringing the spiral to an end in 2009.

Spiral Two: Deciding between secret marking or open collaborative marking (commenced 2001; discontinued 2005)

An issue that emerged during the peer marking process (Fig. 4) was whether students should mark their peers in secret or openly and collaboratively, as a band. Marking in secret, I assumed, would lead to a more honest and fair set of marks. Collaborative marking, however, might support better learning opportunities and develop students’ maturity and trust. Observing each activity and noticing students’ responses would, I hoped, suggest a possible solution. In the first three rehearsing and performing cycles of 2000–2001 (cycles 1–3) I did not appreciate that there might be significance in the method of marking, leaving students to mark informally as they wished. For the rehearsing and performing cycles 4 and 5, however, I decided to facilitate secret marking and collaborative marking respectively. These interventions enabled a comparative analysis to be made of the peer assessment data arising from both activities. Observing and noticing how individuals responded when marking, whether in private or in collaboration, was fascinating: some were hesitant, a few were quite confident within the collaborative discussion, and others, seemingly, were more comfortable marking in secret. As before, students experienced both secret and collaborative marking activities: secret marking in cycles 7 and 8, and collaborative marking in cycle 9. Some bands, particularly those that became somewhat dysfunctional in their rehearsing, found collaborative marking to be an awkward experience. Five interviews were also conducted with individual band members, and analysis of their responses suggested that peer assessment should be conducted in secret, which, they believed, would lead to more honest feedback. The next cohort of students experienced secret marking in cycles 10 and 11, with the intervention of optional collaborative marking in cycle 12 (offered this option, most bands still preferred to mark privately), which enabled me to observe their responses and monitor the marking data. It became apparent, however, that bands which collaborated in their marking would award nearly identical marks to each of their members. A further five interviews were conducted and these responses, in addition to providing feedback on tutor practice within the cycles, continued to inform the planning of subsequent action research spirals. Students expressed a preference for secret marking, as it led to more honest feedback; I reached a similar conclusion after comparing and evaluating the marking data arising from each method. The issue that was the purpose of this action research spiral had been solved: secret marking was adopted as the norm, with occasional optional collaborative marking where a sufficient level of trust and honesty was in evidence. Action research spiral two ended with cycle 12 in 2004.

Spiral Three: Deciding when to conduct the peer assessment activity (commenced 2002; discontinued 2005)

A third issue that was becoming apparent in using peer assessment involved the timing of the assessment activity itself. In the initial cycles, and without sufficient thought, I asked the band members to assess each other's rehearsing shortly after they had already received the written feedback and marks given for their bands' actual on-stage performances. Each band member therefore knew whether their band had performed successfully, or otherwise, before assessing each other's rehearsing. Although students were aware of their band's performance grade, I assumed that this would not influence how peers would mark the rehearsing element. Further, it was convenient for me to tackle both assessment elements (giving feedback about the band performances and conducting the rehearsing peer assessment) in the same teaching session. I began to notice, however, that for bands which had performed less successfully, a larger spread of peer assessment marks than normal, as measured by standard deviation, was evident. These band members also often awarded low, usually identical, marks to certain individuals. I suspected that marking cartels might be operating, punishing individuals for the poor on-stage performance of the band rather than assessing purely on the rehearsal criteria; the purpose of Spiral Three was to find a solution to that problem. For the incoming 2002–2003 cohort, individuals assessed their band members after knowing their band performance mark in the rehearsing and performing cycles 7 and 8, as usual. This was changed, however, in cycle 9, with marking completed before the bands performed; this allowed a comparative quantitative analysis to be performed across each process. With the new cohort of 2003–2004, the interventions of the previous year were repeated, similar to the ‘retracing steps’ idea of O'Sullivan (2002, p. 529), in order to gather further data and to monitor the change at another turn of the spiral. It was becoming clear, from interviews, observations and marking data, that assessments seemed best conducted during the sound check, before students had even performed, let alone received their band performance mark; this became established practice from cycle 12 onwards.

On reflection, there were a number of difficulties in managing the variety of peer assessment activities taking place in the action research cycles. It was becoming evident, for example, that cohesion was deteriorating within certain bands. Did this increasingly dysfunctional group behaviour arise from the interventions in the peer assessment activities, or was it a consequence, perhaps, of some individuals – free riders – not pulling their weight? Reflecting on my inexperience in the management of these cycles raised a question about whether or not I was achieving an appropriate balance between my tutor and researcher roles; this is considered in the discussion section later in the article. Although overall progress was encouraging, another question remained: had students' learning improved because of their involvement as peer assessors, or as a consequence of improved tutor practice?

Spiral Four: Deciding the most appropriate method for communicating sensitive peer feedback (commenced 2004; discontinued 2010)

The activity that proved most challenging for students and tutor alike was that of communicating to individuals the sometimes delicate feedback from their band about what they thought were their personal weaknesses in rehearsals (described in Spiral One). This feedback was usually formulated by the band following the venues and audiences assignment, and normally given once during the course. Although my first attempt (cycle 11) at facilitating band-to-individual feedback appeared to be successful, with both those giving it and those on the receiving end happy with the process, the activity proved awkward and embarrassing for many over the next two course years (cycles 14 and 17). Indeed, for the next iteration (cycle 20) I decided, through observation of the cohort, that there were insufficient levels of trust and maturity even to attempt it. Rather than helping bands to formulate personal weakness criteria for those with whom they had previously rehearsed and to give this feedback directly to those individuals, an intervention (arising from ideas suggested by two interviewees) was tested twice in 2010 (cycles 29 and 30). In these cycles individuals provided written personal weakness criteria for their previous band members. These submissions, which I anonymised before offering the feedback, seemed to be a practical solution that helped to avoid possible face-to-face embarrassment between band members. Following a subsequent monitoring of cycle 32, this action research spiral was discontinued. Overall, the period from 2005 to 2011 saw a gradual de-scaffolding of the action research apparatus (discontinuing formal interviews and writing fewer observational memos, for example), as the rehearsing and performing cyclical process appeared stable and responsive to each cohort.

Discussion

This section considers the account of the research spirals in the study and, in the context of educational action research, discusses the various issues that arise.

There were two kinds of cycle that characterised the study: the rehearsing and performing cycles, which were experienced by the students, and the action research spirals, which were probably less evident to them. In comparing the development of each, it would be fair to describe the rehearsing and performing cycle as going round fairly predictably, in roughly equal time frames, as a consequence of the course. The research spirals, however, moved more unevenly; many of the earlier cycles (rehearsing and performing cycles 4–12, for example) were preoccupied with a 'great rush for data' (Bowl et al., 2008, p. 88). Conversely, in the second half of the study the research spiral moved slowly, with interventions absent in several cycles due to the retracing and monitoring procedure that I considered necessary to establish whether previous changes were desirable.

Employing multiple cycles in a music action research project allows comparisons to be made and, in the retracing of previous interventions, enables a rechecking of those earlier findings, so adding depth to the inquiry. This may be one advantage that multiple cycles have over studies involving just one or two turns of the spiral. For example, if Spiral One had concluded after rehearsing and performing cycle 3, or even after six turns of the spiral, valuable knowledge about students' rehearsal attributes and peer assessment processes and techniques would not have been uncovered, and improvements to my own practice would have been similarly limited. This might be illustrative, perhaps, of what Cain was thinking when saying 'research might also examine how teachers use the action research spiral to drill down into professional issues and problems, to discover knowledge that might go well beyond commonsense theorising' (Cain, 2010, p. 173).

An action research study involving bands, their members, and a tutor-researcher not only raises questions about how to achieve the combined teaching and researcher role, but also about where the boundaries between teaching and research might be drawn during the spirals. Combining these two roles can be problematic in view of, for example, competing demands and resources, and deciding when to observe as a researcher and when to act as a tutor. Indeed, Elliott (2009) considers that the great challenge for tutor educators is to integrate their dual roles as educational practitioners and researchers. It was a challenge for this study too, particularly in the discussions within bands about their members' personal attributes and in the conduct of the collaborative marking activities. Although a duality such as this reminds us of 'the teaching and doing research dilemma' (Reis-Jorge, 2007, p. 414), using a spiral of cycles might help to support the tutor-researcher, step by step, in developing the skills necessary to integrate these roles successfully. Indeed, this was my own experience, as each successive cycle added a developing assurance in my practice, so demonstrating the value of multiple cycles.

A related issue, given that there were several 'monitoring' cycles in which no interventions took place, concerns the question of whether these cycles constitute legitimate action research or rather what Norton (2009) might describe as 'action learning' (p. 31). Given that reflection and subsequent planning for the next year are also part of an institution's evaluation process, would the entire length of these four spirals count as action research, or just those involving the rehearsing and performing cycles 1–15, because they contained the majority of the interventions? As tutors we are aware of the need for ongoing monitoring of our courses and, as such, this would usually include comparing activities that are repeated from time to time, and from year to year. If these activities can be considered as cyclical in nature, then a cycle of planning, action, observation and data gathering, reflection and re-planning for the next iteration might each contribute, on a variety of levels, to this monitoring. If so, our course evaluation might also be likened to an action research spiral, that is, one 'conceptualised as having a local effect' (Cain, 2008, p. 309) on improving practice (Somekh & Zeichner, 2009).

My study, however, went beyond the usual course evaluation procedures in its use of action research processes including, for example, interventions, student interviews, action and reflection. Indeed, for the latter, and especially for spirals of this length, it was, as noted by Bowl et al. (2008), 'important, to find a way of overcoming the problem of balancing action and reflection' (p. 88). Action research cycle theory tends to position 'reflection' at the end of a cycle, occurring at a natural place in which to 'see what effect your change has made (reflect)' (Norton, 2009, p. 69). Reflection, however, logically follows observation at any time within the cycle: observing my students in activities such as formulating their criteria, awarding marks and receiving feedback prompted an immediate and ongoing reflective thinking process at an informal level (O'Sullivan, 2002), involving 'reflection on professional action' (Wallace, 1998). Structured reflection (Vygotsky, 1978; Bruner, 1983), including 'collecting, describing, analysing and evaluating information and engagement with public theories in a systematic way' (Wallace, 1991, 1998; Reis-Jorge, 2007, p. 414), was also possible because of the period of time, usually a week, between one activity and the next. Reflective processes in the second half of the study increasingly focused on how one cycle compared with the previous, and how one year compared with the preceding. Action research that utilises a spiral of cycles allows different levels of reflection (Reis-Jorge, 2007), at various points within and from cycle to cycle, which over a period of time might also be shaped by broader historical, political and ideological contexts.
Indeed, as Cain concludes, there was little broader reflexivity apparent in the studies he reviewed; was this a consequence, perhaps, of most projects not proceeding beyond one turn of the action research spiral? All of this suggests that my reflection on the rehearsing and performing activities might be likened to a continually developing process taking place with each successive cycle, culminating, as an end-of-course evaluation, in a structured reflective overview of my practice as a whole. This account of action and reflection may be a process that colleagues will recognise in their own music practice.

Another issue involves the considerable data that can arise from employing a spiral of cycles. The data corpus in this study, comprising quantitative peer marking, interviews and observation, required decisions about, for instance, whether one type of data was more useful than another for progressing the action research from cycle to cycle, or from year to year. Further, how might such heterogeneous information be synthesised so as to allow it to be apprehended holistically in the reflective process? Developing rigorous methods to integrate, analyse and interpret mixed data types for each cycle would seem to be essential. The spirals in this study employed simple statistical measurements, observation in the form of written notes as an aide-mémoire, and interview data; reflecting on this information as a whole helped the evaluation and proved a practical approach to data analysis that was grounded in the rehearsing and performing activities.

In summary, then:

  • The 32 rehearsing and performing cycles moved round in roughly equal time frames.

  • The four research spirals, however, moved round in uneven time frames, with interventions absent in several cycles due to retracing and monitoring procedures.

  • Employing multiple cycles allowed comparisons to be made and, in the retracing of previous interventions, enabled a rechecking of earlier findings, added depth to the inquiry and led to improvements in practice.

  • Multiple cycles also helped to support the tutor-researcher duality by developing, through each successive cycle, the skills necessary to integrate these two roles.

  • The action research cycle of planning, action, observation, data gathering, reflection and re-planning for the next iteration can contribute to course evaluation and to improvements in our practice as music educators.

Final thoughts

A spiral of cycles, incorporating interventions together with informal as well as structured reflection, can be effective in informing and improving our practice. The action research spiral, as an iterative and cumulative process, can form a powerful buttress to our methodological design; it can add strength to our inquiry and, as music practitioners, help us to make improvements to our work. The spiral process is particularly amenable to local agency within various musical settings: from K-12 (for example, Cain, 2010) to undergraduate, and from formal to informal learning contexts. As tutors, we think about the effectiveness of our music curriculum and the process by which it is delivered, and consider ways in which improvements can be made to each. Thinking in terms of cycles and spirals, when deciding how best to improve our effectiveness as tutors, is an approach that might help our performance in the classroom. The action research spiral can strengthen connections between research and practice, enhance professional development and improve our practice as music educators.

Footnotes

1 Students’ final marks were obtained by totalling the marks awarded to each student by their band members, dividing each total by the band mean, and multiplying the result by the ‘performance mark’. This process is sometimes described as the ‘zero-sum’ method (Sharp, 2006) because any student who is peer assessed as providing zero contribution receives zero marks.
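The footnote's calculation can be sketched as follows. This is an illustrative reading of the 'zero-sum' method as described above; the function name, rounding and example figures are my assumptions, not details from the study or from Sharp (2006).

```python
# Illustrative sketch of the 'zero-sum' peer-marking calculation:
# each member's total peer marks, relative to the band mean, scales
# the band's performance mark into an individual mark.

def zero_sum_marks(peer_totals, performance_mark):
    """peer_totals: total peer marks each member received from band-mates."""
    mean_total = sum(peer_totals) / len(peer_totals)
    # A member's weighting is their total divided by the band mean, so a
    # member peer-assessed at zero contribution receives zero marks.
    return [round(t / mean_total * performance_mark, 1) for t in peer_totals]

# A hypothetical five-piece band whose performance was marked at 60,
# with one member judged to have contributed nothing:
print(zero_sum_marks([20, 20, 20, 20, 0], 60))
# → [75.0, 75.0, 75.0, 75.0, 0.0]
```

Note how the method redistributes the non-contributor's share among the others: the remaining members' marks rise above the band performance mark, which is the 'zero-sum' character the footnote refers to.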

References

AL-QURA'N, M., HAIKAL, M., REZEQ, M., SHALABI, N., FATHI, S. & GHOUSH, S. (2001) The development and implementation of a sixth grade geology unit through collaborative action research. Educational Action Research, 9, 395–411.
BASKERVILLE, R. & PRIES-HEJE, J. (1999) Grounded action research: a method for understanding IT in practice. Accounting, Management and Information Technologies, 9, 1–23.
BOWL, M., COOKE, S. & HOCKINGS, C. (2008) Researching across boundaries and borders: the challenges for research. Educational Action Research, 16, 85–95.
BOYATZIS, R. (1998) Transforming Qualitative Information: Thematic Analysis and Code Development. Thousand Oaks: Sage.
BROWN, S. (Ed.) (1998) Peer Assessment in Practice. Birmingham: Staff and Educational Development Association.
BRUNER, J. (1983) Child's Talk: Learning to Use Language. Oxford: Oxford University Press.
CAIN, T. (2008) The characteristics of action research in music education. British Journal of Music Education, 25, 283–313.
CAIN, T. (2010) Music teachers' action research and the development of Big K knowledge. International Journal of Music Education, 28, 159–175.
CARR, W. & KEMMIS, S. (1986) Becoming Critical: Education, Knowledge and Action Research. Philadelphia: The Falmer Press.
CHARMAZ, K. (2000) Grounded theory: objectivist and constructivist methods. In Denzin, N. & Lincoln, Y. (Eds.), Handbook of Qualitative Research (pp. 509–536). Thousand Oaks: Sage.
CHARMAZ, K. (2006) Constructing Grounded Theory: A Practical Guide Through Qualitative Analysis. London: Sage.
CONWAY, C. M. & BORST, J. (2001) Action research in music education. Applications of Research in Music Education, 19 (2), 3–8.
ELLIOTT, J. (2009) Building educational theory through action research. In Noffke, S. & Somekh, B. (Eds.), The SAGE Handbook of Educational Action Research (pp. 28–38). London: Sage.
GALLAS, K. (1998) Sometimes I Can Be Anything: Power, Gender and Identity in a Primary School Classroom. New York: Teachers College Press.
GORARD, S. & TAYLOR, C. (2004) Combining Methods in Social and Educational Research. Maidenhead: Open University Press.
GREEN, L. (2002) How Popular Musicians Learn: A Way Ahead for Music Education. Aldershot: Ashgate.
GUBA, E. & LINCOLN, Y. (1994) Competing paradigms in qualitative research. In Denzin, N. & Lincoln, Y. (Eds.), Handbook of Qualitative Research (pp. 485–499). Thousand Oaks: Sage.
HEALE, G. (2003) Applying theory to practice: an action research resource pack for professionals. Clinical Chiropractic, 6, 4–14.
KEMBER, D., LEUNG, D., JONES, A., YUEN LOKE, A., McKAY, J., SINCLAIR, K., TSE, H., WEBB, C., WONG, M. & YEUNG, E. (2000) Development of a questionnaire to measure the level of reflective thinking. Assessment and Evaluation in Higher Education, 25, 381–395.
LEWIN, K. (1946) Action research and minority problems. Journal of Social Issues, 2 (4), 34–46.
McDONOUGH, J. & McDONOUGH, S. (1997) Research Methods for English Language Teachers. London: Arnold.
McNIFF, J. (1988) Action Research: Principles and Practice. Basingstoke: Macmillan.
McNIFF, J., LOMAX, P. & WHITEHEAD, J. (1996) You and Your Action Research Project. London: Routledge.
McNIFF, J. & WHITEHEAD, J. (2006) All You Need to Know About Action Research. London: Sage.
NORTON, L. (2009) Action Research in Teaching and Learning. Abingdon: Routledge.
O'BRIEN, N. & MOULES, T. (2007) So round the spiral again: a reflective participatory research project with children and young people. Educational Action Research, 15, 385–402.
O'SULLIVAN, M. (2002) Action research and the transfer of reflective approaches to in-service education and training (INSET) for unqualified and underqualified primary teachers in Namibia. Teaching and Teacher Education, 18, 523–539.
PULMAN, M. (2008) Knowing Yourself Through Others: Peer Assessment in Popular Music. Unpublished PhD thesis, Sheffield Hallam University.
PULMAN, M. (2009) Seeing yourself as others see you: developing personal attributes in the group rehearsal. British Journal of Music Education, 26, 117–135.
REIS-JORGE, J. (2007) Teachers' conceptions of teacher-research and self-perceptions as enquiring practitioners – a longitudinal case study. Teaching and Teacher Education, 23, 402–417.
ROBSON, C. (2002) Real World Research. Oxford: Blackwell.
SHARP, S. (2006) Deriving individual student marks: a tutor's assessment of group work. Assessment and Evaluation in Higher Education, 31, 329–343.
SOMEKH, B. (2006) Action Research: A Methodology for Change and Development. Maidenhead: Open University Press.
SOMEKH, B. & ZEICHNER, K. (2009) Action research for educational reform: remodelling action research theories and practice in local contexts. Educational Action Research, 17, 5–21.
VYGOTSKY, L. (1978) Mind in Society: The Development of Higher Psychological Processes. Cambridge, MA: Harvard University Press.
WADSWORTH, Y. (1997) Everyday Evaluation on the Run. Australia: Allen & Unwin.
WALLACE, M. (1991) Training Foreign Language Teachers: A Reflective Approach. Cambridge: Cambridge University Press.
WALLACE, M. (1998) Action Research for Language Teachers. Cambridge: Cambridge University Press.
Fig. 1 Stages of an action research cycle. 1. Planning the activities. 2. Action (rehearsing and performing). 3. Collecting data (assessments, observations, interviews). 4. Reflecting (evaluating learning and tutor practice)

Fig. 2 Illustration of an action research spiral

Fig. 3 The rehearsing and performing cycle

Fig. 4 The action research cycle applied to the rehearsing and performing cycle

Table 1 The four action research spirals

Table 2 The 32 rehearsing and performing cycles