
Attitudes towards online feedback on writing: Why students mistrust the learning potential of models

Published online by Cambridge University Press:  22 June 2015

Carola Strobl*
Affiliation:
Department of Translation, Interpreting, and Communication, Ghent University, Belgium (email: Carola.Strobl@ugent.be)

Abstract

This exploratory study sheds new light on students’ perceptions of online feedback types for a complex writing task, summary writing from spoken input in a foreign language (L2), and investigates how these perceptions correlate with their actual learning to write. In online learning environments, students tend to favour clear-cut, instructivist feedback over constructivist feedback and guided self-evaluation through model solutions. However, the former type is too limited to tackle all dimensions of advanced writing. Constructivist feedback, in the form of guided modelling, makes it possible to address the higher-order concerns involved in summary writing. In addition, it is widely acknowledged that activating the zone of proximal development (ZPD) through cognitive involvement is beneficial to learning. To investigate students’ learning from both types of feedback, a one-group pre-post-test intervention study was set up. Students attending a course on summary writing in L2 within a bachelor programme in Applied Languages (n=38) followed an individual online learning module containing both instructivist fill-the-gap exercises and model solutions with constructivist guiding questions for self-assessment. The students’ actual learning gain was measured through pre- and post-tests, and compared with their perceived learning gain, as expressed in self-evaluation. The comparison reveals a dichotomy between the students’ observed learning curve and an underestimation of their own progress. This dichotomy was found to originate in a mismatch between their expectations towards the online learning module and the characteristics of the constructivist feedback conveyed. This mismatch can be attributed to three key factors: (1) evaluation, (2) linguistic focus, and (3) learner motivation.

Type
Regular papers
Copyright
Copyright © European Association for Computer Assisted Language Learning 2015 

1 Introduction

Feedback plays a vital role in online learning and is inextricably linked to design principles for computer-assisted language learning (CALL) (Felix, 2005; Heift, 2010). Felix (2005) discusses an “about turn” in the provision of feedback in CALL due to the focus on constructivist design principles at the beginning of the millennium. However, she holds that behaviourist pattern-drill should not be abandoned altogether, mainly on account of its time-efficiency. According to Kirschner’s model of educational design, decisions about the appropriate design of an online intervention should primarily be informed by knowledge about the learners’ background and by the learning goals (Kirschner, 2002). An online module that aims to foster summary writing from spoken input in a foreign language needs to combine different instruction techniques and, accordingly, forms of feedback. The task is complex and challenges learners on several levels, as it involves different skills (listening and writing), techniques (note-taking), and linguistic and cognitive demands. Moreover, feedback on summary writing needs to address both lower-order concerns (LOCs), i.e. problems at word and sentence level, like accurate choice of cohesive ties, and higher-order concerns (HOCs), i.e. problems at text level, like content selection and coherence. The latter, a demanding cognitive ability, cannot be trained by means of pre-formatted instructivist feedback.

Following the Cognitive Mediational Framework (Doyle, 1977), no direct relationship can be assumed between instruction, including the provision of feedback, and actual learning results. Several factors play a mediating role, among them the learners’ perception of the instruction provided. In the same vein, Norris and Manchón (2012) claim that writing development is mediated by different individual factors like learners’ goals, beliefs, attitudes, and learning histories. Therefore, in order to assess the effectiveness of an intervention in learning-to-write, it is important to triangulate data related to process, product, and students’ attitudes, so as to ascertain the role of feedback in the learning process (Norris & Manchón, 2012: 238).

Below is a brief review of literature related to feedback on online learning and feedback on L2 writing. Due consideration is also given to previous research on learners’ attitudes towards feedback.

1.1 Feedback on online learning

In the area of CALL, research into the effectiveness of different forms of feedback has come to diverging conclusions (Robinson, 1991; Felix, 2002; Heift, 2005; Rosselle, Sercu & Vandepitte, 2009). Robinson (1991) concludes from her literature review that internal detection of errors and the learner providing the answers are more effective than the external provision of answers through program disclosure. Felix (2002), however, holds that pattern drill exercises are more appropriate and time-efficient to foster accuracy. Heift (2005) and Rosselle et al. (2009) found that students’ uptake, i.e. the response to corrective feedback, in grammar and vocabulary-related exercises was higher when the feedback contained metalinguistic information than in the case of recasts or error location only. The only CALL study to my knowledge that includes a focus on HOCs for summary writing is Kintsch, Steinhart, Stahl, Matthews and Lamb (2000). The authors discuss the affordances of educational software based on Latent Semantic Analysis for this instructional goal. They report only partial success, pointing to the fact that, apart from feedback, students need explicit strategy training and modelling. So, a combination of pre-task strategy training and constructivist feedback in the form of model provision could be a promising route.

Feedback on content plays an important role in learning to summarise, but has not been a focus of CALL studies until now. Therefore, this brief literature review on feedback effectiveness also considers relevant studies in general online pedagogy. Research on the impact of online feedback on students’ learning in areas other than CALL also leads to different conclusions (Mandernach, 2005; van der Kleij, Eggen, Timmers & Veldkamp, 2012). According to van der Kleij et al. (2012), an important factor in explaining the different findings is the level of intended outcomes. They propose a comprehensive framework for online feedback classification based on Hattie and Timperley (2007) and Shute (2008), integrating the feature “targeted level of outcome” with the traditional description levels “content” and “timing”. Regarding timing, feedback can either be delayed or immediate. At the content level, they differentiate between “knowledge of results” (KR), “knowledge of correct response” (KCR), and “elaborated feedback” (EF). The targeted outcome of feedback can be at the “self” level (e.g. learner characteristics), “task” level (e.g. knowledge building), “process” level (e.g. a worked-out example), and “regulation” level (e.g. self-assessment). As constructivist pedagogy focuses on the learner’s process level and cognitive involvement, feedback associated with this approach provides elaborated information about the targeted response. Instructivist pedagogy, on the other hand, seeks to automate discrete items of a task, and therefore provides immediate KR without elaborating on the correct response. In their experiment on the effect of feedback in online learning in a content-related course, van der Kleij et al. found that, contrary to the initial hypothesis, the students in the KCR+EF condition, which aimed at task and process levels, did not outperform the students in the KR-only condition. Feedback timing turned out to play an important role in their study, as students paid more attention to KCR+EF feedback delivered immediately after each response than to delayed feedback. Mandernach (2005), who also found that the various types of computer-based feedback implemented in that study (KR, KCR, EF in the form of topic-contingent and response-contingent information) did not impact learning to a significant degree, concludes:

[T]here is no clear-cut “best” type of feedback in computer-based instruction for all learners and learning outcomes (...) [W]hile computer-based feedback may help to clarify simple, definition-based errors, it may be less effective in correcting more complex errors in understanding. In addition, research indicates that student understanding is enhanced more through the application of relevant examples than through repetition of basic information.

(Mandernach, 2005: 3)

The use of feedback in the present study reflects this insight. Computer-based (automated) feedback was used to create and broaden students’ knowledge base concerning discrete language items, whereas models were provided as “relevant examples” for more complex problems like selecting content and creating a coherent text.
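To make this classification concrete, the following minimal sketch (in Python, added purely for illustration and not part of the original study) encodes the three dimensions from van der Kleij et al. (2012) – timing, content level (KR/KCR/EF), and targeted outcome level – and tags two hypothetical feedback events roughly corresponding to the two strands used in the present study. All class names and example values are assumptions for illustration only.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Set

class Timing(Enum):
    IMMEDIATE = "immediate"
    DELAYED = "delayed"

class ContentLevel(Enum):
    KR = "knowledge of results"
    KCR = "knowledge of correct response"
    EF = "elaborated feedback"

class OutcomeLevel(Enum):
    SELF = "self"
    TASK = "task"
    PROCESS = "process"
    REGULATION = "regulation"

@dataclass
class FeedbackEvent:
    """One piece of feedback, tagged along the three classification dimensions."""
    timing: Timing
    content: Set[ContentLevel]    # a single feedback message may combine KR, KCR and EF
    outcomes: Set[OutcomeLevel]   # the level(s) of learning it targets

# Hypothetical examples mirroring the two feedback strands described in the text:
gap_fill_feedback = FeedbackEvent(
    timing=Timing.IMMEDIATE,
    content={ContentLevel.KR, ContentLevel.KCR, ContentLevel.EF},
    outcomes={OutcomeLevel.TASK},
)

model_solution_feedback = FeedbackEvent(
    timing=Timing.DELAYED,
    content={ContentLevel.KCR, ContentLevel.EF},
    outcomes={OutcomeLevel.PROCESS, OutcomeLevel.REGULATION},
)

print(model_solution_feedback.timing.value,
      sorted(o.value for o in model_solution_feedback.outcomes))
```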

A potential problem concerning KCR+EF feedback noted by CALL and non-CALL scholars alike is that students spend little or no time examining it (Pujolà, 2001; Mandernach, 2005; Heift, 2010). Heift and Pujolà relate this behaviour to the observation that students are satisfied to know their result or the correct response, and do not see the need to delve into the elaborate response. Van der Kleij et al. (2012) point out the importance of attitudes and motivation for the time spent reading the feedback, stating that feedback can only be effective when “the learner is willing and able to use [it]” (van der Kleij et al., 2012: 265). As Mandernach states, “[t]his failure of students to have utilized the computer-based feedback likely accounts for the lack of learning gains in response to the various forms of feedback-elaboration” (Mandernach, 2005: 9).

1.2 Feedback on (L2) writing: The role of models

A major concern of research on written corrective feedback is the question whether error correction has a positive impact on writing performance, and to what extent specific feedback features, like degree of explicitness and focus, can play a role in the development of accuracy (see van Beuningen, 2010, for an overview). However, for the present study, these questions are of less importance. Error correction only occurred in the form of predefined computer-provided feedback in exercises targeting restricted language use. Instead, feedback on the written summaries was provided in the form of a model solution, the “most extreme form of indirect feedback” (van Beuningen, 2010: 12). After all, the main learning goal was to raise students’ awareness of HOCs in writing, i.e. content selection, rephrasing, and coherent text structure, which are the main challenges in summary writing.

Models as a cognitive-constructivist form of feedback can help stimulate the learners’ ZPD through noticing the gap between their own performance and a target-like performance. The theoretical foundations for the use of models as feedback on L2 writing can thus be found in sociocultural theory in combination with the concept of “noticing” (Schmidt, 1990). “Noticing” refers to learners’ awareness of “a mismatch or gap between what they can produce and what they need to produce, as well as between what they produce and what target language speakers produce” (Schmidt, 2001: 6). That is, the first step of “noticing” takes place during production, and the second step during comparison of their own production with a model. Martínez and Roca de Larios (2010) and Hanaoka (2007) report on the use of models as feedback for a narrative writing task based on picture stimuli. Given the task type and the relatively low proficiency level of the learners involved, they found that noticing occurred mainly in the area of lexicon, and to some extent in the area of ideas and expression. Hanaoka also reports that noticing depends largely on proficiency. This is in line with Aljaafreh and Lantolf (1994), who state that the potential relevance of feedback depends on “where in the learner’s ZPD a particular property of the L2 is situated” (p. 480).

To date, research on the use of models in L2 writing is scarce, and the focus has been restricted to LOCs. To the best of my knowledge, there has been no CALL study that reports on the use of models as feedback. In this respect, the present study contributes to both fields of research. On the one hand, it broadens the focus of L2 writing research on the use of models, and on the other, it introduces the use of models in constructivist CALL pedagogy for advanced writing.

1.3 Learners’ attitudes towards feedback

While there is a broad range of quantitative studies on uptake and effectiveness of feedback types in computer-based learning environments, few studies have investigated the user perspective qualitatively, taking students’ attitudes into consideration (Cotos, 2011). This is in stark contrast with the impact that attitudes have on uptake, given that “feedback is not necessarily a reinforcer, because [it] can be accepted, modified, or rejected” (Hattie & Timperley, 2007: 82, drawing on Kulhavy, 1977). As early as 1977, Kulhavy related the importance of research on students’ perceptions of feedback to the variety of feedback provision made possible in computer-assisted learning: “[B]ecause computerized instruction allows such a wide range of strategies for each response, the question of how one most effectively matches feedback parameters with response characteristics is indeed an important one” (Kulhavy, 1977: 224). Kulhavy singles out learners’ confidence in the response as an important factor impacting the attention spent on, and the effect of, feedback.

Most CALL studies attending to feedback attitudes deal with grammar or vocabulary instruction. Nagata (1993) found that students’ satisfaction with feedback depended on its degree of sophistication. She implemented a CALL program for the training of a complex Japanese grammatical construction, providing both pre-programmed feedback based on pattern matching, and feedback produced by an intelligent tutoring system using natural language processing. The participants preferred the latter, because it explicitly guided them towards finding the correct answer, providing much more contextualised EF. In the same vein, van der Kleij et al. (2012), who investigated how students perceive the usefulness of formative computer-based assessment, found that attitudes were significantly more positive when students received EF instead of corrective feedback (KR) only. This also coincides with the findings of Rosselle et al. (2009), who studied the effect of five different feedback types in an L1-L2 translation activity. Their learners rated the usefulness of feedback that offered both diagnosis and guidance higher than simple KCR. However, feedback indicating the error location and providing a metalinguistic clue without revealing KCR yielded a lower score, because students felt uncertain whether they had interpreted their errors correctly.

Also in the field of L2 writing, several researchers (Hyland, 2003; Storch, 2010; Norris & Manchón, 2012) assert that it is important to focus on learners’ attitudes along with the effectiveness of the feedback provided. The findings of Hyland’s (2003) case study confirm that L2 students value form-focused feedback. Diab (2005) reports the same result from a questionnaire study in which university English as a foreign language (EFL) students “overwhelmingly (90%) agreed [...] that it is important to them to have as few errors as possible in their written work” (Diab, 2005: 30). Radecki and Swales (1988) added to the discussion by developing a typology of post-secondary students’ behaviour in ESL writing instruction, depending on their perception of the teacher’s role. Differentiating between “Receptors”, “Semi-resistors”, and “Resistors”, they found that students’ expectations regarding the focus of feedback on writing (surface error correction vs. rhetorical comment) largely depended on that perception, as well as on the course attended (English language classes vs. courses in a study discipline where the language plays a subservient role). Enginarlar (1993) investigated students’ attitudes towards multiple-trait formative feedback, including indication of error location plus a metalinguistic gloss and a summative evaluation at paragraph as well as text level, but not providing KCR. Besides reporting a positive overall evaluation of this feedback system, Enginarlar also found that “when feedback [...] is provided in a problem-solving manner, students seem to regard revision work as a collaborative type of learning where responsibility is shared by the two parties [i.e. learner and teacher]” (Enginarlar, 1993: 203).

To sum up, there seems to be a trend that learners’ acceptance of feedback in L2 production, whether provided by pre-defined algorithms or by a human evaluator, increases with the degree of focus on forms and the contextualisation of the information provided. However, we need to know more about “how students actually engage with feedback and how feedback shapes their writing processes, revising practices and their self-evaluation capacities” (Hyland, 2010: 179). The present study responds to this need by investigating students’ attitudes towards online feedback on writing, focusing on their perception of the usefulness of models. It sets out to broaden our understanding of the complex mechanisms at play by triangulating the actual learning observed with students’ perceived learning gain and their reported attitudes towards the feedback received. More specifically, the following research questions are addressed:

  1. How did students’ summary writing change after following the online module?

  2. How did students themselves perceive their learning gain through the online module?

  3. What are the students’ attitudes towards the different feedback types provided, and is it possible to pinpoint their mediating role between perceived and actual learning gain?

2 Study design

The exploratory intervention study from which the data for this research were drawn comprised a six-week period of weekly classes in summary writing from spoken input. The 38 participants were 2nd-year bachelor students of Applied Language Studies at a higher education college in Belgium. Their writing proficiency in the target language, German, was at level B2 of the Common European Framework. For 76% of the participants, this was their first experience of an online learning module. Prior to the online learning phase, the students had received face-to-face (f2f) instruction on the same task. In the first and sixth weeks of the study, pre- and post-tests in summary writing and a fill-the-gap test on cohesive ties were administered. The actual intervention consisted of four consecutive weekly sessions of individual online learning in class. Every week, the students wrote a summary of a radio feature on a news item lasting 3–5 minutes. They were guided through three different task phases by an online learning path following a “moderate constructivist” design (Karagiorgi & Symeou, 2005) in which diverse exercise and feedback types were combined (see Figure 1). The rationale for the distribution of instructivist and constructivist task types in the module is that “pre-determined, constrained, sequential, criterion-referenced instructional design is most suitable for introductory learning while constructivist approaches are more appropriate for advanced knowledge acquisition” (Karagiorgi & Symeou, 2005: 23). Following this rationale, the online module includes different task types, ranging from multiple-choice and fill-the-gap to open-answer questions, and provides different types of feedback, ranging from corrective feedback to models. This complex architecture was deemed necessary in order to meet the different learning goals involved in summary writing: (1) build up listening strategies in order to reduce anxiety and foster scan-listening for main content items; (2) build up writing strategies to produce a new coherent text based on a different input genre (here: radio features); and (3) provide the necessary linguistic knowledge in the L2 to write a concise and coherent summary, i.e. typical chunks and phrases for the genre and cohesive strategies. In line with insights from scholarship on feedback in online learning on the one hand, and on (L2) writing on the other, feedback types were adjusted to these different goals. The actual writing phase started in class and was finished at home. Therefore, the self-evaluation of the previous week’s summary was actually the first step in each online module. Table 1 provides an overview of the different exercise types, the instructional focus, and the feedback type, according to van der Kleij et al.’s (2012) classification. In the instructivist part, which contains preparatory exercises for the actual listening task, feedback was delivered immediately after the completion of each exercise in the form of (1) knowledge of result (correct/wrong), plus (2) knowledge of correct response in case of a wrong answer, and/or (3) elaborated feedback, i.e. background information on applicable rules and possible sources of error. In order to avoid students neglecting the feedback – a problem frequently mentioned in studies on online feedback which can lead to inconclusive results (cf. supra) – some elements were built in to make students read the feedback: exercises were conceived as a suite of closely related items, building on the knowledge provided in the KCR and (any) EF of immediately preceding exercises.1

Fig. 1 Flowchart of the design of the individual online learning module

Table 1 Features of the task types in the online module

In the constructivist part, which focuses on self-evaluation of the writing process and product, feedback was delayed (i.e. provided in the class session following the writing of the summary) and contained no knowledge of the result. Instead, knowledge of the correct response was made available by means of a model solution, and students were prompted to compare their own solution with the model to stimulate noticing (Schmidt, 1990). They were guided in this process by reflective questions that directed their attention towards specific features of the model summary, e.g. information provided on the source text, or answers given to main questions in the introduction.2
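The weekly learning path just described can be summarised in a short, purely illustrative sketch; the phase names and field labels below are assumptions added for clarity, mirroring the sequence of phases and the feedback settings outlined above and in Table 1 rather than any actual course material.

```python
# Illustrative outline of one weekly cycle of the online module (assumed labels).
weekly_learning_path = [
    {
        "phase": "self-evaluation of previous week's summary",
        "approach": "constructivist",
        "task_types": ["comparison with model solution", "reflective questions", "self-assessed score /10"],
        "feedback": {"timing": "delayed", "content": ["KCR (model solution)", "EF (guiding questions)"]},
    },
    {
        "phase": "preparatory listening and language exercises",
        "approach": "instructivist",
        "task_types": ["multiple-choice", "fill-the-gap"],
        "feedback": {"timing": "immediate", "content": ["KR", "KCR on wrong answers", "EF (rules, error sources)"]},
    },
    {
        "phase": "listening and summary writing (started in class, finished at home)",
        "approach": "constructivist",
        "task_types": ["note-taking", "open writing task"],
        "feedback": {"timing": "delayed", "content": ["model solution in the following week's self-evaluation"]},
    },
]

# Print a compact overview of when feedback arrives in each phase.
for step in weekly_learning_path:
    print(f"{step['phase']}: {step['feedback']['timing']} feedback")
```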

It is important to note that naturalistic classroom data was analysed. All students followed the same learning path and received the same support. Therefore, this study has no intention of making claims about differential effects of feedback types. Instead, rich data was collected in order to explore students’ overall learning gain, to monitor their attitudes before, during, and after the intervention in an online environment, which was a new experience for the learners, and to assess their self-reliance after having followed the online module. The data sources for the analysis are represented in Figure 2, which depicts their triangulation for interpretation.

Fig. 2 Data triangulation for the analysis of perceived and actual learning gain

In the next section, data analysis and results will be presented for each of the three subsets.

3 Data analysis and results

3.1 Actual learning gain: Pre-post-test on summary writing

In order to investigate a possible learning gain, students wrote a pre-test in summary writing immediately before the intervention and a post-test in the sixth week, immediately after the last online module. The tests were compared with regard to features that address HOCs, that is, features at text level (as opposed to word and sentence level) (for an overview, see Table 2):

  1. Degree of content elaboration through restructuring information. The original radio features to be summarized were interviews. High content elaboration was seen as (a) abandonment of the question-and-answer structure in the summary, and (b) changes in the original proposition order that favoured a coherent summary structure.

  2. Degree of linguistic elaboration through rephrasing content (as opposed to a verbatim copy of the original text), and through variation in co-reference strategies. As a unit of analysis for the latter, the references to the interviewee were analysed. A high degree of linguistic elaboration coincided with a broad range of different reference strategies (like pronouns, synonyms, hypernyms, etc.) as opposed to the recurrence of the interviewee’s name.

For each individual student, a holistic evaluation of the learning gain concerning these quality features was made (progress/status quo/deterioration). For 48% of the students, clear progress was demonstrated; for 33.5%, the post-test showed a status quo; and 18.5% performed worse in the post-test.3

Table 2 Overview of the quality features assessed in the pre- and post-tests on summary writing

3.2 Perceived learning gain

In order to measure students’ perceptions of their own learning through the online intervention, two units of analysis were defined: (a) the students’ self-reported overall learning gain, and (b) the students’ self-assessment scores on their summaries, based on the comparison with the model solution.

3.2.1 Self-reported overall learning gain

As a unit of analysis for students’ self-reported learning, their ratings for overall progress in summary writing after the f2f lessons, that is, before the online intervention, are compared with their ratings after the online learning sequence (see Table 3). In anonymous pre- and post-intervention questionnaires, the students rated the statement “I feel I have made progress in summary writing” on a five-point Likert scale. On a positive note, the number of students who felt they learnt “a lot” increased considerably after the online class (from 2.7% to 20%), while there was a noticeable decrease in the number of participants who declared they learnt “a little” (from 65% to 34%). While none of the students declared that they had not learnt anything at all, 11% were rather pessimistic about their progress (“not much”) after the f2f lessons, and this increased to 14.5% after the online module. The most noticeable result is that considerably more students declared that they felt insecure about their progress after the online class (“don’t know”: from 21.5% to 31.5%).

Table 3 Self-estimated progress after f2f lessons and online module on a five-point Likert scale

3.2.2 Self-assessment with model solution

As part of the task in the online module, the participants were asked three times to evaluate their summaries. They were assisted in the evaluation process by a model solution and reflective questions.4 The self-assessment consisted of two elements: (a) a score on a scale of 10, and (b) a rationale for the score.

There was noticeably low variance in the three scores the students attributed to their own summaries, both at individual and at group level. The average score over the whole group and the three assessment sessions was 6.2/10 (5.9; 6.5; 6.0), with a spread ranging from 3 to 8. An increase of 0.6 in the second self-assessment was followed by a decrease of 0.5 in the third and final assessment. Therefore, there is hardly any difference between the averages of the first and last assessment scores. The individual variance between the three scores did not exceed 2 points (out of 10).

For the sake of completeness, it has to be mentioned that the aforementioned insecurity with respect to the self-assessment does not hold for all participants. Some of the students’ rationales reflect a high degree of awareness of relevant summary quality features, and firm self-efficacy beliefs:

7[/10]: All important propositions are there, and I tried to reformulate specific words.

8: My summary has a different structure [than the model], but I actually think it’s also a good one.

8: I actually like my summary, though I failed to mention some figures. I don’t consider them to be important.

The overall low variance in self-assessment scores suggests that either the students did not feel comfortable with this task, or they did not perceive steady progress in their own summary writing. But how can this be reconciled with the actual progress in summary writing demonstrated above? In order to understand this apparent dichotomy, it is important to explore the students’ attitudes.

3.3 Students’ attitudes towards the feedback received

3.3.1 Answers in questionnaires

During the course of the intervention, students were asked twice, via an anonymous online questionnaire, to express their attitudes towards the exercises and the feedback received in the online module, the first time immediately after the second online session (i.e. the first self-assessment with a model solution) (Q1), and the second time after the last online session (Q2). The question asked was: “Which of the exercises in the online learning module did you (a) like, or (b) dislike? Please explain.” The answers to the first part of the question were processed quantitatively. The answers to the second part were coded according to the main characteristics mentioned (e.g. “difficult”, “tedious”, “fun to do”, “helpful”). As a full report of the qualitative analysis is beyond the scope of this article, selected statements are used as illustrative examples, enriching the picture emerging from the quantitative results.

The students were united in their positive attitude towards the preparatory listening exercises in the instructivist part of the online module (only positive mentions, Q1: 12; Q2: 16). Opinions towards the instructivist grammar exercises on cohesive ties diverged: while they received 13 (Q1) and 7 (Q2) positive mentions respectively, the same number of students admitted to disliking them (Q1: 8, Q2: 12). It is interesting to note the general tendency towards a more unfavourable attitude in the second questionnaire. Two illustrative examples of students’ comments are:

I didn’t like the grammar exercises because there was not enough explanation about how the respective structure had to be used.

It was not really interesting to do the fill-the-gap exercises on grammar because there were always several possible answers.

With respect to the constructivist self-reflection exercises based on the model solution, a completely different picture emerges: they were mentioned positively only once in Q1, and negatively two (Q1) and five (Q2) times respectively. Illustrative example comments are:

I really found it interesting to compare my own summary to a model. Like this, you discover different ways to write about the same topic.

Because I wrote the text myself and eliminated the errors I could find before uploading it, I was barely able to improve it by myself. If I cannot compare [my assessment] to a score given by a teacher, how should I know whether I assess my text correctly?

3.3.2 Post-hoc focus groups5

Immediately after the last online session, two focus group discussions (F1, F2) took place with randomly selected students (F1: 6 students, F2: 7 students). In order to ensure that the participants felt free to express their opinions, the discussions were led not by their teacher, but by an experienced researcher unknown to the students. Two of the key questions (K1, K2) were directly related to the students’ attitudes towards exercise and feedback types. The salient tendencies in the discussion of these two key questions are summarized below.

(K1): Generally speaking, there were two types of exercises: in one type, you had to fill in gaps or select an answer from several choices, and you got direct feedback in the form of an automated answer. In the other type, you had to fill in a sentence or a text, and then you had to compare your answer to a model. Which type did you prefer personally?

All of the seven participants of F1 declared they preferred the instructivist exercises. The main reason given was that the answers were clear-cut, while students felt insecure about the limits of acceptability of their own formulations when comparing them with the model. The attitudes in F2 seemed to be more divergent. The general trend of the discussion in F2 was that both exercise types have their strengths, and that a good mix of both is probably the best for an online module.

(K2): You had to evaluate your own summaries, comparing them with a model solution. Did you find this easy? Did it help you?

In F1, all but one student found the model solutions unhelpful. One student was inclined to find them helpful but disliked the fact that she had to assess her own text with a score. In F2, the participants mentioned two concrete items they had learnt through the models: providing information about the source text, and addressing the “wh-” questions in the introduction. However, in this group too, most participants felt insecure about the self-assessment scores because they lacked a frame of reference, and they could not assess the accuracy of their own texts. One student who was favourable towards the usefulness of the models stated she always tried to implement what she had learnt from the model in her next summary. This statement reinforces the view that feedback needs to be sustained in order to reach its full potential (Storch, 2010: 42).

4 Discussion

The comparison of pre- and post-tests indicates a positive development in students’ skills through the intervention. Nevertheless, the students’ self-assessment behaviour and self-reported learning gain reveal that they were seemingly unaware of this progress.

The first two research questions can thus be answered in the following way: whereas students actually learnt to elaborate their summaries both linguistically and in terms of content, they did not fully appreciate the value of these skills. Instead, they remained insecure about their progress, because they focused on accuracy, which they felt unable to evaluate themselves. Indeed, as the model represents only one of a vast number of possible formulations, the chance of discovering language-related problems is low. In the following, this mismatch is discussed in the light of students’ expressed attitudes in order to answer research question (3): What is the role of students’ attitudes as a possible mediator between actual and perceived learning gain?

The discrepancy between the students’ expectations and the constructivist approach underlying the assessment part of the online module can be attributed to three different, yet related, key factors: (1) evaluation, (2) linguistic focus, and (3) learner motivation (see Table 4 for an overview).

The mismatch in evaluation is linked to the fact that constructivist feedback does not provide an external judgement in the form of KR. This clearly confused the students, who were not used to self-evaluation and therefore did not seem comfortable relying on it, as is evidenced by the following quotes:

We didn’t get feedback on our summaries, so how can we know whether they were good? You can’t make progress like that. (post-hoc questionnaire)

I was disappointed because we were expected to evaluate our own summaries. I think that a combination of online exercises and a reliable text correction by the teacher would be ideal. (post-hoc questionnaire)

According to Hattie and Timperley (2007), effective feedback should “answer three major questions (...): Where am I going? (What are the goals?), How am I going? (What progress is being made toward the goal?), and Where to next? (What activities need to be undertaken to make better progress?)” (p. 86). Clearly, the second question was not answered to the students’ satisfaction.

The linguistic focus mismatch is rooted in the students’ (short) history as learners of German as a foreign language. Second-year bachelor students are not used to writing longer texts in German, which requires a focus on HOCs like coherence and cohesion. Instead, their understanding of good writing is intrinsically linked to accuracy, which leads them to concentrate on LOCs like grammar and word choice. This might also reflect secondary school teachers’ feedback behaviour. The following two statements illustrate this focus on LOCs:

I don’t know if I made a lot of grammatical errors, so I can’t assess the quality of my summary. (post-hoc questionnaire)

Maybe you make the same errors every week again without noticing it. (...) Like this, there’s a risk that your errors fossilize. (post-hoc questionnaire)

This confirms Hyland’s (2003) and Diab’s (2005) findings about students’ preference for corrective feedback in writing.

A third mismatch can be detected between the students’ perceptions of their learner role on the one hand, and the role they should adopt in a constructivist learning environment, on the other. In this context, Duijnhouwer, Prins and Stokking (2012) refer to personal goals, differentiating between “performance goal” and “mastery goal”:

As students have a stronger mastery goal they have a stronger focus on developing their competence (...). As students have a stronger performance goal they have a stronger focus on getting their competence positively judged. (Duijnhouwer et al., 2012: 173)

The following two selected quotes clearly show that the students struggled with the role they needed to adopt for self-evaluation:

I found it strange that we had to evaluate ourselves. After all, we’re not teachers, so how can we mark our own work? (post-hoc questionnaire)

Because I wrote the text myself and edited it carefully before uploading it, I was unable to pick up on any mistakes myself. If I can’t compare my own judgement to that of a teacher, it’s difficult to know whether I was right or wrong. (post-hoc questionnaire)

Table 4 Three key factors in mismatch between students’ expectations and adopted pedagogic approach

However, it is important to note that not all students expressed negative attitudes towards the constructivist feedback they received in the online module. Some statements indicate an open attitude towards this new feedback experience, positively mentioning the goal-orientedness that is reinforced by the model solution. That is, Hattie and Timperley’s (2007) question “Where to next?” has been answered to those students’ satisfaction.

Thanks to the model solutions, the teacher’s expectations were clear. You knew how you actually should do it. (post-hoc questionnaire)

I also learnt to always mention the source and the 5 WH-questions; I think I always learnt something from the comparison and tried to do better next time. (post-hoc questionnaire)

When you have to fill in whole sentences, you’ve got to think about it (...) and when you compare it to the model solution, it’s like “ah, I have to keep this in mind”. (post-hoc focus group interview 1)

Moreover, when comparing the different feedback types they received in the online environment, students also mentioned problematic aspects of instructivist feedback, self-critically reflecting on their own behaviour. Two important aspects are highlighted: (1) a retention problem, and (2) the “click-away behaviour”.

If you need a piece of information that you read about two exercises ago, you already can’t recall it anymore. (post-hoc questionnaire)

The respondent refers to the “exercise suites” in the instructivist part of the module, building upon knowledge provided in the feedback of immediately preceding exercises. Clearly, this stimulus to read the feedback failed to yield the desired effect. Van Beuningen (2010) also discusses the limited capacity of corrective feedback to foster transfer: “[F]ocused CF is rather a form of explicit grammar instruction than a focus-on-form intervention (...). This might make it more difficult for learners to transfer what is learned from the feedback to new writing situations” (Van Beuningen, 2010: 11).

I think you click away after filling in an exercise, I personally do not read the whole feedback but just click on “OK”. (post-hoc focus group interview 2)

In the same vein, Heift (2010) concludes from her literature review that 18% of learners in CALL environments neglected to look up answers altogether. She specifically mentions the problem of recasts being neglected in a CALL environment: “[T]here is no longer a need for the learner to attend to the feedback generated by the computer - the correct answer has already been supplied” (Heift, 2010: 199). This “click-away” behaviour, which challenges the pedagogic intentions of the instructor-designer, can be subsumed under the umbrella term “instructional disobedience” coined by Elen (2013).

Concluding from the various statements above, attitude clearly depends on the individual’s learning motivation, and will ultimately determine the benefit the students draw from the model solution. It is therefore necessary that students are guided towards developing a more mastery-oriented mindset in order to benefit optimally from constructivist online tasks and “maximize their potential for self-repair” (Heift, 2010). This can be achieved by learning how to use cognitive and meta-cognitive strategies for writing and self-evaluation (Segev-Miller, 2004). Feedback – as important as it might be – is “only part of the equation”, as Hattie and Timperley (2007) point out:

A major task for teachers and parents is to make academic goals salient for all students, because students who are prepared to question or reflect on what they know and understand are more likely to seek confirmatory and/or disconfirmatory feedback that allows for the best opportunities for learning.

(Hattie & Timperley, 2007: 103–104)

5 Conclusion

The main goal of this exploratory study using naturalistic classroom data was to broaden our understanding of students’ attitudes towards different feedback types in an online environment while dealing with a complex writing task. For this purpose, the actual learning gain as measured in pre- and post-tests was examined in combination with self-reported learning gain as measured in self-assessment, and triangulated with the students’ attitudes as expressed in questionnaires and interviews.

The results of the study revealed an interesting dichotomy between the achieved results and the perceived learning gain. A satisfactory overall learning gain was detected in summary writing through constructivist online support that consisted of repeated and guided self-evaluation based on model solutions. In stark contrast to this finding is the students’ low self-confidence, evident in the self-assessment scores and in the reported insecurity concerning their overall learning gain.

The students’ attitudes were found to go some way towards explaining the observed contrast. Indeed, the dichotomy is caused by a mismatch between the students’ expectations of a learning environment on the one hand, and the adopted constructivist approach on the other. This mismatch can be attributed to three different key factors: (1) evaluation, (2) linguistic focus and (3) learner motivation.

The findings are in line with previous research that has pinpointed the importance of motivation (Duijnhouwer et al., 2012; van der Kleij et al., 2012) for the uptake of feedback. The activation of the learner’s ZPD in an individual online learning environment requires a mature learner role, striving towards mastery (instead of performance). It is important for teachers to bear in mind learner histories, and consequently to raise their students’ awareness of this aspect of (language) learning before they engage in constructivist online learning activities: “[T]eachers should help students to develop practices of feedback use which will scaffold and engage them as they develop their own self-monitoring capabilities” (Storch, 2010: 180).

An important limitation of the adopted study design, a single-group pre-post-test intervention, is that a possible “learning by doing” effect could not be controlled for. However, there are two aspects that make a strong case for the effect of the online intervention: (a) in the pre-post-test comparison, the focus was on specific aspects of writing (HOCs) that formed an important part of the intervention and that are highly unlikely to be achieved through “learning by doing”; (b) at the moment of the pre-test, the students had already received face-to-face instruction on summary writing for six weeks, including writing practice, and clearly had not learnt these specific features.

Another shortcoming of the present study is that the two important cognitive mediators of learning apart from attitudes – strategy development and self-efficacy beliefs – were not operationalised in a consistent way, and therefore had to be excluded from the analysis. While recent research has underlined the importance of self-efficacy beliefs for first language (L1) writing development (Pajares, 2003; Woodrow, 2011), this area is, to date, under-investigated in the field of L2 writing research (Kormos, 2012). Future research should investigate the role of self-efficacy beliefs in the context of computer-assisted advanced L2 writing more thoroughly.

This study revealed that the use of models as feedback in an individual online module for summary writing can actually enhance students’ writing performance regarding HOCs. Additionally, it might be an interesting focus for further research to investigate whether the concept of noticing (Schmidt, 1990, 2001) can be expanded to HOCs, as previous research focusing on the use of models to foster attention to LOCs has provided evidence of noticing (Hanaoka, 2007; Martínez & Roca de Larios, 2010). Such an in-depth analysis of students’ responses to the models might reveal how processes of noticing can be linked to development in advanced L2 writing.

Acknowledgements

I would like to thank Mat Schulze and the anonymous reviewers for their insightful comments and helpful suggestions on earlier drafts of this paper.

Supplementary material

To view supplementary material for this article, please visit http://dx.doi.org/10.1017/S0958344015000099

Footnotes

1 See examples in the supplementary materials provided online.

2 See examples in the supplementary materials provided online.

3 See example of a pre- and post-test case study in the supplementary materials provided online.

4 See example in the supplementary materials provided online.

5 See full transcription of relevant parts of the discussions in the supplementary materials provided online.

References

Aljaafreh, A. and Lantolf, J. P. (1994) Negative feedback as regulation and second language learning in the zone of proximal development. The Modern Language Journal, 78(4): 465–483.
Cotos, E. (2011) Potential of automated writing evaluation feedback. CALICO Journal, 28(2): 420–459. http://www.equinoxpub.com/journals/index.php/CALICO/article/view/22995
Diab, R. L. (2005) EFL university students' preferences for error correction and teacher feedback on writing. TESL Reporter, 38(1): 27–51.
Doyle, W. (1977) Paradigms for research on teacher effectiveness. Review of Research in Education, 5: 163–198.
Duijnhouwer, H., Prins, F. J. and Stokking, K. M. (2012) Feedback providing improvement strategies and reflection on feedback use: Effects on students’ writing motivation, process, and performance. Learning and Instruction, 22(3): 171–184.
Elen, J. (2013) “Instructional disobedience”: Challenging instructional design research. EARLI 15th Biennial Conference, München, Germany: unpublished keynote speech.
Enginarlar, H. (1993) Student response to teacher feedback in EFL writing. System, 21(2): 193–204.
Felix, U. (2002) The web as a vehicle for constructivist approaches in language teaching. ReCALL, 14(1): 2–15.
Felix, U. (2005) E-learning pedagogy in the third millennium: The need for combining social and cognitive constructivist approaches. ReCALL, 17(1): 85–100.
Hanaoka, O. (2007) Output, noticing, and learning: An investigation into the role of spontaneous attention to form in a four-stage writing task. Language Teaching Research, 11(4): 459–479.
Hattie, J. and Timperley, H. (2007) The power of feedback. Review of Educational Research, 77(1): 81–112.
Heift, T. (2005) Corrective feedback and learner uptake in CALL. ReCALL, 16(2): 416–431.
Heift, T. (2010) Prompting in CALL: A longitudinal study of learner uptake. The Modern Language Journal, 94(2): 198–216.
Hyland, F. (2003) Focusing on form: Student engagement with teacher feedback. System, 31(2): 217–230.
Hyland, F. (2010) Future directions in feedback on second language writing: Overview and research agenda. International Journal of English Studies, 10(2): 171–182.
Karagiorgi, Y. and Symeou, L. (2005) Translating constructivism into instructional design: Potential and limitations. Educational Technology & Society, 8(1): 17–27.
Kintsch, E., Steinhart, D., Stahl, G., Matthews, C. and Lamb, R. (2000) Developing summarization skills through the use of LSA-based feedback. Interactive Learning Environments, 8(2): 87–109.
Kirschner, P. A. (2002) Can we support CSCL? Educational, social and technological affordances for learning. In: Kirschner, P. A. (ed.), Three worlds of CSCL: Can we support CSCL? Heerlen: Open University of the Netherlands, 7–47.
Kormos, J. (2012) The role of individual differences in L2 writing. Journal of Second Language Writing, 21(4): 390–403.
Kulhavy, R. W. (1977) Feedback in written instruction. Review of Educational Research, 47(2): 211–232.
Mandernach, B. J. (2005) Relative effectiveness of computer-based and human feedback for enhancing student learning. The Journal of Educators Online, 2(1). http://www.thejeo.com/Archives/Volume2Number1/MandernachFinal.pdf
Martínez, N. E. and Roca de Larios, J. (2010) The use of models as a form of written feedback to secondary school pupils of English. International Journal of English Studies, 10(2): 143–170.
Nagata, N. (1993) Intelligent computer feedback for second language instruction. The Modern Language Journal, 77(3): 330–339.
Norris, J. M. and Manchón, R. (2012) Investigating L2 writing development from multiple perspectives: Issues in theory and research. In: Manchón, R. (ed.), L2 writing development: Multiple perspectives. Boston/Berlin: Walter de Gruyter, 221–244.
Pajares, F. (2003) Self-efficacy beliefs, motivation, and achievement in writing: A review of the literature. Reading & Writing Quarterly, 19(2): 139.
Pujolà, J.-T. (2001) Did CALL feedback feed back? Researching learners’ use of feedback. ReCALL, 13(1): 79–98.
Radecki, P. M. and Swales, J. M. (1988) ESL student reaction to written comments on their written work. System, 16(3): 355–365.
Robinson, G. L. (1991) Effective feedback strategies in CALL: Learning theory and empirical research. In: Dunkel, P. (ed.), Computer-assisted language learning and testing: Research issues and practice. New York, NY: Newbury House Publishers, 155–167.
Rosselle, M., Sercu, L. and Vandepitte, S. (2009) Learning outcomes and learner perceptions in relation to computer-based feedback. Indian Journal of Applied Linguistics, 35(1): 45–61.
Schmidt, R. W. (1990) The role of consciousness in second language learning. Applied Linguistics, 11(2): 129–158.
Schmidt, R. W. (2001) Attention. In: Robinson, P. (ed.), Cognition and second language instruction. Cambridge: Cambridge University Press, 3–32.
Segev-Miller, R. (2004) Writing from sources: The effect of explicit instruction on college students' processes and products. L1-Educational Studies in Language and Literature, 4(1): 5–33.
Shute, V. (2008) Focus on formative feedback. Review of Educational Research, 78(1): 153–189.
Storch, N. (2010) Critical feedback on written corrective feedback research. International Journal of English Studies, 10(2): 29–46.
van Beuningen, C. (2010) Corrective feedback in L2 writing: Theoretical perspectives, empirical insights, and future directions. International Journal of English Studies, 10(2): 1–27.
van der Kleij, F. M., Eggen, T. J. H. M., Timmers, C. F. and Veldkamp, B. P. (2012) Effects of feedback in a computer-based assessment for learning. Computers & Education, 58(1): 263–272.
Woodrow, L. (2011) College English writing affect: Self-efficacy and anxiety. System, 39(4): 510–522.
