
The Students’ Perceptions of School Success Promoting Strategies Inventory (SPSI): Development and validity evidence based studies

Published online by Cambridge University Press:  04 August 2014

Paulo A. S. Moreira*, João Tiago Oliveira, Paulo Dias, Filipa Machado Vaz, and Isabel Torres-Oliveira

Affiliation: Universidade Lusíada do Porto (Portugal)

*Correspondence concerning this article should be addressed to Paulo Moreira. Centro de Investigação em Psicologia para o Desenvolvimento (CIPD). Observatório da Melhoria e Eficácia da Escola. Universidade Lusíada do Porto. Rua Dr. Lopo de Carvalho. 4369–006. Porto (Portugal). E-mail: paulomoreira@por.ulusiada.pt

Abstract

Students’ perceptions of school success promotion strategies are of great importance for schools, as they indicate how these strategies are received by the students they target. The objective of this study was to develop and analyze validity evidence for The Students’ Perceptions of School Success Promoting Strategies Inventory (SPSI), which assesses both individual students’ perceptions of their school’s success promoting strategies and dimensions of school quality. A structure of 7 related factors was found, which showed good fit indices in two additional samples, suggesting a well-fitting multi-group model (p < .001). All scales presented good reliability values. Schools with good academic results registered higher values in Career development, Active learning, Proximity, Educational technologies and Extra-curricular activities (p < .05). The SPSI was shown to be adequate for measuring within-school (students within schools) dimensions of school success. In addition, there is preliminary evidence of its adequacy for measuring school success promotion dimensions between schools for 4 dimensions. This study supports the validity evidence for the SPSI (validity evidence based on test content, on internal structure, on relations to other variables and on consequences of testing). Future studies should test for within- and between-level variance in a larger sample of schools.

Type
Research Article
Copyright
Copyright © Universidad Complutense de Madrid and Colegio Oficial de Psicólogos de Madrid 2014 

Understanding how schools implement school success promoting strategies benefits from combining perception indicators from several school agents (principals, teachers, parents, and also students), both at the within- and between-school levels. The objective of this study was to test the within- and between-school dimensional structure of the Students’ Perceptions of School Success Promoting Strategies Inventory (SPSI), an instrument that assesses students’ perceptions of school success promoting strategies.

Promoting school success and preventing school drop-out and low academic achievement are key challenges faced by schools. School characteristics have a significant impact on several indicators of students’ academic trajectories, including academic performance and school dropout (e.g., Lee & Smith, 1995; Lee & Burkam, 2003). Moreover, there is an overall consensus regarding the efficacy of key education strategies in preventing school drop-out and promoting academic success. For example, the National Dropout Prevention Center/Network has identified 15 strategies shown to effectively prevent school drop-out. These can be grouped into: Educative System Plasticity, that is, the educational context’s ability to renew and adapt itself in light of students’ characteristics and needs (e.g., Hopkins, 2001); monitoring of Rules and Regulations (e.g., Wagstaff, Combs, & Jarvis, 2000); existence of Extra-Curricular Activities (e.g., Feldman & Matjasko, 2005); Mentoring by teachers or older peers who help pupils with their academic and personal difficulties (e.g., Gonzales, Richards, & Seeley, 2002); promotion of Active Learning activities, in which students are encouraged to work autonomously and with peers (e.g., Bell & Kozlowski, 2008); Personalized Learning, adapting teachers’ practices to each student’s needs (e.g., Switzer, 2004); Educational Technologies used in the classroom and in homework tasks (e.g., Hartley, 2007); the implementation of Career Development programs (e.g., Stone, 2004); and the relationship between students’ achievement and teachers’ Proximity (e.g., Bryk, Easton, Kerbeow, Rollow, & Sebring, 1994; Wagstaff et al., 2000).

In order to assess school success promotion strategies, a preliminary set of items was formulated (Moreira, Dias, Vaz, Rocha, & Freitas, 2012; Moreira, Dias, Vaz, Rocha, & Vaz, 2013). A final set of items was then selected, which made up the version of the School Success Promoting Strategies Inventory (SPSI) tested in this study. The instrument assesses students’ perceptions of the degree to which their schools offer seven empirically validated strategies for the promotion of school success, through 7 scales: Career development, Teacher and student proximity, Mentoring, Active learning, Personalized learning, Extracurricular activities, and Educational technologies.

Career development

Career development refers to the “total constellation of psychological, sociological, educational, physical, economic and chance factors that combine to influence the nature and significance of work in the total lifespan of any given individual” (Maddy-Bernstein, 2000, p. 2). There is widespread agreement that career development is a desirable part of schooling, and there is evidence that many different types of career guidance are effective, especially individual counseling interventions (Hughes & Karp, 2004). Social Cognitive Career Theory (Lent, Brown, & Hackett, 1994) suggests that outcome expectancies, career interests and career self-efficacy shape career choice, and that career self-efficacy plays a mediating role between one’s background and interests and one's outcome expectancies. Moreover, career self-efficacy is influenced both by individual variables (predispositions, gender, race/ethnicity, health status) and by contextual factors such as family background and learning experiences. The theory highlights the interactive influence of contextual factors and cognitive person variables on individual career development (Lent, Brown, & Hackett, 2000). Students from effective schools are expected to register higher levels of career development than those from less effective schools, because effective schools mobilize internal and external resources to promote students’ career development. This trend was found in previous studies (Moreira, Dias, Vaz, Rocha, & Freitas, 2012).

Teacher and staff-student Proximity

Teacher-student relationships characterized by proximity are supportive, warm and low in conflict (e.g., Pianta, Nimetz, & Bennett, 1997), and are associated with students’ positive academic beliefs, motivation and performance (e.g., Hamre & Pianta, 2005; Palermo, Hanish, Martin, Fabes, & Reiser, 2007; Reddy, Rhodes, & Mulhall, 2003). Student-teacher relationships develop over the course of the school year through a complex intersection of student and teacher beliefs, attitudes, behaviors and interactions with one another (Hamre & Pianta, 2006). Positive teacher-student relationships may protect children from suboptimal home environments, including negative parent-child relationships (O’Connor & McCartney, 2007), and strong and supportive relationships with teachers allow students to feel safer and more secure in the school setting, to feel more competent, to make more positive connections with peers and to obtain greater academic gains (Hamre & Pianta, 2006). Teacher-student relationships conducive to high student outcomes are characterized by a rather high degree of teacher influence and proximity towards students (Wubbels & Brekelmans, 2005; Wubbels, Brekelmans, Brok, & Tartwijk, 2006). Affective qualities of teacher-student relationships are positively associated with students’ school engagement and achievement, with stronger effects in the higher grades (Roorda, Koomen, Spilt, & Oort, 2011). Therefore, higher levels of teacher-student proximity are expected in effective schools.

Mentoring

Mentoring refers to an intervention that addresses young people’s needs for adult support and guidance throughout their development (DuBois, Portillo, Rhodes, Silverthorn, & Valentine, 2011). In adolescence, there is a greater orientation toward the development of significant emotional bonds. However, even though these bonds are often a result of effective mentoring, they are not mentoring’s main goal (Hamilton & Hamilton, 2010). Moreover, some evidence suggests that it may be of limited value or even counterproductive for mentors to regard cultivating an emotional connection with this population as the primary goal (Hamilton & Hamilton, 1992) or to foster relationships that are unconditionally supportive and lacking in structure (Langhout, Rhodes, & Osborne, 2004). Meta-analyses show that mentoring is effective in improving outcomes across behavioral, social, emotional and academic domains (DuBois et al., 2011). Developmentally, the advantages of participation in mentoring programs range from early childhood to adolescence. Evidence indicates that mentoring programs are effective when: a) participating youth have either had preexisting difficulties (including behavioral problems specifically) or been exposed to significant levels of environmental risk; b) evaluated samples have included greater proportions of male youths; c) there has been a good fit between the educational or occupational background of mentors and the goals of the program; d) mentors and youths have been paired on similarity of interests; and e) programs have been structured to support mentors in assuming teaching or advocacy roles with youths (DuBois et al., 2011). These conditions are favored by systematic and organized approaches to mentoring, which are more typical of effective schools, which is why effective schools are expected to register higher levels of mentoring than less effective schools.

Active Learning

Active learning occurs in the classroom when the teacher creates a learning environment that allows for students’ proactive involvement (Michael, 2006). Active learning assumes that: 1) meaning is actively constructed by the learner; 2) learning facts (“what” – declarative knowledge) and learning to do something (“how” – procedural knowledge) are two different processes; 3) some things that are learned are specific to the domain or context, whereas others are more readily transferred to other domains; 4) individuals are likely to learn more when they learn with others than when they learn alone; and 5) meaningful learning is facilitated by articulating explanations, whether to one’s self, peers or teachers (Michael, 2006). Active learning promotes real-time learning and benefits students (allowing them to practice skills and ask questions) as well as teachers, by giving them the opportunity to evaluate students in real time (Amburgh, Devlin, Kirwin, & Qualters, 2007). Active learning increases students’ attention during class (e.g., Wankat, 2002), and is strongly associated with persistence to course and degree completion (Astin, 1993). Active learning promotes learning by doing, rather than learning by receiving information (Mahmood, Tariq, & Javed, 2011), which fosters students’ deeper engagement in the learning process; it encourages critical thinking and fosters the development of self-oriented learning (Brown & Freeman, 2000). Several dimensions of the teaching process are essential to creating active learning in the classroom: 1) context setting; 2) class preparation; 3) class delivery; and 4) continuous improvement (Auster & Wylie, 2006). Because these conditions (especially continuous improvement) are associated with effective school improvement efforts and with school efficacy (Creemers & Reezigt, 2005), higher levels of active learning are expected in effective schools than in less effective schools.

Personalized Learning

Personalized learning refers to teaching/learning processes that match teaching methods and techniques to students’ characteristics (including expectations, interests, preferences, and motivations). Personalized learning is not a set of techniques but rather a culture that supports learning (Hargreaves, 2006). Personalization refers to participation (Leadbeater, 2004, 2005) and to a process rather than a state or product (Hargreaves, 2006). Schools using personalized learning approaches have a strong sense of “agency” (school staff facilitate students’ engagement and recognize their contributions), give students the skills to learn, motivate students to succeed in line with their own interests and aspirations, and accept students’ prior knowledge and experience as an essential starting point (Sebba, Brown, Steward, Galton, & James, 2007). Where personalized learning takes place, schools have found different ways of organizing themselves, often through collaboration with other schools or agencies outside school, which provide greater flexibility and adaptability in the provision offered to students (Leadbeater, 2005). Schools that are efficient in implementing personalized learning register higher test scores at the elementary, middle and high school levels, better attendance rates, lower suspension rates, lower grade retention rates and higher graduation rates than other schools (Center for Collaborative Education, 2001). Therefore, high achieving schools are expected to register higher scores in personalized learning than low achieving schools.

Extracurricular Activities

Extracurricular activities are activities organized and supported by schools, occurring mostly on school premises (e.g., academic clubs, sports, drama), beyond the curricular activities. They are embedded in schools and communities and influenced by families and peers (Bronfenbrenner, 1979, 1986, 1998), and are privileged contexts for youth development (Kahne et al., 2001). Feldman and Matjasko (2005) reviewed the literature on school-based activity participation and drew several general conclusions. Participation in school-based, structured extracurricular activities, as opposed to participation in unstructured activities (even school-based ones), is associated with positive adolescent developmental outcomes: a) higher academic performance and attainment; b) reduced rates of dropout; c) lower rates of substance use; d) less sexual activity among girls; e) better psychological adjustment, including higher self-esteem, less worry regarding the future and reduced feelings of social isolation; and f) reduced rates of delinquent behavior, including criminal arrests and antisocial behavior (Feldman & Matjasko, 2005). However, the association between extracurricular activity participation and these outcomes depends on interactions with other factors. The relation between participation in extra-curricular activities and disruptive outcomes, such as delinquent behaviors, is moderated by gender, and the relation between participation in extra-curricular activities and psychological adjustment is moderated by the type of activity (Feldman & Matjasko, 2005). The relation between extra-curricular activities and academic achievement is mediated by self-beliefs (Valentine, Cooper, Bettencourt, & DuBois, 2002), and moderated by race (Feldman & Matjasko, 2005). Students’ participation in extra-curricular activities is associated with school characteristics (McNeal, 1999), which is why higher scores on extracurricular activities are expected in highly effective schools.

Educational Technologies

Educational technologies refer to the use of technologies (with a special focus on computer-related technologies) in the classroom and in the teaching/learning process. In recent decades, the expanded use of educational technologies has shifted the teacher’s role from being the primary source of information to providing students with structure and advice, monitoring their progress and assessing their accomplishments (Kozma, 2003). Several meta-analyses attest to the significant and positive impact of educational technologies on academic achievement (Means, Toyama, Murphy, Bakia, & Jones, 2009; Schacter & Fagnano, 1999) and on motivation to learn, encouraging collaborative learning and supporting the development of critical thinking and problem-solving skills (e.g., Schacter & Fagnano, 1999). However, these benefits only occur when the learning objectives are clear and the use of technology is goal-oriented (Waxman, Lin, & Michko, 2003), including for at-risk and special-needs students (Means, Chelemer, & Knapp, 1991). Because of this, schools with a high prevalence of at-risk students or students with special needs are expected to register high levels of use of educational technologies. On the other hand, it is likely that schools with high achieving students also foster the use of educational technologies. For this reason, the importance of assessing the use of educational technologies is closely linked to the simultaneous understanding of the degree to which its use is consistent and oriented to the learning objectives.

Individual students’ perceptions

Indicators of institutional improvement efforts or of school success promotion strategies have traditionally been school-level indicators provided by principals or by teachers directly involved in school success promotion. The evaluation of school improvement efforts should include indicators of several dimensions, from multi-informant perspectives. Students’ perceptions are a relevant indicator for improvement efforts, among other reasons, because they reflect students’ representations, which tend to be related to motivations, beliefs and behaviors (e.g., Moreira, Vaz, Dias, & Petracchi, 2009). Because school improvement strategies involve students, and are intended to reach and affect several functioning domains, students’ perceptions of school success promoting strategies are highly relevant, as they reflect students’ understanding of what is being carried out. Considering students’ perceptions of school improvement efforts means treating students as relevant agents of those efforts: not merely the subjects used to measure their potential impact, but also agents of change.

On the one hand, most school success promotion strategies are addressed to and/or involve students. Therefore, fully understanding the impact of these strategies on several indicators requires understanding how the different students of a school perceive them. On the other hand, school success promotion strategies are adopted at an institutional level, and comparing the values a given strategy assumes in a given school with those of other schools may provide important indicators of school quality. Each school success promoting strategy constitutes a stimulus, which is expected to be objective and, therefore, to be presented similarly to different individuals. When several individuals exposed to the same stimulus are homogeneous in their perceptions of it, the perceived characteristic can be attributed mostly to the stimulus’s characteristics rather than to individual differences in perceiving it. Therefore, the homogeneity of the perceptions that a group of students have about the values that a given dimension assumes in their school is an indicator of the values that dimension assumes in that school. Because the degree to which schools implement school success promoting strategies has systematically been found to be associated with educational efficacy and school quality (Creemers & Reezigt, 2005), the comparison between schools in terms of school success promoting strategies is of great relevance to their school improvement and school efficacy efforts. Taking students’ perceptions about their school into account for improvement efforts requires the ability to separately interpret the meaning of those perceptions, both within and between schools. This requires assessment instruments that have been shown to be adequate for use in hierarchical samples, where students’ perceptions are taken as an indicator of differences in a given dimension within a school, but also as an indicator of differences between schools.

In spite of the relevance of students’ perceptions for school improvement efforts, there is an overall lack of empirically tested measures that can reliably assess students’ perceptions of school success promotion strategies. The fact that students’ perceptions measured at an individual level are of interest at an institutional level raises the question of the dimensional structure of the instrument across different groups of students and schools. Answering this question requires considering the hierarchical structure of the data (students within schools) and using multilevel structural equation modeling. The objective of this study was to develop and to analyze the validity evidence of an assessment measure of students’ perceptions of school success promotion strategies. This study tested the hypothesis that the instrument’s dimensional structure is adequate within (students within schools) and between schools for the assessment of students’ perceptions of school success promoting strategies. Moreover, this study tested the hypothesis that the instrument is sensitive to the different values that the dimensions of school success promotion strategies assume in different groups.

Method

Participants

Two samples were used in this study. For the identification of the factorial structure, a first sample of 406 Portuguese high school students was included, 49.3% of whom were from the 10th grade (n = 200), 32% from the 11th grade (n = 130) and 18.7% from the 12th grade (n = 76). Participants’ ages ranged between 15 and 19 years old (M = 16.24, SD = 1.05); 54.8% were female (n = 221) and 45.2% were male (n = 182); three participants did not report their gender. In order to test the factorial structure found in this sample, a second sample of 954 Portuguese high school students (12th graders) was recruited, of which 50.2% were female (n = 479) and 34.6% were male (n = 330); 15.2% did not specify their gender. Given the interest in assessing the adequacy of the factorial structure in different groups of students, the factorial structure was tested in two groups: a group of students from schools with a consistent history of good academic outcomes and a group of students from schools with a consistent history of poor academic results. The definition of these two groups of schools was based on the national school rankings of the 5 years preceding the study. Schools that had achieved the first places in the national rankings over those 5 years were included in the group of schools with a history of consistently good results. Conversely, schools classified in the last places in the national rankings over those 5 years were included in the group of schools with a history of consistently poor outcomes. The mean age in both groups was 17 years old (schools with good results: M = 16.98, SD = .566; schools with poor results: M = 17.26, SD = .677). These students attended a total of 24 secondary schools (nationally distributed) in the 2007/2008 academic year. Six of the participating schools (corresponding to a total of 484 students, or 50.73%) had a history of consistently good academic outcomes, whereas the remaining 18 schools (corresponding to a total of 470 students, or 49.27%) had a history of consistently poor academic outcomes.

Instruments

Based on the empirically validated strategies for school success promotion, items were generated according to the seven dimensions identified and described earlier. These items were then analyzed by a group of researchers (who assessed the concordance between item content and each dimension), which resulted in the elimination and/or rewriting of overlapping items. To assess its suitability for the student population, the draft version was presented to a group of high school students, who were asked to point out items that were not clear enough and to suggest improvements. Based on this feedback, a final version of 36 items scored on a 4-point Likert-type scale (1 = Totally disagree; 2 = Disagree; 3 = Agree; 4 = Totally agree) was established. The 36 items are distributed across 7 scales: career development, proximity, mentoring, active learning, personalized learning, extra-curricular activities and educational technologies. The Career development scale assesses students’ perceptions of the degree to which their school offers opportunities for career development, through 8 items (e.g., “students have the opportunity to carry out activities for professional development”). The Teacher and staff-student proximity scale assesses students’ perceptions of the degree to which student-teacher interactions are characterized by proximity, support, and monitoring of academically relevant issues, from attendance to school behaviors. It is composed of 8 items (e.g., “when a student is absent, school staff will contact his/her parents or other family members”). The Mentoring scale measures students’ perceptions of how their school implements interventions offering support and guidance throughout their development, using 5 items (e.g., “someone is available to provide personalized support”). The Active learning scale assesses students’ perceptions of how teachers use learning techniques that promote active student engagement in activities, through 6 items (e.g., “teachers accept students’ willingness to carry out activities in a non-conventional way”). The Personalized learning scale measures students’ perceptions of how teachers tailor educational strategies to students’ characteristics and needs, using 3 items (e.g., “the learning process takes into account the student’s individual needs”). The Extra-curricular activities scale assesses students’ perceptions of how their school promotes structured activities that take place outside the curricular activities, using 3 items (e.g., “after-school activities are available to students”). The Educational technologies scale assesses students’ perceptions of how schools use technologies to support the teaching-learning process, through 3 items (e.g., “students have access to computers and other technological material and use them as learning resources”).
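
For illustration only, the scoring logic described above can be sketched in Python. This is not the authors' code; the item column names (cd_1, px_1, etc.) are hypothetical, since the paper reports only the number of items per scale and the 1–4 response coding.

    import pandas as pd

    # Hypothetical item-to-scale mapping: the paper specifies only item counts
    SCALES = {
        "career_development": [f"cd_{i}" for i in range(1, 9)],         # 8 items
        "proximity": [f"px_{i}" for i in range(1, 9)],                  # 8 items
        "mentoring": [f"mt_{i}" for i in range(1, 6)],                  # 5 items
        "active_learning": [f"al_{i}" for i in range(1, 7)],            # 6 items
        "personalized_learning": [f"pl_{i}" for i in range(1, 4)],      # 3 items
        "extracurricular": [f"eca_{i}" for i in range(1, 4)],           # 3 items
        "educational_technologies": [f"et_{i}" for i in range(1, 4)],   # 3 items
    }

    def score_spsi(responses: pd.DataFrame) -> pd.DataFrame:
        """Return one mean score per scale for each respondent (items coded 1-4)."""
        return pd.DataFrame({scale: responses[items].mean(axis=1)
                             for scale, items in SCALES.items()})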

Data collection procedures

For the first study, schools were selected by convenience, in particular by geographic proximity. For the testing of the factorial structure, both groups (schools with a consistent history of good results and schools with a consistent history of poor results) were selected based on their position in the national school rankings in the 5 years preceding the study. The other data collection procedures were the same for both studies: schools were invited to participate in the study, informed consent was collected from parents, and the instruments were administered collectively during school classes, with the consent of the teachers. Two samples were included because of the recommendation to perform confirmatory analyses in a different sample from the one used for the exploratory analysis or the principal component analysis. Moreover, because schools may differ in the degree to which they implement each of the dimensions assessed by this instrument, it is desirable to assess the factorial structure of the instrument in two groups of schools (schools with a consistent history of good results and schools with a consistent history of poor results).

Data analysis procedures

Prior to analysis, all data were carefully double-checked for possible miscoding and for the distribution of values, and missing values were replaced using the series mean estimation method.
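
Series-mean imputation replaces each missing value with the mean of its own item (column). A minimal sketch of this step, assuming the item responses are held in a pandas DataFrame named items (a hypothetical name):

    import pandas as pd

    def impute_series_mean(items: pd.DataFrame) -> pd.DataFrame:
        # fillna with the per-column means reproduces SPSS's "series mean" replacement
        return items.fillna(items.mean(numeric_only=True))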

Exploratory Factor Analysis and Reliability (EFA). Data collected from the first sample were used to perform an exploratory factor analysis (principal components extraction) with Promax rotation and Kaiser normalization, extracting factors with eigenvalues higher than 1. Reliability indices were calculated using Cronbach’s alpha. Analyses were carried out using the Statistical Package for the Social Sciences (SPSS) for Windows, version 18.0.
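
The analyses above were run in SPSS; as a rough equivalent, the sketch below shows the same steps with the Python factor_analyzer package (its "principal" extraction approximates, but is not identical to, SPSS's principal components extraction) plus a hand-rolled Cronbach's alpha. Variable names are hypothetical.

    import pandas as pd
    from factor_analyzer import FactorAnalyzer
    from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity, calculate_kmo

    def explore_structure(items: pd.DataFrame):
        # Sampling adequacy checks reported in the Results section (KMO, Bartlett)
        chi_square, p_value = calculate_bartlett_sphericity(items)
        _, kmo_overall = calculate_kmo(items)
        # Seven factors, oblique Promax rotation, eigenvalue criterion handled upstream
        fa = FactorAnalyzer(n_factors=7, rotation="promax", method="principal")
        fa.fit(items)
        return kmo_overall, chi_square, p_value, fa.loadings_

    def cronbach_alpha(scale_items: pd.DataFrame) -> float:
        # alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)
        k = scale_items.shape[1]
        item_var = scale_items.var(axis=0, ddof=1).sum()
        total_var = scale_items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_var / total_var)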

Confirmatory Factor Analysis (CFA). In light of the profile differences between the two school groups, the instrument’s factorial structure was tested using Confirmatory Factor Analysis on the new sample of 954 students. The Confirmatory Factor Analysis was carried out with EQS 6.1, using a Maximum Likelihood procedure for parameter estimation and robust methods for the correction of non-normality, including the Satorra-Bentler scaled Chi-Square (S-Bχ²), the Comparative Fit Index computed with the robust method (*CFI), the Standardized Root Mean-Square Residual (SRMR) and the Root Mean Square Error of Approximation computed with the robust method (*RMSEA).
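
The model itself is a correlated seven-factor measurement model. As a sketch only (the authors used EQS 6.1; the semopy package shown here fits plain maximum likelihood and does not apply the Satorra-Bentler robust corrections reported in the paper), with hypothetical item names:

    import semopy

    MODEL_DESC = """
    CareerDevelopment =~ cd_1 + cd_2 + cd_3 + cd_4 + cd_5 + cd_6 + cd_7 + cd_8
    Proximity =~ px_1 + px_2 + px_3 + px_4 + px_5 + px_6 + px_7 + px_8
    Mentoring =~ mt_1 + mt_2 + mt_3 + mt_4 + mt_5
    ActiveLearning =~ al_1 + al_2 + al_3 + al_4 + al_5 + al_6
    PersonalizedLearning =~ pl_1 + pl_2 + pl_3
    Extracurricular =~ eca_1 + eca_2 + eca_3
    EducationalTechnologies =~ et_1 + et_2 + et_3
    """

    def fit_cfa(data):
        # Covariances among the latent factors are estimated by default;
        # they can also be written out explicitly with the "~~" operator.
        model = semopy.Model(MODEL_DESC)
        model.fit(data)                  # maximum likelihood estimation
        return semopy.calc_stats(model)  # chi-square, CFI, RMSEA, and related indices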

Results

Exploratory Factor Analysis and Reliability

EFA was applied to data from the first sample (n = 406). A KMO value of .91 and a Bartlett’s test value of 6023.442 (p < .001) were obtained, indicating the feasibility of carrying out a factor analysis. The EFA revealed a seven-factor structure, consistent with a theoretically and empirically adequate solution (Table 1).

Table 1. Exploratory Factor Analysis and Scales Cronbach’s Alpha

Note: Extraction Method: Principal Component Analysis. Rotation Method: Promax with Kaiser Normalization. P –Proximity; M – Mentoring; ECA – Extracurricular activities; AL – Active learning; PL – Personalized learning; ET –Educational Technologies; CD – Career Development.

Cronbach’s alpha values ranged from .71 (Educational Technologies) to .87 (Career Development). The Teacher-Student Proximity and Active Learning scales had alpha values of .79, Extracurricular Activities .81, Personalized Learning .84 and Mentoring .85.

All latent factors were positively correlated. Several moderate-sized correlations were obtained, namely: between teacher-student proximity and educational technologies (r = .401, p < .01), extra-curricular activities (r = .434, p < .01), personalized learning (r = .404, p < .01) and active learning (r = .304, p < .01); between educational technologies and extracurricular activities (r = .363, p < .01) and personalized learning (r = .362, p < .01); between mentoring and personalized learning (r = .385, p < .01); between extra-curricular activities and personalized learning (r = .428, p < .01); and between active learning and career development (r = .452, p < .01) (Table 2).

Table 2. Correlations between the factors

Note: **Correlation is significant at the 0.01 level (2-tailed). P –Proximity; M – Mentoring; ECA – Extracurricular activities; AL – Active learning; PL - Personalized learning; ET –Educational Technologies; CD – Career Development.

Confirmatory Factor Analysis

In order to test model fit (Figure 1), CFA was applied to data from the second sample (n = 954). The standardized solution and parameter estimates for the correlated seven-factor measurement model of the 36 items presented a good fit to the data, S-Bχ² (573) = 1514.1140, p < .001, SRMR = .046, *CFI = .924, *RMSEA = .042, 90% CI [.039, .044]. However, analysis of the modification indices suggested an additional covariance between the errors of two items from the Personalized Learning scale (E31, E34) (Figure 1). Overall, the final model adequately reproduced the data, S-Bχ² (572) = 1315.4578, p < .001, SRMR = .032, *CFI = .940, *RMSEA = .037, 90% CI [.034, .040].

Figure 1. Factor structure of the Students’ Perceptions of School Success Promoting Strategies Inventory. P –Proximity; M – Mentoring; ECA – Extracurricular activities; AL – Active learning; PL - Personalized learning; ET –Educational Technologies; CD – Career Development.

In order to test the invariance of the model between these two types of schools, the principles suggested by Byrne (2006) were followed. First, the baseline models for each group were established. Findings were consistent in revealing good fit statistics for this model both for schools with a history of consistently good academic outcomes (S-Bχ² (572) = 891.9005, p < .001, SRMR = .045, *CFI = .951, *RMSEA = .034, 90% CI [.030, .038]) and for schools with a history of consistently poor academic results (S-Bχ² (572) = 1089.2432, p < .001, SRMR = .066, *CFI = .911, *RMSEA = .044, 90% CI [.040, .048]).

Testing for configural invariance, the goodness-of-fit statistics revealed a well-fitting multigroup model, with an S-Bχ² (1144) value of 1982.9649 (p < .001), closely matching the sum of these values when the two baseline models were analyzed separately. When testing measurement model invariance and structural model invariance, although some deterioration in fit occurred, the results suggest a well-fitting model (Table 3).

Table 3. Tests for Invariance of SPSI Factorial Structure: Summary of Goodness-of-Fit Statistics

Note: *Value of the Robust Test; a: Corrected value.

Since several authors suggest the use of multilevel techniques in studies in which students are nested in schools (e.g., Lee & Burkam, 2003), one of the objectives of this study was to test for within- and between-level variance of the SPSI. The decision to proceed with a multilevel analysis depended, in part, on the intraclass correlation values among the within-school variables (Muthén, 1991) and on group sizes (Muthén, 1997). In this study’s sample, very small intraclass correlation coefficients were found for the Active Learning (.095), Personalized Learning (.026) and Mentoring (.008) scales (Table 4). Moreover, the total number of schools in this sample was very low (n = 24), and eight of them comprised fifteen or fewer individuals. These issues suggested there was no rationale for conducting analyses at the higher level of the model.
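
The quantity that drives this decision is the intraclass correlation, ICC(1) = (MSB − MSW) / (MSB + (k − 1)·MSW), taken from a one-way random-effects ANOVA of each scale score across schools. A minimal computational sketch (not the authors' code; variable names and the average-group-size approximation for unbalanced designs are assumptions):

    import pandas as pd

    def icc1(df: pd.DataFrame, group: str, score: str) -> float:
        """One-way random-effects ICC(1) of `score` across the clusters in `group`."""
        groups = df.groupby(group)[score]
        n_groups = groups.ngroups
        sizes = groups.size()
        grand_mean = df[score].mean()
        # Between- and within-group mean squares from the one-way ANOVA decomposition
        ss_between = (sizes * (groups.mean() - grand_mean) ** 2).sum()
        ss_within = ((df[score] - groups.transform("mean")) ** 2).sum()
        ms_between = ss_between / (n_groups - 1)
        ms_within = ss_within / (len(df) - n_groups)
        k = sizes.mean()  # average cluster size (approximation for unbalanced designs)
        return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)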

Table 4. Estimated intraclass correlations

Criterion validity of the scales

In order to test the criterion validity of the scales, differences in each dimension between schools with good academic results and schools with poor academic results were evaluated. Students from schools with consistently good academic outcomes presented statistically significantly higher mean scores on all SPSI scales compared with students from schools with a history of poor academic results, with the exception of the Personalized Learning (p = .369) and Mentoring (p = .125) scales (Table 5).
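
The comparison summarized in Table 5 amounts to an independent-samples t test per scale. A sketch of that step (column names such as "good"/"poor" profile labels are hypothetical):

    from scipy import stats

    def compare_school_groups(scores, profile):
        """scores: DataFrame of scale scores; profile: Series of 'good'/'poor' labels."""
        results = {}
        for scale in scores.columns:
            good = scores.loc[profile == "good", scale]
            poor = scores.loc[profile == "poor", scale]
            t, p = stats.ttest_ind(good, poor, nan_policy="omit")
            results[scale] = {"t": t, "p": p,
                              "mean_good": good.mean(), "mean_poor": poor.mean()}
        return results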

Table 5. Mean, standard deviation and t test for school success promotion dimensions for students from schools with a consistent history of good and poor academic results

Discussion

This paper describes the development and the analysis of validity evidence for the Students’ Perceptions of School Success Promoting Strategies Inventory (SPSI), a self-report assessment tool capturing students’ perceptions of their schools’ efforts at implementing evidence-based strategies for the promotion of school success and the prevention of school drop-out. Results reveal acceptable indicators of validity evidence based on test content, on internal structure, on relations to other variables and on consequences of testing, consistent with the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999).

The 7-factor structure found is consistent with the school-related dimensions identified as having empirical support for the promotion of school success (Educational technologies, Career development, Teacher-student proximity, Active learning, Extra-curricular activities, Personalized learning and Mentoring). All scales presented good reliability values (with α values ranging from .71 for Educational technologies to .87 for Career development). All scales were weakly to moderately correlated, as expected. On the one hand, students’ perceptions were also influenced by individual characteristics, in the sense that those characteristics mediated the perception not only of a specific strategy but of several strategies. On the other hand, at the school level, covariation between the different school success promotion strategies in a given school is very likely, because the implementation of the different dimensions is shared and depends on some common factors (such as the strategic vision for the school, leadership, resources, constraints, etc.). Confirmatory analyses confirmed the adequacy of the factorial structure of the SPSI in relation to the data from all schools. The adequacy of the factorial structure was also tested in two sub-groups: the group of schools with a history of consistently good academic results and the group of schools with a history of consistently poor results. When testing factorial invariance, the model proved adequate in both groups of schools. Because the dimensions evaluated by the SPSI are school-related dimensions, the values each dimension assumes in different schools were expected to vary, because they depend on several school-level factors (such as school strategy, school priorities, school constraints and resources, student characteristics, etc.). Therefore, and because of the associations of the dimensions assessed by the SPSI with students’ academic outcomes and with school efficacy and school quality, it was relevant to assess the adequacy of the model to the data of both high and low achieving schools. The goodness-of-fit statistics for the several models revealed that the factorial structure was adequate for the data of the whole sample, and also for the data of each of the groups of schools (high and low achieving).

Because of the hierarchical structure of the sample (students within schools), this study also assessed the factorial structure within and between schools separately. In order to decide whether the conditions to perform a multilevel structural equation model were met, the intraclass correlations of the within-school variables were assessed. Results revealed good values for four dimensions (Career development, Proximity, Extra-curricular activities, and Educational technologies) and inadequate values for three dimensions (Mentoring, Active learning and Personalized learning).

These results suggested that the conditions to perform a multilevel structural equation analysis were not met. The intraclass correlation refers to the degree of homogeneity of a group of individuals in assessing a given variable. Individuals’ development proceeds through the dynamics of one’s characteristics, the context’s characteristics and the interaction between both. If a group of students is homogeneous in the assessment of a given variable, it means that the different members of the group perceive that variable in similar ways. At the same time, school success promotion strategies vary from dimensions that are intended to affect all students (such as Extra-curricular activities) to dimensions that are intended to reach students with specific characteristics (for example, more compensatory dimensions). In fact, it is very likely that a student with particular difficulties in math will be involved in mentoring activities oriented to compensating deficits in at-risk students. Therefore, it is very likely that students who are engaged in compensatory activities in math perceive the school as promoting Mentoring more than students who are not involved in compensatory activities. This also applies to Personalized learning, which refers to learning processes that match teaching methods and techniques to students’ characteristics (including expectations, interests, preferences, motivations). Some students with particular characteristics may feel that their expectations or characteristics are being considered in the learning process, while other students may feel that the same activity does not correspond to their characteristics, needs or motivations. Likewise, students’ individual differences related to personality, for example, are involved in students’ perceptions of how the school offers conditions for proactive learning. Students with a greater sense of autonomy or agency may perceive that a given strategy allows them to actively engage in the activity, while other students may consider that the same strategy is not appropriate to keep them involved. Therefore, in spite of the strategy being the same, different students may be exposed to it differently. Moreover, students’ individual characteristics mediate the perceptions that different students have of the same stimulus (Moreira, Dias, Vaz, Rocha, & Freitas, 2012). The fact that the Mentoring, Active learning and Personalized learning dimensions had intraclass correlation values suggesting heterogeneity among the students of a given school in perceiving the degree to which that school implements these dimensions is consistent with the mediating effect of individual characteristics on their perceptions.

The SPSI was found to be adequate for measuring differences between students in perceiving their school’s strategies for the promotion of school success. Because students’ perceptions are nested within schools, in order to understand the adequacy of the instrument for measuring differences between schools in their school success promotion strategies, the presence of the conditions needed to perform multilevel structural equation modeling was explored. The intraclass correlation coefficients found in this sample were very low for three scales. This prevented us from proceeding with further analysis, and was also interpreted as a possible consequence of this sample’s characteristics: a) the small number of schools included and b) the small number of students in some schools (8 schools had 15 students or fewer). Moreover, the hypothesis that the intraclass homogeneity may also be associated with some items’ content was raised. All these aspects should be considered in future studies using a larger sample of schools, allowing the trend found in this study regarding the adequacy of the SPSI for measuring dimensions of school success promotion at both the within- and between-school levels to be tested.

This study also evaluated the criterion validity and validity generalization of the SPSI by exploring the differences between high and low achieving schools in the dimensions assessed by the instrument. High achieving schools registered higher values than low achieving schools in 5 dimensions, but not in Mentoring and Personalized Learning, for which no differences between schools were registered. These results are consistent with expectations, given the robust body of evidence linking the effective implementation of school improvement in the dimensions included in the SPSI with school efficacy (Creemers & Reezigt, 2005). These results are also in agreement with recent studies assessing predictors of academic achievement, both at the individual level (Moreira et al., 2013) and at the school level (Moreira, Dias, Vaz, Rocha, & Freitas, 2012).

In sum, this study revealed that the SPSI presents acceptable indicators of validity evidence, as suggested by the Standards for Educational and Psychological Testing (American Educational Research Association, American Psychological Association, & National Council on Measurement in Education, 1999). Validity evidence based on internal structure is supported by the seven-factor structure and the relationships among test items and test dimensions; in addition, the results on factorial invariance across groups support the internal structure. Validity evidence based on relations to other variables was found for the test-criterion relationships (test scores were found to differ between schools with different average student academic performance). Finally, validity evidence based on the consequences of using the SPSI relies on the fact that test results may support the selection of school improvement efforts, including school-wide and classroom strategies.

A key limitation of this study is the relatively low number of schools. This sample included only 24 schools, and 8 schools had 15 or fewer individuals. As suggested by several authors, the decision to proceed with a multilevel analysis depends, in part, on the intraclass correlation values among the within-school variables (Muthén, 1991) and on group sizes (Muthén, 1997). Therefore, this sample was not adequate for performing multilevel structural equation modeling. Future studies should test the within- and between-level variance of the SPSI in a sample with a higher number of schools and with more homogeneity in the number of students per school. Future studies also need to include younger students, in order to better understand the developmental specificities associated with students’ perceptions of school success promoting strategies. Moreover, future studies should explore the extent to which students’ responses to the SPSI are mediated by other psychobiological dimensions such as motivation and engagement with school. These are pertinent research questions that can shed more light on the extent to which the impact of contextual factors on academic achievement is moderated by individual characteristics.

This study has implications for both research and practice. The trends found in this study confirm that schools with different profiles present differences in school-related success promotion strategies. These results contribute to the awareness of the need to consider the hierarchical structure of the sample when developing and testing instruments intended to assess both individual- and school-level dimensions. Furthermore, for educators, this study’s results highlight the importance of school agents considering students’ individual differences when planning and implementing school success promotion strategies, as those differences have the potential to mediate the impact that a given strategy will have on students.

The availability of an instrument that allows schools to be compared on dimensions relevant to school improvement and school efficacy efforts is of great relevance, as it favors comparative analyses. Comparing the values a given school presents in a given dimension with those of other schools (both schools with similar and with different profiles) favors the understanding of that school’s specificities. This knowledge has the potential to contribute to a) describing school characteristics, b) establishing priorities in terms of school improvement efforts, and c) monitoring the impact of school improvement efforts.

Its brief format favors its joint use with the assessment of other dimensions (e.g., parent involvement with school) for which instruments with good psychometric properties already exist. Because school success promotion strategies should be tailored to the school population, a relevant use of this instrument would be the measurement of these dimensions while controlling for student characteristics, such as level of exposure to risk factors (e.g., Preto & Moreira, 2012). Moreover, given the need for schools to consider students from a biopsychosocial perspective (Moreira, Oliveira, et al., 2012; Moreira, Crusellas, Sá, Gomes, & Matias, 2010), this instrument has the potential to contribute to the understanding of how these dimensions relate to other relevant school- or teacher-related dimensions (e.g., Moreira, Pinheiro, Gomes, Cotter, & Ferreira, 2013).

This study was supported by Calouste Gulbenkian Foundation, and by Minerva Foundation - Cultura, Ensino e Investigação Científica, the founding institution of the Lusíada universities. We are very grateful to the anonymous reviewers for their fruitful comments and suggestions, as their contributions were crucial for the improvement of an earlier version of this article.

References

Amburgh, J., Devlin, J., Kirwin, J., & Qualters, D. (2007). A tool for measuring active learning in the classroom. American Journal of Pharmaceutical Education, 71(5), 18.Google ScholarPubMed
American Educational Research Association, American Psychological Association, National Council on Measurement in Education, Joint Committee on Standards for Educational, & Psychological Testing (US). (1999). Standards for educational and psychological testing. Washington, DC: American Educational Research Association Google Scholar
Astin, A. (1993). What matters in college? Four critical years revisited. San-Francisco, CA: Josey-Bass.Google Scholar
Auster, E., & Wylie, K. (2006). Creating active learning in the classroom: A systematic approach. Journal of Management Education, 30, 333353. http://dx.doi.org/10.1177/1052562905283346 Google Scholar
Bell, B. S., & Kozlowski, S. W. (2008). Active learning: Effects of core training design elements on self-regulatory processes, learning, and adaptability. Journal of Applied Psychology, 93, 296316. http://dx.doi.org/10.1037/0021-9010.93.2.296 CrossRefGoogle ScholarPubMed
Bronfenbrenner, U. (1979) The ecology of human development. Cambridge, MA: Harvard University Press.Google Scholar
Bronfenbrenner, U. (1986). Ecology of the family as a context for human development: Research perspectives. Developmental Psychology, 22, 723742. http://dx.doi.org/10.1037/0012-1649.22.6.723 CrossRefGoogle Scholar
Bronfenbrenner, U. (1998). The ecology of developmental processes. In Lerner, R. (Ed.), Handbook of child psychology: Vol. 1. Theoretical models of human development (5 th Ed., pp. 9931028). New York, NY: Wiley.Google Scholar
Brown, M., & Freeman, K. (2000). Distinguishing features of critical thinking classrooms. Teaching Higher Education, 5, 301309. http://dx.doi.org/10.1080/713699143 CrossRefGoogle Scholar
Bryk, A., Easton, J., Kerbeow, D., Rollow, S., & Sebring, P. (1994). The state of Chicago school reform. Phi Delta Kappan, 76, 7481.Google Scholar
Byrne, B. M. (2006 ). Structural Equation Modeling with EQS: Basic concepts, applications, and programming (2 nd Ed.). Mahwah, NJ: Lawrence Erlbaum Associates, Inc.Google Scholar
Center for Collaborative Education (2001) How are the Boston pilot schools faring?: An analysis of student demographics, engagement and performance. Boston, MA: Author.Google Scholar
Creemers, B., & Reezigt, G. (2005). Linking school effectiveness and school improvement: The background and outline of the project. School Effectiveness & School Improvement, 16, 359371. http://dx.doi.org/10.1080/09243450500234484 CrossRefGoogle Scholar
DuBois, D., Portillo, N., Rhodes, J., Silverthorn, N., & Valentine, J. (2011). How effective are mentoring programs for youth? A systematic assessment of the evidence. Psychological Science in the Public Interest, 12, 5791. http://dx.doi.org/10.1177/1529100611414806 CrossRefGoogle ScholarPubMed
Feldman, A., & Matjasko, J. (2005). The role of school-based extracurricular activities in adolescent development: A comprehensive review and future directions. Review of Educational Research, 75, 159210. http://dx.doi.org/10.3102/00346543075002159 CrossRefGoogle Scholar
Gonzales, R., Richards, K., & Seeley, K. (2002). Youth out of school: Linking absence to delinquency. Denver, CO: Colorado Foundation for Families and Children Google Scholar
Hamilton, S., & Hamilton, M. (1992). Mentoring programs: Promise and paradox. Phi Delta Kappan, 73, 546550.Google Scholar
Hamilton, S., & Hamilton, M. (2010). Building mentoring relationships. New Directions from Youth Development, 126, 141144. http://dx.doi.org/10.1002/yd.354 CrossRefGoogle Scholar
Hamre, B. K., & Pianta, R. C. (2005). Can instructional and emotional support in the first-grade classroom make a difference for children at risk of school failure? Child Development, 76, 949967. http://dx.doi.org/10.1111/j.1467-8624.2005.00889.x CrossRefGoogle ScholarPubMed
Hamre, B., & Pianta, R. (2006). Student-teacher relationships . In Bear, G. & Minke, K. (Eds.), Children’s needs III: Development, prevention and intervention (pp. 5971). Bethesda, MD: National Association of School Psychologists.Google Scholar
Hargreaves, D. (2006) Personalizing learning 6: The final gateway: School design and organization London, UK: Specialist Schools Trust Google Scholar
Hartley, J. (2007). Teaching, learning and new technology: A review for teachers. British Journal of Educational Technology, 38, 4262. http://dx.doi.org/10.1111/j.1467-8535.2006.00634.x CrossRefGoogle Scholar
Hopkins, D. (2001). Meeting the challenge: An improvement guide for schools facing challenging circumstances. London, UK: Department for Education and Skills.
Hughes, K., & Karp, M. (2004). School-based career development: A synthesis of the literature. Institute of Education and the Economy. New York, NY: Teachers College, Columbia University.
Kahne, J., Nagaoka, J., Brown, A., O’Brien, J., Quinn, T., & Thiede, K. (2001). Assessing after-school programs as contexts for youth development. Youth and Society, 32, 421–446. http://dx.doi.org/10.1177/0044118X01032004002
Kozma, R. (2003). Technology and classroom practices: An international study. Journal of Research on Technology in Education, 36, 1–14.
Langhout, R., Rhodes, J., & Osborne, L. (2004). An exploratory study of youth mentoring in an urban context: Adolescents’ perceptions of relationship styles. Journal of Youth and Adolescence, 33, 293–306. http://dx.doi.org/10.1023%2FB%3AJOYO.0000032638.85483.44
Leadbeater, C. (2004). Learning about personalization: How can we put the learner at the heart of the education system? Nottingham, UK: DfES.
Leadbeater, C. (2005). The shape of things to come: Personalized learning through collaboration. Nottingham, UK: DfES.
Lee, V. E., & Burkam, D. T. (2003). Dropping out of high school: The role of school organization and structure. American Educational Research Journal, 40, 353–393. http://dx.doi.org/10.3102/00028312040002353
Lent, R., Brown, S., & Hackett, G. (1994). Toward a unifying social cognitive theory of career and academic interest, choice, and performance [Monograph]. Journal of Vocational Behavior, 45, 79–122. http://dx.doi.org/10.1006/jvbe.1994.1027
Lent, R., Brown, S., & Hackett, G. (2000). Contextual supports and barriers to career choice: A social cognitive analysis. Journal of Counseling Psychology, 47, 36–49. http://dx.doi.org/10.1037//0022-0167.47.1.36
Maddy-Bernstein, C. (2000). Career development issues affecting secondary schools. The Highlight Zone Research @ Work, 1, 1–8.
Mahmood, M., Tariq, M., & Javed, S. (2011). Strategies for active learning: An alternative to passive learning. Academic Research International, 1, 193–198.
McNeal, R. (1999). Participation in high school extracurricular activities: Investigating school effects. Social Science Quarterly, 80, 291–309.
Means, B., Chelemer, C., & Knapp, M. (Eds.). (1991). Teaching advanced skills to at-risk students: Views from research and practice. San Francisco, CA: Jossey Bass Publishers.
Means, B., Toyama, Y., Murphy, R., Bakia, M., & Jones, K. (2009). Evaluation of evidence-based practices in online learning: A meta-analysis and review of online learning studies. Washington, DC: Department of Education, Office of Planning, Evaluation, and Policy Development.
Michael, J. (2006). Where’s the evidence that active learning works? Advances in Physiology Education, 30, 159–167. http://dx.doi.org/10.1152/advan.00053.2006
Moreira, P. A. S., Crusellas, L., Sá, I., Gomes, P., & Matias, C. (2010). Evaluation of a manual-based program for the promotion of social and emotional skills in elementary school children: Results from a 4-year study in Portugal. Health Promotion International, 25, 309–317. http://dx.doi.org/10.1093/heapro/daq029
Moreira, P. A. S., Dias, P., Vaz, F. M., Rocha, C., & Freitas, S. (2012). Preditores de desempenho académico em alunos Portugueses do 12.º ano: Comparação entre escolas com diferentes perfis [Predictors of academic performance in Portuguese secondary school students: Comparison between schools with different profiles]. In Matos, J. M., Verdasca, J., Matos, M., Costa, M. E., Ferrão, M. E., & Moreira, P. A. S. (Coords.), Promoção do sucesso educativo: Projectos de pesquisa [Educational success promotion: Research projects] (pp. 189–233). Lisboa, Portugal: Fundação Calouste Gulbenkian.
Moreira, P. A. S., Dias, P. C., Vaz, F., Rocha, C., Monteiro, J., & Vaz, J. M. (2012). Escolas secundárias Portuguesas com melhores e piores resultados académicos: Dimensões do aluno, da família e da escola [High and low achieving Portuguese secondary schools: Student, family and school dimensions]. Revista de Psicologia da Criança e do Adolescente, 3, 81–121.
Moreira, P. A. S., Dias, P., Vaz, F. M., & Vaz, J. M. (2013). Predictors of academic performance and school engagement: Integrating persistence, motivation and study skills perspectives using person-centered and variable-centered approaches. Learning and Individual Differences, 24, 117–125. http://dx.doi.org/10.1016/j.lindif.2012.10.016
Moreira, P. A. S., Oliveira, J. T., Azevedo, C., Sousa, A., Castro, J., & Cloninger, C. R. (2012). The psychometrics and validity of the Junior Temperament and Character Inventory in Portuguese adolescents. Comprehensive Psychiatry, 53, 1227–1236. http://dx.doi.org/10.1016/j.comppsych.2012.04.014
Moreira, P. A. S., Pinheiro, A., Gomes, P., Cotter, M. J., & Ferreira, R. (2013). Development and evaluation of psychometric properties of an Inventory of Teachers’ Perceptions on Socio-emotional needs. Psicologia: Reflexão e Crítica, 26, 67–76. http://dx.doi.org/10.1590/S0102-79722013000100008
Moreira, P. A. S., Vaz, F. M., Dias, P., & Petracchi, P. (2009). Psychometric properties of the Portuguese version of the School Engagement Instrument. Canadian Journal of School Psychology, 24, 303–317. http://dx.doi.org/10.1177/0829573509346680
Muthén, B. (1991). Multilevel factor analysis of class and student achievement components. Journal of Educational Measurement, 28, 338–354. http://dx.doi.org/10.1111/j.1745-3984.1991.tb00363.x
Muthén, B. (1997). Latent variable modeling with longitudinal and multilevel data. In Raftery, A. (Ed.), Sociological methodology (pp. 453–480). Boston, MA: Blackwell Publishers.
O’Connor, E., & McCartney, K. (2007). Examining teacher-child relationships and achievement as part of an ecological model of development. American Educational Research Journal, 44, 340–369. http://dx.doi.org/10.3102/0002831207302172
Palermo, F., Hanish, L. D., Martin, C. L., Fabes, R. A., & Reiser, M. (2007). Preschoolers’ academic readiness: What role does the teacher-child relationship play? Early Childhood Research Quarterly, 22, 407–422. http://dx.doi.org/10.1016/j.ecresq.2007.04.002
Pianta, R. C., Nimetz, S. L., & Bennett, E. (1997). Mother-child relationships, teacher-child relationships, and school outcomes in preschool and kindergarten. Early Childhood Research Quarterly, 12, 263–280. http://dx.doi.org/10.1016/S0885-2006(97)90003-X
Preto, M., & Moreira, P. A. S. (2012). Auto-regulação da aprendizagem em crianças e adolescentes filhos de vítimas de violência doméstica contra mulheres [Self-regulation of learning in children and adolescents whose mothers were victims of domestic violence]. Psicologia: Reflexão e Crítica, 25, 730–737. http://dx.doi.org/10.1590/S0102-79722012000400012
Reddy, R., Rhodes, J. E., & Mulhall, P. (2003). The influence of teacher support on student adjustment in the middle school years: A latent growth curve study. Development and Psychopathology, 15, 119–138. http://dx.doi.org/10.1017/S0954579403000075
Roorda, D., Koomen, H., Spilt, J., & Oort, F. (2011). The influence of affective teacher-student relationships on students’ school engagement and achievement: A meta-analytic approach. Review of Educational Research, 81, 493–529. http://dx.doi.org/10.3102/0034654311421793
Schacter, J., & Fagnano, C. (1999). Does computer technology improve student learning and achievement? How, when, and under what conditions? Journal of Educational Computing Research, 20, 329–343. http://dx.doi.org/10.2190/VQ8V-8VYB-RKFB-Y5RU
Sebba, J., Brown, N., Steward, S., Galton, M., & James, M. (2008). An investigation of personalized learning approaches used by schools. Nottingham, UK: DfES.
Stone, J. R. III. (2004). Career and technical education: Increasing school engagement. In Smink, J. & Schargel, F. P. (Eds.), Helping students graduate: A strategic approach to dropout prevention (pp. 177–184). Larchmont, NY: Eye on Education.
Switzer, D. (2004). Individualized instruction. In Smink, J. & Schargel, F. P. (Eds.), Helping students graduate: A strategic approach to dropout prevention (pp. 225–233). Larchmont, NY: Eye on Education.
Valentine, J., Cooper, H., Bettencourt, B., & DuBois, D. (2002). Out-of-school activities and academic achievement: The mediating role of self-beliefs. Educational Psychologist, 37, 245–256. http://dx.doi.org/10.1207/S15326985EP3704_4
Wagstaff, M., Combs, L., & Jarvis, B. (2000). Solving high school attendance problems: A case study. Journal of At-Risk Issues, 7, 21–30.
Wankat, P. (2002). The effective efficient professor: Teaching, scholarship and service. Boston, MA: Allyn & Bacon.
Waxman, H., Lin, M., & Michko, G. (2003). A meta-analysis of the effectiveness of teaching and learning with technology on student outcomes. Naperville, IL: Learning Point Associates.
Wubbels, T., Brekelmans, M., den Brok, P., & van Tartwijk, J. (2006). An interpersonal perspective on classroom management in secondary classrooms in the Netherlands. In Evertson, C., & Weinstein, C. (Eds.), Handbook of classroom management: Research, practice, and contemporary issues (pp. 1161–1192). Mahwah, NJ: Lawrence Erlbaum Associates.
Wubbels, T., & Brekelmans, M. (2005). Two decades of research on teacher-student relationships in class. International Journal of Educational Research, 43, 6–24. http://dx.doi.org/10.1016/j.ijer.2006.03.003
Table 1. Exploratory Factor Analysis and Scales Cronbach’s Alpha
Table 2. Correlations between the factors
Figure 1. Factor structure of the Students’ Perceptions of School Success Promoting Strategies Inventory. P – Proximity; M – Mentoring; ECA – Extracurricular activities; AL – Active learning; PL – Personalized learning; ET – Educational Technologies; CD – Career Development.
Table 3. Tests for Invariance of SPSI Factorial Structure: Summary of Goodness-of-Fit Statistics
Table 4. Estimated intraclass correlations
Table 5. Mean, standard deviation and t test for school success promotion dimensions for students from schools with a consistent history of good and poor academic results