
Pediatric Perceived Cognitive Functioning: Psychometric Properties and Normative Data of the Dutch Item Bank and Short Form

Published online by Cambridge University Press:  10 June 2019

Jan Pieter Marchal*
Affiliation:
Pediatric Psychosocial Department, Emma Children’s Hospital, Amsterdam UMC University Medical Centers, 1105AZ Amsterdam, The Netherlands Research Priority Area Yield, University of Amsterdam, 1105AZ Amsterdam, The Netherlands
Marieke de Vries
Affiliation:
Pediatric Psychosocial Department, Emma Children’s Hospital, Amsterdam UMC University Medical Centers, 1105AZ Amsterdam, The Netherlands Research Priority Area Yield, University of Amsterdam, 1105AZ Amsterdam, The Netherlands School of Psychology, University of Nottingham Malaysia Campus, Semenyih 43500, Malaysia
Judith Conijn
Affiliation:
Research Institute of Child Development and Education, University of Amsterdam, 1001NG Amsterdam, The Netherlands
André B Rietman
Affiliation:
Department of Pediatric Surgery and Intensive Care, Erasmus MC-Sophia Children’s Hospital, Rotterdam 3015GD, The Netherlands
Hanneke IJsselstijn
Affiliation:
Department of Pediatric Surgery and Intensive Care, Erasmus MC-Sophia Children’s Hospital, Rotterdam 3015GD, The Netherlands
Dick Tibboel
Affiliation:
Department of Pediatric Surgery and Intensive Care, Erasmus MC-Sophia Children’s Hospital, Rotterdam 3015GD, The Netherlands
Lotte Haverman
Affiliation:
Pediatric Psychosocial Department, Emma Children’s Hospital, Amsterdam UMC University Medical Centers, 1105AZ Amsterdam, The Netherlands
Heleen Maurice-Stam
Affiliation:
Pediatric Psychosocial Department, Emma Children’s Hospital, Amsterdam UMC University Medical Centers, 1105AZ Amsterdam, The Netherlands
Kim J Oostrom
Affiliation:
Pediatric Psychosocial Department, Emma Children’s Hospital, Amsterdam UMC University Medical Centers, 1105AZ Amsterdam, The Netherlands Research Priority Area Yield, University of Amsterdam, 1105AZ Amsterdam, The Netherlands
Martha A Grootenhuis
Affiliation:
Pediatric Psychosocial Department, Emma Children’s Hospital, Amsterdam UMC University Medical Centers, 1105AZ Amsterdam, The Netherlands Research Priority Area Yield, University of Amsterdam, 1105AZ Amsterdam, The Netherlands Princess Máxima Center for Pediatric Oncology, Utrecht 3584CS, The Netherlands
*
*Correspondence and reprint requests to: Jan Pieter Marchal, Meibergdreef 9, Room G8-136, 1105AZ Amsterdam, The Netherlands, (+31)20 566 5674. E-mail: j.p.marchal@amc.uva.nl

Abstract

Objective:

With increasing numbers of children growing up with conditions that are associated with acquired brain injury, efficient neuropsychological screening for cognitive deficits is pivotal. Brief self-report measures concerning daily complaints can play an important role in such screening. We translated and adapted the pediatric perceived cognitive functioning (PedsPCF) self- and parent-report item bank to Dutch. This study presents (1) psychometric properties, (2) a new short form, and (3) normative data for the short form.

Methods:

A general population sample of children and parents was recruited. Dimensionality of the PedsPCF was assessed using confirmatory factor analyses and exploratory bifactor analyses. Item response theory (IRT) modeling was used to evaluate model fit of the PedsPCF, to identify differential item functioning (DIF), and to select items for the short form. To select short-form items, we also considered the neuropsychological content of items.

Results:

In 1441 families, a parent and/or child participated (response rate 66% at family level). Assessed psychometric properties were satisfactory and the predominantly unidimensional factor structure of the PedsPCF allowed for IRT modeling using the graded response model. One item showed meaningful DIF. For the short form, 10 items were selected.

Conclusions:

In this first study of the PedsPCF outside the United States, the psychometric properties of the translated PedsPCF were satisfactory and allowed for IRT modeling. Based on the IRT analyses and the content of items, we proposed a new 10-item short form. Further research should determine the relation of PedsPCF outcomes with neurocognitive measures and its ability to facilitate neuropsychological screening in clinical practice.

Type
Regular Research
Copyright
Copyright © INS. Published by Cambridge University Press, 2019. 

INTRODUCTION

Many pediatric conditions are associated with acquired brain injury, due to the condition, the treatment, or both. Neurocognitive deficits have, for instance, been described as a consequence of sickle cell disease, childhood cancer, and renal disease (Chen et al., 2018; de Ruiter et al., 2013; Hijmans et al., 2011). Increasing numbers of children grow up with such conditions, or survive intensive care admissions and surgical interventions, and are at risk for long-term neurocognitive consequences (Clancy et al., 2015). According to Hardy et al. (2017), neuropsychological service delivery for these groups should be prevention-based, focusing on early detection and timely intervention in at-risk groups. These authors suggest screening for neurocognitive deficits using both short performance-based tasks and brief symptom questionnaires (Hardy et al., 2017).
Although questionnaires concerning perceived cognitive deficits generally show limited relations with performance-based assessments (Coutinho et al., 2017; Vega-Fernandez et al., 2014), they provide an important complementary perspective on cognitive functioning in daily life, whereas objective measures mostly reflect performance under optimal conditions (Coutinho et al., 2017; Vega-Fernandez et al., 2014). Brief but precise, well-validated questionnaires concerning cognitive complaints are still sparse.

An instrument that has the potential to fill this gap is the pediatric perceived cognitive function (PedsPCF) item bank (Lai, Butt, et al., 2011). The PedsPCF was developed in the context of pediatric oncology, to assess perceived cognitive functioning as experienced by the child and/or parent (proxy-report). In the United States, the PedsPCF is used as a computerized adaptive test (CAT) or as a short-form questionnaire containing a selection of items. Short forms are currently available in English (Cella et al., 2016; Lai et al., 2017) and Spanish (Wong et al., 2015). The PedsPCF parent-report item bank showed satisfactory psychometric properties in a US general population sample (Lai, Zelko, et al., 2011). In terms of clinical utility, parent-report PedsPCF scores have been found to be associated with outcomes such as problem behavior and neurological conditions (Lai, Butt, et al., 2011; Lai, Wagner, et al., 2014; Lai, Zelko, et al., 2011, 2014), computerized neuropsychological measures (Lai, Zelko, et al., 2014), and cerebral damage in children with brain tumors (Lai et al., 2017). Furthermore, the PedsPCF could to some extent discriminate between children with and without attentional problems as determined by the Child Behavior Checklist (Lai, Butt, et al., 2011).
The extent to which the PedsPCF can predict neurocognitive deficits as determined by performance-based measures needs to be further established in future research.

Although the PedsPCF appears to be a promising instrument, various issues need to be addressed. First, while the authors found evidence that the PedsPCF has a sufficiently unidimensional bifactor structure to be used as a CAT (Lai, Zelko, et al., 2011), they also identified three subdomains (memory retrieval, attention/concentration, and working memory). The extent to which these subdomains can be replicated and have added value in the interpretation of PedsPCF scores is unclear. Furthermore, publications to date have focused on the parent-report PedsPCF (Lai, Zelko, et al., 2011). Given the inherent value of self-report and the current use of the self-report item bank in the United States (Cella et al., 2016), the psychometric properties of the self-report item bank should also be studied. Finally, the short forms published to date selected items based on their statistical merit, disregarding the neuropsychological content of items. To be able to identify the nature of cognitive complaints, a short form should cover all types of complaints in terms of content (Edelen & Reeve, 2007).

To address these issues and to be able to implement the PedsPCF in Dutch pediatric practice, we translated and adapted the PedsPCF parent- and self-report item bank. Our aims are (1) to evaluate psychometric properties of the translated parent- and self-report PedsPCF item bank in the general population, (2) to propose a PedsPCF short form, and (3) to present Dutch normative data for the PedsPCF parent and child versions.

METHOD

Participants and Procedures

Participants were approached through market research agency Kantar TNS in January 2016. This large research agency maintains a nationwide panel of 200,000 potential respondents, from which a representative Dutch sample of 2192 children aged 7–18 years was selected. For sampling and weighting procedures, Kantar TNS uses “DIANA” software (www.niposoftware.com). Stratification was based on Dutch key demographics (age of the child and the parent, gender of the child, geographical region, social class based on educational level and income, and household size), with a stratified random sampling technique aiming to minimize sample variance and increase precision. After selecting the child, the invited parent/caregiver was selected randomly (by gender). The envisioned number of respondents was 1200 children and 1 of their parents/caregivers, 100 in each of 12 one-year age strata.

Invitation emails that specified the invited parent and child by name and age were sent to the parent/caregiver by Kantar TNS. The email described the purpose of the study and linked to the study website (www.hetklikt.nu) where the questionnaires could be completed after giving consent. Participants were compensated through points in the Kantar TNS reward system that could be exchanged for gift cards. If both parent and child completed the questionnaires, they received approximately €9 worth of points per participating family.

The medical ethical review board was informed and concluded that no formal evaluation was required for this study. Data were obtained in compliance with the principles outlined in the Helsinki Declaration.

Instruments

Background variables

Sociodemographic characteristics of invited parents/caregivers and children were provided by Kantar TNS. Participating parents completed an additional short questionnaire on specific sociodemographics, for example, education and the child’s health status. The latter asked about the presence (yes or no) of various health conditions, including autism spectrum disorder (ASD), attention deficit hyperactivity disorder (ADHD), intellectual disability, and learning problems. If parents answered yes for any condition, they were asked to specify the nature of each condition in an open-ended question (“please specify”). Based on parents’ comments on these open-ended questions, JPM and KJO independently judged whether a reported condition was indeed likely to be present (yes/no/unclear). For instance, if parents noted that the child had not been diagnosed, or if the answer revealed that parents misunderstood the question, it was marked as “no” or “unclear.” JPM and KJO discussed and harmonized cases that they had judged differently.

PedsPCF item bank

The PedsPCF item bank consists of 43 items and was developed in the United States for 7- to 18-year-olds and their parents/caregivers in a pediatric oncology context. Although the age range of the PedsPCF is 7–18 years, self-report items are currently used only for children aged 8 years and older (Cella et al., 2016). Originally, items were formulated based on existing instruments and interviews with parents, patients, and clinicians (Lai, Zelko, et al., 2011). The items are identical in content across the parent- and self-report versions of the item bank and ask about overt behavior concerning various aspects of cognitive functioning. In terms of item content, Lai, Zelko, et al. (2011) describe that the PedsPCF taps the content areas of “attention, concentration, executive function, language, spatial orientation, visuospatial ability, memory, personal orientation, and processing speed.” The first 25 items ask about the frequency of cognitive problems over the past 4 weeks; the final 18 items ask about the intensity of cognitive problems. Items are rated on a 5-point Likert scale (frequency: 1 = all of the time, 2 = most of the time, 3 = some of the time, 4 = a little of the time, and 5 = none of the time; intensity: 1 = very much, 2 = quite a bit, 3 = somewhat, 4 = a little bit, and 5 = not at all). Of the 43 items, 30 have unique item stems: intensity items 31–43 are identical in wording to frequency items 1–13. In our factor and item response theory (IRT) analyses, we therefore included only one of each identically worded pair, that is, the first 30 items: items 1–25 (frequency) and items 26–30 (intensity).
This approach avoided local dependence in our analyses and conforms with recent descriptions of the use of the PedsPCF in the United States (Cella et al., 2016). A previous factor analysis in a US general population sample (including all 43 items) resulted in a bifactor model with a dominant common factor (“cognitive functioning”) and three specific factors (“memory retrieval,” “attention/concentration,” and “working memory”) (Lai, Zelko, et al., 2011). The authors interpreted this bifactor model as confirming sufficient unidimensionality of the PedsPCF, allowing for the development of a CAT and short forms. The instrument is integrated in the Patient-Reported Outcomes Measurement Information System (PROMIS) initiative. Instructions and items were translated from English to Dutch by FACITtrans, a company specialized in translating patient-reported outcome measures. PROMIS guidelines for translation and cultural adaptation were followed, including cognitive debriefing (Correia, 2013).

Statistical Analysis

For statistical analyses, IBM SPSS Statistics version 24 (IBM Corp., 2016), Mplus version 7.11 (Muthén & Muthén, 1998–2015), and R (R Core Team, 2016) were used.

Respondents

To assess possible selection bias, basic sociodemographics were compared between responding families (parent and/or child responding) and nonresponding families (no completed questionnaires). Unpaired t tests were used for continuous variables (age of the invited parent/caregiver), and chi-square tests for categorical variables (all other variables). For continuous variables, effect sizes (d) with pooled standard deviations were calculated according to Cohen (1988); effect sizes for categorical variables were expressed as Cramér’s V. For both effect sizes, 0.2–0.5 was considered small, 0.5–0.8 medium, and ≥0.8 large (Cohen, 1988).
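As an illustrative sketch only (the analyses themselves were run in SPSS, Mplus, and R), the two effect sizes can be computed from raw data as follows:

```python
import numpy as np
from math import sqrt

def cohens_d(x, y):
    """Cohen's d with pooled standard deviation."""
    nx, ny = len(x), len(y)
    vx, vy = np.var(x, ddof=1), np.var(y, ddof=1)
    pooled_sd = sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd

def cramers_v(table):
    """Cramer's V from a 2D contingency table of counts."""
    table = np.asarray(table, dtype=float)
    n = table.sum()
    row = table.sum(axis=1, keepdims=True)
    col = table.sum(axis=0, keepdims=True)
    expected = row @ col / n                       # expected counts under independence
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(table.shape) - 1                       # min(rows, cols) - 1
    return sqrt(chi2 / (n * k))
```

For example, `cramers_v([[10, 0], [0, 10]])` returns 1.0 (perfect association), and a table with identical rows returns 0.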

Psychometric properties

Psychometric evaluation was informed by De Vet et al. (2011) and by the approach of Reeve et al. (2007), comprising classical test theory (CTT) and IRT methods. In line with the existing literature on the PedsPCF, the parent-report item bank was treated as the primary source for decision making throughout our analyses, yet we verified all outcomes in the self-report version.

Missing item scores

Missing answers among the PedsPCF items were limited to 6 (of 30) items concerning school, which had the answer option “Not applicable (I have not been in school at all in the last 4 weeks).” For 1.8% of parents and 1.9% of children, one or more answers were missing, resulting in, respectively, 0.2% and 0.3% of all data being missing. For the descriptive analyses, the analyses of assumptions for IRT modeling (including the factor analyses), and the IRT analyses, we used pairwise deletion (Hardouin et al., 2011). That is, we used all available data unless forced otherwise by the statistical software: for the analysis of monotonicity, only complete cases were allowed (Van der Ark, 2017), while for the evaluation of model and item fit, multiple imputation was forced by the software (Chalmers et al., 2017).

Descriptive analyses

We analyzed response frequencies for all items, and the mean, standard deviation, range, skewness, and kurtosis of the total scores. We also analyzed internal consistency (Guttman’s lambda-2), with ≥0.80 regarded as good and ≥0.90 as excellent (Cohen, 1988). Furthermore, we assessed item-scale correlations and interitem polychoric correlations, with significant correlations of 0.1, 0.3, 0.5, and 0.7 considered small, medium, large, and very large, respectively (Cohen, 1988).
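Guttman’s lambda-2 is a lower bound to reliability computed from the item covariance matrix. A minimal sketch (for illustration; not the software used in the study):

```python
import numpy as np

def guttman_lambda2(items):
    """Guttman's lambda-2 reliability; items is an (observations x items) array."""
    c = np.cov(items, rowvar=False)       # item covariance matrix
    n = c.shape[0]                        # number of items
    total_var = c.sum()                   # variance of the sum score
    off = c - np.diag(np.diag(c))         # off-diagonal covariances only
    # lambda-2 = (sum of off-diagonal covariances
    #             + sqrt(n/(n-1) * sum of squared off-diagonal covariances)) / total variance
    return (off.sum() + np.sqrt(n / (n - 1) * (off ** 2).sum())) / total_var
```

Perfectly parallel items yield lambda-2 = 1; like coefficient alpha, it approaches the true reliability from below.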

Assumptions for IRT modeling

We analyzed the dimensionality of the translated PedsPCF to assess the assumptions of unidimensionality and local independence for IRT modeling (Reeve et al., 2007). These assumptions imply that a single latent trait underlies item responses and that item scores are locally independent given the latent trait. A secondary objective of the dimensionality analyses was to determine whether the bifactor structure described by Lai, Zelko, et al. (2011) could be replicated.

To analyze unidimensionality and local independence, we conducted factor analyses for ordered categorical data in Mplus version 7.11 (Muthén & Muthén, 1998–2015), using a robust weighted least squares estimator. First, we assessed the fit of a one-factor model and examined the residual correlations to identify possible sources of multidimensionality and local dependencies. Item pairs with residual correlations over |0.20| were inspected to provide plausible explanations for the local dependencies (e.g., similar wording or content). Positive residual correlations indicate higher observed correlations between items than expected given the unidimensional factor model, while negative residual correlations indicate that the observed correlation is lower than expected. Indicators of good model fit were root mean square error of approximation (RMSEA) <0.06, and comparative fit index (CFI) and Tucker–Lewis index (TLI) >0.95; indicators of poor fit were RMSEA >0.08, and CFI and TLI <0.90 (Hu & Bentler, 1999; MacCallum et al., 1996). When the unidimensional model provided unsatisfactory fit to the data, the strength of the primary dimension was determined by inspecting the eigenvalues of the polychoric correlation matrix and by exploratory bifactor analysis (Jennrich & Bentler, 2011) using a bifactor geomin rotation. In bifactor analyses, data are modeled by positing a common factor that loads on all items and one or more specific factors that may load on specific groups of items.
Support for sufficient unidimensionality is found when the ratio between the second and the first eigenvalue is 1/4 or smaller, and when the bifactor analysis shows that the factor loadings on the common factor are both substantial (i.e., >0.30) and considerably larger than the loadings on specific factors (Reeve et al., 2007).
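The eigenvalue-ratio criterion above is easy to illustrate. The sketch below uses a plain correlation matrix in place of the polychoric matrix estimated in the study (an assumption made only for simplicity):

```python
import numpy as np

def eigenvalue_ratio(corr):
    """Ratio of the second to the first eigenvalue of a correlation matrix.
    Values <= 1/4 support a dominant first (common) dimension."""
    eig = np.sort(np.linalg.eigvalsh(corr))[::-1]  # eigenvalues, descending
    return eig[1] / eig[0]
```

For a 5-item scale with all interitem correlations equal to 0.5 (a textbook one-factor structure), the first eigenvalue is 3.0 and the remaining four are 0.5, giving a ratio of 1/6, which passes the 1/4 criterion.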

Even though one of our goals was to determine whether the bifactor model described by Lai, Zelko, et al. (2011) could be confirmed, we used exploratory rather than confirmatory bifactor analyses, since a preliminary confirmatory factor analysis revealed that the previously described bifactor model provided poor fit to our data. Bifactor models with one, two, and three specific factors were evaluated.

After evaluating unidimensionality and local independence, the remaining assumption for IRT modeling was monotonicity of items (Reeve et al., 2007), which was assessed using the Mokken package for R (Van der Ark, 2017). A violation of monotonicity implies that the probability of producing a higher item score (i.e., one indicative of a higher latent trait value) is not an increasing function of the latent trait value.

IRT analyses

Once the assumptions for IRT modeling were sufficiently met, we fitted a graded response model (GRM) using the “ltm” package for R (Rizopoulos, 2013; Samejima, 1997). Before doing so, we determined whether all answer categories were sufficiently “filled” (i.e., selected by respondents) to provide reliable parameter estimates for all items (Edelen & Reeve, 2007). Model fit and item fit in the GRM were determined using the “mirt” package for R (Chalmers et al., 2017). We used the M2* statistic as an overall goodness-of-fit index and the standardized root mean square residual correlation (SRMSR) as a measure of the magnitude of misfit (Cai & Hansen, 2013). SRMSR values of 0.027 indicate close fit, and values of 0.05 indicate adequate fit (Maydeu-Olivares & Joe, 2014). Item fit was evaluated using S-X² statistics (Kang & Chen, 2010), applying a Bonferroni correction for multiple testing. To assess whether locally dependent item pairs (as identified by residual correlations in the factor analyses) biased the GRM parameters, we conducted sensitivity analyses, that is, removing one of the items in each such pair and assessing whether this resulted in substantial changes in parameter estimates (Reeve et al., 2007). Item information curves (IICs) were produced to evaluate the amount of test information conditional on θ (i.e., the latent trait, here the level of perceived cognitive functioning). In addition, item characteristic curves (ICCs) were produced for each item, to evaluate the probability of choosing each answer option conditional on θ. Finally, test information curves and the standard errors as a function of θ were plotted.
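In Samejima’s GRM, each category probability is the difference between two adjacent cumulative logistic curves, each defined by the item’s discrimination parameter and one location parameter. A minimal sketch with hypothetical parameter values (not our estimates):

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Category response probabilities under Samejima's graded response model.
    theta: latent trait value; a: discrimination; b: increasing location parameters."""
    b = np.asarray(b, dtype=float)
    # cumulative probabilities P(X >= k) for each threshold k = 1..m
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    # bracket with P(X >= 0) = 1 and P(X >= m+1) = 0, then take differences
    bounds = np.concatenate(([1.0], p_star, [0.0]))
    return bounds[:-1] - bounds[1:]
```

With a 4-point item (three location parameters, as in our collapsed response scales), `grm_category_probs(0.0, 1.5, [-1.0, 0.0, 1.0])` returns four probabilities that sum to 1 and are symmetric around θ = 0.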

Differential item functioning (DIF) was analyzed for each item across the child’s gender, age group (primary education [7–11 years] vs. secondary education [12–18 years]), and (for parent-reported items only) the parent’s gender and educational level (high vs. middle and low; low vs. middle and high). Significant DIF indicates that an item functions differently across subgroups (Reeve et al., 2007). We analyzed DIF using the lordif package for R (Choi et al., 2016). We used an α level of 0.01, while a change in McFadden’s pseudo-R² of ≥0.02 was seen as an indication of meaningful DIF (Wong et al., 2015).
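lordif detects DIF by comparing nested ordinal logistic regressions of the item response on the trait, with and without group terms. As a rough illustration only (a binary-response analogue, not the lordif algorithm), the pseudo-R² change for uniform DIF can be sketched as follows, assuming scipy is available:

```python
import numpy as np
from scipy.optimize import minimize

def _neg_loglik(beta, X, y):
    """Negative log-likelihood of a binary logistic regression."""
    z = X @ beta
    return np.sum(np.logaddexp(0.0, z) - y * z)

def _max_loglik(X, y):
    """Maximized log-likelihood, fitted numerically."""
    res = minimize(_neg_loglik, np.zeros(X.shape[1]), args=(X, y), method="BFGS")
    return -res.fun

def mcfadden_r2_change(theta, group, y):
    """Change in McFadden's pseudo-R^2 when a group term is added to a
    trait-only logistic model of a binary item response (uniform DIF)."""
    n = len(y)
    ll0 = _max_loglik(np.ones((n, 1)), y)                           # intercept only
    ll_base = _max_loglik(np.column_stack([np.ones(n), theta]), y)  # trait only
    ll_dif = _max_loglik(np.column_stack([np.ones(n), theta, group]), y)
    return (1.0 - ll_dif / ll0) - (1.0 - ll_base / ll0)
```

A change ≥0.02, as in the criterion above, flags the group term as explaining a meaningful share of the response beyond the trait.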

Item selection for the short form

We aimed for the short form (1) to provide optimal precision in the problematic score range, (2) to include the best items, and (3) to represent all content areas of the full item bank. Concerning the latter, no overview is available of the items with the corresponding content areas described by Lai, Zelko, et al. (2011). Therefore, an expert panel (JPM, MdV, AR, KJO, TBS) independently indicated for each item which content areas it covered. Core items for a content area are items that were unanimously indicated to represent that content area (see Table 1).

Table 1. Items of the Dutch PedsPCF with corresponding content areas

Note: Item numbers printed bold are part of the 10-item short form. Presented items are the parent-report items. Self-report items are formulated as, for example, “I forget things easily.” Content areas were determined by an expert panel, based on the content of the items.

Abbreviations: Attn, attention; Conc, concentration; EF, executive functioning; Lang, language; Mem, memory; PO, personal orientation; PS, processing speed; SO, spatial orientation; VA, visuospatial ability.

Precise criteria for item selection are outlined in the supplementary material. These criteria were based on criteria defined by Edelen and Reeve (2007), Reeve et al. (2007), and De Vet et al. (2011). After selecting items for the short form, the properties of the short form were analyzed by means of CTT analyses (descriptives, internal consistency, item-total correlations, and Spearman’s rank correlation of the short-form sum score with the full-form sum score) and IRT analyses (GRM fit, item fit, test information curve, and standard error conditional on θ).

Normative data

The PedsPCF parent- and self-reports are scored using the GRM as estimated in the parent and child data, respectively, resulting in θ estimates (mean 0, SD 1) for each participant. To facilitate interpretation, the θ estimates were transformed to T scores (mean 50, SD 10) for the subsequent analyses.

To determine agreement, we analyzed the intraclass correlations of the estimated T scores for the parent- and self-reports (two-way random-effects model, absolute agreement) and for the full and short forms (two-way mixed-effects model, absolute agreement). Intraclass correlations above 0.5, 0.75, and 0.9 were interpreted as moderate, good, and excellent, respectively (Koo & Li, 2016).
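For the single-rater, absolute-agreement intraclass correlation (McGraw and Wong’s ICC(A,1)), a sketch from the two-way ANOVA mean squares (illustrative only; the study’s ICCs were computed in standard statistical software):

```python
import numpy as np

def icc_a1(ratings):
    """ICC(A,1): two-way model, absolute agreement, single rater.
    ratings: (subjects x raters) array with no missing values."""
    x = np.asarray(ratings, dtype=float)
    n, k = x.shape
    grand = x.mean()
    row = x.mean(axis=1)                                     # subject means
    col = x.mean(axis=0)                                     # rater means
    msr = k * np.sum((row - grand) ** 2) / (n - 1)           # between-subjects MS
    msc = n * np.sum((col - grand) ** 2) / (k - 1)           # between-raters MS
    sse = np.sum((x - row[:, None] - col[None, :] + grand) ** 2)
    mse = sse / ((n - 1) * (k - 1))                          # residual MS
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```

Because the model penalizes rater mean differences, a constant offset between two raters lowers ICC(A,1) below 1 even when their rank orderings agree perfectly, which is why absolute agreement (rather than consistency) was specified.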

To obtain T scores based on the individual θ estimates, computer access is needed. Since this is not always feasible in clinical practice, we will provide conversion tables for the short form, showing the sum score of items with the corresponding mean estimated T score and SE.
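The idea behind such a conversion table can be sketched in a few lines: group the model-based T scores of the norm sample by short-form sum score and tabulate the group means (the published table also reports an SE per sum score, which this hypothetical helper omits for brevity):

```python
from collections import defaultdict
from statistics import fmean

def conversion_table(sum_scores, t_scores):
    """Map each observed short-form sum score to the mean estimated T score."""
    groups = defaultdict(list)
    for s, t in zip(sum_scores, t_scores):
        groups[s].append(t)
    return {s: fmean(ts) for s, ts in sorted(groups.items())}
```

A clinician can then look up a paper-and-pencil sum score in the table and read off the corresponding norm-referenced T score without computer scoring.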

RESULTS

Respondents

In 1441 of the 2192 invited families, a parent and/or a child completed the PedsPCF (response rate 66% of families), resulting in completed PedsPCFs by 1434 parents of 7- to 18-year-olds (response rate 65%) and 1181 children aged 8–18 years (response rate 58%) (see Table 2).

Table 2. Sociodemographic characteristics of respondents

Abbreviations: ADHD, attention deficit hyperactivity disorder; ASD, autism spectrum disorder.

a Highest educational level completed. High, higher vocational education, university; intermediate, middle vocational education, higher secondary education, pre-university education; low, primary education, lower vocational education, lower or middle general secondary education.

In 1174 families, both the child and a parent/caregiver responded. In responding families, the invited parent/caregiver was significantly more frequently female than in the nonresponding families (61.4% vs. 54.1% female, small effect size) (see supplementary Table 2).

Psychometric Properties

Descriptive analyses

Analysis of the response frequency revealed that the most “severe” answer option (i.e., option 1) was rarely endorsed. Both parent- and self-report total scores showed significant negative skewness and positive kurtosis, with Z scores exceeding |1.96| (Field, Reference Field2013). Internal consistency of the item bank was very high: Guttman’s lambda 2 was 0.96 in parent-report and 0.95 in self-report. Item-scale correlation ranged from 0.41 to 0.79 (mean 0.64) in the parent-report item bank, and 0.38 to 0.72 (mean 0.59) in the self-report item bank. Polychoric interitem correlation coefficients ranged from 0.25 to 0.84 (mean 0.50) in parent-report and 0.21 to 0.78 (mean 0.45) in self-report. For parent-report items 7 and 8, and items 8 and 11, interitem correlation exceeded 0.80, suggesting item redundancy. In self-report, both of these item pairs showed correlations of 0.78.

Assumptions for IRT modeling

A comprehensive description of the results of the dimensionality analysis of the PedsPCF item bank can be found in supplementary material part A. In brief, a one-factor model provided poor fit for both the parent-report and child-report item banks, even when correcting for local dependencies between item scores. Hence, local dependencies alone could not explain the poor fit of the unidimensional models. We therefore conducted further analyses (i.e., eigenvalue analysis and bifactor analyses) to assess whether the primary dimension in the PedsPCF item-score data was strong enough to warrant unidimensional IRT modeling. Eigenvalue analysis showed that 47% (parent data) and 53% (child data) of the total variance in item scores was attributable to the first factor. The ratio between the second and the first eigenvalue was approximately 1/7 in both the parent- and self-report data. Exploratory bifactor models with one, two, and three specific factors showed that factor loadings on the common factor were substantial (>0.30) and considerably larger than loadings on specific factors. Based on these results, we concluded that the PedsPCF satisfied the criteria for sufficient unidimensionality outlined in the Method section, allowing unidimensional IRT analyses (Reeve et al., 2007). Furthermore, the bifactor models provided no support for the “memory retrieval” and “attention/concentration” subscales described by Lai, Zelko, et al. (2011), and little support for a “working memory” subscale.

Analysis of monotonicity revealed that parent-report item 20 significantly violated this assumption. Although visual inspection showed that the violation was relatively minor, item 20 was excluded from the parent-report GRM. In the self-report data, no significant violation of monotonicity was found.

IRT analyses

For the IRT analyses, response categories 1 and 2 were collapsed, since these categories were rarely selected in our sample. This was done to avoid unreliable parameter estimates (Edelen & Reeve, 2007). The GRM was therefore based on 4-point scales, resulting in three rather than four location parameters for each item (supplementary Tables 3–6). Item 20 was not included in the parent-report IRT model, due to its violation of monotonicity.
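The collapsing step is a simple recode. A minimal sketch (Python with NumPy; assumes 5-point item scores coded 1 to 5, as in the PedsPCF):

```python
import numpy as np

def collapse_bottom_categories(scores: np.ndarray) -> np.ndarray:
    """Merge the rarely used categories 1 and 2 of a 5-point scale into one,
    yielding a 4-point scale (1-4) for more stable IRT parameter estimates."""
    out = np.where(scores <= 2, 2, scores)  # merge category 1 into category 2
    return out - 1                          # relabel 2..5 as 1..4

x = np.array([1, 2, 3, 4, 5])
print(collapse_bottom_categories(x))  # -> [1 1 2 3 4]
```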

Consistent with the results of the factor analyses, the unidimensional GRM showed poor fit in both the parent-report (M2*[319] = 2878.6, p < .001; SRMSR = 0.067) and the self-report (M2*[345] = 2102.0, p < .001; SRMSR = 0.065) data. However, sensitivity analyses suggested that the locally dependent item pairs did not cause substantial item parameter bias: removing items 8, 16, 18, and 26 (i.e., in four separate GRMs) resulted in changes of at most 0.15 in both the discrimination and the location parameters (data not shown).

Location parameters indicated that most answer options were relevant for children with below-average perceived cognitive functioning, that is, θ < 0. Self-report item 7 (S-X2 = 161.6, p < .001) showed significant misfit in the estimated GRM. Item 26 showed uniform DIF for gender of the child in both parent-report (McFadden’s pseudo-R2 change 0.03) and self-report (McFadden’s pseudo-R2 change 0.04). The direction of the DIF showed that among boys this specific cognitive complaint is reported at lower levels of ability than among girls. No item showed significant DIF for gender of the parent, age group of the child, or educational level of the parent. The test information curves and standard errors conditional on θ for the full forms are presented in Figures 1 and 2.
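To clarify how the location parameters are read: under the graded response model, each item has a discrimination a and ordered location parameters b_k, and the probability of each response category is the difference between adjacent cumulative logistic curves. The sketch below (Python with NumPy; the item parameters are hypothetical, chosen only to mimic thresholds lying below θ = 0) computes these category probabilities.

```python
import numpy as np

def grm_category_probs(theta: float, a: float, b) -> np.ndarray:
    """Category probabilities of the graded response model (logistic form).
    b holds the ordered location parameters; len(b) + 1 categories result."""
    b = np.asarray(b, dtype=float)
    cum = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= category k)
    upper = np.concatenate(([1.0], cum))
    lower = np.concatenate((cum, [0.0]))
    return upper - lower                            # adjacent differences

# Hypothetical item with thresholds below average (theta = 0), so that the
# highest category is by far the most likely response for an average child.
p = grm_category_probs(0.0, 2.0, [-2.5, -1.5, -0.5])
```

With all thresholds below zero, most of the probability mass at θ = 0 falls in the top category, which is exactly why such items discriminate mainly among children with below-average perceived cognitive functioning.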

Fig. 1. Test information curve (solid line) and standard errors (dashed line) of the PedsPCF parent-report full form.

Fig. 2. Test information curve (solid line) and standard errors (dashed line) of the PedsPCF self-report full form.

Short Form

In supplementary material part B, the methods and results of item selection for the short form are outlined. The number of items to be selected was not predefined; instead, we required that the short form represent all content areas of cognitive functioning and contain the best performing items according to the outlined criteria. The final result was a 10-item short form, consisting of items 10, 12, 14, 15, 17, 21, 22, 23, 24, and 30 (see Table 1). The GRM for the short form showed adequate fit for both the parent-report (M2*[15] = 28.0, p = .022; SRMSR = 0.039) and the self-report (M2*[15] = 33.9, p = .003; SRMSR = 0.040) version. None of the items showed significant misfit in the GRM of the short form. Test information curves and standard errors conditional on θ for the short forms are presented in Figures 3 and 4. For the short form, Guttman’s lambda 2 was 0.89 for parent-report and 0.86 for self-report. Polychoric interitem correlations ranged from 0.35 to 0.73 (the latter concerning items 14 and 15; overall mean correlation 0.51) in parent-report and from 0.33 to 0.61 (mean 0.46) in self-report. Item-scale correlations ranged from 0.48 to 0.76 (mean 0.61) in the parent-report and from 0.47 to 0.68 (mean 0.57) in the self-report short form. Correlations between short-form and full-form sum scores were very high in both parent-report and self-report (r = 0.94, p < .001).

Fig. 3. Test information curve (solid line) and standard errors (dashed line) of the PedsPCF parent-report short form.

Fig. 4. Test information curve (solid line) and standard errors (dashed line) of the PedsPCF self-report short form.

Normative Data

There was a moderate intraclass correlation between parent- and self-report T scores in the full form (intraclass correlation coefficient [ICC] = 0.73 [95% CI 0.70–0.75], p < .001, n = 1174) and in the short form (ICC = 0.65 [95% CI 0.62–0.68], p < .001, n = 1174). There was an excellent intraclass correlation between full- and short-form T scores in both the parent-report (ICC = 0.93 [95% CI 0.93–0.94], p < .001, n = 1174) and the self-report (ICC = 0.93 [95% CI 0.92–0.94], p < .001, n = 1174) versions.
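Intraclass correlations of this kind are computed from two-way ANOVA mean squares. As an illustrative sketch only (the exact ICC variant used by the study is not restated here; the code assumes the two-way mixed, single-measures, consistency form, ICC(3,1)):

```python
import numpy as np

def icc_consistency(x: np.ndarray, y: np.ndarray) -> float:
    """Two-way mixed, single-measures, consistency ICC, i.e. ICC(3,1).
    x, y: paired ratings (e.g., parent- and self-report T scores)."""
    data = np.column_stack((x, y)).astype(float)
    n, k = data.shape
    grand = data.mean()
    # Between-subjects mean square
    ms_rows = k * ((data.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    # Residual mean square of the two-way layout (rater effect removed)
    resid = (data - data.mean(axis=1, keepdims=True)
             - data.mean(axis=0, keepdims=True) + grand)
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

Because the rater (column) effect is removed, a constant offset between the two reporters does not lower this consistency ICC; absolute-agreement variants would penalize it.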

Conversion tables are presented for the parent- and self-report short forms (see Table 3). With item scores ranging from 1 to 5, sum scores of the short form range from 10 to 50, with higher scores indicating better perceived cognitive functioning.
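In practice, paper-and-pencil scoring with such a table reduces to a lookup from sum score to the IRT-estimated mean T score and SE. A minimal sketch (Python; the three table entries below are hypothetical placeholders, not the published Table 3 norms):

```python
# Hypothetical excerpt of a Table-3-style conversion table.
# The (T, SE) values are placeholders and NOT the published PedsPCF norms.
CONVERSION = {10: (20.0, 4.5), 30: (42.0, 3.1), 50: (68.0, 5.0)}

def sum_to_t(sum_score: int) -> tuple:
    """Look up the mean T score and SE for a short-form sum score (10-50)."""
    if sum_score not in CONVERSION:
        raise ValueError(f"no conversion entry for sum score {sum_score}")
    return CONVERSION[sum_score]
```

A full implementation would hold one (T, SE) pair for every sum score from 10 to 50, separately for parent- and self-report.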

Table 3. PedsPCF short-form sum scores with corresponding mean T scores and standard errors

Note: The presented T scores and SE are the mean scores estimated using the IRT model. Parent-report concerned the age range 7–18 years, self-report the age range 8–18 years.

DISCUSSION

This study describes the psychometric properties of the parent-report and self-report PedsPCF item bank in a large, representative Dutch general population sample. As such, it is the first study of the PedsPCF outside the United States. The assessed psychometric properties were satisfactory, and factor analyses revealed that the Dutch PedsPCF item bank sufficiently met the assumptions for unidimensional IRT modeling. Based on the IRT models, we propose a 10-item PedsPCF short form that contains the best functioning items and represents all content areas of the PedsPCF item bank. Normative data for the parent- and self-report versions of the PedsPCF full and short forms are presented.

Our findings are generally in line with findings concerning the PedsPCF in US populations, and imply that the PedsPCF can be adapted for international use. There were, however, also notable differences from previous findings. Unlike Lai, Zelko, et al. (2011), who found no DIF, we found significant DIF for item 26 (concerning the ability of the child to add or subtract numbers in his/her head). The pattern of scoring indicated that parents answered this item more positively for boys than for girls, given equal perceived cognitive functioning. This item may reflect a relative cognitive strength of boys, which is preserved when overall (perceived) cognitive ability is lower. Alternatively, this DIF may reflect the “math–gender stereotype,” with math skills being overestimated in boys and/or underestimated in girls (Passolunghi et al., 2014). Another difference from previous findings is the scant statistical support for the three subdomains “memory retrieval,” “attention/concentration,” and “working memory” described by Lai, Zelko, et al. (2011). Although these domains were used in the first publications on the PedsPCF (Lai, Butt, et al., 2011; Lai, Zelko, et al., 2011), they are not mentioned in more recent publications on the PedsPCF (Cella et al., 2016; Lai et al., 2017; Lai, Zelko, et al., 2014; Wong et al., 2015). Thus, our findings appear to be in line with the current application of the PedsPCF.

In some respects, our approach differed from that of previous studies. First, our models included only one of each set of identically worded item stems. This was done to preclude local dependence in our statistical analyses. Another difference in approach is that we fully included the self-report version in our analyses, whereas previous studies focused predominantly on the parent-report version of the PedsPCF. Our findings indicated that the psychometric properties of the parent- and self-report item banks are very similar. This does not imply, however, that the outcomes of parent- and self-report are equal, as shown by the merely moderate agreement we found between parent- and self-report. The potentially complementary quality of both perspectives, and what predicts agreement or disagreement between parent and child, are of interest for future research.

The current findings indicate that, like the original PedsPCF, the translated PedsPCF can be used as a CAT, exploiting the possibilities of IRT modeling to assess with increased precision and efficiency (Terwee et al., 2014). When computer-based assessment is not possible or feasible, short forms can be used, presenting a selection of items from the item bank. Previous (English and Spanish) short forms of the PedsPCF contained the items that were most frequently selected in CAT simulations (Lai et al., 2017; Lai, Wagner, et al., 2014; Wong et al., 2015). Since we deemed it important for the Dutch short form to cover all cognitive functions tapped by the PedsPCF item bank, we carefully selected its items. Three of the 10 selected items were previously included in one or more of the published short forms (Lai et al., 2017; Lai, Wagner, et al., 2014; Wong et al., 2015).

To obtain normed results for individuals, IRT-based scoring using a computer is recommended, since it makes full use of the potential of the GRM. However, a web-based scoring service is still under development in the Netherlands. To allow for paper-and-pencil scoring, we present a conversion table, which gives an approximation of the T score and SE based on the mean for each possible sum score. As such, it does not account for differences in item parameters, which would allow a more precise estimate for each individual scoring pattern.

Arguably, short forms should be preferred over full-form administration, given their brevity and very high correlations with the full form. Moreover, high interitem correlations suggested item redundancy in the full form. Nonetheless, we cannot exclude that the full form gives a more comprehensive view of cognitive complaints and has added clinical value in discerning the (distinct) nature of underlying neurocognitive deficits.

The PedsPCF assesses perceived cognitive functioning, using items that are formulated as cognitive complaints. As noted previously, such “symptom questionnaires” generally do not show strong relations with neurocognitive performance tasks (Coutinho et al., 2017; Vega-Fernandez et al., 2014). Symptom questionnaires do, however, add ecological validity and information on subjective complaints. In various US studies, the PedsPCF was found to be associated with neuropsychological measures, neurological conditions, and cerebral damage at the group level (Lai et al., 2017; Lai, Butt, et al., 2011; Lai, Wagner, et al., 2014). An important next step is to establish the extent to which the PedsPCF can assist in predicting neurocognitive deficits at the individual level, that is, deficits as determined by performance-based measures.

This study was limited in some respects. First, IRT modeling, and thus item selection for the short form and DIF analysis, was based on a general population sample. This approach is required to provide normative data, yet there are also advantages to IRT modeling in the population of intended use of the PedsPCF, that is, clinical populations. Ideally, our findings should be verified in clinical populations, although including sufficient participants for IRT modeling will be challenging. Another limitation concerns the representativeness of our sample. Although the invited families were carefully selected to be representative of the Dutch population on key demographics, our sample may not have been fully representative due to an overrepresentation of mothers among respondents. On the other hand, we cannot rule out that in clinical practice mothers will also be more likely than fathers to be the respondent, since mothers are generally more involved than fathers in healthcare appointments for their child (Mehta & Richards, 2002; Wakefield et al., 2011). Furthermore, the parent-reported rates of conditions such as ADHD (approximately 9%) and ASD (approximately 7%) are considerably higher than expected based on the literature (Baron-Cohen et al., 2009; Polanczyk et al., 2007). These rates may be overestimated because we relied on parents reporting whether or not a condition was present, rather than systematically checking symptoms as is common in prevalence studies (see, for instance, Polanczyk et al., 2007). Finally, questionnaires were completed at home, so we cannot rule out that parents aided their children in completing them. Particularly for children with intellectual disabilities, parents may have provided some assistance. This should be kept in mind when interpreting, for instance, the outcomes concerning agreement between parent and child.

To conclude, the studied internal psychometric properties of the translated PedsPCF were satisfactory and allowed for IRT modeling of both the parent-report and the self-report version. This implies that the PedsPCF can successfully be adapted for international use, that is, outside the United States. We proposed a short form that, unlike previous short forms, was based not only on the IRT parameter estimates but also on the neuropsychological content of the items. An important next step is to further establish the clinical utility of the PedsPCF as a screening instrument for neurocognitive deficits, for instance, by determining the association and predictive power of PedsPCF scores with respect to outcomes of neurocognitive performance-based tasks and questionnaires in clinical (pediatric) groups.

SUPPLEMENTARY MATERIAL

To view supplementary material for this article, please visit https://doi.org/10.1017/S1355617719000572.

ACKNOWLEDGEMENTS

We thank Dr. Tinka Bröring-Starre for participating in the expert panel; Michiel Luijten (MSc) and Dr. Caroline Terwee (Dutch-Flemish PROMIS group) for their valuable comments on the manuscript; and Dr. Jin-Shei Lai for her support and valuable comments at various stages of the project.

This project was supported by contributions from the Department of Pediatric Surgery and Intensive Care of the Erasmus Medical Center, Rotterdam; Pediatric Psychosocial Department of the Emma Children’s Hospital, Amsterdam UMC University Medical Centers, Amsterdam; and University of Amsterdam Research Priority Area Yield. Authors have no conflict of interest to declare.

Footnotes

Jan Pieter Marchal and Marieke de Vries are co-first authors.

REFERENCES

Baron-Cohen, S., Scott, F.J., Allison, C., Williams, J., Bolton, P., Matthews, F.E., & Brayne, C. (2009). Prevalence of autism-spectrum conditions: UK school-based population study. British Journal of Psychiatry, 194(6), 500–509.
Cai, L. & Hansen, M. (2013). Limited-information goodness-of-fit testing of hierarchical item factor models. British Journal of Mathematical and Statistical Psychology, 66(2), 245–276.
Cella, D., Schalet, B.D., Kallen, M.A., Lai, J.-S., Cook, K.F., Ruthson, J., & Choi, S.W. (2016). PROsetta Stone Analysis Report, Vol. 3—Neuro-QoL Pediatric Cognitive Function and Pediatric Perceived Cognitive Function. Retrieved from http://www.prosettastone.org/AnalysisReport.
Chalmers, P., Pritikin, J., Robitzsch, A., Zoltak, M., KwonHyun, K., Falk, C.F., & Meade, A. (2017, July 23). Package ‘mirt’. Retrieved from https://cran.r-project.org/web/packages/mirt/mirt.pdf.
Chen, K., Didsbury, M., van Zwieten, A., Howell, M., Kim, S., Tong, A., Howard, K., Nassar, N., Barton, B., Lah, S., Lorenzo, J., Strippoli, G., Palmer, S., Teixeira-Pinto, A., Mackie, F., McTaggart, S., Walker, A., Kara, T., Craig, J.C., & Wong, G. (2018). Neurocognitive and educational outcomes in children and adolescents with CKD: A systematic review and meta-analysis. Clinical Journal of the American Society of Nephrology, 13(3), 387–397.
Choi, S.W., Gibbons, L.E., & Crane, P.K. (2016, March 3). Package ‘lordif’. Retrieved from https://cran.r-project.org/web/packages/lordif/lordif.pdf.
Clancy, O., Edginton, T., Casarin, A., & Vizcaychipi, M.P. (2015). The psychological and neurocognitive consequences of critical illness: A pragmatic review of current evidence. Journal of the Intensive Care Society, 16(3), 226–233.
Cohen, J. (1988). Statistical Power Analysis for the Behavioral Sciences. New York, NY: Academic Press.
Correia, H. (2013). PROMIS Instrument Development and Validation Scientific Standards Version 2.0: Appendix 14. Retrieved from http://www.healthmeasures.net/images/PROMIS/PROMISStandards_Vers2.0_Final.pdf.
Coutinho, V., Camara-Costa, H., Kemlin, I., Billette de Villemeur, T., Rodriguez, D., & Dellatolas, G. (2017). The discrepancy between performance-based measures and questionnaires when assessing clinical outcomes and quality of life in pediatric patients with neurological disorders. Applied Neuropsychology: Child, 6(4), 255–261.
de Ruiter, M.A., van Mourik, R., Schouten-van Meeteren, A.Y., Grootenhuis, M.A., & Oosterlaan, J. (2013). Neurocognitive consequences of a paediatric brain tumour and its treatment: A meta-analysis. Developmental Medicine and Child Neurology, 55(5), 408–417.
De Vet, H.C., Terwee, C.B., Mokkink, L.B., & Knol, D.L. (2011). Measurement in Medicine: A Practical Guide. Cambridge, UK: Cambridge University Press.
Edelen, M.O. & Reeve, B.B. (2007). Applying item response theory (IRT) modeling to questionnaire development, evaluation, and refinement. Quality of Life Research, 16(Suppl. 1), 5–18.
Field, A. (2013). Discovering Statistics Using IBM SPSS Statistics. London, UK: Sage.
Hardouin, J.B., Conroy, R., & Sebille, V. (2011). Imputation by the mean score should be avoided when validating a Patient Reported Outcomes questionnaire by a Rasch model in presence of informative missing data. BMC Medical Research Methodology, 11, 105.
Hardy, K.K., Olson, K., Cox, S.M., Kennedy, T., & Walsh, K.S. (2017). Systematic review: A prevention-based model of neuropsychological assessment for children with medical illness. Journal of Pediatric Psychology, 42(8), 815–822.
Hijmans, C.T., Fijnvandraat, K., Grootenhuis, M.A., van Geloven, N., Heijboer, H., Peters, M., & Oosterlaan, J. (2011). Neurocognitive deficits in children with sickle cell disease: A comprehensive profile. Pediatric Blood & Cancer, 56(5), 783–788.
Hu, L.T. & Bentler, P.M. (1999). Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives. Structural Equation Modeling: A Multidisciplinary Journal, 6(1), 1–55.
Jennrich, R.I. & Bentler, P.M. (2011). Exploratory bi-factor analysis. Psychometrika, 76(4), 537–549.
Kang, T. & Chen, T.T. (2010). Performance of the generalized S-X2 item fit index for the graded response model. Asia Pacific Education Review, 12(1), 89–96.
Koo, T.K. & Li, M.Y. (2016). A guideline of selecting and reporting intraclass correlation coefficients for reliability research. Journal of Chiropractic Medicine, 15(2), 155–163.
Lai, J.S., Bregman, C., Zelko, F., Nowinski, C., Cella, D., Beaumont, J.J., & Goldman, S. (2017). Parent-reported cognitive function is associated with leukoencephalopathy in children with brain tumors. Quality of Life Research, 26(9), 2541–2550.
Lai, J.S., Butt, Z., Zelko, F., Cella, D., Krull, K.R., Kieran, M.W., & Goldman, S. (2011). Development of a parent-report cognitive function item bank using item response theory and exploration of its clinical utility in computerized adaptive testing. Journal of Pediatric Psychology, 36(7), 766–779.
Lai, J.S., Wagner, L.I., Jacobsen, P.B., & Cella, D. (2014). Self-reported cognitive concerns and abilities: Two sides of one coin? Psycho-Oncology, 23(10), 1133–1141.
Lai, J.S., Zelko, F., Butt, Z., Cella, D., Kieran, M.W., Krull, K.R., Magasi, S., & Goldman, S. (2011). Parent-perceived child cognitive function: Results from a sample drawn from the US general population. Child’s Nervous System, 27(2), 285–293.
Lai, J.S., Zelko, F., Krull, K.R., Cella, D., Nowinski, C., Manley, P.E., & Goldman, S. (2014). Parent-reported cognition of children with cancer and its potential clinical usefulness. Quality of Life Research, 23(4), 1049–1058.
MacCallum, R.C., Browne, M.W., & Sugawara, H.M. (1996). Power analysis and determination of sample size for covariance structure modeling. Psychological Methods, 1(2), 130.
Maydeu-Olivares, A. & Joe, H. (2014). Assessing approximate fit in categorical data analysis. Multivariate Behavioral Research, 49(4), 305–328.
Mehta, S.K. & Richards, N. (2002). Parental involvement in pediatric cardiology outpatient visits. Clinical Pediatrics, 41(8), 593–596.
Muthén, L.K. & Muthén, B.O. (1998-2015). Mplus User's Guide (7th ed.). Los Angeles, CA: Muthén & Muthén.
Passolunghi, M.C., Rueda Ferreira, T.I., & Tomasetto, C. (2014). Math–gender stereotypes and math-related beliefs in childhood and early adolescence. Learning and Individual Differences, 34(Suppl. C), 70–76.
Polanczyk, G., de Lima, M.S., Horta, B.L., Biederman, J., & Rohde, L.A. (2007). The worldwide prevalence of ADHD: A systematic review and metaregression analysis. American Journal of Psychiatry, 164(6), 942–948.
Reeve, B.B., Hays, R.D., Bjorner, J.B., Cook, K.F., Crane, P.K., Teresi, J.A., Thissen, D., Revicki, D.A., Weiss, D.J., Hambleton, R.K., Liu, H., Gershon, R., Reise, S.P., Lai, J.S., Cella, D., & PROMIS Cooperative Group (2007). Psychometric evaluation and calibration of health-related quality of life item banks: Plans for the Patient-Reported Outcomes Measurement Information System (PROMIS). Medical Care, 45(5 Suppl. 1), S22–S31.
Rizopoulos, D. (2013). Package ‘ltm’. Retrieved from https://cran.r-project.org/web/packages/ltm/ltm.pdf.
Samejima, F. (1997). Graded response model. In Handbook of Modern Item Response Theory (pp. 85–100). New York, NY: Springer.
R Core Team (2016). R: A Language and Environment for Statistical Computing. Vienna, Austria: R Foundation for Statistical Computing.
Terwee, C.B., Roorda, L.D., de Vet, H.C., Dekker, J., Westhovens, R., van Leeuwen, J., Cella, D., Correia, H., Arnold, B., Perez, B., & Boers, M. (2014). Dutch-Flemish translation of 17 item banks from the patient-reported outcomes measurement information system (PROMIS). Quality of Life Research, 23(6), 1733–1741.
Van der Ark, L.A. (2017). Package ‘mokken’ Version 2.8.6. Retrieved from https://cran.r-project.org/web/packages/mokken/mokken.pdf.
Vega-Fernandez, P., Zelko, F.A., Klein-Gitelman, M., Lee, J., Hummel, J., Nelson, S., Thomas, E.C., Ying, J., Beebe, D.W., & Brunner, H.I. (2014). Value of questionnaire-based screening as a proxy for neurocognitive testing in childhood-onset systemic lupus erythematosus. Arthritis Care & Research, 66(6), 943–948.
Wakefield, C.E., McLoone, J.K., Fleming, C.A.K., Peate, M., Thomas, E.J., Sansom-Daly, U., Butow, P., & Cohn, R.J. (2011). Adolescent cancer and health-related decision-making: An Australian multi-perspective family analysis of appointment attendance and involvement in medical and lifestyle choices. Journal of Adolescent and Young Adult Oncology, 1(4), 173–180.
Wong, A.W., Lai, J.S., Correia, H., & Cella, D. (2015). Evaluating psychometric properties of the Spanish-version of the pediatric functional assessment of chronic illness therapy-perceived cognitive function (pedsFACIT-PCF). Quality of Life Research, 24(9), 2289–2295.
Table 1. Items of the Dutch PedsPCF with corresponding content areas

Table 2. Sociodemographic characteristics of respondents
