Introduction
Continuity of care is considered a corner-stone in the effective management of long-term disorders by service users, clinicians and health-care policy-makers. It is fundamental in several policy documents (DoH, 1990, 1995, 2001) and has been proposed as a useful criterion for mental health service evaluation (Johnson et al. 1997). The weight given it is reflected in the widespread use of case management (Mueser et al. 1998) and national policies such as the Care Programme Approach (CPA) in the UK (DoH, 1990). Indeed, Tessler (1987) argues that it has replaced dependency and deinstitutionalization as the central issue in service provision.
However, although the importance of continuity of care has long been recognized, including for those with severe mental illness, it is generally agreed that there have been few attempts until recently to define it systematically, continuity being ‘often lauded but seldom defined’ (Freeman et al. 2000; see also Crawford et al. 2004). Definitions are frequently inadequate, often with only one or two elements included (Freeman et al. 2000). Freeman et al. (2000) identified 32 continuity of care studies in mental health and 14 in primary care but found more than 10 definitions and few attempts to explicate and analyse the idea substantively. Crawford et al. (2004) reviewed 435 relevant papers, most of which did not define continuity of care. Haggerty et al. (2003), however, emphasize that without clear definitions of continuity of care it is possible neither to investigate nor to solve discontinuities.
Adair et al. (2003), charting the definitions of continuity of care over 30 years, found that continuity was rarely distinguished from the interventions themselves until the 1980s, when the idea that it might be a multi-dimensional concept began to emerge (Bachrach, 1981), whereas in the 1990s continuity became seen as a potential measure of system-level reform. Where continuity had previously been seen as indicating care by the same caregiver or group of caregivers, the idea of continuity as involving the coordination of the patient's progress through the system gained hold.
Operationalizing the concept of continuity of care, however, has been notoriously difficult. Many of the earlier studies focused on discharge after an acute care episode rather than on longitudinal changes in continuity (Adair et al. 2003) and this has been the case even in some recent studies that have successfully distinguished between the continuity after discharge achieved by different mental health systems (Sytema et al. 1997; Sytema & Burgess, 1999). Sytema et al. (1997), however, also focused on flexibility of care, operationalized as the combinations of in-, day- and out-patient care used during follow-up, whereas other studies have focused on cross-boundary continuity between primary and secondary care (Bindman et al. 1997), psychiatric and emergency services (Heslop et al. 2000), or in-patient and community settings (Kopelowicz et al. 1998).
Several groups have proposed a range of conceptualizations that emphasize differing features: ‘a sustained patient–physician partnership’ (Nutting et al. 2003); maintenance of contact, consistency in the member of staff seen and success of transfer between services (Johnson et al. 1997); and ‘adequate access to care … good interpersonal skills, good information flow and uptake between providers and organizations, and good care coordination’ (Reid et al. 2002), whereas discontinuity has been defined as gaps in care (Cook et al. 2000). Others have again emphasized that continuity of care should be understood as multi-dimensional. Crawford et al. (2004) propose five factors based on sustained contact with services, breaks in service delivery, the same member of staff being seen, coordination of health and social professionals and the experience of care; Johnson et al. (1997) include maintenance of contact, consistency in the member of staff seen, transition and integration between services, adherence to service plans, and management of service users' needs; and Ware et al. (2003) use five domains: knowledge, flexibility, availability, coordination and transitions. A systematic literature review by Joyce et al. (2004) found that continuity of care has been defined in terms of service delivery, accessibility, relationship base and individualized care.
The impact of continuity of care as a multi-dimensional concept on health and social outcomes has been less often studied, as studies have tended either to examine outcomes with implications for continuity (such as loss of contact) or to examine interventions assumed to promote continuity (Freeman et al. 2000). Adair et al. (2005), however, found that better overall continuity, as a combined rating of a range of dimensions, was associated with better quality of life, better community functioning, lower symptom severity and greater service satisfaction, as well as with lower hospital costs and higher community costs (Mitton et al. 2005), although the direction of these effects could not be determined.
Freeman et al. (2000) rated continuity of care studies from the service users' viewpoint according to relevance, method and concept and highlighted the necessity not only for clarity in the conceptualization of continuity of care in order to be able to gauge its impact but also for the inclusion of the service user's perspective. They summarized the principal characteristics of continuity of care in a ‘multi-axial definition’ comprising: experienced, cross-boundary, flexible, information, relational and longitudinal. In a subsequent study of continuity in mental health settings (Freeman et al. 2002), they added two further definitions, contextual and long-term. This extended model was the starting-point for the present study (see Table 1).
Table 1. Multi-axial definition of continuity of care
In the current study we aimed to test whether a multi-factorial model of continuity of care could be operationalized for users of mental health services and whether systematically collected service user-level data would confirm the model's validity for this group.
Method
Sample and procedure
People with long-term psychotic disorders were sampled from the caseloads of seven Community Mental Health Teams (CMHTs) covered by two mental health Trusts. The inclusion criteria were: clinical diagnosis of any psychotic disorder received at least 2 years previously; on the caseload of the CMHT for at least 6 months; and aged 18 to 65 years. Diagnosis was confirmed by use of OPCRIT (McGuffin et al. 1991).
The multi-axial model of continuity of care (Freeman et al. 2000, 2002) was taken as the starting-point. Each of its eight facets or definitions was operationalized by identifying data and/or measures that approximated to it. The variables used to operationalize each definition were agreed by expert consensus within the multi-disciplinary research group. They were chosen for their closeness to the definition being considered, the likelihood and regularity of their being recorded in the case-notes or the availability of established, validated instruments for obtaining them during a single interview.
Interviews collected basic data on: patterns of contact with services in the preceding 12 months; breaks in care; and referrals to other services including hospital admission. Demographic and illness data were also collected. Three questionnaire measures were completed. The Camberwell Assessment of Need (CAN; Phelan et al. 1995) was used in the operationalization of flexible continuity and the Scale to Assess Therapeutic Relationships in Community Mental Health Care – service user version (STAR; McGuire-Snieckus et al. 2006) was used in the operationalization of relational continuity. CONTINU-UM (Continuity of care–User Measure; Rose et al. unpublished observations), a user-generated measure of continuity developed for the study, was used as a proxy for experienced continuity. Data on contact with services, number of professionals seen and information flow were also collected from CMHT records by study researchers using a standard schedule developed for the study. This recorded every face-to-face and telephone contact made between the team and the user; the discipline of the professional involved; for every transition in care (referral to an alternative or additional service, including admission to in-patient care), its date and whether appropriate documentation was recorded as having been sent or received; whether the annual CPA documentation was recorded as having been sent to the user, their carer and their general practitioner (GP); and contact between the CMHT and the GP.
Analysis
The continuity components were transformed to give them comparable weight. Continuous variables were z-scored if normally distributed or otherwise converted into categorical variables. Variables were coded so that a positive score indicated an assumed positive scenario; the final direction of each variable, however, would be indicated by the direction of the relationships determined in the factor analysis. Variables were omitted from further analysis if there was insufficient spread of response (<5% in any category); where two variables had a Spearman rank correlation coefficient ⩾0.8, one of the pair was omitted.
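This screening could be sketched as follows in Python/pandas rather than the SPSS syntax used in the study; the DataFrame layout, the list of normally distributed columns and the helper name `screen_components` are illustrative assumptions, not part of the original analysis.

```python
import numpy as np
import pandas as pd

def screen_components(df: pd.DataFrame, normal_cols: list,
                      min_category_prop: float = 0.05,
                      rho_cut: float = 0.8) -> pd.DataFrame:
    """Prepare continuity components: standardize, then drop variables with
    insufficient spread or near-duplicate variables (Spearman rho >= 0.8)."""
    out = df.copy()

    # z-score the approximately normally distributed continuous variables
    out[normal_cols] = (out[normal_cols] - out[normal_cols].mean()) / out[normal_cols].std()

    # drop categorical variables with <5% of responses in any category
    for col in out.columns.difference(normal_cols):
        if out[col].value_counts(normalize=True).min() < min_category_prop:
            out = out.drop(columns=col)

    # of any pair correlated at rho >= 0.8, keep the first variable and drop the second
    rho = out.corr(method='spearman').abs()
    upper = rho.where(np.triu(np.ones(rho.shape, dtype=bool), k=1))
    redundant = [c for c in upper.columns if (upper[c] >= rho_cut).any()]
    return out.drop(columns=redundant)
```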
Bartlett's test of sphericity and the Kaiser–Meyer–Olkin (KMO) measure of sampling adequacy (Kaiser, 1974) were used to evaluate the strength of the linear association between the items in the inter-item correlation matrix. Variables were omitted if their individual measure of sampling adequacy was unacceptably low, until the overall KMO measure of sampling adequacy reached an acceptable level.
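A minimal numpy sketch of these checks is given below. The KMO and Bartlett formulas are standard; the pruning loop is our own illustrative reconstruction of the rule just described (the study itself used SPSS), and the function names are assumptions.

```python
import numpy as np
from scipy.stats import chi2

def kmo(R: np.ndarray):
    """Kaiser-Meyer-Olkin measure from a correlation matrix R.
    Returns the overall KMO and the per-variable sampling adequacies."""
    inv = np.linalg.inv(R)
    # anti-image (negative partial) correlations from the inverse matrix
    scale = np.sqrt(np.outer(np.diag(inv), np.diag(inv)))
    partial = -inv / scale
    r2, p2 = R ** 2, partial ** 2
    np.fill_diagonal(r2, 0.0)
    np.fill_diagonal(p2, 0.0)
    per_variable = r2.sum(axis=0) / (r2.sum(axis=0) + p2.sum(axis=0))
    overall = r2.sum() / (r2.sum() + p2.sum())
    return overall, per_variable

def bartlett_sphericity(R: np.ndarray, n: int):
    """Bartlett's test that R is an identity matrix; n is the sample size."""
    p = R.shape[0]
    stat = -(n - 1 - (2 * p + 5) / 6.0) * np.log(np.linalg.det(R))
    df = p * (p - 1) / 2
    return stat, chi2.sf(stat, df)

def prune_low_msa(R: np.ndarray, names: list, kmo_floor: float = 0.5):
    """Drop the variable with the lowest individual sampling adequacy until
    the overall KMO reaches an acceptable level."""
    names = list(names)
    overall, per_variable = kmo(R)
    while overall < kmo_floor and len(names) > 2:
        worst = int(np.argmin(per_variable))
        keep = [i for i in range(len(names)) if i != worst]
        R = R[np.ix_(keep, keep)]
        del names[worst]
        overall, per_variable = kmo(R)
    return R, names, overall
```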
Exploratory factor analysis was carried out on variables retained after preliminary screening. A principal component analysis was used to extract factors with an eigenvalue greater than one. A varimax rotation was then used to produce interpretable independent factors. Extracted factors were interpreted by identifying the items that loaded onto each with a rotated factor loading >0.5. Analyses were conducted in SPSS version 14 for Windows (SPSS Inc., Chicago, IL, USA).
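The extraction and rotation steps were run in SPSS; the numpy sketch below reproduces the same generic procedure (principal components, Kaiser's eigenvalue-greater-than-one criterion, varimax rotation, interpretation at |loading| > 0.5) and is offered only as an illustration, not as the authors' implementation.

```python
import numpy as np

def principal_component_loadings(R: np.ndarray):
    """Extract principal components of a correlation matrix, retaining those
    with eigenvalue > 1 (Kaiser criterion). Returns the loadings and the
    proportion of total variance they explain."""
    eigvals, eigvecs = np.linalg.eigh(R)
    order = np.argsort(eigvals)[::-1]
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    keep = eigvals > 1.0
    loadings = eigvecs[:, keep] * np.sqrt(eigvals[keep])
    return loadings, eigvals[keep].sum() / R.shape[0]

def varimax(loadings: np.ndarray, max_iter: int = 100, tol: float = 1e-6):
    """Orthogonal varimax rotation (standard iterative SVD algorithm)."""
    p, k = loadings.shape
    rotation = np.eye(k)
    objective = 0.0
    for _ in range(max_iter):
        rotated = loadings @ rotation
        u, s, vt = np.linalg.svd(
            loadings.T @ (rotated ** 3
                          - (1.0 / p) * rotated @ np.diag(np.diag(rotated.T @ rotated)))
        )
        rotation = u @ vt
        new_objective = s.sum()
        if new_objective <= objective * (1 + tol):
            break
        objective = new_objective
    return loadings @ rotation

# Illustrative use: factors are interpreted from items with |rotated loading| > 0.5
# loadings, variance_explained = principal_component_loadings(R)
# rotated = varimax(loadings)
# defining_items = np.abs(rotated) > 0.5
```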
Results
Sample
Initially, 609 service users were identified as being potentially eligible for the study. Of these, 111 did not meet the inclusion criteria and 318 declined to participate, leaving 180 service users to be interviewed. Characteristics of the sample are given in Table 2. The diagnosis of psychosis was confirmed by OPCRIT (McGuffin et al. 1991) for 171 participants.
Table 2. Clinical and demographic characteristics
s.d., Standard deviation.
a Includes full-time, part-time and sheltered work and self-employment.
b Includes seeking work, unable to work, studying, retired or other.
Operationalizing continuity of care
Freeman's eight definitions of continuity were operationalized using a total of 32 components considered for entry into the factor analysis (Table 3).
Table 3. Continuity of care components
CONTINU-UM, Continuity of care–User Measure; STAR, Scale to Assess Therapeutic Relationships in Community Mental Health Care; CAN, Camberwell Assessment of Need; CPA, Care Programme Approach; GP, general practitioner; CMHT, Community Mental Health Team.
a Items in italics were subsequently dropped from the analysis, for reasons given in the text.
b Information from the service user.
c Variable reversed so that a high score indicates an assumed positive scenario; for example a high score for ‘average gap between face-to-face contacts’ would indicate short average gaps.
d For users with no identified care coordinator (STAR-c rating), the STAR concerning the relationship with the psychiatrist (STAR-p) was used; where no psychiatrist was identified or rated, the STAR concerning the relationship with a third identified professional (STAR-o) was used, to maximize data.
Experienced continuity
As our overarching concept for the purposes of this study (and therefore not necessarily interpreted as either ‘coordinated’ or ‘smooth’), this was intended to capture the service user perspective and was operationalized using CONTINU-UM.
Flexible continuity
Conceptualized both as the meeting of the range of needs present at any single time-point and as responsiveness to change in clinical needs over time, this was operationalized using the CAN and as an increased rate of contacts in the 3 months prior to any hospital admission or service user-reported deterioration.
Cross-boundary continuity
Conceptualized as transitions and fragmentations, this was operationalized as referrals to other services, admissions to hospital, discharges from hospital, number of agencies involved and any user-reported contact with primary care.
Continuity of information
Determined by the number of transitions collected for cross-boundary continuity, this was operationalized as (a) documents sent as a proportion of the identified transitions, (b) proportion of letters copied or sent directly to the user and (c) number of people to whom CPA documentation was copied (an established good-practice requirement for long-term care in this group).
Longitudinal continuity
This was operationalized as (a) any change in who acts as the user's care coordinator and the number of staff in that role, (b) any change in who acts as the user's psychiatrist and the number of psychiatrists in that role, (c) ‘spread of non-medical CMHT input’ (number of different non-medical team members seen out of the total number of contacts with non-medical team members) and (d) ‘spread of medical CMHT input’ (number of different medical team members seen out of the total number of contacts with medical team members).
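As an illustration only, the ‘spread of input’ ratios could be computed from a contact log along the following lines; the column names (`user_id`, `staff_id`, `discipline`) are assumptions rather than the study's actual schedule.

```python
import pandas as pd

def input_spread(contacts: pd.DataFrame) -> pd.Series:
    """Number of different team members seen divided by the total number of
    contacts, per service user."""
    grouped = contacts.groupby("user_id")["staff_id"]
    return grouped.nunique() / grouped.count()

# e.g. separate spreads for non-medical and medical CMHT input
# non_medical_spread = input_spread(contacts[contacts["discipline"] != "medical"])
# medical_spread = input_spread(contacts[contacts["discipline"] == "medical"])
```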
Relational or personal continuity
This was operationalized as the user-rated STAR.
Long-term continuity
Interpreted as breaks in care and user-initiated discontinuity, this was operationalized as: user-reported level of attendance at appointments with the CMHT; number of user-initiated breaks from mental health care reported by the user; user-reported medication adherence; total number of CMHT contacts in the year; longest gap between contacts with the secondary care team; number of gaps of more than two months; number of more-than-average gaps (quantified as the user's individual mean gap×2+2 weeks); and number of days between hospital discharge and face-to-face contact with a member of the CMHT.
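A sketch of how the gap-based components might be derived for a single user is given below, assuming a pandas Series of that user's contact dates, at least two contacts, and a 61-day approximation of ‘two months’; this is not the authors' code.

```python
import pandas as pd

def gap_components(contact_dates: pd.Series) -> dict:
    """Gap-based long-term continuity components for one service user,
    computed from the dates of their face-to-face contacts in the year.
    Assumes at least two contacts (otherwise no gaps exist)."""
    dates = pd.to_datetime(contact_dates).sort_values()
    gaps = dates.diff().dropna()                                 # intervals between consecutive contacts
    more_than_average = gaps.mean() * 2 + pd.Timedelta(weeks=2)  # user's own threshold (mean gap x 2 + 2 weeks)
    return {
        "total_contacts": int(len(dates)),
        "longest_gap_days": int(gaps.max().days),
        "gaps_over_two_months": int((gaps > pd.Timedelta(days=61)).sum()),  # ~2 months (assumed cut-off)
        "more_than_average_gaps": int((gaps > more_than_average).sum()),
    }
```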
Contextual continuity
Interpreted as social context, this was operationalized as living situation (supported accommodation or independent) and daily activities (day care).
When the inter-item correlation matrix was constructed, 10 components were omitted from further analysis. ‘Total number of phone calls’ was omitted because of inconsistent case-note recording. The variables ‘Saw known CMHT member when hospitalized’, ‘Increased contacts in three months prior to user deterioration’, ‘Increased contacts in three months prior to admission’ and ‘Number of user-rated breaks in care’ were omitted because of insufficient spread of response. ‘Total number of face-to-face contacts’ was found to be correlated with ‘Average gap between face-to-face contacts’ (r=0.88) and was therefore omitted from further analysis. ‘Longest gap between face-to-face contacts’ was highly correlated with ‘Average gap between face-to-face contacts’ (r=0.86) and ‘Gap of two months or more’ (r=0.86), so was omitted. ‘Referred to other agency’ was highly correlated with ‘Had a transition’ (r=0.81) and so was omitted. ‘Number of unmet needs’ was highly correlated with ‘Proportion of needs met’ (r=0.94) so it was omitted, and ‘CAN total number of needs’ was highly correlated with ‘CAN total level of need’ (r=0.93) and was therefore omitted. Thus, 22 components were appropriate for entry into the exploratory factor analysis.
Factor analysis
A factor analysis was conducted to explore how the different components of continuity relate to each other. Entering the 22 components produced a KMO statistic of 0.49, just below the 0.5 threshold of an acceptable measure of sampling adequacy (Kaiser, 1974). The individual measures of sampling adequacy were then examined and two were found to be very low and so were removed from the factor analysis: ‘Gaps of (average gap×2+2 weeks)’ (0.28) and ‘Medical input spread’ (0.22). In the repeated factor analysis, Bartlett's test of sphericity indicated that the correlation matrix was not an identity matrix (χ2=540.5, p<0.001). The KMO measure of sampling adequacy was 0.54, which, though still low, was acceptable. (The correlation matrix is not presented here but is available from the first author on request.) Seven factors were extracted with an eigenvalue of one or more, explaining 62.5% of the total variance in the data (Table 4). Where a factor was predominantly characterized by a component or components used to operationalize the original multi-axial model, the name of that definition is added in parentheses in Table 4. Factor 5, Managed Transitions, was recoded into a straightforward trichotomous variable.
Table 4. Continuity of care factors
CONTINU-UM, Continuity of care–User Measure; STAR, Scale to Assess Therapeutic Relationships in Community Mental Health Care; CAN, Camberwell Assessment of Need; CPA, Care Programme Approach; GP, general practitioner; CMHT, Community Mental Health Team.
a ‘Negative’ indicates that the component loads negatively onto the factor, indicating an inverse relationship, whereas ‘reversed’ indicates that the variable was reverse-scored from the outset so that a high score would indicate a positive scenario.
The majority of components loaded significantly onto one factor only, with rotated loadings of 0.5 and above. There were four exceptions to this. ‘Any user-rated breaks in care?’, ‘CPA copied to GP and user?’, ‘Number of care coordinators in the past year’ and ‘Attended a day centre’ all had absolute loadings between 0.4 and 0.5 onto only one factor and so were allocated to that respective factor.
Summary statistics for the 20 components of continuity of care in the seven-factor model are presented in Table 5.
Table 5. Levels of continuity components
CONTINU-UM, Continuity of care–User Measure; STAR, Scale to Assess Therapeutic Relationships in Community Mental Health Care; CAN, Camberwell Assessment of Need; CPA, Care Programme Approach; GP, general practitioner; CMHT, Community Mental Health Team.
Values are given as mean (standard deviation) or number (percentage).
Discussion
This study was based on the premise that continuity of care is ‘often lauded but seldom defined’ (Freeman et al. 2000). Anecdotal evidence would suggest that professionals tend to recognize the idea of continuity of care and intuitively accept it as a worthy goal, despite the paucity of evidence about what it means in practice.
We operationalized the original model to enable its systematic measurement and exploration using quantitative service user-level data. We used the global score of the new measure, CONTINU-UM, as a proxy for experienced continuity (as an overarching concept), treating it as a single measure that would reflect participants' own experiences and perspectives on the continuity of care they received. We operationalized the remaining elements using multiple components (collected both through interview and from clinical records) that, between them, would reflect the full range of concepts covered by the multi-axial model from which we started. By exploring the relationships between these components through a factor analysis, we found them to be grouped differently in practice, providing a new seven-factor model comprising Experience and Relationship, Regularity, Meeting Needs, Consolidation, Managed Transitions, Care Coordination and Supported Living. These have clear relationships with the different elements of the model of Freeman et al., although they are not synonymous.
Our methodology was comparable to that of Adair et al. (2003), whose measure, developed for the Canadian context, includes both patient- and observer-rated scales. Our factors Experience and Relationship and Meeting Needs partially matched their patient-rated subscales ‘relationship base’ and ‘responsive treatment’ respectively, and their other subscale ‘system fragmentation’ seems to have been reflected in our analysis by three distinct factors, Consolidation, Managed Transitions and Care Coordination.
Our analysis thus confirms Freeman et al.'s argument and Adair et al.'s finding that continuity of care comprises more than one single entity. The overarching concept of continuity of care can be broken down into a number of independent concepts and the factors that emerged from our analysis seem intuitively meaningful and practical.
Methodological issues
The conclusions of this paper are inevitably derived from a sample who agreed to take part. How this group may have contrasted with the larger group who refused is unknown. It is possible, though not proven, that those who refused may have been less well engaged with or favourably disposed towards services. If this were the case, this would be likely to affect the levels of several of the continuity factors of the sample (such as Experience and Relationship or Regularity), rather than affecting the overall factor structure.
We took an inclusive approach to operationalizing and measuring the original model. Consistent with this, we did not remove items from the exploratory factor analysis that were weakly correlated with each other (<0.3 as is sometimes advised), as it was possible that different components of continuity would be unrelated to each other.
Given the nature of some of the components included, it was likely that some of them would not be related to each other, affecting the KMO statistic (measuring sampling adequacy). Overall measures of good fit may, therefore, not be applicable to our aims.
Data from records were limited by the availability of the information on file. This may have varied between CMHTs. In assessing ‘information continuity’ and its related components, whether the requisite information was on file was highly relevant. We therefore worked on the assumption that information not on file had not been sent, a conservative estimation of information flow. It is possible that the accuracy of service contact or transitions data may have been compromised by the quality of case-notes in a way that could not be quantified and that may have varied between CMHTs.
Although the factors are intuitively meaningful, their scoring is not and this complicates interpretation, which needs to be based on the components loading onto each factor.
Potential use of these factors
Our factor structure is helpful in challenging preconceptions about likely correlates of care practices. For instance, care components linked with Care Coordination and those linked with Experience and Relationship loaded onto separate factors, suggesting that focusing care on a single care coordinator is no guarantee in itself of better relational or experienced continuity. The loading of ‘designated psychiatrist’ onto Care Coordination suggested that this was common and reflected a choice in provision of care: users were more likely to see no psychiatrist or more than two (that is, to have no particular psychiatrist relating to them) if they saw only one or two care coordinators. This suggests that teams were choosing between emphasizing continuity achieved through the care coordinator or through the psychiatrist, without any evidence of this being based on an explicit policy. Any assumption that the one smoothly substitutes for the other is challenged by service users' reports in in-depth interviews conducted in a related study (Jones, personal communication), which found that they disliked having to see several psychiatrists, even if they had a single care coordinator.
Johnson et al. (1997) proposed that continuity be used as an important quality measure for services, but until recently there have been no metrics. Our operationalization of Freeman et al.'s original model draws on routinely collected data and well-known and validated measures. Our factors may in future be used to identify service user characteristics associated with different levels of continuity and therefore help to target extra support to vulnerable groups. They may also be used as outcomes against which to test measures (in particular service configurations) deployed to improve continuity. It is unlikely that a model comprising seven factors would be used in routine services. As it presently stands, however, it may provide for clinicians a means of conceptualizing continuity of care for mental health, along with a wide-ranging set of measures of continuity in its different facets, from which different aspects could be selected to reflect service priorities. The relative clinical importance of the seven factors remains to be tested against relevant clinical and social outcome measures. Further research should then identify the optimal continuity of care factors as the minimum necessary components of care for service users with chronic mental health problems.
Appendix: The ECHO Group
Main phase: Tom Burns (University of Oxford), Jocelyn Catty (St George's, University of London), Sarah Clement (London South Bank University), Kate Harvey (University of Reading), Sarah White (St George's, University of London), Tamara Anderson (St George's, University of London), Naomi Cowan (St George's, University of London), Gemma Ellis (St George's, University of London), Helen Eracleous (St George's, University of London), Connie Geyer (St George's, University of London), Pascale Lissouba (St George's, University of London), Zoe Poole (St George's, University of London); Qualitative strand: Ian Rees Jones (University of Wales, Bangor), Nilufar Ahmed (St George's, University of London); Developmental phase: Diana Rose (IOP, London), Til Wykes (IOP, London), Angela Sweeney (IOP, London); Organizational strand: Susan McLaren (London South Bank University), Ruth Belling (London South Bank University), Jonathon Davies (London South Bank University), Ferew Lemma (London South Bank University), Margaret Whittock (London South Bank University).
Acknowledgements
This study was funded by a grant (SDO/13(d)2001) from the National Institute for Health Research Service Delivery and Organisation Programme, London School of Hygiene and Tropical Medicine.
Declaration of Interest
None.