Introduction
Assertive Community Treatment (ACT) focuses on clients with serious mental illness, particularly those who have experienced frequent hospital admissions and have difficulties engaging with services. The clinical and cost-effectiveness of ACT have been demonstrated in numerous randomised controlled trials (RCTs), mostly in the USA (Stein & Test, 1980; Marshall & Lockwood, 1998). Successful replications and service evaluations in Australia have suggested good clinical outcomes (Hoult, 1986; Hambridge & Rosen, 1994; Issakidis et al. 1999) and led to widespread implementation of ACT in Australia (Ash et al. 2001). In contrast, the REACT study (Killaspy et al. 2006), the first RCT of ACT teams (ACTTs) with relatively high model fidelity, concurred with earlier UK studies of intensive case management models suggesting little advantage for ACT over ‘usual care’ from community mental health teams (Holloway & Carson, 1998; Thornicroft et al. 1998a, b; Burns et al. 1999; UK 700 Group, 1999). Similar findings have been reported in other European studies (e.g. Sytema, 1991). Despite this, ACTTs have been implemented across England (Department of Health, 1999).
Various explanations have been advanced for the failure to discern an effect of ACT in some countries, including differences in how long the teams under study had been established or were followed up, as well as variations in model fidelity and the standard of comparison care (Teague et al. 1998; Tyrer, 2000; Fiander et al. 2003; Killaspy et al. 2006). The heterogeneity of comparison care and the development of better-quality ‘standard’ community mental health care may explain why earlier trials of ACT showed greater efficacy than later studies (Burns, 2008). Model fidelity has assumed particular importance since high-fidelity programmes are more effective (McHugo et al. 2007) and fidelity explains much of the variation in client outcomes (Burns et al. 2007). Further, the implementation of ACT programmes continues to vary (Bond, 1990; Phillips et al. 2001; Salyers et al. 2003). It is increasingly difficult to conduct multi-centre RCTs of ACT to provide an international comparison because the model has now been widely implemented in many countries. Service model interventions are also complex, making RCT results subject to effect modification in different populations. Therefore, studies of ACT implementation are required, including comparisons between sites, which may yield new insights into differences in reported efficacy and effectiveness (Phillips et al. 2001).
The Pan London Assertive Outreach (PLAO) Studies examined ACT implementation and effectiveness among all 24 London teams in operation in 2001 (Billings et al. 2003; Priebe et al. 2003; Wright et al. 2003). Implementation of the ACT model was variable and only four teams showed high model fidelity as measured by the Dartmouth ACT Scale (Teague et al. 1998). Three types of ACTT were identified: Clusters A and B were statutory teams with care programme approach (CPA) responsibilities, while Cluster C comprised non-statutory teams without CPA responsibilities. London ACT clinicians had high levels of job satisfaction and moderate to low ‘burnout’, but were stressed by lack of support from senior staff. There was evidence that some London ACTTs (Cluster C teams especially) accepted clients who were not in the conventional target group, such as those who had never been hospitalised.
In this study, we aimed to investigate differences in the implementation of ACT in Melbourne and London in terms of team structure, staffing and processes, staff experiences and client characteristics through a cross-sectional survey, using measures similar to those used in the PLAO Studies and employing post hoc comparisons. We also aimed to explore whether any differences in implementation could explain the differences in efficacy reported for ACT in Australia and England.
Method
Setting
ACTTs have been established in Melbourne since 1996. The study involved all four ACTTs in the Western Region of Melbourne, an area of medium-to-high social and economic deprivation according to the Index of Relative Socio-Economic Disadvantage (Australian Bureau of Statistics, 2003). For these four ACTTs, there was an average of 11 acute beds per 100 000 population with an average bed occupancy of 96.0%. The average length of stay was 15 days with a 28-day readmission rate of 11.1%. There was an average of 3 extended care (rehabilitation) inpatient beds per 100 000 population in western Melbourne. Data were compared with results for London ACTTs investigated in the PLAO Study (Billings et al. 2003; Priebe et al. 2003; Wright et al. 2003). These 24 ACTTs served the greater London region, which includes many of the most socially deprived boroughs in England with high levels of psychiatric morbidity, particularly in inner London (Mental Illness Needs Index; Glover et al. 1998). In 2003–2004, the average mental health inpatient bed occupancy in London was 91.3% with an average of 38 acute beds per 100 000 population. The average length of stay was 54 days. In addition, there was an average of 10 inpatient rehabilitation beds per 100 000 population (Department of Health hospital activity data; Department of Health, 2004). The average 28-day readmission rate was 6.5%, ranging from 4.4 to 10.4% (Commission for Health Improvement, 2003).
Sampling and data collection methods
All data on Melbourne teams were gathered between August 2003 and December 2004 (within 2 years of completion of the PLAO Study). Participant sampling and data collection methods used in the PLAO Study were replicated with two exceptions: (a) the International Classification of Mental Health Care (De Jong et al. 1996) was not included because it had identified few differences between statutory teams in the PLAO Study (Wright et al. 2003) and all the Melbourne teams were known to be statutory teams; (b) data on client characteristics were collected from team staff, who referred to the case notes where needed, whereas in the PLAO Study (Priebe et al. 2003) these data were collected directly from case notes by research assistants. The latter was not possible in Melbourne due to lack of study resources, so a validation exercise was conducted: we compared data on inpatient admissions collected by team staff for all clients of one of the Melbourne teams with data extracted from the service's electronic clinical database. Data gathering was coordinated by one part-time psychiatrist in training (Salvatore Martino). Ethical approval for the study was gained from the relevant local ethics committees.
Participants
Participants were staff of the four western Melbourne ACTTs. Data on team composition, processes and fidelity to the ACT model were gathered from team managers. All staff completed self-report questionnaires on their experiences of their work and on the characteristics of their clients, as described below. They were encouraged to check the case notes for client data.
Team composition, processes and fidelity to the ACT model
The team organisation questionnaire is a semi-structured questionnaire developed specifically for the PLAO Study (Wright et al. 2003) that collects information on team staffing, caseloads, policies and protocols, and the team's relationship to other health and social care providers. An additional item was added to gather information specific to the Australian context at the time: use of Community Treatment Orders (CTOs; legislative orders that compel outpatient treatment, referred to internationally as involuntary outpatient commitment). The Dartmouth Assertive Community Treatment Scale (DACTS) (Teague et al. 1998) is an internationally agreed measure of fidelity to the ACT model consisting of 28 items, each rated on a scale of 1 to 5. Three sub-scales are generated: human resources, organisational boundaries and nature of services. Total mean scores above 4 denote high-model fidelity, 3 to 4 ‘ACT-like’ services and below 3, low-model fidelity.
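As an informal illustration of the DACTS banding described above (not part of the original study, which scored the instrument on paper), the following minimal Python sketch applies the published cut-offs to a hypothetical set of item ratings:

```python
def dacts_fidelity(item_scores):
    """Classify ACT model fidelity from 28 DACTS item ratings (each 1-5).

    Bands follow the cut-offs quoted above: total mean >4 = high fidelity,
    3-4 = 'ACT-like', <3 = low fidelity. Sub-scale scoring is omitted.
    """
    if len(item_scores) != 28:
        raise ValueError("The DACTS has 28 items")
    mean_score = sum(item_scores) / 28
    if mean_score > 4:
        band = "high-model fidelity"
    elif mean_score >= 3:
        band = "ACT-like"
    else:
        band = "low-model fidelity"
    return mean_score, band

# A hypothetical team rating highly on 14 items and moderately on the rest
print(dacts_fidelity([5] * 14 + [3] * 14))   # (4.0, 'ACT-like')
```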
Staff experiences
Staff burnout was assessed using the Maslach Burnout Inventory (Maslach et al. 1996), a 22-item scale that yields scores for three components of burnout: emotional exhaustion – depletion of emotional resources, leading to workers feeling unable to give of themselves at a psychological level; depersonalisation – negative, cynical attitudes and feelings about clients; and personal accomplishment – evaluating oneself negatively, particularly with regard to working with clients. A high degree of burnout is reflected by high scores for the first two components and a low score for the third. Among mental health professionals, high burnout is characterised by scores greater than 20, greater than 7 and less than 29 for the three components, respectively (medium burnout: 14–20, 5–7 and 29–33; low burnout: 13 or less, 4 or less, and 34 or more).
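Because the personal accomplishment component is scored in the opposite direction to the other two, the banding is easy to misread; the sketch below is purely illustrative (not part of the original analysis) and simply encodes the thresholds quoted above:

```python
def burnout_bands(emotional_exhaustion, depersonalisation, personal_accomplishment):
    """Map Maslach Burnout Inventory component scores to the bands quoted
    above for mental health professionals (illustrative only)."""
    if emotional_exhaustion > 20:
        ee = "high"
    elif emotional_exhaustion >= 14:
        ee = "medium"
    else:
        ee = "low"

    if depersonalisation > 7:
        dp = "high"
    elif depersonalisation >= 5:
        dp = "medium"
    else:
        dp = "low"

    # Personal accomplishment is reverse-scored: low scores indicate high burnout.
    if personal_accomplishment < 29:
        pa = "high"
    elif personal_accomplishment <= 33:
        pa = "medium"
    else:
        pa = "low"

    return {"emotional exhaustion": ee,
            "depersonalisation": dp,
            "personal accomplishment": pa}

print(burnout_bands(22, 6, 30))
# {'emotional exhaustion': 'high', 'depersonalisation': 'medium',
#  'personal accomplishment': 'medium'}
```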
Job satisfaction was measured using two instruments: (1) the short version of the Minnesota Satisfaction Questionnaire (Weiss et al. 1967), which consists of 20 items rated on a five-point scale, each measuring satisfaction with a particular aspect of work. This produces scores for two subscales, intrinsic and extrinsic satisfaction. The former, scored from 12 to 60, reflects the extent to which people think that their job fits their vocational abilities and needs; the latter, scored from 6 to 30, measures satisfaction with working conditions and rewards. A neutral attitude is indicated by scores of 60 for overall satisfaction, 36 for intrinsic satisfaction and 18 for extrinsic satisfaction. (2) The Job Diagnostic Survey (Hackman & Oldham, 1975) comprises five items concerning the degree to which the employee is satisfied and happy with the job overall, rather than with specific aspects of the job.
Client characteristics
Data on client characteristics were gathered on structured data sheets completed by clinicians, with reference to the case files, as follows: socio-demographic data (age, gender, ethnicity, marital status, employment, living situation); clinical characteristics (diagnosis, total admissions, total number and length of admissions in the last 2 years, use of the Mental Health Act in the last 2 years, including whether currently subject to a CTO); and adverse events (deliberate self-harm, acts of violence, contact with police, periods of homelessness in the last 2 years). Staff rated their clients' substance use using the Clinician Alcohol and Drug Scales (Drake et al. 1996), which give ratings from 1 (abstinent) to 5 (dependent with institutionalisation). They also rated their perception of their clients' engagement using a modified version of the Helping Alliance Scale (Priebe & Gruyters, 1993), which has five items, each with a maximum score of 10, higher scores reflecting better quality of engagement.
Data analysis
For these post hoc analyses, simple descriptive data were calculated for each team and as a total for all Melbourne ACTTs using SPSS version 14 (SPSS, 2006). Mean scores were calculated for normally distributed continuous variables and one-way ANOVA or Student's t tests were carried out for comparison of means. Medians were calculated for non-normally distributed data and compared using Kruskal–Wallis ANOVA, Mann–Whitney U tests or Wilcoxon Rank Sum tests as appropriate.
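For readers wishing to reproduce this kind of comparison, a minimal sketch in Python is given below. It is purely illustrative: the original analyses were run in SPSS version 14, and the arrays shown are hypothetical client-level values (e.g. bed-days) for each city.

```python
# Illustrative only: the study's analyses were conducted in SPSS 14.
import numpy as np
from scipy import stats

# Hypothetical client-level values (e.g. bed-days over 2 years) per city
melbourne = np.array([30, 12, 0, 45, 60, 7, 90, 21])
london = np.array([54, 80, 20, 0, 110, 65, 40, 75])

# Normally distributed continuous variables: compare means with Student's t test
t_stat, t_p = stats.ttest_ind(melbourne, london)

# Non-normally distributed variables: compare distributions with Mann-Whitney U
u_stat, u_p = stats.mannwhitneyu(melbourne, london, alternative="two-sided")

# More than two groups (e.g. team clusters): Kruskal-Wallis ANOVA
third_group = np.array([15, 25, 35, 10])   # hypothetical
h_stat, kw_p = stats.kruskal(melbourne, london, third_group)

print(f"t test: t={t_stat:.2f}, p={t_p:.3f}")
print(f"Mann-Whitney: U={u_stat:.1f}, p={u_p:.3f}")
print(f"Kruskal-Wallis: H={h_stat:.2f}, p={kw_p:.3f}")
```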
A cluster analysis was carried out to determine whether different ‘types’ of Melbourne teams were in operation, similar to the three types of London teams identified in the PLAO Study (Wright et al. 2003). The analytical strategy used in the PLAO Study was replicated, including the same key variables in both analyses. Because of the small number of Melbourne teams, it was not possible to carry out a cluster analysis on them alone; instead, their data were added to the London teams' data.
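A hedged sketch of this kind of hierarchical cluster analysis is shown below, including the sensitivity check on single versus complete linkage that proved relevant in the Results. The feature matrix is simulated for illustration; the PLAO analysis used 14 key team characteristics (Table 1), and the exact clustering settings of the original study are not reproduced here.

```python
# Illustrative only: hierarchical clustering of team-level characteristics,
# comparing single and complete linkage. The data are simulated.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
# 24 London + 4 Melbourne teams, 14 (hypothetical) standardised team variables
teams = rng.normal(size=(28, 14))

distances = pdist(teams, metric="euclidean")
for method in ("single", "complete"):
    tree = linkage(distances, method=method)
    # Cut the dendrogram into three clusters, mirroring the three PLAO team types
    labels = fcluster(tree, t=3, criterion="maxclust")
    print(method, labels)
```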
London and Melbourne teams' staff burnout and job satisfaction data were entered into a Stata database (Stata Corporation, 2003). The analytical strategy used in the PLAO Study (Billings et al. 2003) was replicated: data were compared using regression analysis with the burnout or satisfaction variable as the dependent variable and city as the independent variable, adjusting for clustering by team.
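As an illustrative analogue of that regression (the original was run in Stata), the sketch below fits an ordinary least squares model of a burnout score on city with cluster-robust standard errors grouped by team; the data frame and column names are hypothetical.

```python
# Illustrative only: cluster-adjusted regression of a staff outcome on city.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
staff = pd.DataFrame({
    "emotional_exhaustion": rng.normal(18, 6, size=120),   # hypothetical scores
    "city": rng.choice(["London", "Melbourne"], size=120),
    "team_id": rng.integers(0, 16, size=120),               # hypothetical team labels
})

model = smf.ols("emotional_exhaustion ~ C(city)", data=staff)
# Standard errors adjusted for clustering of staff within teams
result = model.fit(cov_type="cluster", cov_kwds={"groups": staff["team_id"]})
print(result.summary())
```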
Results
Response
The full data on team composition, processes and model fidelity were gathered from team managers. The response rate for self-report data on staff experiences was 98% (44/45) and for client data it was 91% (183/202). These response rates are comparable with those of the PLAO Study (100% for team organisation data, 89% for staff experiences and 100% for client case note data).
Comparison of Melbourne and London team characteristics
The 14 key team characteristics used for the cluster analysis of team type are shown in Table 1 for the London and Melbourne teams. A number of potentially important differences between the London and Melbourne teams are apparent. London ACTTs had less than half the dedicated psychiatrist time of the Melbourne teams. Individual staff caseloads were higher in London. No Melbourne team had dedicated inpatient beds, but the Melbourne teams took greater responsibility for dealing with psychiatric crises than all the London teams except the four in Cluster B. All the Melbourne teams operated outside usual office hours, and three of the four achieved a majority (>70%) of client contacts in vivo, compared with only one-third of the London Clusters A and B teams and 45% of Cluster C teams.
Table 1. Variables used in cluster analysis of team type and distribution of team variables in London and Melbourne teams

*CPA is the Care Programme Approach and ISP, Individual Service Planning, is the nearest Australian equivalent.
†This figure excludes psychiatrists.
‡Scored as: (1) team has no responsibility; (2) emergency service has team-generated protocol; (3) team is available by telephone, predominantly for consultation; (4) team provides emergency services backup; (5) team provides 24-hour coverage.
The cluster analysis of team type showed the following:
(1) The Melbourne teams always clustered together.
(2) The London Cluster C remained robust.
(3) The London Clusters A and B became less robust: the linkage method used (single or complete) determined whether the Melbourne teams joined London Cluster A or B and whether Clusters A and B remained distinct.

Table 2 displays the summary DACTS scores (Teague et al. 1998) for the three PLAO clusters plus the Melbourne teams. Non-parametric analysis showed that the four groups differed significantly on the DACTS total score and the Human Resources Subscale. In addition, post hoc comparisons (using Wilcoxon Rank Sum tests) of the Melbourne teams' DACTS scores against the London ACTTs showed them to be significantly different from Cluster C on total DACTS (W = 21.0, p = 0.01) and the Human Resources Subscale (W = 21.5, p = 0.01), but not significantly different from Clusters A or B for total DACTS or Subscale scores (Table 2). Further comparisons were therefore made between the Melbourne teams and the London Clusters A and B teams combined.
Table 2. Comparison of London and Melbourne ACT teams' model fidelity as measured by the DACTS (Teague et al. 1998)

*Kruskal–Wallis test.
DACTS – H = Human resources sub-scale.
DACTS – O = Organisational boundaries sub-scale.
DACTS – S = Nature of services sub-scale.
On average, the Melbourne teams scored as ‘ACT-like’ on the DACTS (one team scored as high-fidelity ACT). Melbourne teams scored highly on 14 of the 28 DACTS items (small caseload, team approach, clinically active team leader, staff capacity, qualified nurse on staff, team size, explicit intake criteria, intake rate, responsibility for hospital admissions, responsibility for hospital discharge planning, in-vivo services, assertive engagement mechanisms, intensity of service, work with support system). London Clusters A and B teams also scored as ‘ACT-like’ (three Cluster A teams scored as high-fidelity ACT) and scored highly on 11 DACTS items (small caseload, qualified nurse on staff, explicit intake criteria, intake rate, full responsibility for treatment, responsibility for hospital admissions, responsibility for hospital discharge planning, time-unlimited services, in-vivo services, no dropout policy, assertive engagement mechanisms).
Staff characteristics
Staff of the London teams were younger and more likely to identify as being from non-white ethnic groups than the Melbourne staff. While Melbourne staff had greater experience of working in ACT than London staff, staff from both cities had similar lengths of experience of working in mental health services and within their current team (see Table 3). Melbourne and London staff also had similar turnover rates (61.4 and 67.1% of staff, respectively, had been in their posts for less than 2 years). Melbourne staff were more likely to work shifts outside usual office hours, whereas almost half the London staff worked only within office hours.
Table 3. Socio-demographic and job details of staff in London (Clusters A and B) and Melbourne ACT teams

There were no statistically significant differences between London Clusters A and B and Melbourne ACTT staff in terms of job satisfaction and burnout (Table 4).
Table 4. Job satisfaction and burnout for ACT staff in London (Clusters A and B) and Melbourne ACT teams

*Regression analysis with the burnout or satisfaction variable as the dependent variable and city as the independent variable, adjusting for clustering by team.
Client characteristics
The validation of admission data collected by staff in one of the Melbourne teams against the corresponding data in the electronic clinical database showed that ACTT staff were generally accurate in their reports of their clients' recent admissions. For the ‘validation team’, staff reported a median of two admissions (range: 0–11) and 30 bed-days (range: 0–232) over the previous 2 years. The corresponding figures from the electronic database were 1.5 admissions (range: 0–7) and 29.5 bed-days (range: 0–175).
Table 5 shows that the socio-demographic characteristics of the clients were very similar between Melbourne and London ACTTs, with almost identical proportions of males and single clients, but fewer Melbourne clients lived alone. Clients' ethnicity differed between the London and Melbourne teams but was in keeping with the local populations of the two cities. Approximately three quarters of clients in both cities were diagnosed with schizophrenia. Substance misuse and dependency over the previous 6 months were similar. More Melbourne than London clients had experienced an episode of deliberate self-harm or a period of homelessness in the previous 2 years. There were more arrests among London clients despite levels of violence being similar.
Table 5. Characteristics of clients of London (Clusters A and B) and Melbourne ACT teams

*In the last 6 months; **in the last 2 years.
Just over half (52%) of the Melbourne clients were on CTOs. Melbourne ACTT clients had been admitted to hospital more frequently than London ACTT clients, with almost half having had over ten admissions in their lifetime compared with around one-fifth of the London clients. The majority of clients in both London and Melbourne had been admitted within the last 2 years, but the length of stay in Melbourne was significantly shorter. A greater proportion of Melbourne clients (32.4%) than London clients (10.7%) had been discharged from ACT team care in the preceding 9 months. London and Melbourne staff rated their engagement with clients as moderately positive overall, although ratings on each item of the Helping Alliance Scale indicated better engagement with clients according to Melbourne staff. Engagement was rated significantly better by Melbourne staff with regard to getting along with the client (mean scores 7.2 and 6.6, respectively; Mann–Whitney U = 31 410, p = 0.003) and how actively clinicians felt themselves to be involved in clients' treatment (mean scores 7.2 and 6.4; Mann–Whitney U = 31 716.5, p = 0.003).
Discussion
This study aimed to investigate differences in ACT implementation between Melbourne and London in order to explore whether differences in implementation of the ACT model could help explain the lack of efficacy reported in the UK. A strength of the study is that ACTTs in one area of Melbourne were compared with London ACTTs using previously collected data obtained through similar methods. The comparison is also detailed and extensive, covering not only model fidelity but also other potentially important characteristics of ACTTs, including team composition and processes as well as staff and client characteristics. A limitation is that the study was not designed to investigate the efficacy of teams in either setting. This means that any interpretations of the implementation findings in this regard are tentative, because they cannot be definitively linked to client outcomes and therefore to differences in efficacy between settings. A further possible limitation was the use of staff reports to gather client data in Melbourne. However, two factors limited the potential recall bias that this approach could have created: first, the staff had very small caseloads and therefore probably knew their clients and their clinical histories very well; second, they were encouraged to check the case notes to confirm the data they supplied. The high level of correspondence between the staff-reported client admission data and the admission histories recorded in the electronic clinical database adds confidence in the accuracy of the findings.
Differences between Melbourne and London ACT teams
Client characteristics, staff satisfaction and burnout were very similar in the two cities. The Melbourne teams had been established for longer than the London teams, which might account for any differences in ACT implementation through increased familiarity and experience with the model. Our data showed that the Melbourne teams were not exactly like any of the three London team types identified in the PLAO Study (Wright et al. 2003). However, they were distinct from the non-statutory (Cluster C) London teams and shared features with the Clusters A and B teams. Although the Melbourne and London Clusters A and B teams all scored as ‘ACT-like’ overall, this may have obscured subtle but important differences between the teams. All Melbourne teams, all London Cluster A teams and most Cluster B teams had integrated health and social care, a feature of home treatment services shown to be associated with better client outcomes (Catty et al. 2002). However, only the Melbourne teams scored highly on the team approach. This feature of the ACT model enables frequent sharing of ideas about how to engage and manage clients (Killaspy et al. 2009), which is perhaps related to the better engagement with clients reported by Melbourne staff, as well as mitigating the adverse impact of changes of key workers (Davidson & Campbell, 2007). Further, the team approach appears to be the most important factor associated with reducing admissions (Burns et al. 2007). The other important feature of home treatment services identified by Catty et al. (2002) is the proportion of in vivo client contact. We found that three of the four Melbourne teams made in vivo contact with their clients in the majority of cases, compared with only a third of the London Clusters A and B teams. If home visiting is an ‘active component’ of models of home-based treatment, then this could be another extremely important difference between the implementation of ACT in Australia and the UK.
Admissions and bed days for Melbourne and London ACT clients
The Melbourne clients had more previous admissions than the London clients. This could suggest that the Melbourne teams were focusing more than the London teams on the target population for ACT, i.e. clients with frequent hospital admissions (Marshall, 2008). Alternatively, this could reflect differences in general admission patterns between the two settings, unrelated to ACTT clients. However, this interpretation is less plausible because there are proportionately fewer inpatient beds available in Melbourne, so the threshold for admission is likely to be higher than in London.
Three quarters of ACTT clients in both countries had been admitted in the 2 years preceding the study, but Melbourne clients had more admissions in these 2 years than London clients. However, admissions of Melbourne clients were shorter than those of London clients, resulting in fewer total inpatient days. This could again reflect general differences in the average length of inpatient episode between the two settings, or it may have been due to the Melbourne ACTTs being more proactive in admitting clients at an earlier stage of relapse so that shorter admissions were required. Specific differences between ACTTs in the two settings could have made this more likely: more allocated time from psychiatrists; extended hours of operation; greater responsibility for crises; a greater proportion of in vivo contact; and the availability of involuntary community treatment in Victoria. This would be in keeping with Weaver et al.'s (2003) suggestion that intensive models of community mental health care can offer an ‘anticipatory response’, i.e. responding at an early stage of relapse. The fact that only half of the London teams operated outside office hours, compared with all of the Melbourne teams, is of concern and of particular note: although the Policy Implementation Guide (Department of Health, 2001) did not insist upon a 24-hour service, it specified that ACTTs in England should provide a service from 8 a.m. to 8 p.m., 7 days per week, and offer an on-call telephone support service outside these hours, yet this was rarely the case among the London teams surveyed. In addition, previous work in Melbourne supports the notion that CTOs may enable assertive follow-up: treatment contacts were significantly greater during implementation of involuntary community treatment compared with the previous year (Muirhead et al. 2006). However, the more frequent but shorter admissions of Melbourne ACTT clients could also reflect early discharge before stability of mental health has been achieved. This is a potential disadvantage of the Australian approach, possibly evidenced by the higher rate of acute readmissions in Melbourne compared with London (11.1% v. 6.5%). We cannot comment on any differences in readmission rates for ACTT clients in the two settings because these data were not available. However, it is possible that the availability of CTOs and the greater availability of psychiatrists' time in the Melbourne ACTTs could have facilitated more successful early discharges than would have been possible for London ACT clients at the time.
Model fidelity and effective components of the ACT model
The Melbourne and London Clusters A and B teams scored similarly on the full scale and sub-scales of the DACTS. Given some of the potentially important differences between the teams identified in this study, the more ‘critical components’ of the model (the implementation of the team approach and the proportion of in vivo work) may not be adequately weighted by this scale. Further research into ACT should therefore perhaps focus on these components of the model rather than on overall programme fidelity. There may also be other aspects of the model that are not captured by fidelity measures, such as clinical leadership, team culture and the quality of the therapeutic relationship between worker and client, which are yet to be fully evaluated in terms of their relationship to efficacy (Salyers et al. 2003; Ruggeri & Tansella, 2008), although preliminary findings suggest that therapeutic alliance is significantly associated with lower re-hospitalisation rates for new ACT clients (Fakhoury et al. 2007). The ACT approach appears to enable clinicians to ‘persistently engage’ with clients and to work on issues other than medication adherence, which is helpful for the therapeutic relationship (Chinman et al. 1999; Priebe et al. 2005; Killaspy et al. 2009). Further, a greater focus on treatment content, including evidence-based interventions, is likely to be beneficial in better understanding outcomes (Sytema et al. 2007).
It is also possible that the ACT model has not been shown to be efficacious in England because usual care delivered by community mental health teams (CMHTs) incorporates more assertive outreach than comparison care in other countries (Tyrer, 2000; Burns et al. 2002). It has therefore been considered whether successful elements of the ACT model could be incorporated into a CMHT model, although this may not be feasible given the other demands on CMHT workers unrelated to the ‘difficult to engage’ ACT clients (Killaspy, 2007).
Conclusions
Using almost identical participant sampling and data collection methods, this study demonstrated that client characteristics, staff satisfaction and burnout were very similar in selected ACTTs in London and Melbourne. However, although these teams all scored as ‘ACT-like’ overall, there were potentially important differences in their implementation of the ACT model. Only the Melbourne teams scored highly on team approach and, in comparison with the London teams, a greater proportion conducted the majority of their work in clients' homes. Given the emerging literature about the active components of home treatment services, these differences in implementation may explain international differences in ACT efficacy. In view of the high investment in ACT in many countries, continuing debate about whether it is efficacious may be better focused on how to identify, maximise and sustain the most important elements of the model.
Funding and support
The PLAO studies were funded by the Department of Health, UK.
Declaration of interests
None.