Despite the need for and existence of practices that effectively prevent or treat mental health problems in children and adolescents, such practices are rarely employed in child welfare systems (Usher and Wildfire 2003; Burns et al. 2004; Leslie et al. 2004). In fact, as many as 90% of public youth-service systems, including mental health, education, juvenile justice and child welfare, do not use evidence-based practices (Hoagwood and Olin 2002). Unfortunately, our understanding of the reasons for this apparent gap between science and practice is limited to a few empirical studies and conceptual models that may or may not be empirically grounded (Aarons et al., this issue). In implementation research, mixed method designs have increasingly been used to develop a science base for understanding and overcoming barriers to implementation. More recently, they have been used in the design and implementation of strategies to facilitate the implementation of EBPs (Proctor et al. 2009). Mixed method designs focus on collecting, analyzing and merging both quantitative and qualitative data in one or more studies. The central premise of these designs is that the use of quantitative and qualitative approaches in combination provides a better understanding of research issues than either approach alone (Robins et al. 2008). In such designs, qualitative methods are used to explore and obtain depth of understanding of the reasons for success or failure to implement evidence-based practice or to identify strategies for facilitating implementation, while quantitative methods are used to test and confirm hypotheses based on an existing conceptual model and to obtain breadth of understanding of predictors of successful implementation (Teddlie and Tashakkori 2003).

In this paper, we examine the application of mixed method designs in implementation research in a sample of mental health services research studies published in peer-reviewed journals over the last 5 years. Our aim was to determine how such methods were currently being used, whether this use was consistent with the conceptual framework outlined by Aarons et al. (this issue) for understanding the phases of implementation, and whether these strategies could offer any guidance for subsequent use of mixed methods in implementation research.

Methods

We conducted a literature review of mental health services research publications over a five-year period (Jan 2005–Dec 2009), using the PubMed Central database. Data were taken from the full text of each research article. Criteria for identification and selection of articles included reports of original research and one of the following: (1) studies that were specifically identified as using mixed methods, either through keywords or a description in the title; (2) qualitative studies conducted as part of larger projects, including randomized controlled trials, which also included use of quantitative methods; or (3) studies that “quantitized” qualitative data (Miles and Huberman 1994) or “qualitized” quantitative data (Tashakkori and Teddlie 1998). Per criteria used by McKibbon and Gadd (2004), the analysis had to be fairly substantial; for example, a simple descriptive analysis of baseline demographics of the participants was not sufficient for an article to be included as a mixed method article. Further, qualitative studies that were not clearly linked to quantitative studies or methods were excluded from our review.

We next assessed the use of mixed methods in each study to determine its structure, function, and process. A taxonomy of these elements of mixed method designs and definitions of terms are provided in Table 1 below. Procedures for assessing the reliability of this classification are described elsewhere (Palinkas et al. 2010). Assessment of the structure of the research design was based on Morse’s (1991) taxonomy, which gives emphasis to timing (e.g., using methods in sequence [represented by a “→” symbol] versus using them simultaneously [represented by a “+” symbol]) and to weighting (e.g., the primary method [represented by capital letters, as in “QUAN”] versus the secondary method [represented by lower-case letters, as in “qual”]). Assessment of the function of mixed methods was based on whether the two methods were being used to answer the same question or related questions, and whether the intention of using mixed methods corresponded to any of the five types of mixed method designs described by Greene et al. (1989): Triangulation or Convergence, Complementarity, Expansion, Development, and Initiation or Sampling. Finally, the process or strategies for combining qualitative and quantitative data were assessed using the typology proposed by Creswell and Plano Clark (2007): merging or converging the two datasets by actually bringing them together, connecting the two datasets by having one build upon the other, or embedding one dataset within the other so that one type of data provides a supportive role for the other.

Table 1 Taxonomy of mixed method designs
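
To make this coding scheme concrete, the following sketch (in Python, with hypothetical field and value names; it is not the coding instrument used in the review) shows how a single study might be classified on the three dimensions summarized in Table 1.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class Timing(Enum):          # Morse (1991): sequential vs. simultaneous use
    SEQUENTIAL = "->"
    SIMULTANEOUS = "+"

class Function(Enum):        # Greene et al. (1989), as adapted in Table 1
    CONVERGENCE = "convergence/triangulation"
    COMPLEMENTARITY = "complementarity"
    EXPANSION = "expansion"
    DEVELOPMENT = "development"
    SAMPLING = "initiation/sampling"

class Process(Enum):         # Creswell and Plano Clark (2007)
    MERGING = "merging"
    CONNECTING = "connecting"
    EMBEDDING = "embedding"

@dataclass
class MixedMethodCoding:
    study: str
    structure: str                          # Morse notation, e.g. "QUAN + qual"
    timing: Timing
    functions: List[Function] = field(default_factory=list)
    processes: List[Process] = field(default_factory=list)

# Illustrative coding of one study discussed in the text (for example only)
example = MixedMethodCoding(
    study="Sharkey et al. 2005",
    structure="QUAN + qual",                # quantitative method dominant
    timing=Timing.SIMULTANEOUS,
    functions=[Function.COMPLEMENTARITY],
    processes=[Process.EMBEDDING],
)
print(example.structure, example.timing.value)
```
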

Results

Our search identified 22 articles published between 2005 and 2009 that met our criteria for analysis. Our analyses revealed 7 different structural arrangements, 5 different functions of mixed methods, and 3 different ways of linking quantitative and qualitative data together. Many studies included more than one structural arrangement, function or process; hence the raw numbers often added up to more than the total number of studies reviewed. Twelve of the 22 papers presented qualitative data only, but were part of larger studies that included the use of quantitative measures.

Mixed Method Structure

In 9 of the 22 studies reviewed, quantitative and qualitative methods were used in sequence, and in 19 studies they were used simultaneously; six studies used them in both sequential and simultaneous fashion. Sequential designs are dictated by the specific methodology, study objectives, or logistical issues in the collection and analysis of data. For instance, Proctor et al. (2007) conducted a qualitative pilot study to capture the perspective of agency directors on the challenge of implementing evidence-based practices in community mental health agencies prior to the development and testing of a specific implementation intervention, in the belief that incorporating this perspective at the development stage would lead to a more successful outcome that would then be assessed using quantitative methods (qual → QUAN). Using the technique of concept mapping (Trochim 1989), Aarons et al. (2009) solicited information on factors likely to impact implementation of EBPs in public-sector mental health settings from 31 service providers and consumers organized into 6 focus groups. Each participant then sorted a series of 105 statements into piles and rated each statement according to importance and changeability. Data were then entered into a software program that uses multidimensional scaling and hierarchical cluster analysis to generate a visual display of how statements clustered across all participants. Finally, 22 of the original 31 participants assigned meaning to and identified an appropriate name for each of the clusters identified (Aarons et al. 2009).
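
The analytic core of concept mapping, deriving a spatial layout and clusters from participants’ pile sorts, can be sketched with generic scientific-Python tools as follows. This is a minimal illustration only, with simulated sort data; Trochim’s procedure and the software used by Aarons et al. involve additional steps (e.g., bridging values, cluster rating maps) not shown here.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

# Simulated pile-sort data: sorts[p][s] = pile in which participant p placed statement s
rng = np.random.default_rng(0)
n_participants, n_statements = 31, 105
sorts = rng.integers(0, 8, size=(n_participants, n_statements))

# Similarity: proportion of participants who sorted each pair of statements together
together = np.zeros((n_statements, n_statements))
for p in range(n_participants):
    together += (sorts[p][:, None] == sorts[p][None, :]).astype(float)
similarity = together / n_participants
distance = 1.0 - similarity

# Multidimensional scaling places each statement on a two-dimensional "point map"
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distance)

# Hierarchical clustering of the MDS coordinates groups statements into candidate clusters
tree = linkage(coords, method="ward")
clusters = fcluster(tree, t=6, criterion="maxclust")  # number of clusters chosen arbitrarily here
print(coords.shape, np.bincount(clusters)[1:])
```
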

As an example of simultaneous collection and analysis of qualitative and quantitative data, Sharkey et al. (2005) conducted a qualitative study of factors affecting the implementation of a randomized controlled trial in parallel with the trial’s quantitative assessment of the effectiveness of a transitional discharge model for people with a serious mental illness (QUAN + qual). Aarons and Palinkas (Aarons and Palinkas 2007; Palinkas and Aarons 2009) simultaneously collected qualitative data through annual interviews and focus groups and quantitative data through semi-annual web-based surveys to assess the process of implementation of SafeCare®, an intervention designed to reduce child neglect and out-of-home placements of neglected children. The study also assessed the intervention’s impact on agency organizational culture and climate and on the therapeutic relationship between home visitor and client family (QUAN + QUAL).

With respect to the weighting or prioritization of each method, all but one of the studies examined had unbalanced designs; of these, 19 studies used quantitative methods as the primary or dominant method and qualitative methods as the secondary or subordinate method. For instance, a qualitative assessment by Palinkas et al. (2008) of the process of implementation of evidence-based treatments for depression, anxiety and conduct disorders in children was secondary to the primary aim of evaluating the effectiveness of two different variations of the treatments, one based on the standardized use of manualized treatments and the other based on a modular approach (QUAN + qual). In two studies (Aarons et al. 2009; Bachman et al. 2009) qualitative methods were primary and quantitative methods were secondary (quan + QUAL); in two other studies (Aarons and Palinkas 2007; Marty et al. 2008) both types of unbalanced designs were used.

Ten of the 22 studies included balanced designs in which quantitative and qualitative methods were given equal weight. In all 10 studies, the methods were used simultaneously (QUAN + QUAL). Whitley et al. (2009) documented the process of implementation of an illness management and recovery program for people with severe mental illness in community mental health settings using qualitative data to assess perceived barriers and facilitators of implementation and quantitative data to assess implementation performance based on assessments of fidelity to the practice model, with no overriding priority assigned to either aim. Some studies gave equal weight to qualitative and quantitative data for the purpose of evaluating fidelity and implementation barriers/facilitators even though the collection of qualitative data to assess implementation was viewed as secondary to the overall goal of evaluating the effectiveness of an intervention (e.g., Marshall et al. 2008; Marty et al. 2008; Rapp et al. 2009).

Mixed Method Function

Our review revealed five distinct functions of mixing methods. The first function was convergence, in which qualitative and quantitative methods were used sequentially or simultaneously to answer the same question. Eight (36%) of the studies included this function. We identified two specific forms of convergence, triangulation and transformation. Triangulation involves the use of one type of data to validate or confirm conclusions reached from analysis of the other type of data. For instance, in examining the sustainability of evidence-based practices in routine mental health agencies, Swain et al. (2009) used triangulation to identify commonalities and disparities between quantitative data obtained from closed-ended questions and qualitative data obtained from open-ended questions in a survey administered to 49 participants, each representing a distinct practice site. Transformation involves the sequential quantification of qualitative data (e.g., qual → QUAN) or the use of qualitative techniques to transform quantitative data. The technique of concept mapping used by Aarons et al. (2009), in which qualitative data elicited from focus groups are “quantitized” using multidimensional scaling and hierarchical cluster analysis, is an example of transformation.

In 14 studies, quantitative and qualitative methods were used in complementary fashion to answer related questions for the purpose of evaluation. For instance, Hoagwood et al. (2007) used a case study of an individual child to describe the process of implementation of an evidence-based, trauma-focused, cognitive-behavioral therapy for treatment of symptoms of PTSD in children living in New York City in the aftermath of the World Trade Center attack on September 11, 2001. Although the article does provide information on the outcome of the child’s treatment, the case study method was intended more to illustrate the process of treatment, beginning with engagement and moving to assessment, treatment, and finally, to outcome. This technique also illustrates the use of an elaborative design in which qualitative methods are used to provide depth of understanding to complement the breadth of understanding afforded by quantitative methods. In this instance, the “thick description” of the child’s progress from symptom presentation to completion of treatment offers a degree of depth of understanding of the experience of this child and other study participants that is not possible from measures on standardized clinical assessment instruments alone.

In 13 of the studies, mixed method designs exhibited the function of expansion, in which qualitative data were used to explain findings from the analyses of quantitative data. For instance, Kramer and Burns (2008) used data from qualitative interviews with providers as part of a summative evaluation to understand the factors contributing to partial or full implementation of a CBT for depressed adolescents in two publicly funded mental healthcare settings. Brunette et al. (2008) used qualitative data collected from interviews and ethnographic observations to elucidate barriers and facilitators to implementation of integrated dual disorders treatment and to explain differences in treatment fidelity across the study sites.

Mixed methods were also used in 6 studies for the purpose of developing new measures, conceptual models, or interventions. In one study (Blasinsky et al. 2006), development of a rating scale to construct predictors of program outcomes and sustainability of a collaborative care intervention to assist older adults suffering from major depression or dysthymia involved the sequential use of QUAL to identify the form and content of items to be used in a QUAN study, e.g., survey questions (qual → QUAN). In a second study, qualitative data were sequentially collected and analyzed to develop a conceptual framework for generating hypotheses about the adoption and implementation of Functional Family Therapy in a sample of family and child mental health services organizations in New York State, hypotheses that would then be tested using quantitative methods (qual → QUAN) (Zazzali et al. 2008). In two studies, intervention development or adaptation involved the use of qualitative methods to develop new interventions or adapt existing interventions to new populations (qual → QUAN). For instance, semi-structured interviews were conducted by Henke et al. (2008) to test the feasibility of a primary care depression performance-based reward program.

Finally, mixed methods were used when one method was needed to identify a sample of participants for the other method. This technique was used in 5 of the 22 studies (23%). One form of sampling was the sequential use of QUAN data to identify potential participants for a QUAL study (quan → QUAL). Aarons and Palinkas (2007), for example, used the results of a web-based quantitative survey on the perceived value and usefulness of SafeCare® to select, for extended semi-structured interviews, the clinical case managers with the most positive and most negative views of the evidence-based practice. The other form of sampling used qualitative data to identify samples of participants for quantitative analysis. A study of staff turnover in the implementation of evidence-based practices in mental health care by Woltmann et al. (2008) used qualitative data obtained through interviews with staff, clinic directors and consultant trainers to create categories of turnover and designations of positive, negative and mixed influence of turnover on outcomes. These categories were then quantitatively compared with implementation outcomes via simple tabulations of fidelity and penetration means for each category.
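
The final quantitative step described by Woltmann et al., tabulating implementation outcomes by qualitatively derived turnover category, amounts to a grouped summary. The sketch below uses invented site-level data and column names (not the study’s actual variables) to illustrate the form of that tabulation.

```python
import pandas as pd

# Hypothetical site-level data: the turnover category and influence designation come
# from qualitative coding; fidelity and penetration are quantitative implementation outcomes.
sites = pd.DataFrame({
    "turnover_category": ["none", "low", "high", "low", "high", "none"],
    "influence":         ["positive", "mixed", "negative", "positive", "mixed", "positive"],
    "fidelity":          [4.2, 3.8, 3.1, 4.0, 3.3, 4.4],
    "penetration":       [0.61, 0.48, 0.35, 0.52, 0.40, 0.66],
})

# Mean fidelity and penetration for each qualitatively derived turnover category
summary = sites.groupby("turnover_category")[["fidelity", "penetration"]].mean()
print(summary)
```
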

Mixed Method Process

The integration of quantitative and qualitative data occurred in three forms: merging the data, connecting the data, and embedding the data. In 17 studies, the qualitative study was embedded within a larger quantitative effectiveness trial or implementation study. Slade et al. (2008) nested a qualitative study within a multi-site randomized controlled trial of a standardized assessment of mental health problem severity to determine whether the intervention improved agreement on referrals and to identify professional and organizational barriers to implementation. In 11 studies, the insights gained from one type of method were connected to a different type of method to answer related questions through complementarity, expansion, development or sampling. Thus, the qualitative assessment of agency director perspectives on implementation of evidence-based practices by Proctor et al. (2007) was designed as a pilot-stage step in a research agenda to develop and quantitatively test an implementation intervention. Zazzali et al. (2008) connected qualitative data collected from semi-structured interviews with 15 program administrators to the development of a conceptual model of implementation of Functional Family Therapy that could then be tested using quantitative methods. In 10 studies, qualitative and quantitative data were brought together in the analysis phase to answer the same question through triangulation or related questions through complementarity. Bachman et al. (2009) merged qualitative data collected from semi-structured interviews with quantitative data collected from two surveys to describe and compare the experience of integrating children’s services in 35 children’s trusts in England.

Mixed Methods and Phases of Implementation

Using the conceptual framework proposed by Aarons et al. (this issue), we also mapped the use of mixed methods in the 22 studies reviewed along two dimensions: phase of implementation, and inner versus outer context. The results are presented in Table 2 below. Fifteen of the 22 studies focused on the implementation stage and 13 studies focused on organizational characteristics that facilitated or impeded implementation. Only two studies focused on the exploration stage (Aarons et al. 2009; Proctor et al. 2007). Two studies focused on the adoption stage (Palinkas et al. 2008; Zazzali et al. 2008), and two studies focused on the sustainability stage (Blasinsky et al. 2006; Swain et al. 2009). One study (Bearsley-Smith et al. 2007) proposed to study the adoption, implementation and sustainability stages in a longitudinal fashion; however, the article provided few details on the elements of inner or outer context to be examined. The majority of studies that examined socio-political context and funding issues focused on the implementation or sustainability stages, while the majority of studies that examined organizational and individual adopter characteristics focused on the implementation stage. Only one study (Aarons et al. 2009) examined the role of client advocacy.

Table 2 Studies using mixed methods to examine outer and inner context by implementation stage
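
The mapping itself is a simple cross-tabulation of study-level codes. The sketch below, using invented study labels and codes rather than the actual Table 2 entries, illustrates how such a table can be produced from a coded study list.

```python
import pandas as pd

# Each reviewed study coded by implementation phase and by the context dimension examined
# (codes below are illustrative placeholders, not the review's actual assignments).
studies = pd.DataFrame({
    "study":   ["A", "B", "C", "D", "E", "F"],
    "phase":   ["exploration", "adoption", "implementation",
                "implementation", "sustainability", "implementation"],
    "context": ["outer", "inner", "inner", "outer", "outer", "inner"],
})

# Cross-tabulation of implementation phase by inner/outer context, analogous in form to Table 2
table2_like = pd.crosstab(studies["phase"], studies["context"])
print(table2_like)
```
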

Discussion

Our analysis of the 22 studies uncovered five major reasons for using mixed method designs in implementation research. The first reason was to use quantitative methods to measure intervention and/or implementation outcomes and qualitative methods to understand process. This aim was explicit in 11 of the 22 studies. Qualitative inquiry is highly appropriate for studying process because (1) depicting process requires detailed descriptions of how people engage with one another, (2) the experience of process typically varies for different people, so their experiences need to be captured in their own words, (3) process is fluid and dynamic, so it cannot be fairly summarized on a single rating scale at one point in time, and (4) participants’ perceptions are a key process consideration (Patton 2001).

The second reason was to conduct both exploratory and confirmatory research. In mixed method designs, qualitative methods are used to explore a phenomenon and generate a conceptual model along with testable hypotheses, while quantitative methods are used to confirm the validity of the model by testing the hypotheses (Teddlie and Tashakkori 2003). This combined focus is also consistent with the call by funding agencies (NIMH 2004) and others (Proctor et al. 2009) to develop new conceptual models and new measures to test these models. Several of the studies focused on development of new measures (Blasinsky et al. 2006; Slade et al. 2008) or conceptual frameworks (Zazzali et al. 2008), or on the development of new interventions or adaptation of existing ones (Proctor et al. 2007; Henke et al. 2008).

The third reason was to examine both intervention content and context. Many of the studies included in this review used mixed methods to examine the context of implementation of a specific intervention (e.g., Henke et al. 2008; Sharkey et al. 2005; Slade et al. 2008; Whitley et al. 2009). Unlike efficacy studies, in which context can be controlled, implementation research occurs in real-world settings distinguished by their complexity and variation in context (Landsverk et al., this issue). Qualitative methods are especially suited to understanding context (Bernard 1988). In contrast, quantitative methods were used to measure aspects of the content of the intervention in addition to the intervention’s outcomes. A particularly important element of content was the degree of fidelity with which the intervention was applied. Schoenwald et al. (this issue) discuss different strategies for the quantitative measurement of fidelity to explain variation in intervention/implementation outcomes.

The fourth reason for using mixed methods was to incorporate the perspective of potential consumers of evidence-based practices (both practitioners and clients) (Proctor et al. 2009). As observed by Aarons et al. (this issue), some models of organizational change and innovation adoption highlight the importance of actively involving key stakeholders during the process of considering and preparing for innovation adoption. Use of qualitative methods gives voice to these stakeholders (Sofaer 1999) and allows partners an opportunity to express their own perspectives, values and opinions (Palinkas et al. 2009). Obtaining such a perspective was an explicit aim of studies by Henke et al. (2008), Proctor et al. (2007), Aarons et al. (2009), and Palinkas and Aarons (2009). A mixed method approach is also consistent with the need to understand patient and provider preferences in the use of Sequential Multiple Assignment Randomized Trial (SMART) designs when testing and evaluating the effectiveness of different strategies to improve implementation outcomes (Landsverk et al., this issue).

Finally, mixed methods were used to compensate for the limitations of one set of methods through the use of another. For instance, convergence or triangulation of quantitative and qualitative data was an explicit feature of the mixed method study of the implementation of SafeCare® in Oklahoma by Aarons and Palinkas (Aarons and Palinkas 2007; Palinkas and Aarons 2009) because of limited statistical power in quantitative analyses that were nested in teams of service providers, a common problem of implementation research (Proctor et al. 2009; Landsverk et al., this issue).

The studies examined in this review represent a continuum of mixed method designs that ranges from the simple to the complex. Simple designs were observed in single studies that had a limited objective or scope. For instance, in seeking to determine whether the experience of using an EBP accounted for possible changes in attitudes towards its use, Gioia and Dziadosz (2008) used semi-structured interview and focus group methods to obtain first-hand accounts of practitioners’ experiences of being trained to use an EBP, and a quantitative measure of attitudes towards the use of EBPs to identify changes in attitudes over time. In contrast, complex designs usually involve more than one study, linked by a set of related objectives. For instance, Bearsley-Smith et al. (2007) describe a protocol for a cluster randomized feasibility trial in which quantitative measures are used in studies designed to evaluate program outcomes (e.g., diagnostic status and clinical severity, client satisfaction) and measure program fidelity, while qualitative methods (clinician focus groups and semi-structured client interviews) are used in studies designed to assess the process of implementation and explain quantitative findings.

In addition to study objectives, complexity of mixed method designs is also related to the context in which the study or studies were conducted. For instance, six of the studies reviewed were embedded in a larger effort known as the National Evidence-Based Practice Implementation Project, which was designed to explore whether EBPs can be implemented in routine mental health service settings and to discover the facilitating conditions, barriers, and strategies that affected implementation (Brunette et al. 2008; Marshall et al. 2008; Marty et al. 2008; Rapp et al. 2009; Whitley et al. 2009). Two additional studies (Aarons and Palinkas 2007; Palinkas and Aarons 2009) were part of a mixed method study of implementation embedded in a statewide randomized controlled trial of the effectiveness of an evidence-based practice for reducing child neglect and out-of-home foster placements. In each instance, the rationale for the use of a mixed method design was determined by its role in the larger project (primary or secondary), resulting in an unbalanced structure and an emphasis on complementarity to understand the process of implementation and on expansion to explain outcomes of the larger project. However, the embedded mixed method study itself often reflected a balanced structure and the use of convergence, complementarity, expansion, and sampling to understand barriers and facilitators of implementation.

Complexity of mixed method designs is also related to the phase of implementation under examination. Mixed method studies of the exploration and adoption phases described by Aarons et al. (this issue) tended to utilize less complex designs characterized by a sequential unbalanced structure for the purpose of seeking convergence through transformation or developing new measures, conceptual frameworks or interventions, and a process of connecting the data. In contrast, studies of the implementation and sustainability phases tended to utilize more complex designs characterized by a simultaneous balanced or unbalanced structure for the purpose of seeking convergence through triangulation, complementarity, expansion and sampling, and a process of embedding the data. Nevertheless, as these studies illustrate, research on any of the four phases of implementation described by Aarons et al. may utilize and benefit from the application of any combination of elements of structure, function and process as long as this combination is consistent with study aims and context.

Our examination of these studies also revealed other noteworthy characteristics of mixed method designs in implementation research. First, the vast majority of studies reviewed utilized observational designs. As Landsverk et al. (this issue) and others (Proctor et al. 2009) have noted, most early research on implementation was observational in nature, relying upon naturalistic case study approaches. More recently, prospective, experimental designs have been used to develop, test and evaluate specific strategies designed to increase the likelihood of implementation (Chamberlain et al. 2008; Glisson and Schoenwald 2005). Second, all of the 22 studies reviewed focused on characteristics of organizations and individual adopters that facilitated or impeded the process of implementation. Only seven studies included a focus on the outer context or the interorganizational component of the inner context of implementation (Aarons et al., this issue). Third, only 2 of the 22 studies (Aarons and Palinkas 2007; Palinkas and Aarons 2009) focused on implementation in child welfare settings. Given the issues facing child welfare, such as the lack of professional education focused on evidence-based practices, and given the richness of information that mixed methods can elicit, the paucity of studies on implementation in child welfare is surprising.

However, there are ongoing efforts to incorporate mixed method designs in research on the implementation of evidence-based practices that include experimental designs to evaluate implementation strategies and an examination of outer and interorganizational context, and that are situated in child welfare settings. Two such efforts are Using Community Development Teams to Scale-up MTFC in California (Patricia Chamberlain, Principal Investigator) and Cascading Diffusion of an Evidence-Based Child Maltreatment Intervention (Mark Chaffin, Principal Investigator). The first is a randomized controlled trial designed to evaluate the effectiveness of a strategy for implementing Multidimensional Treatment Foster Care (MTFC; Chamberlain et al. 2007), an evidence-based program for out-of-home youth aged 8–18 with emotional or behavioral problems. Mixed methods are being used to examine the structure and operation of system leaders’ influence networks and their use of research evidence. The Cascading Diffusion Project is a demonstration grant examining whether a model of planned diffusion of an evidence-based practice can develop a network of services with self-sustaining levels of model fidelity and provider competency. A mixed method approach is being employed to describe the relationships between provider staff, system and organizational factors, and their impact on the implementation process. In both projects, qualitative and quantitative methods are being used in a simultaneous, unbalanced arrangement for the purpose of seeking complementarity, using quantitative methods to achieve breadth of understanding (i.e., generalizability) of both content (i.e., fidelity) and outcomes (i.e., stage of implementation, number of children placed, recidivism), and qualitative methods to achieve depth of understanding (i.e., thick description) of both the process and the inner and outer context of implementation, all in an embedded design.

In calling for changes in the current approach to evidence in health care to accelerate the improvement of systems of care and practice, Berwick (2008) recommends embracing a wider range of scientific methodologies than the usual RCT experimental design. These methodologies include assessment techniques developed in engineering and used in quality improvement (e.g., statistical process control, time series analysis, simulations, and factorial experiments) as well as ethnography, anthropology, and other qualitative methods. Berwick argues that such methods are essential to understanding the mechanisms and context of implementation and quality improvement. Nevertheless, it is the combining of these methods through mixed method designs that is likely to hold the greatest promise for advancing our understanding of why evidence-based practices are not being used, what can be done to get them into routine use, and how to accelerate the improvement of systems of care and practice.