
Res Nurs Health. Author manuscript; available in PMC 2012 Aug 1.

PMCID: PMC3409469

NIHMSID: NIHMS231905

Implementation Fidelity in Community-Based Interventions

Susan M. Breitenstein, PhD, RN, Deborah Gross, DNSc, RN, FAAN, Christine Garvey, PhD, RN, Carri Hill, PhD, Louis Fogg, PhD, and Barbara Resnick, PhD, RN, FAAN

Abstract

Implementation fidelity is the degree to which an intervention is delivered as intended and is critical to successful translation of evidence-based interventions into practice. Diminished fidelity may be why interventions that work well in highly controlled trials may fail to yield the same outcomes when applied in real-life contexts. The purpose of this paper is to define implementation fidelity and describe its importance for the larger science of implementation, discuss data collection methods and current efforts in measuring implementation fidelity in community-based prevention interventions, and present future research directions for measuring implementation fidelity that will advance implementation science.

Keywords: implementation, fidelity, research translation

There is increasing awareness that prevention programs shown to be effective in clinical trials may not impact the health of society unless they are delivered with fidelity (Fixsen, Naoom, Blase, Friedman, & Wallace, 2005; Mihalic, 2004). Implementation fidelity refers to the degree to which an intervention is delivered as intended; it is critical to successful translation of evidence-based interventions into practice (C. Carroll et al., 2007; Mihalic, 2004). With greater focus on disseminating prevention programs in community settings comes an increased need for developing feasible and valid strategies for monitoring implementation fidelity in the community (Kerner, Rimer, & Emmons, 2005). However, few community-based researchers have reported their procedures for monitoring implementation fidelity, and even fewer have evaluated the validity of those strategies (Dusenbury, Brannigan, Falco, & Hansen, 2003; Stein, Sargent, & Rafaels, 2007). The purpose of this paper is to: (a) define implementation fidelity and describe its importance for the larger science of implementation, (b) discuss data collection methods and current efforts to measure implementation fidelity in community-based prevention interventions, and (c) present future research directions for measuring implementation fidelity that will advance implementation science.

Definition of Terms

Several terms are used interchangeably to describe the fidelity of implementing an intervention. Terms used include implementation fidelity (C. Carroll et al., 2007; Lee et al., 2008; Mihalic, 2004; Rohrbach, Gunning, Sun, & Sussman, in press), fidelity of implementation (Dusenbury et al., 2003; Sánchez et al., 2007), fidelity (Fixsen et al., 2005; Forgatch, Patterson, & DeGarmo, 2005), treatment fidelity (Eames et al., 2008; Hogue et al., 2008), treatment integrity (Perepletchikova, Treat, & Kazdin, 2007), and intervention fidelity (Santacroce, Maccarelli, & Grey, 2004; Stein et al., 2007). These terms share a similar definition: an intervention being delivered as intended by the program developers and in line with the program model. Although general definitions for these terms are consistent, specific dimensions of fidelity differ. For example, adherence to an intervention, content, process, dose, quality of delivery, competence, participant responsiveness, and program differentiation are dimensions of fidelity defined in the literature (Barber, Sharpless, Klostermann, & McCarthy, 2007; Martino, Ball, Nich, Frankforter, & Carroll, 2008; Stein et al., 2007). In this paper, the term implementation fidelity was chosen because it is most frequently used in prevention and community-based intervention research (C. Carroll et al., 2007; Lee et al., 2008; Mihalic, Fagan, & Argamaso, 2008). Two aspects of implementation fidelity will be addressed: the degree to which an intervention is conducted (a) competently (competence) and (b) according to protocol (adherence; C. Carroll et al., 2007; Dusenbury, Brannigan, Hansen, Walsh, & Falco, 2005).

Adherence and Competence

Adherence refers to the extent to which practitioners' (i.e., the individuals implementing the intervention) behaviors conform to the intervention protocol (Hogue et al., 2008). Measurements of adherence focus on the quantity or presence of prescribed behaviors defined in an intervention manual or by the intervention protocol. Adherence measures evaluate those components specific and essential to the defined intervention.

Competence relates to the skillfulness in the delivery of the intervention and includes interpersonal and process level skills (Forgatch et al., 2005; Perepletchikova & Kazdin, 2005; Stein et al., 2007). In contrast to adherence, which measures whether the protocol has been fully implemented, competence refers to how well the protocol is implemented. Competence in delivering an intervention includes qualities related to communication, technical abilities, and skills in responding to the participants receiving the intervention.

Intervention context matters, and measurements of competence and adherence components should account for the context of the intervention setting. For example, in a group-based intervention, group members who monopolize the discussion test the practitioner's skill in keeping the other group members engaged while adhering to the intervention protocol. By contrast, the practitioner delivering an individually administered intervention in a busy primary care setting serving uninsured patients with low health literacy levels will have different challenges than a practitioner delivering the same intervention in a small, private practice office serving a well-educated, economically advantaged population of patients. Therefore, including assessments of relevant contextual factors that could affect fidelity, such as intragroup dynamics or contextual variations, would provide a broader understanding of practitioners' competence.

Implementation Fidelity: A Component of Implementation Science

According to Eccles and Mittman (2006), implementation research is “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services and care” (para. 2). Fidelity is a key ingredient for the systematic implementation of evidence-based interventions in community settings. For larger scale dissemination of interventions to be effective, researchers need to understand the processes required to implement the intervention consistently and at a high level of quality, especially when different practitioners with different levels of expertise are implementing the intervention in different contexts (Glasgow, Lichtenstein, & Marcus, 2003). Research that advances our understanding of the processes needed to maintain implementation fidelity will be a critical step toward creating sustainable interventions.

Fixsen and colleagues (2005) described a framework for implementation science that includes core implementation components. The core implementation components are practitioner selection, pre-service and in-service training, ongoing coaching and supervision, practitioner performance evaluation (assessment of implementation fidelity), decision support data systems, facilitative administrative supports, and system interventions (see Fixsen et al. for a comprehensive description of these components). The practitioner level components of this framework are practitioner selection, training, and coaching and supervision. Implementation fidelity assessment acts as a feedback mechanism to the practitioner level components to improve practitioners' performance and ultimately effect targeted improvements of the intervention. Therefore, implementation fidelity assessment creates a feedback loop to inform practitioner selection requirements, improvements in training, and ongoing coaching and supervision (see Figure 1).

Figure 1. Relationship of Practitioner-Level Core Implementation Components

In addition to the core implementation components suggested by Fixsen and colleagues (2005), other useful frameworks for guiding the research and practice of implementation science include implementation fidelity as a key component. Example frameworks include the RE-AIM model (Glasgow et al., 2003), the Department of Veterans Affairs Quality Enhancement Research Initiative (QUERI; Bowman, Sobo, Asch, & Gifford, 2008), and the Behavior Change Consortium (BCC) framework for treatment fidelity (Resnick et al., 2005).

Why Does Implementation Fidelity Matter?

Diminished fidelity during dissemination of interventions may be why interventions that work well in highly controlled trials may fail to yield the same outcomes when applied in real-life contexts (Bellg et al., 2004; Elliott & Mihalic, 2004). Several barriers to maintaining implementation fidelity in real-life contexts have been identified. These include local adaptations of interventions, individual variations in practitioner adherence and competence, lack of available training and technical support, limited resources for supporting the intervention at the site level, and competing demands for practitioners' time that can diminish their commitment or effectiveness (Botvin, 2004; Dévieux et al., 2005; Hill, Maucione, & Hood, 2007). Lack of implementation fidelity can weaken outcomes, leading to faulty conclusions about intervention effectiveness. Because they can cause potentially useful interventions to appear ineffective, failures in implementation fidelity have been identified as type III errors (Dobson & Singer, 2005; Sánchez et al., 2007). To avoid a type III error, clear and feasible strategies for monitoring and measuring implementation fidelity should be delineated prior to initiating an intervention study or dissemination effort.

Current Efforts and Methods for Measuring Implementation Fidelity

Research reviewers have found little empirical work on systematic and comprehensive strategies for measuring implementation fidelity of prevention interventions in community settings. For example, a review of parent training outcome studies published from 1980 to 1988 indicated that only 6% of studies included discussion of fidelity monitoring related to implementation (Rogers Wiese, 1992). Dane and Schneider (1998) found that of outcome studies of primary and secondary prevention programs conducted between 1980 and 1994, only 24% reported procedures related to measuring fidelity. Finally, using a 22-item scale evaluating the quality of fidelity measurement, Perepletchikova et al. (2007) reviewed psychosocial interventions published between 2000 and 2004 and found that procedures for measuring fidelity were rarely reported, with only 3.5% of the interventions adequately addressing fidelity.

More recently, greater attention has been given to the measurement of implementation fidelity in community-based preventive interventions (The Conduct Problems Prevention Research Group, 2002; Lee et al., 2008; Mihalic et al., 2008). We will describe the most commonly reported methods used to collect fidelity data: self-report, in vivo and video observations, and audio recordings. Each of these methods has unique advantages and limitations, which are described below.

Self-Report

Self-report methods involve information collected directly from the practitioner or intervention participant. Data may be gathered using checklists or verbal report. Most self-report fidelity measures ask practitioners to indicate whether they implemented identified components of the intervention protocol. Self-report measures from intervention participants may ask whether they received identified components of the intervention. As such, self-report methods typically measure practitioner adherence rather than competence.
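To make the checklist approach concrete, the following minimal Python sketch scores a practitioner's self-report adherence checklist. The item names and the percent-complete scoring rule are hypothetical illustrations, not the instrument used in any study cited here.

```python
# Minimal sketch of scoring a practitioner self-report adherence checklist.
# Item names and the scoring rule are hypothetical, not from any cited study.

session_checklist = {
    "reviewed_home_practice": True,
    "presented_session_content": True,
    "led_group_discussion": False,
    "assigned_home_practice": True,
}

def adherence_score(checklist: dict[str, bool]) -> float:
    """Percentage of protocol components the practitioner reports delivering."""
    return 100 * sum(checklist.values()) / len(checklist)

print(f"Self-reported adherence: {adherence_score(session_checklist):.0f}%")  # 75%
```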

Lee and colleagues (2008) developed an innovative self-report method using a web-based system to monitor implementation fidelity of a school-based prevention program for reducing student conduct problems. Trained family advocates implemented the intervention, then completed weekly on-line adherence surveys to report adherence to the prevention program. Lee et al. found the on-line surveys to be a feasible and time efficient strategy for collecting adherence indices during dissemination.

There are several advantages to using self-report data to assess implementation fidelity. Self-report methods are inexpensive and less time consuming than observational methods. Data collected from practitioners' self-reports may provide important clinical information regarding the portability of the intervention during dissemination. For example, through on-line self-report surveys, Lee and colleagues (2008) identified program components that were difficult to adhere to during dissemination, thus informing training and future versions of the program.

Potential limitations of self-report data relate to validity and accuracy. For example, participant reports may be biased by their feelings toward the practitioner. Distortions in data may occur due to poor recollection by practitioners, practitioners' propensity to over-report adherence, or their desire to provide positive assessments of their adherence to protocols (Breitenstein, 2010; Lillehoj, Griffin, & Spoth, 2004; Perepletchikova et al., 2007). For example, although Lee and colleagues (2008) found their web-based fidelity monitoring system feasible and informative, it is possible that these data would differ from data obtained from concurrent observations. Thus, the validity of self-report fidelity ratings may be limited.

Observations

Observations of intervention sessions are designed to provide information regarding the adherence and competence of practitioners. Observers rate practitioners' adherence to the defined intervention protocol, and their competence in delivering it, using a fidelity instrument that identifies key components of the intervention. An important advantage of observation data is that they are generally considered more accurate than self-report, providing a more objective assessment of practitioners' and participants' behavior (Dusenbury et al., 2003). The majority of fidelity efforts in preventive interventions use observational data (Mihalic et al., 2008).

There are some disadvantages to observational methods when compared to self-report. Observational methods can be costly and labor-intensive. For instance, observers may require extensive training to rate competence accurately; researchers have recently reported that coder training on fidelity measures of competence took 15 to 40 hours per coder (Dumas, Lynch, Laughlin, Smith, & Prinz, 2001; Eames et al., 2008; Forgatch et al., 2005). Finally, practitioner reactivity to observation can change implementation fidelity, and not necessarily in a systematic way. For example, some practitioners may be more adherent to the protocol while they are under observation; others may become anxious about being observed, leading to diminished adherence or competence. Therefore, reactivity effects may produce less accurate estimates of implementation fidelity.

In vivo

In vivo observations are conducted with an observer present at the intervention session. During in vivo observations, fidelity measures are coded during or immediately following the session. An advantage of in vivo observation is that it allows for an overall assessment of the external environment of the intervention that may influence implementation fidelity. Another advantage is the observer's opportunity to detect nonverbal forms of communication that can affect learning or that signal how the intervention is being received. For example, the opportunity to observe eye contact and nonverbal gestures of understanding might signal recipient engagement and understanding of the intervention content.

In vivo observations have several disadvantages. Live observations by trained coders may not be feasible due to scheduling conflicts or geographic distance, and feasibility may be particularly problematic during larger scale dissemination. Ongoing estimates of inter-rater reliability require having two coders in the session simultaneously, further diminishing feasibility.

Video recording

To address certain limitations described above, some researchers use video recordings of intervention sessions that are later evaluated by trained observers for fidelity. There are several advantages to using video recorded observations. Unlike in vivo observations, video recorded data provide the opportunity to assess reliability among fidelity raters, observe randomly selected portions of the intervention to evaluate ongoing fidelity, and observe intervention sessions multiple times. The ability to re-examine an intervention session allows coders to capture complex interactions that might elude in vivo observers (Gardner, 2000; Zelenko, 2004). Further, video cameras have become smaller and less expensive, making them a more feasible and affordable option than in the past.

Several community-based studies have used video recordings to assess fidelity. For example, Forgatch and colleagues (2005) and Eames et al. (2008) evaluated the fidelity of their parent-training programs using observations from video recorded data. Forgatch and colleagues observed 10-minute segments of recordings. Eames and colleagues used continuous time sampling of video recordings to assess the frequency of process skills (competence) of group leaders delivering a parenting program. Both research groups reported high inter-rater reliability (intraclass correlations ranged from .71 to .99), providing evidence that video recorded data provided adequate behavior samples for obtaining reliable fidelity data.
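As an illustration of how such reliability estimates are computed, the Python sketch below calculates a two-way random-effects, single-rater intraclass correlation, often labeled ICC(2,1), which is one common form of the statistic reported above. The ratings matrix is invented for illustration, and the cited studies may have used other ICC variants.

```python
import numpy as np

# Illustrative two-way random-effects, single-rater ICC computed by hand.
# Rows are rated sessions (targets); columns are independent fidelity coders.
# The ratings below are invented for illustration only.
ratings = np.array([
    [4, 4, 5],
    [3, 3, 3],
    [5, 4, 5],
    [2, 3, 2],
    [4, 5, 5],
], dtype=float)

n, k = ratings.shape
grand = ratings.mean()
ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()    # between-session
ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()    # between-coder
ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols  # residual

msr = ss_rows / (n - 1)
msc = ss_cols / (k - 1)
mse = ss_err / ((n - 1) * (k - 1))

# Two-way random-effects, single-rater ICC
icc_2_1 = (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
print(f"ICC(2,1) = {icc_2_1:.2f}")
```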

Disadvantages of video recordings relate to cost, the restrictions of a camera in a fixed position, and loss of participant anonymity. Although less expensive than in the past, the cost of purchasing and maintaining video equipment may still be too high for broad usage in a large-scale dissemination, particularly for non-profit community agencies. Practitioners may be reluctant to be video recorded, potentially altering how they behave and how the intervention is implemented. Finally, recordings are typically limited to one camera angle, which may not capture nonverbal or other relevant occurrences outside the camera's range.

Audio recording

Audio equipment has also been used to assess implementation fidelity and offers several advantages. Current technology allows for the use of small digital recorders that are relatively unobtrusive and produce good sound quality. Audio recording may be less intrusive than in vivo observation or video recording because microphones can be placed out of sight, thereby reducing potential reactivity effects from being observed. Further, obtaining audio recorded samples is less costly than observation because the equipment costs less and it does not require coordination among observers. Similar to video recorded data, audio recordings allow for re-examination of the intervention session and analysis of reliability among fidelity coders. Because an independent rater can hear not only what the practitioner says but how it is said, both adherence and competence can be evaluated from audio recordings. For example, practitioners might explain a concept clearly but also communicate frustration in their voices, which would diminish their effectiveness and suggest less competence with a challenging participant.

Dumas and colleagues (2001) developed a fidelity checklist for use with audio recordings of intervention sessions to assess fidelity to a parenting program targeting risk reduction among school-aged children. Adherence and communication skills were assessed from 30-minute segments of randomly selected intervention sessions. Their results show it is possible to obtain high inter-rater agreement (86.7% to 97.4%) on fidelity ratings using audio recorded data.

Using audio recorded data of a group-based parenting program in Chicago Head Start sites, Breitenstein (2010) also obtained high inter-rater reliability (mean agreement = 90%) of a measure of group leader adherence and competence. Although parents and group leaders consented to be audio recorded, at the end of the program they reported little awareness of the audio recorder and said that being audio recorded had little effect on their behavior in the parent groups.
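For readers unfamiliar with the agreement statistic, a percent-agreement calculation like those reported by Dumas et al. (2001) and Breitenstein (2010) can be sketched in a few lines of Python; the session-level codes below are invented for illustration.

```python
# Inter-rater percent agreement on session-level fidelity codes, in the
# spirit of the agreement statistics reported above. Codes are invented:
# 1 = component delivered, 0 = not delivered, as judged by each coder.

coder_a = [1, 1, 0, 1, 1, 0, 1, 1, 1, 1]
coder_b = [1, 1, 0, 1, 0, 0, 1, 1, 1, 1]

agreement = sum(a == b for a, b in zip(coder_a, coder_b)) / len(coder_a)
print(f"Percent agreement: {agreement:.1%}")  # 90.0%
```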

Audio recording is not without disadvantages. Audio recordings do not capture certain types of communication, such as nonverbal cues. Further, audio recording does not allow for assessment of environmental factors that may be important to adherence and competence ratings (e.g., a cramped intervention site, extraneous events that distract participants). As with video recording, equipment malfunction is a further disadvantage of audio recording strategies.

In summary, current efforts to measure fidelity have relied on self-report, audio recorded, and observational data. Each of these data collection methods provides unique information regarding the adherent and competent implementation of an intervention. Researchers selecting a data collection method for implementation fidelity measurement should consider several factors, including feasibility, cost, efficiency, reliability, reactivity, and the ability to collect adequate behavior samples for measuring practitioner adherence and competence (see Table 1).

Table 1

Data Collection Methods for Implementation Fidelity Assessment

Self-report
  Definition: Information collected from practitioners or intervention participants.
  Advantages: Time and cost efficient; clinical information from the perspective of practitioners and participants.
  Limitations: Validity and accuracy of data unknown; potential for social desirability bias.

Observation
  Definition: An independent observer rates the intervention session.
  Advantages: Objective assessment yielding valid and accurate data.
  Limitations: Cost; time consuming.

In vivo observation
  Definition: Live observation of the intervention session.
  Advantages: Overall assessment of the external environment and contextual factors; assessment of nonverbal communication.
  Limitations: Feasibility of scheduling observers; reactivity effects due to observer presence.

Video recording
  Definition: The intervention session is video recorded and viewed for fidelity assessment by independent raters.
  Advantages: Sessions can be reviewed multiple times; reliability and accuracy checks of data.
  Limitations: May not capture nonverbal behavior or other occurrences outside the camera range; cost; reactivity effects due to the camera; loss of participant anonymity.

Audio recording
  Definition: The intervention session is audio recorded and reviewed for fidelity assessment by independent raters.
  Advantages: Less costly than observation; sessions can be reviewed multiple times; reliability and accuracy checks of data; less likely to produce reactivity effects.
  Limitations: May not capture nonverbal behavior or other occurrences not amenable to audio recording; no assessment of the environment.

Future Directions for Implementation Fidelity

As attention to implementation fidelity monitoring increases in implementation research and dissemination efforts, several key questions emerge related to assessment and use of implementation fidelity data. They are: (a) How should implementation fidelity be measured? (b) How often should implementation fidelity be assessed? (c) What is the relationship of implementation fidelity to intervention outcome? and (d) What is the role of implementation fidelity in analyzing outcomes?

How Should Implementation Fidelity be Measured?

Although there have been recent advances in measuring implementation fidelity in community-based interventions, it is rare to see detailed discussion of the validity and reliability of the measures used to make fidelity assessments. In a review of fidelity monitoring, Baer and colleagues (2007) found that most rating instruments did not have established psychometric properties.

What is measured is just as important as how it is measured, so the content of the fidelity instrument matters. Items on fidelity instruments need to capture behaviors and processes that are congruent with the underlying theoretical framework and reflect the core components of the intervention. An adequate measure should assess not only adherence to these core components but also the competence with which the practitioner delivers them.

Current fidelity instruments are either specific to a particular intervention (global across sessions or session specific) or generic measures for use across various interventions (K. M. Carroll et al., 2000; Eames et al., 2008; Hogue et al., 2008). Measures specific to an intervention allow for assessment of the core components of that intervention and are feasible for its wide-scale dissemination (Dusenbury et al., 2003). However, a disadvantage of fidelity measures that are specific to a single intervention is that they do not allow for standardization or generalizability of findings across different interventions that are theoretically consistent. This makes it difficult to build the science related to implementation of theoretically similar interventions. There may be benefit to developing implementation fidelity measures that can be generalized across similar interventions. For example, many community-based interventions use very similar cognitive-behavioral strategies (e.g., interventions targeting depression or substance abuse). The Yale Adherence and Competence Scale (YACS) is an example of a generic measure for use across comparable interventions. The YACS was developed as a general system for rating therapist adherence and competence in implementing behavioral treatments for substance use disorders (K. M. Carroll et al., 2000). Fidelity measures, like the YACS, that can be broadly applied to similar interventions might lead to standardized methods for evaluating and interpreting fidelity, assessing intervention implementation across a variety of settings and interventions, and improving training protocols that can be applied to multiple interventions.

How Often Should Implementation Fidelity be Assessed?

In developing methods to evaluate implementation fidelity, the frequency of monitoring must also be determined to establish reliable and valid assessments of fidelity. Ongoing assessment of fidelity is important in community settings to (a) assure continued validity of an intervention, (b) maintain consistent implementation of the intervention, and (c) allow for observations across a variety of settings (Botvin, 2004; Buckwalter et al., 2009; Dusenbury et al., 2005; Hill et al., 2007). Ongoing assessment may capture practitioner drift, contextual issues that influence the implementation and receipt of the intervention, and adaptations of the intervention, and it may provide important information for supervising and training practitioners. In addition, collecting data over time may capture incidents that are more difficult or unusual and thus more indicative of competence (Barber et al., 2006). However, ongoing assessment can be costly and time intensive. Therefore, selecting a random set of sessions for fidelity monitoring may provide a representative sample of practitioner and participant functioning (Barber et al., 2006). Empirical assessment of fidelity monitoring strategies is critical to moving intervention science forward and to understanding the most reliable assessment strategies. Further, assessing fidelity across time and intervention sessions will help determine the minimum frequency of fidelity monitoring necessary to maintain effectiveness.
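A minimal Python sketch of the random-sampling strategy described above follows; the practitioner names, session labels, and the 25% sampling rate are all hypothetical.

```python
import random

# Sketch of drawing a random, per-practitioner sample of sessions for fidelity
# coding. Practitioner names, session labels, and the 25% rate are hypothetical.

sessions = {
    "practitioner_A": [f"A-session-{i}" for i in range(1, 13)],
    "practitioner_B": [f"B-session-{i}" for i in range(1, 13)],
}

rng = random.Random(42)  # fixed seed so the monitoring plan is reproducible
monitoring_plan = {
    practitioner: sorted(rng.sample(session_list, k=max(1, len(session_list) // 4)))
    for practitioner, session_list in sessions.items()
}
print(monitoring_plan)  # 3 of 12 sessions selected per practitioner
```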

What is the Relationship Between Implementation Fidelity and Intervention Outcome?

Few researchers have considered the influence of implementation fidelity on outcomes or included fidelity data in the analysis of intervention outcomes (Dane & Schneider, 1998; Domitrovich & Greenberg, 2000). Further, most researchers who have evaluated implementation fidelity have focused exclusively on adherence to the protocol, with less attention given to the competence of implementation (Fixsen et al., 2005). In the following sections, the relationships among adherence, competence, and intervention outcomes will be discussed.

Adherence measures and outcome

Several researchers have examined the relationship between adherence and intervention outcomes, with contradictory results. For example, some investigators have reported a positive linear relationship between adherence and outcomes (Hogue et al., 2008; Huey, Henggeler, Brondino, & Pickrel, 2000). However, Barber and colleagues (2006) found that perfect adherence to the intervention protocol in their study of individual drug counseling sessions was less predictive of good intervention outcomes than a moderate level of adherence. The notion that rigid adherence to the prescribed protocol might lead to poorer outcomes suggests that some level of practitioner flexibility and adaptability may be needed to meet local and individual needs when implementing interventions in different populations within different contexts. More research is needed to understand the relationship between adherence and intervention outcomes. Of particular interest is understanding the degree to which protocols can be adapted for a specific population or context while still retaining fidelity to the intended intervention (Botvin, 2004; Castro, Barrera, & Martinez, 2004; The Conduct Problems Prevention Research Group, 2002).
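One way to probe the possibility raised by Barber and colleagues (2006), that moderate adherence may outperform perfect adherence, is to test for a curvilinear (inverted-U) adherence-outcome relationship by adding a quadratic adherence term to a regression model. The Python sketch below uses simulated data for illustration only; it is not a reanalysis of any cited study.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Illustrative test for an inverted-U adherence-outcome relationship.
# The simulated data below build in a curvilinear effect on purpose.
rng = np.random.default_rng(0)
adherence = rng.uniform(0, 100, size=200)
outcome = 2 + 0.08 * adherence - 0.0006 * adherence**2 + rng.normal(0, 0.5, 200)
df = pd.DataFrame({"adherence": adherence, "outcome": outcome})

model = smf.ols("outcome ~ adherence + I(adherence ** 2)", data=df).fit()
print(model.params)  # a negative quadratic coefficient suggests an inverted U
```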

Competence measures and outcome

Findings from research evaluating competence in relation to outcomes are also equivocal. Some findings suggest that high competence is significantly related to intervention outcomes, whereas others suggest modest to nonsignificant associations between practitioner competence and intervention outcomes. For example, Barber and colleagues (2007) reviewed the psychotherapy literature related to therapist competence and found only moderate relationships between therapist competence and patient outcomes. However, Forgatch and colleagues (2005) found strong relationships between competence and improved parenting practices in their study of a parent training intervention. Hogue and colleagues (2008) reported no effects of competence on outcomes in an intervention for adolescent substance use and related behavior problems.

There are several potential explanations for the conflicting results related to implementation fidelity and outcomes. First, the essential ingredients believed to make the intervention effective may be poorly explicated. In such cases, the fidelity instrument may not be measuring the qualities practitioners are applying in the implementation. Second, the fidelity measure may not be adequately differentiating between adherence and competence. Practitioners may be implementing all of the core components of a protocol but doing so poorly or with only moderate skill, leading to diminished intervention outcomes.

Third, poor inter-rater reliability may be attenuating effects. This may be particularly true when measuring competence, because it tends to be a more complex assessment of practitioner skills. For example, Hogue and colleagues (2008) found no relationship between competence and intervention outcomes. However, the reported inter-rater reliability for their competence scales was relatively low (intraclass correlations [ICCs] of .55 to .56). In contrast, using a competence scale that demonstrated higher reliability (ICCs of .71 to .93), Forgatch et al. (2005) found a significant relationship between competence and outcomes.

Finally, inconsistent findings of fidelity and outcomes may occur because of confounding factors that influence practitioner adherence and competence. Characteristics of the intervention recipients, the complexity of the intervention session, and environmental factors may influence how well an intervention is implemented. For example, competence may be facilitated when the participant receiving the intervention is motivated, engaged, and a quick learner of new information but impeded by recipients who repeatedly question what is being taught in an intervention or who appear resistant to or perplexed by new ideas. However, resistant and challenging recipients may also be the individuals who most benefit from the intervention and exhibit the greatest improvements in outcomes. Future researchers should focus on identifying and assessing potential moderating variables affecting the relationship between implementation fidelity components and intervention outcomes.

Because of the discrepant results relating adherence and competence to outcomes, research should continue to examine the influence of practitioner implementation fidelity on outcomes and to isolate factors that might influence the effectiveness of a given intervention. Identifying essential factors related to practitioner competence and adherence can inform training, supervision, and dissemination efforts.

What is the Role of Implementation Fidelity in Analyzing Outcomes?

Using implementation fidelity information in the analysis of intervention effectiveness is important because fidelity outcomes: (a) are related to the internal validity of an intervention study, (b) allow increased confidence in attributing improvements to the intervention, and (c) may increase statistical power by controlling for error associated with diminished implementation quality (Fixsen et al., 2005; Mowbray, Holter, Teague, & Bybee, 2003; Santacroce et al., 2004). Future researchers should focus on developing an understanding of the role of implementation fidelity in the analysis of intervention effectiveness. Based on results showing variable strengths in the relationships among adherence, competence, and intervention outcomes, an important empirical question is identifying acceptable levels of adherence and competence for maintaining intervention effects (Barber et al., 2007). Research on the degree of implementation fidelity needed to retain intervention effects might also shed light on the degree to which interventions can be adapted to the needs of different communities or target populations without reducing their beneficial effects (Dariotis, Bumbarger, Duncan, & Greenberg, 2008). Reliable and valid assessments of implementation fidelity will improve both our understanding of, and the methods used to determine, how challenging situations, environments, and intervention recipients affect implementation fidelity and intervention outcomes.
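As one illustration of how fidelity data might enter an outcome analysis, the following Python sketch regresses participant outcomes on the adherence and competence scores of the delivering practitioner within an intervention arm. The variable names and simulated data are hypothetical, and the appropriate model will depend on the study design (e.g., nesting of participants within practitioners would usually call for a multilevel model).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical illustration: within the intervention arm, regress participant
# outcomes on the adherence and competence of the practitioner who delivered
# their sessions. All data below are simulated.
rng = np.random.default_rng(1)
n = 300
adherence = rng.uniform(50, 100, n)   # percent of protocol components delivered
competence = rng.uniform(1, 5, n)     # mean skillfulness rating (1-5 scale)
outcome = 1 + 0.01 * adherence + 0.2 * competence + rng.normal(0, 1, n)

df = pd.DataFrame({"adherence": adherence, "competence": competence,
                   "outcome": outcome})
model = smf.ols("outcome ~ adherence + competence", data=df).fit()
print(model.summary().tables[1])  # coefficient estimates for the fidelity terms
```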

Conclusion

The science of implementation of evidence-based practices is in its infancy (Fixsen et al., 2005). Comprehensive assessment of fidelity provides critical information to inform implementation and dissemination efforts and to address research-to-practice gaps. There is consensus that implementation fidelity of interventions needs to be systematically evaluated because it remains our best estimate of implementation quality. Implementation fidelity assessment strengthens the validity of a study and provides data for monitoring the transport of an intervention during dissemination. Further, implementation fidelity assessments inform practitioner training and supervision. To ensure that interventions can be delivered with a high level of fidelity across different practitioners and settings, practical evaluation strategies that are both feasible and cost-effective need to be developed (Perepletchikova et al., 2007). To date, few researchers have developed comprehensive fidelity plans or reliable and valid measures of implementation fidelity, particularly for large-scale dissemination efforts. In addition, more research is needed on the relationships among adherence, competence, and intervention outcomes.

Acknowledgments

Preparation of this paper was supported in part by an award from the Golden Lamp Society of Rush University College of Nursing. The authors thank Dr. Mary Johnson for her helpful feedback and comments in preparing the manuscript.

Contributor Information

Susan M. Breitenstein, Rush University College of Nursing, Chicago, IL.

Deborah Gross, Johns Hopkins University.

Christine Garvey, Rush University College of Nursing, Chicago, IL.

Carri Hill, Institute for Juvenile Research, Department of Psychiatry, University of Illinois at Chicago, Chicago, IL.

Louis Fogg, Rush University College of Nursing, Chicago, IL.

Barbara Resnick, University of Maryland.

References

  • Baer JS, Ball SA, Campbell BK, Miele GM, Schoener EP, Tracy K. Training and fidelity monitoring of behavioral interventions in multi-site addictions research. Drug and Alcohol Dependence. 2007;87:107–118.
  • Barber JP, Gallop R, Crits-Christoph P, Frank A, Thase ME, Weiss RD, et al. The role of therapist adherence, therapist competence, and alliance in predicting outcome of individual drug counseling: Results from the National Institute on Drug Abuse Collaborative Cocaine Treatment Study. Psychotherapy Research. 2006;16:229–240.
  • Barber JP, Sharpless BA, Klostermann S, McCarthy KS. Assessing intervention competence and its relation to therapy outcome: A selected review derived from the outcome literature. Professional Psychology: Research and Practice. 2007;38:493–500.
  • Bellg AJ, Borrelli B, Resnick B, Hecht J, Minicucci DS, Ory M, et al. Enhancing treatment fidelity in health behavior change studies: Best practices and recommendations from the NIH Behavior Change Consortium. Health Psychology. 2004;23:443–451.
  • Botvin GJ. Advancing prevention science and practice: Challenges, critical issues, and future directions. Prevention Science. 2004;5:69–72.
  • Bowman CC, Sobo EJ, Asch SM, Gifford AL. Measuring persistence of implementation: QUERI series. Implementation Science. 2008;3:21. doi:10.1186/1748-5908-3-21.
  • Breitenstein S. Measuring implementation fidelity of a community based parenting intervention. Dissertation Abstracts International: Section B Sciences and Engineering. 2010;70(09):3376892.
  • Buckwalter KC, Grey M, Bowers B, McCarthy AM, Gross D, Funk M, et al. Intervention research in highly unstable environments. Research in Nursing and Health. 2009;32:110–121.
  • Carroll C, Patterson M, Wood S, Booth A, Rick J, Balain S. A conceptual framework for implementation fidelity. Implementation Science. 2007;2:40. doi:10.1186/1748-5908-2-40.
  • Carroll KM, Nich C, Sifry RL, Nuro KF, Frankforter TL, Ball SA, et al. A general system for evaluating therapist adherence and competence in psychotherapy research in the addictions. Drug and Alcohol Dependence. 2000;57:225–238.
  • Castro FG, Barrera M Jr, Martinez CR Jr. The cultural adaptation of prevention interventions: Resolving tensions between fidelity and fit. Prevention Science. 2004;5:41–45.
  • The Conduct Problems Prevention Research Group. The implementation of the Fast Track program: An example of a large-scale prevention science efficacy trial. Journal of Abnormal Child Psychology. 2002;30:1–17.
  • Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: Are implementation effects out of control? Clinical Psychology Review. 1998;18:23–45.
  • Dariotis JK, Bumbarger BK, Duncan LG, Greenberg MT. How do implementation efforts relate to program adherence? Examining the role of organizational, implementer, and program factors. Journal of Community Psychology. 2008;36:744–760.
  • Dévieux JG, Rosenberg R, Jean-Gilles M, Samuels D, Ergon-Pérez E, Jacobs R. Cultural adaptation in translational research: Field experiences. Journal of Urban Health. 2005;82:iii82–iii91.
  • Dobson KS, Singer AR. Definitional and practical issues in the assessment of treatment integrity. Clinical Psychology: Science and Practice. 2005;12:384–387.
  • Domitrovich CE, Greenberg MT. The study of implementation: Current findings from effective programs that prevent mental disorders in school-aged children. Journal of Educational and Psychological Consultation. 2000;11:193–221.
  • Dumas JE, Lynch AM, Laughlin JE, Smith EP, Prinz RJ. Promoting intervention fidelity: Conceptual issues, methods, and preliminary results from the EARLY ALLIANCE prevention trial. American Journal of Preventive Medicine. 2001;20(Suppl 1):38–47.
  • Dusenbury L, Brannigan R, Falco M, Hansen WB. A review of research on fidelity of implementation: Implications for drug abuse prevention in school settings. Health Education Research. 2003;18:237–256.
  • Dusenbury L, Brannigan R, Hansen W, Walsh J, Falco M. Quality of implementation: Developing measures crucial to understanding the diffusion of preventive interventions. Health Education Research. 2005;20:308–313.
  • Eames C, Daley D, Hutchings J, Hughes JC, Jones K, Martin P, et al. The Leader Observation Tool: A process skills treatment fidelity measure for the Incredible Years parenting programme. Child: Care, Health and Development. 2008;34:391–400.
  • Eccles M, Mittman B. Welcome to Implementation Science. Implementation Science. 2006;1:1. doi:10.1186/1748-5908-1-1.
  • Elliott DS, Mihalic S. Issues in disseminating and replicating effective prevention programs. Prevention Science. 2004;5:47–53.
  • Fixsen DL, Naoom SF, Blase KA, Friedman RM, Wallace F. Implementation research: A synthesis of the literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute, The National Implementation Research Network (FMHI Publication #231); 2005.
  • Forgatch MS, Patterson GR, DeGarmo DS. Evaluating fidelity: Predictive validity for a measure of competent adherence to the Oregon model of parent management training. Behavior Therapy. 2005;36:3–13.
  • Gardner F. Methodological issues in the direct observation of parent-child interaction: Do observational findings reflect the natural behavior of participants? Clinical Child and Family Psychology Review. 2000;3:185–198.
  • Glasgow RE, Lichtenstein E, Marcus AC. Why don't we see more translation of health promotion research to practice? Rethinking the efficacy-to-effectiveness transition. American Journal of Public Health. 2003;93:1261–1267.
  • Hill LG, Maucione K, Hood BK. A focused approach to assessing program fidelity. Prevention Science. 2007;8:25–34.
  • Hogue A, Henderson CE, Dauber S, Barajas PC, Fried A, Liddle HA. Treatment adherence, competence, and outcome in individual and family therapy for adolescent behavior problems. Journal of Consulting and Clinical Psychology. 2008;76:544–555.
  • Huey SJ Jr, Henggeler SW, Brondino MJ, Pickrel SG. Mechanisms of change in multisystemic therapy: Reducing delinquent behavior through therapist adherence and improved family and peer functioning. Journal of Consulting and Clinical Psychology. 2000;68:451–467.
  • Kerner J, Rimer B, Emmons K. Introduction to the special section on dissemination: Dissemination research and research dissemination: How can we close the gap? Health Psychology. 2005;24:443–446.
  • Lee CY, August GJ, Realmuto GM, Horowitz JL, Bloomquist ML, Klimes-Dougan B. Fidelity at a distance: Assessing implementation fidelity of the Early Risers prevention program in a going-to-scale intervention trial. Prevention Science. 2008;9:215–229.
  • Lillehoj CJ, Griffin KW, Spoth R. Program provider and observer ratings of school-based preventive intervention implementation: Agreement and relation to youth outcomes. Health Education and Behavior. 2004;31:242–257.
  • Martino S, Ball SA, Nich C, Frankforter TL, Carroll KM. Community program therapist adherence and competence in motivational enhancement therapy. Drug and Alcohol Dependence. 2008;96:37–48.
  • Mihalic S. The importance of implementation fidelity. Emotional and Behavioral Disorders in Youth. 2004;4(4):83–105.
  • Mihalic S, Fagan A, Argamaso S. Implementing the LifeSkills Training drug prevention program: Factors related to implementation fidelity. Implementation Science. 2008;3:5. doi:10.1186/1748-5908-3-5.
  • Mowbray CT, Holter MC, Teague GB, Bybee D. Fidelity criteria: Development, measurement, and validation. American Journal of Evaluation. 2003;24:315–340.
  • Perepletchikova F, Kazdin AE. Treatment integrity and therapeutic change: Issues and research recommendations. Clinical Psychology: Science and Practice. 2005;12:365–383.
  • Perepletchikova F, Treat TA, Kazdin AE. Treatment integrity in psychotherapy research: Analysis of the studies and examination of the associated factors. Journal of Consulting and Clinical Psychology. 2007;75:829–841.
  • Resnick B, Bellg AJ, Borrelli B, DeFrancesco C, Breger R, Hecht J, et al. Examples of implementation and evaluation of treatment fidelity in the BCC studies: Where we are and where we need to go. Annals of Behavioral Medicine. 2005;29(Spec Suppl):46–54.
  • Rogers Wiese MR. A critical review of parent training research. Psychology in the Schools. 1992;29:229–236.
  • Rohrbach LA, Gunning M, Sun P, Sussman S. The Project Towards No Drug Abuse (TND) dissemination trial: Implementation fidelity and immediate outcomes. Prevention Science, in press.
  • Sánchez V, Steckler A, Nitirat P, Hallfors D, Cho H, Brodish P. Fidelity of implementation in a treatment effectiveness trial of Reconnecting Youth. Health Education Research. 2007;22:95–107.
  • Santacroce SJ, Maccarelli LM, Grey M. Intervention fidelity. Nursing Research. 2004;53:63–66.
  • Stein KF, Sargent JT, Rafaels N. Intervention research: Establishing fidelity of the independent variable in nursing clinical trials. Nursing Research. 2007;56:54–62.
  • Zelenko M. Observation in infant-toddler mental health assessment. In: DelCarmen-Wiggins R, Carter AS, editors. Handbook of infant, toddler, and preschool mental health assessment. New York: Oxford; 2004. pp. 205–221.
