
Randomized Control Trials

Abstract and Keywords

This entry defines Randomized Control Trials (RCTs) and puts them in an historical context. It explains the distinction between efficacy and effectiveness RCTs and why effectiveness trials are more relevant to social work interventions. The strengths and limitations of RCTs, which rest on experimental designs, are delineated. The entry also discusses the reporting requirements for RCTs under the CONSORT (Consolidated Standards of Reporting Trials) statement and presents the controversies surrounding social workers' use of RCTs. Current health services research emphasizes evidence-based practices, comparative effectiveness research, and dissemination and implementation research that examines the gaps between empirically supported interventions and the services offered in routine care. RCTs have emerged as a central methodology in all of these efforts; social workers, therefore, need to be knowledgeable about them and engaged in these efforts.

Keywords: comparative effectiveness research, CONSORT, dissemination research, effectiveness trials, experimental design, implementation research, interventions, randomized control trials, randomized field trials

History and Definition of Randomized Control Trials

Clinical trials have a long history in medicine, dating back to 600 B.C.E. Credit for the modern randomized controlled trial (RCT), as we know it today, goes to the landmark studies of streptomycin for pulmonary tuberculosis in the late 1940s by Sir Austin Bradford Hill (Kelly & Moore, 2011; Shorter, 2011; Stolberg, Norman, & Trop, 2004). The RCT rests on the random allocation procedures developed by R. A. Fisher in the 1920s and 1930s for agricultural experiments (Torgerson & Torgerson, 2008). Since then, the RCT design has been accepted as the methodology of choice for determining effective treatments in medicine, to the point that it is now considered the gold standard. However, experimental design, which is the foundation of the RCT, did not make inroads into the social work profession until much later. It was not until the 1960s, when Campbell and Stanley (1963) published their categorization of research designs, that it gained attention; they argued that the experimental research design has the fewest threats to internal validity as compared with quasi-experimental and pre-experimental designs. Their primer, which outlined the various research designs and their strengths and weaknesses, became the standard for behavioral and social science research design. Holosko (2010) noted that in social work, the text Exemplars of Social Research (Fellin, Tripodi, & Meyer, 1969) was the first to focus on design classification. Currently, all social work research texts use the design classification system developed by Campbell and Stanley and clearly make the point that experimental designs are the only ones capable of establishing valid causal conclusions.

An RCT, which employs an experimental design, is a study in which participants are allocated by chance to two or more conditions: at least one experimental intervention and one control condition used for purposes of comparison. This chance allocation procedure is referred to as randomization or random assignment, and it serves to ensure that, on average, the groups of participants in each condition are equal on all known and unknown characteristics, with the exception of the allocation of the experimental intervention. Randomization therefore enables the investigator to attribute any outcome difference between the conditions to the experimental intervention(s) rather than to other confounding factors. The ability to rule out other potential explanations for change in the outcome is what allows the investigator to be highly confident that the experimental intervention caused any differences in outcomes between the conditions being compared. Generally, RCTs employ the classic experimental design in which outcomes are measured at baseline, before the introduction of the interventions, and again at the termination of treatment. Having the baseline measures enables the investigator to test the equivalence of the conditions prior to instituting the intervention.
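
As a minimal illustration of the allocation logic described above, the following sketch (in Python, with entirely hypothetical participants and scores) randomly assigns enrolled participants to an experimental and a control condition and then compares the baseline means of the two groups.

```python
# Minimal sketch (hypothetical example): simple random assignment of enrolled
# participants to an experimental and a control condition, followed by a
# check on the baseline (pre-test) scores of the two groups.
import random
import statistics

participants = [{"id": i, "baseline_score": random.gauss(50, 10)} for i in range(60)]

random.shuffle(participants)                     # chance allocation
midpoint = len(participants) // 2
experimental = participants[:midpoint]
control = participants[midpoint:]

# With a sufficiently large sample, the two groups should be roughly
# equivalent on baseline measures before the intervention is introduced.
print("Experimental baseline mean:",
      statistics.mean(p["baseline_score"] for p in experimental))
print("Control baseline mean:",
      statistics.mean(p["baseline_score"] for p in control))
```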

Effectiveness RCTs versus Efficacy RCTs

RCTs in social work usually take place in community settings, such as social service, child welfare, mental health, or substance abuse agencies, whereas biomedical RCTs are undertaken in clinic or hospital settings, especially those affiliated with academic medical centers. Consequently, the RCT conducted by social work investigators, sometimes called a randomized field trial, falls within the domain of effectiveness trials, because it transpires within routine practice settings. In comparison, biomedical RCTs are efficacy trials, which occur under ideal circumstances with highly trained practitioners who serve clients meeting very stringent eligibility criteria; biomedical RCTs maximize fidelity and adherence to a manualized experimental intervention (Glasgow & Steiner, 2012). Effectiveness trials often employ the staff of the agency; as a result, these interventionists may have varying qualifications and degrees of motivation to deliver the experimental interventions (Solomon, Cavanaugh, & Draine, 2009; Solomon & Cavanaugh, in press). Furthermore, the traditional medical RCT, sometimes called a randomized clinical trial, tests a new drug or surgical procedure; such interventions are highly focused and thus can easily have rigidly standardized protocols, unlike social work interventions, which have been described as socially complex services (Wolff, 2000). This complexity makes social work interventions difficult to specify and delineate clearly, because they are service approaches that are open to some degree of interpretation and flexibility in delivery by the provider (Wolff, 2000). Moreover, the social context in which these interventions are embedded may affect the interventions and may affect the different intervention conditions dissimilarly, thus violating the underlying assumption of RCTs that environments for all conditions are equivalent (Wolff, 2000). Given the lesser rigor that can be maintained in an effectiveness trial than in an efficacy trial, the internal validity of efficacy trials is stronger than that of effectiveness trials. Conversely, given the stringency of the efficacy trial, its external validity is far more limited than that of the effectiveness trial.

The effectiveness trial is conceived of as a practical or pragmatic trial that evaluates the degree of benefit that results from the experimental intervention within a routine practice setting. Effectiveness trials are concerned with answering questions of practical importance to policymakers, practitioners, administrators, consumers, and citizens (Glasgow & Steiner, 2012). An efficacy trial is viewed as an explanatory trial because its purpose is to determine whether the experimental intervention produces the hypothesized benefits, without harming participants, under the circumstances most ideal for isolating the causal effect (Gartlehner, Hansen, Nissman, et al., 2006). These two types of trials address different issues. The efficacy trial is more concerned with establishing the causal relationship between the purified intervention and the outcome of interest, whereas the effectiveness trial is concerned with whether the experimental intervention produces better outcomes than the comparison condition within an ordinary practice setting. For this reason, the efficacy trial commonly employs a placebo condition as the basis of comparison, while the effectiveness trial frequently uses the standard of care usually provided in such a practice setting. The placebo is an inactive agent used as a control for comparison purposes in an RCT. Given the nature of the comparison condition in effectiveness trials and the focus on some standard treatment, these trials are referred to as comparative effectiveness trials (discussed in more detail later). Some do not believe in the worth of efficacy trials, arguing that it is impractical to expect studies conducted with highly motivated providers, in resource-rich environments, and with uncomplicated clients to have much relevance or application to under-resourced settings, unmotivated or ill-qualified practitioners, and clients with a myriad of issues and a complicated set of problems (Glasgow & Steiner, 2012).

Design of RCTs

The RCT is the design of choice for determining the effectiveness of interventions because of its ability to control for potential biases and confounds, thereby ruling out competing alternative hypotheses as explanations for differences in outcome change between the experimental group and the control group. Although other designs and statistical procedures exist for evaluating the effectiveness of interventions, they limit the confidence with which strong causal conclusions can be drawn. For example, quasi-experimental designs rely on naturally occurring group comparisons rather than the chance allocation procedure of the RCT. Naturally occurring groups may not be equal on all characteristics, both known and unknown. This lack of equivalence may introduce unintentional bias, so that observed change cannot be attributed to the intervention alone but may instead reflect characteristics of the groups (selection bias) or the interaction of those characteristics with the experimental intervention. Another design more commonly used in social work, the pre-test and post-test design, is even weaker at controlling biases. Such designs overestimate the beneficial effects of the intervention because of changes occurring in participants over time (that is, maturation) and because of regression to the average score of the group (Torgerson & Torgerson, 2008). There are alternative approaches to evaluating interventions, such as interrupted time series designs, but these approaches require investigators to anticipate potential biases and to be able to measure them in advance; it is impossible to measure unknown variables that may affect the outcome (Torgerson & Torgerson, 2008). Inferring causality from these other methods is therefore quite problematic. The following sections discuss the features of the RCT and their strengths and limitations.

Ethics of RCTs

In biomedical research, particularly in drug trials, the use of placebo controls is falling out of favor because of the element of deception (Shorter, 2011). For RCTs in the social work arena, however, the comparison has usually been against the current standard of care, because denying needed treatment to vulnerable clients has always been felt to be unethical. Waitlist or inert conditions are used as controls when participants would otherwise not be receiving any service. Many social workers are nonetheless concerned that individuals assigned to the control condition could be receiving the less effective service. What is not frequently recognized or understood is that, at the outset of an RCT, we do not know whether the experimental treatment will be more effective than the control; the purpose of an RCT is to test the effectiveness of the experimental intervention. The ethical requirement for an RCT is that there be some degree of uncertainty regarding the beneficial effects of the experimental intervention. This uncertainty is known as equipoise. Without meeting the criterion of equipoise, there is no ethical basis for conducting the RCT. Thus, an RCT does not deny needed services, since it is far more ethical to test an intervention for effectiveness as opposed to “inflicting an untested and potentially harmful procedure on people” (Newman & Roberts, 1997, p. 292).

The RCT sits at an intersection of practice and research, as the experimental intervention is usually a practice intervention or an educational curriculum. Consequently, both practice ethics and research ethics are pertinent to RCTs. The Belmont Report (1979), issued by the National Commission for the Protection of Human Subjects of Biomedical and Behavioral Research, was written to develop guidelines for protecting human subjects, given concerns about the unethical manner in which study participants had been treated. The report made a clear distinction between research and practice. It defined practice as “interventions that are designed to enhance the well-being of an individual, patient, or client[,] and that [have] a reasonable expectation of success” (p. 3). It defined research as activities “designed to test [a] hypothesis, permit conclusions to be drawn, and thereby to develop or contribute to generalizable knowledge” (p. 3). Thus, practice is about improving the well-being of an individual, while research is about improving society through the development of knowledge. Because RCTs entail both practice and research, what is required for ethical practice applies to the RCT as well. For example, if a participant is deteriorating during the experimental intervention, the ethics of practice as well as the ethics of research require that the participant be removed. Since RCTs in social work generally involve a practice intervention, just as a practitioner needs to deal with issues of termination, so does the interventionist involved in an RCT. Should the participant require ongoing care after his or her time in the experimental intervention, the practitioner must make appropriate referral arrangements (Solomon, Cavanaugh, & Draine, 2009). Thus, the ethics of practice and research do not conflict; if there is tension between practice and research, it concerns the rigor of scientific methodology, not research ethics.

The ethical argument for RCTs is that there is a “compelling need” to know the effectiveness of various treatments or services, and that RCTs can most satisfactorily meet this need (Mark & Lenz-Watson, 2011). RCTs are the preferred method of answering such questions because they provide an unbiased answer to the question posed; other ways of obtaining the answer are likely to yield biased estimates because of threats to internal validity. Other methods may therefore produce incorrect answers and, in turn, undue harm. Ethically, researchers are obligated to minimize such risks. In terms of the Belmont Report, then, investigators are more likely to maximize benefits, minimize risk, and ensure a fair distribution of risks and benefits with RCTs, both for those involved in the original research and for those to whom the intervention is applied in the future, than with other methods of determining effectiveness (Mark & Lenz-Watson, 2011).

There have been methodological advances in recent years that reduce some of the ethical concerns surrounding RCTs. For example, with the use of power analyses, only the minimum number of participants needed is exposed to any potential risk from the experimental intervention. An investigator can also employ a “stop rule,” whereby the data are analyzed continually and, as soon as a clear effect or clear harm is determined, further enrollment is stopped; this, however, can only occur with consecutive sampling procedures (Mark & Lenz-Watson, 2011). Adaptive randomization is another procedure that limits the number of participants assigned to the less effective condition. In this method, periodic analyses are conducted “to adjust the probability of assignment to each condition based on the apparent effectiveness at that point” (Mark & Lenz-Watson, 2011, p. 200).
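
To make the adaptive randomization idea concrete, the sketch below is a simplified, hypothetical illustration (not a procedure specified by Mark and Lenz-Watson): it periodically recomputes the probability that the next participant is assigned to the experimental condition from the observed success rates in each arm, while keeping a floor and ceiling so that both conditions continue to receive participants.

```python
# Minimal sketch (illustrative only): a simple "play-the-winner" style
# adaptive randomization. All names, counts, and thresholds are hypothetical.
import random

def assignment_probability(successes_exp, n_exp, successes_ctl, n_ctl,
                           floor=0.2, ceiling=0.8):
    """Return P(assign next participant to experimental) from observed rates."""
    rate_exp = successes_exp / n_exp if n_exp else 0.5
    rate_ctl = successes_ctl / n_ctl if n_ctl else 0.5
    total = rate_exp + rate_ctl
    p = 0.5 if total == 0 else rate_exp / total
    return min(max(p, floor), ceiling)   # keep some allocation to both arms

# After an interim look at the data, the experimental arm appears stronger,
# so new participants become more likely to be assigned to it.
p_exp = assignment_probability(successes_exp=12, n_exp=20,
                               successes_ctl=7, n_ctl=20)
next_assignment = "experimental" if random.random() < p_exp else "control"
print(round(p_exp, 2), next_assignment)
```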

Theoretical Basis for Experimental Intervention

RCTs employ a deductive process and are hypothesis-testing studies. A hypothesis-testing study begins with a theoretical justification positing that the proposed intervention will produce a greater change in the behavioral outcomes of experimental participants than of comparison participants. Theories or explanatory models are used to justify the relationship of the intervention processes to the hypothesized outcomes (Solomon, Cavanaugh, & Draine, 2009). Examples of the types of theories often employed in social work intervention research include cognitive behavioral theory; social learning theory; stress, coping, and adaptation; health beliefs; social support; social capital; the theory of planned behavior; the theory of reasoned action; and the transtheoretical model of change (Solomon, Cavanaugh, & Draine, 2009). These theories form the basis of the intervention that is linked to specific outcomes.

Models for testing interventions are becoming more sophisticated and increasingly incorporate mediator and moderator variables (Baron & Kenny, 1986), as these move the field forward to the next generation of studies and refine their clinical application (Kraemer, Wilson, Fairburn, & Agras, 2002). Mediators help us to understand why interventions work, as they “identify why and how” interventions have an effect (Kraemer et al., 2002, p. 877). Mediators specify the process of change and identify the causal links by which the intervention produces the change in outcome. Temporally, they occur between the intervention and the outcome. Thus, mediators help to determine the mechanism by which interventions work. They occur during the course of the intervention, but they are not a component of that intervention. For example, clients receiving the experimental intervention who have a greater alliance (the mediator) with their provider may have a more positive outcome. Moderators indicate “for whom and under [which] conditions” the intervention works (Kraemer et al., 2002, p. 878). They affect the direction or the strength of the relationship between the intervention and the outcome. Consequently, moderators specify subgroups for whom the intervention may or may not be effective. For example, an intervention may be more effective for youth with a history of abuse than for those with no such history. It is important to note that moderators always occur prior to what they moderate. The choice of specific mediator and moderator variables needs to be theoretically driven (Kraemer et al., 2002).
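
As a rough illustration of how a moderator can be examined, the following sketch (with simulated, hypothetical data and variable names chosen only for this example) codes an intervention-by-subgroup interaction term in an ordinary least squares model; a non-negligible interaction coefficient would suggest that the hypothesized moderator (here, abuse history) alters the strength of the intervention effect.

```python
# Minimal sketch (hypothetical data): testing a moderator via an
# intervention-by-subgroup interaction term in an OLS regression.
import numpy as np

rng = np.random.default_rng(0)
n = 200
treatment = rng.integers(0, 2, n)        # 1 = experimental, 0 = control
abuse_history = rng.integers(0, 2, n)    # hypothesized moderator, measured at baseline

# Simulated outcome in which the intervention helps more for youth with an abuse history.
outcome = (0.3 * treatment + 0.2 * abuse_history
           + 0.5 * treatment * abuse_history + rng.normal(0, 1, n))

X = np.column_stack([np.ones(n), treatment, abuse_history,
                     treatment * abuse_history])
coefs, *_ = np.linalg.lstsq(X, outcome, rcond=None)

# A clearly non-zero interaction coefficient indicates that the treatment
# effect differs across subgroups, i.e., abuse history moderates the effect.
print(dict(zip(["intercept", "treatment", "moderator", "interaction"],
               np.round(coefs, 2))))
```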

Strength of RCTs: Controlling Internal Bias

Unlike efficacy RCTs, effectiveness RCTs do not have such rigid sampling criteria, so participants may have co-morbid disorders. However, the inclusion and exclusion criteria must still be clearly specified and operationally defined. If the sample is large enough, randomization controls for a number of threats to internal validity. Given that the two groups are equal on most characteristics, neither selection bias nor maturation is a problem, as both groups will develop equally over time. Similarly, statistical regression, testing, and instrumentation are not problems. History is also not an issue if the same setting is used for both conditions; if some event occurs during the course of the study, such as a policy change, both conditions should be equally affected. However, attrition may be differential if the nature of the experimental or control intervention is such that participants who differ in some way drop out of one condition more than the other. For example, if the experimental intervention is particularly time consuming or onerous, busier people may drop out of the experimental intervention but not out of the control condition. The design per se may not be able to prevent this from happening, and special provisions might need to be built into the study methods to prevent it.

There are other potential confounds that must be considered when designing RCTs, such as contamination between the conditions or blurring of the conditions. In some cases, interaction among participants across conditions may result in idea sharing, so that participants in the control condition pick up these tips and thus receive benefits that were meant only for experimental participants. Similarly, if the same providers serve participants from both conditions, the providers may inadvertently use some elements of the experimental intervention with control participants. One means of avoiding this situation is a cluster randomized design, whereby providers are randomized to the experimental and control conditions and then deliver only the intervention to which they are assigned; clients enrolled in the study receive whichever intervention their provider has been assigned to deliver (see the sketch below). Another possibility is drift of the experimental interventionists toward the control condition. This is likely to occur if the control condition is the usual service that the experimental providers were used to delivering; over time, particularly if the experimental service is complex, these interventionists may drift back to their old ways of serving their clients. Ongoing coaching, booster sessions, and monitoring of the experimental intervention with corrective action may need to be built into the RCT to ensure that these kinds of problems are avoided.
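
A minimal sketch of that cluster randomization logic (with hypothetical provider and client names) follows: providers, not clients, are randomly assigned, and each client simply inherits the condition of his or her provider.

```python
# Minimal sketch (hypothetical example): cluster randomization in which
# providers are the unit of random assignment and clients inherit their
# provider's condition, reducing the risk that one provider delivers
# elements of both interventions.
import random

providers = ["provider_a", "provider_b", "provider_c", "provider_d"]
clients = {"client_1": "provider_a", "client_2": "provider_a",
           "client_3": "provider_b", "client_4": "provider_c",
           "client_5": "provider_d"}

shuffled = providers[:]
random.shuffle(shuffled)
half = len(shuffled) // 2
provider_condition = {p: ("experimental" if i < half else "control")
                      for i, p in enumerate(shuffled)}

# Each client receives whichever condition their provider was assigned.
client_condition = {c: provider_condition[p] for c, p in clients.items()}
print(provider_condition)
print(client_condition)
```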

Another concern is that the reactivity of providers, participants, or the investigators themselves may affect the outcomes of the RCT. Blinding is used to control for this reactivity and is common in efficacy RCTs, particularly in drug studies, because it is feasible to disguise a medication so that no one involved in the study is aware of the condition to which a participant is assigned. In social work interventions, however, such blinding is impossible, since everyone is aware of what a participant is receiving. But those who collect the data may be blinded to the assignment of participants, which is what Gellis and collaborators (2007) did in their problem-solving RCT.

Intervention Specification and Fidelity Assessment

The interventions being assessed in an RCT must be very well specified, because the intervention is expected to be standardized in such a way that each interventionist delivers essentially the same intervention. The means by which this standardization is achieved is a treatment or program manual. Psychotherapy RCTs usually have a treatment manual, whereas psychosocial program interventions, such as Assertive Community Treatment, have more comprehensive packages. Some of these packages can be found on the Substance Abuse and Mental Health Services Administration's website in the Evidence-Based Behavioral Health Practice toolkit series; each kit outlines the essential components of the intervention and tips for implementing it. Program manuals also specify the structural elements of the intervention, such as caseload size and interventionists' qualifications. Treatment manuals generally contain an overview of the literature on the intervention as well as suggestions for building a therapeutic relationship; specified activities, techniques, and content; sequencing of activities; strategies for handling problems; and issues of implementation and termination (Solomon et al., 2009).

Although treatment and program manuals increase the likelihood of ensuring the integrity of the intervention, they have also been criticized. Concerns have been raised that manualized treatments are often not applicable to the diverse clients served by social workers, who may have complex problems, be less motivated for behavioral change, or belong to cultural and ethnic minority groups, and that the manuals are not relevant to the training or qualifications of providers in community agencies. Furthermore, these manuals focus more on techniques than on alliance formation, and they downplay clinical expertise and provider competence by emphasizing adherence to the intervention (Carroll & Rounsaville, 2008; Havik & VandenBos, 1996). Not all of these criticisms are valid: manuals do recognize the need for flexibility in delivering the intervention, and they offer a structure and plan for providing the intervention to those less versed in it. Also, existing manuals may need to be adapted for the specific populations served by community agencies. There are models available for adapting manuals; one such model, ADAPT-ITT, utilizes ethnographic methods (Wingood & DiClemente, 2008).

Most manuals also contain an assessment for measuring the fidelity of the intervention, in order to ensure that the intervention was implemented as specified in the treatment manual. These fidelity scales indicate whether the intervention was actually implemented as intended. Without assessing integrity to the intended intervention, an investigator may come to the erroneous conclusion that an intervention was not effective when, in reality, it simply was not implemented properly. Fidelity measures usually are quantitative scales that determine the degree to which the service elements of the intervention were delivered by the provider. Orwin (2000) notes that it is also important to evaluate the extent to which recipients received the intervention; consequently, more than one measure is required for making these determinations. In addition, it is essential to have a leakage measure, which assesses the extent of contamination, or leakage, of the experimental intervention to the control condition (Orwin, 2000). In some instances, the fidelity measure, or some adaptation of it, may be administered to the control condition providers and recipients as a leakage measure.
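
In practice, such a quantitative fidelity scale is often scored simply as the proportion (or mean rating) of specified intervention elements that an observer judged were delivered; the toy sketch below, with entirely hypothetical items, illustrates that calculation.

```python
# Minimal sketch (hypothetical items): scoring a simple session-level fidelity
# checklist as the proportion of specified intervention elements delivered.
fidelity_items = {
    "reviewed_homework": True,
    "set_session_agenda": True,
    "practiced_skill": False,
    "assigned_new_homework": True,
}
fidelity_score = sum(fidelity_items.values()) / len(fidelity_items)
print(f"Session fidelity: {fidelity_score:.0%}")
```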

Limitations of RCTs: External Validity

A major concern regarding RCTs is the ability to generalize the results to clients, sites, times, and providers other than those involved in the study. The external validity of RCTs is more limited: as investigators control more confounds and tighten the internal validity of the study, the generalizability of the results diminishes. Ecological validity, the extent to which the study environment and intervention are relevant to practice settings, is also a concern in RCTs. This is much more of an issue with efficacy studies than with effectiveness ones, as many psychosocial community-based RCTs take place in settings comparable to those to which the results will be generalized. One way of increasing external validity is to replicate the RCT at a different site. Multi-site RCTs increase external validity within a single study, because they essentially replicate the RCT across two or more sites.

Intent-to-Treat Analysis

Generally, it is considered best to analyze RCTs with an intent-to-treat analysis, which analyzes outcomes based on the condition to which participants were originally assigned, regardless of the service they actually received. This procedure controls for the potential bias of dropouts from service or of crossover effects, in other words, participants receiving the experimental intervention even though they were assigned to the control or usual service (Solomon, Cavanaugh, & Draine, 2009). However, one of the problems with conducting such an analysis is the need to obtain outcome data on all randomized participants, which is often difficult with the populations for whom social work RCTs are conducted.
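
The contrast between grouping by original assignment and grouping by the service actually received can be seen in the following sketch, which uses a handful of hypothetical participants, including one dropout and one crossover.

```python
# Minimal sketch (hypothetical data): intent-to-treat analysis, which groups
# participants by original assignment, versus an "as-treated" analysis, which
# groups them by the service actually received.
import statistics

participants = [
    # (assigned, received, outcome) -- illustrative values only
    ("experimental", "experimental", 8), ("experimental", "experimental", 7),
    ("experimental", "control", 5),      # dropped out of the experimental arm
    ("control", "control", 4),           ("control", "control", 5),
    ("control", "experimental", 7),      # crossed over into the experimental arm
]

def group_means(key_index):
    groups = {}
    for row in participants:
        groups.setdefault(row[key_index], []).append(row[2])
    return {k: statistics.mean(v) for k, v in groups.items()}

print("Intent-to-treat (by assignment):", group_means(0))
print("As-treated (by service received):", group_means(1))
```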

Integration of Qualitative Research into RCTs

There is increasing recognition of the advantages as well as certain limitations of RCTs. These studies are very good at indicating which interventions are effective, but they offer little insight into how and why interventions work or do not work, how they fail or succeed for different populations, and how they are sensitive to the social context in which they are embedded. Consequently, a number of scholars have suggested the need to integrate qualitative research methods into RCTs, although reviews have found that this integration remains rare (Grissmer, Subotnik, & Orland, 2009; Lewin, Glenton, & Oxman, 2009; Oakley et al., 2006; Spillane et al., 2010; Veroff, Casebeer, & Hilsden, 2002). Qualitative methods used in conjunction with RCTs have the potential to develop theory and, therefore, to be responsive to the need for theory-driven experimentation (Grissmer et al., 2009).

The incorporation of qualitative research into an RCT, which employs a quantitative methodology, is referred to as a mixed methods study. Johnson, Onwuegbuzie, and Turner (2007) provide a definition based on a survey of mixed methods experts:

Mixed methods research is the type of research in which a researcher or team of researchers combines elements of qualitative and quantitative research approaches (e.g., use of qualitative and quantitative viewpoints, data collection, analysis, inference techniques) for the broad purposes of breadth and depth of understanding and corroboration (p. 123).

Qualitative methods can be used before, during, and after an RCT (Solomon & Cavanaugh, in press). The qualitative methods that may be employed range from interviews and focus groups to ethnography and participant observation. Prior to the implementation of the RCT, qualitative methods may be used to explore issues concerning the question of interest and the environmental context in which the RCT is situated, to refine and develop the experimental intervention, to specify the usual care condition, to generate hypotheses, and to assess the appropriateness of measures for evaluating outcomes. During the course of the RCT, qualitative methods may offer insight into how the experimental intervention is actually being delivered and how it differs from what was intended; the processes of implementation and change as the intervention is instituted; the social and environmental context of the RCT, whether it differs across conditions, and whether it affects or interacts with the interventions; and the interactions of providers and recipients and the meanings the intervention holds for these key participants. After the completion of the RCT, investigators can explore reasons for the effectiveness or failure of the intervention, assess variation in effectiveness for various subgroups, assess the underlying theoretical processes thought to produce changes in the outcomes, and generate new theories based on the experiences and responses of recipients and providers of the intervention (Solomon & Cavanaugh, in press).

Reporting of RCTs: CONSORT Statement

Concerns regarding the quality of reporting of RCTs have resulted in the development of reporting standards. The current established standard is the CONSORT statement, which emerged from a collaborative effort of epidemiologists, biostatisticians, and journal editors (Begg, Cho, Eastwood, et al., 1996). The need for these standards became apparent from a number of reviews that found incomplete or inadequate reporting, such as failure to identify group assignment procedures or to report sample size. Poor quality reporting leads to biased estimates of the effectiveness of interventions in meta-analyses (Moher, Schulz, & Altman, 2001), and this concern has grown with the increasing employment of evidence-based practices, a process that requires valid evidence. Detailed reporting is necessary to appraise the validity of results and is also required for systematic reviews and meta-analyses (discussed in more detail later). The expectation of the CONSORT developers was that the quality of written reporting of RCTs would be enhanced in those journals whose editors required use of the CONSORT. The template forms, including the diagram of participant flow through the study from recruitment to analysis, are available for download on the CONSORT website: http://www.consort-statement.org/ (Moher et al., 2001).

Some scholars have voiced concerns that the CONSORT focuses too heavily on the quantitative and statistical aspects of a study to the neglect of the interpretation of results. Consequently, researchers may overemphasize the statistical significance of their findings to the point of equating statistical significance with clinical significance. However, a statistically significant finding may not be clinically important, and statistically non-significant results may have important clinical implications (Chan, Man-Son-Hing, Molnar, & Laupacis, 2001). A review by Chan and associates (2001) found that only 10% of the articles reviewed made specific statements about the clinical importance of their results for their primary outcomes.

Use and Controversies of RCTs in Social Work

Although concerns have been voiced for years about how few RCTs social workers have conducted on the effectiveness of social work interventions (Rosen, Proctor, & Staudt, 1999), there are many in the field who are opposed to the use of RCTs. Rubin and Parrish (2007) have delineated the reasons for the scarcity of RCTs in social work. They include beliefs that RCTs have limited generalizability due to the stringent eligibility criteria of such studies; that RCTs tend to focus on interventions that are readily manualized rather than on socially complex interventions; and that “rigid adherence to treatment manuals can harm the therapeutic alliance and result in poorer treatment outcomes” for clients served (p. 334). Those who hold these beliefs do not seem to understand the distinction between effectiveness and efficacy studies, or that while it is difficult to manualize socially complex interventions, it is not impossible. Furthermore, there is recognition that alliance is an important common factor in all therapeutic interventions and that the level of expertise of the provider is a significant part of the delivery of an intervention in an RCT.

Reviews of research on effective practice in social work have revealed few rigorous, well-controlled studies in this arena (Holosko, 2010; Rosen, Proctor, & Staudt, 1999). Holosko's review found that social work research employs primarily pre-experimental designs (in the Campbell and Stanley classification). Reviewing the three primarily empirically oriented social work journals, Research on Social Work Practice, Journal of Social Service Research, and Social Work Research, Holosko found that the most frequently used design (82.2%) was pre-experimental and that only 2.3% of studies used an experimental design. In reviewing research evaluating social work interventions, Rubin and Parrish (2007) found that 20% employed randomized designs, 31% used quasi-experimental designs, 23% used pre-experimental designs, and the rest used a variety of other designs. Consequently, RCTs remain relatively rare in the study of social work interventions. Hence, we find no specifically social work evidence-based practices in the field. While there are some evidence-based practices (EBPs) that are consistent with social work ideology and practice, they are not identified by those outside the profession as social work interventions.

Promotion and Rationale for Use of RCTs in Social Work

Social work has had a long-term commitment to determining the effectiveness of social work practice, as evidenced by Todd's 1919 book The Scientific Spirit and Social Work, cited in Newman and Roberts (1997). In 1993, William Epstein called for the employment of RCTs in evaluating outcomes of human services, noting that “Despite the particular circumstances of the human services that limit the application of RCTs a more common use of these definitive methodologies is ethically, technically, and theoretically possible (Meinert, 1986). Particularly in the human services, an appropriately administered RCT is far more powerful than any other method to establish credible outcomes” (p. 5).

Currently, the impetus for the use of RCTs is the evidence-based practice (EBP) movement, which started in the medical arena and is permeating all of health and human services, and which has an insatiable need for credible evidence, particularly regarding questions of effectiveness. The driving force for EBP in the United Kingdom, and in the United States as well, is the economic and political demand for demonstrating “tangible returns on welfare expenditures” given limited financial resources (Newman & Roberts, 1997, p. 288). This accountability mentality is provoking the need to know what works and for whom. Thus, there has been a clarion call for increased use of RCTs in determining the effectiveness of social work interventions (Kelly & Moore, 2011; Newman & Roberts, 1997; Soydan, 2008). In a 2010 presentation on the outcomes of social work interventions, Mullen concluded his talk by asking: “The number and percent of RCTs being conducted since 1990 [are] increasing in education, medicine, nursing, and psychology, but what about [in] social work?” (Mullen, 2010).

RCTs and EBPs

RCTs are considered the gold standard of evidence for questions of effectiveness, whether in employing EBP as a process of decision-making or in designating an intervention as an EBP. To make such judgments, there is a well-accepted hierarchy of evidence, with systematic and meta-analytic reviews of RCTs and multiple RCTs at the top and other designs and methods lower down (Roberts & Yeager, 2004; Rosenthal, 2004). Many social workers are concerned that this hierarchy privileges quantitative methods, most particularly experimental designs, over qualitative methods. However, it must be understood that this hierarchy of evidence applies only to questions of the effectiveness of interventions, and not to other types of practice questions, such as understanding the experiences of clients in a certain situation, which would privilege qualitative methods.

Specifically, EBP as a process of decision-making regarding questions of effectiveness has been defined as “the integration of best research evidence with clinical expertise and patient values” (Sackett, Straus, Richardson, Rosenberg, & Haynes, 2000, p. 1). Evidence in this context “refers to scientific information regarding effects of a well-defined treatment compared with a comparison group of no treatment, a placebo, or an alternative treatment” (Drake, Latimer, Leff, McHugo, & Burns, 2004, p. 717). Consequently, when the question is which interventions are most effective for a particular problem or condition, the best research evidence comes from well-constructed and well-conducted RCTs, as they enable causal statements to be made with confidence. However, a single RCT does not constitute sufficient evidence; such judgments require more than one study with positive results. Therefore, finding a systematic review with a meta-analysis (discussed below) that statistically aggregates the results of a number of well-conceived and well-executed RCTs conducted by independent research teams is most helpful in making practice decisions. Similarly, when researchers are determining whether an intervention should be considered an EBP or an empirically supported treatment, they undertake a systematic review of research, particularly RCTs, on the specific intervention of interest. Designating an intervention as an EBP usually requires at least two (and usually more) high-quality RCTs conducted by independent investigators, to control for potential investigator bias (Drake et al., 2004). However, given that EBP decisions and the implementation of EBPs are influenced by the social context as well as by provider and patient characteristics, it is recognized that research evidence is only part of the equation; clinical expertise and patient values must also be considered in making clinical decisions.

Systematic Reviews and Meta-analyses

A systematic review is a highly specified procedure with a clearly delineated, planned protocol, one that can be reproduced, for reviewing all the research studies on a defined topic or responding to a specific research question. Within a systematic review, if the data from the included studies are statistically aggregated, a meta-analysis has been performed (Littell, Corcoran, & Pillai, 2008). Essentially, systematic reviews are themselves research studies in which the primary studies serve as the sampling unit. Systematic reviews of intervention studies, which are often confined to RCTs, have enhanced the ability to engage in EBP as a process and have also made it possible to document the empirical support for specific interventions, leading to the designation of certain interventions as EBPs. While systematic reviews and meta-analyses are not reserved for RCTs, for questions of effectiveness some reviewers confine the review to the most rigorous designs, such as RCTs, while others include less rigorous designs, such as quasi-experimental ones.
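
The core aggregation step of a meta-analysis can be illustrated with a simple fixed-effect, inverse-variance pooling of effect sizes; the sketch below uses entirely hypothetical effect sizes and standard errors and omits the heterogeneity checks a real review would require.

```python
# Minimal sketch (hypothetical effect sizes): fixed-effect, inverse-variance
# pooling of standardized mean differences from several RCTs, the basic
# statistical aggregation step of a meta-analysis.
import math

studies = [
    {"effect": 0.42, "se": 0.15},
    {"effect": 0.25, "se": 0.10},
    {"effect": 0.60, "se": 0.20},
]

weights = [1 / s["se"] ** 2 for s in studies]        # inverse-variance weights
pooled = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

print(f"Pooled effect: {pooled:.2f} "
      f"(95% CI {pooled - 1.96 * pooled_se:.2f} to {pooled + 1.96 * pooled_se:.2f})")
```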

A number of groups synthesize or summarize the available scientific research on given topics, particularly the state of the science on specific interventions. In the health arena, the most noted group under whose auspices systematic reviews are undertaken is the Cochrane Collaboration (www.Cochrane.org). On its website, the Cochrane Collaboration notes that it helps healthcare providers, policymakers, patients, and practitioners make well-informed decisions about health care by preparing and updating accessible reviews that are published online. Cochrane has a rigorous protocol for its reviews. While the reviews are not limited to RCTs, those dealing with questions of effectiveness rely heavily on RCTs. The Campbell Collaboration (www.campbellcollaboration.org), developed to mirror its sibling, focuses on reviews in the areas of education, crime and justice, and social welfare. The Campbell website notes that systematic reviews sum up the available research to answer a specific question and that procedures must be transparent so that reviews can be replicated. As a result, a protocol is required that includes specific inclusion and exclusion criteria, an explicit search strategy, systematic coding and analysis procedures, and, if possible, a meta-analysis. Again, given the emphasis on high-quality evidence, reviews addressing questions of effectiveness often focus on RCTs.

Comparative Effectiveness Research (CER)

Comparative effectiveness research has been defined as “the conduct and synthesis of research comparing the benefits and harms of different interventions and strategies to prevent, diagnose, treat, and monitor health care conditions in ‘real world’ settings. The purpose of this research is to improve health outcomes by developing and disseminating evidence-based information to patients, clinicians, and other decision-makers, responding to their expressed needs, about which interventions are most effective for which patients under specific circumstances” (Social Work Policy Institute, 2010, p. 2). This definition includes both the generation of new research and the synthesis of existing research by different or alternative methods; it also implies comparing at least two treatments, each effective enough to be considered a standard of care. Thus, both are active treatments, rather than an experimental intervention compared to a placebo (Sox, 2010). Although CER may include observational or quasi-experimental studies, stronger designs would include more effectiveness RCTs. The Social Work Policy Institute promotes the role that social workers should play in generating research on social work interventions per se, as well as encouraging social workers to participate in teams that conduct systematic reviews in this arena. The increased emphasis on CER came about with the passage of the American Recovery and Reinvestment Act of 2009 and seems to be promoting effectiveness RCTs as a mainstay of social work intervention research.

Dissemination and Implementation Research

The current emphasis on translation, dissemination, and implementation research arose from concern regarding the gap between the findings of clinical research and what is put into routine practice. In other words, what has been learned from science sometimes does not translate into the care provided to social work clients for a very long time; research has found that it takes, on average, 17 years before scientifically supported interventions are provided in routine practice settings (Institute of Medicine Committee on the Quality of Health Care in America, 2001). Translational research is the broad area of research that examines strategies for “how best to transfer evidence-based knowledge into routine or representative practice” and therefore encompasses both dissemination research and implementation research (Schillinger, 2010, p. 1). Dissemination research focuses on strategies for distributing information to specific audiences in order to spread knowledge about evidence-based interventions so that they are used more widely in routine care settings (Schillinger, 2010). Implementation research takes evidence-based knowledge a step further to assess “how a specific set of activities and designed strategies are used to successfully integrate an evidence-based … intervention” into a routine practice setting (Schillinger, 2010, p. 1). In the public health arena, dissemination and implementation (D & I) research has become a major emphasis, and the National Institutes of Health (NIH) has been promoting this area of research. While this research employs a diversity of designs and methodologies, it also includes the use of RCTs, as indicated by an announcement from the National Heart, Lung, and Blood Institute (2011), which stated: “D & I investigations have historically used randomized controlled trial (RCT) designs. This initiative would continue to encourage the use of rigorous study designs, including RCTs, but other design and analytic strategies may be appropriate as well” (http://www.nhlbi.nih.gov/funding/policies/dissemination&implementationR18.htm, p. 3).

Contributions of Social Work to Engagement with RCTs

In the current context, social workers clearly need to conduct, use, and be able to appraise RCTs. With the increasing emphasis on EBPs, social workers need to contribute to this domain by developing well-identified social work EBPs. Furthermore, there is a movement to bring EBPs into practice settings where social workers are the predominant providers. Consequently, social workers will likely play a major role in the translation and implementation of EBPs, which may require them to take part in RCTs executed in their agency practice settings. To ensure that EBPs are adapted appropriately for social work client populations and providers, social workers must be well versed in the advantages and limitations of RCTs and must be prepared to take a leadership role in these endeavors.

References

Baron, R., & Kenny, D. A. (1986). The moderator-mediator variable distinction in social psychological research: Conceptual, strategic, and statistical considerations. Journal of Personality & Social Psychology, 51, 1173–1182.

Begg, C., Cho, M., Eastwood, S., et al. (1996). Improving the quality of reporting of randomized controlled trials: The CONSORT statement. Journal of the American Medical Association, 276, 7–9.

Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.

Carroll, K., & Rounsaville, B. (2008). Efficacy and effectiveness in developing treatment manuals. In A. Nezu & C. Nezu (Eds.), Evidence-based outcome research. New York: Oxford University Press.

Chan, K., Man-Son-Hing, M., Molnar, F., & Laupacis, A. (2001). How well is the clinical importance of study results reported? An assessment of randomized controlled trials. Canadian Medical Association Journal, 165, 1197–1202.

Drake, R., Latimer, E., Leff, J. S., McHugo, G., & Burns, B. (2004). What is evidence? Child and Adolescent Psychiatric Clinics of North America, 13, 717–728.

Epstein, W. (1993). Randomized controlled trials in the human services. Social Work Research & Abstracts, 29(3), 3–10.

Fellin, P., Tripodi, T., & Meyer, H. (1969). Exemplars of social research. Itasca, IL: F. E. Peacock.

Gartlehner, G., Hansen, R. A., Nissman, D., Lohr, K. N., & Carey, T. S. (2006, April). Criteria for distinguishing effectiveness from efficacy trials in systematic reviews. Technical Review 12 (Prepared by the RTI International–University of North Carolina Evidence-Based Practice Center under Contract No. 290-02-0016). AHRQ Publication No. 06-0046. Rockville, MD: Agency for Healthcare Research and Quality.

Gellis, Z., McGinty, J., Horowitz, A., Bruce, M., & Misener, E. (2007). Problem-solving therapy for late life depression in home care: A randomized field trial. American Journal of Geriatric Psychiatry, 15, 968–978.

Glasgow, R., & Steiner, J. (2012). Comparative effectiveness research to accelerate translation: Recommendations for an emerging field of science. In R. Brownson, G. Colditz, & E. Proctor (Eds.), Dissemination and implementation research in health. New York: Oxford University Press.

Grissmer, D., Subotnik, R., & Orland, M. (2009). A guide to incorporating multiple methods in randomized controlled trials to assess intervention effects. Washington, DC: American Psychological Association.

Havik, O., & VandenBos, G. (1996). Limitations of manualized psychotherapy for everyday clinical practice. Clinical Psychology: Science and Practice, 3, 264–267.

Holosko, M. (2010). What types of designs are we using in social work research and evaluation? Research on Social Work Practice, 20(6), 665–673.

Institute of Medicine Committee on the Quality of Health Care in America. (2001). Crossing the quality chasm: A new health system for the 21st century. Washington, DC: National Academies Press.

Johnson, R. B., Onwuegbuzie, A., & Turner, L. (2007). Toward a definition of mixed methods research. Journal of Mixed Methods Research, 1(2), 112–133.

Kelly, M., & Moore, T. (2011). Methodological, theoretical, infrastructural, and design issues in conducting good outcome studies. Research on Social Work Practice, 21(6), 644–653.

Kraemer, H., Wilson, T., Fairburn, C., & Agras, W. S. (2002). Mediators and moderators of treatment effects in randomized clinical trials. Archives of General Psychiatry, 59, 877–883.

Lewin, S., Glenton, C., & Oxman, A. (2009, September 10). Use of qualitative methods alongside randomized controlled trials of complex healthcare interventions: Methodological study. British Medical Journal, 339, b3496. doi:10.1136/bmj.b3496

Littell, J., Corcoran, J., & Pillai, V. (2008). Systematic reviews and meta-analysis. New York: Oxford University Press.

Mark, M., & Lenz-Watson, A. (2011). Ethics and the conduct of randomized experiments and quasi-experiments in field settings. In A. Panter & S. Sterba (Eds.), Handbook of ethics in quantitative methodology. New York: Routledge.

Meinert, C. (1986). Clinical trials: Design, conduct and analysis. New York: Oxford University Press.

Moher, D., Schulz, K., & Altman, D., for the CONSORT Group (Consolidated Standards of Reporting Trials). (2001). The CONSORT statement: Revised recommendations for improving the quality of reports of parallel-group randomized trials. Journal of the American Medical Association, 285, 1987–1991.

Mullen, E. (2010, November). An overview of social work intervention outcomes. Inaugural Symposium of the National Association of Social Workers, Social Work Policy Institute, Social Work Research & Comparative Effectiveness Research (CER): A Research Symposium to Strengthen the Connection, Washington, DC.

Newman, T., & Roberts, H. (1997). Assessing social work effectiveness in child care practice: The contribution of randomized controlled trials. Child: Care, Health and Development, 23, 287–296.

Oakley, A., Strange, V., Bonell, C., Allen, E., Stephenson, J., & Ripple Study Team. (2006). Process evaluation in randomized controlled trials of complex interventions. British Medical Journal, 332, 413–416.

Orwin, R. (2000). Methodological challenges in study design and implementation: Assessing program fidelity in substance abuse health services research. Addiction, 95(S3), S309–S327.

Roberts, A., & Yeager, K. (2004). Systematic reviews of evidence-based studies and practice-based research: How to search for, develop, and use them. In A. Roberts & K. Yeager (Eds.), Evidence-based practice manual (pp. 3–14). New York: Oxford University Press.

Rosen, A., Proctor, E., & Staudt, M. (1999). Social work research and the quest for effective practice. Social Work Research, 23, 4–14.

Rosenthal, R. (2004). Overview of evidence-based practice. In A. Roberts & K. Yeager (Eds.), Evidence-based practice manual (pp. 20–29). New York: Oxford University Press.

Rubin, A., & Parrish, D. (2007). Problematic phrases in the conclusions of published outcome studies: Implications for evidence-based practice. Research on Social Work Practice, 17, 334–347.

Sackett, D. L., Straus, S., Richardson, S., Rosenberg, W., & Haynes, R. B. (2000). Evidence-based medicine: How to practice and teach EBM (2nd ed.). London, UK: Churchill Livingstone.

Schillinger, D. (2010). An introduction to effectiveness, dissemination and implementation research. Clinical and Translational Science Institute Community Engagement Program, University of California, San Francisco. http://ctsi.ucsf.edu/files/CE/edi_introguide.pdf

Shorter, E. (2011). A brief history of placebos and clinical trials in psychiatry. The Canadian Journal of Psychiatry, 56, 193–197.

Social Work Policy Institute. (2010, January). Social work research and comparative effectiveness research (CER): A research symposium to strengthen the connection. Washington, DC: Author.

Solomon, P., & Cavanaugh, M. (in press). Randomized controlled trials for psychosocial interventions. In G. Guest (Ed.), Public health research methods. Thousand Oaks, CA: Sage Publications.

Solomon, P., Cavanaugh, M., & Draine, J. (2009). Randomized controlled trials: Design and implementation for community-based psychosocial intervention. New York: Oxford University Press.

Sox, H. (2010). Defining comparative effectiveness research: The importance of getting it right. Medical Care, 48(6), S7–S8.

Soydan, H. (2008). Applying randomized controlled trials and systematic reviews in social work research. Research on Social Work Practice, 18, 311–318.

Spillane, J., Pareja, A., Dorner, L., Barnes, C., May, H., Huff, J., & Camburn, E. (2010). Mixing methods in randomized controlled trials (RCTs): Validation, contextualization, triangulation, and control. Educational Assessment, Evaluation and Accountability, 22, 5–28.

Stolberg, H., Norman, G., & Trop, I. (2004). Fundamentals of clinical research for radiologists: Randomized controlled trials. American Journal of Roentgenology, 183(6), 1539–1544.

Torgerson, D., & Torgerson, C. (2008). Designing randomised trials in health, education and the social sciences: An introduction. Hampshire, UK: Palgrave Macmillan.

Veroff, M., Casebeer, A., & Hilsden, R. (2002). Assessing efficacy of complementary medicine: Adding qualitative research methods to the “gold standard.” The Journal of Alternative and Complementary Medicine, 8, 275–281.

Wingood, G., & DiClemente, R. (2008). The ADAPT-ITT model: A novel method of adapting evidence-based HIV interventions. Journal of Acquired Immune Deficiency Syndromes, 47(Suppl. 1), S40–S46.

Wolff, N. (2000). Using randomized controlled trials to evaluate socially complex services: Problems, challenges and recommendations. The Journal of Mental Health Policy and Economics, 3, 97–109.