
Bibliography expanded to reflect additional resources in clinical data mining and consumer-centered evaluation. Consumer-centered evaluation integrates service users in all facets of the evaluation, from data collection to utilization of the findings.

Updated on 1 August 2013.

PRINTED FROM the Encyclopedia of Social Work, accessed online. (c) National Association of Social Workers and Oxford University Press USA, 2016. All Rights Reserved. Under the terms of the applicable license agreement governing use of the Encyclopedia of Social Work accessed online, an authorized individual user may print out a PDF of a single article for personal use, only (for details see Privacy Policy and Legal Notice).


Agency-Based Research

Abstract and Keywords

This entry reviews agency-based research and the unique demands created by the organizational context where this activity resides. Three primary stakeholder groups are identified: administrators and program managers, supervisors, and direct service workers and clinicians. Possible uses of agency-based research by each of the respective stakeholder groups are described. Finally, the role of service consumers in agency-based research is discussed.

Keywords: clinical and program evaluation, consumer involvement, research utilization

Social work is and always has been an agency-based occupation. Notwithstanding the movement to “professionalize” social work through licensure and certification and the interest in private practice, most practitioners continue to work within or on behalf of social agencies. Similarly, most social work clients receive services within agencies. These organizations may be specifically social work settings or those in which social workers play important but ancillary roles, for example, hospitals, mental health clinics, schools, prisons, and the like.

Despite the pervasive organizational underpinnings of social work practice and the pioneering work of Tony Grasso (Grasso & Epstein, 1992, 1993), the value of agency-based research remains underappreciated within the field. Instead, most published social work research is carried out under university auspices, with the prevailing paradigm ascribing social work knowledge development to social work academics. More recently, the evidence-based practice movement has further reinforced this division of labor by characterizing academics as knowledge producers and practitioners as obstacles to research (Rubin, 2006), as knowledge implementers (Gambrill, 2006), or as “propagandists” of misinformation (Gambrill, 2010). Rarely are practitioners of any stripe viewed as legitimate contributors to the knowledge stream.

By contrast, this entry focuses on the conduct of programmatic, supervisory, and clinical research within social work agencies, by agency staff. More specifically, it describes and illustrates the administrative, supervisory, and clinical uses of agency-based research. In so doing, it suggests how agency practitioners can conduct their own research for internal decision making as well as for making external contributions to knowledge. This requires the design, implementation, and utilization of studies that attend specifically to organizational aspects of service delivery.

As indicated earlier, social work agencies have three sets of “stakeholders” with a professional interest in agency-based knowledge development: (a) administrators and program managers, (b) supervisors, and (c) direct service workers and clinicians. Each group has its own parallel stakes in questions about client need, service delivery, and client outcome. At the organizational level, the research studies that address these questions are referred to as planning studies, monitoring studies, and program evaluations. At the direct service or clinical level, research can help assess the needs of individual clients; the clinical interventions they receive; and the individual, family, or group outcomes that result. Between the programmatic and individual client levels, supervisors may be concerned with the training and supervisory needs of their units or individual workers and with staff responses to supervisory interventions.

To answer these questions, agency-based social workers at each level have available to them a wide range of research approaches and methods as well as an eclectic range of data sources (Kapp & Anderson, 2010, chapter 8). These include both qualitative and quantitative methods that can be applied to already available information or make use of original information. Available information can come from within the agency itself in the form of computerized information or case records, community informational resources outside the agency, or published research literature. Original information may be secured through observation, interviews, or questionnaires. The latter can be based on already available instruments or can be completely original.

Of course, the appropriate use of any of these approaches requires some degree of research sophistication and material and technical resources. And they must be used in ways that are ethical and sensitive to the cultural values and sensibilities of clients and agency staff alike (Kapp & Anderson, 2010, chapters 5 and 6). Agencies may have their own research units, may employ outside research consultants, or may collaborate with universities to conduct this research. But what sets agency-based research apart is that its primary purpose is to enhance the effectiveness of the agency in serving its clientele and in achieving its mission. In addition, the research task must accommodate the dynamic context of the organization. Epstein (2001) suggests that practice-based research principles should be used where the practice setting is the centerpiece and research activity is conducted in a manner complementary to service processes and activities. Kapp and Anderson (2010) further specify that each step of the evaluation process, from its design and implementation to dissemination of the findings to users, should be centered on the intended use of the evaluation data by various agency practitioners.

Clinical Uses of Agency-Based Research

As noted earlier, a significant gap exists between much of the research conducted on clinical practice and clinical practitioners’ use of that research (Epstein & Blumenfield, 2001). One strategy for addressing this gap is for practitioners to design, implement, and use research on their own practice within the agency setting (Vonk, Tripodi, & Epstein, 2006). Single-subject design is one research method that has been sanctioned and supported by social work practitioners for assessing and evaluating clinical effectiveness. This technique applies the logic of time-series evaluation methods to the treatment progression of a single individual (Rubin & Babbie, 2005; Tripodi, 1994; Tripodi & Di Noia, 2008). Program logic models have been used to create conceptual intervention models that organize and facilitate practitioner research projects (Alter & Egan, 1997; Alter & Murty, 1997; Kapp & Anderson, 2010, chapter 7).
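The time-series logic of a single-subject design can be sketched in a few lines. One common decision rule in single-subject design texts is the two-standard-deviation band: intervention-phase scores falling outside a band of the baseline mean ± 2 SD suggest meaningful change. The client data, measure, and phase lengths below are entirely hypothetical; this is an illustrative sketch, not a procedure taken from the works cited above.

```python
from statistics import mean, stdev

def two_sd_band(baseline, intervention):
    """Compare intervention-phase scores against a band of
    baseline mean +/- 2 standard deviations."""
    m, sd = mean(baseline), stdev(baseline)
    lower, upper = m - 2 * sd, m + 2 * sd
    # Intervention points outside the baseline band suggest change.
    outside = [x for x in intervention if x < lower or x > upper]
    return m, sd, outside

# Hypothetical weekly self-rated anxiety scores for one client
# (lower is better): six baseline weeks, six intervention weeks.
baseline = [8, 7, 9, 8, 8, 7]
intervention = [7, 6, 5, 4, 4, 3]

m, sd, outside = two_sd_band(baseline, intervention)
print(f"baseline mean {m:.1f}, band half-width {2 * sd:.1f}")
print(f"{len(outside)} of {len(intervention)} intervention points fall outside the band")
```

In practice a clinician would plot both phases and apply clinical judgment alongside the statistical rule; the band is an aid to interpretation, not a verdict.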

Epstein and colleagues have continued to develop an innovative, direct service practitioner-friendly approach described as clinical data mining (CDM). This method positions the individual clinician as the primary researcher responsible for designing, implementing, and utilizing the research: clinician-derived research questions are addressed by collecting and analyzing data directly from case files. The method has been effective in a variety of settings (Auslander, Dobrof, & Epstein, 2001; Epstein & Blumenfield, 2001; Peake, Epstein, & Medeiros, 2005; Zilberfein, Hutson, Snyder, & Epstein, 2001) and in multidisciplinary and international contexts (Joubert & Epstein, 2005; Lalayants et al., 2012).
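At its simplest, a CDM analysis turns fields extracted from closed case records into clinician-framed tabulations. The sketch below uses wholly hypothetical fields and records to illustrate the kind of retrospective cross-tabulation involved; actual CDM studies are considerably more elaborate and are governed by agency ethics and privacy procedures.

```python
from collections import Counter

# Hypothetical rows a clinician might extract from closed case
# files: (referral_source, outcome_at_closing).
cases = [
    ("school", "improved"), ("school", "improved"),
    ("court", "no change"), ("self", "improved"),
    ("court", "improved"), ("school", "no change"),
    ("self", "improved"), ("court", "no change"),
]

# Cross-tabulate outcome by referral source -- the kind of
# clinician-derived question CDM addresses retrospectively.
table = Counter(cases)
for src in sorted({src for src, _ in cases}):
    improved = table[(src, "improved")]
    total = sum(n for (s, _), n in table.items() if s == src)
    print(f"{src:<8} improved {improved}/{total}")
```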

More recently, organizational research-oriented doctoral students have demonstrated how a CDM approach can be applied to studying organizational strategies as diverse as multidisciplinary child protection efforts (Lalayants, 2010), workplace trauma interventions (DeFraia, 2011), agency accreditation (Williams-Gray, 2008), and social entrepreneurship (VanBrackle, 2012).

Supervisory Uses of Agency-Based Research

The information obtained through agency-based research is a critical resource for administrators and supervisors (Schoech, 2000). Such data are essential for determining whether client outcomes are being achieved (Poertner & Rapp, 2007). Typically, administrators and supervisors conduct agency-based research by exploiting existing data sources, specifically internal information systems (Hatry, 2004).

Supervisors are able to use information from clinical information systems to foster the improvement of clinical skills (Mooradian & Grasso, 1993). Current information technology provides access to large automated data systems that sort and analyze client information in custom reports, which allow supervisors to “drill down” to key data that address the development of direct service skills. This technology allows supervisors to create specific staff or team reports that facilitate information-based supervision (Kapp, Hahn, & Rand, 2011; Marty & Barkett, 2002; Moore & Press, 2002).
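A minimal sketch of such a “drill-down” report follows, assuming hypothetical workers, clients, and fields; no actual agency information system or vendor product is referenced.

```python
from collections import defaultdict

# Hypothetical records pulled from an agency information system:
# (worker, client_id, contacts_this_month, goal_met).
records = [
    ("Alice", "c1", 4, True), ("Alice", "c2", 1, False),
    ("Ben",   "c3", 3, True), ("Ben",   "c4", 5, True),
    ("Ben",   "c5", 0, False),
]

# Drill down to a per-worker summary a supervisor could review in
# supervision: caseload size, average contacts, goals met.
by_worker = defaultdict(list)
for worker, _, contacts, goal_met in records:
    by_worker[worker].append((contacts, goal_met))

for worker, rows in sorted(by_worker.items()):
    avg = sum(c for c, _ in rows) / len(rows)
    met = sum(1 for _, g in rows if g)
    print(f"{worker}: {len(rows)} clients, {avg:.1f} avg contacts, {met} goals met")
```

The same grouping logic, run per program rather than per worker, yields the manager-level reports discussed in the next section.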

Managerial Uses of Agency-Based Research

Information technology is equally useful for creating custom reports that give managers access to key data addressing specific service-related questions. For example, managers can generate program-specific reports of successful clients or reports of family contacts organized by worker (Kapp et al., 2011; Marty & Barkett, 2002; Moore & Press, 2002).

Program improvement planning integrates agency-based research with organizational initiatives directed at improving program performance. The implementation and impact of specific program enhancements targeted at program outcomes (for example, an aftercare program intended to increase postplacement stability) are evaluated using specific data collection plans (Hartnett & Kapp, 2003; Kapp & Anderson, 2010).

Consumer Involvement in Agency-Based Research

Social work’s ethical emphasis on empowerment, together with the growing role of the consumer in service provision across fields of practice, provides continuing support for involving consumers in agency-based research. Consequently, consumer satisfaction surveys are very common in agency-based research (Corrigan, Lickey, Campion, & Rashid, 2000; Fischer & Valley, 2000; Kapp & Anderson, 2010; Kapp & Vela, 2004; Martin, Petr, & Kapp, 2002). Although philosophical reasons have been cited for more consistent inclusion of service consumers in agency-based research (Boll, 1995), Corrigan and Garman (1997) argue that consumer involvement is likely to highlight aspects of service implementation and effectiveness that may otherwise be overlooked. More recently, service consumers and agency evaluators have collaborated to develop and implement strategies for including consumers in study design, data collection, report writing, and utilization of agency-based research findings (Linhorst & Eckert, 2002; Monson & Thurley, 2011). Guiding principles and procedures for designing and implementing this work have been developed and documented (Malins et al., 2006). These developments illustrate the vital role of service consumers in the design, implementation, and use of agency-based evaluation.


References

Alter, C., & Egan, M. (1997). Logic modeling: A tool for teaching critical thinking in social work practice. Journal of Social Work Education, 33(1), 85–102.

Alter, C., & Murty, S. (1997). Logic modeling: A tool for teaching practice evaluation. Journal of Social Work Education, 33(1), 103–117.

Auslander, G., Dobrof, J., & Epstein, I. (2001). Comparing social work’s role in renal dialysis in Israel and the United States: The practice-based research potential of available clinical information. Social Work in Health Care, 33(3/4), 129–151.

Boll, J. (1995). Member roles in program evaluation: A case study from a psychosocial club. Psychiatric Social Rehabilitation Journal, 19, 79–82.

Corrigan, P. W., & Garman, A. N. (1997). Considerations for research on consumer empowerment and psychosocial intervention. Psychiatric Services, 48, 347–352.

Corrigan, P. W., Lickey, S. E., Campion, J., & Rashid, F. (2000). Mental health team leadership and consumer satisfaction and quality of life. Psychiatric Services, 51(6), 781–785.

DeFraia, G. (2011). Organizational resilience to workplace trauma: Predicting post-incident workgroup outcomes through clinical data-mining. Unpublished doctoral dissertation, City University of New York.

Epstein, I. (2001). Using available clinical information in practice-based research: Mining for silver while dreaming of gold. Social Work in Health Care, 33(3/4), 15–32.

Epstein, I., & Blumenfield, S. (Eds.). (2001). Clinical data-mining in practice-based research: Social work in hospital settings. Binghamton, NY: Haworth.

Fischer, R. L., & Valley, C. (2000). Monitoring the benefits of family counseling: Using satisfaction surveys to assess the clients’ perspective. Smith College Studies in Social Work, 90(2), 272–286.

Gambrill, E. (2006). Evidence-based practice and policy: Choices ahead. Research on Social Work Practice, 16(3), 338–357.

Gambrill, E. (2010). Evidence-informed practice: Antidote to propaganda in the helping professions? Research on Social Work Practice, 20, 302–320.

Grasso, T., & Epstein, I. (Eds.). (1992). Research utilization in the social services. Binghamton, NY: Haworth.

Grasso, T., & Epstein, I. (Eds.). (1993). Information systems in child, youth and family agencies. Binghamton, NY: Haworth.

Hartnett, H., & Kapp, S. (2003). Establishment of quality programming. In K. Yeager & A. Roberts (Eds.), Evidence-based practice manual: Research and outcome measures in health and human services. London: Oxford University Press.

Hatry, H. (2004). Using agency records. In J. Wholey, H. Hatry, & K. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 396–412). San Francisco, CA: Jossey-Bass.

Joubert, L., & Epstein, I. (Eds.). (2005). Multi-disciplinary data-mining in allied health practice: Another perspective on Australian research and evaluation. Journal of Social Work Research and Evaluation, 6(2, Special Issue), 139–141.

Kapp, S., & Anderson, G. (2010). Agency-based program evaluation: Lessons from practice. Thousand Oaks, CA: Sage.

Kapp, S., Hahn, S. A., & Rand, A. (2011). Building a performance information system for statewide residential treatment services. Residential Treatment for Children and Youth, 28(1), 39–54.

Kapp, S., & Vela, R. (2004). The Parent Satisfaction with Foster Care Services Scale. Child Welfare, 83(3), 263–287.

Lalayants, M. (2010). Multidisciplinary clinical consultation in child protection: Contextual influences and stakeholder perceptions of best practices. Unpublished doctoral dissertation, City University of New York.

Lalayants, M., Epstein, I., Auslander, G., Chan, W., Fouche, C., Giles, R., et al. (2012). Clinical data-mining: Learning from practice in international settings. International Journal of Social Work, 1–23.

Linhorst, D. M., & Eckert, A. (2002). Involving people with severe mental illness in evaluation and performance improvement. Evaluation and the Health Professions, 25, 285–301.

Malins, G., Morland, K., Strang, J., Dowson, T., Hunt, S., Williamson, D., et al. (2006). A framework for mental health consumers to evaluate service provision. Australasian Psychiatry, 14(3), 277–280.

Martin, J. S., Petr, C., & Kapp, S. (2002). Consumer satisfaction with children’s mental health services. Child and Adolescent Social Work Journal, 20(3), 211–226.

Marty, D., & Barkett, A. (2002). Data analysis workbench. Lawrence, KS: Office of Adult Mental Health, University of Kansas School of Social Welfare.

Monson, K., & Thurley, M. (2011). Consumer participation in a youth mental health service. Early Intervention in Psychiatry, 5, 381–388.

Mooradian, J., & Grasso, A. J. (1993). The use of an agency-based information system in structural family therapy treatment. In A. J. Grasso & I. Epstein (Eds.), Information systems in child, youth, and family agencies: Planning, implementation, and service enhancement (pp. 49–76). Binghamton, NY: Haworth.

Moore, T., & Press, A. (2002). Results oriented management in child welfare. University of Kansas School of Social Welfare. Retrieved April 2013, from http://www.rom.ku.edu

Peake, K., Epstein, I., & Medeiros, D. (Eds.). (2005). Clinical and research uses of an adolescent intake questionnaire: What kids need to talk about. Binghamton, NY: Haworth.

Poertner, J., & Rapp, C. (2007). Social administration: A consumer-centered approach. New York: Longman.

Rubin, A. (2006). Foreword. In L. Alexander & P. Solomon (Eds.), The research process in the human services: Behind the scenes (pp. xii–xiv). Belmont, CA: Thomson-Brooks-Cole.

Rubin, A., & Babbie, E. R. (2005). Research methods for social work (5th ed.). Belmont, CA: Wadsworth/Thompson Learning.

Schoech, D. (2000). Managing information for decision making. In R. J. Patti (Ed.), The handbook of social welfare management (pp. 321–340). Thousand Oaks, CA: Sage.

Tripodi, T. (1994). A primer on single-subject design for clinical social workers. Washington, DC: NASW Press.

Tripodi, T., & Di Noia, J. (2008). Single case design for clinical social workers. Washington, DC: NASW Press.

VanBrackle, L. G. (2012). Promoting feast or surviving famine: The financial implications of social enterprise for nonprofit human service organizations. Unpublished doctoral dissertation, City University of New York.

Vonk, M. E., Tripodi, T., & Epstein, I. (2006). Research techniques for clinical social workers (2nd ed.). New York: Columbia University Press.

Williams-Gray, B. (2008). Accreditation as an intervention and a means for expanding organizational capacity: An organizational data-mining study. Unpublished doctoral dissertation, City University of New York.

Zilberfein, F., Hutson, C., Snyder, S., & Epstein, I. (2001). Social work practice with pre- and post-liver transplant patients: A retrospective self study. Social Work in Health Care, 33(3/4), 91–104.