WHAT WORKS, WISCONSIN – RESEARCH TO PRACTICE SERIES

Guidelines for selecting an evidence‐based program: Balancing community needs, program quality, and organizational resources

ISSUE #3, MARCH 2007
BY STEPHEN A. SMALL, SIOBHAN M. COONEY, GAY EASTMAN, AND CAILIN O’CONNOR

University of Wisconsin–Madison and University of Wisconsin–Extension

In recent years there has been a significant increase in the number of evidence‐based programs designed to reduce individual and family problems and promote healthy development. Because each program has undergone rigorous testing and evaluation, program practitioners can reassure potential program sponsors that the program is likely to be effective under the right conditions, with the appropriate audience, and with proper implementation. However, which program is the “right” one for a particular setting and audience is not always easy to determine. When selecting a program, it is important to move beyond current fads or what the latest salesperson is selling and consider whether a program fits with the local agency’s goals and values, the community setting, and the needs of the targeted audience. The long‐term success of a program depends on the program being not only a good one, but also the right one.

Unfortunately, there is currently little research on how best to go about the process of selecting an evidence‐based program. Consequently, the guidelines we present in this brief are based primarily on our experiences working with community‐based organizations, the experiences of practitioners, and common sense. We have identified a number of factors that we believe should be considered when deciding which program is most appropriate for a particular audience and sponsoring organization. These factors can be grouped into three general categories: program match, program quality, and organizational resources. To assist with the process of program selection, we have developed a set of questions to consider when selecting an evidence‐based program for your particular agency and audience.


Program match: Questions to ask

• How well do the program’s goals and objectives reflect what your organization hopes to achieve?

• How well do the program’s goals match those of your intended participants?

• Is the program of sufficient length and intensity (i.e., “strong enough”) to be effective with this particular group of participants?

• Are potential participants willing and able to make the time commitment required by the program?

• Has the program demonstrated effectiveness with a target population similar to yours?

• To what extent might you need to adapt this program to fit the needs of your community? How might such adaptations affect the effectiveness of the program?

• Does the program allow for adaptation?

• How well does the program complement current programming, both in your organization and in the community?

The issues raised by program match, program quality, and organizational resources are overlapping. Selecting a program usually requires balancing different priorities, so it’s important to have a good understanding of all three before determining the usefulness of a program for a particular situation.

PROGRAM MATCH

A first set of factors to consider is related to how well the program will fit with your purposes, your organization, the target audience, and the community where it will be implemented.

Perhaps the most obvious factor to consider is whether the goals and objectives of a program are consistent with the goals and objectives that the sponsoring organization hopes to achieve. While this may seem apparent, it is not uncommon for sponsors to select a program because there is grant money available to support it or everyone else is doing it. Just because a program is the latest fad or

there’s  funding  to  support  it  doesn’t  necessarily  mean  it  is  going  to  accomplish  the  goals  of  the  sponsoring organization or meet  the needs of  the  targeted audience.    A  second  aspect  of  program  match  involves  whether  a  program  is  strong  enough  to  address  the level and complexity of risk factors or current  problems  among  participants.  This  refers  to  the  issue of adequate program duration and intensity.  Changing existing problem behaviors or  counter‐ acting  a  large  number  of  risk  factors  in  partici‐ pants’  lives  requires  many  hours  of  engaging  programming over a period of time. For example,  a short primary prevention program designed for  families  facing  few problems or  risks may not be  effective  for  an  audience  already  experiencing  more severe problems.     Another  facet  of  program  match  concerns  the  length of the program and whether your intended  audience  will  be  willing  and  able  to  attend  the  required  number  of  sessions.  Many  evidence‐ based  programs  are  of  fairly  long  duration,  involving  multiple  sessions  over  weeks  or  months.  A  common  concern  of  program  pro‐ viders  is  whether  potential  participants  will  make  such  a  long‐term  commitment.  Because  this is a realistic concern, program sponsors need  to assess  the  targeted audience’s availability  for  and interest in a program of a particular length.1  The  reality  is,  if people don’t  attend,  then  they  can’t reap the program’s benefits. However, it is  also important to keep in mind that programs of  longer  duration  are  more  likely  to  produce  lasting behavior change in participants. Program  sponsors sometimes need  to  find a compromise  between the most effective program and one that  will be a realistic commitment for participants.     Matching a program with the values and culture  of the intended audience is also critically import- ant.  Some  programs  are  intentionally  designed  for  particular  populations  or  cultural  groups.  Most  are more  culturally  generic  and designed 

1 Issue #2 in this series addresses strategies for recruiting and retaining participants.


Program quality: Questions to ask

• Has this program been shown to be effective? What is the quality of this evidence?

• Is the level of evidence sufficient for your organization?

• Is the program listed on any respected evidence‐based program registries? What rating has it received on those registries?

• For what audiences has the program been found to work?

• Is there information available about what adaptations are acceptable if you do not implement this program exactly as designed? Is adaptation assistance available from the program developer?

• What is the extent and quality of training offered by the program developers?

• Do the program’s designers offer technical assistance? Is there a charge for this assistance?

• What is the opinion and experience of others who have used the program?

for general audiences.2 It’s important to consider whether the targeted audience will find the program acceptable and will want to participate. The ideal situation would be finding evidence that a program is effective for the specific population(s) you intend to use it with. In that case, you could reasonably expect the program to be effective when it is implemented well.

Unfortunately, many evidence‐based programs have only been evaluated with a limited number of populations and under a relatively narrow range of conditions. While many evidence‐based programs are effective and appropriate for a range of audiences and situations, it is rare to find a program that is suitable or effective for every audience or situation. In many cases, you will need to carefully read program materials or talk to the program’s designers to see whether adapting a program, or using it with an audience for which it hasn’t been evaluated, is reasonable.

Depending on the design, programs may or may not be amenable to adaptation. If adapting a program to a particular cultural group is important, then program sponsors should seriously consider whether such changes are possible. Some program designers are willing to help you with program adaptation so that the program’s effectiveness will not be undermined by these changes.3

Finally, when considering which program to select, sponsors should consider whether the program complements other programs being offered by the sponsoring organization and by other organizations in the community. The most effective approaches to prevention and intervention involve addressing multiple risk and

2 Issue #1 in this series addresses the issue of culture and evidence‐based programs.
3 Issue #4 in this series will address issues of program fidelity and adaptation.

protective factors, developmental processes, and settings. Any new program implemented in a community should address needs that other community programs fail to address, which will help to create the kind of multi‐pronged approach that leads to greater overall effectiveness.

PROGRAM QUALITY

A second set of factors to consider when selecting a program is related to the quality of the program itself and the evidence for its effectiveness.

The program should have solid, research‐based evidence showing that it is effective. For a program to be deemed evidence‐based, it must go through a series of rigorous evaluations. Such evaluations have experimental or quasi‐experimental designs – meaning they compare a group of program participants to a similar group of people who did not participate in the program to determine whether program participation is associated with positive changes. These kinds of evaluations allow for a reasonable assumption that it was the program itself that changed people’s knowledge, attitudes, or behavior.


TABLE 1: Selected evidence‐based program registries 

Blueprints for Violence Prevention
http://www.colorado.edu/cspv/blueprints/index.html
This registry is one of the most stringent in terms of endorsing programs as Model or Promising. Programs are reviewed by an expert panel and staff at the University of Colorado, and endorsements are updated regularly. Programs are added to and excluded from the registry based on new evaluation findings.

Helping America’s Youth
http://guide.helpingamericasyouth.gov/programtool.cfm
This registry was developed with the help of several federal agencies. Programs focus on a range of youth outcomes and are categorized as Level 1, Level 2, or Level 3 according to their demonstrated effectiveness. The registry is updated regularly to incorporate new evidence‐based programs.

Office of Juvenile Justice and Delinquency Prevention Model Programs Guide
http://www.dsgonline.com/mpg2.5/mpg_index.htm
This registry is one of the largest currently available and is continuously updated to include new programs. Programs found on this registry are designated as Exemplary, Effective, or Promising.

Promising Practices Network
http://www.promisingpractices.net/
A project of the RAND Corporation, this registry regularly updates its listings of Effective and Promising programs. Programs are reviewed and endorsed by project staff.

Strengthening America’s Families
http://www.strengtheningfamilies.org/html/
Although this registry was last revised in 1999, it is the only registry with a focus specifically on family‐based programs. Programs were reviewed by expert panels and staff at the University of Utah and the Center for Substance Abuse Prevention. They were then designated as Exemplary I, Exemplary II, Model, or Promising.

Substance Abuse and Mental Health Services Administration (SAMHSA) National Registry of Evidence‐Based Programs and Practices
http://www.nrepp.samhsa.gov
This recently re‐launched site no longer categorizes programs as Model, Effective, or Promising. Instead, programs are summarized and the quality of the research findings is rated separately for each outcome that has been evaluated. SAMHSA has also introduced a “Readiness for Dissemination” rating for each reviewed program. Nominations are accepted each year for programs to be reviewed; SAMHSA funds independent consultants to review nominated programs and update the registry.

As funders and program sponsors become more committed to implementing evidence‐based programs, program developers are increasingly likely to promote their programs as evidence‐based. However, just because a program developer advertises a program as evidence‐based doesn’t

mean that it meets the standards discussed above. For example, a program might be “research‐based” but not “evidence‐based.” A research‐based program has been developed based on research about the outcomes or processes it addresses. However, it has probably not been subjected to the rigorous evaluations and real‐world testing that are needed to designate a program as evidence‐based. The simplest way to determine evidence of a program’s effectiveness is to examine the designations given by well‐established and respected evidence‐based program registries.


Organizational resources: Questions to ask

• What are the training, curriculum, and implementation costs of the program?

• Can your organization afford to implement this program now and in the long term?

• Do you have staff capable of implementing this program? Do they have the qualifications recommended or required to facilitate the program?

• Would your staff be enthusiastic about a program of this kind, and are they willing to make the necessary time commitment?

• Can this program be implemented in the time available?

• What’s the likelihood that this program will be sustained in the future?

• Are your community partners supportive of your implementation of this program?

Program registries classify programs at different levels of endorsement based on evidence of effectiveness for certain participant outcomes. See Table 1 for an annotated listing of program registries.

If a program is not listed on a respected registry, then it is important to seek out scientific evidence of the program’s effectiveness. At a minimum, you should review any evaluation studies that have been conducted by the program developer and external evaluators. Ideally, these evaluations use an experimental or quasi‐experimental research design. Another sign of a high‐quality evaluation is that its results have been published in a well‐respected, peer‐reviewed scientific journal.

An additional indicator of program quality to consider is the level of training and follow‐up support available from the program designers. Some programs have a great deal of resources available to help program implementers.

These resources can be especially important if you’re working with a unique audience and need to make adaptations, or if program implementation is particularly complex. As a general rule, more intensive training and more follow‐up support from the program developer will increase the effectiveness and sustainability of a program over time. Some programs provide excellent technical assistance; staff members are accessible and willing to address questions that arise while the program is being implemented. Often this technical assistance is free, but sometimes program designers charge an additional fee for it. Therefore, the benefits and costs of technical assistance should be kept in mind when selecting an evidence‐based program.

Finally, while the scientific literature and information from the program developer provide key information about program quality, don’t overlook the experience of practitioners who have implemented the program. Ask whether they encountered any obstacles when implementing the program, whether they believe the program was effective, which audiences seemed to respond most positively to the program, and whether they would recommend the program for your situation. This type of information is usually not included in scientific program evaluations but is a critically important consideration for most practitioners.

ORGANIZATIONAL RESOURCES

A final set of factors to consider when selecting a program is related to the resources required for carrying out the program. Consider whether your organization has the expertise, staff, financial support, and time available to implement the program. Implementing evidence‐based programs is usually fairly time‐ and resource‐intensive. For example, evidence‐based programs often require facilitators to attend multi‐day trainings or call for facilitators with particular qualifications. Even if a program is a good fit for your community, if your organization doesn’t have the human or financial resources to adequately implement the program, its chances of success are limited.


In addition, when selecting a program it makes sense to assess your organization’s long‐term goals and consider which programs have the best chance of being continued in the future. Programs that require significant external funding are especially prone to abandonment after the funding runs out. Some programs are more readily adopted by existing organizations and are easier to support over the long run. Think about whether a program has a good chance of being integrated into the base programming of your organization. Can the program be continued in the future with existing staff and resources, or will it always require external support?

Lastly, because many evidence‐based programs are resource intensive, think about collaborating with other organizations in the community to deliver a program. Selecting a program that meets the needs of two or more agencies may allow for the pooling of resources, thus enhancing the likelihood that the program can be adequately funded, implemented, and sustained over time. Additionally, such an arrangement can lead to positive, long‐term partnerships with other community agencies.

While all three of these factors are important, some may be more crucial to your organization than others. The key to selecting the best program for your particular situation involves balancing different priorities and trade‐offs and finding a program that best meets these competing demands. By selecting a high‐quality program that matches the needs of your audience and community and the resources of your organization, you greatly enhance the likelihood that you will have an effective program that will have a long‐term impact and improve the lives of its participants.

WHAT WORKS, WISCONSIN: RESEARCH TO PRACTICE SERIES

This is one of a series of Research to Practice briefs prepared by the What Works, Wisconsin team at the University of Wisconsin–Madison, School of Human Ecology, and Cooperative Extension, University of Wisconsin–Extension. All of the briefs can be downloaded from: http://whatworks.uwex.edu

This series expands upon ideas that are discussed in What Works, Wisconsin: What Science Tells Us about Cost‐Effective Programs for Juvenile Delinquency Prevention, which is also available for download at the address above.

This publication may be cited without permission provided the source is identified as: Small, S.A., Cooney, S.M., Eastman, G., & O’Connor, C. (2007). Guidelines for selecting an evidence‐based program: Balancing community needs, program quality, and organizational resources. What Works, Wisconsin Research to Practice Series, 3. Madison, WI: University of Wisconsin–Madison/Extension.

This project was supported by Grant Award No. JF‐04‐PO‐0025 awarded by the Wisconsin Office of Justice Assistance through the Wisconsin Governor’s Juvenile Justice Commission with funds from the Office of Juvenile Justice and Delinquency Prevention.

The authors wish to thank Mary Huser of the University of Wisconsin–Extension for her edits, comments, and suggestions in the development of this Research to Practice brief.

Rubric performance levels: Exemplary, Proficient, Progressing, Emerging

Element (1): Responsiveness: Did the student respond to the main question of the week? (9 points, 28% of total)

Exemplary (9 points): Posts exceed requirements of the Discussion instructions (e.g., respond to the question being asked; go beyond what is required [i.e., incorporate additional readings outside of the assigned Learning Resources and/or share relevant professional experiences]; are substantive and reflective; and refer to Learning Resources, demonstrating that the student has considered the information in the Learning Resources and colleague postings).

Proficient (7–8 points): Posts are responsive to and meet the requirements of the Discussion instructions. Posts respond to the question being asked in a substantive, reflective way and refer to Learning Resources, demonstrating that the student has read, viewed, and considered the Learning Resources and colleague postings.

Progressing (4–6 points): Posts are somewhat responsive to the requirements of the Discussion instructions. Posts are not substantive and rely more on anecdotal evidence (i.e., largely comprised of student opinion), and/or do not adequately demonstrate that the student has read, viewed, and considered the Learning Resources and colleague postings.

Emerging (0–3 points): Posts are unresponsive to the requirements of the Discussion instructions; miss the point of the question by providing responses that are not substantive and/or are solely anecdotal (i.e., comprised of only student opinion); and do not demonstrate that the student has read, viewed, and considered the Learning Resources and colleague postings.

Element (2): Critical Thinking, Analysis, and Synthesis: Is the student able to make meaning of the information? (9 points, 28% of total)

Exemplary (9 points): Posts demonstrate the student’s ability to apply, reflect, AND synthesize concepts and issues presented in the weekly Learning Objectives. The student has integrated and mastered the general principles, ideas, and skills presented. Reflections include clear and direct correlation to authentic examples or are drawn from professional experience; insights demonstrate significant changes in awareness, self-understanding, and knowledge.

Proficient (7–8 points): Posts demonstrate the student’s ability to apply, reflect, OR synthesize concepts and issues presented in the weekly Learning Objectives. The student has integrated many of the general principles, ideas, and skills presented. Reflections include clear and direct correlation to authentic examples or are drawn from professional experience, and share insights that demonstrate a change in awareness, self-understanding, and knowledge.

Progressing (4–6 points): Posts demonstrate minimal ability to apply, reflect, or synthesize concepts and issues presented in the weekly Learning Objectives. The student has not fully integrated the general principles, ideas, and skills presented. There are little to no salient reflections, examples, or insights/experiences provided.

Emerging (0–3 points): Posts demonstrate a lack of ability to apply, reflect, or synthesize concepts and issues presented in the weekly Learning Objectives. The student has not integrated the general principles, ideas, and skills presented. There are no reflections, examples, or insights/experiences provided.

Element (3): Professionalism of Writing: Does the student meet graduate-level writing expectations? (5 points, 16% of total)

Exemplary (5 points): Posts meet graduate-level writing expectations (e.g., are clear, concise, and use appropriate language; make few errors in spelling, grammar, and syntax; provide information about sources when paraphrasing or referring to them; use a preponderance of original language and directly quote only when necessary or appropriate). Postings are courteous and respectful when offering suggestions, constructive feedback, or opposing viewpoints.

Proficient (4 points): Posts meet most graduate-level writing expectations (e.g., are clear; make only a few errors in spelling, grammar, and syntax; provide adequate information about a source when paraphrasing or referring to it; use original language wherever possible and directly quote only when necessary and/or appropriate). Postings are courteous and respectful when offering suggestions, constructive feedback, or opposing viewpoints.

Progressing (2–3 points): Posts partially meet graduate-level writing expectations (e.g., use language that is unclear or inappropriate; make more than occasional errors in spelling, grammar, and syntax; provide inadequate information about a source when paraphrasing or referring to it; under-use original language and over-use direct quotes). Postings are at times less than courteous and respectful when offering suggestions, feedback, or opposing viewpoints.

Emerging (0–1 points): Posts do not meet graduate-level writing expectations (e.g., use unclear or inappropriate language; make many errors in spelling, grammar, and syntax; do not provide information about a source when paraphrasing or referring to it; directly quote from original source materials or consistently paraphrase rather than use original language; or are discourteous and disrespectful when offering suggestions, feedback, or opposing viewpoints).

Element (4): Responses to Peers: Did the student respond to peer posts and contribute professionally? (9 points, 28% of total)

Exemplary (9 points): Responds to two or more peers in a manner that significantly contributes to the Discussion.

Proficient (7–8 points): Responds to one or more peers in a manner that significantly contributes to the Discussion.

Progressing (4–6 points): Responds to one or more peers in a manner that minimally contributes to the Discussion.

Emerging (0–3 points): Does not respond to any peer posts.

Totals: Exemplary 32 points (100%); Proficient 25–28 points (78–88%); Progressing 14–21 points (44–66%); Emerging 0–10 points (0–31%).
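The score bands above follow arithmetically from summing each level’s per-element point ranges across the four elements; as a check (a reconstruction, since the ranges and percentages were garbled in the source):

\begin{align*}
\text{Exemplary:} \quad & 9 + 9 + 5 + 9 = 32 \text{ points} = 100\% \\
\text{Proficient:} \quad & (7+7+4+7) \text{ to } (8+8+4+8) = 25\text{--}28 \text{ points} \approx 78\text{--}88\% \\
\text{Progressing:} \quad & (4+4+2+4) \text{ to } (6+6+3+6) = 14\text{--}21 \text{ points} \approx 44\text{--}66\% \\
\text{Emerging:} \quad & (0+0+0+0) \text{ to } (3+3+1+3) = 0\text{--}10 \text{ points} \approx 0\text{--}31\%
\end{align*}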

© 2015 Laureate Education, Inc.

     

WHAT WORKS, WISCONSIN – RESEARCH TO PRACTICE SERIES

Evidence‐based programs: An overview

ISSUE #6, OCTOBER 2007
BY SIOBHAN M. COONEY, MARY HUSER, STEPHEN SMALL, AND CAILIN O’CONNOR

University of Wisconsin–Madison and University of Wisconsin–Extension

In recent years, there has been increased pressure from funding agencies and federal, state, and local governments for greater effectiveness and accountability of prevention and intervention programs. This rising demand for program quality, and for evidence of that quality, has fueled a growing interest in evidence‐based programs (EBPs). However, there remains some confusion about what constitutes an EBP, whether some EBPs are better than others, and the advantages and disadvantages of implementing EBPs. In this Research to Practice brief, we provide an overview of what it means for a program to be evidence‐based, discuss the advantages and disadvantages of implementing EBPs, and point readers toward resources to help locate these programs and learn more about them.

What are evidence‐based programs?

A growing body of research in the social and behavioral sciences has demonstrated that certain approaches and strategies for working with youth and their families can positively impact important social problems such as delinquency, teen pregnancy, substance abuse, and family violence. Many of these effective approaches and strategies have been packaged into programs targeting outcomes specific to individuals, schools, families, and communities.

Those programs that have been found to be effective, based on the results of rigorous evaluations, are often called “evidence‐based.”


The importance of rigorous evaluation

A rigorous evaluation typically involves either an experimental design (like that used in randomized controlled trials) or a quasi‐experimental design. In an experimental design, people are randomly assigned to either the treatment group, which participates in the program, or the control group, which does not. After the program is completed, the outcomes of these two groups are compared. This type of research design helps ensure that any observed differences in outcomes between the two groups are the result of the program and not other factors.

Given that randomization is not always possible, a quasi‐experimental design is sometimes used. In evaluations using this design, the program participants are compared to a group of people similar in many ways to the program participants. However, because a quasi‐experimental design does not randomly assign participants to program and non‐program groups, it is not as strong a design as the experimental approach. Because there may be unobserved differences between the two groups of people who are being compared, this design does not allow program evaluators to conclude with the same certainty that the program itself was responsible for the impacts observed.

Most programs have evaluation evidence only from less rigorous studies. Evaluations that do not include any type of comparison group, for example, do not allow for any conclusions to be made about whether the changes seen in program participants are related to or caused by the program. These studies sometimes show the promise of positive results, but they do not allow the program to be classified as evidence‐based. Programs with evidence from less rigorous studies are often referred to as “promising” programs.
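To make the comparison logic in the box concrete, here is a minimal sketch in Python (not part of the original brief; the outcome scores are invented, and the permutation test shown is just one simple, assumed way to check whether an observed treatment–control difference could plausibly have arisen by chance):

import random
import statistics

# Hypothetical post-program outcome scores (higher = better).
# In a true experimental design, group membership was randomly assigned.
treatment = [72, 68, 75, 80, 66, 74, 79, 71, 77, 70]
control = [65, 70, 62, 68, 64, 71, 60, 66, 69, 63]

observed_diff = statistics.mean(treatment) - statistics.mean(control)

# Permutation test: if the program had no effect, the group labels are
# arbitrary, so reshuffled labels should often yield a difference at
# least as large as the observed one.
pooled = treatment + control
n_treat = len(treatment)
trials = 10_000
extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[:n_treat]) - statistics.mean(pooled[n_treat:])
    if diff >= observed_diff:
        extreme += 1

p_value = extreme / trials
print(f"Observed difference: {observed_diff:.2f}, one-sided p ~ {p_value:.3f}")

A small p-value here suggests the difference is unlikely to be due to chance alone; with a quasi‐experimental design the same arithmetic applies, but unobserved differences between the groups weaken that conclusion.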

An important element of EBPs is that they have been evaluated rigorously in experimental or quasi‐experimental studies (see box on this page).

Not only are the results of these evaluations important, but it is also essential that the evaluations themselves have been subjected to critical peer review. That is, experts in the field – not just the people who developed and evaluated the program – have examined the evaluation’s methods and agreed with its conclusions about the program’s effects. Thus, EBPs often have evaluation findings published in peer‐reviewed scientific journals.

When a program has sufficient peer‐reviewed, empirical evidence for its effectiveness, its developer will typically submit it to certain federal agencies and respected research organizations for consideration. These organizations “certify” or “endorse” programs by including them in their official lists of effective programs. This lets others in the field know the program meets certain standards of effectiveness. (See Appendix A for examples of these organizations.)

Simply put, a program is judged to be evidence‐based if (a) evaluation research shows that the program produces the expected positive results; (b) the results can be attributed to the program itself, rather than to other extraneous factors or events; (c) the evaluation is peer‐reviewed by experts in the field; and (d) the program is “endorsed” by a federal agency or respected research organization and included in their list of effective programs.

Given this definition of an EBP, it is important to distinguish the term “evidence‐based” from “research‐based.” Consider our earlier description of how most, if not all, EBPs were developed based on years of scientific research on what program components, such as content and activities, are likely to work for youth


and families. Because EBPs contain program components with solid empirical bases, they can safely be called “research‐based” programs. However, the reverse is not true. Not all, or even the majority, of research‐based programs fit the definition of an EBP. Just because a program contains research‐based content or was guided by research‐based information doesn’t mean it has been proven effective. Unless it also has scientific evidence that it works, it is incorrect to call it “evidence‐based.”

Are some evidence‐based programs better than others?

Programs that meet the definition of evidence‐based are not all similarly effective or equally likely to work in a given community.

For example, some EBPs have been evaluated rigorously in several large‐scale evaluations that follow participants for a long period of time. Others have only undergone one or two less rigorous evaluations (for example, those using the quasi‐experimental design described on page 2). Those programs that are shown to be effective multiple times in experimental studies are generally considered to be of a higher standard.

Furthermore, many EBPs have been successfully replicated and evaluated in a variety of settings with a range of different audiences. Others have only been evaluated with a particular audience in a certain geographical area, for example. When a program has been shown to be effective in different settings and with different audiences, it is more likely that it will be effective when implemented elsewhere.

Finally, EBPs can vary in the strength of their effects. For example, one program may have evidence that it reduces delinquent acts in its participants by 10 percent over the subsequent year, while another program has evidence of reducing delinquency by 20 or 25 percent. Generally, those programs that consistently produce a greater effect than other programs are thought to be better programs.

Thus, the level of evidence for effectiveness varies across programs, and practitioners must use a critical eye when judging where on the continuum of effectiveness a program lies.
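To make concrete what a difference in effect strength means, consider a hypothetical worked example (the baseline rate is invented for illustration). With a baseline of 40 delinquent acts per 100 youth per year:

\begin{align*}
\text{Program A (10\% reduction):} \quad & 40 \times (1 - 0.10) = 36 \text{ acts per 100 youth} \\
\text{Program B (25\% reduction):} \quad & 40 \times (1 - 0.25) = 30 \text{ acts per 100 youth}
\end{align*}

At this baseline, Program B averts 10 acts per 100 youth to Program A’s 4, two and a half times as many.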

Advantages of evidence‐based programs

There are numerous merits to adopting and implementing EBPs. First, utilizing an EBP increases the odds that the program will work as intended and that the public good will be enhanced. There is also greater efficiency in using limited resources on what has been proven to work as compared to what people think will work or what has traditionally been done. Instead of putting resources toward program development, organizations can select from the growing number of EBPs, which are not only known to be effective but also often offer well‐packaged program materials, staff training, and technical assistance. Using EBPs where appropriate can thus be viewed as a responsible and thoughtful use of limited resources.

The proven effectiveness that underlies EBPs can help secure resources and support from funding agencies and other stakeholders, such as policy makers, community leaders, and members of the targeted population. Increasingly, funders and policy makers are recommending, if not requiring, that EBPs be used to qualify for their financial support. Additionally, the demonstrated effectiveness of these programs can facilitate community buy‐in


and the recruitment and retention of program participants.

A final benefit of EBPs is that they may have cost‐benefit information available. This type of information helps to convey the potential economic savings that can accrue when funds are invested in a program. Cost‐benefit information can be very influential in an era where accountability and economic factors often drive public policy and funding decisions.

Disadvantages of evidence‐based programs

Despite the numerous advantages of EBPs, there are some limitations that are important to consider. A major constraint is the financial resources needed to adopt and implement them. Most EBPs are developed, copyrighted, and sold at rather substantial cost. Program designers often require that organizations purchase curricula and other specially developed program materials, that staff attend specialized training, and that program facilitators hold certain degrees or certifications. Furthermore, EBPs are often intended to be implemented exactly as designed, allowing little room for local adaptation.

Finally, organizations sometimes find that there are few or no EBPs that are both well‐suited to meet the needs of targeted audiences and appropriate for their organization and local community setting. This situation is especially common when it comes to the promotion of positive outcomes rather than the prevention of negative ones. Because the development of many EBPs was sponsored by federal agencies concerned with addressing specific problems, such as substance abuse, mental illness,

violence, or delinquency, there currently exist many more problem‐focused EBPs than ones designed specifically to promote positive developmental outcomes like school success or social responsibility.

Where to find evidence‐based programs

Practitioners looking for an EBP to implement in their community, or wanting to learn more about these programs, will find the Internet to be their most useful resource. As mentioned earlier, a number of federal agencies and respected research organizations “certify” or “endorse” programs that meet the organizations’ specified standards for effectiveness. Many of these agencies have established on‐line registries, or lists of EBPs that they have identified as effective. While there are some differences in the standards used by various organizations to assess whether a program should be endorsed and thus included on their registry, most share the primary criterion of strong empirical evidence of program effectiveness.

Organizations that endorse EBPs typically limit such endorsements, and thus their program registry, to those programs that have shown an impact on specific outcomes of interest to the organization. For example, programs listed on the Office of Juvenile Justice and Delinquency Prevention’s Model Programs Guide have all been shown to have an impact on juvenile delinquency or well‐known precursors to delinquency.

As previously mentioned, because the development of many EBPs was funded by federal agencies focused on specific problems, most existing registries of EBPs are problem‐oriented. Occasionally, EBPs are categorized according to a strengths‐based orientation and address outcomes related to positive youth

development, academic achievement, school readiness, and family strengthening.

While registries of EBPs are usually organized around the particular outcomes the programs have been found to impact, many programs, especially those focused on primary prevention, often have broader effects than this pattern would suggest. Many EBPs have been found to be effective for reducing multiple problems and promoting a number of positive outcomes. For example, a parenting program that successfully promotes effective parenting practices may not only reduce the likelihood of particular problems such as drug abuse or aggression, but may also promote a variety of positive outcomes like academic success or stronger parent‐child relationships. For this reason, you will often see the same program appear on multiple registries that focus on different types of outcomes.

Now, more than ever, practitioners have available to them a wealth of EBPs that build on the best available research on what works. Unfortunately, these programs are currently underused and often not well understood. Although EBPs do have some limitations, they can contribute to a comprehensive approach to preventing a range of social and health‐related problems and enhancing the well‐being of individuals, families, and communities.

WHAT WORKS, WISCONSIN: RESEARCH TO PRACTICE SERIES

This is one of a series of Research to Practice briefs prepared by the What Works, Wisconsin team at the University of Wisconsin–Madison, School of Human Ecology, and Cooperative Extension, University of Wisconsin–Extension. All of the briefs can be downloaded from http://whatworks.uwex.edu.

This series expands upon ideas that are discussed in What Works, Wisconsin: What Science Tells Us about Cost‐Effective Programs for Juvenile Delinquency Prevention, which is also available for download at the web address above.

This publication may be cited without permission provided the source is identified as: Cooney, S.M., Huser, M., Small, S., & O’Connor, C. (2007). Evidence‐based programs: An overview. What Works, Wisconsin Research to Practice Series, 6. Madison, WI: University of Wisconsin–Madison/Extension.

This project was supported, in part, by Grant Award No. JF‐04‐PO‐0025 awarded by the Wisconsin Office of Justice Assistance through the Wisconsin Governor’s Juvenile Justice Commission with funds from the Office of Juvenile Justice and Delinquency Prevention.


Appendix A
Evidence‐based program registries

The following websites contain registries, or lists, of evidence‐based programs that have met specific criteria for effectiveness. Program registries are typically sponsored by federal agencies or other research organizations that endorse programs at different rating levels based on evidence of effectiveness for certain participant outcomes. The registries listed below cover a range of areas, including substance abuse and violence prevention as well as the promotion of positive outcomes such as school success and emotional and social competence. Generally, registries are designed to be used for finding programs for implementation. However, registries can also be used to learn about evidence‐based programs that may serve as models as organizations modify aspects of their own programs.

Best Practices Registry for Suicide Prevention
http://www.sprc.org/featured_resources/ebpp/index.asp
This registry, developed by the Suicide Prevention Resource Center (SPRC) and the American Foundation for Suicide Prevention, includes two registries of evidence‐based programs. The first draws directly from a larger registry – that of the Substance Abuse and Mental Health Services Administration’s (SAMHSA) National Registry of Evidence‐Based Programs and Practices (NREPP). Users interested in finding out more about programs drawn from this registry will be directed to the NREPP site. The second registry was developed by SPRC in 2005 and lists Effective and Promising evidence‐based programs for suicide prevention. This portion has fact sheets in PDF format for users interested in learning more about the listed programs.

Center for the Study and Prevention of Violence, Blueprints for Violence Prevention
http://www.colorado.edu/cspv/blueprints/index.html
This research center site provides information on model programs in its “Blueprints” section. Programs that meet a strict scientific standard of program effectiveness are listed. These model programs (Blueprints) have demonstrated their effectiveness in reducing adolescent violent crime, aggression, delinquency, and substance abuse. Other programs have been identified as promising programs. Endorsements are updated regularly, with programs added to and excluded from the registry based on new evaluation findings.

The Collaborative for Academic, Social, and Emotional Learning (CASEL)
http://www.casel.org/programs/selecting.php
The Safe and Sound report developed at CASEL lists school‐based programs that research has indicated are effective in promoting social and emotional learning in schools. This type of learning has been shown to contribute to positive youth development, academic achievement, healthy behaviors, and reductions in youth problem behaviors. Ratings are given on specific criteria for all programs listed, with some designated “Select” programs. This registry has not been updated since programs were reviewed in 2003.


Exemplary and Promising Safe, Disciplined and Drug‐Free Schools Programs
http://www.ed.gov/admins/lead/safety/exemplary01/index.html
The Department of Education and the Expert Panel on Safe, Disciplined and Drug‐Free Schools identified nine exemplary and 33 promising programs for this 2001 report. The report, which can be found at this site, provides descriptions and contact information for each program. The focus is on programs that can be implemented in a school setting, whether in the classroom, in extra‐curricular activities, or as after‐school programming.

Helping America’s Youth
http://guide.helpingamericasyouth.gov/programtool.cfm
This registry is sponsored by the White House and was developed with the help of several federal agencies. Programs focus on a range of youth outcomes such as academic achievement, substance use, and delinquency, and are categorized as Level 1, Level 2, or Level 3 according to their demonstrated effectiveness. The registry can be searched with keywords or by risk or protective factor, and is updated regularly to incorporate new evidence‐based programs.

Northeast Center for the Application of Prevention Technology (CAPT) Database of Prevention Programs
http://www.hhd.org/capt/search.asp
This site features a simple or advanced search function to find substance abuse and other types of prevention programs and determine their effectiveness according to a variety of criteria. Also included is information about the sources those agencies used for their evaluations, contact information, websites, domains, relevant references, and a brief description of each program.

Office of Juvenile Justice and Delinquency Prevention (OJJDP) Model Programs Guide
http://www.dsgonline.com/mpg2.5/mpg_index.htm
The OJJDP Model Programs Guide is a user‐friendly, online portal to prevention and intervention programs that address a range of issues across the juvenile justice spectrum. The Guide now profiles more than 200 programs – rated Exemplary, Effective, or Promising – and helps communities identify those that best suit their needs. Users can search the Guide’s database by program category, target population, risk and protective factors, effectiveness rating, and other parameters. This registry is continuously updated and contains more programs than other well‐known registries, although many of these are Promising rather than Exemplary or Effective.

Promising Practices Network on Children, Families and Communities
http://www.promisingpractices.net/programs.asp
A project of the RAND Corporation, the Promising Practices Network website contains a registry of Proven and Promising prevention programs that research has shown to be effective for a variety of outcomes. These programs are generally focused on children, adolescents, and families. The website provides a thorough summary of each program and is updated regularly.


Social Programs that Work, Coalition for Evidence‐Based Policy
http://www.evidencebasedprograms.org/
This site is not a registry in the conventional sense of the word, in that it does not include and exclude programs based on some criterion of effectiveness. Instead, it summarizes the findings from rigorous evaluations of programs targeting issues such as employment, substance use, teen pregnancy, and education. Some of the programs have substantial evidence of their effectiveness, while others have evaluation results suggesting their ineffectiveness. Users are welcome to sign up for emails announcing when the site is updated.

Strengthening America’s Families: Effective Family Programs for Prevention of Delinquency
http://www.strengtheningfamilies.org/
This registry summarizes and rates family strengthening programs which have been proven to be effective. Programs are designated as Exemplary I, Exemplary II, Model, or Promising based upon the degree, quality, and outcomes of research associated with them. A program matrix is also included, which can be helpful in determining “at a glance” which programs may best meet community needs. This registry was last revised in 1999.

Substance Abuse and Mental Health Services Administration’s (SAMHSA’s) National Registry of Evidence‐Based Programs and Practices
http://nrepp.samhsa.gov/
The National Registry of Evidence‐based Programs and Practices (NREPP) is a searchable database with up‐to‐date, reliable information on the scientific basis and practicality of interventions. Rather than categorizing programs as Model, Effective, or Promising, NREPP rates the quality of the research findings separately for each outcome that has been evaluated, as well as readiness for dissemination. Users can perform customized searches to identify specific interventions based upon desired outcomes, target populations, and settings.

Youth Violence: A Report of the Surgeon General
http://www.surgeongeneral.gov/library/youthviolence/chapter5/sec3.html
This report designates programs as Model or Promising and goes further than many other registries to also include a “Does Not Work” category. General approaches and specific programs for the prevention of youth violence are described at three levels of intervention: primary, secondary, and tertiary. This report has not been updated since its publication in 2001, but it is rare in that it discusses the cost‐effectiveness of the programs.

Introductory Principles of Social Work Research

Bruce A. Thyer

The scientific approach to unsolved problems is the only one which contains any hope of learning to deal with the unknown.

–Bertha Capen Reynolds (1942, p. 20)

An emphasis on the value of scientific research has always characterized professional social work education and practice. Indeed, this emphasis is one of the hallmarks that distinguishes genuinely "professional" services from other forms of private/public philanthropy and charity and the provision of social care motivated by religious, familial, altruistic, or philosophical reasons. In the history of social work in North America and Great Britain, as well as in other European nations, the system of poor laws and other relatively unsystematic attempts to care for the destitute gave rise during the latter part of the 19th century to an orientation labeled scientific philanthropy. Coincident with the emergence of "friendly visiting," settlement houses, formalized academic training, and other precursors to the professionalization of social work, the development of charitable services guided by a scientific orientation has evolved to the present day.

Social work historian John Graham provides a good case study on a Toronto charity home for women called The Haven, established in 1878 by religious elites, that gradually made the transition to a more secularly oriented and professional service. Graham (1992) describes the completion of this transition in 1927 as follows:

Professional social work, therefore, had been firmly installed at The Haven, and the last vestiges of the benevolent philanthropy of the nineteenth century were abandoned. A growing sense of professional identity moreover demanded a strict delineation between the social worker and the social agency volunteer. Differentiating the former from the latter was a scientific knowledge base and specialized skills which were the social worker's alone. (p. 304, italics added)

Such a transition can be said to characterize the majority of social work programs across North America by the early part of the 20th century. Currently, one widely used definition of social work can be found in The Social Work Dictionary published by the National Association of Social Workers: "the applied science of helping people achieve an effective


level of psychosocial function and effecting societal changes to enhance the well-being of all people" (Barker, 2003, p. 408, italics added). Many states further define the practice of clinical social work, and Florida's definition provides a representative example of the interconnectedness of social work and science: "The 'practice of clinical social work' is defined as the use of scientific and applied knowledge, theories and methods for the purposes of describing, preventing, evaluating, and treating individual, couple, family or group behavior" (Florida Department of Health, 2008, italics added). These definitions illustrate the close linkage between the practice of social work and the world of scientific inquiry.

Where do we social workers come from organizationally? We have many roots, but a central one was the establishment in 1865 of the American Social Science Association (ASSA), a generalist organization influenced by French sociologist Auguste Comte's then novel philosophy of science labeled positivism, which called for the objective study of human society and behavior using the same tools of scientific inquiry that were proving so successful in the biological and physical sciences. From the ASSA sprouted numerous offshoots, some of which thrive to this day, although the parent group crumbled in 1909. From the ASSA, in 1879, emerged the Conference of Charities, which in 1881 evolved into the National Conference of Charities and Correction (NCCC), described as "a forum for the communication of the ideas and values connected with scientific charity" (Germain, 1970, p. 9). In turn, the NCCC was renamed the National Conference on Social Work in 1917. This label lasted until 1957, when it was altered to the National Conference on Social Welfare, which gradually expired during the 1980s.

More recently, in 1994, a small group of social workers led by Janet B. W. Williams established a new scientifically oriented social work membership organization known as the Society for Social Work and Research (SSWR). All social workers with an interest in scientific research in social work are eligible to join. The SSWR quickly grew from 271 members in 1995 to more than 1,300 in 2009, and the organization has an active newsletter and program of annual international conferences. The first professional SSWR conference was held in 1995 in Washington, D.C., and has been followed annually since that time with very successful and high-quality conferences (see www.sswr.org). The SSWR conferences offer a host of competitively reviewed symposia, papers, and posters; plenary addresses by prominent social work researchers; and an awards program that recognizes outstanding examples of recently published social work research. Because of its superb organization and the top quality of its presentations, the SSWR conference has rapidly become the preferred venue for social work researchers to present their research findings. Moreover, it has become the conference of choice for schools of social work to seek interviews with potential new faculty and for potential new faculty to seek academic positions. In 1999, the SSWR began providing its members a subscription to the bimonthly peer-reviewed journal Research on Social Work Practice, an independent periodical established in 1991. This growth of the SSWR augurs well for the continuing voice of science within mainstream social work.

A related but independent development was the establishment of the Institute for the Advancement of Social Work Research (IASWR) in 1993. The mission of the IASWR is to create infrastructure for social work research, to lead advocacy efforts to fund social work research, to help stakeholders view social work research as valuable, to provide training and professional development programs for social work researchers, to persuade social workers to undertake careers in research, to provide a free Web-based research-focused newsletter, and to promote disciplinary and interdisciplinary research collaboration. Five national professional social work organizations contributed to the development of the IASWR and are represented on its governing board. Its original purpose of advocating for the establishment of a federally funded National Center for Social Work Research failed in the face of fiscal austerity, but the IASWR has expanded its remit as described above (see http://www.iaswresearch.org/).


Another organizational resource for social work research is the Social Work Topical Interest Group (TIG) found within the American Evaluation Association (AEA). The AEA has about 5,000 members, and several hundred of these comprise the social work TIG. The AEA holds an annual conference as well as regional ones, has an active journals program, and provides training and consultation services, and its Web site has a wealth of useful resources (e.g., locating measurement instruments, how to locate an evaluator; see http://www.eval.org/aboutus/organization/aboutus.asp).

The National Association of Social Workers (NASW) is the largest professional social work group in the world, with about 150,000 members. Almost all are M.S.W.- and B.S.W.-level trained professionals, and the organization primarily serves the needs of its practitioner member base, not those of social work researchers. The NASW does not host an annual conference but does have one research journal, Social Work Research. A new initiative is a social work research Web page (see www.socialworkers.org/research/), cosponsored with the IASWR, which is itself ostensibly independent but is actually housed within the NASW offices in Washington, D.C.

Social work researchers also find welcoming organizational support from various disciplinary (e.g., American Psychological Association, American Sociological Association, Association for Behavior Analysis) and interdisciplinary (e.g., American Public Health Association, Association for Advancement of Behavioral and Cognitive Therapies, American Orthopsychiatric Association, the Gerontological Society of America) groups. These groups typically have thriving annual conferences, a well-established journals program, and training opportunities social workers can take advantage of. Thus, both budding and experienced social workers have ample opportunities to network with research-oriented colleagues both within and outside of the discipline.

Scientific Perspectives on Practice

The role of scientific research in social welfare can be seen through many early writings, including an article titled "Scientific Charity," presented at the 1889 meeting of the NCCC (cited in Germain, 1970, p. 8), and one titled "A Scientific Basis for Charity" (Wayland, 1894), which appeared in the influential journal The Charities Review. Such perspectives culminated in the publication of Richmond's (1917) Social Diagnosis, an influential text that wholeheartedly extolled the virtues of positivist science. Indeed, in 1921, Richmond received an honorary M.A. degree from Smith College for "establishing the scientific basis of a new profession" (cited in Germain, 1970, p. 12).

The possible examples of conference talks, journal articles, chapters, and books illustrating the central reliance on scientific research as a guiding force within early social work are too numerous to mention further here. Germain (1970) remains one of the very best reviews of this "ancient" history of our profession. More recent is the history of the Social Work Research Group (SWRG), a short-lived professional membership organization established in 1949 that became one of the original seven constituents of the National Association of Social Workers (NASW) in 1955, transmogrifying itself into the NASW's Research Section. In 1963, this became the NASW's Council on Social Work Research, which gradually faded from view by the mid-1960s as the NASW allowed the research mission established in its bylaws to largely lapse. Graham, Al-Krenawi, and Bradshaw (2000) have prepared an excellent historical study of the rise and demise of the SWRG.

Coincident with these organizational and policy developments related to the integration of science and social work during the past quarter century have been three related perspectives on practice. The first is known as empirical clinical practice (ECP), the second is called empirically supported treatments (ESTs), and the third is labeled evidence-based practice (EBP). These are reviewed briefly in turn.

Empirical Clinical Practice

Empirical Clinical Practice was the name of a book authored by social workers Srinika Jayaratne and Rona Levy (1979), who describe the characteristics of the ECP model they espouse: "Empirical practice is conducted by clinicians who strive to measure and demonstrate the effect of their clinical practice by adapting traditional experimental research techniques to clinical practice" (p. xiii). The authors focus on teaching social workers the use of relatively simple research methods called single-system research designs to empirically evaluate the outcomes of their work. They believe that "clinical practice that can empirically demonstrate its effect provides the basis for the best service to the client" (p. xiv). They contended that ECP can be adopted by practitioners using virtually any theoretical model of practice so long as it is possible to measure changes in the client, relate these changes (provisionally) to social work intervention, and then base future services on these observations. The authors advocate that social workers should rely on previous research to help guide their choices of the interventions that they offer clients. In their words, "The clinician would first be interested in using an intervention strategy that has been successful in the past. . . . When established techniques are available, they should be used, but they should be based on objective evaluation rather than subjective feeling" (p. 7). ECP involves the careful and repeated measurement of client functioning, using reliable and valid measures administered over time, combined with treatments selected on the basis of the best available scientific evidence. Their entire book is devoted to describing how to do these activities. A similar social work text by Wodarski (1981), titled The Role of Research in Clinical Practice, advocated for much the same thing: a preference for psychosocial treatments that scientific research had actually demonstrated to be of benefit to clients, measuring client functioning in reliable and valid ways, and empirically evaluating outcomes with individual clients and larger groups.
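Although Jayaratne and Levy wrote long before today's scripting tools, the logic of the simplest single-system design, an AB comparison of a baseline phase against an intervention phase, is easy to illustrate in code. The Python sketch below is purely illustrative: the client scores, phase labels, and the summarize_ab_design function are hypothetical assumptions of ours, not material from their book.

    # A minimal, hypothetical sketch of summarizing an AB single-system design:
    # repeated measurements of one client during a baseline (A) phase and an
    # intervention (B) phase. All scores and names are invented for illustration.

    from statistics import mean, stdev

    def summarize_ab_design(baseline, intervention):
        """Compare phase means and count intervention-phase points lying more
        than two standard deviations above the baseline mean, an informal
        benchmark sometimes used in single-system evaluation."""
        base_mean = mean(baseline)
        band_upper = base_mean + 2 * stdev(baseline)
        return {
            "baseline_mean": base_mean,
            "intervention_mean": mean(intervention),
            "points_beyond_2sd": sum(1 for s in intervention if s > band_upper),
            "n_intervention": len(intervention),
        }

    # Weekly self-reported coping scores (higher = better coping):
    baseline_scores = [3, 4, 3, 5, 4]          # A phase: before treatment
    intervention_scores = [6, 7, 6, 8, 9, 8]   # B phase: during treatment
    print(summarize_ab_design(baseline_scores, intervention_scores))

Here every intervention-phase score exceeds the two-standard-deviation band, which a practitioner might read, cautiously, as preliminary evidence of improvement for this one client.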

The banner of ECP was picked up by a number of subsequent social workers, and a rather large (and not uncontroversial) literature has grown around these notions (e.g., Corcoran, 1985; Ivanoff, Blythe, & Briar, 1987; Ivanoff, Robinson, & Blythe, 1987; G. MacDonald, 1994; Thyer, 1996). The influence of ECP has not been inconsiderable. For example, in 1982, just 3 years following the publication of Empirical Clinical Practice (Jayaratne & Levy, 1979), the curriculum policy statement of the Council on Social Work Education (CSWE, 1982) included a new mandate that research courses must now teach "designs for the systematic evaluation of the student's own practice . . . [and should] prepare them systematically to evaluate their own practice and contribute to the generation of knowledge for practice" (pp. 10-11). Similar standards still can be found in the current CSWE guidelines. Insisting that individual practitioners conduct systematic outcome evaluations of their own services was a remarkable professional standard, one that has not yet been emulated by the educational and practice guidelines of clinical psychology or psychiatry in the present day. Reid (1994) provides a nice overview of the rise, influence, and dissemination of the ECP movement.

Empirically Supported Treatments

Subsequent to the ECP movement within social work, a related initiative developed within clinical psychology called empirically validated treatments. During the mid-1990s, the president of Section III (Society for a Science of Clinical Psychology) of Division 12 (Clinical Psychology) of the American Psychological Association convened a Task Force on Promotion and Dissemination of Psychological Procedures, a group charged with two functions: (a) develop a scientifically defensible set of criteria that can be used to determine whether a given psychological technique can be called empirically validated and (b) conduct comprehensive reviews of the research literature, apply these criteria, and come up with, in effect, lists of psychological procedures that fulfill these criteria and, therefore, can be considered, in a scientific sense, empirically validated.

The evidentiary standards ultimately decided on by the task force were actually rather modest, consisting of the following criteria:

I. At least two good between-group design experiments demonstrating efficacy in one or more of the following ways:

   A. Superior to pill or psychological placebo or to another treatment

   B. Equivalent to an already established treatment in experiments with adequate statistical power

II. A large series of single-case design experiments (N > 9) demonstrating efficacy that must have done the following:

   A. Used good experimental designs

   B. Compared the intervention to another treatment (as in I.A.)

Among the further criteria are that the psychological techniques must be based on well-proceduralized treatment manuals, that the characteristics of the client samples be clearly defined, and that the positive effects must have been demonstrated by at least two different investigators or investigatory teams. A psychological treatment meeting the preceding criteria could be said to be well established. A somewhat less stringent set of criteria could be followed to potentially label a treatment as probably efficacious (Chambless et al., 1996).
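Because these standards amount to a decision rule, their overall shape can be captured in a few lines of code. The Python sketch below is a simplified, hypothetical rendering: the EvidenceBase fields and the is_well_established function are our illustrative inventions, not the task force's official operationalization, and the less stringent "probably efficacious" tier is omitted.

    # A simplified, hypothetical rendering of the Division 12 "well established"
    # decision rule sketched above. All field names are illustrative assumptions.

    from dataclasses import dataclass

    @dataclass
    class EvidenceBase:
        good_group_experiments: int      # sound between-group experiments
        superior_or_equivalent: bool     # superior to placebo/another treatment,
                                         # or equivalent to an established one
                                         # with adequate statistical power
        single_case_experiments: int     # sound single-case design experiments
        compared_to_treatment: bool      # single-case work compared the
                                         # intervention to another treatment
        has_treatment_manual: bool       # well-proceduralized manual exists
        samples_clearly_defined: bool    # client samples clearly characterized
        independent_teams: int           # teams demonstrating positive effects

    def is_well_established(e: EvidenceBase) -> bool:
        """Apply the 'well established' rule in its simplified form."""
        group_route = e.good_group_experiments >= 2 and e.superior_or_equivalent
        single_case_route = (e.single_case_experiments > 9
                             and e.compared_to_treatment)
        return ((group_route or single_case_route)
                and e.has_treatment_manual
                and e.samples_clearly_defined
                and e.independent_teams >= 2)

    # A hypothetical treatment with three supportive trials from two teams:
    print(is_well_established(EvidenceBase(3, True, 0, False, True, True, 2)))  # True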

With the criteria in place, the task force busily got to work in seeing which psychological treatments could be labeled empirically validated and probably efficacious, and reports soon began appearing indicating empirically validated interventions for a wide array of psychosocial disorders such as depression, panic disorder, pain, and schizophrenia. As with the ECP movement within social work, the task force within psychology did not escape controversy. For one thing, the task force recognized that labeling a treatment as empirically validated seemed to close the discussion off, implying perhaps a stronger level of research evidence than was justified. Subsequent reports of the task force used the more tempered language of empirically supported treatments (ESTs). Entire issues of leading professional journals (i.e., a 1996 issue of Clinical Psychology: Science and Practice, a 1998 issue of the Journal of Consulting and Clinical Psychology, a 1998 issue of Psychotherapy Research) were devoted to the topic, as were considerable independent literatures (e.g., Sanderson & Woody, 1995). The influence of the EST movement also has been strong, and the work of the Division 12 task force was commented on extremely favorably in Mental Health: A Report of the Surgeon General (Hatcher, 2000). The volume titled A Guide to Treatments That Work (Nathan & Gorman, 2007), now in its third edition, is an exemplary resource for social workers seeking relatively current information about empirically supported treatments for a wide variety of mental health problems. Division 12, Section III (the Society for a Science of Clinical Psychology) continues its work in defining the criteria and language used to describe empirically supported treatments and maintains a Web site providing current information on this influential initiative (see http://www.psychology.sunysb.edu/eklonsky-/division12/index.html).


Evidence-Based Practice

Coincident with the EST initiatives in clinical psychology have been related activities in medicine labeled evidence-based practice, defined as "the conscientious, explicit, and judicious use of the current best evidence in making decisions about the care of individual patients" (Sackett, Richardson, Rosenberg, & Haynes, 1997, p. 2). On its face, EBP would not seem to be a radical notion, and indeed, most readers would assume that such a standard already was in place in most of the health professions. Sadly, to a great extent, this is not the case, although a small but influential group of health care providers is attempting to make it so. EBP and EST actually are much more sophisticated variants of the earlier ECP model of social work, but the spirit and intent of all three movements (ECP, developed within social work; EST, developed within psychology; and EBP, developed within medicine) are the same. EBP is gradually supplanting the ECP and EST initiatives within social work and psychology. The current president of the Society for a Science of Clinical Psychology (a section of Division 12 of the American Psychological Association) published an editorial titled "Evidence-Based Psychotherapy: A Graduate Course Proposal" (Persons, 1999), and some social workers have begun using the EBP language, most notably Gambrill (1999) with her thoughtful article titled "Evidence-Based Practice: An Alternative to Authority-Based Practice," which introduced EBP to the social work literature. The past decade has seen the publication of enough social work books on the EBP topic to fill a bookshelf. The melding of these disciplinary perspectives into an interdisciplinary human services movement generically called evidence-based practice seems likely. Consider Persons's (1999) description of EBP:

The evidence-based practitioner:

• Provides informed consent for treatment
• Relies on the efficacy data (especially from RCTs [randomized clinical trials]) when recommending and selecting and carrying out treatments
• Uses the empirical literature to guide decision-making
• Uses a systematic, hypothesis-testing approach to the treatment of each case:
  o Begins with careful assessment
  o Sets clear and measurable goals
  o Develops an individualized formulation and a treatment plan based on the formulation
  o Monitors progress toward the goals frequently and modifies or ends treatment as needed (p. 2)

Well, perhaps Jayaratne and Levy (1979) were simply two decades ahead of their time. An issue of the NASW News contained an article on the Surgeon General's Report on Mental Health and noted, "A challenge in the near term is to speed transfer of new evidence-based treatments and prevention interventions into diverse service delivery settings and systems" (O'Neill, 2000, p. 6, italics added). The Surgeon General's report itself states clearly,

Responding to the calls of managed mental health and behavioral health care systems for evidence-based interventions will have a much needed and discernable impact on practice. . . . It is essential to expand the supply of effective, evidence-based services throughout the nation. (Hatcher, 2000, chap. 8, p. 453)

EBP requires knowing what helps social work clients and what does not help them. It requires being able to distinguish between unverified opinions about psychosocial interventions and facts about their effectiveness. And separating facts from fictions is what science is pretty good at doing. Not perfectly, and not without false starts, but the publicly verifiable and potentially testable conclusions of scientific research render this form of knowledge building an inherently self-correcting one (in the long run), a considerable advantage over other "ways of knowing."

EBP differs from its precursor initiatives in that it does not tell social workers what interventions should be provided to clients. It does not list so-called best practices, create practice guidelines, or develop lists of supposedly empirically based treatments. Nor does it unduly privilege certain forms of evidence above all others. Each of the above three sentences represents a common misconception of EBP. EBP is actually a process of inquiry offered to practitioners, described for physicians in Straus, Richardson, Glasziou, and Haynes (2005) but readily adaptable to providers in all of the human service professions. The steps are as follows (from Straus et al., 2005, pp. 3-4):

Step 1: converting the need for information (about prevention, diagnosis, prognosis, therapy, causation, etc.) into an answerable question.

Step 2: tracking down the best evidence with which to answer that question.

Step 3: critically appraising that evidence for its validity (closeness to the truth), impact (size of the effect), and applicability (usefulness in our clinical practice).

Step 4: integrating the critical appraisal with our clinical expertise and with our patient's unique biology, values, and circumstances.

Step 5: evaluating our effectiveness and efficiency in executing Steps 1-4 and seeking ways to improve them both for next time.

Each chapter in Straus et al. (2005) addresses one of these steps, and they have been adapted for use by social workers in an excellent series of entries appearing in The Social Worker's Desk Reference (see Roberts, 2009, pp. 1115-1182). EBP states that social workers need to be familiar with the best available evidence addressing the questions related to client services and to their particular practice situation and to integrate their appraisal of this information with an assessment of their own skills, the client's preferences, relevant professional and personal values and ethical standards, cost, feasibility, and resources. All of these factors are relevant, not just what the research evidence indicates. And by best evidence, what is meant is not so-called gold-standard studies such as randomized controlled trials or meta-analyses (see later chapters on these topics in this book) but simply the best available relevant evidence. If there are no studies of superlative quality, then you locate and assess those of lesser quality. Lots of evidence can go into the mix, including quasi-experimental studies, single-subject studies, correlational studies, descriptive work, epidemiological evidence, qualitative investigations, case histories, theory, and informed clinical opinion. There is always evidence for a social worker to consult, even if it is not evidence of the highest quality. As with ECP, EBP also encourages practitioners to evaluate the outcomes of their work with individual clients using a research methodology called single-subject designs.
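Step 3's appraisal of impact (size of the effect) is typically quantified with a standardized effect size such as Cohen's d. The short Python sketch below illustrates the standard pooled-standard-deviation computation; the summary statistics are invented for illustration, and nothing here is prescribed by Straus et al. (2005).

    # Cohen's d from summary statistics of a hypothetical two-group trial.
    # All numbers are invented for illustration.

    from math import sqrt

    def cohens_d(mean_tx, sd_tx, n_tx, mean_ctl, sd_ctl, n_ctl):
        """Standardized mean difference between two groups, using the
        pooled standard deviation as the denominator."""
        pooled_sd = sqrt(((n_tx - 1) * sd_tx**2 + (n_ctl - 1) * sd_ctl**2)
                         / (n_tx + n_ctl - 2))
        return (mean_tx - mean_ctl) / pooled_sd

    # Hypothetical depression-scale results (lower scores = less depression):
    d = cohens_d(mean_tx=12.0, sd_tx=4.0, n_tx=40,
                 mean_ctl=16.0, sd_ctl=5.0, n_ctl=42)
    print(f"d = {d:.2f}")  # about -0.88, a large effect favoring treatment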

Another option is for social workers to consult systematic reviews (SRs) of the research evidence related to various answerable questions involving assessment and interventive methods. The two groups most responsible for preparing high-quality and independent SRs are the Cochrane Collaboration (see www.cochrane.org), focusing on issues related to health care, and the Campbell Collaboration (see www.campbellcollaboration.org), focusing on social welfare, education, and criminal justice. SRs are prepared by qualified research teams who obtain articles and reports from all over the world dealing with a specific issue. These reports are minutely analyzed and critiqued and the collected information summarized in a readable format, with a take-away message something like Treatment X is well supported as an effective treatment for clients with Problem Y; The available evidence indicates that Treatment X is ineffective in helping clients with Problem Y; or Clients with Problem Y who receive Treatment X demonstrated impaired outcomes compared to clients who receive no treatment. You can see how this information would be of immense value to social workers. Here is a sampling of SRs currently available on the Cochrane database that are of relevance to social workers:

• Behavioral and cognitive-behavioral therapy for obsessive-compulsive disorder in children and adolescents
• Family intervention for bipolar disorder
• Family therapy for depression
• Psychological debriefing for preventing posttraumatic stress disorder
• Psychotherapy for bulimia nervosa and binging
• Short-term psychodynamic psychotherapy for common mental disorders

And here are some found on the Campbell Collaboration Web site:

• Cognitive-behavioral therapy for men who physically abuse their partner
• Cognitive-behavioral intervention for children who have been sexually abused
• Interventions intended to reduce pregnancy-related outcomes among adolescents
• School-based educational programs for the prevention of childhood sexual abuse
• Work programs for welfare recipients

These systematic reviews represent the highest-quality, up-to-date critical appraisals of the existing research literature addressing particular psychosocial and health problems experienced by social work clients. They are a wonderful resource for practitioners seeking such information and are integral to the conduct of evidence-based practice.
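At the statistical heart of many SRs is a meta-analysis that pools effect sizes across studies, most simply by inverse-variance weighting. The Python sketch below shows a minimal fixed-effect pooling; the effect sizes and standard errors are invented, and real Cochrane and Campbell reviews add heterogeneity diagnostics and, often, random-effects models.

    # Minimal fixed-effect meta-analysis via inverse-variance weighting.
    # (effect size, standard error) pairs from three hypothetical studies,
    # expressed as standardized mean differences (negative = improvement):

    from math import sqrt

    studies = [(-0.50, 0.20), (-0.30, 0.15), (-0.65, 0.25)]

    # Weight each study by the inverse of its variance (1 / SE^2):
    weights = [1 / se**2 for _, se in studies]
    pooled = sum(w * es for (es, _), w in zip(studies, weights)) / sum(weights)
    pooled_se = sqrt(1 / sum(weights))

    print(f"pooled effect = {pooled:.2f} "
          f"(95% CI {pooled - 1.96 * pooled_se:.2f} "
          f"to {pooled + 1.96 * pooled_se:.2f})")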

To summarize, ECP suggested that social work treatments should be chosen based on support from randomized controlled studies and that social workers need to evaluate the outcomes of their practice with clients using single-system research designs. The EST initiative came up with a list of evidentiary criteria needed to label a given treatment as "empirically supported." Once these criteria were in hand, lists of psychosocial interventions meeting these standards were published. EBP provides more of a process to guide clinical and practice decision making, one which explicitly embraces evidence from many sources (albeit urging one to pay particular attention to evidence of the highest quality) and explicitly includes nonscientific considerations such as client preferences and values in this decision-making process. In many ways, EBP is a more sophisticated and mature conceptualization of the conduct of practice than ECP and EST, and these latter two initiatives largely have been subsumed by EBP.

On Terms

The preceding brief overview helps to bring us to the present, wherein social work is attempting to really implement our original aspirations pertaining to being based on a foundation of scientific research. As in most intellectual undertakings, it always is helpful
