1
Rationale and Standards of Evidence in Evidence-Based Practice
OLIVER C. MUDFORD, ROB MCNEILL, LISA WALTON, AND KATRINA J. PHILLIPS
What is the purpose of collecting evidence to
inform clinical practice in psychology con-
cerning the effects of psychological or other
interventions? To quote Paul’s (1967) article,
which had been cited 330 times as of
November 4, 2008, it is to answer the
question: “What treatment, by whom, is
most effective for this individual with that
specific problem, under which set of circumstances?”
(p. 111). Another answer is pitched at
a systemic level, rather than concerning indi-
viduals. That is, research evidence can inform
health-care professionals and consumers about
psychological and behavioral interventions
that are more effective than pharmacological
treatments, and to improve the overall quality
and cost-effectiveness of psychological health
service provision (American Psychological
Association [APA] Presidential Task Force on
Evidence-Based Practice, 2006). The most
general answer is that research evidence can be
used to improve outcomes for clients, service
providers, and society in general.
The debate about what counts as evidence
of effectiveness in answering this question
has attracted considerable controversy
(Goodheart, Kazdin, & Sternberg, 2006;
Norcross, Beutler, & Levant, 2005). At one
end of a spectrum, evidence from research on
psychological treatments can be emphasized.
Research-oriented psychologists have pro-
moted the importance of scientific evidence in
the concept of empirically supported treat-
ment. Empirically supported treatments
(ESTs) are those that have been sufficiently
subjected to scientific research and have been
shown to produce beneficial effects in well-
controlled studies (i.e., efficacious), in more
natural clinical environments (i.e., effective),
and are the most cost-effective (i.e., efficient)
(Chambless & Hollon, 1998). The effective
and efficient criteria of Chambless and Hollon
(1998) have been amalgamated under the term
“clinical utility” (APA Presidential Task Force
on Evidence-Based Practice, 2006; Barlow,
Levitt, & Bufka, 1999). At the other end of the
spectrum are psychologists who value clinical
expertise as the source of evidence more
highly, and they can rate subjective impres-
sions and skills acquired in practice as pro-
viding personal evidence for guiding treatment
(Hunsberger, 2007). Kazdin (2008) has
asserted that the schism between clinical
researchers and practitioners on the issue of
evidence is deepening. Part of the problem,
which suggests at least part of the solution, is
that research has concentrated on empirical
evidence of treatment efficacy, but more needs
Hersen, M., & Sturmey, P. (2012). Handbook of evidence-based practice in clinical psychology, child and adolescent disorders. John Wiley & Sons, Incorporated. Copyright © 2012 John Wiley & Sons, Incorporated. All rights reserved.
to be conducted to elucidate the relevant par-
ameters of clinical experience.
In a separate dimension from the evidence–
experience spectrum have been concerns
about designing interventions that take into
account the uniqueness of the individual cli-
ent. Each of us can be seen as a unique mix
of levels of variables such as sex, age,
socioeconomic and social status, race,
nationality, language, spiritual beliefs or per-
sonal philosophies, values, preferences, level
of education, as well as number of symptoms,
diagnoses (comorbidities), or problem behav-
ior excesses and deficits that may bring us into
professional contact with clinical psycholo-
gists. The extent to which there can be prior
evidence from research or clinical experience
to guide individuals’ interventions when these
variables are taken into account is question-
able, and so these individual differences add to
the mix of factors when psychologists delib-
erate on treatment recommendations with an
individual client.
Recognizing each of these three factors
as necessary considerations in intervention
selection, the APA Presidential Task Force on
Evidence-Based Practice (2006, p. 273) pro-
vided its definition: “Evidence-based practice
in psychology (EBPP) is the integration of
the best available research with clinical
expertise in the context of patient characteris-
tics, culture, and preferences.” The task force
acknowledged the similarity of its definition
to that of Sackett, Straus, Richardson,
Rosenberg, and Haynes (2000) when they
defined evidence-based practice in medicine as
“the integration of best research evidence with
clinical expertise and patient values” (p. 1).
Available research evidence is the base or
starting point for EBPP. So, in recommending
a particular intervention from a number of
available ESTs, the psychologist, using clin-
ical expertise with collaboration from the cli-
ent, weighs up the options so that the best
treatment for that client can be selected. As we
understand it, clinical expertise is not to be
considered as an equal consideration to
research evidence, as some psychologists have
implied (Hunsberger, 2007). Like client pref-
erences, the psychologist’s expertise plays an
essential part in sifting among ESTs the clin-
ician has located from searching the evidence.
Best research evidence is operationalized as
ESTs. Treatment guidelines can be developed
following review of ESTs for particular
populations and diagnoses or problem behav-
iors. According to the APA (APA, 2002; Reed,
McLaughlin, & Newman, 2002), treatment
guidelines should be developed to educate
consumers (e.g., clients and health-care sys-
tems) and professionals (e.g., clinical psych-
ologists) about the existence and benefits of
choosing ESTs for specific disorders over
alternative interventions with unknown or
adverse effects. Treatment guidelines are
intended to recommend ESTs, but not make
their use mandatory as enforceable profes-
sional standards (Reed et al., 2002). The
declared status, implications, misunderstand-
ing, and misuse of treatment guidelines based
on ESTs continue to be sources of controversy
(Reed et al., 2002; Reed & Eisman, 2006).
Our chapter examines the issues just raised
in more detail. We start with a review of the
history and methods of determining evidence-
based practice in medicine because the evi-
dence-based practice movement started in that
discipline, and has led the way for other
human services. Psychologists, especially
those working for children and young
people, tend to work collaboratively with
other professionals and paraprofessionals.
Many of these groups of colleagues subscribe
to the evidence-based practice movement
through their professional organizations. We
sample those organizations’ views. The gen-
eralizability to psychology of methods for
evidence-based decision making in medicine
is questionable, and questioned. Next we
examine criteria for determining the strength
of evidence for interventions in psychology.
These criteria are notably different to those
employed in medicine, particularly concern-
ing the relative value to the evidence base of
4 Overview and Foundational Issues
research in psychology that has employed
methods distinct from medicine (e.g., small-N
design research). The controversies concern-
ing treatment guidelines derived from ESTs
are outlined briefly. The extent to which spe-
cial considerations exist regarding treatment
selection for children and adolescents is then
discussed. Finally, we highlight some of the
aspects of EBPP that require further work by
researchers and clinicians.
EVIDENCE-BASED MEDICINE
Evidence-based practice can be perceived as
both a philosophy and a set of problem-solving
steps, using current best research evidence to
make clinical decisions. In medicine, the
rationale for clinicians to search for best evi-
dence when making diagnostic and treatment
decisions is the desire or duty, through the
Hippocratic Oath, to use the optimal method to
prevent or cure physical, mental, or social
ailments and promote optimal health in indi-
viduals and populations (Jenicek, 2003). This
section of the chapter provides an overview of
the history of evidence-based practice in
medicine (EBM), as well as a description of
the current methodology of EBM. A critical
reflection on the process and evidence used for
EBM is also provided, leading to an intro-
duction of the relevance of evidence-based
practice (EBP) to other disciplines, including
psychological interventions.
History of Evidence-Based
Practice in Medicine
The earliest origins of EBM can be found in
19th-century Paris, with Pierre Louis (1787–
1872). Louis sought truth and medical cer-
tainty through systematic observation of
patients and the statistical analysis of observed
phenomena; however, its modern origins and
popularity are found much more recently in the
advances in epidemiological methods in
the 1970s and 1980s (Jenicek, 2003). One of
the key people responsible for the emergence
and growth of EBM, the epidemiologist Archie
Cochrane, proposed that nothing should be
introduced into clinical practice until it was
proven effective by research centers, and
preferably through double-blind randomized
controlled trials (RCTs). Cochrane criticized
the medical profession for its lack of rigorous
reviews of the evidence to guide decision
making. In 1972, Cochrane reported the results
of the first systematic review, his landmark
method for systematically evaluating the
quality and quantity of RCT evidence for
treatment approaches in clinical practice. In an
early demonstration of the value of this
methodology, Cochrane demonstrated that
corticosteroid therapy, given to halt premature
labor in high-risk women, could substantially
reduce the risk of infant death (Reynolds,
2000).
Over the past three decades, the methods and
evidence used for EBM have been extended,
refined, and reformulated many times. From
the mid-1980s, a proliferation of articles has
instructed clinicians about the process of
accessing, evaluating, and interpreting med-
ical evidence; however, it was not until 1992
that the term evidence-based medicine was
formally coined by Gordon Guyatt and the
Evidence-Based Medicine Working Group at McMaster
University in Canada. Secondary publication
clinical journals also started to emerge in the
early to mid-1990s, with the aim of summar-
izing original articles deemed to be of high
clinical relevance and methodological rigor
(e.g., Evidence-Based Medicine, ACP Journal
Club, Evidence-Based Nursing, Evidence-
Based Mental Health).
Various guidelines for EBM have been
published, including those from the Evidence-
Based Medicine Working Group (Guyatt et al., 2000),
the Cochrane Collaboration, the National
Institute for Clinical Excellence (NICE),
and the British Medical Journal Clinical
Evidence group, to name but a few. David
Sackett, one of the strong proponents and
authors in EBM, describes it as an active
clinical decision-making process involving
five sequential steps: (1) convert the patient’s
problem into an answerable clinical question;
(2) track down the best evidence to answer
that question; (3) critically appraise the evi-
dence for its validity—closeness to truth;
impact—size of the effect; and applicability—
usefulness in clinical practice; (4) integrate the
appraisal with the practitioner’s clinical
expertise and the patient’s unique characteris-
tics, values, and circumstances; and (5) evaluate
the change resulting from implementing the
evidence in practice, and seek ways to improve
(Sackett et al., 2000, pp. 3–4). The guidelines for
EBM are generally characterized by this sort of
systematic process for determining the level
of evidence for treatment choices available to
clinicians, while at the same time recognizing
the individual’s unique characteristics,
situation, and context.
It is not difficult to see the inherent benefits of
EBM, and health professionals have been quick
to recognize the potential benefit of adopting it
as standard practice. Citing a 1998 survey of
UK general practitioners (GPs), Sackett and
colleagues (2000) wrote that most reported
using search techniques, with 74% accessing
evidence-based summaries generated by
others, and 84% seeking evidence-based prac-
tice guidelines or protocols. The process of
engaging in EBM requires some considerable
understanding of research and research
methods, and there is evidence that health
professionals struggle to use EBM in their
actual practice. For example, Sackett et al.
(2000) found that GPs had trouble using the
rules of evidence to interpret the literature, with
only 20% to 35% reporting that they understood
the appraisal tools described in the guidelines.
Clinicians’ ability to practice EBM also may be
limited by lack of time to master new skills and
inadequate access to instant technologies
(Sackett et al., 2000). In addition to these
practical difficulties, there have also been some
criticisms of EBM’s dominant methodology.
An early criticism of EBM, and of nearly all
EBP approaches, is that it appears to give
greatest weight to science with little attention
to the “art” that also underlies the practice of
medicine, nursing, and other allied health
professions. For example, Guyatt and col-
leagues cited attention to patients’ humanistic
needs as a requirement for EBM (Evidence-
Based Medicine Working Group, 1992).
Nursing and other health-care disciplines note
that EBP must be delivered within a context of
caring to achieve safe, effective, and holistic
care that meets the needs of patients (DiCenso,
Cullum, Ciliska, & Guyatt, 2004).
Evidence-based practice is also criticized as
being “cookbook care” that does not take the
individual into account. Yet a requirement of
EBP is that incorporating research evidence
into practice should consistently take account
of the patient’s unique circumstances, prefer-
ences, and values. As noted by Sackett et al.
(2000), when these three elements are inte-
grated to inform clinical decision making,
“clinicians and patients form a diagnostic and
therapeutic alliance which optimizes clinical
outcomes and quality of life” (p. 1).
One of the key issues in the use of EBM is
the debate around what constitutes best evi-
dence from the findings of previous studies and
how these various types of evidence are
weighted, or even excluded, in the decision-
making process. The following section first
outlines the way in which evidence currently
tends to be judged and then provides a more in-
depth analysis of each type of evidence.
Levels and Types of Evidence
Partly through the work of Archie Cochrane,
the quality of health care has come to be
judged in relation to a number of criteria:
efficacy (especially in relation to effective-
ness), efficiency, and equity. Along with
acceptability, access, and relevance, these
criteria have been called the “Maxwell Six”
and have formed the foundation of decision
making around service provision and funding
in the United Kingdom’s National Health
Service (Maxwell, 1992). Other health systems
around the world have also adopted these criteria
to aid in policy decision making, in both publicly
and privately funded settings. Despite this, there
has been a tendency for evidence relating to
effectiveness to dominate the decision-making
process in EBM and, in particular, effectiveness
demonstrated through RCTs.
There has been considerable debate about
the ethical problems arising from clinicians
focusing too much on efficacy and not enough
on efficiency when making decisions about
treatment (Maynard, 1997). In any health
system with limited resources, the decision to
use the most effective treatment rather than the
most efficient one has the potential to impact
on the likelihood of having sufficient resources
to deliver that treatment, or any other, in the
future. By treating one person with the most
effective treatment the clinician may be taking
away the opportunity to treat others, especially
if that treatment is very expensive in relation to
its effectiveness compared to less expensive
treatment options.
According to current EBM methods, the
weight that a piece of evidence brings to
the balance of information used to make a
decision about whether a treatment is supported
by evidence can be summarized in Table 1.1.
Although there are subtle variations in this
hierarchy between different EBM guidelines
and other publications, they all typically start
with systematic reviews of RCTs at the top and
end with expert opinion at the bottom.
Before discussing systematic reviews, meta-
analyses, and clinical guidelines, it is import-
ant to understand the type of study that these
sources of evidence are typically based on:
the RCT. RCTs are a research design in
which the participants are randomly assigned
to treatment or control groups. RCTs are
really a family of designs, with different
components such as blinding (participants
and experimenter), different randomization
methods, and other differences in design.
Analysis of RCTs is quantitative, involving
estimation of the statistical significance or
probability of the difference in outcomes
observed between the treatment and control
groups in the study. The probabilities obtained
are an estimate of the likelihood of that size
difference, or something larger, existing in the
population.
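As a minimal sketch of the quantitative analysis just described (the chapter itself presents no code, and the outcome scores below are entirely hypothetical, simulated data), a comparison of treatment and control group outcomes might look like this:

```python
# Illustrative sketch only: a two-sample comparison of simulated RCT
# outcomes. All numbers are invented; nothing here comes from the chapter.
import math
import random
import statistics

random.seed(1)

# Hypothetical outcome scores: the treatment group improves more on average.
treatment = [random.gauss(12.0, 4.0) for _ in range(50)]
control = [random.gauss(9.0, 4.0) for _ in range(50)]

def welch_t(a, b):
    """Welch's t statistic and approximate degrees of freedom for two
    independent groups (does not assume equal variances)."""
    ma, mb = statistics.mean(a), statistics.mean(b)
    va, vb = statistics.variance(a), statistics.variance(b)
    na, nb = len(a), len(b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

t, df = welch_t(treatment, control)
print(f"t = {t:.2f} on approximately {df:.0f} degrees of freedom")
```

The t statistic, referred to a t distribution with the computed degrees of freedom, yields the probability the text describes: the likelihood of observing a difference of that size, or larger, if none existed in the population.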
There are a number of international data-
bases that store and provide information
from RCTs, including the Cochrane Central
Register of Controlled Trials (CENTRAL;
mrw.interscience.wiley.com/cochrane/cochrane_clcentral_articles_fs.html),
OTseeker (www.otseeker.com),
PEDro (www.pedro.fhs.usyd.edu.au), and the
Turning Research Into Practice (TRIP;
www.tripdatabase.com) database.
Another effort to increase the ease with which
clinicians can access and use evidence from
RCTs has come through efforts to standardize
the way in which they are reported, such
as the CONSORT Statement and other efforts
from the CONSORT Group (www.consort-statement.org).
The major strength of well-designed RCTs
and other experimental designs is that the
researcher has control over the treatment given
to the experimental groups, and also has
control over or the ability to control for any
confounding factors. The result of this control
TABLE 1.1 Typical Hierarchy of Evidence for EBM

Level 1 (High): Systematic reviews or meta-analyses of randomized
controlled trials (RCTs), OR evidence-based clinical practice
guidelines based on systematic reviews of RCTs
Level 2: At least one well-designed RCT
Level 3: Well-designed quasi-experimental studies
Level 4: Well-designed case control and cohort studies
Level 5: Systematic reviews of descriptive and qualitative studies
Level 6: A single descriptive or qualitative study
Level 7 (Low): The opinion of authorities and/or expert committees

Source: Adapted from Melnyk and Fineout-Overholt (2005, p. 10).
is the ability, arguably above all other designs,
to infer causation—and therefore efficacy—
from the differences in outcomes between the
treatment and control groups.
RCTs, and other purely experimental
designs, are often criticized for having poor
ecological validity, where the conditions of
the experiment do not match the conditions
or populations in which the treatment might
be delivered in the real world. Low ecological
validity can, although does not necessarily,
lead to low external validity where the findings
of the study are not generalizable to other
situations, including the real world. Another
issue with RCTs is that they are relatively
resource intensive and this leads to increased
pressure to get desired results or to suppress
any results that do not fit the desired outcome
(Gluud, 2006; Simes, 1986).
In social sciences, RCTs are not always
possible or advisable, so many systematic
reviews and meta-analyses in these areas have
less restrictive inclusion criteria. Commonly,
these include studies that compare a treated
clinical group with an untreated or attention
control group (Kazdin & Weisz, 1998;
Rosenthal, 1984). In these situations, a quasi-
experimental design is often adopted. There is
also evidence that RCTs are not necessarily any
more accurate at estimating the effect of a given
treatment than quasi-experimental and obser-
vational designs, such as cohort and case con-
trol studies (Concato, Shah, & Horwitz, 2008).
Systematic reviews provide a summary of
evidence from studies on a particular topic. For
EBM this typically involves the formation of
an expert committee or panel, followed by the
systematic identification, appraisal, and syn-
thesis of evidence from relevant RCTs relating
to the topic (Melnyk & Fineout-Overholt,
2005). The result of the review is usually some
recommendation around the level of empirical
support from RCTs for a given diagnostic tool
or treatment. The RCT, therefore, is seen as the
gold standard of evidence in EBM.
Systematic review and meta-analysis are
closely related EBP methods for evaluating
evidence of treatment quality from a body of
sometimes contradictory research.
There is a great deal of overlap between
methods, stemming in part from their different
origins. The systematic review arises from
EBM and the early work of Cochrane and more
recently Sackett and colleagues (2000). Meta-
analysis originated in the 1970s, with the
contributions of Glass (1976) in education and
Rosenthal (1984) in psychology being central.
Meta-analyses are a particular type of sys-
tematic review. In a meta-analysis, measures
of the size of treatment effects are obtained
from individual studies. The effect sizes from
multiple studies are then combined using a
variety of techniques to provide a measure of
the overall effect of the treatment across all
of the participants in all of the studies included
in the analysis.
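The combining step can be sketched as follows. The effect sizes and variances below are invented for illustration, and the fixed-effect, inverse-variance weighted average shown is only one of the "variety of techniques" mentioned above:

```python
# Illustrative sketch only: fixed-effect inverse-variance meta-analysis
# of hypothetical standardized mean differences from five invented studies.
import math

# (effect size d, variance of d) for five hypothetical studies
studies = [(0.40, 0.04), (0.25, 0.02), (0.55, 0.09), (0.10, 0.03), (0.35, 0.05)]

# Inverse-variance weights: more precise studies count for more.
weights = [1.0 / v for _, v in studies]
pooled = sum(w * d for (d, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))            # standard error of pooled d
ci = (pooled - 1.96 * se, pooled + 1.96 * se)  # approximate 95% CI

print(f"pooled d = {pooled:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

The pooled estimate draws on all participants in all included studies, which is why, as the text notes below, the combined analysis has a larger effective sample size and smaller random error than any single study.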
Both approaches use explicit methods to
systematically search, critically appraise for
quality and validity, and synthesize the litera-
ture on a given issue. Thus, a key aspect is the
quality of the individual studies and the jour-
nals in which they appear. Searches ideally
include unpublished reports as well as pub-
lished reports to counteract the “file drawer”
phenomenon: Published findings as a group
may be less reliable than they seem because
studies with statistically nonsignificant find-
ings are less likely to be published (Rosenthal,
1984; Sackett et al., 2000).
The principal difference between systematic
review and meta-analysis is that the latter includes
a statistical method for combining results of
individual studies that produces a larger sam-
ple size, reduces random error, and has greater
power to determine the true size of the inter-
vention effect. Not only do these methods
compensate for the limited power of individual
studies, which can result in a Type II error (the
failure to detect an actual effect when one is present),
they can also reconcile conflicting results
(Rosenthal, 1984; Sackett et al., 2000).
The criticisms of meta-analysis mostly relate
to the methodological decisions made during
the process of conducting a meta-analysis,
often reducing the reliability of findings. This
can result in meta-analyses of the same topic
yielding different effect sizes (i.e., summary
statistic), with these differences stemming
from differences in the meta-analytic method
and not just differences in study findings
(Flather, Farkouh, Pogue, & Yusuf, 1997).
Methodological decisions that can influence
the reliability of the summary statistic pro-
duced from a meta-analysis include the coding
system used to analyze studies, how inclusive
the study selection process was, the outcome
measures used or accepted, and the use of raw
effect sizes or adjusted sample sizes (Flather
et al., 1997). Some meta-analyses are con-
ducted without using a rigorous systematic
review approach, and this is much more likely
to produce a mathematically valid but clinic-
ally invalid conclusion (Kazdin & Weisz,
1998; Rosenthal, 1984).
There are also some criticisms of the
meta-analysis approach, specifically from the
child and adolescent psychology literature.
Meta-analyses can obscure qualitative differ-
ences in treatment execution such as investi-
gator allegiance (Chambless & Hollon, 1998;
Kazdin & Weisz, 1998) and are limited by
confounding among independent variables such
as target problems, which tend to be more evi-
dent with certain treatment methods (Kazdin &
Weisz, 1998).
Evidence-based clinical practice guidelines
are also included in this top level of evidence,
with the caveat that they must be primarily
based on systematic reviews of RCTs. The
purpose of these practice guidelines is to pro-
vide an easy-to-follow tool to assist clinicians
in making decisions about the treatment that is
most appropriate for their patients (Straus,
Richardson, Glasziou, & Haynes, 2005).
There are some issues with the publication
and use of evidence-based clinical practice
guidelines. One problem is that different
groups of experts can review the same data and
arrive at different conclusions and recom-
mendations (Hadorn, Baker, Hodges, & Hicks,
1996). Some guidelines are also criticized for
not being translated into tools for everyday
use. One of the major drawbacks of clinical
guidelines is that they are often not updated
frequently to consider new evidence (Lohr,
Eleazer, & Mauskopf, 1998).
In addition to individual researchers and
groups of researchers, there are a large number
of organizations that conduct systematic
reviews, publish clinical practice guidelines,
and publish their findings in journals and in
databases on the Internet, including the
Cochrane Collaboration (www.cochrane.org/),
the National Institute for Clinical Excellence
(NICE; www.nice.org.uk/), the Joanna Briggs
Institute (www.joannabriggs.edu.au/about/home.php), and the
TRIP database (www.tripdatabase.com/index.html). For example,
the Cochrane Collaboration, an organization
named after Archie Cochrane, is an inter-
national network of researchers who conduct
systematic reviews and meta-analyses and
provide the results of these to the research,
practice, and policy community via the
Cochrane Library.
The overall strengths of systematic reviews,
meta-analyses, and clinical practice guidelines
relate partly to the nature of the methods used
and partly to the nature of RCTs, which were
discussed earlier. The ultimate goal of scien-
tific research is to contribute toward a body of
knowledge about a topic (Bowling, 1997).
Systematic reviews and clinical practice
guidelines essentially summarize the body of
knowledge around a particular treatment,
something that would otherwise take a clinician
a very long time to do alone, so it is easy to see
the value of a process that does this rigorously
and systematically.
There are a number of overall weaknesses
for this top level of evidence for EBM. For
example, the reliability of findings from sys-
tematic reviews has been found to be sensitive
to many factors, including the intercoder reli-
ability procedures adopted by the reviewers
(Yeaton & Wortman, 1993). It is possible for
different people using the same coding system
to come up with different conclusions about
the research evidence for a particular treat-
ment. The selection of comparable studies is
also a major issue, especially in matching
comparison groups and populations. The con-
trol and treatment groups in RCTs and other
studies are usually not identical and this raises
issues, particularly in the use of meta-analyses,
where the treatment effects are being com-
bined (Eysenck, 1994). The mixing of popu-
lations can also cause problems, where the
treatment may only be effective in one or more
specific populations but the review has
included studies from other populations where
the treatment is not effective. The overall
effect of mixing these populations is to
diminish the apparent effectiveness of the
treatment under review (Eysenck, 1994).
Other issues with this top level of evidence
relate to the information not included. Once a
review or guideline is completed, it must be
updated regularly to incorporate new evidence
that may change its conclusions. Reviews also
typically exclude non-RCT studies, gray
literature (such as technical reports and
commissioned reports), and unpublished
studies. Studies that do not get
into peer-reviewed journals, and therefore
usually get excluded from the review process,
are often those that did not have significant
results. The effect of this is that reviews will
often exaggerate the effectiveness of a treat-
ment through the exclusion of studies where
the treatment in question was found to be
ineffective. For this reason there has been a
recent move to introduce what is called the
“failsafe N” statistic. This is the hypothetical
number of unpublished (or hidden) studies
showing, on average, no effect that would be
required to overturn the statistically significant
effects found from review of the published (or
located) studies’ results (Becker, 2006).
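A rough sketch of this computation, using Rosenthal's (1984) Stouffer-based formulation of the fail-safe N (the per-study Z scores below are invented for illustration):

```python
# Illustrative sketch only: Rosenthal's fail-safe N, the number of
# averaged-null (Z = 0) unpublished studies needed to make the combined
# Stouffer Z nonsignificant. The Z scores are hypothetical.

z_scores = [2.1, 1.8, 2.5, 1.2, 2.9, 1.6]  # invented per-study Z values
k = len(z_scores)
z_alpha = 1.645                             # one-tailed critical Z, alpha = .05

# Fail-safe N = (sum of Z)^2 / z_alpha^2 - k
failsafe_n = (sum(z_scores) ** 2) / (z_alpha ** 2) - k
print(f"fail-safe N = {failsafe_n:.0f}")
```

A large fail-safe N relative to the number of located studies suggests the reviewed effect is robust to the file drawer problem; a small one suggests a few unpublished null results could overturn it.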
Quasi-experimental designs resemble RCTs,
except that the groups are not randomly
assigned, there is no control group, or one or
more of the other characteristics of an
RCT is missing. In many situations randomization of
participants or having a control group is not
practically and/or ethically possible. The
strengths of quasi-experimental designs come
from the degree of control that the researcher has
over the groups in the study, and over possible
confounding variables. A well-designed quasi-
experimental study allows some degree of
causation to be inferred from differences in
outcomes between study groups.
Case control studies identify a population with the outcome of interest (cases) and a population without that outcome (controls), then collect retrospective data to determine their relative exposure to factors of interest (Grimes & Schulz, 2002). Case control studies have several strengths: They have more ecological validity than experimental studies; they are well suited to very uncommon health conditions; they establish a relatively clear temporal sequence, compared to lower level evidence, which allows some degree of causality to be inferred; and they are relatively quick and inexpensive to conduct and can examine multiple potential causes at one time (Grimes & Schulz, 2002). Their weaknesses include the inability to control potentially confounding variables except through statistical manipulation, the reliance on participants' recall or on retrospectively collated information from existing data, and the difficulty of selecting appropriate control participants (Grimes & Schulz, 2002; Wacholder, McLaughlin, Silverman, & Mandel, 1992). Together, these weaknesses often make the results of case control studies difficult to interpret correctly (Grimes & Schulz, 2002).
Cohort studies are longitudinal studies
where groups are divided according to whether they receive a treatment or exposure of interest, and are followed
over time to assess the outcomes of interest
(Roberts & Yeager, 2006). The strengths of
cohort studies include the relatively clear
Hersen, M., & Sturmey, P. (2012). Handbook of evidence-based practice in clinical psychology, child and adolescent disorders. John Wiley & Sons, Incorporated.
temporal sequence that can be established
between the introduction of an intervention
and any subsequent changes in outcome vari-
ables, making the establishment of causation
possible, at least to some degree. The limita-
tions of cohort studies include the difficulty in
controlling extraneous variables, leading to a
relatively limited ability to infer causality.
They are also extremely resource intensive and poorly suited to situations in which there is a long interval between treatment and outcome (Grimes & Schulz, 2002).
Descriptive studies involve the description of
data obtained relating to the characteristics
of phenomena or variables of interest in a par-
ticular sample from a population of interest
(Melnyk & Fineout-Overholt, 2005). Correl-
ational studies are descriptive studies where the
relationship between variables is explored.
Qualitative studies collect nonnumeric data,
such as interviews and focus groups, with the
analysis usually involving some attempt at
describing or summarizing certain phenomena
in a sample from a population of interest
(Melnyk & Fineout-Overholt, 2005). Descriptive studies in general sit low down on the ladder
of evidence quality due to their lack of control
by the researcher and therefore their inability to
infer causation between treatment and effect.
On the other hand, much research is only feas-
ible to conduct using this methodology.
Qualitative research in health services mostly arose from a desire to gain a deeper understanding of what quantitative research findings meant for the patient and provider
(Mays & Pope, 1996). It asks questions such as
“How do patients perceive . . . ?” and “How
do patients value the options that are offered?”
Expert opinion is material written by rec-
ognized authorities on a particular topic. Evi-
dence from these sources has the least weight
in EBM, although to many clinicians the views
of experts in the field may hold more weight
than the higher level evidence outlined above.
Despite the criticisms leveled at it and its methodological complexity, EBM is seen by most clinicians
as a valid and novel way of reasoning and
decision making. Its methods are widely dis-
seminated through current clinical education
programs at all levels throughout the world. As
proposed by Gordon Guyatt and colleagues
in 1992, EBM has led to a paradigm shift in
clinical practice (Sackett et al., 2000).
CURRENT STATUS OF EBP
MOVEMENTS ACROSS HEALTH
AND EDUCATION PROFESSIONS
Evidence-based practice was initially in the
domain of medicine, but now most human
service disciplines subscribe to the principles
of EBP. For example, EBP or the use of
empirically supported treatment is recom-
mended by the APA, Behavior Analyst Certi-
fication Board (BACB), American Psychiatric
Association, National Association of Social
Workers, and General Teaching Council for
England. For U.S. education professionals,
EBP has been mandated by law. The No Child
Left Behind Act of 2001 (NCLB; U.S.
Department of Education, 2002) was designed
to make states, school districts, principals, and teachers more answerable for the performance of the students they serve. Along
with an increase in accountability, the NCLB
requires “schoolwide reform and ensuring the
access of children to effective, scientifically
based instructional strategies and challenging
academic content" (p. 1440). The advent of the NCLB and the resulting move toward EBP occurred because research on effective and efficient teaching techniques (e.g., Project Follow Through: Bereiter & Kurland, 1981; Gersten, 1984) was seldom translated into actual practice, despite a growing public push for accountability (Hess, 2006). Education appears to have
been particularly susceptible to implementing
programs that were fads, based on little more
than personal ideologies and good marketing.
Much money and time has been lost by school
districts adopting programs that have no
empirical support for their effectiveness, such
as the adoption of known ineffective substance
abuse prevention programs (Ringwalt et al.,
2002) or programs that are harmful for some
students and their families, such as facilitated
communication (Jacobson, Mulick, &
Schwartz, 1995). This leads not only to resource wastage but also to an even greater cost in lost opportunities for the students involved.
It seems like common sense for EBP to be
adopted by other disciplines, as it is difficult to
understand why reputable practitioners, ser-
vice providers, and organizations would not
want to provide interventions that have been
shown to be the most effective and efficient.
Yet, despite this commonsense feeling, the
codes of conduct and ethical requirements, and
mandated laws, many disciplines (including
medicine) have found it difficult to bridge the
gap between traditional knowledge-based
practice and the EBP framework (Greenhalgh,
2001; Stout & Hayes, 2004). Professions such
as social work (Roberts & Yeager, 2006;
Zlotnick & Solt, 2006), speech language
therapy (Enderby & Emerson, 1995, 1996),
occupational therapy (Bennett & Bennett,
2000), and education (Shavelson & Towne,
2002; Thomas & Pring, 2004) have all
attempted to overcome the difficulties that
have arisen as they try to move EBP from a
theoretical concept to an actual usable tool
for everyday practitioners. Although some of
these disciplines have unique challenges to
face, many are common to all.
Many human services disciplines have
reported that one of the major barriers to the
implementation of EBP is the lack of sound
research. For example, speech language ther-
apy reports difficulties with regard to the
quality and dissemination of research. A
review of the status of speech language therapy
literature by Enderby and Emerson (1995,
1996) found that there was insufficient quality
research available in most areas of speech
language therapy; however, they did find
that those areas of speech therapy that
were associated with the medical profession
(e.g., dysphasia and cleft palate) were more
likely to have research conducted than those
associated with education (e.g., children with
speech and language disorders and populations
with learning disabilities). There continues to
be a lack of agreement on the effectiveness of speech language therapy (Almost & Rosenbaum,
1998; Glogowska, Roulstone, Enderby, &
Peters, 2000; Robertson & Weismer, 1999).
In addition to the lack of research, Enderby
and Emerson were also concerned about the
manner in which health resources were being
allocated for speech language therapists. They
found that of the resources allocated to speech
language therapy approximately 70% were
being used with children with language
impairments, despite there being limited
quality evidence of the effectiveness of speech
language therapy with this population at that time. They also identified dysarthria as the most common acquired disorder, yet one with very little research outside of the Parkinson's disease population. So, again, many resources
have been allocated to programs that have no
evidence of effectiveness. This questionable
allocation of resources is seen in other fields
also. For example, a number of studies have
found that people seeking mental health
treatments are unlikely to receive an inter-
vention that would be classified as EBP and
many will receive interventions that are inef-
fective (Addis & Krasnow, 2000; Goisman,
Warshaw, & Keller, 1999).
A number of organizations and government
agencies within a variety of fields try to
facilitate the much-needed research. Examples
in education include the Department for Children, Schools, and Families (previously the Department for Education and Skills; United Kingdom), the National Research Council (United States), and the National Teacher Research Panel (United Kingdom). Similarly,
in social work the National Association of
Social Workers (United States) and the Society
for Social Work and Research (United States)
both facilitate the gathering of evidence to
support the use of EBP within their field;
however, even if these organizations generate
and gather research demonstrating the effect-
iveness of interventions, this does not always
translate into the implementation of EBP.
As mentioned previously, resource allocation
does not necessarily go to those treatments
with evidence to support them due to a lack of
effective dissemination on the relevant topics.
Despite medicine being the profession with the longest history of EBP, Guyatt et al. (2000) stated that many clinicians either do not want to use original research or fail to do so because of time constraints and/or a lack of understanding of how to interpret the information. Rosen,
Proctor, Morrow-Howell, and Staudt (1995)
found that social workers also fail to consider empirical research when making practice decisions: Less than 1% of the practice decisions they examined were justified by research.
Although the lack of research-based decision
making is concerning, many professions are
attempting to disseminate information in a
manner that is more user friendly to its prac-
titioners. Practice or treatment guidelines have
been created to disseminate research in a
manner that will facilitate EBP. These guide-
lines draw on the empirical evidence and
expert opinion to provide specific best-practice
recommendations on what interventions or
practices are the most effective/efficient for
specific populations (Stout & Hayes, 2004).
There are a number of guideline clearing-
houses (National Guidelines Clearinghouse
[www.guideline.gov/] and What Works
Clearinghouse [ies.ed.gov/ncee/wwc/]). Asso-
ciations and organizations may also offer
practice guidelines (e.g., American Psychiatric
Association).
Although there appears to have been
emphasis placed on EBP, there is still much
work to be done in most human service fields,
including medicine, to ensure that there is
appropriate research being conducted, that this
research is disseminated to the appropriate
people, and that the practitioners then put it
into practice. Reilly, Oates, and Douglas
(2004) outlined a number of areas that the
National Health Service Research and Devel-
opment Center for Evidence-Based Medicine
had identified for future development. They
suggest that there is a need for a better
understanding of how practitioners seek
information to inform their decisions, what
factors influence the inclusion of this evidence
into their practice, and the value placed, both
by patients and practitioners, on EBP. In add-
ition, there is a need to develop information
systems that facilitate the integration of evi-
dence into the decision-making processes for
practitioners and patients. They also suggest
that there is a need to provide effective and
efficient training for frontline professionals in
evidence-based patient care. Finally, they
suggest that there is simply a need for more
research.
EVIDENCE IN PSYCHOLOGY
Although RCT research is still held up as the
gold standard of evidence in medical science,
in other clinical sciences, especially psychology, the notion of best research evidence has been less tied to RCTs as the only scientifically valid approach. Randomized controlled
trials are still held in high regard in clinical
psychology, but it has long been recognized
that alternative designs may be preferred and
still provide strong evidence, depending on the
type of intervention, population studied, and
patient characteristics (APA Presidential Task
Force on Evidence-Based Practice, 2006;
Chambless & Hollon, 1998).
Research Methods Contributing to
Evidence-Based Practice in Psychology
We have described and discussed RCTs in the
previous section on EBM. The issues con-
cerning RCTs and their contribution to the
clinical psychology evidence base are similar.
The inclusion of evidence of treatment efficacy
and utility from research paradigms other than
RCT methods and approximations thereto has
been recommended for clinical psychology
since the EBPP movement commenced
(Chambless & Hollon, 1998). The APA
Presidential Task Force on Evidence-Based
Practice (2006) findings support the use of
qualitative studies, single-case or small-N
experimental designs, and process-outcome
studies or evaluations, in addition to the RCT
and systematic review techniques employed in
EBM. A more in-depth discussion of some of
these methods will be provided.
Psychologists and members of other profes-
sions that are more influenced by social sci-
ence research methods than medicine may
consider that findings from qualitative
research add to their EBP database. Although
qualitative research is not accepted as legit-
imate scientific study by many psychologists,
and not usually taught to clinical psychologists
in training, it may well have its place in
assessing variables regarding clinical exper-
tise and client preferences (Kazdin, 2008).
Research questions addressable through
qualitative research relevant to EBPP are
similar to those mentioned for EBM.
Another important source of scientific evi-
dence for psychologists, educators, and other
nonmedical professionals is that derived
from small-N research designs, also known as
single-case, single-subject, or N = 1 designs. These alternative labels can be confusing, especially since some single-case designs
include more than one subject (e.g., multiple
baseline across subjects usually include three
or more participants). The most familiar
small-N designs are ABAB, multiple baseline,
alternating treatments, and changing criterion
(Barlow & Hersen, 1984; Hayes, Barlow, &
Nelson-Gray, 1999; Kazdin, 1982). Data from
small-N studies have been included as sources
of evidence, sometimes apparently equivalent
to RCTs (Chambless & Hollon, 1998), or at a
somewhat lower level of strength than RCTs
(APA, 2002).
Strength of evidence from small-N designs.
Small-N designs can be robust in terms of
controlling threats to internal validity;
however, external validity has often been
viewed as problematic. This is because par-
ticipants in small-N studies are not a randomly
selected sample from the whole population of
interest. The generality of findings from small-
N studies is established by replication across
more and more members of the population of
interest in further small-N studies. A hypo-
thetical example of the process of determining
the generality of an intervention goes like this:
(a) A researcher shows that a treatment works
for a single individual with a particular type of
diagnosis or problem; (b) Either the same or
another researcher finds the same beneficial
effects with three further individuals; (c)
Another researcher reports the same findings
with another small set of individuals; and so
on. At some point, sufficient numbers of indi-
viduals have been successfully treated using
the intervention that generality can be claimed.
Within a field such as a particular treatment for
a particular disorder, small-N studies can be
designed so results can be pooled to contribute
to an evidence base larger than N = 1 to 3 or 4 (Lord et al., 2005).
Even if there is an evidence base for an
intervention from a series of small-N studies,
every time another individual receives the
same treatment, the clinician in scientist-
practitioner role evaluates the effects of the
intervention using small-N design methods.
Thus, every new case is clinical research to
determine the efficacy and effectiveness of this
treatment for that individual.
Clinical psychologists should be cautioned that
physicians and medical researchers have a
similar-sounding name for an experimental
design with their “N of 1 trials.” The definition
of N of 1 trials can vary somewhat but typically
they are described as “randomised, double
blind multiple crossover comparisons of an
active drug against placebo in a single patient”
(Mahon, Laupacis, Donner, & Wood, 1996,
p. 1069). These N of 1 trials are continued until
the intervention, usually a drug, in question
shows consistently better effects than its
comparison treatment or control condition.
Then the intervention is continued, or discon-
tinued if there were insufficient beneficial
effects. The N of 1 trials can be considered
top of the hierarchy of evidence in judging
strength of evidence for individual clinical
decisions, higher even than systematic reviews
of RCTs (Guyatt et al., 2000). This is because
clinicians do not have to generalize findings
of beneficial effects of the treatment to this
patient from previous researched patients.
Consistent beneficial findings from many N
of 1 trials add up to an “N of many” RCT,
thus providing evidence of the generality of
findings. An N of 1 trial is quite similar to the alternating treatments small-N design employed in clinical psychology, especially applied behavior analysis. However, the two are not identical: For some psychosocial interventions the differences can be overcome with careful planning, but many interventions could not be assessed by fitting them into an N of 1 trial format.
beyond the scope of this chapter; however,
close study of the methodological require-
ments of both N of 1 designs and small-N
designs can show the similarities and differ-
ences (Barlow & Hersen, 1984; Guyatt et al.,
2000; Hayes et al., 1999; Kazdin, 1982; Mahon
et al., 1996).
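The pair-blocked randomization that such crossover trials rely on can be sketched as follows. The function name and block structure here are our illustrative assumptions, not a prescribed protocol.

```python
import random

def n_of_1_schedule(n_pairs=4, seed=None):
    """Build a condition sequence for an N of 1 crossover trial:
    each block pairs one active-drug period with one placebo period,
    in random order, so the patient serves as their own control."""
    rng = random.Random(seed)
    schedule = []
    for _ in range(n_pairs):
        pair = ["active", "placebo"]
        rng.shuffle(pair)  # randomize order within the block
        schedule.extend(pair)
    return schedule
```

Each block is balanced by construction, so however the within-block order falls out, the patient receives equal exposure to both conditions across the trial.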
Some authors state that small-N designs may be most suitable for evaluating new treatments (Lord et al., 2005); others, that single-case studies are
most suitable for determining treatment effects
for individuals (APA Presidential Task Force
on Evidence-Based Practice, 2006). We do not
disagree with either, except to point out that it
has been argued that small-N studies can con-
tribute much more to EBPP than these two
advantages. In the following section, we
review how ESTs can be determined from an
evidence base consisting entirely of small-N
studies.
Criteria for Assessing Efficacy
Lonigan, Elbert, and Johnson (1998) tabulated
criteria for determining whether an intervention
for childhood disorders could be considered
well-established (i.e., efficacious) or probably
efficacious (i.e., promising). For the former,
they recommended at least two well-conducted studies of RCT standard by independent research
teams or a series of independent well-designed
small-N studies with at least nine participants
carefully classified to the diagnostic category
of interest showing that the intervention was
better than alternative interventions. The
availability of treatment manuals was recom-
mended. For “promising” treatments, the
criteria were relaxed to allow nonindependent
RCTs, comparing treatment to no treatment, or a
minimum of three small-N studies. These cri-
teria followed those established at the time for
psychological therapies in general (Chambless
& Hollon, 1998).
We have discussed the determination of
empirical support from RCTs already. The
next sections will examine how evidence
is derived from small-N studies: First,
how evidence is assessed from individual
research reports; and second, how the evi-
dence can be combined from a group of
research articles addressing the same topic
of interest.
Evaluating Evidence From Small-N
Research Designs
How do those assessing the efficacy of a treat-
ment from small-N designs measure the
strength of the design for the research purpose
and whether, or to what extent, a beneficial
effect has been demonstrated? Chambless and
Hollon (1998) recommended that reviewers rate
single-case studies on the stability of their
baselines, use of acceptable experimental
designs, such as ABAB or multiple base-
line designs, and visually estimated effects.
Baselines need not always be stable to provide
an adequate control phase; there are other valid
designs (e.g., changing criterion, multielement
experimental designs); and visual estimates
of effects are not necessarily reliable
(Cooper, Heron, & Heward, 2007). Validity
and, probably, reliability of reviews using only
Chambless and Hollon’s (1998) criteria could
not be assured.
Attempts to improve reliability and validity
of judgments about the value of small-N design
studies have included establishing more well-
defined criteria. Quite detailed methods for
evaluating small-N studies have been pub-
lished. For example, Kratochwill and Stoiber
(2002) describe the basis of a method endorsed
by the National Association of School Psych-
ologists (NASP) designed for evaluating mul-
tiple research reports that used single-case
designs for establishing evidence-based rec-
ommendations for interventions in educational
settings. The system is more inclusive of
design variations beyond those recommended
by Chambless and Hollon (1998), and
reviewers are instructed to code multiple
variables, including calculating effects sizes
from graphical data. The coding form extended
over 28 pages. Shernoff, Kratochwill, and
Stoiber (2002) illustrated the assessment pro-
cedure and reported that, following extensive
training and familiarity with the coding man-
ual, they achieved acceptable agreement
among themselves. They stated that the process took 2 hours for a single study, although our own graduate students report that it takes much longer when a research article features multiple dependent and independent variables or hundreds of data points. The NASP method was constructively
criticized by Levin (2002), and is appa-
rently being revised and expanded further
(Kratochwill, 2005).
Meanwhile, others have developed what
appear to be even more labor intensive
methods for attempting to evaluate objectively
the strength of evidence from a series of small-
N designs. As an example of a more detailed
method, Campbell (2003) measured every data
point shown on published graphs from 117
research articles on procedures to reduce
problem behaviors among persons with aut-
ism. Each point was measured by using
dividers to determine the distance between the
point and zero on the vertical axis. Campbell
calculated effect sizes for three variables: mean
baseline reduction, percentage of zero data
points, and percentage of nonoverlapping data
points. Use of these statistical methods may
have been appropriate considering the types of
data Campbell examined; nonzero baselines of
levels of problem behavior followed by inter-
vention phases in which the researchers’ goal
was to produce reduction to zero (see Jensen,
Clark, Kircher, & Kristjansson, 2007, for crit-
ical review of meta-analytic tools for small-N
designs). Nevertheless, the seemingly arduous
nature of the task and the lack of generaliz-
ability of the computational methods to
reviewing interventions designed to increase
behaviors are likely to militate against wider acceptance of Campbell's (2003) method.
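Campbell's three metrics reduce to simple arithmetic on the plotted values. A minimal sketch follows; the function and variable names are ours, and exact operational definitions (e.g., which intervention points count toward percentage of zero data) vary across reviews.

```python
def effect_sizes(baseline, intervention):
    """Three effect-size metrics for behavior-reduction data
    (after Campbell, 2003), given lists of session values."""
    mean_base = sum(baseline) / len(baseline)
    mean_int = sum(intervention) / len(intervention)
    # Mean baseline reduction: % drop from baseline mean to intervention mean.
    mblr = 100.0 * (mean_base - mean_int) / mean_base
    # Percentage of zero data points in the intervention phase.
    pzd = 100.0 * sum(1 for x in intervention if x == 0) / len(intervention)
    # Percentage of nonoverlapping data points: intervention points
    # below the lowest baseline point (since reduction is the goal).
    floor = min(baseline)
    pnd = 100.0 * sum(1 for x in intervention if x < floor) / len(intervention)
    return mblr, pzd, pnd
```

For instance, a baseline of 10, 8, and 12 responses followed by intervention sessions of 4, 2, 0, and 0 gives a mean baseline reduction of 85%, 50% zero data points, and 100% nonoverlapping data points.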
A final example of methods to evaluate the
strength of an evidence base for interventions
is that outlined by Wilczynski and Christian
(2008). They describe the National Standards
Project (NSP), which was designed to deter-
mine the benefits or lack thereof of a wide
range of approaches for changing the behav-
iors of people with autism spectrum disorders
(ASD) aged up to 21 years. Their methods of
quantitative review enabled the evidence from
group and small-N studies to be integrated.
Briefly, and to describe their method for
evaluating small-N studies only, their review
rated research articles based on their scientific
merit first. Articles were assessed to deter-
mine whether they were sufficiently well-
designed in terms of experimental design,
measurement of the dependent variables,
assessment of treatment fidelity, the ability
to detect generalization and maintenance
effects, and the quality of the ASD classifi-
cation of participants. If the article exceeded
minimum criteria on scientific merit, the
treatment effects were assessed as being
beneficial, ineffective, adverse, or that the
data were not sufficiently interpretable
to decide on effects. Experienced trained
reviewers were able to complete a review
for a research article in 1 hour or less with
interreviewer agreement of 80% or more.
Inevitable problems for all these systems for
review of single case designs arise from the
necessity of creating “one size fits all” rules.
For example, the evaluative methods of both
the NASP (Kratochwill & Stoiber, 2002) and
NSP projects (Wilczynski & Christian, 2008)
include consideration of level of interobserver
agreement for determining partly the quality of
measurement of the dependent variables. The
minimum acceptable level of agreement is
specified at, say, 70% or 80%, which makes it
relatively convenient for reviewers to deter-
mine from reading the published research art-
icle that they are rating; however, it has been
known for more than 30 years that an agree-
ment percentage is rather meaningless without
examination of how behaviors were measured,
what interobserver agreement algorithm was
employed, and the relative frequency or dur-
ation of the behaviors measured (Hawkins &
Dotson, 1975). Another example concerns
coding rules for evaluating the adequacy of
baseline measures of behavior. Whether the
criterion for a baseline phase of highest scien-
tific merit is a minimum of 3 points (NASP) or
5 points (NSP), there will be occasions when
the rule should be inapplicable. For example,
in Najdowski, Wallace, Ellsworth, MacAleese,
and Cleveland (2008), after more than 20
observational sessions during intervention
showing zero severe problem behavior, a return
to baseline in an ABAB design raised the rate
of problem behavior to more than four per
minute, which was higher than any points in the
first A-phase. To continue with the baseline to
meet NASP or NSP evaluative criteria would
have been unnecessary to show experimental
control and dangerous for the participant.
These examples indicate why more detailed
and codified methods for reviewing small-N
studies quantitatively are, as yet, not firmly
established. Although there may be satisfac-
tory methods for fitting RCTs’ scientific merit
and size of effects into databases for meta-
analyses, that appears to be more problematic
with small-N designs, given their flexibility in
use (Hayes et al., 1999).
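The Hawkins and Dotson (1975) point about agreement algorithms is easy to demonstrate: For the same interval records, different IOA formulas give very different percentages, especially for low-rate behavior. A minimal sketch, assuming simple interval recording (other algorithms, such as exact count-per-interval, also exist):

```python
def interval_ioa(obs1, obs2):
    """Interval-by-interval agreement: percentage of intervals the two
    observers scored identically (occurrence or nonoccurrence)."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return 100.0 * agreements / len(obs1)

def occurrence_ioa(obs1, obs2):
    """Occurrence-only agreement: of the intervals in which at least one
    observer scored the behavior, the percentage scored by both."""
    scored = [(a, b) for a, b in zip(obs1, obs2) if a or b]
    if not scored:
        return 100.0
    return 100.0 * sum(1 for a, b in scored if a and b) / len(scored)

# Low-rate behavior over 20 intervals: the observers agree that most
# intervals are empty but agree on only one of three scored occurrences.
o1 = [1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
o2 = [0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0]
```

Here interval-by-interval IOA is 90%, comfortably above an 80% threshold, while occurrence-only IOA is about 33%: the same data pass or fail depending on the algorithm.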
Volume of Evidence From Small-N
Studies Required to Claim That an
Intervention Is Evidence Based
Having rated the strength of evidence from
individual research articles, the next step is to
determine whether a group of studies on the
same topic between them constitute sufficient
evidence to declare that an intervention is an
empirically supported treatment or a promis-
ing or emerging intervention. We discuss only
empirically supported treatment criteria here.
Consensus on what minimum criterion should
apply has yet to be reached. Chambless and
Hollon (1998) originally recommended a
minimum of two independent studies with
three or more participants (N ≥ 3) showing good effects for a total of N ≥ 6 participants. Lonigan et al. (1998) required three studies with N ≥ 3; that is, beneficial effects shown for N ≥ 9 participants. Since then, the bar has been
raised. For instance, Horner et al. (2005) pro-
posed that the criteria for determining that an
intervention is evidence-based included a
minimum of five small-N studies, from three or
more separate research groups, with at least 20
participants in total. Wilczynski and Christian
(2008) used similar criteria with ≥ 6 studies of strongest scientific merit totaling N ≥ 18
participants with no conflicting results from
other studies of adequate design. Others have
recommended similar standards: Reichow,
Volkmar, and Cicchetti (2008) described a
method for evaluating research evidence
from both group and small-N designs, as had
Wilczynski and Christian (2008). Finally,
Reichow et al. (2008) set the criterion for an
established EBP at ≥ 10 small-N studies of at
least “adequate report strength” across three
different locations and three different research
teams with a total of 30 participants, or, if at
least five studies of “strong report strength”
with a total of 15 or more participants existed,
that could substitute for the 10-study criterion.
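Stripped of their qualitative components, these thresholds reduce to three numeric parameters and can be compared side by side. The following is a minimal sketch: the numbers come from the sources just cited, but the dictionary, function, and the example evidence base are our own illustration, and the qualitative requirements (report strength, geographic spread, absence of conflicting results) are deliberately omitted.

```python
# Small-N volume-of-evidence thresholds compiled from the sources reviewed
# above. Each entry gives (minimum studies, minimum total participants,
# minimum independent research groups); None means the source states no
# explicit numeric minimum for that parameter. Qualitative requirements
# are NOT captured here.
CRITERIA = {
    "Chambless & Hollon (1998)":       (2, 6, 2),
    "Lonigan et al. (1998)":           (3, 9, None),
    "Horner et al. (2005)":            (5, 20, 3),
    "Wilczynski & Christian (2008)":   (6, 18, None),
    "Reichow et al. (2008), adequate": (10, 30, 3),
    "Reichow et al. (2008), strong":   (5, 15, None),
}

def meets(criterion: str, n_studies: int, n_participants: int,
          n_groups: int) -> bool:
    """True if an evidence base clears the numeric thresholds of `criterion`."""
    min_studies, min_participants, min_groups = CRITERIA[criterion]
    return (n_studies >= min_studies
            and n_participants >= min_participants
            and (min_groups is None or n_groups >= min_groups))

# A hypothetical evidence base of 4 studies, 12 participants, and 2 research
# groups clears the earlier, more lenient criteria but not the later ones.
for name in CRITERIA:
    print(f"{name}: {meets(name, 4, 12, 2)}")
```

This makes the "raising of the bar" visible at a glance: the same body of evidence can qualify an intervention as empirically supported under one system and fail under another.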
Hersen, M., & Sturmey, P. (2012). Handbook of evidence-based practice in clinical psychology, child and adolescent disorders. John Wiley & Sons, Incorporated. Copyright © 2012 John Wiley & Sons, Incorporated. All rights reserved.
The rationale for selecting the numerical
criteria is usually not stated. Thus, all we can
state is that some systems (e.g., Reichow et al.,
2008) are more conservative than others (e.g.,
Lonigan et al., 1998). Sometimes conservative
criteria may be appropriate, for example, when
the costs of treatment are high, and/or the
intervention is exceedingly complex requiring
highly skilled intervention agents, and/or the
benefits of the intervention are less than ideal
(i.e., it reduces problems to a more manageable
level, but does not eliminate them), and/or
some negative side effects have been observed.
On the other hand, if relatively few resources
are required to implement an effective and
rapid intervention without unwanted side
effects, fewer well-conducted studies may be
needed to persuade consumers that the inter-
vention is empirically supported, and therefore
worth evaluating with the individual patient.
This brief discussion should alert readers to examine criteria carefully when
reviewers claim that a particular intervention is
an empirically supported treatment for a par-
ticular disorder or problem for a particular
population.
The discussions on evidence from small-N
designs have left much out. For instance,
Reichow et al. (2008) and Wilczynski and
Christian (2008) developed algorithms for
assessing the strength of evidence at different
levels, although we have outlined only the
highest levels. Both groups have also reported
algorithms for determining ESTs from
mixed methods (e.g., RCTs and small-Ns).
Wilczynski and Christian (2008) report rules
for incorporating conflicting results into the
decision-making process about overall strength
of evidence (De Los Reyes & Kazdin, 2008).
Level of Specificity of Empirically
Supported Treatments
The issue to be discussed next concerns the unit
of analysis of the research evidence. We illus-
trate by examining an example provided by
Horner et al. (2005) in which they assessed the
level of support for functional communication
training (FCT; Carr & Durand, 1985). Functional communication training is an approach to reducing problem behaviors that teaches individuals an appropriate, nonproblematic way to access the reinforcers maintaining the problem behavior, as identified through functional assessment. Horner and colleagues
cited eight published research reports that
included 42 participants who had benefited in
FCT studies across five research groups. The
evidence was sufficient in quantity for it to be
concluded that FCT is an empirically supported
treatment, exceeding all criteria reviewed earlier
except that two more studies would have been
needed to reach the ≥ 10 studies required
by the criteria of Reichow et al. (2008).
It might reasonably be asked: “For which
population is FCT beneficial?” Perusal of the
original papers cited by Horner et al. (2005)
shows that 21/42 participants’ data were
reported in one of the eight cited studies
(Hagopian, Fisher, Sullivan, Acquisto, &
LeBlanc, 1998), with the oldest participant
being 16 years old, and none reported to have a
diagnosis of autism. Thus, applying Horner
et al.’s criteria, it cannot be concluded from the
studies cited that FCT is an empirically sup-
ported treatment for participants with autism
or for participants older than 16, regardless of
diagnosis. As an aside, eight participants across
the other seven studies were reported to have
autism, and three participants in total were aged
over 20 years; however, the literature on FCT
that Horner et al. (2005) included did not
appear to have been obtained from a systematic
search, so it is possible that there has been
sufficient research to show that FCT is an
empirically supported treatment for subgroups,
and perhaps autism and adults are two of those.
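The subgroup point can be made concrete with the counts above. In the small sketch below, the participant and study counts come from the text; the function is our own illustration of the Horner et al. (2005) thresholds, not a published algorithm, and the group count for the autism subgroup is an assumption, since the original papers do not report it.

```python
# Horner et al. (2005) thresholds: >= 5 small-N studies, >= 20 participants,
# >= 3 independent research groups.
def meets_horner_2005(n_studies: int, n_participants: int,
                      n_groups: int) -> bool:
    return n_studies >= 5 and n_participants >= 20 and n_groups >= 3

# Pooled FCT evidence base cited by Horner et al.: 8 studies,
# 42 participants, 5 research groups -- passes.
print(meets_horner_2005(8, 42, 5))   # True

# Autism subgroup: 8 participants across at most 7 of those studies.
# Even granting 3 research groups (assumed, not reported), the
# participant minimum fails.
print(meets_horner_2005(7, 8, 3))    # False

# Adults (over 20 years): only 3 participants in total -- fails decisively.
print(meets_horner_2005(3, 3, 3))    # False
```

The pooled totals satisfy the criterion while every clinically relevant subgroup fails it, which is exactly why the population to which an EST label applies must be read off the participants actually studied.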
TREATMENT GUIDELINES
Treatment guidelines specifically recommend
ESTs to practitioners and consumers. Alter-
native descriptors are clinical practice and
best practices guidelines (Barlow, Levitt, &
Bufka, 1999). The view of APA was that
guidelines are
not intended to be mandatory, exhaustive,
or definitive . . . and are not intended to
take precedence over the judgment of
psychologists. APA’s official approach to
guidelines strongly emphasizes professional
judgment in individual patient encounters
and is therefore at variance with that of
more ardent adherents to evidence-based
practice. (Reed et al., 2002, p. 1042)
It is apparent that many health-care organ-
izations, insurance companies, and states in the
United States interpret the purpose of lists of
ESTs and treatment guidelines differently
(Gotham, 2006; Reed & Eisman, 2006). They
can interpret guidelines as defining what treat-
ments can be offered to patients and, via
manualization, exactly how treatment is to be
administered, by whom, and for how long. The
requirement for manualization allows funders of
treatment to specify a standard reimbursement
for the treatment provider. Thus, the empirically
supported treatment movement was embraced
by governments and health-care companies as it
was anticipated to be a major contributor to
controlling escalating health-care costs.
Many practicing psychologists were less
enthusiastic about the empirically supported
treatment and EBPP movements (see contri-
butions by clinicians in Goodheart et al., 2006;
Norcross et al., 2005). General concerns included that requirements to use only ESTs restrict professionalism by recasting psychologists as technicians who mechanically go by the book, and that they restrict client choice to those effective interventions granted empirically supported treatment status ahead of others only because they, like drugs, are relatively easy to evaluate in the RCT format.
Prescription of one-size-fits-all ESTs may
further disadvantage minorities and people with severe and multiple disorders, for whom scant evidence is available. There
were also concerns that the acknowledged
importance of clinical expertise, such as
interpersonal skills to engage the client (child)
and significant others (family) in a therapeutic
relationship, would be ignored.
Contrary to the pronouncements from the
APA (2002, 2006), guidelines have been
interpreted or developed that “assume the force
of law” in prescribing some interventions and
proscribing others (Barlow et al., 1999, p. 155).
Compulsion of psychologists in practice to follow treatment guidelines has been reported in the United States: those who use only ESTs may gain immunity from malpractice lawsuits, whereas those who do not face greater vulnerability to litigation and higher professional indemnity insurance premiums (Barlow et al., 1999; Reed et al., 2002). Some
guidelines, especially those produced by agen-
cies or companies employing psychologists,
have been viewed as thinly veiled cost-cutting
devices justified with a scientistic gloss.
Ethical Requirements
For many human service professional organ-
izations, EBP and ESTs have become an
ethical requirement. The APA’s Ethical Prin-
ciples of Psychologists and Code of Conduct
document mentions the obligation to use some
elements of EBPP; for example, “Psycholo-
gists’ work is based upon established scien-
tific and professional knowledge of the
discipline” (American Psychological Associ-
ation, 2010, p. 5). Other professional groups
appear to be more prescriptive with regard to
EBP. The BACB’s Guidelines for Responsible Conduct, for example, recommend EBP with
statements such as, “Behavior analysts rely on
scientifically and professionally derived
knowledge when making scientific or profes-
sional judgments in human service provision”
(Behavior Analyst Certification Board,
2004, p. 1). The BACB also requires the use
of ESTs:
a. The behavior analyst always has the responsibility to recommend scientifically supported most effective treatment procedures. Effective treatment procedures have been validated as having both long-term and short-term benefits to clients and society.

b. Clients have a right to effective treatment (i.e., based on the research literature and adapted to the individual client).

c. Behavior analysts are responsible for review and appraisal of likely effects of all alternative treatments, including those provided by other disciplines and no intervention. (Behavior Analyst Certification Board, 2004, p. 4)
As the previous examples show, each of these organizations has incorporated EBP and empirically supported treatments into its code of conduct and ethical statements in its own way; however, both include the basic tenets of EBP: combining the best research information with clinical knowledge and the preferences of the individuals involved.
CHILDREN, ADOLESCENTS, AND
EVIDENCE-BASED PRACTICE
IN PSYCHOLOGY
Despite there being ESTs for a number of
disorders and problem behaviors manifesting
in children and adolescents, there are more
than 500 treatments in use with children
(Kazdin, 2008), most of which are unre-
searched. Chambless and Ollendick (2001)
listed 108 ESTs for adults, compared with 37
for children, suggesting that research on ther-
apies for young people is lacking relative to
adults. Further research with child and ado-
lescent populations has been prioritized by
APA (APA Presidential Task Force on Evi-
dence-Based Practice, 2006). Researching the
effects of treatments for children brings special
difficulties (Kazdin & Weisz, 1998). Regard-
ing practice, children do not typically self-
refer for mental, psychological, or behavioral
disorders, nor are they active seekers of
ESTs or preferred interventions. Parents
or guardians tend to take those roles, either
independently or following recommendations
from family, or health or education profes-
sionals. Children and youth cannot legally
provide informed consent for treatments or for
participation in research studies. These are
among the developmental, ethical, and legal
factors that affect consideration of EBPP with
children.
Lord et al. (2005) discussed the challenges of
acquiring rigorous evidence regarding efficacy
of treatments for children with complex,
potentially chronic behavioral/psychological
disorders. Contributors to the article were
researchers from multiple disciplines assem-
bled by the National Institutes of Health in
2002. They wrote about autism spectrum dis-
orders specifically, but acknowledged that the
issues may have relevance to other child and
youth problems.
Parents may be unwilling to consent to ran-
domization studies in case their child is
assigned to what parents perceive to be a less
preferred treatment alternative, particularly
when the intervention is long term and early
intervention is, or is perceived to be, critical,
such as early intensive behavioral intervention
for pervasive developmental disorders. Lord
et al. (2005) noted that ethical concerns may
prohibit RCTs of promising interventions
when randomization to no treatment or treat-
ment of unknown effects is required by the
evaluation protocol. Additional factors that
reduce the internal validity of group compari-
son studies of psychosocial interventions
include that parental blindness to the inter-
vention allocated to their children is nigh on
impossible, diffusion of treatment through
parent support groups is likely, parents may
choose to withdraw their children from no
treatment or treatment as usual groups and
obtain the experimental intervention or an
approximation to it from outside the study,
and children with severe disorders will often
be subjected to multiple interventions of
unknown benefit, provided with varying
fidelity that may interact with one another to
produce uninterpretable beneficial, neutral, or
adverse effects (Smith & Antolovich, 2000).
Parents seek out information on their child’s
disorder and intervention recommendations
through the Internet or parent organizations,
sometimes accepting advice from profession-
als. Mackintosh, Meyers, and Goin-Kochel
(2006) received 498 responses to a Web-based
survey of parents with children with autism
spectrum disorders and found that the most
oft-cited sources of information were books
(88%), Web pages (86%), other parents (72%),
and autism newsletters (69%). Lagging
somewhat as sources of advice were profes-
sionals other than educators or physicians
(57%). Physicians, education professionals,
and family members were cited as sources of
information by fewer than half the parents who
responded to the survey.
Multiple fad treatments have been recom-
mended for childhood onset disorders, and
their adoption by families and some profes-
sionals, including psychologists, wastes
resources and time that could have been spent
profitably by employing ESTs (Jacobson,
Foxx, & Mulick, 2005). Although some of our
examples of supported and unsupported treat-
ments have related to children with autism
spectrum disorders (Romanczyk, Arnstein,
Soorya, & Gillis, 2003), the problem of treat-
ment selection uninformed by research affects
children with other difficulties and their fam-
ilies. Some interventions for children with
developmental and other disabilities have been
found ineffective or harmful (Jacobson et al.,
2005), and the same occurs for children with
ADHD (Waschbusch & Hill, 2003). (See also
Chapter 2 of this volume for a further discus-
sion of this point by Waschbusch, Fabiano, and
Pelham.) We believe that clinical psycholo-
gists working with young people ought to have
a professional ethical obligation to inform
themselves and others about empirically
unsupportable treatments as well as ESTs for
their clients.
LIMITATIONS OF THE EVIDENCE
BASE REGARDING EVIDENCE-BASED
PRACTICE IN PSYCHOLOGY
There is a relatively large body of evidence
concerning efficacy of treatments (Kazdin &
Weisz, 1998, 2003), but far less on treatment
utility, effectiveness, and efficiency. The utility of some efficacious treatments has been demonstrated, but further study is needed before general statements
can be made about the similarity or difference
between outcomes from controlled research
and clinical practice (Barlow et al., 1999;
Hunsley, 2007).
There is less evidence concerning dimen-
sions of clinical expertise and client charac-
teristics, culture, and preferences that are
relevant to beneficial treatment outcomes
(Kazdin, 2008). Employment of qualitative
research methods may help us to understand
clients’ experiences of psychological treat-
ments. The APA Task Force has suggested that
clinical expertise is made up of at least eight
components, including assessment and treat-
ment planning, delivery, interpersonal skills,
self-reflection, scientific skills in evaluating
research, awareness of individual and social
factors, the ability to seek additional resources
where necessary, and having a convincing
rationale for treatment strategies (APA Presi-
dential Task Force on Evidence-Based Prac-
tice, 2006). Qualitative methods may also provide evidence regarding clinical expertise.
Improvement of two-way communication
between psychologists who are primarily
researchers and those who identify more as
practitioners would assist dissemination of ESTs, foster collaboration in clinical utility studies of efficacious treatments, and facilitate research into clinical expertise and barriers to
the adoption of ESTs by clinicians (Kazdin,
2008).
Lilienfeld (2005) and McLennan, Wathen,
MacMillan, and Lavis (2006) recommended
further research on interventions in child
psychopathology on two fronts: (1) increased research on promising or new but plausible interventions; and (2) research to combat questionable but potentially harmful interventions. Attention is needed to increase
training of clinical psychologists in ESTs and
cessation of training to use treatments that
have been shown to be harmful, ineffective, or
less effective. Decreasing the demand and use
of treatments that have not been evaluated
scientifically or have been found ineffective or
harmful may be another strategy for helping
clinical psychologists orient more to treat-
ments that do work (i.e., ESTs).
Woody, Weisz, and McLean (2005) reported that APA-accredited clinical psychology training programs taught and supervised interns in fewer ESTs in 2003 than they had in 1993.
The training of clinical psychologists should
include sufficient study of research method-
ology so that career-long learning, and contri-
bution to research in practice concerning ESTs
can be enhanced (Bauer, 2007; Kazdin, 2008).
Considering the APA definition of EBPP,
trainees need supervised practice in incor-
porating clients’ preferences, values, and
cultural considerations as well as to develop
clinical expertise (Collins, Leffingwell, &
Belar, 2007). Further training for the university-based faculty who train psychologists also warrants study.
Psychologists in practice may wish or feel
forced to adapt to employing EBPP, but will-
ingness or compulsion to do so is not the same
as becoming immediately competent to use an
empirically supported treatment effectively.
The typical workshop format for introducing
new techniques is as ineffective for professionals (Gotham, 2006) as it is for direct care
staff (Reid, 2004). Skill-based training can be
effective when trainees practice an interven-
tion method in the natural clinical environment
with differential feedback from the trainer on
their performance of the skill. This should
occur after workshops that introduce the
rationale and method, and include in vivo or
videotaped demonstration of the skill by the
trainer (i.e., modeling). Frequent follow-up
observations by the trainer, again with
objective feedback to the trainee, can facilitate
maintenance of the newly acquired skills
(Reid, 2004). Gotham (2006) identified bar-
riers to implementation of EBPP, and provided
an example of how to implement an empiric-
ally supported treatment statewide despite
obstacles. McCabe (2004) wrote quite opti-
mistically for clinicians about the challenges
of EBPP, offering advice in a step-by-step
form to psychologists. Reorienting and training
clinicians is an area for further clinician-
researcher collaboration that requires emphasis.
We have not discussed one area of EBPP that has, to date, received less attention than evidence-based treatment: evidence-based assessment (Kazdin, 2005; Mash & Hunsley, 2005). An initial assessment
with clinical utility identifies the disorder or problem behavior and points the psychologist toward the range of ESTs available for this client. Evidence-based assessment includes
identifying reliable and valid ongoing meas-
ures that show the effects of an intervention
with the individual client (Kazdin, 2005).
Typically, empirically supported treatment
reviews identify treatments for DSM-type
classifications, the nosological approach;
however, an idiographic functional approach
to assessment may lead to better problem-EST
match (Sturmey, 2007).
It was mentioned earlier that some professional groups are concerned about the lack of
research on the outcomes of their assessment
and treatment methods. Because clinical and
other psychologists have extensive training in
research methods, we can assist other profes-
sions to assess the evidence for their interven-
tion methods. Interdisciplinary collaborations
also may help elucidate the interaction effects
of behavioral or psychological ESTs with
interventions with presently unknown effects
delivered by other professions.
CONCLUDING REMARKS
The issues concerning evidence-based clinical
practice of psychology for children and ado-
lescents are arguably even more complex than
for adult clients. All readers would welcome
the day when, for any referred psychological,
behavioral, or mental problem, psychologists
with clinical expertise were able to offer a wide
range of effective and safe interventions. Then
the young person, with more or less help from
those who care for them, could select the
effective treatment that suited their culture,
preferences, and resources. The literature reviewed for this chapter suggests that, generally speaking, the health-care professions, including clinical psychology, and the education professions are starting down the road toward the aspirational targets of the EBP movement. The lack of unanimous agreement within professions about whether the evidence-based practice movement is desirable and about what constitutes evidence of beneficial intervention is unsurprising; however, with more evidence for safe, acceptable,
and effective interventions such as contained
in the present volume and more education for
consumers, professionals, and the public,
eventually the naysayers and peddlers of
unsupportable treatments may find they have
no raison d’être (and also no income!).
REFERENCES
Addis, M. E., & Krasnow, A. D. (2000). A national survey
of practicing psychologists’ attitudes toward psycho-
therapy treatment manuals. Journal of Consulting and
Clinical Psychology, 68, 331–339.
Almost, D., & Rosenbaum, P. (1998). Effectiveness of
speech intervention for phonological disorders: A
randomized controlled trial. Developmental Medicine
and Child Neurology, 40, 319–325.
American Psychological Association. (2002). Criteria for
evaluating treatment guidelines. American Psycholo-
gist, 57, 1052–1059.
American Psychological Association. (2010). Ethical prin-
ciples of psychologists and code of conduct. Retrieved
from www.apa.org/ethics/code/index.aspx#
American Psychological Association Presidential Task
Force on Evidence-Based Practice. (2006). Evidence-
based practice in psychology. American Psychologist,
61, 271–285.
Barlow, D. H., & Hersen, M. (1984). Single case
experimental designs: Strategies for studying behav-
ior change (2nd ed.). New York, NY: Pergamon.
Barlow, D. H., Levitt, J. T., & Bufka, L. F. (1999). The
dissemination of empirically supported treatments: A
view to the future. Behaviour Research and Therapy,
37, 147–162.
Bauer, R. M. (2007). Evidence-based practice in psych-
ology: Implications for research and research training.
Journal of Clinical Psychology, 63, 683–694.
Becker, B. J. (2006). Failsafe N or file-drawer number.
In H. Rothstein, A. Sutton, & M. Borenstein
(Eds.), Publication bias in meta-analysis: Prevention,
assessment and adjustments (pp. 111–125). Hoboken,
NJ: Wiley.
Behavior Analyst Certification Board. (2004). Behavior
Analyst Certification Board guidelines for respon-
sible conduct for behavior analysts. Retrieved
from www.bacb.com/Downloadfiles/BACBguidelines/40809_BACB_Guidelines.pdf
Bennett, S. A., & Bennett, J. W. (2000). The process of
evidence based practice in occupational therapy:
Informing clinical decisions. Australian Occupational
Therapy Journal, 47, 171–180.
Bereiter, C., & Kurland, M. (1981). A constructive look at
Follow Through results. Interchange, 12, 1–22.
Bowling, A. (1997). Research methods in health: Inves-
tigating health and health services. Philadelphia, PA:
Open University Press.
Campbell, J. M. (2003). Efficacy of behavioral interven-
tions for reducing problem behavior in persons with
autism: A quantitative synthesis of single-subject
research. Research in Developmental Disabilities, 24,
120–138.
Carr, E. G., & Durand, V. M. (1985). Reducing
behavior problems through functional communication
training. Journal of Applied Behavior Analysis, 18,
111–126.
Chambless, D. L., & Hollon, S. D. (1998). Defining
empirically supported therapies. Journal of Consult-
ing and Clinical Psychology, 66, 7–18.
Chambless, D. L., & Ollendick, T. H. (2001). Empirically
supported psychological interventions: Controversies
and evidence. Annual Review of Psychology, 52,
685–716.
Collins, F. L., Leffingwell, T. R., & Belar, C. D. (2007).
Teaching evidence-based practice: Implications for
psychology. Journal of Clinical Psychology, 63,
657–670.
Concato, J., Shah, N., & Horwitz, R. (2000). Randomized,
controlled trials, observational studies, and the
hierarchy of research designs. New England Journal
of Medicine, 342, 1887–1892.
Cooper, J. O., Heron, T. E., & Heward, W. L. (2007).
Applied behavior analysis (2nd ed.). Upper Saddle
River, NJ: Pearson.
De Los Reyes, A., & Kazdin, A. E. (2008). When the
evidence says, “Yes, No, and Maybe So”: Attending
to and interpreting inconsistent findings among
evidence-based interventions. Current Directions in
Psychological Science, 17, 47–51.
DiCenso, A., Cullum, N., Ciliska, D., & Guyatt, G.
(2004). Introduction to evidence-based nursing. In
A. DiCenso, N. Cullum, D. Ciliska, & G. Guyatt
(Eds.), Evidence-based nursing: A guide to clinical
practice. Philadelphia, PA: Elsevier.
Enderby, P., & Emerson, J. (1995). Does speech and
language therapy work? Review of the literature.
London, England: Whurr.
Enderby, P., & Emerson, J. (1996). Speech and language
therapy: Does it work? British Medical Journal, 312,
1655–1658.
Evidence-Based Medicine Working Group. (1992). A
new approach to teaching the practice of medicine.
Journal of the American Medical Association, 268,
2420–2425.
Eysenck, H. (1994). Meta-analysis and its problems.
British Medical Journal, 309, 789–793.
Flather, M., Farkouh, M., Pogue, J., & Yusuf, S. (1997).
Strengths and limitations of meta-analysis: Larger
studies may be more reliable. Controlled Clinical
Trials, 18, 568–579.
Gersten, R. M. (1984). Follow Through revisited:
Reflections of the site variability issue. Educational
Evaluation and Policy Analysis, 6, 411–423.
Glass, G. (1976). Primary, secondary, and meta-analysis
of research. Educational Researcher, 5, 3–8.
Glogowska, M., Roulstone, S., Enderby, P., & Peters, T. J.
(2000). Randomised controlled trial of community
based speech and language therapy in preschool
children. British Medical Journal, 231, 923–927.
Gluud, L. (2006). Bias in clinical intervention research.
American Journal of Epidemiology, 163, 493–501.
Goisman, R. M., Warshaw, M. G., & Keller, M. B.
(1999). Psychosocial treatment prescriptions for gen-
eralized anxiety disorder, panic disorders and social
phobia, 1991–1996. American Journal of Psychiatry,
156, 1819–1821.
Goodheart, C. D., Kazdin, A. E., & Sternberg, R. J. (Eds.).
(2006). Evidence-based psychotherapy: Where prac-
tice and research meet. Washington, DC: American
Psychological Association.
Gotham, H. J. (2006). Advancing the implementation of
evidence-based practices into clinical practice: How
do we get there from here? Professional Psychology:
Research and Practice, 37, 606–613.
Greenhalgh, T. (2001). How to read a paper: The basics
of evidence based medicine. London, UK: BMJ
Publishing Group.
Grimes, D. A., & Schulz, K. F. (2002). Cohort studies:
Marching towards outcomes. The Lancet, 359,
341–345.
Guyatt, G. H., Haynes, R. B., Jaeschke, R. Z., Cook, D. J.,
Naylor, C. D., & Wilson, W. S. (2000). Users’ guides
to the medical literature: XXV. Evidence-based
medicine: Principles for applying the users’ guides to
patient care. Journal of the American Medical Asso-
ciation, 284, 1290–1296.
Hadorn, D., Baker, D., Hodges, J., & Hicks, N. (1996).
Rating the quality of evidence for clinical practice
guidelines. Journal of Clinical Epidemiology, 49,
749–754.
Hagopian, L. P., Fisher, W. W., Sullivan, M. T., Acquisto,
J., & LeBlanc, L. A. (1998). Effectiveness of func-
tional communication training with and without
extinction and punishment: A summary of 21 inpa-
tient cases. Journal of Applied Behavior Analysis, 31,
211–235.
Hawkins, R. P., & Dotson, V. A. (1975). Reliability
scores that delude: An Alice in Wonderland trip
through the misleading characteristics of interobserver
agreement scores in interval recording. In E. Ramp &
G. Semb (Eds.), Behavior analysis: Areas of research
and application (pp. 359–376). Englewood Cliffs, NJ:
Prentice Hall.
Hayes, S. C., Barlow, D. H., & Nelson-Gray, R. O.
(1999). The scientist-practitioner: Research and
accountability in the age of managed care (2nd ed.).
Boston, MA: Allyn & Bacon.
Hess, F. M. (2006). Accountability without angst? Public
opinion and No Child Left Behind. Harvard Educa-
tional Review, 76, 587–610.
Horner, R. H., Carr, E. G., Halle, J., McGee, G., Odom,
S., & Wolery, M. (2005). The use of single-subject
research to identify evidence-based practice in special
education. Exceptional Children, 71, 165–179.
Hunsberger, P. H. (2007). Reestablishing Clinical Psy-
chology’s subjective core. American Psychologist, 62,
614–615.
Hunsley, J. (2007). Addressing key challenges in
evidence-based practice in psychology. Professional
Psychology: Research and Practice, 38, 113–121.
Jacobson, J. W., Foxx, R. M., & Mulick, J. A. (Eds.).
(2005). Controversial therapies for developmental
disabilities: Fad, fashion, and science in professional
practice. Mahwah, NJ: Erlbaum.
Jacobson, J. W., Mulick, J. A., & Schwartz, A. A. (1995).
A history of facilitated communication: Science,
pseudoscience and antiscience. American Psycholo-
gist, 50, 750–765.
Jenicek, M. (2003). Foundations of evidence-based
medicine. New York, NY: Parthenon.
Hersen, M., & Sturmey, P. (2012). Handbook of evidence-based practice in clinical psychology, child and adolescent disorders. John Wiley & Sons, Incorporated. Copyright © 2012 John Wiley & Sons, Incorporated. All rights reserved.
Jensen, W. R., Clark, E., Kircher, J. C., & Kristjansson,
S. D. (2007). Statistical reform: Evidence-based
practice, meta-analyses, and single subject designs.
Psychology in the Schools, 44, 483–493.
Kazdin, A. E. (1982). Single-case research designs:
Methods for clinical and applied settings. New York,
NY: Oxford University Press.
Kazdin, A. E. (2005). Evidence-based assessment for
children and adolescents: Issues in measurement
development and clinical application. Journal
of Clinical Child and Adolescent Psychology, 34,
548–558.
Kazdin, A. E. (2008). Evidence-based treatment and
practice: New opportunities to bridge clinical research
and practice, enhance the knowledge base, and
improve patient care. American Psychologist, 63,
146–159.
Kazdin, A. E., & Weisz, J. R. (1998). Identifying and
developing empirically supported child and adoles-
cent treatments. Journal of Consulting and Clinical
Psychology, 66, 19–36.
Kazdin, A. E., & Weisz, J. R. (Eds.). (2003). Evidence-
based psychotherapies for children and adolescents.
New York, NY: Guilford Press.
Kratochwill, T. R. (2005). Evidence-based interventions
in school psychology: Thoughts on a thoughtful
commentary. School Psychology Quarterly, 17,
518–532.
Kratochwill, T. R., & Stoiber, K. C. (2002). Evidence-
based interventions in school psychology: Conceptual
foundations of the Procedural and Coding Manual of
Division 16 and the Society for the Study of School
Psychology Task Force. School Psychology Quar-
terly, 17, 341–389.
Levin, J. R. (2002). How to evaluate the evidence of
evidence-based interventions? School Psychology
Quarterly, 17, 483–492.
Lilienfeld, S. O. (2005). Scientifically unsupported and
supported interventions for child psychopathology: A
summary. Pediatrics, 115, 761–764.
Lohr, K. N., Eleazer, K., & Mauskopf, J. (1998). Health
policy issues and applications for evidence-based
medicine and clinical practice guidelines. Health
Policy, 46, 1–19.
Lonigan, C. J., Elbert, J. C., & Johnson, S. B. (1998).
Empirically supported psychosocial interventions for
children: An overview. Journal of Clinical Child
Psychology, 27, 138–145.
Lord, C., Wagner, A., Rogers, S., Szatmari, P., Aman, M.,
Charman, T., . . . Yoder, P. (2005). Challenges in
evaluating psychosocial interventions for Autistic
Spectrum Disorders. Journal of Autism and Devel-
opmental Disorders, 35, 695–708.
Mackintosh, V. H., Meyers, B. J., & Goin-Kochel, R. P.
(2006). Sources of information and support used by
parents of children with autism spectrum disorders.
Journal on Developmental Disabilities, 12, 41–51.
Mahon, J., Laupacis, A., Donner, A., & Wood, T. (1996).
Randomised study of n of 1 trials versus standard
practice. British Medical Journal, 312, 1069–1074.
Mash, E. J., & Hunsley, J. (2005). Evidence-based
assessment of child and adolescent disorders: Issues
and challenges. Journal of Clinical Child and Ado-
lescent Psychology, 34, 362–379.
Maxwell, R. (1992). Dimensions of quality revisited:
From thought to action. Quality in Health Care, 1,
171–177.
Maynard, A. (1997). Evidence-based medicine: An
incomplete method for informing treatment choices.
The Lancet, 349, 126–128.
Mays, N., & Pope, C. (Eds.). (1996). Qualitative research
in health care. London, England: BMJ Publishing
Group.
McCabe, O. L. (2004). Crossing the quality chasm in
behavioral health care: The role of evidence-based
practice. Professional Psychology: Research and
Practice, 35, 571–579.
McLennan, J. D., Wathen, C. N., MacMillan, H. L., &
Lavis, J. N. (2006). Research-practice gaps in child
mental health. Journal of the American Academy of
Child and Adolescent Psychiatry, 45, 658–665.
Melnyk, B. M., & Fineout-Overholt, E. (Eds.). (2005).
Evidence-based practice in nursing and healthcare: A
guide to best practice. Philadelphia, PA: Lippincott
Williams & Wilkins.
Najdowski, A. C., Wallace, M. D., Ellsworth, C. L.,
MacAleese, A. N., & Cleveland, J. M. (2008). Func-
tional analysis and treatment of precursor behavior.
Journal of Applied Behavior Analysis, 41, 97–105.
Norcross, J. C., Beutler, L. E., & Levant, R. F. (Eds.).
(2005). Evidence based practices in mental
health: Debate and dialogue on the fundamental
questions. Washington, DC: American Psychological
Association.
Paul, G. L. (1967). Strategy of outcome research in
psychotherapy. Journal of Consulting Psychology, 31,
109–118.
Reed, G. M., & Eisman, E. J. (2006). Uses and misuses of
evidence: Managed care, treatment guidelines, and
outcomes measurement in professional practice. In
C. D. Goodheart, A. E. Kazdin, & R. J. Sternberg
(Eds.), Evidence-based psychotherapy: Where prac-
tice and research meet (pp. 13–35). Washington, DC:
American Psychological Association.
Reed, G. M., McLaughlin, C. J., & Newman, R. (2002).
American Psychological Association policy in con-
text: The development and evaluation of guidelines
for professional practice. American Psychologist, 57,
1041–1047.
Reichow, B., Volkmar, F. R., & Cicchetti, D. V. (2008).
Development of the evaluative method for evaluating
and determining evidence-based practices in autism.
Journal of Autism and Developmental Disorders, 38,
1311–1319.
Reid, D. H. (2004). Training and supervising direct care
support personnel to carry out behavioral procedures.
In J. L. Matson, R. Laud, & M. Matson (Eds.),
Behavior modification for persons with developmental
disabilities: Treatments and supports, Volume 1
(pp. 101–129). Kingston, NY: NADD Press.
Reilly, S., Oates, J., & Douglas, J. (2004). Evidence based
practice in speech pathology. London, England: Whurr.
Reynolds, S. (2000). The anatomy of evidence-based
practice: Principles and methods. In L. Trinder &
S. Reynolds (Eds.), Evidence-based practice: A critical
appraisal (pp. 17–34). Oxford, England: Blackwell.
Ringwalt, C. L., Ennett, S., Vincus, A., Thorne, J.,
Rohrbach, L. A., & Simons-Rudolph, A. (2002). The
prevalence of effective substance use prevention
curricula in U.S. middle schools. Prevention Science,
3, 257–265.
Roberts, A. R., & Yeager, K. R. (2006). Foundations of
evidence based social work practice. New York, NY:
Oxford University Press.
Robertson, S. B., & Weismer, S. E. (1999). Effects of
treatment on linguistic and social skills in toddlers with
delayed language development. Journal of Speech,
Language, and Hearing Research, 42, 1234–1247.
Romanczyk, R. G., Arnstein, L., Soorya, L. V., & Gillis, J.
(2003). The myriad of controversial treatments
for autism: A critical evaluation of efficacy. In S. O.
Lilienfeld, S. J. Lynn, & J. M. Lohr (Eds.), Science
and pseudoscience in clinical psychology (pp.
363–395). New York, NY: Guilford Press.
Rosen, A., Proctor, E. E., Morrow-Howell, N., & Staudt, M.
(1995). Rationales for practice decisions: Variations
in knowledge use by decision task and social work
service. Research on Social Work Practice, 5, 501–523.
Rosenthal, R. (1984). Meta-analytic procedures for social
research. Beverly Hills, CA: Sage Publications.
Sackett, D. L., Straus, S. E., Richardson, W. S., Rosenberg,
W., & Haynes, R. B. (2000). Evidence-based medicine:
How to practice and teach EBM (2nd ed.). Edinburgh,
Scotland: Churchill Livingstone.
Shavelson, R. J., & Towne, L. (2002). Scientific research in
education. Washington, DC: National Academy Press.
Shernoff, E. S., Kratochwill, T. R., & Stoiber, K. C.
(2002). Evidence-based interventions in school
psychology: An illustration of task force coding cri-
teria using single-participant research design. School
Psychology Quarterly, 17, 390–422.
Simes, J. (1986). Publication bias: The case for an inter-
national registry of clinical trials. Journal of Clinical
Oncology, 4, 1529–1541.
Smith, T., & Antolovich, A. (2000). Parental perceptions
of supplemental interventions received by young
children with autism in intensive behavior analytic
treatment. Behavioral Interventions, 15, 83–97.
Stout, C. E., & Hayes, R. A. (2004). Evidence-based
practice: Methods, models, and tools for mental health
professionals. Hoboken, NJ: Wiley.
Straus, S. E., Richardson, W. S., Glasziou, P., & Haynes, R. B. (2005). Evidence-based medicine: How to practice and teach EBM (3rd ed.). Edinburgh, Scotland: Elsevier.
Sturmey, P. (Ed.). (2007). Functional assessment in
clinical treatment. Burlington, MA: Academic Press.
Thomas, G., & Pring, R. (2004). Evidence-based practice
in education. Maidenhead, England: McGraw-Hill
Education.
U.S. Department of Education. (2002). No Child Left Behind Act of 2001. Public Law 107-110. Retrieved from www.ed.gov/policy/elsec/leg/esea02/107-110.pdf
Wacholder, S., McLaughlin, J., Silverman, D., & Mandel, J.
(1992). Selection of controls in case-control studies.
American Journal of Epidemiology, 135, 1019–1028.
Waschbusch, D. A., & Hill, G. P. (2003). Empirically
supported, promising, and unsupported treatments for
children with attention-deficit/hyperactivity disorder.
In S. O. Lilienfeld, S. J. Lynn, & J. M. Lohr (Eds.),
Science and pseudoscience in clinical psychology
(pp. 333–362). New York, NY: Guilford Press.
Wilczynski, S. M., & Christian, L. (2008). The National
Standards Project: Promoting evidence-based practice
in autism spectrum disorders. In J. K. Luiselli, D. C.
Russo, W. P. Christian, & S. M. Wilczynski (Eds.),
Effective practices for children with Autism: Educa-
tional and behavior support interventions that work
(pp. 37–60). New York, NY: Oxford University Press.
Woody, S. R., Weisz, J., & McLean, C. (2005). Empir-
ically supported treatments: 10 years later. The Clin-
ical Psychologist, 58, 5–11.
Yeaton, W., & Wortman, P. (1993). On the reliability of
meta-analytic reviews: The role of intercoder agree-
ment. Evaluation Review, 17, 292–309.
Zlotnick, J. L., & Solt, B. E. (2006). The institute for the
advancement of social work research: Working to
increase our practice and policy evidence base.
Research on Social Work Practice, 16, 534–539.
4
Limitations to Evidence-Based Practice
THOMAS MAIER
The promotion of evidence-based medicine
(EBM) or, more generally, of evidence-based
practice (EBP) has strongly characterized
most medical disciplines over the past 15 to 20
years. Evidence-based medicine has become a
highly influential concept in clinical practice,
medical education, research, and health
policy. Although the evidence-based approach
has also been increasingly applied in related
fields such as psychology, education, social
work, or economics, it was and still is pre-
dominantly used in medicine and nursing.
Evidence-based practice is a general, nonspecific concept that aims to improve the way decision makers reach decisions and to make that process explicit. For this purpose it delineates
methods of how professionals should retrieve,
summarize, and evaluate the available empir-
ical evidence in order to identify the best
possible decision to be taken in a specific
situation. So EBP is, in a broader perspective,
a method to analyze and evaluate large
amounts of statistical and empirical infor-
mation to understand a particular case. It is
therefore not limited to specific areas of sci-
ence and is potentially applicable in any field
of science using statistical and empirical data.
Many authors cite Sackett, Rosenberg, Muir Gray, Haynes, and Richardson’s (1996) article entitled “Evidence-based medicine: What it is and what it isn’t” as the founding document of evidence-based practice. David L. Sackett (born
1934), an American-born Canadian clinical
epidemiologist, was professor at the Department
of Clinical Epidemiology and Biostatistics of
McMaster University Medical School of Ham-
ilton, Ontario, from 1967 to 1994. During that
time, he and his team developed and propagated
modern concepts of clinical epidemiology.
Sackett later moved to England, and from 1994
to 1999, he headed the National Health Service’s
newly founded Centre for Evidence-Based
Medicine at Oxford University. During that
time, he largely promoted EBM in Europe by
publishing articles and textbooks as well as
by giving numerous lectures and training
courses. David Sackett is seen by many as the
founding father of EBM as a proper discipline,
although he would not at all claim this position
for himself. In fact, Sackett promoted and elab-
orated concepts that have been described and
used by others before him; the origins of EBM reach back to much earlier times.
The foundations of clinical epidemiology
were already laid in the 19th century mainly by
French, German, and English physicians sys-
tematically studying the prevalence and course
of diseases and the effects of therapies.
Among the most important foundations of the EBM movement, the works and insights of the Scottish epidemiologist Archibald (Archie) L. Cochrane (1909–1988) certainly have to be
Hersen, M., & Sturmey, P. (2012). Handbook of evidence-based practice in clinical psychology, adult disorders. John Wiley & Sons, Incorporated.
mentioned. Cochrane, probably the true
founding father of modern clinical epidemiology, had long before insisted on sound epidemiological data, especially from randomized controlled trials (RCTs), as the
gold standard to improve medical practice
(Cochrane, 1972). In fact, the evaluation of
epidemiological data has always been one of
the main sources of information in modern
academic medicine, and many of the most
spectacular advances of medicine are direct
consequences of the application of basic epidemiological principles such as hygiene, aseptic surgery, vaccination, antibiotics, and the identification of cardiovascular and carcinogenic risk
factors. One of the most frequent objections
against the propagation of EBM is, “It’s nothing
new, doctors have done it all the time.”
Rangachari, for example, dubbed EBM “old French wine with a new Canadian label” (Rangachari, 1997, p. 280), alluding to the French 19th-century epidemiology pioneer Pierre Louis, who was an influential medical teacher in Europe and North America, and to David L. Sackett, the Canadian epidemiologist.
Even though the “conscientious, explicit and judicious use of the current best evidence in making decisions about the care of individual patients” (Sackett et al., 1996, p. 71) seems to be a perfectly reasonable and unassailable goal, EBM has been harshly criticized from the very beginning of its promotion (Berk & Miles Leigh, 1999; B. Cooper, 2003; Miles, Bentley, Polychronis, Grey, & Price, 1999; Norman, 1999; Williams & Garner, 2002). In 1995, for example, the editors of The Lancet chose to publish a rebuking editorial against EBM entitled “Evidence-based medicine, in its place” (The Lancet, 1995):
The voice of evidence-based medicine has
grown over the past 25 years or so from a
subversive whisper to a strident insistence that
it is improper to practise medicine of any
other kind. Revolutionaries notoriously exag-
gerate their claims; nonetheless, demands to
have evidence-based medicine hallowed as
the new orthodoxy have sometimes lacked
finesse and balance, and risked antagonising
doctors who would otherwise have taken
many of its principles to heart. The Lancet
applauds practice based on the best available
evidence–bringing critically appraised news
of such advances to the attention of clinicians
is part of what peer-reviewed medical journals
do–but we deplore attempts to foist evidence-based medicine on the profession as a discipline in itself. (p. 785)
This editorial elicited a fervid debate carried
on for months in the letter columns of The
Lancet. Indeed, there was a certain doggedness
on both sides at that time, astonishing neutral
observers and rendering the numerous critics
even more suspicious. The advocates of EBM, on their part, acted with great self-confidence and claimed no less than to establish a new discipline and to put clinical medicine on new foundations; journals, societies, conferences, and EBM training courses sprang up like mushrooms, and soon academic lectures and chairs emerged. However, this clamorous and pert appearance of
EBM repelled many. A somewhat dogmatic,
almost sectarian, tendency of the movement was
noticed with discontent, and even the deceased
patron saint of EBM, Archie Cochrane, had to
be invoked in order to push the zealots back:
How would Archie Cochrane view the
emerging scene? His contributions are
impressive, particularly to the development
of epidemiology as a medical science, but
would he be happy about all the activities
linked with his name? He was a freethinking,
iconoclastic individual with a healthy cynicism,
who would not accept dogma. He brought an
open sceptical approach to medical problems
and we think that he would be saddened
to find that his name now embodies a
new rigid medical orthodoxy while the real
impact of his many achievements might be overlooked. (Williams & Garner, 2002, p. 10)
THE DEMOCRATIZATION
OF KNOWLEDGE
How could such an emotional controversy
arise about the introduction of a scientific
method (Ghali, Saitz, Sargious, & Hershman,
1999)? Obviously, the propagation and refusal
of EBM have to be seen not only from a
rational scientific standpoint but also from
a sociological perspective (Miettinen, 1999;
Norman, 1999): The rise of the EBM move-
ment fundamentally reflects current develop-
ments in contemporary health care concerning
the allocation of information, knowledge,
authority, power, and finance (Berk & Miles
Leigh, 1999), a process becoming more and
more critical during the late 1980s and the
1990s. Medicine has, for quite some time, been
losing its prestige as an intangible, moral
institution. Its cost-value ratio is questioned
more and more and doctors are no longer
infallible authorities. We no longer trust doctors to know the solution to every problem;
they are supposed to prove and to justify
what they do and why they do it. These
developments in medicine parallel similar
tendencies in other social domains and
indicate general changes in Western soci-
eties’ self-conception. Today we are living
in a knowledge society, where knowledge
and information are democratized, available and
accessible to all. There is no retreat anymore
for secret expert knowledge and for hidden
esoteric wisdom. The hallmarks of our time
are free encyclopedic databases, open access,
the World Wide Web, and Google. In the
age of information, there are no limitations for
filing, storage, browsing, and scanning of
huge amounts of data; however, this requires
more and more expert knowledge to handle it.
So, paradoxically, EBM represents a new
specialized expertise that aims to democratize
or even to abolish detached expert knowledge.
The democratization of knowledge increas-
ingly questions the authority and self-
sufficiency of medical experts and has deeply
unsettled many doctors and medical scientists.
Of course, this struggle is not simply about
authority and truth; it is also about influence,
power, and money. For all the unsettled doc-
tors, EBM must have appeared like a guide for
the perplexed leading them out of insecurity
and doubt. Owing to its paradoxical nature,
EBM offers them a new spiritual home of
secluded expertise allowing doctors to regain
control over the debate and to reclaim
authority of interpretation from bold laymen.
For this purpose, EBM features and emphasizes the most valuable and most believable label of our time: science- or evidence-based. In many areas of contention, terms like
evidence-based or scientifically proven are
used for the purpose of putting opponents on
the defensive. Nobody is entitled to question a fact that is declared evidence-based or scientifically proven. By definition, these
labels are supposed to convey unquestioned
and axiomatic truth. It requires rather com-
plex and elaborate epistemological reasoning
to demonstrate how even true evidence-based
findings can at the same time be wrong,
misleading, and/or useless.
All these accounts and arguments apply in
particular to the disciplines of psychiatry and
clinical psychology, which have always had a
marginal position among the apparently
respectable disciplines of academic medicine.
Psychiatrists and psychologists have always felt particularly pressured to justify their actions and are constantly suspected of practicing quackery rather than rational science. It is therefore not surprising that, like other marginalized professionals such as general practitioners, psychiatrists and psychotherapists have made particularly great efforts in recent years to establish their disciplines as serious
matters of scholarly medicine by diligently
adopting the methods of EBM (Geddes &
Harrison, 1997; Gray & Pinson, 2003; Oakley-
Browne, 2001; Sharpe, Gill, Strain, & Mayou,
1996). Yet, there are also specific problems
limiting the applicability of EBP in these
disciplines.
EMPIRICISM AND REDUCTIONISM
In order to understand the role and function of
EBP within the scientific context, it may be
helpful to give a brief overview of the theo-
retical backgrounds of science in general.
What is science and how does it proceed?
Science can be seen as a potentially endless
human endeavor that aims to understand
and determine reality. Not only are physical
objects matters of science, but also immate-
rial phenomena like language, history, soci-
ety, politics, economics, human behavior,
thoughts, or emotions. Starting with the Greek
scientists in the ancient world, but progress-
ing more rapidly with the philosophers of
the Enlightenment, modern science adopted
defined rules of action and standards of rea-
soning that delineate science from non-
scientific knowledge such as pragmatics, art,
or religion. Unfortunately, notions like sci-
ence, scientific, or evidence are often wrongly
used in basically nonscientific contexts caus-
ing unnecessary confusion.
The heart and the starting point of any
positive science is empiricism, meaning the
systematic observation of phenomena. Scien-
tists of any kind must start their reasoning with
observations, possibly refined through sup-
portive devices or experimental arrangements.
Although positive science fundamentally
believes in the possibility of objective perception, it also knows perception’s inherent weaknesses of reliability and its potential sources of error.
Rather than have confidence in single obser-
vations, science trusts repeated and numer-
ous observations and statistical data. This
approach rules out idiosyncratic particularities
of single cases to gain the benefit of identifying
the common characteristics of general phe-
nomena (i.e., reductionism). This approach of
comprehending phenomena by analytically
observing and describing them has in fact
produced enormous advancements in many
fields of science, especially in technical dis-
ciplines; however, contrasting and confusing
gaps of knowledge prevail in other areas
such as causes of human behavior, mind–body
problems, or genome–environment inter-
action. Some areas of science are apparently
happier and more successful using the classical
approach of positive science, while other dis-
ciplines feel less comfortable with the reduc-
tionist way of analyzing problems. The less
successful areas of science are those studying
complex phenomena where idiosyncratic fea-
tures of single cases can make a difference, in
spite of perfect empirical evidence. This
applies clearly to medicine, but even more to
psychology, sociology, or economics. Medi-
cine, at least in its academic version, usually
places itself among respectable sciences,
claiming to meet and observe the rules of scientific
reasoning; however, this claim may be wishful
thinking and medicine is in fact a classical
example of a basically atheoretical, mainly
pragmatic undertaking pretending to be based
on sound science. Inevitably, contradictions arise when one tries to reconcile common medical practice with pure science.
COMPLEXITY
Perhaps the deeper reasons for these contradictions are not yet well enough understood, and perhaps they still give rise to unrealistic ideas among some scientists. A major source of misconception appears to be the confused ontological
perception of some objects of scientific inves-
tigation. What is a disease, a disorder, a diag-
nosis? What is human behavior? What are
emotions? Answering these questions in a
manner to provide a basis for scientific reason-
ing in a Popperian sense (see later) is far from
trivial. Complex objects of science, like human
behavior, medical diseases, or emotions, are in
fact not concrete, tangible things easily acces-
sible to experimental investigation. They are
emergent phenomena, hence they are not stable
material objects, but exist only as transitory,
nonlocal appearances fluctuating in time. They
continuously emerge out of indeterminable
complexity through repeated self-referencing
operations in complex systems (i.e., autopoietic
systems). Indeterminable complexity or deter-
ministic chaos means that a huge number of
mutually interacting parameters autopoietically
form a system, rendering any precise calculation
of the system’s future conditions impossible.
Each single element of the system perfectly
follows the physical rules of causality; however,
the system as a whole is nevertheless unpre-
dictable. Its fluctuations and oscillations can be
described only probabilistically. In order to
obtain reasonable and useful information about
a system, many scientific disciplines have
elaborated probabilistic methods of approach-
ing their objects of interest. Thermody-
namics, meteorology, electroencephalography,
epidemiology, and macroeconomics are only a
few such examples. Most structures in bio-
logical, social, and psychological reality can be
conceived as emergent phenomena in this sense.
Just as the temperature of an object is not a
quality of the single molecules forming the
object—a single molecule has no temperature—but a statistical description of a huge number of
molecules, human behavior cannot be deter-
mined through the description of composing
elements producing the phenomenon—for
example, neurons—even if these elements are
necessary and indispensable preconditions for
the emergence of the phenomenon. The char-
acteristics of the whole cannot be determined by
the description of its parts. When the precise
conditions of complex systems turn out to be
incalculable, the traditional reaction of positive
science is to intensify analytical efforts and to
compile more information about the compon-
ents forming the system. This approach allows
scientists to constantly increase their knowledge
about the system in question without ever
reaching a final understanding and a complete
determination of the function of the system. This
is exactly what happens currently in neurosci-
ences. Reductionist approaches have their
inherent limitations when it comes to the
understanding of complex systems.
A similar problem linked to complexity that
is particularly important is the assumed com-
parability of similar cases. In order to under-
stand an individual situation, science routinely
compares defined situations to similar situ-
ations or, even better, to a large number of
similar situations. Through the pooling of large
numbers of comparable cases, interfering
individual differences are statistically elimi-
nated, and only the common ground appears.
The conceptual assumption behind this pro-
cedure is that similar—but still not identical—
cases will evolve similarly under identical
conditions. One of the most important insights
from the study of complex phenomena is that
in complex systems very small differences in
initial conditions may lead to completely dif-
ferent outcomes after a short time—the so-
called butterfly effect. This insight is well
known to natural scientists; however, clinical
epidemiologists do not seem to be completely
aware of the consequences of the butterfly
effect for their area of research.
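The butterfly effect described here can be made concrete with a small numerical sketch (an illustration added for this discussion, not part of the chapter's argument), using the logistic map, a textbook toy model of deterministic chaos; the starting values are arbitrary choices for demonstration:

```python
# The logistic map x_{n+1} = r * x_n * (1 - x_n) is fully deterministic,
# yet for r = 4.0 it is chaotic: two trajectories that start almost
# identically become completely uncorrelated after a few dozen steps.

def logistic_trajectory(x0, r=4.0, steps=50):
    """Iterate the logistic map from x0 and return the whole trajectory."""
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000000)
b = logistic_trajectory(0.200000001)  # initial difference of only 1e-9

early_gap = abs(a[1] - b[1])  # after one step: still on the order of 1e-9
late_gap = max(abs(x - y) for x, y in zip(a[30:], b[30:]))  # order of the whole [0, 1] range

print(early_gap, late_gap)
```

Every step obeys a simple deterministic rule, yet a difference in the ninth decimal of the starting value eventually dominates the outcome, which is exactly why similar, but not identical, cases need not evolve similarly.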
FROM KARL POPPER TO
THOMAS S. KUHN
Based on epistemological considerations, the
Anglo-Austrian philosopher Karl Popper
(1902–1994) demonstrated in the 1930s the
limitations of logical empiricism. He reaso-
ned that general theories drawn from empiri-
cal observations can never be proven to be
true. So, all theories must remain tentative
knowledge, waiting to be falsified by contrary
observations. In fact, Popper conceived the
project of science as a succession of theories to
be falsified sooner or later and to be replaced
by new theories. This continuous succession of
new scientific theories is the result of natural
selection of ideas through the advancement of
science. According to Popper, any scientific
theory must be formulated in a way to render it
potentially falsifiable through empirical test-
ing. Otherwise, the theory is not scientific:
It may be metaphysical, religious, or spiritual
instead. This requires that a theory must be
formulated in terms of clearly defined notions
and measurable elements.
Popper’s assertions were later qualified, and shown to be less absolute, by the American philosopher of science Thomas S. Kuhn (1922–1996).
Kuhn, originally a physicist, pointed out that in
real science any propagated theory could be
falsified immediately by contrary observations
because contradicting observations are always
present; however, science usually ignores or
even suppresses observations dissenting with
the prevailing theory in order to maintain the
accepted theory. Kuhn calls the dissenting
observations anomalies, which are—according
to him—always obvious and visible to all, but
nevertheless screened out of perception in order
to maintain the ruling paradigm. In Kuhn’s
view, science will never come to an end and
there will never be a final understanding of
nature. No theory will ever be able to integrate
and explain consistently all the observations
drawn from nature. At this point, even the
fundamental limitations to logical scientific
reasoning demonstrated by Gödel’s incom-
pleteness theorems become recognizable (cf.
also Sleigh, 1995). Based on his considerations,
Kuhn clear-sightedly identified science as a
social system, rather than a strictly logical and
rational undertaking. Science, as a social phe-
nomenon, functions according to principles of
Gestalt psychology. It sees the things it wants to
see and overlooks the things that do not fit.
In his chief work The Structure of Scientific
Revolutions, Kuhn (1962) gives several
examples from the history of science support-
ing this interpretation. It is in fact amazing to
see how difficult it was for most important
scientific breakthroughs to become acknowl-
edged by the contemporary academic estab-
lishment. Kuhn uses the notion normal science
to characterize the established academic sci-
ence and emphasizes the self-referencing
nature of its operating mode. Academic teach-
ers teach students what the teachers believe is
true. Students have to learn what they are taught
by their teachers if they want to pass their
exams and get their degrees. Research is mainly
repeating and retesting what is already known
and accepted. Journals, edited and peer-
reviewed by academic teachers, publish what
conforms with academic teachers’ ideas. Soci-
eties and associations—headed by the same
academic teachers—ensure the purity of doc-
trine by sponsoring those who confirm the
prevailing paradigms. Dissenting opinions are
unwelcome. Based on Kuhn’s view of normal
science, EBP and EBM can be identified as
classical manifestations of normal science.
EBP helps to ensure the implementation of
mainstream knowledge by declaring most valid
whatever is best evaluated. Usually the cur-
rently established practices are endorsed by the
best and most complete empirical evidence;
dissenting ideas will hardly be supported by
good evidence, even if these ideas are right.
Since EBP instructs its adherents to evaluate the
available evidence on the basis of numerical
rules of epidemiology, arguments such as plausi-
bility, logical consistency, or novelty are of little
relevance.
AN EXAMPLE FROM RECENT
HISTORY OF CLINICAL MEDICINE
When in 1982 the Australian physicians
Barry Marshall and Robin Warren dis-
covered Helicobacter pylori in the stomachs
of patients with peptic ulcers, their findings
were completely ignored and neglected by
the medical establishment of that time. The
idea that peptic ulcers are provoked by an
infectious agent conflicted with the prevail-
ing paradigm of academic gastroenterology,
which conceptualized peptic ulcers as a
consequence of stress and lifestyle. Although
there had been numerous previous reports of
helicobacteria in gastric mucosa, all these
findings were completely ignored because
they conflicted with the prevailing paradigm.
As a consequence Marshall and Warren’s
discovery was ignored for years because
it fundamentally challenged current scien-
tific opinion. They were outcast by the
scientific community, and only 10 years later
their ideas slowly started to convince more
and more clinicians. Now, 25 years later, it
is common basic clinical knowledge that
Helicobacter pylori is one of the major causes
of peptic ulcers, and eradication therapy is
the accepted and rational therapy for gastric
ulcers. Finally, in 2005, Barry Marshall and
Robin Warren were awarded the Nobel Prize for
their discovery (Parsonnet, 2005).
BENEFITS AND RISKS OF
EVIDENCE-BASED PRACTICE
The true benefits of EBP for patients and
society in terms of outcomes and costs have
not been proven yet—at least not through
sound empirical evidence (B. Cooper, 2003;
Geddes & Harrison, 1997). Nevertheless, there
is no doubt that the method has a beneficial
and useful potential. Many achievements of
EBP are indisputable and undisputed, hence
they are evident.
Owing to the spread of methodical skills in
retrieving and evaluating the available epi-
demiological evidence, it has become much
harder to apply any kind of obscure or
idiosyncratic practices. The experts’ commu-
nity, as well as the customers and the general
public, are much more critical toward the
purported effects of treatments and ask for
sound empirical evidence of effectiveness
and safety. It is increasingly important not
only to know the best available treatment, but
also to prove it. EBP is therefore a helpful
instrument for doctors and therapists to justify
and legitimize their practices to insurers, the
judiciary, policy makers, and society.
Furthermore, individual patients might be
less at risk of wrong or harmful treatment arising
from scientific misapprehension. Of course, common
malpractice owing to incompetence, negligence, or
malice will never be eliminated, not even
by the total implementation of EBP; however,
treatment errors committed by diligent and
virtuous doctors are minimized through careful
adherence to rational guidelines.
In general, clinical decision-making paths
have become more comprehensible and
rational, probably also due to the spread of
EBP. As medicine is in fact not a thoroughly
scientific matter (Ghali et al., 1999), continuous
efforts are needed to enhance and renew
rationality. EBP contributes to this task
and helps clinicians to maintain rationality in a
job where inscrutable complexity is daily
business. In current medical education, the
algorithms of EBP are now instilled into stu-
dents as a matter of course. Seen from that
perspective, EBP is also an instrument of dis-
cipline and education, for it compels medical
students and doctors to reflect continuously and
scientifically on all their opinions and decisions
(Norman, 1999). Today EBP has a great
impact on the education and training of future
doctors, and it thereby enhances the uniformity
and transparency of medical doctrine. This
international alignment of medical education
with the principles of EBP will, in the long run,
allow for better comparability of medical
practice all over the world. This is an important
precondition for the planning and coordination
of research activities. Thus, the circle of nor-
mal science is perfectly closed through the
widespread implementation of EBP.
GENERAL LIMITATIONS TO
EVIDENCE-BASED PRACTICE
It has been remarked, not without reason,
that the EBP movement itself has adopted
features of dogmatic authority (B. Cooper,
2003; Geddes et al., 1996; Miles et al., 1999).
This appears particularly ironic, because EBP
explicitly aims to fight any kind of orthodox
doctrine. The ferocity of some EBP adherents
may not necessarily hint at conceptual weak-
nesses of the method; rather, it is more likely
a sign of an iconoclastic or even patricidal
tendency inherent to EBP. Young, diligent
scholars, even students, possibly without any
practical experience, are now entitled to criti-
cize and rectify clinical authorities (Norman,
1999). This kind of insurgence must evoke
resistance from authorities. If the acceptance
of EBP among clinicians is to be enhanced,
it is advisable that the method be propagated
not only by diligent theoreticians, but mainly
by experienced practitioners.
One of the first and most important argu-
ments against EBP is reductionism (see earl-
ier, Welsby, 1999). Complex and maybe
fundamentally diverse clinical situations of
individual patients have to be condensed and
aggregated to generalized questions in order to
retrieve empirical statistical evidence. Import-
ant specific information about the individual
cases is inevitably lost owing to this general-
ization. The usefulness of the retrieved evidence
is therefore inevitably diluted to a very general
and vague level. Of course, there are some fre-
quently used standard interventions, which are
really based upon good empirical evidence
(Geddes et al., 1996).
EXAMPLES FROM CLINICAL
MEDICINE
Scabies, a parasitic infection of the skin, is
an important public health problem, mainly
in resource-poor countries. For the treatment
of the disease, two treatment options are
recommended: topical permethrin and oral
ivermectin. Both treatments are known to be
effective and are usually well tolerated.
The Cochrane Review concluded from the
available empirical evidence that topical
permethrin appears to be the most effective
treatment of scabies (Strong & Johnstone,
2007). This recommendation can be found
in up-to-date medical textbooks and is
familiar to any well-trained doctor.
Acute otitis media in children is one of the
most common diseases, one of the main causes
for parents to consult a pediatrician, and a
frequent motive for the prescription of antibi-
otics, even though spontaneous recovery is
the usual outcome. Systematic reviews have
shown that the role of antibiotic drugs for the
course of the disease is marginal, and there is
no consensus among experts about the
identification of subgroups who would poten-
tially profit from antibiotics. In clinical prac-
tice, in spite of lacking evidence of its benefit,
the frequent prescription of antibiotic drugs is
mainly the consequence of parents’ pressure
and doctors’ insecurity. A recent meta-analy-
sis (Rovers et al., 2006) found that children
younger than 2 years of age with bilateral acute
otitis media and those with otorrhea benefited
to some extent from antibiotic treatment;
however, even for these two particular condi-
tions, differences were moderate: After 3–7
days, 30% of the children treated with antibi-
otics still had pain, fever, or both, while in the
control group the corresponding proportion
was 55%. So, the available evidence to guide a
clinician when treating a child with acute otitis
media is not decisive, and the decision
will mostly depend on soft factors like parents'
preferences or practical and economic
considerations.
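The 30% versus 55% figures quoted from Rovers et al. (2006) can be turned into the standard summary measures clinicians use. A minimal arithmetic sketch, taking only those two proportions from the text:

```python
# Hedged arithmetic sketch using only the proportions cited above:
# after 3-7 days, 30% of antibiotic-treated children vs. 55% of controls
# still had pain, fever, or both.

def absolute_risk_reduction(p_control, p_treated):
    """Difference in event rates between control and treated groups."""
    return p_control - p_treated

def number_needed_to_treat(p_control, p_treated):
    """How many children must be treated for one to benefit."""
    return 1.0 / absolute_risk_reduction(p_control, p_treated)

arr = absolute_risk_reduction(0.55, 0.30)
nnt = number_needed_to_treat(0.55, 0.30)
print(f"absolute risk reduction: {arr:.0%}")  # 25%
print(f"number needed to treat:  {nnt:.0f}")  # 4
```

A number needed to treat of 4 looks respectable in isolation, which underlines the authors' point: the statistics alone do not settle whether this moderate benefit justifies routine prescription.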
Evidently, clinicians choosing these inter-
ventions do not really need to apply the algo-
rithms of EBP to make their decisions. They
simply administer what they have learned in
their regular clinical training. The opponents
of EBP rightly argue that the real problems in
clinical practice arise from complex, multi-
morbid patients presenting with several ill-
nesses and other factors that have to be taken
into account by the treating clinician. In order
to manage such cases successfully there is
usually no specific statistical evidence avail-
able to rely on. Instead, clinicians have to put
together evidence covering some aspects of the
actual case and hope that the resulting treat-
ment will still work even if it is not really
designed and tested for that particular situ-
ation. Good statistical evidence meeting the
highest standards of EBP is almost exclusively
derived from ideal monomorbid patients, who
are rarely seen in real, everyday practice
(Williams & Garner, 2002). It is not clear at
all—and far from evidence-based—whether
evidence from ideal cases can be transferred to
more complex cases without substantial loss of
validity.
Another argument against EBP points to
an epistemological problem. Because EBP
operates retrospectively by evaluating what
was done in the past, it cannot directly con-
tribute to developing new strategies and to
finding new therapies. EBP helps to
consolidate well-known therapies, but cannot
guide researchers toward scientific inno-
vations. No scientific breakthrough will ever
be made owing to EBP. On the contrary, if all
clinicians strictly followed recommendations
drawn from available retrospective evidence
and never dared to try something different,
science would stagnate in fruitless self-
reference. There is a basically conservative and
backward tendency inherent to the method.
Although it cannot exactly be called anti-
scientific on that account (B. Cooper, 2003;
Miles et al., 1999), EBP is a classical phe-
nomenon of normal science (Kuhn, 1962). It
will not itself be the source of fundamental
new insights.
Finally, there is an external problem with
EBP, which is probably most disturbing of all:
Production and compilation of the evidence
available to clinicians is a sensitive process,
exposed to various nonscientific influences
(Miettinen, 1999). Selection of areas of
research is based more and more on economic
interests. Large, sound, and therefore scien-
tifically significant epidemiologic studies are
extremely complex and expensive. They can
be accomplished only with the support of
financially potent sponsors. Compared with
public bodies or institutions, private com-
panies are usually faster and more flexible in
investing large amounts of money in
medical research. So, for many ambitious sci-
entists keen on collecting publishable findings,
it is highly appealing to collaborate with
commercial sponsors. This has a significant
influence on the selection of diseases and
treatments being evaluated. The resulting body
of evidence is necessarily highly unbalanced
because mainly diseases and interventions
promising large profits are well evaluated.
For this reason, more money is probably put
into trials on erectile dysfunction, baldness, or
dysmenorrhea than on malaria or on typhoid
fever. So, even guidelines based on empirical
evidence—considered to be the ultimate gold
standard of clinical medicine—turn out to be
arbitrary and susceptible to economic, po-
litical, and dogmatic arguments (Berk & Miles
Leigh, 1999). So, EBP’s goal of replacing
opinion and tendency with knowledge is in
danger of being missed if the relativity of the
available evidence goes unrecognized. The
uncritical promotion of EBP opens a clandes-
tine gateway to those who have interests in
controlling the contents of medical debates and
have the financial means to do so. Biasing
clinical decisions in times of EBP is probably
no longer possible by false or absent evidence;
however, the selection of what is researched
in an EBP-compatible manner and what is
published may result in biased clinical deci-
sions (Miettinen, 1999). One of the most
effective treatment options in many clinical
situations—watchful waiting—is notoriously
under-researched because there is no com-
mercial or academic interest linked to that
treatment option. Unfortunately, there will
never be enough time, money, and workforce
to produce perfect statistical evidence for
all useful clinical procedures. So, even in
the very distant future, clinicians will still
apply many of their probably effective inter-
ventions without having evidence about
their efficacy and effectiveness; thus, EBP is a
technique of significant but limited utility
(Green & Britten, 1998; The Lancet, 1995;
Sackett et al., 1996).
EXAMPLE FROM CLINICAL
MEDICINE
Lumbar back pain is one of the most frequent
health problems in Western countries. About
5% of all low back problems are caused by
prolapsed lumbar discs. The treatment is
mainly nonsurgical and 90% of acute attacks
of nerve root pain (sciatica) settle without
surgical intervention; however, different
forms of surgical treatments have been
developed and disseminated. Usually these
methods are considered for more rapid relief
in patients whose recovery is unacceptably
slow. The Cochrane reviewers criticize that
“despite the critical importance of knowing
whether surgery is beneficial for disc pro-
lapse, only four trials have directly compared
discectomy with conservative management
and these give suggestive rather than con-
clusive results” (Gibson & Waddell, 2007,
p. 1). They concluded:
Surgical discectomy for carefully selected
patients with sciatica due to lumbar disc
prolapse provides faster relief from the
acute attack than conservative management,
although any positive or negative effects on
the lifetime natural history of the underlying
disc disease are still unclear. (p. 2)
Surgical treatments of low back pain hold an
enormous commercial potential due to
the worldwide frequency of the problem. It
is therefore hardly surprising that only a few
trials compare conservative treatment
with surgery.
SPECIFIC LIMITATIONS TO EBP IN
PSYCHIATRY, PSYCHOTHERAPY,
AND CLINICAL PSYCHOLOGY
In psychiatry and psychotherapy, there is an
ambivalent attitude toward EBP. Attempting to
increase their scientific respectability, some
psychiatrists and clinical psychologists zeal-
ously adopted EBP algorithms (Geddes &
Harrison, 1997; Gray & Pinson, 2003; Oakley-
Browne, 2001; Sharpe et al., 1996) and started
evidence-based psychiatry. Others remain
hesitant or doubtful about the usefulness of EBP
in their field, and several authors have addressed
different critical aspects of evidence-based
psychiatry (Berk & Miles Leigh, 1999; Bilsker,
1996; Brendel, 2003; Geddes & Harrison, 1997;
Goldner & Bilsker, 1995; Harari, 2001; Hotopf,
Churchill, & Lewis, 1999; Lawrie, Scott, &
Sharpe, 2000; Seeman, 2001; Welsby, 1999;
Williams & Garner, 2002) with all of them
fundamentally concerning practical and scien-
tific particularities of psychiatry and clinical
psychology. Next, we shall try to clarify these
arguments.
The evidence-based approach to individual
cases is critically dependent on the validity of
diagnoses. This is an axiomatic assumption
of EBP, which is rarely analysed or scrutinized
in detail. If in a concrete case no diagnosis
could be attributed, the case would not be
amenable to EBP, and no evidence could
support decisions in such a case. If the diag-
nosis is wrong, or—even more intricate—if
cases labeled with a specific diagnosis are
still not homogeneous enough to be comparable
in relevant aspects, EBP will provide useless
results.
EXAMPLE FROM PSYCHIATRY
According to DSM-IV, eating disorders are
classified in different categories: anorexia
nervosa (AN), bulimia nervosa (BN), binge
eating disorder (BED), and eating disorder
not otherwise specified (EDNOS). These
categories are clinically quite distinct and
diagnostic criteria are clear and easily
applicable. In spite of the phenomenological
diversity of the disease patterns, there is a
close relationship between the different forms
of eating disorders. In clinical practice,
switches between different diagnoses and
temporary remissions and relapses are fre-
quent. In the course of time, patients may
change their disease pattern several times:
At times they may not meet the criteria for
a diagnosis anymore, although they are not
completely symptom free, and later they may
relapse to a full-blown eating disorder again
or may be classified as having EDNOS.
Corresponding to these clinical impressions,
longitudinal studies demonstrate that the sta-
bility of eating disorder diagnoses over time is
low (Fichter & Quadflieg, 2007; Grilo et al.,
2007; Milos, Spindler, Schnyder, & Fairburn,
2005). Based on systematic evaluation of
the available evidence, however, treatment
guidelines give specific recommendations for
the different conditions (National Institute
for Clinical Excellence [NICE], 2004). For
patients with AN, psychological treatment on
an outpatient basis is recommended. The
treatment should be offered by “a service that
is competent in giving that treatment and in
assessing the physical risk of people with
eating disorders” (p. 60). For patients with
BN, the NICE guideline proposes as a pos-
sible first step to follow an evidence-based
self-help program. As an alternative, a trial
with an antidepressant drug is recommended,
followed by cognitive behavior therapy for
bulimia nervosa. In the absence of evidence to
guide the treatment of EDNOS, the NICE
guideline recommends pragmatically that
“the clinician considers following the guid-
ance on the treatment of the eating problem
that most closely resembles the individual
patient’s eating disorder” (p. 60). So even
though specific diagnoses of eating disorders
are not stable and a patient with AN might be
diagnosed with BN a few months later,
treatment recommendations vary consider-
ably for the two conditions. It becomes
obvious that different treatment recommen-
dations for seemingly different conditions
reflect accidental differences in the
availability of empirical evidence rather than real
differences in the response of certain condi-
tions to specific treatments. Hence, the guid-
ance offered by the guideline is basically a
rather unstable crutch, and of course, cogni-
tive behavior therapy or an evidence-based
self-help program might be just as beneficial
in AN or in EDNOS as it is in BN, even
though nobody has yet compiled the statis-
tical evidence to prove this.
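The diagnostic instability described above can be sketched as a toy Markov model. All transition probabilities below are hypothetical, chosen only for illustration; they are not estimates from Fichter and Quadflieg (2007) or the other longitudinal studies cited.

```python
import random

# Toy Markov model of year-to-year diagnostic switching in eating
# disorders. ALL transition probabilities are HYPOTHETICAL placeholders,
# not values taken from the cited studies.
TRANSITIONS = {
    "AN":        {"AN": 0.60, "BN": 0.20, "EDNOS": 0.15, "remission": 0.05},
    "BN":        {"AN": 0.10, "BN": 0.60, "EDNOS": 0.20, "remission": 0.10},
    "EDNOS":     {"AN": 0.10, "BN": 0.20, "EDNOS": 0.50, "remission": 0.20},
    "remission": {"AN": 0.05, "BN": 0.10, "EDNOS": 0.15, "remission": 0.70},
}

def simulate(start="AN", years=5, seed=0):
    """Return one simulated diagnostic history, reassessed once a year."""
    rng = random.Random(seed)
    state, history = start, [start]
    for _ in range(years):
        labels = list(TRANSITIONS[state])
        weights = [TRANSITIONS[state][label] for label in labels]
        state = rng.choices(labels, weights=weights)[0]
        history.append(state)
    return history

# Fraction of 1,000 simulated patients who still carry their initial
# AN diagnosis after five annual reassessments.
histories = [simulate(seed=s) for s in range(1000)]
stable = sum(h[-1] == h[0] for h in histories) / len(histories)
print(f"still diagnosed AN after 5 years: {stable:.0%}")
```

Even with generous self-transition probabilities, most simulated patients end up under a different label than the one that determined their initial, guideline-driven treatment.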
What does the validity of a diagnosis mean?
The question concerns epistemological issues
and requires a closer look at the nature of
medical diagnoses with special regard to psy-
chiatric diagnoses. R. Cooper (2004) questioned
whether mental disorders as defined in diagnostic
manuals are natural kinds. In her thoughtful
paper, the author concluded that diagnostic
entities are in fact theoretical conceptions,
describing complex cognitive, behavioral, and
emotional processes (R. Cooper, 2004; Harari,
2001). Diagnostic categories are based upon
observations, yet they are strongly influenced
by theoretical, social, and even economic
factors. The ontological structure of psychi-
atric diagnoses is therefore not one of natural
kinds. They are not something absolutely
existing that can be observed independently.
Rather they are comprehensive theoretical
definitions serving as tools for communication
and scientific observation. Kendell and
Jablensky (2003) have also recently addressed
the issue of diagnostic entities and concluded
that the validity of psychiatric diagnoses is
limited. They analysed whether diagnostic
entities are sufficiently separable from each
other and from normality by zones of rarity.
They found that this was not the
case; rather, psychiatric
diagnoses often overlap (R. Cooper, 2004;
Welsby, 1999), shift over time within the same
patient, and several similar diagnoses can be
present in the same patient at the same time
(comorbidity). Not surprisingly, diagnosis
alone is a poor predictor of outcome (Williams
& Garner, 2002). Acknowledging this hazi-
ness of diagnoses, one realizes the problems
that arise when trying to match individual cases to
empirical evidence. When even the presence
of a correctly assessed diagnosis does not
assure comparability to other cases with the
same diagnosis, empirical evidence about
mental disorders is highly questionable
(Harari, 2001). Of course, limited validity does
not imply complete absence of validity, and
empirical evidence on mental disorders is
still useful to some extent; however, insight
into these limitations is important; it shows
that psychiatric diagnoses represent
phenomenological descriptions
rather than natural kinds. Several authors
have treated the same issue when writing
about the complexity of cases, the problem of
subsyndromal cases, and of single cases versus
statistical evidence (Harari, 2001; Welsby,
1999; Williams & Garner, 2002).
NONLINEAR DYNAMICS IN THE
COURSE OF DISEASES
It might be fruitful to look at evidence-based
psychiatry from another perspective and to
address the issues of complexity and nonlinear
dynamics. With regard to their physical and
mental functioning, humans can be conceptu-
alized as systems of high complexity
(Luhmann, 1995). This means that they cannot
be determined precisely, but only in a prob-
abilistic manner; however, probabilistic
determination is sufficient for most purposes in
observable reality. Human life consists fun-
damentally in dealing with probabilities.
Social systems and human communication are
naturally designed to manage complexity more
or less successfully. Medicine itself is a social
system (Luhmann, 1995) trying to handle the
effects of complexity (Harari, 2001), for
example, by providing probabilistic algo-
rithms for treatments of diseases. In most
situations, medicine can ignore the particular
effects emerging from the complex nonlinear
structure of its objects, although such effects
are always present. Only sometimes do these
effects become obvious and irritating, as for
example in fluctuations of symptoms in
chronic diseases, variations in response to
treatment, unexpected courses in chronic dis-
eases, and so on. Such phenomena can be seen
as manifestations of the butterfly effect (see
earlier). This insight deeply questions the core
principle of EBP: the assumption that it is rational
to treat similar cases in the same manner
because similarity in the initial conditions will
predict similar outcomes under identical
treatment. The uncertainty of this assumption
is particularly critical in psychiatry and psy-
chotherapy. In these fields, similar appearance
merely papers over untraceable differences,
and exactly these differences may crucially influ-
ence the outcome.
Addressing such problems is daily busi-
ness for psychiatrists and psychotherapists,
so their disciplines have developed special
approaches. Diagnostic and therapeutic pro-
cedures in these disciplines are much less
focused on critical momentary decisions, but
more on gradual, iterative procedures. Psy-
chiatric treatments and even more psycho-
therapy are self-referencing processes, where
assessments and decisions are constantly re-
evaluated. By contrast, EBP focuses primarily on
decision making as the crucial moment of good
medical practice. One gets the impression that
EBM clinicians are constantly making critical
decisions, and after having made the right
decision, the case is solved. Maybe it is
because of this misfit between the proposals of
the method and real daily practice that many
psychiatrists are not greatly attracted to EBP.
EXAMPLE FROM PSYCHIATRY
The diagnosis of posttraumatic stress dis-
order (PTSD) was first introduced in the
third edition of the Diagnostic and Statis-
tical Manual of Mental Disorders (DSM-III)
in 1980. Before that time, traumatized
individuals were either diagnosed with dif-
ferent nonspecific diagnoses (e.g., anxiety
disorders, depression, neurasthenia) or not
declared ill at all. Astonishingly, the
newly discovered entity appeared to be a
clinically distinct disorder and the corre-
sponding symptoms (re-experiencing, avoid-
ance, hyperarousal) were quite characteristic
and easily identifiable. Within a short
time after its invention (Summerfield, 2001),
PTSD became a very popular disorder;
clinicians and even patients loved the new
diagnosis (Andreasen, 1995). The key point
for the success of the new diagnosis was that
it is explicitly based on the assumption of an
external etiology; that is, the traumatic
experience. This conception makes PTSD so
appealing, for the attribution of cause,
responsibility, and guilt is neatly separated
from the affected individual. PTSD allows
for the exculpation of the victim, a feature
that was particularly important when caring
for Holocaust survivors and Vietnam War
veterans. But what was almost proscribed
for some time after the introduction of PTSD
is now evidence-based: Preexisting individ-
ual factors play an important role in the
shaping of posttraumatic response. Whether
or not an individual develops PTSD after a
traumatic experience is not only determined
by the nature and the intensity of the
traumatic impact, but also by various
pretraumatic characteristics of the affected
individual. Furthermore, PTSD is not the
only posttraumatic mental disorder. A whole
spectrum of mental disorders is closely
linked to traumatic experiences, although
they lack the monocausal appearance of
PTSD. In any case, the most frequent outcome
after traumatic experiences is recovery. In
second place in frequency comes major
depression. Borderline personality disorder
is fully recognized now as a disorder pro-
voked by traumatic experiences in early
childhood. Dissociative disorders, chronic
somatoform pain, anxiety disorders, sub-
stance abuse, and eating disorders are
equally related to traumatic experiences.
Not surprisingly, PTSD often occurs as
a comorbid condition with one or more
additional disorders, or vice versa. In clinical
practice, traumatized patients usually pre-
sent with more complex pictures than expected. This may
explain to some extent why PTSD was vir-
tually overlooked by clinicians for many
decades before its introduction, a fact that is
sometimes hard to understand for younger
therapists who are so familiar with the PTSD
diagnosis. At any rate, the high-functioning,
intelligent, monomorbid PTSD patient is
indeed best evaluated in clinical trials, but
rarely seen in everyday practice.
PTSD has been a focus of research ever
since its introduction. Also from a scientific
point of view, the disorder is appealing
because it is provoked by an external event.
PTSD allows ideally for the investigation of
thehuman-environmentinteraction,whichisa
crucial issue for psychiatry and psychology in
general. The number of trials on diagnosis and
treatment of PTSD is huge, and the disorder is
now probably the best evaluated mental dis-
order. What is the benefit of the accumulated
large body of evidence on PTSD for cli-
nicians? There are several soundly elaborated
guidelines on the treatment of PTSD (Ameri-
can Psychiatric Association, 2004; Australian
Centre for Posttraumatic Mental Health, 2007;
NICE, 2005), meta-analyses, and Cochrane
Reviews providing guidance for the assess-
ment and treatment of the disorder. When we
look at the existing conclusions and recom-
mendations, we learn that:
• Debriefing is not recommended as routine practice for individuals who have
experienced a traumatic event.
• When symptoms are mild and have been present for less than 4 weeks after the
trauma, watchful waiting should be
considered.
• Trauma-focused cognitive behavior therapy on an individual outpatient basis
should be offered to people with severe
posttraumatic symptoms.
• Eye movement desensitization and reprocessing is an alternative treatment option.
• Drug treatment should not be used as a routine first-line treatment in preference to
a trauma-focused psychological therapy.
• Drug treatment (selective serotonin reuptake inhibitors) should be considered
for the treatment of PTSD in adults
who express a preference not to engage
in trauma-focused psychological
treatment.
• In the context of comorbid PTSD and depression, PTSD should be treated first.
• In the context of comorbid PTSD and substance abuse, both conditions should
be treated simultaneously.
These recommendations are clear, useful,
and practical. They give real guidance to
therapists and leave little room for doubt or
uncertainty. On the other hand, they are
basically very simple, almost trivial. For
trauma therapists, these recommendations
are commonplace and serve mainly to
endorse what they are already practicing.
The main points of the guidelines for the
treatment of PTSD could be taught in a
1-hour workshop. The key messages of the
guidelines represent basic clinical knowledge
about a specific disorder, of the kind that was
taught before the advent of EBP. Through
their standardizing impact on the therapeutic
community, guidelines may in fact align and
improve the general service quality offered
to traumatized individuals, although this
effect has not yet been demonstrated by
empirical evidence.
The treatment of an individual patient
remains a unique endeavor where interper-
sonal relationship, flexibility, openness, and
cleverness are crucial factors. This challenge is
not lessened by evidence or guidelines.
REFERENCES
American Psychiatric Association. (2004). APA practice
guidelines. Treatment of patients with acute stress
disorder and posttraumatic stress disorder. doi:
10.1176/appi.books.9780890423363.52257
Andreasen, N. C. (1995). Posttraumatic stress disorder:
Psychology, biology, and the Manichaean warfare
between false dichotomies. American Journal of
Psychiatry, 152, 963–965.
Australian Centre for Posttraumatic Mental Health.
(2007). Australian guidelines for the treatment of
adults with acute stress disorder and posttraumatic
stress disorder. Melbourne, Victoria.
Berk, M., & Miles Leigh, J. (1999). Evidence-based
psychiatric practice: Doctrine or trap? Journal of
Evaluation in Clinical Practice, 5, 149–152.
Bilsker, D. (1996). From evidence to conclusions in
psychiatric research. Canadian Journal of Psychiatry,
41, 227–232.
Brendel, D. H. (2003). Reductionism, eclecticism, and
pragmatism in psychiatry: The dialectic of clinical
explanation. Journal of Medicine and Philosophy, 28,
563–580.
Cochrane, A. L. (1972). Effectiveness and efficiency:
Random reflections on health services. London, En-
gland: Nuffield Provincial Hospitals Trust.
Cooper, B. (2003). Evidence-based mental health policy:
A critical appraisal. British Journal of Psychiatry,
183, 105–113.
Cooper, R. (2004). What is wrong with the DSM? History
of Psychiatry, 15, 5–25.
Fichter, M. M., & Quadflieg, N. (2007). Long-term sta-
bility of eating disorder diagnoses. International
Journal of Eating Disorders, 40(Suppl.), 61–66.
Geddes, J. R., Game, D., Jenkins, N. E., Peterson, L. A.,
Pottinger, G. R., & Sackett, D. L. (1996). What pro-
portion of primary psychiatric interventions are based
on evidence from randomised controlled trials?
Quality in Health Care, 5, 215–217.
Geddes, J. R., & Harrison, P. J. (1997). Closing the gap
between research and practice. British Journal of
Psychiatry, 171, 220–225.
Ghali, W., Saitz, R., Sargious, P. M., & Hershman, W. Y.
(1999). Evidence-based medicine and the real world:
Understanding the controversy. Journal of Evaluation
in Clinical Practice, 5, 133–138.
Gibson, J. N. A., & Waddell, G. (2007). Surgical inter-
ventions for lumbar disc prolapse. Cochrane Data-
base of Systematic Reviews, Issue 1. Art. No.:
CD001350. doi: 10.1002/14651858.CD001350.pub4
Goldner, E. M., & Bilsker, D. (1995). Evidence-based
psychiatry. Canadian Journal of Psychiatry, 40,
97–101.
Gray, G. E., & Pinson, L. A. (2003). Evidence-based
medicine and psychiatric practice. Psychiatric Quar-
terly, 74, 387–399.
Green, J., & Britten, N. (1998). Qualitative research and
evidence based medicine. British Medical Journal,
316, 1230–1232.
Grilo, C. M., Pagano, M. E., Skodol, A. E., Sanislow,
C. A., McGlashan, T. H., Gunderson, J. G., & Stout,
R. L. (2007). Natural course of bulimia nervosa and of
eating disorder not otherwise specified: Five-year
prospective study of remissions, relapses and the
effects of personality disorder psychopathology.
Journal of Clinical Psychiatry, 68, 738–746.
Harari, E. (2001). Whose evidence? Lessons from the
philosophy of science and the epistemology of
medicine. Australian and New Zealand Journal
of Psychiatry, 35, 724–730.
Hotopf, M., Churchill, R., & Lewis, G. (1999). Pragmatic
randomised controlled trials in psychiatry. British
Journal of Psychiatry, 175, 217–223.
Kendell, R., & Jablensky, A. (2003). Distinguishing
between the validity and utility of psychiatric diag-
noses. American Journal of Psychiatry, 160, 4–12.
Kuhn, T. (1962). The structure of scientific revolutions.
Chicago, IL: University of Chicago.
The Lancet. (1995). Evidence-based medicine, in its place
[Editorial]. The Lancet, 346, 785.
Lawrie, S. M., Scott, A. I., & Sharpe, M. C. (2000).
Evidence-based psychiatry—Do psychiatrists want it
and can they do it? Health Bulletin, 58, 25–33.
Luhmann, N. (1995). Social systems. Stanford, CA:
Stanford University Press.
Miettinen, O. S. (1999). Ideas and ideals in medicine:
Fruits of reason or props of power? Journal of
Evaluation in Clinical Practice, 5, 107–116.
Miles, A., Bentley, P., Polychronis, A., Grey, J., & Price, N.
(1999). Advancing the evidence-based healthcare
debate. Journal of Evaluation in Clinical Practice,
5, 97–101.
Milos, G., Spindler, A., Schnyder, U., & Fairburn, C. G.
(2005). Instability of eating disorder diagnoses:
A prospective study. British Journal of Psychiatry,
187, 573–578.
National Institute for Clinical Excellence (NICE). (2004).
Eating disorders. Core interventions in the treatment
and management of anorexia nervosa, bulimia ner-
vosa, and related eating disorders. National clinical
practice guideline number CG9. London, England:
The British Psychological Society and Gaskell.
National Institute for Clinical Excellence (NICE). (2005).
Posttraumatic stress disorder (PTSD). The manage-
ment of PTSD in adults and children in primary and
secondary care. Clinical guideline 26. Retrieved from
www.nice.org.uk/CG026NICEguideline
Norman, G. R. (1999). Examining the assumptions of
evidence-based medicine. Journal of Evaluation in
Clinical Practice, 5, 139–147.
Oakley-Browne, M. A. (2001). EBM in practice: Psy-
chiatry. Medical Journal of Australia, 174, 403–404.
Parsonnet, J. (2005). Clinician-discoverers—Marshall,
Warren, and H. pylori. New England Journal of
Medicine, 353, 2421–2423.
Rangachari, P. K. (1997). Evidence-based medicine: Old
French wine with a new Canadian label? Journal of
the Royal Society of Medicine, 90, 280–284.
Rovers, M. M., Glasziou, P., Appelman, C. L., Burke, P.,
McCormick, D. P., Damoiseaux, R. A., . . . Hoes, A. W.
(2006). Antibiotics for acute otitis media: A meta-
analysis with individual patient data. The Lancet, 368,
1429–1435.
Sackett, D. L., Rosenberg, W. M. C., Muir Gray, J. A.,
Haynes, R., & Richardson, W. S. (1996). Evidence-
based medicine: What it is and what it isn’t. British
Medical Journal, 312, 71–72.
Seeman, M. V. (2001). Clinical trials in psychiatry: Do
results apply to practice? Canadian Journal of
Psychiatry, 46, 352–355.
Sharpe, M., Gill, D., Strain, J., & Mayou, R. (1996).
Psychosomatic medicine and evidence-based
treatment. Journal of Psychosomatic Research, 41,
101–107.
Sleigh, J. W. (1995). Evidence-based medicine and Kurt
Gödel. Letter to the editor. The Lancet, 346, 1172.
Strong, M., & Johnstone, P. W. (2007). Interventions for
treating scabies. Cochrane Database of Systematic
Reviews, Issue 2. Art. No.: CD000320. doi: 10.1002/
14651858.CD000320.pub2
Summerfield, D. (2001). The invention of post-traumatic
stress disorder and the social usefulness of a psychi-
atric category. British Medical Journal, 322, 95–98.
Welsby, P. D. (1999). Reductionism in medicine: Some
thoughts on medical education from the clinical front
line. Journal of Evaluation in Clinical Practice, 5,
125–131.
Williams, D. D. R., & Garner, J. (2002). The case against
“the evidence”: A different perspective on evidence-
based medicine. British Journal of Psychiatry, 180,
8–12.
UAGC | Case studies in non evidence based treatment Part one
Hello, and welcome to this discussion. My name is Dr. Steven Brewer and I'm an Assistant Professor of Psychology and Applied Behavioral Sciences at Emory University and content lead for Psych 645. This is the first of two audio files that will introduce case studies in non-evidence-based treatment. But first, what is evidence-based treatment?
Evidence-based treatments or evidence-based practices are generally those that are supported by peer reviewed scientific literature. This definition may lead you to assume that non-evidence-based practices are those that have been disproven by a peer reviewed scientific literature. But that isn't the case. Non-evidence-based practices are simply those practices that have not been supported by peer reviewed literature yet.
Joining me to talk more about non-evidence-based practices is Dr. Eric Cervantez, Assistant Professor and Chair of the Complementary and Alternative Health Program at Ashford University. He'll be sharing with us some fascinating cases where evidence based practices fail to completely help a patient's concerns.
Thank you, Dr. Brewer. I really appreciate this time. A little bit of background first. I have a relative by the name of Ivan S. Ivan is of Navajo descent, born in the area of the Navajo reservation in Arizona.
And in his early years, between the ages of 20 and 26, he was deployed to Iraq for a couple of tours in that war. In any event, the point is that the first time he returned from that war, Ivan came back with a lot of anger management issues at home, and began to have a lot of squabbles with his wife and children, which was very different from how he was before.
He did another tour after that; of course, there were other problems in the household prior to his next departure. But when he returned, it was very apparent that he was very much affected by the war. He was taken to the naval hospital here in Southern California.
His mood became stoic, but his behavior continued to be very, very aggressive. He began to beat his wife and his children; he had recurring nightmares at night, night sweats, many, many problems.
And he continued to go into the naval hospital here in Southern California with no apparent relief of his symptoms or his behavior. He began to drink. He began to have issues with the law. And again, no rescue from conventional practice, with the medications and the treatment. He also began to see a counselor, but that didn't go very well either.
OK, so a little bit of background on what we'll call patient MD. She grew up, basically, in a very poor town, an agricultural town of many, many migrants, migrants of Mexican descent as well as American descent. Her mother and father married at an early age. There were probably seven children in her family, if I remember this case correctly. And she was one of the younger females of the family.
At that time, in this place, there were rival gangs that were part of her life. As she began to go into her teenage years, she began to experience a lot of attention from a lot of her male peers. But at one point, she began to have problems with these peers that were gang related. And then she experienced being raped, from the age of 14 to 16.
She actually belonged to one of the rival gangs. So it was very apparent that her collusion or her involvement with the rival gang put her more at risk for these things to happen, not that that is a justification, but more a reflection of how these gangs relate to each other.
From that point, after 16, she began to use marijuana, meth, heroin, and alcohol more heavily. And she expressed that more than anything she was trying to drown out the anxiety, the fear, and the panic attacks that she would experience on an everyday basis, due to her belief that this would happen again at any time. At the age of 18, she disbanded from these gangs and began to have her own life.
She began to go to school and finished high school; specifically, she finished with a GED. She was in and out of college between the ages of 19 and 24. In her family life, she dissociated completely from her mother and father, because she felt abandoned by them, specifically when she was trying to address the trauma that she experienced so young.
As a matter of fact, she even expressed to them that she was raped and that she wanted some people to go to jail for it. But the parents did not proceed to help her out with any legal ramifications that would come from that. So she also felt disregarded, especially for what she wanted to do. Her parents actually blamed her for the rape.
So she carries a lot of rancor and a lot of anger towards her parents, and manifests that in many ways towards her parents. She's very disrespectful. She reports that she's very disrespectful. She doesn't really visit them that often. She has a lot of fights with them for many, many little things.
She's been in and out of relationships, mostly very short-term, intimate relationships rather than fuller relationships with peers, specifically male peers. She identifies as bisexual but prefers men, yet still has a lot of issues with men and constantly fights with men, physically fights with men, and especially her partners.
Socially, her focus now is activism. She does a lot of work with the migrant families, trying to educate them, trying to empower them. And she also goes to rallies for migrant rights and also for the rights of the undocumented.
So she is very much like a social worker as well as an activist, and believes that she's doing this because she's trying to give back to people who are also abused and disregarded in this society. She is very youth oriented. So a lot of her focus is educating youth and also preventing youth violence, specifically sexual violence against youth and especially females.
However, on the other side of the spectrum, she's so ardently zealous in these ways of being that she can't take no for an answer. She does things by force. So in essence, she embodies a very strong, almost patriarchal attitude about things and demands that things be done in a certain way, which hampers a lot of her relationships with people.
She can't really form very good social bonds and really alienates a lot of people with her force, with the way that she forcefully pushes her agenda on folks, her agenda of helping youth, et cetera. So I guess what I'm saying is that there definitely would be a better balance if she was a little more in tune with how she comes across to people. Right now, she would probably have more allies if she wasn't so harsh in manifesting what her agenda is.
Again, as previously said, she suffers a lot of anxiety. She suffers from insomnia. She currently takes medication for anxiety. She continues to self-medicate with marijuana, also to decrease the anxiety. She has some social phobia, but is also very outgoing, interestingly enough. So she has both of those polarities. And she suffers from panic attacks on a constant basis.
What's really even more interesting from a kind of holistic way of looking at this is that she was recently diagnosed with uterine cancer. And what that does from my perspective as a clinician and naturopathic doctor, what that tells me is that the energetic imprint of that trauma, obviously, is well imprinted in the uterus, and the manifestation of that trauma is still there.
So if she goes on to do other work, specific spiritual work, or other forms of therapy, I think there's a possibility to lift that cancer growth. And not from a chemical perspective, not from chemotherapy or radiation therapy, but more from a very intuitive, very mental, emotional, and spiritual practice. And other recommendations will be given based on that. So that's the background on MD.
Well, thank you, Dr. Cervantez, for that fascinating case study. Students, your challenge as individuals looking at these cases is to provide a professional diagnosis for these patients and then propose treatments as discussed in the discussion forum prompt. At the end of this week or early next week, your instructor will be posting part two of this discussion, where Dr. Cervantez will talk about the non-evidence-based practices that were actually employed with these patients and how those worked out.
So look forward to seeing that in the announcements section of this course. And I'll look forward to talking with you early next week.
Maier, T. (2012). Limitations to evidence-based practice. In P. Sturmey & M. Hersen (Series Eds.), Handbook of evidence-based practice in clinical psychology: Vol. 2. Adult disorders (pp. 55-69). Hoboken, NJ: John Wiley & Sons.
Mudford, O. C., McNeill, R., Walton, L., & Phillips, K. J. (2012). Rationale and standards of evidence in evidence-based practice. In P. Sturmey & M. Hersen (Series Eds.), Handbook of evidence-based practice in clinical psychology: Vol. 1. Child and adolescent disorders (pp. 3-26). Hoboken, NJ: John Wiley & Sons.
Brewer, S., Cervantes, E., & Simpelo, V. (2014). Case studies in non-evidence-based treatment: Part one [Audio]. Canvas@UAGC. https://login.uagc.edu
