March 21, 2010, was not EBP’s date of birth, but it may be the date the approach “grew up” and left home to take on the world.
When the Affordable Care Act was passed, it came with a requirement of empirical evidence. Research on EBP increased significantly. Application of EBP spread to allied health professions, education, healthcare technology, and more. Health organizations began to adopt and promote EBP.
In this Discussion, you will consider this adoption. You will examine healthcare organization websites and analyze to what extent these organizations use EBP.
- Review the Resources and reflect on the definition and goal of EBP.
- Choose a professional healthcare organization’s website (e.g., a reimbursing body, an accredited body, or a national initiative).
- Explore the website to determine where and to what extent EBP is evident.
Post a description of the healthcare organization website you reviewed. Describe where, if at all, EBP appears (e.g., the mission, vision, philosophy, and/or goals of the healthcare organization, or in other locations on the website). Then, explain whether this healthcare organization’s work is grounded in EBP and why or why not. Finally, explain whether the information you discovered on the healthcare organization’s website has changed your perception of the healthcare organization. Be specific and provide examples.
Nurse Educators: Leading Health Care to the Quadruple Aim Sweet Spot
Eighteen years ago, an alarming report on preventable deaths from medical errors was released by the Institute of Medicine (IOM, 2000). That report featured the estimate that approximately 100,000 people in the United States die each year because of preventable medical errors. A subsequent IOM report (2003) called for all health professionals to be better prepared to keep patients safe, focusing on five core competencies for health professions education: patient-centered care, interprofessional collaboration, evidence-based practice, quality improvement, and informatics.
Visionary leaders in nursing education were ahead of the curve, responding to the call for safer and more effective care via the Quality and Safety Education for Nurses (QSEN) project (Cronenwett et al., 2007). In 2008, the Institute for Healthcare Improvement announced a major initiative—the Triple Aim—which focuses on “simultaneous pursuit of three aims: improving the experience of care, improving the health of populations, and reducing per capita costs of health care” (Berwick, Nolan, & Whittington, 2008, p. 759). Subsequently, Bodenheimer and Sinsky (2014) proposed a fourth—a quadruple—aim to improve the work life of health care providers, both clinicians and staff.
What progress has been made during the past 19 years since the IOM report, with 10 years of QSEN education, and 9 years after the Triple Aim was launched? Improvements in some health outcomes have been reported. For instance, the United States has seen a 15% reduction in infant mortality rates compared with 2005 (Kochanek, Murphy, Xu, & Tejada-Vera, 2014). Numbers of hospital-acquired conditions, such as central line-associated bloodstream infections (CLABSIs), pressure ulcers, and falls with injuries have significantly decreased from 2010 to 2013, according to a recent report from the American Hospital Association (2015). However, in terms of better care and lower costs, we are not yet there. James (2013) has estimated hospital patient deaths due to preventable harm at more than 400,000 per year. Reports from consumers of health care continue to include stories of poor care experiences, including lack of compassion and frustrations in navigating the complexities of the care system. Further, the aim of lower costs per capita has yet to become reality. Although an estimated 20 million people were newly insured through the Patient Protection and Affordable Care Act (ACA, 2010), political challenges to the ACA remain, including rising costs, high out-of-pocket expenses, and access to affordable insurance.
In the world of leadership, there is a term, the sweet spot, for the place where economic health and the common good coexist and are the keys to achieving viable and sustainable solutions (Savitz & Weber, 2008). Is it possible to reach the sweet spot of the Quadruple Aim? AcademyHealth and the Robert Wood Johnson Foundation are partnering to pursue this formidable aim, proposing that care delivery systems collaborate across multiple sectors to provide an affordable approach to improving population health (Hacker, 2017).
Are we as a profession just going to sit back and wait for that to happen? I believe that nurse educators are well positioned to lead the way to this lofty sweet spot goal. Nursing schools and nurse educators already work across multiple sectors to prepare nurses at all levels, from prelicensure to doctoral education. Nurse educators are already in all settings across the care continuum as practitioners themselves and as mentors to nursing students applying theory in practice. Many, if not most, prelicensure through DNP nursing students have been well prepared with the QSEN competencies. Those at the graduate level are leading evidence-based systems improvement initiatives as part of their practice immersion and culminating projects.
I have seen the power of what nurses can do to bring the multiple sectors together in the interest of patient safety, quality, population health, and affordable care. Faculty and students have taken a Quadruple Aim approach. Working in communities and across the globe, they have engaged with community and global leaders and local health advocates, such as Promotores (lay Hispanic health advocates), to partner for better health outcomes. Faculty and students have conducted community needs assessments to identify health priorities. They have provided health education and health screening. They have applied the processes and tools of the science of improvement to community-based projects to facilitate collaboration across sectors to improve health outcomes. They have been part of teams who have provided resources that communities often cannot afford alone. They have gathered and analyzed the metrics to measure results. The response from local leaders and health advocates is consistently positive, acknowledging their contributions. And both students and faculty have benefitted from these practice experiences.

Journal of Nursing Education • Vol. 56, No. 12, 2017
My greatest concern is that those who lead national associations in both education and practice have not found a way to rise above their respective self-interests with a genuine commitment to work in partnership towards the Quadruple Aim sweet spot. Some have not yet learned what visionary 20th century organizational leadership pioneer Mary Parker Follett taught about the distinction between power with versus power over (Briskin, Erickson, Ott, & Callanan, 2009). Power over depends on relationships of polarity, suspicion, and differentials in power. Power with relies on relationships of respect, stakeholder engagement, and multisector approaches, resulting in co-created power.
Faculty and students typically work in collaboration with their patients and families, as well as their clinical partners across sectors, to improve health care and health outcomes. That is what QSEN has taught us. Through care coordination models, we typically collaborate in a power with stance to reach both optimal learning and optimal health outcomes, contribute to cost-effectiveness, and contribute to quality of life. Coordination of care, including patients as partners in care, is one evidence-based strategy for reaching the Triple Aim. Care coordination is a philosophy and attitude as much as it is a process. We need to teach our politicians and public officials about the care coordination model and how it addresses gaps in care in order to achieve optimal health outcomes. I have seen this facilitative education around care coordination take place when students and faculty are present at the policy table as important health care issues are addressed, specifically relating to homelessness and care for children and families who are at high risk for foster care. Conversations have moved beyond debate to generative dialogue because nurses (faculty, students, nurse leaders, and nurses as board members) have been at the table.
Faculty, students, and their preceptors could teach many organizational and political leaders by modeling how leveraging a power with approach is a viable pathway to the Quadruple Aim’s sweet spot. Power with is what makes clinical nurses, nurse educators, and nurse leaders so effective and so special. With a rising emphasis on population health, we have many more opportunities to communicate with political leaders and other policy makers. We must believe in ourselves as leaders of the Quadruple Aim and act accordingly if we are ever going to reach the sweet spot.
Power with and power ahead. What a concept!
References

American Hospital Association. (2015). Zeroing in on the Triple Aim. Retrieved from http://www.aha.org/content/15/brief-3aim.pdf

Berwick, D.M., Nolan, T.W., & Whittington, J. (2008). The Triple Aim: Care, health, and cost. Health Affairs, 27, 759-769. doi:10.1377/hlthaff.27.3.759

Bodenheimer, T., & Sinsky, C. (2014). From Triple to Quadruple Aim: Care of the patient requires care of the provider. Annals of Family Medicine, 12, 573-576. doi:10.1370/afm.1713

Briskin, A., Erickson, S., Ott, J., & Callanan, T. (2009). The power of collective wisdom and the trap of collective folly. San Francisco, CA: Berrett-Koehler.

Cronenwett, L., Sherwood, G., Barnsteiner, J., Disch, J., Johnson, J., Mitchell, P., . . . Warren, J. (2007). Quality and safety education for nurses. Nursing Outlook, 55, 122-131. doi:10.1016/j.outlook.2007.02.006

Hacker, K. (2017, March 27). Bridging the divide: The sweet spot in health care and public health [Web log post]. Retrieved from http://www.academyhealth.org/blog/2017-03/bridging-divide-sweet-spot-health-care-and-public-health

Institute of Medicine. (2000). To err is human: Building a safer health system. Washington, DC: The National Academies Press. https://doi.org/10.17226/9728

Institute of Medicine. (2003). Health professions education: A bridge to quality. Washington, DC: The National Academies Press. https://doi.org/10.17226/10681

James, J.T. (2013). A new, evidence-based estimate of patient harms associated with hospital care. Journal of Patient Safety, 9, 122-128. doi:10.1097/PTS.0b013e3182948a69

Kochanek, K.D., Murphy, S.L., Xu, J., & Tejada-Vera, B. (2014). Deaths: Final data for 2014. National Vital Statistics Reports, 65(4). Retrieved from https://www.cdc.gov/nchs/data/nvsr/nvsr65/nvsr65_04.pdf

Patient Protection and Affordable Care Act, 42 U.S.C. § 18001 et seq. (2010).

Savitz, A.W., & Weber, K. (2008). The sustainability sweet spot: Where profit meets the common good. In J.V. Gallos (Ed.), Business leadership: A Jossey-Bass reader (2nd ed., pp. 230-243). San Francisco, CA: John Wiley & Sons.
Jan Boller, PhD, RN Adjunct Associate Professor
College of Nursing Creighton University
The author has disclosed no potential conflicts of interest, financial or otherwise.
Copyright © SLACK Incorporated
Copyright © 2009 The Author(s)
Evidence-Based Practice: Critical Appraisal of Qualitative Evidence
Kathleen M. Williamson
One of the key steps of evidence-based practice is to critically appraise evidence to best answer a clinical question. Mental health clinicians need to understand the importance of qualitative evidence to their practice, including levels of qualitative evidence, qualitative inquiry methods, and the criteria used to appraise qualitative evidence, in order to determine how implementing the best qualitative evidence into their practice will influence mental health outcomes. The goal of qualitative research is to develop a complete understanding of reality as it is perceived by the individual and to uncover the truths that exist. Because these perceptions are central to mental health, clinicians must engage with this evidence. J Am Psychiatr Nurses Assoc, 2009; 15(3), 202-207. DOI: 10.1177/1078390309338733
Keywords: evidence-based practice; qualitative inquiry; qualitative designs; critical appraisal of qualitative evidence; mental health
Evidence-based practice (EBP) is an approach that enables psychiatric mental health care practitioners as well as all clinicians to provide the highest quality of care using the best evidence available (Melnyk & Fineout-Overholt, 2005). One of the key steps of EBP is to critically appraise evidence to best answer a clinical question. For many mental health questions, understanding levels of evidence, qualitative inquiry methods, and questions used to appraise the evidence are necessary to implement the best qualitative evidence into practice. Drawing conclusions and making judgments about the evidence are imperative to the EBP process and clinical decision making (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). The overall purpose of this article is to familiarize clinicians with qualitative research as an important source of evidence to guide practice decisions. In this article, an overview of the goals, methods, and types of qualitative research, and the criteria used to appraise the quality of this type of evidence, will be presented.
Qualitative research aims to generate insight, describe, and understand the nature of reality in human experiences (Ayers, 2007; Milne & Oberle, 2005; Polit & Beck, 2008; Saddler, 2006; Sandelowski, 2004; Speziale & Carpenter, 2003; Thorne, 2000). Qualitative researchers are inquisitive; they seek to understand how people think and feel about the circumstances in which they find themselves, and they use methods to uncover and deconstruct the meaning of a phenomenon (Saddler, 2006; Thorne, 2000). Qualitative data are collected in a natural setting. These data are not numerical; rather, they are full and rich descriptions from participants who are experiencing the phenomenon under study. The goal of qualitative research is to uncover the truths that exist and develop a complete understanding of reality and the individual’s perception of what is real. This method of inquiry is deeply rooted in descriptive modes of research. “The idea that multiple realities exist and create meaning for the individuals studied is a fundamental belief of qualitative researchers” (Speziale & Carpenter, 2003, p. 17). Qualitative research is the studying, collecting, and understanding of the meaning of individuals’ lives using a variety of materials and methods (Denzin & Lincoln, 2005).
WHAT IS A QUALITATIVE RESEARCHER?
Qualitative researchers commonly believe that individuals come to know and understand their reality in different ways. It is through the lived experience and the interactions that take place in the natural setting that the researcher is able to discover and understand the phenomenon under study (Miles & Huberman, 1994; Patton, 2002; Speziale & Carpenter, 2003). To ensure the least disruption to the environment/natural setting, qualitative researchers carefully consider the best research method to answer the research question (Speziale & Carpenter, 2003). These researchers are intensely involved in all aspects of the research process and are considered participants and observers in the setting or field (Patton, 2002; Polit & Beck, 2008; Speziale & Carpenter, 2003). Flexibility is required to obtain data from the richest possible sources of information. Using a holistic approach, the researcher attempts to capture the perceptions of the participants from an “emic” approach (i.e., from an insider’s viewpoint; Miles & Huberman, 1994; Speziale & Carpenter, 2003). Often, this is accomplished through the use of a variety of data collection methods, such as interviews, observations, and written documents (Patton, 2002). As the data are collected, the researcher simultaneously analyzes them, identifying emerging themes, patterns, and insights within the data. According to Patton (2002), qualitative analysis engages exploration, discovery, and inductive logic. The researcher uses a rich literary account of the setting, actions, feelings, and meaning of the phenomenon to report the findings (Patton, 2002).

Kathleen M. Williamson, PhD, RN, associate director, Center for the Advancement of Evidence-Based Practice, Arizona State University, College of Nursing & Healthcare Innovation, Phoenix, Arizona; Kathleen.Williamson@asu.edu.

Journal of the American Psychiatric Nurses Association, Vol. 15, No. 3
COMMONLY USED QUALITATIVE DESIGNS
According to Patton (2002), “Qualitative methods are first and foremost research methods. They are ways of finding out what people do, know, think, and feel by observing, interviewing, and analyzing documents” (p. 145). Qualitative research designs vary by type and purpose, by the data collection strategies used, and by the type of question or phenomenon under study. To critically appraise qualitative evidence for its validity and use in practice, an understanding of the types of qualitative methods, as well as how they are employed and reported, is necessary.
Many of these methods are rooted in the disciplines of anthropology, psychology, and sociology. The methods most commonly used in health sciences research are ethnography, phenomenology, and grounded theory (see Table 1).
Ethnography has its traditions in cultural anthropology and describes the values, beliefs, and practices of cultural groups (Ploeg, 1999; Polit & Beck, 2008). According to Speziale and Carpenter (2003), the characteristics that are central to ethnography are that (a) the research is focused on culture, (b) the researcher is totally immersed in the culture, and (c) the researcher is aware of her/his own perspective as well as those of the people in the study. Ethnographic researchers strive to study cultures from an emic approach. The researcher, as a participant observer, becomes involved in the culture to collect data, learn from participants, and report on the way participants see their world (Patton, 2002). Data are primarily collected through observations and interviews. Analysis of ethnographic results involves identifying the meanings attributed to objects and events by members of the culture. These meanings are often validated by members of the culture before the results are finalized (called member checks). This is a labor-intensive method that requires extensive fieldwork.
TABLE 1. Most Commonly Used Qualitative Research Methods

Ethnography
- Purpose: Describe the culture of a people
- Typical questions: What is it like to live . . . ? What is it . . . ?
- Sample size (on average): 30-50
- Data sources/collection: Interviews, observations, field notes, records, chart data, life histories

Phenomenology
- Purpose: Describe phenomena, the appearance of things, as lived experience of humans in a natural setting
- Typical questions: What is it like to have this experience? What does it feel like?
- Sample size (on average): 6-8
- Data sources/collection: Interviews, videotapes, observations

Grounded theory
- Purpose: Develop a theory rather than describe a phenomenon
- Typical questions: Questions emerge from the data
- Sample size (on average): 25-50
- Data sources/collection: Taped interviews, observations, diaries, and memos from researcher

Source. Adapted from Polit and Beck (2008) and Speziale and Carpenter (2003).
Phenomenology has its roots in both philosophy and psychology. Polit and Beck (2008) reported, “Phenomenological researchers believe that lived experience gives meaning to each person’s perception of a particular phenomenon” (p. 227). According to Polit and Beck, four aspects of the human experience are of interest to the phenomenological researcher: (a) lived space (spatiality), (b) lived body (corporeality), (c) lived human relationships (relationality), and (d) lived time (temporality). Phenomenological inquiry focuses on how participants make sense of the experience, how they transform the experience into consciousness, and the nature or meaning of the experience (Patton, 2002). Interpretive phenomenology (hermeneutics) focuses on the meaning and interpretation of the lived experience to better understand its social, cultural, political, and historical context. Descriptive phenomenology shares vivid reports and describes the phenomenon.
In a phenomenological study, the researcher is an active participant/observer who is totally immersed in the investigation. The method involves gaining access to participants who can provide rich descriptions through in-depth interviews, gathering all the information needed to describe the phenomenon under study (Speziale & Carpenter, 2003). Ongoing analyses of direct quotes and statements by participants occur until common themes emerge. The outcome is a vivid description of the experience that captures its meaning and communicates clearly and logically the phenomenon under study (Speziale & Carpenter, 2003).
Grounded theory has its roots in sociology and explores the social processes that are present within human interactions (Speziale & Carpenter, 2003). The purpose is to develop or build a theory rather than test a theory or describe a phenomenon (Patton, 2002). Grounded theory takes an inductive approach in which the researcher seeks to generate emergent categories and integrate them into a theory grounded in the data (Polit & Beck, 2008). The research does not start with a focused problem; the problem evolves and is discovered as the study progresses. A feature of grounded theory is that data collection, data analysis, and sampling of participants occur simultaneously (Polit & Beck, 2008; Powers, 2005). Researchers using grounded theory methodology are able to critically analyze situations, recognize that they are part of the study rather than removed from it, recognize bias, obtain valid and reliable data, and think abstractly (Strauss & Corbin, 1990).

Data collection is through in-depth interviews and observations. A constant comparative process is used for two reasons: (a) to compare every piece of data with every other piece to more accurately refine the relevant categories and (b) to assure the researcher that saturation has occurred. Once saturation is reached, the researcher connects the categories, patterns, or themes into the overall picture that emerged, which leads to theory development.
ASPECTS OF QUALITATIVE RESEARCH
The most important aspect of qualitative inquiry is that participants are actively involved in the research process rather than receiving an intervention or being observed for some risk or event to be quantified. Another aspect is that the sample is purposefully selected, on the basis of experience with a culture, social process, or phenomenon, to collect information that is rich and thick in description. The final essential aspect of qualitative research is that one or more of the following strategies are used to collect data: interviews, focus groups, narratives, chat rooms, and observation and/or field notes. These methods may be used in combination with each other. The researcher may choose to apply triangulation strategies to data collection, investigator, method, or theory and use multiple sources to draw conclusions about the phenomenon (Patton, 2002; Polit & Beck, 2009).
This is not an exhaustive list of the qualitative methods that researchers could choose to answer a research question; other methods include historical research, feminist research, the case study method, and action research. All qualitative research methods are used to describe and discover meaning and understanding, or to develop a theory, and to transport the reader to the time and place of the observation and/or interview (Patton, 2002).
THE HIERARCHY OF QUALITATIVE EVIDENCE
Clinical questions that require qualitative evidence to answer them focus on human response and meaning. An important step in the process of appraising qualitative research as a guide for clinical practice is the identification of the level of evidence, or the “best” evidence. The level of evidence is a guide that helps identify the most appropriate, rigorous, and clinically relevant evidence to answer the clinical question (Polit & Beck, 2008). The evidence hierarchy for qualitative research ranges from opinions of authorities and/or reports of expert committees, to a single qualitative research study, to metasynthesis (Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008). A metasynthesis is comparable to a meta-analysis (i.e., systematic review) of quantitative studies. A metasynthesis is a technique that integrates the findings of multiple qualitative studies on a specific topic, providing an interpretative synthesis of the research findings in narrative form (Polit & Beck, 2008). This is the strongest level of evidence with which to answer a clinical question. The higher the level of evidence, the stronger the support for changing practice. However, all evidence needs to be critically appraised based on (a) the best available evidence (i.e., level of evidence), (b) the quality and reliability of the study, and (c) the applicability of the findings to practice.
CRITICAL APPRAISAL OF QUALITATIVE EVIDENCE
Once the clinical issue has been identified, the PICOT question constructed, and the best evidence located through an exhaustive search, the next step is to critically appraise each study for its validity (i.e., its quality), reliability, and applicability to practice (Melnyk & Fineout-Overholt, 2005). Although there is no consensus among qualitative researchers on quality criteria (Cutcliffe & McKenna, 1999; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Sandelowski, 2004), many have published excellent tools that guide the process of critically appraising qualitative evidence (Duffy, 2005; Melnyk & Fineout-Overholt, 2005; Polit & Beck, 2008; Powers, 2005; Russell & Gregory, 2003; Speziale & Carpenter, 2003). They all base their criteria on three primary questions: (a) Are the study findings valid? (b) What were the results of the study? (c) Will the results help me in caring for my patients? According to Melnyk and Fineout-Overholt (2005), “The answers to these questions ensure relevance and transferability of the evidence from the search to the specific population for whom the practitioner provides care” (p. 120). Using the questions in Tables 2, 3, and 4, one can evaluate the evidence and determine whether the study findings are valid, whether the methods and instruments used to acquire the knowledge are credible, and whether the findings are transferable.
The qualitative process contributes to the rigor, or trustworthiness, of the data (i.e., the quality). “The goal of rigor in qualitative research is to accurately represent study participants’ experiences” (Speziale & Carpenter, 2003, p. 38). The qualitative attributes of validity include credibility, dependability, confirmability, transferability, and authenticity (Guba & Lincoln, 1994; Miles & Huberman, 1994; Speziale & Carpenter, 2003).
Credibility is having confidence in the truth of the data and their interpretations (Polit & Beck, 2008). The credibility of the findings hinges on the skill, competence, and rigor of the researcher in describing the content shared by the participants and on the ability of the participants to accurately describe the phenomenon (Patton, 2002; Speziale & Carpenter, 2003). Cutcliffe and McKenna (1999) reported that the most important indicator of the credibility of findings is when a practitioner reads the study findings, regards them as meaningful and applicable, and incorporates them into his or her practice.
TABLE 2. Subquestions to Further Answer, Are the Study Findings Valid?

- Was the sampling method appropriate? How were the participants selected?
- Was the sample adequate? Was the group or population adequately described?
- Was the setting appropriate to acquire an adequate sample?
- How were the data collected? Were the tools adequate?
- Were the data coded? If so, how?
- Were the participants’ rights protected?
- Did the researcher eliminate bias?
- Did the participants provide rich and thick descriptions?
- How accurate and complete were the data? Do the data accurately represent the study participants?
- Was saturation achieved?
- Do the gathered data adequately portray the phenomenon?

Source. Adapted from Powers (2005), Polit and Beck (2008), Russell and Gregory (2003), and Speziale and Carpenter (2003).

Confirmability refers to the way the researcher documents and confirms the study findings (Speziale & Carpenter, 2003). Confirmability is the process of confirming the accuracy, relevance, and meaning of the data collected. Confirmability exists if (a) the researcher identifies whether saturation was reached and (b) records of the methods and procedures are detailed enough that they can be followed as an audit trail (Miles & Huberman, 1994).
Dependability is a standard that demonstrates whether (a) the process of the study was consistent, (b) the data remained consistent over time and conditions, and (c) the results are reliable (Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). For example, if study methods and results are dependable, the researcher consistently approached each occurrence in the same way with each encounter, and results were coded with accuracy across the study.
Transferability refers to the probability that the study findings have meaning and are usable by others in similar situations (i.e., generalizable to others in that situation; Miles & Huberman, 1994; Polit & Beck, 2008; Speziale & Carpenter, 2003). To determine if the findings of a study are transferable and can be used by others, the clinician must consider the potential client to whom the findings may be applied (Speziale & Carpenter, 2003).
Authenticity is when the researcher fairly and faithfully shows a range of different realities and develops an accurate and authentic portrait of the phenomenon under study (Polit & Beck, 2008). For example, if a clinician were in the same environment as the researcher describes, he or she would experience the phenomenon similarly. All mental health providers need to become familiar with these aspects of qualitative evidence and hone their critical appraisal skills to enable them to improve the outcomes of their clients.
Qualitative research aims to impart the meaning of the human experience and to understand how people think and feel about their circumstances. Qualitative researchers use a holistic approach in an attempt to uncover truths and understand a person’s reality. The researcher is intensely involved in all aspects of the research design, collection, and analysis processes. Ethnography, phenomenology, and grounded theory are some of the designs that a researcher may use to study a culture or a phenomenon, or to build a theory. Data collection strategies vary based on the research question, method, and informants. Methods such as interviews, observations, and journals allow information-rich participants to provide detailed literary accounts of the phenomenon. Data analysis occurs simultaneously with data collection and is the process by which the researcher identifies themes, concepts, and patterns that provide insight into the phenomenon under study.
One of the crucial steps in the EBP process is to critically appraise the evidence for its use in practice
TABLE 3. Subquestions to Further Answer, What Were the Results of the Study?
Is the research design appropriate for the research question?
Is the description of findings thorough?
Do findings fit the data from which they were generated?
Are the results logical, consistent, and easy to follow?
Was the purpose of the study clear?
Were all themes identified useful, creative, and convincing of the phenomenon?
Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).
TABLE 4. Subquestions to Further Answer, Will the Results Help Me in Caring for My Patients?
What meaning and relevance does this study have for my patients?
How would I use these findings in my practice?
How does the study help provide perspective on my practice?
Are the conclusions appropriate to my patient population?
Are the results applicable to my patients?
How would patient and family values be considered in applying these results?
Source. Adapted from Powers (2005), Russell and Gregory (2003), and Speziale and Carpenter (2003).
Journal of the American Psychiatric Nurses Association,Vol. 15, No. 3 207
Critical Appraisal of Qualitative Evidence
and determine the value of findings. Critical appraisal is the review of the evidence for its validity (i.e., strengths and weaknesses), reliability, and usefulness for clients in daily practice. "Psychiatric mental health clinicians are practicing in an era emphasizing the use of the most current evidence to direct their treatment and interventions" (Rice, 2008, p. 186). Appraising the evidence is essential for assurance that the best knowledge in the field is being applied in a cost-effective, holistic, and effective way. To do this, one must incorporate the critically appraised findings with their abilities as clinicians and their clients' preferences. As professionals, clinicians are expected to use the EBP process, which includes appraising the evidence to determine if the best results are believable, useable, and dependable. Clinicians in psychiatric mental health must use qualitative evidence to inform their practice decisions. For example, how do clients newly diagnosed with bipolar disorder and their families perceive the life impact of this diagnosis? Having a well-done metasynthesis that provides an accurate representation of the participants' experiences, and is trustworthy (i.e., credible, dependable, confirmable, transferable, and authentic), will provide insight into the situational context, human response, and meaning for these clients and will assist clinicians in delivering the best care to achieve the best outcomes.
Ayers, L. (2007). Qualitative research proposals—Part I. Journal of Wound, Ostomy and Continence Nursing, 34, 30-32.
Cutcliffe, J. R., & McKenna, H. P. (1999). Establishing the credibility of qualitative research findings: The plot thickens. Journal of Advanced Nursing, 30, 374-380.
Denzin, N. K., & Lincoln, Y. S. (2005). The Sage handbook of qualitative research (3rd ed.). Thousand Oaks, CA: Sage.
Duffy, M. E. (2005). Resources for critically appraising qualitative research evidence for nursing practice clinical questions. Clinical Nurse Specialist, 19, 288-290.
Guba, E. G., & Lincoln, Y. S. (1994). Competing paradigms in qualitative research. In N. K. Denzin & Y. S. Lincoln (Eds.), Handbook of qualitative research (pp. 105-117). Thousand Oaks, CA: Sage.
Melnyk, B. M., & Fineout-Overholt, E. (Eds.). (2005). Evidence-based practice in nursing and healthcare. Philadelphia: Lippincott Williams & Wilkins.
Miles, M. B., & Huberman, A. M. (1994). Qualitative data analysis: An expanded sourcebook (2nd ed.). Thousand Oaks, CA: Sage.
Milne, J., & Oberle, K. (2005). Enhancing rigor in qualitative description: A case study. Journal of Wound, Ostomy and Continence Nursing, 32, 413-420.
Patton, M. Q. (2002). Qualitative research & evaluation methods (3rd ed.). Thousand Oaks, CA: Sage.
Ploeg, J. (1999). Identifying the best research design to fit the question. Part 2: Qualitative designs. Evidence-Based Nursing, 2, 36-37.
Polit, D. F., & Beck, C. T. (2008). Nursing research: Generating and assessing evidence for nursing practice. Philadelphia: Lippincott Williams & Wilkins.
Powers, B. A. (2005). Critically appraising qualitative evidence. In B. M. Melnyk & E. Fineout-Overholt (Eds.), Evidence-based practice in nursing and healthcare (pp. 127-162). Philadelphia: Lippincott Williams & Wilkins.
Rice, M. J. (2008). Evidence-based practice in psychiatric care: Defining levels of evidence. Journal of the American Psychiatric Nurses Association, 14(3), 181-187.
Russell, C. K., & Gregory, D. M. (2003). Evaluation of qualitative research studies. Evidence-Based Nursing, 6, 36-40.
Saddler, D. (2006). Research 101. Gastroenterology Nursing, 30, 314-316.
Sandelowski, M. (2004). Using qualitative research. Qualitative Health Research, 14, 1366-1386.
Speziale, H. J. S., & Carpenter, D. R. (2003). Qualitative research in nursing: Advancing the humanistic imperative. Philadelphia: Lippincott Williams & Wilkins.
Strauss, A., & Corbin, J. (1990). Basics of qualitative research: Grounded theory procedures and techniques. London: Sage.
Thorne, S. (2000). Data analysis in qualitative research. Evidence- Based Nursing, 3, 68-70.
The Quadruple Aim: care, health, cost and meaning in work
Rishi Sikka,1 Julianne M Morath,2 Lucian Leape3
1Advocate Health Care, Downers Grove, Illinois, USA 2Hospital Quality Institute, Sacramento, California, USA 3Harvard School of Public Health, Boston, Massachusetts, USA
Correspondence to Dr Rishi Sikka, Advocate Health Care, 3075 Highland Avenue, Suite 600, Downers Grove, IL 60515, USA
Received 5 March 2015 Revised 6 May 2015 Accepted 16 May 2015
To cite: Sikka R, Morath JM, Leape L. BMJ Qual Saf 2015;24:608–610.
In 2008, Donald Berwick and colleagues provided a framework for the delivery of high value care in the USA, the Triple Aim, that is centred around three overarching goals: improving the individual experience of care; improving the health of populations; and reducing the per capita cost of healthcare.1 The intent is that the Triple Aim will guide the redesign of healthcare systems and the transition to population health. Health systems globally grapple with these challenges of improving the health of populations while simultaneously lowering healthcare costs. As a result, the Triple Aim, although originally conceived within the USA, has been adopted as a set of principles for health system reform within many organisations around the world. The successful achievement of the
Triple Aim requires highly effective healthcare organisations. The backbone of any effective healthcare system is an engaged and productive workforce.2 But the Triple Aim does not explicitly acknowledge the critical role of the workforce in healthcare transformation. We propose a modification of the Triple Aim to acknowledge the importance of physicians, nurses and all employees finding joy and meaning in their work. This 'Quadruple Aim' would add a fourth aim: improving the experience of providing care. The core of workforce engagement is
the experience of joy and meaning in the work of healthcare. This is not synonymous with happiness; rather, it means that all members of the workforce have a sense of accomplishment and meaning in their contributions. By meaning, we refer to the sense of importance of daily work. By joy, we refer to the feeling of success and fulfilment that results from meaningful work. In the UK, the National Health Service has captured this with the notion of an engaged staff that 'think and act in a positive way about the work they do, the people they work with and the organisation that they work in'.3
The evidence that the healthcare workforce finds joy and meaning in work is not encouraging. In a recent physician survey in the USA, 60% of respondents indicated they were considering leaving practice; 70% of surveyed physicians knew at least one colleague who left their practice due to poor morale.2 A 2015 survey of British physicians reported similar findings, with approximately 44% of respondents reporting very low or low morale.4 These findings also extend to the nursing profession. In a 2013 US survey of registered nurses, 51% of nurses worried that their job was affecting their health; 35% felt like resigning from their current job.5 Similar findings have been reported across Europe, with rates of nursing job dissatisfaction ranging from 11% to 56%.6
This absence of joy and meaning experienced by a majority of the healthcare workforce is in part due to the threats of psychological and physical harm that are common in the work environment. Workforce injuries are much more frequent in healthcare than in other industries. For some, such as nurses' aides, orderlies and attendants, the rate is four times the industrial average.7 More days are lost due to occupational illness and injury in healthcare than in mining, machinery manufacturing or construction.7
The risk of physical harm is dwarfed by the extent of psychological harm in the complex environment of the healthcare workplace. Egregious examples include bullying, intimidation and physical assault. Far more prevalent is the psychological harm due to lack of respect. This dysfunction is compounded by production pressure, poor design of work flow and the proportion of non-value added work. The current dysfunctional healthcare
work environment is in part a by-product of the gradual shift in healthcare from a public service to a business model that occurred in the latter half of the 20th
century.8 Complex, intimate caregiving relationships have been reduced to a series of transactional, demanding tasks, with a focus on productivity and efficiency, fuelled by the pressures of decreasing reimbursement. These forces have led to an environment with lack
of teamwork, disrespect between colleagues and lack of workforce engagement. The problems exist from the level of the front-line caregivers, doctors and nurses, who are burdened with non-caregiving work, to the healthcare leader with bottom-line worries and disproportionate reporting requirements. Without joy and meaning in work, the workforce cannot perform at its potential. Joy and meaning are generative and allow the best to be contributed by each individual, and the teams they comprise, towards the work of the Triple Aim every day. The precondition for restoring joy and meaning is
to ensure that the workforce has physical and psychological freedom from harm, neglect and disrespect. For a health system aspiring to the Triple Aim, fulfilling this precondition must be a non-negotiable, enduring property of the system. It alone does not guarantee the achievement of joy and meaning; however, the absence of a safe environment guarantees robbing people of joy and meaning in their work. Cultural freedom from physical and psychological harm is the right thing to do and it is smart economics, because toxic environments impose real costs on the organisation, its employees, physicians, patients and ultimately the entire population. An organisation focused on enabling joy and
meaning in work and pursuit of the Triple Aim needs to embody shared core values of mutual respect and civility, transparency and truth telling, and the safety of the workforce. It recognises the work and accomplishments of the workforce regularly and with high visibility. For the individual, these notions of joy and meaning in healthcare work are recognised in three critical questions posed by Paul O'Neill, former chairman and chief executive officer of Alcoa. This is an internal gut-check that needs to be answered affirmatively by each worker each day:2
1. Am I treated with dignity and respect every day, by everyone I encounter, without regard to race, ethnicity, nationality, gender, religious belief, sexual orientation, title, pay grade or number of degrees?
2. Do I have the things I need: education, training, tools, financial support, encouragement, so I can make a contribution to this organisation that gives meaning to my life?
3. Am I recognised and thanked for what I do? If each individual in the workforce cannot answer
affirmatively to these questions, the full potential to achieve patient safety, effective outcomes and lower costs is compromised. The leadership and governance of our healthcare
systems currently have strong economic and outcome motivations to focus on the Triple Aim. They also need to feel a parallel moral obligation to the
workforce to create an environment that ensures joy and meaning in work. For this reason, we recommend adding a fourth essential aim: improving the experience of providing care. The notion of changing the objective to the Quadruple Aim recognises this focus within the context of the broader transformation required in our healthcare system towards high value care. While the first three aims provide a rationale for the existence of a health system, the fourth aim becomes a foundational element for the other goals to be realised. Progress on this fourth goal in the Quadruple Aim
can be measured through metrics focusing on two broad areas: workforce engagement and workforce safety. Workforce engagement can be assessed through annual surveys using established frameworks that allow for benchmarking within the industry and with non-healthcare industries.9 Measures should also be extended to quantify the opposite of engagement, workforce burn-out. This could include select questions from the Maslach Burnout Inventory, the gold standard for measuring employee burn-out.10 In the realm of workforce safety, metrics should include quantifying work-related deaths or disability, lost time injuries, government mandated reported injuries and all injuries. Although these measures do not completely quantify the experience of providing care, they provide a practical start that is familiar and allows for an initial baseline assessment and monitoring for improvement. The rewards of the Quadruple Aim, achieved within
an inspirational workplace, could be immense. No other industry has more potential to free up resources from non-value added and inefficient production practices than healthcare; no other industry has more potential to use its resources to save lives and reduce human suffering; no other industry has the potential to deliver the value envisioned by the Triple Aim on such an audacious scale. The key is the fourth aim: creating the conditions for the healthcare workforce to find joy and meaning in their work and, in doing so, improving the experience of providing care.
Contributors All authors assisted in the drafting of this manuscript.
Competing interests None declared.
Provenance and peer review Not commissioned; externally peer reviewed.
REFERENCES
1 Berwick DM, Nolan TW, Whittington J. The triple aim: care, health and cost. Health Aff 2008;27:759–69.
2 Lucian Leape Institute. Through the eyes of the workforce: creating joy, meaning and safer health care. Boston, MA: National Patient Safety Foundation, 2013.
3 NHS employers staff engagement. http://www.nhsemployers.org/staffengagement (accessed 4 May 2015).
4 BMA Quarterly Tracker Survey. http://bma.org.uk/working-for-change/policy-and-lobbying/training-and-workforce/tracker-survey/omnibus-survey-january-2015 (accessed 4 May 2015).
5 AMN Healthcare 2013 survey of registered nurses. http://www.amnhealthcare.com/uploadedFiles/MainSite/Content/Healthcare_Industry_Insights/Industry_Research/2013_RNSurvey.pdf (accessed 4 May 2015).
6 Aiken LH, Sermeus W, Van den Heede K, et al. Patient safety, satisfaction and quality of hospital care: cross sectional surveys of nurses and patients in 12 countries in Europe and the United States. BMJ 2012;344:e1717.
7 US Department of Labor Bureau of Labor Statistics. Occupational injuries and illnesses (annual) news release. Workplace injuries and illnesses 2009. 21 October 2010. http://www.bls.gov/news.release/archives/osh_10212010.htm (accessed 4 May 2015).
8 Morath J. The quality advantage, a strategic guide for health care leaders. AHA Press, 1999:225.
9 Surveys on Patient Safety Culture. Agency for Healthcare Research and Quality. http://www.ahrq.gov/professionals/quality- patient-safety/patientsafetyculture/index.html (accessed 4 May 2015).
10 Maslach C, Jackson S, Leiter M. Maslach burnout inventory manual. 3rd edn. Palo Alto, CA: Consulting Psychologists Press, 1996.
By Ellen Fineout-Overholt, PhD, RN, FNAP, FAAN, Bernadette Mazurek Melnyk, PhD, RN, CPNP/PMHNP, FNAP, FAAN, Susan B. Stillwell, DNP, RN, CNE, and Kathleen M. Williamson, PhD, RN
In July’s evidence-based practice (EBP) article, Rebecca R., our hypothetical staff nurse; Carlos A., her hospital’s expert EBP mentor; and Chen M., Rebecca’s nurse colleague, collected the evidence to answer their clinical question: “In hospitalized adults (P), how does a rapid response team (I) compared with no rapid response team (C) affect the number of cardiac arrests (O) and unplanned admissions to the ICU (O) during a three-month period (T)?” As part of their rapid critical appraisal (RCA) of the 15 potential “keeper” studies, the EBP team found and placed the essential elements of each study (such as its population, study design, and setting) into an evaluation table. In so doing, they began to see similarities and differences between the studies, which Carlos told them is the beginning of synthesis. We now join the team as they continue with their RCA of these studies to determine their worth to practice.
RAPID CRITICAL APPRAISAL
Carlos explains that typically an RCA is conducted along with an RCA checklist that’s specific to the research design of the study being evaluated—and before any data are entered into an evaluation table. However, since Rebecca and Chen are new to appraising studies, he felt it would be easier for them to first enter the essentials into the table and then evaluate each study. Carlos shows Rebecca several RCA checklists and explains that all checklists have three major questions in common, each of which contains other more specific subquestions about what constitutes a well-conducted study for the research design under review (see Example of a Rapid Critical Appraisal Checklist).
Although the EBP team will be looking at how well the researchers conducted their studies and discussing what makes a “good” research study, Carlos reminds them that the goal of critical appraisal is to determine the worth of a study to practice, not solely to find flaws. He also
suggests that they consult their glossary when they see an unfamiliar word. For example, the term randomization, or random assignment, is a relevant feature of research methodology for intervention studies that may be unfamiliar. Using the glossary, he explains that random assignment and random sampling are often confused with one another, but that they’re very different. When researchers select subjects from within a certain population to participate in a study by using a random strategy, such as tossing a coin, this is random sampling. It allows the entire population to be fairly represented. But because it requires access to a particular population, random sampling is not always feasible. Carlos adds that many health care studies are based on a convenience sample—participants recruited from a readily available population, such as a researcher’s affiliated hospital, which may or may not represent the desired population. Random assignment, on the other hand, is the use of a random strategy to assign study
AJN ▼ September 2010 ▼ Vol. 110, No. 9
Critical Appraisal of the Evidence: Part II Digging deeper—examining the “keeper” studies.
This is the sixth article in a series from the Arizona State University College of Nursing and Health Innovation’s Center for the Advancement of Evidence-Based Practice. Evidence-based practice (EBP) is a problem-solving approach to the delivery of health care that integrates the best evidence from studies and patient care data with clinician expertise and patient preferences and values. When delivered in a context of caring and in a supportive organizational culture, the highest quality of care and best patient outcomes can be achieved.
The purpose of this series is to give nurses the knowledge and skills they need to implement EBP consistently, one step at a time. Articles will appear every two months to allow you time to incorporate information as you work toward implementing EBP at your institution. Also, we’ve scheduled “Chat with the Authors” calls every few months to provide a direct line to the experts to help you resolve questions. Details about how to participate in the next call will be published with November’s Evidence-Based Practice, Step by Step.
are the same as three of their potential “keeper” studies. They wonder whether they should keep those studies in the pile, or if, as duplicates, they’re unnecessary. Carlos says that because the meta-analysis only included studies with control groups, it’s important to keep these three studies so that they can be compared with other studies in the pile that don’t have control groups. Rebecca notes that more than half of their 15 studies don’t have control or comparison groups. They agree as a team to include all 15 studies at all levels of evidence and go on to appraise the two remaining systematic reviews.
The MERIT trial1 is next in the EBP team’s stack of studies.
with him, Rebecca and Chen find the checklist for systematic reviews.
As they start to rapidly critically appraise the meta-analysis, they discuss that it seems to be biased since the authors included only studies with a control group. Carlos explains that while having a control group in a study is ideal, in the real world most studies are lower-level evidence and don’t have control or comparison groups. He emphasizes that, in eliminating lower-level studies, the meta-analysis lacks evidence that may be informative to the question. Rebecca and Chen—who are clearly growing in their appraisal skills—also realize that three studies in the meta-analysis
participants to the intervention or control group. Random assignment is an important feature of higher-level studies in the hierarchy of evidence.
Carlos also reminds the team that it’s important to begin the RCA with the studies at the highest level of evidence in order to see the most reliable evidence first. In their pile of studies, these are the three systematic reviews, including the meta-analysis and the Cochrane review, they retrieved from their database search (see “Searching for the Evidence,” and “Critical Appraisal of the Evidence: Part I,” Evidence-Based Practice, Step by Step, May and July). Among the RCA checklists Carlos has brought
Example of a Rapid Critical Appraisal Checklist
Rapid Critical Appraisal of Systematic Reviews of Clinical Interventions or Treatments
1. Are the results of the review valid?
A. Are the studies in the review randomized controlled trials? Yes No
B. Does the review include a detailed description of the search strategy used to find the relevant studies? Yes No
C. Does the review describe how the validity of the individual studies was assessed (such as methodological quality, including the use of random assignment to study groups and complete follow-up of subjects)? Yes No
D. Are the results consistent across studies? Yes No
E. Did the analysis use individual patient data or aggregate data? Patient Aggregate
2. What are the results?
A. How large is the intervention or treatment effect (odds ratio, relative risk, effect size, level of significance)?
B. How precise is the intervention or treatment (confidence interval)?
3. Will the results assist me in caring for my patients?
A. Are my patients similar to those in the review? Yes No
B. Is it feasible to implement the findings in my practice setting? Yes No
C. Were all clinically important outcomes considered, including both risks and benefits of the treatment? Yes No
D. What is my clinical assessment of the patient, and are there any contraindications or circumstances that would keep me from implementing the treatment? Yes No
E. What are my patients’ and their families’ preferences and values concerning the treatment? Yes No
© Fineout-Overholt and Melnyk, 2005.
As we noted in the last installment of this series, MERIT is a good study to use to illustrate the different steps of the critical appraisal process. (Readers may want to retrieve the article, if possible, and follow along with the RCA.) Set in Australia, the MERIT trial examined whether the introduction of a rapid response team (RRT; called a medical emergency team or MET in the study) would reduce the incidence of cardiac arrest, death, and unplanned admissions to the ICU in the hospitals studied. To follow along as the EBP team addresses each of the essential elements of a well-conducted randomized controlled trial (RCT) and how they apply to the MERIT study, see their notes in Rapid Critical Appraisal of the MERIT Study.
ARE THE RESULTS OF THE STUDY VALID?
The first section of every RCA checklist addresses the validity of the study at hand—did the researchers use sound scientific methods to obtain their study results? Rebecca asks why validity is so important. Carlos replies that if the study’s conclusion is to be trusted—that is, relied upon to inform practice—the study must be conducted in a way that reduces bias or eliminates confounding variables (factors that influence how the intervention affects the outcome). Researchers typically use rigorous research methods to reduce the risk of bias. The purpose of the RCA checklist is to help the user determine whether or not rigorous methods have been used in the study under review, with most questions offering the option of a quick answer of “yes,” “no,” or “unknown.”
Were the subjects randomly assigned to the intervention and control groups? Carlos explains
that this is an important question when appraising RCTs. If a study calls itself an RCT but didn’t randomly assign participants, then bias could be present. In appraising the MERIT study, the team discusses how the researchers randomly assigned entire hospitals, not individual patients, to the RRT intervention and control groups using a technique called cluster randomization. To better understand this method, the EBP team looks it up on the Internet and finds a PowerPoint presentation by a World Health Organization researcher that explains it in simplified terms: “Cluster randomized trials are experiments in which social units or clusters [in our case, hospitals] rather than individuals are randomly allocated to intervention groups.”2
Was random assignment concealed from the individuals enrolling the subjects? Concealment helps researchers reduce potential bias, preventing the person(s) enrolling participants from recruiting them into a study with enthusiasm if they’re destined for the intervention group or with obvious indifference if they’re intended for the control or comparison group. The EBP team sees that the MERIT trial used an independent statistician to conduct the random assignment after participants had already been enrolled in the study, which Carlos says meets the criteria for concealment.
Were the subjects and providers blind to the study group? Carlos notes that it would be difficult to blind participants or researchers to the intervention group in the MERIT study because the hospitals that were to initiate an RRT had to know it was happening. Rebecca and Chen wonder whether their “no” answer to this question makes
the study findings invalid. Carlos says that a single “no” may or may not mean that the study findings are invalid. It’s their job as clinicians interpreting the data to weigh each aspect of the study design. Therefore, if the answer to any validity question isn’t affirmative, they must each ask themselves: does this “no” make the study findings untrustworthy to the extent that I don’t feel comfortable using them in my practice?
Were reasons given to explain why subjects didn’t complete the study? Carlos explains that sometimes participants leave a study before the end (something about the study or the participants themselves may prompt them to leave). If all or many of the participants leave for the same reason, this may lead to biased findings. Therefore, it’s important to look for an explanation for why any subjects didn’t complete a study. Since no hospitals dropped out of the MERIT study, this question is determined to be not applicable.
Were the follow-up assessments long enough to fully study the effects of the intervention? Chen asks Carlos why a time frame would be important in studying validity. He explains that researchers must ensure that the outcome is evaluated for a long enough period of time to show that the intervention indeed caused it. The researchers in the MERIT study conducted the RRT intervention for six months before evaluating the outcomes. The team discusses how six months was likely adequate to determine how the RRT affected cardiopulmonary arrest rates (CR) but might have been too short to establish the relationship between the RRT and hospital-wide mortality rates (HMR).
Rapid Critical Appraisal of the MERIT Study
1. Are the results of the study valid? A. Were the subjects randomly assigned to the intervention and control groups? Yes No Unknown
Random assignment of hospitals was made to either a rapid response team (RRT; intervention) group or no RRT (control) group. To protect against introducing further bias into the study, hospitals, not individual patients, were randomly assigned to the intervention. If patients were the study subjects, word of the RRT might have gotten around, potentially influencing the outcome.
B. Was random assignment concealed from the individuals enrolling the subjects? Yes No Unknown
An independent statistician randomly assigned hospitals to the RRT or no RRT group after baseline data had been collected; thus the assignments were concealed from both researchers and participants.
C. Were the subjects and providers blind to the study group? Yes No Unknown
Hospitals knew to which group they’d been assigned, as the intervention hospitals had to put the RRTs into practice. Management, ethics review boards, and code committees in both hospitals knew about the intervention. The control hospitals had code teams and some already had systems in place to manage unstable patients. But control hospitals didn’t have a placebo strategy to match the intervention hospitals’ educational strategy for how to implement an RRT (a red flag for confounding!). If you worked in one of the control hospitals, unless you were a member of one of the groups that gave approval, you wouldn’t have known your hospital was participating in a study on RRTs; this lessens the chance of confounding variables influencing the outcomes.
D. Were reasons given to explain why subjects didn't complete the study? Yes / No / Not Applicable
This question is not applicable as no hospitals dropped out of the study.
E. Were the follow-up assessments long enough to fully study the effects of the intervention? Yes / No / Unknown
The intervention was conducted for six months, which should be adequate time to have an impact on the outcomes of cardiopulmonary arrest rates (CR), hospital-wide mortality rates (HMR), and unplanned ICU admissions (UICUA). However, the authors remark that it can take longer for an RRT to affect mortality, and cite trauma protocols that took up to 10 years.
F. Were the subjects analyzed in the group to which they were randomly assigned? Yes / No / Unknown
All 23 (12 intervention and 11 control) hospitals remained in their groups, and analysis was conducted on an intention-to-treat basis. However, in their discussion, the authors attempt to provide a reason for the disappointing study results; they suggest that because the intervention group was "inadequately implemented," the fidelity of the intervention was compromised, leading to less than reliable results. Another possible explanation involves the baseline quality of care; if high, the improvement after an RRT may have been less than remarkable. The authors also note a historical confounder: in Australia, where the study took place, there was a nationwide increase in awareness of patient safety issues.
G. Was the control group appropriate? Yes / No / Unknown
See notes to question C. Controls had no time built in for education and training as the intervention hospitals did, so this time wasn’t controlled for, nor was there any known attempt to control the organizational “buzz” that something was going on. The study also didn’t account for the variance in how RRTs were implemented across hospitals. The researchers indicate that the existing code teams in control hospitals “did operate as [RRTs] to some extent.” Because of these factors, the appropriateness of the control group is questionable.
H. Were the instruments used to measure the outcomes valid and reliable? Yes / No / Unknown
The primary outcome was the composite of HMR (that is, unexpected deaths, excluding patients with do-not-resuscitate [DNR] orders), CR (that is, no palpable pulse, excluding DNR cases), and UICUA (any unscheduled admissions to the ICU).
I. Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Yes / No / Unknown
The researchers provided a table showing how the RRT and control hospitals compared on several variables. Some variability existed, but there were no statistically significant differences between groups.
2. What are the results?
A. How large is the intervention or treatment effect?
The researchers reported outcome data in various ways, but the bottom line is that the control group did better than the intervention group. For example, RRT calling criteria were documented more than 15 minutes before an event by more hospitals in the control group than in the intervention group, which is contrary to expectation. Half the HMR cases in the intervention group met the criteria compared with 55% in the control group (not statistically significant). But only 30% of CR cases in the intervention group met the criteria compared with 44% in the control group, which was statistically significant (P = 0.031). Finally, regarding UICUA, 51% in the intervention group compared with 55% in the control group met the criteria (not significant). This indicates that the control hospitals were doing a better job of documenting unstable patients before events occurred than the intervention hospitals.
B. How precise is the intervention or treatment?
The odds ratio (OR) for each of the outcomes was close to 1.0, which indicates that the RRT had no effect in the intervention hospitals compared with the control hospitals. Each confidence interval (CI) also included the number 1.0, which indicates that each OR wasn't statistically significant (HMR OR = 1.03 [0.84–1.28]; CR OR = 0.94 [0.79–1.13]; UICUA OR = 1.04 [0.89–1.21]). From a clinical point of view, the results aren't straightforward. It would have been much simpler had the intervention hospitals and the control hospitals done equally badly; but the fact that the control hospitals did better than the intervention hospitals raises many questions about the results.
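These statements can be checked arithmetically. Below is a minimal sketch that computes an odds ratio and its 95% CI on the log scale from a 2×2 table, then tests whether the interval contains 1.0. The counts are hypothetical, chosen only for illustration; they are not the MERIT data.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and 95% CI for a 2x2 table.
    a, b = events and non-events in the intervention group;
    c, d = events and non-events in the control group."""
    or_ = (a / b) / (c / d)
    # Standard error of log(OR) via the Woolf (log) method
    se = math.sqrt(1/a + 1/b + 1/c + 1/d)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical counts, for illustration only (not the MERIT data)
or_, lo, hi = odds_ratio_ci(90, 910, 88, 912)
print(f"OR = {or_:.2f}, 95% CI {lo:.2f}-{hi:.2f}")
if lo <= 1.0 <= hi:
    print("CI includes 1.0: not statistically significant")
```

Because the CI here spans 1.0, the OR would be read as not statistically significant, which is exactly the situation the team sees with the three MERIT outcomes.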
3. Will the results help me in caring for my patients?
A. Were all clinically important outcomes measured? Yes / No / Unknown
It would have been helpful to measure cost, since participating hospitals that initiated an RRT didn’t eliminate their code team. If a hospital has two teams, is the cost doubled? And what’s the return on investment? There’s also no mention of the benefits of the code team. This is a curious question . . . maybe another PICOT question?
B. What are the risks and benefits of the treatment?
This is the wrong question for an RRT. The appropriate question would be: What is the risk of not adequately introducing, monitoring, and evaluating the impact of an RRT?
C. Is the treatment feasible in my clinical setting? Yes / No / Unknown
We have administrative support, once we know what the evidence tells us. Based on this study, we don’t know much more than we did before, except to be careful about how we approach and evaluate the issue. We need to keep the following issues, which the MERIT researchers raised in their discussion, in mind: 1) allow adequate time to measure outcomes; 2) some outcomes may be reliably measured sooner than others; 3) the process of implementing an RRT is very important to its success.
D. What are my patients’ and their families’ values and expectations for the outcome and the treatment itself?
We will keep this in mind as we consider the body of evidence.
Were the instruments used to measure the outcomes valid and reliable? The overall measure in the MERIT study is the composite of the individual outcomes: CR, HMR, and unplanned admissions to the ICU (UICUA). These parameters were defined reasonably and didn't include do not resuscitate (DNR) cases. Carlos explains that since DNR cases are more likely to code or die, including them in the HMR and CR would artificially increase these outcomes and introduce bias into the findings.
As the team moves through the questions in the RCA checklist, Rebecca wonders how she and Chen would manage this kind of appraisal on their own. Carlos assures them that they'll get better at recognizing well-conducted research the more RCAs they do. Though Rebecca feels less than confident, she appreciates his encouragement nonetheless, and chooses to lead the team in discussion of the next question.
Were the demographics and baseline clinical variables of the subjects in each of the groups similar? Rebecca says that the intervention group and the control or comparison group need to be similar at the beginning of any intervention study because any differences in the groups could influence the outcome, potentially increasing the risk that the outcome might be unrelated to the intervention. She refers the team to their earlier discussion about confounding variables. Carlos tells Rebecca that her explanation was excellent. Chen remarks that Rebecca's focus on learning appears to be paying off.
WHAT ARE THE RESULTS?
As the team moves on to the second major question, Carlos tells them that many clinicians are apprehensive about interpreting statistics. He says that he didn't take courses in graduate school on conducting statistical analysis; rather, he learned about different statistical tests in courses that required students to look up how to interpret a statistic whenever they encountered it in the articles they were reading. Thus he had a context for how the statistic was being used and interpreted, what question the statistical analysis was answering, and what kind of data were being analyzed. He also learned to use a search engine, such as Google.com, to find an explanation for any statistical tests with which he was unfamiliar. Because his goal was to understand what the statistic meant clinically, he looked for simple Web sites with that same focus and avoided those with Greek symbols or extensive formulas that were mostly concerned with conducting statistical analysis.
How large is the intervention or treatment effect? As the team goes through the studies in their RCA, they decide to construct a list of statistics terminology for quick reference (see A Sampling of Statistics). The major statistic used in the MERIT study is the odds ratio (OR). The OR is used to provide insight into the measure of association between an intervention and an outcome. In the MERIT study, the control group did better than the intervention group, which is contrary to what was expected. Rebecca notes that the researchers discussed the possible reasons for this finding in the final section of the study. Carlos says that the authors' discussion about why their findings occurred is as important as the findings themselves. In this study, the discussion communicates to any clinicians considering initiating an RRT in their hospital that they should assess whether the current code team is already functioning
Were the subjects analyzed in the group to which they were randomly assigned? Rebecca sees the term intention-to-treat analysis in the study and says that it sounds like statistical language. Carlos confirms that it is; it means that the researchers kept the hospitals in their assigned groups when they conducted the analysis, a technique intended to reduce possible bias. Even though the MERIT study used this technique, Carlos notes that in the discussion section the authors offer some important caveats about how the study was conducted, including poor intervention implementation, which may have contributed to MERIT's unexpected findings.1
Was the control group appropriate? Carlos explains that it's challenging to establish an appropriate comparison or control group without an understanding of how the intervention will be implemented. In this case, it may be problematic that the intervention group received education and training in implementing the RRT and the control group received no comparable placebo (meaning education and training about something else). But Carlos reminds the team that the researchers attempted to control for known confounding variables by stratifying the sample on characteristics such as academic versus nonacademic hospitals, bed size, and other important parameters. This method helps to ensure equal representation of these parameters in both the intervention and control groups. However, a major concern for clinicians considering whether to use the MERIT findings in their decision making involves the control hospitals' code teams and how they may have functioned as RRTs, which introduces a potential confounder into the study that could possibly invalidate the findings.
A Sampling of Statistics
Odds Ratio (OR)
Simple definition: The odds of an outcome occurring in the intervention group compared with the odds of it occurring in the comparison or control group.
Important parameters:
• If an OR is equal to 1, the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the OR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the OR is preferred to be below 1.
Understanding the statistic: The OR for hospital-wide mortality rates (HMR) in the MERIT study was 1.03 (95% CI, 0.84–1.28). The odds of HMR in the intervention group were about the same as in the comparison group.
Clinical implications: From the HMR OR data alone, a clinician may not feel confident that a rapid response team (RRT) is the best intervention to reduce HMR but may seek out other evidence before making a decision.

Relative Risk (RR)
Simple definition: The risk of an outcome occurring in the intervention group compared with the risk of it occurring in the comparison or control group.
Important parameters:
• If an RR is equal to 1, the intervention didn't make a difference.
• Interpretation depends on the outcome.
• If the outcome is good (for example, fall prevention), the RR is preferred to be above 1.
• If the outcome is bad (for example, mortality rate), the RR is preferred to be below 1.
Understanding the statistic: The RR of cardiopulmonary arrest in adults was reported in the Chan PS, et al., 2010 systematic review^a as 0.66 (95% CI, 0.54–0.80), which is statistically significant because there's no 1.0 in the CI. Thus, the RR of cardiopulmonary arrest occurring in the intervention group compared with the RR of it occurring in the control group is 0.66, or less than 1. Since cardiopulmonary arrest is not a good outcome, this is a desirable finding.
Clinical implications: The RRT significantly reduced the RR of cardiopulmonary arrest in this study. From these data, clinicians can be reasonably confident that initiating an RRT will reduce CR in hospitalized adults.

Confidence Interval (CI)
Simple definition: The range in which clinicians can expect to get results if they implement the intervention as it was implemented in the study.
Important parameters:
• The CI provides the precision of the study finding: a 95% CI indicates that clinicians can be 95% confident that their findings will be within the range given in the study.
• The CI should be narrow around the study finding, not wide.
• If a CI contains the number that indicates no effect (for an OR it's 1; for an effect size it's 0), the study finding is not statistically significant.
Understanding the statistic: See the two previous examples. In the Chan PS, et al., 2010 systematic review,^a the CI is a close range around the study finding and is statistically significant. Clinicians can be 95% confident that if they conduct the same intervention, they'll have a result similar to that of the study (that is, a reduction in risk of cardiopulmonary arrest) within the range of the CI, 0.54–0.80.
Clinical implications: The narrower the CI range, the more confident clinicians can be that, using the same intervention, their results will be close to the study findings.

Mean (X̄)
Simple definition: Average.
Important parameters:
• Caveat: Averaging captures only those subjects who surround a central tendency, missing those who may be unique. For example, the mean (average) hair color in a classroom of schoolchildren captures those with the predominant hair color. Children with a hair color different from the predominant one aren't captured and are considered outliers (those who don't converge around the mean).
Understanding the statistic: In the Dacey MJ, et al., 2007 study,^a before the RRT the average (mean) CR was 7.6 per 1,000 discharges per month; after the RRT, it decreased to 3 per 1,000 discharges per month.
Clinical implications: Introducing an RRT decreased the average CR by more than 50% (from 7.6 to 3 per 1,000 discharges per month).
a For study details on Chan PS, et al., and Dacey MJ, et al., go to http://links.lww.com/AJN/A11.
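The OR and RR rows above describe closely related statistics. A short sketch computing both from the same 2×2 table shows why they are interpreted the same way for a bad outcome; the counts are made up for illustration and come from no cited study.

```python
def or_and_rr(a, b, c, d):
    """OR and RR from a 2x2 table.
    a, b = events and non-events in the intervention group;
    c, d = events and non-events in the control group."""
    rr = (a / (a + b)) / (c / (c + d))   # ratio of risks (proportions)
    or_ = (a * d) / (b * c)              # cross-product ratio of odds
    return or_, rr

# Hypothetical arrest counts, for illustration only
or_, rr = or_and_rr(30, 970, 45, 955)
print(f"OR = {or_:.2f}, RR = {rr:.2f}")
# Cardiopulmonary arrest is a bad outcome, so values below 1 favor the intervention.
```

When the outcome is uncommon, as in this example (3% and 4.5% event rates), the OR closely approximates the RR; the two diverge as the outcome becomes more frequent.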
as an RRT prior to RRT implementation.
How precise is the intervention or treatment? Chen wants to tackle the precision of the findings and starts with the OR for HMR, CR, and UICUA, each of which has a confidence interval (CI) that includes the number 1.0. In an EBP workshop, she learned that a 1.0 in a CI for an OR means that the results aren't statistically significant, but she isn't sure what statistically significant means. Carlos explains that since the CIs for the OR of each of the three outcomes contain the number 1.0, these results could have been obtained by chance and therefore aren't statistically significant. For clinicians, chance findings aren't reliable findings, so they can't confidently be put into practice. Study findings that aren't statistically significant have a probability value (P value) of greater than 0.05. Statistically significant findings are those that aren't likely to be obtained by chance and have a P value of less than 0.05.
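The connection between a CI that contains 1.0 and a P value above 0.05 can be made concrete. Assuming, as is conventional, that a reported 95% CI for an OR was computed on the log-odds scale, the standard error and an approximate two-sided P value can be recovered from the published numbers alone:

```python
import math

def two_sided_p_from_or(or_, lo, hi, z=1.96):
    """Approximate two-sided P value for an OR, given its 95% CI,
    assuming the CI was constructed on the log scale."""
    se = (math.log(hi) - math.log(lo)) / (2 * z)   # back out the SE of log(OR)
    z_stat = math.log(or_) / se
    # Normal-tail probability via the error function (no SciPy needed)
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z_stat) / math.sqrt(2))))

# MERIT's reported HMR odds ratio and 95% CI
p = two_sided_p_from_or(1.03, 0.84, 1.28)
print(f"P = {p:.2f}")  # well above 0.05, matching a CI that contains 1.0
```

A CI that contains the no-effect value and a P value above 0.05 are two views of the same result, so they will always agree for the same analysis.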
WILL THE RESULTS HELP ME IN CARING FOR MY PATIENTS?
The team is nearly finished with their checklist for RCTs. The third and last major question addresses the applicability of the study—how the findings can be used to help the patients the team cares for. Rebecca observes that it's easy to get caught up in the details of the research methods and findings and to forget about how they apply to real patients.
Were all clinically important outcomes measured? Chen says that she didn't see anything in the study about how much an RRT costs to initiate and how to compare that cost with the cost of one code or ICU admission. Carlos agrees that providing costs would have lent further insight into the results.
What are the risks and benefits of the treatment? Chen wonders how to answer this since the findings seem to be confounded by the fact that the control hospitals had code teams that functioned as RRTs. She wonders if there was any consideration of the risks and benefits of initiating an RRT prior to beginning the study. Carlos says that the study doesn't directly mention it, but the consideration of the risks and benefits of an RRT is most likely what prompted the researchers to conduct the study. It's helpful to remember, he tells the team, that often the answer to these questions is more than just "yes" or "no."
Is the treatment feasible in my clinical setting? Carlos acknowledges that because the nursing administration is open to their project and supports it by providing time for the team to conduct its work, an RRT seems feasible in their clinical setting. The team discusses that nursing can't be the sole discipline involved in the project. They must consider how to include other disciplines as part of their next step (that is, the implementation plan). The team considers the feasibility of getting all disciplines on board and how to address several issues raised by the researchers in the discussion section (see Rapid Critical Appraisal of the MERIT Study), particularly if they find that the body of evidence indicates that an RRT does indeed reduce their chosen outcomes of CR, HMR, and UICUA.
What are my patients’ and their families’ values and expec- tations for the outcome and the treatment itself? Carlos asks Rebecca and Chen to discuss with their patients and their patients’ families their opinion of an RRT and if they have any objections to the intervention. If there are
objections, the patients or fami- lies will be asked to reveal them.
The EBP team finally completes the RCA checklists for the 15 studies and finds them all to be "keepers." There are some studies in which the findings are less than reliable; in the case of MERIT, the team decides to include it anyway because it's considered a landmark study. All the studies they've retained have something to add to their understanding of the impact of an RRT on CR, HMR, and UICUA. Carlos says that now that they've determined the 15 studies to be somewhat valid and reliable, they can add the rest of the data to the evaluation table.
Be sure to join the EBP team for "Critical Appraisal of the Evidence: Part III" in the next installment in the series, when Rebecca, Chen, and Carlos complete their synthesis of the 15 studies and determine what the body of evidence says about implementing an RRT in an acute care setting. ▼
Ellen Fineout-Overholt is clinical professor and director of the Center for the Advancement of Evidence-Based Practice at Arizona State University in Phoenix, where Bernadette Mazurek Melnyk is dean and distinguished foundation professor of nursing, Susan B. Stillwell is clinical associate professor and program coordinator of the Nurse Educator Evidence-Based Practice Mentorship Program, and Kathleen M. Williamson is associate director of the Center for the Advancement of Evidence-Based Practice. Contact author: Ellen Fineout-Overholt, email@example.com.
REFERENCES
1. Hillman K, et al. Introduction of the medical emergency team (MET) system: a cluster-randomised controlled trial. Lancet 2005;365:2091-7.
2. Wojdyla D. Cluster randomized trials and equivalence trials [PowerPoint presentation]. Geneva, Switzerland: Geneva Foundation for Medical Education and Research; 2005. http://www.gfmer.ch/PGC_RH_2005/pdf/Cluster_Randomized_Trials.pdf.
Implementing EBP Column
Improving Patient Care Through Nursing Engagement in Evidence-Based Practice Elizabeth Crabtree, MPH • Emily Brennan, MLIS • Amanda Davis, MPH, RD • Andrea Coyle, MSN, MHA, RN, CMSRN
This column shares the best evidence-based strategies and innovative ideas on how to facilitate the learning of EBP principles and processes by clinicians as well as nursing and interprofessional students. Guidelines for submission are available at http://onlinelibrary.wiley.com/journal/10.1111/(ISSN)1741-6787
INTRODUCTION AND BACKGROUND
The Medical University of South Carolina (MUSC) is a large academic health science center, with a 700-bed medical center (MUSC Health) and six colleges that train approximately 2,600 healthcare professionals annually. The MUSC Center for Evidence-Based Practice (EBP), housed jointly in the Library and the Quality Management department of the MUSC Hospital, aims to promote scientific inquiry, EBP, and quality outcomes at MUSC. Through education, the development of evidence-based clinical decision support tools, and outcomes research, the Center for EBP has begun to transform the culture of MUSC into one that incorporates best evidence into clinical practice on both an individual and system level.
One of the strategies implemented by the Center for EBP to promote cultural change is an educational course, the EBP Nurse Scholars course, in which nurses are taught about the theory, practice, and dissemination of EBP.
DETAILED DESCRIPTION OF STRATEGY
Nurses serve on the frontline of health care and have a unique opportunity to improve patient care through EBP (Hockenberry, Walden, Brown, & Barrera, 2008). The staff nurse is a critical link in bringing evidence-based changes into clinical practice. Best practice only occurs when staff continually ask questions about treatment and care, have the resources and skills necessary to search for and appraise research evidence, implement the evidence in practice, and evaluate its effectiveness (Dawes et al., 2005; Hockenberry et al., 2008).
MUSC’s experience in preparing practicing nurses to do EBP was limited. To address this, the Center for EBP, in part- nership with the Center for Professional Excellence, developed a 12-week, project-based course to prepare nurses to engage in EBP. The Center for Professional Excellence collaborates with
internal and external customers to create growth and devel- opment opportunities for registered nurses. Additionally, the center is responsible for Magnet application and designation. The EBP Nurse Scholars course provides nurses with a com- prehensive overview of EBP, prepares them to frame clinical questions, perform literature searches, analyze and evaluate evidence, and translate that knowledge into something clini- cally meaningful. Members of the Center for EBP staff and library faculty provided lectures and individual consultations on framing clinical questions, conducting comprehensive lit- erature searches, understanding statistics commonly reported in research articles, and appraising and summarizing evidence using the GRADE criteria. As part of the course, nurses se- lected a specific hospital policy and applied their knowledge to evaluating the evidence base for it. They then updated the policy to ensure it reflected current evidence and best practice.
RESULTS
The EBP Nurse Scholars course resulted in successful completion of 15 projects related to nursing care and practice, and led to significant practice improvements (Table 1). In addition, several nurses have presented their findings at regional and national conferences.
Pre- and post-course surveys demonstrated that the course improved nurses' confidence with EBP methods and skills related to research tools, statistical concepts, and study designs. Data collected included responses from students from two EBP Nurse Scholars courses: Spring 2013 and Spring 2014.
Participant Demographics
The majority of students who participated in the course were BSN-level nurses working with adult populations at MUSC.
172 Worldviews on Evidence-Based Nursing, 2016; 13:2, 172–175. © 2016 Sigma Theta Tau International
[Figure 1. Use of research tools in practice. Pre- and post-course median frequency of use (1 = never, 2 = once per month, 3 = once per week, 4 = a few times per week, 5 = once per day, 6 = multiple times per day) for resources including the Cochrane Database of Systematic Reviews, CINAHL, and the National Guideline Clearinghouse.]
Table 1. Examples of Completed EBP Nurse Scholars Course Projects
Abdominal X-Ray for NG/OG Tube Placement
Closed Arterial Line Lab Sampling System for PCICU Patients
Discharge Planning for Psychiatric Patients
Improving Care of Elderly in the Acute Care Setting
International Normalized Ratio Cut-Off for Heart Catheterizations
Lidocaine 4% for Nonemergent IV Starts
Oral Care for Infants
Peripheral Administration of Chemotherapy Agents
Postpyloric Feeding to Reduce Risk of Aspiration Pneumonia
Preventing Tourniquet-Related Injuries in Patients Undergoing TKA
Safety of “Quick Start” Depo Provera
Thermoprotective Wraps in Very Low Birth Weight Infants
The majority of participants were ICU nurses, and students typically had 0–5 years of experience.
Survey Results
Study data were collected and managed using Research Electronic Data Capture (REDCap) electronic data capture tools, hosted at MUSC. Due to the continuous nature of the
variables assessed in the pre- and post-tests for the EBP Nurse Scholars course, we used pre- and post-test median scores and the Mann-Whitney U test to measure significant changes in confidence with EBP methods, use of research tools in clinical practice, and understanding of statistical concepts and study designs. A complete listing of the survey questions can be found in Appendix 1, available with the online version of this article.
After the course, there were significant increases in nurses' confidence in critically reviewing literature (p < .001) and their interest in improving the skills necessary to use EBP in practice (p = .002), and a near-significant increase in their belief that EBP is necessary in nursing practice (p = .052). There were also significant increases in the use of EBP resources in clinical practice, including the Cochrane Database of Systematic Reviews (p < .001), CINAHL (p < .001), the National Guideline Clearinghouse (p = .049), PubMed (p = .005), and UpToDate (p = .018), as well as in the understanding of statistical concepts and study design methods (p < .001). Pre- and post-test median scores representing the improvements in EBP resource utilization and understanding of research concepts are included in Figures 1 and 2.
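The pre/post comparison described here can be reproduced in miniature. The sketch below implements the Mann-Whitney U test by hand (midranks for ties and a tie-corrected normal approximation, so no statistics library is required) and applies it to hypothetical 1-to-6 Likert-style responses; the numbers are invented for illustration and are not the MUSC survey data.

```python
import math

def mann_whitney_u(x, y):
    """Two-sided Mann-Whitney U test using midranks for ties and a
    tie-corrected normal approximation for the P value."""
    pooled = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(pooled)
    ties = []
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j][0] == pooled[i][0]:
            j += 1
        for k in range(i, j):               # assign the average (mid) rank
            ranks[pooled[k][1]] = (i + j + 1) / 2
        ties.append(j - i)
        i = j
    n1, n2, n = len(x), len(y), len(x) + len(y)
    u = sum(ranks[:n1]) - n1 * (n1 + 1) / 2  # U statistic for the first sample
    mu = n1 * n2 / 2
    tie_term = sum(t**3 - t for t in ties) / (n * (n - 1))
    sigma = math.sqrt(n1 * n2 / 12 * (n + 1 - tie_term))
    z = (u - mu) / sigma
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return u, p

# Hypothetical 1-6 "frequency of use" responses before and after a course
pre = [2, 1, 2, 3, 2, 1, 2, 2]
post = [4, 5, 4, 3, 5, 4, 4, 5]
u, p = mann_whitney_u(pre, post)
print(f"U = {u}, two-sided p = {p:.4f}")
```

With real survey data, the same comparison is usually run with a library routine such as SciPy's mannwhitneyu, which may apply a continuity correction and give a slightly different P value.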
The success of the EBP Nurse Scholars course led to the development of a project-based course for interprofessional teams of pediatric clinicians, all of which included a nurse. These teams received EBP education and worked together during this Interprofessional EBP course to frame clinical questions; systematically search for, critically appraise, and synthesize a body of research evidence for a particular disease topic. Based on their review of the evidence, teams developed clinical practice recommendations for each of the clinical questions they framed. These recommendations were used to develop admission and emergency department order sets, which were integrated into MUSC's electronic
[Figure 2. Understanding of research concepts. Pre- and post-course median self-rated understanding (1 = do not understand, 2 = understand somewhat, 3 = understand completely).]
medical record. Finally, participants who completed either the EBP Nurse Scholars course or the Interprofessional EBP course were invited to participate in an EBP Leadership Program. This program focused on implementation of their EBP project, evaluation of results, outcomes, and dissemination of findings.
Through these courses, the Center for EBP is transforming the institution's culture into one that builds EBP capacity and incorporates best evidence into clinical practice on both an individual and system level.
DISSEMINATION
The Center for EBP staff has supported nurses in translating their projects into scholarly work. The EBP Leadership Program, the follow-up to the EBP Nurse Scholars course, not only provided nurses with the skills to implement and evaluate practice change projects but also provided tools and resources for developing abstracts for poster and podium presentations. As a result of these efforts, numerous course participants have had their work accepted for presentation at regional and national conferences. The Center for EBP applied for and received a competitive, internally funded grant to support nurses in attending conferences where they present the results of their EBP projects. The grant covers the cost of printing a poster for nurses who have had abstracts accepted. Funding the printing cost of the posters further encouraged nurses to attend conferences and present their work, and motivated departments and units to support nursing engagement in EBP scholarly work.
Supporting professional growth and development is a nursing strategic priority at MUSC. To support and promote
nursing clinicians, three EBP Nurse Scholar projects were highlighted in MUSC's Magnet document. Additionally, videos of nurses engaging in the EBP process were produced and disseminated internally. The videos highlighted scholars performing a literature search, analyzing and evaluating evidence, and translating that knowledge into changes in nursing practice. WVN
LINKING EVIDENCE TO ACTION
• Organizational cultures can be transformed through provision of EBP education and mentored implementation of EBP knowledge and skills.
• A project-based EBP education program can result in an increase in utilization of EBP resources and in improvements in knowledge and attitudes related to EBP.
• The implementation and dissemination of EBP projects creates opportunities for nurses to participate in the development of scholarly products, and results in professional growth and development.
Elizabeth Crabtree, Director of Evidence-Based Practice, Quality Management, and Assistant Professor, Library & Informatics, Medical University of South Carolina, Charleston, SC; Emily Brennan, Informationist, Research Librarian, and Assistant Professor, Library & Informatics, Medical University of South Carolina, Charleston, SC; Amanda Davis,
Clinical Evidence-Based Practice Analyst, Medical University of South Carolina, Charleston, SC; Andrea Coyle, Professional Excellence Coordinator, Clinical Services Administration, Medical University of South Carolina, Charleston, SC
Address correspondence to Elizabeth Crabtree, Library & Informatics, Medical University of South Carolina, 171 Ashley Avenue, Library, Suite 408, Charleston, SC 29425; firstname.lastname@example.org
Accepted 15 August 2015. Copyright © 2016, Sigma Theta Tau International
References
Dawes, M., Summerskill, W., Glasziou, P., Cartabellotta, A., Martin, J., Hopayian, K., . . . Osborne, J. (2005). Sicily statement on evidence-based practice. BMC Medical Education, 5, 1.
Hockenberry, M., Walden, M., Brown, T., & Barrera, P. (2008). Creating an evidence-based practice environment: One hospital's journey. Journal of Trauma Nursing, 15(3), 136–142.
doi 10.1111/wvn.12126 WVN 2016;13:172–175