Levels of Evidence in Research: Examples, Hierarchies & Practice for 2025
Since 1992, evidence-based practice has been gaining traction in science and academia. While it seems self-evident that occupational practices should draw their foundations from scientific evidence, the matter remains controversial (Trinder & Reynolds, 2006). Nevertheless, this has not stopped disciplines like education, allied health, law, and management from adopting it.
A significant part of evidence-based practice is the levels of evidence, or hierarchy of evidence, in research. Generally, it applies to any type of research and evaluates the strength of scientific results. While various disciplines have their own specific levels of evidence, the most developed come from medicine and allied health (Hugel, 2013).
This article offers a brief introduction to evidence-based practice and delves into its application in specific disciplines. It also presents various examples of levels of evidence in research and shows how they rank and evaluate scientific studies. By the end, the reader should have a clear idea of what evidence-based practice is, how levels of evidence fit in, and the relevant critiques associated with it.
Table of Contents
- What is evidence-based practice?
- What is evidence?
- Examples of Levels of Evidence in Research
- Criticism of Evidence Hierarchies and Evidence-Based Practice
- How Can Specialized Training Enhance Evidence-Based Practice?
- How Can You Evaluate the Credibility of Evidence Sources?
- Integrating Levels of Evidence in Decision-Making: Practical Guidelines Across Disciplines
- How Can Interdisciplinary Collaboration Enhance Evidence-Based Practice?
- Can Fast-Track Degree Programs Enhance Evidence-Based Practice Mastery?
- What Does the Future Hold for Evidence-Based Practice?
- Can Targeted Certifications Empower Evidence-Based Practice?
- How Can Evidence-Based Practice Expand Career Opportunities?
What is Evidence-Based Practice?
Evidence-based practice (EBP) is the idea that occupational disciplines should be grounded in scientific evidence (Trinder & Reynolds, 2006). It encourages, and in some cases requires, scientists and other professionals to give greater weight to evidence when making crucial decisions.
EBP aims to replace outdated or unsound practices with more effective, evidence-based ones. It shifts decision-making away from intuition, tradition, and unsystematic experience toward well-established and well-researched scientific findings. This can also define or limit the scope of professionals' work.
Evidence-based practice has been gaining popularity since its introduction back in 1992. It has spread to various fields such as management, law, medicine, education, public policy, and more. There is also an effort to apply EBP in scientific research itself, which is called metascience. A thesis statement example related to this topic could be: “The implementation of evidence-based practice across diverse disciplines has transformed decision-making processes by prioritizing scientific evidence and promoting the adoption of effective and validated practices."
Some examples of the application of EBP in various disciplines are as follows:
Research
Evidence-based research, also known as metascience, is the utilization of scientific research methodology to study science itself, with the aim of increasing the quality and efficiency of the research process (Ioannidis, 2020). Because metascience concerns itself with all fields of research, it is also referred to as “a bird’s eye view of science.”
Metascience is made up of five major areas of study:
- Methods It seeks to highlight poor practices in research such as biases, manipulation of statistics, and poor study designs. It also finds various ways to reduce such practices.
- Reporting It also seeks to identify issues in disseminating, explaining, reporting, and popularizing research. Poor reporting results in difficulty in the interpretation of results or even miscommunication. Metascience proposes solutions such as establishing reporting standards and improved transparency.
- Reproducibility Replication is an essential part of scientific research. However, there has been a widespread issue in replicating results in various studies, which puts into question their reliability. Such issues often plague psychology and medicine (Lehrer, 2010).
- Evaluation It aims to develop a scientific foundation for peer review, which is currently plagued by issues such as biases, underrepresentation, and more (Smith, 2006). Metascience evaluates alternative systems like open peer review, pre-publication peer review, and post-publication peer review.
- Incentives It also seeks to promote better studies through enhanced incentive systems. Metascience studies the effectiveness, accuracy, costs, and benefits of various approaches in evaluating and ranking research and those who perform them (Ioannidis, 2020).
It is also important to note that a crucial aspect of conducting evidence-based research is formulating well-crafted research questions. Knowing how to write a research question is essential for guiding the inquiry and setting the direction of the study. By carefully crafting research questions, researchers can identify the specific aspects of the phenomenon under investigation and develop a focused approach to gathering and analyzing data.
Medicine
Medicine and allied health have adopted evidence-based practice under the name evidence-based medicine (EBM), an approach that optimizes decision-making by utilizing evidence from well-conducted and well-designed research. It does so by balancing three components: research-based evidence, patient values and preferences, and clinical expertise (Haughom, 2015). With this approach, medical practitioners can improve the quality of healthcare, increase patient satisfaction, and potentially cut down the cost of treatments. This is something that you can learn while taking up a public health online doctorate or in an earlier course of study.

The field already relies on empirical evidence to some degree. However, EBM goes further by classifying research by its epistemological strength: the strongest types (systematic reviews, meta-analyses, and similar studies) produce strong recommendations, while weaker types (such as case-control studies) yield only weak recommendations.
Evidence-based medicine is applied to various parts of the discipline, from education to the administration of health institutions (Eddy, 1990). It advocates that decisions and policies should be based on evidence as much as possible, rather than on the beliefs of experts, practitioners, or administrators. For example, it ensures that a doctor’s opinion, which may be limited by biases or knowledge gaps, is supplemented by information from the current literature in order to provide the best recommendation.
Education
Evidence-based education is the utilization of well-designed and well-researched studies to identify which education methods work best and produce the best results. It combines evidence-based learning and evidence-based teaching. While the reception of EBE is generally positive, some critics point out that much educational research is of poor quality and that EBE narrows the scope of relevant research (Biesta, 2007). Other studies are difficult or impossible to replicate.
Some examples of evidence-based learning techniques are as follows:
- Errorless training This training style was introduced by Charles Ferster with the assistance of B.F. Skinner. They developed teaching methods designed so that students are not required to make errors, and do not make them, when learning new information or processes. They noted that errors are not a necessary part of learning, and mistakes are not blamed on the learner.
- N-back training Also known as the n-back task, this is a performance task commonly used in cognitive neuroscience and psychology to measure a learner’s working memory capacity. It involves presenting a series of stimuli (a shape, letter, etc.) spaced a few seconds apart. The learner must decide whether the current stimulus matches the one displayed n trials earlier.
- Spaced repetition Often used with flashcards, new information is presented repeatedly, spaced out by increasing intervals of time. Newer flashcards and more challenging topics are shown more frequently than easier or older ones. The technique takes advantage of the psychological spacing effect: learning is more effective when spread over several sessions. Spaced repetition has been shown to improve the rate of learning (Smolen, Zhang, & Byrne, 2016). A minimal code sketch of both this scheduling idea and the n-back check follows this list.
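The last two techniques reduce to simple rules that can be made concrete in code. Below is a minimal Python sketch; the function names, the box intervals, and the plain-dict flashcard are illustrative assumptions, not a standard implementation. The first function flags the “match” trials of an n-back task, and the second implements a Leitner-style scheduler in which correct answers promote a card into boxes with progressively longer review intervals.

```python
from datetime import date, timedelta

def n_back_targets(stimuli, n):
    """Flag trials where the current stimulus matches the one
    presented n trials earlier (the 'match' responses in n-back)."""
    return [i >= n and stimuli[i] == stimuli[i - n] for i in range(len(stimuli))]

# Leitner-style spaced repetition: a correct answer promotes a card to a box
# with a longer review interval; a miss sends it back to the first box.
INTERVALS_DAYS = [1, 2, 4, 8, 16]

def reschedule(card, correct, today=None):
    """Update a card dict ({'box': int}) and stamp its next due date."""
    today = today or date.today()
    card["box"] = min(card["box"] + 1, len(INTERVALS_DAYS) - 1) if correct else 0
    card["due"] = today + timedelta(days=INTERVALS_DAYS[card["box"]])
    return card
```

For example, n_back_targets(list("ABABCA"), 2) yields [False, False, True, True, False, False], since only the third and fourth letters repeat the stimulus shown two trials earlier.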
What is Evidence?
The entire practice revolves around evidence, the various types of evidence in research, and their validity. Evidence is the result or product of scientific research that enables decision-making. It can be divided into two main categories:
- Primary information (unfiltered) It contains the original data or results and analysis from the scientific study. It includes no interpretation or external evaluation.
- Secondary information (filtered) This is considered the highest-quality evidence. Filtered information includes synthesis, analysis, interpretation, evaluation, and/or commentary on unfiltered information. Additionally, it may come with relevant recommendations for practice.
Evidence-based practice involves levels of evidence that help practitioners determine the “strength” or value of the evidence. The hierarchy of evidence depends on the discipline itself and how each field develops its standardized process of evidence evaluation. A prime example is the evidence pyramid used by medical practitioners and researchers.
Application of Hierarchy of Evidence
Medical experts and researchers rank evidence according to quality, resulting in the evidence pyramid (Queensland University of Technology, 2019).
Secondary Information (Filtered)
The top three levels of evidence in medical research are composed of filtered information. These include the following, starting from the evidence or source with the highest quality:
- Systematic reviews Such reviews synthesize peer-reviewed publications that discuss specific health issues or topics, using standardized processes for selecting and evaluating the articles.
- Critically-appraised topics and individual articles These are articles that summarize and evaluate evidence from individual studies published in journals. They are often written to answer a specific medical question, problem, or concern.
Primary Information (Unfiltered)
The next three levels of evidence consist of unfiltered information, again starting from the highest quality:
- Randomized controlled trials Experiments in which subjects are randomly divided into groups, with one or more groups receiving a procedure, intervention, or substance. A control group receives no treatment and serves as the benchmark of the study. The effects of the treatment on each group are then recorded, assessed, and evaluated.
- Cohort studies A study that observes a group of people sharing some defining characteristic, called a cohort, over a long period of time. Typically, the cohort has been exposed to a suspected risk factor or has received a treatment at some point. Experts observe the effects of that exposure and compare them with those of cohorts having different exposure levels.
- Case-control studies and series In this lower level of evidence, a case-control study observes a group of people with a condition or disease alongside a suitable control group. Potential risk factors for, or attributes associated with, the disease are identified and examined by comparing the diseased subjects with the control group. A case series is a collection of patients or subjects sharing common attributes related to a condition, treatment, or disease; their relevant outcomes are examined and evaluated.
Background Information
Background information, or expert opinion, is not considered evidence. However, it forms the foundation level of the pyramid, supporting the other evidence, and it provides context for interpreting the evidence when needed.
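For readers who want to work with these levels programmatically, the pyramid can be treated as a simple ordered structure. Below is a minimal Python sketch; the labels and the helper function are illustrative assumptions, not part of any official tool.

```python
# The evidence pyramid as an ordered list, strongest level first. The
# foundation tier (expert opinion) is context, not evidence.
EVIDENCE_PYRAMID = [
    ("filtered",   "systematic reviews"),
    ("filtered",   "critically-appraised topics and individual articles"),
    ("unfiltered", "randomized controlled trials"),
    ("unfiltered", "cohort studies"),
    ("unfiltered", "case-control studies and case series"),
    ("foundation", "background information / expert opinion"),
]

def pyramid_level(source_type):
    """Return (level, tier) for a source type, where level 1 is strongest."""
    for level, (tier, name) in enumerate(EVIDENCE_PYRAMID, start=1):
        if name == source_type:
            return level, tier
    raise ValueError(f"unrecognized source type: {source_type!r}")
```

For instance, pyramid_level("cohort studies") returns (4, "unfiltered"), placing cohort studies in the fourth level from the top.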
This pyramid is just one application of the hierarchy of evidence. Each discipline and its sub-fields can have its own process for evaluating evidence. For example, experts have proposed more than 80 different hierarchies for assessing and evaluating medical evidence (Siegfried, 2017).

Examples of Levels of Evidence in Research
As mentioned above, the level of evidence in research is a method of ranking the relative strength or validity of results from scientific studies. Over the years, there have been multiple proposals for assessing levels of evidence in research. Examples include:
Grading of Recommendations Assessment, Development and Evaluation
The Grading of Recommendations Assessment, Development, and Evaluation (GRADE) process was proposed in 2001 by guideline developers, methodologists, clinicians, public health scientists, and other interested members. It measures the strength of a recommendation (or confidence in the estimated effect) and the certainty of the evidence in a given scientific study.
It is endorsed by various international health organizations, such as the World Health Organization, the Canadian Task Force for Preventive Health Care, and the U.K. National Institute for Health and Care Excellence (NICE), among others (McMaster University & Evidence Prime Inc.). While it has its roots in medicine and allied health disciplines, it can also be applied to research dealing with other topics.
GRADE provides the following ratings to various evidence:
- High Strong confidence that the true effect is similar or close to the estimated effect.
- Moderate Moderate confidence in the estimated effect; the true effect is likely close to it, but substantial differences may exist.
- Low Limited confidence in the estimated effect, which may be substantially different from the true effect.
- Very Low Very little confidence in the estimated effect, which is likely to be substantially different from the true effect.
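As a rough illustration of how these ratings come about, the sketch below encodes the core GRADE logic in Python: evidence from randomized trials starts at High and observational evidence at Low, each serious concern (risk of bias, inconsistency, indirectness, imprecision, publication bias) moves the rating down one level, and factors such as a very large effect can move it up. This is a simplified teaching sketch, not an official GRADE tool; the function name is an assumption.

```python
from enum import IntEnum

class Certainty(IntEnum):
    VERY_LOW = 1
    LOW = 2
    MODERATE = 3
    HIGH = 4

def grade_certainty(randomized, downgrades=0, upgrades=0):
    """Simplified GRADE arithmetic: randomized evidence starts High,
    observational evidence starts Low; apply down- and upgrading
    factors and clamp the result to the four-level scale."""
    start = Certainty.HIGH if randomized else Certainty.LOW
    score = max(Certainty.VERY_LOW, min(Certainty.HIGH, start - downgrades + upgrades))
    return Certainty(score)
```

For example, a randomized trial downgraded twice, say for imprecision and risk of bias, lands at Certainty.LOW.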
Guyatt and Sackett
G.H. Guyatt and D.L. Sackett proposed the first version of this hierarchy of primary studies back in 1995 (Guyatt & Sackett, 1995). T. Greenhalgh further modified the ranking (Greenhalgh, 1997), resulting in the following (from highest to lowest in value):
- Meta-analyses and systematic reviews of randomized controlled trials (RCTs) with definitive results (exact solution or answer to the query or question).
- RCTs with definitive results.
- RCTs with non-definitive results.
- Cohort studies
- Case-control studies
- Cross-sectional surveys
- Case reports
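Encoded as a lookup table, this ranking can be used to sort a set of retrieved studies from strongest to weakest, as in the minimal Python sketch below (the type labels and the helper function are illustrative assumptions):

```python
# Greenhalgh's modification of the Guyatt and Sackett hierarchy as a rank
# table (1 = strongest); unknown study types sort last.
HIERARCHY_RANK = {
    "systematic review of RCTs": 1,
    "RCT, definitive results": 2,
    "RCT, non-definitive results": 3,
    "cohort study": 4,
    "case-control study": 5,
    "cross-sectional survey": 6,
    "case report": 7,
}

def strongest_first(studies):
    """Sort (title, study_type) pairs from strongest to weakest evidence."""
    return sorted(studies, key=lambda s: HIERARCHY_RANK.get(s[1], len(HIERARCHY_RANK) + 1))
```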
Saunders et al.
B. Saunders and his colleagues proposed a protocol that assigns research results and reports to six categories. The assignment is based on theoretical background, research design, general acceptance, and evidence of possible harm (Saunders, Berliner, & Hanson, 2003). Much like the other hierarchies, it is rooted in allied health.
In all cases, the research outcome should have a descriptive publication, such as a manual or similar document.
- Category 1 Treatments or results in this category are well-supported and efficacious, ideally backed by two or more randomized controlled outcome studies.
- Category 2 Results in this category are supported and probably efficacious, based on positive outcomes from non-randomized study designs with some form of control.
- Category 3 Outcomes in this category are supported and acceptable. They may be backed by one controlled or uncontrolled study, or by a series of single-subject studies.
- Category 4 Promising and acceptable results belong to this category. Their support may be limited to general acceptance and the existing literature.
- Category 5 Innovative and novel results are assigned to this category. It includes approaches that are considered not harmful but are not widely discussed or cited in the literature.
- Category 6 Concerning results belong to this category. It includes outcomes that may possibly do harm, as well as those with unknown or untested theoretical foundations.
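As a loose illustration only, the decision logic of such a protocol might be encoded as below. The thresholds paraphrase the category descriptions above and are assumptions for demonstration, not Saunders et al.'s exact criteria.

```python
def saunders_category(rcts=0, controlled=0, uncontrolled=0,
                      generally_accepted=False, possibly_harmful=False):
    """Assign a treatment outcome to one of the six categories, loosely
    paraphrasing Saunders et al. (2003); thresholds are illustrative."""
    if possibly_harmful:
        return 6  # concerning: possible harm or untested foundations
    if rcts >= 2:
        return 1  # well-supported and efficacious
    if controlled >= 2:
        return 2  # supported and probably efficacious
    if controlled == 1 or uncontrolled >= 1:
        return 3  # supported and acceptable
    if generally_accepted:
        return 4  # promising and acceptable
    return 5      # innovative and novel
```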
Criticism of Evidence Hierarchies and Evidence-Based Practice
While many disciplines have adopted and accepted evidence-based practice and evidence hierarchies, criticism has grown in recent years. Much of it highlights the shortcomings of EBP in medicine and allied health, the disciplines where it is most practiced. For instance, a 2016 survey found that 70% of researchers were unable to reproduce the experiments of their peers (Baker, 2016). The same survey revealed that 52% of respondents agree that there is a significant reproducibility crisis in research.
Many critical works have been published in the past decade or so. A survey of these works (Solomon, 2011) found that they usually fall into one of the following categories:
- Procedural aspects and issues of evidence-based medicine.
- The fallibility of EBM is greater than expected.
- Some experts consider EBM incomplete as a philosophy of science.
Furthermore, many practitioners and administrators point out that EBM has limitations in informing patient care. Critics note that the hierarchy of evidence does not adequately account for research on the efficacy and safety of medical interventions, and that studies designed under EBM guidelines often fail to define key terms, to consider the validity of non-randomized controlled trials, and to address a number of study design limitations (Gugiu et al., 2012).
Others have specifically criticized the hierarchy of evidence itself. Borgerson wrote that the rankings are not absolute and do not carry automatic epistemic justification, and that researchers should take a closer look at social mechanisms for managing pervasive biases (Borgerson, 2009).
A. La Caze added that basic science, although relegated to the lower tiers of EBM, plays a significant role in specifying and contextualizing experiments, and that these lower ranks of evidence also help interpret and analyze the resulting data (La Caze, 2010).
How Can Specialized Training Enhance Evidence-Based Practice?
Advanced training programs equip professionals with the analytical tools and current methodologies necessary for critically appraising and applying emerging evidence. Tailored courses and certifications offer hands-on experiences, enabling practitioners to streamline decision-making processes while adapting to evolving standards. For instance, pursuing short certificate programs can rapidly build expertise and support continuous professional growth in evidence-based practice.
How Can You Evaluate the Credibility of Evidence Sources?
In reviewing the authenticity and reliability of evidence, it is essential to consider factors beyond the study’s design. Examine the reputation of the publishing entity and any disclosures related to funding or conflicts of interest. Peer-reviewed status, institutional affiliations, and transparent methodologies further ensure that the source meets rigorous standards. Additionally, recognizing the influence of educational credentials on critical appraisal is key; professionals often verify their expertise based on recognized institutional legitimacy, as demonstrated by practices covered in our article Are online colleges respected?
Integrating Levels of Evidence in Decision-Making: Practical Guidelines Across Disciplines
Incorporating levels of evidence into decision-making processes is essential for ensuring that practices are informed by reliable and valid research. Whether in medicine, education, management, or public policy, understanding how to apply evidence hierarchies can help practitioners make informed, effective decisions. Below are practical steps and examples for integrating levels of evidence into decision-making.
1. Align Evidence with Contextual Needs
Not all decisions require the highest level of evidence, such as systematic reviews or meta-analyses. Practitioners must evaluate the context of their decisions to determine the level of evidence required.
- Example in Medicine: A randomized controlled trial (RCT) might be necessary for adopting a new surgical technique, while a cohort study may suffice for evaluating patient follow-up care.
- Example in Education: Spaced repetition techniques supported by experimental studies can guide curriculum design, while qualitative studies may inform classroom strategies for student engagement.
Actionable Tip: Use the hierarchy as a guide, not a rigid rule, and assess whether the available evidence aligns with the specific problem or question.
2. Combine Evidence Types for Comprehensive Decisions
While higher-level evidence offers stronger reliability, combining it with lower-level studies can provide a more nuanced understanding.
- Example in Public Policy: Policymakers designing environmental regulations might combine meta-analyses of climate data with qualitative reports from community stakeholders to balance scientific accuracy and societal impact.
Actionable Tip: Cross-reference filtered (e.g., systematic reviews) and unfiltered evidence (e.g., case studies) to capture both broad trends and specific insights.
3. Tailor Evidence Evaluation to Stakeholder Needs
Stakeholders in different fields prioritize varying aspects of evidence, such as cost-effectiveness, practicality, or ethical considerations.
- Example in Management: For implementing workplace wellness programs, managers might prioritize evidence from cohort studies on employee productivity alongside cost analyses and employee surveys.
Actionable Tip: Engage stakeholders early to understand their priorities and ensure that the evidence presented addresses their concerns.
4. Critically Assess Limitations of Evidence
Even the highest levels of evidence have limitations. Practitioners should critically evaluate the relevance, applicability, and potential biases in the studies they use.
- Example in Allied Health: A systematic review may focus on a specific population, such as middle-aged adults, and may not be directly applicable to pediatric care.
Actionable Tip: Consider the generalizability and potential gaps in the evidence and seek complementary data where necessary.
5. Emphasize Continuous Evaluation and Feedback
Decision-making is not a one-time event but a dynamic process. Regularly revisiting and updating decisions based on new evidence ensures their continued relevance and effectiveness.
- Example in Education: An evidence-based teaching strategy should be periodically evaluated for its effectiveness through classroom observations and student performance metrics.
Actionable Tip: Establish a feedback loop where decisions are reviewed and refined as new evidence becomes available.
How Do Levels of Evidence and Evidence-Based Practice Apply to Your Research?
While many of the protocols in evidence-based practice are discipline-specific, EBP is a great introduction to how processes and protocols are followed in research. In its most basic form, evidence-based practice demonstrates the importance of well-supported, verifiable evidence, especially in scientific studies.
Examining the evidence, whether within EBP or not, is one of the cornerstones of science. Having a standardized method of evaluating evidence, such as a hierarchy, streamlines the entire research workflow. At the very least, it provides researchers with a tool for determining the value of evidence, its sources, and its relevance to the study.
While it may seem obvious that evidence and proper sources should be considered in every scientific study, EBP and levels of evidence remain controversial. Experts and practitioners have published critiques examining the applicability and validity of the protocols in their respective disciplines. However, like any scientific process, EBP is being improved through continuous evaluation. You can take these considerations into account when you are about to embark on scientific research. And in case you need it, you can look at a research proposal example to guide you.
How Can Interdisciplinary Collaboration Enhance Evidence-Based Practice?
Leveraging expertise across diverse fields can significantly elevate the implementation of evidence-based practice. By integrating perspectives from medicine, education, management, and law, interdisciplinary collaboration refines evaluation methods and supports more robust decision-making frameworks. Such collaboration not only bridges differing analytical approaches but also fosters innovative strategies that address complex, real-world challenges. Access to continued interdisciplinary training through recognized platforms like good online colleges further reinforces practitioners’ ability to adopt and advance best practices.
Can Fast-Track Degree Programs Enhance Evidence-Based Practice Mastery?
Accelerated academic programs are increasingly aligning with the demands of rapidly evolving industries by integrating evidence-based methodologies directly into their curricula. These programs offer condensed, rigorous training that equips professionals with advanced analytical tools and decision-making frameworks critical for applying evidence in real-world contexts. Learners benefit from immersive coursework and hands-on experiences that swiftly translate into professional competencies, thereby shortening the gap between theory and practice. For those seeking efficient educational pathways, exploring fast track degrees can offer a strategic advantage in mastering evidence-based practice in a compressed timeframe.
What Does the Future Hold for Evidence-Based Practice?
The rapid evolution of digital tools and data analytics is reshaping how evidence is gathered, analyzed, and implemented across disciplines. Emerging trends include the integration of artificial intelligence for real-time evidence synthesis, dynamic data repositories that continuously update practice guidelines, and enhanced collaboration between research communities and stakeholders. These developments are paving the way for more adaptive and transparent decision-making models that prioritize both scientific rigor and practical applicability. Professionals are also exploring innovative educational pathways, with programs offered by the most affordable online colleges supporting continuous learning and up-to-date skillsets in this evolving field.
Can Targeted Certifications Empower Evidence-Based Practice?
In today’s rapidly evolving research landscape, specialized certifications can bridge gaps between traditional methodologies and emerging analytical techniques. They provide clear frameworks for integrating new data analysis tools and systematic evaluation methods, ultimately enhancing the practical application of evidence across disciplines.
Targeted training programs reinforce core competencies and promote adaptive decision-making by aligning with evolving industry standards. For professionals looking to advance their expertise swiftly, quick certifications that pay well offer industry-relevant skills and up-to-date knowledge crucial for robust evidence-based practice implementation.
How Can Evidence-Based Practice Expand Career Opportunities?
Adopting evidence-based practice not only refines decision-making but also significantly enhances career trajectories. Professionals across sectors are increasingly valued for integrating current scientific methodologies with practical problem-solving skills, thereby securing competitive advantages. In industries ranging from healthcare to vocational services, a commitment to evidence-based strategies can differentiate candidates and foster long-term growth. For example, many technical roles now demand that practitioners employ data-driven approaches to optimize outcomes—a trend that is reflected in expanding opportunities in trade schools careers. Leveraging these practices enables individuals to align their expertise with evolving industry standards while opening new avenues for professional advancement.
Key Insights
- Growth and Impact of Evidence-Based Practice (EBP):
  - Introduced in 1992, EBP has influenced various fields like medicine, education, law, and management by promoting decision-making based on scientific evidence rather than intuition or tradition.
- Role and Hierarchy of Evidence:
  - The hierarchy of evidence ranks the strength of research findings, guiding practitioners in assessing the reliability and validity of scientific studies.
  - Medicine and allied health have well-developed hierarchies, with systematic reviews and meta-analyses at the top.
- Application Across Disciplines:
  - EBP is applied differently across disciplines, with examples in research (metascience), medicine (evidence-based medicine), and education (evidence-based education).
  - Metascience examines research practices, reproducibility, and peer review processes, aiming to improve research quality.
- Criticism and Challenges:
  - Critiques of EBP highlight issues such as reproducibility crises, limitations in informing patient care, and the inflexibility of evidence hierarchies.
  - There is ongoing debate about the epistemological justification of evidence rankings and the need to consider biases and the role of basic science.
FAQ
- What is evidence-based practice (EBP)? Evidence-based practice is a decision-making approach that integrates the best available scientific evidence with professional expertise and patient or client preferences. It aims to replace outdated or unsound practices with those validated by research.
- How does EBP differ from traditional practices? Traditional practices often rely on intuition, experience, and established routines, while EBP emphasizes decisions based on rigorous scientific research and evidence, improving the effectiveness and reliability of outcomes.
- What are the key components of evidence-based medicine (EBM)? EBM involves balancing three main components: research-based evidence, patient values and preferences, and clinical expertise. This approach aims to optimize decision-making in healthcare.
- What are the main criticisms of evidence-based practice? Critics argue that EBP can be overly rigid, may not adequately consider the complexity of individual patient cases, and can suffer from reproducibility issues. There are also concerns about the hierarchical ranking of evidence types, which may not always reflect their practical relevance.
- How is evidence ranked in the hierarchy of evidence? Evidence is typically ranked based on the methodological quality and reliability of the study design. Systematic reviews and meta-analyses are at the top, followed by randomized controlled trials, cohort studies, case-control studies, and case reports.
- Why is reproducibility important in scientific research? Reproducibility ensures that research findings are reliable and can be consistently duplicated by other researchers, which is crucial for validating results and advancing scientific knowledge.
- How does evidence-based education work? Evidence-based education uses well-researched studies to determine the most effective teaching and learning methods. Techniques such as errorless training, n-back training, and spaced repetition are examples of evidence-based learning strategies.
- What is metascience, and why is it important? Metascience, or evidence-based research, studies the scientific research process itself to identify and mitigate issues such as biases, poor reporting, and reproducibility problems. It aims to improve the overall quality and efficiency of scientific research.
- What is the GRADE system in evaluating evidence? The GRADE (Grading of Recommendations Assessment, Development, and Evaluation) system rates the quality of evidence and strength of recommendations, helping practitioners make informed decisions based on the confidence level in the evidence.
- How can researchers benefit from understanding levels of evidence? Understanding levels of evidence helps researchers critically assess the validity and strength of scientific studies, ensuring that their work is based on the most reliable and relevant evidence available.
References:
- Baker, M. (2016). 1,500 scientists lift the lid on reproducibility. Nature, 533, 452-454. https://doi.org/10.1038/533452a
- Biesta, G. (2007). Why “what works" won’t work: evidence-based practice and the democratic deficit in educational research. Educational Theory, 57 (1), 1-22. https://doi.org/10.1111/j.1741-5446.2006.00241.x
- Borgerson, K. (2009). Valuing evidence: bias and the evidence hierarchy of evidence-based medicine. Perspectives in Biology and Medicine, 52 (2), 218-233. https://doi.org/10.1353/pbm.0.0086
- La Caze, A. (2010). The role of basic science in evidence-based medicine. Biology & Philosophy, 26 (1), 81-98. https://doi.org/10.1007/s10539-010-9231-5
- Eddy, D. M. (1990). Practice policies: Where do they come from? JAMA: The Journal of the American Medical Association, 263 (9), 1265-1265. https://doi.org/10.1001/jama.263.9.1265
- Greenhalgh, T. (1997). How to read a paper: Getting your bearings (deciding what the paper is about). BMJ, 315 (7102), 243-246. https://doi.org/10.1136/bmj.315.7102.243
- Gugiu, P. C., Westine, C. D., Coryn, C. L., & Hobson, K. A. (2012). An application of a new evidence grading system to research on the chronic care model. Evaluation & the Health Professions, 36 (1), 3-43. https://doi.org/10.1177/0163278712436968
- Guyatt, G. H., & Sackett, D. L. (1995). Users’ guides to the medical literature. IX. A method for grading health care recommendations. Evidence-Based Medicine Working Group. JAMA: The Journal of the American Medical Association, 274 (22), 1800-1804. https://doi.org/10.1001/jama.274.22.1800
- Haughom, J. (2015). 5 reasons the practice of evidence-based medicine is a hot topic. Health Catalyst.
- Hugel, K. (2013, May 16). The journey of research: Levels of evidence. CAPhO.org.
- Ioannidis, J. P. (2020). Meta-research: Evaluation and improvement of research methods and practices. Krise Der Demokratie Krise Der Wissenschaften? 22, 101-118. https://doi.org/10.7767/9783205233008.101
- Lehrer, J. (2010, December 6). The truth wears off. The New Yorker.
- McMaster University, & Evidence Prime Inc. (n.d.). Resources. GradePro.org.
- Queensland University of Technology. (2019, January 3). Evidence explained. QUT Library.
- Saunders, B., Berliner, L., & Hanson, R. (2003). Child Physical and Sexual Abuse: Guidelines for Treatment. PsycEXTRA Dataset. https://doi.org/10.1037/e319002004-001
- Siegfried, T. (2017, November 13). Philosophical critique exposes flaws in medical evidence hierarchies. Science News.
- Smith, R. (2006). Peer review: A flawed process at the heart of science and journals. Journal of the Royal Society of Medicine, 99 (4), 178-182. https://doi.org/10.1258/jrsm.99.4.178
- Smolen, P., Zhang, Y., & Byrne, J. H. (2016). The right time to learn: Mechanisms and optimization of spaced learning. Nature Reviews Neuroscience, 17 (2), 77-88. https://doi.org/10.1038/nrn.2015.18
- Solomon, M. (2011). Just a paradigm: Evidence-based medicine in epistemological context. European Journal for Philosophy of Science, 1 (3), 451-466. https://doi.org/10.1007/s13194-011-0034-6
- Trinder, L., & Reynolds, S. (2006). Evidence-Based Practice: A Critical Appraisal. Oxford: Blackwell Science. Retrieved from Google Books.
