Research is systematic inquiry and provides a special form of information to enable decision-making. For the special nature of this information to be appropriately used, clarity is required on the nature of the claim, the perspectives that framed it, and the trustworthiness and relevance of the claim, including alternative explanations for the data used as a warrant for the claim.
Research evidence is very diverse and driven by many different perspectives, research questions, and methods, which makes it challenging to evaluate evidence claims. A first stage in assessing whether an evidence claim is justified and fit for the purposes to which it will be applied requires clarification of the variation in the
- values and perspectives framing the research,
- consequent research methods used, and
- resultant findings and evidence claims.
Perspectives Driving the Interpretation and Production of Research and the Evidence Claims Made
People vary in how they perceive the world and what is important to them. They vary in their values, the research questions that they ask, and the research findings that they might consider. The planning, production, and interpretation of the research are thus driven by people’s implicit and explicit perspectives (ideological and theoretical assumptions, values, and priorities).
There is variation in the extent that there is consensus in worldviews and pertinent “facts” within and across cultures and historical time. Individuals and groups vary in what aspects of an issue they consider important, how these should be best analyzed, and what success means. Different perspectives may consider different “facts” and use methodologically robust methods to reach different valid conclusions. Disagreements about research evidence may be due to differences in the perspectives underlying the production of that evidence, or due to issues of methodological robustness, or to a mixture of the two. Being explicit about both perspectives and methods can help reduce hidden bias and misunderstandings about research findings.
In practice, much of the research agenda (and thus what is produced by research) is driven by the perspectives and interests of those funding and undertaking research. In recent years there has been growing concern that there should be greater societal involvement in the prioritization of research questions and the interpretation of research findings. This is partly a broad issue of societal engagement in research and also a more specific democratic issue of a right to involvement by those likely to be affected by the research.
There are also pragmatic reasons. First, involving broader perspectives in the research process may lead to overt consideration of different ways of framing an issue and so to more relevant and useful research. Second, engaging these other actors in the research process may make it more likely that these individuals and groups are aware of, and motivated to use, research in their own thinking and decision-making. All of these factors have led health research to involve patients, carers, and physicians in setting research agendas and in interpreting research findings.
The use of research findings is sometimes seen as a one-way process of research production leading to translation, evidence-informed decision-making, and the implementation of those decisions. Use of research is now more often understood as arising from a dynamic, interactive two-way process of "pull" (demand by users of research, including those affected by study findings) as well as "push" (the production and translation of research). It is therefore important to consider the perspectives that drive both the use and the production of research.
In education research, much of the evidence (particularly that communicated through web-based portals to users of research) is concerned with questions of the effectiveness of interventions and the fidelity of their implementation. Intervention developers and evaluators are concerned with the range of competing intervention programs that can be added on to school provision. Providers and users of services, on the other hand, would be expected to be problem-driven, with an interest in the range of options that could be considered for responding to educational challenges. Research that examines theories of change and the mechanisms involved in different educational strategies, and how research evidence may inform the adoption of innovative approaches adapted to local contexts, could be particularly useful. An appraisal of research evidence is thus not only about whether an evidence claim is justifiable in its own terms. It is also about whether it is addressing the most appropriate questions for different users of research. If most research has arisen from a particular perspective, such as the efficacy of "add-on" programs, then this may limit the options for those wishing to apply broader question-driven approaches to using research to inform their policy and practice.
Perspectives also link to research ethics: the moral values that govern the behavior of researchers. Codes of ethics for education researchers set out principles and ethical standards, ranging from principles of social responsibility to standards of competence, use and misuse of expertise, and avoidance of harm. Concerns about principles and standards can include the morality of the researcher's behavior, the participation of users in a democratic research process, issues of equity, and issues of research waste. If a research study is not considered ethical, then this may also undermine the evidence claims that it makes.
Types of Claim: Research Questions, Methods, and Paradigms
In addition to the perspectives driving research questions, research is very diverse in the types of questions that are asked and the methods used to address these. Education research alone has many different research themes and methods. In addition, there are further disciplines such as psychology and economics that study education topics with their own particular research focus and research designs.
Research can investigate many aspects of a phenomenon including its nature and extent, the impact of an intervention or service, and the processes by which something happens or is understood. Research can study many forms of evidence including experiential and tacit knowledge. It can create findings that describe, measure, compare, relate, and assess value. Within the wide diversity of research approaches, a common distinction is between inductive and deductive paradigms for relating theory to data created and interpreted within such theories.
Inductive approaches tend to ask open questions and use iterative configuring methods and analysis, leading to results framed in terms of conceptual inference. Such research may change the way that people understand issues, sometimes called the conceptual or enlightenment use of research. Some of this research investigates how people perceive or experience phenomena. Other research tries to develop new explanatory concepts or theories. These studies often use configuring methods of analysis of qualitative data.
Deductive approaches tend to ask closed questions and use a priori aggregative methods and analysis, leading to tests of hypotheses framed in terms of instrumental inference. Such research provides "facts" that can lead to instrumental decision-making. Some of this research is based on statistics and the probability that something does or does not pertain. Some of these studies are based on sampling from specific groups and contexts and then generalizing to the rest of the population from which the sample was drawn. These studies often use aggregative methods of analysis of quantitative data.
Both paradigms make evidence claims related to theory and to empirical data at different levels of analysis.
Scope, Level, Focus, and Generalizability of Evidence Claims
Evidence claims also vary in their scope in terms of the breadth of the issue that they address, and the depth of detail of the claim. An evidence claim at one level may mask a claim at a more micro or macro level of analysis. A study of educational outcomes and the processes by which these outcomes occurred for all school students may, for example, mask important differences between subgroups of those students.
It may be that research is not asking the most relevant questions for potential users of that research. If the research questions were framed with a broader or narrower, or more macro or more micro, scope, they might lead to different findings and implications. As discussed earlier, there is also the possibility of alternative explanations for research findings using very different perspectives and theoretical lenses. Critical appraisal includes consideration of such alternative explanations. Are there alternative ways of framing the issues that might be more helpful or more parsimonious? Are there credible alternative explanations for this evidence, taking account of the strength of the evidence claim and the exploration and exclusion of these alternative explanations?
Some research is also undertaken to provide findings with generic knowledge while others make more context-specific claims. If an evidence claim includes generalizations (e.g., in a theory, theory of change, or mechanisms), then it may have a wide scope relevant to many contexts. It could be that one piece of research evidence may be used to make strong evidence claims that are transferable to particular contexts and more limited claims for more generic contexts. There are tools to assess the extent of generalizability of such claims, in other words, the extent of the warrant. Evidence claims may also refer to the implementation and scale-up of policy and practice decisions informed by research evidence.
With different perspectives producing and interpreting evidence according to different logics, it may not always be possible to adjudicate between the strength and veracity of different evidence claims. Two seemingly contradictory evidence claims may both be technically and logically defensible if they are produced from very different worldviews, research questions, and types of analysis. Users of research depend on transparency in the rationale for an evidence claim, including any data, analysis, logic, and underlying perspectives, to assess the claims being made. Their task might be made easier, and research might be more productive, if there were more explicit layering of the levels of research questions and the scope of related evidence claims.
The Basis for the Evidence Claim
How Is Evidence Brought Together to Make the Evidence Claim?
Making an evidence claim based on the warrant of what is known from research requires consideration of all of the current evidence relevant to that claim. Claims based on evidence from only one or a few primary research studies may therefore be very limited.
For a researcher, the most pressing issue may be the trustworthiness of their own particular study. But the findings of single studies, however technically excellent, may differ from those of all the other technically good-quality relevant research studies. In other words, evidence claims from single studies may represent a form of selection bias in that they do not reflect all of the research evidence available. What is required is an evidence claim based on all of the current relevant evidence base.
The argument is that any evidence claim can be appraised according to the extent that it is appropriate considering all of the relevant evidence rather than any one study. Evaluating the methodological adequacy of an individual study is a building block of evaluating the whole evidence base. Appraising an individual study alone is not likely to provide a very useful evidence claim to inform decisions.
In this way of thinking, the evidence base (rather than individual studies) is the starting point for policy, practice, and individual decision-makers. It is also the starting point for planning new research in terms of how it might change the preexisting evidence base. When undertaking a power calculation for an effectiveness study, for example, the usual focus is the sample size necessary to reveal a statistical effect in the primary study. Taking an evidence-based approach, however, the calculation is of the sample size necessary to reveal a change in the results of the existing systematic review on the topic. This contrasts with the practice of some research funders of not requiring a review of the evidence base before funding new research, which is akin to "going shopping without first seeing what you have in the kitchen cupboard."
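The conventional single-study power calculation mentioned above can be sketched as follows. This is an illustrative normal-approximation calculation, not a tool from the text: the function name, the default significance level, and the example effect size are all assumptions chosen for demonstration.

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.8) -> int:
    """Approximate sample size per group for a two-arm trial to detect
    a standardized mean difference (Cohen's d), using the normal approximation."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = z.inv_cdf(power)           # quantile corresponding to desired power
    # Standard formula: n per group = 2 * ((z_alpha + z_beta) / d)^2
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Detecting a medium effect (d = 0.5) with 80% power at alpha = 0.05:
print(n_per_group(0.5))  # → 63 per group
```

An evidence-based alternative, as the passage notes, would instead ask how large a new study must be to change the pooled result of the existing systematic review, which depends on the evidence already accumulated rather than on the new study alone.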
Bringing together what is known from all research is itself a research exercise, and the methods used are commonly called a "systematic review," which can be defined as "a review of existing research using explicit, accountable, rigorous research methods." Reviews of research are similar to primary research but at a "meta-level." Instead of collecting primary data, their samples are the findings from preexisting relevant primary studies. Instead of analyzing primary data, they synthesize the findings from such studies. There are also reviews of reviews, where the data are the findings of the included reviews.
The logic of using systematic research processes to review research evidence applies to all research questions and evidence claims. The methods of review tend to reflect the research paradigms and logic of the primary studies addressing the same type of research question. There is therefore a wide range of systematic review methods, including statistical meta-analysis of experimental data, theory-driven reviews of causal processes, meta-ethnography, and multicomponent mixed-methods reviews.
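Of the review methods listed, statistical meta-analysis is the most mechanical and can be illustrated briefly. The sketch below shows standard fixed-effect inverse-variance pooling; the study estimates and standard errors are entirely hypothetical, and real reviews would also assess heterogeneity and often use random-effects models.

```python
from statistics import NormalDist

def fixed_effect_pool(effects: list[float], ses: list[float]):
    """Fixed-effect inverse-variance pooling of study effect sizes.
    effects: per-study estimates; ses: their standard errors.
    Returns the pooled estimate and its 95% confidence interval."""
    weights = [1 / se ** 2 for se in ses]          # precision weights
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    se_pooled = (1 / sum(weights)) ** 0.5          # SE of the pooled estimate
    z = NormalDist().inv_cdf(0.975)                # 95% CI multiplier
    return pooled, (pooled - z * se_pooled, pooled + z * se_pooled)

# Three hypothetical studies reporting standardized mean differences:
estimate, ci = fixed_effect_pool([0.30, 0.45, 0.10], [0.10, 0.15, 0.20])
print(round(estimate, 2))  # pooled estimate ≈ 0.31
```

More precise studies (smaller standard errors) receive greater weight, which is why the pooled evidence base, rather than any single study, carries the warrant for the evidence claim.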
Technical Quality and Relevance of an Evidence Base
Critical appraisal of the evidence claims of a review of the relevant research requires consideration of how the evidence was brought together, in other words, the methods of the review. If the review was not undertaken in a technically excellent and relevant way, then less trust can be placed in the evidence claims that it produces. There is also a need to check the quality and relevance of the research studies included in the review and thus contributing to the evidence claim. Finally, there may be issues of quality in the research evidence as a whole: whatever the quality of the included studies, it may be that the evidence they produce is not very useful. This provides the following three components of critical appraisal of an evidence claim about an evidence base, which are discussed in turn:
- Review methods: How was the evidence on which the claim is made brought together?
- Included studies: What is their quality and relevance to the review question?
- The totality of evidence from the review: What is the nature and extent of the evidence and its ability to answer the review question (and to justify the warrant of the conclusions, i.e., the evidence claim)?