8 ways to determine the credibility of research reports
In our work, we are increasingly asked to make
data-driven or fact-based decisions. A myriad of organisations offer
analysis, data, intelligence and research on developments in international higher education.
It can be difficult to know which source to rely on. Therefore, the
first page to turn to in any research report is the methodology section.
The reason is to determine whether the other pages are worth reading and how
critically we should treat the information printed on them. This blog
post covers eight elements to look for in a research report to determine
its trustworthiness.
Why was the study undertaken?
Whether the aim of the research was to generate income, lobby for a policy change, evaluate the impact of a programme or develop a new theoretical framework, this will influence the research questions, the data collection and analysis, and the presentation of the results. To make the best use of the findings and place them in the right context, it is advisable to bear the aim of the study in mind.
Who conducted the study?
A myriad of organisations in the field offer intelligence that feeds into the decisions in our daily work. It is therefore important to look at who has conducted the research, and whether the organisation or individual in question has the expertise required for conducting research on the topic. Additionally, it is good practice to assess whether the organisation has an interest in a specific research outcome. If so, the research should be transparent in demonstrating how the different stages of the study were conducted to guarantee its objectivity.
Who funded the research?
It is of equal importance to check whether a third party has sponsored or funded the study, as this could further affect its objectivity. If, for example, a student recruitment fair organiser sponsors a study on the efficiency of different recruitment methods, you should be critical of the results, particularly if student fairs emerge as the most efficient recruitment method.
How was the data collected?
In the social sciences, structured interviews and self-completion questionnaires are perhaps the two most common ways of collecting quantitative data. How the individuals in the sample, ie those approached to be surveyed, have been identified is crucial in determining the representativeness of the results. There are two main types of samples, namely probability and non-probability samples. A probability sample is one in which every individual in the population has the same chance of being included. It is also a prerequisite for being able to generalise the findings to the population (see below).

To illustrate the difference, let us say you survey first-year students by asking student clubs to share the survey on social media. Since this non-probability snowball sample has a greater likelihood of reaching students active in such clubs, the results won’t be representative or generalisable.
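To make that snowball effect concrete, here is a minimal, purely illustrative sketch in Python. The population size, the share of club members, their higher satisfaction scores and the degree to which a club-shared survey over-reaches club members are all made-up assumptions rather than figures from any real study; the point is simply that the non-probability sample drifts away from the true population value, while the probability sample does not.

```python
import random

random.seed(42)

# Hypothetical population of 10,000 first-year students.
# Assume 20% are active in student clubs and that club members
# report somewhat higher satisfaction on average (made-up figures).
population = []
for _ in range(10_000):
    in_club = random.random() < 0.20
    satisfaction = random.gauss(7.5 if in_club else 6.0, 1.0)
    population.append((in_club, satisfaction))

def mean_satisfaction(sample):
    return sum(score for _, score in sample) / len(sample)

# Probability sample: every student has the same chance of selection.
probability_sample = random.sample(population, 500)

# Non-probability (snowball-style) sample: club members are far more
# likely to see and answer a survey shared by student clubs.
weights = [5.0 if in_club else 1.0 for in_club, _ in population]
snowball_sample = random.choices(population, weights=weights, k=500)

print("True population mean:     %.2f" % mean_satisfaction(population))
print("Probability sample mean:  %.2f" % mean_satisfaction(probability_sample))
print("Snowball sample mean:     %.2f" % mean_satisfaction(snowball_sample))
```

Running the sketch shows the snowball sample overstating average satisfaction because it over-represents club members, whereas the probability sample stays close to the true population value.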
Is the sample size and response rate sufficient?
The bigger the sample size, the more precise the results are likely to be. Beyond a sample size of around 1,000, however, the gains in precision become less pronounced. Often, limited time and money make approaching such a large sample unfeasible. The homogeneity of the population further affects the desired sample size; a more heterogeneous population requires a larger sample to cover the different sub-groups of the population to a satisfactory degree. The response rate is a complementary measure to the sample size, showing what share of the eligible individuals in the sample provided a usable response. In web surveys, response rates tend to be lower than in other types of surveys.
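As a rough illustration of why gains in precision flatten out, the sketch below uses the standard normal-approximation margin of error for a proportion drawn from a simple random sample, together with a simple response-rate calculation. The invited and usable-response figures are invented for the example, not taken from any report.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion from a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# Precision improves quickly at first, then flattens out around n ~ 1,000.
for n in (100, 400, 1000, 2000, 5000):
    print(f"n = {n:>5}: margin of error ≈ ±{margin_of_error(n) * 100:.1f} percentage points")

# Response rate: usable responses divided by the number of eligible people approached.
invited, usable = 2_400, 530   # made-up figures for illustration
print(f"Response rate ≈ {usable / invited:.0%}")
```

Quadrupling the sample from 100 to 400 halves the margin of error, but going from 1,000 to 2,000 shaves off less than a percentage point, which is why very large samples are rarely worth the extra cost.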
Does the research make use of secondary data?
Data can be collected either through primary or secondary sources, ie it can be collected for the purposes of the study or existing data can be utilised. If existing data sets collected by another organisation or researcher are used, it is important to reflect on how credible the data source is and how usable it is for the study in question. Here, common sense (and Google if necessary) takes you a long way.
Does the research measure what it claims to measure?
A commonly used term in statistics to convey the trustworthiness of research is ‘validity’. Validity refers to the extent to which a notion, conclusion or measurement is well founded and corresponds to reality. In other words, does it measure what it intends to measure? As an example, suppose a study sets out to investigate gender discrimination against faculty and, in doing so, looks at the number of discrimination cases brought forward by female faculty. Yet, as the study does not look at the reason for these complaints – whether it was indeed gender or rather ethnicity, religion, age or sexual orientation – the conclusion cannot be drawn that gender discrimination has increased.
Can the findings be generalised to my situation, institution or country?
When conducting research there is often a tendency to seek to generalise the findings. Two key criteria have to be met for this to be possible. First, results are applicable only to the population of the study. In other words, if a study analyses student satisfaction among students in the UK, the findings cannot be generalised to campuses in, for example, France. Second, the data must be collected via a probability sample, ie every unit of analysis, here every student in the UK, has the same chance of being included in the sample.

Oftentimes, reports fail to account for many of these essential aspects of their data collection and analysis. Since time and money are perhaps the biggest influences on research quality, and no one possesses infinite amounts of either, a balance often has to be struck between (cost-)effectiveness and quality when undertaking research. Transparently and clearly accounting for how the research has been conducted is central for the reader to be able to evaluate the trustworthiness of the report in their hands.