Language skills are a key factor in educational success - this is the consensus in empirical educational research. There are numerous empirical findings on the fundamental connections between individual, social and societal factors and language skills. Nevertheless, language-related topics such as multilingualism, the acquisition of reading and writing, or the influence of digital media on language remain the subject of controversial debate. In these debates, we also encounter opinions and positions that inaccurately reflect, misinterpret or ignore the current state of research. This is how myths and prejudices arise.
Many decision-makers in education policy want empirical findings to inform their work. To this end, it is helpful to compile scientific findings in a systematic and transparent manner and to prepare the information for different audiences in educational practice, policy and administration as well as for civil society. This is where What works? comes in.
With What works?, the Mercator Institute bridges the gap between research, policy and practice in language education. On the one hand, it aims to give educational stakeholders and civil society easy access to current topics, concepts and research results. On the other hand, it seeks to provide representatives of educational policy and practice with the best available knowledge on language education, based on scientific findings. To this end, What works? develops and refines evidence-based and participatory transfer concepts tailored to the field of language education.
The Mercator Institute understands knowledge transfer to mean the communication and transfer of research findings on questions of language education to civil society, educational practitioners, administrators and policymakers. The guiding principle is the evidence-based approach. This approach requires researchers to use quality assurance procedures in their work and to disclose how they arrived at their results. In addition, it is crucial to involve stakeholders in this process. This includes representatives who are active outside academia, because they provide important information on how findings can be disseminated in a way that is appropriate for the target group. Moreover, stakeholders can inspire new directions in researchers' work and thereby help develop new research projects. In order to make scientific knowledge usable for different target groups, the What works? team and the Communication Unit of the Mercator Institute work together to turn basic knowledge and research findings on language education into various formats suitable for target groups in the media, education policy and administration, as well as educational practice.
Formats: Basic Knowledge of Language Education and Fact Checks
In the format Basic Knowledge of Language Education, authors present central terms and concepts of language education in a concise and comprehensible way. Fact Checks provide answers to frequently asked questions on controversial topics and thereby help to debunk myths, reduce prejudices and clarify inaccuracies on the basis of empirical evidence. Expert Reports draw evidence-based conclusions and provide recommendations for action in educational institutions so as to contribute to improving the practice of language education.
Basic Knowledge of Language Education
- Target groups: the media, journalists, stakeholders in education policy and administration, civil society
- Goals: providing reliable information (figures, facts); debunking myths, reducing prejudices
- Typical questions: Is it true that...? Are there any concrete examples of/information on...?
How we work
All processes of compiling, documenting and appraising research results in the Fact Checks format adhere to standards of evidence-based practice (the process of evidence-based practice according to Dawes et al. 2005).
Our work includes
- monitoring the issues/questions that need to be clarified
- formulating searchable research questions
- systematically searching for research findings
- selecting, appraising and synthesizing the available scientific evidence
- transferring the results into practice
- evaluating, on an ongoing basis, how the results are applied in practice
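The steps above could be sketched, purely illustratively, as an ordered pipeline in which each stage must be completed before the next one begins. The class and stage names below are hypothetical and are not part of the Mercator Institute's actual workflow or tooling:

```python
from dataclasses import dataclass, field


@dataclass
class EvidenceReview:
    """Tracks one question through the evidence-based-practice steps."""
    question: str
    stages_done: list = field(default_factory=list)

    # Hypothetical stage labels, condensed from the steps listed above.
    STAGES = [
        "formulate searchable question",
        "systematic search",
        "select, appraise, synthesize",
        "transfer to practice",
        "evaluate application",
    ]

    def complete_stage(self, stage: str) -> None:
        # Stages must be completed in the documented order.
        expected = self.STAGES[len(self.stages_done)]
        if stage != expected:
            raise ValueError(f"expected stage '{expected}', got '{stage}'")
        self.stages_done.append(stage)

    @property
    def finished(self) -> bool:
        return len(self.stages_done) == len(self.STAGES)


review = EvidenceReview("Does strategy X improve reading skills?")
for stage in EvidenceReview.STAGES:
    review.complete_stage(stage)
print(review.finished)  # True
```

The point of the ordering constraint is that, in evidence-based practice, a search cannot meaningfully precede the formulation of a searchable question, and synthesis cannot precede appraisal.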
To achieve this, the team uses different methods (cf. the descriptions below). Methods, standards and criteria are chosen in accordance with the respective research topic and are thoroughly documented when preparing a publication.
For instance, in Fact Checks we may conduct a scoping review to objectify, classify and refine scientific evidence. In systematic reviews, researchers produce new knowledge on the basis of existing primary studies and then prepare and disseminate their findings for scientific audiences and for stakeholders in education administration. Along the way, all processes of research, selection and appraisal are systematically documented so as to make them transparent and replicable. From August 2018 to May 2022, the What works? team worked on a systematic review of the effectiveness of integrated strategies in the classroom. Research gaps identified in this process are used to initiate new projects.
Information about the methods we use
Non-systematic literature review
In a non-systematic literature review, researchers survey the current state of research on a specific topic, usually using scientific literature.
To survey the current state of research, structured queries are conducted in general and topic-specific databases. Manual searches and the snowball principle are also used.
In contrast to scoping reviews and systematic reviews, non-systematic literature reviews do not require researchers to specify the search strategy and selection criteria in advance, nor to document each step of the work carefully. The selection of primary studies in this procedure is therefore guided to a greater extent by personal preferences.
Unlike scoping reviews, systematic reviews and meta-analyses, non-systematic literature reviews draw predominantly on scientific literature (monographs, edited volumes, handbook articles, journal articles).
Dicks, Lynn et al. (2017): Knowledge synthesis for environmental decisions: an evaluation of existing methods, and guidance for their selection, use and development. Section: Non-systematic literature review.
Expert consultation
When conducting expert consultations, researchers systematically gather evaluations, opinions and assessments on a specific topic area from designated experts.
Individual or group surveys follow a systematic procedure. In concrete terms, this means that the selection of experts and the method employed are guided by the previously specified goal, or more specifically, are tailored to answering a concrete question or problem.
The survey takes place in person or online, orally or in writing. The expert knowledge recorded in writing (e.g. through protocols or transcribed audio recordings) is evaluated according to qualitative (content-analytical) methods.
Expert consultations are suitable as a complementary method of knowledge generation, e.g. to supplement findings from other methods with evaluations, opinions and assessments in a specific topic area.
Dicks, Lynn et al. (2017): Knowledge synthesis for environmental decisions: an evaluation of existing methods, and guidance for their selection, use and development. Section: Expert consultation.
Study appraisal
In study appraisals, researchers undertake an assessment, often conducted as a peer process, of empirical studies according to standardized (quality) criteria and catalogues.
Depending on the type of studies assessed (e.g. systematic reviews, (quasi-)experimental or qualitative studies), researchers code key quality features of empirical studies using a protocol or coding scheme and then use the codes to appraise the studies. The quality assessment may be carried out on a (percentage) scale or using a multi-level category system.
Study appraisals aim to critically evaluate the essential content and key results of empirical studies according to objective, standardized criteria. The appraisal thus serves as an inclusion criterion when synthesizing all relevant primary studies identified for a systematic review.
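The logic of such an appraisal might be sketched as follows. The coding scheme, the feature names, the 0/1/2 coding and the inclusion threshold are all hypothetical placeholders, not the criteria any real appraisal catalogue prescribes:

```python
# Hypothetical coding scheme: each key quality feature is coded
# 0 (absent), 1 (partially met) or 2 (fully met), then aggregated
# into a percentage score.
CODING_SCHEME = [
    "randomisation",
    "sample size reported",
    "valid outcome measure",
    "attrition reported",
]


def appraise(codes: dict[str, int], threshold: float = 0.5) -> tuple[float, bool]:
    """Return (percentage score, include-in-synthesis?)."""
    total = sum(codes[feature] for feature in CODING_SCHEME)
    score = total / (2 * len(CODING_SCHEME))  # 2 = maximum code per feature
    return score, score >= threshold


score, include = appraise({
    "randomisation": 2,
    "sample size reported": 1,
    "valid outcome measure": 2,
    "attrition reported": 0,
})
print(round(score, 3), include)  # 0.625 True
```

In practice, coding is done independently by several raters and the protocol documents how disagreements are resolved; the sketch only shows the aggregation step.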
What Works Clearinghouse (2017): Procedures Handbook, Version 4.0 (pp. 9–11).
Quick scoping review
In a (quick) scoping review, researchers use a standardized and systematic search strategy (usually including scientific and grey literature) to identify the current state of research on a specific topic area in a question-driven manner.
After formulating a searchable research question, structured queries are carried out in general and topic-specific databases, complemented by manual searches and snowballing. Grey literature is also taken into account.
All documents identified by this broad strategy then undergo a multi-stage selection process which researchers use to check whether the documents actually match the research question being investigated. Search strategy and selection criteria are determined in advance and all steps as well as the results are documented and disclosed in a protocol. The protocol is subjected to a peer review.
Scoping reviews allow for the systematic and transparent compilation of knowledge on a broader topic. They enable a description of the available knowledge and insights into which kind of knowledge is available and which is missing. This makes it possible to reliably identify areas in which little or no research has been done.
The scope of this type of review may vary. Compared to systematic reviews, scoping reviews generally consider fewer studies and have a higher potential for bias; in return, they are less time-consuming (hence "quick" scoping reviews).
Dicks, Lynn et al. (2017): Knowledge synthesis for environmental decisions: an evaluation of existing methods, and guidance for their selection, use and development. Section: Scoping review.
Systematic review
In a systematic review, researchers present the most meaningful knowledge (usually scientific literature) on a specific topic area in a question-driven manner. They compile knowledge using a standardized and systematic search strategy. Next, the findings are examined according to pre-defined selection criteria, followed by an appraisal procedure that evaluates quality and empirical significance. Finally, the available scientific evidence is summarized either narratively or statistically (meta-analysis), producing a research synthesis.
Like the scoping review, a systematic review involves formulating a searchable research question, conducting structured queries in general and topic-specific databases, doing manual searches and snowballing, as well as taking grey literature into account. The documents identified by this broad search strategy undergo a multi-stage selection process.
During this process, researchers examine the thematic fit and the quality of the evidence identified using a pre-defined quality rating tool. Documents are either included or excluded, accordingly. Studies identified and appraised in this rigorous process then form the basis of the research synthesis. Researchers document and disclose the search strategy, the selection and assessment criteria as well as all work steps, including the results, in a protocol. The protocol is subjected to a peer review.
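This multi-stage selection process could be sketched, under purely illustrative assumptions, as a sequence of screening stages in which every exclusion is logged so that the process remains transparent and replicable. The record fields, stage names and predicates below are invented for the example:

```python
def screen(records, stages):
    """Multi-stage selection: each stage is a (name, predicate) pair.

    Exclusions are logged with the stage at which they occurred,
    mirroring the documentation requirement of a review protocol.
    """
    included, log = list(records), []
    for name, predicate in stages:
        kept = []
        for rec in included:
            if predicate(rec):
                kept.append(rec)
            else:
                log.append((rec["id"], f"excluded at {name}"))
        included = kept
    return included, log


# Hypothetical candidate studies and screening stages.
records = [
    {"id": "S1", "on_topic": True,  "meets_quality": True},
    {"id": "S2", "on_topic": True,  "meets_quality": False},
    {"id": "S3", "on_topic": False, "meets_quality": True},
]
stages = [
    ("title/abstract screening", lambda r: r["on_topic"]),
    ("full-text quality appraisal", lambda r: r["meets_quality"]),
]

included, log = screen(records, stages)
print([r["id"] for r in included])  # ['S1']
```

Only studies surviving every stage enter the synthesis; the exclusion log is what a published protocol would disclose for each screened document.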
Systematic reviews make it possible to compile and describe knowledge on broad topics systematically and transparently. They provide an overview of which knowledge is available and which is missing, and also take into account studies and research results that have received less attention or are more difficult to access. In addition, new and objective conclusions can be drawn on the basis of existing knowledge, since this method reduces the influence of personal preferences on the selection and evaluation of studies. Furthermore, systematic reviews are the foundation of meta-analyses.
The scope and risk of bias of a systematic review depend on the scope and breadth of the search strategy (and, if applicable, the resulting number of studies to be assessed). Two variants are common practice: reviews that consider fewer studies are less time-consuming but have a higher potential for bias (e.g. rapid evidence assessments), while reviews that consider a larger number of studies have a lower potential for bias but are more time-consuming.
Dicks, Lynn; Haddaway, Neal; Hernández-Morcillo, Monica; Velizarova, Emiliya; Santamaria, Luis; Mattsson, Brady; Randall, Nicola; Failler, Pierre; Ferretti, Johanna; Rodela, Romina; Wittmer, Heidi; Livoreil, Barbara & Saarikoski, Heli (2017). Knowledge synthesis for environmental decisions: an evaluation of existing methods, and guidance for their selection, use and development. doi:10.13140/rg.2.2.15632.92161. Sections: Systematic Review, Rapid Evidence Assessment; Campbell Collaboration Guideline.
Weiß, Bernd & Wagner, Michael (2019). Meta-Analyse. In Nina Baur & Jörg Blasius (Eds.), Handbuch Methoden der empirischen Sozialforschung (pp. 1511–1522). Wiesbaden: Springer Fachmedien. doi:10.1007/978-3-658-21308-4_112
At a glance
What works? aims to prepare scientific findings on language education for specific target groups in various formats.
Formats: Basic Knowledge of Language Education, Fact Checks, Expert Reports
Methods: non-systematic literature reviews, expert consultations, study appraisals, scoping reviews, systematic reviews