Literature Review Methodology

Numerous recent reviews have synthesized empirical evidence relevant to elements of ed-tech policy.7

The present paper aims to contribute to those efforts in two principal ways. First, while existing reviews have covered subsets of ed-tech, no recent review has attempted to cover the full range of ed-tech interventions. In particular, no previous review to our knowledge brings together computer- and internet-based learning on one hand and technology-based behavioral interventions on the other. Of course, expanding our scope must involve some sacrifice: it would not be possible to meaningfully integrate all research relating to all areas of ed-tech into a single paper. Instead, we focus on studies providing evidence from randomized controlled trials (RCTs) and regression discontinuity designs (RDDs). Our core focus on RCT- and RDD-based research constitutes a second distinct contribution of this review. We argue that, in addition to helping us define sufficiently clear and narrow inclusion criteria, a focus on RCTs and RDDs adds a productive voice to broader and more methodologically diverse policy research dialogues in an environment characterized by complex tangles of cause and effect.

 

Why focus on RCTs and RDDs? In the fields of program evaluation and applied microeconomics, RCTs, when well implemented, are generally considered the strongest research design framework for quantitatively estimating average causal effects.8 RCTs are randomized experiments: studies in which the researcher randomly allocates some participants into one or more treatment group(s) subjected to an intervention, program, or policy of interest, and other participants into a control group representing the counterfactual, that is, what would have occurred without the program.9 Randomization ensures that neither observable nor unobservable characteristics of participants predict assignment, “and therefore that any difference between treatment and control…reflects the impact of the treatment.”10 In other words, when performed correctly, randomization guarantees that we are comparing apples to apples and allows us to be confident that the impacts we observe are caused by the treatment rather than some other factor. Yet due to cost, ethics, and a variety of other barriers, RCTs are not always feasible to conduct.
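
To make the logic of randomization concrete, the following minimal simulation (our own illustrative sketch, not drawn from any study reviewed here; all numbers are hypothetical) shows that because assignment is independent of participants' characteristics, the simple treatment-control difference in mean outcomes recovers the true effect:

    # Minimal sketch: under random assignment, the difference in group means
    # is an unbiased estimate of the true treatment effect.
    import numpy as np

    rng = np.random.default_rng(seed=0)
    n = 10_000
    ability = rng.normal(0, 1, n)        # unobserved participant characteristic
    treated = rng.random(n) < 0.5        # random assignment, independent of ability
    true_effect = 0.2                    # hypothetical program effect
    outcome = ability + true_effect * treated + rng.normal(0, 1, n)

    estimate = outcome[treated].mean() - outcome[~treated].mean()
    print(f"estimated effect: {estimate:.3f} (true effect: {true_effect})")

With a large sample, the estimate lands close to the true effect even though ability is never observed, which is precisely the apples-to-apples guarantee described above.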

Over the past several decades, methodologists have developed a toolkit of research designs, known broadly as quasi-experiments, that aim to approximate experimental research to the greatest extent possible using observational data. Commonly used examples include instrumental variable, difference-in-difference, and propensity-score matching designs. Regression discontinuity designs (RDDs) are quasi-experiments that exploit a well-defined cutoff threshold that determines a change in eligibility or program status for those above it: for instance, the minimum test score required for a student to be eligible for financial aid. While very high-scoring and very low-scoring students likely differ from one another in ways other than their eligibility for financial aid, “it may be plausible to think that treatment status is ‘as good as randomly assigned’ among the subsample of observations that fall just above and just below the threshold.”11 So, when some basic assumptions are met, the jump in an outcome between those just above and those just below the threshold can be interpreted as the causal effect of the intervention in question for those near the threshold.
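
The sketch below (again our own hypothetical illustration, not from any reviewed study) makes this logic concrete: among observations within a narrow bandwidth of the eligibility cutoff, the jump in mean outcomes approximates the local causal effect of the program:

    # Minimal sketch: the outcome jump just above vs. just below a cutoff
    # approximates the program's causal effect for those near the cutoff.
    import numpy as np

    rng = np.random.default_rng(seed=1)
    n = 50_000
    score = rng.uniform(0, 100, n)       # running variable, e.g., a test score
    cutoff = 60.0                        # hypothetical eligibility threshold
    eligible = score >= cutoff
    true_effect = 0.15                   # hypothetical program effect
    outcome = 0.01 * score + true_effect * eligible + rng.normal(0, 1, n)

    bandwidth = 2.0                      # keep only observations near the cutoff
    just_above = eligible & (score < cutoff + bandwidth)
    just_below = ~eligible & (score >= cutoff - bandwidth)
    jump = outcome[just_above].mean() - outcome[just_below].mean()
    print(f"estimated jump at cutoff: {jump:.3f} (true effect: {true_effect})")

Narrowing the bandwidth reduces bias from the underlying relationship between the running variable and the outcome, at the cost of fewer observations and a noisier estimate.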



RDDs can only be used in situations with a well-defined threshold that determines whether a study participant receives the intervention. We chose to include them, but not other quasi-experimental designs, because they can be as convincing as RCTs in their identification of average causal effects. With minimal sensitivity to underlying theoretical assumptions, RDDs with large samples and a well-defined cutoff produce estimated program effects identical to those of RCTs conducted for participants at the cutoff.13 Even though RDDs are quasi-experiments, in the remainder of this review we refer to the RCTs and RDDs included here as experimental studies for simplicity. We chose to focus on RCTs and RDDs not because we believe they are inherently more valuable than studies following other research designs, but because we felt that the policy literature on ed-tech is flooded with observational research and could benefit from a synthesis of evidence from the designs most likely to produce unbiased estimates of causal effects. Furthermore, we introduce, frame, and interpret the experimental results within the context of broader observational literatures.

RCTs and RDDs estimate the effect of a program or policy on outcomes of interest. However, the estimates they produce are sometimes difficult to compare with one another, given that studies test for effects on different outcomes using different measurement instruments, in populations that differ in their internal diversity. While these differences can never be completely eliminated, and effect sizes must always be considered within the contexts in which they were identified, standard deviations provide a roughly comparable unit that can give us a broad sense of the general magnitude of impact across program contexts. Standard deviations essentially express the effect size relative to variation in the outcome measure. Economists studying education generally follow the rule of thumb that less than 10 percent of a standard deviation is small, 10 to 25 percent is encouraging, 25 to 40 percent is large, and above 40 percent is very large. We report effect sizes in standard deviations whenever the relevant data is available below to facilitate comparison, while cautioning that these effect sizes must be considered in context to be meaningful.
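
As a simple worked illustration of this convention (hypothetical numbers, not from any reviewed study), the effect size in standard deviations is just the raw treatment-control difference in means divided by the standard deviation of the outcome:

    # Minimal sketch: standardized effect size = raw difference in means
    # divided by the (control-group) standard deviation of the outcome.
    import numpy as np

    control = np.array([70.0, 75.0, 80.0, 85.0, 90.0])   # hypothetical control scores
    treated = np.array([74.0, 79.0, 84.0, 89.0, 94.0])   # hypothetical treatment scores

    effect_sd = (treated.mean() - control.mean()) / control.std(ddof=1)
    print(f"effect size: {effect_sd:.2f} standard deviations")

Here a 4-point raw gain on a test with a standard deviation of roughly 7.9 points corresponds to an effect of about 0.51 standard deviations, which the rule of thumb above would classify as very large.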

We also restricted our core focus to research conducted within developed countries, although we touch on studies carried out in developing countries where relevant to the discussion. After considering both literatures, we determined that the circumstances surrounding the ed-tech interventions that have thus far been experimentally studied differed too significantly across developed and developing country education systems to allow for integrating findings from both in a way that would yield meaningful policy implications. Our decision to focus on the developed rather than developing world specifically was driven by this review's goal of examining experimental research on the full range of ed-tech interventions. While experimental policy and evaluation literature on certain categories of ed-tech, like computer distribution and computer-assisted learning, has already begun to flourish in the developing world, experimental research on other areas, like technology-based behavioral interventions, is less developed there thus far.

Our first task in building this review was therefore to gather all publicly available studies using RCT or RDD designs within developed countries that estimate the effects of an ed-tech intervention on any education-related outcome. To find the studies, we assembled a list of search terms and used these to search a number of academic search engines, leading economics and education journals, and evaluation databases. To ensure that no relevant studies were omitted, we followed backward and forward citations for all included articles and consulted with leading researchers, evaluators, and practitioners in the field. Given that much of the relevant research is recent and has been conducted both within and outside of academia, and to avoid publication bias, we chose not to exclude any studies based on their publication status. Our final list of included studies consists of published academic articles, working papers, evaluation reports, and unpublished manuscripts. See our references section for a complete list of the studies we reviewed.

Once the articles were assembled, we divided them into the four categories into which we felt they most clearly clustered: access to technology, computer-assisted learning, technology-based behavioral interventions in education, and online courses. Although not all studies fit neatly into these categories and there is some overlap, we felt that these four best encapsulated the differences in the studies' underlying topics, motivations, and theories of change.

Within each category, we closely read all studies and organized them further according to the approach of the intervention evaluated. We then considered each study's findings in light of the others', taking into account to the greatest extent possible variations in the nature of the programs evaluated, the contexts in which they were implemented, and the particular research designs with which they were examined. Where applicable, we also contrasted findings from these studies with findings from observational research and from developing countries. In the remainder of the review, we present the results of this analysis.
