Special Feature: Methods Series
Systematic Review and Meta-analysis: When One Study Is Just Not Enough
CJASN Jan 2008, 3 (1) 253-260; DOI: https://doi.org/10.2215/CJN.01430307
We live in the information age, and the practice of medicine is becoming increasingly specialized. In the biomedical literature, the number of published studies has dramatically increased: There are now more than 15 million citations in MEDLINE, with 10,000 to 20,000 new citations added each week (1). Multiple relevant studies commonly guide most clinical decisions. These studies often vary in their design; methodologic quality; population studied; and the intervention, test, or condition considered. Because even highly cited trials may be challenged or refuted over time (2), clinical decision-making requires ongoing reconciliation of studies that provide different answers to the same question. Both clinicians and researchers can also benefit from a summary of where uncertainty remains. Because it is often impractical for readers to track down and review all of the primary studies (3), review articles are an important source of summarized evidence on a particular topic (4).
Narrative Review, Systematic Review, and Meta-analysis
Review articles have traditionally taken the form of a narrative review, whereby a content expert writes about a particular field, condition, or treatment (5–7). Narrative reviews have many benefits, including a broad overview of relevant information tempered by years of practical knowledge from an experienced author. Indeed, this article itself is in a narrative format, from authors who have published a number of meta-analyses in previous years.
In some circumstances, a reader wants to become very knowledgeable about specific details of a topic and wants some assurance that the information presented is both comprehensive and unbiased. A narrative review typically uses an implicit process to compile evidence to support the statements being made. The reader often cannot tell which recommendations were based on the author's clinical experience, the breadth to which available literature was identified and compiled, and the reasons that some studies were given more emphasis than others. It is sometimes uncertain whether the author of a narrative review selectively cited reports that reinforced his or her preconceived ideas or promoted specific views of a topic. As well, a quantitative summary of the literature is often absent in a narrative review.
A systematic review uses a process to identify comprehensively all studies for a specific focused question (drawn from research and other sources), appraise the methods of the studies, summarize the results, present key findings, identify reasons for different results across studies, and cite limitations of current knowledge (8,9). In a systematic review, all decisions used to compile data are meant to be explicit, allowing the reader to judge for him- or herself the quality of the review process and the potential for bias. In this fashion, systematic reviews tend to be more transparent than their narrative cousins, although they too can be biased if the selection or emphasis of certain primary studies is influenced by the preconceived notions of the authors or funding sources (10).
Depending on the nature of the data, the results of a systematic review can be summarized in text or graphic form. In graphic form, it is common for different trials to be depicted in a plot where the point estimate and 95% confidence interval for each study are presented on an individual line (11). When results are mathematically combined (a procedure sometimes referred to as pooling), this is referred to as meta-analysis. Graphically, the pooled result is often presented as a diamond at the bottom of the plot.
When performing a meta-analysis, a review team usually combines aggregate-level data reported in each primary study (the point and variance estimate of the summary measure). On occasion, a review team will obtain all of the individual patient data from each of the primary studies (12,13). Although challenging to conduct (14), individual patient meta-analyses may have certain advantages over aggregate-level analyses. As highlighted in a review of angiotensin-converting enzyme (ACE) inhibitors for nondiabetic kidney disease, these include the use of common definitions, coding, and cutoff points between studies; addressing questions not examined in the original publications; and a better sense of the impact of individual patient (versus study-level) characteristics (12,15).
As first highlighted a decade ago (16), the number of systematic reviews in nephrology and other fields has increased dramatically with time, paralleling the rapid growth of biomedical literature during the past half century. Initiatives such as the Cochrane Collaboration have further increased the profile and rigor of the systematic review process (details of the structured procedure of Cochrane systematic reviews are available through their Web site) (17,18). From 1990 to 2005, more than 400 systematic reviews and meta-analyses were published in the discipline of nephrology (Figure 1). Of these reviews, 40% pertained to chronic kidney disease or glomerulonephritis, and 20, 16, 15, and 7% pertained to kidney transplantation, dialysis, acute kidney injury, and pediatric nephrology, respectively. As a publication type, however, systematic reviews have not been without controversy: Some authors consider a meta-analysis the best possible use of all available data, whereas others question whether they add anything meaningful to scientific knowledge (19). The strengths and weaknesses of this publication type are described next.
Strengths of Systematic Review and Meta-analysis
Physicians make better clinical decisions when they understand the circumstances and preferences of their patients and combine their personal experience with the clinical evidence underlying the available options (20). The public also expects that their physicians will integrate research findings into practice in a timely manner (21). Thus, sound clinical or health policy decisions are facilitated by reviewing the available evidence (and its limitations), understanding reasons why some studies differ in their results (a finding sometimes referred to as heterogeneity among the primary studies), coming up with an assessment of the expected effect of an intervention or exposure (for questions of therapy or etiology), and then integrating the new information with other relevant treatment, patient, and health care system factors.
In this respect, reading a properly conducted systematic review is an efficient way to become familiar with the best available research evidence for a focused clinical question. The review team may also have obtained information from the primary authors that was not available in the original reports. The presented summary allows the reader to take into account a whole range of relevant findings from research on a particular topic. The process can also establish whether the scientific findings are consistent and generalizable across populations, settings, and treatment variations and whether findings vary significantly by particular subgroups. Again, the potential strength of a systematic review lies in the transparency of each phase of the synthesis process, allowing the reader to focus on the merits of each decision made in compiling the information, rather than a simple contrast of one study to another as sometimes occurs in other types of reviews.
For example, studies demonstrating a significant effect of treatment are more likely to be published than studies with negative findings, are more likely to be published in English, and are more likely to be cited by others (22–27). A well-conducted systematic review attempts to reduce the possibility of bias in the method of identifying and selecting studies for review, by using a comprehensive search strategy and specifying inclusion criteria that ideally have not been influenced by a priori knowledge of the primary studies.
Mathematically combining data from a series of well-conducted primary studies may provide a more precise estimate of the underlying "true effect" than any individual study (28). In other words, by combining the samples of the individual studies, the size of the "overall sample" is increased, enhancing the statistical power of the analysis and reducing the size of the confidence interval around the point estimate of the effect. It is also more efficient to communicate a pooled summary than to describe the results for each of the individual studies. Sometimes, if the treatment effect in small trials shows a nonsignificant trend toward efficacy, then pooling the results may establish the benefits of therapy (16). For instance, ten trials examined whether ACE inhibitors were more effective than other antihypertensive agents for the prevention of nondiabetic kidney failure (29). Many of the 95% confidence intervals for the estimate provided by each study overlapped with a finding of no effect; however, the overall pooled estimate established a benefit of ACE inhibitors.
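The precision gain from pooling can be illustrated with a minimal inverse-variance sketch. The numbers below are hypothetical (they are not the data from the ACE-inhibitor trials cited above); the point is only that each study's confidence interval crosses "no effect" while the pooled interval does not:

```python
import math

def pooled_fixed_effect(estimates, std_errors):
    """Inverse-variance fixed-effect pooling of study-level estimates
    (e.g., log relative risks). Returns the pooled estimate and 95% CI."""
    weights = [1.0 / se**2 for se in std_errors]        # precision weights
    total_w = sum(weights)
    pooled = sum(w * y for w, y in zip(weights, estimates)) / total_w
    se_pooled = math.sqrt(1.0 / total_w)                # pooled standard error
    return pooled, (pooled - 1.96 * se_pooled, pooled + 1.96 * se_pooled)

# Three hypothetical small trials, each individually nonsignificant
# (every study's own 95% CI crosses 0 on the log scale):
log_rr = [-0.30, -0.25, -0.35]
se     = [ 0.20,  0.18,  0.22]

pooled, (lo, hi) = pooled_fixed_effect(log_rr, se)
# The pooled CI is narrower than any single study's CI and excludes 0,
# establishing a benefit that no individual trial could show on its own.
```

Because each weight is the reciprocal of a study's variance, larger and more precise studies dominate the pooled estimate, which is exactly the weighting principle described later under data synthesis.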
For these reasons, a meta-analysis of similar, well-conducted, randomized, controlled trials has been considered one of the highest levels of evidence (30–32). It is important to stress that the primary trials all have to be conducted with high methodologic rigor for the meta-analysis to be definitive. Alternatively, when the existing studies have important scientific and methodologic limitations, including small sample sizes (which is more often the case), the systematic review may identify where gaps exist in the available literature. In this case, an exploratory meta-analysis can provide a plausible estimate of effect that can be tested in subsequent studies (33,34).
Limitations of Systematic Review and Meta-analysis
This publication type has many potential limitations that should be appreciated by all readers. First, the summary provided in a systematic review and meta-analysis of the literature is only as reliable as the methods used to estimate the effect in each of the primary studies. In other words, conducting a meta-analysis does not overcome problems that were inherent in the design and execution of the primary studies. It also does not correct biases that result from selective publication, whereby studies that report dramatic effects are more likely to be identified, summarized, and subsequently pooled in meta-analysis than studies that report smaller effect sizes (a phenomenon referred to as publication bias). Because more than three quarters of meta-analyses did not report any empirical assessment of publication bias (35), the true frequency of this form of bias is unknown.
Controversies also arise around the interpretation of summarized results, particularly when the results of discordant studies are pooled in meta-analysis (36). The review process inevitably identifies studies that are diverse in their design, methodologic quality, specific interventions used, and types of patients studied. There is often some subjectivity in deciding how similar studies must be before pooling is appropriate. Combining studies of poor quality with those that were more rigorously conducted may not be useful and can lead to worse estimates of the underlying truth or a false sense of precision around the truth (36). A false sense of precision may also arise when various subgroups of patients, defined by characteristics such as their age or gender, differ in their observed response. In such cases, reporting an aggregate pooled effect might be misleading if there are important reasons to explain variable treatment effects across different types of patients (36–40).
Finally, merely labeling a manuscript as a "systematic review" or "meta-analysis" does not guarantee that the review was conducted or reported with due rigor (41). To reduce the chance of arriving at misleading conclusions, guidelines on the conduct and reporting of systematic reviews were recently published (42,43); however, important methodologic flaws of systematic reviews published in peer-reviewed journals have been well described (44–54). For instance, of the 86 renal systematic reviews published in 2005, the majority (58%) had important methodologic flaws (Mrkobrada M, Thiessen-Philbrook H, Haynes RB, Iansavichus AV, Rehman F, and Garg AX, submitted). The most common flaws among these renal reviews were failure to assess the methodologic quality of included primary studies and failure to avoid bias in study inclusion (Mrkobrada M, Thiessen-Philbrook H, Haynes RB, Iansavichus AV, Rehman F, and Garg AX, submitted). In some cases, industry-supported reviews of drugs have had fewer reservations about methodologic limitations of the included trials than rigorously conducted Cochrane reviews on the same topic (10); however, the hypothesis that less rigorous reviews more often report positive conclusions than good-quality reviews of the same topic has not been borne out in empirical assessment (48,53,55). Still, like all good consumers, users of systematic reviews should carefully consider the quality of the product and adhere to the dictum "caveat emptor": Let the buyer beware. The limitations described in this section may explain differences between the results of meta-analyses and those of subsequent large, randomized, controlled trials, which have occurred in approximately one third of cases (56).
How to Critically Appraise a Systematic Review and Meta-analysis
Users of systematic reviews need to assure themselves that the underlying methods used to gather relevant information were sound. Before considering the results or how the information could be appropriately applied in patient care (9), there are a few questions that readers can ask themselves when assessing the methodologic quality of a systematic review (Table 1).
Table 1.
Questions to ask when assessing the quality of a systematic review
Was the Review Conducted According to a Prespecified Protocol?
It is reassuring if a review was guided by a written protocol (prepared in advance) that describes the research question(s), hypotheses, review method, and plan for how the data will be extracted and compiled. Such an approach minimizes the likelihood that the results or the expectations of the reviewing team influenced study inclusion or synthesis. Although most systematic reviews are conducted in a retrospective manner, reviews and meta-analyses can in theory be defined at the time several similar trials are being planned or under way. This allows a set of specific hypotheses, data collection procedures, and analytic strategies to be specified in advance, before any of the results from the primary studies are known. Such a prospective effort may provide more reliable answers to medically relevant questions than the traditional retrospective approach (41).
Was the Question Focused?
Clinical questions often deal with issues of treatment, etiology, prognosis, and diagnosis. A well-formulated question usually specifies the patient's problem or diagnosis, the intervention or exposure of interest, any comparison group (if relevant), and the primary and secondary outcomes of interest (57).
Were the "Right" Types of Studies Eligible for the Review?
Different study designs can be used to answer different clinical questions. Randomized, controlled trials; observational studies; and cross-sectional diagnostic studies may each be appropriate, depending on the primary question posed in the review. When examining the eligibility criteria for study inclusion, the reader should feel confident that a potential bias in the selection of studies was avoided. Specifically, the reader should ask her- or himself whether the eligibility criteria for study inclusion were appropriate for the question asked. Whether the right types of studies were selected for the review also depends on the depth and breadth of the underlying literature search.
For example, some review teams will consider only studies that were published in English. There is evidence that journals from certain countries publish a higher proportion of positive trials than others (58). Excluding non-English studies seemed to change the results of some reviews (59,60) but not others (61,62).
Some review teams use broad criteria for their inclusion of primary studies (e.g., effects of agents that block the renin-angiotensin system on renal outcomes [63]), whereas other teams use narrower inclusion criteria (e.g., restricting the analysis only to patients who have diabetes without evidence of nephropathy [64]). There is often no single correct approach; however, the conclusions of any meta-analysis that is highly sensitive to altering the entry criteria of included studies should be interpreted with some caution (25). For instance, two different review teams considered whether synthetic dialysis membranes resulted in better clinical outcomes compared with cellulose-based membranes in patients with acute renal failure. In one meta-analysis (65) but not the other (66), synthetic membranes reduced the risk for death. The discordant results were due to the inclusion of a study that did not meet eligibility for the second review (67).
Was the Method of Identifying All Relevant Information Comprehensive?
Identifying relevant studies for a given clinical question amid the many potential sources of data is commonly a laborious process (68). Biomedical journals are the most common source of information, and bibliographic databases are often used to search for relevant articles. MEDLINE currently indexes approximately 4800 medical journals and contains 13 million citations (69). Similarly, EMBASE indexes approximately 5000 medical journals and contains more than 11 million records. There are some key differences between EMBASE and MEDLINE, and the review team should have searched both databases (70–72). For example, EMBASE provides the best coverage of European research as well as pharmaceutical research, including renal adverse events (73). Positive studies may be more often published in journals that are indexed in MEDLINE, compared with nonindexed journals (25).
Depending on the question posed, other databases may also have been searched. For example, if a team is summarizing the effects of exercise training in patients who receive maintenance hemodialysis, then searching the Cumulative Index to Nursing and Allied Health Literature (CINAHL) database would be appropriate (74). Alternatively, the ECONOLIT database may be useful for identifying information on the out-of-pocket expenses incurred by living kidney donors (75). As a supplementary method of identifying information, searching databases such as the Science Citation Index (which identifies all articles that cite a relevant article), as well as newer Internet search engines such as Google Scholar and Elsevier's Scirus, can be useful for identifying articles that are not indexed well in traditional bibliographic databases (76). Searching the bibliographies of retrieved articles can also identify relevant articles that were missed.
Whatever bibliographic database was used, the review team should have used a search strategy that maximized the identification of relevant articles (77,78). Because there is some subjectivity in screening databases, citations should be reviewed independently and in duplicate by two members of the reviewing team, with the full-text article retrieved for any citation deemed relevant by either reviewer. There is also some subjectivity in assessing the eligibility of each full-text article, and the risk for incorrectly discarding relevant reports is reduced when two reviewers independently perform each assessment in a reliable manner (79).
Important sources of data other than journal articles should not be overlooked. Conference proceedings, abstracts, books, and manufacturers all can be sources of potentially valuable information. Inquiries to experts, including those listed in trial registries, may likewise prove useful (28).
A comprehensive search of the available literature reduces the possibility of publication bias, which occurs when studies with statistically significant results are more likely to be published and cited (80,81). It is interesting that some recent reviews of acetylcysteine for the prevention of contrast nephropathy analyzed as few as five studies, despite being submitted for publication nearly 1 year after publication of a review of 12 studies (82). Although there are many potential reasons for this, one cannot exclude the possibility that some search strategies missed eligible trials. In addition to a comprehensive search method, which makes it unlikely that relevant studies were missed, it is often reassuring if the review team used graphic and statistical methods to confirm that there was little chance that publication bias influenced the results (83).
Was the Data Abstraction from Each Study Appropriate?
In compiling relevant information, the review team should have used a rigorous and reproducible method of abstracting all relevant data from the primary studies. Often two reviewers abstract key information from each primary study, including study and patient characteristics, setting, and details about the intervention, exposure, or diagnostic test as appropriate. Language translators may be needed. Teams who conduct their review with due rigor will indicate that they contacted the primary authors of each of the primary studies to confirm the accuracy of abstracted data as well as to obtain additional relevant data that were not provided in the primary reports. Some authors will go through the additional effort of blinding or masking the results from other study characteristics so that data abstraction is as objective as possible (84,85).
One element that should have been abstracted is the methodologic quality of each primary study (recognizing that this is not always as straightforward as it may first seem) (86–91). The question to be posed by the reader is whether the reviewing team considered if each of the primary studies was designed, conducted, and analyzed in a way to minimize or avoid biases in the results (92). For randomized, controlled trials, lack of concealment of allocation, inadequate generation of the allocation sequence, and lack of double blinding can exaggerate estimates of the treatment effect (54,90,93). The value of abstracting such data is that it may help to explain important differences in the results among the primary studies (90).
For instance, long-term risk estimates can become unreliable when participants are lost to study follow-up; those who participate in follow-up often systematically differ from nonparticipants. For this reason, prognosis studies are vulnerable to bias unless the loss to follow-up is less than 20% (94). In a systematic review of 49 studies on the renal prognosis of diarrhea-associated hemolytic uremic syndrome, on average, 21% of patients were lost to follow-up (range 0 to 59% across studies) (95). It was hypothesized that patients who were lost to follow-up would contribute to worse estimates of long-term prognosis because they are typically healthier than those who continue to be followed by their nephrologists. Indeed, studies with a higher proportion of patients lost to follow-up demonstrated a higher proportion of patients with long-term renal sequelae, explaining 28% of the between-study variability.
How Was the Information Synthesized and Summarized?
In cases in which the primary studies differ in design, populations studied, interventions and comparisons used, or outcomes measured, it may have been appropriate for the review team simply to report the results descriptively using text and tables. When the primary studies are similar in these characteristics and the studies provide a similar estimate of a true effect, then meta-analysis may have been used to derive a more precise estimate of this effect (96). In meta-analysis, data from the individual studies are not merely combined as though they were from a single study; rather, greater weights are given to the results from studies that provide more information, because they are likely to be closer to the true effect being estimated. Mathematically combining the results from the individual studies can be accomplished under the assumption of a "fixed-effects" or "random-effects" model. Although a thorough description and critique of each approach is provided elsewhere (97), it is fair to say that a random-effects model is more conservative than the fixed-effects approach, and a finding that is statistically significant with the latter but not the former should be viewed with skepticism.
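How the two models differ can be sketched with a minimal DerSimonian-Laird random-effects implementation (the inputs in the test are hypothetical; in practice, dedicated software such as the R `metafor` package or Cochrane's RevMan would be used):

```python
import math

def dersimonian_laird(estimates, std_errors):
    """Random-effects pooling (DerSimonian-Laird): inflates each study's
    variance by an estimate of the between-study variance tau^2, so that
    discordant studies yield a wider (more conservative) confidence
    interval than fixed-effects pooling would."""
    k = len(estimates)
    w = [1.0 / se**2 for se in std_errors]              # fixed-effects weights
    fixed = sum(wi * y for wi, y in zip(w, estimates)) / sum(w)
    # Cochran's Q: weighted dispersion of studies around the fixed estimate
    q = sum(wi * (y - fixed)**2 for wi, y in zip(w, estimates))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                  # between-study variance
    w_star = [1.0 / (se**2 + tau2) for se in std_errors]
    pooled = sum(wi * y for wi, y in zip(w_star, estimates)) / sum(w_star)
    se_pooled = math.sqrt(1.0 / sum(w_star))
    return pooled, se_pooled
```

When the studies agree, tau² is estimated as 0 and the two models coincide; when they disagree, the random-effects standard error grows, which is why a result that is significant only under the fixed-effects model deserves skepticism.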
Whenever individual studies are pooled in meta-analysis, it is important for the reader to determine whether it was reasonable to do so. One way to assess the similarity of various studies is to inspect the graphic display of the results, looking for similarities in the direction of the estimated effect. Even without considering any combined meta-analytic result, a reader becomes much more confident when a similar effect is being observed across many studies (i.e., the results have replicated across many studies). Some review teams may report a statistical test to determine how different the studies are from one another (as described previously, this is often termed heterogeneity of the study results [98]). This can help to prove or disprove that the differences observed between the results of the primary studies are no different from what would be expected by chance. The most common statistical test to quantify heterogeneity is the Q statistic, which is similar in concept to a χ2 test. Although a nonsignificant result (by convention, P > 0.1) is often taken to indicate that there are no substantial differences between the studies, it is important to consider that this test is underpowered, particularly when the number of studies being pooled is small. A newer statistic that is frequently reported in meta-analyses is the I2 statistic. This statistic describes the percentage of variability between the studies that is present beyond what would be expected by chance. When interpreting an I2 statistic, values of 0 to 30%, 31 to 50%, and >50% represent mild, moderate, and marked differences between the studies, respectively (99).
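These two heterogeneity measures can be sketched in a few lines (the study data in the test are hypothetical; I² here follows the usual definition of the percentage of Q in excess of its degrees of freedom):

```python
def heterogeneity(estimates, std_errors):
    """Cochran's Q and the I^2 statistic for a set of study estimates.
    Q is the weighted dispersion of study results around the pooled
    fixed-effects estimate (compared against a chi^2 with k-1 df);
    I^2 re-expresses Q as the percentage of between-study variability
    beyond what chance alone would produce."""
    w = [1.0 / se**2 for se in std_errors]              # inverse-variance weights
    pooled = sum(wi * y for wi, y in zip(w, estimates)) / sum(w)
    q = sum(wi * (y - pooled)**2 for wi, y in zip(w, estimates))
    df = len(estimates) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2
```

With consistent studies, Q stays near its degrees of freedom and I² is 0%; with discordant studies, I² climbs toward 100%, which by the thresholds above would count as marked heterogeneity.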
Whenever a review team identifies significant differences between the primary studies, they should try to explain possible reasons for these differences. This can be done in an informal way by analyzing certain types of studies separately or by selectively combining studies to determine which are particularly different from the remaining studies. Alternatively, a statistical approach can be taken to explore differences across studies, using a technique similar to linear or logistic regression (which at the study level is called meta-regression) (100). Either way, a careful exploration of why study results differ can yield important information about potential determinants of the effect being observed.
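A study-level meta-regression can be sketched as a weighted least-squares fit of each study's effect estimate on a study-level covariate (for example, the percentage lost to follow-up, as in the hemolytic uremic syndrome review described earlier); the inputs in the test below are hypothetical:

```python
def meta_regression(effects, covariate, std_errors):
    """Weighted least-squares meta-regression: regress study-level effect
    estimates on one study-level covariate, weighting each study by the
    precision (1/SE^2) of its estimate. Returns (intercept, slope)."""
    w = [1.0 / se**2 for se in std_errors]
    sw = sum(w)
    mx = sum(wi * x for wi, x in zip(w, covariate)) / sw   # weighted means
    my = sum(wi * y for wi, y in zip(w, effects)) / sw
    sxx = sum(wi * (x - mx)**2 for wi, x in zip(w, covariate))
    sxy = sum(wi * (x - mx) * (y - my)
              for wi, x, y in zip(w, covariate, effects))
    slope = sxy / sxx
    return my - slope * mx, slope
```

A slope meaningfully different from zero suggests the covariate explains part of the between-study differences, in the same spirit as the loss-to-follow-up analysis that explained 28% of the between-study variability.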
Conclusions
Like all types of research, systematic reviews and meta-analyses have both potential strengths and weaknesses. With the growth of renal clinical studies, an increasing number of these types of summary publications will certainly become available to nephrologists, researchers, administrators, and policy makers who seek to keep abreast of recent developments. To maximize their advantages, it is essential that future reviews be conducted and reported properly, with judicious interpretation by the discriminating reader.
Acknowledgments
A.X.G. was supported by a Clinician Scientist Award from the Canadian Institutes of Health Research (CIHR). D.H. was supported by a CIHR Fellowship Award, the Chisholm Memorial Fellowship, and the Clinician-Scientist Training Program of the University of Toronto. M.T. was supported by a Population Health Investigator Award from the Alberta Heritage Foundation for Medical Research and a New Investigator Award from the CIHR.
We thank Drs. Chi Hsu and Harvey Feldman for help and advice. We thank Arthur Iansavichus, MLIS, who helped compile systematic reviews published in the discipline of nephrology.
Footnotes
- Published online ahead of print. Publication date available at www.cjasn.org.
- Copyright © 2008 by the American Society of Nephrology
References
- ↵
- ↵
Ioannidis JP: Contradicted and initially stronger effects in highly cited clinical research. JAMA 294 :218– 228,2005
- ↵
Garg AX, Iansavichus AV, Kastner Yard, Walters LA, Wilczynski N, McKibbon KA, Yang RC, Rehman F, Haynes RB: Lost in publication: One-half of all renal do bear witness is published in not-renal journals. Kidney Int seventy :1995– 2005,2006
- ↵
Haynes RB, Cotoi C, Kingdom of the netherlands J, Walters L, Wilczynski Northward, Jedraszewski D, McKinlay J, Parrish R, McKibbon KA: 2nd-guild peer review of the medical literature for clinical practitioners. JAMA 295 :1801– 1808,2006
- ↵
Barrett BJ, Parfrey PS: Clinical practise: Preventing nephropathy induced past contrast medium. N Engl J Med 354 :379– 386,2006
-
Halloran PF: Immunosuppressive drugs for kidney transplantation. North Engl J Med 351 :2715– 2729,2004
- ↵
Schrier RW, Wang W: Acute renal failure and sepsis. Northward Engl J Med 351 :159– 169,2004
- ↵
Cook DJ, Mulrow CD, Haynes RB: Systematic reviews: Synthesis of best testify for clinical decisions. Ann Intern Med 126 :376– 380,1997
- ↵
Oxman Advertizing, Cook DJ, Guyatt GH: Users' guides to the medical literature. VI. How to use an overview. Evidence-Based Medicine Working Group. JAMA 272 :1367– 1371,1994
- ↵
Jorgensen AW, Hilden J, Gotzsche PC: Cochrane reviews compared with industry supported meta-analyses and other meta-analyses of the same drugs: Systematic review. BMJ 333 :782 ,2006
- ↵
Lewis S, Clarke M: Forest plots: Trying to come across the wood and the trees. BMJ 322 :1479– 1480,2001
- ↵
Lyman GH, Kuderer NM: The strengths and limitations of meta-analyses based on aggregate information. BMC Med Res Methodol 5 :14 ,2005
- ↵
Simmonds MC, Higgins JP, Stewart LA, Tierney JF, Clarke MJ, Thompson SG: Meta-assay of individual patient data from randomized trials: A review of methods used in practice. Clin Trials ii :209– 217,2005
- ↵
Schmid CH, Landa M, Jafar TH, Giatras I, Karim T, Reddy M, Stark PC, Levey AS: Constructing a database of individual clinical trials for longitudinal analysis. Command Clin Trials 24 :324– 340,2003
- ↵
Schmid CH, Stark PC, Berlin JA, Landais P, Lau J: Meta-regression detected associations betwixt heterogeneous treatment effects and report-level, merely not patient-level, factors. J Clin Epidemiol 57 :683– 697,2004
- ↵
Fouque D, Laville Yard, Haugh K, Boissel JP: Systematic reviews and their roles in promoting testify-based medicine in renal disease. Nephrol Dial Transplant 11 :2398– 2401,1996
- ↵
Campbell MK, Daly C, Wallace SA, Cody DJ, Donaldson C, Grant AM, Khan IH, Lawrence P, Vale L, MacLeod AM: Evidence-based medicine in nephrology: Identifying and critically appraising the literature. Nephrol Punch Transplant 15 :1950– 1955,2000
- Blettner M, Sauerbrei W, Schlehofer B, Scheuchenpflug T, Friedenreich C: Traditional reviews, meta-analyses and pooled analyses in epidemiology. Int J Epidemiol 28: 1–9, 1999
- Haynes RB, Devereaux PJ, Guyatt GH: Physicians' and patients' choices in evidence based practice. BMJ 324: 1350, 2002
- Fones CS, Kua EH, Goh LG: 'What makes a good doctor?' Views of the medical profession and the public in setting priorities for medical education. Singapore Med J 39: 537–542, 1998
- Sterne JA, Egger M, Smith GD: Systematic reviews in health care: Investigating and dealing with publication and other biases in meta-analysis. BMJ 323: 101–105, 2001
- Simes RJ: Confronting publication bias: A cohort design for meta-analysis. Stat Med 6: 11–29, 1987
- Easterbrook PJ, Berlin JA, Gopalan R, Matthews DR: Publication bias in clinical research. Lancet 337: 867–872, 1991
- Egger M, Smith GD: Bias in location and selection of studies. BMJ 316: 61–66, 1998
- Dickersin K, Min YI, Meinert CL: Factors influencing publication of research results: Follow-up of applications submitted to two institutional review boards. JAMA 267: 374–378, 1992
- Stern JM, Simes RJ: Publication bias: Evidence of delayed publication in a cohort study of clinical research projects. BMJ 315: 640–645, 1997
- Pogue J, Yusuf S: Overcoming the limitations of current meta-analysis of randomised controlled trials. Lancet 351: 47–52, 1998
- Giatras I, Lau J, Levey AS: Effect of angiotensin-converting enzyme inhibitors on the progression of nondiabetic renal disease: A meta-analysis of randomized trials. Angiotensin-Converting-Enzyme Inhibition and Progressive Renal Disease Study Group. Ann Intern Med 127: 337–345, 1997
- Guyatt G, Gutterman D, Baumann MH, Addrizzo-Harris D, Hylek EM, Phillips B, Raskob G, Lewis SZ, Schunemann H: Grading strength of recommendations and quality of evidence in clinical guidelines: Report from an American College of Chest Physicians task force. Chest 129: 174–181, 2006
- Hadorn DC, Baker D, Hodges JS, Hicks N: Rating the quality of evidence for clinical practice guidelines. J Clin Epidemiol 49: 749–754, 1996
- Guyatt GH, Haynes RB, Jaeschke RZ, Cook DJ, Green L, Naylor CD, Wilson MC, Richardson WS: Users' guides to the medical literature: XXV. Evidence-based medicine: Principles for applying the Users' Guides to patient care. Evidence-Based Medicine Working Group. JAMA 284: 1290–1296, 2000
- Anello C, Fleiss JL: Exploratory or analytic meta-analysis: Should we distinguish between them? J Clin Epidemiol 48: 109–116, 1995
- Boudville N, Prasad GV, Knoll G, Muirhead N, Thiessen-Philbrook H, Yang RC, Rosas-Arellano MP, Housawi A, Garg AX: Meta-analysis: Risk for hypertension in living kidney donors. Ann Intern Med 145: 185–196, 2006
- Palma S, Delgado-Rodriguez M: Assessment of publication bias in meta-analyses of cardiovascular diseases. J Epidemiol Community Health 59: 864–869, 2005
- Lau J, Ioannidis JP, Schmid CH: Summing up evidence: One answer is not always enough. Lancet 351: 123–127, 1998
- Thompson SG: Why sources of heterogeneity in meta-analysis should be investigated. BMJ 309: 1351–1355, 1994
- Berlin JA: Invited commentary: Benefits of heterogeneity in meta-analysis of data from epidemiologic studies. Am J Epidemiol 142: 383–387, 1995
- Davey SG, Egger M, Phillips AN: Meta-analysis: Beyond the grand mean? BMJ 315: 1610–1614, 1997
- Thompson SG, Higgins JP: Treating individuals 4: Can meta-analysis help target interventions at individuals most likely to benefit? Lancet 365: 341–346, 2005
- Yusuf S: Meta-analysis of randomized trials: Looking back and looking ahead. Control Clin Trials 18: 594–601, 1997
- Moher D, Cook DJ, Eastwood S, Olkin I, Rennie D, Stroup DF: Improving the quality of reports of meta-analyses of randomised controlled trials: The QUOROM statement. Quality of Reporting of Meta-analyses. Lancet 354: 1896–1900, 1999
- Stroup DF, Berlin JA, Morton SC, Olkin I, Williamson GD, Rennie D, Moher D, Becker BJ, Sipe TA, Thacker SB: Meta-analysis of observational studies in epidemiology: A proposal for reporting. Meta-analysis Of Observational Studies in Epidemiology (MOOSE) group. JAMA 283: 2008–2012, 2000
- Choi PT, Halpern SH, Malik N, Jadad AR, Tramer MR, Walder B: Examining the evidence in anesthesia literature: A critical appraisal of systematic reviews. Anesth Analg 92: 700–709, 2001
- Dixon E, Hameed M, Sutherland F, Cook DJ, Doig C: Evaluating meta-analyses in the general surgical literature: A critical appraisal. Ann Surg 241: 450–459, 2005
- Kelly KD, Travers A, Dorgan M, Slater L, Rowe BH: Evaluating the quality of systematic reviews in the emergency medicine literature. Ann Emerg Med 38: 518–526, 2001
- Sacks HS, Reitman D, Pagano D, Kupelnick B: Meta-analysis: An update. Mt Sinai J Med 63: 216–224, 1996
- Assendelft WJ, Koes BW, Knipschild PG, Bouter LM: The relationship between methodological quality and conclusions in reviews of spinal manipulation. JAMA 274: 1942–1948, 1995
- Jadad AR, McQuay HJ: Meta-analyses to evaluate analgesic interventions: A systematic qualitative review of their methodology. J Clin Epidemiol 49: 235–243, 1996
- Jadad AR, Cook DJ, Jones A, Klassen TP, Tugwell P, Moher M, Moher D: Methodology and reports of systematic reviews and meta-analyses: A comparison of Cochrane reviews with articles published in paper-based journals. JAMA 280: 278–280, 1998
- Bero LA, Rennie D: Influences on the quality of published drug studies. Int J Technol Assess Health Care 12: 209–237, 1996
- Barnes DE, Bero LA: Why review articles on the health effects of passive smoking reach different conclusions. JAMA 279: 1566–1570, 1998
- Jadad AR, Moher M, Browman GP, Booker L, Sigouin C, Fuentes M, Stevens R: Systematic reviews and meta-analyses on treatment of asthma: Critical evaluation. BMJ 320: 537–540, 2000
- Moher D, Pham B, Jones A, Cook DJ, Jadad AR, Moher M, Tugwell P, Klassen TP: Does quality of reports of randomised trials affect estimates of intervention efficacy reported in meta-analyses? Lancet 352: 609–613, 1998
- Katerndahl DA, Lawler WR: Variability in meta-analytic results concerning the value of cholesterol reduction in coronary heart disease: A meta-meta-analysis. Am J Epidemiol 149: 429–441, 1999
- LeLorier J, Gregoire G, Benhaddad A, Lapierre J, Derderian F: Discrepancies between meta-analyses and subsequent large randomized, controlled trials. N Engl J Med 337: 536–542, 1997
- Counsell C: Formulating questions and locating primary studies for inclusion in systematic reviews. Ann Intern Med 127: 380–387, 1997
- Vickers A, Goyal N, Harland R, Rees R: Do certain countries produce only positive results? A systematic review of controlled trials. Control Clin Trials 19: 159–166, 1998
- Gregoire G, Derderian F, Le Lorier J: Selecting the language of the publications included in a meta-analysis: Is there a Tower of Babel bias? J Clin Epidemiol 48: 159–163, 1995
- Egger M, Zellweger-Zahner T, Schneider M, Junker C, Lengeler C, Antes G: Language bias in randomised controlled trials published in English and German. Lancet 350: 326–329, 1997
- Moher D, Pham B, Klassen TP, Schulz KF, Berlin JA, Jadad AR, Liberati A: What contributions do languages other than English make on the results of meta-analyses? J Clin Epidemiol 53: 964–972, 2000
- Juni P, Holenstein F, Sterne J, Bartlett C, Egger M: Direction and impact of language bias in meta-analyses of controlled trials: Empirical study. Int J Epidemiol 31: 115–123, 2002
- Casas JP, Chua W, Loukogeorgakis S, Vallance P, Smeeth L, Hingorani AD, MacAllister RJ: Effect of inhibitors of the renin-angiotensin system and other antihypertensive drugs on renal outcomes: Systematic review and meta-analysis. Lancet 366: 2026–2033, 2005
- Strippoli GF, Craig MC, Schena FP, Craig JC: Role of blood pressure targets and specific antihypertensive agents used to prevent diabetic nephropathy and delay its progression. J Am Soc Nephrol 17: S153–S155, 2006
- Subramanian S, Venkataraman R, Kellum JA: Influence of dialysis membranes on outcomes in acute renal failure: A meta-analysis. Kidney Int 62: 1819–1823, 2002
- Jaber BL, Lau J, Schmid CH, Karsou SA, Levey AS, Pereira BJ: Effect of biocompatibility of hemodialysis membranes on mortality in acute renal failure: A meta-analysis. Clin Nephrol 57: 274–282, 2002
- Teehan GS, Liangos O, Lau J, Levey AS, Pereira BJ, Jaber BL: Dialysis membrane and modality in acute renal failure: Understanding discordant meta-analyses. Semin Dial 16: 356–360, 2003
- Dickersin K, Scherer R, Lefebvre C: Identifying relevant studies for systematic reviews. BMJ 309: 1286–1291, 1994
- Suarez-Almazor ME, Belseck E, Homik J, Dorgan M, Ramos-Remus C: Identifying clinical trials in the medical literature with electronic databases: MEDLINE alone is not enough. Control Clin Trials 21: 476–487, 2000
- Topfer LA, Parada A, Menon D, Noorani H, Perras C, Serra-Prat M: Comparison of literature searches on quality and costs for health technology assessment using the MEDLINE and EMBASE databases. Int J Technol Assess Health Care 15: 297–303, 1999
- Minozzi S, Pistotti V, Forni M: Searching for rehabilitation articles on MEDLINE and EMBASE: An example with cross-over design. Arch Phys Med Rehabil 81: 720–722, 2000
- Cheema BS, Singh MA: Exercise training in patients receiving maintenance hemodialysis: A systematic review of clinical trials. Am J Nephrol 25: 352–364, 2005
- Clarke KS, Klarenbach S, Vlaicu S, Yang RC, Garg AX: The direct and indirect economic costs incurred by living kidney donors: A systematic review. Nephrol Dial Transplant 21: 1952–1960, 2006
- Steinbrook R: Searching for the right search: Reaching the medical literature. N Engl J Med 354: 4–7, 2006
- Wilczynski NL, Haynes RB: Robustness of empirical search strategies for clinical content in MEDLINE. Proc AMIA Symp 904–908, 2002
- Wilczynski NL, Walker CJ, McKibbon KA, Haynes RB: Reasons for the loss of sensitivity and specificity of methodologic MeSH terms and textwords in MEDLINE. Proc Annu Symp Comput Appl Med Care 436–440, 1995
- Edwards P, Clarke M, DiGuiseppi C, Pratap S, Roberts I, Wentz R: Identification of randomized controlled trials in systematic reviews: Accuracy and reliability of screening records. Stat Med 21: 1635–1640, 2002
- Davidson RA: Source of funding and outcome of clinical trials. J Gen Intern Med 1: 155–158, 1986
- Rochon PA, Gurwitz JH, Simms RW, Fortin PR, Felson DT, Minaker KL, Chalmers TC: A study of manufacturer-supported trials of nonsteroidal anti-inflammatory drugs in the treatment of arthritis. Arch Intern Med 154: 157–163, 1994
- Biondi-Zoccai GG, Lotrionte M, Abbate A, Testa L, Remigi E, Burzotta F, Valgimigli M, Romagnoli E, Crea F, Agostoni P: Compliance with QUOROM and quality of reporting of overlapping meta-analyses on the role of acetylcysteine in the prevention of contrast associated nephropathy: Case study. BMJ 332: 202–209, 2006
- Egger M, Davey SG, Schneider M, Minder C: Bias in meta-analysis detected by a simple, graphical test. BMJ 315: 629–634, 1997
- Berlin JA: Does blinding of readers affect the results of meta-analyses? University of Pennsylvania Meta-analysis Blinding Study Group. Lancet 350: 185–186, 1997
- Jadad AR, Moore RA, Carroll D, Jenkinson C, Reynolds DJ, Gavaghan DJ, McQuay HJ: Assessing the quality of reports of randomized clinical trials: Is blinding necessary? Control Clin Trials 17: 1–12, 1996
- Balk EM, Bonis PA, Moskowitz H, Schmid CH, Ioannidis JP, Wang C, Lau J: Correlation of quality measures with estimates of treatment effect in meta-analyses of randomized controlled trials. JAMA 287: 2973–2982, 2002
- Balk EM, Lau J, Bonis PA: Reading and critically appraising systematic reviews and meta-analyses: A short primer with a focus on hepatology. J Hepatol 43: 729–736, 2005
- Moher D, Cook DJ, Jadad AR, Tugwell P, Moher M, Jones A, Pham B, Klassen TP: Assessing the quality of reports of randomised trials: Implications for the conduct of meta-analyses. Health Technol Assess 3: 1–98, 1999
- Verhagen AP, de Vet HC, de Bie RA, Boers M, van den Brandt PA: The art of quality assessment of RCTs included in systematic reviews. J Clin Epidemiol 54: 651–654, 2001
- Juni P, Altman DG, Egger M: Systematic reviews in health care: Assessing the quality of controlled clinical trials. BMJ 323: 42–46, 2001
- Devereaux PJ, Choi PT, El Dika S, Bhandari M, Montori VM, Schunemann HJ, Garg AX, Busse JW, Heels-Ansdell D, Ghali WA, Manns BJ, Guyatt GH: An observational study found that authors of randomized controlled trials frequently use concealment of randomization and blinding, despite the failure to report these methods. J Clin Epidemiol 57: 1232–1236, 2004
- Moher D, Jadad AR, Nichol G, Penman M, Tugwell P, Walsh S: Assessing the quality of randomized controlled trials: An annotated bibliography of scales and checklists. Control Clin Trials 16: 62–73, 1995
- Schulz KF, Chalmers I, Hayes RJ, Altman DG: Empirical evidence of bias: Dimensions of methodological quality associated with estimates of treatment effects in controlled trials. JAMA 273: 408–412, 1995
- Laupacis A, Wells G, Richardson WS, Tugwell P: Users' guides to the medical literature. V. How to use an article about prognosis. Evidence-Based Medicine Working Group. JAMA 272: 234–237, 1994
- Garg AX, Suri RS, Barrowman N, Rehman F, Matsell D, Rosas-Arellano MP, Salvadori M, Haynes RB, Clark WF: Long-term renal prognosis of diarrhea-associated hemolytic uremic syndrome: A systematic review, meta-analysis, and meta-regression. JAMA 290: 1360–1370, 2003
- Deeks JJ: Issues in the selection of a summary statistic for meta-analysis of clinical trials with binary outcomes. Stat Med 21: 1575–1600, 2002
- DerSimonian R, Laird N: Meta-analysis in clinical trials. Control Clin Trials 7: 177–188, 1986
- Hardy RJ, Thompson SG: Detecting and describing heterogeneity in meta-analysis. Stat Med 17: 841–856, 1998
- Higgins JP, Thompson SG: Quantifying heterogeneity in a meta-analysis. Stat Med 21: 1539–1558, 2002
- Thompson SG, Higgins JP: How should meta-regression analyses be undertaken and interpreted? Stat Med 21: 1559–1573, 2002
Source: https://cjasn.asnjournals.org/content/3/1/253