660 results for systematic review, pneumonia, occupational exposure, paint industry, risk factors.
Abstract:
BACKGROUND: Secondary prevention programs for patients experiencing an acute coronary syndrome have been shown to be effective in the outpatient setting. The efficacy of in-hospital prevention interventions administered soon after acute cardiac events is unclear. We performed a systematic review and meta-analysis to determine whether in-hospital, patient-level interventions targeting multiple cardiovascular risk factors reduce all-cause mortality after an acute coronary syndrome. METHODS AND RESULTS: Using a prespecified search strategy, we included controlled clinical trials and before-after studies of secondary prevention interventions with at least a patient-level component (ie, education, counseling, or patient-specific order sets) initiated in hospital with outcomes of mortality, readmission, or reinfarction rates in acute coronary syndrome patients. We classified the interventions as patient-level interventions with or without associated healthcare provider-level interventions and/or system-level interventions. Twenty-six studies met our inclusion criteria. The summary estimate of 14 studies revealed a relative risk of all-cause mortality of 0.79 (95% CI, 0.69 to 0.92; n=37,585) at 1 year. However, the apparent benefit depended on study design and level of intervention. The before-after studies suggested reduced mortality (relative risk [RR], 0.77; 95% CI, 0.66 to 0.90; n=3680 deaths), whereas the RR was 0.96 (95% CI, 0.64 to 1.44; n=99 deaths) among the controlled clinical trials. Only interventions including a provider- or system-level intervention suggested reduced mortality compared with patient-level-only interventions. CONCLUSIONS: The evidence for in-hospital, patient-level interventions for secondary prevention is promising but not definitive because only before-after studies suggest a significant reduction in mortality. Future research should formally test which components of interventions provide the greatest benefit.
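The summary estimates quoted above (e.g. a pooled relative risk of 0.79 with a 95% CI) come from standard inverse-variance meta-analytic pooling. As a rough illustration only — the study effects and standard errors below are invented, not the review's data — a DerSimonian–Laird random-effects pooling of per-study log relative risks can be sketched as:

```python
import math

# A minimal sketch of DerSimonian-Laird random-effects pooling, the usual
# way summary relative risks like the one above are computed. The (log RR,
# standard error) pairs below are invented for illustration -- they are NOT
# the data of the review.
studies = [(-0.26, 0.10), (-0.11, 0.18), (-0.35, 0.15), (0.05, 0.22)]

def pool_random_effects(effects):
    """Return the pooled RR and its 95% CI from (log effect, SE) pairs."""
    y = [e for e, _ in effects]
    w = [1.0 / se ** 2 for _, se in effects]                  # inverse-variance weights
    fixed = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)     # fixed-effect estimate
    q = sum(wi * (yi - fixed) ** 2 for wi, yi in zip(w, y))   # Cochran's Q
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)             # between-study variance
    w_re = [1.0 / (se ** 2 + tau2) for _, se in effects]      # random-effects weights
    pooled = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
    se_pooled = math.sqrt(1.0 / sum(w_re))
    return (math.exp(pooled),
            (math.exp(pooled - 1.96 * se_pooled),
             math.exp(pooled + 1.96 * se_pooled)))

rr, ci = pool_random_effects(studies)
print(f"pooled RR = {rr:.2f}, 95% CI ({ci[0]:.2f}-{ci[1]:.2f})")
```

When Cochran's Q does not exceed its degrees of freedom, the between-study variance estimate is zero and the random-effects result collapses to the fixed-effect one, which is the behaviour the `max(0.0, ...)` line encodes.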
Abstract:
Objective: This review assesses the presentation, management, and outcome of delayed postpancreatectomy hemorrhage (PPH) and suggests a novel algorithm as a possible standard of care. Methods: An electronic search of the Medline and Embase databases from January 1990 to February 2010 was undertaken. A random-effects meta-analysis of success rate and mortality for laparotomy vs. interventional radiology after delayed PPH was performed. Results: Fifteen studies including 248 patients with delayed PPH were included. Its incidence was 3.3%. A sentinel bleed heralding a delayed PPH was observed in 45% of cases. Pancreatic leaks or intraabdominal abscesses were found in 62%. Interventional radiology was attempted in 41%, and laparotomy was undertaken in 49%. On meta-analysis comparing laparotomy vs. interventional radiology, no significant difference was observed in terms of complete hemostasis (76% vs. 80%; P = 0.35). A statistically significant difference favored interventional radiology over laparotomy in terms of mortality (22% vs. 47%; P = 0.02). Conclusion: Proper and early management of postoperative complications, such as pancreatic leak and intraabdominal abscess, minimizes the risk of delayed PPH. Sentinel bleeding needs to be thoroughly investigated. If a pseudoaneurysm is detected, it has to be treated by interventional angiography in order to prevent a further delayed PPH. Early angiography with embolization or stenting is safe and should be the procedure of choice. Surgery remains a therapeutic option if interventional radiology is not available or if patients cannot be resuscitated for an interventional treatment.
Abstract:
OBJECTIVES: The aim of this systematic review was to evaluate, by analysing the dental literature, (i) whether patients on intravenous (IV) or oral bisphosphonates (BPs) can receive oral implant therapy, and what the risk of developing bisphosphonate-related osteonecrosis of the jaw (BRONJ) might be; and (ii) whether osseointegrated implants can be affected by BP therapy. MATERIAL AND METHODS: A Medline search was conducted and all publications fulfilling the inclusion and exclusion criteria from 1966 until December 2008 were included in the review. In addition, the Cochrane Database of Systematic Reviews, the Cochrane Central Register of Controlled Trials, and EMBASE (from 1980 to December 2008) were searched for English-language articles. The literature search was completed by a hand search of the references cited in all identified publications. RESULTS: The literature search rendered only one prospective and three retrospective studies. The prospective controlled non-randomized clinical study followed patients with and without BP medication up to 36 months after implant therapy. The patients in the experimental group had been on oral BPs before implant therapy for periods ranging between 1 and 4 years. None of the patients developed BRONJ, and implant outcome was not affected by the BP medication. The three selected retrospective studies (two case-control studies and one case series) yielded very similar results. All followed patients on oral BPs after implant therapy, with follow-up ranging between 2 and 4 years. BRONJ was never reported, and implant survival rates ranged between 95% and 100%. The literature search on BRONJ, including guidelines and recommendations, found 59 papers, of which six were retrieved. Among the guidelines, there is a consensus on contraindicating implants in cancer patients under IV-BPs and on not contraindicating dental implants in patients under oral-BPs for osteoporosis.
CONCLUSIONS: From the analysis of the one prospective study and the three retrospective series (217 patients), the placement of an implant may be considered a safe procedure with regard to the occurrence of BRONJ in patients taking oral BPs for <5 years, since no BRONJ was reported in these studies. Moreover, the intake of oral BPs did not influence short-term (1-4 years) implant survival rates.
Abstract:
Background: Despite the widespread use of interferon-gamma release assays (IGRAs), their role in diagnosing tuberculosis and targeting preventive therapy in HIV-infected patients remains unclear. We conducted a comprehensive systematic review to contribute to evidence-based practice in HIV-infected people. Methodology/Principal Findings: We searched the MEDLINE, Cochrane, and Biomedicine databases to identify articles published between January 2005 and July 2011 that assessed QuantiFERON-TB Gold In-Tube (QFT-GIT) and T-SPOT.TB in HIV-infected adults. We assessed their accuracy for the diagnosis of tuberculosis and incident active tuberculosis, and the proportion of indeterminate results. The search identified 38 evaluable studies covering a total of 6514 HIV-infected participants. The pooled sensitivity and specificity for tuberculosis were 61% and 72% for QFT-GIT, and 65% and 70% for T-SPOT.TB. The cumulative incidence of subsequent active tuberculosis was 8.3% for QFT-GIT and 10% for T-SPOT.TB in patients who tested positive (one study each), and 0% for QFT-GIT (two studies) and T-SPOT.TB (one study) in those who tested negative. Pooled indeterminate rates were 8.2% for QFT-GIT and 5.9% for T-SPOT.TB. Rates were higher in high-burden settings (12.0% for QFT-GIT and 7.7% for T-SPOT.TB) than in low- to intermediate-burden settings (3.9% for QFT-GIT and 4.3% for T-SPOT.TB). They were also higher in patients with CD4+ T-cell counts <200 (11.6% for QFT-GIT and 11.4% for T-SPOT.TB) than in those with CD4+ T-cell counts ≥200 (3.1% for QFT-GIT and 7.9% for T-SPOT.TB). Conclusions/Significance: IGRAs have suboptimal accuracy for confirming or ruling out active tuberculosis disease in HIV-infected adults. While their predictive value for incident active tuberculosis is modest, a negative QFT-GIT implies a very low short- to medium-term risk.
Identifying the factors associated with indeterminate results will help to optimize the use of IGRAs in clinical practice, particularly in resource-limited countries with a high prevalence of HIV-coinfection.
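The sensitivity and specificity figures above are derived from 2×2 tables of test result against confirmed disease status. A toy illustration in Python — the counts are invented, merely chosen so the resulting rates echo the magnitudes reported for QFT-GIT:

```python
# Hypothetical 2x2 table: IGRA result vs. confirmed tuberculosis status.
# These counts are made up for illustration, not taken from the review.
tp, fn = 61, 39   # diseased patients testing positive / negative
tn, fp = 72, 28   # non-diseased patients testing negative / positive

sensitivity = tp / (tp + fn)   # proportion of true cases correctly flagged
specificity = tn / (tn + fp)   # proportion of non-cases correctly cleared
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```

A "suboptimal" test in this sense misses a substantial fraction of true cases (low sensitivity) while also flagging many non-cases (imperfect specificity), which is why the reviewers caution against using IGRAs alone to confirm or rule out active disease.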
Atherosclerosis screening by noninvasive imaging for cardiovascular prevention: a systematic review.
Abstract:
BACKGROUND: Noninvasive imaging of atherosclerosis is increasingly used in clinical practice, with some experts recommending screening all healthy adults for atherosclerosis and some jurisdictions mandating insurance coverage for atherosclerosis screening. Data on the impact of such screening have not been systematically synthesized. OBJECTIVES: We aimed to assess whether atherosclerosis screening improves cardiovascular risk factors (CVRF) and clinical outcomes. DESIGN: This study is a systematic review. DATA SOURCES: We searched MEDLINE and the Cochrane Clinical Trial Register without language restrictions. STUDY ELIGIBILITY CRITERIA: We included studies examining the impact of atherosclerosis screening with noninvasive imaging (e.g., carotid ultrasound, coronary calcification) on CVRF, cardiovascular events, or mortality in adults without cardiovascular disease. RESULTS: We identified four randomized controlled trials (RCT, n=709) and eight non-randomized studies comparing participants with evidence of atherosclerosis on screening to those without (n=2,994). In RCTs, atherosclerosis screening did not improve CVRF, but smoking cessation rates increased (18% vs. 6%, p=0.03) in one RCT. Non-randomized studies found improvements in several intermediate outcomes, such as increased motivation to change lifestyle and increased perception of cardiovascular risk. However, such data were conflicting and limited by the lack of a randomized control group. No studies examined the impact of screening on cardiovascular events or mortality. Heterogeneity in screening methods and studied outcomes did not permit pooling of results. CONCLUSION: Available evidence about atherosclerosis screening is limited, with mixed results on CVRF control, increased smoking cessation in one RCT, and no data on cardiovascular events. Such screening should be validated by large clinical trials before widespread use.
Abstract:
Lithium is an efficacious agent for the treatment of bipolar disorder, but it is unclear to what extent its long-term use may result in neuroprotective or toxic consequences. Medline was searched for the word 'lithium' combined with key words referring to every possible effect on the central nervous system. The papers were classified into those supporting a neuroprotective effect, those in favour of a neurotoxic effect, and those that were neutral. The papers were further classified into human research, animal and in-vitro research, case reports, and review/opinion articles. Finally, the Natural Standard evidence-based validated grading rationale was used to grade the data. The Medline search returned 970 papers up to February 2006. Inspection of the abstracts supplied 214 papers for further reviewing. Eighty-nine papers supported the neuroprotective effect (6 human research, 58 animal/in-vitro, 0 case reports, 25 review/opinion articles). A total of 116 papers supported the neurotoxic effect (17 human research, 23 animal/in-vitro, 60 case reports, 16 review/opinion articles). Nine papers supported neither hypothesis (5 human research, 3 animal/in-vitro, 0 case reports, 1 review/opinion article). Overall, the grading suggests that the data concerning the effect of lithium therapy merit level C, that is, 'unclear or conflicting scientific evidence', since there is conflicting evidence from uncontrolled non-randomized studies accompanied by conflicting evidence from animal and basic science studies. Although more papers are in favour of the toxic effect, the great difference in the types of papers that support either hypothesis, along with publication bias and methodological issues, makes conclusions difficult.
Lithium remains the 'gold standard' for the prophylaxis of bipolar illness; however, our review suggests that there is a rare possibility of a neurotoxic effect in real-life clinical practice, even in closely monitored patients with 'therapeutic' lithium plasma levels. It is desirable to keep lithium blood levels as low as is feasible for effective prophylaxis.
Abstract:
Osteoporosis is a systemic bone disease characterized by a generalized reduction of bone mass. It is the main cause of fractures in elderly women. Bone densitometry of the lumbar spine and hip is used to detect osteoporosis in its early stages. Different studies have observed a correlation between the bone mineral density (BMD) of the jaw and that of the lumbar spine and/or hip. Other studies evaluate the findings in orthopantomograms and periapical X-rays, correlating them with the early diagnosis of osteoporosis and highlighting the role of the dentist in the early diagnosis of this disease. Materials and methods: A search was carried out in the Medline-PubMed database to identify articles dealing with the association between the X-ray findings observed in orthopantomograms and the diagnosis of osteoporosis, as well as those dealing with the bone mineral density of the jaw. Results: There were 406 articles; with the limits established, this number was reduced to 21. Almost all of the articles indicate that, when examining oral X-rays, it is possible to detect signs indicative of osteoporosis. Discussion: The radiomorphometric indices use measurements in orthopantomograms to evaluate possible loss of bone mineral density. They can be analyzed alone or along with the visual indices. In periapical X-rays, the photodensitometric analyses and the trabecular pattern appear to be the most useful. Seven studies analyze the densitometry of the jaw, but only three do so independently of the photodensitometric analysis. Conclusions: The combination of mandibular indices with surveys on fracture risk can be useful for the early diagnosis of osteoporosis. Visual and morphometric indices appear to be especially important in orthopantomograms. Photodensitometric indices and the trabecular pattern are used in periapical X-rays.
Studies on mandibular dual-energy X-ray absorptiometry are inconclusive.
Abstract:
BACKGROUND: Artemether-lumefantrine is the most widely used artemisinin-based combination therapy for malaria, although treatment failures occur in some regions. We investigated the effect of dosing strategy on efficacy in a pooled analysis from trials done in a wide range of malaria-endemic settings. METHODS: We searched PubMed for clinical trials that enrolled and treated patients with artemether-lumefantrine and were published from 1960 to December, 2012. We merged individual patient data from these trials by use of standardised methods. The primary endpoint was the PCR-adjusted risk of Plasmodium falciparum recrudescence by day 28. Secondary endpoints consisted of the PCR-adjusted risk of P falciparum recurrence by day 42, PCR-unadjusted risk of P falciparum recurrence by day 42, early parasite clearance, and gametocyte carriage. Risk factors for PCR-adjusted recrudescence were identified using Cox's regression model with frailty shared across the study sites. FINDINGS: We included 61 studies done between January, 1998, and December, 2012, comprising 14 327 patients in our analyses. The PCR-adjusted therapeutic efficacy was 97·6% (95% CI 97·4-97·9) at day 28 and 96·0% (95·6-96·5) at day 42. After controlling for age and parasitaemia, patients prescribed a higher dose of artemether had a lower risk of having parasitaemia on day 1 (adjusted odds ratio [OR] 0·92, 95% CI 0·86-0·99 for every 1 mg/kg increase in daily artemether dose; p=0·024), but not on day 2 (p=0·69) or day 3 (p=0·087). In Asia, children weighing 10-15 kg who received a total lumefantrine dose less than 60 mg/kg had the lowest PCR-adjusted efficacy (91·7%, 95% CI 86·5-96·9). In Africa, the risk of treatment failure was greatest in malnourished children aged 1-3 years (PCR-adjusted efficacy 94·3%, 95% CI 92·3-96·3).
A higher artemether dose was associated with a lower gametocyte presence within 14 days of treatment (adjusted OR 0·92, 95% CI 0·85-0·99; p=0·037 for every 1 mg/kg increase in total artemether dose). INTERPRETATION: The recommended dose of artemether-lumefantrine provides reliable efficacy in most patients with uncomplicated malaria. However, therapeutic efficacy was lowest in young children from Asia and young underweight children from Africa; a higher dose regimen should be assessed in these groups. FUNDING: Bill & Melinda Gates Foundation.
Abstract:
CONTEXT: The current standard for diagnosing prostate cancer in men at risk relies on a transrectal ultrasound-guided biopsy test that is blind to the location of the cancer. To increase the accuracy of this diagnostic pathway, a software-based magnetic resonance imaging-ultrasound (MRI-US) fusion targeted biopsy approach has been proposed. OBJECTIVE: Our main objective was to compare the detection rate of clinically significant prostate cancer with software-based MRI-US fusion targeted biopsy against standard biopsy. The two strategies were also compared in terms of detection of all cancers, sampling utility and efficiency, and rate of serious adverse events. The outcomes of different targeted approaches were also compared. EVIDENCE ACQUISITION: We performed a systematic review of the PubMed/Medline, Embase (via Ovid), and Cochrane Review databases in December 2013 following the Preferred Reporting Items for Systematic Reviews and Meta-Analyses statement. The risk of bias was evaluated using the Quality Assessment of Diagnostic Accuracy Studies-2 tool. EVIDENCE SYNTHESIS: Fourteen papers reporting the outcomes of 15 studies (n=2293; range: 13-582) were included. We found that MRI-US fusion targeted biopsy detects more clinically significant cancers (median: 33.3% vs 23.6%; range: 13.2-50% vs 4.8-52%) using fewer cores (median: 9.2 vs 37.1) than standard biopsy techniques. Some studies showed a lower detection rate of all cancers (median: 50.5% vs 43.4%; range: 23.7-82.1% vs 14.3-59%). MRI-US fusion targeted biopsy was able to detect some clinically significant cancers that would have been missed by using only standard biopsy (median: 9.1%; range: 5-16.2%). It was not possible to determine which of the two biopsy approaches led to more serious adverse events because standard and targeted biopsies were performed in the same session.
Software-based MRI-US fusion targeted biopsy detected more clinically significant disease than visual targeted biopsy in the only study reporting on this outcome (20.3% vs 15.1%). CONCLUSIONS: Software-based MRI-US fusion targeted biopsy seems to detect more clinically significant cancers while deploying fewer cores than standard biopsy. Because there was significant study heterogeneity in patient inclusion, the definition of significant cancer, and the protocol used to conduct the standard biopsy, these findings need to be confirmed by further large multicentre validation studies. PATIENT SUMMARY: We compared the ability of standard biopsy to diagnose prostate cancer against a novel approach that uses software to overlay images from magnetic resonance imaging and ultrasound to guide biopsies towards suspicious areas of the prostate. The findings consistently showed the superiority of this novel targeted approach, although further high-quality evidence is needed to change current practice.
Abstract:
INTRODUCTION: There is conflicting evidence on the benefit of early transjugular intrahepatic portosystemic shunt (TIPSS) on the survival of patients with acute variceal bleeding (AVB). AIM: To assess the effect of early TIPSS on patient prognosis. MATERIALS AND METHODS: We carried out a meta-analysis of trials evaluating early TIPSS in cirrhotic patients with AVB. RESULTS: Four studies were included. Early TIPSS was associated with fewer deaths [odds ratio (OR)=0.38, 95% confidence interval (CI)=0.17-0.83, P=0.02], with moderate heterogeneity between studies (P=0.15, I²=44%). Early TIPSS was not significantly associated with fewer deaths among Child-Pugh B patients (OR=0.35, 95% CI=0.10-1.17, P=0.087) nor among Child-Pugh C patients (OR=0.34, 95% CI=0.10-1.11, P=0.074). There was no heterogeneity between studies in the Child-Pugh B analysis (P=0.6, I²=0%), but there was a high heterogeneity in the Child-Pugh C analysis (P=0.06, I²=60%). Early TIPSS was associated with lower rates of bleeding within 1 year (OR=0.08, 95% CI=0.04-0.17, P<0.001) both among Child-Pugh B patients (OR=0.15, 95% CI=0.05-0.47, P=0.001) and among Child-Pugh C patients (OR=0.05, 95% CI=0.02-0.15, P<0.001), with no heterogeneity between studies. Early TIPSS was not associated with higher rates of encephalopathy (OR=0.84, 95% CI=0.50-1.42, P=0.5). CONCLUSION: Cirrhotic patients with AVB treated with early TIPSS had lower death rates and lower rates of clinically significant bleeding within 1 year compared with patients treated without early TIPSS. Additional studies are required to identify the potential risk factors leading to a poor prognosis after early TIPSS in patients with AVB and to determine the impact of the degree of liver failure on the patient's prognosis.
Abstract:
AIMS: Published incidences of acute mountain sickness (AMS) vary widely. Reasons for this variation, and predictive factors of AMS, are not well understood. We aimed to identify predictive factors associated with the occurrence of AMS, and to test the hypothesis that study design is an independent predictive factor of AMS incidence. We did a systematic search (Medline, bibliographies) for relevant articles in English or French, up to April 28, 2013. Studies of any design reporting on AMS incidence in humans without prophylaxis were selected. Data on incidence and potential predictive factors were extracted by two reviewers and cross-checked by four reviewers. Associations between predictive factors and AMS incidence were sought through bivariate and multivariate analyses for different study designs separately. The association between AMS incidence and study design was assessed using multiple linear regression. RESULTS: We extracted data on 53,603 subjects from 34 randomized controlled trials, 44 cohort studies, and 33 cross-sectional studies. In randomized trials, the median of AMS incidences without prophylaxis was 60% (range, 16%-100%); mode of ascent and population were significantly associated with AMS incidence. In cohort studies, the median of AMS incidences was 51% (0%-100%); geographical location was significantly associated with AMS incidence. In cross-sectional studies, the median of AMS incidences was 32% (0%-68%); mode of ascent and maximum altitude were significantly associated with AMS incidence. In a multivariate analysis, study design (p=0.012), mode of ascent (p=0.003), maximum altitude (p<0.001), population (p=0.002), and geographical location (p<0.001) were significantly associated with AMS incidence. Age, sex, speed of ascent, duration of exposure, and history of AMS were inconsistently reported and therefore not further analyzed. CONCLUSIONS: Reported incidences and identifiable predictive factors of AMS depend on study design.
Abstract:
OBJECTIVE: Studies suggest that smoking may be a risk factor for the development of microvascular complications such as diabetic peripheral neuropathy (DPN). The objective of this study was to assess the relationship between smoking and DPN in persons with type 1 or type 2 diabetes. RESEARCH DESIGN AND METHODS: A systematic review of the PubMed, Embase, and Cochrane clinical trials databases was conducted for the period from January 1966 to November 2014 for cohort, cross-sectional, and case-control studies that assessed the relationship between smoking and DPN. Separate meta-analyses for prospective cohort studies and case-control or cross-sectional studies were performed using random effects models. RESULTS: Thirty-eight studies (10 prospective cohort and 28 cross-sectional) were included. The prospective cohort studies included 5558 participants without DPN at baseline. During follow-up ranging from 2 to 10 years, 1550 cases of DPN occurred. The pooled unadjusted odds ratio (OR) of developing DPN associated with smoking was 1.26 (95% CI 0.86-1.85; I² = 74%; evidence grade: low strength). Stratified analyses of the prospective studies revealed that studies of higher quality and with better levels of adjustment and longer follow-up showed a significant positive association between smoking and DPN, with less heterogeneity. The cross-sectional studies included 27,594 participants. The pooled OR of DPN associated with smoking was 1.42 (95% CI 1.21-1.65; I² = 65%; evidence grade: low strength). There was no evidence of publication bias. CONCLUSIONS: Smoking may be associated with an increased risk of DPN in persons with diabetes. Further studies are needed to test whether this association is causal and whether smoking cessation reduces the risk of DPN in adults with diabetes.
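The I² values reported alongside the pooled odds ratios above quantify how much of the variation across studies is due to between-study heterogeneity rather than chance. As a sketch of how that statistic is derived from Cochran's Q — using invented log odds ratios and standard errors, not the review's data:

```python
# Minimal I^2 heterogeneity computation from (log effect, standard error)
# pairs. The example studies are invented for illustration only.
def i_squared(effects):
    """I^2 (%) from Cochran's Q for a list of (log effect, SE) pairs."""
    y = [e for e, _ in effects]
    w = [1.0 / se ** 2 for _, se in effects]              # inverse-variance weights
    pooled = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
    q = sum(wi * (yi - pooled) ** 2 for wi, yi in zip(w, y))  # Cochran's Q
    df = len(effects) - 1
    return 100.0 * max(0.0, (q - df) / q) if q > 0 else 0.0

studies = [(0.35, 0.10), (0.05, 0.12), (0.60, 0.15), (-0.10, 0.11)]
i2 = i_squared(studies)
print(f"I^2 = {i2:.0f}%")  # share of variance attributable to heterogeneity
```

Values above roughly 50-75%, like the 65% and 74% quoted in the abstract, are conventionally read as substantial heterogeneity, which is one reason the authors grade the evidence as low strength.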
Abstract:
BACKGROUND: The measurement of calcitonin in washout fluid of thyroid nodule aspirates (FNA-calcitonin) has been reported as accurate for detecting medullary thyroid carcinoma (MTC). The results from these studies have been promising, and the most recent version of the ATA guidelines stated for the first time that "FNA findings that are inconclusive or suggestive of MTC should have calcitonin measured in the FNA washout fluid." Here we aimed to systematically review published data on this topic to provide more robust estimates. RESEARCH DESIGN AND METHODS: A comprehensive computer literature search of the medical databases was conducted by searching for the terms "calcitonin" AND "washout." The search was updated until April 2015. RESULTS: Twelve relevant studies, published between 2007 and 2014, were found. Overall, 413 thyroid nodules or neck lymph nodes underwent FNA-calcitonin; 95 were MTC lesions, and 93 (97.9%) of these were correctly detected by this measurement regardless of their cytologic report. CONCLUSIONS: The present study shows that the above ATA recommendation is well supported. Almost all MTC lesions are correctly detected by FNA-calcitonin, and this technique should be used to avoid false-negative or inconclusive results from cytology. The routine determination of serum calcitonin in patients undergoing FNA should improve the selection of patients at risk for MTC, guiding the use of FNA-calcitonin in the same FNA sample and providing useful information to the cytopathologist for the morphological assessment and the application of tailored ancillary tests.
Abstract:
BACKGROUND: The prognosis of patients with cirrhosis and acute variceal bleeding is very poor when the standard of care fails to control bleeding. New treatment modalities are needed in these patients. AIM: To synthesise the available evidence on the efficacy of self-expanding metal stents (SEMS) in patients with cirrhosis and severe or refractory oesophageal variceal bleeding. METHODS: Meta-analysis of trials evaluating SEMS in patients with cirrhosis and severe or refractory oesophageal variceal bleeding. RESULTS: Thirteen studies were included. The pooled estimate rates were 0.40 (95% confidence interval, CI = 0.31-0.49) for death, 0.41 (95% CI = 0.29-0.53) for liver-related death and 0.36 (95% CI = 0.26-0.47) for death at day 30, with low heterogeneity between studies. The pooled estimate rates were 0.12 (95% CI = 0.07-0.21) for mortality related to variceal bleeding, and 0.18 (95% CI = 0.11-0.29) for failure to control bleeding with SEMS, with no or low heterogeneity between studies. The pooled estimate rates were 0.16 (95% CI = 0.04-0.48) for rebleeding after stent removal and 0.28 (95% CI = 0.17-0.43) for stent migration, with high heterogeneity. A significant proportion of patients had access to liver transplantation or to TIPSS [pooled estimate rates 0.10 (95% CI = 0.04-0.21) and 0.26 (95% CI = 0.18-0.36), respectively]. CONCLUSIONS: Fewer than 40% of patients treated with SEMS were dead at 1 month. SEMS can be used as a bridge to TIPSS or to liver transplantation in a significant proportion of patients. Additional studies are required to identify potential risk factors leading to a poor prognosis in patients with acute variceal bleeding in whom the use of SEMS could be considered.
Abstract:
We systematically reviewed 25 randomised controlled trials of ultrasound-guided brachial plexus blockade that recruited 1948 participants, comparing either one approach with another (axillary, infraclavicular or supraclavicular) or one injection with multiple injections. There were no differences in the rates of successful blockade between approaches, relative risk (95% CI): axillary vs infraclavicular, 1.0 (1.0-1.1), p = 0.97; axillary vs supraclavicular, 1.0 (1.0-1.1), p = 0.68; and infraclavicular vs supraclavicular, 1.0 (1.0-1.1), p = 0.32. There was no difference in the rate of successful blockade with the number of injections, relative risk (95% CI) 1.0 (1.0-1.0), p = 0.69, for one vs multiple injections. The rate of procedural paraesthesia was lower with one injection than with multiple injections, relative risk (95% CI) 0.6 (0.4-0.9), p = 0.004.