105 results for The Evidence Base
Abstract:
Scholarship on the American Slave South generally agrees that John Elliott Cairnes's The Slave Power provided a highly biased interpretation of the functioning and long-term viability of the southern slave economy. Published shortly after the outbreak of the Civil War, its partisanship is partly attributed to its clearly stated goal of shifting British support from the secessionist states to the states of the Union. Thus, it is generally agreed, Cairnes sifted his sources to obtain the desired outcome; a more balanced use of the sources in his possession would have produced a very different picture. This paper challenges this general assessment of Cairnes's book by examining in some detail two of his most important sources: Frederick Law Olmsted's travelogues on the American Slave South and James D. B. De Bow's compilation of statistical data and essays in his Industrial Resources, etc., of the Southern and Western States (1852-53). By contrasting De Bow's use of statistical evidence with Olmsted's travelogues, my final purpose is to question the weight of evidence on the American Slave South. Cairnes aimed, I will argue, much more to balance the evidence than is generally acknowledged, but it is misleading to think that balancing a wide range of evidence washes out bias if that evidence is itself politically skewed, as is the rule rather than the exception.
Abstract:
BACKGROUND: Clinical guidelines are essential in implementing and maintaining nationwide stage-specific diagnostic and therapeutic standards. In 2011, the first German expert consensus guideline defined the evidence for diagnosis and treatment of early and locally advanced esophagogastric cancers. Here, we compare this guideline with other national guidelines as well as current literature. METHODS: The German S3-guideline used an approved development process with de novo literature research, international guideline adaptation, or good clinical practice. Other recent evidence-based national guidelines and current references were compared with German recommendations. RESULTS: In the German S3 and other Western guidelines, adenocarcinomas of the esophagogastric junction (AEG) are classified according to formerly defined AEG I-III subgroups due to the high surgical impact. To stage local disease, computed tomography of the chest and abdomen and endosonography are reinforced. In contrast, laparoscopy is optional for staging. Mucosal cancers (T1a) should be endoscopically resected "en-bloc" to allow complete histological evaluation of lateral and basal margins. For locally advanced cancers of the stomach or esophagogastric junction (≥T3N+), preferred treatment is preoperative and postoperative chemotherapy. Preoperative radiochemotherapy is an evidence-based alternative for large AEG type I-II tumors (≥T3N+). Additionally, some experts recommend treating T2 tumors with a similar approach, mainly because pretherapeutic staging is often considered to be unreliable. CONCLUSIONS: The German S3 guideline represents an up-to-date European position with regard to diagnosis, staging, and treatment recommendations for patients with locally advanced esophagogastric cancer. Effects of perioperative chemotherapy versus chemoradiotherapy are still to be investigated for adenocarcinoma of the cardia and the lower esophagus.
Abstract:
Resection of midline skull base lesions involves approaches requiring extensive neurovascular manipulation. The transnasal endoscopic approach (TEA) is minimally invasive and ideal for certain selected lesions of the anterior skull base. A thorough knowledge of endonasal endoscopic anatomy is essential to master its surgical applications, and this is possible only through dedicated cadaveric dissection. The goal of this study was to understand the endoscopic anatomy of the orbital apex, the petrous apex and the pterygopalatine fossa. Six cadaveric heads (3 injected and 3 non-injected), i.e. 12 sides, were dissected using a TEA, systematically outlining the steps of surgical dissection and the landmarks encountered. Dissection by the "2 nostril, 4 hands" technique allows better transnasal instrumentation, with two surgeons working in unison. The main surgical landmarks for the orbital apex are the carotid artery protuberance in the lateral sphenoid wall, the optic nerve canal, the lateral optico-carotid recess, the optic strut and the V2 nerve. The orbital apex includes the structures passing through the superior and inferior orbital fissures and the optic nerve canal. The vidian nerve canal and V2 are important landmarks for the petrous apex. Identification of the sphenopalatine artery, V2 and the foramen rotundum is important during dissection of the pterygopalatine fossa. In conclusion, the major potential advantage of the TEA to the skull base is that it provides a direct anatomical route to the lesion without traversing any major neurovascular structures, as opposed to open transcranial approaches, which involve more neurovascular manipulation and brain retraction. These approaches, of course, require close cooperation and collaboration between otorhinolaryngologists and neurosurgeons.
Abstract:
This work focused on developing a methodology for using the chemical characteristics of tire traces to help answer the following question: "Is the offending tire at the origin of the trace found at the crime scene?". The methodology runs from sampling the trace on the road to statistical analysis of its chemical characteristics. Knowledge about the composition and manufacture of tire treads, together with a review of the instrumental techniques used for the analysis of polymeric materials, led to the selection of pyrolysis coupled to a gas chromatograph with a mass spectrometry detector (Py-GC/MS) as the analytical technique for this research. An analytical method was developed and optimized to obtain the lowest variability between replicates of the same sample. Within-tread variability was evaluated across the width and circumference of the tread, using several samples taken from twelve tires of different brands and/or models. The variability within each tread (within-variability) and between treads (between-variability) could be quantified. Different statistical methods showed that within-variability is lower than between-variability, which made it possible to differentiate these tires. Ten tire traces were produced by braking tests with tires of different brands and/or models. These traces were adequately sampled using gelatine sheets. Particles of each trace were analysed using the same methodology as for the tires at their origin. The general chemical profile of a trace or of a tire was characterized by eighty-six compounds. Based on a statistical comparison of the chemical profiles obtained, it was shown that a tire trace is not differentiable from the tire at its origin but is generally differentiable from tires that are not at its origin. Thereafter, a sample of sixty tires was analysed to assess the discrimination potential of the developed methodology.
The statistical results showed that most tires of different brands and models are differentiable; the developed methodology thus shows good discrimination potential. However, tires of the same brand and model with identical characteristics, such as country of manufacture, size and DOT number, are not differentiable. A model based on a likelihood ratio approach was chosen to evaluate the results of the comparisons between the chemical profiles of the traces and the tires. The methodology was finally blind-tested using three simulated scenarios, each involving a trace from an unknown tire and two tires possibly at its origin. The correct results obtained for all three scenarios validated the developed methodology. The different steps of this work provided the information required to test and validate the underlying assumption that a statistical comparison of chemical profiles can help determine whether an offending tire is or is not at the origin of a trace. This aid was formalized as a measure of the probative value of the evidence, represented by the chemical profile of the tire trace.
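The likelihood ratio evaluation described above can be illustrated with a minimal sketch. This is a toy univariate normal model for within-tire and between-tire variability (the actual study compared multivariate chemical profiles of eighty-six compounds, and all numbers below are hypothetical):

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution (toy within/between-source model)."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def likelihood_ratio(trace_value, tire_mean, within_sd, population_mean, between_sd):
    """LR = P(evidence | trace comes from this tire)
         / P(evidence | trace comes from some other tire in the population)."""
    numerator = normal_pdf(trace_value, tire_mean, within_sd)      # same-source hypothesis
    denominator = normal_pdf(trace_value, population_mean, between_sd)  # different-source hypothesis
    return numerator / denominator

# A trace value close to the suspect tire's mean, with within-variability
# smaller than between-variability, yields LR > 1 (support for same source).
lr = likelihood_ratio(trace_value=10.2, tire_mean=10.0, within_sd=0.5,
                      population_mean=12.0, between_sd=2.0)
```

Because within-variability was found to be lower than between-variability, a trace matching its source tire tends to produce an LR above 1, which is the sense in which the chemical profile carries probative value.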
Abstract:
Introduction: Fragile X syndrome (FXS) is the most common inherited cause of intellectual disability. With no curative treatment available, current therapeutic approaches are aimed at symptom management. FXS is caused by silencing of the FMR1 gene, which encodes FMRP; loss of FMRP leads to the development of the symptoms associated with FXS. Areas covered: In this evaluation, the authors examine the role of the metabotropic glutamate receptor 5 (mGluR5) in the pathophysiology of FXS, and its suitability as a target for rescuing the disease state. Furthermore, the authors review the evidence from preclinical studies of pharmacological interventions targeting mGluR5 in FXS. Lastly, the authors assess the findings from clinical studies in FXS, in particular the use of the Aberrant Behavior Checklist-Community Edition (ABC-C) and the recently developed ABC-C for FXS scale, as clinical endpoints to assess disease modification in this patient population. Expert opinion: There is cautious optimism for the successful treatment of the core behavioral and cognitive symptoms of FXS based on preclinical data in animal models and early studies in humans. However, the association between mGluR5-heightened responsiveness and the clinical phenotype in humans remains to be demonstrated. Many questions regarding the optimal treatment and outcome measures of FXS remain unanswered.
Abstract:
The question of why some social systems have evolved close inbreeding is particularly intriguing given expected short- and long-term negative effects of this breeding system. Using social spiders as a case study, we quantitatively show that the potential costs of avoiding inbreeding through dispersal and solitary living could have outweighed the costs of inbreeding depression in the origin of inbred spider sociality. We further review the evidence that despite being favored in the short term, inbred spider sociality may constitute in the long run an evolutionary dead end. We also review other cases, such as the naked mole rats and some bark and ambrosia beetles, mites, psocids, thrips, parasitic ants, and termites, in which inbreeding and sociality are associated and the evidence for and against this breeding system being, in general, an evolutionary dead end.
Abstract:
Challenging environmental conditions, including heat and humidity, cold, and altitude, pose particular risks to the health of Olympic and other high-level athletes. As a further commitment to athlete safety, the International Olympic Committee (IOC) Medical Commission convened a panel of experts to review the scientific evidence base, reach consensus, and underscore practical safety guidelines and new research priorities regarding the unique environmental challenges Olympic and other international-level athletes face. For non-aquatic events, external thermal load depends on ambient temperature, humidity, wind speed and solar radiation, while clothing and protective gear can measurably increase thermal strain and prompt premature fatigue. In swimmers, body heat loss is the direct result of convection, at a rate proportional to the effective water velocity around the swimmer and the temperature difference between the skin and the water. Cold exposure and conditions during events such as Alpine skiing, biathlon and other sliding sports facilitate body heat transfer to the environment, potentially leading to hypothermia and/or frostbite, although metabolic heat production during these activities usually rises well above the rate of body heat loss, and protective clothing and limited exposure time in certain events further reduce these clinical risks. Most athletic events are held at altitudes that pose little to no health risk, and training exposures are typically brief and well tolerated. While these and other environment-related threats to performance and safety can be lessened or averted by a variety of individual and event-level preventive measures, more research and evidence-based guidelines and recommendations are needed. In the meantime, the IOC Medical Commission and the International Sport Federations have implemented new guidelines and taken additional steps to mitigate risk even further.
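The convective heat-loss relationship for swimmers noted above can be sketched with Newton's law of cooling; the coefficient and surface-area values below are purely illustrative assumptions (the convective coefficient h rises with the effective water velocity around the swimmer):

```python
def convective_heat_loss(h, area_m2, t_skin_c, t_water_c):
    """Newton's law of cooling: heat-loss rate (W) = h * A * (T_skin - T_water).
    h is the convective heat-transfer coefficient in W/(m^2*K); it increases
    with effective water velocity, so faster relative flow means faster cooling."""
    return h * area_m2 * (t_skin_c - t_water_c)

# Illustrative values only: h = 580 W/(m^2*K) for moving water,
# body surface area 1.8 m^2, skin 33 degC, water 26 degC.
q_watts = convective_heat_loss(580, 1.8, 33.0, 26.0)
```

The sketch makes the abstract's point concrete: the loss rate scales linearly with the skin-water temperature difference, and vanishes when skin and water temperatures are equal.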
Abstract:
CONTEXT: New trial data and drug regimens that have become available in the last 2 years warrant an update to guidelines for antiretroviral therapy (ART) in human immunodeficiency virus (HIV)-infected adults in resource-rich settings. OBJECTIVE: To provide current recommendations for the treatment of adult HIV infection with ART and use of laboratory-monitoring tools. Guidelines include when to start therapy and with what drugs, monitoring for response and toxic effects, special considerations in therapy, and managing antiretroviral failure. DATA SOURCES, STUDY SELECTION, AND DATA EXTRACTION: Data that had been published or presented in abstract form at scientific conferences in the past 2 years were systematically searched and reviewed by an International Antiviral Society-USA panel. The panel reviewed available evidence and formed recommendations by full panel consensus. DATA SYNTHESIS: Treatment is recommended for all adults with HIV infection; the strength of the recommendation and the quality of the evidence increase with decreasing CD4 cell count and the presence of certain concurrent conditions. Recommended initial regimens include 2 nucleoside reverse transcriptase inhibitors (tenofovir/emtricitabine or abacavir/lamivudine) plus a nonnucleoside reverse transcriptase inhibitor (efavirenz), a ritonavir-boosted protease inhibitor (atazanavir or darunavir), or an integrase strand transfer inhibitor (raltegravir). Alternatives in each class are recommended for patients with or at risk of certain concurrent conditions. CD4 cell count and HIV-1 RNA level should be monitored, as should engagement in care, ART adherence, HIV drug resistance, and quality-of-care indicators. Reasons for regimen switching include virologic, immunologic, or clinical failure and drug toxicity or intolerance. Confirmed treatment failure should be addressed promptly and multiple factors considered. 
CONCLUSION: New recommendations for HIV patient care include offering ART to all patients regardless of CD4 cell count, changes in therapeutic options, and modifications in the timing and choice of ART in the setting of opportunistic illnesses such as cryptococcal disease and tuberculosis.
Abstract:
Quantitative ultrasound (QUS) appears to be developing into an acceptable, low-cost and readily accessible alternative to dual X-ray absorptiometry (DXA) measurement of bone mineral density (BMD) in the detection and management of osteoporosis. Perhaps the major difficulty with widespread use is that many different QUS devices exist that differ substantially from each other in the parameters they measure and the strength of empirical evidence supporting their use. Another problem is that virtually no data exist outside Caucasian or Asian populations. In general, heel QUS appears to be the most tested and most effective. Some, but not all, heel QUS devices are effective in assessing fracture risk in some, but not all, populations; the evidence is strongest for Caucasian females > 55 years old, though some evidence exists for Asian females > 55 and for Caucasian and Asian males > 70. Certain devices may allow estimation of the likelihood of osteoporosis, but very limited evidence supports the use of QUS in initiating or monitoring osteoporosis treatment. QUS is likely most effective when combined with an assessment of clinical risk factors (CRF), with DXA reserved for individuals who are not identified as either high or low risk by QUS and CRF. However, monitoring and maintenance of test and instrument accuracy, precision and reproducibility are essential if QUS devices are to be used in clinical practice, and further research in non-Caucasian, non-Asian populations is clearly required to validate this tool for more widespread use.
Abstract:
An African oxalogenic tree, the iroko tree (Milicia excelsa), has the property of enhancing carbonate precipitation in tropical oxisols, where such accumulations are not expected due to the acidic conditions in these types of soils. This uncommon process is linked to the oxalate-carbonate pathway, which increases soil pH through oxalate oxidation. In order to investigate the oxalate-carbonate pathway in the iroko system, fluxes of matter have been identified, described, and evaluated from field to microscopic scales. In the first centimeters of the soil profile, decay of the organic matter allows the release of whewellite crystals, mainly due to the action of termites and saprophytic fungi. In addition, a concomitant flux of carbonate formed in wood tissues contributes to the carbonate flux and is identified as a direct consequence of wood feeding by termites. Nevertheless, calcite biomineralization of the tree is not a consequence of in situ oxalate consumption, but rather related to oxalate oxidation in the upper part of the soil. The consequence of this oxidation is the presence of carbonate ions in the soil solution pumped through the roots, leading to preferential mineralization of the roots and the trunk base. An ideal scenario for iroko biomineralization and soil carbonate accumulation starts with oxalatization: as the iroko tree grows, the organic matter flux to the soil constitutes the litter, and an oxalate pool is formed on the forest ground. Then, wood-rotting agents (mainly termites, saprophytic fungi, and bacteria) release significant amounts of oxalate crystals from decaying plant tissues. In addition, some of these agents are themselves producers of oxalate (e.g. fungi). Both processes contribute to a soil pool of "available" oxalate crystals. Oxalate consumption by oxalotrophic bacteria can then start. Carbonate and calcium ions present in the soil solution represent the end products of the oxalate-carbonate pathway.
The solution is pumped through the roots, leading to carbonate precipitation. The main pools of carbon are clearly identified as the organic matter (the tree and its organic products), the oxalate crystals, and the various carbonate features. A functional model based on field observations and diagenetic investigations with δ13C signatures of the various compartments involved in the local carbon cycle is proposed. It suggests that the iroko ecosystem can act as a long-term carbon sink, as long as the calcium source is related to non-carbonate rocks. Consequently, this carbon sink, driven by the oxalate carbonate pathway around an iroko tree, constitutes a true carbon trapping ecosystem as defined by ecological theory.
Abstract:
Since the management of atrial fibrillation may be difficult in the individual patient, our purpose was to develop simple clinical recommendations to help the general internist manage this common clinical problem. Systematic review of the literature with evaluation of data-related evidence and framing of graded recommendations. Atrial fibrillation affects some 1% of the population in Western countries and is linked to a significant increase in morbidity and mortality. The management of atrial fibrillation requires individualised evaluation of the risks and benefits of therapeutic modalities, relying whenever possible on simple and validated tools. The two main points requiring a decision in clinical management are 1) whether or not to implement thromboembolic prevention therapy, and 2) whether preference should be given to a "rate control" or "rhythm control" strategy. Thromboembolic prophylaxis should be prescribed after individualised risk assessment: for patients at risk, oral anticoagulation with warfarin decreases the rate of embolic complications by 60% and aspirin by 20%, at the expense of an increased incidence of haemorrhagic complications. "Rate control" and "rhythm control" strategies are probably equivalent, and the choice should also be made on an individualised basis. To assist the physician in making his choices for the care of an atrial fibrillation patient we propose specific tables and algorithms, with graded recommendations. On the evidence of data from the literature we propose simple algorithms and tables for the clinical management of atrial fibrillation in the individual patient.
Multimodel inference and multimodel averaging in empirical modeling of occupational exposure levels.
Abstract:
Empirical modeling of exposure levels has been popular for identifying exposure determinants in occupational hygiene. Traditional data-driven methods used to choose a model on which to base inferences have typically not accounted for the uncertainty linked to the process of selecting the final model. Several new approaches propose making statistical inferences from a set of plausible models rather than from a single model regarded as 'best'. This paper introduces the multimodel averaging approach described in the monograph by Burnham and Anderson. In their approach, a set of plausible models is defined a priori by taking into account the sample size and previous knowledge of variables that influence exposure levels. The Akaike information criterion is then calculated to evaluate the relative support of the data for each model, expressed as an Akaike weight, interpreted as the probability that the model is the best approximating model given the model set. The model weights can then be used to rank models, quantify the evidence favoring one over another, perform multimodel prediction, estimate the relative influence of the potential predictors and estimate multimodel-averaged effects of determinants. The approach is illustrated with the analysis of a data set of 1500 volatile organic compound exposure levels collected by the Institute for Work and Health (Lausanne, Switzerland) over 20 years, each concentration having been divided by the relevant Swiss occupational exposure limit and log-transformed before analysis. Multimodel inference represents a promising procedure for modeling exposure levels: it incorporates the notion that several models can be supported by the data and permits evaluation, to some extent, of model selection uncertainty, which is seldom mentioned in current practice.
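The Akaike-weight computation at the core of Burnham and Anderson's approach can be sketched as follows; the AIC values and effect estimates in the example are hypothetical:

```python
import math

def akaike_weights(aic_values):
    """Akaike weights (Burnham & Anderson): Delta_i = AIC_i - min(AIC);
    w_i = exp(-Delta_i / 2) / sum_j exp(-Delta_j / 2).
    w_i is read as the probability that model i is the best
    approximating model given the model set."""
    best = min(aic_values)
    rel_likelihoods = [math.exp(-0.5 * (a - best)) for a in aic_values]
    total = sum(rel_likelihoods)
    return [r / total for r in rel_likelihoods]

def model_averaged_estimate(estimates, weights):
    """Multimodel-averaged effect of a determinant across the model set."""
    return sum(w * e for w, e in zip(weights, estimates))

# Hypothetical model set: lower AIC -> higher weight.
weights = akaike_weights([100.0, 102.0, 110.0])
avg_effect = model_averaged_estimate([1.0, 2.0, 3.0], [0.5, 0.3, 0.2])
```

The weights sum to 1 and decrease with increasing AIC, so ranking models, comparing evidence ratios, and averaging a determinant's effect across models all fall out of the same quantities.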
Abstract:
BACKGROUND: Recombinant human insulin-like growth factor I (rhIGF-I) is a possible disease modifying therapy for amyotrophic lateral sclerosis (ALS, which is also known as motor neuron disease (MND)). OBJECTIVES: To examine the efficacy of rhIGF-I in affecting disease progression, impact on measures of functional health status, prolonging survival and delaying the use of surrogates (tracheostomy and mechanical ventilation) to sustain survival in ALS. Occurrence of adverse events was also reviewed. SEARCH METHODS: We searched the Cochrane Neuromuscular Disease Group Specialized Register (21 November 2011), CENTRAL (2011, Issue 4), MEDLINE (January 1966 to November 2011) and EMBASE (January 1980 to November 2011) and sought information from the authors of randomised clinical trials and manufacturers of rhIGF-I. SELECTION CRITERIA: We considered all randomised controlled clinical trials involving rhIGF-I treatment of adults with definite or probable ALS according to the El Escorial Criteria. The primary outcome measure was change in Appel Amyotrophic Lateral Sclerosis Rating Scale (AALSRS) total score after nine months of treatment and secondary outcome measures were change in AALSRS at 1, 2, 3, 4, 5, 6, 7, 8, 9 months, change in quality of life (Sickness Impact Profile scale), survival and adverse events. DATA COLLECTION AND ANALYSIS: Each author independently graded the risk of bias in the included studies. The lead author extracted data and the other authors checked them. We generated some missing data by making ruler measurements of data in published graphs. We collected data about adverse events from the included trials. MAIN RESULTS: We identified three randomised controlled trials (RCTs) of rhIGF-I, involving 779 participants, for inclusion in the analysis. In a European trial (183 participants) the mean difference (MD) in change in AALSRS total score after nine months was -3.30 (95% confidence interval (CI) -8.68 to 2.08). 
In a North American trial (266 participants), the MD after nine months was -6.00 (95% CI -10.99 to -1.01). The combined analysis of both RCTs showed an MD after nine months of -4.75 (95% CI -8.41 to -1.09), a significant difference in favour of the treated group. The secondary outcome measures showed non-significant trends favouring rhIGF-I. There was an increased risk of injection site reactions with rhIGF-I (risk ratio 1.26, 95% CI 1.04 to 1.54). A second North American trial (330 participants) used a novel primary end point involving manual muscle strength testing. No differences were demonstrated between the treated and placebo groups in this study. All three trials were at high risk of bias. AUTHORS' CONCLUSIONS: Meta-analysis revealed a significant difference in favour of rhIGF-I treatment; however, the quality of the evidence from the two included trials was low. A third study showed no difference between treatment and placebo. There is no evidence of an increase in survival with rhIGF-I. All three included trials were at high risk of bias.
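The combined analysis above is consistent with standard inverse-variance fixed-effect pooling, which can be sketched as follows; each trial's standard error is recovered from its reported 95% confidence interval:

```python
import math

def se_from_ci(lower, upper, z=1.96):
    """Recover the standard error of a mean difference from its 95% CI:
    the CI half-width equals z * SE."""
    return (upper - lower) / (2 * z)

def fixed_effect_pool(mds, ses, z=1.96):
    """Inverse-variance fixed-effect pooling of mean differences:
    weight each trial by 1/SE^2, then take the weighted mean."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * md for w, md in zip(weights, mds)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, (pooled - z * pooled_se, pooled + z * pooled_se)

# The two nine-month trials reported above:
ses = [se_from_ci(-8.68, 2.08), se_from_ci(-10.99, -1.01)]
pooled_md, pooled_ci = fixed_effect_pool([-3.30, -6.00], ses)
# Reproduces the reported combined result: -4.75 (95% CI -8.41 to -1.09).
```

Running the sketch on the two reported trial results returns the pooled MD of -4.75 with 95% CI -8.41 to -1.09, matching the combined analysis quoted in the abstract.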
Abstract:
The aging process is associated with gradual and progressive loss of muscle mass along with lowered strength and physical endurance. This condition, sarcopenia, has been widely observed with aging in sedentary adults. Regular aerobic and resistance exercise programs have been shown to counteract most aspects of sarcopenia. In addition, good nutrition, especially adequate protein and energy intake, can help limit and treat age-related declines in muscle mass, strength, and functional abilities. Protein nutrition in combination with exercise is considered optimal for maintaining muscle function. With the goal of providing recommendations for health care professionals to help older adults sustain muscle strength and function into older age, the European Society for Clinical Nutrition and Metabolism (ESPEN) hosted a Workshop on Protein Requirements in the Elderly, held in Dubrovnik on November 24 and 25, 2013. Based on the evidence presented and discussed, the following recommendations are made: (a) for healthy older people, the diet should provide at least 1.0-1.2 g protein/kg body weight/day; (b) for older people who are malnourished or at risk of malnutrition because they have acute or chronic illness, the diet should provide 1.2-1.5 g protein/kg body weight/day, with even higher intake for individuals with severe illness or injury; and (c) daily physical activity or exercise (resistance training, aerobic exercise) should be undertaken by all older people, for as long as possible.
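The ESPEN intake ranges translate into simple per-person arithmetic; a minimal sketch (the function name is illustrative, and the ranges are exactly those stated above):

```python
def daily_protein_range_g(body_weight_kg, malnourished_or_ill=False):
    """ESPEN workshop ranges: 1.0-1.2 g protein/kg body weight/day for
    healthy older adults; 1.2-1.5 g/kg/day for those malnourished or at
    risk of malnutrition due to acute or chronic illness (even higher
    intakes may apply in severe illness or injury)."""
    low, high = (1.2, 1.5) if malnourished_or_ill else (1.0, 1.2)
    return body_weight_kg * low, body_weight_kg * high

# Example: a healthy 70 kg older adult should aim for roughly 70-84 g/day.
healthy_range = daily_protein_range_g(70)
ill_range = daily_protein_range_g(70, malnourished_or_ill=True)
```

For a 70 kg person this gives roughly 70-84 g/day when healthy and 84-105 g/day during illness or malnutrition risk, which is how the per-kilogram recommendations are intended to be applied.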
Abstract:
Background: Entomopathogenic nematodes (EPNs) are tiny worms that parasitize insects, in which they reproduce. Their foraging behavior has been the subject of numerous studies, most of which have proposed that, at short distances, EPNs use chemicals emitted directly from the host as host-location cues. Carbon dioxide (CO2) in particular has been implicated as an important cue. Recent evidence shows that at longer distances several EPNs take advantage of volatiles that are specifically emitted by roots in response to insect attack. Studies that have revealed these plant-mediated interactions among three trophic levels have been met with some disbelief. Scope: This review aims to dispel this skepticism by summarizing the evidence for a role of root volatiles as foraging cues for EPNs. To reinforce our argument, we conducted olfactometer assays in which we directly compared the attraction of an EPN species to CO2 and to two typical inducible root volatiles. Conclusions: The combination of the ubiquitous gas and a more specific root volatile was found to be considerably more attractive than either alone. Hence, future studies on EPN foraging behavior should take into account that CO2 and plant volatiles may work in synergy as attractants for EPNs. Recent research efforts also reveal prospects for exploiting plant-produced signals to improve the biological control of insect pests in the rhizosphere.