246 results for LIKELIHOOD METHODS
Abstract:
Question: When multiple observers record the same spatial units of alpine vegetation, how much variation is there in the records, and what are the consequences of this variation for monitoring schemes designed to detect change? Location: One test summit in Switzerland (Alps) and one test summit in Scotland (Cairngorm Mountains). Method: Eight observers used the GLORIA protocols to record species composition and visual cover estimates (in percent) on large summit sections (>100 m²), and species composition and frequency in nested quadrats (1 m²). Results: The multiple records from the same spatial units showed considerable variation in both countries, for species composition as well as species cover. Estimates of pseudoturnover of composition and coefficients of variation of cover estimates for vascular plant species in 1 m × 1 m quadrats showed less variation than previously published reports, whereas our results for the larger sections were broadly in line with previous reports. In Scotland, estimates for bryophytes and lichens were more variable than for vascular plants. Conclusions: Statistical power calculations indicated that, unless large numbers of plots were used, changes in cover or frequency were only likely to be detected for abundant species (exceeding 10% cover) or where relative changes were large (50% or more). Lower variation could be achieved with point methods and with larger numbers of small plots. However, as summits often differ strongly from each other, additional summits cannot be regarded as a way of increasing statistical power without introducing an additional component of variance into the analysis and hence into the power calculations.
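A minimal sketch of the kind of power calculation the conclusions refer to, framed here as a two-sample t-test with between-observer variability expressed as a coefficient of variation; the relative changes and CV values below are illustrative placeholders, not the study's data.

```python
# Sketch: how many plots might be needed to detect a relative change in cover,
# given between-observer variability expressed as a coefficient of variation (CV)?
# Illustrative numbers only -- not the values reported in the study.
from statsmodels.stats.power import TTestIndPower

def plots_needed(relative_change, cv, alpha=0.05, power=0.8):
    """Repeat-survey comparison approximated as a two-sample t-test.

    relative_change: e.g. 0.5 for a 50% change in mean cover
    cv: coefficient of variation of cover estimates (sd / mean)
    """
    # Standardised effect size: (change in mean) / sd = relative_change / cv
    d = relative_change / cv
    return TTestIndPower().solve_power(effect_size=d, alpha=alpha,
                                       power=power, alternative='two-sided')

for rel_change in (0.25, 0.5):
    for cv in (0.3, 0.6):          # hypothetical CVs of cover estimates
        n = plots_needed(rel_change, cv)
        print(f"relative change {rel_change:.0%}, CV {cv:.0%}: "
              f"~{n:.0f} plots per survey")
```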
Abstract:
The relationship between electrophysiological and functional magnetic resonance imaging (fMRI) signals remains poorly understood. To date, studies have required invasive methods and have been limited to single functional regions, and thus cannot account for possible variations across brain regions. Here we present a method that uses fMRI data and single-trial electroencephalography (EEG) analyses to assess the spatial and spectral dependencies between the blood-oxygenation-level-dependent (BOLD) responses and the noninvasively estimated local field potentials (eLFPs) over a wide range of frequencies (0-256 Hz) throughout the entire brain volume. This method was applied in a study in which human subjects completed separate fMRI and EEG sessions while performing a passive visual task. Intracranial LFPs were estimated from the scalp-recorded data using the ELECTRA source model. We compared statistical images from BOLD signals with statistical images of each frequency of the eLFPs. In agreement with previous studies in animals, we found a significant correspondence between LFP and BOLD statistical images in the gamma band (44-78 Hz) within primary visual cortices. In addition, significant correspondence was observed at low frequencies (<14 Hz) and at very high frequencies (>100 Hz). Effects within extrastriate visual areas showed a different correspondence that included not only the frequency ranges observed in primary cortices but also additional frequencies. The results therefore suggest that the relationship between electrophysiological and hemodynamic signals might vary as a function of both frequency and anatomical region.
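A minimal numpy sketch of the core comparison step described here: correlating a voxelwise BOLD statistical map with per-frequency eLFP statistical maps inside a region mask. The array shapes, variable names, and random data are assumptions for illustration, not the authors' pipeline.

```python
# Sketch: spatial correspondence between a BOLD statistical map and
# eLFP statistical maps at each frequency, within a region-of-interest mask.
# Shapes and variable names are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n_vox, n_freq = 5000, 129                 # e.g. voxels in a V1 mask, 0-256 Hz in 2-Hz bins
bold_map = rng.standard_normal(n_vox)     # BOLD t-values, one per voxel
elfp_maps = rng.standard_normal((n_freq, n_vox))  # eLFP t-values per frequency

def spatial_correlation(bold, elfp):
    """Pearson correlation between the BOLD map and each frequency's eLFP map."""
    bold_z = (bold - bold.mean()) / bold.std()
    elfp_z = (elfp - elfp.mean(axis=1, keepdims=True)) / elfp.std(axis=1, keepdims=True)
    return elfp_z @ bold_z / bold.size    # one correlation per frequency

r_per_freq = spatial_correlation(bold_map, elfp_maps)
freqs = np.linspace(0, 256, n_freq)
print("peak correspondence near", freqs[np.argmax(r_per_freq)], "Hz")
```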
Abstract:
PURPOSE: The objective of this study was to investigate the effects of weather, rank, and home advantage on international football match results and scores in the Gulf Cooperation Council (GCC) region. METHODS: Football matches (n = 2008) in six GCC countries were analyzed. To determine the influence of weather on the likelihood of a favorable outcome and on goal difference, a generalized linear model with a logit link function and multiple regression analysis were performed. RESULTS: In the GCC region, home teams tend to have a greater likelihood of a favorable outcome (P < 0.001) and a higher goal difference (P < 0.001). Temperature difference was identified as a significant explanatory variable when used independently (P < 0.001) or after adjustment for home advantage and team ranking (P < 0.001). The likelihood of a favorable outcome for GCC teams increases by 3% for every 1-unit increase in temperature difference. After inclusion of an interaction with opposition, this advantage remains significant only when playing against non-GCC opponents. While home advantage increased the odds of a favorable outcome (P < 0.001) and goal difference (P < 0.001) after inclusion of the interaction term, the likelihood of a favorable outcome for a GCC team decreased (P < 0.001) when playing against a stronger opponent. Finally, temperature and the wet bulb globe temperature approximation were found to be better indicators of the effect of environmental conditions on match outcomes than absolute humidity, relative humidity, or the heat index. CONCLUSIONS: In the GCC region, higher temperature increased the likelihood of a favorable outcome when playing against non-GCC teams. However, international ranking should be considered, because an opponent with a higher rank reduced, but did not eliminate, the likelihood of a favorable outcome.
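A minimal sketch of the logit-link model the Methods describe, fitted with statsmodels; the data frame, column names, and synthetic values are hypothetical stand-ins, not the study's dataset.

```python
# Sketch: modelling the probability of a favourable outcome with a logit link,
# as described in the Methods. Column names and data are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "favourable": rng.integers(0, 2, n),   # 1 = favourable outcome for the GCC team
    "temp_diff": rng.normal(0, 5, n),      # temperature difference (deg C)
    "home": rng.integers(0, 2, n),         # 1 = home match
    "rank_diff": rng.normal(0, 100, n),    # ranking difference vs opponent
})

model = smf.logit("favourable ~ temp_diff + home + rank_diff", data=df).fit(disp=0)

# A coefficient b translates into an odds change of exp(b) per unit increase;
# the reported "3% per 1-unit increase" corresponds to exp(b) of roughly 1.03.
print(np.exp(model.params))
```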
Abstract:
Background: The Pulmonary Embolism Rule-out Criteria (PERC) rule is a clinical diagnostic rule designed to exclude pulmonary embolism (PE) without further testing. We sought to externally validate the diagnostic performance of the PERC rule alone and combined with clinical probability assessment based on the revised Geneva score. Methods: The PERC rule was applied retrospectively to consecutive patients who presented with a clinical suspicion of PE to six emergency departments, and who were enrolled in a randomized trial of PE diagnosis. Patients who met all eight PERC criteria [PERC(-)] were considered to be at a very low risk for PE. We calculated the prevalence of PE among PERC(-) patients according to their clinical pretest probability of PE. We estimated the negative likelihood ratio of the PERC rule to predict PE. Results: Among 1675 patients, the prevalence of PE was 21.3%. Overall, 13.2% of patients were PERC(-). The prevalence of PE was 5.4% [95% confidence interval (CI): 3.1-9.3%] among PERC(-) patients overall and 6.4% (95% CI: 3.7-10.8%) among those PERC(-) patients with a low clinical pretest probability of PE. The PERC rule had a negative likelihood ratio of 0.70 (95% CI: 0.67-0.73) for predicting PE overall, and 0.63 (95% CI: 0.38-1.06) in low-risk patients. Conclusions: Our results suggest that the PERC rule alone or even when combined with the revised Geneva score cannot safely identify very low risk patients in whom PE can be ruled out without additional testing, at least in populations with a relatively high prevalence of PE.
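A negative likelihood ratio of the kind reported above follows directly from a rule's 2x2 table as (1 - sensitivity) / specificity, i.e. the probability of a negative rule among diseased patients divided by the same probability among non-diseased patients. Below is a minimal sketch with hypothetical counts, not the study's data.

```python
# Sketch: negative likelihood ratio of a diagnostic rule from a 2x2 table.
# LR- = P(rule negative | disease present) / P(rule negative | disease absent)
#     = (1 - sensitivity) / specificity
# The counts below are hypothetical, not the study's data.
def negative_likelihood_ratio(tp, fn, fp, tn):
    sensitivity = tp / (tp + fn)   # rule positive among diseased patients
    specificity = tn / (tn + fp)   # rule negative among non-diseased patients
    return (1 - sensitivity) / specificity

# hypothetical counts: tp/fn = PE patients, fp/tn = patients without PE
lr_minus = negative_likelihood_ratio(tp=330, fn=27, fp=1100, tn=218)
print(f"LR- = {lr_minus:.2f}")
```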
Abstract:
Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour (NN) methods, which are known to have limitations when dealing with high-dimensional data. We apply support vector machines (SVMs) to a dataset from Lochaber, Scotland to assess their applicability to avalanche forecasting. SVMs belong to a family of theoretically grounded techniques from machine learning and are designed to deal with high-dimensional data. Initial experiments showed that SVMs gave results comparable with NN for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise but require further work.
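A minimal scikit-learn sketch of the kind of comparison reported: an SVM against a nearest-neighbour baseline on the same features. The synthetic data and feature count are placeholders, not the Lochaber dataset.

```python
# Sketch: comparing an SVM with a nearest-neighbour baseline for a
# binary avalanche / no-avalanche forecast. Synthetic data stand in for
# the Lochaber observations; features are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(42)
X = rng.standard_normal((400, 8))          # e.g. snowpack, wind, temperature features
y = (X[:, 0] + 0.5 * X[:, 1] + rng.standard_normal(400) > 0).astype(int)

models = {
    "nearest neighbours": make_pipeline(StandardScaler(),
                                        KNeighborsClassifier(n_neighbors=5)),
    "SVM (RBF kernel)": make_pipeline(StandardScaler(),
                                      SVC(kernel="rbf", probability=True)),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy {acc:.2f}")
```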
Abstract:
BACKGROUND: The evaluation of syncope often remains unstructured. The aim of this study was to assess the effectiveness of a standardized protocol designed to improve the diagnosis of syncope. METHODS: Consecutive patients with syncope presenting to the emergency departments of two primary and tertiary care hospitals over a period of 18 months underwent a two-phase evaluation: 1) noninvasive assessment (phase I); and 2) specialized tests (phase II) if syncope remained unexplained after phase I. During phase II, the evaluation strategy was alternately left to the physicians in charge of the patients (control) or guided by a standardized protocol based on cardiac status and frequency of events (intervention). The primary outcomes were the diagnostic yield of each phase and the impact of the intervention (phase II), measured by multivariable analysis. RESULTS: Among 1725 patients with syncope, 1579 (92%) entered phase I, which established a diagnosis in 1061 (67%) of them, mainly reflex causes and orthostatic hypotension. Five hundred eighteen patients (33%) were considered to have unexplained syncope, and 363 (70%) of them entered phase II. A cause for syncope was found in 67 (38%) of 174 patients during intervention periods, compared with 18 (9%) of 189 during control periods (p<0.001). Compared with control periods, the intervention allowed more cardiac (8% vs 3%, p=0.04) and reflex syncope (25% vs 6%, p<0.001) to be diagnosed, and increased the odds of identifying a cause for syncope by a factor of 4.5 (95% CI: 2.6-8.7, p<0.001). Overall, combining the diagnostic yields of phase I and phase II (intervention periods) established the cause of syncope in 76% of patients. CONCLUSION: Application of a standardized diagnostic protocol in patients with syncope improved the likelihood of identifying a cause for this symptom. Future trials should assess the efficacy of diagnosis-specific therapy.
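As a short sketch, the crude (unadjusted) odds ratio for identifying a cause during intervention versus control periods can be computed from the phase II yields quoted above (67/174 vs 18/189). Note the abstract's 4.5 is the adjusted estimate from the multivariable model, so the crude value computed here is expected to differ.

```python
# Sketch: crude odds ratio for finding a cause of syncope, intervention vs control,
# from the phase II yields reported in the abstract (67/174 vs 18/189).
# The paper's 4.5 is an adjusted estimate from a multivariable model.
import math

def odds_ratio(events_a, total_a, events_b, total_b):
    odds_a = events_a / (total_a - events_a)
    odds_b = events_b / (total_b - events_b)
    return odds_a / odds_b

or_crude = odds_ratio(67, 174, 18, 189)
# Wald 95% confidence interval on the log-odds-ratio scale
se = math.sqrt(1/67 + 1/(174 - 67) + 1/18 + 1/(189 - 18))
lo, hi = (math.exp(math.log(or_crude) + s * 1.96 * se) for s in (-1, 1))
print(f"crude OR = {or_crude:.1f} (95% CI {lo:.1f}-{hi:.1f})")
```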
Abstract:
As a combination of probability theory and graph theory, Bayesian networks currently enjoy widespread interest as a means of studying factors that affect the coherent evaluation of scientific evidence in forensic science. Paper I of this series intends to contribute to the discussion of Bayesian networks as a framework that is helpful for both illustrating and implementing statistical procedures commonly employed for the study of uncertainties (e.g. the estimation of unknown quantities). While the respective statistical procedures are widely described in the literature, the primary aim of this paper is to offer an essentially non-technical introduction to how interested readers may use these analytical approaches, with the help of Bayesian networks, for processing their own forensic science data. Attention is mainly drawn to the structure and underlying rationale of a series of basic and context-independent network fragments that users may incorporate as building blocks when constructing larger inference models. As an example of how this may be done, the proposed concepts will be used in a second paper (Part II) to specify graphical probability networks whose purpose is to assist forensic scientists in the evaluation of scientific evidence encountered in the context of forensic document examination (i.e. results of the analysis of black toners present on printed or copied documents).
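A minimal, pure-Python sketch of the simplest kind of two-node building block such papers work with: a hypothesis node for an unknown quantity and an evidence node whose observation updates it via Bayes' theorem. The states and probabilities are illustrative only, not drawn from the paper.

```python
# Sketch: a two-node fragment -- an unknown quantity H (hypothesis node)
# and an observation E (evidence node) -- updated by Bayes' theorem.
# States and probabilities are illustrative only.
prior = {"H1": 0.5, "H2": 0.5}      # P(H)
likelihood = {"H1": 0.8, "H2": 0.1} # P(E = observed | H)

def posterior(prior, likelihood):
    """P(H | E) via Bayes' theorem for a discrete two-node fragment."""
    joint = {h: prior[h] * likelihood[h] for h in prior}
    evidence = sum(joint.values())   # P(E), the normalising constant
    return {h: joint[h] / evidence for h in joint}

print(posterior(prior, likelihood))              # {'H1': 0.888..., 'H2': 0.111...}
lr = likelihood["H1"] / likelihood["H2"]         # likelihood ratio of the evidence
print(f"likelihood ratio = {lr:.0f}")
```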
Abstract:
OBJECTIVE: To evaluate the accuracy of computed tomography angiography (CTA) in predicting arterial encasement by limb tumours, by comparing CTA with surgical findings (gold standard). METHODS: Preoperative CTA images of 55 arteries in 48 patients were assessed for arterial status: cross-sectional CTA images were scored as showing a fat plane between artery and tumour (score 0), slight contact between artery and tumour (score 1), partial arterial encasement (score 2) or total arterial encasement (score 3). Reformatted CTA images were assessed for arterial displacement, rigid wall, stenosis or occlusion. At surgery, arteries were classified as free or surgically encased; 45 arteries were free and 10 were surgically encased. RESULTS: Multivariate logistic regression identified the axial CTA score as a relevant predictor for arterial encasement and subsequent vascular intervention during surgery. All sites where CTA showed a fat plane between the tumour and the artery were classified as free at surgery (n = 28/28). The sensitivity of total arterial encasement on CTA (score 3) was 90%, specificity 93%, accuracy 93% and positive likelihood ratio 13.5. CONCLUSION: CTA evidence of total arterial encasement is a highly specific indication of arterial encasement. The presence of fat between the tumour and the artery on CTA rules out arterial involvement at surgery.
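The reported performance figures for the "total encasement" score follow directly from a 2x2 table against the surgical classification. The sketch below uses cell counts chosen to be consistent with the abstract's totals (10 encased, 45 free arteries) and percentages; the exact breakdown is an inference, not quoted from the paper.

```python
# Sketch: diagnostic indices for CTA score 3 (total encasement) vs surgery.
# Cell counts are inferred to be consistent with the abstract's totals and
# reported percentages; the exact breakdown is an assumption.
tp, fn = 9, 1       # surgically encased arteries: CTA score 3 vs lower score
fp, tn = 3, 42      # surgically free arteries:    CTA score 3 vs lower score

sensitivity = tp / (tp + fn)                    # 0.90
specificity = tn / (tn + fp)                    # ~0.93
accuracy = (tp + tn) / (tp + fn + fp + tn)      # ~0.93
lr_positive = sensitivity / (1 - specificity)   # ~13.5

print(f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}, "
      f"accuracy {accuracy:.0%}, LR+ {lr_positive:.1f}")
```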
Abstract:
BACKGROUND: Hippocampal atrophy (HA) is a known predictor of dementia in Alzheimer's disease. HA has been found in advanced Parkinson's disease (PD), but no predictive value has been demonstrated yet. The identification of such a predictor in candidates for subthalamic deep brain stimulation (STN-DBS) would be of value. Our objective was to compare preoperative hippocampal volumes (HV) between PD patients who subsequently converted to dementia (PDD) after STN-DBS and those who did not (PDnD). METHODS: From a cohort of 70 consecutive STN-DBS-treated PD patients, 14 converted to dementia over 25.6+/-20.2 months (PDD). They were compared to 14 matched controls (PDnD) who did not convert to dementia after 43.9+/-11.7 months. On the preoperative 3D MPRAGE MRI images, HV and total brain volume (TBV) were measured by a blinded investigator using manual and automatic segmentation, respectively. RESULTS: PDD patients had smaller preoperative HV than PDnD patients (1.95+/-0.29 ml vs 2.28+/-0.33 ml; p<0.01). This difference was reinforced after normalization for TBV (3.28+/-0.48 vs 3.93+/-0.60; p<0.01). Every 0.1 ml decrease in HV increased the likelihood of developing dementia by 24.6%. A large overlap was found between PDD and PDnD HVs, precluding the identification of a cut-off score. CONCLUSIONS: As in Alzheimer's disease, HA may be a predictor of conversion to dementia in PD. This preoperative predictor suggests that the development of dementia after STN-DBS is related to disease progression rather than to the procedure. Further studies are needed to define a cut-off score for HA, in order to refine its predictive value for individual patients.
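A short sketch of how a statement such as "every 0.1 ml decrease in HV increased the likelihood by 24.6%" maps onto a logistic-regression odds ratio and coefficient; the values are back-calculated from the quoted percentage purely to illustrate the conversion.

```python
# Sketch: translating the abstract's "24.6% increase per 0.1 ml decrease"
# into an odds ratio and an implied logit coefficient. Illustrative arithmetic only.
import math

increase_per_step = 0.246                    # 24.6%, as quoted in the abstract
odds_ratio_per_step = 1 + increase_per_step  # OR per 0.1 ml decrease in HV
beta_per_step = math.log(odds_ratio_per_step)

print(f"implied OR per 0.1 ml decrease: {odds_ratio_per_step:.3f}")
print(f"implied logit coefficient per 0.1 ml decrease: {beta_per_step:.3f}")
# Over, say, a 0.3 ml difference the implied odds multiply to:
print(f"implied OR over 0.3 ml: {odds_ratio_per_step ** 3:.2f}")
```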
Abstract:
Five years after the 2005 Pakistan earthquake that triggered multiple mass movements, landslides continue to pose a threat to the population of Azad Kashmir, especially during heavy monsoon rains. The thousands of landslides triggered by the magnitude 7.6 earthquake in 2005 were not due solely to a natural phenomenon but were largely induced by human activities, namely road building, grazing, and deforestation. The damage caused by the landslides in the study area (381 km²) is estimated at 3.6 times the annual public works budget of Azad Kashmir for 2005 of US$ 1 million. In addition to human suffering, this cost constitutes a significant economic setback to the region that could have been reduced through improved land use and risk management. This article describes interdisciplinary research conducted 18 months after the earthquake to provide a more systemic approach to understanding the risks posed by landslides, including their physical, environmental, and human contexts. The goal of this research is twofold: first, to present empirical data on the social, geological, and environmental contexts in which widespread landslides occurred following the 2005 earthquake; and second, to describe straightforward methods that can be used for integrated landslide risk assessments in data-poor environments. The article analyzes the limitations of the methodologies and the challenges of conducting interdisciplinary research that integrates both social and physical data. This research concludes that reducing landslide risk is ultimately a management issue, rooted in land use decisions and governance.