974 results for Data Interpretation, Statistical
Abstract:
The CIAO Study is a multicenter observational study currently underway in 66 European medical institutions over a six-month study period (January-June 2012). This preliminary report overviews the findings of the first half of the study, covering all data from the first three months of the six-month study period. Patients with either community-acquired or healthcare-associated complicated intra-abdominal infections (IAIs) were included in the study. A total of 912 patients with a mean age of 54.4 years (range 4-98) were enrolled during the first three-month period; 47.7% of the patients were women and 52.3% were men. Among these patients, 83.3% were affected by community-acquired IAIs while the remaining 16.7% presented with healthcare-associated infections. Intraperitoneal specimens were collected from 64.2% of the enrolled patients, and from these samples 825 microorganisms were collectively identified. The overall mortality rate was 6.4% (58/912). According to univariate statistical analysis of the data, critical clinical condition of the patient upon hospital admission (defined by severe sepsis and septic shock), healthcare-associated infections, non-appendicular origin, generalized peritonitis, and serious comorbidities such as malignancy and severe cardiovascular disease were all significant risk factors for patient mortality. White blood cell counts (WBCs) greater than 12,000 or less than 4,000 and core body temperatures exceeding 38°C or below 36°C by the third post-operative day were statistically significant indicators of patient mortality.
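As a minimal sketch of the kind of univariate risk-factor test described above, the snippet below runs a chi-square test of mortality against critical clinical condition at admission. The 2x2 counts are illustrative placeholders consistent with the reported totals (912 patients, 58 deaths), not the actual CIAO study breakdown.

```python
# Univariate risk-factor test: chi-square on a 2x2 table of mortality by
# admission status. Counts are illustrative, not the CIAO study data.
from scipy.stats import chi2_contingency

#                 died   survived
table = [[30, 120],   # severe sepsis / septic shock at admission (hypothetical split)
         [28, 734]]   # stable at admission (hypothetical split)

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.4g}")
if p_value < 0.05:
    print("Critical condition at admission is associated with mortality (univariate).")
```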
Abstract:
New stratigraphic data along a profile from the Helvetic Gotthard Massif to the remnants of the North Penninic Basin in eastern Ticino and Graubünden are presented. The stratigraphic record, together with existing geochemical and structural data, motivates a new interpretation of the fossil European distal margin. We introduce a new group of Triassic facies, the North-Penninic Triassic (NPT), which is characterised by the Ladinian "dolomie bicolori". The NPT was located between the Briançonnais carbonate platform and the Helvetic lands. The observed horizontal transition, coupled with the stratigraphic superposition of a Helvetic Liassic on a Briançonnais Triassic in the Luzzone-Terri nappe, links, prior to Jurassic rifting, the Briançonnais paleogeographic domain to the Helvetic margin, south of the Gotthard. Our observations suggest that the Jurassic rifting separated the Briançonnais domain from the Helvetic margin by complex and protracted extension. The syn-rift stratigraphic record in the Adula nappe and its surroundings suggests the presence of a diffuse rising area with only moderately subsiding basins above a thinned continental and proto-oceanic crust. Strong subsidence occurred in a second phase, following protracted extension and the resulting delamination of the rising area. The stratigraphic coherence of the Adula nappe's Mesozoic cover questions the idea of a lithospheric mélange in the eclogitic Adula nappe, which is more likely a coherent Alpine tectonic unit. The structural and stratigraphic observations in the Piz Terri-Lunschania zone suggest the activity of syn-rift detachments. During the Alpine collision these faults were reactivated (and inverted) and played a major role in allowing the Adula subduction and the "Penninic Thrust" above it, and in creating the structural complexity of the Central Alps.
Abstract:
Soil properties have an enormous impact on economic and environmental aspects of agricultural production. Quantitative relationships between soil properties and the factors that influence their variability are the basis of digital soil mapping. The predictive models of soil properties evaluated in this work are statistical (multiple linear regression, MLR) and geostatistical (ordinary kriging and co-kriging). The study was conducted in the municipality of Bom Jardim, RJ, using a soil database with 208 sampling points. Predictive models were evaluated for the sand, silt and clay fractions, pH in water and organic carbon at six depths, according to the specifications of the consortium for digital soil mapping at the global level (GlobalSoilMap). Continuous covariates and categorical predictors were used and their contributions to the models assessed. Only the environmental covariates elevation, aspect, stream power index (SPI), soil wetness index (SWI), normalized difference vegetation index (NDVI), and the b3/b2 band ratio were significantly correlated with soil properties. The predictive models had a mean coefficient of determination of 0.21. The best results were obtained with the geostatistical predictive models, where the highest coefficient of determination, 0.43, was associated with the sand fraction at depths of 60 to 100 cm. The use of a sparse data set of soil properties for digital mapping can explain only part of the spatial variation of these properties. The results may be related to the sampling density and to the quantity and quality of the environmental covariates and predictive models used.
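The sketch below contrasts the two families of predictors compared in the abstract: a multiple linear regression on environmental covariates versus a coordinate-based spatial predictor. The file name "soil_samples.csv" and column names are hypothetical, and scikit-learn's Gaussian-process regressor is used only as a rough stand-in for ordinary kriging, not the authors' geostatistical software.

```python
# MLR on environmental covariates vs. a spatial (kriging-like) predictor on
# coordinates. Data layout is hypothetical; GP regression approximates kriging.
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel
from sklearn.model_selection import train_test_split
from sklearn.metrics import r2_score

df = pd.read_csv("soil_samples.csv")                    # hypothetical point dataset
covars = ["elevation", "aspect", "spi", "swi", "ndvi", "b3_b2"]
X_env, X_xy, y = df[covars], df[["x", "y"]], df["clay_0_5cm"]

X_tr, X_te, xy_tr, xy_te, y_tr, y_te = train_test_split(
    X_env, X_xy, y, test_size=0.3, random_state=0)

mlr = LinearRegression().fit(X_tr, y_tr)
gp = GaussianProcessRegressor(kernel=RBF(length_scale=500.0) + WhiteKernel(),
                              normalize_y=True).fit(xy_tr, y_tr)

print("MLR R2:", r2_score(y_te, mlr.predict(X_te)))
print("Spatial (kriging-like) R2:", r2_score(y_te, gp.predict(xy_te)))
```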
Abstract:
The graphical representation of spatial soil properties in a digital environment is complex because it requires converting data collected in discrete form onto a continuous surface. The objective of this study was to apply three-dimensional interpolation and visualization techniques to soil texture and fertility properties and to establish relationships with pedogenetic factors and processes in a slope area. The GRASS Geographic Information System was used to generate the three-dimensional models and ParaView software to visualize soil volumes. Samples of the A, AB, BA, and B horizons were collected on a regular 122-point grid in an area of 13 ha in Pinhais, PR, in southern Brazil. Geoprocessing and graphic computing techniques were effective in identifying and delimiting soil volumes of distinct ranges of fertility properties confined within the soil matrix. Both the three-dimensional interpolation and the visualization tool facilitated interpretation, in a continuous space (volumes), of the cause-effect relationships between soil texture and fertility properties and pedological factors and processes, such as higher clay contents following the drainage lines of the area. The flattest part, with more weathered soils (Oxisols), had the highest pH values and lower Al3+ concentrations. These techniques of data interpolation and visualization have great potential for use in diverse areas of soil science, such as identifying soil volumes that occur side by side but exhibit different physical, chemical, and mineralogical conditions for plant root growth, and monitoring plumes of organic and inorganic pollutants in soils and sediments, among other applications. The methodological details for interpolation and a three-dimensional view of soil data are presented here.
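A conceptual sketch of the core step, turning discrete soil observations into a continuous 3-D volume, is given below. SciPy's griddata performs a simple linear interpolation on synthetic points here; it is only a stand-in for the GRASS interpolation modules and ParaView visualization used in the study.

```python
# Discrete (x, y, depth) samples interpolated onto a regular 3-D grid.
# Synthetic data; stand-in for the GRASS/ParaView workflow described above.
import numpy as np
from scipy.interpolate import griddata

rng = np.random.default_rng(0)
# synthetic samples: (x, y, depth) coordinates and a clay-content value
pts = rng.uniform([0, 0, 0], [360, 360, 1.5], size=(122, 3))
clay = 300 + 100 * np.sin(pts[:, 0] / 60) + 50 * pts[:, 2] + rng.normal(0, 10, 122)

# regular 3-D grid covering the sampled volume
gx, gy, gz = np.meshgrid(np.linspace(0, 360, 40),
                         np.linspace(0, 360, 40),
                         np.linspace(0, 1.5, 10), indexing="ij")
vol = griddata(pts, clay, (gx, gy, gz), method="linear")
print("interpolated volume shape:", vol.shape)   # (40, 40, 10); NaN outside the sample hull
```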
Abstract:
Many of the most interesting questions ecologists ask lead to analyses of spatial data. Yet, perhaps confused by the large number of statistical models and fitting methods available, many ecologists seem to believe this is best left to specialists. Here, we describe the issues that need consideration when analysing spatial data and illustrate them using simulation studies. Our comparative analysis involves methods including generalized least squares, spatial filters, wavelet-revised models, conditional autoregressive models and generalized additive mixed models to estimate regression coefficients from synthetic but realistic data sets, including some which violate standard regression assumptions. We assess the performance of each method using two measures, and use statistical error rates for model selection. Methods that performed well included the generalized least squares family of models and a Bayesian implementation of the conditional autoregressive model. Ordinary least squares also performed adequately in the absence of model selection, but had poorly controlled Type I error rates and so did not show the improvements in performance under model selection seen with the methods above. Removing large-scale spatial trends in the response led to poor performance. These are empirical results; extrapolation of these findings to other situations should therefore be done cautiously. Nevertheless, our simulation-based approach provides much stronger evidence for comparative analysis than assessments based on single or small numbers of data sets, and should be considered a necessary foundation for statements of this type in future.
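In the spirit of the simulation comparison above, the sketch below estimates a regression coefficient by ordinary least squares and by generalized least squares on synthetic data whose errors are spatially autocorrelated (exponential covariance). It is a minimal illustration, not the authors' simulation code, and the range parameter and sample size are arbitrary.

```python
# OLS vs. GLS on synthetic data with spatially autocorrelated errors.
import numpy as np
import statsmodels.api as sm
from scipy.spatial.distance import cdist

rng = np.random.default_rng(1)
n = 200
coords = rng.uniform(0, 100, size=(n, 2))
Sigma = np.exp(-cdist(coords, coords) / 20.0)          # exponential spatial covariance

x = rng.normal(size=n)
eps = rng.multivariate_normal(np.zeros(n), Sigma)
y = 1.0 + 0.5 * x + eps                                 # true slope = 0.5

X = sm.add_constant(x)
ols = sm.OLS(y, X).fit()
gls = sm.GLS(y, X, sigma=Sigma).fit()                   # uses the known error covariance
print("OLS slope:", ols.params[1], "SE:", ols.bse[1])
print("GLS slope:", gls.params[1], "SE:", gls.bse[1])
```

With the error covariance supplied, the GLS standard error reflects the spatial dependence that OLS ignores, which is the mechanism behind the poorly controlled Type I error rates noted above.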
Abstract:
We study theoretical and empirical aspects of the mean exit time (MET) of financial time series. The theoretical modeling is done within the framework of the continuous time random walk. We empirically verify that the mean exit time follows a quadratic scaling law with an associated prefactor that is specific to the analyzed stock. We perform a series of statistical tests to determine which kinds of correlation are responsible for this specificity. The main contribution is associated with the autocorrelation property of stock returns. We introduce and solve analytically both two-state and three-state Markov chain models. The analytical results obtained with the two-state Markov chain model allow us to collapse the 20 measured MET profiles onto a single master curve.
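The toy simulation below illustrates the MET idea: returns are generated by a two-state Markov chain on the sign of the increment (persistent up/down states), and the MET is measured as a function of the interval half-width L, where roughly quadratic scaling should emerge. Parameters are arbitrary and not fitted to any stock; this is an illustration of the concept, not the paper's analytical model.

```python
# Mean exit time of a cumulative-return walk driven by a two-state Markov chain.
import numpy as np

rng = np.random.default_rng(2)

def mean_exit_time(L, p_stay=0.6, n_walks=2000, max_steps=20000):
    """Average number of steps before the cumulative return leaves [-L, L]."""
    times = []
    for _ in range(n_walks):
        s, x = 1, 0.0
        for t in range(1, max_steps + 1):
            if rng.random() > p_stay:        # two-state Markov chain on the sign
                s = -s
            x += s * abs(rng.normal(0, 1.0)) # signed return increment
            if abs(x) > L:
                times.append(t)
                break
    return np.mean(times)

for L in (2.0, 4.0, 8.0):
    print(f"L = {L}: MET ~ {mean_exit_time(L):.1f}")   # roughly quadruples as L doubles
```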
Abstract:
Cortical folding (gyrification) is determined during the first months of life, so that adverse events occurring during this period leave traces that will be identifiable at any age. As recently reviewed by Mangin and colleagues(2), several methods exist to quantify different characteristics of gyrification. For instance, sulcal morphometry can be used to measure shape descriptors such as depth, length or indices of inter-hemispheric asymmetry(3). These geometrical properties have the advantage of being easy to interpret. However, sulcal morphometry relies tightly on the accurate identification of a given set of sulci and hence provides a fragmented description of gyrification. A more fine-grained quantification of gyrification can be achieved with curvature-based measurements, where smoothed absolute mean curvature is typically computed at thousands of points over the cortical surface(4). The curvature is however not straightforward to comprehend, as it remains unclear whether there is any direct relationship between curvedness and a biologically meaningful correlate such as cortical volume or surface. To address the diverse issues raised by the measurement of cortical folding, we previously developed an algorithm to quantify local gyrification with exquisite spatial resolution and a simple interpretation. Our method is inspired by the Gyrification Index(5), a method originally used in comparative neuroanatomy to evaluate cortical folding differences across species. In our implementation, which we name the local Gyrification Index (lGI(1)), we measure the amount of cortex buried within the sulcal folds as compared with the amount of visible cortex in circular regions of interest. Given that the cortex grows primarily through radial expansion(6), our method was specifically designed to identify early defects of cortical development. In this article, we detail the computation of the local Gyrification Index, which is now freely distributed as part of the FreeSurfer software (http://surfer.nmr.mgh.harvard.edu/, Martinos Center for Biomedical Imaging, Massachusetts General Hospital). FreeSurfer provides a set of automated tools for reconstructing the brain's cortical surface from structural MRI data. The cortical surface, extracted in the native space of the images with sub-millimeter accuracy, is then further used for the creation of an outer surface, which serves as a basis for the lGI calculation. A circular region of interest is then delineated on the outer surface, and its corresponding region of interest on the cortical surface is identified using a matching algorithm as described in our validation study(1). This process is iterated repeatedly with largely overlapping regions of interest, resulting in cortical maps of gyrification for subsequent statistical comparisons (Fig. 1). Of note, another measurement of local gyrification with a similar inspiration was proposed by Toro and colleagues(7), where the folding index at each point is computed as the ratio of the cortical area contained in a sphere divided by the area of a disc with the same radius. The two implementations differ in that the one by Toro et al. is based on Euclidean distances and thus considers discontinuous patches of cortical area, whereas ours uses a strict geodesic algorithm and includes only the continuous patch of cortical area opening at the brain surface in a circular region of interest.
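As a purely conceptual illustration of the quantity the lGI captures, the sketch below computes the ratio of cortical surface area (including area buried in sulci) to outer-hull area within matched regions of interest. The per-vertex area arrays and ROI masks are mock inputs, not FreeSurfer data structures, and the actual FreeSurfer lGI pipeline involves surface reconstruction and geodesic ROI matching that are not reproduced here.

```python
# Conceptual lGI: (cortical area in ROI) / (outer-hull area in matched ROI).
# Mock per-vertex areas and masks; not the FreeSurfer implementation.
import numpy as np

def local_gi(cortical_vertex_area, cortical_roi_mask,
             outer_vertex_area, outer_roi_mask):
    """Ratio of buried-plus-visible cortical area to the visible hull area."""
    cortical_area = cortical_vertex_area[cortical_roi_mask].sum()
    hull_area = outer_vertex_area[outer_roi_mask].sum()
    return cortical_area / hull_area

rng = np.random.default_rng(3)
cort_area = rng.uniform(0.4, 0.6, 50000)    # mm^2 per cortical-surface vertex (mock)
hull_area = rng.uniform(0.4, 0.6, 20000)    # mm^2 per outer-hull vertex (mock)
cort_roi = rng.random(50000) < 0.05         # matched ROI on the cortical surface (mock)
hull_roi = rng.random(20000) < 0.02         # circular ROI on the outer surface (mock)
print("lGI ~", round(local_gi(cort_area, cort_roi, hull_area, hull_roi), 2))
```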
Abstract:
Quantitative information from magnetic resonance imaging (MRI) may substantiate clinical findings and provide additional insight into the mechanism of clinical interventions in therapeutic stroke trials. The PERFORM study is exploring the efficacy of terutroban versus aspirin for secondary prevention in patients with a history of ischemic stroke. We report on the design of an exploratory longitudinal MRI follow-up study that was performed in a subgroup of the PERFORM trial. An international multi-centre longitudinal follow-up MRI study was designed for different MR systems, employing safety and efficacy readouts: new T2 lesions, new DWI lesions, whole brain volume change, hippocampal volume change, changes in tissue microstructure as depicted by mean diffusivity and fractional anisotropy, vessel patency on MR angiography, and the presence and development of new microbleeds. A total of 1,056 patients (men and women ≥ 55 years) were included. The data analysis included 3D reformation, image registration of different contrasts, tissue segmentation, and automated lesion detection. This large international multi-centre study demonstrates how new MRI readouts can be used to provide key information on the evolution of lesions within cerebral tissue and the macrovasculature after atherothrombotic stroke in a large sample of patients.
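A hedged sketch of one of the automated readouts mentioned above is shown below: counting new lesions by subtracting a co-registered baseline volume from a follow-up volume and labelling supra-threshold clusters. The arrays are synthetic and the threshold and minimum cluster size are arbitrary; the actual PERFORM pipeline (3D reformation, multi-contrast registration, tissue segmentation) is far more involved.

```python
# Toy new-lesion detection on registered baseline/follow-up volumes.
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(4)
baseline = rng.normal(100, 5, size=(64, 64, 32))
followup = baseline + rng.normal(0, 5, size=baseline.shape)
followup[30:34, 30:34, 15:17] += 60          # implant one synthetic "new lesion"

difference = followup - baseline
candidate = difference > 30                  # intensity-increase threshold (arbitrary)
labels, n_clusters = ndimage.label(candidate)
sizes = ndimage.sum(candidate, labels, index=range(1, n_clusters + 1))
new_lesions = int(np.sum(np.asarray(sizes) >= 10))   # ignore clusters below 10 voxels
print("new lesions detected:", new_lesions)
```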
Abstract:
Traffic safety engineers are among the early adopters of Bayesian statistical tools for analyzing crash data. As in many other areas of application, empirical Bayes methods were their first choice, perhaps because they represent an intuitively appealing, yet relatively easy-to-implement alternative to purely classical approaches. With the enormous progress in numerical methods made in recent years and with the availability of free, easy-to-use software that permits implementing a fully Bayesian approach, however, there is now ample justification to progress towards fully Bayesian analyses of crash data. The fully Bayesian approach, in particular as implemented via multi-level hierarchical models, has many advantages over the empirical Bayes approach. In a full Bayesian analysis, prior information and all available data are seamlessly integrated into posterior distributions on which practitioners can base their inferences. All uncertainties are thus accounted for in the analyses, and there is no need to pre-process data to obtain Safety Performance Functions and other such prior estimates of the effect of covariates on the outcome of interest. In this light, fully Bayesian methods may well be less costly to implement and may result in safety estimates with more realistic standard errors. In this manuscript, we present the full Bayesian approach to analyzing traffic safety data and focus on highlighting the differences between the empirical Bayes and the full Bayes approaches. We use an illustrative example to discuss a step-by-step Bayesian analysis of the data and to show some of the types of inferences that are possible within the full Bayesian framework.
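As an illustration of the multi-level hierarchical models the abstract refers to, the sketch below fits a hierarchical Poisson crash-count model in PyMC, one example of the free software alluded to. Site indices, traffic volumes and crash counts are synthetic placeholders, and the model structure is a generic example rather than the manuscript's own specification.

```python
# Hierarchical Poisson crash-count model with site-level random intercepts (PyMC v5).
import numpy as np
import pymc as pm

rng = np.random.default_rng(5)
n_sites = 40
site = np.repeat(np.arange(n_sites), 3)               # 3 years of counts per site
log_aadt = rng.normal(9.0, 0.5, size=site.size)       # log annual average daily traffic
crashes = rng.poisson(np.exp(-6.0 + 0.8 * log_aadt))  # synthetic observed counts

with pm.Model() as crash_model:
    mu_a = pm.Normal("mu_a", 0.0, 5.0)
    sigma_a = pm.HalfNormal("sigma_a", 1.0)
    a = pm.Normal("a", mu_a, sigma_a, shape=n_sites)  # site-level random intercepts
    b = pm.Normal("b", 0.0, 1.0)                      # effect of traffic volume
    lam = pm.math.exp(a[site] + b * log_aadt)
    pm.Poisson("crashes", mu=lam, observed=crashes)
    idata = pm.sample(1000, tune=1000, chains=2, random_seed=5)

print(idata.posterior["b"].mean().item())             # posterior mean of the covariate effect
```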
Abstract:
Background: Molecular tools may help to uncover closely related and still diverging species from a wide variety of taxa and provide insight into the mechanisms, pace and geography of marine speciation. There is some controversy over the phylogeography and speciation modes of species-groups with an Eastern Atlantic-Western Indian Ocean distribution, with previous studies suggesting that older events (Miocene) and/or more recent (Pleistocene) oceanographic processes could have influenced the phylogeny of marine taxa. The spiny lobster genus Palinurus allows for testing among speciation hypotheses, since it has a particular distribution with two groups of three species each in the Northeastern Atlantic (P. elephas, P. mauritanicus and P. charlestoni) and the Southeastern Atlantic and Southwestern Indian Oceans (P. gilchristi, P. delagoae and P. barbarae). In the present study, we obtain a more complete understanding of the phylogenetic relationships among these species through a combined dataset with both nuclear and mitochondrial markers, by testing alternative hypotheses on both the mutation rate and tree topology under the recently developed approximate Bayesian computation (ABC) methods. Results: Our analyses support a North-to-South speciation pattern in Palinurus, with all the South African species forming a monophyletic clade nested within the Northern Hemisphere species. Coalescent-based ABC methods allowed us to reject the previously proposed hypothesis of a Middle Miocene speciation event related to the closure of the Tethyan Seaway. Instead, divergence times obtained for Palinurus species using the combined mtDNA-microsatellite dataset and standard mutation rates for mtDNA agree with known glaciation-related processes occurring during the last 2 My. Conclusion: The Palinurus speciation pattern is a typical example of a series of rapid speciation events occurring within a group, with very short branches separating different species. Our results support the hypothesis that recent climate change-related oceanographic processes have influenced the phylogeny of marine taxa, with most Palinurus species originating during the last two million years. The present study highlights the value of new coalescent-based statistical methods such as ABC for testing different speciation hypotheses using molecular data.
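The minimal rejection-sampling sketch below conveys the ABC logic used above for comparing divergence-time hypotheses: draw a divergence time from a prior, simulate a summary statistic under a simple mutation model, and keep draws whose simulated statistic falls close to the observed one. The mutation rate, observed statistic, simulator and tolerance are arbitrary illustrative values, not the study's coalescent simulations.

```python
# Rejection-sampling ABC for a divergence time, with a crude divergence ~ 2*mu*t simulator.
import numpy as np

rng = np.random.default_rng(6)
mu = 1e-8                       # per-site, per-year mutation rate (assumed)
observed_divergence = 4e-2      # observed mean pairwise sequence divergence (illustrative)
tolerance = 2e-3

accepted = []
for _ in range(100_000):
    t = rng.uniform(0.1e6, 20e6)                       # prior on divergence time (years)
    simulated = rng.normal(2 * mu * t, 5e-3)           # simulator with stochastic noise
    if abs(simulated - observed_divergence) < tolerance:
        accepted.append(t)

accepted = np.array(accepted)
print(f"posterior mean divergence time ~ {accepted.mean() / 1e6:.2f} My "
      f"({len(accepted)} accepted draws)")
```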
Abstract:
BACKGROUND: Until recently, it was accepted that the rates of complications and of failure of medical therapy were higher during recurrent episodes of diverticulitis. New data and new interpretations of older studies have challenged this opinion. The aim of the present study was to determine whether recurrent diverticulitis, in comparison with the initial episode, has a different short-term outcome after medical or surgical treatment. METHODS: This was a retrospective cohort study of 271 consecutive patients admitted for diverticulitis confirmed by computed tomography (CT) between 2001 and 2004. Altogether, 202 patients had an initial episode (group I) and 69 had recurrent diverticulitis (group R). A total of 20 clinical and 15 radiologic parameters were analyzed and compared between the two groups, including need for surgery, clinical presentation at admission, response to treatment, complications, laboratory parameters, and pathologic CT features (colonic wall thickening, abscess, pneumoperitoneum, free intraperitoneal fluid). An unpaired Student's t-test and Fisher's and Wilcoxon's tests were applied for statistical analysis. RESULTS: None of the clinical or radiologic parameters differed statistically between the two groups. Regarding surgery, 15.8% of the group I patients needed surgery at admission compared to 5.8% in group R (p = 0.04). Conservative treatment failure was similar in the two groups (10.7% vs. 10.0%; p = 0.84). There was 3% mortality at 30 days in group I compared to 0% in group R. CONCLUSIONS: Recurrent episodes of diverticulitis do not lead to more complications or more conservative treatment failures. Moreover, surgery at admission was less frequent among patients who presented with a recurrence.
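The snippet below sketches the kind of between-group comparison reported above, applying Fisher's exact test to the surgery-at-admission proportions. The cell counts are reconstructed from the reported percentages (15.8% of 202 and 5.8% of 69) and rounded to whole patients, so the p-value only approximates the published one.

```python
# Fisher's exact test on surgery at admission, group I vs. group R.
# Counts reconstructed from reported percentages; approximate, not the source table.
from scipy.stats import fisher_exact

#                      surgery   no surgery
table = [[32, 170],   # group I (initial episode, n = 202)
         [ 4,  65]]   # group R (recurrent episode, n = 69)

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.3f}")
```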
Abstract:
Numerous sources of evidence point to the fact that heterogeneity within the Earth's deep crystalline crust is complex and hence may be best described through stochastic rather than deterministic approaches. As seismic reflection imaging arguably offers the best means of sampling deep crustal rocks in situ, much interest has been expressed in using such data to characterize the stochastic nature of crustal heterogeneity. Previous work on this problem has shown that the spatial statistics of seismic reflection data are indeed related to those of the underlying heterogeneous seismic velocity distribution. As yet, however, the nature of this relationship has remained elusive, because most of that work was either strictly empirical or based on incorrect methodological approaches. Here, we introduce a conceptual model, based on the assumption of weak scattering, that allows us to quantitatively link the second-order statistics of a 2-D seismic velocity distribution with those of the corresponding processed and depth-migrated seismic reflection image. We then perform a sensitivity study in order to investigate what information regarding the stochastic model parameters describing crustal velocity heterogeneity might potentially be recovered from the statistics of a seismic reflection image using this model. Finally, we present a Monte Carlo inversion strategy to estimate these parameters, and we show examples of its application at two different source frequencies and using two different sets of prior information. Our results indicate that the inverse problem is inherently non-unique and that many different combinations of the vertical and lateral correlation lengths describing the velocity heterogeneity can yield seismic images with the same 2-D autocorrelation structure. The ratio of all of these possible combinations of vertical and lateral correlation lengths, however, remains roughly constant, which indicates that, without additional prior information, the aspect ratio is the only parameter describing the stochastic seismic velocity structure that can be reliably recovered.
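The central measurement used above, the 2-D autocorrelation of a depth-migrated image, can be computed efficiently via the Wiener-Khinchin relation. The sketch below does this on a synthetic anisotropic random field that merely stands in for a seismic image; the correlation lengths and spectrum shape are arbitrary and are not the paper's stochastic model.

```python
# 2-D autocorrelation of a synthetic anisotropic random field via FFT.
import numpy as np

rng = np.random.default_rng(7)
nz, nx = 256, 256

# build a random field with distinct vertical/lateral correlation lengths
kz = np.fft.fftfreq(nz)[:, None]
kx = np.fft.fftfreq(nx)[None, :]
az, ax = 5.0, 25.0                                   # vertical / lateral correlation lengths
spectrum = 1.0 / (1.0 + (az * 2 * np.pi * kz) ** 2 + (ax * 2 * np.pi * kx) ** 2)
field = np.fft.ifft2(np.sqrt(spectrum) *
                     np.fft.fft2(rng.normal(size=(nz, nx)))).real

# 2-D autocorrelation via the Wiener-Khinchin relation, normalized to 1 at zero lag
f = field - field.mean()
acf = np.fft.ifft2(np.abs(np.fft.fft2(f)) ** 2).real
acf = np.fft.fftshift(acf) / acf.max()
print("zero-lag value:", acf[nz // 2, nx // 2])      # ~1.0 by construction
```

The anisotropy of the resulting autocorrelation surface reflects the ratio of the two correlation lengths, the quantity the abstract identifies as the only reliably recoverable parameter.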
Abstract:
Genome-wide association studies have been instrumental in identifying genetic variants associated with complex traits such as human disease or gene expression phenotypes. It has been proposed that extending existing analysis methods by considering interactions between pairs of loci may uncover additional genetic effects. However, the large number of possible two-marker tests presents significant computational and statistical challenges. Although several strategies to detect epistasis effects have been proposed and tested for specific phenotypes, so far there has been no systematic attempt to compare their performance using real data. We made use of thousands of gene expression traits from linkage and eQTL studies to compare the performance of different strategies. We found that using information from marginal associations between markers and phenotypes to detect epistatic effects yielded a lower false discovery rate (FDR) than a strategy relying solely on biological annotation in yeast, whereas results from human data were inconclusive. For future studies whose aim is to discover epistatic effects, we recommend incorporating information about marginal associations between SNPs and phenotypes instead of relying solely on biological annotation. Improved methods to discover epistatic effects will result in a more complete understanding of complex genetic effects.
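The sketch below illustrates the strategy the abstract favours: restrict pairwise (epistasis) tests to the SNPs with the strongest marginal associations, then control the false discovery rate with Benjamini-Hochberg. Genotypes and the expression phenotype are simulated, and the regression-with-interaction test is a generic choice rather than the authors' pipeline.

```python
# Marginal-association prefilter, pairwise interaction tests, and BH FDR control.
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
from itertools import combinations

rng = np.random.default_rng(8)
n, n_snps, top_k = 500, 200, 10
G = rng.integers(0, 3, size=(n, n_snps)).astype(float)   # 0/1/2 genotype codes
y = 0.4 * G[:, 3] * G[:, 7] + rng.normal(size=n)         # one true interaction

# step 1: rank SNPs by marginal association and keep the strongest ones
marginal_p = np.array([sm.OLS(y, sm.add_constant(G[:, j])).fit().pvalues[1]
                       for j in range(n_snps)])
candidates = np.argsort(marginal_p)[:top_k]

# step 2: test all pairs among the candidates with an interaction term
pairs, pvals = [], []
for i, j in combinations(candidates, 2):
    X = sm.add_constant(np.column_stack([G[:, i], G[:, j], G[:, i] * G[:, j]]))
    pvals.append(sm.OLS(y, X).fit().pvalues[3])           # p-value of the interaction term
    pairs.append((int(i), int(j)))

reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
print("pairs passing FDR:", [pairs[k] for k in np.where(reject)[0]])
```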
Abstract:
Data are provided to CJJP through statistical summary forms completed by the JCSLs. Because forms are completed only when meaningful contact between a student and a liaison takes place, only a portion of the total population served is reported. Meaningful contact is defined as having at least five contacts within a 60-day period (at any point during the academic year) regarding at least one of the referral reasons supplied on the form. Data are entered into a web-based application by the liaisons and retrieved electronically by CJJP via the internet. Service information is submitted and uploaded only at the end of the academic year.
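The sketch below shows how the "meaningful contact" rule described above (at least five contacts within any 60-day window during the academic year) might be applied to a raw contact log. The DataFrame layout (student_id, contact_date) and the example records are hypothetical, not the actual CJJP form structure.

```python
# Flag students meeting the meaningful-contact rule: >= 5 contacts within 60 days.
import pandas as pd

def is_meaningful(dates, min_contacts=5, window_days=60):
    """True if any `min_contacts` consecutive contacts span <= `window_days` days."""
    d = sorted(pd.to_datetime(dates))
    return any((d[i + min_contacts - 1] - d[i]).days <= window_days
               for i in range(len(d) - min_contacts + 1))

contacts = pd.DataFrame({
    "student_id": [1] * 6 + [2] * 4,
    "contact_date": ["2012-01-05", "2012-01-20", "2012-02-01", "2012-02-10",
                     "2012-02-25", "2012-05-01",
                     "2012-01-05", "2012-03-01", "2012-04-15", "2012-06-01"],
})
reported = contacts.groupby("student_id")["contact_date"].apply(is_meaningful)
print(reported)   # student 1: True (5 contacts within 60 days); student 2: False
```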