974 results for Data Interpretation, Statistical


Relevance:

30.00%

Publisher:

Abstract:

There is an increasing trend in the incidence of cancer worldwide, and it has been accepted that environmental factors account for an important proportion of the global burden. The present paper reports preliminary findings on the influence of the historical exposure to a group of persistent organic pollutants on total cancer risk, at year 9 in the follow-up of a cohort from Southern Spain. A cohort of 368 participants (median age 51 years) was recruited in 2003. Their historical exposure was estimated by analyzing residues of persistent organic pollutants in adipose tissue. Estimation of cancer incidence was based on data from a population-based cancer registry. Statistical analyses were performed using multivariable Cox-regression models. In males, PCB 153 concentrations were positively associated with total cancer risk, with an adjusted hazard ratio (95% confidence interval) of 1.20 (1.01-1.41) for an increment of 100 ng/g lipid. Our preliminary findings suggest a potential relationship between the historical exposure to persistent organic pollutants and the risk of cancer in men. However, these results should be interpreted with caution and require verification during the future follow-up of this cohort.
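
A minimal sketch of the kind of multivariable Cox model described above, using the lifelines library; the column names and covariate set are assumptions for illustration, not the study's actual variables:

    import pandas as pd
    from lifelines import CoxPHFitter

    # Hypothetical cohort file; 'pcb153' in ng/g lipid, 'cancer' = 0/1 event flag
    df = pd.read_csv("cohort.csv")
    df["pcb153_per100"] = df["pcb153"] / 100.0  # hazard ratio per 100 ng/g lipid

    cph = CoxPHFitter()
    cph.fit(df[["follow_up_years", "cancer", "pcb153_per100", "age", "bmi"]],
            duration_col="follow_up_years", event_col="cancer")
    cph.print_summary()  # the exp(coef) column is the adjusted hazard ratio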

Relevance:

30.00%

Publisher:

Abstract:

In the B-ISDN there is a provision for four classes of services, all of them supported by a single transport network (the ATM network). Three of these services, the connection-oriented (CO) ones, permit connection admission control (CAC), but the fourth, the connectionless-oriented (CLO) one, does not. Therefore, when the CLO service and CO services have to share the same ATM link, a conflict may arise: a bandwidth allocation chosen to obtain maximum statistical gain can damage the contracted ATM quality of service (QOS), and, vice versa, guaranteeing the contracted QOS means sacrificing statistical gain. The paper presents a performance evaluation study of the influence of the CLO service on a CO service (a circuit emulation service or a variable bit-rate service) when both share the same link.
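
A toy Monte Carlo sketch of the trade-off the abstract describes: allocating less than the aggregate peak rate increases statistical gain but raises the probability of overload, a proxy for QOS violation. All parameters below are invented for illustration:

    import numpy as np

    rng = np.random.default_rng(0)
    n_sources, peak_rate, p_active = 50, 2.0, 0.3  # Mbit/s, activity probability
    link_capacity = 40.0                           # < n_sources * peak_rate = 100

    active = rng.random((100_000, n_sources)) < p_active
    offered = active.sum(axis=1) * peak_rate
    overflow_prob = (offered > link_capacity).mean()

    print(f"statistical gain: {n_sources * peak_rate / link_capacity:.2f}x")
    print(f"P(offered load > capacity) ~ {overflow_prob:.4f}")  # QoS proxy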

Relevance:

30.00%

Publisher:

Abstract:

Our essay aims at studying suitable statistical methods for the clustering of compositional data in situations where observations are constituted by trajectories of compositional data, that is, by sequences of composition measurements along a domain. Observed trajectories are known as functional data and several methods have been proposed for their analysis. In particular, methods for clustering functional data, known as Functional Cluster Analysis (FCA), have been applied by practitioners and scientists in many fields. To our knowledge, FCA techniques have not been extended to cope with the problem of clustering compositional data trajectories. In order to extend FCA techniques to the analysis of compositional data, FCA clustering techniques have to be adapted by using a suitable compositional algebra. The present work centres on the following question: given a sample of compositional data trajectories, how can we formulate a segmentation procedure giving homogeneous classes? To address this problem we follow the steps described below. First of all, we adapt the well-known spline smoothing techniques in order to cope with the smoothing of compositional data trajectories. In fact, an observed curve can be thought of as the sum of a smooth part plus some noise due to measurement errors. Spline smoothing techniques are used to isolate the smooth part of the trajectory; clustering algorithms are then applied to these smooth curves. The second step consists in building suitable metrics for measuring the dissimilarity between trajectories: we propose a metric that accounts for differences in both shape and level, and a metric accounting for differences in shape only. A simulation study is performed in order to evaluate the proposed methodologies, using both hierarchical and partitional clustering algorithms. The quality of the obtained results is assessed by means of several indices.
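
A rough sketch of the two-step procedure, under stated assumptions: the centred log-ratio (clr) transform stands in for the compositional algebra, cubic smoothing splines isolate the smooth part, and Ward hierarchical clustering acts on an L2-type distance between the smoothed curves (data are synthetic):

    import numpy as np
    from scipy.interpolate import UnivariateSpline
    from scipy.cluster.hierarchy import linkage, fcluster

    def clr(comp):
        # centred log-ratio: rows are compositions summing to 1
        logc = np.log(comp)
        return logc - logc.mean(axis=1, keepdims=True)

    rng = np.random.default_rng(1)
    t = np.linspace(0, 1, 20)
    # 30 synthetic trajectories of 3-part compositions observed at 20 times
    X = rng.dirichlet([2, 3, 5], size=(30, 20))

    grid = np.linspace(0, 1, 50)
    smooth = np.array([
        np.column_stack([UnivariateSpline(t, clr(traj)[:, j], s=1.0)(grid)
                         for j in range(traj.shape[1])]).ravel()
        for traj in X
    ])

    # Ward clustering on the smoothed clr curves (a shape-and-level metric)
    labels = fcluster(linkage(smooth, method="ward"), t=3, criterion="maxclust")
    print(np.bincount(labels))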

Relevance:

30.00%

Publisher:

Abstract:

One of the tantalising remaining problems in compositional data analysis lies in how to deal with data sets in which there are components which are essential zeros. By an essential zero we mean a component which is truly zero, not something recorded as zero simply because the experimental design or the measuring instrument has not been sufficiently sensitive to detect a trace of the part. Such essential zeros occur in many compositional situations, such as household budget patterns, time budgets, palaeontological zonation studies and ecological abundance studies. Devices such as non-zero replacement and amalgamation are almost invariably ad hoc and unsuccessful in such situations. From consideration of such examples it seems sensible to build up a model in two stages, the first determining where the zeros will occur and the second how the unit available is distributed among the non-zero parts. In this paper we suggest two such models, an independent binomial conditional logistic normal model and a hierarchical dependent binomial conditional logistic normal model. The compositional data in such modelling consist of an incidence matrix and a conditional compositional matrix. Interesting statistical problems arise, such as the question of estimability of parameters, the nature of the computational process for the estimation of both the incidence and compositional parameters caused by the complexity of the subcompositional structure, the formation of meaningful hypotheses, and the devising of suitable testing methodology within a lattice of such essential zero-compositional hypotheses. The methodology is illustrated by application to both simulated and real compositional data.
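
An illustrative two-stage fit in the spirit of these models, with loud caveats: the incidence stage uses independent per-part binomial estimates, and the compositional stage fits an additive-log-ratio normal only to fully-present rows, rather than conditioning on every zero pattern as the paper's models do:

    import numpy as np

    rng = np.random.default_rng(2)
    X = rng.dirichlet([1, 2, 4, 8], size=200)
    X[rng.random(X.shape) < 0.2] = 0.0        # impose essential zeros
    X = X[X.sum(axis=1) > 0]                  # drop degenerate all-zero rows
    X = X / X.sum(axis=1, keepdims=True)      # re-close the compositions

    incidence = (X > 0).astype(int)           # stage 1: where the zeros occur
    p_present = incidence.mean(axis=0)        # per-part binomial estimates

    # Stage 2: logistic-normal fit (alr w.r.t. the last part) on rows with
    # every part present -- a simplification of the paper's conditional models
    full = X[(incidence == 1).all(axis=1)]
    alr = np.log(full[:, :-1] / full[:, -1:])
    mu, cov = alr.mean(axis=0), np.cov(alr, rowvar=False)
    print(p_present.round(2), mu.round(2), sep="\n")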

Relevance:

30.00%

Publisher:

Abstract:

Evaluation of segmentation methods is a crucial aspect in image processing, especially in the medical imaging field, where small differences between segmented regions in the anatomy can be of paramount importance. Usually, segmentation evaluation is based on a measure that depends on the number of segmented voxels inside and outside of some reference regions that are called gold standards. Although some other measures have also been used, in this work we propose a set of new similarity measures, based on different features, such as the location and intensity values of the misclassified voxels, and the connectivity and the boundaries of the segmented data. Using the multidimensional information provided by these measures, we propose a new evaluation method whose results are visualized by applying a Principal Component Analysis of the data, obtaining a simplified graphical method to compare different segmentation results. We have carried out an intensive study using several classic segmentation methods applied to a set of simulated MRI data of the brain with several noise and RF inhomogeneity levels, and also to real data, showing that the new measures proposed here, together with the results obtained from the multidimensional evaluation, improve the robustness of the evaluation and provide a better understanding of the differences between segmentation methods.
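
A compact sketch of the multidimensional evaluation idea: compute several complementary measures per segmentation, stack them into a feature matrix, and project with PCA for a simplified graphical comparison (the measure set and data here are illustrative, not the paper's):

    import numpy as np
    from sklearn.decomposition import PCA

    def dice(a, b):    return 2 * (a & b).sum() / (a.sum() + b.sum())
    def jaccard(a, b): return (a & b).sum() / (a | b).sum()
    def fpr(a, b):     return (a & ~b).sum() / (~b).sum()  # false-positive rate

    rng = np.random.default_rng(3)
    gold = rng.random((64, 64, 64)) < 0.2    # reference region ("gold standard")
    segs = [gold ^ (rng.random(gold.shape) < e) for e in (0.01, 0.05, 0.10)]

    features = np.array([[dice(s, gold), jaccard(s, gold), fpr(s, gold)]
                         for s in segs])
    scores = PCA(n_components=2).fit_transform(features)
    print(scores)  # each row places one segmentation in the comparison plane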

Relevance:

30.00%

Publisher:

Abstract:

The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, and more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, will be illustrated.
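
One common route to such log-contrast indicators is a principal component analysis in clr coordinates, sketched below on synthetic water analyses; the ion names follow the abstract, everything else is an assumption:

    import numpy as np
    from sklearn.decomposition import PCA

    parts = ["Na", "K", "Ca", "Mg", "HCO3", "SO4", "Cl"]
    rng = np.random.default_rng(4)
    X = rng.dirichlet([5, 1, 8, 3, 10, 2, 4], size=100)  # synthetic closed analyses

    clr = np.log(X) - np.log(X).mean(axis=1, keepdims=True)
    pca = PCA(n_components=2).fit(clr)
    indicators = pca.transform(clr)           # log-contrast scores per sample

    # Deviations from a reference mean can serve as contamination monitors
    z = (indicators - indicators.mean(axis=0)) / indicators.std(axis=0)
    print(dict(zip(parts, pca.components_[0].round(2))))  # first log-contrast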

Relevance:

30.00%

Publisher:

Abstract:

The identification of compositional changes in fumarolic gases of active and quiescent volcanoes is one of the most important targets in monitoring programs. From a general point of view, many systematic (often cyclic) and random processes control the chemistry of gas discharges, making it difficult to produce a convincing mathematical-statistical modelling. Changes in the chemical composition of volcanic gases sampled at Vulcano Island (Aeolian Arc, Sicily, Italy) from eight different fumaroles located in the northern sector of the summit crater (La Fossa) have been analysed by considering their dependence on time in the period 2000-2007. Each intermediate chemical composition has been considered as potentially derived from the contribution of the two temporal extremes represented by the 2000 and 2007 samples, respectively, by using inverse modelling methodologies for compositional data. Data pertaining to fumaroles F5 and F27, located on the rim and in the inner part of La Fossa crater, respectively, have been used to achieve the proposed aim. The statistical approach has allowed us to highlight the presence of random and non-random fluctuations, features useful for understanding how the volcanic system works, opening new perspectives in sampling strategies and in the evaluation of the natural risk related to a quiescent volcano.
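
The inverse-modelling idea can be sketched as a two-endmember mixing problem: each intermediate analysis is expressed as a convex combination of the 2000 and 2007 compositions, and the mixing proportion is recovered by least squares (endmember values below are invented):

    import numpy as np

    x2000 = np.array([0.70, 0.20, 0.10])  # invented 3-part endmember (year 2000)
    x2007 = np.array([0.40, 0.35, 0.25])  # invented endmember (year 2007)

    def mixing_fraction(x):
        # Least-squares solution of x ~ a*x2000 + (1 - a)*x2007
        d = x2000 - x2007
        return float(d @ (x - x2007) / (d @ d))

    sample = np.array([0.55, 0.27, 0.18])  # an intermediate analysis
    print(f"2000-endmember contribution: {mixing_fraction(sample):.2f}")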

Relevance:

30.00%

Publisher:

Abstract:

A four-compartment model of the cardiovascular system is developed. To allow for easy interpretation and to minimise the number of parameters, an effort was made to keep the model as simple as possible. A sensitivity analysis is first carried out to determine which model parameters are most important for characterising the blood pressure signal. A four-stage process is then described which accurately determines all parameter values. This process is applied to data from three patients, and good agreement is shown in all cases.
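
A deliberately simple stand-in for this workflow, using a one-compartment Windkessel-like model in place of the four-compartment system: simulate the pressure signal, then compute finite-difference sensitivities to each parameter (all values illustrative):

    import numpy as np
    from scipy.integrate import solve_ivp

    def pressure(params, t_eval):
        R, C = params  # peripheral resistance, compliance (arbitrary units)
        inflow = lambda t: 5.0 * max(np.sin(2 * np.pi * t), 0.0)  # pulsatile input
        dp = lambda t, p: (inflow(t) - p / R) / C
        return solve_ivp(dp, (0, 5), [80.0], t_eval=t_eval).y[0]

    t = np.linspace(0, 5, 200)
    base = np.array([1.0, 1.5])
    p0 = pressure(base, t)

    for i, name in enumerate(["R", "C"]):
        bumped = base.copy()
        bumped[i] *= 1.01  # 1% perturbation
        sens = np.abs(pressure(bumped, t) - p0).mean() / 0.01
        print(f"sensitivity of pressure to {name}: {sens:.2f}")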

Relevance:

30.00%

Publisher:

Abstract:

This paper presents and discusses the use of Bayesian procedures - introduced through the use of Bayesian networks in Part I of this series of papers - for 'learning' probabilities from data. The discussion will relate to a set of real data on characteristics of black toners commonly used in printing and copying devices. Particular attention is drawn to the incorporation of the proposed procedures as an integral part in probabilistic inference schemes (notably in the form of Bayesian networks) that are intended to address uncertainties related to particular propositions of interest (e.g., whether or not a sample originates from a particular source). The conceptual tenets of the proposed methodologies are presented along with aspects of their practical implementation using currently available Bayesian network software.
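
The standard mechanism behind such 'learning' is conjugate updating of a node's probability table; a minimal sketch with a Dirichlet prior and invented toner counts:

    import numpy as np

    # Invented counts of three toner resin categories in a reference collection
    counts = np.array([37, 12, 5])
    prior = np.ones_like(counts)        # uniform Dirichlet(1, 1, 1) prior

    posterior = prior + counts          # conjugate update
    prob = posterior / posterior.sum()  # posterior mean probabilities
    print(prob.round(3))                # feed these into the network's node table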

Relevance:

30.00%

Publisher:

Abstract:

The introduction of the WHO FRAX algorithms has facilitated the assessment of fracture risk on the basis of fracture probability. Its use in fracture risk prediction has strengths, but also limitations of which the clinician should be aware and which are the focus of this review. INTRODUCTION: The International Osteoporosis Foundation (IOF) and the International Society for Clinical Densitometry (ISCD) appointed a joint Task Force to develop resource documents in order to make recommendations on how to improve FRAX and better inform clinicians who use FRAX. The Task Force met in November 2010 for 3 days to discuss these topics, which form the focus of this review. METHODS: This study reviews the resource documents and joint position statements of ISCD and IOF. RESULTS: Details on the clinical risk factors currently used in FRAX are provided, and the reasons for the exclusion of others are given. Recommendations are made for the development of surrogate models where country-specific FRAX models are not available. CONCLUSIONS: The wish list of clinicians for the modulation of FRAX is large, but in many instances these wishes cannot presently be fulfilled; however, an explanation and understanding of the reasons may be helpful in translating the information provided by FRAX into clinical practice.

Relevance:

30.00%

Publisher:

Abstract:

In the fight against doping, steroid profiling is a powerful tool to detect drug misuse with endogenous anabolic androgenic steroids. To establish sensitive and reliable models, the factors influencing profiling should be recognised. We performed an extensive literature review of the multiple factors that could influence the quantitative levels and ratios of endogenous steroids in urine matrix. For a comprehensive and scientific evaluation of the urinary steroid profile, it is necessary to define the target analytes as well as testosterone metabolism. The two main confounding factors, that is, endogenous and exogenous factors, are detailed to show the complex process of quantifying the steroid profile within WADA-accredited laboratories. Technical aspects are also discussed as they could have a significant impact on the steroid profile, and thus the steroid module of the athlete biological passport (ABP). The different factors impacting the major components of the steroid profile must be understood to ensure scientifically sound interpretation through the Bayesian model of the ABP. Not only should the statistical data be considered but also the experts in the field must be consulted for successful implementation of the steroidal module.
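
As a heavily hedged illustration of the Bayesian idea behind the steroidal module: a normal-normal update that shrinks population limits for a marker such as the log T/E ratio toward an athlete's own history. The actual adaptive model of the ABP is more elaborate; every number below is invented:

    import numpy as np

    pop_mean, pop_sd = 1.0, 0.5           # invented population prior for log(T/E)
    obs_sd = 0.2                          # assumed within-athlete variation
    history = np.array([0.6, 0.7, 0.65])  # the athlete's previous tests

    n = len(history)
    post_var = 1.0 / (1.0 / pop_sd**2 + n / obs_sd**2)
    post_mean = post_var * (pop_mean / pop_sd**2 + history.sum() / obs_sd**2)
    upper = post_mean + 2.58 * np.sqrt(post_var + obs_sd**2)  # ~99% predictive limit
    print(f"individualised upper limit: {upper:.2f}")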

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: Solexa/Illumina short-read ultra-high throughput DNA sequencing technology produces millions of short tags (up to 36 bases) by parallel sequencing-by-synthesis of DNA colonies. The processing and statistical analysis of such high-throughput data poses new challenges; currently a fair proportion of the tags are routinely discarded due to an inability to match them to a reference sequence, thereby reducing the effective throughput of the technology. RESULTS: We propose a novel base calling algorithm using model-based clustering and probability theory to identify ambiguous bases and code them with IUPAC symbols. We also select optimal sub-tags using a score based on information content to remove uncertain bases towards the ends of the reads. CONCLUSION: We show that the method improves genome coverage and number of usable tags as compared with Solexa's data processing pipeline by an average of 15%. An R package is provided which allows fast and accurate base calling of Solexa's fluorescence intensity files and the production of informative diagnostic plots.
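
A sketch of the ambiguity-coding step: call the strongest channel, but emit the IUPAC code for the top two bases when their intensities are too close to separate. The threshold rule below is a stand-in for the paper's model-based clustering:

    import numpy as np

    IUPAC = {frozenset("AC"): "M", frozenset("AG"): "R", frozenset("AT"): "W",
             frozenset("CG"): "S", frozenset("CT"): "Y", frozenset("GT"): "K"}
    BASES = np.array(list("ACGT"))

    def call_base(intensity, ratio=0.75):
        order = np.argsort(intensity)[::-1]
        top, second = intensity[order[0]], intensity[order[1]]
        if second >= ratio * top:                 # channels too close: ambiguous
            return IUPAC[frozenset(BASES[order[:2]])]
        return str(BASES[order[0]])

    cycles = np.array([[9.0, 1.0, 0.5, 0.2],      # clear A
                       [5.0, 4.5, 0.3, 0.1]])     # A/C ambiguity -> M
    print("".join(call_base(c) for c in cycles))  # prints "AM"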

Relevance:

30.00%

Publisher:

Abstract:

Funds for this report and grant were provided to the Iowa Division of Criminal and Juvenile Justice Planning (CJJP) and Statistical Analysis Center, by the Justice Research and Statistics Association (JRSA) through a cooperative agreement entitled Juvenile Justice Evaluation Resource Center with the Office of Juvenile Justice and Delinquency Prevention (OJJDP), U.S. Department of Justice (DOJ).

Relevance:

30.00%

Publisher:

Abstract:

In rodents and nonhuman primates subjected to spinal cord lesion, neutralizing the neurite growth inhibitor Nogo-A has been shown to promote regenerative axonal sprouting and functional recovery. The goal of the present report was to re-examine the data on the recovery of the primate manual dexterity using refined behavioral analyses and further statistical assessments, representing secondary outcome measures from the same manual dexterity test. Thirteen adult monkeys were studied; seven received an anti-Nogo-A antibody whereas a control antibody was infused into the other monkeys. Monkeys were trained to perform the modified Brinkman board task requiring opposition of index finger and thumb to grasp food pellets placed in vertically and horizontally oriented slots. Two parameters were quantified before and following spinal cord injury: (i) the standard 'score' as defined by the number of pellets retrieved within 30 s from the two types of slots; (ii) the newly introduced 'contact time' as defined by the duration of digit contact with the food pellet before successful retrieval. After lesion the hand was severely impaired in all monkeys; this was followed by progressive functional recovery. Remarkably, anti-Nogo-A antibody-treated monkeys recovered faster and significantly better than control antibody-treated monkeys, considering both the score for vertical and horizontal slots (Mann-Whitney test: P = 0.05 and 0.035, respectively) and the contact time (P = 0.008 and 0.005, respectively). Detailed analysis of the lesions excluded the possibility that this conclusion may have been caused by differences in lesion properties between the two groups of monkeys.
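
The group comparison can be reproduced in outline with a one-sided Mann-Whitney U test; the scores below are invented placeholders, not the study's data:

    from scipy.stats import mannwhitneyu

    # Invented placeholder recovery scores for the two treatment groups
    anti_nogo = [22, 25, 24, 27, 23, 26, 25]
    control = [18, 20, 17, 21, 19, 16]

    stat, p = mannwhitneyu(anti_nogo, control, alternative="greater")
    print(f"U = {stat}, one-sided P = {p:.3f}")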

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a webservice architecture for Statistical Machine Translation aimed at non-technical users. A workflow editor allows a user to combine different webservices using a graphical user interface. In the current state of this project, the webservices have been implemented for a range of sentential and sub-sentential aligners. The advantage of a common interface and a common data format is that it allows the user to build workflows exchanging different aligners.
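
A sketch of the common-interface idea: each aligner webservice is wrapped behind one signature so that workflows can exchange aligners freely. The Aligner protocol and endpoint URL scheme below are invented for illustration:

    from typing import Protocol
    import requests

    class Aligner(Protocol):
        def align(self, source: str, target: str) -> list[tuple[int, int]]: ...

    class RemoteAligner:
        """Wraps one aligner webservice behind the common interface."""
        def __init__(self, endpoint: str):
            self.endpoint = endpoint  # hypothetical service URL

        def align(self, source: str, target: str) -> list[tuple[int, int]]:
            r = requests.post(self.endpoint, json={"src": source, "tgt": target})
            r.raise_for_status()
            return [tuple(pair) for pair in r.json()["alignment"]]

    def run_workflow(aligners: list[Aligner], source: str, target: str):
        # The workflow editor can exchange any aligner honouring the interface
        return [a.align(source, target) for a in aligners]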