945 results for Point density analysis
Abstract:
It is widely known that a significant fraction of the bits in a program's variables are useless or even unused during execution. Bit-width analysis aims to find the minimum number of bits needed for each variable while preserving execution correctness, thereby saving resources. In this paper, we propose a static analysis method for bit-widths in general applications that approximates conservatively at compile time and is independent of runtime conditions. While most related work focuses on integer applications, our method is also tailored to and applicable to floating point variables, and, combined with precision analysis, it could be extended to transform floating point numbers into fixed point numbers. We use more precise representations for the data value ranges of both scalar and array variables, and arrays are analysed at the element level. We also suggest an alternative to the standard fixed-point iteration in bi-directional range analysis. These techniques are implemented on the Trimaran compiler infrastructure and evaluated on a set of benchmarks.
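As a rough illustration of the core computation (not the paper's Trimaran pass), the sketch below derives a minimum bit-width from a conservatively approximated integer value range; the function names and the two's-complement convention are assumptions made for this example.

```python
# Minimal sketch: derive the minimum bit-width of a variable from a
# conservatively approximated value range [lo, hi].
def bits_unsigned(hi: int) -> int:
    """Bits needed to represent values in [0, hi]."""
    return max(1, hi.bit_length())

def bits_signed(lo: int, hi: int) -> int:
    """Bits needed for a two's-complement range [lo, hi]."""
    if lo >= 0:
        return bits_unsigned(hi)
    # Enough magnitude bits for both bounds, plus one sign bit.
    return 1 + max((-lo - 1).bit_length(), hi.bit_length())

def min_bit_width(lo: int, hi: int) -> int:
    """Minimum bits that preserve correctness for the range [lo, hi]."""
    return bits_signed(lo, hi)

# Example: a loop counter proven to stay in [0, 100] needs only 7 bits.
assert min_bit_width(0, 100) == 7
assert min_bit_width(-128, 127) == 8
```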
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure in the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently when the Aitchison distance in the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine that the partition can be refined, so that the probability density would represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities by generalizing the Aitchison geometry for compositions in the simplex to the set of probability densities.
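For orientation, a hedged sketch of the structure the abstract alludes to: the Aitchison inner product on the D-part simplex and its formal extension to densities on a bounded support. The normalising constants shown are one common convention and may differ from those used by the cited authors.

```latex
% Aitchison inner product on the simplex S^D (one common normalisation):
\langle \mathbf{x}, \mathbf{y} \rangle_A
  = \frac{1}{2D} \sum_{i=1}^{D} \sum_{j=1}^{D}
    \ln\frac{x_i}{x_j}\,\ln\frac{y_i}{y_j}

% Formal extension to probability densities f, g on a bounded support [a,b]:
\langle f, g \rangle_B
  = \frac{1}{2(b-a)} \int_a^b \!\!\int_a^b
    \ln\frac{f(s)}{f(t)}\,\ln\frac{g(s)}{g(t)} \, ds \, dt
```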
Abstract:
One of the disadvantages of old age is that there is more past than future: this, however, may be turned into an advantage if the wealth of experience and, hopefully, wisdom gained in the past can be reflected upon and throw some light on possible future trends. To an extent, then, this talk is necessarily personal, certainly nostalgic, but also self-critical and inquisitive about our understanding of the discipline of statistics. A number of almost philosophical themes will run through the talk: the search for appropriate modelling in relation to the real problem envisaged, emphasis on sensible balances between simplicity and complexity, the relative roles of theory and practice, the nature of communication of inferential ideas to the statistical layman, and the inter-related roles of teaching, consultation and research. A list of keywords might be: identification of the sample space and its mathematical structure, choices between transform and stay, the role of parametric modelling, the role of a sample space metric, the underused hypothesis lattice, the nature of compositional change, particularly in relation to the modelling of processes. While the main theme will be relevance to compositional data analysis, we shall point to substantial implications for general multivariate analysis arising from experience of the development of compositional data analysis…
Abstract:
Developments in the statistical analysis of compositional data over the last two decades have made possible a much deeper exploration of the nature of variability, and of the possible processes associated with compositional data sets from many disciplines. In this paper we concentrate on geochemical data sets. First we explain how hypotheses of compositional variability may be formulated within the natural sample space, the unit simplex, including useful hypotheses of subcompositional discrimination and specific perturbational change. Then we develop, through standard methodology such as generalised likelihood ratio tests, statistical tools to allow the systematic investigation of a complete lattice of such hypotheses. Some of these tests are simple adaptations of existing multivariate tests, but others require special construction. We comment on the use of graphical methods in compositional data analysis and on the ordination of specimens. The recent development of the concept of compositional processes is then explained together with the necessary tools for a staying-in-the-simplex approach, namely compositional singular value decompositions. All these statistical techniques are illustrated for a substantial compositional data set, consisting of 209 major-oxide and rare-element compositions of metamorphosed limestones from the Northeast and Central Highlands of Scotland. Finally we point out a number of unresolved problems in the statistical analysis of compositional processes.
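As a minimal illustration (not the authors' code), the compositional singular value decomposition used in the staying-in-the-simplex approach is commonly computed by centring clr-transformed data and applying an ordinary SVD; the sketch below assumes strictly positive compositions stored as rows.

```python
# Minimal sketch of a compositional (clr-based) singular value decomposition;
# it illustrates the general recipe, not the authors' own implementation.
import numpy as np

def clr(X):
    """Centred log-ratio transform of compositions (rows of X, all parts > 0)."""
    L = np.log(X)
    return L - L.mean(axis=1, keepdims=True)

def compositional_svd(X):
    """SVD of column-centred clr data: scores, singular values, directions."""
    Z = clr(X)
    Z = Z - Z.mean(axis=0)            # centre at the compositional mean
    U, s, Vt = np.linalg.svd(Z, full_matrices=False)
    return U, s, Vt

# Toy example with 3 specimens and 4 parts (e.g. major oxides, closed to 1).
X = np.array([[0.5, 0.3, 0.1, 0.1],
              [0.4, 0.4, 0.1, 0.1],
              [0.2, 0.3, 0.3, 0.2]])
U, s, Vt = compositional_svd(X)
print(s)  # singular values: relative importance of each compositional process
```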
Abstract:
The use of the perturbation and power transformation operations permits the investigation of linear processes in the simplex as in a vector space. When the investigated geochemical processes can be constrained by a well-known starting point, the eigenvectors of the covariance matrix of a non-centred principal component analysis allow compositional changes to be modelled relative to that reference point. The results obtained for the chemistry of water collected in the River Arno (central-northern Italy) open new perspectives for considering relative changes of the analysed variables and for hypothesising the relative effect of the different physical-chemical processes at work, thus laying the basis for quantitative modelling.
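A minimal sketch of the two simplex operations on which this abstract relies, perturbation and the power transformation; the function names and the toy process are illustrative assumptions, not the authors' implementation.

```python
# Perturbation (the simplex analogue of addition) and powering (scalar
# multiplication), each followed by closure to unit sum.
import numpy as np

def closure(x):
    """Rescale a vector of strictly positive parts so it sums to 1."""
    x = np.asarray(x, dtype=float)
    return x / x.sum()

def perturb(x, y):
    """Perturbation: componentwise product, then closure."""
    return closure(np.asarray(x, dtype=float) * np.asarray(y, dtype=float))

def power(x, a):
    """Power transformation: raise each part to the scalar a, then closure."""
    return closure(np.asarray(x, dtype=float) ** a)

# A linear process in the simplex starting at x0 and evolving along p,
# x(t) = x0 (+) (t (.) p), here evaluated at t = 2.
x0 = closure([1.0, 2.0, 3.0])
p  = closure([1.0, 1.0, 2.0])
print(perturb(x0, power(p, 2.0)))
```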
Abstract:
Hydrogeological research usually includes some statistical studies devised to elucidate the mean background state, characterise relationships among different hydrochemical parameters, and show the influence of human activities. These goals are achieved either by means of a statistical approach or by mixing models between end-members. Compositional data analysis has proved to be effective with the first approach, but there is no commonly accepted solution to the end-member problem in a compositional framework. We present here a possible solution based on factor analysis of compositions, illustrated with a case study. We find two factors on the compositional bi-plot by fitting two non-centred orthogonal axes to the most representative variables. Each of these axes defines a subcomposition, grouping those variables that lie nearest to it. With each subcomposition a log-contrast is computed and rewritten as an equilibrium equation. These two factors can be interpreted as the isometric log-ratio (ilr) coordinates of three hidden components that can be plotted in a ternary diagram. These hidden components might be interpreted as end-members. We analysed 14 molarities at 31 sampling stations along the Llobregat River and its tributaries, measured monthly over two years. We obtained a bi-plot explaining 57% of the total variance, from which we extracted two factors: factor G, reflecting the geological background enhanced by potash mining; and factor A, essentially controlled by urban and/or farming wastewater. Graphical representation of these two factors allows us to identify three extreme samples, corresponding to pristine waters, potash mining influence and urban sewage influence. To confirm this, analyses are available of the diffuse and widespread point sources identified in the area: springs, potash mining lixiviates, sewage, and fertilisers. Each of these sources shows a clear link with one of the extreme samples, except fertilisers, owing to the heterogeneity of their composition. This approach is a useful tool to distinguish and characterise end-members, an issue generally difficult to solve. It is worth noting that the end-member composition cannot be fully estimated but only characterised through log-ratio relationships among components. Moreover, the influence of each end-member in a given sample must be evaluated relative to the other samples. These limitations are intrinsic to the relative nature of compositional data.
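As a hedged reminder of the construction behind such factors (written here in a standard form that may differ from the authors' exact normalisation), a balance between two groups of parts is an ilr coordinate of log-contrast form:

```latex
% Balance (ilr coordinate) between a group of r parts x_+ and a group of s parts x_-,
% a particular case of a log-contrast \sum_i a_i \ln x_i with \sum_i a_i = 0.
b = \sqrt{\frac{r\,s}{r+s}} \;
    \ln \frac{g(\mathbf{x}_{+})}{g(\mathbf{x}_{-})},
\qquad
g(\mathbf{x}_{+}) = \Bigl(\prod_{i \in +} x_i\Bigr)^{1/r},\quad
g(\mathbf{x}_{-}) = \Bigl(\prod_{j \in -} x_j\Bigr)^{1/s}
```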
Abstract:
In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. Indeed, they gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex S^D jointly with the normal distribution on R^(D-1). However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
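A hedged sketch of the ilr-based estimator being compared: map compositions to ilr coordinates and apply a Gaussian kernel with a full bandwidth matrix. SciPy's gaussian_kde and the Helmert-type basis below are stand-ins chosen for this example; the cited paper's bandwidth selection may differ.

```python
# ilr transform followed by a multivariate Gaussian kernel density estimate.
import numpy as np
from scipy.stats import gaussian_kde

def ilr(X):
    """ilr coordinates of D-part compositions (rows of X, all parts > 0)."""
    D = X.shape[1]
    L = np.log(X)
    clr = L - L.mean(axis=1, keepdims=True)
    # Helmert-type contrasts: an orthonormal basis of the clr hyperplane.
    V = np.zeros((D, D - 1))
    for j in range(1, D):
        V[:j, j - 1] = 1.0 / np.sqrt(j * (j + 1))
        V[j, j - 1] = -j / np.sqrt(j * (j + 1))
    return clr @ V

# Toy data: compositions obtained by closing positive lognormal parts.
rng = np.random.default_rng(0)
X = rng.lognormal(size=(200, 3))
X = X / X.sum(axis=1, keepdims=True)

Z = ilr(X)
kde = gaussian_kde(Z.T)      # expects shape (dims, n_samples); full covariance bandwidth
print(kde(Z[:5].T))          # estimated density at the first five points
```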
Abstract:
Functional Data Analysis (FDA) deals with samples where a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA) with or without a previous data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of them taking into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
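As one possible concrete reading of the MDS option that respects the compositional nature of densities, the sketch below computes Aitchison distances between densities discretised on a common grid and feeds them to a precomputed-dissimilarity MDS; the grid, distance choice and toy data are assumptions made for illustration.

```python
# MDS of discretised densities using the Aitchison distance between bins.
import numpy as np
from sklearn.manifold import MDS

def aitchison_distance(p, q):
    """Aitchison distance between two strictly positive compositions."""
    lp, lq = np.log(p), np.log(q)
    d = (lp - lp.mean()) - (lq - lq.mean())   # difference of clr coefficients
    return np.sqrt(np.sum(d ** 2))

# Toy "income distributions": 10 densities evaluated on a 50-bin grid, closed to 1.
rng = np.random.default_rng(1)
F = rng.gamma(shape=2.0, size=(10, 50)) + 1e-9
F = F / F.sum(axis=1, keepdims=True)

n = len(F)
D = np.array([[aitchison_distance(F[i], F[j]) for j in range(n)] for i in range(n)])

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(D)
print(coords[:3])   # low-dimensional configuration of the first three densities
```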
Abstract:
We present a new approach to modelling and classifying breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and then use this tissue distribution to perform the classification. We achieve this with a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texture and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact tissue representation based on their densities that is useful for discrimination in mammogram classification. We show the results of tissue classification on the MIAS and DDSM datasets and compare our method with approaches that classified these same datasets, showing the better performance of our proposal.
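For readers unfamiliar with pLSA, the sketch below shows the textbook EM updates on a document-by-word (here, image-by-texton) count matrix; it is a generic illustration, not the authors' implementation, and the variable names are assumptions.

```python
# Textbook pLSA: EM updates for P(word|topic) and P(topic|doc).
import numpy as np

def plsa(counts, n_topics, n_iter=50, seed=0):
    """counts: (n_docs, n_words) nonnegative matrix. Returns P(w|z), P(z|d)."""
    rng = np.random.default_rng(seed)
    n_docs, n_words = counts.shape
    p_w_z = rng.random((n_topics, n_words)); p_w_z /= p_w_z.sum(1, keepdims=True)
    p_z_d = rng.random((n_docs, n_topics));  p_z_d /= p_z_d.sum(1, keepdims=True)
    for _ in range(n_iter):
        # E-step: responsibilities P(z|d,w), shape (n_docs, n_words, n_topics)
        joint = p_z_d[:, None, :] * p_w_z.T[None, :, :]
        resp = joint / (joint.sum(axis=2, keepdims=True) + 1e-12)
        # M-step: re-estimate the conditionals from expected counts
        weighted = counts[:, :, None] * resp
        p_w_z = weighted.sum(axis=0).T
        p_w_z /= p_w_z.sum(axis=1, keepdims=True) + 1e-12
        p_z_d = weighted.sum(axis=1)
        p_z_d /= p_z_d.sum(axis=1, keepdims=True) + 1e-12
    return p_w_z, p_z_d

# Toy usage: 6 "mammograms" described by 20-bin texton histograms, 3 latent aspects.
X = np.random.default_rng(2).integers(0, 10, size=(6, 20))
p_w_z, p_z_d = plsa(X, n_topics=3)
print(p_z_d.round(2))   # per-image mixture of latent tissue aspects
```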
Abstract:
A recent trend in digital mammography is computer-aided diagnosis systems, which are computerised tools designed to assist radiologists. Most of these systems are used for the automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as the density of the breast increases, and this dependence is method specific. In this paper we propose a new approach to the classification of mammographic images according to their breast parenchymal density. Our classification uses information extracted from segmentation results and is based on the underlying breast tissue texture. Classification performance was evaluated on a large set of digitised mammograms, using different classifiers and a leave-one-out methodology. The results demonstrate the feasibility of estimating breast density using image processing and analysis techniques.
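A minimal sketch of the leave-one-out evaluation protocol described here, using a k-NN classifier on precomputed texture features as a stand-in; the actual features and classifiers used in the paper may differ.

```python
# Leave-one-out evaluation of a classifier on precomputed texture features.
import numpy as np
from sklearn.model_selection import LeaveOneOut
from sklearn.neighbors import KNeighborsClassifier

def loo_accuracy(features, labels, k=3):
    """Leave-one-out accuracy of a k-NN classifier."""
    correct = 0
    for train_idx, test_idx in LeaveOneOut().split(features):
        clf = KNeighborsClassifier(n_neighbors=k)
        clf.fit(features[train_idx], labels[train_idx])
        correct += int(clf.predict(features[test_idx])[0] == labels[test_idx][0])
    return correct / len(labels)

# Toy example: 40 mammograms, 8 texture features, 4 density classes.
rng = np.random.default_rng(3)
X = rng.normal(size=(40, 8))
y = rng.integers(0, 4, size=40)
print(loo_accuracy(X, y))
```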
Abstract:
Objectives To determine the effect of human papillomavirus (HPV) quadrivalent vaccine on the risk of developing subsequent disease after an excisional procedure for cervical intraepithelial neoplasia or diagnosis of genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia. Design Retrospective analysis of data from two international, double blind, placebo controlled, randomised efficacy trials of quadrivalent HPV vaccine (protocol 013 (FUTURE I) and protocol 015 (FUTURE II)). Setting Primary care centres and university or hospital associated health centres in 24 countries and territories around the world. Participants Among 17 622 women aged 15–26 years who underwent 1:1 randomisation to vaccine or placebo, 2054 received cervical surgery or were diagnosed with genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia. Intervention Three doses of quadrivalent HPV vaccine or placebo at day 1, month 2, and month 6. Main outcome measures Incidence of HPV related disease from 60 days after treatment or diagnosis, expressed as the number of women with an end point per 100 person years at risk. Results A total of 587 vaccine and 763 placebo recipients underwent cervical surgery. The incidence of any subsequent HPV related disease was 6.6 and 12.2 in vaccine and placebo recipients respectively (46.2% reduction (95% confidence interval 22.5% to 63.2%) with vaccination). Vaccination was associated with a significant reduction in risk of any subsequent high grade disease of the cervix by 64.9% (20.1% to 86.3%). A total of 229 vaccine recipients and 475 placebo recipients were diagnosed with genital warts, vulvar intraepithelial neoplasia, or vaginal intraepithelial neoplasia, and the incidence of any subsequent HPV related disease was 20.1 and 31.0 in vaccine and placebo recipients respectively (35.2% reduction (13.8% to 51.8%)). Conclusions Previous vaccination with quadrivalent HPV vaccine among women who had surgical treatment for HPV related disease significantly reduced the incidence of subsequent HPV related disease, including high grade disease.
Abstract:
This article explores the medical care standard required by law for terminally ill patients and the possibility of limiting therapeutic efforts while respecting the due diligence expected from doctors. To this end, circumstances are identified in which the doctor is forced to choose between two possible actions: to guarantee the right to life by continuing treatment, or to limit the right to healthcare by limiting therapeutic efforts. Two cases taken from English Common Law that decided on the factual problem at hand were reviewed. In our country, the Constitutional Court established a line of jurisprudence on the role of the doctor in deciding whether or not to continue treatment for a terminally ill person. Lastly, jurisprudential precedents are presented along with a comparative analysis of the solutions given in Great Britain and in Colombia.
Abstract:
Introduction: In recent years, alveolar recruitment maneuvers (RM) have attracted growing interest due to their potential benefit in pulmonary protection, and they have been introduced into clinical practice. Objective: To describe and analyse knowledge of RM and its application at seven intensive care units in the city of Cali, Colombia. Methods and materials: Descriptive cross-sectional study with an intentional sample of 64 professionals working in seven intensive care units who apply RM. The self-completed survey consisted of thirteen questions, and the application period was two months. Results: Of the 64 professionals surveyed, 77.8% follow a protocol guide; 54.7% employ during RM the ideal positive end-expiratory pressure (PEEP), which maintains a saturation > 90% and a PaO2 > 60 mmHg; 42.1% tolerate airway pressures between 35 and 50 cmH2O; and 48.4% perform RM with a progressive increase of PEEP and a low tidal volume. Conclusions: Regarding knowledge related to RM, the answers were heterogeneous. There is currently no consensus on the most effective and safest way to perform an RM. This study can be a starting point for raising awareness of the need to review the knowledge, skills and abilities required to perform RM.
Abstract:
Introduction: Duchenne and Becker muscular dystrophies (DMD/BMD) are X-linked recessive diseases characterized by progressive muscle weakness and wasting, loss of motor skills, and death after the second decade of life. Deletions are the most prevalent mutations affecting the dystrophin gene, which spans 79 exons. Objective: To identify deletions in the dystrophin gene in 58 patients affected with DMD. Methods: Multiplex PCR was used to identify deletions in the dystrophin gene in 58 patients with DMD and to determine the frequency of this type of mutation in our population. Results: We found deletions in 1.72% of patients (1 of 58). Deletions were not the principal cause of disease in our population; it is possible that duplications and point mutations caused the illness in our patients. Conclusions: The frequency of deletions in the 15 exons analysed from the dystrophin gene was low. In contrast to what has been reported in the literature worldwide, deletions were not the predominant type of mutation in our patients' samples; it is therefore important to look for other types of mutation, such as duplications and point mutations.
Abstract:
A topological analysis of intracule and extracule densities and their Laplacians, computed within the Hartree-Fock approximation, is presented. The analysis of the density distributions reveals that, among all possible electron-electron interactions in atoms and between atoms in molecules, only very few are rigorously located as local maxima. In contrast, they are clearly identified as local minima in the topology of the Laplacian maps. The conceptually different interpretation of intracule and extracule maps is also discussed in detail. An application example to the C2H2, C2H4, and C2H6 series of molecules is presented.
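For orientation, the intracule and extracule pair densities analysed here are usually defined as follows (one common convention; normalisation factors vary across authors):

```latex
% Intracule density: distribution of the inter-electronic vector u = r_i - r_j.
I(\mathbf{u}) = \Bigl\langle \Psi \Bigm| \sum_{i<j}
  \delta\!\bigl(\mathbf{r}_i - \mathbf{r}_j - \mathbf{u}\bigr) \Bigm| \Psi \Bigr\rangle

% Extracule density: distribution of the pair centre of mass R = (r_i + r_j)/2.
E(\mathbf{R}) = \Bigl\langle \Psi \Bigm| \sum_{i<j}
  \delta\!\Bigl(\tfrac{\mathbf{r}_i + \mathbf{r}_j}{2} - \mathbf{R}\Bigr) \Bigm| \Psi \Bigr\rangle
```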