997 results for "Density seed"
Abstract:
We formulate density estimation as an inverse operator problem. We then use convergence results of empirical distribution functions to true distribution functions to develop an algorithm for multivariate density estimation. The algorithm is based on a Support Vector Machine (SVM) approach to solving inverse operator problems. The algorithm is implemented and tested on simulated data from distributions of different types and dimensionalities: Gaussians and Laplacians in $R^2$ and $R^{12}$. A comparison in performance is made with Gaussian Mixture Models (GMMs). Our algorithm performs as well as or better than the GMMs on the simulations tested and has the added advantage of being automated with respect to parameters.
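To make the comparison concrete, here is a minimal sketch of the GMM baseline side of such an experiment (the SVM-based inverse-operator estimator itself is not reproduced); the sample sizes, component count, and use of scikit-learn are illustrative assumptions, not the paper's setup.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Laplacian samples in R^2, one of the simulated settings mentioned above
    X_train = rng.laplace(loc=0.0, scale=1.0, size=(1000, 2))
    X_test = rng.laplace(loc=0.0, scale=1.0, size=(500, 2))

    # Fit a GMM and report the mean held-out log-likelihood (higher is better)
    gmm = GaussianMixture(n_components=4, covariance_type="full", random_state=0)
    gmm.fit(X_train)
    print("mean test log-likelihood:", gmm.score(X_test))

Note that the GMM requires choosing the number of components by hand, which is exactly the kind of parameter the abstract claims the proposed algorithm automates.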
Abstract:
In this paper we focus on the problem of estimating a bounded density using a finite combination of densities from a given class. We consider the maximum likelihood estimator (MLE) and the greedy procedure described by Li and Barron. Approximation and estimation bounds are given for both methods. We extend and improve upon the estimation results of Li and Barron, and in particular prove an $O(\frac{1}{\sqrt{n}})$ bound on the estimation error which does not depend on the number of densities in the estimated combination.
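For orientation, the Li-Barron greedy procedure adds one component at a time, updating the mixture as $f_k = (1-\alpha_k) f_{k-1} + \alpha_k \phi_{\theta_k}$ with the new component chosen to maximize the likelihood. The following is a minimal sketch under illustrative assumptions (one-dimensional data, a fixed grid of unit-variance Gaussian candidates, step size $\alpha_k = 2/(k+1)$):

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(1)
    x = rng.normal(size=500)                      # observed sample
    candidates = [norm(loc=m, scale=1.0) for m in np.linspace(-3, 3, 13)]

    f = np.full_like(x, 1e-12)                    # current mixture, evaluated at the data
    for k in range(1, 6):
        alpha = 2.0 / (k + 1)                     # alpha_1 = 1, so the first step seeds the mixture
        # greedily pick the candidate that maximizes the updated log-likelihood
        best = max(candidates,
                   key=lambda c: np.log((1 - alpha) * f + alpha * c.pdf(x)).sum())
        f = (1 - alpha) * f + alpha * best.pdf(x)
    print("mean log-likelihood:", np.log(f).mean())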
Abstract:
High-density, uniform GaN nanodot arrays with controllable size have been synthesized using template-assisted selective growth. GaN nanodots with average diameters of 40 nm, 80 nm, and 120 nm were selectively grown by metalorganic chemical vapor deposition (MOCVD) on a nano-patterned SiO2/GaN template. The nanoporous SiO2 on the GaN surface was created by inductively coupled plasma (ICP) etching using an anodic aluminum oxide (AAO) template as a mask. This selective regrowth results in highly crystalline GaN nanodots, as confirmed by high-resolution transmission electron microscopy. The narrow size distribution and uniform spatial positioning of the nanoscale dots offer potential advantages over self-assembled dots grown in the Stranski–Krastanow mode.
Abstract:
Compositional data analysis motivated the introduction of a complete Euclidean structure on the simplex of D parts. This was based on the early work of J. Aitchison (1986) and completed recently, when the Aitchison distance on the simplex was associated with an inner product and orthonormal bases were identified (Aitchison and others, 2002; Egozcue and others, 2003). A partition of the support of a random variable generates a composition by assigning the probability of each interval to a part of the composition. One can imagine the partition being refined, so that the probability density comes to represent a kind of continuous composition of probabilities in a simplex of infinitely many parts. This intuitive idea leads to a Hilbert space of probability densities, obtained by generalizing the Aitchison geometry for compositions on the simplex to the set of probability densities.
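For reference, the inner product behind this construction can be written down explicitly. A sketch in the notation usual in this literature (an assumption, not quoted from the paper; normalization conventions vary between authors): the Aitchison inner product on $S^D$ and its natural extension to densities on a bounded support $[a,b]$ are

    \langle \mathbf{x}, \mathbf{y} \rangle_a
      = \frac{1}{2D} \sum_{i=1}^{D} \sum_{j=1}^{D}
        \ln\frac{x_i}{x_j}\,\ln\frac{y_i}{y_j},
    \qquad
    \langle f, g \rangle
      = \frac{1}{2(b-a)} \int_a^b\!\!\int_a^b
        \ln\frac{f(s)}{f(t)}\,\ln\frac{g(s)}{g(t)}\,\mathrm{d}s\,\mathrm{d}t,

where the second form reduces to the first (up to a normalizing constant) when $f$ and $g$ are piecewise constant on a partition of $[a,b]$ into $D$ equal intervals.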
Abstract:
In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. They gave two options for the choice of kernel to be used in the estimator. One of these kernels is based on the use of the alr transformation on the simplex $S^D$ jointly with the normal distribution on $R^{D-1}$. However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
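As a concrete illustration of the transformation-based approach, here is a minimal sketch under stated assumptions (compositions in $S^3$, an ilr built from a Helmert-type orthonormal basis, and scipy's gaussian_kde, whose Scott-rule bandwidth is a full matrix); it is not the authors' implementation.

    import numpy as np
    from scipy.stats import gaussian_kde

    def helmert_basis(D):
        # rows form an orthonormal basis of the hyperplane {v : sum(v) = 0}
        H = np.zeros((D - 1, D))
        for i in range(1, D):
            H[i - 1, :i] = 1.0 / np.sqrt(i * (i + 1))
            H[i - 1, i] = -i / np.sqrt(i * (i + 1))
        return H

    def ilr(X):
        # isometric log-ratio coordinates; rows of X are compositions
        return np.log(X) @ helmert_basis(X.shape[1]).T

    rng = np.random.default_rng(2)
    X = rng.dirichlet([4.0, 2.0, 1.0], size=300)    # sample of compositions in S^3
    kde = gaussian_kde(ilr(X).T)                    # Gaussian kernel in R^2
    print(kde(ilr(np.array([[0.5, 0.3, 0.2]])).T))  # density at a composition, in ilr coordinates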
Abstract:
Functional Data Analysis (FDA) deals with samples in which a whole function is observed for each individual. A particular case of FDA arises when the observed functions are density functions, which are also an example of infinite-dimensional compositional data. In this work we compare several methods of dimensionality reduction for this particular type of data: functional principal component analysis (PCA), with or without a prior data transformation, and multidimensional scaling (MDS) for different inter-density distances, one of which takes into account the compositional nature of density functions. The different methods are applied to both artificial and real data (household income distributions).
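One of the compared pipelines can be sketched in a few lines: discretize each density on a grid, apply a centred log-ratio (clr) transformation to account for its compositional nature, and run ordinary PCA on the transformed evaluations as a surrogate for functional PCA. The data, grid, and dimensions below are illustrative assumptions.

    import numpy as np
    from scipy.stats import norm
    from sklearn.decomposition import PCA

    grid = np.linspace(-4, 4, 100)
    rng = np.random.default_rng(3)
    # a sample of 50 densities: normals with random means and scales
    dens = np.array([norm(rng.normal(), 1 + rng.random()).pdf(grid)
                     for _ in range(50)]) + 1e-12

    # clr transform: log, then centre each function over the grid
    clr = np.log(dens) - np.log(dens).mean(axis=1, keepdims=True)
    scores = PCA(n_components=2).fit_transform(clr)  # 2-D representation per density
    print(scores.shape)                              # (50, 2)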
Abstract:
We present a new approach to modelling and classifying breast parenchymal tissue. Given a mammogram, we first discover the distribution of the different tissue densities in an unsupervised manner, and second, we use this tissue distribution to perform the classification. We achieve this using a classifier based on local descriptors and probabilistic Latent Semantic Analysis (pLSA), a generative model from the statistical text literature. We studied the influence of different descriptors, such as texture and SIFT features, at the classification stage, showing that textons outperform SIFT in all cases. Moreover, we demonstrate that pLSA automatically extracts meaningful latent aspects, generating a compact tissue representation based on their densities that is useful for discriminating between mammogram classes. We show the results of tissue classification on the MIAS and DDSM datasets, and compare our method with approaches that classified the same datasets, showing the better performance of our proposal.
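The pLSA model itself fits with a short EM loop. Below is a minimal numpy sketch on a toy image-by-texton count matrix; the matrix sizes, number of latent aspects, and iteration count are illustrative assumptions rather than the paper's settings.

    import numpy as np

    rng = np.random.default_rng(4)
    N = rng.integers(0, 10, size=(30, 50)).astype(float)  # n(d, w): image-by-texton counts
    D, W, Z = N.shape[0], N.shape[1], 4                   # 4 latent tissue aspects

    Pw_z = rng.random((Z, W)); Pw_z /= Pw_z.sum(axis=1, keepdims=True)  # P(w|z)
    Pz_d = rng.random((D, Z)); Pz_d /= Pz_d.sum(axis=1, keepdims=True)  # P(z|d)

    for _ in range(100):
        # E-step: posterior P(z|d,w), normalized over z
        joint = Pz_d[:, :, None] * Pw_z[None, :, :]       # shape (D, Z, W)
        Pz_dw = joint / joint.sum(axis=1, keepdims=True)
        # M-step: re-estimate P(w|z) and P(z|d) from expected counts
        Nz = N[:, None, :] * Pz_dw
        Pw_z = Nz.sum(axis=0); Pw_z /= Pw_z.sum(axis=1, keepdims=True)
        Pz_d = Nz.sum(axis=2); Pz_d /= Pz_d.sum(axis=1, keepdims=True)

The per-image aspect mixtures P(z|d) then serve as the compact tissue representation fed to the classifier.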
Abstract:
It has been shown that the accuracy of mammographic abnormality detection methods depends strongly on breast tissue characteristics: a dense breast drastically reduces detection sensitivity. In addition, breast tissue density is widely accepted to be an important risk indicator for the development of breast cancer. Here, we describe the development of an automatic breast tissue classification methodology, which can be summarized in a number of distinct steps: 1) segmentation of the breast area into fatty versus dense mammographic tissue; 2) extraction of morphological and texture features from the segmented breast areas; and 3) use of a Bayesian combination of a number of classifiers. The evaluation, based on a large number of cases from two different mammographic data sets, shows a strong correlation ( and 0.67 for the two data sets) between automatic and expert-based Breast Imaging Reporting and Data System mammographic density assessment.
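Step 3 can be approximated with off-the-shelf tools: averaging the posterior probabilities of several classifiers ("soft voting") is a simple Bayesian-style fusion. The sketch below uses simulated feature vectors and arbitrary classifier choices, both assumptions for illustration.

    import numpy as np
    from sklearn.ensemble import VotingClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.naive_bayes import GaussianNB
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(5)
    X = rng.normal(size=(200, 8))        # stand-in morphological/texture features
    y = rng.integers(0, 4, size=200)     # stand-in density classes

    combo = VotingClassifier(
        estimators=[("lr", LogisticRegression(max_iter=1000)),
                    ("nb", GaussianNB()),
                    ("knn", KNeighborsClassifier())],
        voting="soft")                   # average the class posteriors
    combo.fit(X, y)
    print(combo.predict(X[:5]))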
Abstract:
A recent trend in digital mammography is computer-aided diagnosis systems, computerised tools designed to assist radiologists. Most of these systems are used for the automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as the density of the breast increases, and that this dependence is method specific. In this paper we propose a new approach to the classification of mammographic images according to their breast parenchymal density. Our classification uses information extracted from segmentation results and is based on the underlying breast tissue texture. Classification performance was evaluated on a large set of digitised mammograms, using different classifiers and a leave-one-out methodology. The results demonstrate the feasibility of estimating breast density using image processing and analysis techniques.
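The leave-one-out protocol mentioned above is straightforward to reproduce; here is a minimal sketch with simulated features (the actual texture features and classifiers are the paper's and are not shown here):

    import numpy as np
    from sklearn.model_selection import LeaveOneOut, cross_val_score
    from sklearn.neighbors import KNeighborsClassifier

    rng = np.random.default_rng(6)
    X = rng.normal(size=(60, 10))        # stand-in texture features, one row per mammogram
    y = rng.integers(0, 3, size=60)      # stand-in density classes

    # each fold trains on all mammograms but one and tests on the held-out case
    acc = cross_val_score(KNeighborsClassifier(), X, y, cv=LeaveOneOut())
    print("LOO accuracy:", acc.mean())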
Abstract:
Research nurseries are an initiative designed to contribute to breaking down entrenched paradigms of knowledge appropriation, supported by a theoretical model of discernment that goes beyond the traditional classroom method of learning, transcending toward critical insight aimed at transforming the environment within an autonomous space, and providing tools that are essentially scientific yet governed by ethical criteria and social commitment. Such a conception, however, could be perceived as rhetorical, even metaphorical, when contrasted with a highly changeable reality in which the novice not only seeks to partake of the fruit of knowledge but also requires appropriate strategies for appropriating it. From this formative aspiration arises the use of technological tools based on virtual learning environments, intended to enhance motivation as an alternative to the blurring of perspectives and the creeping commercialism found in some types of nurseries, an attitude that is at times not far removed from the lecture-based model.
Abstract:
The story of the growth and transformation of a sunflower seed, described as simply as possible. The text is in large type, with two or three sentences per double-page spread; the large illustrations are accompanied by text in another type size when additional information needs to be given. A glossary is also included.
Abstract:
The life cycle of a flower, told as a story. It follows the obstacle-filled journey of a multitude of small seeds from the moment the strong autumn wind scatters them through the air and over the ground, through all the seasons of the year, until autumn comes round again. Only one of them manages to reach fertile soil and germinate into a flower in spring.
Abstract:
The contributions of the correlated and uncorrelated components of the electron-pair density to atomic and molecular intracule I(r) and extracule E(R) densities and their Laplacian functions, ∇²I(r) and ∇²E(R), are analyzed at the Hartree-Fock (HF) and configuration interaction (CI) levels of theory. The topologies of the uncorrelated components of these functions can be rationalized in terms of the corresponding one-electron densities. In contrast, by analyzing the correlated components of I(r) and E(R), namely I_C(r) and E_C(R), the effect of electron Fermi and Coulomb correlation can be assessed at the HF and CI levels of theory. Moreover, the contribution of Coulomb correlation can be isolated by means of difference maps between I_C(r) and E_C(R) distributions calculated at the two levels of theory. As application examples, the He, Ne, and Ar atomic series, the $\mathrm{C}_2^{2-}$, $\mathrm{N}_2$, and $\mathrm{O}_2^{2+}$ molecular series, and the C2H4 molecule have been investigated. For these atoms and molecules, it is found that Fermi correlation accounts for the main characteristics of I_C(r) and E_C(R), with Coulomb correlation slightly increasing the locality of these functions at the CI level of theory. Furthermore, I_C(r), E_C(R), and the associated Laplacian functions reveal the short-ranged nature and high isotropy of Fermi and Coulomb correlation in atoms and molecules.
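For readers unfamiliar with these quantities, the standard definitions in terms of the electron-pair density $\Gamma(\mathbf{r}_1, \mathbf{r}_2)$ are sketched below (the notation is conventional, not quoted from the paper, and normalization conventions vary between authors):

    I(\mathbf{r}) = \int \Gamma(\mathbf{r}_1, \mathbf{r}_2)\,
        \delta(\mathbf{r}_1 - \mathbf{r}_2 - \mathbf{r})\,
        \mathrm{d}\mathbf{r}_1\,\mathrm{d}\mathbf{r}_2,
    \qquad
    E(\mathbf{R}) = \int \Gamma(\mathbf{r}_1, \mathbf{r}_2)\,
        \delta\!\left(\tfrac{\mathbf{r}_1 + \mathbf{r}_2}{2} - \mathbf{R}\right)
        \mathrm{d}\mathbf{r}_1\,\mathrm{d}\mathbf{r}_2,

so I(r) is the density of inter-electronic separation vectors and E(R) the density of electron-pair centres of mass.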
Abstract:
A topological analysis of intracule and extracule densities and their Laplacians, computed within the Hartree-Fock approximation, is presented. The analysis of the density distributions reveals that, among all possible electron-electron interactions in atoms and between atoms in molecules, only very few appear rigorously as local maxima. In contrast, they are clearly identified as local minima in the topology of the Laplacian maps. The conceptually different interpretations of intracule and extracule maps are also discussed in detail. An application to the C2H2, C2H4, and C2H6 series of molecules is presented.
Abstract:
The performance of the SAOP potential for the calculation of NMR chemical shifts was evaluated. SAOP results show considerable improvement with respect to previous potentials, such as VWN or BP86, at least for carbon, nitrogen, oxygen, and fluorine chemical shifts. Furthermore, a few NMR calculations carried out on third-period atoms (S, P, and Cl) also improved when the SAOP potential was used.