960 results for Kernel density estimates


Relevance: 30.00%

Abstract:

There is growing evidence that focal thinning of cortical bone in the proximal femur may predispose a hip to fracture. Detecting such defects in clinical CT is challenging, since cortices may be significantly thinner than the imaging system's point spread function. We recently proposed a model-fitting technique to measure sub-millimetre cortices, an ill-posed problem which was regularized by assuming a specific, fixed value for the cortical density. In this paper, we develop the work further by proposing and evaluating a more rigorous method for estimating the constant cortical density, and extend the paradigm to encompass the mapping of cortical mass (mineral mg/cm²) in addition to thickness. Density, thickness and mass estimates are evaluated on sixteen cadaveric femurs, with high resolution measurements from a micro-CT scanner providing the gold standard. The results demonstrate robust, accurate measurement of peak cortical density and cortical mass. Cortical thickness errors are confined to regions of thin cortex and are bounded by the extent to which the local density deviates from the peak, averaging 20% for 0.5 mm cortex. © 2012 Elsevier B.V.
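
The measurement idea can be sketched as follows: a minimal illustration assuming the cortex is modelled as a three-level step profile (soft tissue, cortex, trabecular bone) blurred by a Gaussian PSF, with the cortical density held fixed to regularize the fit. All function names and numbers here are hypothetical, not the paper's.

```python
# Hypothetical sketch of the kind of model fitting described above; the
# fixed cortical density Y1_FIXED regularizes sub-millimetre estimates.
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

Y1_FIXED = 1200.0  # assumed fixed cortical density (mg/cm^3), illustrative

def blurred_cortex(x, x0, x1, y0, y2, sigma):
    """Three-level profile y0 -> Y1_FIXED -> y2 blurred by a Gaussian PSF."""
    return (y0
            + (Y1_FIXED - y0) * norm.cdf(x, loc=x0, scale=sigma)
            + (y2 - Y1_FIXED) * norm.cdf(x, loc=x1, scale=sigma))

x = np.linspace(-5.0, 5.0, 101)                              # mm along profile
profile = blurred_cortex(x, -0.25, 0.25, 100.0, 300.0, 1.0)  # synthetic data
popt, _ = curve_fit(blurred_cortex, x, profile,
                    p0=[-1.0, 1.0, 0.0, 200.0, 1.5])
thickness_mm = popt[1] - popt[0]              # cortical thickness estimate
mass_mg_cm2 = Y1_FIXED * thickness_mm / 10.0  # density * thickness (mg/cm^2)
```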

Relevance: 30.00%

Abstract:

Wireless sensor networks have recently emerged as enablers of important applications such as environmental, chemical and nuclear sensing systems. Such applications have sophisticated spatial-temporal semantics that set them apart from traditional wireless networks. For example, the computation of temperature averaged over the sensor field must take into account local densities. This is crucial since otherwise the estimated average temperature can be biased by over-sampling areas where many more sensors exist. Thus, we envision that a fundamental service that a wireless sensor network should provide is that of estimating local densities. In this paper, we propose a lightweight probabilistic density inference protocol, which we call DIP, that allows each sensor node to implicitly estimate its neighborhood size without the explicit exchange of node identifiers required by existing density discovery schemes. The theoretical basis of DIP is a probabilistic analysis which gives the relationship between the number of sensor nodes contending in the neighborhood of a node and the level of contention measured by that node. Extensive simulations confirm the premise of DIP: it can provide statistically reliable and accurate estimates of local density at a very low energy cost and constant running time. We demonstrate how applications could be built on top of our DIP-based service by computing density-unbiased statistics from estimated local densities.
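
The core inference can be illustrated with a minimal sketch (not DIP's actual estimator): if each of n neighbours transmits in a probe slot independently with probability p, a listener hears a busy slot with probability 1 - (1 - p)^n, and inverting the measured busy fraction yields a density estimate without exchanging identifiers.

```python
# Illustrative contention-based density inference; parameter values
# and the estimator form are assumptions, not taken from the paper.
import math

def estimate_neighbourhood_size(busy_fraction, p=0.05):
    """Invert P(busy) = 1 - (1 - p)^n for n, given the measured busy fraction."""
    if not 0.0 < busy_fraction < 1.0:
        raise ValueError("busy fraction must lie strictly between 0 and 1")
    return math.log(1.0 - busy_fraction) / math.log(1.0 - p)

# e.g. 40% of probe slots heard busy with p = 0.05  =>  n is roughly 10
print(estimate_neighbourhood_size(0.40))  # ~9.96
```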

Relevance: 30.00%

Abstract:

We introduce Active Hidden Models (AHMs), which utilize kernel methods traditionally associated with classification. We use AHMs to track deformable objects in video sequences by leveraging kernel projections. We introduce the "subset projection" method, which improves the efficiency of our tracking approach by a factor of ten. We successfully tested our method on facial tracking with extreme head movements (including full 180-degree head rotation), facial expressions, and deformable objects. Given a kernel and a set of training observations, we derive unbiased estimates of the accuracy of the AHM tracker. Kernels are generally used in classification methods to make training data linearly separable; in contrast, we prove that the optimal (minimum-variance) tracking kernels are those that make the training observations linearly dependent.

Relevance: 30.00%

Abstract:

The paper outlines the effects of polymer conditioning on alum sludge properties such as floc size, density, fractal dimension (DF) and rheological properties. Experimental results demonstrate that polymer conditioning of alum sludge leads to: larger floc sizes, with a plateau reached at higher doses; higher densities associated with higher doses; an increased degree of compactness; and an initial decrease followed by an increase of supernatant viscosity with continued increase in polymer dose. A secondary focus of the paper is a comparison of estimates of the optimum dose using different criteria that emanate from established dewatering tests such as capillary suction time (CST), specific resistance to filtration (SRF), liquid-phase viscosity and modified SRF, as well as a simple settlement test in terms of CML30. The alum sludge was derived from a water works treating coloured, low-turbidity raw waters.

Relevance: 30.00%

Abstract:

The equilibrium polymerization of sulfur is investigated by Monte Carlo simulations. The potential energy model is based on density functional results for the cohesive energy, structural, and vibrational properties as well as reactivity of sulfur rings and chains [Part I, J. Chem. Phys. 118, 9257 (2003)]. Liquid samples of 2048 atoms are simulated at temperatures 450 K ≤ T ≤ 850 K and P = 0, starting from monodisperse S8 molecular compositions. Thermally activated bond-breaking processes lead to an equilibrium population of unsaturated atoms that can change the local pattern of covalent bonds and allow the system to approach equilibrium. The concentration of unsaturated atoms and the kinetics of bond interchanges are determined by the energy ΔE_b required to break a covalent bond. Equilibrium with respect to the bond distribution is achieved for 15 ≤ ΔE_b ≤ 21 kcal/mol over a wide temperature range (T ≥ 450 K), within which polymerization occurs readily, with entropy from the bond distribution overcompensating the increase in enthalpy. There is a maximum in the polymerized fraction at a temperature T_max that depends on ΔE_b. This fraction decreases at higher temperature because broken bonds and short chains proliferate and, for T ≤ T_max, because entropy is less important than enthalpy. The molecular size distribution is well described by a Zimm-Schulz function, plus an isolated peak for S8. Large molecules are almost exclusively open chains. Rings tend to have fewer than 24 atoms, and only S8 is present in significant concentrations at all T. The T dependence of the density and the dependence of the polymerization fraction and degree on ΔE_b give estimates of the polymerization temperature T_f = 450 ± 20 K. © 2003 American Institute of Physics.
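
A thermally activated bond-breaking move of the kind described above can be sketched with a generic Metropolis acceptance step (a schematic illustration, not the paper's actual Monte Carlo scheme):

```python
# Generic Metropolis acceptance for breaking a covalent bond; the
# energy scale and temperature are taken from the ranges quoted above.
import math
import random

R_KCAL = 1.987e-3  # gas constant in kcal/(mol K)

def accept_bond_break(delta_E_b, T):
    """Accept a bond-breaking move with Boltzmann probability exp(-dE/RT).

    delta_E_b: bond-breaking energy in kcal/mol (e.g. 15-21 kcal/mol).
    T: temperature in K.
    """
    return random.random() < math.exp(-delta_E_b / (R_KCAL * T))

# At T = 600 K with delta_E_b = 18 kcal/mol the acceptance probability
# is about 2.7e-7: broken bonds are rare events, but over long runs
# they equilibrate the bond distribution, as the abstract describes.
```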

Relevance: 30.00%

Abstract:

We present a sample of normal Type Ia supernovae (SNe Ia) from the Nearby Supernova Factory data set with spectrophotometry at sufficiently late phases to estimate the ejected mass using the bolometric light curve. We measure ⁵⁶Ni masses from the peak bolometric luminosity, then compare the luminosity in the ⁵⁶Co-decay tail to the expected rate of radioactive energy release from ejecta of a given mass. We infer the ejected mass in a Bayesian context using a semi-analytic model of the ejecta, incorporating constraints from contemporary numerical models as priors on the density structure and distribution of ⁵⁶Ni throughout the ejecta. We find a strong correlation between ejected mass and light-curve decline rate, and consequently ⁵⁶Ni mass, with ejected masses in our data ranging from 0.9 to 1.4 M☉. Most fast-declining (SALT2 x₁ < -1) normal SNe Ia have significantly sub-Chandrasekhar ejected masses in our fiducial analysis.
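
The radioactive energy-release rate being compared against the tail luminosity is the standard ⁵⁶Ni → ⁵⁶Co → ⁵⁶Fe decay chain; a hedged sketch with textbook decay constants follows (this is not the paper's full semi-analytic ejecta model):

```python
# Standard radioactive energy-release rate for a SN Ia; decay times and
# specific heating rates are widely used literature values.
import numpy as np

TAU_NI = 8.8     # 56Ni e-folding time (days)
TAU_CO = 111.3   # 56Co e-folding time (days)
EPS_NI = 3.9e10  # erg/s per gram of 56Ni
EPS_CO = 6.8e9   # erg/s per gram of 56Co (approximate)

def radioactive_luminosity(t_days, m_ni_grams):
    """Instantaneous radioactive energy release at time t after explosion."""
    return m_ni_grams * (
        (EPS_NI - EPS_CO) * np.exp(-t_days / TAU_NI)
        + EPS_CO * np.exp(-t_days / TAU_CO)
    )

# Comparing the measured tail luminosity with this rate for trial
# ejecta masses (which control how much decay energy is trapped) is
# the essence of the mass inference sketched in the abstract.
```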

Relevance: 30.00%

Abstract:

We consider the local order estimation of nonlinear autoregressive systems with exogenous inputs (NARX), which may have different local dimensions at different points. By minimizing the kernel-based local information criterion introduced in this paper, strongly consistent estimates of the local orders of the NARX system at the points of interest are obtained. A modification of the criterion and a simple procedure for locating its minimum are also discussed. The theoretical results derived here are tested by simulation examples.
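
For intuition, a criterion of this general shape can be sketched as a kernel-weighted fit plus an order penalty; the exact functional form below is an assumption for illustration, not the paper's definition.

```python
# Illustrative kernel-weighted local information criterion: candidate
# orders (p, q) determine the regressor matrix; kernel weights localize
# the fit around the operating point of interest.
import numpy as np

def local_criterion(y, regressors, weights, n_params):
    """Weighted-RSS term plus a BIC-style complexity penalty (illustrative)."""
    sw = np.sqrt(weights)
    theta, *_ = np.linalg.lstsq(regressors * sw[:, None], y * sw, rcond=None)
    rss = np.sum(weights * (y - regressors @ theta) ** 2)
    n_eff = weights.sum()  # effective local sample size
    return n_eff * np.log(rss / n_eff) + n_params * np.log(n_eff)

# Minimizing this score over candidate (p, q) pairs selects the local
# order at the chosen point; different points may select different orders.
```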

Relevance: 30.00%

Abstract:

Many unit root and cointegration tests require an estimate of the spectral density function at frequency zero of some process. Kernel estimators based on weighted sums of autocovariances constructed using estimated residuals from an AR(1) regression are commonly used. However, it is known that with substantially correlated errors, the OLS estimate of the AR(1) parameter is severely biased. In this paper, we first show that this least-squares bias induces a significant increase in the bias and mean-squared error of kernel-based estimators.
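
A minimal sketch of the kernel estimator in question, using Bartlett weights (the residual construction and bandwidth choice in practice follow the AR(1) regression step discussed above):

```python
# Bartlett-weighted estimate of the spectral density at frequency zero
# (long-run variance): gamma_0 + 2 * sum_j (1 - j/(m+1)) * gamma_j.
import numpy as np

def bartlett_lrv(e, bandwidth):
    """Long-run variance of a series e from weighted autocovariances."""
    e = np.asarray(e) - np.mean(e)
    n = len(e)
    lrv = np.dot(e, e) / n  # gamma_0
    for j in range(1, bandwidth + 1):
        gamma_j = np.dot(e[j:], e[:-j]) / n
        lrv += 2.0 * (1.0 - j / (bandwidth + 1)) * gamma_j
    return lrv

# In the setting of the abstract, e would be residuals from an AR(1)
# regression; bias in the AR(1) estimate contaminates these residuals
# and hence every autocovariance gamma_j entering the sum.
```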

Relevance: 30.00%

Abstract:

The aim of this paper is to extend the method of approximate approximations to boundary value problems. This method was introduced by V. Maz'ya in 1991 and has been used until now for the approximation of smooth functions defined on the whole space and for the approximation of volume potentials. In the present paper we develop an approximation procedure for the solution of the interior Dirichlet problem for the Laplace equation in two dimensions using approximate approximations. The procedure is based on potential-theoretic considerations in connection with a boundary integral equation method and consists of three approximation steps as follows. In a first step the unknown source density in the potential representation of the solution is replaced by approximate approximations. In a second step the decay behavior of the generating functions is used to gain a suitable approximation for the potential kernel, and in a third step Nyström's method leads to a linear algebraic system for the approximate source density. For every step a convergence analysis is established and corresponding error estimates are given.
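
For orientation, the quasi-interpolants underlying approximate approximations have the following standard form (generic notation from the literature, not necessarily the paper's):

```latex
% Quasi-interpolant of approximate approximations: f is sampled on a
% grid of spacing h, \eta is a smooth, rapidly decaying generating
% function, and \mathcal{D} > 0 is a shape parameter.
\[
  (\mathcal{M}_{h}f)(x)
  = \mathcal{D}^{-n/2} \sum_{m \in \mathbb{Z}^{n}}
    f(hm)\, \eta\!\left(\frac{x - hm}{h\sqrt{\mathcal{D}}}\right).
\]
% The error does not vanish as h -> 0, but the saturation term can be
% pushed below any prescribed tolerance by choosing \mathcal{D} large,
% which is what makes multi-step procedures like the one above workable.
```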

Relevance: 30.00%

Abstract:

Models developed to identify the rates and origins of nutrient export from land to stream require an accurate assessment of the nutrient load present in the water body in order to calibrate model parameters and structure. These data are rarely available at a representative scale and in an appropriate chemical form except in research catchments. Observational errors associated with nutrient load estimates based on these data lead to a high degree of uncertainty in modelling and nutrient budgeting studies. Here, daily paired instantaneous P and flow data for 17 UK research catchments covering a total of 39 water years (WY) have been used to explore the nature and extent of the observational error associated with nutrient flux estimates based on partial fractions and infrequent sampling. The daily records were artificially decimated to create 7 stratified sampling records, 7 weekly records, and 30 monthly records from each WY and catchment. These were used to evaluate the impact of sampling frequency on load estimate uncertainty. The analysis underlines the high uncertainty of load estimates based on monthly data and individual P fractions rather than total P. Catchments with a high baseflow index and/or low population density were found to return a lower RMSE on load estimates when sampled infrequently than those with a low baseflow index and high population density. Catchment size was not shown to be important, though a limitation of this study is that daily records may fail to capture the full range of P export behaviour in smaller catchments with flashy hydrographs, leading to an underestimate of uncertainty in load estimates for such catchments. Further analysis of sub-daily records is needed to investigate this fully. Here, recommendations are given on load estimation methodologies for different catchment types sampled at different frequencies, and on the ways in which this analysis can be used to identify observational error and uncertainty for model calibration and nutrient budgeting studies. (c) 2006 Elsevier B.V. All rights reserved.
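
The decimation experiment can be illustrated with a minimal sketch (units, the load formula and the subsampling scheme here are assumptions for illustration, not the study's exact methodology):

```python
# Subsample a daily paired concentration/flow record to a monthly
# record and compare the resulting load estimate with the daily "truth".
import numpy as np

def annual_load(conc, flow):
    """Daily load sum: concentration (mg/l) * flow (l/s) * 86400 s/day."""
    return np.sum(conc * flow) * 86400.0

def monthly_subsample_load(conc, flow, day_of_month=15):
    """Flow-weighted mean concentration from ~12 samples, scaled by total flow."""
    idx = np.arange(day_of_month, len(conc), 30)
    c_fw = np.sum(conc[idx] * flow[idx]) / np.sum(flow[idx])
    return c_fw * np.sum(flow) * 86400.0

# Repeating the subsampling over many offsets, water years and
# catchments and taking the RMSE against annual_load() quantifies the
# uncertainty attached to infrequent sampling, as the abstract describes.
```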

Relevance: 30.00%

Abstract:

A unified approach is proposed for sparse kernel data modelling that includes regression and classification as well as probability density function estimation. The orthogonal-least-squares forward selection method based on the leave-one-out test criteria is presented within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic sparse kernel data modelling approach.
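
A simplified stand-in for the selection procedure named above is greedy forward selection over a kernel dictionary driven directly by the leave-one-out (PRESS) error; the sketch below uses the closed-form LOO residuals of an OLS fit rather than the paper's orthogonalised recursion.

```python
# Greedy forward selection of kernel regressors by leave-one-out error.
import numpy as np

def press(X, y):
    """Leave-one-out sum of squared errors for an OLS fit of y on X."""
    H = X @ np.linalg.pinv(X)          # hat matrix
    resid = y - H @ y
    return np.sum((resid / (1.0 - np.diag(H))) ** 2)

def forward_select(K, y, max_terms=10):
    """Greedily add dictionary columns of K while the LOO error decreases."""
    selected, best = [], np.inf
    for _ in range(max_terms):
        scores = [(press(K[:, selected + [j]], y), j)
                  for j in range(K.shape[1]) if j not in selected]
        score, j = min(scores)
        if score >= best:
            break                      # LOO error stopped improving
        best, selected = score, selected + [j]
    return selected

# K would be the kernel matrix between training points; the same LOO
# idea extends to the classification and density-estimation settings.
```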

Relevance: 30.00%

Abstract:

A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegativity and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed tunable-kernel model to construct very compact yet accurate density estimates.
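
A hedged sketch of a multiplicative nonnegative quadratic programming style update for the mixing weights follows; it is a simplified illustration assuming the quadratic and linear terms are elementwise nonnegative (true for Gaussian kernel Gram matrices), not the paper's exact iteration.

```python
# Multiplicative update for mixing weights under nonnegativity and
# unity constraints; zeroed weights drop their kernels from the mixture.
import numpy as np

def mnqp_weights(B, v, iters=200):
    """Approximately minimize 0.5*w'Bw - v'w  s.t.  w >= 0, sum(w) = 1.

    Assumes B and v are elementwise nonnegative, so the multiplicative
    step preserves nonnegativity by construction.
    """
    w = np.full(len(v), 1.0 / len(v))
    for _ in range(iters):
        w = w * v / (B @ w + 1e-12)   # multiplicative step keeps w >= 0
        w = w / w.sum()               # re-impose the unity constraint
    return w

# Weights driven to (numerically) zero prune their kernels, which is
# how this step can further reduce the model size.
```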

Relevance: 30.00%

Abstract:

A new sparse kernel probability density function (pdf) estimator based on a zero-norm constraint is constructed using the classical Parzen window (PW) estimate as the target function. The so-called zero-norm of the parameters is used in order to achieve enhanced model sparsity, and it is suggested to minimize an approximate function of the zero-norm. It is shown that under certain conditions, the kernel weights of the proposed pdf estimator based on the zero-norm approximation can be updated using the multiplicative nonnegative quadratic programming algorithm. Numerical examples are employed to demonstrate the efficacy of the proposed approach.
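
For reference, the classical Parzen window estimate used as the target function is the average of kernels centred on the training points; a standard Gaussian-kernel construction (not code from the paper) is:

```python
# Parzen window density estimate with isotropic Gaussian kernels.
import numpy as np

def parzen_window(x, data, h):
    """PW estimate at point x: mean of Gaussian kernels centred on data rows."""
    d = data.shape[1]
    norm = (2.0 * np.pi * h * h) ** (d / 2.0)   # Gaussian normalization
    sq = np.sum((x[None, :] - data) ** 2, axis=1)
    return np.mean(np.exp(-0.5 * sq / (h * h))) / norm

# The sparse estimator then seeks a few weighted kernels whose mixture
# stays close to this full-sample target while (approximately)
# minimizing the zero-norm of the weight vector.
```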

Relevance: 30.00%

Abstract:

The Taita Apalis Apalis fuscigularis (IUCN category: Critically Endangered) is a species endemic to south-eastern Kenya. We assessed population size and habitat use in the three forest sites in which it is known to occur (Ngangao, Chawia and Vuria, totalling 257 ha). The estimate of total population size, derived from distance sampling at 412 sample points, ranged from 310 to 654 individuals, with the northern section of the Ngangao fragment having 10-fold higher densities than Chawia (2.47-4.93 versus 0.22-0.41 birds ha⁻¹). Ngangao north alone hosted 50% of the global population of the species. The highly degraded Vuria fragment also had moderately high densities (1.63-3.72 birds ha⁻¹), suggesting that the species tolerates some human disturbance. Taita Apalis prefers vegetation with abundant climbers, but the predictive power of habitat use models was low, suggesting that habitat structure is not a primary cause of the low density of the species in Chawia. Protecting the subpopulation in the northern section of Ngangao is a priority, as is identifying the factors responsible for the low abundance in Chawia, because ameliorating conditions in this large fragment could substantially increase the population of Taita Apalis.

Relevance: 30.00%

Abstract:

Background: Affymetrix GeneChip arrays are widely used for transcriptomic studies in a diverse range of species. Each gene is represented on a GeneChip array by a probe-set, consisting of up to 16 probe-pairs. Signal intensities across probe-pairs within a probe-set vary in part due to different physical hybridisation characteristics of individual probes with their target labelled transcripts. We have previously developed a technique to study the transcriptomes of heterologous species based on hybridising genomic DNA (gDNA) to a GeneChip array designed for a different species, and subsequently using only those probes with good homology. Results: Here we have investigated the effects of hybridising homologous-species gDNA to study the transcriptomes of species for which the arrays have been designed. Genomic DNA from Arabidopsis thaliana and rice (Oryza sativa) was hybridised to the Affymetrix Arabidopsis ATH1 and Rice Genome GeneChip arrays respectively. Probe selection based on gDNA hybridisation intensity increased the number of genes identified as significantly differentially expressed in two published studies of Arabidopsis development, and optimised the analysis of technical replicates obtained from pooled samples of RNA from rice. Conclusion: This mixed physical and bioinformatics approach can be used to optimise estimates of gene expression when using GeneChip arrays.
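
The probe-selection idea can be sketched as a simple intensity filter (the threshold value, data layout and summary statistic below are assumptions for illustration, not the study's pipeline):

```python
# Keep probes that hybridise well to genomic DNA, then summarise each
# probe-set's RNA signal from the retained probes only.
import numpy as np

def select_probes(gdna_intensity, threshold=80.0):
    """Boolean mask of probes whose gDNA hybridisation exceeds a threshold."""
    return gdna_intensity > threshold

def probeset_signal(rna_intensity, mask):
    """Summarise one probe-set from its retained probes (median here)."""
    if not np.any(mask):
        return np.nan                  # no usable probes in this probe-set
    return np.median(rna_intensity[mask])

# Raising the threshold trades probe count against probe quality; per
# the abstract, selecting on gDNA intensity increased the number of
# genes detected as significantly differentially expressed.
```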