74 results for cashew nut kernel
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
See the abstract at the beginning of the document in the attached file.
Abstract:
In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. Indeed, they gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex $S^D$ jointly with the normal distribution on $\mathbb{R}^{D-1}$. However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
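As an illustration of the ilr-plus-full-bandwidth-matrix approach described above, the sketch below maps simulated compositions to ilr coordinates and fits a Gaussian kernel density there. It is a minimal stand-in, not the authors' implementation: scipy's `gaussian_kde`, whose default Scott-rule bandwidth scales the full sample covariance, plays the role of the full bandwidth matrix, and the Dirichlet data are hypothetical.

```python
import numpy as np
from scipy.stats import gaussian_kde

def ilr_transform(X):
    """Isometric log-ratio transform from the simplex S^D to R^(D-1),
    using the standard balances between the geometric mean of the first
    j parts and part j+1. Rows of X are strictly positive compositions."""
    n, D = X.shape
    L = np.log(X)
    Y = np.empty((n, D - 1))
    for j in range(1, D):
        gm = L[:, :j].mean(axis=1)                    # log geometric mean of first j parts
        Y[:, j - 1] = np.sqrt(j / (j + 1.0)) * (gm - L[:, j])
    return Y

rng = np.random.default_rng(0)
comps = rng.dirichlet([4.0, 2.0, 1.0], size=200)      # hypothetical 3-part compositions

# Normal kernel with a full bandwidth matrix on the ilr coordinates.
kde = gaussian_kde(ilr_transform(comps).T)
print(kde(ilr_transform(comps[:5]).T))                # density at five sample points
```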
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates, where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
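To make the object of study concrete, here is one common data-based variable-bandwidth rule: an Abramson-style sample-point estimator, where each observation receives a bandwidth inversely related to a pilot density estimate. It is only a representative member of the class considered above; the paper's negative result applies to any such data-based choice.

```python
import numpy as np
from scipy.stats import norm, gaussian_kde

def variable_bandwidth_kde(x_eval, data, alpha=0.5):
    """Sample-point variable-bandwidth KDE with a Gaussian kernel:
    f_hat(x) = (1/n) * sum_i K((x - X_i)/h_i) / h_i, where
    h_i = h0 * (pilot(X_i)/g)^(-alpha), g the geometric mean of the pilot values."""
    pilot = gaussian_kde(data)(data)               # fixed-bandwidth pilot density
    g = np.exp(np.mean(np.log(pilot)))             # geometric-mean normalization
    h0 = 1.06 * data.std() * len(data) ** -0.2     # rule-of-thumb global factor
    h = h0 * (pilot / g) ** -alpha                 # one bandwidth per data point
    u = (x_eval[:, None] - data[None, :]) / h
    return (norm.pdf(u) / h).mean(axis=1)

rng = np.random.default_rng(1)
data = rng.standard_t(df=3, size=500)              # heavy-tailed sample
dens = variable_bandwidth_kde(np.linspace(-6, 6, 201), data)
```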
Abstract:
In the fixed design regression model, additional weights are considered for the Nadaraya–Watson and Gasser–Müller kernel estimators. We study their asymptotic behavior and the relationships between the new and classical estimators. For a simple family of weights, and considering the IMSE as global loss criterion, we show some possible theoretical advantages. An empirical study illustrates the performance of the weighted estimators in finite samples.
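A minimal sketch of a weighted Nadaraya-Watson fit in the fixed-design setting follows. The abstract does not spell out the paper's weight family, so the `w` argument here is a generic per-observation weight (uniform weights recover the classical estimator), and the data are simulated.

```python
import numpy as np

def weighted_nadaraya_watson(x_eval, x, y, h, w=None):
    """Nadaraya-Watson regression with per-observation weights w_i:
    m_hat(x) = sum_i w_i K((x - x_i)/h) y_i / sum_i w_i K((x - x_i)/h)."""
    if w is None:
        w = np.ones_like(y)                              # classical estimator
    k = np.exp(-0.5 * ((x_eval[:, None] - x[None, :]) / h) ** 2)  # Gaussian kernel
    return (k * (w * y)).sum(axis=1) / (k * w).sum(axis=1)

# Fixed design: equispaced x, noisy sine response.
x = np.linspace(0.0, 1.0, 100)
rng = np.random.default_rng(2)
y = np.sin(2 * np.pi * x) + 0.2 * rng.standard_normal(100)
fit = weighted_nadaraya_watson(np.linspace(0.0, 1.0, 50), x, y, h=0.05)
```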
Abstract:
We present a transformed kernel estimator that is suitable for heavy-tailed distributions. Using a transformation based on the Beta probability distribution, the choice of the bandwidth parameter is very straightforward. An application to insurance data is presented, and we show how to compute the Value at Risk.
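The following sketch conveys the general transformed-kernel idea behind this abstract: map heavy-tailed losses to (0,1) through a parametric CDF, run a standard kernel estimate there, and read the Value at Risk off the back-transformed distribution. The lognormal pilot transformation is a placeholder assumption; the paper's Beta-based transformation and its direct bandwidth choice are not reproduced here.

```python
import numpy as np
from scipy.stats import gaussian_kde, lognorm

def transformed_kde_var(losses, level=0.99, grid_size=1000):
    """Transformed kernel estimate of a heavy-tailed loss distribution
    and its Value at Risk. The lognormal transform is illustrative only."""
    params = lognorm.fit(losses, floc=0)        # pilot parametric fit
    u = lognorm.cdf(losses, *params)            # transformed sample in (0,1)
    kde = gaussian_kde(u)
    grid = np.linspace(1e-6, 1 - 1e-6, grid_size)
    cdf = np.array([kde.integrate_box_1d(0.0, g) for g in grid])
    cdf /= cdf[-1]                              # renormalize boundary spill-over
    u_q = grid[np.searchsorted(cdf, level)]     # quantile on the (0,1) scale
    return lognorm.ppf(u_q, *params)            # back-transform: VaR at `level`

rng = np.random.default_rng(3)
claims = rng.pareto(2.5, size=1000) + 1.0       # synthetic heavy-tailed claims
print(transformed_kde_var(claims, level=0.99))
```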
Abstract:
In this paper we show that in the class of assignment games with dominant diagonal (Solymosi and Raghavan, 2001), the Thompson allocation (which coincides with the tau value) is the only core point that is maximal with respect to the Lorenz dominance relation; moreover, it coincides with the solution of Dutta and Ray (1989), also known as the egalitarian solution. Secondly, by means of a condition stronger than dominant diagonal, we introduce a new class of assignment games in which each agent obtains with his or her optimal partner at least double what he or she obtains with any other partner. For these assignment games with 2-dominant diagonal, the Thompson allocation is the only point of the kernel, and hence the nucleolus.
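Formally, for an assignment matrix $A=(a_{ij})$ arranged so that the main diagonal is an optimal matching, one natural reading of the 2-dominant diagonal condition sketched above is the following (the precise formulation in the paper may differ):

```latex
% 2-dominant diagonal: each agent's value with the optimal partner is at
% least double his or her value with any other partner (a strengthening
% of the dominant diagonal condition of Solymosi and Raghavan, 2001).
\[
  a_{ii} \;\geq\; 2\,a_{ij}
  \qquad\text{and}\qquad
  a_{ii} \;\geq\; 2\,a_{ji}
  \qquad\text{for all } j \neq i .
\]
```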
Abstract:
Background: Nowadays, combining the different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful methods for integrating heterogeneous data types is the family of kernel-based methods. Kernel-based data integration approaches consist of two basic steps: first, the right kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results: We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to any dataset. In particular, for each input variable or linear combination of input variables, we can represent the direction of maximum growth locally, which allows us to identify those samples with higher or lower values of the variables analyzed. Conclusions: The integration of different datasets and the simultaneous representation of samples and variables together give us a better understanding of biological knowledge.
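A compact sketch of the two-step integration described above, using scikit-learn: one kernel per data source, an unweighted sum as one simple combination rule, then kernel PCA on the precomputed combined Gram matrix. The two data blocks are synthetic stand-ins, and the paper's kernel weighting and its representation of input variables on the plot are not reproduced.

```python
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.metrics.pairwise import rbf_kernel, linear_kernel

rng = np.random.default_rng(4)
expr = rng.standard_normal((100, 500))   # source 1: expression-like data
clin = rng.standard_normal((100, 10))    # source 2: clinical-like covariates

# Step 1: one kernel per source; step 2: combine (unweighted sum here).
K = rbf_kernel(expr) + linear_kernel(clin)

# Kernel PCA on the combined Gram matrix for a 2-D sample representation.
kpca = KernelPCA(n_components=2, kernel="precomputed")
scores = kpca.fit_transform(K)
```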
Abstract:
We prove upper pointwise estimates for the Bergman kernel of the weighted Fock space of entire functions in $L^{2}(e^{-2\phi}) $ where $\phi$ is a subharmonic function with $\Delta\phi$ a doubling measure. We derive estimates for the canonical solution operator to the inhomogeneous Cauchy-Riemann equation and we characterize the compactness of this operator in terms of $\Delta\phi$.
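For reference, the objects in this abstract can be written out as follows, in one standard normalization (conventions for the weight $e^{-2\phi}$ vary across papers):

```latex
% Weighted Fock space and the reproducing property of its Bergman kernel K.
\[
  \mathcal{F}^{2}_{\phi} \;=\; \Big\{\, f \text{ entire} \;:\;
    \|f\|^{2}_{\phi} = \int_{\mathbb{C}} |f(z)|^{2}\, e^{-2\phi(z)}\, dm(z) < \infty \,\Big\},
\]
\[
  f(z) \;=\; \int_{\mathbb{C}} f(w)\, \overline{K(w,z)}\, e^{-2\phi(w)}\, dm(w),
  \qquad f \in \mathcal{F}^{2}_{\phi}.
\]
```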
Abstract:
Let $Q$ be a suitable real function on $C$. An $n$-Fekete set corresponding to $Q$ is a subset $\{Z_{n1},\dotsc,Z_{nn}\}$ of $C$ which maximizes the expression $\prod_{i<j}^{n} |Z_{ni}-Z_{nj}|^{2}\, e^{-n\left(Q(Z_{ni})+Q(Z_{nj})\right)}$.
Abstract:
We propose a new kernel estimation of the cumulative distribution function based on transformation and on bias-reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimation improves on alternative approaches when the variable has an extreme value distribution with a heavy tail and the sample size is small.
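As a baseline for the estimator discussed above, the plain smoothed kernel estimate of the CDF (no transformation, no bias reduction) is sketched below; the paper's refinements and its AMISE-optimal bandwidth are not reproduced, and the rule-of-thumb bandwidth is an assumption.

```python
import numpy as np
from scipy.stats import norm

def kernel_cdf(x_eval, data, h):
    """Smoothed kernel estimate of the CDF with a Gaussian kernel:
    F_hat(x) = (1/n) * sum_i Phi((x - X_i) / h)."""
    return norm.cdf((x_eval[:, None] - data[None, :]) / h).mean(axis=1)

rng = np.random.default_rng(5)
sample = rng.gumbel(size=50)                    # small extreme-value sample
h = 1.06 * sample.std() * len(sample) ** -0.2   # rule-of-thumb bandwidth
F_hat = kernel_cdf(np.linspace(-3.0, 6.0, 100), sample, h)
```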
Abstract:
This paper shows how a high level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. The implementation of parallelization is done in a way such that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. Detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
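The embarrassingly parallel structure that makes this work is easy to convey. The sketch below uses Python's multiprocessing pool rather than the paper's Octave setup, with a toy statistic in place of a real estimator; as in the paper's design, the caller needs no knowledge of how the replications are distributed.

```python
import numpy as np
from multiprocessing import Pool

def one_replication(seed):
    """One Monte Carlo replication: the mean of n=100 standard normals
    (a toy statistic standing in for a real estimator)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal(100).mean()

if __name__ == "__main__":
    # Independent replications are farmed out to worker processes.
    with Pool() as pool:
        draws = pool.map(one_replication, range(10_000))
    print(np.var(draws))  # MC estimate of Var(sample mean) = 1/100
```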
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. At the global maximum, more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may lie outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
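The point about the kernel's sign can be made concrete: with a strictly positive kernel, the locally smoothed probability estimate is a convex combination of the 0/1 outcomes and therefore always lies in [0,1]. The sketch below uses a Gaussian kernel on simulated single-index data; it illustrates this property only, not Klein and Spady's full estimator.

```python
import numpy as np

def smoothed_participation_prob(v_eval, v, y, h):
    """Nadaraya-Watson estimate of P(y = 1 | index v) with a strictly
    positive Gaussian kernel: a convex combination of the 0/1 outcomes,
    so the estimates cannot leave [0, 1]."""
    k = np.exp(-0.5 * ((v_eval[:, None] - v[None, :]) / h) ** 2)
    return (k * y).sum(axis=1) / k.sum(axis=1)

rng = np.random.default_rng(6)
v = rng.standard_normal(300)                            # single-index values
y = (v + 0.5 * rng.standard_normal(300) > 0).astype(float)
p_hat = smoothed_participation_prob(np.linspace(-2, 2, 41), v, y, h=0.3)
assert (p_hat >= 0).all() and (p_hat <= 1).all()
```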