909 results for Kernel estimator
Abstract:
This thesis studies how to estimate the distribution of regionalized variables whose sample space and scale admit a Euclidean space structure. We apply the principle of working in coordinates: choose an orthonormal basis, do statistics on the coordinates of the data, and apply the output to the basis in order to recover a result in the original space. Applied to regionalized variables, this yields a single consistent approach that generalizes the well-known properties of kriging techniques to several sample spaces: real, positive, or compositional data (vectors of positive components with constant sum) are treated as particular cases. In this way, linear geostatistics is generalized, and solutions are offered to well-known problems of non-linear geostatistics, adapting the measure and the representativeness criteria (i.e., means) to the data at hand. The estimator for positive data coincides with a weighted geometric mean, equivalent to estimating the median, without any of the problems of classical lognormal kriging. The compositional case offers equivalent solutions and, in addition, allows the estimation of multinomial probability vectors. With a preliminary Bayesian approach, kriging of compositions also becomes a consistent alternative to indicator kriging. The latter technique is used to estimate probability functions of arbitrary variables, but often yields negative estimates, which the proposed alternative avoids. The usefulness of this set of techniques is tested by studying ammonia pollution at an automatic water-quality monitoring station in the Tordera basin; it is concluded that only with the proposed techniques can one detect the moments at which ammonium turns into ammonia at concentrations above the legal limit.
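The estimator for positive data described in the abstract above reduces to a weighted geometric mean, i.e. the exponential of the weighted mean of log-transformed observations. A minimal sketch, assuming the weights have already been obtained (in the thesis they would come from a kriging system; here they are simply given):

```python
import numpy as np

def geometric_mean_estimate(values, weights):
    """Weighted geometric mean: exp of the weighted mean of logs.

    `values` are positive observations; `weights` are assumed to be
    supplied externally (e.g. by a kriging system) and are normalised
    here to sum to one."""
    values = np.asarray(values, dtype=float)
    weights = np.asarray(weights, dtype=float)
    weights = weights / weights.sum()
    return float(np.exp(np.sum(weights * np.log(values))))
```

With equal weights this recovers the ordinary geometric mean, e.g. `geometric_mean_estimate([1.0, 4.0], [0.5, 0.5])` gives 2.0, the square root of the product.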
Abstract:
Knowledge of the potential energy surface (PES) has been essential in theoretical chemistry for discussing chemical reactivity as well as molecular structure and spectroscopy. In the field of chemical reactivity, we propose to continue developing new methodology within the framework of conceptual density functional theory. In particular, this thesis focuses on the following points: a) The number and nature of the stationary points of the PES can change radically with the level of calculation used, so that very high levels of calculation are required to be sure of their nature. Hardness is a measure of the resistance of a chemical system to a change in its electronic configuration, and according to the maximum hardness principle, wherever there is a minimum or a maximum of energy we will find a maximum or a minimum of hardness, respectively. By choosing a set of reactions that are problematic because of the presence of spurious stationary points, we have observed that hardness profiles are more independent of the basis set and the method used, and moreover always show the correct profile. b) We have developed new expressions based on the integration of hardness kernels to determine the global hardness of a molecule more accurately than the commonly used approach, which is based on the numerical calculation of the second derivative of the energy with respect to the number of electrons. c) We have studied the validity of the maximum hardness and minimum polarizability principles for asymmetric vibrations in aromatic systems. We have found that in these systems some vibrational modes violate these principles, and we have analyzed the relation between this violation and the pseudo-Jahn-Teller coupling effect.
In addition, we have postulated a set of very simple rules that make it possible to deduce whether a molecule will obey these principles without performing any prior calculation. All this information has been essential to determine exactly what causes compliance or violation of the MHP and MPP. d) Finally, we have carried out an expansion of the energy functional in terms of the number of electrons and the normal coordinates within the canonical ensemble. By comparing this expansion with the expansion of the energy in terms of the number of electrons and the external potential, we have recovered in a different way a set of known relations between well-established density functional reactivity descriptors, and we have been able to establish a set of new relations and new descriptors. Within the framework of molecular properties, we propose to generalize and improve the methodology for calculating the vibrational contribution (Pvib) to nonlinear optical (NLO) properties. Although Pvib has been neglected in most published theoretical studies of NLO properties, it has recently been shown that the Pvib of several organic polymers with large nonlinear optical responses is even larger than the electronic contribution. Taking Pvib into account is therefore essential in the design of new nonlinear optical materials used in computing, telecommunications, and laser technology. The main lines of this thesis on this topic are: a) We have calculated for the first time the higher-order terms of Pvib for several organic polymers, in order to assess their importance and the convergence of the Taylor series that define these vibrational contributions.
b) We have evaluated the electronic and vibrational contributions for a series of representative organic molecules using different methodologies, in order to determine the simplest way to calculate NLO properties with semiquantitative accuracy.
Abstract:
The aim of this work is to make visible the territorial gaps in income, poverty, and inequality in Ecuador over the period 2007-2013. The analysis relies on descriptive statistics to examine the evolution of per capita income; kernel density plots for the comparative analysis of the income distribution across territories; inferential statistics applied to changes in poverty incidence and the Gini coefficient; and, finally, econometric techniques from the "pro-poor" growth approach to identify the relationship between poverty levels, per capita income growth, and income redistribution in subnational territories. The results reveal a subnational territorial structure with marked gaps in income levels and distribution and an uneven incidence of poverty across the country; nevertheless, over the period 2007-2013 these gaps tended to narrow, and "pro-poor" economic growth was observed in most provinces between 2010 and 2013.
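The Gini coefficient used in the study above can be computed directly from a sample of incomes via the mean-absolute-difference formula G = Σ|x_i − x_j| / (2 n² μ). A minimal sketch (O(n²), fine for survey-sized samples):

```python
import numpy as np

def gini(incomes):
    """Gini coefficient of a sample (0 = perfect equality, 1 = maximal
    inequality), via the mean absolute difference of all pairs."""
    x = np.asarray(incomes, dtype=float)
    n = x.size
    mad = np.abs(x[:, None] - x[None, :]).sum()   # sum over all ordered pairs
    return float(mad / (2.0 * n * n * x.mean()))
```

For example, `gini([5, 5, 5, 5])` is 0 and `gini([0, 1])` is 0.5.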
Abstract:
The aim of this paper is essentially twofold: first, to describe the use of spherical nonparametric estimators for determining statistical diagnostic fields from ensembles of feature tracks on a global domain, and second, to report the application of these techniques to data derived from a modern general circulation model. New spherical kernel functions are introduced that are more efficiently computed than the traditional exponential kernels. The data-driven techniques of cross-validation to determine the amount of smoothing objectively, and adaptive smoothing to vary the smoothing locally, are also considered. Also introduced are techniques for combining seasonal statistical distributions to produce longer-term statistical distributions. Although all calculations are performed globally, only the results for the Northern Hemisphere winter (December, January, February) and Southern Hemisphere winter (June, July, August) cyclonic activity are presented, discussed, and compared with previous studies. Overall, results for the two hemispheric winters are in good agreement with previous studies, both for model-based studies and observational studies.
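The cross-validation idea mentioned in the abstract above can be illustrated with a simple 1-D Gaussian kernel density estimator (the paper's spherical kernels are not reproduced here): choose the bandwidth that maximises the leave-one-out log-likelihood of the data.

```python
import numpy as np

def loo_log_likelihood(data, h):
    """Leave-one-out log-likelihood of a 1-D Gaussian KDE with bandwidth h."""
    x = np.asarray(data, dtype=float)
    n = x.size
    d = x[:, None] - x[None, :]
    k = np.exp(-0.5 * (d / h) ** 2) / (h * np.sqrt(2 * np.pi))
    np.fill_diagonal(k, 0.0)                 # exclude each point from its own estimate
    return float(np.sum(np.log(k.sum(axis=1) / (n - 1))))

def select_bandwidth(data, candidates):
    """Pick the candidate bandwidth with the highest leave-one-out score."""
    return max(candidates, key=lambda h: loo_log_likelihood(data, h))
```

A very small bandwidth places spiky kernels on the data and scores poorly on held-out points; a very large one oversmooths; the cross-validated choice sits in between.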
Abstract:
An improved algorithm for the generation of gridded window brightness temperatures is presented. The primary data source is the International Satellite Cloud Climatology Project, level B3 data, covering the period from July 1983 to the present. The algorithm takes window brightness temperatures from multiple satellites, both geostationary and polar orbiting, which have already been navigated and normalized radiometrically to the National Oceanic and Atmospheric Administration's Advanced Very High Resolution Radiometer, and generates 3-hourly global images on a 0.5 degrees by 0.5 degrees latitude-longitude grid. The gridding uses a hierarchical scheme based on spherical kernel estimators. As part of the gridding procedure, the geostationary data are corrected for limb effects using a simple empirical correction to the radiances, from which the corrected temperatures are computed. This is in addition to the application of satellite zenith angle weighting to downweight limb pixels in preference to nearer-nadir pixels. The polar orbiter data are windowed on the target time with temporal weighting to account for the noncontemporaneous nature of the data. Large regions of missing data are interpolated from adjacent processed images using a form of motion-compensated interpolation based on the estimation of motion vectors using a hierarchical block matching scheme. Examples are shown of the various stages in the process. Also shown are examples of the usefulness of this type of data in GCM validation.
Abstract:
Radiotelemetry is an important tool used to aid the understanding and conservation of cryptic and rare birds. The two bird species of the family Picathartidae are little-known, secretive, forest-dwelling birds endemic to western and central Africa. In 2005, we conducted a radio-tracking trial of Grey-necked Picathartes Picathartes oreas in the Mbam Minkom Mountain Forest, southern Cameroon, using neck collar (two birds) and tail-mounted (four birds) transmitters to investigate the practicality of radio-tracking Picathartidae. Three birds with tail-mounted transmitters were successfully tracked; the fourth, though not relocated for radio tracking, was resighted the following breeding season. Two of these were breeding birds that continued to provision young during radio tracking. One neck-collared bird was found dead three days after transmitter attachment, and the other was neither relocated nor resighted. As mortality in one bird was potentially caused by the neck collar transmitter, we recommend tail-mounted transmitters in future radio-tracking studies of Picathartidae. Home ranges, shown using minimum convex polygon and kernel estimation methods, were generally small (<0.5 km²) and centred around breeding sites. A minimum of 60 fixes was found to be sufficient for home range estimation.
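Of the two home-range methods named in the abstract above, the minimum convex polygon is the simpler: the home range is the area of the convex hull of the location fixes. A minimal sketch, assuming the fixes are already projected to planar (x, y) coordinates (the kernel home-range method is not reproduced here):

```python
def _cross(o, a, b):
    """z-component of the cross product (a - o) x (b - o)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices counter-clockwise."""
    pts = sorted(map(tuple, points))
    def half(seq):
        out = []
        for p in seq:
            while len(out) >= 2 and _cross(out[-2], out[-1], p) <= 0:
                out.pop()
            out.append(p)
        return out
    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]

def mcp_area(points):
    """Minimum convex polygon home-range area via the shoelace formula."""
    h = convex_hull(points)
    n = len(h)
    return 0.5 * abs(sum(h[i][0] * h[(i + 1) % n][1] - h[(i + 1) % n][0] * h[i][1]
                         for i in range(n)))
```

For fixes at the corners of a unit square (plus any interior points), `mcp_area` returns 1.0.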
Abstract:
Rapid economic growth in China has resulted in substantially improved household incomes. Diets have also changed, with a movement away from traditional foods and towards animal products and processed foods. Yet micronutrient deficiencies, particularly for calcium and vitamin A, are still widespread in China. In this research we model the determinants of the intakes of these micronutrients using household panel data, asking particularly whether continuing income increases are likely to cause the deficiencies to be overcome. Nonparametric kernel regressions and random effects panel regression models are employed. The results show a statistically significant but relatively small positive income effect on both nutrient intakes. The local availability of milk is seen to have a strong positive effect on intakes of both micronutrients. Thus, rather than relying on increasing incomes to overcome deficiencies, supplementary government policies, such as school milk programmes, may be warranted.
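The nonparametric kernel regressions mentioned above can be sketched with the standard Nadaraya-Watson estimator (a common choice; the abstract does not specify the exact estimator used), which predicts a locally weighted average of the responses:

```python
import numpy as np

def nadaraya_watson(x_train, y_train, x_query, h):
    """Nadaraya-Watson kernel regression with a Gaussian kernel.

    Each prediction is a weighted average of y_train, with weights
    decaying with distance from the query point; h is the bandwidth."""
    xt = np.asarray(x_train, dtype=float)
    yt = np.asarray(y_train, dtype=float)
    xq = np.asarray(x_query, dtype=float)
    w = np.exp(-0.5 * ((xq[:, None] - xt[None, :]) / h) ** 2)
    return (w * yt).sum(axis=1) / w.sum(axis=1)
```

Because the prediction is a convex combination of the observed responses, a constant response is reproduced exactly for any bandwidth.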
Abstract:
Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in Sisson et al.'s (2007) partial rejection control version. While this method is based upon the theoretical works of Del Moral et al. (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo method of Cappe et al. (2004), and it includes an automatic scaling of the forward kernel. When applied to a population genetics example, it compares favourably with two other versions of the approximate algorithm.
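For orientation, the baseline that the sequential schemes above improve upon is plain rejection ABC: draw a parameter from the prior, simulate data, and keep the draw when a summary statistic lands close to the observed one. A minimal sketch for the mean of a N(theta, 1) model (this is the naive algorithm, not the corrected population Monte Carlo sampler proposed in the paper):

```python
import numpy as np

def rejection_abc(observed, n_draws, eps, rng):
    """Plain rejection ABC for theta in a N(theta, 1) model.

    Prior: theta ~ N(0, 10).  Summary statistic: the sample mean.
    A prior draw is accepted when its simulated summary is within
    eps of the observed summary."""
    obs_stat = np.mean(observed)
    n = len(observed)
    accepted = []
    for _ in range(n_draws):
        theta = rng.normal(0.0, np.sqrt(10.0))   # draw from the prior
        sim = rng.normal(theta, 1.0, size=n)     # simulate a dataset
        if abs(np.mean(sim) - obs_stat) < eps:
            accepted.append(theta)
    return np.array(accepted)
```

The accepted draws approximate the posterior; the sequential versions discussed in the abstract concentrate the proposals adaptively instead of sampling the prior blindly.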
Abstract:
Inverse problems for dynamical system models of cognitive processes comprise the determination of synaptic weight matrices or kernel functions for neural networks or neural/dynamic field models, respectively. We introduce dynamic cognitive modeling as a three tier top-down approach where cognitive processes are first described as algorithms that operate on complex symbolic data structures. Second, symbolic expressions and operations are represented by states and transformations in abstract vector spaces. Third, prescribed trajectories through representation space are implemented in neurodynamical systems. We discuss the Amari equation for a neural/dynamic field theory as a special case and show that the kernel construction problem is particularly ill-posed. We suggest a Tikhonov-Hebbian learning method as regularization technique and demonstrate its validity and robustness for basic examples of cognitive computations.
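The Tikhonov regularisation invoked above can be illustrated in its simplest linear form, ridge-regularised least squares: instead of solving the ill-posed system G w = y directly, minimise ||G w − y||² + λ||w||², whose solution is w = (GᵀG + λI)⁻¹Gᵀy. This is only the generic regularisation idea, not the Tikhonov-Hebbian learning rule of the paper:

```python
import numpy as np

def tikhonov_solve(G, y, lam):
    """Tikhonov-regularised least squares: argmin ||G w - y||^2 + lam ||w||^2."""
    n = G.shape[1]
    # Normal equations with a ridge term that tames ill-conditioning.
    return np.linalg.solve(G.T @ G + lam * np.eye(n), G.T @ y)
```

As λ → 0 the ordinary least-squares solution is recovered; increasing λ shrinks the solution norm, trading fidelity for stability.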
Abstract:
Using the classical Parzen window estimate as the target function, the kernel density estimation is formulated as a regression problem and the orthogonal forward regression technique is adopted to construct sparse kernel density estimates. The proposed algorithm incrementally minimises a leave-one-out test error score to select a sparse kernel model, and a local regularisation method is incorporated into the density construction process to further enforce sparsity. The kernel weights are finally updated using the multiplicative nonnegative quadratic programming algorithm, which has the ability to reduce the model size further. Except for the kernel width, the proposed algorithm has no other parameters that need tuning, and the user is not required to specify any additional criterion to terminate the density construction procedure. Two examples are used to demonstrate the ability of this regression-based approach to effectively construct a sparse kernel density estimate with comparable accuracy to that of the full-sample optimised Parzen window density estimate.
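The classical Parzen window estimate used as the target function above places one kernel on every data point. A minimal sketch with Gaussian kernels (the sparse orthogonal-forward-regression selection itself is not reproduced):

```python
import numpy as np

def parzen_window(data, x_grid, h):
    """Classical Parzen window density estimate with Gaussian kernels.

    One kernel of width h is centred on every data point; this is the
    full-sample density that sparse methods approximate with far fewer
    kernels."""
    d = np.asarray(data, dtype=float)
    x = np.asarray(x_grid, dtype=float)
    k = np.exp(-0.5 * ((x[:, None] - d[None, :]) / h) ** 2)
    return k.sum(axis=1) / (d.size * h * np.sqrt(2 * np.pi))
```

The estimate is nonnegative by construction and integrates to one, since each kernel is itself a normalised density.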
Abstract:
This paper addresses the numerical solution of the rendering equation in realistic image synthesis. The rendering equation is an integral equation describing light propagation in a scene according to a given illumination model; the illumination model used determines the kernel of the equation under consideration. Monte Carlo methods are now widely used for solving the rendering equation in order to create photorealistic images. In this work we consider the Monte Carlo solution of the rendering equation in the context of a parallel sampling scheme for the hemisphere. Our aim is to apply this sampling scheme to a stratified Monte Carlo integration method for solving the rendering equation in parallel. The integration domain of the rendering equation is a hemisphere. We divide the hemispherical domain into a number of equal sub-domains of orthogonal spherical triangles; this domain partitioning allows the rendering equation to be solved in parallel. It is known that the Neumann series represents the solution of the integral equation as an infinite sum of integrals. We approximate this sum with a desired truncation (systematic) error, obtaining a fixed number of iterations. The rendering equation is then solved iteratively using a Monte Carlo approach. At each iteration we evaluate multi-dimensional integrals using the uniform hemisphere partitioning scheme. An estimate of the rate of convergence is obtained using the stratified Monte Carlo method. This domain partitioning allows easy parallel implementation and improves the convergence of the Monte Carlo method. High-performance and Grid computing for the corresponding Monte Carlo scheme are discussed.
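The benefit of stratifying the hemispherical integration domain can be shown on a toy integrand. The sketch below stratifies the (cos θ, φ) parameter square into a jittered grid rather than the paper's orthogonal spherical triangles, which is a simpler partition with the same variance-reduction idea; with a uniform hemisphere density 1/(2π), the estimator is 2π times the sample mean of the integrand:

```python
import numpy as np

def stratified_hemisphere_estimate(f, n_side, rng):
    """Stratified Monte Carlo estimate of the integral of f(theta, phi)
    over the unit hemisphere: one jittered sample per cell of an
    n_side x n_side grid on the (cos(theta), phi) parameter square."""
    u_idx, v_idx = np.meshgrid(np.arange(n_side), np.arange(n_side))
    u = (u_idx + rng.random((n_side, n_side))) / n_side   # cos(theta) in [0, 1]
    v = (v_idx + rng.random((n_side, n_side))) / n_side   # phi / (2*pi)
    theta = np.arccos(u)
    phi = 2 * np.pi * v
    # Uniform hemisphere pdf is 1/(2*pi), so the estimator is 2*pi * mean(f).
    return float(2 * np.pi * np.mean(f(theta, phi)))

# The cosine integral over the hemisphere is exactly pi:
est = stratified_hemisphere_estimate(lambda t, p: np.cos(t),
                                     32, np.random.default_rng(1))
```

For the cosine integrand the stratified estimate with 32 x 32 cells lands within a few thousandths of the exact value π, far tighter than unstratified sampling with the same budget.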
Abstract:
A unified approach is proposed for data modelling that includes supervised regression and classification applications as well as unsupervised probability density function estimation. The orthogonal-least-squares regression based on the leave-one-out test criteria is formulated within this unified data-modelling framework to construct sparse kernel models that generalise well. Examples from regression, classification and density estimation applications are used to illustrate the effectiveness of this generic data-modelling approach for constructing parsimonious kernel models with excellent generalisation capability. (C) 2008 Elsevier B.V. All rights reserved.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts in achieving good model generalisation used in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. The developments in convex optimisation-based model construction algorithms, including the support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
Abstract:
We investigate the spectrum of certain integro-differential-delay equations (IDDEs) which arise naturally within spatially distributed, nonlocal, pattern formation problems. Our approach is based on the reformulation of the relevant dispersion relations with the use of the Lambert function. As a particular application of this approach, we consider the case of the Amari delay neural field equation which describes the local activity of a population of neurons taking into consideration the finite propagation speed of the electric signal. We show that if the kernel appearing in this equation is symmetric around some point a= 0 or consists of a sum of such terms, then the relevant dispersion relation yields spectra with an infinite number of branches, as opposed to finite sets of eigenvalues considered in previous works. Also, in earlier works the focus has been on the most rightward part of the spectrum and the possibility of an instability driven pattern formation. Here, we numerically survey the structure of the entire spectra and argue that a detailed knowledge of this structure is important within neurodynamical applications. Indeed, the Amari IDDE acts as a filter with the ability to recognise and respond whenever it is excited in such a way so as to resonate with one of its rightward modes, thereby amplifying such inputs and dampening others. Finally, we discuss how these results can be generalised to the case of systems of IDDEs.
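The Lambert-function reformulation above can be illustrated in its simplest setting, the scalar delay equation x'(t) = a x(t − τ), whose characteristic equation λ = a e^{−λτ} is solved by λ = W(aτ)/τ; the principal branch gives the rightmost eigenvalue, and the other branches of W generate an infinite family of spectrum branches of the kind discussed in the abstract. A minimal sketch (Newton iteration for the principal branch, real aτ ≥ 0 assumed; the field-equation dispersion relations themselves are not reproduced):

```python
import numpy as np

def lambert_w(z, tol=1e-12):
    """Principal branch of the Lambert W function (real z >= 0 suffices
    here), via Newton iteration on w*exp(w) = z."""
    w = np.log1p(z)                       # reasonable starting guess for z >= 0
    for _ in range(100):
        e = np.exp(w)
        step = (w * e - z) / (e * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

def delay_eigenvalue(a, tau):
    """Rightmost root of lambda = a*exp(-lambda*tau), i.e. W(a*tau)/tau."""
    return lambert_w(a * tau) / tau
```

Substituting the returned λ back into λ = a e^{−λτ} verifies the root; picking other branches of W (not implemented here) would enumerate the remaining branches of the spectrum.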
Abstract:
A generalized or tunable-kernel model is proposed for probability density function estimation based on an orthogonal forward regression procedure. Each stage of the density estimation process determines a tunable kernel, namely, its center vector and diagonal covariance matrix, by minimizing a leave-one-out test criterion. The kernel mixing weights of the constructed sparse density estimate are finally updated using the multiplicative nonnegative quadratic programming algorithm to ensure the nonnegative and unity constraints, and this weight-updating process additionally has the desired ability to further reduce the model size. The proposed tunable-kernel model has advantages, in terms of model generalization capability and model sparsity, over the standard fixed-kernel model that restricts kernel centers to the training data points and employs a single common kernel variance for every kernel. On the other hand, it does not optimize all the model parameters together and thus avoids the problems of high-dimensional ill-conditioned nonlinear optimization associated with the conventional finite mixture model. Several examples are included to demonstrate the ability of the proposed novel tunable-kernel model to effectively construct a very compact density estimate accurately.