909 results for "estimador Kernel"
Abstract:
We propose a criterion for the validity of semiclassical gravity (SCG) which is based on the stability of the solutions of SCG with respect to quantum metric fluctuations. We pay special attention to the two-point quantum correlation functions for the metric perturbations, which contain both intrinsic and induced fluctuations. These fluctuations can be described by the Einstein-Langevin equation obtained in the framework of stochastic gravity. Specifically, the Einstein-Langevin equation yields stochastic correlation functions for the metric perturbations which agree, to leading order in the large N limit, with the quantum correlation functions of the theory of gravity interacting with N matter fields. The homogeneous solutions of the Einstein-Langevin equation are equivalent to the solutions of the perturbed semiclassical equation, which describe the evolution of the expectation value of the quantum metric perturbations. The information on the intrinsic fluctuations, which are connected to the initial fluctuations of the metric perturbations, can also be retrieved entirely from the homogeneous solutions. However, the induced metric fluctuations proportional to the noise kernel can only be obtained from the Einstein-Langevin equation (the inhomogeneous term). These equations exhibit runaway solutions with exponential instabilities. A detailed discussion about different methods to deal with these instabilities is given. We illustrate our criterion by showing explicitly that flat space is stable and a description based on SCG is a valid approximation in that case.
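For orientation, the Einstein-Langevin equation referred to above is commonly written, schematically and following the stochastic gravity literature, as

    \[
      G_{ab}[g+h] \;=\; 8\pi G \,\big( \langle \hat T_{ab}[g+h] \rangle + \xi_{ab} \big),
      \qquad
      \langle \xi_{ab}(x)\, \xi_{cd}(y) \rangle_s \;=\; N_{abcd}(x,y),
    \]

where $h_{ab}$ is the metric perturbation, $\xi_{ab}$ is a Gaussian stochastic source, and $N_{abcd}$ is the noise kernel built from the symmetrized two-point function of the stress-energy fluctuations; setting $\xi_{ab} = 0$ recovers the homogeneous (perturbed semiclassical) equation discussed in the abstract.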
Abstract:
Uniform-price assignment games are introduced as those assignment markets with the core reduced to a segment. In these games, for all active agents, competitive prices are uniform, although products may be non-homogeneous. A characterization in terms of the assignment matrix is given. The only assignment markets where all submarkets are uniform are the Böhm-Bawerk horse markets. We prove that for uniform-price assignment games the kernel, or set of symmetrically pairwise-bargained allocations, either coincides with the core or reduces to the nucleolus.
Abstract:
In the domain of bilateral assignment games, an axiomatization of the nucleolus is presented as the unique solution satisfying consistency with respect to the derived game defined by Owen (1992) and monotonicity of the sectors' complaints with respect to their cardinality. As a consequence, we obtain a geometric characterization of the nucleolus by means of a bisection property stronger than the one satisfied by the points of the kernel (Maschler et al., 1979).
Abstract:
We propose an iterative procedure to minimize the sum-of-squares function which avoids the nonlinear nature of estimating the first-order moving average parameter and provides a closed form for the estimator. The asymptotic properties of the method are discussed and the consistency of the linear least squares estimator is proved for the invertible case. We perform various Monte Carlo experiments in order to compare the sample properties of the linear least squares estimator with its nonlinear counterpart for the conditional and unconditional cases. Some examples are also discussed.
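As a rough illustration of how such a linearized estimator can work, here is a minimal sketch, with assumed details (the paper's exact recursion may differ), in which the MA(1) innovations are rebuilt at each pass so that every estimation step is an ordinary least-squares regression:

    import numpy as np

    def ma1_linear_ls(y, n_iter=20):
        """Iterative linear LS estimate of theta in y_t = e_t + theta*e_{t-1}.

        A sketch, not the authors' exact procedure: the lagged innovations
        are rebuilt from the current theta, so each step is plain OLS.
        """
        n = len(y)
        e = np.zeros(n)
        theta = 0.0
        for _ in range(n_iter):
            # rebuild innovations given the current theta
            for t in range(n):
                e[t] = y[t] - theta * (e[t - 1] if t > 0 else 0.0)
            # closed-form OLS step: regress y_t on the rebuilt e_{t-1}
            x = e[:-1]
            theta = np.dot(y[1:], x) / np.dot(x, x)
        return theta

    rng = np.random.default_rng(0)
    eps = rng.standard_normal(500)
    y = eps + 0.6 * np.concatenate(([0.0], eps[:-1]))  # MA(1), theta = 0.6
    print(ma1_linear_ls(y))

For an invertible MA(1) this fixed-point iteration typically settles near the true theta without ever invoking a nonlinear optimizer.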
Abstract:
This paper extends the notion of symmetric bilateral bargaining equilibrium introduced by Rochford (1983) to multilateral assignment games. A payoff corresponding to a symmetric multilateral bargaining equilibrium (SMB) is a core imputation guaranteeing that every agent is in equilibrium with respect to a bargaining process among all agents, based on what each of them could receive (and use as a threat) in an optimal matching other than the one that has formed. We prove that, for multilateral assignment games, the set of SMB is always non-empty and that, unlike the bilateral case, it does not always coincide with the kernel (Davis and Maschler, 1965). Finally, we answer a question left open by Rochford (1982) by introducing a kernel-based set which, together with the core, allows us to characterize the set of SMB.
Abstract:
In this paper we propose an innovative methodology for automated profiling of illicit tablets by their surface granularity, a feature previously unexamined for this purpose. We make use of the tiny inconsistencies at the tablet surface, referred to as speckles, to generate a quantitative granularity profile of tablets. Euclidean distance is used as a measure of (dis)similarity between granularity profiles. The frequency of observed distances is then modelled by kernel density estimation in order to generalize the observations and to calculate likelihood ratios (LRs). The resulting LRs are used to evaluate the potential of granularity profiles to differentiate between same-batch and different-batch tablets. Furthermore, we use the LRs as a similarity metric to refine database queries. We are able to derive reliable LRs within a scope that represents the true evidential value of the granularity feature. These metrics are used to refine candidate hit-lists from a database containing physical features of illicit tablets. We observe improved or identical ranking of candidate tablets in 87.5% of cases when granularity is considered.
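A minimal sketch of the LR computation described above, with simulated distances standing in for real granularity-profile comparisons (the gamma distributions below are purely illustrative):

    import numpy as np
    from scipy.stats import gaussian_kde

    # Hypothetical distance samples; in the paper these are Euclidean
    # distances between tablet granularity profiles.
    rng = np.random.default_rng(1)
    d_same = rng.gamma(2.0, 0.5, 200)   # distances for same-batch pairs
    d_diff = rng.gamma(6.0, 0.7, 200)   # distances for different-batch pairs

    # Model each distance distribution with a kernel density estimate
    f_same = gaussian_kde(d_same)
    f_diff = gaussian_kde(d_diff)

    def likelihood_ratio(d):
        """LR = P(distance | same batch) / P(distance | different batches)."""
        return f_same(d) / f_diff(d)

    print(likelihood_ratio(np.array([1.0, 3.0])))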
Abstract:
This paper introduces a nonlinear measure of dependence between random variables in the context of remote sensing data analysis. The Hilbert-Schmidt Independence Criterion (HSIC) is a kernel method for evaluating statistical dependence. HSIC is based on computing the Hilbert-Schmidt norm of the cross-covariance operator of mapped samples in the corresponding Hilbert spaces. The HSIC empirical estimator is very easy to compute and has good theoretical and practical properties. We exploit the capabilities of HSIC to explain nonlinear dependences in two remote sensing problems: temperature estimation and chlorophyll concentration prediction from spectra. Results show that, when the relationship between random variables is nonlinear or when few data are available, the HSIC criterion outperforms other standard methods, such as the linear correlation or mutual information.
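The biased empirical HSIC estimator is indeed short to implement; the following sketch uses RBF kernels and the standard centering-matrix formulation (the bandwidths are illustrative):

    import numpy as np
    from scipy.spatial.distance import cdist

    def rbf_kernel(a, b, sigma):
        d2 = cdist(a, b, "sqeuclidean")
        return np.exp(-d2 / (2 * sigma ** 2))

    def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
        """Biased empirical HSIC: tr(K H L H) / (n-1)^2."""
        n = x.shape[0]
        K = rbf_kernel(x, x, sigma_x)
        L = rbf_kernel(y, y, sigma_y)
        H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
        return np.trace(K @ H @ L @ H) / (n - 1) ** 2

    rng = np.random.default_rng(0)
    x = rng.standard_normal((100, 1))
    y = np.sin(3 * x) + 0.1 * rng.standard_normal((100, 1))  # nonlinear link
    print(hsic(x, y))   # clearly larger than for independent x and y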
Abstract:
This paper presents multiple kernel learning (MKL) regression as an exploratory spatial data analysis and modelling tool. The MKL approach is introduced as an extension of support vector regression, where MKL uses dedicated kernels to divide a given task into sub-problems and to treat them separately in an effective way. It provides better interpretability to non-linear robust kernel regression at the cost of a more complex numerical optimization. In particular, we investigate the use of MKL as a tool that allows us to avoid using ad hoc topographic indices as covariates in statistical models in complex terrains. Instead, MKL learns these relationships from the data in a non-parametric fashion. A study on data simulated from real terrain features confirms the ability of MKL to enhance the interpretability of data-driven models and to aid feature selection without degrading predictive performance. Here we examine the stability of the MKL algorithm with respect to the number of training data samples and to the presence of noise. The results of a real case study are also presented, where MKL is able to exploit a large set of terrain features computed at multiple spatial scales when predicting mean wind speed in an Alpine region.
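The core idea, a combination of dedicated per-group kernels, can be sketched as follows; kernel ridge regression stands in for SVR to keep the example short, and the feature groups and weights are illustrative (full MKL would optimize the weights jointly with the regressor):

    import numpy as np
    from scipy.spatial.distance import cdist

    def rbf(a, b, sigma):
        return np.exp(-cdist(a, b, "sqeuclidean") / (2 * sigma ** 2))

    # One dedicated kernel per feature group (e.g. per terrain-feature family)
    rng = np.random.default_rng(0)
    X = rng.standard_normal((80, 4))
    y = np.sin(X[:, 0]) + 0.5 * X[:, 2] ** 2 + 0.1 * rng.standard_normal(80)

    groups = [[0, 1], [2, 3]]           # illustrative sub-problems
    weights = np.array([0.5, 0.5])      # d_m >= 0, sum to 1; fixed here,
                                        # learned jointly in true MKL
    K = sum(w * rbf(X[:, g], X[:, g], 1.0) for w, g in zip(weights, groups))

    # Kernel ridge regression with the combined kernel (SVR in the paper)
    lam = 1e-2
    alpha = np.linalg.solve(K + lam * np.eye(len(y)), y)
    y_hat = K @ alpha
    print(np.mean((y_hat - y) ** 2))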
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow us to obtain remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
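One ingredient of such a procedure, drawing hydraulic-conductivity values from a kernel density estimate conditioned on the collocated electrical conductivity, might look as follows (the data and the assumed in-situ relation are synthetic, and the authors' sequential simulation additionally conditions on previously simulated values):

    import numpy as np
    from scipy.stats import gaussian_kde

    # Collocated log-conductivity data (hypothetical):
    rng = np.random.default_rng(0)
    log_sigma = rng.normal(-1.5, 0.3, 300)
    log_K = 1.2 * log_sigma + rng.normal(0.0, 0.2, 300)  # assumed relation
    kde = gaussian_kde(np.vstack([log_K, log_sigma]))    # bivariate KDE

    def sample_K_given_sigma(sigma_obs, grid=np.linspace(-6, 2, 400)):
        """Draw log10 K from the KDE conditioned on an observed log10 sigma."""
        dens = kde(np.vstack([grid, np.full_like(grid, sigma_obs)]))
        p = dens / dens.sum()
        return rng.choice(grid, p=p)

    print(sample_K_given_sigma(-1.4))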
Abstract:
In this work, the calcium-induced aggregation of phosphatidylserine liposomes is probed by means of the analysis of the kinetics of this process as well as the aggregate morphology. This novel characterization of liposome aggregation involves the use of static and dynamic light-scattering techniques to obtain kinetic exponents and fractal dimensions. For salt concentrations larger than 5 mM, a diffusion-limited aggregation regime is observed and the Brownian kernel properly describes the time evolution of the diffusion coefficient. For slow kinetics, a slightly modified multiple-contact kernel is required. In either case, a time-evolution model based on the numerical resolution of Smoluchowski's equation is proposed in order to establish a theoretical description of the aggregating system. Such a model provides an alternative procedure for determining the dimerization constant, which might supply valuable information about interaction mechanisms between phospholipid vesicles.
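A minimal sketch of the kind of numerical resolution of Smoluchowski's coagulation equation referred to above, here with a constant kernel for brevity (the Brownian or modified multiple-contact kernels of the paper would replace Kmat):

    import numpy as np
    from scipy.integrate import solve_ivp

    M = 30                    # track aggregates of sizes 1..M
    Kmat = np.ones((M, M))    # constant kernel; illustrative only

    def smoluchowski(t, n):
        """dn_k/dt = (1/2) sum_{i+j=k} K_ij n_i n_j - n_k sum_j K_kj n_j."""
        dn = np.zeros_like(n)
        for k in range(M):
            gain = 0.5 * sum(Kmat[i, k - 1 - i] * n[i] * n[k - 1 - i]
                             for i in range(k))
            loss = n[k] * sum(Kmat[k, j] * n[j] for j in range(M))
            dn[k] = gain - loss
        return dn

    n0 = np.zeros(M); n0[0] = 1.0        # monomers only at t = 0
    sol = solve_ivp(smoluchowski, (0.0, 5.0), n0, rtol=1e-8)
    print(sol.y[:3, -1])                 # monomer, dimer, trimer at t = 5

Fitting the early-time dimer concentration from such a model is one way to extract the dimerization constant mentioned in the abstract.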
Abstract:
We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
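As a concrete example of such an embedding regularizer, the following sketch penalizes layer outputs so that neighbouring pairs are pulled together and non-neighbours are pushed beyond a margin; this is a common form of the loss, and the paper's exact variant may differ:

    import numpy as np

    def embedding_loss(Z, W, margin=1.0):
        """Embedding penalty on layer outputs Z (n x d) given a 0/1
        neighbourhood matrix W: neighbours attract, non-neighbours are
        repelled up to a margin. Added to the supervised training loss."""
        loss = 0.0
        n = Z.shape[0]
        for i in range(n):
            for j in range(i + 1, n):
                d = np.linalg.norm(Z[i] - Z[j])
                loss += d ** 2 if W[i, j] else max(0.0, margin - d) ** 2
        return loss

    rng = np.random.default_rng(0)
    Z = rng.standard_normal((10, 3))     # outputs of some layer
    W = rng.random((10, 10)) < 0.2       # hypothetical neighbourhood graph
    print(embedding_loss(Z, W))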
Abstract:
In this study we propose an evaluation of the angular effects altering the spectral response of the land-cover over multi-angle remote sensing image acquisitions. The shift in the statistical distribution of the pixels observed in an in-track sequence of WorldView-2 images is analyzed by means of a kernel-based measure of distance between probability distributions. Afterwards, the portability of supervised classifiers across the sequence is investigated by looking at the evolution of the classification accuracy with respect to the changing observation angle. In this context, the efficiency of various physically and statistically based preprocessing methods in obtaining angle-invariant data spaces is compared and possible synergies are discussed.
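The abstract does not name the kernel-based distance; the maximum mean discrepancy (MMD) is one standard such measure and serves here as an illustrative stand-in:

    import numpy as np
    from scipy.spatial.distance import cdist

    def mmd2(x, y, sigma=1.0):
        """Squared maximum mean discrepancy, RBF kernel (biased estimate)."""
        k = lambda a, b: np.exp(-cdist(a, b, "sqeuclidean") / (2 * sigma ** 2))
        return k(x, x).mean() + k(y, y).mean() - 2 * k(x, y).mean()

    # Hypothetical example: pixel spectra of one class at two view angles
    rng = np.random.default_rng(0)
    angle_a = rng.normal(0.30, 0.05, (500, 8))   # 8-band spectra, nadir
    angle_b = rng.normal(0.33, 0.05, (500, 8))   # same class, off-nadir
    print(mmd2(angle_a, angle_b))                # grows with the angular shift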
Abstract:
Bootstrap estimates of the arithmetic mean of the soybean genotypes 'Pickett', 'Peking', PI88788 and PI90763, and confidence intervals obtained both from normal theory and from the bootstrap distribution of this estimator, such as the bootstrap percentile and the BCa (bias-corrected and accelerated) intervals, for the parameter of differentiation from the standard susceptible cultivar Lee, are used to classify races of the soybean cyst nematode. The confidence intervals obtained from the bootstrap distribution were narrower and very similar to each other; the lower limit of the bootstrap percentile confidence interval was therefore taken as the reference level in the bootstrap distributions of the estimator of the arithmetic mean of the differentiating genotypes, making it possible to estimate the empirical probability of a positive or negative reaction and, consequently, to identify the most likely race under a given test.
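A minimal sketch of the percentile interval used as the reference level above (the reaction scores are hypothetical; BCa adds a bias and acceleration correction to these quantiles):

    import numpy as np

    def percentile_ci(x, n_boot=5000, alpha=0.05, seed=0):
        """Bootstrap distribution of the mean and its percentile CI."""
        rng = np.random.default_rng(seed)
        boots = np.array([rng.choice(x, size=len(x), replace=True).mean()
                          for _ in range(n_boot)])
        lo, hi = np.quantile(boots, [alpha / 2, 1 - alpha / 2])
        return lo, hi, boots

    # Hypothetical genotype reaction scores; the lower percentile limit is
    # the reference level used to classify races in the abstract.
    scores = np.array([12.1, 9.8, 11.4, 10.9, 13.0, 9.5, 11.7])
    lo, hi, _ = percentile_ci(scores)
    print(lo, hi)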
Abstract:
The objective of this work was to compare, through simulation, the variance component estimates produced by the ANOVA (analysis of variance), ML (maximum likelihood), REML (restricted maximum likelihood) and MIVQUE(0) (minimum variance quadratic unbiased estimator) methods in the augmented block design with additional treatments (progenies) from one or more origins (crosses). The results indicated the relative superiority of the MIVQUE(0) method. The ANOVA method, although unbiased, gave the least precise estimates. The maximum likelihood methods, especially ML, tended to underestimate the experimental error variance ($\hat{\sigma}^2_e$) and to overestimate the genotypic variances ($\hat{\sigma}^2_g$), particularly in the smaller experiments (n < 120 observations). When the progenies came from a single cross, REML practically lost these biases in the larger experiments with ratios $\sigma^2_g/\sigma^2_e > 0.5$. However, the method produced the worst estimates of the genotypic variances when the progenies came from different crosses and the experiments were small.
Abstract:
It is estimated that around 230 people die each year due to radon (222Rn) exposure in Switzerland. 222Rn occurs mainly in closed environments like buildings and originates primarily from the subjacent ground. It therefore depends strongly on geology and shows substantial regional variations. Correct identification of these regional variations would allow a substantial reduction of the population's 222Rn exposure through appropriate construction of new buildings and mitigation of existing ones. Prediction of indoor 222Rn concentrations (IRC) and identification of 222Rn-prone areas is, however, difficult, since IRC depend on a variety of variables such as building characteristics, meteorology, geology and anthropogenic factors. The present work aims at developing predictive models and understanding IRC in Switzerland, taking into account as much information as possible in order to minimize prediction uncertainty. The predictive maps will be used as a decision-support tool for 222Rn risk management. The construction of these models is based on different data-driven statistical methods, in combination with geographical information systems (GIS). In a first phase we performed univariate analysis of IRC for different variables, namely the detector type, building category, foundation, year of construction, the average outdoor temperature during measurement, altitude and lithology. All variables showed significant associations with IRC. Buildings constructed after 1900 showed significantly lower IRC compared to earlier constructions. We observed a further drop of IRC after 1970. In addition, we found an association of IRC with altitude. With regard to lithology, we observed the lowest IRC in sedimentary rocks (excluding carbonates) and sediments and the highest IRC in the Jura carbonates and igneous rock. The IRC data were systematically analyzed for potential bias due to spatially unbalanced sampling of measurements. In order to facilitate the modeling and the interpretation of the influence of geology on IRC, we developed an algorithm based on k-medoids clustering which makes it possible to define coherent geological classes in terms of IRC. We performed a soil gas 222Rn concentration (SRC) measurement campaign in order to determine the predictive power of SRC with respect to IRC. We found that the usefulness of SRC for IRC prediction is limited. The second part of the project was dedicated to predictive mapping of IRC using models which take into account the multidimensionality of the process of 222Rn entry into buildings. We used kernel regression and ensemble regression trees for this purpose. We could explain up to 33% of the variance of the log-transformed IRC across Switzerland. This is a good performance compared to previous attempts at IRC modeling in Switzerland. As predictor variables we considered geographical coordinates, altitude, outdoor temperature, building type, foundation, year of construction and detector type. Ensemble regression trees like random forests allow one to determine the role of each IRC predictor in a multidimensional setting. We found spatial information like geology, altitude and coordinates to have stronger influences on IRC than building-related variables such as foundation type, building type and year of construction. Based on kernel estimation we developed an approach to determine the local probability of IRC exceeding 300 Bq/m3. In addition, we developed a confidence index that provides an estimate of the uncertainty of the map.
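One plausible reading of such a kernel-based exceedance map, a Nadaraya-Watson estimate of P(IRC > 300 Bq/m3) over space, sketched here with synthetic data (not the authors' exact model, which also conditions on building variables):

    import numpy as np
    from scipy.spatial.distance import cdist

    def exceedance_probability(coords, irc, grid, threshold=300.0,
                               bandwidth=5.0):
        """Nadaraya-Watson kernel estimate of P(IRC > threshold) on a grid."""
        w = np.exp(-cdist(grid, coords, "sqeuclidean") / (2 * bandwidth ** 2))
        exceed = (irc > threshold).astype(float)
        return (w @ exceed) / w.sum(axis=1)

    rng = np.random.default_rng(0)
    coords = rng.uniform(0, 100, (400, 2))   # measurement locations, km
    irc = rng.lognormal(4.5, 0.8, 400)       # indoor radon values, Bq/m3
    grid = np.array([[10.0, 10.0], [50.0, 50.0]])
    print(exceedance_probability(coords, irc, grid))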
All methods allow easy creation of tailor-made maps for different building characteristics. Our work is an essential step towards a 222Rn risk assessment which simultaneously accounts for different architectural situations as well as geological and geographical conditions. For the communication of 222Rn hazard to the population we recommend using the probability map based on kernel estimation. The communication of 222Rn hazard could, for example, be implemented via a web interface where users specify the characteristics and coordinates of their home in order to obtain the probability of exceeding a given IRC, with a corresponding index of confidence. Taking into account the health effects of 222Rn, our results have the potential to substantially improve the estimation of the effective dose from 222Rn delivered to the Swiss population.