887 results for Kernel polynomials
Abstract:
This thesis describes a search for very high energy (VHE) gamma-ray emission from the starburst galaxy IC 342. The analysis was based on data from the 2003-2004 observing season recorded using the Whipple 10-metre imaging atmospheric Cherenkov telescope located on Mount Hopkins in southern Arizona. IC 342 may be classed as a non-blazar type galaxy, and to date only a few such galaxies (M 87, Cen A, M 82 and NGC 253) have been detected as VHE gamma-ray sources. Analysis of approximately 24 hours of good quality IC 342 data, consisting entirely of ON/OFF observations, was carried out using a number of methods (standard Supercuts, optimised Supercuts, scaled optimised Supercuts and the multivariate kernel analysis technique). No evidence for TeV gamma-ray emission from IC 342 was found. The significance was 0.6 σ, with a nominal rate of 0.04 ± 0.06 gamma rays per minute. The flux upper limit above 600 GeV (at 99.9% confidence) was determined to be 5.5 × 10⁻⁸ m⁻² s⁻¹, corresponding to 8% of the Crab Nebula flux in the same energy range.
Abstract:
Kernel-Functions, Machine Learning, Least Squares, Speech Recognition, Classification, Regression
Abstract:
Speaker Recognition, Speaker Verification, Sparse Kernel Logistic Regression, Support Vector Machine
Abstract:
The authors studied the rainfall at Pesqueira (Pernambuco, Brasil) over a period of 48 years (1910 through 1957) by the method of orthogonal polynomials, trying degrees up to the fourth. None of them was significant, so it seems that no trend is present. The observed mean was 679.00 mm, with a standard error of the mean of 205.5 mm and a coefficient of variation of 30.3%. The 95% level of probability would include annual rainfalls from 263.9 mm up to 1094.1 mm.
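As a hedged illustration of the orthogonal-polynomial trend test described above, a minimal Python sketch might fit nested polynomials of degree 1 through 4 and F-test each added degree. The Pesqueira series is not reproduced here, so the rainfall data below are synthetic placeholders generated from the published mean and dispersion.

import numpy as np
from numpy.polynomial.polynomial import polyvander
from scipy import stats

rng = np.random.default_rng(0)
years = np.arange(1910, 1958)                       # the 48 seasons studied
rain = rng.normal(679.0, 205.5, size=years.size)    # placeholder rainfall (mm)

x = years - years.mean()                            # centre the regressor
for degree in range(1, 5):                          # degrees 1 through 4
    lo = polyvander(x, degree - 1)                  # nested design matrices
    hi = polyvander(x, degree)
    rss_lo = np.sum((rain - lo @ np.linalg.lstsq(lo, rain, rcond=None)[0]) ** 2)
    rss_hi = np.sum((rain - hi @ np.linalg.lstsq(hi, rain, rcond=None)[0]) ** 2)
    df = years.size - (degree + 1)                  # residual degrees of freedom
    F = (rss_lo - rss_hi) / (rss_hi / df)           # F-test for the added degree
    print(f"degree {degree}: F = {F:.2f}, p = {1 - stats.f.cdf(F, 1, df):.3f}")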
Abstract:
This paper studies, by means of orthogonal polynomials, the trends in the mean annual and mean monthly temperatures (in degrees Centigrade) at Campinas (State of São Paulo, Brasil) from 1890 up to 1956. Only 4 months were studied (January, April, July and October), taken as typical of their respective seasons. For the annual averages both the linear and quadratic components were significant, the regression equation being y = 19.95 - 0.0219x + 0.00057x², where y is the temperature (in degrees Centigrade) and x is the number of years after 1889; thus 1890 corresponds to x = 1, 1891 to x = 2, etc. The equation has a minimum for the year 1908, with a calculated mean y = 19.74. The means expected from the regression equation are given below.

Annual temperature means for Campinas (SP, Brasil) calculated by the regression equation:

Year   Annual mean (°C)
1890   19.93
1900   19.78
1908   19.74 (minimum)
1910   19.75
1920   19.82
1930   20.01
1940   20.32
1950   20.74
1956   21.05

The mean for the 67 years was 20.08°C, with standard error of the mean 0.08°C. For January the regression equation was y = 23.08 - 0.0661x + 0.00122x², with a minimum of 22.19°C for 1916; the 67-year average was 22.70°C, with standard error 0.12°C. For April no regression component was significant; the average was 20.42°C, with standard error 0.13°C. For July the regression equation was of first degree, y = 16.01 + 0.0140x; the 67-year average was 16.49°C, with standard error of the mean 0.14°C. Finally, for October the regression equation was y = 20.55 - 0.0362x + 0.00078x², with a minimum of 20.13°C for 1912; the average was 20.52°C, with standard error of the mean 0.14°C.
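As a quick arithmetic check of the quoted annual regression (a sketch using only the coefficients stated above), the vertex of the parabola reproduces both the 1908 minimum and the tabulated means:

# y = 19.95 - 0.0219 x + 0.00057 x^2, with x = years after 1889
a, b, c = 0.00057, -0.0219, 19.95

x_min = -b / (2 * a)                                # vertex of the parabola
print(1889 + x_min, c + b * x_min + a * x_min**2)   # ~1908, ~19.74 deg C

for year in (1890, 1900, 1908, 1910, 1920, 1930, 1940, 1950, 1956):
    x = year - 1889
    print(year, round(c + b * x + a * x**2, 2))     # matches the table above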
Abstract:
This paper shows how a high-level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. Parallelization is implemented so that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. A detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
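A minimal Python analogue of the idea (the paper itself targets GNU Octave, so this is only an assumed stand-in with a placeholder statistic) shows how a Monte Carlo loop parallelizes while the investigator writes only ordinary serial code for each replication:

import numpy as np
from multiprocessing import Pool

def one_replication(seed):
    """One Monte Carlo replication: here just the mean of a random sample."""
    rng = np.random.default_rng(seed)
    return rng.normal(size=100).mean()

if __name__ == "__main__":
    with Pool() as pool:                         # one worker per available core
        draws = pool.map(one_replication, range(10_000))
    print(np.mean(draws), np.std(draws))         # Monte Carlo summary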
Abstract:
This comment corrects errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation: the previously presented results do not maximize the log-likelihood function, and at the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may fall outside the interval [0,1]. We solve the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
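To see why a sign-changing kernel causes the problem, consider a hedged Python sketch (data, kernel, and bandwidth are illustrative choices, not those of Martins (2001)): a Nadaraya-Watson estimate of P(y = 1 | x) built from a fourth-order kernel, which is negative on part of its support, can leave [0, 1]:

import numpy as np

def k4(u):
    """Fourth-order Epanechnikov-type kernel; negative for |u| > sqrt(3/7)."""
    return np.where(np.abs(u) <= 1, (15 / 32) * (1 - u**2) * (3 - 7 * u**2), 0.0)

rng = np.random.default_rng(1)
x = rng.uniform(0, 1, 200)
y = (rng.uniform(size=200) < x).astype(float)    # true P(y=1|x) = x, inside [0,1]

h = 0.1
grid = np.linspace(0, 1, 101)
w = k4((grid[:, None] - x[None, :]) / h)         # kernel weights, some negative
p_hat = (w @ y) / w.sum(axis=1)                  # Nadaraya-Watson estimate
print(p_hat.min(), p_hat.max())                  # may fall below 0 or exceed 1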
Abstract:
We show that a particular free-by-cyclic group has CAT(0) dimension equal to 2, but CAT(-1) dimension equal to 3. We also classify the minimal proper 2-dimensional CAT(0) actions of this group; they correspond, up to scaling, to a 1-parameter family of locally CAT(0) piecewise Euclidean metrics on a fixed presentation complex for the group. This information is used to produce an infinite family of 2-dimensional hyperbolic groups, which do not act properly by isometries on any proper CAT(0) metric space of dimension 2. This family includes a free-by-cyclic group with free kernel of rank 6.
Abstract:
We construct generating trees with one, two, and three labels for some classes of permutations avoiding generalized patterns of length 3 and 4. These trees are built by adding, at each level, an entry to the right end of the permutation, which allows us to incorporate the adjacency condition on some entries in an occurrence of a generalized pattern. We use these trees to find functional equations for the generating functions enumerating these classes of permutations with respect to different parameters. In several cases we solve them using the kernel method and some ideas of Bousquet-Mélou [2]. We obtain refinements of known enumerative results and find new ones.
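For readers unfamiliar with the technique, here is a standard textbook instance of the kernel method (schematic, in the spirit of Bousquet-Mélou, and not taken from the paper itself). Dyck prefixes, with t marking steps and u marking final height, satisfy

\[
F(t,u) = 1 + tu\,F(t,u) + \frac{t}{u}\bigl(F(t,u) - F(t,0)\bigr)
\quad\Longrightarrow\quad
\Bigl(1 - tu - \frac{t}{u}\Bigr)F(t,u) = 1 - \frac{t}{u}\,F(t,0).
\]

Substituting the power-series root of the kernel, U(t) = (1 - \sqrt{1 - 4t^2})/(2t), annihilates the left-hand side and yields F(t,0) = U(t)/t = (1 - \sqrt{1 - 4t^2})/(2t^2), the Catalan generating function in t².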
Abstract:
Variational steepest descent approximation schemes for the modified Patlak-Keller-Segel equation with a logarithmic interaction kernel in any dimension are considered. We prove the convergence of the implicit Euler scheme, suitably interpolated in time and defined in terms of the Euclidean Wasserstein distance, associated to this equation for sub-critical masses. As a consequence, we recover the recent result on the global-in-time existence of weak solutions to the modified Patlak-Keller-Segel equation with logarithmic interaction kernel in any dimension in the sub-critical case. Moreover, we show how this method performs numerically in one dimension; in this particular case, the numerical scheme corresponds to a standard implicit Euler method for the pseudo-inverse of the cumulative distribution function. We demonstrate its capability to reproduce the blow-up of solutions for super-critical masses easily, without the need for mesh refinement.
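For orientation, the generic variational implicit Euler (steepest descent) step in the Wasserstein metric takes the schematic form below (standard notation, not a reproduction of the paper's scheme); in one dimension the squared Wasserstein distance reduces to an L² distance between pseudo-inverses of the distribution functions, which is what makes an implicit Euler method in the inverse-CDF variable natural:

\[
\rho^{n+1} \in \operatorname*{argmin}_{\rho}\Bigl\{\frac{1}{2\tau}\,W_2^2(\rho,\rho^n) + \mathcal{F}[\rho]\Bigr\},
\qquad
W_2^2(\rho,\nu) = \int_0^1 \bigl|X_\rho(s) - X_\nu(s)\bigr|^2\,ds \quad \text{(in one dimension)},
\]

where X_ρ denotes the pseudo-inverse of the cumulative distribution function of ρ, τ is the time step, and F is the free-energy functional driving the equation.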
Abstract:
This study examines the evolution of labor productivity across Spanish regions during the period from 1977 to 2002. By applying the kernel technique, we estimate the effects of the Transition process on labor productivity and its main sources. We find that Spanish regions experienced a major convergence process in labor productivity and in human capital over the 1977-1993 period. We also pinpoint a co-movement between labor productivity and human capital during the transition. Conversely, the dynamics of investment in physical capital seem unrelated to the transition dynamics of labor productivity. This lack of co-evolution can be regarded as one of the causes of the current slowdown in productivity. Classification-JEL: J24, N34, N940, O18, O52, R10
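A rough sketch of the kernel (distribution-dynamics) technique on synthetic data follows; the Spanish regional series are not reproduced, and the region count, dates, and dispersions below are placeholders. It compares kernel density estimates of relative productivity at two dates, with convergence showing up as mass concentrating around 1:

import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)
prod_1977 = rng.lognormal(sigma=0.35, size=17)   # dispersed start (17 regions)
prod_1993 = rng.lognormal(sigma=0.15, size=17)   # tighter spread: convergence

grid = np.linspace(0.3, 2.5, 200)
for label, p in (("1977", prod_1977), ("1993", prod_1993)):
    dens = gaussian_kde(p / p.mean())(grid)      # productivity relative to mean
    print(label, "mass near the mean:", dens[(grid > 0.9) & (grid < 1.1)].mean())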
Abstract:
Given a model that can be simulated, conditional moments at a trial parameter value can be calculated with high accuracy by applying kernel smoothing methods to a long simulation. With such conditional moments in hand, standard method of moments techniques can be used to estimate the parameter. Since conditional moments are calculated using kernel smoothing rather than simple averaging, it is not necessary that the model be simulable subject to the conditioning information that is used to define the moment conditions. For this reason, the proposed estimator is applicable to general dynamic latent variable models. Monte Carlo results show that the estimator performs well in comparison to other estimators that have been proposed for estimation of general DLV models.
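A compressed Python sketch of the idea, under strong simplifying assumptions (an AR(1) placeholder model, a single moment condition, a Gaussian kernel, and a crude grid search; none of these are the paper's choices): simulate a long path at a trial parameter, kernel-smooth the simulation to get the conditional moment at the observed conditioning points, and minimize a method-of-moments criterion:

import numpy as np

def simulate_ar1(theta, n, seed):
    rng = np.random.default_rng(seed)
    y = np.zeros(n)
    for t in range(1, n):
        y[t] = theta * y[t - 1] + rng.normal()
    return y

def nw_mean(x_sim, y_sim, x_eval, h=0.3):
    """Nadaraya-Watson estimate of E[y | x] at the points x_eval."""
    w = np.exp(-0.5 * ((x_eval[:, None] - x_sim[None, :]) / h) ** 2)
    return (w * y_sim).sum(axis=1) / w.sum(axis=1)

data = simulate_ar1(0.7, 500, seed=0)            # stand-in for observed data

def criterion(theta):
    sim = simulate_ar1(theta, 10_000, seed=1)    # long simulation at theta
    m_hat = nw_mean(sim[:-1], sim[1:], data[:-1])
    return np.mean(data[1:] - m_hat) ** 2        # single moment condition

print(min(np.linspace(0.4, 0.9, 26), key=criterion))   # crude grid search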
Abstract:
The goal of this project has been to generalize and integrate the functionality of two earlier projects that extended Magma's support for Hadamard matrices. We have implemented generic functions that allow new Hadamard matrices of any size to be constructed for each rank and kernel dimension, thereby enlarging Magma's database. We have also optimized the function that computes the kernel, and we have developed functions that compute the Symmetric Hamming Distance Enumerator (SH-DE) invariant proposed by Kai-Tai Fang and Gennian Ge, which is more sensitive for detecting non-equivalence of Hadamard matrices.
Abstract:
The algorithmic approach to data modelling has developed rapidly in recent years; in particular, methods based on data mining and machine learning have been used in a growing number of applications. These methods follow a data-driven methodology, aiming at providing the best possible generalization and predictive ability rather than concentrating on the properties of the data model. One of the most successful groups of such methods is known as Support Vector algorithms. Following fruitful developments in applying Support Vector algorithms to spatial data, this paper introduces an extension of the traditional support vector regression (SVR) algorithm that allows for the simultaneous modelling of environmental data at several spatial scales. The joint influence of environmental processes presenting different patterns at different scales is learned automatically from data, providing the optimum mixture of short- and large-scale models. The method is adaptive to the spatial scale of the data. With this advantage, it can provide efficient means to model local anomalies that may typically arise at an early phase of an environmental emergency. However, the proposed approach still requires some prior knowledge of the possible existence of such short-scale patterns; this is a possible limitation of the method for its implementation in early warning systems. The purpose of this paper is to present the multi-scale SVR model and to illustrate its use with an application to the mapping of Cs-137 activity, given the measurements taken in the region of Briansk following the Chernobyl accident.
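A hedged scikit-learn sketch of the multi-scale idea follows: a convex mixture of a short-range and a long-range RBF kernel passed to SVR as a custom kernel. The length scales, mixing weight, and data are illustrative, and this is not the paper's exact formulation:

import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

def two_scale_kernel(X, Y, w=0.5, l_short=0.1, l_long=2.0):
    """Convex mixture of RBF kernels at two length scales (still a valid PSD kernel)."""
    k_short = rbf_kernel(X, Y, gamma=1.0 / (2 * l_short**2))
    k_long = rbf_kernel(X, Y, gamma=1.0 / (2 * l_long**2))
    return w * k_short + (1 - w) * k_long

rng = np.random.default_rng(4)
X = rng.uniform(0, 10, (200, 2))                  # e.g. spatial coordinates
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=200)  # smooth signal plus noise

model = SVR(kernel=two_scale_kernel, C=10.0, epsilon=0.05).fit(X, y)
print(model.predict(X[:5]))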
Abstract:
This paper presents a semisupervised support vector machine (SVM) that integrates the information of both labeled and unlabeled pixels efficiently. The method's performance is illustrated on the relevant problem of very high resolution (VHR) image classification of urban areas. The SVM is trained with a linear combination of two kernels: a base kernel working only with labeled examples is deformed by a likelihood kernel encoding similarities between labeled and unlabeled examples. Results obtained on VHR multispectral and hyperspectral images show the relevance of the method in the context of urban image classification. Moreover, its simplicity and the few parameters involved make the method versatile and usable by inexperienced users.
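A loose sketch of the composite-kernel construction (the mixture model, weights, and data below are assumptions for illustration, not the paper's exact deformation): an RBF base kernel on the labeled pixels is combined with a likelihood-type kernel built from posteriors of a mixture model fitted on labeled and unlabeled pixels together, and the result is fed to an SVM as a precomputed kernel:

import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.svm import SVC
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(5)
X_lab = rng.normal(size=(60, 4))                 # labeled pixel features
y_lab = (X_lab[:, 0] > 0).astype(int)
X_unl = rng.normal(size=(500, 4))                # unlabeled pixel features

gmm = GaussianMixture(n_components=4, random_state=0).fit(np.vstack([X_lab, X_unl]))
P = gmm.predict_proba(X_lab)                     # posteriors of labeled pixels
K = 0.5 * rbf_kernel(X_lab, gamma=0.5) + 0.5 * (P @ P.T)   # composite Gram matrix

clf = SVC(kernel="precomputed").fit(K, y_lab)
print(clf.predict(K[:5]))                        # rows of K against training points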