139 results for kernel estimator
Abstract:
We prove upper pointwise estimates for the Bergman kernel of the weighted Fock space of entire functions in $L^{2}(e^{-2\phi}) $ where $\phi$ is a subharmonic function with $\Delta\phi$ a doubling measure. We derive estimates for the canonical solution operator to the inhomogeneous Cauchy-Riemann equation and we characterize the compactness of this operator in terms of $\Delta\phi$.
Abstract:
Background Nowadays, combining different sources of information to improve the available biological knowledge is a challenge in bioinformatics. One of the most powerful approaches for integrating heterogeneous data types is kernel-based methods. Kernel-based data integration consists of two basic steps: first, a suitable kernel is chosen for each data set; second, the kernels from the different data sources are combined to give a complete representation of the available data for a given statistical task. Results We analyze the integration of data from several sources of information using kernel PCA, from the point of view of reducing dimensionality. Moreover, we improve the interpretability of kernel PCA by adding to the plot the representation of the input variables that belong to each dataset. In particular, for each input variable or linear combination of input variables, we can represent the local direction of maximum growth, which allows us to identify the samples with higher or lower values of the variables analyzed. Conclusions The integration of different datasets and the simultaneous representation of samples and variables give us a better understanding of the biological knowledge.
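The two-step recipe above (one kernel per data source, then a combination) can be sketched with an unweighted kernel sum followed by kernel PCA. This is an illustrative sketch, not the paper's implementation: the RBF kernels, the equal-weight sum, and the toy data are all assumptions.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances
    sq = np.sum(X**2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2 * X @ X.T
    return np.exp(-gamma * d2)

def center_kernel(K):
    # double-centering, required before eigendecomposition in kernel PCA
    n = K.shape[0]
    one = np.ones((n, n)) / n
    return K - one @ K - K @ one + one @ K @ one

def kernel_pca(K, n_components=2):
    Kc = center_kernel(K)
    vals, vecs = np.linalg.eigh(Kc)          # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    vals, vecs = vals[idx], vecs[:, idx]
    # sample projections onto the leading principal directions
    return vecs * np.sqrt(np.maximum(vals, 0))

# two hypothetical "data sources" describing the same 6 samples
rng = np.random.default_rng(0)
X1 = rng.normal(size=(6, 4))
X2 = rng.normal(size=(6, 3))
# step 1: one kernel per source; step 2: combine (unweighted sum here)
K = rbf_kernel(X1) + rbf_kernel(X2)
Z = kernel_pca(K, n_components=2)
print(Z.shape)  # (6, 2)
```

The sum of two valid kernels is itself a valid kernel, which is what makes this kind of combination well-defined.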
Abstract:
Let $Q$ be a suitable real function on $C$. An $n$-Fekete set corresponding to $Q$ is a subset $\{Z_{n1},\dotsc,Z_{nn}\}$ of $C$ which maximizes the expression $\prod_{i<j}^{n}\cdots$
Abstract:
Difference-in-Differences (DiD) methods are increasingly used to analyze the impact of mergers on pricing and other market equilibrium outcomes. Using evidence from an exogenous merger between two retail gasoline companies in a specific market in Spain, this paper shows that concentration did not lead to a price increase. In fact, the conjectural variation model attributes this result to the existence of a collusive agreement both before and after the merger, rather than to efficiency gains. This result may explain empirical evidence reported in the literature according to which mergers between firms do not have significant effects on prices.
Abstract:
We propose a new kernel estimation of the cumulative distribution function based on transformation and on bias reducing techniques. We derive the optimal bandwidth that minimises the asymptotic integrated mean squared error. The simulation results show that our proposed kernel estimation improves alternative approaches when the variable has an extreme value distribution with heavy tail and the sample size is small.
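For contrast with the transformation- and bias-corrected estimator proposed above, the plain kernel estimator of the CDF can be sketched as follows. The Gaussian kernel and the rule-of-thumb bandwidth below are illustrative assumptions, not the optimal bandwidth derived in the paper.

```python
import math
import random

def gaussian_cdf(z):
    # standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def kernel_cdf(sample, h):
    """Plain kernel estimate of the CDF:
    F_hat(x) = (1/n) * sum_i Phi((x - X_i) / h)."""
    n = len(sample)
    def F_hat(x):
        return sum(gaussian_cdf((x - xi) / h) for xi in sample) / n
    return F_hat

random.seed(1)
data = [random.gauss(0.0, 1.0) for _ in range(200)]
# rule-of-thumb bandwidth with n^(-1/3) scaling (an assumption; the
# paper instead derives the bandwidth minimising the asymptotic
# integrated mean squared error)
scale = (sum(x * x for x in data) / len(data)) ** 0.5
h = 1.06 * scale * len(data) ** (-1 / 3)
F = kernel_cdf(data, h)
print(F(0.0))
```

Each data point contributes a smoothed step $\Phi((x-X_i)/h)$, so $\hat F$ is a smooth, monotone alternative to the empirical CDF.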
Abstract:
This paper shows how a high level matrix programming language may be used to perform Monte Carlo simulation, bootstrapping, estimation by maximum likelihood and GMM, and kernel regression in parallel on symmetric multiprocessor computers or clusters of workstations. The implementation of parallelization is done in a way such that an investigator may use the programs without any knowledge of parallel programming. A bootable CD that allows rapid creation of a cluster for parallel computing is introduced. Examples show that parallelization can lead to important reductions in computational time. Detailed discussion of how the Monte Carlo problem was parallelized is included as an example for learning to write parallel programs for Octave.
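The key design point, that the simulation is split into independent and separately seeded chunks so a serial map can be swapped for a parallel map without the investigator touching the simulation code, can be sketched in Python. The paper's examples are written for Octave; this port and the toy Monte Carlo estimate of pi are assumptions.

```python
import random

def mc_chunk(args):
    """One independent Monte Carlo chunk: count quarter-circle hits
    from `n` uniform draws using its own seed, so chunks can run on
    separate processors without sharing random-number state."""
    seed, n = args
    rng = random.Random(seed)
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 <= 1.0)

def mc_pi(total_draws, n_chunks, mapper=map):
    # the investigator only swaps `map` for a parallel map
    # (e.g. multiprocessing.Pool().map); the simulation code
    # in mc_chunk is unchanged
    per = total_draws // n_chunks
    tasks = [(seed, per) for seed in range(n_chunks)]
    hits = sum(mapper(mc_chunk, tasks))
    return 4.0 * hits / (per * n_chunks)

print(round(mc_pi(200_000, 8), 2))
```

Because the chunks share no state, the speed-up from parallelization is close to linear in the number of processors for this kind of embarrassingly parallel problem.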
Abstract:
This comment corrects the errors in the estimation process that appear in Martins (2001). The first error is in the parametric probit estimation, as the previously presented results do not maximize the log-likelihood function. In the global maximum more variables become significant. As for the semiparametric estimation method, the kernel function used in Martins (2001) can take on both positive and negative values, which implies that the participation probability estimates may be outside the interval [0,1]. We have solved the problem by applying local smoothing in the kernel estimation, as suggested by Klein and Spady (1993).
Abstract:
The Hausman (1978) test is based on the vector of differences of two estimators. It is usually assumed that one of the estimators is fully efficient, since this simplifies calculation of the test statistic. However, this assumption limits the applicability of the test, since widely used estimators such as the generalized method of moments (GMM) or quasi maximum likelihood (QML) are often not fully efficient. This paper shows that the test may easily be implemented, using well-known methods, when neither estimator is efficient. To illustrate, we present both simulation results as well as empirical results for utilization of health care services.
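One well-known route when neither estimator is efficient is to estimate the covariance matrix of the difference directly, for example by a paired bootstrap, instead of relying on the efficient-estimator simplification $V_1 - V_2$. The sketch below illustrates that idea; the two estimators, the toy data, and the bootstrap choice are assumptions, not taken from the paper.

```python
import numpy as np

def hausman_general(b1_fn, b2_fn, data, n_boot=200, seed=0):
    """Hausman-type statistic when neither estimator is efficient:
    the covariance of the difference is estimated by a paired
    (nonparametric) bootstrap rather than V1 - V2."""
    rng = np.random.default_rng(seed)
    n = data.shape[0]
    d_hat = b1_fn(data) - b2_fn(data)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)       # resample rows with replacement
        boot = data[idx]
        diffs.append(b1_fn(boot) - b2_fn(boot))
    V = np.atleast_2d(np.cov(np.array(diffs), rowvar=False))
    # quadratic form; compare to chi-square with rank(V) d.o.f.
    return float(d_hat @ np.linalg.pinv(V) @ d_hat)

# toy check: OLS slope vs. a consistent but inefficient slope estimator
rng = np.random.default_rng(1)
x = rng.normal(size=300)
y = 2.0 * x + rng.normal(size=300)
data = np.column_stack([y, x])
ols = lambda d: np.array([np.sum(d[:, 0] * d[:, 1]) / np.sum(d[:, 1] ** 2)])
alt = lambda d: np.array([np.mean(d[:, 0] * np.sign(d[:, 1]))
                          / np.mean(np.abs(d[:, 1]))])
H = hausman_general(ols, alt, data)
print(H >= 0.0)  # True
```

Under the null that both estimators are consistent, the statistic is approximately chi-square distributed, so small values (as in this correctly specified toy model) do not reject.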
Abstract:
We show that a particular free-by-cyclic group has CAT(0) dimension equal to 2, but CAT(-1) dimension equal to 3. We also classify the minimal proper 2-dimensional CAT(0) actions of this group; they correspond, up to scaling, to a 1-parameter family of locally CAT(0) piecewise Euclidean metrics on a fixed presentation complex for the group. This information is used to produce an infinite family of 2-dimensional hyperbolic groups, which do not act properly by isometries on any proper CAT(0) metric space of dimension 2. This family includes a free-by-cyclic group with free kernel of rank 6.
Abstract:
We construct generating trees with one, two, and three labels for some classes of permutations avoiding generalized patterns of length 3 and 4. These trees are built by adding at each level an entry to the right end of the permutation, which allows us to incorporate the adjacency condition about some entries in an occurrence of a generalized pattern. We use these trees to find functional equations for the generating functions enumerating these classes of permutations with respect to different parameters. In several cases we solve them using the kernel method and some ideas of Bousquet-Mélou [2]. We obtain refinements of known enumerative results and find new ones.
Abstract:
Variational steepest descent approximation schemes for the modified Patlak-Keller-Segel equation with a logarithmic interaction kernel in any dimension are considered. We prove the convergence of the suitably time-interpolated implicit Euler scheme, defined in terms of the Euclidean Wasserstein distance, associated to this equation for sub-critical masses. As a consequence, we recover the recent result on the global-in-time existence of weak solutions to the modified Patlak-Keller-Segel equation for the logarithmic interaction kernel in any dimension in the sub-critical case. Moreover, we show how this method performs numerically in one dimension. In this particular case, the numerical scheme corresponds to a standard implicit Euler method for the pseudo-inverse of the cumulative distribution function. We demonstrate its ability to reproduce the blow-up of solutions for super-critical masses easily and without the need for mesh refinement.
Abstract:
This study examines the evolution of labor productivity across Spanish regions from 1977 to 2002. Applying the kernel technique, we estimate the effects of the Transition process on labor productivity and its main sources. We find that Spanish regions experienced a major convergence process in labor productivity and in human capital over the 1977-1993 period. We also pinpoint the existence of a transitional co-movement between labor productivity and human capital. Conversely, the dynamics of investment in physical capital seem unrelated to the transition dynamics of labor productivity. This lack of co-evolution can be regarded as one of the causes of the current slowdown in productivity. JEL classification: J24, N34, N94, O18, O52, R10
Abstract:
The aim of this project was to generalize and integrate the functionality of two earlier projects that extended Magma's support for Hadamard matrices. We implemented generic functions that construct new Hadamard matrices of any size for each rank and kernel dimension, thereby extending Magma's database. We also optimized the function that computes the kernel, and developed functions that compute the Symmetric Hamming Distance Enumerator (SH-DE) invariant proposed by Kai-Tai Fang and Gennian Ge, which is more sensitive for detecting the non-equivalence of Hadamard matrices.
Abstract:
The goal of this project is to represent nonlinear binary codes efficiently on a computer. To this end, we developed functions that represent a binary code by means of its super dual. We improved the function that computes the kernel of a binary code, implemented in projects from previous years. We also developed a software package for the MAGMA interpreter; this package provides tools for handling binary codes that are not necessarily linear.
Abstract:
This project documents the work carried out in the analysis, design, and implementation of a tool for the Institut Municipal d'Hisenda of the Ajuntament de Barcelona that meets the needs of an information system capable of managing the case files generated by a set of taxes, the penalties they entail, and the documentation required to communicate with citizens. The application was built with web technologies: a core written in Java on the Struts MVC framework, running on a WebSphere application server with an Oracle database engine.