22 results for Kernels
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In the statistics of stochastic processes and random fields, a moment function or a cumulant of an estimator of the correlation function or of the spectral density often contains an integral involving a cyclic product of kernels. In this work this class of integrals is defined and investigated, and a Young-Hölder inequality is proved that makes it possible to study the asymptotic behaviour of these integrals in the situation where the kernels depend on a parameter. An application to the problem of estimating the response function of a Volterra system is considered.
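Schematically, with a cyclic index convention of our own (x_{n+1} ≡ x_1; the paper fixes the precise spaces and normalizations), the class of integrals in question has the form

    \[ I(K_1, \dots, K_n) = \int \prod_{j=1}^{n} K_j(x_j - x_{j+1}) \, dx_2 \cdots dx_n , \qquad x_{n+1} \equiv x_1 , \]

and a Young-Hölder-type inequality bounds |I| by a product of L^{p_j} norms of the kernels, for exponents satisfying a Hölder-type balance condition; bounds of this form are what make the asymptotics tractable when the K_j depend on a parameter.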
Abstract:
Report based on a research stay with the Proteus project at New York University between April and June 2007. Clustering techniques can help reduce the amount of supervision needed to acquire patterns for Information Extraction. However, algorithms suited to documents are required, and such algorithms need adequate similarity measures between patterns. Kernels can offer a solution to these problems, but unsupervised learning requires cleverer strategies than supervised learning in order to incorporate larger amounts of information. This report proposes and evaluates several kernels over patterns. Initially, kernels over a restricted family of patterns are studied; kernels already used in supervised Information Extraction tasks are then applied. Because clustering performance degrades as irrelevant information is added, the kernels are simplified and strategies are sought for incorporating semantics selectively. Finally, the effect of clustering the semantic knowledge itself, as a step prior to pattern clustering, is studied. The various strategies are evaluated on document and pattern clustering tasks using real data.
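By way of illustration, a hypothetical sketch in C (not one of the kernels evaluated in the report): a normalized word-overlap kernel between two patterns represented as bags of tokens, the simplest instance of a pattern similarity usable by kernel-based clustering.

    #include <stdio.h>
    #include <string.h>
    #include <math.h>

    /* Count tokens shared by two patterns (bags of words). */
    static int overlap(const char *a[], int na, const char *b[], int nb) {
        int count = 0;
        for (int i = 0; i < na; ++i)
            for (int j = 0; j < nb; ++j)
                if (strcmp(a[i], b[j]) == 0) { ++count; break; }
        return count;
    }

    /* Normalized overlap kernel: for duplicate-free patterns this is
       k(a,b) / sqrt(k(a,a) * k(b,b)), so values stay in [0,1]. */
    static double pattern_kernel(const char *a[], int na, const char *b[], int nb) {
        return overlap(a, na, b, nb) / sqrt((double)na * (double)nb);
    }

    int main(void) {
        const char *p1[] = { "PERSON", "was", "appointed", "CEO" };
        const char *p2[] = { "PERSON", "was", "named", "CEO" };
        printf("k = %.3f\n", pattern_kernel(p1, 4, p2, 4));
        return 0;
    }

Self-normalization (dividing by the geometric mean of the self-similarities) keeps the kernel values bounded, which matters when they feed a clustering algorithm directly.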
Abstract:
See the abstract at the beginning of the document in the attached file.
Abstract:
See the abstract at the beginning of the document in the attached file.
Abstract:
Most integrodifference models of biological invasions are based on the nonoverlapping-generations approximation. However, the effect of multiple reproduction events (overlapping generations) on the front speed can be very important, especially for species with a long life span. This approximation has previously been relaxed only in one-dimensional space, although almost all biological invasions take place in two dimensions. Here we present a model that takes into account the overlapping-generations effect or, more generally, the stage structure of the population, and we analyze the main differences with the corresponding nonoverlapping-generations results.
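For orientation, the baseline nonoverlapping-generations model is the standard two-dimensional integrodifference equation (textbook form, in our notation):

    \[ n_{t+1}(x) = \int_{\mathbb{R}^2} k(x - y)\, f\!\big(n_t(y)\big)\, dy , \]

where n_t is the population density, k the dispersal kernel and f the growth function. A stage-structured extension replaces the scalar n_t by a vector of stage densities and the pair (f, k) by a matrix of stage-transition growth and dispersal terms, so that adults reproducing over several seasons (overlapping generations) contribute to the front speed.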
Abstract:
Bimodal dispersal probability distributions with characteristic distances differing by several orders of magnitude have been derived and favorably compared to observations by Nathan [Nature (London) 418, 409 (2002)]. For such bimodal kernels, we show that two-dimensional molecular dynamics computer simulations are unable to yield accurate front speeds. Analytically, the usual continuous-space random walks (CSRWs) are applied to two dimensions. We also introduce discrete-space random walks and use them to check the CSRW results (because of the inefficiency of the numerical simulations). The physical results reported are shown to predict front speeds high enough to possibly explain Reid's paradox of rapid tree migration. We also show that, for a time-ordered evolution equation, fronts are always slower in two dimensions than in one dimension, and that this difference is important both for unimodal and for bimodal kernels.
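The bimodal kernels in question superpose a short-range and a long-range dispersal mode; schematically (our notation, not necessarily the papers'),

    \[ \phi(\Delta) = p\,\phi_1(\Delta) + (1 - p)\,\phi_2(\Delta) , \]

where \phi_1 and \phi_2 have characteristic distances differing by several orders of magnitude and p weighs the short-range mode. The rare long-distance events are sampled poorly by particle simulations, which is consistent with the inaccuracy of the molecular dynamics front speeds reported above.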
Abstract:
We point out that using the heat kernel on a cone to compute the first quantum correction to the entropy of Rindler space does not yield the correct temperature dependence. In order to obtain the physics at arbitrary temperature one must compute the heat kernel in a geometry with different topology (without a conical singularity). This is done in two ways, which are shown to agree with computations performed by other methods. Also, we discuss the ambiguities in the regularization procedure and their physical consequences.
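For context, the temperature dependence in question enters through the standard thermodynamic relations (generic formulas, not the paper's specific computation):

    \[ F(\beta) = -\frac{1}{\beta}\,\ln Z(\beta), \qquad S = \beta^2\,\frac{\partial F}{\partial \beta} , \]

so obtaining the entropy at arbitrary temperature requires knowing \ln Z as a function of the inverse temperature \beta, which is what the computation in the geometry without a conical singularity is meant to supply.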
Abstract:
The relationship between yield, carbon isotope discrimination and ash content in mature kernels was examined for a set of 13 barley (Hordeum vulgare) cultivars. Plants were grown under rainfed and well-irrigated conditions in a Mediterranean area. Water deficit caused a decrease in both grain yield and carbon isotope discrimination (Δ). Yield was positively related to Δ and negatively related to ash content across genotypes within each treatment. However, whereas the correlation between yield and Δ was higher for the set of genotypes under well-irrigated (r=0.70, P<0.01) than under rainfed (r=0.42) conditions, the opposite occurred when yield and ash content were related, i.e. r=-0.38 under well-irrigated and r=-0.73 (P<0.01) under rainfed conditions. Carbon isotope discrimination and ash content together accounted for almost 60% of the variation in yield under both conditions. There was no significant relationship (r=-0.15) between carbon isotope discrimination and ash content in well-irrigated plants, whereas in rainfed plants this relationship, although significant (r=-0.54, P<0.05), was weakly negative. The concentration of several mineral elements was measured in the same kernels. The mineral that correlated best with ash content, yield and Δ was K. For yield and Δ, although the relationship with K followed the same pattern as the relationship with ash content, the correlation coefficients were lower. Thus, mineral accumulation in mature kernels seems to be independent of transpiration efficiency; in fact, grain filling takes place through the phloem pathway. The ash content of kernels is proposed as a complementary criterion, in addition to kernel Δ, for assessing genotype differences in barley grain yield under rainfed conditions.
Abstract:
We show that L2-bounded singular integrals in metric spaces with respect to general measures and kernels converge weakly. This implies a kind of average convergence almost everywhere. For measures with zero density we prove the almost everywhere existence of principal values.
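For reference, the principal values whose almost-everywhere existence is proved are limits of truncated singular integrals; in generic notation (ours),

    \[ \mathrm{p.v.}\, Tf(x) = \lim_{\varepsilon \to 0} \int_{d(x,y) > \varepsilon} K(x,y)\, f(y)\, d\mu(y) , \]

where d is the metric, \mu the underlying measure and K the kernel; the truncated operators T_\varepsilon are the objects shown to converge weakly (see the paper for the precise mode of convergence).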
Abstract:
Owing to the large number of transistors per mm² found in today's conventional GPUs, in recent years these devices have been used for general-purpose computing, since they offer high performance for parallel computation. This project implements the sparse matrix-vector product on OpenCL. The first chapters review the theoretical background needed to understand the problem. We then cover the fundamentals of OpenCL and of the hardware on which the developed libraries run. The following chapter describes the code of the kernels and their data flow. Finally, the software is evaluated through comparisons against the CPU.
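For concreteness, a minimal CSR (compressed sparse row) matrix-vector kernel in OpenCL C, with one work-item per row. This is an illustrative sketch of the style of kernel the project describes, not its actual code; the buffer and argument names are our own.

    /* y = A*x for A in CSR format: vals/col_idx hold the nonzeros,
       row_ptr[r]..row_ptr[r+1] delimits the nonzeros of row r. */
    __kernel void spmv_csr(__global const float *vals,
                           __global const int   *col_idx,
                           __global const int   *row_ptr,
                           __global const float *x,
                           __global float       *y,
                           const int             num_rows)
    {
        int row = get_global_id(0);   /* one work-item per matrix row */
        if (row < num_rows) {
            float sum = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                sum += vals[j] * x[col_idx[j]];
            y[row] = sum;
        }
    }

One work-item per row is the simplest mapping; assigning a group of work-items per row with a local reduction usually performs better on GPUs when rows carry many nonzeros.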
Abstract:
Technological limitations and power constraints are resulting in high-performance parallel computing architectures that are based on large numbers of high-core-count processors. Commercially available processors are now at 8 and 16 cores and experimental platforms, such as the many-core Intel Single-chip Cloud Computer (SCC) platform, provide much higher core counts. These trends are presenting new sets of challenges to HPC applications including programming complexity and the need for extreme energy efficiency. In this work, we first investigate the power behavior of scientific PGAS application kernels on the SCC platform, and explore opportunities and challenges for power management within the PGAS framework. Results obtained via empirical evaluation of Unified Parallel C (UPC) applications on the SCC platform under different constraints, show that, for specific operations, the potential for energy savings in PGAS is large; and power/performance trade-offs can be effectively managed using a cross-layer approach. We investigate cross-layer power management using PGAS language extensions and runtime mechanisms that manipulate power/performance trade-offs. Specifically, we present the design, implementation and evaluation of such a middleware for application-aware cross-layer power management of UPC applications on the SCC platform. Finally, based on our observations, we provide a set of recommendations and insights that can be used to support similar power management for PGAS applications on other many-core platforms.
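As a loose illustration of the cross-layer idea only (all names here, power_phase_begin/power_phase_end and the phase labels, are invented; the paper's actual UPC extensions and SCC interfaces are not reproduced): the application annotates a communication-bound phase, and a runtime hook scales the core frequency for its duration, since such phases tolerate a lower clock.

    /* Hypothetical sketch of application-aware, cross-layer power
       management. The hooks stand in for runtime mechanisms that would
       drive the platform's frequency/voltage controls. */
    #include <stdio.h>

    typedef enum { PHASE_COMPUTE, PHASE_COMMUNICATE } phase_t;

    static void power_phase_begin(phase_t p) {
        /* Invented hook: request a low frequency for communication-bound
           phases, full frequency for compute-bound ones. */
        printf("runtime: set %s frequency\n",
               p == PHASE_COMMUNICATE ? "low" : "high");
    }

    static void power_phase_end(phase_t p) {
        (void)p;  /* invented hook: restore the default operating point */
        printf("runtime: restore default frequency\n");
    }

    int main(void) {
        power_phase_begin(PHASE_COMMUNICATE);
        /* ... bulk data exchange between PGAS partitions would go here ... */
        power_phase_end(PHASE_COMMUNICATE);
        return 0;
    }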
Abstract:
In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. Indeed, they gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex S^D jointly with the normal distribution on R^(D-1). However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
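In ilr coordinates z_i = ilr(x_i) in R^(D-1), the second estimator is the standard multivariate normal kernel estimator with a full bandwidth matrix H (generic notation; the compositional details are in the cited papers):

    \[ \hat f_H(z) = \frac{1}{n} \sum_{i=1}^{n} (2\pi)^{-(D-1)/2}\, |H|^{-1/2} \exp\!\left( -\tfrac{1}{2}\, (z - z_i)^{\top} H^{-1} (z - z_i) \right) , \]

a density on the simplex S^D being recovered by the change of variables back through ilr^{-1}.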
Abstract:
This paper presents a new registration algorithm, called Temporal Diffeomorphic Free Form Deformation (TDFFD), and its application to motion and strain quantification from a sequence of 3D ultrasound (US) images. The originality of our approach resides in enforcing time consistency by representing the 4D velocity field as the sum of continuous spatiotemporal B-spline kernels. The spatiotemporal displacement field is then recovered through forward Eulerian integration of the non-stationary velocity field. The strain tensor is computed locally using the spatial derivatives of the reconstructed displacement field. The energy functional considered in this paper weighs two terms: the image similarity and a regularization term. The image similarity metric is the sum of squared differences between the intensities of each frame and a reference one. Any frame in the sequence can be chosen as reference. The regularization term is based on the incompressibility of myocardial tissue. TDFFD was compared to pairwise 3D FFD and 3D+t FFD, both on displacement and velocity fields, on a set of synthetic 3D US images with different noise levels. TDFFD showed increased robustness to noise compared to these two state-of-the-art algorithms. TDFFD also proved to be more resistant to a reduced temporal resolution when decimating this synthetic sequence. Finally, this synthetic dataset was used to determine optimal settings of the TDFFD algorithm. Subsequently, TDFFD was applied to a database of cardiac 3D US images of the left ventricle acquired from 9 healthy volunteers and 13 patients treated by Cardiac Resynchronization Therapy (CRT). On healthy cases, uniform strain patterns were observed over all myocardial segments, as physiologically expected. On all CRT patients, the improvement in synchrony of regional longitudinal strain correlated with CRT clinical outcome as quantified by the reduction of end-systolic left ventricular volume at follow-up (6 and 12 months), showing the potential of the proposed algorithm for the assessment of CRT.
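The time-consistency construction can be summarized as follows (notation ours): the velocity field is a sum of spatiotemporal B-spline kernels,

    \[ v(x, t) = \sum_{i} c_i \, \beta\!\left(\frac{x - x_i}{\sigma_x}\right) \beta\!\left(\frac{t - t_i}{\sigma_t}\right) , \]

and the displacement field is obtained by forward Eulerian integration of this non-stationary velocity field,

    \[ \varphi(x, t_{k+1}) = \varphi(x, t_k) + v\big(\varphi(x, t_k), t_k\big)\, \Delta t , \qquad \varphi(x, 0) = x , \]

with the strain tensor then following from the spatial derivatives of \varphi.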
Abstract:
For the standard kernel density estimate, it is known that one can tune the bandwidth such that the expected L1 error is within a constant factor of the optimal L1 error (obtained when one is allowed to choose the bandwidth with knowledge of the density). In this paper, we pose the same problem for variable bandwidth kernel estimates where the bandwidths are allowed to depend upon the location. We show in particular that for positive kernels on the real line, for any data-based bandwidth, there exists a density for which the ratio of expected L1 error over optimal L1 error tends to infinity. Thus, the problem of tuning the variable bandwidth in an optimal manner is "too hard". Moreover, from the class of counterexamples exhibited in the paper, it appears that placing conditions on the densities (monotonicity, convexity, smoothness) does not help.
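In the notation usual for this problem (ours, not necessarily the paper's), one common form of variable bandwidth estimate lets the bandwidth depend on the data point,

    \[ f_n(x) = \frac{1}{n} \sum_{i=1}^{n} \frac{1}{h(X_i)}\, K\!\left(\frac{x - X_i}{h(X_i)}\right) , \]

and the negative result can be read as: for every data-driven bandwidth function there exists a density f for which

    \[ \limsup_{n \to \infty} \frac{\mathbb{E}\int |f_n - f|}{\inf_{h} \mathbb{E}\int |f_{n,h} - f|} = \infty , \]

where the infimum is over bandwidth choices made with knowledge of f.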