971 results for Tridiagonal Kernel


Relevance: 10.00%

Abstract:

In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods: the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the label set, and the final labels for each example are determined by aggregating the predictions of all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependencies by themselves. An experimental study using decision trees, a kernel method, and Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve multi-label classification performance.
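
As a sketch of this idea (assumptions: NumPy arrays and scikit-learn estimators; the paper's exact augmentation scheme is not reproduced here), plain binary relevance next to a dependency-aware variant in which each binary classifier sees the remaining labels as extra features:

```python
# Minimal sketch: binary relevance (BR) and a label-dependency-aware
# variant. X: (n_samples, n_features); Y: binary (n_samples, n_labels).
import numpy as np
from sklearn.base import clone
from sklearn.tree import DecisionTreeClassifier

def fit_binary_relevance(X, Y, base=DecisionTreeClassifier()):
    """Plain BR: one independent binary classifier per label."""
    return [clone(base).fit(X, Y[:, j]) for j in range(Y.shape[1])]

def fit_augmented(X, Y, base=DecisionTreeClassifier()):
    """Each classifier also sees the *other* labels as features,
    so the base learner can discover label dependencies itself."""
    models = []
    for j in range(Y.shape[1]):
        others = np.delete(Y, j, axis=1)           # remaining labels
        models.append(clone(base).fit(np.hstack([X, others]), Y[:, j]))
    return models

def predict_augmented(models, br_models, X):
    """True labels are unknown at test time: bootstrap with plain BR."""
    Y0 = np.column_stack([m.predict(X) for m in br_models])
    preds = []
    for j, m in enumerate(models):
        others = np.delete(Y0, j, axis=1)
        preds.append(m.predict(np.hstack([X, others])))
    return np.column_stack(preds)
```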

Relevance: 10.00%

Abstract:

Fractal theory has a large number of applications in image and signal analysis. Although the fractal dimension can be used as an image object descriptor, a multiscale approach such as the multiscale fractal dimension (MFD) increases the amount of information extracted from an object. MFD provides a curve that describes object complexity along the scale. However, this curve carries much redundant information, which can be discarded without loss of performance. Thus, a descriptor technique is needed to analyze this curve and to reduce the dimensionality of the data by selecting its meaningful descriptors. This paper presents a comparative study of different techniques for generating MFD descriptors. It compares well-known and state-of-the-art descriptors: Fourier, Wavelet, Polynomial Approximation (PA), Functional Data Analysis (FDA), Principal Component Analysis (PCA), Symbolic Aggregate Approximation (SAX), kernel PCA, Independent Component Analysis (ICA), and geometrical and statistical features. The descriptors are evaluated in a classification experiment using Linear Discriminant Analysis over descriptors computed from MFD curves of two data sets: generic shapes and rotated fish contours. Results indicate that PCA, FDA, PA and Wavelet Approximation provide the best MFD descriptors for recognition and classification tasks.
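
A minimal sketch of one of the compared pipelines, PCA descriptors scored with LDA (the data below are synthetic stand-ins, not the paper's shape or fish-contour sets):

```python
# PCA descriptors from MFD curves, evaluated with LDA as in the study.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
curves = rng.normal(size=(120, 200))      # 120 MFD curves, 200 scales
y = rng.integers(0, 4, size=120)          # 4 hypothetical shape classes

# The MFD curve is highly redundant: keep a few principal components
# as the shape descriptors.
descriptors = PCA(n_components=8).fit_transform(curves)

# Classification experiment with LDA, mirroring the comparative study.
score = cross_val_score(LinearDiscriminantAnalysis(), descriptors, y).mean()
print(f"LDA cross-validated accuracy: {score:.3f}")
```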

Relevance: 10.00%

Abstract:

Background: With the development of DNA hybridization microarray technologies, it is now possible to assess the expression levels of thousands to tens of thousands of genes simultaneously. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to further statistical analysis, so choosing a suitable normalization approach can be critical and deserves judicious consideration. Results: We considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods that had not yet been used for normalization, namely Kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion: In light of these results, Support Vector Regression is favored for microarray normalization because of its robustness in estimating the normalization curve.
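
A minimal sketch of the idea behind regression-based normalization, here with Support Vector Regression on an MA-plot (illustrative assumptions: an RBF kernel and synthetic data; the paper's settings are not reproduced):

```python
# Intensity-dependent normalization: fit the bias trend of log-ratios M
# against mean log-intensities A (the MA-plot) and subtract it.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(1)
A = rng.uniform(6, 16, size=500)                  # mean log-intensities
M = 0.3 * np.sin(A) + rng.normal(0, 0.2, 500)     # biased log-ratios

trend = SVR(kernel="rbf").fit(A.reshape(-1, 1), M)
M_normalized = M - trend.predict(A.reshape(-1, 1))
```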

Relevance: 10.00%

Abstract:

The objective of this study was to evaluate the chemical composition and in vitro dry matter (DM) digestibility of the stem, leaf, straw, cob and kernel fractions of eleven corn (Zea mays) cultivars harvested at two cutting heights. The experiment was designed as randomized blocks with three replicates, in a 2 × 11 factorial arrangement (eleven cultivars, two cutting heights). The cultivars evaluated were D 766, D 657, D 1000, P 3021, P 3041, C 805, C 333, AG 5011, FOR 01, CO 9621 and BR 205, harvested at a low cutting height (5 cm above ground) and a high cutting height (5 cm below the first ear insertion). Cutting height influenced the DM content of the stem fraction, which was lower in plants harvested at the low cutting height (23.95%) than at the high cutting height (26.28%). The kernel fraction had the highest in vitro DM digestibility (85.13%), with no difference among cultivars. Cob and straw were the fractions with the highest levels of neutral detergent fiber (80.74% and 79.77%, respectively) and the lowest levels of crude protein (3.84% and 3.69%, respectively). The leaf fraction had the highest crude protein content at both the low and the high cutting heights (15.55% and 16.20%, respectively). Raising the cutting height increased the DM content and in vitro DM digestibility of the stem fraction, but did not affect the DM content of the leaf fraction.

Relevance: 10.00%

Abstract:

We prove that any continuous function with domain {z ∈ C : |z| ≤ 1} that generates a bizonal positive definite kernel on the unit sphere in C^q, q ≥ 3, is continuously differentiable in {z ∈ C : |z| < 1} up to order q − 2, with respect to both z and z̄. In particular, the partial derivatives of the function with respect to x = Re z and y = Im z exist and are continuous in {z ∈ C : |z| < 1} up to the same order.

Relevance: 10.00%

Abstract:

We obtain explicit formulas for the eigenvalues of integral operators generated by continuous dot product kernels defined on the sphere, via the usual gamma function. Using them, we present both a procedure for deriving sharp bounds on the eigenvalues and a description of their asymptotic behavior near 0. We illustrate the results with examples, among them the integral operator generated by a Gaussian kernel. Finally, we sketch complex versions of our results covering the cases in which the sphere sits in a Hermitian space.
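
For orientation, eigenvalue computations for dot product kernels on spheres rest on the classical Funk-Hecke formula, stated here in a standard real form (the paper's explicit gamma-function formulas are not reproduced):

```latex
% Integral operator generated by a dot product kernel K on S^{m-1}:
%   (L_K f)(x) = \int_{S^{m-1}} K(\langle x, y\rangle)\, f(y)\, d\sigma(y).
% Funk--Hecke: every spherical harmonic Y_k of degree k is an eigenfunction,
\begin{equation*}
  L_K Y_k = \lambda_k Y_k, \qquad
  \lambda_k = \omega_{m-2} \int_{-1}^{1} K(t)\,
    \frac{C_k^{(m-2)/2}(t)}{C_k^{(m-2)/2}(1)}\,
    (1-t^2)^{\frac{m-3}{2}}\, dt,
\end{equation*}
% where C_k^{\nu} are the Gegenbauer polynomials and \omega_{m-2} is the
% surface area of S^{m-2}; each \lambda_k occurs with the multiplicity of
% the space of degree-k spherical harmonics.
```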

Relevance: 10.00%

Abstract:

This study evaluated the presence of fungi and mycotoxins [aflatoxins (AFs), cyclopiazonic acid (CPA) and aspergillic acid] in stored samples of the peanut cultivars Runner IAC Caiapó and Runner IAC 886 over six months. A total of 70 pod and 70 kernel samples were seeded directly onto Aspergillus flavus and Aspergillus parasiticus agar for fungal isolation and aspergillic acid detection, and AFs and CPA were analyzed by high-performance liquid chromatography. The results showed a predominance of Aspergillus section Flavi strains, Aspergillus section Nigri strains, Fusarium spp., Penicillium spp. and Rhizopus spp. in both cultivars. AFs were detected in 11.4% of the kernel samples of both cultivars and in 5.7% and 8.6% of the pod samples of the Caiapó and 886 cultivars, respectively. CPA was detected in 60.0% and 74.3% of the kernel samples of the Caiapó and 886 cultivars, respectively. Co-occurrence of both mycotoxins was observed in 11.4% of the kernel samples of both cultivars. These results indicate a potential risk of aflatoxin production if good storage practices are not applied. In addition, the large number of samples contaminated with CPA and the simultaneous detection of AFs and CPA highlight the need to investigate factors related to the control and co-occurrence of these toxins in peanuts.

Relevance: 10.00%

Abstract:

Modern GPUs are well suited to intensive computational tasks and massive parallel computation. Sparse matrix multiplication and linear triangular solvers are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel comprising the two modules are investigated. The main interest is to solve linear systems derived from elliptic equations with triangular elements; the resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence-tree algorithm determines which variables the linear triangular solver can resolve in parallel. To increase the number of parallel threads, a graph coloring algorithm reorders the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that it improves the computational cost of the matrix-vector multiplication, and the pre-processing associated with the triangular solver needs to be executed just once. The conjugate gradient method was implemented and showed a similar convergence rate for all compared methods, with the proposed method achieving significantly smaller execution times.
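
A minimal sketch of the matrix-vector product performed directly on the CSR arrays (in the CUDA kernel described above the outer loop would map to one thread per row; the example matrix is a hypothetical stand-in):

```python
# y = A @ x computed straight from the CSR arrays (data, indices, indptr).
import numpy as np
from scipy.sparse import csr_matrix

def csr_matvec(data, indices, indptr, x):
    """Row-by-row CSR SpMV; each row is independent work."""
    y = np.zeros(len(indptr) - 1)
    for row in range(len(y)):                     # one GPU thread per row
        for k in range(indptr[row], indptr[row + 1]):
            y[row] += data[k] * x[indices[k]]
    return y

# Symmetric positive definite example, as for the elliptic systems above.
A = csr_matrix(np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]]))
x = np.ones(3)
assert np.allclose(csr_matvec(A.data, A.indices, A.indptr, x), A @ x)
```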

Relevance: 10.00%

Abstract:

Doctoral program: Cybernetics and Telecommunication

Relevance: 10.00%

Abstract:

We study some perturbative and non-perturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value (vev) given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. We then solve numerically some cases relevant to Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vev. As perturbative effects, we consider the leading logarithmic resummation in small-Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model, which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.

Relevance: 10.00%

Abstract:

This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we take the reader through the fundamental notions of probability and stochastic processes, stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focus on fractional Brownian motion (fBm) and its discrete-time increment process, fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We study LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. We then introduce the theory of fractional integrals and derivatives, which turns out to be very appropriate for studying and modeling systems with long-memory properties. After introducing the basic concepts, we provide many examples and applications. For instance, we investigate the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems deviating from the classical exponential Debye pattern. We then focus on generalizations of the standard diffusion equation, passing through the preliminary study of the fractional forward drift equation. Such generalizations are obtained using fractional integrals and derivatives of distributed orders. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduce and study the generalized grey Brownian motion (ggBm), a parametric class of H-sssi processes whose marginal probability density functions evolve in time according to a partial integro-differential equation of fractional type. The ggBm is, of course, non-Markovian. Throughout the work we remark that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t); all these processes provide suitable stochastic models for the starting equation. In studying the ggBm, we focus on a subclass made up of processes with stationary increments. The ggBm is defined canonically in the so-called grey noise space; however, we provide a characterization that is independent of the underlying probability space. We also point out that the generalized grey Brownian motion is a direct generalization of a Gaussian process; in particular, it generalizes both Brownian motion and fractional Brownian motion.
Finally, we introduce and analyze a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We start from the forward drift equation, which is made non-local in time by the introduction of a suitably chosen memory kernel K(t). The resulting non-Markovian equation is interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then consider the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation involving the same memory kernel K(t). We develop several applications and derive exact solutions. Moreover, we consider different stochastic models for the given equations, providing path simulations.
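
A minimal sketch of one of the building blocks above, exact simulation of fractional Brownian motion by Cholesky factorization of its covariance (an illustrative method choice; the thesis's own path-simulation schemes are not reproduced):

```python
# Sample B_H at n grid points from the fBm covariance
#   Cov(B_H(t), B_H(s)) = (|t|^{2H} + |s|^{2H} - |t-s|^{2H}) / 2.
import numpy as np

def fbm_cholesky(n, H, T=1.0, seed=0):
    t = np.linspace(T / n, T, n)
    cov = 0.5 * (t[:, None] ** (2 * H) + t[None, :] ** (2 * H)
                 - np.abs(t[:, None] - t[None, :]) ** (2 * H))
    L = np.linalg.cholesky(cov)                  # exact Gaussian sampling
    return t, L @ np.random.default_rng(seed).standard_normal(n)

# H = 0.5 recovers standard Brownian motion; H > 0.5 yields the
# long-range-dependent increment process (fGn) discussed above.
t, path = fbm_cholesky(n=512, H=0.7)
```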

Relevance: 10.00%

Abstract:

Every seismic event produces seismic waves which travel throughout the Earth. Seismology is the science of interpreting measurements of these waves to derive information about the structure of the Earth. Seismic tomography is the most powerful tool for determining the 3D structure of the Earth's deep interior. Tomographic models obtained at global and regional scales are an underlying tool for determining the geodynamical state of the Earth, showing evident correlation with other geophysical and geological characteristics. Global tomographic images of the Earth can be written as linear combinations of basis functions from a specifically chosen set, which defines the model parameterization. A number of different parameterizations are common in the literature: seismic velocities in the Earth have been expressed, for example, as combinations of spherical harmonics or by means of the simpler characteristic functions of discrete cells. In this work we focus on this aspect, evaluating a new type of parameterization based on wavelet functions. It is known from classical Fourier theory that a signal can be expressed as the sum of a, possibly infinite, series of sines and cosines, often referred to as a Fourier expansion. The big disadvantage of a Fourier expansion is that it has only frequency resolution and no time resolution. Wavelet analysis (or the wavelet transform) is probably the most recent solution for overcoming this shortcoming of Fourier analysis. The fundamental idea behind this analysis is to study a signal according to scale. Wavelets are mathematical functions that cut up data into different frequency components and then study each component with a resolution matched to its scale, so they are especially useful in the analysis of non-stationary processes containing multi-scale features, discontinuities and sharp spikes. Wavelets are used in essentially two ways when applied to the study of geophysical processes or signals: 1) as a basis for the representation or characterization of a process; 2) as an integration kernel for analysis, to extract information about the process. These two types of application are the object of this work. First, we use wavelets as a basis to represent and solve the tomographic inverse problem. After a brief introduction to seismic tomography theory, we assess the power of wavelet analysis in the representation of two different types of synthetic models; we then apply it to real data, obtaining surface-wave phase-velocity maps and evaluating its abilities by comparison with another type of parameterization (block parameterization). For the second type of application, we analyze the ability of the continuous wavelet transform in spectral analysis, starting again with synthetic tests to evaluate its sensitivity and capabilities and then applying the same analysis to real data to obtain local correlation maps between different models at the same depth or between different profiles of the same model.
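
A minimal sketch of the two uses of wavelets just described, using the PyWavelets package (a library and wavelet-family choice assumed here for illustration; the thesis does not prescribe either):

```python
# 1) Wavelets as a basis (representation); 2) as an analysis kernel (CWT).
import numpy as np
import pywt

signal = np.sin(np.linspace(0, 8 * np.pi, 1024))   # stand-in for data

# 1) Multi-level discrete decomposition: keep the dominant coefficients
#    and reconstruct -- the "basis for representation" use.
coeffs = pywt.wavedec(signal, 'db4', level=5)
thresholded = [pywt.threshold(c, value=0.1, mode='hard') for c in coeffs]
approx = pywt.waverec(thresholded, 'db4')

# 2) Continuous wavelet transform: scale-by-scale spectral analysis,
#    the "integration kernel" use.
cwt_coeffs, freqs = pywt.cwt(signal, scales=np.arange(1, 64), wavelet='morl')
```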

Relevance: 10.00%

Abstract:

The thesis reports the synthesis and the chemical, structural and spectroscopic characterization of a series of new rhodium and Au-Fe carbonyl clusters. Most of the new high-nuclearity rhodium carbonyl clusters have been obtained by redox condensation of preformed rhodium clusters with a species in a different oxidation state generated in situ by mild oxidation. In particular, the starting Rh carbonyl cluster is the readily available [Rh7(CO)16]3- compound. The oxidized species is generated in situ by reaction of the above with a stoichiometric defect of mild oxidizing agents such as [M(H2O)x]n+ aquo complexes possessing different pKa's and Mn+/M potentials. The experimental results are roughly in keeping with the conclusion that aquo complexes featuring E°(Mn+/M) < ca. -0.20 V do not lead to the formation of hetero-metallic Rh clusters, probably because of the inadequacy of their redox potentials relative to that of the [Rh7(CO)16]3-/2- redox couple; only homometallic clusters have been obtained, fairly selectively. As a fallout of these investigations, a convenient and reproducible synthesis of the ill-characterized species [HnRh22(CO)35]8-n has also been discovered. The ready availability of this compound prompted its complete spectroscopic and chemical characterization, as it is the only example of a rhodium carbonyl cluster with two interstitial metal atoms. The presence of several hydride atoms, first suggested by chemical evidence, has been confirmed by ESI-MS and 1H NMR, as well as by new structural characterizations of its tetra- and penta-anions. All these species display redox behaviour and behave as molecular capacitors. Their reactivity with CO gives rise to a new series of Rh22 clusters containing a different number of carbonyl groups, which have likewise been fully characterized. Formation of hetero-metallic Rh clusters was only observed when using SnCl2·H2O as the oxidizing agent. Almost all of the Rh-Sn carbonyl clusters obtained have icosahedral geometry. The only previously reported example of an icosahedral Rh cluster with an interstitial atom is the [Rh12Sb(CO)27]3- trianion; the new clusters have very similar metal frameworks, the same number of CO ligands and, consequently, the same number of cluster valence electrons (CVEs). A first interesting aspect of the chemistry of the Rh-Sn system is that it also provides icosahedral clusters that make an exception to the cluster-borane analogy by showing electron counts from 166 to 171. As a result, the most electron-short species, namely [Rh12Sn(CO)25]4-, displays a propensity for redox chemistry, even if disfavoured by the relatively high free negative charge of the starting anion, and moreover behaves as a chloride scavenger. The presence of these bulky interstitial atoms results in the metal framework adopting structures different from a close-packed metal lattice and, above all, imparts notable stability to the resulting cluster.
An organometallic approach to a new kind of molecular ligand-stabilized gold nanoparticles, in which Fe(CO)x (x = 3, 4) moieties protect and stabilize the gold kernel, has also been undertaken. As a result, the new clusters [Au21{Fe(CO)4}10]5-, [Au22{Fe(CO)4}12]6-, [Au28{Fe(CO)3}4{Fe(CO)4}10]8- and [Au34{Fe(CO)3}6{Fe(CO)4}8]6- have been isolated and characterized. As suggested by the concept of isolobal analogy, the Fe(CO)4 molecular fragment may display the same ligand capability as thiolates, and go beyond it. Indeed, the above clusters bear a structural resemblance to the structurally characterized gold thiolates, showing Fe-Au-Fe, rather than S-Au-S, staple motifs. Staple motifs, the oxidation state of the surface gold atoms and the energy of the Au atomic orbitals are likely to concur in delaying the insulator-to-metal transition as the nuclearity of gold thiolates increases, relative to the more compact transition-metal carbonyl clusters. Finally, a few previously reported Au-Fe carbonyl clusters have been used as precursors in the preparation of supported gold catalysts. The catalysts obtained are active in toluene oxidation, and their catalytic activity depends on the Fe/Au cluster loading over TiO2.

Relevance: 10.00%

Abstract:

In the present dissertation, two different aspects of the odd-intrinsic-parity sector of mesonic chiral perturbation theory (mesonic ChPT) are investigated. First, the one-loop renormalization of the leading term, the so-called Wess-Zumino-Witten action, is carried out. To this end, the complete one-loop part of the theory must first be extracted by means of the saddle-point method. Subsequently, all singular one-loop structures are isolated within the framework of the heat-kernel technique. Finally, these divergent parts must be absorbed, which requires the most general anomalous Lagrangian of order O(p^6); this Lagrangian is developed systematically. If the chiral group SU(n)_L x SU(n)_R is extended to SU(n)_L x SU(n)_R x U(1)_V, additional monomials come into play. The renormalized coefficients of this Lagrangian, the low-energy constants (LECs), are initially free parameters of the theory that must be fixed individually. By considering a complementary vector-meson model, the amplitudes of suitable processes can be determined and, by comparison with the results of mesonic ChPT, a numerical estimate of some LECs can be obtained. In the second part, a consistent one-loop calculation is performed for the anomalous process (virtual) photon + charged kaon -> charged kaon + neutral pion. As a check on our results, an existing calculation of the reaction (virtual) photon + charged pion -> charged pion + neutral pion is reproduced. Taking into account the estimated values of the respective LECs, the associated hadronic structure functions can be determined numerically and discussed.

Relevance: 10.00%

Abstract:

This thesis explores the capabilities of heterogeneous multi-core systems based on multiple Graphics Processing Units (GPUs) in a standard desktop framework. Multi-GPU accelerated deskside computers are an appealing alternative to other high-performance computing (HPC) systems: being composed of commodity hardware components fabricated in large quantities, their price-performance ratio is unparalleled in the world of high-performance computing. Essentially bringing "supercomputing to the masses", this opens up new possibilities for application fields where investing in HPC resources had previously been considered unfeasible. One of these is the field of bioelectrical imaging, a class of medical imaging technologies that occupy a low-cost niche next to million-dollar systems like functional Magnetic Resonance Imaging (fMRI). In the scope of this work, several computational challenges encountered in bioelectrical imaging are tackled with this new kind of computing resource, striving to help these methods approach their true potential. Specifically, the following main contributions were made. Firstly, a novel dual-GPU implementation of parallel triangular matrix inversion (TMI) is presented, addressing a crucial kernel in the computation of multi-mesh head models for electroencephalographic (EEG) source localization. This includes not only a highly efficient implementation of the routine itself, achieving excellent speedups over an optimized CPU implementation, but also a novel GPU-friendly compressed storage scheme for triangular matrices. Secondly, a scalable multi-GPU solver for non-Hermitian linear systems was implemented. It is integrated into a simulation environment for electrical impedance tomography (EIT) that requires the frequent solution of complex systems with millions of unknowns, a task this solver can perform within seconds. In terms of computational throughput, it outperforms not only a highly optimized multi-CPU reference but related GPU-based work as well. Finally, a GPU-accelerated graphical EEG real-time source localization software package was implemented. Thanks to the acceleration, it can meet real-time requirements at unprecedented anatomical detail while running more complex localization algorithms. Additionally, a novel implementation to extract anatomical priors from static Magnetic Resonance (MR) scans has been included.
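
For orientation, a sketch of conventional packed storage for a triangular matrix, which roughly halves memory footprint and traffic (this is the textbook row-packed layout, shown only as background; the thesis's novel GPU-friendly scheme is not reproduced here):

```python
# Row-packed lower-triangular storage: element (i, j), j <= i, lands at
# flat index i*(i+1)//2 + j, so an n x n triangle needs n*(n+1)/2 entries.
import numpy as np

def pack_lower(A):
    """Keep only the lower triangle of A, row by row."""
    n = A.shape[0]
    return np.concatenate([A[i, :i + 1] for i in range(n)])

def get(packed, i, j):
    """Read element (i, j) of the packed lower-triangular matrix."""
    return packed[i * (i + 1) // 2 + j] if j <= i else 0.0

A = np.tril(np.arange(16, dtype=float).reshape(4, 4))
p = pack_lower(A)                  # 10 entries instead of 16
assert get(p, 3, 1) == A[3, 1] and get(p, 1, 3) == 0.0
```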