884 results for Dunkl Kernel
Abstract:
This paper studies the asymptotic optimality of discrete-time Markov decision processes (MDPs) with general state space and action space and having weak and strong interactions. Using an approach similar to that developed by Liu, Zhang, and Yin [Appl. Math. Optim., 44 (2001), pp. 105-129], the idea of this paper is to consider an MDP with general state and action spaces and to reduce the dimension of the state space by considering an averaged model. This formulation is often described by introducing a small parameter epsilon > 0 in the definition of the transition kernel, leading to a singularly perturbed Markov model with two time scales. Our objective is twofold. First, it is shown that the value function of the control problem for the perturbed system converges to the value function of a limit averaged control problem as epsilon goes to zero. In the second part of the paper, it is proved that a feedback control policy for the original control problem defined by using an optimal feedback policy for the limit problem is asymptotically optimal. Our work extends existing results in the literature in two directions: the underlying MDP is defined on general state and action spaces, and we do not impose strong conditions on the recurrence structure of the MDP, such as Doeblin's condition.
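For orientation, the singularly perturbed kernel in this kind of model usually takes the following form; this is a generic sketch of the standard two-time-scale setup, not notation taken from the paper:

```latex
\[
  P^{\varepsilon}(\mathrm{d}y \mid x, a)
  \;=\; P(\mathrm{d}y \mid x, a) \;+\; \varepsilon\, Q(\mathrm{d}y \mid x, a),
  \qquad \varepsilon > 0,
\]
% $P$ carries the strong (order-one) interactions and $\varepsilon Q$ the weak ones, with
% $Q(\cdot \mid x, a)$ a signed kernel of total mass zero so that $P^{\varepsilon}$ remains a
% transition kernel for small $\varepsilon$.  In this notation the paper's two results read:
% (i) $V^{\varepsilon} \to \bar{V}$ as $\varepsilon \to 0$, and (ii) a feedback policy built from
% an optimal policy of the averaged (limit) problem is asymptotically optimal for the
% $\varepsilon$-perturbed problem.
```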
Abstract:
This paper is concerned with the energy decay for a class of plate equations with memory and lower order perturbation of p-Laplacian type, u_tt + Δ²u − Δ_p u + ∫₀ᵗ g(t−s) Δu(s) ds − Δu_t + f(u) = 0 in Ω × ℝ⁺, with simply supported boundary condition, where Ω is a bounded domain of ℝ^N, g > 0 is a memory kernel that decays exponentially and f(u) is a nonlinear perturbation. This kind of problem without the memory term models elastoplastic flows.
Abstract:
trans-Free interesterified fat was produced for possible use as a margarine. Palm stearin, coconut oil, and canola oil were used as substrates for chemical interesterification. The main aim of the present study was to evaluate the physicochemical properties of blends of palm stearin, coconut oil, and canola oil submitted to chemical interesterification using sodium methoxide as the catalyst. The original and interesterified blends were examined for fatty acid composition, softening and melting points, solid fat content, and consistency. Chemical interesterification reduced the softening and melting points, consistency, and solid fat content. The interesterified fats showed desirable physicochemical properties for possible use as a margarine. Therefore, our results suggest that the interesterified fat without trans-fatty acids could be used as an alternative to partially hydrogenated fat.
Abstract:
This study describes the hypocholesterolaemic effect of whole lupin and its protein in hamsters. The diets were: casein (control group HC), lupin protein isolate (group HPI) and whole lupin seed (group HWS). The HPI and HWS diets promoted a significant reduction of total cholesterol and non-HDL cholesterol in the hamsters' plasma as compared with HC. The true digestibility of the HPI and HC groups was similar and differed significantly from that of the HWS group, which in turn showed a significant difference in total sterol excretion as compared to the former groups. Histological analysis of the liver revealed that animals fed on the HPI and HWS diets presented a low level of steatosis (level 1) as compared to the ones fed on the HC diet (level 4). Our findings demonstrate that protein isolate from Lupinus albus from Brazil has a metabolic effect on endogenous cholesterol metabolism and a protective effect on the development of hepatic steatosis. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
In multi-label classification, examples can be associated with multiple labels simultaneously. The task of learning from multi-label data can be addressed by methods that transform the multi-label classification problem into several single-label classification problems. The binary relevance approach is one of these methods, where the multi-label learning task is decomposed into several independent binary classification problems, one for each label in the set of labels, and the final labels for each example are determined by aggregating the predictions from all binary classifiers. However, this approach fails to consider any dependency among the labels. Aiming to accurately predict label combinations, in this paper we propose a simple approach that enables the binary classifiers to discover existing label dependency by themselves. An experimental study using decision trees, a kernel method as well as Naive Bayes as base-learning techniques shows the potential of the proposed approach to improve the multi-label classification performance.
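The label-dependency-aware binary relevance described above can be illustrated with a small two-stage sketch: each per-label classifier is trained with the remaining labels appended to the feature vector and, at prediction time, is fed the estimates produced by a plain binary relevance pass. This is only a sketch under those assumptions (scikit-learn, decision trees as the base learner, and all names below are illustrative), not the authors' code:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

class DependentBinaryRelevance:
    """Binary relevance in which each per-label classifier also sees the other labels."""

    def __init__(self, base=DecisionTreeClassifier):
        self.base = base

    def fit(self, X, Y):  # Y: (n_samples, n_labels) binary indicator matrix
        n_labels = Y.shape[1]
        # Stage 1: ordinary binary relevance, one independent classifier per label.
        self.stage1 = [self.base().fit(X, Y[:, j]) for j in range(n_labels)]
        # Stage 2: each classifier is trained on the features plus the other true labels,
        # so it can learn whatever label dependency is present in the data.
        self.stage2 = []
        for j in range(n_labels):
            others = np.delete(Y, j, axis=1)
            self.stage2.append(self.base().fit(np.hstack([X, others]), Y[:, j]))
        return self

    def predict(self, X):
        # Stage-1 estimates stand in for the unknown labels needed by the stage-2 classifiers.
        Y1 = np.column_stack([clf.predict(X) for clf in self.stage1])
        cols = []
        for j, clf in enumerate(self.stage2):
            others = np.delete(Y1, j, axis=1)
            cols.append(clf.predict(np.hstack([X, others])))
        return np.column_stack(cols)
```

The decomposition remains a set of independent binary problems, but each binary classifier can now exploit correlations with the other labels when predicting label combinations.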
Abstract:
Fractal theory presents a large number of applications to image and signal analysis. Although the fractal dimension can be used as an image object descriptor, a multiscale approach, such as the multiscale fractal dimension (MFD), increases the amount of information extracted from an object. MFD provides a curve which describes object complexity along the scale. However, this curve presents much redundant information, which could be discarded without loss in performance. Thus, it is necessary to use a descriptor technique to analyze this curve and to reduce the dimensionality of the data by selecting its meaningful descriptors. This paper presents a comparative study among different techniques for MFD descriptor generation. It compares the use of well-known and state-of-the-art descriptors, such as Fourier, Wavelet, Polynomial Approximation (PA), Functional Data Analysis (FDA), Principal Component Analysis (PCA), Symbolic Aggregate Approximation (SAX), kernel PCA, Independent Component Analysis (ICA), and geometrical and statistical features. The descriptors are evaluated in a classification experiment using Linear Discriminant Analysis over the descriptors computed from MFD curves from two data sets: generic shapes and rotated fish contours. Results indicate that PCA, FDA, PA and Wavelet Approximation provide the best MFD descriptors for recognition and classification tasks. (C) 2012 Elsevier B.V. All rights reserved.
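As a rough sketch of the descriptor-generation step that performed best in the comparison (PCA over the MFD curves followed by LDA classification), assuming scikit-learn is available and that each row of `curves` is the multiscale fractal dimension curve of one shape (the random data below is only a placeholder, not one of the paper's data sets):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

# curves: (n_shapes, n_scales) matrix of MFD curves; labels: shape classes.
rng = np.random.default_rng(0)
curves = rng.normal(size=(60, 128))      # placeholder for real MFD curves
labels = np.repeat(np.arange(6), 10)     # six hypothetical shape classes

# Compress the redundant points of each curve into a few meaningful descriptors.
pca = PCA(n_components=10)
descriptors = pca.fit_transform(curves)

# Evaluate the descriptors with Linear Discriminant Analysis, as in the paper's experiment.
scores = cross_val_score(LinearDiscriminantAnalysis(), descriptors, labels, cv=5)
print("LDA accuracy on PCA descriptors:", scores.mean())
```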
Abstract:
Background: With the development of DNA hybridization microarray technologies, it is nowadays possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses. Therefore, choosing a suitable approach for normalization can be critical, deserving judicious consideration. Results: Here, we considered three commonly used normalization approaches, namely Loess, Splines and Wavelets, and two non-parametric regression methods which had not yet been used for normalization, namely Kernel smoothing and Support Vector Regression. The results obtained were compared using artificial microarray data and benchmark studies. The results indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines and Wavelets. Conclusion: In light of our results, Support Vector Regression is favored for microarray normalization due to its robustness in estimating the normalization curve, which makes it superior to the other methods compared.
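A minimal sketch of how Support Vector Regression can estimate a normalization curve on two-colour data: fit the intensity-dependent trend of the log-ratio M against the mean log-intensity A, then subtract it. scikit-learn, the synthetic data and all variable names below are assumptions for illustration, not the method as implemented in the paper:

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic MA-plot data: A = mean log2 intensity, M = log2 ratio with an
# intensity-dependent bias plus noise (placeholders for real two-colour arrays).
rng = np.random.default_rng(0)
A = rng.uniform(6, 14, size=2000)
M = 0.3 * np.sin(A) + rng.normal(scale=0.2, size=A.size)

# Estimate the normalization curve M ~ f(A) with an epsilon-insensitive SVR;
# the epsilon tube and the RBF kernel make the fit robust to outlying spots.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.1, gamma="scale")
svr.fit(A.reshape(-1, 1), M)

# Normalized log-ratios: the fitted intensity-dependent trend is removed.
M_normalized = M - svr.predict(A.reshape(-1, 1))
```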
Abstract:
The objective of this study was to evaluate the chemical composition and dry matter in vitro digestibility of the stem, leaf, straw, cob and kernel fractions of eleven corn (Zea mays) cultivars, harvested at two cutting heights. The experiment was designed as randomized blocks, with three replicates, in a 2 × 11 factorial arrangement (eleven cultivars and two cutting heights). The corn cultivars evaluated were D 766, D 657, D 1000, P 3021, P 3041, C 805, C 333, AG 5011, FOR 01, CO 9621 and BR 205, harvested at a low cutting height (5 cm above ground) and a high cutting height (5 cm below the first ear insertion). Cutting height influenced the dry matter content of the stem fraction, which was lower (23.95%) in plants harvested at the low cutting height than in plants harvested at the high cutting height (26.28%). The kernel fraction had the highest dry matter in vitro digestibility (85.13%), with no difference among cultivars. Cob and straw were the fractions with the highest level of neutral detergent fiber (80.74% and 79.77%, respectively) and the lowest level of crude protein (3.84% and 3.69%, respectively). The leaf fraction had the highest crude protein content, for plants of both the low and high cutting heights (15.55% and 16.20%, respectively). The increase in plant cutting height enhanced the dry matter content and dry matter in vitro digestibility of the stem fraction, but did not affect the dry matter content of the leaf fraction.
Abstract:
We prove that any continuous function with domain {z ∈ C: |z| ≤ 1} that generates a bizonal positive definite kernel on the unit sphere in C^q, q ⩾ 3, is continuously differentiable in {z ∈ C: |z| < 1} up to order q − 2, with respect to both z and z̄. In particular, the partial derivatives of the function with respect to x = Re z and y = Im z exist and are continuous in {z ∈ C: |z| < 1} up to the same order.
Sharp estimates for eigenvalues of integral operators generated by dot product kernels on the sphere
Abstract:
We obtain explicit formulas for the eigenvalues of integral operators generated by continuous dot product kernels defined on the sphere via the usual gamma function. Using them, we present both a procedure to derive sharp bounds for the eigenvalues and a description of their asymptotic behavior near 0. We illustrate our results with examples, among them the integral operator generated by a Gaussian kernel. Finally, we sketch complex versions of our results to cover the cases where the sphere sits in a Hermitian space.
Abstract:
This study evaluated the presence of fungi and mycotoxins [aflatoxins (AFs), cyclopiazonic acid (CPA), and aspergillic acid] in stored samples of peanut cultivar Runner IAC Caiapó and cultivar Runner IAC 886 during 6 months. A total of 70 pod and 70 kernel samples were directly seeded onto Aspergillus flavus and Aspergillus parasiticus agar for fungi isolation and aspergillic acid detection, and AFs and CPA were analyzed by high-performance liquid chromatography. The results showed the predominance of Aspergillus section Flavi strains, Aspergillus section Nigri strains, Fusarium spp., Penicillium spp. and Rhizopus spp. from both peanut cultivars. AFs were detected in 11.4% of kernel samples of the two cultivars and in 5.7% and 8.6% of pod samples of the Caiapó and 886 cultivars, respectively. CPA was detected in 60.0% and 74.3% of kernel samples of the Caiapó and 886 cultivars, respectively. Co-occurrence of both mycotoxins was observed in 11.4% of kernel samples of the two cultivars. These results indicate a potential risk of aflatoxin production if good storage practices are not applied. In addition, the large number of samples contaminated with CPA and the simultaneous detection of AFs and CPA highlight the need to investigate factors related to the control and co-occurrence of these toxins in peanuts.
Abstract:
Modern GPUs are well suited for intensive computational tasks and massive parallel computation. Sparse matrix multiplication and the linear triangular solver are among the most important and heavily used kernels in scientific computation, and several challenges in developing a high-performance kernel for these two modules are investigated. The main interest is to solve linear systems derived from elliptic equations discretized with triangular elements. The resulting linear system has a symmetric positive definite matrix. The sparse matrix is stored in the compressed sparse row (CSR) format. A CUDA algorithm is proposed to execute the matrix-vector multiplication directly on the CSR format. A dependence tree algorithm is used to determine which variables the linear triangular solver can compute in parallel. To increase the number of parallel threads, a graph coloring algorithm is implemented to reorder the mesh numbering in a pre-processing phase. The proposed method is compared with available parallel and serial libraries. The results show that the proposed method improves the computation cost of the matrix-vector multiplication. The pre-processing associated with the triangular solver needs to be executed just once in the proposed method. The conjugate gradient method was implemented and showed a similar convergence rate for all the compared methods. The proposed method showed significantly smaller execution times.
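To make the storage scheme concrete, here is a small Python sketch of matrix-vector multiplication performed directly on the CSR arrays (values, column indices, row pointers). The outer loop over rows is the part a CUDA kernel typically parallelizes, for instance with one thread or warp per row; the exact mapping used in the paper is not specified here, so treat this only as an illustration of the CSR traversal:

```python
import numpy as np

def csr_matvec(values, col_idx, row_ptr, x):
    """y = A @ x for a matrix A stored in compressed sparse row (CSR) format.

    values  : nonzero entries, stored row by row
    col_idx : column index of each nonzero
    row_ptr : row_ptr[i]:row_ptr[i+1] delimits the nonzeros of row i
    """
    n_rows = len(row_ptr) - 1
    y = np.zeros(n_rows)
    for i in range(n_rows):                      # a GPU kernel parallelizes this loop
        start, end = row_ptr[i], row_ptr[i + 1]
        y[i] = np.dot(values[start:end], x[col_idx[start:end]])
    return y

# Small symmetric positive definite example:
#     A = [[4, 1, 0],
#          [1, 3, 0],
#          [0, 0, 2]]
values  = np.array([4.0, 1.0, 1.0, 3.0, 2.0])
col_idx = np.array([0, 1, 0, 1, 2])
row_ptr = np.array([0, 2, 4, 5])
print(csr_matvec(values, col_idx, row_ptr, np.array([1.0, 2.0, 3.0])))  # -> [6. 7. 6.]
```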
Abstract:
Doctoral program: Cibernética y Telecomunicación
Abstract:
We study some perturbative and nonperturbative effects in the framework of the Standard Model of particle physics. In particular, we consider the time dependence of the Higgs vacuum expectation value given by the dynamics of the Standard Model and study the non-adiabatic production of both bosons and fermions, which is intrinsically non-perturbative. In the Hartree approximation, we analyze the general expressions that describe the dissipative dynamics due to the backreaction of the produced particles. Then, we solve numerically some relevant cases for Standard Model phenomenology in the regime of relatively small oscillations of the Higgs vacuum expectation value (vev). As perturbative effects, we consider the leading logarithmic resummation in small Bjorken-x QCD, concentrating on the Nc dependence of the Green functions associated with reggeized gluons. Here the eigenvalues of the BKP kernel for states of more than three reggeized gluons are unknown in general, contrary to the large-Nc (planar) limit, where the problem becomes integrable. In this context we consider a 4-gluon kernel for a finite number of colors and define some simple toy models for the configuration-space dynamics, which are directly solvable with group-theoretical methods. In particular, we study the dependence of the spectrum of these models on the number of colors and make comparisons with the planar limit case. In the final part we move on to the study of theories beyond the Standard Model, considering models built on AdS5 × S5/Γ orbifold compactifications of the type IIB superstring, where Γ is the abelian group Zn. We present an appealing three-family N = 0 SUSY model with n = 7 for the order of the orbifolding group. This results in a modified Pati–Salam model which reduces to the Standard Model after symmetry breaking and has interesting phenomenological consequences for the LHC.
Abstract:
This work provides a forward step in the study and comprehension of the relationships between stochastic processes and a certain class of integro-partial differential equations, which can be used to model anomalous diffusion and transport in statistical physics. In the first part, we brought the reader through the fundamental notions of probability and stochastic processes, as well as stochastic integration and stochastic differential equations. In particular, within the study of H-sssi processes, we focused on fractional Brownian motion (fBm) and its discrete-time increment process, the fractional Gaussian noise (fGn), which provide examples of non-Markovian Gaussian processes. The fGn, together with stationary FARIMA processes, is widely used in the modeling and estimation of long memory, or long-range dependence (LRD). Time series manifesting long-range dependence are often observed in nature, especially in physics, meteorology and climatology, but also in hydrology, geophysics, economics and many other fields. We studied LRD in depth, giving many real-data examples, providing statistical analysis and introducing parametric methods of estimation. Then, we introduced the theory of fractional integrals and derivatives, which indeed turns out to be very appropriate for studying and modeling systems with long-memory properties. After having introduced the basic concepts, we provided many examples and applications. For instance, we investigated the relaxation equation with distributed-order time-fractional derivatives, which describes models characterized by a strong memory component and can be used to model relaxation in complex systems that deviates from the classical exponential Debye pattern. Then, we focused on generalizations of the standard diffusion equation, passing through a preliminary study of the fractional forward drift equation. Such generalizations have been obtained by using fractional integrals and derivatives of distributed orders. In order to find a connection between the anomalous diffusion described by these equations and long-range dependence, we introduced and studied the generalized grey Brownian motion (ggBm), which is actually a parametric class of H-sssi processes whose marginal probability density functions evolve in time according to a partial integro-differential equation of fractional type. The ggBm is of course non-Markovian. Throughout the work we have remarked many times that, starting from a master equation for a probability density function f(x,t), it is always possible to define an equivalence class of stochastic processes with the same marginal density function f(x,t). All these processes provide suitable stochastic models for the starting equation. Studying the ggBm, we focused on a subclass made up of processes with stationary increments. The ggBm has been defined canonically in the so-called grey noise space. However, we have been able to provide a characterization that does not depend on the underlying probability space. We also pointed out that the generalized grey Brownian motion is a direct generalization of a Gaussian process and, in particular, that it generalizes both Brownian motion and fractional Brownian motion. Finally, we introduced and analyzed a more general class of diffusion-type equations related to certain non-Markovian stochastic processes. We started from the forward drift equation, which has been made non-local in time by the introduction of a suitably chosen memory kernel K(t).
The resulting non-Markovian equation has been interpreted in a natural way as the evolution equation of the marginal density function of a random time process l(t). We then considered the subordinated process Y(t) = X(l(t)), where X(t) is a Markovian diffusion. The corresponding time evolution of the marginal density function of Y(t) is governed by a non-Markovian Fokker-Planck equation which involves the same memory kernel K(t). We developed several applications and derived exact solutions. Moreover, we considered different stochastic models for the given equations, providing path simulations.
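As a small illustration of the long-range-dependence analysis discussed above, here is a sketch of the aggregated-variance estimator of the Hurst exponent: for an LRD series, the variance of block means decays like m^(2H-2) in the block size m, so H can be read off a log-log regression. The white-noise input below is only a sanity check (it should give H close to 0.5), not data from the thesis:

```python
import numpy as np

def hurst_aggregated_variance(x, block_sizes):
    """Estimate the Hurst exponent H from the scaling of block-mean variances.

    For a long-range dependent series, Var(block mean) ~ m**(2H - 2),
    so H follows from the slope of a log-log regression.
    """
    variances = []
    for m in block_sizes:
        n_blocks = len(x) // m
        blocks = x[: n_blocks * m].reshape(n_blocks, m)
        variances.append(blocks.mean(axis=1).var())
    slope, _ = np.polyfit(np.log(block_sizes), np.log(variances), 1)
    return 1.0 + slope / 2.0

rng = np.random.default_rng(0)
x = rng.normal(size=100_000)   # uncorrelated noise has no memory, so H should be ~0.5
print(hurst_aggregated_variance(x, block_sizes=[10, 20, 50, 100, 200, 500]))
```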