996 results for Exponential smoothing methods
Abstract:
In this paper, various types of fault detection methods for fuel cells are compared, for example those that use a model-based approach, a data-driven approach, or a combination of the two. The potential advantages and drawbacks of each method are discussed and comparisons between methods are made. In particular, classification algorithms are investigated, which separate a data set into classes or clusters based on some prior knowledge or measure of similarity. Specifically, the application of classification methods to vectors of currents reconstructed by magnetic tomography, or directly to vectors of magnetic field measurements, is explored. Bases are simulated using the finite integration technique (FIT), and regularization techniques are employed to overcome ill-posedness. Fisher's linear discriminant is used to illustrate these concepts. Numerical experiments show that the ill-posedness of the magnetic tomography problem carries over to the classification problem on magnetic field measurements as well. This is independent of the particular working mode of the cell but is influenced by the type of faulty behavior that is studied. The numerical results demonstrate the ill-posedness through the exponential decay of the singular values for three examples of fault classes.
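A minimal sketch of the classification step, assuming two synthetic classes of magnetic field measurement vectors (the paper's FIT-simulated bases and fault classes are not reproduced); the dimensions, class shift, and ridge regularization value are illustrative assumptions:

```python
import numpy as np

# Synthetic stand-ins for vectors of magnetic field measurements:
# class 0 = "healthy" cell, class 1 = one fault class (illustrative only).
rng = np.random.default_rng(0)
n_per_class, n_sensors = 200, 60
B_healthy = rng.normal(0.0, 1.0, size=(n_per_class, n_sensors))
B_faulty = rng.normal(0.3, 1.0, size=(n_per_class, n_sensors))

def fisher_direction(X0, X1, ridge=1e-6):
    """Two-class Fisher linear discriminant: w = Sw^{-1} (m1 - m0).
    The small ridge term regularizes the within-class scatter matrix,
    mirroring the regularization needed when the problem is ill-posed."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)
    Sw += ridge * np.eye(X0.shape[1])
    return np.linalg.solve(Sw, m1 - m0)

w = fisher_direction(B_healthy, B_faulty)
threshold = 0.5 * ((B_healthy @ w).mean() + (B_faulty @ w).mean())
print("fault detection rate:", ((B_faulty @ w) > threshold).mean())
```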
Abstract:
Purpose: To quantify the extent to which the new registration method DARTEL (Diffeomorphic Anatomical Registration Through Exponentiated Lie Algebra) may reduce the required smoothing kernel width, and to investigate the minimum group size necessary for voxel-based morphometry (VBM) studies. Materials and Methods: A simulated-atrophy approach was employed to explore the roles of smoothing kernel, group size, and their interaction on VBM detection accuracy. Group sizes of 10, 15, 25, and 50 were compared for kernel widths between 0 and 12 mm. Results: A smoothing kernel of 6 mm achieved the highest atrophy detection accuracy for groups of 50 participants, and 8-10 mm for groups of 25, at P < 0.05 with family-wise error correction. The results further demonstrated that a group size of 25 was the lower limit when two different groups of participants were compared, whereas a group size of 15 was the minimum for longitudinal comparisons, but only at P < 0.05 with false discovery rate correction. Conclusion: Our data confirm that DARTEL-based VBM generally benefits from smaller kernels and that different kernels perform best for different group sizes, with a tendency toward smaller kernels for larger groups. Importantly, kernel selection was also affected by the statistical threshold applied. This highlights that the choice of kernel in relation to group size should be considered with care.
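For readers unfamiliar with how a smoothing kernel of a given width is applied, the following sketch smooths a grey-matter map with an isotropic Gaussian kernel specified by its FWHM in millimetres; the voxel size and the volume itself are illustrative assumptions, not the study's data:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def smooth_fwhm(volume, fwhm_mm, voxel_size_mm):
    """Smooth a 3-D map with an isotropic Gaussian kernel of the given
    full width at half maximum (FWHM = sigma * sqrt(8 ln 2))."""
    sigma_vox = fwhm_mm / (np.sqrt(8.0 * np.log(2.0)) * voxel_size_mm)
    return gaussian_filter(volume, sigma=sigma_vox)

# Illustrative grey-matter probability map (e.g. a DARTEL-warped segmentation).
gm_map = np.random.rand(121, 145, 121).astype(np.float32)
gm_6mm = smooth_fwhm(gm_map, fwhm_mm=6.0, voxel_size_mm=1.5)   # 6 mm kernel
gm_8mm = smooth_fwhm(gm_map, fwhm_mm=8.0, voxel_size_mm=1.5)   # 8 mm kernel
```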
Abstract:
Three different types of maltodextrin-encapsulated dehydrated blackberry fruit powders were obtained using vibrofluidized bed drying (VF), spray drying (SD), vacuum drying (VD), and freeze drying (FD). Moisture equilibrium data of blackberry pulp powders with 18% maltodextrin were determined at 20, 30, 40, and 50 °C using the static gravimetric method over the water activity range of 0.06-0.90. The experimental equilibrium moisture content versus water activity data were fitted to the Guggenheim-Anderson-de Boer (GAB) model, and good agreement was found between experimental and calculated values. The isosteric heat of sorption of water was determined from the equilibrium data using the Clausius-Clapeyron equation; the isosteric heats of sorption were found to increase with increasing temperature and could be fitted by an exponential relationship. For the freeze-dried, vibrofluidized, and vacuum-dried pulp powder samples, the isosteric heats of sorption were lower (more negative) than those calculated for the spray-dried samples. The enthalpy-entropy compensation theory was applied to the sorption isotherms, and plots of ΔH versus ΔS provided the isokinetic temperatures, indicating an enthalpy-controlled sorption process.
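A hedged sketch of fitting the GAB model to equilibrium sorption data with nonlinear least squares; the data points and starting values below are invented for illustration and are not the blackberry powder measurements:

```python
import numpy as np
from scipy.optimize import curve_fit

def gab(aw, Xm, C, K):
    """Guggenheim-Anderson-de Boer (GAB) isotherm: equilibrium moisture
    content as a function of water activity aw."""
    return Xm * C * K * aw / ((1 - K * aw) * (1 - K * aw + C * K * aw))

# Illustrative sorption data (water activity, g water / g dry solids).
aw = np.array([0.06, 0.11, 0.23, 0.33, 0.44, 0.54, 0.65, 0.75, 0.84, 0.90])
Xeq = np.array([0.02, 0.03, 0.05, 0.06, 0.08, 0.10, 0.14, 0.19, 0.28, 0.40])

popt, _ = curve_fit(gab, aw, Xeq, p0=[0.05, 5.0, 0.9], maxfev=5000)
Xm, C, K = popt
print(f"GAB parameters: Xm={Xm:.4f}, C={C:.2f}, K={K:.3f}")
```

The isosteric heat of sorption would then follow from the slope of ln(aw) versus 1/T at constant moisture content, per the Clausius-Clapeyron equation.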
Abstract:
This paper presents the characterization of single-mode waveguides for the 980 and 1550 nm wavelengths. High-quality planar waveguide structures were fabricated from Y₁₋ₓErₓAl₃(BO₃)₄ multilayer thin films with x = 0.02, 0.05, 0.1, 0.3, and 0.5, prepared through the polymeric precursor and sol-gel methods using spin coating. The propagation losses of the planar waveguides, measured at 632.8 and 1550 nm, varied from 0.63 to 0.88 dB/cm. The photoluminescence spectra and radiative lifetimes of the Er³⁺ ⁴I₁₃/₂ energy level were measured in waveguiding geometry. For most samples the photoluminescence decay was single-exponential, with lifetimes between 200 μs and 640 μs depending on the erbium concentration and synthesis method. These results indicate that Er-doped YAl₃(BO₃)₄ compounds are promising for low-loss waveguides.
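A minimal sketch of how a radiative lifetime is extracted from a single-exponential photoluminescence decay; the decay trace below is synthetic, generated only to illustrate the fit:

```python
import numpy as np
from scipy.optimize import curve_fit

def single_exp(t, A, tau, offset):
    """Single-exponential decay I(t) = A * exp(-t / tau) + offset."""
    return A * np.exp(-t / tau) + offset

rng = np.random.default_rng(0)
t = np.linspace(0, 3000, 600)                        # time in microseconds
I_noisy = (single_exp(t, A=1.0, tau=640.0, offset=0.01)
           + rng.normal(0, 0.01, size=t.size))       # fake 640-us decay trace

popt, _ = curve_fit(single_exp, t, I_noisy, p0=[1.0, 500.0, 0.0])
print(f"fitted lifetime: {popt[1]:.0f} us")
```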
Abstract:
In this article, we give an asymptotic formula of order n^{-1/2}, where n is the sample size, for the skewness of the distributions of the maximum likelihood estimates of the parameters in exponential family nonlinear models. We generalize the result of Cordeiro and Cordeiro (2001). The formula is given in matrix notation and is very suitable for computer implementation and for obtaining closed-form expressions for a great variety of models. Some special cases and two applications are discussed.
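The paper's matrix formula is not reproduced here, but the order-n^{-1/2} behaviour of the skewness can be illustrated in the simplest exponential family case, where the MLE of the exponential mean is the sample mean and its skewness is exactly 2/sqrt(n); a quick Monte Carlo check (an illustration, not the authors' general result):

```python
import numpy as np

rng = np.random.default_rng(1)

def mle_skewness(n, reps=20000):
    """Monte Carlo skewness of the MLE of the mean of an exponential
    distribution (i.e. the sample mean), which decays like n**-0.5."""
    mle = rng.exponential(1.0, size=(reps, n)).mean(axis=1)
    z = (mle - mle.mean()) / mle.std()
    return (z ** 3).mean()

for n in (10, 40, 160):
    print(n, round(mle_skewness(n), 3), "theory:", round(2 / np.sqrt(n), 3))
```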
Abstract:
The generalized exponential distribution, proposed by Gupta and Kundu (1999), is a good alternative to standard lifetime distributions such as the exponential, Weibull, or gamma. Several authors have considered the problem of Bayesian estimation of the parameters of the generalized exponential distribution, assuming independent gamma priors and other informative priors. In this paper, we consider a Bayesian analysis of the generalized exponential distribution assuming conventional non-informative prior distributions, such as the Jeffreys and reference priors, to estimate the parameters. These priors are compared with independent gamma priors for both parameters. The comparison is carried out by examining the frequentist coverage probabilities of the Bayesian credible intervals. We show that the maximal data information prior leads to an improper posterior distribution for the parameters of a generalized exponential distribution. It is also shown that the choice of the parameter of interest is very important for the reference prior, as different choices lead to different reference priors in this case. Numerical inference for the parameters is illustrated by considering data sets of different sizes and using MCMC (Markov chain Monte Carlo) methods.
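A hedged sketch of the kind of MCMC computation involved, using a random-walk Metropolis sampler for the generalized exponential likelihood under independent gamma priors; the data set, prior hyperparameters, proposal scale, and chain length are all illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

def ge_loglik(x, alpha, lam):
    """Log-likelihood of the generalized exponential distribution
    f(x) = alpha * lam * exp(-lam*x) * (1 - exp(-lam*x))**(alpha - 1)."""
    return (np.log(alpha) + np.log(lam) - lam * x
            + (alpha - 1) * np.log1p(-np.exp(-lam * x))).sum()

def log_post(x, alpha, lam, a=0.01, b=0.01):
    # independent Gamma(a, b) priors on alpha and lam (illustrative values)
    log_prior = ((a - 1) * np.log(alpha) - b * alpha
                 + (a - 1) * np.log(lam) - b * lam)
    return ge_loglik(x, alpha, lam) + log_prior

x = rng.gamma(2.0, 1.0, size=50)          # illustrative data set

theta = np.log([1.0, 1.0])                # chain state: (log alpha, log lambda)
chain = []
for _ in range(20000):
    prop = theta + rng.normal(0, 0.1, size=2)           # random-walk proposal
    # Jacobian of the log transform contributes prop.sum() - theta.sum()
    log_r = (log_post(x, *np.exp(prop)) + prop.sum()
             - log_post(x, *np.exp(theta)) - theta.sum())
    if np.log(rng.uniform()) < log_r:
        theta = prop
    chain.append(np.exp(theta))

chain = np.array(chain[5000:])            # discard burn-in
print("posterior means (alpha, lambda):", chain.mean(axis=0).round(3))
```

Frequentist coverage would then be estimated by repeating such a run over many simulated data sets and counting how often the credible intervals contain the true parameter values.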
Abstract:
This paper deals with the exponential stability of discrete-time singular systems with Markov jump parameters. We propose a set of coupled generalized Lyapunov equations (CGLE) that provides sufficient conditions for checking this property for this class of systems. A method for solving the resulting CGLE is also presented, based on iterations of standard singular Lyapunov equations. A numerical example is given to illustrate the effectiveness of the proposed approach.
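The paper's CGLE for singular systems are not reproduced here; as a heavily simplified illustration of iterating on coupled Lyapunov equations, the sketch below runs the standard fixed-point iteration for a non-singular discrete-time Markov jump linear system with invented mode matrices, where convergence indicates mean-square (exponential) stability:

```python
import numpy as np

def coupled_lyapunov_iteration(A_modes, Pi, iters=500, tol=1e-10):
    """Fixed-point iteration for the coupled Lyapunov equations
        P_i = A_i^T ( sum_j Pi[i, j] * P_j ) A_i + I
    of x_{k+1} = A_{theta_k} x_k (standard, non-singular case only)."""
    N, n = len(A_modes), A_modes[0].shape[0]
    P = [np.eye(n) for _ in range(N)]
    for _ in range(iters):
        E = [sum(Pi[i, j] * P[j] for j in range(N)) for i in range(N)]
        P_new = [A_modes[i].T @ E[i] @ A_modes[i] + np.eye(n) for i in range(N)]
        if max(np.abs(P_new[i] - P[i]).max() for i in range(N)) < tol:
            return True, P_new        # converged -> mean-square stable
        P = P_new
    return False, P                   # no convergence within the budget

# Two illustrative modes and a transition probability matrix (not from the paper).
A1 = np.array([[0.5, 0.2], [0.0, 0.4]])
A2 = np.array([[0.3, -0.1], [0.2, 0.6]])
Pi = np.array([[0.9, 0.1], [0.3, 0.7]])
stable, P = coupled_lyapunov_iteration([A1, A2], Pi)
print("mean-square stable:", stable)
```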
Abstract:
The exponential-logarithmic distribution is a new lifetime distribution with decreasing failure rate and interesting applications in the biological and engineering sciences, so a Bayesian analysis of its parameters is desirable. Bayesian estimation requires the selection of prior distributions for all parameters of the model. In this case, researchers usually seek to choose priors carrying little information about the parameters, so that the data are highly informative relative to the prior. Assuming some noninformative prior distributions, we present a Bayesian analysis using Markov chain Monte Carlo (MCMC) methods. The Jeffreys prior is derived for the parameters of the exponential-logarithmic distribution and compared with other common priors such as the beta, gamma, and uniform distributions. We show through a simulation study that the maximum likelihood estimate may not exist except under restrictive conditions. In addition, the posterior density is sometimes bimodal when an improper prior density is used.
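A sketch of a direct maximum likelihood attempt for the exponential-logarithmic model, assuming the common two-parameter form f(x; p, β) = β(1-p)e^{-βx} / [(-ln p)(1 - (1-p)e^{-βx})] with 0 < p < 1 and β > 0 (this parameterization and the data are assumptions made for illustration); in line with the abstract, the optimizer can drift toward the boundary of the parameter space when no interior maximum exists:

```python
import numpy as np
from scipy.optimize import minimize

def el_neg_loglik(params, x):
    """Negative log-likelihood of the exponential-logarithmic distribution,
    parameterized on the unconstrained scale (logit p, log beta)."""
    logit_p, log_beta = params
    p = 1.0 / (1.0 + np.exp(-logit_p))
    beta = np.exp(log_beta)
    z = (1.0 - p) * np.exp(-beta * x)
    ll = (np.log(beta) + np.log(1.0 - p) - beta * x
          - np.log(-np.log(p)) - np.log(1.0 - z))
    return -ll.sum()

rng = np.random.default_rng(3)
x = rng.exponential(1.0, size=100) * rng.uniform(0.5, 2.0, size=100)  # toy data

res = minimize(el_neg_loglik, x0=[0.0, 0.0], args=(x,), method="Nelder-Mead")
p_hat = 1.0 / (1.0 + np.exp(-res.x[0]))
print(f"p_hat={p_hat:.3f}, beta_hat={np.exp(res.x[1]):.3f}, ok={res.success}")
```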
Weibull and generalised exponential overdispersion models with an application to ozone air pollution
Abstract:
We consider the problem of estimating the mean and variance of the time between occurrences of an event of interest (inter-occurrence times) where some form of dependence between two consecutive time intervals is allowed. Two basic density functions are taken into account: the Weibull and the generalised exponential. In order to capture the dependence between two consecutive inter-occurrence times, we assume that the shape and/or the scale parameters of the two density functions are given by autoregressive models. Expressions for the mean and variance of the inter-occurrence times are presented. The models are applied to ozone data from two regions of Mexico City. The estimation of the parameters is performed from a Bayesian point of view via Markov chain Monte Carlo (MCMC) methods.
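A hedged simulation sketch of the kind of dependence structure described, with the log-scale of a Weibull inter-occurrence time driven by the previous interval through an autoregressive-type link; the exact regression used in the paper may differ, and all parameter values here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)

def simulate_dependent_weibull(n, shape=1.3, a0=0.2, a1=0.5, x0=1.0):
    """Inter-occurrence times x_t ~ Weibull(shape, scale_t) with
    log(scale_t) = a0 + a1 * log(x_{t-1})  (illustrative AR-type link)."""
    x, prev = np.empty(n), x0
    for t in range(n):
        scale_t = np.exp(a0 + a1 * np.log(prev))
        x[t] = scale_t * rng.weibull(shape)
        prev = x[t]
    return x

times = simulate_dependent_weibull(5000)
print("lag-1 correlation of log inter-occurrence times:",
      np.corrcoef(np.log(times[:-1]), np.log(times[1:]))[0, 1].round(3))
```

Bayesian estimation of (shape, a0, a1) would then proceed by MCMC, as in the paper.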
Abstract:
In this paper, we establish the existence of many rotationally non-equivalent and nonradial solutions for the following class of quasilinear problems (p) {-Delta(N)u = lambda f(vertical bar x vertical bar, u) x is an element of Omega(r), u > 0 x is an element of Omega(r), u = 0 x is an element of Omega(r), where Omega(r) = {x is an element of R-N : r < vertical bar x vertical bar < r + 1}, N >= 2, N not equal 3, r >0, lambda > 0, Delta(N)u = div(vertical bar del u vertical bar(N-2)del u) is the N-Laplacian operator and f is a continuous function with exponential critical growth.
Abstract:
This paper is concerned with the energy decay for a class of plate equations with memory and a lower-order perturbation of p-Laplacian type, $u_{tt} + \Delta^2 u - \Delta_p u + \int_0^t g(t-s)\,\Delta u(s)\,ds - \Delta u_t + f(u) = 0$ in $\Omega \times \mathbb{R}^+$, with simply supported boundary conditions, where $\Omega$ is a bounded domain of $\mathbb{R}^N$, $g > 0$ is a memory kernel that decays exponentially, and $f(u)$ is a nonlinear perturbation. This kind of problem, without the memory term, models elastoplastic flows.
Abstract:
Background: With the development of DNA hybridization microarray technologies, it is nowadays possible to simultaneously assess the expression levels of thousands to tens of thousands of genes. Quantitative comparison of microarrays uncovers distinct patterns of gene expression, which define different cellular phenotypes or cellular responses to drugs. Due to technical biases, normalization of the intensity levels is a prerequisite to performing further statistical analyses; choosing a suitable normalization approach is therefore critical and deserves judicious consideration. Results: We considered three commonly used normalization approaches, namely Loess, Splines, and Wavelets, and two non-parametric regression methods that had not previously been used for normalization, namely Kernel smoothing and Support Vector Regression. The results were compared using artificial microarray data and benchmark studies. They indicate that Support Vector Regression is the most robust to outliers and that Kernel smoothing is the worst normalization technique, while no practical differences were observed between Loess, Splines, and Wavelets. Conclusion: In light of these results, Support Vector Regression is favored for microarray normalization owing to its superiority over the other methods in robustly estimating the normalization curve.
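A minimal sketch of SVR-based normalization on synthetic two-channel data: the intensity-dependent trend of the log-ratio M against the average intensity A is fitted with Support Vector Regression and subtracted; the simulated bias and the SVR hyperparameters are illustrative assumptions:

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(5)

# Synthetic spots with an intensity-dependent dye bias.
n_spots = 3000
A = rng.uniform(6, 16, n_spots)                 # average log2 intensity
bias = 0.8 * np.exp(-0.3 * (A - 6))             # nonlinear bias vs. intensity
M = bias + rng.normal(0, 0.3, n_spots)          # observed log2 ratio

# Fit the M-vs-A trend and subtract it, centring the ratios at every intensity.
svr = SVR(kernel="rbf", C=1.0, epsilon=0.05, gamma="scale")
svr.fit(A.reshape(-1, 1), M)
M_normalized = M - svr.predict(A.reshape(-1, 1))

print("mean |M| before:", np.abs(M).mean().round(3),
      "after:", np.abs(M_normalized).mean().round(3))
```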
Abstract:
Some fundamental biological processes, such as embryonic development, have been preserved during evolution and are common to species belonging to different phylogenetic positions, yet they remain largely unknown. An understanding of the cell morphodynamics leading to organized spatial distributions of cells, such as tissues and organs, can be achieved through the reconstruction of cell shapes and positions during the development of a live animal embryo. In this work we design a chain of image processing methods to automatically segment and track cell nuclei and membranes during the development of a zebrafish embryo, an organism widely validated as a model for understanding vertebrate development, gene function, and healing/repair mechanisms in vertebrates. The embryo is first labeled through the ubiquitous expression of fluorescent proteins addressed to cell nuclei and membranes, and temporal sequences of volumetric images are acquired with laser scanning microscopy. Cell positions are detected by processing the nuclei images either through a generalized form of the Hough transform or by identifying nuclei positions as local maxima after a smoothing preprocessing step. Membrane and nuclei shapes are reconstructed using PDE-based variational techniques such as the Subjective Surfaces and Chan-Vese methods. Cell tracking is performed by combining the information previously detected on cell shapes and positions with biological regularization constraints. Our results are manually validated and reconstruct the formation of the zebrafish brain at the 7-8 somite stage, with all cells tracked starting from the late sphere stage with less than 2% error for at least 6 hours. Our reconstruction opens the way to a systematic investigation of cellular behaviors, of the clonal origin and clonal complexity of brain organs, as well as of the contribution of cell proliferation modes and cell movements to the formation of local patterns and morphogenetic fields.
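A sketch of one step of such a pipeline (nucleus-centre detection by Gaussian smoothing followed by local-maxima extraction) on a synthetic volume; the Hough-transform, Subjective Surfaces, Chan-Vese, and tracking stages are not reproduced, and all sizes and thresholds are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

rng = np.random.default_rng(6)

# Synthetic 3-D stack: a few bright Gaussian blobs standing in for
# fluorescently labelled nuclei, plus acquisition noise.
vol = rng.normal(0, 0.05, size=(64, 64, 64))
zz, yy, xx = np.mgrid[0:64, 0:64, 0:64]
for cz, cy, cx in [(16, 20, 30), (40, 45, 12), (50, 20, 50)]:
    vol += np.exp(-((zz - cz)**2 + (yy - cy)**2 + (xx - cx)**2) / (2 * 3.0**2))

# Smoothing preprocessing step, then local maxima as candidate nucleus centres.
smoothed = gaussian_filter(vol, sigma=2.0)
centres = peak_local_max(smoothed, min_distance=5, threshold_abs=0.3)
print("detected nucleus centres:\n", centres)
```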