38 results for Empirical Algorithm Analysis
Abstract:
Starting from the Durbin algorithm in a polynomial space with an inner product defined by the signal autocorrelation matrix, an isometric transformation is defined that maps this vector space into another one in which the Levinson algorithm is performed. Alternatively, for iterative algorithms such as discrete all-pole (DAP) modelling, an efficient implementation of a Gohberg-Semencul (GS) relation is developed for the inversion of the autocorrelation matrix that exploits its centrosymmetry. In the solution of the autocorrelation equations, the Levinson algorithm is found to be operationally less complex than the procedures based on GS inversion for up to at least five iterations at various linear prediction (LP) orders.
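The abstract's polynomial-space derivation is not reproduced here, but the recursion it builds on is the textbook Levinson-Durbin solver for the LP normal equations. A minimal NumPy sketch (sign convention and variable names are ours, not the paper's):

```python
import numpy as np

def levinson_durbin(r, order):
    """Solve the LP normal equations Toeplitz(r) a = -[r1..rp] recursively.

    r : autocorrelation sequence r[0], ..., r[order].
    Returns the predictor coefficients a and the final prediction error.
    """
    a = np.zeros(order)
    e = r[0]                               # zeroth-order prediction error
    for i in range(order):
        acc = r[i + 1]
        for j in range(i):
            acc += a[j] * r[i - j]
        k = -acc / e                       # reflection (PARCOR) coefficient
        a_prev = a[:i].copy()
        for j in range(i):                 # order-update of the coefficients
            a[j] = a_prev[j] + k * a_prev[i - 1 - j]
        a[i] = k
        e *= (1.0 - k * k)                 # error-power update
    return a, e

r = np.array([2.0, 1.0, 0.5])
a, err = levinson_durbin(r, 2)             # solves [[2,1],[1,2]] a = -[1, 0.5]
```

The operation count of this loop (quadratic in the order, with no matrix inversion) is what the abstract compares against the GS-inversion-based procedures.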
Abstract:
The most popular algorithms for blind equalization are the constant-modulus algorithm (CMA) and the Shalvi-Weinstein algorithm (SWA). It is well known that SWA presents a higher convergence rate than CMA, at the expense of higher computational complexity. If the forgetting factor is not sufficiently close to one, if the initialization is distant from the optimal solution, or if the signal-to-noise ratio is low, SWA can converge to undesirable local minima or even diverge. In this paper, we show that divergence can be caused by an inconsistency in the nonlinear estimate of the transmitted signal, or (when the algorithm is implemented in finite precision) by the loss of positiveness of the estimate of the autocorrelation matrix, or by a combination of both. In order to avoid the first cause of divergence, we propose a dual-mode SWA. In the first mode of operation, the new algorithm works as SWA; in the second mode, it rejects inconsistent estimates of the transmitted signal. Assuming the persistence-of-excitation condition, we present a deterministic stability analysis of the new algorithm. To avoid the second cause of divergence, we propose a dual-mode lattice SWA, which is stable even in finite-precision arithmetic and has a computational complexity that increases linearly with the number of adjustable equalizer coefficients. The good performance of the proposed algorithms is confirmed through numerical simulations.
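The dual-mode and lattice variants proposed in the paper are not spelled out in the abstract; for orientation, the baseline CMA it refers to is a one-line stochastic-gradient update on the constant-modulus cost. A minimal sketch (centre-spike initialization, dispersion constant R2, and all parameter values are our assumptions):

```python
import numpy as np

def cma_equalize(x, n_taps=11, mu=1e-3, R2=1.0):
    """One pass of the constant-modulus algorithm over received samples x.

    Minimizes E[(|y|^2 - R2)^2] by stochastic gradient descent on the
    equalizer taps w, where y = w^H u is the equalizer output.
    """
    w = np.zeros(n_taps, dtype=complex)
    w[n_taps // 2] = 1.0                       # centre-spike initialization
    y = np.zeros(len(x) - n_taps + 1, dtype=complex)
    for n in range(len(y)):
        u = x[n:n + n_taps][::-1]              # regressor, most recent first
        y[n] = np.vdot(w, u)                   # w^H u
        e = y[n] * (np.abs(y[n]) ** 2 - R2)    # constant-modulus error
        w = w - mu * np.conj(e) * u            # stochastic-gradient step
    return w, y

rng = np.random.default_rng(0)
# QPSK symbols (unit modulus) through a trivial channel: already equalized,
# so the CM error stays ~0 and the taps remain at the centre spike.
x = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 400)))
w, y = cma_equalize(x)
```

SWA replaces this fixed-step gradient with a quasi-Newton update using an estimated autocorrelation matrix, which is where the forgetting factor and the positiveness issue discussed in the abstract enter.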
Abstract:
The present work reports the fabrication of porous alumina structures and a quantitative study of their structural characteristics based on mathematical morphology analysis of SEM images. The algorithm used in this work was implemented in MATLAB 6.2. Using the algorithm, it was possible to obtain the distributions of the maximum, minimum and average pore radii in the porous alumina structures. Additionally, by calculating the area occupied by the pores, it was possible to obtain the porosity of the structures. The quantitative results could be related to the characteristics of the fabrication process, proving reliable and promising for controlling the pore formation process. This technique could therefore provide a more accurate determination of pore sizes and pore distribution. (C) 2008 Elsevier Ltd. All rights reserved.
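The paper's MATLAB morphology code is not given; a rough Python/SciPy equivalent of the measurements it describes (porosity as pore-area fraction, plus an equivalent-circle radius per pore) might look like the following. The assumption that pores appear darker than the alumina matrix is ours:

```python
import numpy as np
from scipy import ndimage

def pore_statistics(image, threshold):
    """Porosity and equivalent pore radii from a grayscale SEM image.

    Pixels below `threshold` are treated as pore (darker than the matrix).
    """
    pores = image < threshold                    # binary pore mask
    porosity = pores.mean()                      # area fraction of pores
    labels, n = ndimage.label(pores)             # connected pore regions
    areas = ndimage.sum(pores, labels, index=np.arange(1, n + 1))
    radii = np.sqrt(areas / np.pi)               # equivalent-circle radius, px
    return porosity, radii

# Synthetic 10x10 "SEM image": bright matrix with two dark pores.
img = np.full((10, 10), 255, dtype=np.uint8)
img[2:4, 2:4] = 0          # a 4-pixel pore
img[7, 7] = 0              # a 1-pixel pore
porosity, radii = pore_statistics(img, threshold=128)
```

On real micrographs the radii would be scaled by the pixel size, and the min/max/average statistics quoted in the abstract follow directly from `radii`.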
Abstract:
In this paper, we compare three residuals to assess departures from the error assumptions as well as to detect outlying observations in log-Burr XII regression models with censored observations. These residuals can also be used for the log-logistic regression model, which is a special case of the log-Burr XII regression model. For different parameter settings, sample sizes and censoring percentages, various simulation studies are performed and the empirical distribution of each residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to the modified martingale-type residual in log-Burr XII regression models with censored data.
Abstract:
This paper proposes a regression model considering the modified Weibull distribution. This distribution can be used to model bathtub-shaped failure rate functions. Assuming censored data, we consider maximum likelihood and Jackknife estimators for the parameters of the model. We derive the appropriate matrices for assessing local influence on the parameter estimates under different perturbation schemes and we also present some ways to perform global influence. In addition, for different parameter settings, sample sizes and censoring percentages, various simulations are performed and the empirical distribution of the modified deviance residual is displayed and compared with the standard normal distribution. These studies suggest that the residual analysis usually performed in normal linear regression models can be straightforwardly extended to a martingale-type residual in log-modified Weibull regression models with censored data. Finally, we analyze a real data set under log-modified Weibull regression models. A diagnostic analysis and a model checking based on the modified deviance residual are performed to select appropriate models. (c) 2008 Elsevier B.V. All rights reserved.
Abstract:
Consumers worldwide are increasingly concerned with sustainable production and consumption. Recently, a comprehensive study ranked 17 countries with regard to environmentally friendly behaviour among consumers, and Brazil was one of the top countries on the list. Yet several studies highlight significant differences between consumers' intentions to consume ethically and their actual purchase behaviour: the so-called 'Attitude-Behaviour Gap'. In developing countries, few studies have been conducted on this issue. The objective of this study is therefore to investigate the gap between citizens' sustainability-related attitudes and food purchasing behaviour using empirical data from Brazil. To this end, Brazilian citizens' attitudes towards pig production systems were mapped through conjoint analysis, and their coexistence with relevant pork-product-related purchasing behaviour of consumers was investigated through cluster analysis. The conjoint experiment was carried out with empirical data collected from 475 respondents surveyed in the South and Center-West regions of Brazil. The results of the conjoint analysis were used in a subsequent cluster analysis to identify clusters of Brazilian citizens with diverse attitudes towards pig production systems, using socio-demographics, attitudes towards sustainability-related themes expected to influence the way they evaluate pig production systems, and consumption frequency of various pork products as the clusters' background information. Three clusters were identified: 'indifferent', 'environmentally conscious' and 'sustainability-oriented' citizens. Although attitudes towards the environment and nature did influence citizens' specific attitudes towards pig farming at the cluster level, the relationship between 'citizenship' and consumption behaviour was found to be weak.
This finding is similar to previous research conducted with European consumers: what people (in their role of citizens) think about pig production systems does not appear to significantly influence their pork consumption choices. Improvements in the integrated management of this chain would better meet consumers` sustainability-related expectations towards pig production systems.
Abstract:
This paper examines the hysteresis hypothesis in Brazilian industrialized exports using time series analysis. This hypothesis finds an empirical representation in the nonlinear adjustments of the exported quantity to relative price changes. Thus, the threshold cointegration analysis proposed by Balke and Fomby [Balke, N.S. and Fomby, T.B. Threshold Cointegration. International Economic Review, 1997; 38; 627-645.] was used for estimating models with asymmetric adjustment of the error correction term. Among the sixteen industrial sectors selected, there was evidence of nonlinearities in the residuals of the long-run relationships of supply or demand for exports in nine of them. These nonlinearities represent asymmetric and/or discontinuous responses of exports to different representative measures of real exchange rates, in addition to other components of the long-run demand or supply equations. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the aim of interest of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method was to employ multiple eigen-time series in each ROI to avoid temporal information loss during identification of Granger causality. Such information loss is inherent in averaging (e.g., to yield a single "representative" time series per ROI). This, in turn, may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping.
By using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first principal components estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
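The conventional analysis that CGA is benchmarked against reduces each ROI to a single representative time series and runs a pairwise Granger test. A minimal sketch of that baseline (not of CGA itself; lag order, simulation, and all names are our assumptions):

```python
import numpy as np

def granger_f(x, y, p=1):
    """F-statistic of the conventional test for 'y Granger-causes x'.

    Compares an AR(p) model of x against the same model augmented with
    p lags of y (one representative time series per region).
    """
    n = len(x)
    lags = lambda s: np.column_stack([s[p - 1 - k:n - 1 - k] for k in range(p)])

    def rss(X):
        X1 = np.column_stack([np.ones(n - p), X])       # add intercept
        beta, *_ = np.linalg.lstsq(X1, x[p:], rcond=None)
        resid = x[p:] - X1 @ beta
        return resid @ resid

    rss_r = rss(lags(x))                                # restricted: own lags
    rss_f = rss(np.column_stack([lags(x), lags(y)]))    # full: plus lags of y
    df2 = (n - p) - 2 * p - 1
    return (rss_r - rss_f) / p / (rss_f / df2)

# Toy system in which yser drives xser with a one-step delay.
rng = np.random.default_rng(1)
n = 500
yser = rng.standard_normal(n)
xser = np.zeros(n)
for t in range(1, n):
    xser[t] = 0.5 * xser[t - 1] + 0.8 * yser[t - 1] + 0.1 * rng.standard_normal()
f_yx = granger_f(xser, yser)       # driving direction: large F
f_xy = granger_f(yser, xser)       # reverse direction: small F
```

CGA replaces the two scalar series with sets of eigen-time series per ROI and a partial-canonical-correlation version of this comparison.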
Abstract:
Functional MRI (fMRI) data often have a low signal-to-noise ratio (SNR) and are contaminated by strong interference from other physiological sources. A promising tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). BSS is based on the assumption that the detected signals are a mixture of a number of independent source signals that are linearly combined via an unknown mixing matrix. BSS seeks to determine the mixing matrix to recover the source signals based on principles of statistical independence. In most cases, extraction of all sources is unnecessary; instead, a priori information can be applied to extract only the signal of interest. Herein we propose an algorithm based on a variation of ICA, called Dependent Component Analysis (DCA), where the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We applied this method to fMRI data, aiming to find the hemodynamic response that follows neuronal activation from auditory stimulation in human subjects. The method localized a significant signal modulation in cortical regions corresponding to the primary auditory cortex. The results obtained by DCA were also compared to those of the General Linear Model (GLM), the most widely used method for analyzing fMRI datasets.
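The abstract describes DCA only at a high level; one standard way to exploit a known time delay for extraction is an AMUSE-style eigendecomposition of the lagged covariance of whitened data. The sketch below illustrates that idea under our own assumptions, and is not necessarily the authors' exact algorithm:

```python
import numpy as np

def delayed_source(X, lag):
    """Extract the component with the strongest autocorrelation at `lag`.

    X : (channels, samples) data matrix. AMUSE-style sketch: whiten the
    data, then eigendecompose the symmetrized lagged covariance.
    """
    X = X - X.mean(axis=1, keepdims=True)
    C0 = X @ X.T / X.shape[1]                 # zero-lag covariance
    d, E = np.linalg.eigh(C0)
    W = E @ np.diag(1.0 / np.sqrt(d)) @ E.T   # whitening matrix
    Z = W @ X
    C = Z[:, lag:] @ Z[:, :-lag].T / (Z.shape[1] - lag)
    C = (C + C.T) / 2                         # symmetrize the lagged covariance
    vals, vecs = np.linalg.eigh(C)
    w = vecs[:, np.argmax(vals)]              # direction of max lag-autocorrelation
    return w @ Z

# Toy mixture: a periodic source (period 50, strong autocorrelation at lag 50)
# mixed with white noise across two channels.
rng = np.random.default_rng(0)
n = 1000
s1 = np.sin(2 * np.pi * np.arange(n) / 50)
s2 = rng.standard_normal(n)
X = np.array([[1.0, 0.5], [0.7, 1.0]]) @ np.vstack([s1, s2])
rec = delayed_source(X, lag=50)
```

In the fMRI setting the lag would come from the autocorrelation analysis of the expected hemodynamic response, as the abstract describes.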
Abstract:
This paper demonstrates by means of joint time-frequency analysis that the acoustic noise produced by the breaking of biscuits depends on relative humidity and water activity. It also shows that the time-frequency coefficients calculated using the adaptive Gabor transformation algorithm depend on the length of time a biscuit is exposed to humidity. This is a new methodology that can be used to assess the crispness of crisp foods. (c) 2007 Elsevier Ltd. All rights reserved.
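The adaptive Gabor transform used in the paper selects its atoms adaptively; a fixed-window Gaussian STFT conveys the same underlying time-frequency picture. A toy sketch with a pure tone standing in for a biscuit-crack recording (window length, hop, and sampling rate are our choices):

```python
import numpy as np

def gabor_stft(x, win_len=128, hop=32, sigma=0.15):
    """Gaussian-windowed short-time Fourier transform: a fixed-window
    cousin of the adaptive Gabor transform used in the paper."""
    t = np.arange(win_len) - win_len / 2
    g = np.exp(-0.5 * (t / (sigma * win_len)) ** 2)    # Gaussian window
    frames = [x[i:i + win_len] * g
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1))         # magnitude spectrogram

fs = 8000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 500 * t)            # a pure 500 Hz tone
S = gabor_stft(x)                          # (frames, frequency bins)
peak_bin = S.mean(axis=0).argmax()
peak_hz = peak_bin * fs / 128              # rfft bin spacing = fs / win_len
```

For a crack transient, the interesting quantity is how the large coefficients spread across time and frequency, which is what the paper relates to humidity exposure.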
The SARS algorithm: detrending CoRoT light curves with Sysrem using simultaneous external parameters
Abstract:
Surveys for exoplanetary transits are usually limited not by photon noise but rather by the amount of red noise in their data. In particular, although the CoRoT space-based survey data are being carefully scrutinized, significant new sources of systematic noise are still being discovered. Recently, a magnitude-dependent systematic effect was discovered in the CoRoT data by Mazeh et al., and a phenomenological correction was proposed. Here we tie the observed effect to a particular type of effect, and in the process generalize the popular Sysrem algorithm to include external parameters in a simultaneous solution with the unknown effects. We show that a post-processing scheme based on this algorithm performs well and indeed allows for the detection of new transit-like signals that were not previously detected.
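The core of plain Sysrem (Tamuz et al.), which SARS generalizes, is an alternating-least-squares fit of one rank-1 "effect" c_i * a_j to a stars-by-epochs residual matrix. A sketch assuming uniform uncertainties; the SARS extension with simultaneous external parameters is not reproduced here:

```python
import numpy as np

def sysrem(resid, n_iter=100):
    """Remove one Sysrem effect c_i * a_j from a (stars x epochs) residual
    matrix by alternating least squares (uniform uncertainties assumed)."""
    n_stars, n_epochs = resid.shape
    a = np.ones(n_epochs)                  # per-epoch "airmass-like" term
    for _ in range(n_iter):
        c = resid @ a / (a @ a)            # best per-star coefficients given a
        a = resid.T @ c / (c @ c)          # best per-epoch effect given c
    return resid - np.outer(c, a), c, a

# A purely rank-1 systematic is removed (almost) exactly.
rng = np.random.default_rng(3)
c_true = rng.standard_normal(30)           # per-star amplitudes
a_true = rng.standard_normal(20)           # per-epoch systematic
R = np.outer(c_true, a_true)
cleaned, c, a = sysrem(R)
```

In the weighted version each least-squares step divides by the per-point variances, and SARS additionally solves for the coupling to known external parameters in the same minimization.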
Abstract:
We obtained long-slit spectra of high signal-to-noise ratio of the galaxy M32 with the Gemini Multi-Object Spectrograph at the Gemini-North telescope. We analysed the integrated spectra by means of full spectral fitting in order to extract the mixture of stellar populations that best represents its composite nature. Three different galactic radii were analysed, from the nuclear region out to 2 arcmin from the centre. This allows us to compare, for the first time, the results of integrated-light spectroscopy with those of resolved colour-magnitude diagrams from the literature. As a main result we propose that an ancient and an intermediate-age population co-exist in M32, and that the balance between these two populations changes between the nucleus and outside one effective radius (1 r_eff), in the sense that the contribution from the intermediate-age population is larger at the nuclear region. We retrieve a smaller signal of a young population at all radii whose origin is unclear and may be a contamination from horizontal-branch stars, such as the ones identified by Brown et al. in the nuclear region. We compare our metallicity distribution function for a region 1 to 2 arcmin from the centre to the one obtained with photometric data by Grillmair et al. Both distributions are broad, but our spectroscopically derived distribution has a significant component with [Z/Z⊙] <= -1, which is not found by Grillmair et al.
Abstract:
The South American (SA) rainy season is studied in this paper through the application of a multivariate Empirical Orthogonal Function (EOF) analysis to a SA gridded precipitation analysis and to the components of the Lorenz Energy Cycle (LEC) derived from the National Centers for Environmental Prediction (NCEP) reanalysis. The EOF analysis leads to the identification of patterns of the rainy season and the associated mechanisms in terms of their energetics. The first combined EOF represents the northwest-southeast dipole of the precipitation between South and Central America, the South American Monsoon System (SAMS). The second combined EOF represents a synoptic pattern associated with the SACZ (South Atlantic convergence zone) and the third EOF is in spatial quadrature to the second EOF. The phase relationship of the EOFs, as computed from the principal components (PCs), suggests a nonlinear transition from the SACZ to the fully developed SAMS mode by November and between both components describing the SACZ by September-October (the rainy season onset). According to the LEC, the first mode is dominated by the eddy generation term at its maximum, the second by both baroclinic and eddy generation terms, and the third by barotropic instability prior to the connection to the second mode by September-October. The predominance of the different LEC components at each phase of the SAMS can be used as an indicator of the onset of the rainy season in terms of physical processes, while the existence of the outstanding spectral peaks in the time dependence of the EOFs at the intraseasonal time scale could be used for monitoring purposes. Copyright (C) 2009 Royal Meteorological Society
Abstract:
A large amount of biological data has been produced in recent years. Important knowledge can be extracted from these data by the use of data analysis techniques. Clustering plays an important role in data analysis, by organizing similar objects from a dataset into meaningful groups. Several clustering algorithms have been proposed in the literature. However, each algorithm has its own bias, being more adequate for particular datasets. This paper presents a mathematical formulation to support the creation of consistent clusters for biological data. Moreover, it presents a clustering algorithm that solves this formulation using GRASP (Greedy Randomized Adaptive Search Procedure). We compared the proposed algorithm with three other well-known algorithms. The proposed algorithm achieved the best clustering results, a finding confirmed statistically. (C) 2009 Elsevier Ltd. All rights reserved.
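GRASP itself is a generic metaheuristic: repeated greedy-randomized construction followed by local search. A toy medoid-based clustering instance, with all design choices ours rather than the paper's formulation, can be sketched as:

```python
import numpy as np

def grasp_kmedoids(D, k, n_starts=20, rcl_size=3, seed=0):
    """GRASP sketch for medoid-based clustering on a distance matrix D.

    Each start: greedy-randomized construction (pick each medoid from a
    restricted candidate list) followed by a swap local search.
    """
    rng = np.random.default_rng(seed)
    n = len(D)

    def cost(meds):
        return D[:, meds].min(axis=1).sum()   # total distance to nearest medoid

    best, best_cost = None, np.inf
    for _ in range(n_starts):
        # construction phase: add one of the rcl_size best candidates at random
        meds = [rng.integers(n)]
        while len(meds) < k:
            gains = np.array([cost(meds + [j]) for j in range(n)])
            gains[meds] = np.inf              # never re-pick a medoid
            rcl = np.argsort(gains)[:rcl_size]
            meds.append(int(rng.choice(rcl)))
        # local search phase: swap medoids with non-medoids while improving
        improved = True
        while improved:
            improved = False
            for i in range(k):
                for j in range(n):
                    if j in meds:
                        continue
                    trial = meds.copy()
                    trial[i] = j
                    if cost(trial) < cost(meds):
                        meds, improved = trial, True
        c = cost(meds)
        if c < best_cost:
            best, best_cost = meds, c
    return best, best_cost

# Two well-separated 2-D blobs: the medoids should land one per blob.
rng = np.random.default_rng(2)
pts = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(10, 0.3, (10, 2))])
D = np.linalg.norm(pts[:, None] - pts[None], axis=2)
meds, total = grasp_kmedoids(D, k=2)
```

The paper's algorithm optimizes its own consistency-based formulation instead of this medoid cost, but the construction/local-search skeleton is the same.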
Abstract:
In this paper, we present an algorithm for cluster analysis that integrates aspects of cluster ensembles and multi-objective clustering. The algorithm is based on a Pareto-based multi-objective genetic algorithm, with a special crossover operator, which uses clustering validation measures as objective functions. The proposed algorithm can deal with data sets presenting different types of clusters, without the need for expertise in cluster analysis. Its result is a concise set of partitions representing alternative trade-offs among the objective functions. We compare the results obtained with our algorithm, in the context of gene expression data sets, to those achieved with multi-objective clustering with automatic k-determination (MOCK), the algorithm most closely related to ours. (C) 2009 Elsevier B.V. All rights reserved.
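The Pareto-based selection at the heart of such multi-objective genetic algorithms rests on a dominance filter over candidate partitions scored by the validation measures. A minimal sketch (both objectives to be minimized; the scores are toy numbers, not from the paper):

```python
import numpy as np

def pareto_front(scores):
    """Indices of non-dominated solutions (all objectives to be minimized).

    A solution is dominated if another one is no worse on every objective
    and strictly better on at least one.
    """
    scores = np.asarray(scores, dtype=float)
    keep = []
    for i, s in enumerate(scores):
        dominated = any(
            np.all(t <= s) and np.any(t < s)
            for j, t in enumerate(scores) if j != i
        )
        if not dominated:
            keep.append(i)
    return keep

# Four candidate partitions scored by two validation measures:
# the third is dominated by the second and drops out of the front.
scores = [[1.0, 5.0], [2.0, 3.0], [3.0, 4.0], [4.0, 1.0]]
front = pareto_front(scores)
```

The surviving indices form the "concise set of partitions representing alternative trade-offs" that the abstract refers to as the algorithm's output.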