796 results for Empirical Algorithm Analysis


Relevance: 30.00%

Abstract:

This paper examines the hysteresis hypothesis for Brazilian industrialized exports using time series analysis. The hypothesis finds empirical representation in nonlinear adjustment of the exported quantity to relative price changes. The threshold cointegration analysis proposed by Balke and Fomby [Balke, N.S. and Fomby, T.B. Threshold Cointegration. International Economic Review, 1997; 38; 627-645.] was therefore used to estimate models with asymmetric adjustment of the error correction term. Among the sixteen industrial sectors selected, nine showed evidence of nonlinearities in the residuals of the long-run export supply or demand relationships. These nonlinearities represent asymmetric and/or discontinuous responses of exports to different representative measures of the real exchange rate, in addition to other components of the long-run demand or supply equations. (C) 2007 Elsevier B.V. All rights reserved.
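
As a rough illustration of this estimation strategy, the sketch below fits a long-run export relationship by OLS and then a two-regime threshold autoregression on its residuals, in the spirit of Balke and Fomby. The variable names, the single price regressor, and the zero threshold are illustrative assumptions, not details taken from the paper.

```python
# Hedged sketch of asymmetric (threshold) adjustment of an error-correction
# term; variable names and the zero threshold are illustrative assumptions.
import numpy as np
import statsmodels.api as sm

def threshold_adjustment(log_exports, log_rel_price, threshold=0.0):
    # Step 1: long-run (cointegrating) relationship estimated by OLS.
    longrun = sm.OLS(log_exports, sm.add_constant(log_rel_price)).fit()
    z = np.asarray(longrun.resid)              # error-correction term

    # Step 2: threshold autoregression on the residuals, letting the
    # adjustment speed differ above and below the threshold.
    dz = np.diff(z)
    z_lag = z[:-1]
    above = (z_lag > threshold).astype(float)
    regressors = np.column_stack([above * z_lag, (1.0 - above) * z_lag])
    tar = sm.OLS(dz, regressors).fit()
    return longrun, tar    # clearly unequal TAR coefficients suggest asymmetric adjustment
```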

Relevance: 30.00%

Abstract:

Qu-Prolog is an extension of Prolog which performs meta-level computations over object languages, such as predicate calculi and lambda-calculi, which have object-level variables and quantifier or binding symbols creating local scopes for those variables. As in Prolog, the instantiable (meta-level) variables of Qu-Prolog range over object-level terms, and in addition other Qu-Prolog syntax denotes the various components of the object-level syntax, including object-level variables. Further, the meta-level operation of substitution into object-level terms is directly represented by appropriate Qu-Prolog syntax. Again as in Prolog, the driving mechanism in Qu-Prolog computation is a form of unification, but this is substantially more complex than for Prolog because of Qu-Prolog's greater generality, and especially because substitution operations are evaluated during unification. In this paper, the Qu-Prolog unification algorithm is specified, formalised and proved correct. Further, the analysis of the algorithm is carried out in a framework which straightforwardly allows the 'completeness' of the algorithm to be proved: though fully explicit answers to unification problems are not always provided, no information is lost in the unification process.
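
The sketch below is not Qu-Prolog; it is a small, hypothetical Python illustration of one complication the abstract alludes to: a substitution applied to a term containing an uninstantiated meta-level variable cannot be evaluated and has to be suspended until that variable is bound.

```python
# Toy illustration (not Qu-Prolog itself): substitution into a term that
# contains an uninstantiated meta-level variable must be suspended, because
# the result cannot be computed until that variable is bound.
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class ObjVar:
    """Object-level variable, e.g. the x in 'forall x. p(x)'."""
    name: str

@dataclass
class MetaVar:
    """Meta-level (instantiable) variable ranging over object-level terms."""
    name: str
    binding: Optional[Any] = None

@dataclass
class Suspended:
    """A substitution [value/var] waiting on an unbound meta-variable."""
    var: ObjVar
    value: Any
    target: MetaVar

def substitute(term: Any, var: ObjVar, value: Any) -> Any:
    """Apply the substitution [value/var] to `term`, suspending when needed."""
    if isinstance(term, ObjVar):
        return value if term.name == var.name else term
    if isinstance(term, MetaVar):
        if term.binding is None:
            return Suspended(var, value, term)    # cannot evaluate yet
        return substitute(term.binding, var, value)
    return term                                    # constants etc. unchanged
```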

Relevance: 30.00%

Abstract:

The classification rules of linear discriminant analysis are defined by the true mean vectors and the common covariance matrix of the populations from which the data come. Because these true parameters are generally unknown, they are commonly estimated by the sample mean vector and covariance matrix of the data in a training sample randomly drawn from each population. However, these sample statistics are notoriously susceptible to contamination by outliers, a problem compounded by the fact that the outliers may be invisible to conventional diagnostics. High-breakdown estimation is a procedure designed to remove this cause for concern by producing estimates that are immune to serious distortion by a minority of outliers, regardless of their severity. In this article we motivate and develop a high-breakdown criterion for linear discriminant analysis and give an algorithm for its implementation. The procedure is intended to supplement rather than replace the usual sample-moment methodology of discriminant analysis either by providing indications that the dataset is not seriously affected by outliers (supporting the usual analysis) or by identifying apparently aberrant points and giving resistant estimators that are not affected by them.
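
As a hedged sketch of the general idea (using the minimum covariance determinant as one standard high-breakdown estimator, not necessarily the criterion developed in this article), robust location and scatter estimates can simply replace the sample moments in the linear discriminant scores:

```python
# Hedged sketch: plug a high-breakdown covariance estimator (here MCD) into
# the linear discriminant rule in place of the ordinary sample moments.
import numpy as np
from sklearn.covariance import MinCovDet

def robust_lda_scores(X_train, y_train, X_new):
    classes = np.unique(y_train)
    means, cov_sum, n_total = {}, 0.0, 0
    for c in classes:
        Xc = X_train[y_train == c]
        mcd = MinCovDet().fit(Xc)            # resistant to a minority of outliers
        means[c] = mcd.location_
        cov_sum = cov_sum + mcd.covariance_ * len(Xc)
        n_total += len(Xc)
    pooled = cov_sum / n_total               # pooled robust scatter estimate
    prec = np.linalg.inv(pooled)
    # Linear discriminant score per class (equal priors assumed for simplicity).
    return {c: X_new @ prec @ m - 0.5 * m @ prec @ m for c, m in means.items()}
```

Any other resistant location/scatter estimator could be substituted in the same way.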

Relevance: 30.00%

Abstract:

The identification, modeling, and analysis of interactions between nodes of neural systems in the human brain have become the focus of many studies in neuroscience. The complex neural network structure and its correlations with brain functions have played a role in all areas of neuroscience, including the comprehension of cognitive and emotional processing. Indeed, understanding how information is stored, retrieved, processed, and transmitted is one of the ultimate challenges in brain research. In this context, in functional neuroimaging, connectivity analysis is a major tool for the exploration and characterization of the information flow between specialized brain regions. In most functional magnetic resonance imaging (fMRI) studies, connectivity analysis is carried out by first selecting regions of interest (ROI) and then calculating an average BOLD time series (across the voxels in each cluster). Some studies have shown that the average may not be a good choice and have suggested, as an alternative, the use of principal component analysis (PCA) to extract the principal eigen-time series from the ROI(s). In this paper, we introduce a novel approach called cluster Granger analysis (CGA) to study connectivity between ROIs. The main aim of this method is to employ multiple eigen-time series in each ROI to avoid the temporal information loss inherent in averaging (e.g., to yield a single "representative" time series per ROI), which in turn may lead to a lack of power in detecting connections. The proposed approach is based on multivariate statistical analysis and integrates PCA and partial canonical correlation in a framework of Granger causality for clusters (sets) of time series. We also describe an algorithm for statistical significance testing based on bootstrapping. Using Monte Carlo simulations, we show that the proposed approach outperforms conventional Granger causality analysis (i.e., using representative time series extracted by signal averaging or first-principal-component estimation from ROIs). The usefulness of the CGA approach in real fMRI data is illustrated in an experiment using human faces expressing emotions. With this data set, the proposed approach suggested the presence of significantly more connections between the ROIs than were detected using a single representative time series in each ROI. (c) 2010 Elsevier Inc. All rights reserved.
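
A rough sketch of the underlying idea follows: retain several eigen-time series per ROI and ask whether lagged components of one ROI improve multivariate prediction of another. This uses a plain regression comparison of generalized residual variances rather than the paper's partial-canonical-correlation statistic or its bootstrap test; function names and parameters are illustrative.

```python
# Hedged sketch of cluster-level Granger influence using several principal
# components per ROI instead of a single averaged time series.
import numpy as np
from sklearn.decomposition import PCA

def roi_eigenseries(voxel_ts, n_components=3):
    """voxel_ts: (time, voxels) BOLD data for one ROI -> (time, components)."""
    return PCA(n_components=n_components).fit_transform(voxel_ts)

def granger_gain(source, target, lag=1):
    """Ratio of residual generalized variances when predicting `target`
    (time x components) from its own past only vs. its own past plus the
    past of `source`; values above 1 suggest an influence of source on target."""
    Y = target[lag:]
    own = target[:-lag]
    full = np.hstack([own, source[:-lag]])

    def resid_gen_var(X, Y):
        beta, *_ = np.linalg.lstsq(X, Y, rcond=None)
        resid = Y - X @ beta
        return np.linalg.det(np.atleast_2d(np.cov(resid.T)))

    return resid_gen_var(own, Y) / resid_gen_var(full, Y)
```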

Relevance: 30.00%

Abstract:

The popular Newmark algorithm, used for implicit direct integration of structural dynamics, is extended by means of a nodal partition to permit the use of different timesteps in different regions of a structural model. The algorithm developed has as a special case an explicit-explicit subcycling algorithm previously reported by Belytschko, Yen and Mullen. That algorithm has been shown, in the absence of damping or other energy dissipation, to exhibit instability over narrow timestep ranges that become narrower as the number of degrees of freedom increases, making them unlikely to be encountered in practice. The present algorithm avoids such instabilities in the case of a one-to-two timestep ratio (two subcycles), achieving unconditional stability in an exponential sense for a linear problem. However, with three or more subcycles, the trapezoidal rule exhibits stability that becomes conditional, falling towards that of the central difference method as the number of subcycles increases. Instabilities over narrow timestep ranges, which become narrower as the model size increases, also appear with three or more subcycles. However, by moving the partition between timesteps one row of elements into the region suitable for integration with the larger timestep, these unstable timestep ranges become extremely narrow, even in simple systems with a few degrees of freedom. Accuracy is also improved. Use of a version of the Newmark algorithm that dissipates high frequencies minimises or eliminates these narrow bands of instability. Viscous damping is also shown to remove these instabilities, at the expense of having more effect on the low-frequency response.
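
For reference, a single implicit Newmark step (average acceleration, beta = 1/4, gamma = 1/2) for the linear system M u'' + C u' + K u = f is sketched below in its generic textbook form; the subcycling scheme described above applies such updates with different timesteps in different nodal partitions, which is not shown here.

```python
# Minimal sketch of one implicit Newmark-beta step for M u'' + C u' + K u = f.
import numpy as np

def newmark_step(M, C, K, f, u, v, a, dt, beta=0.25, gamma=0.5):
    # Effective stiffness and load for the implicit displacement update.
    K_eff = K + gamma / (beta * dt) * C + M / (beta * dt**2)
    f_eff = (f
             + M @ (u / (beta * dt**2) + v / (beta * dt)
                    + (1.0 / (2.0 * beta) - 1.0) * a)
             + C @ (gamma / (beta * dt) * u + (gamma / beta - 1.0) * v
                    + dt * (gamma / (2.0 * beta) - 1.0) * a))
    u_new = np.linalg.solve(K_eff, f_eff)
    a_new = ((u_new - u) / (beta * dt**2) - v / (beta * dt)
             - (1.0 / (2.0 * beta) - 1.0) * a)
    v_new = v + dt * ((1.0 - gamma) * a + gamma * a_new)
    return u_new, v_new, a_new
```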

Relevance: 30.00%

Abstract:

Functional MRI (fMRI) data often have a low signal-to-noise ratio (SNR) and are contaminated by strong interference from other physiological sources. A promising tool for extracting signals, even under low SNR conditions, is blind source separation (BSS), or independent component analysis (ICA). BSS is based on the assumption that the detected signals are a mixture of a number of independent source signals that are linearly combined via an unknown mixing matrix, and it seeks to determine that mixing matrix and recover the source signals based on principles of statistical independence. In most cases, extraction of all sources is unnecessary; instead, a priori information can be applied to extract only the signal of interest. Here we propose an algorithm based on a variation of ICA, called Dependent Component Analysis (DCA), in which the signal of interest is extracted using a time delay obtained from an autocorrelation analysis. We applied this method to fMRI data, aiming to find the hemodynamic response that follows neuronal activation from auditory stimulation in human subjects. The method localized a significant signal modulation in cortical regions corresponding to the primary auditory cortex. The results obtained by DCA were also compared to those of the General Linear Model (GLM), which is the most widely used method for analyzing fMRI data sets.
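
The sketch below illustrates the flavour of delay-based component selection rather than the paper's actual DCA algorithm: run a generic ICA and keep the component whose autocorrelation at an assumed stimulation lag is strongest. The lag and component count are placeholder parameters.

```python
# Hedged sketch: ICA followed by selection of the component with the strongest
# autocorrelation at the expected stimulation lag (a simplification of the
# delay-based idea, not the paper's DCA algorithm).
import numpy as np
from sklearn.decomposition import FastICA

def extract_periodic_component(voxel_ts, lag, n_components=10):
    """voxel_ts: (time, voxels) fMRI data; lag: expected delay in samples.
    Requires n_components <= min(time, voxels)."""
    sources = FastICA(n_components=n_components).fit_transform(voxel_ts)

    def autocorr(x, k):
        x = x - x.mean()
        return np.dot(x[:-k], x[k:]) / np.dot(x, x)

    scores = [abs(autocorr(sources[:, i], lag)) for i in range(sources.shape[1])]
    return sources[:, int(np.argmax(scores))]
```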

Relevance: 30.00%

Abstract:

There has been little study of economic and general attitudes towards the conservation of the Asian elephant. This paper reports and analyses results from surveys conducted in Sri Lanka of the attitudes of urban dwellers and farmers towards nature conservation in general and elephant conservation in particular. The analyses are based on an urban and a rural sample, with contingent valuation techniques used as survey instruments. Multivariate logit regression analysis is used to analyze the respondents' attitudes towards conservation of elephants. It is found that, although some variation occurred between the samples, the majority of respondents (both rural and urban) have positive attitudes towards nature conservation in general. However, marked differences in attitudes toward elephant conservation are evident between the two samples: the majority of urban respondents were in favour of elephant conservation, whereas rural respondents expressed a mixture of positive and negative attitudes. Overall, considerable unrecorded and as yet unutilised economic support for the conservation of wild elephants exists in Sri Lanka.
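
As a hedged illustration of the kind of model used (with hypothetical variable names, not the survey's actual covariates), a multivariate logit of a favourable-attitude indicator on respondent characteristics might look like:

```python
# Hypothetical logit specification; the column names are placeholders, not the
# study's actual survey variables.
import pandas as pd
import statsmodels.formula.api as smf

def fit_attitude_logit(df: pd.DataFrame):
    """df columns assumed: favours_conservation (0/1), income, education_years,
    crop_damage (0/1), urban (0/1)."""
    model = smf.logit(
        "favours_conservation ~ income + education_years + crop_damage + urban",
        data=df,
    )
    return model.fit()   # coefficients indicate how attitudes vary with covariates
```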

Relevance: 30.00%

Abstract:

This article reports on the results of a study undertaken by the author together with her research assistant, Heather Green. The study collected and analysed data from all disciplinary tribunal decisions heard in Queensland since 1930 in an attempt to provide empirical information which has previously been lacking. This article will outline the main features of the disciplinary system in Queensland, describe the research methodology used in the present study and then report on some findings from the study. Reported findings include a profile of solicitors who have appeared before a disciplinary hearing, the types of matters which have attracted formal discipline and the types of orders made by the tribunal. Much of the data is then presented on a time scale so as to reveal any changes over time.

Relevance: 30.00%

Abstract:

Neurological disease or dysfunction in newborn infants is often first manifested by seizures. Prolonged seizures can result in impaired neurodevelopment or even death. In adults, the clinical signs of seizures are well defined and easily recognized; in newborns, however, the clinical signs are subtle and may be absent or easily missed without constant close observation. This article describes the use of adaptive signal processing techniques for removing artifacts from newborn electroencephalogram (EEG) signals. Three adaptive algorithms have been designed in the context of EEG signals. This preprocessing is necessary before attempting a fine time-frequency analysis of rhythmic EEG activity, such as electrical seizures, when the recordings are corrupted by high-amplitude artifact signals. After an overview of newborn EEG signals, the authors describe the data acquisition set-up. They then introduce the basic physiological concepts related to normal and abnormal newborn EEGs and discuss the three adaptive algorithms for artifact removal. They also present time-frequency representations (TFRs) of seizure signals and discuss the estimation and modeling of the instantaneous frequency related to the main ridge of the TFR.
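
As a minimal sketch of one common adaptive artifact-removal scheme (an LMS noise canceller driven by a reference channel; the article's three algorithms are not reproduced here):

```python
# Hedged sketch of LMS adaptive artifact cancellation: the part of the EEG
# that is linearly predictable from a reference artifact channel (e.g. ECG or
# a movement sensor) is estimated and subtracted. Assumes float arrays.
import numpy as np

def lms_cancel(eeg, reference, n_taps=16, mu=0.01):
    w = np.zeros(n_taps)
    cleaned = np.copy(eeg)
    for n in range(n_taps, len(eeg)):
        x = reference[n - n_taps:n][::-1]      # most recent reference samples first
        artifact_estimate = np.dot(w, x)
        e = eeg[n] - artifact_estimate         # error = cleaned EEG sample
        cleaned[n] = e
        w = w + 2 * mu * e * x                 # LMS weight update
    return cleaned
```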

Relevance: 30.00%

Abstract:

Motivation: A consensus sequence for a family of related sequences is, as the name suggests, a sequence that captures the features common to most members of the family. Consensus sequences are important in various DNA sequencing applications and are a convenient way to characterize a family of molecules. Results: This paper describes a new algorithm for finding a consensus sequence, using the popular optimization method known as simulated annealing. Unlike the conventional approach of finding a consensus sequence by first forming a multiple sequence alignment, this algorithm searches for a sequence that minimises the sum of pairwise distances to each of the input sequences. The resulting consensus sequence can then be used to induce a multiple sequence alignment. The time required by the algorithm scales linearly with the number of input sequences and quadratically with the length of the consensus sequence. We present results demonstrating the high quality of the consensus sequences and alignments produced by the new algorithm. For comparison, we also present similar results obtained using ClustalW. The new algorithm outperforms ClustalW in many cases.
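
A minimal sketch of the same idea is given below, assuming DNA sequences, single-character substitution moves, and a geometric cooling schedule; none of these choices, nor the distance function, are taken from the paper.

```python
# Hedged sketch: simulated annealing over candidate consensus strings,
# minimising the summed edit distance to the input sequences.
import math
import random

def edit_distance(a, b):
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1, cur[j - 1] + 1, prev[j - 1] + (ca != cb)))
        prev = cur
    return prev[-1]

def consensus(seqs, alphabet="ACGT", steps=20000, t0=2.0, cooling=0.9995):
    current = max(seqs, key=len)                     # start from the longest input
    cost = sum(edit_distance(current, s) for s in seqs)
    temp = t0
    for _ in range(steps):
        i = random.randrange(len(current))           # substitution move only
        candidate = current[:i] + random.choice(alphabet) + current[i + 1:]
        c_cost = sum(edit_distance(candidate, s) for s in seqs)
        if c_cost < cost or random.random() < math.exp(-(c_cost - cost) / temp):
            current, cost = candidate, c_cost        # accept (possibly uphill) move
        temp *= cooling
    return current
```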

Relevance: 30.00%

Abstract:

A field matching method is described to analyze a recessed circular cavity radiating into a radial waveguide. Using the wall impedance approach, the analysis is divided into two separate problems: the cavity and its external environment. Based on this analysis, a computer algorithm is developed for determining the wall admittances as seen at the edge of the patch in the cavity, the radial admittance matrix for the two-probe feed arrangement, and the input impedance as observed from the coaxial line feeding the cavity. This algorithm is tested against the general-purpose Hewlett-Packard finite-element High Frequency Structure Simulator as well as against measured results. Good agreement is noted in all cases considered.

Relevance: 30.00%

Abstract:

Fault detection and isolation (FDI) are important steps in the monitoring and supervision of industrial processes. Biological wastewater treatment (WWT) plants are difficult to model, and hence to monitor, because of the complexity of the biological reactions and because plant influent and disturbances are highly variable and/or unmeasured. Multivariate statistical models have been developed for a wide variety of situations over the past few decades, proving successful in many applications. In this paper we develop a new monitoring algorithm based on Principal Components Analysis (PCA). It can be seen equivalently as making Multiscale PCA (MSPCA) adaptive, or as a multiscale decomposition of adaptive PCA. Adaptive Multiscale PCA (AdMSPCA) exploits the changing multivariate relationships between variables at different time-scales. Adaptation of scale PCA models over time permits them to follow the evolution of the process, inputs or disturbances. Performance of AdMSPCA and adaptive PCA on a real WWT data set is compared and contrasted. The most significant difference observed was the ability of AdMSPCA to adapt to a much wider range of changes. This was mainly due to the flexibility afforded by allowing each scale model to adapt whenever it did not signal an abnormal event at that scale. Relative detection speeds were examined only summarily, but seemed to depend on the characteristics of the faults/disturbances. The results of the algorithms were similar for sudden changes, but AdMSPCA appeared more sensitive to slower changes.
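
The sketch below shows only the non-adaptive multiscale-PCA part of the idea, under simple assumptions (a discrete wavelet decomposition via PyWavelets and a Hotelling T² statistic per scale); the adaptive updating of each scale model, which is the focus of this paper, is omitted.

```python
# Hedged sketch of multiscale PCA monitoring: decompose each variable with a
# wavelet transform, fit a PCA model per scale, and monitor Hotelling's T^2
# at each scale. Requires PyWavelets and scikit-learn.
import numpy as np
import pywt
from sklearn.decomposition import PCA

def multiscale_t2(data, wavelet="db4", level=3, n_components=2):
    """data: (samples, variables) matrix of process measurements."""
    t2_by_scale = {}
    for scale in range(1, level + 1):
        # Detail coefficients of every variable at this scale.
        coeffs = [pywt.wavedec(data[:, j], wavelet, level=level)[-scale]
                  for j in range(data.shape[1])]
        X = np.column_stack(coeffs)
        pca = PCA(n_components=n_components).fit(X)
        scores = pca.transform(X)
        t2 = np.sum(scores**2 / pca.explained_variance_, axis=1)
        t2_by_scale[scale] = t2              # compare against a control limit per scale
    return t2_by_scale
```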

Relevance: 30.00%

Abstract:

A new algorithm has been developed for smoothing the surfaces in finite element formulations of contact-impact. A key feature of this method is that the smoothing is done implicitly by constructing smooth signed distance functions for the bodies. These functions are then employed for the computation of the gap and other variables needed for the implementation of contact-impact. The smoothed signed distance functions are constructed by a moving least-squares approximation with a polynomial basis. Results show that when nodes are placed on a surface, the surface can be reproduced with an error of about one per cent or less with either a quadratic or a linear basis. With a quadratic basis, the method exactly reproduces a circle or a sphere even for coarse meshes. Results are presented for contact problems involving circular bodies. Copyright (C) 2002 John Wiley & Sons, Ltd.
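
A minimal 2D sketch of a moving least-squares fit with a quadratic basis is given below; the Wendland-type weight, the fixed support radius, and the assumption that approximate signed distances are known at the surface nodes are illustrative choices, not the paper's exact formulation.

```python
# Hedged sketch of a moving least-squares (quadratic basis) evaluation of a
# smoothed signed distance at a query point from scattered surface nodes.
import numpy as np

def mls_signed_distance(x, nodes, node_distances, radius):
    """x: query point (2,); nodes: (n, 2); node_distances: (n,) known values.
    Assumes enough nodes lie within `radius` of x so the moment matrix is regular."""
    def basis(p):
        return np.array([1.0, p[0], p[1], p[0]**2, p[0]*p[1], p[1]**2])

    d = np.linalg.norm(nodes - x, axis=1)
    # Compactly supported Wendland-type weight: positive inside the radius, zero outside.
    w = np.maximum(1.0 - d / radius, 0.0) ** 4 * (4.0 * d / radius + 1.0)

    A = np.zeros((6, 6))
    b = np.zeros(6)
    for wi, pi, gi in zip(w, nodes, node_distances):
        p = basis(pi)
        A += wi * np.outer(p, p)
        b += wi * gi * p
    coeffs = np.linalg.solve(A, b)
    return basis(x) @ coeffs
```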

Relevance: 30.00%

Abstract:

Combinatorial optimization problems share an interesting property with spin glass systems in that their state spaces can exhibit ultrametric structure. We use sampling methods to analyse the error surfaces of feedforward multi-layer perceptron neural networks learning encoder problems. The points of attraction (minima) found on these error surfaces are examined through their third-order statistics and are found to be arranged in a highly ultrametric way. This is a unique result for a finite, continuous parameter space. The implications of this result are discussed.
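
One simple triple-based (third-order) check of ultrametricity, given a sample of minima found by repeated training runs, is sketched below: in an ultrametric space every triangle is isosceles with its two largest sides equal, so d(x, z) <= max(d(x, y), d(y, z)) for all triples. The tolerance and distance choice are illustrative.

```python
# Hedged sketch: fraction of weight-vector triples violating the ultrametric
# inequality, using Euclidean distances between sampled minima.
import itertools
import numpy as np

def ultrametric_violation(points, tol=1e-2):
    """points: (n, dim) array of weight vectors found by separate training runs."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    violations, total = 0, 0
    for i, j, k in itertools.combinations(range(len(points)), 3):
        a, b, c = sorted([d[i, j], d[j, k], d[i, k]])
        total += 1
        if c - b > tol * max(c, 1.0):      # two largest sides should be (nearly) equal
            violations += 1
    return violations / total if total else 0.0
```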

Relevance: 30.00%

Abstract:

We consider a mixture model approach to the regression analysis of competing-risks data. Attention is focused on inference concerning the effects of factors on both the probability of occurrence and the hazard rate conditional on each of the failure types. These two quantities are specified in the mixture model using the logistic model and the proportional hazards model, respectively. We propose a semi-parametric mixture method to estimate the logistic and regression coefficients jointly, in which the component-baseline hazard functions are left completely unspecified. Estimation is by maximum likelihood on the basis of the full likelihood, implemented via an expectation-conditional maximization (ECM) algorithm. Simulation studies are performed to compare the performance of the proposed semi-parametric method with a fully parametric mixture approach. The results show that when the component-baseline hazard is monotonic increasing, the semi-parametric and fully parametric mixture approaches are comparable for mildly and moderately censored samples. When the component-baseline hazard is not monotonic increasing, the semi-parametric method consistently provides less biased estimates than the fully parametric approach and is comparable in efficiency in the estimation of the parameters for all levels of censoring. The methods are illustrated using a real data set of prostate cancer patients treated with different dosages of the drug diethylstilbestrol. Copyright (C) 2003 John Wiley & Sons, Ltd.
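
In generic notation (not copied from the paper), the two-part mixture specification described above can be written as:

```latex
% Illustrative notation: logistic mixing probabilities over g failure types and
% proportional-hazards components with unspecified baseline hazards.
\[
  f(t \mid x) \;=\; \sum_{j=1}^{g} \pi_j(x)\, f_j(t \mid x),
  \qquad
  \pi_j(x) \;=\; \frac{\exp(x^{\top}\beta_j)}{\sum_{k=1}^{g}\exp(x^{\top}\beta_k)},
  \qquad
  h_j(t \mid x) \;=\; h_{0j}(t)\,\exp(x^{\top}\gamma_j),
\]
```

Here pi_j(x) is the probability that failure is of type j given covariates x, h_j is the conditional hazard for type j, and the baseline hazards h_{0j} are the quantities left unspecified in the semi-parametric fit.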