991 results for computational costs


Relevance:

60.00%

Publisher:

Abstract:

Understanding neural function requires knowledge gained from analysing electrophysiological data. The process of assigning the spikes of a multichannel signal to clusters, called spike sorting, is one of the important problems in such analysis. Various automated spike sorting techniques have been proposed, each with advantages and disadvantages regarding accuracy and computational cost. Therefore, developing spike sorting methods that are both highly accurate and computationally inexpensive remains a challenge in biomedical engineering practice.
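A minimal sketch of the kind of pipeline such automated techniques typically share — threshold-based spike detection, PCA feature extraction and k-means clustering — is shown below; the parameters, window length and choice of scikit-learn components are illustrative assumptions, not taken from the abstract.

```python
# Minimal spike-sorting sketch: threshold detection, PCA features, k-means clustering.
# Hypothetical parameters; real recordings need filtering and careful threshold choice.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

def sort_spikes(signal, thresh_sd=4.0, window=32, n_units=3):
    # Detect threshold crossings (negative peaks are typical for extracellular spikes).
    thresh = -thresh_sd * np.median(np.abs(signal)) / 0.6745   # robust noise estimate
    crossings = np.where((signal[1:] < thresh) & (signal[:-1] >= thresh))[0]
    # Cut a fixed-length waveform around each crossing.
    waves = np.array([signal[i:i + window] for i in crossings
                      if i + window < len(signal)])
    if len(waves) == 0:
        return crossings, np.array([])
    # Reduce each waveform to a few principal components.
    feats = PCA(n_components=3).fit_transform(waves)
    # Cluster the features; each cluster is treated as one putative unit.
    labels = KMeans(n_clusters=n_units, n_init=10).fit_predict(feats)
    return crossings, labels
```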

Relevance:

60.00%

Publisher:

Abstract:

Partial state estimation of dynamical systems provides significant advantages in practical applications. At the same time, pre-compensator design for multivariable systems considerably increases the order of the original system. Hence, applying a functional observer to pre-compensated systems can result in lower computational costs and greater practicality in applications such as fault diagnosis and output feedback control of these systems. In this note, functional observer design is investigated for pre-compensated systems. A lower-order pre-compensator is designed based on an H2-norm optimization, obtained as the solution of a set of linear matrix inequalities (LMIs). Next, a minimum-order functional observer is designed for the pre-compensated system. An LTI model of an irreversible chemical reactor is used to demonstrate the design algorithm and to highlight the benefits of the proposed schemes.
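The abstract does not give the LMI formulation itself; as a hedged illustration of the underlying idea of estimating a linear function z = Lx of the state, the sketch below uses an ordinary full-order Luenberger observer designed by pole placement (the minimum-order, LMI-based functional observer of the note is more elaborate). The plant matrices, pole locations and functional are hypothetical.

```python
# Sketch: estimating a linear function z = Lf @ x of the state with a full-order
# Luenberger observer; the note's minimum-order LMI-based design is more involved.
import numpy as np
from scipy.signal import place_poles

A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # example plant (hypothetical values)
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])                 # only x1 is measured
Lf = np.array([0.5, 1.0])                  # functional to estimate: z = Lf @ x

# Observer gain obtained by placing the estimation-error poles.
K = place_poles(A.T, C.T, [-5.0, -6.0]).gain_matrix.T

x, xhat, dt = np.array([1.0, 0.0]), np.zeros(2), 1e-3
for _ in range(5000):                      # simple forward-Euler simulation
    u = 1.0                                # step input
    y = C @ x
    x = x + dt * (A @ x + B[:, 0] * u)
    xhat = xhat + dt * (A @ xhat + B[:, 0] * u + K[:, 0] * (y - C @ xhat))
z_true, z_hat = Lf @ x, Lf @ xhat          # z_hat converges to z_true
```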

Relevance:

60.00%

Publisher:

Abstract:

The objectives of this work were (i) to review numerical methods for derivative pricing; and (ii) to compare these methods under the assumption that market prices reflect those obtained from the Black-Scholes formula for pricing European-style options. We applied these methods to price call options on Telebrás shares. Accuracy and computational cost were the criteria used to compare the following models: binomial, Monte Carlo, and finite differences. The results indicate that the binomial model has good accuracy and low cost, followed by Monte Carlo and finite differences. However, the Monte Carlo method could be used when the derivative depends on more than two underlying assets. The finite difference method is recommended when a partial differential equation whose solution is the value of the derivative can be obtained.
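A minimal sketch of the pricing approaches compared above, applied to a European call with illustrative parameters (not the Telebrás option data); the finite difference solver is omitted for brevity.

```python
# European call priced three ways, to compare accuracy and cost against the
# Black-Scholes closed form (illustrative parameters, not Telebrás data).
import numpy as np
from scipy.stats import norm

S0, K, r, sigma, T = 100.0, 100.0, 0.10, 0.30, 0.5

def black_scholes_call(S0, K, r, sigma, T):
    d1 = (np.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * np.sqrt(T))
    d2 = d1 - sigma * np.sqrt(T)
    return S0 * norm.cdf(d1) - K * np.exp(-r * T) * norm.cdf(d2)

def binomial_call(S0, K, r, sigma, T, n=500):
    dt = T / n
    u, d = np.exp(sigma * np.sqrt(dt)), np.exp(-sigma * np.sqrt(dt))
    p = (np.exp(r * dt) - d) / (u - d)                    # risk-neutral probability
    prices = S0 * u ** np.arange(n, -1, -1) * d ** np.arange(0, n + 1)
    values = np.maximum(prices - K, 0.0)                  # terminal payoffs
    for _ in range(n):                                    # backward induction
        values = np.exp(-r * dt) * (p * values[:-1] + (1 - p) * values[1:])
    return values[0]

def monte_carlo_call(S0, K, r, sigma, T, n=200_000, seed=0):
    z = np.random.default_rng(seed).standard_normal(n)
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    return np.exp(-r * T) * np.maximum(ST - K, 0.0).mean()

print(black_scholes_call(S0, K, r, sigma, T),
      binomial_call(S0, K, r, sigma, T),
      monte_carlo_call(S0, K, r, sigma, T))
```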

Relevance:

60.00%

Publisher:

Abstract:

In this work, a design methodology is developed to identify the critical regions of the structure of a light-duty trailer being towed over pavements of the low-quality highway and very-low-quality secondary road types. To this end, some experimental data on the structure are collected, as needed for the approximation and dynamic simulation of a simplified model. The base excitation is produced by actuators that simulate the vertical oscillations of a road profile, which is defined according to the studies by Dodds and Robson (1973). This allows a load history to be determined for the regions of the chassis structure under the action of the suspension springs. Next, a simplified finite element model of the trailer, called the global model, is generated, in which the regions under the highest stresses are determined. Once the most critical region of the structure has been identified, a local model of this part is created, in which the stress distribution can be observed in more detail, allowing the identification of stress concentration points. In this way, by applying the global-local analysis method, detailed results on the structural loads can be obtained at a lower computational cost.
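As a hedged sketch of the base excitation step, the snippet below synthesises a random road profile from a single-slope displacement PSD of the form commonly associated with Dodds and Robson (1973) / ISO 8608 road descriptions; the roughness coefficient, exponent and discretisation are illustrative assumptions, not the values used in the work.

```python
# Sketch: one realisation of a random vertical road profile from a displacement PSD
# G(n) = G0 * (n / n0)**(-w). G0, n0 and w below are illustrative placeholders.
import numpy as np

def road_profile(length_m=200.0, dx=0.05, G0=64e-6, n0=0.1, w=2.0, seed=0):
    x = np.arange(0.0, length_m, dx)                 # longitudinal positions [m]
    n = np.arange(0.01, 1.0 / (2 * dx), 0.01)        # spatial frequencies [cycles/m]
    dn = n[1] - n[0]
    G = G0 * (n / n0) ** (-w)                        # displacement PSD
    amp = np.sqrt(2.0 * G * dn)                      # sinusoid amplitudes
    phase = np.random.default_rng(seed).uniform(0, 2 * np.pi, n.size)
    # Superpose sinusoids with random phases to obtain one profile realisation.
    z = (amp[None, :] * np.cos(2 * np.pi * x[:, None] * n[None, :] + phase)).sum(axis=1)
    return x, z

x, z = road_profile()   # z(x) can drive the vertical actuators in the simulation
```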

Relevance:

60.00%

Publisher:

Abstract:

In this work we focus on tests for the parameter of an endogenous variable in a weakly identified instrumental variable regression model. We propose a new unbiasedness restriction for the weighted average power (WAP) tests introduced by Moreira and Moreira (2013). This new boundary condition is motivated by score efficiency under strong identification. It allows the computational cost of WAP tests to be reduced by replacing the strongly unbiased condition. The latter restriction imposes, under the null hypothesis, that the test be uncorrelated with a given statistic whose dimension equals the number of instruments. The newly proposed boundary condition only imposes that the test be uncorrelated with a linear combination of the statistic. WAP tests under both restrictions are shown to perform similarly numerically. We apply the different tests discussed to an empirical example. Using data from Yogo (2004), we assess the effect of weak instruments on the estimation of the elasticity of intertemporal substitution in a CCAPM model.
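The WAP tests themselves are not reproduced here; as a hedged point of reference, the sketch below implements the classical Anderson-Rubin test, another weak-instrument-robust test of H0: beta = beta0 in a linear IV regression (exogenous controls are assumed to have been partialled out; all inputs are hypothetical arrays).

```python
# Not the WAP test of the paper: the classical Anderson-Rubin (AR) test for
# H0: beta = beta0 in y = x*beta + u with instrument matrix Z (n x k).
import numpy as np
from scipy.stats import f

def anderson_rubin(y, x, Z, beta0):
    n, k = Z.shape
    e0 = y - x * beta0                       # residual imposed by the null
    Pz = Z @ np.linalg.solve(Z.T @ Z, Z.T)   # projection onto the instrument space
    ssr_fit = e0 @ Pz @ e0
    ssr_res = e0 @ e0 - ssr_fit
    ar = (ssr_fit / k) / (ssr_res / (n - k))
    return ar, 1.0 - f.cdf(ar, k, n - k)     # statistic and approximate p-value
```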

Relevance:

60.00%

Publisher:

Abstract:

Self-organizing maps (SOM) are artificial neural networks widely used in the data mining field, mainly because they constitute a dimensionality reduction technique, given the fixed grid of neurons associated with the network. In order to properly partition and visualize the SOM network, the various methods available in the literature must be applied in a post-processing stage, which consists of inferring, through its neurons, relevant characteristics of the data set. In general, applying such processing to the network neurons, instead of the entire database, reduces the computational costs due to vector quantization. This work proposes a post-processing of the SOM neurons in the input and output spaces, combining visualization techniques with algorithms based on gravitational forces and on the search for the shortest path with the greatest reward. Such methods take into account the connection strength between neighbouring neurons and characteristics of pattern density and distances among neurons, both associated with the positions that the neurons occupy in the data space after training the network. Thus, the goal is to define more clearly the arrangement of the clusters present in the data. Experiments were carried out to evaluate the proposed methods using several artificially generated data sets, as well as real-world data sets. The results obtained were compared with those of a number of well-known methods in the literature.
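For context, a minimal NumPy sketch of SOM training — the network whose neurons the proposed post-processing would subsequently analyse — is given below; the grid size, learning-rate and neighbourhood schedules are arbitrary choices, not those of the work.

```python
# Minimal SOM training sketch: a fixed grid of neurons whose codebook vectors are
# pulled toward randomly drawn samples, with a shrinking Gaussian neighbourhood.
import numpy as np

def train_som(data, rows=10, cols=10, iters=5000, lr0=0.5, sigma0=3.0, seed=0):
    rng = np.random.default_rng(seed)
    weights = rng.random((rows * cols, data.shape[1]))          # codebook vectors
    grid = np.array([[i, j] for i in range(rows) for j in range(cols)], float)
    for t in range(iters):
        x = data[rng.integers(len(data))]
        bmu = np.argmin(np.linalg.norm(weights - x, axis=1))    # best-matching unit
        lr = lr0 * np.exp(-t / iters)                           # decaying learning rate
        sigma = sigma0 * np.exp(-t / iters)                     # shrinking neighbourhood
        dist2 = np.sum((grid - grid[bmu]) ** 2, axis=1)
        h = np.exp(-dist2 / (2 * sigma ** 2))                   # neighbourhood function
        weights += lr * h[:, None] * (x - weights)              # pull neighbours toward x
    return weights.reshape(rows, cols, -1)
```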

Relevance:

60.00%

Publisher:

Abstract:

Two methods to evaluate the state transition matrix are implemented and analyzed to verify the computational cost and the accuracy of both methods. This evaluation represents one of the highest computational costs in the artificial satellite orbit determination task. The first method is an approximation of the Keplerian motion, providing an analytical solution which is then calculated numerically by solving Kepler's equation. The second one is a local numerical approximation that includes the effect of J2. The analysis is performed by comparing these two methods with a reference generated by a numerical integrator. For small time intervals (1 to 10 s) and when more accuracy is needed, the second method is recommended, since the CPU time does not excessively overload the computer during the orbit determination procedure. For larger time intervals and when more stability in the calculation is expected, the first method is recommended.
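The first method hinges on solving Kepler's equation M = E - e sin E for the eccentric anomaly; a standard Newton-iteration sketch is shown below (the tolerance and starting guess are conventional choices, not taken from the paper).

```python
# Newton iteration for Kepler's equation M = E - e*sin(E), solved for E.
import numpy as np

def solve_kepler(M, e, tol=1e-12, max_iter=50):
    E = M if e < 0.8 else np.pi              # common starting guess
    for _ in range(max_iter):
        dE = (E - e * np.sin(E) - M) / (1.0 - e * np.cos(E))   # Newton step
        E -= dE
        if abs(dE) < tol:
            break
    return E
```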

Relevance:

60.00%

Publisher:

Abstract:

A methodology for pipeline leakage detection using a combination of clustering and classification tools for fault detection is presented here. A fuzzy system is used to classify the running mode and identify the operational and process transients. The relationship between these transients and the mass balance deviation is discussed. This strategy allows for better identification of the leakage because the thresholds are adjusted by the fuzzy system as a function of the running mode and the classified transient level. The fuzzy system is initially trained off-line with a modified data set including simulated leakages. The methodology is applied to a small-scale LPG pipeline monitoring case where portability, robustness and reliability are amongst the most important criteria for the detection system. The results are very encouraging, with relatively low levels of false alarms and increased leakage detection at low computational cost. (c) 2005 Elsevier B.V. All rights reserved.
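A highly simplified sketch of the underlying detection rule — a mass-balance deviation compared against a threshold that is relaxed during classified transients — is given below; the fuzzy running-mode classifier itself is stubbed out and the adjustment rule is a hypothetical placeholder, not the paper's scheme.

```python
# Sketch: flag a leak when the inlet/outlet mass-balance deviation exceeds a threshold
# that is relaxed according to the classified transient level (hypothetical rule).
def leak_alarm(m_in, m_out, transient_level, base_threshold=0.5):
    deviation = m_in - m_out                                # mass balance deviation [kg/s]
    threshold = base_threshold * (1.0 + transient_level)    # placeholder adjustment rule
    return abs(deviation) > threshold
```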

Relevance:

60.00%

Publisher:

Abstract:

Traditional pattern recognition techniques cannot handle the classification of large datasets with both efficiency and effectiveness. In this context, the Optimum-Path Forest (OPF) classifier was recently introduced, aiming at high recognition rates and low computational cost. Although OPF was much faster than Support Vector Machines for training, it was slightly slower for classification. In this paper, we present the Efficient OPF (EOPF), an enhanced and faster version of the traditional OPF, and validate it for the automatic recognition of white matter and gray matter in magnetic resonance images of the human brain. © 2010 IEEE.
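For readers unfamiliar with OPF, the sketch below is a compact, unoptimised rendition of the general idea (prototypes taken from inter-class MST edges, f_max path costs propagated Dijkstra-style, classification by the minimising training node); it is not the authors' EOPF and omits their efficiency improvements.

```python
# Compact Optimum-Path Forest sketch (assumes numpy arrays X (n x d), y (n,) with
# at least two classes); not the optimised EOPF of the paper.
import heapq
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def opf_fit(X, y):
    D = cdist(X, X)
    mst = minimum_spanning_tree(D).toarray()
    proto = set()
    for i, j in zip(*np.nonzero(mst)):                 # MST edges
        if y[i] != y[j]:
            proto.update((i, j))                       # boundary samples become prototypes
    cost = np.full(len(X), np.inf)
    label = y.copy()
    for p in proto:
        cost[p] = 0.0
    heap = [(0.0, p) for p in proto]
    heapq.heapify(heap)
    while heap:                                        # Dijkstra-like pass with f_max
        c, s = heapq.heappop(heap)
        if c > cost[s]:
            continue
        for t in range(len(X)):
            new_c = max(c, D[s, t])
            if new_c < cost[t]:
                cost[t], label[t] = new_c, label[s]
                heapq.heappush(heap, (new_c, t))
    return X, cost, label

def opf_predict(model, Xtest):
    X, cost, label = model
    D = cdist(Xtest, X)
    path_cost = np.maximum(cost[None, :], D)           # f_max against every training node
    return label[np.argmin(path_cost, axis=1)]
```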

Relevance:

60.00%

Publisher:

Abstract:

In this paper we propose a fast and accurate method for fault diagnosis in power transformers by means of the Optimum-Path Forest (OPF) classifier. Since we applied Dissolved Gas Analysis (DGA), the samples were labeled according to the IEEE/IEC standard and then analyzed by OPF and several other well-known supervised pattern recognition techniques. The experiments showed that OPF can achieve high recognition rates with low computational cost. © 2012 IEEE.

Relevance:

60.00%

Publisher:

Abstract:

An important tool for heart disease diagnosis is the analysis of electrocardiogram (ECG) signals, given the non-invasive nature and simplicity of the ECG exam. Depending on the application, ECG data analysis consists of steps such as preprocessing, segmentation, feature extraction and classification, aiming to detect cardiac arrhythmias (i.e., cardiac rhythm abnormalities). Aiming at a fast and accurate cardiac arrhythmia signal classification process, we apply and analyze a recent and robust supervised graph-based pattern recognition technique, the optimum-path forest (OPF) classifier. To the best of our knowledge, this is the first time the OPF classifier has been used for the ECG heartbeat signal classification task. We then compare the performance (in terms of training and testing time, accuracy, specificity, and sensitivity) of the OPF classifier with that of three other well-known expert system classifiers, i.e., the support vector machine (SVM), Bayesian and multilayer artificial neural network (MLP) classifiers, using features extracted from six main approaches considered in the literature for ECG arrhythmia analysis. In our experiments, we use the MIT-BIH Arrhythmia Database and the evaluation protocol recommended by the Association for the Advancement of Medical Instrumentation. A discussion of the obtained results shows that the OPF classifier presents a robust performance, i.e., there is no need for parameter setup, as well as a high accuracy at an extremely low computational cost. Moreover, on average, the OPF classifier yielded greater performance than the MLP and SVM classifiers in terms of classification time and accuracy, and produced performance quite similar to the Bayesian classifier, showing it to be a promising technique for ECG signal analysis. © 2012 Elsevier Ltd. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

Graduate Program in Electrical Engineering - FEIS

Relevance:

60.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

60.00%

Publisher:

Abstract:

True-amplitude migration of seismic reflection data, in depth or in time, makes it possible to obtain a measure of the reflection coefficients of the so-called primary reflection events. These events consist, for example, of P-P longitudinal wave reflections at smooth reflectors of arbitrary curvature. One of the best-known methods is the so-called Kirchhoff migration, in which the seismic image is produced by integrating the seismic wavefield along diffraction surfaces, known as Huygens surfaces. In order to obtain an estimate of the reflection coefficients during migration, that is, to correct for the geometrical spreading effect, a weight function is used in the migration integral operator. This weight function is obtained from the asymptotic solution of the integral at stationary points. Both the computation of the traveltimes and the determination of the weight function require ray tracing, which makes migration a computationally expensive process in situations of strong heterogeneity of the physical property. In this work, a true-amplitude depth migration algorithm is presented for the case of a point seismic source, with the subsurface velocity model represented by a function that varies in two dimensions and is constant in the third dimension. This situation, known as the two-and-a-half-dimensional (2.5-D) model, has characteristics typical of many situations of interest in petroleum exploration, such as the acquisition of 2-D seismic data with receivers along a seismic line and a 3-D seismic source. Particular emphasis is given to the case in which the seismic wave propagation velocity varies linearly with depth. Another important topic addressed in this work concerns the seismic inversion method known as the diffraction double stack. Through the quotient of two stacks with appropriate weights, physical properties and geometric parameters related to the trajectory of the reflected ray can be determined, which can later be used in seismic data processing, for example for amplitude analysis.
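As a hedged illustration of the diffraction-stack principle described above, the sketch below performs plain Kirchhoff (Huygens-surface) summation for a 2-D zero-offset, constant-velocity section; the true-amplitude weight function and the 2.5-D ray-based traveltimes developed in the work are deliberately omitted.

```python
# Plain diffraction-stack (Kirchhoff) migration of a zero-offset, constant-velocity
# 2-D section: summation of the data along Huygens (diffraction) hyperbolae.
import numpy as np

def kirchhoff_migrate(data, dx, dt, v):
    """data[trace, sample]: zero-offset section; returns a migrated image of the same shape."""
    n_x, n_t = data.shape
    image = np.zeros_like(data)
    x = np.arange(n_x) * dx
    for ix, x0 in enumerate(x):               # image point lateral position
        for it in range(n_t):
            t0 = it * dt                      # image point two-way time
            # Huygens curve: two-way diffraction traveltime to every trace.
            t_d = np.sqrt(t0**2 + (2.0 * (x - x0) / v) ** 2)
            samples = np.rint(t_d / dt).astype(int)
            valid = samples < n_t
            image[ix, it] = data[np.arange(n_x)[valid], samples[valid]].sum()
    return image
```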

Relevance:

60.00%

Publisher:

Abstract:

Recognition of individuals through the characteristics of the iris has in recent years become a well-accepted technique, due both to the high reliability of this procedure and to its non-invasiveness. The methods used in such procedures seek information over the whole iris, which, depending on the algorithm used, may result in high computational costs. Considering that most characteristics of the iris lie in its inner region, the goal of this work is to develop an algorithm for the recognition of individuals using only this region. To bring the outcome of our approach to the level of the best techniques described in the literature, the method still needs further refinement; even so, the results show a promising technique.
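A hedged sketch of matching restricted to the inner iris region — polar unwrapping of the inner annulus, a crude binary code and Hamming-distance comparison — is shown below; radii, resolutions and the feature itself are placeholders rather than the algorithm actually developed in the work.

```python
# Sketch: unwrap only the inner iris annulus to polar coordinates, binarise a simple
# feature, and compare two codes by Hamming distance. All parameters are placeholders.
import numpy as np

def unwrap_inner_iris(img, cx, cy, r_pupil, r_inner, n_r=32, n_theta=256):
    r = np.linspace(r_pupil, r_inner, n_r)               # inner annulus only
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    xs = (cx + np.outer(r, np.cos(theta))).astype(int)
    ys = (cy + np.outer(r, np.sin(theta))).astype(int)
    return img[ys.clip(0, img.shape[0] - 1), xs.clip(0, img.shape[1] - 1)]

def iris_code(strip):
    return strip > np.median(strip)                       # crude 1-bit feature per pixel

def hamming_distance(code_a, code_b):
    return np.mean(code_a != code_b)                      # 0 means identical codes
```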