946 results for signal processing algorithms
Abstract:
This dissertation presents the development of a multimodal signal acquisition and processing platform. The proposed project fits within the development of multimodal interfaces for robotic rehabilitation devices, adapting the control of these devices according to the user's intention. The developed interface acquires, synchronizes, and processes electroencephalographic (EEG) and electromyographic (EMG) signals, as well as signals from inertial measurement units (IMUs). Data acquisition is performed in experiments with healthy subjects executing lower-limb motor tasks. The goal is to analyze movement intention, muscle activation, and the actual onset of the performed movements through the EEG, EMG, and IMU signals, respectively. To this end, an offline analysis was carried out, using processing techniques for the biological signals as well as techniques for processing the inertial sensor signals; from the latter, the knee joint angles are also measured throughout the movements. An experimental test protocol was proposed for the tasks performed. The results show that the proposed system was able to acquire, synchronize, process, and classify the signals in combination. Analyses of the classifiers' accuracy showed that the interface identified movement intention in 76.0 ± 18.2% of the movements. The largest mean movement anticipation time, 716.0 ± 546.1 ms, was obtained from the EEG signal analysis; from the EMG signal alone, this value was 88.34 ± 67.28 ms. The results of the biological signal processing stages, the joint angle measurements, and the accuracy and anticipation time values are consistent with the current related literature.
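The EMG onset-detection step of such an offline analysis can be sketched as follows. This is a minimal illustration, assuming a common rectify-and-envelope pipeline with a baseline-statistics threshold; the window length, baseline duration, and threshold factor are illustrative choices, not the dissertation's actual parameters.

```python
import numpy as np

def emg_onset(emg, fs, baseline_s=0.5, win_s=0.05, k=5.0):
    """Estimate movement onset time (s) from a raw EMG trace.

    Rectify, smooth with a moving-average envelope, and flag the first
    sample exceeding baseline mean + k * baseline std (a common heuristic).
    """
    rect = np.abs(emg - np.mean(emg))               # remove offset, rectify
    win = max(1, int(win_s * fs))
    env = np.convolve(rect, np.ones(win) / win, mode="same")  # envelope
    nb = int(baseline_s * fs)                       # baseline (rest) segment
    thr = env[:nb].mean() + k * env[:nb].std()
    above = np.nonzero(env > thr)[0]
    return above[0] / fs if above.size else None

# Synthetic example: rest for 1 s, then a burst of muscle activity
fs = 1000
rng = np.random.default_rng(0)
sig = rng.normal(0, 0.05, 2 * fs)
sig[fs:] += rng.normal(0, 1.0, fs)                  # burst starts at t = 1 s
t_on = emg_onset(sig, fs)
```

The detected onset lands at the start of the synthetic burst; in the real pipeline the same timestamp would be compared against the IMU-detected movement start to compute the anticipation time.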
Abstract:
Background: Surgical repair of pectus excavatum (PE) has become more popular due to improvements in the minimally invasive Nuss procedure. The pre-surgical assessment of PE patients requires Computerized Tomography (CT), as the malformation characteristics vary from patient to patient. Objective: This work aims to characterize soft tissue thickness (STT) external to the ribs among PE patients. It also presents a comparative analysis between the anterior chest wall surface before and after surgical correction. Methods: Through surrounding tissue segmentation in CT data, STT values were calculated at different lines along the thoracic wall, with a reference point in the intersection of coronal and median planes. The comparative analysis between the two 3D anterior chest surfaces sets a surgical correction influence area (SCIA) and a volume of interest (VOI) based on image processing algorithms, 3D surface algorithms, and registration methods. Results: There are always variations between left and right side STTs (2.54±2.05 mm and 2.95±2.97 mm for female and male patients, respectively). STTs are dependent on age, sex, and body mass index of each patient. On female patients, breast tissue induces additional errors in bar manual
Abstract:
Pectus excavatum is the most common deformity of the thorax. Pre-operative diagnosis usually includes Computed Tomography (CT) to successfully employ a thoracic prosthesis for anterior chest wall remodeling. Aiming at the elimination of radiation exposure, this paper presents a novel methodology for the replacement of CT by a (radiation-free) 3D laser scanner for prosthesis modeling. The complete elimination of CT is based on an accurate determination of rib positions and of the prosthesis placement region from skin surface points. The developed solution resorts to the normalized and combined outcome of a set of artificial neural networks (ANNs). Each ANN model was trained with data vectors from 165 male patients, using soft tissue thicknesses (STT) comprising information from the skin and rib cage (automatically determined by image processing algorithms). Tests revealed that rib positions for prosthesis placement and modeling can be estimated with an average error of 5.0 ± 3.6 mm. Tests also showed that ANN performance can be improved by introducing a manually determined initial STT value in the ANN normalization procedure (average error of 2.82 ± 0.76 mm). This error range is well below that of current manual prosthesis modeling (approximately 11 mm), which can provide a valuable, radiation-free procedure for prosthesis personalization.
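The regression task such an ANN solves (skin-surface/STT features in, rib depth out) can be sketched with one small network trained by plain gradient descent. Everything below is synthetic and illustrative: the paper combines a normalized set of ANNs trained on CT-derived measurements from 165 patients, not a single network on made-up data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic stand-in for the paper's data: 4 skin-surface features -> rib depth (mm).
X = rng.uniform(0, 1, (200, 4))
y = (5.0 + 10.0 * X[:, 0] - 3.0 * X[:, 1] ** 2).reshape(-1, 1)   # made-up target

# One-hidden-layer MLP with tanh units, trained by full-batch gradient descent
W1 = rng.normal(0, 0.5, (4, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, (8, 1)); b2 = np.zeros(1)
lr = 0.05

def forward(X):
    h = np.tanh(X @ W1 + b1)
    return h, h @ W2 + b2

_, pred0 = forward(X)
loss0 = float(np.mean((pred0 - y) ** 2))    # error before training

for _ in range(2000):
    h, pred = forward(X)
    err = (pred - y) / len(X)               # gradient of MSE/2 w.r.t. pred
    gW2 = h.T @ err; gb2 = err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)        # backprop through tanh
    gW1 = X.T @ dh; gb1 = dh.sum(0)
    W1 -= lr * gW1; b1 -= lr * gb1; W2 -= lr * gW2; b2 -= lr * gb2

_, pred = forward(X)
mse = float(np.mean((pred - y) ** 2))       # error after training
```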
Abstract:
Sliding mode controllers for power converters usually employ hysteresis comparators to directly generate the switching states of the power semiconductors. This paper presents a new sliding mode modulator based on the direct implementation of the sliding mode stability condition, which for multilevel power converters shows advantages such as branch-equalized switching frequencies and lower distortion of the AC currents when operating near the rated converter power. The new sliding mode multilevel modulator is used to control a three-phase multilevel converter operated as a reactive power compensator (STATCOM), implementing the stability condition in a digital signal processing system. The performance of this new sliding mode modulator is compared with that of a multilevel modulator based on hysteresis comparators. Simulation and experimental results are presented to highlight the system operation and control robustness.
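The idea of applying the stability condition directly can be illustrated on a toy single-phase model: with the sliding surface defined as the current error s = i_ref − i, the modulator evaluates s·ds/dt for every available voltage level and applies the least aggressive level that still satisfies s·ds/dt < 0. The R-L load and five-level set below are illustrative assumptions, not the paper's STATCOM power stage.

```python
import numpy as np

R, L, Vdc = 1.0, 20e-3, 400.0                     # illustrative load/bus values
levels = np.array([-1.0, -0.5, 0.0, 0.5, 1.0]) * Vdc   # five-level converter
fs, f, amp = 20_000, 50.0, 10.0
dt = 1.0 / fs
t = np.arange(0.0, 0.04, dt)
i_ref = amp * np.sin(2 * np.pi * f * t)
dref = amp * 2 * np.pi * f * np.cos(2 * np.pi * f * t)

i, out = 0.0, []
for k in range(len(t)):
    s = i_ref[k] - i                              # sliding surface
    ds = dref[k] - (levels - R * i) / L           # ds/dt for each candidate level
    ok = np.nonzero(s * ds < 0)[0]                # levels meeting the condition
    if ok.size:
        v = levels[ok[np.argmin(np.abs(ds[ok]))]] # gentlest valid level
    else:
        v = levels[np.argmin(s * ds)]             # fallback: most stabilizing
    i += dt * (v - R * i) / L                     # Euler step of the load current
    out.append(i)

max_err = float(np.max(np.abs(i_ref - np.array(out))))
mean_err = float(np.mean(np.abs(i_ref - np.array(out))))
```

Preferring the gentlest valid level is what lets intermediate levels be used instead of bang-bang switching between the extremes, which is the multilevel advantage the abstract refers to.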
Design of improved rail-to-rail low-distortion and low-stress switches in advanced CMOS technologies
Abstract:
This paper describes the efficient design of an improved, dedicated switched-capacitor (SC) circuit capable of linearizing CMOS switches so that SC circuits can reach low distortion levels. The described circuit (SC linearization control circuit, SLC) has the advantage over conventional clock-bootstrapping circuits of exhibiting low stress, since large gate voltages are avoided. This paper presents exhaustive corner simulation results for an SC sample-and-hold (S/H) circuit employing the proposed and optimized circuits, together with the experimental evaluation of a complete 10-bit ADC using this S/H circuit. These results show that SLC circuits can reduce distortion and increase dynamic linearity above 12 bits over wide input signal bandwidths.
Abstract:
This paper is an elaboration of the DECA algorithm [1] to blindly unmix hyperspectral data. The underlying mixing model is linear, meaning that each pixel is a linear mixture of the endmember signatures weighted by the corresponding abundance fractions. The proposed method, like DECA, is tailored to highly mixed data in which geometric approaches fail to identify the simplex of minimum volume enclosing the observed spectral vectors. We therefore resort to a statistical framework, where the abundance fractions are modeled as mixtures of Dirichlet densities, thus enforcing the constraints on abundance fractions imposed by the acquisition process, namely non-negativity and constant sum. With respect to DECA, we introduce two improvements: 1) the number of Dirichlet modes is inferred based on the minimum description length (MDL) principle; 2) the generalized expectation-maximization (GEM) algorithm we adopt to infer the model parameters is improved by using alternating minimization and augmented Lagrangian methods to compute the mixing matrix. The effectiveness of the proposed algorithm is illustrated with simulated and real data.
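The constraints the Dirichlet modeling enforces (non-negativity and constant sum) define the probability simplex. As a minimal sketch, assuming known endmember signatures, the abundance vector of a single pixel can be estimated by projected gradient descent onto that simplex; DECA itself is a statistical GEM algorithm, so this only illustrates the mixing model and its constraints.

```python
import numpy as np

# Linear mixing model: pixel x = M @ a, with a >= 0 and sum(a) = 1.

def project_simplex(v):
    """Euclidean projection of v onto {a : a >= 0, sum(a) = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u) - 1.0
    rho = np.nonzero(u - css / np.arange(1, len(v) + 1) > 0)[0][-1]
    return np.maximum(v - css[rho] / (rho + 1.0), 0.0)

rng = np.random.default_rng(2)
M = rng.uniform(0, 1, (50, 3))              # 3 endmember signatures, 50 bands
a_true = np.array([0.6, 0.3, 0.1])
x = M @ a_true + rng.normal(0, 0.01, 50)    # noisy mixed pixel

a = np.full(3, 1.0 / 3.0)                   # start at the simplex center
lr = 1.0 / np.linalg.norm(M, 2) ** 2        # step from the Lipschitz bound
for _ in range(500):
    a = project_simplex(a - lr * (M.T @ (M @ a - x)))
```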
Abstract:
The work presented in this dissertation concerns the conception, design, and experimental realization of a fault-tolerant static power converter. Research on failure modes of power electronic converters, fault-tolerant converter topologies, and fault detection methods, among other topics, was reviewed. To devise a solution, the main failure modes were identified and analyzed for three proposed fault-tolerant converter topologies with redundant elements in standby mode. Several technical aspects of the power and signal-routing circuits were analyzed, notably the need for dead times between the gate signals of IGBTs in the same leg, galvanic isolation between the gate-drive stages, and the need to minimize stray inductances between the DC capacitor and the converter legs. To improve the reliability and operational safety of the fault-tolerant static power converter, an electronic circuit was designed to speed up the normal actuation of contactors, along with another circuit responsible for routing and inhibiting the gate signals. To apply the developed fault-tolerant converter to a DC motor drive, a control algorithm was implemented on a digital signal processing (DSP) board, with system supervision and actuation performed in real time for fault detection, contactor actuation, and motor current and speed control using a PWM strategy. Tests were carried out in which, upon suitable fault detection, the system switches between power converter blocks. Experimental results obtained with the laboratory prototype are presented and discussed.
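The dead-time requirement mentioned above can be sketched in a few lines: around every transition of the ideal PWM command, both switches of a leg are held off for a fixed number of samples, so the upper and lower IGBTs can never conduct simultaneously. The sample-count values below are illustrative, not the dissertation's timing.

```python
# Dead-time insertion between complementary IGBT gate signals of one leg.
def with_dead_time(top_cmd, dead):
    """top_cmd: list of 0/1 ideal commands for the upper switch.

    A switch may turn on only after the command has been stable for
    `dead` consecutive samples; until then both gate signals stay low.
    """
    top, bottom = [], []
    for k, c in enumerate(top_cmd):
        recent = top_cmd[max(0, k - dead):k + 1]
        stable = len(recent) == dead + 1 and all(v == c for v in recent)
        top.append(1 if c == 1 and stable else 0)
        bottom.append(1 if c == 0 and stable else 0)
    return top, bottom

cmd = [0, 0, 0, 1, 1, 1, 1, 0, 0, 0, 0]
top, bottom = with_dead_time(cmd, dead=2)
overlap = any(t and b for t, b in zip(top, bottom))
```

In hardware this is done in the gate-drive logic rather than in software, but the invariant is the same: `overlap` must always be false.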
Abstract:
This dissertation presents the development of an RLC passive-component meter. The meter is based on a prototype developed to measure the impedance of a device under test. Using a data acquisition board as the interface, the prototype communicates with a computer that controls the entire measurement process, from signal acquisition and processing to parameter computation and display. The implemented measurement topology is the auto-balancing bridge, and the processing relies on synchronous (coherent) demodulation. Its feasibility is supported by a prior theoretical study, by the discussion of the design choices, and by the results obtained with an algorithm developed in the LabVIEW graphical programming environment.
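The synchronous demodulation step can be sketched numerically: multiplying each sampled voltage by a complex reference at the test frequency and averaging extracts its in-phase and quadrature components, and the impedance follows from the ratio of the DUT voltage to the DUT current. The range resistor value, test frequency, and series R-C device below are illustrative assumptions, not the dissertation's circuit values.

```python
import numpy as np

fs, f0, Rr = 100_000, 1_000.0, 1_000.0    # sample rate, test freq, range resistor
t = np.arange(0, 0.1, 1 / fs)             # exactly 100 cycles of f0

def iq(v, f, t):
    """Complex amplitude of v at frequency f via coherent demodulation."""
    return 2 * np.mean(v * np.exp(-2j * np.pi * f * t))

# Simulated DUT: series R-C (1 kOhm + 100 nF) driven at f0
Zdut = 1_000.0 + 1 / (2j * np.pi * f0 * 100e-9)
vx = np.real(1.0 * np.exp(2j * np.pi * f0 * t))            # voltage across DUT
ix = np.real((1.0 / Zdut) * np.exp(2j * np.pi * f0 * t))   # current through DUT
vr = Rr * ix    # in the auto-balancing bridge, the op-amp output senses Rr*ix

Z = Rr * iq(vx, f0, t) / iq(vr, f0, t)    # recovered complex impedance
```

Averaging over an integer number of test-signal periods is what makes the demodulation reject harmonics and wideband noise.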
Abstract:
Structures experience various types of loads over their lifetime, which can be either static or dynamic and may be associated with phenomena such as corrosion and chemical attack, among others. As a consequence, different types of structural damage can be produced; the deteriorated structure may have its capacity affected, leading to excessive vibration problems or even possible failure. It is therefore very important to develop methods able to simultaneously detect the existence of damage and quantify its extent. In this paper the authors propose a method to detect and quantify structural damage using response transmissibilities measured along the structure. Some numerical simulations are presented and a comparison is made with results using frequency response functions. Experimental tests are also undertaken to validate the proposed technique. (C) 2011 Elsevier Ltd. All rights reserved.
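A transmissibility is the frequency-domain ratio of two response spectra, T(w) = X_i(w) / X_j(w), and a simple damage indicator is the normalized change in T between a reference ("healthy") state and the current state. The 2-DOF spring-mass model and the 20 % stiffness loss below are illustrative assumptions, not the paper's test structure or indicator.

```python
import numpy as np

def transmissibility(m2, k2, w):
    """T = X2/X1 for a mass m2 hung from mass 1 by spring k2 (undamped):
    m2*x2'' + k2*(x2 - x1) = 0  =>  X2/X1 = k2 / (k2 - m2*w^2)."""
    return k2 / (k2 - m2 * w ** 2)

w = np.linspace(1.0, 50.0, 500)            # below the local resonance
T_ref = transmissibility(1.0, 1e4, w)      # healthy state
T_dam = transmissibility(1.0, 0.8e4, w)    # 20 % stiffness loss in k2

# Normalized indicator: zero when no damage, grows with damage extent
indicator = float(np.linalg.norm(T_dam - T_ref) / np.linalg.norm(T_ref))
```

Because transmissibilities are ratios of responses, they can be measured without knowing the excitation force, which is the practical appeal of the approach.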
Abstract:
Medium voltage (MV) load diagrams were defined based on the knowledge discovery in databases (KDD) process. Clustering techniques were used to support agents in the electric power retail markets in obtaining specific knowledge of their customers' consumption habits. Each customer class resulting from the clustering operation is represented by its load diagram. The Two-Step clustering algorithm and the WEACS approach, based on evidence accumulation clustering (EAC), were applied to electricity consumption data from a utility's client database in order to form the customer classes and to find a set of representative consumption patterns. The WEACS approach is a clustering ensemble combination approach that uses subsampling and weights the partitions differently in the co-association matrix. As a complementary step, all the final data partitions produced by the different variations of the method are combined, and the Ward link algorithm is used to obtain the final data partition. Experimental results showed that the WEACS approach led to better accuracy than many other clustering approaches, and that it separates the customer population better than the Two-Step clustering algorithm.
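The evidence accumulation idea behind WEACS can be sketched as follows: several base partitions vote in a co-association matrix C[i, j] (the fraction of partitions that place i and j in the same cluster), and the final partition is extracted by Ward linkage on 1 − C. The toy 2-D data, tiny k-means, and unweighted votes below are illustrative; WEACS additionally subsamples the data and weights each partition's contribution.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(3)
# Two synthetic "consumption pattern" groups standing in for load diagrams
X = np.vstack([rng.normal(0, 0.3, (20, 2)), rng.normal(3, 0.3, (20, 2))])
n = len(X)

def kmeans_labels(X, k, rng):
    """Tiny Lloyd's k-means, enough to generate varied base partitions."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(10):
        lab = np.argmin(((X[:, None] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([X[lab == j].mean(0) if np.any(lab == j) else C[j]
                      for j in range(k)])
    return lab

co = np.zeros((n, n))
n_parts = 30
for _ in range(n_parts):
    lab = kmeans_labels(X, int(rng.integers(2, 6)), rng)  # k varies per run
    co += (lab[:, None] == lab[None, :])                  # accumulate votes
co /= n_parts

# Ward linkage on the co-association "distance" 1 - C, cut into 2 classes
d = 1.0 - co[np.triu_indices(n, 1)]
labels = fcluster(linkage(d, method="ward"), t=2, criterion="maxclust")
```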
Abstract:
This paper studies human DNA from a signal processing perspective. Six wavelets are tested for analyzing the information content of human DNA. By adopting the real Shannon wavelet, several fundamental properties of the code are revealed. A quantitative comparison of the chromosomes, with visualization through multidimensional scaling and dendrograms, is developed.
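Any signal processing of DNA, wavelet or Fourier, first requires a numeric mapping of the symbolic sequence; a standard choice is one binary indicator sequence per nucleotide. The toy string below is synthetic and only illustrates the mapping plus a simple spectral view of it (the period-3 peak characteristic of coding regions), not the paper's Shannon-wavelet analysis.

```python
import numpy as np

seq = "ATG" * 60 + "ACGT" * 15     # synthetic string with a period-3 region

def indicators(seq):
    """One 0/1 numpy array per base; exactly one is 1 at each position."""
    return {b: np.array([1.0 if c == b else 0.0 for c in seq]) for b in "ACGT"}

u = indicators(seq)
N = len(seq)
# Combined power spectrum of the four indicator sequences
S = sum(np.abs(np.fft.fft(x)) ** 2 for x in u.values())
k3 = round(N / 3)                  # DFT bin corresponding to a 3-base period
```

The same indicator sequences would be the input to a continuous wavelet transform, which adds localization in position to this purely spectral view.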
Abstract:
Video coding technologies have played a major role in the explosion of large-market digital video applications and services. In this context, the very popular MPEG-x and H.26x video coding standards adopted a predictive coding paradigm, where complex encoders exploit data redundancy and irrelevancy to 'control' much simpler decoders. This codec paradigm fits applications and services such as digital television and video storage, where decoder complexity is critical, but does not match the requirements of emerging applications such as visual sensor networks, where encoder complexity is more critical. The Slepian-Wolf and Wyner-Ziv theorems brought the possibility to develop the so-called Wyner-Ziv video codecs, following a different coding paradigm where it is the task of the decoder, and no longer the encoder, to (fully or partly) exploit the video redundancy. Theoretically, Wyner-Ziv video coding does not incur any compression performance penalty relative to the more traditional predictive coding paradigm (at least under certain conditions). In Wyner-Ziv video codecs, the so-called side information, a decoder estimate of the original frame to code, plays a critical role in the overall compression performance. For this reason, much research effort has been invested over the past decade in developing increasingly efficient side information creation methods. The main objective of this paper is to review and evaluate the available side information methods after proposing a classification taxonomy to guide the review, allowing more solid conclusions and a better identification of the next relevant research challenges.
After classifying the side information creation methods into four classes, notably guess, try, hint and learn, the review of the most important techniques in each class and the evaluation of some of them lead to the important conclusion that which side information creation method provides better rate-distortion (RD) performance depends on the amount of temporal correlation in each video sequence. It also became clear that the best available Wyner-Ziv video coding solutions are almost systematically based on the learn approach. The best solutions are already able to systematically outperform H.264/AVC Intra, and also the H.264/AVC zero-motion standard solutions for specific types of content. (C) 2013 Elsevier B.V. All rights reserved.
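The simplest "guess"-class side information is a temporal interpolation of the two decoded neighbouring frames. The sketch below, on a synthetic panning pattern, shows why temporal correlation drives side information quality: averaging the neighbours beats copying the previous frame when motion is smooth. Real codecs use motion-compensated interpolation rather than a plain average.

```python
import numpy as np

# Synthetic smooth pattern panning 1 px per frame to the right
j = np.arange(32)
prev_f = np.sin(2 * np.pi * j / 32)[None, :] * np.ones((8, 1))
wz     = np.roll(prev_f, 1, axis=1)   # the Wyner-Ziv frame to estimate
next_f = np.roll(prev_f, 2, axis=1)   # next decoded key frame

side_info = 0.5 * (prev_f + next_f)   # "guess" by temporal averaging
mse_si   = float(np.mean((side_info - wz) ** 2))
mse_copy = float(np.mean((prev_f - wz) ** 2))   # naive: copy previous frame
```

The better the side information approximates the original frame, the fewer Wyner-Ziv parity bits the decoder must request, which is exactly the RD dependence on temporal correlation noted above.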
Abstract:
Master's dissertation in Electrical and Computer Engineering.