989 results for Complex Signals
Abstract:
Timely detection of sudden changes in dynamics that adversely affect the performance of systems and the quality of products has great scientific relevance. This work focuses on the effective detection of dynamical changes in real-time signals from mechanical as well as biological systems using the fast and robust technique of permutation entropy (PE). The results are used in detecting chatter onset in machine turning and in identifying vocal disorders from speech signals. Permutation entropy is a nonlinear complexity measure that can efficiently distinguish the regular and complex nature of any signal and extract information about changes in the dynamics of the process by indicating sudden changes in its value. Here we propose the use of PE to detect dynamical changes in two nonlinear processes: turning, a mechanical system, and speech, a biological system. The effectiveness of PE in detecting changes in the dynamics of the turning process is studied from time series generated with samples of audio and current signals. Experiments are carried out on a lathe for a sudden increase in depth of cut and for a continuous increase in depth of cut on mild steel workpieces, keeping the speed and feed rate constant. The results are applied to detect chatter onset in machining and are verified using the frequency spectra of the signals and the nonlinear measure of normalized coarse-grained information rate (NCIR). PE analysis is also carried out to investigate the variation in surface texture caused by chatter on the machined workpiece. A statistical parameter from the optical grey-level intensity histogram of the laser speckle pattern, recorded using a charge-coupled device (CCD) camera, is used to generate the time series required for PE analysis; a standard optical roughness parameter is used to confirm the results. The application of PE to identifying vocal disorders is studied from speech signals recorded using a microphone. Here the analysis is carried out using speech signals of subjects with different pathological conditions and of normal subjects, and the results are used for identifying vocal disorders; the standard linear technique of the FFT is used to substantiate the results. The results of PE analysis in all three cases clearly indicate that this complexity measure is sensitive to changes in the regularity of a signal and hence can suitably be used for the detection of dynamical changes in real-world systems. This work establishes the application of the simple, inexpensive and fast PE algorithm for the benefit of advanced manufacturing processes as well as clinical diagnosis of vocal disorders.
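For reference, a minimal Python sketch of the Bandt-Pompe permutation entropy that drives all three analyses above; the embedding order and delay shown are illustrative defaults, not the values used in the thesis.

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Bandt-Pompe permutation entropy of a 1-D signal."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # Embed the signal: each row is one delay vector.
    emb = np.array([x[i:i + order * delay:delay] for i in range(n)])
    # The ordinal pattern of each row is its argsort permutation.
    patterns = np.argsort(emb, axis=1, kind="stable")
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    h = -np.sum(p * np.log2(p))
    # Normalize by log2(order!) so regular signals -> 0, white noise -> 1.
    return h / np.log2(factorial(order)) if normalize else h

# Example: PE is near 1 for white noise and much lower for a periodic
# signal, so a sudden drop flags a chatter-like change in dynamics.
t = np.linspace(0, 10, 2000)
noisy = np.random.default_rng(0).normal(size=2000)
periodic = np.sin(2 * np.pi * 5 * t)
print(permutation_entropy(noisy, order=5), permutation_entropy(periodic, order=5))
```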
Abstract:
An experimental comparison of the information features used by a neural network is performed via a probing method: a suboptimal classifier, matched to a Gaussian model of the training data, is used as a probe. Neural networks with perceptron and single-hidden-layer feedforward architectures were used. The experiments were carried out on spatial ultrasonic data of the kind used to train the neural controller of a car passenger safety system. In this paper we show that a neural network does not make full use of the Gaussian components, i.e. the first two moments of the probability distribution. On the contrary, the network can find more complicated regularities inside the data vectors and thus achieves better results than the suboptimal classifier. Connecting the suboptimal classifier in parallel improves the performance of a modular neural network, whereas connecting it to the network input improves the specialization effect during training.
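A hedged sketch of this comparison setup, using scikit-learn's quadratic discriminant (optimal only under the Gaussian model, hence "suboptimal" as a general classifier) against a one-hidden-layer feedforward net; synthetic data stands in for the ultrasonic vectors, which are not public.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the spatial ultrasonic feature vectors.
X, y = make_classification(n_samples=2000, n_features=20, n_informative=8,
                           random_state=0)
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

# "Suboptimal" probe: optimal only if each class is Gaussian, i.e. it
# uses just the first two moments (mean and covariance) of the data.
qda = QuadraticDiscriminantAnalysis().fit(Xtr, ytr)

# Feedforward net with one hidden layer, as in the paper's setup.
mlp = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000,
                    random_state=0).fit(Xtr, ytr)

print("QDA accuracy:", qda.score(Xte, yte))
print("MLP accuracy:", mlp.score(Xte, yte))
```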
Abstract:
We present a signal processing approach using discrete wavelet transform (DWT) for the generation of complex synthetic aperture radar (SAR) images at an arbitrary number of dyadic scales of resolution. The method is computationally efficient and is free from significant system-imposed limitations present in traditional subaperture-based multiresolution image formation. Problems due to aliasing associated with biorthogonal decomposition of the complex signals are addressed. The lifting scheme of DWT is adapted to handle complex signal approximations and employed to further enhance the computational efficiency. Multiresolution SAR images formed by the proposed method are presented.
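The paper's lifting-based complex DWT is not reproduced here; as an approximation, this sketch applies PyWavelets' biorthogonal DWT to the real and imaginary channels of a complex image separately and recombines the subbands, yielding complex approximations at dyadic scales.

```python
import numpy as np
import pywt

def complex_dwt2(image, wavelet="bior2.2", level=3):
    """Multilevel 2-D DWT of a complex image: transform the real and
    imaginary channels separately, then recombine per subband."""
    re = pywt.wavedec2(image.real, wavelet, level=level)
    im = pywt.wavedec2(image.imag, wavelet, level=level)
    # Coefficients: approximation, then (H, V, D) detail tuples per level.
    coeffs = [re[0] + 1j * im[0]]
    for (rh, rv, rd), (ih, iv, id_) in zip(re[1:], im[1:]):
        coeffs.append((rh + 1j * ih, rv + 1j * iv, rd + 1j * id_))
    return coeffs

# The coarse-scale complex image is the approximation subband.
img = np.random.randn(256, 256) + 1j * np.random.randn(256, 256)
coarse = complex_dwt2(img)[0]
print(coarse.shape)
```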
Abstract:
Atrial fibrillation (AF) is the most common cardiac arrhythmia and is responsible for the highest number of rhythm-related disorders and cardioembolic strokes worldwide. Intracardiac signal analysis during the onset of paroxysmal AF led to the discovery of the pulmonary veins as a triggering source of AF, which in turn led to the development of pulmonary vein ablation, an established curative therapy for drug-resistant AF. Persistent/permanent AF is characterized by complex, multicomponent and rapid electrical activity widely involving the atrial substrate. The widespread nature of the problem and the complexity of the signals in persistent AF reduce the success rate of ablation therapy. Although signal processing applied to the extraction of relevant features from these complex electrograms has helped to improve the efficacy of ablation therapy in persistent/permanent AF, an improved understanding of these complex signals should help to identify the sources of AF and further increase the success rate of ablation therapy.
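As one concrete example of feature extraction from atrial electrograms (illustrative of, not specific to, the work surveyed above), a sketch of the widely used dominant-frequency measure; the band limits and filter order are assumptions.

```python
import numpy as np
from scipy.signal import welch, butter, sosfiltfilt

def dominant_frequency(egm, fs, band=(3.0, 15.0)):
    """Dominant frequency of an atrial electrogram: the peak of the
    Welch power spectrum inside an assumed physiological AF band."""
    # Band-pass to suppress baseline wander and high-frequency noise.
    sos = butter(3, band, btype="bandpass", fs=fs, output="sos")
    filtered = sosfiltfilt(sos, egm)
    f, pxx = welch(filtered, fs=fs, nperseg=min(len(filtered), 4 * int(fs)))
    mask = (f >= band[0]) & (f <= band[1])
    return f[mask][np.argmax(pxx[mask])]

# Example: a 6 Hz synthetic "fibrillatory" component should dominate.
fs = 1000
t = np.arange(0, 10, 1 / fs)
egm = np.sin(2 * np.pi * 6 * t) + 0.5 * np.random.randn(len(t))
print(dominant_frequency(egm, fs))  # ~6.0 Hz
```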
Abstract:
The genetic analysis of mate choice is fraught with difficulties. Males produce complex signals and displays that can consist of a combination of acoustic, visual, chemical and behavioural phenotypes. Furthermore, female preferences for these male traits are notoriously difficult to quantify. During mate choice, genes not only affect the phenotypes of the individuals they are in, but can also influence the expression of traits in other individuals. How can genetic analyses be conducted to encompass this complexity? Tighter integration of classical quantitative genetic approaches with modern genomic technologies promises to advance our understanding of the complex genetic basis of mate choice.
Abstract:
In this paper, we investigate the capacity of multiple-input multiple-output (MIMO) wireless communication systems over spatially correlated, Rayleigh-distributed flat-fading channels with complex additive Gaussian noise. Specifically, we derive the probability density function of the mutual information between the transmitted and received complex signals of MIMO systems. Using this density, we derive closed-form formulas for the ergodic (mean) capacity, delay-limited capacity, capacity variance and outage capacity of spatially correlated channels, and then evaluate these formulas numerically. Numerical results show how channel correlation degrades the capacity of MIMO communication systems. We also show that the density of the mutual information of correlated/uncorrelated MIMO systems can be approximated by a Gaussian density with the derived mean and variance, even for a finite number of inputs and outputs.
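The closed-form derivations are in the paper; the claim that the mutual-information density is approximately Gaussian can be checked by Monte Carlo, as in this sketch with an assumed exponential (Kronecker) correlation model and equal power allocation.

```python
import numpy as np

rng = np.random.default_rng(0)
nt, nr, snr, trials = 4, 4, 10.0, 10000

def exp_corr(n, rho):
    """Exponential correlation matrix R[i, j] = rho^|i - j|."""
    idx = np.arange(n)
    return rho ** np.abs(idx[:, None] - idx[None, :])

# Kronecker model: H = Rr^(1/2) G Rt^(1/2), via Cholesky square roots.
Rt_h = np.linalg.cholesky(exp_corr(nt, 0.7))
Rr_h = np.linalg.cholesky(exp_corr(nr, 0.7))

caps = np.empty(trials)
for k in range(trials):
    g = (rng.standard_normal((nr, nt))
         + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
    h = Rr_h @ g @ Rt_h.conj().T
    # Mutual information with equal power split across transmit antennas.
    m = np.eye(nr) + (snr / nt) * h @ h.conj().T
    caps[k] = np.log2(np.linalg.det(m).real)

print("ergodic capacity ~", caps.mean(), "bit/s/Hz, variance ~", caps.var())
# A histogram of `caps` lies close to a Gaussian with this mean/variance.
```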
Abstract:
Compressed sensing is a new paradigm in signal processing which states that, for certain matrices, sparse representations can be obtained by a simple l1-minimization. In this thesis we explore this paradigm for higher-dimensional signals. In particular, three cases are studied: signals taking values in a bicomplex algebra, quaternionic signals, and complex signals representable in a nonlinear Fourier basis, a so-called Takenaka-Malmquist system.
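A minimal sketch of the l1 recovery underlying this programme, for the ordinary complex-valued case only (the bicomplex and quaternionic settings need algebra-specific thresholding): ISTA for the l1-regularized least-squares problem, with complex soft thresholding. All parameters are illustrative.

```python
import numpy as np

def soft(z, t):
    """Complex soft-thresholding: shrink the magnitude, keep the phase."""
    mag = np.abs(z)
    return np.where(mag > t, (1 - t / np.maximum(mag, 1e-12)) * z, 0)

def ista(A, y, lam=0.01, iters=1000):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1 over complex x."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of gradient
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(iters):
        x = soft(x - (A.conj().T @ (A @ x - y)) / L, lam / L)
    return x

# Recover a 5-sparse complex vector from 60 random measurements of 200.
rng = np.random.default_rng(1)
n, m, k = 200, 60, 5
A = (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))) / np.sqrt(2 * m)
x0 = np.zeros(n, dtype=complex)
idx = rng.choice(n, k, replace=False)
x0[idx] = rng.standard_normal(k) + 1j * rng.standard_normal(k)

xhat = ista(A, A @ x0)
top = np.sort(np.argsort(np.abs(xhat))[-k:])
print("support recovered:", np.array_equal(top, np.sort(idx)))
print("relative error:", np.linalg.norm(xhat - x0) / np.linalg.norm(x0))
```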
Abstract:
This paper proposes an improved voice activity detection (VAD) algorithm using wavelets and a support vector machine (SVM) for the European Telecommunications Standards Institute (ETSI) adaptive multi-rate narrow-band (AMR-NB) and wide-band (AMR-WB) speech codecs. First, based on the wavelet transform, the original IIR filter bank and pitch/tone detector are implemented, respectively, via a wavelet filter bank and a wavelet-based pitch/tone detection algorithm. The wavelet filter bank divides the input speech signal into several frequency bands so that the signal power level in each sub-band can be calculated. In addition, the background noise level can be estimated in each sub-band using the wavelet de-noising method. The wavelet filter bank is also used to detect correlated complex signals such as music. The proposed algorithm then applies an SVM to train an optimized nonlinear VAD decision rule involving the sub-band powers, noise level, pitch period, tone flag, and complex-signal warning flag of the input speech. Using the trained SVM, the proposed VAD algorithm produces more accurate detection results. Experimental results on the Aurora speech database under different noise conditions show that the proposed algorithm achieves VAD performance considerably superior to that of the AMR-NB VAD Options 1 and 2 and the AMR-WB VAD.
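A toy sketch of the core idea, assuming db4 wavelets and an RBF SVM (neither specified above): wavelet sub-band log-powers as features for an SVM speech/noise decision. The pitch, tone and complex-signal flags of the real algorithm are omitted.

```python
import numpy as np
import pywt
from sklearn.svm import SVC

def subband_powers(frame, wavelet="db4", level=4):
    """Log power in each wavelet sub-band of one speech frame."""
    coeffs = pywt.wavedec(frame, wavelet, level=level)
    return np.array([np.log10(np.mean(c ** 2) + 1e-12) for c in coeffs])

# Toy training set: tonal speech-like frames vs. noise-only frames.
rng = np.random.default_rng(0)
fs, n = 8000, 256
t = np.arange(n) / fs
frames, labels = [], []
for _ in range(300):
    noise = 0.1 * rng.standard_normal(n)
    speech = np.sin(2 * np.pi * rng.uniform(100, 300) * t) + noise
    frames += [speech, noise]
    labels += [1, 0]

X = np.array([subband_powers(f) for f in frames])
svm = SVC(kernel="rbf").fit(X, labels)
print("training accuracy:", svm.score(X, labels))
```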
Abstract:
Synthetic-heterodyne demodulation is a useful technique for dynamic displacement and velocity detection in interferometric sensors, as it can provide an output signal that is immune to interferometric drift. With the advent of cost-effective, high-speed real-time signal-processing systems and software, processing the complex signals encountered in interferometry has become more feasible. In synthetic heterodyne, obtaining the actual dynamic displacement or vibration of the object under test requires knowledge of the interferometer visibility and of the argument of two Bessel functions. In this paper, a method is described for determining the former and for setting the Bessel-function argument to a value that ensures maximum sensitivity. Conventional synthetic-heterodyne demodulation uses two in-phase local oscillators; however, the phase of these oscillators relative to the interferometric signal is unknown. It is shown that, by using two additional quadrature local oscillators, a demodulated signal can be obtained that is independent of this phase difference. The experimental interferometer is a Michelson configuration using a visible single-mode laser whose current is sinusoidally modulated at a frequency of 20 kHz. The detected interferometer output is acquired with a 250 kHz analog-to-digital converter and processed in real time. The system is used to measure the displacement sensitivity, frequency response and linearity of a piezoelectric mirror shifter over a range of 500 Hz to 10 kHz. The experimental results show good agreement with two independent techniques: signal coincidence and the so-called n-commuted Pernick method.
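A simulation sketch of basic synthetic-heterodyne demodulation with two in-phase local oscillators, using the 20 kHz modulation and 250 kHz sampling quoted above; the Bessel argument C ~ 2.63 that equalizes J1 and J2 is a common choice, and the paper's quadrature-oscillator extension is not reproduced.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs, f_mod = 250_000, 20_000                    # sample and modulation rates
t = np.arange(0, 0.05, 1 / fs)
C = 2.63                                       # J1(C) ~ J2(C): equal gains
phi = 1.5 * np.sin(2 * np.pi * 500 * t)        # vibration to recover (rad)
s = 1.0 + 0.8 * np.cos(phi + C * np.cos(2 * np.pi * f_mod * t))

# Mix with in-phase local oscillators at f_mod and 2*f_mod, then low-pass:
# the Bessel expansion leaves terms proportional to J1(C)*sin(phi) at f_mod
# and J2(C)*cos(phi) at 2*f_mod.
b, a = butter(4, 5_000 / (fs / 2))
i1 = filtfilt(b, a, s * np.cos(2 * np.pi * f_mod * t))      # ~ -B*J1(C)*sin(phi)
i2 = filtfilt(b, a, s * np.cos(2 * np.pi * 2 * f_mod * t))  # ~ -B*J2(C)*cos(phi)

# With J1(C) = J2(C), the arctangent cancels both B and the Bessel weights.
phi_hat = np.unwrap(np.arctan2(-i1, -i2))
phi_hat -= phi_hat.mean()
print("peak error (rad):", np.max(np.abs(phi_hat - phi)[2000:-2000]))
```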
Abstract:
In this work we report results of continuous-wave (CW) electron paramagnetic resonance (EPR) spectroscopy of vanadium oxide nanotubes. The observed EPR spectra are composed of a weak, well-resolved spectrum of isolated V4+ ions on top of an intense and broad structureless line shape attributed to spin-spin exchanged V4+ clusters. To deconvolute the structured weak spectrum from the composite broad line, a new approach based on the Krylov basis diagonalization method (KBDM) is introduced. It is based on the discrimination between broad and sharp components with respect to a selectable threshold, and can be executed with few adjustable parameters, without the need for a priori information on the shape and structure of the lines. This makes the method advantageous with respect to other procedures and suitable for fast, routine spectral analysis, which, in conjunction with simulation techniques based on the spin Hamiltonian parameters, can provide a full characterization of the EPR spectrum. The results demonstrate and characterize the coexistence of two V4+ species in the nanotubes and show good progress toward the goal of obtaining high-fidelity deconvoluted spectra from complex signals with overlapping broader line shapes.
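The KBDM itself is not reproduced here; this sketch illustrates the same broad/sharp discrimination on a synthetic signal using the related matrix-pencil fit of damped exponentials, keeping only slowly decaying (sharp) components. All thresholds and rates are illustrative.

```python
import numpy as np

def damped_modes(y, dt, n_modes):
    """Fit y[n] ~ sum_k a_k * z_k^n (damped complex exponentials) via the
    matrix-pencil method, a relative of the KBDM used in the paper."""
    N, L = len(y), len(y) // 2
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])    # Hankel matrix
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T[:, :n_modes]                            # signal subspace
    z = np.linalg.eigvals(np.linalg.pinv(V[:-1]) @ V[1:])   # pole estimates
    basis = z[None, :] ** np.arange(N)[:, None]
    a, *_ = np.linalg.lstsq(basis, y, rcond=None)
    return z, a

# Synthetic EPR-like decay: one sharp line on top of one broad line.
dt, N = 1e-8, 1024
n = np.arange(N)
sharp = 1.0 * np.exp((2j * np.pi * 3e6 - 2e4) * n * dt)     # slow decay
broad = 5.0 * np.exp((2j * np.pi * 1e6 - 2e6) * n * dt)     # fast decay
y = sharp + broad + 0.01 * (np.random.randn(N) + 1j * np.random.randn(N))

z, a = damped_modes(y, dt, n_modes=2)
decay = -np.log(np.abs(z)) / dt               # damping rates (1/s)
keep = decay < 1e5                            # threshold: keep sharp lines
print("sharp-component decay rates:", decay[keep])
```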
Abstract:
The development of next-generation microwave technology for backhauling systems is driven by increasing capacity demand. In order to provide higher data rates and throughputs over a point-to-point link, a cost-effective performance improvement is enabled by enhanced energy efficiency of the transmit power-amplification stage, whereas the combination of spectrally efficient modulation formats and wider bandwidths must be supported by amplifiers that satisfy strict linearity constraints. An optimal trade-off between these conflicting requirements can be achieved with flexible digital signal processing techniques at baseband. In such a scenario, adaptive digital pre-distortion is a well-known linearization method and a potentially widely used solution, since it can be easily integrated into base stations. It can effectively compensate for the inter-modulation distortion introduced by the power amplifier, keeping up with the frequency-dependent, time-varying behaviour of the amplifier's nonlinear characteristic. In particular, the impact of memory effects becomes more relevant, and their equalisation more challenging, as the discrete input signal features a wider bandwidth and a faster envelope to pre-distort. This thesis project involves the research, design and simulation of an RTL pre-distorter implementation based on a novel polyphase architecture, which makes it capable of operating on very wideband signals at a sampling rate that complies with the clock speeds actually available in current digital devices. The motivation behind this structure is to achieve feasible pre-distortion for the multi-band, spectrally efficient complex signals carrying multiple channels that will be transmitted in future high-capacity, high-reliability microwave backhaul links.
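The polyphase RTL architecture is the thesis's contribution and is not sketched here; as background, a minimal memory-polynomial pre-distorter trained by indirect learning against a toy amplifier model, with assumed polynomial order and memory depth.

```python
import numpy as np

def mp_matrix(x, K=5, M=3):
    """Memory-polynomial regressors x[n-m] * |x[n-m]|^(k-1) for odd
    orders k = 1, 3, ..., K and memory depth M."""
    N, cols = len(x), []
    for m in range(M):
        xm = np.concatenate([np.zeros(m, complex), x[:N - m]])
        for k in range(1, K + 1, 2):
            cols.append(xm * np.abs(xm) ** (k - 1))
    return np.column_stack(cols)

def pa(x):
    """Toy power amplifier with mild compression and one memory tap."""
    y = x - 0.1 * x * np.abs(x) ** 2
    return y + 0.05 * np.concatenate([[0], y[:-1]])

rng = np.random.default_rng(0)
x = (rng.standard_normal(4096) + 1j * rng.standard_normal(4096)) * 0.3

# Indirect learning: fit a post-inverse of the PA output, then copy it
# as the pre-distorter placed in front of the PA.
y = pa(x)
g = np.mean(np.abs(y)) / np.mean(np.abs(x))   # normalize PA gain
coef, *_ = np.linalg.lstsq(mp_matrix(y / g), x, rcond=None)
x_pd = mp_matrix(x) @ coef                    # pre-distorted drive signal

err_before = np.mean(np.abs(pa(x) / g - x) ** 2)
err_after = np.mean(np.abs(pa(x_pd) / g - x) ** 2)
print(f"MSE before: {err_before:.2e}, after: {err_after:.2e}")
```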
Abstract:
This end-of-degree project comprises the design, study and implementation of a software application able to simulate the various aspects and the performance of a radar system. A prototype of this simulation environment has been built in the Matlab programming language, because Matlab was initially considered the most suitable for handling the complex signals that radar systems process in their calculations; it has, however, proven less than adequate for the other core services the simulation environment must provide to its users. The software's design is modelled on the software developed by the SAP company for managing the E.R.P.s (Enterprise Resource Planning systems) of large enterprises, of which it is considered the paradigm. This software was selected as a model because its design and functionality are especially well suited to working, in an orderly and integrated fashion, with large quantities of very diverse data while protecting their integrity. Designing and building the simulation environment with all its potential features is an enormously complex task that would require the effort of a significant number of people; the scope of this project has therefore been limited to a basic prototype with a minimal set of features, together with an indication of the path that future additions of functionality and improvements should follow.
Functionally, that is, independently of the specific implementation with which the environment is built, the features offered by the system have been divided into blocks. Each block gathers the components related to one specific aspect of the simulation; block 1, for example, deals with everything related to the target to be detected. The user interacts with the system by executing so-called transactions: logical groupings of related data to be entered into or queried from the system, each of which can be executed independently. One example is the transaction that maintains a target trajectory together with its parameters, but a transaction can also be something unrelated to the radar itself, such as the application that manages the users with access to the environment. Transactions are thus the minimum unit through which the user interacts with the system. The graphical interface offered to the user is based on modes, which can be regarded as mutually independent windows inside which the user executes transactions; the user can work with as many modes in parallel as desired and switch between them at will. The software has been written using object-oriented programming, seeking to maximize code reuse and the configurability of its functionality. An important feature incorporated to guarantee data integrity is a syntactic data dictionary. To provide data persistence between user sessions, a virtual relational database has been implemented (expected to be replaced in time by a real one) that supports tables, key fields, etc., in order to store all the environment's data: the configuration data, which are the responsibility of administrators and developers, as well as the master and transactional data managed by the end users of the simulation environment. A sketch of the transaction and database concepts follows.
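A minimal sketch (in Python rather than the prototype's Matlab) of the transaction and virtual-database concepts described above; all class, table and transaction names are illustrative, not the project's.

```python
from dataclasses import dataclass

class VirtualDB:
    """In-memory stand-in for the virtual relational database."""
    def __init__(self):
        self.tables = {}
    def create_table(self, name, key_fields):
        self.tables[name] = {"key": key_fields, "rows": {}}
    def upsert(self, name, row):
        # Rows are indexed by the table's declared key fields.
        t = self.tables[name]
        t["rows"][tuple(row[k] for k in t["key"])] = row

@dataclass
class Transaction:
    """Smallest unit of user interaction: a named, independently
    executable operation bound to the data it maintains."""
    code: str
    description: str
    handler: object = print
    def run(self, **params):
        return self.handler(**params)

db = VirtualDB()
db.create_table("TARGET_PATH", key_fields=["target_id", "point"])
tx = Transaction("TP01", "Maintain target trajectory",
                 handler=lambda **row: db.upsert("TARGET_PATH", row))
tx.run(target_id=1, point=0, x=0.0, y=0.0, v=120.0)
print(db.tables["TARGET_PATH"]["rows"])
```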
Abstract:
This dissertation proposed a new approach to seizure detection in intracranial EEG (iEEG) recordings using nonlinear decision functions. It implemented well-established features designed to deal with complex signals such as brain recordings, and proposed a 2-D domain of analysis. Since the features considered span both the time and frequency domains, the analysis was carried out both temporally and as a function of different frequency ranges, in order to ascertain the measures most suitable for seizure detection. In retrospect, this study established a generalized approach to seizure detection that works across several features and across patients.
Clinical experiments involved 8 patients with intractable seizures who were being evaluated for potential surgical intervention. Of the iEEG data files collected, 35 were used in a training phase to ascertain the reliability of the formulated features; the remaining 69 were used in the testing phase.
The testing phase revealed that the correlation sum is the feature that performed best across all patients, with a sensitivity of 92% and an accuracy of 99%. The second-best feature was the gamma power, with a sensitivity of 92% and an accuracy of 96%. In the frequency domain, the 5 other spectral bands considered gave mixed results, with low sensitivity in some bands and low accuracy in others, which is expected given that the dominant frequencies in iEEG are those of the gamma band. In the time domain, the other features, which included mobility, complexity, and activity, all performed very well, with an average sensitivity of 80.3% and an accuracy of 95%.
The computation needed to generate these nonlinear decision functions in the training phase was extremely long. It was determined that when the duration dimension was rescaled, the results improved and the nonlinear decision functions converged more than 100 times faster. Through this rescaling, the sensitivity of the correlation sum improved to 100% and that of the gamma power to 97%, meaning even fewer false negatives and false positives were detected.
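A sketch of two of the feature families named above, the Hjorth parameters (activity, mobility, complexity) and relative gamma-band power, computed per iEEG window; the sampling rate, band edges and synthetic example are assumptions, not the dissertation's settings.

```python
import numpy as np
from scipy.signal import welch

def hjorth(x):
    """Hjorth activity, mobility and complexity of one iEEG window."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / activity)
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def gamma_power(x, fs, band=(30.0, 100.0)):
    """Relative power in the gamma band from the Welch spectrum."""
    f, pxx = welch(x, fs=fs, nperseg=min(len(x), 2 * int(fs)))
    sel = (f >= band[0]) & (f <= band[1])
    return np.trapz(pxx[sel], f[sel]) / np.trapz(pxx, f)

# Example window: a gamma burst riding on slow background activity.
fs = 500
t = np.arange(0, 4, 1 / fs)
x = (np.sin(2 * np.pi * 8 * t) + 0.8 * np.sin(2 * np.pi * 60 * t)
     + 0.2 * np.random.randn(len(t)))
print(hjorth(x), gamma_power(x, fs))
```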