857 results for Doppler Return Signal, SNR, Signal Estimation, Multi-Component Quadratic


Relevance: 50.00%

Abstract:

One of the most challenging tasks underlying many hyperspectral imagery applications is spectral unmixing, which decomposes a mixed pixel into a collection of reflectance spectra, called endmember signatures, and their corresponding fractional abundances. Independent Component Analysis (ICA) has recently been proposed as a tool to unmix hyperspectral data. The basic goal of ICA is to find a linear transformation that recovers independent sources (abundance fractions) given only sensor observations that are unknown linear mixtures of the unobserved independent sources. In hyperspectral imagery the sum of the abundance fractions associated with each pixel is constant due to physical constraints in the data acquisition process; thus, the sources cannot be independent. This paper addresses hyperspectral data source dependence and its impact on ICA performance. The study considers simulated and real data. In the simulated scenarios, hyperspectral observations are described by a generative model that takes into account the degradation mechanisms normally found in hyperspectral applications. We conclude that ICA does not correctly unmix all sources, a conclusion based on a study of the mutual information. Nevertheless, some sources may be well separated, mainly when the number of sources is large and the signal-to-noise ratio (SNR) is high.
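
As a minimal illustration of the dependence problem described above, the Python sketch below mixes sum-to-one abundances and checks how well FastICA recovers them. The Dirichlet abundances, random endmember signatures, and noise level are illustrative stand-ins, not the paper's generative model.

```python
# Minimal sketch (assumptions: synthetic data; sklearn's FastICA stands in for
# the ICA tool; signatures and noise level are illustrative).
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_pixels, n_bands, n_src = 5000, 50, 3

# Abundances drawn from a Dirichlet: rows sum to one, so sources are dependent.
A = rng.dirichlet(alpha=np.ones(n_src), size=n_pixels)        # (pixels, sources)
M = rng.uniform(0.0, 1.0, size=(n_src, n_bands))              # endmember signatures
X = A @ M + 0.01 * rng.standard_normal((n_pixels, n_bands))   # noisy mixed pixels

S = FastICA(n_components=n_src, random_state=0).fit_transform(X)

# Best |correlation| between each true abundance and any recovered component:
# values well below 1 indicate sources that ICA failed to unmix.
C = np.abs(np.corrcoef(A.T, S.T)[:n_src, n_src:])
print(C.max(axis=1))
```

Because the abundance rows sum to one, at least one recovered component typically correlates poorly with its true source, consistent with the paper's conclusion.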

Relevance: 50.00%

Abstract:

This paper presents MOTION, a modular on-line model for urban traffic signal control. It consists of a network level and a local level and builds on enhanced traffic state estimation. Special consideration is given to the prioritization of public transit. MOTION also provides possibilities for interaction with integrated urban management systems.

Relevance: 50.00%

Abstract:

Image quality in 18F-FDG PET/CT scans of overweight patients is commonly degraded. This study retrospectively evaluates the relation between SNR, body weight, and injected dose in 65 patients, weighing from 35 to 120 kg, whose scans were performed on a Biograph mCT using a standardized protocol in the Nuclear Medicine Department at Radboud University Medical Centre in Nijmegen, The Netherlands. Five ROIs were drawn in the liver, assumed to be an organ of homogeneous metabolism, at the same location in five consecutive slices of the PET/CT scans, to obtain the mean uptake (signal) values and their standard deviation (noise). The ratio of the two gives the signal-to-noise ratio (SNR) in the liver. Weight, height, SNR, and body mass index (BMI) were tabulated in a spreadsheet, and graphs were produced to relate these factors. The graphs showed that SNR decreases as body weight and/or BMI increases, and that SNR decreases even though the injected dose increases: heavier patients receive a higher dose and, as reported, still show lower SNR. These findings suggest that image quality, as measured by SNR, is worse in heavier patients than in thinner ones, even though higher FDG doses are given. Taking all this into consideration, a new formula was needed to calculate a dose that yields a good, constant SNR for every patient. Through mathematical derivation, two new equations (a power law and an exponential) were obtained that yield the SNR of a scan made at a specific reference weight (86 kg was chosen) independently of body mass. The study implies that with these new formulas, patients heavier than the reference weight will receive higher doses and lighter patients lower doses. With the median being 86 kg, the new dose and new SNR were calculated, and it was concluded that image quality remains almost constant as weight increases while the quantity of FDG needed remains almost the same, without increasing the total FDG cost across these patients.
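
The two computational steps above can be sketched as follows. This is only an illustration: the ROI values, the reference dose, and in particular the power-law exponent b are hypothetical placeholders, not the study's fitted numbers.

```python
# Minimal sketch (assumptions: ROI values, ref_dose_mbq, and exponent b are
# illustrative placeholders, not the study's fitted values).
import numpy as np

def liver_snr(roi_means, roi_stds):
    """SNR = mean uptake / its standard deviation, averaged over the five ROIs."""
    return float(np.mean(np.asarray(roi_means) / np.asarray(roi_stds)))

def rescaled_dose(weight_kg, ref_dose_mbq=250.0, ref_weight_kg=86.0, b=1.5):
    """Power-law dose rescaling: aims to keep SNR near its value at 86 kg."""
    return ref_dose_mbq * (weight_kg / ref_weight_kg) ** b

# Example: five liver ROIs from consecutive slices, then a dose for 110 kg.
print(liver_snr([4.1, 4.0, 4.2, 4.1, 3.9], [0.35, 0.33, 0.36, 0.34, 0.32]))
print(rescaled_dose(110.0))
```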

Relevance: 50.00%

Abstract:

This thesis deals with robust adaptive control and its applications, and it is divided into three main parts. The first part concerns the design of robust estimation algorithms based on recursive least squares. First, we present an estimator for the frequencies of biased multi-harmonic signals, and then an algorithm for distributed estimation of an unknown parameter over a network of adaptive agents. In the second part of this thesis, we consider a cooperative control problem over uncertain networks of linear systems and Kuramoto systems, in which the agents have to track a reference generated by a leader exosystem. Since the reference signal is not available to each network node, novel distributed observers are designed to reconstruct the reference signal locally for each agent, thereby decentralizing the problem. In the third and final part, we consider robust estimation tasks for mobile robotics applications. In particular, we first consider the problem of slip estimation for agricultural tracked vehicles. Then, we consider a search-and-rescue application in which an unmanned aerial vehicle must be driven as close as possible to the unknown (and to-be-estimated) position of a victim buried under the snow after an avalanche. Throughout this thesis, robustness is understood as an input-to-state stability property of the proposed identifiers (sometimes referred to as adaptive laws) with respect to additive disturbances, relative to a steady-state trajectory associated with a correct estimate of the unknown parameter.
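
A minimal sketch of a recursive least squares identifier of the kind underlying the first part (assumptions: a generic linear regression model y = phi' theta + noise, an illustrative forgetting factor, and toy data; the thesis's input-to-state stability analysis is not reproduced here):

```python
# Minimal RLS sketch (assumptions: model, forgetting factor, and data are toys).
import numpy as np

class RLS:
    def __init__(self, n, lam=0.99, p0=1e3):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = p0 * np.eye(n)         # inverse-correlation matrix
        self.lam = lam                  # forgetting factor

    def update(self, phi, y):
        phi = np.asarray(phi, float)
        k = self.P @ phi / (self.lam + phi @ self.P @ phi)   # gain vector
        self.theta += k * (y - phi @ self.theta)             # innovation correction
        self.P = (self.P - np.outer(k, phi @ self.P)) / self.lam
        return self.theta

# Example: identify theta = [2, -1] from noisy scalar measurements.
rng = np.random.default_rng(0)
est, theta_true = RLS(2), np.array([2.0, -1.0])
for _ in range(500):
    phi = rng.standard_normal(2)
    est.update(phi, phi @ theta_true + 0.01 * rng.standard_normal())
print(est.theta)
```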

Relevance: 50.00%

Abstract:

Brain functioning relies on the interaction of several neural populations connected through complex connectivity networks, enabling the transmission and integration of information. Recent advances in neuroimaging techniques, such as electroencephalography (EEG), have deepened our understanding of the reciprocal roles played by brain regions during cognitive processes. The underlying idea of this PhD research is that EEG-related functional connectivity (FC) changes in the brain may incorporate important neuromarkers of behavior and cognition, as well as brain disorders, even at subclinical levels. However, a complete understanding of the reliability of the wide range of existing connectivity estimation techniques is still lacking. The first part of this work addresses this limitation by employing Neural Mass Models (NMMs), which simulate EEG activity and offer a unique tool to study interconnected networks of brain regions in controlled conditions. NMMs were employed to test FC estimators like Transfer Entropy and Granger Causality in linear and nonlinear conditions. Results revealed that connectivity estimates reflect information transmission between brain regions, a quantity that can be significantly different from the connectivity strength, and that Granger causality outperforms the other estimators. A second objective of this thesis was to assess brain connectivity and network changes on EEG data reconstructed at the cortical level. Functional brain connectivity has been estimated through Granger Causality, in both temporal and spectral domains, with the following goals: a) detect task-dependent functional connectivity network changes, focusing on internal-external attention competition and fear conditioning and reversal; b) identify resting-state network alterations in a subclinical population with high autistic traits. Connectivity-based neuromarkers, compared to the canonical EEG analysis, can provide deeper insights into brain mechanisms and may drive future diagnostic methods and therapeutic interventions. However, further methodological studies are required to fully understand the accuracy and information captured by FC estimates, especially concerning nonlinear phenomena.
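
A minimal sketch of the pairwise Granger-causality test at the core of the FC estimation above, run on a toy bivariate autoregressive model rather than the thesis's neural mass simulations (the AR coefficients and lag order are illustrative; statsmodels' grangercausalitytests stands in for the estimator):

```python
# Minimal sketch (assumptions: toy AR model; lag order is illustrative).
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)   # driver channel
y = np.zeros(n)   # driven channel: depends on past x, so x "Granger-causes" y
for t in range(2, n):
    x[t] = 0.6 * x[t - 1] + rng.standard_normal()
    y[t] = 0.5 * y[t - 1] + 0.4 * x[t - 2] + rng.standard_normal()

# Tests whether the 2nd column helps predict the 1st; expect a tiny p-value.
res = grangercausalitytests(np.column_stack([y, x]), maxlag=3)
print(res[2][0]["ssr_ftest"])   # (F statistic, p-value, df_denom, df_num) at lag 2
```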

Relevance: 50.00%

Abstract:

This work analyzes different techniques for the detection of active, constant jammers in a satellite uplink communication. The goal is to identify the presence of a jammer by observing a limited number of received samples. To this end, the following binary classifiers were implemented: support vector machine (SVM), multilayer perceptron (MLP), spectrum guarding, and autoencoder. These machine learning algorithms depend on the features they receive as input, so particular attention was paid to feature selection. The accuracies obtained by detectors trained on different kinds of information were compared: raw time-domain signals, statistical features, wavelet transforms, and the cyclic spectrum. The patterns produced by extracting these features from the satellite signals can be high-dimensional, so the following dimensionality-reduction algorithms are applied before detection: principal component analysis (PCA) and linear discriminant analysis (LDA). The purpose of this step is not to discard the least relevant features but to combine them so as to preserve as much information as possible while avoiding overfitting and underfitting. The numerical simulations showed that the cyclic spectrum provides the best features for detection but produces high-dimensional patterns, which is why dimensionality-reduction algorithms were necessary. In particular, PCA extracted better information than LDA, whose accuracies depended too strongly on the type of jammer used during training. Finally, the best-performing algorithm was the multilayer perceptron, which required modest training times and achieved high accuracy.
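
A minimal sketch of the PCA + MLP detection chain described above. Random Gaussian vectors stand in for the real cyclic-spectrum features, and the number of principal components and layer sizes are illustrative choices, not the thesis's calibrated configuration:

```python
# Minimal sketch (assumptions: stand-in features; 32 components and the MLP
# architecture are illustrative).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n, dim = 1000, 512                           # samples x feature dimension
X_clean = rng.standard_normal((n, dim))      # "no jammer" patterns
X_jam = rng.standard_normal((n, dim)) + 0.5  # "jammer" patterns (toy shift)
X = np.vstack([X_clean, X_jam])
y = np.r_[np.zeros(n), np.ones(n)]

Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, random_state=0)

# PCA compresses the high-dimensional patterns; the MLP is the binary detector.
clf = make_pipeline(
    PCA(n_components=32),
    MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0),
)
clf.fit(Xtr, ytr)
print("detection accuracy:", clf.score(Xte, yte))
```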

Relevance: 40.00%

Abstract:

Friction and triboelectrification of materials show a strong correlation during sliding contacts. Friction force fluctuations are always accompanied by two tribocharging events at metal-insulator [e.g., polytetrafluoroethylene (PTFE)] interfaces: injection of charged species from the metal into PTFE, followed by the flow of charges from PTFE to the metal surface. Adhesion maps obtained by atomic force microscopy (AFM) show that contact increases the pull-off force from 10 to 150 nN in the contact region, reflecting a resilient electrostatic adhesion between PTFE and the metallic surface. The reported results suggest that friction and triboelectrification have a common origin that must be associated with the occurrence of strong electrostatic interactions at the interface.

Relevance: 40.00%

Abstract:

The purpose of this study was to correlate the pre-operative imaging, vascularity of the proximal pole, and histology of the proximal pole bone in established scaphoid fracture non-union. This was a prospective, non-controlled experimental study. Patients were evaluated pre-operatively for necrosis of the proximal scaphoid fragment by radiography, computed tomography (CT), and magnetic resonance imaging (MRI). The vascular status of the proximal scaphoid was determined intra-operatively by the presence or absence of punctate bone bleeding. Samples were harvested from the proximal scaphoid fragment and sent for pathological examination. We determined the association between the imaging and intra-operative examinations and the histological findings. We evaluated 19 male patients diagnosed with scaphoid nonunion. CT evaluation showed no correlation with proximal fragment necrosis. MRI showed marked low signal intensity on T1-weighted images, which confirmed the histological diagnosis of necrosis in the proximal scaphoid fragment in all patients. Intra-operative assessment showed that 90% of the bones lacked punctate bone bleeding, and necrosis was confirmed in these by microscopic examination. In scaphoid nonunion, marked low signal intensity on T1-weighted MRI images and the absence of intra-operative punctate bone bleeding are strong indicators of osteonecrosis of the proximal fragment.

Relevance: 40.00%

Abstract:

The analysis of Macdonald for electrolytes is generalized to the case in which two groups of ions are present. We assume that the electrolyte can be considered as a dispersion of ions in a dielectric liquid and that ionic recombination can be neglected. We present the differential equations governing the ionic redistribution when the liquid is subjected to an external electric field, describing the simultaneous diffusion of the two groups of ions in the presence of their own space-charge fields. We investigate the influence of the ions on the impedance spectroscopy of an electrolytic cell. In the analysis, we assume that the ions within each group have equal mobility, that the electrodes are perfectly blocking, and that adsorption phenomena can be neglected. In this framework, it is shown that the real part of the electrical impedance of the cell has a frequency dependence presenting two plateaux, related to ambipolar and free diffusion coefficients. The importance of this problem for ionic characterization by means of the impedance spectroscopy technique is discussed.

Relevance: 40.00%

Abstract:

Objective: We carry out a systematic assessment of a suite of kernel-based learning machines on the task of epilepsy diagnosis through automatic electroencephalogram (EEG) signal classification. Methods and materials: The kernel machines investigated include the standard support vector machine (SVM), the least squares SVM, the Lagrangian SVM, the smooth SVM, the proximal SVM, and the relevance vector machine. An extensive series of experiments was conducted on publicly available data, whose clinical EEG recordings were obtained from five normal subjects and five epileptic patients. The performance levels delivered by the different kernel machines are contrasted in terms of predictive accuracy, sensitivity to the kernel function/parameter value, and sensitivity to the type of features extracted from the signal. For this purpose, 26 values for the kernel parameter (radius) of two well-known kernel functions (namely, Gaussian and exponential radial basis functions) were considered, as well as 21 types of features extracted from the EEG signal, including statistical values derived from the discrete wavelet transform, Lyapunov exponents, and combinations thereof. Results: We first quantitatively assess the impact of the choice of wavelet basis on the quality of the extracted features; four wavelet basis functions were considered in this study. Then, we provide the average accuracy (i.e., cross-validation error) values delivered by 252 kernel machine configurations; in particular, 40%/35% of the best-calibrated models of the standard and least squares SVMs reached a 100% accuracy rate for the two kernel functions considered. Moreover, we show the sensitivity profiles exhibited by a large sample of the configurations, whereby one can visually inspect their levels of sensitivity to the type of feature and to the kernel function/parameter value. Conclusions: Overall, the results show that all kernel machines are competitive in terms of accuracy, with the standard and least squares SVMs prevailing more consistently. Moreover, the choice of the kernel function and parameter value, as well as the choice of the feature extractor, are critical decisions, although the choice of the wavelet family seems less relevant. Also, the statistical values calculated over the Lyapunov exponents were good sources of signal representation, but not as informative as their wavelet counterparts. Finally, a typical sensitivity profile emerged among all types of machines, involving regions of stability separated by zones of sharp variation, with some kernel parameter values clearly associated with better accuracy rates (zones of optimality).
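
A minimal sketch of one configuration from the suite above: an RBF-kernel SVM trained on statistical features of the discrete wavelet transform. Synthetic signals stand in for the EEG recordings, and 'db4', the decomposition level, and the kernel width are illustrative choices rather than the paper's calibrated values:

```python
# Minimal sketch (assumptions: stand-in signals; wavelet, level, gamma illustrative).
import numpy as np
import pywt
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

def dwt_features(sig, wavelet="db4", level=4):
    # Mean, standard deviation, and energy of each DWT subband.
    coeffs = pywt.wavedec(sig, wavelet, level=level)
    return np.concatenate([[c.mean(), c.std(), np.sum(c**2)] for c in coeffs])

# Toy two-class EEG stand-in: baseline noise vs. noise with sparse spike bursts.
n, length = 200, 512
normal = rng.standard_normal((n, length))
spikes = rng.standard_normal((n, length)) + 3.0 * (rng.random((n, length)) > 0.98)
X = np.array([dwt_features(s) for s in np.vstack([normal, spikes])])
y = np.r_[np.zeros(n), np.ones(n)]

print(cross_val_score(SVC(kernel="rbf", gamma="scale"), X, y, cv=5).mean())
```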

Relevance: 40.00%

Abstract:

The goal of this paper is to study and propose a new noise-reduction technique for use during the reconstruction of speech signals, particularly for biomedical applications. The proposed method is based on Kalman filtering in the time domain combined with spectral subtraction. Comparison with a discrete Kalman filter in the frequency domain shows better performance for the proposed technique. Performance is evaluated using the segmental signal-to-noise ratio and the Itakura-Saito distance. Results show that the Kalman filter in the time domain combined with spectral subtraction is more robust and efficient, improving the Itakura-Saito distance by up to four times.
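
A minimal sketch of magnitude spectral subtraction, the second stage of the combined scheme above (the Kalman stage is omitted; frame length, overlap, the spectral floor, and the noise estimate taken from the first frames are illustrative assumptions):

```python
# Minimal sketch (assumptions: frame/hop sizes, floor factor, and the
# speech-free leading frames are illustrative; Kalman stage omitted).
import numpy as np

def spectral_subtraction(noisy, frame=256, hop=128, noise_frames=10):
    win = np.hanning(frame)
    # Noise magnitude spectrum estimated from the first (assumed speech-free) frames.
    noise_mag = np.mean(
        [np.abs(np.fft.rfft(noisy[i*hop:i*hop+frame] * win)) for i in range(noise_frames)],
        axis=0,
    )
    out = np.zeros_like(noisy)
    for i in range((len(noisy) - frame) // hop + 1):
        seg = noisy[i*hop:i*hop+frame] * win
        spec = np.fft.rfft(seg)
        mag = np.maximum(np.abs(spec) - noise_mag, 0.05 * np.abs(spec))  # floor
        # Overlap-add resynthesis keeping the noisy phase.
        out[i*hop:i*hop+frame] += np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

# Example: a sinusoidal "speech" tone buried in white noise.
rng = np.random.default_rng(0)
t = np.arange(16000) / 8000.0
clean = np.sin(2 * np.pi * 440 * t)
print(spectral_subtraction(clean + 0.5 * rng.standard_normal(t.size))[:5])
```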

Relevance: 40.00%

Abstract:

Void fraction sensors are important instruments not only for monitoring two-phase flow, but also for furnishing an important parameter for obtaining the flow pattern map and the two-phase flow heat transfer coefficient. This work presents the experimental results obtained with two axially spaced multiple-electrode impedance sensors tested for void fraction measurement in upward air-water two-phase flow in a vertical tube. An electronic circuit was developed for signal generation and post-treatment of each sensor signal. By phase-shifting the supply signal across the electrodes, it was possible to establish a rotating electric field sweeping across the test section. The fundamental principle behind the multiple-electrode configuration is to reduce the signal's sensitivity to non-uniform cross-sectional void fraction distributions. Static calibration curves were obtained for both sensors, and dynamic signal analyses for bubbly, slug, and turbulent churn flows were carried out. Flow parameters such as Taylor bubble velocity and length were obtained using cross-correlation techniques. As an application of the tested void fraction sensors, vertical flow pattern identification was established using the probability density function technique for void fractions ranging from 0% to nearly 70%.
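
A minimal sketch of the cross-correlation step used to get Taylor bubble velocity from two axially spaced sensors. The sampling rate, sensor spacing, and synthetic bubble signature are illustrative assumptions, not the experiment's values:

```python
# Minimal sketch (assumptions: fs, spacing, and the synthetic traces are toys).
import numpy as np

fs = 1000.0          # sampling frequency, Hz
spacing = 0.10       # axial distance between the two sensors, m
rng = np.random.default_rng(0)

# Synthetic void-fraction traces: sensor 2 sees the same bubble 25 ms later.
n, delay = 4000, 25
bubble = np.exp(-0.5 * ((np.arange(n) - 1000) / 30.0) ** 2)
s1 = bubble + 0.05 * rng.standard_normal(n)
s2 = np.roll(bubble, delay) + 0.05 * rng.standard_normal(n)

# Lag of the cross-correlation peak gives the bubble transit time.
xcorr = np.correlate(s2 - s2.mean(), s1 - s1.mean(), mode="full")
lag = np.argmax(xcorr) - (n - 1)
print("bubble velocity: %.2f m/s" % (spacing / (lag / fs)))
```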

Relevance: 40.00%

Abstract:

Real-time viscosity measurement remains a necessity for highly automated industry. To address this problem, many studies have been carried out using an ultrasonic shear wave reflectance method. This method is based on determining the magnitude and phase of the complex reflection coefficient at the solid-liquid interface. Although the magnitude is a stable quantity whose measurement is relatively simple and precise, phase measurement is a difficult task because of its strong temperature dependence. A simplified method that uses only the magnitude of the reflection coefficient, valid in the Newtonian regime, has been proposed by some authors, but the obtained viscosity values do not match conventional viscometry measurements. In this work, a mode-conversion measurement cell was used to measure glycerin viscosity as a function of temperature (15 to 25 degrees C) and corn syrup-water mixtures as a function of concentration (70 to 100 wt% corn syrup). Tests were carried out at 1 MHz. A novel signal processing technique that calculates the reflection coefficient magnitude over a frequency band, instead of at a single frequency, was studied. The effects of the bandwidth on magnitude and viscosity were analyzed, and the results were compared with the values predicted by the Newtonian liquid model. The frequency-band technique improved the magnitude results: the obtained viscosity values came close to those measured by a rotational viscometer, with percentage errors up to 14%, whereas errors up to 96% were found for the single-frequency method.
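
A sketch of the magnitude-only Newtonian inversion underlying this class of methods, assuming the standard Newtonian shear-impedance model Z_L = sqrt(omega*rho*eta/2)*(1 + j) and the reflection coefficient R = (Z_s - Z_L)/(Z_s + Z_L); the solid impedance, density, and measured |R| are illustrative numbers, not the paper's data:

```python
# Minimal sketch (assumptions: Z_s, rho, and r_meas are illustrative values).
import numpy as np
from scipy.optimize import brentq

def refl_magnitude(eta, omega, rho, z_s):
    # Newtonian liquid shear impedance: Z_L = sqrt(omega*rho*eta/2) * (1 + 1j).
    z_l = np.sqrt(omega * rho * eta / 2.0) * (1.0 + 1.0j)
    return abs((z_s - z_l) / (z_s + z_l))

def viscosity_from_magnitude(r_meas, omega, rho, z_s):
    # Invert |R(eta)| = r_meas numerically; |R| decreases as eta grows.
    return brentq(lambda eta: refl_magnitude(eta, omega, rho, z_s) - r_meas, 1e-6, 100.0)

omega = 2 * np.pi * 1e6        # 1 MHz test frequency
rho, z_s = 1260.0, 8.3e6       # glycerin density (kg/m^3), solid shear impedance
print(viscosity_from_magnitude(0.985, omega, rho, z_s))  # eta estimate, Pa*s
```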

Relevance: 40.00%

Abstract:

This paper presents the results of an in-depth study of the Barkhausen effect signal properties of plastically deformed Fe-2%Si samples. The investigated samples were deformed by cold rolling up to a plastic strain of epsilon(p) = 8%. The first approach consisted of time-domain-resolved pulse and frequency analysis of the Barkhausen noise signals, whereas the complementary study consisted of time-resolved pulse count analysis as well as a total pulse count. The latter included determination of the time distribution of pulses for different threshold voltage levels, as well as the total pulse count as a function of both the amplitude and the duration of the pulses. The obtained results suggest that the observed increase in Barkhausen noise signal intensity as a function of deformation level is mainly due to an increase in the number of larger pulses.
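
A minimal sketch of the threshold-level pulse count described above, run on a synthetic Barkhausen-like noise trace (the trace and the threshold values are illustrative assumptions):

```python
# Minimal sketch (assumptions: toy noise envelope and threshold levels).
import numpy as np

rng = np.random.default_rng(0)
signal = np.abs(rng.standard_normal(100_000)) * rng.random(100_000)  # toy BN envelope

def pulse_count(sig, threshold):
    """Count pulses as rising crossings of the given threshold voltage level."""
    above = sig > threshold
    return int(np.sum(~above[:-1] & above[1:]))

# Total pulse count as a function of the threshold level.
for thr in (0.5, 1.0, 1.5, 2.0):
    print(f"threshold {thr:.1f} V: {pulse_count(signal, thr)} pulses")
```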

Relevance: 40.00%

Abstract:

We propose a robust, low-complexity scheme to estimate and track the carrier frequency of signals traveling under low signal-to-noise ratio (SNR) conditions in highly nonstationary channels. These scenarios arise in planetary exploration missions subject to high dynamics, such as the Mars exploration rover missions. The method comprises a bank of adaptive linear predictors (ALP) supervised by a convex combiner that dynamically aggregates the individual predictors. The adaptive combination is able to outperform the best individual estimator in the set, which leads to a universal scheme for frequency estimation and tracking. A simple technique for bias compensation considerably improves the ALP performance. It is also shown that retrieving the frequency content by a fast Fourier transform (FFT)-search method, instead of only inspecting the angle of a particular root of the error predictor filter, enhances performance, particularly at very low SNR levels. Simple techniques that enforce frequency continuity further improve the overall performance. In summary, we illustrate through extensive simulations that adaptive linear prediction methods yield a robust and competitive frequency-tracking technique.
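
A minimal sketch of a single adaptive linear predictor from the bank described above, with the frequency read from the angle of a root of the error predictor filter (the simpler of the two readout methods the abstract compares). LMS adaptation, a second-order predictor, and the test tone's SNR are illustrative assumptions; the convex combiner, bias compensation, and FFT search are omitted:

```python
# Minimal sketch (assumptions: LMS, order 2, toy tone; combiner/bias comp. omitted).
import numpy as np

rng = np.random.default_rng(0)
fs, f0, n = 1000.0, 123.0, 20_000
x = np.cos(2 * np.pi * f0 * np.arange(n) / fs) + 0.5 * rng.standard_normal(n)

order, mu = 2, 1e-3
w = np.zeros(order)                       # predictor coefficients
for t in range(order, n):
    u = x[t - order:t][::-1]              # most recent samples first
    e = x[t] - w @ u                      # prediction error
    w += mu * e * u                       # LMS update

# Frequency from the angle of a root of A(z) = 1 - w1 z^-1 - w2 z^-2.
roots = np.roots(np.r_[1.0, -w])
f_hat = abs(np.angle(roots[np.argmax(np.abs(roots))])) * fs / (2 * np.pi)
print(f"estimated {f_hat:.1f} Hz vs true {f0} Hz")
```

The noise-induced underestimation visible in the printed value is exactly the bias the paper's compensation technique targets.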