984 results for multi-channel
Abstract:
Over the past few decades, work on infrared sensor applications has advanced considerably worldwide. A difficulty remains, however: objects are often not clear enough, or cannot always be easily distinguished, in the image obtained of the observed scene. Infrared image enhancement plays an important role in the development of infrared computer vision, image processing, non-destructive testing, and related technologies. This thesis addresses infrared image enhancement techniques from two angles: the processing of a single infrared image in the hybrid spatial-frequency domain, and the fusion of infrared and visible images using the non-subsampled contourlet transform (NSCT). Image fusion can be regarded as a continuation of the single-infrared-image enhancement model, since it combines infrared and visible images into a single image that represents and enhances all the useful information and features of the source images; a single image cannot contain all the relevant or available information because of the restrictions inherent to any single imaging sensor. We first review the development of infrared image enhancement techniques; we then focus on single infrared image enhancement and propose a hybrid-domain enhancement scheme with an improved threshold fuzzy evaluation method, which yields higher image quality and improves human visual perception. The infrared and visible image fusion techniques are built on an accurate registration of the source images acquired by different sensors. The SURF-RANSAC algorithm is applied for registration throughout this research, which leads to very accurately registered images and greater benefits for the fusion processing. For infrared and visible image fusion, a series of advanced and efficient approaches is proposed. A standard multi-channel NSCT-based fusion method is presented as the reference for the subsequent fusion approaches. A joint fusion approach combining the Adaptive-Gaussian NSCT and the wavelet transform (WT) is proposed, which leads to fusion results better than those obtained with general non-adaptive methods. An NSCT-based fusion approach is proposed that employs compressed sensing (CS) and total variation (TV) to sample the coefficients sparsely and to reconstruct the fused coefficients accurately; it achieves much better fusion results through pre-enhancement of the infrared image and by reducing the redundant information in the fusion coefficients. Finally, an NSCT-based fusion procedure using a fast iterative-shrinking compressed sensing (FISCS) technique is proposed to compress the decomposed coefficients and reconstruct the fused coefficients during the fusion process, which leads to better results more quickly and efficiently.
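As context for the SURF-RANSAC registration step described above, the sketch below shows a minimal feature-based registration pipeline with OpenCV: SURF keypoints (which require the opencv-contrib build; ORB is used as a fallback), a ratio-test match, and a RANSAC-fitted homography. The file names and thresholds are illustrative assumptions, not values from the thesis.

```python
# Minimal sketch of SURF + RANSAC registration of a visible image onto an
# infrared image, in the spirit of the registration step described above.
# Assumes opencv-contrib-python; file names and thresholds are placeholders.
import cv2
import numpy as np

ir = cv2.imread("infrared.png", cv2.IMREAD_GRAYSCALE)
vis = cv2.imread("visible.png", cv2.IMREAD_GRAYSCALE)

try:
    detector = cv2.xfeatures2d.SURF_create(hessianThreshold=400)
except AttributeError:
    detector = cv2.ORB_create(nfeatures=2000)  # fallback if SURF is unavailable

kp_ir, des_ir = detector.detectAndCompute(ir, None)
kp_vis, des_vis = detector.detectAndCompute(vis, None)

# Brute-force matching with Lowe's ratio test to discard ambiguous matches.
norm = cv2.NORM_L2 if des_ir.dtype == np.float32 else cv2.NORM_HAMMING
matcher = cv2.BFMatcher(norm)
matches = matcher.knnMatch(des_vis, des_ir, k=2)
good = [m for m, n in matches if m.distance < 0.7 * n.distance]

# RANSAC rejects the remaining outlier correspondences while fitting a homography.
src = np.float32([kp_vis[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
dst = np.float32([kp_ir[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
H, mask = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

# Warp the visible image into the infrared frame before fusion.
registered = cv2.warpPerspective(vis, H, (ir.shape[1], ir.shape[0]))
```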
Abstract:
The use of human brain electroencephalography (EEG) signals for automatic person identification has been investigated for a decade. It has been found that the performance of an EEG-based person identification system depends strongly on what features are extracted from multi-channel EEG signals. Linear methods such as Power Spectral Density and the Autoregressive Model have been used to extract EEG features; however, these methods assume that EEG signals are stationary. In fact, EEG signals are complex, non-linear, non-stationary, and random in nature. In addition, other factors such as brain condition or human characteristics may affect performance, but these factors have not been investigated and evaluated in previous studies. It has been found in the literature that entropy is used to measure the randomness of non-linear time series data; entropy is also used to measure the level of chaos of brain-computer interface systems. Therefore, this thesis studies the role of entropy in the non-linear analysis of EEG signals to discover new features for EEG-based person identification. Five different entropy methods, namely Shannon Entropy, Approximate Entropy, Sample Entropy, Spectral Entropy, and Conditional Entropy, are proposed to extract entropy features, which are used to evaluate the performance of EEG-based person identification systems and the impacts of epilepsy, alcohol, age, and gender characteristics on these systems. Experiments were performed on the Australian EEG and Alcoholism datasets. Experimental results have shown that, in most cases, the proposed entropy features yield very fast person identification with comparable accuracy because the feature dimension is low; in real-life security operation, timely response is critical. The experimental results have also shown that epilepsy, alcohol, age, and gender characteristics affect EEG-based person identification systems.
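To make the entropy feature extraction concrete, here is a minimal sketch of two of the five methods named above (Shannon Entropy and Sample Entropy) for a single EEG channel; the embedding dimension m=2 and tolerance r = 0.2·std follow common conventions and are assumptions, not necessarily the thesis's settings.

```python
# Minimal sketch of two entropy features for one EEG channel. The parameter
# choices (bins=32, m=2, r=0.2*std) are common conventions, assumed here.
import numpy as np

def shannon_entropy(x, bins=32):
    """Shannon entropy of the amplitude distribution, estimated by histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts[counts > 0] / len(x)
    return -np.sum(p * np.log2(p))

def sample_entropy(x, m=2, r=None):
    """Sample entropy: -ln(A/B), where B (A) counts template pairs of length
    m (m+1) whose Chebyshev distance is within the tolerance r."""
    x = np.asarray(x, dtype=float)
    if r is None:
        r = 0.2 * np.std(x)
    n = len(x) - m  # same number of templates for both lengths (canonical)
    def count_matches(dim):
        templates = np.array([x[i:i + dim] for i in range(n)])
        count = 0
        for i in range(n - 1):
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            count += np.sum(dist <= r)
        return count
    B, A = count_matches(m), count_matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

# Example: a low-dimensional feature vector for one channel of one subject.
rng = np.random.default_rng(0)
eeg = rng.standard_normal(1000)            # stand-in for a real EEG epoch
features = [shannon_entropy(eeg), sample_entropy(eeg)]
```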
Abstract:
The purpose of this paper is to survey and assess the state of the art in automatic target recognition for synthetic aperture radar imagery (SAR-ATR). The aim is not to develop an exhaustive survey of the voluminous literature, but rather to capture in one place the various approaches for implementing the SAR-ATR system. This paper is meant to be as self-contained as possible, and it approaches the SAR-ATR problem from a holistic, end-to-end perspective. A brief overview of the breadth of the SAR-ATR challenges is given; this is couched in terms of a single-channel SAR and is extendable to multi-channel SAR systems. Stages pertinent to the basic SAR-ATR system structure are defined, and the motivations for the requirements and constraints on the system constituents are addressed. For each stage in the SAR-ATR processing chain, a taxonomization methodology for surveying the numerous methods published in the open literature is proposed, and carefully selected works from the literature are presented under the proposed taxa. Novel comparisons, discussions, and comments are provided throughout the paper. A two-fold benchmarking scheme for evaluating existing SAR-ATR systems and motivating new system designs is proposed, and the scheme is applied to the works surveyed here. Finally, a discussion is presented in which various interrelated issues, such as standard operating conditions, extended operating conditions, and target-model design, are addressed. This paper is a contribution toward fulfilling the objective of end-to-end SAR-ATR system design.
Abstract:
The Pianosa Contourite Depositional System (CDS) is located in the Corsica Trough (Northern Tyrrhenian Sea), a confined basin dominated by mass transport and contour currents on its eastern flank and by turbidity currents on its western flank. The morphologic and stratigraphic characterisation of the Pianosa CDS is based on multibeam bathymetry, seismic reflection data (multi-channel high-resolution mini GI gun, single-channel sparker and CHIRP), sediment cores and ADCP data. The Pianosa CDS is located at shallow to intermediate water depths (170 to 850 m) and is formed under the influence of the Levantine Intermediate Water (LIW). It is 120 km long, has a maximum width of 10 km and is composed of different types of muddy sediment drifts: plastered drift, separated mounded drift, sigmoid drift and multicrested drift. The reduced tectonic activity in the Corsica Trough since the early Pliocene makes it possible to recover a sedimentary record of the contourite depositional system that is influenced only by climate fluctuations. Contourites started to develop in the Middle-Late Pliocene, but their growth was enhanced from the Middle Pleistocene Transition (0.7–0.9 Ma) onwards. Although the general circulation of the LIW, flowing northwards in the Corsica Trough, remained active throughout the history of the system, contourite drift formation changed, controlled by sediment influx and bottom-current velocity. During periods of sea-level fall, fast bottom currents often eroded the drift crest on the middle and upper slope; at those times, the proximity of the coast to the shelf edge favoured the formation of bioclastic sand deposits winnowed by bottom currents. Higher accumulation of mud in the drifts occurred during periods of fast bottom currents and high sediment availability (i.e. high activity of turbidity currents), coincident with sea-level low-stands. Condensed sections formed during sea-level high-stands, when bottom currents were more sluggish and the turbidite system was disconnected, resulting in a lower sediment influx.
Abstract:
Two Pleistocene mass transport deposits (MTDs), with volumes of thousands of km³, have been identified from multi-channel seismic data in the abyssal plain at the front of the Barbados accretionary prism. The estimated sediment volumes for these MTDs are likely underestimates due to limited seismic coverage. In this work, we suggest that these MTDs are comparable in size to large submarine landslides reported in the literature. The MTDs lie in the vicinity of two major oceanic ridges, the Barracuda Ridge and the Tiburon Rise. We also suggest that the MTDs result from seismicity associated with the formation of the Barracuda Ridge or the Barbados accretionary prism; however, the triggering mechanisms involved in their formation remain uncertain. The present study discusses the potential causal factors accounting for the formation of these MTDs.
Abstract:
The TOMO-ETNA experiment was devised to image the crust underlying the volcanic edifice and, possibly, its plumbing system by using passive and active refraction/reflection seismic methods. The experiment included activities both on land and offshore, with the main objective of obtaining a new high-resolution seismic tomography to improve the knowledge of the crustal structures beneath Etna volcano and northeastern Sicily up to the Aeolian Islands. The TOMO-ETNA experiment was divided into two phases. The first phase started on June 15, 2014 and ended on July 24, 2014, with the withdrawal of two removable seismic networks (a short-period network and a broadband network composed of 80 and 20 stations, respectively) deployed at Etna volcano and the surrounding areas. During this first phase, the oceanographic research vessel “Sarmiento de Gamboa” and the hydro-oceanographic vessel “Galatea” performed the offshore activities, which included the deployment of ocean bottom seismometers (OBS), air-gun shooting for wide-angle seismic refraction (WAS), multi-channel seismic (MCS) reflection surveys, magnetic surveys and ROV (remotely operated vehicle) dives. This phase finished with the recovery of the short-period seismic network. In the second phase, the broadband seismic network remained operative until October 28, 2014, and the R/V “Aegaeo” performed additional MCS surveys during November 19-27, 2014. Overall, the information deriving from the TOMO-ETNA experiment could answer many of the open questions that have arisen while exploiting the large amount of data provided by the cutting-edge monitoring systems of Etna volcano and the seismogenic area of eastern Sicily.
Abstract:
A NOx reduction efficiency higher than 95% with NH3 slip less than 30 ppm is desirable for heavy-duty diesel (HDD) engines using selective catalytic reduction (SCR) systems to meet the US EPA 2010 NOx standard and the 2014-2018 fuel consumption regulation. SCR performance therefore needs to be improved through experimental and modeling studies. In this research, a high-fidelity, global-kinetic, 1-dimensional, 2-site SCR model with mass transfer, heat transfer and global reaction mechanisms was developed for a Cu-zeolite catalyst. The model simulates SCR performance for engine exhaust conditions with NH3 maldistribution and aging effects, and the details are presented. SCR experimental data were collected for model development, calibration and validation from a reactor at Oak Ridge National Laboratory (ORNL) and an engine experimental setup at Michigan Technological University (MTU) with a Cummins 2010 ISB engine. The model was calibrated separately to the reactor and engine data. The experimental setup, the test procedures (including a surrogate HD-FTP cycle developed for transient studies) and the model calibration process are described. Differences in the model parameters were found between the calibrations developed from the reactor and the engine data, and the SCR inlet NH3 maldistribution was determined to be one of the causes of these differences. The model calibrated to the engine data served as a basis for developing a reduced-order SCR estimator model. The effect of the SCR inlet NO2/NOx ratio on SCR performance was studied through simulations using the surrogate HD-FTP cycle: the cumulative outlet NOx and the overall NOx conversion efficiency of the cycle are highest with a NO2/NOx ratio of 0.5, and the outlet NH3 is lowest for NO2/NOx ratios greater than 0.6. A combined engine experimental and simulation study was performed to quantify the NH3 maldistribution at the SCR inlet and its effects on SCR performance and kinetics. The uniformity index (UI) of the SCR inlet NH3 and NH3/NOx ratio (ANR) was determined to be below 0.8 for the production system; the UI improved to 0.9 after installation of a swirl mixer in the SCR inlet cone. A multi-channel model was developed to simulate the maldistribution effects. The results showed that reducing the UI of the inlet ANR from 1.0 to 0.7 caused a 5-10% decrease in NOx reduction efficiency and a 10-20 ppm increase in NH3 slip. Simulations of the steady-state engine data with the multi-channel model showed that the NH3 maldistribution is a factor causing the differences between the calibrations developed from the engine and the reactor data. Reactor experiments were performed at ORNL using a Spaci-IR technique to study the thermal aging effects. The test results showed that thermal aging (at 800°C for 16 hours) caused a 30% reduction in the NH3 stored on the catalyst under NH3 saturation conditions, and different axial concentration profiles under SCR reaction conditions. The kinetics analysis showed that thermal aging caused a reduction in total NH3 storage capacity (94.6 compared to 138 gmol/m³), different NH3 adsorption/desorption properties, and a decrease in the activation energy and pre-exponential factor for NH3 oxidation and for the standard and fast SCR reactions. Both the reduction in storage capability and the change in kinetics of the major reactions contributed to the changes in the axial storage and concentration profiles observed in the experiments.
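For reference, the uniformity index (UI) quoted above is commonly computed with the area-weighted definition sketched below; this is the standard textbook form, not code from the study, and the per-channel ANR values are invented for illustration.

```python
# Illustrative sketch: the area-weighted uniformity index (UI) commonly used
# to quantify maldistribution of a scalar (here, the NH3/NOx ratio, ANR)
# across catalyst channels. UI = 1 means perfectly uniform. The channel
# values below are made up for demonstration; this is not data from the study.
import numpy as np

def uniformity_index(values, areas=None):
    values = np.asarray(values, dtype=float)
    areas = np.ones_like(values) if areas is None else np.asarray(areas, float)
    mean = np.sum(areas * values) / np.sum(areas)   # area-weighted mean
    return 1.0 - np.sum(areas * np.abs(values - mean)) / (2.0 * mean * np.sum(areas))

anr = np.array([1.15, 0.95, 1.05, 0.80, 1.10, 0.95])  # hypothetical per-channel ANR
print(f"UI = {uniformity_index(anr):.3f}")
```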
Abstract:
Introduction: Seizures are harmful to the neonatal brain; this compels many clinicians and researchers to persevere in optimizing every aspect of the management of neonatal seizures. Aims: To delineate the seizure profile in non-cooled versus cooled neonates with hypoxic-ischaemic encephalopathy (HIE) and in neonates with stroke, to assess the response of seizure burden to phenobarbitone, and to quantify the degree of electroclinical dissociation (ECD) of seizures. Methods: Multichannel video-EEG was used in this research study as the gold standard for detecting seizures, allowing accurate quantification of seizure burden in term neonates. The entire EEG recording for each neonate was independently reviewed by at least one experienced neurophysiologist. Data were expressed as medians and interquartile ranges. Linear mixed-model results were presented as mean (95% confidence interval); p values <0.05 were deemed significant. Results: Seizure burden in cooled neonates was lower than in non-cooled neonates [60(39-224) vs 203(141-406) minutes; p=0.027]. Seizure burden was lower in cooled neonates with moderate HIE than in those with severe HIE [49(26-89) vs 162(97-262) minutes; p=0.020]. In neonates with stroke, the background pattern showed suppression over the infarcted side, and seizures demonstrated a characteristic pattern. Compared with 10 mg/kg, phenobarbitone doses of 20 mg/kg reduced seizure burden (p=0.004). Seizure burden was reduced within 1 hour of phenobarbitone administration [mean (95% confidence interval): -14(-20 to -8) minutes/hour; p<0.001], but seizures returned to pre-treatment levels within 4 hours (p=0.064). The ECD indices in cooled neonates with HIE, non-cooled neonates with HIE, neonates with stroke, and neonates with other diagnoses were 88%, 94%, 64% and 75%, respectively. Conclusions: Further research exploring treatment effects on seizure burden in the neonatal brain is required. A change to our current treatment strategy is warranted as we continue to strive for more effective seizure control, anchored in the use of multichannel EEG as the surveillance tool.
Abstract:
The constantly increasing demand for clean water has become challenging to meet over the past years, water being an ever more precious resource. In recent times, existing wastewater treatments have had to be supplemented with new steps, owing to the detection of so-called organic micropollutants (OMPs). These compounds have been shown to adversely affect the environment, and possibly human health, even at very low concentrations. One possible technique for removing OMPs from wastewater is a hybrid process combining filtration and adsorption. In this work, polyethersulfone multi-channel mixed-matrix membranes with embedded powdered activated carbon (PAC) were tested to investigate the membranes' adsorption and desorption performance. Micropollutant retention was analyzed using the pharmaceutical compounds diclofenac (DCF), paracetamol (PARA) and carbamazepine (CBZ) in filtration mode, combining PAC adsorption with the membrane's ultrafiltration. Desorption performance was studied through solvent regeneration, using seven different solvents: pure water, pure ethanol, mixtures of ethanol and water at different concentrations, sodium hydroxide, and a mixture of ethanol and sodium hydroxide. Regeneration experiments were carried out in forward-flushing mode. Regeneration efficiency was first investigated using a single-solute solution (diclofenac in water); the ethanol/water (50:50) mixture was found to be the most efficient, with a long-term retention of 59% after one desorption cycle. It was therefore subsequently tested on a membrane previously loaded with a multi-solute solution. Three desorption cycles were performed, after which retention (after 30 min) reached 87% for PARA, 72% for CBZ and 55% for DCF, indicating decent regenerability. A morphological analysis of the membranes confirmed that the regeneration cycles affected neither the membranes' structure nor the content and distribution of PAC in the matrix.
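For clarity, the retention percentages quoted above follow the standard observed-retention definition, R = (1 - C_permeate/C_feed) × 100; a minimal sketch with invented concentrations:

```python
# Minimal sketch: observed retention R = 1 - Cp/Cf, the standard definition
# behind the percentages quoted above. The concentrations are hypothetical.
def retention(c_feed_ug_l: float, c_permeate_ug_l: float) -> float:
    """Observed retention as a percentage."""
    return 100.0 * (1.0 - c_permeate_ug_l / c_feed_ug_l)

print(retention(10.0, 1.3))  # e.g. 10 ug/L feed, 1.3 ug/L permeate -> 87.0 %
```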
Abstract:
The GRAIN detector is part of the SAND Near Detector of the DUNE neutrino experiment. A new imaging technique based on the collection of scintillation light will be used to reconstruct images of particle tracks in the GRAIN detector. Silicon photomultiplier (SiPM) matrices will be used as photosensors to collect the scintillation light emitted at 127 nm by liquid argon. Reading out SiPM matrices inside the liquid argon requires a multi-channel mixed-signal ASIC, while the back-end electronics will be implemented in FPGAs outside the cryogenic environment. The ALCOR (A Low-power Circuit for Optical sensor Readout) ASIC, developed by the Torino division of INFN, is under study, since it is optimized to read out SiPMs at cryogenic temperatures. I took part in the realization of a demonstrator of the imaging system, which consists of a SiPM matrix connected to a custom circuit board on which an ALCOR ASIC is mounted; the board communicates with an FPGA. The first step of the project that I accomplished was the development of an emulator for the ALCOR ASIC, which allowed me to verify the correct functioning of the initial firmware before the real ASIC itself was available. I programmed the emulator in VHDL and also developed test benches to verify its correct operation. Furthermore, I developed portions of the DAQ software, which I used for data acquisition and for the slow control of the ASICs, as well as parts of the DAQ firmware for the FPGAs. Finally, I tested the complete SiPM readout system at both room and cryogenic temperature to ensure its full functionality.
Abstract:
This work proposes the use of evolutionary computation to jointly solve the multiuser channel estimation (MuChE) and maximum-likelihood detection problems in direct-sequence code division multiple access (DS/CDMA). The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multi-user detection (MuD) show that the proposed genetic algorithm multi-user channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum-likelihood multi-user channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multi-user detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multi-user detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was jointly analyzed in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
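To convey the flavour of the heuristic, the sketch below shows a generic real-coded genetic algorithm minimizing a least-squares channel-estimation cost; it illustrates the general GA idea only and is not the paper's GAMuChE algorithm, and all model and GA parameters are invented.

```python
# Minimal, generic GA sketch for channel estimation: fit h in y = S @ h + n
# by minimizing the least-squares cost with a real-coded genetic algorithm.
# This illustrates the general idea only; population size, rates, etc. are
# arbitrary choices, not the paper's settings.
import numpy as np

rng = np.random.default_rng(1)
L, M = 4, 32                           # channel taps, observation length
S = rng.standard_normal((M, L))        # known pilot/signature matrix (stand-in)
h_true = rng.standard_normal(L)
y = S @ h_true + 0.05 * rng.standard_normal(M)

def cost(h):
    return np.sum((y - S @ h) ** 2)

pop = rng.standard_normal((60, L))     # initial population of candidate channels
for generation in range(200):
    fitness = np.array([cost(ind) for ind in pop])
    parents = pop[np.argsort(fitness)[:20]]            # truncation selection
    # Uniform crossover between random parent pairs.
    i, j = rng.integers(0, 20, 60), rng.integers(0, 20, 60)
    mask = rng.random((60, L)) < 0.5
    pop = np.where(mask, parents[i], parents[j])
    pop += 0.02 * rng.standard_normal(pop.shape)       # Gaussian mutation
    pop[0] = parents[0]                                # elitism

h_hat = pop[0]
nmse = np.sum((h_hat - h_true) ** 2) / np.sum(h_true ** 2)
```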
Energy-efficient diversity combining for different access schemes in a multi-path dispersive channel
Abstract:
Dissertation submitted for the degree of Doctor in Electrical and Computer Engineering
Abstract:
This paper analyzes the asymptotic performance of maximum likelihood (ML) channel estimation algorithms in wideband code division multiple access (WCDMA) scenarios. We concentrate on systems with periodic spreading sequences (period larger than or equal to the symbol span) in which the transmitted signal contains a code-division-multiplexed pilot for channel estimation purposes. First, the asymptotic covariances of the training-only, semi-blind conditional maximum likelihood (CML) and semi-blind Gaussian maximum likelihood (GML) channel estimators are derived. Then, these formulas are further simplified assuming randomized spreading and training sequences, under the approximation of high spreading factors and a high number of codes. The results provide a useful tool for describing the performance of the channel estimators as a function of basic system parameters such as the number of codes, the spreading factors, or the traffic-to-training power ratio.
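As a point of reference for the asymptotic covariances discussed above, the training-only ML estimator in a linear Gaussian model reduces to least squares, whose covariance has the classical closed form below; this is the textbook baseline, not the paper's semi-blind CML/GML expressions.

```latex
% Textbook baseline: training-only ML channel estimation in the linear model
% y = X h + n, with n ~ CN(0, \sigma^2 I) and X the known training matrix.
\hat{h}_{\mathrm{ML}} = (X^{H} X)^{-1} X^{H} y,
\qquad
\operatorname{Cov}\bigl(\hat{h}_{\mathrm{ML}}\bigr) = \sigma^{2} (X^{H} X)^{-1}.
```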
Abstract:
In this paper, a new equalizer learning scheme is introduced based on the directional evolutionary multi-objective optimization (EMOO) algorithm. While nonlinear channel equalizers such as radial basis function (RBF) equalizers have been widely studied to combat the linear and nonlinear distortions in modern communication systems, most of them do not take the equalizer's generalization capability into account. In this paper, equalizers are designed with the aim of improving their generalization capability. It is proposed that this objective can be achieved by treating the equalizer design problem as a multi-objective optimization (MOO) problem, with each objective based on one of several training sets, and then deriving equalizers that recover the signals well for all the training sets. Conventional EMOO, which is widely applied to MOO problems, suffers from disadvantages such as slow convergence. Directional EMOO improves the computational efficiency of conventional EMOO by explicitly making use of directional information. The new equalizer learning scheme based on directional EMOO is applied to RBF equalizer design. Computer simulations demonstrate that the new scheme can be used to derive RBF equalizers with good generalization capabilities, i.e., good performance in predicting unseen samples.
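For background, an RBF equalizer of the kind referred to above computes a Gaussian-kernel expansion over received-signal windows; the sketch below shows that generic structure on a toy channel, with the weights fitted by plain least squares rather than by the paper's directional EMOO, and all parameters invented.

```python
# Generic RBF equalizer sketch: y(k) = sum_i w_i * exp(-||x(k)-c_i||^2/(2 s^2)),
# decision = sign(y). Centres, width and weights are fit here by plain least
# squares on one training set; the paper's directional EMOO across several
# training sets is not reproduced.
import numpy as np

rng = np.random.default_rng(2)

def rbf_design_matrix(X, centres, width):
    d2 = ((X[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Toy linear channel with additive noise; equalizer input is a 2-tap window.
symbols = rng.choice([-1.0, 1.0], size=500)
received = np.convolve(symbols, [1.0, 0.5], mode="full")[:500]
received += 0.1 * rng.standard_normal(500)
X = np.stack([received[1:], received[:-1]], axis=1)   # input vectors [r(k), r(k-1)]
d = symbols[1:]                                       # desired symbols

centres = X[rng.choice(len(X), size=16, replace=False)]  # centres drawn from data
Phi = rbf_design_matrix(X, centres, width=0.7)
w, *_ = np.linalg.lstsq(Phi, d, rcond=None)

decisions = np.sign(Phi @ w)
ber = np.mean(decisions != d)                         # training bit-error rate
```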
Abstract:
Multi-rate multicarrier DS/CDMA is a potentially attractive multiple-access method for future wireless communications networks that must support multimedia, and thus multi-rate, traffic. Several receiver structures exist for single-rate multicarrier systems, but little has been reported on multi-rate multicarrier systems. Considering that high-performance detection such as coherent demodulation requires explicit knowledge of the channel, this paper proposes a subspace-based scheme for timing and channel estimation in multi-rate multicarrier DS/CDMA systems, based on finite-length chip-waveform truncation; the scheme is applicable to both multicode and variable-spreading-factor systems. The performance of the proposed scheme for these two multi-rate systems is validated via numerical simulations. The effects of the finite-length chip-waveform truncation on the performance of the proposed scheme are also analyzed theoretically.
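To indicate the flavour of the subspace approach, here is a minimal MUSIC-style sketch: the noise subspace of the sample covariance is obtained by eigendecomposition, and the channel is recovered (up to scale) as the vector whose image under the known code matrix is most orthogonal to it. The rank-one signal model is a generic stand-in, not the paper's multi-rate multicarrier formulation.

```python
# Minimal MUSIC-style subspace sketch: the received vector is modelled as
# r_k = (C @ h) * b_k + noise, with C a known code/waveform matrix and h the
# unknown channel. h is recovered (up to scale) as the vector whose image
# under C is most orthogonal to the estimated noise subspace. Generic
# stand-in model; all dimensions and values are invented.
import numpy as np

rng = np.random.default_rng(3)
M, L, K = 32, 4, 400                       # observation size, taps, snapshots
C = rng.standard_normal((M, L))            # known code matrix (stand-in)
h_true = rng.standard_normal(L)
h_true /= np.linalg.norm(h_true)

b = rng.choice([-1.0, 1.0], size=K)        # unknown data symbols
snapshots = np.outer(C @ h_true, b) + 0.1 * rng.standard_normal((M, K))

# Sample covariance and its eigendecomposition (ascending eigenvalues).
R = snapshots @ snapshots.T / K
eigvals, eigvecs = np.linalg.eigh(R)
U_noise = eigvecs[:, :-1]                  # all but the largest -> noise subspace

# h_hat minimizes ||U_noise^T C h||: smallest eigenvector of C^T Pn C.
Q = C.T @ U_noise @ U_noise.T @ C
w, V = np.linalg.eigh(Q)
h_hat = V[:, 0]
h_hat *= np.sign(h_hat @ h_true)           # resolve the sign ambiguity (demo only)
error = np.linalg.norm(h_hat - h_true)
```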