969 results for signal quality
Abstract:
This paper addresses the problem of survivable lightpath provisioning in wavelength-division-multiplexing (WDM) mesh networks, taking into consideration optical-layer protection and realistic optical signal quality constraints. The investigated networks use sparsely placed optical-electrical-optical (O/E/O) modules for regeneration and wavelength conversion. Given a fixed network topology with a number of sparsely placed O/E/O modules and a set of connection requests, a pair of link-disjoint lightpaths is established for each connection. Due to physical impairments and the wavelength-continuity constraint, both the working and protection lightpaths need to be regenerated at some intermediate nodes to overcome signal quality degradation and wavelength contention. Resource-efficient provisioning solutions are achieved with the objective of maximizing resource sharing. The authors propose a resource-sharing scheme that supports three kinds of resource-sharing scenarios: a conventional wavelength-link sharing scenario, which shares wavelength links between protection lightpaths, and two new scenarios, which share O/E/O modules between protection lightpaths and between working and protection lightpaths. An integer linear programming (ILP)-based approach is used to find optimal solutions. The authors also propose a local optimization heuristic and a tabu search heuristic to solve the problem for large, real-world mesh networks. Numerical results show that the proposed solution approaches work well under a variety of network settings and achieve high resource-sharing rates (over 60% for O/E/O modules and over 30% for wavelength links), which translate into significant savings in network cost.
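As a point of reference for the link-disjoint pairing the abstract describes, the following is a minimal sketch (not the authors' ILP or tabu-search formulation) of computing a working/protection path pair with networkx; the topology, node names, and weights are illustrative assumptions.

```python
# Minimal sketch: compute a link-disjoint working/protection lightpath pair on
# a small mesh topology. Removing the working links before computing the
# protection path guarantees link disjointness but is not jointly optimal
# (a Suurballe-style or ILP formulation would minimise the combined cost).
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1), ("B", "C", 1), ("A", "D", 2),
    ("D", "C", 2), ("B", "D", 1),
])

src, dst = "A", "C"

# Working path: shortest path by weight.
working = nx.shortest_path(G, src, dst, weight="weight")

# Protection path: shortest path after removing the working links.
H = G.copy()
H.remove_edges_from(zip(working, working[1:]))
protection = nx.shortest_path(H, src, dst, weight="weight")

print("working:   ", working)
print("protection:", protection)
```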
The optimal lead insertion depth for esophageal ECG recordings with respect to atrial signal quality
Abstract:
BACKGROUND Diagnosing supraventricular arrhythmias with conventional long-term ECG can be cumbersome because of poor P-wave visibility. Esophageal long-term electrocardiography (eECG) has excellent sensitivity for atrial signals and may overcome this limitation. However, the optimal lead insertion depth (OLID) is not known. METHODS We registered eECGs at different lead insertion depths in 27 patients and analyzed 199,716 atrial complexes with respect to signal amplitude and slope. Correlation and regression analyses were used to find a criterion for OLID. RESULTS Atrial signal amplitudes and slopes depend significantly on lead insertion depth. OLID correlates with body height (Spearman's r = 0.71) and can be estimated as OLID [cm] = 0.25 * body height [cm] - 7 cm. At this insertion depth, we recorded the largest esophageal atrial signal amplitudes (1.27 ± 0.86 mV), which were much larger than in conventional surface lead II (0.19 ± 0.10 mV, p < 0.0001). CONCLUSION The OLID depends on body height and can be calculated by a simple regression formula.
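The reported regression formula lends itself to a one-line check; a minimal sketch (the function name and example height are illustrative, not from the paper):

```python
def optimal_lead_insertion_depth(body_height_cm: float) -> float:
    """Estimate esophageal lead insertion depth (cm) from body height (cm),
    using the regression formula quoted in the abstract:
    OLID [cm] = 0.25 * body height [cm] - 7 cm."""
    return 0.25 * body_height_cm - 7.0

# Example: a 180 cm tall patient -> 0.25 * 180 - 7 = 38 cm insertion depth.
print(optimal_lead_insertion_depth(180))  # 38.0
```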
Abstract:
We present a statistical model-based approach to signal enhancement in the case of additive broadband noise. Because broadband noise is localised in neither time nor frequency, its removal is one of the most pervasive and difficult signal enhancement tasks. In order to improve perceived signal quality, we take advantage of human perception and define a best estimate of the original signal in terms of a cost function incorporating perceptual optimality criteria. We derive the resultant signal estimator and implement it in a short-time spectral attenuation framework. Audio examples, references, and further information may be found at http://www-sigproc.eng.cam.ac.uk/~pjw47.
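The abstract does not give the estimator itself; the sketch below only illustrates the general short-time spectral attenuation framework it mentions, using a simple Wiener-style gain. The noise estimate, frame length, and test signal are assumptions, not the paper's perceptually motivated estimator.

```python
# Illustrative short-time spectral attenuation with a Wiener-style gain.
# The noise PSD is assumed to come from a signal-free segment.
import numpy as np
from scipy.signal import stft, istft

def spectral_attenuation(noisy, noise_only, fs, nperseg=512):
    # Noise power spectral density from a noise-only segment (assumption).
    _, _, N = stft(noise_only, fs=fs, nperseg=nperseg)
    noise_psd = np.mean(np.abs(N) ** 2, axis=1, keepdims=True)

    f, t, X = stft(noisy, fs=fs, nperseg=nperseg)
    snr = np.maximum(np.abs(X) ** 2 / noise_psd - 1.0, 1e-3)  # crude SNR estimate
    gain = snr / (snr + 1.0)                                  # Wiener gain
    _, enhanced = istft(gain * X, fs=fs, nperseg=nperseg)
    return enhanced

# Example with synthetic data.
fs = 16000
rng = np.random.default_rng(0)
clean = np.sin(2 * np.pi * 440 * np.arange(fs) / fs)
noise = 0.3 * rng.standard_normal(fs)
out = spectral_attenuation(clean + noise, noise, fs)
```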
Abstract:
Free space transmission of an on-off modulated sinusoidal signal through a phase conjugating lens (PCL) is theoretically examined using a combined time/frequency domain approach. The on-off keyed (OOK) signal is generated by a dipole antenna located in the far-field zone of the lens. The PCL consists of a dual layer of antenna elements interconnected via phase conjugating circuitry. We demonstrate that electromagnetic interference between antenna elements creates spatially localised areas of good-quality reception and zones where the signal is significantly degraded by interference. Next, it is shown that destructive interference and packet desynchronisation effects critically depend on bit rate. It is also shown that a circular concave lens can be used to produce high-quality signal reception in a given direction while suppressing signal reception in all other directions. The effect of the bandwidth of the phase conjugating unit on the transmitted signal properties for high and low bit rate OOK modulation is studied, and a signal quality characterisation scheme based on cross-correlation is proposed. The results of the study yield an understanding of the performance of phase conjugating arrays under OOK modulation. The work suggests a novel approach for realising a secure wireless communication system.
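The abstract proposes a cross-correlation-based quality characterisation; a minimal sketch of that idea is shown below, scoring a received OOK waveform against the transmitted one with a normalised cross-correlation peak. The bit pattern, samples per bit, and channel model are illustrative assumptions.

```python
# Minimal cross-correlation quality score for an OOK waveform: the peak of
# the normalised cross-correlation between transmitted and received signals
# (1.0 corresponds to an undistorted, possibly delayed copy).
import numpy as np

def xcorr_quality(tx, rx):
    tx = (tx - tx.mean()) / tx.std()
    rx = (rx - rx.mean()) / rx.std()
    c = np.correlate(rx, tx, mode="full") / len(tx)
    return float(np.max(c))

samples_per_bit = 32
bits = np.array([1, 0, 1, 1, 0, 0, 1, 0])
tx = np.repeat(bits, samples_per_bit).astype(float)

rng = np.random.default_rng(1)
rx = 0.8 * np.roll(tx, 5) + 0.2 * rng.standard_normal(tx.size)  # delayed, noisy copy

print(f"quality = {xcorr_quality(tx, rx):.2f}")
```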
Abstract:
Using the principle of quasi-continuous filtering in a non-linear fibre, we propose an optical device for the simultaneous regeneration of several channels at 40 Gbit/s. Simulations predict an improvement of the signal quality for four channels by more than 6.8 dB.
Abstract:
While close-talking microphones give the best signal quality and produce the highest accuracy from current Automatic Speech Recognition (ASR) systems, a speech signal enhanced by a microphone array has been shown to be an effective alternative in a noisy environment. The use of microphone arrays, in contrast to close-talking microphones, alleviates the user's feeling of discomfort and distraction. For this reason, microphone arrays are popular and have been used in a wide range of applications such as teleconferencing, hearing aids, speaker tracking, and as the front-end to speech recognition systems. With advances in sensor and sensor network technology, there is considerable potential for applications that employ ad-hoc networks of microphone-equipped devices collaboratively as a virtual microphone array. By allowing such devices to be distributed throughout the users' environment, the microphone positions are no longer constrained to traditional fixed geometrical arrangements. This flexibility in the means of data acquisition allows different audio scenes to be captured to give a complete picture of the working environment. In such ad-hoc deployments of microphone sensors, however, the lack of information about the location of devices and active speakers poses technical challenges for array signal processing algorithms which must be addressed to allow deployment in real-world applications. While not an ad-hoc sensor network, conditions approaching this have in effect been imposed in recent National Institute of Standards and Technology (NIST) ASR evaluations on distant microphone recordings of meetings. The NIST evaluation data comes from multiple sites, each with different and often loosely specified distant microphone configurations. This research investigates how microphone array methods can be applied to ad-hoc microphone arrays. A particular focus is on devising methods that are robust to unknown microphone placements in order to improve the overall speech quality and recognition performance provided by the beamforming algorithms. In ad-hoc situations, microphone positions and likely source locations are not known and beamforming must be achieved blindly. There are two general approaches that can be employed to blindly estimate the steering vector for beamforming. The first is direct estimation without regard to the microphone and source locations. An alternative approach is instead to first determine the unknown microphone positions through array calibration methods and then to use the traditional geometrical formulation for the steering vector. Following these two major approaches investigated in this thesis, a novel clustered approach is proposed, which includes clustering the microphones and selecting the clusters based on their proximity to the speaker. Novel experiments are conducted to demonstrate that the proposed method to automatically select clusters of microphones (i.e., a subarray), closely located both to each other and to the desired speech source, may in fact provide more robust speech enhancement and recognition than the full array.
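For context on the beamforming baseline the thesis builds on, here is a minimal delay-and-sum sketch (not the thesis's clustered, position-blind method); the microphone and source geometry is assumed known purely to illustrate the steering-delay computation.

```python
# Minimal delay-and-sum beamformer for a small microphone array.
import numpy as np

def delay_and_sum(signals, mic_positions, source_pos, fs, c=343.0):
    """signals: (num_mics, num_samples); positions in metres; c = speed of sound."""
    dists = np.linalg.norm(mic_positions - source_pos, axis=1)
    delays = (dists - dists.min()) / c            # relative propagation delays
    out = np.zeros(signals.shape[1])
    for sig, d in zip(signals, delays):
        shift = int(round(d * fs))                # integer-sample alignment
        out += np.roll(sig, -shift)               # advance later arrivals (wrap-around ignored)
    return out / len(signals)

# Illustrative 4-microphone linear array at 16 kHz (positions are assumptions).
fs = 16000
mics = np.array([[0.00, 0.0], [0.05, 0.0], [0.10, 0.0], [0.15, 0.0]])
source = np.array([1.0, 1.0])
rng = np.random.default_rng(0)
signals = rng.standard_normal((4, fs))            # stand-in recordings
enhanced = delay_and_sum(signals, mics, source, fs)
```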
Abstract:
Automatic speech recognition from multiple distant microphones poses significant challenges because of noise and reverberation. The quality of speech acquisition may vary between microphones because of speaker movement and channel distortion. This paper proposes a channel selection approach for selecting reliable channels based on a selection criterion operating in the short-term modulation spectrum domain. The proposed approach quantifies the relative strength of speech modulations from each microphone and from the beamformed signal. The new technique is compared experimentally in real reverberant conditions in terms of perceptual evaluation of speech quality (PESQ) measures and word error rate (WER). An overall improvement in recognition rate is observed using delay-sum and superdirective beamformers with circular microphone arrays, compared to the case when the channel is selected randomly.
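A minimal sketch of modulation-domain channel selection in the spirit of the abstract: rank channels by the fraction of envelope-modulation energy in a low-frequency band around typical syllabic rates. The band limits, framing, and scoring are assumptions, not the paper's exact criterion.

```python
# Rank channels by modulation energy in the ~2-16 Hz band of the short-time
# energy envelope (roughly the speech syllabic range).
import numpy as np
from scipy.signal import stft

def modulation_score(x, fs, nperseg=400, hop=160):
    # Short-time energy envelope.
    _, _, X = stft(x, fs=fs, nperseg=nperseg, noverlap=nperseg - hop)
    envelope = np.abs(X).sum(axis=0)
    frame_rate = fs / hop
    # Modulation spectrum of the envelope.
    M = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(envelope.size, d=1.0 / frame_rate)
    band = (freqs >= 2) & (freqs <= 16)
    return M[band].sum() / (M.sum() + 1e-12)

def select_channel(channels, fs):
    scores = [modulation_score(ch, fs) for ch in channels]
    return int(np.argmax(scores)), scores
```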
Abstract:
An object in the peripheral visual field is more difficult to recognize when surrounded by other objects. This phenomenon is called "crowding". Crowding places a fundamental constraint on human vision that limits performance on numerous tasks. It has been suggested that crowding results from spatial feature integration necessary for object recognition. However, in the absence of convincing models, this theory has remained controversial. Here, we present a quantitative and physiologically plausible model for spatial integration of orientation signals, based on the principles of population coding. Using simulations, we demonstrate that this model coherently accounts for fundamental properties of crowding, including critical spacing, "compulsory averaging", and a foveal-peripheral anisotropy. Moreover, we show that the model predicts increased responses to correlated visual stimuli. Altogether, these results suggest that crowding has little immediate bearing on object recognition but is a by-product of a general, elementary integration mechanism in early vision aimed at improving signal quality.
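A toy population-coding sketch of the "compulsory averaging" idea: pooling the responses of orientation-tuned units to a target and a nearby flanker yields a decoded orientation near their average. Tuning width, preferred orientations, and the decoder are illustrative assumptions, not the paper's fitted model.

```python
# Pool orientation-tuned responses to two stimuli and decode the result with
# a population vector on the doubled-angle circle.
import numpy as np

prefs = np.linspace(0, 180, 180, endpoint=False)            # preferred orientations (deg)

def tuning(stimulus_deg, width=20.0):
    # Circular Gaussian tuning on the 180-degree orientation domain.
    d = np.angle(np.exp(1j * np.deg2rad(2 * (prefs - stimulus_deg)))) / 2
    return np.exp(-np.rad2deg(d) ** 2 / (2 * width ** 2))

def decode(population):
    vec = np.sum(population * np.exp(1j * np.deg2rad(2 * prefs)))
    return np.rad2deg(np.angle(vec)) / 2 % 180

target, flanker = 80.0, 100.0
pooled = tuning(target) + tuning(flanker)                    # spatial integration
print(decode(pooled))                                        # ~90, the average
```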
Abstract:
In modern signal processing, the objects of analysis are usually non-linear, non-Gaussian and non-stationary signals, especially non-stationary signals. The conventional tools for analyzing and processing non-stationary signals include the short-time Fourier transform, the Wigner-Ville distribution, and the wavelet transform. All three, however, are based on the Fourier transform, so they inherit the shortcomings of Fourier analysis and cannot escape its limitations. The Hilbert-Huang Transform (HHT) is a newer technique for processing non-stationary signals, proposed by N. E. Huang in 1998. It consists of Empirical Mode Decomposition (EMD) and Hilbert Spectral Analysis (HSA). After EMD processing, any non-stationary signal is decomposed into a series of data sequences with different scales, each called an Intrinsic Mode Function (IMF). The energy distribution of the original non-stationary signal is then obtained by summing the Hilbert spectra of all IMFs. In essence, the algorithm renders non-stationary signals stationary, separates fluctuations and trends of different scales step by step, and finally describes the frequency content with instantaneous frequency and energy instead of the global frequency and energy of Fourier spectral analysis. In this way, the many spurious harmonics that Fourier analysis needs to describe non-linear and non-stationary signals are avoided. This paper covers the following. First, it reviews the history and development of HHT, its characteristics and main open issues, briefly introduces the basic principles and algorithms of the Hilbert-Huang transform, and confirms its validity through simulations. Second, it discusses several shortcomings of HHT. Using FFT interpolation, the instability of IMFs and the fluctuation of instantaneous frequency caused by an insufficient sampling rate are addressed; for the boundary effect caused by the limitations of the envelope algorithm in HHT, a wave-characteristic matching method is applied, with good results. Third, the paper studies the application of HHT to electromagnetic signal processing. Based on the analysis of real data examples, its use for electromagnetic signal processing and noise suppression is discussed. The empirical mode decomposition method and its multi-scale filtering characteristics can effectively analyze the noise distribution of electromagnetic signals, suppress interference, and improve the interpretability of the information. It has been found that selecting electromagnetic signal sections using the Hilbert time-frequency energy spectrum helps improve signal quality and enhance data quality.
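Since the abstract rests on Hilbert spectral analysis, here is a minimal sketch of the HSA step only: instantaneous amplitude and frequency of a single oscillatory component via the analytic signal. The EMD sifting that would produce the IMFs is omitted, and the chirp used here is just an illustrative stand-in for one IMF.

```python
# Instantaneous amplitude and frequency of one IMF-like component using the
# analytic signal (Hilbert transform).
import numpy as np
from scipy.signal import hilbert, chirp

fs = 1000.0
t = np.arange(0, 2.0, 1.0 / fs)
imf = chirp(t, f0=5.0, t1=2.0, f1=40.0)           # frequency sweeps 5 -> 40 Hz

analytic = hilbert(imf)
amplitude = np.abs(analytic)                      # instantaneous amplitude
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2.0 * np.pi) * fs   # instantaneous frequency (Hz)

print(inst_freq[:5], inst_freq[-5:])              # rises from ~5 Hz towards ~40 Hz
```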
Abstract:
Methane is a potent greenhouse gas and an important energy source. The global importance and coastal impact of methane accumulation and escape are still poorly understood. This thesis investigates gas accumulation and escape in tidal channels of the Ria de Aveiro using data from five high-resolution seismic reflection surveys carried out in 1986, 1999, 2002 and 2003. These include three Chirp surveys (RIAV99, RIAV02 and RIAV02A) and two Boomer surveys (VOUGA86 and RIAV03). Processing of the navigation data included error filtering, corrections for clock synchronization between acquisition systems, layback adjustment, and midpoint position estimation. Seismic signal processing consisted of amplitude correction, burst-noise removal, static corrections, normal move-out correction, bandpass filtering, signature deconvolution, and Stolt F-K migration. Analysis of the regularity of the navigation track, of the mis-ties between horizons, and of surface models was used for quality control, and allowed revision and improvement of the processing parameters. The heterogeneity of the seismic coverage, signal quality, penetration, and resolution together constrained the use of the data to detailed but local interpretations of geological objects in the Ria. A procedure is presented for choosing appropriate scales for modelling the geological objects, based on seismic resolution, known positioning errors, and average mis-ties between horizons. Evidence of gas accumulation and escape in the Ria de Aveiro includes acoustic turbidity, enhanced reflections, acoustic curtains, domes, pockmarks and alignments of buried pockmarks, disturbed horizons, and acoustic plumes in the water column (flares). Stratigraphy and geological structure control the distribution and extent of gas accumulation and escape. Even so, in these shallow-water systems, variations in tidal height have a significant impact on gas detection with acoustic methods, through changes in the original amplitudes of enhanced reflections, acoustic turbidity, and acoustic blanking in gassy areas. The observed patterns confirm that the escape of gas bubbles is triggered by the falling tide. Gas accumulations occur in Holocene sediments and in the Mesozoic clay and limestone basement. Direct evidence of gas escape from boreholes in neighbouring areas showed essentially biogenic gas. Most of the gas in the area was probably generated in Holocene lagoonal sediments. However, the location and geometry of fluid escape structures in some tidal channels follow the fracture pattern of the Mesozoic basement, indicating a possible deeper gas source and suggesting that these fractures act as preferential fluid migration conduits and exert a structural control on the occurrence of gas in the Ria.
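Of the processing chain listed above, bandpass filtering is the simplest step to illustrate; a minimal zero-phase Butterworth sketch for a single trace follows. The corner frequencies, sampling rate, and test trace are assumptions, not the survey's actual parameters.

```python
# Zero-phase band-pass filtering of a single seismic trace.
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_trace(trace, fs, low_hz, high_hz, order=4):
    b, a = butter(order, [low_hz, high_hz], btype="band", fs=fs)
    return filtfilt(b, a, trace)   # zero-phase filtering preserves arrival times

fs = 8000.0                        # 0.125 ms sample interval (assumed)
rng = np.random.default_rng(0)
trace = rng.standard_normal(4000)  # stand-in for a recorded trace
filtered = bandpass_trace(trace, fs, low_hz=300.0, high_hz=2000.0)
```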
Abstract:
Introduction. In Colombia, 80% of patients with chronic kidney disease on hemodialysis have a peripheral arteriovenous fistula (AVF) that ensures blood flow during hemodialysis (1). Variability in the blood flow from the AVF arm towards its distal part may affect pulse oximetry (SpO2) readings (2), leading health personnel to make wrong decisions. The objective of this study is to clarify whether there is a difference between the SpO2 of the AVF arm and that of the contralateral arm. Materials and methods. A correlation study was carried out between the SpO2 values of the arm with AVF and the arm without AVF in 40 patients attending hemodialysis. Data were collected using a form that recorded the pulse oximetry result and associated variables before, during, and after hemodialysis. The medians of the deltas of the differences were compared with Student's t and Mann-Whitney tests, accepting p < 0.05 as significant. Results. No statistically significant differences in SpO2 were found between the arm with AVF and the arm without AVF before, during, or after dialysis; however, a statistically significant positive correlation was observed. Conclusions. A statistically significant positive correlation was found, with no differences in pulse oximetry results between the arm with AVF and the arm without AVF; it is therefore valid to measure pulse oximetry on either arm.
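A minimal sketch of the kind of comparison described above, using scipy's Mann-Whitney U test on paired SpO2 readings. The data below are synthetic placeholders, not the study's measurements, and the exact test setup (deltas before/during/after dialysis) is simplified.

```python
# Compare SpO2 readings from the fistula arm and the contralateral arm.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(42)
spo2_fistula_arm = rng.normal(96.0, 1.5, size=40)
spo2_control_arm = spo2_fistula_arm + rng.normal(0.0, 0.5, size=40)

deltas = spo2_fistula_arm - spo2_control_arm
stat, p = mannwhitneyu(spo2_fistula_arm, spo2_control_arm)
print(f"median delta = {np.median(deltas):.2f}, Mann-Whitney p = {p:.3f}")
```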
Abstract:
Abstract. Different types of mental activity are utilised as an input in Brain-Computer Interface (BCI) systems. One such activity type is based on Event-Related Potentials (ERPs). The characteristics of ERPs are not visible in single trials, thus averaging over a number of trials is necessary before the signals become usable. An improvement in ERP-based BCI operation and system usability could be obtained if the use of single-trial ERP data were possible. The method of Independent Component Analysis (ICA) can be utilised to separate single-trial recordings of ERP data into components that correspond to ERP characteristics, background electroencephalogram (EEG) activity, and other components of non-cerebral origin. Choosing specific components and using them to reconstruct "denoised" single-trial data could improve the signal quality, thus allowing the successful use of single-trial data without the need for averaging. This paper assesses single-trial ERP signals reconstructed using a selection of estimated components from the application of ICA to the raw ERP data. Signal improvement is measured using contrast-to-noise measures. It was found that such analysis improves the signal quality in all single trials.
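A minimal sketch of the decompose-select-reconstruct idea described above, using scikit-learn's FastICA: decompose a multi-channel single trial, zero out components judged non-cerebral, and back-project the rest. The data, channel count, and kept-component indices are illustrative assumptions (component selection is the hard part in practice).

```python
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
n_channels, n_samples = 16, 512
trial = rng.standard_normal((n_samples, n_channels))    # stand-in single trial

ica = FastICA(n_components=n_channels, random_state=0)
sources = ica.fit_transform(trial)                       # (samples, components)

keep = [0, 1, 2, 3]                                      # assumed ERP-related components
cleaned_sources = np.zeros_like(sources)
cleaned_sources[:, keep] = sources[:, keep]

denoised_trial = ica.inverse_transform(cleaned_sources)  # back to channel space
```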
Abstract:
Use of orthogonal space-time block codes (STBCs) with multiple transmitters and receivers can improve signal quality. However, in optical intensity-modulated systems, the output of the transmitter is non-negative, and hence standard orthogonal STBC schemes need to be modified. A generalised framework for applying orthogonal STBCs to free-space IM/DD optical links is presented.
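To illustrate why the non-negativity constraint matters, here is a minimal sketch of the standard 2x1 Alamouti encoding with a DC bias added to keep the transmitted intensity non-negative. This is a generic illustration, not the paper's generalised framework; the symbols and biasing rule are assumptions.

```python
# Standard Alamouti encoding transmits negated symbols, so a DC bias is added
# here to keep the optical intensity non-negative.
import numpy as np

def alamouti_encode(s1, s2):
    # Rows = time slots, columns = transmit apertures (standard Alamouti).
    return np.array([[s1,           s2],
                     [-np.conj(s2), np.conj(s1)]])

symbols = np.array([1.0, 1.0])           # OOK intensities for one block
block = alamouti_encode(*symbols).real   # real-valued for intensity modulation

bias = -block.min() if block.min() < 0 else 0.0
tx_intensity = block + bias              # shifted so every sample is >= 0

print(block)
print(tx_intensity)
```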