954 results for Low pass filters
Abstract:
In physical-layer security systems there is a clear need to exploit the characteristics of the radio link to automatically generate an encryption key between two end points. The success of the key generation depends on the channel reciprocity, which is degraded by non-simultaneous measurements and by the white nature of the noise. In this paper, a key generation system based on the channel responses of OFDM subcarriers, with enhanced channel reciprocity, is proposed. By theoretically modelling the OFDM subcarriers' channel responses, the channel reciprocity is modelled and analyzed. A low pass filter is accordingly designed to improve the channel reciprocity by suppressing the noise. This feature is essential in low-SNR environments in order to reduce the risk of failure of the information reconciliation phase during key generation. The simulation results show that the low pass filter improves the channel reciprocity, decreases the key disagreement rate, and effectively increases the success rate of key generation.
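The effect of such a low pass filter can be illustrated with a toy simulation. The sketch below (not the authors' implementation) has two end points observe the same slowly varying channel gain through independent white noise; a Butterworth low-pass filter then raises the correlation of their measurements and lowers the disagreement rate of the quantized key bits. All signal and filter parameters are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

rng = np.random.default_rng(0)

# Slowly varying "true" channel gain shared by both ends (illustrative model).
n = 2000
true_channel = np.cumsum(rng.normal(0, 0.02, n))  # random-walk fading

# Alice and Bob measure the same channel with independent white noise (low SNR).
alice = true_channel + rng.normal(0, 0.5, n)
bob = true_channel + rng.normal(0, 0.5, n)

# Low-pass filter: the channel varies slowly and the noise is white, so a
# low cutoff suppresses noise while keeping the reciprocal component.
b, a = butter(4, 0.05)  # 4th-order Butterworth, normalized cutoff (assumed)
alice_f, bob_f = filtfilt(b, a, alice), filtfilt(b, a, bob)

def kdr(x, y):
    """Key disagreement rate after 1-bit quantization around the median."""
    bits_x, bits_y = x > np.median(x), y > np.median(y)
    return np.mean(bits_x != bits_y)

print(f"correlation raw/filtered: {np.corrcoef(alice, bob)[0, 1]:.3f} / "
      f"{np.corrcoef(alice_f, bob_f)[0, 1]:.3f}")
print(f"KDR raw/filtered: {kdr(alice, bob):.3f} / {kdr(alice_f, bob_f):.3f}")
```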
Abstract:
Key generation from the randomness of wireless channels is a promising technique to establish a secret cryptographic key securely between legitimate users. This paper proposes a new approach to extract keys efficiently from channel responses of individual orthogonal frequency-division multiplexing (OFDM) subcarriers. The efficiency is achieved by (i) fully exploiting randomness from time and frequency domains and (ii) improving the cross-correlation of the channel measurements. Through the theoretical modelling of the time and frequency autocorrelation relationship of the OFDM subcarrier's channel responses, we can obtain the optimal probing rate and use multiple uncorrelated subcarriers as random sources. We also study the effects of non-simultaneous measurements and noise on the cross-correlation of the channel measurements. We find the cross-correlation is mainly impacted by noise effects in a slow fading channel and use a low pass filter (LPF) to reduce the key disagreement rate and extend the system's working signal-to-noise ratio range. The system is evaluated in terms of randomness, key generation rate, and key disagreement rate, verifying that it is feasible to extract randomness from both time and frequency domains of the OFDM subcarrier's channel responses.
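A minimal sketch of the frequency-domain half of this approach, under an assumed exponential power-delay profile: estimate the frequency autocorrelation of the subcarrier gains, keep only subcarriers spaced far enough apart to be effectively uncorrelated, and quantize each into key bits. The channel model, the 0.3 decorrelation threshold, and the median quantizer are illustrative choices, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Frequency-selective channel gains on 64 subcarriers, simulated from an
# exponential power-delay profile (illustrative assumption).
n_sub, n_taps, n_frames = 64, 8, 500
pdp = np.exp(-np.arange(n_taps) / 3.0)
taps = (rng.normal(size=(n_frames, n_taps)) +
        1j * rng.normal(size=(n_frames, n_taps))) * np.sqrt(pdp / 2)
H = np.fft.fft(taps, n_sub, axis=1)          # per-frame subcarrier responses
gains = np.abs(H)

# Frequency autocorrelation as a function of subcarrier separation.
centered = gains - gains.mean(axis=0)
def freq_corr(lag):
    a, b = centered[:, :-lag], centered[:, lag:]
    return np.mean(a * b) / (a.std() * b.std())

# Smallest spacing at which correlation drops below 0.3 (assumed threshold).
spacing = next(l for l in range(1, n_sub) if abs(freq_corr(l)) < 0.3)
selected = np.arange(0, n_sub, spacing)
print(f"spacing = {spacing} subcarriers -> {len(selected)} independent sources")

# One key bit per selected subcarrier per frame (median-threshold quantizer).
bits = (gains[:, selected] > np.median(gains[:, selected], axis=0)).astype(int)
print("bits per frame:", bits.shape[1])
```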
Abstract:
Doctoral thesis, Electronic and Computer Engineering, Faculdade de Ciência e Tecnologia, Universidade do Algarve, 2007
Abstract:
Indwelling electromyography (EMG) has great diagnostic value, but its invasive and often painful characteristics make it inappropriate for monitoring human movement. Spike shape analysis of the surface electromyographic signal responds to the call for non-invasive EMG measures for monitoring human movement and detecting neuromuscular disorders. The present study analyzed the relationship between surface and indwelling EMG interference patterns. Twenty-four males and twenty-four females performed three isometric dorsiflexion contractions at five force levels from 20% to maximal force. The amplitude measures increased differently between electrode types, attributed to electrode sensitivity. The frequency measures differed between traditional and spike shape measures due to different noise rejection criteria. These measures also differed between surface and indwelling EMG due to the low-pass tissue filtering effect. The spike shape measures, thought to function collectively as a means of differentiating between motor unit characteristics, changed independently of one another.
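For orientation, a generic sketch of two interference-pattern spike measures of the kind compared in such studies (mean spike amplitude and mean spike frequency), computed on synthetic surface-EMG-like data; the threshold-crossing peak detector below is a simple stand-in, not the authors' spike shape algorithm.

```python
import numpy as np
from scipy.signal import find_peaks

rng = np.random.default_rng(2)
fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 5, 1 / fs)

# Synthetic interference pattern: many summed motor-unit-like spikes + noise.
emg = rng.normal(0, 0.05, t.size)
for _ in range(200):
    i = rng.integers(0, t.size - 30)
    emg[i:i + 30] += rng.uniform(0.2, 1.0) * np.hanning(30) * rng.choice([-1, 1])

# Generic spike detection: peaks exceeding 3x the baseline noise SD (assumed).
thresh = 3 * 0.05
peaks, props = find_peaks(emg, height=thresh)

mean_spike_amp = props["peak_heights"].mean()     # amplitude measure
mean_spike_freq = len(peaks) / t[-1]              # spikes per second
print(f"mean spike amplitude: {mean_spike_amp:.3f} (a.u.)")
print(f"mean spike frequency: {mean_spike_freq:.1f} spikes/s")
```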
Abstract:
The crowding effect, which prevents us from correctly identifying a visual stimulus when it is surrounded by flankers, is ubiquitous across a wide variety of stimulus classes. The eccentricity of the target stimulus and the target-flanker distance are fundamental factors that modulate the crowding effect. Target-flanker similarity also appears to contribute to the magnitude of crowding, according to data obtained with non-linguistic stimuli. The present study examined these three factors in conjunction with the spatial frequency content of the stimuli, in a letter identification task. We presented filtered images of letters to non-dyslexic subjects free of neurological disorders, while manipulating target eccentricity and target-flanker similarity (based on pre-established confusion matrices). Four types of spatial frequency filtering were used: low-pass, high-pass, broadband, and mixed (i.e., removal of the medium frequencies known to be optimal for letter identification). These conditions were matched in terms of contrast energy. Subjects had to identify the target letter as quickly as possible while avoiding errors. The results show that target-flanker similarity amplifies the crowding effect, i.e., the joint effect of distance and eccentricity. This extends our knowledge of the impact of similarity on crowding to the visual identification of linguistic stimuli. Moreover, the magnitude of the crowding effect is greatest with the low-pass filter, followed by the mixed, high-pass, and broadband filters, with significant differences between consecutive conditions. We conclude that: 1- medium spatial frequencies offer optimal protection against crowding in letter identification; 2- when medium spatial frequencies are absent from the stimulus, high frequencies protect against crowding whereas low frequencies amplify it, probably through their opposite impact on the availability of information about the distinctive features of the stimuli.
Abstract:
Thesis digitized by the Division de la gestion de documents et des archives of the Université de Montréal
Abstract:
Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5m or less are possible, with a height accuracy of 0.15m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc. Most vegetation removal software ignores short vegetation, less than say 1m high. We have attempted to extend vegetation height measurement to short vegetation using local height texture. Typically most of a floodplain may be covered in such vegetation. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying. This obviates the need to calibrate a global floodplain friction coefficient. It is not clear at present whether the method is useful, but it is worth testing further. The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this will be discussed. A related problem of how best to merge historic river cross-section data with a LiDAR DTM will also be considered. LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc., as well as trees and hedges. A dominant points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes. However, the mesh generated may be useful in allowing a high resolution FE model to act as a benchmark for a more practical lower resolution model. A further problem discussed will be how best to exploit data redundancy due to the high resolution of the LiDAR compared to that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5m-wide embankment within a raster grid model with 15m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5m-wide ditch be represented? Again, this redundancy has been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
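The local-minima-plus-interpolation step described above can be sketched generically (this is an illustration, not TERRASCAN's or the EA's algorithm): take the minimum height in each window of a gridded DSM as a candidate ground point and interpolate between these minima to obtain a space-filling DTM, with the vegetation height map given by DSM minus DTM. Window size and terrain are assumed.

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(3)

# Synthetic 0.5 m resolution DSM: gentle terrain plus vegetation "spikes".
ny, nx = 200, 200
x, y = np.meshgrid(np.linspace(0, 1, nx), np.linspace(0, 1, ny))
ground = 2.0 * np.sin(3 * x) + 1.5 * y          # smooth floodplain terrain
dsm = ground + (rng.random((ny, nx)) < 0.3) * rng.uniform(0.5, 3, (ny, nx))

# Candidate ground surface: minimum height per window (window size assumed).
# Interpolating between block minima is a low pass operation, which is why
# sharp raised features (walls, embankments) can be misclassified as vegetation.
block = 10                                       # 5 m window at 0.5 m resolution
mins = dsm.reshape(ny // block, block, nx // block, block).min(axis=(1, 3))
dtm = zoom(mins, block, order=1)                 # bilinear space-filling surface

veg_height = dsm - dtm                           # vegetation height map
print(f"max vegetation height recovered: {veg_height.max():.2f} m")
```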
Abstract:
Current methods for estimating event-related potentials (ERPs) assume stationarity of the signal. Empirical Mode Decomposition (EMD) is a data-driven decomposition technique that does not assume stationarity. We evaluated an EMD-based method for estimating the ERP. On simulated data, EMD substantially reduced background EEG while retaining the ERP. EMD-denoised single trials also estimated the shape, amplitude, and latency of the ERP better than raw single trials. On experimental data, EMD-denoised trials revealed event-related differences between two conditions (conditions A and B) more effectively than trials low-pass filtered at 40 Hz. EMD also revealed event-related differences in both conditions that were clearer and of longer duration than those revealed by low-pass filtering at 40 Hz. Thus, EMD-based denoising is a promising data-driven, nonstationary method for estimating ERPs and should be investigated further.
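A sketch of this style of EMD-based ERP denoising on a simulated single trial, assuming the PyEMD package (installed as EMD-signal); the selection rule used here, dropping the highest-frequency IMFs and summing the rest, is a common heuristic and not necessarily the selection criterion evaluated in the paper.

```python
import numpy as np
from PyEMD import EMD   # pip install EMD-signal (assumed dependency)

rng = np.random.default_rng(4)
fs = 250                                    # sampling rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)

# Simulated single trial: a slow ERP-like wave buried in background "EEG".
erp = 5 * np.exp(-((t - 0.4) ** 2) / 0.005)       # P300-like component
trial = erp + rng.normal(0, 2, t.size) + np.sin(2 * np.pi * 10 * t)

# Decompose into intrinsic mode functions (IMFs); no stationarity assumed.
imfs = EMD().emd(trial, t)

# Heuristic denoising: drop the first (highest-frequency) IMFs, keep the rest.
denoised = imfs[2:].sum(axis=0)

err_raw = np.mean((trial - erp) ** 2)
err_emd = np.mean((denoised - erp) ** 2)
print(f"MSE raw: {err_raw:.2f}, MSE EMD-denoised: {err_emd:.2f}")
```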
Abstract:
The East China Sea is a hotspot for typhoon waves. A wave spectra assimilation model has been developed to predict typhoon waves more accurately and operationally. This is the first time that wave data from Taiwan have been used to predict typhoon waves along the mainland China coast. The two-dimensional spectra observed off Taiwan's northeast coast modify the wave field output by the SWAN model through an optimal interpolation (OI) scheme. Wind field correction is not involved, as it contributes less than a quarter of the correction achieved by assimilation of waves. The initialization issue for assimilation is discussed: a linear evolution law for noise in the wave field is derived from the SWAN governing equations, and a two-dimensional digital low-pass filter is used to obtain the initialized wave fields. The data assimilation model is optimized during typhoon Sinlaku. During typhoons Krosa and Morakot, data assimilation significantly improves the low frequency wave energy and wave propagation direction along the Taiwan coast. For the far-field region, the assimilation model shows the expected ability to improve typhoon wave forecasts as well, as data assimilation enhances the low frequency wave energy. The proportion of positive assimilation indexes is over 81% for all the periods of comparison. The paper also finds that the impact of data assimilation on the far-field region depends on the typhoon's stage of development and the swell propagation direction.
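The optimal interpolation update at the core of such an assimilation scheme can be written generically as x_a = x_b + K(y - H x_b) with gain K = B H^T (H B H^T + R)^(-1). The toy example below applies this update to a one-dimensional wave-energy field; the covariances and observation values are illustrative placeholders, not the paper's tuned settings.

```python
import numpy as np

# Toy OI update: 10 model grid points of wave energy, 2 observation sites.
n, m = 10, 2
x_b = np.linspace(1.0, 2.0, n)                  # background (model) wave energy
H = np.zeros((m, n)); H[0, 2] = H[1, 7] = 1.0   # obs operator: point sampling
y = np.array([1.6, 2.3])                        # observed values (illustrative)

# Background error covariance with spatial correlation; diagonal obs errors.
d = np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
B = 0.2 * np.exp(-d / 3.0)      # correlation length of 3 cells (assumed)
R = 0.05 * np.eye(m)

# OI gain and analysis: x_a = x_b + K (y - H x_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)
x_a = x_b + K @ (y - H @ x_b)
print(np.round(x_a, 3))
```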
Abstract:
Purpose: To identify the electromyographic fatigue threshold in the erector spinae muscle. Methods: Eight male volunteers aged 19 to 24 years participated in this study, in which surface electrodes were used together with a biological signals acquisition module (Lynx) with a sampling frequency of 1000 Hz, a gain of 1000, a 20 Hz high pass filter and a 500 Hz low pass filter. The test consisted of repeated isometric contractions of the erector spinae muscle in a 45° hip flexion posture, at 30%, 40%, 50% and 60% of the maximum voluntary isometric contraction. Results: A positive correlation of the RMS (root mean square) value as a function of time was found for most of the subjects at 40% (N = 6), 50% (N = 7) and 60% (N = 8) of the maximum voluntary isometric contraction. Conclusions: It was concluded that the proposed protocol provides evidence, through the electromyographic signal, of the development of fatigue in the erector spinae muscle at loads of 40%, 50% and 60% of the maximum voluntary isometric contraction. The protocol also allows the electromyographic fatigue threshold to be determined, with probable applicability to the diagnosis of this phenomenon during repetitive activities.
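The processing chain implied by this protocol (band-limited EMG, windowed RMS, and a linear fit of RMS against time whose positive slope indicates fatigue) can be sketched as follows on synthetic data. Note that the study's 500 Hz low pass cutoff coincides with the Nyquist frequency at 1000 Hz sampling, so the sketch clips the upper cutoff to 450 Hz; window length and contraction duration are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt
from scipy.stats import linregress

rng = np.random.default_rng(5)
fs = 1000                                  # sampling rate from the study (Hz)
t = np.arange(0, 30, 1 / fs)               # 30 s contraction (assumed duration)

# Synthetic EMG whose amplitude grows with fatigue (illustrative model).
emg = rng.normal(0, 1, t.size) * (1 + 0.02 * t)

# Band-limit roughly as in the study; 500 Hz equals Nyquist at fs = 1000,
# so the upper cutoff is clipped to 450 Hz to keep the filter valid.
b, a = butter(4, [20 / (fs / 2), 450 / (fs / 2)], btype="band")
emg_f = filtfilt(b, a, emg)

# RMS in 1 s windows, then a linear fit of RMS against time.
win = fs
rms = np.sqrt(np.mean(emg_f[: t.size // win * win].reshape(-1, win) ** 2, axis=1))
fit = linregress(np.arange(rms.size), rms)

# A significantly positive slope indicates developing fatigue at this load.
print(f"RMS slope: {fit.slope:.4f} per s, p = {fit.pvalue:.3g}")
```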
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Spatial selectivity for color has been investigated using invasive and non-invasive electrophysiological methods as well as psychophysical methods. In non-invasive visual cortical electrophysiology, this topic has been investigated with conventional methods of periodic stimulation and response extraction by simple averaging. New methods of stimulation (pseudorandom presentation) and non-invasive cortical response extraction (cross-correlation) have been developed but have not yet been used to investigate the chromatic spatial selectivity of cortical responses. This work aimed to introduce this new pseudorandom electrophysiological method to study chromatic spatial selectivity. Fourteen trichromats and 16 color-vision-deficient subjects with normal or corrected visual acuity were evaluated. Volunteers were assessed with the HMC anomaloscope and the Ishihara plate test to characterize color vision with respect to trichromacy. Red-green sinusoidal gratings subtending 8° of visual angle were used at eight spatial frequencies between 0.2 and 10 cpd. The stimulus was temporally modulated by a binary m-sequence in a pattern-reversal presentation mode. The VERIS system was used to extract the first and second slices of the second-order kernel (K2.1 and K2.2, respectively). After modelling the response as a function of spatial frequency with a difference-of-Gaussians function, the optimal spatial frequency and the band of frequencies with amplitudes above 3/4 of the function's maximum amplitude were extracted as indicators of the function's spatial selectivity. Chromatic visual acuity was also estimated by fitting a linear function to the amplitude data from the spatial frequency at the amplitude peak up to the highest spatial frequency tested. In trichromats, chromatic responses were found in K2.1 and K2.2 with different spatial selectivity: the negative components of K2.1 and K2.2 showed band-pass tuning, while the positive component of K2.1 showed low-pass tuning. The visual acuity estimated from all the components studied was close to that found by Mullen (1985) and Kelly (1983). Different cellular components may contribute to the generation of the pseudorandom VECP. This new method is a candidate to become an important tool for the non-invasive evaluation of human color vision.
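The difference-of-Gaussians modelling step can be sketched generically with scipy.optimize.curve_fit: fit R(f) = a_c exp(-(f/s_c)^2) - a_s exp(-(f/s_s)^2) to amplitude versus spatial frequency, then read off the peak frequency and the band above 3/4 of the maximum amplitude. The amplitude data and starting values below are invented for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def dog(f, a_c, s_c, a_s, s_s):
    """Difference-of-Gaussians spatial-frequency tuning curve."""
    return a_c * np.exp(-(f / s_c) ** 2) - a_s * np.exp(-(f / s_s) ** 2)

# Invented amplitude data at the study's spatial frequency range (cpd).
sf = np.array([0.2, 0.5, 1.0, 2.0, 4.0, 6.0, 8.0, 10.0])
amp = np.array([0.8, 1.1, 1.4, 1.2, 0.7, 0.4, 0.2, 0.1])

popt, _ = curve_fit(dog, sf, amp, p0=[1.5, 4.0, 0.7, 1.0], maxfev=10000)

# Peak spatial frequency and the band above 3/4 of the maximum amplitude.
f_grid = np.linspace(0.2, 10, 1000)
resp = dog(f_grid, *popt)
peak_f = f_grid[resp.argmax()]
band = f_grid[resp >= 0.75 * resp.max()]
print(f"optimal SF: {peak_f:.2f} cpd, 3/4-amplitude band: "
      f"{band.min():.2f}-{band.max():.2f} cpd")
```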
Abstract:
The purpose of the present study was to measure contrast sensitivity to equiluminant gratings using the steady-state visual evoked cortical potential (ssVECP) and psychophysics. Six healthy volunteers were evaluated with ssVECPs and psychophysics. The visual stimuli were red-green or blue-yellow horizontal sinusoidal gratings, 5° × 5°, 34.3 cd/m² mean luminance, presented at 6 Hz. Eight spatial frequencies from 0.2 to 8 cpd were used, each presented at 8 contrast levels. Contrast threshold was obtained by extrapolating second harmonic amplitude values to zero. Psychophysical contrast thresholds were measured using stimuli at 6 Hz and static presentation. Contrast sensitivity was calculated as the inverse function of the pooled cone contrast threshold. The ssVECP and both psychophysical contrast sensitivity functions (CSFs) were low-pass functions for red-green gratings. For electrophysiology, the highest contrast sensitivity values were found at 0.4 cpd (1.95 ± 0.15). The ssVECP CSF was similar to the dynamic psychophysical CSF, while the static CSF had higher values ranging from 0.4 to 6 cpd (P < 0.05, ANOVA). Blue-yellow chromatic functions showed no specific tuning shape; however, at high spatial frequencies the evoked potentials showed higher contrast sensitivity than the psychophysical methods (P < 0.05, ANOVA). Evoked potentials can be used reliably to evaluate chromatic red-green CSFs in agreement with psychophysical thresholds, particularly if the same temporal properties are applied to the stimulus. For the blue-yellow CSF, the correlation between electrophysiology and psychophysics was poor at high spatial frequencies, possibly due to a greater effect of chromatic aberration on this kind of stimulus.
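The threshold-by-extrapolation step can be sketched as a regression of second harmonic amplitude on log contrast, taking the threshold where the fitted line crosses zero amplitude; regressing on log contrast is one common realization of this procedure, and the amplitude values below are invented.

```python
import numpy as np
from scipy.stats import linregress

# Invented 2nd-harmonic amplitudes (µV) at 8 contrast levels (illustrative).
contrast = np.array([0.02, 0.04, 0.06, 0.08, 0.12, 0.16, 0.24, 0.32])
amplitude = np.array([0.1, 0.4, 0.7, 0.9, 1.4, 1.8, 2.6, 3.3])

# Fit amplitude against log contrast and extrapolate to zero amplitude.
log_c = np.log10(contrast)
fit = linregress(log_c, amplitude)
log_c_thresh = -fit.intercept / fit.slope          # amplitude = 0 crossing
threshold = 10 ** log_c_thresh

print(f"contrast threshold: {threshold:.4f}")
print(f"contrast sensitivity: {1 / threshold:.1f}")
```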
Abstract:
The aim of this work was to isolate and investigate subcortical and cortical lateral interactions involved in flicker perception. We quantified the perceived flicker strength (PFS) in the center of a test stimulus which was modulated simultaneously with a surround stimulus (50% Michelson contrast in both stimuli). Subjects were asked to adjust the modulation depth of a separate matching stimulus that was physically identical to the center of the test stimulus but without the surround. Using LCD goggles synchronized to the frame rate of a CRT screen, the center and surround could be presented monoptically or dichoptically. In the monoptic condition, center-surround interactions can have both subcortical and cortical origins. In the dichoptic condition, center-surround interactions cannot occur in the retina or the LGN, thereby isolating a cortical mechanism. Results revealed both a strong monoptic (subcortical plus cortical) lateral interaction and a weaker dichoptic (cortical) lateral interaction. Subtraction of the dichoptic from the monoptic data revealed the subcortical mechanism of the lateral interaction. While the modulation of the cortical PFS component showed low-pass temporal-frequency tuning, the modulation of the subcortical PFS component was maximal at 6 Hz. These findings are consistent with two separate temporal channels influencing the monoptic PFS, each with distinct lateral interaction strengths and frequency tuning characteristics. We conclude that both subcortical and cortical lateral interactions modulate flicker perception.
Abstract:
The main goals of this work are to propose an efficient and as automatic as possible algorithm to estimate what is covered by cloud and cloud-shadow regions in satellite images, and a reliability index, applied to the image beforehand, to measure the feasibility of estimating the regions covered by these atmospheric components using such an algorithm. The motivation comes from the problems caused by these elements, among them: they hinder the identification of image objects, impair urban and environmental monitoring, and compromise crucial stages of digital image processing for extracting information for the user, such as segmentation and classification. Through a hybrid approach, a method is proposed to decompose regions using a non-linear median low-pass filter in order to map the structure (homogeneous) regions, such as vegetation, and the texture (heterogeneous) regions, such as urban areas, in the image. In these areas, two restoration methods were applied, respectively: smoothing-based inpainting using the Discrete Cosine Transform (DCT), and model-based texture synthesis. It is important to note that the techniques were modified to handle images with the peculiar characteristics of satellite sensors, such as large dimensions and high spectral variation. The reliability index, in turn, analyzes the image containing the atmospheric interference and estimates how reliable the reconstruction will be, based on the percentage of cloud cover over the texture and structure regions. This index combines the results of supervised and unsupervised algorithms involving three metrics: mean global accuracy (EGM), structural similarity (SSIM), and mean pixel confidence (CM). Finally, the effectiveness of these methodologies was verified through a quantitative evaluation (provided by the index) and a qualitative one (through the resulting images), showing that the techniques can be applied to solve the problems that motivated this work.
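The structure/texture decomposition step can be sketched as follows: a non-linear median low-pass filter captures the structure (homogeneous) content, the residual captures the texture (heterogeneous) content, and a local residual-energy threshold separates the two region types before the matching restoration method is chosen. Window size and threshold are assumptions, not the values used in the work.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

rng = np.random.default_rng(6)

# Synthetic "satellite" band: smooth vegetation field + a textured urban patch.
img = np.tile(np.linspace(0.2, 0.5, 128), (128, 1))
img[40:90, 40:90] += rng.uniform(-0.3, 0.3, (50, 50))     # heterogeneous area

# Non-linear median low-pass filter separates structure from texture.
structure = median_filter(img, size=7)          # window size is an assumption
residual = img - structure

# Texture map: regions where local residual energy exceeds a threshold.
energy = uniform_filter(residual ** 2, size=7)
texture_mask = energy > 0.005                   # threshold is an assumption

print(f"fraction of image flagged as texture: {texture_mask.mean():.2%}")
# Cloud pixels falling in the structure region would then go to DCT-based
# inpainting, and those in the texture region to model-based texture synthesis.
```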