48 results for aliasing


Relevance:

10.00%

Publisher:

Abstract:

In this work, digital image processing assisted by Artificial Neural Networks (ANN) was applied to identify soybean varieties by means of seed shape and size. The following varieties were analysed: EMBRAPA 133, EMBRAPA 184, COODETEC 205, COODETEC 206, EMBRAPA 48, SYNGENTA 8350, FEPAGRO 10 and MONSOY 8000 RR, from the 2005/2006 harvest. The image processing consisted of the following steps: 1) Image acquisition: samples of each variety were photographed with a Nikon Coolpix 995 camera at a resolution of 3.34 megapixels; 2) Pre-processing: an anti-aliasing filter was applied to obtain a greyscale image; 3) Segmentation: the seed edges were detected (Prewitt method), these edges were dilated, and segments not needed for the analysis were removed; 4) Representation: each seed was represented as a 130x130 binary matrix; and 5) Recognition and interpretation: a multilayer feedforward neural network with three hidden layers was used. The network was trained with the backpropagation method. Validation of the trained ANN showed that the processing applied can be used to identify the varieties considered.
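
As an illustration of the segmentation steps listed above (not the authors' exact implementation), a rough Python sketch using SciPy might look as follows; the edge threshold, dilation iterations and minimum blob size are assumptions:

```python
import numpy as np
from scipy import ndimage

def seed_mask(gray, edge_thresh=0.1):
    """Rough sketch of the segmentation described above: Prewitt edge
    detection, edge dilation, and removal of segments not needed for
    the analysis.  Parameter values are illustrative assumptions."""
    gx = ndimage.prewitt(gray, axis=0)
    gy = ndimage.prewitt(gray, axis=1)
    mag = np.hypot(gx, gy)
    edges = mag > edge_thresh * mag.max()                      # Prewitt edge map
    edges = ndimage.binary_dilation(edges, iterations=2)       # thicken seed borders
    filled = ndimage.binary_fill_holes(edges)                  # solid seed blobs
    labels, n = ndimage.label(filled)
    sizes = ndimage.sum(filled, labels, range(1, n + 1))
    keep = np.isin(labels, 1 + np.flatnonzero(sizes > 200))    # drop small debris
    return keep  # each remaining blob could then be cropped to a 130x130 binary matrix
```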

Relevance:

10.00%

Publisher:

Abstract:

When contrast sensitivity functions to Cartesian and angular gratings were compared in previous studies, the peak sensitivity to angular stimuli was reported to be 0.21 log units higher. In experiments carried out to repeat this result, we used the same two-alternative forced-choice paradigm, but improved experimental control and precision by increasing contrast resolution from 8 to 12 bits, increasing the screen refresh rate from 30 Hz interlaced to 85 Hz non-interlaced, linearizing the voltage-luminance relation, modulating luminance at frequencies that minimize pixel aliasing, and improving control of the subject's exposure to the stimuli. The contrast sensitivity functions to Cartesian and angular gratings were similar in form and peak sensitivity (2.4 cycles per visual degree (c/deg) and 32 c/360º, respectively) to those reported in a previous study (3 c/deg and 32 c/360º, respectively), but peak sensitivity to angular stimuli was 0.13 log units lower than that to Cartesian stimuli. When the experiment was repeated, this time simulating the experimental control level used in the previous study, no difference between the peak sensitivity to Cartesian and angular stimuli was found. This result agrees with most current models that assume Cartesian filtering at the first visual processing stage. The discrepancy in the results is explained in part by differences in the degree of experimental control.

Relevance:

10.00%

Publisher:

Abstract:

The present study proposes to apply magnitude-squared coherence (MSC) to the somatosensory evoked potential for identifying the maximum driving response band. EEG signals, leads [Fpz'-Cz'] and [C3'-C4'], were collected from two groups of normal volunteers, stimulated at the rate of 4.91 (G1: 26 volunteers) and 5.13 Hz (G2: 18 volunteers). About 1400 stimuli were applied to the right tibial nerve at the motor threshold level. After applying the anti-aliasing filter, the signals were digitized and then further low-pass filtered (200 Hz, 6th order Butterworth and zero-phase). Based on the rejection of the null hypothesis of response absence (MSC(f) > 0.0060 with 500 epochs and the level of significance set at α = 0.05), the beta and gamma bands, 15-66 Hz, were identified as the maximum driving response band. Taking both leads together ("logical-OR detector", with a false-alarm rate of α = 0.05, and hence α = 0.0253 for each derivation), the detection exceeded 70% for all multiples of the stimulation frequency within this range. Similar performance was achieved for MSC of both leads but at 15, 25, 35, and 40 Hz. Moreover, the response was detected in [C3'-C4'] at 35.9 Hz and in [Fpz'-Cz'] at 46.2 Hz for all members of G2. Using the "logical-OR detector" procedure, the response was detected at the 7th multiple of the stimulation frequency for the series as a whole (considering both groups). Based on these findings, the MSC technique may be used for monitoring purposes.
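
For reference, the MSC estimator over stimulus-locked epochs and its detection threshold can be sketched as follows; the threshold 1 − α^(1/(M−1)) reproduces the 0.0060 value quoted above for M = 500 epochs and α = 0.05 (a minimal sketch, not the study's full processing chain):

```python
import numpy as np

def msc(epochs, n_fft=None):
    """Magnitude-squared coherence of M stimulus-locked epochs (rows):
    MSC(f) = |sum_i Y_i(f)|^2 / (M * sum_i |Y_i(f)|^2)."""
    Y = np.fft.rfft(epochs, n=n_fft, axis=-1)          # one spectrum per epoch
    M = epochs.shape[0]
    return np.abs(Y.sum(axis=0)) ** 2 / (M * (np.abs(Y) ** 2).sum(axis=0))

def msc_critical(M, alpha=0.05):
    """Critical value under the null hypothesis of response absence;
    msc_critical(500) is approximately 0.0060, as quoted above."""
    return 1.0 - alpha ** (1.0 / (M - 1))
```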

Relevance:

10.00%

Publisher:

Abstract:

High-quality motion blur is an increasingly important effect in interactive rendering. As asset quality and scene fidelity keep increasing, so does the demand for more detailed and realistic lens effects. Even in the context of offline rendering, however, motion blur is often approximated with a post-process. Post-processing algorithms for motion blur have made great strides in visual quality, producing plausible results while remaining interactive. Nevertheless, artifacts persist in the presence of, for example, overlapping motion, motion patterns at very large or very fine scales, and combined linear and rotational motion. Moreover, large-amplitude motion tends to cause obvious artifacts at object and image borders. This thesis presents a technique that resolves these artifacts with more robust sampling and a filtering scheme that samples along two directions, dynamically and automatically selected to produce the most accurate image possible. These modifications come at a minor performance cost compared with existing implementations: we can generate plausible, temporally coherent motion blur for several complex animation sequences in under 2 ms at a resolution of 1280 x 720. In addition, our filter is designed to integrate easily with post-process anti-aliasing filters.
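
Purely as a toy illustration of the two-direction gather described above (the thesis describes a real-time GPU post-process with depth- and velocity-aware sample weights, all omitted here), a CPU sketch might average samples along a tile's dominant velocity and the pixel's own velocity:

```python
import numpy as np

def motion_blur_two_dirs(color, velocity, tile_dominant, samples=8):
    """Toy sketch only: blur each pixel by averaging samples gathered along
    (1) the dominant velocity of its tile and (2) its own velocity.
    color: H x W x 3 float array; velocity, tile_dominant: H x W x 2 (pixels)."""
    H, W, _ = color.shape
    out = np.zeros_like(color)
    ts = np.linspace(-0.5, 0.5, samples)
    for y in range(H):
        for x in range(W):
            acc, n = np.zeros(3), 0
            for d in (tile_dominant[y, x], velocity[y, x]):
                for t in ts:
                    sx = int(np.clip(np.rint(x + t * d[0]), 0, W - 1))
                    sy = int(np.clip(np.rint(y + t * d[1]), 0, H - 1))
                    acc += color[sy, sx]
                    n += 1
            out[y, x] = acc / n
    return out
```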

Relevance:

10.00%

Publisher:

Abstract:

In a sigma-delta analog to digital (A/D) converter, the most computationally intensive block is the decimation filter and its hardware implementation may require millions of transistors. Since these converters are now targeted for a portable application, a hardware efficient design is an implicit requirement. In this effect, this paper presents a computationally efficient polyphase implementation of non-recursive cascaded integrator comb (CIC) decimators for Sigma-Delta Converters (SDCs). The SDCs are operating at high oversampling frequencies and hence require large sampling rate conversions. The filtering and rate reduction are performed in several stages to reduce hardware complexity and power dissipation. The CIC filters are widely adopted as the first stage of decimation due to its multiplier free structure. In this research, the performance of polyphase structure is compared with the CICs using recursive and non-recursive algorithms in terms of power, speed and area. This polyphase implementation offers high speed operation and low power consumption. The polyphase implementation of 4th order CIC filter with a decimation factor of '64' and input word length of '4-bits' offers about 70% and 37% of power saving compared to the corresponding recursive and non-recursive implementations respectively. The same polyphase CIC filter can operate about 7 times faster than the recursive and about 3.7 times faster than the non-recursive CIC filters.

As most of the sigma-delta ADC applications require decimation filters with linear phase characteristics, symmetric Finite Impulse Response (FIR) filters are widely used for implementation. But the number of FIR filter coefficients will be quite large for implementing a narrow band decimation filter. Implementing decimation filter in several stages reduces the total number of filter coefficients, and hence reduces the hardware complexity and power consumption [2]. The first stage of decimation filter can be implemented very efficiently using a cascade of integrators and comb filters which do not require multiplication or coefficient storage. The remaining filtering is performed either in single stage or in two stages with more complex FIR or infinite impulse response (IIR) filters according to the requirements. The amount of passband aliasing or imaging error can be brought within prescribed bounds by increasing the number of stages in the CIC filter. The width of the passband and the frequency characteristics outside the passband are severely limited. So, CIC filters are used to make the transition between high and low sampling rates. Conventional filters operating at low sampling rate are used to attain the required transition bandwidth and stopband attenuation. Several papers are available in literature that deals with different implementations of decimation filter architecture for sigma-delta ADCs. Hogenauer has described the design procedures for decimation and
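
As a rough illustration of the filtering described above (not of the paper's polyphase hardware architecture), an N-th order CIC decimator with rate change R can be modelled in software in its non-recursive form: the CIC transfer function ((1 - z^-R)/(1 - z^-1))^N is N cascaded length-R moving sums followed by R-fold downsampling. Word lengths, bit growth and polyphase scheduling are ignored in this sketch; N = 4, R = 64 matches the configuration quoted in the abstract.

```python
import numpy as np

def cic_decimate(x, R=64, N=4):
    """Software model of an N-th order CIC decimator with rate change R,
    implemented as N cascaded boxcar (moving-sum) filters of length R
    followed by decimation; the DC gain R**N is normalised out."""
    y = np.asarray(x, dtype=float)
    box = np.ones(R)
    for _ in range(N):
        y = np.convolve(y, box)        # one integrator+comb (moving-sum) stage
    return y[::R] / R**N               # decimate by R and normalise
```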

Relevance:

10.00%

Publisher:

Abstract:

The super-resolution problem is an inverse problem and refers to the process of producing a high-resolution (HR) image from one or more low-resolution (LR) observations. It involves upsampling the image, thereby increasing the maximum spatial frequency, and removing degradations that arise during image capture, namely aliasing and blurring. The work presented in this thesis is based on learning-based single image super-resolution. In learning-based super-resolution algorithms, a training set or database of available HR images is used to construct the HR image of an image captured with a LR camera. In the training set, images are stored as patches or as coefficients of feature representations such as the wavelet transform, DCT, etc. Single frame image super-resolution can be used in applications where a database of HR images is available. The advantage of this method is that by skilfully creating a database of suitable training images, one can improve the quality of the super-resolved image. A new super-resolution method based on the wavelet transform is developed, and it performs better than conventional wavelet transform based methods and standard interpolation methods. Super-resolution techniques based on a skewed anisotropic transform called the directionlet transform are developed to convert a low-resolution image of small size into a high-resolution image of large size. The super-resolution algorithm not only increases the size, but also reduces the degradations that occur during the process of capturing the image. This method outperforms the standard interpolation methods and the wavelet methods, both visually and in terms of SNR values. Artifacts like aliasing and ringing effects are also eliminated by this method. The super-resolution methods are implemented using both critically sampled and oversampled directionlets. The conventional directionlet transform is computationally complex, hence a lifting scheme is used for the implementation of directionlets. The new single image super-resolution method based on the lifting scheme reduces computational complexity and thereby reduces computation time. The quality of the super-resolved image depends on the type of wavelet basis used. A study is conducted to find the effect of different wavelets on the single image super-resolution method. Finally, this new method, implemented on grey images, is extended to colour images and noisy images.
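
As a naive baseline for the wavelet-domain idea (not the thesis's learning-based or directionlet method), one can treat the LR image as the approximation band of a one-level 2-D DWT and reconstruct with zero detail bands; a learning-based method would instead predict those detail coefficients from a training database. The wavelet choice and the factor-of-two intensity compensation are assumptions of this sketch.

```python
import numpy as np
import pywt

def wavelet_upscale(lr, wavelet="db2"):
    """Naive wavelet-domain 2x upscaling sketch: the LR image is taken as
    the approximation band and the detail bands are assumed to be zero.
    The factor 2 compensates the approximation-band scaling of a one-level
    orthogonal 2-D DWT; the output roughly doubles each dimension (exact
    size depends on the wavelet's filter length)."""
    cA = 2.0 * lr.astype(float)
    zeros = np.zeros_like(cA)
    return pywt.idwt2((cA, (zeros, zeros, zeros)), wavelet)
```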

Relevance:

10.00%

Publisher:

Abstract:

The study of variable stars is an important topic of modern astrophysics. Since the advent of powerful telescopes and high-resolution CCDs, variable star data have been accumulating on the order of petabytes. This huge amount of data requires many automated methods as well as human experts. This thesis is devoted to data analysis of variable star astronomical time series and hence belongs to the inter-disciplinary field of Astrostatistics. For an observer on Earth, stars whose apparent brightness changes over time are called variable stars. The variation in brightness may be regular (periodic), quasi-periodic (semi-periodic) or irregular (aperiodic), and is caused by various mechanisms. In some cases, the variation is due to internal thermo-nuclear processes, and such stars are generally known as intrinsic variables; in other cases, it is due to external processes, like eclipses or rotation, and the stars are known as extrinsic variables. Intrinsic variables can be further grouped into pulsating variables, eruptive variables and flare stars. Extrinsic variables are grouped into eclipsing binary stars and chromospheric stars. Pulsating variables can again be classified into Cepheid, RR Lyrae, RV Tauri, Delta Scuti, Mira, etc. The eruptive or cataclysmic variables are novae, supernovae, etc., which occur rarely and are not periodic phenomena. Most of the other variations are periodic in nature. Variable stars can be observed in many ways, such as photometry, spectrophotometry and spectroscopy. The sequence of photometric observations of a variable star produces time series data, which contain time, magnitude and error. The plot of a variable star's apparent magnitude against time is known as a light curve. If the time series data are folded on a period, the plot of apparent magnitude against phase is known as a phased light curve. The shape of the phased light curve is characteristic of each type of variable star. One way to identify and classify a variable star is for an expert to inspect its phased light curve visually. In recent years, automated algorithms have been used to classify groups of variable stars with the help of computers. Research on variable stars can be divided into different stages: observation, data reduction, data analysis, modeling and classification. Modeling of variable stars helps to determine short-term and long-term behaviour, to construct theoretical models (e.g. the Wilson-Devinney model for eclipsing binaries) and to derive stellar properties such as mass, radius, luminosity, temperature, internal and external structure, chemical composition and evolution. Classification requires the determination of basic parameters such as period, amplitude and phase, as well as some other derived parameters. Of these, the period is the most important, since a wrong period leads to sparse phased light curves and misleading information. Time series analysis is a method of applying mathematical and statistical tests to data, to quantify the variation, understand the nature of time-varying phenomena, gain physical understanding of the system and predict its future behavior. Astronomical time series usually suffer from unevenly spaced time instants, varying error conditions and the possibility of big gaps. For ground-based observations this is due to daily varying daylight and weather conditions, while observations from space may suffer from the impact of cosmic-ray particles.

Many large-scale astronomical surveys such as MACHO, OGLE, EROS, ROTSE, PLANET, Hipparcos, MISAO, NSVS, ASAS, Pan-STARRS, Kepler, ESA, Gaia, LSST and CRTS provide variable star time series data, even though their primary intention is not variable star observation. The Center for Astrostatistics, Pennsylvania State University, was established to help the astronomical community with statistical tools for harvesting and analysing archival data. Most of these surveys release their data to the public for further analysis. There exist many period search algorithms for astronomical time series analysis, which can be classified into parametric (assuming some underlying distribution for the data) and non-parametric (assuming no statistical model, such as a Gaussian) methods. Many of the parametric methods are based on variations of discrete Fourier transforms, like the Generalised Lomb-Scargle periodogram (GLSP) by Zechmeister (2009) and the Significant Spectrum (SigSpec) method by Reegen (2007). Non-parametric methods include Phase Dispersion Minimisation (PDM) by Stellingwerf (1978) and the cubic spline method by Akerlof (1994). Even though most of these methods can be automated, none of them fully recovers the true periods. Wrong period detection can have several causes, such as power leakage to other frequencies, which is due to the finite total interval, finite sampling interval and finite amount of data. Another problem is aliasing, which is due to the influence of regular sampling. Spurious periods also appear due to long gaps, and power flow to harmonic frequencies is an inherent problem of Fourier methods. Hence obtaining the exact period of a variable star from its time series data is still a difficult problem for huge databases subjected to automation. As Matthew Templeton, AAVSO, states, "Variable star data analysis is not always straightforward; large-scale, automated analysis design is non-trivial". Derekas et al. (2007) and Deb et al. (2010) state that "The processing of huge amount of data in these databases is quite challenging, even when looking at seemingly small issues such as period determination and classification". It will be beneficial for the variable star astronomical community if basic parameters, such as period, amplitude and phase, are obtained more accurately when huge time series databases are subjected to automation. In the present thesis work, the theories of four popular period search methods are studied, the strengths and weaknesses of these methods are evaluated by applying them to two survey databases, and finally a modified form of the cubic spline method is introduced to confirm the exact period of a variable star. For the classification of newly discovered variable stars and their entry in the "General Catalogue of Variable Stars" or other databases like the "Variable Star Index", the characteristics of the variability have to be quantified in terms of variable star parameters.
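
As a concrete example of one of the non-parametric period search methods mentioned above, a simplified Phase Dispersion Minimisation statistic can be sketched as follows; the bin count and the handling of sparsely populated bins are simplifications relative to Stellingwerf (1978):

```python
import numpy as np

def pdm_theta(time, mag, period, n_bins=10):
    """Simplified PDM statistic: fold the light curve on a trial period,
    bin it in phase, and compare the pooled within-bin variance to the
    overall variance.  The true period minimises theta; scanning a grid
    of trial periods gives a PDM periodogram."""
    time, mag = np.asarray(time, float), np.asarray(mag, float)
    phase = (time / period) % 1.0                       # phased light curve
    overall_var = np.var(mag, ddof=1)
    bins = np.minimum((phase * n_bins).astype(int), n_bins - 1)
    num, den = 0.0, 0
    for b in range(n_bins):
        m = mag[bins == b]
        if m.size > 1:
            num += (m.size - 1) * np.var(m, ddof=1)
            den += m.size - 1
    return (num / den) / overall_var

# e.g.  best_period = min(trial_periods, key=lambda p: pdm_theta(t, mag, p))
```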

Relevance:

10.00%

Publisher:

Abstract:

Nonregular two-level fractional factorial designs are designs which cannot be specified in terms of a set of defining contrasts. The aliasing properties of nonregular designs can be compared by using a generalisation of the minimum aberration criterion called minimum G2-aberration. Until now, the only nontrivial designs that are known to have minimum G2-aberration are designs for n runs and m ≥ n − 5 factors. In this paper, a number of construction results are presented which allow minimum G2-aberration designs to be found for many of the cases with n = 16, 24, 32, 48, 64 and 96 runs and m ≥ n/2 − 2 factors.
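
For concreteness, the generalised wordlength pattern that the minimum G2-aberration criterion sequentially minimises (following Tang and Deng's J-characteristics) can be computed by brute force for small designs; this sketch is exponential in the number of factors and is meant only to illustrate the definition:

```python
import numpy as np
from itertools import combinations

def g2_wordlength_pattern(D):
    """Generalised wordlength pattern (A_1, ..., A_m) of a two-level design.

    D is an n x m array with entries +/-1.  For each subset s of columns,
    J(s) = |sum over runs of the product of those columns|, and
    A_k = sum over |s| = k of (J(s) / n)**2.  Minimum G2-aberration designs
    sequentially minimise A_1, A_2, A_3, ...  (Tang and Deng, 1999)."""
    n, m = D.shape
    A = np.zeros(m)
    for k in range(1, m + 1):
        for s in combinations(range(m), k):
            J = abs(D[:, s].prod(axis=1).sum())
            A[k - 1] += (J / n) ** 2
    return A
```

For the regular 2^(3-1) design with defining relation I = ABC, this returns (0, 0, 1), as expected.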

Relevance:

10.00%

Publisher:

Abstract:

By comparing annual and seasonal changes in precipitation over land and ocean since 1950 simulated by the CMIP5 (Coupled Model Intercomparison Project, phase 5) climate models in which natural and anthropogenic forcings have been included, we find that clear global-scale and regional-scale changes due to human influence are expected to have occurred over both land and ocean. These include moistening over northern high-latitude land and ocean throughout all seasons and over the northern subtropical oceans during boreal winter. However, we show that this signal of human influence is less distinct when considered over the relatively small area of land for which there are adequate observations to make assessments of multi-decadal scale trends. These results imply that extensive and significant changes in precipitation over the land and ocean may have already happened, even though inadequacies in observations in some parts of the world make it difficult to identify conclusively such a human fingerprint on the global water cycle. In some regions and seasons, observed trends appear to have been inflated by aliasing of different kinds of variability as a result of subsampling by the sparse and changing observational coverage, underscoring the difficulties of interpreting the apparent magnitude of observed changes in precipitation.

Relevance:

10.00%

Publisher:

Abstract:

Three methods for intercalibrating humidity sounding channels are compared to assess their merits and demerits. The methods use the following: (1) natural targets (Antarctica and tropical oceans), (2) zonal average brightness temperatures, and (3) simultaneous nadir overpasses (SNOs). Advanced Microwave Sounding Unit-B instruments onboard the polar-orbiting NOAA 15 and NOAA 16 satellites are used as examples. Antarctica is shown to be useful for identifying some of the instrument problems, but less promising for intercalibrating humidity sounders owing to the large diurnal variations there. Owing to smaller diurnal cycles over tropical oceans, these are found to be a good target for estimating intersatellite biases. Estimated biases are more resistant to diurnal differences when data from ascending and descending passes are combined. Biases estimated from zonal-averaged brightness temperatures show large seasonal and latitudinal dependence, which could have resulted from diurnal cycle aliasing and the scene-radiance dependence of the biases. This method may not be the best for channels with significant surface contributions. We have also tested the impact of clouds on the estimated biases and found that it is not significant, at least for the tropical ocean estimates. Biases estimated from SNOs are the least influenced by diurnal cycle aliasing and cloud impacts. However, SNOs cover only a relatively small part of the dynamic range of observed brightness temperatures.

Relevance:

10.00%

Publisher:

Abstract:

Chambers (1998) explores the interaction between long memory and aggregation. For continuous-time processes, he takes the aliasing effect into account when studying temporal aggregation; for discrete-time processes, however, he appears not to do so. This note gives the spectral density function of temporally aggregated long memory discrete-time processes in light of the aliasing effect. The results differ from those in Chambers (1998) and are supported by a small simulation exercise. As a result, the order of integration may not be invariant to temporal aggregation, specifically when d is negative and the aggregation is of the stock type.
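
For reference, the textbook folding (aliasing) relations that underlie such results are given below, for skip (stock) sampling of every m-th observation and for flow aggregation by non-overlapping m-period sums; the note's exact expressions may differ.

```latex
% Stock (skip) sampling of every m-th observation:
f_Y^{\mathrm{stock}}(\lambda) = \frac{1}{m}\sum_{k=0}^{m-1}
  f_X\!\left(\frac{\lambda+2\pi k}{m}\right),
\qquad \lambda\in[-\pi,\pi].

% Flow aggregation (non-overlapping m-period sums): the squared
% Dirichlet-type kernel of the m-term sum enters before folding:
f_Y^{\mathrm{flow}}(\lambda) = \frac{1}{m}\sum_{k=0}^{m-1}
  \left|\frac{\sin\!\big((\lambda+2\pi k)/2\big)}
             {\sin\!\big((\lambda+2\pi k)/(2m)\big)}\right|^{2}
  f_X\!\left(\frac{\lambda+2\pi k}{m}\right).
```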

Relevance:

10.00%

Publisher:

Abstract:

Most controller synthesis and tuning methods, as well as process optimisation and analysis methods, require a model of the process under study. Process identification is therefore an area of great importance for engineering in general, since it allows empirical models of the processes we deal with to be obtained in a simple and fast way. Even though they do not rely on laws of nature, empirical models are useful because they describe the specific behaviour of a given process. With the rapid development of digital computers and their wide application in control systems in general, the identification of discrete-time models has been extensively developed and employed; discrete-time models, however, are not as easy to interpret as continuous-time models, since most of the systems we deal with have a continuous representation. Identification of continuous-time models is therefore useful in that it yields models that are simpler to understand. This dissertation studies the identification of continuous-time linear models from discretely sampled data. The method studied is the so-called method of Poisson moments. This method is based on a linear transformation which, when applied to a linear ordinary differential equation, turns it into an algebraic equation, thereby avoiding the need to compute derivatives of the input and output signals. In addition to a detailed analysis of this method, in which the effect of each parameter of the Poisson method on its performance is demonstrated, a study was also carried out on the problems arising from the discretisation of continuous signals, such as the aliasing effect caused by excessively large sampling periods and the numerical problems of identifying discrete models from data with very small sampling periods, in order to highlight the advantages of continuous-time over discrete-time identification. A method to compensate for the presence of offsets in the input and output signals was also studied, something not previously done for the method of Poisson moments. This work also proves the equivalence between the method of Poisson moments and a methodology presented by Rolf Johansson in a 1994 paper. The final part of this work presents methods to compensate for modelling errors due to noise and unmeasured disturbances in the data used for identification. These methods allow the method of Poisson moments to compete with the discrete-time identification methods normally employed, such as ARMAX and Box-Jenkins.
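
A minimal sketch of the state-variable-filter idea behind the Poisson moment approach is shown below for a first-order model dy/dt = -a*y + b*u and a single filter stage 1/(s + lambda); the dissertation uses a full chain of Poisson filters plus offset and error compensation, and the pole value and zero initial conditions here are assumptions.

```python
import numpy as np
from scipy import signal

def identify_first_order(t, u, y, lam=2.0):
    """Estimate a, b in  dy/dt = -a*y + b*u  from equally sampled data
    without differentiating the signals: filter u and y through
    F(s) = 1/(s + lam) and use  F(s)*s*Y(s) = Y(s) - lam*F(s)*Y(s)
    (valid for zero initial conditions) to form a linear regression."""
    f = ([1.0], [1.0, lam])                    # F(s) = 1 / (s + lam)
    _, yf, _ = signal.lsim(f, y, t)            # filtered output
    _, uf, _ = signal.lsim(f, u, t)            # filtered input
    lhs = y - lam * yf                         # filtered derivative of y
    phi = np.column_stack([-yf, uf])           # regressors for [a, b]
    a, b = np.linalg.lstsq(phi, lhs, rcond=None)[0]
    return a, b
```

For data generated from, say, dy/dt = -0.5*y + 2*u with sampling fast enough to avoid aliasing, the least-squares fit should recover a ≈ 0.5 and b ≈ 2.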

Relevance:

10.00%

Publisher:

Abstract:

This paper derives the spectral density function of aggregated long memory processes in light of the aliasing effect. The results differ from previous analyses in the literature, and a small simulation exercise provides evidence in our favour. The main result indicates that flow aggregates from long memory processes should be less biased than stock aggregates, although both retain the degree of long memory. This result is illustrated with the daily US Dollar/French Franc exchange rate series.

Relevance:

10.00%

Publisher:

Abstract:

The main objective of this work is to assess the use of satellite altimetry data to map the gravitational potential surface (geoid) at sea. This assessment is made by comparing the resolution and precision of altimetry data processed on an equipotential surface (the sea) with data obtained from conventional surveys. Once the equipotential surface has been processed, quantities such as the free-air anomaly and the vertical deflection can be computed. The altimetric ("sea height") data used in this work were collected by the GEOSAT satellite. This satellite tracked several oceanic areas of the globe, completing 44 cycles in two years. Some researchers have used the mean "sea height" values from this satellite to improve the precision and resolution of the records. These processed values are available from NOAA (National Oceanic and Atmospheric Administration) and were provided to UFPa for use in this thesis. The marine gravimetry data used in this work were obtained from the "Equatorial Atlantic" survey (EQUANT I and II), the result of joint research among several institutions with the scientific objective of understanding the behaviour of the Brazilian equatorial margin. To compare and integrate the two types of data obtained from different sources (satellite and ship measurements), one could obtain the vertical acceleration on an equipotential surface from an algebraic treatment of the data collected by the GEOSAT altimetric tracking, or, alternatively, one could transform the marine gravimetry data onto an equivalent equipotential surface. Because of differences in line spacing between the surveys, that is, the satellite tracks are widely spaced compared with those of the marine survey, we chose to transform the ship gravimetric data onto an equipotential surface. In this transformation, several factors were considered, such as aliasing effects, noise levels in the ship surveys, reduction to the geoid (free-air correction), and computational errors during the transformations. With the partial suppression of these effects (aliasing in particular), a strong correlation was found between the two data sets, with a satisfactory level of coherence for wavelengths of 11 km and longer. Comparing this result with the resolution of the GEOSAT satellite, widely studied by other researchers, we emphasise that the resolution of the two-year mean GEOSAT values does in fact approach the resolution of the EQUANT I and EQUANT II surveys.

Relevance:

10.00%

Publisher:

Abstract:

In this work we introduce an analytical approach to the frequency warping transform. Criteria for the design of operators based on arbitrary warping maps are provided, and an algorithm carrying out a fast computation is defined. Such operators can be used to shape the tiling of the time-frequency plane in a flexible way. Moreover, they are designed to be inverted by the application of their adjoint operator. According to the proposed mathematical model, the frequency warping transform is computed by considering two additive operators: the first one represents its nonuniform Fourier transform approximation and the second one suppresses aliasing. The first operator is known to be analytically characterized and fast computable by various interpolation approaches. A factorization of the second operator is found for arbitrarily shaped, non-smooth warping maps. By properly truncating the operators involved in the factorization, the computation turns out to be fast without compromising accuracy.
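
As a sketch of the first of the two operators mentioned above (the nonuniform Fourier approximation only; the aliasing-suppression operator is omitted), one can evaluate a DFT at warped frequencies. The sqrt(theta') weighting and the map convention are assumptions of this sketch, not the paper's construction.

```python
import numpy as np

def warped_dft_matrix(theta, N, K=None):
    """Nonuniform-DFT sketch of a frequency-warping analysis operator:
    row k of W oscillates at the warped frequency theta(2*pi*k/K), with a
    sqrt(theta') weight as a rough energy normalisation (assumes a smooth,
    increasing warping map from [0, 2*pi) to [0, 2*pi))."""
    K = K or N
    w = 2 * np.pi * np.arange(K) / K           # uniform output bins
    wt = theta(w)                              # warped analysis frequencies
    dtheta = np.gradient(wt, w)                # numerical derivative theta'(w)
    n = np.arange(N)
    W = (np.sqrt(np.maximum(dtheta, 0))[:, None]
         * np.exp(-1j * np.outer(wt, n)) / np.sqrt(N))
    return W                                   # apply as  X_warped = W @ x
```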