906 results for discrete cosine transform


Relevance: 100.00%

Abstract:

2-D Discrete Cosine Transform (DCT) is widely used as the core of digital image and video compression. In this paper, we present a novel DCT architecture that allows aggressive voltage scaling by exploiting the fact that not all intermediate computations are equally important in a DCT system for obtaining "good" image quality, with a Peak Signal-to-Noise Ratio (PSNR) > 30 dB. This observation has led us to propose a DCT architecture in which the signal paths that contribute less to PSNR improvement are designed to be longer than the paths that contribute more. It should also be noted that robustness with respect to parameter variations and low-power operation typically impose contradictory requirements on architecture design. However, the proposed architecture lends itself to aggressive voltage scaling for low-power dissipation even under process parameter variations. Under a scaled supply voltage and/or variations in process parameters, any possible delay errors appear only in the long paths, which contribute less to PSNR improvement, providing a large improvement in power dissipation at the cost of a small PSNR degradation. Results show that even under large process variation and supply voltage scaling (0.8 V), image quality degrades only gradually while considerable power savings (62.8%) are obtained for the proposed architecture compared to existing implementations in 70 nm process technology.
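
As a minimal numerical illustration of the observation the architecture exploits (not the paper's hardware design), the following Python sketch corrupts a single DCT coefficient of a smooth, image-like block and compares the PSNR cost of an error on a low-frequency path against one on a high-frequency path:

```python
import numpy as np
from scipy.fft import dctn, idctn

# For natural, mostly smooth image content, errors confined to high-frequency
# DCT coefficients cost little PSNR, because those coefficients carry little
# of the signal energy.
x, y = np.meshgrid(np.arange(8), np.arange(8))
block = 16.0 * x + 8.0 * y + 4.0 * np.sin(x * y)   # smooth, image-like 8x8 block
coeffs = dctn(block, norm='ortho')

def psnr(ref, test, peak=255.0):
    return 10 * np.log10(peak ** 2 / np.mean((ref - test) ** 2))

# Simulate a failed computation by wiping out one coefficient entirely.
for (i, j), label in [((7, 7), 'high-frequency path'), ((0, 1), 'low-frequency path')]:
    c = coeffs.copy()
    c[i, j] = 0.0                                  # "delay error" on this output
    print('%s: PSNR = %.1f dB' % (label, psnr(block, idctn(c, norm='ortho'))))
```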

Relevance: 100.00%

Abstract:

Studies have been carried out to recognize individuals from a frontal view using their gait patterns. In previous work, gait sequences were captured using either single or stereo RGB camera systems or the Kinect 1.0 camera system. In this research, we present a new frontal-view gait recognition method using a laser-based Time-of-Flight (ToF) camera. In addition to the new gait dataset, other contributions include enhancements to silhouette segmentation, gait cycle estimation and gait image representations. We propose four new gait image representations, namely the Gait Depth Energy Image (GDE), Partial GDE (PGDE), Discrete Cosine Transform GDE (DGDE) and Partial DGDE (PDGDE). The experimental results show that all the proposed gait image representations produce better accuracy than previous methods. In addition, we have developed Fusion GDEs (FGDEs), which achieve better overall accuracy and outperform the previous methods.
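
The abstract does not give construction details, but a GDE-style representation can be sketched by analogy with the classic Gait Energy Image: average the aligned depth silhouettes over one gait cycle and, for the DGDE variant, keep a few low-frequency 2-D DCT coefficients as the descriptor. The function names and sizes below are illustrative assumptions:

```python
import numpy as np
from scipy.fft import dctn

def gde(depth_silhouettes):
    """Gait Depth Energy Image: per-pixel average over one gait cycle.

    depth_silhouettes: array (T, H, W) of aligned frontal depth frames.
    """
    return depth_silhouettes.mean(axis=0)

def dgde(gde_image, keep=16):
    """DCT-GDE descriptor: low-frequency 2-D DCT coefficients of the GDE."""
    return dctn(gde_image, norm='ortho')[:keep, :keep].ravel()

cycle = np.random.rand(30, 64, 44)    # stand-in for one segmented gait cycle
descriptor = dgde(gde(cycle))
print(descriptor.shape)               # (256,)
```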

Relevance: 100.00%

Abstract:

Master's final project submitted to obtain the degree of Master in Electronics and Telecommunications Engineering.

Relevance: 100.00%

Abstract:

Parkinson's disease (PD) is a degenerative illness whose cardinal symptoms include rigidity, tremor, and slowness of movement. In addition to its widely recognized effects, PD can have a profound effect on speech and voice. The speech symptoms most commonly demonstrated by patients with PD are reduced vocal loudness, monopitch, disruptions of voice quality, and an abnormally fast rate of speech; this cluster of speech symptoms is often termed hypokinetic dysarthria. The disease can be difficult to diagnose accurately, especially in its early stages; for this reason, automatic techniques based on Artificial Intelligence can increase diagnostic accuracy and help doctors make better decisions. The aim of this thesis is to predict PD from audio files collected from various patients. The audio files are preprocessed to obtain features; the preprocessed data contain 23 attributes and 195 instances, with on average six voice recordings per person. The number of instances is reduced using a data compression technique, the Discrete Cosine Transform (DCT). After compression, attribute selection is performed using several built-in WEKA methods such as ChiSquared, GainRatio and InfoGain; the attributes identified as important are then evaluated one by one using stepwise regression. The selected attributes are processed in WEKA using a cost-sensitive classifier with various algorithms such as Multi-Pass LVQ, Logistic Model Tree (LMT) and K-Star. The classification results average 80%; using these features, approximately 95% classification accuracy for PD is achieved. This shows that, using the audio dataset, PD can be predicted with a high level of accuracy.
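
The abstract leaves open exactly how the DCT reduces the number of instances; one plausible reading, sketched below under that assumption, is to compress each subject's set of recordings along the recording axis and keep only the leading coefficient, yielding one instance per subject. The shapes and names here are hypothetical, not the thesis's implementation:

```python
import numpy as np
from scipy.fft import dct

def compress_subject(recordings):
    """Collapse one subject's recordings into a single instance.

    recordings: array (n_recordings, n_attributes) for one subject.
    The 1-D DCT runs along the recording axis; keeping only the DC
    coefficient retains a scaled per-attribute mean over the recordings.
    """
    coeffs = dct(recordings, type=2, norm='ortho', axis=0)
    return coeffs[0]

subject = np.random.rand(6, 22)       # 6 recordings, 22 voice attributes (illustrative)
instance = compress_subject(subject)  # one 22-dimensional instance
print(instance.shape)
```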

Relevance: 100.00%

Abstract:

Image compression consists of representing an image with a small amount of data without losing visual quality. Compression is important when large images are used, for example satellite images. Full-color digital images typically use 24 bits to specify the color of each pixel, with 8 bits for each of the primary components: red, green and blue (RGB). Compressing an image with three or more bands (multispectral) is fundamental to reducing transmission, processing and storage time. Many applications depend on compressed image data: medical imaging, satellite imaging, sensing, etc. In this work a new method for compressing color images is proposed. The method is based on a measure of the information in each band. The technique, called Self-Adaptive Compression (SAC), compresses each band of the image with a different threshold in order to preserve information with a better result: SAC applies heavy compression to highly redundant bands, that is, those carrying less information, and soft compression to bands carrying a larger amount of information. Two image transforms are used in this technique: the Discrete Cosine Transform (DCT) and Principal Component Analysis (PCA). The first step converts the data into decorrelated bands using PCA; the DCT is then applied to each band. Loss is introduced when a threshold discards coefficients. This threshold is computed from two elements: the PCA result and a user parameter that defines the compression rate. The system produces three different thresholds, one for each band of the image, proportional to the amount of information in that band. Image reconstruction applies the inverse DCT and inverse PCA. SAC was compared with the JPEG (Joint Photographic Experts Group) standard and with YIQ compression, obtaining better results in MSE (Mean Squared Error). Tests show that SAC gives better quality at high compression rates, with two advantages: (a) being adaptive, it is sensitive to the image type, that is, it presents good results for diverse kinds of images (synthetic, landscapes, people, etc.); and (b) it needs only one user parameter, so very little human intervention is required.
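
A minimal sketch of the SAC round trip as described, assuming a variance-based information share for the per-band thresholds (the exact formula combining the PCA result and the user parameter is the paper's, not this code's):

```python
import numpy as np
from scipy.fft import dctn, idctn

def sac_roundtrip(img, user_tax=0.02):
    """PCA-decorrelate the bands, DCT each one, threshold, reconstruct.

    img: float array (H, W, 3). user_tax plays the role of the single
    user parameter; the information share per band is assumed to be the
    normalized PCA eigenvalue.
    """
    h, w, _ = img.shape
    pixels = img.reshape(-1, 3)
    mean = pixels.mean(axis=0)
    eigval, eigvec = np.linalg.eigh(np.cov((pixels - mean).T))
    share = eigval / eigval.sum()                  # information share per band
    bands = ((pixels - mean) @ eigvec).reshape(h, w, 3)
    out = np.empty_like(bands)
    for b in range(3):
        coeffs = dctn(bands[..., b], norm='ortho')
        thr = user_tax * np.abs(coeffs).max() * (1.0 - share[b])
        coeffs[np.abs(coeffs) < thr] = 0.0         # heavier loss in redundant bands
        out[..., b] = idctn(coeffs, norm='ortho')
    return (out.reshape(-1, 3) @ eigvec.T + mean).reshape(h, w, 3)

img = np.random.rand(64, 64, 3) * 255
print(np.mean((sac_roundtrip(img) - img) ** 2))    # MSE of the round trip
```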

Relevance: 100.00%

Abstract:

Digital image processing is a field that demands great processing capacity. It therefore becomes relevant to implement software that distributes the processing over several nodes, divided among computers belonging to the same network. This work specifically discusses distributed algorithms for compressing and expanding images using the discrete cosine transform. The results show that the processing-time savings obtained by the parallel algorithms over their sequential equivalents depend on the resolution of the image and the complexity of the computation involved: efficiency is greater the more the processing time dominates the time spent on communication between the network nodes.
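
A sketch of the distribution strategy, using process-level parallelism on one machine as a stand-in for network nodes: the image is split into stripes and each worker computes the blockwise 8x8 DCT of its stripe independently, so communication is limited to shipping stripes and results:

```python
import numpy as np
from multiprocessing import Pool
from scipy.fft import dctn

def dct_stripe(stripe):
    """Blockwise 8x8 DCT of one horizontal image stripe."""
    h, w = stripe.shape
    out = np.empty_like(stripe)
    for i in range(0, h, 8):
        for j in range(0, w, 8):
            out[i:i+8, j:j+8] = dctn(stripe[i:i+8, j:j+8], norm='ortho')
    return out

if __name__ == '__main__':
    image = np.random.rand(512, 512)
    stripes = np.split(image, 4, axis=0)   # stripe heights stay multiples of 8
    with Pool(4) as pool:                  # 4 workers stand in for 4 nodes
        result = np.vstack(pool.map(dct_stripe, stripes))
    print(result.shape)
```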

Relevance: 100.00%

Abstract:

This dissertation proposes a supervised algorithm for classifying remote sensing images, composed of three stages: cloud removal or smoothing, segmentation, and classification. The cloud removal method uses homomorphic filtering to treat the obstructions caused by thin clouds, and the Inpainting method to remove or smooth the presence of shadows and dense clouds. For the segmentation and classification stages, a method based on the AC energy of the Discrete Cosine Transform (DCT) coefficients is proposed. The classification is supervised. To evaluate the algorithm, a set of 14 images captured by several sensors was used, 12 of which contain some kind of obstruction. The cloud and shadow removal or smoothing stage is evaluated using the peak signal-to-noise ratio (PSNR) and the Kappa coefficient; in this phase, several high-pass filters were compared in order to choose the most efficient one. Image segmentation is evaluated by the edge border coincidence (EBC) method, and classification is evaluated by relative entropy and mean squared error (MSE). As important as the metrics, the resulting images are presented so as to allow subjective evaluation by visual comparison. The results show the efficiency of the proposed algorithm, especially when compared to the Spring software distributed by the Brazilian National Institute for Space Research (INPE).
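
The AC-energy feature named here is straightforward to sketch: for each 8x8 block, sum the energy of all DCT coefficients except the DC term. Smooth regions score low and textured regions score high, which is what makes the measure usable for segmentation. Block size and normalization below are assumptions:

```python
import numpy as np
from scipy.fft import dctn

def ac_energy_map(image, bs=8):
    """Per-block AC energy: total DCT energy minus the DC term's energy."""
    h, w = image.shape
    emap = np.zeros((h // bs, w // bs))
    for i in range(0, h - h % bs, bs):
        for j in range(0, w - w % bs, bs):
            c = dctn(image[i:i+bs, j:j+bs], norm='ortho')
            emap[i // bs, j // bs] = (c ** 2).sum() - c[0, 0] ** 2
    return emap

image = np.random.rand(256, 256)
print(ac_energy_map(image).shape)    # (32, 32) feature map
```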

Relevance: 100.00%

Abstract:

The main objectives of this work are to propose an efficient and maximally automatic algorithm for estimating what is covered by cloud and shadow regions in satellite images, and a reliability index, applied to the image beforehand, that measures the feasibility of estimating the regions covered by these atmospheric components with such an algorithm. The motivation comes from the problems these elements cause: they hinder the identification of image objects, harm urban and environmental monitoring, and degrade crucial stages of digital image processing used to extract information for the user, such as segmentation and classification. Through a hybrid approach, a method is proposed that decomposes regions using a non-linear low-pass median filter in order to map the structure (homogeneous) regions, such as vegetation, and the texture (heterogeneous) regions, such as urban areas, in the image. To these areas were applied, respectively, the restoration methods of smoothing-based Inpainting using the Discrete Cosine Transform (DCT) and model-based Texture Synthesis. It should be noted that the techniques were modified to handle images with the peculiar characteristics of satellite sensors, such as large dimensions and high spectral variation. The reliability index, in turn, analyzes the image containing the atmospheric interference and estimates how reliable the reconstruction will be, based on the percentage of cloud cover over the texture and structure regions. The index combines the results of supervised and unsupervised algorithms involving three metrics: Mean Global Accuracy (EGM), Structural Similarity Measure (SSIM) and Mean Pixel Confidence (CM). Finally, the effectiveness of these methodologies was verified through a quantitative evaluation (provided by the index) and a qualitative one (through the resulting images), showing that the techniques can be applied to solve the problems that motivated this work.
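
A sketch of smoothing-based DCT inpainting in the spirit of the method named in the abstract (the dissertation's modified version differs): missing pixels are filled by iteratively low-pass filtering the estimate in the DCT domain while re-imposing the known pixels at every step:

```python
import numpy as np
from scipy.fft import dctn, idctn

def dct_inpaint(image, mask, iters=300):
    """Fill the pixels where mask is False by DCT-domain smoothing.

    mask: True where the pixel is known. Returns the filled image.
    """
    h, w = image.shape
    fy = np.cos(np.pi * np.arange(h) / h)[:, None]
    fx = np.cos(np.pi * np.arange(w) / w)[None, :]
    lowpass = (fy + fx + 2.0) / 4.0               # 1 at DC, ~0 at highest freq.
    est = np.where(mask, image, image[mask].mean())
    for _ in range(iters):
        est = idctn(lowpass * dctn(est, norm='ortho'), norm='ortho')
        est[mask] = image[mask]                   # keep known pixels fixed
    return est

img = np.random.rand(64, 64)
mask = np.ones_like(img, dtype=bool)
mask[20:30, 20:30] = False                        # a "cloud" to be filled
filled = dct_inpaint(img, mask)
```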

Relevance: 100.00%

Abstract:

The performance of parallel vector implementations of the one- and two-dimensional orthogonal transforms is evaluated. The orthogonal transforms are computed using actual or modified fast Fourier transform (FFT) kernels. The factors to consider when comparing the speed-up of these vectorized digital signal processing algorithms are discussed, and it is shown that the traditional way of comparing the execution speed of digital signal processing algorithms by the ratios of the numbers of multiplications and additions is no longer effective for vector implementation; the structure of the algorithm must also be considered as a factor when comparing the execution speed of vectorized digital signal processing algorithms. Simulation results on the Cray X-MP are presented for the following orthogonal transforms: discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), discrete Hartley transform (DHT), discrete Walsh transform (DWHT), and discrete Hadamard transform (DHDT). A comparison between the DHT and the fast Hartley transform is also included.
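
The FFT-kernel approach mentioned here can be illustrated with the classic reordering that obtains the (unnormalized) DCT-II of a length-N sequence from a single length-N FFT, usually attributed to Makhoul:

```python
import numpy as np
from scipy.fft import fft, dct

def dct_via_fft(x):
    """Unnormalized DCT-II computed through one same-length FFT."""
    N = len(x)
    v = np.concatenate([x[0::2], x[1::2][::-1]])    # evens, then reversed odds
    V = fft(v)
    k = np.arange(N)
    return 2.0 * np.real(np.exp(-1j * np.pi * k / (2 * N)) * V)

x = np.random.rand(32)
assert np.allclose(dct_via_fft(x), dct(x, type=2))  # matches SciPy's DCT-II
```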

Relevance: 100.00%

Abstract:

When stereo images are captured under less-than-ideal conditions, there may be inconsistencies between the two images in brightness, contrast, blurring, etc. When stereo matching is performed between the images, these variations can greatly reduce the quality of the resulting depth map. In this paper we propose a method for correcting sharpness variations in stereo image pairs, performed as a pre-processing step to stereo matching. Our method is based on scaling the 2-D discrete cosine transform (DCT) coefficients of both images so that the two images have the same amount of energy in each of a set of frequency bands. Experiments show that applying the proposed correction method can greatly improve disparity map quality when one image in a stereo pair is more blurred than the other.
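
A sketch of the described correction, with the number and shape of the frequency bands (8 radial bands here) as assumptions in place of the paper's choices:

```python
import numpy as np
from scipy.fft import dctn, idctn

def match_sharpness(ref, blurred, n_bands=8):
    """Rescale each DCT frequency band of `blurred` to the energy of `ref`."""
    h, w = ref.shape
    cr, cb = dctn(ref, norm='ortho'), dctn(blurred, norm='ortho')
    yy, xx = np.meshgrid(np.arange(h) / h, np.arange(w) / w, indexing='ij')
    band = np.minimum((np.hypot(yy, xx) / np.sqrt(2) * n_bands).astype(int),
                      n_bands - 1)                 # radial band index per coeff.
    for b in range(n_bands):
        m = band == b
        eb = np.sum(cb[m] ** 2)
        if eb > 0:
            cb[m] *= np.sqrt(np.sum(cr[m] ** 2) / eb)   # equalize band energy
    return idctn(cb, norm='ortho')

ref = np.random.rand(128, 128)
corrected = match_sharpness(ref, ref * 0.5)
```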

Relevance: 100.00%

Abstract:

In recent years, with the popularity of image compression techniques, many architectures have been proposed, generally based on the Forward and Inverse Discrete Cosine Transform (FDCT, IDCT). Alternatively, compression schemes based on the discrete wavelet transform (DWT), used both in the JPEG2000 coding standard and in the upcoming H.264-SVC (Scalable Video Coding), do not need to divide the image into non-overlapping blocks or macroblocks. This paper discusses the DLMT (Discrete Lopez-Moreno Transform) and proposes a new scheme intermediate between the DCT and the DWT. The DLMT is computationally very similar to the DCT and uses quasi-sinusoidal functions, so block artifacts and their effects are of relatively low importance. The use of quasi-sinusoidal functions makes it possible to achieve multiresolution control quite close to that obtained with a DWT, but without increasing the computational complexity of the transform. The DLMT can also be applied over a whole image without increasing the computational complexity. Simulation results in MATLAB show that the proposed DLMT offers significant performance benefits and improvements compared with the DCT.

Relevance: 100.00%

Abstract:

In this paper we propose the use of the Discrete Cosine Transform Type-III (DCT3) for multicarrier modulation. There are two DCT3 variants (even and odd) and, for each of them, we derive the expressions for both the prefix and the suffix to be appended to each transmitted data symbol. Moreover, the DCT3 is closely related to the corresponding inverse DCT Type-II, even and odd. Furthermore, we give explicit expressions for the 1-tap-per-subcarrier equalizers that must be implemented at the receiver to perform channel equalization in the frequency domain. As a result, the proposed DCT3-based multicarrier modulator can be used as an alternative to DFT-based systems to perform Orthogonal Frequency-Division Multiplexing or Discrete Multitone Modulation.
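
Omitting the prefix/suffix and the per-subcarrier equalizers that the paper derives, the transform pair at the heart of such a modulator can be sketched over an ideal channel:

```python
import numpy as np
from scipy.fft import dct, idct

# With an orthonormal DCT-III as the demodulator and its inverse as the
# modulator, the subcarriers are orthogonal and the data come back exactly
# over a distortionless channel; a real channel needs the paper's prefix,
# suffix and 1-tap equalizers.
N = 64
symbols = np.random.choice([-1.0, 1.0], size=N)   # BPSK data on N subcarriers
tx = idct(symbols, type=3, norm='ortho')          # modulator: inverse DCT-III
rx = tx                                           # ideal (distortionless) channel
recovered = dct(rx, type=3, norm='ortho')         # demodulator: DCT-III
assert np.allclose(recovered, symbols)
```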

Relevance: 100.00%

Abstract:

In recent years, with the popularity of image compression techniques, many architectures have been proposed, generally based on the Forward and Inverse Discrete Cosine Transform (FDCT, IDCT). Alternatively, compression schemes based on the discrete wavelet transform (DWT), used both in the JPEG2000 coding standard and in the H.264-SVC (Scalable Video Coding) standard, do not need to divide the image into non-overlapping blocks or macroblocks. This paper discusses the hardware implementation of the DLMT (Discrete Lopez-Moreno Transform). It proposes a new scheme intermediate between the DCT and the DWT, comparing results against the most relevant proposed architectures for benchmarking. The DLMT can also be applied over a whole image without increasing computational complexity. FPGA implementation results show that the proposed DLMT offers significant performance benefits and improvements compared with the DCT and the DWT, and is consequently very suitable for implementation in WSN (Wireless Sensor Network) applications.

Relevance: 100.00%

Abstract:

This final degree project, titled "High-level modeling with SystemC" ("Modelado de alto nivel con SystemC"), has as its main objective the modeling of some modules of an MPEG-2 video encoder using the SystemC digital-system description language at the TLM (Transaction Level Modeling) abstraction level. SystemC is a digital-system description language based on C++; it provides a set of routines and libraries that implement data types, structures and processes specific to the modeling of digital systems. A full description can be found in [GLMS02]. The TLM abstraction level is characterized by separating the communication among modules from their functionality: it puts greater emphasis on the function of the communication between modules (where data go from and to) than on its exact implementation. TLM and an example implementation are described in [RSPF] and [HG]. The architecture of the model is based on the MVIP-2 encoder described in [Gar04]. The implemented modules are:
· IVIDEOH: filters the input video in the horizontal dimension and stores the filtered video in memory.
· IVIDEOV: reads the video filtered by IVIDEOH from memory, filters it in the vertical dimension and writes the result to memory.
· DCT: reads the video filtered by IVIDEOV, computes the discrete cosine transform and stores the transformed video in memory.
· QUANT: reads the video transformed by DCT, quantizes it and stores the result in memory.
· IQUANT: reads the video quantized by QUANT, performs the inverse quantization and stores the result in memory.
· IDCT: reads the video processed by IQUANT, computes the inverse cosine transform and stores the result in memory.
· IMEM: acts as the interface between the above modules and the memory; it manages simultaneous memory access requests and guarantees exclusive access to the memory at each instant.
[Figure 1 of the original document shows the architecture of the model; the modules above appear in grey.] The figure also shows some modules in white; these are test modules added to run simulations and exercise the model:
· CAMARA: simulates a black-and-white camera; it reads the luminance from a video file and sends it to the model through a FIFO.
· FIFO: acts as the interface between the camera and the model, buffering the data sent by the camera until IVIDEOH reads it.
· CONTROL: controls the video-processing modules; they notify it when they finish processing a video frame, and it starts whichever modules are needed to continue the encoding, ensuring the correct sequencing of the video processors.
· RAM: simulates a RAM memory, including a programmable access delay.
For testing, video files with the output of each processing module, message files and a trace file showing the sequencing of the processors were also generated.
As a result of the work in this project, it can be concluded that SystemC allows digital systems to be modeled quite easily (prior knowledge of C++ and object-oriented programming is needed) and permits models at a higher abstraction level than the RTL usual in Verilog and VHDL; in the case of this project, the TLM.
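
As a behavioral illustration (in Python rather than SystemC, and not the TLM model itself), the data path implemented by the DCT, QUANT, IQUANT and IDCT modules can be sketched on an 8x8 block; the uniform quantizer step is a simplifying assumption, since MPEG-2 uses per-coefficient quantization matrices:

```python
import numpy as np
from scipy.fft import dctn, idctn

Q = 16.0  # hypothetical uniform quantizer step

def encode_block(block):
    coeffs = dctn(block, norm='ortho')        # role of the DCT module
    return np.round(coeffs / Q)               # role of the QUANT module

def decode_block(levels):
    coeffs = levels * Q                       # role of the IQUANT module
    return idctn(coeffs, norm='ortho')        # role of the IDCT module

block = np.random.randint(0, 256, (8, 8)).astype(float)
rec = decode_block(encode_block(block))
print('max reconstruction error:', np.max(np.abs(rec - block)))
```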

Relevance: 100.00%

Abstract:

The growth and advances made in computer technology have led to the present interest in picture processing techniques. When considering image data compression, the tendency is towards transform source coding of the image data. This method of source coding has reached a stage where very high reductions in the number of bits representing the data can be made while still preserving image fidelity. The point has thus been reached where channel errors need to be considered, as these will be inherent in any image communication system. The thesis first describes general source coding of images, with the emphasis almost totally on transform coding. The transform adopted is the Discrete Cosine Transform (DCT), which is common to both transform coders. Beyond this, the source coding techniques differ substantially: one technique involves zonal coding, the other threshold coding. Having outlined the theory and methods of implementation of the two source coders, their performances are then assessed, first in the absence, and then in the presence, of channel errors. These tests provide a foundation on which to base methods of protection against channel errors. Six different protection schemes are then proposed. Results obtained from each particular combined source and channel error protection scheme, all described in full, are then presented. Comparisons are made between the schemes and indicate the best one to use given a particular channel error rate.
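
The difference between the two source coders can be sketched on a single 8x8 DCT block: zonal coding keeps a fixed low-frequency zone agreed in advance, while threshold coding keeps whichever coefficients are largest in magnitude for that particular block (and must therefore also signal their positions):

```python
import numpy as np
from scipy.fft import dctn

block = np.random.rand(8, 8) * 255
coeffs = dctn(block, norm='ortho')

# Zonal: keep the fixed triangular zone i + j < 4 (10 coefficients).
i, j = np.meshgrid(np.arange(8), np.arange(8), indexing='ij')
zonal = np.where(i + j < 4, coeffs, 0.0)

# Threshold: keep the 10 largest-magnitude coefficients of this block.
thr = np.sort(np.abs(coeffs).ravel())[-10]
threshold = np.where(np.abs(coeffs) >= thr, coeffs, 0.0)

print('zonal kept:', np.count_nonzero(zonal),
      'threshold kept:', np.count_nonzero(threshold))
```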