954 results for Median filter
Abstract:
Median filtering is a simple non-linear digital signal smoothing operation in which the median of the samples in a sliding window replaces the sample at the middle of the window. The resulting filtered sequence tends to follow polynomial trends in the original sample sequence. The median filter preserves signal edges while filtering out impulses, and because of this property median filtering has found applications in many areas of image and speech processing. Although median filtering is simple to realise digitally, its properties are not easily analysed with standard analysis techniques.
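A minimal Python sketch of the operation described above, written for illustration only (the window length and the edge-replication padding are arbitrary choices, not taken from the source); the usage line shows the edge-preserving, impulse-removing behavior the abstract refers to.

```python
import numpy as np

def median_filter_1d(signal, window=3):
    """Sliding-window median of a 1-D signal (window assumed odd).

    The sample at the middle of each window is replaced by the median of
    the samples inside the window; edges are handled by replication.
    """
    pad = window // 2
    padded = np.pad(np.asarray(signal, dtype=float), pad, mode="edge")
    return np.array([np.median(padded[i:i + window])
                     for i in range(len(signal))])

# An isolated impulse is removed while the step edge stays sharp,
# which a moving average would blur instead.
x = [0, 0, 9, 0, 0, 5, 5, 5, 5]
print(median_filter_1d(x, 3))   # -> [0. 0. 0. 0. 0. 5. 5. 5. 5.]
```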
Abstract:
The ever-increasing demand for high image quality requires fast and efficient methods for noise reduction, and the best-known order-statistics filter is the median filter. A method is presented to calculate the median of a set of N W-bit integers in W/B time steps. Blocks operating on B-bit slices are used to find B bits of the median at a time, using a novel quantum-like representation that allows the median to be computed faster than with the best-known method (W time steps). The general method allows a variety of designs to be synthesised systematically. A further novel architecture to calculate the median for a moving set of N integers is also discussed.
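The W/B-step scheme described above builds on the classic bit-serial majority method, which resolves one bit of the median per time step (W steps in total). The sketch below is a plain-Python illustration of that baseline method only; the B-bit block slicing and the quantum-like representation of the abstract are not reproduced.

```python
def bitwise_median(values, width):
    """Bit-serial median of an odd-sized list of unsigned `width`-bit integers.

    One bit of the median is resolved per iteration, MSB first, by majority
    voting; a sample that diverges from the median prefix is 'frozen' to the
    bit value at which it diverged, since all of its remaining bits then act
    as that value for the purpose of the vote.
    """
    n = len(values)
    frozen = [None] * n              # None = still matches the median prefix
    median = 0
    for k in range(width - 1, -1, -1):
        bits = [frozen[i] if frozen[i] is not None else (v >> k) & 1
                for i, v in enumerate(values)]
        m = 1 if sum(bits) > n // 2 else 0
        median = (median << 1) | m
        for i, v in enumerate(values):
            if frozen[i] is None and ((v >> k) & 1) != m:
                frozen[i] = (v >> k) & 1
    return median

print(bitwise_median([5, 7, 1, 3, 6], width=3))   # -> 5
```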
Abstract:
The core processing step of the median filter noise-reduction technique is to find the median within a window of integers. A four-step procedure to compute the running median of the last N W-bit integers of a stream, showing area and time benefits, is proposed. The method slices the integers into B-bit groups processed by a pipeline of W/B blocks. From the method, an architecture is developed that gives a designer the flexibility to exchange area gains for a faster frequency of operation, or vice versa, by adjusting the N, W and B parameter values. Gains of around 40% in area, or of around 20% in frequency of operation, are clearly observed in FPGA circuit implementations compared with the latest methods in the literature.
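For reference, a software stand-in for the running-median computation on the last N samples of a stream is sketched below; it maintains a sorted copy of the window rather than the pipelined W/B bit-slice blocks of the proposed architecture, and the window length is an arbitrary example value.

```python
from bisect import insort, bisect_left
from collections import deque

def running_median(stream, n=3):
    """Yield the median of the last n samples of a stream (n assumed odd).

    Each new sample is inserted into a sorted copy of the window and the
    oldest sample is removed, so every update costs O(n) in software.
    """
    window = deque()
    ordered = []
    for x in stream:
        window.append(x)
        insort(ordered, x)
        if len(window) > n:
            ordered.pop(bisect_left(ordered, window.popleft()))
        if len(window) == n:
            yield ordered[n // 2]

print(list(running_median([2, 5, 1, 9, 3, 8, 4], n=3)))   # -> [2, 5, 3, 8, 4]
```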
Abstract:
This work reports the development and implementation of a practical assignment for the Active and Robot Vision course. In the assignment, a system is designed and implemented that moves objects with a robot arm in three-dimensional space, using digital images to determine the object positions. In the implementation presented here, thresholding in the HSV color space was used to segment the objects from the image based on their colors. The binary image produced by the segmentation was filtered with a median filter to remove noise. The position of an object in the binary image was determined by labeling connected pixel groups with a connected-component labeling method, and the position of the largest labeled pixel group was taken as the object's position. The object positions in the image were mapped to three-dimensional coordinates using a calibrated camera, and the system moved the objects based on these estimated three-dimensional positions.
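A rough OpenCV sketch of the image-processing chain described above (HSV thresholding, median filtering of the binary mask, connected-component labeling, centroid of the largest component); the HSV limits and the 5-pixel median kernel are illustrative placeholders, and the camera calibration and robot-control steps are omitted.

```python
import cv2
import numpy as np

def locate_object(bgr_image, hsv_low, hsv_high):
    """Segment by HSV thresholding, denoise the binary mask with a median
    filter, and return the centroid of the largest connected pixel group."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, np.array(hsv_low), np.array(hsv_high))
    mask = cv2.medianBlur(mask, 5)                  # remove isolated noise pixels
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    if num <= 1:
        return None                                 # only background was found
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])  # skip background label 0
    return tuple(centroids[largest])                # (x, y) in image coordinates
```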
Abstract:
The year-by-year growth in the processing power of computers makes it possible to use spectral images, instead of grayscale and RGB color images, to solve an increasing number of problems. Unfortunately, research on noise filtering has lagged behind this development: most methods have been tested only on grayscale or RGB color images, and their performance on spectral images has not been evaluated. This master's thesis studies different methods for removing bit errors from spectral images. As new methods, the work uses the cube median filter and the multi-stage cube median filter. The other methods studied were vector median filtering, multi-stage vector median filtering, and trimmed mean filtering. The cube filters aim to exploit the correlation between the bands of spectral images, and overall they achieved the best results. The behavior of all the filters was studied on two different 224-band spectral images by adding random bit errors to the images.
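Of the methods mentioned above, the vector median filter has a compact definition that a short sketch can make concrete: within a window, it returns the spectral vector whose summed distance to all other vectors in the window is smallest. The snippet below is an illustrative implementation of that definition, not the thesis code, and the cube median variants are not reproduced.

```python
import numpy as np

def vector_median(window_pixels):
    """Vector median of a window of spectral vectors (e.g. 224-band pixels):
    the member vector whose summed Euclidean distance to all other members
    is smallest, so the output is always one of the original spectra."""
    pts = np.asarray(window_pixels, dtype=float)          # shape (n, bands)
    dists = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
    return pts[np.argmin(dists.sum(axis=1))]
```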
Abstract:
With the increasing use of digital media, methods for multimedia protection have become extremely important. The number of solutions to the problem, from encryption to watermarking, is large and growing every year. In this work digital image watermarking is considered, specifically a novel method for digital watermarking of color and spectral images. An overview of existing methods for watermarking color and grayscale images is given. Methods using independent component analysis (ICA) for detection and methods using the discrete wavelet transform (DWT) and discrete cosine transform (DCT) are considered in more detail. The novel watermarking method proposed in this paper allows a color or spectral watermark image to be embedded into a color or spectral image, respectively, and the watermark to be successfully extracted from the resulting watermarked image. A number of experiments have been performed on the quality of extraction depending on the parameters of the embedding procedure. Another set of experiments tested the robustness of the proposed algorithm. Three techniques were chosen for that purpose: the median filter, a low-pass filter (LPF) and the discrete cosine transform (DCT), which are part of the widely known StirMark image watermarking robustness test. The study shows that the proposed watermarking technique is fragile, i.e. the watermark is altered by simple image processing operations. Moreover, we found that the contents of the image to be watermarked do not affect the quality of the extraction. The mixing coefficients, which determine the amount of the key and watermark images in the result, should not exceed 1% of the original. The proposed algorithm has proven successful in the task of watermark embedding and extraction.
Abstract:
The main objective of the present thesis was the seismic interpretation and seismic attribute analysis of 3D seismic data from the Siririzinho high, located in the Sergipe Sub-basin (southern portion of the Sergipe-Alagoas Basin). This study has enabled a better understanding of the stratigraphy and of the structural evolution that the Siririzinho high experienced during its development. In a first analysis, two types of filters were used: a dip-steered median filter, to remove random noise and increase the lateral continuity of reflections, and a fault-enhancement filter, to enhance reflection discontinuities. After this filtering step, similarity and curvature attributes were applied in order to identify and enhance the distribution of faults and fractures. The use of attributes and filtering greatly contributed to the identification of faults and the enhancement of their continuity. Besides these typical attributes (similarity and curvature), neural network and fingerprint techniques, which generate meta-attributes, were also used to highlight the faults; however, the results were not satisfactory. In a subsequent step, well log and seismic data analyses were performed, which allowed an understanding of the distribution and arrangement of the sequences that occur on the Siririzinho high, as well as of how these units are affected by the main structures in the region. The Siririzinho high comprises a structure elongated in the N-S direction, overlain by four seismic sequences (informally named, from bottom to top, sequences I to IV) above the top of the basement. It was possible to recognize the main N-S-oriented faults, which especially affect sequences I and II, and NE-SW-oriented faults that reach the younger sequences III and IV. Finally, the interpretation of the seismic horizons corresponding to each of these sequences allowed a better understanding of the geometry, deposition and structural relations in the area.
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
Long-term electrocardiogram (ECG) recordings often suffer from significant noise. Baseline wander in particular is pronounced in ECG recordings using dry or esophageal electrodes, which are intended for prolonged registration. While analog high-pass filters introduce phase distortions, reliable offline filtering of the baseline wander implies a computational burden that has to be weighed against the increase in signal-to-baseline ratio (SBR). Here we present a graphics processing unit (GPU) based parallelization method to speed up offline baseline wander filter algorithms, namely the wavelet, finite impulse response, infinite impulse response, moving-mean, and moving-median filters. Individual filter parameters were optimized with respect to the SBR increase, based on ECGs from the Physionet database superimposed with auto-regressively modeled, real baseline wander. A Monte Carlo simulation showed that for low input SBR the moving-median filter outperforms any other method but negatively affects ECG wave detection, whereas the infinite impulse response filter is preferred in the case of high input SBR. However, the parallelized wavelet filter runs 500 and 4 times faster than these two algorithms on the GPU, respectively, and offers superior baseline wander suppression in low-SBR situations. Using a signal segment of 64 mega-samples filtered as a single unit, wavelet filtering of a 7-day high-resolution ECG is computed in less than 3 seconds. Taking the high filtering speed into account, the GPU wavelet filter is the most efficient method to remove baseline wander from long-term ECGs, and it strongly reduces the computational burden.
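As an illustration of the moving-median approach to baseline removal discussed above, the sketch below estimates the baseline as a running median and subtracts it; the 0.6 s window is an assumed example value, and the GPU parallelization that is the point of the paper is not reproduced.

```python
import numpy as np
from scipy.signal import medfilt

def remove_baseline_moving_median(ecg, fs, window_s=0.6):
    """Moving-median baseline removal sketch: the baseline wander is
    estimated as the running median over roughly window_s seconds and
    subtracted from the ECG. The window length is an illustrative choice."""
    ecg = np.asarray(ecg, dtype=float)
    kernel = int(window_s * fs) | 1        # medfilt requires an odd kernel length
    baseline = medfilt(ecg, kernel_size=kernel)
    return ecg - baseline, baseline
```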
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space from laryngeal images taken from 15 videos recorded with videostroboscopic equipment by the ENT service of the Gregorio Marañón Hospital in Madrid. The system is based on active contour models (snakes). In the pre-processing stage, the algorithm combines some traditional techniques (thresholding and median filtering) with more sophisticated techniques such as anisotropic filtering, yielding an image appropriate for the use of snakes. The value selected for the threshold is 85% of the maximum peak of the image histogram; above this point the pixel information is not relevant. The anisotropic filter makes it possible to distinguish two intensity levels, one being the background and the other the glottis. The initialization is based on the magnitude of the GVF field, which ensures an automatic process for the selection of the initial contour. The performance of the algorithm is validated using the Pratt coefficient and compared against a manual segmentation and another automatic method based on the watershed transform.
Abstract:
The present work describes a new methodology for the automatic detection of the glottal space from laryngeal images based on active contour models (snakes). In order to obtain an image appropriate for snake-based techniques, the proposed algorithm combines a pre-processing stage that includes some traditional techniques (thresholding and median filtering) with more sophisticated ones such as anisotropic filtering. The threshold value was fixed at 85% of the maximum peak of the image histogram, and the anisotropic filter makes it possible to distinguish two intensity levels, one corresponding to the background and the other to the foreground (the glottis). The initialization is based on the magnitude obtained from the Gradient Vector Flow field, ensuring an automatic process for the selection of the initial contour. The performance of the algorithm is tested using the Pratt coefficient and compared against a manual segmentation. The results obtained suggest that this method provides results comparable with other techniques such as the one proposed in (Osma-Ruiz et al., 2008).
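A small OpenCV sketch of the pre-processing stage shared by the two abstracts above, under the assumption that "85% of the maximum peak of the image histogram" refers to 85% of the gray level at which the histogram peaks; the inverted binary polarity, the 5 × 5 median kernel, the anisotropic diffusion step and the GVF snake itself are not taken from the source.

```python
import cv2
import numpy as np

def preprocess_glottal_frame(gray):
    """Pre-processing sketch: threshold at 85% of the gray level where the
    histogram peaks (one reading of the abstract), keep the darker pixels,
    then median-filter the binary result to remove speckle noise."""
    hist = cv2.calcHist([gray], [0], None, [256], [0, 256]).ravel()
    threshold = 0.85 * int(np.argmax(hist))        # 85% of the histogram peak location
    _, mask = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY_INV)
    return cv2.medianBlur(mask, 5)                 # 5x5 median filter on the mask
```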
Abstract:
Extension of software dedicated to image analysis by introducing new options for digital video processing and improvements to the user interaction. To this end, the operation of the application was studied, integrating the Python language as the tool for managing and executing the application. In this part of the application the following were integrated: - Translation of the UI into a Spanish version. - Modification and removal of any added video-processing filter, not only the last one. - Pointer and status-bar descriptions for application elements. - Toolbar icons for the most important added filters. For the digital video processing part, Avisynth is the focus of the study; it executes the relevant operations in a low-level language (C++) through dynamic link libraries (*.dll). The new functionalities are: matrix convolution, adaptive median filter, DCT, general video adjustments in RGB or YUV format, rotations, perspective changes and frequency-domain filtering.
Abstract:
In order to optimize frontal detection in sea surface temperature fields at 4 km resolution, a combined statistical and expert-based approach is applied to test different spatial smoothings of the data prior to the detection process. Fronts are usually detected at 1 km resolution using the histogram-based, single-image edge detection (SIED) algorithm developed by Cayula and Cornillon in 1992, with a standard preliminary smoothing using a median filter and a 3 × 3 pixel kernel. Here, detections are performed in three study regions (off Morocco, the Mozambique Channel, and north-western Australia) and across the Indian Ocean basin using the combination of multiple windows (CMW) method developed by Nieto, Demarcq and McClatchie in 2012, which improves on the original Cayula and Cornillon algorithm. Detections at 4 km and 1 km resolution are compared. Fronts are divided into two intensity classes (“weak” and “strong”) according to their thermal gradient. A preliminary smoothing is applied prior to the detection using different convolutions: three types of filters (median, average and Gaussian) combined with four kernel sizes (3 × 3, 5 × 5, 7 × 7, and 9 × 9 pixels) and three detection window sizes (16 × 16, 24 × 24 and 32 × 32 pixels), to test the effect of these smoothing combinations on reducing the background noise of the data and therefore on improving the frontal detection. The performance of the combinations on 4 km data is evaluated using two criteria: detection efficiency and front length. We find that the optimal combination of preliminary smoothing parameters for enhancing detection efficiency and preserving front length includes a median filter, a 16 × 16 pixel window size, and a 5 × 5 pixel kernel for strong fronts or a 7 × 7 pixel kernel for weak fronts. Results show an improvement in detection performance (from the largest to the smallest window size) of 71% for strong fronts and 120% for weak fronts. Despite the small window used (16 × 16 pixels), the length of the fronts is preserved relative to that found with 1 km data. This optimal preliminary smoothing and the CMW detection algorithm on 4 km sea surface temperature data are then used to describe the spatial distribution of the monthly frequencies of occurrence of both strong and weak fronts across the Indian Ocean basin. In general, strong fronts are observed in coastal areas, whereas weak fronts, with some seasonal exceptions, are mainly located in the open ocean. This study shows that adequate noise reduction through a preliminary smoothing of the data considerably improves the frontal detection efficiency as well as the overall quality of the results. Consequently, the use of 4 km data enables frontal detections similar to those from 1 km data (using a standard 3 × 3 median convolution) in terms of detectability, length and location. This method, using 4 km data, is easily applicable to large regions or at the global scale, with far fewer constraints on data manipulation and processing time relative to 1 km data.
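A minimal sketch of the preliminary smoothing step evaluated above, using SciPy equivalents of the three tested convolutions; the Gaussian sigma mapping is an assumption, and the SIED/CMW front detection itself is not shown.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter, gaussian_filter

def preliminary_smoothing(sst, kind="median", kernel=5):
    """Preliminary smoothing of a 4 km SST field before front detection.

    A median filter with a 5x5 kernel is the combination the study found
    optimal for strong fronts (7x7 for weak fronts); 'average' and
    'gaussian' are the two alternative convolutions that were tested.
    """
    sst = np.asarray(sst, dtype=float)
    if kind == "median":
        return median_filter(sst, size=kernel)
    if kind == "average":
        return uniform_filter(sst, size=kernel)
    if kind == "gaussian":
        return gaussian_filter(sst, sigma=kernel / 6.0)  # sigma choice is illustrative
    raise ValueError(f"unknown filter kind: {kind}")
```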
Abstract:
The purpose of this work is to demonstrate and assess a simple algorithm for automatic estimation of the most salient region in an image, with possible applications in computer vision. The algorithm uses the connection between color dissimilarities in the image and the image's most salient region, and it avoids using image priors. Pixel dissimilarity is an informal function of the distance of a specific pixel's color to other pixels' colors in the image. We examine the relation between pixel color dissimilarity and salient region detection on the MSRA1K image dataset, and propose a simple algorithm for salient region detection through random pixel color dissimilarity. We define dissimilarity by accumulating the distance between each pixel and a sample of n other random pixels, in the CIELAB color space. An important result is that random dissimilarity between each pixel and just one other pixel (n = 1) is enough to create adequate saliency maps when combined with a median filter, with average performance competitive with other related methods in the saliency detection research field. The assessment was performed by means of precision-recall curves. This idea is inspired by the human attention mechanism, which is able to choose a few specific regions to focus on, a biological capability that the computer vision community aims to emulate. We also review some of the history of this topic of selective attention.
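A compact sketch of the random-dissimilarity idea described above: saliency as the accumulated CIELAB distance from each pixel to n randomly sampled pixels, followed by a median filter. The 9 × 9 median window and the final normalization are illustrative choices, not parameters from the paper.

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import color

def random_dissimilarity_saliency(rgb, n=1, median_size=9, rng=None):
    """Saliency sketch: each pixel's saliency is its accumulated CIELAB
    distance to n randomly chosen pixels, smoothed with a median filter."""
    rng = np.random.default_rng() if rng is None else rng
    lab = color.rgb2lab(rgb).reshape(-1, 3)            # flat list of CIELAB pixels
    sal = np.zeros(lab.shape[0])
    for _ in range(n):
        partners = rng.integers(0, lab.shape[0], size=lab.shape[0])
        sal += np.linalg.norm(lab - lab[partners], axis=1)
    sal = median_filter(sal.reshape(rgb.shape[:2]), size=median_size)
    return (sal - sal.min()) / (np.ptp(sal) + 1e-12)   # scale map to [0, 1]
```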
Abstract:
The aim of this study is to develop an approach that supports management decision-making by making it possible to collect and filter from social media the essential information that affects the formation of customer satisfaction. A further aim is to refine the way social media messages are examined and to offer management a practical, alternative data-collection method alongside traditional customer satisfaction surveys. The case company of the study is Fox International Channels Oy, which at the beginning of 2012 acquired SuomiTV and launched the new FOX channel in its place. The social media messages are collected through social media monitoring, and the collected material is examined using the critical incident technique. Based on this study, it can be stated that there is an enormous amount of unprocessed data which, when collected, filtered and analyzed, can yield relevant information that supports the company's objectives. The study shows that the messages contain a great deal of irony, slang, sarcasm and metaphor, which is why an automatic keyword-based monitoring and analysis service is not sufficient to create deep understanding. Information from outside the company, and social media messages in their purest form, are very important external information for companies, and exploiting them can create competitive advantage over the longer term.