502 results for Pixels
Abstract:
In current industrial environments there is an increasing need for practical and inexpensive quality control systems to detect foreign food materials in powder food processing lines. This demand is especially important for the detection of product adulteration with traces of highly allergenic products, such as peanuts and tree nuts. Manufacturing industries that process multiple powder food products present a substantial risk of contaminating powder foods with traces of tree nuts and other adulterants, which might result in unintentional ingestion of nuts by the sensitised population. Hence, an in-line system to detect nut traces at the early stages of food manufacturing is of crucial importance. In the present work, a feasibility study of a spectral index for revealing adulteration of wheat flour samples with tree nut and peanut traces using hyperspectral images is reported. The main nuts responsible for allergenic reactions considered in this work were peanut, hazelnut and walnut. Enhanced contrast between nuts and wheat flour was obtained after the application of the index. Furthermore, the segmentation of these images by selecting different thresholds for different nut and flour mixtures allowed the identification of nut traces in the samples. Pixels identified as nuts were counted and compared with the actual percentage of peanut adulteration. As a result, the multispectral system was able to detect and provide good visualisation of tree nut and peanut trace levels down to 0.01% by weight. In this context, multispectral imaging could operate in conjunction with chemical procedures, such as Real Time Polymerase Chain Reaction and Enzyme-Linked Immunosorbent Assay, to save time, money and skilled labour on product quality control. This approach could enable not only a few selected samples to be assessed but also quality control surveillance to be incorporated extensively on product processing lines.
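The abstract does not specify the spectral index or the thresholds used; the following is a minimal Python sketch of the general workflow (index computation, thresholding, pixel counting), assuming a generic normalized-difference index between two hypothetical bands:

```python
import numpy as np

def normalized_band_index(cube, band_a, band_b, eps=1e-9):
    """Generic normalized-difference index between two bands of a
    hyperspectral cube (rows x cols x bands). The band pair used in the
    actual study is not given here; band_a/band_b are placeholders."""
    a = cube[:, :, band_a].astype(float)
    b = cube[:, :, band_b].astype(float)
    return (a - b) / (a + b + eps)

def segment_and_count(index_image, threshold):
    """Threshold the index image and return a binary mask of pixels
    flagged as nut traces plus their percentage of the image."""
    mask = index_image > threshold
    percentage = 100.0 * mask.sum() / mask.size
    return mask, percentage

# Synthetic example: a 100x100 cube with 50 bands.
cube = np.random.rand(100, 100, 50)
index_img = normalized_band_index(cube, band_a=10, band_b=30)
mask, pct = segment_and_count(index_img, threshold=0.2)
print(f"Pixels flagged as nut traces: {pct:.3f}%")
```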
Abstract:
LHE (logarithmical hopping encoding) is a computationally efficient image compression algorithm that exploits the Weber–Fechner law to encode the error between colour component predictions and the actual values of those components. More concretely, for each pixel, luminance and chrominance predictions are calculated as a function of the surrounding pixels, and the error between the predictions and the actual values is then logarithmically quantised. The main advantage of LHE is that, while it achieves low bit-rate encoding with high quality in terms of peak signal-to-noise ratio (PSNR) and both full-reference (FSIM) and no-reference (blind/referenceless image spatial quality evaluator, BRISQUE) image quality metrics, its time complexity is O(n) and its memory complexity is O(1). Furthermore, an enhanced version of the algorithm is proposed, in which the output codes provided by the logarithmical quantiser are used in a pre-processing stage to estimate the perceptual relevance of the image blocks. This allows the algorithm to downsample the blocks with low perceptual relevance, thus improving the compression rate. The performance of LHE is especially remarkable when the bits-per-pixel rate is low, showing much better quality, in terms of PSNR and FSIM, than JPEG and slightly lower quality than JPEG-2000, while being more computationally efficient.
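As an illustration of the core idea only (predict each pixel from its neighbours and logarithmically quantise the error), here is a minimal Python sketch; the predictor, hop values and quantiser spacing below are illustrative assumptions, not the actual LHE codec:

```python
import numpy as np

def predict_from_neighbours(img, y, x):
    """Toy predictor: average of the left and upper neighbours (falls back
    to whichever exists). The real LHE predictor is more elaborate."""
    left = img[y, x - 1] if x > 0 else None
    up = img[y - 1, x] if y > 0 else None
    vals = [v for v in (left, up) if v is not None]
    return float(np.mean(vals)) if vals else 128.0

def log_quantise(error, hops=(1, 2, 4, 8, 16, 32, 64)):
    """Map a prediction error to the nearest logarithmically spaced 'hop'
    (with sign), mimicking Weber-Fechner style quantisation."""
    sign = 1 if error >= 0 else -1
    magnitude = min(hops, key=lambda h: abs(abs(error) - h))
    return sign * magnitude

img = np.random.randint(0, 256, size=(8, 8)).astype(float)
codes = np.zeros_like(img)
for y in range(img.shape[0]):
    for x in range(img.shape[1]):
        pred = predict_from_neighbours(img, y, x)
        codes[y, x] = log_quantise(img[y, x] - pred)
```

Each pixel is visited once with a constant amount of work, which is where the O(n) time claim comes from; a streaming version would need only constant extra state (the codes array is kept here just for clarity).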
Abstract:
Video Quality Assessment needs to correspond to human perception. Pixel-based metrics (PSNR or MSE) fail in many circumstances because they do not take into account the spatio-temporal properties of human visual perception. In this paper we propose a new pixel-weighted method to improve video quality metrics for artifact evaluation. The method applies a psychovisual model based on motion, level of detail, pixel location and the appearance of human faces, which brings the quality estimate closer to the human eye's response. Subjective tests were carried out to tune the psychovisual model and to demonstrate the noticeable improvement obtained when pixels are weighted according to the analysed factors instead of being treated equally. The analysis demonstrates the need for models adapted to the specific visualisation of contents, and the proposed model represents an advance in quality assessment when a given artifact is analysed over video sequences.
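A hedged sketch of a pixel-weighted metric of this kind is shown below; the weight map here is a simple centre-weighted placeholder standing in for the full psychovisual model (motion, detail, location, faces), which the abstract does not detail:

```python
import numpy as np

def weighted_mse(ref, dist, weights):
    """MSE where each pixel's squared error is scaled by a psychovisual
    weight; in the full model the weights would come from motion, level of
    detail, pixel location and face detection."""
    w = weights / (weights.sum() + 1e-12)
    return float(np.sum(w * (ref.astype(float) - dist.astype(float)) ** 2))

def weighted_psnr(ref, dist, weights, max_val=255.0):
    mse = weighted_mse(ref, dist, weights)
    return float("inf") if mse == 0 else 10.0 * np.log10(max_val ** 2 / mse)

# Toy example with a Gaussian centre-weighted map as the attention model.
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
centre_weight = np.exp(-((yy - h / 2) ** 2 + (xx - w / 2) ** 2) / (2 * (h / 4) ** 2))
ref = np.random.randint(0, 256, (h, w))
dist = np.clip(ref + np.random.normal(0, 5, (h, w)), 0, 255)
print(weighted_psnr(ref, dist, centre_weight))
```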
Abstract:
Video quality measurement remains necessary to define the criteria that characterise a signal meeting the viewing requirements imposed by the user. New technologies, such as stereoscopic 3D video or formats beyond high definition, impose new criteria that must be analysed to obtain the highest possible user satisfaction. Among the problems detected during the development of this doctoral thesis, phenomena were identified that affect different stages of the audiovisual production chain and a wide variety of content types. First, the content generation process must be controlled through parameters that prevent visual discomfort and, consequently, visual fatigue, especially for stereoscopic 3D content, both animated and live action. On the other hand, quality measurement in the video compression stage uses metrics that are sometimes not adapted to the user's perception. The use of psychovisual models and visual attention maps would make it possible to weight the areas of the image so that greater importance is given to the pixels the user is most likely to focus on. These two blocks are related through the definition of the term saliency. Saliency is the capacity of the visual system to characterise a viewed image by weighting the areas that are most attractive to the human eye. In the generation of stereoscopic content, saliency refers mainly to the depth simulated by the optical illusion, measured as the distance from the virtual object to the human eye. In two-dimensional video, however, saliency is not based on depth but on other elements, such as motion, level of detail, pixel position or the appearance of faces, which are the basic factors that make up the visual attention model developed here. In order to detect the characteristics of a stereoscopic video sequence that are most likely to generate visual discomfort, the extensive literature on this topic was reviewed and preliminary subjective tests with users were carried out. This led to the conclusion that discomfort occurred when there was an abrupt change in the distribution of simulated depths in the image, in addition to other degradations such as the so-called "window violation". New subjective tests focused on analysing these effects with different depth distributions were then used to pin down the parameters that define such an image. The test results show that abrupt changes occur in scenes with high motion and large negative disparities, which interfere with the accommodation and vergence processes of the human eye and increase the time the crystalline lens needs to focus. For the improvement of quality metrics through models adapted to the human visual system, subjective tests were also carried out to determine the importance of each factor in masking a given degradation. The results show a slight improvement when weighting and visual attention masks are applied, bringing the objective quality parameters closer to the response of the human eye.
ABSTRACT Video quality assessment is still a necessary tool for defining the criteria that characterize a signal meeting the viewing requirements imposed by the final user. New technologies, such as stereoscopic 3D video and formats of HD and beyond, require new analyses of video features to obtain the highest user satisfaction. Among the problems detected during this doctoral thesis, it was determined that certain phenomena affect different phases of the audiovisual production chain, as well as different types of content. First, the content generation process should be sufficiently controlled through parameters that prevent visual discomfort in the observer's eye and, consequently, visual fatigue. This is especially necessary for stereoscopic 3D sequences, with both animation and live-action content. On the other hand, video quality assessment related to compression processes should be improved, because some objective metrics are not adapted to the user's perception. The use of psychovisual models and visual attention diagrams allows image regions of interest to be weighted, giving more importance to the areas the user will most probably focus on. These two fields of work are related through the definition of the term saliency. Saliency is the capacity of the human visual system to characterize an image by highlighting the areas that are most attractive to the human eye. Saliency in the generation of 3DTV content refers mainly to the depth simulated by the optical illusion, i.e. the distance from the virtual object to the human eye. In two-dimensional video, on the other hand, saliency is not based on virtual depth but on other features, such as motion, level of detail, pixel position in the frame or face detection, which are the basic features of the visual attention model developed here, as demonstrated by the tests. The extensive literature on visual comfort assessment was reviewed, and new preliminary subjective assessments with users were performed in order to detect the features that increase the probability of discomfort. With this methodology, the conclusions confirmed that one common source of visual discomfort was an abrupt change of disparity in video transitions, apart from other degradations such as window violation. New subjective assessments were performed to quantify the effect of different disparity distributions over various sequences. The results confirmed that abrupt changes in negative-parallax environments produce accommodation-vergence mismatches, derived from the increased time the human crystalline lens needs to focus on the virtual objects. Finally, to develop metrics adapted to the human visual system, additional subjective tests were carried out to determine the importance of each factor in masking a given distortion. The results showed a slight improvement after applying visual attention to the objective metrics: weighting the pixels brings the quality results closer to the human eye's response.
Abstract:
Bone quality, as well as the initial stability of implants, is directly related to the success of rehabilitations in implant dentistry. The present study aimed to analyse the correlation between radiomorphometric indices of bone density obtained from panoramic radiographs, the bone quality profile assessed with Cone Beam Computed Tomography (CBCT) using the OsiriX imaging software, Resonance Frequency Analysis (RFA) and implant insertion torque. A total of 160 implants in 72 individuals, with a mean age of 55.5 (±10.5) years, were evaluated. The IM, IPM and ICM indices were obtained from the panoramic radiographs, while pixel values and the cortical thickness of the alveolar bone crest were obtained from the cone beam computed tomography scans; primary stability was assessed through insertion torque and resonance frequency analysis. The results were analysed with Spearman's correlation coefficient. At p <= 0.01, correlations were obtained between insertion torque and pixel values (0.330), insertion torque and cortical thickness of the alveolar crest (0.339), insertion torque and buccolingual ISQ (0.193), pixel values and cortical thickness of the alveolar crest (0.377), the buccolingual and mesiodistal ISQ directions (0.674), and buccolingual ISQ and cortical thickness of the alveolar crest (0.270); the radiomorphometric indices were correlated with one another. At p <= 0.05, correlations were obtained between insertion torque and mesiodistal ISQ (0.131), between buccolingual ISQ and pixel values (0.156), and between mesiodistal ISQ and left IPMI (0.149) and mesiodistal ISQ and left IPMS (0.145). There is a correlation between CBCT, insertion torque and RFA in the assessment of bone quality. Given the correlations obtained in this study, CBCT examinations can be used pre-surgically to assess bone quality and quantity.
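A hedged sketch of the kind of Spearman analysis reported above, using scipy.stats.spearmanr on synthetic stand-in data (the variable values are hypothetical, not the study's measurements):

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-implant measurements (the study evaluated 160 implants).
rng = np.random.default_rng(0)
insertion_torque = rng.normal(35, 10, 160)                           # N.cm
cbct_pixel_values = insertion_torque * 2 + rng.normal(0, 15, 160)    # grey values
cortical_thickness = rng.normal(1.5, 0.4, 160)                       # mm

rho, p_value = spearmanr(insertion_torque, cbct_pixel_values)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.4f}")
```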
Abstract:
Differential SAR Interferometry (DInSAR) is a remote sensing method with a well-demonstrated ability to monitor geological hazards such as earthquakes, landslides and subsidence. Among these hazards, subsidence involves the settlement of the ground surface over wide areas. Subsidence is frequently induced by overexploitation of aquifers and constitutes a common problem in developed societies. Excessive pumping of groundwater decreases the piezometric level in the subsoil and, as a consequence, increases the effective stress with depth, causing consolidation of the soil column. This consolidation produces a settlement of the ground surface that must be withstood by the civil structures built in these areas. In this paper we make use of an advanced DInSAR approach, the Coherent Pixels Technique (CPT) [1], to monitor subsidence induced by aquifer overexploitation in the Vega Media of the Segura River (SE Spain) from 1993 to the present. 28 ERS-1/2 scenes covering a time interval of about 10 years were used to study this phenomenon. The deformation map retrieved with the CPT technique shows settlements of up to 80 mm at some points of the studied zone. These values agree with data obtained by means of borehole extensometers, but not with the distribution of damaged buildings, well points and basements, because the occurrence of damage also depends on the structural quality of the buildings and their foundations. The most interesting relationship observed is the one between piezometric changes, settlement evolution and local geology. Three main patterns of ground surface and piezometric level behaviour have been distinguished in the study zone during this period: 1) areas where deformation occurs while ground conditions remain altered (recent deformable sediments), 2) areas with no deformation (old and non-deformable materials), and 3) areas where ground deformation mimics piezometric level changes (expansive soils). The temporal relationship between deformation patterns and soil characteristics has been analysed in this work, showing a delay between them. Moreover, this technique has allowed the measurement of ground subsidence for a period (1993-1995) when no instrumental information was available.
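The causal chain described above (pumping lowers the piezometric level, which raises the effective stress and consolidates the soil column) follows Terzaghi's effective stress principle; as a standard textbook relation, not a formula taken from the paper:

```latex
% Terzaghi's effective stress principle (standard soil mechanics):
\sigma' = \sigma - u
% A drop in piezometric level lowers the pore pressure u, so for a roughly
% constant total stress \sigma the effective stress \sigma' increases and the
% soil column consolidates, producing the surface settlement measured by DInSAR.
```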
Abstract:
This paper presents an analysis of the performance of TerraSAR-X for subsidence monitoring in urban areas. The city of Murcia has been selected as a test site due to its high deformation rate and the set of extensometers deployed across the city, which provide validation data. The results have been compared with those obtained from ERS/ENVISAT data for the same period and validated against the in-situ measurements.
Abstract:
This study was partially financed by the Spanish Ministry of Education and Science and EU FEDER under project TEC2005-06863, by the Valencia Regional Government under projects GV006/179 and ACOMP07/087, and by the University of Alicante under projects VIGROB2004102, VIGROB-053, and VIGROB-114.
Abstract:
The objective of this paper is to develop a method to hide information inside a binary image. An algorithm to embed data in scanned text or figures is proposed, based on the detection of suitable pixels that satisfy certain conditions so that the embedding is not detectable. In broad terms, the algorithm locates the pixels placed on the contours of the figures or in areas where some scattering of the two colors can be found. The hidden information is independent of the values of the pixels in which it is embedded. Note that, depending on the sequence of bits to be hidden, around half of the pixels used to carry data bits will not be modified. The other basic characteristic of the proposed scheme is that the modified bits must be taken into account in order to perform the recovery of the information, which consists of reading back the sequence of bits placed at the proper positions. An application to the banking sector is proposed, hiding information in signatures.
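As a toy sketch of the general idea only (select contour pixels of a binary image and use them to carry bits), here is a minimal Python example; it is not the paper's actual embedding rule, which additionally keeps the hidden data independent of the pixel values:

```python
import numpy as np

def contour_pixels(img):
    """Coordinates of pixels whose 4-neighbourhood contains both colours,
    i.e. pixels lying on the contours of a binary image."""
    coords = []
    h, w = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [img[y - 1, x], img[y + 1, x], img[y, x - 1], img[y, x + 1]]
            if 0 in neigh and 1 in neigh:
                coords.append((y, x))
    return coords

def embed(img, bits):
    """Hide one bit per suitable pixel by forcing the pixel value to equal
    the bit; roughly half of the pixels already match and stay untouched."""
    stego = img.copy()
    coords = contour_pixels(img)
    assert len(bits) <= len(coords), "message too long for this cover image"
    for bit, (y, x) in zip(bits, coords):
        stego[y, x] = bit
    return stego, coords[: len(bits)]

def extract(stego, coords):
    """Read back the embedded bits from the recorded embedding positions."""
    return [int(stego[y, x]) for (y, x) in coords]

cover = (np.random.rand(32, 32) > 0.5).astype(int)
message = [1, 0, 1, 1, 0, 0, 1, 0]
stego, positions = embed(cover, message)
assert extract(stego, positions) == message
```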
Abstract:
A parallel algorithm for image noise removal is proposed. The algorithm is based on the peer group concept and uses a fuzzy metric. An optimization study of the use of the CUDA platform to remove impulsive noise with this algorithm is presented. Moreover, an implementation of the algorithm on multi-core platforms using OpenMP is presented. Performance is evaluated in terms of execution time, and the multi-core, GPU and combined implementations are compared. A performance analysis with large images is conducted in order to determine how many pixels to allocate to the CPU and to the GPU. The measured times show that both devices should be given work, with most of it assigned to the GPU. Results show that parallel implementations of denoising filters on GPUs and multi-core CPUs are highly advisable, and they open the door to using such algorithms for real-time processing.
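A hedged serial sketch of peer-group impulsive-noise detection with a fuzzy similarity measure is shown below; the metric is one common choice from the related literature, and the parameters K, d and m are illustrative, not the values used in the paper:

```python
import numpy as np

def fuzzy_metric(p, q, K=1024.0):
    """Fuzzy similarity between two RGB pixels, close to 1 when they are
    similar (a common form in the peer-group filtering literature)."""
    p, q = p.astype(float), q.astype(float)
    return float(np.prod((np.minimum(p, q) + K) / (np.maximum(p, q) + K)))

def peer_group_filter(img, d=0.97, m=3):
    """Flag a pixel as impulsive noise when fewer than m neighbours in its
    3x3 window are 'peers' (similarity >= d); replace it by the channel-wise
    median of the window. Thresholds are illustrative."""
    out = img.copy()
    h, w, _ = img.shape
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            centre = img[y, x]
            window = img[y - 1:y + 2, x - 1:x + 2].reshape(-1, 3)
            peers = sum(fuzzy_metric(centre, n) >= d for n in window) - 1  # exclude itself
            if peers < m:
                out[y, x] = np.median(window, axis=0).astype(np.uint8)
    return out

noisy = np.random.randint(0, 256, (64, 64, 3)).astype(np.uint8)
denoised = peer_group_filter(noisy)
```

In the parallel versions described in the abstract, the per-pixel loop body is what gets distributed across OpenMP threads or CUDA threads, since each output pixel depends only on its local window.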
Abstract:
A parallel algorithm to remove impulsive noise in digital images using heterogeneous CPU/GPU computing is proposed. The parallel denoising algorithm is based on the peer group concept and uses a Euclidean metric. In order to determine how many pixels to allocate to the multi-core CPU and to the GPUs, a performance analysis using large images is presented. A comparison of the parallel implementations on multi-core CPUs, GPUs and a combination of both is performed. Performance has been evaluated in terms of execution time and megapixels per second. We present several optimization strategies that are especially effective in the multi-core environment and demonstrate significant performance improvements. The main advantage of the proposed noise removal methodology is its computational speed, which enables efficient filtering of color images in real-time applications.
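As a sketch of how pixels might be apportioned between devices once each device's throughput (megapixels per second) has been measured, assuming a simple proportional split (not necessarily the partitioning rule used in the paper, and with placeholder numbers):

```python
def split_rows(total_rows, cpu_mpix_per_s, gpu_mpix_per_s):
    """Give each device a share of image rows proportional to its measured
    throughput, so both finish their part at roughly the same time."""
    total = cpu_mpix_per_s + gpu_mpix_per_s
    cpu_rows = round(total_rows * cpu_mpix_per_s / total)
    return cpu_rows, total_rows - cpu_rows

# Hypothetical throughputs for a 4320-row image.
cpu_rows, gpu_rows = split_rows(total_rows=4320, cpu_mpix_per_s=45.0, gpu_mpix_per_s=260.0)
print(f"CPU: {cpu_rows} rows, GPU: {gpu_rows} rows")
```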
Abstract:
Moderate-resolution remote sensing data, such as that provided by MODIS, can be used to detect and map active or past wildfires from daily records of suitable combinations of reflectance bands. The objective of the present work was to develop and test simple algorithms and variations for automatic or semiautomatic detection of burnt areas from time series of MODIS biweekly vegetation indices for a Mediterranean region. MODIS-derived 250 m NDVI time series data for the Valencia region, East Spain, were subjected to a two-step process for the detection of candidate burnt areas, and the results were compared with available fire event records from the Valencia Regional Government. For each pixel and date in the data series, a model was fitted to both the previous and the posterior time series data. Combining drops between two consecutive points with 1-year average drops, we used discrepancies or jumps between the pre- and post-models to identify seed pixels, and then delineated fire scars for each potential wildfire using an extension algorithm starting from the seed pixels. The resulting maps of detected burnt areas showed very good agreement with the perimeters registered in the fire record database used as reference. Overall accuracies and indices of agreement were very high, and omission and commission errors were similar to or lower than those in previous studies that used automatic or semiautomatic fire scar detection based on remote sensing. This supports the effectiveness of the method for detecting and mapping burnt areas in the Mediterranean region.
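A minimal Python sketch of the two-step idea (seed detection from NDVI drops, then extension of the scar from the seeds) is shown below; the thresholds, the 23-composites-per-year assumption and the simple 4-connected growth rule are illustrative, not the paper's fitted models:

```python
import numpy as np
from collections import deque

def burnt_seeds(ndvi, t, drop_thr=0.2, annual_thr=0.15, period=23):
    """Candidate seed pixels at composite t: the NDVI drop from the previous
    composite AND the drop below the previous-year mean both exceed thresholds."""
    drop = ndvi[t - 1] - ndvi[t]
    annual_mean = ndvi[max(0, t - period):t].mean(axis=0)
    return (drop > drop_thr) & ((annual_mean - ndvi[t]) > annual_thr)

def grow_scar(seed_mask, candidate_mask):
    """Extend each seed into a fire scar over 4-connected candidate pixels."""
    scar = seed_mask.copy()
    queue = deque(zip(*np.nonzero(seed_mask)))
    h, w = seed_mask.shape
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and candidate_mask[ny, nx] and not scar[ny, nx]:
                scar[ny, nx] = True
                queue.append((ny, nx))
    return scar

ndvi = np.random.rand(46, 50, 50)   # two years of synthetic composites (T, H, W)
seeds = burnt_seeds(ndvi, t=30)
candidates = burnt_seeds(ndvi, t=30, drop_thr=0.1, annual_thr=0.08)
scar_map = grow_scar(seeds, candidates)
```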
Abstract:
In this paper we present a novel image processing algorithm providing good preliminary capabilities for in vitro detection of malaria. The proposed concept is based on analysis of the temporal variation of each pixel. Changes in dark pixels indicate that intracellular activity has occurred, revealing the presence of the malaria parasite inside the cell. Preliminary experimental results, involving the analysis of red blood cells that were either healthy or infected with malaria parasites, validated the potential benefit of the proposed numerical approach.
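A hedged sketch of per-pixel temporal variation analysis on a frame stack is given below; the grey-level and variation thresholds are placeholders, not values from the paper:

```python
import numpy as np

def flag_active_dark_pixels(frames, dark_thr=60, var_thr=8.0):
    """frames: (T, H, W) grey-level stack of the same field of view.
    Dark pixels (cell interiors) whose grey level fluctuates strongly over
    time are flagged as showing intracellular activity."""
    frames = frames.astype(float)
    dark = frames.mean(axis=0) < dark_thr      # persistently dark pixels
    temporal_std = frames.std(axis=0)          # per-pixel variation over time
    return dark & (temporal_std > var_thr)

frames = np.random.randint(0, 256, (100, 128, 128))
active = flag_active_dark_pixels(frames)
print(f"{active.sum()} pixels show strong temporal variation inside dark regions")
```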
Abstract:
Doctoral thesis, Biomedical Engineering and Biophysics, Universidade de Lisboa, Faculdade de Ciências, 2016
Abstract:
This dataset contains continuous time series of land surface temperature (LST) at a spatial resolution of 300 m around the 12 experimental sites of the PAGE21 project (grant agreement number 282700, funded by the EC Seventh Framework Programme theme FP7-ENV-2011). The dataset was produced from hourly LST time series at 25 km scale, retrieved from SSM/I data (André et al., 2015, doi:10.1016/j.rse.2015.01.028) and downscaled to 300 m using a dynamic model and a particle smoothing approach. The methodology is based on two main assumptions: first, that LST spatial variability is mostly explained by land cover and soil hydric state; second, that LST is unique for a given land cover class within the low-resolution pixel. Given these hypotheses, this variable can be estimated using a land cover map and a physically based land surface model constrained with observations through a data assimilation process. This methodology, described in Mechri et al. (2014, doi:10.1002/2013JD020354), was applied to the ORCHIDEE land surface model (Krinner et al., 2005, doi:10.1029/2003GB002199) to estimate prior values for each land cover class provided by the ESA CCI-Land Cover product (Bontemps et al., 2013) at 300 m resolution. The assimilation process (particle smoother) consists of simulating ensembles of LST time series for each land cover class and for a large number of parameter sets. For each parameter set, the resulting temperatures are aggregated according to the grid fraction of each land cover class and compared to the coarse observations. Minimizing the distance between the aggregated model solutions and the observations allows us to select the simulated LST and the corresponding parameter sets that fit the observations most closely. The retained parameter sets are then duplicated and randomly perturbed before simulating the next time window. In the end, the most likely LST of each land cover class is estimated and used to reconstruct LST maps at 300 m resolution using ESA CCI-Land Cover. The resulting temperature maps, in which ice pixels were masked, are provided at a daily time step for the 2000-2009 analysis period.
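A hedged sketch of the selection step of such a particle smoother (aggregate per-class simulated LST by land-cover fractions, compare to the coarse observation, keep the closest particles) is shown below; array shapes, the RMS misfit and the number of retained particles are illustrative assumptions, not the published configuration:

```python
import numpy as np

def aggregate_to_coarse(class_lst, fractions):
    """class_lst: (n_particles, n_classes, n_times) simulated LST per land
    cover class; fractions: (n_classes,) cover fractions of the coarse pixel.
    Returns the aggregated coarse-scale LST per particle and time step."""
    return np.einsum("pct,c->pt", class_lst, fractions)

def select_particles(class_lst, fractions, obs, n_keep=50):
    """Keep the n_keep particles whose aggregated LST is closest (RMS) to
    the coarse observations -- the selection step of a particle smoother."""
    coarse = aggregate_to_coarse(class_lst, fractions)
    rms = np.sqrt(np.nanmean((coarse - obs[None, :]) ** 2, axis=1))
    best = np.argsort(rms)[:n_keep]
    return class_lst[best], rms[best]

# Synthetic example: 500 particles, 4 land cover classes, 24 hourly steps.
rng = np.random.default_rng(1)
particles = rng.normal(280, 5, (500, 4, 24))
fractions = np.array([0.4, 0.3, 0.2, 0.1])
observations = rng.normal(280, 5, 24)
kept, misfit = select_particles(particles, fractions, observations)
```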