894 results for techniques: image processing


Relevance: 90.00%

Abstract:

In this paper, we demonstrate a digital signal processing (DSP) algorithm for improving the spatial resolution of images captured by CMOS cameras. The basic approach is to reconstruct a high-resolution (HR) image from a shift-related low-resolution (LR) image sequence. The aliasing relationship between the Fourier transforms of discrete and continuous images in the frequency domain is used to map the LR images to an HR image. The method of projection onto convex sets (POCS) is applied to find the best estimate of pixel matching from the LR images to the reconstructed HR image. Computer simulations and preliminary experimental results show that the algorithm works effectively for post-capture processing in CMOS cameras. It can also be applied to HR digital image reconstruction whenever the shift information of the LR image sequence is known.
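
A hedged, one-dimensional sketch of the POCS projection step may help fix ideas: each shifted LR sequence defines a consistency set, and the HR estimate is repeatedly projected onto every set. The sampling model (every second HR sample, offset by a known shift) and all names are illustrative assumptions, not the paper's exact formulation.

```python
# Toy 1-D POCS-style reconstruction from shifted low-resolution
# (LR) sequences; the sampling model is an illustrative assumption.

def pocs_reconstruct(lr_seqs, shifts, hr_len, n_iter=10):
    """Each LR sequence holds every 2nd HR sample, offset by its shift.
    Projecting onto an observation's consistency set overwrites the
    estimate with that observation's known samples."""
    est = [0.0] * hr_len
    for _ in range(n_iter):
        for y, s in zip(lr_seqs, shifts):
            for k, v in enumerate(y):
                est[2 * k + s] = v  # projection onto this set
    return est

hr = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]   # ground-truth HR signal
lr0, lr1 = hr[0::2], hr[1::2]         # two shift-related LR sequences
rec = pocs_reconstruct([lr0, lr1], [0, 1], len(hr))
```

With shifts 0 and 1 the two consistency sets intersect in a single point, so the iteration recovers the HR signal exactly; real 2-D images with subpixel shifts converge to a best estimate instead.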

Relevance: 90.00%

Abstract:

Context: Mobile applications support a set of user-interaction features that are independent of the application logic. Rotating the device, scrolling, or zooming are examples of such features. Some bugs in mobile applications can be attributed to user-interaction features. Objective: This paper proposes and evaluates a bug analyzer based on user-interaction features that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed to decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. Manual testing of the applications found 17 bugs, whereas image processing detected 15 bugs. Conclusions: 8 of the 15 mobile applications tested had bugs associated with user-interaction features. Our image-processing-based bug analyzer detected 88% (15 out of 17) of the user-interaction bugs found with manual testing.
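
A hedged sketch of the decision stage may clarify the method: given interest-point descriptors from the "before" and "after" screenshots (the SURF extraction itself is omitted here), descriptors are matched by nearest neighbour and a bug is flagged when the similarity percentage falls below a threshold. The matching rule, the threshold value, and the toy descriptors are illustrative assumptions.

```python
# Toy similarity decision between two descriptor sets; the matching
# rule and thresholds are illustrative assumptions, not the paper's.

def similarity(desc_a, desc_b, max_dist=0.5):
    """Percentage of descriptors in desc_a with a close match in desc_b."""
    def dist(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5
    matched = sum(
        1 for d in desc_a
        if any(dist(d, e) <= max_dist for e in desc_b)
    )
    return 100.0 * matched / max(len(desc_a), 1)

def has_bug(desc_a, desc_b, threshold=70.0):
    return similarity(desc_a, desc_b) < threshold

before = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.5)]
after_ok = [(0.1, 0.0), (1.0, 0.9), (2.0, 0.6)]    # small rendering shifts
after_bad = [(9.0, 9.0), (8.0, 8.0), (7.0, 7.0)]   # layout destroyed
```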

Relevance: 90.00%

Abstract:

The use of digital image processing techniques is prominent in medical settings for the automatic diagnosis of diseases. Glaucoma is the second leading cause of blindness in the world and has no cure. Treatments exist to prevent vision loss, but the disease must be detected in its early stages. The objective of this work is therefore to develop an automatic method for detecting glaucoma in retinal images. The methodology comprised: acquisition of an image database, optic disc segmentation, texture feature extraction in different color models, and classification of images as glaucomatous or healthy. We obtained an accuracy of 93%.
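
One texture feature commonly used in such pipelines is the contrast of a grey-level co-occurrence matrix (GLCM); a minimal sketch is shown below. The toy patches and the single horizontal offset are illustrative assumptions; the paper's actual features span several colour models.

```python
# GLCM-style contrast over horizontally adjacent pixel pairs;
# the patches below are illustrative toy data.

def glcm_contrast(patch):
    """Mean of (i - j)^2 over all horizontal grey-level pairs."""
    pairs = [
        (row[c], row[c + 1])
        for row in patch
        for c in range(len(row) - 1)
    ]
    return sum((i - j) ** 2 for i, j in pairs) / len(pairs)

smooth = [[5, 5, 5], [5, 5, 5]]   # uniform texture -> zero contrast
rough = [[0, 7, 0], [7, 0, 7]]    # alternating texture -> high contrast
```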

Relevance: 90.00%

Abstract:

Humans have a high ability to extract information from visual data acquired by sight. Through a learning process, which starts at birth and continues throughout life, image interpretation becomes almost instinctive. At a glance, one can easily describe a scene with reasonable precision, naming its main components. Usually, this is done by extracting low-level features such as edges, shapes and textures, and associating them with high-level meanings; in this way, a semantic description of the scene is produced. An example of this is the human capacity to recognize and describe other people's physical and behavioral characteristics, or biometrics. Soft biometrics also represent inherent characteristics of the human body and behaviour, but do not allow unique identification of a person. Computer vision aims to develop methods capable of performing visual interpretation with performance similar to that of humans. This thesis proposes computer vision methods that extract high-level information from images in the form of soft biometrics. The problem is approached in two ways: with unsupervised and with supervised learning methods. The first seeks to group images via automatically learned feature extraction, combining convolution techniques, evolutionary computing and clustering; the images employed in this approach contain faces and people. The second approach employs convolutional neural networks, which can operate on raw images, learning both the feature extraction and the classification processes; here, images are classified according to gender and clothing, the latter divided into the upper and lower parts of the human body. The first approach, tested on different image datasets, obtained an accuracy of approximately 80% for faces versus non-faces and 70% for people versus non-people. The second, tested on images and videos, obtained an accuracy of about 70% for gender, 80% for upper-body clothing and 90% for lower-body clothing.
The results of these case studies show that the proposed methods are promising, enabling automatic high-level image annotation. This opens possibilities for applications in areas such as content-based image and video retrieval and automatic video surveillance, reducing the human effort spent on manual annotation and monitoring.
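
The building block these networks stack, the 2-D convolution, can be sketched in a few lines. In a trained network the filter weights are learned; the hand-chosen edge filter and toy image here are purely illustrative assumptions.

```python
# Valid (no-padding) 2-D convolution with a hand-chosen filter;
# in a CNN the kernel weights would be learned from data.

def conv2d(img, kernel):
    kh, kw = len(kernel), len(kernel[0])
    out = []
    for r in range(len(img) - kh + 1):
        row = []
        for c in range(len(img[0]) - kw + 1):
            row.append(sum(
                img[r + i][c + j] * kernel[i][j]
                for i in range(kh)
                for j in range(kw)
            ))
        out.append(row)
    return out

edge = [[-1, 1]]           # horizontal gradient filter
img = [[0, 0, 5, 5],
       [0, 0, 5, 5]]       # toy image with one vertical edge
resp = conv2d(img, edge)   # responds only where the intensity jumps
```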

Relevance: 90.00%

Abstract:

The analysis of fluid behavior in multiphase flow is very relevant to guaranteeing system safety. The use of equipment to describe such behavior is constrained by factors such as high investment and the need for specialized labor. Applying image processing techniques to flow analysis can be a good alternative, yet very little research has been done on the subject. This study therefore develops a new approach to image segmentation based on the Level Set method, connecting active contours with prior knowledge. A shape model of the target object is trained and defined through a point distribution model, and this model is then inserted as one of the extension velocity functions for the evolution of the curve at the zero level of the level set method. The proposed approach builds a framework consisting of three energy terms and an extension velocity function, λL_g(φ) + νA_g(φ) + μP(φ) + θ_f. The first three terms are the same ones introduced in (Li, Xu & Fox, 2005), and the last term, θ_f, is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restricted Level Set - RLS) and one without restriction (Free Level Set - FLS). The first is used to segment images containing targets with little variation in shape and pose; the second is used to correctly identify the shape of the bubbles in liquid-gas two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of liquid-gas two-phase flows and on the HTZ image dataset (FERRARI et al., 2009). The results confirm the good performance of the proposed algorithms (RLS and FLS) and indicate that the approach may serve as an efficient way to validate and/or calibrate the various existing meters for two-phase flow properties, as well as in other image segmentation problems.
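
The level set machinery both variants build on can be caricatured in one dimension: the contour is the zero crossing of φ, and evolving φ under a speed function moves that contour implicitly. The grid, the constant speed, and the step sizes below are toy assumptions, not the scheme used in the thesis.

```python
# 1-D toy level set evolution: phi <- phi - dt * F * |dphi/dx|.
# With F > 0 the region where phi < 0 (the "inside") expands.

def evolve(phi, speed=1.0, dt=0.2, steps=5):
    for _ in range(steps):
        new = phi[:]
        for i in range(1, len(phi) - 1):
            grad = (phi[i + 1] - phi[i - 1]) / 2.0  # central difference
            new[i] = phi[i] - dt * speed * abs(grad)
        phi = new
    return phi

# signed distance to an interval: negative inside, positive outside
phi0 = [abs(i - 5) - 2.0 for i in range(11)]
phi1 = evolve(phi0)
inside0 = sum(1 for v in phi0 if v < 0)
inside1 = sum(1 for v in phi1 if v < 0)
```

The interface is never tracked explicitly; it is simply wherever φ changes sign, which is what lets the full method handle the topology changes of merging and splitting bubbles.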

Relevance: 90.00%

Abstract:

Abstract: Medical image processing in general, and brain image processing in particular, are computationally intensive tasks. Fortunately, they can be accelerated by means of techniques such as GPU programming. In this article we study NiftyReg, a brain image processing library with a GPU implementation using CUDA, and analyse different possible ways of further optimising the existing code. We focus on fully exploiting the memory hierarchy and the computational power of the CPU. The ideas that led us to the different attempts to change and optimise the code are stated as hypotheses, which we then test empirically using the results obtained from running the application. Finally, for each set of related optimisations we study the validity of the obtained results in terms of both performance and the accuracy of the resulting images.

Relevance: 90.00%

Abstract:

The challenge in teaching-innovation activities lies in the need to propose new methods and strategies that broaden and harmonise every kind of available resource in order to strengthen the outcomes of the teaching-learning process. In the metamorphic rocks course, it is very common to find that students have difficulties with petrographic analysis, mineral identification, textural patterns, and their relation to blastesis curves; for this reason, digital image analysis (ADI, for its Spanish acronym) was implemented as a pedagogical tool to facilitate their learning.

Relevance: 90.00%

Abstract:

Digital rock physics combines modern imaging with advanced numerical simulations to analyze the physical properties of rocks. In this paper we suggest a special segmentation procedure, which is applied to a carbonate rock from Switzerland. The starting point is a CT scan of a specimen of Hauptmuschelkalk. The first step applied to the raw image data is a non-local means filter. We then apply different thresholds to identify pore and solid phases. Because we are aware of a non-negligible amount of unresolved microporosity, we also define intermediate phases. Based on this segmentation we determine porosity-dependent values for the P-wave velocity and for the permeability. The porosity measured in the laboratory is then used to compare our numerical data with experimental data, and we observe good agreement. Future work includes an analytic validation of the numerical upper bound on the P-wave velocity, employing different filters for the image segmentation, and using data with higher resolution.
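
The two-threshold segmentation with an intermediate phase can be sketched as follows; the grey-value thresholds, the toy slice, and the assumption that an intermediate voxel is half-porous are all illustrative.

```python
# Multi-threshold phase labelling with an intermediate
# (unresolved-microporosity) phase; thresholds are toy assumptions.

def segment(voxels, t_pore=80, t_solid=160):
    labels = []
    for v in voxels:
        if v < t_pore:
            labels.append("pore")
        elif v > t_solid:
            labels.append("solid")
        else:
            labels.append("micro")  # intermediate phase
    return labels

def porosity(labels, micro_fraction=0.5):
    """Total porosity, counting intermediate voxels as partly porous."""
    return (labels.count("pore")
            + micro_fraction * labels.count("micro")) / len(labels)

slice_values = [10, 200, 120, 30, 190, 100]  # toy grey values
labels = segment(slice_values)
phi = porosity(labels)
```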

Relevance: 90.00%

Abstract:

Inter-subject parcellation of functional Magnetic Resonance Imaging (fMRI) data based on a standard General Linear Model (GLM) and spectral clustering was recently proposed as a means to alleviate the issues associated with spatial normalization in fMRI. However, for all its appeal, a GLM-based parcellation approach introduces its own biases, in the form of a priori knowledge about the shape of the Hemodynamic Response Function (HRF) and task-related signal changes, or about the subject's behaviour during the task. In this paper, we introduce a data-driven version of the spectral clustering parcellation, based on Independent Component Analysis (ICA) and Partial Least Squares (PLS) instead of the GLM. First, a number of independent components are automatically selected. Seed voxels are then obtained from the associated ICA maps, and we compute the PLS latent variables between the fMRI signal of the seed voxels (which covers regional variations of the HRF) and the principal components of the signal across all voxels. Finally, we parcellate all subjects' data with a spectral clustering of the PLS latent variables. We present results of the application of the proposed method on both single-subject and multi-subject fMRI datasets. Preliminary experimental results, evaluated with the intra-parcel variance of GLM t-values and PLS-derived t-values, indicate that this data-driven approach improves parcellation accuracy over GLM-based techniques.
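
One ingredient of this pipeline, the leading PLS weight vector between the voxel signals and a single seed signal, can be sketched with a power iteration on the (here rank-one) cross-covariance. The data, the dimensions, and the reduction to a single seed are toy assumptions; the ICA seed selection and the spectral clustering steps are omitted.

```python
# First PLS weight vector via power iteration on M = c c^T,
# where c = X^T y; toy data with a single seed signal.

def pls_weight(x, y, iters=20):
    p = len(x[0])
    # cross-covariance between each column of X and the seed signal y
    c = [sum(row[j] * yi for row, yi in zip(x, y)) for j in range(p)]
    w = [1.0] * p
    for _ in range(iters):
        s = sum(wi * ci for wi, ci in zip(w, c))   # w <- M w = c (c . w)
        w = [s * ci for ci in c]
        norm = sum(v * v for v in w) ** 0.5
        w = [v / norm for v in w]
    return w

seed = [1.0, -1.0, 2.0, -2.0]       # seed-voxel signal
noise = [1.0, 1.0, -1.0, -1.0]      # component orthogonal to the seed
X = [[a, b] for a, b in zip(seed, noise)]
w = pls_weight(X, seed)             # loads only on the correlated column
```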

Relevance: 90.00%

Abstract:

Master's dissertation, Geomatics, Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2015.

Relevance: 90.00%

Abstract:

In this work, we describe the growth of NaCl crystals from evaporating droplets of aqueous solution monitored with infrared thermography. Over the course of the evaporation experiments, variations in the recorded signal were observed and interpreted as the result of evaporation and crystallisation. In particular, we observed sharp, transient decreases in the thermosignal during the later stages of high-concentration drop evaporation. The number of such events per experiment, referred to as “pop-cold events”, varied from 1 to over 100, with durations from 1 to 15 s. These events are interpreted as a consequence of the top-supplied creeping (TSC) of the solution feeding the growth of efflorescence-like crystals. This phenomenon occurred when the solution was no longer macroscopically visible; in this case, efflorescence-like crystals with a spherulite shape grew around previously formed cubic crystals. Other crystal morphologies were also observed, but these were likely fed by mass diffusion or bottom-supplied creeping (BSC) and were not associated with “pop-cold events”; they included cubic crystals at the centre of droplets, ring-shaped crystals at their edge, and fan-shaped crystals. After complete evaporation, the numbers and sizes of the different types of crystals were analysed using image processing, revealing clear differences in size and distribution in relation to the salt concentration. Infrared thermography permitted a level of quantification that was previously possible only with other techniques; as an example, the intermittent efflorescence growth process was clearly observed and measured for the first time using infrared thermography.
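
The counting step after complete evaporation can be sketched as connected-component labelling of a binarised crystal image; the 4-connectivity flood fill and the toy image are assumptions, since the study does not specify its pipeline at this level.

```python
# Label 4-connected foreground components and return their sizes.

def component_sizes(img):
    rows, cols = len(img), len(img[0])
    seen = [[False] * cols for _ in range(rows)]
    sizes = []
    for r in range(rows):
        for c in range(cols):
            if img[r][c] and not seen[r][c]:
                stack, size = [(r, c)], 0
                seen[r][c] = True
                while stack:
                    i, j = stack.pop()
                    size += 1
                    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ni, nj = i + di, j + dj
                        if (0 <= ni < rows and 0 <= nj < cols
                                and img[ni][nj] and not seen[ni][nj]):
                            seen[ni][nj] = True
                            stack.append((ni, nj))
                sizes.append(size)
    return sizes

crystals = [
    [1, 1, 0, 0],
    [1, 0, 0, 1],
    [0, 0, 1, 1],
]                                # two toy "crystals"
sizes = component_sizes(crystals)
```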

Relevance: 90.00%

Abstract:

This work presents the results of a survey of the oil-producing region of the city of Macau, on the northern coast of Rio Grande do Norte. All work was performed under the Project for Monitoring Environmental Change and the Influence of Hydrodynamic Forcing on the Morphology of Beach Grass Fields, Serra Potiguar, in Macau, with the support of the Geoprocessing Laboratory, linked to PRH22 - Training Program in Geology, Geophysics and Information Technology for Oil and Gas - Department of Geology/CCET/UFRN - and the Post-Graduate Program in Petroleum Science and Engineering/PPGCEP/UFRN. Within an economic-ecological context, this work assesses the importance of the mangrove ecosystem in the region of Macau and its surroundings, and then investigates potential areas for reforestation and/or environmental restoration projects. First, the ecological potential of the mangrove forests was confirmed, with their primary functions being: (i) protection and stabilization of the shoreline; (ii) nursery of marine life; (iii) source of organic matter for aquatic ecosystems; and (iv) refuge for species, among others. In the second phase, using Landsat imagery and Digital Image Processing (DIP) techniques, about 18,000 hectares of land were identified that could be used in environmental projects eligible under the rules of the Kyoto Protocol for the carbon market. The results also revealed a total area of 14,723.75 hectares occupied by shrimp farming and salt production that could be harnessed for the social, economic and environmental potential of the region, considering that over 60% of this area, i.e., 8,800 hectares, could be used to plant the genus Avicennia, considered in the literature the species that best sequesters atmospheric carbon, reaching a mean value of 59.79 tons/ha of mangrove.

Relevance: 90.00%

Abstract:

Phyllotaxis patterns in plants, the radial arrangement of leaves and flowers around the shoot, have fascinated both biologists and mathematicians for centuries. The current model of this process involves the lateral transport of the hormone auxin through the first layer of cells in the shoot apical meristem via the auxin efflux carrier protein PIN1. Locations around the meristem with high auxin concentration are sites of organ formation and differentiation. Many of the molecular players in this process are well known and characterized, and computer models composed of all these components are able to reproduce many of the observed phyllotaxis patterns. To understand which parts of this model have a large effect on the phenotype, I automated parameter testing and tried many different parameter combinations. The results showed that cell size and meristem size should have the largest effect on phyllotaxis. This led to three questions: (1) How is cell geometry regulated? (2) Does cell size affect auxin distribution? (3) Does meristem size affect phyllotaxis? To answer the first question, I tracked cell divisions in live meristems and quantified the geometry of the cells and the division planes using advanced image processing techniques. The results show that cell shape is maintained by minimizing the length of the new wall and by minimizing the difference in area between the daughter cells. To answer the second question, I observed auxin patterning in the meristem, shoot, leaves, and roots of Arabidopsis mutants with larger and smaller cell sizes. In the meristem and shoot, cell size plays an important role in determining the distribution of auxin; observations of auxin in the root and leaves are less definitive. To answer the third question, I measured meristem sizes and phyllotaxis patterns in mutants with altered meristem sizes. These results show no correlation between meristem size and average divergence angle, although in an extreme case, making the meristem very small does lead to a switch in the observed phyllotaxis, in accordance with the model.
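
The division rule reported above (shortest new wall among equal-area candidates) can be caricatured for a rectangular cell; the geometry is an illustrative assumption, far simpler than the imaged meristem cells.

```python
# For a width x height rectangle, both axis-aligned bisecting walls
# split the area equally, so the rule reduces to the shorter wall.

def shortest_equal_wall(width, height):
    walls = {"vertical": height,    # spans the height, halves the width
             "horizontal": width}   # spans the width, halves the height
    return min(walls, key=walls.get)
```

For an elongated cell the shorter wall always cuts across the long axis, which matches the behaviour described for the tracked divisions.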

Relevance: 90.00%

Abstract:

It is well known that rib cage dimensions depend on gender and vary with the age of the individual. Under this setting it is reasonable to assume that a computational approach to the problem can be devised; consequently, this work focuses on the development of an Artificial Intelligence-grounded decision support system to predict an individual's age based on such measurements. On the one hand, using some basic image processing techniques, these descriptors (namely, the rib cage's maximum width and height) were extracted from chest X-rays. On the other hand, the computational framework was built on top of a Logic Programming Case-Based approach to knowledge representation and reasoning, which caters for the handling of incomplete, unknown, or even contradictory information. Furthermore, clustering methods based on similarity analysis among cases were used to distinguish and aggregate collections of historical data in order to reduce the search space, thereby enhancing case retrieval and the overall computational process. The accuracy of the proposed model is satisfactory, close to 90%.
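
The cluster-then-retrieve idea can be sketched as follows; the feature pair (rib cage width, height), the toy cases, and the nearest-centroid rule are illustrative assumptions, not the system's actual similarity analysis.

```python
# Retrieve the closest historical case, searching only the cluster
# whose centroid is nearest to the query (reduced search space).

def dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def centroid(cases):
    feats = [c[0] for c in cases]
    n = len(feats)
    return tuple(sum(f[i] for f in feats) / n for i in range(len(feats[0])))

def retrieve(clusters, query):
    best = min(clusters, key=lambda cl: dist(centroid(cl), query))
    return min(best, key=lambda c: dist(c[0], query))

# cases: ((max width, max height) from a chest X-ray, age) -- toy data
young = [((20.0, 14.0), 8), ((21.0, 15.0), 10)]
adult = [((30.0, 22.0), 40), ((31.0, 23.0), 45)]
match = retrieve([young, adult], (29.5, 22.5))
```

Only the nearest cluster is scanned case by case, which is the search-space reduction the abstract attributes to the similarity-based clustering.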