912 results for "Efficient image processing"


Relevance: 90.00%

Abstract:

Context: Mobile applications support a set of user-interaction features that are independent of the application logic; rotating the device, scrolling, and zooming are examples. Some bugs in mobile applications can be attributed to these user-interaction features. Objective: This paper proposes and evaluates a bug analyzer based on user-interaction features that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed and used to decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. Manual testing found 17 bugs, whereas image processing detected 15. Conclusions: 8 of the 15 mobile applications tested had bugs associated with user-interaction features. Our image-processing-based bug analyzer detected 88% (15 of 17) of the user-interaction bugs found with manual testing.
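The decision rule the abstract describes can be sketched as follows. This is a minimal illustration only: the similarity formula and the 70% threshold are assumptions (the abstract does not give exact values), and the SURF keypoint matching step itself is omitted.

```python
def similarity_percentage(matches: int, keypoints_before: int,
                          keypoints_after: int) -> float:
    """Similarity as the fraction of matched interest points over the
    smaller keypoint set (a hypothetical definition; the paper does not
    state the exact formula)."""
    denom = min(keypoints_before, keypoints_after)
    if denom == 0:
        return 0.0
    return 100.0 * matches / denom

def is_bug(similarity: float, threshold: float = 70.0) -> bool:
    # Low before/after similarity flags a user-interaction bug.
    # The threshold value is an assumption for illustration.
    return similarity < threshold

# 12 of 35 points matched -> ~34% similar -> flagged as a bug.
print(is_bug(similarity_percentage(12, 40, 35)))
```

In practice the two screenshots would be fed through a detector/descriptor (SURF in the paper) and a descriptor matcher to obtain the keypoint and match counts.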

Relevance: 90.00%

Abstract:

Efficient crop monitoring and pest damage assessment are key to protecting the Australian agricultural industry and ensuring its leading position internationally. An important element in pest detection is gathering reliable crop data frequently and integrating analysis tools for decision making. Unmanned aerial systems are emerging as a cost-effective solution to a number of precision agriculture challenges. An important advantage of this technology is that it provides a non-invasive aerial sensor platform to accurately monitor broad-acre crops. In this presentation, we give an overview of how unmanned aerial systems and machine learning can be combined to address crop protection challenges. A 2015 study on insect damage in sorghum illustrates the effectiveness of this methodology. A UAV platform equipped with a high-resolution camera was deployed to autonomously perform a flight pattern over the target area. We describe the image processing pipeline implemented to create a georeferenced orthoimage and visualize the spatial distribution of the damage. An image analysis tool has been developed to minimize human input requirements. The computer program is based on a machine learning algorithm that automatically partitions the image into meaningful clusters. Results show the algorithm delivers decision boundaries that accurately classify the field into crop health levels. The methodology presented here represents an avenue for further research towards automated crop protection assessments in the cotton industry, with applications in detecting, quantifying and monitoring the presence of mealybug, mite and aphid pests.
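A minimal sketch of the kind of unsupervised partitioning described: 1-D k-means over a per-pixel "greenness" feature. The feature, the value of k, and the initialization are assumptions for illustration; the presentation does not specify the actual algorithm or features used.

```python
import random

def kmeans_1d(values, k, iters=20, seed=0):
    """Lloyd's algorithm on scalar features: alternately assign each
    value to its nearest center, then re-estimate centers as cluster
    means."""
    rng = random.Random(seed)
    centers = rng.sample(values, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Keep the old center if a cluster ends up empty.
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers, clusters

# Toy per-pixel "greenness": damaged patches low, healthy canopy high.
pixels = [0.10, 0.15, 0.12, 0.80, 0.85, 0.90]
centers, clusters = kmeans_1d(pixels, k=2)
```

With well-separated feature values the two centers land near the damaged and healthy means, giving the kind of decision boundary between crop health levels the abstract mentions.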

Relevance: 90.00%

Abstract:

The analysis of fluid behavior in multiphase flow is very relevant to guaranteeing system safety. The equipment used to describe such behavior is subject to constraints such as high investment costs and the need for specialized labor. Applying image processing techniques to flow analysis can be a good alternative; however, very little research has been done in this direction. This study therefore develops a new approach to image segmentation based on the Level Set method that connects active contours with prior knowledge. A shape model of the target object is trained and defined through a point distribution model, and this model is then inserted as one of the extension velocity functions for the evolution of the zero-level curve of the level set method. The proposed approach creates a framework consisting of three energy terms and an extension velocity function, λLg(φ) + νAg(φ) + μP(φ) + θf. The first three terms are the same ones introduced by Li, Xu and Fox (2005), and the last term, θf, is based on the representation of object shape proposed in this work. Two variations of the method are used: one restricted (Restrict Level Set, RLS) and one without restriction (Free Level Set, FLS). The first is used to segment images containing targets with little variation in shape and pose; the second is used to correctly identify the shape of the bubbles in gas-liquid two-phase flows. The efficiency and robustness of the RLS and FLS approaches are demonstrated on images of gas-liquid two-phase flows and on the HTZ image dataset (Ferrari et al., 2009). The results confirm the good performance of the proposed algorithms (RLS and FLS) and indicate that the approach may be used as an efficient method to validate and/or calibrate the various existing meters for two-phase flow properties, as well as in other image segmentation problems.

Relevance: 90.00%

Abstract:

Abstract: As time has passed, the general-purpose programming paradigm has evolved, producing different hardware architectures whose characteristics differ widely. In this work we demonstrate, through several applications from the field of image processing, the differences between three Nvidia hardware platforms: two GeForce graphics cards, the GTX 480 and the GTX 980, and a low-power platform designed to run embedded applications with high energy efficiency, the Jetson TK1. As test applications we use five examples from the Nvidia CUDA Samples. These applications are directly related to image processing, as the algorithms they use are similar to those employed in medical image registration. The tests show that the GTX 980 is both the device with the highest computational power and the one with the highest power consumption, that the Jetson TK1 is the most efficient platform, and that the GTX 480 produces more heat than the others; we also examine other effects produced by the architectural differences between the devices.
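The efficiency ranking hinges on performance per watt rather than raw speed, which is why an embedded board can beat a discrete GPU. A tiny sketch of that metric; the function name and all numbers here are hypothetical, not the paper's measurements:

```python
def perf_per_watt(pixels_processed, runtime_s, power_w):
    """Throughput (pixels/s) divided by average power draw (W).
    A hypothetical efficiency metric; the paper does not state how
    efficiency was quantified."""
    return (pixels_processed / runtime_s) / power_w

# Made-up illustration: a discrete GPU finishes the workload faster,
# but the embedded Jetson TK1 draws far less power.
gtx980 = perf_per_watt(1_000_000, 1.0, 165.0)   # fast, power-hungry
jetson = perf_per_watt(1_000_000, 8.0, 10.0)    # slow, frugal
```

Under these assumed figures the Jetson processes roughly twice as many pixels per joule despite being 8x slower, matching the qualitative conclusion above.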

Relevance: 90.00%

Abstract:

Abstract: Texture has good discriminating potential that complements radiometric parameters in the image classification process. The multiband Compact Texture Unit (CTU) index, recently developed by Safia and He (2014), extracts texture from several bands at once, thereby exploiting information ignored so far in traditional textural analyses: the interdependence between bands. However, this new tool has not yet been tested on multisource images, a use that can be of great interest considering, for example, the textural richness that radar can add to optical data through data combination. This study therefore completes the validation initiated by Safia (2014) by applying the CTU to an optical-radar image pair. Textural analysis of this dataset produced a "colour texture" image. The resulting texture bands were then combined with the original optical bands before being fed into a land-cover classification process in eCognition. The same classification procedure (without CTU) was applied to the optical data, the radar data, and the optical-radar combination, respectively. In addition, the CTU computed on the optical data alone (monosource) was compared with that derived from the optical-radar pair (multisource). Analysing the separability of these different bands using histograms, together with confusion matrices, allowed the performance of the different configurations and parameters to be compared. These comparisons show the CTU, and especially the multisource CTU, to be the most discriminating criterion; it adds variability to the image, yielding sharper segmentation and a classification that is both more detailed and more accurate. Indeed, accuracy rises from 0.50 with the optical image to 0.74 with the CTU image, while confusion drops from 0.30 (optical) to 0.02 (CTU).

Relevance: 90.00%

Abstract:

Biomedicine is a highly interdisciplinary research area at the interface of the sciences, anatomy, physiology, and medicine. In the last decade, biomedical studies have been greatly enhanced by the introduction of new technologies and techniques for automated quantitative imaging, considerably advancing the possibility of investigating biological phenomena through image analysis. However, the effectiveness of this interdisciplinary approach is bounded by the limited knowledge that a biologist and a computer scientist, by professional training, have of each other's fields. A possible way to make up for both these gaps is to train biologists to become interdisciplinary researchers able to develop dedicated image processing and analysis tools by exploiting a content-aware approach. The aim of this Thesis is to show the effectiveness of a content-aware approach to automated quantitative imaging by applying it to different biomedical studies, with the secondary purpose of motivating researchers to invest in interdisciplinarity. This content-aware approach was applied first to the phenomization of tumour cell response to stress by confocal fluorescence imaging, and second to the texture analysis of trabecular bone microarchitecture in micro-CT scans. Third, the approach served to characterize new 3-D multicellular spheroids of human stem cells and to investigate the role of the Nogo-A protein in tooth innervation. Finally, the content-aware approach also prompted the development of two novel methods for local image analysis and colocalization quantification. In conclusion, the content-aware approach has proved its benefit by producing new methods that improved the quality of image analysis and strengthened statistical significance, helping unveil biological phenomena. Hopefully, this Thesis will help inspire researchers to pursue interdisciplinarity.

Relevance: 80.00%

Abstract:

To investigate the degree of T2 relaxometry changes over time in groups of patients with familial mesial temporal lobe epilepsy (FMTLE) and asymptomatic relatives. We conducted both cross-sectional and longitudinal analyses of T2 relaxometry with Aftervoxel, in-house software for medical image visualization. The cross-sectional study included 35 subjects (26 with FMTLE and 9 asymptomatic relatives) and 40 controls; the longitudinal study comprised 30 subjects (21 with FMTLE and 9 asymptomatic relatives; mean MRI interval of 4.4 ± 1.5 years) and 16 controls. To increase the size of our groups of patients and relatives, we combined data acquired in 2 scanners (2T and 3T) and obtained z-scores using their respective controls. A general linear model in SPSS 21 was used for statistical analysis. In the cross-sectional analysis, elevated T2 relaxometry was identified in subjects with seizures and intermediate values in asymptomatic relatives compared to controls. Subjects with MRI signs of hippocampal sclerosis presented elevated T2 relaxometry in the ipsilateral hippocampus, while patients and asymptomatic relatives with normal MRI presented elevated T2 values in the right hippocampus. The longitudinal analysis revealed a significant increase in T2 relaxometry in the ipsilateral hippocampus exclusively in patients with seizures. The longitudinal increase of T2 signal in patients with seizures suggests the existence of an interaction between ongoing seizures and the underlying pathology, causing progressive damage to the hippocampus. The identification of elevated T2 relaxometry in asymptomatic relatives and in patients with normal MRI suggests that genetic factors may be involved in the development of some mild hippocampal abnormalities in FMTLE.
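Pooling data from two scanners via control-referenced z-scores, as described above, can be sketched like this (toy numbers for illustration; this is not the authors' code):

```python
from statistics import mean, stdev

def zscores(values, controls):
    """Normalize raw T2 relaxometry values against the control group
    acquired on the same scanner, so measurements from the 2T and 3T
    scanners become comparable and can be pooled."""
    m, s = mean(controls), stdev(controls)
    return [(v - m) / s for v in values]

# Toy example: a patient hippocampal T2 value well above its own
# scanner's controls yields a large positive z-score.
controls_2t = [100.0, 102.0, 98.0, 101.0, 99.0]
z = zscores([110.0], controls_2t)[0]
```

The same transformation applied with the 3T scanner's controls puts both cohorts on one scale before the group statistics are run.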

Relevance: 80.00%

Abstract:

The search for an Alzheimer's disease (AD) biomarker is one of the most relevant contemporary research topics due to the high prevalence and social costs of the disease. Functional connectivity (FC) of the default mode network (DMN) is a plausible candidate for such a biomarker. We evaluated 22 patients with mild AD and 26 age- and gender-matched healthy controls. All subjects underwent resting functional magnetic resonance imaging (fMRI) in a 3.0 T scanner. To identify the DMN, seed-based FC of the posterior cingulate was calculated. We also measured the sensitivity/specificity of the method, and verified a correlation with cognitive performance. We found a significant difference between patients with mild AD and controls in average z-scores: DMN, whole cortical positive (WCP) and absolute values. DMN individual values showed a sensitivity of 77.3% and specificity of 70%. DMN and WCP values were correlated to global cognition and episodic memory performance. We showed that individual measures of DMN connectivity could be considered a promising method to differentiate AD, even at an early phase, from normal aging. Further studies with larger numbers of participants, as well as validation of normal values, are needed for more definitive conclusions.
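At its core, seed-based FC correlates the seed region's time series with each voxel's time series. A minimal sketch with toy series (the real analysis involves far more preprocessing and statistics than shown here):

```python
from statistics import mean

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length
    time series."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x)
           * sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Toy fMRI signals: a posterior-cingulate seed and one voxel that
# tracks it closely (values are illustrative only).
seed = [1.0, 2.0, 3.0, 2.0, 1.0, 2.0]
voxel = [1.1, 2.2, 2.9, 1.8, 1.2, 2.1]
r = pearson_r(seed, voxel)
```

Repeating this for every voxel yields the connectivity map from which the DMN and the z-score summaries above are derived.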

Relevance: 80.00%

Abstract:

This study was designed to evaluate the correlation between computed tomography findings and data from the physical examination and the Friedman Staging System (FSS) in patients with obstructive sleep apnea (OSA). We performed a retrospective evaluation by reviewing the medical records of 33 patients (19 male and 14 female) with a mean body mass index of 30.38 kg/m(2) and mean age of 49.35 years. Among these patients, 14 presented with severe OSA, 7 had moderate OSA, 7 had mild OSA, and 5 were healthy. The patients were divided into 2 groups according to the FSS: group A comprised patients with FSS stage I or II, and group B comprised patients with FSS stage III. Using the Fisher exact test, a positive relationship was found between the FSS stage and the apnea-hypopnea index (P = .011) and between the FSS stage and body mass index (P = .012). There was no correlation of age (P = .55) or gender (P = .53) with the FSS stage. The analysis of variance test comparing upper airway volume between the 2 groups showed a significant difference (P = .018). In this sample the FSS and upper airway volume showed an inverse correlation and were useful in analyzing the mechanisms of airway collapse in patients with OSA.
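The Fisher exact test used above can be computed directly from the hypergeometric distribution. A self-contained sketch of the two-sided test; the study's actual 2x2 tables are not given, so the example numbers are illustrative:

```python
from math import comb

def fisher_exact_p(a, b, c, d):
    """Two-sided Fisher exact test p-value for the 2x2 table
    [[a, b], [c, d]], obtained by summing all hypergeometric outcomes
    whose probability does not exceed that of the observed table
    (a minimal sketch, not a stats library)."""
    n = a + b + c + d
    row1, col1 = a + b, a + c

    def p_of(x):
        # Hypergeometric probability of x successes in row 1.
        return comb(col1, x) * comb(n - col1, row1 - x) / comb(n, row1)

    p_obs = p_of(a)
    lo = max(0, row1 - (n - col1))
    hi = min(row1, col1)
    return sum(p_of(x) for x in range(lo, hi + 1)
               if p_of(x) <= p_obs + 1e-12)
```

For the classic example table [[1, 9], [11, 3]] this returns about 0.0028, matching standard statistical software.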

Relevance: 80.00%

Abstract:

Remotely sensed imagery has been widely used for land use/cover classification thanks to periodic data acquisition and the widespread use of digital image processing systems offering a wide range of classification algorithms. The aim of this work was to evaluate some of the most commonly used supervised and unsupervised classification algorithms under different landscape patterns found in Rondônia, including (1) areas of mid-size farms, (2) fish-bone settlements and (3) a gradient of forest and Cerrado (Brazilian savannah). Comparison with a reference map based on the kappa statistic yielded good to superior indicators (best results: K-means k=0.68, k=0.77, k=0.64 and MaxVer k=0.71, k=0.89, k=0.70, respectively, for the three areas mentioned). Results show that choosing a specific algorithm requires taking into account both its capacity to discriminate among various spectral signatures under different landscape patterns and a cost/benefit analysis considering the different steps performed by the operator producing a land cover/use map. It is suggested that a more systematic assessment of the several implementation options for a specific project is needed before beginning a land use/cover mapping task.
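The kappa statistic used for the map comparison above can be computed straight from a confusion matrix; a brief sketch (the matrix values are illustrative, not the study's data):

```python
def cohens_kappa(confusion):
    """Cohen's kappa for agreement between a classified map and a
    reference map, from a square confusion matrix (rows: reference
    classes, columns: classified classes)."""
    n = sum(sum(row) for row in confusion)
    # Observed agreement: fraction of samples on the diagonal.
    observed = sum(confusion[i][i] for i in range(len(confusion))) / n
    # Chance agreement: product of marginal proportions per class.
    expected = sum(
        sum(confusion[i]) * sum(row[i] for row in confusion)
        for i in range(len(confusion))
    ) / (n * n)
    return (observed - expected) / (1 - expected)
```

A matrix like [[20, 5], [10, 15]] gives kappa = 0.4: 70% raw agreement, corrected for the 50% expected by chance.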

Relevance: 80.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 80.00%

Abstract:

Universidade Estadual de Campinas. Faculdade de Educação Física

Relevance: 80.00%

Abstract:

OBJECTIVE: To evaluate the effects of carbon dioxide infiltration on adipocytes in the abdominal wall. METHODS: Fifteen volunteers underwent CO2 infusion sessions for three consecutive weeks (two sessions per week, with intervals of two to three days between sessions). The volume of carbon dioxide infused per session, at previously marked points, was always calculated from the surface of the area to be treated, with a fixed infused volume of 250 mL/100 cm² of treated surface. The infiltration points were marked 2 cm apart. At each point, 10 mL was injected per session at a flow rate of 80 mL/min. Fragments of subcutaneous tissue from the anterior abdominal wall were collected before and after treatment. The number of adipocytes and their histomorphological changes (mean diameter, perimeter, length, width, and number of adipocytes per observation field) were measured by computerized cytometry. Results were analyzed with the paired Student's t test at a 5% significance level (p<0.05). RESULTS: A significant reduction was found in the number of adipocytes in the abdominal wall and in their area, diameter, perimeter, length, and width after hypercapnia (p=0.0001). CONCLUSION: Percutaneous CO2 infiltration reduces the population and modifies the morphology of adipocytes in the anterior abdominal wall.

Relevance: 80.00%

Abstract:

PURPOSE: The ability to predict and understand which biomechanical properties of the cornea are responsible for the stability or progression of keratoconus may be an important clinical and surgical tool for the eye-care professional. We have developed a finite element model of the cornea that tries to predict keratoconus-like behavior and its evolution based on material properties of the corneal tissue. METHODS: Corneal material properties were modeled using bibliographic data, and corneal topography was based on literature values from a schematic eye model. Commercial software was used to simulate mechanical and surface properties when the cornea was subjected to different local parameters, such as elasticity. RESULTS: The simulation showed that, depending on the initial corneal surface shape, changes in local material properties and different intraocular pressure values induce a localized protuberance and an increase in curvature relative to the remaining portion of the cornea. CONCLUSIONS: This technique provides a quantitative and accurate approach to the problem of understanding the biomechanical nature of keratoconus. The implemented model has shown that changes in local corneal material properties and intraocular pressure are intrinsically related to keratoconus pathology and its shape/curvature.

Relevance: 80.00%

Abstract:

OBJECTIVE: To develop the instrumentation and software for wide-angle corneal topography using the traditional Placido disc. The goal is to allow mapping of a larger corneal region in topographers that use the Placido technique, through a simple adaptation of the target. METHODS: Using the traditional Placido disc of a conventional corneal topographer, 9 LEDs (light-emitting diodes) were fitted to the conical screen so that the volunteer could fix their gaze in different directions. For each direction, Placido images were digitized and processed to build, through an algorithm involving sophisticated computer graphics techniques, a complete three-dimensional map of the whole cornea. RESULTS: The results presented in this work show that a region up to 100% larger can be mapped using this technique, allowing the clinician to map almost to the corneal limbus. Results are presented for a spherical calibration surface and also for an in vivo cornea with a high degree of astigmatism, showing curvature and elevation. CONCLUSION: It is believed that this new technique can improve procedures such as contact lens fitting and algorithms for customized ablations for hyperopia, among others.