867 results for image processing and analysis
Abstract:
The use of fiber-reinforced plastics has increased in recent decades due to their unique properties. The advantages of their use are related to low weight and high strength and stiffness. Drilling of composite plates can be carried out on conventional machinery with some adaptations. However, the presence of typical defects such as delamination can affect the mechanical properties of the produced parts. In this paper, the influence of delamination on the bearing stress of drilled hybrid carbon+glass/epoxy quasi-isotropic plates is studied using image processing and analysis techniques. Results from the bearing test show that damage minimization is an important means of improving the mechanical properties of the joint area of the plate. The appropriateness of the image processing and analysis techniques used in the measurement of the damaged area is demonstrated.
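No code accompanies the abstract; below is a minimal, illustrative sketch of how a delaminated region around a drilled hole can be quantified from a greyscale radiograph by thresholding and area counting. The file name, the drill diameter in pixels and the assumption that damage appears darker than sound material are all hypothetical, not details from the paper.

```python
# Illustrative sketch only: quantify the damaged area around a drilled hole
# from a greyscale radiograph. File name, threshold choice and drill size
# are assumptions, not values from the paper.
import cv2
import numpy as np

img = cv2.imread("drilled_plate.png", cv2.IMREAD_GRAYSCALE)  # hypothetical input

# Separate dark (hole + damage) pixels from the brighter sound laminate.
_, mask = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)

# Nominal hole area, assuming a known drill diameter in pixels.
hole_diameter_px = 120.0
hole_area = np.pi * (hole_diameter_px / 2.0) ** 2

damaged_area = int(np.count_nonzero(mask)) - hole_area  # pixels beyond the hole
damage_ratio = damaged_area / hole_area

print(f"damaged area (px): {damaged_area:.0f}, damage ratio: {damage_ratio:.2f}")
```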
Abstract:
The characteristics of carbon fibre reinforced laminates have widened their use from aerospace to domestic appliances, and new possibilities for their usage emerge almost daily. In many of the possible applications, the laminates need to be drilled for assembly purposes. It is known that a drilling process that reduces the drill thrust force can decrease the risk of delamination. In this work, damage assessment methods based on data extracted from radiographic images are compared and correlated with mechanical test results—bearing test and delamination onset test—and analytical models. The results demonstrate the importance of an adequate selection of drilling tools and machining parameters to extend the life cycle of these laminates as a consequence of enhanced reliability.
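For context, one analytical damage measure commonly compared against image-based measurements in this literature is the conventional delamination factor, the ratio of the maximum delaminated diameter to the nominal hole diameter; the short sketch below computes it alongside a simple damaged-area ratio. The numeric values are placeholders, and the abstract does not state which measures the authors actually used.

```python
# Illustrative sketch: two common hole-damage metrics for drilled laminates.
# The numeric values below are placeholders, not data from the paper.

def delamination_factor(d_max_mm: float, d_nominal_mm: float) -> float:
    """Conventional delamination factor Fd = Dmax / D0."""
    return d_max_mm / d_nominal_mm

def damage_ratio(damaged_area_mm2: float, nominal_hole_area_mm2: float) -> float:
    """Area-based damage ratio, useful when the delamination contour is irregular."""
    return damaged_area_mm2 / nominal_hole_area_mm2

print(delamination_factor(7.2, 6.0))   # e.g. 1.20
print(damage_ratio(12.5, 28.3))        # e.g. 0.44
```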
Abstract:
Quality control in magnetic resonance imaging (MRI) involves a series of equipment tests and daily calibrations in which phantoms play a fundamental role. The main objective of this work was the development of a brain phantom for a 3.0 Tesla MRI system. Based on the existing literature, gadolinium(III) chloride (GdCl3), agarose and the gelling agent carrageenan were chosen as reagents, and the chemical preservative sodium azide (NaN3) was added to inhibit degradation of the solution. Several tests were carried out with different concentrations of the selected materials until mixtures suited to the magnetic susceptibility of cerebral white and grey matter were obtained. The T1 relaxation times of the various substances developed were measured, with the final phantom presenting T1 times of 702±10 ms for a GdCl3 concentration of 100 µmol (white matter) and 1179±23 ms for a concentration of 15 µmol (grey matter). The phantom T1 values were compared statistically with relaxation times obtained from a human brain, yielding a statistically significant correlation of 0.867. To demonstrate the applicability of the phantom, it was submitted to an MRI protocol comprising the sequences commonly used in brain studies. The main results showed that, in T1-weighted sequences, the phantom presents a strong positive association (rs > 0.700, p = 0.072) with the reference brain, although not statistically significant. T2-weighted sequences showed moderate to weak positive correlations, with proton-density weighting being the only one to present a negative association. The phantom thus proved to be an excellent substitute for the human brain. This work culminated in the creation of a three-dimensional brain model in which the white and grey matter regions were individualised, to be subsequently filled with the corresponding developed substances, yielding an anthropomorphic brain phantom.
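The abstract reports fitted T1 values but not the fitting procedure; the sketch below shows one common way to estimate T1 from inversion-recovery magnitudes with a mono-exponential model, using invented inversion times and signals rather than the phantom data.

```python
# Illustrative sketch: estimate T1 from inversion-recovery data with a
# mono-exponential model. TI values and signals below are invented.
import numpy as np
from scipy.optimize import curve_fit

def ir_signal(ti_ms, s0, t1_ms):
    """Magnitude inversion-recovery signal: |S0 * (1 - 2*exp(-TI/T1))|."""
    return np.abs(s0 * (1.0 - 2.0 * np.exp(-ti_ms / t1_ms)))

ti = np.array([50, 150, 300, 600, 1200, 2400, 4800], dtype=float)     # ms
sig = ir_signal(ti, 1000.0, 700.0) + np.random.normal(0, 5, ti.size)  # synthetic

popt, _ = curve_fit(ir_signal, ti, sig, p0=(sig.max(), 800.0))
print(f"fitted T1 ~ {popt[1]:.0f} ms")
```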
Abstract:
A recent trend in digital mammography is computer-aided diagnosis systems, which are computerised tools designed to assist radiologists. Most of these systems are used for the automatic detection of abnormalities. However, recent studies have shown that their sensitivity decreases significantly as the density of the breast increases. This dependence is method-specific. In this paper we propose a new approach to the classification of mammographic images according to their breast parenchymal density. Our classification uses information extracted from segmentation results and is based on the underlying breast tissue texture. Classification performance was assessed on a large set of digitised mammograms. The evaluation involves different classifiers and uses a leave-one-out methodology. The results demonstrate the feasibility of estimating breast density using image processing and analysis techniques.
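The abstract does not name the texture descriptors or classifiers; the sketch below only illustrates the general pattern of texture-feature extraction followed by leave-one-out evaluation, using grey-level co-occurrence statistics and synthetic patches in place of real mammograms.

```python
# Illustrative sketch: texture-based classification with leave-one-out
# evaluation. Synthetic patches stand in for real mammogram regions.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)

def texture_features(patch):
    glcm = graycomatrix(patch, distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    return [graycoprops(glcm, p)[0, 0] for p in ("contrast", "homogeneity", "energy")]

# Two synthetic "density classes": smooth patches vs. noisy patches.
patches = [rng.integers(100, 130, (64, 64), dtype=np.uint8) for _ in range(10)] + \
          [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
X = np.array([texture_features(p) for p in patches])
y = np.array([0] * 10 + [1] * 10)

scores = cross_val_score(KNeighborsClassifier(3), X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.2f}")
```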
Abstract:
In this article we provide a comprehensive literature review on the in vivo assessment of use-dependent brain structure changes in humans using magnetic resonance imaging (MRI) and computational anatomy. We highlight the recent findings in this field that allow the uncovering of the basic principles behind brain plasticity in light of the existing theoretical models at various scales of observation. Given the current lack of in-depth understanding of the neurobiological basis of brain structure changes, we emphasize the necessity of a paradigm shift in the investigation and interpretation of use-dependent brain plasticity. Novel quantitative MRI acquisition techniques provide access to brain tissue microstructural properties (e.g., myelin, iron, and water content) in vivo, thereby allowing unprecedented, specific insights into the mechanisms underlying brain plasticity. These quantitative MRI techniques require novel methods for image processing and analysis of longitudinal data that allow for straightforward interpretation and causal inference.
Abstract:
This technical report presents the classification, incidence, characteristics and diagnosis of the most frequent primary and secondary (metastatic) bone tumours, based on 145 digitised radiographs.
Abstract:
Print quality and the printability of paper are very important attributes when modern printing applications are considered. In prints containing images, high print quality is a basic requirement. Tone unevenness and non-uniform glossiness of printed products are the most disturbing factors influencing overall print quality. These defects are caused by non-ideal interactions of paper, ink and printing devices in high-speed printing processes. Since print quality is a perceptual characteristic, the measurement of unevenness according to human vision is a significant problem. In this thesis, the mottling phenomenon is studied. Mottling is a printing defect characterized by a spotty, non-uniform appearance in solid printed areas. Print mottle is usually the result of uneven ink lay-down or non-uniform ink absorption across the paper surface, especially visible in mid-tone imagery or areas of uniform color, such as solids and continuous-tone screen builds. By using existing knowledge on visual perception and known methods to quantify print tone variation, a new method for print unevenness evaluation is introduced. The method is compared to previous results in the field and is supported by psychometric experiments. Pilot studies were conducted to estimate the effect of the optical characteristics of the paper before printing on the unevenness of the printed area after printing. Instrumental methods for print unevenness evaluation have been compared, and the results of the comparison indicate that the proposed method produces better results in terms of correspondence with visual evaluation. The method has been successfully implemented as an industrial application and has proved to be a reliable substitute for visual expertise.
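The thesis proposes a perceptually weighted unevenness measure; as a much simpler point of reference, the sketch below computes a basic tile-wise coefficient-of-variation mottle index from a scanned solid print. The file name and tile size are assumptions, and this is not the method developed in the work.

```python
# Illustrative sketch: a basic tile-wise mottle index for a scanned solid
# print. This is NOT the perceptual method proposed in the thesis.
import numpy as np
import imageio.v3 as iio

img = iio.imread("solid_print_scan.png").astype(float)  # hypothetical scan
if img.ndim == 3:
    img = img.mean(axis=2)  # collapse to greyscale

tile = 32  # tile size in pixels (assumed)
h, w = (np.array(img.shape) // tile) * tile
tiles = img[:h, :w].reshape(h // tile, tile, w // tile, tile)
tile_means = tiles.mean(axis=(1, 3))  # average reflectance per tile

# Coefficient of variation of the tile means: higher -> more visible mottle.
mottle_index = tile_means.std() / tile_means.mean()
print(f"tile-wise mottle index: {mottle_index:.4f}")
```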
Abstract:
The papermaking industry has been continuously developing intelligent solutions to characterize the raw materials it uses, to control the manufacturing process in a robust way, and to guarantee the desired quality of the end product. Based on much improved imaging techniques and image-based analysis methods, it has become possible to look inside the manufacturing pipeline and propose more effective alternatives to human expertise. This study is focused on the development of image analysis methods for the pulping process of papermaking. Pulping starts with wood disintegration and formation of the fiber suspension, which is subsequently bleached, mixed with additives and chemicals, and finally dried and shipped to the papermaking mills. At each stage of the process it is important to analyze the properties of the raw material to guarantee product quality. In order to evaluate the properties of the fibers, the main component of the pulp suspension, a framework for fiber characterization based on microscopic images is proposed in this thesis as the first contribution. The framework allows computation of fiber length and curl index, which correlate well with the ground truth values. The bubble detection method, the second contribution, was developed in order to estimate the gas volume at the delignification stage of the pulping process based on high-resolution in-line imaging. The gas volume was estimated accurately and the solution enabled just-in-time process termination, whereas accurate estimation of bubble size categories remained challenging. As the third contribution of the study, optical flow computation was studied and the methods were successfully applied to pulp flow velocity estimation based on double-exposed images. Finally, a framework for classifying dirt particles in dried pulp sheets, including semisynthetic ground truth generation, feature selection, and a performance comparison of state-of-the-art classification techniques, was proposed as the fourth contribution. The framework was successfully tested on semisynthetic and real-world pulp sheet images. These four contributions assist in developing integrated, factory-level, vision-based process control.
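As a rough illustration of the fibre-characterisation idea, the sketch below estimates fibre length from a skeletonised binary mask and a curl index as (skeleton length / end-to-end distance) − 1, on a synthetic fibre. Both the synthetic data and this particular curl definition are assumptions, not the thesis' implementation.

```python
# Illustrative sketch: fibre length and curl index from a skeletonised mask.
# The synthetic fibre and the exact curl definition are assumptions.
import numpy as np
from skimage.morphology import skeletonize
from skimage.draw import bezier_curve

mask = np.zeros((200, 200), dtype=bool)
rr, cc = bezier_curve(20, 20, 100, 180, 180, 60, weight=1.0, shape=mask.shape)
mask[rr, cc] = True  # a thin, curved synthetic "fibre"

skel = skeletonize(mask)
ys, xs = np.nonzero(skel)

# Crude length estimate: number of skeleton pixels (1 px ~ 1 unit of length).
fibre_length = skel.sum()

# End-to-end distance between the first and last skeleton pixels in
# row-major order (approximate endpoints for this synthetic curve).
p_start = np.array([ys[0], xs[0]])
p_end = np.array([ys[-1], xs[-1]])
end_to_end = np.linalg.norm(p_end - p_start)

curl_index = fibre_length / end_to_end - 1.0
print(f"length ~ {fibre_length} px, curl index ~ {curl_index:.2f}")
```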
Abstract:
Laser scribing is currently a growing material processing method in industry. The benefits of laser scribing technology are being studied, for example, for improving the efficiency of solar cells. Due to the high quality requirements of the fast scribing process, it is important to monitor the process in real time to detect possible defects during processing. However, there is a lack of studies on real-time monitoring of laser scribing. Commonly used monitoring methods developed for other laser processes, such as laser welding, are too slow, and existing applications cannot be used for fast laser scribing monitoring. The aim of this thesis is to find a method for monitoring laser scribing with a high-speed camera and to evaluate the reliability and performance of the developed monitoring system experimentally. The laser used in the experiments is an IPG ytterbium pulsed fiber laser with a maximum average power of 20 W, and the scan head optics used with the laser is Scanlab's Hurryscan 14 II with an f100 telecentric lens. The camera was connected to the laser scanner using a camera adapter to follow the laser process. A powerful, fully programmable industrial computer was chosen to execute the image processing and analysis. Algorithms for defect analysis, based on particle analysis, were developed using LabVIEW system design software. The performance of the algorithms was assessed by analysing a non-moving image of the scribing line with a resolution of 960x20 pixels. As a result, the maximum analysis speed was 560 frames per second. The reliability of the algorithm was evaluated by imaging a scribing path with a variable number of defects at 2000 mm/s with the laser turned off; the image analysis speed was 430 frames per second. The experiment was successful and the algorithms detected all defects along the scribing path. The final monitoring experiment was performed during a laser process. However, it was challenging to make active laser illumination work with the laser scanner due to the physical dimensions of the laser lens and the scanner. For reliable defect detection, the illumination system needs to be replaced.
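The monitoring algorithms in the thesis were written in LabVIEW; purely to illustrate the particle-analysis idea and the throughput measurement, the sketch below runs a connected-component pass over a synthetic 960x20 scribe-line frame and times it. The threshold, defect size and synthetic frame are assumptions.

```python
# Illustrative sketch of particle-analysis defect detection on a synthetic
# 960x20 scribe-line frame (the thesis used LabVIEW; this is not that code).
import time
import numpy as np
import cv2

frame = np.full((20, 960), 200, dtype=np.uint8)   # bright, defect-free scribe line
frame[8:12, 400:415] = 30                          # one dark synthetic defect

def count_defects(img, min_area=10):
    _, mask = cv2.threshold(img, 100, 255, cv2.THRESH_BINARY_INV)
    n, _, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    # Label 0 is the background; keep blobs larger than min_area pixels.
    return sum(1 for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] >= min_area)

t0 = time.perf_counter()
n_frames = 1000
for _ in range(n_frames):
    defects = count_defects(frame)
fps = n_frames / (time.perf_counter() - t0)
print(f"defects per frame: {defects}, throughput ~ {fps:.0f} frames/s")
```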
Abstract:
Multispectral satellite images, particularly those with high spatial resolution (finer than 30 m on the ground), are an invaluable source of information for decision-making in various fields related to natural resource management, environmental preservation, or urban planning and management. Study scales can range from local (resolutions finer than 5 m) to regional (resolutions coarser than 5 m). These images characterise the variation of object reflectance across the spectrum, which is the key information for a large number of applications of these data. However, satellite sensor measurements are also affected by "parasitic" factors related to illumination and viewing conditions, the atmosphere, topography, and sensor properties. Two questions guided this research. What is the best approach for retrieving ground reflectances from the digital numbers recorded by the sensors while taking these parasitic factors into account? Is this retrieval an absolute prerequisite for extracting reliable information from the images for the problems specific to the various application domains (land mapping, environmental monitoring, landscape change detection, resource inventories, etc.)? Research conducted over the past 30 years has produced a series of techniques for correcting the data for the effects of these parasitic factors, some of which make it possible to retrieve ground reflectances. Several questions, however, remain open and others require further work, on the one hand to improve the accuracy of the results and, on the other, to make these techniques more versatile by adapting them to a wider range of data acquisition conditions. Among them: how can atmospheric characteristics (notably aerosol particles) adapted to local and regional conditions be taken into account, rather than relying on default models that reflect long-term spatio-temporal trends but fit poorly to instantaneous, spatially limited observations? How can the "contamination" of the signal from the target object by signals from surrounding objects (adjacency effect) be accounted for? This phenomenon becomes very important for images with resolutions finer than 5 m. What are the effects of off-nadir viewing angles, which are increasingly common since they offer better temporal resolution and the possibility of obtaining stereoscopic image pairs? How can the efficiency of automatic processing and analysis techniques for multispectral images be increased over rugged and mountainous terrain, taking into account the multiple effects of topographic relief on the remotely sensed signal? Moreover, despite numerous demonstrations by researchers that the information extracted from satellite images can be degraded by all these parasitic factors, radiometric corrections are still rarely applied on a routine basis, unlike geometric corrections, for which commercial remote sensing software offers versatile, powerful algorithms within easy reach of users.
Radiometric correction algorithms, when they are offered at all, remain inflexible black boxes that usually require expert users. The objectives of this research were the following: 1) to develop ground-reflectance retrieval software that addresses the questions raised above, modular enough to be refined, improved and adapted to various satellite image applications; and 2) to apply this software in different contexts (urban, agricultural, forest) and analyse the results in order to assess the gain in accuracy of the information extracted from satellite images converted into ground reflectance images, and hence the need to operate in this way regardless of the application. Through this research we produced a ground-reflectance retrieval tool (the new version of the REFLECT software). This software is based on the formulation (and routines) of the 6S code (Second Simulation of the Satellite Signal in the Solar Spectrum) and on the dark-target method for estimating aerosol optical depth (AOD), which is the most difficult factor to correct. Substantial improvements were made to the existing models. They mainly concern aerosol properties (integration of a more recent model, improved dark-target search for AOD estimation), modelling of the adjacency effect with a specular reflection model, support for most of the high-resolution multispectral sensors currently in use (Landsat TM and ETM+, all SPOT 1 to 5 HR sensors, EO-1 ALI and ASTER) and very-high-resolution sensors (QuickBird and Ikonos), and the correction of topographic effects with a model that separates the direct and diffuse components of solar radiation and also adapts to forest canopies. Validation showed that REFLECT retrieves ground reflectance with an accuracy of about ±0.01 reflectance units (for the visible, NIR and MIR spectral bands), even over surfaces with variable topography. Through simulations of apparent reflectance, the software demonstrated how strongly the parasitic factors affecting image digital numbers can alter the useful signal, i.e. ground reflectance (errors of 10 to more than 50%). REFLECT was also used to assess the importance of using ground reflectances rather than raw digital numbers in various common remote sensing applications such as classification, change detection, agriculture and forestry. In most applications (change detection with multi-date images, use of vegetation indices, estimation of biophysical parameters, …), image correction is a crucial step for obtaining reliable results.
From a software point of view, REFLECT is organised as a series of easy-to-use menus corresponding to the successive steps: entry of the scene inputs, computation of gaseous transmittances, AOD estimation with the dark-target method, and finally application of the radiometric corrections to the image, including a fast option that processes a 5000 by 5000 pixel image in about 15 minutes. This research opens several avenues for further improvement of the models and methods related to radiometric corrections, in particular the integration of the BRDF (bidirectional reflectance distribution function) into the formulation, the handling of translucent clouds through modelling of non-selective scattering, and the automation of the equivalent-slope method proposed for topographic corrections.
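REFLECT implements the full 6S-based correction chain; as a small, self-contained illustration of the first step of any such chain, the sketch below converts raw digital numbers to top-of-atmosphere reflectance with the standard calibration relation. The gain, offset, ESUN and geometry values are placeholders, not sensor constants from the thesis.

```python
# Illustrative sketch: raw digital numbers -> top-of-atmosphere reflectance.
# Gain/offset/ESUN/geometry below are placeholders, not REFLECT's constants.
import numpy as np

def toa_reflectance(dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """rho_TOA = pi * L * d^2 / (ESUN * cos(theta_s)), with L = gain*DN + offset."""
    radiance = gain * dn.astype(float) + offset            # W m-2 sr-1 um-1
    sun_zenith = np.deg2rad(90.0 - sun_elev_deg)
    return np.pi * radiance * d_au**2 / (esun * np.cos(sun_zenith))

dn_band = np.array([[55, 60], [62, 70]], dtype=np.uint16)   # tiny fake image
rho = toa_reflectance(dn_band, gain=0.76, offset=-1.5, esun=1983.0,
                      sun_elev_deg=55.0, d_au=1.01)
print(rho)
```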
Abstract:
This study aimed to characterise the sediments of the shallow continental shelf and to map features visible in satellite images using remote sensing techniques, digital image processing and analysis of bathymetry between Maxaranguape and Touros - RN. The study area is located on the shallow continental shelf of Rio Grande do Norte, Brazil, and is part of the Coral Reefs Environmental Protection Area (APA). A total of 1186 sediment samples were collected using a van Veen-type dredge, and the vessel was positioned with the aid of a Garmin 520s. The samples were treated in the laboratory to analyse sediment grain size, calcium carbonate concentration and biogenic composition. Digital images from Landsat-5 TM were used to map the features. At this stage, band 1 (0.45-0.52 μm) was used; the images were georeferenced and their histograms adjusted, giving a better view of bottom features and of the contacts between different bottom types. The results of the sediment analysis showed that the sediments of the continental shelf east of RN are dominated by carbonate facies and a gravelly-sand bottom, because the region is dominated by biogenic sediments composed mainly of calcareous algae. The bedform types identified and the morphological features found were validated with bathymetric data and the sediment samples examined. From these results, a subdivision of the shelf under study into well-characterised regions is suggested: (1) Turbid Zone, (2) Coral Patch Reefs Zone, (3) Mixed Carbonate Sediments Zone, (4) Algae Fouling Zone, (5) Rocky Alignment Zone, (6) Sand Waves Field and (7) Siliciclastic Sand Deposit.
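The abstract mentions georeferencing followed by histogram adjustment of TM band 1; the sketch below shows a generic percentile-based contrast stretch of a single band, which serves the same purpose of making bottom features and contacts easier to see. The file name and percentiles are assumptions.

```python
# Illustrative sketch: percentile-based contrast stretch of a single band,
# similar in spirit to the histogram adjustment described in the abstract.
import numpy as np
import imageio.v3 as iio

band = iio.imread("tm_band1.tif").astype(float)  # hypothetical TM band 1 subset

lo, hi = np.percentile(band, (2, 98))            # clip the darkest/brightest 2%
stretched = np.clip((band - lo) / (hi - lo), 0.0, 1.0)
out = (stretched * 255).astype(np.uint8)

iio.imwrite("tm_band1_stretched.png", out)
```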
Abstract:
The aim of this study was to investigate the morphology and localisation of calcium hydroxide- and mineral trioxide aggregate (MTA)-induced hard tissue barriers after pulpotomy in dogs' teeth. Pulpotomies were performed on maxillary and mandibular premolars of five dogs. The teeth were assigned to three groups according to the pulp-capping agent used. The pulpal wounds were capped with calcium hydroxide (Ca(OH)(2) - control), MTA or ProRoot MTA, and the cavities were restored with amalgam. After a 90-day follow-up period, the dogs were euthanised and the teeth were examined under scanning electron microscopy (SEM). Image-processing and analysis software was used to delimit the perimeters of the root canal area and the hard tissue barrier in order to determine the percentage of root canal obliteration. SEM data were used to assess the morphology, localisation and extension of the reparative hard tissue barriers. ProRoot MTA was statistically different from MTA and Ca(OH)(2) (P < 0.05) regarding tissue barrier morphology. Localisation data showed that ProRoot MTA was significantly different from Ca(OH)(2) (P < 0.05) and similar to MTA (P > 0.01; P > 0.05). No statistically significant difference (P > 0.01; P > 0.05) was observed between MTA and Ca(OH)(2). A larger number of complete (centroperipheral) hard tissue barriers with a predominance of dentinal tubules was observed for ProRoot MTA when compared with the Ca(OH)(2) group.
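The abstract only names the measurement; a minimal sketch of how the percentage of root canal obliteration could be computed from two delimited binary masks (root canal area and hard-tissue barrier) is given below, with the mask files as assumptions rather than the software actually used in the study.

```python
# Illustrative sketch: percentage of root-canal obliteration from two binary
# masks delimited in an image-analysis tool. File names are assumptions.
import imageio.v3 as iio

canal_mask = iio.imread("root_canal_mask.png") > 0      # whole canal region
barrier_mask = iio.imread("barrier_mask.png") > 0       # hard-tissue barrier

obliteration_pct = 100.0 * barrier_mask.sum() / canal_mask.sum()
print(f"root canal obliteration: {obliteration_pct:.1f}%")
```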
Abstract:
In this paper, a novel application of digital image processing and analysis is used to classify the shape and evaluate the size and morphology parameters of corrosion pits. The method appears to be very effective for analysing surfaces with either a low or a high degree of pitting. Pits formed on the 2024 alloy surface by chloride and by chloride + molybdate anions have similar mean areas, are found to be wider than they are deep, and exhibit predominantly conical or near-conical and irregular geometries.
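The paper's exact shape criteria are not given in the abstract; the sketch below illustrates the generic step of labelling pits in a binarised surface image and deriving simple size and shape descriptors (area, equivalent diameter, circularity). The input file and the circularity cut-off are assumptions.

```python
# Illustrative sketch: label corrosion pits in a binary mask and compute
# simple size/shape descriptors. Input file and cut-off are assumptions.
import numpy as np
import imageio.v3 as iio
from skimage.measure import label, regionprops

pits = iio.imread("pitted_surface_mask.png") > 0   # hypothetical binary pit mask

for region in regionprops(label(pits)):
    eq_diameter = 2.0 * np.sqrt(region.area / np.pi)
    circularity = 4.0 * np.pi * region.area / (region.perimeter ** 2 + 1e-9)
    shape = "near-circular pit mouth" if circularity > 0.8 else "irregular"
    print(f"area={region.area} px, eq. diameter={eq_diameter:.1f} px, {shape}")
```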
Abstract:
Animal behavioral parameters can be used to assess welfare status in commercial broiler breeders. Behavioral parameters can be monitored with a variety of sensing devices; for instance, the use of video cameras allows comprehensive assessment of animal behavioral expressions. Nevertheless, efficient methods and algorithms to continuously identify and differentiate animal behavior patterns are needed. The objective of this study was to provide a methodology to identify white broiler breeder hen behavior using combined image processing and computer vision techniques. These techniques were applied to differentiate body shapes in a sequence of frames as the birds expressed their behaviors. The method comprised four stages: (1) identification of body positions and their relationship with typical behaviors, in which the number of frames required to identify each behavior was determined; (2) collection of image samples, with the isolation of the birds that expressed a behavior of interest; (3) image processing and analysis using a filter developed to separate the white birds from the dark background; and finally (4) construction and validation of a behavioral classification tree using the software tool Weka (model J48). The constructed tree was structured in 8 levels and 27 leaves, and it was validated in two modes: the training set mode, with an overall success rate of 96.7%, and the cross-validation mode, with an overall success rate of 70.3%. The results presented here confirm the feasibility of the method developed to identify white broiler breeder behavior for a particular group of study. Nevertheless, further improvements to the method can be made in order to increase the overall validation success rate.
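The published pipeline was built with Weka; as a rough illustration of stages (3) and (4), the sketch below separates bright birds from a dark background, extracts two body-shape features, and trains a decision tree with scikit-learn as a stand-in for Weka's J48. All frames and labels are synthetic placeholders.

```python
# Illustrative sketch of stages (3)-(4): separate bright birds from a dark
# background, extract simple shape features, and train a decision tree.
# scikit-learn stands in for Weka's J48; all data here are synthetic.
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def shape_features(frame_gray):
    """Area and elongation of the largest bright (white-bird) blob."""
    _, mask = cv2.threshold(frame_gray, 150, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return [0.0, 1.0]
    c = max(contours, key=cv2.contourArea)
    (_, _), (w, h), _ = cv2.minAreaRect(c)
    elongation = max(w, h) / (min(w, h) + 1e-9)
    return [cv2.contourArea(c), elongation]

# Synthetic frames: compact blobs ("resting") vs. elongated blobs ("walking").
rng = np.random.default_rng(0)
frames, labels = [], []
for _ in range(20):
    f1 = np.zeros((120, 160), dtype=np.uint8)
    cv2.circle(f1, (80, 60), int(rng.integers(15, 25)), 255, -1)
    f2 = np.zeros((120, 160), dtype=np.uint8)
    cv2.ellipse(f2, (80, 60), (int(rng.integers(35, 50)), 12), 0, 0, 360, 255, -1)
    frames += [f1, f2]
    labels += [0, 1]

X = np.array([shape_features(f) for f in frames])
tree = DecisionTreeClassifier(max_depth=3).fit(X, labels)
print("training accuracy:", tree.score(X, labels))
```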