955 results for document image analysis
Abstract:
Technological and environmental problems related to ore processing are a serious limitation to the sustainable development of mineral resources, particularly for countries or companies rich in ores but with little access to sophisticated technology, e.g. in Latin America. Digital image analysis (DIA) can provide a simple, inexpensive and broadly applicable methodology to assess these problems, but the methodology must be carefully defined to produce reproducible and relevant information.
Abstract:
Mining in the Iberian Pyrite Belt (IPB), the biggest VMS metallogenetic province known in the world to date, has to face a deep crisis in spite of the huge reserves still known after ≈5 000 years of production. This is due to several factors, such as the difficult processing of complex Cu-Pb-Zn-Ag-Au ores, the exhaustion of the oxidation zone orebodies (the richest for gold, in gossan), the scarce demand for sulphuric acid on the world market, and stricter environmental regulations. Of these factors, only the first and the last can be addressed by local ore geologists. A reactivation of mining can therefore only be achieved by improved and more efficient ore processing, under the constraint of strict environmental controls. Digital image analysis of the ores, coupled with reflected light microscopy, provides a quantified and reliable mineralogical and textural characterization of the ores. The automation of the procedure furnishes the process engineers, for the first time, with real-time information to improve the process and to preclude or control pollution; it can be applied to metallurgical tailings as well. This is shown by some examples from the IPB.
Abstract:
To properly understand and model animal embryogenesis, it is crucial to obtain detailed measurements, in both time and space, of gene expression domains and cell dynamics. This challenge has been addressed in recent years by a surge of atlases that integrate a statistically relevant number of different individuals to obtain robust, complete information about the spatiotemporal locations of gene patterns. This paper discusses the fundamental image analysis strategies required to build such models and the most common problems found along the way. We also discuss the main challenges and future goals in the field.
Abstract:
The experimental results obtained in the experiment “STACO”, carried out on board Spacelab D-2, are revisited with image-analysis tools that were not available at the time. The configuration consisted of a liquid bridge between two solid supporting discs. An expected breakage occurred during the experiment. The recorded images are analysed and the measured behaviour compared with the results of a three-dimensional model of the liquid dynamics, obtaining a much better fit than with linear models.
Abstract:
Images acquired during free breathing using first-pass gadolinium-enhanced myocardial perfusion magnetic resonance imaging (MRI) exhibit a quasiperiodic motion pattern that needs to be compensated for if a further automatic analysis of the perfusion is to be executed. In this work, we present a method to compensate for this motion by combining independent component analysis (ICA) and image registration: first, we use ICA and a time–frequency analysis to identify the motion and separate it from the intensity change induced by the contrast agent. Then, synthetic reference images are created by recombining all the independent components but the one related to the motion. Therefore, the resulting image series does not exhibit motion, and its images have intensities similar to those of their original counterparts. Motion compensation is then achieved by using a multi-pass image registration procedure. We tested our method on 39 image series acquired from 13 patients, covering the basal, mid and apical areas of the left heart ventricle and consisting of 58 perfusion images each. We validated our method by comparing manually tracked intensity profiles of the myocardial sections to automatically generated ones before and after registration of 13 patient data sets (39 distinct slices). We compared linear, non-linear, and combined ICA-based registration approaches and previously published motion compensation schemes. Considering run-time and accuracy, a two-step ICA-based motion compensation scheme that first optimizes a translation and then a non-linear transformation performed best, achieving registration of the whole series in 32 ± 12 s on a recent workstation. The proposed scheme improves the Pearson correlation coefficient between manually and automatically obtained time–intensity curves from 0.84 ± 0.19 before registration to 0.96 ± 0.06 after registration.
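The key recombination step described above (rebuilding the image series from all independent components except the motion-related one) can be sketched as follows. This is a minimal illustration assuming the ICA mixing matrix and sources have already been estimated; the function name is ours, not the authors'.

```python
import numpy as np

def remove_motion_component(mixing, sources, motion_idx):
    """Rebuild an image series from its ICA decomposition, leaving out
    the component identified as motion.

    mixing:      (n_frames, n_components) ICA mixing matrix A
    sources:     (n_components, n_pixels) independent components S
    motion_idx:  index of the component attributed to motion
    """
    keep = [i for i in range(sources.shape[0]) if i != motion_idx]
    # Recombine all independent components except the motion-related one;
    # the result serves as a motion-free synthetic reference series.
    return mixing[:, keep] @ sources[keep, :]
```

Each row of the returned array is one synthetic reference frame, which the original frames can then be registered against.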
Abstract:
Process mineralogy provides the mineralogical information required by geometallurgists to address the inherent variation of geological data. The successful beneficiation of ores mostly depends on the ability of mineral processing to be efficiently adapted to the ore characteristics, liberation being one of the most relevant mineralogical parameters. The liberation characteristics of ores are intimately related to mineral texture. Therefore, the characterization of liberation necessarily requires the identification and quantification of those textural features with a major bearing on mineral liberation. From this point of view, grain size, bonding between mineral grains and intergrowth types are considered the most influential textural attributes. While the quantification of grain size is a usual output of current automated technologies, information about grain boundaries and intergrowth types is usually descriptive and difficult to quantify for inclusion in the geometallurgical model. Aiming at the systematic and quantitative analysis of intergrowth types within mineral particles, a new methodology based on digital image analysis has been developed. In this work, the ability of this methodology to achieve a more complete characterization of liberation is explored through the analysis of chalcopyrite in the rougher concentrate of the Kansanshi copper-gold mine (Zambia). The results show that the method provides valuable textural information for a better understanding of mineral behaviour during concentration processes. The potential of the method is enhanced by the fact that it provides data unavailable with current technologies. This opens up new perspectives on the quantitative analysis of mineral processing performance based on textural attributes.
Abstract:
Hyperspectral imaging collects information from across the electromagnetic spectrum, covering a wide range of wavelengths that typically extends from the ultraviolet to the infrared. Although this technology was initially developed for remote sensing and earth observation, its multiple advantages, such as high spectral resolution, have led to its application in other fields, such as cancer detection.
However, this new field imposes specific requirements; for example, it must meet strict timing specifications, since the potential applications, like surgical guidance or in vivo tumor detection, imply real-time constraints. Meeting these constraints is a great challenge, as hyperspectral images generate extremely high volumes of data to process. For that reason, new research lines are studying new processing techniques, and the most relevant ones are related to system parallelization: to reduce the computational load, image analysis is executed on several processors simultaneously; in that way, the computational load is divided among the different cores, and real-time specifications can be met. This document describes the construction of a new hyperspectral processing library for the RVC-CAL language, which is specifically designed for multimedia applications and allows multithreaded compilation and system parallelization. This Diploma Project develops the library functions required to implement two of the four stages of the hyperspectral image processing chain: endmember estimation and abundance estimation. The two other stages, dimensionality reduction and endmember extraction, are studied in the Diploma Project of Daniel Madroñal, which complements the research work described in this document. The document follows the classical structure of a research work. First, it introduces the motivations that have inspired this Diploma Project and the main objectives to achieve. After that, it thoroughly studies the state of the art of the technologies related to the development of the library, covering all the concepts needed to follow this work, such as the definition and applications of hyperspectral imaging and the typical processing chain.
Third, it explains the methodology of the library implementation, as well as the construction of a complete processing chain in RVC-CAL using the library. This chain tests both the correct behavior of the library and the time required for the complete analysis of one hyperspectral image, whether the chain is executed on one processor or on several. Afterwards, the collected results are analyzed in detail: first, the individual results of the endmember and abundance estimation stages are discussed; then, the results of the complete processing chain are studied, assessing the effects of multithreading and system parallelization. Finally, conclusions are drawn regarding aspects such as algorithm behavior, execution times and processing performance. Likewise, the document concludes with a proposal of future research lines that could continue and complement the work described here.
Abstract:
In the last decade, Object-Based Image Analysis (OBIA) has been accepted as an effective method for processing high spatial resolution multiband images. This image analysis method is an approach that starts with the segmentation of the image. Image segmentation in general is a procedure to partition an image into homogeneous groups (segments). In practice, visual interpretation is often used to assess the quality of segmentation, so the analysis relies on the experience of an analyst. To address this issue, in this study we evaluate several seed selection strategies for an automatic image segmentation methodology based on a seeded region growing-merging approach. To evaluate the segmentation quality, segments were subjected to spatial autocorrelation analysis using Moran's I index and intra-segment variance analysis. We apply the algorithm to the segmentation of an aerial multiband image.
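Moran's I, used above to score segmentation quality, has a standard closed form: I = (n/W) · Σᵢⱼ wᵢⱼ(xᵢ − x̄)(xⱼ − x̄) / Σᵢ(xᵢ − x̄)². A minimal numpy sketch follows; how the spatial weights are built (e.g. pixel adjacency within a segment) is application-specific, and the names are illustrative.

```python
import numpy as np

def morans_i(values, weights):
    """Moran's I spatial autocorrelation index.

    values:  (n,) attribute values (e.g. band values of pixels in a segment)
    weights: (n, n) symmetric spatial weights matrix with zero diagonal
    """
    x = np.asarray(values, dtype=float)
    w = np.asarray(weights, dtype=float)
    n = x.size
    z = x - x.mean()                     # deviations from the mean
    num = (w * np.outer(z, z)).sum()     # weighted cross-products
    den = (z ** 2).sum()                 # total variance term
    return (n / w.sum()) * num / den
```

Values near +1 indicate spatial clustering of similar values, values near −1 indicate a checkerboard-like pattern, and values near the expectation −1/(n−1) indicate spatial randomness.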
Abstract:
The colony shape of four yeast species growing on agar medium was measured for 116 days by image analysis. Initially, all the colonies are circular, with regular edges. The loss of circularity can be quantitatively estimated by the eccentricity index, Ei, calculated as the ratio between their orthogonal vertical and horizontal diameters. Ei can increase from 1 (complete circularity) to a maximum of 1.17–1.30, depending on the species. One colony inhibits its neighbour only when it has reached a threshold area. Then, the Ei of the inhibited colony increases proportionally to the area of the inhibitory colony. The initial distance between colonies affects those threshold values but not the proportionality, Ei/area; this inhibition affects the shape but not the total surface of the colony. The appearance of irregularities in the edges is associated, in all the species, not with age but with nutrient exhaustion. The edge irregularity can be quantified by the Fourier index, Fi, calculated as the minimum number of Fourier coefficients needed to describe the colony contour with 99% fitness. An ad hoc function has been developed in Matlab v. 7.0 to automate the computation of the Fourier coefficients. In young colonies, Fi has a value between 2 (circumference) and 3 (ellipse). These values are maintained in mature colonies of Debaryomyces, but can reach values up to 14 in Saccharomyces. All the species studied showed the inhibition of growth in facing colony edges, but only three species showed edge irregularities associated with substrate exhaustion. Copyright © 2014 John Wiley & Sons, Ltd.
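The eccentricity index defined above (the ratio of the orthogonal vertical and horizontal diameters) can be computed directly from a binary colony mask. A minimal sketch, taking the larger diameter over the smaller so that Ei ≥ 1, consistent with the reported range of 1 to 1.17–1.30; the function name is ours.

```python
import numpy as np

def eccentricity_index(mask):
    """Eccentricity index Ei of a colony: ratio of its orthogonal
    vertical and horizontal diameters, taken larger over smaller so a
    perfectly circular colony gives Ei = 1.

    mask: 2-D boolean array, True inside the colony.
    """
    vertical = int(np.any(mask, axis=1).sum())    # rows spanned by the colony
    horizontal = int(np.any(mask, axis=0).sum())  # columns spanned by the colony
    return max(vertical, horizontal) / min(vertical, horizontal)
```

For a connected, roughly convex colony the row and column spans are good proxies for the two orthogonal diameters.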
Abstract:
Video analytics play a critical role in recent traffic monitoring and driver assistance systems. In this context, the correct detection and classification of surrounding vehicles through image analysis has been the focus of extensive research in recent years. Most of the work reported on image-based vehicle verification makes use of supervised classification approaches and resorts to techniques such as histograms of oriented gradients (HOG), principal component analysis (PCA), and Gabor filters, among others. Unfortunately, existing approaches are lacking in two respects: first, comparison between methods using a common body of work has not been addressed; second, no study of the potential of combining popular features for vehicle classification has been reported. In this study, the performance of the different techniques is first reviewed and compared using a common public database. Then, the combination capabilities of these techniques are explored and a methodology is presented for the fusion of classifiers built upon them, also taking the vehicle pose into account. The study unveils the limitations of single-feature-based classification and makes clear that the fusion of classifiers is highly beneficial for vehicle verification.
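The abstract does not give the exact fusion rule, but score-level (late) fusion of single-feature classifiers can be sketched as a weighted average of their scores. This is a generic illustration, not the authors' method; the weights, which could be chosen per vehicle pose, are free parameters, and the names are ours.

```python
import numpy as np

def fuse_scores(score_list, weights=None):
    """Late fusion of per-feature classifier scores (e.g. HOG-, PCA- and
    Gabor-based classifiers) by weighted averaging.

    score_list: list of score arrays, one per classifier, each holding
                that classifier's scores for the same candidate windows.
    weights:    optional per-classifier weights; defaults to equal weights.
    """
    scores = np.stack([np.asarray(s, dtype=float) for s in score_list])
    if weights is None:
        weights = np.full(scores.shape[0], 1.0 / scores.shape[0])
    w = np.asarray(weights, dtype=float)
    return w @ scores  # fused score per candidate
```

A candidate is then accepted as a vehicle when its fused score exceeds a decision threshold.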
Abstract:
High-resolution video microscopy, image analysis, and computer simulation were used to study the role of the Spitzenkörper (Spk) in apical branching of ramosa-1, a temperature-sensitive mutant of Aspergillus niger. A shift to the restrictive temperature led to a cytoplasmic contraction that destabilized the Spk, causing its disappearance. After a short transition period, new Spk appeared where the two incipient apical branches emerged. Changes in cell shape, growth rate, and Spk position were recorded and transferred to the fungus simulator program to test the hypothesis that the Spk functions as a vesicle supply center (VSC). The simulation faithfully duplicated the elongation of the main hypha and the two apical branches. Elongating hyphae exhibited the growth pattern described by the hyphoid equation. During the transition phase, when no Spk was visible, the growth pattern was nonhyphoid, with consecutive periods of isometric and asymmetric expansion; the apex became enlarged and blunt before the apical branches emerged. Video microscopy images suggested that the branch Spk were formed anew by gradual condensation of vesicle clouds. Simulation exercises where the VSC was split into two new VSCs failed to produce realistic shapes, thus supporting the notion that the branch Spk did not originate by division of the original Spk. The best computer simulation of apical branching morphogenesis included simulations of the ontogeny of branch Spk via condensation of vesicle clouds. This study supports the hypothesis that the Spk plays a major role in hyphal morphogenesis by operating as a VSC—i.e., by regulating the traffic of wall-building vesicles in the manner predicted by the hyphoid model.
Abstract:
The discovery that the epsilon 4 allele of the apolipoprotein E (apoE) gene is a putative risk factor for Alzheimer disease (AD) in the general population has highlighted the role of genetic influences in this extremely common and disabling illness. It has long been recognized that another genetic abnormality, trisomy 21 (Down syndrome), is associated with early and severe development of AD neuropathological lesions. It remains a challenge, however, to understand how these facts relate to the pathological changes in the brains of AD patients. We used computerized image analysis to examine the size distribution of one of the characteristic neuropathological lesions in AD, deposits of A beta peptide in senile plaques (SPs). Surprisingly, we find that a log-normal distribution fits the SP size distribution quite well, motivating a porous model of SP morphogenesis. We then analyzed SP size distribution curves in genotypically defined subgroups of AD patients. The data demonstrate that both apoE epsilon 4/AD and trisomy 21/AD lead to increased amyloid deposition, but by apparently different mechanisms. The size distribution curve is shifted toward larger plaques in trisomy 21/AD, probably reflecting increased A beta production. In apoE epsilon 4/AD, the size distribution is unchanged but the number of SP is increased compared to apoE epsilon 3, suggesting increased probability of SP initiation. These results demonstrate that subgroups of AD patients defined on the basis of molecular characteristics have quantitatively different neuropathological phenotypes.
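The log-normal fit reported above can be reproduced in the standard way: if plaque sizes follow a log-normal distribution, their logarithms are normally distributed, so the maximum-likelihood parameters are simply the sample mean and standard deviation of the log sizes. A minimal sketch; the function names are ours.

```python
import numpy as np

def fit_lognormal(sizes):
    """Fit a log-normal distribution to plaque cross-sectional sizes by
    maximum likelihood: mu and sigma are the mean and standard deviation
    of log(sizes)."""
    logs = np.log(np.asarray(sizes, dtype=float))
    return logs.mean(), logs.std()

def lognormal_mode(mu, sigma):
    """Most frequent plaque size under the fitted log-normal distribution."""
    return np.exp(mu - sigma ** 2)
```

A shift of the whole size-distribution curve toward larger plaques (as in trisomy 21/AD) appears as an increase in mu, while an increased number of plaques with an unchanged distribution (as in apoE epsilon 4/AD) leaves both parameters essentially unchanged.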