934 results for hierarchical image analysis
Abstract:
Date of Acceptance: 31/08/2015. The authors would like to thank Total E&P and BG Group for project funding and support and the Industry Technology Facilitator for enabling the collaborative development (grant number 3322PSD). The authors would also like to thank the Aberdeen Formation Evaluation Society and the College of Physical Sciences at the University of Aberdeen for partial financial support. Dougal Jerram, Raymi Castilla, Claude Gout, Frances Abbots and an anonymous reviewer are thanked for their constructive comments and suggestions to improve the standard of this manuscript. The authors would also like to express their gratitude to John Still and Colin Taylor for technical assistance in the laboratory and to Nick Timms (Curtin University) and Angela Halfpenny (CSIRO) for their assistance with the full thin section scanning equipment.
Abstract:
The importance of non-destructive techniques (NDT) in structural health monitoring programmes has become increasingly apparent in recent times. The quality of the measured data, often affected by various environmental conditions, can be a guiding factor for the usefulness and prediction efficiency of the various detection and monitoring methods used in this regard. Often, preprocessing the acquired data in relation to the affecting environmental parameters can improve the information quality and lead to a significantly more efficient and correct prediction process. The improvement can be directly related to the final decision-making policy about a structure or a network of structures and is compatible with general probabilistic frameworks of such assessment and decision-making programmes. This paper considers a preprocessing technique employed in an image-analysis-based structural health monitoring methodology to identify submarine pitting corrosion in the presence of variable luminosity, contrast and noise affecting the quality of images. Preprocessing the gray-level threshold of the various images is observed to bring about a significant improvement in damage detection as compared to an automatically computed gray-level threshold. The case-dependent adjustments of the threshold make it possible to obtain the best available information from an existing image. The corresponding improvements are assessed qualitatively in the present study.
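A minimal sketch of the kind of comparison described above, contrasting an automatically computed gray-level threshold (Otsu's method) with a manually adjusted one using scikit-image; the file name and the adjustment offset are hypothetical and do not reproduce the paper's case-dependent procedure:

```python
# Minimal sketch: manual vs. automatic gray-level thresholding of a corrosion image.
# The file name and the manual offset are assumptions; the paper's actual
# case-dependent adjustment procedure is not specified here.
import numpy as np
from skimage import io, color, filters

image = io.imread("pitting_sample.png")                       # hypothetical input image
gray = color.rgb2gray(image[..., :3]) if image.ndim == 3 else image

auto_t = filters.threshold_otsu(gray)                         # automatically computed threshold
manual_t = auto_t - 0.05                                      # case-dependent adjustment (assumed)

auto_mask = gray < auto_t                                     # candidate pit pixels, automatic threshold
manual_mask = gray < manual_t                                 # candidate pit pixels, adjusted threshold

print(f"automatic threshold: {auto_t:.3f}, pit fraction: {auto_mask.mean():.3%}")
print(f"adjusted threshold:  {manual_t:.3f}, pit fraction: {manual_mask.mean():.3%}")
```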
Abstract:
Scientists planning to use underwater stereoscopic image technologies are often faced with numerous problems during the methodological implementation: commercial equipment is too expensive; the setup or calibration is too complex; or the image processing (i.e. measuring objects in the stereo-images) is too complicated to be performed without a time-consuming phase of training and evaluation. The present paper addresses some of these problems and describes a workflow for stereoscopic measurements for marine biologists. It also provides instructions on how to assemble an underwater stereo-photographic system with two digital consumer cameras and gives step-by-step guidelines for setting up the hardware. The second part details a software procedure to correct stereo-image pairs for lens distortions, which is especially important when using cameras with non-calibrated optical units. The final part presents a guide to the process of measuring the lengths (or distances) of objects in stereoscopic image pairs. To reveal the applicability and the restrictions of the described systems and to test the effects of different types of camera (a compact camera and an SLR type), experiments were performed to determine the precision and accuracy of two generic stereo-imaging units: a diver-operated system based on two Olympus Mju 1030SW compact cameras and a cable-connected observatory system based on two Canon 1100D SLR cameras. In the simplest setup, without any correction for lens distortion, the low-budget Olympus Mju 1030SW system achieved mean accuracy errors (percentage deviation of a measurement from the object's real size) between 10.2% and -7.6% (overall mean value: -0.6%), depending on the size, orientation and distance of the measured object from the camera. With the single-lens reflex (SLR) system, very similar values between 10.1% and -3.4% (overall mean value: -1.2%) were observed. Correction of the lens distortion significantly improved the mean accuracy errors of both systems. Moreover, system precision (spread of the accuracy) improved significantly in both systems. Neither the use of a wide-angle converter nor multiple reassembly of the system had a significant negative effect on the results. The study shows that underwater stereo-photography, independent of the system, has a high potential for robust and non-destructive in situ sampling and can be used without prior specialist training.
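For illustration, a minimal sketch of the final measurement step for an idealized, parallel stereo rig: two endpoints marked in both images are back-projected to 3-D and the distance between them is taken as the object length. The calibration values and pixel coordinates are hypothetical, and the paper's lens-distortion correction is not reproduced here:

```python
# Minimal sketch: length of an object from a calibrated, parallel stereo pair.
# Focal length, baseline, principal point and the clicked pixel coordinates are
# assumed values; the paper's own calibration workflow is not reproduced.
import numpy as np

f = 1200.0              # focal length in pixels (assumed from calibration)
B = 0.25                # camera baseline in metres (assumed)
cx, cy = 960.0, 540.0   # principal point (assumed)

def to_3d(x_left, y_left, x_right):
    """Back-project a point seen in both images of a parallel stereo rig."""
    disparity = x_left - x_right
    Z = f * B / disparity
    X = (x_left - cx) * Z / f
    Y = (y_left - cy) * Z / f
    return np.array([X, Y, Z])

# Two endpoints of an object, marked in the left and right image (hypothetical pixels).
head = to_3d(1020.0, 560.0, 940.0)
tail = to_3d(1100.0, 565.0, 1018.0)

length_m = np.linalg.norm(head - tail)
print(f"estimated object length: {length_m * 100:.1f} cm")
```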
Abstract:
The use of remote sensing for monitoring of submerged aquatic vegetation (SAV) in fluvial environments has been limited by the spatial and spectral resolution of available image data. The absorption of light in water also complicates the use of common image analysis methods. This paper presents the results of a study that uses very high resolution (VHR) image data, collected with a Near Infrared sensitive DSLR camera, to map the distribution of SAV species for three sites along the Desselse Nete, a lowland river in Flanders, Belgium. Plant species, including Ranunculus aquatilis L., Callitriche obtusangula Le Gall, Potamogeton natans L., Sparganium emersum L. and Potamogeton crispus L., were classified from the data using Object-Based Image Analysis (OBIA) and expert knowledge. A classification rule set based on a combination of both spectral and structural image variation (e.g. texture and shape) was developed for images from two sites. A comparison of the classifications with manually delineated ground truth maps resulted in 61% overall accuracy for both sites. Application of the rule set to a third validation image resulted in 53% overall accuracy. These consistent results show promise for species-level mapping in such biodiverse environments, but also prompt a discussion on the assessment of classification accuracy.
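A minimal sketch of an object-based rule set in the spirit described above: the image is first partitioned into objects, per-object spectral and structural features are computed, and simple rules assign class labels. The band order, thresholds and class names are assumptions, not the paper's actual rule set:

```python
# Minimal sketch of object-based image analysis (OBIA) with a simple rule set.
# Band order, thresholds and class names are hypothetical.
import numpy as np
from skimage import io, segmentation, measure, util

img = util.img_as_float(io.imread("nete_site1.tif"))     # hypothetical NIR-R-G image
objects = segmentation.slic(img, n_segments=800, compactness=10, start_label=1)

labels = {}
for region in measure.regionprops(objects):
    rows, cols = region.coords[:, 0], region.coords[:, 1]
    nir = img[rows, cols, 0].mean()                       # mean "NIR" brightness (assumed band 0)
    texture = img[rows, cols, 0].std()                    # simple structural-variation proxy
    if nir > 0.45 and texture < 0.06:
        labels[region.label] = "floating-leaved SAV"      # e.g. Potamogeton natans
    elif nir > 0.45:
        labels[region.label] = "submerged SAV"
    else:
        labels[region.label] = "water / background"
```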
Abstract:
Image processing offers unparalleled potential for traffic monitoring and control. For many years engineers have attempted to perfect the art of automatic data abstraction from sequences of video images. This paper outlines a research project undertaken by the authors at Napier University in the field of image processing for automatic traffic analysis. A software-based system implementing TRIP algorithms to count cars and measure vehicle speed has been developed by members of the Transport Engineering Research Unit (TERU) at the University. The TRIP algorithm has been ported to and evaluated on an IBM PC platform with a view to hardware implementation of the pre-processing routines required for vehicle detection. Results show that a software-based traffic counting system is realisable for single-window processing. Due to the high volume of data that must be processed for full frames or multiple lanes, real-time operation of the system is limited, so dedicated hardware is required. The paper outlines a hardware design for the implementation of inter-frame and background differencing, background updating and shadow removal techniques. Preliminary results showing the processing time and counting accuracy of the routines implemented in software are presented, and a real-time hardware pre-processing architecture is described.
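A minimal sketch of the background differencing and background updating steps mentioned above, written with OpenCV; the video source, threshold and update rate are hypothetical, and the TRIP algorithm itself is not reproduced:

```python
# Minimal sketch: background differencing with a slowly updated background model.
# Frame source, threshold and update rate are assumptions.
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.avi")          # hypothetical video source
ok, frame = cap.read()
background = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)

alpha = 0.02                                   # background update rate (assumed)
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(float)
    diff = cv2.absdiff(gray, background)                   # background differencing
    mask = (diff > 25).astype(np.uint8) * 255              # moving-object pixels
    background = (1 - alpha) * background + alpha * gray   # background updating
    # a detection window over one lane would be examined here to count vehicles
cap.release()
```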
Abstract:
A two-step etching technique for fine-grained calcite mylonites using 0.37% hydrochloric and 0.1% acetic acid produces a topographic relief which reflects the grain boundary geometry. With this technique, calcite grain boundaries become more intensely dissolved than the grain interiors, whereas second-phase minerals such as dolomite, quartz, feldspars, apatite, hematite and pyrite are not affected by the acid and therefore form topographic peaks. Based on digital backscatter electron images and element distribution maps acquired on a scanning electron microscope, the geometry of calcite and of the second-phase minerals can be automatically quantified using image analysis software. For research on fine-grained carbonate rocks (e.g. dolomite-calcite mixtures), this low-cost approach is an attractive alternative to the generation of manual grain boundary maps based on photographs of ultra-thin sections or on orientation contrast images.
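A minimal sketch of the automated quantification step: a backscatter electron image is thresholded so that the bright second-phase grains separate from the calcite matrix, and grain areas are measured with scikit-image. The file name and gray-value cut-off are assumptions:

```python
# Minimal sketch: quantify second-phase grains in a backscatter electron image.
# File name and gray-value threshold are hypothetical.
import numpy as np
from skimage import io, measure, morphology

bse = io.imread("mylonite_bse.tif")                        # hypothetical BSE image
second_phase = bse > 180                                   # bright phases (assumed cut-off)
second_phase = morphology.remove_small_objects(second_phase, min_size=20)

labels = measure.label(second_phase)
areas = [r.area for r in measure.regionprops(labels)]
print(f"{labels.max()} second-phase grains, mean area {np.mean(areas):.1f} px")
```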
Abstract:
Biomedicine is a highly interdisciplinary research area at the interface of the sciences, anatomy, physiology, and medicine. In the last decade, biomedical studies have been greatly enhanced by the introduction of new technologies and techniques for automated quantitative imaging, thus considerably advancing the possibility of investigating biological phenomena through image analysis. However, the effectiveness of this interdisciplinary approach is bounded by the limited knowledge that a biologist and a computer scientist, by professional training, have of each other's fields. A possible solution to make up for both these shortcomings lies in training biologists to become interdisciplinary researchers able to develop dedicated image processing and analysis tools by exploiting a content-aware approach. The aim of this Thesis is to show the effectiveness of a content-aware approach to automated quantitative imaging by applying it to different biomedical studies, with the secondary purpose of motivating researchers to invest in interdisciplinarity. This content-aware approach was applied, firstly, to the phenomization of tumour cell response to stress by confocal fluorescent imaging and, secondly, to the texture analysis of trabecular bone microarchitecture in micro-CT scans. Thirdly, the approach served the characterization of new 3-D multicellular spheroids of human stem cells and the investigation of the role of the Nogo-A protein in tooth innervation. Finally, the content-aware approach also prompted the development of two novel methods for local image analysis and colocalization quantification. In conclusion, the content-aware approach has proved its benefit by yielding new methods that improved the quality of image analysis and strengthened statistical significance, allowing biological phenomena to be unveiled. Hopefully, this Thesis will help inspire researchers to pursue interdisciplinarity.
Abstract:
Multidimensional visualization techniques are invaluable tools for the analysis of structured and unstructured data with variable dimensionality. This paper introduces PEx-Image (Projection Explorer for Images), a tool aimed at supporting the analysis of image collections. The tool supports a methodology that employs interactive visualizations to aid user-driven feature detection and classification tasks, thus offering improved analysis and exploration capabilities. The visual mappings employ similarity-based multidimensional projections and point placement to lay out the data on a plane for visual exploration. In addition to its application to image databases, we also illustrate how the proposed approach can be successfully employed in the simultaneous analysis of different data types, such as text and images, offering a common visual representation for data expressed in different modalities.
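As a rough illustration of a similarity-based projection, the sketch below lays out a set of feature vectors on the plane with classical multidimensional scaling so that similar items stay close; this is only a stand-in for the projection techniques used in PEx-Image, and the feature vectors are placeholders:

```python
# Minimal sketch: project high-dimensional image features to 2-D for visual exploration.
# MDS is used here as a generic stand-in; the features are random placeholders.
import numpy as np
from sklearn.manifold import MDS

features = np.random.rand(50, 64)        # placeholder 64-D image feature vectors
projection = MDS(n_components=2, dissimilarity="euclidean", random_state=0)
xy = projection.fit_transform(features)  # 2-D coordinates, one point per image
print(xy.shape)                          # (50, 2)
```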
Abstract:
Texture is one of the most important visual attributes used in image analysis. It is used in many content-based image retrieval systems, where it allows the identification of a larger number of images from distinct origins. This paper presents a novel approach to image analysis and retrieval based on complexity analysis. The approach consists of a texture segmentation step, performed by complexity analysis through the Box-Counting fractal dimension, followed by the estimation of the complexity of each computed region by the multiscale fractal dimension. Experiments have been performed with an MRI database in both pattern recognition and image retrieval contexts. Results show the accuracy of the method and also indicate how the performance changes as the texture segmentation process is altered.
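A minimal sketch of the box-counting fractal dimension underlying the texture segmentation step; the binary texture is a placeholder and the multiscale variant described in the paper is not reproduced:

```python
# Minimal sketch: box-counting fractal dimension of a binary texture.
# The input texture is a random placeholder.
import numpy as np

def box_counting_dimension(binary, sizes=(2, 4, 8, 16, 32)):
    counts = []
    for s in sizes:
        h = binary.shape[0] // s * s
        w = binary.shape[1] // s * s
        blocks = binary[:h, :w].reshape(h // s, s, w // s, s)
        counts.append(np.sum(blocks.any(axis=(1, 3))))    # boxes that contain texture
    # slope of log(count) vs. log(1/size) estimates the fractal dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope

binary = np.random.rand(256, 256) > 0.7                   # placeholder binary texture
print(f"box-counting dimension: {box_counting_dimension(binary):.2f}")
```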
Abstract:
A set of NIH Image macro programs was developed to perform qualitative and quantitative analyses of digital stereo pictures produced by scanning electron microscopes. These tools were designed for image alignment, anaglyph representation, animation, reconstruction of true elevation surfaces, reconstruction of elevation profiles, true-scale elevation mapping and, for the quantitative approach, surface area and roughness calculations. Limitations regarding processing time, scanning techniques and programming concepts are also discussed.
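As an illustration of one of the listed tools, a minimal sketch of building a red-cyan anaglyph from an aligned stereo pair; the original macros were written for NIH Image, so this Python version and its file names are purely hypothetical:

```python
# Minimal sketch: red-cyan anaglyph from an aligned SEM stereo pair.
# Assumes two 8-bit grayscale images of identical size; file names are hypothetical.
import numpy as np
from skimage import io

left = io.imread("sem_tilt_minus5.png")        # hypothetical left (tilt -5 deg) image
right = io.imread("sem_tilt_plus5.png")        # hypothetical right (tilt +5 deg) image

anaglyph = np.dstack([left, right, right])     # red channel = left, green/blue = right
io.imsave("sem_anaglyph.png", anaglyph.astype(np.uint8))
```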
Abstract:
Research on image processing has shown that combining segmentation methods may lead to a solid approach for extracting semantic information from different sorts of images. Within this context, the Normalized Cut (NCut) is usually used as a final partitioning tool for graphs modeled with some chosen method. This work explores the Watershed Transform as a modeling tool, using different criteria of the hierarchical Watershed to convert an image into an adjacency graph. The Watershed is combined with an unsupervised distance learning step that redistributes the graph weights and redefines the similarity matrix before the final segmentation step using NCut. Using the Berkeley Segmentation Data Set and Benchmark as a reference, our goal is to compare the results obtained with this method against previous work to validate its performance.
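A minimal sketch of the watershed-to-NCut pipeline described above, using scikit-image: a watershed over-segmentation is converted into a region adjacency graph, which the Normalized Cut then partitions. The unsupervised distance-learning step is omitted and a mean-color similarity is used as the graph weight instead (module layout assumes a recent scikit-image release):

```python
# Minimal sketch: watershed over-segmentation -> region adjacency graph -> NCut.
# The distance-learning re-weighting from the paper is not reproduced.
import numpy as np
from skimage import data, filters, segmentation, graph

image = data.coffee()                                    # placeholder test image
gradient = filters.sobel(image.mean(axis=2))             # edge strength for the watershed
regions = segmentation.watershed(gradient, markers=400, compactness=0.001)

rag = graph.rag_mean_color(image, regions, mode="similarity")  # adjacency graph
final = graph.cut_normalized(regions, rag)               # NCut over the watershed regions
print(f"{regions.max()} watershed regions merged into {len(np.unique(final))} segments")
```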
Abstract:
Photodynamic therapy (PDT) is a treatment modality that has advanced rapidly in recent years. It causes tissue and vascular damage through the interaction of a photosensitizing agent (PS), light of a proper wavelength, and molecular oxygen. Evaluation of vessel damage usually relies on histopathology evaluation. Results are often qualitative or at best semi-quantitative, based on a subjective system. The aim of this study was to evaluate, using CD31 immunohistochemistry and image analysis software, the vascular damage after PDT in a well-established rodent model of chemically induced mammary tumor. Fourteen Sprague-Dawley rats received a single dose of 7,12-dimethylbenz(a)anthracene (80 mg/kg by gavage). Treatment efficacy was evaluated by comparing the vascular density of tumors after treatment with Photogem® as a PS, administered intraperitoneally, followed by interstitial fiber-optic illumination from a diode laser at 200 mW/cm and a light dose of 100 J/cm directed at the tumor (7 animals), with a control group (6 animals, no PDT). The animals were euthanized 30 hours after the illumination; the mammary tumors were removed and samples from each lesion were formalin-fixed. Immunostained blood vessels were quantified with Image Pro-Plus version 7.0. The control group had an average of 3368.6 ± 4027.1 pixels per picture and the treated group had an average of 779 ± 1242.6 pixels per area (P < 0.01), indicating that PDT caused a significant decrease in the vascular density of mammary tumors. The combination of CD31 immunohistochemistry, selection of representative areas by a trained pathologist, and quantification of staining using the Image Pro-Plus version 7.0 system was a practical and robust methodology for vessel damage evaluation, which could probably be used to assess other antiangiogenic treatments.
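A minimal sketch of the quantification idea: counting pixels of positive (brown DAB) staining in an image field by color thresholding. The color ranges and file name are assumptions; the study itself used Image Pro-Plus rather than a script like this:

```python
# Minimal sketch: count CD31-positive (brown DAB) pixels in an IHC photomicrograph.
# Colour thresholds and file name are hypothetical.
import numpy as np
from skimage import io, color

field = io.imread("cd31_field.tif")                      # hypothetical IHC image field
hsv = color.rgb2hsv(field[..., :3])

# brown DAB staining: low hue, moderate saturation, not too dark (assumed ranges)
stained = (hsv[..., 0] < 0.12) & (hsv[..., 1] > 0.3) & (hsv[..., 2] > 0.2)
print(f"stained pixels per field: {int(stained.sum())}")
```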