902 results for Image-based cytometry
Abstract:
The anatomical and morphometric (shape indices, contour descriptors and otolith weight) characterizations of sagittal otoliths were investigated in 13 species of Lutjanus spp. inhabiting the Persian Gulf. This is the first study to compare the efficiency of three different image analysis techniques for discriminating species based on the shape of the outer otolith contour: elliptical Fourier descriptors (EFD), the fast Fourier transform (FFT) and the wavelet transform (WT). Sagittal otoliths of snappers are morphologically similar, with some small species-specific variations. The otolith contour based on wavelets (WT) provided the best results compared with the two Fourier-based methods, but only the combination of all three methods (EFD, FFT and WT) yielded a robust classification of species. Species prediction improved when otolith weight was included. Among the shape indices, only the aspect ratio provided a clear grouping of species. A further study was carried out to test the applicability of shape analysis by comparing otolith contours of Lutjanus johnii from the Persian Gulf and the Oman Sea to identify potential stocks. The results showed that the otoliths differ in contour shape and can be attributed to two different stocks.
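Contour-based Fourier shape descriptors of the kind compared above can be sketched in a few lines. This is a minimal illustration (plain FFT harmonic magnitudes of the complex contour, not the paper's exact EFD/FFT/WT pipeline), with synthetic outlines standing in for real otolith contours:

```python
import numpy as np

def fourier_shape_descriptors(contour, n_harmonics=10):
    # Complex Fourier series of a closed outline; harmonic magnitudes are
    # normalized so the descriptors are translation- and scale-invariant.
    z = contour[:, 0] + 1j * contour[:, 1]
    z = z - z.mean()                         # remove translation
    coeffs = np.fft.fft(z)
    pos = np.abs(coeffs[1:n_harmonics + 1])  # harmonics +1 .. +k
    neg = np.abs(coeffs[-n_harmonics:])      # harmonics -k .. -1
    return np.r_[pos, neg] / pos[0]          # remove scale

# two synthetic 'otolith' outlines: a circle and a higher-aspect ellipse
t = np.linspace(0, 2 * np.pi, 256, endpoint=False)
circle = np.c_[np.cos(t), np.sin(t)]
ellipse = np.c_[2 * np.cos(t), np.sin(t)]
d_circle = fourier_shape_descriptors(circle)
d_ellipse = fourier_shape_descriptors(ellipse)
```

Descriptor vectors like these could then feed any standard classifier; the two synthetic shapes already separate cleanly in descriptor space.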
Abstract:
The University of Cambridge is unusual in that its Department of Engineering is a single department which covers virtually all branches of engineering under one roof. In their first two years of study, our undergraduates study the full breadth of engineering topics and then have to choose a specialization area for the final two years. Here we describe part of a course, given towards the end of their second year, which is designed to entice these students to specialize in signal processing and information engineering topics for years 3 and 4. The course is based around a photo editor and an image search application, and it requires no prior knowledge of the z-transform or of 2-dimensional signal processing. It does assume some knowledge of 1-D convolution and basic Fourier methods, and some prior exposure to Matlab. The subject of this paper, the photo editor, is written in standard Matlab m-files which are fully visible to the students and help them to see how specific algorithms are implemented in detail. © 2011 IEEE.
Abstract:
This paper is in two parts and addresses two ways of getting more information out of the RF signal from three-dimensional (3D) mechanically-swept medical ultrasound probes. The first topic is the use of non-blind deconvolution to improve the clarity of the data, particularly in the direction orthogonal to the individual B-scans. The second topic is strain imaging: we present a robust and efficient approach to the estimation and display of axial strain information. For deconvolution, we calculate an estimate of the point-spread function at each depth in the image using Field II. This is used as part of an Expectation Maximisation (EM) framework in which the ultrasound scatterer field is modelled as the product of (a) a piecewise smooth function and (b) a fine-grain varying function. In the E step, a Wiener filter is used to estimate the scatterer field based on an assumed piecewise smooth component. In the M step, wavelet de-noising is used to estimate the piecewise smooth component from the scatterer field. For strain imaging, we use a quasi-static approach with efficient phase-based algorithms. Our contributions lie in robust and efficient 3D displacement tracking, point-wise quality-weighted smoothing, and a stable display that shows not only strain but also an indication of the quality of the data at each point in the image. This enables clinicians to see where the strain estimate is reliable and where it is mostly noise. For deconvolution, we present in-vivo images and simulations with quantitative performance measures. With the blurred 3D data taken as 0 dB, we get an improvement in signal to noise ratio of 4.6 dB with a Wiener filter alone, 4.36 dB with the ForWaRD algorithm and 5.18 dB with our EM algorithm. For strain imaging we show images based on 2D and 3D data and describe how full 3D analysis can be performed in about 20 seconds on a typical PC. We will also present initial results of our clinical study to explore the applications of our system in our local hospital. © 2008 IEEE.
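The Wiener estimator at the heart of the E step can be illustrated with a 1-D toy. This sketch deblurs a synthetic sparse scatterer sequence with a known Gaussian point-spread function; the real system estimates depth-dependent PSFs with Field II and operates on 3D RF data, so everything below is a simplified stand-in:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr):
    # Frequency-domain Wiener filter: conj(H) / (|H|^2 + 1/SNR).
    H = np.fft.fft(psf, len(blurred))
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * W))

rng = np.random.default_rng(0)
n = 128
x = np.zeros(n); x[32] = 1.0; x[80] = -0.5          # sparse scatterer field
k = np.exp(-0.5 * (np.arange(-8, 9) / 1.5) ** 2)    # Gaussian point-spread
k /= k.sum()
psf = np.zeros(n); psf[np.arange(-8, 9) % n] = k    # zero-phase placement
y = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(psf)))   # blur
y += 0.001 * rng.standard_normal(n)                 # measurement noise
x_hat = wiener_deconvolve(y, psf, snr=100.0)
```

The deconvolved estimate `x_hat` is closer to the true scatterer sequence than the blurred observation, and the scatterer peaks are partially restored.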
Abstract:
A novel method for modelling the statistics of 2D photographic images useful in image restoration is defined. The new method is based on the Dual Tree Complex Wavelet Transform (DT-CWT) but a phase rotation is applied to the coefficients to create complex coefficients whose phase is shift-invariant at multiscale edge and ridge features. This is in addition to the magnitude shift invariance achieved by the DT-CWT. The increased correlation between coefficients adjacent in space and scale provides an improved mechanism for signal estimation. © 2006 IEEE.
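The value of complex, phase-carrying coefficients can be illustrated without the full DT-CWT: here a 1-D complex (Gabor-like) band-pass filter stands in for one subband, which is an assumption of this sketch rather than the paper's transform. Shifting an edge by one sample changes the real-part response far more than the complex magnitude, which is the magnitude shift invariance the abstract builds on:

```python
import numpy as np

n = 256
t = np.arange(n)
x1 = (t >= 128).astype(float)      # an edge feature
x2 = (t >= 129).astype(float)      # the same edge shifted by one sample

# Gabor-like band-pass filter: its real part mimics a conventional real
# wavelet subband; the full complex filter mimics a complex-wavelet subband.
s = np.arange(-16, 17)
g = np.exp(-0.5 * (s / 5.0) ** 2) * np.exp(1j * 0.8 * s)

def subband(x, h):
    return np.convolve(x, h, mode="same")

real_diff = np.abs(subband(x1, g.real) - subband(x2, g.real)).max()
mag_diff = np.abs(np.abs(subband(x1, g)) - np.abs(subband(x2, g))).max()
```

`mag_diff` comes out well below `real_diff`: the complex magnitude tracks the edge's envelope while the one-sample shift mostly rotates the coefficient phase.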
Abstract:
This paper tackles the novel challenging problem of 3D object phenotype recognition from a single 2D silhouette. To bridge the large pose (articulation or deformation) and camera viewpoint changes between the gallery images and query image, we propose a novel probabilistic inference algorithm based on 3D shape priors. Our approach combines both generative and discriminative learning. We use latent probabilistic generative models to capture 3D shape and pose variations from a set of 3D mesh models. Based on these 3D shape priors, we generate a large number of projections for different phenotype classes, poses, and camera viewpoints, and implement Random Forests to efficiently solve the shape and pose inference problems. By model selection in terms of the silhouette coherency between the query and the projections of 3D shapes synthesized using the galleries, we achieve the phenotype recognition result as well as a fast approximate 3D reconstruction of the query. To verify the efficacy of the proposed approach, we present new datasets which contain over 500 images of various human and shark phenotypes and motions. The experimental results clearly show the benefits of using the 3D priors in the proposed method over previous 2D-based methods. © 2011 IEEE.
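The model-selection step, choosing the synthesized projection most coherent with the query silhouette, can be sketched with intersection-over-union as a stand-in coherency score (the paper's exact measure may differ, and these tiny synthetic masks replace the rendered projections):

```python
import numpy as np

def silhouette_iou(a, b):
    # Coherency score between two binary silhouettes: intersection over union.
    a, b = a.astype(bool), b.astype(bool)
    return (a & b).sum() / (a | b).sum()

# toy 'projections' for two phenotype classes: a disc and an elongated bar
yy, xx = np.mgrid[0:64, 0:64]
disc = (xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2
bar = (np.abs(xx - 32) < 6) & (np.abs(yy - 32) < 28)
gallery = {"round": disc, "elongated": bar}

# query silhouette: a slightly offset, slightly smaller disc
query = (xx - 30) ** 2 + (yy - 34) ** 2 < 14 ** 2
best = max(gallery, key=lambda k: silhouette_iou(query, gallery[k]))
```

In the paper this comparison runs over a large bank of projections spanning phenotype, pose, and viewpoint; the winning projection yields both the class label and an approximate 3D reconstruction.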
Abstract:
Estimating the fundamental matrix (F), to determine the epipolar geometry between a pair of images or video frames, is a basic step for a wide variety of vision-based functions used in construction operations, such as camera-pair calibration, automatic progress monitoring, and 3D reconstruction. Currently, robust methods (e.g., SIFT + normalized eight-point algorithm + RANSAC) are widely used in the construction community for this purpose. Although they can provide acceptable accuracy, the significant computational time required impedes their adoption in real-time applications, especially video data analysis with many frames per second. Aiming to overcome this limitation, this paper presents and evaluates the accuracy of a solution to find F by combining the use of two speedy and consistent methods: SURF for the selection of a robust set of point correspondences and the normalized eight-point algorithm. This solution is tested extensively on construction site image pairs including changes in viewpoint, scale, illumination, rotation, and moving objects. The results demonstrate that this method can be used for real-time applications (5 image pairs per second at a resolution of 640 × 480) involving scenes of the built environment.
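The normalized eight-point algorithm itself is compact enough to sketch in NumPy. This is a generic textbook implementation (Hartley normalization + SVD + rank-2 enforcement), not the authors' code, and it omits the SURF matching front end; the synthetic camera pair below is purely illustrative:

```python
import numpy as np

def normalized_eight_point(p1, p2):
    # p1, p2: (N, 2) matched pixel coordinates, N >= 8.
    # Hartley normalization (zero mean, mean distance sqrt(2)) is what
    # makes the linear solve numerically stable.
    def normalize(p):
        c = p.mean(axis=0)
        s = np.sqrt(2) / np.mean(np.linalg.norm(p - c, axis=1))
        T = np.array([[s, 0, -s * c[0]], [0, s, -s * c[1]], [0, 0, 1.0]])
        return np.c_[p, np.ones(len(p))] @ T.T, T

    x1, T1 = normalize(p1)
    x2, T2 = normalize(p2)
    # each match contributes one row of the linear system A f = 0
    A = np.einsum("ni,nj->nij", x2, x1).reshape(len(p1), 9)
    _, _, Vt = np.linalg.svd(A)
    F = Vt[-1].reshape(3, 3)
    U, S, Vt = np.linalg.svd(F)              # enforce rank 2
    F = U @ np.diag([S[0], S[1], 0.0]) @ Vt
    F = T2.T @ F @ T1                        # undo normalization
    return F / np.linalg.norm(F)

# synthetic scene: 20 points seen by two cameras with known geometry
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, (20, 3)) + np.array([0, 0, 5.0])
K = np.array([[500, 0, 320], [0, 500, 240], [0, 0, 1.0]])
P1 = K @ np.c_[np.eye(3), np.zeros(3)]
P2 = K @ np.c_[np.eye(3), np.array([0.5, 0.1, 0.0])]
h = np.c_[X, np.ones(20)]
u1 = h @ P1.T; u1 = u1[:, :2] / u1[:, 2:]
u2 = h @ P2.T; u2 = u2[:, :2] / u2[:, 2:]
F = normalized_eight_point(u1, u2)
```

With noise-free correspondences, the recovered F satisfies the epipolar constraint x2ᵀ F x1 ≈ 0 for every true match, while mismatched pairs produce large residuals.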
Abstract:
Manual inspection of concrete surface defects (e.g., cracks and air pockets) is labor-intensive and not always reliable. To overcome these limitations, automated inspection using image processing techniques has been proposed. However, existing work can only detect defects in an image, without the ability to evaluate them. This paper presents a novel approach for automatically assessing the impact of two common surface defects (i.e., air pockets and discoloration). These two defects are first located using the developed detection methods. Their attributes, such as the number of air pockets and the area of discoloration regions, are then retrieved to calculate the defects' visual impact ratios (VIRs). The appropriate threshold values for these VIRs are selected through a manual rating survey. This way, for a given concrete surface image, its quality in terms of air pockets and discoloration can be automatically measured by judging whether their VIRs fall below the threshold values. The method was implemented in C++ and tested on a database of concrete surface images to validate its performance. Read more: http://ascelibrary.org/doi/abs/10.1061/%28ASCE%29CO.1943-7862.0000126?journalCode=jcemd4
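A minimal version of the VIR computation might look as follows. The mask construction, the simple area-ratio definition, and the 1% threshold are all illustrative assumptions; in the paper the thresholds come from the manual rating survey and the attributes are richer than a raw pixel count:

```python
import numpy as np

def visual_impact_ratio(defect_mask):
    # Fraction of surface pixels flagged as defective:
    # defect area over total inspected area.
    return defect_mask.sum() / defect_mask.size

# synthetic detection output: a 4x4-pixel discoloration patch on a
# 100x100-pixel concrete surface image
surface = np.zeros((100, 100), dtype=bool)
surface[10:14, 10:14] = True

vir = visual_impact_ratio(surface)
threshold = 0.01                  # hypothetical survey-derived threshold
acceptable = vir <= threshold
```

Here the patch covers 16 of 10,000 pixels, so the VIR is 0.0016 and the surface passes the (hypothetical) threshold.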
Abstract:
Calibration of a camera system is a necessary step in any stereo metric process. It correlates all cameras to a common coordinate system by measuring the intrinsic and extrinsic parameters of each camera. Currently, manual calibration is the only way to achieve calibration in civil engineering operations that require stereo metric processes (photogrammetry, videogrammetry, vision-based asset tracking, etc.). This type of calibration, however, is time-consuming and labor-intensive. Furthermore, in civil engineering operations, camera systems are exposed to open, busy sites. In these conditions, the position of presumably stationary cameras can easily change due to external factors such as wind, vibrations, or an unintentional push/touch from personnel on site. In such cases, manual calibration must be repeated. To address this issue, several self-calibration algorithms have been proposed. These algorithms use projective geometry, the absolute conic, the Kruppa equations, and variations of these to achieve calibration. However, most of these methods do not consider all constraints of a camera system, such as camera intrinsic constraints, scene constraints, camera motion, or varying camera intrinsic properties. This paper presents a novel method that takes all of these constraints into consideration to auto-calibrate cameras using an image alignment algorithm originally meant for vision-based tracking. In this method, image frames taken from the cameras are used to calculate the fundamental matrix, which gives the epipolar constraints. The intrinsic and extrinsic properties of the cameras are then acquired from this calculation. Test results are presented in this paper with recommendations for further improvement.
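One step of such a pipeline, recovering the essential matrix from the fundamental matrix once intrinsics are available, can be sketched as follows. This is the generic textbook relation E = KᵀFK with a projection onto the essential-matrix manifold, not the paper's full auto-calibration method; the rotation, translation, and K below are synthetic:

```python
import numpy as np

def skew(v):
    # cross-product matrix [v]x
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0.0]])

def essential_from_fundamental(F, K):
    # E = K^T F K for two cameras sharing intrinsics K, then projected
    # onto the essential-matrix manifold (two equal singular values).
    E = K.T @ F @ K
    U, _, Vt = np.linalg.svd(E)
    return U @ np.diag([1.0, 1.0, 0.0]) @ Vt

# synthetic ground truth: small rotation about z, unit translation along x
a = 0.1
R = np.array([[np.cos(a), -np.sin(a), 0.0],
              [np.sin(a), np.cos(a), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([1.0, 0.0, 0.0])
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
E_true = skew(t) @ R                                   # E = [t]x R
F = np.linalg.inv(K).T @ E_true @ np.linalg.inv(K)     # induced F
E_rec = essential_from_fundamental(F, K)
```

Decomposing the recovered E (via its SVD) would then yield the relative rotation and translation direction between the two cameras.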
Abstract:
Vision-based tracking can provide the spatial location of project-related entities such as equipment, workers, and materials in a large-scale congested construction site. It tracks entities in a video stream by inferring their motion. To initiate the process, the pixel areas of the entities to be tracked must be determined in the consecutive video frames. For the purpose of fully automating the process, this paper presents an automated way of initializing trackers using the Semantic Texton Forests (STFs) method. The STFs method simultaneously performs segmentation of the image and classification of the segments based on low-level semantic information and context information. In this paper, the STFs method is tested on the recognition of wheel loaders. In the experiments, wheel loaders are further divided into several parts, such as wheels and body parts, to help learn the context information. The results show 79% accuracy in recognizing the pixel areas of the wheel loader. These results signify that the STFs method has the potential to automate the initialization process of vision-based tracking.
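The initialization handoff itself is simple once per-pixel classification exists: the tracker window is just the bounding box of the recognized pixel area. A minimal sketch, with a synthetic mask standing in for the STFs output:

```python
import numpy as np

def bbox_from_mask(mask):
    # Tracker initialization window from a per-pixel recognition mask.
    # Returns (row_min, col_min, row_max, col_max), inclusive.
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return int(rows[0]), int(cols[0]), int(rows[-1]), int(cols[-1])

# synthetic segmentation result: pixels classified as 'wheel loader'
mask = np.zeros((60, 80), dtype=bool)
mask[20:35, 30:50] = True
box = bbox_from_mask(mask)
```

The resulting box is what a conventional tracker would otherwise require a human to draw on the first frame.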
Abstract:
This book explores the processes for retrieval, classification, and integration of construction images in AEC/FM model based systems. The author describes a combination of techniques from the areas of image and video processing, computer vision, information retrieval, statistics, and content-based image and video retrieval that have been integrated into a novel method for the retrieval of related construction site image data from components of a project model. Automated methods for the integration of construction images are important for construction information management. The method has been tested on available construction site images from a variety of sources, such as past and current building construction and transportation projects, and is able to automatically classify, store, integrate and retrieve image data files in inter-organizational systems so as to allow their usage in project management related tasks.
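A toy content-based retrieval loop conveys the core idea: each image is reduced to a descriptor and ranked by similarity to the query. Real systems, including the one described here, use far richer color/texture/model features; the intensity histogram and the two synthetic "site photos" below are only illustrative:

```python
import numpy as np

def hist_descriptor(img, bins=8):
    # Normalized intensity histogram as a minimal content descriptor.
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def retrieve(query, database):
    # Rank database images by histogram intersection with the query.
    q = hist_descriptor(query)
    scores = {name: np.minimum(q, hist_descriptor(img)).sum()
              for name, img in database.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
dark = rng.uniform(0.0, 0.4, (32, 32))     # e.g. an excavation photo
light = rng.uniform(0.6, 1.0, (32, 32))    # e.g. a concrete-pour photo
db = {"excavation": dark, "concrete": light}

query = rng.uniform(0.0, 0.4, (32, 32))    # another dark image
best = retrieve(query, db)
```

Linking such retrieved images to project-model components is then a matter of storing the model element ID alongside each indexed image.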