902 results for Image-based cytometry
Abstract:
A Concise Intro to Image Processing using C++ presents state-of-the-art image processing methodology, including current industrial practices for image compression, denoising methods based on partial differential equations, and newer compression methods such as fractal and wavelet compression. It covers elementary concepts of image processing and the related fundamental tools, with coding examples as well as exercises. With a particular emphasis on illustrating fractal and wavelet compression algorithms, the text also covers image segmentation, object recognition, and morphology. An accompanying CD-ROM contains code for all algorithms.
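As a flavour of the material, one level of the 1D Haar transform, the simplest building block behind the wavelet compression schemes the book covers, can be written in C++ as follows (an illustrative sketch, not code from the accompanying CD-ROM):

```cpp
#include <vector>

// Illustrative only: one level of the 1D Haar wavelet transform, assuming an
// even-length signal. Averages go in the first half of the output, details in
// the second; compression then follows by quantising or discarding small
// detail coefficients.
std::vector<float> haarStep(const std::vector<float>& in) {
    const size_t half = in.size() / 2;
    std::vector<float> out(in.size());
    for (size_t i = 0; i < half; ++i) {
        out[i]        = (in[2 * i] + in[2 * i + 1]) * 0.5f;  // average
        out[half + i] = (in[2 * i] - in[2 * i + 1]) * 0.5f;  // detail
    }
    return out;
}
```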
Abstract:
This study presents a methods evaluation and intercalibration of active fluorescence-based measurements of the quantum yield (Φ_PSII) and absorption coefficient (a_PSII) of photosystem II (PSII) photochemistry. Measurements of Φ_PSII, a_PSII, and irradiance (E) can be scaled to derive photosynthetic electron transport rates (ETR_PSII), the process that fuels phytoplankton carbon fixation and growth. Bio-optical estimates of Φ_PSII and a_PSII were evaluated using 10 phytoplankton cultures across different pigment groups with varying bio-optical absorption characteristics on six different fast-repetition-rate fluorometers that span two manufacturers and four models. Culture measurements of Φ_PSII and the effective absorption cross section of PSII photochemistry (σ_PSII, a constituent of a_PSII) showed a high degree of correspondence across instruments, although some instrument-specific biases are identified. A range of approaches has been used in the literature to estimate a_PSII, and these are evaluated here. With the exception of ex situ a_PSII estimates from paired σ_PSII and PSII reaction center concentration ([RCII]) measurements, the accuracy and precision of in situ a_PSII methodologies are largely determined by the variance of method-specific coefficients. The accuracy and precision of these coefficients are evaluated, compared to literature data, and discussed within a framework of autonomous ETR_PSII measurements. This study supports the application of an instrument-specific calibration coefficient (K_R) that scales minimum fluorescence in the dark (F_0) to a_PSII as both the most accurate in situ measurement of a_PSII and the methodology best suited for highly resolved autonomous ETR_PSII measurements.
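Written out, the scaling the abstract relies on is the standard relation below; the symbols follow common fast-repetition-rate fluorometry notation and are reconstructions rather than verbatim from the paper:

```latex
ETR_{PSII} = \Phi_{PSII} \cdot a_{PSII} \cdot E,
\qquad
a_{PSII} = K_R \, F_0
```

so calibrated dark fluorescence measurements alone suffice to track a_PSII autonomously, and ETR_PSII follows from pairing them with Φ_PSII and E.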
Abstract:
The ultrasonic measurement and imaging of tissue elasticity is currently under wide investigation and development as a clinical tool for the assessment of a broad range of diseases, but little account in this field has yet been taken of the fact that soft tissue is porous and contains mobile fluid. The ability to squeeze fluid out of tissue may have implications for conventional elasticity imaging, and may present opportunities for new investigative tools. When a homogeneous, isotropic, fluid-saturated poroelastic material with a linearly elastic solid phase and incompressible solid and fluid constituents is subjected to stress, the behaviour of the induced internal strain field is influenced by three material constants: the Young's modulus (E_s) and Poisson's ratio (ν_s) of the solid matrix and the permeability (k) of the solid matrix to the pore fluid. New analytical expressions were derived and used to model the time-dependent behaviour of the strain field inside simulated homogeneous cylindrical samples of such a poroelastic material undergoing sustained unconfined compression. A model-based reconstruction technique was developed to produce images of parameters related to the poroelastic material constants (E_s, ν_s, k) from a comparison of the measured and predicted time-dependent spatially varying radial strain. Tests of the method using simulated noisy strain data showed that it is capable of producing three unique parametric images: an image of the Poisson's ratio of the solid matrix, an image of the axial strain (which was not time-dependent subsequent to the application of the compression) and an image representing the product of the aggregate modulus E_s(1 − ν_s)/((1 + ν_s)(1 − 2ν_s)) of the solid matrix and the permeability of the solid matrix to the pore fluid. The analytical expressions were further used to numerically validate a finite element model and to clarify previous work on poroelastography.
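For clarity, the aggregate modulus that appears in the third parametric image can be computed directly from the two solid-matrix constants; a minimal C++ sketch follows (the numeric values in main are illustrative, not from the paper):

```cpp
#include <iostream>

// Aggregate modulus H_A of a linearly elastic solid matrix, as given in the
// abstract: H_A = E_s (1 - nu_s) / ((1 + nu_s)(1 - 2 nu_s)).
// The third reconstructed image represents the product H_A * k.
double aggregate_modulus(double E_s, double nu_s) {
    return E_s * (1.0 - nu_s) / ((1.0 + nu_s) * (1.0 - 2.0 * nu_s));
}

int main() {
    // Illustrative soft-tissue-like values: E_s = 10 kPa, nu_s = 0.3.
    std::cout << aggregate_modulus(10e3, 0.3) << " Pa\n";
}
```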
Abstract:
The Grey Level Co-occurrence Matrix (GLCM), one of the best known tools for texture analysis, estimates image properties related to second-order statistics. These properties, commonly known as Haralick texture features, can be used for image classification, image segmentation, and remote sensing applications. However, their computation is highly intensive, especially for very large images such as medical ones, so methods to accelerate it are highly desirable. This paper proposes the use of programmable hardware to accelerate the calculation of the GLCM and Haralick texture features. Further, as an example of the speedup offered by programmable logic, a multispectral computer vision system for automatic diagnosis of prostatic cancer has been implemented. Its performance is then compared against a microprocessor-based solution.
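For concreteness, a minimal software sketch of what the hardware accelerates, assuming an 8-bit single-channel image and a single (dx, dy) offset (the paper's FPGA design and multispectral pipeline are not reproduced here):

```cpp
#include <cstdint>
#include <vector>

// Build a grey level co-occurrence matrix for one offset (dx, dy), with the
// image quantised to `levels` grey levels, normalised to joint probabilities.
std::vector<double> glcm(const std::vector<uint8_t>& img, int w, int h,
                         int dx, int dy, int levels) {
    std::vector<double> m(levels * levels, 0.0);
    double total = 0.0;
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            int nx = x + dx, ny = y + dy;
            if (nx < 0 || nx >= w || ny < 0 || ny >= h) continue;
            int i = img[y * w + x] * levels / 256;   // quantise working pixel
            int j = img[ny * w + nx] * levels / 256; // quantise neighbour
            m[i * levels + j] += 1.0;
            total += 1.0;
        }
    if (total > 0.0)
        for (double& v : m) v /= total;
    return m;
}

// One Haralick feature, contrast: sum over (i, j) of (i - j)^2 * p(i, j).
double contrast(const std::vector<double>& m, int levels) {
    double c = 0.0;
    for (int i = 0; i < levels; ++i)
        for (int j = 0; j < levels; ++j)
            c += double(i - j) * (i - j) * m[i * levels + j];
    return c;
}
```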
Abstract:
Taking into account the enormous influence that J.J. Rousseau has had on modern pedagogy, the recent tercentenary of his birth is a good opportunity to reflect on his continuing relevance. This paper is a theoretical and educational study developed with an analytic and comparative hermeneutical method. The main objective is to show how some concepts of his philosophy of education closely resemble certain changes that present-day competency-based teaching demands, so that it could be considered their methodological background. To achieve this objective, the exposition is divided into three parts. The first part analyses Rousseau's educational theory as developed in the first three books of Emilio, in which one of the main themes is learning through first-hand experience, fostering self-sufficiency, curiosity and the motivation to learn. Rousseau proposed negative education as a method, which requires, among other conditions, constant monitoring of the learner by the tutor. The second part briefly summarises the most relevant changes and characteristics of competency-based teaching, as well as its purpose. The student's participation and activity within their own learning process, through the carrying out of tasks, are highlighted. The new educational model involves a radical change in the curriculum, most notably a transformation of classroom methodology and of the role of the teacher. Finally, the third part offers a comparative synthesis of both proposals, grouping the parallels found under four topics: the origin of the two models, their aims, their methodology, and the change in teaching roles.
Abstract:
In a typical shoeprint classification and retrieval system, the first step is to segment meaningful basic shapes and patterns in a noisy shoeprint image. This step has a significant influence on the shape descriptors and shoeprint indexing used in the later stages. In this paper, we extend a recently developed denoising technique proposed by Buades, called non-local means filtering, to give a more general model. In this model, the expected result of an operation on a pixel can be estimated by performing the same operation on all of its reference pixels in the same image. A working pixel's reference pixels are those pixels whose neighbourhoods are similar to the working pixel's neighbourhood, where similarity is based on the correlation between the local neighbourhoods of the working pixel and the reference pixel. We incorporate a special instance of this general model into the thresholding of a very noisy shoeprint image. Visual and quantitative comparisons with two benchmark techniques, those of Otsu and Kittler, are conducted in the last section, giving evidence of the effectiveness of our method for thresholding noisy shoeprint images.
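The underlying non-local estimate can be sketched as follows. This is a minimal illustration of the general model with intensity as the "operation" (which reduces to classic non-local means); the patch radius and decay parameter h are illustrative assumptions, not values from the paper:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Estimate the value at working pixel (px, py) from all reference pixels
// whose neighbourhoods resemble the working pixel's neighbourhood, weighted
// by that similarity. Requires C++17 for std::clamp.
float nonLocalEstimate(const std::vector<float>& img, int w, int h_img,
                       int px, int py, int radius = 2, float h = 10.0f) {
    auto at = [&](int x, int y) {  // intensity with edge clamping
        x = std::clamp(x, 0, w - 1);
        y = std::clamp(y, 0, h_img - 1);
        return img[y * w + x];
    };
    // Squared distance between the neighbourhoods of (px, py) and (qx, qy).
    auto patchDist = [&](int qx, int qy) {
        float d = 0.0f;
        for (int dy = -radius; dy <= radius; ++dy)
            for (int dx = -radius; dx <= radius; ++dx) {
                float diff = at(px + dx, py + dy) - at(qx + dx, qy + dy);
                d += diff * diff;
            }
        return d;
    };
    float num = 0.0f, den = 0.0f;
    for (int qy = 0; qy < h_img; ++qy)
        for (int qx = 0; qx < w; ++qx) {
            float wgt = std::exp(-patchDist(qx, qy) / (h * h));
            num += wgt * at(qx, qy);
            den += wgt;
        }
    return num / den;
}
```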
Abstract:
This paper documents the use of image processing techniques to assess the performance of airport landing lighting using image sequences collected from an aircraft-mounted camera. In order to assess the performance of the lighting, it is necessary to uniquely identify each luminaire within an image, track the luminaires through the entire sequence, and store the relevant information for each luminaire, that is, the total number of pixels that each luminaire covers and the total grey level of those pixels. This pixel grey level can then be used for performance assessment. The authors propose a robust model-based (MB) feature-matching technique by which the performance is assessed. The development of this matching technique is the key to the automated performance assessment of airport lighting. The MB matching technique utilises projective geometry in addition to an accurate template of the 3D model of a landing-lighting system. The template is projected onto the image data and an optimum match found using nonlinear least-squares optimisation. The MB matching software is compared with standard feature extraction and tracking techniques known within the community, namely the Kanade–Lucas–Tomasi (KLT) and scale-invariant feature transform (SIFT) techniques. The new MB matching technique compares favourably with the SIFT and KLT feature-tracking alternatives. As such, it provides a solid foundation for the central aim of this research, which is to automatically assess the performance of airport lighting.
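A minimal sketch of the projective-geometry step this matching relies on, assuming a standard 3x4 pinhole projection matrix P (the paper's calibrated template and optimiser are not reproduced here):

```cpp
#include <array>
#include <vector>

struct Point2 { double x, y; };
struct Point3 { double x, y, z; };

// Project a 3D template point through a 3x4 projection matrix P (row-major).
Point2 project(const std::array<double, 12>& P, const Point3& X) {
    double u = P[0]*X.x + P[1]*X.y + P[2]*X.z  + P[3];
    double v = P[4]*X.x + P[5]*X.y + P[6]*X.z  + P[7];
    double s = P[8]*X.x + P[9]*X.y + P[10]*X.z + P[11];
    return {u / s, v / s};
}

// Sum of squared reprojection errors between the projected template
// luminaires and their matched image detections. A nonlinear least-squares
// routine (not shown) would adjust the camera pose to minimise this value.
double residual(const std::array<double, 12>& P,
                const std::vector<Point3>& model,
                const std::vector<Point2>& detected) {
    double r = 0.0;
    for (size_t i = 0; i < model.size() && i < detected.size(); ++i) {
        Point2 p = project(P, model[i]);
        r += (p.x - detected[i].x) * (p.x - detected[i].x)
           + (p.y - detected[i].y) * (p.y - detected[i].y);
    }
    return r;
}
```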
Abstract:
A high-sample-rate 3D median filtering processor architecture is proposed, based on a novel 3D median filtering algorithm that reduces computational complexity in comparison with the traditional bubble-sorting algorithm. A 3 x 3 x 3 filter processor is implemented in VHDL, and simulation verifies that the processor can process a 128 x 128 x 96 MRI volume in 0.03 seconds while running at 50 MHz.
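As a point of reference, the operation the processor implements can be sketched in software as follows (a plain C++ reference, not the paper's VHDL architecture); the voxel (x, y, z) is assumed to lie in the volume interior:

```cpp
#include <algorithm>
#include <array>
#include <cstdint>
#include <vector>

// 3x3x3 median filter at one voxel: gather the 27-voxel window around
// (x, y, z) and return its median. std::nth_element gives the median without
// fully sorting the window.
uint8_t median27(const std::vector<uint8_t>& vol, int W, int H, int D,
                 int x, int y, int z) {
    std::array<uint8_t, 27> window;
    int n = 0;
    for (int dz = -1; dz <= 1; ++dz)
        for (int dy = -1; dy <= 1; ++dy)
            for (int dx = -1; dx <= 1; ++dx)
                window[n++] = vol[((z + dz) * H + (y + dy)) * W + (x + dx)];
    std::nth_element(window.begin(), window.begin() + 13, window.end());
    return window[13];  // element 13 is the median of 27 values
}
```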
Abstract:
This paper proposes a novel image denoising technique based on the normal inverse Gaussian (NIG) density model and our extended non-negative sparse coding (NNSC) algorithm. The algorithm converges to feature basis vectors that are localised and oriented in both the spatial and frequency domains. Here, we demonstrate that the NIG density fits non-negative sparse data very well. In the denoising process, a NIG-based maximum a posteriori (MAP) estimator is applied to an image corrupted by additive Gaussian noise, and the noise is reduced successfully. This shrinkage technique, also referred to as the NNSC shrinkage technique, is self-adaptive to the statistical properties of the image data. The denoising method is evaluated using the normalized signal-to-noise ratio (SNR). Experimental results show that the NNSC shrinkage approach is indeed efficient and effective in denoising. We also compare the NNSC shrinkage method with standard sparse coding shrinkage, wavelet-based shrinkage and the Wiener filter; the simulation results show that our method outperforms these three denoising approaches.
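The MAP shrinkage rule referred to above has the standard form below; this is the generic formulation for a sparse prior under additive Gaussian noise, not necessarily the authors' exact derivation:

```latex
\hat{x}(y) = \arg\max_{x}\;\bigl[\log p(y \mid x) + \log p(x)\bigr]
           = \arg\min_{x}\;\Bigl[\frac{(y - x)^2}{2\sigma_n^2} - \log p_{\mathrm{NIG}}(x)\Bigr]
```

where y is a noisy sparse coefficient, x the clean coefficient, sigma_n^2 the Gaussian noise variance, and p_NIG the NIG prior fitted to the non-negative sparse data.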
Abstract:
Face recognition with unknown, partial distortion and occlusion is a practical problem, and has a wide range of applications, including security and multimedia information retrieval. The authors present a new approach to face recognition subject to unknown, partial distortion and occlusion. The new approach is based on a probabilistic decision-based neural network, enhanced by a statistical method called the posterior union model (PUM). PUM is an approach for ignoring severely mismatched local features and focusing the recognition mainly on the reliable local features. It thereby improves the robustness while assuming no prior information about the corruption. We call the new approach the posterior union decision-based neural network (PUDBNN). The new PUDBNN model has been evaluated on three face image databases (XM2VTS, AT&T and AR) using testing images subjected to various types of simulated and realistic partial distortion and occlusion. The new system has been compared to other approaches and has demonstrated improved performance.
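As an illustration of the union idea, a class score of the following shape keeps only the best-matching subset of local features; the notation here is ours and is only a sketch, not the paper's exact model:

```latex
P(\lambda \mid x_1, \dots, x_N) \;\propto\; \max_{\substack{S \subseteq \{1,\dots,N\} \\ |S| = M}} \;\prod_{i \in S} p(x_i \mid \lambda)
```

where x_i are the local feature observations, lambda a face class model, and M the assumed number of reliable (uncorrupted) features; severely mismatched features thus drop out of the score without any prior knowledge of which regions are occluded.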