896 results for Active shape model
Abstract:
With improvements in acquisition speed and quality, the amount of medical image data to be screened by clinicians is becoming challenging in daily clinical practice. To quickly visualize and find abnormalities in medical images, we propose a new method combining segmentation algorithms with statistical shape models. A statistical shape model built from a healthy population will fit closely in healthy regions, but will not fit the morphological abnormalities often present in areas of pathology. Using the residual fitting error of the statistical shape model, pathologies can therefore be visualized very quickly. This idea is applied to finding drusen in the retinal pigment epithelium (RPE) of optical coherence tomography (OCT) volumes. A segmentation technique able to accurately segment drusen in patients with age-related macular degeneration (AMD) is applied. The segmentation is then analyzed with a statistical shape model to visualize potentially pathological areas. An extensive evaluation was performed to validate the segmentation algorithm, as well as the quality and sensitivity of the hinting system. Most drusen with a height of 85.5 µm were detected, and all drusen at least 93.6 µm high were detected.
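The residual-based hinting idea (fit a model trained only on healthy anatomy, then flag the locations where the fit is poor) can be sketched in a few lines. The following Python is an illustrative sketch on synthetic 1-D RPE-like profiles, not the authors' implementation; all names, data and parameters are assumptions:

```python
import numpy as np

def build_shape_model(train, var_keep=0.95):
    """PCA shape model (mean + principal modes) from healthy training shapes."""
    mean = train.mean(axis=0)
    U, s, Vt = np.linalg.svd(train - mean, full_matrices=False)
    ratio = np.cumsum(s**2) / np.sum(s**2)
    k = int(np.searchsorted(ratio, var_keep)) + 1
    return mean, Vt[:k]                      # retained modes as rows

def residual_map(shape_vec, mean, modes):
    """Per-point residual after projecting onto the healthy model."""
    b = modes @ (shape_vec - mean)           # mode coefficients
    fitted = mean + modes.T @ b              # closest 'healthy' shape
    return np.abs(shape_vec - fitted)        # large residual -> suspicious area

# synthetic healthy RPE-like profiles with two modes of natural variation
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 100)
healthy = np.array([(1 + 0.1*rng.standard_normal())*np.sin(2*np.pi*x)
                    + 0.1*rng.standard_normal()*np.cos(2*np.pi*x)
                    + 0.005*rng.standard_normal(100)
                    for _ in range(40)])
mean, modes = build_shape_model(healthy)

pathological = np.sin(2*np.pi*x)
pathological[45:55] += 0.5                   # druse-like local elevation
res = residual_map(pathological, mean, modes)
```

The residual stays small wherever the healthy model can explain the profile and peaks over the simulated druse, which is the signal used for fast visualization.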
Abstract:
The aim of this study was to validate the accuracy and reproducibility of a statistical shape model-based 2D/3D reconstruction method for determining cup orientation after total hip arthroplasty. Using a statistical shape model, this method reconstructs a patient-specific 3D model of the pelvis from a standard AP X-ray radiograph. Cup orientation (inclination and anteversion) is then calculated with respect to the anterior pelvic plane derived from the reconstructed model.
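Once the pelvis model, and with it the anterior pelvic plane, is reconstructed, inclination and anteversion reduce to simple trigonometry on the cup axis. A minimal sketch, assuming the cup axis is already expressed in an anterior-pelvic-plane coordinate frame (x lateral, y anterior, z cranial) and using common radiographic angle definitions; the frame convention and function names are assumptions, not taken from the paper:

```python
import numpy as np

def cup_orientation(axis):
    """Radiographic inclination/anteversion of a cup axis.

    axis: unit normal of the cup opening in an anterior-pelvic-plane
    frame (x lateral, y anterior, z cranial).
    """
    nx, ny, nz = axis / np.linalg.norm(axis)
    anteversion = np.degrees(np.arcsin(ny))       # tilt out of the coronal plane
    inclination = np.degrees(np.arctan2(nx, nz))  # in-coronal angle from z-axis
    return inclination, anteversion

# axis of a cup placed at 40 deg inclination, 20 deg anteversion
inc, av = np.radians(40.0), np.radians(20.0)
n = np.array([np.sin(inc)*np.cos(av),
              np.sin(av),
              np.cos(inc)*np.cos(av)])
print(cup_orientation(n))   # approximately (40.0, 20.0)
```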
Abstract:
In this paper, reconstruction of three-dimensional (3D) patient-specific models of a hip joint from two-dimensional (2D) calibrated X-ray images is addressed. Existing 2D-3D reconstruction techniques usually reconstruct a patient-specific model of a single anatomical structure without considering the relationship to its neighboring structures. Thus, when those techniques are applied to the reconstruction of patient-specific models of a hip joint, the reconstructed models may penetrate each other due to the narrowness of the hip joint space and hence do not represent a true hip joint of the patient. To address this problem we propose a novel 2D-3D reconstruction framework using an articulated statistical shape model (aSSM). Unlike previous work on constructing an aSSM, where the joint posture is modeled as articulation in a training set via statistical analysis, here it is modeled as a parametrized rotation of the femur around the joint center. The exact rotation of the hip joint as well as the patient-specific models of the joint structures, i.e., the proximal femur and the pelvis, are then estimated by optimally fitting the aSSM to a limited number of calibrated X-ray images. Taking models segmented from CT data as the ground truth, we conducted validation experiments on both plastic and cadaveric bones. Qualitatively, the experimental results demonstrated that the proposed 2D-3D reconstruction framework preserved the hip joint structure, and no model penetration was found. Quantitatively, average reconstruction errors of 1.9 mm and 1.1 mm were found for the pelvis and the proximal femur, respectively.
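The parametrized articulation described above (a rotation of the femur about the joint center rather than a statistically learned posture) is straightforward to write down. A hedged sketch using Rodrigues' rotation formula; the function name and toy point layout are assumptions:

```python
import numpy as np

def rotate_about_center(points, center, axis, angle):
    """Rotate model points about a joint center (Rodrigues' formula).
    points: (N, 3) array; axis: rotation axis; angle in radians."""
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    p = points - center
    rotated = (p*np.cos(angle)
               + np.cross(k, p)*np.sin(angle)
               + np.outer(p @ k, k)*(1 - np.cos(angle)))
    return rotated + center

# toy femur landmarks rotated 90 degrees about the z-axis at the joint center
femur_pts = np.array([[1.0, 0.0, 0.0], [2.0, 0.0, 0.0]])
joint_center = np.zeros(3)
rotated = rotate_about_center(femur_pts, joint_center, axis=[0, 0, 1],
                              angle=np.pi/2)
```

In the paper's framework this rotation angle (and axis) would be optimized jointly with the shape-model coefficients during the 2D-3D fit.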
Abstract:
Point Distribution Models (PDM) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem becomes especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) for multi-organ analysis, able to efficiently characterize the different inter-object relations as well as the particular locality of each object separately. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller anatomically significant regions within organs. The significant advantage of GEM-PDM over two previous approaches (PDM and hierarchical PDM), in terms of shape modeling accuracy and robustness to noise, has been successfully verified on two databases of sets of multiple organs: six subcortical brain structures, and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active-shape-model-based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
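The underlying PDM machinery referred to throughout is a PCA over aligned landmark vectors, with new shapes constrained to plausible mode coefficients. A minimal single-resolution sketch follows (not GEM-PDM itself, whose multi-resolution landmark clustering is the paper's contribution); the toy data and names are assumptions:

```python
import numpy as np

def fit_pdm(shapes, n_modes=1):
    """Point Distribution Model: mean + principal modes of landmark variation.
    shapes: (n_samples, 2*n_landmarks) rows of aligned landmark coordinates."""
    mean = shapes.mean(axis=0)
    cov = np.cov(shapes - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_modes]
    return mean, evecs[:, order], evals[order]

def plausible_shape(mean, P, lam, b):
    """x = mean + P b, with each b_i clipped to +/- 3 sqrt(lambda_i)."""
    b = np.clip(b, -3*np.sqrt(lam), 3*np.sqrt(lam))
    return mean + P @ b

# toy training set: circles of varying radius (one true mode of variation)
theta = np.linspace(0, 2*np.pi, 16, endpoint=False)
circle = np.concatenate([np.cos(theta), np.sin(theta)])
rng = np.random.default_rng(1)
shapes = np.outer(1 + 0.2*rng.standard_normal(50), circle)

mean, P, lam = fit_pdm(shapes)
new_shape = plausible_shape(mean, P, lam, np.array([10.0]))  # clipped to 3 sd
```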
Abstract:
We analyzed more than 200 OSIRIS NAC images with a pixel scale of 0.9-2.4 m/pixel of comet 67P/Churyumov-Gerasimenko (67P) that were acquired from onboard the Rosetta spacecraft in August and September 2014, using stereo-photogrammetric (SPG) methods. We derived improved spacecraft position and pointing data for the OSIRIS images and a high-resolution shape model that consists of about 16 million facets (2 m horizontal sampling) with a typical vertical accuracy at the decimeter scale. From this model, we derive a volume for the northern hemisphere of 9.35 ± 0.1 km³. With the assumption of a homogeneous density distribution, and taking into account the current uncertainty of the position of the comet's center of mass, we extrapolated this value to an overall volume of 18.7 ± 1.2 km³ and, with a current best estimate of 1.0 × 10¹³ kg for the mass, we derive a bulk density of 535 ± 35 kg/m³. Furthermore, we used SPG methods to analyze the rotational elements of 67P. The rotational period for August and September 2014 was determined to be 12.4041 ± 0.0004 h. For the orientation of the rotational axis (z-axis of the body-fixed reference frame) we derived a precession model with a half-cone angle of 0.14°, a cone center position at 69.54°/64.11° (RA/Dec J2000 equatorial coordinates), and a precession period of 10.7 days. For the definition of zero longitude (x-axis orientation), we finally selected the boulder-like Cheops feature on the big lobe of 67P and fixed its spherical coordinates to 142.35° right-hand-rule eastern longitude and -0.28° latitude. This completes the definition of the new Cheops reference frame for 67P. Finally, we defined cartographic mapping standards for common use and combined analyses of scientific results obtained not only within the OSIRIS team, but also within other groups of the Rosetta mission.
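The quoted bulk density follows directly from the mass estimate and the extrapolated volume; as a quick check of the arithmetic (values taken from the abstract):

```python
mass = 1.0e13                # kg, current best mass estimate for 67P
volume = 18.7 * 1.0e9        # m^3 (18.7 km^3)
density = mass / volume      # kg/m^3
print(round(density))        # -> 535
```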
Abstract:
The propagation of inhomogeneous, weakly nonlinear waves is considered in a cochlear model having two degrees of freedom that represent the transverse motions of the tectorial and basilar membranes within the organ of Corti. It is assumed that nonlinearity arises from the saturation of outer hair cell active force generation. I use multiple scale asymptotics and treat nonlinearity as a correction to a linear hydroelastic wave. The resulting theory is used to explain experimentally observed features of the response of the cochlear partition to a pure tone, including: the amplification of the response in a healthy cochlea vs a dead one; the less than linear growth rate of the response to increasing sound pressure level; and the amount of distortion to be expected at high and low frequencies at basal and apical locations, respectively. I also show that the outer hair cell nonlinearity generates retrograde waves.
Abstract:
Extraction and reconstruction of rectal wall structures from ultrasound images is helpful to surgeons in rectal clinical diagnosis and in 3-D reconstruction of rectal structures. The primary task is to extract the boundary of the muscular layers of the rectal wall. However, due to the low SNR of ultrasound imaging and the thin muscular layer structure of the rectum, this boundary detection task remains a challenge. An active contour model is an effective high-level model that has been used successfully for object representation and recognition in many image-processing applications. We present a novel multigradient field active contour algorithm with an extended ability for multiple-object detection, which overcomes some limitations of ordinary active contour models ("snakes"). The core of the algorithm is the proposed multigradient vector fields, which replace the image forces in the kinetic function to provide alternative constraints on the deformation of the active contour, thereby partially solving the initialization limitation of active contours for rectal wall boundary detection. An adaptive expanding force is also added to the model to help the active contour pass through homogeneous regions in the image. The efficacy of the model is explained and tested on the boundary detection of a ring-shaped image, a synthetic image, and an ultrasound image. The experimental results show that the proposed multigradient field active contour is feasible for multilayer boundary detection of the rectal wall.
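Independent of the specific multigradient fields proposed here, the basic contour-evolution loop (internal smoothing plus an external force sampled at the contour points) can be sketched as follows. The ring-attraction force is a synthetic stand-in for the paper's fields, and all names and parameters are assumptions:

```python
import numpy as np

def evolve_contour(pts, force_field, alpha=0.2, gamma=0.5, iters=100):
    """Explicit snake iteration: internal smoothing + external force.
    pts: (N, 2) closed contour; force_field(pts) -> (N, 2) force vectors."""
    for _ in range(iters):
        neighbors = 0.5*(np.roll(pts, 1, axis=0) + np.roll(pts, -1, axis=0))
        internal = neighbors - pts                     # discrete smoothing pull
        pts = pts + alpha*internal + gamma*force_field(pts)
    return pts

def toward_ring(pts, target_radius=2.0):
    """Synthetic external force attracting points to a circle of radius 2."""
    r = np.linalg.norm(pts, axis=1, keepdims=True)
    return (target_radius - r) * pts / r

theta = np.linspace(0, 2*np.pi, 32, endpoint=False)
start = np.stack([np.cos(theta), np.sin(theta)], axis=1)   # unit circle
final = evolve_contour(start, toward_ring)
radii = np.linalg.norm(final, axis=1)
```

The contour settles very close to the target ring; the small residual offset is the equilibrium between the shrinking internal term and the attracting external term.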
Abstract:
Health professionals in several areas (pediatricians, nutritionists, orthopedists, endocrinologists, dentists, etc.) use bone age assessment to diagnose growth disorders in children. Through interviews with specialists in diagnostic imaging and research in the literature, we identified the TW method (Tanner and Whitehouse) as the most efficient. Although it achieves better results than other methods, it is still not the most widely used, owing to the complexity of its use. This work presents the possibility of automating this method and thereby making its use more widespread. Two important steps in the evaluation of bone age are addressed: identification and classification of the regions of interest. Even on radiographs in which the positioning of the hands was not suitable for the TW method, the finger identification algorithm showed good results. Likewise, the use of AAM (Active Appearance Models) gave good results in identifying the regions of interest, even on radiographs with high contrast and brightness variation. Appearance-based classification of the epiphyses into their stages of development also showed good results, with the middle epiphysis of finger III chosen to demonstrate the performance. The final results show an average hit rate of 90%, and for the misclassified cases the error was only one stage away from the correct stage.
Abstract:
Facial expression is an important channel of human social communication. Facial expression recognition (FER) aims to perceive and understand the emotional states of humans based on information in the face. Building robust, high-performance FER systems that can work on real-world video is still a challenging task, owing to various unpredictable facial variations and complicated exterior environmental conditions, as well as the difficulty of choosing a suitable type of feature descriptor for extracting discriminative facial information. Facial variations caused by factors such as pose, age, gender, race and occlusion can exert a profound influence on robustness, while a suitable feature descriptor largely determines performance. Most attention in FER has been paid to addressing variations in pose and illumination. No approach has been reported for handling face localization errors, and relatively few for overcoming facial occlusions, although the significant impact of these two variations on performance has been demonstrated and highlighted in many previous studies. Many texture and geometric features have previously been proposed for FER. However, few comparison studies have been conducted to explore the performance differences between different features and to examine the performance improvement arising from fusion of texture and geometry, especially on data with spontaneous emotions. The majority of existing approaches are evaluated on databases with posed or induced facial expressions collected in laboratory environments, whereas little attention has been paid to recognizing naturalistic facial expressions in real-world data. This thesis investigates techniques for building robust, high-performance FER systems based on a number of established feature sets. It comprises contributions towards three main objectives: (1) Robustness to face localization errors and facial occlusions.
An approach is proposed to handle face localization errors and facial occlusions using Gabor-based templates. Template extraction algorithms are designed to collect a pool of local template features, and template matching is then performed to convert these templates into distances, which are robust to localization errors and occlusions. (2) Improvement of performance through feature comparison, selection and fusion. A comparative framework is presented to compare the performance of different features and different feature selection algorithms, and to examine the performance improvement arising from fusion of texture and geometry. The framework is evaluated for both discrete and dimensional expression recognition on spontaneous data. (3) Evaluation of performance in the context of real-world applications. A system is selected and applied to discriminating posed versus spontaneous expressions and to recognizing naturalistic facial expressions. A database is collected from real-world recordings and is used to explore feature differences between standard database images and real-world images, as well as between real-world images and real-world video frames. The performance evaluations are based on the JAFFE, CK, Feedtum, NVIE, Semaine and self-collected QUT databases. The results demonstrate high robustness of the proposed approach to the simulated localization errors and occlusions. Texture and geometry make different contributions to the performance of discrete and dimensional expression recognition, as well as to posed versus spontaneous emotion discrimination. These investigations provide useful insights into enhancing the robustness and performance of FER systems and putting them into real-world applications.
Abstract:
In this paper, we present a 3D face photography system based on a facial expression training dataset composed of both facial range images (3D geometry) and facial texture (2D photographs). The proposed system allows one to obtain a 3D geometry representation of a given face provided as a 2D photograph, which undergoes a series of transformations through the estimated texture and geometry spaces. In the training phase of the system, the facial landmarks are obtained by an active shape model (ASM) extracted from the 2D gray-level photographs. Principal component analysis (PCA) is then used to represent the face dataset, thus defining one orthonormal basis of texture and another of geometry. In the reconstruction phase, an input face image is given, to which the ASM is matched. The extracted facial landmarks and the face image are fed to the PCA basis transform, and a 3D version of the 2D input image is built. Experimental tests using a new dataset of 70 facial expressions belonging to ten subjects as the training set show rapidly reconstructed 3D faces that maintain spatial coherence consistent with human perception, thus corroborating the efficiency and applicability of the proposed system.
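The two-space PCA pipeline described above can be sketched end-to-end: build a texture basis and a geometry basis, then learn a map between the two coefficient spaces so that a new 2D input can be lifted to 3D. The following Python uses random synthetic data purely to exercise the mechanics; the dimensions, names and the least-squares coupling are assumptions, not the authors' exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)
n, d_tex, d_geo, k = 70, 40, 30, 5    # samples, texture dim, geometry dim, modes

# synthetic paired data: geometry linearly related to texture, plus noise
T = rng.standard_normal((n, d_tex))                   # texture vectors
G = (T @ rng.standard_normal((d_tex, d_geo))
     + 0.01*rng.standard_normal((n, d_geo)))          # geometry vectors

def pca_basis(X, k):
    """Mean and top-k principal directions (as rows) of a data matrix."""
    mu = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

mu_t, Pt = pca_basis(T, k)
mu_g, Pg = pca_basis(G, k)
Bt = (T - mu_t) @ Pt.T                 # texture coefficients of training set
Bg = (G - mu_g) @ Pg.T                 # geometry coefficients of training set
M = np.linalg.lstsq(Bt, Bg, rcond=None)[0]   # linear texture -> geometry map

def reconstruct_geometry(t_new):
    """Lift a new texture vector to geometry through the two PCA bases."""
    b_t = Pt @ (t_new - mu_t)
    return mu_g + (b_t @ M) @ Pg
```

On real data, T would hold vectorized face textures sampled at the ASM landmarks and G the corresponding range images; the least-squares coupling is one simple stand-in for the transformation between the two spaces.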
Abstract:
Automatic scan planning for magnetic resonance imaging of the knee aims at defining an oriented bounding box around the knee joint from sparse scout images in order to choose the optimal field of view for the diagnostic images and limit acquisition time. We propose a fast and fully automatic method to perform this task based on the standard clinical scout imaging protocol. The method is based on sequential Chamfer matching of 2D scout feature images with a three-dimensional mean model of femur and tibia. Subsequently, the joint plane separating femur and tibia, which contains both menisci, can be automatically detected using an information-augmented active shape model on the diagnostic images. This can assist the clinicians in quickly defining slices with standardized and reproducible orientation, thus increasing diagnostic accuracy and also comparability of serial examinations. The method has been evaluated on 42 knee MR images. It has the potential to be incorporated into existing systems because it does not change the current acquisition protocol.
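Chamfer matching itself scores a template against the distance transform of an edge image: the lower the mean distance under the template points, the better the pose. A small self-contained sketch (brute-force distance transform and translation-only search; the toy data and names are assumptions, and the paper's sequential 2D/3D matching is far richer):

```python
import numpy as np

def distance_transform(edges):
    """Brute-force Euclidean distance to the nearest edge pixel (small images)."""
    ys, xs = np.nonzero(edges)
    pts = np.stack([ys, xs], axis=1)                    # edge coordinates
    gy, gx = np.mgrid[:edges.shape[0], :edges.shape[1]]
    grid = np.stack([gy.ravel(), gx.ravel()], axis=1)
    d = np.sqrt(((grid[:, None, :] - pts[None, :, :])**2).sum(-1)).min(axis=1)
    return d.reshape(edges.shape)

def chamfer_score(dt, template_pts, offset):
    """Mean distance-transform value under the shifted template points."""
    p = template_pts + offset
    return dt[p[:, 0], p[:, 1]].mean()

# toy example: find the translation of an L-shaped template in an edge image
edges = np.zeros((20, 20), dtype=bool)
template = np.array([[0, 0], [1, 0], [2, 0], [2, 1], [2, 2]])
edges[tuple((template + [7, 5]).T)] = True              # true offset (7, 5)
dt = distance_transform(edges)

offsets = [(y, x) for y in range(17) for x in range(17)]
best = min(offsets, key=lambda o: chamfer_score(dt, template, np.array(o)))
print(best)   # -> (7, 5)
```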
Abstract:
The work began by gathering information on the different approaches taken to image annotation over time, from manual annotation of images, through annotation based on low-level features such as color and texture, to fully automatic annotation. We then studied articles on the different algorithms used for automatic image annotation. Since automatic annotation is a fairly open field, there is a large number of approaches. Taking into account the characteristics of the particular images on which the project would focus, the less suitable ones were discarded, whether because of a high computational cost, because they targeted a different type of image, or for other reasons. Finally, a shape-based algorithm (Active Shape Model) was found that was considered likely to work adequately. Basically, the different objects in the image are identified from a base contour generated from sample images, which is automatically modified to cover the desired area. Since the images used are all very similar in composition, the algorithm was expected to work well. The work started from a MATLAB implementation of the algorithm. To begin with, a set of chest radiographs that had already been annotated was obtained. The images contained contour data for both lungs, both clavicles and the heart. The first step was the creation of a series of MATLAB scripts that could:
- read and transform the images received in RAW, adapting them to the size and position of the annotated contours;
- read the text files containing the contour point data and turn them into MATLAB variables;
- join the transformed image with the points and save it in a format that the algorithm implementation could read.
After obtaining the necessary files, a model for each organ was created, using a small portion of the images for training. The resulting model was tested on several of the remaining images. However, considerable variation was found depending on the image used and the organ detected: in general, the lungs were detected quite accurately, whereas the clavicles and the heart gave more problems. To improve the method, a new model was trained using half of the available images. With this model, a significant improvement of the results can be seen.
Abstract:
This paper considers the problem of tissue classification in 3D MRI. More specifically, a new set of texture features based on phase information is used to perform segmentation of the bones of the knee. The phase information provides very good discrimination between bone and the surrounding tissues, but is usually not used because of phase unwrapping problems. We present a method to extract textural information from the phase that does not require phase unwrapping. The textural information extracted from the magnitude and the phase can be combined to perform tissue classification and used to initialise an active shape model, leading to a more precise segmentation.
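A common way to use phase without unwrapping is to encode it as cos/sin pairs, which are continuous across the ±π wrap; any local statistic computed on that encoding is unwrapping-free. This hedged sketch (not necessarily the paper's features; all names and data are assumptions) shows a local phase-coherence measure staying smooth across a wrap, where the variance of the raw wrapped phase spikes:

```python
import numpy as np

def box3(a):
    """3x3 box filter with wrap-around borders (cheap stand-in for smoothing)."""
    return sum(np.roll(np.roll(a, i, axis=0), j, axis=1)
               for i in (-1, 0, 1) for j in (-1, 0, 1)) / 9.0

def phase_coherence(complex_img):
    """Local phase coherence from a cos/sin encoding: no unwrapping needed."""
    phase = np.angle(complex_img)
    return np.sqrt(box3(np.cos(phase))**2 + box3(np.sin(phase))**2)

# a phase ramp that wraps through +/- pi: the coherence feature stays smooth
# at the wrap, while the local variance of the raw wrapped phase spikes there
ramp = np.outer(np.ones(16), np.linspace(0.0, 2.5*np.pi, 16))
img = np.exp(1j*ramp)
coh = phase_coherence(img)
wrapped = np.angle(img)
naive_var = box3(wrapped**2) - box3(wrapped)**2
```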
Abstract:
By combining shape-constraint equations with a general active contour model, this paper fuses the target shape and the active contour model into a unified energy-functional model, proposing a shape-preserving active contour model: the evolving curve remains within a particular class of shapes throughout its evolution. The model controls the shape of the evolving curve through the zero level set of a parametrized level-set function, so it not only achieves segmentation of the target, but also yields a quantitative description of the specific target. Based on the shape-preserving active contour model, a unified energy-functional model for elliptical target detection is established, the corresponding Euler-Lagrange ordinary differential equations are derived, and elliptical target detection is implemented with the level-set method. The model can be applied to optic disc segmentation in fundus images, iris detection, and camera calibration. Experimental results show that the model not only accurately detects the elliptical targets in a given image, but is also highly robust to noise, deformation, and occlusion.
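The parametrized level-set idea for elliptical targets can be illustrated directly: the level-set function is written in terms of the ellipse parameters (center, axes, orientation), so its zero level set is an ellipse by construction and evolution only ever adjusts those parameters. A minimal sketch of such a function; the names, grid and parameter values are assumptions:

```python
import numpy as np

def ellipse_level_set(grid_shape, cx, cy, a, b, theta):
    """Parametrized level-set function whose zero level set is an ellipse
    (phi > 0 inside, phi < 0 outside); evolving only (cx, cy, a, b, theta)
    keeps the curve an ellipse by construction."""
    y, x = np.mgrid[:grid_shape[0], :grid_shape[1]].astype(float)
    xr = (x - cx)*np.cos(theta) + (y - cy)*np.sin(theta)   # rotated frame
    yr = -(x - cx)*np.sin(theta) + (y - cy)*np.cos(theta)
    return 1.0 - (xr/a)**2 - (yr/b)**2

phi = ellipse_level_set((64, 64), cx=30.0, cy=32.0, a=15.0, b=8.0, theta=0.3)
inside = phi > 0      # pixel mask bounded by the zero level set
```

In the paper's formulation these five parameters would be the unknowns of the energy functional, updated via the derived Euler-Lagrange equations rather than set by hand as here.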