973 results for patient-specific biomechanical model
Abstract:
There are many ways to generate geometrical models for numerical simulation, and most of them start with a segmentation step to extract the boundaries of the regions of interest. This paper presents an algorithm to generate a patient-specific three-dimensional geometric model, based on a tetrahedral mesh, without an initial extraction of contours from the volumetric data. Using the information directly available in the data, such as gray levels, we built a metric to drive a mesh adaptation process. The metric is used to specify the size and orientation of the tetrahedral elements everywhere in the mesh. Our method, which produces anisotropic meshes, gives good results with synthetic and real MRI data. The resulting model quality has been evaluated qualitatively and quantitatively by comparing it with an analytical solution and with a segmentation made by an expert. Results show that our method produces, in 90% of the cases, meshes as good as or better than those of a similar isotropic method, based on the accuracy of the volume reconstruction for a given mesh size. Moreover, a comparison of the Hausdorff distances between adapted meshes of both methods and ground-truth volumes shows that our method decreases reconstruction errors faster. Copyright © 2015 John Wiley & Sons, Ltd.
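The comparison above relies on the Hausdorff distance between adapted meshes and ground-truth volumes. As a minimal sketch (not the authors' code), the symmetric Hausdorff distance between two surfaces sampled as point clouds could be computed as follows; the array names are illustrative.

```python
# Symmetric Hausdorff distance between two surfaces sampled as point clouds.
# Sketch only: `mesh_points` and `ground_truth_points` are assumed (N, 3) and
# (M, 3) NumPy arrays of vertex coordinates, not the paper's own data.
import numpy as np
from scipy.spatial import cKDTree

def directed_hd(a: np.ndarray, b: np.ndarray) -> float:
    """Max over points of `a` of the distance to the nearest point of `b`."""
    nearest_dists, _ = cKDTree(b).query(a)   # nearest-neighbor distance per point of a
    return float(nearest_dists.max())

def symmetric_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    return max(directed_hd(a, b), directed_hd(b, a))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mesh_points = rng.random((1000, 3))          # stand-in surface samples
    ground_truth_points = rng.random((1200, 3))
    print(symmetric_hausdorff(mesh_points, ground_truth_points))
```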
Abstract:
Purpose Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to derive 3D models is by direct segmentation of CT or MRI datasets. However, these have the disadvantages that they are expensive, time-consuming and/or induce high radiation doses to the patient. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image. Methods Our technique is based on a hybrid 2D/3D deformable registration strategy combining a landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows different stages of the reconstruction process. Four cadaveric lumbar spine segments (total twelve lumbar vertebrae) were used to validate the technique. To evaluate the reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and the associated ground truth model. Results Our technique could successfully reconstruct 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors of reconstructing surface models of all twelve vertebrae are shown in Fig. 2. It was found that the mean errors of the reconstructed surface models in comparison to their associated ground truths after iterative scaled rigid registrations ranged from 0.7 mm to 1.3 mm and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was found to be 1.0 mm. Conclusion An accurate, scaled 3D reconstruction of the lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model for 3D kinematic analysis of lumbar vertebrae, an extension of our previously-reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
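The mean and root-mean-squared surface errors reported above can be illustrated with a short sketch: given a reconstructed model and its ground-truth counterpart already brought into the same frame by a scaled rigid registration, nearest-vertex distances approximate the point-to-surface error. The array names are assumptions, not the authors' data.

```python
# Mean and RMS surface reconstruction error between a reconstructed vertebral
# model and its CT-derived ground truth, approximated by nearest-vertex
# distances. Sketch only: the scaled rigid alignment is assumed to have been
# applied beforehand (e.g. with a similarity ICP).
import numpy as np
from scipy.spatial import cKDTree

def surface_errors(reconstructed: np.ndarray, ground_truth: np.ndarray):
    """reconstructed: (N, 3), ground_truth: (M, 3) vertex arrays in mm."""
    dists, _ = cKDTree(ground_truth).query(reconstructed)
    mean_error = dists.mean()
    rms_error = np.sqrt((dists ** 2).mean())
    return mean_error, rms_error
```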
Abstract:
This paper presents a system for 3-D reconstruction of a patient-specific surface model from calibrated X-ray images. Our system requires two X-ray images of a patient with one acquired from the anterior-posterior direction and the other from the axial direction. A custom-designed cage is utilized in our system to calibrate both images. Starting from bone contours that are interactively identified from the X-ray images, our system constructs a patient-specific surface model of the proximal femur based on a statistical model based 2D/3D reconstruction algorithm. In this paper, we present the design and validation of the system with 25 bones. An average reconstruction error of 0.95 mm was observed.
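The reconstruction above relies on a statistical shape model; a common formulation, shown here only as an illustrative sketch, generates a surface instance as the mean shape plus a weighted sum of PCA modes. The variable names and the PCA-based parameterization are assumptions, not necessarily the exact model used in the paper.

```python
# Generating a surface instance from a PCA-based statistical shape model:
# instance = mean_shape + sum_k b_k * sqrt(lambda_k) * mode_k.
# Illustrative only; `mean_shape` (3N,), `modes` (3N, K) and `eigenvalues` (K,)
# would come from PCA over a training set of corresponding femur surfaces.
import numpy as np

def ssm_instance(mean_shape: np.ndarray,
                 modes: np.ndarray,
                 eigenvalues: np.ndarray,
                 coeffs: np.ndarray) -> np.ndarray:
    """Return an (N, 3) surface from shape coefficients `coeffs` of length K."""
    flat = mean_shape + modes @ (coeffs * np.sqrt(eigenvalues))
    return flat.reshape(-1, 3)
```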
Abstract:
BACKGROUND Aortic dissection is a severe pathological condition in which blood penetrates between layers of the aortic wall and creates a duplicate channel - the false lumen. This considerable change in aortic morphology alters hemodynamic features dramatically and, in the case of rupture, induces markedly high rates of morbidity and mortality. METHODS In this study, we establish a patient-specific computational model and simulate the pulsatile blood flow within the dissected aorta. The k-ω SST turbulence model is employed to represent the flow and the finite volume method is applied for the numerical solution. Our emphasis is on flow exchange between the true and false lumen during the cardiac cycle and on quantifying the flow across specific passages. Loading distributions including pressure and wall shear stress have also been investigated, and results of direct simulations are compared with solutions employing appropriate turbulence models. RESULTS Our results indicate that (i) high velocities occur at the periphery of the entries; (ii) for the case studied, approximately 40% of the blood flow passes the false lumen during a heartbeat cycle; (iii) higher pressures are found at the outer wall of the dissection, which may induce further dilation of the pseudo-lumen; (iv) the highest wall shear stresses occur around the entries, perhaps indicating the vulnerability of this region to further splitting; and (v) laminar simulations with adequately fine mesh resolutions, especially refined near the walls, can capture flow patterns similar to the (coarser mesh) turbulent results, although the absolute magnitudes computed are in general smaller. CONCLUSIONS The patient-specific model of aortic dissection provides detailed flow information of blood transport within the true and false lumen and quantifies the loading distributions over the aorta and dissection walls. This contributes to evaluating potential thrombotic behavior in the false lumen and is pivotal in guiding endovascular intervention. Moreover, as a computational study, mesh requirements to successfully evaluate the hemodynamic parameters have been proposed.
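The reported figure that roughly 40% of the blood passes through the false lumen corresponds to a ratio of time-integrated volumetric flow rates over one cardiac cycle. A minimal sketch, assuming the solver exports flow-rate waveforms for both lumina (the array names are hypothetical):

```python
# Fraction of blood volume passing through the false lumen over one cardiac
# cycle, from time-resolved volumetric flow rates exported by a CFD solver.
# Illustrative only: `t` (s), `q_true` and `q_false` (m^3/s) are assumed arrays
# sampled over a single heartbeat, not the paper's data.
import numpy as np

def false_lumen_fraction(t: np.ndarray, q_true: np.ndarray, q_false: np.ndarray) -> float:
    v_false = np.trapz(q_false, t)            # volume through the false lumen
    v_total = np.trapz(q_true + q_false, t)   # total volume through both lumina
    return v_false / v_total
```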
Abstract:
Modeling of tumor growth has been performed according to various approaches addressing different biocomplexity levels and spatiotemporal scales. Mathematical treatments range from partial differential equation based diffusion models to rule-based cellular level simulators, aiming at both improving our quantitative understanding of the underlying biological processes and, in the mid- and long term, constructing reliable multi-scale predictive platforms to support patient-individualized treatment planning and optimization. The aim of this paper is to establish a multi-scale and multi-physics approach to tumor modeling taking into account both the cellular and the macroscopic mechanical level. Therefore, an already developed biomodel of clinical tumor growth and response to treatment is self-consistently coupled with a biomechanical model. Results are presented for the free growth case of the imageable component of an initially point-like glioblastoma multiforme tumor. The composite model leads to significant tumor shape corrections that are achieved through the utilization of environmental pressure information and the application of biomechanical principles. Using the ratio of smallest to largest moment of inertia of the tumor material to quantify the effect of our coupled approach, we have found a tumor shape correction of 20% by coupling biomechanics to the cellular simulator as compared to a cellular simulation without preferred growth directions. We conclude that the integration of the two models provides additional morphological insight into realistic tumor growth behavior. Therefore, it might be used for the development of an advanced oncosimulator focusing on tumor types for which morphology plays an important role in surgical and/or radio-therapeutic treatment planning.
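The shape effect is quantified by the ratio of the smallest to the largest moment of inertia of the tumor material. A sketch of one way to compute such a descriptor from a binary voxel mask, via the eigenvalues of the inertia tensor (the mask array and unit voxel spacing are assumptions):

```python
# Ratio of smallest to largest principal moment of inertia of a voxelized
# tumor region, used as a simple shape descriptor. Sketch only: `mask` is an
# assumed 3-D boolean array of tumor voxels with unit spacing and unit density.
import numpy as np

def inertia_ratio(mask: np.ndarray) -> float:
    coords = np.argwhere(mask).astype(float)
    coords -= coords.mean(axis=0)                 # center of mass at the origin
    x, y, z = coords[:, 0], coords[:, 1], coords[:, 2]
    ixx = np.sum(y**2 + z**2)
    iyy = np.sum(x**2 + z**2)
    izz = np.sum(x**2 + y**2)
    ixy, ixz, iyz = -np.sum(x * y), -np.sum(x * z), -np.sum(y * z)
    tensor = np.array([[ixx, ixy, ixz],
                       [ixy, iyy, iyz],
                       [ixz, iyz, izz]])
    moments = np.linalg.eigvalsh(tensor)          # principal moments, ascending
    return moments[0] / moments[-1]
```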
Abstract:
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies. They comprise, e.g., Fundus photography, optical coherence tomography, computed tomography, and magnetic resonance imaging (MRI). Yet, these images are often complementary and express the same pathologies in different ways. Some pathologies are only visible in a particular modality. Thus, it is beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The goal of this paper is a fusion of Fundus photography with segmented MRI volumes. This adds information to the MRI that was not visible before, such as vessels and the macula. This paper's contributions include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.
Abstract:
A method for the construction of a patient-specific model of a scoliotic torso for surgical planning via inter-patient registration is presented. Magnetic Resonance Images (MRI) of a generic model are registered to surface topography (TP) and X-ray data of a test patient. A partial model is first obtained via thin-plate spline registration between TP and X-ray data of the test patient. The MRIs from the generic model are then fit into the test patient using articulated model registration between the vertebrae of the generic model’s MRIs in prone position and the test patient’s X-rays in standing position. A non-rigid deformation of the soft tissues is performed using a modified thin-plate spline constrained to maintain bone rigidity and to fit in the space between the vertebrae and the surface of the torso. Results show average Dice values of 0.975 ± 0.012 between the MRIs following inter-patient registration and the surface topography of the test patient, which is comparable to the average value of 0.976 ± 0.009 previously obtained following intra-patient registration. The results also show a significant improvement compared to rigid inter-patient registration. Future work includes validating the method on a larger cohort of patients and incorporating soft tissue stiffness constraints. The method developed can be used to obtain a geometric model of a patient including bone structures, soft tissues and the surface of the torso which can be incorporated in a surgical simulator in order to better predict the outcome of scoliosis surgery, even if MRI data cannot be acquired for the patient.
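Registration quality above is reported with Dice values; for reference, a minimal sketch of the Dice similarity coefficient between two binary volumes (the mask arrays are assumed inputs, not the study's data):

```python
# Dice similarity coefficient between two binary volumes, e.g. a registered
# MRI-derived torso mask and a surface-topography-derived mask. Sketch with
# assumed boolean arrays `a` and `b` of identical shape.
import numpy as np

def dice(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.astype(bool), b.astype(bool)
    intersection = np.logical_and(a, b).sum()
    return 2.0 * intersection / (a.sum() + b.sum())
```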
Abstract:
Reconstruction of patient-specific 3D bone surface from 2D calibrated fluoroscopic images and a point distribution model is discussed. We present a 2D/3D reconstruction scheme combining statistical extrapolation and regularized shape deformation with an iterative image-to-model correspondence-establishing algorithm, and show its application to reconstruct the surface of the proximal femur. The image-to-model correspondence is established using a non-rigid 2D point matching process, which iteratively uses a symmetric injective nearest-neighbor mapping operator and 2D thin-plate splines based deformation to find a fraction of best-matched 2D point pairs between features detected from the fluoroscopic images and those extracted from the 3D model. The obtained 2D point pairs are then used to set up a set of 3D point pairs such that we turn a 2D/3D reconstruction problem into a 3D/3D one. We designed and conducted experiments on 11 cadaveric femurs to validate the present reconstruction scheme. An average mean reconstruction error of 1.2 mm was found when two fluoroscopic images were used for each bone. It decreased to 1.0 mm when three fluoroscopic images were used.
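The correspondence step above uses a symmetric injective nearest-neighbor mapping operator. One plausible reading, sketched below with hypothetical arrays and not claimed to be the authors' exact operator, keeps only mutually nearest point pairs so that each point is matched at most once:

```python
# One plausible reading of a symmetric injective nearest-neighbor matching:
# keep only point pairs that are mutual nearest neighbors, so each point is
# used at most once. Sketch only; `image_pts` and `model_pts` are assumed
# (N, 2) and (M, 2) arrays of 2-D feature coordinates.
import numpy as np
from scipy.spatial import cKDTree

def mutual_nearest_pairs(image_pts: np.ndarray, model_pts: np.ndarray):
    _, img_to_model = cKDTree(model_pts).query(image_pts)  # nearest model point per image point
    _, model_to_img = cKDTree(image_pts).query(model_pts)  # nearest image point per model point
    pairs = [(i, j) for i, j in enumerate(img_to_model) if model_to_img[j] == i]
    return pairs  # list of (image_index, model_index) mutually nearest pairs
```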
Abstract:
Ophthalmologists typically acquire different image modalities to diagnose eye pathologies. They comprise, e.g., Fundus photography, Optical Coherence Tomography (OCT), Computed Tomography (CT) and Magnetic Resonance Imaging (MRI). Yet, these images are often complementary and express the same pathologies in different ways. Some pathologies are only visible in a particular modality. Thus, it is beneficial for the ophthalmologist to have these modalities fused into a single patient-specific model. The presented article's goal is the fusion of Fundus photography with segmented MRI volumes. This adds information to the MRI which was not visible before, such as vessels and the macula. This article's contributions include automatic detection of the optic disc, the fovea, and the optic axis, and an automatic segmentation of the vitreous humor of the eye.