9 results for Image-based mesh generation

at CiencIPCA - Instituto Politécnico do Cávado e do Ave, Portugal


Relevance:

100.00%

Publisher:

Abstract:

Pectus carinatum is a deformity of the chest wall, characterized by an anterior protrusion of the sternum, often corrected surgically for cosmetic reasons. This work presents an alternative to the current open-surgery option, proposing a novel technique based on a personalized orthosis. Two different processes for the orthosis' personalization are presented: one based on a 3D laser scan of the patient's chest, followed by reconstruction of the thoracic wall mesh using a radial basis function, and a second based on a computed tomography scan followed by a neighbouring-cells algorithm. The axial position where the orthosis is to be located is automatically calculated using a ray-triangle intersection method, whose outcome is input to a pseudo-Kochanek interpolating spline method that defines the orthosis curvature. Results show no significant differences between the patient's chest physiognomy and the curvature angle and size of the orthosis, allowing a better cosmetic outcome and less initial discomfort.
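
The abstract does not detail the intersection routine, but the axial-positioning step can be illustrated with the standard Moller-Trumbore ray-triangle test. The sketch below is a minimal NumPy illustration; the triangle, ray origin and direction are hypothetical values, not data from the paper.

```python
import numpy as np

def ray_triangle_intersect(origin, direction, v0, v1, v2, eps=1e-9):
    """Moller-Trumbore ray-triangle intersection.
    Returns the distance t along the ray, or None if there is no hit."""
    e1, e2 = v1 - v0, v2 - v0
    p = np.cross(direction, e2)
    det = np.dot(e1, p)
    if abs(det) < eps:          # ray parallel to the triangle plane
        return None
    inv_det = 1.0 / det
    s = origin - v0
    u = np.dot(s, p) * inv_det
    if u < 0.0 or u > 1.0:
        return None
    q = np.cross(s, e1)
    v = np.dot(direction, q) * inv_det
    if v < 0.0 or u + v > 1.0:
        return None
    t = np.dot(e2, q) * inv_det
    return t if t > eps else None

# Hypothetical example: cast a ray through one chest-wall mesh triangle along
# the anterior-posterior direction and keep the hit distance.
triangle = (np.array([0.0, 0.0, 0.0]),
            np.array([1.0, 0.0, 0.0]),
            np.array([0.0, 1.0, 0.0]))
t = ray_triangle_intersect(np.array([0.2, 0.2, 1.0]),
                           np.array([0.0, 0.0, -1.0]), *triangle)
print(t)  # 1.0 -> the ray meets the triangle one unit along its direction
```

In the pipeline described above, such a test would be run against every mesh triangle and the closest hit retained as the candidate axial position.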

Relevance:

100.00%

Publisher:

Abstract:

The success of a dental implant-supported prosthesis is directly linked to the accuracy obtained during the implant's pose estimation (position and orientation). Although traditional impression techniques and recent digital acquisition methods are acceptably accurate, a simultaneously fast, accurate and operator-independent methodology is still lacking. Hereto, an image-based framework is proposed to estimate the patient-specific implant's pose using cone-beam computed tomography (CBCT) and prior knowledge of the implanted model. The pose estimation is accomplished in a three-step approach: (1) a region of interest is extracted from the CBCT data using two operator-defined points along the implant's main axis; (2) a simulated CBCT volume of the known implanted model is generated through Feldkamp-Davis-Kress reconstruction and coarsely aligned to the defined axis; and (3) a voxel-based rigid registration is performed to optimally align the patient and simulated CBCT data, extracting the implant's pose from the optimal transformation. Three experiments were performed to evaluate the framework: (1) an in silico study using 48 implants distributed through 12 three-dimensional synthetic mandibular models; (2) an in vitro study using an artificial mandible with 2 dental implants acquired with an i-CAT system; and (3) two clinical case studies. The results showed positional errors of 67±34 μm and 108 μm, and angular misfits of 0.15±0.08° and 1.4°, for experiments 1 and 2, respectively. Moreover, in experiment 3, visual assessment of the clinical data showed a coherent alignment of the reference implant. Overall, a novel image-based framework for implants' pose estimation from CBCT data was proposed, showing accurate results in agreement with dental prosthesis modelling requirements.
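
As an illustration of a voxel-based rigid registration step, the sketch below optimizes a six-parameter rigid transform by maximizing normalized cross-correlation between two volumes, using SciPy for resampling and optimization. The random volumes, the choice of NCC as the similarity metric and the Powell optimizer are assumptions made for this example, not the framework's actual implementation.

```python
import numpy as np
from scipy.ndimage import affine_transform
from scipy.optimize import minimize

def rigid_matrix(params):
    """Build a rotation matrix and translation from (rx, ry, rz, tx, ty, tz)."""
    rx, ry, rz, tx, ty, tz = params
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx, np.array([tx, ty, tz])

def neg_ncc(params, fixed, moving):
    """Negative normalized cross-correlation between the fixed patient volume
    and the rigidly resampled simulated implant volume."""
    R, t = rigid_matrix(params)
    resampled = affine_transform(moving, R, offset=t, order=1)
    a = fixed - fixed.mean()
    b = resampled - resampled.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
    return -(a * b).sum() / denom

# Hypothetical volumes: 'patient' stands in for the CBCT ROI, 'simulated' for
# the reconstructed volume of the known implant model.
patient = np.random.rand(32, 32, 32)
simulated = np.random.rand(32, 32, 32)
result = minimize(neg_ncc, x0=np.zeros(6), args=(patient, simulated),
                  method="Powell")
print(result.x)  # rotations (rad) and translations (voxels) -> implant pose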

Relevance:

100.00%

Publisher:

Abstract:

Given the dynamic nature of cardiac function, correct temporal alignment of pre-operative models and intra-operative images is crucial for augmented reality in cardiac image-guided interventions. As such, the current study focuses on the development of an image-based strategy for temporal alignment of multimodal cardiac imaging sequences, such as cine Magnetic Resonance Imaging (MRI) or 3D Ultrasound (US). First, we derive a robust, modality-independent signal from the image sequences, estimated by computing the normalized cross-correlation between each frame in the temporal sequence and the end-diastolic frame. This signal resembles the left-ventricle (LV) volume curve over time, and its variation indicates the different temporal landmarks of the cardiac cycle. We then perform the temporal alignment of these surrogate signals derived from MRI and US sequences of the same patient through Dynamic Time Warping (DTW), allowing both sequences to be synchronized. The proposed framework was evaluated in 98 patients who had undergone both 3D+t MRI and US scans. The end-systolic frame could be accurately estimated as the minimum of the image-derived surrogate signal, presenting a relative error of 1.6±1.9% and 4.0±4.2% for the MRI and US sequences, respectively, thus supporting its association with key temporal instants of the cardiac cycle. The use of DTW reduces the desynchronization of the cardiac events in the MRI and US sequences, allowing multimodal cardiac imaging sequences to be temporally aligned. Overall, a generic, fast and accurate method for temporal synchronization of MRI and US sequences of the same patient was introduced. This approach could be straightforwardly used for the correct temporal alignment of pre-operative MRI information and intra-operative US images.
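
A minimal sketch of the two ingredients described above, assuming in-memory frame arrays: the frame-wise normalized cross-correlation with the end-diastolic (here, first) frame, and a basic dynamic time warping cost matrix. The random frames and helper names are hypothetical.

```python
import numpy as np

def surrogate_signal(frames):
    """Normalized cross-correlation of every frame with the end-diastolic
    (first) frame; the curve resembles the LV volume over the cardiac cycle."""
    ref = frames[0].astype(float)
    ref = ref - ref.mean()
    out = []
    for f in frames:
        f = f.astype(float) - f.mean()
        out.append((ref * f).sum() /
                   (np.linalg.norm(ref) * np.linalg.norm(f) + 1e-12))
    return np.asarray(out)

def dtw_cost(x, y):
    """Classic dynamic time warping between two 1-D signals.
    Returns the accumulated-cost matrix; the warping path that synchronizes
    the sequences is read by backtracking from its last entry."""
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(x[i - 1] - y[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[1:, 1:]

# Hypothetical sequences standing in for cine MRI and 3D US frames.
mri = [np.random.rand(64, 64) for _ in range(20)]
us = [np.random.rand(64, 64) for _ in range(25)]
D = dtw_cost(surrogate_signal(mri), surrogate_signal(us))
end_systole_mri = int(np.argmin(surrogate_signal(mri)))  # minimum of the signal
print(D[-1, -1], end_systole_mri)
```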

Relevance:

30.00%

Publisher:

Abstract:

Protein aggregation has become a widely accepted marker of many polyQ disorders, including Machado-Joseph disease (MJD), and is often used as a readout for disease progression and for the development of therapeutic strategies. The lack of good platforms to rapidly quantify protein aggregates in a wide range of disease animal models prompted us to generate a novel image processing application that automatically identifies and quantifies the aggregates in a standardized and operator-independent manner. We propose here a novel image processing tool to quantify the protein aggregates in a Caenorhabditis elegans (C. elegans) model of MJD. Confocal microscopy images were obtained from animals of different genetic conditions. The image processing application was developed using MeVisLab as a platform to process, analyse and visualize the images obtained from those animals. All segmentation algorithms were based on pixel intensity levels. The quantification of the aggregate area or number per total body area, as well as the number of aggregates per animal, was shown to be a reliable and reproducible measure of protein aggregation in C. elegans. The results obtained were consistent with the levels of aggregation observed in the images. In conclusion, this novel image processing application allows the non-biased, reliable and high-throughput quantification of protein aggregates in a C. elegans model of MJD, which may contribute to a significant improvement in the prognosis of treatment effectiveness for this group of disorders.
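
The intensity-based segmentation described above can be sketched as thresholding followed by connected-component labelling. The function below is a hypothetical, simplified stand-in for the MeVisLab pipeline; a fixed threshold and a precomputed body mask are assumed for illustration.

```python
import numpy as np
from scipy import ndimage

def quantify_aggregates(image, body_mask, threshold):
    """Segment bright protein aggregates by intensity thresholding and report
    their number and the aggregate area normalised by total body area."""
    aggregates = (image > threshold) & body_mask
    _, count = ndimage.label(aggregates)          # connected components
    body_area = body_mask.sum()
    aggregate_area = aggregates.sum()
    return count, aggregate_area / max(body_area, 1)

# Hypothetical confocal slice and animal body mask.
img = np.random.rand(256, 256)
body = np.ones((256, 256), dtype=bool)
n_aggregates, area_ratio = quantify_aggregates(img, body, threshold=0.95)
print(n_aggregates, area_ratio)
```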

Relevance:

30.00%

Publisher:

Abstract:

In recent years, it has become increasingly clear that neurodegenerative diseases involve protein aggregation, a process often used as a readout of disease progression and to develop therapeutic strategies. This work presents an image processing tool to automatically segment, classify and quantify these aggregates and the whole 3D body of the nematode Caenorhabditis elegans. A total of 150 image data sets, containing different slices, were captured with a confocal microscope from animals of distinct genetic conditions. Because of the animals' transparency, most of the slice pixels appeared dark, hampering direct reconstruction of their body volume. Therefore, for each data set, all slices were stacked into a single 2D image in order to determine a volume approximation. The gradient of this image was input to an anisotropic diffusion algorithm that uses Tukey's biweight as the edge-stopping function. The histogram median of the resulting image was used to dynamically determine a threshold level, which allows the determination of a smoothed exterior contour of the worm and, by thinning, the medial axis of the worm body. Based on the exterior contour diameter and the medial axis, random 3D points were then generated to produce an approximate volume mesh. The protein aggregates were subsequently segmented based on an iso-value and blended with the resulting volume mesh. The results obtained were consistent with qualitative observations in the literature, allowing non-biased, reliable and high-throughput quantification of protein aggregates. This may lead to a significant improvement in treatment planning and intervention for neurodegenerative diseases.
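
The diffusion step can be illustrated with a Perona-Malik style update that uses Tukey's biweight as the edge-stopping function, followed by the median-based threshold mentioned above. This is a simplified 2D sketch on a hypothetical stacked image, not the published pipeline (which diffuses the image gradient rather than the image itself).

```python
import numpy as np

def tukey_g(grad, sigma):
    """Tukey's biweight edge-stopping function: diffusion is switched off
    completely for differences larger than sigma."""
    g = np.zeros_like(grad)
    inside = np.abs(grad) <= sigma
    g[inside] = (1.0 - (grad[inside] / sigma) ** 2) ** 2
    return g

def anisotropic_diffusion(img, n_iter=20, sigma=0.1, step=0.2):
    """Perona-Malik style diffusion using Tukey's biweight, applied to a
    stacked 2D projection of the confocal slices."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # differences toward the four neighbours
        dn = np.roll(u, -1, axis=0) - u
        ds = np.roll(u, 1, axis=0) - u
        de = np.roll(u, -1, axis=1) - u
        dw = np.roll(u, 1, axis=1) - u
        u += step * (tukey_g(dn, sigma) * dn + tukey_g(ds, sigma) * ds +
                     tukey_g(de, sigma) * de + tukey_g(dw, sigma) * dw)
    return u

# Hypothetical stacked image, e.g. a projection of all slices of one data set.
stack = np.random.rand(128, 128)
smoothed = anisotropic_diffusion(stack)
level = np.median(smoothed)       # median-based threshold level
contour_mask = smoothed > level   # candidate exterior-contour region
```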

Relevance:

30.00%

Publisher:

Abstract:

Image segmentation is a ubiquitous task in medical image analysis, required to estimate morphological or functional properties of given anatomical targets. While automatic processing is highly desirable, image segmentation remains, to date, a supervised process in daily clinical practice. Indeed, challenging data often require user interaction to capture the required level of anatomical detail. To optimize the analysis of 3D images, the user should be able to efficiently interact with the result of any segmentation algorithm to correct any possible disagreement. Building on a previously developed real-time 3D segmentation algorithm, we propose in the present work an extension towards an interactive application where user information can be used online to steer the segmentation result. This enables a synergistic collaboration between the operator and the underlying segmentation algorithm, thus contributing to higher segmentation accuracy while keeping total analysis time competitive. To this end, we formalize the user interaction paradigm using a geometrical approach, in which the user input is mapped to a non-Cartesian space and used to drive the boundary towards the position provided by the user. Additionally, we propose a shape regularization term which improves the interaction with the segmented surface, thereby making the interactive segmentation process less cumbersome. The resulting algorithm offers competitive performance in terms of both segmentation accuracy and total analysis time. This contributes to a more efficient use of the existing segmentation tools in daily clinical practice. Furthermore, it compares favorably to state-of-the-art interactive segmentation software based on a 3D livewire-based algorithm.
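
As a rough illustration of the interaction paradigm, the sketch below maps a user-provided 3D point into spherical (non-Cartesian) coordinates around the segmentation centre and locally pulls a radial boundary representation towards it. The boundary parameterization, the Gaussian weighting and all names are assumptions made for this example, not the authors' formulation.

```python
import numpy as np

def to_spherical(point, centre):
    """Map a user-provided 3D point into spherical coordinates around the
    current segmentation centre."""
    d = np.asarray(point, float) - np.asarray(centre, float)
    r = np.linalg.norm(d)
    theta = np.arccos(d[2] / (r + 1e-12))   # polar angle
    phi = np.arctan2(d[1], d[0])            # azimuth
    return r, theta, phi

def steer_boundary(radius_map, thetas, phis, user_point, centre,
                   gain=1.0, width=0.3):
    """Pull the radial boundary r(theta, phi) toward the user point, with a
    Gaussian angular weight so only the neighbourhood of the click moves."""
    r_u, t_u, p_u = to_spherical(user_point, centre)
    # angular distance between each boundary sample and the user click
    ang = np.arccos(np.clip(
        np.sin(thetas) * np.sin(t_u) * np.cos(phis - p_u) +
        np.cos(thetas) * np.cos(t_u), -1.0, 1.0))
    w = gain * np.exp(-(ang / width) ** 2)
    return radius_map + w * (r_u - radius_map)

# Hypothetical boundary: a unit sphere sampled on a coarse angular grid.
thetas, phis = np.meshgrid(np.linspace(0.01, np.pi - 0.01, 32),
                           np.linspace(-np.pi, np.pi, 64), indexing="ij")
radius = np.ones_like(thetas)
updated = steer_boundary(radius, thetas, phis,
                         user_point=(0.0, 0.0, 1.4), centre=(0.0, 0.0, 0.0))
```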

Relevance:

30.00%

Publisher:

Abstract:

Background: The regulating mechanisms of branching morphogenesis in fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, outer contour and peripheral airway buds of lung explants during cellular development from microscopic images. Methods: The outer contour was defined using an adaptive and multi-scale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the number of branch ends in a skeletonized image of the inner lung epithelium. Results: The time for lung branching morphometric analysis was reduced by 98% compared with the manual method. Best results were obtained in the first two days of cellular development, with smaller standard deviations. No significant differences were found between the automatic and manual results on all culture days. Conclusions: The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing reliable comparison between different researchers.
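
The entropy-maximization criterion for the threshold level can be illustrated with a basic single-scale Kapur-style selection, as sketched below; the adaptive, multi-scale behaviour of the published method is not reproduced, and the example image is hypothetical.

```python
import numpy as np

def max_entropy_threshold(image, n_bins=256):
    """Kapur-style threshold selection: pick the grey level that maximizes the
    sum of the entropies of the background and foreground histograms."""
    hist, edges = np.histogram(image, bins=n_bins)
    p = hist.astype(float) / max(hist.sum(), 1)
    best_t, best_h = 0, -np.inf
    for t in range(1, n_bins):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 == 0 or p1 == 0:
            continue
        q0 = p[:t] / p0                       # normalised background histogram
        q1 = p[t:] / p1                       # normalised foreground histogram
        h0 = -np.sum(q0[q0 > 0] * np.log(q0[q0 > 0]))
        h1 = -np.sum(q1[q1 > 0] * np.log(q1[q1 > 0]))
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return edges[best_t]

# Hypothetical microscopy image of a lung explant.
img = np.random.rand(200, 200)
level = max_entropy_threshold(img)
outer_contour_mask = img > level   # candidate outer-contour region
```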

Relevance:

30.00%

Publisher:

Abstract:

The regulating mechanisms of branching morphogenesis in fetal rat lung explants have been an essential tool for molecular research. This work presents a new methodology to accurately quantify the epithelium, outer contour, and peripheral airway buds of lung explants during cellular development from microscopic images. Methods. The outer contour was defined using an adaptive and multiscale threshold algorithm whose level was automatically calculated based on an entropy maximization criterion. The inner lung epithelium was defined by a clustering procedure that groups small image regions according to the minimum description length principle and local statistical properties. Finally, the number of peripheral buds was counted as the number of branch ends in a skeletonized image of the inner lung epithelium. Results. The time for lung branching morphometric analysis was reduced by 98% compared with the manual method. Best results were obtained in the first two days of cellular development, with smaller standard deviations. No significant differences were found between the automatic and manual results on all culture days. Conclusions. The proposed method introduces a series of advantages related to its intuitive use and accuracy, making the technique suitable for images with different lighting characteristics and allowing reliable comparison between different researchers.
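
The bud-counting step can be sketched as skeletonization followed by counting skeleton pixels with exactly one neighbour. The helper below uses scikit-image and SciPy on a toy binary mask and is only an illustration of the idea, not the published implementation.

```python
import numpy as np
from scipy.ndimage import convolve
from skimage.morphology import skeletonize

def count_peripheral_buds(epithelium_mask):
    """Skeletonize the inner epithelial mask and count branch end-points,
    i.e. skeleton pixels with exactly one skeleton neighbour."""
    skeleton = skeletonize(epithelium_mask.astype(bool))
    kernel = np.ones((3, 3), dtype=int)
    kernel[1, 1] = 0
    neighbours = convolve(skeleton.astype(int), kernel, mode="constant")
    endpoints = skeleton & (neighbours == 1)
    return int(endpoints.sum())

# Hypothetical binary mask of the segmented lung epithelium.
mask = np.zeros((100, 100), dtype=bool)
mask[50, 10:90] = True     # main airway
mask[20:50, 30] = True     # one branch
print(count_peripheral_buds(mask))  # skeleton end-points = bud estimate
```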