950 results for Model accuracy
Abstract:
Purpose: Accurate three-dimensional (3D) models of lumbar vertebrae can enable image-based 3D kinematic analysis. The common approach to deriving 3D models is direct segmentation of CT or MRI datasets. However, these have the disadvantages of being expensive and time-consuming and/or of inducing high radiation doses to the patient. In this study, we present a technique to automatically reconstruct a scaled 3D lumbar vertebral model from a single two-dimensional (2D) lateral fluoroscopic image. Methods: Our technique is based on a hybrid 2D/3D deformable registration strategy combining landmark-to-ray registration with a statistical shape model-based 2D/3D reconstruction scheme. Fig. 1 shows the different stages of the reconstruction process. Four cadaveric lumbar spine segments (twelve lumbar vertebrae in total) were used to validate the technique. To evaluate the reconstruction accuracy, the surface models reconstructed from the lateral fluoroscopic images were compared to the associated ground-truth data derived from a 3D CT-scan reconstruction technique. For each case, a surface-based matching was first used to recover the scale and the rigid transformation between the reconstructed surface model and the ground-truth model. Results: Our technique successfully reconstructed 3D surface models of all twelve vertebrae. After recovering the scale and the rigid transformation between the reconstructed surface models and the ground-truth models, the average error of the 2D/3D surface model reconstruction over the twelve lumbar vertebrae was found to be 1.0 mm. The errors of reconstructing the surface models of all twelve vertebrae are shown in Fig. 2. The mean errors of the reconstructed surface models in comparison to their associated ground truths after iterative scaled rigid registrations ranged from 0.7 mm to 1.3 mm, and the root-mean-squared (RMS) errors ranged from 1.0 mm to 1.7 mm. The average mean reconstruction error was found to be 1.0 mm.
Conclusion: An accurate, scaled 3D reconstruction of the lumbar vertebra can be obtained from a single lateral fluoroscopic image using a statistical shape model-based 2D/3D reconstruction technique. Future work will focus on applying the reconstructed model to 3D kinematic analysis of lumbar vertebrae, an extension of our previously reported image-based kinematic analysis. The developed method also has potential applications in surgical planning and navigation.
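The surface-error figures above (mean and RMS point-to-surface distances after scaled rigid alignment) can be computed along these lines; this brute-force nearest-neighbour sketch is an illustration, not the authors' implementation:

```python
import numpy as np

def surface_errors(recon_pts, truth_pts):
    """Mean and RMS of nearest-neighbour distances from each
    reconstructed vertex to the ground-truth point cloud."""
    # pairwise distances: shape (n_recon, n_truth)
    d = np.linalg.norm(recon_pts[:, None, :] - truth_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)  # closest ground-truth point per vertex
    return nearest.mean(), np.sqrt((nearest ** 2).mean())
```

For identical point clouds both errors are zero; in the study, the comparison was made against CT-derived ground-truth surfaces.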
Abstract:
Seventeen bones (sixteen cadaveric bones and one plastic bone) were used to validate a method for reconstructing a surface model of the proximal femur from 2D X-ray radiographs and a statistical shape model constructed from thirty training surface models. Unlike previously introduced validation studies, where surface-based distance errors were used to evaluate reconstruction accuracy, here we propose to use errors measured on clinically relevant morphometric parameters. For this purpose, a program was developed to robustly extract those morphometric parameters from the thirty training surface models (the training population), from the seventeen surface models reconstructed from X-ray radiographs, and from the seventeen ground-truth surface models obtained either by a CT-scan reconstruction method or by a laser-scan reconstruction method. A statistical analysis was then performed to classify the seventeen test bones into two categories, normal cases and outliers, depending on the measured parameters of each test bone: if all parameters of a test bone were covered by the training population's parameter ranges, the bone was classified as normal; otherwise, as an outlier. Our experimental results showed that there was no statistically significant difference between the morphometric parameters extracted from the reconstructed surface models of the normal cases and those extracted from the reconstructed surface models of the outliers. Therefore, our statistical shape model-based reconstruction technique can be used to reconstruct not only the surface model of a normal bone but also that of an outlier bone.
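The range-based normal/outlier classification described above amounts to a simple per-parameter check; the parameter names and ranges below are hypothetical:

```python
def classify_bone(params, training_ranges):
    """Classify a test bone as 'normal' if every morphometric parameter
    falls inside the training population's range, else 'outlier'."""
    for name, value in params.items():
        lo, hi = training_ranges[name]
        if not (lo <= value <= hi):
            return "outlier"
    return "normal"
```
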
Abstract:
Interest in automatic volume meshing for finite element analysis (FEA) has grown since the advent of microfocus CT (μCT), whose high resolution allows the assessment of mechanical behaviour with high precision. Nevertheless, the basic meshing approach of generating one hexahedron per voxel produces jagged edges. To prevent this effect, smoothing algorithms have been introduced to enhance the topology of the mesh. However, whether smoothing also improves the accuracy of voxel-based meshes in clinical applications remains in question: there is a trade-off between smoothing and element quality, as excessive smoothing may produce distorted elements and reduce the accuracy of the mesh. In the present work, the influence of smoothing on the accuracy of voxel-based meshes in micro-FE was assessed. An accurate 3D model of a trabecular structure with known apparent mechanical properties was used as a reference model. Virtual CT scans of this reference model (with resolutions of 16, 32 and 64 μm) were then created and used to build voxel-based meshes of the microarchitecture. The effects of smoothing on the apparent mechanical properties of the voxel-based meshes, compared to the reference model, were evaluated. Apparent Young's moduli of the smoothed voxel-based meshes were significantly closer to those of the reference model for the 16 and 32 μm resolutions. Improvements were not significant at 64 μm, owing to loss of trabecular connectivity in the model. This study shows that smoothing offers a real benefit to voxel-based meshes used in micro-FE. It might also extend voxel-based meshing to other biomechanical domains where it was previously not used for lack of accuracy. As an example, this work will be used in the framework of the European project ContraCancrum, which aims at providing oncologists with patient-specific simulation of tumour development in the brain and lungs. For this type of clinical application, such fast, automatic and accurate mesh generation is of great benefit.
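The basic one-hexahedron-per-voxel meshing step (before any smoothing) can be sketched as follows; the node ordering and data structures are illustrative, not those of any particular FE package:

```python
import numpy as np

def voxels_to_hex_mesh(mask, voxel_size=1.0):
    """One 8-node hexahedron per True voxel (no smoothing).
    Returns (nodes, elements): node coordinates and per-element
    indices into the node list, with nodes shared between voxels."""
    node_id = {}
    nodes, elements = [], []
    # corner offsets of a unit cube, in a fixed order
    corners = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0),
               (0, 0, 1), (1, 0, 1), (1, 1, 1), (0, 1, 1)]
    for i, j, k in zip(*np.nonzero(mask)):
        elem = []
        for dx, dy, dz in corners:
            key = (i + dx, j + dy, k + dz)
            if key not in node_id:  # reuse nodes shared by adjacent voxels
                node_id[key] = len(nodes)
                nodes.append(tuple(c * voxel_size for c in key))
            elem.append(node_id[key])
        elements.append(elem)
    return np.array(nodes), np.array(elements)
```

Two face-adjacent voxels share four corner nodes, so they yield 12 nodes rather than 16; it is these shared, stair-stepped faces that the smoothing algorithms discussed above relocate.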
Abstract:
Osteoarticular allograft transplantation is a popular treatment method in wide surgical resections with large defects, and for this reason hospitals are building bone data banks. Optimal allograft selection from a bone bank is crucial to the surgical outcome and patient recovery. However, current approaches are very time-consuming, hindering efficient selection. We present an automatic method based on registration of femur bones to overcome this limitation. We introduce a new regularization term for the log-domain demons algorithm that replaces the standard Gaussian smoothing with a femur-specific polyaffine model. The polyaffine femur model is constructed from two affine (femoral head and condyles) and one rigid (shaft) transformation. Our main contribution in this paper is to show that the demons algorithm can be improved in specific cases with an appropriate model. We are not trying to find the optimal polyaffine model of the femur, but the simplest model with a minimal number of parameters. There is no need to optimize over different numbers of regions, boundaries, or choices of weights, since this fine-tuning is done automatically by a final demons relaxation step with Gaussian smoothing. The newly developed approach provides a clear, anatomically motivated modeling contribution through the specific three-component transformation model, and shows a performance improvement (in terms of anatomically meaningful correspondences) on 146 CT images of femurs compared to a standard multiresolution demons. In addition, this simple model improves the robustness of the demons while preserving its accuracy. The ground truth consists of manual measurements performed by medical experts.
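A polyaffine model fuses region-wise transforms into a single smooth deformation. The sketch below blends per-region affine displacements with Gaussian weights around assumed region centres; it is a simplified stand-in for the log-domain fusion used in the paper, and the centres and width are hypothetical:

```python
import numpy as np

def polyaffine_displacement(points, transforms, centres, sigma=30.0):
    """Blend region-wise affine transforms (e.g. head, shaft, condyles)
    into one smooth displacement field via normalized Gaussian weights
    centred on each region."""
    disp = np.zeros_like(points)
    wsum = np.zeros(len(points))
    for (A, t), c in zip(transforms, centres):
        # weight of this region's transform at each point
        w = np.exp(-np.sum((points - c) ** 2, axis=1) / (2 * sigma ** 2))
        disp += w[:, None] * (points @ A.T + t - points)
        wsum += w
    return disp / wsum[:, None]
```

When every region carries the same transform, the blend reduces to that transform, which is a quick sanity check on the weighting.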
Abstract:
We present an automatic method to segment brain tissues from volumetric MRI brain tumor images. The method is based on non-rigid registration of an average atlas in combination with a biomechanically justified tumor growth model to simulate soft-tissue deformations caused by the tumor mass-effect. The tumor growth model, which is formulated as a mesh-free Markov Random Field energy minimization problem, ensures correspondence between the atlas and the patient image, prior to the registration step. The method is non-parametric, simple and fast compared to other approaches while maintaining similar accuracy. It has been evaluated qualitatively and quantitatively with promising results on eight datasets comprising simulated images and real patient data.
Abstract:
A series of CCSD(T) single-point calculations on MP4(SDQ) geometries, together with the W1 model chemistry method, have been used to calculate ΔH° and ΔG° values for 17 gas-phase deprotonation reactions whose experimental values have reported accuracies within 1 kcal/mol. These values have been compared with previous calculations using the G3 and CBS model chemistries and two DFT methods. The most accurate CCSD(T) method uses the aug-cc-pVQZ basis set. Extrapolation of the aug-cc-pVTZ and aug-cc-pVQZ results yields the best agreement with experiment, with a standard deviation of 0.58 kcal/mol for ΔG° and 0.70 kcal/mol for ΔH°. Standard deviations from experiment for ΔG° and ΔH° for the W1 method are 0.95 and 0.83 kcal/mol, respectively. The G3 and CBS-APNO results are competitive with W1 and much less expensive. Any of the model chemistry methods, or the CCSD(T)/aug-cc-pVQZ method, can serve as a valuable check on the accuracy of experimental data reported in the National Institute of Standards and Technology (NIST) database.
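Extrapolation of an aug-cc-pVTZ/aug-cc-pVQZ pair toward the basis-set limit is commonly done with a two-point inverse-cube formula; the abstract does not state which scheme the authors used, so this is only the standard Helgaker-style form:

```python
def cbs_extrapolate(e_tz, e_qz, x=3, y=4):
    """Two-point inverse-cube extrapolation E(X) = E_CBS + A * X**-3,
    solved for E_CBS from energies at cardinal numbers x (TZ) and y (QZ)."""
    return (y ** 3 * e_qz - x ** 3 * e_tz) / (y ** 3 - x ** 3)
```

If the two energies already agree, the extrapolation returns that common value; energies that decay exactly as X⁻³ are recovered at their limit.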
Abstract:
This paper examines the accuracy of software-based on-line energy estimation techniques. It evaluates today's most widespread energy estimation model in order to investigate whether the current methodology of purely software-based energy estimation running on a sensor node itself can indeed reliably and accurately determine the node's energy consumption, independent of the particular node instance, the traffic load the node is exposed to, or the MAC protocol the node is running. The paper enhances today's widely used energy estimation model by integrating radio transceiver switches into the model, and proposes a methodology to find the optimal estimation model parameters. It shows by statistical validation with experimental data that the proposed model enhancement and parameter calibration methodology significantly increases estimation accuracy.
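A state-based energy estimation model of the kind discussed, extended with a per-transition cost for radio transceiver switches, reduces to a small accumulation; the currents and switch cost in the test are illustrative only, not calibrated values from the paper:

```python
def estimate_energy(voltage, state_times, state_currents,
                    n_switches=0, e_switch=0.0):
    """On-line energy estimate in joules: per-state charge
    (current x time) at the supply voltage, plus a fixed energy
    cost for each radio transceiver state switch."""
    energy = sum(voltage * state_currents[s] * t
                 for s, t in state_times.items())
    return energy + n_switches * e_switch
```
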
Abstract:
Transcatheter aortic valve implantation (TAVI) is a less invasive alternative to surgical aortic valve replacement (SAVR) for patients with symptomatic severe aortic stenosis (AS) and a high operative risk. Risk stratification plays a decisive role in the optimal selection of therapeutic strategies for AS patients. The accuracy of contemporary surgical risk algorithms for AS patients has spurred considerable debate especially in the higher risk patient population. Future trials will explore TAVI in patients at intermediate operative risk. During the design of the SURgical replacement and Transcatheter Aortic Valve Implantation (SURTAVI) trial, a novel concept of risk stratification was proposed based upon age in combination with a fixed number of predefined risk factors, which are relatively prevalent, easy to capture and with a reasonable impact on operative mortality. Retrospective application of this algorithm to a contemporary academic practice dealing with clinically significant AS patients allocates about one-fourth of these patients as being at intermediate operative risk. Further testing is required for validation of this new paradigm in risk stratification. Finally, the Heart Team, consisting of at least an interventional cardiologist and cardiothoracic surgeon, should have the decisive role in determining whether a patient could be treated with TAVI or SAVR.
Abstract:
The aim of this study was to validate the accuracy and reproducibility of a statistical shape model-based 2D/3D reconstruction method for determining cup orientation after total hip arthroplasty. With a statistical shape model, this method allows reconstructing a patient-specific 3D-model of the pelvis from a standard AP X-ray radiograph. Cup orientation (inclination and anteversion) is then calculated with respect to the anterior pelvic plane that is derived from the reconstructed model.
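Once the pelvis model is reconstructed, inclination and anteversion follow from the cup axis expressed in an anterior-pelvic-plane frame. The sketch below uses one common radiographic convention; the exact convention, the laterality handling, and the axis direction are assumptions, not details given in the abstract:

```python
import numpy as np

def cup_orientation(axis):
    """Radiographic inclination/anteversion of a cup axis given in an
    APP-aligned frame: x = lateral, y = anterior, z = superior.
    Anteversion is the tilt of the axis out of the anterior pelvic
    plane; inclination is the frontal-plane angle to the long axis."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    anteversion = np.degrees(np.arcsin(a[1]))
    inclination = np.degrees(np.arctan2(abs(a[0]), abs(a[2])))
    return inclination, anteversion
```
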
Abstract:
Image-guided microsurgery requires accuracies an order of magnitude higher than today's navigation systems provide. A critical step toward the achievement of such low-error requirements is a highly accurate and verified patient-to-image registration. With the aim of reducing target registration error to a level that would facilitate the use of image-guided robotic microsurgery on the rigid anatomy of the head, we have developed a semiautomatic fiducial detection technique. Automatic force-controlled localization of fiducials on the patient is achieved through the implementation of a robotic-controlled tactile search within the head of a standard surgical screw. Precise detection of the corresponding fiducials in the image data is realized using an automated model-based matching algorithm on high-resolution, isometric cone beam CT images. Verification of the registration technique on phantoms demonstrated that through the elimination of user variability, clinically relevant target registration errors of approximately 0.1 mm could be achieved.
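Paired-point rigid registration of the detected fiducials, together with its fiducial registration error (FRE), is classically solved in closed form via SVD (the Kabsch/Arun method). This is a minimal sketch of that standard step, not the system's actual implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid fit mapping fiducials src -> dst.
    Returns rotation R, translation t, and the RMS fiducial
    registration error of the fit."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)           # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # correct an improper rotation (reflection) if one appears
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = dc - R @ sc
    fre = np.sqrt(np.mean(np.sum((src @ R.T + t - dst) ** 2, axis=1)))
    return R, t, fre
```

Note that a small FRE does not by itself guarantee a small target registration error at the surgical site, which is why the phantom verification reported above matters.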
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a "Simple Committee" technique that used averaged predictions from a set of 10 pre-selected input spaces chosen by the training data and a "Minimum Variance Committee" technique where the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. This latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination Technique), Simple Committee Technique and Minimum Variance Committee Technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or the calibration of the underlying GT-Power model.
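The "Simple Committee" idea, averaging predictions across models and transformed input spaces, can be sketched in a few lines; the predictors and input-space transforms below are placeholders, not GT-Power-derived models:

```python
import numpy as np

def simple_committee(predictors, input_spaces, x):
    """Average the prediction of every model in every transformed
    input space: the 'Simple Committee' output for a query x."""
    preds = [model(space(x))
             for space in input_spaces
             for model in predictors]
    return float(np.mean(preds))
```
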
Abstract:
The receiver operating characteristic (ROC) curve is a prominent tool for characterizing the accuracy of a continuous diagnostic test. To account for factors that might influence test accuracy, various ROC regression methods have been proposed. However, as in any regression analysis, when the assumed models do not fit the data well, these methods may yield invalid and misleading results. To date, practical model-checking techniques suitable for validating existing ROC regression models are not available. In this paper, we develop cumulative-residual-based procedures to graphically and numerically assess the goodness-of-fit of some commonly used ROC regression models, and show how specific components of these models can be examined within this framework. We derive asymptotic null distributions for the residual process and discuss resampling procedures to approximate these distributions in practice. We illustrate our methods with a dataset from the Cystic Fibrosis registry.
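The cumulative-residual idea can be illustrated in a few lines: accumulate residuals over observations ordered by a covariate and take the supremum of the resulting process, which should fluctuate around zero if the model fits. This is a minimal sketch of the general idea, not the authors' exact statistic or its null distribution:

```python
import numpy as np

def cumulative_residual_process(covariate, residuals):
    """Cumulative residual process W(x) = sum of residuals with
    covariate <= x, evaluated at the observed covariate values.
    Returns the sorted covariates, the process, and sup|W|."""
    order = np.argsort(covariate)
    w = np.cumsum(residuals[order])
    return covariate[order], w, float(np.max(np.abs(w)))
```

A systematic trend in the covariate would pile residuals of one sign into the cumulative sum, inflating sup|W|; in practice the observed supremum is compared against resampled realizations under the fitted model.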
Abstract:
In evaluating the accuracy of diagnostic tests, it is common to apply two imperfect tests jointly or sequentially to a study population. In a recent meta-analysis of the accuracy of microsatellite instability testing (MSI) and traditional mutation analysis (MUT) in predicting germline mutations of the mismatch repair (MMR) genes, a Bayesian approach (Chen, Watson, and Parmigiani 2005) was proposed to handle missing data resulting from partial testing and the lack of a gold standard. In this paper, we demonstrate improved estimation of the sensitivities and specificities of MSI and MUT by using a nonlinear mixed model and a Bayesian hierarchical model, both of which account for heterogeneity across studies through study-specific random effects. The methods can be used to estimate the accuracy of two imperfect diagnostic tests in other meta-analyses when the prevalence of disease and the sensitivities and/or specificities of the diagnostic tests are heterogeneous among studies. Furthermore, simulation studies demonstrate the importance of carefully selecting appropriate random effects for the estimation of diagnostic accuracy measurements in this scenario.
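The per-study quantities that such meta-analytic models pool, sensitivity and specificity from a 2x2 table against a reference standard, are computed as follows (the counts in the test are illustrative):

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity of an imperfect test from a 2x2
    table: tp/fn are diseased subjects testing positive/negative,
    tn/fp are non-diseased subjects testing negative/positive."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity
```

The hierarchical models in the abstract treat these study-level proportions as realizations of study-specific random effects rather than pooling the raw counts directly.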
Abstract:
Studies of diagnostic accuracy require more sophisticated methods for their meta-analysis than studies of therapeutic interventions. A number of different, and apparently divergent, methods for meta-analysis of diagnostic studies have been proposed, including two alternative approaches that are statistically rigorous and allow for between-study variability: the hierarchical summary receiver operating characteristic (ROC) model (Rutter and Gatsonis, 2001) and bivariate random-effects meta-analysis (van Houwelingen and others, 1993), (van Houwelingen and others, 2002), (Reitsma and others, 2005). We show that these two models are very closely related, and define the circumstances in which they are identical. We discuss the different forms of summary model output suggested by the two approaches, including summary ROC curves, summary points, confidence regions, and prediction regions.
Abstract:
Computer-assisted orthopaedic surgery (CAOS) technology has recently been introduced to overcome problems resulting from acetabular component malpositioning in total hip arthroplasty. Available navigation modules can conceptually be categorized as computed tomography (CT) based, fluoroscopy based, or image-free. The current study presents a comprehensive analysis of the computer-assisted placement accuracy of acetabular cups, combining mathematical approaches, in vitro testing environments, and an in vivo clinical trial. A hybrid navigation approach combining image-free with fluoroscopic technology was chosen as the best compromise to CT-based systems; it introduces pointer-based digitization for easily accessible points and bi-planar fluoroscopy for deep-seated landmarks. From the in vitro data, maximum deviations were found to be 3.6 degrees for inclination and 3.8 degrees for anteversion relative to a pre-defined test position. The maximum difference between the intraoperatively calculated cup inclination and anteversion and the postoperatively measured position was 4 degrees and 5 degrees, respectively. These data coincide with worst-case scenario predictions from a statistical simulation model. The proper use of navigation technology can reduce the variability of cup placement to well within the surgical safe zone. Surgeons have to attend to a variety of error sources during the procedure, which may explain the reported steep learning curves for CAOS technologies.