950 results for "Model accuracy"
Abstract:
A major barrier to widespread clinical implementation of Monte Carlo dose calculation is the difficulty of characterizing the radiation source within a generalized source model. This work aims to develop a generalized three-component source model (target, primary collimator, flattening filter) for 6- and 18-MV photon beams that matches full phase-space data (PSD). Subsource-by-subsource comparison of dose distributions, using either the source PSD or the source model as input, allows accurate source characterization and has the potential to ease the commissioning procedure, since it indicates which subsource needs to be tuned. This source model is unique in that, compared to previous source models, it retains additional correlations among PS variables, which improves accuracy at nonstandard source-to-surface distances (SSDs). In our study, three-dimensional (3D) dose calculations were performed for SSDs ranging from 50 to 200 cm and for field sizes from 1 x 1 to 30 x 30 cm2, as well as for a 10 x 10 cm2 field positioned 5 cm off axis in each direction. The 3D dose distributions, using either the full PSD or the source model as input, were compared in terms of dose difference and distance to agreement. With this model, over 99% of the voxels agreed within +/-1% or 1 mm for the target, within +/-2% or 2 mm for the primary collimator, and within +/-2.5% or 2 mm for the flattening filter in all cases studied. For the overall dose distributions, 99% of the dose voxels agreed within 1% or 1 mm when the combined source model (including a charged particle source) rather than the full PSD was used as input. The accurate and general characterization of each photon source, together with knowledge of the subsource dose distributions, should facilitate source model commissioning by allowing the histogram distributions representing the subsources to be scaled during tuning.
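The dose-difference / distance-to-agreement (DTA) comparison described above can be sketched in miniature. This is an illustrative 1-D simplification under the 1%/1 mm criterion from the abstract, not the authors' 3D implementation; the dose profiles are invented.

```python
# Sketch of a composite dose-difference / distance-to-agreement (DTA) check,
# simplified to a 1-D dose profile. The 1%/1 mm criterion follows the abstract;
# the profiles themselves are illustrative only.

def passes(ref, test, spacing_mm, dd_pct=1.0, dta_mm=1.0):
    """Fraction of voxels where test agrees with ref within dd_pct percent,
    OR a reference voxel within dta_mm carries the same dose."""
    n = len(ref)
    reach = int(dta_mm / spacing_mm)  # DTA neighbourhood in voxels
    ok = 0
    for i in range(n):
        # dose-difference criterion (relative to the reference maximum)
        if abs(test[i] - ref[i]) <= dd_pct / 100.0 * max(ref):
            ok += 1
            continue
        # distance-to-agreement: does any nearby reference voxel match test[i]?
        lo, hi = max(0, i - reach), min(n, i + reach + 1)
        if any(abs(test[i] - ref[j]) <= 1e-6 * max(ref) for j in range(lo, hi)):
            ok += 1
    return ok / n

ref = [100.0, 98.0, 90.0, 70.0, 40.0, 10.0]
shifted = [100.0, 100.0, 98.0, 90.0, 70.0, 40.0]  # profile shifted by 1 voxel
print(passes(ref, shifted, spacing_mm=1.0))  # 1.0 -- caught by the 1 mm DTA
```

The shifted profile fails the pure dose-difference test in the steep-gradient region but passes via DTA, which is exactly why the composite criterion is used in steep dose gradients.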
Abstract:
RATIONALE AND OBJECTIVES: A feasibility study on measuring kidney perfusion with a contrast-free magnetic resonance (MR) imaging technique is presented. MATERIALS AND METHODS: A flow-sensitive alternating inversion recovery (FAIR) prepared true fast imaging with steady-state precession (TrueFISP) arterial spin labeling sequence was used on a 3.0-T MR scanner. The basis for quantification is a two-compartment exchange model proposed by Parkes that corrects for several assumptions in single-compartment standard models. RESULTS: Eleven healthy volunteers (mean age, 42.3 years; range, 24-55 years) were examined. The calculated mean renal blood flow values for the exchange model (109 +/- 5 [medulla] and 245 +/- 11 [cortex] ml/min/100 g) are in good agreement with the literature. Most importantly, the two-compartment exchange model exhibits a stabilizing effect on the evaluation of perfusion values when the finite permeability of the vessel wall and the venous outflow (fast solution) are considered: the corresponding values for the one-compartment standard model were 93 +/- 18 (medulla) and 208 +/- 37 (cortex) ml/min/100 g. CONCLUSION: This improvement will increase the accuracy of contrast-free imaging of kidney perfusion in the treatment of renovascular disease.
Abstract:
The numerical solution of the incompressible Navier-Stokes equations offers an alternative to experimental analysis of fluid-structure interaction (FSI). If such systems can be modeled accurately by numerical solution, considerable time, effort, and cost can be saved. These advantages are even more apparent for large structures such as bridges, high-rise buildings, or wind turbine blades with diameters as large as 200 meters. The modeling of such processes, however, involves complex multiphysics problems along with complex geometries. This thesis focuses on a novel vorticity-velocity formulation called the Kinematic Laplacian Equation (KLE) for solving the incompressible Navier-Stokes equations in such FSI problems. This scheme allows for the implementation of robust adaptive ordinary differential equation (ODE) time integration schemes, allowing each problem to be tackled as a separate module. The current algorithm for the KLE uses an unstructured quadrilateral mesh for spatial discretization, formed by dividing each triangle of an unstructured triangular mesh into three quadrilaterals. This research deals with determining a suitable measure of mesh quality based on the physics of the problems being tackled. This is followed by exploring methods to improve the quality of the quadrilateral elements obtained from the triangles, thereby improving the overall mesh quality. A series of numerical experiments was designed and conducted for this purpose, and the results were tested on different geometries with varying degrees of mesh density.
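One family of quadrilateral mesh quality measures the thesis could build on is angle-based. The sketch below is a generic illustrative metric (deviation of corner angles from 90 degrees), not the physics-based measure developed in the work itself.

```python
import math

# A minimal sketch of one possible angle-based quality measure for a
# quadrilateral element: q = 1 - max_i |theta_i - 90| / 90, so a perfect
# rectangle scores 1 and a degenerate element approaches 0. This metric is
# an assumption for illustration, not the measure developed in the thesis.

def quad_quality(pts):
    """pts: four (x, y) corners in order. Returns quality in [0, 1]."""
    worst = 0.0
    for i in range(4):
        a, b, c = pts[i - 1], pts[i], pts[(i + 1) % 4]
        v1 = (a[0] - b[0], a[1] - b[1])
        v2 = (c[0] - b[0], c[1] - b[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        cross = v1[0] * v2[1] - v1[1] * v2[0]
        angle = math.degrees(math.atan2(abs(cross), dot))  # interior angle
        worst = max(worst, abs(angle - 90.0))
    return max(0.0, 1.0 - worst / 90.0)

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (1, 0), (1.05, 0.1), (0, 0.1)]
print(quad_quality(square), quad_quality(sliver))
```

A mesh-wide quality score would then aggregate (e.g., take the minimum of) this per-element value over all quadrilaterals.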
Abstract:
OBJECTIVES: Implementation of an experimental model to compare cartilage MR imaging with histological analyses. MATERIAL AND METHODS: MRI was obtained at 1.5 and/or 3T from 4 patients awaiting total knee replacement prior to surgery. The timeframe between pre-operative MRI and knee replacement was within two days. Resected cartilage-bone samples were tagged with Ethi(R)-pins to reproduce the histological cutting course. Pre-operative scanning at 1.5T included the following parameters for fast low angle shot (FLASH: TR/TE/FA = 33 ms/6 ms/30 degrees, BW = 110 kHz, 120 x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice thickness) and double echo steady state (DESS: TR/TE/FA = 23.7 ms/6.9 ms/40 degrees, BW = 130 kHz, 120 x 120 mm FOV, 256 x 256 matrix, 0.65 mm slice thickness). At 3T, the scan parameters were: FLASH (TR/TE/FA = 12.2 ms/5.1 ms/10 degrees, BW = 130 kHz, 170 x 170 mm FOV, 320 x 320 matrix, 0.5 mm slice thickness) and DESS (TR/TE/FA = 15.6 ms/4.5 ms/25 degrees, BW = 200 kHz, 135 x 150 mm FOV, 288 x 320 matrix, 0.5 mm slice thickness). Imaging of the specimens was performed the same day at 1.5T. MRI (Noyes) and histological (Mankin) score scales were correlated using the paired t-test. Sensitivity and specificity for the detection of different grades of cartilage degeneration were assessed. Inter-reader and intra-reader reliability was determined using kappa analysis. RESULTS: Low correlation (sensitivity, specificity) was found for both sequences at normal to mild Mankin grades. Only moderate to severe changes were diagnosed with higher significance and specificity. The use of higher field strengths was advantageous for both protocols, with sensitivity values ranging from 13.6% to 93.3% (FLASH) and 20.5% to 96.2% (DESS). Kappa values ranged from 0.488 to 0.944. CONCLUSIONS: Correlating MR images with continuous histological slices was feasible using three-dimensional imaging, multi-planar reformatting, and marker pins. The capability of diagnosing early cartilage changes with high accuracy could not be proven for either FLASH or DESS.
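The evaluation statistics reported above (sensitivity, specificity, Cohen's kappa) can be written down compactly from a 2x2 confusion table. The counts below are invented for illustration; they are not the study's data.

```python
# Minimal sketch of the evaluation statistics used above: sensitivity,
# specificity, and Cohen's kappa from a 2x2 confusion table.
# The counts are made up for illustration.

def sens_spec(tp, fn, tn, fp):
    return tp / (tp + fn), tn / (tn + fp)

def cohens_kappa(tp, fn, tn, fp):
    n = tp + fn + tn + fp
    po = (tp + tn) / n                         # observed agreement
    p_yes = ((tp + fp) / n) * ((tp + fn) / n)  # chance agreement on positives
    p_no = ((tn + fn) / n) * ((tn + fp) / n)   # chance agreement on negatives
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

sens, spec = sens_spec(tp=29, fn=1, tn=45, fp=5)
print(round(sens, 3), round(spec, 3))  # 0.967 0.9
```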
Abstract:
Background mortality is an essential component of any forest growth and yield model. Forecasts of mortality contribute largely to the variability and accuracy of model predictions at the tree, stand, and forest level. In the present study, I implement and evaluate state-of-the-art techniques to increase the accuracy of individual tree mortality models, similar to those used in many of the current variants of the Forest Vegetation Simulator, using data from North Idaho and Montana. The first technique addresses methods to correct for bias induced by the measurement error typically present in competition variables. The second implements survival regression and evaluates its performance against the traditional logistic regression approach. I selected the regression calibration (RC) algorithm as a good candidate for addressing the measurement error problem. Two logistic regression models were fitted for each species: one ignoring the measurement error, which is the “naïve” approach, and the other applying RC. The models fitted with RC outperformed the naïve models in terms of discrimination when the competition variable was found to be statistically significant. The effect of RC was more obvious where the measurement error variance was large and for more shade-intolerant species. The process of model fitting and variable selection revealed that past emphasis on DBH as a predictor variable for mortality, while producing models with strong metrics of fit, may make models less generalizable. The evaluation of the error variance estimator developed by Stage and Wykoff (1998), which is core to the implementation of RC, under different spatial patterns and diameter distributions revealed that the Stage and Wykoff estimate notably overestimated the true variance in all simulated stands except those that are clustered. Results show a systematic bias even when all the assumptions made by the authors are satisfied. I argue that this is the result of the Poisson-based estimate ignoring the overlapping area of potential plots around a tree. The effects of the variance estimate, especially in the application phase, justify future efforts to improve its accuracy. The second technique implemented and evaluated is a survival regression model that accounts for the time-dependent nature of variables, such as diameter and competition variables, and for the interval-censored nature of data collected from remeasured plots. The performance of the model is compared with the traditional logistic regression model as a tool to predict individual tree mortality. Validation of both approaches shows that the survival regression approach discriminates better between dead and live trees for all species. In conclusion, I show that the proposed techniques do increase the accuracy of individual tree mortality models and are a promising first step towards the next generation of background mortality models. I have also identified the next steps to undertake in order to advance mortality models further.
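The regression-calibration idea above can be illustrated with a toy simulation: an error-prone covariate attenuates the naive slope toward zero, and RC corrects this by replacing the observed value with its best linear predictor of the truth. Linear regression stands in here for the study's logistic models, and all numbers are synthetic.

```python
import random

# Toy sketch of regression calibration (RC): the observed competition
# variable w = x + u carries measurement error u, which attenuates a naive
# regression slope; RC substitutes E[x | w] before fitting. Linear
# regression is used for simplicity; everything here is synthetic.

random.seed(1)
n = 5000
beta = 2.0
x = [random.gauss(0, 1) for _ in range(n)]          # true covariate
u = [random.gauss(0, 1) for _ in range(n)]          # measurement error
w = [xi + ui for xi, ui in zip(x, u)]               # observed covariate
y = [beta * xi + random.gauss(0, 0.5) for xi in x]  # outcome

def slope(pred, resp):
    mp = sum(pred) / len(pred)
    mr = sum(resp) / len(resp)
    cov = sum((p - mp) * (r - mr) for p, r in zip(pred, resp))
    var = sum((p - mp) ** 2 for p in pred)
    return cov / var

naive = slope(w, y)                  # attenuated toward 0
lam = 1.0 / (1.0 + 1.0)              # var(x) / (var(x) + var(u)), known here
w_rc = [lam * wi for wi in w]        # E[x | w] with zero means
corrected = slope(w_rc, y)
print(round(naive, 2), round(corrected, 2))
```

With unit error variance the naive slope shrinks to roughly beta/2, and the RC slope recovers roughly beta; in practice the error variance (here assumed known) must itself be estimated, which is where the Stage and Wykoff estimator enters.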
Abstract:
Skeletal muscle force evaluation is difficult to implement in a clinical setting. Muscle force is typically assessed through manual muscle testing, isokinetic/isometric dynamometry, or electromyography (EMG). Manual muscle testing is a subjective evaluation of a patient’s ability to move voluntarily against gravity and to resist force applied by an examiner. Muscle testing using dynamometers adds accuracy by quantifying the functional mechanical output of a limb. However, like manual muscle testing, dynamometry only provides estimates of the joint moment. EMG quantifies the neuromuscular activation signals of individual muscles and is used to infer muscle function. Despite the abundance of work performed to determine the degree to which EMG signals and muscle forces are related, the basic problem remains that EMG cannot provide a quantitative measurement of muscle force. Intramuscular pressure (IMP), the pressure applied by muscle fibers on interstitial fluid, has been considered as a correlate for muscle force. Numerous studies have shown that an approximately linear relationship exists between IMP and muscle force. A microsensor has recently been developed that is accurate, biocompatible, and appropriately sized for clinical use. While muscle force and pressure have been shown to be correlates, IMP has been shown to be non-uniform within the muscle. As it would not be practicable to evaluate experimentally how IMP is distributed, computational modeling may provide the means to fully evaluate IMP generation in muscles of various shapes and operating conditions. The work presented in this dissertation focuses on the development and validation of computational models of passive skeletal muscle and the evaluation of their performance for the prediction of IMP. A transversely isotropic, hyperelastic, and nearly incompressible model is evaluated along with a poroelastic model.
Abstract:
There is a need among engine manufacturers for computationally efficient and accurate predictive combustion modeling tools that can be integrated into engine simulation software for the assessment of combustion system hardware designs and the early development of engine calibrations. This thesis discusses the process of developing and validating, from experimental data, a combustion modeling tool for a gasoline direct-injection spark-ignition engine with variable valve timing, lift, and duration valvetrain hardware. Data were correlated and regressed using accepted methods for calculating the turbulent flow and flame propagation characteristics of an internal combustion engine. A non-linear regression modeling method was used to develop a combustion model that determines the fuel mass burn rate at multiple points during the combustion process. The computational fluid dynamics software Converge© was used to simulate and correlate the 3-D combustion system, port, and piston geometry with the turbulent flow development within the cylinder, so as to properly predict the experimentally measured turbulent flow parameters through the intake, compression, and expansion processes. The engine simulation software GT-Power© was then used to determine the 1-D flow characteristics of the engine hardware being tested and to correlate the regressed combustion modeling tool against experimental data to determine its accuracy. The results show that the combustion modeling tool accurately captures the trends in combustion sensitivity to turbulent flow, thermodynamic, and internal residual effects with changes in intake and exhaust valve timing, lift, and duration.
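The mass-fraction-burned curves that such combustion models predict are commonly summarized with a Wiebe function. The sketch below is a generic textbook illustration, not the regression model developed in the thesis; the efficiency parameter, form factor, and burn window are typical placeholder values.

```python
import math

# Generic Wiebe function for mass fraction burned (MFB) versus crank angle.
# Parameters a (efficiency), m (form factor), theta0 (start of combustion),
# and dtheta (burn duration) are illustrative textbook values, not values
# fitted in the thesis.

def wiebe_mfb(theta, theta0=-10.0, dtheta=40.0, a=6.9, m=2.0):
    """Mass fraction burned at crank angle theta (degrees ATDC)."""
    if theta <= theta0:
        return 0.0
    frac = (theta - theta0) / dtheta
    return 1.0 - math.exp(-a * frac ** (m + 1))

# MFB rises from 0 at ignition toward ~1 at the end of the burn window
print(round(wiebe_mfb(-10.0), 3), round(wiebe_mfb(30.0), 3))
```

A burn-rate model of the kind described would effectively predict quantities like the crank angles of 10%, 50%, and 90% MFB, which this curve parameterizes.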
Abstract:
Computer-aided surgery (CAS) allows for real-time intraoperative feedback, resulting in increased accuracy while reducing intraoperative radiation. CAS is especially useful for the treatment of certain pelvic ring fractures, which necessitate the precise placement of screws. Fluoroscopy-based CAS modules have been developed for many orthopedic applications. The integration of the isocentric fluoroscope even enables navigation using intraoperatively acquired three-dimensional (3D) data, though the scan volume and imaging quality are limited. Complicated and comprehensive pathologies in regions like the pelvis can necessitate a CT-based navigation system because of its larger field of view. To be accurate, the patient's anatomy must be registered and matched with the virtual object (CT data). The actual precision within the region of interest depends on the area of the bone where surface matching is performed. Conventional surface matching with a solid pointer requires extensive soft tissue dissection. This contradicts the primary purpose of CAS as a minimally invasive alternative to conventional surgical techniques. We therefore integrated an A-mode ultrasound pointer into the process of surface matching for pelvic surgery and compared it to the conventional method. Accuracy measurements were made in two pelvic models: a foam model submerged in water and one with attached porcine muscle tissue. Three different tissue depths were selected based on CT scans of 30 human pelves. The ultrasound pointer allowed for registration of virtually any point on the pelvis. This method of surface matching could be successfully integrated into CAS of the pelvis.
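Surface matching ultimately solves a rigid registration between sampled bone-surface points and the CT model. As a minimal illustration, here is the closed-form least-squares solution in 2-D (rotation plus translation); real CAS registration is 3-D and considerably more involved, so treat this purely as a sketch of the principle.

```python
import math

# Minimal 2-D rigid registration (Procrustes) sketch: find the rotation and
# translation that best map sampled points onto model points in a
# least-squares sense. Real surface matching is 3-D; this is illustrative.

def register_2d(src, dst):
    """Best-fit rotation angle and translation mapping src onto dst."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    s_cos = s_sin = 0.0
    for (sx, sy), (dx_, dy_) in zip(src, dst):
        ax, ay = sx - csx, sy - csy          # centred source point
        bx, by = dx_ - cdx, dy_ - cdy        # centred target point
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, tx, ty

src = [(0, 0), (1, 0), (1, 1), (0, 1)]
# src rotated 90 degrees and shifted by (2, 3)
dst = [(2, 3), (2, 4), (1, 4), (1, 3)]
theta, tx, ty = register_2d(src, dst)
print(round(math.degrees(theta), 1))  # 90.0
```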
Abstract:
A new anisotropic elastic-viscoplastic damage constitutive model for bone is proposed, using an eccentric elliptical yield criterion and nonlinear isotropic hardening. A micromechanics-based multiscale homogenization scheme proposed by Reisinger et al. is used to obtain the effective elastic properties of lamellar bone. The dissipative process in bone is modeled as viscoplastic deformation coupled to damage. The model is based on an orthotropic eccentric elliptical criterion in stress space. In order to simplify material identification, an eccentric elliptical isotropic yield surface was defined in strain space, which is transformed to a stress-based criterion by means of the damaged compliance tensor. Viscoplasticity is implemented by means of the continuous Perzyna formulation. Damage is modeled by a scalar function of the accumulated plastic strain, D(κ), reducing all elements of the stiffness matrix. A polynomial flow rule is proposed in order to capture the rate-dependent post-yield behavior of lamellar bone. A numerical algorithm to perform the back projection on the rate-dependent yield surface has been developed and implemented in the commercial finite element solver Abaqus/Standard as a user subroutine UMAT. A consistent tangent operator has been derived and implemented in order to ensure quadratic convergence. Correct implementation of the algorithm, convergence, and accuracy of the tangent operator were tested by means of strain- and stress-based single element tests. A finite element simulation of nanoindentation in lamellar bone was finally performed in order to demonstrate the capabilities of the newly developed constitutive model.
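The two key ingredients named above, Perzyna-type overstress flow and a scalar damage function D(κ) degrading the stiffness, can be reduced to a 1-D explicit sketch. All material constants and the exponential damage form below are invented for illustration; the actual model is anisotropic and implemented as an Abaqus UMAT.

```python
import math

# Much-reduced 1-D sketch of Perzyna viscoplasticity coupled to scalar
# damage D(kappa) that degrades the elastic stiffness. Constants and the
# exponential damage law are hypothetical, chosen only to show the
# rate-dependent response described in the abstract.

E, sigma_y, eta = 10e3, 100.0, 50.0      # stiffness, yield stress, viscosity

def damage(kappa, d_max=0.9, k0=0.05):
    """Scalar damage D(kappa), saturating at d_max (hypothetical form)."""
    return d_max * (1.0 - math.exp(-kappa / k0))

def run(strain_rate, t_end=0.02, dt=1e-5):
    eps = eps_p = kappa = 0.0
    sigma = 0.0
    for _ in range(int(t_end / dt)):
        eps += strain_rate * dt
        sigma = (1.0 - damage(kappa)) * E * (eps - eps_p)   # damaged elasticity
        over = sigma - sigma_y
        if over > 0.0:                   # Perzyna overstress drives flow
            rate = over / eta
            eps_p += rate * dt
            kappa += rate * dt
    return sigma, damage(kappa)

slow = run(strain_rate=1.0)
fast = run(strain_rate=100.0)
print(slow[0] < fast[0])  # True: higher rate sustains a higher stress
```

The overstress term is what makes the post-yield response rate dependent: faster loading sustains a larger stress above the yield surface, while damage accumulates with the plastic strain.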
Abstract:
The development of susceptibility maps for debris flows is of primary importance due to population pressure in hazardous zones. However, hazard assessment by process-based modelling at a regional scale is difficult due to the complex nature of the phenomenon, the variability of local controlling factors, and the uncertainty in modelling parameters. A regional assessment must consider a simplified approach that is not highly parameter dependent and that can provide zonation with minimum data requirements. A distributed empirical model has thus been developed for regional susceptibility assessments using essentially a digital elevation model (DEM). The model is called Flow-R, for Flow path assessment of gravitational hazards at a Regional scale (available free of charge at http://www.flow-r.org), and has been successfully applied to case studies in various countries with variable data quality. It provides a substantial basis for a preliminary susceptibility assessment at a regional scale. The model was also found relevant for assessing other natural hazards such as rockfall, snow avalanches, and floods. The model allows for automatic source area delineation, given user criteria, and for the assessment of the propagation extent based on various spreading algorithms and simple frictional laws. We developed a new spreading algorithm, an improved version of Holmgren's direction algorithm, that is less sensitive to small variations of the DEM, avoids over-channelization, and thus produces more realistic extents. The choices of datasets and algorithms are open to the user, which makes the model adaptable to various applications and levels of dataset availability. Among the possible datasets, the DEM is the only one that is strictly needed for both the source area delineation and the propagation assessment; its quality is of major importance for the accuracy of the results. We consider a 10 m DEM resolution to be a good compromise between processing time and quality of results. However, valuable results have still been obtained on the basis of lower-quality DEMs with 25 m resolution.
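The original Holmgren-style multiple-flow-direction weighting that Flow-R improves upon can be sketched on a single DEM cell: flow to each lower neighbour is weighted proportionally to (tan β)^x, where the exponent x controls divergence (x near 1 spreads widely, large x approaches single-direction D8). The 3x3 elevations and exponent below are illustrative; Flow-R's actual algorithm adds further refinements.

```python
# Toy sketch of Holmgren-style multiple-flow-direction weighting on one DEM
# cell. Flow to each downslope neighbour is proportional to tan(beta)^x.
# Elevations, cell size, and exponent are invented for illustration.

def holmgren_weights(z3x3, cellsize=10.0, x=4.0):
    """z3x3: 3x3 elevations; returns flow weights to the 8 neighbours
    (row-major order, centre skipped)."""
    zc = z3x3[1][1]
    raw = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == dj == 0:
                continue
            dist = cellsize * (2 ** 0.5 if di and dj else 1.0)
            tanb = (zc - z3x3[1 + di][1 + dj]) / dist  # downslope gradient
            raw.append(tanb ** x if tanb > 0 else 0.0)
    total = sum(raw)
    return [r / total for r in raw] if total else raw

dem = [[105.0, 104.0, 103.0],
       [104.0, 102.0, 100.0],
       [103.0, 101.0,  99.0]]
w = holmgren_weights(dem)
print(round(sum(w), 6))  # 1.0 -- all outflow is distributed downslope
```

With this DEM the steepest descent direction (toward the 99.0 corner) receives the largest share, and upslope neighbours receive none.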
Abstract:
Fourier transform infrared spectroscopy (FTIRS) can provide detailed information on the organic and minerogenic constituents of sediment records. Based on a large number of sediment samples of varying age (0-340,000 yrs) from very diverse lake settings in Antarctica, Argentina, Canada, Macedonia/Albania, Siberia, and Sweden, we have developed universally applicable calibration models for the quantitative determination of biogenic silica (BSi; n = 816), total inorganic carbon (TIC; n = 879), and total organic carbon (TOC; n = 3164) using FTIRS. These models are based on the differential absorbance of infrared radiation at specific wavelengths as the concentrations of the individual parameters vary, owing to the molecular vibrations associated with each parameter. The calibration models have low prediction errors, and the predicted values are highly correlated with conventionally measured values (R = 0.94-0.99). Robustness tests indicate that the accuracy of the newly developed FTIRS calibration models is similar to that of conventional geochemical analyses. Consequently, FTIRS offers a useful and rapid alternative to conventional analyses for the quantitative determination of BSi, TIC, and TOC. The rapidity, cost-effectiveness, and small sample size required enable FTIRS determination of geochemical properties to be undertaken at higher resolutions than would otherwise be possible with the same resource allocation, thus providing crucial sedimentological information for climatic and environmental reconstructions.
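The calibration idea can be reduced to its simplest form: absorbance at an informative wavenumber varies with concentration, so a model fitted on reference samples predicts new ones. Real FTIRS calibrations are multivariate (e.g., partial least squares on whole spectra); this univariate sketch with synthetic numbers only illustrates the principle.

```python
# Deliberately tiny sketch of a spectroscopic calibration: fit a line from
# absorbance to concentration on reference samples, then predict unknowns.
# Real FTIRS models are multivariate; all numbers here are synthetic.

def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return b, my - b * mx          # slope, intercept

# reference samples: (absorbance, conventionally measured BSi %) -- synthetic
absorb = [0.10, 0.20, 0.35, 0.50, 0.70]
bsi    = [2.1,  4.0,  7.2, 10.1, 13.9]
slope, icept = fit_line(absorb, bsi)
predict = lambda a: slope * a + icept
print(round(predict(0.40), 1))
```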
Abstract:
The prognosis for lung cancer patients remains poor. Five-year survival rates have been reported to be 15%. Studies have shown that dose escalation to the tumor can lead to better local control and subsequently better overall survival. However, the dose to a lung tumor is limited by normal tissue toxicity. The most prevalent thoracic toxicity is radiation pneumonitis. In order to determine a safe dose that can be delivered to the healthy lung, researchers have turned to mathematical models predicting the rate of radiation pneumonitis. However, these models rely on simple metrics based on the dose-volume histogram and are not yet accurate enough to be used for dose escalation trials. The purpose of this work was to improve the fit of predictive risk models for radiation pneumonitis and to show the dosimetric benefit of using the models to guide patient treatment planning. The study was divided into three specific aims. The first two specific aims were focused on improving the fit of the predictive model. In Specific Aim 1, we incorporated information about the spatial location of the lung dose distribution into a predictive model. In Specific Aim 2, we incorporated ventilation-based functional information into a predictive pneumonitis model. In the third specific aim, a proof-of-principle virtual simulation was performed in which a model-determined limit was used to scale the prescription dose. The data showed that, for our patient cohort, the fit of the model to the data was not improved by incorporating spatial information. Although we were not able to achieve a significant improvement in model fit using pre-treatment ventilation, we show some promising results indicating that ventilation imaging can provide useful information about lung function in lung cancer patients. The virtual simulation trial demonstrated that using a personalized lung dose limit derived from a predictive model results in a different prescription than that achieved with the clinically used plan, thus demonstrating the utility of a normal tissue toxicity model in personalizing the prescription dose.
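The "simple metrics based on the dose-volume histogram" mentioned above typically include mean lung dose (MLD) and Vx (the volume fraction receiving at least x Gy), which feed a logistic risk curve. The sketch below uses invented voxel doses and placeholder coefficients; it is not the fitted model from this work.

```python
import math

# Sketch of dose-volume-histogram metrics feeding a logistic pneumonitis
# risk model: mean lung dose (MLD) and V20. The voxel doses and the
# coefficients b0, b1 are invented placeholders for illustration.

def mld(doses):
    """Mean lung dose over all lung voxels (Gy)."""
    return sum(doses) / len(doses)

def v20(doses):
    """Fraction of lung volume receiving >= 20 Gy."""
    return sum(1 for d in doses if d >= 20.0) / len(doses)

def pneumonitis_risk(mean_dose, b0=-4.0, b1=0.15):
    """Generic logistic risk curve in MLD (hypothetical coefficients)."""
    return 1.0 / (1.0 + math.exp(-(b0 + b1 * mean_dose)))

voxel_doses = [5.0, 12.0, 18.0, 22.0, 30.0, 45.0, 8.0, 2.0]  # Gy, synthetic
print(round(mld(voxel_doses), 2), v20(voxel_doses))  # 17.75 0.375
```

A model-determined prescription limit, as in the virtual simulation above, would amount to inverting such a curve: find the largest MLD whose predicted risk stays below a chosen threshold.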
Abstract:
Computer vision-based food recognition could be used to estimate a meal's carbohydrate content for diabetic patients. This study proposes a methodology for automatic food recognition based on the Bag of Features (BoF) model. An extensive technical investigation was conducted to identify and optimize the best-performing components of the BoF architecture and to estimate the corresponding parameters. For the design and evaluation of the prototype system, a visual dataset of nearly 5,000 food images was created and organized into 11 classes. The optimized system computes dense local features using the scale-invariant feature transform on the HSV color space, builds a visual dictionary of 10,000 visual words using hierarchical k-means clustering, and finally classifies the food images with a linear support vector machine classifier. The system achieved a classification accuracy on the order of 78%, demonstrating the feasibility of the proposed approach on a very challenging image dataset.
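The core BoF step, quantizing local descriptors against a visual dictionary and pooling them into a normalized histogram for the classifier, can be shown in a stripped-down form. Real BoF uses SIFT descriptors, 10,000 k-means words, and an SVM; here 1-D "descriptors" and three words stand in purely for illustration.

```python
# Stripped-down sketch of the Bag-of-Features encoding step: each local
# descriptor is assigned to its nearest visual word, and the assignments
# are pooled into a normalized histogram that a classifier then consumes.
# 1-D descriptors and a 3-word dictionary stand in for SIFT + 10,000 words.

def nearest_word(desc, words):
    return min(range(len(words)), key=lambda i: abs(desc - words[i]))

def bof_histogram(descriptors, words):
    hist = [0.0] * len(words)
    for d in descriptors:
        hist[nearest_word(d, words)] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

words = [0.0, 5.0, 10.0]                  # stand-in visual dictionary
image_descs = [0.2, 0.1, 4.9, 5.3, 9.8]   # stand-in local features
h = bof_histogram(image_descs, words)
print(h)  # [0.4, 0.4, 0.2]
```

The resulting fixed-length histogram is what makes images of varying size and descriptor count comparable, which is precisely what the linear SVM needs as input.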
Abstract:
Purpose: Malposition of the acetabular component in total hip arthroplasty (THA) is a common surgical problem that can lead to hip dislocation and reduced range of motion and may result in early loosening. The aim of this study is to validate the accuracy and reproducibility of a single x-ray image-based 2D/3D reconstruction technique for determining cup inclination and anteversion against two different computed tomography (CT)-based measurement techniques. Methods: Cup anteversion and inclination of 20 patients after cementless primary THA were measured on standard anteroposterior (AP) radiographs with the single x-ray 2D/3D reconstruction program and compared with two different 3D CT-based analyses [Ground Truth (GT) and MeVis (MV) reconstruction models]. Results: The measurements from the single x-ray 2D/3D reconstruction technique were strongly correlated with both types of CT image-processing protocols for both cup inclination [R²=0.69 (GT); R²=0.59 (MV)] and anteversion [R²=0.89 (GT); R²=0.80 (MV)]. Conclusions: The single x-ray image-based 2D/3D reconstruction technique is a feasible method for assessing cup position on postoperative x-rays. CT scans remain the gold standard for more complex biomechanical evaluation when a lower tolerance limit (+/-2 degrees) is required.
Abstract:
Correct predictions of future blood glucose levels in individuals with Type 1 Diabetes (T1D) can be used to provide early warning of upcoming hypo-/hyperglycemic events and thus to improve the patient's safety. To increase prediction accuracy and efficiency, various approaches have been proposed that combine multiple predictors to produce results superior to those of single predictors. Three methods for model fusion are presented and comparatively assessed. Data from 23 T1D subjects under sensor-augmented pump (SAP) therapy were used in two adaptive data-driven models (an autoregressive model with output correction, cARX, and a recurrent neural network, RNN). Data fusion techniques based on i) Dempster-Shafer Evidential Theory (DST), ii) Genetic Algorithms (GA), and iii) Genetic Programming (GP) were used to merge the complementary performances of the prediction models. The fused output is used in a warning algorithm to issue alarms of upcoming hypo-/hyperglycemic events. The fusion schemes showed improved performance, with lower root mean square errors, lower time lags, and higher correlation. In the warning algorithm, median daily false alarms (DFA) of 0.25% and 100% correct alarms (CA) were obtained for both event types. The detection times (DT) before the occurrence of events were 13.0 and 12.1 min for hypo- and hyperglycemic events, respectively. Compared to the cARX and RNN models, and to a linear fusion of the two, the proposed fusion schemes represent a significant improvement.
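The linear-fusion baseline that the DST/GA/GP schemes are compared against can be sketched as choosing the convex weight on two predictors that minimizes RMSE on validation data. The glucose values and predictions below are synthetic stand-ins; the study's fusion schemes are adaptive and nonlinear.

```python
import math

# Sketch of a linear-fusion baseline for two glucose predictors: pick the
# convex combination weight that minimizes validation RMSE. All values are
# synthetic stand-ins for the cARX and RNN outputs in the study.

def rmse(pred, ref):
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(pred, ref)) / len(ref))

def fuse(p1, p2, w):
    return [w * a + (1 - w) * b for a, b in zip(p1, p2)]

ref  = [100.0, 120.0, 150.0, 180.0, 160.0]   # mg/dL, synthetic reference
carx = [ 95.0, 125.0, 145.0, 190.0, 150.0]   # stand-in cARX predictions
rnn  = [108.0, 115.0, 158.0, 172.0, 168.0]   # stand-in RNN predictions

# grid search over w in [0, 1]; including the endpoints guarantees the
# fused RMSE is never worse than the better single predictor
best_w = min((w / 20.0 for w in range(21)),
             key=lambda w: rmse(fuse(carx, rnn, w), ref))
fused_err = rmse(fuse(carx, rnn, best_w), ref)
print(fused_err <= min(rmse(carx, ref), rmse(rnn, ref)))  # True
```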