982 results for Gel Dosimetry, Monte Carlo Modelling
Abstract:
Aim: To optimise a set of exposure factors, at the lowest effective dose (E), for delineating spinal curvature with the modified Cobb method on a full-spine computed radiography (CR) image of a 5-year-old paediatric anthropomorphic phantom. Methods: Images were acquired while varying a set of parameters: position (antero-posterior (AP), postero-anterior (PA) and lateral), kilovoltage peak (kVp) (66-90), source-to-image distance (SID) (150-200 cm), broad focus and the use of a grid (grid in/out), to analyse the impact on E and image quality (IQ). IQ was analysed using two approaches: objective (contrast-to-noise ratio, CNR) and perceptual, using 5 observers. Monte Carlo modelling was used for dose estimation. Cohen's kappa coefficient was used to calculate inter-observer variability. The angle was measured using Cobb's method on lateral projections under different imaging conditions. Results: PA positioning gave the lowest effective dose (0.013 mSv) compared with AP (0.048 mSv) and lateral (0.025 mSv). The exposure parameters that allowed the lowest dose were 200 cm SID, 90 kVp, broad focus and grid out for paediatrics using an Agfa CR system. Thirty-seven images were assessed for IQ and thirty-two were classified as adequate. Cobb angle measurements varied between 16°±2.9° and 19.9°±0.9°. Conclusion: Cobb angle measurements can be performed at the lowest dose, with a low contrast-to-noise ratio. The variation in measurements in this case was ±2.9°, which is within the range of acceptable clinical error and has no impact on clinical diagnosis. Further work is recommended to increase the sample size and to develop a more robust perceptual IQ assessment protocol for observers.
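The abstract's objective image-quality metric can be illustrated with a short sketch. This is not the study's code: the ROI arrays, their sizes, and the pixel statistics below are hypothetical, and CNR is computed with one common definition (absolute mean difference divided by background standard deviation).

```python
import numpy as np

def cnr(signal_roi, background_roi):
    """Contrast-to-noise ratio between a signal ROI and a background ROI.

    One common definition: |mean_signal - mean_background| / std_background.
    """
    signal_roi = np.asarray(signal_roi, dtype=float)
    background_roi = np.asarray(background_roi, dtype=float)
    return abs(signal_roi.mean() - background_roi.mean()) / background_roi.std()

# Hypothetical ROIs standing in for regions of a CR image
rng = np.random.default_rng(0)
spine = rng.normal(120.0, 5.0, size=(32, 32))        # signal region (made up)
soft_tissue = rng.normal(100.0, 5.0, size=(32, 32))  # background region (made up)
value = cnr(spine, soft_tissue)
```

In a real assessment the ROIs would be drawn on the acquired phantom images for each exposure setting, and the CNR compared across settings alongside the perceptual scores.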
Abstract:
Over the last decade, technological developments in radiotherapy have considerably transformed treatment techniques. New nonstandard beams improve dose conformity to target volumes, but they also complicate dosimetric procedures. Since recent studies have demonstrated that current protocols are invalid for nonstandard beams, a new protocol applicable to the reference dosimetry of these beams is being prepared by the IAEA-AAPM. The first aim of this study is to characterise the factors responsible for non-unity corrections in nonstandard beam dosimetry, and thereby to provide conceptual solutions for minimising the magnitude of the corrections proposed in the new IAEA-AAPM formalism. The second aim of the study is to construct methods for estimating uncertainties accurately in nonstandard dosimetry, and to evaluate the realistic uncertainty levels achievable in clinical situations. The results of the study show that reporting the dose to the water-filled sensitive volume of the chamber reduces the correction by about half under high dose gradients. A theoretical relation between the correction factor of ideal nonstandard fields and the gradient factor of the reference field is obtained. In radiochromic film dosimetry, uncertainty levels of the order of 0.3% are obtained by applying a strict procedure, which demonstrates a potential interest for nonstandard beam measurements. The results also suggest that the experimental uncertainties of nonstandard beams must be taken seriously, whether during daily verification procedures or during calibration procedures. Moreover, these uncertainties could be a limiting factor in the new generation of protocols.
Abstract:
Distributions sensitive to the underlying event in QCD jet events have been measured with the ATLAS detector at the LHC, based on 37 pb⁻¹ of proton-proton collision data collected at a centre-of-mass energy of 7 TeV. Charged-particle mean pT and densities of all-particle ET and charged-particle multiplicity and pT have been measured in regions azimuthally transverse to the hardest jet in each event. These are presented both as one-dimensional distributions and with their mean values as functions of the leading-jet transverse momentum from 20 to 800 GeV. The correlation of charged-particle mean pT with charged-particle multiplicity is also studied, and the ET densities include the forward rapidity region; these features provide extra data constraints for Monte Carlo modelling of colour reconnection and beam-remnant effects, respectively. For the first time, underlying-event observables have been computed separately for inclusive jet and exclusive dijet event selections, allowing more detailed study of the interplay of multiple partonic scattering and QCD radiation contributions to the underlying event. Comparisons to the predictions of different Monte Carlo models show a need for further model tuning, but the standard approach is found to generally reproduce the features of the underlying event in both types of event selection.
Abstract:
The purpose of this research was to estimate the cost-effectiveness of two rehabilitation interventions for breast cancer survivors, each compared to a population-based, non-intervention group (n = 208). The two services included an early home-based physiotherapy intervention (DAART, n = 36) and a group-based exercise and psychosocial intervention (STRETCH, n = 31). A societal perspective was taken, and costs included those incurred by the health care system, the survivors and the community. Health outcomes included: (a) 'rehabilitated cases' based on changes in health-related quality of life between 6 and 12 months post-diagnosis, using the Functional Assessment of Cancer Therapy - Breast Cancer plus Arm Morbidity (FACT-B+4) questionnaire, and (b) quality-adjusted life years (QALYs) using utility scores from the Subjective Health Estimation (SHE) scale. Data were collected using self-reported questionnaires, medical records and program budgets. A Monte Carlo modelling approach was used to test for uncertainty in cost and outcome estimates. The proportion of rehabilitated cases was similar across the three groups. From a societal perspective, compared with the non-intervention group, the DAART intervention appeared to be the most efficient option, with an incremental cost of $1344 per QALY gained, whereas the incremental cost per QALY gained from the STRETCH program was $14,478. Both DAART and STRETCH are low-cost, low-technology, health-promoting programs representing excellent public health investments.
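The Monte Carlo handling of cost and outcome uncertainty described above can be sketched as follows. The distributions and all numbers are illustrative assumptions, not the study's data; the point is only how a distribution of the incremental cost-effectiveness ratio (ICER, incremental cost per QALY gained) is obtained by sampling.

```python
import numpy as np

def icer_samples(rng, n, d_cost_mean, d_cost_sd, d_qaly_mean, d_qaly_sd):
    """Draw Monte Carlo samples of the ICER (incremental cost / incremental
    QALYs) assuming Gaussian uncertainty on both increments."""
    d_cost = rng.normal(d_cost_mean, d_cost_sd, n)   # incremental cost ($)
    d_qaly = rng.normal(d_qaly_mean, d_qaly_sd, n)   # incremental QALYs
    return d_cost / d_qaly

rng = np.random.default_rng(42)
# Illustrative inputs only: a mean incremental cost of ~$134 and ~0.10 QALYs
# gained gives a point ICER of roughly $1340 per QALY.
samples = icer_samples(rng, 10_000, 134.0, 20.0, 0.10, 0.01)
```

Summaries of `samples` (median, percentile intervals) then express how sensitive the cost-effectiveness conclusion is to the assumed uncertainty.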
Abstract:
The development of a permanent, stable ice sheet in East Antarctica occurred during the middle Miocene, about 14 million years (Myr) ago. The middle Miocene therefore represents one of the distinct phases of rapid change in the transition from the "greenhouse" of the early Eocene to the "icehouse" of the present day. Carbonate carbon isotope records of the period immediately following the main stage of ice sheet development reveal a major perturbation in the carbon system, represented by the positive δ13C excursion known as carbon maximum 6 (CM6), which has traditionally been interpreted as reflecting increased burial of organic matter and atmospheric pCO2 drawdown. More recently, it has been suggested that the δ13C excursion records a negative feedback resulting from the reduction of silicate weathering and an increase in atmospheric pCO2. Here we present high-resolution multi-proxy (alkenone carbon and foraminiferal boron isotope) records of atmospheric carbon dioxide and sea surface temperature across CM6. Similar to previously published records spanning this interval, our records document a world of generally low (~300 ppm) atmospheric pCO2 at a time generally accepted to be much warmer than today. Crucially, they also reveal a pCO2 decrease with associated cooling, which demonstrates that the carbon burial hypothesis for CM6 is feasible and could have acted as a positive feedback on global cooling.
Abstract:
We measured the distribution in absolute magnitude - circular velocity space for a well-defined sample of 199 rotating galaxies of the Calar Alto Legacy Integral Field Area Survey (CALIFA) using their stellar kinematics. Our aim in this analysis is to avoid subjective selection criteria and to take volume and large-scale structure factors into account. Using stellar velocity fields instead of gas emission line kinematics allows us to include rapidly rotating early-type galaxies. Our initial sample contains 277 galaxies with available stellar velocity fields and growth curve r-band photometry. After rejecting 51 velocity fields that could not be modelled because of the low number of bins, foreground contamination, or significant interaction, we performed Markov chain Monte Carlo modelling of the velocity fields, from which we obtained the rotation curve and kinematic parameters and their realistic uncertainties. We performed an extinction correction and calculated the circular velocity v_circ accounting for the pressure support of a given galaxy. The resulting galaxy distribution on the M_r - v_circ plane was then modelled as a mixture of two distinct populations, allowing robust and reproducible rejection of outliers, a significant fraction of which are slow rotators. The selection effects are understood well enough that we were able to correct for the incompleteness of the sample. The 199 galaxies were weighted by volume and large-scale structure factors, which enabled us to fit a volume-corrected Tully-Fisher relation (TFR). More importantly, we also provide the volume-corrected distribution of galaxies in the M_r - v_circ plane, which can be compared with cosmological simulations. The joint distribution of the luminosity and circular velocity space densities, representative over the range of -20 > M_r > -22 mag, can place more stringent constraints on galaxy formation and evolution scenarios than linear TFR fit parameters or the luminosity function alone.
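The Markov chain Monte Carlo modelling step can be illustrated with a toy Metropolis sampler. This is a deliberate simplification, not the survey's pipeline: instead of a full velocity-field model it fits a straight line (a stand-in for a linear TFR-like relation) to synthetic data, and all data values and tuning constants are assumptions.

```python
import numpy as np

def metropolis_line_fit(x, y, sigma, n_steps=20000, step=0.05, seed=1):
    """Minimal Metropolis sampler for the parameters (a, b) of y = a + b*x
    with known Gaussian noise sigma. Returns the post-burn-in chain."""
    rng = np.random.default_rng(seed)

    def log_post(a, b):  # flat prior: log-posterior = log-likelihood
        return -0.5 * np.sum(((y - (a + b * x)) / sigma) ** 2)

    a, b = 0.0, 0.0
    lp = log_post(a, b)
    chain = []
    for _ in range(n_steps):
        a_new = a + rng.normal(0.0, step)   # symmetric random-walk proposal
        b_new = b + rng.normal(0.0, step)
        lp_new = log_post(a_new, b_new)
        if np.log(rng.random()) < lp_new - lp:  # Metropolis acceptance rule
            a, b, lp = a_new, b_new, lp_new
        chain.append((a, b))
    return np.array(chain[n_steps // 2:])       # discard first half as burn-in

# Synthetic "Tully-Fisher-like" data: y = -2 - 8*x plus noise (all made up)
rng = np.random.default_rng(0)
x = rng.uniform(2.0, 2.6, 50)
y = -2.0 - 8.0 * x + rng.normal(0.0, 0.3, 50)
chain = metropolis_line_fit(x, y, 0.3)
```

The spread of the chain gives the "realistic uncertainties" mentioned in the abstract; the real analysis samples a many-parameter velocity-field model rather than a line.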
Abstract:
The present paper addresses two major concerns that were identified when developing neural network based prediction models and which can limit their wider applicability in industry. The first problem is that neural network models are not readily available to a corrosion engineer. Therefore the first part of this paper describes a neural network model of CO2 corrosion which was created using a standard commercial software package and simple modelling strategies. It was found that such a model was able to capture practically all of the trends noticed in the experimental data with acceptable accuracy. This exercise has proven that a corrosion engineer could readily develop a neural network model such as the one described below for any problem at hand, given that sufficient experimental data exist. This applies even in cases when the understanding of the underlying processes is poor. The second problem arises in cases when not all the required inputs for a model are known, or when they can be estimated only with a limited degree of accuracy. It seems advantageous to have models that can take as input a range rather than a single value. One such model, based on the so-called Monte Carlo approach, is presented. A number of comparisons are shown which illustrate how a corrosion engineer might use this approach to rapidly test the sensitivity of a model to the uncertainties associated with the input parameters. (C) 2001 Elsevier Science Ltd. All rights reserved.
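The range-as-input idea in the second half of the abstract can be sketched in a few lines: sample the uncertain inputs from assumed ranges and push each sample through the model. The surrogate below is a de Waard-Milliams-style correlation standing in for the paper's neural network, and the input ranges are hypothetical.

```python
import numpy as np

def corrosion_rate(temp_c, pco2_bar):
    """Surrogate CO2 corrosion model (NOT the paper's neural network):
    a de Waard-Milliams-style correlation, mm/year."""
    return 10 ** (5.8 - 1710.0 / (temp_c + 273.15) + 0.67 * np.log10(pco2_bar))

def monte_carlo_sensitivity(model, n=10_000, seed=0):
    """Feed input *ranges* (uniform distributions) instead of point values
    and return the mean and spread of the predicted corrosion rates."""
    rng = np.random.default_rng(seed)
    temp = rng.uniform(40.0, 60.0, n)   # degC, assumed uncertainty range
    pco2 = rng.uniform(0.5, 2.0, n)     # bar, assumed uncertainty range
    rates = model(temp, pco2)
    return rates.mean(), rates.std()

mean, sd = monte_carlo_sensitivity(corrosion_rate)
```

Comparing the output spread for different input ranges is exactly the kind of rapid sensitivity test the abstract describes.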
Abstract:
The MCNPX code was used to calculate the TG-43U1 recommended parameters in water and in prostate tissue, in order to quantify the dosimetric impact in 30 patients treated with (125)I prostate implants when replacing the TG-43U1 formalism parameters calculated in water by those for a prostate-like medium in the planning system (PS), and to evaluate the uncertainties associated with Monte Carlo (MC) calculations. The prostate density was obtained from the CT scans of 100 patients with prostate cancer. The deviations between our results for water and the TG-43U1 consensus dataset values were -2.6% for prostate V100, -13.0% for V150, and -5.8% for D90; -2.0% for rectum V100, and -5.1% for D0.1; -5.0% for urethra D10, and -5.1% for D30. The corresponding differences between our water and prostate results were all under 0.3%. Uncertainty estimates were up to 2.9% for the gL(r) function, 13.4% for the F(r,θ) function, and 7.0% for Λ, mainly due to seed geometry uncertainties. Uncertainties in extracting the TG-43U1 parameters in the MC simulations, as well as in the literature comparison, are of the same order of magnitude as the differences between dose distributions computed for water and for a prostate-like medium. The selection of the parameters for the PS should be done carefully, as it may considerably affect the dose distributions. The internal geometry uncertainties of the seeds are a major limiting factor in the deduction of the MC parameters.
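For context, the TG-43U1 parameters named in the abstract (dose-rate constant Λ, radial dose function gL(r), anisotropy function) combine multiplicatively in the 1D formalism; a minimal sketch with a point-source geometry function follows. The g(r) and anisotropy tables below are made-up placeholders, not consensus data for any real seed.

```python
import numpy as np

# TG-43 1D formalism (point-source approximation, r0 = 1 cm):
#   D(r) = S_K * Lambda * (r0 / r)^2 * g(r) * phi_an(r)
R_TAB = np.array([0.5, 1.0, 2.0, 3.0, 5.0])          # cm
G_TAB = np.array([1.04, 1.00, 0.87, 0.74, 0.50])     # radial dose fn (made up)
PHI_TAB = np.array([0.97, 0.94, 0.90, 0.87, 0.82])   # 1D anisotropy (made up)

def dose_rate(r_cm, air_kerma_strength, dose_rate_constant):
    """Dose rate in water (cGy/h) at distance r_cm from a point source."""
    g = np.interp(r_cm, R_TAB, G_TAB)
    phi = np.interp(r_cm, R_TAB, PHI_TAB)
    return air_kerma_strength * dose_rate_constant * (1.0 / r_cm) ** 2 * g * phi

# Example: S_K = 0.5 U; Lambda = 0.965 cGy/(h*U), a typical magnitude for a
# 125I seed (illustrative, not a value computed in this study)
d1 = dose_rate(1.0, 0.5, 0.965)
```

Swapping the water-derived tables for prostate-derived ones is precisely the substitution whose dosimetric impact the abstract quantifies.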
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide (131)I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, N_r: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N_r^(-1/2).
From dose volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared with the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values agreed to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and the results compared between the adjusted spherical and cellular models showed similar agreement. The TCP values derived from the macroscopic tumor model were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
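The per-cell dose adjustment described in METHODS (bin-average dose plus a Gaussian adjustment of relative width N_r^(-1/2)) can be sketched directly. The linear cell-survival model, the radiosensitivity alpha, and all numbers below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

def tcp_from_bin_doses(bin_doses, cells_per_bin, n_r, alpha=0.3, seed=0):
    """Assign each cell its bin's mean absorbed dose plus a Gaussian
    adjustment of relative width 1/sqrt(N_r), then compute TCP as
    exp(-expected surviving cells) (Poisson TCP).

    alpha (Gy^-1) is an assumed linear radiosensitivity, not a study value.
    """
    rng = np.random.default_rng(seed)
    surviving = 0.0
    for d_bin, n_cells in zip(bin_doses, cells_per_bin):
        # per-cell doses: bin average with N_r^(-1/2) relative Gaussian spread
        cell_doses = d_bin * (1.0 + rng.normal(0.0, n_r ** -0.5, n_cells))
        cell_doses = np.clip(cell_doses, 0.0, None)
        surviving += np.exp(-alpha * cell_doses).sum()
    return float(np.exp(-surviving))

# Hypothetical tumor: 10 radial bins, uniform 40 Gy mean dose, 1000 cells/bin,
# N_r = 100 decays per cell
tcp = tcp_from_bin_doses([40.0] * 10, [1000] * 10, n_r=100)
```

The statistical spread matters: because cell kill is exponential in dose, the low-dose tail of the Gaussian dominates survival, which is why smoothing it away at low N_r biases TCP.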
Abstract:
The aim of the present article was to perform three-dimensional (3D) single photon emission tomography-based dosimetry in radioimmunotherapy (RIT) with (90)Y-ibritumomab-tiuxetan. A custom MATLAB-based code was used to process the 3D images and to compare average 3D doses to lesions and organs at risk (OARs) with those obtained with planar (2D) dosimetry. Our 3D dosimetry procedure was validated through preliminary phantom studies using a body phantom consisting of a lung insert and six spheres of various sizes. In the phantom study, the accuracy of dose determination of our imaging protocol decreased when the object volume fell below approximately 5 mL. The poorest results were obtained for the 2.58 mL and 1.30 mL spheres, for which the dose error evaluated on corrected images with respect to the theoretical dose value was -12.97% and -18.69%, respectively. Our 3D dosimetry protocol was subsequently applied to four patients before RIT with (90)Y-ibritumomab-tiuxetan, for a total of 5 lesions and 4 OARs (2 livers, 2 spleens). In the patient study, without the implementation of a volume recovery technique, tumor absorbed doses calculated with the voxel-based approach were systematically lower than those calculated with the planar protocol, with an average underestimation of -39% (range from -13.1% to -62.7%). After volume recovery, the dose differences reduced significantly, with an average deviation of -14.2% (range from -38.7% to +3.4%; 1 overestimation, 4 underestimations). Organ dosimetry overestimated the dose delivered to the liver and spleen in one case and underestimated it in the other. However, for both the 2D and 3D approaches, absorbed doses to organs per unit administered activity are comparable with the most recent literature findings.
Abstract:
Gel electrophoresis can be used to separate nicked circular DNA molecules of equal length but forming different knot types. At low electric fields, complex knots drift faster than simpler knots. However, at high electric fields the opposite is the case, and simpler knots migrate faster than more complex knots. Using Monte Carlo simulations, we investigate the reasons for this reversal of the relative order of electrophoretic mobility of DNA molecules forming different knot types. We observe that at high electric fields the simulated knotted molecules tend to hang over the gel fibres and must pass over a substantial energy barrier to slip over the impeding gel fibre. At low electric fields the interactions of drifting molecules with the gel fibres are weak, and there are no significant energy barriers that oppose the detachment of knotted molecules from transverse gel fibres.
Abstract:
Gel electrophoresis allows one to separate knotted DNA (nicked circular) of equal length according to the knot type. At low electric fields, complex knots, being more compact, drift faster than simpler knots. Recent experiments have shown that the drift velocity dependence on the knot type is inverted when changing from low to high electric fields. We present a computer simulation on a lattice of a closed, knotted, charged DNA chain drifting in an external electric field in a topologically restricted medium. Using a Monte Carlo algorithm, the dependence of the electrophoretic migration of the DNA molecules on the knot type and on the electric field intensity is investigated. The results are in qualitative and quantitative agreement with electrophoretic experiments done under conditions of low and high electric fields.
Abstract:
Tumors in non-Hodgkin lymphoma (NHL) patients are often proximal to the major blood vessels in the abdomen or neck. In external-beam radiotherapy, these tumors present a challenge because imaging resolution prevents the beam from being targeted to the tumor lesion without also irradiating the artery wall. This problem has led to potentially life-threatening delayed toxicity. Because radioimmunotherapy has resulted in long-term survival of NHL patients, we investigated whether the absorbed dose (AD) to the artery wall in radioimmunotherapy of NHL is of potential concern for delayed toxicity. SPECT resolution is not sufficient to enable dosimetric analysis of anatomic features on the scale of the aortic wall thickness. Therefore, we present a model of aortic wall toxicity based on data from 4 patients treated with (131)I-tositumomab. METHODS: Four NHL patients with periaortic tumors were administered pretherapeutic (131)I-tositumomab. Abdominal SPECT and whole-body planar images were obtained at 48, 72, and 144 h after tracer administration. Blood-pool activity concentrations were obtained from regions of interest drawn on the heart on the planar images. Tumor and blood activity concentrations, scaled to therapeutic administered activities (both standard and myeloablative), were input into a geometry and tracking model (GEANT, version 4) of the aorta. The simulated energy deposited in the arterial walls was collected and fitted, and the AD and biologic effective dose values to the aortic wall and tumors were obtained for standard therapeutic and hypothetical myeloablative administered activities. RESULTS: Arterial wall ADs from standard therapy were lower (0.6-3.7 Gy) than those typical of external-beam therapy, as were the tumor ADs (1.4-10.5 Gy). The ratios of tumor AD to arterial wall AD were greater for radioimmunotherapy by a factor of 1.9-4.0.
For myeloablative therapy, artery wall ADs were in general less than those typical for external-beam therapy (9.4-11.4 Gy for 3 of 4 patients) but comparable for 1 patient (32.6 Gy). CONCLUSION: Blood vessel radiation dose can be estimated using the software package 3D-RD combined with GEANT modeling. The dosimetry analysis suggested that arterial wall toxicity is highly unlikely in standard dose radioimmunotherapy but should be considered a potential concern and limiting factor in myeloablative therapy.
Abstract:
This work presents models and methods that have been used to produce forecasts of population growth, with emphasis on the reliability bounds of the model forecasts. The Leslie model and various versions of logistic population models are presented, with references to the literature and to several studies; much of the relevant methodology has been developed in the biological sciences. The Leslie modelling approach uses current trends in mortality, fertility, migration and emigration; the population is divided into age groups and the model is given as a recursive system. Another group of models is based on straightforward extrapolation of census data, where trajectories of a simple exponential growth function and of logistic models are used to produce the forecast. The work presents the basics of Leslie-type modelling and of the logistic models, including multi-parameter logistic functions; the latter model is also analysed from the point of view of model reliability. A Bayesian approach and the MCMC method are used to create error bounds for the model predictions.
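The Leslie projection described above is a matrix recursion: fertilities on the first row, survival probabilities on the sub-diagonal, and the age-structured population vector multiplied through repeatedly. A minimal sketch with hypothetical rates:

```python
import numpy as np

def project_leslie(fertility, survival, n0, steps):
    """Project an age-structured population with a Leslie matrix.

    fertility: per-age-class fertility rates (first row of the matrix)
    survival:  survival probabilities from class i to i+1 (sub-diagonal)
    n0:        initial population vector; returns the vector after `steps`.
    """
    k = len(fertility)
    L = np.zeros((k, k))
    L[0, :] = fertility
    L[np.arange(1, k), np.arange(k - 1)] = survival
    n = np.asarray(n0, dtype=float)
    for _ in range(steps):
        n = L @ n   # one projection interval
    return n

# Hypothetical three-age-class population (rates are made up)
fert = [0.0, 1.2, 0.8]
surv = [0.9, 0.7]
n10 = project_leslie(fert, surv, [100, 80, 60], steps=10)
```

The long-run growth rate is the dominant eigenvalue of the Leslie matrix; in a Bayesian treatment like the one the work describes, the fertility and survival entries would carry distributions, yielding error bounds on the projected trajectory.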