67 results for markov chains monte carlo methods
Abstract:
As part of a project to use the long-lived (T1/2 = 1200 a) (166m)Ho as a reference source in its reference ionisation chamber, IRA standardised a commercially acquired solution of this nuclide using the 4πβ-γ coincidence and 4πγ (NaI) methods. The (166m)Ho solution supplied by Isotope Product Laboratories was measured to have about 5% europium impurities (3% (154)Eu, 0.94% (152)Eu and 0.9% (155)Eu). Holmium therefore had to be separated from europium, and this was carried out by means of ion-exchange chromatography. The holmium fractions were collected without europium contamination: 162 h-long HPGe gamma measurements indicated no europium impurity (detection limits of 0.01% for (152)Eu and (154)Eu, and 0.03% for (155)Eu). The primary measurement of the purified (166m)Ho solution with the 4π (PC) β-γ coincidence technique was carried out at three gamma energy settings: a window around the 184.4 keV peak and gamma thresholds at 121.8 and 637.3 keV. The results show very good self-consistency, and the activity concentration of the solution was evaluated to be 45.640 ± 0.098 kBq/g (0.21% with k=1). The activity concentration of this solution was also measured by integral counting with a well-type 5″ × 5″ NaI(Tl) detector and efficiencies computed by Monte Carlo simulations using the GEANT code. These measurements were mutually consistent, and the resulting weighted average of the 4π NaI(Tl) method was found to agree within 0.15% with the result of the 4πβ-γ coincidence technique. An ampoule of this solution and the measured value of the concentration were submitted to the BIPM as a contribution to the Système International de Référence.
Abstract:
High throughput genome (HTG) and expressed sequence tag (EST) sequences are currently the most abundant nucleotide sequence classes in the public database. The large volume, high degree of fragmentation and lack of gene structure annotations prevent efficient and effective searches of HTG and EST data for protein sequence homologies by standard search methods. Here, we briefly describe three newly developed resources that should make the discovery of interesting genes in these sequence classes easier in the future, especially for biologists without access to a powerful local bioinformatics environment. trEST and trGEN are regularly regenerated databases of hypothetical protein sequences predicted from EST and HTG sequences, respectively. Hits is a web-based data retrieval and analysis system providing access to precomputed matches between protein sequences (including sequences from trEST and trGEN) and patterns and profiles from Prosite and Pfam. The three resources can be accessed via the Hits home page (http://hits.isb-sib.ch).
Abstract:
Axial deflection of DNA molecules in solution results from thermal motion and intrinsic curvature related to the DNA sequence. In order to measure directly the contribution of thermal motion we constructed intrinsically straight DNA molecules and measured their persistence length by cryo-electron microscopy. The persistence length of such intrinsically straight DNA molecules suspended in thin layers of cryo-vitrified solutions is about 80 nm. In order to test our experimental approach, we measured the apparent persistence length of DNA molecules with natural "random" sequences. The result of about 45 nm is consistent with the generally accepted value of the apparent persistence length of natural DNA sequences. By comparing the apparent persistence length of intrinsically straight DNA with that of natural DNA, it is possible to determine both the dynamic and the static contributions to the apparent persistence length.
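The separation of the two contributions mentioned in the last sentence is commonly written as a harmonic sum of dynamic and static persistence lengths; a sketch with the numbers quoted above, assuming this standard additive-curvature decomposition (which the abstract does not state explicitly):

```latex
\frac{1}{P_{\text{apparent}}} \;=\; \frac{1}{P_{\text{dynamic}}} + \frac{1}{P_{\text{static}}}
\quad\Rightarrow\quad
P_{\text{static}} \;\approx\; \left(\frac{1}{45\,\text{nm}} - \frac{1}{80\,\text{nm}}\right)^{-1} \;\approx\; 103\,\text{nm}.
```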
Abstract:
MOTIVATION: Regulatory gene networks contain generic modules, such as feedback loops, that are essential for the regulation of many biological functions. The study of the stochastic mechanisms of gene regulation is instrumental for understanding how cells maintain their expression at levels commensurate with their biological role, as well as for engineering gene expression switches of appropriate behavior. The lack of precise knowledge of the steady-state distribution of gene expression requires the use of Gillespie algorithms and Monte Carlo approximations. METHODOLOGY: In this study, we provide new exact formulas and efficient numerical algorithms for computing the steady state of a class of self-regulated genes, and we use them to model the stochastic expression of a gene of interest in an engineered network introduced in mammalian cells. The behavior of the genetic network is then analyzed experimentally in living cells. RESULTS: Stochastic models often reveal counter-intuitive experimental behaviors, and we find that this genetic architecture displays a unimodal behavior in mammalian cells, which was unexpected given its known bimodal response in unicellular organisms. We provide a molecular rationale for this behavior and implement it in the mathematical picture to explain the experimental results obtained from this network.
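As an illustration of the Gillespie-style approach mentioned above, the sketch below simulates a minimal negatively self-regulated gene and estimates its steady-state copy-number distribution from time-weighted state occupancy. The propensity functions and rate constants are illustrative assumptions, not the model analysed in the paper.

```python
import random
from collections import Counter

def gillespie_ssa(t_end, k_max=20.0, K=10.0, gamma=1.0, seed=0):
    """Gillespie simulation of a negatively self-regulated gene: synthesis
    propensity k_max*K/(K + n) (repressed by the protein's own copy number n),
    degradation propensity gamma*n. Returns the time-weighted fraction of time
    spent at each copy number, an estimate of the steady-state distribution."""
    rng = random.Random(seed)
    t, n = 0.0, 0
    occupancy = Counter()
    while t < t_end:
        a_prod = k_max * K / (K + n)        # propensity: synthesis
        a_deg = gamma * n                   # propensity: degradation
        a_tot = a_prod + a_deg
        dt = rng.expovariate(a_tot)         # waiting time to the next reaction
        occupancy[n] += min(dt, t_end - t)  # accumulate time spent in state n
        t += dt
        if rng.random() < a_prod / a_tot:   # choose which reaction fired
            n += 1
        else:
            n -= 1
    total = sum(occupancy.values())
    return {state: w / total for state, w in occupancy.items()}

dist = gillespie_ssa(t_end=5000.0)
mean_n = sum(state * p for state, p in dist.items())
```

With these made-up rates the deterministic fixed point solves k_max*K/(K + n) = gamma*n, i.e. n = 10, and the stochastic mean settles nearby.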
Abstract:
OBJECTIVES: To determine whether nalmefene combined with psychosocial support is cost-effective compared with psychosocial support alone for reducing alcohol consumption in alcohol-dependent patients with high/very high drinking risk levels (DRLs) as defined by the WHO, and to evaluate the public health benefit of reducing harmful alcohol-attributable diseases, injuries and deaths. DESIGN: Decision modelling using Markov chains compared costs and effects over 5 years. SETTING: The analysis was from the perspective of the National Health Service (NHS) in England and Wales. PARTICIPANTS: The model considered the licensed population for nalmefene, specifically adults with both alcohol dependence and high/very high DRLs, who do not require immediate detoxification and who continue to have high/very high DRLs after initial assessment. DATA SOURCES: We modelled treatment effect using data from three clinical trials for nalmefene (ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941)). Baseline characteristics of the model population, treatment resource utilisation and utilities were from these trials. We estimated the number of alcohol-attributable events occurring at different levels of alcohol consumption based on published epidemiological risk-relation studies. Health-related costs were from UK sources. MAIN OUTCOME MEASURES: We measured incremental cost per quality-adjusted life year (QALY) gained and number of alcohol-attributable harmful events avoided. RESULTS: Nalmefene in combination with psychosocial support had an incremental cost-effectiveness ratio (ICER) of £5204 per QALY gained, and was therefore cost-effective at the £20,000 per QALY gained decision threshold. Sensitivity analyses showed that the conclusion was robust. Nalmefene plus psychosocial support led to the avoidance of 7179 alcohol-attributable diseases/injuries and 309 deaths per 100,000 patients compared to psychosocial support alone over the course of 5 years. 
CONCLUSIONS: Nalmefene can be seen as a cost-effective treatment for alcohol dependence, with substantial public health benefits. TRIAL REGISTRATION NUMBERS: This cost-effectiveness analysis was developed based on data from three randomised clinical trials: ESENSE 1 (NCT00811720), ESENSE 2 (NCT00812461) and SENSE (NCT00811941).
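A decision model of this kind can be sketched as a small Markov cohort simulation run over annual cycles. Everything below (states, transition probabilities, costs, utilities, drug cost) is an illustrative assumption, not an input of the published model.

```python
# Hypothetical 3-state annual-cycle Markov cohort model in the spirit of the
# decision modelling described above; all numbers are illustrative assumptions.
P_CONTROL = [[0.70, 0.25, 0.05],   # from high/very high DRL
             [0.20, 0.78, 0.02],   # from reduced drinking
             [0.00, 0.00, 1.00]]   # dead (absorbing)
P_TREAT = [[0.55, 0.41, 0.04],     # treatment assumed to shift more patients
           [0.20, 0.78, 0.02],     # toward reduced drinking
           [0.00, 0.00, 1.00]]
UTILITY = [0.60, 0.80, 0.0]        # QALY weight accrued per year in each state
COST = [2000.0, 800.0, 0.0]        # assumed annual care cost (GBP) per state

def run_cohort(P, cycles=5):
    """Propagate the cohort through `cycles` annual transitions and
    accumulate total discounted-free costs and QALYs."""
    occ = [1.0, 0.0, 0.0]          # whole cohort starts at high/very high DRL
    qalys = cost = 0.0
    for _ in range(cycles):
        occ = [sum(occ[i] * P[i][j] for i in range(3)) for j in range(3)]
        qalys += sum(o * u for o, u in zip(occ, UTILITY))
        cost += sum(o * c for o, c in zip(occ, COST))
    return cost, qalys

c0, q0 = run_cohort(P_CONTROL)
c1, q1 = run_cohort(P_TREAT)
c1 += 5 * 600.0                    # assumed annual drug + extra support cost
icer = (c1 - c0) / (q1 - q0)       # incremental cost per QALY gained
```

A real analysis would additionally discount future costs and effects and run probabilistic sensitivity analyses over the inputs.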
Abstract:
A solution of (18)F was standardised with a 4πβ-4πγ coincidence counting system in which the beta detector is a one-inch diameter cylindrical UPS89 plastic scintillator positioned at the bottom of a well-type 5″ × 5″ NaI(Tl) gamma-ray detector. Almost full detection efficiency, which was varied downwards electronically, was achieved in the beta channel. Aliquots of this (18)F solution were also measured using 4πγ NaI(Tl) integral counting with Monte Carlo calculated efficiencies, as well as the CIEMAT-NIST method. Secondary measurements of the same solution were also performed with an IG11 ionisation chamber whose equivalent activity is traceable to the Système International de Référence through the contribution IRA-METAS made to it in 2001; IRA's degree of equivalence was found to be close to the key comparison reference value (KCRV). The (18)F activity predicted by this coincidence system agrees closely with the ionisation chamber measurement and is compatible within one standard deviation with the other primary measurements. This work demonstrates that our new coincidence system can standardise short-lived radionuclides used in nuclear medicine.
Abstract:
Astrocytes have recently become a major center of interest in neurochemistry following the discoveries of their major role in brain energy metabolism. An interesting way to probe this glial contribution is in vivo (13)C NMR spectroscopy coupled with the infusion of a labeled glial-specific substrate, such as acetate. In this study, we infused alpha-chloralose-anesthetized rats with [2-(13)C]acetate and followed the dynamics of the fractional enrichment (FE) in the positions C4 and C3 of glutamate and glutamine with high sensitivity, using (1)H-[(13)C] magnetic resonance spectroscopy (MRS) at 14.1 T. Applying a two-compartment mathematical model to the measured time courses yielded a glial tricarboxylic acid (TCA) cycle rate (Vg) of 0.27 ± 0.02 μmol/g/min and a glutamatergic neurotransmission rate (VNT) of 0.15 ± 0.01 μmol/g/min. Glial oxidative ATP metabolism thus accounts for 38% of total oxidative metabolism measured by NMR. The pyruvate carboxylase rate (VPC) was 0.09 ± 0.01 μmol/g/min, corresponding to 37% of the glial glutamine synthesis rate. The glial and neuronal transmitochondrial fluxes (Vx(g) and Vx(n)) were of the same order of magnitude as the respective TCA cycle fluxes. In addition, we estimated a glial glutamate pool size of 0.6 ± 0.1 μmol/g. The effect of spectral data quality on the flux estimates was analyzed by Monte Carlo simulations. In this (13)C-acetate labeling study, we propose a refined two-compartment analysis of brain energy metabolism based on (13)C turnover curves of acetate, glutamate and glutamine measured with state-of-the-art in vivo dynamic MRS at high magnetic field in rats, enabling a deeper understanding of the specific role of glial cells in brain oxidative metabolism. In addition, the robustness of the metabolic flux determination with respect to MRS data quality was carefully studied.
Abstract:
Understanding why dispersal is sex-biased in many taxa remains a major concern in evolutionary ecology. Dispersal tends to be male-biased in mammals and female-biased in birds, but counter-examples exist and little is known about sex bias in other taxa. Obtaining accurate measures of dispersal in the field remains a problem. Here we describe and compare several methods for detecting sex-biased dispersal using bi-parentally inherited, codominant genetic markers. If gene flow is restricted among populations, then an individual's genotype carries information about its origin. Provided that dispersal occurs at the juvenile stage and that sampling is carried out on adults, genotypes sampled from the dispersing sex should on average be less likely (compared to genotypes from the philopatric sex) in the population in which they were sampled. The dispersing sex should also be less genetically structured and should present a larger heterozygote deficit. In this study we use computer simulations and a permutation test on four statistics to investigate the conditions under which sex-biased dispersal can be detected. Two tests emerge as fairly powerful. We present results concerning the optimal sampling strategy (varying the number of samples, individuals, loci per individual and level of polymorphism) under different amounts of dispersal for each sex. These tests for biases in dispersal are also appropriate for any attribute (e.g. size, colour, status) suspected to influence the probability of dispersal. A Windows program carrying out these tests can be freely downloaded from http://www.unil.ch/izea/softwares/fstat.html
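The permutation-test idea can be sketched generically: compare a statistic (for instance a mean assignment-index score) between the two sexes, and assess the observed difference against its distribution under random relabelling of individuals. The function below is a minimal two-group sketch under that assumption, not the four-statistic test implemented in the FSTAT program.

```python
import random

def permutation_test(values_a, values_b, n_perm=9999, seed=0):
    """Two-sided permutation test for a difference in means between two groups,
    e.g. per-individual assignment-index scores of males vs. females.
    Returns the permutation p-value (with the +1 continuity correction)."""
    rng = random.Random(seed)
    observed = abs(sum(values_a) / len(values_a) - sum(values_b) / len(values_b))
    pooled = list(values_a) + list(values_b)
    n_a = len(values_a)
    extreme = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)                 # random reassignment of sex labels
        diff = abs(sum(pooled[:n_a]) / n_a
                   - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if diff >= observed:
            extreme += 1
    return (extreme + 1) / (n_perm + 1)
```

A small p-value indicates that the two sexes differ more than expected if labels were exchangeable, i.e. evidence of sex-biased dispersal under this framing.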
Abstract:
PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide (131)I was randomly allowed to decay for each model size and for seven different ratios of number of decays to number of cells, N(r): 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and each cell was attributed an absorbed dose equal to the bin average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N(r)^(-1/2).
From dose volume histograms, the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values agreed to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results compared between the adjusted spherical and cellular models showed similar agreement. The TCP values from the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
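The per-cell dose assignment described in METHODS can be sketched as follows; reading the Gaussian width as a relative width of N(r)^(-1/2) around the bin average, and clipping negative draws to zero, are our assumptions for illustration.

```python
import random

def cell_doses(bin_mean_dose, n_cells_in_bin, decays_per_cell, seed=0):
    """Attribute to every cell in a radial bin the bin-average absorbed dose
    plus a Gaussian fluctuation whose relative width is N(r)^(-1/2), mimicking
    the statistical spread expected for N(r) decays per cell (illustrative)."""
    rng = random.Random(seed)
    sigma = bin_mean_dose * decays_per_cell ** -0.5
    # Negative draws (possible in the tail for small N(r)) are clipped to zero.
    return [max(0.0, rng.gauss(bin_mean_dose, sigma))
            for _ in range(n_cells_in_bin)]

doses = cell_doses(bin_mean_dose=2.0, n_cells_in_bin=10000, decays_per_cell=100)
```

With N(r) = 100 the per-cell doses scatter by about 10% around the bin average; at N(r) = 10 the spread grows to roughly 32%, which is why the adjustment matters most at low decay-to-cell ratios.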
Abstract:
The aim of the present article was to perform three-dimensional (3D) single photon emission tomography-based dosimetry in radioimmunotherapy (RIT) with (90)Y-ibritumomab-tiuxetan. A custom MATLAB-based code was used to elaborate 3D images and to compare average 3D doses to lesions and to organs at risk (OARs) with those obtained with planar (2D) dosimetry. Our 3D dosimetry procedure was validated through preliminary phantom studies using a body phantom consisting of a lung insert and six spheres of various sizes. In the phantom study, the accuracy of dose determination of our imaging protocol decreased when the object volume fell below approximately 5 mL. The poorest results were obtained for the 2.58 mL and 1.30 mL spheres, where the dose error evaluated on corrected images with respect to the theoretical dose value was -12.97% and -18.69%, respectively. Our 3D dosimetry protocol was subsequently applied to four patients before RIT with (90)Y-ibritumomab-tiuxetan, for a total of 5 lesions and 4 OARs (2 livers, 2 spleens). In the patient study, without the implementation of a volume recovery technique, tumor absorbed doses calculated with the voxel-based approach were systematically lower than those calculated with the planar protocol, with an average underestimation of -39% (range from -13.1% to -62.7%). After volume recovery, dose differences decreased significantly, with an average deviation of -14.2% (range from -38.7.4% to +3.4%; 1 overestimation, 4 underestimations). Organ dosimetry overestimated the dose delivered to liver and spleen in one case and underestimated it in the other. However, for both the 2D and 3D approaches, absorbed doses to organs per unit administered activity are comparable with the most recent literature findings.
Abstract:
BACKGROUND: Low-molecular-weight heparin (LMWH) appears to be safe and effective for treating pulmonary embolism (PE), but its cost-effectiveness has not been assessed. METHODS: We built a Markov state-transition model to evaluate the medical and economic outcomes of a 6-day course of fixed-dose LMWH or adjusted-dose unfractionated heparin (UFH) in a hypothetical cohort of 60-year-old patients with acute submassive PE. Probabilities for clinical outcomes were obtained from a meta-analysis of clinical trials. Cost estimates were derived from Medicare reimbursement data and other sources. The base-case analysis used an inpatient setting, whereas secondary analyses examined early discharge and outpatient treatment with LMWH. Using a societal perspective, strategies were compared based on lifetime costs, quality-adjusted life-years (QALYs), and the incremental cost-effectiveness ratio. RESULTS: Inpatient treatment costs were higher for LMWH treatment than for UFH ($13,001 vs $12,780), but LMWH yielded a greater number of QALYs than did UFH (7.677 QALYs vs 7.493 QALYs). The incremental costs of $221 and the corresponding incremental effectiveness of 0.184 QALYs resulted in an incremental cost-effectiveness ratio of $1,209/QALY. Our results were highly robust in sensitivity analyses. LMWH became cost-saving if the daily pharmacy costs for LMWH were < $51, if ≥8% of patients were eligible for early discharge, or if ≥5% of patients could be treated entirely as outpatients. CONCLUSION: For inpatient treatment of PE, the use of LMWH is cost-effective compared to UFH. Early discharge or outpatient treatment in suitable patients with PE would lead to substantial cost savings.
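The incremental cost-effectiveness arithmetic in RESULTS can be checked directly; note that the quoted figures are rounded, so they reproduce the reported $1,209/QALY only approximately.

```python
# ICER = incremental cost / incremental effectiveness, using the rounded
# figures quoted in the abstract.
cost_lmwh, cost_ufh = 13001.0, 12780.0    # lifetime costs, USD
qaly_lmwh, qaly_ufh = 7.677, 7.493        # quality-adjusted life-years
delta_cost = cost_lmwh - cost_ufh         # $221
delta_qaly = qaly_lmwh - qaly_ufh         # 0.184 QALYs
icer = delta_cost / delta_qaly            # ~ $1,201/QALY with rounded inputs;
                                          # the study reports $1,209/QALY from
                                          # the unrounded underlying values.
```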
Abstract:
Background: Alcohol is a major risk factor for the global burden of disease and injuries. This paper presents a systematic method to compute the 95% confidence intervals of alcohol-attributable fractions (AAFs) with exposure and risk relations stemming from different sources. Methods: The computation was based on previous work on modelling drinking prevalence using the gamma distribution and the inherent properties of this distribution. The Monte Carlo approach was applied to derive the variance for each AAF by generating random sets of all the parameters. A large number of random samples were thus created for each AAF to estimate variances. The derivation of the distributions of the different parameters is presented, along with sensitivity analyses that estimate the number of samples required to determine the variance with predetermined precision and identify which parameter had the most impact on the variance of the AAFs. Results: The analysis of the five Asian regions showed that 150 000 samples gave a sufficiently accurate estimation of the 95% confidence intervals for each disease. The relative risk functions accounted for most of the variance in the majority of cases. Conclusions: Within reasonable computation time, the method yielded very accurate values for the variances of AAFs.
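The resampling idea can be sketched for the simplest single-category case; the normal prevalence and log-normal relative-risk distributions below are illustrative assumptions, not the gamma-based consumption model used in the paper.

```python
import math
import random

def aaf_confidence_interval(p_mean, p_se, log_rr_mean, log_rr_se,
                            n_samples=150_000, seed=0):
    """Monte Carlo 95% CI for a single-category alcohol-attributable fraction
    AAF = p*(RR - 1) / (p*(RR - 1) + 1), propagating uncertainty in the
    exposure prevalence p (normal, clipped to [0, 1]) and in the relative
    risk RR (log-normal). A simplified sketch of the resampling approach."""
    rng = random.Random(seed)
    draws = []
    for _ in range(n_samples):
        p = min(1.0, max(0.0, rng.gauss(p_mean, p_se)))
        rr = math.exp(rng.gauss(log_rr_mean, log_rr_se))
        excess = p * (rr - 1.0)
        draws.append(excess / (excess + 1.0))
    draws.sort()
    # Empirical 2.5th and 97.5th percentiles of the simulated AAFs.
    return draws[int(0.025 * n_samples)], draws[int(0.975 * n_samples)]
```

For example, `aaf_confidence_interval(0.10, 0.01, math.log(2.0), 0.10)` brackets the point estimate 0.10·(2−1)/(0.10·(2−1)+1) ≈ 0.091.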
Abstract:
The activity of radiopharmaceuticals in nuclear medicine is measured before patient injection with radionuclide calibrators. In Switzerland, the general requirements for quality controls are defined in a federal ordinance and a directive of the Federal Office of Metrology (METAS), which require each instrument to be verified. A set of three gamma sources ((57)Co, (137)Cs and (60)Co) is used to verify the response of radionuclide calibrators over the gamma energy range of their use. A beta source, a mixture of (90)Sr and (90)Y in secular equilibrium, is used as well. Manufacturers are responsible for the calibration factors. The main goal of the study was to monitor the validity of the calibration factors by using two sources: a (90)Sr/(90)Y source and an (18)F source. The three types of commercial radionuclide calibrators tested do not have a calibration factor for the mixture but only for (90)Y. Activity measurements of a (90)Sr/(90)Y source with the (90)Y calibration factor must therefore be corrected for the extra contribution of (90)Sr. The value of the correction factor was found to be 1.113, whereas Monte Carlo simulations of the radionuclide calibrators estimate the correction factor to be 1.117. Measurements with (18)F sources in a specific geometry were also performed. Since this radionuclide is widely used in Swiss hospitals equipped with PET and PET-CT, the metrology of (18)F is very important. The (18)F response normalized to the (137)Cs response shows that the difference from a reference value does not exceed 3% for the three types of radionuclide calibrators.
Abstract:
OBJECTIVES: To assess the incremental cost-effectiveness ratio (ICER) and incremental cost-utility ratio (ICUR) of risedronate compared to no intervention in postmenopausal osteoporotic women from a Swiss perspective. METHODS: A previously validated Markov model was populated with epidemiological and cost data specific to Switzerland and published utility values, and run on a population of 1,000 women aged 70 years with established osteoporosis and a previous vertebral fracture, treated over 5 years with risedronate 35 mg weekly or no intervention (base case), and on five cohorts (according to age at therapy start) with eight risk-factor distributions and three lengths of residual effect. RESULTS: In the base-case population, the ICER of averting a hip fracture and the ICUR per quality-adjusted life year gained were both dominant. In the presence of a previous vertebral fracture, the ICUR was below €45,000 (£30,000) in all scenarios. For all osteoporotic women ≥70 years of age with at least one risk factor, the ICUR was below €45,000, or the intervention may even be cost saving. Age at the start of therapy and the fracture risk profile had a significant impact on the results. CONCLUSION: Assuming a 2-year residual effect, the ICUR of risedronate in women with postmenopausal osteoporosis is below accepted thresholds from the age of 65, and the intervention is even cost saving above the age of 70 in women with at least one risk factor.
Abstract:
PURPOSE: To assess how different diagnostic decision aids perform in terms of sensitivity, specificity, and harm. METHODS: Four diagnostic decision aids were compared, as applied to a simulated patient population: findings-based algorithms following either a linear or a branched pathway, a serial threshold-based strategy, and a parallel threshold-based strategy. Headache in immune-compromised HIV patients in a developing country was used as an example. Diagnoses included cryptococcal meningitis, cerebral toxoplasmosis, tuberculous meningitis, bacterial meningitis, and malaria. Data were derived from the literature and expert opinion. The validity of the diagnostic strategies was assessed in terms of sensitivity, specificity, and harm related to mortality and morbidity. Sensitivity analyses and Monte Carlo simulation were performed. RESULTS: The parallel threshold-based approach led to a sensitivity of 92% and a specificity of 65%. Sensitivities of the serial threshold-based approach and the branched and linear algorithms were 47%, 47%, and 74%, respectively, and the specificities were 85%, 95%, and 96%. The parallel threshold-based approach resulted in the least harm, with the serial threshold-based approach, the branched algorithm, and the linear algorithm being associated with 1.56-, 1.44-, and 1.17-times higher harm, respectively. Findings were corroborated by sensitivity and Monte Carlo analyses. CONCLUSION: A threshold-based diagnostic approach is designed to find the optimal trade-off that minimizes expected harm, enhancing sensitivity and lowering specificity when appropriate, as in the given example of a symptom pointing to several life-threatening diseases. Findings-based algorithms, in contrast, solely consider clinical observations. A parallel workup, as opposed to a serial workup, additionally allows all potential diseases to be reviewed, further reducing false negatives. The parallel threshold-based approach might, however, not perform as well in other disease settings.
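The parallel threshold-based strategy can be sketched as follows: every candidate disease is reviewed, and action is triggered for each disease whose estimated probability exceeds its own treatment threshold (classically, the harm of unnecessary treatment relative to the harm of a missed diagnosis). All probabilities and thresholds below are illustrative assumptions, not the study's inputs.

```python
def parallel_threshold_workup(posteriors, thresholds):
    """Parallel threshold-based strategy: review all candidate diseases and
    return those whose estimated probability exceeds the disease-specific
    treatment threshold. Values used below are purely illustrative."""
    return [d for d, p in posteriors.items() if p > thresholds[d]]

# Hypothetical disease probabilities for one simulated patient presentation.
posteriors = {"cryptococcal meningitis": 0.30, "cerebral toxoplasmosis": 0.25,
              "tuberculous meningitis": 0.08, "bacterial meningitis": 0.03,
              "malaria": 0.12}
# Hypothetical treatment thresholds (lower = more harmful to miss).
thresholds = {"cryptococcal meningitis": 0.10, "cerebral toxoplasmosis": 0.15,
              "tuberculous meningitis": 0.05, "bacterial meningitis": 0.05,
              "malaria": 0.10}
to_treat = parallel_threshold_workup(posteriors, thresholds)
```

Because every disease is checked against its own threshold rather than stopping at the first confirmed diagnosis, this workup trades specificity for sensitivity, mirroring the 92%/65% pattern reported above.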