90 results for Monte Carlo algorithms


Relevance:

80.00%

Publisher:

Abstract:

Axial deflection of DNA molecules in solution results from thermal motion and intrinsic curvature related to the DNA sequence. In order to measure directly the contribution of thermal motion, we constructed intrinsically straight DNA molecules and measured their persistence length by cryo-electron microscopy. The persistence length of such intrinsically straight DNA molecules suspended in thin layers of cryo-vitrified solutions is about 80 nm. To test our experimental approach, we measured the apparent persistence length of DNA molecules with natural "random" sequences. The result of about 45 nm is consistent with the generally accepted value of the apparent persistence length of natural DNA sequences. By comparing the apparent persistence length of intrinsically straight DNA with that of natural DNA, it is possible to determine both the dynamic and the static contributions to the apparent persistence length.
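
The decomposition in the last sentence is commonly expressed through the additivity of inverse persistence lengths, 1/P_apparent = 1/P_dynamic + 1/P_static. A minimal sketch using the values quoted above (this relation is a standard assumption, not stated explicitly in the abstract):

```python
# Decompose the apparent persistence length into dynamic and static parts,
# assuming the commonly used relation 1/P_app = 1/P_dyn + 1/P_stat.
P_dyn = 80.0   # nm, intrinsically straight DNA (thermal contribution only)
P_app = 45.0   # nm, natural "random" sequence DNA

P_stat = 1.0 / (1.0 / P_app - 1.0 / P_dyn)
print(f"Static (sequence-dependent) persistence length ≈ {P_stat:.0f} nm")
# ≈ 103 nm under these assumptions
```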

Relevance:

80.00%

Publisher:

Abstract:

This thesis investigates the extremal properties of certain risk models of interest in various applications from insurance, finance and statistics. It develops along two principal lines. In the first part, we focus on two univariate risk models, namely deflated risk and reinsurance risk models, and investigate their tail expansions under certain tail conditions of the common risks. The main results are illustrated by typical examples and numerical simulations. Finally, the findings are formulated into applications in insurance, for instance approximations of Value-at-Risk and conditional tail expectations. The second part of the thesis is devoted to three bivariate models. The first model is concerned with bivariate censoring of extreme events. For this model, we first propose a class of estimators for both the tail dependence coefficient and the tail probability. These estimators are flexible due to a tuning parameter, and their asymptotic distributions are obtained under certain second-order bivariate slowly varying conditions of the model. We then give some examples and present a small Monte Carlo simulation study, followed by an application to a real insurance data set.
The objective of the second bivariate risk model is the investigation of the tail dependence coefficient of bivariate skew slash distributions. Such skew slash distributions are widely useful in statistical applications; they are generated mainly by normal mean-variance mixtures and scaled skew-normal mixtures, which distinguish the tail dependence structure as shown by our principal results. The third bivariate risk model concerns the approximation of the component-wise maxima of skew elliptical triangular arrays. The theoretical results are based on certain tail assumptions on the underlying random radius.
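
For orientation only, a standard nonparametric estimator of the upper tail dependence coefficient, in which the number of upper order statistics k acts as the tuning parameter; this is a generic textbook estimator, not the class of estimators proposed in the thesis:

```python
import numpy as np

def upper_tail_dependence(x, y, k):
    """Empirical estimator of the upper tail dependence coefficient
    lambda_U ~ P(Y > F_Y^{-1}(1-k/n) | X > F_X^{-1}(1-k/n)).
    k is a tuning parameter: the number of upper order statistics used."""
    n = len(x)
    rx = np.argsort(np.argsort(x)) + 1   # ranks of x
    ry = np.argsort(np.argsort(y)) + 1   # ranks of y
    joint = np.sum((rx > n - k) & (ry > n - k))
    return joint / k

# Example on simulated data with some positive dependence (hypothetical):
rng = np.random.default_rng(0)
z = rng.standard_normal(5000)
x = z + 0.5 * rng.standard_normal(5000)
y = z + 0.5 * rng.standard_normal(5000)
print(upper_tail_dependence(x, y, k=100))
```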

Relevance:

80.00%

Publisher:

Abstract:

Pulse wave velocity (PWV) is a surrogate of arterial stiffness and represents a non-invasive marker of cardiovascular risk. The non-invasive measurement of PWV requires tracking the arrival time of pressure pulses recorded in vivo, commonly referred to as pulse arrival time (PAT). In the state of the art, PAT is estimated by identifying a characteristic point of the pressure pulse waveform. This paper demonstrates that for ambulatory scenarios, where signal-to-noise ratios are below 10 dB, the repeatability of PAT measurements based on characteristic-point identification degrades drastically. Hence, we introduce a novel family of PAT estimators based on parametric modeling of the anacrotic phase of a pressure pulse. In particular, we propose a parametric PAT estimator (TANH) that shows high correlation with the Complior® characteristic point D1 (CC = 0.99), increases noise robustness, and reduces the number of heartbeats required to obtain reliable PAT measurements by a factor of five.
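
As a hypothetical illustration of parametric modeling of the anacrotic phase, the sketch below fits a hyperbolic-tangent model to a synthetic pulse upstroke with SciPy and reads a pulse-arrival marker off one fitted parameter; the actual TANH estimator's parameterization is not given in the abstract, so the model form, names and numbers here are assumptions:

```python
import numpy as np
from scipy.optimize import curve_fit

def tanh_pulse(t, a, b, t0, tau):
    """Hypothetical parametric model of the anacrotic (rising) phase:
    a sigmoidal transition of amplitude b centred at time t0 with rise time tau."""
    return a + b * np.tanh((t - t0) / tau)

# Synthetic noisy upstroke (illustration only, not the paper's data).
t = np.linspace(0.0, 0.3, 300)                      # seconds
clean = tanh_pulse(t, 1.0, 0.5, 0.15, 0.02)
noisy = clean + 0.05 * np.random.default_rng(1).standard_normal(t.size)

popt, _ = curve_fit(tanh_pulse, t, noisy, p0=[1.0, 0.5, 0.1, 0.05])
print(f"Estimated pulse-arrival marker t0 ≈ {popt[2] * 1e3:.1f} ms")
```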

Relevance:

80.00%

Publisher:

Abstract:

A solution of ¹⁸F was standardised with a 4πβ-4πγ coincidence counting system in which the beta detector is a one-inch-diameter cylindrical UPS89 plastic scintillator, positioned at the bottom of a well-type 5″ × 5″ NaI(Tl) gamma-ray detector. Almost full detection efficiency, which was varied downwards electronically, was achieved in the beta channel. Aliquots of this ¹⁸F solution were also measured using 4πγ NaI(Tl) integral counting with Monte Carlo calculated efficiencies, as well as the CIEMAT-NIST method. Secondary measurements of the same solution were also performed with an IG11 ionisation chamber whose equivalent activity is traceable to the Système International de Référence through the contribution IRA-METAS made to it in 2001; IRA's degree of equivalence was found to be close to the key comparison reference value (KCRV). The ¹⁸F activity predicted by this coincidence system agrees closely with the ionisation chamber measurement and is compatible within one standard deviation with the other primary measurements. This work demonstrates that our new coincidence system can standardise short-lived radionuclides used in nuclear medicine.
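
The idealized relation behind coincidence counting can be sketched as follows (textbook form, ignoring background, dead time and decay-scheme corrections that a real standardisation must include; the count rates are hypothetical):

```python
# Idealized 4πβ-γ coincidence counting relation:
#   N_beta = A * eff_beta,  N_gamma = A * eff_gamma,  N_coinc = A * eff_beta * eff_gamma
# hence A = N_beta * N_gamma / N_coinc, independent of the two detection efficiencies.
N_beta, N_gamma, N_coinc = 9.5e4, 4.0e4, 3.8e4   # hypothetical count rates (s^-1)
A = N_beta * N_gamma / N_coinc
print(f"Activity ≈ {A:.3e} Bq")                   # ≈ 1.0e5 Bq for these numbers
```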

Relevance:

80.00%

Publisher:

Abstract:

Astrocytes have recently become a major center of interest in neurochemistry following discoveries of their major role in brain energy metabolism. An interesting way to probe this glial contribution is given by in vivo ¹³C NMR spectroscopy coupled with the infusion of a labeled glial-specific substrate, such as acetate. In this study, we infused α-chloralose-anesthetized rats with [2-¹³C]acetate and followed the dynamics of the fractional enrichment (FE) in the positions C4 and C3 of glutamate and glutamine with high sensitivity, using ¹H-[¹³C] magnetic resonance spectroscopy (MRS) at 14.1 T. Applying a two-compartment mathematical model to the measured time courses yielded a glial tricarboxylic acid (TCA) cycle rate (Vg) of 0.27 ± 0.02 μmol/g/min and a glutamatergic neurotransmission rate (VNT) of 0.15 ± 0.01 μmol/g/min. Glial oxidative ATP metabolism thus accounts for 38% of total oxidative metabolism measured by NMR. The pyruvate carboxylase rate (VPC) was 0.09 ± 0.01 μmol/g/min, corresponding to 37% of the glial glutamine synthesis rate. The glial and neuronal transmitochondrial fluxes (Vx(g) and Vx(n)) were of the same order of magnitude as the respective TCA cycle fluxes. In addition, we estimated a glial glutamate pool size of 0.6 ± 0.1 μmol/g. The effect of spectral data quality on the flux estimates was analyzed by Monte Carlo simulations. In this ¹³C-acetate labeling study, we propose a refined two-compartment analysis of brain energy metabolism based on ¹³C turnover curves of acetate, glutamate and glutamine measured with state-of-the-art in vivo dynamic MRS at high magnetic field in rats, enabling a deeper understanding of the specific role of glial cells in brain oxidative metabolism. In addition, the robustness of the metabolic flux determination relative to MRS data quality was carefully studied.
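
The Monte Carlo analysis of how spectral noise propagates into flux estimates can be sketched generically as repeated noise addition and refitting; the toy mono-exponential model and noise level below are assumptions, not the study's two-compartment model:

```python
import numpy as np
from scipy.optimize import curve_fit

def enrichment(t, v):
    # Hypothetical single-pool turnover curve: fractional enrichment rising to a plateau.
    return 0.5 * (1.0 - np.exp(-v * t))

t = np.linspace(0.0, 120.0, 60)            # min
v_true = 0.27                              # min^-1 turnover rate constant (placeholder)
data = enrichment(t, v_true)

rng = np.random.default_rng(42)
estimates = []
for _ in range(1000):                      # Monte Carlo repetitions
    noisy = data + rng.normal(0.0, 0.02, t.size)   # assumed spectral-noise level
    v_fit, _ = curve_fit(enrichment, t, noisy, p0=[0.1])
    estimates.append(v_fit[0])
print(f"rate = {np.mean(estimates):.3f} ± {np.std(estimates):.3f} min^-1")
```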

Relevance:

80.00%

Publisher:

Abstract:

Understanding why dispersal is sex-biased in many taxa is still a major concern in evolutionary ecology. Dispersal tends to be male-biased in mammals and female-biased in birds, but counter-examples exist and little is known about sex bias in other taxa. Obtaining accurate measures of dispersal in the field remains a problem. Here we describe and compare several methods for detecting sex-biased dispersal using bi-parentally inherited, codominant genetic markers. If gene flow is restricted among populations, then the genotype of an individual tells something about its origin. Provided that dispersal occurs at the juvenile stage and that sampling is carried out on adults, genotypes sampled from the dispersing sex should on average be less likely (compared to genotypes from the philopatric sex) in the population in which they were sampled. The dispersing sex should be less genetically structured and should present a larger heterozygote deficit. In this study we use computer simulations and a permutation test on four statistics to investigate the conditions under which sex-biased dispersal can be detected; two tests emerge as fairly powerful. We present results concerning the optimal sampling strategy (varying numbers of samples, individuals, loci per individual and levels of polymorphism) under different amounts of dispersal for each sex. These tests for biases in dispersal are also appropriate for any attribute (e.g. size, colour, status) suspected to influence the probability of dispersal. A Windows program carrying out these tests can be freely downloaded from http://www.unil.ch/izea/softwares/fstat.html
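
A minimal sketch of the permutation-test idea, assuming a per-individual statistic such as a corrected assignment index; the statistic, sample sizes and effect size below are hypothetical, and this is not the exact set of four statistics used in the study:

```python
import numpy as np

def permutation_test(stat_values, sexes, n_perm=10000, seed=0):
    """Generic two-sided permutation test: is the mean of a per-individual
    statistic (e.g. a corrected assignment index) different between sexes?
    Sex labels are permuted among individuals to build the null distribution."""
    rng = np.random.default_rng(seed)
    sexes = np.asarray(sexes)
    obs = np.mean(stat_values[sexes == "F"]) - np.mean(stat_values[sexes == "M"])
    null = np.empty(n_perm)
    for i in range(n_perm):
        perm = rng.permutation(sexes)
        null[i] = np.mean(stat_values[perm == "F"]) - np.mean(stat_values[perm == "M"])
    return obs, np.mean(np.abs(null) >= abs(obs))   # statistic and p-value

# Hypothetical data: philopatric females score higher than dispersing males.
rng = np.random.default_rng(1)
aic = np.concatenate([rng.normal(0.5, 1.0, 40), rng.normal(-0.5, 1.0, 40)])
sex = np.array(["F"] * 40 + ["M"] * 40)
print(permutation_test(aic, sex))
```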

Relevance:

80.00%

Publisher:

Abstract:

Forest fire sequences can be modelled as a stochastic point process in which events are characterized by their spatial locations and occurrence times. Cluster analysis permits the detection of the space-time pattern of the forest fire distribution. These analyses are useful to assist fire managers in identifying risk areas, implementing preventive measures and designing strategies for an efficient distribution of firefighting resources. This paper aims to identify hot spots in forest fire sequences by means of the space-time scan statistics permutation model (STSSP) and a geographical information system (GIS) for data and results visualization. The scan statistical methodology uses a scanning window, which moves across space and time, detecting local excesses of events in specific areas over a certain period of time. The statistical significance of each cluster is then evaluated through Monte Carlo hypothesis testing. The case study is the forest fires registered by the Forest Service in Canton Ticino (Switzerland) from 1969 to 2008. This dataset consists of geo-referenced single events including the location of the ignition points and additional information. The data were aggregated into three sub-periods (reflecting important preventive legal dispositions) and two main ignition causes (lightning and anthropogenic causes). Results revealed that forest fire events in Ticino are mainly clustered in the southern region, where most of the population is settled. Our analysis uncovered local hot spots arising from extemporaneous arson activities. Results regarding the naturally caused fires (lightning fires) disclosed two clusters in the northern mountainous area.
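
The Monte Carlo significance evaluation mentioned above can be sketched as a simple rank-based p-value for the most likely cluster; the likelihood-ratio values below are hypothetical, and the cluster-search step itself is omitted:

```python
import numpy as np

def monte_carlo_p_value(observed_llr, simulated_llrs):
    """Monte Carlo significance of a scan-statistic cluster (Kulldorff-style):
    the p-value is the rank of the observed maximum log-likelihood ratio among
    the maxima obtained from randomly permuted (space/time shuffled) datasets."""
    simulated_llrs = np.asarray(simulated_llrs)
    rank = 1 + np.sum(simulated_llrs >= observed_llr)
    return rank / (len(simulated_llrs) + 1)

# Hypothetical values: observed cluster LLR vs 999 permutation replicates.
rng = np.random.default_rng(7)
sims = rng.gumbel(loc=5.0, scale=1.0, size=999)
print(monte_carlo_p_value(observed_llr=11.2, simulated_llrs=sims))
```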

Relevance:

80.00%

Publisher:

Abstract:

Over the past decade, significant interest has been expressed in relating the spatial statistics of surface-based reflection ground-penetrating radar (GPR) data to those of the imaged subsurface volume. A primary motivation for this work is that changes in the radar wave velocity, which largely control the character of the observed data, are expected to be related to corresponding changes in subsurface water content. Although previous work has indeed indicated that the spatial statistics of GPR images are linked to those of the water content distribution of the probed region, a viable method for quantitatively analyzing the GPR data and solving the corresponding inverse problem has not yet been presented. Here we address this issue by first deriving a relationship between the 2-D autocorrelation of a water content distribution and that of the corresponding GPR reflection image. We then show how a Bayesian inversion strategy based on Markov chain Monte Carlo sampling can be used to estimate the posterior distribution of subsurface correlation model parameters that are consistent with the GPR data. Our results indicate that if the underlying assumptions are valid and we possess adequate prior knowledge regarding the water content distribution, in particular its vertical variability, this methodology allows not only for the reliable recovery of lateral correlation model parameters but also for estimates of parameter uncertainties. In the case where prior knowledge regarding the vertical variability of water content is not available, the results show that the methodology still reliably recovers the aspect ratio of the heterogeneity.
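
A bare-bones sketch of the Markov chain Monte Carlo step, assuming an exponential correlation model and a Gaussian misfit between observed and modeled autocorrelations; the study's actual forward relation between water-content and GPR-image statistics is more involved:

```python
import numpy as np

def model_acf(lags, ax):
    return np.exp(-np.abs(lags) / ax)           # exponential model, lateral range ax

def log_likelihood(ax, lags, observed_acf, sigma=0.05):
    resid = observed_acf - model_acf(lags, ax)
    return -0.5 * np.sum((resid / sigma) ** 2)  # Gaussian misfit (assumed)

rng = np.random.default_rng(3)
lags = np.linspace(0.0, 10.0, 50)               # m
observed = model_acf(lags, 2.5) + rng.normal(0, 0.05, lags.size)   # synthetic "data"

samples, ax_cur = [], 1.0
ll_cur = log_likelihood(ax_cur, lags, observed)
for _ in range(20000):                          # Metropolis-Hastings random walk
    ax_prop = ax_cur + rng.normal(0, 0.1)
    if ax_prop > 0:                             # flat prior on positive ranges
        ll_prop = log_likelihood(ax_prop, lags, observed)
        if np.log(rng.uniform()) < ll_prop - ll_cur:
            ax_cur, ll_cur = ax_prop, ll_prop
    samples.append(ax_cur)
print(f"posterior mean lateral range ≈ {np.mean(samples[5000:]):.2f} m")
```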

Relevance:

80.00%

Publisher:

Abstract:

Time-lapse geophysical data acquired during transient hydrological experiments are being increasingly employed to estimate subsurface hydraulic properties at the field scale. In particular, crosshole ground-penetrating radar (GPR) data, collected while water infiltrates into the subsurface either by natural or artificial means, have been demonstrated in a number of studies to contain valuable information concerning the hydraulic properties of the unsaturated zone. Previous work in this domain has considered a variety of infiltration conditions and different amounts of time-lapse GPR data in the estimation procedure. However, the particular benefits and drawbacks of these different strategies as well as the impact of a variety of key and common assumptions remain unclear. Using a Bayesian Markov-chain-Monte-Carlo stochastic inversion methodology, we examine in this paper the information content of time-lapse zero-offset-profile (ZOP) GPR traveltime data, collected under three different infiltration conditions, for the estimation of van Genuchten-Mualem (VGM) parameters in a layered subsurface medium. Specifically, we systematically analyze synthetic and field GPR data acquired under natural loading and two rates of forced infiltration, and we consider the value of incorporating different amounts of time-lapse measurements into the estimation procedure. Our results confirm that, for all infiltration scenarios considered, the ZOP GPR traveltime data contain important information about subsurface hydraulic properties as a function of depth, with forced infiltration offering the greatest potential for VGM parameter refinement because of the higher stressing of the hydrological system. Considering greater amounts of time-lapse data in the inversion procedure is also found to help refine VGM parameter estimates. Quite importantly, however, inconsistencies observed in the field results point to the strong possibility that posterior uncertainties are being influenced by model structural errors, which in turn underlines the fundamental importance of a systematic analysis of such errors in future related studies.
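
For reference, a sketch of the van Genuchten water retention function that defines part of the VGM parameterization, plus a very rough conversion from water content to a ZOP traveltime via the CRIM mixing formula; the petrophysical constants and layer geometry are assumptions, not the study's values:

```python
import numpy as np

def van_genuchten_theta(h, theta_r, theta_s, alpha, n):
    """van Genuchten water retention: volumetric water content as a function of
    pressure head h (negative in the unsaturated zone), with residual/saturated
    contents and shape parameters alpha [1/m] and n."""
    m = 1.0 - 1.0 / n
    Se = (1.0 + np.abs(alpha * h) ** n) ** (-m)     # effective saturation
    return theta_r + (theta_s - theta_r) * Se

def zop_traveltime(theta, distance, kappa_s=5.0, porosity=0.35):
    """Very rough ZOP GPR traveltime over one layer, using the CRIM mixing
    formula to turn water content into bulk permittivity (values assumed)."""
    c = 0.3                                          # m/ns, speed of light
    sqrt_kappa = ((1 - porosity) * np.sqrt(kappa_s)
                  + (porosity - theta) * 1.0         # air-filled pores
                  + theta * np.sqrt(81.0))           # water-filled pores
    velocity = c / sqrt_kappa
    return distance / velocity                       # ns

theta = van_genuchten_theta(h=-1.0, theta_r=0.05, theta_s=0.35, alpha=2.0, n=1.8)
print(f"theta ≈ {theta:.3f}, traveltime over 1 m ≈ {zop_traveltime(theta, 1.0):.1f} ns")
```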

Relevance:

80.00%

Publisher:

Abstract:

PURPOSE: In the radiopharmaceutical therapy approach to the fight against cancer, in particular when it comes to translating laboratory results to the clinical setting, modeling has served as an invaluable tool for guidance and for understanding the processes operating at the cellular level and how these relate to macroscopic observables. Tumor control probability (TCP) is the dosimetric end point quantity of choice which relates to experimental and clinical data: it requires knowledge of individual cellular absorbed doses, since it depends on the assessment of the treatment's ability to kill each and every cell. Macroscopic tumors, seen in both clinical and experimental studies, contain too many cells to be modeled individually in Monte Carlo simulation; yet, in particular for low ratios of decays to cells, a cell-based model that does not smooth away statistical considerations associated with low activity is a necessity. The authors present here an adaptation of the simple sphere-based model from which cellular-level dosimetry for macroscopic tumors and their end point quantities, such as TCP, may be extrapolated more reliably. METHODS: Ten homogeneous spheres representing tumors of different sizes were constructed in GEANT4. The radionuclide ¹³¹I was randomly allowed to decay for each model size and for seven different ratios of the number of decays to the number of cells, N_r: 1000, 500, 200, 100, 50, 20, and 10 decays per cell. The deposited energy was collected in radial bins and divided by the bin mass to obtain the average bin absorbed dose. To simulate a cellular model, the number of cells present in each bin was calculated and an absorbed dose attributed to each cell equal to the bin-average absorbed dose with a randomly determined adjustment based on a Gaussian probability distribution with a width equal to the statistical uncertainty consistent with the ratio of decays to cells, i.e., equal to N_r^(-1/2). From dose-volume histograms the surviving fraction of cells, equivalent uniform dose (EUD), and TCP for the different scenarios were calculated. Comparably sized spherical models containing individual spherical cells (15 μm diameter) in hexagonal lattices were constructed, and Monte Carlo simulations were executed for all the same previous scenarios. The dosimetric quantities were calculated and compared to the adjusted simple sphere model results. The model was then applied to the Bortezomib-induced enzyme-targeted radiotherapy (BETR) strategy of targeting Epstein-Barr virus (EBV)-expressing cancers. RESULTS: The TCP values were comparable to within 2% between the adjusted simple sphere and full cellular models. Additionally, models were generated for a nonuniform distribution of activity, and results were compared between the adjusted spherical and cellular models with similar comparability. The TCP values from the macroscopic tumor models were consistent with the experimental observations for BETR-treated 1 g EBV-expressing lymphoma tumors in mice. CONCLUSIONS: The adjusted spherical model presented here provides more accurate TCP values than simple spheres, on par with full cellular Monte Carlo simulations, while maintaining the simplicity of the simple sphere model. This model provides a basis for complementing and understanding laboratory and clinical results pertaining to radiopharmaceutical therapy.
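
A schematic of the per-cell dose adjustment and a Poisson-type TCP, assuming an exponential cell-survival model; the bin doses, cell counts and radiosensitivity below are placeholders, not the GEANT4 results of the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Each cell in a radial bin receives the bin-average absorbed dose plus Gaussian
# scatter whose relative width is N_r^(-1/2), the statistical uncertainty implied
# by the decays-per-cell ratio.
N_r = 50                                                 # decays per cell
bin_mean_doses = np.array([40.0, 35.0, 30.0, 25.0, 20.0])   # Gy, hypothetical radial bins
cells_per_bin = np.array([1000, 3000, 6000, 9000, 12000])

cell_doses = np.concatenate([
    rng.normal(d, d / np.sqrt(N_r), size=n)              # per-cell dose in each bin
    for d, n in zip(bin_mean_doses, cells_per_bin)
])

alpha = 0.5                                               # Gy^-1, assumed radiosensitivity
survival = np.exp(-alpha * cell_doses)                    # exponential survival per cell
tcp = np.exp(-np.sum(survival))                           # Poisson tumour control probability
print(f"expected surviving cells ≈ {survival.sum():.1f}, TCP ≈ {tcp:.3f}")
```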

Relevance:

80.00%

Publisher:

Abstract:

Monte Carlo simulations were carried out to study the response of a thyroid monitor for measuring intake activities of ¹²⁵I and ¹³¹I. The aim of the study was threefold: to cross-validate the Monte Carlo simulation programs, to study the response of the detector using different phantoms and to study the effects of anatomical variations. Simulations were performed using the Swiss reference phantom and several voxelised phantoms. Determining the position of the thyroid is crucial for an accurate determination of radiological risks. The detector response using the Swiss reference phantom was in fairly good agreement with the response obtained using adult voxelised phantoms for ¹³¹I, but should be revised for a better calibration for ¹²⁵I and for any measurements taken on paediatric patients.

Relevance:

80.00%

Publisher:

Abstract:

The utility of sequencing a second highly variable locus in addition to the spa gene (e.g., double-locus sequence typing [DLST]) was investigated to overcome limitations of a Staphylococcus aureus single-locus typing method. Although adding a second locus seemed to increase discriminatory power, it was not sufficient to definitively infer evolutionary relationships within a single multilocus sequence type (ST-5).

Relevance:

80.00%

Publisher:

Abstract:

Despite the fact that in living cells DNA molecules are long and highly crowded, they are rarely knotted. DNA knotting interferes with the normal functioning of the DNA and, therefore, molecular mechanisms evolved that maintain the knotting and catenation level below that which would be achieved if the DNA segments could pass randomly through each other. Biochemical experiments with torsionally relaxed DNA demonstrated earlier that type II DNA topoisomerases that permit inter- and intramolecular passages between segments of DNA molecules use the energy of ATP hydrolysis to select passages that lead to unknotting rather than to the formation of knots. Using numerical simulations, we identify here another mechanism by which topoisomerases can keep the knotting level low. We observe that DNA supercoiling, such as found in bacterial cells, creates a situation where intramolecular passages leading to knotting are opposed by the free-energy change connected to transitions from unknotted to knotted circular DNA molecules.

Relevance:

80.00%

Publisher:

Abstract:

The aim of the present article was to perform three-dimensional (3D) single photon emission tomography-based dosimetry in radioimmunotherapy (RIT) with ⁹⁰Y-ibritumomab tiuxetan. A custom MATLAB-based code was used to elaborate the 3D images and to compare average 3D doses to lesions and to organs at risk (OARs) with those obtained with planar (2D) dosimetry. Our 3D dosimetry procedure was validated through preliminary phantom studies using a body phantom consisting of a lung insert and six spheres of various sizes. In the phantom study, the accuracy of dose determination of our imaging protocol decreased when the object volume dropped below approximately 5 mL. The poorest results were obtained for the 2.58 mL and 1.30 mL spheres, for which the dose error evaluated on corrected images with respect to the theoretical dose value was -12.97% and -18.69%, respectively. Our 3D dosimetry protocol was subsequently applied to four patients before RIT with ⁹⁰Y-ibritumomab tiuxetan, for a total of 5 lesions and 4 OARs (2 livers, 2 spleens). In the patient study, without the implementation of the volume recovery technique, tumor absorbed doses calculated with the voxel-based approach were systematically lower than those calculated with the planar protocol, with an average underestimation of -39% (range from -13.1% to -62.7%). After volume recovery, dose differences were reduced significantly, with an average deviation of -14.2% (range from -38.7% to +3.4%; 1 overestimation, 4 underestimations). Organ dosimetry overestimated the dose delivered to the liver and spleen in one case and underestimated it in the other. However, for both the 2D and 3D approaches, the absorbed doses to organs per unit administered activity are comparable with the most recent literature findings.
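
A trivial sketch of the percent-deviation metric quoted above and of a volume-recovery (partial-volume) correction step using a recovery coefficient; the coefficients and dose values are hypothetical, whereas the study derives its own corrections from the phantom measurements:

```python
# Percent deviation between a measured (or voxel-based) dose and a reference dose,
# and a partial-volume ("volume recovery") correction using a recovery coefficient
# derived from phantom spheres of known size. All numbers below are hypothetical.
def percent_deviation(measured, reference):
    return 100.0 * (measured - reference) / reference

recovery_coefficient = {5.0: 0.95, 2.58: 0.87, 1.30: 0.81}   # mL -> RC (assumed)

measured_dose = 7.4          # Gy, uncorrected dose to a 2.58 mL sphere (example)
true_dose = 8.5              # Gy, reference
print(percent_deviation(measured_dose, true_dose))            # ≈ -12.9 %

corrected = measured_dose / recovery_coefficient[2.58]        # volume-recovery step
print(percent_deviation(corrected, true_dose))                # ≈ +0.1 %
```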