960 results for variational Monte-Carlo method
Abstract:
We developed a ratiometric method capable of estimating total hemoglobin concentration from optically measured diffuse reflectance spectra. The three isosbestic wavelength ratio pairs that best correlated with total hemoglobin concentration, independent of saturation and scattering, were 545/390, 452/390, and 529/390 nm. These wavelength pairs were selected using forward Monte Carlo simulations and then used to extract hemoglobin concentration from experimental phantom measurements. Linear regression coefficients from the simulated data were applied directly to the phantom data by calibrating for instrument throughput with a single phantom. Phantoms with variable scattering and hemoglobin saturation were tested with two different instruments, and the average percent errors between the expected and ratiometrically extracted hemoglobin concentrations were as low as 6.3%. A correlation of r = 0.88 was achieved between hemoglobin concentration extracted using the 529/390 nm isosbestic ratio and a scalable inverse Monte Carlo model for in vivo dysplastic cervical measurements (hemoglobin concentrations have been shown by our group to be diagnostic for the detection of cervical pre-cancer). These results indicate that this simple ratiometric method has the potential to be used in clinical applications where tissue hemoglobin concentrations need to be rapidly quantified in vivo.
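The ratio-plus-regression pipeline described in this abstract can be sketched as follows. The 529/390 nm pair is one of the reported isosbestic pairs, but the regression coefficients and throughput factor below are hypothetical placeholders for the values the authors fit to their forward Monte Carlo simulations:

```python
import numpy as np

def thb_from_ratio(wavelengths, spectrum, pair=(529.0, 390.0),
                   slope=-25.0, intercept=30.0, throughput=1.0):
    """Estimate total hemoglobin concentration from a diffuse reflectance
    spectrum via an isosbestic wavelength ratio.

    slope/intercept stand in for the linear regression coefficients fit to
    forward Monte Carlo simulations; throughput is the single-phantom
    instrument calibration factor. All numeric values here are
    hypothetical placeholders, not the paper's fitted coefficients.
    """
    # Interpolate reflectance at the two isosbestic wavelengths
    num = np.interp(pair[0], wavelengths, spectrum)
    den = np.interp(pair[1], wavelengths, spectrum)
    ratio = throughput * num / den
    # Apply the (placeholder) simulation-derived linear regression
    return slope * ratio + intercept
```

Because both wavelengths of a pair are isosbestic, the ratio is insensitive to hemoglobin saturation, which is what makes a single linear regression workable.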
Abstract:
Quantitative optical spectroscopy has the potential to provide an effective, low-cost, and portable solution for cervical pre-cancer screening in resource-limited communities. However, clinical studies to validate the use of this technology in resource-limited settings require low power consumption and good quality control that is minimally influenced by the operator or by variable environmental conditions in the field. The goal of this study was to evaluate the effects of two potential sources of error, calibration and pressure, on the extraction of absorption and scattering properties of normal cervical tissues in a resource-limited setting in Leogane, Haiti. Our results show that self-calibrated measurements improved scattering measurements through real-time correction of system drift, in addition to minimizing the time required for post-calibration. Variations in pressure (tested without the potential confounding effects of calibration error) caused local changes in vasculature and scatterer density that significantly impacted the tissue absorption and scattering properties. Future spectroscopic systems intended for clinical use, particularly where operator training is not viable and environmental conditions are unpredictable, should incorporate a real-time self-calibration channel and collect diffuse reflectance spectra at a consistent pressure to maximize data integrity.
Abstract:
Transcriptional regulation has been studied intensively in recent decades. One important aspect of this regulation is the interaction between regulatory proteins, such as transcription factors (TFs) and nucleosomes, and the genome. Different high-throughput techniques have been invented to map these interactions genome-wide, including ChIP-based methods (ChIP-chip, ChIP-seq, etc.), nuclease digestion methods (DNase-seq, MNase-seq, etc.), and others. However, a single experimental technique often provides only partial and noisy information about the whole picture of protein-DNA interactions. Therefore, the overarching goal of this dissertation is to provide computational developments for jointly modeling different experimental datasets to achieve a holistic inference on the protein-DNA interaction landscape.
We first present a computational framework that can incorporate the protein binding information in MNase-seq data into a thermodynamic model of protein-DNA interaction. We use a correlation-based objective function to model the MNase-seq data and a Markov chain Monte Carlo method to maximize the function. Our results show that the inferred protein-DNA interaction landscape is concordant with the MNase-seq data and provides a mechanistic explanation for the experimentally collected MNase-seq fragments. Our framework is flexible and can easily incorporate other data sources. To demonstrate this flexibility, we use prior distributions to integrate experimentally measured protein concentrations.
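The optimization step described above, maximizing a correlation-based objective with Markov chain Monte Carlo, can be sketched with a simple Metropolis-style search. The objective, proposal scale, and temperature below are illustrative assumptions, not the dissertation's actual model:

```python
import numpy as np

def correlation_objective(theta, data, model):
    """Pearson correlation between model-predicted signal and observed data."""
    pred = model(theta)
    return float(np.corrcoef(pred, data)[0, 1])

def mcmc_maximize(data, model, theta0, n_iter=2000, step=0.05, temp=0.01,
                  seed=0):
    """Metropolis-style search treating exp(objective/temp) as an
    unnormalized target: moves that raise the correlation are always
    accepted, and occasional downhill moves help escape local optima.
    Step size and temperature are illustrative tuning choices."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    f = correlation_objective(theta, data, model)
    best_theta, best_f = theta.copy(), f
    for _ in range(n_iter):
        prop = theta + rng.normal(0.0, step, theta.shape)
        f_prop = correlation_objective(prop, data, model)
        # Log-space acceptance test avoids overflow for large uphill moves
        if np.log(rng.random()) < (f_prop - f) / temp:
            theta, f = prop, f_prop
            if f > best_f:
                best_theta, best_f = theta.copy(), f
    return best_theta, best_f
```

A prior on experimentally measured protein concentrations, as mentioned above, would enter as an additive log-prior term in the acceptance test.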
We also study the ability of DNase-seq data to position nucleosomes. Traditionally, DNase-seq has only been widely used to identify DNase hypersensitive sites, which tend to be open chromatin regulatory regions devoid of nucleosomes. We reveal for the first time that DNase-seq datasets also contain substantial information about nucleosome translational positioning, and that existing DNase-seq data can be used to infer nucleosome positions with high accuracy. We develop a Bayes-factor-based nucleosome scoring method to position nucleosomes using DNase-seq data. Our approach utilizes several effective strategies to extract nucleosome positioning signals from the noisy DNase-seq data, including jointly modeling data points across the nucleosome body and explicitly modeling the quadratic and oscillatory DNase I digestion pattern on nucleosomes. We show that our DNase-seq-based nucleosome map is highly consistent with previous high-resolution maps. We also show that the oscillatory DNase I digestion pattern is useful in revealing the nucleosome rotational context around TF binding sites.
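The Bayes-factor scoring idea above can be illustrated with a minimal sketch: compare the likelihood of the DNase I cut counts across a candidate nucleosome body under a nucleosome model versus a background model. The Poisson likelihoods and all rates below are illustrative stand-ins, not the dissertation's fitted digestion profiles:

```python
import math

def log_pois(k, lam):
    """Log of the Poisson pmf with rate lam evaluated at count k."""
    return k * math.log(lam) - lam - math.lgamma(k + 1)

def nucleosome_log_bayes_factor(cuts, nuc_rates, bg_rate):
    """Log Bayes factor for 'nucleosome here' versus background, summing
    per-position log-likelihood ratios of observed cut counts across the
    candidate nucleosome body. In the dissertation's approach, nuc_rates
    would encode the damped, oscillatory DNase I digestion pattern over
    the ~147 bp body; here they are simple placeholders."""
    return sum(log_pois(k, ln) - log_pois(k, bg_rate)
               for k, ln in zip(cuts, nuc_rates))
```

A positive score means the protected (low-cut) pattern expected on a nucleosome fits the data better than the open-chromatin background.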
Finally, we present a state-space model (SSM) for jointly modeling different kinds of genomic data to provide an accurate view of the protein-DNA interaction landscape. We also provide an efficient expectation-maximization algorithm to learn model parameters from data. We first show in simulation studies that the SSM can effectively recover underlying true protein binding configurations. We then apply the SSM to model real genomic data (both DNase-seq and MNase-seq data). Through incrementally increasing the types of genomic data in the SSM, we show that different data types can contribute complementary information for the inference of protein binding landscape and that the most accurate inference comes from modeling all available datasets.
This dissertation provides a foundation for future research by taking a step toward the genome-wide inference of protein-DNA interaction landscape through data integration.
Abstract:
This paper discusses the reliability of power electronics modules. The approach taken combines numerical modeling techniques with experimentation and accelerated testing to identify failure modes and mechanisms for the power module structure and, most importantly, the root cause of a potential failure. The paper details results for two types of failure: (i) wire bond fatigue and (ii) substrate delamination. Finite element method modeling techniques have been used to predict the stress distribution within the module structures. A response surface optimisation approach has been employed to enable the optimal design and parameter sensitivity to be determined. The response surface is then used by a Monte Carlo method to determine the effects of uncertainty in the design.
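The final step above, propagating design uncertainty through a response surface with a Monte Carlo method, can be sketched as follows. The quadratic surface, its coefficients, the scatter of the design variables, and the allowable stress are all illustrative assumptions; in practice the surface would be fitted to finite element results:

```python
import numpy as np

def stress_mpa(x1, x2):
    """Hypothetical quadratic response surface for peak stress (MPa) in
    two normalized design variables (e.g. bond-wire geometry, substrate
    thickness). Coefficients are illustrative, not fitted values."""
    return 120.0 + 15.0 * x1 - 8.0 * x2 + 4.0 * x1**2 + 3.0 * x1 * x2

rng = np.random.default_rng(1)
n = 100_000
# Assume Gaussian manufacturing scatter on the normalized design variables
x1 = rng.normal(0.0, 0.2, n)
x2 = rng.normal(0.0, 0.2, n)
stresses = stress_mpa(x1, x2)
# Fraction of sampled designs exceeding an illustrative allowable stress
p_exceed = float(np.mean(stresses > 125.0))
```

Because the surface is a cheap polynomial surrogate, sampling it a hundred thousand times costs milliseconds, whereas running the finite element model that many times would be infeasible.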
Abstract:
We present an occultation of the newly discovered hot Jupiter system WASP-19, observed with the High Acuity Wide-field K-band Imager instrument on the VLT, in order to measure thermal emission from the planet's dayside at ~2 µm. The light curve was analysed using a Markov Chain Monte Carlo method to find the eclipse depth and the central eclipse time. The eclipse depth was found to be 0.366 +/- 0.072 per cent, corresponding to a brightness temperature of 2540 +/- 180 K. This is significantly higher than the calculated (zero-albedo) equilibrium temperature and indicates that the planet shows poor redistribution of heat to the night side, consistent with models of highly irradiated planets. Further observations are needed to confirm the existence of a temperature inversion and possibly molecular emission lines. The central eclipse time was found to be consistent with a circular orbit.
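The conversion from eclipse depth to brightness temperature follows from the eclipse depth being the planet-to-star flux ratio at the observed wavelength. A minimal sketch, assuming an illustrative radius ratio of 0.14 and stellar effective temperature of 5500 K (neither is given in the abstract), inverts the Planck-function ratio by bisection:

```python
import math

H, C, KB = 6.62607015e-34, 2.99792458e8, 1.380649e-23

def planck(lam, temp):
    """Planck spectral radiance at wavelength lam (m), up to a constant
    factor that cancels in the ratio."""
    return 1.0 / (lam**5 * (math.exp(H * C / (lam * KB * temp)) - 1.0))

def brightness_temperature(depth, rp_rs, t_star, lam=2.09e-6,
                           lo=500.0, hi=6000.0):
    """Solve depth = (Rp/Rs)^2 * B(lam, Tp) / B(lam, Ts) for Tp by
    bisection; B(lam, T) is monotone increasing in T at fixed lam."""
    target = depth / rp_rs**2            # required flux ratio B(Tp)/B(Ts)
    b_star = planck(lam, t_star)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if planck(lam, mid) / b_star < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

With these assumed system parameters, the reported 0.366 per cent depth inverts to roughly 2500-2600 K, in the neighbourhood of the quoted 2540 K.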
Abstract:
The overall quantum efficiency in surface plasmon (SP) enhanced Schottky barrier photodetectors is examined by considering both the external and internal yield. The external yield is considered through calculations of absorption and transmission of light in a configuration that allows reflectance minimization due to SP excitation. Following a Monte Carlo method, a procedure is presented to estimate the internal yield while taking into account the effect of elastic and inelastic scattering processes on excited carriers subsequent to photon absorption. The relative importance of internal photoemission and band-to-band contributions to the internal yield is highlighted along with the variation of the yield as a function of wavelength, metal thickness and other salient parameters of the detector.
Abstract:
Context. Several competing scenarios for planetary-system formation and evolution seek to explain how hot Jupiters came to be so close to their parent stars. Most planetary parameters evolve with time, making it hard to distinguish between models. The obliquity of an orbit with respect to the stellar rotation axis is thought to be more stable than other parameters such as eccentricity. Most planets, to date, appear aligned with the stellar rotation axis; the few misaligned planets so far detected are massive (> 2 MJ). Aims: Our goal is to measure the degree of alignment between planetary orbits and stellar spin axes, to search for potential correlations with eccentricity or other planetary parameters, and to measure long-term radial velocity variability indicating the presence of other bodies in the system. Methods: For transiting planets, the Rossiter-McLaughlin effect allows the measurement of the sky-projected angle β between the stellar rotation axis and a planet's orbital axis. Using the HARPS spectrograph, we observed the Rossiter-McLaughlin effect for six transiting hot Jupiters found by the WASP consortium. We combined these with long-term radial velocity measurements obtained with CORALIE. We used a combined analysis of photometry and radial velocities, fitting model parameters with the Markov Chain Monte Carlo method. After obtaining β we attempted to statistically determine the distribution of the real spin-orbit angle ψ. Results: We found that three of our targets have β above 90°: WASP-2b (β = 153° +11/−15), WASP-15b (β = 139.6° +5.2/−4.3), and WASP-17b (β = 148.5° +5.1/−4.2); the other three (WASP-4b, WASP-5b, and WASP-18b) have angles compatible with 0°. We find no dependence of the misalignment angle on planet mass or any other planetary parameter. All six orbits are close to circular, with only one firm detection of eccentricity, e = 0.00848 +0.00085/−0.00095, in WASP-18b. No long-term radial acceleration was detected for any of the targets.
Combining all 20 previous measurements of β with our six and transforming them into a distribution of ψ, we find that between about 45% and 85% of hot Jupiters have ψ > 30°. Conclusions: Most hot Jupiters are misaligned, with a large variety of spin-orbit angles. The observations match predictions based on the Kozai mechanism well. If these observational facts are confirmed in the future, we may conclude that most hot Jupiters have a dynamical and tidal origin, without the need to invoke type I or II migration. At present, standard disc migration cannot explain the observations without invoking at least one additional process.
Abstract:
The purpose of this study was to investigate the occupational hazards within the tanning industry caused by contaminated dust. A qualitative assessment of the risk of human exposure to dust was made throughout a commercial Kenyan tannery. Using this information, high-risk points in the processing line were identified and dust sampling regimes developed. An optical set-up using microscopy and digital imaging techniques was used to determine dust particle numbers and size distributions. The results showed that chemical handling was the most hazardous area, with dust concentrations of 12 mg m^-3. A Monte Carlo method was used to estimate the concentration of the dust in the air throughout the tannery during an 8 h working day. This showed that the high-risk area of the tannery was associated with mean dust concentrations greater than the limits stipulated by UK Statutory Instrument 2002 No. 2677, exceeding both the 10 mg m^-3 inhalable dust limit and the 4 mg m^-3 respirable dust limit. This has implications for the provision of personal protective equipment (PPE) to tannery workers for the mitigation of occupational risk.
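A Monte Carlo estimate of full-shift exposure like the one described above can be sketched by sampling task-specific concentrations and forming an 8 h time-weighted average. The 12 mg m^-3 chemical-handling mean comes from the abstract; the other task means, durations, lognormal shape, and geometric standard deviation of 2 are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# (mean concentration in mg/m^3, hours spent per 8 h shift); only the
# 12 mg/m^3 chemical-handling figure is from the study.
tasks = {"chemical handling": (12.0, 2.0),
         "buffing/trimming": (6.0, 3.0),
         "other areas": (2.0, 3.0)}

n = 50_000
twa = np.zeros(n)  # simulated 8 h time-weighted average concentrations
for mean, hours in tasks.values():
    sigma = np.log(2.0)                 # lognormal shape from assumed GSD = 2
    mu = np.log(mean) - 0.5 * sigma**2  # chosen so the lognormal mean = mean
    twa += rng.lognormal(mu, sigma, n) * hours / 8.0

# Fraction of simulated working days exceeding the 10 mg/m^3 inhalable limit
p_exceed_inhalable = float(np.mean(twa > 10.0))
```

The exceedance fraction, rather than the mean alone, is what motivates PPE provision: a mean below the limit can still hide a substantial probability of over-limit days.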
Abstract:
Objective: To simultaneously evaluate 14 biomarkers from distinct biological pathways for risk prediction of ischemic stroke, including biomarkers of hemostasis, inflammation, and endothelial activation as well as chemokines and adipocytokines.
Methods and Results: The Prospective Epidemiological Study on Myocardial Infarction (PRIME) is a cohort of 9771 healthy men 50 to 59 years of age who were followed up over 10 years. In a nested case–control study, 95 ischemic stroke cases were matched with 190 controls. After multivariable adjustment for traditional risk factors, fibrinogen (odds ratio [OR], 1.53; 95% confidence interval [CI], 1.03–2.28), E-selectin (OR, 1.76; 95% CI, 1.06–2.93), interferon-γ-inducible-protein-10 (OR, 1.72; 95% CI, 1.06–2.78), resistin (OR, 2.86; 95% CI, 1.30–6.27), and total adiponectin (OR, 1.82; 95% CI, 1.04–3.19) were significantly associated with ischemic stroke. Adding E-selectin and resistin to a traditional risk factor model significantly increased the area under the receiver-operating characteristic curve from 0.679 (95% CI, 0.612–0.745) to 0.785 and 0.788, respectively, and yielded a categorical net reclassification improvement of 29.9% (P=0.001) and 28.4% (P=0.002), respectively. Their simultaneous inclusion in the traditional risk factor model increased the area under the receiver-operating characteristic curve to 0.824 (95% CI, 0.770–0.877) and resulted in a net reclassification improvement of 41.4% (P<0.001). Results were confirmed when using continuous net reclassification improvement.
Conclusion: Among multiple biomarkers from distinct biological pathways, E-selectin and resistin provided incremental and additive value to traditional risk factors in predicting ischemic stroke.
Abstract:
This article proposes a closed-loop control scheme based on joint-angle feedback for cable-driven parallel manipulators (CDPMs), which is able to overcome various difficulties resulting from the flexible nature of the driven cables and achieve higher control accuracy. By introducing a unique structural design that accommodates built-in encoders in passive joints, the seven-degree-of-freedom (7-DOF) CDPM can obtain joint angle values without external sensing devices, and these are used for feedback control together with a proper closed-loop control algorithm. The control algorithm has been derived from the time differential of the kinematic formulation, which relates the joint angular velocities to the time derivative of the cable lengths. In addition, Lyapunov stability theory and the Monte Carlo method have been used to mathematically verify that the self-feedback control law has tolerance for parameter errors. With the aid of a co-simulation technique, the self-feedback closed-loop control is applied to a 7-DOF CDPM and shows higher motion accuracy than open-loop control. The trajectory tracking experiment on the motion control of the 7-DOF CDPM demonstrated good performance of the self-feedback control method.
Abstract:
A randomly distributed multi-particle model has been constructed that considers the effects of the particle/matrix interface and the strengthening mechanisms introduced by the particles. Particle shape, distribution, volume fraction, and the particle/matrix interface arising from factors such as element diffusion were considered in the model. The effects of the strengthening mechanisms caused by the introduction of particles on the mechanical properties of the composites, including grain refinement strengthening, dislocation strengthening, and Orowan strengthening, are incorporated. In the model, the particles are assumed to have spheroidal shape, with uniform distributions of the centre, long axis length, and inclination angle. The axis ratio follows a right half-normal distribution. Using a Monte Carlo method, the location and shape parameters of the spheroids are randomly selected. The particle volume fraction is calculated using the area ratio of the spheroids. Then, the effects of the particle/matrix interface and strengthening mechanisms on the distribution of Mises stress and equivalent strain and on the flow behaviour of the composites are discussed.
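The sampling scheme described above (uniform centres, long-axis lengths, and inclination angles; half-normal axis ratio; volume fraction from the area ratio) can be sketched as follows. The domain size, axis range, and half-normal scale are illustrative choices, not values from the paper:

```python
import numpy as np

def sample_spheroids(n, domain=100.0, a_lo=2.0, a_hi=6.0, sigma=0.3,
                     seed=3):
    """Draw n spheroidal particles: centres, long semi-axes, and
    inclination angles uniform; axis ratio from a right half-normal
    (|N(0, sigma)|, truncated to at most 1 so the long axis stays long).
    All numeric parameters here are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    cx = rng.uniform(0.0, domain, n)
    cy = rng.uniform(0.0, domain, n)
    a = rng.uniform(a_lo, a_hi, n)                  # long semi-axis
    angle = rng.uniform(0.0, np.pi, n)              # inclination angle
    ratio = np.minimum(np.abs(rng.normal(0.0, sigma, n)) + 1e-3, 1.0)
    b = a * ratio                                   # short semi-axis
    # 2-D section: particle fraction estimated as the area ratio of the
    # ellipses to the domain, ignoring any overlap between particles
    area_fraction = float(np.pi * np.sum(a * b) / domain**2)
    return cx, cy, a, b, angle, area_fraction
```

In a full model one would additionally reject or re-draw overlapping particles and iterate the draw count until the area fraction matches the target volume fraction.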
Abstract:
PURPOSE: We have been developing an image-guided single vocal cord irradiation technique to treat patients with stage T1a glottic carcinoma. In the present study, we compared the dose coverage to the affected vocal cord and the dose delivered to the organs at risk using conventional, intensity-modulated radiotherapy (IMRT) coplanar, and IMRT non-coplanar techniques.
METHODS AND MATERIALS: For 10 patients, conventional treatment plans using two laterally opposed wedged 6-MV photon beams were calculated in XiO (Elekta-CMS treatment planning system). An in-house IMRT/beam angle optimization algorithm was used to obtain the coplanar and non-coplanar optimized beam angles. Using these angles, the IMRT plans were generated in Monaco (IMRT treatment planning system, Elekta-CMS) with the implemented Monte Carlo dose calculation algorithm. The organs at risk included the contralateral vocal cord, arytenoids, swallowing muscles, carotid arteries, and spinal cord. The prescription dose was 66 Gy in 33 fractions.
RESULTS: For the conventional plans and coplanar and non-coplanar IMRT plans, the population-averaged mean dose ± standard deviation to the planning target volume was 67 ± 1 Gy. The contralateral vocal cord dose was reduced from 66 ± 1 Gy in the conventional plans to 39 ± 8 Gy and 36 ± 6 Gy in the coplanar and non-coplanar IMRT plans, respectively. IMRT consistently reduced the doses to the other organs at risk.
CONCLUSIONS: Single vocal cord irradiation with IMRT resulted in good target coverage and provided significant sparing of the critical structures. This has the potential to improve the quality-of-life outcomes after RT and maintain the same local control rates.
Abstract:
Differential emission measures (DEMs) during the impulsive phase of solar flares were constructed using observations from the EUV Variability Experiment (EVE) and the Markov-Chain Monte Carlo method. Emission lines from ions formed over the temperature range log Te = 5.8-7.2 allow the evolution of the DEM to be studied over a wide temperature range at 10 s cadence. The technique was applied to several M- and X-class flares, where impulsive phase EUV emission is observable in the disk-integrated EVE spectra from emission lines formed up to 3-4 MK, and we use the spatially unresolved EVE observations to infer the thermal structure of the emitting region. For the nine events studied, the DEMs exhibited a two-component distribution during the impulsive phase: a low-temperature component with peak temperature of 1-2 MK and a broad high-temperature component from 7 to 30 MK. A bimodal high-temperature component is also found for several events, with peaks at 8 and 25 MK during the impulsive phase. The origin of the emission was verified using Atmospheric Imaging Assembly images to be the flare ribbons and footpoints, indicating that the constructed DEMs represent the spatially averaged thermal structure of the chromospheric flare emission during the impulsive phase.
Abstract:
This chapter discusses how theoretical studies, using both atomistic and phenomenological approaches, have made clear predictions about the existence and behaviour of ferroelectric (FE) vortices. Effective Hamiltonians can be implemented within both Monte Carlo (MC) and molecular dynamics (MD) simulations. In contrast to the effective Hamiltonian method, which is atomistic in nature, the phase field method employs a continuum approach in which the polarization field is the order parameter. Properties of FE nanostructures are largely governed by the existence of a depolarization field, which is much stronger than the demagnetization field in magnetic nanosystems. The topological patterns seen in rare earth manganites are often referred to as vortices, yet this claim never seems to be explicitly justified. By inspection, the form of a vortex structure is such that there is a continuous rotation in the orientation of dipole vectors around the singularity at the centre of the vortex.
Abstract:
BACKGROUND: Lipid-lowering therapy is costly but effective at reducing coronary heart disease (CHD) risk. OBJECTIVE: To assess the cost-effectiveness and public health impact of Adult Treatment Panel III (ATP III) guidelines and compare with a range of risk- and age-based alternative strategies. DESIGN: The CHD Policy Model, a Markov-type cost-effectiveness model. DATA SOURCES: National surveys (1999 to 2004), vital statistics (2000), the Framingham Heart Study (1948 to 2000), other published data, and a direct survey of statin costs (2008). TARGET POPULATION: U.S. population age 35 to 85 years. TIME HORIZON: 2010 to 2040. PERSPECTIVE: Health care system. INTERVENTION: Lowering of low-density lipoprotein cholesterol with HMG-CoA reductase inhibitors (statins). OUTCOME MEASURE: Incremental cost-effectiveness. RESULTS OF BASE-CASE ANALYSIS: Full adherence to ATP III primary prevention guidelines would require starting (9.7 million) or intensifying (1.4 million) statin therapy for 11.1 million adults and would prevent 20,000 myocardial infarctions and 10,000 CHD deaths per year at an annual net cost of $3.6 billion ($42,000/QALY) if low-intensity statins cost $2.11 per pill. The ATP III guidelines would be preferred over alternative strategies if society is willing to pay $50,000/QALY and statins cost $1.54 to $2.21 per pill. At higher statin costs, ATP III is not cost-effective; at lower costs, more liberal statin-prescribing strategies would be preferred; and at costs less than $0.10 per pill, treating all persons with low-density lipoprotein cholesterol levels greater than 3.4 mmol/L (>130 mg/dL) would yield net cost savings. RESULTS OF SENSITIVITY ANALYSIS: Results are sensitive to the assumptions that LDL cholesterol becomes less important as a risk factor with increasing age and that little disutility results from taking a pill every day. LIMITATION: Randomized trial evidence for statin effectiveness is not available for all subgroups.
CONCLUSION: The ATP III guidelines are relatively cost-effective and would have a large public health impact if implemented fully in the United States. Alternate strategies may be preferred, however, depending on the cost of statins and how much society is willing to pay for better health outcomes. FUNDING: Flight Attendants' Medical Research Institute and the Swanson Family Fund. The Framingham Heart Study and Framingham Offspring Study are conducted and supported by the National Heart, Lung, and Blood Institute.
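The decision rule underlying the abstract's cost-effectiveness results can be made concrete with a short sketch of the incremental cost-effectiveness ratio (ICER) and the willingness-to-pay comparison; the QALY figure below is merely back-calculated from the abstract's reported numbers:

```python
def icer(delta_cost, delta_qaly):
    """Incremental cost-effectiveness ratio: extra dollars spent per
    extra quality-adjusted life-year (QALY) gained versus the comparator."""
    return delta_cost / delta_qaly

def preferred(icer_value, willingness_to_pay=50_000.0):
    """A strategy is cost-effective when its ICER falls at or below the
    willingness-to-pay threshold; the abstract uses $50,000/QALY."""
    return icer_value <= willingness_to_pay

# Back-calculated from the abstract: a $3.6 billion net annual cost at
# $42,000/QALY implies roughly 3.6e9 / 42,000 ~ 86,000 QALYs per year.
implied_qalys_per_year = 3.6e9 / 42_000.0
```

This is why the conclusion is sensitive to statin price: pill cost moves delta_cost directly, shifting the ICER across (or away from) the $50,000/QALY threshold.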