963 results for Monte-Carlo Simulation Method


Relevance: 100.00%

Abstract:

The available results of deep imaging searches for planetary companions around nearby stars provide useful constraints on the frequency of giant planets in very wide orbits. Here we present preliminary results of a Monte Carlo simulation that compares published detection limits with generated planetary masses and orbital parameters. This allows us to consider the implications of the null detections from direct imaging for the distributions of mass and semimajor axis derived from other search techniques, and to check the agreement of the observations with available planetary formation models.
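
To illustrate the kind of comparison described here, a minimal Monte Carlo sketch is given below. The power-law indices, the step-shaped detection limit, and the survey size are hypothetical placeholders, not the study's actual inputs; the point is only the mechanics of drawing (mass, semimajor axis) pairs and confronting them with a detection limit.

```python
import numpy as np

rng = np.random.default_rng(42)
n_planets = 100_000

# Draw masses (in Jupiter masses) and semimajor axes (in AU) from assumed
# power-law / log-flat distributions; indices and ranges are illustrative.
mass = (rng.pareto(1.3, n_planets) + 1.0) * 0.5                 # dN/dM ~ M^-2.3 above 0.5 MJup
sma = 10 ** rng.uniform(np.log10(5), np.log10(500), n_planets)  # 5-500 AU, log-flat

# Hypothetical published detection limit: minimum detectable mass vs. separation.
def detection_limit_mjup(a_au):
    return np.where(a_au < 20, 20.0, np.where(a_au < 100, 5.0, 2.0))

detectable = mass > detection_limit_mjup(sma)
frac = detectable.mean()
print(f"fraction of simulated planets the survey could have detected: {frac:.3f}")

# A null detection constrains the planet frequency f: the chance of seeing
# nothing around N stars is (1 - f*frac)^N, giving an upper limit on f.
n_stars, cl = 50, 0.95
f_upper = (1.0 - (1.0 - cl) ** (1.0 / n_stars)) / frac
print(f"{cl:.0%} upper limit on wide-orbit giant planet frequency: {f_upper:.3f}")
```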

Relevance: 100.00%

Abstract:

Little is known about how the skills needed to perform ultrasound- or nerve stimulator-guided peripheral nerve blocks are learnt. The aim of this study was to compare the learning curves of residents trained in ultrasound guidance with those of residents trained in nerve stimulation for axillary brachial plexus block. Ten residents with no previous experience of ultrasound received ultrasound training, and another ten residents with no previous experience of nerve stimulation received nerve stimulation training. The novices' learning curves were generated by retrospective analysis of data from our electronic anaesthesia database. Individual success rates were pooled, and the institutional learning curve was calculated using a bootstrapping technique combined with a Monte Carlo simulation procedure. The skills required to perform successful ultrasound-guided axillary brachial plexus block can be learnt faster and lead to a higher final success rate than those for nerve stimulator-guided axillary brachial plexus block.
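
A hedged sketch of the pooling step follows. The resident outcome sequences are simulated placeholders (the real ones came from the anaesthesia database); the bootstrap resamples residents to attach uncertainty to the institutional learning curve.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: per-resident sequences of block outcomes (1 = success)
# ordered by case number, with an upward trend standing in for learning.
p_success = np.clip(0.5 + 0.012 * np.arange(30), 0.0, 0.95)
residents = [(rng.random(30) < p_success).astype(int) for _ in range(10)]

def institutional_curve(seqs, n_boot=2000):
    """Bootstrap residents, then pool success rates case by case."""
    curves = np.empty((n_boot, len(seqs[0])))
    for b in range(n_boot):
        idx = rng.integers(0, len(seqs), len(seqs))   # resample residents with replacement
        curves[b] = np.mean([seqs[i] for i in idx], axis=0)
    lo, hi = np.percentile(curves, [2.5, 97.5], axis=0)
    return curves.mean(axis=0), lo, hi

mean, lo, hi = institutional_curve(residents)
print(f"pooled success rate: case 1 = {mean[0]:.2f}, case 30 = {mean[-1]:.2f}")
```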

Relevance: 100.00%

Abstract:

The present study was conducted to estimate the direct losses due to Neospora caninum in Swiss dairy cattle and to assess the costs and benefits of different potential control strategies. A Monte Carlo simulation spreadsheet module was developed to estimate the direct costs caused by N. caninum with and without control strategies, and to estimate the costs of these strategies in a financial analysis. The control strategies considered were "testing and culling of seropositive female cattle", "discontinued breeding with offspring from seropositive cows", "chemotherapeutical treatment of female offspring" and "vaccination of all female cattle". Each parameter in the module that was considered uncertain was described using a probability distribution. The simulations were run with 20,000 iterations over a time period of 25 years. The median annual losses due to N. caninum in the Swiss dairy cow population were estimated at 9.7 million euros. All control strategies that required yearly serological testing of all cattle in the population produced high costs and thus were not financially profitable. Among the other control strategies, two showed benefit-cost ratios (BCR) >1 and positive net present values (NPV): "discontinued breeding with offspring from seropositive cows" (BCR=1.29, NPV=25 million euros) and "chemotherapeutical treatment of all female offspring" (BCR=2.95, NPV=59 million euros). In economic terms, the best control strategy currently available would therefore be "discontinued breeding with offspring from seropositive cows".
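
As a rough illustration of a benefit-cost Monte Carlo of this kind (all distributions and numbers below are invented placeholders, not the study's Swiss input data), discounted benefits and costs can be simulated and summarized as BCR and NPV:

```python
import numpy as np

rng = np.random.default_rng(1)
n_iter, years = 20_000, 25      # matches the reported iteration count and horizon
discount = 0.03                 # assumed discount rate (not stated in the abstract)

# Hypothetical annual figures in million EUR, drawn per iteration and year.
baseline_loss = rng.triangular(7, 9.7, 13, size=(n_iter, years))
control_cost  = rng.triangular(1, 2, 4,  size=(n_iter, years))
avoided_frac  = rng.uniform(0.4, 0.8,    size=(n_iter, years))

disc = 1 / (1 + discount) ** np.arange(years)
benefit = (baseline_loss * avoided_frac * disc).sum(axis=1)   # discounted avoided losses
cost    = (control_cost * disc).sum(axis=1)                   # discounted control costs

bcr = benefit / cost
npv = benefit - cost
print(f"median BCR: {np.median(bcr):.2f}, median NPV: {np.median(npv):.1f} million EUR")
```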

Relevance: 100.00%

Abstract:

Smoothing splines are a popular approach for non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal-plus-noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important practical step. In this paper, we draw a connection between smoothing splines and REACT estimators that motivates new criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing ones.
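
The link between periodic smoothing and shrinkage of Fourier coefficients can be made concrete with a small sketch. The data, basis size, and fourth-power penalty below are illustrative assumptions, and the selection criterion shown is ordinary generalized cross-validation, one of the three comparison methods, not the paper's new criteria.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical circadian-style data: one period, 96 samples plus noise.
t = np.linspace(0, 1, 96, endpoint=False)     # time as a fraction of the period
y = np.sin(2 * np.pi * t) + 0.3 * np.cos(6 * np.pi * t) + rng.normal(0, 0.4, t.size)

# Periodic smoothing is equivalent to shrinking Fourier coefficients,
# which is the connection to REACT estimators the paper exploits.
K = 20                                        # number of harmonics kept
freqs = np.arange(1, K + 1)
X = np.column_stack([np.ones_like(t)] +
                    [f(2 * np.pi * k * t) for k in freqs for f in (np.sin, np.cos)])
penalty = np.concatenate([[0.0], np.repeat(freqs.astype(float) ** 4, 2)])

def gcv(lam):
    # Generalized cross-validation score for the penalized fit at lambda.
    S = X @ np.linalg.solve(X.T @ X + lam * np.diag(penalty), X.T)
    resid = y - S @ y
    n = y.size
    return (resid @ resid / n) / (1 - np.trace(S) / n) ** 2

lams = 10.0 ** np.linspace(-6, 2, 50)
best = min(lams, key=gcv)
print(f"GCV-selected smoothing parameter: {best:.2e}")
```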

Relevance: 100.00%

Abstract:

In many clinical trials to evaluate treatment efficacy, it is believed that there may exist a latent treatment effectiveness lag time, after which the medical procedure or chemical compound reaches full effect. In this article, semiparametric regression models are proposed and studied to estimate the treatment effect while accounting for such latent lag times. The new models take advantage of the invariance property of the additive hazards model in marginalizing over random effects, so the model parameters are easy to estimate and interpret, while the flexibility of leaving the baseline hazard function unspecified is retained. Monte Carlo simulation studies demonstrate the appropriateness of the proposed semiparametric estimation procedure. Data from a randomized clinical trial evaluating the effectiveness of biodegradable carmustine polymers for the treatment of recurrent brain tumors are analyzed.
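
The abstract does not give the exact specification, but a standard additive hazards model with a latent lag time τ (an assumed form, for orientation only, not necessarily the authors' model) could be written as:

```latex
% Assumed additive hazards model with latent effectiveness lag time \tau:
% no treatment effect before \tau, an additive shift \beta afterwards.
% \lambda_0(t) is the unspecified baseline hazard, Z the treatment indicator.
\lambda(t \mid Z) \;=\; \lambda_0(t) + \beta \, Z \, \mathbf{1}\{t > \tau\}
```

The invariance property mentioned above is that an additive random effect b enters the hazard as just another additive term, λ0(t) + b + βZ1{t > τ}, so the covariate effect stays additive after marginalizing over b, in contrast to proportional hazards models.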

Relevance: 100.00%

Abstract:

Metals price risk management is a key issue in metal markets because of the uncertainty of commodity price fluctuations, exchange rates, and interest rate changes, and the resulting large price risk to both metals producers and consumers. It is therefore a concern for all participants in metal markets, including producers, consumers, merchants, banks, investment funds, speculators, and traders. Managing price risk provides stable income for both producers and consumers, and so increases the chance that a firm will invest in attractive projects. The purpose of this research is to evaluate risk management strategies in the copper market. The main tools and strategies of price risk management are hedging and other derivatives such as futures contracts, swaps, and options. Hedging is a transaction designed to reduce or eliminate price risk. Derivatives are financial instruments whose returns are derived from other financial instruments, and they are commonly used for managing financial risk. Although derivatives have existed in some form for centuries, their growth has accelerated rapidly during the last 20 years, and they are now widely used by financial institutions, corporations, professional investors, and individuals. This project focuses on the over-the-counter (OTC) market and its products, such as exotic options, particularly Asian options. The first part of the project describes basic derivatives and risk management strategies, and discusses basic concepts of spot and futures (forward) markets, the benefits and costs of risk management, and the risks and rewards of positions in derivative markets. The second part considers the valuation of commodity derivatives; here, the option pricing model DerivaGem is applied to Asian call and put options on London Metal Exchange (LME) copper, both to understand how Asian options are valued and to compare theoretical option values with observed market values. Predicting future trends in copper prices is essential to managing market price risk successfully, so the third part reviews econometric commodity models. Building on this literature review, the fourth part reports the construction and testing of an econometric model designed to forecast the monthly average price of copper on the LME. More specifically, it shows how LME copper prices can be explained by a simultaneous-equation structural model (estimated by two-stage least squares regression) connecting supply and demand variables. The estimated simultaneous model for the copper industry is

\[
\begin{cases}
Q_t^D = e^{-5.0485}\, P_{t-1}^{-0.1868}\, GDP_t^{1.7151}\, e^{0.0158\, IP_t} \\
Q_t^S = e^{-3.0785}\, P_{t-1}^{0.5960}\, T_t^{0.1408}\, P_{OIL,t}^{-0.1559}\, USDI_t^{1.2432}\, LIBOR_{t-6}^{-0.0561} \\
Q_t^D = Q_t^S
\end{cases}
\]

with the implied reduced-form price equation

\[
P_{t-1}^{CU} = e^{-2.5165}\, GDP_t^{2.1910}\, e^{0.0202\, IP_t}\, T_t^{-0.1799}\, P_{OIL,t}^{0.1991}\, USDI_t^{-1.5881}\, LIBOR_{t-6}^{0.0717},
\]

where Q_t^D and Q_t^S are world demand for and supply of copper at time t, respectively; P_{t-1} is the lagged copper price, the focus of the analysis in this part; GDP_t is world gross domestic product, representing aggregate economic activity; IP_t is global industrial production growth; T_t is a time variable serving as a proxy for technological change; P_{OIL,t}, the price of oil at time t, is a proxy for the cost of energy in producing copper; USDI_t is the U.S. dollar index, an important variable for explaining copper supply and copper prices; and LIBOR_{t-6} is the 1-year London Interbank Offered Rate lagged six months. Although the model could be applied to other base metals industries, omitted exogenous variables, such as the price of a substitute or a combined substitute-price variable, have not been considered in this study. Based on this econometric model, a Monte Carlo simulation analysis is used to estimate the probabilities that the monthly average copper price in 2006 and 2007 will exceed specific option strike prices. The final part evaluates risk management strategies, including option strategies, metal swaps, and simple options, in relation to the simulation results. Basic option strategies such as bull spreads, bear spreads, and butterfly spreads, constructed from both call and put options for 2006 and 2007, are evaluated, and each strategy is analyzed against the daily data and the price prediction model. Applications stemming from this project include valuing Asian options, developing a copper price prediction model, forecasting and planning, and decision making for price risk management in the copper market.
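
A hedged sketch of the final step, estimating the probability that the simulated monthly average price exceeds a strike: the reduced-form coefficients are taken from the equation above, but the distributions assumed for the exogenous drivers (and hence the resulting price scale) are illustrative placeholders, not the project's actual 2006-2007 forecasts.

```python
import numpy as np

rng = np.random.default_rng(3)
n_sims = 100_000

# Illustrative distributions for the exogenous drivers of the reduced form.
gdp   = rng.normal(1.04, 0.01, n_sims)            # world GDP index
ip    = rng.normal(3.0, 1.0, n_sims)              # industrial production growth, %
t_var = 80.0                                      # time trend (e.g., months since start)
oil   = rng.lognormal(np.log(60), 0.15, n_sims)   # oil price, USD/bbl
usdi  = rng.normal(0.90, 0.03, n_sims)            # U.S. dollar index (scaled)
libor = rng.normal(5.0, 0.5, n_sims)              # 1-year LIBOR lagged 6 months, %

# Reduced-form price equation with the coefficients quoted above.
price = (np.exp(-2.5165) * gdp ** 2.1910 * np.exp(0.0202 * ip)
         * t_var ** -0.1799 * oil ** 0.1991
         * usdi ** -1.5881 * libor ** 0.0717)

strike = np.median(price) * 1.1                   # hypothetical strike, 10% above median
print(f"P(monthly average price > strike): {(price > strike).mean():.3f}")
```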

Relevance: 100.00%

Abstract:

Civil infrastructure provides essential services for the development of both society and the economy. It is very important to manage such systems efficiently to ensure sound performance. However, there are challenges in extracting information from the available data, which also necessitates methodologies and frameworks to assist stakeholders in the decision making process. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Guided by the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making. Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in responding to failures and prioritizing budgets, especially when funding is limited. It reveals that blockage arrivals are not totally random; smaller, meaningful subsets show good random behavior. The failure rate over time is further analyzed by applying existing reliability models and non-parametric approaches, and a scheme is proposed to depict failure rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction. Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition are the estimation of transition probabilities and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a sensitivity analysis is performed for the application of a non-homogeneous Markov chain model. Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function and fall within an interval estimate. For each scenario, multiple cases are simulated using a Monte Carlo simulation. Results show that output variation is sensitive to the probability regression, while for the interval estimate the outputs vary similarly to the inputs. Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). Life cycle cost analysis covers the material extraction, construction, and rehabilitation phases; in the rehabilitation phase, a Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among the alternatives to support decision making.
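
A minimal sketch of the scenario simulation, assuming (as one illustrative reading) that the probability of remaining in the current condition state declines with facility age along a Weibull curve; the state space, horizon, and parameters are placeholders, not the study's calibrated values.

```python
import numpy as np

rng = np.random.default_rng(4)

# Non-homogeneous Markov deterioration sketch: condition states 1 (best)
# to 5 (worst); the annual staying probability decays with age.
n_states, horizon, n_runs = 5, 50, 10_000
shape, scale = 1.5, 40.0                      # hypothetical Weibull parameters

def p_stay(age):
    # Survival-style staying probability that decreases with age.
    return np.exp(-((age / scale) ** shape))

states = np.ones(n_runs, dtype=int)           # all facilities start in state 1
for age in range(1, horizon + 1):
    move = rng.random(n_runs) > p_stay(age)   # transition to the next-worse state
    states = np.minimum(states + move, n_states)

for s in range(1, n_states + 1):
    print(f"P(state {s} at year {horizon}) = {(states == s).mean():.3f}")
```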

Relevance: 100.00%

Abstract:

The physics of the operation of single-electron tunneling devices (SEDs) and single-electron tunneling transistors (SETs), especially those with multiple nanometer-sized islands, has remained poorly understood despite intensive experimental and theoretical research. This computational study examines the current-voltage (IV) characteristics of multi-island single-electron devices using a newly developed multi-island transport simulator (MITS) based on semi-classical tunneling theory and kinetic Monte Carlo simulation. The dependence of device characteristics on physical device parameters is explored, and physical mechanisms are proposed for the Coulomb blockade (CB) and Coulomb staircase (CS) characteristics. Simulations using MITS demonstrate that the overall IV characteristics of a device with a random distribution of islands result from a complex interplay between the factors that affect the tunneling rates that are fixed a priori (e.g., island sizes, island separations, temperature, gate bias) and the evolving charge state of the system, which changes as the source-drain bias (VSD) is changed. With increasing VSD, a multi-island device has to overcome multiple discrete energy barriers (up-steps) before it reaches the threshold voltage (Vth). Beyond Vth, current flow is rate-limited by slow junctions, which leads to the CS structures in the IV characteristic. Each step in the CS is characterized by a unique distribution of island charges with an associated distribution of tunneling probabilities. MITS simulation studies of one-dimensional (1D) disordered chains show that longer chains are better suited for switching applications, as Vth increases with increasing chain length; they also retain CS structures at higher temperatures better than shorter chains. In sufficiently disordered 2D systems, we demonstrate that there may exist a dominant conducting path (DCP), which makes the 2D device behave as a quasi-1D device. The existence of a DCP is sensitive to the device structure but robust with respect to changes in temperature, gate bias, and VSD. A side gate in 1D and 2D systems can effectively control Vth. We argue that devices with smaller island sizes and narrower junctions may be better suited for practical applications, especially at room temperature.
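
The kinetic Monte Carlo loop at the heart of such a simulator can be sketched as follows. The rate model and event set here are deliberately simplistic placeholders (MITS computes tunneling rates from semi-classical theory and the full device electrostatics); the sketch shows only the event-selection and waiting-time mechanics, including how an empty event list corresponds to Coulomb blockade.

```python
import numpy as np

rng = np.random.default_rng(5)

def tunneling_rates(charges):
    # Placeholder rate model: suppress events that pile charge onto one island.
    return np.array([1.0 / (1 + abs(q)) for q in charges])

def apply_event(charges, event):
    charges = charges.copy()
    charges[event] += 1            # electron tunnels onto island `event`...
    if event > 0:
        charges[event - 1] -= 1    # ...from the previous island in the chain
    return charges

def simulate(n_steps=1000):
    t = 0.0
    charges = np.zeros(3, dtype=int)           # hypothetical 3-island chain
    for _ in range(n_steps):
        rates = tunneling_rates(charges)       # one rate per candidate event
        total = rates.sum()
        if total == 0:
            break                              # Coulomb blockade: no allowed event
        t += rng.exponential(1.0 / total)      # waiting time until the next event
        event = rng.choice(len(rates), p=rates / total)
        charges = apply_event(charges, event)
    return t, charges

t, q = simulate()
print(f"simulated time: {t:.3g}, final island charges: {q}")
```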

Relevance: 100.00%

Abstract:

The aim of our study was to develop a modeling framework suitable for quantifying the incidence, absolute number, and economic impact of osteoporosis-attributable hip, vertebral, and distal forearm fractures, with a particular focus on change over time, and with application to the situation in Switzerland from 2000 to 2020. A Markov process model was developed and analyzed by Monte Carlo simulation. A demographic scenario provided by the Swiss Federal Statistical Office and various Swiss and international data sources were used as model inputs. Demographic and epidemiologic input parameters were reproduced correctly, confirming the internal validity of the model. The proportion of the Swiss population aged 50 years or over will rise from 33.3% in 2000 to 41.3% in 2020. At the total population level, osteoporosis-attributable incidence will rise from 1.16 to 1.54 per 1,000 person-years for hip fracture, from 3.28 to 4.18 per 1,000 person-years for radiographic vertebral fracture, and from 0.59 to 0.70 per 1,000 person-years for distal forearm fracture. Osteoporosis-attributable hip fracture numbers will rise from 8,375 to 11,353, vertebral fracture numbers from 23,584 to 30,883, and distal forearm fracture numbers from 4,209 to 5,186. Population-level osteoporosis-related direct medical inpatient costs per year will rise from CHF 713.4 million to CHF 946.2 million; these figures correspond to 1.6% and 2.2% of Swiss health care expenditure in 2000. The modeling framework described can be applied to a wide variety of settings and can be used to assess the impact of new prevention, diagnostic, and treatment strategies. In Switzerland, the incidences of osteoporotic hip, vertebral, and distal forearm fracture will rise by 33%, 27%, and 19%, respectively, between 2000 and 2020 if current prevention and treatment patterns are maintained. The corresponding absolute fracture numbers will rise by 36%, 31%, and 23%. Related direct medical inpatient costs are predicted to increase by 33%; however, this estimate is subject to uncertainty due to the limited availability of input data.
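
A toy Monte Carlo version of one slice of such a model makes the mechanism concrete: holding age-specific hazards fixed while the population ages raises population-level incidence. The age structure and hazard function below are invented placeholders, not the Swiss model inputs.

```python
import numpy as np

rng = np.random.default_rng(6)

def project(n_people, age_mean, base_hazard):
    # Sample an age structure and apply an age-dependent one-year fracture hazard.
    ages = rng.normal(age_mean, 10, n_people).clip(50, 100)
    h = base_hazard * np.exp(0.08 * (ages - 65))        # hypothetical exponential rise
    return rng.random(n_people) < 1 - np.exp(-h)        # fracture within one year

base = project(100_000, 67, 0.001).mean()               # stand-in for the 2000 population
aged = project(100_000, 71, 0.001).mean()               # older stand-in for 2020
print(f"incidence per 1,000 person-years: {1000 * base:.2f} -> {1000 * aged:.2f}")
```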

Relevance: 100.00%

Abstract:

Transparent and translucent objects involve both light reflection and transmission at surfaces. This paper presents a physically based transmission model for rough surfaces. The surface is assumed to be locally smooth, and statistical techniques are applied to calculate light transmission through a local illumination area. We have obtained an analytical expression for single scattering. The analytical model has been compared to our Monte Carlo simulations as well as to previous simulations, and good agreement has been achieved. The presented model has potential applications in the realistic rendering of transparent and translucent objects.
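
A minimal Monte Carlo sketch of single-scattering transmission through a locally smooth rough surface, assuming Gaussian microfacet slopes in one dimension and unpolarized Fresnel transmittance; the paper's actual surface statistics and geometry terms are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(7)

n1, n2, sigma = 1.0, 1.5, 0.2            # refractive indices and RMS surface slope
n_rays = 200_000
theta_i = np.deg2rad(30.0)               # incident angle from the mean surface normal

# Perturb the local normal by Gaussian slopes (1D sketch of a rough surface).
tilt = np.arctan(rng.normal(0.0, sigma, n_rays))
theta_local = np.abs(theta_i - tilt)     # incidence angle onto each microfacet

sin_t = (n1 / n2) * np.sin(theta_local)  # Snell's law for the refracted angle
valid = sin_t < 1.0                      # guard (always true here since n1 < n2)
cos_i = np.cos(theta_local[valid])
cos_t = np.sqrt(1 - sin_t[valid] ** 2)

# Fresnel transmittance for unpolarized light (average of s and p polarizations).
rs = ((n1 * cos_i - n2 * cos_t) / (n1 * cos_i + n2 * cos_t)) ** 2
rp = ((n1 * cos_t - n2 * cos_i) / (n1 * cos_t + n2 * cos_i)) ** 2
T = 1 - 0.5 * (rs + rp)

print(f"mean single-scattering transmittance: {T.mean():.3f}")
```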

Relevance: 100.00%

Abstract:

Background: Recently, Cipriani and colleagues examined the relative efficacy of 12 new-generation antidepressants for major depression using network meta-analytic methods. They found that some of these medications outperformed others in patient response to treatment. However, several methodological criticisms have been raised about network meta-analysis in general, and about Cipriani's analysis in particular, raising the concern that the stated superiority of some antidepressants relative to others may be unwarranted. Materials and Methods: A Monte Carlo simulation was conducted that involved replicating Cipriani's network meta-analysis under the null hypothesis (i.e., no true differences between antidepressants). The simulation strategy was: (1) generate 1000 simulated datasets under the null hypothesis (i.e., under the assumption that there are no differences among the 12 antidepressants), (2) network meta-analyze each of the 1000 datasets, and (3) count the total number of false positive results across the network meta-analyses. Findings: More than 7 times out of 10, the network meta-analysis produced one or more comparisons indicating the superiority of at least one antidepressant when no such true differences existed. Interpretation: Based on our simulation study, under conditions identical to those of the 117 RCTs with 236 treatment arms contained in Cipriani et al.'s meta-analysis, one or more false claims about the relative efficacy of antidepressants will be made over 70% of the time. As others have also shown, there is little evidence in these trials that any antidepressant is more effective than another. The tendency of network meta-analyses to generate false positive results should be considered when conducting multiple-comparison analyses.
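
A simplified version of this null simulation is sketched below. It omits the network geometry and evidence pooling of a real network meta-analysis and just runs all pairwise comparisons of identical treatments, which is enough to show how multiplicity alone can produce at least one false positive in most simulation runs; all sample sizes and rates are placeholders.

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(8)

# 12 identical treatments, all pairwise two-arm comparisons, many runs.
n_treatments, n_per_arm, n_sims, runs_with_fp = 12, 100, 1000, 0
pairs = list(combinations(range(n_treatments), 2))

for _ in range(n_sims):
    for _ in pairs:
        # Both arms share the same true response rate (the null hypothesis).
        a = rng.binomial(n_per_arm, 0.5)
        b = rng.binomial(n_per_arm, 0.5)
        # Two-proportion z-test with a pooled standard error.
        p = (a + b) / (2 * n_per_arm)
        se = np.sqrt(2 * p * (1 - p) / n_per_arm)
        if se > 0 and abs(a - b) / n_per_arm > 1.96 * se:
            runs_with_fp += 1
            break                      # one false positive is enough for this run

print(f"proportion of runs with >=1 false positive: {runs_with_fp / n_sims:.2f}")
```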

Relevance: 100.00%

Abstract:

Oxygen-isotope variations were analyzed on bulk samples of shallow-water lake marl from Gerzensee, Switzerland, in order to evaluate major and minor climatic oscillations during the late-glacial. To highlight the overall signature of the Gerzensee δ18O record, the δ18O records of four parallel sediment cores were first correlated by synchronizing major isotope shifts and pollen abundances; the records were then stacked, with weights reflecting their differing sampling resolutions. To develop a precise chronology, the δ18O stack was correlated with the NGRIP δ18O record using a Monte Carlo simulation, relying on the assumption that the shifts in δ18O were climate-driven and synchronous in both archives. The established chronology on the GICC05 time scale is the basis for (1) comparing the δ18O changes recorded at Gerzensee with observed climatic and environmental fluctuations across the North Atlantic region, and (2) comparing sedimentological and biological changes during the rapid warming with smaller climatic variations during the Bølling/Allerød period. The δ18O record of Gerzensee is characterized by two major isotope shifts at the onset and at the termination of the Bølling/Allerød warm period, as well as four intervening negative excursions, labeled GI-1e2, d, c2, and b, whose amplitudes are one third to one fourth of the major shifts at the beginning and end of the Bølling/Allerød. Despite some inconsistency in terminology, these oscillations can be observed in various climatic proxies over wide areas of the North Atlantic region, especially as reconstructed temperature declines, and they appear to be caused by hemispheric climatic variations.
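
The Monte Carlo correlation step can be sketched as randomized wiggle-matching: perturb the age model of the lake record many times and retain the chronologies that maximize correlation with the reference record. Everything below (both series, the tie points, the jitter scale) is a synthetic placeholder standing in for the Gerzensee and NGRIP data.

```python
import numpy as np

rng = np.random.default_rng(9)

# Synthetic reference series on a GICC05-like age scale.
t_ref = np.linspace(0, 3000, 600)                     # years
ref = np.sin(t_ref / 300) + 0.1 * rng.normal(size=t_ref.size)

# Synthetic lake record on a depth scale with an unknown age model.
depth = np.linspace(0, 3, 200)                        # metres of sediment
proxy = np.sin((depth * 1000 + 150) / 300)            # same signal, shifted in age

best_corr, best_ages = -np.inf, None
for _ in range(5000):
    # Random monotone age model: jittered tie points, linear in between.
    ties_d = np.array([0.0, 1.5, 3.0])
    ties_t = np.sort(np.array([0.0, 1500.0, 3000.0]) + rng.normal(0, 100, 3))
    ages = np.interp(depth, ties_d, ties_t)
    c = np.corrcoef(np.interp(t_ref, ages, proxy), ref)[0, 1]
    if c > best_corr:
        best_corr, best_ages = c, ages

print(f"best correlation with the reference record: {best_corr:.3f}")
```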

Relevance: 100.00%

Abstract:

The uncertainty on the calorimeter energy response to jets of particles is derived for the ATLAS experiment at the Large Hadron Collider (LHC). First, the calorimeter response to single isolated charged hadrons is measured and compared to the Monte Carlo simulation using proton-proton collisions at centre-of-mass energies of √s = 900 GeV and 7 TeV collected during 2009 and 2010. Then, using the decays of K_S and Λ particles, the calorimeter response to specific types of particles (positively and negatively charged pions, protons, and anti-protons) is measured and compared to the Monte Carlo predictions. Finally, the jet energy scale uncertainty is determined by propagating the response uncertainty for single charged and neutral particles to jets. The response uncertainty is 2-5% for central isolated hadrons and 1-3% for the final calorimeter jet energy scale.
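
A toy sketch of the propagation step, assuming jets are simple bundles of particles and each particle carries an energy-dependent response uncertainty that is fully correlated within a jet; the fragmentation model and uncertainty values are rough placeholders, not ATLAS numbers.

```python
import numpy as np

rng = np.random.default_rng(10)

# Build jets as bundles of particles, coherently shift every particle's
# response within its uncertainty, and record the induced jet-energy shift.
n_jets = 20_000
n_particles = rng.poisson(15, n_jets).clip(min=1)

shifts = np.empty(n_jets)
for j in range(n_jets):
    e = rng.exponential(5.0, n_particles[j])    # particle energies in GeV
    unc = np.where(e < 20, 0.04, 0.02)          # per-particle response uncertainty
    z = rng.normal()                            # one nuisance per jet: fully correlated
    shifts[j] = (e * (1.0 + z * unc)).sum() / e.sum() - 1.0

print(f"induced jet energy scale uncertainty: {shifts.std():.3%}")
```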

Relevance: 100.00%

Abstract:

The inclusive jet cross-section has been measured in proton-proton collisions at √s = 2.76 TeV in a dataset corresponding to an integrated luminosity of 0.20 pb⁻¹ collected with the ATLAS detector at the Large Hadron Collider in 2011. Jets are identified using the anti-kt algorithm with two radius parameters, 0.4 and 0.6. The inclusive jet double-differential cross-section is presented as a function of the jet transverse momentum pT and jet rapidity y, covering the range 20 ≤ pT < 430 GeV and |y| < 4.4. The ratio of the cross-section to the inclusive jet cross-section measurement at √s = 7 TeV, published by the ATLAS Collaboration, is calculated as a function of both transverse momentum and the dimensionless quantity x_T = 2pT/√s, in bins of jet rapidity. The systematic uncertainties on the ratios are significantly reduced due to the cancellation of correlated uncertainties in the two measurements. Results are compared to the predictions of next-to-leading-order perturbative QCD calculations corrected for non-perturbative effects, and of next-to-leading-order Monte Carlo simulation. Furthermore, the ATLAS jet cross-section measurements at √s = 2.76 TeV and √s = 7 TeV are analysed within a framework of next-to-leading-order perturbative QCD calculations to determine parton distribution functions of the proton, taking into account the correlations between the measurements.

Relevance: 100.00%

Abstract:

The measurement of the jet energy resolution is presented using data recorded with the ATLAS detector in proton-proton collisions at √s = 7 TeV. The sample corresponds to an integrated luminosity of 35 pb⁻¹. Jets are reconstructed from energy deposits measured by the calorimeters and calibrated using different jet calibration schemes. The jet energy resolution is measured with two different in situ methods, which are found to agree within uncertainties. The total uncertainties on these measurements decrease from 20% to 10% for jets within |y| < 2.8 as the transverse momentum increases from 30 GeV to 500 GeV. Overall, the Monte Carlo simulation of the jet energy resolution agrees with the data within 10%.