971 results for Sequential Monte Carlo methods


Relevance: 100.00%

Abstract:

In recent years it has become increasingly important to handle credit risk, the risk associated with the possibility of a counterparty's bankruptcy. More precisely, if a derivative provides for a payment at a certain time T but the counterparty defaults before that time, the payment cannot be effectively performed at maturity, so the owner of the contract loses all or part of it. The payoff of the derivative, and consequently its price, therefore depends both on the underlying of the basic derivative and on the default risk of the counterparty. To value and hedge credit risk in a consistent way, one needs a quantitative model. We have studied analytical approximation formulas and numerical methods such as the Monte Carlo method in order to calculate the price of a bond. We illustrate how to obtain fast and accurate pricing approximations by expanding the drift and diffusion as a Taylor series, and we compare the second- and third-order approximations of the bond and call prices with an accurate Monte Carlo simulation. We have analysed the JDCEV model with constant or stochastic interest rate, and we provide numerical examples that illustrate the effectiveness and versatility of our methods, implemented in Wolfram Mathematica and Matlab.
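As a rough illustration of the Monte Carlo side of such a computation (not the thesis's JDCEV machinery), the following Python sketch prices a defaultable zero-coupon bond under an assumed CIR-type default intensity; all parameter values are invented for the example.

```python
import numpy as np

# Hypothetical illustration: Monte Carlo price of a defaultable zero-coupon
# bond under a CIR-type default intensity lambda_t (invented parameters).
rng = np.random.default_rng(0)

r, T = 0.02, 1.0                        # constant risk-free rate, maturity
kappa, theta, sigma = 0.5, 0.03, 0.1    # assumed intensity dynamics
lam0, n_paths, n_steps = 0.03, 100_000, 200
dt = T / n_steps

lam = np.full(n_paths, lam0)
integral = np.zeros(n_paths)
for _ in range(n_steps):
    integral += lam * dt
    dw = rng.normal(0.0, np.sqrt(dt), n_paths)
    lam = np.abs(lam + kappa * (theta - lam) * dt + sigma * np.sqrt(lam) * dw)

# Survival-based price: E[exp(-(r*T + integral of lambda_t over [0, T]))]
discounted = np.exp(-r * T) * np.exp(-integral)
price = discounted.mean()
stderr = discounted.std(ddof=1) / np.sqrt(n_paths)
print(f"defaultable bond price: {price:.5f} +/- {stderr:.5f}")
```

A Taylor-expansion approximation of the drift and diffusion, as described above, would then be benchmarked against such a simulated price.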

Relevance: 100.00%

Abstract:

This dissertation serves to deepen the understanding of exciton transport in organic semiconductors such as those used in light-emitting diodes or solar cells. Computer simulations were used to describe the transport of excitons in amorphous and crystalline organic materials, starting at the microscopic level, where quantum-mechanical processes take place, up to the macroscopic level, where physically measurable quantities such as the diffusion coefficient become extractable. The modelling is based on incoherent electronic energy transfer. Within this framework, exciton transport is treated as a hopping process, which was simulated with kinetic Monte Carlo methods. The required quantum-mechanical transfer rates between the molecules were calculated from the molecular structure of solid phases. The transfer rates can be separated into an electronic coupling element and the Franck-Condon-weighted density of states. The focus of this work was, on the one hand, to evaluate the methods that can be used to calculate the transfer rates and, on the other hand, to simulate the hopping transport and to provide an atomistic interpretation of the macroscopic transport properties of the excitons.

Of the three organic systems investigated, aluminium tris(8-hydroxyquinoline) served for a comprehensive test of the procedure. It was shown that strongly simplified models such as Marcus theory often reproduce the transfer rates, and thus the transport behaviour of the excitons, qualitatively correctly. The usually much larger diffusion constants of singlet compared with triplet excitons originate in the longer range of the singlet excitons' coupling elements, which creates a more strongly branched network. The time-dependent diffusion coefficient shows subdiffusive behaviour at short observation times. For singlet excitons this behaviour usually crosses over into a normal diffusion regime within the exciton lifetime, whereas triplet excitons reach the normal regime considerably more slowly. The more strongly anomalous behaviour of the triplet excitons is attributed to a non-uniform distribution of the transfer rates. The anomalous behaviour of the excitons must be taken into account when comparing with experimentally determined diffusion constants. Overall, simulated and experimental diffusion constants agreed well for the test system. The modelling procedure should therefore be suitable for characterising exciton transport in new organic semiconductor materials.
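The hopping picture described above lends itself to a compact sketch. The following Python code runs a kinetic Monte Carlo walk over precomputed Marcus rates on a disordered set of sites; the morphology, disorder strength, and coupling decay are invented placeholders, not the dissertation's computed inputs.

```python
import numpy as np

# Schematic kinetic Monte Carlo for exciton hopping with Marcus rates
# (all parameters are invented for illustration).
rng = np.random.default_rng(1)
kT, lam = 0.025, 0.3                      # thermal and reorganisation energy (eV)
hbar = 6.582e-16                          # eV*s

n_sites = 400
pos = rng.uniform(0, 10, (n_sites, 3))    # nm, assumed amorphous morphology
E = rng.normal(0.0, 0.05, n_sites)        # eV, assumed energetic disorder

# Pairwise Marcus rates:
# k = 2*pi/hbar * J^2 / sqrt(4*pi*lam*kT) * exp(-(dE + lam)^2 / (4*lam*kT))
dist = np.linalg.norm(pos[:, None, :] - pos[None, :, :], axis=-1)
J = 0.01 * np.exp(-dist / 0.5)            # eV, assumed exponential coupling decay
dE = E[None, :] - E[:, None]
rates = (2 * np.pi / hbar) * J**2 / np.sqrt(4 * np.pi * lam * kT) \
        * np.exp(-(dE + lam)**2 / (4 * lam * kT))
np.fill_diagonal(rates, 0.0)

site, t = 0, 0.0
start = pos[site].copy()
for _ in range(5000):
    total = rates[site].sum()
    t += rng.exponential(1.0 / total)                  # advance the KMC clock
    site = rng.choice(n_sites, p=rates[site] / total)  # choose the next site

msd = np.sum((pos[site] - start) ** 2)
print("rough diffusion constant D = MSD/(6t):", msd / (6 * t), "nm^2/s")
```

A production calculation would average many such trajectories and track the time-dependent diffusion coefficient to resolve the subdiffusive regime discussed above.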

Relevance: 100.00%

Abstract:

Coarse graining is a popular technique used in physics to speed up the computer simulation of molecular fluids. An essential part of this technique is a method that solves the inverse problem of determining the interaction potential, or its parameters, from given structural data. Because of discrepancies between model and reality, the potential is not unique, so the stability of such a method and its convergence to a meaningful solution are open issues.

In this work, we investigate empirically whether coarse graining can be improved by applying the theory of inverse problems from applied mathematics. In particular, we use singular value analysis to reveal the weak interaction parameters, which have a negligible influence on the structure of the fluid and which cause non-uniqueness of the solution. Further, we apply a regularizing Levenberg-Marquardt method, which is stable against the mentioned discrepancies, and compare it to the existing physical methods, the Iterative Boltzmann Inversion and the Inverse Monte Carlo method, which are fast and well adapted to the problem but sometimes have convergence problems.

From an analysis of the Iterative Boltzmann Inversion, we derive a meaningful approximation of the structure and use it to construct a modification of the Levenberg-Marquardt method. We employ the latter to reconstruct the interaction parameters from experimental data for liquid argon and nitrogen, and we show that the modified method is stable, convergent, and fast. Further, the singular value analysis of the structure and its approximation makes it possible to determine the crucial interaction parameters, that is, to simplify the modelling of interactions. Our results therefore build a rigorous bridge between the inverse problem from physics and powerful solution tools from mathematics.
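A minimal numerical sketch of the two ingredients, singular value analysis and a Levenberg-Marquardt fit, is given below for a toy forward map with one deliberately weak parameter. The actual problem maps interaction parameters to fluid structure; this stand-in only illustrates the mechanics.

```python
import numpy as np
from scipy.optimize import least_squares

# Toy stand-in for the structure-from-potential forward map.
rng = np.random.default_rng(2)

def forward(p, x):
    # p[2] enters only weakly, creating a nearly singular direction
    return np.exp(-p[0] * x) * np.sin(p[1] * x) + 1e-3 * p[2] * x

x = np.linspace(0.1, 5, 80)
p_true = np.array([0.8, 2.0, 1.0])
data = forward(p_true, x) + rng.normal(0, 0.01, x.size)

def residual(p):
    return forward(p, x) - data

# Singular value analysis of the Jacobian reveals weak parameter
# combinations (small singular values).
eps = 1e-6
Jac = np.column_stack([
    (residual(p_true + eps * np.eye(3)[k]) - residual(p_true)) / eps
    for k in range(3)
])
print("singular values:", np.linalg.svd(Jac, compute_uv=False))

# Standard Levenberg-Marquardt (scipy's 'lm') copes with the weak direction.
fit = least_squares(residual, x0=[0.5, 1.5, 0.0], method="lm")
print("recovered parameters:", fit.x)
```

The tiny third singular value flags the parameter combination that the data barely constrain, which is exactly the situation the regularized method is designed to handle.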

Relevance: 100.00%

Abstract:

This work aims to evaluate the reliability of levee systems by calculating the probability of failure of given levee stretches under different loads, using probabilistic methods based on fragility curves obtained through the Monte Carlo method. Overtopping and piping are considered as failure mechanisms, since these are the most frequent, and the major levee system of the Po River is analysed, with a primary focus on the section between Piacenza and Cremona in the lower-middle part of the Padana Plain. The novelty of this approach is that it checks the reliability of individual embankment stretches, not just a single cross-section, while taking into account the variability of the levee geometry from one stretch to another. For each levee stretch analysed, the work also considers a probability distribution of the load variables involved in the definition of the fragility curves, which is influenced by differences in the topography and morphology of the riverbed along the analysed reach of the levee system. A classification is proposed, for both failure mechanisms, that indicates the reliability of the levee system based on the information obtained from the fragility-curve analysis. To this end, a hydraulic model was developed in which a 500-year flood is simulated to determine the residual hazard of failure for each levee stretch at the corresponding water depth, and the results are compared with the proposed classifications. An additional aim of this work is to act as an interface between applied geology and environmental hydraulic engineering, two professions whose close collaboration is needed to improve the estimation of hydraulic risk.
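A minimal sketch of how a fragility curve for the overtopping mechanism can be sampled by Monte Carlo is shown below; the crest-elevation distribution and water levels are invented, not the Po River data.

```python
import numpy as np

# Minimal overtopping fragility curve: failure occurs when the water level
# exceeds the uncertain levee crest elevation (illustrative values).
rng = np.random.default_rng(3)
n_samples = 100_000

crest_mean, crest_sd = 8.0, 0.3      # m, assumed crest elevation uncertainty
levels = np.linspace(6.5, 9.5, 31)   # m, load variable (water level)

crest = rng.normal(crest_mean, crest_sd, n_samples)
fragility = [(h > crest).mean() for h in levels]

for h, p in zip(levels[::6], fragility[::6]):
    print(f"h = {h:.2f} m -> P(failure) = {p:.3f}")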

Relevance: 100.00%

Abstract:

In an accelerated exclusion process (AEP), each particle can "hop" to its adjacent site if that site is empty, and can also "kick" the frontmost particle when it joins a cluster of size ℓ ≤ ℓ_max. With various choices of the interaction range ℓ_max, we find that the steady state of the AEP can be either a homogeneous phase with augmented currents (AC) or a segregated phase with holes moving at unit velocity (UV). Here we present a detailed study of the emergence of these novel phases from two perspectives: the AEP and a mass transport process (MTP). In the latter picture, the system in the UV phase consists of a condensate coexisting with a fluid, and the transition from AC to UV can be regarded as condensation. Using Monte Carlo simulations, exact results for special cases, and analytic methods in a mean-field approach (within the MTP), we focus on steady-state currents and cluster sizes. Excellent agreement between data and theory is found, providing an insightful picture for understanding this model system.
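The dynamics can be made concrete with a short Monte Carlo sketch on a ring; the kick rule implemented here is one plausible reading of the definition above and may differ from the paper's precise prescription.

```python
import numpy as np

# Schematic Monte Carlo simulation of the AEP on a ring (illustrative).
rng = np.random.default_rng(4)
L, N, l_max, sweeps = 200, 100, 3, 2000

occ = np.zeros(L, dtype=bool)
occ[rng.choice(L, N, replace=False)] = True
hops = 0

for _ in range(sweeps * L):
    i = rng.integers(L)
    j = (i + 1) % L
    if not (occ[i] and not occ[j]):
        continue
    occ[i], occ[j] = False, True          # ordinary exclusion hop
    hops += 1
    if occ[(j + 1) % L]:                  # the particle joined a cluster
        k, size = j, 0
        while occ[k]:                     # scan to the front of the cluster
            size += 1
            k = (k + 1) % L
        if size <= l_max:                 # 'kick': frontmost particle advances
            occ[(k - 1) % L], occ[k] = False, True
            hops += 1

print("current (hops per site per sweep):", hops / (sweeps * L))
```

Scanning the current as a function of density and ℓ_max is the kind of measurement used to distinguish the AC and UV phases.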

Relevance: 100.00%

Abstract:

Objectives: To examine the extent of multiplicity of data in trial reports and to assess the impact of multiplicity on meta-analysis results.

Design: Empirical study on a cohort of Cochrane systematic reviews.

Data sources: All Cochrane systematic reviews published from issue 3 in 2006 to issue 2 in 2007 that presented a result as a standardised mean difference (SMD). We retrieved the trial reports contributing to the first SMD result in each review, downloaded the review protocols, and used these SMDs to identify a specific index outcome for each meta-analysis from its protocol.

Review methods: Reviews were eligible if SMD results were based on two to ten randomised trials and if the protocols described the outcome. We excluded reviews that only presented results of subgroup analyses. Based on the review protocols and index outcomes, two observers independently extracted the data necessary to calculate SMDs from the original trial reports for any intervention group, time point, or outcome measure compatible with the protocol. From the extracted data, we used Monte Carlo simulations to calculate all possible SMDs for every meta-analysis.

Results: We identified 19 eligible meta-analyses (including 83 trials). Published review protocols often lacked information about which data to choose. Twenty-four (29%) trials reported data for multiple intervention groups, 30 (36%) reported data for multiple time points, and 29 (35%) reported the index outcome measured on multiple scales. In 18 meta-analyses, we found multiplicity of data in at least one trial report; the median difference between the smallest and largest SMD results within a meta-analysis was 0.40 standard deviation units (range 0.04 to 0.91).

Conclusions: Multiplicity of data can affect the findings of systematic reviews and meta-analyses. To reduce the risk of bias, reviews and meta-analyses should comply with prespecified protocols that clearly identify time points, intervention groups, and scales of interest.
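The Monte Carlo step can be pictured as follows: in each iteration one eligible data set is drawn per trial, the trials are pooled, and repeating the draw maps out the range of attainable results. The sketch below uses invented trial data.

```python
import numpy as np

rng = np.random.default_rng(5)

# Hypothetical trials: each holds the eligible (SMD, variance) pairs that a
# data extractor could legitimately choose under the protocol.
trials = [
    [(0.30, 0.04), (0.55, 0.05)],
    [(0.10, 0.03), (0.25, 0.03), (0.40, 0.04)],
    [(0.50, 0.06)],
]

pooled = []
for _ in range(10_000):
    picks = [t[rng.integers(len(t))] for t in trials]
    w = np.array([1.0 / v for _, v in picks])   # inverse-variance weights
    s = np.array([smd for smd, _ in picks])
    pooled.append((w * s).sum() / w.sum())      # fixed-effect pooled SMD

pooled = np.array(pooled)
print(f"attainable pooled SMDs: {pooled.min():.2f} to {pooled.max():.2f}")
```

The spread of this distribution is the quantity summarised above as the difference between the smallest and largest SMD within a meta-analysis.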

Relevance: 100.00%

Abstract:

A main field in biomedical optics research is diffuse optical tomography, where intensity variations of light transmitted through tissue are detected. Mathematical models and reconstruction algorithms based on finite element methods and Monte Carlo simulations describe the light transport inside the tissue and determine differences in absorption and scattering coefficients. Precise knowledge of the sample's surface shape and orientation is required to provide boundary conditions for these techniques. We propose an integrated method based on structured-light three-dimensional (3-D) scanning that provides detailed surface information of the object, which is usable for volume mesh creation and allows normalization of the intensity dispersion between surface and camera. The experimental setup is complemented by polarization difference imaging to avoid artifacts caused by inter-reflections and multiple scattering in semitransparent tissue.
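As a toy version of the Monte Carlo light-transport models mentioned above, the sketch below propagates photons through a homogeneous slab with isotropic scattering; the optical coefficients and the one-dimensional treatment are simplifying assumptions.

```python
import numpy as np

# Minimal Monte Carlo photon walk through a homogeneous slab
# (a toy version of diffuse light-transport modelling).
rng = np.random.default_rng(6)
mu_s, mu_a = 1.0, 0.01           # mm^-1, assumed scattering/absorption
thickness, n_photons = 10.0, 50_000
transmitted = 0.0

for _ in range(n_photons):
    z, w, cos_theta = 0.0, 1.0, 1.0      # depth, weight, direction cosine
    while 0.0 <= z <= thickness and w > 1e-4:
        path = rng.exponential(1.0 / mu_s)   # free path to next scattering
        z += cos_theta * path
        w *= np.exp(-mu_a * path)            # absorption along the path
        cos_theta = rng.uniform(-1.0, 1.0)   # isotropic rescattering
    if z > thickness:
        transmitted += w

print("transmittance estimate:", transmitted / n_photons)
```

Real reconstruction codes track full 3-D directions and anisotropic phase functions, which is where the surface geometry from the 3-D scan enters as a boundary condition.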

Relevance: 100.00%

Abstract:

Tissue phantoms play a central role in validating biomedical imaging techniques. Here we employ a series of methods that aim to fully determine the optical properties, i.e., the refractive index n, absorption coefficient μ_a, transport mean free path ℓ*, and scattering coefficient μ_s, of a TiO2-in-gelatin phantom intended for use in optoacoustic imaging. For the determination of the key parameters μ_a and ℓ*, we employ a variant of time-of-flight measurements in which fiber optodes are immersed into the phantom to minimize the influence of boundaries. The robustness of the method was verified with Monte Carlo simulations, where the experimentally obtained values served as input parameters for the simulations; the excellent agreement between simulations and experiments confirmed the reliability of the results. The parameters determined at 780 nm are n = 1.359 (±0.002), μ_s′ = 1/ℓ* = 0.22 (±0.02) mm⁻¹, μ_a = 0.0053 (+0.0006/−0.0003) mm⁻¹, and μ_s = 2.86 (±0.04) mm⁻¹. The asymmetry parameter g obtained from μ_s′ and μ_s is 0.93, which indicates that the scattering entities are not bare TiO2 particles but large sparse clusters. The interaction between the scattering particles and the gelatin matrix should therefore be taken into account when developing such phantoms.
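For reference, the standard similarity relation linking these quantities (a textbook result, not specific to this paper) reproduces the quoted asymmetry parameter:

```latex
\mu_s' = \mu_s\,(1 - g)
\qquad\Longrightarrow\qquad
g = 1 - \frac{\mu_s'}{\mu_s} = 1 - \frac{0.22\ \mathrm{mm}^{-1}}{2.86\ \mathrm{mm}^{-1}} \approx 0.92
```

which matches the reported value of 0.93 to within the rounding of the inputs.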

Relevance: 100.00%

Abstract:

Smoothing splines are a popular approach for non-parametric regression problems. We use periodic smoothing splines to fit a periodic signal plus noise model to data for which we assume there are underlying circadian patterns. In the smoothing spline methodology, choosing an appropriate smoothness parameter is an important step in practice. In this paper, we draw a connection between smoothing splines and REACT estimators that motivates new criteria for choosing the smoothness parameter. The new criteria are compared to three existing methods, namely cross-validation, generalized cross-validation, and the generalized maximum likelihood criterion, by a Monte Carlo simulation and by an application to the study of circadian patterns. For most of the situations presented in the simulations, including the practical example, the new criteria outperform the three existing criteria.
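As a point of comparison, the classical cross-validation baseline can be sketched in a few lines; the example below selects a smoothing level for a simulated circadian-like signal using scipy's UnivariateSpline, which is a generic smoothing spline rather than the periodic splines used in the paper.

```python
import numpy as np
from scipy.interpolate import UnivariateSpline

# Smoothing-parameter selection by simple K-fold cross-validation
# (the classical baseline the new REACT-based criteria are compared against).
rng = np.random.default_rng(7)
t = np.linspace(0, 2, 200)                               # days
y = np.sin(2 * np.pi * t) + rng.normal(0, 0.3, t.size)   # circadian-like data

def cv_score(s, k=5):
    folds = np.arange(t.size) % k
    err = 0.0
    for f in range(k):
        train, test = folds != f, folds == f
        spl = UnivariateSpline(t[train], y[train], s=s * train.sum())
        err += np.mean((spl(t[test]) - y[test]) ** 2)
    return err / k

candidates = [0.01, 0.05, 0.1, 0.2, 0.5, 1.0]
best = min(candidates, key=cv_score)
print("chosen smoothing level:", best)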

Relevance: 100.00%

Abstract:

Estimation of the number of mixture components (k) is an unsolved problem. Available methods for estimating k include bootstrapping the likelihood ratio test statistic and optimizing a variety of validity functionals such as AIC, BIC/MDL, and ICOMP. We investigate the minimization of the distance between the fitted mixture model and the true density as a method for estimating k. The distances considered are Kullback-Leibler (KL) and L_2. We estimate these distances using cross-validation. A reliable estimate of k is obtained by voting among B estimates of k corresponding to B cross-validation estimates of distance. This estimation method with the KL distance is very similar to the Monte Carlo cross-validated likelihood methods discussed by Smyth (2000). With a focus on univariate normal mixtures, we present simulation studies that compare the cross-validated distance method with AIC, BIC/MDL, and ICOMP. We also apply the cross-validation estimate of distance approach, along with the AIC, BIC/MDL, and ICOMP approaches, to data from an osteoporosis drug trial in order to find groups that respond differentially to treatment.
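The voting scheme with the KL-flavoured distance can be sketched as follows: held-out average log-likelihood is, up to an additive constant, a cross-validation estimate of the KL distance, so maximizing it over k and voting across B splits approximates the described method. Data and settings below are invented.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Vote over B cross-validation splits for the number of mixture components.
rng = np.random.default_rng(8)
data = np.concatenate([rng.normal(-2, 1, 150),
                       rng.normal(2, 0.5, 150)])[:, None]

def cv_pick_k(x, k_max=5, B=20):
    votes = []
    for b in range(B):
        idx = rng.permutation(len(x))
        train, test = x[idx[:len(x) // 2]], x[idx[len(x) // 2:]]
        # held-out log-likelihood ~ negative KL distance + constant
        scores = [GaussianMixture(k, random_state=b).fit(train).score(test)
                  for k in range(1, k_max + 1)]
        votes.append(1 + int(np.argmax(scores)))
    return np.bincount(votes).argmax()

print("voted k:", cv_pick_k(data))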

Relevance: 100.00%

Abstract:

A simulation model adopting a health system perspective showed population-based screening with DXA, followed by alendronate treatment of persons with osteoporosis, or with anamnestic fracture and osteopenia, to be cost-effective in Swiss postmenopausal women from age 70, but not in men.

INTRODUCTION: We assessed the cost-effectiveness of a population-based screen-and-treat strategy for osteoporosis (DXA followed by alendronate treatment if osteoporotic, or osteopenic in the presence of fracture), compared to no intervention, from the perspective of the Swiss health care system.

METHODS: A published Markov model assessed by first-order Monte Carlo simulation was refined to reflect the diagnostic process and treatment effects. Women and men entered the model at age 50. The main screening ages were 65, 75, and 85 years. The age at bone densitometry was flexible for persons fracturing before the main screening age. Realistic assumptions were made with respect to persistence with the intended 5 years of alendronate treatment. The main outcome was cost per quality-adjusted life year (QALY) gained.

RESULTS: In women, costs per QALY were Swiss francs (CHF) 71,000, CHF 35,000, and CHF 28,000 for the main screening ages of 65, 75, and 85 years. The threshold of CHF 50,000 per QALY was reached between main screening ages 65 and 75 years. Population-based screening was not cost-effective in men.

CONCLUSION: Population-based DXA screening, followed by alendronate treatment in the presence of osteoporosis, or of fracture and osteopenia, is a cost-effective option in Swiss postmenopausal women after age 70.
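A first-order Monte Carlo evaluation of a Markov model simply walks individual persons through the health states. The toy sketch below uses invented states, transition probabilities, and utilities, not the study's Swiss inputs.

```python
import numpy as np

# Toy first-order Monte Carlo walk through a Markov state-transition model
# (all states, probabilities, and utilities are invented).
rng = np.random.default_rng(9)
states = ["well", "fracture", "post-fracture", "dead"]
# annual transition probabilities; each row sums to 1
P = np.array([[0.95, 0.02, 0.00, 0.03],
              [0.00, 0.00, 0.90, 0.10],
              [0.00, 0.03, 0.92, 0.05],
              [0.00, 0.00, 0.00, 1.00]])
utility = {"well": 0.85, "fracture": 0.60, "post-fracture": 0.75, "dead": 0.0}

def simulate_person(years=35):
    s, qalys = 0, 0.0
    for _ in range(years):
        qalys += utility[states[s]]      # accrue quality-adjusted life years
        s = rng.choice(4, p=P[s])        # first-order (individual-level) step
    return qalys

qalys = [simulate_person() for _ in range(10_000)]
print("mean QALYs per person:", np.mean(qalys))
```

Running such a walk under the screening and no-intervention strategies, and attaching costs to states and events, yields the incremental cost per QALY reported above.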

Relevance: 100.00%

Abstract:

Traffic particle concentrations show considerable spatial variability within a metropolitan area. We consider latent variable semiparametric regression models for modeling the spatial and temporal variability of black carbon and elemental carbon concentrations in the greater Boston area. Measurements of these pollutants, which are markers of traffic particles, were obtained from several individual exposure studies conducted at specific household locations as well as 15 ambient monitoring sites in the city. The models allow for both flexible, nonlinear effects of covariates and for unexplained spatial and temporal variability in exposure. In addition, the different individual exposure studies recorded different surrogates of traffic particles, with some recording only outdoor concentrations of black or elemental carbon, some recording indoor concentrations of black carbon, and others recording both indoor and outdoor concentrations of black carbon. A joint model for outdoor and indoor exposure that specifies a spatially varying latent variable provides greater spatial coverage in the area of interest. We propose a penalised spline formulation of the model that relates to generalised kriging of the latent traffic pollution variable and leads to a natural Bayesian Markov chain Monte Carlo algorithm for model fitting. We propose methods that allow us to control the degrees of freedom of the smoother in a Bayesian framework. Finally, we present results from an analysis that applies the model to data from summer and winter separately.
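A minimal sketch of the Bayesian machinery is given below: a Gibbs sampler for a penalised spline with a truncated-line basis and conjugate variance updates. It is a toy stand-in for the paper's latent-variable model; the basis, priors, and data are invented.

```python
import numpy as np

# Minimal Gibbs sampler for a Bayesian penalised-spline regression.
rng = np.random.default_rng(10)
n, K = 120, 12
x = np.sort(rng.uniform(0, 1, n))
y = np.sin(2 * np.pi * x) + rng.normal(0, 0.2, n)

knots = np.linspace(0, 1, K)[1:-1]
B = np.column_stack([np.ones(n), x] + [np.maximum(x - k, 0) for k in knots])
D = np.diag([0.0, 0.0] + [1.0] * len(knots))   # penalise truncated-line terms

sigma2, tau2 = 1.0, 1.0
draws = []
for it in range(2000):
    # beta | sigma2, tau2  ~  N(m, V)  (conjugate Gaussian update)
    V = np.linalg.inv(B.T @ B / sigma2 + D / tau2 + 1e-8 * np.eye(B.shape[1]))
    m = V @ B.T @ y / sigma2
    beta = rng.multivariate_normal(m, V)
    resid = y - B @ beta
    # conjugate inverse-gamma updates under vague priors
    sigma2 = 1.0 / rng.gamma(n / 2, 2.0 / (resid @ resid))
    pen = beta @ D @ beta
    tau2 = 1.0 / rng.gamma(len(knots) / 2, 2.0 / (pen + 1e-8))
    if it > 500:                                # discard burn-in
        draws.append(beta)

beta_mean = np.mean(draws, axis=0)
print("posterior mean fit at x = 0.5:",
      B[np.argmin(np.abs(x - 0.5))] @ beta_mean)
```

The ratio of tau2 to sigma2 plays the role of the smoothing parameter, which is how the degrees of freedom of the smoother can be controlled within the Bayesian framework.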

Relevance: 100.00%

Abstract:

Recurrent event data are largely characterized by the rate function, but smoothing techniques for estimating the rate function have never been rigorously developed or studied in the statistical literature. This paper considers the moment and least squares methods for estimating the rate function from recurrent event data. Under an independent censoring assumption on the recurrent event process, we study the statistical properties of the proposed estimators and propose bootstrap procedures for bandwidth selection and for the approximation of confidence intervals in the estimation of the occurrence rate function. We find that the moment method, without resmoothing via a smaller bandwidth, produces a curve with nicks at the censoring times, whereas the least squares method has no such problem. Furthermore, the asymptotic variance of the least squares estimator is shown to be smaller under regularity conditions. However, in the implementation of the bootstrap procedures, the moment method is computationally more efficient than the least squares method because the former uses condensed bootstrap data. The performance of the proposed procedures is studied through Monte Carlo simulations and an epidemiological example on intravenous drug users.
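A naive version of the smoothing-plus-bootstrap idea can be sketched as follows; the kernel rate estimator and subject-level resampling below are generic placeholders rather than the paper's moment or least squares estimators.

```python
import numpy as np

# Kernel estimate of the occurrence rate from recurrent event times, with a
# naive subject-level bootstrap for pointwise confidence intervals.
rng = np.random.default_rng(11)

# Hypothetical subjects: event times within a common follow-up of 10 units.
subjects = [np.sort(rng.uniform(0, 10, rng.poisson(8))) for _ in range(40)]

def rate_estimate(subs, grid, h=0.8):
    # Gaussian-kernel smoothed event intensity, averaged over subjects
    est = np.zeros_like(grid)
    for events in subs:
        for t in events:
            est += np.exp(-0.5 * ((grid - t) / h) ** 2) / (h * np.sqrt(2 * np.pi))
    return est / len(subs)

grid = np.linspace(0.5, 9.5, 50)
boot = np.array([
    rate_estimate([subjects[i] for i in
                   rng.integers(0, len(subjects), len(subjects))], grid)
    for _ in range(200)
])
lo, hi = np.percentile(boot, [2.5, 97.5], axis=0)
print("rate at t = 5:", round(rate_estimate(subjects, grid)[25], 3),
      "95% CI:", (round(lo[25], 3), round(hi[25], 3)))
```

A second bootstrap layer over candidate bandwidths h would implement the bandwidth-selection step mentioned above.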

Relevance: 100.00%

Abstract:

Civil infrastructure provides essential services for the development of both society and the economy, and it is very important to manage such systems efficiently to ensure sound performance. However, extracting information from available data is challenging, which necessitates methodologies and frameworks to assist stakeholders in the decision-making process. This research proposes methodologies to evaluate system performance by maximizing the use of available information, in an effort to build and maintain sustainable systems. Under the guidance of the holistic problem formulation proposed by Mukherjee and Muga, this research specifically investigates problem-solving methods that measure and analyze metrics to support decision making.

Failures are inevitable in system management. A methodology is developed to describe the arrival pattern of failures in order to assist engineers in failure response and budget prioritization, especially when funding is limited. It reveals that blockage arrivals are not totally random, although smaller meaningful subsets show good random behavior. The failure rate over time is further analyzed by applying existing reliability models and non-parametric approaches, and a scheme is proposed to depict failure rates over the lifetime of a given facility system. Further analysis of sub-data sets is also performed, with a discussion of context reduction.

Infrastructure condition is another important indicator of system performance. The challenges in predicting facility condition are the estimation of transition probabilities and model sensitivity analysis. Methods are proposed to estimate transition probabilities by investigating the long-term behavior of the model and the relationship between transition rates and probabilities. To integrate heterogeneities, a sensitivity analysis is performed for the application of a non-homogeneous Markov chain model. Scenarios are investigated by assuming that transition probabilities follow a Weibull-regressed function and fall within an interval estimate. For each scenario, multiple cases are simulated using Monte Carlo simulation. Results show that variations in the outputs are sensitive to the probability regression, whereas for the interval estimate the outputs show variations similar to the inputs.

Life cycle cost analysis and life cycle assessment of a sewer system are performed comparing three pipe types: reinforced concrete pipe (RCP), non-reinforced concrete pipe (NRCP), and vitrified clay pipe (VCP). Life cycle cost analysis covers the material extraction, construction, and rehabilitation phases; in the rehabilitation phase, the Markov chain model is applied to support the rehabilitation strategy. In the life cycle assessment, the Economic Input-Output Life Cycle Assessment (EIO-LCA) tools are used to estimate environmental emissions for all three phases. Emissions are then compared quantitatively among alternatives to support decision making.
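The transition-probability sensitivity experiment can be pictured with a short sketch: the stay probability follows a Weibull-type function of time in state, the regression parameters are drawn from assumed input distributions, and Monte Carlo repetitions propagate the uncertainty. All numbers are invented.

```python
import numpy as np

# Monte Carlo sensitivity sketch for a non-homogeneous Markov deterioration
# model with a Weibull-regressed stay probability (invented numbers).
rng = np.random.default_rng(12)
n_states, horizon, n_runs = 5, 50, 1000

def stay_prob(age, shape, scale):
    # Weibull-type probability of remaining in the current condition state
    return np.exp(-((age / scale) ** shape))

final_states = []
for _ in range(n_runs):
    shape = rng.normal(1.5, 0.1)     # uncertain regression parameters
    scale = rng.normal(30.0, 2.0)
    state, age_in_state = 0, 0
    for _year in range(horizon):
        age_in_state += 1
        if state < n_states - 1 and \
                rng.random() > stay_prob(age_in_state, shape, scale):
            state, age_in_state = state + 1, 0   # deteriorate one state
        # (no repair action is modelled in this toy version)
    final_states.append(state)

print("mean condition state after 50 years:", np.mean(final_states))
```

Comparing the spread of the outputs with the spread of the sampled inputs is the kind of check that distinguishes the regression scenario from the interval-estimate scenario described above.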

Relevance: 100.00%

Abstract:

The conformational properties of the microtubule-stabilizing agent epothilone A (1a) and its 3-deoxy and 3-deoxy-2,3-didehydro derivatives 2 and 3 have been investigated in aqueous solution by a combination of NMR spectroscopic methods, Monte Carlo conformational searches, and NAMFIS calculations. The tubulin-bound conformation of epothilone A (1a), as previously proposed on the basis of solution NMR data, was found to represent a significant fraction of the ensemble of conformations present for the free ligands in aqueous solution.
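NAMFIS-style deconvolution amounts to finding non-negative conformer populations, summing to one, that best reproduce ensemble-averaged NMR observables. The toy sketch below fits populations to linearly averaged distances; real NAMFIS uses properly averaged NOE data, and all numbers here are invented.

```python
import numpy as np
from scipy.optimize import minimize

# Toy NAMFIS-style deconvolution: find conformer populations w (w >= 0,
# sum w = 1) whose weighted average best reproduces "observed" distances.
rng = np.random.default_rng(13)
n_conf, n_obs = 6, 10
conf_dist = rng.uniform(2.0, 5.0, (n_conf, n_obs))  # per-conformer distances (A)
w_true = rng.dirichlet(np.ones(n_conf))
observed = w_true @ conf_dist                       # idealised ensemble averages

def objective(w):
    return np.sum((w @ conf_dist - observed) ** 2)

res = minimize(objective, x0=np.full(n_conf, 1.0 / n_conf),
               bounds=[(0, 1)] * n_conf,
               constraints={"type": "eq", "fun": lambda w: w.sum() - 1.0})
print("fitted populations:", np.round(res.x, 3))
print("true populations:  ", np.round(w_true, 3))
```

In the study above, the candidate conformers come from the Monte Carlo conformational search, and the fitted populations are what identify the tubulin-bound conformation as a significant member of the solution ensemble.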