972 results for Sequential Monte Carlo


Relevance:

90.00%

Abstract:

The most widely used stochastic sequential simulation algorithm is sequential Gaussian simulation (ssG). In theory, stochastic methods reproduce the uncertainty space of the random variable Z(u) better the larger the number L of realizations that are run. In practice, however, L sometimes has to be so large that the technique becomes prohibitively expensive. This thesis presents a more efficient strategy: the sequential Gaussian simulation algorithm was modified to increase its efficiency. Replacing the Monte Carlo sampling step with the Latin Hypercube Sampling (LHS) technique allows the uncertainty space of Z(u) to be characterized, for a given precision, more quickly. The proposed technique also guarantees that the entire theoretical uncertainty model is sampled, above all in its extreme portions.
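As an illustration of the sampling change just described (our sketch, not the thesis's code), the snippet below contrasts plain Monte Carlo draws of the uniform quantile used at each simulation node with a Latin Hypercube design stratified across the L realizations, which forces every stratum of the conditional distribution, including the tails, to be visited; all function names are illustrative.

```python
import numpy as np
from scipy.stats import norm

def quantiles_mc(L, rng):
    """Plain Monte Carlo: L independent uniform quantiles."""
    return rng.random(L)

def quantiles_lhs(L, rng):
    """Latin Hypercube: one quantile per stratum [l/L, (l+1)/L), in random order."""
    u = (np.arange(L) + rng.random(L)) / L
    return rng.permutation(u)

def simulate_node(mu, sigma, L, rng, lhs=True):
    """Draw L realizations of one node's conditional Gaussian N(mu, sigma^2)."""
    u = quantiles_lhs(L, rng) if lhs else quantiles_mc(L, rng)
    return mu + sigma * norm.ppf(u)

rng = np.random.default_rng(0)
# With LHS, 20 realizations already cover both tails of the model;
# plain Monte Carlo often misses the extreme strata at small L.
print(np.sort(simulate_node(0.0, 1.0, 20, rng, lhs=True)))
print(np.sort(simulate_node(0.0, 1.0, 20, rng, lhs=False)))
```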

Relevance:

90.00%

Abstract:

The ferromagnetic and antiferromagnetic Ising model on a two-dimensional inhomogeneous lattice characterized by two exchange constants (J1 and J2) is investigated. The lattice allows a continuous interpolation between the uniform square (J2 = 0) and triangular (J2 = J1) lattices. Performing Monte Carlo simulations with the sequential Metropolis algorithm, we calculate the magnetization and the magnetic susceptibility on lattices of different sizes. Applying the finite-size scaling method through a data collapse, we obtain the critical temperatures as well as the critical exponents of the model for several values of the parameter α = J2/J1 in the range [0, 1]. In the ferromagnetic case the critical temperature Tc increases linearly with α. Concerning the antiferromagnetic system, we observe a linear (decreasing) behavior of Tc only for small values of α; in the range [0.6, 1], where frustration effects are more pronounced, the critical temperature Tc decays more quickly, possibly in a non-linear way, to the limiting value Tc = 0, corresponding to the homogeneous, fully frustrated antiferromagnetic triangular case.
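A minimal sketch of this kind of simulation (lattice size, temperature, and sweep counts are invented, not the paper's): one sequential Metropolis sweep over a square lattice with an extra diagonal bond of strength J2 = α·J1, which interpolates between the square (α = 0) and anisotropic triangular (α = 1) lattices.

```python
import numpy as np

def metropolis_sweep(spins, beta, J1, J2, rng):
    """One sequential Metropolis sweep of the J1 (square) + J2 (diagonal) lattice."""
    Ly, Lx = spins.shape
    for y in range(Ly):
        for x in range(Lx):
            # J1 couples the four square-lattice neighbors; J2 adds two bonds
            # along one diagonal, interpolating square -> triangular.
            nn = (spins[(y + 1) % Ly, x] + spins[(y - 1) % Ly, x] +
                  spins[y, (x + 1) % Lx] + spins[y, (x - 1) % Lx])
            diag = (spins[(y + 1) % Ly, (x + 1) % Lx] +
                    spins[(y - 1) % Ly, (x - 1) % Lx])
            dE = 2.0 * spins[y, x] * (J1 * nn + J2 * diag)
            if dE <= 0 or rng.random() < np.exp(-beta * dE):
                spins[y, x] *= -1

rng = np.random.default_rng(1)
L, alpha, J1 = 32, 0.5, 1.0
spins = rng.choice([-1, 1], size=(L, L))
for _ in range(500):                          # equilibration sweeps
    metropolis_sweep(spins, beta=0.5, J1=J1, J2=alpha * J1, rng=rng)
m = abs(spins.mean())                         # magnetization per spin
print(f"alpha={alpha}: |m| = {m:.3f}")
```

Measuring |m| and its fluctuations at several L and temperatures, then collapsing the data, is what yields Tc(α) and the critical exponents.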

Relevance:

90.00%

Abstract:

We study the critical behavior of the one-dimensional pair contact process (PCP), using the Monte Carlo method for several lattice sizes and three different updating schemes: random, sequential, and parallel. We also added a small modification to the model, called "Monte Carlo com Ressucitamento" (MCR, Monte Carlo with resuscitation), which consists of resuscitating one particle when the order parameter goes to zero. This was done because it is difficult to determine the critical point of the model accurately, since the order parameter (particle pair density) rapidly goes to zero with the traditional approach. With the MCR, the order parameter vanishes more smoothly, allowing us to use finite-size scaling to determine the critical point and the critical exponents β, ν and z. Our results are consistent with those already found in the literature for this model, showing that resuscitating one particle not only leaves the critical behavior of the system unchanged, but also makes it easier to determine the critical point and critical exponents. This extension of the Monte Carlo method has already been used in other contact process models, suggesting its usefulness for studying several other non-equilibrium models.
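A toy version of the PCP dynamics with the resuscitation tweak might look as follows. The exact placement rule for the resuscitated particle is an assumption on our part, and the annihilation probability p is only chosen near the critical region reported for the PCP; nothing here reproduces the paper's parallel/sequential updating study.

```python
import numpy as np

def pcp_mcr(L, p, steps, rng):
    """1D pair contact process with the MCR tweak: when the system falls into
    an absorbing (pair-free) state, one particle is re-inserted next to an
    occupied site so the order parameter decays smoothly."""
    occ = np.ones(L, dtype=int)                    # start fully occupied
    density = np.empty(steps)
    for t in range(steps):
        pairs = np.nonzero(occ & np.roll(occ, -1))[0]
        if len(pairs) == 0:
            # Resuscitation (placement rule assumed for illustration):
            # recreate one pair adjacent to a surviving particle, if any.
            filled = np.nonzero(occ)[0]
            i = rng.choice(filled) if len(filled) else rng.integers(L)
            occ[i] = occ[(i + 1) % L] = 1
            pairs = np.nonzero(occ & np.roll(occ, -1))[0]
        i = rng.choice(pairs)                      # pick a random pair (i, i+1)
        if rng.random() < p:                       # pair annihilation
            occ[i] = occ[(i + 1) % L] = 0
        else:                                      # creation at a pair neighbor
            j = (i - 1) % L if rng.random() < 0.5 else (i + 2) % L
            occ[j] = 1
        density[t] = len(pairs) / L
    return density

rng = np.random.default_rng(2)
rho = pcp_mcr(L=256, p=0.077, steps=20000, rng=rng)   # p near the critical region
print("late-time pair density:", rho[-2000:].mean())
```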

Relevance:

90.00%

Abstract:

In an ever more competitive environment, power distribution companies must balance two conflicting objectives: minimizing investment costs and satisfying reliability targets. Network reconfiguration of a distribution system is a technique well suited to this new deregulated environment, since it allows reliability indices to be improved simply by opening and closing switches, without the expense of acquiring new equipment. Because of the combinatorial explosion characteristic of the problem, metaheuristic methods are employed in its solution; these converge to optimal or quasi-optimal solutions, but with a high computational effort. As the main objective of this work is to find the configuration(s) of the distribution system with the best reliability levels, the objective function used in the metaheuristics is the minimization of the LOLC (Loss Of Load Cost), which is associated with both the number and the duration of electric power interruptions. Several metaheuristic techniques were tested, and tabu search proved to be the most appropriate for the proposed problem. To characterize the switch reconfiguration problem computationally, an integer vector model of the switch representation was developed, in which each normally open switch is associated with a group of normally closed switches. Simplifications were introduced into this model to reduce computational time, and restrictions were imposed to exclude solutions that leave load points of the system without supply. To check for violations of the voltage and loading criteria, a power flow study is performed for the ten best solutions. Also for the ten best solutions, a reliability evaluation using sequential Monte Carlo simulation is performed, from which the probability distributions of the indices can be obtained and the risk of paying penalties for missing the targets can be calculated. Finally, the methodology is applied to a real Brazilian distribution network, and the results are discussed.
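The reliability evaluation step lends itself to a compact sketch. Below is a minimal chronological (sequential) Monte Carlo loop for a single load point with assumed failure and repair rates and an assumed interruption cost; the thesis's network model, switch representation and tabu search are not reproduced.

```python
import numpy as np

def sequential_mc_lolc(lam, mu, years, cost_per_kwh, load_kw, rng):
    """Chronological (sequential) Monte Carlo for one load point: alternate
    exponential up/down times and accumulate the Loss Of Load Cost (LOLC)
    of each interruption. Rates lam (failures) and mu (repairs) are per year."""
    t, lolc, n_int = 0.0, 0.0, 0
    while t < years:
        t += rng.exponential(1.0 / lam)            # time to next failure
        if t >= years:
            break
        d = rng.exponential(1.0 / mu)              # interruption duration (years)
        lolc += cost_per_kwh * load_kw * d * 8760.0  # cost of energy not supplied
        n_int += 1
        t += d
    return lolc / years, n_int / years             # annualized LOLC, frequency

rng = np.random.default_rng(3)
# Toy numbers (illustrative only): 0.5 failures/yr, mean repair ~3.6 days.
lolc, freq = sequential_mc_lolc(lam=0.5, mu=100.0, years=10000,
                                cost_per_kwh=1.5, load_kw=50.0, rng=rng)
print(f"LOLC ~ {lolc:.0f} $/yr at {freq:.2f} interruptions/yr")
```

Run over each candidate configuration, a loop of this kind yields the probability distributions of the indices from which the penalty risk can be estimated.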

Relevance:

90.00%

Abstract:

The photophysics of the 1-nitronaphthalene molecular system after the absorption transition to the first singlet excited state is studied theoretically, to investigate the ultrafast multiplicity change to the triplet manifold. The consecutive transient absorption spectra experimentally observed in this molecular system are also studied. To identify the electronic states involved in the nonradiative decay, the minimum energy path of the first singlet excited state is obtained using the complete active space self-consistent field//configurational second-order perturbation approach. A near-degeneracy region was found between the first singlet and the second triplet excited states, with large spin-orbit coupling between them. The intersystem crossing rate was also evaluated. To support the proposed deactivation model, the transient absorption spectra observed in the experiments were also considered. For this, computer simulations using a sequential quantum mechanics/molecular mechanics methodology were performed to account for the solvent effect in the ground and excited states, allowing proper comparison with the experimental results. The absorption transitions from the second triplet excited state at the relaxed geometry describe the transient absorption band experimentally observed around 200 fs after the absorption transition. This indicates that the T2 electronic state is populated through the intersystem crossing presented here. The two transient absorption bands experimentally observed between 2 and 45 ps after the absorption transition are described here as the T1 -> T3 and T1 -> T5 transitions, supporting the view that the intermediate triplet state (T2) decays by internal conversion to T1. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4738757]

Relevance:

90.00%

Abstract:

The ionization of chlorophyll-c2 in liquid methanol was investigated by a sequential quantum mechanical/Monte Carlo approach. Focus was placed on the determination of the first ionization energy of chlorophyll-c2. The results show that the first vertical ionization energy (IE) is red-shifted by 0.47 ± 0.24 eV relative to the gas-phase value. The red shift of the chlorophyll-c2 IE in the liquid phase can be explained by Mg···OH hydrogen bonding and long-range electrostatic interactions in solution. The ionization threshold for chlorophyll-c2 in liquid methanol is close to 6 eV. (C) 2012 Elsevier B.V. All rights reserved.

Relevance:

90.00%

Abstract:

This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are network synchronization, position estimation, and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning this issue, i.e. the position estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and this information is then used to build a new SDP formulation of the localization problem. Finally, a flip-ambiguity-robust network localization algorithm is derived and its performance is studied by Monte Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any given pair of nodes must rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for the position tracking of a user in a WLAN-based indoor localization system.
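As a much simpler stand-in for the thesis's SDP machinery, the sketch below evaluates a standard linearized least-squares range-based position estimate by Monte Carlo simulation over ranging noise, the same evaluation methodology the thesis applies to its flip-ambiguity-robust algorithm. The anchor layout and noise level are invented.

```python
import numpy as np

def localize_ls(anchors, dists):
    """Linearized least-squares position from ranges to known anchors
    (subtract the first anchor's range equation to remove the |x|^2 term)."""
    a0, d0 = anchors[0], dists[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0**2 - dists[1:]**2
         + np.sum(anchors[1:]**2, axis=1) - np.sum(a0**2))
    return np.linalg.lstsq(A, b, rcond=None)[0]

rng = np.random.default_rng(4)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 7.0])
errs = []
for _ in range(2000):                       # Monte Carlo over ranging noise
    d = np.linalg.norm(anchors - true, axis=1) + rng.normal(0.0, 0.2, 4)
    errs.append(np.linalg.norm(localize_ls(anchors, d) - true))
print(f"RMSE = {np.sqrt(np.mean(np.square(errs))):.3f} m")
```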

Relevance:

90.00%

Abstract:

In the first part of this work, the binding behavior of annexin A1 and annexin A2t to solid-supported lipid membranes composed of POPC and POPS was investigated. For both proteins, fluorescence microscopy showed that irreversible binding occurs only in the presence of POPS. Atomic force microscopy imaging revealed the lateral organization of the annexins on the lipid membrane. Both proteins assemble on the surface in the form of lateral aggregates (two-dimensional domains); moreover, the surface coverage and the size of the domains depend on the membrane composition and the calcium concentration. With increasing POPS content and calcium concentration, the coverage increases and the mean domain radius decreases. Combined with detailed quartz crystal microbalance (QCM) binding studies of annexin A1, these results were used to develop a binding model based on a heterogeneous surface: reversible adsorption takes place on a POPC-rich matrix, and irreversible adsorption on POPS-rich domains. By fitting dynamic Monte Carlo simulations based on two-dimensional random sequential adsorption (RSA), insights were gained into the membrane structure and the kinetic rate constants as functions of the calcium concentration and the protein incubation time. Irreversible binding is faster than reversible binding over the entire range of calcium concentrations, and irreversible adsorption shows a markedly stronger dependence on the calcium concentration. A lower coverage at low Ca2+ concentrations is mainly explained by the decrease in available binding sites on the surface. The good agreement between the domain structures obtained from the Monte Carlo simulations and the atomic force microscopy images, together with the fact that the simulated resonance-frequency curves could readily be fitted to the experimental QCM curves, demonstrates the applicability of the developed simulation program to the adsorption of annexin A1. Extracting the kinetic parameters from the two-dimensional RSA model is certainly superior to a simple Langmuir approach: a Langmuir model captures only a single macroscopic rate constant in an integral fashion, whereas the RSA model allows the reversible and irreversible binding processes to be treated separately and additionally yields microscopic information about the surface structure. In the second part of this work, the thermotropic phase behavior of solid-supported phospholipid bilayers was investigated. Microstructured, free-standing membrane strips were prepared and examined by imaging ellipsometry, allowing the temperature-dependent layer thickness and lateral membrane extension to be observed in parallel. The phase transition temperatures determined for DMPC, diC15PC and DPPC were 2-3 °C above the literature values for vesicular systems. In addition, a marked reduction in the cooperativity of the phase transition was found, indicating a strong influence of the substrate in solid-supported lipid membranes.
Furthermore, a non-systematic dependence of the results on the surface preparation was found, which makes it indispensable to introduce an internal standard when studying solid-supported substrates. In the analysis of the thermotropic phase transition behavior of DMPC/cholesterol mixtures, the individual addressability of the structured lipid membranes was therefore exploited and a strip of pure DMPC was used as the standard. In this way it could be shown that the phase transition behavior typical of phospholipids is no longer present at cholesterol contents of 30 mol% and above, which is attributed to the formation of a fluid phase with highly ordered acyl chains that is induced only by higher sterols. Finally, the formation of an interdigitated bilayer was demonstrated by adding ethanol to a microstructured DMPC membrane. Imaging ellipsometry is a very good method for studying solid-supported lipid membranes, as it possesses very good vertical and sufficient lateral resolution. Although it remains inferior to an atomic force microscope in this respect, it offers simpler handling of liquids and of temperature control, faster imaging, and, as an optical method, is non-invasive.
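The simulation approach described above can be caricatured in a few lines: a dynamic Monte Carlo on a lattice whose sites are quenched into "reversible" (POPC-rich) and "irreversible" (POPS-rich) regions. A site model is a strong simplification of true two-dimensional RSA (no excluded area between proteins), and all probabilities below are invented placeholders for the calcium-dependent rate constants fitted in the thesis.

```python
import numpy as np

def rsa_trace(n, frac_irr, p_rev_on, p_off, p_irr_on, steps, rng):
    """Dynamic Monte Carlo on an n x n site lattice: a quenched fraction of
    sites ('POPS-rich') binds irreversibly, the rest ('POPC-rich') binds
    reversibly. The coverage trace is a stand-in for the QCM frequency shift."""
    irr = rng.random((n, n)) < frac_irr          # quenched surface heterogeneity
    state = np.zeros((n, n), dtype=int)          # 0 empty, 1 reversible, 2 irreversible
    trace = []
    for t in range(steps):
        y, x = rng.integers(n), rng.integers(n)
        if state[y, x] == 0:                     # adsorption attempt
            if irr[y, x]:
                if rng.random() < p_irr_on:
                    state[y, x] = 2
            elif rng.random() < p_rev_on:
                state[y, x] = 1
        elif state[y, x] == 1 and rng.random() < p_off:
            state[y, x] = 0                      # only reversibly bound protein desorbs
        if t % 1000 == 0:
            trace.append((state > 0).mean())
    return np.array(trace)

rng = np.random.default_rng(5)
# Toy probabilities; in the thesis the rate constants depend on [Ca2+].
cov = rsa_trace(n=64, frac_irr=0.3, p_rev_on=0.2, p_off=0.05,
                p_irr_on=0.8, steps=300000, rng=rng)
print("final coverage:", cov[-1])
```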

Relevance:

90.00%

Abstract:

The Standard Model of particle physics was developed to describe the fundamental particles which form matter and their interactions via the strong, electromagnetic and weak forces. Although most measurements are described with high accuracy, some observations indicate that the Standard Model is incomplete. Numerous extensions have been developed to address these limitations. Several of these extensions predict heavy resonances, so-called Z' bosons, which can decay into an electron-positron pair. The particle accelerator Large Hadron Collider (LHC) at CERN in Switzerland was built to collide protons at unprecedented center-of-mass energies, namely 7 TeV in 2011. With the data set recorded in 2011 by the ATLAS detector, a large multi-purpose detector located at the LHC, the electron-positron pair mass spectrum was measured up to high masses in the TeV range. The properties of electrons and the probability that other particles are misidentified as electrons were studied in detail. Using the obtained information, a refined Standard Model expectation was derived with data-driven methods and Monte Carlo simulations. In the comparison of the measurement with the expectation, no significant deviations from the Standard Model were observed. Therefore, exclusion limits for several Standard Model extensions were calculated; for example, Sequential Standard Model (SSM) Z' bosons with masses below 2.10 TeV were excluded at 95% Confidence Level (C.L.).

Relevance:

90.00%

Abstract:

A Monte Carlo simulation study was conducted to investigate parameter estimation and hypothesis testing in some well-known adaptive randomization procedures. The four urn models studied are the Randomized Play-the-Winner (RPW), Randomized Pólya Urn (RPU), Birth and Death Urn with Immigration (BDUI), and Drop-the-Loser (DL) urn. Two sequential estimation methods, sequential maximum likelihood estimation (SMLE) and the doubly adaptive biased coin design (DABC), are simulated at three optimal allocation targets that minimize the expected number of failures under the assumption of constant variance of the simple difference (RSIHR), relative risk (ORR), and odds ratio (OOR), respectively. The log-likelihood ratio test and three Wald-type tests (simple difference, log relative risk, log odds ratio) are compared across the adaptive procedures. Simulation results indicate that although RPW is slightly better at assigning more patients to the superior treatment, the DL method is considerably less variable and its test statistics have better normality. Compared with SMLE, DABC has a slightly higher overall response rate with lower variance, but larger bias and variance in parameter estimation. Additionally, the test statistics under SMLE have better normality and a lower type I error rate, and the power of the hypothesis test is more comparable with equal randomization. Usually, RSIHR has the highest power among the three optimal allocation ratios; however, the ORR allocation has better power and a lower type I error rate when the log relative risk is the test statistic, and the expected number of failures under ORR is smaller than under RSIHR. It is also shown that the simple difference of response rates has the worst normality among the four test statistics, and the power of the hypothesis test is always inflated when the simple difference is used. On the other hand, the normality of the log-likelihood ratio test statistic is robust to the choice of adaptive randomization procedure.
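For concreteness, here is a minimal simulation of one of the four urn models, the Randomized Play-the-Winner rule. The choice RPW(1, 1) and the response rates are arbitrary, and none of the paper's estimation or testing machinery is included.

```python
import numpy as np

def rpw_trial(n_patients, p_a, p_b, alpha=1, beta=1, rng=None):
    """Randomized Play-the-Winner RPW(alpha, beta): draw a ball to assign a
    treatment; on success add beta balls of the same color, on failure add
    beta balls of the other color. Returns allocations and outcomes."""
    rng = rng or np.random.default_rng()
    urn = np.array([alpha, alpha], dtype=float)   # balls for arms A (0), B (1)
    assign, success = [], []
    for _ in range(n_patients):
        arm = rng.choice(2, p=urn / urn.sum())
        ok = rng.random() < (p_a, p_b)[arm]
        urn[arm if ok else 1 - arm] += beta       # reinforce the 'winner'
        assign.append(arm)
        success.append(ok)
    return np.array(assign), np.array(success)

rng = np.random.default_rng(6)
assign, success = rpw_trial(200, p_a=0.7, p_b=0.5, rng=rng)
print("share assigned to the better arm A:", np.mean(assign == 0))
print("observed response rates:",
      success[assign == 0].mean(), success[assign == 1].mean())
```

Repeating such trials many times and computing the estimators and test statistics on each replicate is the pattern behind the comparisons reported above.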

Relevance:

90.00%

Abstract:

Coastal managers require reliable spatial data on the extent and timing of potential coastal inundation, particularly in a changing climate. Most sea level rise (SLR) vulnerability assessments are undertaken using the easily implemented bathtub approach, in which areas adjacent to the sea and below a given elevation are mapped using a deterministic line dividing potentially inundated from dry areas. This method only requires elevation data, usually in the form of a digital elevation model (DEM). However, inherent errors in the DEM and in the spatial analysis of the bathtub model propagate into the inundation mapping. The aim of this study was to assess the impact of spatially variable and spatially correlated elevation errors in high-spatial-resolution DEMs on coastal inundation mapping. Elevation errors were best modelled using regression-kriging. This geostatistical model takes the spatial correlation of elevation errors into account, which has a significant impact on analyses that include spatial interactions, such as inundation modelling. The spatial variability of elevation errors was partially explained by land cover and terrain variables. Elevation errors were simulated using sequential Gaussian simulation, a Monte Carlo probabilistic approach. One thousand error simulations were added to the original DEM and reclassified using a hydrologically correct bathtub method. The probability of inundation under a scenario combining a 1-in-100-year storm event with a 1 m SLR was calculated by counting the proportion of the 1,000 simulations in which a location was inundated. This probabilistic approach can be used in a risk-averse decision-making process by planning for scenarios with different probabilities of occurrence. For example, results showed that when considering a 1% exceedance probability, the inundated area was approximately 11% larger than that mapped using the deterministic bathtub approach. The probabilistic approach provides visually intuitive maps that convey the uncertainties inherent in spatial data and analysis.
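The counting logic of the probabilistic bathtub can be sketched compactly. In the toy version below, smoothed white noise stands in for the paper's regression-kriging/sequential Gaussian error model, and the hydrological-connectivity step is omitted; DEM shape, error scale, and water level are invented.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def inundation_probability(dem, water_level, sigma_err, corr_px, n_sim, rng):
    """Probabilistic bathtub: add n_sim spatially correlated error fields to
    the DEM and count how often each cell falls below the water level."""
    count = np.zeros(dem.shape)
    for _ in range(n_sim):
        noise = gaussian_filter(rng.normal(0.0, 1.0, dem.shape), corr_px)
        noise *= sigma_err / noise.std()          # rescale to the target error sd
        count += (dem + noise) < water_level
    return count / n_sim

rng = np.random.default_rng(7)
dem = np.add.outer(np.linspace(0.0, 4.0, 200), np.zeros(200))  # plane rising inland
prob = inundation_probability(dem, water_level=1.8, sigma_err=0.15,
                              corr_px=10, n_sim=1000, rng=rng)
print("deterministic inundated fraction:", (dem < 1.8).mean())
print("fraction with >=1% inundation probability:", (prob >= 0.01).mean())
```

Comparing the two printed fractions mirrors the paper's comparison between the deterministic map and the 1% exceedance-probability map.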

Relevance:

90.00%

Abstract:

The retrieval of wind vectors from satellite scatterometer observations is a non-linear inverse problem. A common approach to solving inverse problems is to adopt a Bayesian framework and to infer the posterior distribution of the parameters of interest given the observations by using a likelihood model relating the observations to the parameters, and a prior distribution over the parameters. We show how Gaussian process priors can be used efficiently with a variety of likelihood models, using local forward (observation) models and direct inverse models for the scatterometer. We present an enhanced Markov chain Monte Carlo method to sample from the resulting multimodal posterior distribution. We go on to show how the computational complexity of the inference can be controlled by using a sparse, sequential Bayes algorithm for estimation with Gaussian processes. This helps to overcome the most serious barrier to the use of probabilistic, Gaussian process methods in remote sensing inverse problems, which is the prohibitively large size of the data sets. We contrast the sampling results with the approximations that are found by using the sparse, sequential Bayes algorithm.
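The multimodality mentioned above is what defeats plain random-walk samplers: scatterometer wind-direction posteriors are nearly symmetric under a 180° flip. The sketch below is not the authors' enhanced MCMC method, just a minimal Metropolis sampler on a toy bimodal direction posterior, with an added symmetric half-turn proposal that lets the chain hop between modes.

```python
import numpy as np

def log_post(theta):
    # Toy bimodal target over wind direction (radians): two Gaussian modes
    # separated by pi (illustrative, not the paper's likelihood model).
    return np.logaddexp(-0.5 * ((theta - 1.0) / 0.2) ** 2,
                        -0.5 * ((theta - 1.0 - np.pi) / 0.2) ** 2)

def metropolis_multimodal(log_p, x0, step, n, rng):
    """Random-walk Metropolis with an occasional +/- pi proposal; the move is
    symmetric, so the usual acceptance ratio still applies."""
    x, lp, out = x0, log_p(x0), []
    for _ in range(n):
        if rng.random() < 0.1:
            y = x + rng.choice([-np.pi, np.pi])   # mode-hopping move
        else:
            y = x + rng.normal(0.0, step)         # local move
        ly = log_p(y)
        if np.log(rng.random()) < ly - lp:
            x, lp = y, ly
        out.append(x)
    return np.array(out)

rng = np.random.default_rng(8)
chain = metropolis_multimodal(log_post, x0=1.0, step=0.1, n=50000, rng=rng)
# Both modes should be visited roughly equally:
print("fraction in the second mode:", np.mean(chain > 1.0 + np.pi / 2))
```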

Relevance:

90.00%

Abstract:

Large monitoring networks are becoming increasingly common and can generate large datasets, from thousands to millions of observations in size, often with high temporal resolution. Processing large datasets using traditional geostatistical methods is prohibitively slow, and in real-world applications different types of sensors can be found across a monitoring network. Heterogeneities in the error characteristics of different sensors, both in terms of distribution and magnitude, present problems for generating coherent maps. An assumption in traditional geostatistics is that observations are made directly of the underlying process being studied and that the observations are contaminated with Gaussian errors. Under this assumption, sub-optimal predictions will be obtained if the error characteristics of the sensor are effectively non-Gaussian. One method, model-based geostatistics, assumes that a Gaussian process prior is imposed over the (latent) process being studied and that the sensor model forms part of the likelihood term. One problem with this type of approach is that the corresponding posterior distribution will be non-Gaussian and computationally demanding, as Monte Carlo methods have to be used. An extension of a sequential, approximate Bayesian inference method enables observations with arbitrary likelihoods to be treated in a projected process kriging framework, which is less computationally intensive. The approach is illustrated using a simulated dataset with a range of sensor models and error characteristics.

Relevance:

90.00%

Abstract:

In the highly competitive world of modern finance, new derivatives are continually required to take advantage of changes in financial markets, and to hedge businesses against new risks. The research described in this paper aims to accelerate the development and pricing of new derivatives in two different ways. Firstly, new derivatives can be specified mathematically within a general framework, enabling new mathematical formulae to be specified rather than just new parameter settings. This Generic Pricing Engine (GPE) is expressively powerful enough to specify a wide range of standard pricing engines. Secondly, the associated price simulation using the Monte Carlo method is accelerated using GPU or multicore hardware. The parallel implementation (in OpenCL) is automatically derived from the mathematical description of the derivative. As a test, for a Basket Option Pricing Engine (BOPE) generated using the GPE, on the largest problem size an NVidia GPU runs the generated pricing engine at 45 times the speed of a sequential, specific hand-coded implementation of the same BOPE. Thus a user can more rapidly devise, simulate and experiment with new derivatives without actual programming.
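The Monte Carlo kernel such an engine generates is easy to sketch. The NumPy version below prices a European basket call under correlated geometric Brownian motion; it is our illustration with invented market data, not the paper's GPE output, which would be generated OpenCL.

```python
import numpy as np

def basket_call_mc(s0, sigma, corr, w, K, r, T, n_paths, rng):
    """Monte Carlo price of a European basket call under correlated GBM:
    payoff max(sum_i w_i * S_i(T) - K, 0), discounted at rate r."""
    L = np.linalg.cholesky(corr)                 # correlate the Gaussian draws
    z = rng.standard_normal((n_paths, len(s0))) @ L.T
    sT = s0 * np.exp((r - 0.5 * sigma**2) * T + sigma * np.sqrt(T) * z)
    disc = np.exp(-r * T) * np.maximum(sT @ w - K, 0.0)
    return disc.mean(), disc.std(ddof=1) / np.sqrt(n_paths)

rng = np.random.default_rng(9)
s0 = np.array([100.0, 95.0, 105.0])              # toy market data
sigma = np.array([0.20, 0.25, 0.30])
corr = np.array([[1.0, 0.5, 0.3],
                 [0.5, 1.0, 0.4],
                 [0.3, 0.4, 1.0]])
w = np.array([1 / 3, 1 / 3, 1 / 3])
price, se = basket_call_mc(s0, sigma, corr, w, K=100.0, r=0.03, T=1.0,
                           n_paths=1_000_000, rng=rng)
print(f"basket call ~ {price:.3f} +/- {se:.3f}")
```

Because every path is independent, the payoff computation vectorizes (or maps onto GPU threads) trivially, which is exactly the parallelism the paper exploits.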