964 results for SEQUENTIAL MONTE-CARLO


Relevance: 90.00%

Abstract:

We describe a strategy for Markov chain Monte Carlo analysis of non-linear, non-Gaussian state-space models involving batch analysis for inference on dynamic, latent state variables and fixed model parameters. The key innovation is a Metropolis-Hastings method for the time series of state variables based on sequential approximation of filtering and smoothing densities using normal mixtures. These mixtures are propagated through the non-linearities using an accurate, local mixture approximation method, and we use a regenerating procedure to deal with potential degeneracy of mixture components. This provides accurate, direct approximations to sequential filtering and retrospective smoothing distributions, and hence a useful construction of global Metropolis proposal distributions for simulation of posteriors for the set of states. This analysis is embedded within a Gibbs sampler to include uncertain fixed parameters. We give an example motivated by an application in systems biology. Supplemental materials provide an example based on a stochastic volatility model as well as MATLAB code.
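
The construction lends itself to a compact illustration. Below is a minimal, hypothetical sketch of the core mechanism, an independence Metropolis-Hastings step whose global proposal is a normal mixture; the target density and mixture parameters are stand-ins, not the paper's filtering/smoothing construction, and the Gibbs layer for fixed parameters is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in target: an unnormalized, non-Gaussian log-density (hypothetical).
def log_target(x):
    return -0.5 * (x - 1.0) ** 2 - 0.1 * x ** 4

# Normal-mixture approximation q(x) = sum_k w_k N(mu_k, s_k^2); in the paper
# such a mixture would come from the sequential filtering/smoothing pass.
w = np.array([0.6, 0.4])
mu = np.array([0.8, -0.5])
s = np.array([0.7, 1.2])

def log_q(x):
    comp = -0.5 * ((x - mu) / s) ** 2 - np.log(s * np.sqrt(2 * np.pi))
    return np.logaddexp.reduce(np.log(w) + comp)

def sample_q():
    k = rng.choice(len(w), p=w)
    return rng.normal(mu[k], s[k])

# Independence Metropolis-Hastings with acceptance ratio
# [pi(x') q(x)] / [pi(x) q(x')]: a good mixture fit gives high acceptance.
x, chain = 0.0, []
for _ in range(5000):
    x_new = sample_q()
    log_alpha = (log_target(x_new) - log_target(x)) + (log_q(x) - log_q(x_new))
    if np.log(rng.uniform()) < log_alpha:
        x = x_new
    chain.append(x)
```

Because the acceptance ratio compares the target with the mixture fit, a mixture that closely approximates the smoothing density yields high acceptance rates, which is what makes a global proposal of this kind useful.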

Relevance: 90.00%

Abstract:

Objective To demonstrate the potential value of three-stage sequential screening for Down syndrome. Methods Protocols were considered in which maternal serum pregnancy-associated plasma protein-A (PAPP-A) and free β-human chorionic gonadotropin (β-hCG) measurements were taken on all women in the first trimester. Women with very low Down syndrome risks were screened negative at that stage; nuchal translucency (NT) was measured on the remainder and the risk reassessed. Those with very low risk were then screened negative and those with very high risk were offered early diagnostic testing. Those with intermediate risks received second-trimester maternal serum α-fetoprotein, free β-hCG, unconjugated estriol and inhibin-A. Risk was then reassessed and those with high risk were offered diagnosis. Detection rates and false-positive rates were estimated by multivariate Gaussian modelling using Monte-Carlo simulation. Results The modelling suggests that, with full adherence to a three-stage policy, overall detection rates of nearly 90% and false-positive rates below 2.0% can be achieved. Approximately two-thirds of pregnancies are screened on the basis of first-trimester biochemistry alone, five out of six women complete their screening in the first trimester, and the first-trimester detection rate is over 60%. Conclusion Three-stage contingent sequential screening is potentially highly effective for Down syndrome screening. The acceptability of this protocol and its performance in practice should be tested in prospective studies. Copyright © 2006 John Wiley & Sons, Ltd.
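
As a reading aid, here is a minimal sketch of the contingent triage logic described above; the risk cutoffs are purely illustrative assumptions, not the thresholds obtained from the paper's Gaussian modelling.

```python
# Illustrative three-stage contingent triage; the cutoffs below are
# hypothetical, not the risk thresholds derived in the paper.
VERY_LOW, VERY_HIGH = 1 / 2000, 1 / 50

def triage(risk_t1_biochem, risk_after_nt, risk_after_quad):
    # Stage 1: first-trimester PAPP-A + free beta-hCG only.
    if risk_t1_biochem < VERY_LOW:
        return "screen negative (stage 1)"
    # Stage 2: nuchal translucency measured, risk reassessed.
    if risk_after_nt < VERY_LOW:
        return "screen negative (stage 2)"
    if risk_after_nt > VERY_HIGH:
        return "offer early diagnostic testing"
    # Stage 3: second-trimester quadruple markers, risk reassessed.
    return ("offer diagnosis" if risk_after_quad > VERY_HIGH
            else "screen negative (stage 3)")

print(triage(1 / 1000, 1 / 40, None))   # stage-2 high risk: early diagnosis
```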

Relevance: 90.00%

Abstract:

This paper seeks to estimate whether firms use debt strategically to limit the entry of potential rivals. Using the Generalized Method of Moments (GMM), we evaluate the effect of specific assets, market share and size (as proxies for market rents) and of entry barriers on leverage levels, at the firm level, for Colombia during 1995-2003. We find that firms use specific assets to limit market entry and that leverage decreases as firms increase their market share.

Relevance: 90.00%

Abstract:

Sequential techniques can enhance the efficiency of the approximate Bayesian computation algorithm, as in Sisson et al.'s (2007) partial rejection control version. While this method is based upon the theoretical works of Del Moral et al. (2006), the application to approximate Bayesian computation results in a bias in the approximation to the posterior. An alternative version based on genuine importance sampling arguments bypasses this difficulty, in connection with the population Monte Carlo method of Cappé et al. (2004), and it includes an automatic scaling of the forward kernel. When applied to a population genetics example, it compares favourably with two other versions of the approximate algorithm.
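
For orientation, the following is a minimal sketch of an ABC population Monte Carlo iteration with genuine importance-sampling weights, run on a toy normal model; the tolerance schedule, prior, and the "twice the weighted empirical variance" kernel-scaling rule are assumptions for illustration, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
y_obs = 2.0                              # observed summary statistic (toy)

def simulate(theta):                     # toy model: y | theta ~ N(theta, 1)
    return rng.normal(theta, 1.0)

def prior_logpdf(theta):                 # prior: theta ~ N(0, 10^2)
    return -0.5 * (theta / 10.0) ** 2

N, eps_seq = 1000, [2.0, 0.5, 0.1]       # population size, tolerance schedule
theta = rng.normal(0.0, 10.0, N)         # t = 0: draw from the prior
w = np.full(N, 1.0 / N)

for eps in eps_seq:
    tau2 = 2.0 * np.cov(theta, aweights=w)   # kernel scale: an assumed rule
    new_theta = np.empty(N)
    for i in range(N):
        while True:                          # ABC rejection at tolerance eps
            cand = rng.normal(rng.choice(theta, p=w), np.sqrt(tau2))
            if abs(simulate(cand) - y_obs) < eps:
                new_theta[i] = cand
                break
    # Genuine importance weights correcting for the kernel move:
    # w_i proportional to prior(theta_i) / sum_j w_j K(theta_i | theta_j).
    kern = np.exp(-0.5 * (new_theta[:, None] - theta[None, :]) ** 2 / tau2)
    w = np.exp(prior_logpdf(new_theta)) / (kern @ w)
    w /= w.sum()
    theta = new_theta
```

The weight correction in the last step is what distinguishes this importance-sampling version from the biased partial-rejection-control variant.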

Relevance: 90.00%

Abstract:

In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise based upon data from a well-known survey is also presented. Overall, theoretical and empirical results show promise for the feasible bias-corrected average forecast.
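
A minimal sketch of the feasible bias-corrected average forecast on synthetic panel data may help fix ideas: average the forecasts across the cross-section, then subtract the average bias estimated on a training window. All data and window choices below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
T, N = 200, 30                        # time periods, number of forecasters
y = rng.normal(size=T)                # target series (synthetic stand-in)
# Individual forecasts = truth + forecaster-specific bias + noise.
bias = rng.normal(0.5, 0.3, N)
f = y[:, None] + bias[None, :] + rng.normal(0.0, 1.0, (T, N))

T0 = 100                              # training window for the bias estimate
avg_f = f.mean(axis=1)                # cross-sectional average forecast
B_hat = (f[:T0] - y[:T0, None]).mean()   # estimated average bias
bcaf = avg_f[T0:] - B_hat             # feasible bias-corrected average forecast

mse_plain = np.mean((avg_f[T0:] - y[T0:]) ** 2)
mse_bcaf = np.mean((bcaf - y[T0:]) ** 2)
print(f"MSE, plain average: {mse_plain:.3f}; bias-corrected: {mse_bcaf:.3f}")
```

Averaging across the cross-section drives out the idiosyncratic noise, and the bias correction removes the remaining common distortion; this is the intuition behind the asymptotic equivalence to the conditional expectation.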

Relevance: 90.00%

Abstract:

In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it delivers a zero limiting mean-squared error if the number of forecasts and the number of post-sample time periods are sufficiently large. We also develop a zero-mean test for the average bias. Monte-Carlo simulations are conducted to evaluate the performance of this new technique in finite samples. An empirical exercise, based upon data from well-known surveys, is also presented. Overall, these results show promise for the bias-corrected average forecast.

Relevance: 90.00%

Abstract:

In this paper, we propose a novel approach to econometric forecasting of stationary and ergodic time series within a panel-data framework. Our key element is to employ the (feasible) bias-corrected average forecast. Using panel-data sequential asymptotics we show that it is potentially superior to other techniques in several contexts. In particular, it is asymptotically equivalent to the conditional expectation, i.e., has an optimal limiting mean-squared error. We also develop a zero-mean test for the average bias and discuss the forecast-combination puzzle in small and large samples. Monte-Carlo simulations are conducted to evaluate the performance of the feasible bias-corrected average forecast in finite samples. An empirical exercise, based upon data from a well-known survey, is also presented. Overall, these results show promise for the feasible bias-corrected average forecast.

Relevance: 90.00%

Abstract:

The most widely used stochastic sequential simulation algorithm is sequential Gaussian simulation (sGs). In theory, stochastic methods reproduce the uncertainty space of the random variable Z(u) better as the number L of executed realizations grows. Sometimes, however, L must be so large that the use of the technique becomes prohibitive. This thesis presents a more efficient strategy: the sequential Gaussian simulation algorithm was modified to increase its efficiency. Replacing the Monte Carlo method with the Latin Hypercube Sampling (LHS) technique allows the uncertainty space of Z(u) to be characterized, to a given precision, more quickly. The proposed technique also guarantees that the entire theoretical uncertainty model is sampled, particularly in its extreme portions.
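
The key substitution is easy to illustrate. The sketch below contrasts plain Monte Carlo draws with Latin Hypercube Sampling of the Gaussian quantiles across L realizations; it shows only the stratification idea, not the full sequential Gaussian simulation.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)
L = 20                                 # number of realizations

# Plain Monte Carlo: L independent uniforms mapped through the Gaussian quantile.
u_mc = rng.uniform(size=L)

# Latin Hypercube Sampling: one uniform per stratum of [0, 1], randomly ordered,
# so the L realizations jointly cover the whole distribution, tails included.
u_lhs = (rng.permutation(L) + rng.uniform(size=L)) / L

z_mc, z_lhs = norm.ppf(u_mc), norm.ppf(u_lhs)
# In sequential Gaussian simulation, realization l would use z_lhs[l] scaled by
# the local kriging mean and standard deviation at the node being simulated.
```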

Relevance: 90.00%

Abstract:

The ferromagnetic and antiferromagnetic Ising model on a two-dimensional inhomogeneous lattice characterized by two exchange constants (J1 and J2) is investigated. The lattice allows a continuous interpolation between the uniform square (J2 = 0) and triangular (J2 = J1) lattices. Performing Monte Carlo simulations with the sequential Metropolis algorithm, we calculate the magnetization and the magnetic susceptibility on lattices of different sizes. Applying the finite-size scaling method through a data collapse, we obtain the critical temperatures as well as the critical exponents of the model for several values of the parameter α = J2/J1 in the range [0, 1]. In the ferromagnetic case the critical temperature Tc increases linearly with α. For the antiferromagnetic system, we observe a linear (decreasing) behavior of Tc only for small values of α; in the range [0.6, 1], where frustration effects are more pronounced, the critical temperature Tc decays more quickly, possibly non-linearly, to the limiting value Tc = 0, corresponding to the homogeneous fully frustrated antiferromagnetic triangular case.
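
For concreteness, a minimal sketch of the sequential Metropolis update on this J1/J2 lattice follows; the lattice size, temperature and α below are illustrative values, not those used in the study.

```python
import numpy as np

rng = np.random.default_rng(4)
Lsize, J1, alpha, T = 32, 1.0, 0.5, 2.5   # illustrative parameters
J2 = alpha * J1
spins = rng.choice([-1, 1], size=(Lsize, Lsize))

def local_field(s, i, j):
    # Square-lattice bonds (J1) plus one diagonal bond (J2): J2 = 0 recovers
    # the square lattice and J2 = J1 the triangular one.
    h = J1 * (s[(i+1) % Lsize, j] + s[(i-1) % Lsize, j]
              + s[i, (j+1) % Lsize] + s[i, (j-1) % Lsize])
    h += J2 * (s[(i+1) % Lsize, (j+1) % Lsize] + s[(i-1) % Lsize, (j-1) % Lsize])
    return h

def metropolis_sweep(s, T):
    # Sequential updating: visit every site in lexicographic order.
    for i in range(Lsize):
        for j in range(Lsize):
            dE = 2.0 * s[i, j] * local_field(s, i, j)
            if dE <= 0 or rng.uniform() < np.exp(-dE / T):
                s[i, j] *= -1

for _ in range(100):
    metropolis_sweep(spins, T)
print("magnetization per spin:", spins.mean())
```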

Relevance: 90.00%

Abstract:

We study the critical behavior of the one-dimensional pair contact process (PCP), using the Monte Carlo method for several lattice sizes and three different updating schemes: random, sequential and parallel. We also added a small modification to the model, called "Monte Carlo com Ressucitamento" (MCR, Monte Carlo with resuscitation), which consists of resuscitating one particle when the order parameter goes to zero. This was done because it is difficult to determine the critical point of the model accurately, since the order parameter (particle-pair density) rapidly goes to zero under the traditional approach. With the MCR, the order parameter vanishes more gently, allowing us to use finite-size scaling to determine the critical point and the critical exponents β, ν and z. Our results are consistent with those already found in the literature for this model, showing that resuscitating one particle not only leaves the critical behavior of the system unchanged but also makes it easier to determine the critical point and critical exponents. This extension of the Monte Carlo method has already been used in other contact-process models, leading us to believe it will be useful for studying several other non-equilibrium models.
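
A rough sketch of the PCP dynamics with the MCR step is given below; the annihilation probability is set near the commonly quoted critical value, and the placement rule for the resuscitated particle is our assumption, since the abstract only states that one particle is resuscitated when the order parameter vanishes.

```python
import numpy as np

rng = np.random.default_rng(5)
Lsize, p, steps = 200, 0.077, 10000    # p near the PCP critical point (approx.)
occ = np.ones(Lsize, dtype=int)        # fully occupied initial configuration

def pairs(occ):
    # Sites i such that both i and i+1 are occupied; the order parameter
    # is the density of such pairs.
    return np.where(occ & np.roll(occ, -1))[0]

for _ in range(steps):
    pr = pairs(occ)
    if len(pr) == 0:                   # MCR: order parameter hit zero,
        k = np.flatnonzero(occ)        # so resuscitate one particle
        i = rng.choice(k) if len(k) else rng.integers(Lsize)
        occ[(i + 1) % Lsize] = 1       # placed next to a survivor (assumed rule)
        continue
    i = rng.choice(pr)                 # pick a particle pair at random
    if rng.uniform() < p:              # annihilation: remove the pair
        occ[i] = occ[(i + 1) % Lsize] = 0
    else:                              # creation: occupy an adjacent empty site
        j = (i - 1) % Lsize if rng.uniform() < 0.5 else (i + 2) % Lsize
        occ[j] = 1

print("pair density:", len(pairs(occ)) / Lsize)
```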

Relevance: 90.00%

Abstract:

In an ever more competitive environment, power distribution companies must satisfy two conflicting objectives: minimizing investment costs and meeting reliability targets. Network reconfiguration of a distribution system is a technique well suited to this new deregulated environment, for it improves reliability indices merely by opening and closing switches, without the expense of acquiring new equipment. Because the problem is combinatorially explosive, metaheuristic methods are employed; these converge to optimal or quasi-optimal solutions, but at a high computational cost. As the main objective of this work is to find the configuration(s) of the distribution system with the best reliability levels, the objective function used in the metaheuristics is minimization of the LOLC (Loss Of Load Cost), which depends on both the number and the duration of electric power interruptions. Several metaheuristic techniques are tested, and tabu search proves the most appropriate for the proposed problem. To characterize the switch-reconfiguration problem computationally, a vector model (with integers) representing the switches was developed, in which each normally open switch is associated with a group of normally closed switches. Simplifications were introduced into this model to reduce computation time, and restrictions exclude solutions that leave any load point of the system unsupplied. To check for violations of the voltage and loading criteria, a power-flow study is performed for the ten best solutions. Also for the ten best solutions, a reliability evaluation using sequential Monte Carlo simulation is performed, from which the probability distributions of the indices are obtained and the risk of paying penalties for missed targets can be calculated. Finally, the methodology is applied to a real Brazilian distribution network, and the results are discussed.
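
To illustrate the reliability-evaluation step, here is a minimal sketch of a sequential (chronological) Monte Carlo simulation for a single load point, accumulating interruption costs sample by sample so that the LOLC distribution, and hence the penalty risk, can be estimated. All rates, loads, costs and the penalty threshold are assumed values.

```python
import numpy as np

rng = np.random.default_rng(6)
# Illustrative data for a single load point (all values assumed):
lam, r_h = 0.5, 4.0                 # failures/yr, mean repair time (hours)
load_kw, cost_kwh = 150.0, 1.5      # interrupted load (kW), unit cost (USD/kWh)
years, n_samples = 1.0, 5000

lolc = np.empty(n_samples)
for k in range(n_samples):          # one chronological (sequential) sample
    t, cost = 0.0, 0.0
    while True:
        t += rng.exponential(1.0 / lam)       # time to the next failure (years)
        if t >= years:
            break
        d = rng.exponential(r_h)              # repair duration (hours)
        cost += load_kw * d * cost_kwh        # cost of energy not supplied
        t += d / 8760.0
    lolc[k] = cost

threshold = 2000.0                  # hypothetical penalty threshold (USD/yr)
print(f"E[LOLC] = {lolc.mean():.0f} USD/yr, "
      f"P(LOLC > threshold) = {np.mean(lolc > threshold):.3f}")
```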

Relevance: 90.00%

Abstract:

The photophysics of the 1-nitronaphthalene molecular system, after the absorption transition to the first singlet excited state, is studied theoretically to investigate the ultrafast multiplicity change to the triplet manifold. The consecutive transient absorption spectra experimentally observed in this molecular system are also studied. To identify the electronic states involved in the nonradiative decay, the minimum-energy path of the first singlet excited state is obtained using the complete active space self-consistent field//configurational second-order perturbation approach. A near-degeneracy region was found between the first singlet and the second triplet excited states, with large spin-orbit coupling between them. The intersystem crossing rate was also evaluated. To support the proposed deactivation model, the transient absorption spectra observed in the experiments were also considered. For this, computer simulations using a sequential quantum mechanics/molecular mechanics methodology were performed to account for the solvent effect in the ground and excited states, for proper comparison with the experimental results. The absorption transitions from the second triplet excited state in the relaxed geometry describe the transient absorption band experimentally observed around 200 fs after the absorption transition. This indicates that the T2 electronic state is populated through the intersystem crossing presented here. The two transient absorption bands experimentally observed between 2 and 45 ps after the absorption transition are described here as the T1 -> T3 and T1 -> T5 transitions, supporting the view that the intermediate triplet state (T2) decays by internal conversion to T1. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4738757]

Relevance: 90.00%

Abstract:

The ionization of chlorophyll-c2 in liquid methanol was investigated by a sequential quantum mechanical/Monte Carlo approach. Focus was placed on the determination of the first ionization energy of chlorophyll-c2. The results show that the first vertical ionization energy (IE) is red-shifted by 0.47 ± 0.24 eV relative to the gas-phase value. The red-shift of the chlorophyll-c2 IE in the liquid phase can be explained by Mg···OH hydrogen bonding and long-ranged electrostatic interactions in solution. The ionization threshold for chlorophyll-c2 in liquid methanol is close to 6 eV. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 90.00%

Abstract:

This thesis addresses the problem of localization, and analyzes its crucial aspects, within the context of cooperative WSNs. The three main issues discussed in the following are: network synchronization, position estimation and tracking. Time synchronization is a fundamental requirement for every network. In this context, a new approach based on estimation theory is proposed to evaluate the ultimate performance limit in network time synchronization. In particular, the lower bound on the variance of the average synchronization error in a fully connected network is derived by taking into account the statistical characterization of the Message Delivering Time (MDT). Sensor-network localization algorithms estimate the locations of sensors with initially unknown positions by using knowledge of the absolute positions of a few sensors and inter-sensor measurements such as distance and bearing measurements. Concerning the position-estimation problem, two main contributions are given. The first is a new Semidefinite Programming (SDP) framework to analyze and solve the problem of flip ambiguity that afflicts range-based network localization algorithms with incomplete ranging information. The occurrence of flip-ambiguous nodes and of errors due to flip ambiguity is studied, and with this information a new SDP formulation of the localization problem is built. Finally, a flip-ambiguity-robust network localization algorithm is derived, and its performance is studied by Monte-Carlo simulations. The second contribution in the field of position estimation concerns multihop networks. A multihop network is a network with a low degree of connectivity, in which any pair of nodes must rely on one or more intermediate nodes (hops) in order to communicate. Two new distance-based source localization algorithms, highly robust to the distance overestimates typically present in multihop networks, are presented and studied. The last part of this thesis discusses a new low-complexity tracking algorithm, inspired by Fano's sequential decoding algorithm, for the position tracking of a user in a WLAN-based indoor localization system.
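
As a flavor of the Monte-Carlo performance studies mentioned above, the sketch below evaluates a generic range-based least-squares localizer (not the thesis's SDP or multihop algorithms) over repeated noisy range realizations; the anchor layout and noise level are assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)
anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
source = np.array([3.0, 6.0])        # node with unknown position
sigma = 0.3                          # range-noise standard deviation (assumed)

def estimate(ranges):
    # Generic range-based least-squares localizer (a baseline only).
    residuals = lambda x: np.linalg.norm(anchors - x, axis=1) - ranges
    return least_squares(residuals, x0=anchors.mean(axis=0)).x

# Monte-Carlo evaluation of the position error over repeated noise draws.
errs = []
for _ in range(1000):
    ranges = np.linalg.norm(anchors - source, axis=1) + rng.normal(0, sigma, 4)
    errs.append(np.linalg.norm(estimate(ranges) - source))
print(f"RMSE: {np.sqrt(np.mean(np.square(errs))):.3f}")
```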

Relevance: 90.00%

Abstract:

In the first part of this work, the binding behavior of annexin A1 and annexin A2t on solid-supported lipid membranes composed of POPC and POPS was investigated. For both proteins, fluorescence microscopy showed that irreversible binding occurs only in the presence of POPS. Atomic force microscopy revealed the lateral organization of the annexins on the lipid membrane: both proteins assemble on the surface in the form of lateral aggregates (two-dimensional domains), and both the surface coverage and the domain size depend on the membrane composition and the calcium concentration. With increasing POPS content and calcium concentration, the coverage increases and the mean domain radius decreases. Together with detailed quartz crystal microbalance (QCM) binding studies of annexin A1, these results were used to develop a binding model based on a heterogeneous surface: reversible adsorption takes place on a POPC-rich matrix and irreversible adsorption on POPS-rich domains. By fitting dynamic Monte Carlo simulations based on two-dimensional random sequential adsorption (RSA), insight was gained into the membrane structure and the kinetic rate constants as functions of the calcium concentration and the protein incubation time. Irreversible binding is faster than reversible binding over the whole range of calcium concentrations, and the irreversible adsorption shows a markedly stronger dependence on the calcium concentration. The lower coverage at low Ca2+ content is explained mainly by the decrease in available binding sites on the surface. The good agreement between the domain structures obtained from the Monte Carlo simulations and the AFM images, and the fact that the simulated resonance-frequency curves could be fitted to the experimental QCM curves without difficulty, show that the simulation program developed here is well suited to describing the adsorption of annexin A1. Extracting the kinetic parameters from the two-dimensional RSA model is clearly superior to a simple Langmuir approach: a Langmuir model captures only a single macroscopic rate constant integrally, whereas the RSA model allows the reversible and irreversible binding processes to be treated separately and additionally yields microscopic information about the surface structure.
In the second part of this work, the thermotropic phase behavior of solid-supported phospholipid bilayers was investigated. Microstructured, free-standing membrane strips were prepared and examined by imaging ellipsometry, which made it possible to follow the temperature-dependent evolution of the layer thickness and the lateral membrane extension in parallel. The phase-transition temperatures determined for DMPC, diC15PC and DPPC lay 2-3 °C above the literature values for vesicular systems. In addition, a marked reduction in the cooperativity of the phase transition was found, indicating a strong influence of the substrate on solid-supported lipid membranes. A non-systematic dependence of the results on the surface preparation was also found, which makes it indispensable to introduce an internal standard when studying solid-supported substrates. In the analysis of the thermotropic phase-transition behavior of DMPC/cholesterol mixtures, the individual addressability of the structured lipid membranes was therefore exploited and a lipid strip of pure DMPC was used as the standard. In this way it could be shown that the phase-transition behavior typical of phospholipids disappears at 30 mol% cholesterol and above, owing to the formation of a fluid phase with highly ordered acyl chains that is induced only by higher sterols. Finally, the formation of an interdigitated bilayer upon addition of ethanol to a microstructured DMPC membrane was demonstrated. Imaging ellipsometry is a very good method for investigating solid-supported lipid membranes, since it offers very good vertical and sufficient lateral resolution. Although it remains inferior to an atomic force microscope in the latter respect, it is easier to handle with liquids and temperature control, provides faster imaging, and, as an optical method, is non-invasive.
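
To make the simulation approach of the first part concrete, here is a minimal sketch of two-dimensional random sequential adsorption of hard disks, the irreversible-binding ingredient of the dynamic Monte Carlo model; box size, disk radius and attempt count are illustrative, and the coexisting reversible species of the full model is omitted.

```python
import numpy as np

rng = np.random.default_rng(8)
# 2D random sequential adsorption (RSA) of hard disks: each attempt places a
# disk at a random position and keeps it only if it overlaps no earlier disk.
Lbox, r, attempts = 50.0, 1.0, 20000   # box edge, disk radius, trial count
centers = []

for _ in range(attempts):
    p = rng.uniform(0.0, Lbox, 2)              # random adsorption attempt
    if all(np.hypot(*(p - q)) >= 2 * r for q in centers):
        centers.append(p)                      # accepted: irreversibly adsorbed

coverage = len(centers) * np.pi * r**2 / Lbox**2
print(f"{len(centers)} adsorbed particles, coverage = {coverage:.3f}")
# RSA of disks jams near coverage 0.547; the time course of the coverage is
# what gets mapped onto the measured QCM resonance-frequency shifts.
```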