991 results for Sequential Monte Carlo


Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

In this work we report a theoretical investigation of the solvation of the isomers of tris(8-hydroxyquinolinolate) aluminum(III) (Alq3) and of the electroluminescent properties of Alq3 solvated in organic liquids such as methanol, ethanol, dimethylformamide (DMF) and acetonitrile, in order to understand the system's dependence on changes of environment and thereby improve the performance of transport films in electroluminescent OLED (Organic Light-Emitting Diode) devices. Finally, we investigate the mechanism of electronic transport in Alq3 by applying a low electric current to the molecule and obtaining the characteristic current-voltage curves of the device. The simulation applies the sequential Monte Carlo / quantum mechanics (S-MC/QM) method, which starts with a stochastic treatment to select the most probable low-energy structures and then applies a quantum treatment, using the ZINDO/S method, to compute the electronic spectra of the separated solvation shells. For the electrical transport properties we used the non-equilibrium Green's function method coupled to density functional theory (DFT), inferring that the outermost branches corresponding to the rings of Alq3 act as terminals for electron transfer. Our results showed that the average absorption spectra for the solvation of Alq3 in solution shift only minimally with the change of environment, in excellent agreement with experimental results from the literature; and the I-V curves confirmed the diode behavior of the device, corroborating the most pertinent directions with respect to the Alq3 terminals for satisfactory electronic transport.
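The two-stage protocol described above lends itself to a compact summary. Below is a minimal sketch of a sequential MC/QM loop in Python; `energy_fn` (the classical solute-solvent potential) and `qm_spectrum_fn` (e.g. a ZINDO/S single-point call) are hypothetical placeholders for the actual simulation engines.

```python
import random
import math

def metropolis_step(config, energy_fn, beta, step_size):
    """One Metropolis Monte Carlo move over the solvent coordinates."""
    trial = [x + random.uniform(-step_size, step_size) for x in config]
    dE = energy_fn(trial) - energy_fn(config)
    if dE <= 0 or random.random() < math.exp(-beta * dE):
        return trial
    return config

def sequential_mc_qm(initial, energy_fn, qm_spectrum_fn, beta,
                     n_steps=100_000, n_snapshots=100):
    """Stage 1: classical MC sampling; stage 2: QM on decorrelated snapshots."""
    config = initial
    interval = n_steps // n_snapshots  # keep statistically uncorrelated frames
    spectra = []
    for step in range(1, n_steps + 1):
        config = metropolis_step(config, energy_fn, beta, step_size=0.1)
        if step % interval == 0:
            # qm_spectrum_fn stands in for a quantum single-point calculation
            spectra.append(qm_spectrum_fn(config))
    # Average the electronic spectra over the solvation-shell snapshots
    n = len(spectra)
    return [sum(s[i] for s in spectra) / n for i in range(len(spectra[0]))]
```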

Relevance:

100.00%

Publisher:

Abstract:

Free energy calculations are a computational method for determining thermodynamic quantities, such as free energies of binding, via simulation. Currently, due to computational and algorithmic limitations, free energy calculations are limited in scope. In this work, we propose two methods for improving the efficiency of free energy calculations. First, we expand the state space of alchemical intermediates, and show that this expansion enables us to calculate free energies along lower variance paths. We use Q-learning, a reinforcement learning technique, to discover and optimize paths at low computational cost. Second, we reduce the cost of sampling along a given path by using sequential Monte Carlo samplers. We develop a new free energy estimator, pCrooks (pairwise Crooks), a variant on the Crooks fluctuation theorem (CFT), which enables decomposition of the variance of the free energy estimate for discrete paths, while retaining the beneficial characteristics of CFT. Combining these two advancements, we show that for some test models, optimal expanded-space paths have a nearly 80% reduction in variance relative to the standard path. Additionally, our free energy estimator converges at a more consistent rate and on average 1.8 times faster when we enable path searching, even when the cost of path discovery and refinement is considered.
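As a rough illustration of the sampling half of this approach, the sketch below implements a generic sequential Monte Carlo sampler over an alchemical lambda schedule with multinomial resampling. It is not the pCrooks estimator itself (which combines forward and reverse work measurements pairwise); `sample_x0`, `log_prob`, and `propagate` are assumed to be user-supplied, vectorized placeholders.

```python
import numpy as np

def smc_free_energy(sample_x0, log_prob, propagate, lambdas,
                    n_particles=1000, rng=None):
    """Estimate Delta F (in units of kT) between lambda=0 and lambda=1.

    log_prob(x, lam)  -> unnormalized log density at interpolation point lam
    propagate(x, lam) -> MCMC move targeting lam (e.g. a few Metropolis steps)
    """
    rng = rng or np.random.default_rng()
    x = sample_x0(n_particles)           # particles from the lambda=0 ensemble
    log_w = np.zeros(n_particles)        # accumulated log importance weights
    log_Z_ratio = 0.0
    for lam_prev, lam in zip(lambdas[:-1], lambdas[1:]):
        log_w += log_prob(x, lam) - log_prob(x, lam_prev)  # reweight to next rung
        # Resample when the effective sample size degenerates
        w = np.exp(log_w - log_w.max())
        ess = w.sum() ** 2 / (w ** 2).sum()
        if ess < n_particles / 2:
            log_Z_ratio += log_w.max() + np.log(w.mean())
            idx = rng.choice(n_particles, n_particles, p=w / w.sum())
            x, log_w = x[idx], np.zeros(n_particles)
        x = propagate(x, lam)            # rejuvenate particles at the current rung
    log_Z_ratio += log_w.max() + np.log(np.mean(np.exp(log_w - log_w.max())))
    return -log_Z_ratio                  # Delta F = -ln(Z_1 / Z_0)
```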

Relevance:

100.00%

Publisher:

Abstract:

The electronic properties of liquid ammonia are investigated by a sequential molecular dynamics/quantum mechanics approach. Quantum mechanics calculations for the liquid phase are based on a reparametrized hybrid exchange-correlation functional that reproduces the electronic properties of ammonia clusters [(NH3)n; n = 1-5]. For these small clusters, electron binding energies based on Green's function or electron propagator theory, coupled cluster with single, double, and perturbative triple excitations, and density functional theory (DFT) are compared. Reparametrized DFT results for the dipole moment, electron binding energies, and electronic density of states of liquid ammonia are reported. The calculated average dipole moment of liquid ammonia (2.05 +/- 0.09 D) corresponds to an increase of 27% compared to the gas phase value and it is 0.23 D above a prediction based on a polarizable model of liquid ammonia [Deng, J. Chem. Phys. 100, 7590 (1994)]. Our estimate for the ionization potential of liquid ammonia is 9.74 +/- 0.73 eV, which is approximately 1.0 eV below the gas phase value for the isolated molecule. The theoretical vertical electron affinity of liquid ammonia is predicted as 0.16 +/- 0.22 eV, in good agreement with the experimental result for the location of the bottom of the conduction band (-V0 = 0.2 eV). Vertical ionization potentials and electron affinities correlate with the total dipole moment of ammonia aggregates. (c) 2008 American Institute of Physics.

Relevance:

100.00%

Publisher:

Abstract:

The NMR spin coupling parameters, (1)J(N,H) and (2)J(H,H), and the chemical shielding, sigma((15)N), of liquid ammonia are studied from a combined and sequential QM/MM methodology. Monte Carlo simulations are performed to generate statistically uncorrelated configurations that are submitted to density functional theory calculations. Two different Lennard-Jones potentials are used in the liquid simulations. Electronic polarization is included in these two potentials via an iterative procedure with and without geometry relaxation, and the influence on the calculated properties is analyzed. B3LYP/aug-cc-pVTZ-J calculations place the (1)J(N,H) constants in the interval of -67.8 to -63.9 Hz, depending on the theoretical model used. These can be compared with the experimental result of -61.6 Hz. For the (2)J(H,H) coupling, the theoretical results vary between -10.6 and -13.01 Hz. The indirect experimental result derived from the partially deuterated liquid is -11.1 Hz. Inclusion of explicit hydrogen-bonded molecules gives a small but important contribution. The vapor-to-liquid shifts are also considered. This shift is calculated to be negligible for (1)J(N,H), in agreement with experiment. This is rationalized as a cancellation of the geometry relaxation and pure solvent effects. For the chemical shielding, sigma((15)N), calculations at the B3LYP/aug-pcS-3 level show that the vapor-to-liquid chemical shift requires the explicit use of solvent molecules. Considering only one ammonia molecule in an electrostatic embedding gives the wrong sign for the chemical shift, which is corrected only with the use of explicit additional molecules. The best calculated result for the vapor-to-liquid chemical shift Delta sigma((15)N) is -25.2 ppm, in good agreement with the experimental value of -22.6 ppm.

Relevance:

100.00%

Publisher:

Abstract:

This paper explains why the reliability assessment of energy-limited systems requires more detailed models of primary generating resource availability, internal and external generation dispatch, and customer demand than those commonly used for large power systems. It presents a methodology for their long-term reliability assessment, based on the full sequential Monte Carlo simulation technique with AC power flow, which can properly include these detailed models. By means of a real example, it is shown that the simplified modeling traditionally used for large power systems leads to pessimistic predictions when applied to an energy-limited system, and that it cannot predict all the load-point adequacy problems. © 2006 IEEE.
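For readers unfamiliar with the technique, the following is a minimal sketch of a chronological (sequential) Monte Carlo adequacy loop: it samples exponential up/down cycles per generating unit and accumulates loss-of-load statistics, but omits the AC power flow and the energy-limitation models that are the point of the paper. All names are illustrative.

```python
import random

def sequential_mc_reliability(capacity, mttf, mttr, load_curve, years=1000):
    """Chronological Monte Carlo adequacy assessment (simplified sketch).

    capacity[i], mttf[i], mttr[i]: rating (MW) and mean time to failure /
    repair (hours) of unit i; load_curve: hourly demand for one year.
    Returns estimates of LOLE (h/yr) and EENS (MWh/yr).
    """
    n = len(capacity)
    lole_hours = eens = 0.0
    for _ in range(years):
        up = [True] * n
        t_next = [random.expovariate(1.0 / mttf[i]) for i in range(n)]
        for hour, load in enumerate(load_curve):
            for i in range(n):          # advance each unit's up/down cycle
                while t_next[i] <= hour:
                    up[i] = not up[i]
                    mean = mttr[i] if not up[i] else mttf[i]
                    t_next[i] += random.expovariate(1.0 / mean)
            avail = sum(c for c, u in zip(capacity, up) if u)
            if avail < load:            # loss-of-load event this hour
                lole_hours += 1
                eens += load - avail
    return lole_hours / years, eens / years
```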

Relevance:

100.00%

Publisher:

Abstract:

The hydration of mesityl oxide (MOx) was investigated through a sequential quantum mechanics/molecular mechanics approach. Emphasis was placed on the analysis of the role played by water in the MOx syn-anti equilibrium and in the electronic absorption spectrum. Results for the structure of the MOx-water solution, the free energy of solvation, and polarization effects are also reported. Our main conclusion is that in the gas phase and in low-polarity solvents MOx exists dominantly in the syn form, while in aqueous solution it exists dominantly in the anti form. This conclusion is supported by Gibbs free energy calculations in the gas phase and in water, using quantum mechanical calculations with the polarizable continuum model and thermodynamic perturbation theory in Monte Carlo simulations with a polarized MOx model. Accounting for the in-water polarization of MOx is very important for correctly describing the solute-solvent electrostatic interaction. Our best estimate for the shift of the pi-pi* transition energy of MOx on going from the gas phase to water is a red shift of -2,520 +/- 90 cm(-1), which is only 110 cm(-1) (0.014 eV) below the experimental extrapolation of -2,410 +/- 90 cm(-1). This red shift of around -2,500 cm(-1) can be divided into two distinct and opposite contributions. One contribution is related to the syn -> anti conformational change, leading to a blue shift of about 1,700 cm(-1). The other contribution is the solvent effect on the electronic structure of MOx, leading to a red shift of around -4,200 cm(-1). Additionally, this red shift caused by the solvent effect on the electronic structure can be decomposed into approximately 60% due to the electrostatic bulk effect, 10% due to the explicit inclusion of the hydrogen-bonded water molecules, and 30% due to the explicit inclusion of the nearest water molecules.

Relevance:

100.00%

Publisher:

Abstract:

There is a continuous search for theoretical methods that are able to describe the effects of the liquid environment on molecular systems. Different methods emphasize different aspects, and the treatment of both local and bulk properties is still a great challenge. In this work, the electronic properties of a water molecule in the liquid environment are studied by relaxing the geometry and electronic distribution using the free energy gradient method. This is done in a series of steps, in each of which we run a purely molecular mechanical (MM) Monte Carlo Metropolis simulation of liquid water and subsequently perform a quantum mechanical/molecular mechanical (QM/MM) calculation of the ensemble averages of the charge distribution, atomic forces, and second derivatives. The MP2/aug-cc-pV5Z level is used to describe the electronic properties of the QM water. B3LYP with specially designed basis functions is used for the magnetic properties. Very good agreement is found for the local properties of water, such as the geometry, vibrational frequencies, dipole moment, dipole polarizability, chemical shift, and spin-spin coupling constants. The very good performance of the free energy method combined with a QM/MM approach, along with its possible limitations, is briefly discussed.
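Schematically, the relaxation alternates a classical simulation with QM/MM ensemble averaging and then takes a Newton step on the free energy surface. The sketch below assumes hypothetical `run_mm_simulation` and `qmmm_averages` callables standing in for the MM and QM engines.

```python
import numpy as np

def free_energy_gradient_relaxation(geometry, run_mm_simulation,
                                    qmmm_averages, max_iter=20, tol=1e-4):
    """Iterative solute relaxation in solution (free energy gradient sketch).

    run_mm_simulation(geometry, charges) -> ensemble of solvent configurations
    qmmm_averages(geometry, configs)     -> (charges, mean_force, mean_hessian)
    Both callables are placeholders for the actual MM and QM/MM codes.
    """
    charges = None
    for _ in range(max_iter):
        configs = run_mm_simulation(geometry, charges)  # classical MC of the liquid
        charges, force, hessian = qmmm_averages(geometry, configs)
        # Newton step: the mean force is the negative free energy gradient
        step = np.linalg.solve(hessian, force)
        geometry = geometry + step
        if np.linalg.norm(force) < tol:                 # converged mean force
            break
    return geometry, charges
```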

Relevance:

100.00%

Publisher:

Abstract:

Heterogeneous datasets arise naturally in most applications due to the use of a variety of sensors and measuring platforms. Such datasets can be heterogeneous in terms of their error characteristics and sensor models. Treating such data is most naturally accomplished using a Bayesian or model-based geostatistical approach; however, such methods generally scale rather badly with the size of the dataset, and require computationally expensive Monte Carlo based inference. Recently, within the machine learning and spatial statistics communities, many papers have explored the potential of reduced rank representations of the covariance matrix, often referred to as projected or fixed rank approaches. In such methods the covariance function of the posterior process is represented by a reduced rank approximation which is chosen such that there is minimal information loss. In this paper a sequential Bayesian framework for inference in such projected processes is presented. The observations are considered one at a time, which avoids the need for the high dimensional integrals typically required in a Bayesian approach. A C++ library, gptk, which is part of the INTAMAP web service, is introduced; it implements projected, sequential estimation and adds several novel features. In particular the library includes the ability to use a generic observation operator, or sensor model, to permit data fusion. It is also possible to cope with a range of observation error characteristics, including non-Gaussian observation errors. Inference for the covariance parameters is explored, including the impact of the projected process approximation on likelihood profiles. We illustrate the projected sequential method in application to synthetic and real datasets. Limitations and extensions are discussed. © 2010 Elsevier Ltd.
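The flavor of such a projected, sequential scheme can be conveyed with a small sketch: the latent field is summarized by its values at a fixed set of inducing sites, and each observation triggers a Kalman-style rank-one update of the Gaussian posterior over those values. This is a generic illustration, not the gptk API; the projection weights `phi` play the role of a linear observation operator (sensor model).

```python
import numpy as np

def rbf(a, b, ls=1.0):
    """Squared-exponential covariance between two locations."""
    d = np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return np.exp(-0.5 * (d / ls) ** 2)

class ProjectedSequentialGP:
    """One-at-a-time Bayesian updates of a reduced-rank (projected) GP."""

    def __init__(self, Z, kernel=rbf):
        self.Z, self.kernel = Z, kernel
        self.K = np.array([[kernel(a, b) for b in Z] for a in Z])
        self.K[np.diag_indices_from(self.K)] += 1e-9   # jitter for stability
        self.mu = np.zeros(len(Z))    # posterior mean at inducing sites
        self.S = self.K.copy()        # posterior covariance at inducing sites

    def update(self, x, y, noise_var):
        """Assimilate a single observation y at location x."""
        k = np.array([self.kernel(x, z) for z in self.Z])
        phi = np.linalg.solve(self.K, k)       # projection weights for x
        pred_var = phi @ self.S @ phi + noise_var
        gain = self.S @ phi / pred_var         # Kalman-style gain
        self.mu += gain * (y - phi @ self.mu)
        self.S -= np.outer(gain, phi @ self.S)

    def predict(self, x):
        """Posterior mean and (projected) variance at a new location."""
        k = np.array([self.kernel(x, z) for z in self.Z])
        phi = np.linalg.solve(self.K, k)
        return phi @ self.mu, phi @ self.S @ phi
```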

Relevance:

100.00%

Publisher:

Abstract:

The problem of decentralized sequential detection is studied in this thesis, where local sensors are memoryless, receive independent observations, and get no feedback from the fusion center. In addition to the traditional criteria of detection delay and error probability, we introduce a new constraint: the number of communications between local sensors and the fusion center. This metric reflects both the cost of establishing communication links and the overall energy consumption over time. A new formulation for communication-efficient decentralized sequential detection is proposed where the overall detection delay is minimized with constraints on both the error probabilities and the communication cost. Two types of problems are investigated based on the communication-efficient formulation: decentralized hypothesis testing and decentralized change detection. In the former case, an asymptotically person-by-person optimum detection framework is developed, where the fusion center performs a sequential probability ratio test based on dependent observations. The proposed algorithm utilizes not only the reported statistics from local sensors, but also the reporting times. The asymptotic relative efficiency of the proposed algorithm with respect to the centralized strategy is expressed in closed form. When the probabilities of false alarm and missed detection are close to one another, a reduced-complexity algorithm is proposed based on a Poisson arrival approximation. In addition, decentralized change detection with a communication cost constraint is also investigated. A person-by-person optimum change detection algorithm is proposed, where transmissions of sensing reports are modeled as a Poisson process. The optimum threshold value is obtained through dynamic programming. An alternative method with a simpler fusion rule is also proposed, where the threshold values in the algorithm are determined by a combination of sequential detection analysis and constrained optimization. In both the decentralized hypothesis testing and change detection problems, tradeoffs in parameter choices are investigated through Monte Carlo simulations.
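At the fusion center, the core building block is Wald's sequential probability ratio test. A minimal single-stream version is sketched below; the thesis's fusion rule additionally exploits the reporting times of the sensors, which this sketch omits.

```python
import math

def sprt(llr_stream, alpha=0.01, beta=0.01):
    """Wald's sequential probability ratio test.

    llr_stream yields per-sample log-likelihood ratios log p1(x)/p0(x);
    alpha is the false-alarm probability, beta the missed-detection
    probability. Returns (decision, number_of_samples_used).
    """
    upper = math.log((1 - beta) / alpha)   # accept H1 at or above this
    lower = math.log(beta / (1 - alpha))   # accept H0 at or below this
    stat, n = 0.0, 0
    for llr in llr_stream:
        n += 1
        stat += llr                        # accumulate evidence
        if stat >= upper:
            return "H1", n
        if stat <= lower:
            return "H0", n
    return "undecided", n                  # stream exhausted before a decision
```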

Relevance:

90.00%

Publisher:

Abstract:

1. There are a variety of methods that could be used to increase the efficiency of the design of experiments. However, it is only recently that such methods have been considered in the design of clinical pharmacology trials. 2. Two such methods, termed data-dependent (e.g. simulation) and data-independent (e.g. analytical evaluation of the information in a particular design), are becoming increasingly used as efficient methods for designing clinical trials. These two design methods have tended to be viewed as competitive, although a complementary role in design is proposed here. 3. The impetus for the use of these two methods has been the need for a more fully integrated approach to the drug development process that specifically allows for sequential development (i.e. where the results of early phase studies influence later-phase studies). 4. The present article briefly presents the background and theory that underpins both the data-dependent and -independent methods with the use of illustrative examples from the literature. In addition, the potential advantages and disadvantages of each method are discussed.

Relevance:

90.00%

Publisher:

Abstract:

Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending corresponding approaches beyond the local scale still represents a major challenge, yet is critically important for the development of reliable groundwater flow and contaminant transport models. To address this issue, I have developed a hydrogeophysical data integration technique based on a two-step Bayesian sequential simulation procedure that is specifically targeted towards larger-scale problems. The objective is to simulate the distribution of a target hydraulic parameter based on spatially exhaustive, but poorly resolved, measurements of a pertinent geophysical parameter and locally highly resolved, but spatially sparse, measurements of the considered geophysical and hydraulic parameters. To this end, my algorithm links the low- and high-resolution geophysical data via a downscaling procedure before relating the downscaled regional-scale geophysical data to the high-resolution hydraulic parameter field. I first illustrate the application of this novel data integration approach to a realistic synthetic database consisting of collocated high-resolution borehole measurements of the hydraulic and electrical conductivities and spatially exhaustive, low-resolution electrical conductivity estimates obtained from electrical resistivity tomography (ERT). The overall viability of this method is tested and verified by performing and comparing flow and transport simulations through the original and simulated hydraulic conductivity fields. The corresponding results indicate that the proposed data integration procedure does indeed allow for obtaining faithful estimates of the larger-scale hydraulic conductivity structure and reliable predictions of the transport characteristics over medium- to regional-scale distances. The approach is then applied to a corresponding field scenario consisting of collocated high-resolution measurements of the electrical conductivity, as measured using a cone penetrometer testing (CPT) system, and the hydraulic conductivity, as estimated from electromagnetic flowmeter and slug test measurements, in combination with spatially exhaustive low-resolution electrical conductivity estimates obtained from surface-based electrical resistivity tomography (ERT). The corresponding results indicate that the newly developed data integration approach is indeed capable of adequately capturing both the small-scale heterogeneity as well as the larger-scale trend of the prevailing hydraulic conductivity field. 
The results also indicate that this novel data integration approach is remarkably flexible and robust and hence can be expected to be applicable to a wide range of geophysical and hydrological data at all scale ranges. In the second part of my thesis, I evaluate in detail the viability of sequential geostatistical resampling as a proposal mechanism for Markov chain Monte Carlo (MCMC) methods applied to high-dimensional geophysical and hydrological inverse problems, in order to allow for a more accurate and realistic quantification of the uncertainty associated with the inferred models. Focusing on a series of pertinent crosshole georadar tomographic examples, I investigated two classes of geostatistical resampling strategies with regard to their ability to efficiently and accurately generate independent realizations from the Bayesian posterior distribution. The corresponding results indicate that, despite its popularity, sequential resampling is rather inefficient at drawing independent posterior samples for realistic synthetic case studies, notably for the practically common and important scenario of pronounced spatial correlation between model parameters. To address this issue, I have developed a new gradual-deformation-based perturbation approach, which is flexible with regard to the number of model parameters as well as the perturbation strength. Compared to sequential resampling, this newly proposed approach proved to be highly effective in decreasing the number of iterations required for drawing independent samples from the Bayesian posterior distribution.
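The core of the gradual-deformation proposal mentioned above is a variance-preserving combination of the current realization with a fresh prior draw. A minimal sketch, assuming a zero-mean Gaussian prior, follows; the thesis's version is more general with regard to parameter count and perturbation strength.

```python
import numpy as np

def gradual_deformation_proposal(m_current, sample_prior, theta):
    """Gradual-deformation proposal for MCMC over Gaussian random fields.

    Combines the current model realization with an independent draw from
    the same zero-mean Gaussian prior:
        m_prop = m_current * cos(theta) + m_new * sin(theta)
    Because cos^2 + sin^2 = 1, the proposal stays distributed under the
    prior. theta in (0, pi/2] controls the perturbation strength: small
    theta gives gentle moves (high acceptance), theta near pi/2 gives
    near-independent draws.
    """
    m_new = sample_prior()   # independent realization from the prior
    return m_current * np.cos(theta) + m_new * np.sin(theta)
```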

Relevance:

90.00%

Publisher:

Abstract:

In this letter, we obtain the Maximum Likelihood Estimator of position in the framework of Global Navigation Satellite Systems. This theoretical result is the basis of a completely different approach to the positioning problem, in contrast to the conventional two-step position estimation, which consists of estimating the synchronization parameters of the in-view satellites and then performing a position estimation with that information. To the authors' knowledge, this is a novel approach which copes with signal fading and mitigates multipath and jamming interferences. Besides, the concept of Position-based Synchronization is introduced, which states that synchronization parameters can be recovered from a user position estimation. We provide computer simulation results showing the robustness of the proposed approach in fading multipath channels. The Root Mean Square Error performance of the proposed algorithm is compared to those achieved with state-of-the-art synchronization techniques. A Sequential Monte Carlo based method is used to deal with the multivariate optimization problem resulting from the ML solution in an iterative way.
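One way to attack such a multivariate ML optimization with a sequential Monte Carlo flavor is a weight-resample-jitter particle search. The sketch below is only a loose stand-in for the letter's algorithm, under the assumption of a user-supplied `log_likelihood` over the position and synchronization parameters.

```python
import numpy as np

def smc_maximize(log_likelihood, bounds, n_particles=500, n_iter=50, rng=None):
    """Particle-based stochastic search for a multivariate ML problem.

    bounds: sequence of (low, high) pairs, one per parameter dimension.
    Repeatedly weights particles by likelihood, resamples, and jitters,
    annealing the search radius so particles concentrate near the maximum.
    """
    rng = rng or np.random.default_rng()
    b = np.asarray(bounds, dtype=float)
    lo, hi = b[:, 0], b[:, 1]
    x = rng.uniform(lo, hi, size=(n_particles, len(lo)))
    scale = (hi - lo) / 10.0
    for _ in range(n_iter):
        logw = np.array([log_likelihood(p) for p in x])
        w = np.exp(logw - logw.max())            # stabilized weights
        idx = rng.choice(n_particles, n_particles, p=w / w.sum())
        x = x[idx] + rng.normal(0.0, scale, size=x.shape)  # jitter survivors
        x = np.clip(x, lo, hi)
        scale *= 0.9                             # anneal the search radius
    best = max(range(n_particles), key=lambda i: log_likelihood(x[i]))
    return x[best]
```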

Relevance:

90.00%

Publisher:

Abstract:

The original contributions of this thesis to knowledge are novel digital readout architectures for hybrid pixel readout chips. The thesis presents an asynchronous bus-based architecture, a data-node-based column architecture and a network-based pixel matrix architecture for data transportation. It is shown that the data-node architecture achieves a readout efficiency of 99% with half the output rate of a bus-based system. The network-based solution avoids "broken" columns due to manufacturing errors, and it distributes internal data traffic more evenly across the pixel matrix than column-based architectures. An improvement of > 10% in efficiency is achieved with uniform and non-uniform hit occupancies. Architectural design has been done using transaction-level modeling (TLM) and sequential high-level design techniques to reduce the design and simulation time. It has been possible to simulate tens of column and full-chip architectures using the high-level techniques. A more than tenfold decrease in run-time is observed using these techniques compared to the register transfer level (RTL) design technique. A reduction of 50% in lines of code (LoC) has been achieved for the high-level models compared to the RTL description. Two architectures are then demonstrated in two hybrid pixel readout chips. The first chip, Timepix3, has been designed for the Medipix3 collaboration. According to the measurements, it consumes < 1 W/cm^2. It also delivers up to 40 Mhits/s/cm^2 with 10-bit time-over-threshold (ToT) and 18-bit time-of-arrival (ToA) with 1.5625 ns resolution. The chip uses a token-arbitrated, asynchronous two-phase handshake column bus for internal data transfer. It has also been successfully used in a multi-chip particle tracking telescope. The second chip, VeloPix, is a readout chip being designed for the upgrade of the Vertex Locator (VELO) of the LHCb experiment at CERN. Based on the simulations, it consumes < 1.5 W/cm^2 while delivering up to 320 Mpackets/s/cm^2, each packet containing up to 8 pixels. VeloPix uses a node-based data fabric to achieve a throughput of 13.3 Mpackets/s from the column to the end of column (EoC). By combining Monte Carlo physics data with high-level simulations, it has been demonstrated that the architecture meets the requirements of the VELO (260 Mpackets/s/cm^2 with an efficiency of 99%).

Relevance:

90.00%

Publisher:

Abstract:

This paper attempts to estimate whether firms use debt strategically to limit the entry of potential rivals. Using the Generalized Method of Moments (GMM), we evaluate the effect of specific assets, market share, and firm size, as proxies for market rents, and of entry barriers, on leverage levels at the firm level for Colombia during 1995-2003. We find that firms use specific assets to limit entry into the market and that leverage decreases as firms increase their market share.