988 results for Monte Carlo algorithms


Relevance: 90.00%

Abstract:

The advent of the Auger Engineering Radio Array (AERA) necessitates the development of a powerful framework for the analysis of radio measurements of cosmic ray air showers. As AERA performs "radio-hybrid" measurements of air shower radio emission in coincidence with the surface particle detectors and fluorescence telescopes of the Pierre Auger Observatory, the radio analysis functionality had to be incorporated in the existing hybrid analysis solutions for fluorescence and surface detector data. This goal has been achieved in a natural way by extending the existing Auger Offline software framework with radio functionality. In this article, we lay out the design, highlights and features of the radio extension implemented in the Auger Offline framework. Its functionality has achieved a high degree of sophistication and offers advanced features such as vectorial reconstruction of the electric field, advanced signal processing algorithms, a transparent and efficient handling of FFTs, a very detailed simulation of detector effects, and the read-in of multiple data formats including data from various radio simulation codes. The source code of this radio functionality can be made available to interested parties on request. (C) 2011 Elsevier B.V. All rights reserved.

Relevance: 90.00%

Abstract:

Item response theory (IRT) comprises a set of statistical models which are useful in many fields, especially when there is interest in studying latent variables. These latent variables are directly considered in the Item Response Models (IRM) and they are usually called latent traits. A usual assumption for parameter estimation of the IRM, considering one group of examinees, is to assume that the latent traits are random variables which follow a standard normal distribution. However, many works suggest that this assumption does not hold in many cases. Furthermore, when this assumption is violated, the parameter estimates tend to be biased and misleading inferences can be drawn. Therefore, it is important to model the distribution of the latent traits properly. In this paper we present an alternative latent trait model based on the so-called skew-normal distribution; see Genton (2004). We used the centred parameterization proposed by Azzalini (1985). This approach ensures model identifiability, as pointed out by Azevedo et al. (2009b). A Metropolis-Hastings within Gibbs sampling (MHWGS) algorithm was built for parameter estimation using an augmented data approach. A simulation study was performed to assess parameter recovery under the proposed model and estimation method, as well as the effect of the asymmetry level of the latent trait distribution on parameter estimation. In addition, our approach was compared with other estimation methods (which assume a symmetric normal distribution for the latent traits). The results indicated that our proposed algorithm properly recovers all parameters. Specifically, the greater the asymmetry level, the better the performance of our approach compared with other approaches, mainly in the presence of small sample sizes (number of examinees). Furthermore, we analyzed a real data set which shows indications of asymmetry in the latent trait distribution. The results obtained by using our approach confirmed the presence of strong negative asymmetry of the latent traits distribution. (C) 2010 Elsevier B.V. All rights reserved.
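
For orientation, a minimal Metropolis-Hastings-within-Gibbs sketch for a two-parameter logistic (2PL) IRT model is given below. It uses a plain standard normal prior on the latent traits and flat priors on the item parameters instead of the paper's centred skew-normal prior with augmented data; all step sizes are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def loglik_person(theta_i, y_i, a, b):
    """2PL log-likelihood of examinee i over all items."""
    p = 1.0 / (1.0 + np.exp(-a * (theta_i - b)))
    return np.sum(y_i * np.log(p) + (1 - y_i) * np.log(1 - p))

def loglik_item(theta, y_j, a_j, b_j):
    """2PL log-likelihood of item j over all examinees."""
    p = 1.0 / (1.0 + np.exp(-a_j * (theta - b_j)))
    return np.sum(y_j * np.log(p) + (1 - y_j) * np.log(1 - p))

def mhwgs(y, n_iter=2000, step=0.5):
    n, J = y.shape
    theta, a, b = np.zeros(n), np.ones(J), np.zeros(J)
    for _ in range(n_iter):
        # Gibbs block 1: random-walk MH update of each latent trait,
        # with a N(0, 1) prior (the paper uses a centred skew-normal here).
        for i in range(n):
            prop = theta[i] + step * rng.standard_normal()
            log_r = (loglik_person(prop, y[i], a, b) - 0.5 * prop**2) \
                  - (loglik_person(theta[i], y[i], a, b) - 0.5 * theta[i]**2)
            if np.log(rng.uniform()) < log_r:
                theta[i] = prop
        # Gibbs block 2: joint MH update of each item's (a_j, b_j);
        # reflecting a_j at zero keeps the proposal symmetric and a_j > 0.
        for j in range(J):
            a_prop = abs(a[j] + step * rng.standard_normal())
            b_prop = b[j] + step * rng.standard_normal()
            log_r = loglik_item(theta, y[:, j], a_prop, b_prop) \
                  - loglik_item(theta, y[:, j], a[j], b[j])
            if np.log(rng.uniform()) < log_r:
                a[j], b[j] = a_prop, b_prop
    return theta, a, b
```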

Relevance: 90.00%

Abstract:

This paper presents a two-step pseudo likelihood estimation technique for generalized linear mixed models in which the random effects are correlated between groups. The core idea is to handle the intractable integrals in the likelihood function with a multivariate Taylor approximation. The accuracy of the estimation technique is assessed in a Monte Carlo study. An application with a binary response variable is presented using a real data set on credit defaults from two Swedish banks. Thanks to the two-step estimation technique, the proposed algorithm outperforms conventional pseudo likelihood algorithms in terms of computational time.
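
To illustrate the general idea of replacing an intractable random-effects integral with a Taylor expansion of its integrand, here is a minimal one-dimensional Laplace approximation for a logistic model with a single random intercept. This is a generic textbook device, not the paper's two-step multivariate procedure; all names and values are illustrative:

```python
import numpy as np
from scipy.optimize import minimize_scalar

def laplace_group_loglik(y, x, beta, tau):
    """Approximate log of one group's marginal likelihood
    int p(y | u) N(u; 0, tau^2) du by a second-order Taylor
    expansion of the integrand's exponent around its mode."""
    def neg_h(u):
        eta = x * beta + u
        ll = np.sum(y * eta - np.log1p(np.exp(eta)))   # Bernoulli log-likelihood
        lp = -0.5 * np.log(2 * np.pi) - np.log(tau) - 0.5 * u**2 / tau**2
        return -(ll + lp)
    res = minimize_scalar(neg_h)                        # find the mode u_hat
    u_hat, eps = res.x, 1e-5
    # numerical curvature of -h at the mode (positive at a maximum of h)
    curv = (neg_h(u_hat + eps) - 2 * neg_h(u_hat) + neg_h(u_hat - eps)) / eps**2
    # Laplace: log int exp(h) du ~= h(u_hat) + 0.5*log(2*pi) - 0.5*log(-h''(u_hat))
    return -res.fun + 0.5 * np.log(2 * np.pi) - 0.5 * np.log(curv)

# Toy usage: four observations in one group
y = np.array([1, 0, 1, 1])
x = np.array([0.5, -1.0, 0.2, 1.5])
print(laplace_group_loglik(y, x, beta=0.8, tau=1.2))
```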

Relevance: 90.00%

Abstract:

This study compared, under usual macroeconomic conditions, the efficiency of an Artificial Neural Network (ANN) model optimized by Genetic Algorithms (GAs) for pricing spot US Dollar options against the following conventional pricing models: Black-Scholes, Garman-Kohlhagen, trinomial trees and Monte Carlo simulation. The data used in this analysis, covering January 1999 to November 2006, were provided by the Brazilian Mercantile and Futures Exchange (BM&F) and by the US Federal Reserve. The comparisons and evaluations were carried out in MATLAB, version 7.0, whose toolboxes provided the environment and tools needed to implement and customize the models mentioned above. The analysis of the delta-hedging cost for each model indicated that, despite its greater complexity, using Genetic Algorithms exclusively for direct (binary) optimization of the synaptic weights of the Neural Networks did not produce results significantly superior to the conventional models.
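
As a rough illustration of the binary GA-over-weights setup the abstract describes, the sketch below evolves bit-string encodings of a tiny feed-forward network's weights against synthetic option-like data. The network size, bit depth, rates and data are all hypothetical placeholders; the study itself used MATLAB 7.0 toolboxes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical toy data: two pricing inputs (e.g. moneyness, time to expiry)
# and a stand-in for observed option premiums.
X = rng.uniform(0.8, 1.2, size=(200, 2))
y = np.maximum(X[:, 0] - 1.0, 0.0) + 0.05 * X[:, 1]

N_HID, BITS, LO, HI = 4, 12, -5.0, 5.0
N_W = 2 * N_HID + N_HID + N_HID + 1          # 2-4-1 network: weights + biases

def decode(bits):
    """Map each 12-bit gene to a real-valued weight in [LO, HI]."""
    ints = bits.reshape(N_W, BITS) @ (2 ** np.arange(BITS)[::-1])
    return LO + (HI - LO) * ints / (2 ** BITS - 1)

def forward(w, X):
    W1 = w[:2 * N_HID].reshape(2, N_HID)
    b1 = w[2 * N_HID:3 * N_HID]
    W2 = w[3 * N_HID:4 * N_HID]
    return np.tanh(X @ W1 + b1) @ W2 + w[-1]

def fitness(bits):
    return -np.mean((forward(decode(bits), X) - y) ** 2)   # higher is better

pop = rng.integers(0, 2, size=(60, N_W * BITS))
for gen in range(200):
    f = np.array([fitness(ind) for ind in pop])
    new = [pop[np.argmax(f)].copy()]                        # elitism
    while len(new) < len(pop):
        # binary tournament selection of two parents
        i, j = rng.integers(0, len(pop), 2)
        p1 = pop[i] if f[i] > f[j] else pop[j]
        i, j = rng.integers(0, len(pop), 2)
        p2 = pop[i] if f[i] > f[j] else pop[j]
        cut = int(rng.integers(1, p1.size))                 # one-point crossover
        child = np.concatenate([p1[:cut], p2[cut:]])
        child[rng.random(child.size) < 0.01] ^= 1           # bit-flip mutation
        new.append(child)
    pop = np.array(new)

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("in-sample MSE:", -fitness(best))
```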

Relevance: 90.00%

Abstract:

This paper presents new methodology for making Bayesian inference about dynamic models for exponential family observations. The approach is simulation-based and makes use of Markov chain Monte Carlo techniques. A Metropolis-Hastings algorithm is combined with the Gibbs sampler in repeated use of an adjusted version of normal dynamic linear models. Different alternative schemes are derived and compared. The approach is fully Bayesian in obtaining posterior samples for state parameters and unknown hyperparameters. Illustrations to real data sets with sparse counts and missing values are presented. Extensions to accommodate general distributions for observations and disturbances, intervention, non-linear models and multivariate time series are outlined.
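
A minimal sketch of the Metropolis-Hastings-within-Gibbs idea for a dynamic count model is shown below: single-site random-walk updates for the states of a Poisson local-level model, with a conjugate inverse-gamma Gibbs update for the evolution variance. This generic scheme stands in for the paper's adjusted dynamic-linear-model proposals; all rates and priors are assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sparse-count series standing in for the paper's data sets.
y = rng.poisson(lam=np.exp(np.sin(np.linspace(0.0, 6.0, 80))))
T = len(y)

def log_cond(xt, t, x, W):
    """Log full conditional of state x_t: Poisson(exp(x_t)) likelihood
    plus the Gaussian random-walk transitions to its neighbours."""
    lp = y[t] * xt - np.exp(xt)
    if t > 0:
        lp -= 0.5 * (xt - x[t - 1]) ** 2 / W
    if t < T - 1:
        lp -= 0.5 * (x[t + 1] - xt) ** 2 / W
    return lp

x, W = np.log(y + 0.5), 0.1                  # crude initial states and variance
for it in range(5000):
    # Metropolis-Hastings: single-site random-walk update of each state
    for t in range(T):
        prop = x[t] + 0.3 * rng.standard_normal()
        if np.log(rng.uniform()) < log_cond(prop, t, x, W) - log_cond(x[t], t, x, W):
            x[t] = prop
    # Gibbs: conjugate update of the evolution variance, W ~ IG(1, 1) prior
    ss = np.sum(np.diff(x) ** 2)
    W = 1.0 / rng.gamma(shape=1.0 + (T - 1) / 2.0, scale=1.0 / (1.0 + ss / 2.0))
```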

Relevance: 90.00%

Abstract:

We consider a class of sampling-based decomposition methods to solve risk-averse multistage stochastic convex programs. We prove a formula for the computation of the cuts necessary to build the outer linearizations of the recourse functions. This formula can be used to obtain an efficient implementation of Stochastic Dual Dynamic Programming applied to convex nonlinear problems. We prove the almost sure convergence of these decomposition methods when the relatively complete recourse assumption holds. We also prove the almost sure convergence of these algorithms when applied to risk-averse multistage stochastic linear programs that do not satisfy the relatively complete recourse assumption. The analysis is first done assuming the underlying stochastic process is interstage independent and discrete, with a finite set of possible realizations at each stage. We then indicate two ways of extending the methods and convergence analysis to the case when the process is interstage dependent.
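
For orientation, in the risk-neutral, interstage-independent linear setting such a cut takes the familiar SDDP form below (generic notation, not the paper's more general risk-averse convex formula). At a trial point $x_t^k$, with $N$ sampled realizations $\xi_{t+1}^j$ of the stage-$(t+1)$ uncertainty,

$$\mathcal{Q}_{t+1}(x_t) \;\ge\; \frac{1}{N}\sum_{j=1}^{N} Q_{t+1}\big(x_t^k, \xi_{t+1}^j\big) \;+\; \Big\langle \frac{1}{N}\sum_{j=1}^{N}\beta_{t+1}^{j,k},\; x_t - x_t^k \Big\rangle,$$

where $\mathcal{Q}_{t+1}$ is the expected recourse function and each $\beta_{t+1}^{j,k}$ is a subgradient of $Q_{t+1}(\cdot,\xi_{t+1}^j)$ at $x_t^k$, obtained from the dual solution of the corresponding subproblem.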

Relevance: 90.00%

Abstract:

We analyze the average performance of a general class of learning algorithms for the NP-complete problem of rule extraction by a binary perceptron. The examples are generated by a rule implemented by a teacher network of similar architecture. A variational approach is used in trying to identify the potential energy that leads to the largest generalization in the thermodynamic limit. We restrict our search to algorithms that always satisfy the binary constraints. A replica symmetric ansatz leads to a learning algorithm which exhibits a phase transition in violation of an information-theoretic bound. Stability analysis shows that this is due to a failure of the replica symmetric ansatz, and the first step of replica symmetry breaking (RSB) is studied. The variational method does not determine a unique potential, but it allows construction of a class with a unique minimum within each first-order valley. Members of this class improve on the performance of the Gibbs algorithm but fail to reach the Bayesian limit in the low generalization phase. They even fail to reach the performance of the best binary weight vector, an optimal clipping of the barycenter of version space. We find a trade-off between good performance in the low generalization phase and an early onset of perfect generalization. Although the RSB solution may be locally stable, we discuss the possibility that it fails to be the correct saddle point globally. ©2000 The American Physical Society.

Relevance: 90.00%

Abstract:

A technique for measuring charm baryon lifetimes from hadro-production data was presented. The measurement verified the lifetime analysis procedure in a sample with higher statistical precision. Other effects studied include mass reflections, the presence of a second charm particle, and mismeasurement of charm decays. Monte Carlo simulations were used for the detailed study of systematic effects in the charm data.

Relevance: 90.00%

Abstract:

The performance of muon reconstruction in CMS is evaluated using a large data sample of cosmic-ray muons recorded in 2008. Efficiencies of various high-level trigger, identification, and reconstruction algorithms have been measured for a broad range of muon momenta, and were found to be in good agreement with expectations from Monte Carlo simulation. The relative momentum resolution for muons crossing the barrel part of the detector is better than 1% at 10 GeV/c and is about 8% at 500 GeV/c, the latter being only a factor of two worse than expected with ideal alignment conditions. Muon charge misassignment ranges from less than 0.01% at 10 GeV/c to about 1% at 500 GeV/c. © 2010 IOP Publishing Ltd and SISSA.

Relevance: 90.00%

Abstract:

This paper proposes a new approach to optimal phasor measurement unit placement for fault location on electric power distribution systems, using the Greedy Randomized Adaptive Search Procedure (GRASP) metaheuristic and Monte Carlo simulation. The optimized placement model herein proposed is a general methodology that can be used to place devices aiming to record the voltage sag magnitudes for any fault location algorithm that uses voltage information measured at a limited set of nodes along the feeder. An overhead, three-phase, three-wire, 13.8 kV, 134-node, real-life feeder model is used to evaluate the algorithm. Tests show that the fault location results improved thanks to the optimized meter placement pinpointed by this methodology. © 2011 IEEE.
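
A minimal GRASP skeleton for this kind of placement problem might look as follows. The coverage proxy (index distance to a fixed set of sampled fault nodes) is a deliberately crude stand-in for the paper's voltage-sag-based fault locator, and every size and parameter is an assumption:

```python
import numpy as np

rng = np.random.default_rng(7)

N_NODES, N_METERS, N_SCEN = 134, 5, 300       # hypothetical problem sizes
FAULTS = rng.integers(0, N_NODES, N_SCEN)     # fixed Monte Carlo fault sample

def coverage(placement):
    """Fraction of sampled faults 'located' by the placement. A fault counts as
    located if some meter lies within 10 node indices of it; a real evaluation
    would run the voltage-sag fault-location algorithm instead."""
    meters = np.array(sorted(placement))
    return np.mean([np.min(np.abs(meters - f)) <= 10 for f in FAULTS])

def grasp(iters=30, alpha=0.3):
    best, best_val = None, -1.0
    for _ in range(iters):
        # Greedy randomized construction via a restricted candidate list (RCL)
        sol = set()
        while len(sol) < N_METERS:
            cand = [n for n in range(N_NODES) if n not in sol]
            gains = np.array([coverage(sol | {n}) for n in cand])
            cut = gains.max() - alpha * (gains.max() - gains.min())
            rcl = [n for n, g in zip(cand, gains) if g >= cut]
            sol.add(int(rng.choice(rcl)))
        # Local search: first-improvement swaps of one meter for one free node
        improved = True
        while improved:
            improved = False
            for m in list(sol):
                for n in range(N_NODES):
                    if n not in sol and coverage((sol - {m}) | {n}) > coverage(sol):
                        sol = (sol - {m}) | {n}
                        improved = True
                        break
                if improved:
                    break
        if coverage(sol) > best_val:
            best, best_val = sol, coverage(sol)
    return sorted(best), best_val

print(grasp())
```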

Relevance: 90.00%

Abstract:

In this paper a framework based on the decomposition of the first-order optimality conditions is described and applied to solve the Probabilistic Power Flow (PPF) problem in a coordinated but decentralized way in the context of multi-area power systems. The purpose of the decomposition framework is to solve the problem through a process of iteratively solving smaller subproblems, associated with each area of the power system. This strategy allows the probabilistic analysis of the variables of interest in a particular area without explicit knowledge of the network data of the other interconnected areas; only border information related to the tie-lines between areas needs to be exchanged. An efficient method for probabilistic analysis, considering uncertainty in n system loads, is applied. The proposal is to use a particular case of the point estimate method, known as the Two-Point Estimate Method (TPM), rather than the traditional approach based on Monte Carlo simulation. The main feature of the TPM is that it only requires solving 2n power flows to obtain the behavior of any random variable of interest. An iterative coordination algorithm between areas is also presented. This algorithm solves the multi-area PPF problem in a decentralized way, ensures the independent operation of each area, and integrates the decomposition framework and the TPM appropriately. The IEEE RTS-96 system is used to show the operation and effectiveness of the proposed approach, and Monte Carlo simulations are used to validate the results. © 2011 IEEE.
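
The 2n-evaluation idea can be sketched with Hong's two-point estimate scheme, of which the abstract's TPM is a case. The quadratic test function below is a stand-in for an actual power flow solution, and the toy check against plain Monte Carlo is only illustrative:

```python
import numpy as np

def two_point_estimate(h, mu, sigma, skew=None):
    """Hong's two-point estimate method: approximates the first two moments of
    Y = h(X) for K independent uncertain inputs using only 2K evaluations of h.
    h maps a length-K input vector (e.g. nodal loads) to a scalar output
    (e.g. a tie-line flow); in the paper each evaluation is one power flow."""
    K = len(mu)
    skew = np.zeros(K) if skew is None else np.asarray(skew)
    m1, m2 = 0.0, 0.0
    for k in range(K):
        half = skew[k] / 2.0
        xi1 = half + np.sqrt(K + half ** 2)
        xi2 = half - np.sqrt(K + half ** 2)
        w1 = -xi2 / (K * (xi1 - xi2))
        w2 = xi1 / (K * (xi1 - xi2))
        for xi, w in ((xi1, w1), (xi2, w2)):
            x = np.array(mu, dtype=float)   # all inputs at their means...
            x[k] = mu[k] + xi * sigma[k]    # ...except input k, at a concentration point
            y = h(x)
            m1 += w * y
            m2 += w * y ** 2
    return m1, np.sqrt(max(m2 - m1 ** 2, 0.0))   # mean and std of Y

# Toy check against Monte Carlo, with a stand-in quadratic "power flow":
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    mu, sigma = np.array([1.0, 2.0, 0.5]), np.array([0.1, 0.3, 0.05])
    h = lambda x: x[0] * x[1] + x[2] ** 2
    print(two_point_estimate(h, mu, sigma))       # 6 evaluations of h
    xs = rng.normal(mu, sigma, size=(200000, 3))  # 200000 evaluations
    ys = xs[:, 0] * xs[:, 1] + xs[:, 2] ** 2
    print(ys.mean(), ys.std())
```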

Relevance: 90.00%

Abstract:

The use of QoS parameters to evaluate the quality of service in a mesh network is essential, particularly when providing multimedia services. This paper proposes an algorithm for planning wireless mesh networks so as to satisfy a set of QoS parameters, given a set of test points (TPs) and potential access points (APs). Examples of QoS parameters include the probability of packet loss and the mean delay in responding to a request. The proposed algorithm uses a Mathematical Programming model to determine an adequate topology for the network and Monte Carlo simulation to verify whether the QoS parameters are being satisfied. The results obtained show that the proposed algorithm is able to find satisfactory solutions.
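
A minimal sketch of the Monte Carlo verification step, assuming each AP is modelled as a finite-buffer M/M/1 queue (a simplification of whatever network model the paper actually uses), with all rates and thresholds hypothetical:

```python
import numpy as np

rng = np.random.default_rng(3)

def estimate_qos(lam, mu=100.0, buffer_size=20, n_packets=20000):
    """Monte Carlo estimate of packet-loss probability and mean sojourn delay
    for one AP modelled as an M/M/1/B queue (arrival rate lam, service rate mu)."""
    arrivals = np.cumsum(rng.exponential(1.0 / lam, n_packets))
    services = rng.exponential(1.0 / mu, n_packets)
    depart = []          # departure times of packets currently in the system
    lost, delays = 0, []
    free_at = 0.0        # time the server finishes its current backlog
    for t, s in zip(arrivals, services):
        depart = [d for d in depart if d > t]   # purge packets already gone
        if len(depart) >= buffer_size:
            lost += 1                           # buffer full: packet dropped
            continue
        start = max(t, free_at)                 # FIFO service
        free_at = start + s
        depart.append(free_at)
        delays.append(free_at - t)
    return lost / n_packets, float(np.mean(delays))

def qos_ok(load_per_ap, max_loss=0.01, max_delay=0.05):
    """Check whether every AP in a candidate topology meets both QoS targets."""
    return all(loss <= max_loss and delay <= max_delay
               for loss, delay in (estimate_qos(lam) for lam in load_per_ap))

# Example: verify a topology in which three APs carry these aggregate loads
print(qos_ok([60.0, 80.0, 95.0]))
```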

Relevance: 90.00%

Abstract:

Traditionally, ancillary services are supplied by large conventional generators. However, with the huge penetration of distributed generators (DGs) resulting from the growing interest in satisfying energy requirements, and considering the benefits they can bring to the electrical system and to the environment, it appears reasonable to assume that ancillary services could also be provided by DGs in an economical and efficient way. In this paper, a settlement procedure for a reactive power market for DGs in distribution systems is proposed. Attention is directed to wind turbines connected to the network through permanent-magnet synchronous generators and doubly-fed induction generators. The generation uncertainty of this kind of DG is reduced by running a multi-objective optimization algorithm over multiple probabilistic scenarios generated through the Monte Carlo method, and by representing the active power generated by the DGs through Markov models. The objectives to be minimized are the payments of the distribution system operator to the DGs for reactive power, the curtailment of transactions committed in a previously settled active power market, the losses in the lines of the network, and a voltage profile index. The proposed methodology was tested using a modified IEEE 37-bus distribution test system. © 1969-2012 IEEE.
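
To make the scenario-generation step concrete, here is a minimal sketch of Monte Carlo sampling from a Markov model of a wind DG's active power output. The three states, power levels and transition matrix are purely illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(11)

# Hypothetical 3-state Markov model of a wind DG's active power output:
# states are per-unit power levels, P is the state-transition matrix.
LEVELS = np.array([0.0, 0.5, 1.0])          # p.u. output in each state
P = np.array([[0.80, 0.15, 0.05],
              [0.10, 0.70, 0.20],
              [0.05, 0.25, 0.70]])

def sample_scenario(horizon, state=0):
    """Draw one Monte Carlo scenario of DG output over `horizon` periods."""
    out = np.empty(horizon)
    for t in range(horizon):
        state = rng.choice(3, p=P[state])   # next state of the Markov chain
        out[t] = LEVELS[state]
    return out

# Monte Carlo: per-period mean and 5th percentile of available active power,
# the kind of statistics the market-settlement optimization would consume.
scens = np.array([sample_scenario(24) for _ in range(5000)])
print(scens.mean(axis=0))
print(np.percentile(scens, 5, axis=0))
```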

Relevance: 90.00%

Abstract:

The transcription process is crucial to life, and the enzyme RNA polymerase (RNAP) is the major component of the transcription machinery. The development of single-molecule techniques, such as magnetic and optical tweezers, atomic-force microscopy and single-molecule fluorescence, increased our understanding of the transcription process and complements traditional biochemical studies. Based on these studies, theoretical models have been proposed to explain and predict the kinetics of the RNAP during polymerization, highlighting the results achieved by models based on the thermodynamic stability of the transcription elongation complex. However, experiments showed that if more than one RNAP initiates from the same promoter, the transcription behavior changes slightly and new phenomena are observed. We proposed and implemented a theoretical model that considers collisions between RNAPs and predicts their cooperative behavior during multi-round transcription, generalizing the Bai et al. stochastic sequence-dependent model. In our approach, collisions between elongating enzymes modify their transcription rate values. We performed the simulations in Mathematica® and compared the results of single- and multiple-molecule transcription with experimental results and other theoretical models. Our multi-round approach can recover several expected behaviors, showing that the transcription process for the studied sequences can be accelerated by up to 48% when collisions are allowed: the dwell times at pause sites are reduced, as is the distance that the RNAPs backtrack from backtracking sites. © 2013 Costa et al.
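
A stripped-down Gillespie-style sketch of multi-RNAP elongation with a collision rule is given below. It uses a single uniform stepping rate and simply blocks a trailing RNAP when it reaches the leader's footprint, whereas the paper modifies sequence-dependent rates from the Bai et al. model; every constant here is a hypothetical placeholder:

```python
import numpy as np

rng = np.random.default_rng(5)

L = 500              # template length (bp)
K_FWD = 10.0         # uniform forward-stepping rate (1/s), a placeholder
FOOTPRINT = 35       # bp occluded by one RNAP
LOAD_EVERY = 2.0     # s between initiation attempts at the promoter

def simulate(t_end=200.0):
    """Gillespie-style simulation of several RNAPs elongating on one template.
    pos is ordered leader-first; a trailing RNAP whose next step would enter
    the footprint of the RNAP ahead gets rate 0 (the simplest collision rule)."""
    pos, t, next_load, finished = [], 0.0, 0.0, 0
    while t < t_end:
        rates = [0.0 if (i > 0 and pos[i - 1] - pos[i] <= FOOTPRINT) else K_FWD
                 for i in range(len(pos))]
        total = sum(rates)
        dt = rng.exponential(1.0 / total) if total > 0 else np.inf
        if t + dt >= next_load:
            # the promoter fires before any stepping event
            t, next_load = next_load, next_load + LOAD_EVERY
            if not pos or pos[-1] > FOOTPRINT:   # promoter region clear
                pos.append(0)
            continue
        t += dt
        i = rng.choice(len(pos), p=np.asarray(rates) / total)  # which RNAP steps
        pos[i] += 1
        if pos[0] >= L:                          # leader reaches the terminator
            pos.pop(0)
            finished += 1
    return finished

print("transcripts completed:", simulate())
```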

Relevance: 90.00%

Abstract:

Invariant mass spectra for jets reconstructed using the anti-kT and Cambridge-Aachen algorithms are studied for different jet grooming techniques in data corresponding to an integrated luminosity of 5 fb^-1, recorded with the CMS detector in proton-proton collisions at the LHC at a center-of-mass energy of 7 TeV. Leading-order QCD predictions for inclusive dijet and W/Z+jet production combined with parton-shower Monte Carlo models are found to agree overall with the data, and the agreement improves with the implementation of jet grooming methods used to distinguish merged jets of large transverse momentum from softer QCD gluon radiation. © 2013 CERN for the benefit of the CMS collaboration.