45 results for Simulation-based methods
in Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
Miller and Gerlai proposed two methods for determining shoal membership in Danio rerio, one based on momentary mean inter-individual distances and the other on post hoc analysis of the trajectories of nearest-neighbor distances. We propose a method based on momentary nearest-neighbor distances and compare the three methods using simulation. In general, our method yielded results more similar to those of their second method than to those of their first, and it is computationally simpler.
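As an illustration of a momentary nearest-neighbor-distance criterion (a minimal sketch, not the authors' exact rule), the following Python snippet flags shoal members given positions at one instant; the distance threshold is an assumed, analyst-chosen parameter.

```python
import numpy as np
from scipy.spatial.distance import cdist

def shoal_members(positions, threshold):
    """Flag each fish as a shoal member if its momentary nearest-neighbor
    distance is below `threshold` (a parameter the analyst must choose)."""
    d = cdist(positions, positions)
    np.fill_diagonal(d, np.inf)           # ignore self-distances
    nearest = d.min(axis=1)
    return nearest < threshold

xy = np.random.default_rng(0).uniform(0, 100, size=(20, 2))   # 20 fish in a 100x100 arena
print(shoal_members(xy, threshold=15.0))
```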
Abstract:
This paper proposes a heuristic for the scheduling of capacity requests and the periodic assignment of radio resources in geostationary (GEO) satellite networks with star topology, using the Demand Assigned Multiple Access (DAMA) protocol at the link layer, and Multi-Frequency Time Division Multiple Access (MF-TDMA) and Adaptive Coding and Modulation (ACM) at the physical layer.
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures while raising spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. In these methods, however, features from regions of the electromagnetic spectrum not covered by the multispectral sensors are injected into the multispectral bands, and the physical spectral responses of the sensors are not considered during the process. This produces undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures in some features. The authors present a technique that takes into account the physical electromagnetic spectrum responses of the sensors during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
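For context, a rough sketch of the generic wavelet-injection scheme that such fusion methods build on (not the authors' spectral-response-aware method); the weight `alpha` is only a stand-in for a sensor-response factor and is an assumption.

```python
import numpy as np
import pywt

def inject_pan_details(ms_band, pan, alpha=0.7, wavelet="db2", level=2):
    """Generic wavelet injection: keep the approximation of the (upsampled)
    multispectral band and replace its detail coefficients with weighted
    panchromatic details; `alpha` stands in for a spectral-response weight."""
    c_ms = pywt.wavedec2(ms_band, wavelet, level=level)
    c_pan = pywt.wavedec2(pan, wavelet, level=level)
    fused = [c_ms[0]] + [tuple(alpha * d for d in lvl) for lvl in c_pan[1:]]
    return pywt.waverec2(fused, wavelet)

pan = np.random.rand(256, 256)
ms_band = np.random.rand(256, 256)        # already resampled to the PAN grid
fused_band = inject_pan_details(ms_band, pan)
```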
Abstract:
Motivation: The comparative analysis of gene gain and loss rates is critical for understanding the role of natural selection and adaptation in shaping gene family sizes. Studying complete genome data from closely related species allows accurate estimation of gene family turnover rates. Current methods and software tools, however, are not well designed for dealing with certain kinds of functional elements, such as microRNAs or transcription factor binding sites. Results: Here, we describe BadiRate, a new software tool to estimate family turnover rates, as well as the number of elements at internal phylogenetic nodes, by likelihood-based methods and parsimony. It implements two stochastic population models, which provide the appropriate statistical framework for testing hypotheses such as lineage-specific gene family expansions or contractions. We have assessed the accuracy of BadiRate by computer simulations and have also illustrated its functionality by analyzing a representative empirical dataset.
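As a toy illustration of the kind of stochastic gain-and-loss process such models build on (not BadiRate's actual estimator), the sketch below simulates a gene family size along a single branch with per-copy gain and loss rates; all rate values and the branch length are assumptions.

```python
import numpy as np

def simulate_family_size(n0, gain, loss, t, rng):
    """Gillespie-style simulation of a gene family evolving along a branch of
    length t under per-copy gain (birth) and loss (death) rates; a toy stand-in
    for the stochastic models the abstract refers to."""
    n, time = n0, 0.0
    while n > 0:
        total_rate = n * (gain + loss)
        time += rng.exponential(1.0 / total_rate)
        if time > t:
            break
        n += 1 if rng.random() < gain / (gain + loss) else -1
    return n

rng = np.random.default_rng(0)
sizes = [simulate_family_size(5, gain=0.1, loss=0.12, t=10.0, rng=rng) for _ in range(1000)]
print("mean family size after the branch:", np.mean(sizes))
```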
Abstract:
Standard Indirect Inference (II) estimators take a given finite-dimensional statistic, Z_n, and then estimate the parameters by matching the sample statistic with the model-implied population moment. We here propose a novel estimation method that utilizes all available information contained in the distribution of Z_n, not just its first moment. This is done by computing the likelihood of Z_n and then estimating the parameters either by maximizing the likelihood or by computing the posterior mean for a given prior on the parameters. These are referred to as the maximum indirect likelihood (MIL) and Bayesian indirect likelihood (BIL) estimators, respectively. We show that the IL estimators are first-order equivalent to the corresponding moment-based II estimator that employs the optimal weighting matrix. However, due to higher-order features of Z_n, the IL estimators are higher-order efficient relative to the standard II estimator. The likelihood of Z_n will in general be unknown, so simulated versions of the IL estimators are developed. Monte Carlo results for a structural auction model and a DSGE model show that the proposed estimators indeed have attractive finite sample properties.
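A compact sketch of a simulated indirect-likelihood estimator in the spirit of the MIL idea: simulate the model over a parameter grid, estimate the density of the auxiliary statistic Z_n by kernel smoothing, and pick the parameter that maximizes it at the observed statistic. The toy normal location model, grid, and statistic choice are all illustrative assumptions.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
n = 200
data = rng.normal(1.5, 1.0, n)                     # observed sample (toy model, true mean 1.5)
z_obs = np.array([data.mean(), data.var()])        # auxiliary statistic Z_n

def simulated_likelihood(theta, n_sims=500):
    """Simulate the model at theta, fit a kernel density to the simulated
    draws of Z_n, and evaluate that density at the observed statistic."""
    sims = rng.normal(theta, 1.0, size=(n_sims, n))
    Z = np.column_stack([sims.mean(axis=1), sims.var(axis=1)])
    return gaussian_kde(Z.T)(z_obs)[0]

grid = np.linspace(0.5, 2.5, 41)
theta_mil = grid[np.argmax([simulated_likelihood(t) for t in grid])]
print("simulated MIL estimate:", theta_mil)
```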
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e., whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.
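A schematic of the simulation-based evaluation logic described above, with a toy stand-in for the calibrated model and an illustrative data statistic (both are assumptions, not the paper's model): simulate the model many times, build the distribution of the statistic of interest, and check where the data value falls.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_economy(T=120, rho=0.9, spill=0.3):
    """Toy stand-in for a calibrated model: saving and investment are driven
    by a common AR(1) shock; returns their sample correlation."""
    shock = np.zeros(T)
    for t in range(1, T):
        shock[t] = rho * shock[t - 1] + rng.normal()
    saving = shock + rng.normal(size=T)
    invest = spill * shock + rng.normal(size=T)
    return np.corrcoef(saving, invest)[0, 1]

corr_data = 0.6                                    # illustrative statistic measured in the data
sims = np.array([simulate_economy() for _ in range(1000)])
# Simulated "p-value": how often the model puts the statistic at least as far
# from its simulated mean as the data value is.
p_value = np.mean(np.abs(sims - sims.mean()) >= abs(corr_data - sims.mean()))
print("simulated p-value:", p_value)
```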
Abstract:
The Network Revenue Management problem can be formulated as a stochastic dynamic programming problem (DP, with "optimal" solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] have proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models that give tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
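For concreteness, a minimal sketch of the DLP upper bound on a tiny two-leg, three-product network; the fares, capacities, and expected demands are made-up numbers for illustration only.

```python
import numpy as np
from scipy.optimize import linprog

fares = np.array([100.0, 150.0, 220.0])     # fare of each product (made-up)
A = np.array([[1, 0, 1],                    # leg-product incidence: product 3 uses both legs
              [0, 1, 1]])
capacity = np.array([80.0, 60.0])           # seats available on each leg
mean_demand = np.array([50.0, 40.0, 30.0])  # expected demand per product

# DLP: maximize fare revenue subject to leg capacities, with allocations
# bounded by expected demand; linprog minimizes, so negate the objective.
res = linprog(c=-fares, A_ub=A, b_ub=capacity,
              bounds=[(0, d) for d in mean_demand])
print("DLP upper bound on optimal revenue:", -res.fun)
```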
Abstract:
This paper shows how recently developed regression-based methods for the decomposition of health inequality can be extended to incorporate individual heterogeneity in the responses of health to the explanatory variables. We illustrate our method with an application to the Canadian NPHS of 1994. Our strategy for the estimation of heterogeneous responses is based on the quantile regression model. The results suggest that there is an important degree of heterogeneity in the association of health with explanatory variables which, in turn, accounts for a substantial percentage of inequality in observed health. A particularly interesting finding is that the marginal response of health to income is zero for healthy individuals but positive and significant for unhealthy individuals. The heterogeneity in the income response reduces both overall health inequality and income-related health inequality.
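A small sketch of the quantile-regression building block the decomposition relies on, fitted at several quantiles with statsmodels on synthetic data; the data-generating process and variable names are assumptions made purely for illustration.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 2000
log_income = rng.normal(10.0, 0.5, n)
# Assumed DGP in which the income effect varies across the conditional distribution of health.
u = rng.uniform(0.0, 1.0, n)
health = 0.5 + 0.05 * log_income * u + rng.normal(0.0, 0.1, n)

X = sm.add_constant(log_income)
for q in (0.1, 0.5, 0.9):
    res = sm.QuantReg(health, X).fit(q=q)
    print(f"quantile {q}: income coefficient = {res.params[1]:.3f}")
```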
Abstract:
A nonlocal variational formulation for interpolating a sparsely sampled image is introduced in this paper. The proposed variational formulation, originally motivated by image inpainting problems, encourages the transfer of information between similar image patches, following the paradigm of exemplar-based methods. Contrary to the classical inpainting problem, no complete patches are available from the sparse image samples, and the patch similarity criterion has to be redefined, as proposed here. Initial experimental results with the proposed framework, at very low sampling densities, are very encouraging. We also explore some departures from the variational setting, showing a remarkable ability to recover textures at low sampling densities.
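A minimal sketch of the kind of redefinition the abstract alludes to: a patch-similarity criterion restricted to pixels observed in both patches. The mean-squared form and the handling of empty overlaps are assumptions, not the authors' exact criterion.

```python
import numpy as np

def masked_patch_distance(p1, m1, p2, m2):
    """Dissimilarity between two patches computed only over pixels observed in
    both (m1, m2 are boolean observation masks); returns inf when the patches
    share no observed pixels."""
    shared = m1 & m2
    if not shared.any():
        return np.inf
    diff = (p1 - p2)[shared]
    return float(np.mean(diff ** 2))
```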
Abstract:
In this paper, a hybrid simulation-based algorithm is proposed for the Stochastic Flow Shop Problem. The main idea of the methodology is to transform the stochastic problem into a deterministic problem and then apply simulation to the latter. To achieve this goal, we rely on Monte Carlo simulation and an adapted version of a deterministic heuristic. This approach aims to provide flexibility and simplicity, since it is not constrained by restrictive prior assumptions and relies on well-tested heuristics.
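A simplified sketch of this simheuristic idea: build a deterministic instance from expected processing times, order the jobs with a simple rule, and then evaluate the resulting permutation by Monte Carlo simulation. The ordering rule and the lognormal noise model are assumptions, not the authors' heuristic.

```python
import numpy as np

def makespan(perm, p):
    """Permutation flow-shop makespan; p[j, k] = processing time of job j on machine k."""
    n, m = p.shape
    c = np.zeros((n, m))
    for i, j in enumerate(perm):
        for k in range(m):
            c[i, k] = max(c[i - 1, k] if i else 0.0, c[i, k - 1] if k else 0.0) + p[j, k]
    return c[-1, -1]

rng = np.random.default_rng(0)
mean_p = rng.uniform(1.0, 10.0, size=(6, 3))     # expected processing times (assumed)
perm = np.argsort(-mean_p.sum(axis=1))           # simple ordering rule on the deterministic instance
# Monte Carlo evaluation of the fixed permutation under lognormal processing-time noise (assumed).
samples = [makespan(perm, rng.lognormal(np.log(mean_p), 0.2)) for _ in range(1000)]
print("estimated expected makespan:", np.mean(samples))
```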
Abstract:
The purpose of this project is to define and implement a simulation model for the coordination and assignment of emergency services in traffic accidents. The model was defined using Coloured Petri Nets and implemented with the Rockwell Arena 7.0 software. The first simulation is a theoretical queue-based model, while the second is a more complete and realistic one thanks to a CORBA connection to a database containing geographic information on the fleets and routes. As a result of the study, and with the help of Google Earth, graphical simulations can be run to view the generated accidents, the service fleets, and the movement of vehicles from their bases to the accidents.
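A toy sketch of the first (queue-based) model, with accidents arriving at random and a shared fleet of emergency vehicles, written with SimPy rather than Arena or Coloured Petri Nets; all rates and the fleet size are assumptions.

```python
import random
import simpy

def accident(env, fleet):
    # An accident waits for a free vehicle, then occupies it for the service time.
    with fleet.request() as req:
        yield req
        yield env.timeout(random.expovariate(1 / 45.0))   # assumed mean service of 45 minutes

def arrivals(env, fleet):
    while True:
        yield env.timeout(random.expovariate(1 / 20.0))   # assumed mean inter-arrival of 20 minutes
        env.process(accident(env, fleet))

env = simpy.Environment()
fleet = simpy.Resource(env, capacity=3)                   # assumed fleet of 3 vehicles
env.process(arrivals(env, fleet))
env.run(until=8 * 60)                                     # simulate one 8-hour shift (in minutes)
```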
Abstract:
This paper presents an analysis of motor vehicle insurance claims relating to vehicle damage and to associated medical expenses. We use univariate severity distributions estimated with parametric and non-parametric methods. The methods are implemented using the statistical package R. Parametric analysis is limited to estimation of normal and lognormal distributions for each of the two claim types. The nonparametric analysis presented involves kernel density estimation. We illustrate the benefits of applying transformations to data prior to employing kernel-based methods. We use a log transformation and an optimal transformation, selected from a class of transformations, that produces symmetry in the data. The central aim of this paper is to provide educators with material that can be used in the classroom to teach statistical estimation methods, goodness-of-fit analysis and, importantly, statistical computing in the context of insurance and risk management. To this end, we have included in the Appendix of this paper all the R code that has been used in the analysis, so that readers, both students and educators, can fully explore the techniques described.
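The R code the authors reference is in their Appendix; purely as an illustration of the transform-then-smooth idea, the following Python sketch estimates a kernel density on log-transformed claims and maps it back with the change-of-variables formula. The synthetic claim data are an assumption.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(1)
claims = rng.lognormal(mean=8.0, sigma=1.2, size=500)   # synthetic severity data (assumption)

log_claims = np.log(claims)
kde_log = gaussian_kde(log_claims)                      # smooth on the transformed scale

# Back-transform the density: if Y = log X, then f_X(x) = f_Y(log x) / x.
grid = np.linspace(claims.min(), claims.max(), 400)
density = kde_log(np.log(grid)) / grid
```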
Abstract:
Recently, the surprising result that ab initio calculations on benzene and other planar arenes at correlated MP2, MP3, configuration interaction with singles and doubles (CISD), and coupled cluster with singles and doubles levels of theory using standard Pople basis sets yield nonplanar minima has been reported. The planar optimized structures turn out to be transition states presenting one or more large imaginary frequencies, whereas single-determinant-based methods lead to the expected planar minima and no imaginary frequencies. It has been suggested that such anomalous behavior may originate from the two-electron basis set incompleteness error. In this work, we show that the reported pitfalls can be interpreted in terms of intramolecular basis set superposition error (BSSE) effects, mostly between the C–H moieties constituting the arenes. We have carried out counterpoise-corrected optimizations and frequency calculations at the Hartree–Fock, B3LYP, MP2, and CISD levels of theory with several basis sets for a number of arenes. In all cases, correcting for intramolecular BSSE fixes the anomalous behavior of the correlated methods, whereas no significant differences are observed in the single-determinant case. Consequently, all systems studied are planar at all levels of theory. The effect of different intramolecular fragment definitions and the particular case of charged species, namely the cyclopentadienyl and indenyl anions, are also discussed.
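For reference, the counterpoise correction that the intramolecular scheme discussed above generalizes is the standard Boys-Bernardi expression for two fragments evaluated in the full basis; in the intramolecular case the fragments are chosen within one molecule (for example, the C–H moieties mentioned in the abstract). This is background, not the authors' working equations.

```latex
% Boys-Bernardi counterpoise-corrected interaction energy for fragments A and B,
% with every term evaluated in the full (AB) basis:
E_{\mathrm{int}}^{\mathrm{CP}} \;=\; E_{AB}(AB) \;-\; E_{A}(AB) \;-\; E_{B}(AB)
```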