983 results for Simulation Testing
Abstract:
The two-node tandem Jackson network serves as a convenient reference model for the analysis and testing of different methodologies and techniques in rare event simulation. In this paper we consider a new approach to efficiently estimating the probability that the content of the second buffer exceeds some high level L before it becomes empty, starting from a given state. The approach is based on a Markov additive process representation of the buffer processes, leading to an exponential change of measure to be used in an importance sampling procedure. Unlike the changes of measure proposed and studied in the recent literature, the one derived here is a function of the content of the first buffer. We prove that when the first buffer is finite, this method yields asymptotically efficient simulation for any set of arrival and service rates. In fact, the relative error is bounded independently of the level L, a new result that has not been established for any other known method. When the first buffer is infinite, we propose a natural extension of the exponential change of measure for the finite buffer case. In this case, the relative error is shown to be bounded (independently of L) only when the second server is the bottleneck, a result which is known to hold for some other methods derived through large deviations analysis. When the first server is the bottleneck, experimental results with our method suggest that the relative error grows at most linearly in L.
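For orientation only, the sketch below shows the generic importance-sampling mechanics for this overflow probability on the embedded jump chain of the tandem network. It uses the classical static exchange of the arrival rate and the second service rate as the change of measure, not the buffer-dependent measure derived in the paper, and all function names, parameters and the starting state are illustrative.

```python
import random

def is_overflow_prob(lam, mu1, mu2, L, start=(1, 1), n_runs=100_000, seed=0):
    """Importance-sampling estimate of P(queue 2 reaches L before emptying).

    Uses the classical static change of measure that exchanges the arrival
    rate and the rate of the second server (lam <-> mu2); this is NOT the
    buffer-dependent measure derived in the paper, just a baseline sketch.
    """
    rng = random.Random(seed)
    lam_t, mu1_t, mu2_t = mu2, mu1, lam        # tilted (simulation) rates
    total = 0.0
    for _ in range(n_runs):
        x1, x2 = start
        w = 1.0                                 # running likelihood ratio
        while 0 < x2 < L:
            # active transitions with their rates under both measures
            events = [('arr', lam, lam_t)]
            if x1 > 0:
                events.append(('s1', mu1, mu1_t))
            if x2 > 0:
                events.append(('s2', mu2, mu2_t))
            r_p = sum(e[1] for e in events)     # total rate, original measure
            r_q = sum(e[2] for e in events)     # total rate, tilted measure
            u = rng.random() * r_q
            acc = 0.0
            for name, rp, rq in events:
                acc += rq
                if u <= acc:
                    w *= (rp / r_p) / (rq / r_q)    # per-step likelihood ratio
                    if name == 'arr':
                        x1 += 1
                    elif name == 's1':
                        x1 -= 1
                        x2 += 1
                    else:
                        x2 -= 1
                    break
        if x2 >= L:
            total += w                          # weighted indicator of overflow
    return total / n_runs
```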
Abstract:
The Agricultural Production Systems Simulator (APSIM) is a modular modelling framework that has been developed by the Agricultural Production Systems Research Unit in Australia. APSIM was developed to simulate biophysical processes in farming systems, in particular where there is interest in the economic and ecological outcomes of management practice in the face of climatic risk. The paper outlines APSIM's structure and provides details of the concepts behind the different plant, soil and management modules. These modules include a diverse range of crops, pastures and trees, soil processes including water balance, N and P transformations, soil pH, erosion and a full range of management controls. Reports of APSIM testing in a diverse range of systems and environments are summarised. An example of model performance in a long-term cropping systems trial is provided. APSIM has been used in a broad range of applications, including support for on-farm decision making, farming systems design for production or resource management objectives, assessment of the value of seasonal climate forecasting, analysis of supply chain issues in agribusiness activities, development of waste management guidelines, risk assessment for government policy making and as a guide to research and education activity. An extensive citation list for these model testing and application studies is provided. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
Abstract:
In Part 1 of this paper a methodology for back-to-back testing of simulation software was described. Residuals with error-dependent geometric properties were generated. A set of potential coding errors was enumerated, along with a corresponding set of feature matrices, which describe the geometric properties imposed on the residuals by each of the errors. In this part of the paper, an algorithm is developed to isolate the coding errors present by analysing the residuals. A set of errors is isolated when the subspace spanned by their combined feature matrices corresponds to that of the residuals. Individual feature matrices are compared to the residuals and classified as 'definite', 'possible' or 'impossible'. The status of 'possible' errors is resolved using a dynamic subset testing algorithm. To demonstrate and validate the testing methodology presented in Part 1 and the isolation algorithm presented in Part 2, a case study is presented using a model for biological wastewater treatment. Both single and simultaneous errors deliberately introduced into the simulation code are correctly detected and isolated. Copyright (C) 2003 John Wiley & Sons, Ltd.
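As a loose illustration of the subspace comparison described here, the sketch below labels candidate coding errors by comparing each feature matrix's column space with that of the residuals. The overlap measure and thresholds are my own simplified heuristic, not the paper's classification rules, and all names are hypothetical.

```python
import numpy as np

def subspace_basis(M, tol=1e-8):
    """Orthonormal basis for the column space of M (via SVD)."""
    u, s, _ = np.linalg.svd(M, full_matrices=False)
    return u[:, s > tol * s.max()]

def classify_errors(residuals, feature_matrices, tol=1e-6):
    """Crude definite/possible/impossible labelling by subspace comparison.

    residuals        : (n_samples, n_residuals) array of back-to-back residuals
    feature_matrices : dict name -> (n_samples, k) feature matrix
    """
    R = subspace_basis(residuals)
    labels = {}
    for name, F in feature_matrices.items():
        B = subspace_basis(F)
        # How much of the feature subspace lies inside the residual subspace?
        proj = R @ (R.T @ B)
        overlap = np.linalg.norm(proj) / np.linalg.norm(B)
        if overlap > 1 - tol:
            labels[name] = 'definite'     # feature subspace fully contained
        elif overlap < tol:
            labels[name] = 'impossible'   # feature subspace orthogonal to residuals
        else:
            labels[name] = 'possible'     # partial overlap: needs subset testing
    return labels
```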
Abstract:
This study develops a theoretical model that explains the effectiveness of the balanced scorecard approach by means of a system dynamics and feedback learning perspective. Presumably, the balanced scorecard leads to a better understanding of context, allowing managers to externalize and improve their mental models. We present a set of hypotheses about the influence of the balanced scorecard approach on mental models and performance. A test based on a simulation experiment that uses a system dynamics model is performed. The experiment included three types of parameters: financial indicators; balanced scorecard indicators; and balanced scorecard indicators with the aid of a strategy map review. Two out of the three hypotheses were confirmed. It was concluded that a strategy map review positively influences mental model similarity, and mental model similarity positively influences performance.
Abstract:
The performance of the Weather Research and Forecast (WRF) model in wind simulation was evaluated under different numerical and physical options for an area of Portugal located in complex terrain and characterized by its significant wind energy resource. The grid nudging and integration time of the simulations were the tested numerical options. Since the goal is to simulate the near-surface wind, the physical parameterization schemes regarding the boundary layer were the ones under evaluation. The influences of local terrain complexity and of the simulation domain resolution on the model results were also studied. Data from three wind measuring stations located within the chosen area were compared with the model results in terms of Root Mean Square Error, Standard Deviation Error and Bias. Wind speed histograms, occurrences and energy wind roses were also used for model evaluation. Overall, the model accurately reproduced the local wind regime, despite a significant underestimation of the wind speed. The wind direction is reasonably simulated by the model, especially in wind regimes where there is a clear dominant sector, but in the presence of low wind speeds the characterization of the wind direction (observed and simulated) is very subjective and led to higher deviations between simulations and observations. Within the tested options, results show that the use of grid nudging in simulations that do not exceed an integration time of 2 days is the best numerical configuration, and that the parameterization set composed of the MM5–Yonsei University–Noah physical schemes is the most suitable for this site. Results were poorer in sites with higher terrain complexity, mainly due to limitations of the terrain data supplied to the model. Increasing the simulation domain resolution alone is not enough to significantly improve the model performance. Results suggest that error minimization in the wind simulation can be achieved by testing and choosing a suitable numerical and physical configuration for the region of interest, together with the use of high resolution terrain data, if available.
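The error measures named above can be computed in a few lines; the definitions below are the common ones (the paper's exact conventions, such as the sign of the bias, are not stated here), and the example values are made up.

```python
import numpy as np

def wind_errors(observed, simulated):
    """Common evaluation metrics for simulated vs. observed wind speed."""
    obs = np.asarray(observed, dtype=float)
    sim = np.asarray(simulated, dtype=float)
    err = sim - obs
    bias = err.mean()                         # mean error (positive = overestimation)
    rmse = np.sqrt((err ** 2).mean())         # root mean square error
    sde = err.std(ddof=0)                     # standard deviation of the error
    return {'BIAS': bias, 'RMSE': rmse, 'SDE': sde}

# Example with made-up hourly wind speeds (m/s)
print(wind_errors([5.1, 6.3, 4.8, 7.0], [4.6, 5.9, 4.9, 6.2]))
```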
Abstract:
This paper concerns a PV system connected to the electric grid through power electronic converters, using a classical PI controller. The modelling of the converters emulates the association of a DC-DC boost converter with a two-level power inverter (TwLI) or a three-level power inverter (ThLI) in order to follow the performance of an experimental test system. Pulse width modulation (PWMo) by sliding mode control (SMCo) associated with space vector modulation (SVMo) is applied to the boost converter and the inverter. The PV system is described by the five-parameter equivalent circuit. Parameter identification and simulation studies are performed for comparison with the experimental test system.
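For reference, a minimal sketch of the five-parameter (single-diode) equivalent circuit mentioned above, solved for the module current by damped fixed-point iteration. The parameter values are illustrative only, not those identified in the paper, and the cell count and naming are assumptions.

```python
import math

def pv_current(v, i_ph, i_0, r_s, r_sh, n, cells=72, t_k=298.15):
    """Solve the implicit single-diode (five-parameter) equation
       I = Iph - I0*(exp((V + I*Rs)/(n*Ns*Vt)) - 1) - (V + I*Rs)/Rsh
    for the module current by damped fixed-point iteration."""
    k, q = 1.380649e-23, 1.602176634e-19
    vt = k * t_k / q                          # thermal voltage per cell
    i = i_ph                                  # initial guess
    for _ in range(200):
        v_d = v + i * r_s
        i_new = i_ph - i_0 * (math.exp(v_d / (n * cells * vt)) - 1.0) - v_d / r_sh
        if abs(i_new - i) < 1e-9:
            break
        i = 0.5 * (i + i_new)                 # damped update for stability
    return i

# Example with illustrative (not identified) parameters
print(pv_current(v=30.0, i_ph=8.2, i_0=1e-9, r_s=0.35, r_sh=300.0, n=1.3))
```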
Abstract:
In this article, we develop a specification technique for building the multiplicative time-varying GARCH models of Amado and Teräsvirta (2008, 2013). The variance is decomposed into an unconditional and a conditional component such that the unconditional variance component is allowed to evolve smoothly over time. This nonstationary component is defined as a linear combination of logistic transition functions with time as the transition variable. The appropriate number of transition functions is determined by a sequence of specification tests. For that purpose, a coherent modelling strategy based on statistical inference is presented. It relies heavily on Lagrange multiplier-type misspecification tests. The tests are easily implemented as they are entirely based on auxiliary regressions. Finite-sample properties of the strategy and tests are examined by simulation. The modelling strategy is illustrated in practice with two real examples: an empirical application to daily exchange rate returns and another to daily coffee futures returns.
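A common way to write the multiplicative decomposition described here is sketched below; the notation (single-location logistic transitions, first-order GARCH) is illustrative and simplified relative to the general specification in Amado and Teräsvirta.

```latex
% Multiplicative decomposition of the conditional variance (notation is
% illustrative; see Amado and Teräsvirta for the exact parameterisation).
\begin{align*}
  \varepsilon_t &= \zeta_t \sqrt{h_t\, g_t}, \qquad \zeta_t \sim \mathrm{iid}(0,1), \\
  h_t &= \alpha_0 + \alpha_1 \frac{\varepsilon_{t-1}^2}{g_{t-1}} + \beta_1 h_{t-1}
        && \text{(conditional GARCH component)} \\
  g_t &= 1 + \sum_{l=1}^{r} \delta_l\, G_l\!\left(\tfrac{t}{T};\, \gamma_l, c_l\right)
        && \text{(smooth unconditional component)} \\
  G_l(x; \gamma, c) &= \bigl(1 + \exp\{-\gamma (x - c)\}\bigr)^{-1}, \qquad \gamma > 0
        && \text{(logistic transition function)}
\end{align*}
```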
Abstract:
The application of DNA-based markers to the task of discriminating among alternate salmon runs has evolved in accordance with ongoing genomic developments and has increasingly enabled resolution of which genetic markers associate with important life-history differences. Accurate and efficient identification of the most likely origin of salmon encountered during ocean fisheries, or at salvage from freshwater diversion and monitoring facilities, has far-reaching consequences for improving measures for management, restoration and conservation. Near-real-time provision of high-resolution identity information enables prompt response to changes in encounter rates. We thus continue to develop new tools to provide the greatest statistical power for run identification. As a proof of concept for genetic identification improvements, we conducted simulation and blind tests for 623 known-origin Chinook salmon (Oncorhynchus tshawytscha) to compare and contrast the accuracy of different population sampling baselines and microsatellite loci panels. This test included 35 microsatellite loci (1266 alleles), some known to be associated with specific coding regions of functional significance, such as the circadian rhythm cryptochrome genes, and others not known to be associated with any functional importance. Identification of the fall run was demonstrated with unprecedented accuracy. Overall, the top performing panel and baseline (HMSC21) were predicted to have a success rate of 98%, but the blind-test success rate was 84%. Findings on bias or non-bias are discussed to target primary areas for further research and resolution.
Abstract:
We consider a dynamic multifactor model of investment with financing imperfections, adjustment costs and fixed and variable capital. We use the model to derive a test of financing constraints based on a reduced-form variable capital equation. Simulation results show that this test correctly identifies financially constrained firms even when the estimation of firms' investment opportunities is very noisy. In addition, the test is well specified in the presence of both concave and convex adjustment costs of fixed capital. We confirm empirically the validity of this test on a sample of small Italian manufacturing companies.
Abstract:
This paper illustrates the philosophy which forms the basis of calibration exercises in general equilibrium macroeconomic models and the details of the procedure, the advantages and the disadvantages of the approach, with particular reference to the issue of testing "false" economic models. We provide an overview of the most recent simulation-based approaches to the testing problem and compare them to standard econometric methods used to test the fit of non-linear dynamic general equilibrium models. We illustrate how simulation-based techniques can be used to formally evaluate the fit of a calibrated model to the data and to obtain ideas on how to improve the model design, using a standard problem in the international real business cycle literature, i.e. whether a model with complete financial markets and no restrictions on capital mobility is able to reproduce the second-order properties of aggregate saving and aggregate investment in an open economy.
Abstract:
The present work focuses on the skew-symmetry index as a measure of social reciprocity. This index is based on the correspondence between the amount of behaviour that each individual addresses to its partners and what it receives from them in return. Although the skew-symmetry index enables researchers to describe social groups, statistical inferential tests are required. The main aim of the present study is to propose an overall statistical technique for testing symmetry in experimental conditions, calculating the skew-symmetry statistic (Φ) at group level. Sampling distributions for the skew-symmetry statistic have been estimated by means of a Monte Carlo simulation in order to allow researchers to make statistical decisions. Furthermore, this study will allow researchers to choose the optimal experimental conditions for carrying out their research, as the power of the statistical test has been estimated. This statistical test could be used in experimental social psychology studies in which researchers may control the group size and the number of interactions within dyads.
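A minimal sketch of the kind of Monte Carlo test described here: compute the group-level skew-symmetry statistic of a sociomatrix and estimate a p-value by resampling. The statistic below is the usual ratio of the skew-symmetric part to the whole matrix, and the null model (splitting each dyad's total with a fair binomial) is one plausible choice rather than necessarily the paper's; all names are hypothetical.

```python
import numpy as np

def skew_symmetry(x):
    """Phi = ||skew-symmetric part||^2 / ||X||^2 for a sociomatrix X (diagonal ignored)."""
    x = np.array(x, dtype=float)
    np.fill_diagonal(x, 0.0)
    k = (x - x.T) / 2.0                       # skew-symmetric part
    return (k ** 2).sum() / (x ** 2).sum()

def mc_p_value(x, n_sim=10_000, seed=0):
    """Monte Carlo p-value under a simple null of dyadic reciprocity:
    each dyad's total is split Binomial(n_ij + n_ji, 0.5)."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x, dtype=float)
    obs = skew_symmetry(x)
    n = x.shape[0]
    count = 0
    for _ in range(n_sim):
        sim = np.zeros_like(x)
        for i in range(n):
            for j in range(i + 1, n):
                total = int(x[i, j] + x[j, i])
                a = rng.binomial(total, 0.5)
                sim[i, j], sim[j, i] = a, total - a
        count += skew_symmetry(sim) >= obs
    return (count + 1) / (n_sim + 1)
```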
Abstract:
In the first part of the study, nine estimators of the first-order autoregressive parameter are reviewed and a new estimator is proposed. The relationships and discrepancies between the estimators are discussed in order to achieve a clear differentiation. In the second part of the study, the precision in the estimation of autocorrelation is studied. The performance of the ten lag-one autocorrelation estimators is compared in terms of Mean Square Error (combining bias and variance) using data series generated by Monte Carlo simulation. The results show that there is no single optimal estimator for all conditions, suggesting that the estimator ought to be chosen according to sample size and to the information available about the likely direction of the serial dependence. Additionally, the probability of labelling an actually existing autocorrelation as statistically significant is explored using Monte Carlo sampling. The power estimates obtained are quite similar among the tests associated with the different estimators. These estimates show the low probability of detecting autocorrelation in series with fewer than 20 measurement times.
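For illustration, the scaffolding of such a Monte Carlo comparison might look like the sketch below; only the conventional lag-one estimator is shown, not the ten estimators compared in the study, and all parameter choices are arbitrary.

```python
import numpy as np

def r1_conventional(x):
    """Conventional lag-one autocorrelation estimator."""
    x = np.asarray(x, dtype=float)
    d = x - x.mean()
    return (d[:-1] * d[1:]).sum() / (d ** 2).sum()

def mc_mse(estimator, phi=0.3, n=20, n_sim=20_000, seed=0):
    """Bias, variance and MSE of an estimator on simulated AR(1) series."""
    rng = np.random.default_rng(seed)
    est = np.empty(n_sim)
    for s in range(n_sim):
        e = rng.standard_normal(n + 50)
        x = np.empty(n + 50)
        x[0] = e[0]
        for t in range(1, n + 50):
            x[t] = phi * x[t - 1] + e[t]       # AR(1) recursion
        est[s] = estimator(x[50:])             # drop burn-in
    bias = est.mean() - phi
    var = est.var()
    return {'bias': bias, 'variance': var, 'MSE': bias ** 2 + var}

print(mc_mse(r1_conventional))
```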
Abstract:
The ultimate goal of any research in the mechanism/kinematic/design area may be called predictive design, i.e. the optimisation of mechanism proportions at the design stage without requiring extensive life and wear testing. This is an ambitious goal and can be realised through the development and refinement of numerical (computational) technology that facilitates the design analysis and optimisation of complex mechanisms, mechanical components and systems. As part of a systematic design methodology, this thesis concentrates on kinematic synthesis (kinematic design and analysis) methods within the mechanism synthesis process. The main task of kinematic design is to find all possible solutions, in the form of structural parameters, that accomplish the desired requirements of motion. The main formulations of kinematic design can be broadly divided into exact synthesis and approximate synthesis formulations. The exact synthesis formulation is based on solving n linear or nonlinear equations in n variables, and the solutions are obtained by adopting closed-form classical or modern algebraic solution methods, or numerical solution methods based on polynomial continuation or homotopy. The approximate synthesis formulation is based on minimising the approximation error by direct optimisation. The main drawbacks of the exact synthesis formulation are: (ia) limitations on the number of design specifications and (iia) failure to handle design constraints, especially inequality constraints. The main drawbacks of the approximate synthesis formulations are: (ib) it is difficult to choose a proper initial linkage and (iib) it is hard to find more than one solution. Recent formulations for solving the approximate synthesis problem adopt polynomial continuation, providing several solutions, but they cannot handle inequality constraints. Based on practical design needs, a mixed exact-approximate position synthesis with two exact and an unlimited number of approximate positions has also been developed. The solution space is presented as a ground pivot map, but the pole between the exact positions cannot be selected as a ground pivot. In this thesis the exact synthesis problem of planar mechanisms is solved by generating all possible solutions for the optimisation process, including solutions in positive-dimensional solution sets, within inequality constraints on the structural parameters. Through the literature research it is first shown that the algebraic and numerical solution methods used in the research area of computational kinematics are capable of solving non-parametric algebraic systems of n equations in n variables, but cannot handle the singularities associated with positive-dimensional solution sets. In this thesis the problem of positive-dimensional solution sets is solved by adopting the main principles from the mathematical research area of algebraic geometry for solving parametric algebraic systems of n equations in at least n+1 variables (parametric in the mathematical sense that all parameter values for which the system is solvable are considered, including the degenerate cases). By adopting the developed solution method to solve the dyadic equations in direct polynomial form for two to three precision points, it has been algebraically proved and numerically demonstrated that the map of the ground pivots is ambiguous and that the singularities associated with positive-dimensional solution sets can be resolved.
The positive-dimensional solution sets associated with the poles might contain physically meaningful solutions in the form of optimal defect-free mechanisms. Traditionally the mechanism optimisation of hydraulically driven boom mechanisms is done at an early stage of the design process. This results in optimal component design rather than optimal system-level design. Modern mechanism optimisation at system level demands integration of kinematic design methods with mechanical system simulation techniques. In this thesis a new kinematic design method for hydraulically driven boom mechanisms is developed and integrated into mechanical system simulation techniques. The developed kinematic design method is based on combining the two-precision-point formulation with optimisation of substructures (using mathematical programming techniques or optimisation methods based on probability and statistics) driven by criteria calculated from the system-level response of multi-degree-of-freedom mechanisms. For example, by adopting the mixed exact-approximate position synthesis in direct optimisation (using mathematical programming techniques) with two exact positions and an unlimited number of approximate positions, the drawbacks (ia)-(iib) are eliminated. The design principles of the developed method are based on the design-tree approach to mechanical systems, and the design method is, in principle, capable of capturing the interrelationship between kinematic and dynamic synthesis simultaneously when the developed kinematic design method is integrated with the mechanical system simulation techniques.
Abstract:
In this study, the theoretical part was created to compare different Value at Risk models. Based on that comparison, one model was chosen for the empirical part, which concentrated on finding out whether the model is accurate in measuring market risk. The purpose of this study was to test whether Volatility-weighted Historical Simulation is accurate in measuring market risk and what improvements it brings to market risk measurement compared to traditional Historical Simulation. The volatility-weighted method of Hull and White (1998) was chosen in order to improve the traditional method's capability to measure market risk. In this study we found that results based on Historical Simulation depend on the chosen time period, the confidence level and how samples are weighted. The findings of this study are that we cannot say that the chosen method is fully reliable in measuring market risk, because the back-testing results change over the time period of this study.
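A minimal sketch of volatility-weighted historical simulation in the spirit of Hull and White (1998): historical returns are rescaled by the ratio of current to past EWMA volatility before the empirical quantile is taken. Parameter choices (decay factor, confidence level, initialisation) are illustrative assumptions, not the study's settings.

```python
import numpy as np

def vol_weighted_hs_var(returns, alpha=0.99, lam=0.94):
    """Value at Risk by volatility-weighted historical simulation.

    Each historical return is rescaled by sigma_now / sigma_t, where the
    sigmas come from an EWMA volatility estimate, before taking the
    empirical loss quantile."""
    r = np.asarray(returns, dtype=float)
    sigma2 = np.empty_like(r)
    sigma2[0] = r[:20].var()                   # crude initial variance
    for t in range(1, len(r)):
        sigma2[t] = lam * sigma2[t - 1] + (1 - lam) * r[t - 1] ** 2
    sigma = np.sqrt(sigma2)
    sigma_now = np.sqrt(lam * sigma2[-1] + (1 - lam) * r[-1] ** 2)
    adjusted = r * sigma_now / sigma           # volatility-weighted returns
    return -np.quantile(adjusted, 1 - alpha)   # loss quantile as positive VaR

# Example on simulated returns
rng = np.random.default_rng(1)
rets = rng.standard_normal(1000) * 0.01
print(vol_weighted_hs_var(rets))
```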
Abstract:
In the present work we focus on two indices that quantify directionality and skew-symmetrical patterns in social interactions as measures of social reciprocity: the directional consistency (DC) and skew-symmetry indices. Although both indices enable researchers to describe social groups, most studies require statistical inferential tests. The main aims of the present study are: firstly, to propose an overall statistical technique for testing null hypotheses regarding social reciprocity in behavioral studies, using the DC and skew-symmetry statistics (Φ) at group level; and secondly, to compare both statistics in order to allow researchers to choose the optimal measure depending on the conditions. In order to allow researchers to make statistical decisions, statistical significance for both statistics has been estimated by means of a Monte Carlo simulation. Furthermore, this study will enable researchers to choose the optimal observational conditions for carrying out their research, as the power of the statistical tests has been estimated.