901 results for stochastic simulation method


Relevance:

30.00%

Publisher:

Abstract:

In this thesis, research on tsunami remote sensing using Global Navigation Satellite System-Reflectometry (GNSS-R) delay-Doppler maps (DDMs) is presented. Firstly, a process for simulating GNSS-R DDMs of a tsunami-dominated sea surface is described. In this method, the bistatic scattering Zavorotny-Voronovich (Z-V) model, the sea surface mean square slope model of Cox and Munk, and the tsunami-induced wind perturbation model are employed. The feasibility of the Cox and Munk model under a tsunami scenario is examined by comparing the Cox and Munk model-based scattering coefficient with the Jason-1 measurement. A good consistency between these two results is obtained, with a correlation coefficient of 0.93. After confirming the applicability of the Cox and Munk model for a tsunami-dominated sea, this work provides simulations of the scattering coefficient distribution and the corresponding DDMs of a fixed region of interest before and during the tsunami. Furthermore, by subtracting the simulation results that are free of tsunami from those with the presence of a tsunami, the tsunami-induced variations in scattering coefficients and DDMs can be clearly observed. Secondly, a scheme to detect tsunamis and estimate tsunami parameters from such tsunami-dominant sea surface DDMs is developed. As a first step, a procedure to determine tsunami-induced sea surface height anomalies (SSHAs) from DDMs is demonstrated and a tsunami detection precept is proposed. Subsequently, the tsunami parameters (wave amplitude, direction and speed of propagation, wavelength, and the tsunami source location) are estimated based upon the detected tsunami-induced SSHAs. In application, the sea surface scattering coefficients are unambiguously retrieved by employing the spatial integration approach (SIA) and the dual-antenna technique. Next, the effective wind speed distribution can be restored from the scattering coefficients.
Assuming all DDMs are of a tsunami-dominated sea surface, the tsunami-induced SSHAs can be derived with knowledge of the background wind speed distribution. In addition, the SSHA distribution resulting from the tsunami-free DDM (which is supposed to be zero) is treated as an error map introduced during the overall retrieval stage and is utilized to prevent such errors from influencing subsequent SSHA results. In particular, a tsunami detection procedure is conducted to judge whether the SSHAs are truly tsunami-induced through a fitting process, which makes it possible to decrease the false alarm rate. After this step, tsunami parameter estimation proceeds based upon the fitted results of the former tsunami detection procedure. Moreover, an additional method is proposed for estimating tsunami propagation velocity and is believed to be more desirable in real-world scenarios. The above-mentioned tsunami-dominated sea surface DDM simulation, tsunami detection precept, and parameter estimation have been tested with simulated data based on the 2004 Sumatra-Andaman tsunami event.
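The differencing step described above — subtracting the tsunami-free simulation from the simulation with a tsunami present to isolate the tsunami-induced variation — can be sketched with a toy map, where a Gaussian dip stands in for the wind-perturbation model (this is an illustration only, not the Z-V simulator):

```python
import math

def scattering_map(size, tsunami=False, amp=0.15):
    """Toy scattering-coefficient map over a square grid; a tsunami adds a
    localized dip standing in for the wind-perturbation model."""
    grid = []
    for i in range(size):
        row = []
        for j in range(size):
            sigma0 = 10.0  # flat background coefficient (illustrative units)
            if tsunami:
                r2 = (i - size // 2) ** 2 + (j - size // 2) ** 2
                sigma0 -= amp * math.exp(-r2 / (2.0 * (size / 6.0) ** 2))
            row.append(sigma0)
        grid.append(row)
    return grid

def difference_map(with_t, without_t):
    """Subtract the tsunami-free map from the tsunami map, isolating the
    tsunami-induced variation as described in the abstract."""
    return [[a - b for a, b in zip(ra, rb)] for ra, rb in zip(with_t, without_t)]

base = scattering_map(32, tsunami=False)
event = scattering_map(32, tsunami=True)
delta = difference_map(event, base)   # anomaly peaks at the grid centre
```

The same subtraction applies bin-by-bin to simulated DDMs; here it simply exposes the injected anomaly against a flat background.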

Relevance:

30.00%

Publisher:

Abstract:

The development of critical thinking and communication skills is an essential part of Baccalaureate and Practical Nursing education. Scenario-based simulation, a form of experiential learning, directly engages students in the learning process. This teaching-learning method has been shown to increase students' understanding of the influence of their personal beliefs and values when working with clients and to improve therapeutic communication and critical thinking skills. Students in both the BN (Collaborative) and PN Programs at the Centre for Nursing Studies demonstrate a strong theoretical understanding of the impact of income and social status on population health but often experience difficulty applying this knowledge to clinical situations involving clients and families. The purpose of the project was to develop a scenario-based simulation activity to provide nursing students with first-hand experience of the impact of income and social status on health service accessibility. A literature review and stakeholder consultations were conducted to inform the project. The findings of these initiatives and Kolb's Experiential Learning Theory were used to guide all aspects of the project. This report is an account of how the income and social status simulation and its accompanying materials were developed. The project provided an excellent learning opportunity that demonstrated the use of advanced nursing competencies.

Relevance:

30.00%

Publisher:

Abstract:

This work studied the drying kinetics of the organic fractions of municipal solid waste (MSW) samples with different initial moisture contents and presented a new method for determining drying kinetic parameters. A series of drying experiments at different temperatures was performed using a thermogravimetric technique. Based on the modified Page drying model and the general pattern search method, a new drying kinetic method was developed that uses multiple isothermal drying curves simultaneously. The new method fitted the experimental data more accurately than the traditional method. Drying kinetic behaviors under extrapolated conditions were also predicted and validated. The new method indicated that the drying activation energies for the samples with initial moisture contents of 31.1 and 17.2 % on a wet basis were 25.97 and 24.73 kJ mol⁻¹, respectively. These results are useful for drying process simulation and industrial dryer design. The new method can also be applied to determine the drying parameters of other materials with high reliability.
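As a rough illustration of the kind of fit involved, the modified Page model MR = exp(−(kt)ⁿ) can be calibrated to a single isothermal curve by linear least squares on its linearized form ln(−ln MR) = n·ln k + n·ln t. This is a simplified stand-in for the thesis method, which fits multiple curves simultaneously by pattern search; the parameter values below are invented:

```python
import math

def page_mr(t, k, n):
    """Modified Page thin-layer drying model: MR = exp(-(k*t)**n)."""
    return math.exp(-((k * t) ** n))

def fit_page(times, mrs):
    """Recover k, n by ordinary least squares on the linearized form
    ln(-ln MR) = n*ln k + n*ln t (single-curve simplification)."""
    xs = [math.log(t) for t in times]
    ys = [math.log(-math.log(mr)) for mr in mrs]
    m = len(xs)
    sx, sy = sum(xs), sum(ys)
    sxx = sum(x * x for x in xs)
    sxy = sum(x * y for x, y in zip(xs, ys))
    slope = (m * sxy - sx * sy) / (m * sxx - sx * sx)  # = n
    intercept = (sy - slope * sx) / m                  # = n * ln k
    n = slope
    k = math.exp(intercept / n)
    return k, n

# Synthetic isothermal drying curve with known parameters
k_true, n_true = 0.05, 1.3
times = [5, 10, 20, 40, 60, 90, 120]
mrs = [page_mr(t, k_true, n_true) for t in times]
k_fit, n_fit = fit_page(times, mrs)
```

With noise-free synthetic data the fit recovers k and n exactly; with real thermogravimetric curves a nonlinear search over several temperatures at once, as in the thesis, is more robust.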

Relevance:

30.00%

Publisher:

Abstract:

Discrete-event simulation (DES) is a well-established technique used to model manufacturing and service systems. However, although the importance of modelling people in DES has been recognised, there is little guidance on how this can be achieved in practice. A literature review was used to identify examples of the use of DES to model people, and each article was examined to determine the method used to model people within the simulation study. It was found that there is no common method but rather a diverse range of approaches used to model human behaviour in DES. This paper provides an outline of the approaches used to model people in terms of their decision making, availability for work, task performance, and arrival rate. The outcome brings together the current knowledge in this area and will be of interest to researchers considering developing a methodology for modelling people in DES and to practitioners engaged in a simulation project involving the modelling of people's behaviour.
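One of the surveyed aspects — availability for work — can be sketched in a minimal hand-rolled event-list DES of a single worker interrupted by periodic breaks. All parameters and the break policy are assumptions for illustration, not drawn from the paper:

```python
import heapq
import random

def simulate(n_jobs, service=1.0, arrival=1.5, break_every=10.0,
             break_len=2.0, seed=1):
    """Minimal event-list DES of one worker whose availability is
    interrupted by scheduled breaks; returns the mean job waiting time.
    All parameters are illustrative."""
    rng = random.Random(seed)
    events = []                      # (time, kind) min-heap
    t = 0.0
    for _ in range(n_jobs):          # exponential inter-arrival times
        t += rng.expovariate(1.0 / arrival)
        heapq.heappush(events, (t, "arrival"))
    b = break_every
    while b < t:                     # scheduled unavailability
        heapq.heappush(events, (b, "break"))
        b += break_every
    free_at = 0.0                    # time the worker next becomes free
    waits = []
    while events:
        now, kind = heapq.heappop(events)
        if kind == "break":
            free_at = max(free_at, now) + break_len
        else:
            start = max(free_at, now)
            waits.append(start - now)
            free_at = start + rng.expovariate(1.0 / service)
    return sum(waits) / len(waits)

mean_wait = simulate(200)
```

Decision making, task performance, and arrival-rate effects can be layered onto the same event loop, which is why the paper treats them as separate modelling dimensions.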

Relevance:

30.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even after the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and it is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
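The latent class (PARAFAC) representation referred to here — the joint pmf as a nonnegative rank-R mixture of product distributions — can be sketched for a toy 2×2 contingency table. All class weights and conditional probabilities below are invented for illustration:

```python
import itertools
import math

def parafac_pmf(weights, factors):
    """Joint pmf of p categorical variables under a rank-R latent class
    (PARAFAC) representation: P(x) = sum_h w[h] * prod_j factors[j][h][x_j].
    Dimensions are toy; the thesis relates this nonnegative rank to the
    support of a log-linear model."""
    p = len(factors)
    dims = [len(f[0]) for f in factors]
    table = {}
    for idx in itertools.product(*[range(d) for d in dims]):
        table[idx] = sum(
            w * math.prod(factors[j][h][idx[j]] for j in range(p))
            for h, w in enumerate(weights))
    return table

# Two latent classes, two binary variables (a 2x2 contingency table)
weights = [0.6, 0.4]
factors = [
    [[0.9, 0.1], [0.2, 0.8]],   # P(x1 = c | class h), row h
    [[0.7, 0.3], [0.5, 0.5]],   # P(x2 = c | class h)
]
pmf = parafac_pmf(weights, factors)
total = sum(pmf.values())       # a valid pmf sums to one
```

The collapsed Tucker class proposed in Chapter 2 generalizes this construction by sharing structure across the factor matrices.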

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and we provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
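The waiting-time quantity on which the proposed framework rests can be extracted from a series in a few lines. This is only an illustrative extractor — the thesis builds a full inferential model on these waits:

```python
import random

def exceedance_waits(series, threshold):
    """Waiting times (in steps) between successive exceedances of a high
    threshold — the quantity the proposed tail-dependence paradigm is
    built on."""
    hits = [i for i, x in enumerate(series) if x > threshold]
    return [b - a for a, b in zip(hits, hits[1:])]

# For iid Gaussian noise the waits are roughly geometric with mean
# 1 / P(X > u); tail-dependent processes instead cluster exceedances,
# which shows up in the waiting-time distribution.
rng = random.Random(0)
series = [rng.gauss(0.0, 1.0) for _ in range(5000)]
waits = exceedance_waits(series, 2.0)
mean_wait = sum(waits) / len(waits)
```

Comparing the empirical waiting-time distribution against the independence baseline is the intuition behind the definition of tail dependence proposed in Chapter 5.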

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo (MCMC), the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel, but comparatively little attention has been paid to convergence and estimation error in the resulting approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
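A toy version of the Chapter 6 question — how a known error in an approximating transition kernel propagates to the quantity being estimated — can be seen on a two-state chain whose exact kernel is perturbed by a known amount (the matrices are invented for illustration):

```python
def stationary(P, iters=200):
    """Power-iterate a row-stochastic 2x2 transition matrix to its
    stationary distribution."""
    pi = [0.5, 0.5]
    for _ in range(iters):
        pi = [pi[0] * P[0][0] + pi[1] * P[1][0],
              pi[0] * P[0][1] + pi[1] * P[1][1]]
    return pi

def tv(p, q):
    """Total variation distance between two distributions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

# An "exact" kernel and a perturbed approximation with known error eps;
# the induced error in the stationary distribution stays of order eps.
P_exact = [[0.9, 0.1], [0.2, 0.8]]
eps = 0.01
P_approx = [[0.9 - eps, 0.1 + eps], [0.2, 0.8]]

err = tv(stationary(P_exact), stationary(P_approx))
```

The framework in the thesis quantifies exactly this trade-off in general settings: how much kernel error is tolerable for a given loss function and computational budget.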

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
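The slow-mixing symptom described — high autocorrelation and small effective sample size — can be illustrated on a surrogate AR(1) chain with correlation near one. This is a stand-in for the behaviour of the data augmentation samplers, not an implementation of them:

```python
import random

def autocorr(xs, lag):
    """Sample autocorrelation at a given lag."""
    n = len(xs)
    mu = sum(xs) / n
    var = sum((x - mu) ** 2 for x in xs) / n
    cov = sum((xs[i] - mu) * (xs[i + lag] - mu) for i in range(n - lag)) / n
    return cov / var

def ess(xs, max_lag=200):
    """Effective sample size from summed positive autocorrelations
    (initial positive sequence truncation)."""
    s = 0.0
    for lag in range(1, max_lag):
        r = autocorr(xs, lag)
        if r < 0.0:
            break
        s += r
    return len(xs) / (1.0 + 2.0 * s)

# AR(1) surrogate chain with correlation near one, mimicking a slowly
# mixing sampler: many draws, far fewer effective draws.
rng = random.Random(4)
rho = 0.95
xs = [0.0]
for _ in range(5000):
    xs.append(rho * xs[-1] + rng.gauss(0.0, 1.0))

n_eff = ess(xs)
```

For the rare-event regime in Chapter 7 the analogue of rho approaches one as the sample size grows, which is why the spectral gap vanishes and the effective sample size collapses.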

Relevance:

30.00%

Publisher:

Abstract:

In this dissertation, we develop a novel methodology for characterizing and simulating nonstationary, full-field, stochastic turbulent wind fields.

In this new method, nonstationarity is characterized and modeled via temporal coherence, which is quantified in the discrete frequency domain by probability distributions of the differences in phase between adjacent Fourier components.

The empirical distributions of the phase differences can also be extracted from measured data, and the resulting temporal coherence parameters can quantify the occurrence of nonstationarity in empirical wind data.
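The phase-difference quantity that defines temporal coherence here can be computed directly from a record; the sketch below uses a naive DFT and a synthetic noise record, purely for illustration:

```python
import cmath
import math
import random

def dft(x):
    """Naive O(n^2) DFT — fine for short illustrative records."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
            for k in range(n)]

def adjacent_phase_diffs(x):
    """Differences in phase between adjacent Fourier components, wrapped to
    [-pi, pi) — the quantity whose probability distribution characterizes
    temporal coherence in this dissertation."""
    X = dft(x)
    ph = [cmath.phase(c) for c in X[1:len(X) // 2]]
    return [((b - a + math.pi) % (2.0 * math.pi)) - math.pi
            for a, b in zip(ph, ph[1:])]

# For a stationary noise record the differences scatter widely over the
# circle; coherent gust-like events concentrate them near zero.
rng = random.Random(2)
record = [rng.gauss(0.0, 1.0) for _ in range(128)]
dphis = adjacent_phase_diffs(record)
```

Fitting a probability distribution to such phase differences over many measured records is what yields the empirical temporal coherence parameters calibrated in this dissertation.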

This dissertation (1) implements temporal coherence in a desktop turbulence simulator, (2) calibrates empirical temporal coherence models for four wind datasets, and (3) quantifies the increase in lifetime wind turbine loads caused by temporal coherence.

The four wind datasets were intentionally chosen from locations around the world so that they had significantly different ambient atmospheric conditions.

The prevalence of temporal coherence and its relationship to other standard wind parameters were modeled through empirical joint distributions (EJDs), which involved fitting marginal distributions and calculating correlations.

EJDs have the added benefit of being able to generate samples of wind parameters that reflect the characteristics of a particular site.

Lastly, to characterize the effect of temporal coherence on design loads, we created four models in the open-source wind turbine simulator FAST based on the WindPACT turbines, fit response surfaces to them, and used the response surfaces to calculate lifetime turbine responses to wind fields simulated with and without temporal coherence.

The training data for the response surfaces was generated from exhaustive FAST simulations that were run on the high-performance computing (HPC) facilities at the National Renewable Energy Laboratory.

This process was repeated for wind field parameters drawn from the empirical distributions and for wind samples drawn using the recommended procedure in the IEC wind turbine design standard.

The effect of temporal coherence was calculated as a percent increase in the lifetime load over the base value with no temporal coherence.

Relevance:

30.00%

Publisher:

Abstract:

Prior research has established that the idiosyncratic volatility of securities prices exhibits a positive trend. This trend and other factors have made the merits of investment diversification and portfolio construction more compelling. A new optimization technique, a greedy algorithm, is proposed to optimize the weights of assets in a portfolio. The main benefits of using this algorithm are to: a) increase the efficiency of the portfolio optimization process, b) implement large-scale optimizations, and c) improve the resulting optimal weights. In addition, the technique utilizes a novel approach in the construction of a time-varying covariance matrix. This involves the application of a modified integrated dynamic conditional correlation GARCH (IDCC-GARCH) model to account for the dynamics of the conditional covariance matrices that are employed. The stochastic aspects of the expected return of the securities are integrated into the technique through Monte Carlo simulations. Instead of representing the expected returns as deterministic values, they are assigned simulated values based on their historical measures. The time series of the securities are fitted to a probability distribution that matches the time-series characteristics using the Anderson-Darling goodness-of-fit criterion. Simulated and actual data sets are used to further generalize the results. Employing the S&P500 securities as the base, 2000 simulated data sets are created using Monte Carlo simulation. In addition, the Russell 1000 securities are used to generate 50 sample data sets. The results indicate an increase in risk-return performance. Choosing Value-at-Risk (VaR) as the criterion and the Crystal Ball portfolio optimizer, a commercial product currently available on the market, as the benchmark for comparison, the new greedy technique clearly outperforms the others on samples of the S&P500 and Russell 1000 securities.
The resulting improvements in performance are consistent among five securities selection methods (maximum, minimum, random, absolute minimum, and absolute maximum) and three covariance structures (unconditional, orthogonal GARCH, and integrated dynamic conditional GARCH).
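A greedy weight-construction sketch in the spirit described, but heavily simplified: a diagonal covariance and a mean-variance score replace the IDCC-GARCH covariance and VaR criterion of the thesis, and all numbers are invented:

```python
def greedy_weights(exp_ret, var, lam=2.0, step=0.01):
    """Greedy weight construction: repeatedly assign the next `step` of
    weight to the asset with the best marginal mean-variance score.
    A diagonal-covariance stand-in for the thesis algorithm."""
    n = len(exp_ret)
    w = [0.0] * n
    for _ in range(int(round(1.0 / step))):
        # marginal change in (return - lam * variance) from adding `step`
        best = max(range(n),
                   key=lambda i: step * exp_ret[i]
                   - lam * ((w[i] + step) ** 2 - w[i] ** 2) * var[i])
        w[best] += step
    return w

# Invented expected returns and variances for three assets
exp_ret = [0.08, 0.10, 0.12]
var = [0.02, 0.05, 0.10]
w = greedy_weights(exp_ret, var)
```

Because each step is a cheap local comparison, the same loop scales to large universes, which is the efficiency argument made for the greedy approach.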

Relevance:

30.00%

Publisher:

Abstract:

A novel surrogate model is proposed in lieu of Computational Fluid Dynamics (CFD) solvers, for fast nonlinear aerodynamic and aeroelastic modeling. A nonlinear function is identified on selected interpolation points by a discrete empirical interpolation method (DEIM). The flow field is then reconstructed using a least squares approximation of the flow modes extracted by proper orthogonal decomposition (POD). The aeroelastic reduced-order model (ROM) is completed by introducing a nonlinear mapping function between displacements and the DEIM points. The proposed model is investigated to predict the aerodynamic forces due to forced motions using a NACA 0012 airfoil undergoing a prescribed pitching oscillation. To investigate aeroelastic problems at transonic conditions, pitch/plunge airfoil and cropped delta wing aeroelastic models are built using linear structural models. The presence of shock waves triggers the appearance of limit cycle oscillations (LCO), which the model is able to predict. For all cases tested, the new ROM shows the ability to replicate the nonlinear aerodynamic forces and structural displacements and to reconstruct the complete flow field with sufficient accuracy at a fraction of the cost of a full-order CFD model.
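The POD-plus-least-squares reconstruction step can be sketched on synthetic rank-2 "snapshots". The sample-point indices below are fixed by hand only as a stand-in for DEIM's interpolation points, which the DEIM algorithm actually selects from the modes:

```python
import numpy as np

# Synthetic rank-2 "flow" snapshots on a 1-D grid (stand-ins for CFD fields)
x = np.linspace(0.0, 1.0, 200)
snapshots = np.stack([np.sin(2 * np.pi * x) * np.cos(0.3 * k)
                      + 0.5 * np.cos(4 * np.pi * x) * np.sin(0.2 * k)
                      for k in range(40)], axis=1)        # shape (200, 40)

# POD modes = leading left singular vectors of the snapshot matrix
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :2]                                          # rank-2 basis

# Least-squares reconstruction of a new field from a few sampled points,
# mirroring the idea of evaluating only at selected interpolation points
new_field = 0.7 * np.sin(2 * np.pi * x) + 0.05 * np.cos(4 * np.pi * x)
pts = np.array([5, 40, 90, 150])                          # assumed indices
coeffs, *_ = np.linalg.lstsq(modes[pts, :], new_field[pts], rcond=None)
recon = modes @ coeffs

rel_err = np.linalg.norm(recon - new_field) / np.linalg.norm(new_field)
```

Because the new field lies in the span of the POD modes, four sample points suffice to reconstruct the full 200-point field almost exactly; this is the cost saving the surrogate exploits in place of a full CFD solve.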

Relevance:

30.00%

Publisher:

Abstract:

The ultrasonic non-destructive testing of components may encounter considerable difficulty in the interpretation of some inspection results, mainly in anisotropic crystalline structures. A numerical method for the simulation of elastic wave propagation in homogeneous, elastically anisotropic media, based on the general finite element approach, is used to aid this interpretation. Successful modeling of the elastic field associated with NDE is based on the generation of a realistic pulsed ultrasonic wave, which is launched from a piezoelectric transducer into the material under inspection. The elastic constants are information of great interest, since they enable the application of analytical models to problems of small and medium complexity through numerical analysis programs such as finite elements and/or boundary elements. The aim of this work is the comparison between the results of the numerical solution of an ultrasonic wave, obtained from a transient excitation pulse that can be specified by either force or displacement variation across the aperture of the transducer, and the results obtained from an experiment carried out on an aluminum block in the IEN Ultrasonic Laboratory. The wave propagation can be simulated using all the characteristics of the material used in the experimental evaluation together with the boundary conditions, and from these results the comparison can be made.
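As a much-simplified stand-in for the finite element model, a 1-D scalar wave equation with an explicit finite-difference scheme shows the basic mechanics of launching a transient displacement pulse from a transducer position into a medium. All values are illustrative and this is not the anisotropic elastic formulation used in the paper:

```python
import math

def simulate_wave(nx=200, nt=300, c=1.0, dx=1.0, dt=0.5):
    """Explicit finite-difference solution of the 1-D scalar wave equation
    u_tt = c**2 * u_xx with fixed ends, driven by a transient displacement
    pulse at a transducer position. Stability requires c*dt/dx <= 1."""
    u_prev = [0.0] * nx
    u = [0.0] * nx
    r2 = (c * dt / dx) ** 2
    src = 5                          # transducer location (illustrative)
    for n in range(nt):
        u_next = [0.0] * nx
        for i in range(1, nx - 1):
            u_next[i] = (2.0 * u[i] - u_prev[i]
                         + r2 * (u[i + 1] - 2.0 * u[i] + u[i - 1]))
        t = n * dt
        # Gaussian transient excitation specified as a displacement,
        # echoing the paper's force/displacement pulse options
        u_next[src] += math.exp(-(((t - 15.0) / 4.0) ** 2))
        u_prev, u = u, u_next
    return u

field = simulate_wave()
peak = max(abs(v) for v in field)    # the pulse is still inside the bar
```

The full problem replaces this scalar update with the anisotropic elastic stiffness tensor and a finite element discretization, but the time-stepping and pulsed-source structure carry over.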

Relevance:

30.00%

Publisher:

Abstract:

The use of simulation games as a pedagogic method is well established, though its effective use is context-driven. This study adds to the growing body of empirical evidence of the effectiveness of simulation games but, more importantly, explains why by describing the instructional design implemented, which reflects best practices. This multimethod study finds evidence that student learning was enhanced through the use of simulation games, reflected in two key themes: simulation games as a catalyst for learning and simulation games as a vehicle for learning. In so doing, the research provides one of the few empirically based studies that support simulation games in enhancing learning and, more importantly, contextualizes the enhancement in terms of the instructional design of the curriculum. This research should prove valuable for those with an academic interest in the use of simulation games and for management educators who use, or are considering using, them. Further, the findings contribute to the academic debate concerning the effective implementation of simulation game-based training in business and management education.

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-08

Relevance:

30.00%

Publisher:

Abstract:

Thesis (Master's)--University of Washington, 2016-06