48 results for intrinsic Gaussian Markov random field
Abstract:
This paper presents a comprehensive three-dimensional model for calculating vibration in a pile-founded building due to trains moving in a nearby underground tunnel. The model calculates the Power Spectral Density (PSD) of the building's responses due to trains moving on floating-slab tracks with random roughness. The tunnel and its surrounding soil are modelled as a cylindrical shell embedded in a half-space using the well-known PiP model. The building and its piles are modelled as a 2D frame using the dynamic stiffness matrix. Coupling between the foundation and the ground is performed using the theory of joining subsystems in the frequency domain. The latter requires calculation of transfer functions of a half-space model; a convenient choice based on the thin-layer method is selected in this work for calculating responses in a half-space due to circular strip loadings. The coupling accounts for the influence of the building's dynamics on the incident wave field from the tunnel, but ignores any reflections of the building's waves from the tunnel. The derivation in the paper shows that the incident vibration field at the building's foundation is modified by a term that reflects the coupling and the dynamics of the building and its foundation. The comparisons presented in the paper show that the dynamics of the building and its foundation significantly change the incident vibration field from the tunnel and, if not considered in the calculation, can lead to a loss of prediction accuracy.
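As a rough illustration of the frequency-domain reasoning above, the sketch below computes a response PSD from an assumed roughness PSD through a linear transfer function. The single-degree-of-freedom transfer function, spectra, and parameter values are hypothetical stand-ins for illustration only; they are not the PiP, thin-layer, or building models described in the paper.

```python
import numpy as np

# For a linear system, the response PSD equals |H(f)|^2 times the input
# (rail roughness) PSD. The transfer function below is a hypothetical
# single-mode stand-in, not the coupled tunnel-soil-building model.

def response_psd(freqs, roughness_psd, transfer_function):
    """Return S_response(f) = |H(f)|^2 * S_roughness(f)."""
    H = transfer_function(freqs)
    return np.abs(H) ** 2 * roughness_psd

def sdof_transfer(freqs, f_n=40.0, zeta=0.05):
    """Hypothetical one-degree-of-freedom (mass-spring-damper) transfer function."""
    r = freqs / f_n
    return 1.0 / (1.0 - r**2 + 2j * zeta * r)

freqs = np.linspace(1.0, 200.0, 400)           # frequency axis in Hz
s_rough = 1e-9 / (1.0 + (freqs / 50.0) ** 2)   # assumed roughness PSD shape
s_resp = response_psd(freqs, s_rough, sdof_transfer)
```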
Abstract:
A partially observable Markov decision process (POMDP) has been proposed as a dialog model that enables automatic optimization of the dialog policy and provides robustness to speech understanding errors. Various approximations allow such a model to be used for building real-world dialog systems. However, they require a large number of dialogs to train the dialog policy and hence typically rely on the availability of a user simulator. They also require significant designer effort to hand-craft the policy representation. We investigate the use of Gaussian processes (GPs) in policy modeling to overcome these problems. We show that GP policy optimization can be implemented for a real-world POMDP dialog manager, and in particular: 1) we examine different formulations of a GP policy to minimize variability in the learning process; 2) we find that the use of a GP increases the learning rate by an order of magnitude, thereby allowing learning by direct interaction with human users; and 3) we demonstrate that designer effort can be substantially reduced by basing the policy directly on the full belief space, thereby avoiding ad hoc feature space modeling. Overall, the GP approach represents an important step towards fully automatic dialog policy optimization in real-world systems.
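To make the GP-policy idea concrete, the following minimal sketch fits an independent GP per action, mapping belief vectors to observed returns, and selects the action with the highest optimistic estimate. The RBF kernel, toy data, two-action set, and exploration bonus are assumptions made for illustration; they are not the authors' formulation or training procedure.

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=0.3, variance=1.0):
    # Squared-exponential kernel between rows of X1 and X2.
    d2 = np.sum(X1**2, 1)[:, None] + np.sum(X2**2, 1)[None, :] - 2 * X1 @ X2.T
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_posterior(X_train, y_train, X_test, noise=1e-2):
    """Posterior mean and variance of a zero-mean GP at the test points."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_s = rbf_kernel(X_train, X_test)
    K_ss = rbf_kernel(X_test, X_test)
    mean = K_s.T @ np.linalg.solve(K, y_train)
    cov = K_ss - K_s.T @ np.linalg.solve(K, K_s)
    return mean, np.diag(cov)

# Toy data: 20 belief points over 3 hidden states, with returns per action.
rng = np.random.default_rng(0)
beliefs = rng.dirichlet(np.ones(3), size=20)
returns = {a: rng.normal(size=20) for a in range(2)}

def select_action(belief, explore=1.0):
    """Pick the action with the largest optimistic Q estimate (mean + bonus)."""
    scores = []
    for a in range(2):
        m, v = gp_posterior(beliefs, returns[a], belief[None, :])
        scores.append(m[0] + explore * np.sqrt(max(v[0], 0.0)))
    return int(np.argmax(scores))

print(select_action(np.array([0.6, 0.3, 0.1])))
```

Because the policy acts directly on belief vectors, no hand-crafted feature space is needed, which is the point the abstract makes about reducing designer effort.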
Abstract:
State-space models are successfully used in many areas of science, engineering and economics to model time series and dynamical systems. We present a fully Bayesian approach to inference and learning (i.e. state estimation and system identification) in nonlinear nonparametric state-space models. We place a Gaussian process prior over the state transition dynamics, resulting in a flexible model able to capture complex dynamical phenomena. To enable efficient inference, we marginalize over the transition dynamics function and, instead, directly infer the joint smoothing distribution using specially tailored particle Markov chain Monte Carlo (PMCMC) samplers. Once a sample from the smoothing distribution is computed, the state transition predictive distribution can be formulated analytically. Our approach preserves the full nonparametric expressivity of the model and can make use of sparse Gaussian processes to greatly reduce computational complexity.
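As a rough illustration of the analytic transition predictive mentioned above, the sketch below conditions a GP on successive state pairs from a toy sampled trajectory (standing in for one PMCMC sample of the smoothing distribution) and returns the predictive mean and variance of the next state. The kernel, noise level, and toy dynamics are assumed for illustration and are not the authors' model or samplers.

```python
import numpy as np

def rbf_kernel(x1, x2, lengthscale=1.0, variance=1.0):
    # Squared-exponential kernel for scalar states.
    return variance * np.exp(-0.5 * (x1[:, None] - x2[None, :]) ** 2 / lengthscale**2)

def transition_predictive(x_states, x_query, process_noise=0.1):
    """GP posterior over f(x_query) given observed transition pairs (x_t, x_{t+1})."""
    x_in, x_out = x_states[:-1], x_states[1:]
    K = rbf_kernel(x_in, x_in) + process_noise**2 * np.eye(len(x_in))
    k_s = rbf_kernel(x_in, x_query)
    mean = k_s.T @ np.linalg.solve(K, x_out)
    var = rbf_kernel(x_query, x_query).diagonal() - np.einsum(
        "ij,ij->j", k_s, np.linalg.solve(K, k_s)
    )
    return mean, var

# Toy trajectory from a hypothetical nonlinear system, standing in for a
# sampled state sequence from the smoothing distribution.
rng = np.random.default_rng(1)
x = np.zeros(50)
for t in range(49):
    x[t + 1] = 0.8 * x[t] + 2.0 * np.sin(x[t]) + 0.1 * rng.normal()

mean, var = transition_predictive(x, np.linspace(-3.0, 3.0, 5))
print(mean, var)
```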