939 results for Galilean covariance


Relevance:

10.00%

Publisher:

Abstract:

Noise and vibration from underground railways is a major source of disturbance to inhabitants near subways. To help designers meet noise and vibration limits, numerical models are used to understand vibration propagation from these underground railways. However, the models commonly assume the ground is homogeneous and neglect local variability in the soil properties. Such simplifying assumptions add a level of uncertainty to the predictions which is not well understood. The goal of the current paper is to quantify the effect of soil inhomogeneity on surface vibration. The thin-layer method (TLM) is suggested as an efficient and accurate means of simulating vibration from underground railways in arbitrarily layered half-spaces. Stochastic variability of the soil's elastic modulus is introduced using a Karhunen-Loève (KL) expansion; the modulus is assumed to have a log-normal distribution and a modified exponential covariance kernel. The effect of horizontal soil variability is investigated by comparing the stochastic results for soils varied only in the vertical direction to soils with 2D variability. Results suggest that local soil inhomogeneity can significantly affect surface velocity predictions; 90% confidence intervals averaging 8 dB in width, with peak values up to 12 dB, are computed. This is a significant source of uncertainty and should be considered when using predictions from models assuming homogeneous soil properties. Furthermore, the effect of horizontal variability of the elastic modulus on the confidence interval appears to be negligible. This suggests that only vertical variation needs to be taken into account when modelling ground vibration from underground railways. © 2012 Elsevier Ltd. All rights reserved.
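The KL expansion described above can be illustrated with a minimal numpy sketch: discretise an exponential covariance kernel on a vertical grid, eigendecompose it, and build log-normal modulus realizations from the leading modes. All parameter values (mean modulus, coefficient of variation, correlation length) are hypothetical, and a plain exponential kernel stands in for the paper's modified one.

```python
import numpy as np

# Hypothetical parameters: 50 MPa mean modulus, 30% log-std-dev,
# 2 m correlation length on a 10 m vertical profile.
z = np.linspace(0.0, 10.0, 200)          # depth grid [m]
ell, sigma_ln, mu_ln = 2.0, 0.3, np.log(50.0)

# Exponential covariance kernel of the underlying Gaussian field
C = sigma_ln**2 * np.exp(-np.abs(z[:, None] - z[None, :]) / ell)

# Discrete Karhunen-Loeve expansion: eigendecomposition of the covariance
eigvals, eigvecs = np.linalg.eigh(C)
order = np.argsort(eigvals)[::-1]        # largest modes first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

def sample_modulus(n_modes=20, rng=np.random.default_rng(0)):
    """One realization of a log-normally distributed modulus profile."""
    xi = rng.standard_normal(n_modes)
    g = eigvecs[:, :n_modes] @ (np.sqrt(eigvals[:n_modes]) * xi)
    return np.exp(mu_ln + g)             # log-normal field [MPa]

E = sample_modulus()
print(E.shape, bool(np.all(E > 0)))
```

Truncating at 20 modes keeps the dominant variability while making each stochastic sample cheap, which is what makes Monte Carlo studies like the one described feasible.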


A group of mobile robots can localize cooperatively, using relative position and absolute orientation measurements fused through an extended Kalman filter (EKF). The topology of the graph of relative measurements is known to affect the steady-state value of the position error covariance matrix. Classes of sensor graphs are identified for which tight bounds on the trace of the covariance matrix can be obtained from the algebraic properties of the underlying relative measurement graph. The string and star graph topologies are considered, and the explicit form of the eigenvalues of the error covariance matrix is given. More general sensor graph topologies are treated as combinations of the string and star topologies with additional edges added. It is demonstrated how the addition of edges increases the trace of the steady-state position error covariance matrix, and the theoretical predictions are verified through simulation analysis.
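The algebraic graph properties referred to above can be made concrete with a small sketch that builds the Laplacian of a string and a star measurement graph and computes its spectrum; the node count is arbitrary, and this is only an illustration of the spectral quantities such bounds are built from, not the paper's covariance analysis itself.

```python
import numpy as np

def laplacian(edges, n):
    """Graph Laplacian L = D - A of an undirected measurement graph."""
    L = np.zeros((n, n))
    for i, j in edges:
        L[i, i] += 1; L[j, j] += 1
        L[i, j] -= 1; L[j, i] -= 1
    return L

n = 6
string_edges = [(i, i + 1) for i in range(n - 1)]   # string (path) topology
star_edges = [(0, i) for i in range(1, n)]          # star topology

for name, edges in [("string", string_edges), ("star", star_edges)]:
    lam = np.sort(np.linalg.eigvalsh(laplacian(edges, n)))
    print(name, np.round(lam, 3))
```

For the star on n nodes the spectrum has the closed form {0, 1 (multiplicity n-2), n}, which is the kind of explicit eigenvalue structure that makes closed-form covariance bounds tractable for these topologies.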


Vector Taylor Series (VTS) model-based compensation is a powerful approach for noise-robust speech recognition. An important extension to this approach is VTS adaptive training (VAT), which allows canonical models to be estimated on diverse noise-degraded training data. These canonical models can be estimated using EM-based approaches, allowing simple extensions to discriminative VAT (DVAT). However, to ensure a diagonal corrupted-speech covariance matrix, the Jacobian (loading matrix) relating the noise and clean speech is diagonalised. In this work an approach is proposed for yielding optimal diagonal loading matrices based on minimising the expected KL divergence between the diagonally loaded and "correct" distributions. The performance of DVAT using the standard and optimal diagonalisation was evaluated on both in-car collected data and the Aurora4 task. © 2012 IEEE.
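A simpler, static version of the KL-optimal diagonalisation idea can be verified numerically: for a zero-mean Gaussian with full covariance Sigma, the diagonal Gaussian minimising KL(p || q) is the one that matches the diagonal of Sigma. This sketch is only that textbook fact, not the paper's expected-KL optimisation over loading matrices; the matrix is random test data.

```python
import numpy as np

def kl_gauss_diag(Sigma, D):
    """KL( N(0, Sigma) || N(0, diag(D)) ) for a diagonal approximation D."""
    d = len(D)
    return 0.5 * (np.sum(np.diag(Sigma) / D) - d
                  + np.sum(np.log(D)) - np.linalg.slogdet(Sigma)[1])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
Sigma = A @ A.T + 4.0 * np.eye(4)        # a full "correct" covariance

D_opt = np.diag(Sigma)                   # moment-matched diagonal
# Perturbed diagonals never beat matching the diagonal of Sigma
for _ in range(100):
    D_try = D_opt * np.exp(0.3 * rng.standard_normal(4))
    assert kl_gauss_diag(Sigma, D_try) >= kl_gauss_diag(Sigma, D_opt) - 1e-12
print("KL at optimum:", round(float(kl_gauss_diag(Sigma, D_opt)), 4))
```

The residual KL at the optimum is the price of forcing diagonality, which is exactly the quantity a better choice of loading matrix tries to reduce.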


Modeling of the joint probability density function (PDF) of the mixture fraction and progress variable with a given covariance value is studied. This modeling is validated using experimental and direct numerical simulation (DNS) data; very good agreement with experimental data from turbulent stratified flames and DNS data from a lifted hydrogen jet flame is obtained. The effect of using this joint PDF model to calculate the mean reaction rate with a flamelet closure in Reynolds-averaged Navier-Stokes (RANS) calculations of stratified flames is studied. The covariance effect is observed to be large within the flame brush. Results from RANS calculations using this model for stratified jet- and rod-stabilized V-flames are discussed and compared to measurements as a posteriori validation of the joint PDF model with the flamelet closure. The agreement between computed and measured values of flame and turbulence quantities is found to be good. © 2012 Copyright Taylor and Francis Group, LLC.


Modelling is fundamental to many fields of science and engineering. A model can be thought of as a representation of possible data one could predict from a system. The probabilistic approach to modelling uses probability theory to express all aspects of uncertainty in the model. The probabilistic approach is synonymous with Bayesian modelling, which simply uses the rules of probability theory in order to make predictions, compare alternative models, and learn model parameters and structure from data. This simple and elegant framework is most powerful when coupled with flexible probabilistic models. Flexibility is achieved through the use of Bayesian non-parametrics. This article provides an overview of probabilistic modelling and an accessible survey of some of the main tools in Bayesian non-parametrics. The survey covers the use of Bayesian non-parametrics for modelling unknown functions, density estimation, clustering, time-series modelling, and representing sparsity, hierarchies, and covariance structure. More specifically, it gives brief non-technical overviews of Gaussian processes, Dirichlet processes, infinite hidden Markov models, Indian buffet processes, Kingman's coalescent, Dirichlet diffusion trees and Wishart processes.
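Of the tools surveyed above, the Gaussian process is the easiest to sketch end to end: a covariance function, a handful of noisy observations, and closed-form posterior mean and variance. The data and hyperparameters below are illustrative only.

```python
import numpy as np

def rbf(X1, X2, ls=1.0, var=1.0):
    """Squared-exponential covariance function."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return var * np.exp(-0.5 * d2 / ls**2)

# Noisy observations of an unknown function (synthetic data)
X = np.array([-2.0, -1.0, 0.0, 1.5, 2.5])
y = np.sin(X) + 0.05 * np.random.default_rng(2).standard_normal(len(X))
noise = 0.05**2

# GP posterior mean and pointwise variance at test inputs
Xs = np.linspace(-3.0, 3.0, 50)
K = rbf(X, X) + noise * np.eye(len(X))
Ks = rbf(X, Xs)
alpha = np.linalg.solve(K, y)
mean = Ks.T @ alpha
var = np.diag(rbf(Xs, Xs)) - np.sum(Ks * np.linalg.solve(K, Ks), axis=0)
print(mean.shape, bool(np.all(var > -1e-9)))
```

The same covariance-matrix machinery underlies several of the other non-parametric models listed, which is why the survey treats covariance structure as a modelling primitive in its own right.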


Recent experiments have shown that spike-timing-dependent plasticity is influenced by neuromodulation. We derive theoretical conditions for successful learning of reward-related behavior for a large class of learning rules where Hebbian synaptic plasticity is conditioned on a global modulatory factor signaling reward. We show that all learning rules in this class can be separated into a term that captures the covariance of neuronal firing and reward and a second term that represents the influence of unsupervised learning. The unsupervised term, which is, in general, detrimental for reward-based learning, can be suppressed if the neuromodulatory signal encodes the difference between the reward and the expected reward, but only if the expected reward is calculated for each task and stimulus separately. If several tasks are to be learned simultaneously, the nervous system needs an internal critic that is able to predict the expected reward for arbitrary stimuli. We show that, with a critic, reward-modulated spike-timing-dependent plasticity is capable of learning motor trajectories with a temporal resolution of tens of milliseconds. The relation to temporal difference learning, the relevance of block-based learning paradigms, and the limitations of learning with a critic are discussed.
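The covariance-plus-unsupervised decomposition above can be checked in a toy rate-based setting: the expected reward-gated Hebbian update E[R · pre · post] splits exactly into Cov(R, pre·post) plus E[R]·E[pre·post], and subtracting the mean reward removes the second term. The activity statistics and reward model are invented for illustration; this is not a spiking simulation.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100_000
pre = rng.random(n)                       # presynaptic activity samples
post = rng.random(n)                      # postsynaptic activity samples
hebb = pre * post                         # Hebbian coincidence term
# Reward correlates weakly with pre/post coincidence, plus noise
R = 0.5 * hebb + rng.normal(0.0, 0.1, n)

# Raw reward-gated update = covariance term + unsupervised bias term
raw = np.mean(R * hebb)
cov_term = np.mean((R - R.mean()) * (hebb - hebb.mean()))
unsup_term = R.mean() * hebb.mean()
assert abs(raw - (cov_term + unsup_term)) < 1e-10

# Gating with (R - expected reward) suppresses the unsupervised term
centred = np.mean((R - R.mean()) * hebb)
print(round(cov_term, 4), round(centred, 4))   # the two agree
```

Here the global mean reward plays the role of a perfect critic for a single task; the abstract's point is that with multiple tasks the expected reward must be predicted per task and stimulus.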


Some amount of differential settlement occurs even in the most uniform soil deposit, but it is extremely difficult to estimate because of the natural heterogeneity of the soil. The compression response of the soil and its variability must be characterised in order to estimate the probability of the differential settlement exceeding a certain threshold value. The work presented in this paper introduces a probabilistic framework to address this issue in a rigorous manner, while preserving the format of a typical geotechnical settlement analysis. In order to avoid dealing with different approaches for each category of soil, a simplified unified compression model is used to characterise the nonlinear compression behaviour of soils of varying gradation through a single constitutive law. The Bayesian updating rule is used to incorporate information from three different laboratory datasets in the computation of the statistics (estimates of the means and covariance matrix) of the compression model parameters, as well as of the uncertainty inherent in the model.
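The sequential use of laboratory datasets in a Bayesian update can be sketched with the simplest conjugate case: a Normal prior on a scalar compression parameter, updated with Normally distributed measurements of known noise variance. All numbers, and the reduction to a single scalar parameter, are hypothetical; the paper works with a full mean vector and covariance matrix.

```python
import numpy as np

def update_normal(mu0, var0, data, noise_var):
    """Conjugate Normal-Normal update of a scalar parameter estimate."""
    n = len(data)
    var_post = 1.0 / (1.0 / var0 + n / noise_var)
    mu_post = var_post * (mu0 / var0 + np.sum(data) / noise_var)
    return mu_post, var_post

# Hypothetical compression-parameter measurements from three lab datasets
rng = np.random.default_rng(4)
datasets = [rng.normal(0.12, 0.02, k) for k in (8, 15, 5)]

mu, var = 0.10, 0.05**2                  # vague prior from engineering judgement
for d in datasets:
    mu, var = update_normal(mu, var, d, 0.02**2)
print(round(mu, 3), bool(var < 0.05**2))  # posterior tightens with each dataset
```

Because the update is sequential, each dataset's posterior serves as the prior for the next, which is exactly how heterogeneous laboratory evidence accumulates in the framework described.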


We investigate the Student-t process as an alternative to the Gaussian process as a non-parametric prior over functions. We derive closed form expressions for the marginal likelihood and predictive distribution of a Student-t process, by integrating away an inverse Wishart process prior over the covariance kernel of a Gaussian process model. We show surprising equivalences between different hierarchical Gaussian process models leading to Student-t processes, and derive a new sampling scheme for the inverse Wishart process, which helps elucidate these equivalences. Overall, we show that a Student-t process can retain the attractive properties of a Gaussian process - a nonparametric representation, analytic marginal and predictive distributions, and easy model selection through covariance kernels - but has enhanced flexibility, and predictive covariances that, unlike a Gaussian process, explicitly depend on the values of training observations. We verify empirically that a Student-t process is especially useful in situations where there are changes in covariance structure, or in applications such as Bayesian optimization, where accurate predictive covariances are critical for good performance. These advantages come at no additional computational cost over Gaussian processes.
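One way to get intuition for the Student-t process is its Gaussian scale-mixture construction: draw a GP sample and divide by an independent chi-square scale, normalised so the covariance stays K. The sketch below uses that construction (with arbitrary kernel and degrees of freedom) and checks that the marginals come out heavier-tailed than Gaussian; it is illustrative only, not the paper's inverse Wishart process derivation.

```python
import numpy as np

rng = np.random.default_rng(5)

def rbf(X1, X2, ls=1.0):
    return np.exp(-0.5 * (X1[:, None] - X2[None, :]) ** 2 / ls**2)

X = np.linspace(0.0, 5.0, 30)
K = rbf(X, X) + 1e-8 * np.eye(len(X))
L = np.linalg.cholesky(K)
nu = 5.0                                  # degrees of freedom (> 2)

def sample_tp(n):
    """n draws from TP(nu, 0, K) via the scale mixture
    f = g * sqrt((nu - 2) / w),  g ~ GP(0, K),  w ~ Chi2(nu),
    normalised so that Cov[f] = K exactly."""
    g = L @ rng.standard_normal((len(X), n))
    w = rng.chisquare(nu, n)
    return g * np.sqrt((nu - 2.0) / w)

F = sample_tp(20000)
# Marginals are heavier-tailed than Gaussian: positive excess kurtosis
x = F[0]
kurt = np.mean((x - x.mean()) ** 4) / np.var(x) ** 2 - 3.0
print(round(float(kurt), 2))
```

The shared scale w couples all function values, which is the mechanism behind the abstract's point that predictive covariances depend on the observed training values.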


Choosing appropriate architectures and regularization strategies of deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.


An accurate description of atomic interactions, such as that provided by first principles quantum mechanics, is fundamental to realistic prediction of the properties that govern plasticity, fracture or crack propagation in metals. However, the computational complexity associated with modern schemes explicitly based on quantum mechanics limits their applications to systems of a few hundreds of atoms at most. This thesis investigates the application of the Gaussian Approximation Potential (GAP) scheme to atomistic modelling of tungsten - a bcc transition metal which exhibits a brittle-to-ductile transition and whose plasticity behaviour is controlled by the properties of $\frac{1}{2} \langle 111 \rangle$ screw dislocations. We apply Gaussian process regression to interpolate the quantum-mechanical (QM) potential energy surface from a set of points in atomic configuration space. Our training data is based on QM information that is computed directly using density functional theory (DFT). To perform the fitting, we represent atomic environments using a set of rotationally, permutationally and reflection invariant parameters which act as the independent variables in our equations of non-parametric, non-linear regression. We develop a protocol for generating GAP models capable of describing lattice defects in metals by building a series of interatomic potentials for tungsten. We then demonstrate that a GAP potential based on a Smooth Overlap of Atomic Positions (SOAP) covariance function provides a description of the $\frac{1}{2} \langle 111 \rangle$ screw dislocation that is in agreement with the DFT model. We use this potential to simulate the mobility of $\frac{1}{2} \langle 111 \rangle$ screw dislocations by computing the Peierls barrier and model dislocation-vacancy interactions to QM accuracy in a system containing more than 100,000 atoms.


© 2015 John P. Cunningham and Zoubin Ghahramani. Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
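The optimization-program view described above is easy to demonstrate for the best-known member of the family: PCA is the program "maximise tr(MᵀSM) over orthonormal M", and its optimum is the top eigenvectors of the sample covariance S. The data below is synthetic, and the brute-force comparison against random orthonormal bases stands in for optimization over the matrix manifold.

```python
import numpy as np

rng = np.random.default_rng(6)
X = rng.standard_normal((500, 5)) @ np.diag([3.0, 2.0, 1.0, 0.5, 0.2])
X -= X.mean(axis=0)
S = X.T @ X / len(X)                     # sample covariance

# PCA as an optimization program over orthonormal projections:
# the maximiser of tr(M^T S M) is the top eigenvectors of S.
eigvals, eigvecs = np.linalg.eigh(S)     # ascending eigenvalues
M = eigvecs[:, ::-1][:, :2]              # top-2 principal directions

def captured(B):
    return np.trace(B.T @ S @ B)         # variance captured by projection B

# No random orthonormal basis captures more variance than the PCA optimum
for _ in range(50):
    Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
    assert captured(Q) <= captured(M) + 1e-9
print(round(float(captured(M)), 2))
```

Swapping the objective (autocorrelation, class separation, canonical correlation) while keeping the manifold constraint is precisely how the survey unifies the other methods it lists.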


We present the normal form of the covariance matrix for three-mode tripartite Gaussian states. By means of this result, the general form of a necessary and sufficient criterion for the possibility of a state transformation from one tripartite entangled Gaussian state to another with three modes is found. Moreover, we show that the conditions presented include not only inequalities but equalities as well.


Global climate change driven by rising atmospheric concentrations of CO2, CH4 and other greenhouse gases has drawn attention to the global carbon cycle and carbon budget, and long-term measurement of the CO2 flux between vegetation and the atmosphere can deepen scientific understanding of the role of terrestrial ecosystems in the global carbon cycle. This study examines the broadleaved Korean pine forest of Changbai Mountain, a typical temperate vegetation type of northern China. An eddy covariance system mounted on an observation tower was used for long-term monitoring of the CO2 flux; the annual dynamics of the flux were analysed and the net ecosystem productivity of the forest estimated. In parallel, a community survey based on forest mensuration methods and existing empirical equations was used to estimate net ecosystem productivity, giving a comprehensive evaluation of the carbon budget of the Changbai Mountain broadleaved Korean pine forest and a basis for further carbon budget research. The main conclusions are: (1) FSAM footprint analysis shows that 76% of the information measured by the eddy covariance instruments at the 40 m level of the tower comes from the relatively homogeneous old-growth broadleaved Korean pine forest lying northwest to southwest of the tower, with the largest footprint source area 100-400 m to the southwest. The forest community survey was therefore conducted within this area, making the productivity estimates from the eddy covariance and mensuration methods comparable. (2) The seasonal patterns of carbon flux in 2003 and 2004 were broadly consistent: from the start of the year to early April the ecosystem maintained a weak positive carbon flux (CO2 release); net carbon uptake began in May and increased rapidly to a maximum in June before gradually declining; from late September to late October, as the growing season ended, net ecosystem CO2 exchange (NEE) turned from negative to positive, and in November-December NEE remained positive, with respiration dominating. The annual cumulative NEE shows that the forest is a clear carbon sink, with net ecosystem productivity (NEP) of -217 ± 75 gC·m-2·a-1 in 2003 and -190 ± 85 gC·m-2·a-1 in 2004, equivalent to -2.17 ± 0.75 and -1.90 ± 0.85 tC·ha-1·a-1. (3) Using empirical equations and the volume method, the biomass of the forest was estimated at 343.9-362.3 t·ha-1; the two methods give a community net primary productivity for 2003-2004 of 10.22-10.40 tC·ha-1·a-1 and a net ecosystem productivity of 2.50 ± 1.12 to 2.68 ± 1.20 tC·ha-1·a-1. (4) Net ecosystem productivity from the mensuration method differs slightly from the eddy covariance estimate, but the two agree within the effective error range.


Diurnal and seasonal variation of CO2 flux above the Korean pine and broad-leaved mixed forest in Changbai Mountain were characterised from measurements made with the eddy covariance technique. The results showed that the diurnal variation during the growing season was closely correlated with photosynthetically active radiation (PAR). The forest assimilated CO2 in the daytime and released it at night. The maximum uptake occurred at about 09:00 local time on clear days; on cloudy days, assimilation was synchronous with PAR. Night-time respiration increased with shallow soil temperature. The CO2 flux also showed obvious seasonal variation, controlled mainly by temperature: the relationship between monthly net CO2 exchange and monthly mean air temperature fits a cubic equation. Remarkable uptake occurred in the peak growing season, May to August, weak respiration occurred in the dormant season, October to March, and a relatively large release happened in October. Assimilation and respiration were nearly balanced during the transitions between the growing and dormant seasons. The annual carbon uptake of the ecosystem was -184 gC·m-2.
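At its core, the eddy covariance technique computes the flux as the covariance of the fluctuating vertical wind speed and scalar concentration after Reynolds decomposition. The sketch below applies that definition to synthetic half-hour time series (all signal parameters invented); real processing additionally involves coordinate rotation, despiking and density corrections.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 18000                                 # e.g. 30 min of 10 Hz data
w = 0.3 * rng.standard_normal(n)          # vertical wind speed [m s-1]
# Synthetic CO2 density anti-correlated with updrafts (daytime uptake)
c = 700.0 - 0.5 * w + 0.2 * rng.standard_normal(n)   # [mg m-3]

# Reynolds decomposition: flux = covariance of the fluctuating parts
w_p, c_p = w - w.mean(), c - c.mean()
flux = np.mean(w_p * c_p)                 # eddy covariance CO2 flux [mg m-2 s-1]
print(round(float(flux), 3), flux < 0.0)  # negative: net uptake in this example
```

The sign convention matches the abstracts above: a negative flux means the canopy is a net sink for CO2 over the averaging period.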


Reducing uncertainties in the estimation of land surface evapotranspiration (ET) from remote-sensing data is essential to better understand earth-atmosphere interactions. This paper demonstrates the applicability of the temperature-vegetation index triangle (Ts-VI) method in estimating regional ET and evaporative fraction (EF, defined as the ratio of latent heat flux to surface available energy) from MODIS/Terra and MODIS/Aqua products in a semiarid region. We have compared the satellite-based estimates of ET and EF with eddy covariance measurements made over 4 years at two semiarid grassland sites: Audubon Ranch (AR) and Kendall Grassland (KG). The lack of closure in the eddy covariance measured surface energy components is shown to be more serious at MODIS/Aqua overpass time than at MODIS/Terra overpass time for both the AR and KG sites. The Ts-VI-derived EF reproduces in situ EF reasonably well, with BIAS and root-mean-square difference (RMSD) of less than 0.07 and 0.13, respectively. Surface net radiation is shown to be systematically overestimated by as much as about 60 W/m2. Satisfactory validation results for the Ts-VI-derived sensible and latent heat fluxes have been obtained, with RMSD within 54 W/m2. The simplicity and ease of use of the Ts-VI triangle method show great potential for estimating regional ET with acceptable accuracy, which is of critical significance for better understanding water and energy budgets on the Earth. Nevertheless, more validation work should be carried out over various climatic regions and under other land use/land cover conditions in the future.
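The two bookkeeping quantities in this abstract, the evaporative fraction and the energy balance closure ratio, follow directly from the four surface energy components. The sketch below uses hypothetical half-hourly values to show both definitions, including the closure-forced variant of EF sometimes used when the measured budget does not close; the numbers are illustrative, not site data.

```python
# Hypothetical half-hourly surface energy components [W m-2]
Rn, G = 520.0, 60.0                      # net radiation, ground heat flux
H, LE = 180.0, 230.0                     # sensible and latent heat fluxes

closure = (H + LE) / (Rn - G)            # eddy covariance closure ratio
EF = LE / (Rn - G)                       # EF as defined in the abstract
EF_forced = LE / (H + LE)                # EF from turbulent fluxes alone
print(round(closure, 2), round(EF, 2), round(EF_forced, 2))
```

A closure ratio below 1, as in this example, is the "lack of closure" the abstract reports; how one redistributes the missing energy directly changes the EF against which satellite estimates are validated.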