29 results for Canonical momenta


Relevance:

20.00%

Publisher:

Abstract:

The normal shock wave / boundary layer interaction (normal SBLI) is important to the operation and performance of a supersonic inlet, and the normal SBLI is particularly prominent in external compression inlets. To improve our understanding of such interactions, it is helpful to make use of fundamental flows which capture the main elements of inlets, without resorting to the level of complexity and system integration associated with full-geometry inlets. In this paper, several fundamental flow-field configurations have been considered as possible test cases to represent the normal SBLI aspects found in typical external compression inlets, and it was found that the spillage-diffuser more closely retains the basic flow features of an external compression inlet than the other configurations. In particular, this flow-field allows the normal shock Mach number, as well as the amount and rate of subsonic diffusion, to all be held approximately constant and independent of the application of flow control. In addition, a survey of several external compression inlets was conducted to quantify the flow and geometric parameters of the spillage-diffuser relevant to actual inlets. The results indicated that such a flow may be especially relevant if the terminal Mach number is about 1.3 to 1.4, the confinement parameter is around 10%, the width is around twice or three times the height, and the area expansion just downstream of the shock is on the conservative side of the stall limit for incompressible diffusers. © 2013 by the American Institute of Aeronautics and Astronautics, Inc. All rights reserved.
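
As rough context for the quoted terminal Mach numbers of 1.3 to 1.4, the sketch below evaluates the textbook normal-shock relations (standard gas dynamics, not taken from the paper; γ = 1.4 for air is an assumption):

import math

GAMMA = 1.4  # ratio of specific heats for air (assumed)

def normal_shock(M1, g=GAMMA):
    """Textbook normal-shock relations for upstream Mach number M1 > 1."""
    M2 = math.sqrt((1 + 0.5 * (g - 1) * M1**2) / (g * M1**2 - 0.5 * (g - 1)))
    p_ratio = 1 + 2 * g / (g + 1) * (M1**2 - 1)  # static pressure ratio p2/p1
    p0_ratio = ((0.5 * (g + 1) * M1**2 / (1 + 0.5 * (g - 1) * M1**2)) ** (g / (g - 1))
                * (2 * g / (g + 1) * M1**2 - (g - 1) / (g + 1)) ** (-1 / (g - 1)))
    return M2, p_ratio, p0_ratio  # downstream Mach, p2/p1, stagnation-pressure ratio

for M1 in (1.3, 1.4):
    M2, p21, p0 = normal_shock(M1)
    print(f"M1 = {M1}: M2 = {M2:.3f}, p2/p1 = {p21:.3f}, p02/p01 = {p0:.4f}")

At these shock strengths the stagnation-pressure loss across the normal shock is only a few percent.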

Relevance:

20.00%

Publisher:

Abstract:

The normal shock wave/boundary-layer interaction is important to the operation and performance of a supersonic inlet, and the normal shock wave/boundary-layer interaction is particularly prominent in external compression inlets. To improve understanding of such interactions, it is helpful to make use of fundamental flows that capture the main elements of inlets, without resorting to the level of complexity and system integration associated with full-geometry inlets. In this paper, several fundamental flowfield configurations have been considered as possible test cases to represent the normal shock wave/boundary-layer interaction aspects found in typical external compression inlets, and it was found that the spillage diffuser more closely retains the basic flow features of an external compression inlet than the other configurations. In particular, this flowfield allows the normal shock Mach number as well as the amount and rate of subsonic diffusion to all be held approximately constant and independent of the application of flow control. In addition, a survey of several external compression inlets was conducted to quantify the flow and geometric parameters of the spillage diffuser relevant to actual inlets. The results indicated that such a flow may be especially relevant if the terminal Mach number is about 1.3 to 1.4, the confinement parameter is around 10%, and the width is around twice or three times the height. In addition, the area expansion downstream of the shock should be limited to the conservative side of incipient stall based on incompressible diffusers. Copyright © 2013 by the authors.
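
To make the "amount and rate of subsonic diffusion" concrete, the sketch below applies the standard isentropic area-Mach relation to the subsonic flow behind the shock (a generic one-dimensional estimate, not the paper's spillage-diffuser analysis; γ = 1.4 and the example numbers are assumptions):

import math

GAMMA = 1.4  # assumed for air

def area_ratio(M, g=GAMMA):
    """Isentropic A/A* as a function of Mach number."""
    return (1.0 / M) * ((2.0 / (g + 1)) * (1 + 0.5 * (g - 1) * M**2)) ** ((g + 1) / (2 * (g - 1)))

def subsonic_mach_after_diffusion(M_in, AR, g=GAMMA):
    """Subsonic Mach number after an isentropic area expansion AR = A_out/A_in,
    found by bisection on the subsonic branch of the area-Mach relation."""
    target = area_ratio(M_in, g) * AR   # A_out / A*
    lo, hi = 1e-4, M_in                 # diffusion lowers M on the subsonic branch
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if area_ratio(mid, g) > target:
            lo = mid                    # too small an M gives too large an A/A*
        else:
            hi = mid
    return 0.5 * (lo + hi)

# e.g. post-shock Mach ~0.79 (behind an M = 1.3 normal shock) and a 20% area
# expansion (an arbitrary example value)
print(subsonic_mach_after_diffusion(0.79, 1.2))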

Relevance:

10.00%

Publisher:

Abstract:

As the use of found data increases, more systems are being built using adaptive training. Here transforms are used to represent unwanted acoustic variability, e.g. speaker and acoustic environment changes, allowing a canonical model that models only the "pure" variability of speech to be trained. Adaptive training may be described within a Bayesian framework. By using complexity control approaches to ensure robust parameter estimates, the standard point estimate adaptive training can be justified within this Bayesian framework. However during recognition there is usually no control over the amount of data available. It is therefore preferable to be able to use a full Bayesian approach to applying transforms during recognition rather than the standard point estimates. This paper discusses various approximations to Bayesian approaches including a new variational Bayes approximation. The application of these approaches to state-of-the-art adaptively trained systems using both CAT and MLLR transforms is then described and evaluated on a large vocabulary speech recognition task. © 2005 IEEE.
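
For orientation only, the snippet below shows the point-estimate form of an MLLR mean transform being applied to a set of canonical Gaussian means (purely illustrative random numbers; the paper's contribution is to replace this single point estimate with Bayesian and variational Bayes approximations during recognition):

import numpy as np

rng = np.random.default_rng(0)
D = 39          # e.g. static + delta feature dimension (assumed)
n_gauss = 8

# canonical model means (illustrative random values)
canonical_means = rng.normal(size=(n_gauss, D))

# one MLLR mean transform W = [A b], shared by all Gaussians in a regression class
A = np.eye(D) + 0.05 * rng.normal(size=(D, D))
b = 0.1 * rng.normal(size=D)

# point-estimate adaptation: each adapted mean is an affine map of the canonical mean
adapted_means = canonical_means @ A.T + b

# a full Bayesian treatment would instead average the likelihood over a posterior
# distribution on (A, b) rather than plugging in this single point estimate
print(adapted_means.shape)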

Relevance:

10.00%

Publisher:

Abstract:

Model-based compensation schemes are a powerful approach for noise robust speech recognition. Recently there have been a number of investigations into adaptive training, and into estimating the noise models used for model adaptation. This paper examines the use of EM-based schemes for both canonical model and noise estimation, including discriminative adaptive training. One issue that arises when estimating the noise model is a mismatch between the noise estimation approximation and the final model compensation scheme. This paper proposes FA-style compensation, where this mismatch is eliminated, though at the expense of a sensitivity to the initial noise estimates. EM-based discriminative adaptive training is evaluated on in-car and Aurora4 tasks. FA-style compensation is then evaluated in an incremental mode on the in-car task. © 2011 IEEE.
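
As a toy, one-dimensional instance of EM-based noise estimation with the canonical model held fixed (additive corruption in the feature domain is assumed here purely for simplicity, rather than the log-domain mismatch used with real speech features; this is not the paper's scheme):

import numpy as np

rng = np.random.default_rng(0)

# toy 1-D "canonical" (clean) model: a fixed two-component GMM
weights = np.array([0.5, 0.5])
mu_x = np.array([-2.0, 2.0])
var_x = np.array([1.0, 1.0])

# simulate corrupted observations y = x + n with an unknown additive noise mean
true_mu_n, var_n = 1.5, 0.5
comp = rng.choice(2, size=2000, p=weights)
y = rng.normal(mu_x[comp], np.sqrt(var_x[comp])) + rng.normal(true_mu_n, np.sqrt(var_n), size=2000)

# EM for the noise mean, holding the canonical model (and the variances) fixed
mu_n = 0.0
for _ in range(20):
    # E-step: component posteriors under the current compensated model
    var_y = var_x + var_n
    log_resp = (np.log(weights)
                - 0.5 * np.log(2 * np.pi * var_y)
                - 0.5 * (y[:, None] - mu_x - mu_n) ** 2 / var_y)
    resp = np.exp(log_resp - log_resp.max(axis=1, keepdims=True))
    resp /= resp.sum(axis=1, keepdims=True)
    # M-step: responsibility-weighted update of the noise mean
    mu_n = np.sum(resp * (y[:, None] - mu_x) / var_y) / np.sum(resp / var_y)

print(mu_n)  # should approach the true value of 1.5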

Relevance:

10.00%

Publisher:

Abstract:

We consider unforced, statistically axisymmetric turbulence evolving in the presence of a background rotation, an imposed stratification, or a uniform magnetic field. We focus on two canonical cases: Saffman turbulence, in which E(κ → 0) ∼ κ², and Batchelor turbulence, in which E(κ → 0) ∼ κ⁴. It has recently been shown that, provided the large scales evolve in a self-similar manner, u⊥² ℓ⊥² ℓ∥ = constant in Saffman turbulence and u⊥² ℓ⊥⁴ ℓ∥ = constant in Batchelor turbulence (Davidson, 2009, 2010). Here the subscripts ⊥ and ∥ indicate directions perpendicular and parallel to the axis of symmetry, and ℓ⊥, ℓ∥, and u⊥ are suitably defined integral scales. These constraints on the integral scales allow us to make simple, testable predictions for the temporal evolution of ℓ⊥, ℓ∥, and u⊥ in rotating, stratified and MHD turbulence.
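
To illustrate how such invariants yield testable decay predictions, consider the isotropic limit ℓ⊥ ∼ ℓ∥ ∼ ℓ (a simpler case than the anisotropic flows treated above), where the constraints reduce to the classical Saffman and Loitsyansky invariants and, combined with the usual estimate for the energy decay rate, give the familiar decay laws:

\[
\frac{\mathrm{d}u^2}{\mathrm{d}t} \sim -\frac{u^3}{\ell}, \qquad
u^2 \ell^3 = \mathrm{const} \;\Rightarrow\; u^2 \propto t^{-6/5}, \quad \ell \propto t^{2/5},
\]
\[
u^2 \ell^5 = \mathrm{const} \;\Rightarrow\; u^2 \propto t^{-10/7}, \quad \ell \propto t^{2/7}.
\]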

Relevance:

10.00%

Publisher:

Abstract:

Vector Taylor Series (VTS) model-based compensation is a powerful approach for noise robust speech recognition. An important extension to this approach is VTS adaptive training (VAT), which allows canonical models to be estimated on diverse noise-degraded training data. These canonical models can be estimated using EM-based approaches, allowing simple extensions to discriminative VAT (DVAT). However, to ensure a diagonal corrupted speech covariance matrix, the Jacobian (loading matrix) relating the noise and clean speech is diagonalised. In this work an approach for yielding optimal diagonal loading matrices, based on minimising the expected KL-divergence between the distributions obtained with the diagonal loading matrix and the "correct" distributions, is proposed. The performance of DVAT using the standard and optimal diagonalisation was evaluated on both in-car collected data and the Aurora4 task. © 2012 IEEE.
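
As a simplified illustration of the compensation being diagonalised (log-spectral domain only, with no DCT, deltas or channel term; this is not the paper's optimal-diagonalisation scheme), first-order VTS compensation and its elementwise Jacobian look like:

import numpy as np

def vts_compensate(mu_x, var_x, mu_n, var_n):
    """First-order VTS compensation in the log-spectral domain (simplified:
    no DCT, no channel, diagonal covariances). All arguments are 1-D arrays."""
    # mismatch function y = log(exp(x) + exp(n)), expanded about (mu_x, mu_n)
    mu_y = mu_x + np.log1p(np.exp(mu_n - mu_x))
    # Jacobian dy/dx at the expansion point (elementwise, i.e. already diagonal)
    J = 1.0 / (1.0 + np.exp(mu_n - mu_x))
    # diagonal corrupted-speech covariance: var_y = J^2 var_x + (1-J)^2 var_n
    var_y = J**2 * var_x + (1.0 - J)**2 * var_n
    return mu_y, var_y, J

mu_x = np.array([2.0, 1.0, 0.5])
var_x = np.array([0.3, 0.2, 0.2])
mu_n = np.array([0.5, 0.8, 1.5])
var_n = np.array([0.1, 0.1, 0.1])
print(vts_compensate(mu_x, var_x, mu_n, var_n))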

Relevance:

10.00%

Publisher:

Abstract:

The object of this paper is to give a complete treatment of the realizability of positive-real biquadratic impedance functions by six-element series-parallel networks comprising resistors, capacitors, and inductors. This question was studied but not fully resolved in the classical electrical circuit literature. Renewed interest in this question arises in the synthesis of passive mechanical impedances. Recent work by the authors has introduced the concept of a regular positive-real function. It was shown that five-element networks are capable of realizing all regular and some (but not all) nonregular biquadratic positive-real functions. Accordingly, the focus of this paper is on the realizability of nonregular biquadratics. It will be shown that the only six-element series-parallel networks which are capable of realizing nonregular biquadratic impedances are those with three reactive elements or four reactive elements. We identify a set of networks that can realize all the nonregular biquadratic functions for each of the two cases. The realizability conditions for the networks are expressed in terms of a canonical form for biquadratics. The nonregular realizable region for each of the networks is explicitly characterized. © 2004-2012 IEEE.
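
For reference, the classical positive-realness test for a biquadratic with nonnegative coefficients (a textbook condition, not the canonical form or the realizability conditions developed in the paper) is:

\[
Z(s) = \frac{A s^2 + B s + C}{D s^2 + E s + F}, \qquad A, B, C, D, E, F \ge 0,
\]
\[
Z \ \text{positive-real} \;\Longleftrightarrow\; BE \,\ge\, \bigl(\sqrt{AF} - \sqrt{CD}\bigr)^{2},
\]
since \(\operatorname{Re} Z(j\omega)\) has the sign of \(AD\,\omega^{4} + (BE - AF - CD)\,\omega^{2} + CF\).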

Relevance:

10.00%

Publisher:

Abstract:

Three-dimensional direct numerical simulation (DNS) of exhaust gas recirculation (EGR)-type turbulent combustion operated under moderate and intense low-oxygen dilution (MILD) conditions has been carried out to study the flame structure and flame interaction. In order to achieve adequate EGR-type initial/inlet mixture fields, partially premixed mixture fields which are correlated with the turbulence are carefully preprocessed. The chemical kinetics is modelled using a skeletal mechanism for methane-air combustion. The results suggest that the flame fronts have a thin flame structure and that the direct link between the mean reaction rate and the scalar dissipation rate remains valid in EGR-type combustion under MILD conditions. However, the commonly used canonical flamelet is not fully representative of MILD combustion. During flame-flame interactions, the heat release rate rises above the maximum laminar flame value, while the gradient of the progress variable becomes smaller than the laminar value. It is also proposed that the reaction rate and the scalar gradient can be used as markers for flame interaction. © 2012 The Combustion Institute. Published by Elsevier Inc. All rights reserved.
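
The "direct link" referred to above is of the flamelet type exemplified by the classical premixed relation below (quoted here for context, not taken from the paper), where c is the reaction progress variable, Ñ_c its Favre-averaged scalar dissipation rate, and C_m an order-one flamelet constant:

\[
\overline{\dot{\omega}}_c \;\approx\; \frac{2\,\bar{\rho}\,\widetilde{N}_c}{2 C_m - 1},
\qquad
\widetilde{N}_c \equiv \widetilde{D\,\nabla c \cdot \nabla c}.
\]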

Relevance:

10.00%

Publisher:

Abstract:

We give simple formulas for the canonical metric, gradient, Lie derivative, Riemannian connection, parallel translation, geodesics and distance on the Grassmann manifold of p-planes in ℝⁿ. In these formulas, p-planes are represented as the column space of n × p matrices. The Newton method on abstract Riemannian manifolds proposed by Smith is made explicit on the Grassmann manifold. Two applications - computing an invariant subspace of a matrix and the mean of subspaces - are worked out.
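
A minimal numerical sketch of one of these formulas, the geodesic in the column-space representation used above (assuming NumPy; this is not the authors' implementation):

import numpy as np

def grassmann_geodesic(Y, H, t):
    """Geodesic on the Grassmann manifold starting at span(Y) in direction H.
    Y: n x p with orthonormal columns; H: n x p horizontal tangent (Y.T @ H = 0)."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)   # thin SVD of the tangent
    return Y @ Vt.T @ np.diag(np.cos(s * t)) @ Vt + U @ np.diag(np.sin(s * t)) @ Vt

def project_tangent(Y, G):
    """Project an ambient n x p matrix G onto the horizontal space at Y."""
    return G - Y @ (Y.T @ G)

rng = np.random.default_rng(0)
n, p = 6, 2
Y, _ = np.linalg.qr(rng.normal(size=(n, p)))
H = project_tangent(Y, rng.normal(size=(n, p)))
Yt = grassmann_geodesic(Y, H, 0.3)
print(np.allclose(Yt.T @ Yt, np.eye(p)))   # the geodesic stays on the manifold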

Relevance:

10.00%

Publisher:

Abstract:

We present a combined analytical and numerical study of the early stages (sub-100-fs) of the nonequilibrium dynamics of photoexcited electrons in graphene. We employ the semiclassical Boltzmann equation with a collision integral that includes contributions from electron-electron (e-e) and electron-optical phonon interactions. Taking advantage of circular symmetry and employing the massless Dirac fermion (MDF) Hamiltonian, we are able to perform an essentially analytical study of the e-e contribution to the collision integral. This allows us to take particular care of subtle collinear scattering processes - processes in which incoming and outgoing momenta of the scattering particles lie on the same line - including carrier multiplication (CM) and Auger recombination (AR). These processes have a vanishing phase space for two-dimensional MDF bare bands. However, we argue that electron-lifetime effects, seen in experiments based on angle-resolved photoemission spectroscopy, provide a natural pathway to regularize this pathology, yielding a finite contribution due to CM and AR to the Coulomb collision integral. Finally, we discuss in detail the role of physics beyond the Fermi golden rule by including screening in the matrix element of the Coulomb interaction at the level of the random phase approximation (RPA), focusing in particular on the consequences of various approximations including static RPA screening, which maximizes the impact of CM and AR processes, and dynamical RPA screening, which completely suppresses them. © 2013 American Physical Society.
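
For context, the massless Dirac fermion model has the standard Hamiltonian and linear dispersion shown below; for an Auger-type process (two conduction-band electrons scattering into one conduction-band and one valence-band state) the conservation laws can only be met with equality in the triangle inequality, which pins all momenta to a single line and is why the bare-band phase space for CM and AR vanishes (a schematic kinematic argument, not the paper's regularization):

\[
\hat{\mathcal{H}} = \hbar v_{\mathrm F}\,\boldsymbol{\sigma}\cdot\mathbf{k},
\qquad
\varepsilon_{\pm}(\mathbf{k}) = \pm\,\hbar v_{\mathrm F}\,|\mathbf{k}|,
\]
\[
\mathbf{k}_3 = \mathbf{k}_1 + \mathbf{k}_2 - \mathbf{k}_4
\;\Rightarrow\;
|\mathbf{k}_3| \le |\mathbf{k}_1| + |\mathbf{k}_2| + |\mathbf{k}_4|,
\qquad
\text{while energy conservation requires}\;\;
|\mathbf{k}_1| + |\mathbf{k}_2| + |\mathbf{k}_4| = |\mathbf{k}_3|,
\]
so equality holds only when \(\mathbf{k}_1\), \(\mathbf{k}_2\) and \(-\mathbf{k}_4\) are parallel, i.e. all momenta are collinear.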

Relevance:

10.00%

Publisher:

Abstract:

Copyright © (2014) by the International Machine Learning Society (IMLS). All rights reserved. Classical methods such as Principal Component Analysis (PCA) and Canonical Correlation Analysis (CCA) are ubiquitous in statistics. However, these techniques are only able to reveal linear relationships in data. Although nonlinear variants of PCA and CCA have been proposed, these are computationally prohibitive in the large scale. In a separate strand of recent research, randomized methods have been proposed to construct features that help reveal nonlinear patterns in data. For basic tasks such as regression or classification, random features exhibit little or no loss in performance, while achieving drastic savings in computational requirements. In this paper we leverage randomness to design scalable new variants of nonlinear PCA and CCA; our ideas extend to key multivariate analysis tools such as spectral clustering or LDA. We demonstrate our algorithms through experiments on real-world data, on which we compare against the state-of-the-art. A simple R implementation of the presented algorithms is provided.
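
A minimal sketch of the underlying idea, random Fourier features for a Gaussian kernel followed by ordinary linear PCA (illustrative only; it is not the paper's algorithm or its R implementation, and the parameter choices are arbitrary):

import numpy as np

def random_fourier_features(X, n_features=200, sigma=1.0, seed=0):
    """Rahimi-Recht style random Fourier features approximating a Gaussian kernel."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    W = rng.normal(scale=1.0 / sigma, size=(d, n_features))
    b = rng.uniform(0.0, 2 * np.pi, size=n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

def pca(Z, k):
    """Top-k principal-subspace scores of the (random-feature) data matrix Z."""
    Zc = Z - Z.mean(axis=0)
    _, _, Vt = np.linalg.svd(Zc, full_matrices=False)
    return Zc @ Vt[:k].T

# "nonlinear PCA" in the random-feature sense: linear PCA applied to nonlinear features
X = np.random.default_rng(1).normal(size=(500, 5))
scores = pca(random_fourier_features(X), k=2)
print(scores.shape)   # (500, 2)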

Relevance:

10.00%

Publisher:

Abstract:

© 2015 John P. Cunningham and Zoubin Ghahramani. Linear dimensionality reduction methods are a cornerstone of analyzing high dimensional data, due to their simple geometric interpretations and typically attractive computational properties. These methods capture many data features of interest, such as covariance, dynamical structure, correlation between data sets, input-output relationships, and margin between data classes. Methods have been developed with a variety of names and motivations in many fields, and perhaps as a result the connections between all these methods have not been highlighted. Here we survey methods from this disparate literature as optimization programs over matrix manifolds. We discuss principal component analysis, factor analysis, linear multidimensional scaling, Fisher's linear discriminant analysis, canonical correlations analysis, maximum autocorrelation factors, slow feature analysis, sufficient dimensionality reduction, undercomplete independent component analysis, linear regression, distance metric learning, and more. This optimization framework gives insight to some rarely discussed shortcomings of well-known methods, such as the suboptimality of certain eigenvector solutions. Modern techniques for optimization over matrix manifolds enable a generic linear dimensionality reduction solver, which accepts as input data and an objective to be optimized, and returns, as output, an optimal low-dimensional projection of the data. This simple optimization framework further allows straightforward generalizations and novel variants of classical methods, which we demonstrate here by creating an orthogonal-projection canonical correlations analysis. More broadly, this survey and generic solver suggest that linear dimensionality reduction can move toward becoming a blackbox, objective-agnostic numerical technology.
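
As a toy instance of the "optimization programs over matrix manifolds" viewpoint (not the survey's generic solver), PCA can be posed as maximising tr(MᵀΣM) over matrices with orthonormal columns; a gradient step followed by a QR retraction recovers the leading eigenvector subspace:

import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 10))
Sigma = np.cov(X, rowvar=False)          # sample covariance
k = 3

# PCA as an optimization program over the Stiefel manifold {M : M^T M = I}
M, _ = np.linalg.qr(rng.normal(size=(10, k)))
step = 0.1
for _ in range(500):
    grad = 2 * Sigma @ M                 # Euclidean gradient of tr(M^T Sigma M)
    M, _ = np.linalg.qr(M + step * grad) # gradient step followed by QR retraction

# compare the recovered subspace with the top-k eigenvector subspace
eigvals, eigvecs = np.linalg.eigh(Sigma)
top = eigvecs[:, -k:]
# the singular values below are cosines of principal angles; ~1 means the subspaces coincide
print(np.linalg.svd(top.T @ M, compute_uv=False))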