923 results for Extensions
Abstract:
We propose a system that can reliably track multiple cars in congested traffic environments. The core of our system is a sequential Monte Carlo algorithm, which provides robustness against the problems arising from the close proximity of vehicles. By directly modelling occlusions and collisions between cars, we obtain promising results on an urban traffic dataset. Extensions to this initial framework are also suggested. © 2010 IEEE.
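The tracker above is built on a sequential Monte Carlo (particle) filter. Purely as a point of reference, the sketch below shows one generic particle-filter update for a single tracked car under a constant-velocity motion model; the state layout, noise parameters and resampling rule are illustrative assumptions, not the paper's implementation (which additionally models occlusions and collisions between vehicles).

```python
import numpy as np

def smc_step(particles, weights, observation, motion_std=1.0, obs_std=2.0, rng=None):
    """One sequential Monte Carlo (particle filter) update for a single tracked car.

    particles:   (N, 4) array of [x, y, vx, vy] state hypotheses.
    weights:     (N,) normalized particle weights.
    observation: (2,) observed [x, y] position of the car in the current frame.
    """
    rng = rng or np.random.default_rng()
    # Predict: propagate each particle with a constant-velocity model plus noise.
    particles[:, :2] += particles[:, 2:]
    particles += rng.normal(0.0, motion_std, particles.shape)
    # Update: re-weight particles by the likelihood of the observed position.
    sq_dist = np.sum((particles[:, :2] - observation) ** 2, axis=1)
    weights = weights * np.exp(-0.5 * sq_dist / obs_std ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses (particle degeneracy).
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```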
Abstract:
Cambridge Flow Solutions Ltd, Compass House, Vision Park, Cambridge, CB4 9AD, UK
Real-world simulation challenges are getting bigger: virtual aero-engines with multistage blade rows coupled with their secondary air systems and with fully featured geometry; environmental flows at meta-scales over resolved cities; synthetic battlefields. It is clear that the future of simulation is scalable, end-to-end parallelism. To address these challenges we have reported in a sequence of papers a series of inherently parallel building blocks based on the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, a RANS flow solver, post-processing, and geometry management and editing. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh driven by the underpinning Level Set and managed by mesh-quality optimization algorithms; this permits third-party flow solvers to be deployed. This paper continues this sequence by reporting and demonstrating two main novelties: variable-depth volume mesh refinement enabling variable surface mesh refinement, and a radical rework of the mesh generation into a bottom-up system based on Space Filling Curves. Also reported are the associated extensions to body-conformal mesh export. Everything is implemented in a scalable, parallel manner. As a practical demonstration, meshes of guaranteed quality are generated for a fully resolved, generic aircraft carrier geometry, a cooled disc brake assembly and a B747 in landing configuration. Copyright © 2009 by W.N. Dawes.
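One of the two novelties reported above is a bottom-up mesh generator organized along Space Filling Curves. Purely for illustration, the sketch below computes the 3-D Morton (Z-order) key commonly used to order octree cells along such a curve, so that contiguous key ranges correspond to spatially compact blocks of cells; the function names and the 21-bit-per-axis limit are assumptions of this sketch, not details taken from the paper.

```python
def part1by2(n: int) -> int:
    """Spread the bits of a 21-bit integer so they occupy every third bit position."""
    n &= 0x1FFFFF                                 # keep 21 bits per axis
    n = (n | (n << 32)) & 0x1F00000000FFFF
    n = (n | (n << 16)) & 0x1F0000FF0000FF
    n = (n | (n << 8))  & 0x100F00F00F00F00F
    n = (n | (n << 4))  & 0x10C30C30C30C30C3
    n = (n | (n << 2))  & 0x1249249249249249
    return n

def morton3d(i: int, j: int, k: int) -> int:
    """Interleave the bits of cell indices (i, j, k) into a Z-order (Morton) key.

    Sorting octree cells by this key lays them out along a space-filling curve,
    which makes it straightforward to partition the cells into contiguous,
    spatially compact chunks for a bottom-up, parallel mesh generator.
    """
    return part1by2(i) | (part1by2(j) << 1) | (part1by2(k) << 2)

# Example: neighbouring cells receive nearby keys.
print(morton3d(0, 0, 0), morton3d(1, 0, 0), morton3d(1, 1, 1))   # 0 1 7
```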
Abstract:
It is shown in the paper how robustness can be guaranteed for consensus protocols with heterogeneous dynamics in a scalable and decentralized way, i.e. by each agent satisfying a test that does not require knowledge of the entire network. Random graph examples illustrate that the proposed certificates are not conservative for classes of large-scale networks, despite the heterogeneity of the dynamics, which is a distinctive feature of this work. The conditions hold for symmetric protocols, and more conservative stability conditions are given for general nonsymmetric interconnections. Finally, nonlinear extensions in an IQC framework are discussed. Copyright © 2005 IFAC.
Abstract:
This paper describes a structured SVM framework suitable for noise-robust medium/large vocabulary speech recognition. Several theoretical and practical extensions to previous work on small vocabulary tasks are detailed. The joint feature space based on word models is extended to allow context-dependent triphone models to be used. Interpreting the structured SVM as a large margin log-linear model shows that there is an implicit assumption that the prior of the discriminative parameter is a zero-mean Gaussian. However, depending on the definition of the likelihood feature space, a non-zero prior may be more appropriate. A general Gaussian prior is therefore incorporated into the large margin training criterion in a form that allows the cutting plane algorithm to be directly applied. To further speed up the training process, the 1-slack algorithm, caching of competing hypotheses and parallelization strategies are also proposed. The performance of structured SVMs is evaluated on a noise-corrupted medium vocabulary speech recognition task: AURORA 4. © 2011 IEEE.
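To make the prior extension concrete, the following is a sketch, with illustrative notation rather than the paper's exact feature space or loss, of a structured-SVM large-margin criterion in which the usual zero-mean regulariser is replaced by a general Gaussian prior N(μ, Σ) on the discriminative parameter w. A change of variables w̃ = Σ^{-1/2}(w − μ) restores the standard quadratic regulariser, which is why cutting-plane training still applies.

```latex
% Sketch only: large-margin criterion with a general Gaussian prior N(mu, Sigma)
% on the discriminative parameter w; L is the loss, phi the joint feature space.
\min_{w}\; \tfrac{1}{2}\,(w-\mu)^{\mathsf T}\Sigma^{-1}(w-\mu)
  \;+\; C\sum_{i}\Big[\max_{y\neq y_i}\big\{L(y_i,y)+w^{\mathsf T}\phi(x_i,y)\big\}
  \;-\; w^{\mathsf T}\phi(x_i,y_i)\Big]_{+}
```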
Abstract:
Vector Taylor Series (VTS) model-based compensation is a powerful approach for noise-robust speech recognition. An important extension to this approach is VTS adaptive training (VAT), which allows canonical models to be estimated on diverse noise-degraded training data. These canonical models can be estimated using EM-based approaches, allowing simple extensions to discriminative VAT (DVAT). However, to ensure a diagonal corrupted-speech covariance matrix, the Jacobian (loading matrix) relating the noise and clean speech is diagonalised. In this work an approach is proposed for yielding optimal diagonal loading matrices, based on minimising the expected KL-divergence between the distributions obtained with the diagonal loading matrix and the "correct" distributions. The performance of DVAT using the standard and optimal diagonalisation was evaluated on both in-car collected data and the Aurora4 task. © 2012 IEEE.
Abstract:
Developing a theoretical description of turbulent plumes, the likes of which may be seen rising above industrial chimneys, is a daunting thought. Plumes are ubiquitous on a wide range of scales in both the natural and the man-made environments. Examples that immediately come to mind are the vapour plumes above industrial smoke stacks or the ash plumes forming particle-laden clouds above an erupting volcano. However, plumes also occur where they are less visually apparent, such as the rising stream of warm air above a domestic radiator, of oil from a subsea blowout or, at a larger scale, of air above the so-called urban heat island. In many instances, not only is the plume itself of interest but also its influence on the environment as a whole through the process of entrainment. Zeldovich (1937, The asymptotic laws of freely-ascending convective flows. Zh. Eksp. Teor. Fiz., 7, 1463-1465 (in Russian)), Batchelor (1954, Heat convection and buoyancy effects in fluids. Q. J. R. Meteor. Soc., 80, 339-358) and Morton et al. (1956, Turbulent gravitational convection from maintained and instantaneous sources. Proc. R. Soc. Lond. A, 234, 1-23) laid the foundations for classical plume theory, a theoretical description that is elegant in its simplicity and yet encapsulates the complex turbulent engulfment of ambient fluid into the plume. Testament to the insight and approach developed in these early models of plumes is that the essential theory remains unchanged and is widely applied today. We describe the foundations of plume theory and link the theoretical developments with the measurements made in experiments necessary to close these models, before discussing some recent developments in plume theory, including an approach which generalizes results obtained separately for the Boussinesq and the non-Boussinesq plume cases. The theory presented - despite its simplicity - has been very successful at describing and explaining the behaviour of plumes across the wide range of scales at which they are observed. We present solutions to the coupled set of ordinary differential equations (the plume conservation equations) that Morton et al. (1956) derived from the Navier-Stokes equations which govern fluid motion. In order to describe and contrast the bulk behaviour of rising plumes from general area sources, we present closed-form solutions to the plume conservation equations that were achieved by solving for the variation with height of Morton's non-dimensional flux parameter Γ; this single flux parameter gives a unique representation of the behaviour of steady plumes and enables a characterization of the different types of plume. We discuss advantages of solutions in this form before describing extensions to plume theory and suggesting directions for new research. © 2010 The Author. Published by Oxford University Press on behalf of the Institute of Mathematics and its Applications. All rights reserved.
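For reference, the plume conservation equations referred to above take the following form in one common top-hat formulation for an unstratified environment (Q: volume flux, M: specific momentum flux, F: buoyancy flux, α: entrainment coefficient); the exact prefactors, and hence the constant appearing in Morton's flux parameter Γ, depend on the convention adopted, so treat this as a sketch rather than the article's precise notation.

```latex
% Plume conservation equations (one common top-hat convention, unstratified ambient)
\frac{\mathrm{d}Q}{\mathrm{d}z} = 2\alpha M^{1/2}, \qquad
\frac{\mathrm{d}M}{\mathrm{d}z} = \frac{F Q}{M}, \qquad
\frac{\mathrm{d}F}{\mathrm{d}z} = 0,
\qquad\text{with}\qquad
\Gamma = \frac{5 F Q^{2}}{8 \alpha M^{5/2}} .
```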
Abstract:
The brain extracts useful features from a maelstrom of sensory information, and a fundamental goal of theoretical neuroscience is to work out how it does so. One proposed feature extraction strategy is motivated by the observation that the meaning of sensory data, such as the identity of a moving visual object, is often more persistent than the activation of any single sensory receptor. This notion is embodied in the slow feature analysis (SFA) algorithm, which uses “slowness” as a heuristic by which to extract semantic information from multi-dimensional time series. Here, we develop a probabilistic interpretation of this algorithm, showing that inference and learning in the limiting case of a suitable probabilistic model yield exactly the results of SFA. Similar equivalences have proved useful in interpreting and extending comparable algorithms such as independent component analysis. For SFA, we use the equivalent probabilistic model as a conceptual springboard with which to motivate several novel extensions to the algorithm.
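As background for the algorithm discussed above, the following is a minimal sketch of linear SFA itself (not of the probabilistic interpretation or the proposed extensions): features are chosen to have the smallest temporal-difference variance subject to unit variance and decorrelation, which reduces to a generalized eigenvalue problem. Function and variable names are illustrative.

```python
import numpy as np
from scipy.linalg import eigh

def slow_feature_analysis(X, n_features=2):
    """Minimal linear SFA sketch.

    X: (T, D) multi-dimensional time series.
    Returns the n_features slowest linear features and the projection matrix.
    """
    X = X - X.mean(axis=0)                 # centre the data
    dX = np.diff(X, axis=0)                # discrete-time derivative
    A = dX.T @ dX / len(dX)                # covariance of the derivative
    B = X.T @ X / len(X)                   # covariance of the signal
    # Generalized eigenproblem A w = lambda B w; the smallest eigenvalues
    # correspond to the slowest (most temporally persistent) features.
    eigvals, eigvecs = eigh(A, B)
    W = eigvecs[:, :n_features]
    return X @ W, W
```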
Abstract:
Quantile regression refers to the process of estimating the quantiles of a conditional distribution and has many important applications within econometrics and data mining, among other domains. In this paper, we show how to estimate these conditional quantile functions within a Bayes risk minimization framework using a Gaussian process prior. The resulting non-parametric probabilistic model is easy to implement and allows non-crossing quantile functions to be enforced. Moreover, it can directly be used in combination with tools and extensions of standard Gaussian Processes such as principled hyperparameter estimation, sparsification, and quantile regression with input-dependent noise rates. No existing approach enjoys all of these desirable properties. Experiments on benchmark datasets show that our method is competitive with state-of-the-art approaches. © 2009 IEEE.
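As a pointer to the underlying risk, the sketch below shows the pinball (tilted absolute) loss, whose Bayes-optimal predictor is the conditional τ-quantile; the Gaussian-process model above can be read as minimizing a risk of this kind, although the code is only an illustration and not the paper's estimator.

```python
import numpy as np

def pinball_loss(y, f, tau):
    """Pinball loss: its minimizer over constant predictors is the tau-quantile of y."""
    diff = y - f
    return np.mean(np.maximum(tau * diff, (tau - 1.0) * diff))

# Sanity check: the best constant predictor matches the empirical 0.9-quantile.
y = np.random.default_rng(0).normal(size=1000)
grid = np.linspace(-3.0, 3.0, 601)
best = grid[np.argmin([pinball_loss(y, f, 0.9) for f in grid])]
print(best, np.quantile(y, 0.9))    # the two values should be close
```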
Abstract:
An ultrasound image is created from backscattered echoes originating from both diffuse and directional scattering. It is potentially useful to separate these two components for the purpose of tissue characterization. This article presents several models for visualization of scattering fields in 3-dimensional (3D) ultrasound imaging. By scanning the same anatomy from multiple directions, we can observe the variation of specular intensity as a function of the viewing angle. This article considers two models for estimating the diffuse and specular components of the backscattered intensity: a modification of the well-known Phong reflection model and an existing exponential model. We examine 2-dimensional implementations and also propose novel 3D extensions of these models in which the probe is not constrained to rotate within a plane. Both simulation and experimental results show that improved performance can be achieved with the 3D models. © 2013 by the American Institute of Ultrasound in Medicine.
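For orientation only, the expressions below give generic, textbook-style forms of the two ingredients compared above: a diffuse term plus a directional specular lobe whose strength falls off with the angle θ between the insonification direction and the surface normal. These are illustrative parameterizations, not the modified Phong or exponential models as specified in the article, whose details (and the 3D extension to unconstrained probe motion) differ.

```latex
% Illustrative angular models for backscattered intensity (sketch only):
I_{\text{Phong}}(\theta) \approx I_d + I_s \cos^{n}\!\theta,
\qquad
I_{\text{exp}}(\theta) \approx I_d + I_s \, e^{-\theta^{2}/(2\sigma^{2})},
% where I_d and I_s are the diffuse and specular components, and n, sigma
% control the width of the specular lobe.
```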
Abstract:
Control laws to synchronize attitudes in a swarm of fully actuated rigid bodies, in the absence of a common reference attitude or hierarchy in the swarm, are proposed in [Smith, T. R., Hanssmann, H., & Leonard, N. E. (2001). Orientation control of multiple underwater vehicles with symmetry-breaking potentials. In Proc. 40th IEEE conf. decision and control (pp. 4598-4603); Nair, S., & Leonard, N. E. (2007). Stable synchronization of rigid body networks. Networks and Heterogeneous Media, 2(4), 595-624]. The present paper studies two separate extensions using the same energy-shaping approach: (i) locally synchronizing the rigid bodies' attitudes without restricting their final motion, and (ii) relaxing the communication topology from undirected, fixed and connected to directed, varying and uniformly connected. The specific strategies that must be developed for these extensions illustrate the limitations of attitude control with reduced information. © 2008 Elsevier Ltd.
Abstract:
This paper studies some extensions to the decentralized attitude synchronization of identical rigid bodies. The rigid bodies are modelled by fully actuated Euler equations; the communication links between them are limited, and the available information is restricted to relative orientations and angular velocities. In particular, no leader or external reference dictates the swarm's behavior. The control laws are derived using two classical approaches of nonlinear control: tracking and energy shaping. This leads to a comparison of two corresponding methods which are currently considered for distributed synchronization: consensus and stabilization of mechanical systems with symmetries. © 2007 IEEE.
Abstract:
This paper investigates the effect of the burnup coupling scheme on the numerical stability and accuracy of coupled Monte Carlo depletion calculations. We show that in some cases, even the predictor-corrector method with relatively short time steps can be numerically unstable. In addition, we present two possible extensions to the Euler predictor-corrector (PC) method, which is typically used in coupled burnup calculations. These modifications allow longer time steps to be used while maintaining numerical stability and accuracy. © 2013 Elsevier Ltd. All rights reserved.
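For context, the sketch below shows the basic Euler predictor-corrector step that serves as the baseline scheme above: deplete with beginning-of-step reaction rates, re-evaluate the rates at the predicted end-of-step composition, deplete again from the start, and average the two results. The `transport_solve` callback and the matrix-exponential depletion solver are stand-ins (assumptions of this sketch) for the Monte Carlo transport and burnup solvers actually coupled in such calculations; the paper's two proposed extensions are not shown.

```python
import numpy as np
from scipy.linalg import expm

def euler_pc_step(n0, transport_solve, dt):
    """One Euler predictor-corrector burnup step (baseline scheme, sketch only).

    n0: vector of nuclide number densities at the start of the step.
    transport_solve(n): returns the burnup (depletion) matrix A for composition n,
                        standing in for a Monte Carlo transport calculation.
    dt: length of the burnup step.
    """
    A_bos = transport_solve(n0)            # beginning-of-step reaction rates
    n_pred = expm(A_bos * dt) @ n0         # predictor: deplete with BOS rates
    A_eos = transport_solve(n_pred)        # rates re-evaluated at predicted EOS state
    n_corr = expm(A_eos * dt) @ n0         # corrector: deplete again from the start
    return 0.5 * (n_pred + n_corr)         # average the two end-of-step compositions
```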
Abstract:
The pharmaceutical industry is at a crossroads. There are growing concerns that illegitimate products are penetrating the supply chain, and there are proposals in many countries to apply RFID and other traceability technologies to solve this problem. However, there are several trade-offs, and one of the most crucial is between data visibility and confidentiality. In this paper, we use the TrakChain assessment framework tools to study the US pharmaceutical supply chain and to compare candidate solutions for achieving traceability data security: Point-of-Dispense Authentication, Network-based electronic Pedigree, and Document-based electronic Pedigree. We also propose extensions to a supply chain authorization language that is able to capture expressive data-sharing conditions considered necessary by the industry's trading partners. © 2013 IEEE.
Abstract:
This paper is about detecting bipedal motion in video sequences by using point trajectories within a classification framework. Given a number of point trajectories, we find a subset of points which arise from feet in bipedal motion by analysing their spatio-temporal correlation in a pairwise fashion. To this end, we introduce probabilistic trajectories as our new features, which associate each point over a sufficiently long time period in the presence of noise. They are extracted from directed acyclic graphs whose edges represent temporal point correspondences and are weighted with their matching probability in terms of appearance and location. The benefit of the new representation is that it practically tolerates inherent ambiguity, for example due to occlusions. We then learn the correlation between the motion of two feet using the probabilistic trajectories in a decision forest classifier. The effectiveness of the algorithm is demonstrated in experiments on image sequences captured with a static camera, and extensions to deal with a moving camera are discussed. © 2013 Elsevier B.V. All rights reserved.
Abstract:
This paper presents a complete system for expressive visual text-to-speech (VTTS), which is capable of producing expressive output, in the form of a 'talking head', given an input text and a set of continuous expression weights. The face is modeled using an active appearance model (AAM), and several extensions are proposed which make it more applicable to the task of VTTS. The model allows for normalization with respect to both pose and blink state which significantly reduces artifacts in the resulting synthesized sequences. We demonstrate quantitative improvements in terms of reconstruction error over a million frames, as well as in large-scale user studies, comparing the output of different systems. © 2013 IEEE.