64 results for Real-world


Relevance:

60.00%

Publisher:

Abstract:

The inhomogeneous Poisson process is a point process that has varying intensity across its domain (usually time or space). For nonparametric Bayesian modeling, the Gaussian process (GP) is a useful way to place a prior distribution on this intensity. The combination of a Poisson process and GP is known as a Gaussian Cox process, or doubly-stochastic Poisson process. Likelihood-based inference in these models requires an intractable integral over an infinite-dimensional random function. In this paper we present the first approach to Gaussian Cox processes in which it is possible to perform inference without introducing approximations or finite-dimensional proxy distributions. We call our method the Sigmoidal Gaussian Cox Process, which uses a generative model for Poisson data to enable tractable inference via Markov chain Monte Carlo. We compare our method to competing methods on synthetic data and apply it to several real-world data sets. Copyright 2009.
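To make the intensity construction concrete, here is a minimal sketch that simulates an inhomogeneous Poisson process with a sigmoid-transformed intensity by Lewis-Shedler thinning. The smooth function `g` stands in for a draw from a GP, and the thinning simulator is purely illustrative: the paper's contribution is tractable MCMC inference for this model, not forward simulation.

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def sample_inhomogeneous_poisson(intensity, lam_max, t_max, rng):
    """Lewis-Shedler thinning: draw a homogeneous Poisson process at rate
    lam_max, then keep each candidate t with probability intensity(t)/lam_max."""
    events, t = [], 0.0
    while True:
        t += rng.expovariate(lam_max)
        if t > t_max:
            break
        if rng.random() < intensity(t) / lam_max:
            events.append(t)
    return events

# Illustrative intensity: lam_max modulated by the sigmoid of a smooth
# function g(t) standing in for a GP sample path (an assumption).
lam_max = 50.0
g = lambda t: math.sin(2.0 * math.pi * t)
intensity = lambda t: lam_max * sigmoid(g(t))

events = sample_inhomogeneous_poisson(intensity, lam_max, 10.0, random.Random(0))
print(len(events))
```

Because the sigmoid bounds the intensity by `lam_max`, the thinning acceptance probability is always valid.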

Relevance:

60.00%

Publisher:

Abstract:

If a product is being designed to be genuinely inclusive, then the designers need to be able to assess the level of exclusion of the product that they are working on and to identify possible areas of improvement. To be of practical use, the assessments need to be quick, consistent and repeatable. The aim of this workshop is to invite attendees to participate in the evaluation of a number of everyday objects using an assessment technique being considered by the workshop organisers. The objectives of the workshop include evaluating the effectiveness of the assessment method, evaluating the accessibility of the products being assessed and suggesting revisions to the assessment scales being used. The assessment technique is to be based on the ONS capability measures [1]. This source recognises fourteen capability scales, of which seven are particularly pertinent to product evaluation, namely: motion, dexterity, reach and stretch, vision, hearing, communication, and intellectual functioning. Each of these scales ranges from 0 (fully able) through 1 (minimal impairment) to 10 (severe impairment). The attendees will be asked to rate the products on these scales. Clearly the assessed accessibility of the product depends on the assumptions made about the context of use. The attendees will be asked to note clearly the assumptions that they are making about the context in which the product is being assessed. For instance, with a hot water bottle, assumptions have to be made about the availability of hot water, and these can affect the overall accessibility rating. The workshop organisers will not specify the context of use, as the aim is to identify how assessors would use the assessment method in the real world. The objects being assessed will include items such as remote controls, pill bottles, food packaging, hot water bottles and mobile telephones. The attendees will be encouraged to assess two or more products in detail.
Helpers will be on hand to assist and observe the assessments. The assessments will be collated and compared, and feedback about the assessment method sought from the attendees. Drawing on a preliminary review of the assessment results, initial conclusions will be presented at the end of the workshop. More detailed analyses will be made available in subsequent proceedings. It is intended that the workshop will provide attendees with an opportunity to perform hands-on assessment of a number of everyday products and identify features which are inclusive and those that are not. It is also intended to encourage an appreciation of the capabilities to be considered when evaluating accessibility.
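As a sketch of how ratings on the seven capability scales might be collated, the snippet below validates a product assessment and reports the scale with the highest demand. The summary rule (taking the maximum across scales) and the example ratings are illustrative assumptions, not the workshop's prescribed procedure.

```python
# The seven ONS capability scales named above; the summary rule (maximum
# demand across scales) is an illustrative assumption only.
SCALES = ("motion", "dexterity", "reach and stretch", "vision",
          "hearing", "communication", "intellectual functioning")

def validate(ratings):
    """Check that an assessment covers all seven scales with 0-10 ratings."""
    for scale in SCALES:
        value = ratings[scale]
        if not 0 <= value <= 10:
            raise ValueError(f"{scale}: rating {value} outside 0-10")
    return ratings

def limiting_demand(ratings):
    """Return the scale with the highest (most excluding) rating."""
    scale = max(SCALES, key=lambda s: ratings[s])
    return scale, ratings[scale]

# Hypothetical ratings for a hot water bottle, assuming hot water is available.
hot_water_bottle = validate({
    "motion": 2, "dexterity": 6, "reach and stretch": 3, "vision": 1,
    "hearing": 0, "communication": 0, "intellectual functioning": 2,
})
print(limiting_demand(hot_water_bottle))
```

Recording the assumed context of use alongside the ratings, as the workshop requires, would simply mean carrying an extra free-text field with each assessment.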

Relevance:

60.00%

Publisher:

Abstract:

We introduce the Pitman Yor Diffusion Tree (PYDT) for hierarchical clustering, a generalization of the Dirichlet Diffusion Tree (Neal, 2001) which removes the restriction to binary branching structure. The generative process is described and shown to result in an exchangeable distribution over data points. We prove some theoretical properties of the model and then present two inference methods: a collapsed MCMC sampler which allows us to model uncertainty over tree structures, and a computationally efficient greedy Bayesian EM search algorithm. Both algorithms use message passing on the tree structure. The utility of the model and algorithms is demonstrated on synthetic and real world data, both continuous and binary.
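The multifurcating branching that distinguishes the PYDT from the binary Dirichlet Diffusion Tree can be sketched through Pitman-Yor style reinforcement: at a branch point, a new data point follows an existing branch with probability proportional to its count less a discount, or starts a new branch. The exact form below (discount `alpha`, concentration `theta`) is our reading of the standard Pitman-Yor scheme, not code from the paper.

```python
def pydt_branch_probs(counts, alpha, theta):
    """Probabilities that a new data point follows each existing branch or
    starts a new one, under Pitman-Yor style reinforcement (a sketch).
    counts: number of data points previously sent down each branch."""
    n, K = sum(counts), len(counts)
    total = n + theta
    follow = [(c - alpha) / total for c in counts]
    new = (theta + alpha * K) / total
    return follow, new

# At a node already visited by 8 points split 5/3 between two branches:
follow, new = pydt_branch_probs([5, 3], alpha=0.5, theta=1.0)
print(follow, new)
```

With `alpha = 0` this collapses to Chinese-restaurant-style reinforcement; a positive discount makes branches with more than two children more likely, which is the non-binary structure the PYDT permits.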

Relevance:

60.00%

Publisher:

Abstract:

Cluster analysis of ranking data, which occurs in consumer questionnaires, voting forms or other inquiries of preferences, attempts to identify typical groups of rank choices. Empirically measured rankings are often incomplete, i.e. different numbers of filled rank positions cause heterogeneity in the data. We propose a mixture approach for clustering of heterogeneous rank data. Rankings of different lengths can be described and compared by means of a single probabilistic model. A maximum entropy approach avoids hidden assumptions about missing rank positions. Parameter estimators and an efficient EM algorithm for unsupervised inference are derived for the ranking mixture model. Experiments on both synthetic data and real-world data demonstrate significantly improved parameter estimates on heterogeneous data when the incomplete rankings are included in the inference process.
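A hedged sketch of the E-step such a mixture model would use: responsibilities are computed in log space so that rankings of different lengths can share one set of mixture weights. The toy component likelihood is a placeholder for the paper's maximum-entropy ranking model, which is not reproduced here.

```python
import math

def e_step(rankings, log_weights, component_loglik):
    """E-step of a mixture over (possibly incomplete) rankings:
    responsibilities r[i][k] proportional to pi_k * p_k(ranking_i),
    normalised per ranking with the log-sum-exp trick.
    component_loglik(k, ranking) is a stand-in for the paper's
    maximum-entropy ranking likelihood (an assumption)."""
    resp = []
    for r in rankings:
        logs = [lw + component_loglik(k, r) for k, lw in enumerate(log_weights)]
        m = max(logs)
        z = m + math.log(sum(math.exp(l - m) for l in logs))
        resp.append([math.exp(l - z) for l in logs])
    return resp

# Toy stand-in likelihood: component k prefers item k in first place.
def toy_loglik(k, ranking):
    return 0.0 if ranking and ranking[0] == k else math.log(0.2)

rankings = [[0, 1, 2], [1, 0], [1, 2, 0, 3]]   # incomplete rankings allowed
resp = e_step(rankings, [math.log(0.5)] * 2, toy_loglik)
for row in resp:
    print(row)
```

The M-step would then re-estimate the mixture weights and component parameters from these responsibilities, as in any EM algorithm.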

Relevance:

60.00%

Publisher:

Abstract:

Human locomotion is known to be influenced by observation of another person's gait. For example, athletes often synchronize their step in long distance races. However, how interaction with a virtual runner affects the gait of a real runner has not been studied. We investigated this by creating an illusion of running behind a virtual model (VM) using a treadmill and a large-screen virtual environment showing a video of a VM. We looked at step synchronization between the real and virtual runner and at the role of step frequency (SF) in the real runner's perception of VM speed. We found that subjects match VM SF when asked to match VM speed with their own (Figure 1). This indicates step synchronization may be a strategy of speed matching or speed perception. Subjects chose higher speeds when VM SF was higher (though VM speed was 12 km/h in all videos). This effect was more pronounced when the speed estimate was rated verbally while standing still (Figure 2). This may be due to correlated physical activity affecting the perception of VM speed [Jacobs et al. 2005], or to step synchronization altering the subjects' perception of self speed [Durgin et al. 2007]. Our findings indicate that third-person activity in a collaborative virtual locomotive environment can have a pronounced effect on an observer's gait activity and their perceptual judgments of the activity of others: the SF of others (virtual or real) can potentially influence one's perception of self speed and lead to changes in speed and SF. A better understanding of the underlying mechanisms would support the design of more compelling virtual trainers and may be instructive for competitive athletics in the real world. © 2009 ACM.

Relevance:

60.00%

Publisher:

Abstract:

We combine Bayesian online change point detection with Gaussian processes to create a nonparametric time series model which can handle change points. The model can be used to locate change points in an online manner and, unlike other Bayesian online change point detection algorithms, is applicable when temporal correlations within a regime are expected. We show three variations on how to apply Gaussian processes in the change point context, each with their own advantages. We present methods to reduce the computational burden of these models and demonstrate them on several real-world data sets. Copyright 2010 by the author(s)/owner(s).
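The run-length recursion at the heart of Bayesian online change point detection can be sketched as follows; the per-run predictive probabilities, which in this model would come from a GP fitted to the current regime, are supplied by the caller as a stand-in. This illustrates the generic recursion (in the style of Adams and MacKay), not the paper's implementation.

```python
def bocpd_step(run_probs, pred_probs, hazard):
    """One step of the Bayesian online change point run-length recursion.
    run_probs[r]  : P(run length = r) before seeing x_t
    pred_probs[r] : p(x_t | run length r) -- here caller-supplied; in a
                    GP change point model this is the GP predictive over
                    the current regime (an assumption of this sketch)
    hazard        : constant per-step change point probability."""
    growth = [p * q * (1.0 - hazard) for p, q in zip(run_probs, pred_probs)]
    change = sum(p * q * hazard for p, q in zip(run_probs, pred_probs))
    new = [change] + growth          # run length resets to 0 on a change
    z = sum(new)                     # normalise to a distribution
    return [p / z for p in new]

# Toy usage: two possible run lengths; the observation fits the longer run.
posterior = bocpd_step([0.3, 0.7], [0.1, 0.9], hazard=0.05)
print(posterior)
```

Each observation grows the run-length distribution by one entry, so practical implementations prune low-probability run lengths to bound the cost, which is one of the computational burdens the paper addresses.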

Relevance:

60.00%

Publisher:

Abstract:

The application of automated design optimization to real-world, complex geometry problems is a significant challenge - especially if the topology is not known a priori, as in turbine internal cooling. The long-term goal of our work is to focus on an end-to-end integration of the whole CFD process, from solid model through meshing, solving and post-processing, to enable this type of design optimization to become viable & practical. In recent papers we have reported the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver, post-processing & geometry editing, all within a single piece of software - and all implemented in parallel with commodity PC clusters as the target. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh guided by the underpinning Level Set. This paper extends this work still further with a simple scoping study showing how the basic functionality can be scripted & automated and then used as the basis for automated optimization of a generic gas turbine cooling geometry. Copyright © 2008 by W.N.Dawes.
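The kind of scripted loop described above can be sketched generically: every stage (geometry edit, meshing, RANS solve, post-processing) is a placeholder function here, and a simple pattern search stands in for whatever optimizer drives the real system. The parameter names and the synthetic objective are illustrative assumptions.

```python
# Placeholder stages standing in for the integrated Level Set /
# cut-Cartesian toolchain; only the loop structure is the point.
def build_geometry(params):            # placeholder: Level Set geometry edit
    return {"passage_width": params[0], "rib_pitch": params[1]}

def mesh_and_solve(geometry):          # placeholder: mesher + RANS + post
    w, p = geometry["passage_width"], geometry["rib_pitch"]
    # Synthetic objective standing in for a post-processed cooling metric.
    return (w - 0.4) ** 2 + (p - 2.0) ** 2

def optimize(start, step=0.1, iters=200):
    """Simple coordinate pattern search over the design parameters."""
    best = list(start)
    best_cost = mesh_and_solve(build_geometry(best))
    for _ in range(iters):
        improved = False
        for i in range(len(best)):
            for d in (step, -step):
                trial = list(best)
                trial[i] += d
                cost = mesh_and_solve(build_geometry(trial))
                if cost < best_cost:
                    best, best_cost, improved = trial, cost, True
        if not improved:
            step /= 2.0                # refine once no move helps
    return best, best_cost

best, cost = optimize([1.0, 1.0])
print(best, cost)
```

The value of end-to-end integration is precisely that `build_geometry` and `mesh_and_solve` become reliable, script-callable steps rather than interactive tools.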

Relevance:

60.00%

Publisher:

Abstract:

Cambridge Flow Solutions Ltd, Compass House, Vision Park, Cambridge, CB4 9AD, UK

Real-world simulation challenges are getting bigger: virtual aero-engines with multistage blade rows coupled with their secondary air systems & with fully featured geometry; environmental flows at meta-scales over resolved cities; synthetic battlefields. It is clear that the future of simulation is scalable, end-to-end parallelism. To address these challenges we have reported in a sequence of papers a series of inherently parallel building blocks based on the integration of a Level Set based geometry kernel with an octree-based cut-Cartesian mesh generator, RANS flow solver, post-processing and geometry management & editing. The cut-cells which characterize the approach are eliminated by exporting a body-conformal mesh driven by the underpinning Level Set and managed by mesh quality optimization algorithms; this permits third party flow solvers to be deployed. This paper continues this sequence by reporting & demonstrating two main novelties: variable depth volume mesh refinement enabling variable surface mesh refinement and a radical rework of the mesh generation into a bottom-up system based on Space Filling Curves. Also reported are the associated extensions to body-conformal mesh export. Everything is implemented in a scalable, parallel manner. As a practical demonstration, meshes of guaranteed quality are generated for a fully resolved, generic aircraft carrier geometry, a cooled disc brake assembly and a B747 in landing configuration. Copyright © 2009 by W.N.Dawes.
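A bottom-up, Space Filling Curve ordering of octree cells is typically built from Morton (Z-order) keys, which interleave the bits of the integer cell coordinates; a minimal sketch of that encoding (not the paper's code):

```python
def morton3d(x, y, z, bits=10):
    """Interleave the bits of integer cell coordinates (x, y, z) into a
    single Morton (Z-order) key -- a Space Filling Curve ordering that
    can be used to organise octree cells bottom-up."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (3 * i)
        key |= ((y >> i) & 1) << (3 * i + 1)
        key |= ((z >> i) & 1) << (3 * i + 2)
    return key

# Nearby cells map to nearby keys, so sorting cells by Morton key gives a
# locality-preserving linear ordering that is trivially split across ranks.
cells = [(3, 1, 0), (0, 0, 0), (1, 0, 0), (0, 1, 0)]
ordered = sorted(cells, key=lambda c: morton3d(*c))
print(ordered)
```

Sorting by such keys and cutting the sorted list into equal chunks is a common, simple load-balancing scheme for parallel mesh generation; whether the paper's system partitions exactly this way is not stated in the abstract.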

Relevance:

60.00%

Publisher:

Abstract:

The background to this review paper is research we have performed over recent years aimed at developing a simulation system capable of handling large scale, real world applications implemented in an end-to-end parallel, scalable manner. The particular focus of this paper is the use of a Level Set solid modeling geometry kernel within this parallel framework to enable automated design optimization without topological restrictions and on geometries of arbitrary complexity. Also described is another interesting application of Level Sets: their use in guiding the export of a body-conformal mesh from our basic cut-Cartesian background octree mesh; this permits third party flow solvers to be deployed. As a practical demonstration, meshes of guaranteed quality are generated and flow-solved for a B747 in full landing configuration and an automated optimization is performed on a cooled turbine tip geometry. Copyright © 2009 by W.N.Dawes.
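The role of the Level Set in guiding mesh export can be illustrated by the basic cell classification it enables: sampling the signed distance at cell corners separates cells into inside, outside and cut, and it is the cut cells that get replaced by body-conformal elements. The sphere signed-distance function below is an illustrative stand-in for the real geometry kernel.

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 0.0), radius=1.0):
    """Signed distance to a sphere: negative inside, positive outside."""
    return math.dist(p, center) - radius

def classify_cell(corners, sdf):
    """Classify a cell against the Level Set by the sign of the signed
    distance at its corners: 'inside', 'outside', or 'cut' on a sign
    change (a sketch of the idea, not the paper's implementation)."""
    values = [sdf(c) for c in corners]
    if all(v < 0 for v in values):
        return "inside"
    if all(v > 0 for v in values):
        return "outside"
    return "cut"

def cell_corners(origin, h):
    """The 8 corners of an axis-aligned cubic cell of edge length h."""
    x, y, z = origin
    return [(x + i * h, y + j * h, z + k * h)
            for i in (0, 1) for j in (0, 1) for k in (0, 1)]

print(classify_cell(cell_corners((-0.25, -0.25, -0.25), 0.5), sphere_sdf))
print(classify_cell(cell_corners((0.8, 0.8, 0.8), 0.5), sphere_sdf))
print(classify_cell(cell_corners((0.7, -0.25, -0.25), 0.5), sphere_sdf))
```

The same signed-distance queries that classify cells also give the surface-normal and intersection information needed to snap exported nodes onto the body.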

Relevance:

60.00%

Publisher:

Abstract:

Over recent years we have developed and published research aimed at producing a meshing, geometry editing and simulation system capable of handling large scale, real world applications and implemented in an end-to-end parallel, scalable manner. The particular focus of this paper is the extension of this meshing system to include conjugate meshes for multi-physics simulations. Two contrasting applications are presented: export of a body-conformal mesh to drive a commercial, third-party simulation system; and direct use of the cut-Cartesian octree mesh with a single, integrated, close-coupled multi-physics simulation system. Copyright © 2010 by W.N.Dawes.

Relevance:

60.00%

Publisher:

Abstract:

This article presents a novel algorithm for learning parameters in statistical dialogue systems which are modeled as Partially Observable Markov Decision Processes (POMDPs). The three main components of a POMDP dialogue manager are a dialogue model representing dialogue state information; a policy that selects the system's responses based on the inferred state; and a reward function that specifies the desired behavior of the system. Ideally both the model parameters and the policy would be designed to maximize the cumulative reward. However, while there are many techniques available for learning the optimal policy, no good ways of learning the optimal model parameters that scale to real-world dialogue systems have been found yet. The presented algorithm, called the Natural Actor and Belief Critic (NABC), is a policy gradient method that offers a solution to this problem. Based on observed rewards, the algorithm estimates the natural gradient of the expected cumulative reward. The resulting gradient is then used to adapt both the prior distribution of the dialogue model parameters and the policy parameters. In addition, the article presents a variant of the NABC algorithm, called the Natural Belief Critic (NBC), which assumes that the policy is fixed and only the model parameters need to be estimated. The algorithms are evaluated on a spoken dialogue system in the tourist information domain. The experiments show that model parameters estimated to maximize the expected cumulative reward result in significantly improved performance compared to the baseline hand-crafted model parameters. The algorithms are also compared to optimization techniques using plain gradients and state-of-the-art random search algorithms. In all cases, the algorithms based on the natural gradient work significantly better. © 2011 ACM.
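The natural-gradient idea behind NABC can be illustrated in miniature: preconditioning the plain gradient by the inverse Fisher information makes the update invariant to how the parameters are scaled. The two-parameter quadratic objective and diagonal Fisher matrix below are illustrative assumptions, not the dialogue system from the article.

```python
def natural_gradient_step(theta, grad, fisher, lr):
    """theta <- theta + lr * F^{-1} grad, for a 2x2 Fisher matrix."""
    (a, b), (c, d) = fisher
    det = a * d - b * c
    inv = ((d / det, -b / det), (-c / det, a / det))
    nat = [inv[i][0] * grad[0] + inv[i][1] * grad[1] for i in range(2)]
    return [t + lr * g for t, g in zip(theta, nat)]

# Badly scaled coordinates: the Fisher metric undoes the scaling, so both
# parameters move toward the optimum at the same effective rate.
theta = [0.0, 0.0]
target = [1.0, 1.0]
fisher = ((100.0, 0.0), (0.0, 1.0))   # parameter 0 is on a 10x tighter scale
for _ in range(50):
    grad = [fisher[0][0] * (target[0] - theta[0]),   # gradient of a quadratic
            fisher[1][1] * (target[1] - theta[1])]   # in the Fisher metric
    theta = natural_gradient_step(theta, grad, fisher, lr=0.1)
print(theta)
```

In NABC the Fisher matrix is not known in closed form and the gradient must be estimated from observed rewards, but the geometric payoff shown here is the same.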

Relevance:

60.00%

Publisher:

Abstract:

The Internet of Things (IOT) concept and enabling technologies such as RFID offer the prospect of linking the real world of physical objects with the virtual world of information technology to improve visibility and traceability information within supply chains and across the entire lifecycles of products, as well as enabling more intuitive interactions and greater automation possibilities. There is a huge potential for savings through process optimization and profit generation within the IOT, but the sharing of financial benefits across companies remains an unsolved issue. Existing approaches towards sharing of costs and benefits have failed to scale so far. The integration of payment solutions into the IOT architecture could solve this problem. We have reviewed different possible levels of integration. Multiple payment solutions have been researched. Finally we have developed a model that meets the requirements of the IOT in relation to openness and scalability. It supports both hardware-centric and software-centric approaches to integration of payment solutions with the IOT. Different requirements concerning payment solutions within the IOT have been defined and considered in the proposed model. Possible solution providers include telcos, e-payment service providers and new players such as banks and standardization bodies. The proposed model of integrating the Internet of Things with payment solutions will lower the barrier to invoicing for the more granular visibility information generated using the IOT. Thus, it has the potential to enable recovery of the necessary investments in IOT infrastructure and accelerate adoption of the IOT, especially for projects that are only viable when multiple benefits throughout the supply chain need to be accumulated in order to achieve a Return on Investment (ROI). In a long-term perspective, it may enable IT-departments to become profit centres instead of cost centres. © 2010 - IOS Press and the authors. All rights reserved.

Relevance:

60.00%

Publisher:

Abstract:

The modern CFD process consists of mesh generation, flow solving and post-processing integrated into an automated workflow. During the last several years we have developed and published research aimed at producing a meshing and geometry editing system, implemented in an end-to-end parallel, scalable manner and capable of automatic handling of large scale, real world applications. The particular focus of this paper is the associated unstructured mesh RANS flow solver and the porting of it to GPU architectures. After briefly describing the solver itself, the special issues associated with porting codes using unstructured data structures are discussed - followed by some application examples. Copyright © 2011 by W.N. Dawes.
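One of the "special issues" with unstructured data structures on GPUs is that an edge loop scatters each flux into both end nodes, so edges sharing a node race when processed in parallel. A common remedy, sketched serially below, is to colour the edges so that no two edges of one colour share a node; the toy mesh and flux function are illustrative, not the paper's solver.

```python
def colour_edges(edges):
    """Greedy edge colouring: no two edges in one colour share a node,
    so each colour group can be processed in parallel without races."""
    colours = []
    for e in edges:
        for group in colours:
            if all(e[0] not in f and e[1] not in f for f in group):
                group.append(e)
                break
        else:
            colours.append([e])
    return colours

def accumulate(n_nodes, edges, u, flux):
    """Edge-based residual accumulation: R[i] += flux, R[j] -= flux."""
    residual = [0.0] * n_nodes
    for group in colour_edges(edges):      # each group is conflict-free
        for i, j in group:
            f = flux(u[i], u[j])
            residual[i] += f
            residual[j] -= f
    return residual

edges = [(0, 1), (1, 2), (2, 3), (3, 0)]   # a small ring of four nodes
u = [1.0, 2.0, 4.0, 8.0]
r = accumulate(4, edges, u, flux=lambda a, b: 0.5 * (a + b))
print(r)   # the scheme is conservative: the residuals sum to zero
```

Alternatives to colouring include atomic adds or gathering per node instead of scattering per edge; which wins depends on the mesh and the hardware.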

Relevance:

60.00%

Publisher:

Abstract:

This paper provides an overview of the rationale behind the significant interest in polymer-based on-board optical links together with a brief review of recently reported work addressing certain challenges in this field. Polymer-based optical links have garnered considerable research attention due to their important functional attributes and compelling cost-benefit advantages in on-board optoelectronic systems, as they can be cost-effectively integrated on conventional printed circuit boards. To date, significant work on the polymer materials, their fabrication process and their integration on standard board substrates has enabled the demonstration of numerous high-speed on-board optical links. However, to be deployed in real-world systems, these optoelectronic printed circuit boards (OE PCBs) must also be cost-effective. Here, recent advances in the integration process focusing on simple direct end-fire coupling schemes and the use of low-cost FR4 PCB substrates are presented. The performance of two proof-of-principle 10 Gb/s systems based on this integration method is summarised, while work on realising more complex yet compact planar optical components is outlined. © 2011 IEEE.