19 results for implementations
in the Cambridge University Engineering Department Publications Database
Abstract:
This paper compares parallel and distributed implementations of an iterative, Gibbs sampling, machine learning algorithm. Distributed implementations run under Hadoop on facility computing clouds. The probabilistic model under study is the infinite HMM [1], in which parameters are learnt using blocked Gibbs sampling, with each step consisting of a dynamic program. We apply this model to learn part-of-speech tags from newswire text in an unsupervised fashion. Our focus here, however, is on runtime performance rather than NLP-relevant scores: iteration duration, ease of development, deployment and debugging. © 2010 IEEE.
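The blocked Gibbs step referred to above resamples the entire hidden state sequence in one sweep via a dynamic program (forward filtering, backward sampling). A minimal sketch for a finite-state HMM, standing in for a truncation of the infinite HMM; the transition/emission structure below is an illustrative assumption, not the paper's model:

```python
import numpy as np

def ffbs_sample_states(obs, trans, emit, init, rng):
    """Forward filtering, backward sampling: the dynamic-program step of a
    blocked Gibbs sweep that resamples the whole state sequence at once.
    trans[i, j] = P(z_t = j | z_{t-1} = i); emit[k, o] = P(x_t = o | z_t = k)."""
    T, K = len(obs), trans.shape[0]
    alpha = np.zeros((T, K))
    alpha[0] = init * emit[:, obs[0]]
    alpha[0] /= alpha[0].sum()
    for t in range(1, T):                      # forward filtering
        alpha[t] = (alpha[t - 1] @ trans) * emit[:, obs[t]]
        alpha[t] /= alpha[t].sum()
    z = np.zeros(T, dtype=int)                 # backward sampling
    z[-1] = rng.choice(K, p=alpha[-1])
    for t in range(T - 2, -1, -1):
        p = alpha[t] * trans[:, z[t + 1]]
        z[t] = rng.choice(K, p=p / p.sum())
    return z
```

Because the whole sequence is drawn jointly, each such sweep is an independent unit of work, which is what makes the iteration amenable to the parallel and distributed (Hadoop) implementations compared in the paper.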
Abstract:
Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$ where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension is increasing linearly with time. In particular, even under strong \textit{mixing} assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
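The $\mathcal{O}(N)$ path-space estimator can be sketched for a simple linear-Gaussian model (an assumed example, not the paper's): by Fisher's identity the score is the smoothed expectation of an additive functional, and carrying that functional along each particle's ancestral path is precisely what makes the variance grow with time:

```python
import numpy as np

def on_score_estimate(y, theta, sig_v=1.0, sig_w=1.0, N=500, seed=0):
    """O(N) path-space SMC estimate of the score d/dtheta log p(y_{1:T})
    via Fisher's identity, for the assumed model
    x_t = theta*x_{t-1} + N(0, sig_v^2),  y_t = x_t + N(0, sig_w^2).
    The additive functional a_t = sum_s d/dtheta log f(x_s | x_{s-1}) is
    carried along each particle's ancestral path; resampling collapses the
    paths, which is the degeneracy the abstract describes."""
    rng = np.random.default_rng(seed)
    x = rng.normal(0.0, 1.0, N)                   # particles
    a = np.zeros(N)                               # per-path cumulative score terms
    for yt in y:
        xp = theta * x + rng.normal(0.0, sig_v, N)
        a = a + (xp - theta * x) * x / sig_v**2   # d/dtheta log f(x_t | x_{t-1})
        logw = -0.5 * ((yt - xp) / sig_w) ** 2    # bootstrap weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        idx = rng.choice(N, N, p=w)               # multinomial resampling
        x, a = xp[idx], a[idx]                    # keep paths and sums aligned
    return a.mean()
```

The $\mathcal{O}(N^{2})$ alternative would instead propagate smoothed expectations over all particle pairs at each step, avoiding the reliance on ancestral paths at quadratic cost.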
Abstract:
Optimal Bayesian multi-target filtering is in general computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), an SMC implementation of the PHD filter which employs auxiliary variables to enhance its efficiency was proposed by Whiteley et al. Numerical examples were presented for two scenarios, including a challenging nonlinear observation model, to support the claim. This paper studies the theoretical properties of this auxiliary particle implementation. $\mathbb{L}_p$ error bounds are established from which almost sure convergence follows.
Abstract:
Optimal Bayesian multi-target filtering is, in general, computationally impractical owing to the high dimensionality of the multi-target state. The Probability Hypothesis Density (PHD) filter propagates the first moment of the multi-target posterior distribution. While this reduces the dimensionality of the problem, the PHD filter still involves intractable integrals in many cases of interest. Several authors have proposed Sequential Monte Carlo (SMC) implementations of the PHD filter. However, these implementations are the equivalent of the Bootstrap Particle Filter, and the latter is well known to be inefficient. Drawing on ideas from the Auxiliary Particle Filter (APF), we present an SMC implementation of the PHD filter which employs auxiliary variables to enhance its efficiency. Numerical examples are presented for two scenarios, including a challenging nonlinear observation model.
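The bootstrap-style SMC-PHD step that the auxiliary-variable scheme improves on can be sketched for 1-D targets with Gaussian motion and measurement noise; every numerical parameter below is an illustrative assumption, not a value from the paper:

```python
import numpy as np

def smc_phd_step(x, w, z, rng, p_s=0.99, p_d=0.9, clutter=1e-2,
                 sig_q=1.0, sig_r=0.5, n_birth=100, birth_mass=0.1):
    """One predict/update step of a bootstrap-style SMC implementation of the
    PHD filter. (x, w) are particles and weights approximating the PHD
    (intensity function); z holds this step's measurements."""
    # Prediction: surviving particles plus uniformly drawn birth particles.
    xs = x + rng.normal(0.0, sig_q, x.size)
    ws = p_s * w
    xb = rng.uniform(-10.0, 10.0, n_birth)
    wb = np.full(n_birth, birth_mass / n_birth)
    xp, wp = np.concatenate([xs, xb]), np.concatenate([ws, wb])
    # Update: PHD measurement-update equation (missed detection + per-z terms).
    g = np.exp(-0.5 * ((z[:, None] - xp[None, :]) / sig_r) ** 2) \
        / (sig_r * np.sqrt(2 * np.pi))                 # g(z | x) likelihoods
    denom = clutter + (p_d * g * wp).sum(axis=1)       # per-measurement mass
    wu = (1 - p_d) * wp + (p_d * g / denom[:, None]).sum(axis=0) * wp
    # Resample back to a fixed budget, preserving the total mass
    # (the expected number of targets).
    mass = wu.sum()
    idx = rng.choice(xp.size, x.size, p=wu / mass)
    return xp[idx], np.full(x.size, mass / x.size)
```

The inefficiency criticised above is visible here: particles are propagated blindly by the motion model before the measurements are consulted, whereas the auxiliary-variable version biases proposals towards particles likely to explain the new measurements.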
Abstract:
The liquid-crystal light valve (LCLV) is a useful component for performing integration, thresholding, and gain functions in optical neural networks. Integration of the neural activation channels is implemented by pixelation of the LCLV, with use of a structured metallic layer between the photoconductor and the liquid-crystal layer. Measurements are presented for this type of valve, examples of which were prepared for two specific neural network implementations. The valve fabrication and measurement were carried out at the State Optical Institute, St. Petersburg, Russia, and the modeling and system applications were investigated at the Institute of Microtechnology, Neuchâtel, Switzerland.
Abstract:
Concurrent Engineering demands a new way of working, and many organisations experience difficulty during implementation. The research described in this paper aims to develop a paper-based, workbook-style methodology that companies can use to increase the benefits generated by Concurrent Engineering, while reducing implementation costs, risk and time. The three-stage methodology provides guidance based on knowledge accumulated from implementation experience and best practitioners. It encourages companies to learn to manage their Concurrent Engineering implementation by taking actions which expose them to new and valuable experiences. This helps to continuously improve understanding of how to maximise the benefits from Concurrent Engineering. The methodology is particularly designed to cater for organisational and contextual uniqueness, as Concurrent Engineering implementations will vary from company to company. Using key actions which improve the Concurrent Engineering implementation process, individual companies can develop their own 'best practice' for product development. The methodology ensures that key implementation issues, which are primarily human and organisational, are addressed using simple but proven techniques. This paper describes the key issues that the majority of companies face when implementing Concurrent Engineering. The structure of the methodology is described to show how the issues are addressed and resolved. The key actions used to improve the Concurrent Engineering implementation process are explained and their inclusion in the implementation methodology described. Relevance to industry. Implementation of Concurrent Engineering concepts in manufacturing industry has not been a straightforward process. This paper describes a workbook-style tool that manufacturing companies can use to accelerate and improve their Concurrent Engineering implementation. © 1995.
Abstract:
In the last 10 years many designs and trial implementations of holonic manufacturing systems have been reported in the literature. Few of these have resulted in any industrial take-up of the approach, and part of this lack of adoption might be attributed to a shortage of evaluations of the resulting designs and implementations and of their comparison with more conventional approaches. This paper proposes a simple approach for evaluating the effectiveness of a holonic system design, with particular focus on the ability of the system to support reconfiguration (in the face of change). A case study relating to a laboratory assembly system is provided to demonstrate the evaluation approach. Copyright © 2005 IFAC.
Abstract:
An innovative technique based on optical fibre sensing that allows continuous strain measurement has recently been introduced in structural health monitoring. Known as Brillouin Optical Time-Domain Reflectometry (BOTDR), this distributed optical fibre sensing technique allows measurement of strain along the full length (up to 10 km) of a suitably installed optical fibre. Examples of recent implementations of BOTDR fibre optic sensing in piles are described in this paper. Two examples of distributed optical fibre sensing in piles are demonstrated using different installation techniques. In a load-bearing pile, optical cables were attached along the reinforcing bars by equally spaced spot gluing to measure the axial response of the pile to excavation-induced ground heave and construction loading. Measurement of the flexural behaviour of piles is demonstrated in the instrumentation of a secant piled wall, where optical fibres were embedded in the concrete by simple endpoint clamping. Both methods have been verified through laboratory work. © 2009 IOS Press.
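In BOTDR, strain is recovered from the shift of the measured Brillouin frequency relative to its unstrained reference, with an approximately linear coefficient of about 0.05 MHz per microstrain near 1550 nm. A minimal sketch; the nominal reference frequency and coefficient below are assumptions, and in practice both are calibrated for the specific fibre:

```python
def botdr_strain(nu_b, nu_b0=10.85e9, c_eps=0.05e6):
    """Strain (in microstrain) from a measured Brillouin frequency shift.
    nu_b:   measured Brillouin frequency (Hz)
    nu_b0:  unstrained reference frequency (Hz) -- assumed nominal value
    c_eps:  strain coefficient (Hz per microstrain, ~0.05 MHz/ue at 1550 nm)
    Both nominal values must be replaced by per-fibre calibration data."""
    return (nu_b - nu_b0) / c_eps
```

For example, a measured shift of 5 MHz above the reference corresponds to about 100 microstrain under these assumed calibration constants.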
Abstract:
This paper reports the design and numerical analysis of a three-dimensional biochip plasma blood separator using computational fluid dynamics techniques. Based on the initial configuration of a two-dimensional (2D) separator, five three-dimensional (3D) microchannel biochip designs are categorically developed through axial and planar symmetrical expansions. These include geometric variations of three types of branch side channel (circular, rectangular, disc) and two types of main channel (solid and concentric). Ignoring the initial transient behaviour and assuming that steady-state flow has been established, the behaviour of the blood fluid in the devices is algebraically analysed and numerically modelled. The roles of the relevant microchannel mechanisms, i.e. bifurcation, constriction and bending channel, in promoting the separation process are analysed based on the modelling results. The differences among the different 3D implementations are compared and discussed. The advantages of the 3D over the 2D separator in increasing separation volume and effectively depleting cell-free-layer fluid from the whole cross-section circumference are addressed and illustrated. © 2011 John Wiley & Sons, Ltd.
Abstract:
This paper describes the design and development cycle of a 3D biochip separator and the modelling analysis of flow behaviour in the biochip microchannel features. The focus is on identifying the differences between 2D and 3D implementations as well as developing basic forms of 3D microfluidic separators. Five variants, based around the device, are proposed and analysed. These include three variations of the branch channels (circular, rectangular, disc) and two variations of the main channel (solid and concentric). Ignoring the initial transient behaviour and assuming steady-state flow has been established, the efficiencies of the flow between the main and side channels for the different designs are analysed and compared with regard to the relevant biomicrofluidic laws and effects (bifurcation law, Fahraeus effect, cell-free phenomenon, bending-channel effect and laminar flow behaviour). The modelling results identify the flow features in the microchannels, a constriction and bifurcations, and show detailed differences in flow fields between the various designs. The manufacturing process using injection moulding for the initial base-case design is also presented and discussed. The work reported here is supported as part of the UK funded 3D-MINTEGRATION project. © 2010 IEEE.
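The bifurcation behaviour invoked in these two abstracts can be illustrated with a lumped hydraulic-resistance model: under laminar (Hagen-Poiseuille) flow, the fraction of inlet flow entering a side branch follows from the ratio of channel resistances. A sketch under the simplifying assumptions of circular channels and a shared outlet pressure (the viscosity value is an assumed whole-blood figure, not from the papers):

```python
import math

def poiseuille_resistance(length, radius, mu=3.5e-3):
    """Hydraulic resistance of a circular microchannel (Hagen-Poiseuille).
    length, radius in metres; mu = dynamic viscosity in Pa*s (assumed
    whole-blood value). Resistance scales as 1/radius**4."""
    return 8 * mu * length / (math.pi * radius ** 4)

def flow_split(r_main, r_side):
    """Fraction of inlet flow entering the side branch when main and side
    channels leave the same bifurcation node and share an outlet pressure:
    parallel resistors, so flow divides in inverse proportion to resistance."""
    return (1 / r_side) / (1 / r_main + 1 / r_side)
```

The fourth-power dependence is what makes branch geometry so influential: halving the side-branch radius raises its resistance sixteen-fold, so only 1/17 of the flow enters it, which is the kind of asymmetric plasma skimming these designs exploit.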
Abstract:
A novel technique is presented to facilitate the implementation of hierarchical b-splines and their interfacing with conventional finite element implementations. The discrete interpretation of the two-scale relation, as common in subdivision schemes, is used to establish algebraic relations between the basis functions and their coefficients on different levels of the hierarchical b-spline basis. The subdivision projection technique introduced allows us first to compute all element matrices and vectors using a fixed number of same-level basis functions. Their subsequent multiplication with subdivision matrices projects them, during the assembly stage, to the correct levels of the hierarchical b-spline basis. The proposed technique is applied to convergence studies of linear and geometrically nonlinear problems in one, two and three space dimensions. © 2012 Elsevier B.V.
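The two-scale relation underlying the subdivision projection can be checked numerically for a uniform cubic b-spline, whose refinement coefficients are the binomial weights $[1, 4, 6, 4, 1]/8$: the coarse basis function is an exact linear combination of translated fine-scale copies of itself.

```python
import numpy as np

def cubic_bspline(x):
    """Uniform cubic b-spline with support [0, 4] (Cox-de Boor pieces)."""
    x = np.asarray(x, dtype=float)
    y = np.zeros_like(x)
    m = (x >= 0) & (x < 1); t = x[m];     y[m] = t**3 / 6
    m = (x >= 1) & (x < 2); t = x[m] - 1; y[m] = (-3*t**3 + 3*t**2 + 3*t + 1) / 6
    m = (x >= 2) & (x < 3); t = x[m] - 2; y[m] = (3*t**3 - 6*t**2 + 4) / 6
    m = (x >= 3) & (x < 4); t = x[m] - 3; y[m] = (1 - t)**3 / 6
    return y

# Two-scale relation: B(x) = sum_k s_k * B(2x - k), with s_k = C(4, k) / 8.
# These s_k are exactly the entries assembled into the subdivision matrices
# that project element contributions between levels of the hierarchical basis.
s = np.array([1, 4, 6, 4, 1]) / 8
xs = np.linspace(0, 4, 81)
fine = sum(s[k] * cubic_bspline(2 * xs - k) for k in range(5))
assert np.allclose(fine, cubic_bspline(xs))
```

In the hierarchical setting, stacking these per-function relations into a matrix gives the algebraic link between coefficient vectors on adjacent levels that the assembly-stage projection relies on.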
Abstract:
Service-Oriented Architecture (SOA) and Web Services (WS) offer advanced flexibility and interoperability capabilities. However, they imply significant performance overheads that need to be carefully considered. Supply Chain Management (SCM) and Traceability systems are an interesting domain for the use of WS technologies, which are usually deemed too complex and unnecessary in practical applications, especially regarding security. This paper presents an externalized security architecture that uses the eXtensible Access Control Markup Language (XACML) authorization standard to enforce visibility restrictions on traceability data in a supply chain where multiple companies collaborate; the performance overheads are assessed by comparing 'raw' authorization implementations - Access Control Lists, Tokens, and RDF Assertions - with their XACML equivalents. © 2012 IEEE.
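A 'raw' Access Control List check of the kind used as a performance baseline in such comparisons amounts to a direct lookup, in contrast to evaluating an XACML policy through a policy decision point. A sketch with illustrative (hypothetical) subject and resource names:

```python
def acl_authorize(acl, subject, resource, action):
    """Minimal 'raw' ACL check: acl maps (resource, action) pairs to the set
    of subjects allowed to perform that action. One dictionary lookup plus a
    set-membership test, versus full policy evaluation in the XACML case."""
    return subject in acl.get((resource, action), set())

# Hypothetical traceability-data ACL for two collaborating companies.
acl = {
    ("trace/lot-42", "read"):  {"supplier-A", "retailer-B"},
    ("trace/lot-42", "write"): {"supplier-A"},
}
```

The trade-off measured in the paper is between this near-constant-time check and the flexibility of externalized, standards-based policies that can express cross-company visibility rules without changing application code.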