16 results for Many-fermion systems
in the Cambridge University Engineering Department Publications Database
Abstract:
We present methods for fixed-lag smoothing using Sequential Importance Sampling (SIS) on a discrete non-linear, non-Gaussian state-space system with unknown parameters. Our particular application is in the field of digital communication systems. Each input data point is taken from a finite set of symbols. We represent the transmission medium as a fixed filter with a finite impulse response (FIR), hence a discrete state-space system is formed. Conventional Markov chain Monte Carlo (MCMC) techniques such as the Gibbs sampler are unsuitable for this task because they can only process data in batches. Since the data arrive sequentially, it is sensible to process them sequentially as well. In addition, many communication systems are interactive, so there is a maximum level of latency that can be tolerated before a symbol is decoded. We demonstrate this method by simulation and compare its performance to existing techniques.
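As a concrete illustration of the idea (not the authors' exact algorithm), the following sketch runs sequential importance sampling with resampling over a binary symbol alphabet sent through a known 2-tap FIR channel, decoding each symbol with a fixed lag. All parameter values (channel taps, noise level, lag, particle count) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

SYMBOLS = np.array([-1.0, 1.0])    # binary alphabet (hypothetical)
H = np.array([1.0, 0.5])           # FIR channel taps (hypothetical)
SIGMA = 0.5                        # noise standard deviation
N, T, LAG = 200, 60, 3             # particles, time steps, fixed lag

true_s = rng.choice(SYMBOLS, size=T)
y = np.convolve(true_s, H)[:T] + SIGMA * rng.normal(size=T)

particles = np.zeros((N, T))       # each particle is a symbol trajectory
weights = np.full(N, 1.0 / N)
decoded = np.zeros(T)

for t in range(T):
    # Propose the new symbol uniformly from the alphabet (the prior).
    particles[:, t] = rng.choice(SYMBOLS, size=N)
    # Weight by the likelihood of the observation under the FIR model.
    pred = particles[:, t] * H[0]
    if t >= 1:
        pred += particles[:, t - 1] * H[1]
    weights *= np.exp(-0.5 * ((y[t] - pred) / SIGMA) ** 2)
    weights /= weights.sum()
    # Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < N / 2:
        idx = rng.choice(N, size=N, p=weights)
        particles = particles[idx]
        weights = np.full(N, 1.0 / N)
    # Fixed-lag smoothing: decode the symbol LAG steps back using the
    # current weighted particle set, then snap to the nearest symbol.
    if t >= LAG:
        est = np.average(particles[:, t - LAG], weights=weights)
        decoded[t - LAG] = SYMBOLS[np.argmin(np.abs(SYMBOLS - est))]
```

The fixed lag bounds the latency: symbol t − LAG is decided at time t, using all observations received up to that point.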
Abstract:
Differential growth of thin elastic bodies furnishes a surprisingly simple explanation of the complex and intriguing shapes of many biological systems, such as plant leaves and organs. Similarly, inelastic strains induced by thermal effects or active materials in layered plates are extensively used to control the curvature of thin engineering structures. Such behaviour inspires us to distinguish and compare two possible modes of differential growth that are not normally compared to each other, in order to reveal the full range of out-of-plane shapes of an initially flat disk. The first growth mode, frequently employed by engineers, is characterised by direct bending strains through the thickness; the second mode, mainly apparent in biological systems, is driven by extensional strains of the middle surface. When each mode is considered separately, it is shown that buckling is common to both, leading to bistable shapes: growth from bending strains results in a double-curvature limit at buckling, followed by almost developable deformation in which the Gaussian curvature at buckling is conserved; during extensional growth, out-of-plane distortions occur only when the buckling condition is reached, and the Gaussian curvature continues to increase. When both growth modes are present, it is shown that, generally, larger displacements are obtained under in-plane growth when the disk is relatively thick and growth strains are small, and vice versa. It is also shown that shapes can be mono-, bi-, tri- or neutrally stable, depending on the growth strain levels and the material properties; furthermore, it is shown that certain combinations of growth modes result in a free, or natural, response in which the doubly curved shape of the disk exactly matches the imposed strains. Such diverse behaviour, in general, may help to realise more effective actuation schemes for engineering structures. © 2012 Elsevier Ltd. All rights reserved.
Abstract:
The double heterogeneity characterising pebble-bed high-temperature reactors (HTRs) makes Monte Carlo based calculation tools the most suitable for detailed core analyses. These codes can be successfully used to predict the isotopic evolution of the fuel of such cores during irradiation. At the moment, many computational systems based on MCNP are available for performing depletion calculations. All these systems use MCNP to supply problem-dependent fluxes and/or microscopic cross sections to the depletion module. The latter then calculates the isotopic evolution of the fuel by solving Bateman's equations. In this paper, a comparative analysis of three different MCNP-based depletion codes is performed: Monteburns2.0, MCNPX2.6.0 and BGCore. The Monteburns code can be considered the reference code for HTR calculations, since it has already been verified during the HTR-N and HTR-N1 EU projects. All calculations have been performed on a reference model representing an infinite lattice of thorium-plutonium-fuelled pebbles. The evolution of k-inf as a function of burnup has been compared, as well as the inventory of the important actinides. The k-inf comparison among the codes shows good agreement over the entire burnup history, with a maximum difference lower than 1%. The actinide inventory prediction also agrees well. However, a significant discrepancy in the Am and Cm concentrations calculated by MCNPX, as compared to those of Monteburns and BGCore, has been observed. This is mainly due to the different Am-241 (n,γ) branching ratios utilized by the codes. An important advantage of BGCore is the significantly lower execution time required to perform the considered depletion calculations. While providing reasonably accurate results, BGCore runs the depletion problem about two times faster than Monteburns and two to five times faster than MCNPX. © 2009 Elsevier B.V. All rights reserved.
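For intuition about what a depletion module solves, Bateman's equations reduce to a closed-form solution for a simple two-member decay chain; the sketch below evaluates it for hypothetical decay constants. Real depletion codes also include flux-induced transmutation terms and much longer chains.

```python
import numpy as np

lam1, lam2 = 0.3, 0.1   # hypothetical decay constants [1/s]
n1_0 = 1.0e6            # initial number of parent atoms

def bateman_pair(t):
    """Analytic Bateman solution for a parent -> daughter decay chain."""
    n1 = n1_0 * np.exp(-lam1 * t)
    n2 = (n1_0 * lam1 / (lam2 - lam1)
          * (np.exp(-lam1 * t) - np.exp(-lam2 * t)))
    return n1, n2

t = np.linspace(0.0, 50.0, 501)
n1, n2 = bateman_pair(t)
# The daughter population peaks at t* = ln(lam1/lam2) / (lam1 - lam2).
```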
Abstract:
Desired performance of unpressurized integral collector storage systems hinges on the appropriate selection of storage volume and the immersed heat exchanger. This paper presents analytical results expressing the relation between storage volume, number of heat exchanger transfer units and temperature limited performance. For a system composed of a single storage element, the limiting behavior of a perfectly stratified storage element is shown to be superior to a fully-mixed storage element, consistent with more general analysis of thermal storage. Since, however, only the fully-mixed limit is readily obtainable in a physical system, the present paper also examines a division of the storage volume into separate compartments. This multi-element storage system shows significantly improved discharge characteristics as a result of improved elemental area utilization and temperature variation between elements, comparable in many cases to a single perfectly-stratified storage element. In addition, the multi-element system shows increased robustness with respect to variations in heat exchanger effectiveness and initial storage temperature.
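A minimal numerical sketch of the multi-element effect, under assumed parameters (total NTU, store capacity and the temperature threshold are all hypothetical): the same storage volume and heat-exchanger area are discharged either as one fully-mixed element or as four compartments traversed in series, and the energy delivered above a minimum outlet temperature is compared.

```python
import numpy as np

NTU_TOTAL = 4.0               # total heat-exchanger NTU (hypothetical)
T_STORE0, T_IN = 60.0, 15.0   # initial store / inlet temperature [deg C]

def delivered_above(n_elem, t_min=40.0, c_flow=1.0, c_store_total=500.0):
    """Energy delivered while the outlet stays above t_min, with the
    store split into n_elem equal compartments traversed in series."""
    temps = np.full(n_elem, T_STORE0)
    c_el = c_store_total / n_elem              # capacity per element
    eff = 1.0 - np.exp(-NTU_TOTAL / n_elem)    # effectiveness per element
    energy = 0.0
    for _ in range(5000):                      # small quasi-steady steps
        t = T_IN
        for i in range(n_elem):
            q = eff * c_flow * (temps[i] - t)  # heat picked up here
            temps[i] -= q / c_el               # element cools
            t += q / c_flow                    # stream warms
        if t < t_min:                          # temperature-limited cutoff
            break
        energy += c_flow * (t - T_IN)
    return energy

single = delivered_above(1)   # fully-mixed store
multi = delivered_above(4)    # four compartments in series
```

Because the upstream compartments cool first while the downstream ones stay hot, the series arrangement mimics stratification and sustains a usable outlet temperature for longer.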
Abstract:
This article presents a novel algorithm for learning parameters in statistical dialogue systems which are modeled as Partially Observable Markov Decision Processes (POMDPs). The three main components of a POMDP dialogue manager are a dialogue model representing dialogue state information; a policy that selects the system's responses based on the inferred state; and a reward function that specifies the desired behavior of the system. Ideally both the model parameters and the policy would be designed to maximize the cumulative reward. However, while there are many techniques available for learning the optimal policy, no good ways of learning the optimal model parameters that scale to real-world dialogue systems have been found yet. The presented algorithm, called the Natural Actor and Belief Critic (NABC), is a policy gradient method that offers a solution to this problem. Based on observed rewards, the algorithm estimates the natural gradient of the expected cumulative reward. The resulting gradient is then used to adapt both the prior distribution of the dialogue model parameters and the policy parameters. In addition, the article presents a variant of the NABC algorithm, called the Natural Belief Critic (NBC), which assumes that the policy is fixed and only the model parameters need to be estimated. The algorithms are evaluated on a spoken dialogue system in the tourist information domain. The experiments show that model parameters estimated to maximize the expected cumulative reward result in significantly improved performance compared to the baseline hand-crafted model parameters. The algorithms are also compared to optimization techniques using plain gradients and state-of-the-art random search algorithms. In all cases, the algorithms based on the natural gradient work significantly better. © 2011 ACM.
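The core natural-gradient idea can be sketched on a toy problem (a three-action bandit with a softmax policy, not the authors' dialogue system): the plain gradient of the expected reward is preconditioned by the inverse Fisher information matrix of the policy before each update.

```python
import numpy as np

rewards = np.array([0.2, 1.0, 0.5])   # hypothetical expected rewards
theta = np.zeros(3)                   # softmax policy parameters

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

for _ in range(200):
    p = softmax(theta)
    # Exact gradient of expected reward for a softmax policy.
    g = p * (rewards - p @ rewards)
    # Fisher information matrix of the categorical policy.
    F = np.diag(p) - np.outer(p, p)
    # Damping keeps F invertible (softmax has a flat direction).
    nat_g = np.linalg.solve(F + 1e-3 * np.eye(3), g)
    theta += 0.1 * nat_g              # natural-gradient ascent step

p = softmax(theta)                    # policy concentrates on the best action
```

In NABC the expected cumulative reward and its natural gradient must be estimated from observed dialogues rather than computed exactly as here.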
Abstract:
Synchronization is now well established as representing coherent behaviour between two or more otherwise autonomous nonlinear systems subject to some degree of coupling. Such behaviour has mainly been studied to date, however, in relatively low-dimensional discrete systems or networks. But the possibility of similar kinds of behaviour in continuous or extended spatiotemporal systems has many potential practical implications, especially in various areas of geophysics. We review here a range of cyclically varying phenomena within the Earth's climate system for which there may be some evidence or indication of the possibility of synchronized behaviour, albeit perhaps imperfect or highly intermittent. The exploitation of this approach is still at a relatively early stage within climate science and dynamics, in which the climate system is regarded as a hierarchy of many coupled sub-systems with complex nonlinear feedbacks and forcings. The possibility of synchronization between climate oscillations (global or local) and a predictable external forcing raises important questions of how models of such phenomena can be validated and verified, since the resulting response may be relatively insensitive to the details of the model being synchronized. The use of laboratory analogues may therefore have an important role to play in the study of natural systems that can only be observed and for which controlled experiments are impossible. We go on to demonstrate that synchronization can be observed in the laboratory, even in weakly coupled fluid dynamical systems that may serve as direct analogues of the behaviour of major components of the Earth's climate system. The potential implications and observability of these effects in the long-term climate variability of the Earth are further discussed. © 2010 Springer-Verlag Berlin Heidelberg.
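Phase locking of weakly coupled oscillators, the basic mechanism at issue, can be sketched with a Kuramoto-type phase model (illustrative, not a climate model): above a critical coupling strength the phase difference stops drifting despite a frequency mismatch.

```python
import numpy as np

def phase_drift(coupling, w1=1.0, w2=1.2, dt=0.01, steps=20000):
    """Mean drift rate of the phase difference of two coupled phase
    oscillators; approximately zero when they are phase-locked."""
    th1, th2 = 0.0, 0.0
    for _ in range(steps):               # forward-Euler integration
        d1 = w1 + coupling * np.sin(th2 - th1)
        d2 = w2 + coupling * np.sin(th1 - th2)
        th1 += dt * d1
        th2 += dt * d2
    return (th2 - th1) / (steps * dt)

# Critical coupling for locking is |w2 - w1| / 2 = 0.1 here.
weak = phase_drift(0.05)    # below threshold: phases keep slipping
strong = phase_drift(0.2)   # above threshold: phase difference locks
```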
Influencing factors of successful transitions towards product-service systems: A simulation approach
Abstract:
Product-Service Systems (PSS) are new business strategies that move and extend product value towards its functional usage and the related required services. From a theoretical point of view, the PSS concept has been known for a decade, and many authors have reported plausible success factors: higher profits over the entire life-cycle, diminished environmental burden, and localization of required services. Nevertheless, the promises of PSS remain quantitatively unproven, relying on a simple theory that involves a few constructs with some empirical grounding but that is limited by weak conceptualization, few propositions, and rough underlying theoretical logic. A plausible way to analyze the possible evolution of a PSS strategy is to consider it as a new business proposition competing in a traditional Product-Oriented (PO) market, assumed to be at its own equilibrium state at a given time. Analyzing the dynamics associated with a possible transition from a traditional PO to a PSS strategy makes it possible to investigate the main parameters and variables influencing a successful adoption. This research is worthwhile because organizations undertaking a fundamental PSS strategy are concerned about key processes of change and inertia which, despite equilibrium theory and because of negative feedback loops, could economically undermine the return of their PSS proposition. In this paper, the authors propose a qualitative System Dynamics (SD) approach, considering the PSS as a perturbation of an existing PO market characterised by a set of known parameters. The proposed model incorporates several PSS factors able to influence the success of a PSS proposition under a set of given and justified assumptions, attempting to place this business strategy in a dynamic framework.
Abstract:
In the last 10 years many designs and trial implementations of holonic manufacturing systems have been reported in the literature. Few of these have resulted in any industrial take-up of the approach, and part of this lack of adoption might be attributed to a shortage of evaluations of the resulting designs and implementations and of comparisons with more conventional approaches. This paper proposes a simple approach for evaluating the effectiveness of a holonic system design, with particular focus on the ability of the system to support reconfiguration (in the face of change). A case study relating to a laboratory assembly system is provided to demonstrate the evaluation approach. Copyright © 2005 IFAC.
Abstract:
Statistical dialogue models have required a large number of dialogues to optimise the dialogue policy, relying on the use of a simulated user. This results in a mismatch between training and live conditions, and in significant development costs for the simulator, thereby negating many of the claimed benefits of such models. Recent work on Gaussian process reinforcement learning has shown that learning can be substantially accelerated. This paper reports on an experiment to learn a policy for a real-world task directly from human interaction, using rewards provided by users. It shows that a usable policy can be learnt in just a few hundred dialogues without needing a user simulator, using a learning strategy that reduces the risk of taking bad actions. The paper also investigates adaptation behaviour when the system continues learning for several thousand dialogues and highlights the need for robustness to noisy rewards. © 2011 IEEE.
Abstract:
Speech recognition systems typically contain many Gaussian distributions, and hence a large number of parameters. This makes them both slow to decode speech and large to store. Techniques have been proposed to decrease the number of parameters. One approach is to share parameters between multiple Gaussians, thus reducing the total number of parameters and allowing for shared likelihood calculation. Gaussian tying and subspace clustering are two related techniques that take this approach to system compression. These techniques can decrease the number of parameters with no noticeable drop in performance for single systems. However, multiple acoustic models are often used in real speech recognition systems. This paper considers the application of Gaussian tying and subspace compression to multiple systems. Results show that two speech recognition systems can be modelled using the same number of Gaussians as just one system, with little effect on individual system performance. Copyright © 2009 ISCA.
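The tying step can be sketched as follows (dimensions and parameters hypothetical): Gaussian mean vectors from two similar systems are pooled and clustered, and each original Gaussian is tied to its nearest shared centroid, so two systems use the parameter budget of one. Real systems cluster full Gaussians and re-estimate the shared parameters on data.

```python
import numpy as np

rng = np.random.default_rng(0)
sys_a = rng.normal(0.0, 1.0, size=(64, 8))              # means of system A
sys_b = sys_a + rng.normal(0.0, 0.05, size=(64, 8))     # a similar system B
pool = np.vstack([sys_a, sys_b])                        # 128 Gaussians total

def kmeans(x, k, iters=20):
    """Plain k-means; labels tie each Gaussian to a shared centroid."""
    centroids = x[rng.choice(len(x), size=k, replace=False)]
    for _ in range(iters):
        d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = x[labels == j].mean(axis=0)
    return centroids, labels

shared, ties = kmeans(pool, k=64)   # shared pool the size of one system
```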
Abstract:
While a large amount of research over the past two decades has focused on discrete abstractions of infinite-state dynamical systems, many structural and algorithmic details of these abstractions remain unknown. To clarify the computational resources needed to perform discrete abstractions, this paper examines the algorithmic properties of an existing method for deriving finite-state systems that are bisimilar to linear discrete-time control systems. We explicitly find the structure of the finite-state system, show that it can be enormous compared to the original linear system, and give conditions to guarantee that the finite-state system is reasonably sized and efficiently computable. Though constructing the finite-state system is generally impractical, we see that special cases could be amenable to satisfiability based verification techniques. ©2009 IEEE.
Abstract:
Sociomateriality has been attracting growing attention in the Organization Studies and Information Systems literatures since 2007, with more than 140 journal articles now referring to the concept. Over 80 percent of these articles have been published since January 2011 and almost all cite the work of Orlikowski (2007, 2010; Orlikowski and Scott 2008) as the source of the concept. Only a few, however, address all of the notions that Orlikowski suggests are entailed in sociomateriality, namely materiality, inseparability, relationality, performativity, and practices, with many employing the concept quite selectively. The contribution of sociomateriality to these literatures is, therefore, still unclear. Drawing on evidence from an ongoing study of the adoption of a computer-based clinical information system in a hospital critical care unit, this paper explores whether the notions, individually and collectively, offer a distinctive and coherent account of the relationship between the social and the material that may be useful in Information Systems research. It is argued that if sociomateriality is to be more than simply a label for research employing a number of loosely related existing theoretical approaches, then studies employing the concept need to pay greater attention to the notions entailed in it and to differences in their interpretation.
Abstract:
Confronted with high-variety, low-volume market demands, many companies, especially Japanese electronics manufacturers, have reconfigured their conveyor assembly lines and adopted seru production systems. A seru production system is a new type of work-cell-based manufacturing system. Much successful practice and experience shows that seru production systems can achieve the considerable flexibility of a job shop together with the high efficiency of a conveyor assembly line. In implementing seru production, multi-skilled workers are the most important precondition, and several issues concerning multi-skilled workers are central. In this paper, we investigate the training and assignment problem of workers when a conveyor assembly line is entirely reconfigured into several serus. We formulate a mathematical model with two objectives: to minimize the total training cost and to balance the total processing times among the multi-skilled workers in each seru. To obtain a satisfactory task-to-worker training plan and worker-to-seru assignment plan, a three-stage heuristic algorithm with nine steps is developed to solve this model. Several computational cases are then solved in MATLAB. The computation and analysis results validate the performance of the proposed mathematical model and heuristic algorithm. © 2013 Springer-Verlag London.
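The flavour of the two objectives can be sketched with a simple greedy stand-in for the paper's three-stage heuristic (all data hypothetical, and a single seru for brevity): each task goes to the worker with the lowest combined score of extra training cost and current load, so cheap-to-train workers are preferred but no worker is overloaded.

```python
import numpy as np

rng = np.random.default_rng(2)
n_workers, n_tasks = 4, 8
proc_time = rng.uniform(1.0, 3.0, size=n_tasks)   # processing time per task
train_cost = rng.uniform(0.0, 5.0, size=(n_workers, n_tasks))
# train_cost[w, t]: cost of training worker w for task t (hypothetical)

load = np.zeros(n_workers)            # total processing time per worker
total_cost = 0.0
assign = np.empty(n_tasks, dtype=int)
# Greedy: longest tasks first; score trades training cost against the
# load the worker already carries, which balances processing times.
for t in np.argsort(-proc_time):
    score = train_cost[:, t] + load
    w = int(np.argmin(score))
    assign[t] = w
    load[w] += proc_time[t]
    total_cost += train_cost[w, t]
```

A real instance would also partition workers across several serus and weight the two objectives explicitly, as the paper's model does.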
Abstract:
Standard forms of density-functional theory (DFT) have good predictive power for many materials, but are not yet fully satisfactory for cluster, solid, and liquid forms of water. Recent work has stressed the importance of DFT errors in describing dispersion, but we note that errors in other parts of the energy may also contribute. We obtain information about the nature of DFT errors by using a many-body separation of the total energy into its 1-body, 2-body, and beyond-2-body components to analyze the deficiencies of the popular PBE and BLYP approximations for the energetics of water clusters and ice structures. The errors of these approximations are computed by using accurate benchmark energies from the coupled-cluster technique of molecular quantum chemistry and from quantum Monte Carlo calculations. The systems studied are isomers of the water hexamer cluster, the crystal structures Ih, II, XV, and VIII of ice, and two clusters extracted from ice VIII. For the binding energies of these systems, we use the machine-learning technique of Gaussian Approximation Potentials to correct successively for 1-body and 2-body errors of the DFT approximations. We find that even after correction for these errors, substantial beyond-2-body errors remain. The characteristics of the 2-body and beyond-2-body errors of PBE are completely different from those of BLYP, but the errors of both approximations disfavor the close approach of non-hydrogen-bonded monomers. We note the possible relevance of our findings to the understanding of liquid water.
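The many-body separation itself is easy to state in code. The toy sketch below uses an artificial pair-plus-three-body model (not a water potential): the 2-body component is obtained by evaluating every dimer in isolation, and whatever remains of the total is the beyond-2-body part (the 1-body term is zero for the point monomers used here).

```python
import numpy as np
from itertools import combinations

EPS3 = 0.05   # strength of an artificial 3-body term (hypothetical)

def pair_e(r):
    """Lennard-Jones-like pair energy (illustrative only)."""
    return 4.0 * (r ** -12 - r ** -6)

def energy(coords):
    """Model 'total' energy: all pair terms plus EPS3 per triple."""
    n = len(coords)
    e = sum(pair_e(np.linalg.norm(coords[i] - coords[j]))
            for i, j in combinations(range(n), 2))
    e += EPS3 * sum(1 for _ in combinations(range(n), 3))
    return e

coords = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0],
                   [0.0, 1.2, 0.0], [0.0, 0.0, 1.3]])
e_total = energy(coords)
# 2-body component: evaluate every dimer in isolation and sum.
e_2body = sum(energy(coords[[i, j]]) for i, j in combinations(range(4), 2))
e_beyond = e_total - e_2body   # beyond-2-body part: C(4,3) * EPS3 here
```

In the paper this same decomposition is applied to DFT errors, with coupled-cluster and quantum Monte Carlo energies as the benchmark in place of a known model total.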