22 results for uncertainty-based coordination
in the Cambridge University Engineering Department Publications Database
Abstract:
The optimization of dialogue policies using reinforcement learning (RL) is now an accepted part of the state of the art in spoken dialogue systems (SDS). Yet the commonly used training algorithms for SDS require a large number of dialogues, so most systems still rely on artificial data generated by a user simulator, and optimization is performed offline before the system is released to real users. Gaussian processes (GPs) for RL have recently been applied to dialogue systems. One advantage of GPs is that they compute an explicit measure of uncertainty in the value function estimates computed during learning. This paper describes a class of novel learning strategies that use this uncertainty to control exploration online. Comparisons between several exploration schemes show that significant improvements in learning speed can be obtained and that rapid and safe online optimization is possible, even on a complex task. Copyright © 2011 ISCA.
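A rough illustration of the idea, not the paper's actual algorithm: the sketch below treats a Gaussian process's posterior standard deviation as an exploration bonus when scoring candidate actions. The features, returns, and the beta weight are all invented for the example.

```python
# Hypothetical sketch: uncertainty-driven action selection with a GP value model.
# Illustrates using the GP's posterior variance to steer exploration online.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)

# Toy experience: feature vectors for (state, action) pairs and observed returns.
X_seen = rng.normal(size=(30, 4))   # invented featurised (s, a) pairs
y_seen = rng.normal(size=30)        # invented observed returns

gp = GaussianProcessRegressor().fit(X_seen, y_seen)

def select_action(candidate_features, beta=1.0):
    """Pick the action with the highest upper confidence bound,
    mean + beta * std, so poorly explored actions get tried more often."""
    mean, std = gp.predict(candidate_features, return_std=True)
    return int(np.argmax(mean + beta * std))

# Example: choose among 5 candidate actions for the current state.
candidates = rng.normal(size=(5, 4))
print(select_action(candidates))
```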
Abstract:
We study the role of connectivity in the linear and nonlinear elastic behavior of amorphous systems, using a two-dimensional random network of harmonic springs as a model system. A natural characterization of these systems arises in terms of the network coordination relative to that of an isostatic network, $\delta z$: a floppy network has $\delta z<0$, while a stiff network has $\delta z>0$. Under an externally applied load we observe that the responses of both floppy and stiff networks are controlled by the same critical point, corresponding to the onset of rigidity. We use numerical simulations to compute the exponents that characterize the shear modulus, the amplitude of non-affine displacements, and the network stiffening as functions of $\delta z$; we derive these exponents theoretically and make predictions for the mechanical response of glasses and fibrous networks.
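The scaling picture can be written schematically as below; the exponent symbols and the non-affinity notation are placeholders introduced here, since the abstract does not quote values.

```latex
% Schematic scaling forms near the isostatic point; exponents f and g are
% left symbolic because the abstract reports computing them without values.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
A stiff network ($\delta z > 0$) has shear modulus
\[
  G \sim (\delta z)^{f},
\]
while the amplitude of non-affine displacements grows as rigidity is lost,
\[
  \langle u_{\mathrm{na}}^{2} \rangle \sim |\delta z|^{-g},
\]
with the floppy ($\delta z<0$) and stiff ($\delta z>0$) branches controlled
by the same critical point at $\delta z = 0$.
\end{document}
```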
Abstract:
Design work involves uncertainty that arises from, and influences, the progressive development of solutions. This paper analyses the influence of evolving uncertainty levels on the design process. We focus on uncertainties associated with choosing the values of design parameters and do not consider in detail the issues that arise when parameters must first be identified. Aspects of uncertainty and its evolution are discussed, and a new task-based model is introduced to describe process behaviour in terms of changing uncertainty levels. The model is applied to study two process configuration problems based on aircraft wing design: one using an analytical solution and one using Monte Carlo simulation. The applications show that modelling uncertainty levels during design can help assess management policies, such as how many concepts to consider and to what level of accuracy. © 2011 Springer-Verlag.
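A minimal Monte Carlo sketch in the spirit of the second application: it tracks how the spread of one design parameter narrows as successive tasks refine it. The parameter, its prior, and the per-task refinement factors are all assumptions made for illustration.

```python
# Hypothetical Monte-Carlo sketch of uncertainty levels shrinking across
# design tasks. All numbers and the refinement model are illustrative,
# not taken from the paper.
import numpy as np

rng = np.random.default_rng(1)
n_samples = 10_000

# Initial belief about one wing design parameter (made-up prior).
param = rng.normal(loc=100.0, scale=20.0, size=n_samples)

# Each task cuts the residual error by a task-specific factor (assumed).
refinement_factors = [0.6, 0.5, 0.7]
true_value = 100.0

for i, f in enumerate(refinement_factors, start=1):
    param = true_value + f * (param - true_value)  # task shrinks the spread
    print(f"after task {i}: std = {param.std():.2f}")
```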
Abstract:
To address future uncertainty in strategy and innovation, managers extrapolate past patterns and trends into the future. Several disciplines make use of lifecycles, often with a linear sequence of identified phases, to make predictions and address likely uncertainties. Often the aggregation of several cycles is then interpreted as a new cycle, such as product lifecycles aggregating into an industry lifecycle. However, different lifecycle terms (technology, product, industry) are frequently used interchangeably and without clear definition. In the interdisciplinary context of technology management, this juxtaposition of dynamics can create confusion rather than clarification. This paper explores some typical dynamics associated with technology-based industries, using illustrative examples from the automotive industry. A wide range of dimensions is seen to influence the path of a technology-based industry, and stakeholders need to consider the likely causality and synchronicity of these dimensions. Some curves simply present the aggregation of their components; other dynamics incur time lags rather than being superimposed, but still have a significant impact. Understanding these interactions will be critical for aligning the important dimensions within any development and for future strategy decisions. © 2011 IEEE.
Abstract:
Optimal feedback control postulates that feedback responses depend on the task relevance of any perturbations. We test this prediction in a bimanual task, conceptually similar to balancing a laden tray, in which each hand could be perturbed up or down. Single-limb mechanical perturbations produced long-latency reflex responses ("rapid motor responses") in the contralateral limb of appropriate direction and magnitude to maintain the tray horizontal. During bimanual perturbations, rapid motor responses modulated appropriately depending on the extent to which perturbations affected tray orientation. Specifically, despite receiving the same mechanical perturbation causing muscle stretch, the strongest responses were produced when the contralateral arm was perturbed in the opposite direction (large tray tilt) rather than in the same direction or not perturbed at all. Rapid responses from shortening extensors depended on a nonlinear summation of the sensory information from the arms, with the response to a bimanual same-direction perturbation (orientation maintained) being less than the sum of the component unimanual perturbations (task relevant). We conclude that task-dependent tuning of reflexes can be modulated online within a single trial based on a complex interaction across the arms.
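In notation introduced here for illustration (the abstract reports the effect qualitatively), the nonlinear summation reads:

```latex
% R(.) denotes the magnitude of the rapid motor response to a given
% perturbation pattern; the symbols are ours, not the paper's.
\documentclass{article}
\usepackage{amsmath}
\begin{document}
For shortening extensors, the response to a bimanual same-direction
perturbation is sublinear in its unimanual components:
\[
  R(\text{both, same direction}) \;<\; R(\text{left only}) + R(\text{right only}),
\]
consistent with responses being weighted by task relevance (tray
orientation) rather than by local muscle stretch alone.
\end{document}
```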
Abstract:
Reducing energy consumption is a major challenge for energy-intensive industries such as papermaking. A commercially viable energy-saving solution is to employ data-based optimization techniques to obtain a set of optimized operational settings that satisfy certain performance indices. The difficulties are that: 1) problems of this type are inherently multicriteria, in the sense that improving one performance index might compromise other important measures; 2) practical systems often exhibit unknown complex dynamics and several interconnections, which make the modeling task difficult; and 3) as the models are acquired from existing historical data, they are valid only locally, and extrapolation carries a risk of increasing process variability. To overcome these difficulties, this paper presents a new decision support system for robust multiobjective optimization of interconnected processes. The plant is first divided into serially connected units to model the process, product quality, energy consumption, and corresponding uncertainty measures. A multiobjective gradient descent algorithm is then used to solve the problem in line with the user's preference information. Finally, the optimization results are visualized for analysis and decision making. In practice, if further iterations of the optimization algorithm are considered, the validity of the local models must be checked before proceeding. The method is implemented in a MATLAB-based interactive tool, DataExplorer, supporting a range of data analysis, modeling, and multiobjective optimization techniques. The proposed approach was tested in two UK-based commercial paper mills, where the aim was to reduce steam consumption and increase productivity while maintaining product quality by optimizing vacuum pressures in the forming and press sections. The experimental results demonstrate the effectiveness of the method. © 2006 IEEE.
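A generic sketch of one plausible core step, a preference-weighted multiobjective gradient descent; the objectives, weights, and step size below are illustrative stand-ins, not the mill models or the paper's exact algorithm.

```python
# Sketch of a preference-weighted multiobjective gradient step. Both
# objective functions are invented toy surrogates, not the paper's models.
import numpy as np

def steam_use(x):      # hypothetical energy objective (minimise)
    return (x[0] - 2.0) ** 2 + 0.5 * x[1] ** 2

def quality_loss(x):   # hypothetical product-quality objective (minimise)
    return (x[0] + 1.0) ** 2 + (x[1] - 1.0) ** 2

def num_grad(f, x, h=1e-6):
    """Central-difference gradient of f at x."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        e = np.zeros_like(x)
        e[i] = h
        g[i] = (f(x + e) - f(x - e)) / (2 * h)
    return g

x = np.array([0.0, 0.0])   # operational settings, e.g. vacuum pressures
weights = (0.7, 0.3)       # user's preference between the two objectives

for _ in range(200):
    g = weights[0] * num_grad(steam_use, x) + weights[1] * num_grad(quality_loss, x)
    x -= 0.05 * g          # descend the preference-weighted objective

print(x, steam_use(x), quality_loss(x))
```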
Abstract:
Modelling dialogue as a partially observable Markov decision process (POMDP) enables a dialogue policy that is robust to speech understanding errors to be learnt. However, a major challenge in POMDP policy learning is to maintain tractability, so the use of approximation is inevitable. We propose applying Gaussian processes in reinforcement learning of optimal POMDP dialogue policies, in order (1) to make the learning process faster and (2) to obtain an estimate of the uncertainty of the approximation. We first demonstrate the idea on a simple voice mail dialogue task and then apply the method to a real-world tourist information dialogue task. © 2010 Association for Computational Linguistics.
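To show where the uncertainty estimate comes from, here are the standard GP regression formulas computed directly; the kernel, data, and noise level are illustrative.

```python
# Minimal sketch of why a GP supplies an uncertainty estimate: the posterior
# variance at a query point falls out of standard GP regression algebra.
import numpy as np

def rbf(a, b, ell=1.0):
    """Squared-exponential kernel between two 1-D input arrays."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

X = np.array([0.0, 1.0, 2.5])    # observed inputs (illustrative)
y = np.array([0.2, 0.8, -0.1])   # observed values (illustrative)
noise = 1e-2

K = rbf(X, X) + noise * np.eye(len(X))
x_star = np.array([1.5])
k_star = rbf(X, x_star)

alpha = np.linalg.solve(K, y)
mean = k_star.T @ alpha                                        # posterior mean
var = rbf(x_star, x_star) - k_star.T @ np.linalg.solve(K, k_star)  # posterior variance
print(mean.item(), var.item())
```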
Abstract:
The uncertainty associated with a rainfall-runoff and non-point source (NPS) loading model can be attributed to both the parameterization and the model structure. An interesting implication of the areal nature of NPS models is the direct relationship between model structure (i.e. sub-watershed size) and sample size for the parameterization of spatial data. The approach of this research is to find the structural limitations in scale for the use of the conceptual NPS model, then examine the scales at which suitable stochastic depictions of key parameter sets can be generated. The overlapping regions are optimal (and possibly the only suitable regions) for conducting meaningful stochastic analysis with a given NPS model. Previous work has sought optimal scales for deterministic analysis (where, in fact, calibration can be adjusted to compensate for sub-optimal scale selection); however, the analysis of stochastic suitability and of the uncertainty associated with both the conceptual model and the parameter set, as presented here, is novel, as is the strategy of delineating a watershed based on the uncertainty distribution. The results of this paper demonstrate a narrow range of acceptable model structures for stochastic analysis in the chosen NPS model. In the case examined, the uncertainties associated with parameterization and parameter sensitivity are shown to be outweighed in significance by those resulting from structural and conceptual decisions. © 2011 IAHS Press.
Abstract:
Matching a new technology to an appropriate market is a major challenge for new technology-based firms (NTBFs). Such firms are often advised to target niche markets, where the firms and their technologies can establish themselves relatively free of incumbent competition. However, technologies are diverse in nature and do not benefit from identical strategies. In contrast to many Information and Communication Technology (ICT) innovations, which build on an established knowledge base for fairly specific applications, technologies based on emerging science are often generic and so have a number of markets and applications open to them, each carrying considerable technological and market uncertainty. Each of these potential markets is part of a complex and evolving ecosystem from which the venture may have to access significant complementary assets in order to create and sustain commercial value. Based on dataset and case-study research on UK advanced-materials university spin-outs (USOs), we find that, contrary to conventional wisdom, the more commercially successful ventures targeted mainstream markets by working closely with large, established competitors during early development. While niche markets promise protection from incumbent firms, science-based innovations, such as new materials, often require the presence, and participation, of established companies in order to create value. © 2012 IEEE.
Abstract:
The desire to seek new and unfamiliar experiences is a fundamental behavioral tendency in humans and other species. In economic decision making, novelty seeking is often rational, insofar as uncertain options may prove valuable and advantageous in the long run. Here, we show that, even when the degree of perceptual familiarity of an option is unrelated to choice outcome, novelty nevertheless drives choice behavior. Using functional magnetic resonance imaging (fMRI), we show that this behavior is specifically associated with striatal activity, in a manner consistent with computational accounts of decision making under uncertainty. Furthermore, this activity predicts interindividual differences in susceptibility to novelty. These data indicate that the brain uses perceptual novelty to approximate choice uncertainty in decision making, which in certain contexts gives rise to a newly identified and quantifiable source of human irrationality.
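One common computational account consistent with this finding (a sketch, not the study's fitted model) adds a novelty bonus to learned values before a softmax choice, so perceptually novel options are picked more often even at equal value:

```python
# Novelty-bonus choice model: values, bonus size, and temperature are
# illustrative, not parameter estimates from the fMRI study.
import numpy as np

def choice_probs(values, novelty, bonus=0.5, temp=1.0):
    """Softmax over value + bonus * novelty: novel options are chosen
    more often even when novelty is unrelated to payoff."""
    v = np.asarray(values) + bonus * np.asarray(novelty)
    e = np.exp((v - v.max()) / temp)
    return e / e.sum()

# Two options with equal learned value; option 1 is perceptually novel.
print(choice_probs(values=[0.5, 0.5], novelty=[0.0, 1.0]))
```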
Abstract:
Statistical dialog systems (SDSs) are motivated by the need for a data-driven framework that reduces the cost of laboriously handcrafting complex dialog managers and that provides robustness against the errors created by speech recognizers operating in noisy environments. By including an explicit Bayesian model of uncertainty and by optimizing the policy via a reward-driven process, partially observable Markov decision processes (POMDPs) provide such a framework. However, exact model representation and optimization is computationally intractable. Hence, the practical application of POMDP-based systems requires efficient algorithms and carefully constructed approximations. This review article provides an overview of the current state of the art in the development of POMDP-based spoken dialog systems. © 1963-2012 IEEE.
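The Bayesian core of such systems is the textbook POMDP belief update, sketched below with illustrative two-state numbers in the style of a voicemail-like task:

```python
# Textbook POMDP belief update: b'(s') is proportional to
# P(o | s') * sum_s P(s' | s, a) * b(s). The matrices are illustrative.
import numpy as np

def belief_update(b, T_a, O_a, obs):
    """b: belief over states; T_a[s, s']: transition given action a;
    O_a[s', o]: observation likelihood. Returns the normalised new belief."""
    predicted = b @ T_a               # sum_s P(s'|s,a) b(s)
    unnorm = O_a[:, obs] * predicted  # weight by observation likelihood
    return unnorm / unnorm.sum()

b = np.array([0.5, 0.5])                 # uncertain user goal
T = np.array([[0.9, 0.1], [0.1, 0.9]])   # goals mostly persist (assumed)
O = np.array([[0.8, 0.2], [0.3, 0.7]])   # noisy speech understanding (assumed)

print(belief_update(b, T, O, obs=0))
```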