909 results for Bayesian adaptive design
Abstract:
This paper presents a Bayesian probabilistic framework to assess soil properties and model uncertainty to better predict excavation-induced deformations using field deformation data. The potential correlations between deformations at different depths are accounted for in the likelihood function needed in the Bayesian approach. The proposed approach also accounts for inclinometer measurement errors. The posterior statistics of the unknown soil properties and the model parameters are computed using the Delayed Rejection (DR) method and the Adaptive Metropolis (AM) method. As an application, the proposed framework is used to assess the unknown soil properties of multiple soil layers using deformation data at different locations and for incremental excavation stages. The developed approach can be used for the design of optimal revisions for supported excavation systems. © 2010 ASCE.
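The combination of Delayed Rejection and Adaptive Metropolis is often called DRAM sampling. A minimal single-chain sketch on a toy one-dimensional posterior (the target density, proposal scales, and adaptation schedule here are illustrative assumptions, not the paper's actual soil model):

```python
import math
import random
import statistics

def dram_sample(log_post, x0, n_iter=20000, s0=1.0, seed=1):
    """Metropolis sampler with one delayed-rejection stage (narrower
    second proposal) and adaptive-Metropolis scaling of the step size."""
    random.seed(seed)
    x, s, chain = x0, s0, []
    for i in range(n_iter):
        # Adaptive Metropolis: rescale the proposal from the chain history.
        if i > 200 and i % 100 == 0:
            s = 2.4 * statistics.pstdev(chain) + 1e-6
        y1 = x + random.gauss(0.0, s)
        a1 = min(1.0, math.exp(log_post(y1) - log_post(x)))
        if random.random() < a1:
            x = y1
        else:
            # Delayed rejection: retry once with a narrower proposal,
            # using the two-stage acceptance ratio of Tierney & Mira.
            y2 = x + random.gauss(0.0, 0.3 * s)
            a1_rev = min(1.0, math.exp(log_post(y1) - log_post(y2)))
            q_num = math.exp(-((y1 - y2) ** 2) / (2 * s * s))
            q_den = math.exp(-((y1 - x) ** 2) / (2 * s * s))
            num = math.exp(log_post(y2)) * q_num * (1.0 - a1_rev)
            den = math.exp(log_post(x)) * q_den * (1.0 - a1)
            if den > 0 and random.random() < min(1.0, num / den):
                x = y2
        chain.append(x)
    return chain

# Toy posterior: Gaussian with mean 3.0 and standard deviation 1.5.
chain = dram_sample(lambda v: -((v - 3.0) ** 2) / (2 * 1.5 ** 2), x0=0.0)
post = chain[2000:]  # discard burn-in
```

The delayed-rejection stage rescues moves that an over-dispersed proposal would waste, while the adaptation keeps the step size matched to the posterior spread.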
Abstract:
The control of a class of combustion systems, susceptible to damage from self-excited combustion oscillations, is considered. An adaptive stable controller, called the Self-Tuning Regulator (STR), has recently been developed, which meets the apparently contradictory challenge of relying as little as possible on a particular combustion model while providing some guarantee that the controller will cause no harm. The controller injects some fuel unsteadily into the burning region, thereby altering the heat release, in response to an input signal detecting the oscillation. This paper focuses on an extension of the STR design when, due to stringent emission requirements and to the danger of flame extinction, the amount of fuel used for control is limited in amplitude. A Lyapunov stability analysis is used to prove the stability of the modified STR when the saturation constraint is imposed. The practical implementation of the modified STR remains straightforward, and simulation results, based on the nonlinear premixed flame model developed by Dowling, show that in the presence of a saturation constraint, the self-excited oscillations are damped more rapidly with the modified STR than with the original STR. © 2001 by S. Evesque. Published by the American Institute of Aeronautics and Astronautics, Inc.
Abstract:
New robotics is an approach to robotics that, in contrast to traditional robotics, employs ideas and principles from biology. While in the traditional approach there are generally accepted methods (e.g., from control theory), designing agents in the new robotics approach is still largely considered an art. In recent years, we have been developing a set of heuristics, or design principles, that on the one hand capture theoretical insights about intelligent (adaptive) behavior, and on the other provide guidance in actually designing and building systems. In this article we provide an overview of all the principles but focus on the principles of ecological balance, which concerns the relation between environment, morphology, materials, and control, and sensory-motor coordination, which concerns self-generated sensory stimulation as the agent interacts with the environment and which is a key to the development of high-level intelligence. As we argue, artificial evolution together with morphogenesis is not only "nice to have" but is in fact a necessary tool for designing embodied agents.
Abstract:
There is much to gain from providing walking machines with passive dynamics, e.g. by including compliant elements in the structure. These elements can offer interesting properties such as self-stabilization, energy efficiency and simplified control. However, there is still no general design strategy for such robots and their controllers. In particular, the calibration of control parameters is often complicated because of the highly nonlinear behavior of the interactions between passive components and the environment. In this article, we propose an approach in which the calibration of a key parameter of a walking controller, namely its intrinsic frequency, is done automatically. The approach uses adaptive frequency oscillators to automatically tune the intrinsic frequency of the oscillators to the resonant frequency of a compliant quadruped robot. The tuning goes beyond simple synchronization, and the learned frequency stays in the controller when the robot is brought to a halt. The controller is model-free, robust and simple. Results are presented illustrating how the controller can robustly tune itself to the robot, as well as readapt when the mass of the robot is changed. We also provide an analysis of the convergence of the frequency adaptation for a linearized plant, and show how that analysis is useful for determining which type of sensory feedback must be used for stable convergence. This approach is expected to explain some aspects of developmental processes in biological and artificial adaptive systems that "develop" through the embodied system-environment interactions. © 2006 IEEE.
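The core mechanism of such adaptive frequency oscillators can be written as a phase oscillator whose intrinsic frequency is driven by the same perturbation term as its phase. A minimal sketch of that mechanism only (the gain, feedback signal, and integration settings are illustrative stand-ins, not the paper's full robot controller):

```python
import math

def adapt_frequency(signal, omega0, K=5.0, dt=0.001, T=100.0):
    """Phase oscillator with Hebbian frequency adaptation:
        phi' = omega - K*F(t)*sin(phi),   omega' = -K*F(t)*sin(phi).
    In the phase-locked state the average of F(t)*sin(phi) vanishes only
    when omega equals the signal frequency, so omega converges to it."""
    phi, omega, t = 0.0, omega0, 0.0
    trace = []
    for _ in range(int(T / dt)):
        corr = K * signal(t) * math.sin(phi)   # shared perturbation term
        phi += (omega - corr) * dt
        omega += -corr * dt
        t += dt
        trace.append(omega)
    return trace

OMEGA = 4.0  # frequency of the mechanical feedback signal (rad/s)
trace = adapt_frequency(lambda t: math.sin(OMEGA * t), omega0=2.0)
# Average out the residual ripple over the last 20 s of simulation.
omega_learned = sum(trace[-20000:]) / 20000.0
```

Note that omega is a state variable, not a filtered measurement: setting K = 0 (signal removed) leaves the learned frequency in place, matching the abstract's point that the frequency stays in the controller when the robot halts.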
Abstract:
Locomotion is of fundamental importance in understanding adaptive behavior. In this paper we present two case studies of robot locomotion that demonstrate how a higher level of behavioral diversity can be achieved while observing the principle of cheap design. More precisely, it is shown that, by exploiting the dynamics of the system-environment interaction, very simple controllers can be designed, which is essential for achieving rapid locomotion. Special consideration must be given to the choice of body materials. We conclude with some speculation about the importance of locomotion for understanding cognition. © Springer-Verlag Berlin Heidelberg 2004.
Abstract:
P-glycoprotein (P-gp), an ATP-binding cassette (ABC) transporter, functions as a biological barrier by extruding cytotoxic agents out of cells, resulting in an obstacle in chemotherapeutic treatment of cancer. In order to aid in the development of potential P-gp inhibitors, we constructed a quantitative structure-activity relationship (QSAR) model of flavonoids as P-gp inhibitors based on a Bayesian-regularized neural network (BRNN). A dataset of 57 flavonoids binding to the C-terminal nucleotide-binding domain of mouse P-gp was compiled from the literature. The predictive ability of the model was assessed using a test set that was independent of the training set, which showed a standard error of prediction of 0.146 ± 0.006 (data scaled from 0 to 1). Meanwhile, two other mathematical tools, back-propagation neural network (BPNN) and partial least squares (PLS), were also used to build QSAR models. The BRNN provided slightly better results for the test set compared to BPNN, but the difference was not significant according to the F-statistic at p = 0.05. The PLS failed to build a reliable model in the present study. Our study indicates that the BRNN-based in silico model has good potential in facilitating the prediction of P-gp flavonoid inhibitors and might be applied in further drug design.
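Bayesian regularization amounts to a Gaussian prior on the model weights, whose MAP estimate is a weight-decay (ridge) penalty; in the full BRNN method the penalty strength is set by maximizing the evidence. A deliberately reduced sketch: a two-descriptor linear model with a fixed penalty rather than a trained neural network, and with synthetic data standing in for the flavonoid descriptors:

```python
def ridge_fit_2d(X, y, lam=0.1):
    """MAP weights under a Gaussian prior: solve (X^T X + lam*I) w = X^T y,
    written out in closed form for the 2x2 case."""
    a = sum(x1 * x1 for x1, _ in X) + lam
    b = sum(x1 * x2 for x1, x2 in X)
    d = sum(x2 * x2 for _, x2 in X) + lam
    r1 = sum(x1 * yi for (x1, _), yi in zip(X, y))
    r2 = sum(x2 * yi for (_, x2), yi in zip(X, y))
    det = a * d - b * b
    # 2x2 inverse applied to the right-hand side
    return ((d * r1 - b * r2) / det, (a * r2 - b * r1) / det)

# Synthetic "descriptors" with a known activity relation y = 2*x1 - x2.
X = [(i / 10.0, ((i * 7) % 10) / 10.0) for i in range(20)]
y = [2.0 * x1 - 1.0 * x2 for x1, x2 in X]
w1, w2 = ridge_fit_2d(X, y)
```

The prior shrinks the weights slightly toward zero; with enough data the recovered coefficients stay close to the generating values while the penalty guards against overfitting, which is the property the BRNN exploits on the small flavonoid dataset.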
Abstract:
Real-time adaptive music is now well-established as a popular medium, largely through its use in video game soundtracks. Commercial packages, such as fmod, make freely available the underlying technical methods for use in educational contexts, making adaptive music technologies accessible to students. Writing adaptive music, however, presents a significant learning challenge, not least because it requires a different mode of thought, and tutor and learner may have few mutual points of connection in discovering and understanding the musical drivers, relationships and structures in these works. This article discusses the creation of ‘BitBox!’, a gestural music interface designed to deconstruct and explain the component elements of adaptive composition through interactive play. The interface was displayed at the Dare Protoplay games exposition in Dundee in August 2014. The initial proof-of-concept study proved successful, suggesting possible refinements in design and a broader range of applications.
Abstract:
(This Technical Report revises TR-BUCS-2003-011) The Transmission Control Protocol (TCP) has been the protocol of choice for many Internet applications requiring reliable connections. The design of TCP has been challenged by the extension of connections over wireless links. In this paper, we investigate a Bayesian approach to infer at the source host the reason of a packet loss, whether congestion or wireless transmission error. Our approach is "mostly" end-to-end since it requires only one long-term average quantity (namely, long-term average packet loss probability over the wireless segment) that may be best obtained with help from the network (e.g. a wireless access agent). Specifically, we use Maximum Likelihood Ratio tests to evaluate TCP as a classifier of the type of packet loss. We study the effectiveness of short-term classification of packet errors (congestion vs. wireless), given stationary prior error probabilities and distributions of packet delays conditioned on the type of packet loss (measured over a larger time scale). Using our Bayesian-based approach and extensive simulations, we demonstrate that congestion-induced losses and losses due to wireless transmission errors produce sufficiently different statistics upon which an efficient online error classifier can be built. We introduce a simple queueing model to characterize the conditional delay distributions arising from different kinds of packet losses over a heterogeneous wired/wireless path. We show how Hidden Markov Models (HMMs) can be used by a TCP connection to infer efficiently conditional delay distributions. We demonstrate how estimation accuracy is influenced by different proportions of congestion versus wireless losses and penalties on incorrect classification.
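The classification step reduces to a MAP/likelihood-ratio test on the observed packet delay. A toy sketch with Gaussian stand-ins for the two conditional delay distributions (the paper estimates these distributions empirically, e.g. via HMMs; the means, deviations, and priors below are invented for illustration):

```python
import math

def gauss_pdf(x, mu, sigma):
    """Density of a normal distribution, used as a stand-in for the
    empirically measured conditional delay distributions."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def classify_loss(delay_ms, p_congestion=0.5,
                  congestion=(120.0, 15.0), wireless=(80.0, 10.0)):
    """Decide the cause of a packet loss by comparing posterior-proportional
    scores p(delay | cause) * P(cause) for the two hypotheses."""
    score_c = gauss_pdf(delay_ms, *congestion) * p_congestion
    score_w = gauss_pdf(delay_ms, *wireless) * (1.0 - p_congestion)
    return "congestion" if score_c >= score_w else "wireless"
```

The prior p_congestion is where the one long-term average quantity obtained with network help (the wireless-segment loss probability) would enter the decision.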
Abstract:
Attributing a dollar value to a keyword is an essential part of running any profitable search engine advertising campaign. When an advertiser has complete control over the interaction with and monetization of each user arriving on a given keyword, the value of that term can be accurately tracked. However, in many instances, the advertiser may monetize arrivals indirectly through one or more third parties. In such cases, it is typical for the third party to provide only coarse-grained reporting: rather than report each monetization event, users are aggregated into larger channels and the third party reports aggregate information such as total daily revenue for each channel. Examples of third parties that use channels include Amazon and Google AdSense. In such scenarios, the number of channels is generally much smaller than the number of keywords whose value per click (VPC) we wish to learn. However, the advertiser has flexibility as to how to assign keywords to channels over time. We introduce the channelization problem: how do we adaptively assign keywords to channels over the course of multiple days to quickly obtain accurate VPC estimates of all keywords? We relate this problem to classical results in weighing design, devise new adaptive algorithms for this problem, and quantify the performance of these algorithms experimentally. Our results demonstrate that adaptive weighing designs that exploit statistics of term frequency, variability in VPCs across keywords, and flexible channel assignments over time provide the best estimators of keyword VPCs.
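The estimation core of the channelization problem can be viewed as a linear system: each (day, channel) pair yields one equation in which the unknown keyword VPCs are weighted by observed clicks. A small sketch with three hypothetical keywords rotated through two channels over three days (the assignments, click counts, and VPC values are all invented, and the paper's adaptive weighing designs choose assignments far more carefully than this fixed rotation):

```python
def solve_normal_equations(rows, rhs):
    """Least-squares solve via Gaussian elimination on A^T A x = A^T b."""
    n = len(rows[0])
    M = [[sum(r[i] * r[j] for r in rows) for j in range(n)] for i in range(n)]
    v = [sum(r[i] * b for r, b in zip(rows, rhs)) for i in range(n)]
    for col in range(n):                      # forward elimination, partial pivoting
        p = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[p] = M[p], M[col]
        v[col], v[p] = v[p], v[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
            v[r] -= f * v[col]
    x = [0.0] * n                             # back substitution
    for r in range(n - 1, -1, -1):
        x[r] = (v[r] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

true_vpc = [0.5, 1.2, 0.8]                    # unknown value per click
plans = [({0}, {1, 2}), ({1}, {0, 2}), ({2}, {0, 1})]  # day -> channel sets
rows, rhs = [], []
for day, channels in enumerate(plans):
    clicks = [100 + 10 * day + 5 * k for k in range(3)]  # observed clicks
    for members in channels:
        row = [float(clicks[k]) if k in members else 0.0 for k in range(3)]
        rows.append(row)
        # Aggregate daily revenue is all the third party reports per channel.
        rhs.append(sum(row[k] * true_vpc[k] for k in range(3)))
vpc_est = solve_normal_equations(rows, rhs)
```

Rotating keywords across channels makes the design matrix full rank, so the per-keyword VPCs are identifiable even though only channel-level revenue is ever observed; noise and term-frequency variability are what the adaptive designs in the paper contend with.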
Abstract:
Temporal structure in skilled, fluent action exists at several nested levels. At the largest scale considered here, short sequences of actions that are planned collectively in prefrontal cortex appear to be queued for performance by a cyclic competitive process that operates in concert with a parallel analog representation that implicitly specifies the relative priority of elements of the sequence. At an intermediate scale, single acts, like reaching to grasp, depend on coordinated scaling of the rates at which many muscles shorten or lengthen in parallel. To ensure success of acts such as catching an approaching ball, such parallel rate scaling, which appears to be one function of the basal ganglia, must be coupled to perceptual variables, such as time-to-contact. At a fine scale, within each act, desired rate scaling can be realized only if precisely timed muscle activations first accelerate and then decelerate the limbs, to ensure that muscle length changes do not under- or over-shoot the amounts needed for the precise acts. Each context of action may require a much different timed muscle activation pattern than similar contexts. Because context differences that require different treatment cannot be known in advance, a formidable adaptive engine, the cerebellum, is needed to amplify differences within, and continuously search, a vast parallel signal flow, in order to discover contextual "leading indicators" of when to generate distinctive parallel patterns of analog signals. From some parts of the cerebellum, such signals control muscles. But a recent model shows how the lateral cerebellum may serve the competitive queuing system (in frontal cortex) as a repository of quickly accessed long-term sequence memories. Thus different parts of the cerebellum may use the same adaptive engine system design to serve the lowest and the highest of the three levels of temporal structure treated.
If so, no one-to-one mapping exists between levels of temporal structure and major parts of the brain. Finally, recent data cast doubt on network-delay models of cerebellar adaptive timing.
Abstract:
This paper shows how a minimal neural network model of the cerebellum may be embedded within a sensory-neuro-muscular control system that mimics known anatomy and physiology. With this embedding, cerebellar learning promotes load compensation while also allowing both coactivation and reciprocal inhibition of sets of antagonist muscles. In particular, we show how synaptic long-term depression guided by feedback from muscle stretch receptors can lead to trans-cerebellar gain changes that are load-compensating. It is argued that the same processes help to adaptively discover multi-joint synergies. Simulations of rapid single joint rotations under load illustrate design feasibility and stability.
Abstract:
A key goal of computational neuroscience is to link brain mechanisms to behavioral functions. The present article describes recent progress towards explaining how laminar neocortical circuits give rise to biological intelligence. These circuits embody two new and revolutionary computational paradigms: Complementary Computing and Laminar Computing. Circuit properties include a novel synthesis of feedforward and feedback processing, of digital and analog processing, and of pre-attentive and attentive processing. This synthesis clarifies the appeal of Bayesian approaches but has a far greater predictive range that naturally extends to self-organizing processes. Examples from vision and cognition are summarized. A LAMINART architecture unifies properties of visual development, learning, perceptual grouping, attention, and 3D vision. A key modeling theme is that the mechanisms which enable development and learning to occur in a stable way imply properties of adult behavior. It is noted how higher-order attentional constraints can influence multiple cortical regions, and how spatial and object attention work together to learn view-invariant object categories. In particular, a form-fitting spatial attentional shroud can allow an emerging view-invariant object category to remain active while multiple view categories are associated with it during sequences of saccadic eye movements. Finally, the chapter summarizes recent work on the LIST PARSE model of cognitive information processing by the laminar circuits of prefrontal cortex. LIST PARSE models the short-term storage of event sequences in working memory, their unitization through learning into sequence, or list, chunks, and their read-out in planned sequential performance that is under volitional control. LIST PARSE provides a laminar embodiment of Item and Order working memories, also called Competitive Queuing models, that have been supported by both psychophysical and neurobiological data. 
These examples show how variations of a common laminar cortical design can embody properties of visual and cognitive intelligence that seem, at least on the surface, to be mechanistically unrelated.
Abstract:
The advent of modern wireless technologies has seen a shift in focus towards the design and development of educational systems for deployment through mobile devices. The use of mobile phones, tablets and Personal Digital Assistants (PDAs) is steadily growing across the educational sector as a whole. Mobile learning (mLearning) systems developed for deployment on such devices hold great significance for the future of education. However, mLearning systems must be built around the particular learner’s needs based on both their motivation to learn and subsequent learning outcomes. This thesis investigates how biometric technologies, in particular accelerometer and eye-tracking technologies, could effectively be employed within the development of mobile learning systems to facilitate the needs of individual learners. The creation of personalised learning environments must enable the achievement of improved learning outcomes for users, particularly at an individual level. Therefore, consideration is given to individual learning-style differences within the electronic learning (eLearning) space. The overall area of eLearning is considered and areas such as biometric technology and educational psychology are explored for the development of personalised educational systems. This thesis explains the basis of the author’s hypotheses and presents the results of several studies carried out throughout the PhD research period. These results show that both accelerometer and eye-tracking technologies can be employed as a Human Computer Interaction (HCI) method in the detection of student learning-styles to facilitate the provision of automatically adapted eLearning spaces. Finally the author provides recommendations for developers in the creation of adaptive mobile learning systems through the employment of biometric technology as a user interaction tool within mLearning applications. Further research paths are identified and a roadmap for future research in this area is defined.
Abstract:
This paper introduces the concept of adaptive temporal compressive sensing (CS) for video. We propose a CS algorithm to adapt the compression ratio based on the scene's temporal complexity, computed from the compressed data, without compromising the quality of the reconstructed video. The temporal adaptivity is manifested by manipulating the integration time of the camera, opening the possibility of real-time implementation. The proposed algorithm is a generalized temporal CS approach that can be incorporated with a diverse set of existing hardware systems. © 2013 IEEE.
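The adaptation loop can be sketched as a feedback rule: estimate temporal complexity directly from consecutive compressed measurement vectors, then lengthen or shorten the camera's integration window accordingly. The thresholds and ratio bounds below are invented placeholders, and this sketch covers only the control logic, not the CS reconstruction:

```python
def adapt_compression_ratio(meas_prev, meas_curr, ratio,
                            low=0.01, high=0.1, rmin=2, rmax=16):
    """Raise the compression ratio for static scenes and lower it for busy
    ones, using only the energy of the compressed-domain frame difference
    (so no reconstruction is needed to drive the adaptation)."""
    energy = sum((a - b) ** 2 for a, b in zip(meas_prev, meas_curr)) / len(meas_prev)
    if energy > high:          # fast-changing scene: integrate fewer frames
        return max(rmin, ratio // 2)
    if energy < low:           # static scene: integrate more frames
        return min(rmax, ratio * 2)
    return ratio               # complexity unchanged: keep current ratio
```

Because the decision uses compressed measurements only, it maps naturally onto manipulating the camera's integration time in hardware, as the abstract describes.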
Abstract:
The emergent behaviour of autonomic systems, together with the scale of their deployment, impedes prediction of the full range of configuration and failure scenarios; thus it is not possible to devise management and recovery strategies to cover all possible outcomes. One solution to this problem is to embed self-managing and self-healing abilities into such applications. Traditional design approaches favour determinism, even when unnecessary. This can lead to conflicts between the non-functional requirements. Natural systems such as ant colonies have evolved cooperative, finely tuned emergent behaviours which allow the colonies to function at very large scale and to be very robust, although non-deterministic. Simple pheromone-exchange communication systems are highly efficient and are a major contribution to their success. This paper proposes that we look to natural systems for inspiration when designing architectures and communication strategies, and presents an election algorithm which encapsulates non-deterministic behaviour to achieve high scalability, robustness and stability.