859 results for Time-sharing computer systems
Abstract:
Gaussian processes are gaining increasing popularity among the control community, in particular for the modelling of discrete time state space systems. However, it has not been clear how to incorporate model information, in the form of known state relationships, when using a Gaussian process as a predictive model. An obvious example of known prior information is position and velocity related states. Incorporation of such information would be beneficial both computationally and for faster dynamics learning. This paper introduces a method of achieving this, yielding faster dynamics learning and a reduction in computational effort from O(Dn²) to O((D - F)n²) in the prediction stage for a system with D states, F known state relationships and n observations. The effectiveness of the method is demonstrated through its inclusion in the PILCO learning algorithm with application to the swing-up and balance of a torque-limited pendulum and the balancing of a robotic unicycle in simulation. © 2012 IEEE.
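A minimal sketch of the underlying idea, not the paper's implementation: when one state is a known function of another (e.g. position as the Euler integral of velocity), only the remaining D - F state dimensions need a Gaussian process prediction, and the rest follow deterministically. The kernel, hyperparameters, toy pendulum-like data and function names below are all assumptions for illustration.

```python
import numpy as np

# Illustrative sketch: D = 2 states (position, velocity), F = 1 known
# relationship (position is the Euler integral of velocity), so only
# D - F = 1 dimension needs a GP prediction.

def rbf_kernel(A, B, lengthscale=1.0, variance=1.0):
    """Squared-exponential kernel between the row vectors of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return variance * np.exp(-0.5 * d2 / lengthscale**2)

def gp_predict(X_train, y_train, X_test, noise=1e-3):
    """GP posterior mean for the one dimension that actually needs a GP."""
    K = rbf_kernel(X_train, X_train) + noise * np.eye(len(X_train))
    K_star = rbf_kernel(X_test, X_train)
    alpha = np.linalg.solve(K, y_train)   # one O(n^3) solve; each mean query is then O(n)
    return K_star @ alpha

# Toy training data: inputs are (position, velocity, torque),
# targets are the next-step velocity.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1, 1, size=(50, 3))
y_train = X_train[:, 1] + 0.1 * np.sin(X_train[:, 0]) + 0.1 * X_train[:, 2]

def predict_next_state(pos, vel, torque, dt=0.05):
    x = np.array([[pos, vel, torque]])
    vel_next = gp_predict(X_train, y_train, x)[0]  # GP handles velocity only
    pos_next = pos + dt * vel_next                 # known relationship, no GP needed
    return pos_next, vel_next

print(predict_next_state(0.1, 0.0, 0.5))
```

The saving the abstract quantifies comes from dropping the per-dimension predictive computations (including the O(n²) variance terms, omitted above) for the F dimensions that are determined by known relationships.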
Abstract:
This paper presents an efficient algorithm for robust network reconstruction of Linear Time-Invariant (LTI) systems in the presence of noise, estimation errors and unmodelled nonlinearities. The method here builds on previous work [1] on robust reconstruction to provide a practical implementation with polynomial computational complexity. Following the same experimental protocol, the algorithm obtains a set of structurally-related candidate solutions spanning every level of sparsity. We prove the existence of a magnitude bound on the noise which, if satisfied, guarantees that one of these structures is the correct solution. A problem-specific model-selection procedure then selects a single solution from this set and provides a measure of confidence in that solution. Extensive simulations quantify the expected performance for different levels of noise and show that significantly more noise can be tolerated in comparison to the original method. © 2012 IEEE.
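The candidates-per-sparsity-level plus model-selection pattern can be illustrated with a much simpler stand-in (this is not the paper's reconstruction algorithm): for a single node, pick the best-fitting parent set at each sparsity level, then choose among those candidates with an information criterion. The synthetic data, the exhaustive search and the BIC-style score below are assumptions made purely for illustration.

```python
import numpy as np
from itertools import combinations

# Stand-in illustration: one candidate structure per sparsity level,
# followed by a simple model-selection step. Synthetic data only.
rng = np.random.default_rng(1)
n, p = 200, 4                              # n observations, p candidate parents
X = rng.normal(size=(n, p))
true_w = np.array([1.5, 0.0, -0.8, 0.0])   # true structure uses parents 0 and 2
y = X @ true_w + 0.1 * rng.normal(size=n)

def fit_support(support):
    """Least-squares fit restricted to a given set of parent indices."""
    Xs = X[:, list(support)]
    w, *_ = np.linalg.lstsq(Xs, y, rcond=None)
    resid = y - Xs @ w
    return w, float(resid @ resid)

# One best-fitting candidate at every sparsity level k = 1, ..., p.
candidates = []
for k in range(1, p + 1):
    best = min(combinations(range(p), k), key=lambda s: fit_support(s)[1])
    candidates.append(best)

def bic(support):
    """BIC-style score trading residual error against structure size."""
    _, rss = fit_support(support)
    return n * np.log(rss / n) + len(support) * np.log(n)

chosen = min(candidates, key=bic)
print("candidates per sparsity level:", candidates)
print("selected structure:", chosen)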
Abstract:
We introduce a characterization of contraction for bounded convex sets. For discrete-time multi-agent systems we provide an explicit upper bound on the rate of convergence to a consensus under the assumptions of contractiveness and (weak) connectedness (across an interval). Convergence is shown to be exponential when either the system or the function characterizing the contraction is linear. Copyright © 2007 IFAC.
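For context, a standard discrete-time consensus iteration of the kind such bounds apply to, together with the generic shape of an exponential convergence bound, can be written as follows. This is an illustrative form only; the weights a_ij(k) and the constants C, ρ are placeholders, not quantities from the paper.

```latex
% Illustrative consensus update with row-stochastic weights,
% and the generic form of an exponential convergence bound.
\[
  x_i(k+1) \;=\; \sum_{j=1}^{N} a_{ij}(k)\, x_j(k),
  \qquad a_{ij}(k) \ge 0, \quad \sum_{j=1}^{N} a_{ij}(k) = 1,
\]
\[
  \max_i x_i(k) \;-\; \min_i x_i(k) \;\le\; C\,\rho^{k},
  \qquad 0 \le \rho < 1 .
\]
```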
Abstract:
This paper presents a two-weighted neural network approach to determine the delay time for a heating, ventilating and air-conditioning (HVAC) plant to respond to control actions. The two-weighted neural network is a fully connected four-layer network. An acceleration technique was used to improve the general delta rule for the learning process. Experimental data for heating and cooling modes were used with both the two-weighted neural network and a traditional mathematical method to determine the delay time. The results show that two-weighted neural networks can be used effectively to determine the delay time for HVAC systems.
Abstract:
This paper presents a multi-weight neuron approach to determine the delay time for a heating, ventilating and air-conditioning (HVAC) plant to respond to control actions. The multi-weight neuron network is a fully connected four-layer network. An acceleration technique was used to improve the general delta rule for the learning process. Experimental data for heating and cooling modes were used with both the multi-weight neuron network and a traditional mathematical method to determine the delay time. The results show that multi-weight neuron networks can be used effectively to determine the delay time for HVAC systems.
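As a rough sketch of the kind of model both delay-time abstracts above describe, the following implements a fully connected four-layer network trained with the generalized delta rule plus a momentum term as a simple acceleration technique. The layer sizes, learning rate, momentum coefficient and the synthetic "delay time" data are assumptions for illustration, not details taken from either paper.

```python
import numpy as np

# Sketch: four-layer fully connected network (input, two hidden layers, output)
# trained by backpropagation (generalized delta rule) with momentum.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data: 3 plant inputs (e.g. supply temperature, flow rate, setpoint step)
# mapped to a normalised delay time in [0, 1]. Purely synthetic.
X = rng.uniform(0, 1, size=(100, 3))
y = 0.3 * X[:, :1] + 0.5 * X[:, 1:2] * X[:, 2:3]

sizes = [3, 8, 8, 1]
W = [rng.normal(0, 0.5, size=(a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
V = [np.zeros_like(w) for w in W]          # momentum buffers
lr, mu = 0.5, 0.9

for epoch in range(2000):
    # Forward pass, keeping every layer's activation for backpropagation.
    acts = [X]
    for w in W:
        acts.append(sigmoid(acts[-1] @ w))

    # Backward pass: generalized delta rule with sigmoid derivatives.
    delta = (acts[-1] - y) * acts[-1] * (1 - acts[-1])
    for i in reversed(range(len(W))):
        grad = acts[i].T @ delta / len(X)
        if i > 0:
            delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
        V[i] = mu * V[i] - lr * grad       # momentum-accelerated update
        W[i] += V[i]

pred = X
for w in W:
    pred = sigmoid(pred @ w)
print("training MSE:", float(np.mean((pred - y) ** 2)))
```

Momentum simply carries over a fraction of the previous weight update, one common way of accelerating the plain delta rule; the abstracts do not specify which acceleration technique the authors actually used.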
Abstract:
Based on the idea that spatial separation of charge states can enhance quantum coherence, we propose a scheme for quantum computation with the quantum bit (qubit) constructed from two coupled quantum dots. Quantum information is stored in the electron-hole pair state with the electron and hole located in different dots, which enables the qubit state to be very long-lived. Universal quantum gates involving any pair of qubits are realized by coupling the quantum dots through the cavity photon, which is a promising candidate for the transfer of long-range information. The operation analysis is carried out by estimating the gate time versus the decoherence time.
Abstract:
PILOT is a programming system constructed in LISP. It is designed to facilitate the development of programs by easing the familiar sequence: write some code, run the program, make some changes, write some more code, run the program again, etc. As a program becomes more complex, making these changes becomes harder and harder because the implications of changes are harder to anticipate. In the PILOT system, the computer plays an active role in this evolutionary process by providing the means whereby changes can be effected immediately, and in ways that seem natural to the user. The user of PILOT feels that he is giving advice, or making suggestions, to the computer about the operation of his programs, and that the system then performs the work necessary. The PILOT system is thus an interface between the user and his program, monitoring both the requests of the user and the operation of his program. The user may easily modify the PILOT system itself by giving it advice about its own operation. This allows him to develop his own language and to shift gradually onto PILOT the burden of performing routine but increasingly complicated tasks. In this way, he can concentrate on the conceptual difficulties in the original problem, rather than on the niggling tasks of editing, rewriting, or adding to his programs. Two detailed examples are presented. PILOT is a first step toward computer systems that will help man to formulate problems in the same way they now help him to solve them. Experience with it supports the claim that such "symbiotic systems" allow the programmer to attack and solve more difficult problems.
Abstract:
The actor message-passing model of concurrent computation has inspired new ideas in the areas of knowledge-based systems, programming languages and their semantics, and computer systems architecture. The model itself grew out of computer languages such as Planner, Smalltalk, and Simula, and out of the use of continuations to interpret imperative constructs within λ-calculus. The mathematical content of the model has been developed by Carl Hewitt, Irene Greif, Henry Baker, and Giuseppe Attardi. This thesis extends and unifies their work through the following observations. The ordering laws postulated by Hewitt and Baker can be proved using a notion of global time. The most general ordering laws are in fact equivalent to an axiom of realizability in global time. Independence results suggest that some notion of global time is essential to any model of concurrent computation. Since nondeterministic concurrency is more fundamental than deterministic sequential computation, there may be no need to take fixed points in the underlying domain of a power domain. Power domains built from incomplete domains can solve the problem of providing a fixed point semantics for a class of nondeterministic programming languages in which a fair merge can be written. The event diagrams of Greif's behavioral semantics, augmented by Baker's pending events, form an incomplete domain. Its power domain is the semantic domain in which programs written in actor-based languages are assigned meanings. This denotational semantics is compatible with behavioral semantics. The locality laws postulated by Hewitt and Baker may be proved for the semantics of an actor-based language. Altering the semantics slightly can falsify the locality laws. The locality laws thus constrain what counts as an actor semantics.
Abstract:
Tedd, L. (2006). Program: a record of the first 40 years of electronic library and information systems. Program: Electronic Library and Information Systems, 40(1), 11-26.
Abstract:
This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses, either by gaze or by touch respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster when compared with stepped 2-image agent cues, and 42% faster when compared with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, although the differences between conditions were smaller. Responses to the fully animated agent were 17% and 20% faster when compared with the 2-image and 1-image cues respectively. These results inform techniques aimed at engaging users' attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on the users' eye movements and touch responses.
Abstract:
Generally speaking, the term temporal logic refers to any system of rules and symbolism for representing and reasoning about propositions qualified in terms of time. In computer science, particularly in the domain of Artificial Intelligence, there are mainly two known approaches to the representation of temporal information: modal logic approaches, including tense logic and hybrid temporal logic, and predicate logic approaches, including the temporal argument method and reified temporal logic. On one hand, while tense logic, hybrid temporal logic and the temporal argument method enjoy formal theoretical foundations, their expressiveness has been criticised as not powerful enough for representing general temporal knowledge; on the other hand, although reified temporal logic provides greater expressive power, most of the current systems following temporal reification lack complete and sound axiomatic theories. With these observations in mind, a new reified temporal logic with clear syntax and semantics in terms of a sound and complete axiomatic formalism is introduced in this paper, which retains all the expressive power of temporal reification.
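To make the distinction concrete, the two predicate-logic styles mentioned above are commonly written as follows. This is generic textbook notation (a temporal argument added to the predicate versus a HOLDS-style meta-predicate over a reified proposition), not the specific formalism introduced in the paper.

```latex
% Temporal argument method: time appears as an extra argument of the predicate.
% Reified temporal logic: the proposition is a term, related to a time by a
% meta-predicate such as HOLDS.
\[
  \text{temporal arguments:}\quad \mathit{On}(a, b, t_1)
  \qquad\qquad
  \text{reified:}\quad \mathrm{HOLDS}\bigl(\mathit{on}(a, b),\, i\bigr)
\]
```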
Abstract:
This paper, chosen as a best paper from the 2004 SAMOS Workshop on Computer Systems, describes a novel, efficient methodology for automatically creating embedded DSP computer systems. The novelty arises because embedded electronic signal processing systems, such as radar or sonar, can now be designed by anyone from the algorithm level, i.e. no low-level system design experience is required, whilst still achieving low, controllable implementation overheads and high real-time performance. In the chosen design example, a bank of Normalised Lattice Filter (NLF) components is created which achieves a four-fold reduction in the required processing resource with no performance decrease.
Abstract:
The authors are concerned with the development of computer systems that are capable of using information from faces and voices to recognise people's emotions in real-life situations. The paper addresses the nature of the challenges that lie ahead, and provides an assessment of the progress that has been made in the areas of signal processing and analysis techniques (with regard to speech and face), and the psychological and linguistic analyses of emotion. Ongoing developmental work by the authors in each of these areas is described.