852 results for Intelligent systems
Abstract:
Dynamic Power Management (DPM) is a technique for reducing the power consumption of electronic systems by selectively shutting down idle components. In this article we introduce the back-propagation network and the radial basis function network into research on system-level power management policies. We propose two PM policies based on Artificial Neural Networks (ANNs): Back-Propagation Power Management (BPPM) and Radial Basis Function Power Management (RBFPM). Our experiments show that the two power management policies greatly lower system-level power consumption and outperform traditional Power Management (PM) techniques: BPPM is 1.09-competitive and RBFPM is 1.08-competitive, versus 1.79-, 1.45-, and 1.18-competitive, respectively, for traditional timeout PM, adaptive predictive PM, and stochastic PM.
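As a rough illustration of the competitive-ratio metric quoted above: a policy is c-competitive if its energy consumption is at most c times that of an optimal offline policy that knows each idle-period length in advance. The sketch below measures a simple timeout policy against such an oracle; the power and transition-cost figures are illustrative assumptions, not values from the paper.

```python
import random

P_ON = 1.0          # power while idle but on (W) -- illustrative assumption
E_TR = 2.0          # energy cost of one sleep/wake transition (J) -- assumption
T_BE = E_TR / P_ON  # break-even time: sleeping pays off for idle periods longer than this

def timeout_energy(idle, timeout):
    """Energy a timeout policy spends on one idle period."""
    if idle <= timeout:
        return idle * P_ON            # device never slept
    return timeout * P_ON + E_TR      # waited, then slept and paid the wake-up cost

def oracle_energy(idle):
    """Energy of the optimal offline policy, which knows the idle length."""
    return min(idle * P_ON, E_TR)     # stay on, or sleep immediately

def competitive_ratio(idle_periods, timeout):
    policy = sum(timeout_energy(t, timeout) for t in idle_periods)
    oracle = sum(oracle_energy(t) for t in idle_periods)
    return policy / oracle

random.seed(0)
trace = [random.expovariate(0.5) for _ in range(10_000)]  # synthetic idle-period trace
# A timeout equal to the break-even time is the classic 2-competitive policy.
print(competitive_ratio(trace, timeout=T_BE))
```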
Abstract:
In the four years that the MIT Mobile Robot Project has been in existence, we have built ten robots that focus research on various areas concerned with building intelligent systems. Towards this end, we have embarked on trying to build useful autonomous creatures that live and work in the real world. Many of the preconceived notions we entertained before we started building our robots turned out to be misguided. Some issues we thought would be hard have worked successfully from day one, while subsystems we imagined to be trivial have become tremendous time sinks. Oddly enough, one of our biggest failures has led to some of our favorite successes. This paper describes the changing paths our research has taken due to the lessons learned from the practical realities of building robots.
Abstract:
Timmis, J., & Neal, M. J. (2000). A resource limited artificial immune system for data analysis. In Proceedings of ES2000 - Research and Development of Intelligent Systems (pp. 19-32). Cambridge, U.K.: Springer.
Abstract:
Meng, Q., & Lee, M. (2005). Novelty and Habituation: the Driving Forces in Early Stage Learning for Developmental Robotics. In Wermter, S., Palm, G., & Elshaw, M. (Eds.), Biomimetic Neural Learning for Intelligent Robots: Intelligent Systems, Cognitive Robotics, and Neuroscience (pp. 315-332). Lecture Notes in Computer Science. Springer Berlin Heidelberg.
Abstract:
This paper describes an experiment developed to study the performance of virtual agent animated cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues by analyzing participant responses, either by gaze or by touch respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant's eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, although the differences between conditions were slightly smaller. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users' attention in complex scenes, such as computer games and digital transactions within public or social interaction contexts, by demonstrating the benefits of dynamic gaze and head cueing directly on users' eye movements and touch responses.
Abstract:
In the analysis of relations between the elements of two sets, it is usual to obtain different values depending on the point of view from which these relations are measured. The main goal of this paper is to model these situations by means of a generalization of L-fuzzy concept analysis called the L-fuzzy bicontext. We study the L-fuzzy concepts of these L-fuzzy bicontexts, obtaining some interesting results; specifically, we are able to classify the biconcepts of the L-fuzzy bicontext. Finally, a practical case is developed using this new tool.
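For readers unfamiliar with the underlying formalism: L-fuzzy concept analysis generalizes classical (crisp) formal concept analysis, in which two derivation operators pair sets of objects with the sets of attributes they share. Below is a minimal sketch of the crisp special case (L = {0, 1}) that the paper generalizes; the toy context and names are invented for illustration.

```python
# Crisp formal concept analysis: context (G, M, I) with objects G,
# attributes M, and incidence relation I as a set of (object, attribute) pairs.

def up(objects, I, M):
    """Attributes shared by every object in the given set."""
    return {m for m in M if all((g, m) in I for g in objects)}

def down(attrs, I, G):
    """Objects possessing every attribute in the given set."""
    return {g for g in G if all((g, m) in I for m in attrs)}

G = {"doc1", "doc2", "doc3"}
M = {"fuzzy", "logic", "robotics"}
I = {("doc1", "fuzzy"), ("doc1", "logic"),
     ("doc2", "fuzzy"), ("doc3", "robotics")}

A = {"doc1", "doc2"}
B = up(A, I, M)           # intent: {'fuzzy'}
# (A, B) is a formal concept exactly when A == down(up(A)):
print(down(B, I, G), B)   # extent {'doc1', 'doc2'}, intent {'fuzzy'}
```

In the L-fuzzy setting the incidence relation takes values in a lattice L rather than {0, 1}, and the bicontext of the paper further allows two such relations over the same pair of sets.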
Abstract:
The Bioinformatics Open Source Conference (BOSC) is organized by the Open Bioinformatics Foundation (OBF), a nonprofit group dedicated to promoting the practice and philosophy of open source software development and open science within the biological research community. Since its inception in 2000, BOSC has provided bioinformatics developers with a forum for communicating the results of their latest efforts to the wider research community. BOSC offers a focused environment for developers and users to interact and share ideas about standards; software development practices; practical techniques for solving bioinformatics problems; and approaches that promote open science and sharing of data, results, and software. BOSC is run as a two-day special interest group (SIG) before the annual Intelligent Systems in Molecular Biology (ISMB) conference. BOSC 2015 took place in Dublin, Ireland, and was attended by over 125 people, about half of whom were first-time attendees. Session topics included "Data Science;" "Standards and Interoperability;" "Open Science and Reproducibility;" "Translational Bioinformatics;" "Visualization;" and "Bioinformatics Open Source Project Updates". In addition to two keynote talks and dozens of shorter talks chosen from submitted abstracts, BOSC 2015 included a panel, titled "Open Source, Open Door: Increasing Diversity in the Bioinformatics Open Source Community," that provided an opportunity for open discussion about ways to increase the diversity of participants in BOSC in particular, and in open source bioinformatics in general. The complete program of BOSC 2015 is available online at http://www.open-bio.org/wiki/BOSC_2015_Schedule.
Abstract:
In this paper we propose a method for interpolation over a set of retrieved cases in the adaptation phase of the case-based reasoning cycle. The method has two advantages over traditional systems: first, it can predict "new" instances not yet present in the case base; second, it can predict solutions not present in the retrieval set. The method is a generalisation of Shepard's interpolation method, formulated as the minimisation of an error function defined in terms of distance metrics in the solution and problem spaces. We term the retrieval algorithm the Generalised Shepard Nearest Neighbour (GSNN) method. A novel aspect of GSNN is that it provides a general method for interpolation over nominal solution domains. The method is illustrated in the paper with reference to the Iris classification problem. It is evaluated on a simulated nominal-value test problem and on a benchmark case base from the travel domain. The algorithm is shown to outperform conventional nearest neighbour methods on these problems. Finally, GSNN is shown to improve in efficiency when used in conjunction with a diverse retrieval algorithm.
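The sketch below illustrates the interpolation idea described in the abstract: Shepard-style inverse-distance weights are computed in the problem space, and the prediction is the candidate solution minimising a weighted error in the solution space, which is what lets the method return nominal solutions absent from the retrieval set. Function names, the weighting exponent, and the toy label metric are assumptions for illustration, not the paper's exact formulation.

```python
# A minimal sketch in the spirit of GSNN (Generalised Shepard Nearest Neighbour).

def shepard_weights(query, cases, dist, p=2):
    """Inverse-distance weights over the retrieved cases."""
    ws = []
    for x, _ in cases:
        d = dist(query, x)
        if d == 0:
            return None          # exact match: reuse that case's solution
        ws.append(1.0 / d ** p)
    return ws

def gsnn_predict(query, cases, dist_prob, dist_sol, candidates):
    """Return the candidate minimising the weighted solution-space error.
    Candidates need not appear in the retrieval set; only a metric on
    the (possibly nominal) solution domain is required."""
    ws = shepard_weights(query, cases, dist_prob)
    if ws is None:
        return next(y for x, y in cases if dist_prob(query, x) == 0)
    def error(y):
        return sum(w * dist_sol(y, yi) ** 2 for w, (_, yi) in zip(ws, cases))
    return min(candidates, key=error)

# Toy nominal example: 1-D problems, class-label solutions, and a
# user-supplied label metric (an assumption for illustration).
cases = [(1.0, "setosa"), (2.0, "versicolor"), (5.0, "virginica")]
label_dist = lambda a, b: 0.0 if a == b else 1.0
print(gsnn_predict(1.4, cases, lambda a, b: abs(a - b), label_dist,
                   ["setosa", "versicolor", "virginica"]))  # -> 'setosa'
```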
Abstract:
This paper presents two new approaches for use in complete process monitoring. The first concerns the identification of nonlinear principal component models. This involves the application of linear principal component analysis (PCA) prior to the identification of a modified autoassociative neural network (AAN) as the required nonlinear PCA (NLPCA) model. The benefits are that (i) the number of the reduced set of linear principal components (PCs) is smaller than the number of recorded process variables, and (ii) the set of PCs is better conditioned, as redundant information is removed. The result is a new set of input data for a modified neural representation, referred to as a T2T network. The T2T NLPCA model is then used for complete process monitoring, involving fault detection, identification and isolation. The second approach introduces a new variable reconstruction algorithm developed from the T2T NLPCA model. Variable reconstruction can enhance the findings of the contribution charts still widely used in industry by reconstructing the outputs from faulty sensors to produce more accurate fault isolation. These ideas are illustrated using recorded industrial data relating to developing cracks in an industrial glass melter process. A comparison of linear and nonlinear models, together with the combined use of contribution charts and variable reconstruction, is presented.
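As a sketch of the linear preprocessing stage described above (reducing the recorded process variables to a smaller, better-conditioned set of principal components before a nonlinear autoassociative model is identified), here is a minimal PCA-plus-monitoring example. The data, dimensions, and the squared-prediction-error detail are illustrative assumptions, not the paper's T2T formulation.

```python
import numpy as np

def fit_pca(X, n_components):
    """Return the mean, loadings, and scores of a linear PCA model."""
    mu = X.mean(axis=0)
    Xc = X - mu
    # SVD of the centred data: rows of Vt are the loading vectors.
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    P = Vt[:n_components].T     # loadings (variables x PCs)
    T = Xc @ P                  # scores: the reduced input for the nonlinear model
    return mu, P, T

def spe(X, mu, P):
    """Squared prediction error (Q statistic), a common fault-detection index."""
    Xc = X - mu
    residual = Xc - (Xc @ P) @ P.T   # part of the data the PC model cannot explain
    return (residual ** 2).sum(axis=1)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))       # stand-in for recorded process variables
mu, P, T = fit_pca(X, n_components=3)
print(T.shape, spe(X, mu, P)[:5])    # the scores T would feed the NLPCA stage
```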
Abstract:
Dealing with uncertainty problems in intelligent systems has attracted a lot of attention in the AI community. Quite a few techniques have been proposed. Among them, the Dempster-Shafer theory of evidence (DS theory) has been widely appreciated. In DS theory, Dempster's combination rule plays a major role. However, it has been pointed out that the application domains of the rule are rather limited and the application of the theory sometimes gives unexpected results. We have previously explored the problem with Dempster's combination rule and proposed an alternative combination mechanism in generalized incidence calculus. In this paper we give a comprehensive comparison between generalized incidence calculus and the Dempster-Shafer theory of evidence. We first prove that these two theories have the same ability in representing evidence and combining DS-independent evidence. We then show that the new approach can deal with some dependent situations while Dempster's combination rule cannot. Various examples in the paper show the ways of using generalized incidence calculus in expert systems.
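To ground the discussion: Dempster's combination rule assigns to each focal set A the normalised product mass m(A) = (1 / (1 - K)) * sum over B, C with B intersect C = A of m1(B) * m2(C), where K is the total conflicting mass, and it is exactly this normalisation that produces counter-intuitive results under highly conflicting evidence. A minimal sketch, using Zadeh's classic counterexample; the data structures are an assumption for illustration:

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two independent mass functions (dicts from frozenset to mass)."""
    combined = {}
    conflict = 0.0
    for (A, a), (B, b) in product(m1.items(), m2.items()):
        inter = A & B
        if inter:
            combined[inter] = combined.get(inter, 0.0) + a * b
        else:
            conflict += a * b            # mass falling on the empty set
    if conflict >= 1.0:
        raise ValueError("totally conflicting evidence: rule undefined")
    # Normalise away the conflicting mass -- the contested step.
    return {A: v / (1.0 - conflict) for A, v in combined.items()}

# Zadeh-style high-conflict example: two sources almost fully disagree,
# yet all combined belief lands on the option both barely supported.
m1 = {frozenset({"a"}): 0.99, frozenset({"c"}): 0.01}
m2 = {frozenset({"b"}): 0.99, frozenset({"c"}): 0.01}
print(dempster_combine(m1, m2))  # -> {frozenset({'c'}): 1.0}
```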