956 results for Probabilistic robotics
Abstract:
RoboCup was created in 1996 by a group of Japanese, American, and European Artificial Intelligence and Robotics researchers with a formidable, visionary long-term challenge: “By 2050 a team of robot soccer players will beat the human World Cup champion team.” At that time, in the mid 90s, when there were very few effective mobile robots and the Honda P2 humanoid robot was presented to a stunned public for the first time, also in 1996, the RoboCup challenge, set as an adversarial game between teams of autonomous robots, was fascinating and exciting. RoboCup enthusiastically and concretely introduced three robot soccer leagues, namely “Simulation,” “Small-Size,” and “Middle-Size,” as we explain below, and organized its first competitions at IJCAI’97 in Nagoya with a surprising number of 100 participants [RC97]. It was the beginning of what became a continuously growing research community. RoboCup established itself as a structured organization (the RoboCup Federation, www.RoboCup.org). RoboCup fosters annual competition events, where the scientific challenges faced by the researchers are addressed in a setting that is also attractive to the general public, and the RoboCup events are among the most popular and best-attended in the research fields of AI and Robotics. RoboCup further includes a technical symposium with contributions relevant to the RoboCup competitions and, beyond them, to AI and robotics in general.
Abstract:
[Excerpt] The 11th RoboCup International Symposium was held during July 9–10, 2007 at the Fox Theatre in Atlanta, GA, immediately after the 2007 Soccer, Rescue and Junior Competitions. The RoboCup community has observed an increasing interest from other communities over the past few years, e.g., the robotics community. RoboCup is seen as a significant approach to the evaluation of newly developed methods for many difficult problems in robotics. Atlanta was also the location of a RoboCup@Space demonstration, which reflected the role of AI and robotics in space exploration. Prior to the symposium, space agencies had expressed an interest in cooperating with RoboCup. A first step in this direction was a successful demonstration at RoboCup 2007, which was accompanied by an invited talk given by a leading scientist from the Japan Aerospace Exploration Agency (JAXA). [...]
Abstract:
Many of our everyday tasks require the control of the serial order and the timing of component actions. Using the dynamic neural field (DNF) framework, we address the learning of representations that support the performance of precisely timed action sequences. In continuation of previous modeling work and robotics implementations, we specifically ask how feedback about executed actions might be used by the learning system to fine-tune a joint memory representation of the ordinal and the temporal structure that has been initially acquired by observation. The perceptual memory is represented by a self-stabilized, multi-bump activity pattern of neurons encoding instances of a sensory event (e.g., color, position or pitch) which guides sequence learning. The strength of the population representation of each event is a function of elapsed time since sequence onset. We propose and test in simulations a simple learning rule that detects a mismatch between the expected and realized timing of events and adapts the activation strengths in order to compensate for the movement time needed to achieve the desired effect. The simulation results show that the effector-specific memory representation can be robustly recalled. We discuss the impact of the fast, activation-based learning that the DNF framework provides for robotics applications.
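By way of illustration only (this is not the authors' DNF equations), the sketch below shows one way such a mismatch-driven adaptation could look, assuming a hypothetical vector of activation strengths and illustrative desired and realized event times; the learning rate and the linear update rule are assumptions made for the example.

```python
import numpy as np

def adapt_activation_strengths(u, t_desired, t_realized, eta=0.1):
    """Hypothetical mismatch-driven update of activation strengths.

    u          : activation strengths of the memory bumps, one per event
    t_desired  : desired times of the action effects
    t_realized : realized times observed during execution
    eta        : learning rate (illustrative value)

    If an effect arrives too late, the corresponding activation is raised
    so that recall is triggered earlier on the next trial, compensating
    for the movement time of the effector.
    """
    mismatch = t_realized - t_desired      # positive -> effect came too late
    return u + eta * mismatch              # raise activation of late events

# Toy usage: the second of three events arrives 0.2 s later than desired.
u = np.array([1.0, 0.8, 0.6])
u = adapt_activation_strengths(u,
                               t_desired=np.array([1.0, 2.0, 3.0]),
                               t_realized=np.array([1.0, 2.2, 3.0]))
```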
Abstract:
There is currently an increasing demand for robots able to acquire the sequential organization of tasks from social learning interactions with ordinary people. Interactive learning-by-demonstration and communication is a promising research topic in current robotics research. However, the efficient acquisition of generalized task representations that allow the robot to adapt to different users and contexts is a major challenge. In this paper, we present a dynamic neural field (DNF) model that is inspired by the hypothesis that the nervous system uses the off-line re-activation of initial memory traces to incrementally incorporate new information into structured knowledge. To achieve this, the model combines fast activation-based learning to robustly represent sequential information from single task demonstrations with slower, weight-based learning during internal simulations to establish longer-term associations between neural populations representing individual subtasks. The efficiency of the learning process is tested in an assembly paradigm in which the humanoid robot ARoS learns to construct a toy vehicle from its parts. User demonstrations with different serial orders together with the correction of initial prediction errors allow the robot to acquire generalized task knowledge about possible serial orders and the longer term dependencies between subgoals in very few social learning interactions. This success is shown in a joint action scenario in which ARoS uses the newly acquired assembly plan to construct the toy together with a human partner.
Abstract:
Master's dissertation in Industrial Electronics and Computers Engineering (specialization in Robotics)
Abstract:
Studies have shown that the age of 12 has been established as the reference age for global monitoring of caries, for international comparisons and monitoring of disease trends. The aim was to evaluate the prevalence of dental caries, fluorosis and periodontal conditions and their relation with socioeconomic factors among twelve-year-old schoolchildren in the city of Manaus, AM. The study was conducted in 2008 with a probabilistic sample of 661 children, 609 from public and 52 from private schools. Dental caries, periodontal condition and dental fluorosis were evaluated. In order to obtain the socioeconomic classification of each child (high, upper middle, middle, lower middle, low and lower low socioeconomic classes), the guardians were given a questionnaire. The mean number of decayed, missing, and filled teeth (DMFT) found at age twelve was 1.89. The presence of dental calculus was the most severe periodontal condition detected, observed in 39.48% of the children. In relation to dental fluorosis, there was a low prevalence among the children examined, i.e., the more pronounced lines of opacity only occasionally merged, forming small white areas. The study showed a significant association, at the 5% level, of social class with dental caries and periodontal condition. Schoolchildren of Manaus show a low mean DMFT and low fluorosis, but a high occurrence of gingival bleeding.
Abstract:
Integrated master's dissertation in Engineering and Management of Information Systems
Abstract:
In the trend towards tolerating hardware unreliability, accuracy is exchanged for cost savings. Running on less reliable machines, functionally correct code becomes risky and one needs to know how risk propagates so as to mitigate it. Risk estimation, however, seems to live outside the average programmer's technical competence and core practice. In this paper we propose that program design by source-to-source transformation be risk-aware, in the sense of making probabilistic faults visible and supporting equational reasoning on the probabilistic behaviour of programs caused by faults. This reasoning is carried out in a linear algebra extension to the standard, à la Bird-de Moor, algebra of programming. This paper studies, in particular, the propagation of faults across standard program transformation techniques known as tupling and fusion, enabling the fault of the whole to be expressed in terms of the faults of its parts.
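To make the last sentence concrete, the back-of-the-envelope sketch below expresses the fault probability of a composed (fused) program in terms of the fault probabilities of its parts, under the simplifying assumption of independent faults; it is not the paper's linear-algebra calculus, only a numerical illustration of the idea.

```python
from functools import reduce

def correct_probability(component_fault_probs):
    """Probability that a pipeline of components all behave correctly,
    assuming independent faults (a simplifying assumption, not the
    paper's linear-algebra semantics)."""
    return reduce(lambda acc, p: acc * (1.0 - p), component_fault_probs, 1.0)

# Fusing f after g: the fused program inherits the faults of both parts.
p_f, p_g = 1e-3, 2e-3
p_fused_fault = 1.0 - correct_probability([p_f, p_g])
print(f"fault probability of the fused program: {p_fused_fault:.6f}")
```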
Abstract:
Software reconfigurability became increasingly relevant to the architectural process due to the growing dependency of modern societies on reliable and adaptable systems. Such systems are supposed to adapt themselves to surrounding environmental changes with minimal service disruption, if any. This paper introduces an engine that statically applies reconfigurations to (formal) models of software architectures. Reconfigurations are specified using a domain-specific language, ReCooPLa, which targets the manipulation of software coordination structures, typically used in service-oriented architectures (SOA). The engine is responsible for the compilation of ReCooPLa instances and their application to the relevant coordination structures. The resulting configurations are amenable to formal analysis of qualitative and quantitative (probabilistic) properties.
Abstract:
Objective: To investigate the occurrence of dual diagnosis in users of legal and illegal drugs. Methods: This is an analytical, cross-sectional study with a quantitative approach and non-probabilistic intentional sampling, carried out in two centers for drug addiction treatment by means of individual interviews. A sociodemographic questionnaire, the Alcohol, Smoking and Substance Involvement Screening Test (ASSIST) and the Mini-International Neuropsychiatric Interview (MINI) were used. Results: One hundred and ten volunteers were divided into abstinent users (group 1), alcoholics (group 2) and users of alcohol and illicit drugs (group 3). The substances used were alcohol, tobacco, crack and marijuana. A higher presence of dual diagnosis was observed in group 3 (71.8%), which decreased in group 2 (60%), while 37.1% of drug-abstinent users had a psychiatric disorder. Dual diagnosis was associated with the risk of suicide, suicide attempts and the practice of infractions. Crack consumption was associated with the occurrence of major depressive episodes and antisocial personality disorder. Conclusion: It was concluded that illicit drug users had a higher presence of dual diagnosis, showing the severity of this clinical condition. It is considered essential that this clinical reality be included in intervention strategies in order to decrease the negative effects of consumption of these substances and provide a better quality of life for these people.
Abstract:
Integrated master's dissertation in Civil Engineering
Abstract:
First published online: December 16, 2014.
Abstract:
Telecommunications and network technology is now the driving force that ensures the continued progress of world civilization. The design of new network infrastructures and the expansion of existing ones require improving the quality of service (QoS). Modeling the probabilistic and time characteristics of telecommunication systems is an integral part of modern algorithms for the administration of quality of service. At present, besides simulation models, analytical models in the form of queuing systems and queuing networks are widely used for the assessment of quality parameters. Because of the limited mathematical tools of these classes of models, the corresponding estimates of quality-of-service parameters are inadequate by definition, especially for models of telecommunication systems with packet transmission of real-time multimedia traffic.
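As a concrete example of the analytical queuing models referred to above (whose assumptions are exactly what breaks down for real-time multimedia traffic), the sketch below evaluates the textbook M/M/1 formulas; the arrival and service rates are made-up example values.

```python
def mm1_metrics(lam, mu):
    """Standard M/M/1 queue formulas (Poisson arrivals, exponential service).

    lam : arrival rate (packets/s)
    mu  : service rate (packets/s); must exceed lam for stability
    """
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu                    # utilization
    w = 1.0 / (mu - lam)              # mean time in system (s)
    l = rho / (1.0 - rho)             # mean number of packets in system
    return rho, w, l

rho, w, l = mm1_metrics(lam=800.0, mu=1000.0)   # illustrative rates
print(f"utilization={rho:.2f}, mean delay={w * 1000:.1f} ms, mean occupancy={l:.1f}")
```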
Abstract:
We analyze the classical Bertrand model when consumers exhibit some strategic behavior in deciding from which seller they will buy. We use two related but different tools. Both consider a probabilistic learning (or evolutionary) mechanism, and in both of them consumers' behavior influences the competition between the sellers. The results obtained show that, in general, developing some sort of loyalty is a good strategy for the buyers, as it works in their best interest. First, we consider a learning procedure described by a deterministic dynamic system and, using strong simplifying assumptions, we can produce a description of the process behavior. Second, we use finite automata to represent the strategies played by the agents and an adaptive process based on genetic algorithms to simulate the stochastic process of learning. By doing so we can relax some of the strong assumptions used in the first approach and still obtain the same basic results. It is suggested that the limitations of the first approach (analytical) provide a good motivation for the second approach (agent-based). Indeed, although both approaches address the same problem, the use of agent-based computational techniques allows us to relax hypotheses and overcome the limitations of the analytical approach.
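A minimal agent-based sketch of the loyalty effect described above, assuming a toy reinforcement rule in which buyers build propensities towards the seller that charges the lower price; it stands in for, but is not, the deterministic dynamics or the genetic-algorithm machinery used in the paper.

```python
import random

def simulate(prices, n_buyers=100, rounds=200, reinforcement=0.2):
    """Buyers choose a seller in proportion to learned propensities and
    reinforce the cheaper seller after each purchase (toy loyalty rule)."""
    propensities = [[1.0, 1.0] for _ in range(n_buyers)]  # one pair per buyer
    cheaper = 0 if prices[0] <= prices[1] else 1
    for _ in range(rounds):
        for p in propensities:
            chosen = 0 if random.random() < p[0] / (p[0] + p[1]) else 1
            if chosen == cheaper:              # good experience: loyalty grows
                p[chosen] += reinforcement
    # average probability of buying from the cheaper seller after learning
    return sum(p[cheaper] / (p[0] + p[1]) for p in propensities) / n_buyers

print(f"loyalty to the cheaper seller after learning: {simulate([1.0, 1.2]):.2f}")
```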
Abstract:
In this paper, a new class of generalized backward doubly stochastic differential equations is investigated. This class involves an integral with respect to an adapted continuous increasing process. A probabilistic representation for viscosity solutions of semi-linear stochastic partial differential equations with a Neumann boundary condition is given.
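For orientation, generalized backward doubly stochastic differential equations of this kind are commonly written along the following lines, where $A$ is the adapted continuous increasing process, $W$ a forward and $B$ a backward Brownian motion; the exact coefficients and conditions are those of the paper, so this display is only a hedged sketch of the standard shape found in the literature.

```latex
Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\,ds
      \;+\; \int_t^T \varphi(s, Y_s)\,dA_s
      \;+\; \int_t^T g(s, Y_s, Z_s)\,\overleftarrow{dB_s}
      \;-\; \int_t^T Z_s\,dW_s , \qquad 0 \le t \le T .
```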