902 results for Autonomous robot
Abstract:
A dynamical characterization of the stability boundary for a fairly large class of nonlinear autonomous dynamical systems is developed in this paper. This characterization generalizes the existing results by allowing the existence of saddle-node equilibrium points on the stability boundary. The stability boundary of an asymptotically stable equilibrium point is shown to consist of the stable manifolds of the hyperbolic equilibrium points on the stability boundary and the stable, stable center and center manifolds of the saddle-node equilibrium points on the stability boundary.
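As a schematic rendering of this characterization (the notation below is assumed for illustration, not taken verbatim from the paper), with x_i the hyperbolic equilibria and p_j the saddle-node equilibria lying on the stability boundary of the asymptotically stable equilibrium x_s:

```latex
% Schematic decomposition of the stability boundary \partial A(x_s);
% W^{s}, W^{s,c}, W^{c} denote stable, stable-center and center manifolds.
\partial A(x_s) \;=\;
  \bigcup_i W^{s}(x_i)
  \;\cup\;
  \bigcup_j \Big( W^{s}(p_j) \,\cup\, W^{s,c}(p_j) \,\cup\, W^{c}(p_j) \Big)
```

Here the first union runs over the hyperbolic equilibria on the boundary and the second over the saddle-node equilibria on the boundary, matching the statement above.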
Abstract:
This paper describes a logic-based formalism for qualitative spatial reasoning with cast shadows (Perceptual Qualitative Relations on Shadows, or PQRS) and presents results of a mobile robot qualitative self-localisation experiment using this formalism. Shadow detection was accomplished by mapping the images from the robot's monocular colour camera into the HSV colour space and then thresholding on the V dimension. We present results of self-localisation using two methods for obtaining the threshold automatically: in one method the images are segmented according to their grey-scale histograms; in the other, the threshold is set according to a prediction about the robot's location, based upon a qualitative spatial reasoning theory about shadows. This theory-driven threshold search and the qualitative self-localisation procedure are the main contributions of the present research. To the best of our knowledge, this is the first work that uses qualitative spatial representations both to perform robot self-localisation and to calibrate a robot's interpretation of its perceptual input.
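A minimal sketch of the image-processing step described above, i.e. histogram-based automatic thresholding on the V channel of HSV; OpenCV is assumed here, and the function and file names are illustrative, not taken from the paper:

```python
# Minimal sketch of shadow detection by thresholding the V channel of HSV,
# with the threshold chosen automatically from the intensity histogram (Otsu).
# OpenCV usage is an assumption; the paper does not specify its implementation.
import cv2
import numpy as np

def detect_shadow_mask(bgr_image: np.ndarray) -> np.ndarray:
    """Return a binary mask marking dark (candidate shadow) pixels."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    v_channel = hsv[:, :, 2]
    # Otsu's method picks the threshold from the grey-level histogram;
    # pixels below the threshold are treated as candidate shadow regions.
    _, mask = cv2.threshold(v_channel, 0, 255,
                            cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    return mask

# Example usage (path is hypothetical):
# frame = cv2.imread("robot_camera_frame.png")
# shadow_mask = detect_shadow_mask(frame)
```

The theory-driven alternative mentioned in the abstract would replace Otsu's histogram criterion with a threshold predicted from the qualitative model of the robot's location.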
Abstract:
Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
Held on 24 May in the Computer Science and Mathematics Building of the ULPGC
Abstract:
Master's Degree in Intelligent Systems and Numerical Applications in Engineering (SIANI)
Abstract:
Detecting people is a key capability for robots that operate in populated environments. In this paper, we have adopted a hierarchical approach that combines classifiers created using supervised learning in order to identify whether or not a person is in the view-scope of the robot. Our approach makes use of vision, depth and thermal sensors mounted on top of a mobile platform.
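One way such a hierarchical combination of per-sensor classifiers can be organized is as a cascade; the sketch below is an assumption for illustration (ordering, thresholds, and classifier types are not taken from the paper):

```python
# Illustrative cascade of per-sensor person classifiers (e.g. thermal, depth, vision).
# The ordering, thresholds, and classifier types are assumptions for this sketch.
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class Stage:
    name: str
    classify: Callable[[object], float]  # returns a confidence in [0, 1]
    threshold: float                     # stage-specific acceptance threshold

def person_in_view(observation: object, stages: Sequence[Stage]) -> bool:
    """Hierarchical decision: every stage must accept for a positive answer."""
    for stage in stages:
        if stage.classify(observation) < stage.threshold:
            return False  # rejected early by an earlier (typically cheaper) stage
    return True

# Example wiring with placeholder classifiers (hypothetical names):
# stages = [Stage("thermal_blob", thermal_clf, 0.4),
#           Stage("depth_shape", depth_clf, 0.5),
#           Stage("visual_detector", vision_clf, 0.6)]
# found = person_in_view(current_frame_bundle, stages)
```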
Abstract:
This paper describes some simple but useful computer vision techniques for human-robot interaction. First, an omnidirectional camera setting is described that can detect people in the surroundings of the robot, giving their angular positions and a rough estimate of the distance. The device can be easily built with inexpensive components. Second, we comment on a color-based face detection technique that can alleviate skin-color false positives. Third, a simple head nod and shake detector is described, suitable for detecting affirmative/negative, approval/disapproval, and understanding/disbelief head gestures.
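As an illustration of the kind of nod/shake detector described (the paper's actual method is not reproduced; the rule and thresholds below are assumptions), one can compare the accumulated vertical versus horizontal motion of the tracked face centre:

```python
# Toy head nod/shake detector: classifies a short sequence of tracked face-centre
# positions by whether vertical or horizontal oscillation dominates.
# The displacement threshold and window length are assumed values.
from typing import List, Tuple

def classify_head_gesture(centres: List[Tuple[float, float]],
                          min_motion: float = 15.0) -> str:
    """Return 'nod', 'shake', or 'none' for a window of (x, y) face positions."""
    if len(centres) < 2:
        return "none"
    dx = sum(abs(centres[i + 1][0] - centres[i][0]) for i in range(len(centres) - 1))
    dy = sum(abs(centres[i + 1][1] - centres[i][1]) for i in range(len(centres) - 1))
    if max(dx, dy) < min_motion:
        return "none"          # too little movement to call a gesture
    return "nod" if dy > dx else "shake"

# Example: mostly vertical oscillation of the face centre -> 'nod'
# print(classify_head_gesture([(100, 80), (100, 95), (101, 78), (100, 96)]))
```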
Abstract:
In the collective imagination, a robot is a human-like machine, like the androids of science fiction. However, the robots you will encounter most frequently are machines that do work that is too dangerous, boring or onerous. Most of the robots in the world are of this type; they can be found in the automotive, medical, manufacturing and space industries. A robot is therefore a system that contains sensors, control systems, manipulators, power supplies and software, all working together to perform a task. The development and use of such systems is an active area of research, and one of the main problems is the development of interaction skills with the surrounding environment, which include the ability to grasp objects. To perform this task, the robot needs to sense the environment and acquire information about the object, the physical attributes that may influence a grasp. Humans can solve this grasping problem easily thanks to their past experience, which is why many researchers approach it from a machine learning perspective, finding a grasp for an object using information about already known objects. But humans can select the best grasp from a vast repertoire, considering not only the physical attributes of the object to grasp but also the effect they want to obtain. This is why, in our case, the study of robot manipulation focuses on grasping and on integrating symbolic tasks with data gathered through sensors. The learning model is based on a Bayesian network that encodes the statistical dependencies between the data collected by the sensors and the symbolic task. This data representation has several advantages: it takes into account the uncertainty of the real world, allowing sensor noise to be handled; it encodes a notion of causality; and it provides a unified network for learning. Since the network currently implemented is based on human expert knowledge, it is very interesting to implement an automated method to learn its structure, as more tasks and object features may be introduced in the future and a complex network designed only from human expert knowledge can become unreliable. Since structure learning algorithms present some weaknesses, the goal of this thesis is to analyse the real data used in the network modelled by the human expert, implement a feasible structure learning approach, and compare the results with the network designed by the expert in order to possibly enhance it.
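A minimal sketch of score-based Bayesian-network structure learning of the kind discussed above; pgmpy, the toy data, and the column names are assumptions for illustration, not the thesis's actual code or dataset:

```python
# Score-based structure learning sketch: hill-climbing search over DAGs with a
# BIC score, using pgmpy (assumed library) on hypothetical discretized
# sensor/task observations.
import pandas as pd
from pgmpy.estimators import HillClimbSearch, BicScore

# Hypothetical discretized observations: object features, grasp type, outcome.
data = pd.DataFrame({
    "object_size":  ["small", "large", "small", "large", "small"],
    "object_shape": ["box",   "cyl",   "box",   "cyl",   "box"],
    "grasp_type":   ["pinch", "power", "pinch", "power", "pinch"],
    "task_success": ["yes",   "yes",   "no",    "yes",   "yes"],
})

# Hill climbing with BIC is one common structure-learning approach; it is used
# here only to illustrate the idea, not as the thesis's chosen algorithm.
search = HillClimbSearch(data)
learned_dag = search.estimate(scoring_method=BicScore(data))
print(sorted(learned_dag.edges()))
```

The learned edge set could then be compared against the expert-designed network, which is the kind of comparison the thesis sets out to perform.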
Abstract:
The thesis describes the operation of the Da Vinci S surgical robot and illustrates its architecture. It then analyses its use in otorhinolaryngology, showing the advantages and limitations of this innovative technology.
Abstract:
Eukaryotic ribosomal DNA constitutes a multigene family organized in a cluster called the nucleolar organizer region (NOR); this region is usually composed of hundreds to thousands of tandemly repeated units. Ribosomal genes, being repeated sequences, evolve following the typical pattern of concerted evolution. The autonomous retroelement R2 inserts into the 28S ribosomal gene, producing defective 28S rDNA genes. Being a retrotransposon, R2 multiplies its copy number in the genome through a "copy and paste" mechanism called target-primed reverse transcription: the element's mRNA is reverse-transcribed into DNA, which is then integrated at the target site. Since reverse transcription can be interrupted while integration is carried out anyway, truncated copies of the element are also present in the genome; the study of these truncated variants is a tool for examining the element's activity. R2 phylogeny is, in general, not consistent with that of its hosts, with some exceptions (e.g. Drosophila spp. and Reticulitermes spp.); moreover, R2 is absent in some species (Fugu rubripes, human, mouse, etc.), while other species carry several R2 lineages in their genome (the turtle Mauremys reevesii, the Japanese beetle Popillia japonica, etc.). The R2 elements presented here were isolated from four species of notostracan branchiopods and two species of stick insects, whose reproductive strategies range from strict gonochorism to unisexuality. Sequencing data show that in Triops cancriformis (a Spanish gonochoric population), in Lepidurus arcticus (two putatively unisexual populations from Iceland) and in Bacillus rossius (a gonochoric population from Capalbio) the R2 elements are complete and encode functional proteins, reflecting the general features of this family of transposable elements. On the other hand, R2 from the Italian and Austrian populations of T. cancriformis (unisexual and hermaphroditic, respectively), from Lepidurus lubbocki (two elements within the same Italian population, gonochoric but with non-functional males) and from Bacillus grandii grandii (a gonochoric population from Ponte Manghisi) have sequences that encode incomplete or non-functional proteins, in which only part of the characteristic domains can be recognized. In Lepidurus couesii (Italian gonochoric populations) different elements were found, as in L. lubbocki, and sequencing is still in progress. Two hypotheses are given to explain the inconsistency between the R2 and host phylogenies: vertical inheritance of the element followed by extinction/diversification, or horizontal transmission. My data support previous studies indicating vertical transmission as the most likely explanation; nevertheless, horizontal transfer events cannot be excluded. I also studied the element's activity in Spanish populations of T. cancriformis, in L. lubbocki, in L. arcticus, and in gonochoric and parthenogenetic populations of B. rossius. In the gonochoric populations of T. cancriformis and B. rossius, each individual has its own private set of truncated variants. The situation is the opposite for the remaining hermaphroditic/parthenogenetic species and populations, where all individuals share, in the samples analysed so far, the majority of variants. This is very interesting, because it is not concordant with Muller's ratchet theory, which hypothesizes that parthenogenetic populations are either devoid of transposable elements or overloaded with them. My data suggest a possible epigenetic mechanism that can block retrotransposon activity, so that deleterious mutations do not accumulate.
Abstract:
One of the main research areas in artificial intelligence concerns the realization of agents (in particular, robots) able to help or replace humans in carrying out certain activities. To this end, two different design methods can be followed: manual design and automatic design. The latter may be preferred to the former in contexts where requirements such as flexibility and adaptation must be taken into account, which are often essential for performing non-trivial tasks in real-world settings. Automatic design relies on a model with which to represent the agent's behaviour and on a search (or learning) technique that iteratively modifies the model so as to make it as well suited as possible to the task at hand. In this work, the model used to represent the robot's behaviour is a Boolean network (also known as a Kauffman network). This model was chosen because its simple structure makes it easy to study the complex dynamics that nevertheless arise within it. Moreover, the recent literature shows that network models, such as artificial neural networks, have proved effective for programming robots. The methodology for evolving this model relies on metaheuristic search techniques able to find good solutions in limited time despite large search spaces. Previous work has already demonstrated the applicability of the methodology and investigated it on a single robot. The aim of this work is to provide a proof of principle for a set of robots, opening new avenues for design in swarm robotics. In this scenario, simple autonomous agents, by interacting with one another, give rise to an emergent coordinated behaviour and accomplish tasks impossible for a single unit. This work also provides useful and interesting opportunities for studying the interactions between Boolean networks: each robot is controlled by a Boolean network that determines its output as a function of its own internal configuration as well as of the inputs received from neighbouring robots. We define a task in which the swarm must discriminate between two different patterns on the floor of the arena using only locally exchanged information. After a first series of preliminary experiments, which allowed us to identify the parameters and the best search algorithm, we simplified the problem instance in order to better investigate the factors that can affect performance. In this way we identified a particular combination of information which, when exchanged locally among robots, improves performance. This hypothesis was then confirmed by applying the result to a harder instance of the problem. The work concludes by suggesting new tools for the study of emergent phenomena in settings where Boolean networks interact with one another.
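A minimal sketch of a synchronously updated random Boolean (Kauffman) network of the kind used as a robot controller above; the network size, connectivity, and the sensor/actuator wiring are assumptions for illustration, not the thesis's configuration:

```python
# Toy random Boolean (Kauffman) network with synchronous update.
# N nodes, each reading K inputs through a random Boolean function (truth table).
# Sizes and the sensor/actuator mapping below are illustrative assumptions.
import random

class BooleanNetwork:
    def __init__(self, n_nodes: int = 20, k: int = 3, seed: int = 0):
        rng = random.Random(seed)
        self.inputs = [rng.sample(range(n_nodes), k) for _ in range(n_nodes)]
        # One truth-table entry per input combination, chosen at random.
        self.tables = [[rng.randint(0, 1) for _ in range(2 ** k)]
                       for _ in range(n_nodes)]
        self.state = [rng.randint(0, 1) for _ in range(n_nodes)]

    def step(self, sensor_bits=None):
        """Synchronous update; optionally clamp the first nodes to sensor values."""
        if sensor_bits:
            self.state[:len(sensor_bits)] = sensor_bits
        new_state = []
        for node in range(len(self.state)):
            idx = 0
            for bit_pos, src in enumerate(self.inputs[node]):
                idx |= self.state[src] << bit_pos
            new_state.append(self.tables[node][idx])
        self.state = new_state
        return self.state

# Example: feed two (hypothetical) ground-sensor bits, read the last node as output.
bn = BooleanNetwork()
for _ in range(5):
    bn.step(sensor_bits=[1, 0])
print("actuator bit:", bn.state[-1])
```

In the evolutionary setting described above, the truth tables would be the object modified by the metaheuristic search, and some input nodes would be driven by messages received from neighbouring robots.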
Abstract:
Visual tracking is the problem of estimating variables related to a target from a video sequence depicting that target. Visual tracking is key to the automation of many tasks, such as visual surveillance, autonomous robot or vehicle navigation, and automatic video indexing in multimedia databases. Despite many years of research, long-term tracking of generic targets in real-world scenarios has yet to be achieved. The main contribution of this thesis is the definition of effective algorithms that can foster a general solution to visual tracking by letting the tracker adapt to changing working conditions. In particular, we propose to adapt two crucial components of visual trackers: the transition model and the appearance model. The less general but widespread case of tracking from a static camera is also considered, and a novel change detection algorithm robust to sudden illumination changes is proposed. Based on this, a principled adaptive framework to model the interaction between Bayesian change detection and recursive Bayesian trackers is introduced. Finally, the problem of automatic tracker initialization is considered. In particular, a novel solution for the categorization of 3D data is presented. The category recognition algorithm is based on a novel 3D descriptor that is shown to achieve state-of-the-art performance in several surface-matching applications.
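As a concrete illustration of the recursive Bayesian tracking components mentioned above (the thesis's adaptive models are not reproduced here), a constant-velocity Kalman filter is one standard pairing of a transition model with a measurement update; the noise levels and measurements below are assumed values:

```python
# Minimal constant-velocity Kalman filter for 1-D target position tracking.
# A standard recursive Bayesian tracker, shown purely as an illustration;
# the transition model, noise levels, and measurements are assumptions.
import numpy as np

dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])   # transition model: position, velocity
H = np.array([[1.0, 0.0]])              # we only measure position
Q = 0.01 * np.eye(2)                    # process noise covariance (assumed)
R = np.array([[0.5]])                   # measurement noise covariance (assumed)

x = np.array([[0.0], [0.0]])            # state estimate [position; velocity]
P = np.eye(2)                           # estimate covariance

for z in [0.9, 2.1, 2.9, 4.2, 5.1]:     # noisy position measurements (made up)
    # Predict with the transition model.
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with the new measurement.
    y = np.array([[z]]) - H @ x
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    print(f"measurement={z:.1f}  est_pos={x[0, 0]:.2f}  est_vel={x[1, 0]:.2f}")
```

Adapting the transition model, in the spirit of the thesis, would amount to adjusting F or Q online as the target's motion regime changes.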