999 results for Robotic Navigation


Relevance:

100.00%

Publisher:

Abstract:

SLAM is a popular task used by robots and autonomous vehicles to build a map of an unknown environment and, at the same time, to determine their location within that map. This paper describes a SLAM-based, probabilistic robotic system able to learn the essential features of different parts of its environment. Previous SLAM implementations have had computational complexities ranging from O(N log N) to O(N²), where N is the number of map features. Unlike these methods, our approach reduces the computational complexity to O(N) by using a model to fuse the information from the sensors after applying the Bayesian paradigm. Once the training process is completed, the robot identifies and locates the areas that potentially match sections it has previously learned. After training, the robot navigates and extracts a three-dimensional map of the environment using a single laser sensor, thereby perceiving the different sections of its world. In addition, to make our system usable on a low-cost robot, we use low-complexity algorithms that can be easily implemented on embedded processors or microcontrollers.
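The O(N) complexity follows from updating each map feature independently with a Bayesian fusion step. The sketch below illustrates that idea with a simple Gaussian measurement model; the fusion rule, the variance values and the feature representation are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def fuse_gaussian(prior_mean, prior_var, meas_mean, meas_var):
    """Bayesian fusion of a Gaussian prior with a Gaussian measurement."""
    k = prior_var / (prior_var + meas_var)      # Kalman-style gain
    mean = prior_mean + k * (meas_mean - prior_mean)
    var = (1.0 - k) * prior_var
    return mean, var

def update_map(features, measurements, meas_var=0.04):
    """One O(N) update pass: each map feature is fused independently,
    so the cost is linear in the number of features N."""
    for f, z in zip(features, measurements):
        f["mean"], f["var"] = fuse_gaussian(f["mean"], f["var"], z, meas_var)
    return features

# two hypothetical map features with uncertain positions
features = [{"mean": 1.0, "var": 0.5}, {"mean": 2.0, "var": 0.5}]
features = update_map(features, [1.2, 1.9])
```

Each fused feature moves toward its measurement while its variance shrinks, and no feature's update depends on any other, which is what keeps the pass linear.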

Relevance:

60.00%

Publisher:

Abstract:

Nowadays, the latest generation of computers delivers enough performance to build computationally expensive computer vision applications for mobile robotics. Building a map of the environment is a common robotic task and an essential prerequisite for robots to move through those environments. Traditionally, mobile robots have combined several sensors based on different technologies: lasers, sonars and contact sensors have typically been used in mobile robotic architectures. However, color cameras are an important sensor because we want robots to use the same information that humans use to sense and move through different environments. Color cameras are cheap and flexible, but much work is needed to give robots sufficient visual understanding of the scenes they observe. Computer vision algorithms are computationally complex, but robots now have access to different and powerful architectures that can be used for mobile robotics purposes. The advent of low-cost RGB-D sensors such as the Microsoft Kinect, which provide colored 3D point clouds at high frame rates, has made computer vision even more relevant to the mobile robotics field. The combination of visual and 3D data allows systems to apply both computer vision and 3D processing and therefore to be aware of more details of the surrounding environment. The research described in this thesis was motivated by the need for scene mapping. Awareness of the surrounding environment is a key feature in many mobile robotics applications, from simple robotic navigation to complex surveillance. In addition, acquiring a 3D model of a scene is useful in many areas, such as video game scene modeling, where well-known places are reconstructed and added to game systems, or advertising, where, once the 3D model of a room is available, the system can add furniture using augmented reality techniques.
In this thesis we perform an experimental study of state-of-the-art registration methods to find which one best fits our scene mapping purposes. Different methods are tested and analyzed on scenes with different distributions of visual and geometric appearance. In addition, this thesis proposes two methods for the compression and representation of 3D maps. Our 3D representation proposal is based on the Growing Neural Gas (GNG) method. This self-organizing map (SOM) has been successfully used for clustering, pattern recognition and topology representation of various kinds of data. Until now, self-organizing maps have been computed primarily offline, and their application to 3D data has mainly focused on noise-free models without considering time constraints. Self-organizing neural models have the ability to provide a good representation of the input space. In particular, GNG is a suitable model because of its flexibility, rapid adaptation and excellent quality of representation. However, this type of learning is time consuming, especially for high-dimensional input data. Since real applications often work under time constraints, the learning process must be adapted so that it completes within a predefined time. This thesis proposes an implementation that leverages the computing power of modern GPUs through the paradigm known as General-Purpose Computing on Graphics Processing Units (GPGPU). Our proposed geometric 3D compression method reduces the 3D information by using plane detection as the basic structure for compressing the data. Our target environments are man-made, so many points belong to planar surfaces, and the proposed method achieves good compression results in such scenarios. The detected and compressed planes can also be used in other applications, such as surface reconstruction or plane-based registration algorithms.
Finally, we also demonstrate the effectiveness of GPU technologies with a high-performance implementation of a common CAD/CAM technique called Virtual Digitizing.
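The adaptation step at the core of GNG can be sketched as follows. This minimal version keeps only the winner/neighbour update and edge creation, omitting node insertion, error accumulation and edge ageing; the learning rates and toy data are illustrative assumptions, not the thesis's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def gng_step(nodes, edges, sample, eps_w=0.1, eps_n=0.01):
    """One GNG adaptation step: connect the two nearest nodes and move
    the winner (strongly) and its topological neighbours (weakly)
    toward the input sample."""
    d = np.linalg.norm(nodes - sample, axis=1)
    s1, s2 = np.argsort(d)[:2]                    # two nearest nodes
    edges.add(tuple(sorted((int(s1), int(s2)))))  # create/keep their edge
    nodes[s1] += eps_w * (sample - nodes[s1])     # winner adapts most
    for a, b in edges:                            # neighbours adapt less
        if a == s1:
            nodes[b] += eps_n * (sample - nodes[b])
        elif b == s1:
            nodes[a] += eps_n * (sample - nodes[a])
    return nodes, edges

# toy input: a planar patch (z = 0) sampled as a 3D point cloud
cloud = rng.uniform(0.0, 1.0, size=(500, 3))
cloud[:, 2] = 0.0
nodes = rng.uniform(0.0, 1.0, size=(20, 3))    # random initial codebook
z0 = float(np.abs(nodes[:, 2]).mean())         # initial distance to the plane
edges = set()
for p in cloud:
    nodes, edges = gng_step(nodes, edges, p)
```

After the pass the codebook has drifted toward the planar patch while a topology of edges forms between neighbouring nodes, which is the behaviour GNG exploits for compact, topology-preserving 3D representation.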

Relevance:

60.00%

Publisher:

Abstract:

Depth estimation from images has long been regarded as a preferable alternative to expensive and intrusive active sensors such as LiDAR and ToF. The topic has attracted an increasingly wide audience thanks to the great number of application domains, such as autonomous driving, robotic navigation and 3D reconstruction. Among the various techniques employed for depth estimation, stereo matching is one of the most widespread, owing to its robustness, speed and simplicity of setup. Recent developments have been aided by the abundance of annotated stereo images, which has given deep learning the opportunity to thrive in a research area where deep networks can reach state-of-the-art sub-pixel precision in most cases. Despite these findings, stereo matching still poses many open challenges, two of them being finding pixel correspondences in the presence of objects that exhibit non-Lambertian behaviour and processing high-resolution images. Recently, a novel dataset named Booster, which contains high-resolution stereo pairs featuring a large collection of labeled non-Lambertian objects, has been released. That work showed that training state-of-the-art deep neural networks on such data improves their generalization capabilities in the presence of non-Lambertian surfaces. While this is a further step toward tackling the aforementioned challenge, Booster includes a rather small number of annotated images and thus cannot satisfy the intensive training requirements of deep learning. This thesis investigates novel view synthesis techniques to augment the Booster dataset, with the ultimate goal of improving stereo matching reliability on high-resolution images that display non-Lambertian surfaces.
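The correspondence search at the heart of stereo matching can be illustrated with a naive SSD block matcher; modern deep networks learn this matching cost instead of hand-crafting it, but the underlying search along the epipolar line is the same. Window size, disparity range and the synthetic image pair are arbitrary choices for this sketch.

```python
import numpy as np

def block_match(left, right, max_disp=8, win=3):
    """Naive SSD block matching: for each pixel of the left image, search
    along the same row of the right image for the best-matching window
    and record its horizontal offset (the disparity)."""
    h, w = left.shape
    pad = win // 2
    disp = np.zeros((h, w), dtype=np.int32)
    L = np.pad(left.astype(np.float64), pad)
    R = np.pad(right.astype(np.float64), pad)
    for y in range(h):
        for x in range(w):
            patch = L[y:y + win, x:x + win]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x) + 1):   # stay inside the image
                cand = R[y:y + win, x - d:x - d + win]
                ssd = np.sum((patch - cand) ** 2)
                if ssd < best:                      # ties keep smallest d
                    best, best_d = ssd, d
            disp[y, x] = best_d
    return disp

# synthetic pair: the right image is the left one shifted by 2 pixels
left = np.tile(np.arange(16), (8, 1)) % 5
right = np.roll(left, -2, axis=1)
disp = block_match(left, right)
```

On this synthetic pair the recovered disparity in the image interior matches the 2-pixel shift used to generate the right view.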

Relevance:

30.00%

Publisher:

Abstract:

Teaching robotics to students at the beginning of their studies has become a huge challenge. Simulation environments can be an effective answer to that challenge: students can interact with simulated robots and make first contact with robotic constraints. From our previous experience with simulation environments, we observed that students with a weaker background in robotics were able to deal with a limited number of constraints, implement a simulated robotic platform and study several sensors. The question is: what should the best approach be after this first phase? Should students start developing their own hardware? Hardware development is a very important part of an engineer's education, but it can also be a difficult phase that leads to discouragement and loss of motivation in some students. Considering these constraints and the high abandonment rate among first-year engineering students, it is important to develop teaching strategies that deal with this problem in a feasible way. The solution we propose is the integration of a low-cost standard robotic platform, the WowWee Rovio, as an intermediate step between the simulation phase and the stage at which students develop their own robots. This approach allows students to keep working on robotic topics such as cooperative behaviour, perception, navigation and data fusion. The proposed approach proved to be a motivating step not only for the students but also for the teachers. Students and teachers were able to reach a balance between the level of demand imposed by the teachers and the satisfaction and motivation of the students.

Relevance:

30.00%

Publisher:

Abstract:

The use of robotic vehicles for environmental modeling is discussed. This paper presents diverse results from autonomous marine missions with the ROAZ autonomous surface vehicle. The vehicle can perform autonomous missions while gathering marine data with high inertial and positioning precision. The underwater world is an economic and environmental asset that needs new tools to study and preserve it. ROAZ is used in marine environment missions since it can sense and monitor both surface and underwater scenarios. It is equipped with a diverse set of sensors, cameras and underwater sonars that generate 3D environmental models, and it is used to study marine life and to locate underwater wrecks that may pollute or endanger marine navigation. Building 3D models that integrate multibeam and sidescan sonar data remains a challenge today. Moreover, it is important that robots can explore an area and make decisions based on their surroundings and goals; in this regard, autonomous robotic systems can relieve human beings of repetitive and dangerous tasks.

Relevance:

30.00%

Publisher:

Abstract:

Introduction: Several techniques have been developed to improve the safety of pedicle screw placement. More recently, robotically assisted pedicle insertion has been introduced, aiming at increased accuracy. The aim of this study was to compare this new technique with the two main pedicle insertion techniques in our unit, namely fluoroscopically assisted and EMG-aided insertion. Material and methods: A total of 382 screws (78 thoracic, 304 lumbar) were introduced in 64 patients (m/f = 1.37, equally distributed between insertion technique groups) by a single experienced spinal surgeon. Of those, 64 (10 thoracic, 54 lumbar) were introduced in 11 patients using a miniature robotic device based on preoperative CT images under fluoroscopic control; 142 (4 thoracic, 138 lumbar) screws were introduced using lateral fluoroscopy in 27 patients; and 176 (64 thoracic, 112 lumbar) screws in 26 patients were inserted using both fluoroscopy and EMG monitoring. There was no difference in the distribution of scoliotic spines between the 3 groups (n = 13). Screw position was assessed by an independent observer on CT in axial, sagittal and coronal planes using the Rampersaud A to D classification. Data for lumbar and thoracic screws were processed separately, as were data obtained from the axial, sagittal and coronal CT planes. Results: Intra- and interobserver reliability of the Rampersaud classification was moderate (0.35 and 0.45, respectively), being lowest in the axial plane. The total number of misplaced screws (grades C and D) was generally low (12 thoracic and 12 lumbar screws). Misplacement rates were the same in straight and scoliotic spines. The only difference in misplacement rates was observed on axial and coronal images in the EMG-assisted thoracic screw group, with a higher proportion of C or D grades (p < 0.05) in that group.
Recorded compound muscle action potential (CMAP) values of the inserted screws were 30.4 mA for the robotic and 24.9 mA for the freehand technique, with a confidence interval of 3.8 around the mean difference of 5.5 mA. Discussion: Robotic placement improved the placement of thoracic screws but not that of lumbar screws, possibly because our misplacement rates are in general near those of published navigation series. Robotically assisted spine surgery might therefore enhance the safety of screw placement, in particular in training settings where different users at various stages of their learning curve are involved in pedicle instrumentation.

Relevance:

30.00%

Publisher:

Abstract:

Bone-mounted robotic guidance for pedicle screw placement has recently been introduced, aiming at increased accuracy. The aim of this prospective study was to compare this novel approach with the conventional fluoroscopy-assisted freehand technique (not two- or three-dimensional fluoroscopy-based navigation). Two groups were compared: 11 patients, constituting the robotic group, were instrumented with 64 pedicle screws; 23 other patients, constituting the fluoroscopic group, were also instrumented with 64 pedicle screws. Screw position was assessed by two independent observers on postoperative CT scans using the Rampersaud A to D classification. No neurological complications were noted. Grade A (totally within pedicle margins) accounted for 79% of the screws in the robotically assisted group and 83% of the screws in the fluoroscopic group (p = 0.8). Grade C and D screws, considered misplacements, accounted for 4.7% of all robotically inserted screws and 7.8% of the fluoroscopically inserted screws (p = 0.71). The current study does not allow us to state that robotically assisted screw placement supersedes the conventional fluoroscopy-assisted technique, although the literature is more optimistic about the former.

Relevance:

30.00%

Publisher:

Abstract:

Developing successful navigation and mapping strategies is an essential part of autonomous robot research. However, hardware limitations often make for inaccurate systems. This project serves to investigate efficient alternatives for mapping an environment, by first creating a mobile robot and then applying machine learning to the robot and its controlling systems to increase the robustness of the robot system. My mapping system consists of a semi-autonomous robot drone in communication with a stationary Linux computer system. Learning systems run on both the robot and the more powerful Linux system. The first stage of this project was devoted to designing and building an inexpensive robot. Utilizing my prior experience from independent studies in robotics, I designed a small mobile robot well suited for simple navigation and mapping research. Once the major components of the robot base were designed, I began to implement my design. This involved physically constructing the base of the robot, as well as researching and acquiring components such as sensors. Implementing the more complex sensors became a time-consuming task, involving much research and assistance from a variety of sources. A concurrent stage of the project involved researching and experimenting with different types of machine learning systems. I finally settled on neural networks as the machine learning system to incorporate into my project. Neural nets can be thought of as a structure of interconnected nodes through which information filters. The type of neural net that I chose to use requires a known set of data that serves to train the net to produce the desired output. Neural nets are particularly well suited for use with robotic systems as they can handle cases that lie at the extreme edges of the training set, such as those produced by "noisy" sensor data.
Through experimenting with available neural net code, I became familiar with the code and its function, and modified it to be more generic and reusable for multiple applications of neural nets.
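The supervised training described above, in which a known dataset trains the net to produce a desired output even under noisy inputs, can be sketched with a single-hidden-layer network trained by backpropagation. The task (XOR with noise-perturbed inputs), network size and learning rate are illustrative assumptions, not the project's actual code.

```python
import numpy as np

rng = np.random.default_rng(42)

# known training set: XOR targets with noise added to the inputs,
# standing in for the "noisy" sensor data mentioned above
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)
Xn = np.repeat(X, 50, axis=0) + rng.normal(0, 0.1, (200, 2))
Yn = np.repeat(Y, 50, axis=0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# one hidden layer of 8 units, trained with plain batch gradient descent
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def loss():
    out = sigmoid(sigmoid(Xn @ W1 + b1) @ W2 + b2)
    return float(np.mean((out - Yn) ** 2))

loss_before = loss()
lr = 0.5
for _ in range(5000):
    H = sigmoid(Xn @ W1 + b1)              # forward pass
    out = sigmoid(H @ W2 + b2)
    d_out = (out - Yn) * out * (1 - out)   # backprop through MSE + sigmoid
    d_H = (d_out @ W2.T) * H * (1 - H)
    W2 -= lr * H.T @ d_out / len(Xn); b2 -= lr * d_out.mean(0)
    W1 -= lr * Xn.T @ d_H / len(Xn);  b1 -= lr * d_H.mean(0)
loss_after = loss()

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)   # clean corner inputs
```

Because the net only ever sees noise-perturbed copies of the four corners, it learns a decision surface with some slack around each training point, which is what gives it tolerance to noisy sensor readings at run time.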

Relevance:

30.00%

Publisher:

Abstract:

This paper presents the implementation of a virtual environment for project simulation and for the conception of supervision and control systems for mobile robots that are capable of operating and adapting in different environments and conditions. The purpose of this virtual system is to facilitate the development of embedded architecture systems, emphasizing tools that allow the simulation of kinematic, dynamic and control conditions, with real-time monitoring of all important system points. To this end, an open control architecture is proposed, integrating the two main techniques for implementing robotic control at the hardware level: microprocessor systems and reconfigurable hardware devices. The implemented simulator is composed of a trajectory generation module, a kinematic and dynamic simulation module, and a module for the analysis of results and errors. All kinematic and dynamic results produced during the simulation can be evaluated and visualized as graphs and tables in the analysis module, allowing improvement of the system and minimization of errors through the necessary adjustments and optimization. For controller implementation in the embedded system, rapid prototyping is used: a technology that, together with the virtual simulation environment, allows the development of a controller design for mobile robots. Validation and tests were accomplished with models of nonholonomic mobile robots with differential drive. © 2008 IEEE.
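The kinematic simulation of a nonholonomic differential-drive robot like the ones validated above reduces to integrating the unicycle model from wheel speeds. A minimal Euler-integration sketch (wheel base, time step and speeds are illustrative values, not the paper's):

```python
import math

def step(x, y, theta, v_l, v_r, wheel_base=0.3, dt=0.05):
    """One Euler-integration step of the differential-drive kinematic
    model: wheel speeds -> body velocity and turn rate -> new pose."""
    v = (v_r + v_l) / 2.0           # forward velocity [m/s]
    w = (v_r - v_l) / wheel_base    # angular velocity [rad/s]
    return (x + v * math.cos(theta) * dt,
            y + v * math.sin(theta) * dt,
            theta + w * dt)

# drive straight for 1 s, then arc left for 1 s
pose = (0.0, 0.0, 0.0)
for _ in range(20):
    pose = step(*pose, 0.5, 0.5)    # equal wheel speeds: straight line
for _ in range(20):
    pose = step(*pose, 0.3, 0.5)    # right wheel faster: left turn
```

A trajectory generation module would emit the (v_l, v_r) sequence and the analysis module would compare the integrated poses against the reference path.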

Relevance:

30.00%

Publisher:

Abstract:

This work presents and discusses the main topics involved in the design of a mobile robot system, focusing on control and navigation systems for autonomous mobile robots. It introduces the main aspects of robot design, offering a holistic vision of all the steps in the development process of an autonomous mobile robot, and discusses the problems involved in conceptualizing the mobile robot's physical structure and its relation to the world. It presents the dynamic and control analysis for robot navigation with kinematic and dynamic models and, finally, presents applications for a robotic platform for the automation, simulation, control and supervision of mobile robot navigation, with studies of dynamic and kinematic modelling, control algorithms, mechanisms for mapping and localization, trajectory planning and the platform simulator. © 2012 Praise Worthy Prize S.r.l. - All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Image-guided microsurgery requires accuracies an order of magnitude higher than today's navigation systems provide. A critical step toward the achievement of such low-error requirements is a highly accurate and verified patient-to-image registration. With the aim of reducing target registration error to a level that would facilitate the use of image-guided robotic microsurgery on the rigid anatomy of the head, we have developed a semiautomatic fiducial detection technique. Automatic force-controlled localization of fiducials on the patient is achieved through the implementation of a robotic-controlled tactile search within the head of a standard surgical screw. Precise detection of the corresponding fiducials in the image data is realized using an automated model-based matching algorithm on high-resolution, isometric cone beam CT images. Verification of the registration technique on phantoms demonstrated that through the elimination of user variability, clinically relevant target registration errors of approximately 0.1 mm could be achieved.
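Patient-to-image registration from paired fiducials is typically solved as a least-squares rigid alignment, after which the target registration error (TRE) is evaluated at a point away from the fiducials. A sketch using the Kabsch algorithm; the synthetic transform, fiducial coordinates and target point are illustrative, not the study's data.

```python
import numpy as np

def rigid_register(P, Q):
    """Kabsch/Procrustes: least-squares rotation R and translation t
    such that Q_i ≈ R @ P_i + t for paired point sets P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                         # reflection-safe rotation
    t = cQ - R @ cP
    return R, t

rng = np.random.default_rng(1)
fid_img = rng.uniform(0.0, 100.0, (4, 3))      # fiducials in image space [mm]
theta = 0.3                                    # ground-truth patient-to-image transform
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
fid_pat = (fid_img - t_true) @ R_true          # same fiducials, patient space

R, t = rigid_register(fid_pat, fid_img)
target_pat = np.array([50.0, 50.0, 50.0])      # a target away from the fiducials
target_img = R_true @ target_pat + t_true
tre = float(np.linalg.norm(R @ target_pat + t - target_img))
```

With noise-free fiducials the TRE is essentially zero; localization error on the fiducials (the user variability the semiautomatic detection eliminates) is what inflates it in practice.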

Relevance:

30.00%

Publisher:

Abstract:

Computer-aided microscopic surgery of the lateral skull base is a rare intervention in daily practice. It is often a delicate and difficult minimally invasive intervention, since orientation between the petrous bone and the petrous bone apex is often challenging. In the case of aural atresia or tumors the normal anatomical landmarks are often absent, making orientation more difficult. Navigation support, together with imaging techniques such as CT, MR and angiography, enable the surgeon in such cases to perform the operation more accurately and, in some cases, also in a shorter time. However, there are no internationally standardised indications for navigated surgery on the lateral skull base. Miniaturised robotic systems are still in the initial validation phase.

Relevance:

30.00%

Publisher:

Abstract:

The application of image-guided systems with or without support by surgical robots relies on the accuracy of the navigation process, including patient-to-image registration. The surgeon must carry out the procedure based on the information provided by the navigation system, usually without being able to verify its correctness beyond visual inspection. Misleading surrogate parameters such as the fiducial registration error are often used to describe the success of the registration process, while a lack of methods describing the effects of navigation errors, such as those caused by tracking or calibration, may prevent the application of image guidance in certain accuracy-critical interventions. During minimally invasive mastoidectomy for cochlear implantation, a direct tunnel is drilled from the outside of the mastoid to a target on the cochlea based on registration using landmarks solely on the surface of the skull. Using this methodology, it is impossible to detect if the drill is advancing in the correct direction and that injury of the facial nerve will be avoided. To overcome this problem, a tool localization method based on drilling process information is proposed. The algorithm estimates the pose of a robot-guided surgical tool during a drilling task based on the correlation of the observed axial drilling force and the heterogeneous bone density in the mastoid extracted from 3-D image data. We present here one possible implementation of this method tested on ten tunnels drilled into three human cadaver specimens where an average tool localization accuracy of 0.29 mm was observed.
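The localization idea above, correlating the measured axial force profile with the bone-density profiles extracted along candidate trajectories, can be sketched as follows. The toy profiles and the simple normalized-correlation score are illustrative assumptions; the paper's actual estimator operates on 3-D image data.

```python
import numpy as np

def best_trajectory(force_profile, density_profiles):
    """Pick the candidate drill trajectory whose expected bone-density
    profile correlates best with the measured axial drilling force."""
    f = (force_profile - force_profile.mean()) / force_profile.std()
    scores = []
    for d in density_profiles:
        dn = (d - d.mean()) / d.std()
        scores.append(float(np.mean(f * dn)))   # normalized correlation
    return int(np.argmax(scores)), scores

# three hypothetical density profiles sampled along candidate poses
profiles = [np.array([1, 1, 5, 5, 1, 1], dtype=float),
            np.array([1, 5, 1, 5, 1, 5], dtype=float),
            np.array([5, 5, 1, 1, 1, 1], dtype=float)]
# measured force resembles profile 0, with measurement noise
force = profiles[0] + np.array([0.2, -0.1, 0.3, -0.2, 0.1, 0.0])
idx, scores = best_trajectory(force, profiles)
```

The trajectory whose density profile best explains the observed force history is selected, which is how the drill pose can be estimated without any line-of-sight tracking.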

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND Stereotactic navigation technology can enhance guidance during surgery and enable the precise reproduction of planned surgical strategies. Currently, specific systems (such as the CAS-One system) are available for instrument guidance in open liver surgery. This study aims to evaluate the implementation of such a system for the targeting of hepatic tumors during robotic liver surgery. MATERIAL AND METHODS Optical tracking references were attached to one of the robotic instruments and to the robotic endoscopic camera. After instrument and video calibration and patient-to-image registration, a virtual model of the tracked instrument and the available three-dimensional images of the liver were displayed directly within the robotic console, superimposed onto the endoscopic video image. An additional superimposed targeting viewer allowed for the visualization of the target tumor, relative to the tip of the instrument, for an assessment of the distance between the tumor and the tool for the realization of safe resection margins. RESULTS Two cirrhotic patients underwent robotic navigated atypical hepatic resections for hepatocellular carcinoma. The augmented endoscopic view allowed for the definition of an accurate resection margin around the tumor. The overlay of reconstructed three-dimensional models was also used during parenchymal transection for the identification of vascular and biliary structures. Operative times were 240 min in the first case and 300 min in the second. There were no intraoperative complications. CONCLUSIONS The da Vinci Surgical System provided an excellent platform for image-guided liver surgery with a stable optic and instrumentation. Robotic image guidance might improve the surgeon's orientation during the operation and increase accuracy in tumor resection. Further developments of this technological combination are needed to deal with organ deformation during surgery.

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a novel robotic system able to move along the outside of the oil pipelines used in Electric Submersible Pump (ESP) and Progressive Cavity Pump (PCP) applications. This novel design, called RETOV, proposes a lightweight robot structure that can be equipped with sensors to measure environmental variables while avoiding damage to pumps and wells. The main design considerations and the design and implementation methodology are discussed. Finally, the first experimental results, which show RETOV moving along vertical pipelines, are analyzed.