966 results for Robotic control


Relevance:

30.00%

Publisher:

Abstract:

This paper presents the implementation of a robust grasp mapping between a 3-finger haptic device (master) and a robotic hand (slave). The mapping is based on a grasp equivalence defined by considering the manipulation capabilities of the master and slave devices. The metrics that translate the human hand gesture to the robotic hand workspace are obtained through an analytical user study, which allows natural control of the robotic hand. The grasp mapping is accomplished by defining 4 control modes that encapsulate all the grasp gestures considered.
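
As a rough illustration of the master-to-slave mapping idea described above, the following Python sketch classifies a measured master gesture into one of four control modes and scales the closure command per mode. The mode names, features and thresholds are hypothetical assumptions for illustration; the paper does not specify them here.

# Hypothetical sketch of a master-to-slave grasp-mapping dispatcher.
# The four mode names and the feature thresholds are illustrative,
# not taken from the paper.
from dataclasses import dataclass
from enum import Enum, auto

class GraspMode(Enum):
    PRECISION_PINCH = auto()
    TRIPOD = auto()
    POWER = auto()
    LATERAL = auto()

@dataclass
class MasterGesture:
    finger_spread: float   # normalized distance between fingertips [0..1]
    closure: float         # normalized grip closure [0..1]

def classify_mode(g: MasterGesture) -> GraspMode:
    """Map a measured master gesture to one of four control modes."""
    if g.closure > 0.8:
        return GraspMode.POWER
    if g.finger_spread < 0.2:
        return GraspMode.LATERAL
    if g.finger_spread < 0.5:
        return GraspMode.PRECISION_PINCH
    return GraspMode.TRIPOD

def map_to_slave(g: MasterGesture) -> dict:
    """Translate the master gesture into a slave-hand closure set-point
    using a per-mode scaling of the measured closure value."""
    scale = {GraspMode.PRECISION_PINCH: 0.6, GraspMode.TRIPOD: 0.8,
             GraspMode.POWER: 1.0, GraspMode.LATERAL: 0.5}
    mode = classify_mode(g)
    return {"mode": mode.name, "closure_cmd": scale[mode] * g.closure}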

Relevance:

30.00%

Publisher:

Abstract:

Robotics is an emerging field with a great deal of activity. It is a field that presents several problems because it depends on a large number of disciplines, technologies, devices and tasks. Its expansion from perfectly controlled industrial environments toward open and dynamic environments presents many new challenges; new uses include, for example, household robots and professional robots. To facilitate the low-cost, rapid development of robotic systems, the reusability of code, and its medium- and long-term maintainability and robustness, novel approaches are required that provide generic models and software systems and develop paradigms capable of solving these problems. For this purpose, in this paper we propose a model based on multi-agent systems, inspired by the human nervous system, able to transfer the control characteristics of the biological system and to take advantage of the best properties of distributed software systems. Specifically, we model decentralized activity and hormonal variation.

Relevance:

30.00%

Publisher:

Abstract:

New low-cost sensors and open, free libraries for 3D image processing are making important advances in robot vision applications possible, such as three-dimensional object recognition, semantic mapping, navigation and localization of robots, and human detection and/or gesture recognition for human-machine interaction. In this paper, a novel method for recognizing and tracking the fingers of a human hand is presented. The method is based on point clouds from range images captured by an RGB-D sensor. It works in real time and does not require visual markers, camera calibration or prior knowledge of the environment. Moreover, it works successfully even when multiple objects appear in the scene or when the ambient light changes. Furthermore, the method was designed as a human interface for controlling domestic or industrial devices remotely. In this paper, the method was tested by operating a robotic hand: first, the human hand was recognized and the fingers were detected; second, the movement of the fingers was analysed and mapped so it could be imitated by the robotic hand.
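
As a minimal sketch of how fingertips can be picked out of an RGB-D point cloud, the following Python example segments the hand as the points nearest the camera, clusters them, and takes the points farthest from the hand centroid as fingertip candidates. This is a generic heuristic under stated assumptions (hand closest to the sensor, illustrative thresholds), not the recognition algorithm of the paper.

# Illustrative sketch (not the paper's algorithm): segment the hand as the
# cluster of points nearest the camera and pick fingertip candidates as the
# points farthest from the hand centroid. Thresholds are assumptions.
import numpy as np
from sklearn.cluster import DBSCAN

def detect_fingertips(points: np.ndarray, depth_margin: float = 0.10,
                      n_fingers: int = 5) -> np.ndarray:
    """points: (N, 3) array from an RGB-D range image, camera frame (z = depth)."""
    # Keep only points within depth_margin of the closest point (assumed hand).
    z_min = points[:, 2].min()
    hand = points[points[:, 2] < z_min + depth_margin]

    # Cluster to drop isolated noise and keep the largest blob.
    labels = DBSCAN(eps=0.01, min_samples=20).fit_predict(hand)
    largest = np.argmax(np.bincount(labels[labels >= 0]))
    hand = hand[labels == largest]

    # Fingertip candidates: points farthest from the hand centroid.
    centroid = hand.mean(axis=0)
    dist = np.linalg.norm(hand - centroid, axis=1)
    return hand[np.argsort(dist)[-n_fingers:]]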

Relevance:

30.00%

Publisher:

Abstract:

Tactile sensors play an important role in robotic manipulation for performing dexterous and complex tasks. This paper presents a novel control framework for dexterous manipulation with multi-fingered robotic hands using feedback data from tactile and visual sensors. The framework permits the definition of new visual controllers which allow path tracking of the object motion while taking into account both the dynamic model of the robot hand and the grasping force at the fingertips under a hybrid control scheme. In addition, the proposed general method employs optimal control to obtain the desired behaviour in the joint space of the fingers based on a specified cost function which determines how the control effort is distributed over the joints of the robotic hand. Finally, the authors show experimental verifications on a real robotic manipulation system for some of the controllers derived from the framework.
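
To make the role of the joint-space cost function concrete, the following Python sketch shows the standard weighted-pseudo-inverse resolution that minimizes a quadratic joint-effort cost subject to a fingertip-velocity constraint. It is written in the spirit of the cost-weighting idea described above; the Jacobian, weights and velocities are placeholders, not the paper's controller.

# Minimal sketch of how a joint-space weighting matrix W can shape the
# distribution of control effort. W, J and v_des are illustrative
# placeholders, not values from the paper.
import numpy as np

def weighted_resolution(J: np.ndarray, v_des: np.ndarray,
                        W: np.ndarray) -> np.ndarray:
    """Return joint velocities minimizing 0.5 * q_dot^T W q_dot
    subject to J @ q_dot = v_des (weighted pseudo-inverse solution)."""
    W_inv = np.linalg.inv(W)
    JWJt = J @ W_inv @ J.T
    return W_inv @ J.T @ np.linalg.solve(JWJt, v_des)

# Example: a 2-DOF finger tracking a 1-D fingertip velocity.
J = np.array([[0.12, 0.07]])          # fingertip Jacobian row (m/rad)
v_des = np.array([0.02])              # desired fingertip velocity (m/s)
W_even = np.eye(2)                    # spread effort evenly
W_protect = np.diag([10.0, 1.0])      # penalize motion of joint 1
print(weighted_resolution(J, v_des, W_even))
print(weighted_resolution(J, v_des, W_protect))

Increasing the weight on a joint shifts the commanded motion onto the other joints, which is the effect the cost function in the framework is meant to control.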

Relevance:

30.00%

Publisher:

Abstract:

Humans and machines have shared the same physical space for many years. To share that space, we want robots to behave like human beings, which will facilitate their social integration and their interaction with humans and produce intelligent behaviour. To achieve this goal, we need to understand how human behaviour is generated, analyse the tasks carried out by our nervous system, and determine how they relate to one another; then, and only then, can we implement these mechanisms in robots. In this study, we propose a model of competencies based on the human neuroregulatory system for analysing and decomposing behaviour into functional modules. Using this model allows us to separate and locate the tasks to be implemented in a robot that displays human-like behaviour. As an example, we show the application of the model to autonomous movement in unfamiliar environments and its implementation in various simulated and real robots with different physical configurations and physical devices of different natures. The main result of this study is a model of competencies that is being used to build robotic systems capable of displaying human-like behaviours while taking into account the specific characteristics of robots.

Relevance:

30.00%

Publisher:

Abstract:

A large part of the new generation of computer numerical control systems has adopted an architecture based on robotic systems. This architecture improves the implementation of many manufacturing processes in terms of flexibility, efficiency, accuracy and speed. This paper presents a 4-axis robot tool based on a joint structure whose primary use is to machine complex shapes in certain non-contact processes. A new dynamic visual controller is proposed to control the 4-axis joint structure, where image information is used in the control loop to guide the robot tool in the machining task. In addition, this controller eliminates the chaotic joint behavior which appears during tracking of the quasi-repetitive trajectories required in machining processes. Moreover, the robot tool can be coupled to a manipulator robot in order to form a multi-robot platform for complex manufacturing tasks. The robot tool could therefore perform a machining task on a workpiece grasped from the workspace by a manipulator robot, and the manipulator robot could in turn be guided using the visual information provided by the robot tool, yielding an intelligent multi-robot platform controlled by a single camera.
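
For context, the following Python sketch shows the classical kinematic image-based visual-servoing law that dynamic visual controllers of this kind build on: image-feature errors are driven to zero through the interaction (image Jacobian) matrix. The gain, features and Jacobians are placeholders and this is not the controller proposed in the paper, which additionally addresses dynamics and quasi-repetitive trajectory tracking.

# Minimal image-based visual-servoing sketch. The gain, features and
# Jacobian values here are placeholders, not the paper's controller.
import numpy as np

def ibvs_joint_velocity(s: np.ndarray, s_star: np.ndarray,
                        L: np.ndarray, J_robot: np.ndarray,
                        gain: float = 0.5) -> np.ndarray:
    """Classical proportional IBVS law: q_dot = -gain * (L @ J_robot)^+ (s - s*)."""
    J_img = L @ J_robot                      # image features w.r.t. joint velocities
    return -gain * np.linalg.pinv(J_img) @ (s - s_star)

# Example with 2 point features (4 image coordinates) and a 4-axis tool.
s = np.array([0.10, -0.05, -0.12, 0.08])    # current features (normalized)
s_star = np.zeros(4)                        # desired features
L = np.random.randn(4, 6) * 0.1             # interaction matrix (placeholder)
J_robot = np.random.randn(6, 4) * 0.1       # robot Jacobian (placeholder)
print(ibvs_joint_velocity(s, s_star, L, J_robot))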

Relevance:

30.00%

Publisher:

Abstract:

Visual information is increasingly being used in a great number of applications to guide joint structures. This paper proposes an image-based controller that allows the guidance of a joint structure when its number of degrees of freedom is greater than that required for the task at hand. In this case, the controller resolves the redundancy by combining two different tasks: a primary task that performs the guidance using image information, and a secondary task that determines the most adequate posture of the joint structure, resolving the joint redundancy left by the task performed in the image space. The proposed guidance method also employs a smoothing Kalman filter, not only to detect the moments when abrupt changes occur in the tracked trajectory, but also to estimate and compensate for these changes. Furthermore, a direct visual control approach is proposed that integrates the visual information provided by this smoothing Kalman filter, which permits correct tracking when the measurements are noisy. All the contributions are integrated in an application that requires tracking the faces of children with Asperger syndrome.
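
As a rough illustration of the filtering side of this approach, the Python sketch below runs a constant-velocity Kalman filter on one image coordinate, flags an abrupt change when the normalized innovation exceeds a gate, and temporarily inflates the prediction covariance so the estimate can catch up. All model and tuning values are assumptions; this is not the smoothing Kalman filter developed in the paper.

# Illustrative sketch: constant-velocity Kalman filter with an
# innovation-based abrupt-change flag. Tuning values are assumptions.
import numpy as np

class SmoothingTracker:
    def __init__(self, dt=1/30, q=1.0, r=4.0, gate=3.0):
        self.F = np.array([[1.0, dt], [0.0, 1.0]])   # constant-velocity model
        self.H = np.array([[1.0, 0.0]])
        self.Qs = q * np.array([[dt**4/4, dt**3/2], [dt**3/2, dt**2]])
        self.R = np.array([[r]])
        self.gate = gate                              # innovation gate (sigmas)
        self.x = np.zeros(2)
        self.P = np.eye(2) * 100.0

    def step(self, z: float):
        # Predict.
        x_pred = self.F @ self.x
        P_pred = self.F @ self.P @ self.F.T + self.Qs
        # Innovation and abrupt-change test.
        y = z - (self.H @ x_pred)[0]
        S = (self.H @ P_pred @ self.H.T + self.R)[0, 0]
        abrupt = abs(y) / np.sqrt(S) > self.gate
        if abrupt:                 # compensate: trust the new measurement more
            P_pred = P_pred + np.eye(2) * 50.0
            S = (self.H @ P_pred @ self.H.T + self.R)[0, 0]
        # Update.
        K = (P_pred @ self.H.T / S).ravel()
        self.x = x_pred + K * y
        self.P = (np.eye(2) - np.outer(K, self.H)) @ P_pred
        return self.x[0], abrupt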

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a new framework based on optimal control for defining new dynamic visual controllers to carry out the guidance of any serial-link structure. The proposed general method employs optimal control to obtain the desired behaviour in the joint space based on a specified cost function which determines how the control effort is distributed over the joints. The approach allows the development of new direct visual controllers for any mechanical joint system with redundancy. Finally, the authors show experimental results and verifications on a real robotic system for some of the controllers derived from the framework.

Relevance:

30.00%

Publisher:

Abstract:

Stroke is a prevalent disorder with immense socioeconomic impact. A variety of chronic neurological deficits result from stroke. In particular, sensorimotor deficits are a significant barrier to achieving post-stroke independence. Unfortunately, the majority of pre-clinical studies that show improved outcomes in animal stroke models have failed in clinical trials. Pre-clinical studies using non-human primate (NHP) stroke models prior to initiating human trials are a potential step to improving translation from animal studies to clinical trials. Robotic assessment tools represent a quantitative, reliable, and reproducible means to assess reaching behaviour following stroke in both humans and NHPs. We investigated the use of robotic technology to assess sensorimotor impairments in NHPs following middle cerebral artery occlusion (MCAO). Two cynomolgus macaques underwent transient MCAO for 90 minutes. Approximately 1.5 years following the procedure, these NHPs and two non-stroke control monkeys were trained in a reaching task with both arms in the KINARM exoskeleton. This robot permits elbow and shoulder movements in the horizontal plane. The task required NHPs to make reaching movements from a centrally positioned start target to 1 of 8 peripheral targets uniformly distributed around the first target. To characterize sensorimotor deficiencies, we analyzed four movement parameters: reaction time, movement time (MT), initial direction error (IDE), and number of speed maxima. We hypothesized that, in this neurobehavioural task, the paretic limb of NHPs following MCAO would show reduced performance in these attributes compared to controls. Reaching movements in the non-affected limbs of control and experimental NHPs showed bell-shaped velocity profiles. In contrast, the reaching movements with the affected limbs were highly variable. We found distinctive patterns in MT, IDE, and number of speed peaks between control and experimental monkeys and between limbs of NHPs with MCAO. NHPs with MCAO demonstrated more speed peaks, longer MTs, and greater IDE in their paretic limb compared to controls. These initial results qualitatively match human stroke subjects’ performance, suggesting that robotic neurobehavioural assessment in NHPs with stroke is feasible and could have translational relevance in subsequent human studies. Further studies will be necessary to replicate and expand on these preliminary findings.
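
The four reaching metrics named above can be computed directly from a sampled hand trajectory. The Python sketch below uses common conventions (movement onset/offset at 10% of peak speed, an early 150 ms window for the initial direction) that are assumptions rather than the study's exact definitions.

# Illustrative computation of reaction time, movement time, initial
# direction error and number of speed maxima from a sampled reach.
import numpy as np

def reaching_metrics(t, xy, target, onset_frac=0.1, early_s=0.15):
    """t: (N,) time in s with t[0] at the go cue; xy: (N, 2) hand position; target: (2,)."""
    v = np.gradient(xy, t, axis=0)
    speed = np.linalg.norm(v, axis=1)
    moving = speed > onset_frac * speed.max()
    i_on = int(np.argmax(moving))                           # movement onset
    i_off = len(moving) - 1 - int(np.argmax(moving[::-1]))  # movement offset

    reaction_time = t[i_on] - t[0]
    movement_time = t[i_off] - t[i_on]

    # Initial direction error: angle between the early movement vector
    # and the straight line from onset position to the target.
    i_early = min(int(np.searchsorted(t, t[i_on] + early_s)), len(t) - 1)
    d_move = xy[i_early] - xy[i_on]
    d_target = np.asarray(target) - xy[i_on]
    cosang = d_move @ d_target / (np.linalg.norm(d_move) * np.linalg.norm(d_target) + 1e-12)
    ide_deg = float(np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0))))

    # Number of local speed maxima between onset and offset (smoothness).
    s = speed[i_on:i_off + 1]
    n_speed_peaks = int(np.sum((s[1:-1] > s[:-2]) & (s[1:-1] > s[2:])))

    return reaction_time, movement_time, ide_deg, n_speed_peaks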

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the challenging problem of designing systems able to perceive objects in underwater environments. In the last few decades, research activities in robotics have advanced the state of the art regarding the intervention capabilities of autonomous systems. The state of the art in fields such as localization and navigation, real-time perception and cognition, and safe action and manipulation, applied to ground environments (both indoor and outdoor), has now reached a readiness level that allows high-level autonomous operations. By contrast, the underwater environment remains a very difficult one for autonomous robots. Water influences the mechanical and electrical design of systems, interferes with sensors by limiting their capabilities, heavily impacts data transmission, and generally requires systems with low power consumption in order to enable reasonable mission durations. Interest in underwater applications is driven by the need to explore and intervene in environments in which human capabilities are very limited. Nowadays, most underwater field operations are carried out by manned or remotely operated vehicles, deployed for exploration and limited intervention missions. Manned vehicles, controlled directly on board, expose human operators to the risks of remaining in the field, within a hostile environment, for the duration of the mission. Remotely Operated Vehicles (ROVs) currently represent the most advanced technology for underwater intervention services available on the market. These vehicles can be remotely operated for long periods, but they need the support of an oceanographic vessel with multiple teams of highly specialized pilots. Vehicles equipped with multiple state-of-the-art sensors and capable of autonomously planning missions have been deployed in the last ten years and exploited as observers of underwater fauna, the seabed, shipwrecks, and so on. On the other hand, underwater operations like object recovery and equipment maintenance are still challenging tasks to conduct without human supervision, since they require object perception and localization with much higher accuracy and robustness, to a degree seldom available in Autonomous Underwater Vehicles (AUVs). This thesis reports the study, from design to deployment and evaluation, of a general-purpose, configurable platform dedicated to stereo-vision perception in underwater environments. Several aspects related to the peculiar characteristics of the environment have been taken into account during all stages of system design and evaluation: depth of operation and light conditions, together with water turbidity and external weather, heavily impact perception capabilities. The vision platform proposed in this work is a modular system comprising off-the-shelf components for both the imaging sensors and the computational unit, linked by a high-performance Ethernet network. The adopted design philosophy aims at achieving high flexibility in terms of feasible perception applications, which should not be as limited as in the case of special-purpose, dedicated hardware. Flexibility is required by the variability of underwater environments, with water conditions ranging from clear to turbid, light backscattering varying with daylight and depth, strong color distortion, and other environmental factors. Furthermore, the proposed modular design ensures easier maintenance and updating of the system over time.
The performance of the proposed system, in terms of perception capabilities, has been evaluated in several underwater contexts, taking advantage of the opportunity offered by the MARIS national project. Design issues such as power consumption, heat dissipation and network capabilities have been evaluated in different scenarios. Finally, real-world experiments, conducted in multiple and variable underwater contexts, including open sea waters, have led to the collection of several datasets that have been publicly released to the scientific community. The vision system has been integrated into a state-of-the-art AUV equipped with a robotic arm and gripper, and has been exploited in the robot control loop to successfully perform underwater grasping operations.
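
For reference, any rectified stereo-vision perception module of this kind ultimately relies on the standard disparity-to-depth relation Z = f * B / d. The Python sketch below illustrates it with made-up focal length, baseline and disparity values; these numbers are assumptions and do not describe the platform developed in the thesis.

# Standard rectified-stereo depth relation; parameter values are illustrative.
import numpy as np

def stereo_depth(disparity_px: np.ndarray, focal_px: float, baseline_m: float) -> np.ndarray:
    """Depth Z = f * B / d for a rectified stereo pair; invalid where d <= 0."""
    d = np.asarray(disparity_px, dtype=float)
    with np.errstate(divide="ignore"):
        z = focal_px * baseline_m / d
    z[d <= 0] = np.nan
    return z

# Example: 1000 px focal length and a 0.20 m baseline.
print(stereo_depth(np.array([50.0, 20.0, 0.0]), focal_px=1000.0, baseline_m=0.20))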

Relevance:

30.00%

Publisher:

Abstract:

Flexible Assembly Systems (FASs) are normally associated with the automatic, or robotic, assembly of products, supported by automated material handling systems. However, manual assembly operations are still prevalent within many industries, where the complexity and variety of products prohibit the development of suitable automated assembly equipment. This article presents a generic model for incorporating flexibility into the design and control of assembly operations concerned with high-variety/low-volume manufacture, drawing on the principles of Flexible Manufacturing Systems (FMS) and Just-in-Time (JIT) delivery. It is based on work being undertaken in an electronics company where the assembly operations have been overhauled and restructured in response to a need for greater flexibility, shorter cycle times and reduced inventory levels. The principles employed are in themselves not original. However, the way they have been combined and tailored has created a total manufacturing control system which represents a new concept for responding to the demands placed on market-driven firms operating in an uncertain environment.

Relevance:

30.00%

Publisher:

Abstract:

The paper addresses the problem of developing autonomous intelligent robots for complex environments. In detail, it outlines a knowledge-based robot control architecture that combines several techniques in order to provide the ability to adapt and act autonomously in complex environments. The described architecture has been implemented in a robotic system that demonstrates its operation in a dynamic environment. Although the robotic system demonstrates a certain level of autonomy, the experiments show that there are situations in which the base architecture should be complemented with additional modules. The last few chapters of the paper describe the experimental results and the current state of further research on the architecture.

Relevance:

30.00%

Publisher:

Abstract:

The paper deals with the problem of designing intelligent systems for complex environments. It discusses the possibility of integrating several technologies into one basic structure that could form the kernel of an autonomous intelligent robotic system. An alternative structure is proposed as the basis of an intelligent system able to operate in complex environments. The proposed structure is very flexible thanks to features that allow it to adapt via learning and to adjust the knowledge it uses. It may therefore be used in environments with stochastic features, such as hard-to-predict events or elements. The basic elements of the proposed structure have been implemented in a software system and in an experimental robotic system. Both systems have been used in experiments to validate the functionality, flexibility and reliability of the proposed structure, and both are presented in the paper along with their basic features. The most important experimental results are outlined and discussed at the end of the paper, where some possible directions for further research are also sketched.

Relevance:

30.00%

Publisher:

Abstract:

The main focus of this thesis is the relative localization problem of a heterogeneous team comprising both ground robots and micro aerial vehicles. This team configuration makes it possible to combine the increased accessibility and better perspective provided by aerial robots with the higher computational and sensory resources of the ground agents, realizing a cooperative multi-robot system suitable for hostile autonomous missions. However, in such a scenario, the strict constraints on flight time, sensor payload, and computational capability of micro aerial vehicles limit the practical applicability of popular map-based localization schemes for GPS-denied navigation. Therefore, the resource-limited aerial platforms of this team demand simpler localization means for autonomous navigation. Relative localization is the process of estimating the formation of a robot team using acquired inter-robot relative measurements. This allows the team members to know their relative formation even without a global localization reference, such as GPS or a map. A typical robot team would thus benefit from a relative localization service, since it would allow the team to implement formation control, collision avoidance, and supervisory control tasks independently of a global localization service. More importantly, a heterogeneous team of ground robots and computationally constrained aerial vehicles would benefit from a relative localization service, since it provides the crucial localization information required for the autonomous operation of the weaker agents. This enables less capable robots to assume supportive roles and contribute to the more powerful robots executing the mission. Hence, this study proposes a relative localization-based approach for ground and micro aerial vehicle cooperation and develops the inter-robot measurement, filtering, and distributed computing modules necessary to realize the system. The research results in three significant contributions. First, the work designs and validates a novel inter-robot relative measurement hardware solution with the accuracy, range, and scalability characteristics necessary for relative localization. Second, it analyses and designs a novel nonlinear filtering method, which allows the implementation of relative localization modules and attitude reference filters on low-cost devices with optimal tuning parameters. Third, it designs and validates a novel distributed relative localization approach, which harnesses the distributed computing capability of the team to minimize communication requirements, achieve consistent estimation, and enable efficient data correspondence within the network. The work validates the complete relative localization-based system through multiple indoor experiments and numerical simulations. The relative localization-based navigation concept, with the sensing, filtering, and distributed computing methods introduced in this thesis, compensates for the system limitations of a ground and micro aerial vehicle team and also targets hostile environmental conditions. The work thus constitutes an essential step towards realizing autonomous navigation of heterogeneous teams in real-world applications.
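
As a small, generic building block of the kind a relative localization filter relies on, the Python sketch below performs an EKF-style update of a planar relative position from a single range-and-bearing inter-robot measurement. The state, geometry and noise levels are illustrative assumptions; this is not the nonlinear filtering method developed in the thesis.

# Minimal EKF-style update of a planar relative position from a
# range-and-bearing inter-robot measurement. Values are illustrative.
import numpy as np

def relative_update(x, P, z, R):
    """x: relative position [dx, dy]; z: measured [range, bearing]."""
    rng = np.hypot(x[0], x[1])
    brg = np.arctan2(x[1], x[0])
    h = np.array([rng, brg])
    H = np.array([[x[0] / rng,     x[1] / rng],
                  [-x[1] / rng**2, x[0] / rng**2]])
    y = z - h
    y[1] = (y[1] + np.pi) % (2 * np.pi) - np.pi    # wrap bearing innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x_new = x + K @ y
    P_new = (np.eye(2) - K @ H) @ P
    return x_new, P_new

x = np.array([2.0, 1.0])                # prior relative position (m)
P = np.eye(2) * 0.5
z = np.array([2.4, np.deg2rad(30.0)])   # measured range (m) and bearing (rad)
R = np.diag([0.05**2, np.deg2rad(2.0)**2])
print(relative_update(x, P, z, R))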

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a surgical robotic device for cochlear implantation surgery is described that is able to discriminate tissue interfaces and other controlling parameters ahead of a drill tip. The advantage in surgery is that tissues at interfaces can be preserved. The smart tool is able to control its interaction with the flexing tissue, avoiding penetration and controlling the extent of protrusion with respect to the real-time position of the tissue. To interpret the drilling conditions, and the conditions leading up to breakthrough at a tissue interface, the sensing scheme used enables discrimination between the variety of conditions posed by the drilling environment. The result is a robust, fully autonomous system able to respond to tissue type, behaviour and deflection in real time. The paper describes the robotic tool, which has been designed for the surgical environment and has been used in the operating room.
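
To give a flavour of breakthrough detection on drilling signals, the Python sketch below flags imminent breakthrough when the smoothed thrust force drops sharply relative to its recent peak while feed continues. The thresholds, window sizes and signal shape are assumptions for illustration; the paper's sensing scheme is more sophisticated and is not reproduced here.

# Illustrative breakthrough-detection heuristic on drilling thrust-force
# samples. Thresholds and window sizes are assumptions, not the paper's scheme.
import numpy as np

def detect_breakthrough(force: np.ndarray, window: int = 20,
                        drop_fraction: float = 0.3) -> int:
    """Return the first sample index where breakthrough is flagged, or -1."""
    smoothed = np.convolve(force, np.ones(window) / window, mode="valid")
    for i in range(window, len(smoothed)):
        recent_peak = smoothed[i - window:i].max()
        if recent_peak > 0 and smoothed[i] < (1.0 - drop_fraction) * recent_peak:
            return i + window - 1      # approximate index back in the raw signal
    return -1

# Example: steady cutting force that collapses near the end of the record.
force = np.concatenate([np.full(200, 2.0) + 0.05 * np.random.randn(200),
                        np.linspace(2.0, 0.2, 40)])
print(detect_breakthrough(force))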