322 results for Humanoid Robot
Abstract:
Globally, it is estimated that 24 million people live with schizophrenia (WHO, 2008), while 1.2 million people have been diagnosed with schizophrenia in Indonesia. Auditory hallucinations are a key symptom of schizophrenia according to the DSM-IV-TR (Frances, First, & Pincus, 2002). It is estimated that the prevalence of auditory hallucinations in people with schizophrenia ranges from 64.3% to 83.4% (Thomas et al., 2007). Until recently, the majority of studies were conducted in Western societies, the primary focus of which has been on the causes and treatments of auditory hallucinations (Walton, 1999) and on the biological and cognitive aspects of the phenomenon (Changas, Garcia-Montes, de Lemus & Olivencia, 2003). While a few studies have explored the lived experience of people with schizophrenia, there is little research about the experience of auditory hallucinations. Therefore, the focus of this study was an exploration of the experience of auditory hallucinations as described by Indonesian people living with schizophrenia. Based on the available literature, there have been no published qualitative studies relating to the lived experience of auditory hallucinations as described by Indonesian people diagnosed with schizophrenia. A Husserlian descriptive phenomenological approach was applied in explicating the phenomenon of auditory hallucinations in this study. In-depth audio-taped interviews were conducted with 13 participants. Analysis of participant transcripts was undertaken using Colaizzi's (1973) approach. Eight major themes were explicated: feeling more like a robot than a human being - feeling compelled to respond to auditory hallucinations; voices of contradiction - a point of confusion; a frightening experience; the voices emerged at times of loss and grief; disruption to daily living; tattered relationships and family disarray; finding a personal path to living with auditory hallucinations; and seeking relief in Allah through prayer and ritual.
Experiencing auditory hallucinations, for people diagnosed with schizophrenia, is a journey of challenges as each individual struggles to understand their now-changed life-world, to reconstruct a sense of meaning within their illness experience, and to carve out a pathway to wellness. The challenge for practitioners is to learn from those who have experienced auditory hallucinations, to be with them in their journey of recovery and wellness, and to apply a person-centered approach to care within the context of a multidisciplinary team.
Abstract:
Background and purpose: The purpose of the work presented in this paper was to determine whether patient positioning and delivery errors could be detected using electronic portal images of intensity modulated radiotherapy (IMRT). Patients and methods: We carried out a series of controlled experiments delivering an IMRT beam to a humanoid phantom using both the dynamic and multiple static field methods of delivery. The beams were imaged, the images calibrated to remove the IMRT fluence variation, and then compared with calibrated images of the reference beams without any delivery or position errors. The first set of experiments involved translating the position of the phantom both laterally and in a superior/inferior direction a distance of 1, 2, 5 and 10 mm. The phantom was also rotated 1° and 2°. For the second set of measurements the phantom position was kept fixed and delivery errors were introduced to the beam. The delivery errors took the form of leaf position and segment intensity errors. Results: The method was able to detect shifts in the phantom position of 1 mm, leaf position errors of 2 mm, and dosimetry errors of 10% on a single segment of a 15-segment step-and-shoot IMRT delivery (significantly less than 1% of the total dose). Conclusions: The results of this work have shown that the method of imaging the IMRT beam and calibrating the images to remove the intensity modulations could be a useful tool in verifying both the patient position and the delivery of the beam.
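The core idea above can be sketched in a few lines: divide the portal image by the planned fluence map to recover an "open field" equivalent, then flag pixels that deviate from a reference calibrated image. All names and the 10% tolerance below are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch of portal-image calibration for IMRT verification.
# Images are plain nested lists of pixel intensities.

def calibrate(portal, fluence):
    """Remove the IMRT intensity modulation by element-wise division,
    leaving an equivalent open-field image."""
    return [[p / f for p, f in zip(prow, frow)]
            for prow, frow in zip(portal, fluence)]

def error_pixels(image, reference, tolerance=0.10):
    """Return coordinates where the calibrated image deviates from the
    reference by more than `tolerance` (fractional difference)."""
    flagged = []
    for i, (irow, rrow) in enumerate(zip(image, reference)):
        for j, (v, r) in enumerate(zip(irow, rrow)):
            if abs(v - r) / r > tolerance:
                flagged.append((i, j))
    return flagged
```

A residual above the tolerance would then indicate either a set-up shift or a delivery error, which the controlled experiments above distinguish by design.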
Abstract:
We have taken a new method of calibrating portal images of IMRT beams and used it to measure patient set-up accuracy and delivery errors, such as leaf errors and segment intensity errors, during treatment. A calibration technique was used to remove the intensity modulations from the images, leaving equivalent open-field images showing patient anatomy that can be used for verification of the patient position. The images of the treatment beam can also be used to verify the delivery of the beam in terms of multileaf collimator leaf position and dosimetric errors. A series of controlled experiments delivering an anterior IMRT beam to the head and neck of a humanoid phantom was undertaken. A 2 mm translation in the position of the phantom could be detected. With intentional introduction of delivery errors into the beam, this method allowed us to detect leaf positioning errors of 2 mm and variations in monitor units of 1%. The method was then applied to the case of a patient who received IMRT treatment to the larynx and cervical nodes. The anterior IMRT beam was imaged during four fractions, and the images were calibrated and investigated for the characteristic signs of patient position error and delivery error that were shown in the control experiments. No significant errors were seen. The method of imaging the IMRT beam and calibrating the images to remove the intensity modulations can be a useful tool in verifying both the patient position and the delivery of the beam.
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
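The avoidance trajectory described above can be illustrated geometrically: waypoints orbit the obstacle while the orbit radius contracts and altitude changes, tracing a cone. The sketch below is purely illustrative geometry under assumed parameters, not the paper's predictive controller.

```python
# Illustrative conical-spiral waypoint generator; the obstacle sits at
# the origin and the path spirals around it while climbing.
import math

def conical_spiral(radius0, climb_per_rad, shrink_per_rad, n, dtheta=0.2):
    """Return n waypoints (x, y, z) along a contracting climbing spiral."""
    pts = []
    for k in range(n):
        theta = k * dtheta
        r = radius0 * math.exp(-shrink_per_rad * theta)  # shrinking orbit
        pts.append((r * math.cos(theta), r * math.sin(theta),
                    climb_per_rad * theta))
    return pts
```

In the paper's scheme the aircraft does not follow precomputed waypoints; the controller optimises over image and vehicle dynamics so that the resulting motion has this conical-spiral character without requiring range to the object.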
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision based topometric map of its environment. The map consists of a globally-consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph, as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
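A topometric map of the kind described above can be sketched as a pose graph whose nodes would each carry a local point cloud, with shortest-path planning over metric edge lengths. The structure and names below are assumptions for illustration, not the authors' implementation.

```python
# Minimal topometric-map sketch: pose-graph nodes with attached point
# clouds, and Dijkstra shortest-path planning over the edges.
import heapq

class TopometricMap:
    def __init__(self):
        self.nodes = {}   # node id -> (pose, local point cloud)
        self.edges = {}   # node id -> {neighbour: metric distance}

    def add_node(self, nid, pose, cloud=None):
        self.nodes[nid] = (pose, cloud or [])
        self.edges.setdefault(nid, {})

    def add_edge(self, a, b, dist):
        self.edges[a][b] = dist
        self.edges[b][a] = dist   # traversable in both directions

    def shortest_path(self, start, goal):
        """Dijkstra over the pose graph; returns the node sequence."""
        frontier = [(0.0, start, [start])]
        visited = set()
        while frontier:
            cost, node, path = heapq.heappop(frontier)
            if node == goal:
                return path
            if node in visited:
                continue
            visited.add(node)
            for nb, d in self.edges[node].items():
                if nb not in visited:
                    heapq.heappush(frontier, (cost + d, nb, path + [nb]))
        return None
```

The system in the paper goes further by planning within the 2D metric maps generated from the point clouds, rather than only along graph edges as this sketch does.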
Abstract:
Robotic systems are increasingly being utilised as fundamental data-gathering tools by scientists, allowing new perspectives and a greater understanding of the planet and its environmental processes. Today's robots are already exploring our deep oceans, tracking harmful algal blooms and pollution spread, monitoring climate variables, and even studying remote volcanoes. This article collates and discusses the significant advancements and applications of marine, terrestrial, and airborne robotic systems developed for environmental monitoring during the last two decades. Emerging research trends for achieving large-scale environmental monitoring are also reviewed, including cooperative robotic teams, robot and wireless sensor network (WSN) interaction, adaptive sampling and model-aided path planning. These trends offer efficient and precise measurement of environmental processes at unprecedented scales that will push the frontiers of robotic and natural sciences.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with practical applications in domains such as security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large-scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path using a probability maximisation process with locally generated descriptions. The second problem, of camera placement, emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types, etc., of the cameras must be chosen so that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
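The simulated annealing idea can be sketched compactly. The toy below places cameras on a 1D corridor to maximise covered targets; it is a much-simplified, fixed-dimension illustration (the dissertation's Trans-Dimensional Simulated Annealing also varies the *number* of cameras), and every name and parameter here is an assumption.

```python
# Toy simulated annealing for camera placement on a 1D corridor.
import math
import random

def coverage(cams, targets, reach=2.0):
    """Number of targets within `reach` of at least one camera."""
    return sum(1 for t in targets if any(abs(c - t) <= reach for c in cams))

def anneal(targets, n_cams=2, steps=2000, temp=1.0, cooling=0.995, seed=0):
    rng = random.Random(seed)
    cams = [rng.uniform(0, 10) for _ in range(n_cams)]
    cov = coverage(cams, targets)
    best, best_cov = list(cams), cov
    for _ in range(steps):
        cand = list(cams)
        cand[rng.randrange(n_cams)] += rng.gauss(0, 1)  # perturb one camera
        c = coverage(cand, targets)
        # accept improvements always, worse moves with Boltzmann probability
        if c >= cov or rng.random() < math.exp((c - cov) / temp):
            cams, cov = cand, c
            if cov > best_cov:
                best, best_cov = list(cams), cov
        temp *= cooling
    return best, best_cov
```

A trans-dimensional variant would additionally propose "birth" and "death" moves that add or remove cameras, letting the optimiser trade coverage against network size.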
Abstract:
This paper presents a practical scheme to control heave motion for hover and automatic landing of a Rotary-wing Unmanned Aerial Vehicle (RUAV) in the presence of strong horizontal gusts. A heave motion model is constructed for the purpose of capturing dynamic variations of thrust due to horizontal gusts. Through construction of an effective gust estimator, a feedback-feedforward controller is developed which uses available measurements from onboard sensors. The proposed controller dynamically and synchronously compensates for aerodynamic variations of heave motion, enhancing the disturbance-attenuation capability of the RUAV. Simulation results demonstrate the reliability and efficiency of the suggested gust estimator. Moreover, flight tests conducted on our Eagle helicopter verify the suitability of the proposed control strategy for small RUAVs operating in a gusty environment.
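The feedback-feedforward structure described above can be sketched in one function: a feedback term regulates the heave error while a feedforward term cancels the thrust disturbance estimated by the gust estimator. The PD form and all gains below are illustrative assumptions, not the paper's controller.

```python
# Illustrative feedback-feedforward heave controller sketch.

def heave_control(z, z_ref, z_rate, gust_thrust_est, kp=2.0, kd=1.0):
    """Collective command = PD feedback + feedforward gust cancellation.
    `gust_thrust_est` would come from an onboard gust estimator."""
    feedback = kp * (z_ref - z) + kd * (0.0 - z_rate)
    feedforward = -gust_thrust_est   # cancel the estimated disturbance
    return feedback + feedforward
```

The point of the feedforward path is timing: the controller reacts to the estimated gust directly, rather than waiting for the gust to show up as a heave error.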
Abstract:
This study was a phenomenological inquiry into the experience of auditory hallucinations as described by 13 Indonesian people diagnosed with schizophrenia. The interviewees comprised 6 men and 7 women, aged between 19 and 56 years. Four themes emerged from this study: feeling more like a robot than a human being; voices of contradiction - a point of confusion; tattered relationships and family disarray; and normalizing the presence of voices as part of everyday life. The findings of this study have the potential to contribute to new understandings of how people live with and manage auditory hallucinations and so enhance client-centered nursing care.
Abstract:
This thesis presents a novel approach to mobile robot navigation using visual information towards the goal of long-term autonomy. A novel concept of a continuous appearance-based trajectory is proposed in order to solve the limitations of previous robot navigation systems, and two new algorithms for mobile robots, CAT-SLAM and CAT-Graph, are presented and evaluated. These algorithms yield performance exceeding state-of-the-art methods on public benchmark datasets and large-scale real-world environments, and will help enable widespread use of mobile robots in everyday applications.
Abstract:
This paper presents a long-term experiment where a mobile robot uses adaptive spherical views to localize itself and navigate inside a non-stationary office environment. The office contains seven members of staff and experiences a continuous change in its appearance over time due to their daily activities. The experiment runs as an episodic navigation task in the office over a period of eight weeks. The spherical views are stored in the nodes of a pose graph and they are updated in response to the changes in the environment. The updating mechanism is inspired by the concepts of long- and short-term memories. The experimental evaluation is done using three performance metrics which evaluate the quality of both the adaptive spherical views and the navigation over time.
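The long-/short-term memory idea can be sketched schematically: newly observed features enter a short-term store and are promoted into the stored view only after repeated sightings, while stored features that stop being observed decay and are forgotten. Everything below (class, thresholds, counters) is an assumed illustration, not the authors' algorithm.

```python
# Schematic sketch of memory-inspired view updating.

class AdaptiveView:
    def __init__(self, promote_after=2, forget_after=3):
        self.long_term = {}    # feature -> misses since last seen
        self.short_term = {}   # feature -> times seen
        self.promote_after = promote_after
        self.forget_after = forget_after

    def update(self, observed):
        observed = set(observed)
        # Long-term features: reset when re-seen, age otherwise,
        # forget when stale for too long.
        for f in list(self.long_term):
            if f in observed:
                self.long_term[f] = 0
            else:
                self.long_term[f] += 1
                if self.long_term[f] >= self.forget_after:
                    del self.long_term[f]
        # Short-term features: count sightings, promote when stable.
        for f in observed - set(self.long_term):
            self.short_term[f] = self.short_term.get(f, 0) + 1
            if self.short_term[f] >= self.promote_after:
                self.long_term[f] = 0
                del self.short_term[f]
```

This filtering is what lets the stored spherical views track slow, persistent changes in the office (furniture moved, posters changed) without being corrupted by transient ones (people walking past).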
Abstract:
A major challenge for robot localization and mapping systems is maintaining reliable operation in a changing environment. Vision-based systems in particular are susceptible to changes in illumination and weather, and the same location at another time of day may appear radically different to a feature-based visual localization system. One approach for mapping changing environments is to create and maintain maps that contain multiple representations of each physical location in a topological framework or manifold. However, this requires the system to be able to correctly link two or more appearance representations to the same spatial location, even though the representations may appear quite dissimilar. This paper proposes a method of linking visual representations from the same location without requiring a visual match, thereby allowing vision-based localization systems to create multiple appearance representations of physical locations. The most likely position on the robot path is determined using particle filter methods based on dead reckoning data and recent visual loop closures. In order to avoid erroneous loop closures, the odometry-based inferences are only accepted when the inferred path's end point is confirmed as correct by the visual matching system. Algorithm performance is demonstrated using an indoor robot dataset and a large outdoor camera dataset.
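The inference-and-confirmation step above can be sketched as a toy particle filter: particles are propagated by noisy odometry, and a link between representations is accepted only when the particle estimate agrees with a stored place *and* the visual matcher confirms it. All functions below are illustrative assumptions, not the paper's implementation.

```python
# Toy sketch of odometry-based inference with visual confirmation.
import math
import random

def propagate(particles, dx, dy, noise=0.1, rng=random):
    """Dead-reckoning update: shift every particle by the odometry
    increment plus Gaussian noise."""
    return [(x + dx + rng.gauss(0, noise), y + dy + rng.gauss(0, noise))
            for x, y in particles]

def mean_pose(particles):
    n = len(particles)
    return (sum(p[0] for p in particles) / n,
            sum(p[1] for p in particles) / n)

def accept_link(particles, place, visually_confirmed, radius=1.0):
    """Link the inferred position to a stored place only when the
    particles agree with it AND the visual system confirms the match."""
    mx, my = mean_pose(particles)
    close = math.hypot(mx - place[0], my - place[1]) <= radius
    return close and visually_confirmed
```

Requiring both conditions is the key safeguard: odometry alone drifts, and appearance alone fails across day/night changes, but a link accepted by both is unlikely to be erroneous.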
Abstract:
Achieving energy-efficient legged locomotion is an important goal for the future of robot mobility. This paper presents a novel joint for legged locomotion that is energy efficient for two reasons. The first is the configuration of the elastic elements and actuator, which we show analytically has lower energy losses than the typical arrangement. The second is that the joint stiffness, and hence stance duration, is controllable without requiring any energy from the actuator. Further, the joint stiffness can be changed significantly during the flight phase, from zero to highly rigid. Results obtained from a prototype hopper demonstrate that the joint allows continuous and peak hopping via torque control. The results also demonstrate that the hopping frequency can be varied between 2.2 Hz and 4.6 Hz, with associated stance durations of between 0.35 and 0.15 seconds.
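The quoted figures are easy to sanity-check: for a periodic hop, the flight time is simply the hop period minus the stance duration.

```python
# Flight time of a periodic hop from its frequency and stance duration.

def flight_time(hop_hz, stance_s):
    return 1.0 / hop_hz - stance_s

# At 2.2 Hz with a 0.35 s stance, flight lasts about 0.10 s;
# at 4.6 Hz with a 0.15 s stance, about 0.07 s.
```

So across the reported range the hopper spends most of each cycle in stance, which is where the controllable joint stiffness acts.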
Abstract:
This paper proposes an online learning control system that uses the strategy of Model Predictive Control (MPC) in a model-based locally weighted learning framework. The new approach, named Locally Weighted Learning Model Predictive Control (LWL-MPC), is proposed as a solution for learning to control robotic systems with nonlinear and time-varying dynamics. This paper demonstrates the capability of LWL-MPC to perform online learning while controlling the joint trajectories of a low-cost, three-degree-of-freedom elastic joint robot. The learning performance is investigated both in an initial learning phase and when the system dynamics change due to a heavy object added to the tool point. An experiment on the real elastic joint robot is presented, and LWL-MPC is shown to successfully learn to control the system with and without the object. The results highlight the capability of the learning control system to accommodate the lack of mechanical consistency and linearity in a low-cost robot arm.
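The locally weighted learning component assumed by LWL-MPC can be illustrated in one dimension: a prediction at a query point is a linear fit whose samples are weighted by a Gaussian kernel, so nearby training data dominates. This is a minimal sketch under assumed names; the paper's learned model is multivariate and updated incrementally online.

```python
# 1D locally weighted linear regression sketch.
import math

def lwl_predict(xs, ys, query, bandwidth=1.0):
    """Predict y at `query` using a Gaussian-weighted linear fit."""
    w = [math.exp(-((x - query) ** 2) / (2 * bandwidth ** 2)) for x in xs]
    sw = sum(w)
    xbar = sum(wi * x for wi, x in zip(w, xs)) / sw
    ybar = sum(wi * y for wi, y in zip(w, ys)) / sw
    num = sum(wi * (x - xbar) * (y - ybar) for wi, x, y in zip(w, xs, ys))
    den = sum(wi * (x - xbar) ** 2 for wi, x in zip(w, xs))
    slope = num / den if den else 0.0
    return ybar + slope * (query - xbar)
```

Because each prediction is local, the model can represent nonlinear, drifting dynamics (such as a payload suddenly added to the tool point) without refitting a global model, which is what makes it a natural plant model inside an MPC loop.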
Abstract:
Current state-of-the-art robot mapping and navigation systems produce impressive performance under a narrow range of robot platform, sensor and environmental conditions, in contrast to animals such as rats that produce “good enough” maps enabling them to function under an incredible range of situations. In this paper we present a rat-inspired, featureless sensor-fusion system that assesses the usefulness of multiple sensor modalities based on their utility and coherence for place recognition, without knowledge of the sensor type. We demonstrate the system on a Pioneer robot in indoor and outdoor environments with abrupt lighting changes. Through dynamic weighting of the sensors, the system is able to perform correct place recognition and mapping where the static sensor-weighting approach fails.
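One way to picture the dynamic weighting idea: each sensor's place-recognition votes are weighted by a running "coherence" score, and a modality that has recently agreed with the fused decision earns more weight, with no knowledge of what kind of sensor it is. The form below is an assumed illustration, not the paper's method.

```python
# Sketch of coherence-weighted sensor fusion for place recognition.

def fuse(scores, coherence):
    """Fused decision from per-sensor place scores: scores[s][p] is
    sensor s's score for place p; coherence[s] is its current weight."""
    total = sum(coherence.values())
    places = scores[next(iter(scores))].keys()
    fused = {p: sum(coherence[s] * scores[s][p] for s in scores) / total
             for p in places}
    return max(fused, key=fused.get)

def update_coherence(coherence, sensor_votes, winner, gain=0.1):
    """Reward sensors whose top vote matched the fused decision."""
    for s, vote in sensor_votes.items():
        agree = 1.0 if vote == winner else 0.0
        coherence[s] = (1 - gain) * coherence[s] + gain * agree
    return coherence
```

Under an abrupt lighting change, a camera's votes would stop agreeing with the consensus, its coherence would decay, and the remaining modalities would carry place recognition until the camera becomes informative again.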