Abstract:
This paper presents a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure: tall man-made structures such as streetlights, electricity poles, or the exterior surfaces of buildings. Current approaches to inspection of such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with an ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem-solving skills to guide the robot to difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF), for instance up/down, left/right, and toward/away with respect to the infrastructure. We therefore avoid problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadrotor with respect to a vertical pole; robot performance is evaluated by comparison with motion capture data.
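The task-frame command idea above can be illustrated with a minimal sketch. Assuming the onboard pole detector supplies a bearing to the pole in the body frame (the function name and frame conventions here are hypothetical, not the paper's implementation), operator commands expressed as toward/away, left/right, and up/down map to a body-frame velocity as follows:

```python
import math

def task_to_body_velocity(toward, left, up, pole_bearing_rad):
    """Map operator commands in the pole-relative task frame
    (toward/away, left/right, up/down) into a body-frame velocity.

    pole_bearing_rad: bearing of the pole measured in the body frame
    (0 means the pole is straight ahead). Illustrative only; the real
    controller also handles velocity estimation and safety limits.
    """
    # "toward" points along the bearing to the pole; "left" is
    # perpendicular to it, so the robot translates sideways around the pole.
    vx = toward * math.cos(pole_bearing_rad) - left * math.sin(pole_bearing_rad)
    vy = toward * math.sin(pole_bearing_rad) + left * math.cos(pole_bearing_rad)
    vz = up
    return vx, vy, vz
```

With the pole straight ahead (bearing 0), a pure "toward" command becomes forward body velocity and a pure "left" command becomes lateral velocity, which is the intuitive behaviour the interface aims for.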
Abstract:
At present, Cycloidal drives are the most preferred option for compact, high-transmission-ratio speed reduction in mechanical power transmission, especially for robot joints and manipulator applications. Research on the drive-train dynamics of Cycloidal drives is not well established. This paper presents a testing rig for Cycloidal drives, which will produce data for the development of mathematical models and the investigation of drive-train dynamics, further aiding in optimising their design.
Abstract:
This paper presents a method for investigating ship emissions, the plume capture and analysis system (PCAS), and its application in measuring airborne pollutant emission factors (EFs) and particle size distributions. The current investigation was conducted in situ, aboard two dredgers (Amity, a cutter suction dredger, and Brisbane, a hopper suction dredger), but the PCAS is also capable of performing such measurements remotely at a distant point within the plume. EFs were measured relative to the fuel consumption using the fuel-combustion-derived plume CO2. All plume measurements were corrected by subtracting background concentrations sampled regularly from upwind of the stacks. Each measurement typically took 6 minutes to complete, and during one day 40 to 50 measurements were possible. The relationship between the EFs and plume sample dilution was examined to determine the plume dilution range over which the technique could deliver consistent results when measuring EFs for particle number (PN), NOx, SO2, and PM2.5, within a targeted dilution factor range of 50-1000 suitable for remote sampling. The EFs for NOx, SO2, and PM2.5 were found to be independent of dilution for dilution factors within that range. The EF measurement for PN was corrected for coagulation losses by applying a time-dependent particle loss correction to the particle number concentration data. For the Amity, the EF ranges were PN: 2.2-9.6 × 10^15 (kg-fuel)^-1; NOx: 35-72 g(NO2).(kg-fuel)^-1; SO2: 0.6-1.1 g(SO2).(kg-fuel)^-1; and PM2.5: 0.7-6.1 g(PM2.5).(kg-fuel)^-1. For the Brisbane they were PN: 1.0-1.5 × 10^16 (kg-fuel)^-1; NOx: 3.4-8.0 g(NO2).(kg-fuel)^-1; SO2: 1.3-1.7 g(SO2).(kg-fuel)^-1; and PM2.5: 1.2-5.6 g(PM2.5).(kg-fuel)^-1. The results are discussed in terms of the operating conditions of the vessels' engines. Particle number emission factors as a function of size, as well as the count median diameter (CMD) and geometric standard deviation of the size distributions, are provided.
The size distributions were found to be consistently uni-modal in the range below 500 nm, and this mode was within the accumulation mode range for both vessels. The representative CMDs for the various activities performed by the dredgers ranged from 94-131 nm in the case of the Amity, and 58-80 nm for the Brisbane. A strong inverse relationship between CMD and EF(PN) was observed.
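The fuel-based EF calculation described above follows the standard carbon-balance approach: the background-corrected pollutant concentration is ratioed to the background-corrected plume CO2 and scaled by the CO2 emitted per kilogram of fuel. A minimal sketch, assuming mass concentrations in common units and an assumed diesel CO2 yield (the constant below is illustrative; the actual value depends on the fuel's carbon content):

```python
def emission_factor(pollutant_plume, pollutant_bg, co2_plume, co2_bg,
                    co2_per_kg_fuel=3170.0):
    """Carbon-balance emission factor in g pollutant per kg fuel.

    All concentrations are mass concentrations in the same units;
    backgrounds are subtracted first, as in the PCAS procedure.
    co2_per_kg_fuel is the grams of CO2 emitted per kg of fuel burned
    (~3170 g/kg assumes a diesel of roughly 86.5% carbon content --
    an assumption, not a value from the paper).
    """
    d_pollutant = pollutant_plume - pollutant_bg
    d_co2 = co2_plume - co2_bg
    if d_co2 <= 0:
        raise ValueError("plume CO2 must exceed the background level")
    return (d_pollutant / d_co2) * co2_per_kg_fuel
```

Because both concentrations dilute together, the ratio (and hence the EF) is nominally independent of the dilution factor, which is what the 50-1000 dilution-range check above verifies for NOx, SO2, and PM2.5.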
Abstract:
This paper describes the implementation of the first portable, embedded data acquisition unit (BabelFuse) that is able to acquire and timestamp generic sensor data and trigger General Purpose I/O (GPIO) events against a microsecond-accurate wirelessly-distributed ‘global’ clock. A significant issue encountered when fusing data received from multiple sensors is the accuracy of the timestamp associated with each piece of data. This is particularly important in applications such as Simultaneous Localisation and Mapping (SLAM) where vehicle velocity forms an important part of the mapping algorithms; on fast-moving vehicles, even millisecond inconsistencies in data timestamping can produce errors which need to be compensated for. The timestamping problem is compounded in a robot swarm environment especially if non-deterministic communication hardware (such as IEEE-802.11-based wireless) and inaccurate clock synchronisation protocols are used. The issue of differing timebases makes correlation of data difficult and prevents the units from reliably performing synchronised operations or manoeuvres. By utilising hardware-assisted timestamping, clock synchronisation protocols based on industry standards and firmware designed to minimise indeterminism, an embedded data acquisition unit capable of microsecond-level clock synchronisation is presented.
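The clock synchronisation protocols referred to above are typically built on two-way time transfer, as in IEEE 1588 (PTP): each exchange yields four timestamps from which the slave's clock offset and the path delay are recovered, assuming a symmetric network path. A minimal sketch of that core calculation (variable names are the conventional PTP ones, not identifiers from BabelFuse):

```python
def clock_offset_and_delay(t1, t2, t3, t4):
    """PTP-style two-way time transfer.

    t1: master send time, t2: slave receive time,
    t3: slave send time,  t4: master receive time.
    Assumes a symmetric network path; asymmetry appears directly as
    offset error, which is one reason hardware-assisted timestamping
    and low-indeterminism firmware matter on non-deterministic links
    such as IEEE 802.11.
    """
    offset = ((t2 - t1) - (t4 - t3)) / 2.0   # slave clock minus master clock
    delay = ((t2 - t1) + (t4 - t3)) / 2.0    # estimated one-way path delay
    return offset, delay
```

For example, with a true offset of 100 time units and a symmetric 5-unit path delay, the four timestamps recover both quantities exactly; jitter in software timestamping is what degrades this in practice.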
Abstract:
Biorobotics has the potential to provide an integrated understanding, from neural systems to behavior, that would be neither ethical nor technically feasible to obtain with living systems alone. Robots that can interact with animals in their natural environment open new possibilities for empirical studies in neuroscience. However, designing a robot that can interact with a rodent requires considerations that span a range of disciplines. For the rat's safety, the robot's body form and movements must account for the animal's wellbeing, an appropriate size for rodent arenas, and suitable behaviors for interaction. For the robot's safety, its form must be robust in the face of typically inquisitive and potentially aggressive behaviors by the rodent, which can include chewing on exposed parts, including electronics, and deliberate or accidental fouling. We designed a rat-sized robot, the iRat (intelligent rat animat technology), for studies in neuroscience. The iRat is about the same size as a rat and has the ability to navigate autonomously around small environments. In this study we report the first interactions between the iRat and real rodents in a free exploration task. Studies with five rats show that the rats and iRat interact safely for both parties.
Abstract:
Time and space are fundamental to human language and embodied cognition. In our early work we investigated how Lingodroids, robots with the ability to build their own maps, could evolve their own geopersonal spatial language. In subsequent studies we extended the framework developed for learning spatial concepts and words to learning temporal intervals. This paper considers a new aspect of time: the naming of concepts like morning, afternoon, dawn, and dusk, which are events that are part of day-night cycles but are not defined by specific time points on a clock. Grounding of such terms refers to events and features of the diurnal cycle, such as light levels. We studied event-based time in which robots experienced day-night cycles that varied with the seasons throughout a year. We then used meet-at tasks to demonstrate that the words learned were grounded, where the times to meet were morning and afternoon rather than specific clock times. The studies show how words and concepts for a novel aspect of cyclic time can be grounded through experience with events, rather than by times as measured by clocks or calendars.
Abstract:
We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form and present detailed simulation results for an aerial robot scenario using a spherical camera and a wide angle perspective camera, and present experimental results for a mobile ground robot.
Abstract:
Interview and discussion on Robot University and AUTHENTIC IN ALL CAPS, transmedia creative works by Christy Dena.
Abstract:
Globally, it is estimated that 24 million people live with schizophrenia (WHO, 2008), while 1.2 million people have been diagnosed with schizophrenia in Indonesia. Auditory hallucinations are a key symptom of schizophrenia according to the DSM IV-TR (Frances, First, & Pincus, 2002). It is estimated that the prevalence of auditory hallucinations in people with schizophrenia ranges from 64.3% to 83.4% (Thomas et al., 2007). Until recently, the majority of studies were conducted in Western societies, the primary focus of which has been on the causes and treatments of auditory hallucinations (Walton, 1999) and on the biological and cognitive aspects of the phenomenon (Changas, Garcia-Montes, de Lemus & Olivencia, 2003). While a few studies have explored the lived experience of people with schizophrenia, there is little research about the experience of auditory hallucinations. Therefore, the focus of this study was on an exploration of the experience of auditory hallucinations as described by Indonesian people living with schizophrenia. Based on the available literature, there have been no published qualitative studies relating to the lived experience of auditory hallucinations as described by Indonesian people diagnosed with schizophrenia. A Husserlian descriptive phenomenological approach was applied in explicating the phenomenon of auditory hallucinations in this study. In-depth audio-taped interviews were conducted with 13 participants. Analysis of participant transcripts was undertaken using Colaizzi's (1973) approach. Eight major themes were explicated: feeling more like a robot than a human being (feeling compelled to respond to auditory hallucinations); voices of contradiction, a point of confusion; a frightening experience; the voices emerged at times of loss and grief; disruption to daily living; tattered relationships and family disarray; finding a personal path to living with auditory hallucinations; and seeking relief in Allah through prayer and ritual.
Experiencing auditory hallucinations, for people diagnosed with schizophrenia, is a journey of challenges as each individual struggles to understand their now changed life-world, to reconstruct a sense of meaning within their illness experience, and to carve out a pathway to wellness. The challenge for practitioners is to learn from those who have experienced auditory hallucinations, to be with them in their journey of recovery and wellness, and to apply a person-centered approach to care within the context of a multidisciplinary team.
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
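The conical spiral geometry that shapes the avoidance trajectory can be illustrated with a minimal sketch. Note that the actual controller works in image space without range information; this purely geometric sampler, with illustrative parameter names, only shows the kind of path being followed: the radius and height both advance with the angle swept around the object.

```python
import math

def conical_spiral(apex, n_points, r0, pitch, turns):
    """Sample points on a conical spiral about a vertical axis through
    `apex` (x, y, z). The radius grows linearly with swept angle so the
    path sweeps around and away from the object. All parameters are
    illustrative; the paper's controller realises this geometry via a
    visual predictive control objective, not explicit waypoints.
    """
    pts = []
    for i in range(n_points):
        theta = turns * 2.0 * math.pi * i / (n_points - 1)
        r = r0 * (1.0 + theta / (2.0 * math.pi))       # one extra radius per turn
        x = apex[0] + r * math.cos(theta)
        y = apex[1] + r * math.sin(theta)
        z = apex[2] + pitch * theta / (2.0 * math.pi)  # climb per turn
        pts.append((x, y, z))
    return pts
```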
Abstract:
This paper presents a mapping and navigation system for a mobile robot, which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision based topometric map of its environment. The map consists of a globally-consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest path planning instead of following the nodes of the graph --- as is done with most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry in creating local metric maps, and uses pose graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
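The shortest-path planning step on the dynamically generated 2D metric maps can be sketched with a standard grid search. This is a generic Dijkstra planner on an occupancy grid, offered as a stand-in for whatever planner the released system actually uses:

```python
import heapq

def shortest_path(grid, start, goal):
    """Dijkstra shortest path on a 2D occupancy grid (0 = free,
    1 = occupied), 4-connected. Cells are (row, col) tuples.
    Returns the path as a list of cells, or None if unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == goal:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                nd = d + 1.0
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(pq, (nd, (nr, nc)))
    if goal not in dist:
        return None
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

Planning directly in this metric space, rather than hopping between pose-graph nodes, is what lets the robot cut corners where the local maps show free space.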
Abstract:
A technique for analysing exhaust emission plumes from unmodified locomotives under real world conditions is described and applied to the task of characterizing plumes from railway trains servicing an Australian shipping port. The method utilizes the simultaneous measurement, downwind of the railway line, of the following pollutants: particle number, PM2.5 mass fraction, SO2, NOx and CO2, with the last of these being used as an indicator of fuel combustion. Emission factors are then derived, in terms of number of particles and mass of pollutant emitted per unit mass of fuel consumed. Particle number size distributions are also presented. The practical advantages of the method are discussed, including the capacity to routinely collect emission factor data for passing trains and thereby build up a comprehensive real world database for a wide range of pollutants. Samples from 56 train movements were collected, analyzed and presented. The quantitative results for emission factors are: EF(N) = (1.7±1) × 10^16 kg^-1, EF(PM2.5) = (1.1±0.5) g·kg^-1, EF(NOx) = (28±14) g·kg^-1, and EF(SO2) = (1.4±0.4) g·kg^-1. The findings are compared with comparable previously published work. Statistically significant (p < α, α = 0.05) correlations within the group of locomotives sampled were found between the emission factors for particle number and both SO2 and NOx.
Abstract:
Robotic systems are increasingly being utilised as fundamental data-gathering tools by scientists, allowing new perspectives and a greater understanding of the planet and its environmental processes. Today's robots are already exploring our deep oceans, tracking harmful algal blooms and pollution spread, monitoring climate variables, and even studying remote volcanoes. This article collates and discusses the significant advancements and applications of marine, terrestrial, and airborne robotic systems developed for environmental monitoring during the last two decades. Emerging research trends for achieving large-scale environmental monitoring are also reviewed, including cooperative robotic teams, robot and wireless sensor network (WSN) interaction, adaptive sampling and model-aided path planning. These trends offer efficient and precise measurement of environmental processes at unprecedented scales that will push the frontiers of robotic and natural sciences.
Abstract:
A Distributed Wireless Smart Camera (DWSC) network is a special type of Wireless Sensor Network (WSN) that processes captured images in a distributed manner. While image processing on DWSCs has great potential for growth, with a vast practical application domain including security surveillance and health care, it suffers from severe constraints. In addition to the limitations of conventional WSNs, image processing on DWSCs requires more computational power, bandwidth and energy, which presents significant challenges for large scale deployments. This dissertation has developed a number of algorithms that are highly scalable, portable, energy efficient and performance efficient, with consideration of the practical constraints imposed by the hardware and the nature of WSNs. More specifically, these algorithms tackle the problems of multi-object tracking and localisation in distributed wireless smart camera networks and of optimal camera configuration determination. Addressing the first problem of multi-object tracking and localisation requires solving a large array of sub-problems. The sub-problems discussed in this dissertation are calibration of internal parameters, multi-camera calibration for localisation, and object handover for tracking. These topics have been covered extensively in the computer vision literature; however, new algorithms must be invented to accommodate the various constraints introduced and required by the DWSC platform. A technique has been developed for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements.
Camera internal parameters, including focal length, principal point, lens distortion parameter and the angle and axis of rotation, can be recovered from a minimum set of two images taken by the camera, provided that the axis of rotation between the two images goes through the camera's optical centre and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. For object localisation, a novel approach has been developed for the calibration of a network of non-overlapping DWSCs in terms of their ground plane homographies, which can then be used for localising objects. In the proposed approach, a robot travels through the camera network while updating its position in a global coordinate frame, which it broadcasts to the cameras. The cameras use this, along with the image plane location of the robot, to compute a mapping from their image planes to the global coordinate frame. This is combined with an occupancy map generated by the robot during the mapping process to localise objects moving within the network. In addition, to deal with the problem of object handover between DWSCs with non-overlapping fields of view, a highly-scalable, distributed protocol has been designed. Cameras that follow the proposed protocol transmit object descriptions to a selected set of neighbours that are determined using a predictive forwarding strategy. The received descriptions are then matched at the subsequent camera on the object's path using a probability maximisation process with locally generated descriptions. The second problem of camera placement emerges naturally when these pervasive devices are put into real use. The locations, orientations, lens types etc. of the cameras must be chosen in a way that the utility of the network is maximised (e.g. maximum coverage) while user requirements are met.
To deal with this, a statistical formulation of the problem of determining optimal camera configurations has been introduced and a Trans-Dimensional Simulated Annealing (TDSA) algorithm has been proposed to effectively solve the problem.
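The camera-placement optimisation can be sketched with a plain simulated annealing loop. This is a deliberately simplified, fixed-dimension version: the thesis's Trans-Dimensional Simulated Annealing additionally searches over the number of cameras, and real configurations include orientation and lens type, not just 2D position. A simple isotropic coverage-radius utility stands in for the real objective:

```python
import math
import random

def coverage(cams, points, radius):
    """Fraction of target points within `radius` of some camera
    (a crude stand-in for a real visibility/utility model)."""
    covered = 0
    for px, py in points:
        if any((px - cx) ** 2 + (py - cy) ** 2 <= radius ** 2
               for cx, cy in cams):
            covered += 1
    return covered / len(points)

def anneal_placement(points, n_cams, radius, iters=500, seed=0):
    """Fixed-dimension simulated annealing over camera positions in a
    10x10 area: perturb one camera, accept improvements always and
    worsenings with a temperature-dependent probability."""
    rng = random.Random(seed)
    cams = [(rng.random() * 10, rng.random() * 10) for _ in range(n_cams)]
    best, best_cov = list(cams), coverage(cams, points, radius)
    for i in range(iters):
        temp = 0.999 ** i                        # geometric cooling schedule
        cand = list(cams)
        j = rng.randrange(n_cams)                # perturb one camera position
        cand[j] = (cand[j][0] + rng.gauss(0, 1), cand[j][1] + rng.gauss(0, 1))
        dcov = coverage(cand, points, radius) - coverage(cams, points, radius)
        if dcov > 0 or rng.random() < math.exp(dcov / max(temp, 1e-9)):
            cams = cand                          # Metropolis acceptance rule
        cov = coverage(cams, points, radius)
        if cov > best_cov:
            best, best_cov = list(cams), cov
    return best, best_cov
```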
Abstract:
This paper presents a practical scheme to control heave motion for hover and automatic landing of a Rotary-wing Unmanned Aerial Vehicle (RUAV) in the presence of strong horizontal gusts. A heave motion model is constructed for the purpose of capturing dynamic variations of thrust due to horizontal gusts. Through construction of an effective gust estimator, a feedback-feedforward controller is developed which uses available measurements from onboard sensors. The proposed controller dynamically and synchronously compensates for aerodynamic variations of heave motion, enhancing the disturbance-attenuation capability of the RUAV. Simulation results demonstrate the reliability and efficiency of the proposed gust estimator. Moreover, flight tests conducted on our Eagle helicopter verify the suitability of the proposed control strategy for small RUAVs operating in a gusty environment.
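The feedback-feedforward structure described above can be sketched as follows. This is a minimal illustration, not the paper's controller: the feedback term is a generic PD law on altitude error, and the feedforward term cancels the thrust variation predicted by the gust estimate through an assumed linear gust-to-thrust gain; all gains are hypothetical.

```python
class HeaveController:
    """Feedback-feedforward heave control sketch.

    command() returns a collective/thrust correction: a PD feedback
    term on altitude error plus a feedforward term proportional to the
    estimated gust, so the gust effect is cancelled before it shows up
    as altitude error. Gains are illustrative placeholders.
    """

    def __init__(self, kp=2.0, kd=0.8, k_gust=1.0):
        self.kp, self.kd, self.k_gust = kp, kd, k_gust

    def command(self, alt_ref, alt, alt_rate, gust_estimate):
        feedback = self.kp * (alt_ref - alt) - self.kd * alt_rate
        feedforward = self.k_gust * gust_estimate  # cancel predicted gust thrust loss
        return feedback + feedforward
```

The point of the feedforward path is timing: the gust estimator acts as soon as a gust is detected, rather than waiting for the altitude error it would cause, which is what "dynamically and synchronously compensates" refers to.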