978 results for Robot motion
Abstract:
This paper presents a shared autonomy control scheme for a quadcopter that is suited for inspection of vertical infrastructure — tall man-made structures such as streetlights, electricity poles or the exterior surfaces of buildings. Current approaches to inspection of such structures are slow, expensive, and potentially hazardous. Low-cost aerial platforms with an ability to hover now have sufficient payload and endurance for this kind of task, but require significant human skill to fly. We develop a control architecture that enables synergy between the ground-based operator and the aerial inspection robot. An unskilled operator is assisted by onboard sensing and partial autonomy to safely fly the robot in close proximity to the structure. The operator uses their domain knowledge and problem-solving skills to guide the robot to difficult-to-reach locations to inspect and assess the condition of the infrastructure. The operator commands the robot in a local task coordinate frame with limited degrees of freedom (DOF), for instance up/down, left/right, and toward/away with respect to the infrastructure. We therefore avoid problems of global mapping and navigation while providing an intuitive interface to the operator. We describe algorithms for pole detection, robot velocity estimation with respect to the pole, and position estimation in 3D space, as well as the control algorithms and overall system architecture. We present initial results of shared autonomy of a quadrotor with respect to a vertical pole, and robot performance is evaluated by comparison with motion capture data.
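The abstract mentions commanding the robot in a pole-relative task frame (up/down, left/right, toward/away). As a rough illustration only, the Python sketch below maps such operator commands onto a body-frame velocity setpoint; the function name, gains and the standoff-regulation term are assumptions, not details taken from the paper.

```python
import numpy as np

def task_frame_velocity(cmd_toward, cmd_lateral, cmd_vertical,
                        pole_bearing_rad, standoff_error_m,
                        k_standoff=0.8, v_max=0.5):
    """Map operator commands given in a pole-relative task frame
    (toward/away, left/right, up/down) into a body-frame velocity
    setpoint. All names and gains are illustrative, not from the paper."""
    # Unit vector pointing from the robot toward the pole in the body frame.
    toward = np.array([np.cos(pole_bearing_rad), np.sin(pole_bearing_rad), 0.0])
    # Lateral axis is perpendicular to the toward axis in the horizontal plane.
    lateral = np.array([-toward[1], toward[0], 0.0])
    up = np.array([0.0, 0.0, 1.0])

    # The autonomy loop regulates the standoff distance; the operator's
    # toward/away command acts as an offset on that regulated distance.
    v = (cmd_toward + k_standoff * standoff_error_m) * toward \
        + cmd_lateral * lateral + cmd_vertical * up
    # Saturate for safety when flying close to the structure.
    speed = np.linalg.norm(v)
    if speed > v_max:
        v *= v_max / speed
    return v  # body-frame velocity setpoint [m/s]

# Example: operator holds "left" while autonomy corrects a 0.2 m standoff error.
print(task_frame_velocity(0.0, 0.3, 0.0, pole_bearing_rad=0.1, standoff_error_m=-0.2))
```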
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining image and vehicle dynamics in the process model, and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory, whilst providing an indication of the appropriate point to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series.
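To make the avoidance geometry concrete, here is a minimal sketch of a conical spiral path around a point object; the parameterisation and all constants are illustrative assumptions and do not reproduce the paper's visual predictive controller.

```python
import numpy as np

def conical_spiral(t, pitch=0.15, cone_half_angle=0.6, omega=0.5, r0=20.0):
    """Parametric conical spiral about an object at the origin, shown only
    to illustrate the avoidance geometry the abstract refers to; parameter
    names and values are illustrative, not from the paper."""
    # Radius shrinks exponentially with the spiral pitch.
    r = r0 * np.exp(-pitch * omega * t)
    theta = omega * t
    x = r * np.cos(theta)
    y = r * np.sin(theta)
    # Height follows the cone half-angle so the path stays on a cone.
    z = r / np.tan(cone_half_angle)
    return np.stack([x, y, z], axis=-1)

# Sample the avoidance path over 60 s at 10 Hz.
path = conical_spiral(np.linspace(0.0, 60.0, 600))
print(path.shape, path[0], path[-1])
```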
Abstract:
Distal radius fractures stabilized by open reduction internal fixation (ORIF) have become increasingly common. There is currently no consensus on the optimal time to commence range of motion (ROM) exercises post-ORIF. A retrospective cohort review was conducted over a five-year period to compare wrist and forearm range of motion outcomes and number of therapy sessions between patients who commenced active ROM exercises within the first seven days following ORIF of distal radius fractures and those who commenced from day eight onward. One hundred and twenty-one patient cases were identified. Clinical data, active ROM at initial and discharge therapy assessments, fracture type, surgical approaches, and number of therapy sessions attended were recorded. One hundred and seven (88.4%) cases had complete datasets. The early active ROM group (n = 37) commenced ROM exercises a mean (SD) of 4.27 (1.8) days post-ORIF. The comparator group (n = 70) commenced ROM exercises 24.3 (13.6) days post-ORIF. No significant differences were identified between groups in ROM at initial or discharge assessments, or in therapy sessions attended. The results from this study indicate that patients who commenced active ROM exercises an average of 24 days after surgery achieved comparable ROM outcomes with a similar number of therapy sessions to those who commenced ROM exercises within the first week.
Abstract:
This paper proposes a practical prediction procedure for the vertical displacement of a Rotary-wing Unmanned Aerial Vehicle (RUAV) landing deck in the presence of stochastic sea state disturbances. A time series model is constructed to capture the characteristics of the dynamic relationship between an observer and a landing deck, with model orders determined by a novel principle based on the Bayes Information Criterion (BIC) and coefficients identified using the Forgetting Factor Recursive Least Square (FFRLS) method. In addition, a fast-converging online multi-step predictor is developed, which can be implemented more rapidly than the Auto-Regressive (AR) predictor as it requires fewer memory allocations when updating coefficients. Simulation results demonstrate that the proposed prediction approach exhibits satisfactory prediction performance, making it suitable for integration into ship-helicopter approach and landing guidance systems given the limited computational capacity of the flight computer.
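A minimal sketch of the two ingredients named in the abstract, assuming a simple AR structure: FFRLS identification of the AR coefficients followed by iterated multi-step prediction. The fixed model order, forgetting factor and synthetic heave signal are assumptions for illustration; the paper selects the order with a BIC-based principle and uses its own faster predictor.

```python
import numpy as np

def ffrls_ar(y, order=4, lam=0.98, delta=100.0):
    """Identify AR coefficients online with Forgetting Factor Recursive
    Least Squares. The order is fixed here for illustration."""
    theta = np.zeros(order)            # AR coefficients
    P = delta * np.eye(order)          # inverse-covariance estimate
    for k in range(order, len(y)):
        phi = y[k - order:k][::-1]     # regressor of past samples, newest first
        err = y[k] - phi @ theta       # one-step prediction error
        gain = P @ phi / (lam + phi @ P @ phi)
        theta += gain * err
        P = (P - np.outer(gain, phi @ P)) / lam
    return theta

def predict_multi_step(theta, history, steps):
    """Iterate the identified AR model to forecast future deck motion."""
    buf = list(history[-len(theta):])
    out = []
    for _ in range(steps):
        y_hat = float(np.dot(theta, buf[::-1]))
        out.append(y_hat)
        buf = buf[1:] + [y_hat]
    return np.array(out)

# Synthetic heave-like signal: a slow swell plus measurement noise.
t = np.arange(0, 60, 0.1)
y = 0.8 * np.sin(0.4 * t) + 0.05 * np.random.randn(t.size)
theta = ffrls_ar(y)
print(predict_multi_step(theta, y, steps=20))
```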
Abstract:
Evolutionary computation is an effective tool for solving optimization problems. However, its significant computational demand has limited its real-time and on-line applications, especially in embedded systems with limited computing resources, e.g., mobile robots. Heuristic methods such as the genetic algorithm (GA) based approaches have been investigated for robot path planning in dynamic environments. However, research on the simulated annealing (SA) algorithm, another popular evolutionary computation algorithm, for dynamic path planning is still limited mainly due to its high computational demand. An enhanced SA approach, which integrates two additional mathematical operators and initial path selection heuristics into the standard SA, is developed in this work for robot path planning in dynamic environments with both static and dynamic obstacles. It improves the computing performance of the standard SA significantly while giving an optimal or near-optimal robot path solution, making its real-time and on-line applications possible. Using the classic and deterministic Dijkstra algorithm as a benchmark, comprehensive case studies are carried out to demonstrate the performance of the enhanced SA and other SA algorithms in various dynamic path planning scenarios.
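For orientation, the sketch below shows plain simulated annealing over a set of intermediate waypoints with a distance-plus-obstacle-penalty cost; the enhanced operators and initial-path heuristics the abstract refers to are not reproduced, and all parameters are assumptions.

```python
import math, random

def path_cost(path, obstacles, clearance=1.0):
    """Path length plus a heavy penalty for waypoints that come too close
    to an obstacle (static obstacles only in this toy version)."""
    cost = 0.0
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        cost += math.hypot(x1 - x0, y1 - y0)
        cost += sum(100.0 for (ox, oy) in obstacles
                    if math.hypot(x1 - ox, y1 - oy) < clearance)
    return cost

def anneal_path(start, goal, obstacles, waypoints=8, iters=5000,
                T0=10.0, cooling=0.999):
    """Standard simulated annealing over intermediate waypoints."""
    # Initial guess: waypoints evenly spaced on the straight start-goal line.
    cur = [start] + [(start[0] + (goal[0] - start[0]) * i / (waypoints + 1),
                      start[1] + (goal[1] - start[1]) * i / (waypoints + 1))
                     for i in range(1, waypoints + 1)] + [goal]
    cur_cost, T = path_cost(cur, obstacles), T0
    best, best_cost = list(cur), cur_cost
    for _ in range(iters):
        cand = list(cur)
        i = random.randint(1, waypoints)          # never move the endpoints
        cand[i] = (cand[i][0] + random.uniform(-1.0, 1.0),
                   cand[i][1] + random.uniform(-1.0, 1.0))
        dc = path_cost(cand, obstacles) - cur_cost
        # Accept improvements always; accept worse paths with Boltzmann probability.
        if dc < 0 or random.random() < math.exp(-dc / T):
            cur, cur_cost = cand, cur_cost + dc
            if cur_cost < best_cost:
                best, best_cost = list(cur), cur_cost
        T *= cooling
    return best, best_cost

obstacles = [(3.0, 3.0), (5.0, 6.0), (7.0, 4.0)]
path, cost = anneal_path((0.0, 0.0), (9.0, 9.0), obstacles)
print(round(cost, 2))
```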
Abstract:
Motion capture continues to be adopted across a range of creative fields including animation, games, visual effects, dance, live theatre and the visual arts. This panel will discuss and showcase the use of motion capture across these creative fields and consider the future of virtual production in the creative industries.
Abstract:
This paper presents a practical scheme to control heave motion for hover and automatic landing of a Rotary-wing Unmanned Aerial Vehicle (RUAV) in the presence of strong horizontal gusts. A heave motion model is constructed for the purpose of capturing dynamic variations of thrust due to horizontal gusts. Through construction of an effective gust estimator, a feedback-feedforward controller is developed that uses measurements available from onboard sensors. The proposed controller dynamically and synchronously compensates for aerodynamic variations of heave motion, enhancing the disturbance-attenuation capability of the RUAV. Simulation results demonstrate the reliability and efficiency of the suggested gust estimator. Moreover, flight tests conducted on our Eagle helicopter verify the suitability of the proposed control strategy for small RUAVs operating in a gusty environment.
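A schematic sketch of a feedback-feedforward heave loop of the kind described: PD feedback on altitude plus a feedforward term that cancels a low-pass gust estimate formed from the acceleration residual. The mass, gains and estimator form are assumptions, not the paper's model.

```python
class HeaveController:
    """Feedback-feedforward heave control sketch. Gains, vehicle mass and
    the gust-estimator form are illustrative assumptions."""
    def __init__(self, mass=8.0, g=9.81, kp=1.2, kd=0.8, alpha=0.1):
        self.mass, self.g = mass, g
        self.kp, self.kd, self.alpha = kp, kd, alpha
        self.gust_est = 0.0

    def update_gust(self, accel_z, thrust):
        # Residual between measured and commanded vertical acceleration is
        # attributed to gusts and smoothed with a first-order filter.
        residual = accel_z - (thrust / self.mass - self.g)
        self.gust_est = (1 - self.alpha) * self.gust_est + self.alpha * residual
        return self.gust_est

    def thrust_command(self, z, z_ref, w):
        u_fb = self.kp * (z_ref - z) - self.kd * w   # feedback on altitude and rate
        u_ff = -self.gust_est                        # feedforward gust cancellation
        return self.mass * (self.g + u_fb + u_ff)

ctrl = HeaveController()
ctrl.update_gust(accel_z=0.3, thrust=80.0)
print(ctrl.thrust_command(z=9.8, z_ref=10.0, w=-0.1))
```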
Abstract:
This paper presents an account of an autonomous mobile robot deployment in a densely crowded public event with thousands of people from different age groups attending. The robot operated for eight hours on an open floor surrounded by tables, chairs and massive touchscreen displays. Due to the large number of people in close vicinity to the robot, different safety measures were implemented, including the use of no-go zones which prevented the robot from blocking emergency exits or moving too close to the display screens. The paper presents the lessons learnt and experiences obtained from this experiment, and provides a discussion about the state of mobile service robots in such crowded environments.
Abstract:
A multi-segment foot model was used to develop an accurate and reliable kinematic model to describe in-shoe foot kinematics during gait.
Abstract:
In the movie industry, the extraordinarily successful theatrical performance of certain films is largely attributed to buzz. Despite longstanding commentary about the role of buzz in successful movie marketing and the belief that it accelerates new product diffusion, limited scholarly evidence exists to support these assertions. This is primarily due to the lack of conceptual distinction of buzz from word-of-mouth, which is often used as the main basis for conceptualising buzz. However, word-of-mouth does not fully explain the buzz surrounding films such as 'Gone With The Wind', 'The Dark Knight' and 'Avatar'. Informed by valuable insights from key experts who have launched some of the most successful movies in box office history, as well as a range of moviegoers, this thesis developed a deeper understanding of what buzz is and how it is created. This thesis concludes that buzz is not the same as word-of-mouth.
Abstract:
Organ motion as a result of respiration is an important field of research for medical physics. Knowledge of the magnitude and direction of this motion is necessary to allow for more accurate radiotherapy treatment planning. This will result in higher doses to the tumour whilst sparing healthy tissue. This project involved human trials in which the radiation therapy patients' kidneys were CT scanned under three different conditions: whilst free breathing (FB), breath-hold at normal tidal inspiration (BHIN), and breath-hold at normal tidal expiration (BHEX). The magnitude of motion was measured by recording the outline of the kidney from a Beam's Eye View (BEV). The centre of mass of this 2D shape was calculated for each set using the "ImageJ" software, and the magnitude of movement was determined from the change in the centroid's coordinates between the BHIN and BHEX scans. For the left and right kidneys respectively, the movement ranged from 4-46 mm and 2-44 mm in the superior/inferior (axial) plane, 1-21 mm and 2-16 mm in the anterior/posterior (coronal) plane, and 0-6 mm and 0-8 mm in the lateral/medial (sagittal) plane. From exhale to inhale, the kidneys tended to move inferiorly, anteriorly and laterally. A standard radiotherapy plan, designed to treat the para-aortics with opposed lateral fields, was performed on the free breathing (planning) CT set. The field size and arrangement were set up using the same parameters for each subject. The prescription was to deliver 45 Gray in 25 fractions. This field arrangement and prescription were then copied over to the breath-hold CT sets, and the dosimetric differences were compared using Dose Volume Histograms (DVH). The point of comparison for the three sets was recorded as the percentage volume of kidney receiving less than or equal to 10 Gray. The QUASAR respiratory motion phantom was used with the range of motion determined from the human study. The phantom was imaged, planned and treated with a linear accelerator, with dose determined by film. The effect of the motion was measured by the change in the penumbra of the film and compared to the penumbra from the treatment planning system.
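The centroid-displacement measurement can be sketched in a few lines; the snippet below stands in for the ImageJ step described, computing the centre of mass of a binary kidney outline and its shift between breath-hold scans. The pixel spacing and the toy masks are illustrative assumptions.

```python
import numpy as np

def centroid(mask, pixel_spacing=(1.0, 1.0)):
    """Centre of mass (in mm) of a binary kidney outline mask; a stand-in
    for the ImageJ measurement described in the abstract."""
    rows, cols = np.nonzero(mask)
    return np.array([rows.mean() * pixel_spacing[0],
                     cols.mean() * pixel_spacing[1]])

def kidney_displacement(mask_inhale, mask_exhale, pixel_spacing=(1.0, 1.0)):
    """2D displacement vector of the kidney centroid between breath-hold scans."""
    return centroid(mask_inhale, pixel_spacing) - centroid(mask_exhale, pixel_spacing)

# Toy example: a 5-pixel shift along one axis with 2 mm pixels gives ~10 mm.
a = np.zeros((100, 100), dtype=bool); a[40:60, 45:55] = True
b = np.zeros((100, 100), dtype=bool); b[45:65, 45:55] = True
print(kidney_displacement(b, a, pixel_spacing=(2.0, 2.0)))
```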
Abstract:
“Made by Motion” is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May; a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by and influencing the movement of a dancer, along with how these forces can be captured and applied to generating visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN - Inertial Motion Capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In this first series, the visual investigation focused on manipulating the movement data at the instant of capture, capture being the recording of three-dimensional movement as ‘seen’ by the hardware and ‘understood’ through the calibration of the software. By repositioning the capture hardware on the body we can effectively change how the same sequence of movements is ‘seen’ by the motion capture system, thus generating a different visual result from effectively identical movement. The outcomes from the experiments clearly demonstrate the effectiveness of using motion capture hardware as a creative tool to manipulate the perception of the capture subject, in this case a sequence of dance movements. The creative work exhibited is a cross-section of the experiments conducted in practice, with the first animated work (Movement A - Control) using the motion capture hardware in its default ‘normal’ configuration. Following this are the lower body moved to the upper body (Lb-Ub), the right arm moved onto the left arm (Ra-La), the right leg moved onto the left leg (Rl-Ll) and finally the left leg moved onto an object that is then held in the left hand (Ll-Pf (Lh)).
Abstract:
My practice-led research explores and maps workflows for generating experimental creative work involving inertia-based motion capture technology. Motion capture has often been used as a way to bridge animation and dance, resulting in abstracted visual outcomes. In early works this process was largely done through rotoscoping, reference footage and mechanical forms of motion capture. With the evolution of technology, optical and inertial forms of motion capture are now more accessible and able to accurately capture a larger range of complex movements. Made by Motion is a collaboration between digital artist Paul Van Opdenbosch and performer and choreographer Elise May; a series of studies on captured motion data used to generate experimental visual forms that reverberate in space and time. The project investigates the invisible forces generated by and influencing the movement of a dancer, along with how these forces can be captured and applied to generating visual outcomes that surpass simple data visualisation, projecting the intent of the performer’s movements. The source or ‘seed’ comes from using an Xsens MVN – Inertial Motion Capture system to capture spontaneous dance movements, with the visual generation conducted through a customised dynamics simulation. In my presentation I will display and discuss a selection of creative works from the project along with the process and considerations behind the work.
Abstract:
Cell-to-cell adhesion is an important aspect of malignant spreading that is often observed in images from the experimental cell biology literature. Since cell-to-cell adhesion plays an important role in controlling the movement of individual malignant cells, it is likely that cell-to-cell adhesion also influences the spatial spreading of populations of such cells. Therefore, it is important for us to develop biologically realistic simulation tools that can mimic the key features of such collective spreading processes to improve our understanding of how cell-to-cell adhesion influences the spreading of cell populations. Previous models of collective cell spreading with adhesion have used lattice-based random walk frameworks which may lead to unrealistic results, since the agents in the random walk simulations always move across an artificial underlying lattice structure. This is particularly problematic in high-density regions where it is clear that agents in the random walk align along the underlying lattice, whereas no such regular alignment is ever observed experimentally. To address these limitations, we present a lattice-free model of collective cell migration that explicitly incorporates crowding and adhesion. We derive a partial differential equation description of the discrete process and show that averaged simulation results compare very well with numerical solutions of the partial differential equation.
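A minimal sketch of a lattice-free random walk step with crowding (overlap rejection) and adhesion (contact-dependent move abortion), in the spirit of the model described; the specific move rules, radii and probabilities are assumptions rather than the paper's mechanism.

```python
import numpy as np

def step(positions, rng, step_len=1.0, radius=0.5, adhesion=0.3):
    """One iteration of a lattice-free random walk with crowding and adhesion.
    Agents are points with an exclusion radius; move rules are a sketch only."""
    n = len(positions)
    for i in rng.permutation(n):
        # Agents in contact with agent i (within twice the agent radius).
        dists = np.linalg.norm(positions - positions[i], axis=1)
        in_contact = np.sum((dists > 0) & (dists < 2 * radius))
        # Adhesion: moves that would break contacts are aborted with
        # probability `adhesion` per contact.
        if rng.random() < 1 - (1 - adhesion) ** in_contact:
            continue
        # Propose a move of fixed length in a uniformly random direction.
        angle = rng.uniform(0, 2 * np.pi)
        target = positions[i] + step_len * np.array([np.cos(angle), np.sin(angle)])
        # Crowding: abort the move if the target overlaps any other agent.
        others = np.delete(positions, i, axis=0)
        if np.all(np.linalg.norm(others - target, axis=1) >= 2 * radius):
            positions[i] = target
    return positions

rng = np.random.default_rng(0)
cells = rng.uniform(0, 20, size=(50, 2))
for _ in range(100):
    cells = step(cells, rng)
print(cells.mean(axis=0))
```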
Abstract:
In this paper we present a method for autonomously tuning the threshold between learning and recognizing a place in the world, based on both how the rodent brain is thought to process and calibrate multisensory data and the pivoting movement behaviour that rodents perform in doing so. The approach makes no assumptions about the number and type of sensors, the robot platform, or the environment, relying only on the ability of a robot to perform two revolutions on the spot. In addition, it self-assesses the quality of the tuning process in order to identify situations in which tuning may have failed. We demonstrate the autonomous movement-driven threshold tuning on a Pioneer 3DX robot in eight locations spread over an office environment and a building car park, and then evaluate the mapping capability of the system on journeys through these environments. The system is able to pick a place recognition threshold that enables successful environment mapping in six of the eight locations while also autonomously flagging the tuning failure in the remaining two locations. We discuss how the method, in combination with parallel work on autonomous weighting of individual sensors, moves the parameter-dependent RatSLAM system significantly closer to sensor-, platform- and environment-agnostic operation.
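As a schematic reconstruction only, the sketch below picks a recognition threshold from template similarities gathered over two on-the-spot revolutions and self-assesses the tuning by the separation margin; the similarity measure, the mismatched-heading comparison and the margin test are assumptions, not the published method.

```python
import numpy as np

def tune_threshold(rev1, rev2, min_margin=0.1):
    """Pick a place-recognition threshold from two on-the-spot revolutions.
    `rev1` and `rev2` are per-heading sensor templates from the first and
    second revolution. Schematic sketch, not the paper's exact procedure."""
    def cos_sim(a, b):
        return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    # Comparing the two revolutions at matching headings should score high
    # (same place, same view); comparing against rotated copies scores lower.
    same = np.array([cos_sim(a, b) for a, b in zip(rev1, rev2)])
    diff = np.array([cos_sim(a, b)
                     for a, b in zip(rev1, np.roll(rev2, len(rev2) // 2, axis=0))])
    threshold = 0.5 * (same.min() + diff.max())
    margin = same.min() - diff.max()
    # Self-assessment: if the two score populations overlap, tuning failed.
    return threshold, margin >= min_margin

rng = np.random.default_rng(1)
rev1 = rng.random((36, 64))            # 36 headings x 64-dim sensor template
rev2 = rev1 + 0.05 * rng.random((36, 64))
thr, ok = tune_threshold(rev1, rev2)
print(round(thr, 3), ok)
```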