389 results for Asynchronous vision sensor
Abstract:
In this paper we present research adapting a state-of-the-art condition-invariant robotic place recognition algorithm to the role of automated inter- and intra-image alignment of sensor observations of environmental and skin change over time. The approach involves inverting the typical criteria placed upon navigation algorithms in robotics: rather than attempting to fix the limited camera viewpoint invariance of such algorithms, we exploit it, showing that approximate viewpoint repetition is realistic in a wide range of environments and medical applications. We demonstrate the algorithms automatically aligning challenging visual data from a range of real-world applications: ecological monitoring of environmental change, aerial observation of natural disasters including flooding, tsunamis and bushfires, and tracking of wound recovery and sun damage over time. We also present a prototype active guidance system for enforcing viewpoint repetition. We hope to provide an interesting case study of how traditional research criteria in robotics can be inverted to provide useful outcomes in applied situations.
Abstract:
Underwater wireless sensor networks (UWSNs) have recently attracted researchers' attention due to their ability to explore underwater areas and support applications for marine discovery and oceanic surveillance. One of the main objectives of any deployed underwater network is discovering an optimized path over sensor nodes to transmit the monitored data to an onshore station. Transmitting data consumes energy at each node, while energy is limited in UWSNs, so energy efficiency is a central challenge in underwater wireless sensor networks. Dual-sink vector-based forwarding (DS-VBF) takes both residual energy and location information into consideration as priority factors to discover an optimized routing path and save energy in underwater networks. The modified routing protocol employs dual sinks on the water surface, which improves network lifetime. Owing to the dual-sink deployment, the packet delivery ratio and the average end-to-end delay are also improved. In our simulations, compared with VBF, the average end-to-end delay was reduced by more than 80%, remaining energy increased by 10%, and the packet reception ratio increased by about 70%.
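To make the priority-based forwarding idea concrete, the following Python sketch shows how a DS-VBF-style node might score candidate next hops by combining normalized residual energy with closeness to the source-to-sink routing vector, evaluating each candidate against its nearest surface sink. The weighting scheme, the routing-pipe radius check, and all function and field names are illustrative assumptions, not the paper's actual formulation.

```python
import math

def distance_point_to_segment(p, a, b):
    """Distance from point p to the routing vector (segment a-b), all 3-D tuples."""
    ax, ay, az = a; bx, by, bz = b; px, py, pz = p
    ab = (bx - ax, by - ay, bz - az)
    ap = (px - ax, py - ay, pz - az)
    ab_len2 = sum(c * c for c in ab) or 1e-9
    t = max(0.0, min(1.0, sum(i * j for i, j in zip(ap, ab)) / ab_len2))
    proj = (ax + t * ab[0], ay + t * ab[1], az + t * ab[2])
    return math.dist(p, proj)

def score_candidate(node, source, sink, pipe_radius, w_energy=0.5):
    """Higher score = better forwarder. Combines normalized residual energy with
    closeness to the source-to-sink vector (weights are illustrative only)."""
    d = distance_point_to_segment(node["pos"], source, sink)
    if d > pipe_radius:            # outside the routing pipe: not eligible
        return None
    closeness = 1.0 - d / pipe_radius
    energy = node["residual_energy"] / node["initial_energy"]
    return w_energy * energy + (1.0 - w_energy) * closeness

def choose_next_hop(candidates, source, sinks, pipe_radius):
    """Dual-sink variant: score each candidate against its nearest surface sink."""
    best = None
    for node in candidates:
        sink = min(sinks, key=lambda s: math.dist(node["pos"], s))
        s = score_candidate(node, source, sink, pipe_radius)
        if s is not None and (best is None or s > best[1]):
            best = (node, s)
    return best[0] if best else None
```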
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
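As a rough illustration of that final point, the sketch below selects the manipulation action whose past detector-score improvements have been largest on average. The action set, data structure and averaging rule are assumptions made for illustration and are not taken from the paper.

```python
def select_action(improvement_history, actions=("poke", "push", "pick-up")):
    """improvement_history maps an action name to a list of detector-score
    improvements observed after previously performing that action."""
    def expected_gain(action):
        gains = improvement_history.get(action, [])
        return sum(gains) / len(gains) if gains else 0.0
    # Pick the action expected to improve the object detector the most.
    return max(actions, key=expected_gain)

# Example usage with made-up measurements:
history = {"poke": [0.02, 0.01], "push": [0.05, 0.04], "pick-up": [0.03]}
print(select_action(history))  # -> "push"
```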
Abstract:
This paper describes a series of trials conducted at an underground mine in New South Wales, Australia. Experimental results are presented from the data obtained during the field trials, and suitable sensor suites for an autonomous mining vehicle navigation system are evaluated.
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is both arduous and dangerous. However, while the industry makes extensive use of mechanisation, it has shown a slow uptake of automation. A major cause of this is the complexity of the task and the limitations of existing automation technology, which is predicated on a structured and time-invariant working environment. Here we discuss the topic of mining automation from a robotics and computer vision perspective: as a problem in sensor-based robot control, an issue which the robotics community has been studying for nearly two decades. We then describe two of our current mining automation projects to demonstrate what is possible for both open-pit and underground mining operations.
Abstract:
The mining industry is highly suitable for the application of robotics and automation technology since the work is both arduous and dangerous. Visual servoing is a means of integrating non-contact visual sensing with machine control to augment or replace operator-based control. This article describes two of our current mining automation projects in order to demonstrate some, perhaps unusual, applications of visual servoing, and also to illustrate some very real problems with robust computer vision.
Abstract:
The International Journal of Robotics Research (IJRR) has a long history of publishing the state of the art in the field of robotic vision. This is the fourth special issue devoted to the topic. Previous special issues were published in 2012 (Volume 31, No. 4), 2010 (Volume 29, Nos 2–3) and 2007 (Volume 26, No. 7, jointly with the International Journal of Computer Vision). In a closely related field, a special issue on Visual Servoing was published in IJRR in 2003 (Volume 22, Nos 10–11). These issues nicely summarize the highlights and progress of the past 12 years of research devoted to the use of visual perception for robotics.
Abstract:
Background: Assessing hand injury is of great interest given the level of involvement of the hand with the environment. Knowing different assessment systems and their limitations generates new perspectives. The integration of digital systems (accelerometry and electromyography) as a tool to supplement functional assessment allows the clinician to know more about the motor component and its relation to movement. Therefore, the purpose of this study was to perform a kinematic and electromyographic analysis of functional hand movements. Method: Ten subjects carried out six functional movements (terminal pinch, termino-lateral pinch, tripod pinch, power grip, extension grip and ball grip). Muscle activity (hand and forearm) was measured in real time using electromyograms acquired with the Mega ME 6000, whilst acceleration was measured using the AcceleGlove. Results: Electrical activity and acceleration variables were recorded simultaneously during the execution of the functional movements. The acceleration outcome variables were the modular vectors of each finger of the hand and the palm. For the electromyography, the main variables were normalized by the mean and by the maximum muscle activity of the thenar region, hypothenar region, first dorsal interosseous, wrist flexors, carpal flexors and wrist extensors. Conclusions: Knowing muscle behavior allows the clinician to take a more direct approach in treatment. Based on the results, the tripod grip shows greater kinetic activity, and the middle finger is the most relevant in this regard. Ball grip involves the most muscle activity, with the thenar region playing a fundamental role in hand activity. Clinical relevance: Relating muscle activation, movements, individual load and displacement offers the possibility to proceed with rehabilitation by individual component.
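For readers wanting to reproduce this kind of analysis, the following Python sketch shows one plausible way to compute a per-finger acceleration magnitude (one reading of the "modular vector") and to normalize a rectified EMG channel by its mean or maximum. The array shapes, the rectification step and the function names are assumptions, not details taken from the study.

```python
import numpy as np

def acceleration_magnitude(acc_xyz):
    """acc_xyz: array of shape (n_samples, 3) for one finger or the palm.
    Returns the per-sample magnitude of the acceleration vector."""
    return np.linalg.norm(acc_xyz, axis=1)

def normalize_emg(emg, method="max"):
    """Rectify a raw EMG channel, then normalize by its mean or maximum
    (both normalizations are mentioned in the abstract)."""
    rectified = np.abs(np.asarray(emg, dtype=float))
    ref = rectified.max() if method == "max" else rectified.mean()
    return rectified / ref
```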
Abstract:
Over the past several decades there has been a sharp increase in the number of studies focused on the relationship between vision and driving. The intensified attention to this topic has most likely been stimulated by the lack of an evidence base for determining vision standards for driving licensure and a poor understanding of how vision impairment impacts driver safety and performance. Clinicians depend on the literature on vision and driving to advise visually impaired patients appropriately about driving fitness. Policy makers also depend on the scientific literature in order to develop guidelines that are evidence-based and thus fair to persons who are visually impaired. It is therefore important for clinicians and policy makers alike to understand how various study designs and measurement methods should be interpreted, so that the conclusions and recommendations they make are not overly broad, too narrowly constrained, or even misguided. We offer a methodological framework to guide interpretations of studies on vision and driving that can also serve as a heuristic for researchers in the area. Here, we discuss research designs and general measurement methods for the study of vision as they relate to driver safety, driver performance, and driver-centered (self-reported) outcomes.
Abstract:
Falls are the leading cause of injury-related morbidity and mortality among older adults. In addition to the resulting physical injury and potential disability after a fall, there are also important psychological consequences, including depression, anxiety, activity restriction, and fear of falling. Fear of falling affects 20 to 43% of community-dwelling older adults and is not limited to those who have previously experienced a fall. About half of older adults who experience fear of falling subsequently restrict their physical and everyday activities, which can lead to functional decline, depression, increased falls risk, and reduced quality of life. Although there is clear evidence that older adults with visual impairment have a higher risk of falls, only a limited number of studies have investigated fear of falling in older adults with visual impairment, and the findings have been mixed. Recent studies suggest increased levels of fear of falling among older adults with various eye conditions, including glaucoma and age-related macular degeneration, whereas other studies have failed to find differences. Interventions, which are still in their infancy in the general population, are also largely unexplored in those with visual impairment. The major aims of this review were to provide an overview of the literature on fear of falling, its measurement, and risk factors among older populations, with specific focus on older adults with visual impairment, and to identify directions for future research in this area.
Abstract:
2,4,6-Trinitrotoluene (TNT) is one of the most commonly used nitroaromatic explosives in the landmine, military and mining industries. This article demonstrates rapid and selective identification of TNT by surface-enhanced Raman spectroscopy (SERS) using 6-aminohexanethiol (AHT) as a new recognition molecule. First, Meisenheimer complex formation between AHT and TNT is confirmed by the development of a pink colour and the appearance of a new band around 500 nm in the UV-visible spectrum. A solution Raman spectroscopy study also supported AHT:TNT complex formation by demonstrating changes in the vibrational stretching of the AHT molecule between 2800 and 3000 cm−1. For SERS analysis, a self-assembled monolayer (SAM) of AHT is formed over the gold nanostructure (AuNS) SERS substrate in order to selectively capture TNT onto the surface. Electrochemical desorption and X-ray photoelectron studies are performed on the AHT SAM-modified surface to examine the presence of free amine groups with the appropriate orientation for complex formation. Further, an AHT and butanethiol (BT) mixed monolayer system is explored to improve the AHT:TNT complex formation efficiency. Using a 9:1 AHT:BT mixed monolayer, a very low limit of detection (LOD) of 100 fM TNT was realized. The new method delivers high selectivity towards TNT over 2,4-DNT and picric acid. Finally, real sample analysis is demonstrated by the extraction and SERS detection of 302 pM of TNT from spiked samples.
Abstract:
This paper details the design and performance assessment of a unique collision avoidance decision and control strategy for autonomous vision-based See and Avoid systems. The general approach revolves around re-positioning a collision object in the image using image-based visual servoing, without estimating range or time to collision. The decision strategy thus involves determining where to move the collision object, to induce a safe avoidance manoeuvre, and when to cease the avoidance behaviour. These tasks are accomplished by exploiting human navigation models, spiral motion properties, expected image feature uncertainty and the rules of the air. The result is a simple threshold-based system that can be tuned and statistically evaluated by extending performance assessment techniques derived for alerting systems. Our results demonstrate how autonomous vision-only See and Avoid systems may be designed under realistic problem constraints, and then evaluated in a manner consistent with aviation expectations.
Abstract:
In vegetated environments, reliable obstacle detection remains a challenge for state-of-the-art methods, which are usually based on geometrical representations of the environment built from LIDAR and/or visual data. In practice, field robots could in many cases safely traverse through vegetation, thereby avoiding costly detours; however, vegetation is often mistakenly interpreted as an obstacle. Classifying vegetation alone is insufficient, since there might be an obstacle hidden behind or within it. Some ultra-wideband (UWB) radars can penetrate through vegetation and help distinguish actual obstacles from obstacle-free vegetation; however, these sensors provide noisy, low-accuracy data. Therefore, in this work we address the problem of reliable traversability estimation in vegetation by augmenting LIDAR-based traversability mapping with UWB radar data. A sensor model is learned from experimental data using a support vector machine to convert the radar data into occupancy probabilities. These are then fused with LIDAR-based traversability data. The resulting augmented traversability maps retain the fine resolution of LIDAR-based maps but prevent safely traversable foliage from being interpreted as an obstacle. We validate the approach experimentally using sensors mounted on two different mobile robots navigating in two different environments.
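As a minimal sketch of the sensor-model idea, the snippet below trains a probabilistic SVM on labelled UWB radar features and fuses the resulting occupancy probability with a LIDAR-derived one via independent log-odds combination. The feature representation, labels and log-odds fusion rule are assumptions made for illustration, not necessarily those used in the paper.

```python
import numpy as np
from sklearn.svm import SVC

def train_radar_sensor_model(radar_features, occupancy_labels):
    """radar_features: (n, d) array of per-cell UWB return features;
    occupancy_labels: 0 = free/traversable, 1 = occupied (assumed encoding)."""
    model = SVC(kernel="rbf", probability=True)
    model.fit(radar_features, occupancy_labels)
    return model

def fuse_occupancy(p_lidar, p_radar):
    """Fuse two occupancy probabilities assuming independent sensors (log-odds sum)."""
    eps = 1e-6
    l = np.log(p_lidar + eps) - np.log(1.0 - p_lidar + eps)
    r = np.log(p_radar + eps) - np.log(1.0 - p_radar + eps)
    return 1.0 / (1.0 + np.exp(-(l + r)))

# Usage sketch: p_radar = model.predict_proba(cell_features)[:, 1]
# fused_map = fuse_occupancy(p_lidar_map, p_radar)
```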
Abstract:
This paper presents an unmanned aircraft system (UAS) that uses a probabilistic model for autonomous front-on environmental sensing or photography of a target. The system is based on low-cost and readily available sensors, operates in dynamic environments, and has the general intent of improving the capabilities of dynamic waypoint-based navigation systems for a low-cost UAS. The behavioural dynamics of target movement, used in the design of a Kalman filter and Markov model-based prediction algorithm, are included. Geometrical concepts and the Haversine formula are applied to the maximum likelihood case in order to predict a future state of the target, thus delivering a new waypoint for autonomous navigation. Results of the application to aerial filming with a low-cost UAS are presented, achieving the desired goal of a maintained front-on perspective without significantly constraining the route or pace of target movement.
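Since the abstract leans on the Haversine formula, here is a short, self-contained Python reference for it, together with an initial-bearing helper of the kind one might use when placing a front-on waypoint. The bearing helper and the Earth-radius constant are standard geodesy conveniences added for illustration, not details taken from the paper.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius (illustrative constant)

def haversine_distance(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

def initial_bearing(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees from north, from point 1 toward point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlmb = math.radians(lon2 - lon1)
    x = math.sin(dlmb) * math.cos(phi2)
    y = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlmb)
    return (math.degrees(math.atan2(x, y)) + 360.0) % 360.0
```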