332 results for Pushbroom camera
Abstract:
Although robotics research has seen advances over the last decades, robots are still not in widespread use outside industrial applications. Yet a range of proposed scenarios have robots working together with, helping, and coexisting with humans in daily life. All of these scenarios share a clear need to deal with more unstructured, changing environments. I herein present a system that aims to overcome the limitations of highly complex robotic systems in terms of autonomy and adaptation. The main focus of research is to investigate the use of visual feedback for improving the reaching and grasping capabilities of complex robots. To facilitate this, a combined integration of computer vision and machine learning techniques is employed. From a robot vision point of view, combining domain knowledge from both image processing and machine learning techniques can expand the capabilities of robots. I present a novel framework called Cartesian Genetic Programming for Image Processing (CGP-IP). CGP-IP can be trained to detect objects in incoming camera streams and has been successfully demonstrated on many different problem domains. The approach is fast, scalable, and robust, and requires only small training sets (5 to 10 images per experiment). Additionally, it can generate human-readable programs that can be further customized and tuned. While CGP-IP is a supervised-learning technique, I show an integration on the iCub that allows for the autonomous learning of object detection and identification. Finally, this dissertation includes two proofs of concept that integrate the motion and action sides. First, reactive reaching and grasping is shown: the robot avoids obstacles detected in the visual stream while reaching for the intended target object. Furthermore, the integration enables use of the robot in non-static environments, i.e. the reaching is adapted on-the-fly from the visual feedback received, e.g. when an obstacle is moved into the trajectory. The second integration highlights the capabilities of these frameworks by improving visual detection through object manipulation actions.
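As a rough illustration of the framework's core idea, the sketch below shows how a CGP-IP-style genotype might index a set of image-processing primitives and be decoded into a detection filter chain. The function set, encoding, and fitness measure here are simplified assumptions for illustration, not the dissertation's actual implementation.

```python
# Illustrative sketch of a CGP-IP-style individual: a genotype indexes
# image-processing primitives that are chained into a detection filter.
# The function set and encoding are simplified assumptions.
import cv2
import numpy as np

# A toy function set of OpenCV primitives (CGP-IP uses a much larger one).
FUNCTIONS = [
    lambda img: cv2.GaussianBlur(img, (5, 5), 0),
    lambda img: cv2.Sobel(img, cv2.CV_8U, 1, 0, ksize=3),
    lambda img: cv2.erode(img, np.ones((3, 3), np.uint8)),
    lambda img: cv2.dilate(img, np.ones((3, 3), np.uint8)),
]

def evaluate(genotype, image, threshold=128):
    """Decode a genotype (a list of function indices) into a filter
    chain and return a binary detection mask."""
    out = image
    for gene in genotype:
        out = FUNCTIONS[gene % len(FUNCTIONS)](out)
    return (out > threshold).astype(np.uint8)

def fitness(genotype, image, target_mask):
    """Score the mask against a hand-labelled target mask (0/1 pixels)
    by simple pixel agreement; an evolutionary loop would maximize this."""
    mask = evaluate(genotype, image)
    return float(np.mean(mask == target_mask))
```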
Abstract:
This paper describes a vision-only system for place recognition in environments that are traversed at different times of day, when changing conditions drastically affect visual appearance, and at different speeds, where places aren't visited at a consistent linear rate. The major contribution is the removal of wheel-based odometry from the previously presented algorithm (SMART), allowing the technique to operate on any camera-based device; in our case, a mobile phone. While we show that the direct application of visual odometry to our night-time datasets does not achieve the level of performance typically needed, the VO requirements of SMART are orthogonal to typical usage: firstly, only the magnitude of the velocity is required, and secondly, the calculated velocity signal only needs to be repeatable in any one part of the environment over day and night cycles, not necessarily globally consistent. Our results show that the smoothing effect of motion constraints is highly beneficial for achieving a locally consistent, lighting-independent velocity estimate. We also show that the advantage of our patch-based technique used previously for frame recognition, surprisingly, does not transfer to VO, where SIFT demonstrates equally good performance. Nevertheless, we present the SMART system using only vision; it performs sequence-based place recognition in extreme low-light conditions where standard 6-DOF VO fails, and it improves place recognition performance over odometry-less benchmarks, approaching that of wheel odometry.
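A minimal sketch of the velocity signal SMART needs is shown below: match SIFT features between consecutive frames and take the median pixel displacement as a speed estimate. Only the magnitude is kept, and the signal need only be locally repeatable; the matcher settings are illustrative assumptions.

```python
# Velocity-magnitude sketch: only speed is needed, not full 6-DOF pose,
# and the estimate need only be locally repeatable across day/night.
import cv2
import numpy as np

sift = cv2.SIFT_create()
matcher = cv2.BFMatcher(cv2.NORM_L2, crossCheck=True)

def frame_speed(prev_gray, curr_gray):
    """Median feature displacement (pixels/frame) between two frames."""
    kp1, des1 = sift.detectAndCompute(prev_gray, None)
    kp2, des2 = sift.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return 0.0
    matches = matcher.match(des1, des2)
    if not matches:
        return 0.0
    disp = [np.linalg.norm(np.subtract(kp2[m.trainIdx].pt,
                                       kp1[m.queryIdx].pt))
            for m in matches]
    return float(np.median(disp))  # magnitude only; direction is discarded
```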
Abstract:
The vision sense of standalone robots is limited by line of sight and onboard camera capabilities, but processing video from remote cameras puts a high computational burden on robots. This paper describes the Distributed Robotic Vision Service, DRVS, which implements an on-demand distributed visual object detection service. Robots specify visual information requirements in terms of regions of interest and object detection algorithms. DRVS dynamically distributes the object detection computation to remote vision systems with processing capabilities, and the robots receive high-level object detection information. DRVS relieves robots of managing sensor discovery and reduces data transmission compared to image sharing models of distributed vision. Navigating a sensorless robot from remote vision systems is demonstrated in simulation as a proof of concept.
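The sketch below illustrates the request/response exchange the abstract describes: the robot states what it needs rather than which camera to use, and only high-level detections come back. The JSON schema, field names, and example detector are assumptions, not the actual DRVS wire format.

```python
# Hypothetical DRVS-style exchange: robots describe their visual
# information requirements; remote vision nodes run the detection.
import json

def make_detection_request(region, object_type, algorithm="hog-pedestrian"):
    """Robot-side: specify a region of interest and object type,
    not a particular camera (DRVS handles sensor discovery)."""
    return json.dumps({
        "region_of_interest": region,   # e.g. a geographic bounding box
        "object_type": object_type,     # e.g. "person", "chair"
        "algorithm": algorithm,         # detector the remote node should run
    })

# The service replies with high-level object information only; no images
# cross the network, which is where the bandwidth saving comes from.
example_response = {
    "detections": [
        {"object_type": "person", "position": [2.4, 7.1], "confidence": 0.93},
    ]
}
```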
Abstract:
The purpose of this study was to establish a three-dimensional fluorescent tooth model to investigate bacterial viability against intra-canal medicaments across the thickness and surface of root dentine. Dental microbial biofilms (Enterococcus faecalis and Streptococcus mutans) were established on the external root surface, and bacterial kill against an intra-canal medicament (Ca(OH)2) was monitored over time using fluorescence microscopy in conjunction with BacLight SYTO9 and propidium iodide stains. An Olympus digital camera fitted to an SZX16 fluorescence microscope captured images of bacterial cells in biofilms on the external root surface. Biofilm viability was measured by calculating the total pixel area of green (viable bacteria) and red (non-viable bacteria) for each image using ImageJ® software. All data generated were assessed for normality and then analysed using a Mann-Whitney U-test. The viability of the S. mutans biofilm following Ca(OH)2 treatment showed a significant decline compared with the untreated group (P = 0.0418). No significant difference was seen for the E. faecalis biofilm between the Ca(OH)2 and untreated groups, indicating that the Ca(OH)2 medicament is ineffective against E. faecalis biofilm. This novel three-dimensional fluorescent biofilm model provides a new clinically relevant tool for testing medicaments against dental biofilms.
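A rough numpy/OpenCV equivalent of the ImageJ pixel-area analysis described above might look like the following; the channel thresholds are illustrative assumptions.

```python
# Viability from pixel areas: green (SYTO9, viable) vs red (propidium
# iodide, non-viable). Threshold value is an illustrative assumption.
import cv2
import numpy as np

def viability_fraction(bgr_image, thresh=60):
    """Fraction of stained pixel area that is green (viable)."""
    b, g, r = cv2.split(bgr_image)
    green_area = np.count_nonzero(g > thresh)   # viable cells
    red_area = np.count_nonzero(r > thresh)     # non-viable cells
    total = green_area + red_area
    return green_area / total if total else float("nan")
```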
Abstract:
There is increased interest in the use of Unmanned Aerial Vehicles (UAVs) for wildlife and feral animal monitoring around the world. This paper describes a novel system in which a predictive dynamic application places the UAV ahead of a user; a low-cost thermal camera and a small onboard computer identify the heat signatures of a target animal from a predetermined altitude and transmit the target's GPS coordinates. A map is generated, and various data sets and graphs are displayed using a GUI designed for ease of use. The paper describes the hardware and software architecture and the probabilistic model of a downward-facing camera for the detection of an animal. Behavioral dynamics of target movement inform the design of a Kalman filter and a Markov-model-based prediction algorithm used to place the UAV ahead of the user. Geometrical concepts and the Haversine formula are applied to the maximum-likelihood case to predict a future state of the user, delivering a new waypoint for autonomous navigation. Results show that the system is capable of autonomously locating animals from a predetermined height and generating a map showing the location of the animals ahead of the user.
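The Haversine formula mentioned above is standard; the sketch below computes the great-circle distance between two GPS fixes, the building block for projecting a waypoint ahead of the user.

```python
# Standard Haversine great-circle distance between two GPS coordinates.
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Distance in metres between (lat1, lon1) and (lat2, lon2)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))
```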
Abstract:
There is increased interest in the use of Unmanned Aerial Vehicles for load transportation in applications ranging from environmental remote sensing to construction and parcel delivery. One of the main challenges is accurate control of the load position and trajectory. This paper presents an assessment of real flight trials for the control of an autonomous multi-rotor with a suspended slung load, using only visual feedback to determine the load position. The method uses an onboard camera and a common visual marker detection algorithm to robustly detect the load location. The load position is calculated using an onboard processor and transmitted over a wireless network to a ground station that integrates MATLAB/SIMULINK and the Robot Operating System (ROS), with a Model Predictive Controller (MPC) controlling both the load and the UAV. To evaluate system performance, the load position determined by the visual detection system in real flight is compared with data from a motion tracking system. The multi-rotor position tracking performance is also analyzed by conducting flight trials using perfect load position data and data obtained only from the visual system. Results show very accurate estimation of the load position (~5% offset) using only the visual system and demonstrate that an external motion tracking system is not needed for this task.
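The abstract refers to a common visual marker detection algorithm; ArUco markers (supported in OpenCV 4.7+) are one such option and are used below purely as an illustrative stand-in for recovering the load's image-plane position.

```python
# Illustrative marker-based load localization; marker dictionary and the
# OpenCV >= 4.7 ArucoDetector API are assumptions, not the paper's setup.
import cv2
import numpy as np

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
detector = cv2.aruco.ArucoDetector(aruco_dict,
                                   cv2.aruco.DetectorParameters())

def load_pixel_position(gray_frame):
    """Return the image-plane centre of the first detected marker,
    or None if no marker is visible."""
    corners, ids, _ = detector.detectMarkers(gray_frame)
    if ids is None:
        return None
    return tuple(np.mean(corners[0][0], axis=0))  # (u, v) pixel centre
```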
Abstract:
This paper introduces a machine-learning-based system for controlling a robotic manipulator with visual perception only. The capability to autonomously learn robot controllers solely from raw-pixel images, without any prior knowledge of configuration, is shown for the first time. We build upon the success of recent deep reinforcement learning and develop a system for learning target reaching with a three-joint robot manipulator using external visual observation. A Deep Q Network (DQN) was demonstrated to perform target reaching after training in simulation. A naive transfer of the network to real hardware and real observations failed, but experiments show that the network works when camera images are replaced with synthetic images.
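A minimal sketch of the kind of Q-network such a system might use is given below (PyTorch); the layer sizes, input resolution, and discrete action count are illustrative assumptions, not the paper's exact architecture.

```python
# Sketch of a DQN for raw-pixel target reaching: convolutional features
# feed a head that outputs one Q-value per discrete joint action, e.g.
# 3 joints x {+delta, 0, -delta} = 9 actions (an assumed action set).
import torch
import torch.nn as nn

class QNetwork(nn.Module):
    def __init__(self, n_actions=9):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=8, stride=4), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2), nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.LazyLinear(256), nn.ReLU(),
            nn.Linear(256, n_actions),   # one Q-value per action
        )

    def forward(self, frame):            # frame: (B, 1, 84, 84) grayscale
        return self.head(self.features(frame))
```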
Abstract:
Reputation systems are employed to measure the quality of items on the Web. Incorporating accurate reputation scores in recommender systems helps provide more accurate recommendations, as recommenders are otherwise agnostic to reputation. The rating aggregation process is a vital component of a reputation system. Available reputation models do not consider statistical data in the rating aggregation process, a limitation that can reduce the accuracy of the generated reputation scores. In this paper, we propose a new reputation model that incorporates this previously ignored statistical data, and we compare it against state-of-the-art models in a top-N recommender system experiment.
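The abstract does not spell out which statistical data the proposed model uses, so the sketch below only illustrates the general idea of statistically informed aggregation: shrinking the raw mean toward a prior when ratings are few or highly dispersed. The weighting scheme is a hypothetical example, not the paper's model.

```python
# Hypothetical statistically informed rating aggregation: items with few
# or highly dispersed ratings are pulled toward a global prior rather
# than trusting the raw mean.
import numpy as np

def reputation_score(ratings, prior_mean=3.0, prior_strength=5.0):
    """Shrinkage-style aggregate of a list of ratings (e.g. 1-5 stars)."""
    ratings = np.asarray(ratings, dtype=float)
    n = len(ratings)
    if n == 0:
        return prior_mean
    weight = n / (n + prior_strength + ratings.var())
    return weight * ratings.mean() + (1 - weight) * prior_mean
```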
Abstract:
Background: Although thermal imaging can be a valuable technology in the prevention and management of diabetic foot disease, it is not yet widely used in clinical practice. Technological advancement in infrared imaging increases its application range. The aim was to explore the first steps in the applicability of high-resolution infrared thermal imaging for noninvasive automated detection of signs of diabetic foot disease. Methods: The plantar foot surfaces of 15 diabetes patients were imaged with an infrared camera (resolution, 1.2 mm/pixel): 5 patients had no visible signs of foot complications, 5 patients had local complications (e.g., abundant callus or neuropathic ulcer), and 5 patients had diffuse complications (e.g., Charcot foot, infected ulcer, or critical ischemia). Foot temperature was calculated as the mean temperature across pixels for the whole foot and for specified regions of interest (ROIs). Results: No differences in mean temperature >1.5 °C between the ipsilateral and the contralateral foot were found in patients without complications. In patients with local complications, mean temperatures of the ipsilateral and the contralateral foot were similar, but the temperature at the ROI was >2 °C higher compared with the corresponding region in the contralateral foot and with the mean of the whole ipsilateral foot. In patients with diffuse complications, mean temperature differences of >3 °C between the ipsilateral and contralateral foot were found. Conclusions: With an algorithm based on parameters that can be captured and analyzed with a high-resolution infrared camera and a computer, it is possible to detect signs of diabetic foot disease and to discriminate between no, local, or diffuse diabetic foot complications. As such, an intelligent telemedicine monitoring system for noninvasive automated detection of signs of diabetic foot disease is one step closer. Future studies are essential to confirm and extend these promising early findings.
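The thresholds reported above (1.5 °C, 2 °C, 3 °C) suggest a simple decision rule; a sketch of that logic, assuming per-pixel temperature maps and a given ROI mask, might look like this:

```python
# Decision rule implied by the reported thresholds; inputs are per-pixel
# temperature maps (degC) for the ipsilateral and contralateral foot.
import numpy as np

def classify_foot(ipsi, contra, roi_mask=None):
    """Classify complications from temperature asymmetry."""
    mean_diff = abs(ipsi.mean() - contra.mean())
    if mean_diff > 3.0:                       # whole-foot asymmetry
        return "diffuse complications"
    if roi_mask is not None:
        roi_diff = ipsi[roi_mask].mean() - contra[roi_mask].mean()
        if roi_diff > 2.0:                    # local hot spot
            return "local complication at ROI"
    if mean_diff <= 1.5:
        return "no visible complications"
    return "indeterminate"
```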
Abstract:
Background: Skin temperature assessment is a promising modality for early detection of diabetic foot problems, but its diagnostic value has not been studied. Our aims were to investigate the diagnostic value of different cutoff skin temperature values for detecting diabetes-related foot complications such as ulceration, infection, and Charcot foot, and to determine the urgency of treatment in case of diagnosed infection or a red-hot swollen foot. Materials and Methods: The plantar foot surfaces of 54 patients with diabetes visiting the outpatient foot clinic were imaged with an infrared camera. Nine patients had complications requiring immediate treatment, 25 patients had complications requiring non-immediate treatment, and 20 patients had no complications requiring treatment. Average pixel temperature was calculated for six predefined spots and for the whole foot. We calculated the area under the receiver operating characteristic curve for different cutoff skin temperature values, using clinical assessment as reference, and determined the sensitivity and specificity for the most optimal cutoff temperature value. Mean temperature difference between feet was analyzed using the Kruskal–Wallis test. Results: The most optimal cutoff skin temperature value for detection of diabetes-related foot complications was a 2.2 °C difference between contralateral spots (sensitivity, 76%; specificity, 40%). The most optimal cutoff skin temperature value for determining urgency of treatment was a 1.35 °C difference between the mean temperatures of the left and right foot (sensitivity, 89%; specificity, 78%). Conclusions: Detection of diabetes-related foot complications based on local skin temperature assessment is hindered by low diagnostic values. The mean temperature difference between the two feet may be an adequate marker for determining urgency of treatment.
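The cutoff evaluation described above can be reproduced in outline as follows: sweep cutoffs over the temperature differences, compute the ROC curve against the clinical reference, and read off sensitivity and specificity at a candidate cutoff. The data arrays here are placeholders, not the study's measurements.

```python
# ROC-based cutoff evaluation for a temperature-difference classifier.
import numpy as np
from sklearn.metrics import roc_curve, roc_auc_score

temp_diff = np.array([0.4, 1.1, 1.6, 2.5, 3.0])    # degC (placeholder)
has_complication = np.array([0, 0, 1, 1, 1])        # clinical reference

fpr, tpr, cutoffs = roc_curve(has_complication, temp_diff)
auc = roc_auc_score(has_complication, temp_diff)

cutoff = 1.35                                        # candidate from the study
pred = temp_diff >= cutoff
sensitivity = (pred & (has_complication == 1)).sum() / (has_complication == 1).sum()
specificity = (~pred & (has_complication == 0)).sum() / (has_complication == 0).sum()
```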
Abstract:
Early detection of (pre-)signs of ulceration on a diabetic foot is valuable for clinical practice. Hyperspectral imaging is a promising technique for the detection and classification of such (pre-)signs. However, the number of spectral bands should be limited to avoid overfitting, which is critical for pixel classification with hyperspectral image data. The goal was to design a detector/classifier based on spectral imaging (SI) with a small number of optical bandpass filters; the performance and stability of the design were also investigated. The selection of the bandpass filters boils down to a feature selection problem. A dataset was built containing reflectance spectra of 227 skin spots from 64 patients, measured with a spectrometer. Each skin spot was annotated manually by clinicians as "healthy" or as a specific (pre-)sign of ulceration. Statistical analysis of the dataset showed that the number of required filters is between 3 and 7, depending on additional constraints on the filter set. The stability analysis revealed that shot noise was the most critical factor affecting classification performance. It indicated that this impact could be avoided in future SI systems with a camera sensor whose saturation level is higher than 10⁶, or by post-processing of the images.
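Since the filter selection reduces to feature selection, a standard greedy forward search over bands, as sketched below, is one way to pick 3 to 7 filters; the classifier and scoring are illustrative, not necessarily the paper's method.

```python
# Greedy forward feature selection over spectral bands; the classifier
# and cross-validation settings are illustrative assumptions.
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression

def select_bands(spectra, labels, n_bands=5):
    """spectra: (n_spots, n_wavelengths) reflectance; labels: clinical
    annotations. Returns indices of the chosen wavelength bands."""
    selector = SequentialFeatureSelector(
        LogisticRegression(max_iter=1000),
        n_features_to_select=n_bands, direction="forward", cv=5)
    selector.fit(spectra, labels)
    return np.flatnonzero(selector.get_support())
```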
Abstract:
Surveying threatened and invasive species to obtain accurate population estimates is an important but challenging task that requires a considerable investment in time and resources. Estimates using existing ground-based monitoring techniques, such as camera traps and surveys performed on foot, are known to be resource intensive, potentially inaccurate and imprecise, and difficult to validate. Recent developments in unmanned aerial vehicles (UAVs), artificial intelligence, and miniaturized thermal imaging systems represent a new opportunity for wildlife experts to inexpensively survey relatively large areas. The system presented in this paper includes thermal image acquisition as well as a video processing pipeline that performs object detection, classification, and tracking of wildlife in forest or open areas. The system is tested on thermal video data from ground-based and test-flight footage and is found to detect all the target wildlife located in the surveyed area. The system is flexible in that the user can readily define the types of objects to classify and the object characteristics that should be considered during classification.
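A minimal sketch of the hot-spot detection stage of such a pipeline is shown below: threshold the thermal frame and extract blob bounding boxes to feed the classifier and tracker. The threshold and minimum blob size are assumptions.

```python
# Hot-spot detection on a thermal frame: warm animals appear as bright
# blobs against cooler backgrounds. Parameters are illustrative.
import cv2

def detect_hotspots(thermal_gray, thresh=200, min_area=25):
    """Return bounding boxes (x, y, w, h) of warm blobs in a uint8
    thermal frame, filtered by a minimum blob area."""
    _, binary = cv2.threshold(thermal_gray, thresh, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return [cv2.boundingRect(c) for c in contours
            if cv2.contourArea(c) >= min_area]
```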
Abstract:
The use of UAVs for remote sensing tasks, e.g., agriculture and search and rescue, is increasing. The ability for UAVs to autonomously find a target and perform on-board decision making, such as descending to a new altitude or landing next to a target, is a desired capability. Computer-vision functionality allows the Unmanned Aerial Vehicle (UAV) to follow a designated flight plan, detect an object of interest, and change its planned path. In this paper, we describe a low-cost, open-source system in which all image processing is achieved on board the UAV using a Raspberry Pi 2 microprocessor interfaced with a camera. The Raspberry Pi and the autopilot are physically connected through serial and communicate via MAVProxy. The Raspberry Pi continuously monitors the flight path in real time through a USB camera module, and the algorithm checks whether the target has been captured. If the target is detected, the position of the object in the frame is expressed in Cartesian coordinates and converted into estimated GPS coordinates. In parallel, the autopilot receives the target's approximate GPS location and makes a decision to guide the UAV to a new location. The system also has potential uses in precision agriculture, detecting plant pests and disease outbreaks that cause detrimental financial damage to crop yields if not caught early. Results show the algorithm detects 99% of objects of interest and that the UAV is capable of navigating and making on-board decisions.
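The pixel-to-GPS conversion the abstract describes can be sketched as follows for a nadir-pointing camera: use altitude and field of view to turn a pixel offset into a metric ground offset, then shift the UAV's GPS fix. The field-of-view value, flat terrain, and zero yaw are illustrative assumptions.

```python
# Pixel offset -> estimated GPS coordinate for a downward-facing camera,
# assuming flat terrain and zero yaw; FOV and image size are assumptions.
import math

def pixel_to_gps(px, py, img_w, img_h, alt_m, uav_lat, uav_lon,
                 fov_deg=60.0):
    """Map a detected pixel (px, py) to an approximate GPS coordinate."""
    ground_width = 2 * alt_m * math.tan(math.radians(fov_deg / 2))
    m_per_px = ground_width / img_w
    dx = (px - img_w / 2) * m_per_px      # east offset, metres
    dy = (img_h / 2 - py) * m_per_px      # north offset, metres
    dlat = dy / 111_320.0                 # metres per degree latitude
    dlon = dx / (111_320.0 * math.cos(math.radians(uav_lat)))
    return uav_lat + dlat, uav_lon + dlon
```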
Abstract:
Photography is now a highly automated activity where people enjoy phototaking by pointing and pressing a button. While this liberates people from having to interact with the processes of photography, e.g., controlling the parameters of the camera or printing images in the darkroom, we argue that an engagement with such processes can in fact enrich people's experience of phototaking. Drawing from fieldwork with members of a film-based photography club, we found that people who engage deeply with the various processes of phototaking experienced photography richly and meaningfully. Being able to participate fully in the entire process gave them a sense of achievement over the final result. Having the opportunity to engage with the process also allowed them to learn and hone their photographic skills. Through this understanding, we can imagine future technologies that enrich experiences of photography through providing the means to interact with photographic processes in new ways.
Abstract:
Robotic vision is limited by line of sight and onboard camera capabilities. Robots can acquire video or images from remote cameras, but processing the additional data carries a computational burden. This paper applies the Distributed Robotic Vision Service, DRVS, to robot path planning using data from outside the robot's line of sight. DRVS implements a distributed visual object detection service that distributes the computation to remote camera nodes with processing capabilities. Robots request task-specific object detection from DRVS by specifying a geographic region of interest and an object type. The remote camera nodes perform the visual processing and send the high-level object information to the robot. Additionally, DRVS relieves robots of sensor discovery by dynamically distributing object detection requests to remote camera nodes. Tested on two different indoor path planning tasks, DRVS showed a dramatic reduction in mobile robot compute load and wireless network utilization.