903 results for night vision system
Abstract:
Background: This study investigated the effects of experimentally induced visual impairment, headlamp glare and clothing on pedestrian visibility. Methods: 28 young adults (M=27.6±4.7 yrs) drove around a closed road circuit at night while pedestrians walked in place at the roadside. Pedestrians wore either black clothing, black clothing with a rectangular vest consisting of 1325 cm2 of retroreflective tape, or the same amount of tape positioned on the extremities in a configuration that conveyed biological motion (“biomotion”). Visual impairment was induced by goggles containing either blurring lenses, simulated cataracts, or clear lenses; visual acuity for the cataract and blurred lens conditions was matched. Drivers pressed a response pad when they first recognized that a pedestrian was present. Sixteen participants drove around the circuit in the presence of headlamp glare while twelve drove without glare. Results: Visual impairment, headlamp glare and pedestrian clothing all significantly affected drivers’ ability to recognize pedestrians (p<0.05). The simulated cataracts were more disruptive than blur, even though acuity was matched across the two manipulations. Pedestrians were recognized more often and at longer distances when they wore “biomotion” clothing than either the vest or black clothing, even in the presence of visual impairment and glare. Conclusions: Drivers’ ability to see and respond to pedestrians at night is degraded by modest visual impairments even when vision meets driver licensing requirements; glare further exacerbates these effects. Clothing that includes retroreflective tape in a biological motion configuration is relatively robust to visual impairment and glare.
Abstract:
This paper describes a novel obstacle detection system for autonomous robots in agricultural field environments that uses a novelty detector to inform stereo matching. Stereo vision alone erroneously detects obstacles in environments with ambiguous appearance and ground plane, such as broad-acre crop fields with harvested crop residue. The novelty detector estimates the probability density in image descriptor space and incorporates image-space positional understanding to identify potential regions for obstacle detection using dense stereo matching. The results demonstrate that the system is able to detect obstacles typical of a farm by day and night. This system was successfully used as the sole means of obstacle detection for an autonomous robot performing a long-term, two-hour coverage task, travelling 8.5 km.
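The density-based novelty test at the heart of this pipeline can be sketched as follows. This is a toy Gaussian kernel density estimate: the descriptors, bandwidth, and threshold are illustrative stand-ins, and the paper's image-space positional term is omitted.

```python
import numpy as np

def descriptor_density(train, query, bandwidth=0.5):
    """Toy Gaussian KDE in descriptor space: low density under the
    'normal terrain' training set marks a region as novel, i.e. a
    candidate for dense stereo obstacle checking."""
    d2 = ((query[:, None, :] - train[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2)).mean(axis=1)

rng = np.random.default_rng(0)
train = rng.normal(size=(200, 2))   # stand-in descriptors from obstacle-free imagery
query = np.array([[0.0, 0.0],       # looks like typical terrain
                  [6.0, 6.0]])      # unlike anything seen in training
density = descriptor_density(train, query)
is_novel = density < 0.05           # hypothetical novelty threshold
```

Only the novel region would then be handed to dense stereo matching, avoiding false obstacles on ambiguous ground texture.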
Abstract:
Hot metal carriers (HMCs) are large forklift-type vehicles used to move molten metal in aluminum smelters. This paper reports on field experiments that demonstrate that HMCs can operate autonomously and in particular can use vision as a primary sensor to locate the load of aluminum. We present our complete system but focus on the vision system elements and also detail experiments demonstrating reliable operation of the materials handling task. Two key experiments are described, lasting 2 and 5 h, in which the HMC traveled 15 km in total and handled the load 80 times.
Abstract:
In recent years a significant amount of research has been undertaken in collision avoidance and personnel location technology in order to reduce the number of incidents involving pedestrians and mobile plant equipment, which pose a high risk in underground coal mines. Improving the visibility of pedestrians to drivers would potentially reduce the likelihood of these incidents. In the road safety context, a variety of approaches have been used to make pedestrians more conspicuous to drivers at night (including vehicle and roadway lighting technologies and night vision enhancement systems). However, emerging research from our group and others has demonstrated that clothing incorporating retroreflective markers on the movable joints as well as the torso can provide highly significant improvements in pedestrian visibility in reduced illumination. Importantly, retroreflective markers are most effective when positioned on the movable joints, creating a sensation of “biological motion”. Based only on the motion of points on the movable joints of an otherwise invisible body, observers can quickly recognize a walking human form, and even correctly judge characteristics such as gender and weight. An important and as yet unexplored question is whether the benefits of these retroreflective clothing configurations translate to the context of mining, where workers operate under low light conditions. The fact that biomotion clothing benefits both young and older drivers, as well as those with various eye conditions common in people over 50, reinforces its potential application in the mining industry, which employs many workers in this age bracket. This paper will summarise the visibility benefits of retroreflective markers in a biomotion configuration for the mining industry, highlighting that this form of clothing has the potential to be an affordable and convenient way to provide a sizeable safety benefit.
It does not involve modifications to vehicles, drivers, or infrastructure. Instead, adding biomotion markings to standard retroreflective vests can enhance the night-time conspicuity of mining workers by capitalising on perceptual capabilities that have already been well documented.
Abstract:
The bipolar point spread function (PSF) corresponding to the Wiener filter for correcting linear-motion-blurred pictures is implemented in a noncoherent optical processor. Two approaches are taken for this implementation: (1) the PSF is modulated and biased so that the resulting function is non-negative, and (2) the PSF is split into its positive and sign-reversed negative parts, and these two parts are dealt with separately. The phase problem associated with arriving at the pupil function from these modified PSFs is solved using both analytical and combined analytical-iterative techniques available in the literature. The designed pupil functions are experimentally implemented, and deblurring in a noncoherent processor is demonstrated. The postprocessing required (i.e., demodulation in the first approach and intensity subtraction in the second) is carried out either in a coherent processor or with the help of a PC-based vision system. The deblurred outputs are presented.
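The second approach, splitting the bipolar PSF and subtracting the two partial outputs, rests on the linearity of convolution. A 1-D digital sketch of that identity (blur length, SNR, and test signal are illustrative, not from the paper):

```python
import numpy as np

def wiener_deblur_psf(blur_len=9, snr=100.0, size=64):
    """Bipolar PSF of the Wiener filter for a 1-D linear motion blur."""
    h = np.zeros(size)
    h[:blur_len] = 1.0 / blur_len                  # boxcar motion blur
    H = np.fft.fft(h)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # Wiener inverse filter
    return np.real(np.fft.ifft(W))

psf = wiener_deblur_psf()
pos = np.clip(psf, 0, None)    # non-negative part
neg = np.clip(-psf, 0, None)   # sign-reversed negative part

signal = np.random.default_rng(0).random(64)
conv = lambda s, k: np.real(np.fft.ifft(np.fft.fft(s) * np.fft.fft(k)))

# Filtering with the bipolar PSF equals the difference of the two
# non-negative convolutions (each realizable with incoherent light),
# followed by the intensity subtraction done in postprocessing:
full = conv(signal, psf)
split = conv(signal, pos) - conv(signal, neg)
```

Each non-negative part is physically realizable as an incoherent intensity PSF; only the final subtraction needs postprocessing.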
Abstract:
An attempt is made to present some challenging problems (mainly to the technically minded researchers) in the development of computational models for certain (visual) processes which are executed with, apparently, deceptive ease by the human visual system. However, in the interest of simplicity (and with a nonmathematical audience in mind), the presentation is almost completely devoid of mathematical formalism. Some of the findings in biological vision are presented in order to provoke some approaches to their computational models. The development of ideas is not complete, and the vast literature on biological and computational vision cannot be reviewed here. A related but rather specific aspect of computational vision (namely, detection of edges) has been discussed by Zucker, who brings out some of the difficulties experienced in the classical approaches. Space limitations here preclude any detailed analysis of even the elementary aspects of information processing in biological vision. However, the main purpose of the present paper is to highlight some of the fascinating problems in the frontier area of modelling mathematically the human vision system.
Abstract:
The D-vision system (where "D" carries the dual meaning of "Divide Screen" and "Duplex Vision") is a class of multi-projector virtual reality systems (multi-projector systems for short) built on a PC cluster. This paper presents the implementation of two-handed, six-degree-of-freedom haptic interaction in the D-vision system. On the client side, two Spidar-G (Space Interface for Artificial Reality with Grip) haptic devices are controlled cooperatively to realize two-handed collaborative interaction; a UDP-based Socket class is then constructed to handle communication between the client and the rendering-server nodes, passing information such as the position and orientation of the tracker ball; finally, distributed rendering produces a seamless display on the large screen. Experimental results show that two-handed six-degree-of-freedom haptic interaction in the D-vision system is a natural and intuitive form of human-computer interaction.
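The UDP communication step between client and rendering servers could look roughly like this; the packet layout, class names, and port are hypothetical, not taken from the paper:

```python
import socket
import struct

FMT = "<3f4f"  # hypothetical packet: position (x, y, z) + orientation quaternion

class TrackerSender:
    """Client side: sends the tracker's position and orientation to a
    rendering-server node over UDP."""
    def __init__(self, host="127.0.0.1", port=9000):
        self.addr = (host, port)
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    def send(self, position, orientation):
        self.sock.sendto(struct.pack(FMT, *position, *orientation), self.addr)

class TrackerReceiver:
    """Rendering-server side: receives one tracker state per datagram."""
    def __init__(self, host="127.0.0.1", port=9000):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        self.sock.bind((host, port))

    def recv(self):
        data, _ = self.sock.recvfrom(struct.calcsize(FMT))
        v = struct.unpack(FMT, data)
        return v[:3], v[3:]
```

UDP fits this use because a late tracker packet is worthless; dropping it and rendering the next state is preferable to TCP's retransmission delay.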
Abstract:
Early and intermediate vision algorithms, such as smoothing and discontinuity detection, are often implemented on general-purpose serial, and more recently, parallel computers. Special-purpose hardware implementations of low-level vision algorithms may be needed to achieve real-time processing. This memo reviews and analyzes some hardware implementations of low-level vision algorithms. Two types of hardware implementations are considered: the digital signal processing chips of Ruetz (and Broderson) and the analog VLSI circuits of Carver Mead. The advantages and disadvantages of these two approaches for producing a general, real-time vision system are considered.
Abstract:
Earlier, we introduced a direct method called fixation for the recovery of shape and motion in the general case. The method uses neither feature correspondence nor optical flow. Instead, it directly employs the spatiotemporal gradients of image brightness. This work reports the experimental results of applying some of our fixation algorithms to a sequence of real images where the motion is a combination of translation and rotation. These results show that parameters such as the fixation patch size have crucial effects on the estimation of some motion parameters. Some of the critical issues involved in the implementation of our autonomous motion vision system are also discussed here. Among those are the criteria for automatic choice of an optimum size for the fixation patch, and an appropriate location for the fixation point, which result in good estimates for important motion parameters. Finally, a calibration method is described for identifying the real location of the rotation axis in imaging systems.
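The spatiotemporal brightness gradients that such a direct method consumes can be computed by plain finite differences. A generic sketch (not the paper's implementation), verified on a ramp translating one pixel per frame:

```python
import numpy as np

def brightness_gradients(frames):
    """E_x, E_y, E_t from an image sequence of shape (T, H, W), by
    finite differences; these gradients are the only image measurements
    a direct method such as fixation needs (no correspondences, no flow)."""
    Et, Ey, Ex = np.gradient(frames.astype(float))
    return Ex, Ey, Et

# Synthetic check: a ramp translating one pixel per frame, E(x, y, t) = x - t
T, H, W = 5, 8, 8
t, y, x = np.mgrid[0:T, 0:H, 0:W]
frames = (x - t).astype(float)
Ex, Ey, Et = brightness_gradients(frames)
# Brightness constancy E_x*u + E_y*v + E_t = 0 holds with (u, v) = (1, 0)
```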
Abstract:
A fundamental task of vision systems is to infer the state of the world given some form of visual observations. From a computational perspective, this often involves facing an ill-posed problem; e.g., information is lost via projection of the 3D world into a 2D image. Solution of an ill-posed problem requires additional information, usually provided as a model of the underlying process. It is important that the model be both computationally feasible as well as theoretically well-founded. In this thesis, a probabilistic, nonlinear supervised computational learning model is proposed: the Specialized Mappings Architecture (SMA). The SMA framework is demonstrated in a computer vision system that can estimate the articulated pose parameters of a human body or human hands, given images obtained via one or more uncalibrated cameras. The SMA consists of several specialized forward mapping functions that are estimated automatically from training data, and a possibly known feedback function. Each specialized function maps certain domains of the input space (e.g., image features) onto the output space (e.g., articulated body parameters). A probabilistic model for the architecture is first formalized. Solutions to key algorithmic problems are then derived: simultaneous learning of the specialized domains along with the mapping functions, as well as performing inference given inputs and a feedback function. The SMA employs a variant of the Expectation-Maximization algorithm and approximate inference. The approach allows the use of alternative conditional independence assumptions for learning and inference, which are derived from a forward model and a feedback model. Experimental validation of the proposed approach is conducted in the task of estimating articulated body pose from image silhouettes. Accuracy and stability of the SMA framework are tested using artificial data sets, as well as synthetic and real video sequences of human bodies and hands.
Abstract:
Adequate hand-washing has been shown to be a critical activity in preventing the transmission of infections such as MRSA in health-care environments. Hand-washing guidelines published by various health-care institutions recommend a technique incorporating six hand-washing poses that ensure all areas of the hands are thoroughly cleaned. In this paper, an embedded wireless vision system (VAMP) capable of accurately monitoring hand-washing quality is presented. The VAMP system hardware consists of a low-resolution CMOS image sensor and FPGA processor which are integrated with a microcontroller and ZigBee standard wireless transceiver to create a wireless sensor network (WSN) based vision system that can be retargeted at a variety of health-care applications. The device captures and processes images locally in real time, determines if hand-washing procedures have been correctly undertaken and then passes the resulting high-level data over a low-bandwidth wireless link. The paper outlines the hardware and software mechanisms of the VAMP system and illustrates that it offers an easy-to-integrate sensor solution to adequately monitor and improve hand hygiene quality. Future work to develop a miniaturized, low-cost system capable of being integrated into everyday products is also discussed.
Abstract:
Gemstone Team FACE
Abstract:
Smart Spaces, Ambient Intelligence, and Ambient Assisted Living are environmental paradigms that strongly depend on their capability to recognize human actions. While most solutions rest on sensor value interpretations and video analysis applications, few have realized the importance of incorporating common-sense capabilities to support the recognition process. Unfortunately, human action recognition cannot be successfully accomplished by only analyzing body postures. On the contrary, this task should be supported by profound knowledge of human agency nature and its tight connection to the reasons and motivations that explain it. The combination of this knowledge and the knowledge about how the world works is essential for recognizing and understanding human actions without committing common-senseless mistakes. This work demonstrates the impact that episodic reasoning has in improving the accuracy of a computer vision system for human action recognition. This work also presents formalization, implementation, and evaluation details of the knowledge model that supports the episodic reasoning.
Abstract:
Ultrasonic, infrared, laser and other sensors are being applied in robotics. Although combinations of these have allowed robots to navigate, they are only suited for specific scenarios, depending on their limitations. Recent advances in computer vision are turning cameras into useful low-cost sensors that can operate in most types of environments. Cameras enable robots to detect obstacles, recognize objects, obtain visual odometry, detect and recognize people and gestures, among other possibilities. In this paper we present a completely biologically inspired vision system for robot navigation. It comprises stereo vision for obstacle detection, and object recognition for landmark-based navigation. We employ a novel keypoint descriptor which codes responses of cortical complex cells. We also present a biologically inspired saliency component, based on disparity and colour.
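A descriptor that codes complex-cell responses can be approximated with quadrature Gabor pairs: each even/odd pair models a simple-cell pair, and their energy gives the phase-invariant complex-cell response. This toy version (filter sizes and parameters are assumptions, not the paper's) illustrates the idea:

```python
import numpy as np

def gabor_pair(size=15, wavelength=6.0, theta=0.0, sigma=3.0):
    """Even/odd (quadrature) Gabor kernels modelling a simple-cell pair."""
    half = size // 2
    gy, gx = np.mgrid[-half:half + 1, -half:half + 1]
    xr = gx * np.cos(theta) + gy * np.sin(theta)
    gauss = np.exp(-(gx ** 2 + gy ** 2) / (2 * sigma ** 2))
    return (gauss * np.cos(2 * np.pi * xr / wavelength),
            gauss * np.sin(2 * np.pi * xr / wavelength))

def complex_cell_descriptor(patch, n_orient=8):
    """Energy-model responses at several orientations: a rough stand-in
    for descriptors built from cortical complex-cell responses."""
    desc = []
    for k in range(n_orient):
        even, odd = gabor_pair(theta=k * np.pi / n_orient)
        e = (patch * even).sum()     # even simple-cell response
        o = (patch * odd).sum()      # odd simple-cell response
        desc.append(np.hypot(e, o))  # phase-invariant energy
    v = np.array(desc)
    return v / (np.linalg.norm(v) + 1e-9)

# A vertical grating matched to theta = 0 should dominate the descriptor
y, x = np.mgrid[-7:8, -7:8]
patch = np.cos(2 * np.pi * x / 6.0)
desc = complex_cell_descriptor(patch)
```

The energy combination makes the response invariant to the grating's phase, which is what distinguishes complex cells from simple cells.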
Abstract:
This work presents an automatic calibration method for a vision-based external underwater ground-truth positioning system. Such systems are a relevant tool in benchmarking and assessing the quality of research in underwater robotics applications. In suitable environments, such as test tanks or clear-water conditions, a stereo vision system can provide accurate positioning with low cost and flexible operation. In this work we present a two-step extrinsic camera parameter calibration procedure that reduces the setup time and provides accurate results. The proposed method uses a planar homography decomposition to determine the relative camera poses, and the vanishing points of detected lines in the image to obtain the global pose of the stereo rig in the reference frame. This method was applied to our external vision-based ground-truth system at the INESC TEC/Robotics test tank. Results are presented in comparison with a precise calibration performed using points obtained from an accurate 3D LIDAR model of the environment.
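The planar-homography step, recovering a camera pose from the homography induced by a calibration plane, can be sketched as follows (assumes known intrinsics K; the vanishing-point step for the global pose is not shown, and this is a generic textbook decomposition rather than the paper's exact procedure):

```python
import numpy as np

def pose_from_plane_homography(H, K):
    """Recover camera pose relative to a plane (Z = 0) from a homography
    H mapping plane coordinates to pixels, using H ~ K [r1 r2 t]."""
    A = np.linalg.inv(K) @ H
    lam = 1.0 / np.linalg.norm(A[:, 0])       # scale fixed by |r1| = 1
    r1, r2, t = lam * A[:, 0], lam * A[:, 1], lam * A[:, 2]
    R = np.column_stack([r1, r2, np.cross(r1, r2)])
    U, _, Vt = np.linalg.svd(R)               # re-orthonormalize against noise
    return U @ Vt, t

# Synthetic check: a known pose reproduced from its (arbitrarily scaled) homography
K = np.array([[800.0, 0.0, 320.0], [0.0, 800.0, 240.0], [0.0, 0.0, 1.0]])
c, s = np.cos(0.3), np.sin(0.3)
R_true = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([0.1, -0.2, 2.0])
H = 3.7 * K @ np.column_stack([R_true[:, 0], R_true[:, 1], t_true])
R_est, t_est = pose_from_plane_homography(H, K)
```

Applied to each camera of the stereo rig against the same plane, two such poses give the relative camera pose by composition.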