185 results for Cameras
at Queensland University of Technology - ePrints Archive
Abstract:
In this paper, a generic decoupled image-based control scheme for calibrated cameras obeying the unified projection model is proposed. The proposed decoupled scheme is based on the surface of object projections onto the unit sphere. Such features are invariant to rotational motion, which allows translational motion to be controlled independently of rotational motion. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robot platform.
Abstract:
Red light cameras (RLCs) have been used in a number of US cities to yield a demonstrable reduction in red light violations; however, evaluating their impact on safety (crashes) has been relatively more difficult. Accurately estimating the safety impacts of RLCs is challenging for several reasons. First, many safety-related factors are uncontrolled and/or confounded during the periods of observation. Second, “spillover” effects caused by drivers reacting to non-RLC-equipped intersections and approaches can make the selection of comparison sites difficult. Third, sites selected for RLC installation may not be selected randomly, and as a result may suffer from regression-to-the-mean bias. Finally, crash severity and the resulting costs need to be considered in order to fully understand the safety impacts of RLCs. Recognizing these challenges, a study was conducted to estimate the safety impacts of RLCs on traffic crashes at signalized intersections in the cities of Phoenix and Scottsdale, Arizona. Twenty-four RLC-equipped intersections in the two cities were examined in detail and conclusions drawn. Four different evaluation methodologies were employed to cope with the technical challenges described in this paper and to assess the sensitivity of results to analytical assumptions. The evaluation results indicated that both Phoenix and Scottsdale are operating cost-effective installations of RLCs; however, the variability in RLC effectiveness within jurisdictions is larger in Phoenix. Consistent with findings in other regions, angle and left-turn crashes are reduced in general, while rear-end crashes tend to increase as a result of RLCs.
Abstract:
This paper proposes a generic decoupled image-based control scheme for cameras obeying the unified projection model. The scheme is based on the spherical projection model. Invariants to rotational motion are computed from this projection and used to control the translational degrees of freedom. Importantly, we form invariants which decrease the sensitivity of the interaction matrix to object depth variation. Finally, the proposed results are validated with experiments using a classical perspective camera as well as a fisheye camera mounted on a 6-DOF robotic platform.
Abstract:
Visual recording devices such as video cameras, CCTVs, or webcams have been broadly used to facilitate work progress or safety monitoring on construction sites. Without human intervention, however, both real-time reasoning about captured scenes and interpretation of recorded images are challenging tasks. This article presents an exploratory method for automated object identification using standard video cameras on construction sites. The proposed method supports real-time detection and classification of mobile heavy equipment and workers. The background subtraction algorithm extracts motion pixels from an image sequence; the pixels are then grouped into regions representing moving objects; finally, the regions are identified as particular object classes using classifiers. For evaluating the method, the formulated computer-aided process was implemented on actual construction sites, and promising results were obtained. This article is expected to contribute to future applications of automated monitoring systems for work zone safety or productivity.
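The first two stages of this pipeline (background subtraction, then grouping motion pixels into regions) can be sketched minimally as below. The simple frame-difference model, the threshold, and the flood-fill labelling are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def motion_mask(background, frame, thresh=25):
    """Pixels whose intensity differs from the background beyond a threshold."""
    return np.abs(frame.astype(int) - background.astype(int)) > thresh

def label_regions(mask):
    """Group motion pixels into 4-connected regions via flood fill."""
    labels = np.zeros(mask.shape, dtype=int)
    current = 0
    for y, x in zip(*np.nonzero(mask)):
        if labels[y, x]:
            continue
        current += 1
        stack = [(y, x)]
        while stack:
            i, j = stack.pop()
            if (0 <= i < mask.shape[0] and 0 <= j < mask.shape[1]
                    and mask[i, j] and not labels[i, j]):
                labels[i, j] = current
                stack += [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
    return labels, current

bg = np.zeros((8, 8), dtype=np.uint8)
frame = bg.copy()
frame[1:3, 1:3] = 200   # one moving object
frame[5:7, 5:7] = 180   # another moving object
labels, n = label_regions(motion_mask(bg, frame))
assert n == 2           # two candidate regions found for classification
```

Each labelled region would then be passed to a classifier (equipment vs. worker) in the final stage.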
Abstract:
Accurate and efficient thermal-infrared (IR) camera calibration is important for advancing computer vision research within the thermal modality. This paper presents an approach for geometrically calibrating individual and multiple cameras in both the thermal and visible modalities. The proposed technique can be used to correct for lens distortion and to simultaneously reference both visible and thermal-IR cameras to a single coordinate frame. The most popular existing approach for the geometric calibration of thermal cameras uses a printed chessboard heated by a flood lamp and is comparatively inaccurate and difficult to execute. Additionally, software toolkits provided for calibration either are unsuitable for this task or require substantial manual intervention. A new geometric mask with high thermal contrast and not requiring a flood lamp is presented as an alternative calibration pattern. Calibration points on the pattern are then accurately located using a clustering-based algorithm which utilizes the maximally stable extremal region detector. This algorithm is integrated into an automatic end-to-end system for calibrating single or multiple cameras. The evaluation shows that using the proposed mask achieves a mean reprojection error up to 78% lower than that using a heated chessboard. The effectiveness of the approach is further demonstrated by using it to calibrate two multiple-camera multiple-modality setups. Source code and binaries for the developed software are provided on the project Web site.
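The mean reprojection error used above to compare calibration patterns is simple to state concretely. The sketch below assumes a plain pinhole model with made-up intrinsics; it is an illustration of the metric, not of the paper's calibration pipeline.

```python
import numpy as np

def project(K, R, t, X):
    """Project 3-D points X (N,3) with intrinsics K and pose (R, t)."""
    x = (K @ (R @ X.T + t.reshape(3, 1))).T
    return x[:, :2] / x[:, 2:3]

def mean_reprojection_error(K, R, t, X, observed):
    """Mean pixel distance between projected and detected pattern points."""
    return float(np.mean(np.linalg.norm(project(K, R, t, X) - observed, axis=1)))

K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.array([0.0, 0.0, 2.0])
X = np.array([[0.0, 0.0, 0.0], [0.1, 0.0, 0.0], [0.0, 0.1, 0.0], [0.1, 0.1, 0.0]])
obs = project(K, R, t, X)        # perfect detections give zero error
assert mean_reprojection_error(K, R, t, X, obs) < 1e-9
```

A lower mean reprojection error on detected calibration points (as reported for the proposed mask versus a heated chessboard) indicates a better fit of the estimated camera model.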
Abstract:
Red light cameras (RLCs) have been used to reduce right-angle collisions at signalized intersections. However, the effect of RLCs on motorcycle crashes has not been well investigated. The objective of this study is to evaluate the effectiveness of RLCs on motorcycle safety in Singapore. This is done by comparing motorcycles' exposure, their proneness to at-fault right-angle crashes, and the resulting right-angle collisions at RLC sites with those at non-RLC sites. Estimating crash vulnerability from not-at-fault crash involvements, the study shows that with an RLC, the relative crash vulnerability, or crash-involved exposure, of motorcycles in right-angle crashes is reduced. Furthermore, field investigation of motorcycle maneuvers reveals that at non-RLC arms, motorcyclists usually queue beyond the stop line, facilitating an earlier discharge and hence becoming more exposed to the conflicting stream. At arms with an RLC, however, motorcyclists are more restrained, to avoid activating the RLC, and hence become less exposed to conflicting traffic during the initial period of green. The study also shows that in right-angle collisions, the proneness to at-fault crashes is lowest for motorcycles among all vehicle types; hence motorcycles are more likely to be victims than the responsible parties in right-angle crashes. RLCs have also been found to be very effective in reducing at-fault crash involvements of other vehicle types, which may otherwise implicate exposed motorcycles in the conflicting stream. Taking all this into account, the presence of RLCs should significantly reduce the vulnerability of motorcycles at signalized intersections.
Abstract:
This paper presents an approach for the automatic calibration of low-cost cameras which are assumed to be restricted in their freedom of movement to either pan or tilt movements. Camera parameters, including focal length, principal point, lens distortion parameter, and the angle and axis of rotation, can be recovered from a minimum set of two images, provided that the axis of rotation between the two images goes through the camera's optical center and is parallel to either the vertical (panning) or horizontal (tilting) axis of the image. Previous methods for auto-calibration of cameras based on pure rotations fail in these two degenerate cases. In addition, our approach includes a modified RANdom SAmple Consensus (RANSAC) algorithm, as well as improved integration of the radial distortion coefficient in the computation of inter-image homographies. We show that these modifications increase the overall efficiency, reliability, and accuracy of the homography computation and calibration procedure on both synthetic and real image sequences.
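The inter-image homography estimation at the core of such approaches can be sketched as a plain RANSAC loop around a direct linear transform (DLT); the paper's specific RANSAC modifications and distortion handling are not reproduced here, so this is a generic baseline sketch only.

```python
import numpy as np

def dlt_homography(src, dst):
    """Direct linear transform: homography from >= 4 point correspondences."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, float))
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]

def ransac_homography(src, dst, iters=200, tol=2.0):
    """Plain RANSAC: sample 4 matches, fit, count reprojection inliers."""
    rng = np.random.default_rng(0)
    n = len(src)
    best_H, best_inliers = None, 0
    for _ in range(iters):
        idx = rng.choice(n, 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        proj = np.c_[src, np.ones(n)] @ H.T
        proj = proj[:, :2] / proj[:, 2:3]
        inliers = int(np.sum(np.linalg.norm(proj - dst, axis=1) < tol))
        if inliers > best_inliers:
            best_H, best_inliers = H, inliers
    return best_H, best_inliers

# Synthetic check: recover a known homography from noiseless matches
rng = np.random.default_rng(42)
src = rng.uniform(0, 100, (20, 2))
H_true = np.array([[1.0, 0.05, 5.0], [0.02, 0.95, -3.0], [1e-4, 0.0, 1.0]])
pts = np.c_[src, np.ones(20)] @ H_true.T
dst = pts[:, :2] / pts[:, 2:3]
H, inliers = ransac_homography(src, dst)
assert inliers == 20
```

The paper's contribution sits inside this loop: modifying the sampling/consensus step and folding the radial distortion coefficient into the homography computation.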
Abstract:
Audio-visual speech recognition, or the combination of visual lip-reading with traditional acoustic speech recognition, has previously been shown to provide a considerable improvement over acoustic-only approaches in noisy environments, such as that present in an automotive cabin. The research presented in this paper extends the established audio-visual speech recognition literature to show that further improvements in speech recognition accuracy can be obtained when multiple frontal or near-frontal views of a speaker's face are available. A series of visual speech recognition experiments using a four-stream visual synchronous hidden Markov model (SHMM) are conducted on the four-camera AVICAR automotive audio-visual speech database. We study the relative contribution of the side and centrally oriented cameras in improving visual speech recognition accuracy. Finally, combination of the four visual streams with a single audio stream in a five-stream SHMM demonstrates a relative improvement of over 56% in word recognition accuracy compared with the acoustic-only approach in the noisiest conditions of the AVICAR database.
Abstract:
This paper looks at the accuracy of using the built-in camera of smartphones and free software as an economical way to quantify and analyse light exposure by producing luminance maps from High Dynamic Range (HDR) images. HDR images were captured with an Apple iPhone 4S to capture a wide variation of luminance within an indoor and an outdoor scene. The HDR images were then processed using Photosphere software (Ward, 2010) to produce luminance maps, in which individual pixel values were compared with calibrated luminance meter readings. This comparison showed an average luminance error of ~8% between the HDR image pixel values and the luminance meter readings, when the range of luminances in the image is limited to approximately 1,500 cd/m².
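The reported ~8% figure is a mean relative error between pixel-derived luminances and meter readings at the same measurement points. A minimal sketch of that comparison, with made-up values (not the study's data):

```python
import numpy as np

def mean_relative_error(pixel_luminance, meter_luminance):
    """Average |pixel - meter| / meter over all measurement points."""
    p = np.asarray(pixel_luminance, float)
    m = np.asarray(meter_luminance, float)
    return float(np.mean(np.abs(p - m) / m))

meter = np.array([120.0, 450.0, 900.0, 1400.0])    # cd/m^2, hypothetical readings
pixels = np.array([125.0, 430.0, 960.0, 1310.0])   # HDR map values at same spots
err = mean_relative_error(pixels, meter)
assert 0.0 < err < 0.15
```

Restricting the comparison to scenes whose luminance range stays below the stated limit keeps the camera's response within the region where this error remains small.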
Abstract:
This paper addresses the problem of automatically estimating the relative pose between a push-broom LIDAR and a camera without the need for artificial calibration targets or other human intervention. Further, we do not require the sensors to have an overlapping field of view; it is enough that they observe the same scene, but at different times, from a moving platform. Matching between sensor modalities is achieved without feature extraction. We present results from field trials which suggest that this new approach achieves an extrinsic calibration accuracy of millimeters in translation and deci-degrees in rotation.
Abstract:
In this paper we demonstrate passive vision-based localization in environments more than two orders of magnitude darker than the current benchmark, using a $100 webcam and a $500 camera. Our approach uses the cameras' maximum exposure duration and sensor gain to achieve appropriately exposed images even in unlit night-time environments, albeit with extreme levels of motion blur. Using the SeqSLAM algorithm, we first evaluate the effect of variable motion blur caused by simulated exposures of 132 ms to 10,000 ms duration on localization performance. We then use actual long-exposure camera datasets to demonstrate day-night localization in two different environments. Finally, we perform a statistical analysis that compares the baseline performance of matching unprocessed greyscale images to using patch normalization and local neighbourhood normalization – the two key SeqSLAM components. Our results and analysis show for the first time why the SeqSLAM algorithm is effective, and demonstrate the potential for cheap camera-based localization systems that function across extreme perceptual change.
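Patch normalization, one of the two SeqSLAM components analysed above, can be sketched as follows: each fixed-size patch is normalized to zero mean and unit variance, which removes local gain and offset changes such as those between day and night imagery. The patch size here is an illustrative choice, not the paper's parameter.

```python
import numpy as np

def patch_normalize(image, patch=4):
    """Zero-mean, unit-variance normalization of each patch-by-patch block."""
    out = image.astype(float).copy()
    h, w = image.shape
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            block = out[y:y + patch, x:x + patch]
            std = block.std()
            out[y:y + patch, x:x + patch] = (
                (block - block.mean()) / std if std > 0 else 0.0
            )
    return out

rng = np.random.default_rng(1)
day = rng.uniform(0, 255, (8, 8))
night = day * 0.1 + 5.0     # global gain/offset change, e.g. low light

# After patch normalization the two images become (numerically) identical,
# so appearance-based matching across the day-night change succeeds.
assert np.allclose(patch_normalize(day), patch_normalize(night))
```

This invariance to affine intensity changes is part of why sequence matching still works on heavily underexposed or blurred night imagery.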
Abstract:
This work aims to contribute to the reliability and integrity of perceptual systems of unmanned ground vehicles (UGV). A method is proposed to evaluate the quality of sensor data prior to its use in a perception system, by utilising a quality metric applied to heterogeneous sensor data such as visual and infrared camera images. The concept is illustrated specifically with sensor data that is evaluated prior to the use of the data in a standard SIFT feature extraction and matching technique. The method is then evaluated using various experimental data sets that were collected from a UGV in challenging environmental conditions, represented by the presence of airborne dust and smoke. In the first series of experiments, a motionless vehicle observes a 'reference' scene; the method is then extended to the case of a moving vehicle by compensating for its motion. This paper shows that it is possible to anticipate degradation of a perception algorithm by evaluating the input data prior to any actual execution of the algorithm.
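The idea of scoring sensor data before running a perception algorithm can be sketched as a simple quality gate. The entropy-based metric and threshold below are illustrative assumptions only, not the paper's metric.

```python
import numpy as np

def image_entropy(image, bins=32):
    """Shannon entropy of the intensity histogram (low under dense dust/smoke)."""
    hist, _ = np.histogram(image, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def accept_for_matching(image, min_entropy=3.0):
    """Gate: pass the frame to feature extraction only if quality is adequate."""
    return image_entropy(image) >= min_entropy

rng = np.random.default_rng(2)
textured = rng.integers(0, 256, (32, 32))    # well-textured scene
washed_out = np.full((32, 32), 128)          # near-uniform frame, e.g. dense smoke
assert accept_for_matching(textured)
assert not accept_for_matching(washed_out)
```

Rejecting low-quality frames up front avoids spending computation on SIFT matching that is likely to fail, which is the degradation-anticipation behaviour the paper describes.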
Abstract:
Disjoint top-view networked cameras are among the most commonly utilized networks in many applications. One of the open questions in the study of these cameras is the computation of their extrinsic parameters (positions and orientations), known as extrinsic calibration or localization of the cameras. Current approaches either rely on strict assumptions about the object motion to obtain accurate results or, without such motion requirements, fail to provide results of high accuracy. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which are passed by the object opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and object trajectory based on the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of field of view), is presented to assess the quality of the calibration results in an indoor object tracking context. Finally, results herein demonstrate the superior performance of the proposed method over a state-of-the-art algorithm, based on the presented MABR and a classical evaluation metric, in both simulations and real experiments.
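The MABR metric reduces to: back-project every image point under the estimated and true parameters, take the maximum localization error norm, and normalize by the field-of-view area. The sketch below uses a simple 2-D ground-plane model as an illustrative assumption.

```python
import numpy as np

def mabr(backproj_est, backproj_true, fov_area):
    """Max back-projected localization error norm relative to the FOV area."""
    errors = np.linalg.norm(backproj_est - backproj_true, axis=1)
    return float(errors.max() / fov_area)

# Hypothetical ground-plane back-projections of a grid of image points
true_pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
est_pts = true_pts + np.array([[0.01, 0.0], [0.0, 0.02], [0.01, 0.01], [0.0, 0.0]])
score = mabr(est_pts, true_pts, fov_area=1.0)
assert abs(score - 0.02) < 1e-9
```

Because the metric takes the worst case over the whole field of view, it directly bounds how badly a tracked object can be mislocalized anywhere in the camera's coverage, which is what matters in the indoor tracking task.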