332 results for Pushbroom camera
Abstract:
In this paper we present large, accurately calibrated and time-synchronized data sets, gathered outdoors in controlled and variable environmental conditions, using an unmanned ground vehicle (UGV) equipped with a wide variety of sensors. These include four 2D laser scanners, a radar scanner, a color camera and an infrared camera. The paper provides a full description of the system used for data collection and of the types of environments and conditions in which these data sets were gathered, including the presence of airborne dust, smoke and rain.
Abstract:
This thesis developed a method for real-time and handheld 3D temperature mapping using a combination of off-the-shelf devices and efficient computer algorithms. It contributes a new sensing and data processing framework to the science of 3D thermography, unlocking its potential for application areas such as building energy auditing and industrial monitoring. New techniques for the precise calibration of multi-sensor configurations were developed, along with several algorithms that ensure both accurate and comprehensive surface temperature estimates can be made for rich 3D models as they are generated by a non-expert user.
Abstract:
We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it. However, typical GPS errors make GPS-based navigation unsuitable for this task. When flying outdoors, a vehicle is also affected by aerodynamic disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefore be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by a) performing visual-inertial relative state estimation and b) using a robust line tracker and a nested controller design. Our state estimation fuses high-speed camera images (100 Hz) and 70 Hz IMU data in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering, and pole circumnavigation where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate the system's usefulness for inspection tasks.
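As a hedged illustration of the visual-inertial fusion described above, the following minimal sketch shows an EKF with IMU-driven prediction and camera-driven correction. The 1-D state, class name and noise values are illustrative assumptions, not details from the paper.

```python
# Minimal EKF sketch for pole-relative state estimation, assuming a
# simplified 1-D state [stand-off offset, velocity]; the paper's filter
# fuses 100 Hz camera and 70 Hz IMU data, details here are illustrative.
import numpy as np

class RelativeEKF:
    def __init__(self):
        self.x = np.zeros(2)            # [offset (m), velocity (m/s)]
        self.P = np.eye(2)              # state covariance
        self.Q = np.diag([1e-4, 1e-2])  # process noise (assumed values)
        self.R = np.array([[1e-3]])     # visual measurement noise (assumed)

    def predict(self, accel, dt):
        """Propagate the state with an IMU acceleration sample (~70 Hz)."""
        F = np.array([[1.0, dt], [0.0, 1.0]])
        B = np.array([0.5 * dt**2, dt])
        self.x = F @ self.x + B * accel
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z_offset):
        """Correct with a camera-derived stand-off measurement (~100 Hz)."""
        H = np.array([[1.0, 0.0]])
        y = z_offset - H @ self.x                 # innovation
        S = H @ self.P @ H.T + self.R
        K = self.P @ H.T @ np.linalg.inv(S)       # Kalman gain
        self.x = self.x + (K @ y).ravel()
        self.P = (np.eye(2) - K @ H) @ self.P
```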
Abstract:
This paper examines the role of visual information in a remote help-giving situation involving the collaborative physical task of designing a prototype remote control. We analyze a set of video recordings captured within an experimental setting. Our analysis shows that by using gestures and relevant artefacts, and by projecting activities to the camera, participants were able to discuss several design-related issues. The results indicate that with a limited camera view (mainly faces and shoulders), participants' conversations were centered on the physical prototype that they were designing. The socially organized use of our experimental setting provides some key implications for designing future remote collaborative systems.
Abstract:
This paper discusses the idea and demonstrates an early prototype of a novel method of interacting with security surveillance footage using natural user interfaces in place of traditional mouse and keyboard interaction. Current surveillance monitoring stations and systems provide the user with a vast array of video feeds from multiple locations on a video wall, relying on the user's ability to distinguish the locations of the live feeds from experience or from a list-based key-value pairing of locations and camera IDs. During an incident, this method of interaction may cause the user to spend an increased amount of time obtaining situational and location awareness, which is counter-productive. The system proposed in this paper demonstrates how a multi-touch screen and natural interaction can enable surveillance monitoring station users to quickly identify the location of a security camera and efficiently respond to an incident.
Abstract:
Virtual Production is a rapidly growing approach to filmmaking that utilises 3D software, virtual camera systems and motion capture technology to visually interact with a real-time virtual environment. The use of these technologies has continued to increase; however, little has been done to document the various approaches for incorporating this new filmmaking technique into a production. This practice-led research project outlines the development of virtual production in the entertainment industry and explores possible strategies for adopting aspects of this new filmmaking technique into the production of short animated films. The outcome is an improved understanding of possible strategies that could be utilised to assist producers and directors with the transition to this new filmmaking technique.
Abstract:
Disjoint top-view networked cameras are among the most commonly deployed camera networks in many applications. One open question in the study of these cameras is the computation of their extrinsic parameters (positions and orientations), known as extrinsic calibration or camera localization. Current approaches either rely on strict assumptions about the object motion to achieve accurate results, or fail to provide results of high accuracy without such motion requirements. To address these shortcomings, we present a location-constrained maximum a posteriori (LMAP) approach that exploits known locations in the surveillance area, some of which the object passes opportunistically. The LMAP approach formulates the problem as a joint inference of the extrinsic parameters and object trajectory based on the cameras' observations and the known locations. In addition, a new task-oriented evaluation metric, named MABR (the Maximum value of All image points' Back-projected localization errors' L2 norms Relative to the area of the field of view), is presented to assess the quality of the calibration results in an indoor object-tracking context. Finally, results demonstrate the superior performance of the proposed method over the state-of-the-art algorithm, based on the presented MABR and a classical evaluation metric, in simulations and real experiments.
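To make the MABR metric concrete, here is a hedged sketch computing it under the assumption that each calibration is summarised by an image-to-ground-plane homography; the parameterisation and the area normalisation are inferred only from the acronym's expansion, not from the paper.

```python
# Hedged sketch of the MABR evaluation metric: the maximum L2 norm of
# back-projected localization errors over all image points, relative to
# the camera's field-of-view area on the ground plane.
import numpy as np

def back_project(H, pts_px):
    """Map Nx2 pixel points to the ground plane with homography H (3x3)."""
    ones = np.ones((pts_px.shape[0], 1))
    q = (H @ np.hstack([pts_px, ones]).T).T
    return q[:, :2] / q[:, 2:3]

def mabr(H_true, H_est, width, height):
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pts = np.stack([u.ravel(), v.ravel()], axis=1).astype(float)
    err = np.linalg.norm(back_project(H_true, pts) - back_project(H_est, pts),
                         axis=1)
    # Approximate the ground-plane field-of-view area from the true
    # back-projection of the image corners (shoelace formula).
    corners = back_project(H_true, np.array(
        [[0, 0], [width - 1, 0], [width - 1, height - 1], [0, height - 1]],
        dtype=float))
    x, y = corners[:, 0], corners[:, 1]
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    return err.max() / area
```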
Abstract:
This paper presents a method for the continuous segmentation of dynamic objects using only a vehicle-mounted monocular camera, without any prior knowledge of the objects' appearance. Prior work in online static/dynamic segmentation is extended to identify multiple instances of dynamic objects by introducing an unsupervised motion-clustering step. These clusters are then used to update a multi-class classifier within a self-supervised framework. In contrast to many tracking-by-detection methods, our system is able to detect dynamic objects without any prior knowledge of their visual appearance, shape or location. Furthermore, the classifier is used to propagate labels of the same object from previous frames, which facilitates the continuous tracking of individual objects based on motion. The proposed system is evaluated using recall and false alarm metrics on a new multi-instance labelled dataset, to assess the performance of segmenting multiple instances of objects.
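The following sketch illustrates one plausible form of the unsupervised motion-clustering step described above; DBSCAN and the feature weighting are stand-ins, as the abstract does not specify the clusterer.

```python
# Illustrative sketch of unsupervised motion clustering: sparse feature
# tracks are grouped by motion similarity so that nearby, similarly
# moving features form one candidate dynamic-object instance.
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_tracks(points_prev, points_curr, eps=2.0, min_samples=10):
    """points_prev/points_curr: Nx2 matched feature positions in two
    consecutive frames. Returns per-track cluster labels (-1 = noise)."""
    flow = points_curr - points_prev
    # Cluster jointly on image position and flow; the flow weight is an
    # assumed tuning parameter, not a value from the paper.
    features = np.hstack([points_curr, flow * 5.0])
    return DBSCAN(eps=eps, min_samples=min_samples).fit_predict(features)
```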
Abstract:
In many cities around the world, surveillance by a pervasive net of CCTV cameras is a common phenomenon in an attempt to uphold safety and security across the urban environment. Video footage is being recorded and stored, and sometimes live feeds are watched in control rooms hidden from public access and view. In this study, we were inspired by Steve Mann's original work on sousveillance (surveillance from below) to examine how a network of camera-equipped urban screens could allow the residents of Oulu in Finland to collaborate on the safekeeping of their city. An agile, rapid prototyping process led to the design, implementation and 'in the wild' deployment of the UbiOpticon screen application. Live video streams captured by web cams integrated at the top of 12 distributed urban screens were broadcast and displayed in a matrix arrangement on all screens. The matrix also included live video streams of two roaming mobile phone cameras. In our field study we explored the reactions of passers-by and users of this screen application, which seeks to invert Bentham's original panopticon by allowing the watched to be watchers at the same time. In addition to the original goal of participatory sousveillance, the system's live video feature sparked fun and novel user-led appropriations.
Abstract:
A ground-based tracking camera and coaligned slitless spectrograph were used to measure the spectral signature of visible radiation emitted from the Hayabusa capsule as it entered the Earth's atmosphere in June 2010. Good quality spectra were obtained, which showed the presence of radiation from the heat shield of the vehicle and the shock-heated air in front of the vehicle. An analysis of the blackbody nature of the radiation concluded that the peak average temperature of the surface was about (3100 ± 100) K. Line spectra from oxygen and nitrogen atoms were used to infer a peak average shock-heated gas temperature of around (7000 ± 400) K.
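The blackbody analysis can be illustrated with a hedged sketch that fits Planck's law to a measured continuum; the scale factor, wavelength range and synthetic stand-in data below are assumptions for runnability, not the paper's values.

```python
# Sketch of a blackbody fit: recover an average surface temperature by
# fitting Planck's law to calibrated continuum radiance measurements.
import numpy as np
from scipy.optimize import curve_fit

h, c, kB = 6.626e-34, 2.998e8, 1.381e-23  # SI physical constants

def planck(wl, T, scale):
    """Blackbody spectral radiance at wavelength wl (m), with a free
    scale absorbing emissivity, solid angle and instrument response."""
    return scale * (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))

# Calibrated wavelengths and radiances from the slitless spectrograph
# would go here; these placeholders only make the snippet runnable.
wl = np.linspace(400e-9, 800e-9, 200)
radiance = planck(wl, 3100.0, 1e-12)  # synthetic stand-in data

(T_fit, s_fit), _ = curve_fit(planck, wl, radiance, p0=(2500.0, 1e-12))
print(f"fitted surface temperature: {T_fit:.0f} K")
```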
Abstract:
Person re-identification is particularly challenging due to significant appearance changes across separate camera views. In order to re-identify people, a representative human signature should effectively handle differences in illumination, pose and camera parameters. While general appearance-based methods are modelled in Euclidean spaces, it has been argued that some applications in image and video analysis are better modelled via non-Euclidean manifold geometry. To this end, recent approaches represent images as covariance matrices, and interpret such matrices as points on Riemannian manifolds. As direct classification on such manifolds can be difficult, in this paper we propose to represent each manifold point as a vector of similarities to class representers, via a recently introduced form of Bregman matrix divergence known as the Stein divergence. This is followed by using a discriminative mapping of similarity vectors for final classification. The use of similarity vectors is in contrast to the traditional approach of embedding manifolds into tangent spaces, which can suffer from representing the manifold structure inaccurately. Comparative evaluations on benchmark ETHZ and iLIDS datasets for the person re-identification task show that the proposed approach obtains better performance than recent techniques such as Histogram Plus Epitome, Partial Least Squares, and Symmetry-Driven Accumulation of Local Features.
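A minimal sketch of the Stein divergence and the resulting similarity vectors follows; the choice of class representers and the downstream discriminative mapping are omitted, and the helper names are illustrative.

```python
# Minimal sketch of the Stein (S-) divergence between SPD covariance
# matrices, S(X, Y) = log det((X + Y)/2) - 0.5 * log det(X Y), used to
# represent a manifold point by its similarities to class representers.
import numpy as np

def stein_divergence(X, Y):
    """Stein divergence between symmetric positive definite X and Y."""
    _, ld_mid = np.linalg.slogdet(0.5 * (X + Y))
    _, ld_x = np.linalg.slogdet(X)
    _, ld_y = np.linalg.slogdet(Y)
    return ld_mid - 0.5 * (ld_x + ld_y)

def similarity_vector(X, representers):
    """Represent manifold point X as a vector of divergences to the
    class representers, ready for a discriminative classifier."""
    return np.array([stein_divergence(X, R) for R in representers])
```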
Abstract:
PURPOSE To investigate changes in the characteristics of the corneal optics, total optics, anterior biometrics and axial length of the eye during a near task, in downward gaze, over 10 min. METHODS Ten emmetropes (mean −0.14 ± 0.24 DS) and 10 myopes (mean −2.26 ± 1.42 DS) aged from 18 to 30 years were recruited. To measure ocular biometrics and corneal topography in downward gaze, an optical biometer (Lenstar LS900) and a rotating Scheimpflug camera (Pentacam HR) were inclined on a custom-built, height- and tilt-adjustable table. The total optics of the eye were measured in downward gaze with binocular fixation using a modified Shack-Hartmann wavefront sensor. Initially, subjects performed a distance viewing task at primary gaze for 10 min to provide a "wash-out" period for prior visual tasks. A distance task (watching video at 6 m) in downward gaze (25°) and a near task (watching video on a portable LCD screen with a 2.5 D accommodation demand) in primary gaze and 25° downward gaze were then carried out, each for 10 min in a randomized order. During measurements, in dichoptic view, a Maltese cross was fixated with the right (untested) eye and the instrument's fixation target was fixated with the subject's tested left eye. Immediately after (0 min), and 5 and 10 min from the commencement of each trial, measurements of ocular parameters were acquired in downward gaze. RESULTS Axial length exhibited a significant increase with downward gaze and accommodation over time (p < 0.05). The greatest axial elongation was observed in downward gaze with 2.5 D accommodation after 10 min (mean change from baseline 23 ± 3 µm). Downward gaze also caused greater changes in anterior chamber depth (ACD) and lens thickness (LT) with accommodation (ACD mean change −163 ± 12 µm at 10 min; LT mean change 173 ± 17 µm at 10 min) compared to primary gaze with accommodation (ACD mean change −138 ± 12 µm at 10 min; LT mean change 131 ± 15 µm at 10 min). Both corneal power and total ocular power changed by a small but significant amount with downward gaze (p < 0.05), resulting in a myopic shift (~0.10 D) in the spherical power of the eye compared with primary gaze. CONCLUSION The axial length, anterior biometrics and ocular refraction change significantly with accommodation in downward gaze as a function of time. These findings provide new insights into the optical and biomechanical changes of the eye during typical near tasks.
Abstract:
Although accelerometers are extensively used for assessing gait, limited research has evaluated the concurrent validity of these devices on less predictable walking surfaces or the comparability of different methods used for gravitational acceleration compensation. This study evaluated the concurrent validity of trunk accelerations derived from a tri-axial inertial measurement unit while walking on firm, compliant and uneven surfaces and contrasted two methods used to remove gravitational accelerations: i) subtraction of the best linear fit from the data (detrending); and ii) use of orientation information (quaternions) from the inertial measurement unit. Twelve older and twelve younger adults walked at their preferred speed along firm, compliant and uneven walkways. Accelerations were evaluated for the thoracic spine (T12) using a tri-axial inertial measurement unit and an eleven-camera Vicon system. The findings demonstrated excellent agreement between accelerations derived from the inertial measurement unit and the motion analysis system, including while walking on uneven surfaces that better approximate a real-world setting (all differences < 0.16 m·s⁻²). Detrending produced slightly better agreement between the inertial measurement unit and Vicon system on firm surfaces (delta range: −0.05 to 0.06 vs. 0.00 to 0.14 m·s⁻²), whereas the quaternion method performed better when walking on compliant and uneven walkways (delta range: −0.16 to −0.02 vs. −0.07 to 0.07 m·s⁻²). The technique used to compensate for gravitational accelerations requires consideration in future research, particularly when walking on compliant and uneven surfaces. These findings demonstrate trunk accelerations can be accurately measured using a wireless inertial measurement unit and are appropriate for research that evaluates healthy populations in complex environments.
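The two gravity-compensation methods contrasted above can be sketched as follows; the quaternion convention, gravity constant and function names are assumptions, not the study's implementation.

```python
# Sketch contrasting two gravity-compensation methods: (i) removing a
# best linear fit from each acceleration axis (detrending), and
# (ii) rotating samples into the world frame with the IMU's orientation
# quaternions and subtracting gravity. Quaternion order (w, x, y, z) assumed.
import numpy as np
from scipy.signal import detrend

G = np.array([0.0, 0.0, 9.81])  # gravity in the world frame (m/s^2)

def compensate_detrend(acc):
    """acc: Nx3 sensor-frame accelerations; subtract a per-axis linear trend."""
    return detrend(acc, axis=0)

def quat_rotate(q, v):
    """Rotate vector v by unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    R = np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)],
    ])
    return R @ v

def compensate_quaternion(acc, quats):
    """Rotate each sample into the world frame and remove gravity."""
    return np.array([quat_rotate(q, a) - G for q, a in zip(quats, acc)])
```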
Abstract:
Various tools have been developed to assist designers in making interfaces easier to use, although none yet offers a complete solution. Through previous work we have established that intuitive interaction is based on past experience. From this we have developed a theory of intuitive interaction, a continuum and a conceptual tool for intuitive use. We then trialled our tool. Firstly, one designer used the tool to design a camera. Secondly, seven groups of postgraduate students re-designed various products using our tool. We then chose one of these, a microwave, and prototyped the new and original microwave interfaces on a touchscreen. We tested them on three different age groups. We found that the new design was more intuitive and was rated by participants as more familiar. Therefore, design interventions based on our intuitive interaction theory can work. Work is ongoing to develop the tool further.
Abstract:
The location of previously unseen and unregistered individuals in complex camera networks from semantic descriptions is a time-consuming and often inaccurate process carried out by human operators, or security staff on the ground. To promote the development and evaluation of automated semantic-description-based localisation systems, we present a new, publicly available, unconstrained 110-sequence database, collected from 6 stationary cameras. Each sequence contains detailed semantic information for a single search subject who appears in the clip (gender, age, height, build, hair and skin colour, clothing type, texture and colour), and between 21 and 290 frames of each clip are annotated with the target subject's location (over 11,000 frames are annotated in total). A novel approach for localising a person given a semantic query is also proposed and demonstrated on this database. The proposed approach incorporates clothing colour and type (for clothing worn below the waist), as well as height and build, to detect people. A method to assess the quality of candidate regions, as well as a symmetry-driven approach to aid in modelling clothing on the lower half of the body, is proposed within this approach. An evaluation on the proposed dataset shows that a relative improvement in localisation accuracy of up to 21% is achieved over the baseline technique.