361 results for "Visual estimate"
Abstract:
PURPOSE: To examine the basis of previous findings of an association between indices of driving safety and visual motion sensitivity, and to examine whether this association could be explained by low-level changes in visual function. METHODS: 36 visually normal participants (aged 19-80 years) completed a battery of standard vision tests, including visual acuity, contrast sensitivity and automated visual fields, and two tests of motion perception: sensitivity to movement of a drifting Gabor stimulus, and sensitivity to displacement in a random-dot kinematogram (Dmin). Participants also completed a hazard perception test (HPT), which measured response times to hazards embedded in video recordings of real-world driving and which has been shown to be linked to crash risk. RESULTS: Dmin for the random-dot stimulus ranged from -0.88 to -0.12 log minutes of arc, and the minimum detectable drift rate for the Gabor stimulus ranged from 0.01 to 0.35 cycles per second. Both measures of motion sensitivity significantly predicted response times on the HPT. While the relationship between the HPT and motion sensitivity for the random-dot kinematogram was partially explained by the other visual function measures, the relationship with sensitivity for detection of the drifting Gabor stimulus remained significant even after controlling for these variables. CONCLUSION: These findings suggest that motion perception plays an important role in the visual perception of driving-relevant hazards, independent of other areas of visual function, and should be further explored as a predictive test of driving safety. Future research should explore the causes of reduced motion perception in order to develop better interventions to improve road safety.
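As a hedged illustration only (the abstract does not report the exact statistical procedure), the kind of hierarchical regression it describes, testing whether Gabor motion sensitivity still predicts HPT response time after controlling for the other vision measures, could be sketched in Python as below; the file and column names (vision_hpt.csv, hpt_rt, gabor_drift_threshold, etc.) are hypothetical placeholders.

    # Hedged sketch: does motion sensitivity predict HPT response time
    # beyond the standard vision measures? Column names are hypothetical.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.read_csv("vision_hpt.csv")  # hypothetical data file

    # Step 1: standard vision tests only
    base = smf.ols(
        "hpt_rt ~ visual_acuity + contrast_sensitivity + visual_field",
        data=df,
    ).fit()

    # Step 2: add motion sensitivity for the drifting Gabor stimulus
    full = smf.ols(
        "hpt_rt ~ visual_acuity + contrast_sensitivity + visual_field"
        " + gabor_drift_threshold",
        data=df,
    ).fit()

    # If the Gabor term stays significant and R^2 improves, motion sensitivity
    # explains variance beyond the low-level measures, as the abstract reports.
    print(full.summary())
    print("R^2 gain:", full.rsquared - base.rsquared)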
Abstract:
In various industrial and scientific fields, conceptual models are derived from real-world problem spaces to understand and communicate the entities they contain and the interdependencies between them. Abstracted models mirror the common understanding and information demand of the engineers who apply conceptual models to perform their daily tasks. However, most standardized models in Process Management, Product Lifecycle Management and Enterprise Resource Planning lack a scientific foundation for their notation. In collaboration scenarios with stakeholders from several disciplines, tailored conceptual models complicate communication, as a common understanding is not shared or implemented in the discipline-specific models. To support direct communication between experts from several disciplines, a visual language is developed which allows a common visualization of discipline-specific conceptual models. For visual discrimination and to overcome visual complexity issues, conceptual models are arranged in a three-dimensional space. The visual language introduced here follows and extends established principles of Visual Language science.
Abstract:
In most visual mapping applications suited to Autonomous Underwater Vehicles (AUVs), stereo visual odometry (VO) is rarely utilised as a pose estimator because imagery is typically captured at a very low framerate due to energy conservation and data storage requirements. This adversely affects the robustness of a vision-based pose estimator and its ability to generate a smooth trajectory. This paper presents a novel VO pipeline for low-overlap imagery from an AUV that utilises constrained motion and integrates magnetometer data in a bi-objective bundle adjustment stage to achieve low-drift pose estimates over large trajectories. We analyse the performance of a standard stereo VO algorithm and compare the results to the modified VO algorithm. Results are demonstrated in a virtual environment in addition to low-overlap imagery gathered from an AUV. The modified VO algorithm shows significantly improved pose accuracy and performance over trajectories of more than 300 m. In addition, dense 3D meshes generated from the visual odometry pipeline are presented as a qualitative output of the solution.
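The abstract does not give the exact bi-objective formulation; as an illustrative sketch only, a bundle adjustment that fuses visual reprojection error with magnetometer heading could minimise a weighted sum of the two residual types:

    \[
    E(\{T_i\},\{X_j\}) \;=\; \sum_{i,j} \rho\!\left(\left\| \pi(T_i, X_j) - x_{ij} \right\|^2\right)
    \;+\; \lambda \sum_{i} \left( \psi(T_i) - \psi_i^{\mathrm{mag}} \right)^2 ,
    \]

where the T_i are camera poses, the X_j landmarks, \pi the stereo projection function, \rho a robust loss, \psi(T_i) the heading implied by pose i, \psi_i^{mag} the magnetometer heading, and \lambda a weight trading off the two objectives.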
Abstract:
Debugging control software for Micro Aerial Vehicles (MAVs) can be risky outside the simulator, especially with professional drones that might harm people nearby or result in a high bill after a crash. We have designed a framework that enables a software application to communicate with multiple MAVs from a single unified interface. In this way, visual controllers can first be tested on a low-cost, harmless MAV and, once safety is guaranteed, moved to the production MAV at no additional cost. The framework is based on a distributed architecture over a network, which allows multiple configurations, such as drone swarms or parallel processing of drones' video streams. Live tests have been performed and the results show comparatively low additional communication delays while adding new functionality and flexibility. The implementation is open source and can be downloaded from github.com/uavster/mavwork
Abstract:
In the modern connected world, pervasive computing has become a reality. Thanks to the ubiquity of mobile computing devices and emerging cloud-based services, users stay permanently connected to their data. This introduces a slew of new security challenges, including the problem of multi-device key management and single-sign-on architectures. One solution to this problem is the use of secure side channels for authentication, including the visual channel as a proof of vicinity. However, existing approaches often assume confidentiality of the visual channel, or provide only insufficient means of mitigating a man-in-the-middle attack. In this work, we introduce QR-Auth, a two-step, 2D-barcode-based authentication scheme for mobile devices which aims specifically at key management and key sharing across devices in a pervasive environment. It requires minimal user interaction and therefore provides better usability than most existing schemes without compromising security. We show how our approach fits into existing authorization delegation and one-time-password generation schemes, and that it is resilient to man-in-the-middle attacks.
Abstract:
Background: It has been proposed that the feral horse foot is a benchmark model for foot health in horses. However, the foot health of feral horses has not been formally investigated. Objectives: To investigate the foot health of Australian feral horses and determine if foot health is affected by environmental factors, such as substrate properties and distance travelled. Methods: Twenty adult feral horses from each of five populations (n = 100) were investigated. Populations were selected on the basis of substrate hardness and the amount of travel typical for the population. Feet were radiographed and photographed, and digital images were surveyed by two experienced assessors blinded to each other's assessment and to the population origin. Lamellar samples from 15 feet from three populations were investigated histologically for evidence of laminitis. Results: There was a total of 377 gross foot abnormalities identified in 100 left forefeet. There were no abnormalities detected in three of the feet surveyed. Each population had a comparable prevalence of foot abnormalities, although the type and severity of abnormality varied among populations. Of the three populations surveyed by histopathology, the prevalence of chronic laminitis ranged between 40% and 93%. Conclusions: Foot health appeared to be affected by the environment inhabited by the horses. The observed chronic laminitis may be attributable to either nutritional or traumatic causes. Given the overwhelming evidence of suboptimal foot health, it may not be appropriate for the feral horse foot to be the benchmark model for equine foot health.
Abstract:
This paper presents an Image-Based Visual Servo control design for Fixed-Wing Unmanned Aerial Vehicles tracking locally linear infrastructure in the presence of wind, using a body-fixed imaging sensor. Visual servoing offers improved data collection by posing the tracking task as one of controlling a feature as viewed by the inspection sensor, although it is complicated by the introduction of wind, as aircraft heading and course angle no longer align. In this work it is shown that the effect of wind alters the desired line angle required for continuous tracking so that it equals the wind correction angle that would be calculated to set a desired course. A control solution is then sought by linearizing the interaction matrix about the new feature pose such that the kinematics of the feature can be augmented with the lateral dynamics of the aircraft, from which a state feedback control design is developed. Simulation results are presented comparing no compensation, integral control and the proposed controller using the wind correction angle, followed by an assessment of the response to atmospheric disturbances in the form of turbulence and wind gusts.
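The abstract does not state the formula it uses for the wind correction angle; for context, the standard relation from flight mechanics, for wind speed V_w blowing toward direction \theta_w, airspeed V_a and desired course \chi_d, is

    \[
    \delta_{\mathrm{wca}} = \arcsin\!\left( \frac{V_w}{V_a}\,\sin(\chi_d - \theta_w) \right),
    \]

so that holding heading \chi_d + \delta_{wca} keeps the ground track along the desired course; the paper's claim is that the desired line angle in the image must be offset by this same angle.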
Abstract:
A Bus Rapid Transit (BRT) station is the interface between passengers and the service. The station is crucial to line operation, as it is typically the only location where buses can pass each other. Congestion may occur here when buses maneuvering into and out of the platform lane interfere with bus flow, or when a queue of buses forms upstream of the platform lane, blocking the passing lane. However, some systems include operation in which express buses pass the critical station, resulting in a proportion of non-stopping buses. It is important to understand the operation of the critical busway station under this type of operation, as it affects busway line capacity. This study uses microsimulation to model BRT station operation and to analyze the relationship between the station limit-state bus capacity (B_ls) and the total bus capacity (B_ttl). First, a simulation model is developed for the limit-state scenario, and a mathematical model is then defined and calibrated for a specified range of controlled scenarios of mean and coefficient of variation of dwell time. Thereafter, the proposed B_ls model is extended to consider non-stopping buses and the B_ttl model is defined. The proposed models provide a better understanding of BRT line capacity and are useful for transit authorities in designing better BRT operation.
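The abstract does not give the calibrated model; purely as a rough illustration, under the simplifying assumption that stopping buses remain the bottleneck and passing-lane interference is ignored, a proportion p of non-stopping buses would scale capacity as

    \[
    B_{\mathrm{ttl}} \;\approx\; \frac{B_{\mathrm{ls}}}{1 - p},
    \]

since only the stopping share (1 - p) of the total flow must be served by the platform lane; the paper's calibrated model accounts for the queueing and blocking effects this simple relation ignores.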
Abstract:
Vision-based SLAM is mostly a solved problem provided that clear, sharp images can be obtained. However, in outdoor environments a number of factors such as rough terrain, high speeds and hardware limitations can result in these conditions not being met. High-speed transit over rough terrain can lead to image blur and under/over exposure, problems that cannot easily be dealt with using low-cost hardware. Furthermore, there has recently been growing interest in lifelong autonomy for robots, which in outdoor environments brings the challenge of dealing with a moving sun and a lack of constant artificial lighting. In this paper, we present a lightweight approach to visual localization and visual odometry that addresses the challenges posed by perceptual change and low-cost cameras. The approach combines low-resolution imagery with the RatSLAM algorithm. We test the system using a cheap consumer camera mounted on a small vehicle in a mixed urban and vegetated environment, at times ranging from dawn to dusk and in conditions ranging from sunny weather to rain. We first show that the system is able to provide reliable mapping and recall over the course of the day and to incrementally incorporate new visual scenes from different times into an existing map. We then restrict the system to learning visual scenes at only one time of day, and show that the system is still able to localize and map at other times of day. The results demonstrate the viability of the approach in situations where image quality is poor and environmental or hardware factors preclude the use of visual features.
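As a hedged sketch only (not the paper's exact pipeline), appearance matching on low-resolution imagery of the kind used in RatSLAM-style systems can be illustrated as a comparison of tiny, intensity-normalized templates; the template size and shift range below are illustrative assumptions.

    # Hedged sketch of low-resolution, whole-image scene matching for
    # appearance-based localization. Illustrative only.
    import numpy as np

    def preprocess(image, size=(32, 24)):
        """Downsample to a tiny grayscale template and normalize intensity."""
        from PIL import Image
        small = np.asarray(
            Image.fromarray(image).convert("L").resize(size), dtype=np.float32
        )
        return (small - small.mean()) / (small.std() + 1e-6)

    def match_score(template_a, template_b, max_shift=4):
        """Lowest mean absolute difference over small horizontal shifts,
        which tolerates modest viewpoint change."""
        width = template_a.shape[1]
        best = np.inf
        for s in range(-max_shift, max_shift + 1):
            a = template_a[:, max(0, s):width + min(0, s)]
            b = template_b[:, max(0, -s):width + min(0, -s)]
            best = min(best, float(np.mean(np.abs(a - b))))
        return best  # low score => likely the same place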
Abstract:
Current older-adult capability data-sets fail to account for the effects of everyday environmental conditions on capability. This article details a study that investigates the effects of everyday ambient illumination conditions (overcast, 6000 lx; in-house lighting, 150 lx; and street lighting, 7.5 lx) and contrast (90%, 70%, 50% and 30%) on the near visual acuity (VA) of older adults (n = 38, 65-87 years). VA was measured at a 1-m viewing distance using logarithm of minimum angle of resolution (LogMAR) acuity charts. Results from the study showed that, for all contrast levels tested, VA decreased by 0.2 log units between the overcast and street-lighting conditions. On average, in overcast conditions, participants could detect detail around 1.6 times smaller on the LogMAR charts compared with street lighting. VA also decreased significantly when contrast was reduced from 70% to 50%, and from 50% to 30%, in each of the ambient illumination conditions. Practitioner summary: This article presents an experimental study that investigates the impact of everyday ambient illumination levels and contrast on older adults' VA. Results show that both factors have a significant effect on their VA. The findings suggest that environmental conditions need to be accounted for in older-adult capability data-sets and designs.
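Because LogMAR is a base-10 scale of the minimum angle of resolution, the reported 0.2 log-unit change corresponds to a size factor of

    \[
    10^{0.2} \approx 1.58,
    \]

which is the "around 1.6 times smaller" detail reported between the overcast and street-lighting conditions.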
Abstract:
The building sector is the dominant consumer of energy and therefore a major contributor to anthropogenic climate change. The rapid generation of photorealistic 3D environment models with incorporated surface temperature data has the potential to improve thermographic monitoring of building energy efficiency. In pursuit of this goal, we propose a system which combines a range sensor with a thermal-infrared camera. Our proposed system can generate dense 3D models of environments with both appearance and temperature information, and is the first such system to be developed using a low-cost RGB-D camera. The proposed pipeline processes depth maps successively, forming an ongoing pose estimate of the depth camera and optimizing a voxel occupancy map. Voxels are assigned four channels representing estimates of their true RGB and thermal-infrared intensity values. Poses corresponding to each RGB and thermal-infrared image are estimated through a combination of timestamp-based interpolation and pre-determined knowledge of the extrinsic calibration of the system. Raycasting is then used to color the voxels to represent both visual appearance using RGB and an estimate of the surface temperature. The output of the system is a dense 3D model which can simultaneously represent both RGB and thermal-infrared data using one of two alternative representation schemes. Experimental results demonstrate that the system is capable of accurately mapping difficult environments, even in complete darkness.
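As a hedged sketch of the data structures implied above (not the authors' implementation), a voxel grid carrying four channels (RGB plus thermal-infrared) and a timestamp-interpolated pose might look like the following; class and function names are hypothetical.

    # Hedged sketch: four-channel voxels (R, G, B, thermal) and a pose
    # interpolated by timestamp between neighbouring pose estimates.
    import numpy as np

    class ThermalVoxelGrid:
        def __init__(self, shape):
            """shape: tuple of voxel grid dimensions, e.g. (nx, ny, nz)."""
            self.occupancy = np.zeros(shape, dtype=np.float32)
            # Channels 0-2: RGB estimate, channel 3: thermal-infrared intensity.
            self.channels = np.zeros(shape + (4,), dtype=np.float32)
            self.weights = np.zeros(shape + (4,), dtype=np.float32)

        def fuse(self, index, values, weight=1.0):
            """Running weighted average of the per-voxel channel estimates."""
            w = self.weights[index]
            self.channels[index] = (self.channels[index] * w + values * weight) / (w + weight)
            self.weights[index] = w + weight

    def interpolate_translation(t, t0, p0, t1, p1):
        """Linear interpolation of translation between timestamps
        (rotation interpolation, e.g. slerp, omitted for brevity)."""
        alpha = (t - t0) / (t1 - t0)
        return (1 - alpha) * p0 + alpha * p1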
Abstract:
We introduce a new image-based visual navigation algorithm that allows the Cartesian velocity of a robot to be defined with respect to a set of visually observed features corresponding to previously unseen and unmapped world points. The technique is well suited to mobile robot tasks such as moving along a road or flying over the ground. We describe the algorithm in general form, present detailed simulation results for an aerial robot scenario using a spherical camera and a wide-angle perspective camera, and present experimental results for a mobile ground robot.
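For context only: the classical image-based visual servoing law, the usual starting point for mapping observed feature error to a commanded Cartesian velocity (the paper's algorithm differs in that it works with previously unseen, unmapped points), can be sketched as follows; the gain and matrix names are generic.

    # Classical IBVS velocity law, given for background; not this paper's method.
    import numpy as np

    def ibvs_velocity(L, s, s_star, gain=0.5):
        """v = -gain * pinv(L) @ (s - s*), a 6-DOF velocity twist from the
        interaction (image Jacobian) matrix L and the feature error."""
        error = s - s_star
        return -gain * np.linalg.pinv(L) @ error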
Abstract:
Stereo-based visual odometry algorithms are heavily dependent on an accurate calibration of the rigidly fixed stereo pair. Even small shifts in the rigid transform between the cameras can degrade feature matching and 3D scene triangulation, adversely affecting pose estimates and applications dependent on long-term autonomy. In many field scenarios where vibration, knocks and pressure changes affect a robotic vehicle, maintaining an accurate stereo calibration cannot be guaranteed over long periods. This paper presents a novel method of recalibrating overlapping stereo camera rigs from online visual data while simultaneously providing an up-to-date and up-to-scale pose estimate. The proposed technique implements a novel form of partitioned bundle adjustment that explicitly includes the homogeneous transform between the stereo camera pair in order to generate an optimal calibration. Pose estimates are computed in parallel to the calibration, providing online recalibration that seamlessly integrates into a stereo visual odometry framework. We present results demonstrating accurate performance of the algorithm on both simulated scenarios and real data gathered from a wide-baseline stereo pair on a ground vehicle traversing urban roads.
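The abstract does not give the exact cost; an illustrative bundle adjustment that treats the left-to-right extrinsic transform T_{LR} as an explicit optimization variable alongside the vehicle poses T_i and landmarks X_j might minimise

    \[
    \sum_{i,j} \left\| \pi_L(T_i X_j) - x^L_{ij} \right\|^2
    + \left\| \pi_R(T_{LR}\, T_i X_j) - x^R_{ij} \right\|^2 ,
    \]

where \pi_L and \pi_R are the left and right camera projections and x^L_{ij}, x^R_{ij} the observed features; holding T_{LR} fixed recovers ordinary stereo bundle adjustment, while letting it vary yields the online recalibration described.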
Abstract:
User interfaces for source code editing are a crucial component in any software development environment, and in many editors visual annotations (overlaid on the textual source code) are used to provide important contextual information to the programmer. This paper focuses on the real-time programming activity of ‘cyberphysical’ programming, and considers the type of visual annotations which may be helpful in this programming context.
Abstract:
This paper presents practical vision-based collision avoidance for objects approximating a single point feature. Using a spherical camera model, a visual predictive control scheme guides the aircraft around the object along a conical spiral trajectory. Visibility, state and control constraints are considered explicitly in the controller design by combining the image and vehicle dynamics in the process model and solving the nonlinear optimization problem over the resulting state space. Importantly, range is not required. Instead, the principles of conical spiral motion are used to design an objective function that simultaneously guides the aircraft along the avoidance trajectory whilst providing an indication of the appropriate point at which to stop the spiral behaviour. Our approach is aimed at providing a potential solution to the See and Avoid problem for unmanned aircraft and is demonstrated through a series of simulations.
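The abstract does not state the objective function; a generic visual predictive control problem of the kind described, with image-feature outputs s_k, vehicle state x_k and controls u_k over a horizon N, takes the form

    \[
    \min_{u_0,\dots,u_{N-1}} \; \sum_{k=0}^{N-1} \left\| s_k - s_k^{\mathrm{ref}} \right\|^2_Q + \left\| u_k \right\|^2_R
    \quad \text{s.t.} \quad x_{k+1} = f(x_k, u_k), \; s_k = h(x_k),
    \]

subject to the visibility, state and control constraints mentioned above; the paper's conical-spiral principles presumably enter through the reference terms and the criterion for stopping the spiral, rather than through any range measurement.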