988 results for 3D vision


Relevance: 30.00%

Publisher:

Abstract:

We aim to demonstrate unaided visual 3D pose estimation and map reconstruction using both monocular and stereo vision techniques. To date, our work has focused on collecting data from Unmanned Aerial Vehicles, which raises a number of significant issues specific to the application, including scene reconstruction degeneracy from planar data, poor structure initialisation for monocular schemes, and difficult 3D reconstruction due to high feature covariance. Most modern Visual Odometry (VO) and related SLAM systems make use of a number of sensors to inform pose and map generation, including laser range-finders, radar, inertial units and vision [1]. By fusing sensor inputs, the advantages and deficiencies of each sensor type can be handled in an efficient manner. However, many of these sensors are costly, and each adds to the complexity of such robotic systems. Given the continual advances in the capability, small size, passivity and low cost of visual sensors, along with the dense, information-rich data they provide, our research focuses on the use of unaided vision to generate pose estimates and maps from robotic platforms. We propose that highly accurate (±5 cm) dense 3D reconstructions of large-scale environments can be obtained in addition to the localisation of the platform described in other work [2]. Using images taken from cameras, our algorithm simultaneously generates an initial visual odometry estimate and scene reconstruction from visible features, then passes this estimate to a bundle-adjustment routine to optimise the solution. From this optimised scene structure and the original images, we aim to create a detailed, textured reconstruction of the scene. By applying such techniques to a unique airborne scenario, we hope to expose new robotic applications of SLAM techniques. The ability to obtain highly accurate 3D measurements of an environment at low cost is critical in a number of agricultural and urban monitoring situations.
We focus on cameras because such sensors are small, cheap and lightweight, and can therefore be deployed on smaller aerial vehicles. This, coupled with the ability of small aerial vehicles to fly near the ground in a controlled fashion, will help increase the effective resolution of the reconstructed maps.
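
The two-view structure initialisation mentioned above can be sketched with linear (DLT) triangulation. This is a minimal illustration with synthetic cameras and identity intrinsics, not the full VO and bundle-adjustment pipeline described in the abstract:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image points."""
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]          # dehomogenise

def project(P, X):
    """Pinhole projection of a 3D point to normalised image coordinates."""
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# Two synthetic cameras one unit apart (identity intrinsics).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.5, 0.2, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
```

With noise-free correspondences the DLT recovers the point exactly; in practice this estimate would seed the bundle-adjustment refinement.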


Construction 2020 is a national initiative undertaken by the CRC for Construction Innovation to focus its ongoing leadership of the Australian property and construction industry in applied research and to best contribute to the industry's national and international growth and competitiveness. It is the first major report on the long-term outlook for the industry since the late 1990s. The report identifies nine key themes for the future of the property and construction industry. These visions describe the major concerns of the industry and the improved future working environment favoured by its stakeholders. The first and clearest vision, agreed across the industry, is that environmentally sustainable construction (the creation of buildings and infrastructure that minimise their impact on the natural environment) is an area of huge potential. Here, technologies like Construction Innovation's LCADesign can make a big difference. This is a calculator that automatically works out, from a 3D computer-aided design model, the environmental costs of the materials in a building, all at the push of a button. By working with industry, we'd expect to have a comprehensive set of eco-design tools for all stages of the construction life cycle, to minimise energy use, greenhouse gas emissions and other forms of waste or pollution. Other significant areas of focus in the report include the development of nationally uniform codes of practice, new tools to evaluate design and product performance, comparisons with overseas industries, and a worldwide research network to ensure that Australian technology is at the cutting edge.


Eigen-based techniques and other monolithic approaches to face recognition have long been a cornerstone of the face recognition community due to the high dimensionality of face images. Eigen-face techniques provide minimal reconstruction error and limit high-frequency content, while linear discriminant-based techniques (fisher-faces) allow the construction of subspaces which preserve discriminatory information. This paper presents a frequency decomposition approach for improved face recognition performance utilising three well-known techniques: wavelets, Gabor/Log-Gabor filters, and the Discrete Cosine Transform. Experimentation illustrates that frequency-domain partitioning prior to dimensionality reduction increases the information available for classification and greatly improves face recognition performance for both eigen-face and fisher-face approaches.
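
The idea of partitioning the frequency domain before dimensionality reduction can be sketched as follows. This toy uses an FFT-based low/high band split and an SVD-based PCA on random stand-in images; the paper's wavelet, Gabor/Log-Gabor and DCT variants are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(0)
faces = rng.standard_normal((20, 16, 16))   # 20 synthetic 16x16 "face" images

def band_features(img, cutoff=4):
    """Split an image's centred spectrum into low- and high-frequency bands."""
    F = np.fft.fftshift(np.fft.fft2(img))
    c = img.shape[0] // 2
    low = F[c - cutoff:c + cutoff, c - cutoff:c + cutoff]
    mask = np.ones_like(F, dtype=bool)
    mask[c - cutoff:c + cutoff, c - cutoff:c + cutoff] = False
    return np.abs(low).ravel(), np.abs(F[mask]).ravel()

def pca(X, k):
    """Project the rows of X onto the top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

low = np.array([band_features(f)[0] for f in faces])
high = np.array([band_features(f)[1] for f in faces])
features = np.hstack([pca(low, 5), pca(high, 5)])   # per-band eigen-features
```

Reducing each band separately, rather than the raw pixels, is the essence of the partition-then-reduce strategy the abstract describes.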


We present a technique for estimating the 6DOF pose of a PTZ camera by tracking, in the image, a single moving target whose 3D position is known. This is useful in situations where it is not practical to measure the camera pose directly. Our application domain is estimating the pose of a PTZ camera so that it can be used for automated GPS-based tracking and filming of UAV flight trials. We present results which show that the technique is able to localise a PTZ camera after a short vision-tracked flight, and that the estimated pose is sufficiently accurate for the PTZ camera to then actively track a UAV based on GPS position data.
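
One classical way to recover a camera pose from 3D-2D correspondences of a tracked target is DLT camera resectioning. The sketch below, with synthetic ground truth, is illustrative only and is not necessarily the estimator used in the paper:

```python
import numpy as np

def resect(X, x):
    """DLT camera resectioning: recover a 3x4 projection matrix P
    (up to scale) from n >= 6 3D points X (n,3) and projections x (n,2)."""
    rows = []
    for Xi, xi in zip(X, x):
        Xh = np.append(Xi, 1.0)
        rows.append(np.hstack([Xh, np.zeros(4), -xi[0] * Xh]))
        rows.append(np.hstack([np.zeros(4), Xh, -xi[1] * Xh]))
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)

# Synthetic ground truth: identity rotation, small offset, points in view.
rng = np.random.default_rng(1)
P_true = np.hstack([np.eye(3), np.array([[0.1], [-0.2], [2.0]])])
X = rng.uniform(-1.0, 1.0, (10, 3)) + np.array([0.0, 0.0, 5.0])
xh = (P_true @ np.hstack([X, np.ones((10, 1))]).T).T
x = xh[:, :2] / xh[:, 2:]
P_est = resect(X, x)
P_est /= P_est[-1, -1]            # fix scale (and sign) for comparison
P_ref = P_true / P_true[-1, -1]
```

A moving target observed over time supplies exactly the set of 3D-2D pairs such a resection needs.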


Gait recognition approaches continue to struggle with challenges including view-invariance, low-resolution data, robustness to unconstrained environments, and fluctuating gait patterns due to subjects carrying goods or wearing different clothes. Although computationally expensive, model-based techniques offer promise over appearance-based techniques for these challenges, as they gather gait features and interpret gait dynamics in skeleton form. In this paper, we propose a fast 3D ellipsoidal-based gait recognition algorithm using a 3D voxel model derived from multi-view silhouette images. This approach directly addresses the limitations of view dependency and self-occlusion in existing ellipse-fitting model-based approaches. Voxel models are segmented into four components (left and right legs, above and below the knee), and ellipsoids are fitted to each region using eigenvalue decomposition. Features derived from the ellipsoid parameters are modelled using a Fourier representation to retain the temporal dynamic pattern for classification. We demonstrate the proposed approach on the CMU MoBo database and show that an improvement of 15-20% can be achieved over a 2D ellipse-fitting baseline.
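
The ellipsoid-fitting and Fourier-feature steps can be sketched as follows. The voxel "limb segment" is a synthetic stand-in, not CMU MoBo data:

```python
import numpy as np

def fit_ellipsoid(voxels):
    """Fit an ellipsoid to an (n,3) voxel point set via eigenvalue
    decomposition of its covariance: returns the centre, the axis
    directions (eigenvector columns) and semi-axis scales."""
    centre = voxels.mean(axis=0)
    evals, evecs = np.linalg.eigh(np.cov((voxels - centre).T))
    return centre, evecs, np.sqrt(evals)

def fourier_descriptor(signal, k=3):
    """First k Fourier magnitudes of a per-frame feature sequence,
    retaining the temporal dynamics as a compact descriptor."""
    return np.abs(np.fft.rfft(signal))[:k]

# Synthetic elongated "limb segment": an 8 x 8 x 40 block of voxels.
zz, yy, xx = np.mgrid[0:40, 0:8, 0:8]
voxels = np.column_stack([xx.ravel(), yy.ravel(), zz.ravel()]).astype(float)
centre, axes, radii = fit_ellipsoid(voxels)
longest = axes[:, np.argmax(radii)]     # should align with the z axis
fd = fourier_descriptor(np.sin(np.linspace(0.0, 4.0 * np.pi, 32)))
```

Tracking the ellipsoid parameters per frame and summarising each with such a Fourier descriptor mirrors the classification features the abstract describes.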


This paper presents the application of monocular visual SLAM on a fixed-wing small Unmanned Aerial System (sUAS), capable of simultaneous estimation of aircraft pose and scene structure. We demonstrate the robustness of unconstrained vision alone in producing reliable pose estimates of an sUAS at altitude. It is ultimately capable of online state-estimation feedback for aircraft control and next-best-view estimation for complete map coverage without the use of additional sensors. We explore some of the challenges of visual SLAM from an sUAS, including dealing with planar structure, distant scenes and noisy observations. The developed techniques are applied to vision data gathered from a fast-moving fixed-wing radio control aircraft flown over a 1 x 1 km rural area at an altitude of 20-100 m. We present both raw Structure from Motion results and a SLAM solution that includes FAB-MAP based loop closures and graph-optimised poses. Timing information is also presented to demonstrate near-online capabilities. We compare the accuracy of the 6-DOF pose estimates to an off-the-shelf GPS-aided INS over a 1.7 km trajectory. We also present output 3D reconstructions of the observed scene structure and texture that demonstrate future applications in autonomous monitoring and surveying.
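
The benefit of graph-optimised poses with a loop closure can be illustrated on a deliberately tiny 1D pose graph: a linear least-squares toy, far simpler than the paper's 6-DOF SLAM back-end, in which a loop-closure constraint pulls drifted odometry back toward consistency:

```python
import numpy as np

# 1D poses p0..p4: odometry edges (each nominally +1.0 but drifting)
# plus one loop-closure edge relating p4 back to p0.
odometry = [1.1, 0.9, 1.2, 1.0]          # noisy relative measurements
loop = (4, 0, -4.0)                       # p0 - p4 measured as -4.0

# Build a least-squares system with p0 fixed at 0; solve for p1..p4.
A, b = [], []
for i, d in enumerate(odometry):          # constraint: p_{i+1} - p_i = d
    row = np.zeros(4)
    if i > 0:
        row[i - 1] = -1.0
    row[i] = 1.0
    A.append(row)
    b.append(d)
i, j, d = loop                            # constraint: p_j - p_i = d
row = np.zeros(4)
row[i - 1] = -1.0                         # p_j = p0 = 0 is fixed
A.append(row)
b.append(d)
p, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
poses = np.concatenate([[0.0], p])
```

Raw odometry accumulates to 4.2, the loop closure says 4.0, and the optimised final pose settles between the two, with the correction spread along the chain.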


In this paper, we present an unsupervised graph-cut based object segmentation method using 3D information provided by Structure from Motion (SFM), called GrabCutSFM. Rather than tackling the segmentation problem with a trained model or human intervention, our approach aims to achieve meaningful segmentation autonomously, with direct application to vision-based robotics. Generally, an object (foreground) and the background have discriminative geometric properties in 3D space. By exploiting the 3D information from multiple views, our proposed method can segment potential objects correctly and automatically, unlike conventional unsupervised segmentation that uses only 2D visual cues. Experiments with real video data collected from indoor and outdoor environments verify the proposed approach.
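
As a toy illustration of the geometric cue only, and not the GrabCutSFM graph-cut formulation itself, depth alone often separates foreground from background in points reconstructed from multiple views:

```python
import numpy as np

# Synthetic reconstructed points: a near "object" depth cluster and a
# far "background" cluster, as an SfM point cloud might exhibit.
rng = np.random.default_rng(2)
depths = np.concatenate([rng.normal(2.0, 0.1, 50),    # foreground points
                         rng.normal(8.0, 0.5, 50)])   # background points

# Two-centre Lloyd (k-means) iteration on depth alone.
centres = np.array([depths.min(), depths.max()])
for _ in range(20):
    labels = np.abs(depths[:, None] - centres).argmin(axis=1)
    centres = np.array([depths[labels == k].mean() for k in (0, 1)])
```

In the paper this geometric separability would instead feed a graph-cut energy together with 2D appearance terms.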


In this paper we propose a method to generate a large-scale, accurate, dense 3D semantic map of street scenes. A dense 3D semantic model of the environment can significantly improve a number of robotic applications such as autonomous driving, navigation or localisation. Instead of using offline-trained classifiers for semantic segmentation, our approach employs a data-driven, nonparametric method to parse scenes, which easily scales to large environments and generalises to different scenes. We use stereo image pairs collected from cameras mounted on a moving car to produce dense depth maps, which are combined into a global 3D reconstruction using camera poses from stereo visual odometry. Simultaneously, 2D automatic semantic segmentation using a nonparametric scene parsing method is fused into the 3D model. Furthermore, the resulting 3D semantic model is improved by taking moving objects in the scene into account. We demonstrate our method on the publicly available KITTI dataset and evaluate its performance against manually generated ground truth.
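
A minimal sketch of fusing per-frame 2D semantic labels into a voxelised 3D map by majority vote; the voxel size, labels and point coordinates below are invented, and the paper's fusion is more sophisticated than simple voting:

```python
import numpy as np
from collections import defaultdict

VOXEL = 0.2   # voxel edge length in metres (illustrative value)

def fuse(points, labels, votes=None):
    """Accumulate per-voxel semantic votes from one labelled depth map."""
    votes = votes if votes is not None else defaultdict(lambda: defaultdict(int))
    for p, lab in zip(points, labels):
        key = tuple(np.floor(p / VOXEL).astype(int))
        votes[key][lab] += 1
    return votes

def most_likely(votes):
    """Winning label per voxel after fusing all frames."""
    return {v: max(c, key=c.get) for v, c in votes.items()}

# Two "frames" observing the same region, with one disagreement.
pts = np.array([[0.1, 0.1, 0.1], [0.1, 0.15, 0.1], [1.0, 1.0, 1.0]])
votes = fuse(pts, ["road", "road", "car"])
votes = fuse(pts, ["road", "sidewalk", "car"], votes)
labelmap = most_likely(votes)
```

Because votes accumulate across frames, a single mislabelled segmentation is outvoted by consistent observations of the same voxel.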


This paper presents a mapping and navigation system for a mobile robot which uses vision as its sole sensor modality. The system enables the robot to navigate autonomously, plan paths and avoid obstacles using a vision-based topometric map of its environment. The map consists of a globally consistent pose-graph with a local 3D point cloud attached to each of its nodes. These point clouds are used for direction-independent loop closure and to dynamically generate 2D metric maps for locally optimal path planning. Using this locally semi-continuous metric space, the robot performs shortest-path planning instead of following the nodes of the graph, as is done in most other vision-only navigation approaches. The system exploits the local accuracy of visual odometry to create local metric maps, and uses pose-graph SLAM, visual appearance-based place recognition and point cloud registration to create the topometric map. The ability of the framework to sustain vision-only navigation is validated experimentally, and the system is provided as open-source software.
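
Shortest-path planning over a pose graph can be sketched with Dijkstra's algorithm; the nodes and edge costs below are invented, standing in for metric distances between map nodes:

```python
import heapq

def shortest_path(graph, start, goal):
    """Dijkstra over a pose graph given as {node: [(neighbour, cost), ...]}."""
    queue = [(0.0, start, [start])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nxt, c in graph.get(node, []):
            if nxt not in seen:
                heapq.heappush(queue, (cost + c, nxt, path + [nxt]))
    return float("inf"), []

# Pose-graph nodes A..D with metric edge costs; the loop-closure edge
# A-D offers a shortcut the raw odometry chain A-B-C-D does not have.
graph = {"A": [("B", 1.0), ("D", 2.5)],
         "B": [("A", 1.0), ("C", 1.0)],
         "C": [("B", 1.0), ("D", 1.0)],
         "D": [("C", 1.0), ("A", 2.5)]}
cost, path = shortest_path(graph, "A", "D")
```

The planner takes the 2.5-cost loop-closure edge rather than the 3.0-cost node chain, which is the advantage of planning in the metric space rather than merely replaying graph nodes.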


We present a pole inspection system for outdoor environments comprising a high-speed camera on a vertical take-off and landing (VTOL) aerial platform. The pole inspection task requires a vehicle to fly close to a structure while maintaining a fixed stand-off distance from it; however, typical GPS errors make GPS-based navigation unsuitable for this task. When flying outdoors, a vehicle is also affected by aerodynamic disturbances such as wind gusts, so the onboard controller must be robust to these disturbances in order to maintain the stand-off distance. Two problems must therefore be addressed: fast and accurate state estimation without GPS, and the design of a robust controller. We resolve these problems by a) performing visual + inertial relative state estimation and b) using a robust line tracker and a nested controller design. Our state estimation fuses high-speed camera images (100 Hz) and 70 Hz IMU data in an Extended Kalman Filter (EKF). We demonstrate results from outdoor experiments for pole-relative hovering, and for pole circumnavigation where the operator provides only yaw commands. Lastly, we show results for image-based 3D reconstruction and texture mapping of a pole to demonstrate the system's usefulness for inspection tasks.
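
A minimal 1D Kalman-filter sketch of the predict-often, correct-occasionally pattern behind such visual + inertial estimators. All rates, noise values and the constant-velocity model here are illustrative and far simpler than the paper's EKF:

```python
import numpy as np

dt = 1.0 / 70.0                       # nominal IMU period
x = np.zeros(2)                       # state: [position, velocity]
P = np.eye(2)                         # state covariance
F = np.array([[1.0, dt], [0.0, 1.0]])
Q = 1e-4 * np.eye(2)                  # process noise (assumed)
H = np.array([[1.0, 0.0]])            # vision measures position only
R = np.array([[1e-2]])                # vision noise (assumed)

true_pos = 0.0
for k in range(700):                  # ~10 s of IMU-rate steps
    true_pos += 0.5 * dt              # target moves at 0.5 m/s
    x = F @ x                         # predict at IMU rate
    P = F @ P @ F.T + Q
    if k % 7 == 0:                    # vision update at ~10 Hz
        z = np.array([true_pos])
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (z - H @ x)
        P = (np.eye(2) - K @ H) @ P
```

Even with position-only corrections arriving at a fraction of the prediction rate, the filter recovers both position and velocity, which is the role the camera plays relative to the IMU in the abstract.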


This thesis investigates the fusion of 3D visual information with 2D image cues to provide 3D semantic maps of the large-scale environments that a robot traverses, for robotic applications. A major theme of this thesis is to exploit the availability of 3D information acquired from robot sensors to improve upon 2D object classification alone. The proposed methods have been evaluated on several indoor and outdoor datasets collected from mobile robotic platforms, including a quadcopter and a ground vehicle, covering several kilometres of urban roads.


We learn from the past that invasive species have caused tremendous damage to native species and serious disruption to agricultural industries. It is crucial for us to prevent this in the future. The first step of this process is to correctly distinguish an invasive species from native ones. Current identification methods, relying mainly on 2D images, can result in low accuracy and be time-consuming. Such methods provide little help to a quarantine officer who has very limited time to respond when on duty. To deal with this problem, we propose new solutions using 3D virtual models of insects. We explain how working with insects in the 3D domain can be much better than in the 2D domain. We also describe how to create true-color 3D models of insects using an image-based 3D reconstruction method. This method is ideal for quarantine control and inspection tasks that involve the verification of a physical specimen against known invasive species. Finally, we show that these insect models provide valuable material for other applications such as research, education, the arts and entertainment. © 2013 IEEE.


Collections of biological specimens are fundamental to scientific understanding and characterization of natural diversity - past, present and future. This paper presents a system for liberating useful information from physical collections by bringing specimens into the digital domain so they can be more readily shared, analyzed, annotated and compared. It focuses on insects and is strongly motivated by the desire to accelerate and augment current practices in insect taxonomy which predominantly use text, 2D diagrams and images to describe and characterize species. While these traditional kinds of descriptions are informative and useful, they cannot cover insect specimens "from all angles" and precious specimens are still exchanged between researchers and collections for this reason. Furthermore, insects can be complex in structure and pose many challenges to computer vision systems. We present a new prototype for a practical, cost-effective system of off-the-shelf components to acquire natural-colour 3D models of insects from around 3 mm to 30 mm in length. ("Natural-colour" is used to contrast with "false-colour", i.e., colour generated from, or applied to, gray-scale data post-acquisition.) Colour images are captured from different angles and focal depths using a digital single lens reflex (DSLR) camera rig and two-axis turntable. These 2D images are processed into 3D reconstructions using software based on a visual hull algorithm. The resulting models are compact (around 10 megabytes), afford excellent optical resolution, and can be readily embedded into documents and web pages, as well as viewed on mobile devices. The system is portable, safe, relatively affordable, and complements the sort of volumetric data that can be acquired by computed tomography. This system provides a new way to augment the description and documentation of insect species holotypes, reducing the need to handle or ship specimens. 
It opens up new opportunities to collect data for research, education, art, entertainment, biodiversity assessment and biosecurity control. © 2014 Nguyen et al.
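
The visual-hull idea behind the reconstruction software can be sketched with orthographic silhouette carving on a toy voxel grid. The real system uses many calibrated DSLR views and perspective projection; here two orthographic silhouettes (a disc and a band) carve the grid into a cylinder:

```python
import numpy as np

N = 32
ax = (np.arange(N) + 0.5) / N                 # voxel-centre coordinates
X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
grid = np.column_stack([X.ravel(), Y.ravel(), Z.ravel()])

# View 1 looks along z: its silhouette is a disc in the x-y plane.
disc = (X[:, :, 0] - 0.5) ** 2 + (Y[:, :, 0] - 0.5) ** 2 < 0.16
# View 2 looks along x: its silhouette is a band 0.25 < z < 0.75.
band = (Z[0] > 0.25) & (Z[0] < 0.75)          # indexed over (y, z)

def carve(grid):
    """Keep only voxels whose projection lies inside every silhouette."""
    i = np.floor(grid * N).astype(int)        # voxel indices (x, y, z)
    keep = disc[i[:, 0], i[:, 1]] & band[i[:, 1], i[:, 2]]
    return grid[keep]

hull = carve(grid)
```

The surviving voxels form the intersection of the back-projected silhouettes, exactly the visual-hull principle; texture from the colour images is then mapped onto this shape.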


Reconstructing 3D motion data is highly under-constrained due to several common sources of data loss during measurement, such as projection, occlusion, or miscorrespondence. We present a statistical model of 3D motion data, based on the Kronecker structure of the spatiotemporal covariance of natural motion, as a prior on 3D motion. This prior is expressed as a matrix normal distribution, composed of separable and compact row and column covariances. We relate the marginals of the distribution to the shape, trajectory, and shape-trajectory models of prior art. When the marginal shape distribution is not available from training data, we show how placing a hierarchical prior over shapes results in a convex MAP solution in terms of the trace-norm. The matrix normal distribution, fit to a single sequence, outperforms state-of-the-art methods at reconstructing 3D motion data in the presence of significant data loss, while providing covariance estimates of the imputed points.
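
The Kronecker structure of the spatiotemporal covariance can be checked numerically: samples Y = A Z B^T with i.i.d. standard-normal Z follow a matrix normal with row covariance U = A A^T and column covariance V = B B^T, so cov(vec(Y)) = V kron U under column-stacking vec. The covariances below are arbitrary illustrations, not ones fitted to motion data:

```python
import numpy as np

rng = np.random.default_rng(3)
n, p, m = 4, 3, 50000
A = rng.standard_normal((n, n))
B = rng.standard_normal((p, p))
U = A @ A.T                                     # row covariance
V = B @ B.T                                     # column covariance

# Draw m matrix-normal samples Y = A Z B^T in one batched matmul.
Z = rng.standard_normal((m, n, p))
Y = A @ Z @ B.T                                 # shape (m, n, p)
vecs = Y.transpose(0, 2, 1).reshape(m, n * p)   # column-major vec per sample
emp = np.cov(vecs.T)                            # empirical covariance
theory = np.kron(V, U)                          # Kronecker prediction
```

The separable row/column factors are what make the prior compact: storing U and V costs n^2 + p^2 parameters instead of the (np)^2 of a full spatiotemporal covariance.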