169 results for three-dimensional (3-D) vision
in Cambridge University Engineering Department Publications Database
Abstract:
A novel ultra-lightweight three-dimensional (3-D) cathode system for lithium-sulphur (Li-S) batteries has been synthesised by loading sulphur onto an interconnected 3-D network of few-layered graphene (FLG) via a sulphur solution infiltration method. A free-standing FLG monolithic network foam was formed as a negative of a Ni metallic foam template by CVD, followed by etching away of the Ni. The FLG foam offers excellent electrical conductivity and an appropriate hierarchical pore structure for containing the electro-active sulphur, and facilitates rapid electron/ion transport. This cathode system does not require any additional binding agents, conductive additives or a separate metallic current collector, thus decreasing the weight of the cathode by typically ∼20-30 wt%. A Li-S battery with the sulphur-FLG foam cathode shows good electrochemical stability and high rate discharge capacity retention for up to 400 discharge/charge cycles at a high current density of 3200 mA g⁻¹. Even after 400 cycles, the capacity decay is only ∼0.064% per cycle relative to the early (e.g. the 5th cycle) discharge capacity, while yielding an average coulombic efficiency of ∼96.2%. Our results indicate the potential suitability of graphene foam for efficient, ultra-light and high-performance batteries.
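As a quick consistency check of the quoted decay rate, here is a minimal Python sketch; the capacity values are hypothetical placeholders, and only the ∼0.064% per cycle figure comes from the abstract:

```python
# Average per-cycle capacity decay relative to an early reference cycle,
# as quoted in the abstract. Capacity values below are hypothetical
# placeholders; only the formula mirrors the reported metric.

def per_cycle_decay(c_ref, c_final, n_cycles):
    """Average fractional capacity loss per cycle, relative to c_ref."""
    return (c_ref - c_final) / (c_ref * n_cycles)

c_5th = 1000.0                           # hypothetical 5th-cycle capacity (mAh/g)
c_400th = c_5th * (1 - 0.00064 * 395)    # consistent with ~0.064 %/cycle over cycles 5-400
print(f"{per_cycle_decay(c_5th, c_400th, 395) * 100:.3f} % per cycle")
```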
Abstract:
An approach for rapid hologram generation for realistic three-dimensional (3-D) image reconstruction based on the angular tiling concept is proposed, using a new graphics rendering approach integrated with a previously developed layer-based method for hologram calculation. A 3-D object is simplified as layered cross-sectional images perpendicular to a chosen viewing direction, and our graphics rendering approach allows the incorporation of clear depth cues, occlusion, and shading in the generated holograms for angular tiling. The combination of these techniques together with parallel computing reduces the computation time of a single-view hologram for a 3-D image of extended graphics array (XGA) resolution to 176 ms using a single consumer graphics processing unit card. © 2014 SPIE and IS&T.
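For context, a minimal sketch of a layer-based hologram calculation with non-paraxial (angular spectrum) propagation; this illustrates the general technique, not the authors' GPU implementation, and the random initial phase is a common assumption:

```python
import numpy as np

def angular_spectrum(field, wavelength, dz, dx):
    """Propagate a 2-D complex field by distance dz using the
    non-paraxial angular spectrum transfer function."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 / wavelength**2 - FX**2 - FY**2
    kz = 2 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * dz) * (arg > 0)   # evanescent components suppressed
    return np.fft.ifft2(np.fft.fft2(field) * H)

def layer_hologram(layers, depths, wavelength, dx):
    """Sum the propagated fields of amplitude layers sliced
    perpendicular to the chosen viewing direction."""
    holo = np.zeros_like(layers[0], dtype=complex)
    for amp, z in zip(layers, depths):
        # random initial phase spreads energy across the hologram plane
        field = amp * np.exp(2j * np.pi * np.random.rand(*amp.shape))
        holo += angular_spectrum(field, wavelength, z, dx)
    return holo
```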
Innovative Stereo Vision-Based Approach to Generate Dense Depth Map of Transportation Infrastructure
Abstract:
Three-dimensional (3-D) spatial data of a transportation infrastructure contain useful information for civil engineering applications, including as-built documentation, on-site safety enhancements, and progress monitoring. Several techniques have been developed for acquiring 3-D point coordinates of infrastructure, such as laser scanning. Although laser scanning yields accurate results, the high device costs and human effort required render the process infeasible for generic applications in the construction industry. A quick and reliable approach, which is based on the principles of stereo vision, is proposed for generating a depth map of an infrastructure. Initially, two images are captured by two similar stereo cameras at the scene of the infrastructure. A Harris feature detector is used to extract feature points from the first view, and an innovative adaptive window-matching technique is used to compute feature point correspondences in the second view. A robust algorithm computes the nonfeature point correspondences. Thus, the correspondences of all the points in the scene are obtained. After all correspondences have been obtained, the geometric principles of stereo vision are used to generate a dense depth map of the scene. The proposed algorithm has been tested on several data sets, and results illustrate its potential for stereo correspondence and depth map generation.
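As an illustration of the sparse-matching stage, here is a minimal OpenCV sketch using Harris corners and fixed-window normalised cross-correlation; the paper's adaptive window-matching technique and nonfeature-point algorithm are not reproduced, and all parameter values are assumptions (a rectified grayscale image pair is also assumed):

```python
import cv2
import numpy as np

def sparse_disparities(left, right, win=7, search=64):
    """Detect Harris corners in the left view, then match each one along
    the same row in the right view with normalised cross-correlation."""
    corners = cv2.goodFeaturesToTrack(left, maxCorners=500, qualityLevel=0.01,
                                      minDistance=5, useHarrisDetector=True)
    h = win // 2
    matches = []
    for x, y in corners.reshape(-1, 2).astype(int):
        # skip points whose matching window would fall outside the image
        if y < h or y >= left.shape[0] - h or x < h + search or x >= left.shape[1] - h:
            continue
        patch = left[y-h:y+h+1, x-h:x+h+1].astype(np.float32)
        strip = right[y-h:y+h+1, x-h-search:x+h+1].astype(np.float32)
        scores = cv2.matchTemplate(strip, patch, cv2.TM_CCOEFF_NORMED)
        d = search - int(np.argmax(scores))   # disparity in pixels
        matches.append((x, y, d))
    return matches

# Depth then follows from triangulation: Z = f * B / d
# (f: focal length in pixels, B: camera baseline, d: disparity).
```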
Abstract:
This article presents a new method for acquiring three-dimensional (3-D) volumes of ultrasonic axial strain data. The method uses a mechanically-swept probe to sweep out a single volume while applying a continuously varying axial compression. Acquisition of a volume takes 15-20 s. A strain volume is then calculated by comparing frame pairs throughout the sequence. The method uses strain quality estimates to automatically pick out high-quality frame pairs, and so does not require careful control of the axial compression. In a series of in vitro and in vivo experiments, we quantify the image quality of the new method and also assess its ease of use. Results are compared with those for the current best alternative, which calculates strain between two complete volumes. The volume pair approach can produce high-quality data, but skillful scanning is required to acquire two volumes with appropriate relative strain. In the new method, the automatic quality-weighted selection of image pairs overcomes this difficulty and the method produces superior-quality images with a relatively relaxed scanning technique.
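A minimal sketch of the underlying idea, pairing a windowed cross-correlation displacement estimate with a correlation-peak quality score for ranking frame pairs; window sizes and the quality metric are illustrative assumptions, not the paper's exact estimator:

```python
import numpy as np

def axial_strain(frame_a, frame_b, win=64, step=32):
    """Windowed 1-D cross-correlation along each RF line; strain is the
    axial gradient of the displacement estimate. Returns a strain map and
    a quality score (mean normalised correlation peak) for pair ranking."""
    n_samples, n_lines = frame_a.shape
    depths = range(0, n_samples - win, step)
    disp = np.zeros((len(depths), n_lines))
    peaks = []
    for j in range(n_lines):
        for i, s in enumerate(depths):
            a = frame_a[s:s+win, j] - frame_a[s:s+win, j].mean()
            b = frame_b[s:s+win, j] - frame_b[s:s+win, j].mean()
            xc = np.correlate(b, a, mode='full')
            k = int(np.argmax(xc))
            disp[i, j] = k - (win - 1)            # lag in samples
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            peaks.append(xc[k] / denom if denom else 0.0)
    strain = np.gradient(disp, axis=0) / step     # dimensionless strain
    return strain, float(np.mean(peaks))

# Quality-weighted selection: rank all candidate frame pairs in the sweep
# by the returned score and keep only the best-scoring pairs.
```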
Abstract:
There are sometimes occasions when ultrasound beamforming is performed with only a subset of the total data that will eventually be available. The most obvious example is a mechanically-swept (wobbler) probe, in which the three-dimensional data block is formed from a set of individual B-scans. In these circumstances, non-blind deconvolution can be used to improve the resolution of the data. Unfortunately, most of these situations involve large blocks of three-dimensional data. Furthermore, the ultrasound blur function varies spatially with distance from the transducer. These two facts make the deconvolution process time-consuming to implement. This paper addresses this problem and shows how to produce spatially-varying deconvolution of large blocks of three-dimensional data in a matter of seconds. We present two approaches, one based on hardware and the other based on software. We compare the time each takes to achieve similar results and discuss the computational resources and form of blur model that each requires.
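One standard way to approximate spatially-varying non-blind deconvolution is to split the volume into axial bands and Wiener-deconvolve each band with its local blur estimate; the sketch below shows this general technique and is not a reconstruction of the paper's hardware or software pipelines:

```python
import numpy as np

def wiener_deconv_band(band, psf, nsr=0.01):
    """Non-blind Wiener deconvolution of one depth band with its local PSF.
    The PSF is assumed origin-centred (i.e. already ifftshifted)."""
    H = np.fft.fftn(psf, s=band.shape)
    G = np.fft.fftn(band)
    F = np.conj(H) / (np.abs(H)**2 + nsr) * G
    return np.real(np.fft.ifftn(F))

def spatially_varying_deconv(volume, psfs, n_bands):
    """Approximate a depth-dependent blur by splitting the volume into
    axial bands, deconvolving each with its own PSF, and re-stacking.
    (Overlap/blending between bands is omitted for brevity.)"""
    bands = np.array_split(volume, n_bands, axis=0)
    return np.concatenate(
        [wiener_deconv_band(b, p) for b, p in zip(bands, psfs)], axis=0)
```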
Abstract:
Increasing the field of view of a holographic display while maintaining adequate image size is a difficult task. To address this problem, we designed a system that tessellates several sub-holograms into one large hologram at the output. The sub-holograms we generate are similar to a kinoform, but computed without the paraxial approximation. The sub-holograms are loaded onto a single spatial light modulator consecutively and relayed to the appropriate position at the output through a combination of optics and scanning of the reconstruction light. We review the computer-generated hologram method and describe the working principles of our system. Results from our proof-of-concept system show an improved field of view and reconstructed image size. © 2009 IEEE.
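The kinoform step amounts to phase-only encoding of a computed complex field; a minimal sketch follows, where the 8-bit phase quantisation is an assumed modulator property rather than a detail taken from the paper:

```python
import numpy as np

def kinoform(field_at_slm, levels=256):
    """Phase-only (kinoform) encoding: discard the amplitude of the
    computed field, keep its phase, and quantise to the modulator's
    grey levels (an 8-bit SLM is assumed here)."""
    phase = np.angle(field_at_slm)                      # in (-pi, pi]
    return np.round((phase + np.pi) / (2 * np.pi)
                    * (levels - 1)).astype(np.uint8)
```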