234 results for "Texture image"
Abstract:
Image-based (i.e., photo/videogrammetry) and time-of-flight-based (i.e., laser scanning) technologies are typically used to collect spatial data of infrastructure. In order to help architecture, engineering, and construction (AEC) industries make cost-effective decisions when selecting between these two technologies for a given setting, this paper attempts to measure the accuracy, quality, time efficiency, and cost of applying image-based and time-of-flight-based technologies to conduct as-built 3D reconstruction of infrastructure. A novel comparison method is proposed, and preliminary experiments are conducted. The results reveal that if the accuracy and quality level desired for a particular application is not high (i.e., error < 10 cm and completeness rate > 80%), image-based technologies constitute a good alternative to time-of-flight-based technologies and significantly reduce the time and cost needed for collecting the data on site.
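The comparison above hinges on two measurable quantities, reconstruction error and completeness rate. A minimal sketch of how such metrics can be computed for two already-registered point clouds is given below; the function names, the nearest-neighbour error definition, and the 10 cm tolerance used for completeness are illustrative assumptions, not the paper's exact protocol.

```python
# Sketch only: cloud-to-cloud error and completeness for a reconstructed cloud
# against a reference (e.g., laser-scanned) cloud. All names are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def compare_clouds(reconstructed, reference, tolerance_m=0.10):
    """reconstructed, reference: (N, 3) arrays of points in metres, same frame."""
    # Error: distance from each reconstructed point to its nearest reference point.
    ref_tree = cKDTree(reference)
    errors, _ = ref_tree.query(reconstructed)

    # Completeness: fraction of reference points that have a reconstructed
    # point within the tolerance (assumed here to be the 10 cm figure above).
    rec_tree = cKDTree(reconstructed)
    dists, _ = rec_tree.query(reference)
    completeness = np.mean(dists <= tolerance_m)

    return errors.mean(), completeness

# Example usage with synthetic data:
# mean_err, comp = compare_clouds(np.random.rand(1000, 3), np.random.rand(5000, 3))
```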
Abstract:
C++ prototype implementation of a multi-modal image classification and retrieval method for construction site images.
Abstract:
The technological advancements in digital imaging, the widespread popularity of digital cameras, and the increasing demand by owners and contractors for detailed and complete site photograph logs have triggered an ever-increasing growth in the rate of construction image data collection, with thousands of images being stored for each project. However, the sheer volume of images and the difficulty of indexing them accurately by hand have generated a pressing need for methods that can index and retrieve images with minimal or no user intervention. This paper reports recent developments from research efforts in the indexing and retrieval of construction site images in architecture, engineering, construction, and facilities management image database systems. The limitations and benefits of existing methodologies are presented, along with the rationale for developing a novel image retrieval approach that can not only recognize construction materials within the image content in order to index images, but also remain compatible with existing retrieval methods, enabling enhanced results.
Abstract:
Images represent a valuable source of information for the construction industry. Due to technological advancements in digital imaging, the increasing use of digital cameras is leading to an ever-increasing volume of images being stored in construction image databases, which makes it hard for engineers to retrieve useful information from them. Content-Based Search Engines are tools that exploit the rich image content and apply pattern recognition methods in order to retrieve similar images. In this paper, we illustrate several project management tasks and show how Content-Based Search Engines can facilitate the automatic retrieval and indexing of construction images in image databases.
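To make the idea of a Content-Based Search Engine concrete, the sketch below ranks database images against a query using a global colour-histogram signature; the signature and the histogram-intersection similarity are generic placeholders, not the features described in the paper.

```python
# Minimal content-based retrieval sketch, assuming images are (H, W, 3) uint8
# RGB arrays; this illustrates the general idea, not the paper's engine.
import numpy as np

def histogram_signature(image, bins=8):
    """Return a normalised global colour histogram as the image signature."""
    hist, _ = np.histogramdd(image.reshape(-1, 3), bins=(bins,) * 3,
                             range=((0, 256),) * 3)
    hist = hist.flatten()
    return hist / hist.sum()

def rank_by_similarity(query_image, database):
    """database: list of (image_id, signature); returns ids, most similar first."""
    q = histogram_signature(query_image)
    # Histogram intersection as a simple similarity measure.
    scores = [(np.minimum(q, sig).sum(), image_id) for image_id, sig in database]
    return [image_id for _, image_id in sorted(scores, reverse=True)]
```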
Abstract:
In the modern and dynamic construction environment, it is important to access information quickly and efficiently in order to improve the decision-making processes of construction managers. This capability is, in most cases, straightforward with today's technologies for data types that have an inherent structure and reside primarily in established database structures, such as estimating and scheduling software. However, previous research has demonstrated that a significant percentage of construction data is stored in semi-structured or unstructured formats (text, images, etc.) and that manually locating and identifying such data is a hard and time-consuming task. This paper focuses on construction site image data and presents a novel image retrieval model that interfaces with established construction data management structures. The model is designed to retrieve images from related objects in project models or construction databases using location, date, and material information (extracted from the image content with pattern recognition techniques).
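A minimal sketch of such a metadata-plus-content query is shown below; the record fields, the example zone names, and the assumption that material labels have already been extracted by a classifier are illustrative, not the paper's actual data model.

```python
# Hypothetical image index combining project metadata with content-derived
# material labels; field names and values are placeholders.
from datetime import date

image_index = [
    {"id": "img_001", "location": "Zone A", "date": date(2011, 5, 2),
     "materials": {"concrete", "formwork"}},   # labels from pattern recognition
    {"id": "img_002", "location": "Zone B", "date": date(2011, 5, 3),
     "materials": {"steel"}},
]

def retrieve(index, location=None, date_range=None, material=None):
    """Filter image records by location, acquisition date, and detected material."""
    results = []
    for record in index:
        if location and record["location"] != location:
            continue
        if date_range and not (date_range[0] <= record["date"] <= date_range[1]):
            continue
        if material and material not in record["materials"]:
            continue
        results.append(record["id"])
    return results

# e.g. retrieve(image_index, location="Zone A", material="concrete")
```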
Abstract:
Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse, or how over-complete, a sparse representation should be has gone without a principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions, although for the Student-t prior the associated optimal basis size is only modestly over-complete.
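For reference, a generic linear sparse-coding model with a Student-t prior on the coefficients can be written as follows; the notation is illustrative and not quoted from the paper.

```latex
% Generic sparse-coding generative model with a Student-t prior
% (notation is illustrative, not taken from the paper).
\begin{align}
  \mathbf{x} &= A\,\mathbf{s} + \boldsymbol{\epsilon}, \qquad
     \boldsymbol{\epsilon} \sim \mathcal{N}(\mathbf{0}, \sigma^2 I) \\
  p(s_i) &\propto \Bigl(1 + \tfrac{s_i^2}{\nu \lambda^2}\Bigr)^{-(\nu+1)/2}
\end{align}
% Bayesian model selection compares candidate basis sizes M (the degree of
% over-completeness) via the marginal likelihood:
\begin{equation}
  p(X \mid M) = \int p(X \mid A, M)\, p(A \mid M)\, \mathrm{d}A .
\end{equation}
```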
Abstract:
Ideally, one would like to perform image search using an intuitive and friendly approach. Many existing image search engines, however, present users with sets of images arranged on the screen in some default order only, typically relevance to a query. While this certainly has its advantages, arguably a more flexible and intuitive way would be to sort images into arbitrary structures such as grids, hierarchies, or spheres, so that images that are visually or semantically alike are placed together. This paper focuses on designing such a navigation system for image browsers. This is a challenging task because an arbitrary layout structure makes it difficult, if not impossible, to compute cross-similarities between images and structure coordinates, the main ingredient of traditional layout approaches. For this reason, we resort to a recently developed machine learning technique: kernelized sorting. It is a general technique for matching pairs of objects from different domains without requiring cross-domain similarity measures, and hence elegantly allows sorting images into arbitrary structures. Moreover, we extend it so that some images can be preselected, for instance to form the tip of the hierarchy, allowing the user to subsequently navigate through the search results at the lower levels in an intuitive way. Copyright 2010 ACM.
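A compact sketch of the kernelized-sorting idea follows: images and layout positions each get a centred kernel matrix, and a permutation is found by repeatedly solving a linear assignment problem on the linearised objective. The RBF kernels, the full-step update (the published algorithm uses a damped convex-combination update), and the feature representation are simplifying assumptions.

```python
# Sketch of kernelized sorting: approximately maximise tr(K P^T L P) over
# permutations P by iterated linear assignment. Kernels and features are
# placeholders; assumes one image per grid slot (equal counts).
import numpy as np
from scipy.optimize import linear_sum_assignment

def centred_rbf_kernel(points, gamma=1.0):
    # Pairwise squared distances -> RBF kernel, then double-centring.
    sq = ((points[:, None, :] - points[None, :, :]) ** 2).sum(-1)
    K = np.exp(-gamma * sq)
    n = len(points)
    H = np.eye(n) - np.ones((n, n)) / n
    return H @ K @ H

def kernelized_sorting(image_features, grid_positions, iters=50):
    """image_features: (n, d) array; grid_positions: (n, 2) layout coordinates."""
    K = centred_rbf_kernel(image_features)    # kernel on images
    L = centred_rbf_kernel(grid_positions)    # kernel on layout coordinates
    n = len(image_features)
    P = np.eye(n)                             # start from the identity assignment
    for _ in range(iters):
        profit = L @ P @ K                    # linearisation of tr(K P^T L P)
        rows, cols = linear_sum_assignment(-profit)
        P_new = np.zeros_like(P)
        P_new[rows, cols] = 1.0
        if np.allclose(P_new, P):
            break
        P = P_new
    return P.argmax(axis=0)                   # entry i: grid slot assigned to image i
```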
Abstract:
Statistical approaches for building non-rigid deformable models, such as the Active Appearance Model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning-based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the task of automatically annotating face images that can be used for building AAMs for fitting and tracking. The approach first learns the correspondences between landmarks in a frontal image and a set of training images containing faces in arbitrary poses. Using this learner, virtual images of unseen faces at any pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any pose within the range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases. © 2009 IEEE.
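The core "predict landmarks at the target pose, then warp the frontal texture" step could be sketched as below, with a generic ridge regressor standing in for the learner and a piecewise-affine warp for the texture mapping; both choices, and all names, are assumptions rather than the paper's actual components.

```python
# Illustrative sketch only: landmark prediction plus texture warping.
import numpy as np
from sklearn.linear_model import Ridge
from skimage.transform import PiecewiseAffineTransform, warp

def train_landmark_predictor(frontal_landmarks, posed_landmarks):
    """Learn a map from frontal landmark vectors to landmarks at one target pose.

    Both arguments: (n_faces, n_points * 2) arrays of (x, y) coordinates.
    """
    model = Ridge(alpha=1.0)
    model.fit(frontal_landmarks, posed_landmarks)
    return model

def synthesise_virtual_image(frontal_image, frontal_points, predictor):
    """frontal_points: (n_points, 2) landmarks in (x, y) = (col, row) order."""
    predicted = predictor.predict(frontal_points.reshape(1, -1)).reshape(-1, 2)
    tform = PiecewiseAffineTransform()
    # warp() pulls pixels through the inverse map, so estimate target -> source.
    tform.estimate(predicted, frontal_points)
    return warp(frontal_image, tform)
```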
Abstract:
Time-resolved particle image velocimetry (PIV) has been performed inside the nozzle of a commercially available inkjet printhead to obtain the time-dependent velocity waveform. A printhead with a single transparent nozzle, 80 μm in orifice diameter, was used to eject single droplets at a speed of 5 m/s. An optical microscope was used with an ultra-high-speed camera to capture the motion of particles suspended in a transparent liquid at the center of the nozzle and above the fluid meniscus, at a rate of half a million frames per second. Time-resolved velocity fields were obtained from a fluid layer approximately 200 μm thick within the nozzle for a complete jetting cycle. A Lagrangian finite-element numerical model with experimental measurements as inputs was used to predict the meniscus movement. The model predictions showed good agreement with the experimental results. This work provides the first experimental verification of physical models and numerical simulations of flows within a drop-on-demand nozzle. © 2012 Society for Imaging Science and Technology.
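The velocity fields come from standard PIV processing; a minimal sketch of the cross-correlation step for one interrogation window is given below. The pixel pitch is a placeholder, and the 2 µs frame interval simply restates the half-million-frames-per-second rate quoted above; none of this is the experiment's actual processing code.

```python
# PIV-style sketch: displacement between two interrogation windows via
# FFT cross-correlation. Parameters are placeholders, not experiment values.
import numpy as np

def window_displacement(frame_a, frame_b):
    """Return (dy, dx) displacement in pixels between two same-sized windows."""
    a = frame_a - frame_a.mean()
    b = frame_b - frame_b.mean()
    # Circular cross-correlation via FFT; the peak gives the mean particle shift.
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    shift = np.array(peak, dtype=float)
    dims = np.array(corr.shape)
    # Wrap shifts larger than half the window size to negative displacements.
    shift[shift > dims / 2] -= dims[shift > dims / 2]
    return shift

def velocity(shift_px, pixel_size_m=1e-6, frame_interval_s=2e-6):
    """Convert a pixel shift to velocity (m/s); 2e-6 s matches 0.5 Mfps framing."""
    return shift_px * pixel_size_m / frame_interval_s
```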
Abstract:
We present a quantitative analysis of the ultra-high photoconductivity in amorphous oxide semiconductor (AOS) thin film transistors (TFTs), taking into account the sub-gap optical absorption at oxygen-deficiency defects. We analyze the origin of photoconductivity in AOSs, which is explained in terms of an extended electron lifetime caused by retarded recombination resulting from hole localization. Furthermore, the photoconductive gain in AOS photo-TFTs can be maximized by reducing the transit time associated with short channel lengths, making device scaling favourable for high-sensitivity operation. © 2012 IEEE.
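The channel-length scaling argument in the last sentence follows the standard photoconductive-gain relation, written here in generic notation (not quoted from the paper):

```latex
% Standard photoconductive gain relation: gain = carrier lifetime / transit time,
% and the transit time shrinks quadratically with channel length at fixed bias.
\begin{equation}
  G = \frac{\tau_{\text{life}}}{t_{\text{tr}}}, \qquad
  t_{\text{tr}} = \frac{L^2}{\mu\, V_{DS}}
  \;\;\Rightarrow\;\;
  G = \frac{\mu\, V_{DS}\, \tau_{\text{life}}}{L^2}.
\end{equation}
```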
Abstract:
We present a method for producing dense Active Appearance Models (AAMs), suitable for video-realistic synthesis. To this end, we estimate a joint alignment of all training images using a set of pairwise registrations and ensure that these pairwise registrations are only calculated between similar images. This is achieved by defining a graph on the image set whose edge weights correspond to registration errors and computing a bounded diameter minimum spanning tree (BDMST). Dense optical flow is used to compute the pairwise registrations, and we introduce a flow refinement method to align small-scale texture. Once registration between training images has been established, we propose a method to add vertices to the AAM in a way that minimises the error between the observed flow fields and a flow field interpolated between the AAM mesh points. We demonstrate a significant improvement in model compactness using the proposed method and show it dealing with cases that are problematic for current state-of-the-art approaches.
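The graph-construction step described above can be sketched as follows; a plain minimum spanning tree stands in for the bounded diameter minimum spanning tree (BDMST), and the registration-error function is a placeholder for the dense-optical-flow residual used in the paper.

```python
# Sketch: nodes are training images, edge weights are pairwise registration
# errors, and a spanning tree selects which pairwise registrations to chain.
# Plain MST is used here instead of the BDMST; the error function is a stand-in.
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def registration_error(image_a, image_b):
    """Placeholder: lower is better (assumes same-sized float arrays)."""
    return float(np.mean((image_a - image_b) ** 2))

def registration_tree(images):
    """Return the list of (i, j) image pairs to register, as tree edges."""
    n = len(images)
    weights = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            weights[i, j] = registration_error(images[i], images[j])
    tree = minimum_spanning_tree(weights)             # sparse matrix of kept edges
    return np.transpose(np.nonzero(tree.toarray()))   # (n-1, 2) array of edges
```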