898 results for image-based rendering
Abstract:
Vision-based tracking can provide the spatial location of project-related entities such as equipment, workers, and materials on a large-scale, congested construction site. It tracks entities in a video stream by inferring their motion. To initiate the process, the pixel areas of the entities to be tracked must first be determined in the initial video frames. To fully automate this process, this paper presents an automated way of initializing trackers using the Semantic Texton Forests (STF) method. The STF method simultaneously performs image segmentation and classification of the segments based on low-level semantic information and context information. In this paper, the STF method is tested on the recognition of wheel loaders. In the experiments, wheel loaders are further divided into several parts, such as wheels and body parts, to help learn the context information. The results show 79% accuracy in recognizing the pixel areas of the wheel loader, signifying that the STF method has the potential to automate the initialization of vision-based tracking.
Abstract:
This book explores processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems. The author describes a combination of techniques from image and video processing, computer vision, information retrieval, statistics, and content-based image and video retrieval that have been integrated into a novel method for retrieving related construction site image data from components of a project model. The method has been tested on construction site images from a variety of sources, including past and current building construction and transportation projects, and is able to automatically classify, store, integrate, and retrieve image data files in inter-organizational systems so as to allow their use in project management tasks.
Abstract:
The Architecture, Engineering, Construction and Facilities Management (AEC/FM) industry is rapidly becoming a multidisciplinary, multinational, multi-billion-dollar economy, involving large numbers of actors working concurrently at different locations and using heterogeneous software and hardware technologies. Since the beginning of the last decade, a great deal of effort has been spent within the field of construction IT to integrate data and information from most of the computer tools used to carry out engineering projects. For this purpose, a number of integration models have been developed, such as web-centric systems and construction project modeling, a useful approach for representing construction projects and integrating data from various civil engineering applications. In the modern, distributed, and dynamic construction environment, it is important to retrieve and exchange information from different sources and in different data formats in order to improve the processes supported by these systems. Previous research demonstrated that a major hurdle in AEC/FM data integration in such systems is the variety of data types involved, with a significant part of the data stored in semi-structured or unstructured formats. Therefore, new integrative approaches are needed to handle non-structured data types like images and text files. This research focuses on the integration of construction site images. These images are a significant part of the construction documentation, with thousands stored in the site photograph logs of large-scale projects. However, locating and identifying the data needed for important decision-making processes is a hard and time-consuming task, and so far there have been no automated methods for associating these images with other related objects. Therefore, automated methods for the integration of construction images are important for construction information management.
During this research, processes for the retrieval, classification, and integration of construction images in AEC/FM model-based systems were explored. Specifically, a combination of techniques from image and video processing, computer vision, information retrieval, statistics, and content-based image and video retrieval was deployed to develop a methodology for retrieving related construction site image data from components of a project model. The method has been tested on construction site images from a variety of sources, including past and current building construction and transportation projects, and is able to automatically classify, store, integrate, and retrieve image data files in inter-organizational systems so as to allow their use in project management tasks.
Abstract:
Digital photographs of construction site activities are gradually replacing their traditional paper-based counterparts. Existing digital imaging technologies in hardware and software make it easy for site engineers to take numerous photographs of “interesting” processes and activities on a daily basis. The resulting photographic data are evidence of the “as-built” project and can therefore be used in a number of project life cycle tasks. However, retrieving the relevant photographs for these tasks is often burdened by the sheer volume of photographs accumulating in project databases over time and by the numerous objects present in each photograph. To solve this problem, the writers have recently developed a number of complementary techniques that can automatically classify and retrieve construction site images according to a variety of criteria (materials, time, date, location, etc.). This paper presents a novel complementary technique that can automatically identify linear (i.e., beam, column) and nonlinear (i.e., wall, slab) construction objects within the image content and use that information to enhance the performance of the writers’ existing construction site image retrieval approach.
Abstract:
The current procedures for post-earthquake safety and structural assessment are performed manually by a skilled triage team of structural engineers/certified inspectors. These procedures, and particularly the physical measurement of damage properties, are time-consuming and qualitative in nature. This paper proposes a novel method that automatically detects spalled regions on the surface of reinforced concrete (RC) columns and measures their properties in image data. Spalling is an accepted indicator of significant damage to structural elements during an earthquake. In this method, the region of spalling is first isolated by a local entropy-based thresholding algorithm. The exposure of longitudinal reinforcement (depth of spalling into the column) and the length of spalling along the column are then measured using a novel global adaptive thresholding algorithm in conjunction with template matching and morphological operations. The method was tested on a database of damaged RC column images collected after the 2010 Haiti earthquake, and comparison of the results with manual measurements indicates the validity of the method.
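The entropy-based isolation step above can be sketched as follows. This is a minimal illustration under our own assumptions (the window size, bin count, and mean-entropy threshold are arbitrary choices, not the paper's parameters): spalled concrete is highly textured, so a per-pixel Shannon entropy map thresholded at its mean roughly separates rough, damaged regions from smooth ones.

```python
import numpy as np

def local_entropy(img, win=7):
    """Per-pixel Shannon entropy of grayscale values in a win x win window."""
    pad = win // 2
    padded = np.pad(img, pad, mode="reflect")
    ent = np.zeros(img.shape, dtype=float)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            patch = padded[i:i + win, j:j + win]
            hist, _ = np.histogram(patch, bins=16, range=(0, 256))
            p = hist / hist.sum()
            p = p[p > 0]
            ent[i, j] = -(p * np.log2(p)).sum()
    return ent

def entropy_threshold_mask(img, win=7):
    """Mask of high-entropy (textured) pixels, thresholded at the mean entropy."""
    ent = local_entropy(img, win)
    return ent > ent.mean()
```

A plain loop like this is slow on full-size images; the idea, not the speed, is the point of the sketch.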
Abstract:
By using carbon nanotubes as the smallest possible scattering element, light can be diffracted in a highly controlled manner to produce a 2D image, as reported by Haider Butt and co-workers on page OP331. An array of carbon nanotubes is elegantly patterned to produce a high resolution hologram. In response to incident light on the hologram, a high contrast and wide field of view "CAMBRIDGE" image is produced.
Abstract:
Carbon nanotubes are used as the smallest possible scattering element for diffracting light in a highly controlled manner to produce a 2D image. An array of carbon nanotubes is elegantly patterned to produce a high resolution hologram. In response to incident light on the hologram, a high contrast and wide field of view CAMBRIDGE image is produced.
Abstract:
Computational models of visual cortex, and in particular those based on sparse coding, have enjoyed much recent attention. Despite this currency, the question of how sparse, or how over-complete, a sparse representation should be has gone without a principled answer. Here, we use Bayesian model-selection methods to address these questions for a sparse-coding model based on a Student-t prior. Having validated our methods on toy data, we find that natural images are indeed best modelled by extremely sparse distributions, although for the Student-t prior the associated optimal basis size is only modestly over-complete.
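As a toy illustration of the model-selection idea (not the paper's evidence computation, which operates on the full sparse-coding model), one can compare a Gaussian against a heavier-tailed Student-t model on the same data. Here the BIC serves as a crude stand-in for the negative log marginal likelihood; on heavy-tailed "sparse" data the Student-t model should win.

```python
import numpy as np
from scipy import stats

def bic(log_likelihood, n_params, n_obs):
    """Bayesian information criterion (lower is better)."""
    return n_params * np.log(n_obs) - 2.0 * log_likelihood

def compare_models(data):
    """Return (BIC of Gaussian fit, BIC of Student-t fit) for the same data."""
    mu, sd = data.mean(), data.std()
    bic_gauss = bic(stats.norm.logpdf(data, mu, sd).sum(), 2, data.size)
    df, loc, scale = stats.t.fit(data)
    bic_t = bic(stats.t.logpdf(data, df, loc, scale).sum(), 3, data.size)
    return bic_gauss, bic_t
```

The BIC penalizes the Student-t model for its extra degrees-of-freedom parameter, so a win for it reflects genuinely heavier tails, which is the same trade-off the paper's model selection adjudicates for sparsity and over-completeness.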
Abstract:
Statistical approaches for building non-rigid deformable models, such as the Active Appearance Model (AAM), have enjoyed great popularity in recent years, but typically require tedious manual annotation of training images. In this paper, a learning based approach for the automatic annotation of visually deformable objects from a single annotated frontal image is presented and demonstrated on the example of automatically annotating face images that can be used for building AAMs for fitting and tracking. This approach employs the idea of initially learning the correspondences between landmarks in a frontal image and a set of training images with a face in arbitrary poses. Using this learner, virtual images of unseen faces at any arbitrary pose for which the learner was trained can be reconstructed by predicting the new landmark locations and warping the texture from the frontal image. View-based AAMs are then built from the virtual images and used for automatically annotating unseen images, including images of different facial expressions, at any random pose within the maximum range spanned by the virtually reconstructed images. The approach is experimentally validated by automatically annotating face images from three different databases. © 2009 IEEE.
Abstract:
Restoring a scene distorted by atmospheric turbulence is a challenging problem in video surveillance. The effect, caused by random, spatially varying perturbations, makes a model-based solution difficult and, in most cases, impractical. In this paper, we propose a novel method for mitigating the effects of atmospheric distortion on observed images, particularly airborne turbulence, which can severely degrade a region of interest (ROI). In order to extract accurate detail about objects behind the distorting layer, a simple and efficient frame selection method is proposed to select informative ROIs only from good-quality frames. The ROIs in each frame are then registered to further reduce offsets and distortions. We solve the space-varying distortion problem using region-level fusion based on the dual-tree complex wavelet transform. Finally, contrast enhancement is applied. We further propose a learning-based metric specifically for image quality assessment in the presence of atmospheric distortion, capable of estimating quality in both full- and no-reference scenarios. The proposed method is shown to significantly outperform existing methods, providing enhanced situational awareness in a range of surveillance scenarios. © 1992-2012 IEEE.
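The frame selection idea, keeping only the good-quality frames, can be sketched with a simple focus measure. The gradient-energy score below is our own stand-in for illustration, not the paper's learned quality metric.

```python
import numpy as np

def sharpness(frame):
    """Gradient-energy focus measure: higher means more fine detail survives."""
    gy, gx = np.gradient(frame.astype(float))
    return float((gx ** 2 + gy ** 2).mean())

def select_frames(frames, k):
    """Return the indices of the k sharpest frames, in temporal order."""
    scores = [sharpness(f) for f in frames]
    best = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:k]
    return sorted(best)
```

In a turbulence-mitigation pipeline, the selected frames' ROIs would then be registered and fused; only the selection step is shown here.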
Abstract:
Pixel size is one of the key limiting features in state-of-the-art holographic display systems: the resolution and field of view in these systems are dictated by the size of the pixel (the smallest light-scattering element). We have demonstrated the use of carbon nanotubes (nanostructures) as the smallest possible scattering element for diffracting light in a highly controlled manner to produce a two-dimensional image. An array of carbon nanotubes was elegantly patterned to produce a high-resolution hologram. In response to incident light on the hologram, a high-contrast image was produced. Due to the nanoscale dimensions of the carbon nanotube array, the image presented a wide field of view and high resolution. These results pave the way toward the use of nanostructures for producing 3D holograms with a wide field of view and high resolution. © 2013 IEEE.
Computational modelling and characterisation of nanoparticle-based tuneable photonic crystal sensors
Abstract:
Photonic crystals are materials used to control or manipulate the propagation of light through a medium for a desired application. Common fabrication methods for photonic crystals are both costly and intricate. However, through cost-effective laser-induced photochemical patterning, one-dimensional responsive and tuneable photonic crystals can easily be fabricated. These structures act as optical transducers and respond to external stimuli. Such photonic crystals are generally made of a responsive hydrogel that can host metallic nanoparticles in the form of arrays. The hydrogel-based photonic crystal can alter its periodicity in situ and also recover its initial geometrical dimensions, rendering it fully reversible and reusable. Such responsive photonic crystals have applications in various responsive and tuneable optical devices. In this study, we fabricated a pH-sensitive photonic crystal sensor through photochemical patterning and performed computational simulations of the sensor with a finite element modelling technique in order to analyse its optical properties while varying the pattern and characteristics of the nanoparticle arrays within the responsive hydrogel matrix. Both simulations and experimental results show the wavelength tuneability of the sensor, with good agreement. Various factors that directly affect the performance of the sensors, including nanoparticle size and distribution within the hydrogel-based responsive matrices, are also studied computationally. © 2014 The Royal Society of Chemistry.
Abstract:
Cellular behavior is strongly influenced by the architecture and pattern of its interfacing extracellular matrix (ECM). For an artificial culture system that could eventually benefit the translation of scientific findings into therapeutic development, the system should capture the key characteristics of a physiological microenvironment while also enabling standardized, high-throughput data acquisition. Since an ECM is composed of different fibrous proteins, studying cellular interaction with individual fibrils is of physiological relevance. In this study, we employ near-field electrospinning to create ordered patterns of collagenous fibrils of gelatin, based on an acetic acid and ethyl acetate aqueous co-solvent system. Tunable conformations of micro-fibrils were directly deposited onto soft polymeric substrates in a single step. We observe that global topographical features of straight lines, beads-on-strings, and curls are dictated by solution conductivity, whereas finer details such as the fiber cross-sectional profile are tuned by solution viscosity. Using these fibril constructs as cellular assays, we study the response of EA.hy926 endothelial cells to ROCK inhibition, because of ROCK's key role in the regulation of cell shape. The fibril array was shown to modulate the cellular morphology towards a pre-capillary cord-like phenotype, which was otherwise not observed on a flat 2-D substrate. Further facilitated by quantitative analysis of morphological parameters, the fibril platform also provides a better dissection of the cells' response to the H1152 ROCK inhibitor. In conclusion, the near-field electrospun fibril constructs provide a more physiologically relevant platform than a featureless 2-D surface, and simultaneously permit statistical single-cell image cytometry using conventional microscopy systems.
The patterning approach described here is also expected to form the basis for depositing other protein fibrils, with potential applications including culture platforms for drug screening.
Abstract:
We investigate the use of independent component analysis (ICA) for speech feature extraction in digit speech recognition systems. We observe that this may be true for recognition tasks based on geometrical learning with little training data. In contrast to image processing, phase information is not essential for digit speech recognition. We therefore propose a new scheme showing how phase sensitivity can be removed by using an analytical description of the ICA-adapted basis functions via the Hilbert transform. Furthermore, since the basis functions are not shift invariant, we extend the method to include a frequency-based ICA stage that removes redundant time-shift information. The digit speech recognition results show promising accuracy, and experiments show that the method based on ICA and geometrical learning outperforms HMMs across different numbers of training samples.
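The phase-removal idea can be illustrated with the Hilbert transform alone: the magnitude of the analytic signal is insensitive to the phase of a narrowband component, so a sine and a cosine of the same frequency yield the same envelope. The sketch below builds the analytic signal via the FFT (equivalent in effect to scipy.signal.hilbert) and is only a toy, not the paper's ICA pipeline.

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal of a real 1-D signal via the FFT (Hilbert transform)."""
    n = len(x)
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0                      # keep DC
    if n % 2 == 0:
        h[n // 2] = 1.0             # keep Nyquist bin
        h[1:n // 2] = 2.0           # double positive frequencies
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spectrum * h)  # negative frequencies are zeroed

def envelope(x):
    """Phase-insensitive magnitude envelope of a real signal."""
    return np.abs(analytic_signal(x))
```

For a pure tone with an integer number of cycles, the envelope is exactly constant regardless of the tone's phase, which is the property exploited to make the ICA features phase insensitive.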
Abstract:
We report the growth of hexagonal ZnO nanorods and nanoflowers on a GaN-based LED epiwafer using a solution deposition method. We also discuss the mechanisms of epitaxial nucleation and of the growth of ZnO nanorods and nanoflowers. A GaN-based LED epiwafer was first deposited on a sapphire substrate by MOCVD, with no electrode fabricated on it. Vertically aligned ZnO nanorods with an average height of ~2.4 μm were then grown on the LED epiwafer, and nanoflowers were synthesized on the nanorods. The growth orientation of the nanorods was perpendicular to the surface, and the synthesized nanoflowers were composed of nanorods. The micro-Raman spectra of the ZnO nanorods and nanoflowers are similar, and both exhibit the E2(high) mode and the second-order multiple-phonon mode. The photoluminescence spectrum of the ZnO nanostructures exhibits ultraviolet emission centred at about 380 nm and a broad, enhanced green emission centred at about 526 nm. The green emission of the ZnO nanostructures, combined with the emission of InGaN quantum wells, provides a valuable method to improve the colour rendering index (CRI) of LEDs.