921 results for Facial Object Based Method


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present an unsupervised graph-cut-based object segmentation method, called GrabCutSFM, that uses 3D information provided by Structure from Motion (SFM). Rather than addressing the segmentation problem with a trained model or human intervention, our approach aims to achieve meaningful segmentation autonomously, with direct application to vision-based robotics. In general, object (foreground) and background carry discriminative geometric information in 3D space. By exploiting 3D information from multiple views, the proposed method can segment potential objects correctly and automatically, in contrast to conventional unsupervised segmentation that relies only on 2D visual cues. Experiments with real video data collected from indoor and outdoor environments verify the proposed approach.
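The geometric cue the abstract exploits — foreground objects standing out from the background in 3D — can be illustrated with a deliberately simplified sketch. The function name `geometric_prior` and the `ground_z`/`margin` parameters are hypothetical illustrations, not from the paper; the actual method builds this cue into a graph-cut energy over SFM points.

```python
def geometric_prior(points_3d, ground_z=0.0, margin=0.05):
    """Label a 3D point as likely foreground when it rises above the
    estimated ground plane by more than `margin` (same units as z).

    points_3d: iterable of (x, y, z) tuples from SFM reconstruction.
    Returns a list of booleans, True = candidate foreground point.
    """
    return [z > ground_z + margin for (_, _, z) in points_3d]
```

In a full pipeline, such per-point labels would seed the foreground/background models of a graph-cut segmentation rather than serve as the final answer.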

Relevance:

100.00%

Publisher:

Abstract:

In this paper, an approach for automatic road extraction in an urban region using structural, spectral and geometric characteristics of roads is presented. Roads are extracted in two stages: pre-processing and road extraction. First, the image is pre-processed to improve tolerance by reducing clutter (mostly buildings, parking lots, vegetation regions and other open spaces). Road segments are then extracted using Texture Progressive Analysis (TPA) and the Normalized cut algorithm. The TPA technique uses binary segmentation based on three levels of texture statistical evaluation to extract road segments, whereas the Normalized cut method is a graph-based method that generates an optimal partition of road segments. The performance (quality measures) of road extraction using TPA and the Normalized cut method is compared. The experimental results show that the Normalized cut method is efficient in extracting road segments in urban regions from high-resolution satellite images.
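The Normalized cut criterion mentioned above scores a binary graph partition by the cut weight normalized by each side's total association (Shi and Malik's standard formulation — an assumption here, since the abstract does not spell the formula out). A minimal sketch for a small weighted graph:

```python
def ncut_value(weights, part_a):
    """Normalized cut cost of a binary partition of an undirected graph.

    weights: dict {(i, j): w} giving edge weights (one entry per edge).
    part_a:  set of nodes on one side; the rest form the other side.
    Lower values indicate a better (more balanced, weaker) cut.
    """
    nodes = set()
    for i, j in weights:
        nodes.update((i, j))
    part_b = nodes - part_a

    def w(u, v):
        # Look the edge up in either orientation.
        return weights.get((u, v), weights.get((v, u), 0.0))

    cut = sum(w(i, j) for i in part_a for j in part_b)
    assoc_a = sum(w(i, j) for i in part_a for j in nodes)
    assoc_b = sum(w(i, j) for i in part_b for j in nodes)
    return cut / assoc_a + cut / assoc_b
```

In practice the minimizing partition is found approximately via the eigenvectors of the normalized graph Laplacian, not by enumerating partitions.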

Relevance:

100.00%

Publisher:

Abstract:

Two methods based on wavelet/wavelet packet expansion to denoise and compress optical tomography data containing scattered noise are presented. In the first, the wavelet expansion coefficients of noisy data are shrunk using a soft threshold. In the second, the data are expanded into a wavelet packet tree upon which a best-basis search is done; the resulting coefficients are truncated on the basis of energy content. The first method denoises experimental data efficiently for scattering particle densities in the medium surrounding the object of up to 12.0 × 10⁶ per cm³, and achieves a compression ratio of approximately 8:1. The wavelet packet based method achieves compression of up to 11:1 and also exhibits reasonable noise reduction capability. Tomographic reconstructions obtained from denoised data are presented. © 1999 Published by Elsevier Science B.V. All rights reserved.
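The soft-thresholding step in the first method is standard wavelet shrinkage: coefficients below the threshold are zeroed and the rest are pulled toward zero by the threshold amount. A minimal sketch of the operator itself (the surrounding wavelet transform, and the threshold choice, are omitted):

```python
def soft_threshold(coeffs, t):
    """Apply soft thresholding with threshold t to a list of
    wavelet coefficients: shrink magnitudes by t, zeroing
    anything whose magnitude is below t."""
    out = []
    for c in coeffs:
        if c > t:
            out.append(c - t)
        elif c < -t:
            out.append(c + t)
        else:
            out.append(0.0)
    return out
```

Small coefficients, which mostly carry noise, are discarded entirely; this is also what makes the shrunk coefficient set compressible.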

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a wave-propagation-based method for identifying damage due to skin-stiffener debonding in a stiffened structure. First, a spectral finite element model (SFEM) is developed for modeling wave propagation in general built-up structures by assembling 2D spectral plate elements; this model is then used to simulate wave propagation in a skin-stiffener structure. The damage force indicator (DFI) technique is used to identify damage caused by the debond. The DFI is derived from the dynamic stiffness matrix of the healthy stiffened structure (obtained from the SFEM) together with the nodal displacements of the debonded structure (obtained from a 2D finite element model).

Relevance:

100.00%

Publisher:

Abstract:

This paper considers a class of dynamic spatial point processes (PPs) that evolves over time in a Markovian fashion. This Markov-in-time PP is hidden and observed indirectly through another PP via thinning, displacement and noise. This statistical model is important for multi-object tracking applications, and we present an approximate likelihood-based method for estimating the model parameters. The work is supported by an extensive numerical study.
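Of the three observation distortions named above, thinning is the simplest to picture: each point of the hidden process is independently kept or dropped, modelling missed detections. A minimal sketch (the function name and fixed-seed default are illustrative, not from the paper):

```python
import random

def thin(points, retain_prob, rng=None):
    """Independent thinning of a point process realisation:
    each point survives with probability retain_prob."""
    rng = rng or random.Random(0)  # fixed seed for reproducibility
    return [p for p in points if rng.random() < retain_prob]
```

Displacement and noise would be layered on top: surviving points are perturbed, and spurious clutter points are added from a separate process.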

Relevance:

100.00%

Publisher:

Abstract:

The capability to automatically identify shapes, objects and materials from image content through direct and indirect methodologies has enabled the development of several civil engineering applications that assist in the design, construction and maintenance of construction projects. Examples include surface crack detection, assessment of fire-damaged mortar, fatigue evaluation of asphalt mixes, aggregate shape measurements, velocimetry, vehicle detection, pore size distribution in geotextiles, and damage detection. This capability is a product of technological breakthroughs in image and video processing that have allowed the development of a large number of digital imaging applications across industries, from well-established medical diagnostic tools (magnetic resonance imaging, spectroscopy and nuclear medical imaging) to image searching mechanisms (image matching, content-based image retrieval). Content-based image retrieval techniques can also assist in the automated recognition of materials in construction site images and thus enable the development of reliable methods for image classification and retrieval. The amount of original imaging information produced yearly in the construction industry has grown tremendously over the last decade. Digital cameras and image databases are gradually replacing traditional photography, while owners demand complete site photograph logs and engineers store thousands of images per project for use in a range of construction management tasks. However, construction companies tend to store images without following any standardized indexing protocol, making manual searching and retrieval a tedious and time-consuming effort. Alternatively, material and object identification techniques can be used to develop an automated, content-based construction site image retrieval methodology.
These methods can use automatic material- or object-based indexing to free the user from the time-consuming and tedious manual classification process. In this paper, a novel material identification methodology is presented. This method uses content-based image retrieval concepts to match known material samples with material clusters within the image content. The results demonstrate the suitability of this methodology for construction site image retrieval and reveal the capability of existing image processing technologies to accurately identify a wealth of materials in construction site images.
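A common content-based retrieval primitive for this kind of material matching is comparing intensity histograms of a query patch against stored material samples. The sketch below is an assumption-laden illustration of that idea (grayscale histograms with histogram intersection), not the authors' actual feature set:

```python
def histogram(pixels, bins=8, max_val=256):
    """Normalized intensity histogram of a flat list of pixel values."""
    h = [0] * bins
    for p in pixels:
        h[p * bins // max_val] += 1
    total = len(pixels)
    return [c / total for c in h]

def intersection(h1, h2):
    """Histogram intersection similarity: 1.0 = identical distributions."""
    return sum(min(a, b) for a, b in zip(h1, h2))

def best_material(patch, samples):
    """Return the name of the stored material sample whose histogram
    best matches the query patch."""
    hp = histogram(patch)
    return max(samples, key=lambda name: intersection(hp, histogram(samples[name])))
```

Real systems would add color and texture channels and compare against clusters segmented out of the site image rather than whole patches.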

Relevance:

100.00%

Publisher:

Abstract:

An approach for rapid hologram generation for realistic three-dimensional (3-D) image reconstruction based on the angular tiling concept is proposed, using a new graphics rendering approach integrated with a previously developed layer-based method for hologram calculation. A 3-D object is simplified as layered cross-sectional images perpendicular to a chosen viewing direction, and our graphics rendering approach allows the incorporation of clear depth cues, occlusion, and shading in the generated holograms for angular tiling. The combination of these techniques with parallel computing reduces the computation time of a single-view hologram for a 3-D image of extended graphics array (XGA) resolution to 176 ms using a single consumer graphics processing unit. © 2014 SPIE and IS&T.

Relevance:

100.00%

Publisher:

Abstract:

First, a method is presented for building a map of a robot's environment by fusing data from multiple ultrasonic sensors and a laser-based global positioning system. On this basis, a new method is proposed, for the first time, for obstacle recognition by a robot in unstructured environments: a method based on obstacle groups. The main advantage of this method is that it extracts and describes the features of the robot's environment more concisely and effectively, which is crucial for achieving reliable navigation and obstacle avoidance and for improving the autonomy and real-time performance of the system. Extensive experimental results demonstrate the effectiveness of the method.

Relevance:

100.00%

Publisher:

Abstract:

Confronting the rapidly increasing, worldwide reliance on biometric technologies to surveil, manage, and police human beings, my dissertation Informatic Opacity: Biometric Facial Recognition and the Aesthetics and Politics of Defacement charts a series of queer, feminist, and anti-racist concepts and artworks that favor opacity as a means of political struggle against surveillance and capture technologies in the 21st century. Utilizing biometric facial recognition as a paradigmatic example, I argue that today's surveillance requires persons to be informatically visible in order to control them, and such visibility relies upon the production of technical standardizations of identification to operate globally, which most vehemently impact non-normative, minoritarian populations. Thus, as biometric technologies turn exposures of the face into sites of governance, activists and artists strive to make the face biometrically illegible and refuse the political recognition biometrics promises through acts of masking, escape, and imperceptibility. Although I specifically describe tactics of making the face unrecognizable as "defacement," I broadly theorize refusals to visually cohere to digital surveillance and capture technologies' gaze as "informatic opacity," an aesthetic-political theory and practice of anti-normativity at a global, technical scale whose goal is maintaining the autonomous determination of alterity and difference by evading the quantification, standardization, and regulation of identity imposed by biometrics and the state. My dissertation also features two artworks: Facial Weaponization Suite, a series of masks and public actions, and Face Cages, a critical, dystopic installation that investigates the abstract violence of biometric facial diagramming and analysis.
I develop an interdisciplinary, practice-based method that pulls from contemporary art and aesthetic theory, media theory and surveillance studies, political and continental philosophy, queer and feminist theory, transgender studies, postcolonial theory, and critical race studies.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel detection method for broken rotor bar (BRB) faults in induction motors based on Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) and a Simulated Annealing Algorithm (SAA). The performance of ESPRIT is tested with the simulated stator current signal of an induction motor with BRB. Even with short-time measurement data, the technique correctly identifies the frequencies of the BRB characteristic components, but with low accuracy on the amplitudes and initial phases of those components. SAA is then used to determine the amplitudes and initial phases, with satisfactory results. Finally, experiments on a 3 kW, 380 V, 50 Hz induction motor demonstrate the effectiveness of the ESPRIT-SAA-based method in detecting BRB with short-time measurement data. This shows that the proposed method is a promising choice for BRB detection in induction motors operating with small slip and fluctuating load.

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes an optimisation-based method to calculate the critical slip (speed) of dynamic stability and the critical clearing time (CCT) of a self-excited induction generator (SEIG). A simple case study using the Matlab/Simulink environment is included to exemplify the optimisation method. Relationships between terminal voltage, critical slip and transmission line reactance, and between CCT and inertial constant, are determined, based on which the impact on relay settings is further analysed for another simulation case.

Relevance:

100.00%

Publisher:

Abstract:

In recent years, there has been a move towards the development of indirect structural health monitoring (SHM) techniques for bridges; the low-cost vibration-based method presented in this paper is such an approach. It consists of the use of a moving vehicle fitted with accelerometers on its axles and incorporates wavelet analysis and statistical pattern recognition. The aim of the approach is both to detect and to locate damage in bridges while reducing the need for direct instrumentation of the bridge. In theoretical simulations, a simplified vehicle-bridge interaction model is used to investigate the effectiveness of the approach in detecting bridge damage from vehicle accelerations. For this purpose, the accelerations are processed using a continuous wavelet transform, because when the axle passes over a damaged section, any discontinuity in the signal affects the wavelet coefficients. Based on these coefficients, a damage indicator is formulated which can distinguish between different damage levels. However, it is found to be difficult to quantify damage of varying levels when the vehicle's transverse position varies between bridge crossings. In a real bridge field experiment, damage was applied artificially to a steel truss bridge to test the effectiveness of the indirect approach in practice; for this purpose a two-axle van was driven across the bridge at constant speed. Both bridge and vehicle acceleration measurements were recorded, and the dynamic properties of the test vehicle were identified initially via free vibration tests. The resulting damage indicators for the bridge and the vehicle showed similar patterns; however, it was difficult to distinguish between different artificial damage scenarios.
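The core idea — a discontinuity in the axle acceleration signal shows up as a local spike in transform coefficients — can be illustrated with a crude Haar-like differencing sketch. This is a stand-in for the paper's continuous wavelet transform, and the function name and `scale` parameter are illustrative assumptions:

```python
def damage_indicator(signal, scale=2):
    """Crude discontinuity detector: at each position, compare the sum of
    `scale` samples on the left with the next `scale` samples on the right.
    A spike in the output marks a candidate damage location."""
    ind = []
    for i in range(len(signal) - 2 * scale):
        left = sum(signal[i:i + scale])
        right = sum(signal[i + scale:i + 2 * scale])
        ind.append(abs(right - left) / scale)
    return ind
```

On a signal with a step change, the indicator peaks where the step occurs, which is the behaviour the wavelet coefficients exploit at finer resolution across multiple scales.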

Relevance:

100.00%

Publisher:

Abstract:

Because more than one nematode species may be unevenly distributed in wood, and because the availability of wood samples required for PCR-based detection of pinewood nematodes in wood tissue of Pinus massoniana is limited, a rapid staining-assisted wood sampling method was developed in this study to aid PCR-based detection of the pine wood nematode Bursaphelenchus xylophilus (Bx) in small wood samples of P. massoniana. It comprises a series of new techniques: sampling, mass estimation of nematodes using staining, and determination of the lowest Bx nematode mass detectable by PCR. Three adjoining 5-mg wood cross-sections of 0.5 × 0.5 × 0.015 cm were first cut from a wood sample of 0.5 × 0.5 × 0.5 cm; the larger wood sample was then stained with acid fuchsin, and two further 5-mg cross-sections (adjoining the three sections mentioned above) were cut from it. Nematode-staining-spots (NSSs) in each of the two stained sections were counted under a microscope at 100× magnification. If eight or more NSSs were present, the adjoining three sections were used for PCR assays. The B. xylophilus-specific amplicon of 403 bp (DQ855275) was generated by PCR from 100% of the 5-mg cross-sections that contained more than eight Bx NSSs. The entire sampling procedure took only 10 min, indicating that it is suitable for fast estimation of nematode numbers in the wood of P. massoniana as a preliminary sample selection for more expensive Bx-detection methods such as PCR assay.
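The decision rule at the heart of the sampling protocol is a simple threshold on the NSS counts of the stained sections. The sketch below encodes one reading of that rule — requiring every stained section to reach the threshold — which is an assumption, since the abstract does not state whether the count is per section or combined:

```python
def run_pcr(nss_counts, min_nss=8):
    """Decide whether the adjoining unstained sections should go to PCR:
    proceed only if each stained reference section shows at least
    min_nss nematode-staining-spots."""
    return all(n >= min_nss for n in nss_counts)
```

The value of the rule is economic: the 10-minute staining check filters out sections unlikely to yield a detectable amplicon before the more expensive PCR assay is run.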

Relevance:

100.00%

Publisher:

Abstract:

Tsunoda et al. (2001) recently studied the nature of object representation in monkey inferotemporal cortex using a combination of optical imaging and extracellular recordings. In particular, they examined IT neuron responses to complex natural objects and "simplified" versions thereof. In that study, in 42% of the cases, optical imaging revealed a decrease in the number of activation patches in IT as stimuli were "simplified". However, in 58% of the cases, "simplification" of the stimuli actually led to the appearance of additional activation patches in IT. Based on these results, the authors propose a scheme in which an object is represented by combinations of active and inactive columns coding for individual features. We examine the patterns of activation caused by the same stimuli as used by Tsunoda et al. in our model of object recognition in cortex (Riesenhuber 99). We find that object-tuned units can show a pattern of appearance and disappearance of features identical to the experiment. Thus, the data of Tsunoda et al. appear to be in quantitative agreement with a simple object-based representation in which an object's identity is coded by its similarities to reference objects. Moreover, the agreement of simulations and experiment suggests that the simplification procedure used by Tsunoda et al. (2001) is not necessarily an accurate method to determine neuronal tuning.

Relevance:

100.00%

Publisher:

Abstract:

A Kriging interpolation method is combined with an object-based evaluation measure to assess the ability of the UK Met Office's dispersion and weather prediction models to predict the evolution of a plume of tracer as it was transported across Europe. The object-based evaluation method, SAL, considers aspects of the Structure, Amplitude and Location of the pollutant field. The SAL method is able to quantify errors in the predicted size and shape of the pollutant plume, through the structure component; the over- or under-prediction of the pollutant concentrations, through the amplitude component; and the position of the pollutant plume, through the location component. The quantitative results of the SAL evaluation are similar for both models and close to a subjective visual inspection of the predictions. A negative structure component for both models, throughout the entire 60-hour plume dispersion simulation, indicates that the modelled plumes are too small and/or too peaked compared to the observed plume at all times. The amplitude component for both models is strongly positive at the start of the simulation, indicating that surface concentrations are over-predicted by both models for the first 24 hours, but modelled concentrations are within a factor of 2 of the observations at later times. Finally, for both models, the location component is small for the first 48 hours after the start of the tracer release, indicating that the modelled plumes are situated close to the observed plume early in the simulation, but this plume location error grows at later times. The SAL methodology has also been used to identify differences in the transport of pollution in the dispersion and weather prediction models.
The convection scheme in the weather prediction model is found to transport more pollution vertically out of the boundary layer into the free troposphere than the dispersion model convection scheme, resulting in lower pollutant concentrations near the surface and hence a better forecast for this case study.
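Of the three SAL components, the amplitude component has the simplest form: the difference of domain-mean values, normalized so it falls in [-2, 2] with positive values meaning over-prediction. This follows the standard SAL definition (Wernli et al.), which the abstract itself does not spell out, so treat the formula as an assumption:

```python
def amplitude_component(model_field, obs_field):
    """SAL amplitude component: normalized difference of domain-mean
    values of the modelled and observed fields. Ranges over [-2, 2];
    positive means the model over-predicts (e.g. the first 24 h above)."""
    d_mod = sum(model_field) / len(model_field)
    d_obs = sum(obs_field) / len(obs_field)
    return (d_mod - d_obs) / (0.5 * (d_mod + d_obs))
```

The structure and location components require identifying coherent plume objects and their centres of mass, which is where the Kriging-interpolated observation field comes in.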