849 results for Facial Object Based Method


Relevance:

100.00%

Publisher:

Abstract:

The capability to automatically identify shapes, objects and materials from image content, through direct and indirect methodologies, has enabled the development of several civil engineering applications that assist in the design, construction and maintenance of construction projects. Examples include surface crack detection, assessment of fire-damaged mortar, fatigue evaluation of asphalt mixes, aggregate shape measurements, velocimetry, vehicle detection, pore size distribution in geotextiles, damage detection and others. This capability is a product of technological breakthroughs in the area of Image and Video Processing that have allowed for the development of a large number of digital imaging applications in all industries, ranging from well-established medical diagnostic tools (magnetic resonance imaging, spectroscopy and nuclear medical imaging) to image searching mechanisms (image matching, content-based image retrieval). Content-based image retrieval techniques can also assist in the automated recognition of materials in construction site images and thus enable the development of reliable methods for image classification and retrieval. The amount of original imaging information produced yearly in the construction industry has experienced tremendous growth over the last decade. Digital cameras and image databases are gradually replacing traditional photography, while owners demand complete site photograph logs and engineers store thousands of images for each project to use in a number of construction management tasks. However, construction companies tend to store images without following any standardized indexing protocols, making manual searching and retrieval a tedious and time-consuming effort. Alternatively, material and object identification techniques can be used for the development of an automated, content-based construction site image retrieval methodology.
These methods can utilize automatic material or object based indexing to remove the user from the time-consuming and tedious manual classification process. In this paper, a novel material identification methodology is presented. This method utilizes content based image retrieval concepts to match known material samples with material clusters within the image content. The results demonstrate the suitability of this methodology for construction site image retrieval purposes and reveal the capability of existing image processing technologies to accurately identify a wealth of materials from construction site images.
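The idea of matching known material samples against clusters in the image content can be illustrated with a minimal sketch (not the authors' implementation): a normalized intensity histogram is computed for a reference material sample and compared, via histogram intersection, against fixed-size patches of a site image. The patch size, bin count, and acceptance threshold are illustrative assumptions.

```python
import numpy as np

def histogram(patch, bins=16):
    """Normalized intensity histogram of an image patch (values in 0..255)."""
    h, _ = np.histogram(patch, bins=bins, range=(0, 256))
    return h / h.sum()

def intersection(h1, h2):
    """Histogram intersection similarity in [0, 1]; 1 means identical histograms."""
    return float(np.minimum(h1, h2).sum())

def find_material(image, sample, patch=8, threshold=0.8):
    """Return top-left corners of image patches whose histogram matches the sample."""
    ref = histogram(sample)
    rows, cols = image.shape
    hits = []
    for r in range(0, rows - patch + 1, patch):
        for c in range(0, cols - patch + 1, patch):
            if intersection(histogram(image[r:r+patch, c:c+patch]), ref) >= threshold:
                hits.append((r, c))
    return hits
```

A patch whose intensity distribution resembles the reference sample is flagged as containing that material; real site imagery would of course need colour and texture features rather than grey levels alone.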

Relevance:

100.00%

Publisher:

Abstract:

An approach to rapid hologram generation for realistic three-dimensional (3-D) image reconstruction based on the angular tiling concept is proposed, using a new graphics rendering approach integrated with a previously developed layer-based method for hologram calculation. A 3-D object is simplified as layered cross-sectional images perpendicular to a chosen viewing direction, and our graphics rendering approach allows the incorporation of clear depth cues, occlusion, and shading in the generated holograms for angular tiling. The combination of these techniques together with parallel computing reduces the computation time of a single-view hologram for a 3-D image of extended graphics array resolution to 176 ms using a single consumer graphics processing unit card. © 2014 SPIE and IS&T.
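The layer-based decomposition step can be sketched as follows (a simplified illustration, not the authors' renderer): a 3-D point set is sliced into cross-sectional layers perpendicular to the viewing (z) axis, each layer then being a candidate for independent propagation to the hologram plane. The layer count is an assumed parameter.

```python
import numpy as np

def depth_layers(points, n_layers):
    """Slice a 3-D point cloud into cross-sectional layers perpendicular to
    the viewing (z) axis, as in layer-based hologram calculation."""
    pts = np.asarray(points, dtype=float)
    z = pts[:, 2]
    edges = np.linspace(z.min(), z.max(), n_layers + 1)
    # Assign each point to the layer whose depth interval contains it.
    idx = np.clip(np.searchsorted(edges, z, side='right') - 1, 0, n_layers - 1)
    return [pts[idx == k] for k in range(n_layers)]
```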

Relevance:

100.00%

Publisher:

Abstract:

We first present a method for building a map of a robot's environment by fusing data from multiple ultrasonic sensors and a laser-based global positioning system. On this basis, we propose, for the first time, a new method for a robot to recognize obstacles in unstructured environments: the obstacle-group based method. Its greatest advantage is that it extracts and describes the features of the robot's environment more concisely and effectively, which is crucial for achieving good navigation and obstacle avoidance and for improving the autonomy and real-time performance of the system. Extensive experimental results demonstrate the effectiveness of the method.
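The obstacle-group idea can be illustrated with a minimal distance-based clustering sketch (an illustrative toy, not the paper's algorithm): range readings are merged into one obstacle group whenever a chain of points links them with gaps below a threshold.

```python
import numpy as np

def obstacle_groups(points, gap=0.5):
    """Cluster 2-D obstacle points into groups: two points belong to the same
    obstacle group if a chain of points links them with steps shorter than `gap`."""
    points = np.asarray(points, dtype=float)
    labels = [-1] * len(points)
    group = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        stack = [i]            # flood-fill over the "within gap" relation
        labels[i] = group
        while stack:
            j = stack.pop()
            for k in range(len(points)):
                if labels[k] == -1 and np.hypot(*(points[j] - points[k])) < gap:
                    labels[k] = group
                    stack.append(k)
        group += 1
    return labels
```

Each resulting group can then be summarised by a single descriptor (e.g. its bounding box), which is the compactness benefit the abstract emphasises.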

Relevance:

100.00%

Publisher:

Abstract:

Confronting the rapidly increasing, worldwide reliance on biometric technologies to surveil, manage, and police human beings, my dissertation Informatic Opacity: Biometric Facial Recognition and the Aesthetics and Politics of Defacement charts a series of queer, feminist, and anti-racist concepts and artworks that favor opacity as a means of political struggle against surveillance and capture technologies in the 21st century. Utilizing biometric facial recognition as a paradigmatic example, I argue that today's surveillance requires persons to be informatically visible in order to control them, and such visibility relies upon the production of technical standardizations of identification to operate globally, which most vehemently impact non-normative, minoritarian populations. Thus, as biometric technologies turn exposures of the face into sites of governance, activists and artists strive to make the face biometrically illegible and refuse the political recognition biometrics promises through acts of masking, escape, and imperceptibility. Although I specifically describe tactics of making the face unrecognizable as "defacement," I broadly theorize refusals to visually cohere to digital surveillance and capture technologies' gaze as "informatic opacity," an aesthetic-political theory and practice of anti-normativity at a global, technical scale whose goal is maintaining the autonomous determination of alterity and difference by evading the quantification, standardization, and regulation of identity imposed by biometrics and the state. My dissertation also features two artworks: Facial Weaponization Suite, a series of masks and public actions, and Face Cages, a critical, dystopic installation that investigates the abstract violence of biometric facial diagramming and analysis.
I develop an interdisciplinary, practice-based method that pulls from contemporary art and aesthetic theory, media theory and surveillance studies, political and continental philosophy, queer and feminist theory, transgender studies, postcolonial theory, and critical race studies.

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel detection method for the broken rotor bar (BRB) fault in induction motors based on Estimation of Signal Parameters via Rotational Invariance Technique (ESPRIT) and a Simulated Annealing Algorithm (SAA). The performance of ESPRIT is tested with the simulated stator current signal of an induction motor with BRB. It shows that even with short-time measurement data, the technique is capable of correctly identifying the frequencies of the BRB characteristic components, but with low accuracy on the amplitudes and initial phases of those components. SAA is then used to determine their amplitudes and initial phases and shows satisfactory results. Finally, experiments on a 3 kW, 380 V, 50 Hz induction motor are conducted to demonstrate the effectiveness of the ESPRIT-SAA-based method in detecting BRB with short-time measurement data. This proves that the proposed method is a promising choice for BRB detection in induction motors operating with small slip and fluctuating load.
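The role of the simulated annealing step, refining the amplitude and initial phase of a component once ESPRIT has fixed its frequency, can be sketched as follows. The single-component model, cooling schedule, and step sizes are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def residual(signal, t, freq, amp, phase):
    """Sum of squared errors between the measured signal and a candidate sinusoid."""
    return float(np.sum((signal - amp * np.cos(2 * np.pi * freq * t + phase)) ** 2))

def anneal(signal, t, freq, steps=20000, temp0=1.0, seed=0):
    """Estimate (amplitude, phase) of a component of known frequency by
    simulated annealing with a linear cooling schedule."""
    rng = np.random.default_rng(seed)
    amp, phase = 1.0, 0.0
    cur = best = residual(signal, t, freq, amp, phase)
    best_params = (amp, phase)
    for k in range(steps):
        temp = temp0 * (1 - k / steps) + 1e-9
        cand = (amp + rng.normal(0, 0.1), phase + rng.normal(0, 0.1))
        r = residual(signal, t, freq, *cand)
        # Accept improvements always; accept worse moves with Boltzmann probability.
        if r < cur or rng.random() < np.exp((cur - r) / temp):
            amp, phase = cand
            cur = r
            if r < best:
                best, best_params = r, cand
    return best_params
```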

Relevance:

100.00%

Publisher:

Abstract:

This paper proposes an optimisation-based method to calculate the critical slip (speed) of dynamic stability and the critical clearing time (CCT) of a self-excited induction generator (SEIG). A simple case study using the Matlab/Simulink environment is included to exemplify the optimisation method. Relationships between terminal voltage, critical slip and transmission line reactance, and between CCT and inertia constant, have been determined; based on these, an analysis of the impact on relay settings has been conducted for a further simulation case.
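One simple way to realise an optimisation-based CCT search, assuming stability is monotone in the clearing time, is bisection over a stability predicate; the predicate itself (in the paper's case, a Matlab/Simulink time-domain simulation of the SEIG) is abstracted away here as a callable.

```python
def critical_clearing_time(is_stable, t_lo=0.0, t_hi=1.0, tol=1e-4):
    """Bisection search for the largest clearing time that keeps the machine
    stable, assuming stability holds below the CCT and fails above it."""
    assert is_stable(t_lo) and not is_stable(t_hi)
    while t_hi - t_lo > tol:
        mid = 0.5 * (t_lo + t_hi)
        if is_stable(mid):
            t_lo = mid   # still stable: CCT lies above mid
        else:
            t_hi = mid   # unstable: CCT lies below mid
    return t_lo
```

Each call to `is_stable` would trigger one simulation run, so the number of runs grows only logarithmically with the required precision.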

Relevance:

100.00%

Publisher:

Abstract:

In recent years, there has been a move towards the development of indirect structural health monitoring (SHM) techniques for bridges; the low-cost vibration-based method presented in this paper is such an approach. It consists of the use of a moving vehicle fitted with accelerometers on its axles and incorporates wavelet analysis and statistical pattern recognition. The aim of the approach is to both detect and locate damage in bridges while reducing the need for direct instrumentation of the bridge. In theoretical simulations, a simplified vehicle-bridge interaction model is used to investigate the effectiveness of the approach in detecting damage in a bridge from vehicle accelerations. For this purpose, the accelerations are processed using a continuous wavelet transform, since when the axle passes over a damaged section, the resulting discontinuity in the signal affects the wavelet coefficients. Based on these coefficients, a damage indicator is formulated which can distinguish between different damage levels. However, it is found to be difficult to quantify damage of varying levels when the vehicle's transverse position is varied between bridge crossings. In a real bridge field experiment, damage was applied artificially to a steel truss bridge to test the effectiveness of the indirect approach in practice; for this purpose a two-axle van was driven across the bridge at constant speed. Both bridge and vehicle acceleration measurements were recorded. The dynamic properties of the test vehicle were identified initially via free vibration tests. It was found that the resulting damage indicators for the bridge and vehicle showed similar patterns; however, it was difficult to distinguish between different artificial damage scenarios.
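The wavelet-coefficient step can be sketched with a single-scale Ricker (Mexican-hat) wavelet: a localized disturbance in the axle acceleration produces a burst in the coefficients whose peak locates the damaged section. The wavelet choice, the scale, and the synthetic signal are illustrative assumptions, not the paper's processing chain.

```python
import numpy as np

def mexican_hat(points, a):
    """Discrete Mexican-hat (Ricker) wavelet of scale a."""
    t = np.arange(points) - (points - 1) / 2.0
    x = t / a
    return (1 - x**2) * np.exp(-x**2 / 2)

def cwt_row(signal, a):
    """Wavelet coefficients of `signal` at a single scale `a`."""
    w = mexican_hat(min(10 * int(a) + 1, len(signal)), a)
    return np.convolve(signal, w, mode='same')

def damage_location(acc, scale=4.0):
    """Index of the largest |coefficient|: a discontinuity in the axle
    acceleration shows up as a local burst in the wavelet coefficients."""
    return int(np.argmax(np.abs(cwt_row(acc, scale))))
```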

Relevance:

100.00%

Publisher:

Abstract:

Because more than one nematode species may be unevenly distributed in wood, and because the amount of wood sample available for PCR-based detection of pinewood nematodes in wood tissue of Pinus massoniana is limited, a rapid staining-assisted wood sampling method was developed in this study to aid PCR-based detection of the pine wood nematode Bursaphelenchus xylophilus (Bx) in small wood samples of P. massoniana. It comprised a series of new techniques: sampling, mass estimation of nematodes using staining, and determination of the lowest Bx nematode mass detectable by PCR. Three adjoining 5-mg wood cross-sections, of 0.5 × 0.5 × 0.015 cm dimension, were first cut from a wood sample of 0.5 × 0.5 × 0.5 cm; the larger wood sample was then stained with acid fuchsin, from which two further 5-mg wood cross-sections (adjoining the three sections mentioned above) were cut. Nematode-staining-spots (NSSs) in each of the two stained sections were counted under a microscope at 100× magnification. If eight or more NSSs were present, the adjoining three sections were used for PCR assays. The B. xylophilus-specific amplicon of 403 bp (DQ855275) was generated by PCR assay from 100.00% of the 5-mg wood cross-sections that contained more than eight Bx NSSs. The entire sampling procedure took only 10 min, indicating that it is suitable for fast estimation of nematode numbers in the wood of P. massoniana as a preliminary sample selection for more expensive Bx-detection methods such as the PCR assay.

Relevance:

100.00%

Publisher:

Abstract:

Tsunoda et al. (2001) recently studied the nature of object representation in monkey inferotemporal cortex using a combination of optical imaging and extracellular recordings. In particular, they examined IT neuron responses to complex natural objects and "simplified" versions thereof. In that study, in 42% of the cases, optical imaging revealed a decrease in the number of activation patches in IT as stimuli were "simplified". However, in 58% of the cases, "simplification" of the stimuli actually led to the appearance of additional activation patches in IT. Based on these results, the authors propose a scheme in which an object is represented by combinations of active and inactive columns coding for individual features. We examine the patterns of activation caused by the same stimuli as used by Tsunoda et al. in our model of object recognition in cortex (Riesenhuber 99). We find that object-tuned units can show a pattern of appearance and disappearance of features identical to the experiment. Thus, the data of Tsunoda et al. appear to be in quantitative agreement with a simple object-based representation in which an object's identity is coded by its similarities to reference objects. Moreover, the agreement of simulations and experiment suggests that the simplification procedure used by Tsunoda et al. (2001) is not necessarily an accurate method to determine neuronal tuning.
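A toy version of similarity-to-reference-objects coding can make the appearance of extra activation patches concrete (a sketch under assumed Gaussian tuning in a 2-D feature space, not the Riesenhuber model itself): each unit responds according to the distance between the stimulus and that unit's reference object, and "simplifying" a stimulus can move it closer to additional references, activating more units rather than fewer.

```python
import numpy as np

def unit_responses(stimulus, references, sigma=1.0):
    """Similarity-to-references code: each unit is tuned to a reference object
    and responds with a Gaussian of its distance to the stimulus."""
    refs = np.asarray(references, dtype=float)
    d2 = ((refs - np.asarray(stimulus, dtype=float)) ** 2).sum(axis=1)
    return np.exp(-d2 / (2 * sigma ** 2))

def active_units(stimulus, references, threshold=0.5):
    """Indices of units whose response exceeds threshold (an 'activation patch')."""
    return [i for i, r in enumerate(unit_responses(stimulus, references)) if r > threshold]
```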

Relevance:

100.00%

Publisher:

Abstract:

A Kriging interpolation method is combined with an object-based evaluation measure to assess the ability of the UK Met Office's dispersion and weather prediction models to predict the evolution of a plume of tracer as it was transported across Europe. The object-based evaluation method, SAL, considers aspects of the Structure, Amplitude and Location of the pollutant field. The SAL method is able to quantify errors in the predicted size and shape of the pollutant plume, through the structure component, the over- or under-prediction of the pollutant concentrations, through the amplitude component, and the position of the pollutant plume, through the location component. The quantitative results of the SAL evaluation are similar for both models and close to a subjective visual inspection of the predictions. A negative structure component for both models, throughout the entire 60-hour plume dispersion simulation, indicates that the modelled plumes are too small and/or too peaked compared to the observed plume at all times. The amplitude component for both models is strongly positive at the start of the simulation, indicating that surface concentrations are over-predicted by both models for the first 24 hours, but modelled concentrations are within a factor of 2 of the observations at later times. Finally, for both models, the location component is small for the first 48 hours after the start of the tracer release, indicating that the modelled plumes are situated close to the observed plume early on in the simulation, but this plume location error grows at later times. The SAL methodology has also been used to identify differences in the transport of pollution in the dispersion and weather prediction models.
The convection scheme in the weather prediction model is found to transport more pollution vertically out of the boundary layer into the free troposphere than the dispersion model convection scheme resulting in lower pollutant concentrations near the surface and hence a better forecast for this case study.
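The amplitude and location components of SAL can be sketched directly from their definitions, applied to gridded modelled and observed fields (the structure component is omitted for brevity; the normalisations follow the standard SAL formulation rather than any model-specific code):

```python
import numpy as np

def amplitude_component(mod, obs):
    """SAL amplitude component: normalised difference of domain-mean values.
    Ranges over [-2, 2]; positive means the model over-predicts."""
    dm, do = np.mean(mod), np.mean(obs)
    return (dm - do) / (0.5 * (dm + do))

def location_component(mod, obs):
    """First part of the SAL location component: distance between the centres
    of mass of the two fields, scaled by the domain diagonal."""
    def centre(f):
        idx = np.indices(f.shape).reshape(2, -1)
        w = f.ravel() / f.sum()
        return idx @ w
    d = np.hypot(*(centre(mod) - centre(obs)))
    return d / np.hypot(*mod.shape)
```

A uniformly tripled concentration field therefore gives an amplitude component of +1 (over-prediction) and a location component of 0 (coincident centres of mass), matching the sign conventions described above.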

Relevance:

100.00%

Publisher:

Abstract:

Abstract Background Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a change from traditional passive learning methodologies to an active multisensory experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing pathology undergraduate students. Methods Students were randomized to one of the learning methods and the data analyst was blinded to which method of learning the students had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple choice questionnaire. Students' performance was compared across the three moments of assessment, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Results Students that received the game-based method performed better in the post-test assessment only when considering the Anatomy questions section. Students that received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions. Conclusions The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students' short- and long-term knowledge retention.

Relevance:

100.00%

Publisher:

Abstract:

Remote sensing information from spaceborne and airborne platforms continues to provide valuable data for different environmental monitoring applications. In this sense, high spatial resolution imagery is an important source of information for land cover mapping. For the processing of high spatial resolution images, the object-based methodology is one of the most commonly used strategies. In contrast, conventional pixel-based methods, which only use spectral information for land cover classification, are inadequate for classifying this type of image. This research presents a methodology to characterise Mediterranean land covers in high resolution aerial images by means of an object-oriented approach. It uses a self-calibrating multi-band region growing approach optimised by pre-processing the image with a bilateral filter. The obtained results show promise in terms of both segmentation quality and computational efficiency.
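The core of a region growing segmenter can be sketched as follows (a single-band, fixed-tolerance toy; the paper's self-calibrating multi-band version with bilateral pre-filtering is considerably more elaborate): starting from a seed pixel, 4-connected neighbours are absorbed while their intensity stays close to the running region mean.

```python
from collections import deque
import numpy as np

def region_grow(image, seed, tol=10):
    """Grow a region from `seed`, adding 4-connected pixels whose intensity
    stays within `tol` of the running region mean."""
    rows, cols = image.shape
    mask = np.zeros_like(image, dtype=bool)
    mask[seed] = True
    total, count = float(image[seed]), 1
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and not mask[nr, nc]:
                if abs(float(image[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(image[nr, nc])
                    count += 1
                    queue.append((nr, nc))
    return mask
```

Updating the mean as the region grows is what makes the tolerance "self-adjusting" to the region's actual statistics, a simplified analogue of the self-calibration the abstract refers to.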

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, Object Based Image Analysis (OBIA) has been accepted as an effective method for processing high spatial resolution multiband images. This image analysis method starts with the segmentation of the image. Image segmentation in general is a procedure that partitions an image into homogeneous groups (segments). In practice, visual interpretation is often used to assess segmentation quality, and the analysis relies on the experience of the analyst. To address this issue, in this study we evaluate several seed selection strategies for an automatic image segmentation methodology based on a seeded region growing-and-merging approach. To evaluate segmentation quality, segments were subjected to spatial autocorrelation analysis using Moran's I index and to intra-segment variance analysis. We apply the algorithm to the segmentation of an aerial multiband image.
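The spatial autocorrelation check can be sketched with Moran's I under rook (4-neighbour) contiguity; treating the segment as a full rectangular array is a simplifying assumption made here for brevity.

```python
import numpy as np

def morans_i(x):
    """Moran's I spatial autocorrelation of a 2-D array with rook contiguity
    weights. Values near +1 indicate smooth, homogeneous segments; values
    near -1 indicate a checkerboard-like pattern."""
    z = x - x.mean()
    num, w_sum = 0.0, 0.0
    rows, cols = x.shape
    for r in range(rows):
        for c in range(cols):
            for nr, nc in ((r + 1, c), (r, c + 1)):
                if nr < rows and nc < cols:
                    num += 2 * z[r, c] * z[nr, nc]   # symmetric weight, pair counted once
                    w_sum += 2
    return (x.size / w_sum) * num / (z ** 2).sum()
```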

Relevance:

100.00%

Publisher:

Abstract:

A recently proposed colour-based tracking algorithm has been established to track objects in real circumstances [Zivkovic, Z., Krose, B., 2004. An EM-like algorithm for color-histogram-based object tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 798-803]. To improve the performance of this technique in complex scenes, in this paper we propose a new algorithm for optimally adapting the ellipse outlining the objects of interest. The paper presents a Lagrangian-based method for integrating a regularising component into the covariance matrix to be computed. Technically, we intend to reduce the residuals between the estimated probability distribution and the expected one. We argue that, by doing this, the shape of the ellipse can be properly adapted in the tracking stage. Experimental results show that the proposed method has favourable performance in shape adaptation and object localisation.
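The effect of adding a regularising component to the covariance can be illustrated with a simple convex blend toward an isotropic matrix (a stand-in for the paper's Lagrangian formulation; the weight `lam` plays the role of the multiplier and is an assumed parameter):

```python
import numpy as np

def regularise_covariance(cov, lam=0.2, scale=None):
    """Blend an estimated covariance of the tracked ellipse with an isotropic
    prior, keeping the ellipse from collapsing to a degenerate shape."""
    if scale is None:
        scale = np.trace(cov) / cov.shape[0]   # choose the prior to preserve overall size
    return (1 - lam) * cov + lam * scale * np.eye(cov.shape[0])
```

With the default `scale`, the blend preserves the trace of the covariance (the overall ellipse size) while lifting near-zero eigenvalues, so a degenerate estimate still yields a usable ellipse.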

Relevance:

100.00%

Publisher:

Abstract:

Traditional content-based filtering methods usually utilize text extraction and classification techniques for building user profiles as well as representations of contents, i.e. item profiles. These methods have some disadvantages, e.g. a mismatch between user profile terms and item profile terms, leading to low performance. Some of the disadvantages can be overcome by incorporating a common ontology which enables representing both the users' and the items' profiles with concepts taken from the same vocabulary. We propose a new content-based method for filtering and ranking the relevancy of items for users, which utilizes a hierarchical ontology. The method measures the similarity of the user's profile to the items' profiles, considering the existence of mutual concepts in the two profiles, as well as the existence of "related" concepts, according to their position in the ontology. The proposed filtering algorithm computes the similarity between the users' profiles and the items' profiles, and rank-orders the relevant items according to their relevancy to each user. The method is being implemented in ePaper, a personalized electronic newspaper project, utilizing a hierarchical ontology designed specifically for the classification of news items. It can, however, be utilized in other domains and extended to other ontologies.
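The "related concepts by position in the ontology" idea can be sketched with a Wu-Palmer-style similarity over a parent map (illustrative only; the ePaper ontology and its weighting scheme are not reproduced here): concepts sharing a deep common ancestor score close to 1, while concepts related only near the root score close to 0.

```python
def ancestors(concept, parent):
    """Path from a concept up to the ontology root, inclusive."""
    path = [concept]
    while concept in parent:
        concept = parent[concept]
        path.append(concept)
    return path

def similarity(a, b, parent):
    """Wu-Palmer-style similarity in a hierarchical ontology:
    2 * depth(lowest common ancestor) / (depth(a) + depth(b))."""
    pa, pb = ancestors(a, parent), ancestors(b, parent)
    lca = next(c for c in pa if c in pb)
    depth = lambda c: len(ancestors(c, parent))
    return 2 * depth(lca) / (depth(a) + depth(b))
```

Ranking items by this score against the concepts in a user profile gives partial credit for related, not just identical, concepts, which is precisely how the ontology mitigates the term-mismatch problem described above.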