921 results for Facial Object Based Method


Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel adaptive safe-band for quantization-based audio watermarking methods, aiming to improve robustness. A considerable number of audio watermarking methods have been developed using quantization-based techniques, which are generally vulnerable to signal processing attacks. For these conventional quantization-based techniques, robustness can be marginally improved by choosing larger step sizes, at the cost of significant perceptual quality degradation. We first introduce a fixed-size safe-band between two quantization steps to improve robustness; this safe-band acts as a buffer to withstand certain types of attacks. We then further improve robustness by adaptively changing the size of the safe-band based on the audio signal feature used for watermarking. Compared with the conventional quantization-based method and the fixed-size safe-band method, the proposed adaptive safe-band quantization method is more robust to attacks. The effectiveness of the proposed technique is demonstrated by simulation results. © 2014 IEEE.
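The mechanism the abstract builds on is quantization index modulation (QIM): a bit selects one of two interleaved quantization grids, and detection picks the nearer grid. A minimal sketch of plain QIM, the baseline the safe-band extends, is shown below; the step size and feature values are illustrative, and the paper's safe-band sizing and adaptation rule are not reproduced here.

```python
import numpy as np

def qim_embed(x, bit, step=1.0):
    """Embed one bit into feature x: bit 0 -> quantize onto the base grid,
    bit 1 -> onto the grid shifted by half a step."""
    dither = 0.0 if bit == 0 else step / 2.0
    return step * np.round((x - dither) / step) + dither

def qim_detect(y, step=1.0):
    """Recover the bit by finding which sub-lattice has the nearer point."""
    d0 = abs(y - step * np.round(y / step))
    d1 = abs(y - (step * np.round((y - 0.5 * step) / step) + 0.5 * step))
    return 0 if d0 <= d1 else 1
```

Any perturbation smaller than a quarter of the step leaves the detected bit unchanged; the paper's safe-band widens this margin without enlarging the step, and the adaptive variant tunes the band per signal feature.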

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a convex geometry (CG)-based method for blind separation of nonnegative sources. First, the inaccessible source matrix is normalized to be column-sum-to-one by mapping the available observation matrix. Then, its zero-samples are found by searching the facets of the convex hull spanned by the mapped observations. Based on these zero-samples, a quadratic cost function with respect to each row of the unmixing matrix, together with a linear constraint on the involved variables, is proposed. An algorithm is then presented to estimate the unmixing matrix by solving a classical convex optimization problem. Unlike traditional blind source separation (BSS) methods, the CG-based method requires neither the independence assumption nor the uncorrelation assumption. Compared with BSS methods that are specifically designed to separate nonnegative sources, the proposed method requires a weaker sparsity condition. Simulation results illustrate the performance of the method.
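The normalization step can be sketched directly. Under the linear model X = A S with nonnegative entries, scaling each observation column to sum to one maps the data into the convex hull of the similarly scaled mixing columns, which is the geometry the facet search exploits. The mixing matrix and sources below are invented for illustration.

```python
import numpy as np

def column_sum_to_one(X):
    """Scale each observation column so it sums to one; mapped columns lie
    in the convex hull of the mapped mixing columns."""
    return X / X.sum(axis=0, keepdims=True)

# Assumed mixing matrix and nonnegative sources containing zero-samples.
A = np.array([[0.7, 0.2],
              [0.3, 0.8]])
S = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 1.0]])
X = column_sum_to_one(A @ S)
```

Samples where one source is zero (the first two columns of S above) map exactly onto the hull's extreme points, which is why locating zero-samples via the facets identifies the mixing geometry.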

Relevance:

100.00%

Publisher:

Abstract:

The aim of this paper is to design and develop an optimal motion cueing algorithm (MCA), based on the genetic algorithm (GA), that can generate high-fidelity motions within the motion simulator's physical limitations. Both angular velocity and linear acceleration are adopted as inputs to the MCA for producing the higher-order optimal washout filter. The linear quadratic regulator (LQR) method is used to constrain the human perception error between the real and simulated driving tasks. To develop the optimal MCA, the latest mathematical models of the vestibular system and simulator motion are taken into account. A reference frame with the center of rotation at the driver's head is employed to eliminate false motion cues caused by rotation of the simulator relative to the translational motion of the driver's head, as well as to reduce the workspace displacement. To improve the developed LQR-based optimal MCA, a new strategy based on optimal control theory and the GA is devised. The objective is to reproduce a signal that closely follows the reference signal and avoids false motion cues by adjusting the parameters obtained from the LQR-based optimal washout filter. This is achieved by taking a series of factors into account, including the vestibular sensation error between the real and simulated cases, the main dynamic limitations, the human threshold limiter in tilt coordination, the cross-correlation coefficient, and the human sensation error fluctuation. Notably, other related investigations in the literature normally do not consider the effects of these factors. The proposed GA-optimized MCA is implemented in MATLAB/Simulink. The results show the effectiveness of the proposed GA-based method in enhancing human sensation, maximizing reference shape tracking, and reducing workspace usage.
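The LQR machinery underlying the washout filter can be illustrated with a generic discrete-time Riccati iteration. The double-integrator plant below is an assumed toy example, not the paper's vestibular or simulator model, and the GA tuning layer is not reproduced.

```python
import numpy as np

def dlqr(A, B, Q, R, iters=500):
    """Discrete-time LQR gain K via fixed-point Riccati iteration:
    P <- Q + A'P(A - BK) with K = (R + B'PB)^-1 B'PA."""
    P = Q.copy()
    for _ in range(iters):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
    return K

# Double-integrator toy plant (assumed example, not the simulator dynamics).
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.0], [dt]])
K = dlqr(A, B, Q=np.eye(2), R=np.array([[1.0]]))
```

With the gain in hand, the closed-loop matrix A - BK is stable, which is the property the optimal washout filter relies on before the GA adjusts its parameters.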

Relevance:

100.00%

Publisher:

Abstract:

Background: Educational computer games are examples of computer-assisted learning objects, representing an educational strategy of growing interest. Given the changes in the digital world over the last decades, students of the current generation expect technology to be used in advancing their learning, requiring a shift from traditional passive learning methodologies to an active, multisensory, experimental learning methodology. The objective of this study was to compare a computer game-based learning method with a traditional learning method, regarding learning gains and knowledge retention, as a means of teaching head and neck Anatomy and Physiology to Speech-Language and Hearing Pathology undergraduate students. Methods: Students were randomized to one of the learning methods, and the data analyst was blinded to which method each student had received. Students' prior knowledge (i.e. before undergoing the learning method), short-term knowledge retention and long-term knowledge retention (i.e. six months after undergoing the learning method) were assessed with a multiple-choice questionnaire. Students' performance was compared across the three assessment moments, both for the mean total score and for separate mean scores for the Anatomy questions and the Physiology questions. Results: Students who received the game-based method performed better in the post-test assessment only for the Anatomy questions. Students who received the traditional lecture performed better in both the post-test and the long-term post-test when considering the Anatomy and Physiology questions together. Conclusions: The game-based learning method is comparable to the traditional learning method in general and in short-term gains, while the traditional lecture still seems to be more effective at improving students' short- and long-term knowledge retention.

Relevance:

100.00%

Publisher:

Abstract:

Remote sensing information from spaceborne and airborne platforms continues to provide valuable data for different environmental monitoring applications. In this sense, high spatial resolution imagery is an important source of information for land cover mapping. Conventional pixel-based methods, which only use spectral information for land cover classification, are inadequate for classifying this type of image; for the processing of high spatial resolution images, the object-based methodology is therefore one of the most commonly used strategies. This research presents a methodology to characterise Mediterranean land covers in high resolution aerial images by means of an object-oriented approach. It uses a self-calibrating multi-band region growing approach optimised by pre-processing the image with bilateral filtering. The obtained results show promise in terms of both segmentation quality and computational efficiency.
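The core of region growing is a flood-fill that absorbs neighbouring pixels whose values stay close to the running region statistics. The sketch below is single-band with a fixed tolerance, so it illustrates the mechanism only; the paper's approach is multi-band and self-calibrating, with bilateral pre-filtering not shown here.

```python
import numpy as np
from collections import deque

def region_grow(img, seed, tol=10.0):
    """4-connected region growing from one seed: absorb a neighbour when
    its intensity is within tol of the current region mean."""
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    mask[seed] = True
    total, count = float(img[seed]), 1
    frontier = deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and not mask[nr, nc]:
                if abs(float(img[nr, nc]) - total / count) <= tol:
                    mask[nr, nc] = True
                    total += float(img[nr, nc])
                    count += 1
                    frontier.append((nr, nc))
    return mask
```

On a toy image whose left two columns are bright and the rest dark, growing from the top-left corner segments exactly the bright strip.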

Relevance:

100.00%

Publisher:

Abstract:

In the last decade, Object Based Image Analysis (OBIA) has been accepted as an effective method for processing high spatial resolution multiband images. This image analysis method starts with the segmentation of the image. Image segmentation in general is a procedure to partition an image into homogeneous groups (segments). In practice, visual interpretation is often used to assess the quality of segmentation, and the analysis relies on the experience of an analyst. To address this issue, in this study we evaluate several seed selection strategies for an automatic image segmentation methodology based on a seeded region growing-merging approach. To evaluate segmentation quality, segments were subjected to spatial autocorrelation analysis using Moran's I index and intra-segment variance analysis. We apply the algorithm to the segmentation of an aerial multiband image.
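Moran's I, the segmentation-quality statistic named in the abstract, has a compact closed form. The sketch below computes it for any weight matrix; the four-site chain adjacency is an invented toy example, not the paper's image data.

```python
import numpy as np

def morans_i(values, W):
    """Moran's I = (n / sum(W)) * z'Wz / z'z, with z the centred values
    and W a spatial weight matrix (w_ii = 0). Near +1: neighbours are
    alike (homogeneous segments); near -1: checkerboard dispersion."""
    z = np.asarray(values, dtype=float)
    z = z - z.mean()
    return (len(z) / W.sum()) * (z @ W @ z) / (z @ z)

# Chain adjacency for four sites on a strip (rook contiguity).
W = np.zeros((4, 4))
for i in range(3):
    W[i, i + 1] = W[i + 1, i] = 1.0
```

Two clustered values followed by two different clustered values score positively, while perfectly alternating values score -1, matching the interpretation used for intra-segment homogeneity checks.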

Relevance:

100.00%

Publisher:

Abstract:

A recently proposed colour-based tracking algorithm tracks objects in real circumstances [Zivkovic, Z., Krose, B., 2004. An EM-like algorithm for color-histogram-based object tracking. In: Proc. IEEE Conf. on Computer Vision and Pattern Recognition, pp. 798-803]. To improve the performance of this technique in complex scenes, in this paper we propose a new algorithm for optimally adapting the ellipse outlining the objects of interest. This paper presents a Lagrangian-based method to integrate a regularising component into the covariance matrix to be computed. Technically, we intend to reduce the residuals between the estimated probability distribution and the expected one. We argue that, by doing this, the shape of the ellipse can be properly adapted in the tracking stage. Experimental results show that the proposed method has favourable performance in shape adaptation and object localisation.
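The effect of regularising the covariance that defines the tracking ellipse can be illustrated with generic shrinkage toward a scaled identity. This is only a stand-in for the paper's Lagrangian formulation, which is not spelled out in the abstract; the weight lam is a hypothetical fixed parameter.

```python
import numpy as np

def regularize_cov(cov, lam=0.1):
    """Shrink a sample covariance toward (trace/n) * I so the ellipse it
    defines stays well-conditioned even when the data are degenerate.
    Generic shrinkage sketch, not the paper's exact regulariser."""
    cov = np.asarray(cov, dtype=float)
    n = cov.shape[0]
    return (1.0 - lam) * cov + lam * (np.trace(cov) / n) * np.eye(n)
```

A rank-deficient covariance (an ellipse collapsed to a line) becomes strictly positive definite while its trace, i.e. the overall spread, is preserved.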

Relevance:

100.00%

Publisher:

Abstract:

Traditional content-based filtering methods usually utilize text extraction and classification techniques for building user profiles as well as representations of contents, i.e. item profiles. These methods have some disadvantages, e.g. a mismatch between user profile terms and item profile terms, leading to low performance. Some of these disadvantages can be overcome by incorporating a common ontology, which enables representing both the users' and the items' profiles with concepts taken from the same vocabulary. We propose a new content-based method for filtering and ranking the relevancy of items for users, which utilizes a hierarchical ontology. The method measures the similarity of the user's profile to the items' profiles, considering the existence of mutual concepts in the two profiles, as well as the existence of "related" concepts, according to their position in the ontology. The proposed filtering algorithm computes the similarity between the users' profiles and the items' profiles, and rank-orders the relevant items according to their relevancy to each user. The method is being implemented in ePaper, a personalized electronic newspaper project, utilizing a hierarchical ontology designed specifically for the classification of news items. It can, however, be utilized in other domains and extended to other ontologies.
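One simple way to score "related" concepts by their position in a hierarchy is the inverse of the path length to the closest common ancestor. The mini-ontology below is invented (the ePaper news ontology is not given in the abstract), so this is only an illustrative proxy for the method's concept relatedness.

```python
# Hypothetical mini-ontology (child -> parent); concepts are invented.
parent = {"football": "sport", "tennis": "sport",
          "sport": "news", "politics": "news"}

def path_to_root(concept):
    path = [concept]
    while path[-1] in parent:
        path.append(parent[path[-1]])
    return path

def concept_sim(a, b):
    """1 / (1 + hops to the closest common ancestor): identical concepts
    score 1.0; siblings score higher than concepts in distant branches."""
    pa, pb = path_to_root(a), path_to_root(b)
    for up, c in enumerate(pa):
        if c in pb:
            return 1.0 / (1 + up + pb.index(c))
    return 0.0
```

Profile-to-profile similarity can then aggregate these pairwise concept scores, so an item about "tennis" still partially matches a user interested in "football" through their shared "sport" ancestor.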

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a novel rank-based method for image watermarking. In the watermark embedding process, the host image is divided into blocks, followed by the 2-D discrete cosine transform (DCT). For each image block, a secret key is employed to randomly select a set of DCT coefficients suitable for watermark embedding. Watermark bits are inserted into an image block by modifying the set of DCT coefficients using a rank-based embedding rule. In the watermark detection process, the corresponding detection matrices are formed from the received image using the secret key. Afterward, the watermark bits are extracted by checking the ranks of the detection matrices. Since the proposed watermarking method only uses two DCT coefficients to hide one watermark bit, it can achieve very high embedding capacity. Moreover, our method is free of host signal interference. This desired feature and the usage of an error buffer in watermark embedding result in high robustness against attacks. Theoretical analysis and experimental results demonstrate the effectiveness of the proposed method.
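The abstract's rank-based rule on a pair of coefficients can be sketched as an ordering constraint: the bit decides which coefficient must dominate, and a buffer (the "error buffer" mentioned above) enforces a robustness margin. The buffer size and the modification rule below are simplified assumptions, not the paper's exact scheme.

```python
def embed_bit(c1, c2, bit, buffer=2.0):
    """Order-based embedding on a coefficient pair: make the required
    coefficient dominate by at least `buffer` (simplified sketch)."""
    hi, lo = max(c1, c2), min(c1, c2)
    if hi - lo < buffer:                 # widen the gap if too small
        m = (hi + lo) / 2.0
        hi, lo = m + buffer / 2.0, m - buffer / 2.0
    return (hi, lo) if bit == 0 else (lo, hi)

def detect_bit(c1, c2):
    """Read the bit back from the ordering of the pair."""
    return 0 if c1 >= c2 else 1
```

Because detection only compares ranks, the host values themselves never interfere with the decision, which is the host-interference-free property the abstract highlights.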

Relevance:

100.00%

Publisher:

Abstract:

An estimation of costs for maintenance and rehabilitation is subject to variation due to the uncertainties of input parameters. This paper presents the results of an analysis to identify the input parameters that affect the prediction of variation in road deterioration. Road data obtained from 1688 km of a national highway located in the tropical northeast of Queensland, Australia, were used in the analysis. Data were analysed using a probability-based method, the Monte Carlo simulation technique, together with HDM-4's roughness prediction model. The results indicated that, among the input parameters, the variability of pavement strength, rut depth, annual equivalent axle load and initial roughness affected the variability of the predicted roughness. The second part of the paper presents an analysis assessing the variation in cost estimates due to the variability of the identified critical input parameters.
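Monte Carlo propagation of input uncertainty works by sampling the uncertain inputs and pushing each draw through the deterioration model. The distributions and the linear "increment" below are invented for illustration; HDM-4's incremental roughness model is far more detailed.

```python
import random
from statistics import mean, stdev

def simulate_roughness(n=20000, seed=1):
    """Monte Carlo sketch: sample uncertain inputs, evaluate a toy
    roughness model, and collect the output distribution."""
    random.seed(seed)
    iri = []
    for _ in range(n):
        strength = random.gauss(5.0, 0.5)    # pavement strength (assumed)
        axle_load = random.gauss(1.2, 0.2)   # annual equivalent axle load
        iri0 = random.gauss(2.0, 0.1)        # initial roughness, IRI m/km
        iri.append(iri0 + 3.0 * axle_load / strength)  # toy increment
    return iri

samples = simulate_roughness()
```

The spread of the output samples is exactly the "variation in the predicted roughness" the paper attributes to the critical inputs; repeating the run with one input held fixed reveals that input's contribution.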

Relevance:

100.00%

Publisher:

Abstract:

The application of object-based approaches to the problem of extracting vegetation information from images requires accurate delineation of individual tree crowns. This paper presents an automated method for individual tree crown detection and delineation that applies a simplified PCNN model in spectral feature space, followed by post-processing using morphological reconstruction. The algorithm was tested on high resolution multi-spectral aerial images, and the results are compared with two existing image segmentation algorithms. The results demonstrate that our algorithm outperforms the other two solutions, with an average accuracy of 81.8%.

Relevance:

100.00%

Publisher:

Abstract:

Identifying an individual from surveillance video is a difficult, time-consuming and labour-intensive process. The proposed system aims to streamline this process by filtering out unwanted scenes and enhancing an individual's face through super-resolution. An automatic face recognition system is then used to identify the subject or present the human operator with likely matches from a database. A person tracker is used to speed up the subject detection and super-resolution process by tracking moving subjects and cropping a region of interest around the subject's face, reducing the number and size of the image frames to be super-resolved. In this paper, experiments demonstrate how the optical-flow super-resolution method improves surveillance imagery both for visual inspection and for automatic face recognition on Eigenface and Elastic Bunch Graph Matching systems. The optical-flow method has also been benchmarked against the "hallucination" algorithm, interpolation methods and the original low-resolution images. Results show that both super-resolution algorithms improved recognition rates significantly. Although the hallucination method resulted in slightly higher recognition rates, the optical-flow method produced fewer artifacts and more visually correct images suitable for human consumption.
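The Eigenface back-end in this pipeline is PCA on vectorised face images followed by nearest-neighbour matching in the reduced subspace. The six-pixel "faces" below are invented data, so this is only a minimal sketch of the recognition stage, not the paper's system.

```python
import numpy as np

def eigenface_fit(faces, k=2):
    """PCA on vectorised images (Eigenfaces): mean face + top-k components."""
    mean = faces.mean(axis=0)
    _, _, Vt = np.linalg.svd(faces - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face, mean, comps):
    # Coordinates of a face in the eigenface subspace.
    return comps @ (face - mean)

# Tiny 6-pixel "faces": two identities, two images each (invented data).
faces = np.array([[1.0, 0, 0, 0, 0, 0],
                  [1.1, 0, 0, 0, 0, 0],
                  [0, 0, 0, 0, 0, 1.0],
                  [0, 0, 0, 0, 0, 1.1]])
mean, comps = eigenface_fit(faces)
query = project(np.array([0.9, 0, 0, 0, 0, 0.05]), mean, comps)
dists = [float(np.linalg.norm(query - project(f, mean, comps))) for f in faces]
```

Super-resolving the probe before projection sharpens exactly these subspace coordinates, which is why the enhancement step improves recognition rates.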

Relevance:

100.00%

Publisher:

Abstract:

The paper presents a fast and robust stereo object recognition method. The method is currently unable to identify the rotation of objects, which makes it particularly well suited to locating spheres, since they are rotationally symmetric; approximate methods for locating non-spherical objects have been developed. Fundamental to the method is that the correspondence problem is solved using information about the dimensions of the object being located. This is in contrast to previous stereo object recognition systems, where the scene is first reconstructed by point-matching techniques. The method is suitable for real-time application on low-power devices.
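Using known object dimensions to solve correspondence can be sketched with pinhole geometry: the apparent diameter of a sphere implies a depth, and only left/right detection pairs whose disparity-implied depth agrees with that size-implied depth are accepted. The focal length, baseline, diameter and detections below are all assumed numbers, not the paper's setup.

```python
def size_depth(f_px, true_diam_m, apparent_diam_px):
    """Depth of a sphere from its apparent image diameter (pinhole model)."""
    return f_px * true_diam_m / apparent_diam_px

def match_by_size(f_px, baseline_m, diam_m, left, right, tol=0.05):
    """Pair left/right detections (x_pixel, apparent_diam_px) whose
    size-implied depth agrees with the disparity-implied depth --
    a toy version of dimension-driven correspondence."""
    pairs = []
    for xl, dl in left:
        for xr, dr in right:
            disparity = xl - xr
            if disparity <= 0:
                continue
            z_disp = f_px * baseline_m / disparity
            z_size = (size_depth(f_px, diam_m, dl)
                      + size_depth(f_px, diam_m, dr)) / 2.0
            if abs(z_disp - z_size) / z_size < tol:
                pairs.append(((xl, dl), (xr, dr)))
    return pairs

# One true sphere at 2 m plus a decoy detection in the right image.
left = [(500.0, 80.0)]
right = [(460.0, 80.0), (300.0, 80.0)]
pairs = match_by_size(800.0, 0.1, 0.2, left, right)
```

The decoy produces a disparity-implied depth far from the size-implied one and is rejected, so no dense point matching or scene reconstruction is needed.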

Relevance:

100.00%

Publisher:

Abstract:

This paper presents the feasibility of using structural modal strain energy as a parameter in a correlation-based damage detection method for truss bridge structures. It is an extension of the damage detection method adopting the multiple damage location assurance criterion. In this paper, the sensitivity of modal strain energy to damage, obtained from the analytical model, is incorporated into the correlation objective function. First, the sensitivity matrix of modal strain energy to damage is computed offline; for an arbitrary damage case, the correlation coefficient (objective function) is calculated by multiplying the sensitivity matrix by the damage vector. Then, a genetic algorithm is used to iteratively search for the damage vector maximising the correlation between the corresponding (hypothesised) modal strain energy change and its measured counterpart. The proposed method is simulated and compared with conventional methods, e.g. the frequency-error method, the coordinate modal assurance criterion and the multiple damage location assurance criterion using mode shapes, on a numerical truss bridge structure. The results demonstrate that the modal strain energy correlation method yields acceptable damage detection outcomes with less computational effort, even under noise-contaminated conditions.
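The correlation objective the GA maximises has the standard assurance-criterion form: the squared inner product between the measured change vector and the sensitivity-predicted change S d, normalised by their energies. The sensitivity matrix and damage vectors below are invented toy values, not the truss model's.

```python
import numpy as np

def msec(S, delta_meas, d):
    """Assurance-criterion correlation between a measured modal strain
    energy change and the analytical prediction S @ d for candidate
    damage vector d; equals 1.0 for a perfect match."""
    pred = S @ d
    return (delta_meas @ pred) ** 2 / ((delta_meas @ delta_meas) * (pred @ pred))

# Assumed offline sensitivity matrix (3 modes x 2 elements) and a
# "measured" change generated by 30% damage in element 1.
S = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [1.0, 1.0]])
delta = S @ np.array([0.3, 0.0])
```

A GA would search over candidate vectors d for the maximiser of this score; the true damage pattern scores 1.0 while a wrong-element hypothesis scores far lower.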

Relevance:

100.00%

Publisher:

Abstract:

Road accidents are of great concern to road and transport departments around the world, causing tremendous loss and danger to the public. Reducing accident rates and crash severity are imperative goals that governments, road and transport authorities, and researchers aim to achieve. In Australia, road crash trauma costs the nation A$15 billion annually. Five people are killed, and 550 are injured, every day. Each fatality costs the taxpayer A$1.7 million, and serious injury cases can cost many times the cost of a fatality. Crashes are in general uncontrolled events that depend on a number of interrelated factors such as driver behaviour, traffic conditions, travel speed, road geometry and condition, and vehicle characteristics (e.g. tyre type, pressure and condition, and suspension type and condition). Skid resistance is considered one of the most important surface characteristics, as it has a direct impact on traffic safety. Attempts have been made worldwide to study the relationship between skid resistance and road crashes. Most of these studies used statistical regression and correlation methods to analyse this relationship, and their outcomes were mixed and inconclusive. The objective of this paper is to present a probability-based method, from an ongoing study, for identifying the relationship between skid resistance and road crashes. Historical skid resistance and crash data for a road network located on the tropical east coast of Queensland were analysed using the probability-based method. The analysis methodology and the resulting relationships between skid resistance, road characteristics and crashes are presented.