996 results for verification algorithm
Abstract:
Experimental studies have found that when state-of-the-art probabilistic linear discriminant analysis (PLDA) speaker verification systems are trained on out-domain data, verification performance degrades significantly because of the mismatch between development and evaluation data. To overcome this problem, we propose a novel unsupervised inter-dataset variability (IDV) compensation approach that compensates for the dataset mismatch. The IDV-compensated PLDA system achieves over 10% relative improvement in EER over the out-domain PLDA system by effectively compensating for the mismatch between in-domain and out-domain data.
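As a rough illustration of the idea, the sketch below removes an estimated inter-dataset shift direction from i-vectors before PLDA training. This is a simplified, NAP-style compensation in NumPy, not the authors' published estimator; the data, dimensions, and single-direction choice are all toy assumptions.

```python
import numpy as np

def idv_projection(out_domain_ivecs, in_domain_ivecs):
    """Estimate an inter-dataset variability (IDV) direction from the
    difference between dataset means and build a projection that
    removes it (a simplified, NAP-style compensation sketch)."""
    shift = np.mean(in_domain_ivecs, axis=0) - np.mean(out_domain_ivecs, axis=0)
    V = shift[:, None] / np.linalg.norm(shift)       # (dim, 1) shift basis
    # Projection removing the IDV subspace: P = I - V V^T.
    return np.eye(V.shape[0]) - V @ V.T

# Usage: compensate i-vectors before PLDA training/scoring (toy data).
rng = np.random.default_rng(0)
out_dom = rng.normal(0.0, 1.0, (1000, 400))          # out-domain i-vectors
in_dom = rng.normal(0.5, 1.0, (200, 400))            # unlabelled in-domain
P = idv_projection(out_dom, in_dom)
compensated = out_dom @ P.T                          # IDV-compensated data
```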
Abstract:
In this paper, we propose a highly reliable fault diagnosis scheme for incipient low-speed rolling element bearing failures. The scheme consists of fault feature calculation, discriminative fault feature analysis, and fault classification. The proposed approach first computes wavelet-based fault features, including the relative wavelet packet node energies and entropies, by applying a wavelet packet transform to an incoming acoustic emission signal. The most discriminative fault features are then selected from the original feature vector using discriminative fault feature analysis based on a binary bat algorithm (BBA). Finally, the proposed approach employs one-against-all multiclass support vector machines to identify multiple low-speed rolling element bearing defects. This study compares the proposed BBA-based dimensionality reduction scheme with four other dimensionality reduction methodologies in terms of classification performance. Experimental results show that the proposed methodology outperforms the other dimensionality reduction approaches, yielding average classification accuracies of 94.9%, 95.8%, and 98.4% at bearing rotational speeds of 20 revolutions per minute (RPM), 80 RPM, and 140 RPM, respectively.
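The feature-extraction and classification pipeline can be sketched as follows, assuming PyWavelets and scikit-learn. The wavelet family, decomposition level, and synthetic data are illustrative choices, and the BBA feature-selection step is omitted for brevity.

```python
import numpy as np
import pywt
from sklearn.multiclass import OneVsRestClassifier
from sklearn.svm import SVC

def wpt_features(signal, wavelet="db4", level=3):
    """Relative wavelet packet node energies plus an entropy term
    (one sketch of the paper's feature set; wavelet and level are
    assumptions, not the published configuration)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    energies = np.array([np.sum(node.data ** 2)
                         for node in wp.get_level(level, order="natural")])
    rel = energies / energies.sum()                   # relative node energy
    entropy = -np.sum(rel * np.log(rel + 1e-12))      # energy entropy
    return np.concatenate([rel, [entropy]])

# One-against-all multiclass SVM over the extracted features (toy data).
rng = np.random.default_rng(0)
X = np.array([wpt_features(rng.normal(size=2048)) for _ in range(60)])
y = rng.integers(0, 3, size=60)        # three hypothetical defect classes
clf = OneVsRestClassifier(SVC(kernel="rbf")).fit(X, y)
print(clf.predict(X[:5]))
```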
Abstract:
The proliferation of the web presents an unsolved problem: automatically analyzing billions of pages of natural language. We introduce a scalable algorithm that clusters hundreds of millions of web pages into hundreds of thousands of clusters. It does this on a single mid-range machine using efficient algorithms and compressed document representations, and it is applied to two web-scale crawls covering tens of terabytes. ClueWeb09 and ClueWeb12 contain 500 million and 733 million web pages, respectively, and were clustered into 500,000 to 700,000 clusters. To the best of our knowledge, such fine-grained clustering has not been demonstrated before. Previous approaches clustered a sample, which limits the maximum number of discoverable clusters. The proposed EM-tree algorithm uses the entire collection for clustering and produces several orders of magnitude more clusters than existing algorithms. Fine-grained clustering is necessary for meaningful clustering of massive collections, where the number of distinct topics grows linearly with collection size. These fine-grained clusters show improved cluster quality when assessed with two novel evaluations that use ad hoc search relevance judgments and spam classifications for external validation. These evaluations solve the problem of assessing cluster quality where categorical labeling is unavailable or infeasible.
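The sketch below uses recursive m-way k-means as a simplified stand-in for the EM-tree: each node splits its documents into a fixed number of children until clusters are small, so the number of leaf clusters grows with the collection rather than being capped by a sampled codebook. The real EM-tree operates over compressed document signatures and differs in its update rule; branching factor, depth, and data here are toy assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def tree_cluster(X, branching=4, max_leaf=50, depth=0, max_depth=6):
    """Recursive m-way k-means: split each node's documents into
    `branching` children until clusters are small, yielding very large
    numbers of fine-grained leaf clusters at scale."""
    if len(X) <= max_leaf or depth >= max_depth:
        return [X]                                    # leaf cluster
    labels = KMeans(n_clusters=branching, n_init=3).fit_predict(X)
    leaves = []
    for k in range(branching):
        leaves.extend(tree_cluster(X[labels == k], branching,
                                   max_leaf, depth + 1, max_depth))
    return leaves

docs = np.random.default_rng(0).normal(size=(2000, 32))  # toy doc vectors
clusters = tree_cluster(docs)
print(len(clusters), "leaf clusters")
```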
Abstract:
This paper discusses three ways of applying a single-objective binary genetic algorithm (GA) to wind farm design. The applications differ in the binary encoding schemes used in the GA. The first encoding is the traditional one with fixed wind turbine positions. The second varies the initial positions obtained from the first method, using binary digits to represent the coordinate of each wind turbine on the X or Y axis. The third mixes the first encoding with an additional scheme that adds four binary digits to represent one of the unavailable plots. The goal of this paper is to demonstrate how a single-objective binary GA can be applied and how the wind turbines are distributed, with best fitness, under various conditions. The discussion focuses on the scenario of wind direction varying from 0° to 45°. Results show that choosing appropriate wind turbine positions is more significant than choosing the number of wind turbines, since the former has a larger influence on the overall farm fitness than the latter. The farm achieves its best fitness, efficiency, and total power for wind directions between 20° and 30°.
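A minimal binary GA over a turbine-placement grid might look like the sketch below, assuming NumPy. The chromosome is one bit per candidate plot (the paper's first encoding style); the fitness function, wake penalty, grid size, and GA parameters are toy assumptions, not the paper's wind farm model.

```python
import numpy as np

rng = np.random.default_rng(0)
GRID = 10 * 10                     # 10x10 candidate plots (assumption)

def fitness(bits):
    """Toy farm fitness: reward installed turbines, penalise turbines
    sharing a column (a crude stand-in for wake losses along the
    prevailing wind direction)."""
    layout = bits.reshape(10, 10)
    power = layout.sum()
    wake_penalty = np.maximum(layout.sum(axis=0) - 1, 0).sum()
    return power - 0.8 * wake_penalty

def binary_ga(pop_size=40, generations=100, p_mut=0.02):
    pop = rng.integers(0, 2, (pop_size, GRID))
    for _ in range(generations):
        scores = np.array([fitness(ind) for ind in pop])
        # Binary tournament selection.
        idx = rng.integers(0, pop_size, (pop_size, 2))
        parents = pop[np.where(scores[idx[:, 0]] > scores[idx[:, 1]],
                               idx[:, 0], idx[:, 1])]
        # One-point crossover on consecutive pairs, then bit-flip mutation.
        children = parents.copy()
        cuts = rng.integers(1, GRID, pop_size // 2)
        for i, c in enumerate(cuts):
            children[2*i, c:], children[2*i+1, c:] = \
                parents[2*i+1, c:].copy(), parents[2*i, c:].copy()
        flip = rng.random(children.shape) < p_mut
        pop = np.where(flip, 1 - children, children)
    best = max(pop, key=fitness)
    return best, fitness(best)

layout, score = binary_ga()
print("best fitness:", score)
```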
Abstract:
This paper proposes adding a weighted median Fisher discriminator (WMFD) projection prior to length-normalised Gaussian probabilistic linear discriminant analysis (GPLDA) modelling in order to compensate for additional session variation. In limited microphone data conditions, a linear-weighted approach is introduced to increase the influence of the microphone speech dataset. The linear-weighted, WMFD-projected GPLDA system shows improvements in EER and DCF values over the pooled LDA- and WMFD-projected GPLDA systems in the interview-interview condition, as the WMFD projection extracts more speaker-discriminant information from a limited number of sessions per speaker, and the linear-weighted GPLDA approach estimates reliable model parameters from limited microphone data.
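The sketch below shows the general shape of a weighted Fisher-style projection in NumPy and SciPy: per-sample weights (e.g., up-weighting scarce microphone sessions) enter the scatter matrices before the discriminant directions are solved. WMFD additionally uses median-based speaker centres, which this sketch replaces with weighted means, so it is an approximation of the idea rather than the paper's WMFD.

```python
import numpy as np
from scipy.linalg import eigh

def weighted_fisher_projection(X, speakers, weights, n_dims):
    """Weighted Fisher discriminant sketch: sample weights enter the
    within- and between-class scatter matrices (weighted means stand
    in for WMFD's median-based centres)."""
    mu = np.average(X, axis=0, weights=weights)
    d = X.shape[1]
    Sw, Sb = np.zeros((d, d)), np.zeros((d, d))
    for spk in np.unique(speakers):
        idx = speakers == spk
        w = weights[idx]
        mu_s = np.average(X[idx], axis=0, weights=w)
        diff = X[idx] - mu_s
        Sw += (w[:, None] * diff).T @ diff
        Sb += w.sum() * np.outer(mu_s - mu, mu_s - mu)
    # Generalized eigenproblem S_b v = lambda S_w v; keep top directions.
    vals, vecs = eigh(Sb, Sw + 1e-6 * np.eye(d))
    return vecs[:, np.argsort(vals)[::-1][:n_dims]]

# Toy usage: up-weight a scarce "microphone" subset before projection.
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 20))
spk = rng.integers(0, 30, 300)
w = np.where(rng.random(300) < 0.2, 3.0, 1.0)
projected = X @ weighted_fisher_projection(X, spk, w, n_dims=5)
```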
Abstract:
In this paper we introduce a novel domain-invariant covariance normalization (DICN) technique that relocates both in-domain and out-domain i-vectors into a third, dataset-invariant space, improving out-domain PLDA speaker verification with a very small number of unlabelled in-domain adaptation i-vectors. By capturing the dataset variance from a global mean using both development out-domain i-vectors and limited unlabelled in-domain i-vectors, we obtain domain-invariant representations of the PLDA training data. The DICN-compensated out-domain PLDA system is shown to perform as well as in-domain PLDA training with as few as 500 unlabelled in-domain i-vectors for NIST-2010 SRE and 2000 unlabelled in-domain i-vectors for NIST-2008 SRE, and it achieves considerable relative improvement over both out-domain and in-domain PLDA development when more i-vectors are available.
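A minimal NumPy sketch of the normalization idea: estimate the covariance of i-vector variation around the global mean from both domains, then whiten with it so both domains land in a common space. The published DICN estimator may differ in detail; the data and regularisation constant here are assumptions.

```python
import numpy as np

def dicn_transform(out_ivecs, in_ivecs):
    """Sketch of domain-invariant covariance normalization: estimate
    the covariance of i-vector shifts around the global mean and
    whiten with it, mapping both domains into a common space."""
    glob = np.mean(np.vstack([out_ivecs, in_ivecs]), axis=0)
    shifts = np.vstack([out_ivecs - glob, in_ivecs - glob])
    C = shifts.T @ shifts / len(shifts)
    # Whitening transform C^{-1/2} via eigendecomposition.
    vals, vecs = np.linalg.eigh(C)
    W = vecs @ np.diag(1.0 / np.sqrt(vals + 1e-8)) @ vecs.T
    return lambda x: (x - glob) @ W.T

# Toy usage: a few hundred unlabelled in-domain i-vectors suffice
# to estimate the transform alongside the out-domain set.
rng = np.random.default_rng(1)
out_dom = rng.normal(0.0, 1.0, (1000, 400))
in_dom = rng.normal(0.3, 1.2, (500, 400))
transform = dicn_transform(out_dom, in_dom)
plda_train = transform(out_dom)       # domain-invariant representations
```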
Abstract:
Structural identification (St-Id) can be considered the process of updating a finite element (FE) model of a structural system to match the measured response of the structure. This paper presents the St-Id of a laboratory-based steel through-truss cantilevered bridge with a suspended span. There are a total of 600 degrees of freedom (DOFs) in the superstructure, plus additional DOFs in the substructure. The St-Id of the bridge model used the modal parameters from a preliminary modal test in the objective function of a global optimisation technique using a layered genetic algorithm with pattern-search step (GAPS). Each layer of the St-Id process involved grouping the structural parameters into a number of updating parameters and running parallel optimisations, with the number of updating parameters increased at each layer. In order to accelerate the optimisation and ensure improved diversity within the population, a pattern-search step was applied to the fittest individuals at the end of each generation of the GA. The GAPS process was able to replicate the mode shapes of the first two lateral sway modes and the first vertical bending mode to a high degree of accuracy and, to a lesser degree, the mode shape of the first lateral bending mode. The mode shape and frequency of the torsional mode did not match well. The frequencies of the first lateral bending mode, the first longitudinal mode and the first vertical mode matched very well. The frequency of the first sway mode was lower, and that of the second sway mode higher, than the true values, indicating a possible problem with the FE model. Improvements to the model and the St-Id process, including the use of multiple FE models in a multi-layered, multi-solution GAPS St-Id approach, will be presented at the upcoming conference and compared with the results presented in this paper.
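The hybrid structure of GAPS can be illustrated with the sketch below: a plain GA whose fittest individual is refined by a coordinate-polling pattern search each generation. The objective is a toy frequency-residual stand-in (the real objective evaluates an FE model), and all GA parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def objective(theta):
    """Toy stand-in for the St-Id objective: squared residual between
    'measured' and model modal frequencies (real case runs an FE model)."""
    target = np.array([2.1, 3.4, 5.8])
    return np.sum((np.sqrt(np.abs(theta)) - target) ** 2)

def pattern_search(theta, step=0.5, tol=1e-3):
    """Coordinate polling: try +/- step on each parameter, keep any
    improvement, halve the step when no move helps."""
    f = objective(theta)
    while step > tol:
        improved = False
        for i in range(len(theta)):
            for d in (step, -step):
                cand = theta.copy()
                cand[i] += d
                fc = objective(cand)
                if fc < f:
                    theta, f, improved = cand, fc, True
        if not improved:
            step *= 0.5
    return theta

def gaps(pop_size=30, generations=50):
    pop = rng.uniform(0, 50, (pop_size, 3))
    for _ in range(generations):
        scores = np.array([objective(p) for p in pop])
        order = np.argsort(scores)
        # Pattern-search step applied to the fittest individual.
        pop[order[0]] = pattern_search(pop[order[0]])
        # Simple truncation selection plus Gaussian mutation.
        elite = pop[order[: pop_size // 2]]
        pop = np.vstack([elite, elite + rng.normal(0, 1, elite.shape)])
    return min(pop, key=objective)

print(gaps())
```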
Abstract:
In this paper, we develop and validate a new Statistically Assisted Fluid Registration Algorithm (SAFIRA) for brain images. A non-statistical version of this algorithm was first implemented in [2] and re-formulated using Lagrangian mechanics in [3]. Here we extend the algorithm to 3D: given 3D brain images from a population, vector fields and their corresponding deformation matrices are computed in a first round of registrations using the non-statistical implementation. Covariance matrices for both the deformation matrices and the vector fields are then obtained and incorporated (separately or jointly) into the regularizing (i.e., non-conservative Lagrangian) terms, creating four versions of the algorithm. We evaluated the accuracy of each variant using the manually labeled LPBA40 dataset, which provides ground-truth anatomical segmentations. We also compared the power of the different algorithms using tensor-based morphometry (a technique for analyzing local volumetric differences in brain structure) applied to 46 3D brain scans from healthy monozygotic twins.
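A minimal sketch of the statistical ingredient, assuming NumPy: a Mahalanobis-style penalty on displacement vectors under a covariance learned from a first round of (non-statistical) registrations. In SAFIRA this statistic enters the non-conservative Lagrangian regularizing terms; the field sizes and data here are toy assumptions.

```python
import numpy as np

def statistical_regularizer(v, cov_inv):
    """Penalise displacement vectors v (N x 3) by their average
    Mahalanobis norm under a population-derived covariance."""
    return np.einsum("ni,ij,nj->", v, cov_inv, v) / len(v)

# First round: collect displacement fields across the population.
rng = np.random.default_rng(0)
fields = rng.normal(size=(40, 5000, 3))      # 40 subjects, toy fields
flat = fields.reshape(-1, 3)
cov = np.cov(flat, rowvar=False)             # 3x3 covariance of vectors
cov_inv = np.linalg.inv(cov)

# Second round: this energy would be added to the registration cost.
v_new = rng.normal(size=(5000, 3))
print(statistical_regularizer(v_new, cov_inv))
```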
Abstract:
Piezoelectric polymers based on polyvinylidene fluoride (PVDF) are of interest as smart materials for novel space-based telescope applications. Dimensional adjustments of adaptive thin polymer films are achieved via controlled charge deposition. Predicting their long-term performance requires a detailed understanding of the piezoelectric property changes that develop during space environmental exposure. The overall materials performance is governed by a combination of chemical and physical degradation processes occurring in low Earth orbit, as established by our past laboratory-based materials performance experiments (see report SAND 2005-6846). Molecular changes are primarily induced via radiative damage, while physical damage from temperature and atomic oxygen exposure is evident as depoling, loss of orientation and surface erosion. The current project extension has allowed us to design and fabricate small experimental units to be exposed to low Earth orbit environments as part of the Materials International Space Station Experiments program. The space exposure of these piezoelectric polymers will verify the observed trends and degradation pathways, and provide feedback on the use of piezoelectric polymer films in space. This will be the first time that PVDF-based adaptive polymer films are operated and exposed to combined atomic oxygen, solar UV and temperature variations in an actual space environment. The experiments are designed to be fully autonomous, involving cyclic application of excitation voltages, sensitive film position sensors and remote data logging. This mission will provide critically needed feedback on the long-term performance and degradation of such materials and, ultimately, on the feasibility of large, adaptive, low-weight optical systems using these polymers in space.
Abstract:
The growth of data-center-dependent services has made energy optimization of data centers one of the most pressing challenges of today's Information Age. Green, energy-efficient measures are essential for reducing carbon footprint and exorbitant energy costs. However, inefficient application management in data centers results in high energy consumption and low resource utilization efficiency. Unfortunately, in most cases, deploying an energy-efficient application management solution inevitably degrades the resource utilization efficiency of the data center. To address this problem, this paper presents a penalty-based genetic algorithm (GA) that solves a profile-based application assignment problem while maintaining a trade-off between power consumption and resource utilization. Case studies show that the penalty-based GA is highly scalable and provides 16% to 32% better solutions than a greedy algorithm.
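The core of a penalty-based GA is its fitness function, sketched below in NumPy: infeasible assignments are not discarded but charged a large penalty per unit of capacity violation, so the search can cross infeasible regions while feasible solutions ultimately dominate. The power model, capacities, and penalty weight are illustrative assumptions.

```python
import numpy as np

def penalty_fitness(assignment, app_cpu, server_cap, idle_w, peak_w,
                    penalty=1000.0):
    """Penalty-based fitness sketch: power of active servers (linear in
    utilisation between idle and peak draw) plus a large penalty for
    every unit of capacity violation. Lower is better."""
    n_servers = len(server_cap)
    load = np.bincount(assignment, weights=app_cpu, minlength=n_servers)
    util = load / server_cap
    active = load > 0
    power = np.sum(active * (idle_w + (peak_w - idle_w) * np.minimum(util, 1)))
    violation = np.sum(np.maximum(load - server_cap, 0))
    return power + penalty * violation

# Toy instance: 8 applications assigned to 3 servers.
app_cpu = np.array([4, 8, 2, 6, 3, 5, 7, 1], dtype=float)
server_cap = np.array([16.0, 16.0, 16.0])
assign = np.array([0, 0, 1, 1, 1, 2, 2, 2])
print(penalty_fitness(assign, app_cpu, server_cap, idle_w=100, peak_w=250))
```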
Abstract:
This project analyses and evaluates the integrity assurance mechanisms used in four authenticated encryption schemes based on symmetric block ciphers. These schemes are all cross-chaining block cipher modes that claim to provide both confidentiality and integrity assurance simultaneously, in one pass over the data. The investigation includes assessing the validity of an existing forgery attack on certain schemes, applying the attack approach to other schemes, and implementing the attacks to verify the claimed probabilities of successful forgeries. For these schemes, the theoretical basis of the attack was developed, the attack algorithm was implemented, and computer simulations were performed for experimental verification.
Abstract:
In the past few years, the virtual machine (VM) placement problem has been studied intensively, and many algorithms for it have been proposed. However, these algorithms have not been widely used in today's cloud data centers because they do not consider the cost of migrating from the current VM placement to the new, optimal one. As a result, the gain from optimizing VM placement may be outweighed by the migration cost. To address this issue, this paper presents a penalty-based genetic algorithm (GA) for the VM placement problem that considers the migration cost in addition to the energy consumption and the total inter-VM traffic flow of the new placement. The GA has been implemented and evaluated experimentally, and the results show that it outperforms two well-known algorithms for the VM placement problem.
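The three-part objective can be sketched as a single cost function, assuming NumPy: energy of the new placement, inter-VM traffic crossing hosts, and migration cost counted as the VMs that must move from the current placement. The power model and weights are illustrative, not the paper's calibration; a GA would minimise this cost over candidate placements.

```python
import numpy as np

def placement_cost(new_p, cur_p, vm_cpu, host_cap, traffic,
                   w_energy=1.0, w_traffic=0.5, w_migrate=2.0):
    """Combined VM placement cost: toy energy model, inter-host
    traffic, and number of migrations from the current placement."""
    load = np.bincount(new_p, weights=vm_cpu, minlength=len(host_cap))
    energy = np.sum(load > 0) + np.sum(load / host_cap)   # toy power model
    # Traffic between VM pairs placed on different hosts.
    cross = new_p[:, None] != new_p[None, :]
    traffic_cost = np.sum(traffic * cross) / 2.0
    migrations = np.sum(new_p != cur_p)
    return (w_energy * energy + w_traffic * traffic_cost
            + w_migrate * migrations)

# Toy instance: 6 VMs, 3 hosts.
rng = np.random.default_rng(0)
vm_cpu = rng.uniform(1, 4, 6)
traffic = rng.uniform(0, 1, (6, 6))
traffic = (traffic + traffic.T) / 2
cur = rng.integers(0, 3, 6)
new = rng.integers(0, 3, 6)
print(placement_cost(new, cur, vm_cpu, np.full(3, 10.0), traffic))
```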
Abstract:
Although live VM migration has been studied intensively, the problem of live migration of multiple interdependent VMs has hardly been investigated. The central issue in migrating multiple interdependent VMs is how to schedule the migrations, as the schedule directly affects the total migration time and the total downtime of those VMs. Aiming to minimize both the total migration time and the total downtime simultaneously, this paper presents a Strength Pareto Evolutionary Algorithm 2 (SPEA2) approach to the multi-VM migration scheduling problem. The SPEA2 has been evaluated experimentally, and the results show that it generates a set of VM migration schedules with a shorter total migration time and a shorter total downtime than an existing genetic algorithm, the Random Key Genetic Algorithm (RKGA). This paper also studies the scalability of the SPEA2.
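The core of SPEA2's fitness assignment for this bi-objective problem is sketched below in NumPy: each schedule's strength is the number of schedules it dominates, and its raw fitness is the sum of its dominators' strengths (0 marks non-dominated schedules). The density estimation and archive truncation of full SPEA2 are omitted, and the objective values are toy data.

```python
import numpy as np

def dominates(a, b):
    """a dominates b if it is no worse in both objectives and strictly
    better in at least one (minimising migration time and downtime)."""
    return np.all(a <= b) and np.any(a < b)

def spea2_raw_fitness(objs):
    """SPEA2 fitness core: strength S = number of schedules dominated;
    raw fitness R = sum of strengths of a schedule's dominators."""
    n = len(objs)
    S = np.array([sum(dominates(objs[i], objs[j]) for j in range(n))
                  for i in range(n)])
    R = np.array([sum(S[j] for j in range(n) if dominates(objs[j], objs[i]))
                  for i in range(n)])
    return R

# Each row: (total migration time, total downtime) of one schedule.
objs = np.array([[120, 4.0], [100, 5.5], [150, 3.0], [130, 6.0]])
print(spea2_raw_fitness(objs))     # 0 marks Pareto-optimal schedules
```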
Abstract:
Projective Hjelmslev planes and affine Hjelmslev planes are generalisations of projective planes and affine planes. We present an algorithm for constructing projective and affine Hjelmslev planes that uses projective planes, affine planes and orthogonal arrays. We show that all 2-uniform projective Hjelmslev planes and all 2-uniform affine Hjelmslev planes can be constructed in this way. As a corollary, it is shown that all 2-uniform affine Hjelmslev planes are subgeometries of 2-uniform projective Hjelmslev planes.
Abstract:
Rolling-element bearing failures are the most frequent problems in rotating machinery and can be catastrophic, causing major downtime. Hence, providing advance failure warning and precise fault detection for such components is pivotal and cost-effective. The vast majority of past research has focused on signal processing and spectral analysis for fault diagnostics in rotating components. In this study, a data mining approach using a machine learning technique called anomaly detection (AD) is presented. The method employs classification techniques to discriminate between defect examples. Two features, kurtosis and the Non-Gaussianity Score (NGS), are extracted to develop the anomaly detection algorithms. The performance of the developed algorithms was examined on real data from a bearing run-to-failure test. Finally, the anomaly detection approach is compared with a popular method, the Support Vector Machine (SVM), to investigate its sensitivity, its accuracy, and its ability to detect anomalies at early stages.
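A minimal sketch of the feature-plus-detector idea, assuming SciPy and scikit-learn: kurtosis plus a non-Gaussianity measure are computed per vibration window, and a one-class detector trained only on healthy data flags departures. The paper's NGS definition may differ; a Kolmogorov-Smirnov distance from a fitted normal is used here as a stand-in, and the signals are synthetic.

```python
import numpy as np
from scipy import stats
from sklearn.svm import OneClassSVM

def features(window):
    """Kurtosis plus a non-Gaussianity stand-in (KS distance from a
    fitted normal); the published NGS definition may differ."""
    k = stats.kurtosis(window)
    z = (window - window.mean()) / window.std()
    ngs = stats.kstest(z, "norm").statistic
    return [k, ngs]

# Train the anomaly detector on healthy vibration windows only.
rng = np.random.default_rng(0)
healthy = np.array([features(rng.normal(size=1024)) for _ in range(200)])
det = OneClassSVM(nu=0.05, gamma="scale").fit(healthy)

# Impulsive (faulty) windows should score as anomalies (-1).
faulty = rng.normal(size=(20, 1024))
faulty[:, ::64] += 8.0                      # inject bearing-like impulses
test = np.array([features(w) for w in faulty])
print(det.predict(test))
```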