Abstract:
Importance of the field: Antibiotic resistance in bacterial pathogens has increased worldwide, leading to treatment failures. Concerns have been raised about the use of biocides as a contributing factor to the risk of antimicrobial resistance (AMR) development. In vitro studies demonstrating increases in resistance have often been cited as evidence of increased risk. It is therefore important to understand the mechanisms of resistance employed by bacteria toward biocides used in consumer products and their potential to impart cross-resistance to therapeutic antibiotics. Areas covered: In this review, the mechanisms of resistance and cross-resistance reported in the literature toward biocides commonly used in consumer products are summarized. The physiological and molecular techniques used in describing and examining these mechanisms are reviewed, and the application of these techniques for the systematic assessment of biocides for their potential to drive resistance and/or cross-resistance is discussed. Expert opinion: Guidelines on the use of biocides for household or industrial purposes should be monitored and regulated to avoid the emergence of multidrug-resistant (MDR) strains. Genetic and molecular methods to monitor the development of biocide resistance should be developed and included in preclinical and clinical studies.
Abstract:
In the design of practical web page classification systems, one often encounters a situation in which the labeled training set is created by choosing some examples from each class, but the class proportions in this set are not the same as those of the test distribution to which the classifier will actually be applied. The problem is made worse when the amount of training data is also small. In this paper we explore and adapt binary SVM methods that make use of unlabeled data from the test distribution, viz., Transductive SVMs (TSVMs) and expectation regularization/constraint (ER/EC) methods, to deal with this situation. We empirically show that when the labeled training data is small, a TSVM designed using the class ratio tuned by minimizing the loss on the labeled set yields the best performance; its performance is good even when the deviation between the class ratios of the labeled training set and the test set is quite large. When the labeled training data is sufficiently large, an unsupervised Gaussian mixture model can be used to get a very good estimate of the class ratio in the test set; when this estimate is used, both TSVM and ER/EC give their best possible performance, with TSVM coming out superior. The ideas in the paper can easily be extended to multi-class SVMs and MaxEnt models.
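A minimal sketch of the class-ratio estimation step described above, assuming a two-component Gaussian mixture fitted to unlabeled test features with scikit-learn; the synthetic data and the mapping of mixture components to classes are illustrative, not the paper's setup.

```python
# Sketch (not the paper's code): estimate the binary class ratio of an
# unlabeled test set with a 2-component Gaussian mixture; the estimated
# ratio can then be handed to a transductive/semi-supervised learner.
import numpy as np
from sklearn.mixture import GaussianMixture

def estimate_class_ratio(X_unlabeled, random_state=0):
    """Fit a 2-component GMM and return the mixing weight of each component.
    Mapping components to the actual classes still requires the labeled set."""
    gmm = GaussianMixture(n_components=2, covariance_type="full",
                          random_state=random_state)
    gmm.fit(X_unlabeled)
    return gmm.weights_

# Synthetic 2-class test data whose true ratio is 0.7 / 0.3.
rng = np.random.RandomState(0)
X_test = np.vstack([rng.normal(0.0, 1.0, size=(700, 5)),
                    rng.normal(3.0, 1.0, size=(300, 5))])
print(estimate_class_ratio(X_test))  # roughly [0.7, 0.3] (component order may differ)
```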
Abstract:
The present approach uses stopwords and the gaps that occur between successive stopwords (formed by content words) as features for sentiment classification.
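A minimal sketch of one way to realize such features, assuming a small hand-picked stopword list and a simple count-based encoding; the list and the feature scheme are illustrative, not the paper's exact setup.

```python
# Sketch: represent a sentence by the stopwords it contains and by the lengths
# of the content-word "gaps" between successive stopwords.
# The small stopword list below is illustrative, not the paper's list.
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "was", "it", "and", "but", "not",
             "of", "to", "in", "this", "that", "i", "very"}

def stopword_gap_features(text):
    tokens = text.lower().split()
    feats = Counter()
    gap = 0
    for tok in tokens:
        if tok in STOPWORDS:
            feats[f"sw={tok}"] += 1      # which stopword occurred
            feats[f"gap={gap}"] += 1     # how many content words preceded it
            gap = 0
        else:
            gap += 1
    feats[f"gap={gap}"] += 1             # trailing gap after the last stopword
    return feats

print(stopword_gap_features("this movie was not very good and i loved the soundtrack"))
```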
Abstract:
Time series classification deals with the problem of classification of data that is multivariate in nature. This means that one or more of the attributes is in the form of a sequence. The notion of similarity or distance, used in time series data, is significant and affects the accuracy, time, and space complexity of the classification algorithm. There exist numerous similarity measures for time series data, but each of them has its own disadvantages. Instead of relying upon a single similarity measure, our aim is to find the near optimal solution to the classification problem by combining different similarity measures. In this work, we use genetic algorithms to combine the similarity measures so as to get the best performance. The weightage given to different similarity measures evolves over a number of generations so as to get the best combination. We test our approach on a number of benchmark time series datasets and present promising results.
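A minimal sketch of the idea, assuming two stand-in similarity measures (Euclidean and a correlation-based distance) whose weights are evolved by a tiny genetic algorithm scored by 1-NN accuracy on a validation split; the measures, GA settings and toy data are illustrative, not the paper's configuration.

```python
# Sketch: evolve weights that combine several time-series distance measures
# so that 1-NN classification accuracy on a validation split is maximized.
# The two stand-in measures and the tiny GA below are illustrative.
import numpy as np

def euclidean(a, b):
    return np.linalg.norm(a - b)

def corr_distance(a, b):
    return 1.0 - np.corrcoef(a, b)[0, 1]

MEASURES = [euclidean, corr_distance]

def combined_distance(a, b, w):
    return sum(wi * m(a, b) for wi, m in zip(w, MEASURES))

def one_nn_accuracy(w, X_tr, y_tr, X_val, y_val):
    hits = 0
    for x, y in zip(X_val, y_val):
        d = [combined_distance(x, xt, w) for xt in X_tr]
        hits += int(y_tr[int(np.argmin(d))] == y)
    return hits / len(y_val)

def evolve_weights(X_tr, y_tr, X_val, y_val, pop_size=16, generations=10, seed=0):
    rng = np.random.RandomState(seed)
    pop = rng.rand(pop_size, len(MEASURES))
    for _ in range(generations):
        fitness = np.array([one_nn_accuracy(w, X_tr, y_tr, X_val, y_val) for w in pop])
        parents = pop[np.argsort(fitness)[-pop_size // 2:]]        # keep the fitter half
        children = parents[rng.randint(len(parents), size=pop_size - len(parents))]
        pop = np.vstack([parents, np.abs(children + rng.normal(0, 0.1, children.shape))])
    fitness = np.array([one_nn_accuracy(w, X_tr, y_tr, X_val, y_val) for w in pop])
    return pop[int(np.argmax(fitness))]

# Toy data: class 0 = noisy sine, class 1 = noisy cosine.
rng = np.random.RandomState(1)
t = np.linspace(0, 2 * np.pi, 50)
X = np.array([np.sin(t) + rng.normal(0, 0.3, 50) for _ in range(40)] +
             [np.cos(t) + rng.normal(0, 0.3, 50) for _ in range(40)])
y = np.array([0] * 40 + [1] * 40)
print("best weights:", evolve_weights(X[::2], y[::2], X[1::2], y[1::2]))
```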
Abstract:
We describe our k_t-resummation model for total cross-sections and show its application to pp and p̄p scattering. The model uses mini-jets to drive the rise of the cross-section and soft gluon resummation in the infrared region to transform the violent rise of the mini-jet cross-section into a logarithmic behaviour in agreement with the Froissart bound.
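For context, the Froissart (Froissart-Martin) bound cited above constrains the high-energy growth of the total cross-section; the standard form is quoted below and is not taken from the paper itself.

```latex
% Froissart--Martin bound on the total cross-section (standard form; s_0 is an arbitrary scale):
\sigma_{\mathrm{tot}}(s) \;\le\; \frac{\pi}{m_\pi^2}\,\ln^2\!\left(\frac{s}{s_0}\right)
\quad \text{as } s \to \infty .
```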
Abstract:
This paper presents a new hierarchical clustering algorithm for crop stage classification using hyperspectral satellite imagery. Amongst the multiple benefits and uses of remote sensing, one of the important applications is to solve the problem of crop stage classification. Modern commercial imaging satellites, owing to their large volume of satellite imagery, offer greater opportunities for automated image analysis. Hence, we propose an unsupervised algorithm, namely the Hierarchical Artificial Immune System (HAIS), consisting of two steps: splitting the cluster centers and merging them. The high dimensionality of the data has been reduced with the help of Principal Component Analysis (PCA). The classification results have been compared with the K-means and Artificial Immune System algorithms. From the results obtained, we conclude that the proposed hierarchical clustering algorithm is accurate.
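A minimal sketch of the pre-processing and baseline comparison described above: PCA-based dimensionality reduction followed by K-means clustering of the pixels. The HAIS algorithm itself is paper-specific and is not reproduced here, and the data shapes are illustrative.

```python
# Sketch: PCA-reduced hyperspectral pixels clustered with K-means, i.e. the
# baseline the abstract compares HAIS against. Data shapes are illustrative.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.RandomState(0)
pixels = rng.rand(5000, 200)          # 5000 pixels x 200 hyperspectral bands (synthetic)

reduced = PCA(n_components=10).fit_transform(pixels)                 # reduce band dimensionality
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(reduced)
print(np.bincount(labels))            # pixels assigned to each crop-stage cluster
```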
Abstract:
This paper presents an efficient approach to the modeling and classification of vehicles using the magnetic signature of the vehicle. A database was created using the magnetic signatures collected over a wide range of vehicles (cars). A vehicle is modeled as an array of magnetic dipoles. The strength of the magnetic dipoles and the separation between them vary for different vehicles and depend on the metallic composition and configuration of the vehicle. Based on the magnetic dipole data model, we present a novel method to extract a feature vector from the magnetic signature. For the classification of vehicles, a linear support vector machine configuration is used to classify the vehicles based on the obtained feature vectors.
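A minimal sketch of the classification stage, assuming fixed-length summary-statistic features computed from a one-dimensional magnetic signature and a linear SVM; the feature set and the synthetic signatures stand in for the paper's dipole-model-based features.

```python
# Sketch: classify vehicles from magnetic signatures with a linear SVM.
# Simple summary-statistic features stand in for the paper's dipole-model features.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import train_test_split

def signature_features(sig):
    return np.array([sig.mean(), sig.std(), sig.max(), sig.min(),
                     np.abs(np.diff(sig)).mean(), np.argmax(np.abs(sig)) / len(sig)])

# Synthetic signatures for two vehicle classes (different dipole strength/spacing).
rng = np.random.RandomState(0)
def make_signature(strength, spacing):
    x = np.linspace(-5, 5, 200)
    return strength * (np.exp(-(x - spacing) ** 2) - np.exp(-(x + spacing) ** 2)) \
           + rng.normal(0, 0.05, x.size)

X = np.array([signature_features(make_signature(1.0, 1.0)) for _ in range(100)] +
             [signature_features(make_signature(1.8, 2.0)) for _ in range(100)])
y = np.array([0] * 100 + [1] * 100)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = LinearSVC(dual=False).fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```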
Abstract:
The goal of optimization in vehicle design is often blurred by the myriad requirements belonging to attributes that may not be closely related. If solutions are sought by optimizing attribute-performance-related objectives separately, starting from a common baseline design configuration as in a traditional design environment, it becomes an arduous task to integrate the potentially conflicting solutions into one satisfactory design. It may thus be more desirable to carry out a combined multi-disciplinary design optimization (MDO) with vehicle weight as the objective function and cross-functional attribute performance targets as constraints. For the particular case of vehicle body structure design, the initial design is likely to be arrived at by taking into account styling, packaging and market-driven requirements. The problem with performing a combined cross-functional optimization is the time associated with running CAE algorithms that can provide a single optimal solution for heterogeneous areas such as NVH and crash safety. In the present paper, a practical MDO methodology is suggested that can be applied to weight optimization of automotive body structures by specifying constraints on frequency and crash performance. Because of the reduced number of cases to be analyzed for crash safety in comparison with other MDO approaches, the present methodology can generate a single size-optimized solution without having to resort to empirical techniques such as response-surface-based prediction of crash performance and associated successive response surface updating for convergence. An example of weight optimization of the spaceframe-based body-in-white (BIW) of an aluminum-intensive vehicle is given to illustrate the steps involved in the current optimization process.
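A toy sketch of the formulation described above (mass as the objective, frequency and crash performance as constraints), assuming closed-form surrogate responses and SciPy's SLSQP solver; the design variables and surrogate formulas are illustrative stand-ins for real CAE analyses.

```python
# Toy sketch of the MDO formulation: minimize structural mass subject to lower
# bounds on a frequency response and a crash (energy absorption) response.
# The closed-form surrogates below are illustrative stand-ins for CAE solvers.
import numpy as np
from scipy.optimize import minimize

def mass(t):               # kg; grows with the panel gauges t (mm)
    return 50.0 + 20.0 * np.sum(t)

def first_frequency(t):    # Hz; illustrative surrogate (stiffer with thicker panels)
    return 18.0 * np.sqrt(np.mean(t))

def crush_energy(t):       # kJ; illustrative surrogate
    return 6.0 * np.sum(t ** 1.5)

t0 = np.array([2.0, 2.0, 2.0])   # initial gauges (mm)
cons = [{"type": "ineq", "fun": lambda t: first_frequency(t) - 25.0},   # >= 25 Hz
        {"type": "ineq", "fun": lambda t: crush_energy(t) - 40.0}]      # >= 40 kJ
res = minimize(lambda t: mass(t), t0, method="SLSQP",
               bounds=[(0.8, 4.0)] * 3, constraints=cons)
print("optimal gauges (mm):", res.x, " mass (kg):", mass(res.x))
```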
Abstract:
Effective conservation and management of natural resources requires up-to-date information on land cover (LC) types and their dynamics. LC dynamics are being captured using multi-resolution remote sensing (RS) data with appropriate classification strategies. RS data combined with important environmental layers (either remotely acquired or derived from ground measurements) would, however, be more effective in addressing LC dynamics and the associated changes. These ancillary layers provide additional information for delineating the decision boundaries of LC classes compared to conventional classification techniques. This communication ascertains the possibility of improved classification accuracy of RS data with ancillary and derived geographical layers such as vegetation index, temperature, digital elevation model (DEM), aspect, slope and texture. This has been implemented in three terrains of varying topography. The study would help in the selection of appropriate ancillary data, depending on the terrain, for better classification results.
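A minimal sketch of the general idea, assuming the ancillary layers are simply stacked with the spectral bands as per-pixel features for a supervised classifier; the array shapes, random-forest choice and synthetic labels are illustrative.

```python
# Sketch: stack spectral bands with ancillary layers (vegetation index, DEM,
# slope, ...) per pixel and train a supervised classifier on labeled pixels.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.RandomState(0)
h, w = 100, 100
bands = rng.rand(h, w, 6)                 # 6 spectral bands (synthetic)
ndvi = rng.rand(h, w, 1)                  # ancillary layers on the same grid
dem = rng.rand(h, w, 1)
slope = rng.rand(h, w, 1)

stack = np.concatenate([bands, ndvi, dem, slope], axis=2)    # H x W x 9
X = stack.reshape(-1, stack.shape[2])                        # one row per pixel
y = rng.randint(0, 4, size=h * w)                            # synthetic LC labels

clf = RandomForestClassifier(n_estimators=100, random_state=0)
train = rng.rand(len(y)) < 0.1                               # 10% labeled pixels
clf.fit(X[train], y[train])
lc_map = clf.predict(X).reshape(h, w)                        # classified LC map
print(np.bincount(lc_map.ravel()))                           # pixels per LC class
```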
Abstract:
This work proposes a boosting-based transfer learning approach for head-pose classification from multiple, low-resolution views. Head-pose classification performance is adversely affected when the source (training) and target (test) data arise from different distributions (due to changes in face appearance, lighting, etc.). Under such conditions, we employ Xferboost, a Logitboost-based transfer learning framework that integrates knowledge from a few labeled target samples with the source model to effectively minimize misclassifications on the target data. Experiments confirm that the Xferboost framework can improve classification performance by up to 6% when knowledge is transferred between the CLEAR and FBK four-view head-pose datasets.
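A minimal sketch of one simple way to combine a source model with a few labeled target samples in a boosted classifier, by appending the source model's decision scores as an extra feature; this is an illustrative stand-in, not the Xferboost algorithm, and the data here are synthetic.

```python
# Sketch: transfer knowledge from a source-domain model to a target-domain
# boosted classifier by appending the source model's decision scores as an
# extra feature for the few labeled target samples. Illustrative only; the
# paper's Xferboost modifies the Logitboost updates directly.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X_src = rng.normal(0.0, 1.0, (1000, 20)); y_src = (X_src[:, 0] + X_src[:, 1] > 0).astype(int)
X_tgt = rng.normal(0.5, 1.2, (40, 20));   y_tgt = (X_tgt[:, 0] + X_tgt[:, 1] > 0).astype(int)

source_model = LogisticRegression(max_iter=1000).fit(X_src, y_src)

def augment(X):
    # Source-model score appended as an additional feature column.
    return np.hstack([X, source_model.decision_function(X).reshape(-1, 1)])

target_model = GradientBoostingClassifier(n_estimators=50, random_state=0)
target_model.fit(augment(X_tgt), y_tgt)       # few labeled target samples + source scores
print(target_model.predict(augment(X_tgt[:5])))
```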
Abstract:
Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head-pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions are varying. The adaptation framework incorporates the reliability of the different face regions for pose estimation under positional variation by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state of the art by 9.5% under the relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic, multi-view head-pose dataset with ground truth, publicly available with this paper.
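A minimal sketch of pose classification with a weighted distance, where per-region weights can be adapted to de-emphasize face regions that become unreliable as the target position changes; the weights, features and gallery data below are illustrative.

```python
# Sketch: nearest-neighbour head-pose estimation with a weighted distance,
# where per-region weights down-weight face regions that become unreliable
# when the target's position changes. Weights and features are illustrative.
import numpy as np

def weighted_distance(a, b, w):
    return np.sqrt(np.sum(w * (a - b) ** 2))

def classify_pose(query, gallery_feats, gallery_poses, w):
    d = [weighted_distance(query, g, w) for g in gallery_feats]
    return gallery_poses[int(np.argmin(d))]

rng = np.random.RandomState(0)
gallery_feats = rng.rand(200, 64)          # e.g. 4 views x 16 region features (synthetic)
gallery_poses = rng.randint(0, 8, 200)     # 8 discrete pan classes
weights = np.ones(64); weights[48:] = 0.3  # de-emphasize an unreliable region block
print(classify_pose(rng.rand(64), gallery_feats, gallery_poses, weights))
```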
Abstract:
In the document classification community, support vector machines and the naïve Bayes classifier are known for their simplicity yet excellent performance. The feature subsets used by these two approaches normally complement each other; however, little has been done to combine them. The essence of this paper is a linear classifier, very similar to these two. We propose a novel way of combining these two approaches, which synthesizes the best of them into a hybrid model. We evaluate the proposed approach using the 20 Newsgroups (20NG) dataset and compare it with its counterparts. Our results strongly corroborate the effectiveness of our approach.
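A minimal sketch of one common way to fuse these two linear classifiers, in the NB-SVM style: scale bag-of-words counts by naïve Bayes log-count ratios and train a linear SVM on the result. This is an illustrative stand-in for the paper's hybrid, shown on a two-class subset of 20NG.

```python
# Sketch: a naive-Bayes / linear-SVM hybrid in the NB-SVM style -- scale
# bag-of-words counts by NB log-count ratios, then train a linear SVM.
# Illustrative stand-in for the paper's hybrid model.
import numpy as np
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

cats = ["rec.autos", "sci.space"]                       # binary subset of 20NG
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

vec = CountVectorizer(binary=True)
Xtr = vec.fit_transform(train.data); Xte = vec.transform(test.data)
ytr = np.array(train.target); yte = np.array(test.target)

# NB log-count ratio r = log( p(f|class 1) / p(f|class 0) ) with add-one smoothing.
p = np.asarray(Xtr[ytr == 1].sum(axis=0)).ravel() + 1.0
q = np.asarray(Xtr[ytr == 0].sum(axis=0)).ravel() + 1.0
r = np.log((p / p.sum()) / (q / q.sum()))

clf = LinearSVC(dual=False).fit(Xtr.multiply(r), ytr)   # SVM on NB-weighted features
print("test accuracy:", clf.score(Xte.multiply(r), yte))
```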
Abstract:
Classification of a large document collection involves dealing with a huge feature space in which each distinct word is a feature. In such an environment, classification is a costly task both in terms of running time and computing resources. Further, considering every feature for classification does not guarantee optimal results, because the classifier is likely to overfit. In such a context, feature selection is inevitable. This work analyses feature selection methods, explores the relations among them, and attempts to find a minimal subset of features which are discriminative for document classification.
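A minimal sketch of one standard instance of such feature selection: rank word features by a chi-squared score and keep a small discriminative subset before training the classifier; the dataset and the cut-off are illustrative.

```python
# Sketch: select a small, discriminative subset of word features with a
# chi-squared filter before training a document classifier.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression

train = fetch_20newsgroups(subset="train", remove=("headers", "footers", "quotes"))
test = fetch_20newsgroups(subset="test", remove=("headers", "footers", "quotes"))

vec = TfidfVectorizer(stop_words="english")
Xtr, Xte = vec.fit_transform(train.data), vec.transform(test.data)

selector = SelectKBest(chi2, k=2000)        # keep 2000 of the ~100k word features
Xtr_sel = selector.fit_transform(Xtr, train.target)
Xte_sel = selector.transform(Xte)

clf = LogisticRegression(max_iter=1000).fit(Xtr_sel, train.target)
print("test accuracy with 2000 features:", clf.score(Xte_sel, test.target))
```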
Abstract:
Seismic site classifications are used to represent site effects for estimating hazard parameters (response spectral ordinates) at the soil surface. Seismic site classifications have generally been carried out using the average shear wave velocity and/or standard penetration test N-values of the top 30 m of soil layers, according to the recommendations of the National Earthquake Hazards Reduction Program (NEHRP) or the International Building Code (IBC). The site classification system in the NEHRP and the IBC is based on studies carried out in the United States, where soil layers extend up to several hundred meters before reaching any distinct soil-bedrock interface, and may not be directly applicable to other regions, especially regions having shallow geological deposits. This paper investigates the influence of rock depth on site classes based on the recommendations of the NEHRP and the IBC. For this study, soil sites having a wide range of average shear wave velocities (or standard penetration test N-values) have been collected from different parts of Australia, China, and India. Shear wave velocities of the rock layers underneath the soil layers have also been collected at depths from a few meters to 180 m. It is shown that a site classification system based on the top 30 m of soil layers often yields stiffer site classes for soil sites having shallow rock depths (rock depths less than 25 m from the soil surface). A new site classification system based on the average soil thickness up to engineering bedrock has been proposed herein, which is considered more representative for soil sites in shallow-bedrock regions. It has been observed that the response spectral ordinates, amplification factors, and site periods estimated using one-dimensional shear wave analysis considering the depth of engineering bedrock differ from those obtained considering the top 30 m of soil layers.
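For context, the quantity underlying the NEHRP/IBC scheme is the travel-time-averaged shear wave velocity of the top 30 m (Vs30), computed as the total depth divided by the summed layer travel times. The small function below computes it and, alternatively, the same average only down to an assumed engineering-bedrock depth, in the spirit of the proposed scheme; the layer values are illustrative.

```python
# Sketch: travel-time-averaged shear wave velocity over the top 30 m (Vs30)
# and, alternatively, over the soil column down to engineering bedrock.
# Vs_avg = sum(d_i) / sum(d_i / Vs_i). Layer values below are illustrative.
def average_vs(thicknesses_m, velocities_mps, depth_limit_m):
    total_d, travel_time = 0.0, 0.0
    for d, vs in zip(thicknesses_m, velocities_mps):
        d = min(d, depth_limit_m - total_d)      # clip the last layer at the limit
        if d <= 0:
            break
        total_d += d
        travel_time += d / vs
    return total_d / travel_time

layers_d = [3.0, 7.0, 8.0, 50.0]           # layer thicknesses (m); last layer is rock
layers_vs = [180.0, 250.0, 400.0, 1500.0]  # shear wave velocities (m/s)

print("Vs30:", average_vs(layers_d, layers_vs, 30.0))                   # NEHRP/IBC style
print("Vs over 18 m soil column:", average_vs(layers_d, layers_vs, 18.0))  # bedrock at 18 m
```

With these illustrative layers the 30 m average includes part of the stiff rock and therefore looks considerably stiffer than the average over the 18 m soil column alone, which is the effect the abstract highlights for shallow-bedrock sites.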
Abstract:
There are many popular models available for the classification of documents, such as the Naïve Bayes classifier, k-Nearest Neighbors and Support Vector Machines. In all these cases, the representation is based on the "bag of words" model. This model does not capture the actual semantic meaning of a word in a particular document; semantics are better captured by the proximity of words and their occurrence in the document. We propose a new "Bag of Phrases" model to capture this discriminative power of phrases for text classification. We present a novel algorithm to extract phrases from the corpus using the well-known topic model, Latent Dirichlet Allocation (LDA), and to integrate them into a vector space model for classification. Experiments show better classifier performance with the new Bag of Phrases model compared with related representation models.
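A minimal sketch of a simplified phrase-aware pipeline, assuming candidate phrases are bigrams, LDA topics select the salient ones, and the selected phrases are appended to a unigram vector space model; this is a simplification for illustration, not the paper's phrase-extraction algorithm.

```python
# Sketch: a simplified Bag-of-Phrases pipeline. Fit LDA on bigram counts, keep
# the most probable bigrams of each topic as a "phrase" vocabulary, and append
# those phrase features to a unigram vector space model for classification.
import numpy as np
from scipy.sparse import hstack
from sklearn.datasets import fetch_20newsgroups
from sklearn.decomposition import LatentDirichletAllocation
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

cats = ["rec.sport.hockey", "sci.med"]
train = fetch_20newsgroups(subset="train", categories=cats)
test = fetch_20newsgroups(subset="test", categories=cats)

# 1. Candidate phrases are bigrams; LDA topics pick out the salient ones.
bigram_vec = CountVectorizer(ngram_range=(2, 2), stop_words="english", min_df=5)
B = bigram_vec.fit_transform(train.data)
lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(B)
names = np.array(bigram_vec.get_feature_names_out())
phrases = sorted({names[i] for topic in lda.components_ for i in topic.argsort()[-20:]})

# 2. Vector space model: unigrams plus the selected phrases.
uni_vec = CountVectorizer(stop_words="english", min_df=5)
phrase_vec = CountVectorizer(ngram_range=(2, 2), stop_words="english", vocabulary=phrases)
Xtr = hstack([uni_vec.fit_transform(train.data), phrase_vec.fit_transform(train.data)])
Xte = hstack([uni_vec.transform(test.data), phrase_vec.transform(test.data)])

clf = LogisticRegression(max_iter=1000).fit(Xtr, train.target)
print("test accuracy:", clf.score(Xte, test.target))
```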