955 results for "Multiple classification"


Relevance: 20.00%

Publisher:

Abstract:

Details of the first total syntheses of the sesquiterpenes myltayl-8(12)-ene and 6-epijunicedran-8-ol are described. The aldehyde 13, obtained by Claisen rearrangement of cyclogeraniol, was transformed into the dienones 12 and 18. Boron trifluoride-diethyl ether-mediated cyclization and rearrangement transformed the dienones 12 and 18 into the tricyclic ketones 16 and 17, efficiently creating three and four contiguous quaternary carbon atoms, respectively. Wittig methylenation of 16 furnished (±)-myltayl-8(12)-ene (11), whereas reduction of the ketone 17 furnished (±)-6-epijunicedranol (23).

Abstract:

In this paper we propose a multiple-resource interaction model in a game-theoretic framework to solve resource allocation problems in theater-level military campaigns. An air raid campaign using SEAD aircraft and bombers against an enemy target defended by air defense units is considered as the basic platform. Conditions for the existence of a saddle point in pure strategies are proved, and explicit feedback strategies are obtained for a simplified model with a linear attrition function limited by resource availability. An illustrative example demonstrates the key features.
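The saddle-point condition the abstract refers to is easy to check numerically for a finite matrix game. The sketch below uses a toy payoff matrix, not the paper's attrition model, and tests whether a pure-strategy saddle point exists, i.e. whether maximin equals minimax:

```python
import numpy as np

def pure_saddle_point(A):
    # A[i, j] is a saddle point when it is the minimum of its row and
    # the maximum of its column; equivalently, maximin == minimax.
    maximin = A.min(axis=1).max()
    minimax = A.max(axis=0).min()
    if maximin != minimax:
        return None                  # no saddle point in pure strategies
    i = int(A.min(axis=1).argmax())  # row achieving the maximin value
    j = int(A[i].argmin())           # column where that row attains it
    return i, j

# Toy game: rows = attacker allocations, columns = defender allocations.
A = np.array([[4.0, 3.0, 8.0],
              [6.0, 5.0, 7.0],
              [2.0, 1.0, 9.0]])
print(pure_saddle_point(A))   # (1, 1): A[1, 1] = 5 is its row min and column max
```

Matching pennies, `np.array([[1, -1], [-1, 1]])`, returns `None`: it is the classic game with no pure-strategy saddle point.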

Abstract:

The problem of estimating multiple Carrier Frequency Offsets (CFOs) in the uplink of MIMO-OFDM systems with Co-Channel (CC) and OFDMA-based carrier allocation is considered. A trilinear data model for a generalized multiuser OFDM system is formulated. A novel blind subspace-based estimator of multiple CFOs, built on the Khatri-Rao product, is proposed for arbitrary carrier allocation schemes in OFDMA systems and for CC users in OFDM systems. The method works where the conventional subspace method fails. The performance of the proposed methods is compared with a pilot-based Least-Squares method.
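The Khatri-Rao product underlying the proposed estimator is the column-wise Kronecker product of two matrices with equal column counts. A minimal self-contained definition (function name and example matrices are ours):

```python
import numpy as np

def khatri_rao(A, B):
    # Column-wise Kronecker product: column k of the result is
    # np.kron(A[:, k], B[:, k]).
    assert A.shape[1] == B.shape[1]
    T = np.einsum('ik,jk->ijk', A, B)            # T[i, j, k] = A[i,k] * B[j,k]
    return T.reshape(A.shape[0] * B.shape[0], A.shape[1])

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
print(khatri_rao(A, B))
# [[0 2]
#  [1 0]
#  [0 4]
#  [3 0]]
```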

Abstract:

This paper is on the design and performance analysis of practical distributed space-time codes for wireless relay networks with multiple-antenna terminals. The amplify-and-forward scheme is used in such a way that each relay transmits a scaled version of a linear combination of the received symbols. We propose distributed generalized quasi-orthogonal space-time codes which are distributed among the source antennas and relays, and are valid for any number of relays. Assuming M-PSK and M-QAM signals, we derive a formula for the symbol error probability of the investigated scheme over Rayleigh fading channels. For sufficiently large SNR, we derive a closed-form average SER expression. The simplicity of the asymptotic results provides valuable insights into the performance of cooperative networks and suggests means of optimizing them. Our analytical results are confirmed by simulations using full-rate, full-diversity distributed codes.
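The paper's closed-form SER is for its distributed relay scheme; as a hedged single-link illustration of the kind of closed form involved, the snippet below checks the standard BPSK error probability over flat Rayleigh fading against a Monte Carlo simulation (all parameters are illustrative, not from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
snr_db = 10.0
g = 10 ** (snr_db / 10)                      # average receive SNR (linear)

# Closed-form BPSK error probability over flat Rayleigh fading (single link).
p_theory = 0.5 * (1 - np.sqrt(g / (1 + g)))

# Monte Carlo check: h ~ CN(0, 1) fading, complex AWGN of variance 1/g,
# coherent detection with perfect channel knowledge.
n = 200_000
bits = rng.integers(0, 2, n)
s = 2.0 * bits - 1.0
h = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(1 / (2 * g))
r = h * s + w
decisions = (np.real(np.conj(h) * r) > 0).astype(int)
p_sim = np.mean(decisions != bits)
print(p_theory, p_sim)                       # the two agree closely
```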

Abstract:

The integration of different wireless networks, such as GSM and WiFi, into a two-tier hybrid wireless network is increasingly popular and economical. Efficient bandwidth management, call admission control strategies and mobility management are important issues in supporting multiple types of services with different bandwidth requirements in hybrid networks. In particular, bandwidth is a critical commodity because of the type of transactions supported by these hybrid networks, which may have varying bandwidth and time requirements. In this paper, we consider such a problem in a hybrid wireless network installed in a superstore environment and design a bandwidth management algorithm based on priority-level classification of the incoming transactions. Our scheme uses a downlink transaction scheduling algorithm, which decides how to schedule the outgoing transactions based on their priority level while making efficient use of the available bandwidth. The transaction scheduling algorithm is used to maximize the number of transaction executions. The proposed scheme is simulated in a superstore environment with multiple rooms. The performance results show that the proposed scheme can considerably improve bandwidth utilization by reducing transaction blocking and accommodating more essential transactions at the peak time of the business.
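A minimal sketch of priority-based admission, assuming an illustrative `(priority, bandwidth, id)` transaction layout that is not from the paper:

```python
def schedule(transactions, capacity):
    # Greedy sketch of priority-based downlink scheduling: admit the
    # highest-priority transactions first (priority 0 = most essential)
    # until the available bandwidth is exhausted.
    admitted, blocked = [], []
    for prio, bw, tid in sorted(transactions):
        if bw <= capacity:
            capacity -= bw
            admitted.append(tid)
        else:
            blocked.append(tid)
    return admitted, blocked

txns = [(0, 5, 'billing'), (1, 3, 'inventory'),
        (2, 8, 'ad-video'), (1, 4, 'price-check')]
print(schedule(txns, 10))   # admits 'billing' and 'inventory', blocks the rest
```

A real scheduler would also account for transaction deadlines; the greedy pass above only captures the priority/bandwidth trade-off.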

Abstract:

In this paper we consider the problem of learning an n × n kernel matrix from m (≥ 1) similarity matrices under a general convex loss. Past research has extensively studied the m = 1 case and has derived several algorithms which require sophisticated techniques like ACCP, SOCP, etc. The existing algorithms do not apply if one uses arbitrary losses, and often cannot handle the m > 1 case. We present several provably convergent iterative algorithms, where each iteration requires either an SVM or a Multiple Kernel Learning (MKL) solver for the m > 1 case. One of the major contributions of the paper is to extend the well-known Mirror Descent (MD) framework to handle a Cartesian product of psd matrices. This novel extension leads to an algorithm, called EMKL, which solves the problem in O(m² log n²) iterations; each iteration solves an MKL problem involving m kernels and m eigen-decompositions of n × n matrices. By suitably restricting the objective function, a faster version of EMKL is proposed, called REKL, which avoids the eigen-decomposition. An alternative to both EMKL and REKL is also suggested which requires only an SVM solver. Experimental results on a real-world protein data set involving several similarity matrices illustrate the efficacy of the proposed algorithms.
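The Mirror Descent ingredient can be illustrated on the simplest domain, the probability simplex, where MD with the entropy mirror map becomes the exponentiated-gradient update (a toy quadratic objective, not the EMKL objective over psd matrices):

```python
import numpy as np

def md_simplex(grad, w0, steps=2000, eta=0.1):
    # Entropic mirror descent (exponentiated gradient): the multiplicative
    # update followed by normalisation keeps the iterate on the simplex,
    # which is the role MD plays for kernel weights in MKL.
    w = w0.copy()
    for _ in range(steps):
        w = w * np.exp(-eta * grad(w))
        w /= w.sum()
    return w

# Toy objective: f(w) = 0.5 * ||w - t||^2 over the simplex; since t lies
# on the simplex, the minimiser is t itself.
t = np.array([0.7, 0.2, 0.1])
w = md_simplex(lambda w: w - t, np.full(3, 1 / 3))
print(np.round(w, 3))
```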

Abstract:

In this paper, we show that it is possible to reduce the complexity of Intra MB coding in H.264/AVC using a novel chance-constrained classifier. Using pairs of simple mean-variance values, our technique reduces the complexity of the Intra MB coding process with a negligible loss in PSNR. We present an alternative approach to the classification problem that is equivalent to a machine learning formulation. Implementation results show that the proposed method reduces the encoding time to about 20% of that of the reference implementation, with an average loss of 0.05 dB in PSNR.
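A toy illustration of using block statistics to choose a coding path (an assumed variance threshold, not the paper's trained chance-constrained classifier; the paper classifies on mean-variance pairs):

```python
import numpy as np

def classify_mb(block, var_thresh=50.0):
    # Illustrative rule: smooth, low-variance macroblocks can take a
    # cheap coding path and skip the exhaustive Intra-mode search;
    # textured blocks fall back to the full search.
    var = block.var()
    return 'fast-path' if var < var_thresh else 'full-search'

flat = np.full((16, 16), 128.0)              # uniform 16x16 macroblock
textured = np.arange(256.0).reshape(16, 16)  # strong gradient
print(classify_mb(flat), classify_mb(textured))   # fast-path full-search
```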

Abstract:

In this paper, we address the fundamental question concerning the limits on network lifetime in sensor networks when multiple base stations (BSs) are deployed as data sinks. Specifically, we derive upper bounds on the network lifetime when multiple BSs are employed, and obtain optimum locations of the base stations that maximise these lifetime bounds. For the case of two BSs, we jointly optimise the BS locations by maximising the lifetime bound using a genetic algorithm. Joint optimisation for a larger number of BSs becomes prohibitively complex. We therefore propose a suboptimal approach for a larger number of BSs, the Individually Optimum method, in which we optimise each new BS location given the optimum locations of the previous BSs. The Individually Optimum method remains tractable as the number of BSs grows, at the cost of slightly compromised accuracy. We show that the accuracy degradation is quite small for the case of three BSs.
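The Individually Optimum idea, placing each new BS optimally with the earlier BSs held fixed, can be sketched on a 1-D toy instance. Here total squared sensor-to-nearest-BS distance stands in for the lifetime bound; the paper's actual objective and network model are not reproduced:

```python
def cost(sensors, bss):
    # Proxy objective: total squared distance from each sensor to its
    # nearest BS (transmit energy roughly grows with distance squared).
    return sum(min(abs(s - b) for b in bss) ** 2 for s in sensors)

def greedy_bs_placement(sensors, k, candidates):
    # "Individually Optimum" style placement: optimise each new BS with
    # all previously placed BSs held fixed, instead of a joint search.
    placed = []
    for _ in range(k):
        placed.append(min(candidates, key=lambda c: cost(sensors, placed + [c])))
    return placed

sensors = [0.0, 1.0, 2.0, 8.0, 9.0, 10.0]    # two clusters of sensors
candidates = [i * 0.5 for i in range(21)]    # grid over [0, 10]
print(greedy_bs_placement(sensors, 2, candidates))   # [5.0, 1.0]
```

The greedy result `[5.0, 1.0]` (cost 52) is worse than the joint optimum `[1.0, 9.0]` (cost 4) on this instance, which is exactly the accuracy-for-tractability trade-off the abstract describes.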

Abstract:

We study small perturbations of three linear Delay Differential Equations (DDEs) close to Hopf bifurcation points. In analytical treatments of such equations, many authors recommend a center manifold reduction as a first step. We demonstrate that the method of multiple scales, by simply discarding the infinitely many exponentially decaying components of the complementary solutions obtained at each stage of the approximation, can bypass the explicit center manifold calculation. Analytical approximations obtained for the DDEs studied closely match numerical solutions.
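A standard textbook illustration of the setting, not taken from the paper: a scalar linear DDE, the characteristic equation obtained from the exponential ansatz, and the Hopf condition:

```latex
\dot{x}(t) = \alpha\, x(t) + \beta\, x(t - \tau),
\qquad x = e^{\lambda t} \;\Longrightarrow\;
\lambda = \alpha + \beta\, e^{-\lambda \tau},
\qquad \lambda = i\omega \ \text{at the Hopf point.}
```

The transcendental characteristic equation has infinitely many roots; those with negative real part generate the exponentially decaying components that the method of multiple scales discards in place of an explicit center manifold reduction.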

Abstract:

As research becomes more and more interdisciplinary, literature searches on CD-ROM databases are often carried out on more than one database. This results in retrieving duplicate records, because the same literature is covered (indexed) in more than one database, and the retrieval software does not identify such duplicates. Three programs have been written to identify the duplicate records; they are executed from a shell script to minimize manual intervention. The fields extracted to identify duplicate records are the article title, year, volume number, issue number and pagination. When executed, the shell script prompts for an input file that may contain duplicate records; the programs then identify the duplicates and write them to a new file.
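The field-based duplicate check can be sketched as follows. The original programs are shell-invoked; here a single Python function with assumed field names illustrates the composite-key idea:

```python
def dedup_key(rec):
    # Composite key built from the fields the abstract lists: article
    # title, year, volume number, issue number and pagination.
    # Normalisation (strip/lower) tolerates minor formatting differences
    # between databases.
    fields = ('title', 'year', 'volume', 'issue', 'pages')
    return tuple(str(rec[f]).strip().lower() for f in fields)

def find_duplicates(records):
    # The first occurrence is kept; later records with the same key are
    # collected, mirroring "write them to a new file".
    seen, dupes = set(), []
    for rec in records:
        key = dedup_key(rec)
        if key in seen:
            dupes.append(rec)
        else:
            seen.add(key)
    return dupes

recs = [
    {'title': 'Kernel learning', 'year': 2010, 'volume': 4, 'issue': 2, 'pages': '1-10'},
    {'title': 'kernel learning ', 'year': 2010, 'volume': 4, 'issue': 2, 'pages': '1-10'},
    {'title': 'DDE scales', 'year': 2009, 'volume': 1, 'issue': 1, 'pages': '5-9'},
]
print(len(find_duplicates(recs)))   # 1
```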

Abstract:

Due to its wide applicability, semi-supervised learning is an attractive method for using unlabeled data in classification. In this work, we present a semi-supervised support vector classifier that is designed using a quasi-Newton method for nonsmooth convex functions. The proposed algorithm is suitable for dealing with a very large number of examples and features. Numerical experiments on various benchmark datasets show that the proposed algorithm is fast and gives improved generalization performance over existing methods. Further, a non-linear semi-supervised SVM is proposed based on a multiple label switching scheme. This non-linear semi-supervised SVM converges faster and improves generalization performance on several benchmark datasets. (C) 2010 Elsevier Ltd. All rights reserved.
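A toy 1-D self-labelling loop in the spirit of label switching (not the paper's algorithm): unlabeled points are repeatedly relabelled by the current decision threshold, which is then refit on the labelled and pseudo-labelled data together:

```python
import numpy as np

def self_label(x_lab, y_lab, x_unl, iters=10):
    # Threshold = midpoint of the two class means; unlabeled points get
    # the label of their side, then the threshold is refit on all data.
    thr = 0.5 * (x_lab[y_lab == 1].mean() + x_lab[y_lab == -1].mean())
    y_unl = np.where(x_unl > thr, 1, -1)
    for _ in range(iters):
        y_unl = np.where(x_unl > thr, 1, -1)
        x_all = np.concatenate([x_lab, x_unl])
        y_all = np.concatenate([y_lab, y_unl])
        thr = 0.5 * (x_all[y_all == 1].mean() + x_all[y_all == -1].mean())
    return thr, y_unl

x_lab = np.array([0.0, 1.0, 9.0, 10.0])
y_lab = np.array([-1, -1, 1, 1])
x_unl = np.array([0.5, 1.5, 8.5, 9.5])       # unlabeled cluster members
thr, y_unl = self_label(x_lab, y_lab, x_unl)
print(thr, y_unl)   # threshold 5.0; pseudo-labels [-1 -1  1  1]
```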

Abstract:

The Hanuman langur is one of the most widely distributed and morphologically variable non-human primates in South Asia. Even though it has been extensively studied, the taxonomic status of this species remains unresolved due to incongruence between various classification schemes. This incongruence, we believe, is largely due to the use of plastic morphological characters, such as coat color, in classification. Additionally, these classification schemes were largely based on reanalysis of the same set of museum specimens. To bring greater resolution to Hanuman langur taxonomy, we undertook a field survey to study variation in external morphological characters among Hanuman langurs. The primary objective of this study is to ascertain the number of morphologically recognizable units (morphotypes) of Hanuman langur in peninsular India and to compare our field observations with published classification schemes. We typed five color-independent characters for multiple adults from various populations in South India. We used the presence-absence matrix of these characters to derive the pairwise distance between individuals and used this to construct a neighbor-joining (NJ) tree. The resulting NJ tree retrieved six distinct clusters, which we assigned to different morphotypes. These morphotypes can be identified in the field by using a combination of five diagnostic characters. We determined the approximate distributions of these morphotypes by plotting the sampling locations of each morphotype on a map using GIS software. Our field observations are largely concordant with some of the earliest classification schemes, but are incongruent with recent ones. Based on these results we recommend the classification schemes of Hill (Ceylon Journal of Science, Colombo 21:277-305, 1939) and Pocock (Primates and carnivora (in part) (pp. 97-163). London: Taylor and Francis, 1939) for future studies on Hanuman langurs.
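The distance computation feeding the NJ tree can be sketched as follows, with a made-up presence-absence matrix (the study uses five characters per individual; the values here are illustrative, not the field data):

```python
import numpy as np

# Presence/absence matrix: rows = individuals, columns = five
# colour-independent characters (1 = present, 0 = absent).
chars = np.array([
    [1, 0, 1, 0, 1],
    [1, 0, 1, 0, 1],   # identical to individual 0
    [0, 1, 0, 1, 0],
    [0, 1, 0, 1, 1],   # differs from individual 2 in one character
])

# Pairwise distance = number of mismatched characters (Hamming distance);
# this is the kind of matrix a neighbor-joining tree is built from.
D = (chars[:, None, :] != chars[None, :, :]).sum(axis=2)
print(D)
```

Individuals 0 and 1 end up at distance 0 (same morphotype), while 2 and 3 sit at distance 1, so NJ would cluster each pair together.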

Abstract:

Multiple-beam interference of light in a wedge is considered when the wedge is filled with an absorbing medium. The aim is to examine a method that may give values of both the real and the imaginary parts of the refractive index of the absorbing medium. We propose a method to determine these quantities using simple techniques such as fringe counting and interferometry, with either a single Gaussian beam or two parallel Gaussian beams as the incident wave.
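As a hedged illustration of the fringe-counting side only (the standard thin-wedge relation at normal incidence, not the paper's full method, which also recovers the imaginary part):

```python
# In a thin wedge at normal incidence, adjacent bright fringes correspond
# to a thickness change of wavelength / (2 n), so counting N fringes over
# a known thickness change t gives the real part of n. All numbers below
# are illustrative.
wavelength = 589e-9   # sodium D line, metres
t = 2.0e-6            # thickness change across the counted region, metres
N = 9                 # fringes counted
n_real = N * wavelength / (2 * t)
print(round(n_real, 3))   # 1.325, a water-like index
```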

Abstract:

The specified range of free chlorine residual (between a minimum and a maximum) in water distribution systems needs to be maintained to avoid deterioration of the microbial quality of water, control taste and/or odor problems, and hinder the formation of carcinogenic disinfection by-products. Multiple water quality sources providing chlorine input are needed to maintain the chlorine residuals within the specified range throughout the distribution system. Determining the source dosage (i.e., chlorine concentrations/chlorine mass rates) at the water quality sources to satisfy this objective under dynamic conditions is a complex process. A nonlinear optimization problem is formulated to determine the chlorine dosage at the water quality sources subject to minimum and maximum constraints on chlorine concentrations at all monitoring nodes. A genetic algorithm (GA) approach, in which the decision variables (chlorine dosages) are coded as binary strings, is used to solve this highly nonlinear optimization problem, with nonlinearities arising from set-point sources and non-first-order reactions. Application of the model is illustrated using three sample water distribution systems, and the results indicate that the GA is a useful tool for evaluating optimal water quality source chlorine schedules.
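A minimal GA sketch with a binary-string-coded dosage, in the spirit of the formulation: one source, one monitoring node, and a toy decay factor. None of the constants (bounds, decay, GA settings) are from the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
BITS, POP, GENS = 8, 40, 60
LOW, HIGH = 0.2, 0.8        # required residual range at the node, mg/L
DECAY = 0.6                 # toy decay factor from source to node

def decode(bits):
    # 8-bit binary string -> source chlorine dosage in [0, 2] mg/L.
    return int(''.join(map(str, bits)), 2) / 255 * 2.0

def fitness(bits):
    residual = DECAY * decode(bits)
    # Penalise violation of the min/max residual constraints.
    return -(max(0.0, LOW - residual) + max(0.0, residual - HIGH))

pop = rng.integers(0, 2, (POP, BITS))
for _ in range(GENS):
    scores = np.array([fitness(ind) for ind in pop])
    # Binary tournament selection.
    idx = [max(rng.integers(0, POP, 2), key=lambda i: scores[i]) for _ in range(POP)]
    parents = pop[idx]
    children = parents.copy()
    # Single-point crossover on consecutive parent pairs.
    for k in range(POP // 2):
        c = rng.integers(1, BITS)
        children[2 * k, c:] = parents[2 * k + 1, c:]
        children[2 * k + 1, c:] = parents[2 * k, c:]
    # Bit-flip mutation.
    flips = rng.random(children.shape) < 0.02
    pop = np.where(flips, 1 - children, children)

best = max(pop, key=fitness)
print(round(DECAY * decode(best), 2))   # a residual inside [0.2, 0.8]
```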

Abstract:

Part classification and coding is still considered a laborious and time-consuming exercise. Keeping in view the crucial role it plays in developing automated CAPP systems, attempts are made in this article to automate a few elements of this exercise using a shape analysis model. In this study, a 24-vector directional template is used to represent the feature elements of the parts (candidate and prototype). Various transformation processes, such as deformation, straightening, bypassing, insertion and deletion, are embedded in the proposed simulated annealing (SA)-like hybrid algorithm to match the candidate part with its prototype. For a candidate part, searching for its matching prototype in the information database is computationally expensive and requires a large search space. The proposed SA-like hybrid algorithm for solving the part classification problem considerably reduces the search space and ensures early convergence of the solution. The application of the proposed approach is illustrated with an example part, and the approach is then applied to the classification of 100 candidate parts and their prototypes to demonstrate the effectiveness of the algorithm. (C) 2003 Elsevier Science Ltd. All rights reserved.
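A toy SA-like matching loop: 8 direction codes instead of the paper's 24, and a plain mismatch count instead of the weighted deformation/straightening/bypass/insertion/deletion costs:

```python
import math
import random

random.seed(0)

# Each part is a string of direction codes; the prototype here differs
# from the candidate at a single position (values are illustrative).
candidate = [0, 2, 4, 6, 0, 2, 4, 6]
prototype = [0, 2, 4, 5, 0, 2, 4, 6]

def cost(state):
    # Mismatch count stands in for the full transformation-cost model.
    return sum(a != b for a, b in zip(state, prototype))

state, best, T = list(candidate), list(candidate), 2.0
while T > 0.01:
    neighbor = list(state)
    neighbor[random.randrange(len(neighbor))] = random.randrange(8)
    delta = cost(neighbor) - cost(state)
    # SA acceptance rule: always take improvements, occasionally take
    # worse moves with probability exp(-delta / T).
    if delta <= 0 or random.random() < math.exp(-delta / T):
        state = neighbor
    if cost(state) < cost(best):
        best = list(state)
    T *= 0.99                  # geometric cooling schedule
print(cost(best))              # best mismatch count found (typically 0)
```

Accepting occasional worse moves at high temperature is what lets the matcher escape poor local alignments before the cooling schedule freezes the solution.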