110 results for weights


Relevance: 10.00%

Abstract:

We present external memory data structures for efficiently answering range-aggregate queries. The range-aggregate problem is defined as follows: given a set of weighted points in R^d, compute the aggregate of the weights of the points that lie inside a d-dimensional orthogonal query rectangle. The aggregates we consider in this paper include COUNT, SUM, and MAX. First, we develop a structure for answering two-dimensional range-COUNT queries that uses O(N/B) disk blocks and answers a query in O(log_B N) I/Os, where N is the number of input points and B is the disk block size. The structure can be extended to obtain a near-linear-size structure for answering range-SUM queries using O(log_B N) I/Os, and a linear-size structure for answering range-MAX queries in O(log_B^2 N) I/Os. Our structures can be made dynamic and extended to higher dimensions. (C) 2012 Elsevier B.V. All rights reserved.
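The structures the abstract describes are external-memory indexes; as a purely illustrative, in-memory analogue of the range-COUNT aggregate itself (not the authors' O(log_B N)-I/O structure), a 2D prefix-sum grid over a small integer domain answers orthogonal range-COUNT queries in constant time:

```python
# Toy analogue of range-COUNT: a 2D prefix-sum grid over integer coordinates.
# pre[i][j] holds the number of points with x < i and y < j.

def build_prefix(points, width, height):
    pre = [[0] * (height + 1) for _ in range(width + 1)]
    for x, y in points:
        pre[x + 1][y + 1] += 1
    for i in range(width + 1):
        for j in range(height + 1):
            if i:
                pre[i][j] += pre[i - 1][j]
            if j:
                pre[i][j] += pre[i][j - 1]
            if i and j:
                pre[i][j] -= pre[i - 1][j - 1]
    return pre

def range_count(pre, x1, y1, x2, y2):
    """Points inside the closed rectangle [x1, x2] x [y1, y2], by inclusion-exclusion."""
    return (pre[x2 + 1][y2 + 1] - pre[x1][y2 + 1]
            - pre[x2 + 1][y1] + pre[x1][y1])

pts = [(0, 0), (1, 2), (3, 3), (2, 1)]
pre = build_prefix(pts, 4, 4)
```

Replacing the stored count with a stored weight total gives the range-SUM analogue; MAX does not decompose by inclusion-exclusion, which is why the paper needs a separate structure for it.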

Relevance: 10.00%

Abstract:

Diffusion ordered spectroscopy (DOSY) generally fails to separate the peaks of isomeric species possessing identical molecular weights and similar hydrodynamic radii. The present study demonstrates the resolution of isomers by matrix-assisted diffusion ordered spectroscopy using alpha/beta-cyclodextrin as a co-solute. The resolution is achieved by measuring the significant differences in diffusion rates between the positional isomers of aminobenzoic acids and of benzenedicarboxylic acids, and between the cis and trans isomers fumaric acid and maleic acid. (C) 2012 Elsevier B.V. All rights reserved.

Relevance: 10.00%

Abstract:

We address the problem of identifying the constituent sources in a single-sensor mixture signal consisting of contributions from multiple simultaneously active sources. We propose a generic framework for mixture signal analysis based on a latent variable approach. The basic idea of the approach is to detect known sources represented as stochastic models, in a single-channel mixture signal without performing signal separation. A given mixture signal is modeled as a convex combination of known source models and the weights of the models are estimated using the mixture signal. We show experimentally that these weights indicate the presence/absence of the respective sources. The performance of the proposed approach is illustrated through mixture speech data in a reverberant enclosure. For the task of identifying the constituent speakers using data from a single microphone, the proposed approach is able to identify the dominant source with up to 8 simultaneously active background sources in a room with RT60 = 250 ms, using models obtained from clean speech data for a Source to Interference Ratio (SIR) greater than 2 dB.
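The core estimation step, fitting a convex combination of known source models to the observed mixture, can be sketched with projected gradient descent onto the probability simplex (a plain least-squares stand-in for the paper's stochastic source models; `sources` columns are hypothetical source templates):

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1.0 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1.0 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0.0)

def convex_weights(sources, mixture, iters=500):
    """Minimise ||sources @ w - mixture||^2 over convex weights w."""
    _, k = sources.shape
    lr = 1.0 / (np.linalg.norm(sources, 2) ** 2)   # step size from the spectral norm
    w = np.full(k, 1.0 / k)
    for _ in range(iters):
        grad = sources.T @ (sources @ w - mixture)
        w = project_simplex(w - lr * grad)
    return w

# toy mixture of two known "source templates" with true weights 0.7 / 0.3
S = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]])
mix = S @ np.array([0.7, 0.3])
w = convex_weights(S, mix)
```

As in the paper's experiments, a weight near zero would indicate that the corresponding source is absent from the mixture.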

Relevance: 10.00%

Abstract:

Poly{[N,N-(dimethylamino)ethyl methacrylate]-co-(methyl methacrylate)} copolymers of various compositions were synthesized by reversible addition-fragmentation chain transfer (RAFT) polymerization at 70 degrees C in N,N-dimethylformamide. The polymer molecular weights and molecular weight distributions were obtained from size exclusion chromatography and indicated the controlled nature of the RAFT polymerizations; the polydispersity indices are in the range 1.1-1.3. The reactivity ratios of N,N-(dimethylamino)ethyl methacrylate (DMAEMA) and methyl methacrylate (MMA) (rDMAEMA = 0.925 and rMMA = 0.854) were computed by the extended Kelen-Tudos method at high conversions, using compositions obtained from 1H NMR. The pH- and temperature-sensitive behaviour was studied in aqueous solution to confirm the dual responsiveness of these copolymers. The thermal properties of the copolymers with various compositions were investigated by differential scanning calorimetry and thermogravimetric analysis. The kinetics of thermal degradation were determined by the Friedman and Chang techniques to evaluate parameters such as the activation energy, the order and the frequency factor. (c) 2012 Society of Chemical Industry

Relevance: 10.00%

Abstract:

Since the days of Digital Subscriber Lines (DSL), Time Domain Equalizers (TEQs) have been used to combat time-dispersive channels in multicarrier systems. In this paper, we propose computationally inexpensive techniques to recompute TEQ weights in the presence of changes in the channel, especially over fast fading channels. The techniques use no extra information except the perturbation itself, and provide excellent approximations to the new TEQ weights. The proposed adaptation techniques are shown to perform admirably well for small changes in channels for OFDM systems.

Relevance: 10.00%

Abstract:

Ubiquitous computing is an emerging paradigm that enables users to access preferred services wherever they are, whenever they want, and the way they need them, with zero administration. While moving from one place to another, users do not need to specify and configure their surrounding environment; the system initiates the necessary adaptation by itself to cope with the changing environment. In this paper we propose a system to provide context-aware ubiquitous multimedia services without user intervention. We analyze the context of the user based on weights, identify the UMMS (Ubiquitous Multimedia Service) based on the collected context information and the user profile, search for the optimal server to provide the required service, and then adapt the service according to the user's local environment and preferences. The experiment was conducted several times with different context parameters, weights and user preferences. The results are quite encouraging.

Relevance: 10.00%

Abstract:

In this paper, we present a fast learning neural network classifier for human action recognition. The proposed classifier is a fully complex-valued neural network with a single hidden layer. The neurons in the hidden layer employ the fully complex-valued hyperbolic secant as an activation function. The parameters of the hidden layer are chosen randomly and the output weights are estimated analytically as a minimum norm least squares solution to a set of linear equations. The fast learning fully complex-valued neural classifier is used for recognizing human actions accurately. Optical flow-based features extracted from the video sequences are utilized to recognize 10 different human actions. The feature vectors are computationally simple first order statistics of the optical flow vectors, obtained from coarse to fine rectangular patches centered around the object. The results indicate the superior performance of the complex-valued neural classifier for action recognition. This superior performance stems from the fact that motion, by nature, consists of two components, one along each of the axes.
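The training recipe the abstract describes, random complex hidden-layer parameters with the output weights solved analytically as a minimum-norm least-squares solution, can be sketched as follows (a toy XOR problem stands in for the optical-flow features; class layout and sizes are illustrative, not the authors' exact formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

def sech(z):
    # fully complex-valued hyperbolic secant activation
    return 1.0 / np.cosh(z)

class ComplexRandomNet:
    """Single hidden layer: random complex weights, analytic output layer."""

    def __init__(self, n_in, n_hidden):
        self.W = rng.standard_normal((n_hidden, n_in)) \
            + 1j * rng.standard_normal((n_hidden, n_in))
        self.b = rng.standard_normal(n_hidden) + 1j * rng.standard_normal(n_hidden)

    def _hidden(self, X):
        return sech(X @ self.W.T + self.b)

    def fit(self, X, Y):
        H = self._hidden(X)
        # minimum-norm least-squares output weights via the pseudoinverse
        self.beta = np.linalg.pinv(H) @ Y
        return self

    def predict(self, X):
        out = self._hidden(X) @ self.beta
        return np.argmax(out.real, axis=1)

# toy 2-class problem (XOR), one-hot targets
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1, 1, 0])
net = ComplexRandomNet(2, 20).fit(X, np.eye(2)[labels])
```

Because only the output layer is trained, and in closed form, fitting costs one pseudoinverse, which is what makes this family of classifiers "fast learning".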

Relevance: 10.00%

Abstract:

This study presents an overview of seismic microzonation and existing methodologies, together with a newly proposed methodology covering all aspects. Earlier seismic microzonation methods focused on parameters that affect structure- or foundation-related problems, but seismic microzonation is now generally recognized as an important component of urban planning and disaster management. Seismic microzonation should therefore evaluate all possible earthquake hazards and represent them by their spatial distribution. This paper presents a new methodology for seismic microzonation based on the location of the study area and the possible associated hazards. The method consists of seven important steps, each with a defined output, and these steps are linked with each other. Addressing one step and its result alone, as is widely practiced, may not constitute seismic microzonation. This paper also presents the importance of geotechnical aspects in seismic microzonation and how they affect the final map. For the case study, seismic hazard values at rock level are estimated from the seismotectonic parameters of the region using deterministic and probabilistic seismic hazard analysis. Surface-level hazard values are estimated through site-specific study and local site effects based on site classification/characterization. The liquefaction hazard is estimated using standard penetration test data. These hazard parameters are integrated in a Geographical Information System (GIS) using the Analytic Hierarchy Process (AHP) and used to estimate a hazard index. The hazard index is arrived at through a multi-criteria evaluation technique, AHP, in which each theme and its features are assigned weights and ranked according to a consensus opinion about their relative significance to the seismic hazard.
The hazard values are integrated through spatial union to obtain the deterministic microzonation map and the probabilistic microzonation map for a specific return period. Seismological parameters are widely used for microzonation rather than geotechnical parameters, but this study shows that the hazard index values depend on site-specific geotechnical parameters.
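The AHP weighting step can be sketched as follows: weights are the principal eigenvector of a pairwise comparison matrix, with Saaty's consistency ratio as a sanity check. The three theme names and the comparison values here are hypothetical, not those of the case study:

```python
import numpy as np

def ahp_weights(P):
    """Principal-eigenvector weights of a pairwise comparison matrix P,
    plus Saaty's consistency ratio as a sanity check."""
    vals, vecs = np.linalg.eig(P)
    k = np.argmax(vals.real)
    w = np.abs(vecs[:, k].real)
    w = w / w.sum()
    n = P.shape[0]
    ci = (vals[k].real - n) / (n - 1)            # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}.get(n)      # Saaty's random index
    cr = ci / ri if ri else None                 # consistency ratio (< 0.1 acceptable)
    return w, cr

# hypothetical 3-theme comparison: e.g. rock-level hazard vs. site class vs. liquefaction
P = np.array([[1.00, 2.0, 4.0],
              [0.50, 1.0, 2.0],
              [0.25, 0.5, 1.0]])
w, cr = ahp_weights(P)
```

A hazard index per map cell would then be the weighted sum of the cell's theme ranks, followed by the spatial union described in the abstract.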

Relevance: 10.00%

Abstract:

Multi-view head-pose estimation in low-resolution, dynamic scenes is difficult due to blurred facial appearance and perspective changes as targets move around freely in the environment. Under these conditions, acquiring sufficient training examples to learn the dynamic relationship between position, face appearance and head-pose can be very expensive. Instead, a transfer learning approach is proposed in this work. Upon learning a weighted-distance function from many examples where the target position is fixed, we adapt these weights to the scenario where target positions are varying. The adaptation framework incorporates the reliability of the different face regions for pose estimation under positional variation, by transforming the target appearance to a canonical appearance corresponding to a reference scene location. Experimental results confirm the effectiveness of the proposed approach, which outperforms the state-of-the-art by 9.5% under relevant conditions. To aid further research on this topic, we also make DPOSE, a dynamic multi-view head-pose dataset with ground truth, publicly available with this paper.

Relevance: 10.00%

Abstract:

Clustering is one of the most popular methods for data exploration. Clustering partitions the data set into sub-partitions based on some measure, say the distance measure, and each partition carries its own significant information. A number of algorithms have been explored for this purpose; one such algorithm is Particle Swarm Optimization (PSO), a population-based heuristic search technique derived from swarm intelligence. In this paper we present an improved version of Particle Swarm Optimization in which each feature of the data set is given significance by adding random weights, which also minimizes the distortions in the dataset, if any. The performance of the proposed algorithm is evaluated using benchmark datasets from the Machine Learning Repository. The experimental results show that our proposed methodology performs significantly better than the previous experiments.
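The weighting idea, a feature-weighted distance inside a PSO search over cluster centroids, might look like the sketch below. The function names, PSO constants and data are illustrative, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)

def weighted_sse(centroids, X, fw):
    """Sum of squared feature-weighted distances to each point's nearest centroid."""
    d = ((X[:, None, :] - centroids[None, :, :]) ** 2 * fw).sum(axis=2)
    return d.min(axis=1).sum()

def pso_cluster(X, k, fw, n_particles=20, iters=100):
    """Minimise weighted_sse over centroid positions with a basic PSO loop."""
    _, dim = X.shape
    lo, hi = X.min(axis=0), X.max(axis=0)
    pos = rng.uniform(lo, hi, (n_particles, k, dim))     # each particle = k centroids
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pcost = np.array([weighted_sse(p, X, fw) for p in pos])
    gbest = pbest[pcost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([weighted_sse(p, X, fw) for p in pos])
        improved = cost < pcost
        pbest[improved], pcost[improved] = pos[improved], cost[improved]
        gbest = pbest[pcost.argmin()].copy()
    return gbest

# usage on two synthetic blobs, with uniform feature weights
X = np.vstack([rng.normal(0.0, 0.1, (20, 2)), rng.normal(5.0, 0.1, (20, 2))])
gbest = pso_cluster(X, 2, np.ones(2))

# small deterministic check of the weighted distance itself
demo = weighted_sse(np.array([[0.0, 0.0], [5.0, 5.0]]),
                    np.array([[0.0, 0.0], [5.0, 5.0], [1.0, 0.0]]),
                    np.ones(2))
```

Scaling a feature's weight down shrinks its influence on the assignment, which is how per-feature significance enters the clustering objective.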

Relevance: 10.00%

Abstract:

In this letter, we compute the secrecy rate of decode-and-forward (DF) relay beamforming with a finite input alphabet of size M. Source and relays operate under a total power constraint. First, we observe that the secrecy rate with finite-alphabet input can go to zero as the total power increases, when we use the source power and the relay weights obtained assuming Gaussian input. This is because the capacity of an eavesdropper can approach the finite-alphabet capacity of (1/2)log2(M) with increasing total power, due to the inability to completely null in the direction of the eavesdropper. We then propose a transmit power control scheme where the optimum source power and relay weights are obtained by carrying out transmit power (source power plus relay power) control on DF with Gaussian input using semi-definite programming, and then obtaining the corresponding source power and relay weights which maximize the secrecy rate for DF with finite-alphabet input. The proposed power control scheme is shown to achieve increasing secrecy rates with increasing total power, with a saturation behavior at high total powers.

Relevance: 10.00%

Abstract:

In this paper, we evaluate secrecy rates in cooperative relay beamforming in the presence of imperfect channel state information (CSI) and multiple eavesdroppers. A source-destination pair aided by k out of M relays, 1 <= k <= M, using decode-and-forward relay beamforming is considered. We compute the worst case secrecy rate with imperfect CSI in the presence of multiple eavesdroppers, where the number of eavesdroppers can be more than the number of relays. We solve the optimization problem for all possible relay combinations to find the secrecy rate and optimum source and relay weights subject to a total power constraint. We relax the rank-1 constraint on the complex semi-definite relay weight matrix and use the S-procedure to reformulate the optimization problem so that it can be solved using convex semi-definite programming.

Relevance: 10.00%

Abstract:

Two multicriterion decision-making methods, namely `compromise programming' and the `technique for order preference by similarity to an ideal solution' (TOPSIS), are employed to prioritise 22 micro-catchments (A1 to A22) of Kherthal catchment, Rajasthan, India, and a comparative analysis is performed using the compound parameter approach. Seven criteria - drainage density, bifurcation ratio, stream frequency, form factor, elongation ratio, circulatory ratio and texture ratio - are chosen for the evaluation. The entropy method is employed to estimate the weights, or relative importance, of the criteria, which ultimately affect the ranking pattern or prioritisation of micro-catchments. Spearman rank correlation coefficients are estimated to measure the extent to which the rankings obtained are correlated. Based on the average ranking approach supported by sensitivity analysis, micro-catchments A6, A10 and A3 are preferred (owing to their low rank values) for further improvements with suitable conservation and management practices, and other micro-catchments can be processed accordingly at a later phase on a priority basis. It is concluded that the present approach can be explored for other similar situations with appropriate modifications.
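The two core steps, entropy-based criterion weights followed by a TOPSIS ranking, can be sketched on a toy decision matrix (3 alternatives x 2 benefit criteria here; the study itself uses 22 micro-catchments x 7 criteria):

```python
import numpy as np

def entropy_weights(D):
    """Entropy-method criterion weights; assumes strictly positive entries."""
    P = D / D.sum(axis=0)
    E = -(P * np.log(P)).sum(axis=0) / np.log(D.shape[0])
    d = 1.0 - E                       # degree of diversification per criterion
    return d / d.sum()

def topsis_rank(D, w, benefit):
    """TOPSIS: order alternatives by closeness to the ideal solution."""
    V = D / np.sqrt((D ** 2).sum(axis=0)) * w            # weighted normalised matrix
    ideal = np.where(benefit, V.max(axis=0), V.min(axis=0))
    anti = np.where(benefit, V.min(axis=0), V.max(axis=0))
    s_pos = np.sqrt(((V - ideal) ** 2).sum(axis=1))
    s_neg = np.sqrt(((V - anti) ** 2).sum(axis=1))
    closeness = s_neg / (s_pos + s_neg)
    return np.argsort(-closeness)                        # best alternative first

D = np.array([[9.0, 9.0],
              [1.0, 1.0],
              [5.0, 5.0]])
w = entropy_weights(D)
ranking = topsis_rank(D, w, np.array([True, True]))
```

A `benefit` flag of False would treat a criterion as a cost to minimise; the entropy weights favour criteria whose values vary more across alternatives.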

Relevance: 10.00%

Abstract:

The mechanical behaviour of cohesive-frictional granular materials is a combination of the strength pervading as intergranular friction (represented as an angle of internal friction, Phi) and the cohesion (C) between the particles. Most behavioural or constitutive models of this class of granular materials, from microstructural through continuum models, comprise a cohesion and a frictional component with no regard to length scale. An experimental study has been made on a model granular material, viz. angular sand with different weights of binding agents (varying degrees of cohesion), at multiple length scales to physically map this phenomenon. Cylindrical specimens of various diameters (10, 20, 38, 100 and 150 mm, with an aspect ratio of 2) are reconstituted with 2, 4 and 8% by weight of a binding agent. The magnitude of the cohesion is analyzed using uniaxial compression tests and is assumed to correspond to the peak in the normalized stress-strain plot. An increase in the cohesive strength of the material is seen with increasing specimen size. A possibility of ``entanglement'' occurring in larger specimens is proposed as a possible reason for the deviation from a continuum framework.

Relevance: 10.00%

Abstract:

Transductive SVM (TSVM) is a well known semi-supervised large margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that the determination of labels of unlabeled examples with fixed classifier weights is a linear programming problem. We devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using large margin loss on a number of multi-class and hierarchical classification datasets. For maxent loss we show empirically that our method is better than expectation regularization/constraint and posterior regularization methods, and competitive with the version of entropy regularization method which uses label constraints.
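The linear program the abstract refers to, assigning labels to unlabeled examples with the classifier weights held fixed, can be sketched as a transportation LP. This toy version simply maximises total classifier score subject to per-class counts standing in for the paper's label constraints (the relaxation over the transportation polytope has an integral optimum; scipy's HiGHS solver is assumed, and the loss is simplified relative to the paper's):

```python
import numpy as np
from scipy.optimize import linprog

def assign_labels(scores, class_counts):
    """Pick one label per example maximising the total score, subject to
    a fixed number of examples per class: an LP over y[i, j] in [0, 1]."""
    n, k = scores.shape
    c = -scores.ravel()                               # maximise => minimise negation
    A_example = np.kron(np.eye(n), np.ones((1, k)))   # each example gets one label
    A_class = np.kron(np.ones((1, n)), np.eye(k))     # per-class count constraints
    res = linprog(c,
                  A_eq=np.vstack([A_example, A_class]),
                  b_eq=np.concatenate([np.ones(n), class_counts]),
                  bounds=(0.0, 1.0), method="highs")
    return res.x.reshape(n, k).argmax(axis=1)

# three unlabeled examples, two classes, with the constraint "1 of class 0, 2 of class 1"
scores = np.array([[2.0, 0.0],
                   [1.5, 0.0],
                   [0.0, 1.0]])
labels = assign_labels(scores, np.array([1.0, 2.0]))
```

Note how the count constraint flips example 1 to class 1 even though its score favours class 0, which is the point of jointly solving for the labels rather than thresholding per example.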