201 results for Metric cone
Abstract:
In phase-encoded optical CDMA (OCDMA), spreading is achieved by encoding the phase of the signal spectrum. Here, a mathematical model for the output signal of a phase-encoded OCDMA system is first derived. This is shown to lead to a performance metric for the design of spreading sequences for asynchronous transmission. Generalized bent functions are used to construct a family of efficient phase-encoding sequences. It is shown how M-ary modulation of these spreading sequences is possible. The problem of designing efficient phase-encoded sequences is then related to the problem of minimizing the PMEPR (peak-to-mean envelope power ratio) in an OFDM communication system.
Abstract:
High-rate analysis of channel-optimized vector quantization
This paper considers the high-rate performance of channel-optimized source coding for noisy discrete symmetric channels with random index assignment. Specifically, with mean squared error (MSE) as the performance metric, an upper bound on the asymptotic (i.e., high-rate) distortion is derived by assuming a general structure on the codebook. This structure enables extension of the analysis of the channel-optimized source quantizer to one with a singular point density: for channels with small errors, the point density that minimizes the upper bound is continuous, while as the error rate increases, the point density becomes singular. The extent of the singularity is also characterized. The accuracy of the expressions obtained is verified through Monte Carlo simulations.
Abstract:
This paper addresses the problem of maximum margin classification given the moments of class conditional densities and the false positive and false negative error rates. Using Chebyshev inequalities, the problem can be posed as a second order cone programming problem. The dual of the formulation leads to a geometric optimization problem, that of computing the distance between two ellipsoids, which is solved by an iterative algorithm. The formulation is extended to non-linear classifiers using kernel methods. The resultant classifiers are applied to the case of classification of unbalanced datasets with asymmetric costs for misclassification. Experimental results on benchmark datasets show the efficacy of the proposed method.
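A minimal sketch of the kind of second-order cone constraint such a moment-based formulation rests on, written with cvxpy; the moments, the target probability eta, and all numbers below are illustrative assumptions, not the paper's data or notation:

```python
# Sketch: moment-based max-margin classification as an SOCP (cvxpy).
# mu/Sigma are given class-conditional moments; eta is the desired
# per-class correct-classification probability. Numbers illustrative.
import numpy as np
import cvxpy as cp

mu1, mu2 = np.array([2.0, 2.0]), np.array([-2.0, -2.0])
L1 = np.linalg.cholesky(np.eye(2))          # Sigma1 = L1 L1^T
L2 = np.linalg.cholesky(0.5 * np.eye(2))    # Sigma2 = L2 L2^T
eta = 0.85
kappa = np.sqrt(eta / (1 - eta))            # Chebyshev-Cantelli factor

w, b = cp.Variable(2), cp.Variable()
# P(w'x + b >= 0) >= eta for every distribution with moments (mu, Sigma)
# iff  w'mu + b >= kappa * ||Sigma^{1/2} w||_2  (a second-order cone constraint).
cons = [mu1 @ w + b >= 1 + kappa * cp.norm(L1.T @ w, 2),
        -(mu2 @ w + b) >= 1 + kappa * cp.norm(L2.T @ w, 2)]
cp.Problem(cp.Minimize(cp.norm(w, 2)), cons).solve()
print("w =", w.value, " b =", b.value)
```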
Abstract:
In this paper we propose a novel, scalable, clustering-based Ordinal Regression formulation, which is an instance of a Second Order Cone Program (SOCP) with one Second Order Cone (SOC) constraint. The main contribution of the paper is a fast algorithm, CB-OR, which solves the proposed formulation more efficiently than general-purpose solvers. Another main contribution of the paper is to pose the problem of focused crawling as a large-scale Ordinal Regression problem and solve it using the proposed CB-OR. Focused crawling is an efficient mechanism for discovering resources of interest on the web. Posing the problem of focused crawling as an Ordinal Regression problem avoids the need for a negative class and a topic hierarchy, which are the main drawbacks of existing focused crawling methods. Experiments on large synthetic and benchmark datasets show the scalability of CB-OR. Experiments also show that the proposed focused crawler outperforms the state-of-the-art.
Abstract:
To a reasonable approximation, the secondary structure of RNA is determined by Watson-Crick pairing without pseudo-knots in such a way as to minimise the number of unpaired bases. We show that this minimal number is determined by the maximal conjugacy-invariant pseudo-norm on the free group on two generators subject to bounds on the generators. This allows us to construct lower bounds on the minimal number of unpaired bases by constructing conjugacy-invariant pseudo-norms. We show that one such construction, based on isometric actions on metric spaces, gives a sharp lower bound. A major goal here is to formulate a purely mathematical question, based on considering orthogonal representations, which we believe is of some interest independent of its biological roots.
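The combinatorial quantity being bounded here (the minimum number of unpaired bases over pseudoknot-free Watson-Crick pairings) can be computed directly by a standard Nussinov-style dynamic program; the sketch below is that baseline computation, not the paper's group-theoretic lower-bound machinery, and it omits the minimum-loop constraint of real RNA for brevity:

```python
# Minimum unpaired bases over pseudoknot-free Watson-Crick pairings,
# via a Nussinov-style dynamic program (no minimum-loop constraint).
def min_unpaired(rna: str) -> int:
    wc = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G")}
    n = len(rna)
    if n < 2:
        return n
    # best[i][j] = max number of non-crossing WC pairs within rna[i..j]
    best = [[0] * n for _ in range(n)]
    for span in range(1, n):
        for i in range(n - span):
            j = i + span
            b = best[i][j - 1]                        # base j left unpaired
            for k in range(i, j):                     # base j paired with k
                if (rna[k], rna[j]) in wc:
                    left = best[i][k - 1] if k > i else 0
                    inner = best[k + 1][j - 1] if k + 1 <= j - 1 else 0
                    b = max(b, left + 1 + inner)
            best[i][j] = b
    return n - 2 * best[0][n - 1]

print(min_unpaired("GGGAAAUCC"))   # -> 3 unpaired bases
```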
Abstract:
This paper presents a novel Second Order Cone Programming (SOCP) formulation for large scale binary classification tasks. Assuming that the class conditional densities are mixture distributions, where each component of the mixture has a spherical covariance, the second order statistics of the components can be estimated efficiently using clustering algorithms like BIRCH. For each cluster, the second order moments are used to derive a second order cone constraint via a Chebyshev-Cantelli inequality. This constraint ensures that any data point in the cluster is classified correctly with a high probability. This leads to a large margin SOCP formulation whose size depends on the number of clusters rather than the number of training data points. Hence, the proposed formulation scales well for large datasets when compared to the state-of-the-art classifiers, Support Vector Machines (SVMs). Experiments on real world and synthetic datasets show that the proposed algorithm outperforms SVM solvers in terms of training time and achieves similar accuracies.
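A sketch of the cluster-level idea, assuming sklearn's BIRCH for clustering and cvxpy for the cone program; the synthetic data, cluster count, and the kappa value are illustrative, and the per-cluster constraint is the same Chebyshev-Cantelli cone as above:

```python
# Sketch: one SOC constraint per BIRCH cluster instead of one per point,
# so the problem size grows with #clusters rather than #training points.
import numpy as np
import cvxpy as cp
from sklearn.cluster import Birch

rng = np.random.default_rng(1)
Xp = rng.normal(loc=+2.0, size=(500, 2))    # positive class (synthetic)
Xn = rng.normal(loc=-2.0, size=(500, 2))    # negative class (synthetic)

def cluster_moments(X, n_clusters=5):
    labels = Birch(n_clusters=n_clusters).fit_predict(X)
    return [(X[labels == c].mean(axis=0), np.cov(X[labels == c].T))
            for c in np.unique(labels) if (labels == c).sum() > 2]

kappa = 2.0                                  # from the target per-cluster accuracy
w, b = cp.Variable(2), cp.Variable()
cons = []
for sign, X in [(+1, Xp), (-1, Xn)]:
    for mu, Sig in cluster_moments(X):
        L = np.linalg.cholesky(Sig + 1e-8 * np.eye(2))   # Sigma^{1/2} factor
        cons.append(sign * (mu @ w + b) >= 1 + kappa * cp.norm(L.T @ w, 2))
cp.Problem(cp.Minimize(cp.norm(w, 2)), cons).solve()
print("w =", w.value, "b =", b.value)
```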
Abstract:
To effectively support today’s global economy, database systems need to manage data in multiple languages simultaneously. While current database systems do support the storage and management of multilingual data, they are not capable of querying across different natural languages. To address this lacuna, we have recently proposed two cross-lingual functionalities, LexEQUAL [13] and SemEQUAL [14], for matching multilingual names and concepts, respectively. In this paper, we investigate the native implementation of these multilingual functionalities as first-class operators on relational engines. Specifically, we propose a new multilingual storage datatype, and an associated algebra of the multilingual operators on this datatype. These components have been successfully implemented in the PostgreSQL database system, including integration of the algebra with the query optimizer and inclusion of a metric index in the access layer. Our experiments demonstrate that the performance of the native implementation is up to two orders-of-magnitude faster than the corresponding outside-the-server implementation. Further, these multilingual additions do not adversely impact the existing functionality and performance. To the best of our knowledge, our prototype represents the first practical implementation of a cross-lingual database query engine.
Abstract:
In a dense multi-hop network of mobile nodes capable of applying adaptive power control, we consider the problem of finding the optimal hop distance that maximizes a certain throughput measure in bit-metres/sec, subject to average network power constraints. The mobility of nodes is restricted to a circular periphery area centered at the nominal location of each node. We incorporate only the randomly varying path-loss characteristics of the channel gain due to the random motion of nodes, excluding any multi-path fading or shadowing effects. Computation of the throughput metric in such a scenario leads us to compute the probability density function of the random distance between points in two circles. Using numerical analysis we find that choosing the nearest node as the next hop is not always optimal: optimal throughput performance is also attained at non-trivial hop distances, depending on the available average network power.
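The density of the distance between random points in two circles has a closed form, but it is also easy to estimate numerically; a minimal Monte Carlo sketch (radius, separation, and sample count are illustrative):

```python
# Monte Carlo estimate of the pdf of the distance between one point
# drawn uniformly in each of two circles (radius r, centers D apart),
# the density the throughput computation above requires.
import numpy as np

def uniform_in_circle(n, r, center, rng):
    theta = rng.uniform(0, 2 * np.pi, n)
    rad = r * np.sqrt(rng.uniform(0, 1, n))   # sqrt for uniform area density
    return center + np.column_stack((rad * np.cos(theta), rad * np.sin(theta)))

rng = np.random.default_rng(0)
r, D, n = 1.0, 3.0, 200_000
p = uniform_in_circle(n, r, np.array([0.0, 0.0]), rng)
q = uniform_in_circle(n, r, np.array([D, 0.0]), rng)
d = np.linalg.norm(p - q, axis=1)
hist, edges = np.histogram(d, bins=100, density=True)   # empirical pdf
print("support ~ [%.2f, %.2f], mean = %.3f" % (d.min(), d.max(), d.mean()))
```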
Abstract:
A Space-Time Block Code (STBC) in K symbols (variables) is called a g-group decodable STBC if its maximum-likelihood decoding metric can be written as a sum of g terms such that each term is a function of a subset of the K variables and each variable appears in only one term. In this paper we provide a general structure for the weight matrices of multi-group decodable codes using Clifford algebras. Without assuming the number of variables in each group to be the same, a method of explicitly constructing the weight matrices of full-diversity, delay-optimal g-group decodable codes is presented for an arbitrary number of antennas. For the special case of N_t = 2^a we construct two subclasses of codes: (i) a class of 2a-group decodable codes with rate a/2^(a−1), which is, equivalently, a class of Single-Symbol Decodable codes; (ii) a class of (2a−2)-group decodable codes with rate (a−1)/2^(a−2), i.e., a class of Double-Symbol Decodable (DSD) codes. Simulation results show that the DSD codes of this paper perform better than previously known Quasi-Orthogonal Designs.
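The simplest instance of the decoupling described here is the Alamouti code, whose ML metric splits into one term per complex symbol; a small demonstration sketch (QPSK, two transmit antennas, one receive antenna, perfect channel knowledge, all assumptions for illustration):

```python
# Alamouti STBC: the ML metric decouples, so each symbol is detected
# independently (single-symbol decodability). QPSK, 2 tx / 1 rx antennas.
import numpy as np

rng = np.random.default_rng(0)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
s1, s2 = rng.choice(qpsk), rng.choice(qpsk)

h = (rng.normal(size=2) + 1j * rng.normal(size=2)) / np.sqrt(2)   # h1, h2
noise = 0.05 * (rng.normal(size=2) + 1j * rng.normal(size=2))

# Two received samples: [s1, s2] sent, then [-s2*, s1*] across the antennas.
r1 = h[0] * s1 + h[1] * s2 + noise[0]
r2 = -h[0] * np.conj(s2) + h[1] * np.conj(s1) + noise[1]

# Linear combining separates the symbols exactly; per-symbol ML suffices.
y1 = np.conj(h[0]) * r1 + h[1] * np.conj(r2)   # = (|h1|^2+|h2|^2) s1 + noise
y2 = np.conj(h[1]) * r1 - h[0] * np.conj(r2)   # = (|h1|^2+|h2|^2) s2 + noise
g = np.sum(np.abs(h) ** 2)
s1_hat = qpsk[np.argmin(np.abs(y1 - g * qpsk))]
s2_hat = qpsk[np.argmin(np.abs(y2 - g * qpsk))]
print(s1_hat == s1, s2_hat == s2)
```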
Abstract:
In phase-encoding optical CDMA (OCDMA), spreading is achieved by encoding the phase of the signal spectrum. In this paper we first derive a mathematical model for the output of phase-encoding OCDMA systems. Based on this model we introduce a metric for designing spreading sequences for asynchronous transmission. We then connect the phase-encoding sequence design problem to the OFDM PMEPR (peak-to-mean envelope power ratio) problem. Using this connection we conclude that designing sequences with good properties at sampled timing delays guarantees that the same sequences are good for all timing delays. Finally, using generalized bent functions we construct a family of sequences that are good for asynchronous phase-encoding OCDMA systems, and using these sequences we introduce an M-ary modulation scheme for phase-encoding OCDMA.
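The PMEPR quantity that links the two problems can be estimated numerically by oversampling the continuous envelope with a zero-padded IFFT; a small sketch, with the sequences and oversampling factor chosen purely for illustration (not the paper's bent-function constructions):

```python
# PMEPR of a unimodular phase sequence: peak of the oversampled
# envelope |s(t)|^2 divided by its mean power.
import numpy as np

def pmepr(phases, oversample=16):
    n = len(phases)
    spectrum = np.zeros(n * oversample, dtype=complex)
    spectrum[:n] = np.exp(1j * np.asarray(phases))   # unimodular spectral code
    envelope = np.abs(np.fft.ifft(spectrum)) ** 2    # oversampled |s(t)|^2
    return envelope.max() / envelope.mean()

rng = np.random.default_rng(0)
print("random phases:    %.2f" % pmepr(rng.uniform(0, 2 * np.pi, 64)))
print("quadratic phases: %.2f" % pmepr(np.pi * np.arange(64) ** 2 / 64))
```

Quadratic (chirp-like) phases typically yield a much flatter envelope than random phases, which is the kind of behavior good spreading sequences are designed to achieve.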
Abstract:
Rate control regulates the instantaneous video bit-rate to maximize a picture quality metric while satisfying channel constraints. Typically, a quality metric such as Peak Signal-to-Noise Ratio (PSNR) or weighted signal-to-noise ratio (WSNR) is chosen out of convenience. However, this metric is not always truly representative of perceptual video quality. Attempts to use perceptual metrics in rate control have been limited by the accuracy of the video quality metrics chosen. Recently, new and improved metrics of subjective quality, such as the Video Quality Experts Group's (VQEG) NTIA General Video Quality Model (VQM), have been shown to have strong correlation with subjective quality. Here, we apply the key principles of the NTIA-VQM model to rate control in order to maximize perceptual video quality. Our experiments demonstrate that applying NTIA-VQM-motivated metrics to standard TMN8 rate control in an H.263 encoder results in perceptible quality improvements over a baseline TMN8/MSE-based implementation.
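For reference, the "convenience" metric mentioned above follows directly from the MSE; a minimal sketch for 8-bit frames (the frame contents are synthetic placeholders):

```python
# PSNR, the convenience quality metric used by typical rate control.
import numpy as np

def psnr(ref, test, peak=255.0):
    mse = np.mean((ref.astype(np.float64) - test.astype(np.float64)) ** 2)
    return np.inf if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(0)
frame = rng.integers(0, 256, size=(144, 176), dtype=np.uint8)   # QCIF-sized
noisy = np.clip(frame + rng.normal(0, 2, frame.shape), 0, 255).astype(np.uint8)
print("PSNR = %.2f dB" % psnr(frame, noisy))
```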
Abstract:
This paper addresses the problem of how to select the optimal number of sensors and how to determine their placement in a given monitored area for multimedia surveillance systems. We propose to solve this problem by deriving a novel performance metric, in terms of a probability measure of accomplishing the task, as a function of the set of sensors and their placement. This measure is then used to find the optimal set. The same measure can be used to analyze the degradation in the system's performance with respect to the failure of various sensors. We also build a surveillance system using the optimal set of sensors obtained from the proposed design methodology. Experimental results show the effectiveness of the proposed design methodology in selecting the optimal set of sensors and their placement.
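One simple instantiation of such a probability-of-task metric, optimized greedily, is sketched below; the independence model and the detection-probability matrix p are illustrative assumptions, not the paper's measure:

```python
# Greedy sensor selection under an illustrative metric: the probability
# that each monitored point is detected by at least one chosen sensor,
# assuming independent per-sensor detection probabilities p[s, t].
import numpy as np

def task_probability(p, chosen):
    # P(detect) per point = 1 - prod(1 - p), averaged over the points
    miss = np.prod(1.0 - p[list(chosen), :], axis=0)
    return np.mean(1.0 - miss)

def greedy_select(p, k):
    chosen = []
    for _ in range(k):
        gains = [(task_probability(p, chosen + [s]), s)
                 for s in range(p.shape[0]) if s not in chosen]
        best_val, best_s = max(gains)
        chosen.append(best_s)
    return chosen, best_val

rng = np.random.default_rng(0)
p = rng.uniform(0, 0.9, size=(10, 25))     # 10 candidate sensors, 25 points
sensors, prob = greedy_select(p, k=3)
print(sensors, "P(task) = %.3f" % prob)
```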
Abstract:
This paper considers the high-rate performance of source coding for noisy discrete symmetric channels with random index assignment (IA). Accurate analytical models are developed to characterize the expected distortion performance of vector quantization (VQ) for a large class of distortion measures. It is shown that when the point density is continuous, the distortion can be approximated as the sum of the source quantization distortion and the channel-error-induced distortion. Expressions are also derived for the continuous point density that minimizes the expected distortion. Next, for the case of mean squared error distortion, a more accurate analytical model for the distortion is derived by allowing the point density to have a singular component. The extent of the singularity is also characterized. These results provide analytical models for the expected distortion performance of both conventional VQ and channel-optimized VQ. As a practical example, compression of the linear predictive coding parameters in the wideband speech spectrum is considered, with the log spectral distortion as the performance metric. The theory is able to correctly predict the channel error rate that is permissible for operation at a particular level of distortion.
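The additive split (quantization distortion plus channel-induced distortion) is easy to observe empirically; a small sketch with a scalar uniform quantizer, a binary symmetric channel, and a random index assignment, all of which are illustrative simplifications of the vector setting analyzed above:

```python
# Empirical check of the high-rate split: total MSE is roughly the
# quantizer MSE plus the channel-induced MSE, for a scalar uniform
# quantizer over a BSC with a random index assignment.
import numpy as np

rng = np.random.default_rng(0)
bits, eps, n = 4, 0.01, 200_000
levels = 2 ** bits
edges = np.linspace(-4, 4, levels + 1)
centers = 0.5 * (edges[:-1] + edges[1:])
ia = rng.permutation(levels)               # random index assignment
inv = np.argsort(ia)                       # its inverse, used at the decoder

x = rng.normal(size=n)
idx = np.clip(np.digitize(x, edges) - 1, 0, levels - 1)
tx = ia[idx]                               # transmitted indices
flips = rng.random((n, bits)) < eps        # BSC bit errors
noise_bits = np.sum(flips * (1 << np.arange(bits)), axis=1)
rx = tx ^ noise_bits
x_hat = centers[inv[rx]]

mse_total = np.mean((x - x_hat) ** 2)
mse_quant = np.mean((x - centers[idx]) ** 2)
print("total %.4f  quantizer %.4f  channel %.4f"
      % (mse_total, mse_quant, mse_total - mse_quant))
```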
Abstract:
This paper considers the degrees of freedom (DoF) of the K-user multiple-input multiple-output (MIMO) M × N interference channel under interference alignment (IA). A new performance metric for evaluating the efficacy of IA algorithms is proposed, which measures the extent to which the desired signal dimensionality is preserved after zero-forcing the interference at the receiver. Inspired by this metric, two algorithms are proposed for designing the linear precoders and receive filters for IA in the constant MIMO interference channel with a finite number of symbol extensions. The first algorithm uses an eigenbeamforming method to align sub-streams of the interference, reducing the dimensionality of the interference at all receivers. The second algorithm is iterative, and is based on minimizing the interference leakage power while preserving the dimensionality of the desired signal space at the intended receivers. The improved performance of the algorithms is illustrated by comparing them with existing IA algorithms using Monte Carlo simulations.
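A sketch of the alternating leakage-minimization idea, in the spirit of the second algorithm but without the dimensionality-preserving refinement the paper adds: each receiver takes the weakest eigenvectors of its interference covariance, then transmitter and receiver roles are swapped in the reciprocal network. The system sizes are an illustrative feasible case:

```python
# Iterative interference-leakage minimization, K-user MIMO channel
# (generic min-leakage sketch). K=3 users, M=N=4 antennas, d=2 streams.
import numpy as np

rng = np.random.default_rng(0)
K, M, N, d = 3, 4, 4, 2
H = [[rng.normal(size=(N, M)) + 1j * rng.normal(size=(N, M)) for _ in range(K)]
     for _ in range(K)]                     # H[k][l]: tx l -> rx k

def smallest_eigvecs(Q, d):
    _, vecs = np.linalg.eigh(Q)             # ascending eigenvalues
    return vecs[:, :d]

V = [np.linalg.qr(rng.normal(size=(M, d)))[0] for _ in range(K)]
for _ in range(100):
    # Receive filters: weakest-interference subspace at each receiver.
    U = [smallest_eigvecs(
            sum(H[k][l] @ V[l] @ V[l].conj().T @ H[k][l].conj().T
                for l in range(K) if l != k), d) for k in range(K)]
    # Reciprocal step: same update with tx/rx roles swapped.
    V = [smallest_eigvecs(
            sum(H[k][l].conj().T @ U[k] @ U[k].conj().T @ H[k][l]
                for k in range(K) if k != l), d) for l in range(K)]

leak = sum(np.linalg.norm(U[k].conj().T @ H[k][l] @ V[l]) ** 2
           for k in range(K) for l in range(K) if l != k)
print("residual interference leakage: %.2e" % leak)
```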
Abstract:
A novel methodology for modeling the effects of process variations on circuit delay is proposed, relating variations in process parameters to variations in the delay metric of a complex digital circuit. The delay of a 2-input NAND gate with 65 nm gate-length transistors is extensively characterized by mixed-mode simulations, and this gate is then used as a library element. The variation in the saturation current I_on at the device level, and the variation in rising/falling-edge stage delay for the NAND gate at the circuit level, are taken as performance metrics. A 4-bit × 4-bit Wallace tree multiplier is used as a representative combinational circuit to demonstrate the proposed methodology. The variation in the multiplier delay is characterized by an extensive Monte Carlo analysis to obtain delay distributions. An analytical model based on the CV/I metric is proposed to extend this methodology to a generic technology library with a variety of library elements.
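The CV/I abstraction makes the Monte Carlo step easy to reproduce in miniature; a toy sketch, where the parameter spreads, the alpha-power current model, and all constants are picked purely for illustration:

```python
# Toy Monte Carlo of delay variation through a CV/I-style metric:
# sample process parameters, map them to drive current, and collect
# the delay distribution. All spreads and models are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
Vdd, Vt0, alpha = 1.0, 0.3, 1.3            # supply, nominal Vt, alpha-power exponent
C, k = 1e-15, 1e-3                         # load capacitance, drive strength

L = rng.normal(65e-9, 2e-9, n)             # gate-length variation (~3% sigma)
Vt = rng.normal(Vt0, 0.015, n)             # threshold-voltage variation

Ion = k * (65e-9 / L) * (Vdd - Vt) ** alpha   # saturation current model
delay = C * Vdd / Ion                         # CV/I delay metric

print("mean = %.3g s, std = %.3g s, 3-sigma/mean = %.1f%%"
      % (delay.mean(), delay.std(), 300 * delay.std() / delay.mean()))
```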