127 results for general rule


Relevance:

20.00%

Publisher:

Abstract:

In the underlay mode of cognitive radio, secondary users are allowed to transmit when the primary is transmitting, but under tight interference constraints that protect the primary. However, these constraints limit the secondary system performance. Antenna selection (AS)-based multiple antenna techniques, which exploit spatial diversity with less hardware, help improve secondary system performance. We develop a novel and optimal transmit AS rule that minimizes the symbol error probability (SEP) of an average interference-constrained multiple-input-single-output secondary system that operates in the underlay mode. We show that the optimal rule is a non-linear function of the power gain of the channel from the secondary transmit antenna to the primary receiver and from the secondary transmit antenna to the secondary receive antenna. We also propose a simpler, tractable variant of the optimal rule that performs as well as the optimal rule. We then analyze its SEP with L transmit antennas, and extensively benchmark it with several heuristic selection rules proposed in the literature. We also enhance these rules in order to provide a fair comparison, and derive new expressions for their SEPs. The results bring out new inter-relationships between the various rules, and show that the optimal rule can significantly reduce the SEP.


An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, i.e., the one with the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. It uses a mapping from the channel gain, or, in general, a real-valued metric, to a timer value. The mapping is such that timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy, in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the inter-relationship between the discrete rate adaptation rule, optimal mapping, and selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
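The core idea of the timer scheme can be sketched in a few lines. The linear metric-to-timer mapping and the collision window `delta` below are illustrative assumptions, not the optimal mapping derived in the paper; any monotonically decreasing mapping makes higher-metric timers expire earlier.

```python
import random

def timer_backoff_select(metrics, t_max=1.0, delta=0.05):
    """Simulate distributed timer-based best-node selection.

    Each node maps its local metric (e.g. channel power gain) to a
    timer via a monotonically decreasing mapping, so the node with the
    highest metric expires first.  Selection fails if another timer
    expires within `delta` of the earliest one (a simple collision
    model for overlapping transmissions).
    """
    g_max = max(metrics)
    # Illustrative linear mapping: timer = t_max * (1 - g / g_max).
    timers = [t_max * (1.0 - g / g_max) for g in metrics]
    order = sorted(range(len(metrics)), key=lambda i: timers[i])
    first, second = order[0], order[1]
    if timers[second] - timers[first] < delta:
        return None  # collision: the two earliest timers are too close
    return first  # index of the selected (best) node

random.seed(1)
gains = [random.random() for _ in range(8)]
winner = timer_backoff_select(gains)
```

In this sketch the best node is always selected unless its timer is nearly tied with the runner-up; the paper's contribution is choosing the mapping optimally under discrete rate adaptation, which a simple linear mapping does not do.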


A principal hypothesis for the evolution of leks (rare and intensely competitive territorial aggregations) is that leks result from females preferring to mate with clustered males. This hypothesis predicts more female visits and higher mating success per male on larger leks. Evidence for and against this hypothesis has been presented by different studies, primarily of individual populations, but its generality has not yet been formally investigated. We took a meta-analytical approach towards formally examining the generality of such a female bias in lekking species. Using available published data and using female visits as an index of female mating bias, we estimated the shape of the relationship between lek size and total female visits to a lek, female visits per lekking male and, where available, per capita male mating success. Individual analyses showed that female visits generally increased with lek size across the majority of taxa surveyed; the meta-analysis indicated that this relationship with lek size was disproportionately positive. The findings from analysing per capita female visits were mixed, with an increase with lek size detected in half of the species, which were, however, widely distributed taxonomically. Taken together, these findings suggest that a female bias for clustered males may be a general process across lekking species. Nevertheless, the substantial variation seen in these relationships implies that other processes are also important. Analyses of per capita copulation success suggested that, more generally, increased per capita mating benefits may be an important selective factor in lek maintenance.


Recession flows in a basin are controlled by the temporal evolution of its active drainage network (ADN). The geomorphological recession flow model (GRFM) assumes that both the rate of flow generation per unit ADN length (q) and the speed at which ADN heads move downstream (c) remain constant during a recession event. It thereby connects the power-law exponent α of the -dQ/dt versus Q curve (where Q is the discharge at the outlet at time t) with the structure of the drainage network, a fixed entity. In this study, we first reformulate the GRFM for Horton-Strahler networks and show that the geomorphic exponent α_g is equal to D/(D-1), where D is the fractal dimension of the drainage network. We then propose a more general recession flow model by expressing both q and c as functions of Horton-Strahler stream order. We show that it is possible to have α = α_g for a recession event even when q and c do not remain constant. The modified GRFM suggests that α is controlled by the spatial distribution of subsurface storage within the basin. By analyzing streamflow data from 39 U.S. Geological Survey basins, we show that α has a power-law relationship with the recession-curve peak, which indicates that the spatial distribution of subsurface storage varies across recession events. Key Points: (1) the GRFM is reformulated for Horton-Strahler networks; (2) the GRFM is modified by allowing its parameters to vary along streams; (3) subsurface storage distribution controls recession flow characteristics.
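The geomorphic relation α_g = D/(D-1) and the extraction of the recession exponent α from a log-log fit of -dQ/dt against Q can be sketched as follows. The synthetic recession series, Euler step size, and ordinary least-squares fit are illustrative assumptions, not the paper's estimation procedure:

```python
import math

def geomorphic_exponent(D):
    """Geomorphic recession exponent alpha_g = D / (D - 1),
    where D is the fractal dimension of the drainage network."""
    return D / (D - 1.0)

def fit_recession_exponent(t, Q):
    """Estimate alpha in -dQ/dt = k * Q**alpha by least-squares
    fitting log(-dQ/dt) against log(Q) at interior points."""
    x, y = [], []
    for i in range(1, len(Q) - 1):
        # Central finite-difference estimate of dQ/dt.
        dQdt = (Q[i + 1] - Q[i - 1]) / (t[i + 1] - t[i - 1])
        if dQdt < 0:
            x.append(math.log(Q[i]))
            y.append(math.log(-dQdt))
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    return sxy / sxx  # slope of the log-log fit = alpha

# Synthetic recession obeying dQ/dt = -k * Q**alpha with alpha = 2,
# i.e. the value of alpha_g for a network of fractal dimension D = 2.
k, alpha, dt = 0.1, 2.0, 0.01
t, Q = [0.0], [10.0]
for _ in range(2000):
    Q.append(Q[-1] - k * Q[-1] ** alpha * dt)
    t.append(t[-1] + dt)
alpha_hat = fit_recession_exponent(t, Q)
```

On this noiseless series the fitted slope recovers the exponent used to generate the data; real recession curves require the event-by-event treatment described in the abstract.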


We study consistency properties of surrogate loss functions for general multiclass classification problems, defined by a general loss matrix. We extend the notion of classification calibration, which has been studied for binary and multiclass 0-1 classification problems (and for certain other specific learning problems), to the general multiclass setting, and derive necessary and sufficient conditions for a surrogate loss to be classification calibrated with respect to a loss matrix in this setting. We then introduce the notion of the "classification calibration dimension" of a multiclass loss matrix, which measures the smallest "size" of a prediction space for which it is possible to design a convex surrogate that is classification calibrated with respect to the loss matrix. We derive both upper and lower bounds on this quantity, and use these results to analyze various loss matrices. In particular, as one application, we provide a different route from the recent result of Duchi et al. (2010) for analyzing the difficulty of designing "low-dimensional" convex surrogates that are consistent with respect to pairwise subset ranking losses. We anticipate the classification calibration dimension may prove to be a useful tool in the study and design of surrogate losses for general multiclass learning problems.


Transductive SVM (TSVM) is a well-known semi-supervised large margin learning method for binary text classification. In this paper we extend this method to multi-class and hierarchical classification problems. We point out that the determination of labels of unlabeled examples with fixed classifier weights is a linear programming problem, and we devise an efficient technique for solving it. The method is applicable to general loss functions. We demonstrate the value of the new method using large margin loss on a number of multi-class and hierarchical classification datasets. For maxent loss we show empirically that our method is better than expectation regularization/constraint and posterior regularization methods, and competitive with the version of the entropy regularization method that uses label constraints.


In many systems, nucleation of a stable solid may occur in the presence of other (often more than one) metastable phases. These may be polymorphic solids or even liquid phases. Sometimes, the metastable phase might have a lower free energy minimum than the liquid but higher than the stable-solid-phase minimum, and have characteristics in between the parent liquid and the globally stable solid phase. In such cases, nucleation of the solid phase from the melt may be facilitated by the metastable phase because the latter can "wet" the interface between the parent and the daughter phases, even though there may be no signature of the existence of the metastable phase in the thermodynamic properties of the parent liquid and the stable solid phase. Straightforward application of classical nucleation theory (CNT) is flawed here, as it overestimates the nucleation barrier: surface tension is overestimated (by neglecting the metastable phases of intermediate order) while the thermodynamic free energy gap between daughter and parent phases remains unchanged. In this work, we discuss a density functional theory (DFT)-based statistical mechanical approach to explore and quantify such facilitation. We construct a simple order-parameter-dependent free energy surface that we then use in DFT to calculate (i) the order parameter profile, (ii) the overall nucleation free energy barrier, and (iii) the surface tension between the parent liquid and the metastable solid, and also between the parent liquid and stable solid phases. The theory indeed finds that the nucleation free energy barrier can decrease significantly in the presence of wetting. This approach can provide a microscopic explanation of the Ostwald step rule and the well-known phenomenon of "disappearing polymorphs" that depends on temperature and other thermodynamic conditions. The theory reveals a diverse scenario for phase transformation kinetics, some of which may be explored via modern nanoscopic synthetic methods.


Entanglement entropy in local quantum field theories is typically ultraviolet divergent due to short distance effects in the neighborhood of the entangling region. In the context of gauge/gravity duality, we show that surface terms in general relativity are able to capture this entanglement entropy. In particular, we demonstrate that for (1+1)-dimensional conformal field theories (CFTs) at finite temperature whose gravity dual is the Banados-Teitelboim-Zanelli (BTZ) black hole, the Gibbons-Hawking-York term precisely reproduces the entanglement entropy, which can be computed independently in the field theory.


A space vector-based hysteresis current controller for any general n-level three phase inverter fed induction motor drive is proposed in this study. It offers fast dynamics, inherent overload protection and low harmonic distortion for the phase voltages and currents. The controller performs online current error boundary calculations and a nearly constant switching frequency is obtained throughout the linear modulation range. The proposed scheme uses only the adjacent voltage vectors of the present sector, similar to space vector pulse-width modulation and exhibits fast dynamic behaviour under different transient conditions. The steps involved in the boundary calculation include the estimation of phase voltages from the current ripple, computation of switching time and voltage error vectors. Experimental results are given to show the performance of the drive at various speeds, effect of sudden change of the load, acceleration, speed reversal and validate the proposed advantages.


In this paper, we propose FeatureMatch, a generalised approximate nearest-neighbour field (ANNF) computation framework between a source and target image. The proposed algorithm can estimate ANNF maps between any image pairs, not necessarily related. This generalisation is achieved through appropriate spatial-range transforms. To compute ANNF maps, global colour adaptation is applied as a range transform on the source image. Image patches from the pair of images are approximated using low-dimensional features, which are used along with a KD-tree to estimate the ANNF map. This ANNF map is further improved based on image coherency and spatial transforms. The proposed generalisation enables us to handle a wider range of vision applications, which have not been tackled using the ANNF framework. We illustrate two such applications, namely: 1) optic disk detection and 2) super resolution. The first application deals with medical imaging, where we locate optic disks in retinal images using a healthy optic disk image as a common target image. The second application deals with super resolution of synthetic images using a common source image as a dictionary. We make use of ANNF mappings in both these applications and show experimentally that our proposed approaches are faster and more accurate than the state-of-the-art techniques.
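The patch-feature step of an ANNF pipeline can be sketched as follows. The three-component (mean, horizontal gradient, vertical gradient) descriptor and the brute-force nearest-neighbour search below are illustrative stand-ins: FeatureMatch uses richer low-dimensional features and indexes them with a KD-tree, and applies range/spatial transforms that this sketch omits.

```python
def patch_features(img, p):
    """Low-dimensional descriptor for every p x p patch of a grayscale
    image (list of rows): (mean, horizontal gradient, vertical gradient).
    Illustrative; the real method uses richer features plus a KD-tree."""
    h, w = len(img), len(img[0])
    feats = {}
    for y in range(h - p + 1):
        for x in range(w - p + 1):
            vals = [img[y + dy][x + dx] for dy in range(p) for dx in range(p)]
            mean = sum(vals) / len(vals)
            gx = sum(img[y + dy][x + p - 1] - img[y + dy][x] for dy in range(p))
            gy = sum(img[y + p - 1][x + dx] - img[y][x + dx] for dx in range(p))
            feats[(y, x)] = (mean, gx, gy)
    return feats

def annf_map(src, tgt, p=3):
    """Approximate nearest-neighbour field: for each source patch, the
    target patch whose feature vector is closest (brute force here,
    where a KD-tree would be used at scale)."""
    fs, ft = patch_features(src, p), patch_features(tgt, p)
    nnf = {}
    for pos_s, f in fs.items():
        best = min(ft, key=lambda q: sum((a - b) ** 2 for a, b in zip(f, ft[q])))
        nnf[pos_s] = best
    return nnf

# Toy usage: mapping an image onto itself, each patch should match a
# patch with an identical feature vector.
img = [[(i * 5 + j * 3) % 17 for j in range(6)] for i in range(6)]
nnf = annf_map(img, img, p=2)
```

Replacing the `min` scan with a KD-tree query turns the quadratic search into the logarithmic-per-query lookup that makes the method fast in practice.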


Consider a J-component series system which is put on Accelerated Life Test (ALT) involving K stress variables. First, a general formulation of ALT is provided for the log-location-scale family of distributions. A general stress translation function of the location parameter of the component log-lifetime distribution is proposed, which can accommodate standard ones like Arrhenius, power-rule, log-linear model, etc., as special cases. Later, the component lives are assumed to be independent Weibull random variables with a common shape parameter. A full Bayesian methodology is then developed by letting only the scale parameters of the Weibull component lives depend on the stress variables through the general stress translation function. Priors on all the parameters, namely the stress coefficients and the Weibull shape parameter, are assumed to be log-concave and independent of each other. This assumption is to facilitate Gibbs sampling from the joint posterior. The samples thus generated from the joint posterior are then used to obtain the Bayesian point and interval estimates of the system reliability at usage condition.
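The reliability quantity being estimated can be sketched deterministically. The log-linear stress translation below is one special case of the paper's general translation function, and the coefficient values, shape parameter, and stress levels are illustrative assumptions, not estimates from any data:

```python
import math

def weibull_reliability(t, scale, shape):
    """P(T > t) for a Weibull lifetime with the given scale and shape."""
    return math.exp(-((t / scale) ** shape))

def series_system_reliability(t, stresses, coeffs, shape):
    """Reliability of a J-component series system under K stress
    variables: the product of component reliabilities, where each
    component's log-scale is a linear function of the stresses
    (a log-linear special case of the general stress translation):
        log(scale_j) = a_j[0] + sum_k a_j[k+1] * x_k
    """
    rel = 1.0
    for a in coeffs:
        log_scale = a[0] + sum(ak * xk for ak, xk in zip(a[1:], stresses))
        rel *= weibull_reliability(t, math.exp(log_scale), shape)
    return rel

# Two components, one stress variable; negative coefficients mean
# higher stress shortens life, which is what an ALT exploits.
coeffs = [(5.0, -0.8), (5.5, -1.0)]
r_use = series_system_reliability(100.0, [1.0], coeffs, shape=1.5)
r_accel = series_system_reliability(100.0, [3.0], coeffs, shape=1.5)
```

In the paper this computation sits inside a Bayesian loop: the Gibbs samples of the stress coefficients and shape parameter are pushed through it to get a posterior sample of the system reliability at the usage condition.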


The influence of the flow rule on the bearing capacity of strip foundations placed on sand was investigated using a new kinematic approach of upper-bound limit analysis. The method of stress characteristics was first used to find the mechanism of the failure and to compute the stress field by using the Mohr-Coulomb yield criterion. Once the failure mechanism had been established, the kinematics of the plastic deformation was established, based on the requirements of the upper-bound limit theorem. Both associated and nonassociated plastic flows were considered, and the bearing capacity was obtained by equating the rate of external plastic work to the rate of the internal energy dissipation for both smooth and rough base foundations. The results obtained from the analysis were compared with those available from the literature. (C) 2014 American Society of Civil Engineers.



The correlation clustering problem is a fundamental problem in both theory and practice, and it involves identifying clusters of objects in a data set based on their similarity. A traditional modeling of this question as a graph-theoretic problem associates vertices with data points and indicates similarity by adjacency. Clusters then correspond to cliques in the graph. The resulting optimization problem, Cluster Editing (and several variants), is very well studied algorithmically. In many situations, however, translating clusters to cliques can be somewhat restrictive. A more flexible notion would be that of a structure where the vertices are mutually "not too far apart", without necessarily being adjacent. One such generalization is realized by structures called s-clubs, which are graphs of diameter at most s. In this work, we study the question of finding a set of at most k edges whose removal leaves us with a graph whose components are s-clubs. Recently, it has been shown that unless the Exponential Time Hypothesis (ETH) fails, Cluster Editing (whose components are 1-clubs) does not admit a sub-exponential-time algorithm [STACS, 2013]. That is, there is no algorithm solving the problem in time 2^{o(k)} n^{O(1)}. Surprisingly, however, when the number of cliques in the output graph is restricted to d, the problem can be solved in time O(2^{O(sqrt(dk))} + m + n). We show that this sub-exponential-time algorithm for a fixed number of cliques is the exception rather than the rule. Our first result shows that, assuming the ETH, there is no algorithm solving the s-Club Cluster Edge Deletion problem in time 2^{o(k)} n^{O(1)}. We show, further, that even the problem of deleting edges to obtain a graph with d s-clubs cannot be solved in time 2^{o(k)} n^{O(1)} for any fixed s, d >= 2. This is a radical contrast with the situation established for cliques, where sub-exponential algorithms are known.
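The target property of s-Club Cluster Edge Deletion (every component has diameter at most s) is easy to verify even though the deletion problem itself is hard; a minimal checker, assuming an adjacency-list representation of the graph, can be sketched as:

```python
from collections import deque

def bfs_eccentricity(adj, start):
    """Maximum shortest-path distance from `start` within its component."""
    dist = {start: 0}
    q = deque([start])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

def components_are_s_clubs(adj, s):
    """Check that every connected component is an s-club, i.e. has
    diameter at most s (1-clubs are exactly the cliques)."""
    seen = set()
    for u in adj:
        if u in seen:
            continue
        # Collect the component containing u.
        comp = {u}
        q = deque([u])
        while q:
            x = q.popleft()
            for v in adj[x]:
                if v not in comp:
                    comp.add(v)
                    q.append(v)
        seen |= comp
        # Diameter = max eccentricity over the component's vertices.
        if any(bfs_eccentricity(adj, v) > s for v in comp):
            return False
    return True
```

A path on four vertices has diameter 3, so it is a 3-club but not a 2-club; a triangle, being a clique, is a 1-club. The hardness results in the abstract say that finding the cheapest k-edge deletion reaching this property admits no 2^{o(k)} n^{O(1)} algorithm under the ETH, even for a fixed number d >= 2 of s-clubs.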