940 results for Metric cone
Abstract:
Chebyshev-inequality-based convex relaxations of Chance-Constrained Programs (CCPs) are shown to be useful for learning classifiers on massive datasets. In particular, an algorithm that integrates efficient clustering procedures and CCP approaches for computing classifiers on large datasets is proposed. The key idea is to identify high-density regions, or clusters, in the individual class-conditional densities and then use a CCP formulation to learn a classifier on the clusters. The CCP formulation ensures that most of the data points in a cluster are correctly classified by employing a Chebyshev-inequality-based convex relaxation. This relaxation depends heavily on second-order statistics, and such moment-based relaxations are in general susceptible to moment estimation errors. One of the contributions of the paper is to propose several formulations that are robust to such errors. In particular, a generic way of making such formulations robust to moment estimation errors is illustrated using two novel confidence sets. An important contribution is to show that when either of the confidence sets is employed, for the special case of clusters with spherical normal distributions, the robust variant of the formulation can be posed as a second-order cone program. Empirical results show that the robust formulations achieve accuracies comparable to those obtained with the true moments, even when the moment estimates are erroneous. The results also illustrate the benefits of employing the proposed methodology for robust classification of large-scale datasets.
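As a hedged gloss of the key step (the standard Chebyshev-based relaxation; the paper's exact formulation may differ): for a cluster with mean $\mu$ and covariance $\Sigma$, the one-sided Chebyshev (Cantelli) inequality converts the chance constraint that a fraction $\eta$ of the cluster be correctly classified into a deterministic second-order cone constraint,

$$
\Pr\big(y\,(w^{\top}x + b) \ge 0\big) \ge \eta, \;\; x \sim (\mu, \Sigma)
\;\Longleftarrow\;
y\,(w^{\top}\mu + b) \;\ge\; \kappa(\eta)\,\big\lVert \Sigma^{1/2} w \big\rVert_2,
\qquad \kappa(\eta) = \sqrt{\tfrac{\eta}{1-\eta}},
$$

which is precisely the constraint form a second-order cone program accepts.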
Abstract:
Protein structure space is believed to consist of a finite set of discrete folds, unlike the protein sequence space, which is astronomically large; this indicates that proteins from the available sequence space are likely to adopt one of the many folds already observed. In spite of extensive sequence-structure correlation data, protein structure prediction remains an open question, with researchers having tried different approaches (experimental as well as computational). One of the challenges of protein structure prediction is to identify native protein structures from a milieu of decoys/models. In this work, a rigorous investigation of Protein Structure Networks (PSNs) has been performed to detect native structures from decoys/models. Ninety-four parameters obtained from network studies have been optimally combined with Support Vector Machines (SVMs) to derive a general metric to distinguish decoys/models from native protein structures with an accuracy of 94.11%. Recently, we showed for the first time in the literature that PSNs have the capability to distinguish native proteins from decoys. A major difference between the present work and the previous study is the exploration of transition profiles at different strengths of non-covalent interactions, and the SVM has indeed identified this as an important parameter. Additionally, the SVM-trained algorithm is also applied to the recent CASP10 predicted models. The novelty of the network approach is that it is based on general network properties of native protein structures and that a given model can be assessed independently of any reference structure. Thus, the approach presented in this paper can be valuable in validating predicted structures. A web server has been developed for this purpose and is freely available at http://vishgraph.mbu.iisc.ernet.in/GraProStr/PSN-QA.html.
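A minimal sketch, assuming a precomputed feature matrix (the file names, feature extraction, and SVM settings below are hypothetical, not the authors' pipeline), of how 94 network-derived parameters can be combined with an SVM to separate natives from decoys:

```python
# Hypothetical sketch: PSN-derived parameters + SVM for native/decoy
# classification. Feature files and settings are placeholders.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# X: one row per structure, 94 network parameters (e.g., computed at
# several non-covalent interaction strengths); y: 1 = native, 0 = decoy.
X = np.load("psn_features.npy")   # shape (n_structures, 94); placeholder
y = np.load("labels.npy")         # placeholder

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print("cross-validated accuracy: %.2f%%" % (100 * scores.mean()))
```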
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown and only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights into the optimality of some ad hoc approaches are also developed.
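To make the collision mechanism concrete, here is a small Monte Carlo sketch of one selection round (the continuous mapping below is deliberately naive; the paper's result is that the optimal mapping is a discrete staircase):

```python
# Simulate timer-based selection: each node maps its metric to a timer
# via a decreasing function; the round fails if the runner-up's timer
# expires within Delta of the best node's timer.
import numpy as np

rng = np.random.default_rng(0)

def timer(metric, t_max=1.0):
    # Naive continuous decreasing mapping (illustrative only).
    return t_max * (1.0 - metric)

def round_succeeds(n_nodes, delta):
    metrics = rng.uniform(size=n_nodes)  # i.i.d. metrics in [0, 1]
    times = np.sort(timer(metrics))      # expiry instants, earliest first
    return n_nodes == 1 or (times[1] - times[0]) > delta

trials = 20_000
wins = sum(round_succeeds(n_nodes=10, delta=0.05) for _ in range(trials))
print("average success probability ≈", wins / trials)
```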
Abstract:
An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, which has the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. It uses a mapping from the channel gain or, in general, a real-valued metric, to a timer value. The mapping is such that the timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the interrelationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
Abstract:
Delaunay and Gabriel graphs are widely studied geometric proximity structures. Motivated by applications in wireless routing, relaxed versions of these graphs known as Locally Delaunay Graphs (LDGs) and Locally Gabriel Graphs (LGGs) have been proposed. We propose another generalization of LGGs, called Generalized Locally Gabriel Graphs (GLGGs), for the context in which certain edges are forbidden in the graph. Unlike a Gabriel graph, there is no unique LGG or GLGG for a given point set, because no edge is necessarily included or excluded. This property allows us to choose an LGG/GLGG that optimizes a parameter of interest in the graph. We show that computing an edge-maximum GLGG for a given problem instance is NP-hard and also APX-hard. We also show that computing an LGG on a given point set with dilation ≤ k is NP-hard. Finally, we give an algorithm to verify whether a given geometric graph G = (V, E) is a valid LGG.
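For concreteness, a short sketch of the classical Gabriel condition that LGGs and GLGGs relax (the function and example points are illustrative):

```python
# Edge (u, v) is a Gabriel edge iff no other point lies strictly inside
# the disk having segment uv as its diameter.
import numpy as np

def is_gabriel_edge(points, u, v, eps=1e-12):
    p, q = points[u], points[v]
    center = (p + q) / 2.0
    radius_sq = np.sum((p - q) ** 2) / 4.0
    for i, r in enumerate(points):
        if i in (u, v):
            continue
        if np.sum((r - center) ** 2) < radius_sq - eps:
            return False  # r lies strictly inside the diametral disk
    return True

pts = np.array([[0.0, 0.0], [2.0, 0.0], [1.0, 0.1]])
print(is_gabriel_edge(pts, 0, 1))  # False: point 2 blocks the edge
```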
Abstract:
We consider a scenario in which the communication nodes in a sensor network have limited energy, and the objective is to maximize the aggregate bits transported from sources to their respective destinations before the network partitions due to node deaths. This performance metric is novel and captures the useful information that a network can provide over its lifetime. The optimization problem that results from our approach is nonlinear; however, we show that it can be converted to a Multicommodity Flow (MCF) problem that yields the optimal value of the metric. Subsequently, we compare the performance of a practical routing strategy, based on Node-Disjoint Paths (NDPs), with the ideal given by the MCF formulation. Our results indicate that the performance of NDP-based routing is within 7.5% of the optimal.
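Schematically (illustrative notation; not necessarily the paper's exact formulation), such an energy-constrained MCF program might read:

$$
\max_{f \ge 0} \; \sum_{k} \sum_{e \in \mathrm{out}(s_k)} f_e^{k}
\quad \text{s.t.} \quad
\sum_{e \in \mathrm{in}(i)} f_e^{k} = \sum_{e \in \mathrm{out}(i)} f_e^{k}
\;\; \forall\, k,\; i \notin \{s_k, d_k\},
$$
$$
\sum_{k} \Big( e_{\mathrm{tx}} \sum_{e \in \mathrm{out}(i)} f_e^{k} \;+\; e_{\mathrm{rx}} \sum_{e \in \mathrm{in}(i)} f_e^{k} \Big) \;\le\; E_i \quad \forall\, i,
$$

where $f_e^{k}$ is the number of bits of source-destination pair $k$ carried on link $e$, $e_{\mathrm{tx}}$ and $e_{\mathrm{rx}}$ are per-bit transmit and receive energies, and $E_i$ is node $i$'s energy budget.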
Abstract:
Space shift keying (SSK) is a special case of spatial modulation (SM), a relatively new modulation technique that is being recognized as attractive for multi-antenna communications. Our new contribution in this paper is an analytical derivation of an exact closed-form expression for the end-to-end bit error rate (BER) performance of SSK in decode-and-forward (DF) cooperative relaying. An incremental relaying (IR) scheme with selection combining (SC) at the destination is considered. In SSK, since the information is carried by the transmit antenna index, traditional selection combining methods based on instantaneous SNRs cannot be directly used. To overcome this problem, we propose to select between the direct and relayed paths based on the Euclidean distance between columns of the channel matrix. With this selection metric, an exact analytical expression for the end-to-end BER is derived in closed form. Analytical results are shown to match simulation results.
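A hedged numpy sketch of this selection metric (variable names and channel dimensions are illustrative):

```python
# With SSK, detection separates transmit-antenna hypotheses, so a
# path's reliability scales with the minimum Euclidean distance
# between columns of its channel matrix.
import numpy as np
from itertools import combinations

def min_column_distance(H):
    return min(np.linalg.norm(H[:, i] - H[:, j])
               for i, j in combinations(range(H.shape[1]), 2))

rng = np.random.default_rng(1)
shape = (2, 2)  # receive antennas x transmit antennas (illustrative)
H_direct = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)
H_relay = (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Select the path whose antenna signatures are better separated.
use_direct = min_column_distance(H_direct) >= min_column_distance(H_relay)
print("direct" if use_direct else "relayed")
```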
Abstract:
Energy harvesting sensor (EHS) nodes provide an attractive and green solution to the problem of the limited lifetime of wireless sensor networks (WSNs). Unlike a conventional node, which uses a non-rechargeable battery and dies once it runs out of energy, an EHS node can harvest energy from the environment and replenish its rechargeable battery. We consider hybrid WSNs that comprise both EHS and conventional nodes; these arise when legacy WSNs are upgraded or due to EHS deployment cost issues. We compare conventional and hybrid WSNs on the basis of a new and insightful performance metric called the k-outage duration, which captures the inability of the nodes to transmit data either due to a lack of sufficient battery energy or due to wireless fading. The metric overcomes the problem of defining lifetime in networks with EHS nodes, which never die but are occasionally unable to transmit for lack of sufficient battery energy. It also accounts for the effect of wireless channel fading on the ability of the WSN to transmit data. We develop two novel, tight, and computationally simple bounds for evaluating the k-outage duration. Our results show that increasing the number of EHS nodes has a markedly different effect on the k-outage duration than increasing the number of conventional nodes.
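The following Monte Carlo sketch assumes one plausible reading of the metric, namely counting slots in which at least k nodes are simultaneously unable to transmit; the battery, harvesting, and fading models are illustrative placeholders, not the paper's:

```python
# ASSUMED reading of a k-outage-style metric: count slots where >= k
# nodes cannot transmit (empty battery or deep fade). All models here
# are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(2)

def k_outage_slots(n_nodes, n_ehs, k, slots=10_000):
    battery = np.full(n_nodes, 50.0)      # initial energy (units)
    harvest = np.zeros(n_nodes)
    harvest[:n_ehs] = 1.0                 # EHS nodes harvest 1 unit/slot
    outage = 0
    for _ in range(slots):
        gain = rng.exponential(size=n_nodes)        # Rayleigh power gains
        can_tx = (battery >= 1.0) & (gain > 0.1)    # energy and channel OK
        battery = np.minimum(battery - can_tx * 1.0 + harvest, 100.0)
        if np.count_nonzero(~can_tx) >= k:
            outage += 1
    return outage

print(k_outage_slots(n_nodes=10, n_ehs=4, k=3))
```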
Abstract:
A necessary step in the recognition of scanned documents is binarization, which is essentially the segmentation of the document. Several algorithms for binarizing a scanned document can be found in the literature. What is the best binarization result for a given document image? To answer this question, a user needs to check different binarization algorithms for suitability, since different algorithms may work better for different types of documents. Manually choosing the best from a set of binarized documents is time-consuming. To automate the selection of the best segmented document, we either need the ground truth of the document or an evaluation metric. If ground truth is available, then precision and recall can be used to choose the best binarized document. What about the case when ground truth is not available? Can we come up with a metric that evaluates these binarized documents? Hence, we propose a metric based on eigenvalue decomposition to evaluate binarized document images. We have evaluated this measure on the DIBCO and H-DIBCO datasets. The proposed method chooses the best binarized document, namely the one closest to the ground truth of the document.
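When ground truth is available, the precision/recall route the abstract mentions is straightforward; a short sketch (the boolean masks below are tiny placeholders):

```python
# Precision and recall of foreground (text) pixels against ground truth,
# usable to rank candidate binarizations when ground truth exists.
import numpy as np

def precision_recall(binarized, ground_truth):
    tp = np.count_nonzero(binarized & ground_truth)
    fp = np.count_nonzero(binarized & ~ground_truth)
    fn = np.count_nonzero(~binarized & ground_truth)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

gt = np.zeros((4, 4), dtype=bool); gt[1:3, 1:3] = True
cand = np.zeros((4, 4), dtype=bool); cand[1:3, 1:4] = True
print(precision_recall(cand, gt))  # (0.666..., 1.0)
```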
Abstract:
The curvature K_T(w) of a contraction T in the Cowen-Douglas class B_1(D) is bounded above by the curvature K_{S*}(w) of the backward shift operator. However, in general, an operator satisfying the curvature inequality need not be contractive. In this paper, we characterize a slightly smaller class of contractions using a stronger form of the curvature inequality. Along the way, we find conditions on the metric of the holomorphic Hermitian vector bundle E_T corresponding to the operator T in the Cowen-Douglas class B_1(D) which ensure negative definiteness of the curvature function. We obtain a generalization for commuting tuples of operators in the class B_1(Omega) for a bounded domain Omega in C^m.
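For reference, in the usual normalization the curvature inequality reads (stated as a gloss; the paper's conventions may differ):

$$
\mathcal{K}_T(w) \;=\; -\frac{\partial^2}{\partial w\, \partial \bar{w}} \log \lVert \gamma(w) \rVert^2 \;\le\; -\frac{1}{(1-|w|^2)^2} \;=\; \mathcal{K}_{S^*}(w), \qquad w \in \mathbb{D},
$$

where $\gamma$ is a non-vanishing holomorphic section of the bundle $E_T$.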
Abstract:
The distributed, low-feedback timer scheme is used in several wireless systems to select the best node from the available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the transmission by the best node. We derive the optimal metric-to-timer mappings for the practical scenario in which the number of nodes is unknown. We consider two cases, in which the probability distribution of the number of nodes is either known a priori or unknown. In the first case, the optimal mapping maximizes the success probability averaged over the probability distribution. In the second case, a robust mapping maximizes the worst-case average success probability over all possible probability distributions on the number of nodes. Results reveal that the proposed mappings deliver significant gains compared to the mappings considered in the literature.
Abstract:
We investigate the dynamics of a sinusoidally driven ferromagnetic martensitic ribbon by adopting a recently introduced model that involves strain and magnetization as order parameters. Retaining only the dominant mode of excitation, we reduce the coupled set of partial differential equations for strain and magnetization to a set of coupled nonlinear ordinary differential equations for the strain and magnetization amplitudes. The equation for the strain amplitude takes the form of a parametrically driven oscillator. A finite strain amplitude can only be induced beyond a critical value of the strength of the magnetic field. Chaotic response is seen for a range of values of all the physically interesting parameters. The nature of the bifurcations depends on the choice of temperature relative to the ordering of the Curie and martensite transformation temperatures. We have studied the nature of the response as a function of the strength and frequency of the magnetic field, and of the magneto-elastic coupling. In general, the bifurcation diagrams with respect to these parameters do not follow any standard route. The rich dynamics exhibited by the model is further illustrated by the presence of mixed-mode oscillations seen at low frequencies. The geometric structure of the mixed-mode oscillations in phase space has an unusual deep-crater structure, with an outer and an inner cone on which the orbits circulate. We suggest that these features should be observable in experiments on driven magneto-martensitic ribbons.
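Schematically (an illustrative normal form, not the paper's exact reduced equation), such a strain-amplitude equation is of the parametrically driven, damped nonlinear oscillator type,

$$
\ddot{A} + \gamma \dot{A} + \big[\omega_0^2 + h \cos(\Omega t)\big] A + \beta A^3 = 0,
$$

with the parametric drive strength $h$ set by the magnetic field amplitude through the magneto-elastic coupling.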
Abstract:
A novel peptide containing a single disulfide bond, CIWPWC (Vi804), has been isolated and characterised from the venom of the marine cone snail Conus virgo. A precursor polypeptide sequence derived from complementary DNA, corresponding to the M-superfamily conotoxins, has been identified. The identity of the synthetic and natural peptide sequences has been established. A detailed analysis of the conformation in solution is reported for Vi804 and a synthetic analogue, (D)W3-Vi804, in which Trp3 is replaced by its D-enantiomer, in order to establish the structure of the novel WPW motif, which occurs in the context of a 20-membered macrocyclic disulfide. Vi804 exists exclusively as the cis W3-P4 conformer in water and methanol, whereas (D)W3-Vi804 occurs exclusively as the trans conformer. NMR spectra revealed a W3-P4 type VI turn in Vi804 and a type II turn in the analogue peptide, (D)W3-Vi804. The extremely high-field chemical shifts of the proline ring protons, together with specific nuclear Overhauser effects, are used to establish a conformation in which the proline ring is sandwiched between the flanking Trp residues, which emphasises a stabilising role for the aromatic-proline interactions, mediated predominantly by dispersion forces.
Abstract:
This paper reports the first observations of a transition in recirculation pattern from an open-bubble-type axisymmetric vortex breakdown to a partially open bubble mode, through an intermediate, critical regime of conical sheet formation, in an unconfined, co-axial, isothermal swirling flow. This time-mean transition is studied for two distinct flow modes, which are characterized based on the modified Rossby number (Ro_m), i.e., Ro_m ≤ 1 and Ro_m > 1. Flow modes with Ro_m ≤ 1 are observed to first undergo cone-type breakdown and then reach a partially open bubble state as the geometric swirl number (S_G) is increased by ~20% and ~40%, respectively, from the baseline open-bubble state. However, flow modes with Ro_m > 1 fail to undergo such a sequential transition. This distinct behavior is explained based on the physical significance associated with Ro_m and the swirl momentum factor (ξ). In essence, ξ represents the ratio of the angular momentum distributed across the flow structure to that distributed from the central axis to the edge of the vortex core. It is observed that ξ increases by ~100% in the critical swirl number band where conical breakdown occurs, as compared to its magnitude in the S_G regime where the open bubble state is seen. This results from the fact that flow modes with Ro_m ≤ 1 are dominated by the radial pressure gradient due to swirl/rotational effects, as compared to the radial pressure deficit arising from entrainment (due to the presence of a co-stream). Consequently, the imparted swirl tends to penetrate easily towards the central axis, causing the flow to spread laterally and finally undergo conical sheet breakdown. However, flow modes with Ro_m > 1 are dominated by the pressure deficit due to the entrainment effect. This blocks the radial inward penetration of the imparted angular momentum, thus preventing the lateral spread of these flow modes. As such, these structures fail to undergo the cone mode of vortex breakdown, which is substantiated by a mere 30%-40% rise in ξ in the critical swirl number range.
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and thus fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value receives contributions from different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for the liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard and proceeding to the performance-based method for evaluating the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from the two methods is presented. For an area of 220 km² in Bangalore city, the site class was assessed based on a large number of borehole data and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from the PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was carried out based on 450 borehole data sets obtained in the study area. The results from the CPT data match well with the results obtained from a similar analysis with the SPT data.
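In the spirit of performance-based formulations (symbols here are illustrative; cf. Kramer-and-Mayfield-type approaches), the mean annual rate at which the factor of safety against liquefaction $FS_L$ falls below a target $FS^{*}$ accumulates contributions over the full hazard,

$$
\Lambda_{FS^{*}} \;=\; \sum_{j=1}^{N_m} \sum_{i=1}^{N_a} P\big[\, FS_L < FS^{*} \;\big|\; a_i, m_j \,\big]\; \Delta\lambda_{a_i, m_j},
$$

and the liquefaction return period is its reciprocal, $1/\Lambda_{FS^{*}}$.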