108 results for Hausdorff Metric
Abstract:
In the last decade, there has been tremendous interest in graphene transistors. Their greatest advantage for CMOS nanoelectronics applications is that graphene is compatible with planar CMOS technology and potentially offers excellent short-channel properties. Because of the zero bandgap, however, the MOSFET cannot be turned off efficiently, and hence the typical on-current to off-current ratio (Ion/Ioff) has been less than 10. Several techniques have been proposed to open a bandgap in graphene. It has been demonstrated, both theoretically and experimentally, that graphene nanoribbons (GNRs) exhibit a bandgap that is inversely proportional to their width. GNRs about 20 nm wide have bandgaps in the range of 100 meV, but it is very difficult to obtain GNRs with well-defined edges. An alternative technique to open the bandgap is to use bilayer graphene (BLG) with an asymmetric bias applied in the direction perpendicular to its plane. Another important CMOS metric, the subthreshold slope, is also limited by the inability to turn off the transistor. These devices could nevertheless be attractive for RF CMOS applications, although even for analog and RF applications the non-saturating behavior of the drain current can be an issue. Although some studies have reported current saturation, the mechanisms are still not very clear. In this talk we present some of our recent findings, based on simulations and experiments, and propose possible solutions to obtain a high on-current to off-current ratio. A detailed study of high-field transport in graphene transistors, relevant for analog and RF applications, will also be presented.
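The inverse-width bandgap scaling mentioned above can be sketched numerically. The simple model Eg ≈ α/W and the constant α ≈ 2 eV·nm are assumptions chosen only to reproduce the numbers quoted in the abstract (a ~20 nm ribbon giving a ~100 meV gap); they are not taken from the talk itself.

```python
def gnr_bandgap_ev(width_nm: float, alpha_ev_nm: float = 2.0) -> float:
    """Estimate the GNR bandgap (in eV) from the ribbon width (in nm),
    using the illustrative inverse-width model Eg = alpha / W."""
    if width_nm <= 0:
        raise ValueError("width must be positive")
    return alpha_ev_nm / width_nm

# A 20 nm ribbon gives roughly a 0.1 eV (100 meV) gap under this model.
print(gnr_bandgap_ev(20.0))  # 0.1
```

Narrower ribbons open larger gaps under this model, which is why sub-10 nm GNRs are of interest despite the edge-definition difficulty noted above.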
Abstract:
Opportunistic selection is a practically appealing technique used in multi-node wireless systems to maximize throughput, implement proportional fairness, and so on. However, selection is challenging because information about a node's channel gains is often available only locally at each node, not centrally. We propose a novel multiple-access-based distributed selection scheme that combines the best features of the timer scheme, which requires minimal feedback but does not always guarantee successful selection, and the fast splitting scheme, which requires more feedback but guarantees successful selection. The proposed scheme's design explicitly accounts for feedback time overheads, unlike the conventional splitting scheme, and guarantees selection of the user with the highest metric, unlike the timer scheme. We analyze and minimize the average time, including feedback, required by the scheme to select. Even with feedback overheads, the proposed scheme is scalable and considerably faster than several schemes proposed in the literature. Furthermore, its gains increase as the feedback overhead increases.
Abstract:
This paper analyzes the error exponents in Bayesian decentralized spectrum sensing, i.e., the detection of occupancy of the primary spectrum by a cognitive radio, with the probability of error as the performance metric. At the individual sensors, the error exponents of a Central Limit Theorem (CLT) based detection scheme are analyzed. At the fusion center, a K-out-of-N rule is employed to arrive at the overall decision. It is shown that, in the presence of fading, for a fixed number of sensors, the error exponents with respect to the number of observations at both the individual sensors and the fusion center are zero. This motivates the development of the error exponent with a certain probability as a novel metric for comparing different detection schemes in the presence of fading. The metric is useful, for example, in answering the question of whether to sense a pilot tone in a narrow band (and suffer Rayleigh fading) or to sense the entire wide-band signal (and suffer log-normal shadowing), in terms of error exponent performance. The error exponents with a certain probability at both the individual sensors and the fusion center are derived, under both Rayleigh and log-normal shadow fading. Numerical results illustrate and provide a visual feel for the theoretical expressions obtained.
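The K-out-of-N fusion rule used at the fusion center is simple to state in code. A minimal sketch, with illustrative decision values that are not from the paper: each sensor reports a binary local decision, and the fusion center declares the primary user present when at least K of the N sensors say so.

```python
def fuse_k_out_of_n(local_decisions, k: int) -> bool:
    """Fusion-center decision for a K-out-of-N rule: declare the band
    occupied iff at least k of the local binary decisions are positive."""
    return sum(bool(d) for d in local_decisions) >= k

decisions = [1, 0, 1, 1, 0]              # local decisions from N = 5 sensors
print(fuse_k_out_of_n(decisions, k=3))   # True: 3 of 5 sensors detected
```

Setting k = 1 recovers the OR rule and k = N the AND rule, the two extremes between which the K-out-of-N family interpolates.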
Abstract:
We propose a set of metrics that evaluate the uniformity, sharpness, continuity, noise, stroke width variance, pulse width ratio, transient pixel density, entropy, and variance of components to quantify the quality of a document image. The measures are intended to be used in any optical character recognition (OCR) engine to estimate, a priori, the expected performance of the OCR. The suggested measures have been evaluated on many document images in different scripts. The quality of each document image is manually annotated by users to create a ground truth, and the idea is to correlate the values of the measures with the user-annotated data. If a calculated measure matches the annotated description, the metric is accepted; otherwise it is rejected. Of the set of metrics proposed, some are accepted and the rest are rejected. We have defined metrics that are easy to estimate. The metrics proposed in this paper are based on feedback from home-grown OCR engines for Indic (Tamil and Kannada) languages. The metrics are independent of the script, and depend only on the quality and age of the paper and the printing. Experiments and results for each proposed metric are discussed. Actual recognition of the printed text is not performed to evaluate the proposed metrics. Sometimes a document image containing broken characters is rated as a good document image by the evaluated metrics; this remains an unsolved challenge. The proposed measures work on gray-scale document images and fail to provide reliable information on binarized document images.
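One of the measures listed above is the entropy of the gray-scale image. The abstract does not give the authors' exact formulation, so the following is only a minimal sketch of one standard choice: the Shannon entropy, in bits, of the gray-level histogram.

```python
import math
from collections import Counter

def grayscale_entropy(pixels) -> float:
    """Shannon entropy (bits) of a flat sequence of 0-255 gray values."""
    counts = Counter(pixels)
    n = len(pixels)
    # H = -sum p * log2(p) over the observed gray levels
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

# A flat (single-valued) image carries zero entropy; an even 50/50
# black-and-white image carries exactly 1 bit per pixel.
print(grayscale_entropy([0] * 50 + [255] * 50))  # 1.0
```

Low entropy flags near-blank pages, while very high entropy can indicate heavy noise; either extreme predicts poor OCR performance.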
Abstract:
We address the problem of speech enhancement using a risk-estimation approach. In particular, we propose the use of Stein's unbiased risk estimator (SURE) for solving the problem. The need for a suitable finite-sample risk estimator arises because the actual risks invariably depend on the unknown ground truth. We consider the popular mean-squared error (MSE) criterion first, and then compare it against the perceptually motivated Itakura-Saito (IS) distortion, by deriving unbiased estimators of the corresponding risks. We use the generalized SURE (GSURE) development recently proposed by Eldar for the MSE. We consider dependent observation models from the exponential family with an additive noise model, and derive an unbiased estimator for the risk corresponding to the IS distortion, which is non-quadratic. This serves to address the speech enhancement problem in a more general setting. Experimental results illustrate that the IS metric is efficient in suppressing musical noise, which affects MSE-enhanced speech. However, in terms of global signal-to-noise ratio (SNR), the minimum-MSE solution gives better results.
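For readers unfamiliar with SURE, a minimal, generic illustration (not the GSURE derivation of the paper) is the classical SURE expression for soft thresholding under i.i.d. Gaussian noise of known variance sigma^2: SURE(t) = -N*sigma^2 + sum(min(|y_i|, t)^2) + 2*sigma^2 * #{i : |y_i| > t}, which estimates the MSE risk using only the noisy observations y.

```python
import numpy as np

def sure_soft_threshold(y: np.ndarray, t: float, sigma: float) -> float:
    """Unbiased estimate of the MSE risk of soft thresholding y at level t,
    for y = clean + N(0, sigma^2) noise. No access to the clean signal."""
    n = y.size
    return float(-n * sigma**2
                 + np.sum(np.minimum(np.abs(y), t) ** 2)
                 + 2 * sigma**2 * np.count_nonzero(np.abs(y) > t))

y = np.array([0.5, -2.0, 1.0])
print(sure_soft_threshold(y, 1.0, sigma=1.0))  # 1.25
```

Minimizing SURE(t) over a grid of thresholds selects a denoising parameter without knowing the ground truth, which is exactly the role the risk estimators play in the enhancement problem above.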
Abstract:
It is increasingly being recognized that resting-state brain connectivity derived from functional magnetic resonance imaging (fMRI) data is an important marker of brain function in both healthy and clinical populations. Though linear correlation has been extensively used to characterize brain connectivity, it is limited to detecting first-order dependencies. In this study, we propose a framework wherein phase synchronization (PS) between brain regions is characterized using a new metric, "correlation between probabilities of recurrence" (CPR), followed by graph-theoretic analysis of the ensuing networks. We applied this method to resting-state fMRI data obtained from human subjects with and without administration of the anesthetic propofol. Our results showed decreased PS during anesthesia and a biologically more plausible community structure using CPR rather than linear correlation. We conclude that CPR provides an attractive nonparametric method for modeling interactions in brain networks, as compared to standard correlation, for obtaining physiologically meaningful insights about brain function.
Abstract:
Fast and efficient channel estimation is key to achieving high data rate performance in mobile and vehicular communication systems, where the channel is fast time-varying. To this end, this work proposes and optimizes channel-dependent training schemes for reciprocal Multiple-Input Multiple-Output (MIMO) channels with beamforming (BF) at the transmitter and receiver. First, assuming that Channel State Information (CSI) is available at the receiver, a channel-dependent Reverse Channel Training (RCT) signal is proposed that enables efficient estimation of the BF vector at the transmitter with a minimum training duration of only one symbol. In contrast, conventional orthogonal training requires a minimum training duration equal to the number of receive antennas. A tight approximation to a lower bound on the system capacity is derived and used as a performance metric to optimize the parameters of the RCT. Next, assuming that CSI is available at the transmitter, a channel-dependent forward-link training signal is proposed and its power and duration are optimized with respect to an approximate capacity lower bound. Monte Carlo simulations illustrate the significant performance improvement offered by the proposed channel-dependent training schemes over existing channel-agnostic orthogonal training schemes.
Abstract:
Wilking has recently shown that one can associate a Ricci flow invariant cone C(S) of curvature operators, which are nonnegative in a suitable sense, to every invariant subset S. In this article we show that if S is an invariant subset such that C(S) is closed, and C+(S) denotes the cone of curvature operators which are positive in the appropriate sense, then one of two possibilities holds: (a) the connected sum of any two Riemannian manifolds with curvature operators in C+(S) also admits a metric with curvature operator in C+(S); (b) the normalized Ricci flow on any compact Riemannian manifold with curvature operator in C+(S) converges to a metric of constant positive sectional curvature. We also point out that if S is an arbitrary subset, then C(S) is contained in the cone of curvature operators with nonnegative isotropic curvature.
Abstract:
Protein structure space is believed to consist of a finite set of discrete folds, unlike protein sequence space, which is astronomically large; this indicates that proteins from the available sequence space are likely to adopt one of the many folds already observed. In spite of extensive sequence-structure correlation data, protein structure prediction remains an open question, with researchers having tried different approaches (experimental as well as computational). One of the challenges of protein structure prediction is to identify the native protein structure from a milieu of decoys/models. In this work, a rigorous investigation of Protein Structure Networks (PSNs) has been performed to detect native structures from decoys/models. Ninety-four parameters obtained from network studies have been optimally combined with Support Vector Machines (SVMs) to derive a general metric that distinguishes decoys/models from native protein structures with an accuracy of 94.11%. Recently, for the first time in the literature, we showed that PSNs have the capability to distinguish native proteins from decoys. A major difference between the present work and the previous study is the exploration of transition profiles at different strengths of non-covalent interactions, and the SVM has indeed identified this as an important parameter. Additionally, the SVM-trained algorithm is also applied to the recent CASP10 predicted models. The novelty of the network approach is that it is based on general network properties of native protein structures, so that a given model can be assessed independent of any reference structure. Thus, the approach presented in this paper can be valuable in validating predicted structures. A web server has been developed for this purpose and is freely available at http://vishgraph.mbu.iisc.ernet.in/GraProStr/PSN-QA.html.
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Delta s of the transmission by the best node. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but only its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance with ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
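The timer mechanism described above can be sketched in a few lines. The inverse-metric mapping used here is only illustrative (the paper derives an optimal discrete mapping, which this is not): each node maps its metric to a timer via a decreasing function, so the best node's timer expires first, and selection fails if another timer expires within Delta seconds of the earliest one.

```python
def timer(metric: float, t_max: float = 10.0) -> float:
    """Decreasing metric-to-timer mapping: higher metric -> earlier expiry.
    Illustrative choice, not the optimal mapping of the paper."""
    return t_max / (1.0 + metric)

def select_best(metrics, delta: float):
    """Index of the selected node, or None if the runner-up's timer
    expires within delta seconds of the best node's timer (collision)."""
    expiries = sorted((timer(m), i) for i, m in enumerate(metrics))
    (t0, best), (t1, _) = expiries[0], expiries[1]
    return best if (t1 - t0) > delta else None

metrics = [0.4, 2.5, 1.1]
print(select_best(metrics, delta=0.1))  # 1: node 1 has the highest metric
```

The failure mode the paper addresses is visible here: if two metrics are close, their timers expire within delta of each other and the selection returns None.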
Abstract:
An opportunistic, rate-adaptive system exploits multi-user diversity by selecting the best node, which has the highest channel power gain, and adapting the data rate to the selected node's channel gain. Since channel knowledge is local to a node, we propose using a distributed, low-feedback timer backoff scheme to select the best node. It uses a mapping from the channel gain, or, in general, a real-valued metric, to a timer value. The mapping is such that the timers of nodes with higher metrics expire earlier. Our goal is to maximize the system throughput when rate adaptation is discrete, as is the case in practice. To improve throughput, we use a pragmatic selection policy, in which even a node other than the best node can be selected. We derive several novel, insightful results about the optimal mapping and develop an algorithm to compute it. These results bring out the interrelationship between the discrete rate adaptation rule, the optimal mapping, and the selection policy. We also extensively benchmark the performance of the optimal mapping against several timer and opportunistic multiple access schemes considered in the literature, and demonstrate that the developed scheme is effective in many regimes of interest.
Abstract:
Delaunay and Gabriel graphs are widely studied geometric proximity structures. Motivated by applications in wireless routing, relaxed versions of these graphs, known as Locally Delaunay Graphs (LDGs) and Locally Gabriel Graphs (LGGs), have been proposed. We propose another generalization of LGGs, called Generalized Locally Gabriel Graphs (GLGGs), for the setting in which certain edges are forbidden in the graph. Unlike a Gabriel graph, there is no unique LGG or GLGG for a given point set, because no edge is necessarily included or excluded. This property allows us to choose an LGG/GLGG that optimizes a parameter of interest in the graph. We show that computing an edge-maximum GLGG for a given problem instance is NP-hard and also APX-hard. We also show that computing an LGG on a given point set with dilation ≤ k is NP-hard. Finally, we give an algorithm to verify whether a given geometric graph G = (V, E) is a valid LGG.
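For reference, the classical Gabriel graph underlying the relaxed variants above admits edge (u, v) iff the closed disk with segment uv as diameter contains no other input point; a point w lies in that disk exactly when d(u,w)^2 + d(v,w)^2 <= d(u,v)^2. A minimal brute-force sketch for 2-D points (the sample coordinates are illustrative):

```python
def d2(p, q):
    """Squared Euclidean distance between 2-D points p and q."""
    return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

def is_gabriel_edge(u, v, points) -> bool:
    """True iff no point other than u, v lies in the closed disk
    having segment uv as its diameter."""
    return all(d2(u, w) + d2(v, w) > d2(u, v)
               for w in points if w != u and w != v)

pts = [(0, 0), (2, 0), (1, 0.2), (5, 5)]
print(is_gabriel_edge((0, 0), (2, 0), pts))    # False: (1, 0.2) is in the disk
print(is_gabriel_edge((2, 0), (1, 0.2), pts))  # True
```

The locally Gabriel relaxations weaken this global emptiness condition, which is why, as noted above, an LGG/GLGG is not unique for a given point set.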
Abstract:
We consider a scenario where the communication nodes in a sensor network have limited energy, and the objective is to maximize the aggregate bits transported from sources to respective destinations before network partition due to node deaths. This performance metric is novel, and captures the useful information that a network can provide over its lifetime. The optimization problem that results from our approach is nonlinear; however, we show that it can be converted to a Multicommodity Flow (MCF) problem that yields the optimal value of the metric. Subsequently, we compare the performance of a practical routing strategy, based on Node Disjoint Paths (NDPs), with the ideal corresponding to the MCF formulation. Our results indicate that the performance of NDP-based routing is within 7.5% of the optimal.
Abstract:
Space shift keying (SSK) is a special case of spatial modulation (SM), a relatively new modulation technique that is becoming recognized as attractive in multi-antenna communications. Our new contribution in this paper is the analytical derivation of an exact closed-form expression for the end-to-end bit error rate (BER) performance of SSK in decode-and-forward (DF) cooperative relaying. An incremental relaying (IR) scheme with selection combining (SC) at the destination is considered. In SSK, since the information is carried by the transmit antenna index, traditional selection combining methods based on instantaneous SNRs cannot be directly used. To overcome this problem, we propose to select between the direct and relayed paths based on the Euclidean distance between the columns of the channel matrix. With this selection metric, an exact analytical expression for the end-to-end BER is derived in closed form. Analytical results are shown to match simulation results.
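The intuition behind the column-distance metric above is that in SSK the receiver must distinguish which antenna transmitted, so a path whose channel columns are farther apart is easier to detect over. A minimal sketch for two transmit antennas, with illustrative channel values that are not from the paper:

```python
import numpy as np

def column_distance(H: np.ndarray) -> float:
    """Euclidean distance between the two columns of channel matrix H.
    Larger separation -> SSK antenna indices are easier to distinguish."""
    return float(np.linalg.norm(H[:, 0] - H[:, 1]))

def select_path(H_direct: np.ndarray, H_relayed: np.ndarray) -> str:
    """Pick the path whose channel columns are farther apart."""
    return ("direct" if column_distance(H_direct) >= column_distance(H_relayed)
            else "relayed")

H_d = np.array([[1.0, 0.9], [0.5, 0.6]])   # nearly identical columns
H_r = np.array([[1.0, -1.0], [0.3, 0.8]])  # well-separated columns
print(select_path(H_d, H_r))               # relayed
```

Unlike SNR-based combining, this rule needs only the channel matrices of the two paths, which matches the problem the abstract identifies with traditional SC for SSK.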
Abstract:
Energy harvesting sensor (EHS) nodes provide an attractive and green solution to the problem of limited lifetime of wireless sensor networks (WSNs). Unlike a conventional node that uses a non-rechargeable battery and dies once it runs out of energy, an EHS node can harvest energy from the environment and replenish its rechargeable battery. We consider hybrid WSNs that comprise both EHS and conventional nodes; these arise when legacy WSNs are upgraded or due to EHS deployment cost issues. We compare conventional and hybrid WSNs on the basis of a new and insightful performance metric called the k-outage duration, which captures the inability of the nodes to transmit data either due to lack of sufficient battery energy or due to wireless fading. The metric overcomes the problem of defining lifetime in networks with EHS nodes, which never die but are occasionally unable to transmit for lack of sufficient battery energy. It also accounts for the effect of wireless channel fading on the ability of the WSN to transmit data. We develop two novel, tight, and computationally simple bounds for evaluating the k-outage duration. Our results show that increasing the number of EHS nodes has a markedly different effect on the k-outage duration than increasing the number of conventional nodes.