973 results for Homography constraint
Abstract:
Using all-atom molecular dynamics simulations, we report spontaneous unzipping and strong binding of small interfering RNA (siRNA) on graphene. Our dispersion-corrected density functional theory based calculations suggest that nucleosides of RNA have stronger attractive interactions with graphene than DNA residues do. These stronger interactions force the double-stranded siRNA to spontaneously unzip and bind to the graphene surface. Unzipping always nucleates at one end of the siRNA and propagates to the other end after a few base pairs are unzipped. While both ends unzip, the middle part remains double stranded because of a torsional constraint. Unzipping probability distributions fitted to a single-exponential function give an unzipping time (tau) on the order of a few nanoseconds, which decreases exponentially with temperature. From the temperature variation of the unzipping time we estimate the energy barrier to unzipping. (C) 2012 American Institute of Physics. [http://dx.doi.org/10.1063/1.4742189]
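As a hedged illustration of the barrier-extraction step (not the paper's data or analysis code), an Arrhenius-type relation tau(T) = tau0 * exp(E_b / (k_B * T)) lets one recover the barrier E_b from the slope of ln(tau) versus 1/T; the temperatures and unzipping times below are made up:

```python
# Hypothetical Arrhenius-style extraction of an unzipping barrier from the
# temperature dependence of the unzipping time: tau(T) = tau0 * exp(E_b / (k_B * T)).
# All data points are invented for illustration.
import numpy as np

k_B = 0.0019872  # Boltzmann constant, kcal/(mol K)

T = np.array([300.0, 320.0, 340.0, 360.0])  # temperatures (K), hypothetical
tau = np.array([8.0, 4.5, 2.6, 1.6])        # unzipping times (ns), hypothetical

# ln(tau) = ln(tau0) + E_b / (k_B * T) is linear in 1/T
slope, intercept = np.polyfit(1.0 / T, np.log(tau), 1)
E_b = slope * k_B         # barrier height, kcal/mol
tau0 = np.exp(intercept)  # prefactor, ns

print(f"estimated barrier: {E_b:.2f} kcal/mol, prefactor: {tau0:.3g} ns")
```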
Abstract:
We report on the status of supersymmetric seesaw models in the light of recent experimental results on mu -> e + gamma, theta(13), and the light Higgs mass at the LHC. SO(10)-like relations are assumed for the neutrino Dirac Yukawa couplings, and two cases of mixing are considered: one large, PMNS-like, and one small, CKM-like. It is shown that for the large-mixing case, only a small range of parameter space with moderate tan beta is still allowed. This remaining region can be ruled out by an order-of-magnitude improvement in the current limit on BR(mu -> e + gamma). We also explore a model with non-universal Higgs mass boundary conditions at the high scale. It is shown that the renormalization-group-induced flavor-violating slepton mass terms are highly sensitive to the Higgs boundary conditions. Depending on the choice of parameters, they can lead either to strong enhancements or to cancellations within the flavor-violating terms. Such cancellations might relax the severe constraints imposed by lepton flavor violation compared to mSUGRA. Nevertheless, for a large region of parameter space the predicted rates lie within the reach of future experiments once the light Higgs mass constraint is imposed. We also update the potential of ongoing and future experimental searches for lepton flavor violation to constrain the supersymmetric parameter space.
Abstract:
Bidirectional relaying, where a relay helps two user nodes exchange equal-length binary messages, has been an active area of recent research. A popular strategy involves a modified Gaussian MAC, where the relay decodes the XOR of the two messages using the naturally occurring sum of symbols simultaneously transmitted by the user nodes. In this work, we consider the Gaussian MAC in bidirectional relaying with an additional secrecy constraint for protection against an honest but curious relay. The constraint is that, while the relay should decode the XOR, it should remain fully ignorant of the individual messages of the users. We exploit the symbol addition that occurs in a Gaussian MAC to design explicit strategies that achieve perfect independence between the received symbols and the individual transmitted messages. Our results hold for a more general scenario where the messages at the two user nodes come from a finite Abelian group G, and the relay must decode the sum within G of the two messages. We provide a lattice coding strategy and study optimal rate versus average power trade-offs for asymptotically large dimensions.
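The algebraic property underlying such sum-decoding secrecy can be checked directly: over any finite Abelian group, if one message is uniform and independent of the other, their group sum is uniform and independent of each individual message. A minimal sketch over Z_m (a toy check, not the paper's lattice strategy):

```python
# Toy check (not the paper's lattice strategy): over Z_m, if M2 is uniform and
# independent of M1, then S = (M1 + M2) mod m satisfies P(S = s | M1 = a) = 1/m
# for every a and s, so an observer of S alone learns nothing about M1 (or,
# symmetrically, M2).
from fractions import Fraction

m = 4  # the group Z_4; any finite Abelian group behaves the same way
p_m2 = {b: Fraction(1, m) for b in range(m)}  # M2 uniform on Z_m

for a in range(m):  # condition on each possible value of M1
    cond = [sum(p_m2[b] for b in range(m) if (a + b) % m == s) for s in range(m)]
    assert all(c == Fraction(1, m) for c in cond)

print("P(S | M1) is uniform for every M1: the sum reveals nothing about the individual messages.")
```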
Abstract:
Our work is motivated by geographical forwarding of sporadic alarm packets to a base station in a wireless sensor network (WSN), where the nodes sleep-wake cycle periodically and asynchronously. We seek to develop local forwarding algorithms that can be tuned so as to trade off the end-to-end delay against a total cost, such as the hop count or total energy. Our approach is to solve, at each forwarding node en route to the sink, the local forwarding problem of minimizing the one-hop waiting delay subject to a lower-bound constraint on a suitable reward offered by the next-hop relay; the constraint serves to tune the tradeoff. The reward metric used for the local problem is based on the end-to-end total cost objective (for instance, when the total cost is the hop count, we use the progress toward the sink made by a relay as the reward). The forwarding node, to begin with, is uncertain about the number of relays, their wake-up times, and the reward values, but knows the probability distributions of these quantities. At each relay wake-up instant, when a relay reveals its reward value, the forwarding node's problem is to forward the packet or to wait for further relays to wake up. In terms of the operations research literature, our work can be considered a variant of the asset selling problem. We formulate our local forwarding problem as a partially observable Markov decision process (POMDP) and obtain inner and outer bounds for the optimal policy. Motivated by the computational complexity of the policies derived from these bounds, we formulate an alternate simplified model, the optimal policy for which is a simple threshold rule. We provide simulation results to compare the performance of the inner and outer bound policies against the simple policy, and also against the optimal policy when the source knows the exact number of relays. Observing the good performance and the ease of implementation of the simple policy, we apply it to our motivating problem, i.e., local geographical routing of sporadic alarm packets in a large WSN. We compare the end-to-end performance (i.e., average total delay and average total cost) obtained by the simple policy, when used for local geographical forwarding, against that obtained by the globally optimal forwarding algorithm proposed by Kim et al. [1].
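To convey the flavor of the simple threshold rule (a hedged sketch; the wake-up and reward distributions and all parameters below are made up, not the paper's model), the forwarding node forwards to the first awake relay whose revealed reward clears a threshold, and the threshold tunes the delay-versus-reward trade-off:

```python
# Hypothetical threshold forwarding rule: relays wake up at random instants with
# random rewards; forward to the first relay whose reward clears the threshold,
# falling back to the best relay seen if none does.  All distributions are made up.
import random

def forward_with_threshold(threshold, n_relays=5, horizon=1.0, seed=None):
    rng = random.Random(seed)
    wakeups = sorted((rng.uniform(0, horizon), rng.random()) for _ in range(n_relays))
    best = None
    for t, reward in wakeups:
        if reward >= threshold:
            return t, reward          # forward immediately: delay t, this reward
        if best is None or reward > best[1]:
            best = (t, reward)
    return horizon, best[1]           # waited out the cycle; use best relay seen

results = [forward_with_threshold(0.7, seed=s) for s in range(10000)]
delays, rewards = zip(*results)
print(f"mean delay {sum(delays)/len(delays):.3f}, mean reward {sum(rewards)/len(rewards):.3f}")
```

A higher threshold raises the mean reward at the cost of a longer mean wait, which is exactly the tunable trade-off described above.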
Abstract:
Dynamic Voltage and Frequency Scaling (DVFS) is a very effective tool for designing trade-offs between energy and performance. In this paper, we use a formal Petri net based program performance model that directly captures both application and system properties to find energy-efficient DVFS settings for CMP systems that satisfy a given performance constraint for SPMD multithreaded programs. Experimental evaluation shows that we achieve significant energy savings while meeting the performance constraints.
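As a hypothetical sketch of the selection problem being solved (the paper's Petri net performance model is not reproduced here), a DVFS controller can pick the lowest-energy setting among those whose predicted execution time meets the performance constraint; the frequency/time/energy triples below are invented:

```python
# Hypothetical DVFS setting selection: among settings predicted to meet the
# time budget, pick the one with the lowest energy.  All numbers are invented;
# in the paper the predictions come from a Petri net based performance model.
settings = [  # (frequency_GHz, predicted_time_s, predicted_energy_J)
    (1.0, 12.0, 30.0),
    (1.5,  8.5, 42.0),
    (2.0,  6.8, 61.0),
    (2.5,  6.1, 88.0),
]

def pick_setting(settings, time_budget):
    feasible = [s for s in settings if s[1] <= time_budget]
    if not feasible:
        return max(settings, key=lambda s: s[0])  # nothing feasible: run fastest
    return min(feasible, key=lambda s: s[2])      # cheapest feasible setting

freq, t, e = pick_setting(settings, time_budget=9.0)
print(f"chosen {freq} GHz: {t} s, {e} J")  # -> 1.5 GHz: 8.5 s, 42.0 J
```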
Abstract:
In the design of practical web page classification systems, one often encounters a situation in which the labeled training set is created by choosing some examples from each class, but the class proportions in this set are not the same as those in the test distribution to which the classifier will actually be applied. The problem is made worse when the amount of training data is also small. In this paper we explore and adapt binary SVM methods that make use of unlabeled data from the test distribution, viz., Transductive SVMs (TSVMs) and expectation regularization/constraint (ER/EC) methods, to deal with this situation. We empirically show that when the labeled training data is small, a TSVM designed using the class ratio tuned by minimizing the loss on the labeled set yields the best performance; its performance is good even when the deviation between the class ratios of the labeled training set and the test set is quite large. When the labeled training data is sufficiently large, an unsupervised Gaussian mixture model can be used to get a very good estimate of the class ratio in the test set; moreover, when this estimate is used, both TSVM and ER/EC give their best possible performance, with TSVM coming out superior. The ideas in the paper can easily be extended to multi-class SVMs and MaxEnt models.
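As a hedged sketch of the class-ratio estimation step (synthetic one-dimensional data stand in for real document features), fitting an unsupervised two-component Gaussian mixture to unlabeled test-distribution data yields mixing weights that estimate the class proportions:

```python
# Hypothetical class-ratio estimation: fit a 2-component Gaussian mixture to an
# unlabeled pool drawn from the test distribution and read the mixing weights
# off as the estimated class proportions.  The 1-D synthetic data are made up.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Unlabeled pool: 30% of one class, 70% of the other (unknown to the estimator).
unlabeled = np.concatenate([rng.normal(-2.0, 1.0, 300),
                            rng.normal(+2.0, 1.0, 700)]).reshape(-1, 1)

gmm = GaussianMixture(n_components=2, random_state=0).fit(unlabeled)
print("estimated class proportions:", np.round(gmm.weights_, 3))
```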
Abstract:
In this paper, we study duty cycling and power management in a network of energy harvesting sensor (EHS) nodes. We consider a one-hop network, where K EHS nodes send data to a destination over a wireless fading channel. The goal is to find the optimum duty cycling and power scheduling across the nodes that maximizes the average sum data rate, subject to energy neutrality at each node. We adopt a two-stage approach to simplify the problem. In the inner stage, we solve the problem of optimal duty cycling of the nodes, subject to the short-term power constraint set by the outer stage. The outer stage sets the short-term power constraints on the inner stage to maximize the long-term expected sum data rate, subject to long-term energy neutrality at each node. Albeit suboptimal, our solutions turn out to have a surprisingly simple form: the duty cycle allotted to each node by the inner stage is simply the ratio of that node's allotted power to the total allotted power. The sum power allotted is a clipped version of the sum harvested power across all the nodes. The average sum throughput thus ultimately depends only on the sum harvested power and its statistics. We illustrate the performance improvement offered by the proposed solution over naive schemes via Monte Carlo simulations.
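The stated closed form is simple enough to transcribe directly; in this hypothetical numerical illustration (harvested powers and the clip level are made up), the sum allotted power is the clipped sum harvested power and each node's duty cycle is its fractional share of the total:

```python
# Numerical illustration of the solution's simple form (all numbers made up):
# sum allotted power = clipped sum harvested power; each node's duty cycle is
# its allotted power as a fraction of the total allotted power.
import numpy as np

harvested = np.array([0.8, 0.3, 1.4, 0.5])  # per-node harvested power (W), hypothetical
p_max = 2.5                                  # clip level from the outer stage, hypothetical

sum_allotted = min(harvested.sum(), p_max)   # clipped sum power
allotted = harvested * (sum_allotted / harvested.sum())
duty_cycles = allotted / sum_allotted        # fraction of the slot for each node

print("duty cycles:", np.round(duty_cycles, 3), "(sum =", duty_cycles.sum(), ")")
```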
Abstract:
We study the tradeoff between the average error probability and the average queueing delay of messages that randomly arrive at the transmitter of a point-to-point discrete memoryless channel that uses variable-rate, fixed-codeword-length random coding. Bounds on the exponential decay rate of the average error probability with average queueing delay in the regime of large average delay are obtained. Upper and lower bounds on the optimal average delay for a given average error probability constraint are presented. We then formulate a constrained Markov decision problem for characterizing the transmission rate as a function of queue size given an average error probability constraint. Using a Lagrange multiplier, the constrained Markov decision problem is converted into an unconstrained average-cost Markov decision problem. A simple heuristic policy is proposed which approximately achieves the optimal average cost.
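As a hedged sketch of that conversion (a toy queue with made-up error probabilities, not the paper's channel model), the constrained problem "minimize average delay subject to an average error-probability constraint" is scalarized into a stage cost c(q, r) = q + lambda * Pe(r) and solved by relative value iteration:

```python
# Toy Lagrangian scalarization of a constrained queueing MDP: stage cost
# c(q, r) = q + lam * Pe(r), where q is the queue size (a delay proxy), r the
# transmission rate, and Pe(r) a made-up rate-dependent error probability.
import numpy as np

Q, RATES = 10, [0, 1, 2]            # queue sizes 0..Q-1, candidate rates
ARRIVAL = 0.5                       # Bernoulli arrival probability (made up)
Pe = {0: 0.0, 1: 0.02, 2: 0.15}     # error probability grows with rate (made up)
lam = 20.0                          # Lagrange multiplier; swept in practice

def step(q, r):                     # (probability, next queue size) pairs
    q_next = q - min(q, r)
    return [(1 - ARRIVAL, q_next), (ARRIVAL, min(q_next + 1, Q - 1))]

def q_value(q, r, h):
    return q + lam * Pe[r] + sum(p * h[qn] for p, qn in step(q, r))

h = np.zeros(Q)
for _ in range(5000):               # relative value iteration
    h_new = np.array([min(q_value(q, r, h) for r in RATES) for q in range(Q)])
    h_new -= h_new[0]               # renormalize to keep values bounded
    if np.max(np.abs(h_new - h)) < 1e-10:
        break
    h = h_new

policy = [min(RATES, key=lambda r: q_value(q, r, h)) for q in range(Q)]
print("rate as a function of queue size:", policy)
```

In practice the multiplier lambda is tuned until the resulting policy meets the average error-probability constraint.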
Abstract:
The treewidth of a linear code is the least constraint complexity of any of its cycle-free graphical realizations. This notion provides a useful parametrization of the maximum-likelihood decoding complexity for linear codes. In this paper, we compute exact expressions for the treewidth of maximum distance separable codes, and first- and second-order Reed-Muller codes. These results constitute the only known explicit expressions for the treewidth of algebraic codes.
Abstract:
Eclogites and associated high-pressure (HP) rocks in collisional and accretionary orogenic belts preserve a record of subduction and exhumation, and provide a key constraint on the tectonic evolution of the continents. Most eclogites that formed at high pressures but low temperatures (> 10-11 kbar and 450-650 degrees C) can be interpreted as a result of subduction of cold oceanic lithosphere. A new class of high-temperature (HT) eclogites that formed above 900 degrees C and at 14 to 30 kbar occurs in the deep continental crust, but their geodynamic significance and processes of formation are poorly understood. Here we show that Neoarchaean mafic-ultramafic complexes in the central granulite facies region of the Lewisian in NW Scotland contain HP/HT garnet-bearing granulites (retrogressed eclogites), gabbros, lherzolites, and websterites, and that the HP granulites have garnets that contain inclusions of omphacite. From thermodynamic modeling and compositional isopleths we calculate that peak eclogite-facies metamorphism took place at 24-22 kbar and 1060-1040 degrees C. The geochemical signature of one of the samples (G-21) shows a strong depletion of Eu, indicating magma fractionation at a crustal level. The Sm-Nd isochron ages of HP phases record different cooling ages of ca. 2480 and 2330 Ma. We suggest that the layered mafic-ultramafic complexes, which may have formed in an oceanic environment, were subducted to eclogite depths and exhumed as HP garnet-bearing orogenic peridotites. The layered complexes were engulfed by widespread orthogneisses of tonalite-trondhjemite-granodiorite (TTG) composition with granulite facies assemblages. We propose two possible tectonic models: (1) the relicts of eclogitic complexes are so widespread in the Scourian that this can be taken as evidence that a >90 km x 40 km slab of continental crust containing mafic-ultramafic complexes was subducted to at least 70 km depth in the late Archaean; during exhumation the gneiss protoliths were retrogressed to granulite facies assemblages, but the mafic-ultramafic rocks resisted retrogression. (2) The layered complexes of mafic and ultramafic rocks were subducted to eclogite-facies depths and, during exhumation under crustal conditions, were intruded by the orthogneiss protoliths (TTG) that were metamorphosed in the granulite facies. Apart from poorly defined UHP metamorphic rocks in Norway, the retrogressed eclogites in the central granulite/retrogressed-eclogite facies Lewisian region, NW Scotland, record the highest crustal pressures so far reported for Archaean rocks, and demonstrate that lithospheric subduction was transporting crustal rocks to HP depths in the Neoarchaean. (C) 2012 International Association for Gondwana Research. Published by Elsevier B.V. All rights reserved.
Abstract:
In this paper, we consider a slow-fading n_t × n_r multiple-input multiple-output (MIMO) channel subject to block fading. Reliability (in terms of achieved diversity order) and rate (in number of symbols transmitted per channel use) are of interest in such channels. We propose a new precoding scheme which achieves both full diversity ((n_t × n_r)th-order diversity) and full rate (n_t symbols per channel use) using partial channel state information at the transmitter (CSIT). The proposed scheme achieves full diversity and improved coding gain through an optimization over the choice of constellation sets. The optimization maximizes d_min^2 for our precoding scheme subject to an energy constraint. The scheme requires feedback of n_t - 1 angle parameter values, compared to 2 n_t n_r real coefficients in the case of full CSIT. Further, for the case of an n_t × 1 system, we prove that the capacity achieved by the proposed scheme is the same as that achieved with full CSIT. Error rate performance results for n_t = 3, 4, 8 show that the proposed scheme performs better than other precoding schemes in the literature; the better performance is due to the choice of the signal sets and the feedback angles in the proposed scheme.
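As a hedged, simplified illustration of optimizing over constellation sets under an energy constraint (not the paper's precoder or feedback structure), one can brute-force a rotation angle that maximizes the minimum squared distance d_min^2 of the superposition of two QPSK sets after unit-energy normalization:

```python
# Hypothetical constellation-rotation search: maximize the minimum squared
# distance d_min^2 of a superposed pair of QPSK sets, with the superposed set
# normalized to unit average energy.  The 2-symbol superposition is a made-up
# stand-in for the paper's precoded transmit vectors.
import itertools
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))  # unit-energy QPSK

def dmin2(points):
    return min(abs(a - b) ** 2 for a, b in itertools.combinations(points, 2))

best_angle, best_d = 0.0, -1.0
for theta in np.linspace(0.0, np.pi / 2, 181):
    pts = np.array([a + b * np.exp(1j * theta) for a in qpsk for b in qpsk])
    pts /= np.sqrt(np.mean(np.abs(pts) ** 2))  # enforce unit average energy
    d = dmin2(pts)
    if d > best_d:
        best_angle, best_d = theta, d

print(f"best rotation {np.degrees(best_angle):.1f} deg, d_min^2 = {best_d:.4f}")
```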
Abstract:
In this letter, we compute the secrecy rate of decode-and-forward (DF) relay beamforming with a finite input alphabet of size M. The source and relays operate under a total power constraint. First, we observe that the secrecy rate with finite-alphabet input can go to zero as the total power increases, when we use the source power and relay weights obtained assuming Gaussian input. This is because the capacity of the eavesdropper's channel can approach the finite-alphabet capacity of (1/2) log_2 M with increasing total power, due to the inability to completely null in the direction of the eavesdropper. We then propose a transmit power control scheme where the optimum source power and relay weights are obtained by carrying out transmit power (source power plus relay power) control on DF with Gaussian input using semi-definite programming, and then obtaining the corresponding source power and relay weights which maximize the secrecy rate for DF with finite-alphabet input. The proposed power control scheme is shown to achieve increasing secrecy rates with increasing total power, with saturation at high total power.
Abstract:
We propose power allocation algorithms for increasing the sum rate of two- and three-user interference channels. The channels experience fast fading, and there is an average power constraint on each transmitter. Our achievable strategies for the two- and three-user interference channels are based on classifying the interference into very strong, strong, and weak interference. We present numerical results of the power allocation algorithm for the two-user Gaussian interference channel with Rician fading with mean total power gain of the fade Omega = 3 and Rician factor kappa = 0.5, and compare the sum rate with that obtained from ergodic interference alignment with water-filling. We show that our power allocation algorithm increases the sum rate with a gain of 1.66 dB at an average transmit SNR of 5 dB. For the three-user Gaussian interference channel with Rayleigh fading with distribution CN(0, 0.5), we show that our power allocation algorithm improves the sum rate with a gain of 1.5 dB at an average transmit SNR of 5 dB.
Abstract:
We consider a power optimization problem with an average delay constraint on the downlink of a green base station. A green base station is powered both by renewable sources, such as solar or wind energy, and by conventional sources, such as diesel generators or the power grid. We try to minimize the energy drawn from conventional sources and utilize the harvested energy to the maximum extent. Each user also has an average delay constraint on its data. The optimal action consists of scheduling the users and allocating the optimal transmission rate to the chosen user. In this paper, we formulate the problem as a Markov decision problem and show the existence of a stationary average-cost optimal policy. We also derive some structural results for the optimal policy.
Abstract:
We consider the MIMO X channel (XC), a system consisting of two transmit-receive pairs, where each transmitter communicates with both receivers. Both the transmitters and the receivers are equipped with multiple antennas. First, we derive an upper bound on the sum-rate capacity of the MIMO XC under an individual power constraint at each transmitter. The sum-rate capacity of the two-user multiple access channel (MAC) that results when receiver cooperation is assumed forms an upper bound on the sum-rate capacity of the MIMO XC. We tighten this bound by considering noise correlation between the receivers and deriving the worst noise covariance matrix. It is shown that the worst noise covariance matrix is a saddle point of a zero-sum, two-player convex-concave game, which is solved through a primal-dual interior point method that handles the maximization and minimization parts of the problem simultaneously. Next, we propose an achievable scheme which employs dirty paper coding at the transmitters and successive decoding at the receivers. We show that the derived upper bound is close to the achievable region of the proposed scheme at low to medium SNRs.