202 results for Maximum independent set
Abstract:
A compact model for the noise margin (NM) of single-electron transistor (SET) logic is developed as a function of the device capacitances and the background charge (ζ). The noise margin is then used as a metric to evaluate the robustness of SET logic against background charge, temperature, and variations in the SET gate and tunnel-junction capacitances (C_G and C_T). It is shown that choosing α = C_T/C_G = 1/3 maximizes the NM. The maximum tolerable ζ is estimated to be ±0.03 e. Finally, the effect of mismatch in device parameters on the NM is studied through exhaustive simulations, which indicate that α ∈ [0.3, 0.4] provides maximum robustness. It is also observed that mismatch can have a significant impact on static power dissipation.
Abstract:
The stability of scheduled multiaccess communication with random coding and independent decoding of messages is investigated. The number of messages that may be scheduled for simultaneous transmission is limited to a given maximum value, and the channels from transmitters to receiver are quasistatic, flat, and have independent fades. Requests for message transmissions are assumed to arrive according to an i.i.d. arrival process. Then, we show the following: (1) in the limit of large message alphabet size, the stability region has an interference limited information-theoretic capacity interpretation, (2) state-independent scheduling policies achieve this asymptotic stability region, and (3) in the asymptotic limit corresponding to immediate access, the stability region for non-idling scheduling policies is shown to be identical irrespective of received signal powers.
Abstract:
The basis set dependence of the topographical structure of the molecular electrostatic potential (MESP), as well as the effect of substituents on the MESP distribution, has been investigated with substituted benzenes as test cases. The molecules are studied at the HF-SCF 3-21G and 6-31G** levels, with a further MESP topographical investigation at the 3-21G, double-zeta, 6-31G*, 6-31G**, double-zeta polarized, and triple-zeta polarized levels. The MESP critical points for a 3-21G optimized/6-31G** basis are similar to the corresponding 6-31G** optimized/6-31G** ones. More generally, the qualitative features of the MESP topography computed at the polarized level are independent of the level at which the optimization is carried out. For a proper representation of oxygen lone pairs, however, optimization using a polarized basis set is required. The nature of the substituent drastically changes the MESP distribution over the phenyl ring. The values and positions of the MESP minima indicate the most active site for electrophilic attack. This point is strengthened by a study of disubstituted benzenes.
Abstract:
We address the problem of allocating a single divisible good to a number of agents. The agents have concave valuation functions parameterized by a scalar type, and they report only their type. The goal is to find allocatively efficient, strategy-proof, nearly budget-balanced mechanisms within the Groves class. Near budget balance is attained by returning as much of the received payments as possible to the agents as rebates. Two performance criteria are of interest within the class of linear rebate functions: the maximum ratio of budget surplus to efficient surplus, and the expected budget surplus; the goal is to minimize them. Assuming that the valuation functions are known, we show that both problems reduce to convex optimization problems in which the convex constraint sets are characterized by a continuum of half-plane constraints parameterized by the vector of reported types. We then propose a randomized relaxation of these problems obtained by sampling constraints. The relaxed problem is a linear program (LP). We identify the number of samples needed for "near-feasibility" of the relaxed constraint set and, under some conditions on the valuation function, show that the value of the approximate LP is close to the optimal value. Simulation results show significant improvements of our proposed method over the Vickrey-Clarke-Groves (VCG) mechanism without rebates. In the special case of indivisible goods, the mechanisms in this paper fall back to those proposed by Moulin, by Guo and Conitzer, and by Gujar and Narahari, without any need for randomization. Extensions of the proposed mechanisms to situations where the valuation functions are not known to the central planner are also discussed. Note to Practitioners: Our results will be useful in all resource allocation problems that involve gathering of information privately held by strategic users, where the utilities are any concave function of the allocations and where the resource planner is interested not in maximizing revenue but in efficient sharing of the resource. Such situations arise quite often in fair sharing of internet resources, fair sharing of funds across departments within the same parent organization, auctioning of public goods, etc. We study methods to achieve near budget balance by first collecting payments according to the celebrated VCG mechanism and then returning as much of the collected money as possible as rebates. Our focus on linear rebate functions allows for easy implementation. The resulting convex optimization problem is solved via relaxation to a randomized linear program, for which several efficient solvers exist; this relaxation is enabled by constraint sampling. Keeping practitioners in mind, we identify the number of samples that assures a desired level of "near-feasibility" with the desired confidence level. Our methodology will occasionally require a subsidy from outside the system. We demonstrate via simulation, however, that if the mechanism is repeated several times over independent instances, then past surplus can support the subsidy requirements. We also extend our results to situations where the strategic users' utility functions are not known to the allocating entity, a common situation in the context of internet users and other problems.
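A minimal sketch of the constraint-sampling relaxation described in this abstract, assuming a hypothetical constraint generator in place of the paper's actual half-plane constraints: type vectors are sampled, each yields one linear constraint on the rebate coefficients, and the resulting LP is handed to an off-the-shelf solver. The function constraint_for_type, the objective, and the problem sizes are illustrative placeholders, not the paper's formulation.

```python
# Sketch of constraint sampling for a linearly parameterized rebate rule.
# The true problem has a half-plane constraint for every possible type vector;
# here a finite number of type vectors is sampled and the resulting LP solved.
# constraint_for_type is a hypothetical stand-in for the paper's constraints.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n_agents, n_samples = 5, 2000            # assumed sizes, illustration only

def constraint_for_type(theta):
    """Return (a, b) so that feasibility at type vector theta reads a @ c <= b."""
    a = np.sort(theta)[::-1]             # placeholder coefficients
    b = 0.5 * theta.sum()                # placeholder right-hand side (budget cap)
    return a, b

# Placeholder surrogate objective: return as much rebate weight as possible,
# written as minimization of the negative total weight.
obj = -np.ones(n_agents)

A_ub, b_ub = [], []
for _ in range(n_samples):
    theta = rng.uniform(0.0, 1.0, size=n_agents)   # sampled type vector
    a, b = constraint_for_type(theta)
    A_ub.append(a)
    b_ub.append(b)

res = linprog(obj, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
              bounds=[(0, 1)] * n_agents, method="highs")
print("sampled-LP status:", res.message)
print("rebate coefficients:", res.x)
```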
Abstract:
The capacity region of a two-user Gaussian Multiple Access Channel (GMAC) with complex finite input alphabets and a continuous output alphabet is studied. When both users are equipped with the same code alphabet, it is shown that rotating one user's alphabet by an appropriate angle can make the new pair of alphabets not only uniquely decodable but can also enlarge the capacity region. For this set-up, the primary problem is to find the angle(s) of rotation between the alphabets such that the capacity region is maximally enlarged. It is shown that the angle of rotation which provides maximum enlargement of the capacity region also minimizes the union bound on the probability of error of the sum alphabet, and vice versa. The optimum angle(s) of rotation vary with the SNR. Through simulations, the optimal angle(s) of rotation that give maximum enlargement of the capacity region of the GMAC with some well-known alphabets, such as M-QAM and M-PSK for some values of M, are presented for several values of SNR. It is shown that for a large number of points in the alphabets, the capacity gains due to rotation progressively reduce. As the number of points N tends to infinity, our results match those in the literature, wherein the capacity region of the Gaussian code alphabet does not change with rotation for any SNR.
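The link between the rotation angle and the union bound on the sum alphabet can be illustrated with a small sweep. The sketch below assumes two unit-power QPSK users and uses the minimum distance of the sum alphabet x1 + e^{j*theta}*x2 as a crude proxy for the union bound; the angle grid and power levels are illustrative, and the paper's optimal angles are SNR-dependent.

```python
# Sweep the relative rotation between two users' QPSK alphabets and track the
# minimum distance of the resulting sum alphabet x1 + exp(j*theta)*x2. A larger
# minimum distance loosens the union bound on the sum-alphabet error
# probability, which the abstract links to capacity-region enlargement.
# Equal unit powers and the angle grid are illustrative assumptions.
import numpy as np

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))   # unit-energy QPSK

def min_distance(points):
    """Smallest pairwise distance in the sum alphabet (0 if sums coincide)."""
    d = np.abs(points[:, None] - points[None, :])
    np.fill_diagonal(d, np.inf)          # exclude self-distances only
    return d.min()

angles = np.linspace(0.0, np.pi / 2, 181)
dmin = [min_distance((qpsk[:, None] + np.exp(1j * t) * qpsk[None, :]).ravel())
        for t in angles]

best = angles[int(np.argmax(dmin))]
print(f"best rotation ~ {np.degrees(best):.1f} deg, d_min = {max(dmin):.3f}")
```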
Abstract:
It has been shown recently that the maximum rate of a 2-real-symbol (single-complex-symbol) maximum likelihood (ML) decodable square space-time block code (STBC) with unitary weight matrices is 2a/2^a complex symbols per channel use (cspcu) for 2^a transmit antennas [1]. These STBCs are obtained from Unitary Weight Designs (UWDs). In this paper, we show that the maximum rates for 3- and 4-real-symbol (2-complex-symbol) ML decodable square STBCs from UWDs, for 2^a transmit antennas, are 3(a-1)/2^a and 4(a-1)/2^a cspcu, respectively. STBCs achieving these maximum rates are constructed. A set of sufficient conditions on the signal set, required for these codes to achieve full diversity, is derived, along with expressions for their coding gain.
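For concreteness, the quoted maximum-rate expressions can be tabulated for small a (assuming, as in the abstract, 2^a transmit antennas):

```python
# Evaluate the maximum-rate expressions quoted above for square STBCs from
# unitary weight designs with n = 2^a transmit antennas (values in cspcu).
for a in range(2, 6):
    n = 2 ** a
    r2 = 2 * a / n            # 2-real-symbol (single-complex-symbol) decodable
    r3 = 3 * (a - 1) / n      # 3-real-symbol decodable
    r4 = 4 * (a - 1) / n      # 4-real-symbol (2-complex-symbol) decodable
    print(f"a={a} (n={n} antennas): {r2:.3f}, {r3:.3f}, {r4:.3f}")
```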
Abstract:
A new technique is presented, using principles of multisignal relaying, for the synthesis of a universal-type quadrilateral polar characteristic. The modus operandi consists in determining the phase sequence of a set of voltage phasors and providing a trip signal for one sequence while blocking for the other. Two versions, one using ferrite-core logic and another using transistor logic, are described in detail. The former version has the merit of simplicity and the added advantage of not requiring any d.c. supply. The unit is flexible, as it permits independent control of the characteristic along the resistance and reactance axes through suitable adjustments of the replica impedance angles. The maximum operating time is about 20 ms for all switching angles and for faults within 95% of the protected section. The maximum transient overreach is about 8%.
Abstract:
We propose a new set of input voltage equations (IVEs) for the independent double-gate MOSFET, obtained by solving the governing bipolar Poisson equation (PE) rigorously. The proposed IVEs, which involve Legendre's incomplete elliptic integral of the first kind and the Jacobian elliptic functions, and which are valid from the accumulation to the inversion regime, are shown to be in good agreement with the numerical solution of the same PE for all bias conditions.
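The special functions entering these IVEs are available in standard numerical libraries. The snippet below only evaluates Legendre's incomplete elliptic integral of the first kind and the Jacobian elliptic functions for illustrative arguments; it does not reproduce the paper's actual input voltage equations.

```python
# Evaluate the special functions that the IVEs involve: the incomplete elliptic
# integral of the first kind F(phi, m) and the Jacobian elliptic functions.
import numpy as np
from scipy.special import ellipkinc, ellipj

phi, m = 0.7, 0.5                 # illustrative argument and parameter
u = ellipkinc(phi, m)             # F(phi, m), first kind
sn, cn, dn, am = ellipj(u, m)     # Jacobian elliptic functions at u

# Consistency check: the amplitude of F(phi, m) recovers phi, and sn = sin(phi).
assert np.isclose(am, phi) and np.isclose(sn, np.sin(phi))
print(f"F({phi}, {m}) = {u:.6f}, sn = {sn:.6f}, cn = {cn:.6f}, dn = {dn:.6f}")
```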
Abstract:
The RILEM work-of-fracture method for measuring the specific fracture energy of concrete from notched three-point bend specimens is still the most common method used throughout the world, despite the fact that the specific fracture energy so measured is known to vary with the size and shape of the test specimen. The reasons for this variation have also been known for nearly two decades, and two methods have been proposed in the literature to correct the measured size-dependent specific fracture energy (G_f) in order to obtain a size-independent value (G_F). It has also been proved recently, on the basis of a limited set of results on a single concrete mix with a compressive strength of 37 MPa, that when the size-dependent G_f measured by the RILEM method is corrected following either of these two methods, the resulting specific fracture energy G_F is very nearly the same and independent of the size of the specimen. In this paper, we provide further evidence in support of this important conclusion using extensive independent test results of three different concrete mixes ranging in compressive strength from 57 to 122 MPa. (c) 2013 Elsevier Ltd. All rights reserved.
Abstract:
Mass balance between the metal and the electrolytic solution, separated by a moving interface, during stable pit growth results in a set of governing equations which are solved for the concentration field and the interface position (pit boundary evolution). The interface experiences a jump discontinuity in metal concentration. The extended finite-element method (XFEM) handles this jump discontinuity through a discontinuous-derivative enrichment formulation, eliminating the need for a front-conforming mesh and for re-meshing after each time step, as in the conventional finite-element method. However, the interface location must be known before the governing equations for the concentration field can be solved; a numerical technique, the level set method, is therefore used to track the interface explicitly and update it over time. The level set method is chosen because it is independent of the shape and location of the interface. A combined XFEM and level set method is thus developed in this paper. A numerical analysis of pitting corrosion of stainless steel 304 is presented. The proposed model is validated by comparing the numerical results with experimental results, exact solutions, and some other approximate solutions. An empirical model for the pitting potential is also derived on the basis of the finite-element results. The studies show that the pitting profile depends to a large extent on factors such as ion concentration, solution pH, and temperature. Studying the individual and combined effects of these factors on the pitting potential is worthwhile, as the pitting potential directly influences the corrosion rate.
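A minimal sketch of the level-set step alone, assuming a constant outward front speed in place of the concentration-driven speed that the XFEM solution would supply: the pit boundary is the zero level set of phi, advanced with a first-order upwind (Godunov) update. Grid size, speed, and time step are illustrative.

```python
# Minimal 2D level-set sketch: a circular "pit" front moving outward with a
# constant speed V, using a first-order upwind (Godunov) update of
# phi_t + V*|grad(phi)| = 0. In the paper the front speed would come from the
# XFEM concentration solution; here V, the grid, and dt are illustrative.
import numpy as np

n, h, V, dt, steps = 101, 1.0, 1.0, 0.4, 50     # CFL: V*dt/h < 1
x = np.linspace(-50, 50, n)
X, Y = np.meshgrid(x, x, indexing="ij")
phi = np.sqrt(X**2 + Y**2) - 10.0               # signed distance to initial pit

def upwind_grad_norm(p):
    """Godunov approximation of |grad(phi)| for a front moving with V > 0."""
    dxm = (p - np.roll(p,  1, axis=0)) / h      # backward differences
    dxp = (np.roll(p, -1, axis=0) - p) / h      # forward differences
    dym = (p - np.roll(p,  1, axis=1)) / h
    dyp = (np.roll(p, -1, axis=1) - p) / h
    return np.sqrt(np.maximum(dxm, 0)**2 + np.minimum(dxp, 0)**2 +
                   np.maximum(dym, 0)**2 + np.minimum(dyp, 0)**2)

for _ in range(steps):
    phi = phi - dt * V * upwind_grad_norm(phi)  # advance the interface

# The zero level set of phi is the pit boundary; estimate its radius.
front_radius = np.sqrt(X**2 + Y**2)[np.abs(phi) < 0.5].mean()
print(f"front radius after {steps} steps: ~{front_radius:.1f} "
      f"(exact front at {10.0 + V * dt * steps:.1f})")
```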
On Precoding for Constant K-User MIMO Gaussian Interference Channel With Finite Constellation Inputs
Abstract:
This paper considers linear precoding for the constant channel-coefficient K-user MIMO Gaussian interference channel (MIMO GIC), where each transmitter-i (Tx-i) needs to send d_i independent complex symbols per channel use, drawn with uniform distribution from fixed finite constellations, to receiver-i (Rx-i), for i = 1, 2, ..., K. We define the constellation constrained saturation capacity (CCSC) for Tx-i as the maximum rate achieved by Tx-i using any linear precoder as the signal-to-noise ratio (SNR) tends to infinity when the interference channel coefficients are zero. We derive a high-SNR approximation for the rate achieved by Tx-i when interference is treated as noise; this rate is given by the mutual information between Tx-i and Rx-i, denoted I(X_i; Y_i). A set of necessary and sufficient conditions on the precoders under which I(X_i; Y_i) tends to the CCSC for Tx-i is derived. Interestingly, the precoders designed for interference alignment (IA) satisfy these necessary and sufficient conditions. Furthermore, we propose gradient-ascent-based algorithms to optimize the sum rate achieved by precoding with finite constellation inputs and treating interference as noise. A simulation study using the proposed algorithms for a three-user MIMO GIC with two antennas at each node, with d_i = 1 for all i and with BPSK and QPSK inputs, shows more than 0.1-b/s/Hz gain in the ergodic sum rate over that yielded by precoders obtained from some known IA algorithms at moderate SNRs.
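The saturation behavior behind the CCSC definition can be seen even in the simplest interference-free setting. The sketch below Monte Carlo-estimates the constellation-constrained mutual information of a uniformly distributed finite alphabet over a scalar complex AWGN channel and shows it approaching log2(M) bits at high SNR; the MIMO links, precoders, and interference terms of the paper are omitted.

```python
# Monte Carlo estimate of the constellation-constrained mutual information
# I(X;Y) for a uniform finite alphabet over a scalar complex AWGN channel,
# y = x + n. At high SNR it saturates at log2(M) bits, the flavor of the
# "saturation capacity" discussed above (interference-free, single antenna).
import numpy as np

rng = np.random.default_rng(1)

def cc_mutual_information(alphabet, snr_db, n_mc=20000):
    m = alphabet.size
    es = np.mean(np.abs(alphabet) ** 2)
    n0 = es / 10 ** (snr_db / 10)                    # noise variance (complex)
    x = alphabet[rng.integers(0, m, size=n_mc)]
    n = np.sqrt(n0 / 2) * (rng.standard_normal(n_mc) + 1j * rng.standard_normal(n_mc))
    y = x + n
    # I(X;Y) = log2(m) - E[ log2( sum_x' exp(-(|y - x'|^2 - |n|^2)/n0) ) ]
    metric = np.exp(-(np.abs(y[:, None] - alphabet[None, :]) ** 2
                      - np.abs(n[:, None]) ** 2) / n0)
    return np.log2(m) - np.mean(np.log2(metric.sum(axis=1)))

qpsk = np.exp(1j * (np.pi / 4 + np.pi / 2 * np.arange(4)))
for snr in (0, 5, 10, 20):
    print(f"QPSK, {snr:2d} dB: I(X;Y) ~ {cc_mutual_information(qpsk, snr):.3f} bits")
```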
Abstract:
This article presents frequentist inference for accelerated life test data of series systems with independent log-normal component lifetimes. The means of the component log-lifetimes are assumed to depend on the stress variables through a linear stress translation function that can accommodate the standard stress translation functions in the literature. An expectation-maximization algorithm is developed to obtain the maximum likelihood estimates of the model parameters. The maximum likelihood estimates are then further refined by bootstrap, which is also used to infer the component and system reliability metrics at usage stresses. The developed methodology is illustrated by analyzing both a real and a simulated dataset. A simulation study is also carried out to judge the effectiveness of the bootstrap. It is found that, in this model, application of the bootstrap results in a significant improvement over the simple maximum likelihood estimates.
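A generic illustration of bootstrap refinement of maximum likelihood estimates, assuming a single complete log-normal sample rather than the paper's censored series-system ALT data, and plain closed-form MLEs instead of the EM algorithm:

```python
# Parametric-bootstrap bias correction of a log-normal MLE (generic sketch,
# not the paper's EM-based series-system analysis).
import numpy as np

rng = np.random.default_rng(2)
mu_true, sigma_true, n = 2.0, 0.5, 30
data = rng.lognormal(mu_true, sigma_true, size=n)

def lognormal_mle(x):
    logs = np.log(x)
    return logs.mean(), logs.std(ddof=0)     # MLEs of (mu, sigma)

mu_hat, sigma_hat = lognormal_mle(data)

# Parametric bootstrap: refit on samples drawn from the fitted model,
# then bias-correct the original estimates.
boot = np.array([lognormal_mle(rng.lognormal(mu_hat, sigma_hat, size=n))
                 for _ in range(2000)])
bias = boot.mean(axis=0) - np.array([mu_hat, sigma_hat])
mu_bc, sigma_bc = np.array([mu_hat, sigma_hat]) - bias

print(f"MLE:            mu={mu_hat:.3f}, sigma={sigma_hat:.3f}")
print(f"Bias-corrected: mu={mu_bc:.3f}, sigma={sigma_bc:.3f}  (true: 2.0, 0.5)")
```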
Abstract:
Facial expressions are the most expressive way to display emotions. Many algorithms have been proposed that employ a particular set of people (usually a database) to both train and test their model. This paper focuses on the challenging task of database-independent emotion recognition, which is a generalized case of subject-independent emotion recognition. The emotion recognition system employed in this work is a Meta-Cognitive Neuro-Fuzzy Inference System (McFIS). McFIS has two components: a neuro-fuzzy inference system, which is the cognitive component, and a self-regulatory learning mechanism, which is the meta-cognitive component. The meta-cognitive component monitors the knowledge in the neuro-fuzzy inference system and decides what to learn, when to learn, and how to learn the training samples efficiently. For each sample, McFIS decides whether to delete the sample without learning it, use it to add/prune or update the network parameters, or reserve it for future use. This helps the network avoid over-training and, as a result, improves its generalization performance on untrained databases. In this study, we extract pixel-based emotion features from the well-known JAFFE (Japanese Female Facial Expression) and TFEID (Taiwanese Facial Expression Image Database) databases. Two sets of experiments are conducted. First, we study the individual performance on both databases with McFIS using a 5-fold cross-validation study. Next, in order to study the generalization performance, McFIS trained on the JAFFE database is tested on TFEID, and vice versa. The performance comparison in both experiments against an SVM classifier gives promising results.
Abstract:
In this paper, we consider spatial modulation (SM) operating in a frequency-selective single-carrier (SC) communication scenario and propose zero-padding instead of the cyclic prefix considered in the existing literature. We show that the zero-padded single-carrier (ZP-SC) SM system offers full multipath diversity under maximum-likelihood (ML) detection, unlike the cyclic-prefix-based SM system. Furthermore, we show that the order of ML detection complexity in the proposed ZP-SC SM system is independent of the frame length and depends only on the number of multipath links between the transmitter and the receiver. Thus, the zero-padding applied in the SC SM system has two advantages over the cyclic prefix: 1) it achieves full multipath diversity, and 2) it imposes a relatively low ML detection complexity. Furthermore, we extend the partial interference cancellation receiver (PIC-R) proposed by Guo and Xia for the detection of space-time block codes (STBCs) in order to convert the ZP-SC system into a set of narrowband subsystems experiencing flat fading. We show that full-rank STBC transmissions over these subsystems achieve full transmit, receive, and multipath diversity under the PIC-R. Furthermore, we show that the ZP-SC SM system achieves receive and multipath diversity under the PIC-R at a detection complexity order that is the same as that of the SM system in a flat-fading scenario. Our simulation results demonstrate that the symbol error ratio performance of the proposed linear receiver for the ZP-SC SM system is significantly better than that of SM in cyclic-prefix-based orthogonal frequency division multiplexing, as well as that of SM in cyclic-prefixed and zero-padded single-carrier systems relying on zero-forcing/minimum mean-squared error equalizer-based receivers.
Abstract:
Rapid reconstruction of multidimensional images is crucial for enabling real-time 3D fluorescence imaging, and becomes a key factor for imaging rapidly occurring events in the cellular environment. To facilitate real-time imaging, we have developed a graphics processing unit (GPU) based real-time maximum a posteriori (MAP) image reconstruction system. The parallel processing capability of the GPU device, which consists of a large number of tiny processing cores, and the adaptability of the image reconstruction algorithm to parallel processing (employing multiple independent computing modules called threads) result in high temporal resolution. Moreover, the proposed quadratic-potential-based MAP algorithm effectively deconvolves the images as well as suppresses the noise. The multi-node multi-threaded GPU and the Compute Unified Device Architecture (CUDA) efficiently execute the iterative image reconstruction algorithm, which is approximately 200-fold faster (for large datasets) than existing CPU-based systems. (C) 2015 Author(s). All article content, except where otherwise noted, is licensed under a Creative Commons Attribution 3.0 Unported License.
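A CPU NumPy sketch of one common form of MAP deconvolution with a quadratic (smoothness) potential: a least-squares data term plus a Laplacian penalty minimized by gradient descent with FFT-based convolutions. The PSF, weights, and Gaussian data term are illustrative assumptions, not the paper's exact algorithm; the GPU/CUDA implementation would parallelize this kind of per-pixel work across threads.

```python
# MAP deconvolution sketch with a quadratic potential:
# minimize ||h*x - y||^2 + lam*||grad x||^2 by gradient descent (FFT convolutions).
import numpy as np

rng = np.random.default_rng(3)
n = 128
truth = np.zeros((n, n)); truth[40:60, 50:90] = 1.0          # toy object

yy, xx = np.meshgrid(np.arange(n) - n // 2, np.arange(n) - n // 2, indexing="ij")
psf = np.exp(-(xx**2 + yy**2) / (2 * 2.0**2)); psf /= psf.sum()
H = np.fft.fft2(np.fft.ifftshift(psf))                       # blur operator in Fourier space

blur = lambda img, T: np.real(np.fft.ifft2(np.fft.fft2(img) * T))
y = blur(truth, H) + 0.01 * rng.standard_normal((n, n))      # blurred, noisy data

lam, step, x = 0.05, 0.4, y.copy()
for _ in range(200):
    resid = blur(x, H) - y                                   # data-fit residual
    grad_data = blur(resid, np.conj(H))                      # H^T (Hx - y)
    lap = (np.roll(x, 1, 0) + np.roll(x, -1, 0) +
           np.roll(x, 1, 1) + np.roll(x, -1, 1) - 4 * x)     # discrete Laplacian
    x = x - step * (grad_data - lam * lap)                   # MAP gradient step

print("relative reconstruction error:",
      np.linalg.norm(x - truth) / np.linalg.norm(truth))
```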