932 results for Designs Qualitative
Abstract:
We know, from the classical work of Tarski on real closed fields, that elimination is, in principle, a fundamental engine for mechanized deduction. In practice, however, the high complexity of elimination algorithms has limited their use in mechanical theorem proving. We advocate qualitative theorem proving, where elimination is attractive since most processes of reasoning take place through the elimination of middle terms, and because the computational complexity of the proof is not an issue. Indeed, what we need is the existence of the proof and not its mechanization. In this paper, we treat the linear case and illustrate the power of this paradigm by giving extremely simple proofs of two central theorems in the complexity and geometry of linear programming.
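As a concrete illustration of what elimination means in the linear case, the sketch below performs one step of the classical Fourier-Motzkin procedure, which removes a "middle" variable from a system of linear inequalities; iterating it decides whether a solution exists. This is a generic sketch of the classical technique, not the authors' proof method.

```python
# Illustrative sketch: Fourier-Motzkin elimination of one variable from a system
# of linear inequalities, each written as (coeffs, bound) meaning coeffs . x <= bound.
from itertools import product

def eliminate(ineqs, k):
    """Return an equivalent system over the remaining variables with x[k] eliminated."""
    lower, upper, rest = [], [], []
    for a, b in ineqs:
        (upper if a[k] > 0 else lower if a[k] < 0 else rest).append((a, b))
    out = list(rest)
    # Combine every lower bound on x[k] with every upper bound on x[k].
    for (al, bl), (au, bu) in product(lower, upper):
        cl, cu = -al[k], au[k]                      # positive scale factors
        a = [cu * al[i] + cl * au[i] for i in range(len(al))]
        a[k] = 0                                    # the middle term cancels
        out.append((a, cu * bl + cl * bu))
    return out

# Example: from {x - y <= 0, -x <= -1, x + y <= 4} eliminate x,
# leaving {-y <= -1, y <= 3}, i.e. 1 <= y <= 3.
print(eliminate([([1, -1], 0), ([-1, 0], -1), ([1, 1], 4)], 0))
```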
Abstract:
Verification is one of the important stages in designing an SoC (system on chip), consuming up to 70% of the design time. In this work, we present a methodology to automatically generate verification test-cases to verify a class of SoCs and to enable reuse of verification resources created for one SoC on another. A prototype implementation for generating the test-cases is also presented.
Abstract:
Statistical information about the wireless channel can be used at the transmitter side to enhance the performance of MIMO systems. This paper addresses how the concept of channel precoding can be used to enhance the performance of STBCs from Generalized Pseudo Orthogonal Designs, which were first introduced by Zhu and Jafarkhani. Such designs include some important classes of STBCs that are directly derivable from Quasi-Orthogonal Designs and Co-ordinate Interleaved Orthogonal Designs.
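To make the general idea of transmitter-side statistical precoding concrete, here is a generic sketch: a linear precoder is built from the eigen-decomposition of a transmit correlation matrix assumed known at the transmitter and applied to an Alamouti codeword. The correlation values, the power loading rule, and the codeword convention are all illustrative assumptions, not the specific construction for Generalized Pseudo Orthogonal Designs.

```python
import numpy as np

R_t = np.array([[1.0, 0.7],
                [0.7, 1.0]])                   # assumed transmit correlation matrix
eigval, U = np.linalg.eigh(R_t)                # eigen-directions known at the transmitter

p = eigval / eigval.sum()                      # simple power loading toward stronger directions
W = U @ np.diag(np.sqrt(p))                    # linear precoder

s1, s2 = 1 + 1j, 1 - 1j                        # information symbols
C = np.array([[s1, s2],                        # Alamouti codeword: rows = time slots,
              [-np.conj(s2), np.conj(s1)]])    # columns = transmit antennas
X = C @ W                                      # precoded space-time transmit matrix
print(X)
```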
Abstract:
Continuous advances in VLSI technology have made the implementation of very complicated systems possible. Modern Systems-on-Chip (SoCs) contain many processors, IP cores and other functional units. As a result, complete verification of whole systems before implementation is becoming infeasible; hence it is likely that these systems may have some errors after manufacturing. This increases the need to find design errors in chips after fabrication. The main challenge for post-silicon debug is the observability of internal signals. Post-silicon debug is the problem of determining what is wrong when the fabricated chip of a new design behaves incorrectly. This problem now consumes over half of the overall verification effort on large designs, and the problem is growing worse. Traditional post-silicon debug methods concentrate on the functional parts of systems and provide mechanisms to increase the observability of the internal state of systems. Those methods may not be sufficient, as modern SoCs contain many blocks (processors, IP cores, etc.) that communicate with one another, and communication is another source of design errors. This tutorial will provide an insight into various observability enhancement techniques, on-chip instrumentation techniques, and the use of high-level models to support the debug process, targeting both the insides of blocks and the communication among them. It will also cover the use of formal methods to aid the debug process.
Abstract:
An extension to a formal verification approach for hybrid systems is proposed to verify analog and mixed-signal (AMS) designs. AMS designs can be formally modeled as hybrid systems and therefore lend themselves to the formal analysis and verification techniques applied to hybrid systems. The proposed approach employs simulation traces obtained from an actual design implementation of AMS circuit blocks (for example, in the form of SPICE netlists) to carry out formal analysis and verification. This enables the same platform used for formally validating an abstract model of an AMS design to also be used for validating its different refinements and design implementations, thereby providing a simple route to formal verification at different levels of implementation. The feasibility of the proposed approach is demonstrated with a case study based on a tunnel diode oscillator. Since the device characteristic of a tunnel diode is highly non-linear with a negative-resistance region, the dynamic behavior of circuits in which it is employed as an element is difficult to model, analyze and verify within a general hybrid system formal verification tool. In the case study presented, the formal model and the proposed computational techniques have been incorporated into CheckMate, a formal verification tool based on the MATLAB and Simulink/Stateflow framework from MathWorks.
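For intuition about the kind of model involved, the sketch below simulates a two-state tunnel diode oscillator, with a diode characteristic Id(V) that has a negative-resistance region, and checks a crude bound on the voltage along the trajectory. This is only a plain simulation sketch of such a model; the cubic characteristic, component values, initial state and bound are illustrative assumptions, not the CheckMate formal analysis or the exact parameters of the case study.

```python
def Id(v):
    return v**3 - 1.5 * v**2 + 0.6 * v      # illustrative cubic with a negative-resistance region

C, L, R, Vin = 1.0, 1.0, 0.2, 0.3            # normalized component values (assumed)

def step(v, i, dt=1e-3):
    """One forward-Euler step of the oscillator's state equations."""
    dv = (-Id(v) + i) / C                    # capacitor voltage dynamics
    di = (-v - R * i + Vin) / L              # inductor current dynamics
    return v + dt * dv, i + dt * di

v, i = 0.0, 0.0                              # assumed initial operating point
ok = True
for _ in range(50_000):
    v, i = step(v, i)
    ok = ok and (-0.5 <= v <= 1.5)           # crude stand-in for a safety property
print(round(v, 3), round(i, 3), ok)
```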
Abstract:
The data obtained in the earlier parts of this series for the donor- and acceptor-end parameters of N-H···O and O-H···O hydrogen bonds have been utilised to obtain a qualitative working criterion to classify the hydrogen bonds into three categories: “very good” (VG), “moderately good” (MG) and “weak” (W). The general distribution curves for all four parameters are found to be nearly Gaussian. Assuming that VG hydrogen bonds lie between 0 and ±1σ, MG hydrogen bonds between ±1σ and ±2σ, and W hydrogen bonds beyond ±2σ (where σ is the standard deviation), suitable cut-off limits for classifying the hydrogen bonds into the three categories have been derived. These limits are used to obtain VG and MG ranges for the four parameters: l and θ at the donor end and the two corresponding parameters at the acceptor end. The qualitative strength of a hydrogen bond is decided by the cumulative application of the criteria to all four parameters. The criterion has been further applied to some practical examples in conformational studies, such as the α-helix, and can be used for obtaining suitable locations of hydrogen atoms to form good hydrogen bonds. An empirical approach to the energy of hydrogen bonds in the three categories has also been presented.
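A minimal sketch of the σ-based cut-off criterion described above: each parameter is rated VG within 1 standard deviation of its mean, MG between 1 and 2 standard deviations, and W beyond that. The means and standard deviations below are placeholders (not the published values), and taking the weakest individual rating as the overall class is only one simple reading of "cumulative application".

```python
def rate(value, mean, sigma):
    """Rate a single hydrogen-bond parameter by its deviation from the mean."""
    d = abs(value - mean) / sigma
    return "VG" if d <= 1 else "MG" if d <= 2 else "W"

def classify_bond(params, stats):
    """params: {name: measured value}; stats: {name: (mean, sigma)}.
    Combines the per-parameter ratings by keeping the weakest one (assumption)."""
    order = {"VG": 0, "MG": 1, "W": 2}
    ratings = [rate(v, *stats[k]) for k, v in params.items()]
    return max(ratings, key=lambda r: order[r])

# Hypothetical donor-end distance/angle values and distribution statistics.
print(classify_bond({"l": 2.95, "theta": 12.0},
                    {"l": (2.90, 0.10), "theta": (10.0, 8.0)}))
```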
Abstract:
The maximal rate of a nonsquare complex orthogonal design for n transmit antennas is 1/2 + 1/n if n is even and 1/2 + 1/(n+1) if n is odd, and codes achieving this rate have been constructed for all n by Liang (2003) and Lu et al. (2005). A lower bound on the decoding delay of maximal-rate complex orthogonal designs has been obtained by Adams et al. (2007), and it is observed that Liang's construction achieves the bound on delay for n equal to 1 and 3 modulo 4, while Lu et al.'s construction achieves the bound for n = 0, 1, 3 mod 4. For n = 2 mod 4, Adams et al. (2010) have shown that the minimal decoding delay is twice the lower bound, in which case both Liang's and Lu et al.'s constructions achieve the minimum decoding delay. For large values of n, the rate is close to half and the decoding delay is very large. A class of rate-1/2 codes with low decoding delay for all n has been constructed by Tarokh et al. (1999). In this paper, another class of rate-1/2 codes is constructed for all n, whose decoding delay is half the decoding delay of the rate-1/2 codes given by Tarokh et al. This is achieved by first giving a general construction of square real orthogonal designs, which includes as special cases the well-known constructions of Adams, Lax, and Phillips and the construction of Geramita and Pullman, and then making use of it to obtain the desired rate-1/2 codes. For the case of nine transmit antennas, the proposed rate-1/2 code is shown to be of minimal delay. The proposed construction results in designs with zero entries, which may have high peak-to-average power ratio, and it is shown that by appropriate postmultiplication a design with no zero entry can be obtained with no change in the code parameters.
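The maximal-rate formula quoted above is easy to tabulate; the short snippet below simply evaluates it for a few antenna counts n and shows the rate approaching 1/2 as n grows.

```python
from fractions import Fraction

def max_cod_rate(n):
    """Maximal rate of a nonsquare complex orthogonal design for n transmit antennas."""
    return Fraction(1, 2) + (Fraction(1, n) if n % 2 == 0 else Fraction(1, n + 1))

for n in (2, 5, 8, 9, 16):
    print(n, max_cod_rate(n))     # e.g. n = 8 -> 5/8, n = 9 -> 3/5
```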
Abstract:
Ensuring reliable operation over an extended period of time is one of the biggest challenges facing present-day electronic systems. The increased vulnerability of components to atmospheric particle strikes poses a big threat to attaining the reliability required for various mission-critical applications. Various soft error mitigation methodologies exist to address this reliability challenge. A general solution to this problem is to arrive at a soft error mitigation methodology with an acceptable implementation overhead and error tolerance level. This implementation overhead can then be reduced by taking advantage of various derating effects, such as logical derating, electrical derating and timing-window derating, and/or by making use of application redundancy, e.g., redundancy in the firmware/software executing on the so-designed robust hardware. In this paper, we analyze the impact of various derating factors and show how they can be profitably employed to reduce the hardware overhead needed to implement a given level of soft error robustness. This analysis is performed on a set of benchmark circuits using the delayed-capture methodology. Experimental results show up to 23% reduction in the hardware overhead when considering individual and combined derating factors.
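To illustrate, with made-up numbers, how derating translates into lower hardening overhead: the sketch below multiplies raw per-flip-flop soft error rates by combined derating factors and then hardens only as many flip-flops as needed to meet a target rate. All values and the greedy hardening rule are illustrative assumptions, not the benchmark results of the paper.

```python
flops = {                      # raw soft-error rates (arbitrary units) per flip-flop
    "ff0": 1.0, "ff1": 0.8, "ff2": 0.6, "ff3": 0.4,
}
derating = {                   # hypothetical combined derating factors per flip-flop
    "ff0": 0.9, "ff1": 0.3, "ff2": 0.5, "ff3": 0.1,
}
target = 1.0                   # maximum tolerated total effective rate

effective = {f: raw * derating[f] for f, raw in flops.items()}
to_harden, total = [], sum(effective.values())
# Harden the highest-contributing flip-flops first until the target is met.
for f in sorted(effective, key=effective.get, reverse=True):
    if total <= target:
        break
    total -= effective[f]      # assume a hardened flip-flop contributes ~0
    to_harden.append(f)
print(to_harden, round(total, 2))   # with derating, fewer flops need hardening
```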
Abstract:
The detection of contaminated food at every stage of processing requires new technologies for fast identification and isolation of toxins in food. Since the effects of food contaminants on human health are severe, the need for such pioneering technologies has also been increasing over the last few decades. In the current study, MDA (malondialdehyde) was prepared by hydrolysis of 1,1,3,3-tetramethoxypropane in HCl medium and used in the electrochemical studies. The electrochemical sensor was fabricated from a glassy carbon electrode modified with polyaniline. These sensors were used for detection of the sodium salt of malonaldehyde and showed high sensitivity in the concentration range of about 1 x 10^-1 M to 1 x 10^-2 M. Tafel plots show the variation of overpotential from -1.73 V to -3.74 V up to 10^-5 mol/L, indicating the lower limit of detection of the system. (C) 2013 Elsevier Ltd. All rights reserved.
Abstract:
An n-length block code C is said to be r-query locally correctable if, for any codeword x ∈ C, one can probabilistically recover any one of the n coordinates of x by querying at most r coordinates of a possibly corrupted version of x. It is known that linear codes whose duals contain 2-designs are locally correctable. In this article, we consider linear codes whose duals contain t-designs for larger t. It is shown here that for such codes, for a given number of queries r, under linear decoding, one can in general handle a larger number of corrupted bits. We exhibit, to our knowledge for the first time, a finite-length code, whose dual contains 4-designs, which can tolerate a fraction of up to 0.567/r corrupted symbols, as against a maximum of 0.5/r in prior constructions. We also present an upper bound showing that 0.567 is the best possible for this code length and query complexity over this symbol alphabet, thereby establishing the optimality of this code in this respect. A second result in the article is a finite-length bound which relates the number of queries r and the fraction of errors that can be tolerated for a locally correctable code that employs a randomized algorithm in which each instance of the algorithm involves t-error correction.
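As a toy instance of the general mechanism the article builds on, namely that dual codewords act as parity checks usable for local correction, the sketch below uses the [7,4] Hamming code, whose dual's weight-4 codewords form a 2-design (query complexity r = 3). It is not the 4-design code constructed in the article; it only illustrates querying a few coordinates of a corrupted word and voting.

```python
import itertools, random

G = [[1, 0, 0, 0, 1, 1, 0],    # generator matrix of a [7,4] Hamming code
     [0, 1, 0, 0, 1, 0, 1],
     [0, 0, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def span(rows):
    """All GF(2) linear combinations of the given rows."""
    out = set()
    for coeffs in itertools.product([0, 1], repeat=len(rows)):
        out.add(tuple(sum(c * r[j] for c, r in zip(coeffs, rows)) % 2 for j in range(7)))
    return out

code = span(G)
dual = [v for v in itertools.product([0, 1], repeat=7)
        if all(sum(a * b for a, b in zip(v, c)) % 2 == 0 for c in code)]
checks = [v for v in dual if sum(v) == 4]          # the 7 weight-4 dual codewords

def locally_correct(y, i, trials=3):
    """Estimate coordinate i of the underlying codeword from the (possibly corrupted)
    word y, querying only 3 other coordinates per trial and majority voting."""
    votes = []
    for _ in range(trials):
        d = random.choice([d for d in checks if d[i] == 1])
        votes.append(sum(y[j] for j in range(7) if d[j] == 1 and j != i) % 2)
    return max(set(votes), key=votes.count)

x = random.choice(sorted(code))
y = list(x)
y[0] ^= 1                                          # corrupt coordinate 0
print(x[0], locally_correct(y, 0))                 # coordinate 0 is still recovered
```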
Abstract:
For the first time, two units of KTA have been linked to three units of cyst-di-OMe. The reaction is noteworthy since it involves the formation of six amide bonds leading to a three-fold symmetric 23-cyclophane (3) harboring a cluster of three S-S bridges. The major product is a di-imide (4), arising from the interaction of a cystine NH with a neighbouring activated ester. A third reaction, tethering KTA to a single cyst-di-OMe unit, afforded the flexible compound 6 and, with benzidine, the novel linker-directed 7 with orthogonally disposed anchor modules.
Abstract:
Silver nanoparticles (AgNPs) pose a high risk of exposure to the natural environment owing to their extensive usage in various consumer products. In the present study, we attempted to understand the harmful effect of AgNPs at environmentally relevant low concentration levels (<= 1 ppm) towards two different freshwater bacterial isolates and their consortium. The standard plate count assay suggested that the AgNPs were toxic towards the freshwater bacterial isolates as well as the consortium, though toxicity was significantly reduced for the cells in the consortium. The oxidative stress assessment and membrane permeability studies corroborated the toxicity data. Detailed electron microscopic studies suggested the cell-degrading potential of the AgNPs, and FT-IR studies confirmed the involvement of the surface groups in the toxic effects. No significant ion leaching from the AgNPs was observed at the applied concentration levels, signifying the dominant role of particle size and size distribution in bacterial toxicity. The reduced toxicity for the cells in the consortium compared with the individual isolates has major significance for further studies on the ecotoxicity of AgNPs. (C) 2014 Elsevier Inc. All rights reserved.
Abstract:
Package-board co-design plays a crucial role in determining the performance of high-speed systems. Although there exist several commercial solutions for electromagnetic analysis and verification, the lack of Computer Aided Design (CAD) tools for SI-aware design and synthesis leads to longer design cycles and non-optimal package-board interconnect geometries. In this work, the functional similarities between package-board design and radio-frequency (RF) imaging are explored. Consequently, qualitative methods common to the imaging community, like Tikhonov Regularization (TR) and the Landweber method, are applied to solve multi-objective, multi-variable package design problems. In addition, a new hierarchical iterative piecewise-linear algorithm is developed as a wrapper over LBP for an efficient solution in the design space.
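For readers unfamiliar with the two imaging-style inversion methods named above, here is a generic sketch of Tikhonov Regularization and the Landweber iteration applied to a random linear model A x = b. Nothing in it is specific to the authors' package-board formulation; the matrix, noise level and regularization parameter are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 20))
x_true = rng.standard_normal(20)
b = A @ x_true + 0.01 * rng.standard_normal(40)    # noisy observations

# Tikhonov Regularization (TR): solve (A^T A + lam*I) x = A^T b in one shot.
lam = 1e-2
x_tr = np.linalg.solve(A.T @ A + lam * np.eye(20), A.T @ b)

# Landweber method: iterate x <- x + tau * A^T (b - A x), with tau < 2 / ||A||^2.
tau = 1.0 / np.linalg.norm(A, 2) ** 2
x_lw = np.zeros(20)
for _ in range(500):
    x_lw += tau * A.T @ (b - A @ x_lw)

print(np.linalg.norm(x_tr - x_true), np.linalg.norm(x_lw - x_true))
```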