34 results for Library Access Considerations: A User's Perspective
at Indian Institute of Science - Bangalore - India
Abstract:
Constellation Constrained (CC) capacity regions of two-user Single-Input Single-Output (SISO) Gaussian Multiple Access Channels (GMAC) are computed for several Non-Orthogonal Multiple Access (NO-MA) and Orthogonal Multiple Access (O-MA) schemes. For NO-MA schemes, a metric is proposed to compute the angle(s) of rotation between the input constellations such that the CC capacity regions are maximally enlarged. Further, code pairs based on Trellis Coded Modulation (TCM) are designed with PSK constellation pairs and PAM constellation pairs such that any rate pair within the CC capacity region can be approached. Such a NO-MA scheme, which employs CC capacity approaching trellis codes, is referred to as Trellis Coded Multiple Access (TCMA). Then, CC capacity regions of O-MA schemes such as Frequency Division Multiple Access (FDMA) and Time Division Multiple Access (TDMA) are also computed, and it is shown that, unlike the case of Gaussian distributed continuous constellations, the CC capacity regions with FDMA are strictly contained inside the CC capacity regions with TCMA. Hence, for finite constellations, a NO-MA scheme such as TCMA is better than FDMA and TDMA, which makes NO-MA schemes worth pursuing in practice for the two-user GMAC. The idea of introducing rotations between the input constellations is then used to construct Space-Time Block Code (STBC) pairs for the two-user Multiple-Input Single-Output (MISO) fading MAC. The proposed STBCs are shown to have reduced Maximum Likelihood (ML) decoding complexity and the information-losslessness property. Finally, STBC pairs with reduced sphere decoding complexity are proposed for the two-user Multiple-Input Multiple-Output (MIMO) fading MAC.
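As a rough illustration of the kind of quantity being optimized, the sketch below estimates, by Monte Carlo, the constellation-constrained sum rate I(X1, X2; Y) of a two-user GMAC when one user's QPSK alphabet is rotated by an angle theta. The QPSK alphabets, the SNR convention (total received signal power over noise power), and the sample sizes are illustrative assumptions; the rotation metric and code constructions proposed in the thesis are not reproduced here.

```python
import numpy as np

def cc_sum_rate(c1, c2, theta, snr_db, n_noise=2000):
    """Monte Carlo estimate of I(X1, X2; Y) for Y = x1 + exp(j*theta)*x2 + n,
    with uniform, independent inputs drawn from the finite alphabets c1 and c2."""
    s = (c1[:, None] + c2[None, :] * np.exp(1j * theta)).ravel()   # sum constellation
    # Noise variance chosen so that SNR = total received signal power / noise power
    # (an assumed convention for this sketch).
    n0 = (np.mean(np.abs(c1) ** 2) + np.mean(np.abs(c2) ** 2)) / 10 ** (snr_db / 10)
    noise = np.sqrt(n0 / 2) * (np.random.randn(n_noise) + 1j * np.random.randn(n_noise))
    m = len(s)
    rate = np.log2(m)
    for i in range(m):
        d = np.abs(s[i] + noise[:, None] - s[None, :]) ** 2        # distances to all sum points
        log_ratios = (np.abs(noise[:, None]) ** 2 - d) / (n0 * np.log(2))
        rate -= np.mean(np.logaddexp2.reduce(log_ratios, axis=1)) / m
    return rate

qpsk = np.exp(2j * np.pi * np.arange(4) / 4)       # unit-energy QPSK for both users
for deg in (0, 15, 30, 45):
    r = cc_sum_rate(qpsk, qpsk, np.deg2rad(deg), snr_db=5)
    print(f"rotation {deg:2d} deg: CC sum rate ~ {r:.3f} bits per channel use")
```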
Abstract:
The capacity region of a two-user Gaussian Multiple Access Channel (GMAC) with complex finite input alphabets and a continuous output alphabet is studied. When both users are equipped with the same code alphabet, it is shown that rotating one user's alphabet by an appropriate angle not only makes the new pair of alphabets uniquely decodable, but also enlarges the capacity region. For this set-up, we identify the primary problem to be finding the angle(s) of rotation between the alphabets such that the capacity region is maximally enlarged. It is shown that the angle of rotation which provides maximum enlargement of the capacity region also minimizes the union bound on the probability of error of the sum alphabet, and vice versa. The optimum angle(s) of rotation vary with the SNR. Through simulations, the optimal angle(s) of rotation that give maximum enlargement of the capacity region of the GMAC are presented for several values of SNR and for some well known alphabets such as M-QAM and M-PSK for some M. It is shown that for a large number of points in the alphabets, the capacity gains due to rotation progressively reduce. As the number of points N tends to infinity, our results match those in the literature, wherein the capacity region with Gaussian code alphabets does not change with rotation at any SNR.
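The sketch below computes the union bound on the probability of error of the sum alphabet as a function of the rotation angle, which, per the abstract, is minimized at the same angle that maximally enlarges the capacity region. The QPSK alphabets, the SNR value, and the SNR convention are illustrative assumptions, not the operating points studied in the paper.

```python
import numpy as np
from math import erfc

def q_func(x):
    """Gaussian tail function Q(x)."""
    return 0.5 * erfc(x / np.sqrt(2))

def sum_alphabet_union_bound(c1, c2, theta, snr_db):
    """Union bound on the average error probability of the sum alphabet
    x1 + exp(j*theta)*x2 in AWGN, with uniform independent user symbols."""
    s = (c1[:, None] + c2[None, :] * np.exp(1j * theta)).ravel()
    n0 = (np.mean(np.abs(c1) ** 2) + np.mean(np.abs(c2) ** 2)) / 10 ** (snr_db / 10)
    sigma = np.sqrt(n0 / 2)                       # per-dimension noise standard deviation
    m = len(s)
    bound = 0.0
    for i in range(m):
        for j in range(m):
            if i != j:
                bound += q_func(np.abs(s[i] - s[j]) / (2 * sigma))
    return bound / m

qpsk = np.exp(2j * np.pi * np.arange(4) / 4)
angles = np.arange(0, 46, 5)
bounds = [sum_alphabet_union_bound(qpsk, qpsk, np.deg2rad(a), snr_db=8) for a in angles]
print(f"angle minimizing the union bound ~ {angles[int(np.argmin(bounds))]} degrees")
```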
Abstract:
The literature on pricing implicitly assumes an "infinite data" model, in which sources can sustain any data rate indefinitely. We assume a more realistic "finite data" model, in which sources occasionally run out of data; this leads to variable user data rates. Further, we assume that users have contracts with the service provider, specifying the rates at which they can inject traffic into the network. Our objective is to study how prices can be set such that a single link can be shared efficiently and fairly among users in a dynamically changing scenario where a subset of users occasionally has little data to send. User preferences are modelled by concave increasing utility functions. Further, we introduce two additional elements: a convex increasing disutility function and a convex increasing multiplicative congestion-penalty function. The disutility function takes the shortfall (contracted rate minus present rate) as its argument, and essentially encourages users to send traffic at their contracted rates, while the congestion-penalty function discourages heavy users from sending excess data when the link is congested. We obtain simple necessary and sufficient conditions on prices for fair and efficient link sharing; moreover, we show that a single price for all users achieves this. We illustrate the ideas using a simple experiment.
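A minimal numerical sketch of the model just described: a user facing a posted price picks its rate to maximize utility minus the disutility of the shortfall from its contracted rate minus a price-weighted congestion penalty. The functional forms and parameters below are illustrative guesses, not the ones analyzed in the paper.

```python
import numpy as np

def best_response(price, contracted, capacity, others_load):
    """Rate a single user would pick, by grid search, given the posted price."""
    r = np.linspace(0.0, contracted, 1000)
    utility = np.log(1.0 + r)                         # concave, increasing utility
    disutility = (contracted - r) ** 2                # convex in the shortfall
    load = (others_load + r) / capacity
    congestion = np.exp(load) - 1.0                   # convex, increasing congestion penalty
    net_benefit = utility - disutility - price * r * congestion
    return r[int(np.argmax(net_benefit))]

for price in (0.05, 0.2, 0.8):
    r_star = best_response(price, contracted=1.0, capacity=3.0, others_load=2.0)
    print(f"price {price:4.2f} -> chosen rate {r_star:.2f} (contracted 1.00)")
```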
Abstract:
We propose a Physical layer Network Coding (PNC) scheme for the K-user wireless Multiple Access Relay Channel, in which K source nodes want to transmit messages to a destination node D with the help of a relay node R. The proposed scheme involves (i) Phase 1 during which the source nodes alone transmit and (ii) Phase 2 during which the source nodes and the relay node transmit. At the end of Phase 1, the relay node decodes the messages of the source nodes and during Phase 2 transmits a many-to-one function of the decoded messages. To counter the error propagation from the relay node, we propose a novel decoder which takes into account the possibility of error events at R. It is shown that if certain parameters are chosen properly and if the network coding map used at R forms a Latin Hypercube, the proposed decoder offers the maximum diversity order of two. Also, it is shown that for a proper choice of the parameters, the proposed decoder admits fast decoding, with the same decoding complexity order as that of the reference scheme based on Complex Field Network Coding (CFNC). Simulation results indicate that the proposed PNC scheme offers a large gain over the CFNC scheme.
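For the K = 2 case, the Latin Hypercube condition on the relay's network coding map reduces to requiring a Latin square. The toy check below verifies that a modulo-sum map over Z_4 satisfies it; this map is a common example, not necessarily the one used in the paper.

```python
# K = 2 users over Z_4: the relay's map f(a, b) = (a + b) mod 4 written as a table.
M = 4
f = [[(a + b) % M for b in range(M)] for a in range(M)]

# Latin square check: every row and every column is a permutation of Z_4,
# so fixing either user's symbol leaves the relay's transmission unambiguous.
rows_ok = all(sorted(row) == list(range(M)) for row in f)
cols_ok = all(sorted(col) == list(range(M)) for col in zip(*f))
print("Latin square:", rows_ok and cols_ok)
```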
Abstract:
Database management systems offer a very reliable and attractive data organization for fast and economical information storage and processing for diverse applications. It is important that the information be easily accessible to users with varied backgrounds, professional as well as casual, through a suitable data sublanguage. The language adopted here (APPLE) is one such language for relational database systems; it is completely nonprocedural and well suited to users with minimal or no programming background. It is supported by an access path model which permits the user to formulate completely nonprocedural queries expressed solely in terms of attribute names. The data description language (DDL) and data manipulation language (DML) features of APPLE are also discussed. The underlying relational database has been implemented with the help of the DATATRIEVE-11 utility for record and domain definition, which is available on the PDP-11/35. The package is coded in Pascal and MACRO-11. Further, most of the limitations of the DATATRIEVE-11 utility have been eliminated in the interface package.
Abstract:
In this paper, we propose an extension to the I/O device architecture, as recommended in the PCI-SIG IOV specification, for virtualizing network I/O devices. The aim is to give a virtual machine fine-grained control on the I/O path of a shared device. The architecture allows virtual machines native access to I/O devices and provides device-level QoS hooks for controlling VM-specific device usage. For evaluating the architecture we use layered queuing network (LQN) models. We implement the architecture and evaluate it using simulation techniques on the LQN model to demonstrate the benefits. With the proposed architecture, the network I/O benefit is 60% more than what can be expected from the existing architecture. The proposed architecture also improves scalability in terms of the number of virtual machines sharing the I/O device.
Abstract:
A half-duplex constrained non-orthogonal cooperative multiple access (NCMA) protocol suitable for transmission of information from N users to a single destination in a wireless fading channel is proposed. Transmission in this protocol comprises a broadcast phase and a cooperation phase. In the broadcast phase, each user takes a turn broadcasting its data to all other users and the destination in an orthogonal fashion in time. In the cooperation phase, each user transmits a linear function of what it received from all other users as well as its own data. In contrast to the orthogonal extension of cooperative relay protocols to the cooperative multiple access channel, wherein at any point of time only one user is considered as a source and all the other users behave as relays and do not transmit their own data, the NCMA protocol relaxes the orthogonality built into those protocols and hence allows for a more spectrally efficient usage of resources. Code design criteria for achieving full diversity of N in the NCMA protocol are derived using pairwise error probability (PEP) analysis, and it is shown that this can be achieved with a minimum total time duration of 2N - 1 channel uses. Explicit construction of full diversity codes is then provided for an arbitrary number of users. Since the Maximum Likelihood decoding complexity grows exponentially with the number of users, the notion of g-group decodable codes is introduced for our setup and a set of necessary and sufficient conditions is also obtained.
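The sketch below lays out the NCMA transmission schedule described above for N users: N broadcast slots followed by N - 1 cooperation slots, i.e. 2N - 1 channel uses in total, with each cooperation slot carrying a linear function of all users' symbols. The combining weights are arbitrary illustrative choices, not the full-diversity code construction from the paper.

```python
import numpy as np

def ncma_schedule(symbols, weights):
    """Return the per-slot transmissions: N broadcast slots, then N - 1
    cooperation slots, each carrying a linear function of all N symbols."""
    n = len(symbols)
    broadcast = [symbols[k] for k in range(n)]        # slot k: user k sends its own symbol
    cooperation = list(weights @ symbols)             # each slot: a linear combination
    return broadcast + cooperation                    # 2N - 1 slots in total

n = 3
rng = np.random.default_rng(0)
x = rng.standard_normal(n) + 1j * rng.standard_normal(n)   # the users' information symbols
w = rng.standard_normal((n - 1, n))                        # illustrative combining weights
slots = ncma_schedule(x, w)
print(f"N = {n} users: {len(slots)} channel uses (2N - 1 = {2 * n - 1})")
```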
Abstract:
The traditional 'publish for free and pay to read' business model adopted by publishers of academic journals can lead to disparity in access to scholarly literature, exacerbated by rising journal costs and shrinking library budgets. However, although the 'pay to publish and read for free' business model of open-access publishing has helped to create a level playing field for readers, it does more harm than good in the developing world.
Abstract:
Relay selection for cooperative communications promises significant performance improvements and is therefore attracting considerable attention. While several criteria have been proposed for selecting one or more relays, distributed mechanisms that perform the selection have received relatively less attention. In this paper, we develop a novel, yet simple, asymptotic analysis of a splitting-based multiple access selection algorithm to find the single best relay. The analysis leads to simpler, alternative expressions for the average number of slots required to find the best user. By introducing a new 'contention load' parameter, the analysis shows that the parameter settings used in the existing literature can be improved upon. New and simple bounds are also derived. Furthermore, we propose a new algorithm that addresses the general problem of selecting the best Q >= 1 relays, and analyze and optimize it. Even for a large number of relays, the scalable algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. We also propose a new and simple scheme for the practically relevant case of discrete metrics. Altogether, our results develop a unifying perspective on the general problem of distributed selection in cooperative systems and several other multi-node systems.
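A simplified splitting contention sketch in the spirit of the algorithm analyzed above: in each slot, users whose metric falls in the current interval transmit, and the interval is narrowed based on idle/success/collision feedback until a single best user remains. The threshold rule (first split at 1 - 1/n, then bisection) is an illustrative simplification, not the optimized contention-load setting derived in the paper.

```python
import numpy as np

def select_best(metrics, max_slots=100):
    """Find the index of the largest metric using idle/success/collision feedback."""
    n = len(metrics)
    lo, hi = 0.0, 1.0                      # the best metric is known to lie in (lo, hi]
    threshold = 1.0 - 1.0 / n              # on average, one user lies above the first threshold
    for slot in range(1, max_slots + 1):
        contenders = np.flatnonzero((metrics > threshold) & (metrics <= hi))
        if len(contenders) == 1:           # success: the unique contender is the best user
            return int(contenders[0]), slot
        if len(contenders) == 0:           # idle: the best user lies below the threshold
            hi = threshold
        else:                              # collision: the best user lies above the threshold
            lo = threshold
        threshold = (lo + hi) / 2.0
    return None, max_slots

rng = np.random.default_rng(1)
metrics = rng.random(50)                   # i.i.d. uniform metrics, e.g. normalized channel gains
winner, slots = select_best(metrics)
print(f"selected user {winner} (true best: {int(np.argmax(metrics))}) in {slots} slots")
```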
Abstract:
Constellation Constrained (CC) capacity regions of a two-user Gaussian Multiple Access Channel (GMAC) have been recently reported. For such a channel, code pairs based on trellis coded modulation are proposed in this paper with M-PSK and M-PAM alphabet pairs, for arbitrary values of M, to achieve sum rates close to the CC sum capacity of the GMAC. In particular, the structure of the sum alphabets of M-PSK and M-PAM alphabet pairs is exploited to prove that, for certain angles of rotation between the alphabets, Ungerboeck labelling on the trellis of each user maximizes the guaranteed squared Euclidean distance of the sum trellis. Hence, such a labelling scheme can be used systematically to construct trellis code pairs that achieve sum rates close to the CC sum capacity. More importantly, it is shown for the first time that the ML decoding complexity at the destination is significantly reduced when M-PAM alphabet pairs are employed, with almost no loss in the sum capacity.
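The quantity driving the labelling argument above is the squared Euclidean distance structure of the sum alphabet. The sketch below computes the minimum squared Euclidean distance of the sum constellation of two unit-energy QPSK alphabets as a function of the relative rotation; the alphabets and angles are illustrative choices, and the trellis code construction itself is not reproduced.

```python
import numpy as np

def min_sq_distance(theta, m=4):
    """Minimum squared Euclidean distance between distinct points of the
    sum constellation of two unit-energy m-PSK alphabets, one rotated by theta."""
    psk = np.exp(2j * np.pi * np.arange(m) / m)
    s = (psk[:, None] + psk[None, :] * np.exp(1j * theta)).ravel()
    d2 = np.abs(s[:, None] - s[None, :]) ** 2
    np.fill_diagonal(d2, np.inf)           # ignore each point's distance to itself
    return d2.min()

for deg in (0, 15, 30, 45):
    print(f"rotation {deg:2d} deg: min squared distance {min_sq_distance(np.deg2rad(deg)):.3f}")
```

At 0 degrees the sum constellation has coincident points (distance zero), which is why a non-trivial rotation is needed before the guaranteed-distance argument applies.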
Abstract:
In this paper, Space-Time Block Codes (STBCs) with reduced Sphere Decoding Complexity (SDC) are constructed for two-user Multiple-Input Multiple-Output (MIMO) fading multiple access channels. In this set-up, both users employ identical STBCs and the destination performs sphere decoding for the symbols of the two users. First, we identify the positions of the zeros in the R matrix arising out of the Q-R decomposition of the lattice generator such that (i) the worst-case SDC (WSDC) and (ii) the average SDC (ASDC) are reduced. Then, a set of necessary and sufficient conditions on the lattice generator is provided such that the R matrix has zeros at the identified positions. Subsequently, explicit constructions of STBCs which result in reduced ASDC are presented. The rate (in complex symbols per channel use) of the proposed designs is at most 2/N_t, where N_t denotes the number of transmit antennas for each user. We also show that the class of STBCs from complex orthogonal designs (other than the Alamouti design) reduces the WSDC but not the ASDC.
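As a minimal illustration of why zeros in the R matrix matter for sphere decoding, the sketch below forms the equivalent channel matrix of a single-user Alamouti codeword and shows that its QR decomposition yields a diagonal R, so the symbols decouple. This only illustrates the mechanism; it is not the two-user MIMO-MAC STBC construction proposed in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
h1, h2 = rng.standard_normal(2) + 1j * rng.standard_normal(2)

# Equivalent channel for the Alamouti codeword [[x1, x2], [-x2*, x1*]] with one
# receive antenna (the second received sample is conjugated so the model is
# linear in x1 and x2). Its columns are orthogonal for any channel realization.
H_eq = np.array([[h1,          h2],
                 [np.conj(h2), -np.conj(h1)]])

_, R = np.linalg.qr(H_eq)
print(np.round(np.abs(R), 3))              # the off-diagonal entry of R is (numerically) zero
```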
Abstract:
Frequency-domain scheduling and rate adaptation have helped next-generation orthogonal frequency division multiple access (OFDMA) based wireless cellular systems such as Long Term Evolution (LTE) achieve significantly higher spectral efficiencies. To overcome the severe uplink feedback bandwidth constraints, LTE uses several techniques to reduce the feedback required by a frequency-domain scheduler about the channel state information of all subcarriers of all users. In this paper, we analyze the throughput achieved by the User Selected Subband feedback scheme of LTE, in which a user feeds back only the indices of its best M subbands and a single 4-bit estimate of the average rate achievable over the M selected subbands. In addition, we compare the performance with the subband-level feedback scheme of LTE, and highlight the role of the scheduler by comparing the performance of the unfair greedy scheduler with that of the proportional fair (PF) scheduler. Our analysis offers several insights into the working of the feedback reduction techniques used in LTE.
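A toy sketch of the best-M ("User Selected Subband") feedback idea: each user reports only the indices of its M best subbands together with one scalar rate estimate averaged over them, and a greedy scheduler assigns each subband to the reporting user with the largest estimate. The numbers of users and subbands, the Rayleigh fading model, and the unquantized rate estimate are illustrative assumptions, not LTE parameters.

```python
import numpy as np

rng = np.random.default_rng(3)
n_users, n_subbands, m_best = 8, 24, 4
snr = rng.exponential(1.0, size=(n_users, n_subbands))    # per-subband SNRs (Rayleigh power)
rate = np.log2(1.0 + snr)                                 # achievable rate per subband

# Best-M feedback: each user reports only its M best subbands, all with one
# average rate estimate (quantization of the estimate is ignored here).
reported = np.full((n_users, n_subbands), -np.inf)
for u in range(n_users):
    best = np.argsort(rate[u])[-m_best:]
    reported[u, best] = rate[u, best].mean()

covered = np.isfinite(reported).any(axis=0)               # subbands with at least one report
assignment = np.argmax(reported, axis=0)                  # greedy scheduler per subband
scheduled = np.where(covered, rate[assignment, np.arange(n_subbands)], 0.0)
print(f"subbands with no feedback at all: {int((~covered).sum())} of {n_subbands}")
print(f"mean scheduled rate {scheduled.mean():.2f} bit/s/Hz "
      f"(full-feedback upper bound {rate.max(axis=0).mean():.2f})")
```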
Abstract:
In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory in a way that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk data handling time spent during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a gain of 12% to 21% in throughput for static file serving and a 1.6 to 4 times gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
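The NABC system calls and kernel cache are not publicly available, so as a rough analogue of "fewer copies on the send path", the sketch below serves a static file with os.sendfile(), which lets the kernel move file data to the socket without copying it through user space. It illustrates the general copy-avoidance idea, not the NABC APIs themselves; the file path is a placeholder.

```python
import os
import socket

def serve_file_once(path, port=8080):
    """Serve one HTTP request for a single static file using os.sendfile()."""
    size = os.path.getsize(path)
    with socket.create_server(("", port)) as srv:
        conn, _ = srv.accept()
        conn.recv(4096)                                   # read (and ignore) the request
        header = (f"HTTP/1.0 200 OK\r\nContent-Length: {size}\r\n"
                  "Content-Type: application/octet-stream\r\n\r\n").encode()
        conn.sendall(header)                              # small header: a copy is acceptable
        with open(path, "rb") as f:
            offset, remaining = 0, size
            while remaining > 0:                          # bulk data: no user-space copy
                sent = os.sendfile(conn.fileno(), f.fileno(), offset, remaining)
                offset += sent
                remaining -= sent
        conn.close()

# serve_file_once("/tmp/example.bin")   # placeholder path; uncomment to try locally
```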