17 results for confidential information
at Indian Institute of Science - Bangalore - India
Abstract:
We consider a network in which several service providers offer wireless access to their respective subscribed customers through potentially multihop routes. If providers cooperate by jointly deploying and pooling their resources, such as spectrum and infrastructure (e.g., base stations), and agree to serve each other's customers, their aggregate payoffs, and individual shares, may substantially increase through opportunistic utilization of resources. The potential of such cooperation can, however, be realized only if each provider intelligently determines with whom it would cooperate, when it would cooperate, and how it would deploy and share its resources during such cooperation. Also, developing a rational basis for sharing the aggregate payoffs is imperative for the stability of the coalitions. We model such cooperation using the theory of transferable payoff coalitional games. We show that the optimum cooperation strategy, which involves the acquisition, deployment, and allocation of the channels and base stations (to customers), can be computed as the solution of a concave or an integer optimization. We next show that the grand coalition is stable in many different settings, i.e., if all providers cooperate, there is always an operating point that maximizes the providers' aggregate payoff, while offering each a share that removes any incentive to split from the coalition. The optimal cooperation strategy and the stabilizing payoff shares can be obtained in polynomial time by respectively solving the primals and the duals of the above optimizations, using distributed computations and limited exchange of confidential information among the providers. Numerical evaluations reveal that cooperation substantially enhances individual providers' payoffs under the optimal cooperation strategy and several different payoff sharing rules.
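The stability argument rests on linear-programming duality: the primal allocates the pooled resources to maximize the aggregate payoff, and the dual prices of the pooling constraints suggest payoff shares. The following is a minimal sketch of that idea under simplifying assumptions; the data, the single pooled-capacity constraint, and the unit payoffs are illustrative placeholders, not the paper's formulation.

```python
# Hedged sketch: providers pool channel capacity, a linear relaxation allocates
# the pooled resource to customer classes, and the dual price of the pooling
# constraint suggests payoff shares.  All numbers and the single-constraint
# structure are illustrative.  Requires SciPy >= 1.7 (method="highs" exposes duals).
import numpy as np
from scipy.optimize import linprog

revenue = np.array([5.0, 3.0, 4.0])             # payoff per unit of traffic served
channel_per_unit = np.array([1.0, 0.5, 0.8])    # channel use per unit served
contributed = np.array([4.0, 6.0])              # capacity each provider contributes

# Primal: maximize aggregate payoff (linprog minimizes, so negate revenues),
# subject to total channel use <= pooled capacity and per-class demand caps.
res = linprog(-revenue,
              A_ub=channel_per_unit.reshape(1, -1),
              b_ub=[contributed.sum()],
              bounds=[(0.0, 10.0)] * 3,
              method="highs")

aggregate_payoff = -res.fun
capacity_price = -res.ineqlin.marginals[0]      # dual price of pooled capacity
# A dual-based share: pay each provider the capacity price for its contribution.
# Any surplus left by binding demand caps would be split by the sharing rule in use.
shares = capacity_price * contributed
print(aggregate_payoff, shares)
```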
Abstract:
In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information, based upon periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect-information algorithm and is superior to other load balancing schemes based on the random sharing and Ni-Hwang algorithms. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
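As a rough illustration of scheduling on estimated rather than exact state, the sketch below interpolates each buddy's last reported queue length using its reported arrival and service rates, and transfers load only when the estimated imbalance exceeds a threshold. The names, the estimation rule, and the threshold are assumptions for illustration, not ELISA's exact specification.

```python
# Hedged sketch of scheduling on *estimated* neighbour state, in the spirit of
# ELISA: exact queue lengths are exchanged only at status-exchange epochs; in
# between, each node estimates a buddy's load from its last report.
from dataclasses import dataclass

@dataclass
class BuddyReport:
    queue_len: float      # queue length at the last status exchange
    arrival_rate: float   # jobs/sec reported by the buddy
    service_rate: float   # jobs/sec the buddy can process
    report_time: float    # when the report was received

def estimated_load(report: BuddyReport, now: float) -> float:
    """Estimate a buddy's current queue length from its last exact report."""
    elapsed = now - report.report_time
    growth = (report.arrival_rate - report.service_rate) * elapsed
    return max(0.0, report.queue_len + growth)

def transfer_decision(my_queue: float, buddies: dict[str, BuddyReport],
                      now: float, threshold: float = 2.0):
    """Pick the buddy with the smallest estimated load, if the gap is large enough."""
    estimates = {name: estimated_load(r, now) for name, r in buddies.items()}
    target, target_load = min(estimates.items(), key=lambda kv: kv[1])
    if my_queue - target_load > threshold:
        return target    # transfer one or more jobs to this buddy
    return None          # imbalance too small; avoid the transfer overhead
```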
Abstract:
Protocols for secure archival storage are becoming increasingly important as the use of digital storage for sensitive documents gains wider practice. Wong et al. [8] combined verifiable secret sharing with proactive secret sharing without reconstruction and proposed a verifiable secret redistribution protocol for long-term storage. However, their protocol requires that each of the receivers be honest during redistribution. We proposed [3] an extension to their protocol wherein we relaxed the requirement that all the recipients be honest to the condition that only a simple majority amongst the recipients need be honest during the (re)distribution processes. Further, both of these protocols use Feldman's approach for achieving integrity during the (re)distribution processes. In this paper, we present a revised version of our earlier protocol, and its adaptation to incorporate Pedersen's approach instead of Feldman's, thereby achieving information-theoretic secrecy while retaining integrity guarantees.
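The switch from Feldman's to Pedersen's commitments is what buys information-theoretic secrecy: a Feldman commitment g^a leaks information about the committed coefficient to an unbounded adversary, whereas a Pedersen commitment g^a·h^r is perfectly hiding. A toy sketch of Pedersen commitments over a tiny prime-order group follows; the group parameters and usage are purely illustrative and are not the protocol's actual setup.

```python
# Toy Pedersen commitment: perfectly (information-theoretically) hiding and
# computationally binding.  The tiny group below (P = 23, Q = 11) is purely
# illustrative; a real deployment would use a standardized large prime-order
# group, and H would come from a setup in which nobody knows log_G(H).
import secrets

P = 23          # small safe prime, P = 2*Q + 1 (toy parameter)
Q = 11          # prime order of the quadratic-residue subgroup
G = 4           # generator of that subgroup
H = pow(G, secrets.randbelow(Q - 1) + 1, P)   # second generator (toy setup only)

def commit(value: int, randomness: int) -> int:
    """C = G^value * H^randomness mod P (value and randomness taken mod Q)."""
    return (pow(G, value % Q, P) * pow(H, randomness % Q, P)) % P

def verify(commitment: int, value: int, randomness: int) -> bool:
    return commitment == commit(value, randomness)

# Usage: a dealer commits to a share; the holder later opens and verifies it.
share, r = 7, secrets.randbelow(Q)
C = commit(share, r)
assert verify(C, share, r)
```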
Abstract:
A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix of the conductor, we suggest the distribution of maximum information entropy, constrained by the following physical requirements: (1) flux conservation, (2) time-reversal invariance, and (3) scaling, with the length of the conductor, of the two lowest cumulants of ζ, where the dimensionless resistance equals sinh²ζ. The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.
Abstract:
This paper describes the design and implementation of ADAMIS (‘A database for medical information systems’). ADAMIS is a relational database management system for a general hospital environment. Apart from the usual database (DB) facilities of data definition and data manipulation, ADAMIS supports a query language called the ‘simplified medical query language’ (SMQL) which is completely end-user oriented and highly non-procedural. Other features of ADAMIS include provision of facilities for statistics collection and report generation. ADAMIS also provides adequate security and integrity features and has been designed mainly for use on interactive terminals.
Abstract:
A novel method is proposed to treat the problem of the random resistance of a strictly one-dimensional conductor with static disorder. For the probability distribution of the transfer matrix R of the conductor we propose a distribution of maximum information entropy, constrained by the following physical requirements: (1) flux conservation, (2) time-reversal invariance, and (3) scaling, with the length of the conductor, of the two lowest cumulants of ω, where R = exp(i ω⃗ · Ĵ). The preliminary results discussed in the text are in qualitative agreement with those obtained by sophisticated microscopic theories.
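For reference, the maximum-information-entropy construction invoked here has the standard exponential-family form: subject to moment constraints, the entropy-maximizing density is exponential in the constrained quantities. The notation below is generic (an illustrative reminder, not the paper's exact parameterization):

```latex
% Generic maximum-entropy construction (illustrative notation): maximize the
% entropy of p subject to constrained moments c_k(L), which encode the required
% scaling with the conductor length L.
\begin{aligned}
&\text{maximize}\quad S[p] = -\int p(\omega)\,\ln p(\omega)\,d\mu(\omega)\\
&\text{subject to}\quad \int f_k(\omega)\,p(\omega)\,d\mu(\omega) = c_k(L),\qquad k = 1, 2,\\
&\Longrightarrow\quad p_L(\omega) = \frac{1}{Z(\lambda_1,\lambda_2)}
  \exp\!\bigl(-\lambda_1 f_1(\omega) - \lambda_2 f_2(\omega)\bigr),
\end{aligned}
```

where μ is an invariant measure compatible with flux conservation and time-reversal invariance, f₁ and f₂ are the two lowest cumulant statistics, and the Lagrange multipliers λ_k are fixed by the constraints.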
Abstract:
This paper presents a chance-constrained programming approach for constructing maximum-margin classifiers which are robust to interval-valued uncertainty in training examples. The methodology ensures that uncertain examples are classified correctly with high probability by employing chance constraints. The main contribution of the paper is to pose the resultant optimization problem as a Second Order Cone Program by using large-deviation inequalities due to Bernstein. Apart from the support and mean of the uncertain examples, these Bernstein-based relaxations make no further assumptions on the underlying uncertainty. Classifiers built using the proposed approach are less conservative, yield higher margins, and hence are expected to generalize better than existing methods. Experimental results on synthetic and real-world datasets show that the proposed classifiers are better equipped to handle interval-valued uncertainty than the state of the art.
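A generic second-order cone formulation for interval-uncertain examples can be written directly with cvxpy. The sketch below uses a simple norm-penalized robust margin constraint as a stand-in for the paper's Bernstein-based relaxation; the synthetic data, the parameter kappa, and the exact constraint form are assumptions for illustration.

```python
# Hedged sketch: a maximum-margin classifier with second-order cone constraints
# that make the margin robust to interval-valued features x_i in
# [mu_i - delta_i, mu_i + delta_i].  This is a generic robust-SVM SOCP, NOT the
# paper's Bernstein relaxation; kappa and the data are illustrative.
import numpy as np
import cvxpy as cp

rng = np.random.default_rng(0)
n, d = 40, 2
mu = rng.normal(size=(n, d))                    # interval centres of the examples
delta = 0.1 * np.abs(rng.normal(size=(n, d)))   # interval half-widths
y = np.sign(mu[:, 0] + 0.5 * mu[:, 1] + 0.1 * rng.normal(size=n))

w, b = cp.Variable(d), cp.Variable()
xi = cp.Variable(n, nonneg=True)                # slack variables
C, kappa = 1.0, 1.0                             # kappa tunes the robustness level

constraints = []
for i in range(n):
    # A norm penalty on the interval-scaled weights enforces the margin against
    # perturbations within the intervals (a second-order cone constraint).
    constraints.append(
        y[i] * (mu[i] @ w + b)
        - kappa * cp.norm(cp.multiply(delta[i], w), 2) >= 1 - xi[i]
    )

prob = cp.Problem(cp.Minimize(0.5 * cp.sum_squares(w) + C * cp.sum(xi)), constraints)
prob.solve()
print(w.value, b.value)
```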
Abstract:
We propose a quantity called information ambiguity that plays the same role in worst-case information-theoretic analyses as the well-known notion of information entropy plays in the corresponding average-case analyses. We prove various properties of information ambiguity and illustrate its usefulness in performing the worst-case analysis of a variant of the distributed source coding problem.
Abstract:
Multielectrode neurophysiological recording and high-resolution neuroimaging generate multivariate data that are the basis for understanding the patterns of neural interactions. How to extract directions of information flow in brain networks from these data remains a key challenge. Research over the last few years has identified Granger causality as a statistically principled technique to furnish this capability. The estimation of Granger causality currently requires autoregressive modeling of neural data. Here, we propose a nonparametric approach based on widely used Fourier and wavelet transforms to estimate both pairwise and conditional measures of Granger causality, eliminating the need for explicit autoregressive data modeling. We demonstrate the effectiveness of this approach by applying it to synthetic data generated by network models with known connectivity and to local field potentials recorded from monkeys performing a sensorimotor task.
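The key object in this nonparametric route is the spectral density matrix S(f), estimated directly from Fourier or wavelet transforms of the data and then factorized (e.g., by Wilson's spectral factorization) as S(f) = H(f) Σ H*(f). Geweke's frequency-domain Granger causality is read off from the factors; as a generic reminder (standard bivariate form, not tied to the paper's exact derivation, with x, y denoting the two recorded signals):

```latex
% Bivariate spectral Granger causality from Y to X at frequency f (Geweke form),
% computed from the factorization S(f) = H(f)\,\Sigma\,H^{*}(f):
f_{Y \to X}(f) \;=\; \ln
  \frac{S_{xx}(f)}
       {S_{xx}(f) \;-\; \bigl(\Sigma_{yy} - \Sigma_{xy}^{2}/\Sigma_{xx}\bigr)\,
        \lvert H_{xy}(f)\rvert^{2}}
```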
Abstract:
We consider the problem of transmission of several discrete sources over a multiple access channel (MAC) with side information at the sources and the decoder. Source-channel separation does not hold for this channel. Sufficient conditions are provided for transmission of sources with a given distortion. The channel could have continuous alphabets (Gaussian MAC is a special case). Various previous results are obtained as special cases.
Abstract:
Researchers are typically assessed from a researcher-centric perspective, by quantifying a researcher's contribution to the field; citation and publication counts are typical examples. We propose a student-centric measure to assess researchers on their mentoring abilities. Our approach quantifies the benefits bestowed by researchers upon their students by characterizing the publication dynamics of research advisor-student interactions in author collaboration networks. We show that our measures could help aspiring students identify research advisors with proven mentoring skills. Our measures also help stratify researchers who have similar ranks under typical indices such as publication and citation counts, while being independent of their direct influences.
Abstract:
In a three-player quantum 'Dilemma' game, each player takes independent decisions to maximize his or her individual gain. The optimal strategy in the quantum version of this game has a higher payoff compared to its classical counterpart. However, this advantage is lost if the initial qubits provided to the players come from a noisy source. We have experimentally implemented the three-player quantum version of the 'Dilemma' game as described by Johnson [N.F. Johnson, Phys. Rev. A 63 (2001) 020302(R)] using a nuclear magnetic resonance quantum information processor and have experimentally verified that the payoff of the quantum game for various levels of corruption matches the theoretical payoff.
Abstract:
In this paper, we are concerned with energy-efficient area monitoring using information coverage in wireless sensor networks, where collaboration among multiple sensors can enable accurate sensing of a point in a given area-to-monitor even if that point falls outside the physical coverage of all the sensors. We refer to any set of sensors that can collectively sense all points in the entire area-to-monitor as a full area information cover. We first propose a low-complexity heuristic algorithm to obtain full area information covers. Using these covers, we then obtain the optimum schedule for activating the sensing activity of the various sensors that maximizes the sensing lifetime. Scheduling sensor activity with the optimum schedules obtained from the proposed algorithm is shown to achieve significantly longer sensing lifetimes than scheduling based on physical coverage. Relaxing the full area coverage requirement to partial area coverage (e.g., treating 95% area coverage as adequate instead of 100%) further enhances the lifetime.
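Once a collection of full area information covers is available, maximizing the sensing lifetime can be cast as a small linear program: choose how long to activate each cover so that no sensor's total active time exceeds its energy budget. A hedged sketch of that scheduling step follows; the cover data and the unit-energy-per-unit-time assumption are illustrative, not the paper's exact model.

```python
# Hedged sketch: lifetime-maximizing activation schedule over precomputed
# information covers.  Each cover is a set of sensor ids; activating cover j
# for t_j time units consumes t_j energy at every sensor in it (illustrative
# unit-rate energy model).  Maximize sum_j t_j subject to per-sensor budgets.
import numpy as np
from scipy.optimize import linprog

covers = [{0, 1}, {1, 2, 3}, {0, 3}]      # hypothetical full-area information covers
energy = np.array([1.0, 1.5, 1.0, 2.0])   # per-sensor energy budgets (time units)
n_sensors, n_covers = len(energy), len(covers)

# Constraint matrix: A[i, j] = 1 if sensor i belongs to cover j.
A = np.zeros((n_sensors, n_covers))
for j, cover in enumerate(covers):
    for i in cover:
        A[i, j] = 1.0

# linprog minimizes, so minimize -(t_1 + ... + t_m).
res = linprog(-np.ones(n_covers), A_ub=A, b_ub=energy,
              bounds=[(0, None)] * n_covers, method="highs")
activation_times = res.x
print("sensing lifetime:", activation_times.sum())
```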
Abstract:
In this paper, we are concerned with algorithms for scheduling the sensing activity of sensor nodes that are deployed to sense/measure point targets in wireless sensor networks using information coverage. Defining a set of sensors which can collectively sense a target accurately as an information cover, we propose an algorithm to obtain a Disjoint Set of Information Covers (DSIC), which achieves a longer network lifetime than the set of covers obtained using an Exhaustive-Greedy-Equalized Heuristic (EGEH) algorithm proposed recently in the literature. We also present a detailed complexity comparison between the DSIC and EGEH algorithms.
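The flavour of constructing disjoint covers can be conveyed by a simple greedy pass: repeatedly assemble a set of still-unused sensors that information-covers every target, then retire those sensors so subsequent covers stay disjoint. The coverage test and the greedy order below are placeholders for illustration, not the DSIC algorithm itself.

```python
# Hedged sketch of greedily building *disjoint* information covers for point
# targets.  `can_sense(cover, target)` stands in for the information-coverage
# test (e.g., a fused-estimation error falling below a threshold); both the
# test and the greedy order are placeholders, not the paper's algorithm.
def build_disjoint_covers(sensors, targets, can_sense):
    """Each pass assembles one cover from still-unused sensors."""
    covers = []
    available = set(sensors)
    while True:
        cover = set()
        for target in targets:
            if can_sense(cover, target):
                continue                 # already covered by this pass's sensors
            for s in list(available - cover):
                cover.add(s)
                if can_sense(cover, target):
                    break
            if not can_sense(cover, target):
                return covers            # unused sensors cannot cover every target
        if not cover:
            return covers                # no targets (or nothing consumed); stop
        covers.append(cover)
        available -= cover               # disjointness: retire this cover's sensors
```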