457 results for Combinatorial Hodge theory
at Queensland University of Technology - ePrints Archive
Abstract:
Dispersing a data object into a set of data shares is an elemental stage in distributed communication and storage systems. In comparison to data replication, data dispersal with redundancy saves space and bandwidth. Moreover, dispersing a data object across distinct communication links or storage sites limits adversarial access to the whole data and tolerates the loss of some data shares. Existing data dispersal schemes are mostly based on various mathematical transformations of the data, which incur high computation overhead. This paper presents a novel data dispersal scheme in which each part of a data object is replicated, without encoding, into a subset of data shares according to combinatorial design theory. In particular, data parts are mapped to points and data shares to lines of a projective plane. Data parts are then distributed to data shares using the point-line incidence relations of the plane, so that certain subsets of data shares collectively possess all data parts. The presented scheme combines combinatorial design theory with an inseparability transformation to achieve secure data dispersal at reduced computation, communication and storage costs. Rigorous formal analysis and an experimental study demonstrate significant cost benefits of the presented scheme in comparison to existing methods.
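As a minimal illustration of the point-line dispersal idea (our own toy construction, not the paper's implementation), the sketch below uses the smallest projective plane, the Fano plane PG(2,2): 7 data parts map to its points and 7 shares to its lines, so each share replicates exactly 3 parts and every part lands in exactly 3 shares.

```python
# Toy point-line dispersal over the Fano plane (illustrative: 7 parts,
# 7 shares; the indexing here is our own assumption, not the paper's).
FANO_LINES = [
    {0, 1, 2}, {0, 3, 4}, {0, 5, 6},
    {1, 3, 5}, {1, 4, 6}, {2, 3, 6}, {2, 4, 5},
]

def disperse(parts):
    """Replicate each of the 7 parts, unencoded, into the 3 shares
    (lines) incident with its point."""
    assert len(parts) == 7
    return [[parts[p] for p in sorted(line)] for line in FANO_LINES]

def recover(shares):
    """Reassemble all 7 parts from a share set that covers every point
    (here, all 7 shares, for simplicity)."""
    seen = {}
    for line, share in zip(FANO_LINES, shares):
        for p, chunk in zip(sorted(line), share):
            seen[p] = chunk
    return [seen[p] for p in range(7)]
```

Each part appears in 3 of the 7 shares, so losing any 2 shares still leaves a copy of every part, while a single compromised share exposes only 3 of the 7 parts.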
Abstract:
We propose a keyless and lightweight message transformation scheme based on combinatorial design theory for the confidentiality of a message transmitted in multiple parts through a network with multiple independent paths, or of data stored in multiple parts by a set of independent storage services such as cloud providers. Our combinatorial scheme disperses a message into v output parts so that (k-1) or fewer parts reveal no information about any message part, and the message can only be recovered by a party who possesses all v output parts. The combinatorial scheme generates an XOR transformation structure to disperse the message into v output parts; inversion is done by applying the same XOR transformation structure to the output parts. The structure is generated using generalized quadrangles from design theory, which represent symmetric point-line incidence relations in a projective plane. We randomize our solution by adding a random salt value and dispersing it together with the message. We show that a passive adversary capable of accessing (k-1) communication links or storage services gains no advantage, so that the scheme is indistinguishable under adaptive chosen ciphertext attack (IND-CCA2).
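The all-or-nothing hiding property can be illustrated with the basic XOR split below (a generic sketch of the idea only; the paper's actual XOR structure is derived from generalized quadrangles, which we do not reproduce here):

```python
import secrets

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def disperse(message, v):
    # v-1 uniformly random pads; the final part XORs the message with all
    # pads, so any v-1 parts together are independent of the message.
    pads = [secrets.token_bytes(len(message)) for _ in range(v - 1)]
    last = message
    for pad in pads:
        last = xor_bytes(last, pad)
    return pads + [last]

def recover(parts):
    # Inversion applies the same XOR structure to all output parts.
    out = parts[0]
    for part in parts[1:]:
        out = xor_bytes(out, part)
    return out
```

Only the holder of all v parts can invert; as in the paper, a random salt could be dispersed alongside the message to randomize repeated transmissions.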
Abstract:
We consider a Cooperative Intrusion Detection System (CIDS), a distributed AIS-based (Artificial Immune System) IDS in which nodes collaborate over a peer-to-peer overlay network. The AIS uses the negative selection algorithm for the selection of detectors (e.g., vectors of features such as CPU utilization, memory usage and network activity). For better detection performance, selecting all possible detectors for a node is desirable, but it may not be feasible due to storage and computational overheads. Limiting the number of detectors, on the other hand, risks missing attacks. We present a scheme for the controlled and decentralized division of detector sets in which each IDS is assigned to a region of the feature space, and we investigate the trade-off between scalability and robustness of detector sets. We address the problem of self-organization in CIDS so that each node generates a distinct set of detectors to maximize the coverage of the feature space, while pairs of nodes exchange their detector sets to provide a controlled level of redundancy. Our contribution is twofold. First, we use deterministic techniques from combinatorial design theory and graph theory, based on Symmetric Balanced Incomplete Block Designs, Generalized Quadrangles and Ramanujan Expander Graphs, to decide how many and which detectors are exchanged between which pairs of IDS nodes. Second, we use a classical epidemic model (the SIR model) to show how properties of these deterministic techniques can help reduce the attack spread rate.
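A toy discrete-time SIR iteration (the parameters here are our own illustrative choices, not the paper's) shows the qualitative effect being exploited: lowering the effective contact rate between IDS nodes shrinks the final size of an attack outbreak.

```python
def sir_step(s, i, r, beta, gamma):
    # Classical SIR dynamics: susceptible nodes are compromised at rate
    # beta*s*i; compromised nodes recover (are cleaned) at rate gamma.
    new_inf = beta * s * i
    new_rec = gamma * i
    return s - new_inf, i + new_inf - new_rec, r + new_rec

def final_outbreak_size(beta, gamma, steps=20000):
    s, i, r = 0.999, 0.001, 0.0  # start with 0.1% of nodes compromised
    for _ in range(steps):
        s, i, r = sir_step(s, i, r, beta, gamma)
    return r  # fraction of nodes ever compromised
```

A sparser deterministic exchange topology corresponds to a lower effective beta and hence a smaller final outbreak.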
Abstract:
Secure communications in distributed Wireless Sensor Networks (WSN) operating under adversarial conditions necessitate efficient key management schemes. In the absence of a priori knowledge of the post-deployment network configuration, and due to limited resources at sensor nodes, key management schemes cannot be based on post-deployment computations. Instead, a list of keys, called a key-chain, is distributed to each sensor node before deployment. For secure communication, either two nodes should have a key in common in their key-chains, or they should establish a key through a secure path on which every link is secured with a key. We first provide a comparative survey of well-known key management solutions for WSN. Probabilistic, deterministic and hybrid key management solutions are presented and compared based on their security properties and resource usage. We provide a taxonomy of solutions and identify their trade-offs to conclude that there is no one-size-fits-all solution. Second, we design and analyze deterministic and hybrid techniques to distribute pair-wise keys to sensor nodes before deployment. We present novel deterministic and hybrid approaches based on combinatorial design theory and graph theory for deciding how many and which keys to assign to each key-chain before sensor network deployment. Performance and security of the proposed schemes are studied both analytically and computationally. Third, we address the key establishment problem in WSN, in which key agreement algorithms without authentication must be executed over a secure path. The length of the secure path impacts the power consumption and the initialization delay of a WSN before it becomes operational. We formulate the key establishment problem as a constrained bi-objective optimization problem, break it into two sub-problems, and show that both are NP-Hard and MAX-SNP-Hard.
Having established these inapproximability results, we focus on the authentication problem that prevents key agreement algorithms from being used directly over a wireless link. We present a fully distributed algorithm in which each pair of nodes can establish an authenticated key by using their neighbors as witnesses.
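The deterministic guarantee behind combinatorial key predistribution can be sketched as follows: taking key-chains to be the lines of a projective plane PG(2,q), a symmetric design, ensures that any two key-chains share exactly one key. The construction below, for prime q, is our own illustrative sketch rather than the thesis's implementation.

```python
def key_chains_from_projective_plane(q):
    # Points of PG(2, q) in normalized homogeneous coordinates:
    # q^2 + q + 1 points in total (q must be prime here).
    points = [(x, y, 1) for x in range(q) for y in range(q)]
    points += [(x, 1, 0) for x in range(q)]
    points.append((1, 0, 0))
    index = {p: i for i, p in enumerate(points)}
    # Lines share the same parametrization; line (a, b, c) contains the
    # keys (points) satisfying a*x + b*y + c*z = 0 (mod q).
    return [
        {index[p] for p in points
         if (a * p[0] + b * p[1] + c * p[2]) % q == 0}
        for (a, b, c) in points
    ]
```

For q=3 this yields 13 key-chains of 4 keys each, drawn from a pool of 13 keys, and any two sensor nodes deterministically share exactly one key.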
Abstract:
A high level of control over quantum dot (QD) properties such as size and composition during fabrication is required to precisely tune the eventual electronic properties of the QD. Nanoscale synthesis efforts and theoretical studies of electronic properties are traditionally treated quite separately. In this paper, a combinatorial approach has been taken to relate the process synthesis parameters to the electron confinement properties of the QDs. First, hybrid numerical calculations with different influx parameters for Si1-xCx QDs were carried out to simulate the changes in carbon content x and size. Second, the ionization energy theory was applied to understand the electronic properties of Si1-xCx QDs. Third, stoichiometric (x=0.5) silicon carbide QDs were grown by means of inductively coupled plasma-assisted rf magnetron sputtering. Finally, the effects of QD size and elemental composition were incorporated in the ionization energy theory to explain the evolution of the Si1-xCx photoluminescence spectra. These results are important for the development of deterministic synthesis approaches for self-assembled nanoscale quantum confinement structures.
Abstract:
Women with a disability continue to experience social oppression and domestic violence as a consequence of gender and disability dimensions. Current explanations of domestic violence and disability inadequately account for several features that lead women who have a disability to experience violent situations. This article incorporates both disability theory and material feminist theory as an alternative to the dominant approaches (the psychological and sociological traditions) to conceptualising domestic violence. The paper is informed by a study examining the nature and perceptions of violence against women with a physical impairment. The emerging analytical framework, integrating material feminist interpretations and disability theory, provided a basis for exploring gender and disability dimensions. Insight was also provided by the women in the study who identified as having a disability and who explained domestic violence in terms of a gendered and disabling experience. The article argues that material feminist interpretations and disability theory, with their emphasis on gender relations, disablism and poverty, should be used as an alternative tool for exploring the nature and consequences of violence against women with a disability.
Abstract:
This study develops a life-cycle model in which investors make investment decisions in a realistic environment. Model results show that personal illiquid projects (housing and children), fixed costs (once-off/per-period participation costs plus variable/fixed transaction costs) and endogenous risky human capital (with permanent, transitory and disastrous shocks) together are able to address both the non-participation puzzle and the age-effects puzzle. Empirical implications of the model are examined using Heckman’s two-step method with the latest five Surveys of Consumer Finances (SCF). Regression results show that liquidity, informational cost and human capital are indeed the major determinants of participation and asset allocation decisions at different stages of an investor’s life.
Abstract:
The issue of ‘rigour vs. relevance’ in IS research has generated an intense, heated debate for over a decade. It is possible to identify, however, only a limited number of contributions on how to increase the relevance of IS research without compromising its rigour. Based on a lifecycle view of IS research, we propose the notion of ‘reality checks’ in order to review IS research outcomes in the light of actual industry demands. We assume that five barriers impact the efficient transfer of IS research outcomes; they are lack of awareness, lack of understandability, lack of relevance, lack of timeliness, and lack of applicability. In seeking to understand the effect of these barriers on the transfer of mature IS research into practice, we used focus groups. We chose DeLone and McLean’s IS success model as our stimulus because it is one of the more widely researched areas of IS.