51 results for Virtual storage (Computer science)
Self-organized public key management in MANETs with enhanced security and without certificate-chains
Abstract:
In self-organized public key management approaches, public key verification is achieved through verification routes formed by the transitive trust relationships among the network principals. Most existing approaches do not distinguish among the available verification routes. Moreover, to ensure stronger security, it is important to choose an appropriate metric to evaluate the strength of a route. Furthermore, all existing self-organized approaches use certificate chains for authentication, which are highly resource-consuming. In this paper, we present a self-organized certificate-less on-demand public key management (CLPKM) protocol, which aims to provide the strongest verification routes for authentication purposes. It bounds the compromise probability of a verification route by restricting its length, and it evaluates the strength of a verification route using its end-to-end trust value. The other important aspect of the protocol is that it uses a MAC function instead of RSA certificates to perform public key verifications. By doing this, the protocol saves considerable computation power, bandwidth, and storage space. We have used an extended strand space model to analyze the correctness of the protocol. The analytical, simulation, and testbed implementation results confirm the effectiveness of the proposed protocol. (c) 2014 Elsevier B.V. All rights reserved.
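To make the route-selection idea concrete, here is a minimal Python sketch of the two ideas named in the abstract: choosing the strongest verification route by end-to-end trust subject to a length bound, and checking an identity-to-key binding with a MAC instead of an RSA certificate. All names, trust values, and the product-of-hop-trusts metric are illustrative assumptions, not the CLPKM protocol itself.

import hmac, hashlib

# Hypothetical illustration (not the CLPKM wire protocol): pick the strongest
# verification route by end-to-end trust, subject to a maximum route length,
# then check a (node, public key) binding with an HMAC instead of a certificate chain.

def strongest_route(routes, max_len):
    """routes: list of routes, each a list of per-hop trust values in [0, 1]."""
    candidates = [r for r in routes if len(r) <= max_len]
    if not candidates:
        return None
    def end_to_end_trust(route):
        # Assumed metric: product of the per-hop trust values along the route.
        value = 1.0
        for t in route:
            value *= t
        return value
    return max(candidates, key=end_to_end_trust)

def mac_verify(shared_key: bytes, node_id: bytes, public_key: bytes, tag: bytes) -> bool:
    # MAC-based verification of a public key binding, standing in for an RSA certificate check.
    expected = hmac.new(shared_key, node_id + public_key, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

if __name__ == "__main__":
    routes = [[0.9, 0.9], [0.99, 0.8, 0.7], [0.95]]
    print(strongest_route(routes, max_len=2))   # -> [0.95]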
Abstract:
A link-level reliable multicast requires a channel access protocol to resolve the collision of feedback messages sent by multicast data receivers. Several deterministic media access control protocols have been proposed that attain high reliability but incur large delay, while other protocols give only a probabilistic guarantee of reliability but have the least delay. In this paper, we propose a virtual token-based channel access and feedback protocol (VTCAF) for link-level reliable multicasting. The VTCAF protocol introduces a virtual (implicit) token passing mechanism based on carrier sensing to avoid collisions between feedback messages. Its delay performance is improved by reducing the number of feedback messages. The VTCAF protocol is also parametric in nature and can easily trade off reliability against delay as required by the underlying application. Such a cross-layer design approach is useful for a variety of multicast applications that require reliable communication with different levels of reliability and delay performance. We have analyzed our protocol to evaluate various performance parameters at different packet loss rates and compared its performance with that of existing protocols. Our protocol has also been simulated using the Castalia network simulator to evaluate the same performance parameters. Simulation and analytical results together show that the VTCAF protocol considerably reduces average access delay while ensuring very high reliability at the same time.
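The following Python sketch illustrates only the general idea of an implicit token for collision-free feedback: receivers take turns in a fixed, known order, stay silent when they already have the packet, and a silent turn costs only a short carrier-sense gap. The slot durations, loss model, and parameters are invented for illustration and are not VTCAF's actual values or mechanism.

import random

# Hypothetical sketch: a "virtual token" visits receivers in a fixed order, so
# feedback (NACK) transmissions never collide; receivers that got the packet
# skip their turn, which keeps the feedback phase short.

SLOT_US = 50       # time to send one NACK (illustrative)
SENSE_US = 10      # carrier-sense gap consumed by a silent turn (illustrative)
LOSS_RATE = 0.1    # per-receiver packet loss probability (illustrative)

def feedback_round(receiver_ids, loss_rate=LOSS_RATE, rng=random.Random(1)):
    """Return (nack_senders, elapsed_us) for one multicast data packet."""
    elapsed = 0
    nacks = []
    for rid in receiver_ids:              # implicit token order, e.g., by receiver ID
        lost = rng.random() < loss_rate
        if lost:
            nacks.append(rid)
            elapsed += SLOT_US            # this receiver uses its slot to send a NACK
        else:
            elapsed += SENSE_US           # silent turn: others sense the idle channel and move on
    return nacks, elapsed

if __name__ == "__main__":
    print(feedback_round(list(range(8))))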
Abstract:
The problem addressed in this paper is sound, scalable, demand-driven null-dereference verification for Java programs. Our approach consists conceptually of a base analysis plus two major extensions for enhanced precision. The base analysis is a dataflow analysis in which we propagate formulas in the backward direction from a given dereference and compute a necessary condition at the entry of the program for the dereference to be potentially unsafe. The extensions are motivated by the presence of certain "difficult" constructs in real programs, e.g., virtual calls with too many candidate targets and library method calls, which require excessive time to analyze fully. The base analysis is therefore configured to skip such a difficult construct when it is encountered, by dropping all information tracked so far that could potentially be affected by the construct. Our extensions are essentially more precise ways to account for the effect of these constructs on the tracked information, without requiring their full analysis. The first extension is a novel scheme to transmit formulas along certain kinds of def-use edges, while the second is based on manually constructed backward-direction summary functions of library methods. We have implemented our approach and applied it to a set of real-life benchmarks. The base analysis is on average able to declare about 84% of dereferences in each benchmark as safe, while the two extensions push this number up to 91%. (C) 2014 Elsevier B.V. All rights reserved.
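The backward propagation idea can be sketched on a toy straight-line IR: starting from "the dereferenced variable is null", walk statements in reverse, substitute through copies, declare the dereference safe when the condition becomes unsatisfiable, and drop tracked facts at a "difficult" construct. The IR, statement forms, and helper below are invented for illustration and are far simpler than the paper's formula-based analysis.

# Hypothetical sketch of backward necessary-condition propagation (toy IR).
def necessary_condition_at_entry(stmts, deref_var):
    cond = {deref_var}                  # variables that must be null for the dereference to be unsafe
    for op, *args in reversed(stmts):
        if op == "copy":                # x = y : a null x afterwards requires a null y beforehand
            dst, src = args
            if dst in cond:
                cond.discard(dst)
                cond.add(src)
        elif op == "new":               # x = new T() : x cannot be null afterwards
            (dst,) = args
            if dst in cond:
                return None             # condition unsatisfiable -> dereference is safe
        elif op == "difficult_call":    # e.g., a virtual call with too many targets:
            cond -= set(args)           # conservatively drop facts it might affect
    return cond                         # necessary condition at program entry

if __name__ == "__main__":
    prog = [("copy", "p", "q"), ("difficult_call", "r"), ("copy", "x", "p")]
    print(necessary_condition_at_entry(prog, "x"))   # -> {'q'}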
Abstract:
A routing protocol in a mobile ad hoc network (MANET) should be secure against both outside attackers, which do not hold valid security credentials, and inside attackers, which are compromised nodes in the network. Outside attackers can be prevented with the help of an efficient key management protocol and cryptography. To prevent inside attackers, however, the protocol should be accompanied by an intrusion detection system (IDS). In this paper, we propose a novel secure routing with integrated localized key management (SR-LKM) protocol, which aims to prevent both inside and outside attackers. The localized key management mechanism does not depend on any routing protocol; thus, unlike many other existing schemes, the protocol does not suffer from the key management and secure routing interdependency problem. The key management mechanism is lightweight, as it optimizes the use of public key cryptography with the help of a novel neighbor-based handshaking and Least Common Multiple (LCM) based broadcast key distribution mechanism. The protocol is storage scalable, and its efficiency is confirmed by the results obtained from simulation experiments.
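As a rough illustration of "using public key cryptography sparingly" via a per-neighbor handshake, the Python sketch below performs one key agreement with a neighbor to derive a pairwise symmetric key and then protects routing messages with cheap MACs. It uses X25519 and HKDF from the pyca/cryptography package as stand-ins; it is not the paper's LCM-based broadcast key distribution mechanism, and all message contents and labels are invented.

import hmac, hashlib
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def neighbor_handshake():
    # In a real deployment each node generates its key pair once and reuses it per neighbor.
    a_priv, b_priv = X25519PrivateKey.generate(), X25519PrivateKey.generate()
    a_shared = a_priv.exchange(b_priv.public_key())
    b_shared = b_priv.exchange(a_priv.public_key())
    derive = lambda s: HKDF(algorithm=hashes.SHA256(), length=32,
                            salt=None, info=b"pairwise-key").derive(s)
    return derive(a_shared), derive(b_shared)   # both sides derive the same symmetric key

def authenticate(key: bytes, routing_msg: bytes) -> bytes:
    # Subsequent routing control traffic is authenticated with a symmetric MAC,
    # avoiding per-message public-key operations.
    return hmac.new(key, routing_msg, hashlib.sha256).digest()

if __name__ == "__main__":
    ka, kb = neighbor_handshake()
    assert ka == kb
    print(authenticate(ka, b"RREQ:src=1,dst=9,seq=42").hex()[:16])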
Abstract:
Computer Assisted Assessment (CAA) has existed for several years now. While some forms of CAA do not require sophisticated text understanding (e.g., multiple-choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research on the latter to date has concentrated on two main sub-tasks: (i) grading of essays, which is done mainly by checking the style, grammatical correctness, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulations, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain, which we hope will be helpful for researchers.
Abstract:
This paper presents the design and implementation of PolyMage, a domain-specific language and compiler for image processing pipelines. An image processing pipeline can be viewed as a graph of interconnected stages which process images successively. Each stage typically performs one of point-wise, stencil, reduction, or data-dependent operations on image pixels. Individual stages in a pipeline typically exhibit abundant data parallelism that can be exploited with relative ease. However, the stages also require high memory bandwidth, preventing effective utilization of the parallelism available on modern architectures. For applications that demand high performance, the traditional options are to use optimized libraries like OpenCV or to optimize manually. While using libraries precludes optimization across library routines, manual optimization accounting for both parallelism and locality is very tedious. The focus of our system, PolyMage, is on automatically generating high-performance implementations of image processing pipelines expressed in a high-level declarative language. Our optimization approach primarily relies on the transformation and code generation capabilities of the polyhedral compiler framework. To the best of our knowledge, this is the first model-driven compiler for image processing pipelines that performs complex fusion, tiling, and storage optimization automatically. Experimental results on a modern multicore system show that the performance achieved by our automatic approach is up to 1.81x better than that achieved through manual tuning in Halide, a state-of-the-art language and compiler for image processing pipelines. For a camera raw image processing pipeline, our performance is comparable to that of a hand-tuned implementation.
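To illustrate the stage types mentioned above, here is a tiny two-stage pipeline written in plain NumPy rather than PolyMage's DSL: a point-wise stage followed by a 3x3 stencil stage, materialized stage by stage. A compiler like PolyMage would fuse and tile such stages automatically; the function names and parameters here are purely illustrative.

import numpy as np

def pointwise_brighten(img, gain=1.2):
    # Point-wise stage: each output pixel depends only on the corresponding input pixel.
    return np.clip(img * gain, 0.0, 1.0)

def stencil_blur3x3(img):
    # Stencil stage: each output pixel depends on a 3x3 neighborhood of the input.
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out += padded[1 + dy : 1 + dy + img.shape[0],
                          1 + dx : 1 + dx + img.shape[1]]
    return out / 9.0

if __name__ == "__main__":
    image = np.random.rand(64, 64).astype(np.float32)
    result = stencil_blur3x3(pointwise_brighten(image))
    print(result.shape, float(result.mean()))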