901 results for Information dispersal algorithm


Relevance:

100.00%

Publisher:

Abstract:

In this paper, we present a decentralized dynamic load scheduling/balancing algorithm called ELISA (Estimated Load Information Scheduling Algorithm) for general purpose distributed computing systems. ELISA uses estimated state information, based upon periodic exchange of exact state information between neighbouring nodes, to perform load scheduling. The primary objective of the algorithm is to cut down on the communication and load transfer overheads by minimizing the frequency of status exchange and by restricting the load transfer and status exchange to within the buddy set of a processor. It is shown that the resulting algorithm performs almost as well as a perfect information algorithm and is superior to other load balancing schemes based on the random sharing and Ni-Hwang algorithms. A sensitivity analysis to study the effect of various design parameters on the effectiveness of load balancing is also carried out. Finally, the algorithm's performance is tested on large dimensional hypercubes in the presence of a time-varying load arrival process and is shown to perform well in comparison to other algorithms. This makes ELISA a viable and implementable load balancing algorithm for use in general purpose distributed computing systems.
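
A minimal sketch of the estimation idea described above, assuming a simple linear-growth estimator and a transfer threshold (the abstract specifies neither); all names are illustrative:

```python
"""Sketch of ELISA-style estimated-load scheduling (illustrative only).
The linear-growth estimator, the transfer threshold, and all names are
hypothetical; the abstract does not specify them."""


class Node:
    def __init__(self, node_id, net_rate=0.0):
        self.node_id = node_id
        self.net_rate = net_rate      # arrival rate minus service rate
        self.queue_len = 0            # exact local load
        self.buddies = {}             # buddy set: node_id -> Node
        self.snapshots = {}           # buddy_id -> (queue_len, net_rate, t_exchange)

    def exchange_status(self, now):
        """Infrequent exact status exchange, restricted to the buddy set."""
        for b in self.buddies.values():
            self.snapshots[b.node_id] = (b.queue_len, b.net_rate, now)

    def estimate(self, buddy_id, now):
        """Estimate a buddy's current load from its last exact state."""
        q, rate, t0 = self.snapshots[buddy_id]
        return max(0.0, q + rate * (now - t0))

    def maybe_transfer(self, now, threshold=2.0):
        """Send one job to the least-loaded buddy when our exact load
        exceeds the estimated buddy-set average by `threshold`."""
        if not self.snapshots:
            return None
        est = {bid: self.estimate(bid, now) for bid in self.snapshots}
        if self.queue_len > sum(est.values()) / len(est) + threshold:
            target = min(est, key=est.get)
            self.queue_len -= 1
            return target
        return None


a, b = Node("a"), Node("b", net_rate=0.5)
a.buddies = {"b": b}
a.queue_len, b.queue_len = 8, 1
a.exchange_status(now=0.0)
print(a.maybe_transfer(now=2.0))   # -> "b": its estimated load is still low
```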

Relevance:

100.00%

Publisher:

Abstract:

Motivated by the observation that communities in real world social networks form due to actions of rational individuals in networks, we propose a novel game theory inspired algorithm to determine communities in networks. The algorithm is decentralized and only uses local information at each node. We show the efficacy of the proposed algorithm through extensive experimentation on several real world social network data sets.
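
A minimal sketch of decentralized, local-information community detection via best-response dynamics; the utility used here (the count of same-label neighbours) is an assumption for illustration, not the utility proposed in the paper:

```python
"""Sketch of decentralized community detection via local best response.
The utility here (count of same-label neighbours) is an illustrative
assumption; it is not the utility used in the paper."""
import random


def detect_communities(adj, max_rounds=50, seed=0):
    """adj: dict mapping node -> set of neighbour nodes."""
    rng = random.Random(seed)
    label = {v: v for v in adj}                  # every node starts alone
    for _ in range(max_rounds):
        changed = False
        order = list(adj)
        rng.shuffle(order)
        for v in order:
            if not adj[v]:
                continue
            # Best response: adopt the label most common among neighbours.
            counts = {}
            for u in adj[v]:
                counts[label[u]] = counts.get(label[u], 0) + 1
            best = max(counts, key=counts.get)
            if counts[best] > counts.get(label[v], 0):
                label[v] = best
                changed = True
        if not changed:                          # no node wants to deviate
            break
    return label


# Tiny example: two triangles joined by one edge.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(detect_communities(adj))
```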

Relevance:

100.00%

Publisher:

Abstract:

This paper describes a prototype implementation of a Distributed File System (DFS) based on the Adaptive Information Dispersal Algorithm (AIDA). Using AIDA, a file block is encoded and dispersed into smaller blocks stored on a number of DFS nodes distributed over a network. The implementation provides file creation, read, and write operations. In particular, when reading a file, the DFS accepts an optional timing constraint, which it uses to determine the level of redundancy needed for the read operation: the tighter the timing constraint, the more nodes in the DFS are queried for encoded blocks. Write operations update all blocks in all DFS nodes, with future implementations possibly including the use of read and write quorums. This work was conducted under the supervision of Professor Azer Bestavros (best@cs.bu.edu) in the Computer Science Department as part of Mohammad Makarechian's Master's project.
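
AIDA builds on information dispersal: a block is encoded into n fragments such that any m of them suffice to reconstruct it. A minimal sketch of m-of-n dispersal over the prime field GF(257), assuming a Vandermonde-style encoding (the prototype's exact encoding is not given in the abstract):

```python
"""Sketch of m-of-n information dispersal over GF(257) (illustrative;
the prototype's exact encoding is not specified in the abstract)."""

P = 257  # prime field large enough to hold byte values 0..255


def disperse(data, m, n):
    """Encode `data` (bytes) into n fragments; any m reconstruct it."""
    assert m <= n < P
    if len(data) % m:
        data = data + bytes(m - len(data) % m)      # zero padding
    fragments = [[] for _ in range(n)]
    for off in range(0, len(data), m):
        chunk = data[off:off + m]
        for i in range(n):
            x = i + 1                               # distinct nonzero points
            val = sum(c * pow(x, j, P) for j, c in enumerate(chunk)) % P
            fragments[i].append(val)
    return [(i + 1, frag) for i, frag in enumerate(fragments)]


def reconstruct(fragments, m):
    """Recover the (padded) data from any m fragments (x, values)."""
    xs = [x for x, _ in fragments[:m]]
    cols = list(zip(*[vals for _, vals in fragments[:m]]))
    out = bytearray()
    for col in cols:
        # Solve the m x m Vandermonde system A * chunk = col over GF(P).
        A = [[pow(x, j, P) for j in range(m)] + [v] for x, v in zip(xs, col)]
        for i in range(m):                          # Gauss-Jordan mod P
            inv = pow(A[i][i], P - 2, P)
            A[i] = [a * inv % P for a in A[i]]
            for k in range(m):
                if k != i and A[k][i]:
                    f = A[k][i]
                    A[k] = [(a - f * b) % P for a, b in zip(A[k], A[i])]
        out.extend(A[i][m] for i in range(m))
    return bytes(out)


frags = disperse(b"file block", m=3, n=5)
print(reconstruct(frags[1:4], m=3))   # any 3 of the 5 fragments suffice
```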

Relevance:

100.00%

Publisher:

Abstract:

The proliferation of mobile computers and wireless networks requires the design of future distributed real-time applications to recognize and deal with the significant asymmetry between downstream and upstream communication capacities, and the significant disparity between server and client storage capacities. Recent research work proposed the use of Broadcast Disks as a scalable mechanism to deal with this problem. In this paper, we propose a new broadcast disks protocol, based on our Adaptive Information Dispersal Algorithm (AIDA). Our protocol differs from previous broadcast disks protocols in that it improves communication timeliness, fault-tolerance, and security, while allowing for finer control of the multiplexing of prioritized data (broadcast frequencies). We start with a general introduction to broadcast disks. Next, we propose broadcast disk organizations that are suitable for real-time applications. We then present AIDA and show its fault-tolerance and security properties. We conclude the paper with the description and analysis of AIDA-based broadcast disk organizations that achieve both timeliness and fault-tolerance, while preserving downstream communication capacity.
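
For context, a minimal sketch of how a plain broadcast-disk program interleaves disks of different broadcast frequencies (the standard multi-disk scheme, without the AIDA encoding layer); page names and frequencies are illustrative:

```python
"""Sketch of a broadcast-disk program: hotter disks repeat more often.
Follows the standard multi-disk interleaving scheme; names are illustrative."""
from math import lcm


def broadcast_program(disks, freqs):
    """disks: list of page lists, hottest first.
    freqs: relative broadcast frequency of each disk (integers)."""
    max_chunks = lcm(*freqs)
    # Split disk i into max_chunks / freqs[i] equal chunks.
    chunked = []
    for pages, f in zip(disks, freqs):
        num = max_chunks // f
        size = -(-len(pages) // num)              # ceiling division
        chunked.append([pages[k:k + size] for k in range(0, num * size, size)])
    program = []
    for c in range(max_chunks):                   # one minor cycle per pass
        for chunks in chunked:
            program.extend(chunks[c % len(chunks)])
    return program                                # one major broadcast cycle


# Disk 1 (hot) is broadcast twice as often as disk 2, four times as disk 3.
print(broadcast_program([["A"], ["B", "C"], ["D", "E", "F", "G"]], [4, 2, 1]))
```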

Relevance:

100.00%

Publisher:

Abstract:

The popularity of TCP/IP, coupled with the promise of high-speed communication using Asynchronous Transfer Mode (ATM) technology, has prompted the network research community to propose a number of techniques to adapt TCP/IP to ATM network environments. ATM offers Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services for best-effort traffic, such as conventional file transfer. However, recent studies have shown that TCP/IP, when implemented using ABR or UBR, leads to serious performance degradation, especially when the utilization of network resources (such as switch buffers) is high. Proposed techniques that attempt to patch up TCP/IP over ATMs (switch-level enhancements, for example) have had limited success in alleviating this problem. The major reason for TCP/IP's poor performance over ATMs has been consistently attributed to packet fragmentation, which is the result of ATM's 53-byte cell-oriented switching architecture. In this paper, we present a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. At the core of TCP Boston is the Adaptive Information Dispersal Algorithm (AIDA), an efficient encoding technique that allows for dynamic redundancy control. AIDA makes TCP/IP's performance less sensitive to cell losses, thus ensuring a graceful degradation of TCP/IP's performance when faced with congested resources. In this paper, we introduce AIDA and overview the main features of TCP Boston. We present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput) and application-centric metrics (e.g., response time).
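
One way to read the dynamic redundancy control is as choosing the smallest dispersal width n >= m so that a packet, reconstructible from any m of its n cells, survives the observed cell-loss rate. A minimal sketch under an assumed independent-loss model and an assumed delivery target (neither is taken from the abstract):

```python
"""Sketch of AIDA-style dynamic redundancy control: pick the smallest
dispersal width n >= m so a packet (reconstructible from any m of its n
cells) survives an assumed independent cell-loss rate. The loss model and
target probability are assumptions for illustration."""
from math import comb


def delivery_prob(m, n, loss):
    """P(at least m of n cells arrive) with i.i.d. cell loss."""
    return sum(comb(n, k) * (1 - loss) ** k * loss ** (n - k)
               for k in range(m, n + 1))


def choose_redundancy(m, loss, target=0.999, n_max=64):
    """Smallest n such that the packet is recoverable with prob >= target."""
    for n in range(m, n_max + 1):
        if delivery_prob(m, n, loss) >= target:
            return n
    return n_max


# A packet dispersed into m = 10 required cells under 5% cell loss:
print(choose_redundancy(10, loss=0.05))   # -> 14 under these assumptions
```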

Relevance:

100.00%

Publisher:

Abstract:

The design of programs for broadcast disks which incorporate real-time and fault-tolerance requirements is considered. A generalized model for real-time fault-tolerant broadcast disks is defined. It is shown that designing programs for broadcast disks specified in this model is closely related to the scheduling of pinwheel task systems. Some new results in pinwheel scheduling theory are derived, which facilitate the efficient generation of real-time fault-tolerant broadcast disk programs.
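
A minimal sketch of the standard power-of-two specialization approach to pinwheel scheduling, in which task i must be scheduled at least once in every window of a_i consecutive slots; this greedy construction is an illustration, not the paper's derivation:

```python
"""Sketch of pinwheel scheduling by power-of-two specialization: round each
window a_i down to a power of two, then greedily assign each task a fixed
(offset, period) slot pattern. Illustrative; not the paper's construction."""


def pinwheel_schedule(windows, cycle=None):
    """windows: task -> a_i (task must occur in every window of a_i slots)."""
    spec = {t: 1 << (a.bit_length() - 1) for t, a in windows.items()}  # 2^k <= a_i
    if sum(1.0 / p for p in spec.values()) > 1.0:
        raise ValueError("specialized density > 1: greedy packing may fail")
    assigned = {}                                    # task -> (offset, period)
    for t, p in sorted(spec.items(), key=lambda kv: kv[1]):
        for off in range(p):
            # Periods are powers of two, so two tasks collide iff their
            # offsets agree modulo the smaller period.
            if all(off % q != o for (o, q) in assigned.values()):
                assigned[t] = (off, p)
                break
    cycle = cycle or max(spec.values())
    slots = []
    for s in range(cycle):
        hit = [t for t, (o, p) in assigned.items() if s % p == o]
        slots.append(hit[0] if hit else None)        # None = spare slot
    return slots


# Tasks A, B, C must appear within every 2, 4, and 5 consecutive slots.
print(pinwheel_schedule({"A": 2, "B": 4, "C": 5}))
```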

Relevance:

100.00%

Publisher:

Abstract:

While ATM bandwidth-reservation techniques are able to offer the guarantees necessary for the delivery of real-time streams in many applications (e.g. live audio and video), they suffer from many disadvantages that make them unattractive (or impractical) for many others. These limitations, coupled with the flexibility and popularity of TCP/IP as a best-effort transport protocol, have prompted the network research community to propose and implement a number of techniques that adapt TCP/IP to the Available Bit Rate (ABR) and Unspecified Bit Rate (UBR) services in ATM network environments. This allows these environments to smoothly integrate (and make use of) currently available TCP-based applications and services without much (if any) modification. However, recent studies have shown that TCP/IP, when implemented over ATM networks, is susceptible to serious performance limitations. In a recently completed study, we have unveiled a new transport protocol, TCP Boston, that turns ATM's 53-byte cell-oriented switching architecture into an advantage for TCP/IP. In this paper, we demonstrate the real-time features of TCP Boston that allow communication bandwidth to be traded off for timeliness. We start with an overview of the protocol. Next, we analytically characterize the dynamic redundancy control features of TCP Boston. We then present detailed simulation results that show the superiority of our protocol when compared to other adaptations of TCP/IP over ATMs. In particular, we show that TCP Boston improves TCP/IP's performance over ATMs for both network-centric metrics (e.g., effective throughput and percent of missed deadlines) and real-time application-centric metrics (e.g., response time and jitter).

Relevance:

90.00%

Publisher:

Abstract:

We present a new algorithm for exactly solving decision-making problems represented as an influence diagram. We do not require the usual assumptions of no-forgetting and regularity, which allows us to solve problems with limited information. The algorithm, which implements a sophisticated variable elimination procedure, is empirically shown to outperform a state-of-the-art algorithm on randomly generated problems of up to 150 variables and 10^64 strategies.

Relevance:

90.00%

Publisher:

Abstract:

Satellite image classification involves designing and developing efficient image classifiers. With satellite image data and image analysis methods multiplying rapidly, selecting the right mix of data sources and data analysis approaches has become critical to the generation of quality land-use maps. In this study, a new postprocessing information fusion algorithm for the extraction and representation of land-use information based on high-resolution satellite imagery is presented. This approach can produce land-use maps with sharp interregional boundaries and homogeneous regions. The proposed approach is conducted in five steps. First, a GIS layer (ATKIS data) was used to generate two coarse homogeneous regions, i.e. urban and rural areas. Second, a thematic (class) map was generated using a hybrid spectral classifier combining the Gaussian Maximum Likelihood (GML) algorithm and the ISODATA classifier. Third, a probabilistic relaxation algorithm was performed on the thematic map, resulting in a smoothed thematic map. Fourth, edge detection and edge thinning techniques were used to generate a contour map with pixel-width interclass boundaries. Fifth, the contour map was superimposed on the thematic map by use of a region-growing algorithm, with the contour map and the smoothed thematic map as two constraints. For the operation of the proposed method, a software package was developed in the C programming language. This software package comprises the GML algorithm, a probabilistic relaxation algorithm, the TBL edge detector, an edge thresholding algorithm, a fast parallel thinning algorithm, and a region-growing information fusion algorithm. The county of Landau in the state of Rheinland-Pfalz, Germany, was selected as the test site. High-resolution IRS-1C imagery was used as the principal input data.

Relevance:

90.00%

Publisher:

Abstract:

In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images will be accumulated while the recognition system is functioning, and for practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. In addition, the algorithm makes use of a combination of classification results for the images in the input sequence. Experiments with sequences obtained with a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
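
A minimal sketch of combining per-image classification results along a tracked sequence; the combination rule used here (a running sum of log-scores) is an assumption, not necessarily the one used in the paper:

```python
"""Sketch of accumulating per-frame classification evidence over a tracked
face sequence. The sum-of-log-scores rule is an illustrative assumption."""
import math


class SequenceClassifier:
    def __init__(self, labels):
        self.log_evidence = {lab: 0.0 for lab in labels}

    def update(self, frame_scores):
        """frame_scores: label -> probability from the per-image classifier."""
        for lab, p in frame_scores.items():
            self.log_evidence[lab] += math.log(max(p, 1e-9))

    def decision(self):
        return max(self.log_evidence, key=self.log_evidence.get)


clf = SequenceClassifier(["alice", "bob"])
for scores in [{"alice": 0.6, "bob": 0.4},
               {"alice": 0.4, "bob": 0.6},
               {"alice": 0.8, "bob": 0.2}]:
    clf.update(scores)
print(clf.decision())   # "alice" once the sequence evidence is combined
```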

Relevance:

80.00%

Publisher:

Abstract:

We present a modification of the algorithm of Dani et al. [8] for the online linear optimization problem in the bandit setting, which with high probability has regret at most O*(√T) against an adaptive adversary. This improves on the previous algorithm [8], whose regret is bounded in expectation against an oblivious adversary. We obtain the same dependence on the dimension (n^(3/2)) as that exhibited by Dani et al. The results of this paper rest firmly on those of [8] and the remarkable technique of Auer et al. [2] for obtaining high-probability bounds via optimistic estimates. This paper answers an open question: it eliminates the gap between the high-probability bounds obtained in the full-information and bandit settings.

Relevance:

80.00%

Publisher:

Abstract:

Cryptosystems based on the hardness of lattice problems have recently acquired much importance due to their average-case to worst-case equivalence, their conjectured resistance to quantum cryptanalysis, their ease of implementation and increasing practicality, and, lately, their promising potential as a platform for constructing advanced functionalities. In this work, we construct “Fuzzy” Identity Based Encryption from the hardness of the Learning With Errors (LWE) problem. We note that for our parameters, the underlying lattice problems (such as gapSVP or SIVP) are assumed to be hard to approximate within subexponential factors for adversaries running in subexponential time. We give CPA and CCA secure variants of our construction, for small and large universes of attributes. All our constructions are secure against selective-identity attacks in the standard model. Our construction is made possible by observing certain special properties that secret sharing schemes need to satisfy in order to be useful for Fuzzy IBE. We also discuss some obstacles towards realizing lattice-based attribute-based encryption (ABE).

Relevance:

80.00%

Publisher:

Abstract:

We analyse the security of the cryptographic hash function LAKE-256 proposed at FSE 2008 by Aumasson, Meier and Phan. By exploiting non-injectivity of some of the building primitives of LAKE, we show three different collision and near-collision attacks on the compression function. The first attack uses differences in the chaining values and the block counter and finds collisions with complexity 2^33. The second attack utilizes differences in the chaining values and salt and yields collisions with complexity 2^42. The final attack uses differences only in the chaining values to yield near-collisions with complexity 2^99. All our attacks are independent of the number of rounds in the compression function. We illustrate the first two attacks by showing examples of collisions and near-collisions.

Relevance:

80.00%

Publisher:

Abstract:

The current study presents an algorithm to retrieve surface Soil Moisture (SM) from multi-temporal Synthetic Aperture Radar (SAR) data. The developed algorithm is based on the Cumulative Density Function (CDF) transformation of the multi-temporal RADARSAT-2 backscatter coefficient (BC) to obtain relative SM values, which are then converted into absolute SM values using soil information. The algorithm is tested in a semi-arid tropical region in South India using 30 satellite images of RADARSAT-2, SMOS L2 SM products, and 1262 SM field measurements in 50 plots spanning over 4 years. The validation with the field data showed the ability of the developed algorithm to retrieve SM with RMSE ranging from 0.02 to 0.06 m³/m³ for the majority of plots. Comparison with the SMOS SM showed good temporal behaviour, with an RMSE of approximately 0.05 m³/m³ and a correlation coefficient of approximately 0.9. The developed model is compared with, and found to perform better than, the change detection and delta index models. The approach does not require calibration of any parameter to obtain relative SM and hence can easily be extended to any region having a time series of SAR data available.
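
A minimal sketch of the CDF transformation step: each backscatter observation is mapped to its empirical quantile within the pixel's multi-temporal series (relative SM), which soil information then rescales to absolute SM; the residual and saturation bounds below are hypothetical placeholders:

```python
"""Sketch of CDF-based soil-moisture retrieval from a multi-temporal
backscatter series: the empirical quantile of each acquisition gives
relative SM, which soil information converts to absolute SM. The residual
and saturation bounds below are hypothetical placeholders."""
import numpy as np


def relative_sm(backscatter_series):
    """Empirical CDF value of each backscatter observation (relative SM)."""
    bs = np.asarray(backscatter_series, dtype=float)
    ranks = bs.argsort().argsort()                # ranks 0 .. N-1
    return (ranks + 0.5) / bs.size                # quantiles in (0, 1)


def absolute_sm(backscatter_series, sm_residual, sm_saturation):
    """Scale relative SM into absolute SM (m^3/m^3) using soil bounds."""
    rel = relative_sm(backscatter_series)
    return sm_residual + rel * (sm_saturation - sm_residual)


# Backscatter (dB) of one plot over 6 RADARSAT-2 dates, with assumed bounds.
series = [-12.1, -9.3, -10.8, -8.7, -11.5, -9.9]
print(absolute_sm(series, sm_residual=0.05, sm_saturation=0.40))
```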

Relevance:

80.00%

Publisher:

Abstract:

This thesis presents a novel framework for state estimation in the context of robotic grasping and manipulation. The overall estimation approach is based on fusing various visual cues for manipulator tracking, namely appearance and feature-based, shape-based, and silhouette-based visual cues. Similarly, a framework is developed to fuse the above visual cues, but also kinesthetic cues such as force-torque and tactile measurements, for in-hand object pose estimation. The cues are extracted from multiple sensor modalities and are fused in a variety of Kalman filters.
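
A minimal sketch of fusing two measurement cues through successive Kalman updates on a toy two-dimensional state; the thesis's actual filters, states, and measurement models are considerably richer:

```python
"""Sketch of fusing several measurement cues in a single linear Kalman
update. A 2-D toy state stands in for the manipulator/object pose; the
noise levels and models are illustrative, not the thesis's."""
import numpy as np


def kalman_update(x, P, z, H, R):
    """Standard Kalman measurement update."""
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P


x = np.zeros(2)                    # toy pose estimate
P = np.eye(2) * 1.0                # its covariance
H = np.eye(2)                      # both cues observe the pose directly

# Fuse a (noisier) silhouette-based cue and a (tighter) feature-based cue.
for z, noise in [(np.array([0.9, 1.1]), 0.5),
                 (np.array([1.0, 1.0]), 0.1)]:
    x, P = kalman_update(x, P, z, H, np.eye(2) * noise)

print(x, np.diag(P))
```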

A hybrid estimator is developed to estimate both a continuous state (robot and object states) and discrete states, called contact modes, which specify how each finger contacts a particular object surface. A static multiple model estimator is used to compute and maintain this mode probability. The thesis also develops an estimation framework for estimating model parameters associated with object grasping. Dual and joint state-parameter estimation is explored for parameter estimation of a grasped object's mass and center of mass. Experimental results demonstrate simultaneous object localization and center of mass estimation.

Dual-arm estimation is developed for two-arm robotic manipulation tasks. Two types of filters are explored: the first is an augmented filter that contains both arms in the state vector, while the second runs two filters in parallel, one for each arm. These two frameworks and their performance are compared in a dual-arm task of removing a wheel from a hub.

This thesis also presents a new method for action selection involving touch. This next best touch method selects, from the available actions for interacting with an object, the one expected to gain the most information. The algorithm employs information theory to compute an information gain metric based on a probabilistic belief suitable for the task. An estimation framework is used to maintain this belief over time. Kinesthetic measurements such as contact and tactile measurements are used to update the state belief after every interactive action. Simulation and experimental results are demonstrated using next best touch for object localization, specifically localizing a door handle on a door. The next best touch theory is extended to model parameter determination. Since many objects within a particular object category share the same rough shape, principal component analysis may be used to parametrize the object mesh models. These parameters can be estimated using the action selection technique, which selects the touching action that best both localizes the object and estimates its parameters. Simulation results are then presented involving localizing and determining a parameter of a screwdriver.
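
A minimal sketch of the information-gain criterion behind next best touch, over a discrete belief and a hypothetical binary contact measurement model; the probabilities and action set are illustrative:

```python
"""Sketch of 'next best touch' action selection: pick the touching action
with the highest expected reduction in belief entropy. The discrete belief,
the binary contact model, and all probabilities are hypothetical."""
import math


def entropy(belief):
    return -sum(p * math.log(p) for p in belief.values() if p > 0)


def expected_info_gain(belief, p_contact_given_state):
    """p_contact_given_state: state -> P(contact | state) for one action."""
    h0 = entropy(belief)
    gain = 0.0
    for outcome in (True, False):
        # Probability of this outcome and the posterior belief it induces.
        likes = {s: (p_contact_given_state[s] if outcome
                     else 1.0 - p_contact_given_state[s]) for s in belief}
        p_outcome = sum(belief[s] * likes[s] for s in belief)
        if p_outcome == 0:
            continue
        post = {s: belief[s] * likes[s] / p_outcome for s in belief}
        gain += p_outcome * (h0 - entropy(post))
    return gain


# Belief over three candidate door-handle locations and two candidate touches.
belief = {"left": 0.4, "middle": 0.4, "right": 0.2}
actions = {"touch_left": {"left": 0.9, "middle": 0.1, "right": 0.05},
           "touch_middle": {"left": 0.1, "middle": 0.9, "right": 0.1}}
best = max(actions, key=lambda a: expected_info_gain(belief, actions[a]))
print(best)
```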

Lastly, the next best touch theory is further extended to model classes. Instead of estimating parameters, object class determination is incorporated into the information gain metric calculation. The best touching action is selected in order to best distinguish between the possible model classes. Simulation results are presented to validate the theory.