306 results for Underground networks


Relevance: 20.00%

Abstract:

In a storage system where individual storage nodes are prone to failure, redundant storage of data in a distributed manner across multiple nodes is essential to ensure reliability. Reed-Solomon codes possess the reconstruction property, under which the stored data can be recovered by connecting to any k of the n nodes across which the data is dispersed. This property can be shown to yield vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data that must be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here, as they require the entire data set to be downloaded. Regenerating codes are a new class of codes that minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes, including a discussion of the explicit construction of optimal codes.
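
As a concrete illustration of the reconstruction property described above, here is a minimal sketch, assuming a Reed-Solomon code built over the small prime field GF(257) with illustrative evaluation points; real storage systems use larger fields and systematic encodings.

```python
P = 257  # prime field modulus; illustrative (real systems use e.g. GF(2^8))

def lagrange_eval(points, x):
    """Evaluate the unique degree-(k-1) polynomial through `points`
    (a list of (xi, yi) pairs) at x, all arithmetic mod P."""
    total = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (x - xj) % P
                den = den * (xi - xj) % P
        # den^(P-2) mod P is den's inverse, by Fermat's little theorem
        total = (total + yi * num * pow(den, P - 2, P)) % P
    return total

def encode(data, n):
    """Define p() by placing the k data symbols at x = 1..k;
    node i then stores the evaluation (i, p(i)) for i = 1..n."""
    points = list(enumerate(data, start=1))
    return [(x, lagrange_eval(points, x)) for x in range(1, n + 1)]

def reconstruct(shares, k):
    """Recover the k data symbols from ANY k surviving (x, y) shares."""
    return [lagrange_eval(shares, x) for x in range(1, k + 1)]

data = [42, 7, 99]                           # k = 3 data symbols
shares = encode(data, n=6)                   # dispersed across n = 6 nodes
assert reconstruct(shares[3:], k=3) == data  # the last 3 shares suffice
```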

Relevance: 20.00%

Abstract:

Artificial neural networks (ANNs) have shown great promise in modeling circuit parameters for computer-aided design applications. Leakage currents, which depend on process parameters, supply voltage, and temperature, can be modeled accurately with ANNs. However, the complex nature of the ANN model, with the standard sigmoidal activation functions, does not allow analytical expressions for its mean and variance. We propose the use of a new activation function that allows us to derive an analytical expression for the mean and a semi-analytical expression for the variance of the ANN-based leakage model. To the best of our knowledge, this is the first result in this direction. Our neural network model also includes the voltage and temperature as input parameters, thereby enabling voltage- and temperature-aware statistical leakage analysis (SLA). All existing SLA frameworks are closely tied to the exponential polynomial leakage model and hence fail to work with sophisticated ANN models. In this paper, we also set up an SLA framework that can efficiently work with these ANN models. Results show that the cumulative distribution function of the leakage current of ISCAS'85 circuits can be predicted accurately, with the errors in mean and standard deviation, compared to Monte Carlo-based simulations, being less than 1% and 2%, respectively, across a range of voltage and temperature values.
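
The paper's proposed activation function is not given in the abstract; as a hedged stand-in, the sketch below uses an exponential activation, for which the neuron output has a closed-form mean under Gaussian process parameters (if z ~ N(mu, s^2), then E[exp(z)] = exp(mu + s^2/2)), and checks it against Monte Carlo. All weights and parameter statistics are made up for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical single neuron: weights, bias, and Gaussian input statistics
w, b = np.array([0.8, -0.3]), 0.1
mu_x = np.array([0.0, 1.0])        # means of the process parameters
cov_x = np.diag([0.04, 0.09])      # their covariance (independent here)

# Pre-activation z = w.x + b is Gaussian, so its moments are exact:
mu_z = w @ mu_x + b
var_z = w @ cov_x @ w

# With an exponential activation, E[exp(z)] is the lognormal mean:
analytical_mean = np.exp(mu_z + var_z / 2)

# Monte Carlo check of the closed form
x = rng.multivariate_normal(mu_x, cov_x, size=200_000)
mc_mean = np.exp(x @ w + b).mean()
print(f"analytical: {analytical_mean:.4f}  monte-carlo: {mc_mean:.4f}")
```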

Relevance: 20.00%

Abstract:

In earlier work, nonisomorphic graphs have been converted into networks to realize multistage interconnection networks that are topologically nonequivalent to the Baseline network. The drawback of this technique is that these nonequivalent networks are not guaranteed to be self-routing, because each node in the graph model can be replaced by a (2 × 2) switch in any one of four different configurations. Hence, the problem of routing in these networks remains unsolved. Moreover, the nonisomorphic graphs were obtained by interconnecting bipartite loops in a heuristic manner; the heuristic nature of this procedure makes it difficult to guarantee full connectivity in large networks. We solve these problems through a direct approach, in which a matrix model for self-routing networks is developed. An example is given to show that this model encompasses nonequivalent self-routing networks. This approach has the additional advantage that the matrix model itself ensures full connectivity.
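
For readers unfamiliar with the self-routing property at stake, the sketch below shows destination-tag routing in an omega network, one standard self-routing multistage network; the shuffle wiring and addressing are textbook conventions, not the paper's matrix model.

```python
# In a log2(N)-stage network of 2x2 switches, the switch at stage s
# forwards a packet to its upper or lower output according to bit s of
# the destination address, so no global routing tables are needed.

def shuffle(addr, n_bits):
    """Perfect-shuffle permutation: rotate the address left by one bit."""
    msb = (addr >> (n_bits - 1)) & 1
    return ((addr << 1) & ((1 << n_bits) - 1)) | msb

def route(src, dst, n_bits):
    """Return the sequence of line numbers visited while self-routing."""
    addr, path = src, []
    for stage in range(n_bits):
        addr = shuffle(addr, n_bits)             # inter-stage wiring
        bit = (dst >> (n_bits - 1 - stage)) & 1  # destination-tag bit
        addr = (addr & ~1) | bit                 # switch sets the low bit
        path.append(addr)
    return path

# In an 8-input omega network, the packet always reaches dst:
assert route(src=5, dst=3, n_bits=3)[-1] == 3
```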

Relevance: 20.00%

Abstract:

We propose a method to compute a probably approximately correct (PAC) normalized histogram of observations with a refresh rate of Θ(1) time units per histogram sample on a random geometric graph with noise-free links. The delay in computation is Θ(√n) time units. We further extend our approach to a network with noisy links. While the refresh rate remains Θ(1) time units per sample, the delay increases to Θ(√n log n). The number of transmissions in both cases is Θ(n) per histogram sample. The achieved Θ(1) refresh rate for PAC histogram computation is a significant improvement over the Θ(1/log n) refresh rate for histogram computation in noiseless networks. We achieve this by operating in the supercritical thermodynamic regime, where large pathways for communication build up but the network may have more than one component. The largest component, however, will contain an arbitrarily large fraction of the nodes, enabling approximate computation of the histogram to the desired level of accuracy. Operation in the supercritical thermodynamic regime also reduces energy consumption. A key step in the proof of our achievability result is the construction of a connected component having bounded degree and any desired fraction of the nodes. This construction may also prove useful in other communication settings on the random geometric graph.
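
The supercritical regime can be seen directly in simulation. A small sketch, using networkx and an illustrative radius scaling: the graph need not be connected, yet the largest component holds most of the nodes.

```python
import math
import networkx as nx

n = 2000
r = 1.5 * math.sqrt(1.0 / n)   # constant-mean-degree (thermodynamic) scaling
G = nx.random_geometric_graph(n, r, seed=1)

giant = max(nx.connected_components(G), key=len)
print(f"connected: {nx.is_connected(G)}, "
      f"giant component fraction: {len(giant) / n:.2f}")
```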

Relevance: 20.00%

Abstract:

The use of energy harvesting (EH) nodes as cooperative relays is a promising and emerging solution in wireless systems such as wireless sensor networks. It harnesses the spatial diversity of a multi-relay network and addresses the vexing problem of a relay's batteries being drained in forwarding information to the destination. We consider a cooperative system in which EH nodes volunteer to serve as amplify-and-forward relays whenever they have sufficient energy for transmission. For a general class of stationary and ergodic EH processes, we introduce the notion of energy-constrained and energy-unconstrained relays and analytically characterize the symbol error rate of the system. Further insight is gained by an asymptotic analysis that considers the cases where the signal-to-noise ratio or the number of relays is large. Our analysis quantifies how the energy usage at an EH relay and, consequently, its availability for relaying depend not only on the relay's energy harvesting process but also on its transmit power setting and the other relays in the system. The optimal static transmit power setting at the EH relays is also determined. Altogether, our results demonstrate how a system that uses EH relays differs in significant ways from one that uses conventional cooperative relays.
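
A toy simulation of the availability dynamic the analysis characterizes, under assumed Bernoulli energy arrivals (the paper treats a general class of stationary and ergodic EH processes): the relay volunteers only when its battery covers the transmit energy, so its availability depends jointly on the harvesting rate and the transmit power setting.

```python
import numpy as np

rng = np.random.default_rng(2)
T = 100_000
harvest = rng.binomial(1, 0.3, size=T) * 2.0   # energy packets of size 2
E_tx = 1.5                                      # transmit energy per slot
battery, available = 0.0, 0

for t in range(T):
    battery += harvest[t]
    if battery >= E_tx:        # relay volunteers and spends E_tx
        battery -= E_tx
        available += 1

# Harvest rate 0.6 < E_tx, so this relay is energy constrained:
print(f"fraction of slots the relay could forward: {available / T:.2f}")
```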

Relevance: 20.00%

Abstract:

In rapid parallel magnetic resonance imaging, the problem of image reconstruction is challenging. Here, a novel image reconstruction technique for data acquired along any general trajectory in a neural network framework, called "Composite Reconstruction And Unaliasing using Neural Networks" (CRAUNN), is proposed. CRAUNN is based on the observation that the nature of aliasing remains unchanged whether the undersampled acquisition contains only low frequencies or includes high frequencies too. The transformation needed to reconstruct the alias-free image from the aliased coil images is learnt using acquisitions consisting of densely sampled low frequencies. Neural networks are used as machine learning tools to learn this transformation and obtain the desired alias-free image for actual acquisitions containing sparsely sampled low as well as high frequencies. CRAUNN operates in the image domain, does not require explicit coil sensitivity estimation, and is independent of the sampling trajectory used, so it can be applied to arbitrary trajectories. As a pilot trial, the technique is first applied to Cartesian trajectory-sampled data. Experiments performed using radial and spiral trajectories on real and synthetic data illustrate the performance of the method. The reconstruction errors depend on the acceleration factor as well as the sampling trajectory; higher acceleration factors can be obtained when radial trajectories are used. Comparisons against existing techniques show that CRAUNN performs on par with the state-of-the-art techniques, with acceleration factors of up to 4, 6, and 4 achieved in the Cartesian, radial, and spiral cases, respectively.
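
The observation that aliasing is a property of the sampling pattern rather than of the frequency content can be reproduced in a few lines of numpy; the image and undersampling factor below are illustrative.

```python
import numpy as np

img = np.zeros((128, 128))
img[40:90, 30:100] = 1.0                 # toy "anatomy"

k = np.fft.fft2(img)                     # fully sampled k-space
k_under = k.copy()
k_under[1::2, :] = 0                     # drop every other line (R = 2)

# Zeroing alternate k-space lines folds the image onto a copy of
# itself shifted by N/2 rows, so the result is periodic with period 64:
aliased = np.fft.ifft2(k_under).real
print(np.abs(aliased[0:64] - aliased[64:128]).max())  # ~0: exact fold-over
```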

Relevance: 20.00%

Abstract:

The three-dimensional structure of a protein is formed and maintained by the noncovalent interactions among the amino acid residues of the polypeptide chain. These interactions can be represented collectively in the form of a network. So far, such networks have been investigated by considering connections based on distances between the amino acid residues. Here we present a method of constructing the structure network based on interaction energies among the amino acid residues in the protein. We have investigated the properties of such protein energy-based networks (PENs) and have shown correlations to protein structural features such as the clusters of residues involved in stability and the formation of secondary and super-secondary structural units. Further, we demonstrate that the analysis of PENs in terms of parameters such as hubs and shortest paths can provide a variety of biologically important information, such as the residues crucial for stabilizing the folded units and the paths of communication between distal residues in the protein. Finally, the energy regimes for different levels of stabilization in the protein structure have clearly emerged from the PEN analysis.
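
A minimal sketch of the PEN construction, using networkx with a random stand-in energy matrix (a real PEN would use force-field interaction energies and a physically chosen cutoff): residues become nodes, strongly interacting pairs become edges, and hubs and shortest paths follow from standard graph analysis.

```python
import numpy as np
import networkx as nx

rng = np.random.default_rng(3)
n_res = 50
E = rng.normal(0, 2, size=(n_res, n_res))   # toy pairwise energies
E = (E + E.T) / 2                            # interactions are symmetric

cutoff = -3.0                                # keep strongly stabilizing pairs
G = nx.Graph()
G.add_nodes_from(range(n_res))
for i in range(n_res):
    for j in range(i + 1, n_res):
        if E[i, j] <= cutoff:
            G.add_edge(i, j, energy=E[i, j])

hubs = [r for r, d in G.degree() if d >= 4]  # highly connected residues
print("hubs:", hubs)
if nx.has_path(G, 0, n_res - 1):             # communication between distal residues
    print("path:", nx.shortest_path(G, 0, n_res - 1))
```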

Relevance: 20.00%

Abstract:

Standard-cell design methodology is an important technique in semicustom VLSI design. It lends itself to easy automation of the crucial layout step, and many algorithms have been proposed in the recent literature for the efficient placement of standard cells. While many studies have identified the Kernighan-Lin bipartitioning method as being superior to most others, it must be admitted that the behavior of the method is erratic and strongly dependent on the initial partition. This paper proposes a novel algorithm for overcoming some of the deficiencies of the Kernighan-Lin method. The approach is based on an analogy between the placement problem and neural networks, and, by the use of some of the organizing principles of these nets, an attempt is made to improve the behavior of the bipartitioning scheme. The results have been encouraging, and the approach seems promising for other NP-complete problems in circuit layout.
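
For reference, the baseline behavior being improved upon can be reproduced with the networkx implementation of Kernighan-Lin bisection; the run-to-run variation across seeds illustrates the dependence on the initial partition noted above. The netlist graph is a random stand-in.

```python
import networkx as nx
from networkx.algorithms.community import kernighan_lin_bisection

G = nx.random_regular_graph(4, 60, seed=7)   # stand-in netlist graph

for seed in range(3):
    A, B = kernighan_lin_bisection(G, seed=seed)  # seed sets the start point
    cut = nx.cut_size(G, A, B)
    print(f"seed {seed}: cut size {cut}")         # varies run to run
```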

Relevance: 20.00%

Abstract:

Many next-generation distributed applications, such as grid computing, require a single source to communicate with a group of destinations. Traditionally, such applications are implemented using multicast communication. A typical multicast session requires creating the shortest-path tree to a fixed set of destinations. The fundamental issue in multicasting data to a fixed set of destinations is receiver blocking: if one of the destinations is not reachable, the entire multicast request (say, a grid task request) may fail. Manycasting is a generalized variation of multicasting that provides the freedom to choose the best subset of destinations from a larger set of candidate destinations. We propose an impairment-aware algorithm to provide manycasting service in the optical layer, specifically optical burst switching (OBS). We compare the performance of our proposed manycasting algorithm with traditional multicasting and multicast with overprovisioning. Our results show a significant improvement in blocking probability from implementing optical-layer manycasting.
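
A hedged sketch of the manycast selection idea, reduced to its simplest form: pick the best k of m candidate destinations by a per-destination impairment score. The greedy rule and the scores are illustrative; the paper's algorithm accounts for optical-layer impairments along OBS paths.

```python
def manycast_select(candidates, k, score):
    """candidates: iterable of destination ids; score: lower is better."""
    return sorted(candidates, key=score)[:k]

# Toy example: blocking-probability estimates per candidate destination.
blocking = {"d1": 0.30, "d2": 0.05, "d3": 0.12, "d4": 0.02, "d5": 0.40}
chosen = manycast_select(blocking, k=3, score=blocking.get)
print(chosen)  # ['d4', 'd2', 'd3']: the best 3 of 5 candidates
```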

Relevance: 20.00%

Abstract:

Copolyurethanes of hydroxy-terminated polybutadiene (HTPB) and ISRO–Polyol (ISPO), an indigenously developed castor-oil-based polyol, have been prepared using toluene diisocyanate and hexamethylene diisocyanate. The mechanical strength and swelling characteristics of the copolyurethanes cured with trimethylol propane and triethanolamine have been studied in order to develop improved solid propellant binders. By varying the ratios of the two hydroxy prepolymers, chain extenders, and crosslinkers, copolyurethanes having a wide range of tensile strength and elongation could be obtained. Many of these systems are desirable for use as propellant binders. The results are explained in terms of the measured crosslink densities and other swelling properties.

Relevance: 20.00%

Abstract:

Our study concerns an important current problem: the diffusion of information in social networks. This problem has received significant attention from the Internet research community in recent times, driven by many potential applications such as viral marketing and sales promotions. In this paper, we focus on the target set selection problem, which involves discovering a small subset of influential players in a given social network to perform a certain task of information diffusion. The target set selection problem manifests in two forms: 1) the top-k nodes problem and 2) the lambda-coverage problem. In the top-k nodes problem, we are required to find a set of k key nodes that maximizes the number of nodes being influenced in the network. The lambda-coverage problem is concerned with finding a set of key nodes of minimal size that can influence a given percentage lambda of the nodes in the entire network. We propose a new way of solving these problems using the concept of the Shapley value, a well-known solution concept in cooperative game theory. Our approach leads to algorithms, which we call the ShaPley value-based Influential Nodes (SPIN) algorithms, for solving the top-k nodes problem and the lambda-coverage problem. We compare the performance of the proposed SPIN algorithms with well-known algorithms in the literature. Through extensive experimentation on four synthetically generated random graphs and six real-world data sets (Celegans, Jazz, the NIPS coauthorship data set, the Netscience data set, the High-Energy Physics data set, and the Political Books data set), we show that the proposed SPIN approach is more powerful and computationally efficient.

Note to Practitioners: In recent times, social networks have received a high level of attention due to their proven ability to improve the performance of web search, recommendations in collaborative filtering systems, the spread of a technology in the market through viral marketing techniques, etc. It is well known that the interpersonal relationships (or ties or links) between individuals cause change or improvement in the social system, because the decisions made by individuals are influenced heavily by the behavior of their neighbors. An interesting and key problem in social networks is to discover the nodes that can influence other nodes in a strong and deep way. This problem is called the target set selection problem and has two variants: 1) the top-k nodes problem, where we are required to identify a set of k influential nodes that maximizes the number of nodes being influenced in the network, and 2) the lambda-coverage problem, which involves finding a set of influential nodes of minimum size that can influence a given percentage lambda of the nodes in the entire network. There are many existing algorithms in the literature for solving these problems. In this paper, we propose a new algorithm based on a novel interpretation of information diffusion in a social network as a cooperative game. Using this analogy, we develop an algorithm based on the Shapley value of the underlying cooperative game. The proposed algorithm outperforms the existing algorithms in terms of generality or computational complexity or both. Our results are validated through extensive experimentation on both synthetically generated and real-world data sets.
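
A compact sketch of the Shapley value ranking at the heart of the SPIN approach: the characteristic function v(S) is the number of nodes a seed set S influences, and Shapley values are estimated by sampling permutations and averaging marginal contributions. The one-hop influence function below is a deliberately simple stand-in for a real diffusion model.

```python
import random
import networkx as nx

def influence(G, S):
    """v(S): nodes reached by S in one hop (toy diffusion model)."""
    reached = set(S)
    for u in S:
        reached.update(G.neighbors(u))
    return len(reached)

def shapley_topk(G, k, samples=200, seed=0):
    rng = random.Random(seed)
    nodes, phi = list(G), {u: 0.0 for u in G}
    for _ in range(samples):
        rng.shuffle(nodes)                     # one random permutation
        S, prev = [], 0
        for u in nodes:
            S.append(u)
            val = influence(G, S)
            phi[u] += (val - prev) / samples   # u's marginal contribution
            prev = val
    return sorted(phi, key=phi.get, reverse=True)[:k]

G = nx.karate_club_graph()
print(shapley_topk(G, k=5))   # candidate influential nodes
```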

Relevance: 20.00%

Abstract:

IEEE 802.16 standards for Wireless Metropolitan Area Networks (WMANs) include a mesh mode of operation for improving the coverage and throughput of the network. In this paper, we consider the problem of routing and centralized scheduling for such networks. We first fix the routing, which reduces the network to a tree. We then present a finite-horizon dynamic programming framework, from which we obtain various scheduling algorithms depending upon the cost function. Finally, we consider simpler suboptimal algorithms and compare their performance.
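
A skeletal sketch of finite-horizon dynamic programming by backward induction, the framework named above; the state space, actions, and cost in the toy instance are placeholders, since the paper derives different schedulers from different cost functions.

```python
def backward_induction(states, actions, horizon, cost, step):
    """cost(s, a): per-slot cost; step(s, a): deterministic next state."""
    V = {s: 0.0 for s in states}            # terminal value
    policy = []
    for t in reversed(range(horizon)):      # backward in time
        V_new, pi_t = {}, {}
        for s in states:
            best = min(actions, key=lambda a: cost(s, a) + V[step(s, a)])
            pi_t[s] = best
            V_new[s] = cost(s, best) + V[step(s, best)]
        V, policy = V_new, [pi_t] + policy
    return V, policy

# Toy instance: state = queue length at a mesh node, action = slots granted.
states, actions = range(6), range(3)
cost = lambda s, a: s                            # e.g. queue-length cost
step = lambda s, a: max(0, min(5, s - a + 1))    # one arrival per slot
V, policy = backward_induction(states, actions, 4, cost, step)
print(policy[0])   # slot grants chosen in the first slot, per state
```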

Relevance: 20.00%

Abstract:

Increasing network lifetime is important in wireless sensor and ad hoc networks. In this paper, we are concerned with algorithms that increase the network lifetime, and the amount of data delivered during that lifetime, by deploying multiple mobile base stations in the sensor network field. Specifically, we allow multiple mobile base stations to be deployed along the periphery of the sensor network field and develop algorithms to dynamically choose the locations of these base stations so as to improve network lifetime. We propose energy-efficient, low-complexity algorithms to determine the locations of the base stations: i) the Top-K-max algorithm, ii) the maximizing the minimum residual energy (Max-Min-RE) algorithm, and iii) the minimizing the residual energy difference (MinDiff-RE) algorithm. We show that the proposed base-station placement algorithms provide increased network lifetime and amount of data delivered compared to a single-base-station scenario as well as a multiple-static-base-stations scenario, and close to those obtained by solving an integer linear program (ILP) to determine the locations of the mobile base stations. We also investigate the lifetime gain when an energy-aware routing protocol is employed along with multiple base stations.
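
A hedged sketch of the Max-Min-RE idea: among candidate base-station sites on the periphery, choose the placement that maximizes the minimum residual energy across sensor nodes after a round. The energy model is a toy (transmit cost growing with distance to the nearest base station), and exhaustive search over placements stands in for the paper's low-complexity algorithm.

```python
import itertools
import math

def round_cost(node, bs_positions):
    """Toy per-round transmit energy: grows with distance to nearest BS."""
    d = min(math.dist(node, b) for b in bs_positions)
    return 0.1 + 0.01 * d ** 2

def max_min_re(nodes, energy, candidates, n_bs):
    best, best_min = None, -1.0
    for placement in itertools.combinations(candidates, n_bs):
        min_re = min(energy[i] - round_cost(p, placement)
                     for i, p in enumerate(nodes))
        if min_re > best_min:                # keep the fairest placement
            best, best_min = placement, min_re
    return best

nodes = [(2, 3), (5, 5), (8, 1), (4, 7)]     # sensor positions
energy = [1.0, 0.8, 0.9, 0.7]                # residual energies
periphery = [(0, 0), (0, 10), (10, 0), (10, 10)]  # candidate BS sites
print(max_min_re(nodes, energy, periphery, n_bs=2))
```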

Relevance: 20.00%

Abstract:

We develop new scheduling algorithms for the IEEE 802.16d OFDMA/TDD-based broadband wireless access system, in which radio resources of both time and frequency slots are dynamically shared by all users. Our objective is to provide a fair and efficient allocation to all users that satisfies their quality-of-service requirements.
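
The abstract does not name the allocation rule, so as a generic illustration of a fair-and-efficient time-frequency allocation of the kind described, here is a proportional-fair scheduler sketch: each slot goes to the user with the highest ratio of instantaneous rate to smoothed average throughput.

```python
import numpy as np

rng = np.random.default_rng(4)
n_users, n_slots = 4, 500
avg = np.full(n_users, 1e-3)      # smoothed throughput per user
served = np.zeros(n_users)

for _ in range(n_slots):
    rate = rng.rayleigh(1.0, n_users)      # per-slot channel rates
    u = int(np.argmax(rate / avg))         # proportional-fair metric
    served[u] += rate[u]
    avg = 0.99 * avg                        # exponential smoothing
    avg[u] += 0.01 * rate[u]

print(served / served.sum())      # long-run shares come out near-equal
```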