989 results for Sink nodes


Relevance:

10.00%

Publisher:

Abstract:

A linear-time approximate maximum-likelihood decoding algorithm on tail-biting trellises is presented that requires exactly two rounds on the trellis. It is an adaptation of an earlier algorithm, with the advantage that it reduces the time complexity from O(m log m) to O(m), where m is the number of nodes in the tail-biting trellis. A necessary condition for the output of the algorithm to differ from that of the ideal ML decoder is deduced, and simulation results on an AWGN channel are reported, using tail-biting trellises for two rate-1/2 convolutional codes with memory 4 and 6, respectively.
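The two-round ("wrap-around") structure can be sketched in a few lines. This is an illustrative toy, not the paper's algorithm: it uses a hard-decision Hamming metric and the small memory-2 rate-1/2 (7,5) convolutional code instead of the memory-4 and memory-6 codes simulated in the paper. Round 1 only initialises the state metrics; round 2 is decoded by traceback.

```python
# Toy wrap-around (two-round) Viterbi decoding on a tail-biting trellis.
# Code: rate-1/2, memory-2 (7,5); state = (previous input, input before it).

N_STATES = 4

def step(state, u):
    """One trellis transition: return (next_state, (out1, out2))."""
    b1, b0 = state >> 1, state & 1
    return ((u << 1) | b1, (u ^ b1 ^ b0, u ^ b0))

def encode_tail_biting(info):
    """Tail-biting encoding: the start state holds the last two info bits."""
    state = (info[-1] << 1) | info[-2]
    out = []
    for u in info:
        state, (o1, o2) = step(state, u)
        out += [o1, o2]
    return out

def viterbi_round(received, init_metric):
    """One Viterbi pass; returns final state metrics and survivor tables."""
    metric = list(init_metric)
    survivors = []
    for t in range(len(received) // 2):
        r1, r2 = received[2 * t], received[2 * t + 1]
        best = [None] * N_STATES     # per next state: (metric, prev, input)
        for s in range(N_STATES):
            for u in (0, 1):
                ns, (o1, o2) = step(s, u)
                m = metric[s] + (o1 != r1) + (o2 != r2)
                if best[ns] is None or m < best[ns][0]:
                    best[ns] = (m, s, u)
        survivors.append(best)
        metric = [best[s][0] for s in range(N_STATES)]
    return metric, survivors

def decode_two_rounds(received):
    """Round 1 initialises the metrics; round 2 is traced back."""
    metric1, _ = viterbi_round(received, [0] * N_STATES)
    metric2, surv = viterbi_round(received, metric1)
    s = min(range(N_STATES), key=lambda x: metric2[x])
    bits = []
    for table in reversed(surv):
        _, prev, u = table[s]
        bits.append(u)
        s = prev
    return bits[::-1]
```

On a noise-free received word the second round recovers the transmitted information bits exactly; under noise the output is only approximately ML, which is why the paper derives a condition for disagreement with the ideal decoder.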


In this paper, we describe an efficient coordinated checkpointing and recovery algorithm that works even when channels are non-FIFO and messages may be lost. Nodes are assumed to be autonomous, and they do not block while taking checkpoints. Based on local conditions, any process can request 'permission' from the previous coordinator to initiate a new checkpoint. Allowing multiple initiators of checkpoints avoids the bottleneck associated with a single initiator, but the algorithm permits only a single instance of the checkpointing process at any given time, thus avoiding much of the overhead associated with multiple concurrent initiators in distributed algorithms.
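The single-instance rule can be shown in miniature. This is a deliberately tiny sketch of just one aspect, the coordinator granting 'permission' to at most one initiator at a time, with invented names; it is not the paper's full non-blocking protocol.

```python
# Toy permission protocol: many processes may ask to initiate a checkpoint,
# but only one checkpointing instance is allowed at any given time.

class Coordinator:
    def __init__(self):
        self.in_progress = False
        self.current = None

    def request_permission(self, pid):
        if self.in_progress:
            return False           # another instance is running: refuse
        self.in_progress = True    # grant; this pid becomes the initiator
        self.current = pid
        return True

    def finish(self, pid):
        assert self.current == pid
        self.in_progress = False
        self.current = None

coord = Coordinator()
# Five processes concurrently request permission; only the first succeeds.
granted = [p for p in range(5) if coord.request_permission(p)]
```

Once the running instance finishes, the next requester is granted, so initiation can rotate among nodes without ever having two concurrent instances.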


We provide a survey of some of our recent results ([9], [13], [4], [6], [7]) on the analytical performance modeling of IEEE 802.11 wireless local area networks (WLANs). We first present extensions of Bianchi's decoupling approach ([1]) to the saturation analysis of IEEE 802.11e networks with multiple traffic classes. We have found that even when analysing WLANs with unsaturated nodes, the following state-dependent service model works well: when a certain set of nodes is nonempty, their channel attempt behaviour is obtained from the corresponding fixed-point analysis of the saturated system. We present our experience in using this approximation to model multimedia traffic over an IEEE 802.11e network using the enhanced DCF channel access (EDCA) mechanism. We have found that TCP-controlled file transfers, VoIP packet telephony, and streaming video in the IEEE 802.11e setting can all be modelled by this simple approximation.
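The saturation fixed point underlying this decoupling approach can be sketched for a single traffic class. The backoff parameters W and m below are illustrative 802.11b-style values, and `tau_of_p` is the standard Bianchi attempt-probability formula, not the multi-class 802.11e analysis of the surveyed papers.

```python
# Bianchi-style decoupling fixed point for n saturated nodes of one class.

def tau_of_p(p, W=32, m=5):
    """Attempt probability of a saturated node given collision probability p."""
    if abs(1.0 - 2.0 * p) < 1e-12:      # removable singularity at p = 1/2
        p += 1e-9
    return 2.0 * (1.0 - 2.0 * p) / (
        (1.0 - 2.0 * p) * (W + 1) + p * W * (1.0 - (2.0 * p) ** m))

def fixed_point(n, iters=2000, damp=0.1):
    """Damped iteration of tau = G(p) coupled with p = 1 - (1 - tau)^(n-1)."""
    p = 0.3
    for _ in range(iters):
        tau = tau_of_p(p)
        p = (1.0 - damp) * p + damp * (1.0 - (1.0 - tau) ** (n - 1))
    return tau, p
```

The returned pair (tau, p) is the decoupled attempt and collision probability; the same fixed-point structure, with per-class backoff parameters, is what the multi-class extensions iterate.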


In this paper we develop and numerically explore the modeling heuristic of using saturation attempt probabilities as state-dependent attempt probabilities in an IEEE 802.11e infrastructure network carrying packet telephone calls and TCP-controlled file downloads, using Enhanced Distributed Channel Access (EDCA). We build upon the fixed-point analysis and performance insights in [1]. When a certain number of nodes of each class are contending for the channel (i.e., have nonempty queues), their attempt probabilities are taken to be those obtained from the saturation analysis for that number of nodes. We then model the queue dynamics at the network nodes. With the proposed heuristic, the system evolution at channel slot boundaries becomes a Markov renewal process, and regenerative analysis yields the desired performance measures. The results obtained from this approach match well with ns-2 simulations. We find that, with the default IEEE 802.11e EDCA parameters for AC 1 and AC 3, the voice call capacity decreases if even one file download is initiated by some station. Subsequently, reducing the number of voice calls increases the file download capacity almost linearly (by 1/3 Mbps per voice call for the 11 Mbps PHY).
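The regenerative-analysis step can be illustrated in miniature: for a Markov chain observed at slot boundaries, any long-run performance measure equals the expected reward accumulated over a regeneration cycle divided by the expected cycle length. The two-state chain and reward below are invented for illustration and are not the 802.11e model itself.

```python
# Renewal-reward sketch: long-run reward rate = E[cycle reward]/E[cycle length],
# estimated by regenerating at every visit to state 0.
import random

random.seed(7)
P = {0: [(0, 0.6), (1, 0.4)], 1: [(0, 0.3), (1, 0.7)]}
REWARD = {0: 1.0, 1: 0.0}       # e.g. "slot carried a voice packet"

def step(s):
    u, acc = random.random(), 0.0
    for nxt, pr in P[s]:
        acc += pr
        if u < acc:
            return nxt
    return P[s][-1][0]

cycles, rewards, lengths = 0, 0.0, 0.0
cur_r, cur_l, s = 0.0, 0, 0
while cycles < 20000:
    cur_r += REWARD[s]
    cur_l += 1
    s = step(s)
    if s == 0:                  # cycle ends when the chain returns to state 0
        rewards += cur_r
        lengths += cur_l
        cur_r, cur_l = 0.0, 0
        cycles += 1

long_run = rewards / lengths    # estimate of the stationary reward rate
```

For this chain the stationary probability of state 0 is 3/7, so the estimate converges to 3/7; in the paper the same identity, applied to the Markov renewal process at channel slot boundaries, yields the throughput and delay measures.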


The complex web of interactions between the host immune system and the pathogen determines the outcome of any infection. A computational model of this interaction network, which encodes the complex interplay among host and bacterial components, forms a useful basis for improving the understanding of pathogenesis, filling knowledge gaps, and consequently identifying strategies to counter the disease. We have built an extensive model of the Mycobacterium tuberculosis host-pathogen interactome, consisting of 75 nodes corresponding to host and pathogen molecules, cells, cellular states or processes. Vaccination effects, clearance efficiencies due to drugs, and growth rates have also been encoded in the model. The system is modelled as a Boolean network. Virtual deletion experiments, multiple parameter scans and analysis of the system's response to perturbations indicate that disabling processes such as phagocytosis and phagolysosome fusion, or cytokines such as TNF-alpha and IFN-gamma, greatly impairs bacterial clearance, while removing cytokines such as IL-10 alongside bacterial defence proteins such as SapM greatly favours clearance. Simulations indicate a high propensity of the pathogen to persist under different conditions.
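The Boolean-network formalism can be illustrated with a deliberately tiny invented network; the paper's actual model has 75 nodes and far richer rules. A "virtual deletion" simply pins a node to False during the synchronous updates.

```python
# Toy Boolean network in the spirit of the host-pathogen model; node names
# and rules are invented for illustration only.

RULES = {
    "bacteria":  lambda s: s["bacteria"] and not s["clearance"],
    "phago":     lambda s: True,                       # host input, always on
    "tnf":       lambda s: s["phago"] and s["bacteria"],
    "ifng":      lambda s: s["bacteria"],
    "clearance": lambda s: s["phago"] and s["tnf"] and s["ifng"],
}

def simulate(steps=10, knockout=()):
    """Synchronous updates from an initial infection; knockouts pin to False."""
    state = {n: False for n in RULES}
    state["bacteria"] = True
    for _ in range(steps):
        state = {n: (False if n in knockout else f(state))
                 for n, f in RULES.items()}
    return state

wild_type = simulate()                   # clearance fires, bacteria eliminated
tnf_ko = simulate(knockout=("tnf",))     # virtual deletion of TNF
```

In this toy, deleting TNF disables clearance and the bacterium persists, which is the same qualitative behaviour the paper reports for its virtual deletions of TNF-alpha.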


We discuss the key issues in the deployment of sparse sensor networks. The network monitors several environmental parameters and is deployed in a semi-arid region for the benefit of small and marginal farmers. We begin by discussing the problems of an existing unreliable 1 sq km sparse network deployed in a village. The proposed solutions are implemented in a new cluster, a reliable 5 sq km network. Our contributions are twofold. First, we describe a novel methodology to deploy a sparse, reliable data-gathering sensor network and evaluate the "safe" or "reliable" distance between nodes using propagation models. Second, we address the problem of transporting data from rural aggregation servers to urban data centres. This paper tracks our steps in deploying a sensor network in a village in India, trying to provide better diagnosis for better crop management. Keywords - Rural, Agriculture, CTRS, Sparse.
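A "safe distance" of this kind can be estimated from a propagation model. The sketch below uses the standard log-distance path-loss model with invented link-budget numbers (transmit power, reference loss, path-loss exponent, receiver sensitivity, fade margin); a real deployment would substitute its measured values.

```python
# Safe inter-node distance from the log-distance path-loss model:
#   Pr(d) = Pt - PL(d0) - 10 n log10(d / d0)
# The node is "reliable" while Pr(d) stays above sensitivity + margin.
import math

def max_reliable_distance(tx_dbm=0.0, pl0_db=40.0, d0=1.0,
                          n=3.0, sensitivity_dbm=-94.0, margin_db=10.0):
    """Largest d (same units as d0) with Pr(d) >= sensitivity + margin."""
    budget = tx_dbm - pl0_db - (sensitivity_dbm + margin_db)
    return d0 * 10 ** (budget / (10 * n))
```

With these illustrative numbers the budget is 44 dB above the 1 m reference loss, giving a safe spacing of roughly 29 m; lowering the exponent n (more open terrain) stretches the distance, which is why the propagation model has to be fitted to the actual site.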


Erasure coding techniques are used to increase the reliability of distributed storage systems while minimizing storage overhead. Also of interest is minimization of the bandwidth required to repair the system following a node failure. In a recent paper, Wu et al. characterize the tradeoff between the repair bandwidth and the amount of data stored per node, and prove the existence of regenerating codes that achieve this tradeoff. In this paper, we introduce Exact Regenerating Codes, which are regenerating codes possessing the additional property of being able to duplicate the data stored at a failed node. Such codes require low processing and communication overheads, making the system practical and easy to maintain. An explicit construction of exact regenerating codes is provided for the minimum-bandwidth point on the storage-repair bandwidth tradeoff, relevant to distributed mail server applications. A subspace-based approach is provided and shown to yield necessary and sufficient conditions for a linear code to possess the exact regeneration property, as well as a proof of the uniqueness of our construction. Also included in the paper is an explicit construction of regenerating codes for the minimum-storage point, for parameters relevant to storage in peer-to-peer systems. This construction supports a variable number of nodes and can handle multiple simultaneous node failures. All constructions given in the paper are of low complexity, requiring a low field size in particular.
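The two extreme points of the storage-repair-bandwidth tradeoff of Wu et al. referred to above can be computed directly from the cut-set bound; B is the file size and (k, d) the reconstruction and repair degrees, with the example numbers below purely illustrative.

```python
# Minimum-storage (MSR) and minimum-bandwidth (MBR) points of the
# storage-repair-bandwidth tradeoff for a regenerating code.
from fractions import Fraction

def msr_point(B, k, d):
    """Return (per-node storage alpha, repair bandwidth gamma) at MSR."""
    alpha = Fraction(B, k)
    gamma = Fraction(B * d, k * (d - k + 1))
    return alpha, gamma

def mbr_point(B, k, d):
    """At MBR the per-node storage equals the repair bandwidth."""
    gamma = Fraction(2 * B * d, k * (2 * d - k + 1))
    return gamma, gamma
```

For B = 12 and (k, d) = (3, 4) this gives (alpha, gamma) = (4, 8) at the minimum-storage point and (16/3, 16/3) at the minimum-bandwidth point: MBR trades extra storage per node for strictly less repair traffic.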


This study reports the details of the finite element analysis of eleven shear-critical partially prestressed concrete T-beams having steel fibers over partial or full depth. Prestressed concrete T-beams having shear span to depth ratios of 2.65 and 1.59 and failing in shear have been analyzed using ANSYS. The ANSYS model accounts for nonlinear phenomena such as bond-slip of the longitudinal reinforcement, post-cracking tensile stiffness of the concrete, stress transfer across the cracked blocks of the concrete, and load sustenance through the bridging of steel fibers at the crack interface. The concrete is modeled using SOLID65, an eight-node brick element capable of simulating the cracking and crushing behavior of brittle materials. The reinforcements, such as deformed bars, prestressing wires and steel fibers, have been modeled discretely using LINK8, a 3D spar element. The slip between the reinforcement (rebar, fibers) and the concrete has been modeled using COMBIN39, a nonlinear spring element connecting the nodes of the LINK8 elements representing the reinforcement and the nodes of the SOLID65 elements representing the concrete. The ANSYS model correctly predicted the diagonal tension failure and shear compression failure of the prestressed concrete beams observed in the experiment. The capability of the model to capture the critical crack regions, loads and deflections for various types of shear failure in prestressed concrete beams has been illustrated.


Relay selection for cooperative communications has attracted considerable research interest recently. While several criteria have been proposed for selecting one or more relays and analyzed, mechanisms that perform the selection in a distributed manner have received relatively less attention. In this paper, we analyze a splitting algorithm for selecting the single best relay amongst a known number of active nodes in a cooperative network. We develop new and exact asymptotic analysis for computing the average number of slots required to resolve the best relay. We then propose and analyze a new algorithm that addresses the general problem of selecting the best Q >= 1 relays. Regardless of the number of relays, the algorithm selects the best two relays within 4.406 slots and the best three within 6.491 slots, on average. Our analysis also brings out an intimate relationship between multiple access selection and multiple access control algorithms.
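A splitting mechanism of the kind analysed can be sketched for single-best-relay selection with i.i.d. uniform metrics. This is an illustrative variant, not the exact algorithm of the paper: each slot is idle, a success, or a collision; a collision bisects the current interval, and an idle either drops to the lower half of a collided interval or lowers the search window.

```python
# Splitting-based selection of the single best relay. Each node knows only
# its own metric; the outcome of a slot (idle / success / collision) is the
# common feedback that drives the interval updates.
import random

def select_best(metrics):
    """Return (index of the best node, number of slots used)."""
    n = len(metrics)
    lo, hi = 1.0 - 1.0 / n, 1.0   # on average one node lies above lo
    split = False                 # True while resolving a collision
    slots = 0
    while True:
        slots += 1
        active = [i for i, x in enumerate(metrics) if lo < x <= hi]
        if len(active) == 1:
            return active[0], slots
        if len(active) >= 2:      # collision: keep only the upper half
            lo, hi, split = (lo + hi) / 2.0, hi, True
        elif split:               # idle inside a collision: take lower half
            lo, hi = 2.0 * lo - hi, lo
        else:                     # idle, no collision yet: lower the window
            hi, lo = lo, max(0.0, lo - 1.0 / n)
```

Averaging `slots` over many random metric draws reproduces the well-known constant average resolution time of such schemes, independent of the number of relays, which is the property the paper's asymptotic analysis makes exact.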


Glioblastoma (GBM; grade IV astrocytoma) is a very aggressive form of brain cancer with poor survival and few qualified predictive markers. This study integrates experimentally validated genes that showed specific upregulation in GBM along with their protein-protein interaction information. A system-level analysis was used to construct a GBM-specific network. Computation of topological parameters of the networks showed a scale-free pattern and hierarchical organization. From the large network involving 1,447 proteins, we synthesized subnetworks and annotated them with highly enriched biological processes. A careful dissection of the functional modules, important nodes, and their connections identified two novel intermediary molecules, CSK21 and protein phosphatase 1 alpha (PP1A), connecting the two subnetworks CDC2-PTEN-TOP2A-CAV1-P53 and CDC2-CAV1-RB-P53-PTEN, respectively. Real-time quantitative reverse transcription-PCR analysis revealed CSK21 to be moderately upregulated and PP1A to be overexpressed by 20-fold in GBM tumor samples. Immunohistochemical staining revealed nuclear expression of PP1A only in GBM samples. Thus, CSK21 and PP1A, whose functions are intimately associated with cell cycle regulation, might play a key role in gliomagenesis. Cancer Res; 70(16); 6437-47. (C)2010 AACR.


There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both design and planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model, putting together some important constraints based on an abstraction of a general network. The third part deals with solution procedures; it converts the network to a matrix-based system of equations, gives the characteristics of the matrix, and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and evolves a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. In addition, at every node there is the possibility of an input (power, water, a message, goods, etc.), an output, or neither. Normally, the network equations describe the flows amongst nodes through the arcs. These network equations couple the variables associated with nodes. Invariably, variables pertaining to arcs are constants; the required result is the flows through the arcs.

To solve the normal base problem, we are given input flows at nodes, output flows at nodes, and certain physical constraints on other variables at nodes, and we must find the flows through the network (variables at nodes will be referred to as across variables). The optimization problem involves selecting inputs at nodes so as to optimise an objective function; the objective may be a cost function based on the inputs to be minimised, a loss function, or an efficiency function. The above mathematical model can be solved using the Lagrange multiplier technique, since the equalities are strong compared to the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration. Stage one calculates the problem variables x and stage two the multipliers lambda. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure has been embedded into the first. This is called the total residue approach; it changes the equality constraints so as to obtain faster convergence of the iterations. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve the optimization problems. Two types of algorithms have been proposed: one based on the physics of the network and the other on the properties of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case. These algorithms are called the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The approach was to define an algorithm that is faster and uses minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
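For the special case of a quadratic cost with linear flow-conservation constraints, the Lagrange multiplier structure described above collapses to a single linear KKT solve, which makes a compact illustration. The three-arc network and unit arc costs below are invented for illustration.

```python
# Minimise 0.5 * sum(q_i * x_i^2) subject to A x = b (flow conservation)
# by solving the KKT system  [Q  A^T; A  0] [x; lambda] = [0; b].

def solve_linear(M, rhs):
    """Gaussian elimination with partial pivoting (pure Python)."""
    n = len(M)
    A = [row[:] + [rhs[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(col + 1, n):
            f = A[r][col] / A[col][col]
            for c in range(col, n + 1):
                A[r][c] -= f * A[col][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (A[i][n] - sum(A[i][j] * x[j] for j in range(i + 1, n))) / A[i][i]
    return x

def kkt_flow(q, A, b):
    """Return (arc flows x, node multipliers lambda)."""
    m, n = len(A), len(q)
    K = [[0.0] * (n + m) for _ in range(n + m)]
    for i in range(n):
        K[i][i] = q[i]
        for j in range(m):
            K[i][n + j] = A[j][i]      # A^T block
            K[n + j][i] = A[j][i]      # A block
    sol = solve_linear(K, [0.0] * n + list(b))
    return sol[:n], sol[n:]
```

For a 3-node network with arcs 1-2, 1-3, 2-3, one unit injected at node 1 and withdrawn at node 3, and unit costs, the optimum splits the flow as (1/3, 2/3, 1/3): the multipliers are exactly the nodal "prices" that the paper's two-stage iteration computes for the general nonlinear case.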


Lakes serve as sites for terrestrially fixed carbon to be remineralized and transferred back to the atmosphere. Their role in regional carbon cycling is especially important in the Boreal Zone, where lakes can cover up to 20% of the land area. Boreal lakes are often characterized by a brown water colour, which implies high levels of dissolved organic carbon from the surrounding terrestrial ecosystem, but the load of inorganic carbon from the catchment is largely unknown. Organic carbon is transformed to methane (CH4) and carbon dioxide (CO2) in biological processes that raise lake-water gas concentrations above atmospheric equilibrium, making boreal lakes sources of these important greenhouse gases. However, flux estimates are often based on sporadic sampling and modelling, and actual flux measurements are scarce. Thus, the detailed temporal flux dynamics of greenhouse gases are still largely unknown.

One aim here was to reveal the natural dynamics of CH4 and CO2 concentrations and fluxes in a small boreal lake. The other aim was to test, in this lake, the applicability of a measuring technique for CO2 flux, the eddy covariance (EC) technique, and of a computational method for estimating primary production and community respiration, both commonly used in terrestrial research. Continuous surface-water CO2 concentration measurements, also needed in free-water applications to estimate primary production and community respiration, were used over two open-water periods in a study of CO2 concentration dynamics. Traditional methods were also used to measure gas concentrations and fluxes. The study lake, Valkea-Kotinen, is a small, humic, headwater lake within an old-growth forest catchment with no local anthropogenic disturbance; possible changes in gas dynamics thus reflect the natural variability in lake ecosystems.

CH4 accumulated under the ice and in the hypolimnion during summer stratification. The surface-water CH4 concentration was always above atmospheric equilibrium, and thus the lake was a continuous source of CH4 to the atmosphere. However, the annual CH4 fluxes were small, i.e. 0.11 mol m-2 yr-1, and the timing of the fluxes differed from that of other published estimates. The highest fluxes are usually measured in spring after ice melt, but in Lake Valkea-Kotinen CH4 was effectively oxidised in spring, and the highest effluxes occurred in autumn after the summer stratification period. CO2 also accumulated under the ice, and the hypolimnetic CO2 concentration increased steadily during the stratification period. The surface-water CO2 concentration was highest in spring and in autumn, whereas during stable stratification it was sometimes below atmospheric equilibrium. It showed diel, daily and seasonal variation; the diel cycle was clearly driven by light and thus reflected the metabolism of the lacustrine ecosystem. However, the diel cycle was sometimes blurred by injections of CO2-rich hypolimnetic water, so the surface-water CO2 concentration was also controlled by stratification dynamics. The highest CO2 fluxes were measured in spring, in autumn, and during those hypolimnetic injections, which caused bursts of CO2 comparable with the spring and autumn fluxes. The annual fluxes averaged 77 (±11 SD) g C m-2 yr-1. To estimate the importance of the lake in recycling terrestrial carbon, the flux was normalized to the catchment area and compared with net ecosystem production estimates of -50 to 200 g C m-2 yr-1 from unmanaged forests in corresponding temperature and precipitation regimes in the literature. Within this range, the flux of Lake Valkea-Kotinen ranged from increasing the carbon source of the surrounding forest by 20% to decreasing its sink by 5%. During a 5-day testing period in autumn, the free-water approach gave primary production and community respiration estimates 5- and 16-fold those of traditional bottle incubations, respectively. These results parallel findings in the literature, and both methods adopted from the terrestrial community proved useful in lake studies. A large percentage of the EC data was rejected because the prerequisites of the method were not fulfilled; however, the amount of accepted data remained large compared with what would be feasible with traditional methods. The EC method revealed an underestimation by the widely used gas exchange model and suggests simultaneous measurements of actual turbulence at the water surface, with comparison of the different gas flux methods, to revise the parameterization of the gas transfer velocity used in the models.


In some bimolecular diffusion-controlled electron transfer (ET) reactions, such as ion recombination (IR), both solvent polarization relaxation and the mutual diffusion of the reacting ion pair may determine the rate and even the yield of the reaction. However, a full treatment with these two reaction coordinates is a challenging task and has been left mostly unsolved. In this work, we address this problem by developing a dynamic theory combining ideas from the ET reaction literature and from barrierless chemical reactions. Two-dimensional coupled Smoluchowski equations are employed to compute the time evolution of the joint probability distributions of the reactant, P^(1)(X, R, t), and the product, P^(2)(X, R, t), where X, as is usual in ET reactions, describes the solvent polarization coordinate and R is the distance between the reacting ion pair. The reaction is described by a reaction line (sink), a function of X and R obtained by imposing a condition of equal energy on the initial and final states of the reacting ion pair. The resulting two-dimensional coupled equations of motion have been solved numerically using an alternating direction implicit (ADI) scheme (Peaceman and Rachford, J. Soc. Ind. Appl. Math. 1955, 3, 28). The results reveal an interesting interplay between polarization relaxation and translational dynamics. The following new results have been obtained. (i) For solvents with slow longitudinal polarization relaxation, the escape probability decreases drastically as the polarization relaxation time increases. We attribute this to caging by the polarization of the surrounding solvent. As expected, for solvents with fast polarization relaxation, the escape probability is independent of the polarization relaxation time. (ii) In the slow relaxation limit, there is a significant dependence of the escape probability and the average rate on the initial solvent polarization, again displaying the effects of polarization caging. The escape probability increases, and the average rate decreases, with increasing initial polarization. Again, in the fast polarization relaxation limit, the initial polarization has no effect on the escape probability or the average rate of IR. (iii) In the normal and barrierless regions, the dependence of the escape probability and the rate of IR on the initial polarization is stronger than in the inverted region. (iv) Because of the involvement of dynamics along the R coordinate, an asymmetric parabolic (that is, non-Marcus) energy gap dependence of the rate is observed.
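The Peaceman-Rachford ADI machinery cited above can be illustrated on the bare two-dimensional diffusion equation, leaving out the potential, the coordinate-dependent diffusivities and the sink line of the actual theory. A pure-Python sketch with zero (absorbing) boundary values:

```python
# Peaceman-Rachford ADI for dP/dt = D (Pxx + Pyy) on a square grid,
# P = 0 on the boundary; r = D*dt/h**2. Each half step is implicit in one
# direction (tridiagonal solve) and explicit in the other.
import math

def thomas(a, b, c, d):
    """Solve a tridiagonal system; a sub-, b main, c super-diagonal."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        den = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / den
        dp[i] = (d[i] - a[i] * dp[i - 1]) / den
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def adi_step(P, r):
    n = len(P)
    a = [-r / 2.0] * n; b = [1.0 + r] * n; c = [-r / 2.0] * n
    half = [[0.0] * n for _ in range(n)]
    for j in range(n):                     # half step 1: implicit in x
        d = []
        for i in range(n):
            up = P[i][j + 1] if j + 1 < n else 0.0
            dn = P[i][j - 1] if j - 1 >= 0 else 0.0
            d.append(P[i][j] + (r / 2.0) * (up - 2.0 * P[i][j] + dn))
        col = thomas(a, b, c, d)
        for i in range(n):
            half[i][j] = col[i]
    new = [[0.0] * n for _ in range(n)]
    for i in range(n):                     # half step 2: implicit in y
        d = []
        for j in range(n):
            up = half[i + 1][j] if i + 1 < n else 0.0
            dn = half[i - 1][j] if i - 1 >= 0 else 0.0
            d.append(half[i][j] + (r / 2.0) * (up - 2.0 * half[i][j] + dn))
        new[i] = thomas(a, b, c, d)
    return new

n = 21
P = [[math.exp(-((i - 10) ** 2 + (j - 10) ** 2) / 10.0) for j in range(n)]
     for i in range(n)]
mass0 = sum(map(sum, P))
for _ in range(20):
    P = adi_step(P, 0.5)
mass1 = sum(map(sum, P))   # probability leaks out through the absorbing edge
```

Each half step only requires fast tridiagonal solves, which is what makes ADI attractive for the coupled (X, R) equations; the full theory adds the drift, the coupling between P^(1) and P^(2), and the sink line to these same sweeps.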


In a storage system where individual storage nodes are prone to failure, the redundant storage of data in a distributed manner across multiple nodes is a must to ensure reliability. Reed-Solomon codes possess the reconstruction property under which the stored data can be recovered by connecting to any k of the n nodes in the network across which data is dispersed. This property can be shown to lead to vastly improved network reliability over simple replication schemes. Also of interest in such storage systems is the minimization of the repair bandwidth, i.e., the amount of data needed to be downloaded from the network in order to repair a single failed node. Reed-Solomon codes perform poorly here as they require the entire data to be downloaded. Regenerating codes are a new class of codes which minimize the repair bandwidth while retaining the reconstruction property. This paper provides an overview of regenerating codes including a discussion on the explicit construction of optimum codes.


A distributed system is a collection of networked autonomous processing units which must work in a cooperative manner. Currently, large-scale distributed systems, such as various telecommunication and computer networks, are abundant and used in a multitude of tasks. The field of distributed computing studies what can be computed efficiently in such systems. Distributed systems are usually modelled as graphs where nodes represent the processors and edges denote communication links between processors. This thesis concentrates on the computational complexity of the distributed graph colouring problem. The objective of the graph colouring problem is to assign a colour to each node in such a way that no two nodes connected by an edge share the same colour. In particular, it is often desirable to use only a small number of colours. This task is a fundamental symmetry-breaking primitive in various distributed algorithms. A graph that has been coloured in this manner using at most k different colours is said to be k-coloured. This work examines the synchronous message-passing model of distributed computation: every node runs the same algorithm, and the system operates in discrete synchronous communication rounds. During each round, a node can communicate with its neighbours and perform local computation. In this model, the time complexity of a problem is the number of synchronous communication rounds required to solve the problem. It is known that 3-colouring any k-coloured directed cycle requires at least ½(log* k - 3) communication rounds and is possible in ½(log* k + 7) communication rounds for all k ≥ 3. This work shows that for any k ≥ 3, colouring a k-coloured directed cycle with at most three colours is possible in ½(log* k + 3) rounds. In contrast, it is also shown that for some values of k, colouring a directed cycle with at most three colours requires at least ½(log* k + 1) communication rounds. 
Furthermore, in the case of directed rooted trees, reducing a k-colouring into a 3-colouring requires at least log* k + 1 rounds for some k and is possible in log* k + 3 rounds for all k ≥ 3. The new positive and negative results are derived using computational methods, as the existence of distributed colouring algorithms corresponds to the colourability of so-called neighbourhood graphs. The colourability of these graphs is analysed using Boolean satisfiability (SAT) solvers. Finally, this thesis shows that similar methods are applicable in capturing the existence of distributed algorithms for other graph problems, such as the maximal matching problem.
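The iterated colour-reduction technique behind these bounds, in the style of Cole and Vishkin, can be sketched as follows. This shows only the basic reduction step, which shrinks k colours to about 2 log2 k per round; the constant-tuned algorithms and the final reduction to exactly 3 colours analysed in the thesis need a few extra, more careful rounds.

```python
# One synchronous round of Cole-Vishkin-style colour reduction on a
# directed cycle: each node looks at its successor's colour, finds the
# lowest bit position i where the two colours differ, and recolours
# itself 2*i + (its own bit i). Properness is preserved: if two
# neighbours picked the same (i, bit), bit i of their old colours would
# have to be both equal and different.

def cv_round(colours):
    n = len(colours)
    out = []
    for v in range(n):
        c, s = colours[v], colours[(v + 1) % n]
        d = c ^ s                        # nonzero for a proper colouring
        i = (d & -d).bit_length() - 1    # lowest differing bit position
        out.append(2 * i + ((c >> i) & 1))
    return out

def is_proper(colours):
    n = len(colours)
    return all(colours[v] != colours[(v + 1) % n] for v in range(n))

colours = [5, 12, 7, 3, 9, 0, 14, 2]     # distinct IDs = initial colouring
for _ in range(3):
    colours = cv_round(colours)
```

Starting from 4-bit identifiers, three rounds already leave at most the six colours {0, ..., 5}, and iterating the step reaches this plateau after O(log* k) rounds for any initial k, matching the log* barrier discussed in the thesis.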