52 results for Distributed replication system
Abstract:
Regenerating codes are a class of recently developed codes for distributed storage that, like Reed-Solomon codes, permit data recovery from any subset of k nodes within the n-node network. However, regenerating codes possess, in addition, the ability to repair a failed node by connecting to an arbitrary subset of d nodes. It has been shown that for the case of functional repair, there is a tradeoff between the amount of data stored per node and the bandwidth required to repair a failed node. A special case of functional repair is exact repair, where the replacement node is required to store data identical to that in the failed node. Exact repair is of interest as it greatly simplifies system implementation. The first result of this paper is an explicit, exact-repair code for the point on the storage-bandwidth tradeoff corresponding to the minimum possible repair bandwidth, for the case d = n-1. This code has a particularly simple graphical description and, most interestingly, can carry out exact repair without any need to perform arithmetic operations. We term this ability of the code to perform repair through mere transfer of data "repair by transfer." The second result of this paper shows that the interior points on the storage-bandwidth tradeoff cannot be achieved under exact repair, thus pointing to the existence of a separate tradeoff under exact repair. Specifically, we identify a set of scenarios, which we term "helper node pooling," and show that it is the necessity to satisfy such scenarios that overconstrains the system.
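The graphical description lends itself to a short sketch. Below is a hedged Python illustration of the placement-and-transfer idea (the symbol values stand in for coded symbols of an underlying code, which is omitted): one symbol sits on each edge of the complete graph K_n, each node stores the symbols of its incident edges, and a failed node is repaired by each helper handing over the single symbol it shares with it, with no arithmetic.

```python
from itertools import combinations

def place_symbols(n, symbols):
    """Place one symbol on each edge of the complete graph K_n.
    Each node stores the symbols of its n-1 incident edges."""
    edges = list(combinations(range(n), 2))
    assert len(symbols) == len(edges)
    nodes = {v: {} for v in range(n)}
    for edge, s in zip(edges, symbols):
        for v in edge:
            nodes[v][frozenset(edge)] = s
    return nodes

def repair_by_transfer(nodes, failed):
    """Repair the failed node: every surviving node transfers the
    single symbol it shares with it -- a copy, not a computation."""
    return {edge: store[edge]
            for v, store in nodes.items() if v != failed
            for edge in store if failed in edge}
```

Each helper contributes exactly one symbol, so the repair downloads d = n-1 symbols in total, and the replacement node ends up storing exactly what the failed node stored.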
Abstract:
Channel-aware assignment of subchannels to users in the downlink of an OFDMA system requires extensive feedback of channel state information (CSI) to the base station. Since bandwidth is scarce, schemes that limit feedback are necessary. We develop a novel, low-feedback, distributed splitting-based algorithm called SplitSelect to opportunistically assign each subchannel to its most suitable user. SplitSelect explicitly handles the multiple access control aspects associated with CSI feedback, and scales well with the number of users. In it, each user locally maintains a scheduling metric for each subchannel according to a scheduling criterion. The goal is to select, for each subchannel, the user with the highest scheduling metric. At any time, each user contends for the subchannel for which it has the largest scheduling metric among the unallocated subchannels. A tractable asymptotic analysis of a system with many users is central to SplitSelect's simple design. Extensive simulation results demonstrate the speed with which subchannels and users are paired. The net data throughput, when the time overhead of selection is accounted for, is shown to be substantially better than that of several schemes proposed in the literature. We also show how fairness and user prioritization can be ensured by suitably defining the scheduling metric.
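As a point of reference, the selection goal that SplitSelect pursues in a distributed fashion can be stated centrally in a few lines (a hedged sketch; the function name and data layout are illustrative, not from the paper):

```python
def best_user_per_subchannel(metrics):
    """metrics[u][s]: scheduling metric of user u on subchannel s.
    Return, for every subchannel, the user with the highest metric --
    the assignment SplitSelect converges to without centralized CSI."""
    n_users, n_sub = len(metrics), len(metrics[0])
    return {s: max(range(n_users), key=lambda u: metrics[u][s])
            for s in range(n_sub)}
```

The centralized version needs every metric at the base station; the point of the distributed scheme is to reach the same pairing with only limited contention-based feedback.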
Abstract:
There is considerable pressure on developed and developing countries to produce low-emission power, and distributed generation (DG) is found to be one of the most viable ways to achieve this. DG generally makes use of renewable energy sources such as wind, micro turbines, and photovoltaics, which produce power with minimal greenhouse gas emissions. When installing a DG, it is important to determine its size and optimal location so as to minimize network expansion and line losses. In this paper, a methodology to locate the optimal site for a DG installation, with the objective of minimizing the net transmission losses, is presented. The methodology is based on the concept of relative electrical distance (RED) between the DG and the load points. This approach helps identify new DG location(s) without the need to conduct repeated power flows. To validate the methodology, case studies are carried out on a 20-node, 66 kV system, a part of Karnataka Transco, and results are presented.
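A minimal sketch of a RED computation, assuming the standard partitioning of the bus admittance matrix into load (L) and generator (G) blocks; the sign and absolute-value conventions below follow common formulations of relative electrical distance and may differ in detail from the paper:

```python
import numpy as np

def relative_electrical_distance(Y_LL, Y_LG):
    """From the partitioned bus admittance matrix, F_LG = -inv(Y_LL) @ Y_LG
    relates load-bus voltages to generator-bus voltages; the relative
    electrical distances are then RED = 1 - |F_LG| (smaller = electrically
    closer). Entry (i, j) is the distance of load bus i from generator j."""
    F_LG = -np.linalg.solve(Y_LL, Y_LG)
    return 1.0 - np.abs(F_LG)
```

A candidate DG bus whose column of distances to the load buses is smallest is electrically closest to the loads, which is the intuition behind placing the DG there to reduce losses.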
Abstract:
Sesbania mosaic virus (SeMV) is a positive-stranded RNA virus belonging to the genus Sobemovirus. Construction of an infectious clone is an essential step for deciphering virus gene functions in vivo. Using an Agrobacterium-based transient expression system, we show that SeMV icDNA is infectious on Sesbania grandiflora and Cyamopsis tetragonoloba plants. The efficiency of icDNA infection was significantly higher on Cyamopsis plants than on Sesbania grandiflora. The coat protein could be detected within 6 days post infiltration in the infiltrated leaves. Different species of viral RNA (double-stranded and single-stranded genomic and subgenomic RNA) could be detected upon northern analysis, suggesting that complete replication had taken place. Based on analysis of the sequences at the genomic termini of progeny RNA from SeMV icDNA-infiltrated leaves and those of its 3' and 5' terminal deletion mutants, we propose a possible mechanism for 3' and 5' end repair in vivo. Mutation of the cleavage sites in the polyproteins encoded by ORF 2 resulted in complete loss of infection by the icDNA, suggesting the importance of correct polyprotein processing at all four cleavage sites for viral replication. Complementation analysis suggested that ORF 2 gene products can act in trans. However, the trans-acting ability of ORF 2 gene products was abolished upon deletion of the N-terminal hydrophobic domain of polyproteins 2a and 2ab, suggesting that these products necessarily function at the replication site, where they are anchored to membranes.
Abstract:
In a communication system in which K nodes communicate with a central sink node, the following problem of selection often occurs. Each node maintains a preference number called a metric, which is not known to other nodes. The sink node must find the `best' node with the largest metric. The local nature of the metrics requires the selection process to be distributed. Further, the selection needs to be fast in order to increase the fraction of time available for data transmission using the selected node and to handle time-varying environments. While several selection schemes have been proposed in the literature, each has its own shortcomings. We propose a novel, distributed selection scheme that generalizes the best features of the timer scheme, which requires minimal feedback but does not guarantee successful selection, and the splitting scheme, which requires more feedback but guarantees successful selection. The proposed scheme introduces several new ideas into the design of the timer and splitting schemes. It explicitly accounts for feedback overheads and guarantees selection of the best node. We analyze and optimize the performance of the scheme and show that it is scalable, reliable, and fast. We also present new insights about the optimal timer scheme.
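To make one of the two building blocks concrete, here is a hedged sketch of a basic splitting scheme (a simplified binary search on a threshold, not the paper's combined timer-splitting scheme): in each slot, nodes whose metric exceeds the current threshold transmit, and the ternary idle/success/collision feedback from the sink narrows the search until the best node transmits alone.

```python
def splitting_select(metrics, max_slots=64):
    """Select the largest metric (assumed distinct, in (0, 1)) using
    per-slot ternary feedback: idle (0 transmitters), success (1),
    or collision (>= 2)."""
    lo, hi = 0.0, 1.0
    for _ in range(max_slots):
        th = (lo + hi) / 2
        contenders = [m for m in metrics if m > th]
        if len(contenders) == 1:   # success: the best node is selected
            return contenders[0]
        if contenders:             # collision: threshold was too low
            lo = th
        else:                      # idle: threshold was too high
            hi = th
    return None                    # selection not resolved in time
```

This guarantees selection of the best node but costs one feedback message per slot, which is the overhead the proposed hybrid scheme is designed to reduce.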
Abstract:
Background: Peste des petits ruminants virus (PPRV) is a non-segmented, negative-strand RNA virus of the genus Morbillivirus within the family Paramyxoviridae. Negative-strand RNA viruses are known to carry the nucleocapsid (N) protein, phosphoprotein (P), and RNA polymerase (L protein) packaged within the virion, which together possess all activities required for transcription, post-transcriptional modification of mRNA, and replication. In order to understand the mechanism of transcription and replication of the virus, an in vitro transcription reconstitution system is required. In the present work, an in vitro transcription system has been developed with ribonucleoprotein (RNP) complex purified from virus-infected cells, as well as partially purified recombinant polymerase (L-P) complex from insect cells, along with an N-RNA (genomic RNA encapsidated by N protein) template isolated from virus-infected cells. Results: RNP complex isolated from virus-infected cells and recombinant L-P complex purified from insect cells were used to reconstitute transcription on the N-RNA template. The requirements for this transcription reconstitution have been defined. Transcription of viral genes in the in vitro system was confirmed by PCR amplification of cDNAs corresponding to individual transcripts using gene-specific primers. To measure the relative expression levels of viral transcripts, real-time PCR analysis was carried out. qPCR analysis of the transcription products made in vitro showed a gradient of polarity of transcription from the 3' end to the 5' end of the genome, similar to that exhibited by the virus in infected cells. Conclusion: This report describes, for the first time, the development of an in vitro transcription reconstitution system for PPRV with RNP complex purified from infected cells and recombinant L-P complex expressed in insect cells. Both complexes were able to synthesize all the mRNA species in vitro, exhibiting a gradient of polarity in transcription.
Abstract:
The envelope protein (E1-E2) of hepatitis C virus (HCV) is a major component of the viral structure. The glycosylated envelope protein is considered important for initiation of infection by binding to cellular receptor(s) and is also known as one of the major antigenic targets of the host immune response. The present study aimed to identify mouse monoclonal antibodies that inhibit binding of HCV virus-like particles to target cells. The first step in this direction was to generate recombinant HCV-like particles (HCV-LPs) specific for genotype 3a of HCV (prevalent in India) using the genes encoding the core, E1, and E2 envelope proteins in a baculovirus expression system. The purified HCV-LPs were characterized by ELISA and electron microscopy and were used to generate monoclonal antibodies (mAbs) in mice. Two monoclonal antibodies (E8G9 and H1H10), specific for the E2 region of the envelope protein of HCV genotype 3a, were found to reduce virus binding to Huh7 cells. However, the mAbs generated against HCV genotype 1b (D2H3, G2C7, E1B11) were not as effective. More importantly, mAb E8G9 showed significant inhibition of virus entry in the HCV JFH1 cell culture system. Finally, the epitopic regions on the E2 protein that bind to the mAbs have also been identified. The results suggest a new therapeutic strategy and provide proof of concept that mAbs against HCV-LPs could be effective in preventing virus entry into liver cells to block HCV replication.
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used, while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple, yet powerful, framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can provide one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
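As a toy contrast between whole-file decoding and lightweight repair (this is an illustration of erasure-decoding-based repair in general, not the paper's two-code framework), a single XOR parity already allows one failed node to be repaired from its survivors without reconstructing the file:

```python
def xor_blocks(blocks):
    """Bytewise XOR of equal-length blocks."""
    out = bytes(len(blocks[0]))
    for b in blocks:
        out = bytes(x ^ y for x, y in zip(out, b))
    return out

# Four nodes store d1, d2, d3 and a parity p = d1 ^ d2 ^ d3.
# If any single node fails, the XOR of the remaining three blocks
# restores its contents exactly.
def repair_failed(surviving_blocks):
    return xor_blocks(surviving_blocks)
```

XOR parity tolerates only one failure; the appeal of the framework in the abstract is that it lets stronger, traditional erasure codes play this role while keeping repair traffic to a fraction of the file.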
Abstract:
In this paper, we analyze the coexistence of a primary and a secondary (cognitive) network when both networks use the IEEE 802.11 based distributed coordination function for medium access control. Specifically, we consider the problem of channel capture by a secondary network that uses spectrum sensing to determine the availability of the channel, and its impact on the primary throughput. We integrate the notion of transmission slots in Bianchi's Markov model with the physical time slots, to derive the transmission probability of the secondary network as a function of its scan duration. This is used to obtain analytical expressions for the throughput achievable by the primary and secondary networks. Our analysis considers both saturated and unsaturated networks. By performing a numerical search, the secondary network parameters are selected to maximize its throughput for a given level of protection of the primary network throughput. The theoretical expressions are validated using extensive simulations carried out in Network Simulator 2. Our results provide critical insights into the performance and robustness of different schemes for medium access by the secondary network. In particular, we find that channel capture by the secondary network does not significantly impact the primary throughput, and that simply increasing the secondary contention window size is only marginally inferior to silent-period based methods in terms of throughput performance.
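The analysis builds on Bianchi's model, whose core is a fixed point between a station's transmission probability τ and its conditional collision probability p. A hedged sketch of that standard fixed point (for n saturated stations, minimum contention window W, and m backoff stages; the paper's model adds scan-duration and slot-integration details on top of this):

```python
def bianchi_fixed_point(n, W=32, m=5, iters=10_000):
    """Iterate the standard DCF fixed point:
         p   = 1 - (1 - tau)^(n-1)
         tau = 2(1-2p) / ((1-2p)(W+1) + p*W*(1 - (2p)^m))
    Damped iteration is used for numerical stability. Returns (tau, p)."""
    tau = 0.1
    for _ in range(iters):
        p = 1.0 - (1.0 - tau) ** (n - 1)
        new_tau = (2.0 * (1.0 - 2.0 * p)
                   / ((1.0 - 2.0 * p) * (W + 1)
                      + p * W * (1.0 - (2.0 * p) ** m)))
        tau = 0.5 * tau + 0.5 * new_tau  # damping
    return tau, p
```

As expected from the model, the per-station transmission probability falls and the collision probability rises as the number of contending stations grows.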
Abstract:
Human La protein has been implicated in facilitating the internal initiation of translation as well as replication of hepatitis C virus (HCV) RNA. Previously, we demonstrated that La interacts with the HCV internal ribosome entry site (IRES) around the GCAC motif near the initiator AUG within stem-loop IV by its RNA recognition motif (RRM) (residues 112 to 184) and influences HCV translation. In this study, we have deciphered the role of this interaction in HCV replication in a hepatocellular carcinoma cell culture system. We incorporated mutation of the GCAC motif in an HCV monocistronic subgenomic replicon and a pJFH1 construct which altered the binding of La and checked HCV RNA replication by reverse transcriptase PCR (RT-PCR). The mutation drastically affected HCV replication. Furthermore, to address whether the decrease in replication is a consequence of translation inhibition or not, we incorporated the same mutation into a bicistronic replicon and observed a substantial decrease in HCV RNA levels. Interestingly, La overexpression rescued this inhibition of replication. More importantly, we observed that the mutation reduced the association between La and NS5B. The effect of the GCAC mutation on the translation-to-replication switch, which is regulated by the interplay between NS3 and La, was further investigated. Additionally, our analyses of point mutations in the GCAC motif revealed distinct roles of each nucleotide in HCV replication and translation. Finally, we showed that a specific interaction of the GCAC motif with human La protein is crucial for linking 5' and 3' ends of the HCV genome. Taken together, our results demonstrate the mechanism of regulation of HCV replication by interaction of the cis-acting element GCAC within the HCV IRES with human La protein.
Abstract:
Adaptive Mesh Refinement (AMR) is a method that dynamically varies the spatio-temporal resolution of localized mesh regions in numerical simulations, based on the strength of the solution features. In-situ visualization plays an important role in analyzing the time-evolving characteristics of the domain structures. Continuous visualization of the output data across timesteps enables a better study of the underlying domain and the model used for simulating it. In this paper, we develop strategies for continuous online visualization of time-evolving data for AMR applications executed on GPUs. We reorder the meshes for computations on the GPU based on the user's input specifying the subdomain to be visualized. This makes the data available for visualization at a faster rate. We then perform asynchronous execution of the visualization steps and fix-up operations on the CPUs while the GPU advances the solution. By performing experiments on Tesla S1070 and Fermi C2070 clusters, we found that our strategies result in a 60% improvement in response time and a 16% improvement in the rate of visualization of frames over the existing strategy of performing fix-ups and visualization at the end of the timesteps.
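The overlap strategy can be sketched in a few lines of Python (hedged: a worker thread stands in for the CPU-side visualization, and the function names are illustrative): while the solver advances the solution for step t, the previous step's frame is still being rendered concurrently, rather than all frames being produced after the run.

```python
from concurrent.futures import ThreadPoolExecutor

def run_with_insitu_viz(advance, visualize, n_steps, state):
    """Overlap visualization of timestep t with computation of t+1.
    `advance(state)` computes the next solution state; `visualize(state, t)`
    renders a frame on a separate worker thread."""
    with ThreadPoolExecutor(max_workers=1) as viz_pool:
        pending = None
        for t in range(n_steps):
            new_state = advance(state)        # solver advances step t
            if pending is not None:
                pending.result()              # frame for step t-1 finishes
            pending = viz_pool.submit(visualize, new_state, t)
            state = new_state
        if pending is not None:
            pending.result()                  # last frame
    return state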
Abstract:
The timer-based selection scheme is a popular, simple, and distributed scheme that is used to select the best node from a set of available nodes. In it, each node sets a timer as a function of a local preference number called a metric, and transmits a packet when its timer expires. The scheme ensures that the timer of the best node, which has the highest metric, expires first. However, it fails to select the best node if another node transmits a packet within Δ s of the best node's transmission. We derive the optimal timer mapping that maximizes the average success probability for the practical scenario in which the number of nodes in the system is unknown but its probability distribution is known. We show that it has a special discrete structure, and present a recursive characterization to determine it. We benchmark its performance against ad hoc approaches proposed in the literature, and show that it delivers significant gains. New insights about the optimality of some ad hoc approaches are also developed.
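The failure mode is easy to simulate. Below is a hedged sketch using the common linear timer mapping (the paper's point is precisely that the optimal mapping is not this continuous one, but a discrete one): metrics in [0, 1] map to timers T(1 - metric), so the best node expires first, and selection succeeds only if the runner-up's timer expires more than Δ later.

```python
def timer_selection_succeeds(metrics, delta, T=1.0):
    """Each node sets its timer to T * (1 - metric), so the node with
    the largest metric expires first. Selection fails if the next
    timer expires within `delta` of the earliest one."""
    timers = sorted(T * (1.0 - m) for m in metrics)
    return len(timers) == 1 or timers[1] - timers[0] > delta
```

Under this mapping, success hinges entirely on the gap between the top two metrics, which is why the choice of metric-to-timer mapping matters so much.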
Abstract:
Multi-packet reception (MPR) promises significant throughput gains in wireless local area networks (WLANs) by allowing nodes to transmit even in the presence of ongoing transmissions in the medium. However, the medium access control (MAC) layer must now be redesigned to facilitate, rather than discourage, these overlapping transmissions. We investigate asynchronous MPR MAC protocols, which accomplish this by controlling node behavior based on the number of ongoing transmissions in the channel. The protocols use the backoff timer mechanism of the distributed coordination function, which makes them practically appealing. We first highlight a unique problem of acknowledgment delays that arises in asynchronous MPR, and investigate a solution that modifies the medium access rules to reduce these delays and increase system throughput in the single-receiver scenario. We develop a general renewal-theoretic fixed-point analysis that leads to expressions for the saturation throughput, packet dropping probability, and average head-of-line packet delay. We also model and analyze the practical scenario in which nodes may incorrectly estimate the number of ongoing transmissions.
Abstract:
Climate change impact on a groundwater-dependent small urban town has been investigated in a semiarid hard-rock aquifer in southern India. A distributed groundwater model was used to simulate groundwater levels in the study region for projected future rainfall (2012-32) obtained from general circulation models (GCMs) to estimate the impacts of climate change and management practices on the groundwater system. Management practices were based on human-induced changes to the urban infrastructure, such as reduced recharge from the lakes, reduced recharge from water and wastewater utilities due to an operational and functioning underground drainage system, and additional water extracted by the water utility for domestic purposes. An assessment of impacts on groundwater levels was carried out by calibrating a groundwater model using comprehensive data gathered during the period 2008-11 and then simulating future groundwater level changes using rainfall from six GCMs [Institute of Numerical Mathematics Coupled Model, version 3.0 (INM-CM3.0); L'Institut Pierre-Simon Laplace Coupled Model, version 4 (IPSL-CM4); Model for Interdisciplinary Research on Climate, version 3.2 (MIROC3.2); ECHAM and the global Hamburg Ocean Primitive Equation (ECHO-G); Hadley Centre Coupled Model, version 3 (HadCM3); and Hadley Centre Global Environment Model, version 1 (HadGEM1)] that were found to show good correlation with the historical rainfall in the study area. The model results for the present condition indicate that the annual average discharge (sum of pumping and natural groundwater outflow) was marginally to moderately higher at various locations than the recharge, and further that recharge is aided by recharge from the lakes. Model simulations showed that groundwater levels were vulnerable to the GCM rainfall and to a scenario of moderate reduction in recharge from lakes. Hence, it is important to sustain the induced recharge from the lakes by ensuring that sufficient runoff water flows to them.