21 results for Data replication processes

at Indian Institute of Science - Bangalore - India


Relevance:

90.00%

Publisher:

Abstract:

In this paper we propose a new method of data handling for web servers, which we call Network Aware Buffering and Caching (NABC for short). NABC reduces data copies in a web server's data-sending path by doing three things: (1) laying out the data in main memory so that protocol processing can be done without data copies, (2) keeping a unified cache of data in the kernel and ensuring safe access to it by the various processes and the kernel, and (3) passing only the necessary metadata between processes so that the bulk-data handling time spent during IPC is reduced. We realize NABC by implementing a set of system calls and a user library. The end product of the implementation is a set of APIs specifically designed for use by web servers. We port an in-house web server called SWEET to the NABC APIs and evaluate performance using a range of workloads, both simulated and real. The results show a very impressive gain of 12% to 21% in throughput for static file serving and a 1.6x to 4x gain in throughput for lightweight dynamic content serving for a server using the NABC APIs over one using the UNIX APIs.
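As a rough, hypothetical sketch of the metadata-passing idea (CacheRef, put_object and send_from_cache are invented names, not the NABC system calls), a unified cache can hold a response body once while only a small reference crosses the process boundary:

```python
# Hypothetical sketch of the NABC idea, not the real API: bulk data sits in a
# unified cache (in NABC this cache lives in the kernel and is shared safely
# by the processes), and only small metadata records cross IPC boundaries.
from dataclasses import dataclass

UNIFIED_CACHE = {}          # stand-in for the shared in-kernel cache

@dataclass
class CacheRef:
    key: str                # which cached object to send
    offset: int             # start of the slice to transmit
    length: int             # number of bytes to transmit

def put_object(key, payload):
    """Store a response body once; later sends reuse it without copying."""
    UNIFIED_CACHE[key] = payload
    return CacheRef(key=key, offset=0, length=len(payload))

def send_from_cache(ref, sink):
    """'Send' an object given only its metadata; the body never crosses IPC."""
    body = UNIFIED_CACHE[ref.key]
    sink.write(body[ref.offset:ref.offset + ref.length])

if __name__ == "__main__":
    import io
    ref = put_object("/index.html", b"<html>hello</html>")
    out = io.BytesIO()          # stands in for the client socket
    send_from_cache(ref, out)   # only `ref` would travel between processes
    print(out.getvalue())
```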

Relevance:

80.00%

Publisher:

Abstract:

A von Mises truss with stochastically varying material properties is investigated for snap-through instability. The variability of the snap-through load is calculated analytically as a function of the material property variability, represented as a stochastic process. Bounds are established that are independent of the complete description of the correlation structure, which is seldom obtainable from experimental data. Two processes are considered to represent the material property variability, and the results are presented graphically.
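The paper's treatment is analytical; purely as a hedged numerical illustration of how stiffness variability propagates into snap-through load variability, one can sample a random Young's modulus and apply the standard shallow von Mises truss limit-load approximation P_cr ≈ (2√3/9)·E·A·α³ (the geometry and the variability model below are assumed, not taken from the paper):

```python
# Monte Carlo illustration (assumed geometry and statistics, not the paper's
# analytical bounds): spread of the snap-through load of a shallow von Mises
# truss when Young's modulus varies randomly.
import math
import random

A = 1.0e-4        # bar cross-sectional area, m^2 (assumed)
alpha = 0.1       # initial bar inclination, rad (shallow truss, assumed)
E_mean = 200e9    # mean Young's modulus, Pa (assumed)
E_cov = 0.10      # coefficient of variation of E (assumed)

def snap_through_load(E):
    # classical shallow-truss limit load: P_cr = (2*sqrt(3)/9) * E * A * alpha^3
    return (2.0 * math.sqrt(3.0) / 9.0) * E * A * alpha ** 3

random.seed(0)
samples = [snap_through_load(random.gauss(E_mean, E_cov * E_mean))
           for _ in range(100_000)]
mean = sum(samples) / len(samples)
std = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(f"P_cr mean = {mean:.1f} N, std = {std:.1f} N, cov = {std / mean:.3f}")
```

Because P_cr is proportional to E in this approximation, the coefficient of variation of the load simply mirrors that of the modulus; the paper's contribution lies in bounding such variability without knowing the full correlation structure of the underlying process.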

Relevance:

80.00%

Publisher:

Abstract:

Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. The codes perform poorly, though, when repair of a failed node is called for, as they typically require the entire file to be downloaded to repair a single failed node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
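A back-of-the-envelope sketch of the trade-off described above, with purely illustrative numbers (the file size, n and k are assumed, not taken from the paper): replication versus an (n, k) MDS erasure code, in terms of storage and of how much data a classical repair downloads.

```python
# Illustrative comparison only: storage overhead and classical repair download
# for replication versus an (n, k) MDS erasure code. Regenerating codes, and
# the two-code framework of this paper, aim to cut the repair download well
# below the full file size while keeping erasure-code storage efficiency.
FILE_MB = 100.0   # assumed file size

def replication(copies):
    storage = copies * FILE_MB           # every copy holds the whole file
    repair_download = FILE_MB            # repair: fetch one surviving copy
    return storage, repair_download

def mds_erasure_code(n, k):
    chunk = FILE_MB / k                  # file split into k chunks, n stored
    storage = n * chunk
    repair_download = k * chunk          # classical repair decodes the whole file
    return storage, repair_download

print("3-way replication (MB stored, MB for repair):", replication(3))
print("(n=14, k=10) MDS code (MB stored, MB for repair):", mds_erasure_code(14, 10))
```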

Relevance:

30.00%

Publisher:

Abstract:

Telomeres are the termini of linear eukaryotic chromosomes, consisting of tandem DNA repeats and proteins that bind to these repeat sequences. Telomeres ensure the complete replication of chromosome ends, protect the ends from nucleolytic degradation and end-to-end fusion, and guide the localization of chromosomes within the nucleus. In addition, a combination of genetic, biochemical, and molecular biological approaches has implicated key roles for telomeres in diverse cellular processes such as regulation of gene expression, cell division, cell senescence, and cancer. This review focuses on recent advances in our understanding of the organization of telomeres, telomere replication, proteins that bind telomeric DNA, and the establishment of telomere length equilibrium.

Relevance:

30.00%

Publisher:

Abstract:

The thermal degradation processes of two sulfur polymers, poly(xylylene sulfide) (PXM) and poly(xylylene disulfide) (PXD), were investigated in parallel by direct pyrolysis mass spectrometry (DPMS) and flash pyrolysis GC/MS (Py-GC/MS). Thermogravimetric data showed that these polymers decompose in two separate steps, in the temperature ranges of 250-280 and 600-650 °C, leaving a large amount of residue (about 50% at 800 °C). The pyrolysis products detected by DPMS in the first degradation step of PXM and PXD were terminated by three types of end groups, -CH3, -CH2SH, and -CH=S, originating from thermal cleavage reactions involving a series of homolytic chain scissions followed by hydrogen transfer reactions, generating several oligomers containing some intact xylylene sulfide repeating units. Pyrolysis compounds containing stilbene-like units were also observed in the first degradation step. Their formation has been accounted for by a parallel cleavage involving the elimination of H2S from the PXM main chains. These unsaturated units can undergo cross-linking at higher temperatures, producing the large amount of char residue observed. The thermal degradation compounds detected by DPMS in the second decomposition step at about 600-650 °C consisted of condensed aromatic molecules containing dihydrophenanthrene and phenanthrene units. These compounds might be generated from the polymer chains containing stilbene units by isomerization and dehydrogenation reactions. The pyrolysis products obtained in the Py-GC/MS of PXM and PXD at 610 °C are almost identical. The relative abundances in the pyrolysate and the spectral properties of the main pyrolysis products were found to be in generally good agreement with those obtained by DPMS. Polycyclic aromatic hydrocarbons (PAHs) were also detected by Py-GC/MS, but in minor amounts with respect to DPMS. This apparent discrepancy is due to the simultaneous detection of PAHs together with all other pyrolysis products in Py-GC/MS, whereas in DPMS they were detected in the second thermal degradation step, separated from the greater part of the pyrolysis compounds generated in the first degradation step. The DPMS and Py-GC/MS experiments provided complementary data for the degradation of PXM and PXD and therefore allowed the unequivocal formulation of the thermal degradation mechanism for these sulfur-containing polymers.

Relevance:

30.00%

Publisher:

Abstract:

Many novel computer architectures, such as array processors and multiprocessors, which achieve high performance through the use of concurrency, exploit variations of the von Neumann model of computation. The effective utilization of these machines makes special demands on programmers and their programming languages, such as the structuring of data into vectors or the partitioning of programs into concurrent processes. In comparison, the data flow model of computation demands only that the principle of structured programming be followed. A data flow program, often represented as a data flow graph, expresses a computation by indicating the data dependencies among operators. A data flow computer is a machine designed to take advantage of concurrency in data flow graphs by executing data-independent operations in parallel. In this paper, we discuss the design of a high-level language (DFL: Data Flow Language) suitable for data flow computers, and some sample procedures in DFL are presented. The implementation aspects are not discussed in detail, since no new problems are encountered. The language DFL embodies the concepts of functional programming but in appearance closely resembles Pascal. The language is a better vehicle than the data flow graph for expressing a parallel algorithm. The compiler has been implemented on a DEC 1090 system in Pascal.
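As a toy illustration of the data flow model (not DFL itself, whose syntax resembles Pascal), the sketch below fires an operator as soon as all of its input values are available, so data-independent operators could in principle execute concurrently:

```python
# Toy data flow graph evaluator (illustrative only): each node names its input
# values and the value it produces; a node fires once all inputs are present.
# The example graph computes (a + b) * (a - b).
from dataclasses import dataclass

@dataclass
class Node:
    op: callable      # operator to apply
    inputs: list      # names of the values this node consumes
    output: str       # name of the value it produces

def run(graph, initial):
    values = dict(initial)
    pending = list(graph)
    while pending:
        ready = [n for n in pending if all(i in values for i in n.inputs)]
        if not ready:
            raise RuntimeError("deadlock: unsatisfied data dependencies")
        for node in ready:            # all `ready` nodes are data-independent
            values[node.output] = node.op(*(values[i] for i in node.inputs))
            pending.remove(node)
    return values

graph = [
    Node(lambda x, y: x + y, ["a", "b"], "sum"),
    Node(lambda x, y: x - y, ["a", "b"], "diff"),
    Node(lambda x, y: x * y, ["sum", "diff"], "result"),
]
print(run(graph, {"a": 7, "b": 3})["result"])   # -> 40
```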

Relevance:

30.00%

Publisher:

Abstract:

DNA helicases are present in all kingdoms of life and play crucial roles in processes of DNA metabolism such as replication, repair, recombination, and transcription. To date, however, the role of DNA helicases during homologous recombination in mycobacteria remains unknown. In this study, we show that Mycobacterium tuberculosis UvrD1 inhibited the strand exchange promoted by its cognate RecA more efficiently than that promoted by the noncognate Mycobacterium smegmatis or Escherichia coli RecA proteins. The M. tuberculosis UvrD1(Q276R) mutant, which lacks the helicase and ATPase activities, was able to block strand exchange promoted by the mycobacterial RecA proteins but not by E. coli RecA. We observed that M. tuberculosis UvrA by itself has no discernible effect on strand exchange promoted by E. coli RecA but impedes the reaction catalyzed by the mycobacterial RecA proteins. Our data also show that M. tuberculosis UvrA and UvrD1 can act together to inhibit strand exchange promoted by the mycobacterial RecA proteins. Taken together, these findings raise the possibility that UvrD1 and UvrA might act together in vivo to counter the deleterious effects of RecA nucleoprotein filaments and/or facilitate the dissolution of recombination intermediates. Finally, we provide direct experimental evidence for a physical interaction between M. tuberculosis UvrD1 and RecA on the one hand, and between RecA and UvrA on the other. These observations are consistent with a molecular mechanism whereby M. tuberculosis UvrA and UvrD1, acting together, block DNA strand exchange promoted by cognate and noncognate RecA proteins.

Relevance:

30.00%

Publisher:

Abstract:

The equatorial Indian Ocean is warmer in the east, has a deeper thermocline and mixed layer, and supports a more convective atmosphere than in the west. During certain years, the eastern Indian Ocean becomes unusually cold, anomalous winds blow from east to west along the equator and southeastward off the coast of Sumatra, the thermocline and mixed layer lift up, and atmospheric convection is suppressed. At the same time, the western Indian Ocean becomes warmer, enhancing atmospheric convection there. This coupled ocean-atmosphere phenomenon, in which convection, winds, sea surface temperature (SST) and the thermocline take part actively, is known as the Indian Ocean Dipole (IOD). The propagation of baroclinic Kelvin and Rossby waves excited by anomalous winds plays an important role in the development of the SST anomalies associated with the IOD. Since the mean thermocline in the Indian Ocean is deep compared to the Pacific, it was believed for a long time that the Indian Ocean is passive and merely responds to atmospheric forcing. The discovery of the IOD and the studies that followed demonstrate that the Indian Ocean can sustain its own intrinsic coupled ocean-atmosphere processes. About 50% of the IOD events in the past 100 years have co-occurred with the El Niño Southern Oscillation (ENSO), and the other half have occurred independently. Coupled models have been able to reproduce IOD events, and process experiments with such models, switching ENSO on and off, support the hypothesis based on observations that IOD events develop either in the presence or in the absence of ENSO. There is a general consensus among different coupled models, as well as from the analysis of data, that IOD events co-occurring with ENSO are forced by a zonal shift of the descending branch of the Walker cell over to the eastern Indian Ocean. The processes that initiate the IOD in the absence of ENSO are not clear, although several studies suggest that anomalies of the Hadley circulation are the most probable forcing. The impact of the IOD is felt in the vicinity of the Indian Ocean as well as in remote regions. During IOD events, the biological productivity of the eastern Indian Ocean increases, and this in turn leads to the death of corals over a large area. Moreover, the IOD affects rainfall over the maritime continent, the Indian subcontinent, Australia and eastern Africa. The maritime continent and Australia suffer from deficit rainfall, whereas India and east Africa receive excess. Despite the successful hindcast of the 2006 IOD by a coupled model, forecasting IOD events and their implications for rainfall variability remains a major challenge, as does understanding the reasons behind the increase in the frequency of IOD events in recent decades.

Relevance:

30.00%

Publisher:

Abstract:

The present work is an attempt to study crack initiation in nuclear-grade 9Cr-1Mo ferritic steel using acoustic emission (AE) as an online NDE tool. Laboratory experiments were conducted on five heat-treated compact tension (CT) specimens made of nuclear-grade 9Cr-1Mo ferritic steel by subjecting them to cyclic tensile load. The CT specimens were 12.5 mm thick. The acoustic emission test system was set up to acquire data continuously during the test by mounting an AE sensor on one of the surfaces of the specimen. This was done to characterize the AE data pertaining to crack initiation and then to discriminate the samples in terms of their heat treatment processes based on the AE data. The AE signatures at crack initiation could conclusively bring to the fore the heat-treatment distinction on a sample-to-sample basis in a qualitative sense. Thus, the results obtained through these investigations establish a step forward in utilizing the AE technique as an online measurement tool for accurate detection and understanding of crack initiation and its profile in 9Cr-1Mo nuclear-grade steel subjected to different heat treatment processes.

Relevance:

30.00%

Publisher:

Abstract:

Thermodynamic properties of Mn3O4, Mn2O3 and MnO2 are reassessed based on new measurements and selected data from the literature. Data for these oxides are available in most thermodynamic compilations, based on older calorimetric measurements of heat capacity and enthalpy of formation and on high-temperature decomposition studies. The older heat capacity measurements did not extend below 50 K; recent measurements have extended the low-temperature limit to 5 K. A reassessment of the thermodynamic data was therefore undertaken, supplemented by new measurements of the high-temperature heat capacity of Mn3O4 and of the oxygen chemical potential for the oxidation of MnO1-x, Mn3O4, and Mn2O3 to their respective higher oxides, using an advanced version of a solid-state electrochemical cell incorporating a buffer electrode. Because of the high accuracy now achievable with solid-state electrochemical cells, phase-equilibrium measurements involving "third-law" analysis have emerged as a competing tool to solution and combustion calorimetry for determining the standard enthalpy of formation at 298.15 K. The refined thermodynamic data for the oxides are presented in tabular form at regular intervals of temperature.
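For reference, the standard relations behind such measurements (stated here as general background, not quoted from the paper): the EMF E of the oxygen-potential cell gives the oxygen chemical potential, and a third-law analysis converts a Gibbs energy measured at temperature T into a standard enthalpy at 298.15 K using free-energy functions built from low-temperature heat capacities.

```latex
% Standard background relations (assumed, not reproduced from the paper):
% oxygen chemical potential from the cell EMF (O2 + 4e- -> 2 O^2-, n = 4),
% and the third-law evaluation of the standard enthalpy at 298.15 K.
\begin{align}
  \Delta\mu_{\mathrm{O_2}} &= -4FE, \\
  \Delta_{\mathrm{r}} H^{\circ}(298.15\,\mathrm{K})
    &= \Delta_{\mathrm{r}} G^{\circ}(T)
     - T\,\Delta_{\mathrm{r}}\!\left[\frac{G^{\circ}(T) - H^{\circ}(298.15\,\mathrm{K})}{T}\right].
\end{align}
```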

Relevance:

30.00%

Publisher:

Abstract:

Data mining is concerned with analysing large volumes of (often unstructured) data to automatically discover interesting regularities or relationships which in turn lead to better understanding of the underlying processes. The field of temporal data mining is concerned with such analysis in the case of ordered data streams with temporal interdependencies. Over the last decade, many interesting techniques of temporal data mining were proposed and shown to be useful in many applications. Since temporal data mining brings together techniques from different fields such as statistics, machine learning and databases, the literature is scattered among many different sources. In this article, we present an overview of techniques of temporal data mining. We mainly concentrate on algorithms for pattern discovery in sequential data streams. We also describe some recent results regarding statistical analysis of pattern discovery methods.
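As a minimal example of one pattern-discovery primitive such surveys cover (the event stream and the episode below are invented), counting non-overlapped occurrences of a serial episode, i.e. an ordered pattern of event types, in an event sequence:

```python
# Minimal sketch: count non-overlapped occurrences of a serial episode
# (events must appear in the given order) with a single left-to-right scan.
def count_serial_episode(events, episode):
    count, i = 0, 0
    for e in events:
        if e == episode[i]:
            i += 1
            if i == len(episode):   # one complete occurrence found
                count += 1
                i = 0
    return count

stream = list("ABACBABCAB")
print(count_serial_episode(stream, list("AB")))   # -> 4
```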

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes a sparse modeling approach to solving ordinal regression problems using Gaussian processes (GPs). Designing a sparse GP model is important from the viewpoints of both training time and inference time. We first propose a variant of the Gaussian process ordinal regression (GPOR) approach, leave-one-out GPOR (LOO-GPOR), which performs model selection using the leave-one-out cross-validation (LOO-CV) technique. We then provide an approach to designing a sparse model for GPOR. The sparse GPOR model reduces computational time and storage requirements and, further, provides faster inference. We compare the proposed approaches with the state-of-the-art GPOR approach on benchmark data sets. Experimental results show that the proposed approaches are competitive.
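A generic stand-in sketch, not the authors' GPOR or its sparse approximation: selecting a Gaussian process kernel length-scale by leave-one-out cross-validation with scikit-learn, treating the ordinal labels as plain classes for simplicity (the data set is synthetic).

```python
# Stand-in illustration of LOO-CV model selection for a GP model; this uses
# scikit-learn's GaussianProcessClassifier on synthetic data and ignores the
# ordinal structure, unlike the GPOR/LOO-GPOR approach of the paper.
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import LeaveOneOut, cross_val_score

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(60, 1))
# three ordered levels derived from a noisy latent score
y = np.digitize(X[:, 0] + 0.3 * rng.normal(size=60), bins=[-1.0, 1.0])

best = None
for length_scale in (0.1, 0.3, 1.0, 3.0):
    gp = GaussianProcessClassifier(kernel=RBF(length_scale=length_scale),
                                   optimizer=None)   # evaluate this candidate as-is
    score = cross_val_score(gp, X, y, cv=LeaveOneOut()).mean()
    print(f"length_scale={length_scale}: LOO accuracy={score:.3f}")
    if best is None or score > best[1]:
        best = (length_scale, score)
print("selected length_scale:", best[0])
```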

Relevance:

30.00%

Publisher:

Abstract:

The Madurai Block, the largest crustal block in the Southern Granulite Terrane (SGT) of Peninsular India, preserves the imprints of multistage tectonic evolution. Here, we present U-Pb and Hf isotope data on zircons from a charnockite-granite suite in the north-western part of this block. The oscillatory zoning and the LREE-to-HREE enriched patterns of the zircons, with positive Ce and negative Eu anomalies, suggest that the zircon cores are of magmatic origin, with ages in the range of 2634-2435 Ma, implying Neoarchean-Paleoproterozoic magmatism followed by metamorphism and protocontinent formation in the north-western part of the Madurai Block. A regional 550-500 Ma metamorphic overprint is also preserved in the zircons, coinciding with the final amalgamation of the Gondwana supercontinent. The Hf isotopic data suggest that the granite and charnockite were derived from isotopically heterogeneous juvenile crustal domains, with the charnockites showing a significant contribution of mantle-derived components. The Hf isotopic data therefore reflect mixing of crustal and mantle-derived sources for the generation of Neoarchean crust in the north-western Madurai Block, possibly in a suprasubduction zone setting during continent-building processes.