19 results for data replication
Abstract:
Erasure codes are an efficient means of storing data across a network in comparison to data replication, as they tend to reduce the amount of data stored in the network and offer increased resilience in the presence of node failures. These codes perform poorly, though, when a failed node must be repaired, as they typically require the entire file to be downloaded to repair a single node. A new class of erasure codes, termed regenerating codes, was recently introduced that does much better in this respect. However, given the variety of efficient erasure codes available in the literature, there is considerable interest in the construction of coding schemes that would enable traditional erasure codes to be used while retaining the feature that only a fraction of the data need be downloaded for node repair. In this paper, we present a simple yet powerful framework that does precisely this. Under this framework, the nodes are partitioned into two types and encoded using two codes in a manner that reduces the problem of node repair to that of erasure decoding of the constituent codes. Depending upon the choice of the two codes, the framework can be used to obtain one or more of the following advantages: simultaneous minimization of storage space and repair bandwidth, low complexity of operation, fewer disk reads at helper nodes during repair, and error detection and correction.
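As a rough illustration of the repair-by-erasure-decoding idea, the Python sketch below uses a toy single-parity code (not the two-code framework of the paper): the lost block of any one failed node is rebuilt by erasure decoding, here a simple XOR over the surviving blocks.

```python
# Toy single-parity erasure code: repairing one failed node reduces to
# erasure decoding (XOR of all surviving blocks). Block contents and the
# number of nodes are arbitrary choices for illustration.
from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def encode(data_blocks):
    """Store k data blocks plus one parity block across k + 1 nodes."""
    return data_blocks + [xor_blocks(data_blocks)]

def repair(stored, failed):
    """Rebuild the block of node `failed` from the surviving nodes."""
    survivors = [b for i, b in enumerate(stored) if i != failed]
    return xor_blocks(survivors)

if __name__ == "__main__":
    nodes = encode([b"node-0..", b"node-1..", b"node-2.."])
    assert repair(nodes, 1) == nodes[1]   # node 1 recovered from helpers
```

A single-parity code tolerates only one failure; the paper's framework instead pairs two full-strength erasure codes to obtain both low repair bandwidth and stronger fault tolerance.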
Abstract:
Over the last few decades, there has been significant land cover (LC) change across the globe due to the increasing demand of a burgeoning population and urban sprawl. In order to take account of this change, there is a need for accurate and up-to-date LC maps. Mapping and monitoring of LC in India is carried out at the national level using multi-temporal IRS AWiFS data. Multispectral data such as IKONOS, Landsat-TM/ETM+, IRS-1C/1D LISS-III/IV, AWiFS and SPOT-5 have adequate spatial resolution (~1 m to 56 m) for LC mapping to generate 1:50,000 maps. However, for developing countries and those with large geographical extent, seasonal LC mapping is prohibitive with data from commercial sensors of limited spatial coverage. Superspectral data from the MODIS sensor are freely available and offer better temporal (8-day composites) and spectral information. MODIS pixels typically contain a mixture of various LC types (due to the coarse spatial resolution of 250, 500 and 1000 m), especially in more fragmented landscapes. In this context, linear spectral unmixing would be useful for mapping patchy land covers, such as those that characterise much of the Indian subcontinent. This work evaluates the existing unmixing technique for LC mapping using MODIS data, using end-members that are extracted through the Pixel Purity Index (PPI), scatter plots and N-dimensional visualisation. Abundance maps were generated for agriculture, built-up land, forest, plantations, waste land/others and water bodies. Assessment of the results using ground truth and a LISS-III classified map shows 86% overall accuracy, suggesting the potential for broad-scale applicability of the technique with superspectral data for natural resource planning and inventory applications.
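A minimal sketch of the linear mixture model behind spectral unmixing: each pixel spectrum is modelled as a non-negative, sum-to-one combination of endmember spectra. The endmember matrix and pixel values below are invented for illustration; the abstract's PPI-based endmember extraction is not reproduced.

```python
# Fully constrained linear spectral unmixing for a single pixel:
# solve pixel ~= E @ f with f >= 0 and sum(f) = 1 (sum-to-one enforced
# via a heavily weighted extra equation). All numbers are made up.
import numpy as np
from scipy.optimize import nnls

E = np.array([[0.30, 0.05, 0.02],     # rows = spectral bands,
              [0.45, 0.10, 0.01],     # columns = endmembers
              [0.50, 0.40, 0.01],     # (e.g. agriculture, forest, water)
              [0.60, 0.55, 0.02]])
pixel = np.array([0.20, 0.25, 0.33, 0.42])    # observed mixed spectrum

w = 1e3                                       # weight on sum-to-one row
A = np.vstack([E, w * np.ones((1, E.shape[1]))])
b = np.append(pixel, w)
fractions, _ = nnls(A, b)                     # NNLS gives non-negativity
print(fractions.round(3))                     # per-class abundances
```

Applying this per pixel across a MODIS scene yields one abundance map per land-cover class, as in the abstract.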
Novel derivatives of spirohydantoin induce growth inhibition followed by apoptosis in leukemia cells
Abstract:
Hydantoin derivatives possess a variety of biochemical and pharmacological properties and consequently are used to treat many human diseases. However, only a few studies have focused on their potential as cancer therapeutic agents. In the present study, we have examined the anticancer properties of two novel spirohydantoin compounds, 8-(3,4-difluorobenzyl)-1'-(pent-4-enyl)-8-azaspiro[bicyclo[3.2.1]octane-3,4'-imidazolidine]-2',5'-dione (DFH) and 8-(3,4-dichlorobenzyl)-1'-(pent-4-enyl)-8-azaspiro[bicyclo[3.2.1]octane-3,4'-imidazolidine]-2',5'-dione (DCH). Both compounds exhibited a dose- and time-dependent cytotoxic effect on the human leukemic cell lines K562, Reh, CEM and 8ES. Incorporation of tritiated thymidine ([3H]thymidine) in conjunction with cell cycle analysis suggested that DFH and DCH inhibited the growth of leukemic cells. Downregulation of PCNA and phospho-histone H3 further confirmed that the growth inhibition could be at the level of DNA replication. Flow cytometric analysis indicated the accumulation of cells in the sub-G1 phase, suggesting induction of apoptosis, which was further confirmed and quantified both by fluorescence-activated cell sorting (FACS) and confocal microscopy following annexin V-FITC/propidium iodide (PI) staining. Mechanistically, our data support the induction of apoptosis by activation of the mitochondrial pathway. Results supporting such a model include elevated levels of p53 and BAD, decreased levels of BCL2, activation and cleavage of caspase 9, activation of procaspase 3, poly(ADP-ribose) polymerase (PARP) cleavage, downregulation of Ku70 and Ku80, and DNA fragmentation. Based on these results, we discuss the mechanism of apoptosis induced by DFH and its implications for leukemia therapy.
Abstract:
Parameterizations of the sensible heat and momentum fluxes, as inferred from an analysis of tower observations archived during MONTBLEX-90 at Jodhpur, are proposed, both in terms of the standard exchange coefficients C_H and C_D respectively and according to free-convection scaling. Both coefficients increase rapidly at low winds (the latter more strongly) and with increasing instability. All the sensible heat flux data at Jodhpur (mean 10-m wind speed Ū₁₀ < 8 m s⁻¹) also obey free-convection scaling, with the flux proportional to the 4/3 power of an appropriate temperature difference, such as that between 1 and 30 m. Furthermore, for Ū₁₀ < 4 m s⁻¹ the momentum flux displays a linear dependence on wind speed.
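Schematically, the free-convection scaling described above takes the following form; the coefficient a is an empirically fitted constant (an assumption here), while the two levels are those quoted in the abstract:

```latex
% Free-convection scaling of the sensible heat flux (schematic form;
% the coefficient a is an empirically fitted constant).
H \;\propto\; (\Delta\theta)^{4/3},
\qquad \text{e.g.}\quad
H = a \left( \theta_{1\,\mathrm{m}} - \theta_{30\,\mathrm{m}} \right)^{4/3}
```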
Abstract:
Telomeres are the termini of linear eukaryotic chromosomes, consisting of tandem repeats of DNA and proteins that bind to these repeat sequences. Telomeres ensure the complete replication of chromosome ends, protect the ends from nucleolytic degradation and end-to-end fusion, and guide the localization of chromosomes within the nucleus. In addition, a combination of genetic, biochemical, and molecular biological approaches has implicated key roles for telomeres in diverse cellular processes such as regulation of gene expression, cell division, cell senescence, and cancer. This review focuses on recent advances in our understanding of the organization of telomeres, telomere replication, proteins that bind telomeric DNA, and the establishment of telomere length equilibrium.
Abstract:
Several late gene expression factors (Lefs) have been implicated in fostering high levels of transcription from the very late gene promoters of polyhedrin and p10 in baculoviruses. We cloned and characterized from Bombyx mori nuclear polyhedrosis virus a late gene expression factor (Bmlef2) that encodes a 209-amino-acid protein harboring a Cys-rich C-terminal domain. The temporal transcription profiles of lef2 revealed a 1.2-kb transcript in both the delayed early and late periods after virus infection. Transcription start site mapping identified an aphidicolin-sensitive late transcript arising from a TAAG motif located at -352 nucleotides and an aphidicolin-insensitive early transcript originating from a TTGT motif located 35 nucleotides downstream of a TATA box at -312 nucleotides, with respect to the +1 ATG of lef2. BmLef2 trans-activated very late gene expression from both the polyhedrin and p10 promoters in transient expression assays. Internal deletion of the Cys-rich domain from the C-terminal region abolished the transcriptional activation. Inactivation of Lef2 synthesis by antisense lef2 transcripts drastically reduced very late gene transcription but showed little effect on expression from the immediate early promoter. A decrease in viral DNA synthesis and a reduction in virus titer were observed only when antisense lef2 was expressed under the immediate early (ie-1) promoter. Furthermore, the antisense experiments suggested that lef2 plays a direct role in very late gene transcription.
Abstract:
A method for the reconstruction of an object f(x), x = (x, y, z), from a limited set of cone-beam projection data has been developed. This method uses a modified form of convolution back-projection together with projection onto convex sets (POCS) for handling the limited (or incomplete) data problem. In cone-beam tomography, one needs a complete geometry to completely reconstruct the original three-dimensional object. While complete geometries do exist, they are of little use in practical implementations. The most common trajectory used in practical scanners is circular, which is incomplete. It is, however, possible to recover some of the information of the original signal f(x) based on a priori knowledge of the nature of f(x). If this knowledge can be posed in a convex set framework, then POCS can be utilized. In this report, we utilize this a priori knowledge as convex set constraints to reconstruct f(x) using POCS. While we demonstrate the effectiveness of our algorithm for circular trajectories, it is essentially geometry independent and will be useful in any limited-view cone-beam reconstruction.
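The core POCS loop is simple to sketch: cycle through projections onto each convex constraint set until the estimate stabilizes. The toy below is a 1-D analogue (made-up signal, random sampling, band-limit and non-negativity constraints), not the paper's cone-beam implementation.

```python
# POCS on a 1-D toy problem: recover a signal from incomplete samples by
# cycling through three convex projections (data consistency, non-negativity,
# band-limitedness). All signals and parameters here are invented.
import numpy as np

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
true = 1.0 + np.cos(x)                    # band-limited, non-negative object
rng = np.random.default_rng(0)
mask = rng.random(n) < 0.4                # only ~40% of samples observed
meas = true[mask]

def project_bandlimit(f, keep=4):
    """Projection onto signals with only the lowest `keep` frequencies."""
    F = np.fft.fft(f)
    F[keep:n - keep + 1] = 0
    return np.fft.ifft(F).real

f = np.zeros(n)
for _ in range(300):
    f[mask] = meas                        # consistency with measured data
    f = np.maximum(f, 0.0)                # non-negativity (a priori knowledge)
    f = project_bandlimit(f)              # a priori smoothness/band-limit

print(float(np.max(np.abs(f[mask] - meas))))  # near zero: constraints met
```

Because each step projects onto a convex set whose intersection is nonempty, the iterates converge to a point consistent with all the constraints; in the paper the data-consistency set comes from the measured cone-beam projections instead.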
Abstract:
The Taylor coefficients c and d of the electromagnetic form factor of the pion are constrained using analyticity, knowledge of the phase of the form factor in the time-like region 4m_π² ≤ t ≤ t_in, and its value at one space-like point, using as input the (g - 2) of the muon. This is achieved using the technique of Lagrange multipliers, which gives a transparent expression for the corresponding bounds. We present a detailed study of the sensitivity of the bounds to the choice of time-like phase and to the errors present in the space-like data, taken from recent experiments. We find that our results constrain c stringently. We compare our results with those in the literature and find agreement with the chiral perturbation-theory results for c. We obtain d ~ O(10) GeV⁻⁶ when c is set to the chiral perturbation-theory values.
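For orientation, one common convention (an assumption here, since the abstract does not spell out its normalization) defines c and d through the Taylor expansion of the form factor about t = 0:

```latex
% An assumed (but common) normalization for the Taylor coefficients c, d:
F_\pi(t) = 1 + \tfrac{1}{6} \langle r_\pi^2 \rangle \, t + c\, t^2 + d\, t^3 + \cdots
```

With t in GeV², c then carries units of GeV⁻⁴ and d of GeV⁻⁶, consistent with the d ~ O(10) GeV⁻⁶ quoted above.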
Abstract:
There is an error in the JANAF (1985) data on the standard enthalpy, Gibbs energy and equilibrium constant for the formation of C₂H₂(g) from the elements. The error arose from an incorrect expression used for computing these parameters from the heat capacity, the entropy and the relative heat content. Presented in this paper are the corrected values of the enthalpy and Gibbs energy of formation and the corresponding equilibrium constant.
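For reference, the standard thermodynamic relations connecting the tabulated quantities are as follows (the erroneous JANAF expression itself is not reproduced in the abstract):

```latex
% Standard relations between the compiled quantities:
\Delta_f G^\circ(T) = \Delta_f H^\circ(T) - T\,\Delta_f S^\circ(T),
\qquad
\ln K(T) = -\frac{\Delta_f G^\circ(T)}{RT}
```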
Abstract:
The correlation dimension D₂ and the correlation entropy K₂ are both important quantifiers in nonlinear time series analysis. However, the use of D₂ as a discriminating measure has been more common than that of K₂. One reason for this is that D₂ is a static measure and can be easily evaluated from a time series. However, in many cases, especially those involving coloured noise, K₂ is regarded as the more useful measure. Here we present an efficient algorithmic scheme to compute K₂ directly from time series data and show that K₂ can be a more effective measure than D₂ for analysing practical time series involving coloured noise.
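A rough Grassberger-Procaccia-style estimate of K₂ (not the paper's scheme) uses the decay of the correlation sum with embedding dimension, K₂ ≈ (1/τ) ln[C_m(r) / C_{m+1}(r)]; the test signal and parameters below are arbitrary:

```python
# Estimate K2 from correlation sums at embedding dimensions m and m+1.
# Signal, radius r, embedding dimension m and delay tau are all made up.
import numpy as np

def corr_sum(x, m, r, tau=1):
    """Fraction of embedded point pairs closer than r (correlation sum)."""
    n = len(x) - (m - 1) * tau
    emb = np.column_stack([x[i * tau: i * tau + n] for i in range(m)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < r)

rng = np.random.default_rng(1)
x = np.sin(0.7 * np.arange(800)) + 0.1 * rng.standard_normal(800)
r, m, tau = 0.3, 4, 1
K2 = np.log(corr_sum(x, m, r, tau) / corr_sum(x, m + 1, r, tau)) / tau
print(K2)   # nats per sample at this particular (r, m)
```

In practice K₂ is read off from a plateau of this quantity over a range of r and m, rather than from a single (r, m) pair as in this sketch.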
Abstract:
Background: A genetic network can be represented as a directed graph in which a node corresponds to a gene and a directed edge specifies the direction of influence of one gene on another. The reconstruction of such networks from transcript profiling data remains an important yet challenging endeavor. A transcript profile specifies the abundances of many genes in a biological sample of interest. Prevailing strategies for learning the structure of a genetic network from high-dimensional transcript profiling data assume sparsity and linearity. Many methods consider relatively small directed graphs, inferring graphs with up to a few hundred nodes. This work examines large undirected graph representations of genetic networks, graphs with many thousands of nodes where an undirected edge between two nodes does not indicate the direction of influence, and the problem of estimating the structure of such a sparse linear genetic network (SLGN) from transcript profiling data. Results: The structure learning task is cast as a sparse linear regression problem, which is then posed as a LASSO (l1-constrained fitting) problem and finally solved by formulating a linear program (LP). A bound on the generalization error of this approach is given in terms of the leave-one-out error. The accuracy and utility of LP-SLGNs is assessed quantitatively and qualitatively using simulated and real data. The Dialogue for Reverse Engineering Assessments and Methods (DREAM) initiative provides gold-standard data sets and evaluation metrics that enable and facilitate the comparison of algorithms for deducing the structure of networks. The structures of LP-SLGNs estimated from the INSILICO1, INSILICO2 and INSILICO3 simulated DREAM2 data sets are comparable to those proposed by the first- and/or second-ranked teams in the DREAM2 competition. The structures of LP-SLGNs estimated from two published Saccharomyces cerevisiae cell cycle transcript profiling data sets capture known regulatory associations. In each S. cerevisiae LP-SLGN, the number of nodes with a particular degree follows an approximate power law, suggesting that its degree distribution is similar to that observed in real-world networks. Inspection of these LP-SLGNs suggests biological hypotheses amenable to experimental verification. Conclusion: A statistically robust and computationally efficient LP-based method for estimating the topology of a large sparse undirected graph from high-dimensional data yields representations of genetic networks that are biologically plausible and useful abstractions of the structures of real genetic networks. Analysis of the statistical and topological properties of learned LP-SLGNs may have practical value; for example, genes with high random-walk betweenness, a measure of the centrality of a node in a graph, are good candidates for intervention studies, and hence for integrated computational-experimental investigations designed to infer more realistic and sophisticated probabilistic directed graphical model representations of genetic networks. The LP-based solutions of the sparse linear regression problem described here may provide a method for learning the structure of transcription factor networks from transcript profiling and transcription factor binding motif data.
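One standard way to turn l1-constrained fitting into an LP (whether this matches the paper's exact formulation is an assumption) is to use an l1 residual loss, split the coefficients into positive and negative parts, and bound the residuals with auxiliary variables; a toy version with made-up data:

```python
# l1-constrained fitting as an LP: minimize ||y - X w||_1 s.t. ||w||_1 <= t,
# with w = u - v, u, v >= 0, and s bounding |y - Xw| elementwise.
# Problem sizes, data and the l1 budget t are all invented.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, p, t = 30, 10, 2.0                     # samples, genes, l1 budget
X = rng.standard_normal((n, p))
w_true = np.zeros(p); w_true[:2] = [1.5, -0.5]
y = X @ w_true + 0.01 * rng.standard_normal(n)

# variables z = [u (p), v (p), s (n)]
c = np.concatenate([np.zeros(2 * p), np.ones(n)])         # minimize sum of s
A_ub = np.block([
    [ X, -X, -np.eye(n)],                                 #  Xw - y <= s
    [-X,  X, -np.eye(n)],                                 #  y - Xw <= s
    [np.ones((1, p)), np.ones((1, p)), np.zeros((1, n))], #  sum(u+v) <= t
])
b_ub = np.concatenate([y, -y, [t]])
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * (2 * p + n))
w = res.x[:p] - res.x[p:2 * p]
print(np.round(w, 3))                                     # sparse estimate
```

Per gene, the nonzero coefficients of such a regression then define that gene's undirected neighbours in the estimated SLGN.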
Abstract:
The standard free energies of formation of CaO derived from a variety of high-temperature equilibrium measurements made by seven groups of experimentalists differ significantly from those given in the standard compilations of thermodynamic data. Indirect support for the validity of the compiled data comes from new solid-state electrochemical measurements using single-crystal CaF₂ and SrF₂ as electrolytes. The free energy changes for the following reactions are obtained:

CaO + MgF₂ → MgO + CaF₂,  ΔG° = −68,050 − 2.47 T (±100) J mol⁻¹
SrO + CaF₂ → SrF₂ + CaO,  ΔG° = −35,010 + 6.39 T (±80) J mol⁻¹

The standard free energy changes associated with the cell reactions agree with data in standard compilations to within ±4 kJ mol⁻¹. The results of this study do not support recent suggestions for a major revision of the thermodynamic data for CaO.
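For example, evaluating the fitted expressions at an illustrative temperature of T = 1100 K (an arbitrary choice within typical solid-electrolyte cell ranges):

```latex
% Evaluating the fitted Delta G expressions at an illustrative T = 1100 K:
\Delta G^\circ_{\mathrm{CaO + MgF_2}} = -68{,}050 - 2.47 \times 1100
  \approx -70.8\ \mathrm{kJ\,mol^{-1}},
\qquad
\Delta G^\circ_{\mathrm{SrO + CaF_2}} = -35{,}010 + 6.39 \times 1100
  \approx -28.0\ \mathrm{kJ\,mol^{-1}}
```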
Abstract:
Neural data are inevitably contaminated by noise. When such noisy data are subjected to statistical analysis, misleading conclusions can be reached. Here we attempt to address this problem by applying a state-space smoothing method, based on the combined use of Kalman filter theory and the Expectation-Maximization algorithm, to denoise two datasets of local field potentials recorded from monkeys performing a visuomotor task. For the first dataset, we found that the analysis of high gamma band (60-90 Hz) neural activity in the prefrontal cortex is highly susceptible to the effect of noise, and that denoising leads to markedly improved, physiologically interpretable results. For the second dataset, Granger causality between the primary motor and primary somatosensory cortices was not consistent across the two monkeys, and the effect of noise was suspected. After denoising, the discrepancy between the two subjects was significantly reduced.
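A minimal state-space smoothing sketch in the spirit of the approach described (a scalar random-walk model with fixed noise variances; the paper's EM step, which would re-estimate those variances from the data, is omitted):

```python
# Kalman filter + RTS smoother on a toy 1-D random-walk state-space model.
# Q and R are fixed by assumption here; an EM step would re-estimate them.
import numpy as np

rng = np.random.default_rng(0)
T = 300
signal = np.cumsum(0.1 * rng.standard_normal(T))   # latent "field potential"
y = signal + 0.5 * rng.standard_normal(T)          # noisy recording

Q, R = 0.01, 0.25                                  # assumed noise variances
xf = np.zeros(T); Pf = np.zeros(T)                 # filtered means/variances
x, P = 0.0, 1.0
for t in range(T):                                 # forward Kalman filter
    P += Q                                         # predict
    K = P / (P + R)                                # Kalman gain
    x += K * (y[t] - x); P *= (1 - K)              # update
    xf[t], Pf[t] = x, P

xs = xf.copy()                                     # backward RTS smoother
for t in range(T - 2, -1, -1):
    J = Pf[t] / (Pf[t] + Q)
    xs[t] += J * (xs[t + 1] - xf[t])

mse = lambda a: float(np.mean((a - signal) ** 2))
print(mse(y), mse(xs))                             # raw vs denoised error
```

Denoised estimates like xs would then feed the spectral or Granger-causality analysis in place of the raw recordings.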
Abstract:
A series of bimetallic acetylacetonate (acac) complexes, AlₓCr₁₋ₓ(acac)₃ with 0 ≤ x ≤ 1, has been synthesized for application as precursors for the CVD of substituted oxides such as (AlₓCr₁₋ₓ)₂O₃. Detailed thermal analysis has been carried out on these complexes, which are solids that begin subliming at low temperatures, followed by melting and then evaporation from the melt. By applying the Langmuir equation to differential thermogravimetry data, the vapour pressures of these complexes are estimated. From these vapour pressure data, the distinctly different enthalpies of sublimation and evaporation are calculated using the Clausius-Clapeyron equation. Such a determination of both the enthalpies of sublimation and evaporation for complexes that sublime and melt congruently does not appear to have been reported in the literature to date.