85 results for REFERENCE SAMPLES


Relevance: 20.00%

Abstract:

DNA is a chemotherapeutic target for treating diseases of genetic origin. Besides the well-known double-helical structures (A-, B-, Z- and parallel-stranded DNA, etc.), DNA can form several multi-stranded structures (triplex, tetraplex, i-motif, etc.) that have unique biological significance. The G-rich 3'-ends of chromosomes, called telomeres, are synthesized by telomerase, a ribonucleoprotein, and over-expression of telomerase is associated with cancer. Telomerase activity is suppressed if the G-rich region is folded, with the aid of small synthetic ligands, into four-stranded structures called G-quadruplexes (G4-DNAs). Thus, the design and synthesis of new G4-DNA ligands is an attractive strategy to combat cancer. G4-DNA-forming sequences are also prevalent in other genomic regions of biological significance, including the promoter regions of several oncogenes. Effective gene regulation may be achieved by inducing a G4-DNA structure within the G-rich promoter sequences. To date, several G4-DNA-stabilizing ligands are known. DNA groove binders interact with duplex B-DNA through the grooves (major and minor) in a sequence-specific manner, and some groove binders are known to stabilize G4-DNA; however, this remains a relatively underexplored field of research. In this review, we focus on recent advances in the understanding of G4-DNA structures, particularly those formed from human telomeric DNA stretches. We summarize investigations of the interaction of various organic ligands with G4-DNA, highlighting the importance of groove binder-G4-DNA interactions.

Relevance: 20.00%

Abstract:

Electrical failure of insulation is known to be an extremal random process wherein nominally identical, pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. In order to estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses, and to acquire and analyze insulation-specific failure data. In the present work, Resin Impregnated Paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress is assumed to be a temperature-dependent inverse power law model.
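The abstract does not spell out the life model; a common temperature-dependent inverse power law form is L = c · E^(-n) · exp(B/T). A minimal sketch, where the constants c, n and B are purely illustrative and not values estimated in the paper:

```python
import math

def insulation_life(E, T, c=1.0e6, n=10.0, B=5.0e3):
    """Hypothetical temperature-dependent inverse power law:
    life L = c * E**(-n) * exp(B / T), with E the electric stress and T
    the absolute temperature.  c, n and B are illustrative constants,
    not values estimated in the paper."""
    return c * E ** (-n) * math.exp(B / T)

# doubling the stress at fixed temperature shortens life by a factor 2**n
ratio = insulation_life(10.0, 350.0) / insulation_life(20.0, 350.0)
```

Fitting such a model to accelerated-ageing failure data is what allows extrapolation of life to service stress levels.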

Relevance: 20.00%

Abstract:

The classical approach to A/D conversion has been uniform sampling, where perfect reconstruction of bandlimited signals is guaranteed by the Nyquist sampling theorem. We propose a non-uniform sampling scheme based on level-crossing (LC) time information. We show stable reconstruction of bandpass signals, with the correct scale factor and hence a unique reconstruction, from only the non-uniform time information. For reconstruction from the level crossings we use sparse-reconstruction-based optimization, constraining the bandpass signal to be sparse in its frequency content. While the literature resorts to an overdetermined system of equations, we use an underdetermined approach together with the sparse reconstruction formulation. We obtain a reconstruction SNR > 20 dB and perfect support recovery with probability close to 1 in the noiseless case, and with lower probability in the noisy case. Randomly picking LCs from different levels, over the same limited signal duration and for the same amount of information, is seen to be advantageous for reconstruction.
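As a small illustration of the level-crossing idea, the sketch below extracts LC time instants from a uniformly sampled test signal by linear interpolation; the reconstruction step itself (a sparse optimization) is not shown:

```python
import numpy as np

def level_crossings(t, x, level):
    """Return linearly interpolated time instants where x crosses `level`."""
    s = x - level
    idx = np.where(s[:-1] * s[1:] < 0)[0]      # sign change between samples
    frac = s[idx] / (s[idx] - s[idx + 1])      # linear-interpolation fraction
    return t[idx] + frac * (t[idx + 1] - t[idx])

t = np.linspace(0.0, 1.0, 10_000)
x = np.sin(2 * np.pi * 5 * t)                  # 5 Hz test tone
tc = level_crossings(t, x, 0.0)                # nine zero crossings in (0, 1)
```

The crossing times `tc`, together with the known level values, form the non-uniform time information from which the signal is reconstructed.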

Relevance: 20.00%

Abstract:

The possibility of establishing an accurate relative chronology of early solar system events based on the decay of short-lived Al-26 to Mg-26 (half-life of 0.72 Myr) depends on the level of homogeneity (or heterogeneity) of Al-26 and Mg isotopes. However, this level is difficult to constrain precisely because of the very high precision needed for the determination of isotopic ratios, typically +/- 5 ppm. In this study, we report for the first time a detailed analytical protocol developed for high-precision in situ Mg isotopic measurements (Mg-25/Mg-24 and Mg-26/Mg-24 ratios, as well as Mg-26 excess) by MC-SIMS. As the data reduction process is critical for both the accuracy and the precision of the final isotopic results, factors such as the Faraday cup (FC) background drift and matrix effects on instrumental fractionation have been investigated. Indeed, these instrumental effects on the measured Mg-isotope ratios can be as large as, or larger than, the variations of interest for constraining the initial distribution of Al-26 and Mg isotopes in the early solar system. Our results show that they are definitely limiting factors for the precision of Mg isotopic compositions, and that an under- or over-correction of either FC background instabilities or instrumental isotopic fractionation leads to significant bias in delta Mg-25, delta Mg-26 and Delta Mg-26 values (for example, olivines not corrected for FC background drifts display Delta Mg-26 values that can differ by as much as 10 ppm from the properly corrected value). The data reduction process described here can then be applied to meteoritic samples (components of chondritic meteorites, for instance) to accurately establish their relative chronology of formation.
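The delta and Delta quantities mentioned above can be sketched as follows; the mass-dependent exponent beta = 0.514 is a commonly used kinetic value and an assumption of this sketch, not a value taken from the paper:

```python
def delta_permil(ratio_sample, ratio_standard):
    """Delta value in parts per thousand relative to a standard ratio."""
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

def excess_mg26(d26, d25, beta=0.514):
    """Mg-26 excess (Delta Mg-26) after removing mass-dependent
    fractionation; beta = 0.514 is a commonly used kinetic exponent and
    an assumption of this sketch, not a value from the paper."""
    return d26 - d25 / beta

d = delta_permil(1.001, 1.0)            # 1.0 per mil
no_excess = excess_mg26(2.0, 1.028)     # purely mass-dependent case: ~0
```

A non-zero Delta Mg-26 is the radiogenic signal used for Al-26/Mg-26 chronology, which is why ppm-level biases in the corrections matter.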

Relevance: 20.00%

Abstract:

We present here an experimental set-up, developed for the first time in India, for the determination of the mixing ratio and the carbon isotopic ratio of air-CO2. The set-up includes traps for the collection and extraction of CO2 from air samples using cryogenic procedures, followed by measurement of the CO2 mixing ratio using an MKS Baratron gauge and analysis of isotopic ratios using the dual-inlet peripheral of a high-sensitivity isotope ratio mass spectrometer (IRMS), MAT 253. The internal reproducibility (precision) of the delta C-13 measurement, established by repeat analyses of CO2, is +/- 0.03 parts per thousand. The set-up is calibrated with international carbonate and air-CO2 standards. An in-house air-CO2 mixture, `OASIS AIRMIX', is prepared by mixing CO2 from a high-purity cylinder with O-2 and N-2, and an aliquot of this mixture is routinely analyzed together with the air samples. The external reproducibilities for the measurement of the CO2 mixing ratio and the carbon isotopic ratio are +/- 7 mu mol mol(-1) (n = 169) and +/- 0.05 parts per thousand (n = 169), respectively, based on the mean difference between two aliquots of the reference air mixture analyzed during daily operation from November 2009 to December 2011. The correction for the isobaric interference of N2O on air-CO2 samples is determined separately by analyzing mixtures of CO2 (of known isotopic composition) and N2O in varying proportions; a correction of +0.2 parts per thousand in the delta C-13 value is determined for an N2O concentration of 329 ppb. As an application, we present results from an experiment conducted during the solar eclipse of 2010. The isotopic ratio and the mixing ratio of CO2 in air samples collected during the event differ from those of neighbouring samples, suggesting a role of atmospheric inversion in trapping CO2 emitted into the urban atmosphere during the eclipse.
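Assuming the reported N2O correction scales linearly with N2O concentration (linearity is an assumption of this sketch, not a statement from the paper), applying it is a one-line adjustment:

```python
def n2o_correction_permil(n2o_ppb, corr_at_329_ppb=0.2):
    """Scale the +0.2 per-mil delta C-13 correction reported for 329 ppb
    N2O linearly with N2O concentration (linearity is an assumption of
    this sketch, not a statement from the paper)."""
    return corr_at_329_ppb * n2o_ppb / 329.0

raw_d13c = -8.50                                      # hypothetical raw value
corrected = raw_d13c + n2o_correction_permil(329.0)   # -8.30 per mil
```

The raw value of -8.50 per mil is hypothetical, chosen only to show the sign and size of the adjustment relative to the stated +/- 0.05 per mil external reproducibility.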

Relevance: 20.00%

Abstract:

This paper presents a unified taxonomy of shape features. Such a taxonomy is required to construct ontologies that address heterogeneity in product/shape models. The literature provides separate classifications for volumetric, deformation and free-form surface features. The proposed unified taxonomy allows the classification, representation and extraction of shape features in a product model. Its novelty is that the classification is based purely on shape entities, making it possible to extract the features automatically from any shape model. This enables the use of the taxonomy to build a reference ontology.

Relevance: 20.00%

Abstract:

The recent focus of flood frequency analysis (FFA) studies has been on developing methods to model joint distributions of variables such as peak flow, volume, and duration that characterize a flood event, as comprehensive knowledge of a flood event is often necessary in hydrological applications. A diffusion-process-based adaptive kernel (D-kernel) is suggested in this paper for this purpose. It is data-driven, flexible and, unlike most kernel density estimators, always yields a bona fide probability density function. It overcomes shortcomings associated with the use of conventional kernel density estimators in FFA, such as the boundary leakage problem and the normal reference rule. The potential of the D-kernel is demonstrated by application to synthetic samples of various sizes drawn from known unimodal and bimodal populations, and to five typical peak-flow records from different parts of the world. It is shown to be effective when compared to the conventional Gaussian kernel and the best of seven commonly used copulas (Gumbel-Hougaard, Frank, Clayton, Joe, Normal, Plackett, and Student's t) in estimating the joint distribution of peak-flow characteristics and in extrapolating beyond historical maxima. Selection of the optimum number of bins is found to be critical in modeling with the D-kernel.
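The D-kernel itself is not specified in the abstract; for contrast, the following is the conventional fixed-bandwidth Gaussian kernel estimator, with the bandwidth set by the normal reference rule whose shortcomings motivate the adaptive kernel:

```python
import numpy as np

def gaussian_kde(x_grid, data, h=None):
    """Fixed-bandwidth Gaussian kernel density estimate.  The default
    bandwidth follows Silverman's normal reference rule, the very rule
    whose shortcomings motivate the adaptive D-kernel."""
    data = np.asarray(data, dtype=float)
    n = data.size
    if h is None:
        h = 1.06 * data.std(ddof=1) * n ** (-1.0 / 5.0)
    u = (x_grid[:, None] - data[None, :]) / h
    return np.exp(-0.5 * u ** 2).sum(axis=1) / (n * h * np.sqrt(2.0 * np.pi))

rng = np.random.default_rng(0)
sample = rng.normal(0.0, 1.0, 500)
grid = np.linspace(-5.0, 5.0, 1001)
pdf = gaussian_kde(grid, sample)       # integrates to ~1 over the grid
```

A single global bandwidth like this oversmooths bimodal data and leaks mass across boundaries such as zero flow, which is precisely what an adaptive, diffusion-based kernel is designed to avoid.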

Relevance: 20.00%

Abstract:

Voltage source inverters are an integral part of renewable power sources and smart grid systems. Computationally efficient and reasonably accurate models of the voltage source inverter are required to carry out extensive simulation studies on complex power networks. Accuracy requires that the effect of dead-time be incorporated in the inverter model. The dead-time is a short delay introduced between the gating pulses of the complementary switches in an inverter leg for the safety of the power devices. As modern voltage source inverters switch at fairly high frequencies, the dead-time significantly influences the output fundamental voltage. Dead-time also causes low-frequency harmonic distortion and is hence important from a power-quality perspective. This paper studies the dead-time effect in a synchronous dq reference frame, since dynamic studies and controller design are typically carried out in this frame of reference. For computational efficiency, average models incorporating the dead-time effect are derived in both the RYB and dq reference frames. The average models are shown to consume less computation time than the corresponding switching models, with comparable accuracy. The proposed average synchronous-reference-frame model, including the effect of dead-time, is validated through experimental results.
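The standard first-order approximation of the average dead-time voltage error (magnitude Vdc · Td · fsw, with polarity opposing the phase current) can be sketched as follows; this is the textbook approximation, not necessarily the exact model derived in the paper:

```python
def deadtime_avg_error(vdc, t_dead, f_sw, i_phase):
    """First-order average per-phase voltage error due to dead-time:
    magnitude Vdc * Td * fsw, with polarity opposing the phase current
    (textbook approximation, not necessarily the paper's exact model)."""
    sign = 1.0 if i_phase > 0 else -1.0 if i_phase < 0 else 0.0
    return -sign * vdc * t_dead * f_sw

# 600 V bus, 2 us dead-time, 10 kHz switching: 12 V average error magnitude
err = deadtime_avg_error(600.0, 2e-6, 10e3, i_phase=5.0)
```

The dependence on the sign of the phase current is what makes the error a low-frequency distortion rather than a constant offset, and why it must appear in an average model.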

Relevance: 20.00%

Abstract:

The development of a viable adsorbed natural gas onboard fuel system involves synthesizing materials that meet specific storage targets. We assess the impact on natural gas storage of the intermediate processes involved in taking a laboratory powder sample to an onboard packed or adsorbent bed module. We illustrate that reporting V/V (volume of gas per volume of container) capacities based on powder adsorption data, without accounting for losses due to pelletization and bed porosity, grossly overestimates the working storage capacity of a given material. Using data typical of carbon- and MOF-based adsorbent materials, we show that in order to meet the Department of Energy (DOE) target of 180 V/V (equivalent STP) loading at 3.5 MPa and 298 K at the onboard packed-bed level, the volumetric capacity of the pelletized sample should be at least 245 V/V, and the corresponding gravimetric loading varies from 0.175 to 0.38 kg/kg for pellet densities ranging from 461.5 to 1,000 kg/m(3). With the recent revision of the DOE target to 263 V/V at the onboard packed-bed level, the volumetric loading of the pelletized sample should be about 373 V/V.
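The conversion from gravimetric loading and pellet density to V/V can be checked directly; with methane at STP, the sketch below reproduces the ~245 V/V figure quoted above at both ends of the stated density range:

```python
M_CH4 = 16.04e-3      # kg/mol, molar mass of methane
V_M_STP = 22.414e-3   # m^3/mol, molar volume at STP (0 C, 1 atm assumed here)

def volumetric_capacity(q_kg_per_kg, pellet_density_kg_m3):
    """Equivalent STP gas volume stored per volume of adsorbent (V/V)
    from gravimetric loading (kg gas / kg adsorbent) and pellet density."""
    return q_kg_per_kg * pellet_density_kg_m3 / M_CH4 * V_M_STP

vv_low = volumetric_capacity(0.38, 461.5)      # ~245 V/V
vv_high = volumetric_capacity(0.175, 1000.0)   # ~245 V/V
```

This also makes explicit why the required gravimetric loading falls as pellet density rises: the product of the two, not either alone, fixes the volumetric capacity.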

Relevance: 20.00%

Abstract:

H.264/Advanced Video Coding surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels, which are then segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating-point computations, and the storage pattern of the parameters in memory is modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low-cost algorithm for choosing the reference frames; the proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit-rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions of the video sequence.
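As a simplified illustration of the segmentation step, the sketch below runs one update of a single-Gaussian running background model for a pixel; the paper uses a full Gaussian mixture with modified weight updates, which is not reproduced here:

```python
def update_background(mean, var, pixel, alpha=0.01, k=2.5):
    """One update of a single-Gaussian running background model: a pixel
    within k standard deviations of the mean is treated as background and
    the running mean/variance are blended toward it (a simplification of
    the paper's Gaussian mixture model)."""
    matched = abs(pixel - mean) <= k * var ** 0.5
    if matched:
        mean = (1.0 - alpha) * mean + alpha * pixel
        var = (1.0 - alpha) * var + alpha * (pixel - mean) ** 2
    return mean, var, matched

m, v, bg = update_background(100.0, 16.0, 101.0)   # small change: background
```

Macroblocks whose sampled pixels all match the background model are candidates for Skip mode, which is where the bandwidth saving comes from.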

Relevance: 20.00%

Abstract:

This paper proposes a novel experimental test procedure to estimate the reliability of structural dynamical systems under excitations specified via random process models. The samples of random excitation used in the test are modified by the addition of an artificial control force. An unbiased estimator for the reliability is derived from the measured ensemble of responses under these modified inputs, based on the tenets of the Girsanov transformation. The control force is selected so as to reduce the sampling variance of the estimator. The study observes that an acceptable choice for the control force can be made solely by experimental techniques, so that the reliability estimator can be deduced without taking recourse to a mathematical model of the structure under study. This permits the proposed procedure to be applied in the experimental study of the time-variant reliability of complex structural systems that are difficult to model mathematically. The illustrative example consists of a multi-axis shake-table study on a bending-torsion coupled, geometrically non-linear, five-storey frame under uni-/bi-axial, non-stationary random base excitation. Copyright (c) 2014 John Wiley & Sons, Ltd.
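A scalar analogue of this Girsanov-based variance reduction is importance sampling of a normal tail probability: sample from a shifted density (the analogue of adding a control force) and reweight each sample by the likelihood ratio so the estimator stays unbiased. This is illustrative only, not the paper's estimator:

```python
import math
import random

def shifted_tail_estimate(b, mu, n=200_000, seed=1):
    """Importance-sampling estimate of P(Z > b) for standard normal Z:
    draw from N(mu, 1) and reweight by the likelihood ratio
    exp(-mu*z + mu**2/2).  A scalar analogue of the Girsanov change of
    measure; illustrative only, not the paper's estimator."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        z = rng.gauss(mu, 1.0)
        if z > b:
            total += math.exp(-mu * z + mu * mu / 2.0)
    return total / n

p_hat = shifted_tail_estimate(3.0, 3.0)              # shifted sampling
p_true = 0.5 * math.erfc(3.0 / math.sqrt(2.0))       # exact tail, ~1.35e-3
```

Shifting the sampling density toward the rare event makes failures frequent in the test, while the weights undo the bias; the same logic underlies applying controlled excitations on a shake table.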

Relevance: 20.00%

Abstract:

Background: The function of a protein can be deciphered with higher accuracy from its structure than from its amino acid sequence. Given the huge gap between the available protein sequence and structural space, tools that can generate functionally homogeneous clusters using only sequence information hold great importance. Traditional alignment-based tools work well in most cases, with clustering performed on the basis of sequence similarity. But in the case of multi-domain proteins, the alignment quality may be poor due to the varied lengths of the proteins, domain shuffling or circular permutations. Multi-domain proteins are ubiquitous in nature; hence alignment-free tools, which overcome the shortcomings of alignment-based protein comparison methods, are required. Further, existing tools classify proteins using only domain-level information and hence miss the information encoded in the tethered regions or accessory domains. Our method, on the other hand, takes into account the full-length sequence of a protein, consolidating the complete sequence information to understand a given protein better. 
Results: Our web server, CLAP (Classification of Proteins), is one such alignment-free software tool for automatic classification of protein sequences. It utilizes a pattern-matching algorithm that assigns local matching scores (LMS) to residues that are part of the patterns matched between the two sequences being compared. CLAP works on full-length sequences and does not require prior domain definitions. Pilot studies undertaken previously on protein kinases and immunoglobulins have shown that CLAP yields clusters with high functional and domain-architectural similarity. Moreover, parsing at a statistically determined cut-off resulted in clusters that corroborated the sub-family-level classification of the particular domain family.
Conclusions: CLAP is a useful protein-clustering tool, independent of domain assignment, domain order, sequence length and domain diversity. Our method can be used for any set of protein sequences, yielding functionally relevant clusters with high domain-architectural homogeneity. The CLAP web server is freely available for academic use at http://nslab.mbu.iisc.ernet.in/clap/.
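As a generic illustration of alignment-free sequence comparison (not CLAP's local-matching-score algorithm), two full-length sequences can be compared via the Jaccard index of their k-mer sets, which is insensitive to domain order and length differences:

```python
def kmer_jaccard(seq_a, seq_b, k=3):
    """Alignment-free similarity of two protein sequences: Jaccard index
    of their k-mer sets (a generic illustration, not CLAP's
    local-matching-score algorithm)."""
    kmer_set = lambda s: {s[i:i + k] for i in range(len(s) - k + 1)}
    a, b = kmer_set(seq_a), kmer_set(seq_b)
    return len(a & b) / len(a | b) if a | b else 0.0

score = kmer_jaccard("MKTAYIAKQR", "MKTAYIAKQL")   # 7 shared of 9 k-mers
```

Because no alignment is computed, swapping the order of two domains in one sequence leaves such a score essentially unchanged, which is the property that matters for multi-domain proteins.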

Relevance: 20.00%

Abstract:

The Electromagnetic Articulography (EMA) technique is used to record the kinematics of different articulators while one speaks. EMA data often contain missing segments due to sensor failure. In this work, we propose maximum a posteriori (MAP) estimation with a continuity constraint to recover the missing samples in articulatory trajectories recorded using EMA. This approach combines the benefits of statistical MAP estimation with the temporal continuity of the articulatory trajectories. Experiments on an articulatory corpus with different missing-segment durations show that the proposed continuity constraint yields a 30% reduction in average root-mean-squared estimation error over statistical estimation of missing segments without any continuity constraint.
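The continuity idea alone can be illustrated by filling a gap so that the completed trajectory minimizes the sum of squared second differences; this sketch omits the statistical (MAP) prior used in the paper:

```python
import numpy as np

def fill_gap_smooth(x, missing):
    """Fill missing samples so that the completed trajectory minimizes
    the sum of squared second differences (the continuity constraint
    alone, without the paper's statistical MAP prior)."""
    x = np.array(x, dtype=float)
    n = len(x)
    D = np.zeros((n - 2, n))
    for i in range(n - 2):                 # second-difference operator
        D[i, i:i + 3] = [1.0, -2.0, 1.0]
    miss = np.asarray(missing)
    keep = np.setdiff1d(np.arange(n), miss)
    A, B = D[:, miss], D[:, keep]
    # minimize ||A @ x[miss] + B @ x[keep]||^2 in least squares
    x[miss] = np.linalg.lstsq(A, -B @ x[keep], rcond=None)[0]
    return x

traj = [0.0, 1.0, 2.0, 0.0, 0.0, 5.0, 6.0, 7.0]
filled = fill_gap_smooth(traj, [3, 4])     # linear gap: recovers 3 and 4
```

Smoothness by itself already pins down short gaps from their neighbours; the MAP term in the paper adds statistical knowledge of typical articulator movements for longer gaps.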