56 results for Data Models


Relevance:

40.00%

Publisher:

Abstract:

The conformational flexibility inherent in the polynucleotide chain plays an important role in deciding its three-dimensional structure and enables it to undergo structural transitions in order to fulfil all its functions. Following certain stereochemical guidelines, both right- and left-handed double-helical models have been built in our laboratory and they are in reasonably good agreement with the fibre patterns for various polymorphous forms of DNA. Recently, nuclear magnetic resonance spectroscopy has become an important technique for studying the solution conformation and polymorphism of nucleic acids. Several workers have used 1H nuclear magnetic resonance nuclear Overhauser enhancement measurements to estimate the interproton distances for various DNA oligomers and compared them with the interproton distances for particular models of A and B form DNA. In some cases the solution conformation does not seem to fit either of these models. We have been studying various models for DNA with a view to exploring the full conformational space allowed for nucleic acid polymers. In this paper, the interproton distances calculated for the different stereochemically feasible models of DNA are presented and compared with those obtained from 1H nuclear magnetic resonance nuclear Overhauser enhancement measurements of various nucleic acid oligomers.
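
As a minimal sketch of the isolated spin-pair approximation that underlies such distance estimates (not code from the paper): NOE intensities scale roughly as r^-6, so an unknown interproton distance can be referenced against a proton pair of known separation, such as the cytosine H5-H6 pair at about 2.45 Å. Function names and numbers below are illustrative.

```python
import numpy as np

def noe_distance(noe_intensity, ref_intensity, ref_distance=2.45):
    """Interproton distance from the two-spin approximation:
    NOE ~ r**-6, referenced against a proton pair of known
    separation (here the cytosine H5-H6 distance, ~2.45 angstrom)."""
    return ref_distance * (ref_intensity / noe_intensity) ** (1.0 / 6.0)

# A cross-peak half as intense as the reference implies a distance
# about 2**(1/6) ~ 1.12 times the reference separation.
print(noe_distance(noe_intensity=0.5, ref_intensity=1.0))  # ~2.75 angstrom
```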

Relevance:

40.00%

Publisher:

Abstract:

The problem of time variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modeled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations.
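
For the linear case, the measurement-update step of the discrete Kalman filter mentioned above can be sketched as follows. This is the textbook update that conditions the response statistics on the sparse measurements, not the paper's full reliability-updating scheme; all names are illustrative.

```python
import numpy as np

def kalman_measurement_update(x, P, y, H, R):
    """Textbook discrete Kalman filter measurement update.
    x, P : prior state mean and covariance
    y    : measured responses at the sparse sensor locations
    H, R : measurement matrix and measurement-noise covariance"""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
    x_new = x + K @ (y - H @ x)            # conditioned (updated) mean
    P_new = (np.eye(len(x)) - K @ H) @ P   # conditioned covariance
    return x_new, P_new
```

The conditioned mean and covariance then feed the level-crossing statistics used to update the reliability estimate.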

Relevance:

40.00%

Publisher:

Abstract:

Predictions of two popular closed-form models for unsaturated hydraulic conductivity (K) are compared with in situ measurements made in a sandy loam field soil. Whereas the Van Genuchten model estimates were very close to the field-measured values, the Brooks-Corey model predictions were higher by about one order of magnitude in the wetter range. Estimation of the parameters of the Van Genuchten soil moisture characteristic (SMC) equation, however, involves the use of non-linear regression techniques. The Brooks-Corey SMC equation has the advantage of being amenable to linear regression techniques for estimation of its parameters from retention data. A conversion technique, whereby known Brooks-Corey model parameters may be converted into Van Genuchten model parameters, is formulated. The proposed conversion algorithm may be used to obtain the parameters of the preferred Van Genuchten model from in situ retention data without the use of non-linear regression techniques.
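
The paper's exact conversion algorithm is not reproduced here; the sketch below shows one commonly used approximate mapping from Brooks-Corey parameters (pore-size index lambda, air-entry head h_b) to Van Genuchten parameters under the m = 1 - 1/n restriction. Treat it as an assumed illustration of the idea, not the paper's formula.

```python
def brooks_corey_to_van_genuchten(lam, h_b):
    """Approximate Brooks-Corey -> Van Genuchten parameter conversion
    (one common mapping, assumed here for illustration):
    n = lambda + 1, m = 1 - 1/n, alpha = 1/h_b."""
    n = lam + 1.0
    m = 1.0 - 1.0 / n
    alpha = 1.0 / h_b
    return alpha, n, m

def van_genuchten_theta(h, theta_r, theta_s, alpha, n, m):
    """Van Genuchten soil-moisture characteristic theta(h), h >= 0."""
    Se = (1.0 + (alpha * h) ** n) ** (-m)   # effective saturation
    return theta_r + (theta_s - theta_r) * Se
```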

Relevance:

40.00%

Publisher:

Abstract:

A systematic assessment of the submodels of the conditional moment closure (CMC) formalism for the autoignition problem is carried out using direct numerical simulation (DNS) data. An initially non-premixed n-heptane/air system, subjected to three-dimensional, homogeneous, isotropic, decaying turbulence, is considered. Two kinetic schemes, (1) a one-step and (2) a reduced four-step reaction mechanism, are considered for the chemistry. An alternative formulation is developed for closure of the mean chemical source term, based on the condition that the instantaneous fluctuation of excess temperature is small. With this model, it is shown that the CMC equations describe the autoignition process all the way up to near the equilibrium limit. The effect of second-order terms (namely, the conditional variance of temperature excess sigma(2) and the conditional correlations of species q(ij)) in the modeling is examined. Comparison with DNS data shows that sigma(2) has little effect on the predicted conditional mean temperature evolution if the average conditional scalar dissipation rate is properly modeled. Using DNS data, a correction factor is introduced in the modeling of nonlinear terms to include the effect of species fluctuations. Computations including such a correction factor show that the species conditional correlations q(ij) have little effect on model predictions with a one-step reaction, but those q(ij) involving intermediate species are found to be crucial when the four-step reduced kinetics is considered. The "most reactive mixture fraction" is found to vary with time when the four-step kinetics is considered. First-order CMC results are found to be qualitatively wrong if the conditional mean scalar dissipation rate is not modeled properly. The autoignition delay time predicted by the CMC model agrees very well with DNS results and shows a trend similar to experimental data over a range of initial temperatures.
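
A first-order CMC closure of the kind assessed here evaluates the instantaneous reaction rate at the conditional means, neglecting conditional fluctuations. The sketch below illustrates this for a generic one-step Arrhenius rate; the rate constants are placeholders, not those of the n-heptane mechanisms used in the paper.

```python
import numpy as np

def one_step_rate(Y_F, Y_O, T, A=1.0e10, T_a=15000.0):
    """Generic one-step Arrhenius rate w = A * Y_F * Y_O * exp(-T_a/T);
    A and T_a are illustrative placeholder constants."""
    return A * Y_F * Y_O * np.exp(-T_a / T)

def first_order_cmc_source(Q_F, Q_O, Q_T):
    """First-order CMC closure: approximate the conditional mean
    source term by the rate evaluated at the conditional means Q(eta),
    i.e. neglect conditional fluctuations about those means."""
    return one_step_rate(Q_F, Q_O, Q_T)
```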

Relevance:

40.00%

Publisher:

Abstract:

This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the discrete cosine transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by differential pulse code modulation (DPCM) of the model parameters. The method achieves a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
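
The weighted Steiglitz-McBride fit is the heart of the method and is not reproduced here; the sketch below shows only the surrounding DCT and DPCM stages, with assumed parameter choices (block length, number of retained coefficients, quantizer step) and random data standing in for a real ECG segment.

```python
import numpy as np
from scipy.fft import dct, idct

def dpcm_encode(params, step=0.01):
    """DPCM: quantize successive differences of the parameter vector."""
    return np.round(np.diff(params, prepend=0.0) / step).astype(int)

def dpcm_decode(codes, step=0.01):
    return np.cumsum(codes) * step

block = np.random.randn(512)             # stand-in for one ECG segment
coeffs = dct(block, norm='ortho')[:64]   # low-order DCT coefficients
codes = dpcm_encode(coeffs)              # integer codes to transmit
recon = idct(np.pad(dpcm_decode(codes), (0, 448)), norm='ortho')
```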

Relevance:

40.00%

Publisher:

Abstract:

Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization for TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, to achieve a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is observed to give a ~10% gain in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal density points.
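
A Voronoi-based granular media model of the kind used in contribution 1) can be sketched by jittering grain centers off a regular lattice and taking their Voronoi tessellation as the grain boundaries. Pitch and jitter values below are illustrative, not the paper's exact medium parameters.

```python
import numpy as np
from scipy.spatial import Voronoi

def jittered_grain_centers(nx=32, ny=32, pitch=10.0, jitter=2.0, seed=0):
    """Grain centers on a perturbed square lattice (units: nm); the
    jitter standard deviation models irregular grain positions."""
    rng = np.random.default_rng(seed)
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny))
    centers = np.stack([gx.ravel(), gy.ravel()], axis=1) * pitch
    return centers + rng.normal(scale=jitter, size=centers.shape)

vor = Voronoi(jittered_grain_centers())  # cells approximate grain shapes
```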

Relevance:

30.00%

Publisher:

Abstract:

One difficulty in summarising biological survivorship data is that the hazard rates are often neither constant nor monotonically increasing or decreasing over the entire life span. The promising Weibull model does not work here. The paper demonstrates how bathtub-shaped quadratic hazard models may be used in such a case. Further, sometimes due to a paucity of data, actual lifetimes are not ascertainable. It is shown how a concept from queueing theory, namely first-in-first-out (FIFO), can be profitably used here. Another nonstandard situation considered is one in which the lifespan of the individual entity is too long compared with the duration of the experiment. This situation is dealt with by using ancillary information. In each case the methodology is illustrated with numerical examples.
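
As a sketch of the quadratic hazard idea: with h(t) = a + bt + ct² and b < 0 < a, c, the hazard is high early in life, dips, and rises again (the bathtub shape), and the survivor function follows from the closed-form cumulative hazard. Parameter values below are illustrative.

```python
import numpy as np

def hazard(t, a=0.5, b=-0.08, c=0.004):
    """Bathtub-shaped quadratic hazard h(t) = a + b*t + c*t**2;
    with these values h dips from 0.5 to 0.1 at t = 10, then rises."""
    return a + b * t + c * t**2

def survival(t, a=0.5, b=-0.08, c=0.004):
    """S(t) = exp(-H(t)) with the cumulative hazard in closed form:
    H(t) = a*t + b*t**2/2 + c*t**3/3."""
    H = a * t + b * t**2 / 2.0 + c * t**3 / 3.0
    return np.exp(-H)
```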

Relevance:

30.00%

Publisher:

Abstract:

The problem of identifying parameters of time-invariant linear dynamical systems with fractional derivative damping models, based on a spatially incomplete set of measured frequency response functions and experimentally determined eigensolutions, is considered. Methods based on inverse sensitivity analysis of damped eigensolutions and frequency response functions are developed. It is shown that the eigensensitivity method requires the development of derivatives of solutions of an asymmetric generalized eigenvalue problem. Both first- and second-order inverse sensitivity analyses are considered. The study demonstrates the successful performance of the identification algorithms developed, based on synthetic data, on one-, two- and 33-degree-of-freedom vibrating systems with fractional dampers. Limited studies have also been conducted by combining finite element modeling with experimental data on accelerances measured in laboratory conditions on a system consisting of two steel beams rigidly joined together by a rubber hose. The method based on sensitivity of frequency response functions is shown to be more efficient than the eigensensitivity-based method in identifying system parameters, especially for large-scale systems.
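
For a single-degree-of-freedom system with a fractional derivative damper, the frequency response function that such identification schemes fit has the closed form H(w) = 1/(k - m w² + c (iw)^alpha). A minimal sketch, with illustrative parameter values:

```python
import numpy as np

def frf_fractional(omega, m=1.0, c=0.4, k=100.0, alpha=0.5):
    """Receptance of a single-DOF oscillator with a fractional
    derivative damper: H(w) = 1 / (k - m*w**2 + c*(1j*w)**alpha)."""
    return 1.0 / (k - m * omega**2 + c * (1j * omega) ** alpha)

omega = np.linspace(0.1, 30.0, 500)
mag = np.abs(frf_fractional(omega))   # |H| peaks near sqrt(k/m) = 10
```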

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of the drag force is an important design parameter in aerodynamics. Measurement of aerodynamic forces at hypersonic speeds is a challenge, and ground test facilities such as shock tunnels are usually used to carry out such tests. Accelerometer-based force balances are commonly employed for measuring aerodynamic drag around bodies in hypersonic shock tunnels. In this study, we present an analysis of the effect of model material on the performance of an accelerometer balance used for measurement of drag in impulse facilities. From the experimental studies performed on models constructed out of Bakelite HYLEM and aluminum, it is clear that the rigid body assumption does not hold good during the short testing duration available in shock tunnels. This is notwithstanding the fact that the rubber bush used for supporting the model allows unconstrained motion of the model during the short testing time available in the shock tunnel. The vibrations induced in the model on impact loading in the shock tunnel are damped out in the metallic model, resulting in a smooth acceleration signal, while the signal becomes noisy and non-linear when non-isotropic materials like Bakelite HYLEM are used. This also implies that careful analysis and proper data reduction methodologies are necessary when measuring aerodynamic drag for non-metallic models in shock tunnels. Results from drag measurements carried out using a 60 degree half-angle blunt cone are given in the present analysis.
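
The basic data reduction for an unconstrained (free-floating) model is Newton's second law: the drag force is the model mass times the measured axial acceleration, and the drag coefficient follows on normalizing by dynamic pressure and reference area. A sketch, with all symbols illustrative:

```python
def drag_coefficient(mass, a_axial, q_inf, S_ref):
    """C_D = m * a / (q * S) for a free-floating model during the test
    window; mass in kg, acceleration in m/s^2, q_inf in Pa, S_ref in m^2."""
    return mass * a_axial / (q_inf * S_ref)
```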

Relevance:

30.00%

Publisher:

Abstract:

The behaviour of laterally loaded piles is considerably influenced by the uncertainties in soil properties. Hence probabilistic models for assessment of the allowable lateral load are necessary. Cone penetration test (CPT) data are often used to determine soil strength parameters, from which the allowable lateral load of the pile is computed. In the present study, the maximum lateral displacement and moment of the pile are obtained based on the coefficient of subgrade reaction approach, considering the nonlinear soil behaviour in undrained clay. The coefficient of subgrade reaction is related to the undrained shear strength of the soil, which can be obtained from CPT data. The soil medium is modelled as a one-dimensional random field along the depth, described by the standard deviation and scale of fluctuation of the undrained shear strength. Inherent soil variability, measurement uncertainty and transformation uncertainty are taken into consideration. The statistics of the maximum lateral deflection and moment are obtained using the first-order second-moment (FOSM) technique. Hasofer-Lind reliability indices for component and system failure criteria, based on the allowable lateral displacement and the moment capacity of the pile section, are evaluated. The geotechnical database from the Konaseema site in India is used as a case example. It is shown that the reliability-based design approach for pile foundations, considering the spatial variability of soil, permits a rational choice of allowable lateral loads.
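
The FOSM step can be sketched as follows for a generic limit state g(X) with independent variables: linearize g at the mean point and form beta = g(mu)/sigma_g. This is the generic FOSM recipe, not the paper's pile-specific model; the limit state and numbers are illustrative.

```python
import numpy as np

def fosm_beta(g, mu, sigma, h=1e-6):
    """First-order second-moment reliability index:
    beta = g(mu) / sqrt(sum_i (dg/dx_i * sigma_i)**2), with the
    gradient taken by central differences at the mean point."""
    mu = np.asarray(mu, dtype=float)
    grad = np.zeros_like(mu)
    for i in range(mu.size):
        step = np.zeros_like(mu)
        step[i] = h
        grad[i] = (g(mu + step) - g(mu - step)) / (2.0 * h)
    return g(mu) / np.sqrt(np.sum((grad * np.asarray(sigma)) ** 2))

# Illustrative capacity-minus-demand limit state R - S: beta ~ 2.22.
print(fosm_beta(lambda x: x[0] - x[1], mu=[100.0, 60.0], sigma=[15.0, 10.0]))
```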

Relevance:

30.00%

Publisher:

Abstract:

Knowledge of hydrological variables (e.g. soil moisture, evapotranspiration) is of pronounced importance in various applications, including flood control, agricultural production and effective water resources management. These applications require accurate prediction of hydrological variables, spatially and temporally, over a watershed/basin. Though hydrological models can simulate these variables at the desired resolution (spatial and temporal), they are often validated against variables which are either sparse in resolution (e.g. soil moisture) or averaged over large regions (e.g. runoff). A combination of a distributed hydrological model (DHM) and remote sensing (RS) has the potential to improve resolution. Data assimilation schemes can optimally combine DHM and RS. Retrieving hydrological variables (e.g. soil moisture) from remote sensing and assimilating them into a hydrological model requires validation of the algorithms using field studies. Here we present a review of methodologies developed to assimilate RS into DHM and demonstrate the application for soil moisture in a small experimental watershed in south India.
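
One widely used scheme for merging an RS soil-moisture retrieval into a DHM state is the ensemble Kalman filter; the sketch below shows a single update of a state ensemble with one scalar observation. The paper reviews several schemes, so this is an illustration of the class of methods, not its specific algorithm.

```python
import numpy as np

def enkf_update(X, y, H, r, seed=0):
    """One ensemble Kalman filter update.
    X : (n_state, n_ens) ensemble of model soil-moisture states
    y : scalar remote-sensing observation
    H : (1, n_state) observation operator
    r : observation-error variance"""
    rng = np.random.default_rng(seed)
    n_ens = X.shape[1]
    A = X - X.mean(axis=1, keepdims=True)        # ensemble anomalies
    P = A @ A.T / (n_ens - 1)                    # sample covariance
    S = (H @ P @ H.T).item() + r                 # innovation variance
    K = (P @ H.T) / S                            # gain, (n_state, 1)
    y_pert = y + rng.normal(scale=np.sqrt(r), size=(1, n_ens))
    return X + K @ (y_pert - H @ X)              # updated ensemble
```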

Relevance:

30.00%

Publisher:

Abstract:

We have evaluated techniques for estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficients of variation. These two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit was generally better when data were placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects returned a 10% coefficient of variation on the estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and the size of the group for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
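
For the half-normal model, the density estimate has a simple closed form: with detection function g(x) = exp(-x²/2σ²), the maximum-likelihood estimate of σ² is the mean squared perpendicular distance, the effective strip half-width is σ√(π/2), and group density is D = n/(2L·esw). A sketch of this standard estimator (not the paper's exact code):

```python
import numpy as np

def halfnormal_group_density(distances, n_groups, L):
    """Line-transect group density under a half-normal detection
    function; multiply by mean group size for individual density.
    distances : perpendicular sighting distances (same units as L)
    n_groups  : number of groups detected
    L         : total transect length walked"""
    x = np.asarray(distances, dtype=float)
    sigma = np.sqrt(np.mean(x**2))          # MLE of half-normal scale
    esw = sigma * np.sqrt(np.pi / 2.0)      # effective strip half-width
    return n_groups / (2.0 * L * esw)
```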

Relevance:

30.00%

Publisher:

Abstract:

This paper proposes the use of empirical modeling techniques for building microarchitecture-sensitive models for compiler optimizations. The models we build relate program performance to settings of compiler optimization flags, associated heuristics and key microarchitectural parameters. Unlike traditional analytical modeling methods, this relationship is learned entirely from data obtained by measuring performance at a small number of carefully selected compiler/microarchitecture configurations. We evaluate three different learning techniques in this context, viz. linear regression, adaptive regression splines and radial basis function networks. We use the generated models to a) predict program performance at arbitrary compiler/microarchitecture configurations, b) quantify the significance of complex interactions between optimizations and the microarchitecture, and c) efficiently search for 'optimal' settings of optimization flags and heuristics for any given microarchitectural configuration. Our evaluation using benchmarks from the SPEC CPU2000 suite suggests that accurate models (< 5% average prediction error) can be generated using a reasonable number of simulations. We also find that using compiler settings prescribed by a model-based search can improve program performance by as much as 19% (with an average of 9.5%) over highly optimized binaries.
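
The linear regression variant can be sketched as an ordinary least-squares fit of measured performance against flag settings and microarchitectural parameters. The rows below are fabricated placeholders standing in for measured configurations; only the fitting recipe is the point.

```python
import numpy as np

# Hypothetical training data: each row is [flag1, flag2, flag3, cache_kb]
# and y is the measured (normalized) execution time for that configuration.
X = np.array([[1, 0, 1, 32],
              [0, 1, 1, 64],
              [1, 1, 0, 32],
              [0, 0, 0, 64]], dtype=float)
y = np.array([1.00, 0.92, 0.97, 1.10])

# Ordinary least squares with an intercept column.
A = np.hstack([np.ones((len(X), 1)), X])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)

def predict(config):
    """Predicted performance at an arbitrary configuration."""
    return coef[0] + coef[1:] @ np.asarray(config, dtype=float)

print(predict([1, 1, 1, 64]))  # query an unmeasured configuration
```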

Relevance:

30.00%

Publisher:

Abstract:

Suitable pin-to-hole interference can significantly increase the fatigue life of a pin joint. In practical design, the initial stresses due to interference are high and they are proportional to the effective interference. In experimental studies on such joints, difficulties have been experienced in estimating the interference accurately from physical measurements of pin and hole diameters. A simple photoelastic method has been developed to determine the effective interference to a high degree of accuracy. This paper presents the method and reports illustrative data from a successful application thereof.
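
Photoelastic evaluation of this kind rests on the stress-optic law, which converts an observed fringe order N into a principal stress difference via the material fringe value f_sigma and the model thickness t; the interference is then inferred from the stresses it induces. A minimal sketch of the law itself (the paper's specific calibration procedure is not reproduced):

```python
def principal_stress_difference(N, f_sigma, t):
    """Stress-optic law for a plane photoelastic model:
    sigma_1 - sigma_2 = N * f_sigma / t, with fringe order N,
    material fringe value f_sigma (force per length per fringe)
    and model thickness t."""
    return N * f_sigma / t
```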