50 results for Towards Seamless Integration of Geoscience Models and Data
at Indian Institute of Science - Bangalore - India
Abstract:
A conceptually unifying and flexible approach to the ABC and FGH segments of the nortriterpenoid rubrifloradilactone C, each embodying a furo[3,2-b]furanone moiety, from the appropriate Morita-Baylis-Hillman adducts is delineated.
Abstract:
Predictions of two popular closed-form models for unsaturated hydraulic conductivity (K) are compared with in situ measurements made in a sandy loam field soil. Whereas the Van Genuchten model estimates were very close to field measured values, the Brooks-Corey model predictions were higher by about one order of magnitude in the wetter range. Estimation of parameters of the Van Genuchten soil moisture characteristic (SMC) equation, however, involves the use of non-linear regression techniques. The Brooks-Corey SMC equation has the advantage of being amenable to application of linear regression techniques for estimation of its parameters from retention data. A conversion technique, whereby known Brooks-Corey model parameters may be converted into Van Genuchten model parameters, is formulated. The proposed conversion algorithm may be used to obtain the parameters of the preferred Van Genuchten model from in situ retention data, without the use of non-linear regression techniques.
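Since the converted parameters then feed the closed-form van Genuchten curve, the whole procedure takes only a few lines. The sketch below uses a commonly cited limiting-case correspondence (n = λ + 1, m = 1 − 1/n, α ≈ 1/h_b) as a stand-in for the paper's own conversion algorithm, which is not reproduced here; all numbers are illustrative.

```python
def brooks_corey_to_van_genuchten(h_b, lam):
    """Convert Brooks-Corey parameters (air-entry head h_b, pore-size
    index lam) to van Genuchten (alpha, n, m) via the limiting-case
    correspondence n = lam + 1, alpha ~ 1/h_b (an assumption here,
    not the paper's algorithm)."""
    n = lam + 1.0
    m = 1.0 - 1.0 / n            # Mualem constraint m = 1 - 1/n
    alpha = 1.0 / h_b            # inverse of the air-entry head
    return alpha, n, m

def van_genuchten_Se(h, alpha, n, m):
    """Effective saturation Se(h) = [1 + (alpha*h)^n]^(-m), h > 0."""
    return (1.0 + (alpha * h) ** n) ** (-m)

# Illustrative sandy-loam-like values: h_b in metres of suction head
alpha, n, m = brooks_corey_to_van_genuchten(h_b=0.25, lam=0.4)
print(alpha, n, m, van_genuchten_Se(1.0, alpha, n, m))
```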
Abstract:
Systems-level modelling and simulations of biological processes are proving to be invaluable in obtaining a quantitative and dynamic perspective of various aspects of cellular function. In particular, constraint-based analyses of metabolic networks have gained considerable popularity for simulating cellular metabolism, of which flux balance analysis (FBA) is the most widely used. Unlike mechanistic simulations that depend on accurate kinetic data, which are scarcely available, FBA is based on the principle of conservation of mass in a network; it utilizes the stoichiometric matrix and a biologically relevant objective function to identify optimal reaction flux distributions. FBA has been used to analyse genome-scale reconstructions of several organisms; it has also been used to analyse the effect of perturbations, such as gene deletions or drug inhibitions, in silico. This article reviews the usefulness of FBA as a tool for gaining biological insights, advances in methodology enabling the integration of regulatory information and thermodynamic constraints, and finally the challenges that lie ahead. Various use scenarios and biological insights obtained from FBA, and applications in fields such as metabolic engineering and drug target identification, are also discussed. Genome-scale constraint-based models have immense potential for building and testing hypotheses, as well as for guiding experimentation.
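Because FBA reduces to a linear program (maximize c·v subject to Sv = 0 and flux bounds), its core computation fits in a few lines. The sketch below uses SciPy's linprog on a hypothetical three-reaction toy network, invented purely to illustrate the formulation; it is not from any published reconstruction.

```python
import numpy as np
from scipy.optimize import linprog

# Toy network: uptake -> A -> B -> biomass. FBA solves
#   max c^T v  s.t.  S v = 0,  lb <= v <= ub.
S = np.array([
    [ 1, -1,  0],   # metabolite A: produced by uptake, consumed by r2
    [ 0,  1, -1],   # metabolite B: produced by r2, consumed by biomass
])
c = np.array([0, 0, 1])                    # objective: biomass flux v3
bounds = [(0, 10), (0, None), (0, None)]   # uptake capped at 10

# linprog minimizes, so negate the objective to maximize biomass.
res = linprog(-c, A_eq=S, b_eq=np.zeros(S.shape[0]), bounds=bounds,
              method="highs")
print("optimal fluxes:", res.x)            # -> [10, 10, 10]
```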
Abstract:
Knowledge of hydrological variables (e.g. soil moisture, evapotranspiration) is of pronounced importance in various applications, including flood control, agricultural production and effective water resources management. These applications require accurate prediction of hydrological variables in space and time across a watershed or basin. Though hydrological models can simulate these variables at the desired spatial and temporal resolution, they are often validated against observations that are either sparse in resolution (e.g. soil moisture) or averaged over large regions (e.g. runoff). A combination of a distributed hydrological model (DHM) and remote sensing (RS) has the potential to improve resolution, and data assimilation schemes can optimally combine the two. Retrieving hydrological variables (e.g. soil moisture) from remote sensing and assimilating them in a hydrological model requires validation of the algorithms using field studies. Here we present a review of methodologies developed to assimilate RS observations in DHMs and demonstrate the application for soil moisture in a small experimental watershed in south India.
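The optimal blending step at the heart of such assimilation schemes can be illustrated with a scalar Kalman update for a single grid cell. This is a minimal sketch with illustrative numbers, not the scheme used in the watershed study.

```python
# One assimilation cycle for a single grid cell: a scalar Kalman
# filter blending a model soil-moisture forecast with a remote-sensing
# retrieval. All numbers are illustrative placeholders.
def assimilate(theta_model, P_model, theta_rs, R_rs):
    """theta_model: model forecast (m3/m3), P_model: its error variance;
    theta_rs: RS retrieval, R_rs: retrieval error variance."""
    K = P_model / (P_model + R_rs)              # Kalman gain
    theta_upd = theta_model + K * (theta_rs - theta_model)
    P_upd = (1.0 - K) * P_model                 # reduced uncertainty
    return theta_upd, P_upd

theta, P = assimilate(theta_model=0.22, P_model=0.0009,
                      theta_rs=0.27, R_rs=0.0016)
print(theta, P)   # analysis lies between forecast and retrieval
```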
Abstract:
We have evaluated techniques of estimating animal density through direct counts using line transects during 1988-92 in the tropical deciduous forests of Mudumalai Sanctuary in southern India for four species of large herbivorous mammals, namely chital (Axis axis), sambar (Cervus unicolor), Asian elephant (Elephas maximus) and gaur (Bos gaurus). Density estimates derived from the Fourier series and half-normal models consistently had the lowest coefficients of variation, and these two models also generated similar mean density estimates. For the Fourier series estimator, appropriate cut-off widths for analysing line transect data for the four species are suggested. Grouping data into various distance classes did not produce any appreciable differences in estimates of mean density or their variances, although model fit is generally better when data are placed in fewer groups. The sampling effort needed to achieve a desired precision (coefficient of variation) in the density estimate is derived. A sampling effort of 800 km of transects returned a 10% coefficient of variation on the estimate for chital; for the other species a higher effort was needed to achieve this level of precision. There was no statistically significant relationship between the detectability of a group and the size of the group for any species. Density estimates along roads were generally significantly different from those in the interior of the forest, indicating that road-side counts may not be appropriate for most species.
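For the half-normal detection model, the density estimator has a closed form: with detection function g(x) = exp(−x²/2σ²), the effective strip half-width is μ = σ√(π/2) and the density is D = n/(2Lμ). A minimal sketch with made-up perpendicular distances, not the Mudumalai data:

```python
import numpy as np

def half_normal_density(distances_m, L_km):
    """Half-normal line-transect estimate D = n / (2 L mu), with
    mu = sigma*sqrt(pi/2) the effective strip half-width."""
    x = np.asarray(distances_m, dtype=float)
    n = x.size
    sigma = np.sqrt(np.mean(x ** 2))       # MLE of sigma for half-normal
    mu = sigma * np.sqrt(np.pi / 2.0)      # effective strip half-width (m)
    L = L_km * 1000.0                      # transect length in metres
    return n / (2.0 * L * mu)              # animals per m^2

# Illustrative sighting distances (m) along a 4 km transect
d = half_normal_density([5, 12, 3, 25, 8, 17, 30, 2], L_km=4.0)
print(d * 1e6, "animals per km^2")
```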
Abstract:
The problem of time-variant reliability analysis of existing structures subjected to stationary random dynamic excitations is considered. The study assumes that samples of the dynamic response of the structure, under the action of external excitations, have been measured at a set of sparse points on the structure. The utilization of these measurements in updating reliability models, postulated prior to making any measurements, is considered. This is achieved by using dynamic state estimation methods which combine results from Markov process theory and Bayes' theorem. The uncertainties present in the measurements as well as in the postulated model for the structural behaviour are accounted for. The samples of external excitations are taken to emanate from known stochastic models, and allowance is made for the ability (or lack of it) to measure the applied excitations. The future reliability of the structure is modelled using the expected structural response conditioned on all the measurements made. This expected response is shown to have a time-varying mean and a random component that can be treated as weakly stationary. For linear systems, an approximate analytical solution for the problem of reliability model updating is obtained by combining the theories of the discrete Kalman filter and level crossing statistics. For nonlinear systems, the problem is tackled by combining particle filtering strategies with data-based extreme value analysis. In all these studies, the governing stochastic differential equations are discretized using strong forms of the Ito-Taylor discretization schemes. The possibility of using conditional simulation strategies, when the applied external actions are measured, is also considered. The proposed procedures are exemplified by considering the reliability analysis of a few low-dimensional dynamical systems based on synthetically generated measurement data. The performance of the procedures developed is also assessed based on a limited amount of pertinent Monte Carlo simulations.
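The level-crossing ingredient of the linear-system solution can be sketched with Rice's formula and the usual Poisson approximation to first-passage failure. The numbers below are illustrative, and the paper's full procedure additionally conditions the response statistics on the Kalman-filtered measurements.

```python
import numpy as np

def upcrossing_rate(b, mean, sigma_x, sigma_xdot):
    """Rice's formula: mean rate of up-crossings of threshold b by a
    stationary Gaussian process with the given mean, standard deviation
    sigma_x, and derivative-process standard deviation sigma_xdot."""
    return (sigma_xdot / (2.0 * np.pi * sigma_x)) * \
        np.exp(-((b - mean) ** 2) / (2.0 * sigma_x ** 2))

def failure_probability(b, mean, sigma_x, sigma_xdot, T):
    """Poisson (rare-crossing) approximation over duration T (s)."""
    nu = upcrossing_rate(b, mean, sigma_x, sigma_xdot)
    return 1.0 - np.exp(-nu * T)

# Illustrative response statistics and a 10-minute exposure
print(failure_probability(b=0.05, mean=0.0, sigma_x=0.01,
                          sigma_xdot=0.08, T=600.0))
```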
Abstract:
In this paper, the reduced level of rock at Bangalore, India, is arrived at from data of 652 boreholes in an area covering 220 sq. km. To predict the reduced level of rock in the subsurface of Bangalore and to study the spatial variability of the rock depth, ordinary kriging and Support Vector Machine (SVM) models have been developed. In ordinary kriging, knowledge of the semivariogram of the reduced level of rock from the 652 points in Bangalore is used to predict the reduced level of rock at any point in the subsurface of Bangalore where field measurements are not available. A cross-validation (Q1 and Q2) analysis is also done for the developed ordinary kriging model. The SVM, a novel type of learning machine based on statistical learning theory that performs regression using the ε-insensitive loss function, has been used to predict the reduced level of rock from this large data set. A comparison between the ordinary kriging and SVM models demonstrates that the SVM is superior to ordinary kriging in predicting rock depth.
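As a sketch of the SVM-regression step, scikit-learn's SVR implements exactly this ε-insensitive loss. The coordinates, reduced levels and hyperparameters below are synthetic placeholders rather than the Bangalore borehole records.

```python
import numpy as np
from sklearn.svm import SVR

# Synthetic stand-in for the borehole data: (easting, northing) -> reduced level
rng = np.random.default_rng(0)
X = rng.uniform(0, 15000, size=(200, 2))            # coordinates (m)
y = 900 - 0.002 * X[:, 0] + rng.normal(0, 2, 200)   # reduced level (m)

# epsilon sets the width of the epsilon-insensitive tube
model = SVR(kernel="rbf", C=100.0, epsilon=0.5, gamma="scale")
model.fit(X, y)

query = np.array([[7500.0, 4200.0]])                # unsampled location
print("predicted reduced level:", model.predict(query))
```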
Abstract:
This paper presents a novel algorithm for compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension is proposed to the well-known Steiglitz-McBride algorithm to model the higher-frequency components of the input signal more accurately. This is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
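The overall pipeline (transform, parametric model, DPCM of the parameters) can be sketched as below. For brevity, an ordinary least-squares all-pole (AR) fit stands in for the paper's weighted Steiglitz-McBride pole-zero fit, and the test signal, model order and quantizer step are placeholders.

```python
import numpy as np
from scipy.fft import dct

def ar_fit(x, p):
    """Least-squares AR(p) coefficients a such that x[n] ~ sum_k a_k x[n-k]
    (a plain stand-in for the paper's weighted Steiglitz-McBride fit)."""
    A = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
    a, *_ = np.linalg.lstsq(A, x[p:], rcond=None)
    return a

def dpcm_encode(params, step=0.05):
    """First-difference DPCM with a uniform quantizer (step assumed)."""
    diffs = np.diff(params, prepend=0.0)
    return np.round(diffs / step).astype(int)

ecg = np.sin(np.linspace(0, 40 * np.pi, 2048)) ** 15  # crude pulse train
X = dct(ecg, norm="ortho")       # transform stage
a = ar_fit(X, p=12)              # parametric model of the DCT signal
codes = dpcm_encode(a)           # DPCM of the model parameters
print(len(ecg), "samples ->", len(codes), "DPCM symbols")
```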
Abstract:
Floods are among the most detrimental hydro-meteorological threats to mankind, which calls for efficient flood assessment models. In this paper, we propose remote-sensing-based flood assessment using Synthetic Aperture Radar (SAR) imagery because of its imperviousness to unfavourable weather conditions. SAR images, however, suffer from speckle noise. Hence, the SAR image is processed in two stages: speckle-removal filtering and image segmentation for flood mapping. The speckle noise is reduced with the help of Lee, Frost and Gamma MAP filters, and a performance comparison of these speckle-removal filters is presented. From the results obtained, we deduce that the Gamma MAP filter is the most reliable. The selected Gamma MAP filtered image is segmented using the Gray Level Co-occurrence Matrix (GLCM) and Mean Shift Segmentation (MSS). The GLCM is a texture-analysis method that separates the image pixels into water and non-water groups based on their spectral features, whereas MSS is a gradient-ascent method in which segmentation is carried out using both spectral and spatial information. As a test case, the Kosi river flood is considered in our study. The segmentation results of both methods are comprehensively analysed, and we conclude that MSS is the more efficient for flood mapping.
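Of the three filters compared, the Lee filter is the simplest to illustrate: it shrinks each pixel toward the local mean by a weight derived from the local signal-to-speckle variance ratio. A minimal sketch, with window size and speckle variance assumed rather than taken from the paper:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def lee_filter(img, size=7, sigma_v2=0.25):
    """Lee speckle filter. sigma_v2 is the relative variance of the
    multiplicative speckle (about 1/L for an L-look intensity image);
    both parameters here are assumptions for illustration."""
    m = uniform_filter(img, size)                  # local mean
    m2 = uniform_filter(img * img, size)
    v = m2 - m * m                                 # local variance
    var_x = np.maximum((v - m * m * sigma_v2) / (1.0 + sigma_v2), 0.0)
    w = var_x / np.maximum(v, 1e-12)               # adaptive weight
    return m + w * (img - m)   # smooths flat areas, preserves edges

# Gamma-distributed stand-in for a speckled SAR intensity image
noisy = np.random.default_rng(1).gamma(1.0, 1.0, (128, 128))
print(lee_filter(noisy).shape)
```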
Abstract:
Vernacular dwellings are well-suited, climate-responsive designs that adopt local materials and skills to support comfortable indoor environments in response to local climatic conditions. These naturally ventilated passive dwellings have enabled civilizations to sustain themselves even in extreme climates, and the design and the physiological resilience of the inhabitants have co-evolved to be attuned to local climatic and environmental conditions. Such adaptations have perplexed modern theories of human thermal comfort that evolved in the era of electricity and air-conditioned buildings. Vernacular building elements like rubble walls and mud roofs are giving way to burnt brick walls and reinforced cement concrete and tin roofs. Over 60% of the Indian population is rural, so the implications of such transitions for thermal comfort and energy in buildings are crucial to understand. The types of energy use associated with a building's life cycle include its embodied energy, operational and maintenance energy, and demolition and disposal energy. Embodied energy (EE) represents the total energy consumed in constructing the building, i.e., the embodied energy of the building materials, material transportation energy and building construction energy; the embodied energy of the building materials forms the major contribution. Operational energy (OE) in buildings, contributed mainly by space-conditioning and lighting requirements, depends on the climatic conditions of the region and the comfort requirements of the building occupants. Traditional buildings use less energy-intensive natural materials, and their EE is accordingly low. The transition in materials has a significant impact on the embodied energy of vernacular dwellings: the use of manufactured, energy-intensive materials like brick, cement, steel and glass contributes to high embodied energy in these dwellings. This paper studies the increase in the EE of a dwelling attributable to the change in wall materials. Climatic location significantly influences operational energy in dwellings: buildings located in regions experiencing extreme climatic conditions require more operational energy to satisfy the heating and cooling demands throughout the year, whereas traditional buildings adopt passive, non-mechanical methods of space conditioning to overcome the vagaries of extreme climatic variations and hence need less operational energy. This study assesses operational energy in a traditional dwelling with regard to changes in wall material and climatic location; OE has been assessed for hot-dry, warm-humid and moderate climatic zones. The choice of thermal-comfort model is yet another factor that greatly influences operational energy assessment in buildings. The paper adopts two popular thermal-comfort models, viz., the ASHRAE comfort standards and the TSI of Sharma and Ali, to investigate thermal-comfort aspects and the impact of these comfort models on OE assessment in traditional dwellings. A naturally ventilated vernacular dwelling in Sugganahalli, a village close to Bangalore (India), set in a warm-humid climate, is considered for the present investigations into the impact of the transition in building materials, change in climatic location and choice of thermal-comfort model on energy in buildings. The study includes rigorous real-time monitoring of the thermal performance of the dwelling. Dynamic simulation models validated by the measured data have also been adopted to determine the impact of the transition from vernacular to modern material configurations. Results of the study and an appraisal of appropriate thermal-comfort standards for computing operational energy are presented and discussed.
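The EE accounting described here is, at its core, a bill-of-materials sum: the quantity of each material times its embodied-energy coefficient. The sketch below shows that arithmetic for a vernacular wall versus a brick-walled variant; the coefficients and quantities are illustrative placeholders, not measurements from the Sugganahalli study.

```python
# Embodied energy of a wall as sum(m_i * e_i) over its bill of materials.
# EE coefficients (MJ/kg) below are illustrative placeholders.
EE_MJ_PER_KG = {"rubble": 0.1, "burnt_brick": 3.0, "cement": 5.0}

def wall_embodied_energy(quantities_kg):
    """Total embodied energy (MJ) for a bill of materials {name: kg}."""
    return sum(m * EE_MJ_PER_KG[mat] for mat, m in quantities_kg.items())

vernacular = {"rubble": 12000.0, "cement": 150.0}    # mostly stone, little mortar
modern = {"burnt_brick": 9000.0, "cement": 1200.0}   # fired brick + cement mortar
print(wall_embodied_energy(vernacular), "MJ vs",
      wall_embodied_energy(modern), "MJ")
```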
Abstract:
Two-dimensional magnetic recording (TDMR) is an emerging technology that aims to achieve areal densities as high as 10 Tb/in² using sophisticated 2-D signal-processing algorithms. High areal densities are achieved by reducing the size of a bit to the order of the size of the magnetic grains, resulting in severe 2-D intersymbol interference (ISI). Jitter noise due to irregular grain positions on the magnetic medium is more pronounced at these areal densities. Therefore, a viable read-channel architecture for TDMR requires 2-D signal-detection algorithms that can mitigate 2-D ISI and combat noise comprising jitter and electronic components. The partial-response maximum-likelihood (PRML) detection scheme allows controlled ISI as seen by the detector. With the controlled and reduced span of 2-D ISI, the PRML scheme overcomes practical difficulties such as the Nyquist-rate signaling required for full-response 2-D equalization. As in 1-D magnetic recording, jitter noise can be handled using a data-dependent noise-prediction (DDNP) filter bank within a 2-D signal-detection engine. The contributions of this paper are threefold: 1) we empirically study the jitter noise characteristics in TDMR as a function of grain density using a Voronoi-based granular media model; 2) we develop a 2-D DDNP algorithm to handle the media noise seen in TDMR; and 3) we develop techniques to design 2-D separable and nonseparable targets for generalized partial-response equalization in TDMR, which can be used along with a 2-D signal-detection algorithm. The DDNP algorithm is observed to give a 2.5 dB gain in SNR over uncoded data compared with noise-predictive maximum-likelihood detection for the same choice of channel model parameters, achieving a channel bit density of 1.3 Tb/in² with a media grain center-to-center distance of 10 nm. The DDNP algorithm is also observed to give a gain of about 10% in areal density near 5 grains/bit. The proposed signal-processing framework can broadly scale to various TDMR realizations and areal-density points.
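The idea behind DDNP (train a separate linear noise predictor for each local data pattern, then whiten with the matching filter during detection) is shown below in a 1-D form for brevity; the paper's filter bank is 2-D, and the noise model, pattern length and predictor order here are synthetic assumptions.

```python
import numpy as np

# Data-dependent noise prediction, 1-D sketch: one least-squares linear
# predictor per data pattern (here just the current bit value).
rng = np.random.default_rng(0)
N, ORDER = 50000, 2
bits = rng.integers(0, 2, N)
e = rng.normal(0.0, 1.0, N)
# Toy correlated noise whose variance depends on the written bit
noise = 0.7 * np.roll(e, 1) + (0.5 + 0.8 * bits) * e

for b in (0, 1):                                  # data pattern
    idx = np.nonzero(bits[ORDER:] == b)[0] + ORDER
    A = np.column_stack([noise[idx - k] for k in range(1, ORDER + 1)])
    w, *_ = np.linalg.lstsq(A, noise[idx], rcond=None)  # predictor taps
    resid = noise[idx] - A @ w                    # prediction error
    print(f"pattern {b}: noise var {noise[idx].var():.3f} -> "
          f"prediction-error var {resid.var():.3f}")
```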
Abstract:
It is shown that left-handed duplexes are possible for the A, B and D forms of DNA. These duplexes are stereochemically satisfactory and are consistent with the observed x-ray intensity data. On scrutiny, the refined right-handed models of B and D DNA by Arnott and coworkers are found to be stereochemically unacceptable. It was possible to formulate a stereochemical guideline for molecular model building based on theory and on analysis of single-crystal structure data of dinucleoside monophosphates and higher oligomers. This led to both right- and left-handed DNA duplexes. The right-handed B and D DNA duplexes so obtained are stereochemically superior to earlier models and agree well with the observed x-ray intensity data. The observation that DNA can exist in either handedness for all the polymorphous forms of DNA at once explained the A ⇌ B and B ⇌ D transitions. Hence it is confirmed that the polymorphism of DNA is a reflection of the conformational flexibility inherent in DNA, the same cause that ultimately allows DNA of either handedness. The possibility of various types of right- and left-handed duplexes generated by using dinucleoside monophosphate and trinucleoside diphosphate repeating units resulted in a variety of models, called RL models. All these models have alternating right and left helical segments and inverted stacking at the bend region, as suggested by us earlier. It turns out that the B-Z DNA model of Wang et al. is only an example of the RL models.
Abstract:
This paper is concerned with the integration of voice and data on an experimental local area network used by the School of Automation of the Indian Institute of Science. SALAN (School of Automation Local Area Network) consists of a number of microprocessor-based communication nodes linked to a shared coaxial-cable transmission medium. The communication nodes handle the various low-level functions associated with computer communication and interface user data equipment to the network. SALAN at present provides a file-transfer facility between an Intel Series III microcomputer development system and a Texas Instruments Model 990/4 microcomputer system. Further, a packet voice communication system has also been implemented on SALAN. The various aspects of the design and implementation of these two utilities are discussed.
Abstract:
The development of a new synthesis of 2,6,7,7a-tetrahydro-1β-hydroxy-4-formyl-7a-methylindene was undertaken, involving the preparation of 2,6,7,7a-tetrahydro-1β-hydroxy-4-methoxymethyl-7a-methylindene, because of the erratic yield in the last oxidation step of the reported synthesis of the former compound. Although various attempts to prepare the latter were not successful, interesting rearrangement products were obtained: the dienone 5,6,7,7a-tetrahydro-4,7a-dimethyl-5H-indene-1,5-dione and the tricyclic keto alcohol 2,6-diketo-3-methyltricyclo[5.2.1.0]decan-8-ol, the structures of which have been proved by spectral data. Mechanisms for the formation of these products are proposed.
Abstract:
This paper discusses the consistent regularization property of the generalized-α method when applied as an integrator to an initial-value, high-index and singular differential-algebraic equation (DAE) model of a multibody system. The regularization comes from within the discretization itself, and the discretization remains consistent over the range of values the regularization parameter may take. The regularization involves an increase of the smallest singular values of the ill-conditioned Jacobian of the discretization and is different from Baumgarte and similar techniques, which tend to be inconsistent for a poor choice of the regularization parameter. This regularization also helps where pre-conditioning the Jacobian by scaling is of limited effect, for example, when the scleronomic constraints contain multiple closed loops or a singular configuration, or when high-index path constraints are present. Feed-forward control in Kane's equation models is additionally considered in the numerical examples to illustrate the effect of regularization. The discretization presented in this work is adapted to first-order DAE systems (unlike the original method, which is intended for second-order systems) for its A-stability and its equal order of accuracy for positions and velocities.
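For readers unfamiliar with the first-order form of the scheme, the sketch below applies generalized-α time stepping, in the α_m/α_f/γ parametrization commonly used for first-order systems (Jansen, Whiting and Hulbert), to the scalar test problem y′ = λy. The ρ∞ value is an assumption, and the paper's application to multibody DAEs requires a nonlinear corrector rather than this closed-form linear update.

```python
import numpy as np

def gen_alpha_solve(lam, y0, h, steps, rho_inf=0.8):
    """Generalized-alpha for y' = lam*y. The discrete equations are
       (1-am)*yd + am*yd_new = lam*((1-af)*y + af*y_new),
       y_new = y + h*((1-g)*yd + g*yd_new),
    solved in closed form for yd_new at each step."""
    am = 0.5 * (3.0 - rho_inf) / (1.0 + rho_inf)
    af = 1.0 / (1.0 + rho_inf)
    g = 0.5 + am - af                  # second-order accuracy condition
    y, yd = y0, lam * y0               # consistent initial rate
    for _ in range(steps):
        yd_new = (lam * y + (lam * af * h * (1 - g) - (1 - am)) * yd) \
                 / (am - lam * af * h * g)
        y = y + h * ((1 - g) * yd + g * yd_new)
        yd = yd_new
    return y

# Compare against the exact solution exp(-1) at t = 1
print(gen_alpha_solve(lam=-1.0, y0=1.0, h=0.1, steps=10), np.exp(-1.0))
```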