33 results for "Modelling lifetime data"


Relevance: 30.00%

Publisher:

Abstract:

A general procedure for arriving at 3-D models of disulphide-rich polypeptide systems based on covalent cross-link constraints has been developed. The procedure, which has been coded as a computer program, RANMOD, assigns a large number of random, permitted backbone conformations to the polypeptide and identifies stereochemically acceptable structures as plausible models based on strainless disulphide bridge modelling. Disulphide bond modelling is performed using the procedure MODIP, developed earlier in connection with the choice of suitable sites where disulphide bonds could be engineered in proteins (Sowdhamini, R., Srinivasan, N., Shoichet, B., Santi, D.V., Ramakrishnan, C. and Balaram, P. (1989) Protein Engng, 3, 95-103). RANMOD has been tested on small disulphide loops, and the structures compared against preferred backbone conformations derived from an analysis of a putative disulphide sub-database and from model calculations. RANMOD has been applied to disulphide-rich peptides and found to give rise to several stereochemically acceptable structures. The results obtained on the modelling of two test cases, α-conotoxin GI and endothelin I, are presented. Available NMR data suggest that such small systems exhibit conformational heterogeneity in solution. Hence, this approach for obtaining several distinct models is particularly attractive for the study of conformational excursions.
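The accept/reject idea behind RANMOD (sample many random backbone conformations, keep only those whose end-to-end geometry could accommodate a strain-free cross-link) can be illustrated with a toy 2-D chain. All numbers here (bond length, turn-angle range, distance window) are invented for illustration and bear no relation to the actual RANMOD stereochemistry:

```python
import math
import random

def random_chain(n_res, bond_len=3.8):
    """Build a toy 2-D 'backbone' by sampling one random turn per residue."""
    x, y, theta = 0.0, 0.0, 0.0
    pts = [(x, y)]
    for _ in range(n_res):
        theta += random.uniform(-math.pi / 2, math.pi / 2)
        x += bond_len * math.cos(theta)
        y += bond_len * math.sin(theta)
        pts.append((x, y))
    return pts

def end_to_end(pts):
    (x0, y0), (x1, y1) = pts[0], pts[-1]
    return math.hypot(x1 - x0, y1 - y0)

def sample_models(n_res, d_lo, d_hi, n_trials=2000):
    """Keep only conformations whose termini fall inside a 'bridgeable' window."""
    random.seed(0)
    return [c for c in (random_chain(n_res) for _ in range(n_trials))
            if d_lo <= end_to_end(c) <= d_hi]

models = sample_models(n_res=8, d_lo=4.0, d_hi=7.0)
```

Every accepted model satisfies the geometric constraint by construction; the real procedure replaces the distance window with full disulphide stereochemistry checks.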


This study describes two machine learning techniques applied to predict the liquefaction susceptibility of soil based on standard penetration test (SPT) data from the 1999 Chi-Chi, Taiwan earthquake. The first technique uses an Artificial Neural Network (ANN) based on multi-layer perceptrons (MLP) trained with the Levenberg-Marquardt backpropagation algorithm. The second uses the Support Vector Machine (SVM), a classification technique firmly grounded in statistical learning theory. The ANN and SVM models have been developed to predict liquefaction susceptibility using the corrected SPT blow count [(N1)60] and the cyclic stress ratio (CSR). Further, an attempt has been made to simplify the models so that they require only two parameters, (N1)60 and the peak ground acceleration [a(max)/g], for the prediction of liquefaction susceptibility. The developed ANN and SVM models have also been applied to different case histories available globally. The paper also highlights the superior capability of the SVM models over the ANN models.
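As a rough sketch of the classification task only (not the MLP or SVM machinery used in the study), a minimal perceptron can separate hypothetical liquefied/non-liquefied records using the same two simplified parameters, (N1)60 and a(max)/g. The training pairs below are invented:

```python
def train_perceptron(data, epochs=10000, lr=0.1):
    """Classic perceptron on ((x1, x2), label) pairs with label in {0, 1}."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        mistakes = 0
        for (x1, x2), y in data:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = y - pred
            if err:
                mistakes += 1
                w[0] += lr * err * x1
                w[1] += lr * err * x2
                b += lr * err
        if mistakes == 0:          # converged on this separable toy set
            break
    return w, b

def predict(w, b, x):
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Invented pairs: (blow count (N1)60, a(max)/g) -> 1 if the site liquefied.
# Low blow count plus strong shaking is labelled susceptible.
data = [((5, 0.40), 1), ((8, 0.35), 1), ((10, 0.30), 1),
        ((25, 0.10), 0), ((30, 0.15), 0), ((20, 0.05), 0)]
w, b = train_perceptron(data)
```

A real ANN adds hidden layers and a smooth loss, and an SVM maximizes the margin; both would be trained on the full SPT case-history database rather than six invented points.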


It is well known that increasing space activity poses a serious threat to future missions, mainly due to the presence of spent stages, rockets, spacecraft and fragments, which can lead to collisions. Calculating the collision probability of future space vehicles with orbital debris is necessary for estimating the risk. There is a lack of adequately catalogued and openly available detailed information on the explosion characteristics of trackable and untrackable debris. Such a situation compels one to develop a suitable mathematical model of the explosion and the resultant debris environment. Based on a study of the available information regarding fragmentation, subsequent evolution and observation, it turns out to be possible to develop such a mathematical model, connecting the dynamical features of the fragmentation with the geometrical/orbital characteristics of the debris and representing the environment through the idea of an equivalent breakup. (C) 1997 COSPAR.


A mathematical model has been developed for the gas carburising (diffusion) process using the finite volume method, and a computer simulation has been carried out for an industrial gas carburising process. The model's predictions are in good agreement with industrial experimental data and with data collected from the literature. A study of various mass transfer and diffusion coefficients has been carried out in order to suggest which correlations should be used for the gas carburising process. The model has been given a graphical user interface in a Windows environment, making it extremely user friendly. A sensitivity analysis of various parameters, such as the initial carbon concentration in the specimen, the carbon potential of the atmosphere and the temperature of the process, has been carried out using the model.
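A minimal sketch of the kind of calculation involved, assuming a 1-D slab, an explicit update and a fixed surface concentration equal to the atmosphere's carbon potential (the study's finite-volume model is industrial-scale and more elaborate). The diffusivity and grid numbers below are illustrative only:

```python
def carburise_profile(c0, c_surf, D, dx, dt, n_cells, n_steps):
    """Explicit finite-volume update of carbon concentration (wt%) in a slab.
    Surface cell held at the carbon potential (Dirichlet); far end zero-flux.
    The explicit scheme is stable only if D*dt/dx**2 <= 0.5."""
    c = [c0] * n_cells
    r = D * dt / dx ** 2
    assert r <= 0.5, "explicit scheme unstable for this dt/dx"
    for _ in range(n_steps):
        new = c[:]
        new[0] = c_surf
        for i in range(1, n_cells - 1):
            new[i] = c[i] + r * (c[i - 1] - 2 * c[i] + c[i + 1])
        new[-1] = c[-1] + r * (c[-2] - c[-1])   # zero-flux boundary cell
        c = new
    return c

# Illustrative numbers only: D ~ 1e-11 m^2/s is a plausible order of magnitude
# for carbon in austenite at carburising temperatures.
profile = carburise_profile(c0=0.2, c_surf=1.0, D=1e-11,
                            dx=1e-4, dt=0.4, n_cells=50, n_steps=2000)
```

The resulting profile decays monotonically from the surface value toward the core concentration, which is the qualitative shape a carburised case should have.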


We have analysed the diurnal cycle of rainfall over the Indian region (10S-35N, 60E-100E) using both satellite and in-situ data, and found many interesting features associated with this fundamental, yet under-explored, mode of variability. Since there is a distinct and strong diurnal mode of variability associated with Indian summer monsoon rainfall, we evaluate the ability of the Weather Research and Forecasting (WRF) model to simulate the observed diurnal rainfall characteristics. The model (at 54 km grid-spacing) is integrated for the month of July 2006, since this period was particularly favourable for the study of the diurnal cycle. We first evaluate the sensitivity of the model to the prescribed sea surface temperature (SST) by using two different SST datasets, namely Final Analyses (FNL) and Real-time Global (RTG). With RTG SST the rainfall simulation over central India (CI) was significantly better than with FNL, whereas over the Bay of Bengal (BoB) rainfall simulated with FNL was marginally better than with RTG. Since the overall performance of RTG SST was better than that of FNL, it was used for the further model simulations. Next, we investigated the role of the convective parameterization scheme in the simulation of the diurnal cycle of rainfall, and found that the Kain-Fritsch (KF) scheme performs significantly better than the Betts-Miller-Janjić and Grell-Devenyi schemes. We also studied the impact of the other physical parameterizations, namely microphysics, boundary layer, land surface and radiation, on the simulation of the diurnal cycle of rainfall, and identified the “best” model configuration. This configuration was then used for a sensitivity study on the role of various convective components of the KF scheme; in particular, we studied the role of convective downdrafts, the convective timescale and the feedback fraction in the simulated diurnal cycle of rainfall.
The “best” model simulations, in general, show good agreement with observations. Specifically: (i) over CI, the simulated diurnal rainfall peak is at 1430 IST, compared with the observed 1430-1730 IST peak; (ii) over the Western Ghats and the Burmese mountains, the model simulates a diurnal rainfall peak at 1430 IST, as opposed to the observed 1430-1730 IST peak; (iii) over Sumatra, both model and observations show a diurnal peak at 1730 IST; (iv) the observed southward-propagating diurnal rainfall bands over the BoB are only weakly simulated by WRF. Besides the diurnal cycle of rainfall, the mean spatial pattern of total rainfall and its partitioning into convective and stratiform components are also well simulated. The “best” configuration was used to conduct two nested simulations with one-way, three-level nesting (54-18-6 km) over CI and the BoB. While the 54 km and 18 km simulations were conducted for the whole of July 2006, the 6 km simulation was carried out for the period 18-24 July 2006. The results of our coarse- and fine-scale numerical simulations of the diurnal cycle of monsoon rainfall will be discussed.


Lifetime calculations for large, dense sensor networks with fixed energy resources, and the residual energy remaining in them, have shown that for a constant energy resource the fault rate at the cluster head is invariant with network size when using the network layer with no MAC losses. Even after increasing the battery capacity of the nodes, the total lifetime does not increase beyond a limit of about 8 times. As this is a serious limitation, much research has been done at the MAC layer, which allows adaptation to the specific connectivity, traffic and channel-polling needs of sensor networks. Many MAC protocols control the channel polling of the new radios available to sensor nodes for communication; this further reduces the communication overhead through idling and sleep scheduling, thus extending the lifetime of the monitoring application. We address two issues that affect the distributed characteristics and performance of connected MAC nodes: (1) determining the theoretical minimum rate based on joint coding for a correlated data source at the single hop; (2a) estimating cluster-head errors using a Bayesian rule for routing with persistence clustering, when node densities are the same and stored as prior probabilities at the network layer; (2b) estimating the upper bound on routing errors when using passive clustering, where the node densities at the multi-hop MACs are unknown and not stored at the multi-hop nodes a priori. In this paper we evaluate many MAC-based sensor network protocols and study their effects on sensor network lifetime. A renewable-energy MAC routing protocol is designed for the case where the probabilities of active nodes are not known a priori. From theoretical derivations we show that, for a Bayesian rule with known class densities ω1, ω2 and expected error P*, the maximum error rate is bounded by P = 2P* for the single hop.
We study the effects of energy losses using cross-layer simulation of a large sensor-network MAC setup, and the error rate that affects finding sufficient node densities for reliable multi-hop communication when the node densities are unknown. The simulation results show that, even though the lifetimes are comparable, the expected Bayesian posterior probability error bound is close to or higher than P ≥ 2P*.
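The single-hop claim that a practical rule's error is bounded by twice the Bayes error P* echoes the classic Cover-Hart nearest-neighbour bound. A small numerical check on two known 1-D Gaussian class densities (a stand-in for the ω1, ω2 above; all parameters invented) illustrates it:

```python
import math
import random

def bayes_error(mu1, mu2, sigma=1.0):
    """Bayes error P* for equal-prior 1-D Gaussians (decision at the midpoint)."""
    d = abs(mu2 - mu1) / (2.0 * sigma)
    return 0.5 * math.erfc(d / math.sqrt(2.0))

def nn_error(mu1, mu2, n_train=1000, n_test=1000, sigma=1.0, seed=1):
    """Monte-Carlo error of a 1-nearest-neighbour rule on the same problem."""
    rng = random.Random(seed)
    train = [(rng.gauss(mu1, sigma), 0) for _ in range(n_train)] + \
            [(rng.gauss(mu2, sigma), 1) for _ in range(n_train)]
    wrong = 0
    for _ in range(n_test):
        label = rng.randrange(2)
        x = rng.gauss(mu1 if label == 0 else mu2, sigma)
        nearest = min(train, key=lambda t: abs(t[0] - x))[1]
        wrong += nearest != label
    return wrong / n_test

p_star = bayes_error(0.0, 2.0)   # analytic Bayes error, about 0.159
p_nn = nn_error(0.0, 2.0)        # empirical nearest-neighbour error
```

With ample training data the empirical nearest-neighbour error sits between P* and 2P*, which is the flavour of the bound cited in the abstract.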


This paper presents a novel algorithm for the compression of single-lead electrocardiogram (ECG) signals. The method is based on pole-zero modelling of the Discrete Cosine Transformed (DCT) signal. An extension to the well-known Steiglitz-McBride algorithm is proposed to model the higher-frequency components of the input signal more accurately; this is achieved by weighting the error function minimized by the algorithm to estimate the model parameters. The data compression achieved by the parametric model is further enhanced by Differential Pulse Code Modulation (DPCM) of the model parameters. The method accomplishes a compression ratio in the range of 1:20 to 1:40, which far exceeds those achieved by most current methods.
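The DPCM stage can be sketched independently of the pole-zero modelling: a first-order closed-loop DPCM coder quantises the prediction residual against the previously decoded value, so encoder and decoder stay in lockstep and the reconstruction error never exceeds half the quantiser step. The signal and step size below are invented:

```python
def dpcm_encode(samples, q_step=4):
    """First-order DPCM: quantise the residual against the previous *decoded*
    value, so the encoder tracks exactly what the decoder will reconstruct."""
    codes, prev = [], 0
    for s in samples:
        code = round((s - prev) / q_step)
        codes.append(code)
        prev += code * q_step          # decoder-side reconstruction
    return codes

def dpcm_decode(codes, q_step=4):
    out, prev = [], 0
    for code in codes:
        prev += code * q_step
        out.append(prev)
    return out

signal = [10, 14, 19, 25, 24, 20, 15, 11]   # invented slowly varying samples
codes = dpcm_encode(signal)
recon = dpcm_decode(codes)
```

In the paper the DPCM input is the stream of model parameters rather than raw samples, but the coder/decoder symmetry is the same.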


Natural hazards such as landslides are triggered by numerous factors such as ground movements, rock falls, slope failure, debris flows and slope instability. Changes in slope stability arise from human intervention and other anthropogenic activities, changes in soil structure, and the loss or absence of vegetation (changes in land cover). Loss of vegetation occurs when forest is fragmented by anthropogenic activities. Hence, land cover mapping combined with forest fragmentation analysis can provide vital information for visualising the regions that require immediate attention from the slope stability perspective. The main objective of this paper is to understand the rate of change in the forest landscape from 1973 to 2004 through multi-sensor remote sensing data analysis. The forest fragmentation index presented here is based on temporal land use information and a forest fragmentation model, in which forest pixels are classified as patch, transitional, edge, perforated or interior, giving a measure of forest continuity. The analysis, carried out for five prominent watersheds of Uttara Kannada district (Aganashini, Bedthi, Kali, Sharavathi and Venkatpura), revealed that interior forest is continuously decreasing while patch, transitional, edge and perforated forest show an increasing trend. The effect of forest fragmentation on landslide occurrence was visualised by overlaying the landslide occurrence points on the classified image and the forest fragmentation map. The increasing patch and transitional forest on hill slopes are the areas prone to landslides, evident from field verification, indicating that deforestation is a major triggering factor for landslides. This emphasises the need for immediate conservation measures for sustainable management of the landscape. Quantifying and describing land use/land cover change and fragmentation is crucial for assessing the effect of land management policies and environmental protection decisions.
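A window-based fragmentation labelling of this kind can be sketched as follows. The thresholds, the reduced label set and the tiny grid are invented; the fragmentation model used in such studies distinguishes more classes (patch, transitional, edge, perforated, interior) with more refined rules:

```python
def classify_forest(grid, win=1):
    """Label each forest cell (value 1) by the forest fraction Pf in its
    (2*win+1)^2 neighbourhood: Pf == 1 -> 'interior', Pf >= 0.6 -> 'edge',
    otherwise 'patch'.  Simplified, invented thresholds."""
    rows, cols = len(grid), len(grid[0])
    labels = {}
    for r in range(rows):
        for c in range(cols):
            if not grid[r][c]:
                continue                      # non-forest cells get no label
            cells = [grid[rr][cc]
                     for rr in range(max(0, r - win), min(rows, r + win + 1))
                     for cc in range(max(0, c - win), min(cols, c + win + 1))]
            pf = sum(cells) / len(cells)
            labels[(r, c)] = ('interior' if pf == 1.0
                              else 'edge' if pf >= 0.6 else 'patch')
    return labels

forest = [[1, 1, 1, 0],        # invented 4x4 forest / non-forest raster
          [1, 1, 1, 0],
          [1, 1, 1, 0],
          [0, 0, 0, 0]]
labels = classify_forest(forest)
```

Counting the labels over a classified land-cover raster for each epoch is what yields the trend of shrinking interior forest reported above.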


Sixteen irrigation subsystems of the Mahi Bajaj Sagar Project, Rajasthan, India, are evaluated, and the most suitable is selected, using data envelopment analysis (DEA) in both deterministic and fuzzy environments. Seven performance-related indicators are considered, namely land development works (LDW), timely supply of inputs (TSI), conjunctive use of water resources (CUW), participation of farmers (PF), environmental conservation (EC), economic impact (EI) and crop productivity (CPR). Of the seven, LDW, TSI, CUW, PF and EC are treated as inputs, whereas CPR and EI are treated as outputs for DEA modelling purposes. Spearman rank correlation coefficient values are also computed for various scenarios. It is concluded that DEA in both deterministic and fuzzy environments is useful for the present problem; however, the outcome of fuzzy DEA may be preferred for further analysis due to its simple and effective data- and discrimination-handling procedure. The present study can be extended to similar situations with suitable modifications.
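Full DEA solves one linear program per decision-making unit, letting each unit choose the input/output weights most favourable to itself. As a deliberately crude stand-in that conveys only the flavour of ranking units on multiple inputs and outputs, an equal-weight output/input ratio can be computed; the subsystem data below are invented:

```python
def ratio_efficiency(units):
    """Crude stand-in for DEA: equal-weight output/input ratio, rescaled so
    the best unit scores 1.0.  Real (CCR) DEA instead solves an LP per unit."""
    raw = {name: sum(outs) / sum(ins) for name, (ins, outs) in units.items()}
    best = max(raw.values())
    return {name: round(v / best, 3) for name, v in raw.items()}

# Hypothetical subsystems: ([LDW, TSI, CUW, PF, EC inputs], [CPR, EI outputs]),
# all indicators normalised to [0, 1].  Values are invented.
units = {
    'S1': ([0.7, 0.6, 0.5, 0.8, 0.6], [0.9, 0.8]),
    'S2': ([0.9, 0.8, 0.7, 0.9, 0.8], [0.7, 0.6]),
    'S3': ([0.5, 0.5, 0.4, 0.6, 0.5], [0.8, 0.9]),
}
scores = ratio_efficiency(units)
```

A unit scoring 1.0 here is merely best under equal weights; in DEA a unit is efficient if *some* admissible weighting makes it undominated, which generally yields a larger efficient set.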


In this report, we investigate the problem of applying a range constraint in order to reduce the systematic heading drift in a foot-mounted inertial navigation system (INS) (motion-tracking). We make use of two foot-mounted INS, one on each foot, which are aided with zero-velocity detectors. A novel algorithm is proposed in order to reduce the systematic heading drift. The proposed algorithm is based on the idea that the separation between the two feet at any given instance must always lie within a sphere of radius equal to the maximum possible spatial separation between the two feet. A Kalman filter, getting one measurement update and two observation updates is used in this algorithm.
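The geometric content of the range constraint (the two position estimates may never be farther apart than the maximum possible stride) can be sketched as a symmetric projection back onto the feasible sphere. The report implements the idea as a Kalman filter observation update rather than this hard projection; the positions and radius below are invented:

```python
import math

def apply_range_constraint(p_left, p_right, r_max):
    """If the estimated separation of the two feet exceeds r_max, pull both
    3-D position estimates symmetrically back onto the sphere of radius r_max."""
    d = math.dist(p_left, p_right)
    if d <= r_max or d == 0.0:
        return list(p_left), list(p_right)     # already feasible
    excess = (d - r_max) / 2.0
    unit = [(b - a) / d for a, b in zip(p_left, p_right)]  # left -> right
    new_left = [a + excess * u for a, u in zip(p_left, unit)]
    new_right = [b - excess * u for b, u in zip(p_right, unit)]
    return new_left, new_right

left, right = apply_range_constraint([0.0, 0.0, 0.0], [2.0, 0.0, 0.0],
                                     r_max=1.2)
```

Splitting the correction equally between the feet is the simplest choice; a filter formulation instead weights the correction by each estimate's covariance.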


The study of ocean dynamics and forecasting is crucial, as the ocean influences the regional climate and other marine activities. Forecasting oceanographic states such as sea surface currents, sea surface temperature (SST) and mixed layer depth at different time scales is extremely important for these activities. These forecasts are generated by various ocean general circulation models (OGCMs), one of which is the Regional Ocean Modelling System (ROMS). Though ROMS can simulate several features of the ocean, it cannot reproduce the thermocline properly. A solution to this problem is to incorporate data assimilation (DA) into the model. A DA system using the Ensemble Transform Kalman Filter (ETKF) has been developed for the ROMS model to improve the accuracy of the model forecast. Temperature and salinity from ARGO data are used as observations. For the Indian Ocean, assimilated temperature and salinity without localization show oscillations compared to the model run without assimilation; the same was found for the u- and v-velocity fields. With localization, we found that the state variables diverge within the localization scale.
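A scalar, stochastic ensemble update (a simplification of the ETKF, which works with the full ensemble transform on high-dimensional state) illustrates how an observation pulls the ensemble toward the data by a gain built from the ensemble spread. The 'SST' members and observation values below are invented:

```python
import random

def ensemble_update(ensemble, obs, obs_var, seed=42):
    """Stochastic scalar ensemble Kalman update: each member is nudged toward
    a perturbed observation by the gain K = var / (var + obs_var)."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    var = sum((x - mean) ** 2 for x in ensemble) / (n - 1)
    gain = var / (var + obs_var)               # scalar Kalman gain
    rng = random.Random(seed)
    return [x + gain * (obs + rng.gauss(0.0, obs_var ** 0.5) - x)
            for x in ensemble]

prior = [26.0, 27.5, 28.3, 29.1, 30.2, 27.9, 28.8, 26.9]  # invented SST members
posterior = ensemble_update(prior, obs=28.0, obs_var=0.25)
```

Localization, which the abstract discusses, enters in the multivariate case by tapering the gain with distance from the observation so that remote state variables are not spuriously updated.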


Many industrial processes involve reactions between two immiscible liquid systems, and it is very important to increase the efficiency and productivity of such reactions. One important example is the metal-slag system. To increase the reaction rate or efficiency, one must increase the contact surface area of one of the phases. This is done either by emulsifying the slag into the metal phase or the metal into the slag phase; the latter is preferred from the stability viewpoint. Recently, we proposed a simple and elegant mathematical model to describe metal emulsification in the presence of bottom gas bubbling. The same model is extended here. The effects of slag and metal phase viscosity, density and metal droplet size on the metal droplet velocity in the slag phase are discussed for the above-mentioned metal emulsification process. The model's results have been compared with experimental data.
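Under a creeping-flow assumption, the dependence of droplet velocity on viscosity, density difference and droplet size is captured by Stokes' law. This is a generic sketch, not the paper's model (which includes the gas-bubbling dynamics), and the property values are illustrative only:

```python
def droplet_rise_velocity(d, rho_metal, rho_slag, mu_slag, g=9.81):
    """Stokes-law terminal velocity (m/s) of a metal droplet in slag.
    Valid only in the creeping-flow regime (droplet Reynolds number << 1);
    positive value means the denser droplet settles back through the slag."""
    return g * d ** 2 * (rho_metal - rho_slag) / (18.0 * mu_slag)

# Illustrative values: a 1 mm droplet of liquid steel (~7000 kg/m^3) in a
# slag of ~3500 kg/m^3 and 0.3 Pa.s.
v = droplet_rise_velocity(d=1e-3, rho_metal=7000.0, rho_slag=3500.0,
                          mu_slag=0.3)
```

The quadratic dependence on droplet diameter is why the droplet size distribution created by the bubbling dominates the residence time of metal in the slag.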


We consider a scenario where the communication nodes in a sensor network have limited energy, and the objective is to maximize the aggregate bits transported from sources to respective destinations before network partition due to node deaths. This performance metric is novel, and captures the useful information that a network can provide over its lifetime. The optimization problem that results from our approach is nonlinear; however, we show that it can be converted to a Multicommodity Flow (MCF) problem that yields the optimal value of the metric. Subsequently, we compare the performance of a practical routing strategy, based on Node Disjoint Paths (NDPs), with the ideal corresponding to the MCF formulation. Our results indicate that the performance of NDP-based routing is within 7.5% of the optimal.
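The conversion to a flow problem can be illustrated in the single-commodity special case, where the aggregate bits deliverable from a source to a destination is simply a max-flow value once energy budgets are expressed as edge capacities. A compact Edmonds-Karp sketch over an invented topology (not the paper's MCF formulation, which handles several commodities at once):

```python
from collections import deque

def max_flow(cap, s, t):
    """Edmonds-Karp max flow on a capacity dict {(u, v): c} with integer c."""
    nodes = {u for edge in cap for u in edge}

    def residual(u, v, flow):
        return cap.get((u, v), 0) - flow.get((u, v), 0) + flow.get((v, u), 0)

    flow, total = {}, 0
    while True:
        parent, queue = {s: None}, deque([s])
        while queue and t not in parent:       # BFS for an augmenting path
            u = queue.popleft()
            for v in nodes:
                if v not in parent and residual(u, v, flow) > 0:
                    parent[v] = u
                    queue.append(v)
        if t not in parent:
            return total                       # no augmenting path remains
        path, v = [], t
        while parent[v] is not None:           # walk the path back to s
            path.append((parent[v], v))
            v = parent[v]
        push = min(residual(u, v, flow) for u, v in path)
        for u, v in path:
            flow[(u, v)] = flow.get((u, v), 0) + push
        total += push

# Invented topology: per-node energy budgets already folded into capacities.
caps = {('s', 'a'): 3, ('s', 'b'): 2, ('a', 't'): 2,
        ('a', 'b'): 1, ('b', 't'): 3}
bits = max_flow(caps, 's', 't')
```

With several source-destination pairs sharing capacity, the single max-flow computation is replaced by the multicommodity LP, which is what yields the optimal value of the metric in the paper.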


Stochastic modelling is a useful way of simulating complex hard-rock aquifers, as hydrological properties (permeability, porosity etc.) can be described using random variables with known statistics. However, very few studies have assessed the influence of topological uncertainty (i.e. the variability of thickness of conductive zones in the aquifer), probably because it is not easy to retrieve accurate statistics of the aquifer geometry, especially in a hard-rock context. In this paper, we assessed the potential of using geophysical surveys to describe the geometry of a hard-rock aquifer in a stochastic modelling framework. The study site was a small experimental watershed in South India, where the aquifer consisted of a clayey to loamy-sandy zone (regolith) underlain by a conductive fissured rock layer (protolith) and the unweathered gneiss (bedrock) at the bottom. The spatial variability of the thickness of the regolith and fissured layers was estimated by electrical resistivity tomography (ERT) profiles, which were performed along a few cross sections in the watershed. For stochastic analysis using Monte Carlo simulation, the generated random layer thickness was made conditional to the available data from the geophysics. In order to simulate steady state flow in the irregular domain with variable geometry, we used an isoparametric finite element method to discretize the flow equation over an unstructured grid with irregular hexahedral elements. The results indicated that the spatial variability of the layer thickness had a significant effect on reducing the simulated effective steady seepage flux, and that using the conditional simulations reduced the uncertainty of the simulated seepage flux. In conclusion, combining information on the aquifer geometry obtained from geophysical surveys with stochastic modelling is a promising methodology to improve the simulation of groundwater flow in complex hard-rock aquifers. (C) 2013 Elsevier B.V. All rights reserved.
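The conditional Monte Carlo idea (random layer-thickness fields that honour the geophysical observations exactly) can be sketched with a deliberately simple scheme: linear interpolation between observation points plus Gaussian noise damped to zero at those points. This is not the study's geostatistical method, and all locations and thicknesses below are invented:

```python
import random

def conditional_thickness(x_obs, t_obs, xs, sd=2.0, n_sims=500, seed=7):
    """Draw thickness profiles that pass exactly through the 'ERT' picks:
    mean = linear interpolation of (x_obs, t_obs); noise amplitude tapers
    to zero at the observation locations, so simulations are conditional."""
    rng = random.Random(seed)
    sims = []
    for _ in range(n_sims):
        profile = []
        for x in xs:
            for (x0, t0), (x1, t1) in zip(zip(x_obs, t_obs),
                                          zip(x_obs[1:], t_obs[1:])):
                if x0 <= x <= x1:
                    w = (x - x0) / (x1 - x0)
                    mean = t0 + w * (t1 - t0)
                    damp = min(x - x0, x1 - x) / ((x1 - x0) / 2)  # 0 at picks
                    profile.append(max(0.0, mean + damp * rng.gauss(0.0, sd)))
                    break
        sims.append(profile)
    return sims

x_obs, t_obs = [0.0, 50.0, 100.0], [10.0, 18.0, 12.0]   # invented ERT picks (m)
xs = [0.0, 25.0, 50.0, 75.0, 100.0]                     # simulation locations
sims = conditional_thickness(x_obs, t_obs, xs)
```

Each simulated geometry would then feed one flow-model run; the spread of the resulting seepage fluxes is the uncertainty that conditioning on geophysics narrows.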


Drastic groundwater resource depletion due to excessive extraction for irrigation is a major concern in many parts of India. In this study, an attempt was made to simulate the groundwater scenario of a catchment using ArcSWAT. Due to the restriction on the maximum initial storage, the deep aquifer component in ArcSWAT was found to be insufficient to represent the excessive groundwater depletion scenario; hence, a separate water balance model was used for simulating the deep aquifer water table. This approach is demonstrated through a case study of the Malaprabha catchment in India. Multi-site rainfall data were used to represent the spatial variation in the catchment climatology. Model parameters were calibrated using observed monthly streamflow data, and the groundwater table simulation was validated using qualitative information available from the field. The streamflow was found to be well simulated by the model, and the simulated groundwater table fluctuation also matches reasonably well with the field observations. From the model simulations, the deep aquifer water table fluctuation was found to be very severe in the semi-arid lower parts of the catchment, with some areas showing around 60 m of depletion over a period of eight years. Copyright (c) 2012 John Wiley & Sons, Ltd.
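A separate deep-aquifer water balance of the kind described can be sketched as a monthly bucket model: storage is carried forward with recharge in and pumping out, and the water-table level follows from storage via the aquifer area and specific yield. The volumes, area and specific yield below are invented, and the study's calibrated model is of course more detailed:

```python
def aquifer_levels(s0, recharge, pumping, area, specific_yield):
    """Monthly bucket model for a deep aquifer.  Storage s (m^3) is carried
    forward; the returned water-table level (m above the aquifer base) is
    s / (area * specific_yield).  Storage is floored at zero."""
    levels, s = [], s0
    for r, p in zip(recharge, pumping):
        s = max(0.0, s + r - p)                     # monthly mass balance
        levels.append(s / (area * specific_yield))  # equivalent head
    return levels

# Invented monthly volumes (m^3) for a block of 1 km^2 and specific yield 0.02.
recharge = [2e6, 1e6, 0.5e6, 0.2e6, 0.1e6, 3e6]
pumping  = [1e6, 1.5e6, 1.5e6, 1.2e6, 1.0e6, 0.8e6]
levels = aquifer_levels(s0=5e6, recharge=recharge, pumping=pumping,
                        area=1e6, specific_yield=0.02)
```

Sustained pumping in excess of recharge produces exactly the monotone drawdown punctuated by monsoon recovery that the field observations in such catchments show.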