54 results for "Test data"
Abstract:
This paper highlights the seismic microzonation carried out for a nuclear power plant site. Nuclear power plants are considered to be among the most important and critical structures, designed to withstand all natural disasters. Seismic microzonation is the process of demarcating a region into individual areas having different levels of various seismic hazards. This helps in identifying regions of high seismic hazard, which is vital for engineering design and land-use planning. The main objective of this paper is to carry out the seismic microzonation of a nuclear power plant site situated on the east coast of South India, based on the spatial distribution of the hazard index value. The hazard index represents the consolidated effect of all major earthquake hazards and hazard-influencing parameters. The present work will provide new directions for assessing the seismic hazards of new power plant sites in the country. The major seismic hazards considered for the evaluation of the hazard index are (1) the intensity of ground shaking at bedrock, (2) site amplification, (3) liquefaction potential and (4) the predominant frequency of the earthquake motion at the surface. The intensity of ground shaking in terms of peak horizontal acceleration (PHA) was estimated for the study area using both deterministic and probabilistic approaches with a logic tree methodology. The site characterization of the study area has been carried out using the multichannel analysis of surface waves test and available borehole data. One-dimensional ground response analysis was carried out at major locations within the study area for evaluating PHA and spectral accelerations at the ground surface. Based on the standard penetration test data, deterministic as well as probabilistic liquefaction hazard analysis has been carried out for the entire study area.
Finally, all the major earthquake hazards estimated above, together with other significant parameters representing the local geology, were integrated using the analytic hierarchy process, and a hazard index map for the study area was prepared. Maps showing the spatial variation of seismic hazards (intensity of ground shaking, liquefaction potential and predominant frequency) and the hazard index are presented in this work.
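The weighting step of the analytic hierarchy process can be sketched in a few lines. The pairwise comparison matrix below is invented for illustration (the abstract does not give the authors' actual judgements): the layer weights come from the principal eigenvector of a reciprocal comparison matrix, and the consistency index checks the judgements.

```python
import numpy as np

# Sketch of the analytic-hierarchy-process weighting step.  The pairwise
# importance values below are assumptions, not the paper's judgements.
A = np.array([
    [1.0,   2.0,   3.0, 4.0],      # ground shaking vs the other layers
    [1/2.0, 1.0,   2.0, 3.0],      # site amplification
    [1/3.0, 1/2.0, 1.0, 2.0],      # liquefaction potential
    [1/4.0, 1/3.0, 1/2.0, 1.0],    # predominant frequency
])
vals, vecs = np.linalg.eig(A)
k = np.argmax(vals.real)           # principal eigenvalue lambda_max
w = np.abs(vecs[:, k].real)
w /= w.sum()                       # normalized layer weights

ci = (vals.real[k] - 4) / 3        # consistency index, (lambda_max - n)/(n - 1)
print(w, ci)
```

A consistency index well below about 0.1 (relative to Saaty's random index) indicates the assumed judgements are acceptably consistent.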
Abstract:
Two of the aims of laboratory one-dimensional consolidation tests are prediction of the end of primary settlement, and determination of the coefficient of consolidation of soils required for the time rate of consolidation analysis from time-compression data. Of the many methods documented in the literature to achieve these aims, Asaoka's method is a simple and useful tool, and yet the most neglected one since its inception in the geotechnical engineering literature more than three decades ago. This paper appraises Asaoka's method, originally proposed for the field prediction of ultimate settlement, from the perspective of laboratory consolidation analysis along with recent developments. It is shown through experimental illustrations that Asaoka's method is simpler than the conventional and popular methods, and makes a satisfactory prediction of both the end of primary compression and the coefficient of consolidation from laboratory one-dimensional consolidation test data.
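As a rough illustration of the construction described above (the synthetic oedometer values and drainage geometry are assumptions, not the paper's data), Asaoka's method fits a straight line to consecutive settlement readings taken at equal time intervals; the fixed point of the line gives the end-of-primary settlement, and the slope gives the coefficient of consolidation.

```python
import numpy as np

# Asaoka's construction: settlements s_i at equal intervals dt satisfy
# s_i = b0 + b1*s_{i-1}; the fixed point s_ult = b0/(1 - b1) is the end of
# primary settlement, and for 1-D consolidation with drainage path H,
# b1 = exp(-pi^2*cv*dt/(4*H^2)) gives cv from the fitted slope.

def asaoka(s, dt, H):
    x, y = s[:-1], s[1:]                     # consecutive settlement pairs
    b1, b0 = np.polyfit(x, y, 1)             # least-squares line y = b1*x + b0
    s_ult = b0 / (1.0 - b1)                  # ultimate (end-of-primary) settlement
    cv = -4.0 * H**2 * np.log(b1) / (np.pi**2 * dt)
    return s_ult, cv

# Synthetic check against the first-term Terzaghi solution (assumed values)
cv_true, H, s_ult_true, dt = 2e-7, 0.01, 2.0, 60.0   # m^2/s, m, mm, s
t = dt * np.arange(1, 30)
Tv = cv_true * t / H**2
s = s_ult_true * (1.0 - (8.0 / np.pi**2) * np.exp(-np.pi**2 * Tv / 4.0))

s_ult_est, cv_est = asaoka(s, dt, H)
print(s_ult_est, cv_est)
```

Because the first-term solution is exactly linear in the (s_{i-1}, s_i) plane, the sketch recovers both quantities; with real laboratory data the early readings deviate from the line and are excluded from the fit.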
Abstract:
In this work, a methodology to achieve ordinary-, medium-, and high-strength self-consolidating concrete (SCC) with and without mineral additions is proposed. The inclusion of Class F fly ash increases the density of SCC but retards the hydration rate, resulting in substantial strength gain only after 28 days. This delayed strength gain due to the use of fly ash has been considered in the mixture design model. The accuracy of the proposed mixture design model is validated with the present test data and mixture and strength data obtained from diverse sources reported in the literature.
Abstract:
Following the recent work of the authors on the development and numerical verification of a new kinematic approach of limit analysis for surface footings on non-associative materials, a practical procedure is proposed to utilize the theory. It is known that both the peak friction angle and the dilation angle depend on the sand density as well as the stress level, which was not the concern of the former work. In the current work, a practical procedure is established to provide a better estimate of the bearing capacity of surface footings on sand, which is often non-associative. This practical procedure is based on the results obtained theoretically and requires the density index and the critical state friction angle of the sand. The proposed practical procedure is a simple iterative computational procedure which relates the density index of the sand, the stress level, the dilation angle, the peak friction angle and, eventually, the bearing capacity. The procedure is described and verified against available footing load test data.
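One plausible ingredient of such an iterative scheme can be sketched as follows. Bolton's empirical correlations are used here purely as an assumed stand-in (the paper's own relations may differ): they link the density index and mean stress to the peak friction and dilation angles for a given critical-state angle.

```python
import math

# Assumed stand-in relations (Bolton-type correlations, triaxial form):
# relative dilatancy index I_R = I_D*(10 - ln p') - 1 with p' in kPa,
# phi_peak - phi_crit = 3*I_R, and phi_peak - phi_crit = 0.8*psi_max.

def peak_angles(I_D, p_kpa, phi_crit_deg):
    I_R = I_D * (10.0 - math.log(p_kpa)) - 1.0   # relative dilatancy index
    I_R = max(I_R, 0.0)                          # no dilation for loose states
    dphi = 3.0 * I_R                             # peak minus critical angle, deg
    psi = dphi / 0.8                             # implied dilation angle, deg
    return phi_crit_deg + dphi, psi

# Assumed example: I_D = 0.7, p' = 100 kPa, critical-state angle 33 deg
phi_p, psi = peak_angles(0.7, 100.0, 33.0)
print(phi_p, psi)
```

In an iterative bearing-capacity calculation, the mean stress under the footing would be re-estimated from the current capacity and these angles updated until convergence.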
Abstract:
Restricted Boltzmann Machines (RBMs) can be used either as classifiers or as generative models. The quality of a generative RBM is measured through the average log-likelihood on test data. Due to the high computational complexity of evaluating the partition function, exact calculation of the test log-likelihood is very difficult. In recent years, several estimation methods have been suggested for the approximate computation of the test log-likelihood. In this paper we present an empirical comparison of the main estimation methods, namely, the AIS algorithm for estimating the partition function, the CSL method for directly estimating the log-likelihood, and the RAISE algorithm that combines these two ideas.
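For a toy RBM the quantity being estimated can be computed exactly, which is what the sketch below does (sizes and random weights are assumptions): the average test log-likelihood is −F(v) − log Z, where the partition function Z is obtained here by brute-force enumeration, the step that AIS, CSL and RAISE approximate at realistic scale.

```python
import itertools
import numpy as np

# Exact average test log-likelihood of a tiny binary RBM.
# Free energy: F(v) = -b.v - sum_j log(1 + exp(c_j + W[:,j].v)).
rng = np.random.default_rng(0)
nv, nh = 6, 4                              # assumed toy sizes
W = 0.1 * rng.standard_normal((nv, nh))
b = 0.1 * rng.standard_normal(nv)          # visible biases
c = 0.1 * rng.standard_normal(nh)          # hidden biases

def free_energy(v):
    return -v @ b - np.sum(np.log1p(np.exp(c + v @ W)))

# Partition function by enumerating all 2^nv visible configurations
all_v = np.array(list(itertools.product([0, 1], repeat=nv)), dtype=float)
logZ = np.logaddexp.reduce([-free_energy(v) for v in all_v])

test_data = all_v[rng.choice(len(all_v), size=10)]   # stand-in test set
avg_ll = np.mean([-free_energy(v) - logZ for v in test_data])
print(avg_ll)
```

At realistic sizes (hundreds of visible units) the 2^nv enumeration is impossible, which is exactly why the estimators compared in the paper are needed.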
Abstract:
The problem of structural system identification when measurements originate from multiple tests and multiple sensors is considered. An offline solution to this problem using bootstrap particle filtering is proposed. The central idea of the proposed method is the introduction of a dummy independent variable that allows for simultaneous assimilation of multiple measurements in a sequential manner. The method can treat linear/nonlinear structural models and allows for measurements on strains and displacements under static/dynamic loads. Illustrative examples consider measurement data from numerical models and also from laboratory experiments. The results from the proposed method are compared with those from a Kalman filter-based approach and the superior performance of the proposed method is demonstrated. Copyright (C) 2009 John Wiley & Sons, Ltd.
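A minimal sketch of the central idea, under toy assumptions (a single spring stiffness identified from noisy static displacement readings; the paper's models and sensors are far richer): successive measurements are assimilated sequentially by a bootstrap particle filter, with the measurement index playing the role of the dummy independent variable.

```python
import numpy as np

# Bootstrap particle filter identifying a spring stiffness k from noisy
# static readings y = F/k + noise.  All values below are assumptions.
rng = np.random.default_rng(1)
k_true, F, sigma = 50.0, 100.0, 0.05
ys = F / k_true + sigma * rng.standard_normal(20)    # synthetic measurements

n = 2000
particles = rng.uniform(10.0, 100.0, n)              # prior draws of k
for y in ys:                                         # dummy-variable sweep
    particles += 0.2 * rng.standard_normal(n)        # small jitter (roughening)
    w = np.exp(-0.5 * ((y - F / particles) / sigma) ** 2)
    w /= w.sum()                                     # normalized importance weights
    particles = particles[rng.choice(n, size=n, p=w)]  # multinomial resampling

k_est = particles.mean()
print(k_est)
```

The jitter step keeps the static parameter from degenerating onto a few particles; without it, resampling a time-invariant state collapses the ensemble.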
Abstract:
Careful study of various aspects presented in the note reveals basic fallacies in the concept and in the final conclusions. The Authors claim to have presented a new method of determining C_v; however, the note does not contain a new method. In fact, the proposed method is an attempt to generate settlement versus time data using only two values of (t, δ). The Authors have used the rectangular hyperbola method to determine C_v from the predicted δ-t data. In this context, the title of the paper itself is misleading and questionable. The Authors have compared predicted C_v values with measured values, both of them being results of the rectangular hyperbola method.
Abstract:
A new clustering technique, based on the concept of immediate neighbourhood, with a novel capability to self-learn the number of clusters expected in an unsupervised environment, has been developed. The method compares favourably with other clustering schemes based on distance measures, both in terms of conceptual innovations and computational economy. Test implementation of the scheme using C-1 flight line training sample data in a simulated unsupervised mode has brought out the efficacy of the technique. The technique can easily be implemented as a front end to established pattern classification systems with supervised learning capabilities to derive unified learning systems capable of operating in both supervised and unsupervised environments. This makes the technique an attractive proposition in the context of remotely sensed earth resources data analysis, wherein it is essential to have such a unified learning system capability.
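The abstract does not specify the neighbourhood rule, so the following is only a generic leader-style sketch of the idea of self-learning the cluster count: each point joins the nearest existing cluster if it lies within a distance threshold, otherwise it founds a new cluster, so the number of clusters emerges from the data rather than being preset.

```python
import numpy as np

# Generic leader clustering sketch (not the paper's algorithm): the number
# of clusters is discovered by the threshold tau, an assumed parameter.
def leader_cluster(X, tau):
    leaders, labels = [X[0]], [0]
    for x in X[1:]:
        d = [np.linalg.norm(x - L) for L in leaders]
        j = int(np.argmin(d))
        if d[j] <= tau:
            labels.append(j)              # join nearest existing cluster
        else:
            leaders.append(x)             # found a new cluster
            labels.append(len(leaders) - 1)
    return np.array(labels), np.array(leaders)

# Two well-separated synthetic blobs (assumed toy data)
rng = np.random.default_rng(4)
X = np.vstack([rng.normal(0, 0.2, (15, 2)), rng.normal(3, 0.2, (15, 2))])
labels, leaders = leader_cluster(X, tau=1.0)
print(len(leaders))
```

A scheme of this family runs in a single pass, which matches the computational-economy claim, though the result depends on the presentation order of the data.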
Abstract:
Statistical learning algorithms provide a viable framework for geotechnical engineering modeling. This paper describes two statistical learning algorithms applied to site characterization modeling based on standard penetration test (SPT) data. More than 2700 field SPT values (N) have been collected from 766 boreholes spread over an area of 220 sq km in Bangalore. The N values have been corrected (N_c) for different parameters such as overburden stress, size of borehole, type of sampler, length of connecting rod, etc. In the three-dimensional site characterization model, the function N_c = N_c(X, Y, Z), where X, Y and Z are the coordinates of a point corresponding to an N_c value, is to be approximated, so that the N_c value at any half-space point in Bangalore can be determined. The first algorithm uses the least-square support vector machine (LSSVM), which is related to a ridge regression type of support vector machine. The second algorithm uses the relevance vector machine (RVM), which combines the strengths of kernel-based methods and Bayesian theory to establish the relationships between a set of input vectors and a desired output. The paper also presents a comparative study between the developed LSSVM and RVM models for site characterization. Copyright (C) 2009 John Wiley & Sons, Ltd.
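A hedged sketch of the first algorithm's core computation: in its dual form, LSSVM regression reduces to solving a linear system over a kernel matrix (a simplified zero-bias form is shown). The coordinates, N_c values, kernel width and regularization constant below are all invented for illustration; the paper fits about 2700 real SPT records.

```python
import numpy as np

# Zero-bias LSSVM / kernel ridge sketch for N_c = N_c(X, Y, Z).
rng = np.random.default_rng(3)
X = rng.uniform(0, 1, (30, 3))                     # (x, y, z) site coordinates
y = 10 + 20 * X[:, 2] + rng.normal(0, 0.5, 30)     # synthetic N_c values

def rbf(A, B, s=0.5):                              # Gaussian (RBF) kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s * s))

gamma = 100.0                                      # regularization constant
K = rbf(X, X)
alpha = np.linalg.solve(K + np.eye(30) / gamma, y) # dual coefficients

def predict(q):                                    # N_c at an unsampled point
    return rbf(q, X) @ alpha

q = np.array([[0.5, 0.5, 0.5]])
print(predict(q))
```

The single linear solve is what distinguishes LSSVM from the quadratic program of a standard support vector machine; the RVM would instead place a Bayesian sparsity prior on the coefficients.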
Abstract:
The performance-based liquefaction potential analysis was carried out in the present study to estimate the liquefaction return period for Bangalore, India, through a probabilistic approach. In this approach, the entire range of peak ground acceleration (PGA) and earthquake magnitudes was used in the evaluation of the liquefaction return period. The seismic hazard analysis for the study area was carried out using a probabilistic approach to evaluate the peak horizontal acceleration at bedrock level. Based on the results of the multichannel analysis of surface waves, it was found that the study area belongs to site class D. The PGA values for the study area were evaluated for site class D by considering the local site effects. The soil resistance of the study area was characterized using the standard penetration test (SPT) values obtained from 450 boreholes. These SPT data, along with the PGA values obtained from the probabilistic seismic hazard analysis, were used to evaluate the liquefaction return period for the study area. Contour plots showing the spatial variation of the factor of safety against liquefaction and of the corrected SPT values required for preventing liquefaction for a return period of 475 years at depths of 3 and 6 m are presented in this paper. The entire process of liquefaction potential evaluation, starting from the collection of earthquake data, through the identification of seismic sources and the evaluation of the seismic hazard, to the assessment of the liquefaction return period, was carried out within a probabilistic framework.
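The deterministic core inside such an analysis can be sketched briefly. The simplified Seed-Idriss procedure is used here as a stand-in for the paper's fuller probabilistic treatment, and the soil profile, PGA and CRR values are assumptions: the factor of safety is FS = CRR / CSR, with the cyclic stress ratio CSR = 0.65 (a_max/g)(σv/σ'v) rd.

```python
# Simplified Seed-Idriss sketch (assumed inputs, not the paper's data):
# factor of safety against liquefaction at a given depth.

def csr(a_max_g, sigma_v, sigma_v_eff, z):
    # stress reduction coefficient rd (common piecewise approximation)
    rd = 1.0 - 0.00765 * z if z <= 9.15 else 1.174 - 0.0267 * z
    return 0.65 * a_max_g * (sigma_v / sigma_v_eff) * rd

# Assumed example: depth 6 m, unit weight 18 kN/m^3, water table at 2 m
z = 6.0
sigma_v = 18.0 * z                          # total vertical stress, kPa
sigma_v_eff = sigma_v - 9.81 * (z - 2.0)    # effective vertical stress, kPa
CSR = csr(0.15, sigma_v, sigma_v_eff, z)    # assumed a_max = 0.15 g
CRR = 0.18                                  # assumed from an SPT-based chart
FS = CRR / CSR
print(CSR, FS)
```

In the performance-based version, this calculation is repeated over the full PGA-magnitude hazard curve to yield a return period rather than a single FS value.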
Abstract:
A low strain shear modulus plays a fundamental role in the estimation of site response parameters. In this study an attempt has been made to develop relationships between standard penetration test (SPT) N values and the low strain shear modulus (G_max). For this purpose, field SPT and multichannel analysis of surface waves data from 38 locations in Bangalore, India, have been used, which were also used for a seismic microzonation project. The in situ density of each soil layer was evaluated using undisturbed soil samples from the boreholes. Shear wave velocity (V_s) profiles with depth were obtained for the same locations or close to the boreholes. The values of the low strain shear modulus have been calculated using the measured V_s and soil density. About 215 pairs of SPT N and G_max values are used for regression analysis. The differences between fitted regression relations using measured and corrected values were analyzed. It is found that uncorrected values of N and modulus give the best fit, with a high regression coefficient, when compared to corrected N and corrected modulus values. This study shows better correlation between measured values of N and G_max when compared to overburden stress corrected values of N and G_max.
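The modulus calculation itself is direct: G_max = ρ V_s², with ρ the in-situ density and V_s the shear wave velocity from MASW. The numbers below are assumed for illustration, not taken from the paper's 215 data pairs.

```python
# G_max from measured shear wave velocity and in-situ density.
def g_max(rho, vs):
    return rho * vs ** 2          # Pa, for rho in kg/m^3 and vs in m/s

rho = 1850.0                      # kg/m^3 (assumed in-situ density)
vs = 250.0                        # m/s (assumed MASW shear wave velocity)
G = g_max(rho, vs) / 1e6          # convert to MPa
print(G)
```

Each such G_max value is then paired with the SPT N value at the same depth to build the regression data set.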
Abstract:
The displacement between the ridges situated outside the filleted test section of an axially loaded unnotched specimen is computed from the axial load and shape of the specimen and compared with extensometer deflection data obtained from experiments. The effect of prestrain on the extensometer deflection versus specimen strain curve has been studied experimentally and analytically. An analytical study shows that an increase in the slope of the stress-strain curve in the inelastic region increases the slope of the corresponding computed extensometer deflection versus specimen strain curve. A mathematical model has been developed which uses a modified length ℓ_ef in place of the actual length of the uniform diameter test section of the specimen. This model predicts the extensometer deflection within 5% of the corresponding experimental value. This method has been successfully used by the authors to evolve an iterative procedure for predicting the cyclic specimen strain in axial fatigue tests on unnotched specimens.
Abstract:
A general procedure for arriving at 3-D models of disulphide-rich polypeptide systems based on covalent cross-link constraints has been developed. The procedure, which has been coded as a computer program, RANMOD, assigns a large number of random, permitted backbone conformations to the polypeptide and identifies stereochemically acceptable structures as plausible models based on strainless disulphide bridge modelling. Disulphide bond modelling is performed using the procedure MODIP developed earlier, in connection with the choice of suitable sites where disulphide bonds could be engineered in proteins (Sowdhamini, R., Srinivasan, N., Shoichet, B., Santi, D.V., Ramakrishnan, C. and Balaram, P. (1989) Protein Engng, 3, 95-103). The method RANMOD has been tested on small disulphide loops and the structures compared against preferred backbone conformations derived from an analysis of a putative disulphide subdatabase and from model calculations. RANMOD has been applied to disulphide-rich peptides and found to give rise to several stereochemically acceptable structures. The results obtained on the modelling of two test cases, α-conotoxin GI and endothelin I, are presented. Available NMR data suggest that such small systems exhibit conformational heterogeneity in solution. Hence, this approach for obtaining several distinct models is particularly attractive for the study of conformational excursions.
Abstract:
Delineation of homogeneous precipitation regions (regionalization) is necessary for investigating the frequency and spatial distribution of meteorological droughts. The conventional methods of regionalization use statistics of precipitation as attributes to establish homogeneous regions. Therefore, they cannot be used to form regions in ungauged areas, and they may not be useful for forming meaningful regions in areas having sparse rain gauge density. Further, validation of the regions for homogeneity in precipitation is not possible, since using the precipitation statistics both to form regions and subsequently to test the regional homogeneity is not appropriate. To alleviate this problem, an approach based on fuzzy cluster analysis is presented. It allows delineation of homogeneous precipitation regions in data-sparse areas using large-scale atmospheric variables (LSAV), which influence precipitation in the study area, as attributes. The LSAV, location parameters (latitude, longitude and altitude) and the seasonality of precipitation are suggested as features for regionalization. The approach allows independent validation of the identified regions for homogeneity using statistics computed from the observed precipitation. Further, it has the ability to form regions even in ungauged areas, owing to the use of attributes that can be reliably estimated even when no at-site precipitation data are available. The approach was applied to delineate homogeneous annual rainfall regions in India, and its effectiveness is illustrated by comparing the results with those obtained using rainfall statistics, regionalization based on hard cluster analysis, and meteorological sub-divisions in India. (C) 2011 Elsevier B.V. All rights reserved.
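A minimal fuzzy c-means sketch of the clustering engine behind such a regionalization (toy one-dimensional attribute values are assumed; the paper clusters multi-dimensional LSAV and location features): memberships u_ik in [0, 1] replace hard assignments, so a site can partially belong to several precipitation regions.

```python
import numpy as np

# Fuzzy c-means with fuzzifier m = 2 on a 1-D toy attribute.
rng = np.random.default_rng(2)
X = np.concatenate([rng.normal(0, 0.3, 20), rng.normal(5, 0.3, 20)])[:, None]
centers = np.array([[1.0], [4.0]])                 # assumed initial centers

for _ in range(30):                                # alternating FCM updates
    d = np.abs(X - centers.T) + 1e-12              # point-to-center distances
    U = d ** -2 / np.sum(d ** -2, axis=1, keepdims=True)   # membership update
    Um = U ** 2
    centers = (Um.T @ X) / Um.sum(axis=0)[:, None] # membership-weighted centroids

ctr = np.sort(centers.ravel())
print(ctr)
```

Unlike hard clustering, the membership matrix U lets sites near regional boundaries be flagged rather than forced into a single region, which is the property exploited in the comparison with hard cluster analysis.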