69 results for Survey Methodology
Abstract:
The efficiency of track foundation material gradually decreases due to insufficient lateral confinement, ballast fouling, and loss of shear strength of the subsurface soil under cyclic loading. This paper presents a characterization of the rail track subsurface to identify ballast fouling and the shear wave velocity of subsurface layers using seismic surveys. The multi-channel analysis of surface waves (MASW) seismic surface wave method has been carried out on a model track and a field track to determine the shear wave velocity of clean and fouled ballast and of the track subsurface. The shear wave velocity (SWV) of fouled ballast increases with increasing fouling percentage, reaches a maximum value, and then decreases. This behavior is similar to the typical compaction curve of soil and is used to define the optimum and critical fouling percentages (OFP and CFP). A critical fouling percentage of 15% is observed for coal-fouled ballast and 25% for clayey-sand-fouled ballast; coal-fouled ballast reaches the OFP and CFP before clayey-sand-fouled ballast. Fouling of ballast reduces the voids in the ballast and thereby decreases drainage. A combined plot of permeability and SWV against fouling percentage shows that beyond the critical fouling point the drainage condition of fouled ballast falls below the acceptable limit. Shear wave velocities were also measured at selected locations on the Wollongong field track by carrying out a similar seismic survey. In-situ samples were collected and the degrees of fouling measured. The field SWV values are higher than the model track SWV values for the same degree of fouling, which might be due to sleeper confinement. This article also highlights the ballast gradations widely followed in different countries and compares the Indian ballast gradation with international gradation standards. Indian ballast contains coarser particles than that of other countries, and the upper limit of the Indian gradation curve matches the lower limit of the American and Australian gradation curves. The ballast gradation followed by Indian Railways is poorly graded and more favorable for drainage. Indian ballast engineering needs extensive research to improve present track conditions.
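As a hedged illustration of the compaction-curve-like trend described above (the data points and the quadratic model below are invented placeholders, not the paper's measurements), a simple fit of SWV against fouling percentage can locate the peak that defines the optimum fouling percentage:

    # Hypothetical sketch: locating an optimum fouling percentage (OFP) from
    # shear wave velocity (SWV) readings, assuming a compaction-curve-like
    # quadratic trend. All values below are illustrative only.
    import numpy as np

    fouling_pct = np.array([0, 5, 10, 15, 20, 25, 30])    # degree of fouling (%)
    swv = np.array([150, 170, 185, 192, 188, 178, 165])   # SWV in m/s (made up)

    # Fit SWV = a*f^2 + b*f + c and take the vertex of the parabola as the OFP.
    a, b, c = np.polyfit(fouling_pct, swv, 2)
    print(f"Estimated optimum fouling percentage: {-b / (2 * a):.1f} %")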
Abstract:
We present the radio-optical imaging of ATLBS, a sensitive radio survey (Subrahmanyan et al. 2010). The primary aim of the ATLBS survey is to image the low-power radio sources which form the bulk of the radio source population out to moderately high redshifts (z ~ 1.0). The accompanying multiband optical and near-infrared observations provide information about the hosts and environments of the radio sources. We give here details of the imaging of the radio and optical data for the ATLBS survey.
Abstract:
A reliable method for service life estimation of a structural element is a prerequisite for service life design. A new methodology for durability-based service life estimation of reinforced concrete flexural elements with respect to chloride-induced corrosion of reinforcement is proposed. The methodology takes into consideration the fuzzy and random uncertainties associated with the variables involved in service life estimation by using a hybrid method combining the vertex method of fuzzy set theory with the Monte Carlo simulation technique. It is also shown how to determine the bounds for the characteristic value of failure probability from the resulting fuzzy set for failure probability with minimal computational effort. Using the methodology, the bounds for the characteristic value of failure probability for a reinforced concrete T-beam bridge girder have been determined. The service life of the structural element is determined by comparing the upper bound of the characteristic value of failure probability with the target failure probability. The methodology will be useful for durability-based service life design and also for making decisions regarding in-service inspections.
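The hybrid vertex/Monte Carlo idea can be sketched as follows; this is a minimal illustration under assumed distributions and a toy limit state g = R - S, not the paper's chloride-induced corrosion model:

    # Illustrative only: bounding a failure probability by running Monte Carlo
    # simulation at every vertex (interval endpoint combination) of the fuzzy
    # parameters at a chosen alpha-cut. All parameter values are hypothetical.
    import itertools
    import numpy as np

    rng = np.random.default_rng(0)
    N = 100_000

    mu_R_interval = (95.0, 105.0)    # alpha-cut interval for mean resistance
    cov_S_interval = (0.15, 0.25)    # alpha-cut interval for load-effect C.O.V.

    failure_probs = []
    for mu_R, cov_S in itertools.product(mu_R_interval, cov_S_interval):
        R = rng.normal(mu_R, 10.0, N)             # random resistance
        S = rng.normal(70.0, cov_S * 70.0, N)     # random load effect
        failure_probs.append(np.mean(R - S < 0))  # P(g = R - S < 0)

    print("Failure probability bounds:", min(failure_probs), max(failure_probs))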
Abstract:
Data mining is concerned with analysing large volumes of (often unstructured) data to automatically discover interesting regularities or relationships which in turn lead to a better understanding of the underlying processes. The field of temporal data mining is concerned with such analysis in the case of ordered data streams with temporal interdependencies. Over the last decade many interesting techniques of temporal data mining were proposed and shown to be useful in many applications. Since temporal data mining brings together techniques from different fields such as statistics, machine learning and databases, the literature is scattered among many different sources. In this article, we present an overview of techniques of temporal data mining. We mainly concentrate on algorithms for pattern discovery in sequential data streams. We also describe some recent results regarding statistical analysis of pattern discovery methods.
Abstract:
This paper presents a method for minimizing the sum of the squares of voltage deviations by a least-squares minimization technique, and thus improving the voltage profile in a given system by adjusting control variables such as the tap positions of transformers, the reactive power injection of VAR sources, and generator excitations. The control variables and dependent variables are related by a matrix J whose elements are computed as the sensitivity matrix. Linear programming is used to calculate voltage increments that minimize transmission losses. The active and reactive power optimization sub-problems are solved separately, taking advantage of the loose coupling between the two problems. The proposed algorithm is applied to the IEEE 14- and 30-bus systems and numerical results are presented. The method is computationally fast and promises to be suitable for implementation in real-time dispatch centres.
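As a rough sketch of the least-squares step described above (the sensitivity matrix and the desired voltage corrections below are hypothetical placeholders, not values from the paper), the control increments follow from an ordinary least-squares solve of J * du ≈ dV:

    # Minimal sketch: control adjustments that minimize the sum of squared
    # voltage deviations through a sensitivity matrix J. Values are illustrative.
    import numpy as np

    # Sensitivity of bus voltage deviations to control variables (tap positions,
    # VAR injections, generator excitations); made-up numbers.
    J = np.array([[0.8, 0.2, 0.1],
                  [0.3, 0.9, 0.2],
                  [0.1, 0.4, 0.7],
                  [0.2, 0.1, 0.6]])

    dV_desired = np.array([0.03, -0.02, 0.01, 0.015])  # corrections in p.u.

    du, _, _, _ = np.linalg.lstsq(J, dV_desired, rcond=None)
    print("Control variable adjustments:", du)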
Abstract:
The present work proposes a new sensing methodology, which uses Fiber Bragg Gratings (FBGs) to measure in vivo the surface strain and strain rate on calf muscles while certain exercises are performed. Two simple exercises, namely ankle dorsi-flexion and ankle plantar-flexion, have been considered, and the strain induced on the medial head of the gastrocnemius muscle while performing these exercises has been monitored. The real-time strain generated has been recorded, and the results are compared with those obtained using a commercial Color Doppler Ultrasound (CDU) system. It is found that the proposed sensing methodology is promising for surface strain measurements in biomechanical applications.
Abstract:
Thermoacoustic engines are energy conversion devices that convert thermal energy from a high-temperature heat source into useful work in the form of acoustic power while diverting waste heat to a cold sink; they can be used to drive cryocoolers and refrigerators. Though the devices are simple to fabricate, it is very challenging to design an optimized thermoacoustic prime mover with better performance. The study presented here aims to optimize the thermoacoustic prime mover using response surface methodology. The influence of the stack position and its length, the resonator length, the plate thickness, and the plate spacing on the pressure amplitude and frequency of a thermoacoustic prime mover is investigated in this study. For the desired frequency of 207 Hz, experiments were conducted with the optimized values of the above parameters suggested by the response surface methodology, and simulations were also performed using DeltaEC. The experimental and simulation results showed similar output performance.
Abstract:
Given the increasing cost of designing and building new highway pavements, reliability analysis has become vital to ensure that a given pavement performs as expected in the field. Recognizing the importance of failure analysis to safety, reliability, performance, and economy, back analysis has been employed in various engineering applications to evaluate the inherent uncertainties of the design and analysis. The probabilistic back analysis method formulated on Bayes' theorem and solved using the Markov chain Monte Carlo simulation method with a Metropolis-Hastings algorithm has proved to be highly efficient in addressing this issue. It is also quite flexible and is applicable to any type of prior information. In this paper, this method has been used to back-analyze the parameters that influence the pavement life and to consider the uncertainty of the mechanistic-empirical pavement design model. The load-induced pavement structural responses (e.g., stresses, strains, and deflections) used to predict the pavement life are estimated using a response surface methodology model developed from the results of linear elastic analysis. The failure criterion adopted for the analysis was based on the factor of safety (FOS), and the study was carried out for different sample sizes and jumping distributions to estimate the most robust posterior statistics. From the posterior statistics of the case considered, it was observed that after approximately 150 million standard axle load repetitions the mean values of the pavement properties decrease as expected, with a significant decrease in the values of the elastic moduli of the expected layers. An analysis of the posterior statistics indicated that the parameters that contribute significantly to pavement failure were the moduli of the base and surface layers, which is consistent with the findings of other studies. After the back analysis, the mean value of the base modulus shows a significant decrease of 15.8% and that of the surface layer modulus a decrease of 3.12%. The usefulness of the back analysis methodology is further highlighted by estimating the design parameters for specified values of the factor of safety. The analysis revealed that for the pavement section considered, reliabilities of 89% and 94% can be achieved by adopting FOS values of 1.5 and 2, respectively. The methodology proposed can therefore be effectively used to identify the parameters that are critical to pavement failure in the design of pavements for specified levels of reliability. DOI: 10.1061/(ASCE)TE.1943-5436.0000455.
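A heavily simplified sketch of the Metropolis-Hastings back analysis idea is given below; the prior, the surrogate response model, and the observed value are invented placeholders rather than the paper's calibrated mechanistic-empirical model:

    # Hedged sketch: Metropolis-Hastings sampling of the posterior of a single
    # pavement parameter (e.g., base layer modulus, MPa). Hypothetical values.
    import numpy as np

    rng = np.random.default_rng(1)

    def log_prior(E):                      # wide lognormal-style prior on modulus
        return -0.5 * ((np.log(E) - np.log(300.0)) / 0.3) ** 2 if E > 0 else -np.inf

    def log_likelihood(E):                 # misfit to an "observed" response
        predicted = 1.0e5 / E              # toy stand-in for a response surface
        observed = 400.0
        return -0.5 * ((predicted - observed) / 20.0) ** 2

    samples, E = [], 300.0
    for _ in range(20_000):
        E_prop = E + rng.normal(0.0, 15.0)             # jumping distribution
        log_alpha = (log_prior(E_prop) + log_likelihood(E_prop)
                     - log_prior(E) - log_likelihood(E))
        if np.log(rng.uniform()) < log_alpha:          # accept/reject step
            E = E_prop
        samples.append(E)

    print("Posterior mean of base modulus:", np.mean(samples[5000:]))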
Abstract:
There are many applications, such as software for processing customer records in telecom, patient records in hospitals, and email-processing software accessing a single email in a mailbox, which require access to a single record in a database consisting of millions of records. A basic feature of these applications is that they need to access data sets which are very large but simple. Cloud computing provides the computing requirements for these kinds of new-generation applications involving very large data sets, which cannot be handled efficiently using traditional computing infrastructure. In this paper, we describe the storage services provided by three well-known cloud service providers and compare their features with a view to characterizing the storage requirements of very large data sets; we hope that this will act as a catalyst for the design of storage services for very large data set requirements in future. We also give a brief overview of other kinds of storage that have emerged in the recent past for cloud computing.
Abstract:
A thermoacoustic refrigerator (TAR) converts acoustic waves into heat without any moving parts. The study presented here aims to optimize parameters such as the frequency, stack position, stack length, and plate spacing involved in designing a TAR using the Response Surface Methodology (RSM). A mathematical model is developed using the RSM based on the results obtained from DeltaEC software. For a desired temperature difference of 40 K, the optimized parameters suggested by the RSM are a frequency of 254 Hz, a stack position of 0.108 m, a stack length of 0.08 m, and a plate spacing of 0.0005 m. Experiments were conducted with the optimized parameters, and simulations were performed using the Design Environment for Low-amplitude ThermoAcoustic Energy Conversion (DeltaEC), which showed similar results.
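The response-surface step can be illustrated with a hedged sketch: fit a second-order polynomial to simulated responses over two design factors and locate its stationary point. The design points and responses below are invented, not DeltaEC output:

    # Illustrative response-surface fit in two factors (stack position, stack
    # length) with a second-order model; data are hypothetical placeholders.
    import numpy as np

    x1 = np.array([0.08, 0.10, 0.12, 0.08, 0.10, 0.12, 0.08, 0.10, 0.12])  # stack position (m)
    x2 = np.array([0.06, 0.06, 0.06, 0.08, 0.08, 0.08, 0.10, 0.10, 0.10])  # stack length (m)
    y  = np.array([30.0, 34.0, 31.0, 33.0, 38.0, 34.0, 31.0, 35.0, 32.0])  # response, delta T (K)

    # Model: y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
    X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    b = np.linalg.lstsq(X, y, rcond=None)[0]

    # Stationary point where the gradient of the fitted surface vanishes.
    A = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
    opt = np.linalg.solve(A, -b[1:3])
    print("Estimated optimum (stack position, stack length):", opt)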
Abstract:
In social choice theory, preference aggregation refers to computing an aggregate preference over a set of alternatives given individual preferences of all the agents. In real-world scenarios, it may not be feasible to gather preferences from all the agents. Moreover, determining the aggregate preference is computationally intensive. In this paper, we show that the aggregate preference of the agents in a social network can be computed efficiently and with sufficient accuracy using preferences elicited from a small subset of critical nodes in the network. Our methodology uses a model developed based on real-world data obtained using a survey on human subjects, and exploits network structure and homophily of relationships. Our approach guarantees good performance for aggregation rules that satisfy a property which we call expected weak insensitivity. We demonstrate empirically that many practically relevant aggregation rules satisfy this property. We also show that two natural objective functions in this context satisfy certain properties, which makes our methodology attractive for scalable preference aggregation over large scale social networks. We conclude that our approach is superior to random polling while aggregating preferences related to individualistic metrics, whereas random polling is acceptable in the case of social metrics.
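As a toy illustration only (the paper's critical-node selection and the expected weak insensitivity property are not reproduced here), aggregating rankings elicited from a small subset of agents can be done with a simple positional rule such as the Borda count:

    # Toy example: Borda aggregation of rankings from a few sampled agents.
    # Alternatives and rankings are made up for illustration.
    from collections import defaultdict

    alternatives = ["a", "b", "c", "d"]
    subset_preferences = [          # rankings elicited from a few chosen agents
        ["a", "b", "c", "d"],
        ["b", "a", "d", "c"],
        ["a", "c", "b", "d"],
    ]

    scores = defaultdict(int)
    for ranking in subset_preferences:
        for position, alt in enumerate(ranking):
            scores[alt] += len(alternatives) - 1 - position   # Borda points

    aggregate = sorted(alternatives, key=lambda alt: -scores[alt])
    print("Estimated aggregate preference:", aggregate)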
Abstract:
We present a study of the environments of extended radio sources in the Australia Telescope Low-Brightness Survey (ATLBS). The radio sources were selected from the ATLBS Extended Source Sample, which is a well-defined sample containing the most extended radio sources in the ATLBS sky survey regions. The environments were analysed using observations with the 4-m Cerro Tololo Inter-American Observatory Blanco telescope carried out for the ATLBS fields in the Sloan Digital Sky Survey r′ band. We have estimated the properties of the environments using smoothed density maps derived from galaxy catalogues constructed from these optical imaging data. The angular distribution of galaxy density relative to the axes of the radio sources has been quantified by defining anisotropy parameters that are estimated using a new method presented here. Examining the anisotropy parameters for a subsample of extended double radio sources that includes all sources with pronounced asymmetry in lobe extents, we find good evidence that environmental anisotropy is the dominant cause of lobe asymmetry, in that higher galaxy density occurs almost always on the side of the shorter lobe; this validates the usefulness of the method proposed and adopted here. The environmental anisotropy parameters have been used to examine and compare the environments of Fanaroff-Riley Class I (FRI) and Fanaroff-Riley Class II (FRII) radio sources in two redshift regimes (z < 0.5 and z > 0.5). Wide-angle tail sources and head-tail sources lie in the most overdense environments. The head-tail source environments (for the HT sources in our sample) display dipolar anisotropy, in that higher galaxy density appears to lie in the direction of the tails. Excluding the head-tail and wide-angle tail sources, subsamples of FRI and FRII sources from the ATLBS appear to lie in similar, moderately overdense environments, with no evidence for redshift evolution in the regimes studied herein.
Abstract:
Identification and mapping of crevasses in glaciated regions is important for safe movement. However, the remote and rugged glacial terrain of the Himalaya poses great challenges for field data collection. In the present study, crevasse signatures were collected from the Siachen and Samudra Tapu glaciers in the Indian Himalaya using ground-penetrating radar (GPR). The surveys were conducted using 250 MHz antennas in ground mode and 350 MHz antennas in airborne mode. The identified signatures of open and hidden crevasses in the GPR profiles collected in ground mode were validated by ground truthing. The crevasse zones and buried boulder areas in a glacier were identified using a combination of airborne GPR profiles and SAR data, and these were validated with high-resolution optical satellite imagery (Cartosat-1) and the Survey of India map sheet. Using the multi-sensor data, a crevasse map for Samudra Tapu glacier was prepared. The present methodology can also be used for mapping crevasse zones in other glaciers of the Himalaya.
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Seismic hazard analysis based on the probabilistic method clearly shows that a particular acceleration value receives contributions from different magnitudes with varying probabilities. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for the liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard through to the performance-based evaluation of the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and a comparison of the results obtained from the two methods is presented. For an area of 220 km2 in Bangalore city, the site class was assessed based on a large number of borehole data and 58 multi-channel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from the PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was carried out based on 450 borehole data sets obtained in the study area. The results of the CPT-based analysis match well with those obtained from the similar analysis with SPT data.
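A hedged sketch of the performance-based accumulation over the full range of shaking and magnitude is given below; the hazard contributions and the fragility-style conditional probability are invented placeholders, not results of the PSHA or the SPT/CPT procedures used in the study:

    # Illustrative only: annual rate of liquefaction accumulated by weighting a
    # conditional liquefaction probability with the hazard contribution of each
    # (PGA, magnitude) pair. All numbers below are hypothetical.
    import numpy as np

    pga_bins = np.array([0.05, 0.10, 0.15, 0.20])          # peak ground acceleration (g)
    mag_bins = np.array([5.5, 6.5, 7.5])                    # earthquake magnitude
    rate = np.array([[4.0e-3, 1.0e-3, 2.0e-4],              # annual rate for each
                     [1.5e-3, 6.0e-4, 1.0e-4],              # (pga_bins[i], mag_bins[j])
                     [5.0e-4, 3.0e-4, 8.0e-5],
                     [2.0e-4, 1.5e-4, 5.0e-5]])

    def p_liquefaction(pga, mag):
        # Toy fragility: stronger shaking and larger magnitude raise the probability.
        fs = 0.12 / (pga * (mag / 7.5))                     # stand-in factor of safety
        return 1.0 / (1.0 + fs**4)

    annual_rate = sum(rate[i, j] * p_liquefaction(a, m)
                      for i, a in enumerate(pga_bins)
                      for j, m in enumerate(mag_bins))
    print("Return period of liquefaction (years):", 1.0 / annual_rate)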