29 results for GROUND REFERENCE DATA
Abstract:
A radio frequency (RF) based digital data transmission scheme using 8-channel encoder/decoder ICs is proposed for surface electrode switching in a 16-electrode wireless electrical impedance tomography (EIT) system. An RF-based wireless digital data transmission module (WDDTM) is developed, and the electrode switching of an EIT system is studied by analyzing the collected boundary data and the resistivity images of practical phantoms. An electrode switching module (ESM) is built from analog multiplexers and switched with parallel digital data transmitted by a wireless transmitter/receiver (Tx/Rx) module based on radio frequency technology. Parallel digital bits are generated using an NI USB-6251 card running on the LabVIEW platform and sent to the transmission module, which transmits the digital data to the receiver end. The transmitter/receiver module is interfaced with the personal computer (PC) and with practical phantoms through the ESM and a USB-based DAQ system, respectively. It is observed that the digital bits required for multiplexer operation are sequentially generated by the digital output (D/O) ports of the DAQ card. Parallel-to-serial and serial-to-parallel conversion of the digital data are performed by the encoder and decoder ICs. The wireless digital data transmission module successfully transmitted and received the parallel data required for switching the current and voltage electrodes wirelessly. A 1 mA, 50 kHz sinusoidal constant current is injected at the phantom boundary using the common ground current injection protocol, and the boundary potentials developed at the voltage electrodes are measured. Resistivity images of the practical phantoms are reconstructed from the boundary data using EIDORS. The boundary data and the resistivity images reconstructed from the surface potentials are studied to assess the wireless digital data transmission system.
Boundary data profiles of the practical phantom with different configurations show that the multiplexers operate in the required sequence for the common ground current injection protocol. The voltage peaks obtained at the proper positions in the boundary data profiles prove the sequential operation of the multiplexers and the successful wireless transmission of the digital bits. The reconstructed images and their image parameters prove that the boundary data are successfully acquired by the DAQ system, which in turn indicates sequential and proper operation of the multiplexers as well as successful wireless transmission of the digital bits. Hence the developed RF-based wireless digital data transmission module (WDDTM) is found suitable for transmitting the digital bits required for electrode switching in a wireless EIT data acquisition system. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
Surface electrodes are essentially required to be switched for boundary data collection in electrical impedance tomography (EIT). Parallel digital data bits are generally required to operate the multiplexers used for electrode switching in EIT. The more electrodes an EIT system has, the more digital data bits are needed. For a sixteen-electrode system, 16 parallel digital data bits are required to operate the multiplexers in the opposite or neighbouring current injection method. In this paper a common ground current injection protocol is proposed for EIT and the resistivity imaging is studied. The common ground method needs only two analog multiplexers, each of which needs only 4 digital data bits, and hence only 8 digital bits are required to switch the 16 surface electrodes. Results show that the USB-based data acquisition system sequentially generates the digital data required by the multiplexers operating in the common ground current injection method. The profile of the boundary data collected from the practical phantom shows that the multiplexers operate in the required sequence in the common ground current injection protocol. The voltage peaks obtained for all the inhomogeneity configurations are found at the correct positions in the boundary data matrix, which proves the sequential operation of the multiplexers. Resistivity images reconstructed from the boundary data collected from the practical phantom with different configurations also show that the entire digital data generation module functions properly. The reconstructed images and their image parameters prove that the boundary data are successfully acquired by the DAQ system, which in turn indicates sequential and proper operation of the multiplexers.
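The 8-bit budget described above can be sketched in code. This is an illustrative Python enumeration, not the authors' LabVIEW implementation: two 16:1 analog multiplexers, one selecting the current electrode and the other scanning the voltage electrodes, each driven by 4 select bits, so each switching step needs 4 + 4 = 8 parallel bits; the exact scan order is an assumption.

```python
def mux_select_bits(electrode, width=4):
    """Select bits (MSB first) that route a 16:1 analog multiplexer
    channel to the given electrode index (0-15)."""
    return [(electrode >> b) & 1 for b in reversed(range(width))]

def common_ground_switching_sequence(n_electrodes=16):
    """Illustrative switching table for common-ground current injection:
    for each current electrode, scan every other electrode as the
    voltage electrode. Each row is the 8-bit word driving both muxes."""
    table = []
    for current_el in range(n_electrodes):
        for voltage_el in range(n_electrodes):
            if voltage_el == current_el:
                continue  # the driven electrode is not measured
            table.append(mux_select_bits(current_el) +
                         mux_select_bits(voltage_el))
    return table

seq = common_ground_switching_sequence()
```

Each of the 16 current positions yields 15 voltage measurements, so the table has 240 rows of 8 bits, matching the claim that 8 digital lines suffice for 16 electrodes.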
Abstract:
Electrical failure of insulation is known to be an extremal random process wherein nominally identical pro-rated specimens of equipment insulation at constant stress fail at inordinately different times, even under laboratory test conditions. To estimate the life of power equipment, it is necessary to run long-duration ageing experiments under accelerated stresses to acquire and analyze insulation-specific failure data. In the present work, resin impregnated paper (RIP), a relatively new insulation system of choice used in transformer bushings, is taken as an example. The failure data have been processed using proven statistical methods, both graphical and analytical. The physical model governing insulation failure at constant accelerated stress is assumed to be based on a temperature-dependent inverse power law model.
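Graphical treatment of constant-stress failure data of this kind commonly rests on a two-parameter Weibull fit via median ranks. The following is a minimal sketch assuming Bernard's median-rank approximation and a least-squares line through the linearized CDF; the paper's exact statistical procedure is not specified here.

```python
import math

def weibull_fit_median_ranks(failure_times):
    """Graphical-style Weibull fit: least-squares line through the
    linearized CDF  ln(-ln(1 - F)) = beta*ln(t) - beta*ln(eta),
    with F estimated by Bernard's median-rank approximation
    F_i = (i - 0.3) / (n + 0.4) for the i-th ordered failure."""
    t = sorted(failure_times)
    n = len(t)
    xs, ys = [], []
    for i, ti in enumerate(t, start=1):
        F = (i - 0.3) / (n + 0.4)               # median rank
        xs.append(math.log(ti))
        ys.append(math.log(-math.log(1.0 - F)))
    mx = sum(xs) / n
    my = sum(ys) / n
    # slope of the fitted line is the shape parameter beta
    beta = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)
    # intercept gives the scale parameter eta
    eta = math.exp(mx - my / beta)
    return beta, eta
```

A shape parameter beta > 1 indicates wear-out (ageing) failures, which is the usual finding for thermally aged insulation.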
Abstract:
Effective conservation and management of natural resources requires up-to-date information of the land cover (LC) types and their dynamics. The LC dynamics are being captured using multi-resolution remote sensing (RS) data with appropriate classification strategies. RS data with important environmental layers (either remotely acquired or derived from ground measurements) would however be more effective in addressing LC dynamics and associated changes. These ancillary layers provide additional information for delineating LC classes' decision boundaries compared to the conventional classification techniques. This communication ascertains the possibility of improved classification accuracy of RS data with ancillary and derived geographical layers such as vegetation index, temperature, digital elevation model (DEM), aspect, slope and texture. This has been implemented in three terrains of varying topography. The study would help in the selection of appropriate ancillary data depending on the terrain for better classified information.
Abstract:
Low power consumption per channel and data-rate minimization are two key challenges that need to be addressed in future generations of neural recording systems (NRS). Power consumption can be reduced by avoiding unnecessary processing, whereas the data rate is greatly decreased by sending spike time-stamps along with spike features as opposed to raw digitized data. The dynamic range in an NRS can vary with time due to changes in electrode-neuron distance or background noise, which demands adaptability. An analog-to-digital converter (ADC) is one of the most important blocks in an NRS. This paper presents an 8-bit SAR ADC in 0.13 µm CMOS technology along with input and reference buffers. A novel energy-efficient digital-to-analog converter switching scheme is proposed, which consumes 37% less energy than the present state of the art. The use of a ping-pong input sampling scheme is emphasized for multichannel input to alleviate the bandwidth requirement of the input buffer. To reduce the data rate, the A/D process is only enabled through the built-in background noise rejection logic to ensure that the noise is not processed. The ADC resolution can be adjusted from 8 to 1 bit in 1-bit steps based on the input dynamic range. The ADC consumes 8.8 µW from a 1 V supply at 1 MS/s. It achieves an effective number of bits of 7.7 and a FoM of 42.3 fJ/conversion-step.
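The successive-approximation search at the heart of a SAR ADC can be modeled behaviourally. This sketch assumes an ideal DAC and comparator; it illustrates the adjustable 8-to-1-bit resolution mentioned in the abstract, not the proposed energy-saving switching scheme.

```python
def sar_adc(vin, vref=1.0, bits=8):
    """Behavioural SAR ADC model: binary search from MSB to LSB.
    Each step sets a trial bit, compares the input against the
    corresponding DAC level, and keeps the bit if the level is
    still at or below the input. `bits` can be reduced from 8
    down to 1 to trade resolution for conversion steps."""
    code = 0
    for b in reversed(range(bits)):
        trial = code | (1 << b)
        # keep the bit if the DAC output does not overshoot the input
        if vin >= trial * vref / (1 << bits):
            code = trial
    return code
```

Lowering `bits` halves the number of comparison (and DAC switching) steps per bit removed, which is the mechanism by which resolution scaling saves energy per conversion.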
Abstract:
The Himalayan region is one of the most active seismic regions in the world, and many researchers have highlighted the possibility of a great seismic event in the near future due to the seismic gap. Seismic hazard analysis and microzonation of highly populated places in the region are mandatory at a regional scale. A region-specific Ground Motion Predictive Equation (GMPE) is an important input to seismic hazard analysis for macro- and micro-zonation studies. The few GMPEs developed in India are based on recorded data and are applicable only for particular ranges of magnitudes and distances. This paper focuses on the development of a new GMPE for the Himalayan region considering both recorded and simulated earthquakes of moment magnitude 5.3-8.7. A finite-fault simulation model has been used for the ground motion simulation, considering region-specific seismotectonic parameters from past earthquakes and source models. Simulated acceleration time histories and response spectra are compared with available records. In the absence of a large number of recorded data, simulations have been performed at unavailable locations by adopting the apparent stations concept. Earthquakes recorded up to 2007 have been used for the development of the new GMPE, and earthquake records after 2007 are used to validate it. The proposed GMPE matches the recorded data very well, and also agrees with other highly ranked GMPEs developed elsewhere and applicable to the region. Comparison of response spectra has also shown good agreement with recorded earthquake data. Quantitative analysis of residuals in predicting the records of the Nepal-India 2011 earthquake (Mw 5.7) shows that the proposed GMPE predicts peak ground acceleration and spectral acceleration over the entire distance and period range with lower percentage residuals than the existing region-specific GMPEs. Crown Copyright (C) 2013 Published by Elsevier Ltd. All rights reserved.
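GMPEs of this kind are regressions of ground motion on a functional form in magnitude and distance. The sketch below uses a generic form with placeholder coefficients; c1, c2, c3 and the depth term h are assumptions for illustration, not the fitted constants of the proposed Himalayan GMPE.

```python
import math

def gmpe_ln_pga(mw, r_km, c1=-3.0, c2=1.0, c3=-1.5, h_km=10.0):
    """Generic GMPE functional form
        ln(PGA) = c1 + c2*Mw + c3*ln(R_eff),
    where R_eff = sqrt(r^2 + h^2) saturates the distance term near
    the source. All coefficients are placeholders; a real GMPE fits
    them (plus a residual standard deviation) to recorded and
    simulated motions."""
    r_eff = math.sqrt(r_km ** 2 + h_km ** 2)
    return c1 + c2 * mw + c3 * math.log(r_eff)
```

Residual analysis of the kind described in the abstract then compares observed ln(PGA) against this prediction, record by record, across distance and period.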
Abstract:
Identification and mapping of crevasses in glaciated regions is important for safe movement. However, the remote and rugged glacial terrain in the Himalaya poses greater challenges for field data collection. In the present study, crevasse signatures were collected from the Siachen and Samudra Tapu glaciers in the Indian Himalaya using ground-penetrating radar (GPR). The surveys were conducted using 250 MHz antennas in ground mode and 350 MHz antennas in airborne mode. The identified signatures of open and hidden crevasses in GPR profiles collected in ground mode were validated by ground truthing. The crevasse zones and buried boulder areas in a glacier were identified using a combination of airborne GPR profiles and SAR data, and these were validated against high-resolution optical satellite imagery (Cartosat-1) and a Survey of India map sheet. Using the multi-sensor data, a crevasse map of Samudra Tapu glacier was prepared. The present methodology can also be used for mapping crevasse zones in other Himalayan glaciers.
Abstract:
The present paper details the prediction of blast-induced ground vibration using an artificial neural network (ANN). The data were generated from five different coal mines. Twenty-one different parameters, covering rock mass, explosive and blast design parameters, were used to develop one comprehensive ANN model for the five different coal-bearing formations. A total of 131 datasets were used to develop the ANN model and 44 datasets were used to test it. The developed ANN model was compared with the USBM model, and the comprehensive ANN model was found to be superior in predicting blast-induced ground vibration.
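The USBM baseline against which the ANN is compared is a scaled-distance predictor. A sketch with placeholder site constants follows; K and B here are assumptions for illustration, not the values fitted to the five mines in the study.

```python
import math

def usbm_ppv(distance_m, charge_kg, K=1140.0, B=1.6):
    """USBM-style predictor of peak particle velocity (PPV, mm/s)
    from the scaled distance D / sqrt(Q), where D is the distance
    from the blast and Q the maximum charge per delay. K and B are
    site constants fitted by regression on monitored blasts; the
    defaults here are placeholders."""
    scaled_distance = distance_m / math.sqrt(charge_kg)
    return K * scaled_distance ** (-B)
```

PPV falls with distance and rises with charge per delay; an ANN with 21 inputs can capture departures from this two-parameter law, which is the basis of the superiority claim in the abstract.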
Abstract:
This article describes a new performance-based approach for evaluating the return period of seismic soil liquefaction based on standard penetration test (SPT) and cone penetration test (CPT) data. Conventional liquefaction evaluation methods consider a single acceleration level and magnitude, and these approaches fail to take into account the uncertainty in earthquake loading. Probabilistic seismic hazard analysis clearly shows that a particular acceleration value is contributed by different magnitudes with varying probability. In the new method presented in this article, the entire range of ground shaking and the entire range of earthquake magnitude are considered, and the liquefaction return period is evaluated based on the SPT and CPT data. This article explains the performance-based methodology for liquefaction analysis, starting from probabilistic seismic hazard analysis (PSHA) for the evaluation of seismic hazard, followed by the performance-based method to evaluate the liquefaction return period. A case study has been carried out for Bangalore, India, based on SPT data and converted CPT values, and the comparison of results obtained from both methods is presented. Over an area of 220 km² of Bangalore city, the site class was assessed based on a large number of borehole records and 58 multichannel analysis of surface waves (MASW) surveys. Using the site class and the peak acceleration at rock depth from the PSHA, the peak ground acceleration at the ground surface was estimated using a probabilistic approach. The liquefaction analysis was based on 450 borehole records obtained in the study area. The results from the CPT data match well with those obtained from a similar analysis with the SPT data.
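The performance-based calculation amounts to integrating a conditional liquefaction probability over the deaggregated hazard. This is a minimal sketch assuming the hazard is supplied as discrete (PGA, Mw, annual rate) contributions and `p_liq` is any conditional triggering model (e.g. an SPT- or CPT-based probabilistic curve); both names are illustrative, not the article's notation.

```python
def liquefaction_return_period(hazard_contributions, p_liq):
    """Performance-based sketch: the annual rate of liquefaction is
    the sum over deaggregated hazard bins of (annual rate of that
    PGA/Mw pair) x (probability of triggering given that pair).
    The return period is the inverse of that total rate.
    hazard_contributions: iterable of (pga_g, mw, annual_rate).
    p_liq(pga_g, mw): conditional probability of liquefaction."""
    rate = sum(lam * p_liq(pga, mw)
               for pga, mw, lam in hazard_contributions)
    return float('inf') if rate == 0 else 1.0 / rate
```

Because every magnitude-acceleration pair contributes in proportion to its own rate, this avoids the single-scenario assumption of the conventional deterministic check.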
Abstract:
H.264/Advanced Video Coding (AVC) surveillance video encoders use the Skip mode specified by the standard to reduce bandwidth. They also use multiple frames as reference for motion-compensated prediction. In this paper, we propose two techniques to reduce the bandwidth and computational cost of static-camera surveillance video encoders without affecting detection and recognition performance. A spatial sampler is proposed to sample pixels that are segmented using a Gaussian mixture model. Modified weight updates are derived for the parameters of the mixture model to reduce floating-point computations. The storage pattern of the parameters in memory is also modified to improve cache performance. Skip selection is performed using the segmentation results of the sampled pixels. The second contribution is a low-computational-cost algorithm to choose the reference frames. The proposed reference frame selection algorithm reduces the cost of coding uncovered background regions. We also study the number of reference frames required to achieve good coding efficiency. Distortion over foreground pixels is measured to quantify the performance of the proposed techniques. Experimental results show bit-rate savings of up to 94.5% over methods proposed in the literature on video surveillance data sets. The proposed techniques also provide up to a 74.5% reduction in compression complexity without increasing the distortion over the foreground regions in the video sequence.
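Skip selection rests on a per-pixel background model updated recursively. Below is a single-Gaussian sketch of the Stauffer-Grimson-style recursion; the paper uses a full mixture with modified, hardware-friendly weight updates, so this shows only the core idea: pixels matching their background model are background candidates, and a macroblock of such pixels can be coded as Skip.

```python
import math

def gmm_pixel_update(mean, var, weight, x, alpha=0.01, match_thresh=2.5):
    """One recursive update of a single Gaussian background component
    for pixel value x. A pixel within match_thresh standard deviations
    of the mean is declared background; its component is adapted with
    learning rate alpha and its weight reinforced. Unmatched pixels
    (foreground candidates) only decay the weight."""
    matched = abs(x - mean) <= match_thresh * math.sqrt(var)
    if matched:
        mean = (1 - alpha) * mean + alpha * x
        var = (1 - alpha) * var + alpha * (x - mean) ** 2
        weight = (1 - alpha) * weight + alpha
    else:
        weight = (1 - alpha) * weight
    return mean, var, weight, matched
```

Sampling only a subset of pixels per macroblock and running this update on them is what keeps the segmentation (and hence Skip selection) cheap.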
Abstract:
A discrete vortex method-based model has been proposed for two-dimensional/three-dimensional ground-effect prediction. The model merely requires two-dimensional sectional aerodynamics in free flight. These free-flight data can be obtained either from experiments or from a high-fidelity computational fluid dynamics solver. The first step of this two-step model involves a constrained optimization procedure that modifies the vortex distribution on the camber line, as obtained from a discrete vortex method, to match the free-flight data from experiments/computational fluid dynamics. In the second step, the vortex distribution thus obtained is further modified to account for the presence of the ground plane within a discrete vortex method-based framework. Whereas the predictability of lift appears as a natural extension, drag predictability within a potential flow framework is achieved through the introduction of what are referred to as drag panels. The need for the generalized Kutta-Joukowski theorem is emphasized. The extension of the model to three dimensions is by way of the numerical lifting-line theory, which allows for wing sweep. The model is extensively validated for both two-dimensional and three-dimensional ground-effect studies. The work also demonstrates the ability of the model to predict the lift and drag coefficients of a high-lift wing in ground effect to about 2% and 8% accuracy, respectively, compared with results obtained using a Reynolds-averaged Navier-Stokes solver on grids with several million volumes. The model shows considerable promise for design use, particularly during the early design phase.
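The discrete vortex core of such a model can be illustrated on a flat plate, where the classical quarter-/three-quarter-chord lumped-vortex placement recovers the thin-airfoil result Cl = 2*pi*sin(alpha). This sketch covers only that baseline step; the paper's constrained optimization against free-flight data and the ground-image vortex system are omitted.

```python
import math

def flat_plate_cl(alpha_deg, n_panels=20, chord=1.0, u_inf=1.0):
    """Discrete (lumped) vortex method for a flat plate: one point
    vortex at each panel quarter-chord, no-penetration enforced at
    each panel three-quarter-chord control point, then lift via the
    Kutta-Joukowski theorem: Cl = 2 * sum(Gamma) / (U * c)."""
    alpha = math.radians(alpha_deg)
    dx = chord / n_panels
    xv = [(i + 0.25) * dx for i in range(n_panels)]   # vortex points
    xc = [(i + 0.75) * dx for i in range(n_panels)]   # control points
    # influence of a unit vortex at xv[j] on the control point xc[i]
    A = [[1.0 / (2.0 * math.pi * (xc[i] - xv[j])) for j in range(n_panels)]
         for i in range(n_panels)]
    b = [u_inf * math.sin(alpha)] * n_panels
    # solve A * gamma = b by Gaussian elimination with partial pivoting
    for k in range(n_panels):
        p = max(range(k, n_panels), key=lambda r: abs(A[r][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for r in range(k + 1, n_panels):
            f = A[r][k] / A[k][k]
            for col in range(k, n_panels):
                A[r][col] -= f * A[k][col]
            b[r] -= f * b[k]
    gam = [0.0] * n_panels
    for k in reversed(range(n_panels)):
        s = sum(A[k][col] * gam[col] for col in range(k + 1, n_panels))
        gam[k] = (b[k] - s) / A[k][k]
    return 2.0 * sum(gam) / (u_inf * chord)
```

In the paper's framework this vortex distribution is subsequently re-optimized to match measured sectional data, and ground effect enters through image vortices below the ground plane.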
Abstract:
As the volume of data relating to proteins increases, researchers rely more and more on the analysis of published data, thus increasing the importance of good access to these data that vary from the supplemental material of individual articles, all the way to major reference databases with professional staff and long-term funding. Specialist protein resources fill an important middle ground, providing interactive web interfaces to their databases for a focused topic or family of proteins, using specialized approaches that are not feasible in the major reference databases. Many are labors of love, run by a single lab with little or no dedicated funding and there are many challenges to building and maintaining them. This perspective arose from a meeting of several specialist protein resources and major reference databases held at the Wellcome Trust Genome Campus (Cambridge, UK) on August 11 and 12, 2014. During this meeting some common key challenges involved in creating and maintaining such resources were discussed, along with various approaches to address them. In laying out these challenges, we aim to inform users about how these issues impact our resources and illustrate ways in which our working together could enhance their accuracy, currency, and overall value. Proteins 2015; 83:1005-1013. (c) 2015 The Authors. Proteins: Structure, Function, and Bioinformatics Published by Wiley Periodicals, Inc.
Abstract:
The spatial error structure of daily precipitation derived from the latest version 7 (v7) Tropical Rainfall Measuring Mission (TRMM) level 2 data products is studied through comparison with the Asian Precipitation Highly Resolved Observational Data Integration Toward Evaluation of the Water Resources (APHRODITE) data over a subtropical region of the Indian subcontinent, for the seasonal rainfall over six years from June 2002 to September 2007. The data products examined include v7 data from the TRMM Microwave Imager (TMI) and the precipitation radar (PR), namely 2A12, 2A25, and 2B31 (combined data from PR and TMI). The spatial distribution of uncertainty in these data products was quantified using performance metrics derived from the contingency table. For seasonal daily precipitation over a subtropical basin in India, the 2A12 data product showed greater skill in detecting and quantifying the volume of rainfall than the 2A25 and 2B31 data products. Error characterization using various error models revealed that random errors from multiplicative error models were homoscedastic and better represented rainfall estimates from the 2A12 algorithm. Error decomposition techniques performed to disentangle systematic and random errors verify that the multiplicative error model representing rainfall from the 2A12 algorithm estimated a greater percentage of systematic error than for the 2A25 or 2B31 algorithms. The results verify that although the radiometer-derived 2A12 rainfall data are known to suffer from many sources of uncertainty, spatial analysis over the case study region of India testifies that the 2A12 rainfall estimates are in very good agreement with the reference estimates for the data period considered.
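Contingency-table detection metrics of the kind mentioned above can be computed as follows. This sketch assumes a wet/dry threshold of 1 mm and the standard POD/FAR/CSI definitions from forecast verification practice; the paper's exact metric set and threshold may differ.

```python
def contingency_metrics(satellite_mm, reference_mm, thresh=1.0):
    """Skill scores from a 2x2 rain-detection contingency table:
    hit = both wet, false alarm = satellite wet only, miss =
    reference wet only. Returns probability of detection (POD),
    false alarm ratio (FAR), and critical success index (CSI)."""
    hits = misses = false_alarms = correct_neg = 0
    for s, r in zip(satellite_mm, reference_mm):
        s_wet, r_wet = s >= thresh, r >= thresh
        if s_wet and r_wet:
            hits += 1
        elif s_wet:
            false_alarms += 1
        elif r_wet:
            misses += 1
        else:
            correct_neg += 1
    pod = hits / (hits + misses) if hits + misses else float('nan')
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else float('nan')
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else float('nan')
    return {'POD': pod, 'FAR': far, 'CSI': csi}
```

Mapping these scores per grid cell against the APHRODITE reference is what produces the spatial uncertainty distribution described in the abstract.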
Abstract:
The non-availability of high-spatial-resolution thermal data from satellites on a consistent basis led to the development of different models for sharpening coarse-spatial-resolution thermal data. Thermal sharpening models that are based on the relationship between land-surface temperature (LST) and a vegetation index (VI) such as the normalized difference vegetation index (NDVI) or fraction vegetation cover (FVC) have gained much attention due to their simplicity, physical basis, and operational capability. However, there are hardly any studies in the literature examining comprehensively various VIs apart from NDVI and FVC, which may be better suited for thermal sharpening over agricultural and natural landscapes. The aim of this study is to compare the relative performance of five different VIs, namely NDVI, FVC, the normalized difference water index (NDWI), soil adjusted vegetation index (SAVI), and modified soil adjusted vegetation index (MSAVI), for thermal sharpening using the DisTrad thermal sharpening model over agricultural and natural landscapes in India. Multi-temporal LST data from Landsat-7 Enhanced Thematic Mapper Plus (ETM+) and Moderate Resolution Imaging Spectroradiometer (MODIS) sensors obtained over two different agro-climatic grids in India were disaggregated from 960 m to 120 m spatial resolution. The sharpened LST was compared with the reference LST estimated from the Landsat data at 120 m spatial resolution. In addition to this, MODIS LST was disaggregated from 960 m to 480 m and compared with ground measurements at five sites in India. It was found that NDVI and FVC performed better only under wet conditions, whereas under drier conditions, the performance of NDWI was superior to other indices and produced accurate results. SAVI and MSAVI always produced poorer results compared with NDVI/FVC and NDWI for wet and dry cases, respectively.
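The DisTrad idea is a coarse-scale LST-VI regression applied at the fine scale, with each coarse pixel's residual added back so coarse-scale values are preserved. The sketch below is a linear version for illustration; DisTrad proper fits a higher-order polynomial to the least-variable (warmest-edge) pixels per VI bin, and the index used (NDVI, FVC, NDWI, SAVI, MSAVI) is interchangeable, which is exactly what the study compares.

```python
def distrad_sharpen(coarse_lst, coarse_vi, fine_vi, fine_to_coarse):
    """Linear DisTrad-style sharpening sketch:
    1) fit LST = a + b * VI at the coarse scale (least squares),
    2) predict LST from the fine-scale VI,
    3) add back each coarse pixel's regression residual so the
       coarse-scale LST is conserved.
    fine_to_coarse[i] maps fine pixel i to its coarse pixel index."""
    n = len(coarse_lst)
    mv = sum(coarse_vi) / n
    ml = sum(coarse_lst) / n
    b = sum((v - mv) * (t - ml) for v, t in zip(coarse_vi, coarse_lst)) / \
        sum((v - mv) ** 2 for v in coarse_vi)
    a = ml - b * mv
    residual = [t - (a + b * v) for v, t in zip(coarse_vi, coarse_lst)]
    return [a + b * fv + residual[fine_to_coarse[i]]
            for i, fv in enumerate(fine_vi)]
```

Swapping `coarse_vi`/`fine_vi` between different indices is all that changes when comparing NDVI, FVC, NDWI, SAVI, and MSAVI as sharpening bases.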