14 results for Semantic Publishing, Linked Data, Bibliometrics, Informetrics, Data Retrieval, Citations
at Indian Institute of Science - Bangalore - India
Abstract:
Several techniques are known for searching an ordered collection of data. The techniques and analyses of retrieval methods based on primary attributes are straightforward, whereas retrieval using secondary attributes depends on several factors. For secondary-attribute retrieval, the linear structures (inverted lists, multilists, doubly linked lists) and the recently proposed nonlinear tree structures (the multiple attribute tree (MAT) and the k-d tree (kdT)) each have their individual merits. This paper shows that, of the two tree structures, MAT possesses several features of a systematic data structure for external file organisation that make it superior to kdT. Analytic estimates of the complexity of node searches in MAT and kdT for several types of queries are developed and compared.
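To make the secondary-attribute (partial-match) setting concrete, here is a minimal, generic k-d tree sketch in Python that counts the nodes visited by a partial-match query. It only illustrates the kind of node-search cost the paper analyses; it is not the MAT structure or the paper's analytic estimates, and all names in it are made up.

```python
# Illustrative k-d tree for secondary-attribute (partial-match) retrieval.
# A generic sketch, not the MAT or kdT analysis from the paper.

def build_kdt(records, depth=0, k=2):
    """Recursively build a k-d tree; each level discriminates on one attribute."""
    if not records:
        return None
    axis = depth % k
    records = sorted(records, key=lambda r: r[axis])
    mid = len(records) // 2
    return {
        "point": records[mid],
        "axis": axis,
        "left": build_kdt(records[:mid], depth + 1, k),
        "right": build_kdt(records[mid + 1:], depth + 1, k),
    }

def partial_match(node, query, visited=None):
    """Partial-match query: query[i] is a value or None (unspecified attribute).
    Returns the matching records and the number of nodes visited."""
    if visited is None:
        visited = [0]
    if node is None:
        return [], visited[0]
    visited[0] += 1
    matches = []
    point, axis = node["point"], node["axis"]
    if all(q is None or q == p for q, p in zip(query, point)):
        matches.append(point)
    q = query[axis]
    # If the discriminating attribute is unspecified (None), both subtrees
    # must be searched; otherwise the comparison prunes one side.
    if q is None or q <= point[axis]:
        matches += partial_match(node["left"], query, visited)[0]
    if q is None or q >= point[axis]:
        matches += partial_match(node["right"], query, visited)[0]
    return matches, visited[0]

records = [(3, 7), (1, 4), (5, 2), (8, 6), (2, 9)]
tree = build_kdt(records)
print(partial_match(tree, (None, 6)))   # records whose second attribute equals 6
```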
Abstract:
The current study presents an algorithm to retrieve surface Soil Moisture (SM) from multi-temporal Synthetic Aperture Radar (SAR) data. The developed algorithm is based on a Cumulative Distribution Function (CDF) transformation of the multi-temporal RADARSAT-2 backscatter coefficient (BC) to obtain relative SM values, which are then converted into absolute SM values using soil information. The algorithm is tested in a semi-arid tropical region in South India using 30 RADARSAT-2 satellite images, SMOS L2 SM products, and 1262 SM field measurements in 50 plots spanning 4 years. Validation with the field data showed the ability of the developed algorithm to retrieve SM with an RMSE ranging from 0.02 to 0.06 m³/m³ for the majority of plots. Comparison with the SMOS SM showed good temporal behaviour, with an RMSE of approximately 0.05 m³/m³ and a correlation coefficient of approximately 0.9. The developed model is compared with, and found to be better than, the change detection and delta index models. The approach does not require calibration of any parameter to obtain relative SM and hence can easily be extended to any region for which a time series of SAR data is available.
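The following sketch illustrates the general idea of a CDF-style normalisation of a pixel's multi-temporal backscatter into a relative soil-moisture index, followed by a linear scaling to absolute values. The rank-based transform, the dry/wet soil bounds and all variable names are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def relative_sm_from_backscatter(sigma0_db):
    """Map a pixel's multi-temporal backscatter (dB) to a relative soil-moisture
    index in [0, 1] via its empirical CDF (a rank transform).  A simplified
    sketch of a CDF-based normalisation, not the paper's exact algorithm."""
    sigma0_db = np.asarray(sigma0_db, dtype=float)
    ranks = sigma0_db.argsort().argsort()          # position of each date in the empirical CDF
    return ranks / (len(sigma0_db) - 1.0)

def absolute_sm(relative_sm, sm_dry, sm_wet):
    """Scale relative SM to absolute SM (m3/m3) using assumed soil bounds,
    e.g. residual moisture (sm_dry) and moisture near saturation (sm_wet)."""
    return sm_dry + relative_sm * (sm_wet - sm_dry)

# Example: a 30-date backscatter series for one pixel (values are made up).
sigma0 = np.random.default_rng(0).uniform(-14.0, -6.0, size=30)
rel = relative_sm_from_backscatter(sigma0)
print(absolute_sm(rel, sm_dry=0.05, sm_wet=0.40).round(3))
```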
Abstract:
This paper is concerned with the integration of voice and data on an experimental local area network used by the School of Automation of the Indian Institute of Science. SALAN (School of Automation Local Area Network) consists of a number of microprocessor-based communication nodes linked to a shared coaxial cable transmission medium. The communication nodes handle the various low-level functions associated with computer communication and interface user data equipment to the network. SALAN at present provides a file transfer facility between an Intel Series III microcomputer development system and a Texas Instruments Model 990/4 microcomputer system. Further, a packet voice communication system has also been implemented on SALAN. The various aspects of the design and implementation of these two utilities are discussed.
Abstract:
Knowledge of hydrological variables (e.g. soil moisture, evapotranspiration) is of pronounced importance in various applications, including flood control, agricultural production and effective water resources management. These applications require accurate prediction of hydrological variables, spatially and temporally, over a watershed or basin. Although hydrological models can simulate these variables at the desired spatial and temporal resolution, they are often validated against variables that are either sparse in resolution (e.g. soil moisture) or averaged over large regions (e.g. runoff). A combination of a distributed hydrological model (DHM) and remote sensing (RS) has the potential to improve resolution, and data assimilation schemes can optimally combine the two. Retrieving hydrological variables (e.g. soil moisture) from remote sensing and assimilating them into a hydrological model requires validation of the algorithms using field studies. Here we present a review of methodologies developed to assimilate RS data into DHMs and demonstrate the application for soil moisture in a small experimental watershed in south India.
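As a concrete illustration of what an assimilation step does, the sketch below shows a single scalar Kalman-filter-style update that blends a modelled soil-moisture value with a remotely sensed one according to their error variances. The numbers and the scalar formulation are placeholders and do not represent any specific methodology reviewed in the paper.

```python
def kalman_update(model_sm, model_var, obs_sm, obs_var):
    """One scalar Kalman-filter analysis step: blend the model soil-moisture
    forecast with a remotely sensed observation, weighted by error variances."""
    gain = model_var / (model_var + obs_var)          # Kalman gain
    analysis = model_sm + gain * (obs_sm - model_sm)  # updated state
    analysis_var = (1.0 - gain) * model_var           # updated uncertainty
    return analysis, analysis_var

# Example with made-up numbers: model forecast 0.20 m3/m3, SAR retrieval 0.28 m3/m3.
print(kalman_update(model_sm=0.20, model_var=0.004, obs_sm=0.28, obs_var=0.002))
```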
Abstract:
Automatic identification of software faults has enormous practical significance. It requires characterizing program execution behavior and using appropriate data mining techniques on the chosen representation. In this paper, we use the sequence of system calls to characterize program execution. The data mining tasks addressed are learning to map system call streams to fault labels and automatic identification of fault causes. Spectrum kernels and SVMs are used for the former, while latent semantic analysis is used for the latter. The techniques are demonstrated on the intrusion dataset containing system call traces. The results show that the kernel techniques are as accurate as the best available results but are faster by orders of magnitude. We also show that latent semantic indexing is capable of revealing fault-specific features.
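A minimal sketch of the spectrum-kernel idea is given below: each system-call trace is mapped to counts of its length-k call subsequences, the kernel value is the dot product of these count vectors, and a precomputed-kernel SVM is trained on the resulting Gram matrix. The toy traces and labels are placeholders, not the intrusion dataset used in the paper.

```python
from collections import Counter
import numpy as np
from sklearn.svm import SVC

def kmer_counts(seq, k=3):
    """k-spectrum feature map: counts of every length-k subsequence of system calls."""
    return Counter(tuple(seq[i:i + k]) for i in range(len(seq) - k + 1))

def spectrum_kernel(seqs_a, seqs_b, k=3):
    """Gram matrix of the k-spectrum kernel: dot products of k-mer count vectors."""
    feats_a = [kmer_counts(s, k) for s in seqs_a]
    feats_b = [kmer_counts(s, k) for s in seqs_b]
    gram = np.zeros((len(feats_a), len(feats_b)))
    for i, fa in enumerate(feats_a):
        for j, fb in enumerate(feats_b):
            gram[i, j] = sum(fa[m] * fb.get(m, 0) for m in fa)
    return gram

# Toy system-call traces (placeholders) labelled 0 = normal, 1 = faulty.
traces = [["open", "read", "close"] * 5,
          ["open", "write", "close"] * 5,
          ["fork", "exec", "exit"] * 5,
          ["fork", "exec", "kill"] * 5]
labels = [0, 0, 1, 1]

clf = SVC(kernel="precomputed").fit(spectrum_kernel(traces, traces), labels)
test = [["open", "read", "close", "open", "read", "close"]]
print(clf.predict(spectrum_kernel(test, traces)))
```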
Abstract:
Purpose - This paper aims to validate a comprehensive aeroelastic analysis for a helicopter rotor against the higher harmonic control aeroacoustic rotor test (HART-II) wind tunnel test data. Design/methodology/approach - An aeroelastic analysis of a helicopter rotor with elastic blades, based on the finite element method in space and time and capable of considering higher harmonic control inputs, is carried out. Moderate-deflection and Coriolis nonlinearities are included in the analysis. The rotor aerodynamics are represented using free wake and unsteady aerodynamic models. Findings - Good correlation between the analysis and the HART-II wind tunnel test data is obtained for blade natural frequencies across a range of rotating speeds. The basic physics of the blade mode shapes is also well captured. In particular, the fundamental flap, lag and torsion modes compare very well. The blade response compares well with the HART-II results and with other high-fidelity aeroelastic code predictions for the flap and torsion modes. For the lead-lag response, the present analysis prediction is somewhat better than that of other aeroelastic analyses. Research limitations/implications - The predicted blade response trend with higher harmonic pitch control agreed well with the wind tunnel test data, but usually contained a constant offset in the mean values of the lead-lag and elastic torsion response. Improvements in the modeling of the aerodynamic environment around the rotor can help reduce this gap between the experimental and numerical results. Practical implications - Correlation of predicted aeroelastic response with wind tunnel test data is a vital step towards validating any helicopter aeroelastic analysis. Such efforts lend confidence in using numerical analysis to understand the actual physical behavior of the helicopter system. Also, validated numerical analyses can take the place of time-consuming and expensive wind tunnel tests during the initial stage of the design process. Originality/value - While the basic physics appears to be well captured by the aeroelastic analysis, there is a need for improvement in the aerodynamic modeling, which appears to be the source of the gap between the numerical predictions and the HART-II wind tunnel experiments.
Abstract:
This paper presents a fast algorithm for data exchange in a network of processors organized as a reconfigurable tree structure. For a given data exchange table, the algorithm generates a sequence of tree configurations in which the data exchanges are to be executed. A significant feature of the algorithm is that each exchange is executed in a tree configuration in which the source and destination nodes are adjacent to each other. It is proved, in a theorem, that for every pair of nodes in the reconfigurable tree structure there exist exactly two configurations in which the two nodes are adjacent to each other. The algorithm exploits this fact and determines the solution so as to optimize both the number of configurations required and the time to perform the data exchanges. Analysis of the algorithm shows that it has linear time complexity and provides a large reduction in run-time compared with a previously proposed algorithm. This is confirmed by experimental results obtained by executing a large number of randomly generated data exchange tables. Another significant feature of the algorithm is that the routing information code is always two bits wide, irrespective of the number of nodes in the tree. This not only increases the speed of the algorithm but also results in simpler hardware inside each node.
Abstract:
A three-dimensional digital model of a representative human kidney is needed for a surgical simulator capable of simulating laparoscopic surgery involving the kidney. Buying a three-dimensional computer model of a representative human kidney, or reconstructing a human kidney from an image sequence using commercial software, both involve spending a (sometimes significant) amount of money. In this paper, the author shows that one can obtain a three-dimensional surface model of a human kidney using images from the Visible Human Data Set and a few free software packages (ImageJ, ITK-SNAP, and MeshLab in particular). Neither the images from the Visible Human Data Set nor the software packages used here cost anything. Hence, the practice of extracting the geometry of a representative human kidney for free, as illustrated in the present work, could be an alternative to using expensive commercial software or purchasing a digital model.
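For readers who prefer a scripted route, the sketch below extracts a surface mesh from a stack of already-segmented slices using free Python packages (NumPy and scikit-image) and writes an OBJ file that MeshLab can open. This is an alternative pipeline assumed for illustration, not the ImageJ/ITK-SNAP/MeshLab workflow described in the paper, and the file paths are placeholders.

```python
# A scripted sketch of the same idea (binary segmentation -> surface mesh) using
# free Python packages; NOT the ImageJ/ITK-SNAP/MeshLab workflow of the paper.
import numpy as np
from skimage import io, measure

# Placeholder path: a stack of segmented (binary) kidney slices, one image per slice.
stack = io.imread_collection("kidney_slices/*.png")
volume = np.stack([np.asarray(img) > 0 for img in stack], axis=0)

# Marching cubes turns the binary volume into a triangulated surface.
verts, faces, normals, values = measure.marching_cubes(volume.astype(float), level=0.5)

# Write a simple OBJ file that MeshLab (or any mesh viewer) can open for cleanup.
with open("kidney_surface.obj", "w") as f:
    for v in verts:
        f.write(f"v {v[0]} {v[1]} {v[2]}\n")
    for tri in faces:
        f.write(f"f {tri[0] + 1} {tri[1] + 1} {tri[2] + 1}\n")
```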
Abstract:
It is increasingly being recognized that resting-state brain connectivity derived from functional magnetic resonance imaging (fMRI) data is an important marker of brain function in both healthy and clinical populations. Though linear correlation has been extensively used to characterize brain connectivity, it is limited to detecting first-order dependencies. In this study, we propose a framework wherein phase synchronization (PS) between brain regions is characterized using a new metric, "correlation between probabilities of recurrence" (CPR), followed by graph-theoretic analysis of the ensuing networks. We applied this method to resting-state fMRI data obtained from human subjects with and without administration of the anesthetic propofol. Our results showed decreased PS during anesthesia and a biologically more plausible community structure using CPR rather than linear correlation. We conclude that CPR provides an attractive nonparametric method for modeling interactions in brain networks, as compared to standard correlation, for obtaining physiologically meaningful insights about brain function.
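The sketch below illustrates the CPR idea in its simplest form: estimate each signal's probability-of-recurrence curve over a range of lags and take the Pearson correlation of the two curves. It is a one-dimensional simplification (no delay embedding, arbitrary recurrence thresholds) run on toy signals, not the fMRI processing pipeline of the study.

```python
import numpy as np

def recurrence_probability(x, eps, max_lag):
    """Generalised autocorrelation: fraction of points that recur within eps
    at each lag tau, i.e. the probability of recurrence P(tau)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    return np.array([np.mean(np.abs(x[tau:] - x[:n - tau]) < eps)
                     for tau in range(1, max_lag + 1)])

def cpr(x, y, eps_x, eps_y, max_lag=50):
    """Correlation between Probabilities of Recurrence: Pearson correlation
    of the two z-scored recurrence-probability curves."""
    px = recurrence_probability(x, eps_x, max_lag)
    py = recurrence_probability(y, eps_y, max_lag)
    px = (px - px.mean()) / px.std()
    py = (py - py.mean()) / py.std()
    return np.mean(px * py)

# Toy example: two noisy sinusoids with a phase offset (placeholder for fMRI time series).
t = np.linspace(0, 20 * np.pi, 500)
x = np.sin(t) + 0.1 * np.random.default_rng(1).normal(size=t.size)
y = np.sin(t + 0.8) + 0.1 * np.random.default_rng(2).normal(size=t.size)
print(cpr(x, y, eps_x=0.2, eps_y=0.2))
```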
Abstract:
The study follows an approach to estimate phytomass using recent techniques of remote sensing and digital photogrammetry. It involved a tree inventory of forest plantations in the Bhakra forest range of Nainital district. A panchromatic stereo dataset from Cartosat-1 was evaluated for mean stand height retrieval. Texture analysis and tree-top detection were carried out on QuickBird PAN data. A composite texture image of mean, variance and contrast with a 5x5 pixel window was found best for separating tree crowns for the assessment of crown areas. The tree-top count obtained by local maxima filtering was found to be 83.4% efficient, with an RMSE of +/-13 for 35 sample plots. The predicted phytomass ranged from 27.01 to 35.08 t/ha for Eucalyptus sp. and from 26.52 to 156 t/ha for Tectona grandis. The correlation between observed and predicted phytomass for Eucalyptus sp. was 0.468, with an RMSE of 5.12. However, the phytomass prediction for Tectona grandis was fairly strong (R² = 0.65, RMSE = 9.89), as there was no undergrowth and the crowns were clearly visible. The results of the study show the potential of the Cartosat-1 derived DSM and the QuickBird texture image for estimating stand height, stem diameter, tree count and phytomass of important timber species.
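A minimal sketch of local-maxima tree-top detection is shown below: a pixel is flagged as a candidate tree top if it equals the maximum of its moving window and exceeds a brightness threshold. The window size, threshold and toy image are assumptions for illustration, not the parameters used in the study.

```python
import numpy as np
from scipy import ndimage

def detect_tree_tops(pan_image, window=5, min_brightness=0.3):
    """Local-maxima filtering: a pixel is a candidate tree top if it equals the
    maximum of its window x window neighbourhood and exceeds a brightness threshold."""
    local_max = ndimage.maximum_filter(pan_image, size=window)
    peaks = (pan_image == local_max) & (pan_image > min_brightness)
    labelled, n_tops = ndimage.label(peaks)   # merge adjacent peak pixels into one top
    centres = ndimage.center_of_mass(peaks, labelled, range(1, n_tops + 1))
    return n_tops, np.array(centres)

# Toy image with two bright "crowns" (placeholder for a QuickBird PAN subset).
img = np.zeros((60, 60))
img[15, 20] = 1.0
img[40, 45] = 0.9
img = ndimage.gaussian_filter(img, sigma=3)
count, tops = detect_tree_tops(img, window=5, min_brightness=img.max() * 0.5)
print(count, tops)
```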
Abstract:
Anthropogenic aerosols play a crucial role in our environment, climate, and health. Assessment of the spatial and temporal variation in anthropogenic aerosols is essential to determine their impact. Aerosols are of natural and anthropogenic origin and together constitute a composite aerosol system; information about either component requires elimination of the other from the composite system. In the present work we estimated the anthropogenic aerosol fraction (AF) over the Indian region following two different approaches and inter-compared the estimates. We use multi-satellite data analysis and model simulations (with the CHIMERE chemical transport model) to derive the natural aerosol distribution, which is subsequently used to estimate AF over the Indian subcontinent. The two approaches are significantly different from each other: the satellite-derived natural aerosol information was extracted in terms of optical depth, while the model simulations yielded mass concentration. The anthropogenic aerosol fraction distribution was studied over two periods in 2008, pre-monsoon (March-May) and winter (November-February), given the known distinct seasonality in aerosol loading and type over the Indian region. Although both techniques derive the same property, considerable differences were noted in the temporal and spatial distributions. The satellite retrieval of AF showed maximum values during the pre-monsoon and summer months, while the lowest values were observed in winter. On the other hand, the model simulations showed the highest AF in winter and the lowest during the pre-monsoon and summer months. Both techniques provided an annual average AF of comparable magnitude (~0.43 ± 0.06 from the satellite and ~0.48 ± 0.19 from the model). For the winter months the model-estimated AF was ~0.62 ± 0.09, significantly higher than that estimated from the satellite (0.39 ± 0.05), while during the pre-monsoon months the satellite-estimated AF was ~0.46 ± 0.06 and the model estimate ~0.53 ± 0.14. Preliminary results from this work indicate that the model-simulated results are closer to the actual variation than the satellite estimates, in view of the general seasonal variation in aerosol concentrations.
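For illustration, the snippet below computes an anthropogenic fraction as (total AOD - natural AOD) / total AOD. This is a common way to define such a fraction and is an assumption here, not necessarily the exact formulation used in the paper, and the input values are made up.

```python
import numpy as np

def anthropogenic_fraction(aod_total, aod_natural):
    """AF = (total AOD - natural AOD) / total AOD, clipped to [0, 1].
    A commonly used definition; the paper's exact formulation may differ."""
    aod_total = np.asarray(aod_total, dtype=float)
    af = (aod_total - np.asarray(aod_natural, dtype=float)) / aod_total
    return np.clip(af, 0.0, 1.0)

# Made-up gridded values for illustration.
print(anthropogenic_fraction([0.55, 0.40, 0.30], [0.30, 0.25, 0.20]))
```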
Abstract:
Since streaming data keeps arriving continuously as an ordered sequence, massive amounts of data are created. A major challenge in handling data streams is the limitation of time and space. Prototype selection on streaming data requires the prototypes to be updated incrementally as new data come in. We propose an incremental algorithm for prototype selection, which can also be used to handle very large datasets. Results are presented on a number of large datasets, and our method is compared with an existing algorithm for streaming data. Our algorithm saves time, and the selected prototypes give good classification accuracy.
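The sketch below shows one simple way to do incremental prototype selection on a stream, in the spirit of condensed nearest neighbours: an incoming example is retained as a prototype only if the current prototype set would misclassify it. This is a generic illustration, not the authors' algorithm.

```python
import numpy as np

class IncrementalPrototypeSelector:
    """Condensed-NN-style incremental prototype selection for a data stream:
    keep an incoming example as a prototype only if the current prototypes
    would misclassify it.  A generic sketch, not the authors' algorithm."""

    def __init__(self):
        self.prototypes = []   # list of (feature vector, label)

    def _predict(self, x):
        if not self.prototypes:
            return None
        dists = [np.linalg.norm(x - p) for p, _ in self.prototypes]
        return self.prototypes[int(np.argmin(dists))][1]

    def partial_fit(self, x, y):
        """Process one streaming example; O(|prototypes|) time, bounded memory."""
        x = np.asarray(x, dtype=float)
        if self._predict(x) != y:
            self.prototypes.append((x, y))

# Simulated stream of labelled points around two class centres.
rng = np.random.default_rng(0)
selector = IncrementalPrototypeSelector()
for _ in range(500):
    label = int(rng.integers(2))
    point = rng.normal(loc=3.0 * label, scale=1.0, size=2)
    selector.partial_fit(point, label)
print(len(selector.prototypes), "prototypes retained out of 500 examples")
```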