13 results for Data Envelopment Analysis
at Indian Institute of Science - Bangalore - India
Abstract:
Sixteen irrigation subsystems of the Mahi Bajaj Sagar Project, Rajasthan, India, are evaluated, and the most suitable one is selected, using data envelopment analysis (DEA) in both deterministic and fuzzy environments. Seven performance-related indicators are considered, namely land development works (LDW), timely supply of inputs (TSI), conjunctive use of water resources (CUW), participation of farmers (PF), environmental conservation (EC), economic impact (EI) and crop productivity (CPR). Of the seven, LDW, TSI, CUW, PF and EC are treated as inputs, whereas CPR and EI are treated as outputs for DEA modelling purposes. Spearman rank correlation coefficient values are also computed for various scenarios. It is concluded that DEA in both deterministic and fuzzy environments is useful for the present problem; however, the outcome of fuzzy DEA may be explored for further analysis because of its simple and effective handling of data and discrimination. It is inferred that the present study can be extended to similar situations with suitable modifications.
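A minimal sketch of the input-oriented CCR DEA model that underlies such an evaluation, using scipy's linear programming routine. The input/output values and the number of decision-making units below are hypothetical placeholders, not the project's indicator data.

```python
# Input-oriented CCR DEA: for each DMU, minimise theta such that a lambda-weighted
# composite of peers uses at most theta * its inputs and produces at least its outputs.
import numpy as np
from scipy.optimize import linprog

# rows = DMUs (irrigation subsystems), columns = indicators (hypothetical values)
X = np.array([[4.0, 3.0], [6.0, 2.0], [5.0, 5.0]])   # inputs, e.g. LDW, TSI
Y = np.array([[2.0, 1.0], [3.0, 1.5], [2.5, 2.0]])   # outputs, e.g. CPR, EI
n = X.shape[0]

def ccr_efficiency(o):
    c = np.concatenate(([1.0], np.zeros(n)))              # variables: [theta, lambda_1..lambda_n]
    A_in = np.hstack((-X[[o]].T, X.T))                      # sum_j lam_j x_ij - theta x_io <= 0
    b_in = np.zeros(X.shape[1])
    A_out = np.hstack((np.zeros((Y.shape[1], 1)), -Y.T))    # -sum_j lam_j y_rj <= -y_ro
    b_out = -Y[o]
    res = linprog(c, A_ub=np.vstack((A_in, A_out)),
                  b_ub=np.concatenate((b_in, b_out)),
                  bounds=[(None, None)] + [(0, None)] * n)
    return res.fun                                          # theta = efficiency score

for o in range(n):
    print(f"DMU {o}: efficiency = {ccr_efficiency(o):.3f}")
```

A fuzzy variant would typically replace the crisp indicator values with interval data at each alpha-cut and solve a similar program at the interval endpoints.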
Abstract:
Data-flow analysis is an integral part of any aggressive optimizing compiler. We propose a framework for improving the precision of data-flow analysis in the presence of complex control flow. We initially perform data-flow analysis to determine those control-flow merges which cause the loss in data-flow analysis precision. The control-flow graph of the program is then restructured such that performing data-flow analysis on the resulting restructured graph gives more precise results. The proposed framework is both simple, involving the familiar notion of product automata, and general, since it is applicable to any forward data-flow analysis. Apart from proving that our restructuring process is correct, we also show that restructuring is effective in that it necessarily leads to more optimization opportunities. Furthermore, the framework handles the trade-off between the increase in data-flow precision and the code-size increase inherent in the restructuring. We show that determining an optimal restructuring is NP-hard, and propose and evaluate a greedy strategy. The framework has been implemented in the Scale research compiler and instantiated for the specific problem of constant propagation. On the SPECINT 2000 benchmark suite we observe an average speedup of 4% in running time over the Wegman-Zadeck conditional constant propagation algorithm and 2% over a purely path-profile-guided approach.
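For readers unfamiliar with the precision loss the framework targets, the following sketch runs a plain forward constant-propagation analysis over a tiny hand-built CFG; the lattice, transfer functions and graph are illustrative only, not the Scale implementation. The meet at the merge node maps x to not-a-constant, which is exactly the kind of information loss that restructuring the merge per predecessor context can avoid.

```python
# Forward constant propagation over a small CFG with a round-robin fixed-point loop.
BOTTOM, NAC = "undef", "not-a-constant"   # lattice: undef < constant < not-a-constant

def meet(a, b):
    if a == BOTTOM: return b
    if b == BOTTOM: return a
    return a if a == b else NAC

# CFG: entry -> b1 (x = 1) and b2 (x = 2), both flowing into merge (y = x + 1)
cfg = {"entry": [], "b1": ["entry"], "b2": ["entry"], "merge": ["b1", "b2"]}
transfer = {
    "entry": lambda env: env,
    "b1": lambda env: {**env, "x": 1},
    "b2": lambda env: {**env, "x": 2},
    "merge": lambda env: {**env, "y": env["x"] + 1 if isinstance(env["x"], int) else NAC},
}

env_out = {n: {"x": BOTTOM, "y": BOTTOM} for n in cfg}
changed = True
while changed:                                  # iterate to a fixed point
    changed = False
    for node, preds in cfg.items():
        env_in = {"x": BOTTOM, "y": BOTTOM}
        for p in preds:
            env_in = {v: meet(env_in[v], env_out[p][v]) for v in env_in}
        new_out = transfer[node](env_in)
        if new_out != env_out[node]:
            env_out[node], changed = new_out, True

print(env_out["merge"])   # x is NAC after the merge, so y cannot be proven constant
```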
Abstract:
Static analysis (aka offline analysis) of a model of an IP network is useful for understanding, debugging, and verifying packet flow properties of the network. Data-flow analysis is a method that has typically been applied to static analysis of programs. We propose a new, data-flow based approach for static analysis of packet flows in networks. We also investigate an application of our analysis to the problem of inferring a high-level policy from the network, which has been addressed in the past only for a single router.
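A minimal, hypothetical illustration of what a data-flow view of packet flows can look like: the data-flow fact is the set of destination prefixes that can still reach a link, and each router's deny rules act as the transfer function. The chain topology and ACLs below are invented for illustration and are not the network model used in the paper.

```python
from ipaddress import ip_network

topology = {"R1": ["R2"], "R2": ["R3"], "R3": []}          # simple chain R1 -> R2 -> R3
acl = {  # per-router set of denied destination prefixes (hypothetical policy)
    "R1": set(),
    "R2": {ip_network("10.1.0.0/16")},
    "R3": {ip_network("10.2.0.0/16")},
}

def transfer(router, reachable):
    """Transfer function: drop the prefixes this router's ACL denies."""
    return {p for p in reachable if p not in acl[router]}

# Data-flow fact at the ingress of R1: all prefixes of interest
fact = {ip_network("10.1.0.0/16"), ip_network("10.2.0.0/16"), ip_network("10.3.0.0/16")}
node = "R1"
while node is not None:
    fact = transfer(node, fact)
    print(f"after {node}: {sorted(str(p) for p in fact)}")
    nexts = topology[node]
    node = nexts[0] if nexts else None
# The fact surviving to R3's egress (only 10.3.0.0/16) is the end-to-end
# reachability policy inferred for this path.
```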
Abstract:
This paper presents a fast algorithm for data exchange in a network of processors organized as a reconfigurable tree structure. For a given data exchange table, the algorithm generates a sequence of tree configurations in which the data exchanges are to be executed. A significant feature of the algorithm is that each exchange is executed in a tree configuration in which the source and destination nodes are adjacent to each other. It is proved that for every pair of nodes in the reconfigurable tree structure there exist exactly two configurations in which the two nodes are adjacent to each other. The algorithm exploits this fact and determines the solution so as to optimize both the number of configurations required and the time to perform the data exchanges. Analysis of the algorithm shows that it has linear time complexity and provides a large reduction in run time compared to a previously proposed algorithm. This is confirmed by experimental results obtained by executing a large number of randomly generated data exchange tables. Another significant feature of the algorithm is that the routing information code is always two bits, irrespective of the number of nodes in the tree. This not only increases the speed of the algorithm but also results in simpler hardware inside each node.
Abstract:
The effect of some experimental parameters, namely sample weight, particle size and its distribution, heating rate and flow rate of inert gas, on the fractional decomposition of calcium carbonate samples has been studied both experimentally and theoretically. The general conclusions obtained from the theoretical analysis are corroborated qualitatively by the experimental data. The analysis indicates that the kinetic compensation effect may be partly due to the variations in experimental parameters between different experiments.
Abstract:
MATLAB is an array language, initially popular for rapid prototyping, but now being increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism. These programs also have control-flow-dominated scalar regions that have an impact on the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input for identifying data parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the CPU and the GPU happens synergistically and the amount of data transfer needed is minimized. In order to ensure required data movement for dependencies across basic blocks, we propose a data-flow analysis and edge-splitting strategy. Thus our compiler automatically handles composition of kernels, mapping of kernels to the CPU and GPU, scheduling, and insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data parallel benchmarks over native execution of MATLAB.
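A minimal sketch of the flavour of CPU/GPU mapping heuristic described (not the MEGHA implementation): each identified kernel is assigned to the device with the lower estimated execution cost once the data-transfer cost implied by where its operands currently reside is added. All cost numbers and kernels below are hypothetical.

```python
kernels = [
    # (name, cpu_time, gpu_time, inputs, outputs) in arbitrary time units
    ("k1", 10.0, 2.0, {"A"}, {"B"}),
    ("k2", 1.0, 0.8, {"B"}, {"C"}),
    ("k3", 8.0, 1.5, {"C"}, {"D"}),
]
TRANSFER_COST = 3.0                       # cost to move one array across PCIe
location = {"A": "cpu"}                   # arrays start in host memory

def mapping_cost(kernel_time, inputs, device):
    # execution time plus the cost of moving inputs that live on the other device
    moves = sum(1 for a in inputs if location.get(a, device) != device)
    return kernel_time + TRANSFER_COST * moves

schedule = []
for name, cpu_t, gpu_t, inputs, outputs in kernels:
    cpu_cost = mapping_cost(cpu_t, inputs, "cpu")
    gpu_cost = mapping_cost(gpu_t, inputs, "gpu")
    device = "gpu" if gpu_cost < cpu_cost else "cpu"
    schedule.append((name, device))
    for a in outputs:                     # outputs are produced on the chosen device
        location[a] = device

print(schedule)   # e.g. [('k1', 'gpu'), ('k2', 'gpu'), ('k3', 'gpu')]
```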
Abstract:
We demonstrate the activity of Ti0.84Pt0.01Fe0.15O2-δ and Ti0.73Pd0.02Fe0.25O2-δ catalysts towards the CO oxidation and water gas shift (WGS) reactions. Both catalysts were synthesized in nanocrystalline form by a low-temperature sonochemical method and characterized by different techniques such as XRD, FT-Raman, TEM, FT-IR, XPS and a BET surface analyzer. H2-TPR results corroborate the intimate contact between the noble metal and Fe ions in both catalysts, which facilitates the reducibility of the support. In the absence of feed CO2 and H2, nearly 100% conversion of CO to CO2 with 100% H2 selectivity was observed at 300 °C and 260 °C, respectively, for the Ti0.84Pt0.01Fe0.15O2-δ and Ti0.73Pd0.02Fe0.25O2-δ catalysts. However, the catalytic performance of Ti0.73Pd0.02Fe0.25O2-δ deteriorates in the presence of feed CO2 and H2. The change in the support reducibility is the primary reason for the significant increase in the activity for CO oxidation and the WGS reaction. The effect of Fe addition was more significant in Ti0.73Pd0.02Fe0.25O2-δ than in Ti0.84Pt0.01Fe0.15O2-δ. Based on the spectroscopic evidence and surface phenomena, a hybrid reaction scheme utilizing both surface hydroxyl groups and lattice oxygen was hypothesized over these catalysts for the WGS reaction. Mechanisms based on the formate and redox pathways were used to fit the kinetic data. The analysis of experimental data shows that the redox mechanism is the dominant pathway over these catalysts. Copyright (C) 2012, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
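A minimal sketch of the kinetic model-discrimination step: two candidate rate expressions are fitted to rate data and compared by their residuals. The functional forms and the data below are generic placeholders, not the paper's actual redox and formate rate laws or measurements.

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical rate-vs-CO-partial-pressure data (atm, arbitrary rate units)
p_co = np.linspace(0.05, 0.5, 10)
rate = 0.8 * p_co / (1 + 4.0 * p_co) + np.random.default_rng(0).normal(0, 0.002, p_co.size)

def model_redox(p, k, K):                 # placeholder "redox-like" saturation form
    return k * p / (1 + K * p)

def model_formate(p, k, n):               # placeholder "formate-like" power law
    return k * p**n

for name, model, p0 in [("redox", model_redox, (1.0, 1.0)),
                        ("formate", model_formate, (1.0, 0.5))]:
    popt, _ = curve_fit(model, p_co, rate, p0=p0, maxfev=10000)
    sse = np.sum((rate - model(p_co, *popt))**2)
    print(f"{name}: params={np.round(popt, 3)}, SSE={sse:.2e}")
# The form with the smaller sum of squared errors is taken as the dominant pathway.
```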
Abstract:
CdTe thin films of 500 thickness prepared by thermal evaporation technique were analyzed for leakage current and conduction mechanisms. Metal-insulator-metal (MIM) capacitors were fabricated using these films as a dielectric. These films have many possible applications, such as passivation for infrared diodes that operate at low temperatures (80 K). Direct-current (DC) current-voltage (I-V) and capacitance-voltage (C-V) measurements were performed on these films. Furthermore, the films were subjected to thermal cycling from 300 K to 80 K and back to 300 K. Typical minimum leakage currents near zero bias at room temperature varied between 0.9 nA and 0.1 µA, while low-temperature leakage currents were in the range of 9.5 pA to 0.5 nA, corresponding to resistivity values on the order of 10^8 Ω·cm and 10^10 Ω·cm, respectively. Well-known conduction mechanisms from the literature were utilized for fitting of the measured I-V data. Our analysis indicates that the conduction mechanism is in general Ohmic for low fields (< 5 x 10^4 V cm^-1), while the conduction mechanism for fields > 6 x 10^4 V cm^-1 is modified Poole-Frenkel (MPF) and Fowler-Nordheim (FN) tunneling at room temperature. At 80 K, Schottky-type conduction dominates. A significant observation is that the film did not show any appreciable degradation in leakage current characteristics due to the thermal cycling.
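A minimal sketch of how measured I-V data can be screened against the conduction mechanisms mentioned: each mechanism is linear in a particular transformed coordinate system, so the quality of a straight-line fit discriminates between them. The field and current-density values below are synthetic, not the measured CdTe data.

```python
import numpy as np

E = np.linspace(6e4, 2e5, 20)                    # electric field, V/cm (hypothetical)
J = 1e-9 * E * np.exp(0.003 * np.sqrt(E))        # synthetic Poole-Frenkel-like current density

def linear_fit_r2(x, y):
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1 - np.sum(resid**2) / np.sum((y - y.mean())**2)

tests = {
    "ohmic: J vs E":                     (E, J),
    "Poole-Frenkel: ln(J/E) vs sqrt(E)": (np.sqrt(E), np.log(J / E)),
    "Fowler-Nordheim: ln(J/E^2) vs 1/E": (1.0 / E, np.log(J / E**2)),
}
for name, (x, y) in tests.items():
    print(f"{name:36s} R^2 = {linear_fit_r2(x, y):.4f}")
# The mechanism whose coordinates give R^2 closest to 1 best describes the data.
```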
Abstract:
The name "Seven Pagodas" has served as a nickname for the south Indian port of Mahabalipuram since the early European explorers, who used it as a landmark for navigation because they could see the summits of seven temples from the sea. There are many theories concerning the name Seven Pagodas. The present study compares the coastline and the seven adjacent monuments illustrated in a 17th-century portolan chart (maritime map) with recent remote sensing data. This analysis throws new light on the name "Seven Pagodas" for the city. The study used a DEM of the site to simulate a coastline similar to the one depicted in the old portolan chart. Through this, the sea level at that time, the corresponding flooding extent given the topography of the area, and their effect on the monuments could be analyzed. Most importantly, in the process this work has identified the seven monuments that possibly gave rise to the name Seven Pagodas, providing an alternative explanation for one of the mysteries of history. This work has demonstrated a unique method of studying coastal archaeological sites. As a large number of heritage sites around the world are on coastlines, this methodology has the potential to be very useful for coastal heritage preservation and management.
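A minimal sketch of the DEM-based inundation idea: raise the sea level over an elevation grid and check which cells, and which monument locations, fall below it. The elevation values, sea level and monument coordinates are invented for illustration; the actual study used a DEM of the Mahabalipuram site.

```python
import numpy as np

dem = np.array([[5.0, 4.0, 2.0, 0.5],          # elevations in metres (hypothetical)
                [6.0, 3.5, 1.5, 0.2],
                [7.0, 5.0, 2.5, 1.0]])
sea_level = 2.0                                 # simulated past sea level (m)

flooded = dem <= sea_level                      # boolean inundation mask
print("flooded fraction:", flooded.mean())

monuments = {"temple_A": (0, 2), "temple_B": (2, 1)}   # (row, col) grid positions
for name, (r, c) in monuments.items():
    state = "submerged" if flooded[r, c] else "above water"
    print(f"{name}: {state}")
```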
Abstract:
The Himalayan region is one of the most seismically active regions in the world, and many researchers have highlighted the possibility of a great seismic event in the near future due to the seismic gap. Seismic hazard analysis and microzonation of highly populated places in the region are mandatory at a regional scale. A region-specific Ground Motion Predictive Equation (GMPE) is an important input to seismic hazard analysis for macro- and micro-zonation studies. The few GMPEs developed in India are based on recorded data and are applicable only for particular ranges of magnitudes and distances. This paper focuses on the development of a new GMPE for the Himalayan region considering both recorded and simulated earthquakes of moment magnitude 5.3-8.7. The finite-fault simulation model has been used for ground motion simulation, considering region-specific seismotectonic parameters from past earthquakes and source models. Simulated acceleration time histories and response spectra are compared with available records. In the absence of a large amount of recorded data, simulations have been performed at unavailable locations by adopting the Apparent Stations concept. Earthquakes recorded up to 2007 have been used for the development of the new GMPE, and records of earthquakes after 2007 are used to validate it. The proposed GMPE matches well with recorded data and also with other highly ranked GMPEs developed elsewhere and applicable to the region. Comparison of response spectra has also shown good agreement with recorded earthquake data. Quantitative analysis of residuals for the proposed GMPE and region-specific GMPEs in predicting the records of the 2011 Nepal-India earthquake (Mw 5.7) shows that the proposed GMPE predicts peak ground acceleration and spectral acceleration over the entire distance and period range with lower percent residuals than existing region-specific GMPEs. Crown Copyright (C) 2013 Published by Elsevier Ltd. All rights reserved.
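A minimal sketch of the regression step behind a GMPE: a simple functional form ln(PGA) = c1 + c2*M + c3*ln(R) + c4*R is fitted to magnitude-distance-PGA triples by least squares. Both the functional form and the synthetic data are placeholders, not the proposed GMPE or the Himalayan strong-motion data set.

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.uniform(5.3, 8.7, 200)                     # moment magnitudes
R = rng.uniform(10, 300, 200)                      # source-to-site distance, km
ln_pga = -3.0 + 1.1 * M - 1.4 * np.log(R) - 0.002 * R + rng.normal(0, 0.3, 200)

A = np.column_stack([np.ones_like(M), M, np.log(R), R])   # design matrix
coef, *_ = np.linalg.lstsq(A, ln_pga, rcond=None)          # least-squares coefficients
residuals = ln_pga - A @ coef
print("coefficients c1..c4:", np.round(coef, 3))
print("residual std dev   :", residuals.std().round(3))

# Predict median PGA (in g) for a new magnitude/distance pair
m_new, r_new = 6.5, 50.0
print("predicted PGA:", np.exp(coef @ [1.0, m_new, np.log(r_new), r_new]).round(4), "g")
```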
Abstract:
This paper considers the design of a power-controlled reverse channel training (RCT) scheme for spatial multiplexing (SM)-based data transmission along the dominant modes of the channel in a time-division duplex (TDD) multiple-input multiple-output (MIMO) system, when channel knowledge is available at the receiver. A channel-dependent power-controlled RCT scheme is proposed, using which the transmitter estimates the beamforming (BF) vectors required for the forward-link SM data transmission. Tight approximate expressions for 1) the mean square error (MSE) in the estimate of the BF vectors, and 2) a capacity lower bound (CLB) for an SM system, are derived and used to optimize the parameters of the training sequence. Moreover, an extension of the channel-dependent training scheme and the data rate analysis to a multiuser scenario with M user terminals is presented. For the single-mode BF system, a closed-form expression for an upper bound on the average sum data rate is derived, which is shown to scale as ((L_c - L_{B,τ})/L_c) log log M asymptotically in M, where L_c and L_{B,τ} are the channel coherence time and training duration, respectively. The significant performance gain offered by the proposed training sequence over the conventional constant-power orthogonal RCT sequence is demonstrated using Monte Carlo simulations.
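A minimal sketch of the forward-link setting the training scheme serves: spatial multiplexing along the dominant channel modes obtained from the SVD of the channel. Perfect channel knowledge is assumed here; the paper's contribution, the power-controlled reverse training that lets the transmitter estimate these beamforming vectors, is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, Nr, n_modes = 4, 4, 2                              # antennas and number of SM streams
H = (rng.normal(size=(Nr, Nt)) + 1j * rng.normal(size=(Nr, Nt))) / np.sqrt(2)

U, s, Vh = np.linalg.svd(H)
B = Vh.conj().T[:, :n_modes]                           # dominant-mode BF vectors (Nt x n_modes)

x = (rng.normal(size=n_modes) + 1j * rng.normal(size=n_modes)) / np.sqrt(2)  # data symbols
y = H @ (B @ x)                                        # forward-link SM transmission
y_eff = U[:, :n_modes].conj().T @ y                    # receive combining with left singular vectors

print("singular values      :", np.round(s, 3))
print("expected s_k * x_k   :", np.round(s[:n_modes] * x, 3))
print("combined receive sig :", np.round(y_eff, 3))    # equals s_k * x_k per mode
```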
Abstract:
Northeast India is one of the most seismically active regions in the world, with on average more than seven earthquakes of magnitude 5.0 and above per year. Reliable seismic hazard assessment could provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, deterministic as well as probabilistic methods have been attempted for seismic hazard assessment of Tripura and Mizoram states at bedrock level condition. An updated earthquake catalogue was collected from various national and international seismological agencies for the period from 1731 to 2011. Homogenization, declustering and data-completeness analysis of the events were carried out before hazard evaluation. Seismicity parameters have been estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on the seismicity, tectonic features and fault rupture mechanism, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. In this study, the hazard is estimated using linear sources identified in and around the study area. Results are presented in the form of PGA using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2% and 10% probability of exceedance in 50 years, and spectral acceleration (T = 0.2 s, 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide important inputs for planning risk-reduction strategies, developing risk-acceptance criteria and financial analysis of possible damage in the study area, together with a comprehensive analysis and higher-resolution hazard mapping.
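A minimal sketch of estimating the Gutenberg-Richter seismicity parameters (log10 N = a - b*M) from a declustered catalogue by regressing cumulative counts against magnitude. The synthetic magnitudes below stand in for the 1731-2011 catalogue used in the study.

```python
import numpy as np

rng = np.random.default_rng(2)
# Synthetic catalogue whose magnitudes follow a G-R law with b close to 1
mags = 4.0 + rng.exponential(scale=1.0 / (1.0 * np.log(10)), size=2000)

m_bins = np.arange(4.0, 7.1, 0.1)
cum_counts = np.array([(mags >= m).sum() for m in m_bins])   # N(>= M)
valid = cum_counts > 0

# Least-squares fit of log10 N(>= M) = a - b*M
slope, a = np.polyfit(m_bins[valid], np.log10(cum_counts[valid]), 1)
print(f"a = {a:.2f}, b = {-slope:.2f}")   # b should come out close to 1 for this catalogue
```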