811 results for Data-driven analysis


Relevance:

80.00%

Publisher:

Abstract:

To mitigate the effects of climate change, countries worldwide are advancing technologies to reduce greenhouse gas emissions. This paper proposes and measures optimal production resource reallocation using data envelopment analysis (DEA), and attempts to clarify the effect of such reallocation on CO2 emissions reduction, focusing on regional and industrial characteristics. We use finance, energy, and CO2 emissions data from 13 industrial sectors in 39 countries from 1995 to 2009. The resulting emissions reduction potential is 2.54 Gt-CO2 in the year 2009, with former communist countries having the largest potential to reduce CO2 emissions in the manufacturing sectors. In particular, the basic materials industry, including the chemical and steel sectors, shows substantial potential for reducing CO2 emissions.
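
The reduction potential above is computed with data envelopment analysis (DEA). As a hedged illustration of the core computation only, the sketch below solves the output-oriented CCR DEA program with scipy.optimize.linprog; the sector data are invented, and the paper's actual model additionally reallocates production resources across regions and industries.

```python
# Minimal output-oriented CCR DEA sketch (constant returns to scale).
# The toy data stand in for the paper's finance/energy/CO2 panel.
import numpy as np
from scipy.optimize import linprog

def dea_output_efficiency(X, Y):
    """X: (n_dmus, n_inputs), Y: (n_dmus, n_outputs).
    Returns phi per DMU: phi = 1 is efficient; phi > 1 means outputs
    could be scaled up by phi using no more than the current inputs."""
    n, m = X.shape
    s = Y.shape[1]
    phis = []
    for o in range(n):
        # Decision variables: [phi, lambda_1, ..., lambda_n]
        c = np.zeros(n + 1)
        c[0] = -1.0                               # maximize phi
        A_ub, b_ub = [], []
        for i in range(m):                        # inputs: sum_j lam_j x_ji <= x_oi
            A_ub.append(np.r_[0.0, X[:, i]]); b_ub.append(X[o, i])
        for r in range(s):                        # outputs: phi*y_or <= sum_j lam_j y_jr
            A_ub.append(np.r_[Y[o, r], -Y[:, r]]); b_ub.append(0.0)
        res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                      bounds=[(1, None)] + [(0, None)] * n)
        phis.append(res.x[0])
    return np.array(phis)

# Toy example: 4 sectors, inputs = (capital, energy), output = value added.
X = np.array([[10., 5.], [8., 7.], [12., 4.], [9., 9.]])
Y = np.array([[20.], [18.], [22.], [15.]])
print(dea_output_efficiency(X, Y))
```

Aggregating the slacks implied by such scores across sectors and countries is what yields an economy-wide reduction potential of the kind reported above.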

Relevance:

80.00%

Publisher:

Abstract:

Deterministic models have been widely used to predict water quality in distribution systems, but their calibration requires extensive and accurate data sets for numerous parameters. In this study, alternative data-driven modeling approaches based on artificial neural networks (ANNs) were used to predict temporal variations of two important characteristics of water quality: chlorine residual and biomass concentration. The authors considered three types of ANN algorithms. Of these, the Levenberg-Marquardt algorithm provided the best results in predicting residual chlorine and biomass with both error-free and noisy data. The ANN models developed here can generate water quality scenarios of piped systems in real time to help utilities identify weak points of low chlorine residual and high biomass concentration and select optimal remedial strategies.
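
The preferred Levenberg-Marquardt training can be sketched with generic tools. The toy example below fits a one-hidden-layer network to a synthetic chlorine-decay-like series using scipy.optimize.least_squares(method='lm'), which stands in for a dedicated neural-network toolbox; the network size, data, and scaling are illustrative assumptions, not the authors' setup.

```python
# Fit a tiny 1-hidden-layer ANN with Levenberg-Marquardt (synthetic data).
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)                                  # scaled time
y = 0.8 * np.exp(-3 * t) + 0.05 * rng.normal(size=t.size)   # noisy decay curve

H = 5  # hidden units; parameter vector p = [w1 (H), b1 (H), w2 (H), b2 (1)]

def predict(p, x):
    w1, b1, w2, b2 = p[:H], p[H:2*H], p[2*H:3*H], p[3*H]
    hidden = np.tanh(np.outer(x, w1) + b1)                   # (n, H)
    return hidden @ w2 + b2

def residuals(p):
    return predict(p, t) - y

fit = least_squares(residuals, rng.normal(scale=0.5, size=3 * H + 1),
                    method='lm')                             # Levenberg-Marquardt
print('training RMSE:', np.sqrt(np.mean(fit.fun ** 2)))
```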

Relevance:

80.00%

Publisher:

Abstract:

A sensitive framework has been developed for modelling young radiata pine survival, growth, and size-class distribution from time of planting to age 5 or 6 years. The data and analysis refer to the Central North Island region of New Zealand. The survival function is derived from a Weibull probability density function to reflect diminishing mortality with the passage of time in young stands. An anamorphic family of trends was used, as very little between-tree competition can be expected in young stands. An exponential height function was found to best fit the lower portion of its sigmoid form. The most appropriate basal area/ha exponential function included an allometric adjustment, which resulted in compatible mean height and basal area/ha models. Each of these equations successfully represented the effects of several establishment practices by making coefficients linear functions of site factors, management activities, and their interactions. Height and diameter distribution modelling techniques that ensured compatibility with stand values were employed to represent the effects of management practices on crop variation. Model parameters were estimated using data from site preparation experiments in the region and were tested with some independent data sets.
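
The survival construction is standard: a two-parameter Weibull density yields a closed-form survival curve whose hazard falls with age when the shape parameter is below one (the paper's exact parameterization may differ):

$$ f(t) = \frac{\alpha}{\beta}\Big(\frac{t}{\beta}\Big)^{\alpha-1} e^{-(t/\beta)^\alpha}, \qquad S(t) = e^{-(t/\beta)^\alpha}, \qquad h(t) = \frac{f(t)}{S(t)} = \frac{\alpha}{\beta}\Big(\frac{t}{\beta}\Big)^{\alpha-1}. $$

For $\alpha < 1$ the hazard $h(t)$ decreases with stand age, so expected stocking $N(t) = N_0\,S(t)$ reproduces the diminishing mortality described above.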

Relevance:

80.00%

Publisher:

Abstract:

The objectives of this study were to carry out a detailed and systematic empirical analysis of microfinance borrowers and non-borrowers in Bangladesh and to examine how efficiency measures are influenced by access to agricultural microfinance. In the empirical analysis, both parametric and non-parametric frontier approaches were used to investigate differences in efficiency estimates between microfinance borrowers and non-borrowers. This thesis, based on five articles, applied data obtained from a survey of 360 farm households from the north-central and north-western regions of Bangladesh. The methods used involve stochastic frontier analysis (SFA) and data envelopment analysis (DEA), in addition to sample selectivity and limited dependent variable models. In article I, technical efficiency (TE) was estimated and its determinants identified by applying an extended Cobb-Douglas stochastic frontier production function. The results show that farm households had a mean TE of 83%, with lower TE scores for non-borrowers of agricultural microfinance. Institutional policies addressing the consolidation of individual plots into farm units, ensuring access to microfinance, and extension education for farmers with longer farming experience are suggested to improve farmers' TE. In article II, the objective was to assess the effects of access to microfinance on household production and cost efficiency (CE) and to determine the efficiency differences between microfinance participating and non-participating farms. In addition, a non-discretionary DEA model was applied to capture directly the influence of microfinance on farm households' production and CE. The results suggested that, under both the pooled and the non-discretionary DEA models, farmers with access to microfinance were significantly more efficient than their non-borrowing counterparts. The results also revealed that land fragmentation, family size, household wealth, on-farm training, and off-farm income share are the main determinants of inefficiency after effectively correcting for sample selection bias. In article III, the TE of traditional variety (TV) and high-yielding variety (HYV) rice producers was estimated, in addition to investigating the determinants of the adoption rate of HYV rice. Furthermore, the role of TE as a potential determinant explaining differences in the adoption rate of HYV rice among farmers was assessed. The results indicated that, in spite of its much higher yield potential, HYV rice production was associated with lower TE and greater variability in yield. It was also found that TE had a significant positive influence on the adoption rate of HYV rice. In article IV, profit efficiency (PE) and profit loss of microfinance borrowers and non-borrowers were estimated within a sample selection framework, which provided a general framework for testing and accounting for sample selection in the stochastic (profit) frontier analysis. After effectively correcting for selectivity bias, the mean PE of the microfinance borrowers and non-borrowers was estimated at 68% and 52%, respectively. This suggests that a considerable share of profits was lost due to profit inefficiencies in rice production. The results also demonstrated that access to microfinance contributes significantly to increasing PE and reducing profit loss per hectare of land.
In article V, the effects of credit constraints on TE, allocative efficiency (AE), and CE were assessed while adequately controlling for sample selection bias. Confidence intervals were determined by the bootstrap method for both samples. The results indicated that differences in average efficiency scores of credit-constrained and unconstrained farms were not statistically significant, although average efficiencies tended to be higher in the group of unconstrained farms. After effectively correcting for selectivity bias, household experience, number of dependents, off-farm income, farm size, access to on-farm training, and yearly savings were found to be the main determinants of inefficiency. In general, the results of the study revealed the existence of substantial technical, allocative, and economic inefficiencies, as well as considerable profit inefficiencies. The results suggested the need to streamline agricultural microfinance by microfinance institutions (MFIs), donor agencies, and government at all tiers. Moreover, formulating policies that ensure greater access to agricultural microfinance for smallholder farmers on a sustainable basis in the study areas, in order to enhance productivity and efficiency, has been recommended. Keywords: technical, allocative, economic efficiency, DEA, non-discretionary DEA, selection bias, bootstrapping, microfinance, Bangladesh.
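
For orientation, the extended Cobb-Douglas stochastic frontier of article I builds on the canonical composed-error form (the thesis's specification additionally models determinants of the inefficiency term):

$$ \ln y_i = \beta_0 + \sum_k \beta_k \ln x_{ik} + v_i - u_i, \qquad v_i \sim N(0, \sigma_v^2),\quad u_i \ge 0, $$

where $v_i$ is statistical noise and $u_i$ captures inefficiency, so that $\mathrm{TE}_i = e^{-u_i} \in (0,1]$; the reported mean TE of 83% corresponds to an average $u_i \approx 0.19$.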

Relevance:

80.00%

Publisher:

Abstract:

MATLAB is an array language that was initially popular for rapid prototyping but is now increasingly used to develop production code for numerical and scientific applications. Typical MATLAB programs have abundant data parallelism, but they also have control-flow-dominated scalar regions that affect the program's execution time. Today's computer systems have tremendous computing power in the form of traditional CPU cores and throughput-oriented accelerators such as graphics processing units (GPUs). Thus, an approach that maps the control-flow-dominated regions to the CPU and the data-parallel regions to the GPU can significantly improve program performance. In this paper, we present the design and implementation of MEGHA, a compiler that automatically compiles MATLAB programs to enable synergistic execution on heterogeneous processors. Our solution is fully automated and does not require programmer input to identify data-parallel regions. We propose a set of compiler optimizations tailored for MATLAB. Our compiler identifies data-parallel regions of the program and composes them into kernels. The problem of combining statements into kernels is formulated as a constrained graph clustering problem. Heuristics are presented to map identified kernels to either the CPU or the GPU so that kernel execution on the two devices happens synergistically and the amount of data transfer needed is minimized. To ensure the required data movement for dependencies across basic blocks, we propose a data flow analysis and edge splitting strategy. Thus, our compiler automatically handles the composition of kernels, the mapping of kernels to the CPU and GPU, scheduling, and the insertion of required data transfers. The proposed compiler was implemented, and experimental evaluation using a set of MATLAB benchmarks shows that our approach achieves a geometric mean speedup of 19.8X for data-parallel benchmarks over native execution of MATLAB.
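
The mapping step can be illustrated with a deliberately simplified greedy heuristic. MEGHA's actual formulation is a constrained graph-clustering problem over the statement dependence graph, so the function name and linear cost model below are hypothetical:

```python
# Hypothetical greedy kernel-to-device mapping: each kernel has estimated
# CPU and GPU times, and a transfer cost is paid whenever consecutive
# kernels in dependency order run on different devices.
def map_kernels(kernels, transfer_cost):
    """kernels: list of (name, cpu_time, gpu_time) in dependency order."""
    placement, prev = [], None
    for name, cpu_t, gpu_t in kernels:
        cpu_total = cpu_t + (transfer_cost if prev == 'gpu' else 0.0)
        gpu_total = gpu_t + (transfer_cost if prev == 'cpu' else 0.0)
        prev = 'cpu' if cpu_total <= gpu_total else 'gpu'
        placement.append((name, prev))
    return placement

kernels = [('k1', 1.0, 0.2), ('k2', 0.3, 0.25), ('k3', 2.0, 0.1)]
print(map_kernels(kernels, transfer_cost=0.5))   # data-parallel kernels -> GPU
```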

Relevance:

80.00%

Publisher:

Abstract:

This paper presents a new algorithm for extracting free-form surface features (FFSFs) from a surface model. The extraction algorithm is based on a taxonomy of FFSFs modified from that proposed in the literature. A new classification scheme is proposed for FFSFs to enable their representation and extraction. The paper proposes a separating curve as the signature of an FFSF in a surface model. FFSFs are classified based on the characteristics of the separating curve (number and type) and the influence region (the region enclosed by the separating curve). A method to extract these entities is presented. The algorithm has been implemented and tested for various free-form surface features on different types of free-form (base) surfaces and is found to correctly identify and represent the features irrespective of the type of underlying surface. The representation and the extraction algorithm are both based on topology and geometry. The algorithm is data-driven and does not use any pre-defined templates. The definition presented for a feature is unambiguous and application independent. The proposed classification of FFSFs can be used to develop an ontology to determine semantic equivalences for features to be exchanged, mapped, and used across PLM applications. (C) 2011 Elsevier Ltd. All rights reserved.
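
To make the signature idea concrete, here is a purely hypothetical sketch keyed on the separating-curve signature (count and closed/open type); the class names and rules are illustrative, not the paper's taxonomy:

```python
# Hypothetical classifier keyed on the separating-curve signature.
from dataclasses import dataclass

@dataclass
class SeparatingCurve:
    closed: bool   # closed loop on the base surface vs. open curve

def classify_ffsf(curves):
    n = len(curves)
    n_closed = sum(c.closed for c in curves)
    if n == 1 and n_closed == 1:
        return 'isolated feature bounded by one closed loop (e.g. bump/dent)'
    if n == 1:
        return 'feature bounded by an open curve reaching the surface boundary'
    return f'compound feature ({n} separating curves, {n_closed} closed)'

print(classify_ffsf([SeparatingCurve(closed=True)]))
```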

Relevance:

80.00%

Publisher:

Abstract:

We demonstrate the activity of Ti0.84Pt0.01Fe0.15O2−δ and Ti0.73Pd0.02Fe0.25O2−δ catalysts towards CO oxidation and the water gas shift (WGS) reaction. Both catalysts were synthesized in nanocrystalline form by a low-temperature sonochemical method and characterized by techniques including XRD, FT-Raman, TEM, FT-IR, XPS, and a BET surface analyzer. H2-TPR results corroborate the intimate contact between the noble metal and Fe ions in both catalysts, which facilitates the reducibility of the support. In the absence of feed CO2 and H2, nearly 100% conversion of CO to CO2 with 100% H2 selectivity was observed at 300 °C and 260 °C, respectively, for the Ti0.84Pt0.01Fe0.15O2−δ and Ti0.73Pd0.02Fe0.25O2−δ catalysts. However, the catalytic performance of Ti0.73Pd0.02Fe0.25O2−δ deteriorates in the presence of feed CO2 and H2. The change in the support reducibility is the primary reason for the significant increase in activity for CO oxidation and the WGS reaction. The effect of Fe addition was more significant in Ti0.73Pd0.02Fe0.25O2−δ than in Ti0.84Pt0.01Fe0.15O2−δ. Based on the spectroscopic evidence and surface phenomena, a hybrid reaction scheme utilizing both surface hydroxyl groups and lattice oxygen was hypothesized over these catalysts for the WGS reaction. Mechanisms based on the formate and redox pathways were used to fit the kinetic data. The analysis of experimental data shows that the redox mechanism is the dominant pathway over these catalysts. Copyright (C) 2012, Hydrogen Energy Publications, LLC. Published by Elsevier Ltd. All rights reserved.
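
The dominant redox pathway is commonly modelled with a Mars-van Krevelen-type two-step rate law; the form below is one standard member of that family, shown as an assumption rather than the exact expression fitted in the paper:

$$ \mathrm{CO} + \mathrm{O_{lat}} \xrightarrow{k_1} \mathrm{CO_2} + \square, \qquad \mathrm{H_2O} + \square \xrightarrow{k_2} \mathrm{H_2} + \mathrm{O_{lat}}, $$

and, at steady state with lattice-oxygen coverage $\theta$ satisfying $k_1 P_{\mathrm{CO}}\theta = k_2 P_{\mathrm{H_2O}}(1-\theta)$,

$$ r = k_1 P_{\mathrm{CO}}\,\theta = \frac{k_1 k_2\, P_{\mathrm{CO}} P_{\mathrm{H_2O}}}{k_1 P_{\mathrm{CO}} + k_2 P_{\mathrm{H_2O}}}. $$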

Relevance:

80.00%

Publisher:

Abstract:

CdTe thin films of 500 thickness prepared by the thermal evaporation technique were analyzed for leakage current and conduction mechanisms. Metal-insulator-metal (MIM) capacitors were fabricated using these films as the dielectric. Such films have many possible applications, for example as passivation for infrared diodes that operate at low temperatures (80 K). Direct-current (DC) current-voltage (I-V) and capacitance-voltage (C-V) measurements were performed on these films. Furthermore, the films were subjected to thermal cycling from 300 K to 80 K and back to 300 K. Typical minimum leakage currents near zero bias at room temperature varied between 0.9 nA and 0.1 μA, while low-temperature leakage currents were in the range of 9.5 pA to 0.5 nA, corresponding to resistivity values on the order of 10^8 Ω·cm and 10^10 Ω·cm, respectively. Well-known conduction mechanisms from the literature were utilized to fit the measured I-V data. Our analysis indicates that the conduction mechanism is, in general, Ohmic for low fields (< 5 x 10^4 V/cm), while for fields > 6 x 10^4 V/cm the conduction mechanisms at room temperature are modified Poole-Frenkel (MPF) and Fowler-Nordheim (FN) tunneling. At 80 K, Schottky-type conduction dominates. A significant observation is that the film did not show any appreciable degradation in leakage current characteristics due to the thermal cycling.
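
For reference, the field dependences behind these assignments take the textbook forms below ($\phi_t$ and $\phi_B$ are trap and barrier heights, $\varepsilon$ the dynamic permittivity, $b$ a tunneling constant); the modified Poole-Frenkel variant adds a slope-correction factor to the standard expression shown here:

$$ J_{\mathrm{Ohmic}} \propto E, \qquad J_{\mathrm{PF}} \propto E\,\exp\!\Big[-\frac{q\big(\phi_t - \sqrt{qE/\pi\varepsilon}\big)}{k_B T}\Big], $$

$$ J_{\mathrm{FN}} \propto E^2 \exp\!\Big(-\frac{b}{E}\Big), \qquad J_{\mathrm{Schottky}} \propto T^2 \exp\!\Big[-\frac{q\big(\phi_B - \sqrt{qE/4\pi\varepsilon}\big)}{k_B T}\Big]. $$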

Relevance:

80.00%

Publisher:

Abstract:

The name "Seven Pagodas" has served as a nickname for the south Indian port of Mahabalipuram since early European explorers used it as a landmark for navigation, as they could see the summits of seven temples from the sea. There are many theories concerning the name. The present study compares the coastline and the seven adjacent monuments illustrated in a 17th-century portolan chart (maritime map) with recent remote sensing data. This analysis throws new light on the name "Seven Pagodas" for the city. The study used a DEM of the site to simulate a coastline similar to the one depicted in the old portolan chart. Through this, the sea level of that period, the corresponding flooding extent given the topography of the area, and their effect on the monuments could be analyzed. Most importantly, this work has in the process identified the seven monuments that possibly gave rise to the name Seven Pagodas, providing an alternative explanation for one of the mysteries of history. The work demonstrates a distinctive method of studying coastal archaeological sites. As a large number of heritage sites around the world are on coastlines, this methodology has the potential to be very useful for coastal heritage preservation and management.
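
The coastline-simulation step reduces to thresholding a DEM at a candidate sea level. A minimal sketch with synthetic elevation data (a real analysis would load a georeferenced DEM of the site):

```python
# Flood-extent estimate by thresholding a DEM at a candidate sea level.
import numpy as np

rng = np.random.default_rng(1)
dem = np.cumsum(rng.normal(0.02, 0.1, size=(200, 200)), axis=1)  # fake terrain rising inland

def flood_mask(dem, sea_level):
    """Boolean mask of cells at or below the candidate sea level."""
    return dem <= sea_level

for level in (0.0, 1.0, 2.0):
    print(f'sea level {level:+.1f} m: {flood_mask(dem, level).mean():6.1%} of area flooded')
```

A fuller version would replace the plain threshold with a flood fill from the sea boundary, so that inland depressions below the candidate level are not spuriously counted as flooded.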

Relevance:

80.00%

Publisher:

Abstract:

The Himalayan region is one of the most active seismic regions in the world, and many researchers have highlighted the possibility of a great seismic event in the near future due to the seismic gap. Seismic hazard analysis and microzonation of highly populated places in the region are mandatory at a regional scale. A region-specific ground motion prediction equation (GMPE) is an important input to seismic hazard analysis for macro- and micro-zonation studies. The few GMPEs developed in India are based on recorded data and are applicable only for particular ranges of magnitudes and distances. This paper focuses on the development of a new GMPE for the Himalayan region considering both recorded and simulated earthquakes of moment magnitude 5.3-8.7. The finite-fault simulation model has been used for ground motion simulation, considering region-specific seismotectonic parameters from past earthquakes and source models. Simulated acceleration time histories and response spectra are compared with available records. In the absence of a large number of recorded data, simulations have been performed at unavailable locations by adopting the Apparent Stations concept. Earthquakes recorded up to 2007 have been used for the development of the new GMPE, and earthquake records after 2007 are used to validate it. The proposed GMPE matches very well with recorded data and also with other highly ranked GMPEs developed elsewhere and applicable to the region. Comparison of response spectra has also shown good agreement with recorded earthquake data. Quantitative analysis of residuals for the proposed GMPE and other region-specific GMPEs against records of the 2011 Nepal-India earthquake (Mw 5.7) shows that the proposed GMPE predicts peak ground acceleration and spectral acceleration over the entire distance and period range with lower percentage residuals than existing region-specific GMPEs. Crown Copyright (C) 2013 Published by Elsevier Ltd. All rights reserved.
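
GMPE regressions of this kind typically fit a functional family such as the one below, where $Y$ is PGA or spectral acceleration, $M$ moment magnitude, $R$ a source-to-site distance, and $\epsilon$ a standard normal residual; this generic form is for orientation only, as the paper's region-specific form and coefficients differ:

$$ \ln Y = c_1 + c_2 M + c_3 M^2 + c_4 \ln\!\big(\sqrt{R^2 + c_5^2}\big) + c_6 R + \sigma\,\epsilon. $$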

Relevance:

80.00%

Publisher:

Abstract:

This paper considers the design of a power-controlled reverse channel training (RCT) scheme for spatial multiplexing (SM)-based data transmission along the dominant modes of the channel in a time-division duplex (TDD) multiple-input multiple-output (MIMO) system, when channel knowledge is available at the receiver. A channel-dependent power-controlled RCT scheme is proposed, using which the transmitter estimates the beamforming (BF) vectors required for the forward-link SM data transmission. Tight approximate expressions for (1) the mean square error (MSE) in the estimate of the BF vectors and (2) a capacity lower bound (CLB) for an SM system are derived and used to optimize the parameters of the training sequence. Moreover, an extension of the channel-dependent training scheme and the data rate analysis to a multiuser scenario with M user terminals is presented. For the single-mode BF system, a closed-form expression for an upper bound on the average sum data rate is derived, which is shown to scale as ((L_c - L_{B,tau})/L_c) log log M asymptotically in M, where L_c and L_{B,tau} are the channel coherence time and the training duration, respectively. The significant performance gain offered by the proposed training sequence over the conventional constant-power orthogonal RCT sequence is demonstrated using Monte Carlo simulations.

Relevance:

80.00%

Publisher:

Abstract:

Complex biological systems such as the human brain can be expected to be inherently nonlinear and hence difficult to model. Most previous studies of brain function have used either linear models or parametric nonlinear models. In this paper, we propose a novel application of a nonlinear, recurrence-based measure of phase synchronization, the correlation between probabilities of recurrence (CPR), to study seizures in the brain. The advantage of this nonparametric method is that it makes very few assumptions, making it possible to investigate brain functioning in a data-driven way. We demonstrate the utility of the CPR measure for the study of phase synchronization in multichannel seizure EEG recorded from patients with global as well as focal epilepsy. In the case of global epilepsy, brain synchronization computed from the thresholded CPR matrix of multichannel EEG signals showed clear differences between epileptic seizure and pre-seizure states. Brain headmaps obtained for the seizure and pre-seizure cases provide meaningful insights about synchronization in the brain in those states. In the case of focal epilepsy, the headmap clearly enables us to identify the focus of the epilepsy, which provides diagnostic value. Comparative studies have shown that the nonlinear CPR measure outperforms the linear correlation measure. (C) 2014 Elsevier Ltd. All rights reserved.
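
The CPR measure itself is compact: estimate, for each signal, the probability of recurrence p(tau) and take the correlation of the two p(tau) curves. The single-channel sketch below is a hedged illustration; a full analysis would use delay embedding on multichannel EEG, and choosing the recurrence threshold as 10% of the signal spread is just one common convention.

```python
# Correlation between probabilities of recurrence (CPR) for two signals.
import numpy as np

def recurrence_probability(x, max_lag, eps):
    """p(tau) = fraction of pairs (i, i+tau) closer than eps."""
    n = len(x)
    return np.array([np.mean(np.abs(x[tau:] - x[:n - tau]) < eps)
                     for tau in range(1, max_lag + 1)])

def cpr(x, y, max_lag=200):
    px = recurrence_probability(x, max_lag, eps=0.1 * np.std(x))
    py = recurrence_probability(y, max_lag, eps=0.1 * np.std(y))
    return np.corrcoef(px, py)[0, 1]     # Pearson correlation of p(tau) curves

t = np.linspace(0, 40 * np.pi, 4000)
a = np.sin(t) + 0.1 * np.random.default_rng(2).normal(size=t.size)
b = np.sin(t + 0.8) + 0.1 * np.random.default_rng(3).normal(size=t.size)
print('CPR for a phase-locked pair:', cpr(a, b))   # close to 1
```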

Relevance:

80.00%

Publisher:

Abstract:

Northeast India is one of the most seismically active regions in the world, with on average more than seven earthquakes of magnitude 5.0 and above per year. Reliable seismic hazard assessment could provide the necessary design inputs for earthquake-resistant design of structures in this region. In this study, both deterministic and probabilistic methods have been applied for seismic hazard assessment of the states of Tripura and Mizoram at bedrock level. An updated earthquake catalogue was collected from various national and international seismological agencies for the period from 1731 to 2011. Homogenization, declustering, and data completeness analysis of the events were carried out before hazard evaluation. Seismicity parameters were estimated using the Gutenberg-Richter (G-R) relationship for each source zone. Based on seismicity, tectonic features, and fault rupture mechanism, the region was divided into six major subzones. Region-specific correlations were used for magnitude conversion to homogenize earthquake size. Ground motion equations (Atkinson and Boore 2003; Gupta 2010) were validated against observed PGA (peak ground acceleration) values before use in the hazard evaluation. The hazard is estimated using linear sources identified in and around the study area. Results are presented in the form of PGA using both DSHA (deterministic seismic hazard analysis) and PSHA (probabilistic seismic hazard analysis) with 2% and 10% probability of exceedance in 50 years, and spectral acceleration (T = 0.2 s and 1.0 s) for both states (2% probability of exceedance in 50 years). The results provide inputs for planning risk-reduction strategies, developing risk-acceptance criteria, and analysing the financial consequences of possible damage in the study area, with a comprehensive analysis and higher-resolution hazard mapping.
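
The Gutenberg-Richter step fits log10 N(>=M) = a - bM to the declustered catalogue. A minimal sketch on synthetic magnitudes (real work would use the completeness-checked catalogue, and often a maximum-likelihood b-value rather than least squares):

```python
# Least-squares Gutenberg-Richter fit on a synthetic magnitude catalogue.
import numpy as np

rng = np.random.default_rng(4)
b_true, m_c = 1.0, 4.0   # target b-value and magnitude of completeness
mags = rng.exponential(scale=1.0 / (b_true * np.log(10)), size=500) + m_c

bins = np.arange(m_c, mags.max(), 0.1)
N_cum = np.array([(mags >= m).sum() for m in bins])      # cumulative counts
slope, intercept = np.polyfit(bins, np.log10(N_cum), 1)
print(f'a = {intercept:.2f}, b = {-slope:.2f}')          # b should be near 1
```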

Relevance:

80.00%

Publisher:

Abstract:

This paper examines the technical efficiency of Argentine state-run universities using a non-parametric methodology. Through Data Envelopment Analysis, each university is characterized by a single relative technical efficiency score, which makes it possible to estimate the required improvements by comparison with a reference group. The basic output-oriented model is considered; its results show that the universities exhibit, on average, between 23.2% and 23.9% inefficiency. These results are useful for the design of university policies.