921 results for input-output analysis
Abstract:
This paper re-assesses three independently developed approaches aimed at solving the problem of zero weights or non-zero slacks in Data Envelopment Analysis (DEA): weight-restricted, non-radial, and extended-facet DEA models. Weight-restricted DEA models are dual to envelopment DEA models, with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models that avoid non-zero slacks in the input-output constraints. Finally, extended-facet DEA models recognize that only projections on facets of full dimension correspond to well-defined rates of substitution/transformation between all inputs/outputs, which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate that these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further, we propose a new approach that uses weight restrictions to extend existing facets. This approach has advantages in computational terms, because extended-facet models normally rely on mixed-integer programming, which is computationally demanding.
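For context, a standard input-oriented multiplier-form DEA model with a generic assurance-region weight restriction can be written as below; the bounds alpha and beta are illustrative placeholders, not values from the paper:

```latex
\begin{align*}
\max_{u,v}\quad & \sum_{r=1}^{s} u_r\, y_{r0} \\
\text{s.t.}\quad & \sum_{i=1}^{m} v_i\, x_{i0} = 1, \\
& \sum_{r=1}^{s} u_r\, y_{rj} - \sum_{i=1}^{m} v_i\, x_{ij} \le 0, \qquad j = 1,\dots,n, \\
& \alpha_{ik} \le \frac{v_i}{v_k} \le \beta_{ik} \qquad \text{(assurance-region restriction on input weights)}, \\
& u_r \ge 0, \; v_i \ge 0.
\end{align*}
```

The corresponding envelopment (dual) program is where the facet-extension interpretation discussed in the abstract appears.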
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. The process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady state using adequate sampling techniques for the dispersed and continuous phases, as well as during column transients, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties along the length of the column. End effects were treated by adding stages at the column entrances. Two parameters were incorporated in the model: a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and model-predicted concentration profiles at steady state using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared with those measured experimentally, and very good agreement was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to capture the dynamic behaviour of the process accurately, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities, and load rejection.
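A minimal sketch of the kind of parameter-estimation loop described above: backmixing and mass-transfer parameters are fitted by minimizing the gap between a measured and a simulated steady-state concentration profile. The toy column model, its dimensions, and the measured profile below are hypothetical stand-ins, and SciPy's BDF integrator (a backward differentiation formula method, like GEAR) plays the role of the thesis's GEAR solver:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

N_STAGES = 10  # hypothetical number of stages in the backflow model

def column_rhs(t, c, backmix, mt_weight):
    """Toy stagewise backflow model: c holds one concentration per stage.
    A real model tracks both phases and hydrodynamics; this is a stand-in."""
    dcdt = np.empty_like(c)
    feed = 1.0
    for i in range(N_STAGES):
        up = c[i - 1] if i > 0 else feed
        down = c[i + 1] if i < N_STAGES - 1 else c[i]
        # convection + backmixing exchange + simple mass-transfer sink
        dcdt[i] = (up - c[i]) + backmix * (down - c[i]) - mt_weight * c[i]
    return dcdt

def steady_profile(params):
    backmix, mt_weight = params
    sol = solve_ivp(column_rhs, (0.0, 500.0), np.zeros(N_STAGES),
                    args=(backmix, mt_weight), method="BDF")  # stiff integrator
    return sol.y[:, -1]  # approximate steady-state concentration profile

measured = np.linspace(0.9, 0.1, N_STAGES)  # placeholder experimental profile

fit = least_squares(lambda p: steady_profile(p) - measured,
                    x0=[0.3, 0.5], bounds=([0.0, 0.0], [2.0, 5.0]))
print("estimated backmixing coefficient and mass-transfer weight:", fit.x)
```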
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to the interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The rotor speed-raffinate concentration and solvent flowrate-extract concentration loops showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interactions, time delays, and constraints on the input and output variables.
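A relative gain array of the kind used in such a pairing analysis is the element-wise product of the steady-state gain matrix and the transpose of its inverse. The 2x2 gains below are made-up numbers, not values from the thesis:

```python
import numpy as np

def relative_gain_array(K):
    """RGA = K * (K^-1)^T, computed element-wise."""
    return K * np.linalg.inv(K).T

# Hypothetical steady-state gains: rows = (raffinate conc., extract conc.),
# columns = (rotor speed, solvent flowrate).
K = np.array([[-0.8, 0.2],
              [ 0.3, 1.1]])
print(relative_gain_array(K))
# Diagonal entries close to 1 (about 0.94 here) support pairing rotor speed
# with raffinate concentration and solvent flowrate with extract concentration.
```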
Abstract:
DEA literature continues apace, but software has lagged behind. This session uses suitably selected data to present newly developed software that includes many of the most recent DEA models. The software enables the user to address a variety of issues not frequently found in existing DEA software, such as:
- assessments under a variety of possible returns-to-scale assumptions, including NIRS and NDRS;
- scale elasticity computations;
- numerous input/output variables and a truly unlimited number of assessment units (DMUs);
- panel data analysis;
- analysis of categorical data (multiple categories);
- the Malmquist index and its decompositions;
- computation of super-efficiency;
- automated removal of super-efficient outliers under user-specified criteria;
- graphical presentation of results;
- integrated statistical tests.
Abstract:
Mathematics Subject Classification 2010: 26A33, 33E99, 15A52, 62E15.
Abstract:
In recent years there has been growing concern about the emission trade balance of countries. This is because countries with open economies are active players in international trade; trade is not only a major factor in forging a country's economic structure, but also contributes to the movement of embodied emissions beyond country borders. This issue is especially relevant from the point of view of carbon accounting policy, as the production-based principle is currently in effect under the Kyoto agreement. The study aims at revealing the interdependence of countries through international trade and its environmental impacts, and how the carbon accounting method plays a crucial role in evaluating a country's environmental performance and its role in climate mitigation. Input-output models are used in the methodology, as they provide an appropriate framework for this kind of environmental accounting; the analysis presents an international comparison of four European countries (Germany, the United Kingdom, the Netherlands, and Hungary) with extended trading activities and carbon emissions. Moving from the production-based approach in climate policy to the consumption-perspective principle and allocation [15] would also help increase the efficiency of emission reduction targets and the evaluation of the sustainability dimension of international trade and its impacts. The results of the study show that the distinction between the two emission allocation approaches matters, from both a global and a local point of view.
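In generic notation (the symbols below are illustrative, not the study's own), the two allocation principles differ only in how trade-embodied emissions are assigned:

```latex
\mathrm{CBA}_c \;=\; \mathrm{PBA}_c \;-\; \mathrm{EEX}_c \;+\; \mathrm{EEM}_c ,
```

where PBA_c is the production-based (territorial) emission total of country c, EEX_c the emissions embodied in its exports, EEM_c the emissions embodied in its imports, and CBA_c the resulting consumption-based total. The two accounts coincide only when embodied exports and imports balance.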
Abstract:
In recent years there has been growing concern about the emission trade balances of countries. This is due to the fact that countries with an open economy are active players in international trade. Trade is not only a major factor in forging a country's economic structure, but contributes to the movement of embodied emissions beyond country borders. This issue is especially relevant from the carbon accounting policy and domestic production perspective, as the production-based principle is employed in the Kyoto agreement. The research described herein was designed to reveal the interdependence of countries on international trade and the corresponding embodied emissions, both at the national and at the sectoral level, and to illustrate the significance of consumption-based emission accounting. It is shown here to what extent consumption-based accounting would change the present system based on production-based accounting and allocation. The relationship between CO2 emissions embodied in exports and those embodied in imports is analysed. International trade can blur the responsibility for the ecological effects of production and consumption, and it can lengthen the link between consumption and its consequences. Input-output models are used in the methodology as they provide an appropriate framework for climate change accounting. The analysis comprises an international comparative study of four European countries (Germany, the United Kingdom, the Netherlands, and Hungary) with extended trading activities and carbon emissions. Moving from a production-based approach in climate policy to a consumption-based principle and allocation approach would help to increase the efficiency of emission reductions and would force countries to rethink their trading activities in order to decrease the environmental load of production activities. The results of this study show that it is important to distinguish between the two emission accounting approaches, both on the global and the local level.
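A minimal sketch of the environmentally extended input-output calculation that such a comparison relies on, with a made-up three-sector economy (all matrices and vectors below are illustrative, not the study's data):

```python
import numpy as np

# Illustrative 3-sector economy (values are made up).
A = np.array([[0.10, 0.05, 0.02],    # technical coefficients (inputs per unit output)
              [0.20, 0.10, 0.08],
              [0.05, 0.15, 0.12]])
f = np.array([0.9, 0.4, 0.2])        # direct CO2 emissions per unit of sectoral output
y_dom = np.array([50.0, 80.0, 60.0]) # final demand met by domestic consumption
y_exp = np.array([20.0, 10.0, 5.0])  # exports

L = np.linalg.inv(np.eye(3) - A)     # Leontief inverse (I - A)^-1

production_based = f @ L @ (y_dom + y_exp)   # territorial emissions
embodied_in_exports = f @ L @ y_exp          # emissions embodied in exports
domestic_demand_driven = f @ L @ y_dom       # emissions driven by domestic final demand
# A full consumption-based account would add emissions embodied in imports,
# which requires the trade partners' coefficients (multi-regional IO data).
print(production_based, embodied_in_exports, domestic_demand_driven)
```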
Abstract:
If we classify the variables in a program into various security levels, then a secure information flow analysis aims to verify statically that information in the program can flow only in ways consistent with the specified security levels. One well-studied approach is to formulate the rules of the secure information flow analysis as a type system. A major trend in recent research focuses on how to accommodate various sophisticated modern language features. However, this approach often leads to overly complicated and restrictive type systems, making them unfit for practical use. Also, problems essential to practical use, such as type inference and error reporting, have received little attention. This dissertation identified and solved major theoretical and practical hurdles to the application of secure information flow. We adopted a minimalist approach to designing our language to ensure a simple, lenient type system. We started with a small, simple imperative language and only added features that we deemed most important for practical use. One language feature we addressed is arrays. Due to the various leaking channels associated with array operations, arrays have received complicated and restrictive typing rules in other secure languages. We presented a novel approach for lenient array operations, which leads to simple and lenient typing of arrays. Type inference is necessary because a user is usually only concerned with the security types of the input/output variables of a program and would like all types for auxiliary variables to be inferred automatically. We presented a type inference algorithm B and proved its soundness and completeness. Moreover, algorithm B stays close to the program and the type system and therefore facilitates informative error reporting, generated in a cascading fashion. Algorithm B and error reporting have been implemented and tested. Lastly, we presented a novel framework for developing applications that ensure user information privacy. In this framework, core computations are defined as code modules that involve input/output data from multiple parties. Secure flow policies are refined incrementally, based on feedback from type checking/inference. Core computations only interact with code modules from the involved parties through well-defined interfaces. All code modules are digitally signed to ensure their authenticity and integrity.
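As a toy illustration of the kind of check such a type system performs (not the dissertation's algorithm B), the rule for an assignment is that the join of the security levels of the variables read must not exceed the level of the variable written:

```python
# Toy two-level lattice: information may flow from "low" to "high" but not back.
LEVELS = {"low": 0, "high": 1}

def check_assignment(target_level, source_levels):
    """Allow x := e only if join(levels of variables in e) <= level of x."""
    join = max((LEVELS[lv] for lv in source_levels), default=LEVELS["low"])
    return join <= LEVELS[target_level]

# secret := public + public  -> allowed (low information flowing up)
print(check_assignment("high", ["low", "low"]))   # True
# public := secret           -> rejected (explicit flow from high to low)
print(check_assignment("low", ["high"]))          # False
```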
Abstract:
This research aimed at developing a research framework for the emerging field of enterprise systems engineering (ESE). The framework consists of an ESE definition, an ESE classification scheme, and an ESE process. This study views an enterprise as a system that creates value for its customers; the framework was therefore developed using systems theory and IDEF methodologies. The study defines ESE as an engineering discipline that develops and applies systems theory and engineering techniques to the specification, analysis, design, and implementation of an enterprise over its life cycle. The proposed ESE classification scheme breaks an enterprise system down into four elements: work, resources, decision, and information. Each enterprise element is specified with four system facets: strategy, competency, capacity, and structure. Each element-facet combination is subject to the engineering process of specification, analysis, design, and implementation, to achieve its pre-specified performance with respect to cost, time, quality, and benefit to the enterprise. The framework is intended for identifying research voids in the ESE discipline. It also helps to apply engineering and systems tools to this emerging field, harnesses the relationships among various enterprise aspects, and bridges the gap between engineering and management practices in an enterprise. The proposed ESE process is generic. It consists of a hierarchy of engineering activities presented in an IDEF0 model, with each activity defined by its input, output, constraints, and mechanisms. The output of an ESE effort can be a partial or whole enterprise system design for its physical, managerial, and/or informational layers. The proposed ESE process is applicable to a new enterprise system design or to an engineering change in an existing system. The long-term goal of this study is the development of a scientific foundation for ESE research and development.
Abstract:
This dissertation comprises three individual chapters examining different explanatory variables that affect firm performance. Chapter Two proposes an additional determinant of firm survival. Based on a detailed examination of firm survival in the British automobile industry between 1895 and 1970, we conclude that a firm's selection of submarket (defined by quality level) influenced survival. In contrast to findings for the US automobile industry, there is no evidence of first-mover advantage in the market as a whole. However, we do find evidence of first-mover advantage after conditioning on submarket choice. Chapter Three examines the effects of product line expansion on firm performance in terms of survival time. Based on a detailed examination of firm survival time in the British automobile industry between 1895 and 1970, we find that diversification exerts a positive effect on firm survival. Furthermore, our findings support the literature with respect to the impacts of submarket types, pre-entry experience, and timing of entry on firm survival time. Chapter Four examines corporate diversification in U.S. manufacturing and service firms. We develop measures of how related a firm's diverse activities are, using input-output data and the NAICS classification to construct indexes of "vertical relatedness" and "complementarity". Strong relationships between these two measures are found. We use profitability and excess value as measures of firm performance. Econometric analysis reveals that there is no relationship between the degree of relatedness of diversification and firm performance for the study period.
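One way such an input-output relatedness measure can be sketched (a hedged illustration in the spirit of the vertical-relatedness literature, not necessarily the exact index used in Chapter Four) is to average, for a pair of industries, the requirement of industry i's output per dollar of industry j's output and vice versa:

```python
import numpy as np

# Hypothetical direct-requirements matrix: entry (i, j) is the dollar value of
# industry i's output needed to produce one dollar of industry j's output.
A = np.array([[0.02, 0.30, 0.01],
              [0.25, 0.05, 0.10],
              [0.03, 0.08, 0.04]])

def vertical_relatedness(A, i, j):
    """Symmetric relatedness of industries i and j from the requirements matrix."""
    return 0.5 * (A[i, j] + A[j, i])

print(vertical_relatedness(A, 0, 1))  # 0.275: strongly vertically related pair
print(vertical_relatedness(A, 0, 2))  # 0.02:  weakly related pair
```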
Abstract:
The purpose of this study is to explore the accuracy of the Input-Output model in quantifying the impacts of the 2007 economic crisis on a local tourism industry and economy. Though the model has been used in tourism impact analysis, its estimation accuracy is rarely verified empirically. The Metro Orlando area in Florida is investigated as an empirical case, and the negative change in visitor expenditure between 2007 and 2008 is taken as the direct shock. The total impacts are assessed in terms of output and employment and are compared with the actual data. The study finds surprisingly large discrepancies between the estimated and actual results, and the Input-Output model appears to overestimate the negative impacts. By investigating local economic activity during the study period, the study makes some exploratory efforts to explain these discrepancies. Theoretical and practical implications are then suggested.
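A minimal sketch of the type of calculation whose accuracy is at issue: a direct expenditure shock propagated through the Leontief inverse to give total output and employment impacts. All numbers below are illustrative, not the Metro Orlando data:

```python
import numpy as np

# Illustrative 3-sector regional economy (made-up coefficients).
A = np.array([[0.15, 0.10, 0.05],
              [0.08, 0.20, 0.10],
              [0.05, 0.07, 0.12]])             # technical coefficients
emp_per_output = np.array([12.0, 8.0, 15.0])   # jobs per $1M of output

# Direct shock: drop in visitor expenditure allocated to the sectors ($M).
delta_final_demand = np.array([-40.0, -15.0, -5.0])

L = np.linalg.inv(np.eye(3) - A)           # Leontief inverse
delta_output = L @ delta_final_demand      # total (direct + indirect) output change
delta_employment = emp_per_output * delta_output

print("output impact ($M):", delta_output.sum())
print("employment impact (jobs):", delta_employment.sum())
```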
Abstract:
Demand-side growth accounting studies the contributions of the aggregate demand components to Gross Domestic Product (GDP) growth. Traditionally, international and national organizations use the traditional method for calculating such contributions. However, this method does not take into account the imports induced by the various components of aggregate demand. As alternatives, other studies do consider this effect: the alternative method proposed by Lara (2013), the attribution method proposed by Kranendonk and Verbruggen (2005) and Hoekstra and van der Helm (2010), and the Sraffian supermultiplier method of Freitas and Dweck (2013). This work summarizes these methods, showing the similarities and differences between them. In addition, to contribute to the study of the subject, a "method of distribution of imports" was developed, which allocates imports to the various components of aggregate demand using the information contained in the input-output matrices and the supply and use tables. The contributions to growth of the macroeconomic aggregates for Brazil from 2001 to 2009 were calculated with the distribution method and compared with those from the traditional method, examining the reasons for the differences in contributions. All the methods presented in this work were then compared in terms of the calculated contributions to growth of the components of aggregate demand and of the domestic and external sectors. It was found that the methods existing in the literature are not sufficient to deal with this question, and, given the alternatives for growth contributions presented throughout this work, the distribution method is believed to provide the best estimates of contributions by aggregate demand component. In particular, its main advantage over the other methods is the breakdown of the contribution of imports by aggregate demand component, which allows the contribution of each component to GDP growth to be analysed. This type of analysis helps in studying the growth pattern of the Brazilian economy, not only from a theoretical point of view but also empirically, and provides a basis for economic policy decisions.
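In generic notation (a sketch of the idea, not necessarily the exact formulas of the cited methods), the traditional contribution of demand component i to GDP growth and an import-adjusted contribution of the kind the distribution method aims at differ in whether the imports induced by that component are netted out:

```latex
\text{contribution}_i^{\text{trad}} = \frac{\Delta D_i}{Y_{t-1}},
\qquad
\text{contribution}_i^{\text{dist}} = \frac{\Delta D_i - \Delta M_i}{Y_{t-1}},
```

where D_i is the i-th component of aggregate demand, M_i the imports allocated to that component via the input-output and supply-use tables, and Y is GDP; in the traditional method total imports are instead subtracted as a single separate item.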
Abstract:
In this work we explore optimising parameters of a physical circuit model relative to input/output measurements, using the Dallas Rangemaster Treble Booster as a case study. A hybrid metaheuristic/gradient descent algorithm is implemented, where the initial parameter sets for the optimisation are informed by nominal values from schematics and datasheets. Sensitivity analysis is used to screen parameters, which informs a study of the optimisation algorithm against model complexity by fixing parameters. The results of the optimisation show a significant increase in the accuracy of model behaviour, but also highlight several key issues regarding the recovery of parameters.
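A minimal sketch of a hybrid global/local parameter fit of the kind described, using SciPy's differential evolution followed by gradient-based refinement. The "circuit model" below is a stand-in first-order RC response, not a Rangemaster model, and the parameter names and bounds are hypothetical:

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1e-2, 500)

def circuit_response(params, t):
    """Stand-in component model: first-order RC step response with a gain."""
    gain, r, c = params
    return gain * (1.0 - np.exp(-t / (r * c)))

# Synthetic "measured" input/output data around nominal component values.
true_params = (2.0, 1e3, 1e-6)
measured = circuit_response(true_params, t) + 0.01 * rng.standard_normal(t.size)

def loss(params):
    return np.mean((circuit_response(params, t) - measured) ** 2)

# Bounds informed by nominal schematic values plus component tolerances.
bounds = [(0.5, 5.0), (5e2, 2e3), (5e-7, 2e-6)]

coarse = differential_evolution(loss, bounds, seed=0, tol=1e-8)       # metaheuristic stage
refined = minimize(loss, coarse.x, method="L-BFGS-B", bounds=bounds)  # gradient stage
print("estimated gain, R, C:", refined.x)
```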
Abstract:
In this study, the interactions between potential hierarchical value chains in the production structure and industry-level productivity growth are investigated. We applied generalized Chenery-Watanabe heuristics for matrix linearity maximization to triangulate the input-output incidence matrices of both Japan and the Republic of Korea, identifying the potential directed flow of value spanning the industrial sectors of the basic (disaggregated) industry classifications of both countries. Sector-specific productivity growth was measured using the Törnqvist index, based on the 2000-2005 linked input-output tables for Japan and Korea.
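For reference, a Törnqvist-type TFP growth measure of the kind used here can be written (in generic notation, not necessarily the paper's exact variant) as output growth minus cost-share-weighted input growth:

```latex
\Delta \ln \mathrm{TFP}_{j} \;=\; \ln\frac{Y_{j,t}}{Y_{j,t-1}}
\;-\; \sum_{i} \tfrac{1}{2}\bigl(s_{ij,t} + s_{ij,t-1}\bigr)\, \ln\frac{X_{ij,t}}{X_{ij,t-1}},
```

where Y_{j,t} is sector j's output, X_{ij,t} its use of input i, and s_{ij,t} the cost share of input i in sector j at time t.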