918 results for Input-Output Modelling
Abstract:
Master's dissertation in Bioinformatics
Abstract:
PhD in Economics.
Abstract:
In previous work we have applied the environmental multi-region input-output (MRIO) method proposed by Turner et al. (2007) to examine the 'CO2 trade balance' between Scotland and the Rest of the UK. In McGregor et al. (2008) we construct an interregional economy-environment input-output (IO) and social accounting matrix (SAM) framework that allows us to investigate methods of attributing responsibility for pollution generation in the UK at the regional level. This facilitates analysis of the nature and significance of environmental spillovers and the existence of an environmental 'trade balance' between regions. While significant data problems mean that the quantitative results of this study should be regarded as provisional, we argue that the use of such a framework allows us to begin to consider questions such as the extent to which a devolved authority like the Scottish Parliament can and should be responsible for contributing to national targets for reductions in emissions levels (e.g. the UK commitment to the Kyoto Protocol) when it is limited in how it can control emissions, particularly with respect to changes in demand elsewhere in the UK. However, while such analysis is useful for accounting for pollution flows in the single time period that the accounts relate to, it is limited when the focus is on modelling the impacts of any marginal change in activity. This is because a conventional demand-driven IO model assumes an entirely passive supply side in the economy (i.e. all supply is infinitely elastic) and is further restricted by the assumption of universal Leontief (fixed proportions) technology implied by the use of the A and multiplier matrices. In this paper we argue that where analysis of marginal changes in activity is required, a more flexible interregional computable general equilibrium (CGE) approach, which models behavioural relationships in a more realistic and theory-consistent manner, is more appropriate and informative. To illustrate our analysis, we compare the results of introducing a positive demand stimulus in the UK economy using both IO and CGE interregional models of Scotland and the rest of the UK. In the case of the latter, we demonstrate how more theory-consistent modelling of both demand- and supply-side behaviour at the regional and national levels affects model results, including the impact on the interregional CO2 'trade balance'.
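To make the accounting framework concrete, the following minimal Python sketch (illustrative only; the two-region coefficient matrix, final demands and emission intensities are hypothetical, not taken from the study) shows how an environmentally extended two-region Leontief model attributes CO2 under production- and consumption-based principles and forms the interregional CO2 'trade balance':

    import numpy as np

    # One aggregate sector per region: 0 = Scotland, 1 = Rest of the UK (RUK).
    # All numbers are hypothetical, for illustration only.
    A = np.array([[0.20, 0.05],      # intermediate use coefficients (the A matrix)
                  [0.15, 0.25]])
    F = np.array([[ 60.0,  20.0],    # final demand: column r = demand of region r,
                  [ 30.0, 300.0]])   # row i = goods supplied by region i
    e = np.array([0.8, 0.5])         # CO2 intensity (kt CO2 per unit of output)

    L = np.linalg.inv(np.eye(2) - A)          # Leontief inverse
    x = L @ F.sum(axis=1)                     # gross output by producing region

    production_based = e * x                  # emissions generated within each region
    # Consumption-based: emissions anywhere that are driven by region r's final demand
    consumption_based = np.array([e @ L @ F[:, r] for r in range(2)])

    # CO2 'trade balance' of region 0 (emissions embodied in exports minus imports)
    co2_trade_balance = production_based[0] - consumption_based[0]

    print("production-based :", production_based.round(1))
    print("consumption-based:", consumption_based.round(1))
    print("Scotland CO2 trade balance:", round(co2_trade_balance, 1))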
Abstract:
In this paper a Social Accounting Matrix (SAM) is constructed for Libya for the year 2000. The procedure was divided into three steps. First, a macro SAM was constructed to consistently capture and represent the macroeconomic framework of the Libyan economy in 2000. Second, that macro SAM was disaggregated into a micro SAM incorporating the accounts for individual activities, primary factors and the main economic institutions. The SAM obtained in this way was not balanced, so in the final step we balanced it using a cross-entropy procedure in the General Algebraic Modelling System (GAMS). This SAM integrates national income, input-output, flow-of-funds, and foreign trade statistics into a comprehensive and consistent dataset. The lack of coherent time-series data for Libya is a serious obstacle for applied research that uses econometric analysis. Our main intention in constructing this SAM has been to provide benchmark data for economy-wide analysis using CGE modelling for Libya.
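The balancing step can be illustrated with a small cross-entropy programme. The sketch below is a simplified stand-in for the authors' GAMS procedure: it works directly on a hypothetical 3-account flow matrix (rather than on column coefficients) and uses scipy's SLSQP solver to minimize the cross-entropy distance from the unbalanced prior, subject to each account's receipts equalling its expenditures:

    import numpy as np
    from scipy.optimize import minimize

    # Hypothetical unbalanced 3-account micro SAM (rows = receipts, columns = payments).
    T0 = np.array([[ 0.0, 40.0, 12.0],
                   [35.0,  0.0, 20.0],
                   [15.0, 10.0,  0.0]])
    mask = T0 > 0                      # keep structural zeros at zero

    def unpack(v):
        T = np.zeros_like(T0)
        T[mask] = v
        return T

    def cross_entropy(v):              # distance from the prior (unbalanced) SAM
        return np.sum(v * np.log(v / T0[mask]))

    def imbalance(v):                  # receipts minus expenditures for each account
        T = unpack(v)
        gap = T.sum(axis=1) - T.sum(axis=0)
        return gap[:-1]                # the last gap is implied by the others

    res = minimize(cross_entropy, T0[mask], method="SLSQP",
                   constraints=[{"type": "eq", "fun": imbalance}],
                   bounds=[(1e-9, None)] * mask.sum())
    T = unpack(res.x)
    print(np.round(T, 2))
    print("row sums   :", T.sum(axis=1).round(2))
    print("column sums:", T.sum(axis=0).round(2))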
Abstract:
The application of multi-region environmental input-output (IO) analysis to the problem of accounting for emissions generation (and/or resource use) under different accounting principles has become increasingly common, in the ecological and environmental economics literature in particular, with applications at both the international and the interregional (subnational) level. However, while environmental IO analysis is invaluable in accounting for pollution flows in the single time period that the accounts relate to, it is limited when the focus is on modelling the impacts of any marginal change in activity. This is because a conventional demand-driven IO model assumes an entirely passive supply side in the economy (i.e. all supply is infinitely elastic) and is further restricted by the assumption of universal Leontief (fixed proportions) technology implied by the use of the A and multiplier matrices. Where analysis of marginal changes in activity is required, extension from an IO accounting framework to a more flexible interregional computable general equilibrium (CGE) approach, where behavioural relationships can be modelled in a more realistic and theory-consistent manner, is appropriate. Our argument is illustrated by comparing the results of introducing a positive demand stimulus in the UK economy using IO and CGE interregional models of Scotland and the rest of the UK. In the case of the latter, we demonstrate how more theory-consistent modelling of both demand- and supply-side behaviour at the regional and national levels affects model results, including the impact on the interregional CO2 'trade balance'.
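The restriction criticized here can be stated in one line: in a demand-driven IO model a final demand stimulus Δf propagates as Δx = (I - A)⁻¹Δf with fixed coefficients and a fully passive supply side. A minimal Python sketch with hypothetical two-region numbers:

    import numpy as np

    # Hypothetical two-region IO system (Scotland, RUK), one sector each.
    A = np.array([[0.20, 0.05],
                  [0.15, 0.25]])
    e = np.array([0.8, 0.5])                   # CO2 intensities (kt per unit output)
    L = np.linalg.inv(np.eye(2) - A)           # fixed-technology Leontief inverse

    d_f = np.array([0.0, 10.0])                # positive demand stimulus in RUK only
    d_x = L @ d_f                              # output response: supply is fully passive
    d_co2 = e * d_x                            # extra emissions, by producing region

    print("output change by region:", d_x.round(2))
    print("CO2 change by region   :", d_co2.round(2))
    # A CGE treatment would instead let prices, wages and factor supplies adjust,
    # so the same stimulus generally yields smaller and differently distributed effects.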
Abstract:
Part of the local economic impact of a major sporting event comes from the associated temporary tourism expenditures. Typically, demand-driven Input-Output (IO) methods are used to quantify the impacts of such expenditures. However, IO modelling has specific weaknesses when measuring temporary tourism impacts; particular problems lie in its treatment of factor supplies and its lack of dynamics. Recent work argues that Computable General Equilibrium (CGE) analysis is more appropriate, and this approach has been widely applied. Neglected in this literature, however, is an understanding of the role that behavioural characteristics and factor supply assumptions play in determining the economic impact of tourist expenditures, particularly where expenditures are temporary (i.e. of limited duration) and anticipated (i.e. known in advance). This paper uses a CGE model for Scotland in which agents can have myopic or forward-looking behaviour and shows how these alternative specifications affect the timing and scale of the economic impacts from anticipated and temporary tourism expenditure. The tourism shock analysed is of a scale expected for the Commonwealth Games to be held in Glasgow in 2014. The model shows how "pre-shock" and "legacy" effects – impacts before and after the shock – arise, and their quantitative importance. Using the forward-looking model, the paper calculates the optimal degree of pre-announcement.
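A toy two-agent example (not the paper's CGE model; all numbers are hypothetical) illustrates why anticipation matters: a permanent-income consumer, standing in for the forward-looking agents, learns at period 0 about a temporary income boost in period 5 and raises consumption immediately, generating "pre-shock" and "legacy" effects, while a myopic consumer responds only when the boost arrives:

    # Toy illustration: temporary, anticipated income boost at period 5, announced at 0.
    T = 12
    r = 0.05
    income = [100.0] * T
    income[5] += 20.0                          # temporary, anticipated shock

    myopic = income[:]                         # myopic consumption tracks current income

    # Forward-looking: constant consumption with the same present value as lifetime income.
    pv = sum(y_t / (1 + r) ** t for t, y_t in enumerate(income))
    x = 1.0 / (1 + r)
    c_smooth = pv * (1 - x) / (1 - x ** T)
    forward_looking = [round(c_smooth, 2)] * T

    for t in range(T):
        print(t, round(myopic[t], 2), forward_looking[t])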
Abstract:
In the economic literature, information deficiencies and computational complexities have traditionally been solved through the aggregation of agents and institutions. In input-output modelling, researchers have been interested in the aggregation problem since the beginning of the 1950s. Extending the conventional input-output aggregation approach to social accounting matrix (SAM) models may help to identify the effects caused by the information problems and data deficiencies that usually appear in the SAM framework. This paper develops the theory of aggregation and applies it to the social accounting matrix model of multipliers. First, we define the concept of linear aggregation in a SAM database context. Second, we define the aggregated partitioned matrices of multipliers which are characteristic of the SAM approach. Third, we extend the analysis to other related concepts, such as aggregation bias and consistency in aggregation. Finally, we provide an illustrative example that shows the effects of aggregating a social accounting matrix model.
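The core constructions can be made concrete with a small numerical example. In the sketch below (hypothetical 4-account SAM, Python), G is the 0-1 grouping matrix that defines the linear aggregation, M and M_agg are the detailed and aggregated multiplier matrices, and the aggregation bias is the gap between aggregating the detailed responses and responding with the aggregated model:

    import numpy as np

    # Hypothetical 4-account SAM flow matrix and account totals (totals exceed
    # column sums because part of each account's income is exogenous).
    T = np.array([[ 0., 10.,  5., 20.],
                  [ 8.,  0., 12., 15.],
                  [ 6.,  9.,  0., 10.],
                  [ 4.,  7.,  8.,  0.]])
    y = np.array([60., 70., 50., 55.])
    A = T / y                                  # average expenditure propensities
    M = np.linalg.inv(np.eye(4) - A)           # detailed SAM multiplier matrix

    # Linear aggregation: group accounts {0,1} and {2,3} with a 2x4 grouping matrix G.
    G = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.]])
    T_agg = G @ T @ G.T                        # aggregated flows
    y_agg = G @ y                              # aggregated totals
    A_agg = T_agg / y_agg
    M_agg = np.linalg.inv(np.eye(2) - A_agg)   # multipliers of the aggregated model

    # Aggregation bias for an exogenous injection f into the detailed accounts:
    f = np.array([10., 0., 0., 0.])
    bias = G @ (M @ f) - M_agg @ (G @ f)       # zero only under consistent aggregation
    print("aggregated detailed response:", (G @ M @ f).round(3))
    print("response of aggregated model:", (M_agg @ (G @ f)).round(3))
    print("aggregation bias            :", bias.round(3))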
Abstract:
In recent years, kernel methods have proved to be very powerful tools in many application domains in general and in remote sensing image classification in particular. The special characteristics of remote sensing images (high dimension, few labeled samples and different noise sources) are efficiently dealt with by kernel machines. In this paper, we propose the use of structured output learning to improve kernel-based remote sensing image classification. Structured output learning is concerned with the design of machine learning algorithms that not only implement an input-output mapping, but also take into account the relations between output labels, thus generalizing unstructured kernel methods. We analyze the framework and introduce it to the remote sensing community. Output similarity is encoded into SVM classifiers by modifying the model loss function and the kernel function, either independently or jointly. Experiments on a very high resolution (VHR) image classification problem show promising results and open a wide field of research with structured output kernel methods.
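One very schematic way to see how output relations can enter a kernel machine (an illustrative stand-in, not the authors' exact SVM formulation) is a joint input-output kernel K((x, y), (x', y')) = k(x, x') * S[y, y'], where S is a hypothetical class-similarity matrix, trained here with a kernel structured perceptron on toy data:

    import numpy as np

    rng = np.random.default_rng(0)
    # Hypothetical 3-class toy data (think of three land-cover classes).
    X = np.vstack([rng.normal(m, 0.5, size=(30, 2)) for m in ([0, 0], [2, 0], [2, 2])])
    y = np.repeat([0, 1, 2], 30)

    S = np.array([[1.0, 0.6, 0.1],     # hypothetical output-similarity matrix
                  [0.6, 1.0, 0.3],     # (e.g. classes 0 and 1 are spectrally close)
                  [0.1, 0.3, 1.0]])

    def k_x(a, b):                     # RBF input kernel
        return np.exp(-np.sum((a - b) ** 2) / 2.0)

    def score(x, c, sv):               # sum over stored (x_i, label, sign) updates
        return sum(s * k_x(xi, x) * S[yi, c] for xi, yi, s in sv)

    # Kernel structured perceptron: on a mistake, push up the joint feature of the
    # true label and push down that of the predicted label.
    sv = []
    for _ in range(10):                # training epochs
        for xi, yi in zip(X, y):
            pred = max(range(3), key=lambda c: score(xi, c, sv))
            if pred != yi:
                sv.append((xi, yi, +1.0))
                sv.append((xi, pred, -1.0))

    pred = [max(range(3), key=lambda c: score(xi, c, sv)) for xi in X]
    print("training accuracy:", np.mean(np.array(pred) == y))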
Abstract:
What determines which inputs are initially considered and eventually adopted in the production of new or improved goods? Why are some inputs much more prominent than others? We model the evolution of input linkages as a process where new producers first search for potentially useful inputs and then decide which ones to adopt. A new product initially draws a set of 'essential suppliers'. The search stage is then confined to the network neighborhood of the latter, i.e., to the inputs used by the essential suppliers. The adoption decision is driven by a tradeoff between the benefits accruing from input variety and the costs of input adoption. This has important implications for the number of forward linkages that a product (input variety) develops over time. Input diffusion is fostered by network centrality: an input that is initially represented in many network neighborhoods is subsequently more likely to be adopted. This mechanism also delivers a power-law distribution of forward linkages. Our predictions continue to hold when varieties are aggregated into sectors. We can thus test them, using detailed sectoral US input-output tables. We show that initial network proximity of a sector in 1967 significantly increases the likelihood of adoption throughout the subsequent four decades. The same is true for rapid productivity growth in an input-producing sector. Our empirical results highlight two conditions for new products to become central nodes: initial network proximity to prospective adopters, and technological progress that reduces their relative price. Semiconductors met both conditions.
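The search-and-adopt mechanism can be simulated in a few lines. In the hedged sketch below (all parameters are hypothetical), each entering product draws two essential suppliers, searches their input neighborhoods, and adopts each candidate input with a fixed probability; inputs that appear in many neighborhoods are found more often, which produces the heavy-tailed distribution of forward linkages the paper emphasizes:

    import random
    from collections import Counter

    random.seed(1)
    # inputs_of[p] = set of inputs used by product p; start from a small seed economy.
    inputs_of = {p: {q for q in range(5) if q != p} for p in range(5)}

    for new in range(5, 2000):                            # arrival of new products
        existing = list(inputs_of)
        essential = random.sample(existing, k=2)          # essential suppliers
        neighborhood = set().union(*(inputs_of[e] for e in essential)) | set(essential)
        adopted = set(essential)                          # essential inputs are kept
        for candidate in neighborhood:
            # adoption trades off variety benefits against adoption costs; here this
            # is collapsed into a fixed adoption probability for searched candidates
            if random.random() < 0.3:
                adopted.add(candidate)
        inputs_of[new] = adopted

    # Forward linkages: how many products use each input.
    forward = Counter(i for inputs in inputs_of.values() for i in inputs)
    counts = sorted(forward.values(), reverse=True)
    print("top 10 forward-linkage counts:", counts[:10])
    print("median forward-linkage count :", counts[len(counts) // 2])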
Abstract:
The dissertation proposes two control strategies, covering trajectory planning and vibration suppression, for a kinematically redundant serial-parallel robot machine, with the aim of attaining satisfactory machining performance. For a given prescribed trajectory of the robot's end-effector in Cartesian space, a set of trajectories in the robot's joint space is generated based on the best stiffness performance of the robot along the prescribed trajectory. To construct the required system-wide analytical stiffness model for the serial-parallel robot machine, a variant of the virtual joint method (VJM) is proposed in the dissertation. The modified method is an evolution of Gosselin's lumped model that can account for the deformations of a flexible link in more directions. The effectiveness of this VJM variant is validated by comparing the computed stiffness results for a flexible link with those of a matrix structural analysis (MSA) method. The comparison shows that the numerical results from both methods for an individual flexible beam are almost identical, which, in some sense, provides mutual validation. The most prominent advantage of the presented VJM variant over the MSA method is that it can be applied to a flexible structure system with complicated kinematics formed by flexible serial links and joints. Moreover, by combining the VJM variant with the virtual work principle, a system-wide analytical stiffness model can easily be obtained for mechanisms with both serial and parallel kinematics. In the dissertation, a system-wide stiffness model of a kinematically redundant serial-parallel robot machine is constructed by integrating the VJM variant and the virtual work principle, and numerical results of its stiffness performance are reported. For a kinematically redundant robot, to generate a set of feasible joint trajectories for a prescribed trajectory of its end-effector, the system-wide stiffness performance is taken as the constraint in the joint trajectory planning. For a prescribed location of the end-effector, the robot permits an infinite number of inverse kinematic solutions, which consequently yield an infinite range of stiffness performance. Therefore, a differential evolution (DE) algorithm, in which the positions of the redundant joints are taken as input variables, is employed to search for the best stiffness performance of the robot. Numerical results of the generated joint trajectories are given for a kinematically redundant serial-parallel robot machine, the IWR (Intersector Welding/Cutting Robot), for a particular prescribed trajectory of its end-effector. The numerical results show that the joint trajectories generated based on the stiffness optimization are feasible for realization in the control system, since they are acceptably smooth. The results imply that the stiffness performance of the robot machine varies smoothly with the kinematic configuration in the neighbourhood of its best stiffness performance. To suppress the vibration of the robot machine due to the varying cutting force during the machining process, the dissertation proposes a feedforward control strategy constructed from the derived inverse dynamics model of the target system. The effectiveness of applying such feedforward control for vibration suppression has been validated for a parallel manipulator in a software environment.
An experimental study of such feedforward control is also included in the dissertation. The difficulty of modelling the actual system, owing to unknown components in its dynamics, is noted. As a solution, a back-propagation (BP) neural network is proposed for identifying the unknown components of the dynamics model of the target system. To train such a BP neural network, a modified Levenberg-Marquardt algorithm that can utilize an experimental input-output data set of the entire dynamic system is introduced in the dissertation. The BP neural network and the modified Levenberg-Marquardt algorithm are validated by a sinusoidal output approximation, a second-order system parameter estimation, and a friction model estimation for a parallel manipulator, which represent three different application aspects of the method.
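The stiffness-constrained trajectory planning step can be caricatured as follows (a minimal sketch: the compliance function and kinematics below are placeholders, not the dissertation's VJM-based model): for each point of the prescribed end-effector path, a differential evolution search over the redundant joint variable picks the configuration with the lowest compliance, and the resulting joint values vary smoothly along the path:

    import numpy as np
    from scipy.optimize import differential_evolution

    def compliance_index(redundant_q, target):
        # Placeholder for the system-wide stiffness model: a scalar compliance
        # (lower = stiffer) that varies smoothly with the redundant joint value
        # and the prescribed end-effector target.
        return 1.0 + 0.5 * np.sin(redundant_q[0] + target[0]) ** 2 + 0.1 * target[1] ** 2

    # Prescribed end-effector trajectory (hypothetical, 20 points in the plane).
    path = [np.array([0.1 * t, 0.05 * t]) for t in range(20)]

    best_q = []
    for target in path:
        res = differential_evolution(compliance_index, bounds=[(-np.pi, np.pi)],
                                     args=(target,), seed=0, tol=1e-6)
        best_q.append(res.x[0])

    # The redundant-joint trajectory varies smoothly along the path, mirroring the
    # dissertation's observation about the stiffness landscape near its optimum.
    print(np.round(best_q, 3))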
Abstract:
The current study discusses new opportunities for secure ground-to-satellite communications using shaped femtosecond pulses that induce spatial hole burning in the atmosphere, for efficient communications with data encoded within super-continua generated by femtosecond pulses. Refractive index variation across the different layers in the atmosphere may be modelled by assuming that the upper strata of the atmosphere and the troposphere behave as layered composite amorphous dielectric networks composed of resistors and capacitors with different time constants in each layer. Input-output expressions for the dynamics of the networks in the frequency domain provide the transmission characteristics of the propagation medium. Femtosecond pulse shaping may be used to optimize the pulse phase-front and spectral composition across the different layers in the atmosphere. A generic procedure based on evolutionary algorithms to perform the pulse shaping is proposed. In contrast to alternative procedures that would require ab initio modelling and calculation of the propagation constant for the pulse through the atmosphere, the proposed approach is adaptive, compensating for refractive index variations along the column of air between the transmitter and receiver.
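A minimal frequency-domain sketch of the layered RC picture (all time constants and spectral parameters hypothetical): each layer contributes a first-order response, the cascade gives the input-output transfer function of the column of air, and the received spectrum is the transmitted spectrum filtered by that transfer function, which is what a pulse shaper (driven, in the study, by an evolutionary algorithm) would compensate:

    import numpy as np

    # Hypothetical time constants (s) for a stack of atmospheric layers, each treated
    # as a first-order RC section: H_layer(w) = 1 / (1 + j*w*tau).
    time_constants = [5e-15, 2e-15, 8e-16, 3e-16]

    freq = np.linspace(1e12, 1e15, 2000)              # optical-frequency grid (Hz)
    w = 2 * np.pi * freq
    H = np.ones_like(w, dtype=complex)
    for tau in time_constants:
        H *= 1.0 / (1.0 + 1j * w * tau)               # cascade of layer responses

    # Transmitted femtosecond pulse: Gaussian spectrum centred at 375 THz (hypothetical).
    spectrum_in = np.exp(-((freq - 3.75e14) / 5e13) ** 2)
    spectrum_out = H * spectrum_in                    # frequency-domain input-output relation

    # A pulse shaper would pre-distort spectrum_in (e.g. apply the phase -angle(H))
    # so that the received pulse arrives closer to the ideal shape; an evolutionary
    # algorithm can search over the shaper settings without an ab initio model.
    print("peak attenuation:", float(np.abs(spectrum_out).max() / spectrum_in.max()))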
Abstract:
Existing distributed hydrologic models are too complex and computationally demanding for use as rapid-forecasting policy-decision tools, or even as classroom educational tools. In addition, platform dependence, specific input/output data structures and non-dynamic data interaction with pluggable software components inside existing proprietary frameworks restrict these models to specialized user groups. RWater is a web-based hydrologic analysis and modeling framework that utilizes the commonly used R software within the HUBzero cyberinfrastructure of Purdue University. RWater is designed as an integrated framework for distributed hydrologic simulation, along with subsequent parameter optimization and visualization schemes. RWater provides a platform-independent web-based interface, flexible data-integration capacity, grid-based simulations, and user extensibility. RWater uses RStudio to simulate hydrologic processes on raster-based data obtained through conventional GIS pre-processing. The program integrates the Shuffled Complex Evolution (SCE) algorithm for parameter optimization. Moreover, RWater enables users to produce descriptive statistics and visualizations of the outputs at different temporal resolutions. The applicability of RWater will be demonstrated by application to two watersheds in Indiana for multiple rainfall events.
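The simulate-then-calibrate workflow that RWater automates can be mimicked with a toy lumped model (purely illustrative: the real framework runs R scripts on gridded data and uses the SCE algorithm, for which a generic bounded optimizer stands in here):

    import numpy as np
    from scipy.optimize import minimize_scalar

    rng = np.random.default_rng(42)
    rain = rng.gamma(shape=0.3, scale=10.0, size=200)      # hypothetical rainfall (mm)

    def linear_reservoir(k, rainfall):
        # Lumped toy model: storage S drains at rate k*S each time step.
        S, flow = 0.0, []
        for p in rainfall:
            S += p
            q = k * S
            S -= q
            flow.append(q)
        return np.array(flow)

    # Synthetic "observed" flow generated with k = 0.35 plus noise.
    observed = linear_reservoir(0.35, rain) + rng.normal(0, 0.3, size=rain.size)

    # Calibration step: minimize the simulation error over the recession coefficient.
    res = minimize_scalar(lambda k: np.mean((linear_reservoir(k, rain) - observed) ** 2),
                          bounds=(0.01, 0.99), method="bounded")
    print("calibrated recession coefficient:", round(res.x, 3))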
Abstract:
This paper presents a hybrid approach, mixing the time and frequency domains, for transmission line modelling. The proposed methodology handles the steady fundamental signal mixed with fast and slow transients, including impulsive and oscillatory behaviour. A transmission line model is developed based on a lumped-element representation and state-space techniques. The proposed methodology is an easy and practical procedure to model a three-phase transmission line directly in the time domain, without the explicit use of inverse transforms. It takes into account the frequency-dependent parameters of the line, considering the soil and skin effects; in order to include these effects in the state matrices, a fitting method is applied. Furthermore, the accuracy of the developed model is verified in the frequency domain by a simple methodology based on the line's distributed parameters and the transfer function relating the input/output signals of the lumped-parameter representation. In addition, this article proposes the use of a fast and robust analytic integration procedure to solve the state equations, enabling transient and steady-state simulations. The results are compared with those obtained with the commercial software Microtran (EMTP) for a three-phase transmission line typical of the Brazilian transmission system.
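A single-phase, frequency-independent caricature of the lumped-element state-space idea is sketched below (hypothetical per-section parameters; the fitting of frequency-dependent soil and skin effects is omitted). The analytic integration of the state equations corresponds here to the exact matrix-exponential discretization x[k+1] = exp(A*dt) x[k] + A⁻¹(exp(A*dt) - I) B u[k]:

    import numpy as np
    from scipy.linalg import expm

    # Single-phase line split into N RL-series / C-shunt sections (hypothetical values).
    N, R, L, C = 3, 0.05, 1.0e-3, 1.0e-8        # per-section parameters
    n = 2 * N                                   # states: N inductor currents, N capacitor voltages
    A = np.zeros((n, n))
    B = np.zeros((n, 1))
    for k in range(N):
        i_k, v_k = k, N + k
        # L di_k/dt = v_upstream - R i_k - v_k (v_upstream of section 0 is the source u)
        A[i_k, i_k] = -R / L
        A[i_k, v_k] = -1.0 / L
        if k == 0:
            B[i_k, 0] = 1.0 / L
        else:
            A[i_k, N + k - 1] = 1.0 / L
        # C dv_k/dt = i_k - i_{k+1} (open receiving end beyond the last section)
        A[v_k, i_k] = 1.0 / C
        if k + 1 < N:
            A[v_k, i_k + 1] = -1.0 / C

    dt = 1.0e-7
    Ad = expm(A * dt)                                   # exact state transition
    Bd = np.linalg.solve(A, (Ad - np.eye(n)) @ B)       # exact input term

    # Step response of the receiving-end voltage to a 1 V source.
    x = np.zeros((n, 1))
    u = np.array([[1.0]])
    for _ in range(400):
        x = Ad @ x + Bd @ u
    print("receiving-end voltage after 400 steps:", round(x[-1, 0], 4))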
Abstract:
Model-based calibration of steady-state engine operation is commonly performed with highly parameterized empirical models that are accurate but not very robust, particularly when predicting highly nonlinear responses such as diesel smoke emissions. To address this problem, and to boost the accuracy of more robust non-parametric methods to the same level, GT-Power was used to transform the empirical model input space into multiple input spaces that simplified the input-output relationship and improved the accuracy and robustness of smoke predictions made by three commonly used empirical modeling methods: Multivariate Regression, Neural Networks and the k-Nearest Neighbor method. The availability of multiple input spaces allowed the development of two committee techniques: a 'Simple Committee' technique that used averaged predictions from a set of 10 pre-selected input spaces chosen using the training data, and a 'Minimum Variance Committee' technique in which the input spaces for each prediction were chosen on the basis of disagreement between the three modeling methods. The latter technique equalized the performance of the three modeling methods. The successively increasing improvements resulting from the use of a single best transformed input space (Best Combination technique), the Simple Committee technique and the Minimum Variance Committee technique were verified with hypothesis testing. The transformed input spaces were also shown to improve outlier detection and to improve k-Nearest Neighbor performance when predicting dynamic emissions with steady-state training data. An unexpected finding was that the benefits of input space transformation were unaffected by changes in the hardware or in the calibration of the underlying GT-Power model.
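The two committee ideas can be sketched with scikit-learn stand-ins for the three modelling methods (a hedged illustration: the data are synthetic and the "input spaces" are random linear transformations rather than GT-Power-derived ones):

    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.neural_network import MLPRegressor
    from sklearn.neighbors import KNeighborsRegressor

    rng = np.random.default_rng(0)
    X = rng.normal(size=(300, 5))                       # synthetic engine settings
    y = np.sin(X[:, 0]) + X[:, 1] * X[:, 2] + 0.1 * rng.normal(size=300)
    X_train, X_test, y_train, y_test = X[:200], X[200:], y[:200], y[200:]

    # Stand-ins for the transformed input spaces.
    spaces = [rng.normal(size=(5, 5)) for _ in range(10)]

    def fit_predict(space):
        preds = []
        for model in (LinearRegression(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
                      KNeighborsRegressor(n_neighbors=5)):
            model.fit(X_train @ space, y_train)
            preds.append(model.predict(X_test @ space))
        return np.array(preds)                          # shape (3 methods, n_test)

    all_preds = np.array([fit_predict(S) for S in spaces])   # (10 spaces, 3, n_test)

    # 'Simple Committee': average the predictions over the pre-selected input spaces.
    simple = all_preds.mean(axis=(0, 1))

    # 'Minimum Variance Committee': for each test point, use the input space where the
    # three methods disagree least, then average the three methods there.
    disagreement = all_preds.var(axis=1)                # (10, n_test)
    best_space = disagreement.argmin(axis=0)
    min_var = all_preds[best_space, :, np.arange(len(y_test))].mean(axis=1)

    for name, pred in [("simple committee", simple), ("min-variance committee", min_var)]:
        print(name, "RMSE:", round(float(np.sqrt(np.mean((pred - y_test) ** 2))), 3))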
Abstract:
Although numerous modelling efforts have integrated food and water considerations at the farm or river-basin level, very few agro-economic models are able to jointly assess water and food policies at the global level. The present report explores the feasibility of integrating water considerations into the CAPRI model. First, a literature review of modelling approaches integrating food and water issues has been conducted. Three agro-economic models, IMPACT, WATERSIM and GLOBIOM, have been analysed in detail. In addition, biophysical and hydrological models estimating agricultural water use have also been studied, in particular the global hydrological model WATERGAP and the LISFLOOD model. Thanks to the programming approach of its supply module, CAPRI shows high potential for integrating environmental indicators as well as for introducing new resource constraints (potentially irrigated land, irrigation water) and input-output relationships. At least in theory, the activity-based approach of the regional programming model in CAPRI allows differentiation between rainfed and irrigated activities. The suggested approach to include water in the CAPRI model involves creating an irrigation module and a water use module. The development of the CAPRI water module will make it possible to provide scientific assessment of agricultural water use within the EU and to analyse agricultural pressures on water resources. The feasibility of the approach has been tested in a pilot case study including two NUTS 2 regions (Andalucia in Spain and Midi-Pyrenees in France). Preliminary results are presented, highlighting the interrelations between water and agricultural developments in Europe. As a next step, it is foreseen that the CAPRI water module will be further developed to account for competition between agricultural and non-agricultural water use; this will imply building a water use sub-module to compute water use balances.
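A miniature of the kind of constraint set the water module adds to a regional programming model of this type (hypothetical gross margins, water requirements and resource endowments; solved with scipy's linprog): choose hectares of rainfed and irrigated activities to maximize regional gross margin subject to total land, irrigable land and irrigation-water limits:

    from scipy.optimize import linprog

    # Activities (hypothetical): rainfed wheat, irrigated wheat, irrigated maize.
    gross_margin = [300.0, 520.0, 610.0]       # EUR per hectare
    water_use    = [0.0,   2500.0, 4500.0]     # m3 of irrigation water per hectare

    land_total  = 1000.0                       # hectares in the region
    land_irrig  = 400.0                        # hectares equipped for irrigation
    water_total = 1.2e6                        # m3 of irrigation water available

    # linprog minimizes, so negate the gross margins.
    c = [-m for m in gross_margin]
    A_ub = [[1.0, 1.0, 1.0],                   # total land constraint
            [0.0, 1.0, 1.0],                   # irrigable land constraint
            water_use]                         # irrigation water constraint
    b_ub = [land_total, land_irrig, water_total]

    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
    print("hectares by activity:", [round(v, 1) for v in res.x])
    print("regional gross margin (EUR):", round(-res.fun, 0))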