935 results for Leontief Input-Output model
Abstract:
This article examines shock persistence in agricultural and industrial output in India. Drawing on the dual economy literature, we emphasise the linkages between the sectors through the terms of trade. Different dual economy models, however, make differing assumptions about whether variables are endogenous or exogenous, and this distinction is crucial in explaining the pattern of shock persistence. Using annual data for 1955-95, our results show that shocks to both output series are permanent while those to the terms of trade are transient.
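A minimal sketch of the kind of unit root test that underlies such a permanent-versus-transient classification, assuming annual series are available as arrays; the series names and data below are placeholders, not the authors' dataset:

```python
# Hedged sketch: Augmented Dickey-Fuller tests to classify shocks as
# permanent (unit root not rejected) or transient (unit root rejected).
# Series names and synthetic data are illustrative only.
import numpy as np
from statsmodels.tsa.stattools import adfuller

def classify_shocks(series, name, alpha=0.05):
    stat, pvalue, *_ = adfuller(series, autolag="AIC")
    verdict = "transient (stationary)" if pvalue < alpha else "permanent (unit root)"
    print(f"{name}: ADF stat={stat:.2f}, p={pvalue:.3f} -> shocks look {verdict}")

rng = np.random.default_rng(0)
agricultural_output = np.cumsum(rng.normal(size=41))   # random walk ~ permanent
industrial_output   = np.cumsum(rng.normal(size=41))
terms_of_trade      = rng.normal(size=41)               # white noise ~ transient

for y, n in [(agricultural_output, "agricultural output"),
             (industrial_output, "industrial output"),
             (terms_of_trade, "terms of trade")]:
    classify_shocks(y, n)
```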
Abstract:
Presented herein is an experimental design that allows the effects of several radiative forcing factors on climate to be estimated as precisely as possible from a limited suite of atmosphere-only general circulation model (GCM) integrations. The forcings include the combined effect of observed changes in sea surface temperatures, sea ice extent, stratospheric (volcanic) aerosols, and solar output, plus the individual effects of several anthropogenic forcings. A single linear statistical model is used to estimate the forcing effects, each of which is represented by its global mean radiative forcing. The strong collinearity in time between the various anthropogenic forcings poses a technical problem that is overcome through the design of the experiment. This design uses every combination of anthropogenic forcing rather than a few highly replicated ensembles, as is more common in climate studies. Not only is this design highly efficient for a given number of integrations, but it also allows the estimation of (nonadditive) interactions between pairs of anthropogenic forcings. The simulated land surface air temperature changes since 1871 have been analyzed. Changes in natural and oceanic forcing, the latter itself containing some forcing from anthropogenic and natural influences, have the largest effect. For the global mean, increasing greenhouse gases and the indirect aerosol effect had the largest anthropogenic effects. An interaction between these two anthropogenic effects was also found in the atmosphere-only GCM. This interaction is similar in magnitude to the individual effects of changing tropospheric and stratospheric ozone concentrations or to the direct (sulfate) aerosol effect. Various diagnostics are used to evaluate the fit of the statistical model. For the global mean, these show that the land temperature response is proportional to the global mean radiative forcing, reinforcing the use of radiative forcing as a measure of climate change. The diagnostic tests also show that the linear model is suitable for analyses of land surface air temperature at each GCM grid point. Therefore, the linear model provides precise estimates of the space-time signals for all forcing factors under consideration. For simulated 50-hPa temperatures, results show that tropospheric ozone increases have contributed to stratospheric cooling over the twentieth century almost as much as changes in well-mixed greenhouse gases.
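A hedged sketch of the kind of single linear statistical model described: the temperature response is regressed on global mean radiative forcings, with a pairwise interaction term between two anthropogenic forcings. All variable names, values and coefficients below are invented for illustration, not the GCM experiment itself:

```python
# Sketch: OLS fit of a temperature response to radiative forcings,
# including a (nonadditive) interaction between two anthropogenic forcings.
# Synthetic data only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 120
ghg      = np.linspace(0.0, 2.5, n) + rng.normal(scale=0.05, size=n)   # W m-2
indirect = np.linspace(0.0, -1.2, n) + rng.normal(scale=0.05, size=n)  # W m-2
natural  = rng.normal(scale=0.3, size=n)                               # W m-2

temp = (0.5 * ghg + 0.4 * indirect + 0.3 * natural
        + 0.1 * ghg * indirect + rng.normal(scale=0.1, size=n))

X = sm.add_constant(np.column_stack([ghg, indirect, natural, ghg * indirect]))
fit = sm.OLS(temp, X).fit()
print(fit.params)       # estimated sensitivity to each forcing term
print(fit.pvalues[-1])  # significance of the GHG x indirect-aerosol interaction
```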
Abstract:
This paper reviews four approaches used to create rational tools to aid the planning and management of the building design process and then proposes a fifth approach. The new approach is based on the mechanical aspects of technology rather than on subjective design issues. The knowledge base contains, for each construction technology, a generic model of the detailed design process. Each activity in the process is specified by its input and output information needs. By connecting the input demands of one technology with the output supply from another, a map or network of design activity is formed. Thus, it is possible to structure a specific model from the generic knowledge base within a KBE system.
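A minimal sketch of the input/output matching idea: each technology declares the design information it needs and the information it produces, and activities are linked wherever one technology's output supplies another's input demand. The technology names and information items are invented for illustration, not taken from the paper's knowledge base:

```python
# Sketch: build a design-activity network by matching each technology's
# input information demands to other technologies' output supplies.
# Technology names and information items are illustrative only.
technologies = {
    "in-situ concrete frame": {"inputs": {"column grid", "floor loading"},
                               "outputs": {"slab depth", "column sizes"}},
    "raised access floor":    {"inputs": {"slab depth", "services zone"},
                               "outputs": {"finished floor level"}},
    "curtain walling":        {"inputs": {"column sizes", "finished floor level"},
                               "outputs": {"cladding grid"}},
}

# An edge (a -> b, item) means technology a supplies information item to b.
edges = [(a, b, item)
         for a, ta in technologies.items()
         for b, tb in technologies.items() if a != b
         for item in ta["outputs"] & tb["inputs"]]

for a, b, item in edges:
    print(f"{a} --[{item}]--> {b}")
```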
Abstract:
Because of the importance and potential usefulness of construction market statistics to firms and government, consistency between different sources of data is examined with a view to building a predictive model of construction output using construction data alone. However, a comparison of Department of Trade and Industry (DTI) and Office for National Statistics (ONS) series shows that the correlation coefficient (used as a measure of consistency) between the DTI output and DTI orders data and the correlation coefficient between the DTI output and ONS output data are low. It is not possible to derive a predictive model of DTI output based on DTI orders data alone. The question arises whether an alternative independent source of data may be used to predict DTI output data. Independent data produced by Emap Glenigan (EG), based on planning applications, potentially offers such a source of information. The EG data records the value of planning applications and their planned start and finish dates. However, as this data is ex ante and is not correlated with DTI output, it is not possible to use it to describe the volume of actual construction output. Nor is it possible to use the EG planning data to predict DTI construction orders data. Further consideration of the issues raised reveals that it is not practically possible to develop a consistent predictive model of construction output using construction statistics gathered at different stages in the development process.
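The consistency measure used here is simply a correlation coefficient between series; a minimal sketch with placeholder numbers (not the published statistics):

```python
# Sketch: correlation coefficient as a consistency check between construction
# series (e.g. DTI output vs DTI orders).  The arrays below are placeholders.
import numpy as np

dti_output = np.array([100, 104, 103, 110, 115, 112, 118, 121], dtype=float)
dti_orders = np.array([ 95, 108,  99, 101, 120, 105, 111, 130], dtype=float)

r = np.corrcoef(dti_output, dti_orders)[0, 1]
print(f"correlation (consistency) = {r:.2f}")  # a low value would rule out a
                                               # simple predictive model
```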
Abstract:
The combination of model predictive control based on linear models (MPC) with feedback linearization (FL) has attracted interest for a number of years, giving rise to MPC+FL control schemes. An important advantage of such schemes is that feedback linearizable plants can be controlled with a linear predictive controller with a fixed model. Handling input constraints within such schemes is difficult since simple bound constraints on the input become state dependent because of the nonlinear transformation introduced by feedback linearization. This paper introduces a technique for handling input constraints within a real-time MPC/FL scheme, where the plant model employed is a class of dynamic neural networks. The technique is based on a simple affine transformation of the feasible area. A simulated case study is presented to illustrate the use and benefits of the technique.
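To see why simple input bounds become state dependent under feedback linearization, and how an affine map of the feasible region recovers simple bounds for the linear controller, here is a hedged sketch; the linearizing law and its alpha/beta terms are invented for illustration and are not the paper's neural-network plant model:

```python
# Sketch: bounds u_min <= u <= u_max on the physical input become state-dependent
# bounds on the transformed input v when the feedback linearizing law
# u = alpha(x) + beta(x) * v is applied; mapping them through this affine
# relation gives bounds the linear MPC can use.  alpha/beta are illustrative.
import numpy as np

u_min, u_max = -1.0, 1.0

def alpha(x):                      # example nonlinear term of the FL law
    return np.sin(x[0])

def beta(x):                       # assumed strictly positive here
    return 1.0 + 0.5 * x[1] ** 2

def v_bounds(x):
    """Affine map of the input constraint set into the linearized (v) domain."""
    a, b = alpha(x), beta(x)
    return (u_min - a) / b, (u_max - a) / b

x = np.array([0.3, -0.8])
print(v_bounds(x))   # constraint bounds available to the predictive controller at state x
```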
Abstract:
A quasi-optical deembedding technique for characterizing waveguides is demonstrated using wide-band time-resolved terahertz spectroscopy. A transfer function representation is adopted for the description of the signal at the input and output ports of the waveguides. The time-domain responses were discretized and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an AutoRegressive with eXogenous input (ARX) model, as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize both the signal distortion and the noise propagating in the ARX and subspace models. The optimal filtering procedure used in the wavelet domain for the recorded time-domain signatures is described in detail. The effect of filtering prior to the identification procedures is elucidated with the aid of pole-zero diagrams. Models derived from measurements of terahertz transients in a precision WR-8 waveguide adjustable short are presented.
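A hedged sketch of the identification chain described: wavelet-domain denoising of the input and output transients, followed by a least-squares ARX fit. The signals, model orders and threshold rule below are placeholders, not those of the WR-8 measurements:

```python
# Sketch: wavelet denoising of input/output transients, then an ARX(na, nb) fit
# by least squares.  Signals, orders and thresholds are illustrative only.
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db4", level=4):
    coeffs = pywt.wavedec(x, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate
    thr = sigma * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def fit_arx(u, y, na=4, nb=4):
    """y[k] = sum_i a_i y[k-i] + sum_j b_j u[k-j], solved by least squares."""
    n0 = max(na, nb)
    rows = [np.concatenate([y[k - na:k][::-1], u[k - nb:k][::-1]])
            for k in range(n0, len(y))]
    theta, *_ = np.linalg.lstsq(np.array(rows), y[n0:], rcond=None)
    return theta[:na], theta[na:]

rng = np.random.default_rng(2)
u = rng.normal(size=512)                                       # stand-in input transient
y = np.convolve(u, [0.0, 0.5, 0.3], mode="full")[:512] + 0.05 * rng.normal(size=512)
a, b = fit_arx(wavelet_denoise(u), wavelet_denoise(y))
print(a, b)
```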
Abstract:
A quasi-optical technique for characterizing micromachined waveguides is demonstrated with wideband time-resolved terahertz spectroscopy. A transfer-function representation is adopted to describe the relation between the signals at the input and output ports of the waveguides. The time-domain responses were discretized, and the waveguide transfer function was obtained through a parametric approach in the z domain after describing the system with an autoregressive with exogenous input model. The number of modes propagating in the structure was assumed a priori, inferred from comparisons of the theoretical and measured characteristic impedance as well as from parsimony arguments. Measurements for a precision WR-8 waveguide-adjustable short as well as for G-band reduced-height micromachined waveguides are presented. (C) 2003 Optical Society of America.
Abstract:
The identification of non-linear systems using only observed finite datasets has become a mature research area over the last two decades. A class of linear-in-the-parameter models with universal approximation capabilities has been intensively studied and widely used due to the availability of many linear-learning algorithms and their inherent convergence conditions. This article presents a systematic overview of basic research on model selection approaches for linear-in-the-parameter models. One of the fundamental problems in non-linear system identification is to find the minimal model with the best model generalisation performance from observational data only. The important concepts used to achieve good model generalisation in various non-linear system-identification algorithms are first reviewed, including Bayesian parameter regularisation and model selection criteria based on cross-validation and experimental design. A significant advance in machine learning has been the development of the support vector machine as a means of identifying kernel models based on the structural risk minimisation principle. Developments in convex optimisation-based model construction algorithms, including support vector regression algorithms, are outlined. Input selection algorithms and on-line system identification algorithms are also included in this review. Finally, some industrial applications of non-linear models are discussed.
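As a small illustration of the kernel-model and structural-risk-minimisation theme reviewed here, a hedged support vector regression sketch using scikit-learn; the data and hyperparameters are arbitrary placeholders:

```python
# Sketch: support vector regression as one convex-optimisation-based route to a
# sparse kernel model of an unknown non-linear system.  Data are synthetic.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
X = rng.uniform(-3, 3, size=(200, 1))
y = np.sinc(X).ravel() + 0.05 * rng.normal(size=200)   # stand-in non-linear system

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X, y)
print("support vectors kept:", model.support_.size, "of", len(X))
print("prediction at x=0.5:", model.predict([[0.5]])[0])
```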
Abstract:
A cross-platform field campaign, OP3, was conducted in the state of Sabah in Malaysian Borneo between April and July 2008. Among the suite of observations recorded, the campaign included measurements of NOx and O3, crucial outputs of any model chemistry mechanism. We describe the measurements of these species made from both the ground site and the aircraft. We then use the output from two resolutions of the chemistry transport model p-TOMCAT to illustrate the ability of a global model chemical mechanism to capture the chemistry at the rainforest site. The basic model performance is good for NOx and poor for ozone. A box model containing the same chemical mechanism is used to explore the results of the global model in more depth and to make comparisons between the two. Without some parameterization of nighttime mixing between the boundary layer and the free troposphere (i.e. the use of a dilution parameter), the box model does not reproduce the observations, pointing to the importance of adequately representing physical processes in comparisons with surface measurements. We conclude with a discussion of box model budget calculations of chemical reaction fluxes, deposition and mixing, and compare these results to output from p-TOMCAT. These show that the same chemical mechanism behaves similarly in both models, but that emissions and advection play particularly strong roles in influencing the comparison with surface measurements.
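A hedged sketch of the dilution parameterization mentioned: a box-model tendency with chemical production and loss plus a first-order mixing term toward a background (free-tropospheric) concentration. All rate values and the diurnal mixing profile are illustrative, not the campaign's or the model's numbers:

```python
# Sketch: box-model tendency with a dilution (mixing) parameter standing in for
# boundary layer / free troposphere exchange.  All values are illustrative.
def step_box(conc, production, loss_rate, k_dilution, background, dt):
    """d[conc]/dt = P - L*conc - k_dil*(conc - conc_background)."""
    tendency = production - loss_rate * conc - k_dilution * (conc - background)
    return conc + dt * tendency

o3 = 20.0                                    # ppbv, stand-in surface ozone
for hour in range(24):
    k_dil = 0.3 if hour < 6 or hour > 18 else 0.05   # stronger mixing at night
    o3 = step_box(o3, production=0.8, loss_rate=0.02,
                  k_dilution=k_dil, background=40.0, dt=1.0)
print(f"O3 after 24 h: {o3:.1f} ppbv")
```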
Abstract:
A radionuclide source term model has been developed which simulates the biogeochemical evolution of the Drigg low-level waste (LLW) disposal site. The DRINK (DRIgg Near field Kinetic) model provides data on radionuclide concentrations in groundwater over a period of 100,000 years, which are used as input to assessment calculations for a groundwater pathway. The DRINK model also provides input to human intrusion and gaseous assessment calculations through simulation of the solid radionuclide inventory. These calculations are being used to support the Drigg post-closure safety case. The DRINK model considers the coupled effects of fluid flow, microbiology, corrosion, chemical reaction, sorption and radioactive decay. It represents the first direct use of a mechanistic reaction-transport model in risk assessment calculations.
Abstract:
An input variable selection procedure is introduced for the identification and construction of multi-input multi-output (MIMO) neurofuzzy operating-point-dependent models. The algorithm extends a forward modified Gram-Schmidt orthogonal least squares procedure for a linear model structure to accommodate nonlinear system modeling by incorporating piecewise locally linear model fitting. The proposed input node selection procedure effectively tackles the curse of dimensionality associated with lattice-based modeling algorithms such as radial basis function neurofuzzy networks, enabling the resulting neurofuzzy operating-point-dependent model to be widely applied in control and estimation. Some numerical examples are given to demonstrate the effectiveness of the proposed construction algorithm.
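A minimal sketch of the forward orthogonal least squares idea underlying such selection procedures: candidate regressors are ranked by their error reduction ratio and chosen greedily. This is a simplified linear-regressor variant that deflates the residual at each step, not the full neurofuzzy lattice construction of the paper:

```python
# Sketch: greedy forward selection by error reduction ratio (ERR), simplified.
import numpy as np

def forward_ols(candidates, y, n_select):
    """Return indices of the n_select columns with the largest ERR, greedily."""
    selected, residual = [], y.astype(float).copy()
    for _ in range(n_select):
        errs = []
        for j in range(candidates.shape[1]):
            if j in selected:
                errs.append(-np.inf)
                continue
            p = candidates[:, j]
            g = p @ residual / (p @ p)
            errs.append((g * g) * (p @ p) / (y @ y))   # error reduction ratio
        best = int(np.argmax(errs))
        selected.append(best)
        p = candidates[:, best]
        residual = residual - (p @ residual / (p @ p)) * p   # deflate residual
    return selected

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 6))
y = 2.0 * X[:, 1] - 1.5 * X[:, 4] + 0.1 * rng.normal(size=200)
print(forward_ols(X, y, n_select=2))   # expected to pick columns 1 and 4
```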
Abstract:
Neurofuzzy modelling systems combine fuzzy logic with quantitative artificial neural networks via fuzzification, using fuzzy membership functions usually based on B-splines, together with algebraic operators for inference. The paper introduces a neurofuzzy model construction algorithm using Bezier-Bernstein polynomial functions as basis functions. The new network maintains most of the properties of the B-spline expansion based neurofuzzy system, such as the non-negativity of the basis functions and unity of support, but with the additional advantages of structural parsimony and Delaunay input space partitioning, avoiding the inherent computational problems of lattice networks. This new modelling network is based on the idea that an input vector can be mapped into barycentric co-ordinates with respect to a set of predetermined knots forming the vertices of a polygon (a set of tiled Delaunay triangles) over the input space. The network is expressed as a Bezier-Bernstein polynomial function of the barycentric co-ordinates of the input vector. An inverse de Casteljau procedure using backpropagation is developed to obtain the input vector's barycentric co-ordinates, which form the basis functions. Extension of the Bezier-Bernstein neurofuzzy algorithm to n-dimensional inputs is discussed, followed by numerical examples demonstrating the effectiveness of this new data-based modelling approach.
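A hedged sketch of the barycentric mapping at the heart of this construction: a 2-D input vector is located in a Delaunay triangle over a set of knots and expressed in barycentric coordinates, which then feed a Bernstein polynomial basis. The knot positions and polynomial degree are arbitrary choices for illustration:

```python
# Sketch: map a 2-D input vector to barycentric coordinates with respect to a
# Delaunay triangulation of predetermined knots, then evaluate a degree-2
# Bernstein basis in those coordinates.  Knot positions are arbitrary.
import numpy as np
from math import factorial
from scipy.spatial import Delaunay

knots = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0],
                  [1.0, 1.0], [0.5, 0.5]])
tri = Delaunay(knots)

def barycentric(x):
    s = tri.find_simplex(x)                   # which triangle contains x
    T, r = tri.transform[s, :2], tri.transform[s, 2]
    b = T @ (np.asarray(x) - r)
    return s, np.append(b, 1.0 - b.sum())     # three coordinates summing to 1

def bernstein2(lam):
    """Degree-2 Bernstein (multinomial) basis in barycentric coordinates."""
    vals = {}
    for i in range(3):
        for j in range(3 - i):
            k = 2 - i - j
            coef = factorial(2) // (factorial(i) * factorial(j) * factorial(k))
            vals[(i, j, k)] = coef * lam[0]**i * lam[1]**j * lam[2]**k
    return vals

simplex, lam = barycentric([0.2, 0.3])
print(simplex, lam)
print(sum(bernstein2(lam).values()))   # basis functions sum to 1
```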
Abstract:
The purpose of this paper is to design a control law for continuous systems with Boolean inputs that allows the output to track a desired trajectory. Such systems are controlled by switching elements, and systems of this type with Boolean inputs have found increasing use in the electrical industry. Power supplies include such systems, and a power converter is one example. For instance, in power electronics the control variable is the switching OFF and ON of components such as thyristors or transistors. In this paper, a method is proposed for designing a control law in state space for such systems. The approach is implemented in simulation for the control of an electronic circuit.
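A hedged sketch of one way to think about controlling a continuous plant with Boolean (ON/OFF) inputs: at each sampling instant, the controller evaluates the finite set of switch combinations on a discretized state-space model and applies the one whose predicted output best tracks the reference. The plant matrices and selection rule below are invented for illustration, not the paper's design method:

```python
# Sketch: one-step-ahead selection among Boolean input combinations for output
# tracking of a discretized linear plant.  A, B, C are illustrative.
from itertools import product
import numpy as np

A = np.array([[0.95, 0.10], [0.00, 0.90]])   # discretized state matrix
B = np.array([[0.00, 0.05], [0.10, 0.00]])   # two switched (0/1) inputs
C = np.array([[1.0, 0.0]])                   # measured output

def best_switching(x, y_ref):
    """Pick the Boolean input vector minimizing the next-step tracking error."""
    candidates = [np.array(u, dtype=float) for u in product([0, 1], repeat=2)]
    errors = [abs((C @ (A @ x + B @ u))[0] - y_ref) for u in candidates]
    return candidates[int(np.argmin(errors))]

x = np.array([0.0, 0.0])
for k in range(50):
    u = best_switching(x, y_ref=0.5)
    x = A @ x + B @ u
print("output after 50 steps:", (C @ x)[0])
```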
Abstract:
In this chapter we described how the inclusion of a model of a human arm, combined with the measurement of its neural input and a predictor, can make a previously proposed teleoperator design robust under time delay. Our trials gave clear indications of the superiority of the NPT scheme over traditional architectures as well as the modified Yokokohji and Yoshikawa architecture. Its fundamental advantages are the time-lead of the slave, the more efficient and more natural-feeling manipulation it provides, and the fact that incorporating an operator arm model leads to more credible stability results. Finally, its simplicity allows local control techniques that are less likely to fail to be employed. However, a significant advantage of the enhanced Yokokohji and Yoshikawa architecture stems from the very fact that it is a conservative modification of current designs. Under large prediction errors, it can provide robustness by directing the master and slave states to their means and, since it relies on the passivity of the mechanical part of the system, it would not confuse the operator. An experimental implementation of the techniques will provide further evidence for the performance of the proposed architectures. The employment of neural networks and fuzzy logic, which will provide an adaptive model of the human arm and robustifying control terms, is scheduled for the near future.
Abstract:
A quasi-optical de-embedding technique for characterizing waveguides is demonstrated using wideband time-resolved terahertz spectroscopy. A transfer function representation is adopted for the description of the signal at the input and output ports of the waveguides. The time-domain responses were discretised and the waveguide transfer function was obtained through a parametric approach in the z-domain after describing the system with an ARX model as well as with a state-space model. Prior to the identification procedure, filtering was performed in the wavelet domain to minimize signal distortion and the noise propagating in the ARX and subspace models. The model identification procedure requires isolation of the phase delay in the structure, and therefore the time-domain signatures must first be aligned with respect to each other before they are compared. An initial estimate of the number of propagating modes was provided by comparing the measured phase delay in the structure with theoretical calculations that take into account the physical dimensions of the waveguide. Models derived from measurements of THz transients in a precision WR-8 waveguide adjustable short are presented.
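A minimal sketch of the alignment step described above: estimate the relative delay between two time-domain signatures from their cross-correlation peak and shift one signal before comparison. The signals below are synthetic stand-ins for the measured transients:

```python
# Sketch: align two time-domain signatures by their cross-correlation peak
# before model identification, so the phase delay is isolated.  Synthetic data.
import numpy as np

rng = np.random.default_rng(5)
t = np.arange(1024)
reference = np.exp(-((t - 200) / 20.0) ** 2)            # stand-in input transient
delayed   = np.roll(reference, 37) + 0.01 * rng.normal(size=t.size)

xcorr = np.correlate(delayed, reference, mode="full")
lag = int(np.argmax(xcorr)) - (len(reference) - 1)       # estimated delay in samples
aligned = np.roll(delayed, -lag)

print("estimated delay:", lag, "samples")
```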