955 results for Internal model control
Abstract:
FAMOUS is an ocean-atmosphere general circulation model of low resolution, capable of simulating approximately 120 years of model climate per wallclock day using current high performance computing facilities. It uses most of the same code as HadCM3, a widely used climate model of higher resolution and computational cost, and has been tuned to reproduce the same climate reasonably well. FAMOUS is useful for climate simulations where the computational cost makes the application of HadCM3 unfeasible, either because of the length of simulation or the size of the ensemble desired. We document a number of scientific and technical improvements to the original version of FAMOUS. These improvements include changes to the parameterisations of ozone and sea-ice which alleviate a significant cold bias at high northern latitudes and in the upper troposphere, and the elimination of volume-averaged drifts in ocean tracers. A simple model of the marine carbon cycle has also been included. A particular goal of FAMOUS is to conduct millennial-scale paleoclimate simulations of Quaternary ice ages; to this end, a number of useful changes to the model infrastructure have been made.
Abstract:
In this work, a fault-tolerant control scheme is applied to an air-handling unit of a heating, ventilation and air-conditioning system. Using the multiple-model approach, it is possible to identify faults and to control the system effectively under both faulty and normal conditions. Building on well-known techniques for modelling and controlling the process, this work focuses on the importance of the cost function in fault detection and its influence on the reconfigurable controller. Experimental results show how the control of the terminal unit is affected in the presence of a fault, and how recovery and reconfiguration of the control action deal with the effects of faults.
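The multiple-model idea above can be sketched in a few lines: a bank of candidate models (one per normal or faulty condition) is simulated against the measured output, and the model with the lowest quadratic cost indicates the current condition. The model structure, names and gains below are illustrative assumptions, not taken from the paper.

```python
# Hypothetical multiple-model fault-detection sketch. A bank of
# first-order models (normal and faulty) is compared against measured
# data via a squared-error cost; the minimising model flags the fault.

def simulate(gain, inputs):
    """First-order discrete model: y[k+1] = 0.9*y[k] + gain*u[k]."""
    y, out = 0.0, []
    for u in inputs:
        y = 0.9 * y + gain * u
        out.append(y)
    return out

def detect_condition(measured, inputs, model_bank):
    """Return the model name with the lowest squared-error cost."""
    costs = {}
    for name, gain in model_bank.items():
        predicted = simulate(gain, inputs)
        costs[name] = sum((m - p) ** 2 for m, p in zip(measured, predicted))
    return min(costs, key=costs.get), costs

bank = {"normal": 1.0, "fan_fault": 0.4}   # illustrative gains
u = [1.0] * 20
measured = simulate(0.4, u)                # data produced by the faulty plant
best, costs = detect_condition(measured, u, bank)
print(best)  # -> fan_fault
```

In a reconfigurable scheme, the selected model would then also determine which controller in a matching controller bank is switched into the loop.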
Abstract:
Objectives. Theoretical modeling and experimental studies suggest that functional electrical stimulation (FES) can improve trunk balance in spinal cord-injured subjects. This can have a positive impact on daily life, increasing the volume of the bimanual workspace, improving sitting posture, and aiding wheelchair propulsion. A closed-loop controller for the stimulation is desirable, as it can potentially decrease muscle fatigue and offer better disturbance rejection. This paper proposes a biomechanical model of the human trunk, and a procedure for its identification, to be used for the future development of FES controllers. The advantage over previous models lies in the simplicity of the proposed solution, which makes it possible to identify the model just before a stimulation session (taking into account the variability of the muscle response to FES). Materials and Methods. The structure of the model is based on previous research on FES and muscle physiology. Some details could not be inferred from previous studies and were determined from experimental data. Experiments with a paraplegic volunteer were conducted in order to measure the moments exerted by the trunk's passive tissues and artificially stimulated muscles. Data for model identification and validation were also collected. Results. Using the proposed structure and identification procedure, the model could adequately reproduce the moments exerted during the experiments. The study reveals that the stimulated trunk extensors can exert maximal moment when the trunk is in the upright position. In contrast, previous studies show that able-bodied subjects can exert maximal trunk extension when flexed forward. Conclusions. The proposed model and identification procedure are a successful first step toward the development of a model-based controller for trunk FES. The model also gives information on the trunk in unique conditions, normally not observable in able-bodied subjects (i.e., subject only to extensor muscle contraction).
Abstract:
The use of data reconciliation techniques can considerably reduce the inaccuracy of process data due to measurement errors. This in turn results in improved control system performance and process knowledge. Dynamic data reconciliation techniques are applied to a model-based predictive control scheme. It is shown through simulations on a chemical reactor system that the overall performance of the model-based predictive controller is enhanced considerably when data reconciliation is applied. The dynamic data reconciliation techniques used include a combined strategy for the simultaneous identification of outliers and systematic bias.
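The core of data reconciliation is a constrained least-squares adjustment: measurements are corrected by the smallest amount that makes them satisfy the process balances. The abstract applies the dynamic version inside an MPC loop; the sketch below shows only the basic steady-state projection step, with illustrative numbers and equal measurement variances assumed.

```python
# Minimal steady-state data reconciliation sketch. Three measured flows
# must satisfy the mass balance f1 = f2 + f3, but the raw measurements
# violate it; the reconciliation projects them onto the constraint.

def reconcile(y, a):
    """Adjust measurements y minimally so that a . x = 0
    (equal measurement variances assumed)."""
    residual = sum(ai * yi for ai, yi in zip(a, y))   # constraint violation
    norm = sum(ai * ai for ai in a)
    return [yi - ai * residual / norm for ai, yi in zip(a, y)]

y = [10.2, 5.1, 4.8]    # measured: inlet, outlet 1, outlet 2
a = [1.0, -1.0, -1.0]   # mass balance: f1 - f2 - f3 = 0
x = reconcile(y, a)
print([round(v, 3) for v in x])  # -> [10.1, 5.2, 4.9]
```

Outlier and bias identification, as used in the combined strategy mentioned above, would additionally test the size of each adjustment against the measurement noise level.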
Abstract:
This paper discusses the application of model reference adaptive control concepts to the automatic tuning of PID controllers. The gradient approach is adopted, and the effectiveness of the proposed method is shown through simulated examples.
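The gradient approach in model reference adaptive control is commonly realised with the MIT rule, which adjusts a controller parameter in the direction that reduces the squared error between the plant output and a reference model. The toy sketch below adapts a single gain for a static plant; the plant gain, learning rate and step size are illustrative assumptions, not values from the paper.

```python
# MIT-rule gradient adaptation sketch: adjust gain theta so that the
# plant output k_plant*theta*u tracks a unit-gain reference model.

def mit_rule_tuning(k_plant=2.0, gamma=0.5, dt=0.05, steps=200):
    theta = 0.0                       # adjustable controller gain
    for _ in range(steps):
        u = 1.0                       # persistently exciting input
        y = k_plant * theta * u       # plant output
        y_m = 1.0 * u                 # reference model output (unit gain)
        e = y - y_m                   # model-following error
        # MIT rule: dtheta/dt = -gamma * e * de/dtheta, with de/dtheta = k*u
        theta -= gamma * e * k_plant * u * dt
    return theta

theta = mit_rule_tuning()
print(round(theta, 3))  # approaches 1/k_plant = 0.5
```

For a full PID tuner, the same update would be applied to each of the three gains, with the error sensitivities approximated from the reference model.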
Abstract:
DISOPE is a technique for solving optimal control problems where there are differences in structure and parameter values between reality and the model employed in the computations. The model-reality differences can also allow for deliberate simplification of model characteristics and performance indices in order to facilitate the solution of the optimal control problem. The technique was developed originally in continuous time and later extended to discrete time. The main property of the procedure is that, by iterating on appropriately modified model-based problems, the correct optimal solution is achieved in spite of the model-reality differences. Algorithms have been developed in both continuous and discrete time for a general nonlinear optimal control problem with terminal weighting, bounded controls and terminal constraints. The aim of this paper is to show how the DISOPE technique can aid receding horizon optimal control computation in nonlinear model predictive control.
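The flavour of the iteration can be illustrated on a scalar toy problem: at each pass, the gradient mismatch between reality and the model is measured and used to modify the model-based problem, so the iterates converge to the optimum of the real problem despite the model being wrong. The quadratic objectives and relaxation factor below are illustrative assumptions, not the algorithm's full form (which handles dynamics, constraints and terminal terms).

```python
# Toy sketch of a DISOPE-style iteration: repeatedly solve a modified
# MODEL-based problem whose correction term is updated from the observed
# model-reality gradient difference.

def f_real_grad(u):
    return 2.0 * (u - 2.0)   # "reality": minimum at u* = 2 (unknown to model)

def f_model_grad(u):
    return 2.0 * u           # simplified model: minimum at u = 0

u, relax = 0.0, 0.5
for _ in range(50):
    modifier = f_real_grad(u) - f_model_grad(u)   # model-reality difference
    # modified model problem: min_u  u^2 + modifier*u  =>  2u + modifier = 0
    u_new = -modifier / 2.0
    u = (1 - relax) * u + relax * u_new           # relaxed update
print(round(u, 4))  # -> 2.0, the optimum of the REAL problem
```

In the receding-horizon setting discussed in the paper, an iteration of this kind would be embedded in each MPC optimisation over the prediction horizon.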
Abstract:
This paper describes the application of artificial neural networks for automatic tuning of PID controllers using the Model Reference Adaptive Control approach. The effectiveness of the proposed method is shown through a simulated application.
Abstract:
We explore the potential for making statistical decadal predictions of sea surface temperatures (SSTs) in a perfect model analysis, with a focus on the Atlantic basin. Various statistical methods (Lagged correlations, Linear Inverse Modelling and Constructed Analogue) are found to have significant skill in predicting the internal variability of Atlantic SSTs for up to a decade ahead in control integrations of two different global climate models (GCMs), namely HadCM3 and HadGEM1. Statistical methods which consider non-local information tend to perform best, but which is the most successful statistical method depends on the region considered, GCM data used and prediction lead time. However, the Constructed Analogue method tends to have the highest skill at longer lead times. Importantly, the regions of greatest prediction skill can be very different to regions identified as potentially predictable from variance explained arguments. This finding suggests that significant local decadal variability is not necessarily a prerequisite for skillful decadal predictions, and that the statistical methods are capturing some of the dynamics of low-frequency SST evolution. In particular, using data from HadGEM1, significant skill at lead times of 6–10 years is found in the tropical North Atlantic, a region with relatively little decadal variability compared to interannual variability. This skill appears to come from reconstructing the SSTs in the far north Atlantic, suggesting that the more northern latitudes are optimal for SST observations to improve predictions. We additionally explore whether adding sub-surface temperature data improves these decadal statistical predictions, and find that, again, it depends on the region, prediction lead time and GCM data used. Overall, we argue that the estimated prediction skill motivates the further development of statistical decadal predictions of SSTs as a benchmark for current and future GCM-based decadal climate predictions.
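The constructed analogue method mentioned above can be sketched compactly: the current state is expressed as a least-squares combination of past states in a library, and the same combination applied to those states' successors yields the forecast. The toy data below (random fields with a damped persistence relation) are purely illustrative, not real SSTs.

```python
# Constructed-analogue forecasting sketch on toy data.
import numpy as np

rng = np.random.default_rng(0)
# Library of past states (columns) and the corresponding states one
# lead time later; a damped-persistence relation stands in for dynamics.
past = rng.normal(size=(5, 30))                    # 5 grid points, 30 times
future = 0.8 * past + 0.1 * rng.normal(size=past.shape)

current = rng.normal(size=5)                       # state to forecast from
# Find weights w such that past @ w ~= current (least squares).
w, *_ = np.linalg.lstsq(past, current, rcond=None)
forecast = future @ w                              # apply weights to successors
```

Because the library has more columns than grid points here, the fit to the current state is exact; with real, high-dimensional SST fields the combination is only approximate and is usually computed in a truncated EOF space.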