955 results for Internal-model
Abstract:
In this paper, a new control design method is proposed for stable processes that can be described by Hammerstein-Wiener models. The internal model control (IMC) framework is extended to accommodate multiple IMC controllers, one for each subsystem. The concept of passive systems is used to construct IMC controllers that approximate the inverses of the subsystems to achieve dynamic control performance. The Passivity Theorem is used to ensure closed-loop stability.
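A minimal sketch of the classical IMC loop that this framework extends, assuming a discrete first-order stable plant with a perfect internal model; the plant parameters and filter pole below are hypothetical placeholders, not the paper's Hammerstein-Wiener design:

```python
# Discrete first-order stable plant: y[k+1] = a*y[k] + b*u[k]
a, b = 0.9, 0.1        # hypothetical plant parameters
a_m, b_m = 0.9, 0.1    # internal model (assumed perfect here)
lam = 0.7              # IMC filter pole: the single tuning knob

y = y_m = f = 0.0
r = 1.0                # unit setpoint
for k in range(50):
    d_hat = y - y_m                        # estimated disturbance / model mismatch
    f = lam * f + (1 - lam) * (r - d_hat)  # first-order IMC filter
    u = (f - a_m * y_m) / b_m              # Q: filtered inverse of the model
    y = a * y + b * u                      # true plant
    y_m = a_m * y_m + b_m * u              # internal model run in parallel
print(f"output after 50 steps: {y:.4f} (setpoint {r})")
```

With a perfect model the feedback signal vanishes and the closed loop simply inherits the filter dynamics; the passivity-based controllers described in the abstract play the analogous approximate-inverse role for each nonlinear subsystem.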
Abstract:
This study investigated how movement error is evaluated and used to change feedforward commands following a change in the environmental dynamics. In particular, we addressed the question of whether only position-error information is used or whether information about the force-field direction can also be used for rapid adaptation to changes in the environmental dynamics. Subjects learned to move in a position-dependent force field (PF) with a parabolic profile and the dynamics of a negative spring, which produced lateral force to the left of the target hand path. They adapted very rapidly, dramatically reducing lateral error after a single trial. Several times during training, the strength of the PF was unexpectedly doubled (PF2) for two trials. This again created a large leftward deviation, which was greatly reduced on the second PF2 trial, and an aftereffect when the force field subsequently returned to its original strength. The aftereffect was abolished if the second PF2 trial was replaced by an oppositely directed velocity-dependent force field (VF). During subsequent training in the VF, immediately after having adapted to the PF, subjects applied a force that assisted the force field for approximately 15 trials, indicating that they did not use information about the force-field direction. We concluded that the CNS uses only the position error for updating the internal model of the environmental dynamics and modifying feedforward commands. Although this strategy is not necessarily optimal, it may be the most reliable strategy for iterative improvement in performance.
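A minimal sketch of the kind of purely position-error-driven trial-by-trial update the conclusion points to; this is a generic state-space adaptation rule with invented retention and learning rates, not the authors' fitted model:

```python
A, B = 0.95, 0.6      # hypothetical retention and error-learning rates
f_field = 10.0        # lateral force-field strength (arbitrary units)
z = 0.0               # internal-model estimate of the field

errors = []
for trial in range(20):
    if trial == 10:          # unexpected doubling of the field (cf. PF2)
        f_field = 20.0
    e = f_field - z          # lateral position error ~ uncompensated force
    errors.append(e)
    z = A * z + B * e        # update driven by position error only

print([f"{e:.1f}" for e in errors[8:13]])  # large error at the switch, then a rapid drop
```

Because the update depends only on the signed error, a single exposure to the doubled field produces most of the compensation on the very next trial, mirroring the rapid single-trial adaptation reported above.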
Abstract:
Traditional vegetation mapping methods use high-cost, labour-intensive aerial photography interpretation. This approach can be subjective and is limited by factors such as the extent of remnant vegetation and the differing scale and quality of aerial photography over time. An alternative approach is proposed which integrates a data model, a statistical model and an ecological model, using sophisticated Geographic Information Systems (GIS) techniques and rule-based systems to support fine-scale vegetation community modelling. This approach is based on a more realistic representation of vegetation patterns, with transitional gradients from one vegetation community to another; arbitrary and often unrealistic sharp boundaries would otherwise be imposed on the model by the application of statistical methods alone. This GIS-integrated multivariate approach is applied to the problem of vegetation mapping in the complex vegetation communities of the Innisfail Lowlands in the Wet Tropics bioregion of northeastern Australia. The paper presents the full cycle of this vegetation modelling approach, including site sampling, variable selection, model selection, model implementation, internal model assessment, assessment of model predictions, integration of discrete vegetation community models to generate a composite pre-clearing vegetation map, model validation against an independent data set, and scale assessment of the model predictions. An accurate pre-clearing vegetation map of the Innisfail Lowlands was generated (r² = 0.83) through GIS integration of 28 separate statistical models. This modelling approach has good potential for wider application, including the provision of vital information for conservation planning and management; a scientific basis for the rehabilitation of disturbed and cleared areas; and a viable method for producing adequate vegetation maps for conservation and forestry planning in poorly studied areas.
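A minimal sketch of one way the integration step might look, reducing 28 per-community model outputs to a single composite map; the probability surfaces here are random stand-ins, and the paper's GIS- and rule-based integration is considerably richer:

```python
import numpy as np

rng = np.random.default_rng(0)
n_models, h, w = 28, 50, 50                 # 28 community models, toy raster
prob = rng.random((n_models, h, w))         # stand-in probability surfaces
prob /= prob.sum(axis=0)                    # normalise across communities

composite = prob.argmax(axis=0)             # most likely community per cell
confidence = prob.max(axis=0)               # winning probability per cell
print(composite.shape, f"mean confidence: {confidence.mean():.2f}")
```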
Abstract:
Liquid-liquid extraction has long been known as a unit operation that plays an important role in industry. This process is well known for its complexity and sensitivity to operating conditions. This thesis presents an attempt to explore the dynamics and control of this process using a systematic approach and state-of-the-art control system design techniques. The process was first studied experimentally under carefully selected operating conditions, which resemble the ranges employed in practice under stable and efficient conditions. Data were collected at steady-state conditions using adequate sampling techniques for the dispersed and continuous phases, as well as during the transients of the column, with the aid of a computer-based online data logging system and online concentration analysis. A stagewise single-stage backflow model was improved to mimic the dynamic operation of the column. The developed model accounts for the variation in hydrodynamics, mass transfer, and physical properties throughout the length of the column. End effects were treated by the addition of stages at the column entrances. Two parameters were incorporated in the model, namely a mass transfer weight factor, to correct for the assumption of no mass transfer in the settling zones at each stage, and backmixing coefficients, to handle the axial dispersion phenomena encountered in the course of column operation. The parameters were estimated by minimizing the differences between the experimental and model-predicted concentration profiles at steady-state conditions using a non-linear optimisation technique. The estimated values were then correlated as functions of the operating parameters and incorporated in the model equations. The model equations comprise a stiff differential-algebraic system, which was solved using the GEAR ODE solver. The calculated concentration profiles were compared to those measured experimentally, and very good agreement between the two profiles was achieved, within a relative error of ±2.5%. The developed rigorous dynamic model of the extraction column was used to derive linear time-invariant reduced-order models that relate the input variables (agitator speed, solvent feed flowrate and concentration, feed concentration and flowrate) to the output variables (raffinate concentration and extract concentration) using the asymptotic method of system identification. The reduced-order models were shown to be accurate in capturing the dynamic behaviour of the process, with a maximum modelling prediction error of 1%. The simplicity and accuracy of the derived reduced-order models allow for control system design and analysis of such complicated processes. The extraction column is a typical multivariable process, with agitator speed and solvent feed flowrate considered as manipulated variables, raffinate concentration and extract concentration as controlled variables, and the feed concentration and feed flowrate as disturbance variables. The control system design of the extraction process was tackled as a multi-loop decentralised SISO (Single Input Single Output) system as well as a centralised MIMO (Multi-Input Multi-Output) system, using both conventional and model-based control techniques such as IMC (Internal Model Control) and MPC (Model Predictive Control). The control performance of each scheme was studied in terms of stability, speed of response, sensitivity to modelling errors (robustness), setpoint tracking capabilities and load rejection.
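A minimal sketch of the stagewise backflow structure described above, integrated with a Gear-type stiff solver (SciPy's BDF method standing in for the GEAR code); the stage count, flows and rate constant are invented, not the thesis's identified values:

```python
import numpy as np
from scipy.integrate import solve_ivp

N = 10                        # hypothetical number of stages
F, alpha, k = 1.0, 0.3, 5.0   # forward flow, backmixing coefficient, transfer rate
x_feed = 1.0                  # feed concentration

def rhs(t, x):
    dx = np.empty_like(x)
    for i in range(N):
        up = x_feed if i == 0 else x[i - 1]      # forward flow from upstream stage
        down = x[i + 1] if i < N - 1 else x[i]   # backmixing from downstream stage
        dx[i] = F * (up - x[i]) + alpha * F * (down - x[i]) - k * x[i]
    return dx

sol = solve_ivp(rhs, (0.0, 20.0), np.zeros(N), method="BDF")  # stiff (Gear-type) solver
print(f"exit-stage concentration at t=20: {sol.y[-1, -1]:.4f}")
```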
For decentralised control, multiple loops were assigned to pair each manipulated variable with each controlled variable according to interaction analysis and other pairing criteria such as the relative gain array (RGA) and singular value decomposition (SVD). The loops pairing rotor speed with raffinate concentration and solvent flowrate with extract concentration showed weak interaction. Multivariable MPC showed more effective performance than the conventional techniques, since it accounts for loop interaction, time delays, and constraints on the input and output variables.
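The RGA used for the pairing above is a one-line computation, the elementwise product of the steady-state gain matrix and the transpose of its inverse; a minimal sketch with a hypothetical 2x2 gain matrix, not the thesis's identified gains:

```python
import numpy as np

# Hypothetical steady-state gain matrix: rows = (raffinate, extract) concentration,
# columns = (rotor speed, solvent flowrate)
G = np.array([[2.0, 0.4],
              [0.3, 1.5]])

RGA = G * np.linalg.inv(G).T   # elementwise (Hadamard) product
print(RGA)                     # diagonal elements near 1 => pair on the diagonal
```

Diagonal RGA elements close to 1, as here, indicate weakly interacting diagonal pairings, consistent with the loop assignment described above.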
Abstract:
Humanity has reached a time of unprecedented technological development. Science has developed, and continues to develop, technologies that allow us to understand the universe and the laws that govern it ever more deeply, and to try to coexist with the planet we live on without destroying it. One of the main challenges of the 21st century is to find and expand new sources of clean, renewable energy able to sustain our growth and lifestyle. It is the duty of every researcher to engage in and contribute to this energy race. In this context, wind power presents itself as one of the great promises for the future of electricity generation. Although somewhat more mature than other renewable energy sources, wind power still offers a wide field for improvement. The development of new generator control techniques, along with the development of research laboratories specializing in wind generation, is one of the key points for improving the performance, efficiency and reliability of the system. Appropriate control of the back-to-back converter scheme allows wind turbines based on the doubly-fed induction generator (DFIG) to operate in variable-speed mode, whose benefits include maximum power extraction, reactive power injection and mechanical stress reduction. The generator-side converter provides control of the active and reactive power injected into the grid, whereas the grid-side converter provides control of the DC link voltage and bi-directional power flow. The conventional control structure uses PI controllers with feed-forward compensation of the cross-coupling dq terms. This control technique is sensitive to model uncertainties, and the compensation of the dynamic dq terms results in a competing control strategy. To overcome these problems, this thesis proposes a robust internal-model-based state-feedback control structure that eliminates the cross-coupling terms and thereby improves the generator drive as well as its dynamic behavior during sudden changes in wind speed. The conventional control approach is compared with the proposed technique for DFIG wind turbine control under both steady and gusty wind conditions. This thesis also proposes a wind turbine emulator, developed to recreate realistic conditions in the laboratory and to subject the generator to a range of wind speed conditions.
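A minimal sketch of the conventional baseline named above, PI current control with feed-forward compensation of the dq cross-coupling terms; the gains, slip frequency and inductance are placeholders, not the thesis's machine data:

```python
class PI:
    """Discrete PI controller with a simple integrator accumulator."""
    def __init__(self, kp, ki, ts):
        self.kp, self.ki, self.ts, self.acc = kp, ki, ts, 0.0
    def step(self, err):
        self.acc += self.ki * err * self.ts
        return self.kp * err + self.acc

kp, ki, ts = 5.0, 50.0, 1e-4     # hypothetical gains and sample time
w_slip, sigma_Lr = 30.0, 0.05    # slip speed (rad/s) and transient inductance
pi_d, pi_q = PI(kp, ki, ts), PI(kp, ki, ts)

def rotor_voltage_refs(id_ref, iq_ref, id_meas, iq_meas):
    # PI terms plus feed-forward cancellation of the dq coupling
    vd = pi_d.step(id_ref - id_meas) - w_slip * sigma_Lr * iq_meas
    vq = pi_q.step(iq_ref - iq_meas) + w_slip * sigma_Lr * id_meas
    return vd, vq

print(rotor_voltage_refs(1.0, 0.0, 0.0, 0.0))
```

The feed-forward terms only cancel the coupling when the machine parameters are known exactly, which is precisely the sensitivity to model uncertainty that the proposed internal-model-based state-feedback structure is meant to overcome.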
Abstract:
The conventional control schemes applied to Shunt Active Power Filters (SAPF) are harmonic-extractor-based strategies (HEBSs), because their effectiveness depends on how quickly and accurately the harmonic components of the nonlinear loads are identified. The SAPF can also be implemented without load harmonic extractors; in this case, the harmonic compensating term is obtained from the system's active power balance. Such systems can be considered balanced-energy-based schemes (BEBSs), and their performance depends on how fast the system reaches the equilibrium state. Here, the phase currents of the power grid are indirectly regulated by double sequence controllers (DSC) with two degrees of freedom, where the internal model principle is employed to avoid reference frame transformations. Additionally, the DSC presents robustness when the SAPF operates under unbalanced conditions. Furthermore, a SAPF implemented without harmonic detection schemes compensates harmonic distortion and the reactive power of the load simultaneously. Its compensation capability, however, is limited by the SAPF power converter rating. This restriction can be minimized if the level of reactive power correction is managed. In this work, an estimation scheme for determining the filter currents is introduced to manage the compensation of reactive power. Experimental results are shown to demonstrate the performance of the proposed SAPF system.
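A minimal sketch of how the internal model principle is typically realised for sinusoidal grid quantities: a proportional-resonant controller whose resonant states embed the grid-frequency model. This is one common realisation, not necessarily the paper's double sequence controller; the discretisation is forward Euler and all gains and the 50 Hz frequency are placeholders:

```python
import math

class ProportionalResonant:
    """P + resonant term Kr*s/(s^2 + w0^2): zero steady-state error at w0
    (internal model principle for sinusoidal references)."""
    def __init__(self, kp, kr, w0, ts):
        self.kp, self.kr, self.w0, self.ts = kp, kr, w0, ts
        self.x1 = self.x2 = 0.0
    def step(self, err):
        # forward-Euler integration of the two resonant states
        x1 = self.x1
        self.x1 += self.ts * self.x2
        self.x2 += self.ts * (err - self.w0**2 * x1)
        return self.kp * err + self.kr * self.x2

ctrl = ProportionalResonant(kp=1.0, kr=200.0, w0=2 * math.pi * 50, ts=1e-5)
print(ctrl.step(0.1))
```

Because the sinusoidal model lives inside the controller in the stationary frame, no rotating reference frame transformation is needed, which is the advantage the abstract attributes to the internal model principle.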
Abstract:
Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that “Lévy random walks”—which can produce power law path length distributions—are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent’s goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers.
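A minimal numerical check of the claim that hyperbolic path lengths are well approximated by power laws in the tail, using a hyperbolic survival function with an arbitrary scale parameter:

```python
import numpy as np

l0 = 2.0                                  # hypothetical scale parameter
l = np.logspace(0, 3, 7)                  # path lengths from 1 to 1000
surv_hyp = l0 / (l0 + l)                  # hyperbolic survival function P(L > l)

# local log-log slope of the hyperbolic tail approaches -1, i.e. a power law
slopes = np.diff(np.log(surv_hyp)) / np.diff(np.log(l))
print(np.round(slopes, 2))                # tends to -1.0 at large l
```

On a log-log plot the hyperbolic distribution is curved at short lengths but asymptotically straight, which is why finite empirical samples of such path lengths can look convincingly power-law distributed.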
Abstract:
Magdeburg, Univ., Diss., 2007
Abstract:
Aim: When planning SIRT using 90Y microspheres, the partition model is used to refine the activity calculated by the body surface area (BSA) method, to potentially improve the safety and efficacy of treatment. For partition model dosimetry, accurate determination of the mean tumor-to-normal liver ratio (TNR) is critical, since it directly impacts absorbed dose estimates. This work aimed at developing and assessing a reliable methodology for the calculation of 99mTc-MAA SPECT/CT-derived TNR values, based on phantom studies. Materials and methods: IQ NEMA (6 hot spheres) and Kyoto liver phantoms with different hot/background activity concentration ratios were imaged on a SPECT/CT (GE Infinia Hawkeye 4). For each reconstruction with the IQ phantom, TNR quantification was assessed in terms of relative recovery coefficients (RC), and image noise was evaluated in terms of the coefficient of variation (COV) in the filled background. RCs were compared using OSEM with Hann, Butterworth and Gaussian filters, as well as FBP reconstruction algorithms. Regarding OSEM, RCs were assessed by varying different parameters independently, such as the number of iterations (i) and subsets (s) and the cut-off frequency of the filter (fc). The influence of attenuation and scatter corrections was also investigated. Furthermore, 2D-ROI and 3D-VOI contouring were compared. For this purpose, dedicated Matlab routines were developed in-house for automatic 2D-ROI/3D-VOI determination, to reduce intra-user and intra-slice variability. The best reconstruction parameters and the RCs obtained with the IQ phantom were then used to recover corrected TNRs for the Kyoto phantom with arbitrary hot-lesion sizes. In addition, we computed TNR volume histograms to better assess uptake heterogeneity. Results: The highest RCs were obtained with OSEM (i=2, s=10) coupled with the Butterworth filter (fc=0.8). Indeed, we observed a global 20% RC improvement over other OSEM settings and a 50% increase compared to the best FBP reconstruction. In any case, both attenuation and scatter corrections must be applied, improving RC while preserving good image noise (COV<10%). Both 2D-ROI and 3D-VOI analyses led to similar results. Nevertheless, we recommend using 3D-VOIs, since tumor uptake regions are intrinsically three-dimensional. RC-corrected TNR values lie within 17% of the true value, substantially improving the evaluation of small-volume (<15 mL) regions. Conclusions: This study reports the multi-parameter optimization of 99mTc-MAA SPECT/CT image reconstruction for planning 90Y dosimetry in SIRT. In phantoms, accurate quantification of TNR was obtained using OSEM coupled with a Butterworth filter and RC correction.
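A minimal sketch of the three quantities this phantom analysis turns on: the recovery coefficient, the background coefficient of variation, and an RC-corrected TNR. The formulas are the standard definitions; the numbers are invented, not the study's data:

```python
import numpy as np

def recovery_coefficient(measured_conc, true_conc):
    # RC = measured / true activity concentration in a hot sphere
    return measured_conc / true_conc

def background_cov(voxels):
    # COV = sigma / mean in the uniformly filled background
    voxels = np.asarray(voxels, dtype=float)
    return voxels.std() / voxels.mean()

def corrected_tnr(tnr_measured, rc):
    # undo partial-volume losses in the tumour VOI
    return tnr_measured / rc

rc = recovery_coefficient(measured_conc=6.3, true_conc=8.0)   # hypothetical values
print(f"RC={rc:.2f}, corrected TNR={corrected_tnr(3.1, rc):.2f}")
```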
Abstract:
The objectives of this Master's Thesis were to find out what kind of knowledge management strategy would best fit an IT organization that uses the ITIL (Information Technology Infrastructure Library) framework for IT Service Management, and to create a knowledge management process model to support the chosen strategy. The empirical material for this research was collected through qualitative semi-structured interviews at the case organization, Stora Enso Corporate IT. The results of the qualitative interviews indicate that a codification knowledge management strategy would fit the case organization best. The knowledge management process model was created based on earlier studies and the knowledge management literature. The model was evaluated in the interview research, and the results showed that the created process model is realistic and useful, and that it reflects a real-life phenomenon.
Abstract:
The North Atlantic Ocean subpolar gyre (NA SPG) is an important region for initialising decadal climate forecasts. Climate model simulations and palaeoclimate reconstructions have indicated that this region could also exhibit large, internally generated variability on decadal timescales. Understanding these modes of variability, their consistency across models, and the conditions in which they exist is clearly important for improving the skill of decadal predictions, particularly when these predictions are made with the same underlying climate models. Here we describe and analyse a mode of internal variability in the NA SPG in a state-of-the-art, high-resolution, coupled climate model. This mode has a period of 17 years and explains 15–30% of the annual variance in related ocean indices. It arises from the advection of heat content anomalies around the NA SPG. Anomalous circulation drives the variability in the southern half of the NA SPG, whilst mean circulation and anomalous temperatures are important in the northern half. A negative feedback between Labrador Sea temperatures/densities and those in the North Atlantic Current is identified, which allows for the phase reversal. The atmosphere is found to act as a positive feedback on this mode via the North Atlantic Oscillation, which itself exhibits a spectral peak at 17 years. Decadal ocean density changes associated with this mode are driven by variations in temperature rather than salinity, a point on which models often disagree and which we suggest may affect the veracity of the underlying assumptions of anomaly-assimilating decadal prediction methodologies.