30 results for Hierarchical dynamic models
Abstract:
In this thesis, a general approach is devised to model electrolyte sorption from aqueous solutions onto solid materials. Electrolyte sorption is often considered an unwanted phenomenon in ion exchange, and its potential as an independent separation method has not been fully explored. The solid sorbents studied here are porous and non-porous organic or inorganic materials, with or without specific functional groups attached to the solid matrix. Accordingly, the sorption mechanisms include physical adsorption, chemisorption on the functional groups, and partition restricted by electrostatic or steric factors. The model is tested in four case studies dealing with chelating adsorption of transition metal mixtures, physical adsorption of metal and metalloid complexes from chloride solutions, size exclusion of electrolytes in nano-porous materials, and electrolyte exclusion of electrolyte/non-electrolyte mixtures. The model parameters are estimated using experimental data from equilibrium and batch kinetic measurements, and they are used to simulate actual single-column fixed-bed separations. Phase equilibrium between the solution and solid phases is described using the thermodynamic Gibbs-Donnan model and various adsorption models, depending on the properties of the sorbent. The three-dimensional thermodynamic approach is used for volume sorption in gel-type ion exchangers and in nano-porous adsorbents, and satisfactory correlation is obtained provided that both mixing and exclusion effects are adequately taken into account. Two-dimensional surface adsorption models are successfully applied to physical adsorption of complex species and to chelating adsorption of transition metal salts; in the latter case, a comparison is also made with complex formation models. Results of the mass transport studies show that uptake rates even in a competitive high-affinity system can be described by constant diffusion coefficients, when the adsorbent structure and the phase equilibrium conditions are adequately included in the model. Furthermore, a simplified solution based on the linear driving force approximation and the shrinking-core model is developed for highly non-linear adsorption systems. In each case study, the actual separation is carried out batch-wise in fixed beds, and the experimental data are simulated/correlated using the parameters derived from equilibrium and kinetic data. Good agreement between the calculated and experimental breakthrough curves is usually obtained, indicating that the proposed approach is useful in systems that at first sight appear very different. For example, the marked improvement in copper separation from concentrated zinc sulfate solution at elevated temperatures can be correctly predicted by the model. In some cases, however, re-adjustment of the model parameters is needed owing to, e.g., high solution viscosity.
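As an illustration of the linear driving force (LDF) approximation mentioned above: a minimal sketch for a single-solute batch system, assuming a Langmuir isotherm (both the isotherm choice and all parameter values are illustrative assumptions, not taken from the thesis):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative parameters (not from the thesis)
k_ldf = 0.05   # LDF rate coefficient, 1/s
q_max = 2.0    # Langmuir capacity, mmol/g
K_L = 1.5      # Langmuir affinity, L/mmol
V = 1.0        # solution volume, L
m = 10.0       # adsorbent mass, g
c0 = 1.0       # initial solute concentration, mmol/L

def rhs(t, y):
    c, q = y
    q_eq = q_max * K_L * c / (1.0 + K_L * c)  # equilibrium loading at current c
    dq = k_ldf * (q_eq - q)                   # LDF: uptake rate ~ distance from equilibrium
    dc = -(m / V) * dq                        # solute mass balance over the batch
    return [dc, dq]

sol = solve_ivp(rhs, (0.0, 600.0), [c0, 0.0])
print(f"c(600 s) = {sol.y[0, -1]:.4f} mmol/L, q(600 s) = {sol.y[1, -1]:.4f} mmol/g")
```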
Abstract:
Cells of epithelial origin, e.g. from breast and prostate cancers, effectively differentiate into complex multicellular structures when cultured in three dimensions (3D) instead of on conventional two-dimensional (2D) adherent surfaces. The spectrum of different organotypic morphologies is highly dependent on the culture environment, which can be either non-adherent or scaffold-based. When embedded in physiological extracellular matrices (ECMs), such as laminin-rich basement membrane extracts, normal epithelial cells differentiate into acinar spheroids reminiscent of glandular ductal structures. Transformed cancer cells, in contrast, typically fail to undergo acinar morphogenic patterns, forming poorly differentiated or invasive multicellular structures. 3D cancer spheroids are widely accepted to better recapitulate various tumorigenic processes and drug responses. So far, however, 3D models have been employed predominantly in academia, whereas the pharmaceutical industry has yet to adopt them for wider and more routine use. This is mainly due to the poor characterisation of cell models, the lack of standardised workflows and high-throughput cell culture platforms, and the limited availability of proper readout and quantification tools. In this thesis, a complete workflow has been established, entailing well-characterised 3D cell culture models for prostate cancer, a standardised 3D cell culture routine based on a high-throughput-ready platform, automated image acquisition with concomitant morphometric image analysis, and data visualisation, in order to enable large-scale high-content screens. Our integrated suite of software and statistical analysis tools was optimised and validated using a comprehensive panel of prostate cancer cell lines and 3D models. The tools quantify multiple key cancer-relevant morphological features, ranging from cancer cell invasion through multicellular differentiation to growth, and detect dynamic changes both in morphology and function, such as cell death and apoptosis, in response to experimental perturbations including RNA interference and small molecule inhibitors. Our panel of cell lines included many non-transformed and most currently available classic prostate cancer cell lines, which were characterised for their morphogenetic properties in 3D laminin-rich ECM. The phenotypes and gene expression profiles were evaluated for their relevance to pre-clinical drug discovery, disease modelling and basic research. In addition, a spontaneous model for invasive transformation was discovered, displaying a high degree of epithelial plasticity. This plasticity is mediated by an abundant bioactive serum lipid, lysophosphatidic acid (LPA), and its receptor LPAR1. The invasive transformation was caused by abrupt cytoskeletal rearrangement through impaired G protein alpha 12/13 and RhoA/ROCK, and mediated by upregulated adenylyl cyclase/cyclic AMP (cAMP)/protein kinase A and Rac/PAK pathways. The spontaneous invasion model tangibly exemplifies the biological relevance of organotypic cell culture models. Overall, this thesis work underlines the power of novel morphometric screening tools in drug discovery.
Abstract:
Modern machine structures are often fabricated by welding. From a fatigue point of view, structural details, and especially welded details, are the most prone to fatigue damage and failure. Design against fatigue requires information on the fatigue resistance of a structure's critical details and on the stress loads that act on each detail. Even though dynamic simulation of flexible bodies is already a standard method for analyzing structures, obtaining the stress history of a structural detail during dynamic simulation is a challenging task, especially when the detail has a complex geometry. In particular, analyzing the stress history of every structural detail within a single finite element model can be overwhelming, since the number of nodal degrees of freedom needed in the model may require an impractical amount of computational effort. The purpose of computer simulation is to reduce the number of prototypes and speed up the product development process. Also, to take operator influence into account, real-time models, i.e. simplified and computationally efficient models, are required. This, in turn, requires stress computation to be efficient if it is to be performed during dynamic simulation. The research reviews the theoretical background of multibody dynamic simulation and the finite element method to find suitable parts for forming a new approach to efficient stress calculation. This study proposes that the problem of stress calculation during dynamic simulation can be greatly simplified by combining the floating frame of reference formulation with modal superposition and a sub-modeling approach. In practice, the proposed approach can be used to efficiently generate the relevant fatigue assessment stress history for a structural detail during or after dynamic simulation. In this work, numerical examples are presented to demonstrate the proposed approach in practice. The results show that the approach is applicable and can be used as proposed.
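As an illustration of the modal superposition idea underlying the proposed approach: a minimal sketch in which stress histories are recovered as linear combinations of precomputed modal stress shapes (all arrays here are random placeholders; in practice the shapes would come from a finite element analysis of the flexible body):

```python
import numpy as np

# Illustrative sizes: modes, simulation time steps, stress recovery points
n_modes, n_steps, n_points = 4, 1000, 50

rng = np.random.default_rng(0)
# Modal stress shapes: stress at each recovery point per unit modal coordinate
# (placeholder values standing in for FE results).
sigma_modes = rng.normal(size=(n_modes, n_points))   # MPa per unit coordinate
# Modal coordinate time histories from the multibody simulation (placeholders).
q = rng.normal(size=(n_steps, n_modes))

# Stress history at every recovery point: sigma(t) = sum_i q_i(t) * sigma_i
sigma_history = q @ sigma_modes                      # shape (n_steps, n_points)
print("peak stress at point 0:", sigma_history[:, 0].max(), "MPa")
```

The appeal of this formulation is that the expensive finite element work is done once, offline, while the per-step cost during simulation is a single small matrix product.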
Abstract:
This paper discusses existing military capability models and proposes a comprehensive capability meta-model (CCMM) which unites the existing capability models into an integrated and hierarchical whole. The Zachman Framework for Enterprise Architecture is used as a structure for the CCMM. The CCMM takes into account the abstraction level, the primary area of application, stakeholders, intrinsic process, and life cycle considerations of each existing capability model, and shows how the models relate to each other. The validity of the CCMM was verified through a survey of subject matter experts. The results suggest that the CCMM is of practical value to various capability stakeholders in many ways, such as helping to improve communication between the different capability communities.
Abstract:
With the shift towards many-core computer architectures, dataflow programming has been proposed as one potential solution for producing software that scales to a varying number of processor cores. Programming for parallel architectures is considered difficult, as the current popular programming languages are inherently sequential and introducing parallelism is typically left to the programmer. Dataflow, however, is inherently parallel, describing an application as a directed graph where nodes represent calculations and edges represent data dependencies in the form of queues. These queues are the only allowed communication between the nodes, making the dependencies between the nodes explicit and thereby also the parallelism. Once a node has sufficient inputs available, it can, independently of any other node, perform calculations, consume inputs, and produce outputs. Dataflow models have existed for several decades and have become popular for describing signal processing applications, as the graph representation is a very natural one within this field: digital filters are typically described with boxes and arrows, also in textbooks. Dataflow is also becoming more interesting in other domains and, in principle, any application working on an information stream fits the dataflow paradigm. Such applications include, among others, network protocols, cryptography, and multimedia applications. As an example, the MPEG group standardized a dataflow language called RVC-CAL to be used within reconfigurable video coding. Describing a video coder as a dataflow network instead of in a conventional programming language makes the coder more readable, as it describes how the video data flows through the different coding tools. While dataflow provides an intuitive representation for many applications, it also introduces some new problems that need to be solved in order for dataflow to be more widely used. The explicit parallelism of a dataflow program is descriptive and enables an improved utilization of the available processing units; however, the independent nodes also imply that some kind of scheduling is required. The need for efficient scheduling becomes even more evident when the number of nodes is larger than the number of processing units and several nodes are running concurrently on one processor core. There exist several dataflow models of computation, with different trade-offs between expressiveness and analyzability. These vary from rather restricted but statically schedulable, with minimal scheduling overhead, to dynamic, where each firing requires a firing rule to be evaluated. The model used in this work, namely RVC-CAL, is a very expressive language, and in the general case it requires dynamic scheduling; however, the strong encapsulation of dataflow nodes enables analysis, and the scheduling overhead can be reduced by using quasi-static, or piecewise static, scheduling techniques. The scheduling problem is concerned with finding the few scheduling decisions that must be made at run-time, while most decisions are pre-calculated. The result is then an, as small as possible, set of static schedules that are dynamically scheduled. To identify these dynamic decisions and to find the concrete schedules, this thesis shows how quasi-static scheduling can be represented as a model checking problem. This involves identifying the relevant information needed to generate a minimal but complete model to be used for model checking.
The model must describe everything that may affect the scheduling of the application while omitting everything else, in order to avoid state space explosion. This kind of simplification is necessary to make the state space analysis feasible. For the model checker to find the actual schedules, a set of scheduling strategies is defined which is able to produce quasi-static schedulers for a wide range of applications. The results of this work show that actor composition with quasi-static scheduling can be used to transform dataflow programs to fit many different computer architectures with different types and numbers of cores. This, in turn, enables dataflow to provide a more platform-independent representation, as one application can be fitted to a specific processor architecture without changing the actual program representation. Instead, the program representation is optimized by the development tools in the context of design space exploration to fit the target platform. This work focuses on representing the dataflow scheduling problem as a model checking problem and is implemented as part of a compiler infrastructure. The thesis also presents experimental results as evidence of the usefulness of the approach.
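As an illustration of the firing discipline described above: a minimal sketch of a two-node dataflow graph in plain Python, with queues as the only communication (generic Python, not RVC-CAL, and not code from the thesis):

```python
from collections import deque

class Actor:
    """A dataflow node: may fire only when every input queue holds a token."""
    def __init__(self, fn, inputs, output):
        self.fn = fn            # the calculation performed on each firing
        self.inputs = inputs    # input queues: the only allowed communication
        self.output = output    # output queue (None for a sink)

    def can_fire(self):
        return all(len(q) > 0 for q in self.inputs)

    def fire(self):
        tokens = [q.popleft() for q in self.inputs]   # consume inputs
        result = self.fn(*tokens)                     # perform the calculation
        if self.output is not None:
            self.output.append(result)                # produce output

# A two-node graph: scale -> square, connected by a queue.
q_in, q_mid, q_out = deque([1, 2, 3, 4]), deque(), deque()
scale = Actor(lambda x: 2 * x, [q_in], q_mid)
square = Actor(lambda x: x * x, [q_mid], q_out)

# A trivial fully dynamic scheduler: repeatedly fire any ready actor.
actors = [scale, square]
while any(a.can_fire() for a in actors):
    for a in actors:
        if a.can_fire():
            a.fire()
print(list(q_out))   # [4, 16, 36, 64]
```

In a quasi-static schedule, most of the decisions taken by the while-loop above would be replaced by precomputed firing sequences, leaving only the genuinely data-dependent choices to run-time.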
Abstract:
This thesis considers optimization problems arising in printed circuit board assembly. In particular, the case in which the electronic components of a single circuit board are placed using a single placement machine is studied. Although there is a large number of different placement machines, the use of collect-and-place-type gantry machines is discussed because of their flexibility and increasing popularity in the industry. Instead of solving the entire control optimization problem of a collect-and-place machine with a single application, the problem is divided into multiple subproblems because of its hard combinatorial nature. This dividing technique is called hierarchical decomposition. All the subproblems of the one-PCB/one-machine context are described, classified and reviewed. The derived subproblems are then either solved with exact methods, or new heuristic algorithms are developed and applied. The exact methods include, for example, a greedy algorithm and a solution based on dynamic programming. Some of the proposed heuristics contain constructive parts, while others utilize local search or are based on frequency calculations. For the heuristics, comprehensive experimental tests confirm that they are applicable and feasible. A number of quality functions are proposed for evaluation and applied to the subproblems. In the experimental tests, artificially generated data from Markov models and data from real-world PCB production are used. The thesis consists of an introduction and five publications in which the developed solution methods are described in full detail. For all the problems stated in this thesis, the methods proposed are efficient enough to be used in PCB assembly production in practice and are readily applicable in the PCB manufacturing industry.
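As an illustration of the kind of constructive heuristic used for such subproblems: a minimal nearest-neighbour sketch for placement sequencing (a generic greedy heuristic on illustrative coordinates, not the thesis's specific algorithm):

```python
import math

def nearest_neighbour_sequence(points, start=0):
    """Greedy placement sequencing: always move to the closest unplaced component."""
    unvisited = set(range(len(points))) - {start}
    order = [start]
    while unvisited:
        last = points[order[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        order.append(nxt)
        unvisited.remove(nxt)
    return order

# Illustrative component positions on the board (mm).
positions = [(0, 0), (30, 5), (5, 25), (28, 27), (12, 10)]
seq = nearest_neighbour_sequence(positions)
length = sum(math.dist(positions[a], positions[b]) for a, b in zip(seq, seq[1:]))
print(seq, f"head travel = {length:.1f} mm")
```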
Abstract:
Time series analysis can be categorized into three different approaches: classical, Box-Jenkins, and state space. The classical approach forms the foundation for the analysis, and the Box-Jenkins approach improves on the classical approach and deals with stationary time series. The state space approach allows time-variant factors and covers a broader area of time series analysis. This thesis focuses on the parameter identifiability of different parameter estimation methods, such as LSQ, Yule-Walker and MLE, which are used in the above time series analysis approaches. In addition, the Kalman filter method and smoothing techniques are integrated with the state space approach and the MLE method to estimate parameters while allowing them to change over time. Parameter estimation is carried out by repeated estimation integrated with MCMC, inspecting how well the different estimation methods can identify the optimal model parameters. Identification is performed in both probabilistic and general senses, and the results are compared in order to study and represent identifiability in a more informative way.
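As an illustration of the state space machinery: a minimal scalar Kalman filter for a local-level model, accumulating the filtered log-likelihood that an MLE or MCMC scheme would maximise or sample from (noise variances and data are illustrative, not from the thesis):

```python
import numpy as np

rng = np.random.default_rng(1)

# Local-level model: x_t = x_{t-1} + w_t,  y_t = x_t + v_t
q_var, r_var = 0.1, 1.0                 # process / observation noise (illustrative)
true_x = np.cumsum(rng.normal(0, np.sqrt(q_var), 200))
y = true_x + rng.normal(0, np.sqrt(r_var), 200)

x_hat, p = 0.0, 1.0                     # initial state estimate and variance
loglik = 0.0
for obs in y:
    # Predict step
    p += q_var
    # Update step
    s = p + r_var                       # innovation variance
    k = p / s                           # Kalman gain
    innov = obs - x_hat
    loglik += -0.5 * (np.log(2 * np.pi * s) + innov**2 / s)
    x_hat += k * innov
    p *= (1 - k)

print(f"final state estimate {x_hat:.3f}, filtered log-likelihood {loglik:.1f}")
```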
Abstract:
Traditionally, real estate has been seen as a good diversification tool for a stock portfolio due to the lower return and volatility characteristics of real estate investments. However, the diversification benefits of a multi-asset portfolio depend on how the different asset classes co-move in the short and long run. As the asset classes are affected by the same macroeconomic factors, interrelationships limiting the diversification benefits could exist. This master's thesis aims to identify such dynamic linkages in the Finnish real estate and stock markets. The results are beneficial for portfolio optimization tasks as well as for policy-making. The real estate industry can be divided into direct and securitized markets. In this thesis, the direct market is depicted by the Finnish housing market index. The securitized market is proxied by the Finnish all-sectors securitized real estate index and by a European residential Real Estate Investment Trust index. The stock market is depicted by the OMX Helsinki Cap index. Several macroeconomic variables are incorporated as well. The methodology of this thesis is based on Vector Autoregressive (VAR) models. The long-run dynamic linkages are studied with Johansen's cointegration tests, and the short-run interrelationships are examined with Granger causality tests. In addition, impulse response functions and forecast error variance decomposition analyses are used as robustness checks. The results show that long-run co-movement, or cointegration, did not exist between the housing and stock markets during the sample period. This indicates diversification benefits in the long run. However, cointegration between the stock and securitized real estate markets was identified. This indicates limited diversification benefits and shows that the listed real estate market in Finland has not matured enough to be considered a market separate from the general stock market. Moreover, while securitized real estate was shown to cointegrate with the housing market in the long run, the two markets are still too different in their characteristics to be used as substitutes in a multi-asset portfolio. This implies that the capital intensiveness of housing investments cannot be circumvented by investing in securitized real estate.
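As an illustration of the methodology: a minimal sketch of a Johansen cointegration test and a Granger causality test with statsmodels on synthetic data (series names and data are placeholders, not the thesis data):

```python
import numpy as np
from statsmodels.tsa.vector_ar.vecm import coint_johansen
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(2)
n = 500
common = np.cumsum(rng.normal(size=n))            # shared stochastic trend
stocks = common + rng.normal(scale=0.5, size=n)   # placeholder stock index
reits = common + rng.normal(scale=0.5, size=n)    # placeholder securitized RE index
data = np.column_stack([stocks, reits])

# Johansen test: trace statistics against critical values (90/95/99% columns)
jres = coint_johansen(data, det_order=0, k_ar_diff=1)
print("trace stats:      ", jres.lr1)
print("95% critical vals:", jres.cvt[:, 1])

# Granger causality on the differenced (stationary) series
gres = grangercausalitytests(np.diff(data, axis=0), maxlag=2)
```

A trace statistic below the critical value (no cointegration) is what the thesis interprets as long-run diversification benefits.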
Abstract:
This thesis studies the impact of the latest Russian crisis on global markets, and especially on Central and Eastern Europe. The results are compared to other shocks and crises over the last twenty years to assess how significant they have been. The cointegration process of Central and Eastern European financial markets is also reviewed and updated. Using three separate conditional correlation GARCH models, the latest crisis is not found to have initiated surges in conditional correlations similar to those of previous crises over the last two decades. Market cointegration for Central and Eastern Europe is found to have stalled somewhat after the initial correlation increases following EU accession.
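As an illustration of the conditional-correlation idea: a minimal sketch that fits univariate GARCH(1,1) models with the arch package and tracks the rolling correlation of the standardized residuals, a simple stand-in for a full CCC/DCC estimation (data are synthetic, not the thesis data):

```python
import numpy as np
import pandas as pd
from arch import arch_model

rng = np.random.default_rng(3)
n = 1000
shock = rng.normal(size=n)                        # common shock
r1 = 0.8 * shock + rng.normal(scale=0.6, size=n)  # placeholder market 1 returns (%)
r2 = 0.8 * shock + rng.normal(scale=0.6, size=n)  # placeholder market 2 returns (%)

def std_resid(returns):
    # Fit a univariate GARCH(1,1) and standardize residuals by fitted volatility
    res = arch_model(returns, vol="GARCH", p=1, q=1).fit(disp="off")
    return res.resid / res.conditional_volatility

z = pd.DataFrame({"m1": std_resid(r1), "m2": std_resid(r2)})
rolling_corr = z["m1"].rolling(250).corr(z["m2"])
print(rolling_corr.dropna().describe())
```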
Abstract:
Guided by the social-ecological conceptualization of bullying, this thesis examines the implications of classroom and school contexts—that is, students’ shared microsystems—for peer-to-peer bullying and antibullying practices. Included are four original publications, three of which are empirical studies utilizing data from a large Finnish sample of students in the upper grade levels of elementary school. Both self- and peer reports of bullying and victimization are utilized, and the hierarchical nature of the data collected from students nested within school ecologies is accounted for by multilevel modeling techniques. The first objective of the thesis is to simultaneously examine risk factors for victimization at individual, classroom, and school levels (Study I). The second objective is to uncover the individual- and classroom-level working mechanisms of the KiVa antibullying program which has been shown to be effective in reducing bullying problems in Finnish schools (Study II). Thirdly, an overview of the extant literature on classroom- and school-level contributions to bullying and victimization is provided (Study III). Finally, attention is paid to the assessment of victimization and, more specifically, to how the classroom context influences the concordance between self- and peer reports of victimization (Study IV). Findings demonstrate the multiple ways in which contextual factors, and importantly students’ perceptions thereof, contribute to the bullying dynamic and efforts to counteract it. Whereas certain popular beliefs regarding the implications of classroom and school contexts do not receive support, the role of peer contextual factors and the significance of students’ perceptions of teachers’ attitudes toward bullying are highlighted. Directions for future research and school-based antibullying practices are suggested.
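As an illustration of the multilevel modeling mentioned above: a minimal sketch of a random-intercept model for students nested within classrooms, using statsmodels on synthetic data (all variable names and effect sizes are hypothetical, not the thesis's measures):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(4)
n_classes, n_students = 40, 25

# Synthetic nested data: students (level 1) within classrooms (level 2).
class_id = np.repeat(np.arange(n_classes), n_students)
class_effect = rng.normal(0, 0.5, n_classes)[class_id]   # classroom-level variation
rejection = rng.normal(size=class_id.size)               # individual-level predictor
victimization = 0.4 * rejection + class_effect + rng.normal(size=class_id.size)

df = pd.DataFrame({"victimization": victimization,
                   "rejection": rejection,
                   "classroom": class_id})

# Random-intercept multilevel model: accounts for clustering within classrooms
model = smf.mixedlm("victimization ~ rejection", df, groups=df["classroom"])
print(model.fit().summary())
```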
Abstract:
This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov Chain Monte Carlo (MCMC) algorithms. This is done by considering a 1995 Ebola outbreak in the Democratic Republic of Congo, former Zaïre, as a case study for SEIR epidemic models. We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs to take into account a possible bias in each compartment. Since the model has unknown parameters, we use different methods to estimate them, such as least squares, maximum likelihood and MCMC. The motivation for choosing MCMC over the other existing methods in this thesis is its ability to tackle complicated nonlinear problems with large numbers of parameters. First, in a deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate parameters using the LSQ and MCMC methods. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for using the SDE formulation here is to consider the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. In this thesis, we analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e. only a small amount of stochasticity is added to the model; the results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function are different; (2) the model error covariance matrix differs from zero, i.e. considerable stochasticity is introduced into the Ebola model, which accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances. Consequently, the model predictions then show larger uncertainties, in accordance with the assumption of an incomplete model.
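As an illustration of the deterministic part of the analysis: a minimal SEIR ODE sketch together with the basic reproduction number (all parameter values are illustrative, not the thesis's estimates):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Illustrative SEIR parameters (not the thesis's estimates)
beta, kappa, gamma = 0.35, 1 / 9.0, 1 / 7.0   # transmission, incubation, recovery rates
N = 5_000_000                                 # population size

def seir(t, y):
    S, E, I, R = y
    new_inf = beta * S * I / N                # force of infection acting on S
    return [-new_inf,
            new_inf - kappa * E,
            kappa * E - gamma * I,
            gamma * I]

y0 = [N - 10, 0, 10, 0]                       # 10 initial infectious cases
sol = solve_ivp(seir, (0, 300), y0, max_step=1.0)

# For this SEIR formulation the basic reproduction number is R0 = beta / gamma
print(f"R0 = {beta / gamma:.2f}, peak infectious = {sol.y[2].max():.0f}")
```

The MCMC step would wrap a likelihood built from trajectories like this one, sampling beta, kappa and gamma from their posterior.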
Abstract:
Since different stock markets have become more integrated during the 2000s, investors need new asset classes in order to gain diversification benefits. Commodities have become popular to invest in, and thus it is important to examine whether investors should use commodities as part of portfolio diversification. This master's thesis examines the dynamic relationship between the Finnish stock market and commodities. The methodology is based on Vector Autoregressive (VAR) models. The long-run relationship between the Finnish stock market and commodities is examined with Johansen cointegration tests, while the short-run relationship is examined with VAR models and Granger causality tests. In addition, impulse response analysis and forecast error variance decomposition are employed to strengthen the results on the short-run relationship. The dynamic relationships might change under different market conditions; thus, the sample period is divided into two sub-samples in order to reveal whether the dynamic relationship varies under different market conditions. The results show that the Finnish stock market has a stable long-run relationship with industrial metals, indicating that there would not be diversification benefits among the industrial metals. The long-run relationship between the Finnish stock market and energy commodities is not as stable: a long-run relationship was found in the full sample period and the first sub-sample, indicating less room for diversification, but it disappeared in the second sub-sample, which indicates diversification benefits. A long-run relationship between the Finnish stock market and agricultural commodities was not found in the full sample period, which indicates diversification benefits between the variables; however, a long-run relationship was found in both sub-samples. The best diversification benefits would be achieved by investing in precious metals, for which no long-run relationship was found in any sample. In the full sample period, OMX Helsinki had a short-run relationship with most of the energy commodities and industrial metals, and the causality ran mostly from equities to commodities. During the first sub-period the number of short-run relationships and causal links shrank, but during the crisis period it increased. The most notable finding was a unidirectional causality from gold to OMX Helsinki during the crisis period.
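As an illustration of the impulse response and variance decomposition tools used above: a minimal statsmodels VAR sketch on synthetic stationary returns (the lagged dependence and all names are placeholders, not the thesis data):

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(5)
n = 400
# Synthetic returns where commodity shocks feed into equities with a lag
commodity = rng.normal(size=n)
equity = np.zeros(n)
for t in range(1, n):
    equity[t] = 0.3 * commodity[t - 1] + rng.normal()

data = np.column_stack([equity, commodity])
res = VAR(data).fit(maxlags=4, ic="aic")

irf = res.irf(10)           # impulse responses, 10 periods ahead
fevd = res.fevd(10)         # forecast error variance decomposition
print(irf.irfs.shape)       # (11, 2, 2): response of each variable to each shock
print(fevd.decomp[0, -1])   # share of equity forecast variance from each shock
```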
Abstract:
Increased rotational speed brings many advantages to an electric motor. One of the benefits is that when the desired power is generated at increased rotational speed, the torque demanded from the rotor decreases linearly, and as a consequence, a motor of smaller size can be used. Using a rotor with high rotational speed in a system with mechanical bearings can, however, create undesirable vibrations, and therefore active magnetic bearings (AMBs) are often considered a good option for the main bearings, as the rotor then has no mechanical contact with other parts of the system but levitates on magnetic forces. On the other hand, such systems can experience overloading or a sudden shutdown of the electrical system, whereupon the magnetic field collapses and, as a result of rotor delevitation, mechanical contact occurs. To manage such nonstandard operations, AMB systems require mechanical touchdown bearings with an oversized bore diameter. The need for touchdown bearings seems to be one of the barriers preventing greater adoption of AMB technology, because in the event of an uncontrolled touchdown, failure may occur, for example, in the bearing's cage or balls, or in the rotor. This dissertation consists of two parts. First, touchdown bearing misalignment in the contact event is studied. It is found that misalignment increases the likelihood of a potentially damaging whirling motion of the rotor. A model for analysis of the stresses occurring in the rotor is proposed. In the studies of misalignment and stresses, a flexible rotor modelled with a finite element approach is applied, and simplified models of cageless and caged bearings are used to describe the touchdown bearings. The results indicate that an increase in misalignment can have a direct influence on the bending and shear stresses occurring in the rotor during the contact event. Thus, it is concluded that analysis of the stresses arising in the contact event is essential to guarantee appropriate system dimensioning for possible contact events with misaligned touchdown bearings. One of the conclusions drawn from the first part of the study is that knowledge of the forces affecting the balls and cage of the touchdown bearings can enable a more reliable estimation of the service life of the bearing. Therefore, the second part of the dissertation investigates the forces occurring in the cage and balls of touchdown bearings and introduces two detailed models of touchdown bearings in which all bearing parts are modelled as independent bodies. Two multibody-based two-dimensional models of touchdown bearings are introduced for dynamic analysis of the contact event. All parts of the bearings are modelled with geometrical surfaces, and the bodies interact with each other through elastic contact forces. To assist in identifying the forces affecting the balls and cage in the contact event, the first model describes a touchdown bearing without a cage, and the second model describes a touchdown bearing with a cage. The introduced models are compared with the simplified models used in the first part of the dissertation through a parametric study. Damage to the rotor, cage and balls is among the main reasons for failures of AMB systems. The stresses in the rotor during the contact event are defined in this work. Furthermore, the forces affecting the key bodies of the bearings, the cage and the balls, can be studied using the models of touchdown bearings introduced in this dissertation. The knowledge obtained from the introduced models is valuable, since it enables the design of an optimal structure for the rotor and the touchdown bearings.
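As an illustration of the elastic contact forces on which such models are built: a minimal sketch of a Hertzian-type normal contact force with a simple penetration-dependent damping term (a common generic choice; all coefficients are illustrative and the dissertation's actual contact model may differ):

```python
def contact_force(penetration, penetration_rate,
                  k=5.0e9, exponent=1.5, c=1.0e4):
    """Hertzian-type normal contact force with penetration damping.

    penetration      -- overlap between the two bodies (m), <= 0 if separated
    penetration_rate -- time derivative of the penetration (m/s)
    k, exponent      -- Hertzian stiffness (N/m^1.5) and exponent (illustrative)
    c                -- damping coefficient (illustrative)
    """
    if penetration <= 0.0:
        return 0.0   # bodies not in contact
    # Elastic term plus damping scaled by penetration (vanishes at first touch)
    return k * penetration**exponent + c * penetration * penetration_rate

# Example: 5 micrometre penetration closing at 1 mm/s
print(f"{contact_force(5e-6, 1e-3):.1f} N")
```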
Abstract:
Our surrounding landscape is in a constantly dynamic state, but recently the rate of change and its effects on the environment have considerably increased. In terms of the impact on nature, this development has not been entirely positive, but has rather caused a decline in valuable species, habitats, and general biodiversity. Despite recognition of the problem and its high importance, plans and actions for stopping the detrimental development are largely lacking. This partly originates from a lack of genuine will, but is also due to difficulties in detecting many valuable landscape components, and their consequent neglect. To support knowledge extraction, various digital environmental data sources may be of substantial help, but only if all the relevant background factors are known and the data is processed in a suitable way. This dissertation concentrates on detecting ecologically valuable landscape components by using geospatial data sources, and applies this knowledge to support spatial planning and management activities. In other words, the focus is on observing regionally valuable species, habitats, and biotopes with GIS and remote sensing data, using suitable methods for their analysis. Primary emphasis is given to the hemiboreal vegetation zone and the drastic decline of its semi-natural grasslands, which were created by a long trajectory of traditional grazing and management activities. However, the applied perspective is largely methodological, and allows the obtained results to be applied in various contexts. The dissertation emphasises models based on statistical dependencies and correlations of multiple variables, which are able to extract the desired properties from a large mass of initial data. In addition, the included papers combine several data sets from different sources and dates, with the aim of detecting a wider range of environmental characteristics and pointing out their temporal dynamics. The results of the dissertation emphasise the multidimensionality and dynamics of landscapes, which need to be understood in order to recognise their ecologically valuable components. This not only requires knowledge about the emergence of these components and an understanding of the data used, but also focusing the observations on minute details that can indicate the existence of fragmented and partly overlapping landscape targets. It also points to the fact that most of the existing classifications are, as such, too generalised to provide all the required details, although they can be utilized at various steps along a longer processing chain. The dissertation also emphasises the importance of landscape history as a factor which both creates and preserves ecological values, and which sets an essential standpoint for understanding present landscape characteristics. The obtained results are significant both for preserving semi-natural grasslands and for general methodological development, supporting a science-based framework for evaluating ecological values and guiding spatial planning.
Abstract:
The share of variable renewable energy in electricity generation has seen exponential growth during recent decades, and owing to the heightened pursuit of environmental targets, the trend is set to continue at an increased pace. The two most important resources, wind and insolation, both bear the burden of intermittency, creating a need for regulation and posing a threat to grid stability. One possibility for dealing with the imbalance between demand and generation is to store electricity temporarily, which was addressed in this thesis by implementing a dynamic model of adiabatic compressed air energy storage (CAES) with the Apros dynamic simulation software. Based on a literature review, the existing models were found, owing to their simplifications, insufficient for studying transient situations, and despite its importance, the investigation of part-load operation has not previously been possible with satisfactory precision. As a key result of the thesis, the cycle efficiency at the design point was simulated to be 58.7%, which correlated well with values in the literature and was validated through analytical calculations. The performance at part load was validated against models reported in the literature, showing good correlation. By introducing wind resource and electricity demand data to the model, the grid operation of CAES was studied. In order to enable this dynamic operation, start-up and shutdown sequences were approximated in a dynamic environment for, as far as is known, the first time, and a user component for compressor variable guide vanes (VGV) was implemented. Even in its current state, the modularly designed model offers a framework for numerous studies. The validity of the model is limited by the accuracy of the VGV correlations at part load; in addition, the implementation of heat losses to the thermal energy storage is necessary to enable longer simulations. More extensive use of forecasts is one of the important targets of development if the system operation is to be optimised in the future.
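As an illustration of the thermodynamic bookkeeping behind a CAES cycle efficiency figure: a minimal, idealised single-stage sketch (ideal gas, lossless thermal storage, illustrative isentropic efficiencies). It deliberately omits the staging, pressure losses and storage heat losses that bring a real plant down to values such as the 58.7% reported above:

```python
# Idealised adiabatic CAES energy bookkeeping (illustrative values only).
cp, gamma = 1.005, 1.4          # air: kJ/(kg K), heat capacity ratio
T1, p_ratio = 293.15, 60.0      # inlet temperature (K), charge pressure ratio
eta_c, eta_t = 0.85, 0.88       # isentropic efficiencies (illustrative)

# Compression: the temperature rise is stored in the TES for reuse at discharge
T2s = T1 * p_ratio ** ((gamma - 1) / gamma)   # isentropic outlet temperature
T2 = T1 + (T2s - T1) / eta_c                  # actual outlet temperature
w_comp = cp * (T2 - T1)                       # specific compression work, kJ/kg

# Expansion: air reheated by the TES expands back to ambient pressure
T3 = T2                                       # lossless TES returns all stored heat
T4s = T3 * (1 / p_ratio) ** ((gamma - 1) / gamma)
w_turb = eta_t * cp * (T3 - T4s)              # specific turbine work, kJ/kg

print(f"idealised cycle efficiency = {w_turb / w_comp:.1%}")
```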