918 results for Input-Output Modelling
                                
Abstract:
In 2006, a large and prolonged bloom of the dinoflagellate Karenia mikimotoi occurred in Scottish coastal waters, causing extensive mortalities of benthic organisms including annelids and molluscs and some species of fish (Davidson et al., 2009). A coupled hydrodynamic-algal transport model was developed to track the progression of the bloom around the Scottish coast during June–September 2006 and hence investigate the processes controlling the bloom dynamics. Within this individual-based model, cells were capable of growth, mortality and phototaxis and were transported by physical processes of advection and turbulent diffusion, using current velocities extracted from operational simulations of the MRCS ocean circulation model of the North-west European continental shelf. Vertical and horizontal turbulent diffusion of cells were treated using a random walk approach. Comparison of model output with remotely sensed chlorophyll concentrations and cell counts from coastal monitoring stations indicated that it was necessary to include multiple spatially distinct seed populations of K. mikimotoi at separate locations on the shelf edge to capture the qualitative pattern of bloom transport and development. We interpret this as indicating that the source population was being transported northwards by the Hebridean slope current from where colonies of K. mikimotoi were injected onto the continental shelf by eddies or other transient exchange processes. The model was used to investigate the effects on simulated K. mikimotoi transport and dispersal of: (1) the distribution of the initial seed population; (2) algal growth and mortality; (3) water temperature; (4) the vertical movement of particles by diurnal migration and eddy diffusion; (5) the relative role of the shelf edge and coastal currents; (6) the role of wind forcing. The numerical experiments emphasized the requirement for a physiologically based biological model and indicated that improved modelling of future blooms will potentially benefit from better parameterisation of temperature dependence of both growth and mortality and finer spatial and temporal hydrodynamic resolution.
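As a rough illustration of the transport scheme described above (advection plus a random-walk treatment of turbulent diffusion), the following Python sketch advances a set of cells one day forward in time. The diffusivities, time step, velocity field and seeding are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(42)

# Illustrative constants (not values from the paper)
DT = 600.0      # time step [s]
KH = 50.0       # horizontal eddy diffusivity [m^2/s]
KV = 1e-3       # vertical eddy diffusivity [m^2/s]

def advect_diffuse(pos, velocity_at, dt=DT):
    """One Lagrangian step: advection by the flow field plus a
    random-walk displacement representing turbulent diffusion."""
    u = velocity_at(pos)                        # (n, 3) velocities [m/s]
    jitter = rng.standard_normal(pos.shape)     # unit normal increments
    scale = np.sqrt(2.0 * np.array([KH, KH, KV]) * dt)
    return pos + u * dt + jitter * scale

def toy_velocity(pos):
    """Stand-in for velocities interpolated from a circulation model."""
    return np.tile([0.1, 0.05, 0.0], (len(pos), 1))  # steady drift [m/s]

cells = rng.uniform(0.0, 1000.0, size=(500, 3))      # seed population [m]
for _ in range(144):                                 # one day at DT = 600 s
    cells = advect_diffuse(cells, toy_velocity)
print("mean displacement [m]:", cells.mean(axis=0))
```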
                                
Abstract:
The predictive capability of high-fidelity finite element modelling to accurately capture the damage and crush behaviour of composite structures relies on the acquisition of accurate material properties, some of which have necessitated the development of novel approaches. This paper details the measurement of the interlaminar and intralaminar fracture toughness and the non-linear shear behaviour of carbon fibre (AS4)/thermoplastic polyetherketoneketone (PEKK) composite laminates, and the utilisation of these properties for the accurate computational modelling of crush. Double cantilever beam (DCB), four-point end-notched flexure (4ENF) and mixed-mode bending (MMB) test configurations were used to determine the initiation and propagation fracture toughness in mode I, mode II and mixed-mode loading, respectively. Compact tension (CT) and compact compression (CC) test samples were employed to determine the intralaminar longitudinal tensile and compressive fracture toughness. V-notched rail shear tests were used to measure the highly non-linear shear behaviour associated with thermoplastic composites, and the corresponding fracture toughness. Corresponding numerical models of these tests were developed for verification and yielded good correlation with the experimental response. This also confirmed the accuracy of the measured values, which were then employed as input material parameters for modelling the crush behaviour of a corrugated test specimen.
                                
Abstract:
One challenge in data assimilation (DA) methods is how the error covariance for the model state is computed. Ensemble methods have been proposed for producing error covariance estimates, as error is propagated in time using the non-linear model. Variational methods, on the other hand, use the concepts of control theory, whereby the state estimate is optimized from both the background and the measurements. Numerical optimization schemes are applied which solve the problems of memory storage and huge matrix inversion needed by classical Kalman filter methods. The Variational Ensemble Kalman Filter (VEnKF), a method inspired by the Variational Kalman Filter (VKF), enjoys the benefits of both ensemble methods and variational methods. It avoids the filter inbreeding problems which emerge when the ensemble spread underestimates the true error covariance. In VEnKF this is tackled by resampling the ensemble every time measurements are available. One advantage of VEnKF over VKF is that it needs neither tangent linear code nor adjoint code. In this thesis, VEnKF has been applied to a two-dimensional shallow water model simulating a dam-break experiment. The model is a public code, with water height measurements recorded at seven stations along the mid-line of the 21.2 m long, 1.4 m wide flume. Because the data were too sparse to constrain the 30,171-element model state vector, we chose to interpolate the data both in time and in space. The results of the assimilation were compared with those of a pure simulation. We found that the results produced by the VEnKF were more realistic, without the numerical artifacts present in the pure simulation. Creating a wrapper code for a model and a DA scheme can be challenging, especially when the two were designed independently or are poorly documented. In this thesis we present a non-intrusive approach to coupling the model and a DA scheme. An external program is used to send and receive information between the model and the DA procedure using files. The advantage of this method is that the changes needed in the model code are minimal: only a few lines which facilitate input and output. Apart from being simple to implement, the approach can be employed even if the model and the DA scheme are written in different programming languages, because the communication is not through code. The non-intrusive approach accommodates parallel computing simply by telling the control program to wait until all the processes have ended before the DA procedure is invoked. It is worth mentioning the overhead introduced by the approach, as at every assimilation cycle both the model and the DA procedure have to be initialized. Nonetheless, the method can be an ideal approach for a benchmark platform for testing DA methods. The non-intrusive VEnKF has been applied to the multi-purpose hydrodynamic model COHERENS to assimilate Total Suspended Matter (TSM) in Lake Säkylän Pyhäjärvi. The lake has an area of 154 km² with an average depth of 5.4 m. Turbidity and chlorophyll-a concentrations from MERIS satellite images for 7 days between May 16 and July 6, 2009 were available. The effect of the organic matter was computationally eliminated to obtain TSM data. Because of the computational demands of both COHERENS and VEnKF, we chose to use a 1 km grid resolution. The results of the VEnKF were compared with the measurements recorded at an automatic station located in the north-western part of the lake; however, because the TSM data were sparse in both time and space, the match was poor. The use of multiple automatic stations with real-time data is important to alleviate the time-sparsity problem. Combined with DA, this will, for instance, help in better understanding environmental hazard variables. We found that using a very high ensemble size does not necessarily improve the results, because there is a limit beyond which additional ensemble members add very little to the performance. The successful implementation of the non-intrusive VEnKF and the ensemble size limit for performance lead to the emerging area of Reduced Order Modelling (ROM). To save computational resources, ROM avoids running the full-blown model. When ROM is combined with the non-intrusive DA approach, it might result in a cheaper algorithm that relaxes the computational challenges existing in the field of modelling and DA.
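The non-intrusive, file-based coupling described above can be sketched as a small control program. Everything below (file names, the model command, the toy analysis step) is hypothetical; it only illustrates the pattern of exchanging state through files rather than through code:

```python
import subprocess
from pathlib import Path

import numpy as np

# Hypothetical file names and command; a model such as COHERENS and a
# VEnKF code would be wired up the same way, exchanging state via files.
MODEL_CMD = ["./run_model"]             # advances the model one cycle
STATE_FILE = Path("state_in.txt")       # read by the model at start-up
FORECAST_FILE = Path("state_out.txt")   # written by the model at the end

def assimilate(forecast, observations):
    """Toy placeholder for the VEnKF analysis step."""
    gain = 0.5  # not a real Kalman gain
    return forecast + gain * (observations - forecast)

def run_cycle(observations):
    # 1. Launch the model as a black box and wait for it to finish
    #    (for a parallel model: wait until all processes have ended).
    subprocess.run(MODEL_CMD, check=True)
    # 2. Read the forecast state the model wrote to disk.
    forecast = np.loadtxt(FORECAST_FILE)
    # 3. Run the analysis entirely outside the model code.
    analysis = assimilate(forecast, observations)
    # 4. Write the analysis back for the model's next cycle.
    np.savetxt(STATE_FILE, analysis)
    return analysis

# Example (requires the model executable and files to exist):
# state = run_cycle(observations=np.loadtxt("obs_today.txt"))
```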
                                
Abstract:
The aim of this study was to compute a swimming performance confirmatory model based on biomechanical parameters. The sample included 100 young swimmers (overall: 12.3 ± 0.74 years; 49 boys: 12.5 ± 0.76 years; 51 girls: 12.2 ± 0.71 years; both genders in Tanner stages 1-2 by self-report) participating on a regular basis in regional and national-level events. The 100 m freestyle event was chosen as the performance indicator. Anthropometric (arm span), strength (throwing velocity), power output (power to overcome drag), kinematic (swimming velocity) and efficiency (propelling efficiency) parameters were measured and included in the model. The path-flow analysis procedure was used to design and compute the model. The anthropometric parameter (arm span) was excluded from the final model, increasing its goodness-of-fit. The final model included the throwing velocity, power output, swimming velocity and propelling efficiency. All links between the included parameters were significant, except that between throwing velocity and power output. The final model explained 69% of the variance and presented a reasonable fit (χ²/df = 3.89). This model shows that strength and power output parameters play a mediating and meaningful role in young swimmers' performance.
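Path-flow analysis of this kind reduces to chained regressions on standardized variables. The sketch below, on synthetic data (not the study's measurements), shows how standardized path coefficients for the reported structure (throwing velocity feeding power output, power output and propelling efficiency feeding swimming velocity, swimming velocity feeding performance) could be estimated:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100  # same sample size as the study; the data here are synthetic

# Synthetic stand-ins for the measured variables
throw = rng.standard_normal(n)                         # throwing velocity
power = 0.3 * throw + rng.standard_normal(n)           # power output
eff = rng.standard_normal(n)                           # propelling efficiency
speed = 0.6 * power + 0.3 * eff + 0.5 * rng.standard_normal(n)
perf = 0.8 * speed + 0.3 * rng.standard_normal(n)      # performance proxy

def paths(y, *xs):
    """Standardized path coefficients: OLS on z-scored variables."""
    z = lambda v: (v - v.mean()) / v.std()
    X = np.column_stack([z(x) for x in xs])
    beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)
    return beta

print("power <- throw:     ", paths(power, throw))
print("speed <- power, eff:", paths(speed, power, eff))
print("perf  <- speed:     ", paths(perf, speed))
```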
                                
Abstract:
Accurate immunological models offer the possibility of performing high-throughput experiments in silico that can predict, or at least suggest, in vivo phenomena. In this chapter, we compare various models of immunological memory. We first validate an experimental immunological simulator, developed by the authors, by simulating several theories of immunological memory with known results. We then use the same system to evaluate the predicted effects of a theory of immunological memory. The resulting model has not been explored before in artificial immune systems research, and we compare the simulated in silico output with in vivo measurements. Although the theory appears valid, we suggest that immunological memory models are a useful support tool rather than conclusive in themselves.
                                
Abstract:
When designing systems that are complex, dynamic and stochastic in nature, simulation is generally recognised as one of the best design support technologies, and a valuable aid in the strategic and tactical decision making process. A simulation model consists of a set of rules that define how a system changes over time, given its current state. Unlike analytical models, a simulation model is not solved but is run, and the changes of system states can be observed at any point in time. This provides an insight into system dynamics rather than just predicting the output of a system based on specific inputs. Simulation is not a decision making tool but a decision support tool, allowing better informed decisions to be made. Due to the complexity of the real world, a simulation model can only be an approximation of the target system. The essence of the art of simulation modelling is abstraction and simplification. Only those characteristics that are important for the study and analysis of the target system should be included in the simulation model. The purpose of simulation is either to better understand the operation of a target system, or to make predictions about a target system's performance. It can be viewed as an artificial white-room which allows one to gain insight but also to test new theories and practices without disrupting the daily routine of the focal organisation. What you can expect to gain from a simulation study is very well summarised by FIRMA (2000): if the theory that has been framed about the target system holds, and if this theory has been adequately translated into a computer model, this would allow you to answer some of the following questions:
· Which kind of behaviour can be expected under arbitrarily given parameter combinations and initial conditions?
· Which kind of behaviour will a given target system display in the future?
· Which state will the target system reach in the future?
The required accuracy of the simulation model very much depends on the type of question one is trying to answer. In order to be able to respond to the first question the simulation model needs to be an explanatory model. This requires less data accuracy. In comparison, the simulation model required to answer the latter two questions has to be predictive in nature and therefore needs highly accurate input data to achieve credible outputs. These predictions involve showing trends, rather than giving precise and absolute predictions of the target system's performance. The numerical results of a simulation experiment on their own are most often not very useful and need to be rigorously analysed with statistical methods. These results then need to be considered in the context of the real system and interpreted in a qualitative way to make meaningful recommendations or compile best practice guidelines. One needs a good working knowledge of the behaviour of the real system to be able to fully exploit the understanding gained from simulation experiments. The goal of this chapter is to introduce the newcomer to what we think is a valuable asset to the toolset of analysts and decision makers. We give a summary of information we have gathered from the literature and of the experience we have gained first hand during the last five years, whilst obtaining a better understanding of this exciting technology. We hope that this will help you to avoid some pitfalls that we have unwittingly encountered.
Section 2 is an introduction to the different types of simulation used in Operational Research and Management Science with a clear focus on agent-based simulation. In Section 3 we outline the theoretical background of multi-agent systems and their elements to prepare you for Section 4 where we discuss how to develop a multi-agent simulation model. Section 5 outlines a simple example of a multi-agent system. Section 6 provides a collection of resources for further studies and finally in Section 7 we will conclude the chapter with a short summary.
                                
Abstract:
This study aims to model and forecast the tourism demand for Mozambique for the period from January 2004 to December 2013 using artificial neural network models. The number of overnight stays in hotels was used as representative of the tourism demand. A set of independent variables were tested as inputs to the model, namely: Consumer Price Index, Gross Domestic Product and Exchange Rates of the outbound tourism markets South Africa, United States of America, Mozambique, Portugal and the United Kingdom. The best model achieved a Mean Absolute Percentage Error of 6.5% and a Pearson correlation coefficient of 0.696. A model with such forecast accuracy is important for economic agents to anticipate the future growth of this activity sector, for stakeholders to plan products, services and infrastructure, and for hotel establishments to match their capacity to the tourism demand.
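A minimal sketch of this kind of neural-network demand model, using scikit-learn's MLPRegressor on synthetic monthly data (the feature set, network size and train/test split are assumptions, not the study's configuration), with MAPE as the accuracy measure:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic stand-ins: monthly CPI, GDP and exchange-rate features for
# the source markets; overnight stays as the target.
n_months, n_features = 120, 12
X = rng.standard_normal((n_months, n_features))
w = rng.uniform(0.5, 1.5, n_features)
y = 1000.0 + 50.0 * (X @ w) + rng.normal(0.0, 25.0, n_months)

X_train, X_test, y_train, y_test = X[:96], X[96:], y[:96], y[96:]

scaler = StandardScaler().fit(X_train)
net = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=1)
net.fit(scaler.transform(X_train), y_train)

pred = net.predict(scaler.transform(X_test))
mape = np.mean(np.abs((y_test - pred) / y_test)) * 100.0
print(f"MAPE: {mape:.1f}%")
```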
                                
Abstract:
A smart solar photovoltaic grid system arises from the convergence of information and communications technology (ICT) with power systems control engineering via the internet [1]. This thesis designs and demonstrates a smart solar photovoltaic grid system that is self-healing and environmentally and consumer friendly, with the ability to accommodate other renewable sources of energy generation seamlessly, creating a healthy competitive energy industry and optimising energy asset efficiency. This thesis also presents the modelling of an efficient dynamic smart solar photovoltaic power grid system, exploring maximum power point tracking efficiency and optimising the smart solar photovoltaic array through modelling and simulation to improve the quality of design of the solar photovoltaic module. Although quite promising results have been published in the literature over the past decade, most have not addressed the research questions posed in this thesis. The Levenberg-Marquardt and sparse-based algorithms have proven to be very effective tools in helping to improve the quality of design of solar photovoltaic modules, minimising the possible relative errors in this thesis. Guided by theoretical and analytical reviews of the literature, this research chose the MatLab/Simulink software toolbox for the modelling and simulation experiments performed on the static smart solar grid system. The auto-correlation coefficient results obtained from the modelling experiments give an accuracy of 99% with negligible mean square error (MSE), root mean square error (RMSE) and standard deviation. This thesis further explores the design and implementation of a robust real-time online solar photovoltaic monitoring system, establishing a comparative study of two solar photovoltaic tracking systems which provide remote access to the harvested energy data. This research made a landmark innovation in designing and implementing a unique approach to online remote-access solar photovoltaic monitoring systems, providing updated information on the energy produced by the solar photovoltaic module at the site location. In addressing the challenge of online solar photovoltaic monitoring, a Darfon online data logger device was systematically integrated into the design for a comparative study of the two solar photovoltaic tracking systems examined in this thesis. The site location for the comparative study of the solar photovoltaic tracking systems is the National Kaohsiung University of Applied Sciences, Taiwan, R.O.C. The overall comparative energy output efficiency of the azimuthal-altitude dual-axis system over the 45° stationary solar photovoltaic monitoring system, as observed at the research site, is about 72%, based on the total energy produced, the estimated money saved and the amount of CO2 reduction achieved. Similarly, in comparing the total amount of energy produced by the two solar photovoltaic tracking systems, the overall daily generated energy for the month of July shows the effectiveness of the azimuthal-altitude tracking system over the 45° stationary solar photovoltaic system. The azimuthal-altitude dual-axis tracking system was found to be about 68.43% more efficient than the 45° stationary solar photovoltaic system. Lastly, the overall comparative hourly energy efficiency of the azimuthal-altitude dual-axis system over the 45° stationary solar photovoltaic energy system was found to be 74.2%.
Results from this research are quite promising and significant in satisfying the research objectives and questions posed in the thesis. The new algorithms introduced in this research and the statistical measures applied to the modelling and simulation of the smart static solar photovoltaic grid system outperformed previous works in the reviewed literature. Based on this new design of the online data logging system for solar photovoltaic monitoring, it is possible for the first time to have online on-site information on the energy produced remotely, with fault identification and rectification, maintenance and recovery deployed as fast as possible. The results presented in this research on the Internet of Things (IoT) for smart solar grid systems are likely to offer real-life experience both to the existing body of knowledge and to the future solar photovoltaic energy industry, irrespective of the study site location of the comparative solar photovoltaic tracking systems. While the thesis has contributed to the smart solar photovoltaic grid system, it has also highlighted areas of further research and the need to investigate further the choice and quality of design of solar photovoltaic modules. Finally, it has also made recommendations for further research on the minimisation of the absolute or relative errors in the quality and design of the smart static solar photovoltaic module.
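As an illustration of how a Levenberg-Marquardt fit can improve the quality of a photovoltaic module model, the sketch below fits a simplified single-diode I-V characteristic (series and shunt resistance omitted, all data synthetic) with scipy's LM solver; it is a sketch under stated assumptions, not the thesis' actual algorithm:

```python
import numpy as np
from scipy.optimize import least_squares

VT = 0.02585  # thermal voltage at 25 degrees C [V]

def pv_current(params, v):
    """Simplified single-diode model; series/shunt resistance omitted.
    I0 is fitted on a log10 scale for better conditioning."""
    i_ph, log_i0, n = params
    return i_ph - 10.0**log_i0 * (np.exp(v / (n * VT)) - 1.0)

# Synthetic I-V curve standing in for measured module data
rng = np.random.default_rng(7)
v_meas = np.linspace(0.0, 0.6, 50)
i_meas = pv_current((5.0, -9.0, 1.3), v_meas) + rng.normal(0.0, 0.01, 50)

def residuals(params):
    return pv_current(params, v_meas) - i_meas

fit = least_squares(residuals, x0=(4.0, -8.0, 1.5), method="lm")
i_ph, log_i0, n = fit.x
print(f"Iph = {i_ph:.3f} A, I0 = {10.0**log_i0:.2e} A, n = {n:.3f}")
```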
                                
Abstract:
The anticipated growth of air traffic worldwide requires enhanced Air Traffic Management (ATM) technologies and procedures to increase the system capacity, efficiency, and resilience, while reducing environmental impact and maintaining operational safety. To deal with these challenges, new automation and information exchange capabilities are being developed through different modernisation initiatives toward a new global operational concept called Trajectory Based Operations (TBO), in which aircraft trajectory information becomes the cornerstone of advanced ATM applications. This transformation will lead to higher levels of system complexity requiring enhanced Decision Support Tools (DST) to aid humans in the decision making processes. These will rely on accurate predicted aircraft trajectories, provided by advanced Trajectory Predictors (TP). The trajectory prediction process is subject to stochastic effects that introduce uncertainty into the predictions. Regardless of the assumptions that define the aircraft motion model underpinning the TP, deviations between predicted and actual trajectories are unavoidable. This thesis proposes an innovative method to characterise the uncertainty associated with a trajectory prediction based on the mathematical theory of Polynomial Chaos Expansions (PCE). Assuming univariate PCEs of the trajectory prediction inputs, the method describes how to generate multivariate PCEs of the prediction outputs that quantify their associated uncertainty. Arbitrary PCE (aPCE) was chosen because it allows a higher degree of flexibility to model input uncertainty. The obtained polynomial description can be used in subsequent prediction sensitivity analyses thanks to the relationship between polynomial coefficients and Sobol indices. The Sobol indices enable ranking the input parameters according to their influence on trajectory prediction uncertainty. The applicability of the aPCE-based uncertainty quantification detailed herein is analysed through a case study. This case study represents a typical aircraft trajectory prediction problem in ATM, in which uncertain parameters regarding aircraft performance, aircraft intent description, weather forecast, and initial conditions are considered simultaneously. Numerical results are compared to those obtained from a Monte Carlo simulation, demonstrating the advantages of the proposed method. The thesis includes two examples of DSTs (Demand and Capacity Balancing tool, and Arrival Manager) to illustrate the potential benefits of exploiting the proposed uncertainty quantification method.
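The variance decomposition behind this approach can be illustrated with a tiny orthonormal PCE: because the basis is orthonormal, the output variance is the sum of squared coefficients, and grouping coefficients by input yields first-order Sobol indices. The toy model and Legendre basis below are illustrative assumptions, not the thesis' trajectory predictor:

```python
import numpy as np

rng = np.random.default_rng(3)

# Orthonormal Legendre polynomials for uniform inputs on [-1, 1]
def leg(k, x):
    return [np.ones_like(x),
            np.sqrt(3.0) * x,
            np.sqrt(5.0) * 0.5 * (3.0 * x**2 - 1.0)][k]

# Tensor-product basis up to total degree 2 for two inputs
index = [(0, 0), (1, 0), (0, 1), (2, 0), (1, 1), (0, 2)]

def design(x1, x2):
    return np.column_stack([leg(a, x1) * leg(b, x2) for a, b in index])

# Toy model standing in for the trajectory predictor
x1, x2 = rng.uniform(-1, 1, 2000), rng.uniform(-1, 1, 2000)
y = x1 + 0.5 * x2**2 + 0.2 * x1 * x2

coef, *_ = np.linalg.lstsq(design(x1, x2), y, rcond=None)

# Orthonormality: output variance is the sum of squared coefficients
var = np.sum(coef[1:] ** 2)
s1 = sum(c**2 for (a, b), c in zip(index, coef) if a > 0 and b == 0) / var
s2 = sum(c**2 for (a, b), c in zip(index, coef) if b > 0 and a == 0) / var
print(f"first-order Sobol indices: S1 = {s1:.2f}, S2 = {s2:.2f}")
```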
                                
Abstract:
In this contribution, a system identification procedure for a two-input Wiener model suitable for the analysis of the disturbance behavior of integrated nonlinear circuits is presented. The identified block model comprises two linear dynamic blocks and one static nonlinear block, which are determined using a parameterized approach. In order to characterize the linear blocks, a correlation analysis using a white noise input in combination with a model reduction scheme is adopted. After the linear blocks have been characterized, a linear set of equations is set up from the output spectrum under single-tone excitation at each input; its solution gives the coefficients of the nonlinear block. By this data-based black-box approach, the distortion behavior of a nonlinear circuit under the influence of an interfering signal at an arbitrary input port can be determined. Such an interfering signal can be, for example, an electromagnetic interference signal which conductively couples into the port of consideration.
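A single-channel sketch of the two identification steps (the actual procedure handles two inputs and derives the nonlinear coefficients from output spectra under single-tone excitation; here they are fitted by least squares instead): correlation analysis with white noise recovers the linear block's impulse response up to a gain, which is then absorbed into the fitted static polynomial. All system parameters below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(5)

# "True" system used only to generate data: FIR block + static cubic
h_true = np.array([0.5, 0.3, 0.15, 0.05])
nonlin = lambda z: z + 0.4 * z**2 - 0.1 * z**3

n = 20000
u = rng.standard_normal(n)              # white-noise excitation
z = np.convolve(u, h_true)[:n]          # hidden linear-block output
y = nonlin(z)                           # measured Wiener output

# 1. Correlation analysis: for Gaussian white noise the input/output
#    cross-correlation is proportional to the impulse response.
lags = 8
h_est = np.array([u[: n - k] @ y[k:] / (n - k) for k in range(lags)])
h_est /= np.linalg.norm(h_est)          # gain goes into the nonlinear block

# 2. Least squares for the static nonlinearity: regress the output on
#    powers of the estimated intermediate signal.
z_est = np.convolve(u, h_est)[:n]
A = np.column_stack([z_est, z_est**2, z_est**3])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("impulse response (normalised):", np.round(h_est[:4], 3))
print("polynomial coefficients:", np.round(coef, 3))
```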
                                
Abstract:
In our research we investigate the output accuracy of discrete event simulation models and agent-based simulation models when studying human-centric complex systems. In this paper we focus on human reactive behaviour, as it is possible in both modelling approaches to implement human reactive behaviour in the model using standard methods. As a case study we have chosen the retail sector, in particular the operations of the fitting room in the womenswear department of a large UK department store. In our case study we looked at ways of determining the efficiency of implementing new management policies for the fitting room operation through modelling the reactive behaviour of staff and customers of the department. First, we carried out a validation experiment in which we compared the results from our models to the performance of the real system. This experiment also allowed us to establish differences in output accuracy between the two modelling methods. In a second step, a multi-scenario experiment was carried out to study the behaviour of the models when they are used for the purpose of operational improvement. Overall we have found that, for our case study example, both discrete event simulation and agent-based simulation have the same potential to support the investigation into the efficiency of implementing new management policies.
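For readers unfamiliar with the DES side of such a comparison, a minimal fitting-room queue in Python's simpy gives the flavour; the arrival and service parameters are invented, and none of the reactive behaviour logic of the actual study is included:

```python
import random

import simpy

# Illustrative parameters, not values from the study
N_ROOMS = 4            # fitting-room cubicles
ARRIVAL_MEAN = 2.0     # minutes between customer arrivals
TRY_ON_MEAN = 5.0      # minutes spent in a cubicle
SIM_TIME = 480.0       # one trading day in minutes

waits = []

def customer(env, rooms):
    arrived = env.now
    with rooms.request() as req:        # queue for a free cubicle
        yield req
        waits.append(env.now - arrived)
        yield env.timeout(random.expovariate(1.0 / TRY_ON_MEAN))

def arrivals(env, rooms):
    while True:
        yield env.timeout(random.expovariate(1.0 / ARRIVAL_MEAN))
        env.process(customer(env, rooms))

random.seed(42)
env = simpy.Environment()
rooms = simpy.Resource(env, capacity=N_ROOMS)
env.process(arrivals(env, rooms))
env.run(until=SIM_TIME)
print(f"mean wait: {sum(waits) / len(waits):.2f} min "
      f"over {len(waits)} customers")
```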
                                
Abstract:
In this paper, we investigate output accuracy for a Discrete Event Simulation (DES) model and Agent Based Simulation (ABS) model. The purpose of this investigation is to find out which of these simulation techniques is the best one for modelling human reactive behaviour in the retail sector. In order to study the output accuracy in both models, we have carried out a validation experiment in which we compared the results from our simulation models to the performance of a real system. Our experiment was carried out using a large UK department store as a case study. We had to determine an efficient implementation of management policy in the store’s fitting room using DES and ABS. Overall, we have found that both simulation models were a good representation of the real system when modelling human reactive behaviour.
                                
Abstract:
This research investigated the behaviour of traditional discrete event simulation models and combined discrete event and agent-based simulation models when modelling human reactive and proactive behaviour in human-centric complex systems. A department store was chosen as the human-centric complex case study, in which the operation of a fitting room in the WomensWear department was investigated. We looked at ways to determine the efficiency of new management policies for the fitting room operation through simulating the reactive and proactive behaviour of staff towards customers. Once the simulation models had been developed and verified, we carried out a validation experiment in the form of a sensitivity analysis. Subsequently, we executed a statistical analysis in which the mixed reactive and proactive behaviour experimental results were compared with reactive experimental results from previously published work. Generally, this case study showed that simple proactive individual behaviour could be modelled in both simulation models. In addition, we found that the traditional discrete event model produced simulation output similar to that of the combined discrete event and agent-based simulation when modelling similar human behaviour.
                                
Abstract:
A novel numerical model of a Bent Backwards Duct Buoy (BBDB) Oscillating Water Column (OWC) Wave Energy Converter was created based on existing isolated numerical models of the different energy conversion systems utilised by an OWC. The novel aspect of this numerical model is that it incorporates the interdependencies of the different power conversion systems rather than modelling each system individually. This was achieved by accounting for the dynamic aerodynamic damping caused by the changing turbine rotational velocity: the turbine damping is recalculated for each simulation sample and applied via a feedback loop. The accuracy of the model was validated using experimental data collected during the Components for Ocean Renewable Energy Systems (CORES) EU FP-7 project, tested in Galway Bay, Ireland. During the verification process, it was discovered that the model could also be applied as a valuable tool for troubleshooting device performance. A new turbine was developed and added to a full-scale model after being investigated using Computational Fluid Dynamics. The energy storage capacity of the impulse turbine was investigated by modelling the turbine with both high and low inertia and applying three turbine control theories using the full-scale model. A single Maximum Power Point Tracking algorithm was applied to the low-inertia turbine, while both a fixed and a dynamic control algorithm were applied to the high-inertia turbine. The results suggest that the high-inertia turbine could be used as a flywheel energy storage device that could help minimise output power variation despite the low operating speed of the impulse turbine. This research identified the importance of applying dynamic turbine damping to a BBDB OWC numerical model, revealed additional value of the model as a device troubleshooting tool, and found that an impulse turbine could be applied as an energy storage system.
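The feedback loop described above (turbine damping recalculated every simulation sample from the current rotational speed and fed back into the pneumatic side) can be caricatured in a few lines. The physical relations and constants below are toy stand-ins, not the validated model:

```python
import numpy as np

# Toy constants, not values from the validated model
DT = 0.05       # simulation sample time [s]
J = 50.0        # turbine + generator inertia [kg m^2]
K_DAMP = 2.0e3  # links damping to rotor speed
K_TRQ = 0.8     # links aerodynamic torque to airflow

def airflow(t, damping):
    """Toy OWC chamber airflow: wave forcing resisted by the turbine."""
    wave = 5.0 * np.sin(2.0 * np.pi * t / 8.0)  # 8 s wave period
    return wave / (1.0 + damping / K_DAMP)

omega = 10.0  # initial rotor speed [rad/s]
for step in range(int(120.0 / DT)):             # two minutes of waves
    t = step * DT
    # Damping recomputed from the current rotor speed every sample,
    # then fed back into the airflow calculation for this sample.
    damping = K_DAMP * omega / 100.0
    q = airflow(t, damping)
    # Impulse-turbine caricature: unidirectional torque from
    # bidirectional flow, minus speed-proportional losses.
    torque = K_TRQ * q**2 - 0.05 * omega
    omega += DT * torque / J
print(f"final rotor speed: {omega:.2f} rad/s")
```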
 
                    