897 results for 2447: modelling and forecasting
Abstract:
Operational approaches are increasingly developed and used to provide marine data and information services to the socio-economic sectors of Blue Growth and to advance knowledge of the marine environment. The objective of operational oceanographic research is to develop and improve the efficiency, timeliness, robustness and product quality of these services. This white paper addresses key scientific challenges and research priorities for the development of operational oceanography in Europe over the next 5-10 years. Knowledge gaps and deficiencies are identified in relation to common scientific challenges in four EuroGOOS knowledge areas: European Ocean Observations, Modelling and Forecasting Technology, Coastal Operational Oceanography and Operational Ecology. The areas "European Ocean Observations" and "Modelling and Forecasting Technology" focus on further advancing the basic instruments and capacities of European operational oceanography, while "Coastal Operational Oceanography" and "Operational Ecology" aim at developing new operational approaches in their respective knowledge areas.
Abstract:
The efficiency of current cargo screening processes at sea and air ports is unknown, as no benchmarks exist against which they could be measured. Some manufacturer benchmarks exist for individual sensors, but we have not found any that take a holistic view of the screening procedure, assessing a combination of sensors while also taking operator variability into account. Simply adding up the resources and manpower used is not an effective way of assessing systems in which human decision-making and operator compliance with rules play a vital role. Such systems require more advanced assessment methods that account for the dynamic and stochastic nature of the cargo screening process. Our project aim is to develop a decision support tool (a cargo-screening system simulator) that maps the right technology and manpower to the right commodity-threat combination in order to maximize detection rates. In this paper we present a project outline and highlight the research challenges we have identified so far. In addition, we introduce our first case study, in which we investigate the cargo screening process at the ferry port in Calais.
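As a hedged illustration of the kind of assessment the project argues for, the sketch below runs a Monte Carlo simulation of a two-stage screening lane in which a sensor alarm only leads to a manual search if the operator follows procedure. All rates (sensor true/false positive rates, operator compliance, threat prevalence) are illustrative assumptions, not figures from the Calais case study.

```python
import random

def simulate_screening(n_trucks=10_000, prevalence=0.001,
                       sensor_tpr=0.85, sensor_fpr=0.05,
                       operator_compliance=0.9, seed=42):
    """Monte Carlo sketch of a two-stage cargo screening lane.

    A truck flagged by the sensor is only opened for manual search if
    the operator follows procedure; compliance < 1 models the human
    factor that simple resource counts ignore. All rates are assumed.
    """
    rng = random.Random(seed)
    detected = missed = 0
    for _ in range(n_trucks):
        threat = rng.random() < prevalence
        flagged = rng.random() < (sensor_tpr if threat else sensor_fpr)
        searched = flagged and rng.random() < operator_compliance
        if threat:
            if searched:
                detected += 1
            else:
                missed += 1
    return detected, missed

d, m = simulate_screening()
rate = d / (d + m) if (d + m) else float("nan")
print(f"detection rate: {rate:.2%}")
```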
Abstract:
Multi-agent systems offer a new and exciting way of understanding the world of work. We apply agent-based modeling and simulation to investigate a set of problems in a retail context. Specifically, we are working to understand the relationship between people management practices on the shop floor and retail performance. Although we are working within a relatively novel and complex domain, it is clear that an agent-based approach offers great potential for improving organizational capabilities in the future. Our multi-disciplinary research team has worked closely with one of the UK's top ten retailers to collect data and build an understanding of shop-floor operations and the key actors in a department (customers, staff, and managers). Based on this case study we have built and tested the first version of a retail branch agent-based simulation model, focusing on how to simulate the effects of people management practices on customer satisfaction and sales. In our experiments we have looked at employee development and cashier empowerment as two examples of shop-floor management practices. In this paper we describe the underlying conceptual ideas and the features of our simulation model. We present a selection of experiments conducted to validate the simulation model and to show its potential for answering "what-if" questions in a retail context. We also introduce a novel performance measure, created to quantify customers' satisfaction with service based on their individual shopping experiences.
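The abstract does not specify the formula behind the satisfaction measure, so the following is a hypothetical sketch of how per-customer satisfaction could be accumulated from individual shopping experiences; the event weights and the `Customer.record` interface are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class Customer:
    """One agent's shopping experiences, summarised as satisfaction.

    The event weights are hypothetical stand-ins for the paper's
    measure, which is not specified in the abstract.
    """
    experiences: list = field(default_factory=list)

    def record(self, waited_long: bool, got_help: bool, purchased: bool):
        # Score one visit from the events that occurred during it.
        score = 0.5
        score -= 0.3 if waited_long else 0.0
        score += 0.3 if got_help else 0.0
        score += 0.2 if purchased else 0.0
        self.experiences.append(max(0.0, min(1.0, score)))

    @property
    def satisfaction(self) -> float:
        # Overall satisfaction: mean over the recorded visits.
        if not self.experiences:
            return 0.0
        return sum(self.experiences) / len(self.experiences)

c = Customer()
c.record(waited_long=False, got_help=True, purchased=True)
c.record(waited_long=True, got_help=False, purchased=False)
print(f"satisfaction: {c.satisfaction:.2f}")
```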
Abstract:
Model predictive control (MPC) has often been referred to in the literature as a potential method for more efficient control of building heating systems. Though a significant performance improvement can be achieved with an MPC strategy, the complexity it introduces to the commissioning of the system is often prohibitive. Models are required which can capture the thermodynamic properties of the building with sufficient accuracy for meaningful predictions to be made. Furthermore, a large number of tuning weights may need to be determined to achieve a desired performance. For MPC to become a practicable alternative, these issues must be addressed. Acknowledging the impact of the external environment and of occupant interaction on the thermal behaviour of the building, techniques were developed in this work for deriving building models from data in which large, unmeasured disturbances are present. A spatio-temporal filtering process was introduced to determine estimates of the disturbances from measured data; these estimates were then incorporated with metaheuristic search techniques to derive high-order simulation models capable of replicating the thermal dynamics of a building. While a high-order simulation model allowed control strategies to be analysed and compared, low-order models were required for use within the MPC strategy itself. The disturbance estimation techniques were adapted for use with system-identification methods to derive such models. MPC formulations were then derived to enable a more straightforward commissioning process and implemented in a validated simulation platform. A prioritised-objective strategy was developed which allows the tuning parameters typically associated with an MPC cost function to be omitted from the formulation, by separating the conflicting requirements of comfort satisfaction and energy reduction within a lexicographic framework. The improved ability of the formulation to be set up and reconfigured in faulted conditions was shown.
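A minimal sketch of the prioritised-objective (lexicographic) idea, assuming a toy first-order zone model: comfort violation is minimised first, then energy is minimised subject to not degrading the achieved comfort level, so no trade-off weights are ever specified. The model, horizon and bounds are placeholders, not the paper's formulation.

```python
import numpy as np
from scipy.optimize import minimize

# Toy first-order zone model: T[k+1] = a*T[k] + b*u[k] (disturbances
# omitted); all constants are placeholders, not identified values.
a, b = 0.9, 0.5
horizon, T0, T_set = 12, 16.0, 21.0

def rollout(u):
    T, traj = T0, []
    for k in range(horizon):
        T = a * T + b * u[k]
        traj.append(T)
    return np.array(traj)

def comfort_violation(u):
    # Squared under-heating below the setpoint (primary objective).
    return np.sum(np.maximum(T_set - rollout(u), 0.0) ** 2)

def energy(u):
    # Proxy for energy use (secondary objective).
    return np.sum(np.asarray(u) ** 2)

bounds = [(0.0, 5.0)] * horizon
u0 = np.full(horizon, 2.0)

# Stage 1: minimise discomfort, ignoring energy entirely.
s1 = minimize(comfort_violation, u0, bounds=bounds)
eps = s1.fun + 1e-6   # best achievable comfort level (plus tolerance)

# Stage 2: minimise energy subject to not degrading stage-1 comfort.
s2 = minimize(energy, s1.x, bounds=bounds,
              constraints={"type": "ineq",
                           "fun": lambda u: eps - comfort_violation(u)})
print("heat inputs:", np.round(s2.x, 2))
```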
Abstract:
A novel numerical model of a Bent Backwards Duct Buoy (BBDB) Oscillating Water Column (OWC) Wave Energy Converter was created based on existing isolated numerical models of the different energy conversion systems utilised by an OWC. The novel aspect of this numerical model is that it incorporates the interdependencies of the different power conversion systems rather than modelling each system individually. This was achieved by accounting for the dynamic aerodynamic damping caused by the changing turbine rotational velocity, recalculating the turbine damping for each simulation sample and applying it via a feedback loop. The accuracy of the model was validated using experimental data collected during the Components for Ocean Renewable Energy Systems (CORES) EU FP-7 project, tested in Galway Bay, Ireland. During the verification process, it was discovered that the model could also serve as a valuable tool when troubleshooting device performance. A new turbine was developed and added to a full-scale model after being investigated using Computational Fluid Dynamics. The energy storage capacity of the impulse turbine was investigated by modelling the turbine with both high and low inertia and applying three turbine control theories using the full-scale model. A single Maximum Power Point Tracking algorithm was applied to the low-inertia turbine, while both fixed and dynamic control algorithms were applied to the high-inertia turbine. These results suggest that the high-inertia turbine could be used as a flywheel energy storage device to help minimize output power variation despite the low operating speed of the impulse turbine. This research identified the importance of applying dynamic turbine damping to a BBDB OWC numerical model, revealed additional value of the model as a device troubleshooting tool, and found that an impulse turbine could be applied as an energy storage system.
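A schematic time-stepping loop, under stated assumptions, showing the feedback structure described above: the turbine damping is recomputed from the rotational speed at every sample and fed back into the chamber/turbine coupling. The damping law, pressure map and generator load law are placeholders, not the CORES model's characteristics.

```python
import numpy as np

dt, steps = 0.1, 600      # 60 s of simulated time, placeholder values
J = 50.0                  # rotor inertia (kg m^2), assumed
omega = 30.0              # initial rotational speed (rad/s)

def turbine_damping(omega):
    # Placeholder law; the real model interpolates the CFD-derived
    # damping characteristic of the impulse turbine at this speed.
    return 2.0 + 0.05 * omega

def chamber_pressure(t, damping):
    # The pressure the OWC chamber develops rises with the damping the
    # turbine presents to the air flow: this is the feedback path.
    return damping * 100.0 * abs(np.sin(0.6 * t))

for k in range(steps):
    t = k * dt
    damping = turbine_damping(omega)   # recomputed every sample
    dp = chamber_pressure(t, damping)  # fed back into the chamber
    turb_torque = 0.01 * dp            # crude pressure-to-torque map
    gen_torque = 0.08 * omega          # simple generator load law
    omega = max(omega + dt * (turb_torque - gen_torque) / J, 0.0)

print(f"final rotational speed: {omega:.1f} rad/s")
```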
Abstract:
We analysed the use of microneedle-based electrodes to enhance electroporation of mouse testis with DNA vectors for production of transgenic mice. Different microneedle formats were developed and tested, and we ultimately used electrodes based on arrays of 500 μm tall microneedles. In a series of experiments involving injection of a DNA vector expressing Green Fluorescent Protein (GFP) and electroporation using microneedle electrodes and a commercially available voltage supply, we compared the performance of flat and microneedle electrodes by measuring GFP expression at various timepoints after electroporation. Our main finding, supported by both experimental and simulated data, is that needles significantly enhanced electroporation of testis.
Abstract:
The mechanical behaviour and performance of a ductile iron component are highly dependent on local variations in solidification conditions during the casting process. Here we show a framework which combines a previously developed closed chain of simulations for cast components with a micro-scale Finite Element Method (FEM) simulation of the behaviour and performance of the microstructure. A casting process simulation, including modelling of solidification and mechanical material characterization, provides the basis for a macro-scale FEM analysis of the component. A critical region is identified, to which a micro-scale FEM simulation of a representative microstructure, generated using X-ray tomography, is applied. The mechanical behaviour of the different microstructural phases is determined using a surrogate-model-based optimisation routine and experimental data. The approach enables a link between solidification and microstructure models and simulations of both component and microstructural behaviour, and can contribute new understanding of the behaviour and performance of different microstructural phases and morphologies in industrial ductile iron components in service.
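A generic sketch of a surrogate-model-based optimisation routine of the kind mentioned above, using a Gaussian-process surrogate with a lower-confidence-bound pick; the `fem_misfit` function is a toy stand-in for an expensive micro-scale FEM evaluation, and the stiffness range is assumed.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def fem_misfit(E):
    """Stand-in for an expensive micro-scale FEM run: the misfit
    between simulated and measured response for a candidate phase
    stiffness E in GPa (a toy quadratic here)."""
    return (E - 172.0) ** 2 + 5.0

# Initial design points spanning an assumed plausible stiffness range.
X = np.array([[120.0], [160.0], [200.0], [240.0]])
y = np.array([fem_misfit(x[0]) for x in X])

for it in range(10):
    gp = GaussianProcessRegressor(normalize_y=True).fit(X, y)
    cand = np.linspace(100.0, 260.0, 400).reshape(-1, 1)
    mu, sigma = gp.predict(cand, return_std=True)
    x_next = cand[np.argmin(mu - sigma)]   # lower-confidence-bound pick
    X = np.vstack([X, x_next.reshape(1, -1)])
    y = np.append(y, fem_misfit(x_next[0]))

print(f"identified stiffness ~ {X[np.argmin(y)][0]:.1f} GPa")
```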
Abstract:
The main goal of this paper is to present and validate a methodology for designing efficient automatic controllers for irrigation canals, based on the Saint-Venant model. This model-based methodology makes it possible to design controllers at the design stage, before the canal is built. The methodology is applied to an experimental canal located in Portugal. First, the full nonlinear PDE model is calibrated using a single steady-state experiment. The model is then linearized around an operating point in order to design linear PI controllers. Two classical control strategies (local upstream control and distant downstream control) are tested and compared on the canal. The experimental results show the effectiveness of the model.
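For illustration, a minimal discrete PI controller with conditional-integration anti-windup, of the kind that would be tuned on the linearized Saint-Venant model for distant downstream control; the gains, limits and 60 s sampling time are assumptions, not the paper's values.

```python
class PI:
    """Discrete PI controller with output clamping and anti-windup.

    Gains and limits below are illustrative, not the paper's values.
    """
    def __init__(self, kp, ki, dt, u_min=0.0, u_max=1.0):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u_min, self.u_max = u_min, u_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        u = self.kp * error + self.ki * (self.integral + error * self.dt)
        if self.u_min <= u <= self.u_max:
            # Anti-windup: only integrate while the output is
            # unsaturated.
            self.integral += error * self.dt
        return min(max(u, self.u_min), self.u_max)

# Distant downstream control: the gate opening is driven by the level
# error measured at the downstream end of the pool.
ctrl = PI(kp=0.8, ki=0.05, dt=60.0)   # 60 s sampling, assumed
gate = ctrl.update(setpoint=1.20, measurement=1.12)
print(f"gate opening command: {gate:.3f}")
```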
Abstract:
Solar radiation is of increasing importance in today's world. Different devices are used to carry out spectral and integrated measurements of solar radiation, and the sensors can be divided into the following types: calorimetric, thermomechanical, thermoelectric and photoelectric. The first three categories are based on components that convert the radiation into temperature (or heat) and then into an electrical quantity. Photoelectric sensors, on the other hand, are based on semiconductor or optoelectronic elements that, when irradiated, change their impedance or generate a measurable electric signal. The response of the sensor element depends not only on the intensity of the radiation but also on its wavelengths. The most widely used radiation sensors fall into the first categories, but thanks to reduced manufacturing costs and the increased integration of electronic systems, photoelectric sensors have become more attractive. In this work we present a study of the behavior of different optoelectronic sensor elements, verifying their static response to incident radiation. We describe the optoelectronic elements using the mathematical models that best fit their response as a function of wavelength. As input to the models, solar radiation values are generated with a radiative transfer model. We also model the spectral response of other sensor types in order to compare the behavior of the optoelectronic elements with sensors currently in use.
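The static response being modelled reduces to weighting the spectral irradiance by the element's wavelength-dependent responsivity and integrating over wavelength. A minimal numerical sketch follows; both curves are placeholders (in the paper, the irradiance comes from a radiative transfer model and the responsivity from the fitted element models).

```python
import numpy as np

# Wavelength grid (nm) covering the visible / near-infrared band.
wl = np.linspace(300.0, 1100.0, 801)

# Placeholder spectral irradiance E(lambda) in W m^-2 nm^-1; in the
# paper this input comes from a radiative transfer model.
E = 1.2 * np.exp(-((wl - 550.0) / 250.0) ** 2)

# Placeholder responsivity R(lambda) in A W^-1 for a silicon
# photodiode-like element peaking near 950 nm (a fitted or datasheet
# curve would be used in practice).
R = 0.6 * np.clip(np.minimum((wl - 320.0) / 630.0,
                             (1100.0 - wl) / 150.0), 0.0, None)

# Sensor output per unit aperture area: integral of E * R over
# wavelength, via the trapezoidal rule.
integrand = E * R
current = float(np.sum(0.5 * (integrand[:-1] + integrand[1:]) * np.diff(wl)))
print(f"modelled output: {current:.3f} A m^-2 (illustrative numbers)")
```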
Abstract:
This paper applies stochastic system state-space model identification techniques to power system load forecasting. First, a state-space model of the load is built from a series of historical load data; a filtering algorithm is then used to produce next-day load forecasts. Finally, forecasts were computed on a PDP-11/23 computer using actual power grid data, with satisfactory results.
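Assuming the filtering algorithm is a Kalman filter (a natural reading of the abstract, though not stated explicitly), a minimal sketch of next-day forecasting with a generic level-plus-trend state-space model follows; the matrices, noise covariances and load values are illustrative, not the paper's identified model.

```python
import numpy as np

# Local level + trend state-space model for daily load:
#   x[k+1] = F x[k] + w,   z[k] = H x[k] + v
F = np.array([[1.0, 1.0],
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = np.diag([10.0, 1.0])   # process noise covariance, assumed
Rv = np.array([[50.0]])    # measurement noise covariance, assumed

x = np.array([[900.0], [0.0]])   # initial level (MW) and trend
P = np.eye(2) * 100.0

loads = [905, 912, 921, 918, 930, 941]   # toy historical daily loads
for z in loads:
    # Predict one step ahead.
    x, P = F @ x, F @ P @ F.T + Q
    # Update with the observed load.
    S = H @ P @ H.T + Rv
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (np.array([[z]]) - H @ x)
    P = (np.eye(2) - K @ H) @ P

next_day = (H @ F @ x)[0, 0]   # one-step-ahead (next-day) forecast
print(f"next-day load forecast: {next_day:.1f} MW")
```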
Abstract:
Artificial neural networks (ANNs) can be readily applied to short-term load forecasting (STLF) models for electric power distribution applications. However, they are not typically used in medium- and long-term load forecasting (MLTLF) models because of the difficulties associated with collecting and processing the necessary data. Virtual instrument (VI) techniques can also be applied to electric power load forecasting, but this is rarely reported in the literature. In this paper, we investigate the modelling and design of a VI for short-, medium- and long-term load forecasting using ANNs. Three ANN models were built for STLF of electric power. These networks were trained using historical load data together with weather data known to have a significant effect on electric power use (such as wind speed, precipitation, atmospheric pressure, temperature and humidity). To handle the temperature data, a V-shape temperature processing model is proposed. For MLTLF, a model was developed using radial basis function neural networks (RBFNNs). Results indicate that the forecasting model based on the RBFNN has high accuracy and stability. Finally, a virtual load forecaster which integrates the VI and the RBFNN is presented.
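A plausible reading of the V-shape temperature processing model is that load grows as temperature departs from a comfort point in either direction (heating demand below it, cooling demand above it). The transform below sketches that idea; the pivot and slopes are assumptions, since the abstract gives no parameters.

```python
import numpy as np

def v_shape(temp_c, pivot=18.0, cold_slope=1.0, warm_slope=1.3):
    """Map raw temperature to a V-shaped load driver: zero at the
    pivot (comfort) temperature and growing linearly on both sides.
    Pivot and slopes are illustrative assumptions, since the abstract
    does not give the paper's exact parameters."""
    t = np.asarray(temp_c, dtype=float)
    return np.where(t < pivot,
                    cold_slope * (pivot - t),    # heating-demand side
                    warm_slope * (t - pivot))    # cooling-demand side

# The transformed value, not the raw temperature, would be fed to the
# ANN alongside wind speed, precipitation, pressure and humidity.
print(v_shape([5.0, 18.0, 30.0]))    # -> [13.   0.  15.6]
```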
Abstract:
In situ high-resolution aircraft measurements of cloud microphysical properties were made in coordination with ground-based remote sensing observations (Radar and Lidar) of a line of small cumulus clouds, as part of the Aerosol Properties, PRocesses And InfluenceS on the Earth's climate (APPRAISE) project. A narrow but extensive line (~100 km long) of shallow convective clouds over the southern UK was studied. Cloud top temperatures were observed to be higher than −8 °C, but the clouds were seen to consist of supercooled droplets and varying concentrations of ice particles. No ice particles were observed to be falling into the cloud tops from above. Current parameterisations of ice nuclei (IN) numbers predict that too few particles will be active as ice nuclei to account for the ice particle concentrations at the observed near-cloud-top temperatures (−7.5 °C). The role of mineral dust particles, at concentrations consistent with those observed near the surface, acting as high-temperature IN is considered important in this case. It was found that very high concentrations of ice particles (up to 100 L−1) could be produced by secondary ice particle production, provided that the observed small amount of primary ice (about 0.01 L−1) was present to initiate it. This emphasises the need to understand primary ice formation in slightly supercooled clouds. It is shown using simple calculations that the Hallett-Mossop (HM) process is the likely source of the secondary ice. Model simulations of the case study were performed with the Aerosol Cloud and Precipitation Interactions Model (ACPIM). These parcel model investigations confirmed the HM process to be a very important mechanism for producing the observed high ice concentrations. A key step in generating the high concentrations was collision and coalescence of rain drops, which, once formed, fell rapidly through the cloud, collecting ice particles that caused them to freeze and rapidly form large rimed particles. The broadening of the droplet size distribution by collision-coalescence was therefore a vital step in this process, as it was required to generate the large number of ice crystals observed in the time available. Simulations were also performed with the WRF (Weather Research and Forecasting) model. The results showed that while HM does act to increase the mass and number concentration of ice particles in these simulations, it was not found to be critical for the formation of precipitation. However, the WRF simulations produced a cloud top that was too cold, and this, combined with the assumption of continual replenishment of ice nuclei removed by ice crystal formation, resulted in too many ice crystals forming by primary nucleation compared to the observations and parcel modelling.
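A worked order-of-magnitude version of the "simple calculations" referred to above, using the commonly cited Hallett-Mossop yield of roughly 350 splinters per milligram of rime accreted near −5 °C (Hallett and Mossop, 1974); the riming rate, rimer concentration and duration below are illustrative assumptions, not values from the study.

```python
# Order-of-magnitude check that rime splintering can amplify
# ~0.01 L^-1 of primary ice to ~100 L^-1 of secondary ice.
splinters_per_mg = 350.0           # canonical HM yield near -5 C
rime_rate_mg_per_s = 0.002         # per riming particle, assumed
riming_particles_per_litre = 0.5   # graupel / frozen drops, assumed
duration_s = 20 * 60               # 20 minutes of riming, assumed

produced = (splinters_per_mg * rime_rate_mg_per_s
            * riming_particles_per_litre * duration_s)
print(f"secondary ice produced: ~{produced:.0f} per litre")
# ~420 L^-1 under these assumptions: ample to explain the observed
# 100 L^-1, provided some primary ice initiates the riming.
```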
Abstract:
To bridge the gaps between traditional mesoscale modelling and microscale modelling, the National Center for Atmospheric Research, in collaboration with other agencies and research groups, has developed an integrated urban modelling system coupled to the weather research and forecasting (WRF) model as a community tool to address urban environmental issues. The core of this WRF/urban modelling system consists of the following: (1) three methods with different degrees of freedom to parameterize urban surface processes, ranging from a simple bulk parameterization to a sophisticated multi-layer urban canopy model with an indoor–outdoor exchange sub-model that directly interacts with the atmospheric boundary layer, (2) coupling to fine-scale computational fluid dynamic Reynolds-averaged Navier–Stokes and Large-Eddy simulation models for transport and dispersion (T&D) applications, (3) procedures to incorporate high-resolution urban land use, building morphology, and anthropogenic heating data using the National Urban Database and Access Portal Tool (NUDAPT), and (4) an urbanized high-resolution land data assimilation system. This paper provides an overview of this modelling system; addresses the daunting challenges of initializing the coupled WRF/urban model and of specifying the potentially vast number of parameters required to execute the WRF/urban model; explores the model sensitivity to these urban parameters; and evaluates the ability of WRF/urban to capture urban heat islands, complex boundary-layer structures aloft, and urban plume T&D for several major metropolitan regions. Recent applications of this modelling system illustrate its promising utility, as a regional climate-modelling tool, to investigate impacts of future urbanization on regional meteorological conditions and on air quality under future climate change scenarios.
Abstract:
Floods are the most frequent of natural disasters, affecting millions of people across the globe every year. The anticipation and forecasting of floods at the global scale is crucial to preparing for severe events and providing early awareness where local flood models and warning services may not exist. As numerical weather prediction models continue to improve, operational centres are increasingly using the meteorological output from these to drive hydrological models, creating hydrometeorological systems capable of forecasting river flow and flood events at much longer lead times than has previously been possible. Furthermore, developments in, for example, modelling capabilities, data and resources in recent years have made it possible to produce global scale flood forecasting systems. In this paper, the current state of operational large scale flood forecasting is discussed, including probabilistic forecasting of floods using ensemble prediction systems. Six state-of-the-art operational large scale flood forecasting systems are reviewed, describing similarities and differences in their approaches to forecasting floods at the global and continental scale. Currently, operational systems have the capability to produce coarse-scale discharge forecasts in the medium-range and disseminate forecasts and, in some cases, early warning products, in real time across the globe, in support of national forecasting capabilities. With improvements in seasonal weather forecasting, future advances may include more seamless hydrological forecasting at the global scale, alongside a move towards multi-model forecasts and grand ensemble techniques, responding to the requirement of developing multi-hazard early warning systems for disaster risk reduction.
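At its core, probabilistic flood forecasting with an ensemble prediction system reduces each forecast to threshold exceedance probabilities across members. A minimal sketch, with hypothetical discharge numbers and threshold:

```python
import numpy as np

def exceedance_probability(ensemble_q, threshold):
    """Fraction of ensemble members whose forecast discharge exceeds
    a warning threshold, computed per lead time.

    ensemble_q: array-like of shape (n_members, n_lead_times), m^3/s
    """
    return (np.asarray(ensemble_q) > threshold).mean(axis=0)

# Toy 5-member forecast over 3 lead times, 400 m^3/s warning threshold.
q = [[310, 380, 450],
     [290, 410, 520],
     [305, 395, 480],
     [320, 430, 560],
     [300, 370, 440]]
print(exceedance_probability(q, 400.0))   # -> [0.  0.4 1. ]
```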
Abstract:
The Short-term Water Information and Forecasting Tools (SWIFT) is a suite of tools for flood and short-term streamflow forecasting, consisting of a collection of hydrologic model components and utilities. Catchments are modelled using conceptual subareas and a node-link structure for channel routing. The tools comprise modules for calibration, model state updating, output error correction, ensemble runs and data assimilation. Given the combinatorial nature of the modelling experiments and the sub-daily time steps typically used for simulations, the volume of model configurations and time series data is substantial, and its management is not trivial. SWIFT is currently used mostly for research purposes but has also been used operationally, with intersecting but significantly different requirements. Early versions of SWIFT used mostly ad hoc text files handled via Fortran code, with limited use of netCDF for time series data. The configuration and data handling modules have since been redesigned. The model configuration now follows a design in which the data model is decoupled from the on-disk persistence mechanism. For research purposes the preferred on-disk format is JSON, to leverage numerous software libraries in a variety of languages, while retaining the legacy option of custom tab-separated text formats where that is the researcher's preferred access arrangement. By decoupling the data model from data persistence, it is much easier to swap in, for instance, relational databases to provide stricter provenance and audit-trail capabilities in an operational flood forecasting context. For the time series data, given the volume and required throughput, text-based formats are usually inadequate; a schema derived from CF conventions has been designed to handle time series for SWIFT efficiently.
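A schematic sketch of the decoupling described above, with hypothetical class and field names: the same in-memory configuration object can be persisted either as JSON (the research-friendly option) or to a relational database (for operational provenance and audit trails).

```python
import json
import sqlite3
from dataclasses import dataclass, asdict

@dataclass
class SubareaConfig:
    """In-memory model configuration, independent of how it is stored.
    Field names are illustrative, not SWIFT's actual schema."""
    name: str
    area_km2: float
    routing: str

class JsonStore:
    """Research-friendly backend: readable, diffable, library-rich."""
    def save(self, cfg: SubareaConfig, path: str) -> None:
        with open(path, "w") as f:
            json.dump(asdict(cfg), f, indent=2)

class SqliteStore:
    """Relational backend of the kind suited to provenance and
    audit-trail requirements in an operational context."""
    def save(self, cfg: SubareaConfig, path: str) -> None:
        con = sqlite3.connect(path)
        con.execute("CREATE TABLE IF NOT EXISTS subarea"
                    "(name TEXT, area_km2 REAL, routing TEXT)")
        con.execute("INSERT INTO subarea VALUES (?, ?, ?)",
                    (cfg.name, cfg.area_km2, cfg.routing))
        con.commit()
        con.close()

cfg = SubareaConfig("upper_catchment", 142.5, "muskingum")
JsonStore().save(cfg, "subarea.json")   # same data model,
SqliteStore().save(cfg, "subarea.db")   # different persistence
```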