61 results for Process Models

in CentAUR: Central Archive, University of Reading - UK


Relevance:

60.00%

Abstract:

In this paper the implementation of dynamic data reconciliation techniques for sequential modular models is described. The paper is organised as follows. First, an introduction to dynamic data reconciliation is given. Then, the online use of rigorous process models is introduced. The sequential modular approach to dynamic simulation is briefly discussed, followed by a short review of the extended Kalman filter. The second section describes how the modules are implemented. A simulation case study and its results are also presented.
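
Since the abstract leans on the extended Kalman filter, a minimal sketch of one EKF predict/update cycle may help orient the reader. The model functions, Jacobians and noise covariances below are generic placeholders, not the paper's implementation:

```python
import numpy as np

def ekf_step(x, P, z, f, h, F_jac, H_jac, Q, R):
    """One predict/update cycle of an extended Kalman filter.

    x, P         -- prior state estimate and covariance
    z            -- new measurement vector
    f, h         -- nonlinear process and measurement models
    F_jac, H_jac -- Jacobians of f and h at the current estimate
    Q, R         -- process and measurement noise covariances
    """
    # Predict: propagate the state through the nonlinear process model
    x_pred = f(x)
    F = F_jac(x)
    P_pred = F @ P @ F.T + Q

    # Update: correct the prediction using the measurement
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R             # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)  # Kalman gain
    x_new = x_pred + K @ (z - h(x_pred))
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```

In the data reconciliation setting described above, `f` would correspond to one sequential-modular simulation step and `z` to the raw plant measurements to be reconciled.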

Relevance:

40.00%

Abstract:

Models play a vital role in supporting a range of activities in numerous domains. We rely on models to support the design, visualisation, analysis and representation of parts of the world around us, and as such significant research effort has been invested in numerous areas of modelling, including support for model semantics, dynamic states and behaviour, and temporal data storage and visualisation. Whilst these efforts have increased our capabilities and allowed us to create increasingly powerful software-based models, the process of developing models, supporting tools and/or data structures remains difficult, expensive and error-prone. In this paper we define from the literature the key factors in assessing a model's quality and usefulness: semantic richness, support for dynamic states and object behaviour, and temporal data storage and visualisation. We also identify a number of shortcomings in both existing modelling standards and model development processes, and propose a unified generic process to guide users through the development of semantically rich, dynamic and temporal models.
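
As a loose illustration of the temporal-storage requirement identified above, a model object can log timestamped attribute values so that any past state remains queryable. This is a minimal sketch with invented names, not the process proposed in the paper:

```python
import bisect

class TemporalObject:
    """Stores timestamped attribute values so any past state can be queried."""

    def __init__(self):
        self._history = {}  # attribute name -> sorted list of (time, value)

    def set(self, name, value, time):
        self._history.setdefault(name, []).append((time, value))
        self._history[name].sort(key=lambda tv: tv[0])

    def get(self, name, time):
        """Return the value of `name` as it was at `time`."""
        entries = self._history.get(name, [])
        idx = bisect.bisect_right([t for t, _ in entries], time)
        if idx == 0:
            raise KeyError(f"{name} has no value at or before {time}")
        return entries[idx - 1][1]

# Usage: replay a dynamic state at an earlier time
valve = TemporalObject()
valve.set("state", "closed", time=0)
valve.set("state", "open", time=5)
assert valve.get("state", time=3) == "closed"
```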

Relevance:

40.00%

Abstract:

Evaluating chemistry-climate models (CCMs) with the presented framework will increase our confidence in predictions of stratospheric ozone change.

Relevance:

40.00%

Abstract:

We evaluate the ability of process-based models to reproduce observed global mean sea-level change. When the models are forced by changes in natural and anthropogenic radiative forcing of the climate system and anthropogenic changes in land-water storage, the average of the modelled sea-level change for the periods 1900–2010, 1961–2010 and 1990–2010 is about 80%, 85% and 90% of the observed rise, respectively. The modelled rate of rise is over 1 mm yr⁻¹ prior to 1950, decreases to less than 0.5 mm yr⁻¹ in the 1960s, and increases to 3 mm yr⁻¹ by 2000. When observed regional climate changes are used to drive a glacier model and an allowance is included for an ongoing adjustment of the ice sheets, the modelled sea-level rise is about 2 mm yr⁻¹ prior to 1950, similar to the observations. The model results encompass the observed rise, and the model average is within 20% of the observations (within about 10% when the observed ice sheet contributions since 1993 are added), increasing confidence in future projections for the 21st century. The increased rate of rise since 1990 is not part of a natural cycle but a direct response to increased radiative forcing (both anthropogenic and natural), which will continue to grow with ongoing greenhouse gas emissions.
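
The comparison described above reduces to summing the modelled contributions and expressing the total as a fraction of the observed rise. The sketch below uses purely illustrative placeholder rates, not the paper's values:

```python
# Illustrative sea-level budget closure check (placeholder values, mm/yr)
modelled = {
    "thermal_expansion": 0.8,
    "glaciers": 0.7,
    "ice_sheets": 0.4,
    "land_water_storage": -0.1,
}
observed_rate = 2.0  # hypothetical observed rate of rise, mm/yr

modelled_total = sum(modelled.values())
closure = modelled_total / observed_rate
print(f"modelled {modelled_total:.1f} mm/yr = {closure:.0%} of observed")
```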

Relevance:

30.00%

Abstract:

We present an application of birth-and-death processes on configuration spaces to a generalized mutation-selection balance model. The model describes the aging of a population as a process of accumulation of mutations in a genotype. A rigorous treatment demands that mutations correspond to points in abstract spaces. Our model describes an infinite-population, infinite-sites model in continuum. The dynamical equation which describes the system is of Kimura-Maruyama type. The problem can be posed in terms of evolution of states (differential equation) or, equivalently, represented in terms of a Feynman-Kac formula. The questions of interest are the existence of a solution, its asymptotic behavior, and properties of the limiting state. In the non-epistatic case the problem was posed and solved in [Steinsaltz D., Evans S.N., Wachter K.W., Adv. Appl. Math., 2005, 35(1)]. In our model we consider a topological space X as the space of positions of mutations and the influence of epistatic potentials.
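
For orientation, the generic Feynman-Kac representation referred to above (a standard textbook statement, not the paper's exact equation) expresses the solution of the evolution equation as an expectation over the underlying Markov process, with a potential standing in for selection:

```latex
% The solution of  \partial_t u = L u - V u,  u(0,\cdot) = u_0,
% where L is the generator of the Markov process (X_s)
% and V is the (selection/epistatic) potential, is
u(t,x) = \mathbb{E}_x\!\left[ \exp\!\left( -\int_0^t V(X_s)\,\mathrm{d}s \right) u_0(X_t) \right].
```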

Relevance:

30.00%

Abstract:

Despite the many models developed for phosphorus concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantifying model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework, is presented. The framework is applied to the Lugg catchment (1,077 km²), a tributary of the River Wye on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total) observations, for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of the 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated via land-use-type fractional areas) can achieve higher model fits than a previously expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in the point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets which simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there is insufficient data to support their many parameters.
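
The behavioural-threshold screening at the core of GLUE can be sketched as follows; `run_model` is a hypothetical stand-in for a wrapped INCA-P simulation, and 0.3 is the Nash-Sutcliffe threshold quoted above:

```python
import numpy as np

def nash_sutcliffe(obs, sim):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit; <= 0 is no better than the mean."""
    obs, sim = np.asarray(obs), np.asarray(sim)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

def glue_screen(parameter_sets, obs, run_model, threshold=0.3):
    """Keep only 'behavioural' parameter sets whose efficiency exceeds the threshold."""
    behavioural = []
    for params in parameter_sets:
        sim = run_model(params)       # one model run per sampled parameter set
        e = nash_sutcliffe(obs, sim)
        if e > threshold:
            behavioural.append((params, e))
    return behavioural
```

The retained behavioural sets are then typically weighted by their likelihood measure to form prediction bounds, which is where the uncertainty estimates come from.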

Relevance:

30.00%

Abstract:

Applications such as neuroscience, telecommunication, online social networking, transport and retail trading give rise to connectivity patterns that change over time. In this work, we address the resulting need for network models and computational algorithms that deal with dynamic links. We introduce a new class of evolving range-dependent random graphs that gives a tractable framework for modelling and simulation. We develop a spectral algorithm for calibrating a set of edge ranges from a sequence of network snapshots, and give a proof-of-principle illustration on some neuroscience data. We also show how the model can be used computationally and analytically to investigate the scenario where an evolutionary process, such as an epidemic, takes place on an evolving network. This allows us to study the cumulative effect of two distinct types of dynamics.
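
A common concrete form of range-dependent random graph joins nodes i and j with a probability that decays geometrically in the range |i - j|. The sketch below samples one static snapshot under that assumption (the decay parameters are illustrative, and the evolving version would resample edges over time):

```python
import random

def range_dependent_graph(n, alpha=0.9, lam=0.5, seed=0):
    """Sample a range-dependent random graph on nodes 0..n-1.

    Nodes i and j (range k = |i - j|) are joined independently with
    probability alpha * lam**(k - 1), so long-range edges are rare.
    """
    rng = random.Random(seed)
    edges = []
    for i in range(n):
        for j in range(i + 1, n):
            k = j - i
            if rng.random() < alpha * lam ** (k - 1):
                edges.append((i, j))
    return edges

snapshot = range_dependent_graph(50)
print(len(snapshot), "edges")
```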

Relevance:

30.00%

Abstract:

Composites of wind speed, equivalent potential temperature, mean sea level pressure, vertical velocity, and relative humidity have been produced for the 100 most intense extratropical cyclones in the Northern Hemisphere winter in the 40-yr ECMWF Re-Analysis (ERA-40) and the high-resolution global environment model (HiGEM). Features of conceptual models of cyclone structure (the warm conveyor belt, cold conveyor belt, and dry intrusion) have been identified in the composites from ERA-40 and compared to HiGEM. Such features can be identified in the composite fields despite the smoothing that occurs in the compositing process. The surface features and the three-dimensional structure of the cyclones in HiGEM compare very well with those from ERA-40. The warm conveyor belt is identified in the temperature and wind fields as a mass of warm air undergoing moist isentropic uplift, and is very similar in ERA-40 and HiGEM. The rate of ascent is lower in HiGEM, associated with a shallower slope of the moist isentropes in the warm sector. There are also differences in the relative humidity fields in the warm conveyor belt: in ERA-40, high values of relative humidity are strongly associated with the moist isentropic uplift, whereas in HiGEM the association is weaker. The cold conveyor belt is identified as rearward-flowing air that undercuts the warm conveyor belt and produces a low-level jet, and is very similar in HiGEM and ERA-40. The dry intrusion is identified in the 500-hPa vertical velocity and relative humidity. The structure of the dry intrusion compares well between HiGEM and ERA-40, but the descent is weaker in HiGEM because of weaker along-isentrope flow behind the composite cyclone. HiGEM's ability to represent the key features of extratropical cyclone structure gives confidence in future predictions from this model.
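
Compositing, as used here, is essentially averaging a field over windows centred on each cyclone. A bare-bones sketch follows (the array layout, centre list and window size are assumptions for illustration, not the study's procedure):

```python
import numpy as np

def composite(field, centres, half_width):
    """Average a 2-D field over square windows centred on each cyclone centre.

    field   -- 2-D array (lat x lon), e.g. mean sea level pressure
    centres -- list of (row, col) cyclone-centre indices
    """
    windows = []
    for r, c in centres:
        win = field[r - half_width:r + half_width + 1,
                    c - half_width:c + half_width + 1]
        # skip centres too close to the domain edge for a full window
        if win.shape == (2 * half_width + 1, 2 * half_width + 1):
            windows.append(win)
    return np.mean(windows, axis=0)
```

The averaging is what produces the smoothing of individual-storm detail mentioned in the abstract.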

Relevance:

30.00%

Abstract:

Airborne scanning laser altimetry (LiDAR) is an important new data source for river flood modelling. LiDAR can give dense and accurate DTMs of floodplains for use as model bathymetry. Spatial resolutions of 0.5 m or less are possible, with a height accuracy of 0.15 m. LiDAR gives a Digital Surface Model (DSM), so vegetation removal software (e.g. TERRASCAN) must be used to obtain a DTM. An example used to illustrate the current state of the art is the LiDAR data provided by the EA, which has been processed by their in-house software to convert the raw data to a ground DTM and a separate vegetation height map. Their method distinguishes trees from buildings on the basis of object size. EA data products include the DTM with or without buildings removed, a vegetation height map, a DTM with bridges removed, etc.

Most vegetation removal software ignores short vegetation, less than say 1 m high, yet typically most of a floodplain may be covered in such vegetation. We have attempted to extend vegetation height measurement to short vegetation using local height texture. The idea is to assign friction coefficients depending on local vegetation height, so that friction is spatially varying; this obviates the need to calibrate a global floodplain friction coefficient. It is not yet clear whether the method is useful, but it is worth testing further.

The LiDAR DTM is usually determined by looking for local minima in the raw data, then interpolating between these to form a space-filling height surface. This is a low-pass filtering operation, in which objects of high spatial frequency such as buildings, river embankments and walls may be incorrectly classed as vegetation. The problem is particularly acute in urban areas. A solution may be to apply pattern recognition techniques to LiDAR height data fused with other data types such as LiDAR intensity or multispectral CASI data. We are attempting to use digital map data (Mastermap structured topography data) to help distinguish buildings from trees, and roads from areas of short vegetation. The problems involved in doing this are discussed, as is the related problem of how best to merge historic river cross-section data with a LiDAR DTM.

LiDAR data may also be used to help generate a finite element mesh. In rural areas we have decomposed a floodplain mesh according to taller vegetation features such as hedges and trees, so that e.g. hedge elements can be assigned higher friction coefficients than those in adjacent fields. We are attempting to extend this approach to urban areas, so that the mesh is decomposed in the vicinity of buildings, roads, etc. as well as trees and hedges. A dominant-points algorithm is used to identify points of high curvature on a building or road, which act as initial nodes in the meshing process. A difficulty is that the resulting mesh may contain a very large number of nodes; however, the mesh generated may be useful in allowing a high-resolution FE model to act as a benchmark for a more practical lower-resolution model.

A further problem discussed is how best to exploit the data redundancy that arises because LiDAR resolution is much higher than that of a typical flood model. Problems occur if features have dimensions smaller than the model cell size: for a 5 m-wide embankment within a raster grid model with a 15 m cell size, the maximum height of the embankment locally could be assigned to each cell covering the embankment, but how could a 5 m-wide ditch be represented? This redundancy has also been exploited to improve wetting/drying algorithms using the sub-grid-scale LiDAR heights within finite elements at the waterline.
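
The spatially varying friction idea sketched above could, for example, map each cell's vegetation height to a Manning's n roughness value. The thresholds and coefficients below are invented placeholders, not calibrated values:

```python
import numpy as np

def friction_from_vegetation(veg_height):
    """Map a grid of vegetation heights (m) to Manning's n friction values.

    Placeholder piecewise mapping: bare ground, short vegetation, and
    tall vegetation (hedges/trees) get increasing roughness.
    """
    n = np.full_like(veg_height, 0.03, dtype=float)  # bare ground / water
    n[veg_height > 0.1] = 0.05                       # short vegetation
    n[veg_height > 1.0] = 0.10                       # hedges, shrubs, trees
    return n

veg = np.array([[0.0, 0.3], [1.5, 0.05]])
print(friction_from_vegetation(veg))
```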

Relevance:

30.00%

Abstract:

Improvements in the resolution of satellite imagery have enabled the extraction of water surface elevations at the margins of a flood. Comparison between modelled and observed water surface elevations provides a new means of calibrating and validating flood inundation models; however, the uncertainty in the observed data has yet to be addressed. Here a flood inundation model is calibrated using a probabilistic treatment of the observed data. A LiDAR-guided snake algorithm is used to determine the outline of a 2006 flood event on the River Dee, North Wales, UK, from a 12.5 m ERS-1 image. Points at approximately 100 m intervals along this outline are selected, and the water surface elevation is recorded as the LiDAR DEM elevation at each point. Approximating the water surface as a plane between the gauged upstream and downstream water elevations, the water surface elevations at points along the flood extent are compared to their 'expected' values. The errors between the two are roughly normally distributed; however, when plotted against coordinates there is obvious spatial autocorrelation. The source of this spatial dependency is investigated by comparing the errors to the slope gradient and aspect of the LiDAR DEM. A LISFLOOD-FP model of the flood event is set up to investigate the effect of observed-data uncertainty on the calibration of flood inundation models. Multiple simulations are run using different combinations of friction parameters, from which the optimum parameter set is selected. For each simulation a t-test is used to quantify the fit between modelled and observed water surface elevations. The points used in this t-test are selected based on their error, and the selection criteria enable evaluation of how sensitive the choice of optimum parameter set is to uncertainty in the observed data. This work explores the observed data in detail and highlights possible causes of error. The identification of significant error (RMSE = 0.8 m) between approximate expected and actual observed elevations from the remotely sensed data emphasises the limitations of using these data deterministically within the calibration process. These limitations are addressed by developing a new probabilistic approach to using the observed data.
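
The fit statistic can be computed by testing whether the modelled-minus-observed elevation errors are centred on zero. A sketch using a one-sample t-test follows (the input arrays are illustrative placeholders, not the study's data):

```python
import numpy as np
from scipy import stats

def elevation_fit(modelled, observed):
    """Test whether modelled-minus-observed water surface elevation errors
    have zero mean; a small p-value suggests systematic bias."""
    errors = np.asarray(modelled) - np.asarray(observed)
    t_stat, p_value = stats.ttest_1samp(errors, 0.0)
    bias = errors.mean()
    rmse = np.sqrt(np.mean(errors ** 2))
    return bias, rmse, p_value

bias, rmse, p = elevation_fit([10.2, 10.5, 9.9], [10.0, 10.4, 10.1])
print(f"bias={bias:.2f} m, RMSE={rmse:.2f} m, p={p:.2f}")
```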

Relevance:

30.00%

Abstract:

Formal and analytical risk models prescribe how risk should be incorporated into construction bids. However, the actual process by which contractors and their clients negotiate and agree on price is complex and not clearly articulated in the literature. Using participant observation, the entire tender process was shadowed in two leading UK construction firms. This was compared with the propositions of analytical models, and significant differences were found. In total, 670 hours of work observed across both firms revealed three stages of the bidding process. Bidding activities were categorized and their extent estimated as deskwork (32%), calculations (19%), meetings (14%), documents (13%), off-days (11%), conversations (7%), correspondence (3%) and travel (1%). Risk allowances of 1-2% were priced into some bids, and three tiers of risk apportionment in bids were identified. However, priced risks may sometimes be excluded from the final bidding price to enhance competitiveness. Thus, although risk apportionment affects a contractor's pricing strategy, other complex, microeconomic factors also affect price. Instead of being priced in as contingencies, risk was mostly handled through contractual rather than price mechanisms, to reflect commercial imperatives. The findings explain why some assumptions underpinning analytical models may not be sustainable in practice, and why what actually happens in practice matters to those who seek to model the pricing of construction bids.