163 results for Data Modelling
Abstract:
The global vegetation response to climate and atmospheric CO2 changes between the last glacial maximum and recent times is examined using an equilibrium vegetation model (BIOME4), driven by output from 17 climate simulations from the Palaeoclimate Modelling Intercomparison Project. Features common to all of the simulations include expansion of treeless vegetation in high northern latitudes; southward displacement and fragmentation of boreal and temperate forests; and expansion of drought-tolerant biomes in the tropics. These features are broadly consistent with pollen-based reconstructions of vegetation distribution at the last glacial maximum. Glacial vegetation in high latitudes reflects cold and dry conditions due to the low CO2 concentration and the presence of large continental ice sheets. The extent of drought-tolerant vegetation in tropical and subtropical latitudes reflects a generally drier low-latitude climate. Comparisons of the observations with BIOME4 simulations, with and without consideration of the direct physiological effect of CO2 concentration on C3 photosynthesis, suggest an important additional role of low CO2 concentration in restricting the extent of forests, especially in the tropics. Global forest cover was overestimated by all models when climate change alone was used to drive BIOME4, and estimated more accurately when physiological effects of CO2 concentration were included. This result suggests that both CO2 effects and climate effects were important in determining glacial-interglacial changes in vegetation. More realistic simulations of glacial vegetation and climate will need to take into account the feedback effects of these structural and physiological changes on the climate.
Abstract:
Performance modelling is a useful tool in the lifecycle of high performance scientific software, such as weather and climate models, especially as a means of ensuring efficient use of available computing resources. In particular, sufficiently accurate performance prediction could reduce the effort and experimental computer time required when porting and optimising a climate model to a new machine. In this paper, traditional techniques are used to predict the computation time of a simple shallow water model which is illustrative of the computation (and communication) involved in climate models. These models are compared with real execution data gathered on AMD Opteron-based systems, including several phases of the U.K. academic community HPC resource, HECToR. Some success is achieved in relating source code to achieved performance for the K10 series of Opterons, but the method is found to be inadequate for the next-generation Interlagos processor. The experience leads to the investigation of a data-driven application benchmarking approach to performance modelling. Results for an early version of the approach are presented using the shallow water model as an example.
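As a sketch of the kind of prediction the abstract describes, one could fit measured runtimes to a simple analytic cost model and extrapolate to a larger problem. The grid sizes and timings below are invented for illustration; they are not the paper's HECToR measurements.

```python
# Minimal data-driven performance-model sketch (all numbers hypothetical):
# fit runtime per timestep as t(n) = a + b * n**2 for an n x n shallow-water
# grid, then extrapolate. Ordinary least squares in closed form.

def fit_linear(xs, ys):
    """Least-squares fit of y = a + b*x."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return my - b * mx, b

# Benchmark observations: (grid size n, measured seconds per step) -- illustrative.
runs = [(64, 0.011), (128, 0.042), (256, 0.164), (512, 0.652)]
xs = [n * n for n, _ in runs]      # cost assumed to scale with grid-point count
ys = [t for _, t in runs]
a, b = fit_linear(xs, ys)

predicted_1024 = a + b * 1024 ** 2  # extrapolated seconds/step on a 1024x1024 grid
```

In practice the fitted coefficients differ per processor generation, which is one reason a purely source-code-derived model can break down on new hardware.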
Abstract:
Recent interest in the validation of general circulation models (GCMs) has been devoted to objective methods. A small number of authors have used the direct synoptic identification of phenomena together with a statistical analysis to perform the objective comparison between various datasets. This paper describes a general method for performing the synoptic identification of phenomena that can be used for an objective analysis of atmospheric, or oceanographic, datasets obtained from numerical models and remote sensing. Methods usually associated with image processing have been used to segment the scene and to identify suitable feature points to represent the phenomena of interest. This is performed for each time level. A technique from dynamic scene analysis is then used to link the feature points to form trajectories. The method is fully automatic and should be applicable to a wide range of geophysical fields. An example is shown of results obtained with this method using data from a run of the Universities Global Atmospheric Modelling Project GCM.
Abstract:
This paper investigates the use of data assimilation in coastal area morphodynamic modelling using Morecambe Bay as a study site. A simple model of the bay has been enhanced with a data assimilation scheme to better predict large-scale changes in bathymetry observed in the bay over a 3-year period. The 2DH decoupled morphodynamic model developed for the work is described, as is the optimal interpolation scheme used to assimilate waterline observations into the model run. Each waterline was acquired from a SAR satellite image and is essentially a contour of the bathymetry at some level within the inter-tidal zone of the bay. For model parameters calibrated against validation observations, model performance is good, even without data assimilation. However the use of data assimilation successfully compensates for a particular failing of the model, and helps to keep the model bathymetry on track. It also improves the ability of the model to predict future bathymetry. Although the benefits of data assimilation are demonstrated using waterline observations, any observations of morphology could potentially be used. These results suggest that data assimilation should be considered for use in future coastal area morphodynamic models.
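The optimal interpolation update used in studies like the one above has a compact closed form: the analysis is xa = xb + K(y − H xb), with gain K = BHᵀ(HBHᵀ + R)⁻¹. The following sketch applies it to an invented five-point 1-D bathymetry profile with a single direct observation; the covariances and numbers are assumptions for illustration, not the Morecambe Bay configuration.

```python
# Optimal-interpolation (OI) update on a toy 1-D bathymetry profile.
import numpy as np

n = 5
xb = np.zeros(n)                      # background bathymetry anomaly (flat prior)
obs_idx, yo, r = 2, 1.0, 0.1 ** 2     # one observation and its error variance

# Background error covariance B: variance 0.2^2, Gaussian spatial correlation
x = np.arange(n, dtype=float)
B = 0.2 ** 2 * np.exp(-((x[:, None] - x[None, :]) / 1.5) ** 2)

H = np.zeros((1, n))
H[0, obs_idx] = 1.0                   # observation operator: sample grid point 2
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + np.array([[r]]))  # OI gain
xa = xb + (K @ (np.array([yo]) - H @ xb)).ravel()           # analysis

# The increment is largest at the observed point and decays with distance,
# because the background error correlation spreads the information spatially.
```

This spreading of a single waterline observation into neighbouring grid points is what lets sparse contour data correct a whole bathymetry field.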
Abstract:
Data assimilation – the set of techniques whereby information from observing systems and models is combined optimally – is rapidly becoming prominent in endeavours to exploit Earth Observation for Earth sciences, including climate prediction. This paper explains the broad principles of data assimilation, outlining different approaches (optimal interpolation, three-dimensional and four-dimensional variational methods, the Kalman Filter), together with the approximations that are often necessary to make them practicable. After pointing out a variety of benefits of data assimilation, the paper then outlines some practical applications of the exploitation of Earth Observation by data assimilation in the areas of operational oceanography, chemical weather forecasting and carbon cycle modelling. Finally, some challenges for the future are noted.
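The forecast/analysis cycle common to the methods listed above is easiest to see in the scalar Kalman filter, where both steps are one line each. The model and observation operators are taken as the identity and all numbers are illustrative.

```python
# Minimal scalar Kalman filter sketch of the forecast/analysis cycle.

def kalman_step(x, p, y, q=0.01, r=0.25):
    """One forecast + analysis cycle of a scalar Kalman filter
    (identity model and observation operators, for simplicity).
    x, p : current state estimate and its error variance
    y    : new observation; q, r : model- and observation-error variances
    """
    xf, pf = x, p + q            # forecast: uncertainty inflated by model error Q
    k = pf / (pf + r)            # Kalman gain: relative trust placed in the obs
    xa = xf + k * (y - xf)       # analysis: forecast nudged toward the observation
    pa = (1.0 - k) * pf          # analysis error variance is reduced
    return xa, pa

# Assimilating four noisy observations of a truth of 1.0 (invented numbers):
x, p = 0.0, 1.0
for y in [1.1, 0.9, 1.05, 0.95]:
    x, p = kalman_step(x, p, y)
# The estimate converges toward the truth and its variance shrinks.
```

Optimal interpolation, 3D-Var and 4D-Var can be viewed as approximations or generalisations of this same gain-weighted blend of model and observations.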
Abstract:
The performance of a 2D numerical model of flood hydraulics is tested for a major event in Carlisle, UK, in 2005. This event is associated with a unique data set, with GPS surveyed wrack lines and flood extent surveyed 3 weeks after the flood. The Simple Finite Volume (SFV) model is used to solve the 2D Saint-Venant equations over an unstructured mesh of 30000 elements representing channel and floodplain, and allowing detailed hydraulics of flow around bridge piers and other influential features to be represented. The SFV model is also used to corroborate flows recorded for the event at two gauging stations. Calibration of Manning's n is performed with a two stage strategy, with channel values determined by calibration of the gauging station models, and floodplain values determined by optimising the fit between model results and observed water levels and flood extent for the 2005 event. RMS error for the calibrated model compared with surveyed water levels is ~±0.4m, the same order of magnitude as the estimated error in the survey data. The study demonstrates the ability of unstructured mesh hydraulic models to represent important hydraulic processes across a range of scales, with potential applications to flood risk management.
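The second calibration stage described above amounts to choosing the floodplain Manning's n that minimises RMS error against the surveyed water levels. The sketch below shows that search structure with a hypothetical surrogate standing in for a full SFV run; the surveyed levels and the surrogate's response to roughness are invented.

```python
# Floodplain-roughness calibration sketch. In practice each candidate n
# would require a full 2D SFV simulation; run_model here is a hypothetical
# stand-in whose response to n is an assumption for illustration only.
import math

def rms_error(model, observed):
    return math.sqrt(sum((m - o) ** 2 for m, o in zip(model, observed))
                     / len(observed))

surveyed = [10.2, 10.5, 10.9, 11.3]   # wrack-line water levels (m) -- invented

def run_model(n_floodplain):
    # Assumed surrogate: predicted levels rise linearly with roughness.
    bias = 2.0 * (n_floodplain - 0.06)
    return [o + bias + 0.05 for o in surveyed]

# Grid search over plausible floodplain Manning's n values (0.030 to 0.100):
best_n = min((n / 1000 for n in range(30, 101, 5)),
             key=lambda n: rms_error(run_model(n), surveyed))
```

With channel n already fixed from the gauging-station models, a one-dimensional search like this over floodplain n is cheap relative to the cost of each hydraulic run.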
Abstract:
Data assimilation is a sophisticated mathematical technique for combining observational data with model predictions to produce state and parameter estimates that most accurately approximate the current and future states of the true system. The technique is commonly used in atmospheric and oceanic modelling, combining empirical observations with model predictions to produce more accurate and well-calibrated forecasts. Here, we consider a novel application within a coastal environment and describe how the method can also be used to deliver improved estimates of uncertain morphodynamic model parameters. This is achieved using a technique known as state augmentation. Earlier applications of state augmentation have typically employed the 4D-Var, Kalman filter or ensemble Kalman filter assimilation schemes. Our new method is based on a computationally inexpensive 3D-Var scheme, where the specification of the error covariance matrices is crucial for success. A simple 1D model of bed-form propagation is used to demonstrate the method. The scheme is capable of recovering near-perfect parameter values and, therefore, improves the capability of our model to predict future bathymetry. Such positive results suggest the potential for application to more complex morphodynamic models.
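State augmentation can be illustrated in miniature: append the uncertain parameter to the state vector, and the cross-covariance between state error and parameter error lets each observation of the state also correct the parameter. The propagation model, covariances and numbers below are assumptions for the sketch, not the paper's 1D bed-form model.

```python
# State-augmentation sketch: a bed form advances at unknown speed c; we
# observe only its position, yet recover c through the assumed off-diagonal
# (position, parameter) term of the background error covariance B.
import numpy as np

dt, n_steps = 1.0, 20
c_true, c_guess = 0.5, 0.1
w = np.array([0.0, c_guess])          # augmented state: [position, speed c]

B = np.array([[0.5, 0.2],             # assumed augmented covariance; the 0.2
              [0.2, 0.1]])            # cross-term is what updates c
H = np.array([[1.0, 0.0]])            # we only observe position
R = np.array([[0.01]])
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # fixed gain (3D-Var-like)

x_true = 0.0
for _ in range(n_steps):
    x_true += c_true * dt             # truth advances at the true speed
    w[0] += w[1] * dt                 # model advances at its current estimate
    w = w + (K @ (np.array([x_true]) - H @ w)).ravel()  # joint analysis update

# After repeated cycles, w[1] has converged close to c_true.
```

The key design point, as the abstract notes for 3D-Var, is the specification of the error covariances: with a zero cross-term the parameter would never be corrected.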
Abstract:
Severe wind storms are one of the major natural hazards in the extratropics and inflict substantial economic damages and even casualties. Insured storm-related losses depend on (i) the frequency, nature and dynamics of storms, (ii) the vulnerability of the values at risk, (iii) the geographical distribution of these values, and (iv) the particular conditions of the risk transfer. It is thus of great importance to assess the impact of climate change on future storm losses. To this end, the current study employs—to our knowledge for the first time—a coupled approach, using output from high-resolution regional climate model scenarios for the European sector to drive an operational insurance loss model. An ensemble of coupled climate-damage scenarios is used to provide an estimate of the inherent uncertainties. Output of two state-of-the-art global climate models (HadAM3, ECHAM5) is used for present (1961–1990) and future climates (2071–2100, SRES A2 scenario). These serve as boundary data for two nested regional climate models with sophisticated gust parametrizations (CLM, CHRM). For validation and calibration purposes, an additional simulation is undertaken with the CHRM driven by the ERA40 reanalysis. The operational insurance model (Swiss Re) uses a European-wide damage function, an average vulnerability curve for all risk types, and contains the actual value distribution of a complete European market portfolio. The coupling between climate and damage models is based on daily maxima of 10 m gust winds, and the strategy adopted consists of three main steps: (i) development and application of a pragmatic selection criterion to retrieve significant storm events, (ii) generation of a probabilistic event set using a Monte-Carlo approach in the hazard module of the insurance model, and (iii) calibration of the simulated annual expected losses with a historic loss data base.
The climate models considered agree regarding an increase in the intensity of extreme storms in a band across central Europe (stretching from southern UK and northern France to Denmark, northern Germany into eastern Europe). This effect increases with event strength, and rare storms show the largest climate change sensitivity, but are also beset with the largest uncertainties. Wind gusts decrease over northern Scandinavia and Southern Europe. Highest intra-ensemble variability is simulated for Ireland, the UK, the Mediterranean, and parts of Eastern Europe. The resulting changes in European-wide losses over the 110-year period are positive for all layers and all model runs considered and amount to 44% (annual expected loss), 23% (10 years loss), 50% (30 years loss), and 104% (100 years loss). There is a disproportionate increase in losses for rare high-impact events. The changes result from increases in both severity and frequency of wind gusts. Considerable geographical variability of the expected losses exists, with Denmark and Germany experiencing the largest loss increases (116% and 114%, respectively). All countries considered except for Ireland (−22%) experience some loss increases. Some ramifications of these results for the socio-economic sector are discussed, and future avenues for research are highlighted. The technique introduced in this study and its application to realistic market portfolios offer exciting prospects for future research on the impact of climate change that is relevant for policy makers, scientists and economists.
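Step (ii) of the strategy, a probabilistic event set evaluated by Monte Carlo, can be sketched in toy form: sample storm seasons, convert peak gusts to losses through a damage function, and read off annual expected loss and a return-period loss. Every element here (storm frequency, gust distribution, damage curve, exposure) is a placeholder assumption, not the Swiss Re model.

```python
# Toy Monte-Carlo event set and loss statistics (all inputs hypothetical).
import random

random.seed(1)
EXPOSURE = 1e9                         # total insured value (placeholder)

def damage(gust):
    """Assumed damage function: cubic in gust excess over a 25 m/s threshold,
    capped at total exposure."""
    return EXPOSURE * min(1.0, max(0.0, (gust - 25.0) / 40.0) ** 3)

def simulate_year():
    # Storm count per season ~ Binomial(16, 0.25); peak gusts ~ Weibull(20, 2).
    n_storms = sum(random.random() < 0.25 for _ in range(16))
    return sum(damage(random.weibullvariate(20.0, 2.0)) for _ in range(n_storms))

years = sorted(simulate_year() for _ in range(10_000))
annual_expected_loss = sum(years) / len(years)
loss_100yr = years[int(0.99 * len(years))]   # empirical 1-in-100-year loss
```

The strong skew of such loss distributions is why, as the abstract reports, rare high-impact events show disproportionately large changes relative to the annual expected loss.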
Abstract:
Estimates of soil organic carbon (SOC) stocks and changes under different land use systems can help determine vulnerability to land degradation. Such information is important for countries in arid areas with high susceptibility to desertification. SOC stocks, and predicted changes between 2000 and 2030, were determined at the national scale for Jordan using the Global Environment Facility Soil Organic Carbon (GEFSOC) Modelling System. For the purpose of this study, Jordan was divided into three natural regions (the Jordan Valley, the Uplands and the Badia) and three developmental regions (North, Middle and South). Based on this division, Jordan was divided into five zones (based on the dominant land use): the Jordan Valley, the North Uplands, the Middle Uplands, the South Uplands and the Badia. This information was merged using GIS, along with a map of rainfall isohyets, to produce a map with 498 polygons. Each of these was given a unique ID, a land management unit identifier and was characterized in terms of its dominant soil type. Historical land use data, current land use and future land use change scenarios were also assembled, forming major inputs of the modelling system. The GEFSOC Modelling System was then run to produce C stocks in Jordan for the years 1990, 2000 and 2030. The results were compared with conventional methods of estimating carbon stocks, such as the mapping based SOTER method. The results of these comparisons showed that the model runs are acceptable, taking into consideration the limited availability of long-term experimental soil data that can be used to validate them. The main findings of this research show that between 2000 and 2030, SOC may increase in heavily used areas under irrigation and will likely decrease in the grazed rangelands that cover most of Jordan, giving an overall decrease in total SOC over time under the projected forms of land use. (C) 2007 Elsevier B.V. All rights reserved.
Abstract:
This review introduces the methods used to simulate the processes affecting dissolved oxygen (DO) in lowland rivers. The important processes are described and this provides a modelling framework to describe those processes in the context of a mass-balance model. The process equations that are introduced all require (reaction) rate parameters and a variety of common procedures for identifying those parameters are reviewed. This is important because there is a wide range of estimation techniques for many of the parameters. These different techniques elicit different estimates of the parameter value and so there is the potential for a significant uncertainty in the model's inputs and therefore in the output too. Finally, the data requirements for modelling DO in lowland rivers are summarised on the basis of modelling the processes described in this review using a mass-balance model. This is reviewed with regard to what data are available and from where they might be obtained. (C) 2003 Elsevier Science B.V. All rights reserved.
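One classic instance of the mass-balance framework the review describes is the Streeter-Phelps pairing of BOD decay and atmospheric reaeration, which produces the familiar DO "sag". The sketch below integrates that pair of rate equations explicitly; the rate constants and initial conditions are illustrative, and are exactly the kind of parameters whose estimation the review discusses.

```python
# Streeter-Phelps DO sag sketch: dL/dt = -kd*L ; dD/dt = kd*L - ka*D
# L = BOD, D = DO deficit, kd = decay rate, ka = reaeration rate (per day).

def streeter_phelps(L0, D0, kd=0.3, ka=0.6, dt=0.01, t_end=10.0):
    """Explicit-Euler integration; returns a list of (time, deficit) pairs."""
    L, D, t = L0, D0, 0.0
    series = [(t, D)]
    while t < t_end:
        # RHS evaluated with the old L and D (forward Euler):
        L, D = L - kd * L * dt, D + (kd * L - ka * D) * dt
        t += dt
        series.append((t, D))
    return series

series = streeter_phelps(L0=10.0, D0=1.0)
max_deficit = max(d for _, d in series)  # depth of the DO sag downstream
```

Because the sag depth and timing depend directly on kd and ka, the wide spread of published estimation techniques for such rate parameters translates straight into uncertainty in predicted minimum DO.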
Abstract:
A wide variety of exposure models are currently employed for health risk assessments. Individual models have been developed to meet the chemical exposure assessment needs of Government, industry and academia. These existing exposure models can be broadly categorised according to the following types of exposure source: environmental, dietary, consumer product, occupational, and aggregate and cumulative. Aggregate exposure models consider multiple exposure pathways, while cumulative models consider multiple chemicals. In this paper, each of these basic types of exposure model is briefly described, along with any inherent strengths or weaknesses, with the UK as a case study. Examples are given of specific exposure models that are currently used, or that have the potential for future use, and key differences in modelling approaches adopted are discussed. The use of exposure models is currently fragmentary in nature. Specific organisations with exposure assessment responsibilities tend to use a limited range of models. The modelling techniques adopted in current exposure models have evolved along distinct lines for the various types of source. In fact different organisations may be using different models for very similar exposure assessment situations. This lack of consistency between exposure modelling practices can make understanding the exposure assessment process more complex, can lead to inconsistency between organisations in how critical modelling issues are addressed (e.g. variability and uncertainty), and has the potential to communicate mixed messages to the general public. Further work should be conducted to integrate the various approaches and models, where possible and regulatory remits allow, to get a coherent and consistent exposure modelling process.
We recommend the development of an overall framework for exposure and risk assessment with common approaches and methodology, a screening tool for exposure assessment, collection of better input data, probabilistic modelling, validation of model input and output and a closer working relationship between scientists and policy makers and staff from different Government departments. A much increased effort is required in the UK to address these issues. The result will be a more robust, transparent, valid and more comparable exposure and risk assessment process. (C) 2006 Elsevier Ltd. All rights reserved.
Abstract:
The nature and magnitude of climatic variability during the period of middle Pliocene warmth (ca 3.29–2.97 Ma) is poorly understood. We present a suite of palaeoclimate modelling experiments incorporating an advanced atmospheric general circulation model (GCM), coupled to a Q-flux ocean model for 3.29, 3.12 and 2.97 Ma BP. Astronomical solutions for the periods in question were derived from the Berger and Loutre BL2 astronomical solution. Boundary conditions, excluding sea surface temperatures (SSTs) which were predicted by the slab-ocean model, were provided from the USGS PRISM2 2°×2° digital data set. The model results indicate that little annual variation (0.5°C) in SSTs, relative to a ‘control’ experiment, occurred during the middle Pliocene in response to the altered orbital configurations. Annual surface air temperatures also displayed little variation. Seasonally, surface air temperatures displayed a trend of cooler temperatures during December, January and February, and warmer temperatures during June, July and August. This pattern is consistent with altered seasonality resulting from the prescribed orbital configurations. Precipitation changes follow the seasonal trend observed for surface air temperature. Compared to present-day, surface wind strength and wind stress over the North Atlantic, North Pacific and Southern Ocean remained greater in each of the Pliocene experiments. This suggests that wind-driven gyral circulation may have been consistently greater during the middle Pliocene. The trend of climatic variability predicted by the GCM for the middle Pliocene accords with geological data. However, it is unclear if the model correctly simulates the magnitude of the variation. 
This uncertainty derives from: (a) the relative insensitivity of the GCM to perturbation in the imposed boundary conditions, (b) a lack of detailed time series data concerning changes to terrestrial ice cover and greenhouse gas concentrations for the middle Pliocene and (c) difficulties in representing the effects of ‘climatic history’ in snap-shot GCM experiments.
Abstract:
An investigation using the Stepping Out model of early hominin dispersal out of Africa is presented here. The late arrival of early hominins into Europe, as deduced from the fossil record, is shown to be consistent with poor ability of these hominins to survive in the Eurasian landscape. The present study also extends the understanding of modelling results from the original study by Mithen and Reed (2002. Stepping out: a computer simulation of hominid dispersal from Africa. J. Hum. Evol. 43, 433-462). The representation of climate and vegetation patterns has been improved through the use of climate model output. This study demonstrates that interpretative confidence may be strengthened, and new insights gained when climate models and hominin dispersal models are integrated. (C) 2007 Elsevier Ltd. All rights reserved.
Abstract:
Rising nitrate levels have been observed in UK Chalk catchments in recent decades, with concentrations now approaching or exceeding legislated maximum values in many areas. In response, strategies seeking to contain concentrations through appropriate land management are now in place. However, there is an increasing consensus that Chalk systems, a predominant landscape type over England and indeed northwest Europe, can retard decades of prior nitrate loading within their deep unsaturated zones. Current levels may not fully reflect the long-term impact of present-day practices, and stringent land management controls may not be enough to avert further medium-term rises. This paper discusses these issues in the context of the EU Water Framework Directive, drawing on data from recent experimental work and a new model (INCA-Chalk) that allows the impacts of different land use management practices to be explored. Results strongly imply that timelines for water quality improvement demanded by the Water Framework Directive are not realistic for the Chalk, and give an indication of time-scales over which improvements might be achieved. However, important unresolved scientific issues remain, and further monitoring and targeted data collection are recommended to reduce prediction uncertainties and allow cost effective strategies for mitigation to be designed and implemented. (C) 2007 Elsevier Ltd. All rights reserved.