51 results for Predicting model


Relevance:

30.00%

Publisher:

Abstract:

The sensitivity to the horizontal resolution of the climate, anthropogenic climate change, and seasonal predictive skill of the ECMWF model has been studied as part of Project Athena—an international collaboration formed to test the hypothesis that substantial progress in simulating and predicting climate can be achieved if mesoscale and subsynoptic atmospheric phenomena are more realistically represented in climate models. In this study the experiments carried out with the ECMWF model (atmosphere only) are described in detail. Here, the focus is on the tropics and the Northern Hemisphere extratropics during boreal winter. The resolutions considered in Project Athena for the ECMWF model are T159 (126 km), T511 (39 km), T1279 (16 km), and T2047 (10 km). It was found that increasing horizontal resolution improves the tropical precipitation, the tropical atmospheric circulation, the frequency of occurrence of Euro-Atlantic blocking, and the representation of extratropical cyclones in large parts of the Northern Hemisphere extratropics. All of these improvements come from the increase in resolution from T159 to T511, with relatively small changes for further resolution increases to T1279 and T2047, although it should be noted that results from the highest resolution are from a previously untested model version. Problems in simulating the Madden–Julian oscillation remain unchanged for all resolutions tested. There is some evidence that increasing horizontal resolution to T1279 leads to moderate increases in seasonal forecast skill during boreal winter in the tropics and Northern Hemisphere extratropics. Sensitivity experiments are discussed, which help to foster a better understanding of some of the resolution dependence found for the ECMWF model in Project Athena.
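
As a rough rule of thumb (an orientation aid, not stated in the abstract), the grid spacing associated with a triangular spectral truncation TN on a linear Gaussian grid is

\Delta x \;\approx\; \frac{2 \pi a}{2(N+1)} \;\approx\; \frac{20\,000\ \mathrm{km}}{N},

where a is the Earth's radius; for N = 159, 511, 1279 and 2047 this gives roughly 126, 39, 16 and 10 km, consistent with the figures quoted above.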

Relevance:

30.00%

Publisher:

Abstract:

An ensemble forecast is a collection of runs of a numerical dynamical model, initialized with perturbed initial conditions. In modern weather prediction, for example, ensembles are used to retrieve probabilistic information about future weather conditions. In this contribution, we are concerned with ensemble forecasts of a scalar quantity (say, the temperature at a specific location). We consider the event that the verification is smaller than the smallest, or larger than the largest, ensemble member. We call these events outliers. If a K-member ensemble accurately reflected the variability of the verification, outliers would occur with a base rate of 2/(K + 1). In operational forecast ensembles, though, this frequency is often found to be higher. We study the predictability of outliers and find that, exploiting information available from the ensemble, forecast probabilities for outlier events can be calculated that are more skilful than the unconditional base rate. We prove this analytically for statistically consistent forecast ensembles. Further, the analytical results are compared to the predictability of outliers in an operational forecast ensemble by means of model output statistics. We find the analytical and empirical results to agree both qualitatively and quantitatively.
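
A minimal sketch (assumed code, not from the paper) of the base-rate argument: when the verification and the K members are exchangeable draws from the same distribution, the verification is the smallest or the largest of the K + 1 values with probability 2/(K + 1).

    import numpy as np

    rng = np.random.default_rng(0)
    K = 50                      # ensemble size, chosen for illustration
    n = 200_000                 # number of forecast cases

    # Statistically consistent ensemble: members and verification
    # are drawn from the same distribution.
    members = rng.normal(size=(n, K))
    verification = rng.normal(size=n)

    outlier = (verification < members.min(axis=1)) | (verification > members.max(axis=1))
    print("empirical outlier frequency:", round(outlier.mean(), 4))
    print("base rate 2/(K+1):          ", round(2 / (K + 1), 4))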

Relevance:

30.00%

Publisher:

Abstract:

The ECMWF operational grid point model (with a resolution of 1.875° of latitude and longitude) and its limited area version (with a resolution of 0.47° of latitude and longitude), with boundary values from the global model, have been used to study the simulation of the typhoon Tip. The fine-mesh model was capable of simulating the main structural features of the typhoon and predicting a fall in central pressure of 60 mb in 3 days. The structure of the forecast typhoon, with a warm core (maximum potential temperature anomaly 17 K), intense swirling wind (maximum 55 m s⁻¹ at 850 mb) and spiralling precipitation patterns, is characteristic of a tropical cyclone. Comparison with the lower resolution forecast shows that the horizontal resolution is a determining factor in predicting not only the structure and intensity but even the movement of these vortices. However, an accurate and refined initial analysis is considered to be a prerequisite for a correct forecast of this phenomenon.

Relevance:

30.00%

Publisher:

Abstract:

Background. Within a therapeutic gene-by-environment (G×E) framework, we recently demonstrated that variation in the serotonin transporter promoter polymorphism (5HTTLPR) and marker rs6330 in the nerve growth factor gene (NGF) is associated with poorer outcomes following cognitive behaviour therapy (CBT) for child anxiety disorders. The aim of this study was to explore one potential means of extending the translational reach of G×E data in a way that may be clinically informative. We describe a 'risk-index' approach combining genetic, demographic and clinical data and test its ability to predict diagnostic outcome following CBT in anxious children. Method. DNA and clinical data were collected from 384 children with a primary anxiety disorder undergoing CBT. We tested our risk model in five cross-validation training sets. Results. In predicting treatment outcome, six variables had a minimum mean beta value of 0.5: 5HTTLPR, NGF rs6330, gender, primary anxiety severity, comorbid mood disorder and comorbid externalising disorder. A risk index (range 0–8) constructed from these variables had moderate predictive ability (AUC = 0.62–0.69) in this study. Children scoring high on this index (5–8) were approximately three times as likely to retain their primary anxiety disorder at follow-up as children scoring 2 or less. Conclusion. Significant genetic, demographic and clinical predictors of outcome following CBT for anxiety-disordered children were identified. Combining these predictors within a risk index could be used to identify which children are less likely to be diagnosis-free following CBT alone and thus may require longer or enhanced treatment. The 'risk-index' approach represents one means of harnessing the translational potential of G×E data.
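
A schematic of the additive risk-index idea. The variable names, binary coding and cut-off below are illustrative assumptions, not the coding used in the study; in particular, the published index spans 0–8, so some predictors evidently contribute more than one point.

    # Hypothetical sketch of an additive risk index built from dichotomised
    # predictors; names, coding and the high-risk cut-off are illustrative only.
    RISK_INDICATORS = ["5HTTLPR_risk_genotype", "NGF_rs6330_risk_allele",
                       "gender_risk_category",   # direction of the gender effect not asserted here
                       "high_primary_anxiety_severity",
                       "comorbid_mood_disorder",
                       "comorbid_externalising_disorder"]

    def risk_index(child):
        """One point per risk indicator present (values are 0 or 1)."""
        return sum(int(child[name]) for name in RISK_INDICATORS)

    child = dict.fromkeys(RISK_INDICATORS, 0)
    child.update({"5HTTLPR_risk_genotype": 1,
                  "high_primary_anxiety_severity": 1,
                  "comorbid_mood_disorder": 1})
    high_risk = risk_index(child) >= 5      # mirrors the 5-8 band quoted above
    print(risk_index(child), high_risk)     # 3 False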

Relevance:

30.00%

Publisher:

Abstract:

Details are given of the development and application of a 2D depth-integrated, conformal boundary-fitted, curvilinear model for predicting the depth-mean velocity field and the spatial concentration distribution in estuarine and coastal waters. A numerical method for conformal mesh generation, based on a boundary integral equation formulation, has been developed. By this method a general polygonal region with curved edges can be mapped onto a regular polygonal region with the same number of horizontal and vertical straight edges, and a multiply connected region can be mapped onto a regular region with the same connectivity. A stretching transformation on the conformally generated mesh has also been used to provide greater detail where it is needed close to the coast, with larger mesh sizes further offshore, thereby minimizing the computing effort whilst maximizing accuracy. The curvilinear hydrodynamic and solute model has been developed from a robust rectilinear model. The hydrodynamic equations are approximated using the ADI (alternating direction implicit) finite difference scheme with a staggered grid, and the solute transport equation is approximated using a modified QUICK scheme. Three numerical examples have been chosen to test the curvilinear model, with an emphasis placed on complex practical applications.
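
For illustration, one common form of one-dimensional stretching transformation that concentrates mesh points near a boundary is the sinh mapping sketched below; the abstract does not specify the transformation actually used, so this is an assumed, generic example.

    # Generic 1-D grid stretching that clusters points near x = 0 (the coast)
    # and coarsens offshore. Illustrative only; not the paper's transformation.
    import numpy as np

    def stretched_grid(n, length, beta=3.0):
        """Map uniform xi in [0, 1] to x in [0, length] with fine spacing near 0."""
        xi = np.linspace(0.0, 1.0, n)
        return length * np.sinh(beta * xi) / np.sinh(beta)

    x = stretched_grid(21, 10_000.0)      # e.g. a 10 km offshore extent (assumed)
    print(np.round(np.diff(x), 1))        # spacing grows with distance offshore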

Relevance:

30.00%

Publisher:

Abstract:

Agro-hydrological models have been widely used for optimizing resource use and minimizing environmental consequences in agriculture. SMCRN is a recently developed, sophisticated model which simulates crop response to nitrogen fertilizer for a wide range of crops, and the associated leaching of nitrate from arable soils. In this paper, we describe improvements to this model made by replacing the existing approximate hydrological cascade algorithm with a new, simple and explicit algorithm for the basic soil water flow equation, which not only enhanced the model's performance in hydrological simulation but was also essential for extending its application to situations where capillary flow is important. As a result, the updated SMCRN model can be used for more accurate study of water dynamics in the soil-crop system. The success of the update was demonstrated by the updated model consistently out-performing the original model in drainage simulations and in predicting the time course of soil water content in different layers of the soil-wheat system. Tests of the updated SMCRN model against data from four field crop experiments showed that crop nitrogen offtakes and soil mineral nitrogen in the top 90 cm were in good agreement with the measured values, indicating that the model can make more reliable predictions of nitrogen fate in the crop-soil system and thus provides a useful platform to assess the impacts of nitrogen fertilizer on crop yield and nitrogen leaching from different production systems.
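
A minimal sketch of what an explicit update for a greatly simplified soil water flow equation can look like. This is a generic illustration under assumed parameters, not the SMCRN algorithm: it represents capillary redistribution through a constant soil water diffusivity and omits gravity drainage and the matric-potential formulation.

    # Generic explicit finite-difference update for a simplified 1-D soil water
    # equation, d(theta)/dt = d/dz( D * d(theta)/dz ). Illustrative only.
    import numpy as np

    def step(theta, D, dz, dt):
        """Advance volumetric water content theta by one explicit time step.
        Zero-flux boundaries; stable for dt <= dz**2 / (2 * D.max())."""
        flux = np.zeros(theta.size + 1)                       # at layer interfaces
        flux[1:-1] = -0.5 * (D[1:] + D[:-1]) * np.diff(theta) / dz
        return theta - dt * np.diff(flux) / dz

    theta = np.full(10, 0.20)                                 # m3 m-3 per layer (assumed)
    theta[0] = 0.35                                           # wetter surface layer
    D = np.full(10, 1.0e-7)                                   # m2 s-1, assumed constant
    for _ in range(1_000):
        theta = step(theta, D, dz=0.1, dt=50.0)               # 0.1 m layers, 50 s steps
    print(np.round(theta, 3))                                 # water has redistributed downward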

Relevance:

30.00%

Publisher:

Abstract:

The Hadley Centre Global Environmental Model (HadGEM) includes two aerosol schemes: the Coupled Large-scale Aerosol Simulator for Studies in Climate (CLASSIC), and the new Global Model of Aerosol Processes (GLOMAP-mode). GLOMAP-mode is a modal aerosol microphysics scheme that simulates not only aerosol mass but also aerosol number, represents internally mixed particles, and includes aerosol microphysical processes such as nucleation. In this study, both schemes provide hindcast simulations of natural and anthropogenic aerosol species for the period 2000–2006. HadGEM simulations of the aerosol optical depth using GLOMAP-mode compare better than CLASSIC against a data-assimilated aerosol re-analysis and aerosol ground-based observations. Because of differences in wet deposition rates, the GLOMAP-mode sulphate aerosol residence time is two days longer than that of CLASSIC sulphate aerosol, whereas the black carbon residence time is much shorter. As a result, CLASSIC underestimates aerosol optical depths in continental regions of the Northern Hemisphere and likely overestimates absorption in remote regions. Aerosol direct and first indirect radiative forcings are computed from simulations of aerosols with emissions for the years 1850 and 2000. In 1850, GLOMAP-mode predicts lower aerosol optical depths and higher cloud droplet number concentrations than CLASSIC. Consequently, simulated clouds are much less susceptible to natural and anthropogenic aerosol changes when the microphysical scheme is used. In particular, the response of cloud condensation nuclei to an increase in dimethyl sulphide emissions becomes a factor of four smaller. The combined effect of different 1850 baselines, residence times, and abilities to affect cloud droplet number leads to substantial differences in the aerosol forcings simulated by the two schemes. GLOMAP-mode finds a present-day direct aerosol forcing of −0.49 W m⁻² on a global average, 72% stronger than the corresponding forcing from CLASSIC. This difference is compensated by changes in first indirect aerosol forcing: the forcing of −1.17 W m⁻² obtained with GLOMAP-mode is 20% weaker than with CLASSIC. Results suggest that mass-based schemes such as CLASSIC lack the necessary sophistication to provide realistic input to aerosol-cloud interaction schemes. Furthermore, the importance of the 1850 baseline highlights how model skill in predicting present-day aerosol does not guarantee reliable forcing estimates. These findings suggest that the more complex representation of aerosol processes in microphysical schemes improves the fidelity of simulated aerosol forcings.
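
For reference, the CLASSIC forcings implied by the quoted percentages (an inference from the numbers above, not stated explicitly) are a direct aerosol forcing of roughly −0.49/1.72 ≈ −0.28 W m⁻² and a first indirect forcing of roughly −1.17/0.80 ≈ −1.46 W m⁻².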

Relevance:

30.00%

Publisher:

Abstract:

Radiative forcing and climate sensitivity have been widely used as concepts to understand climate change. This work performs climate change experiments with an intermediate general circulation model (IGCM) to examine the robustness of the radiative forcing concept for carbon dioxide and solar constant changes. This IGCM has been specifically developed as a computationally fast model, but one that allows an interaction between physical processes and large-scale dynamics; the model allows many long integrations to be performed relatively quickly. It employs a fast and accurate radiative transfer scheme, as well as simple convection and surface schemes and a slab ocean, to model the effects of climate change mechanisms on the atmospheric temperatures and dynamics with a reasonable degree of complexity. The climatology of the IGCM run at T21 resolution with 22 levels is compared to the European Centre for Medium-Range Weather Forecasts (ECMWF) reanalysis data. The response of the model to changes in carbon dioxide and solar output is examined when these changes are applied globally and when constrained geographically (e.g. over land only). The CO2 experiments have a roughly 17% higher climate sensitivity than the solar experiments. It is also found that a forcing at high latitudes causes a 40% higher climate sensitivity than a forcing applied only at low latitudes. It is found that, despite differences in the model feedbacks, climate sensitivity is roughly constant over a range of distributions of CO2 and solar forcings. Hence, in the IGCM at least, the radiative forcing concept is capable of predicting global surface temperature changes to within 30% for the perturbations described here. It is concluded that radiative forcing remains a useful tool for assessing the natural and anthropogenic impact of climate change mechanisms on surface temperature.
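
The radiative forcing concept examined here is, in its usual form, the assumption that the global-mean surface temperature response scales linearly with the forcing through a climate sensitivity parameter that is approximately independent of the forcing agent and its spatial distribution:

\Delta T_s \;\approx\; \lambda F,

where F is the radiative forcing in W m⁻² and \lambda the climate sensitivity parameter in K (W m⁻²)⁻¹. The experiments above test how nearly a single \lambda holds across CO2, solar and geographically constrained perturbations.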

Relevance:

30.00%

Publisher:

Abstract:

As the calibration and evaluation of flood inundation models are a prerequisite for their successful application, there is a clear need to ensure that the performance measures that quantify how well models match the available observations are fit for purpose. This paper evaluates the binary pattern performance measures that are frequently used to compare flood inundation models with observations of flood extent. This evaluation considers whether these measures are able to calibrate and evaluate model predictions in a credible and consistent way, i.e. identifying the underlying model behaviour for a number of different purposes such as comparing models of floods of different magnitudes or on different catchments. Through theoretical examples, it is shown that the binary pattern measures are not consistent for floods of different sizes, such that for the same vertical error in water level, a model of a flood of large magnitude appears to perform better than a model of a smaller magnitude flood. Further, the commonly used Critical Success Index (usually referred to as F<2>) is biased in favour of overprediction of the flood extent, and is also biased towards correctly predicting areas of the domain with smaller topographic gradients. Consequently, it is recommended that future studies consider carefully the implications of reporting conclusions using these performance measures. Additionally, future research should consider whether a more robust and consistent analysis could be achieved by using elevation comparison methods instead.
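
For reference (standard notation, not spelled out in the abstract): with A the area correctly predicted as flooded, B the area predicted flooded but observed dry, and C the area observed flooded but predicted dry, the Critical Success Index is

F \;=\; \frac{A}{A + B + C}.

One way to see the overprediction bias noted above: for an observed flooded area A + C and an areal error \delta, pure overprediction scores F = (A + C)/(A + C + \delta), whereas pure underprediction of the same size scores F = (A + C - \delta)/(A + C), which is always the lower of the two.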

Relevance:

30.00%

Publisher:

Abstract:

Population modelling is increasingly recognised as a useful tool for pesticide risk assessment. For vertebrates that may ingest pesticides with their food, such as the woodpigeon (Columba palumbus), population models that simulate foraging behaviour explicitly can help predict both exposure and population-level impact. Optimal foraging theory is often assumed to explain the individual-level decisions driving distributions of individuals in the field, but it may not adequately predict spatial and temporal characteristics of woodpigeon foraging because of the woodpigeons' excellent memory, ability to fly long distances, and distinctive flocking behaviour. Here we present an individual-based model (IBM) of the woodpigeon. We used the model to predict distributions of foraging woodpigeons that use one of six alternative foraging strategies: optimal foraging, memory-based foraging and random foraging, each with or without flocking mechanisms. We used pattern-oriented modelling to determine which of the foraging strategies is best able to reproduce observed data patterns. Data used for model evaluation were gathered during a long-term woodpigeon study conducted between 1961 and 2004 and a radiotracking study conducted in 2003 and 2004, both in the UK, and are summarised here as three complex patterns: the distributions of foraging birds between vegetation types during the year, the number of fields visited daily by individuals, and the proportion of fields revisited by them on subsequent days. The model with a memory-based foraging strategy and a flocking mechanism was the only one to reproduce these three data patterns, and the optimal foraging model produced poor matches to all of them. The random foraging strategy reproduced two of the three patterns but was not able to guarantee population persistence. We conclude that, with the memory-based foraging strategy including a flocking mechanism, our model is realistic enough to estimate the potential exposure of woodpigeons to pesticides. We discuss how exposure can be linked to our model, and how the model could be used for risk assessment of pesticides, for example predicting exposure and effects in heterogeneous landscapes planted seasonally with a variety of crops, while accounting for differences in land use between landscapes.

Relevance:

30.00%

Publisher:

Abstract:

Earthworms are significant ecosystem engineers and are an important component of the diet of many vertebrates and invertebrates, so the ability to predict their distribution and abundance would have wide application in ecology, conservation and land management. Earthworm viability is known to be affected by the availability and quality of food resources, soil water conditions and temperature, but has not yet been modelled mechanistically to link effects on individuals to field population responses. Here we present a novel model capable of predicting the effects of land management and environmental conditions on the distribution and abundance of Aporrectodea caliginosa, the dominant earthworm species in agroecosystems. Our process-based approach uses individual-based modelling (IBM), in which each individual has its own energy budget. Individual earthworm energy budgets follow established principles of physiological ecology and are parameterised for A. caliginosa from experimental measurements under optimal conditions. Under suboptimal conditions (e.g. food limitation, low soil temperatures and water contents) reproduction is prioritised over growth. Good model agreement with independent laboratory data on individual cocoon production and growth of body mass, under variable feeding and temperature conditions, supports our representation of A. caliginosa physiology through energy budgets. Our mechanistic model is able to accurately predict A. caliginosa distribution and abundance in spatially heterogeneous soil profiles representative of field study conditions. Essential here is the explicit modelling of earthworm behaviour in the soil profile. Local earthworm movement responds to a trade-off between food availability and soil water conditions, and this determines the spatiotemporal distribution of the population in the soil profile. Importantly, multiple environmental variables can be manipulated simultaneously in the model to explore earthworm population exposure and responses to combinations of stressors. Potential applications include prediction of the population-level effects of pesticides and of changes in soil management (e.g. conservation tillage) and climate change.
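
A minimal sketch of the kind of priority rule described above (function and parameter names are assumptions; the actual A. caliginosa energy budget is parameterised in the paper, not here):

    # Schematic energy-budget allocation: maintenance is paid first, and
    # reproduction is prioritised over growth when assimilated energy is scarce.
    # Names and values are illustrative, not the model's parameters.
    def allocate(assimilated_energy, maintenance_cost, reproduction_demand):
        """Return (energy to reproduction, energy to growth) per time step."""
        available = max(assimilated_energy - maintenance_cost, 0.0)
        to_reproduction = min(available, reproduction_demand)
        to_growth = available - to_reproduction
        return to_reproduction, to_growth

    print(allocate(10.0, 4.0, 5.0))   # ample food: (5.0, 1.0)
    print(allocate(6.0, 4.0, 5.0))    # food-limited: (2.0, 0.0) - growth stops first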

Relevance:

30.00%

Publisher:

Abstract:

Many studies evaluating model boundary-layer schemes focus either on near-surface parameters or on short-term observational campaigns. This reflects the observational datasets that are widely available for use in model evaluation. In this paper we show how surface and long-term Doppler lidar observations, combined in a way that matches the model representation of the boundary layer as closely as possible, can be used to evaluate the skill of boundary-layer forecasts. We use a 2-year observational dataset from a rural site in the UK to evaluate a climatology of boundary-layer type forecast by the UK Met Office Unified Model. In addition, we demonstrate the use of a binary skill score, the Symmetric Extremal Dependence Index (SEDI), to investigate the dependence of forecast skill on season, horizontal resolution and forecast lead time. A clear diurnal and seasonal cycle can be seen in the climatology of both the model and the observations, with the main discrepancies being the model overpredicting cumulus-capped and decoupled stratocumulus-capped boundary layers and underpredicting well-mixed boundary layers. Using the SEDI skill score, the model is most skilful at predicting surface stability. The model's skill in predicting cumulus-capped and stratocumulus-capped stable boundary layers is low but greater than that of a 24 h persistence forecast. In contrast, the prediction of decoupled boundary layers and boundary layers with multiple cloud layers is lower than persistence. This process-based evaluation approach has the potential to be applied to other boundary-layer parameterisation schemes with similar decision structures.
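
For reference, the SEDI is commonly defined from the hit rate H and the false-alarm rate F of the binary event as

\mathrm{SEDI} \;=\; \frac{\ln F - \ln H - \ln(1 - F) + \ln(1 - H)}{\ln F + \ln H + \ln(1 - F) + \ln(1 - H)},

ranging from −1 to 1, with 1 for a perfect forecast; a property that motivates its use here is that it does not degenerate for rare event types.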

Relevance:

30.00%

Publisher:

Abstract:

A fast, simple climate modelling approach is developed for predicting, and helping to understand, general circulation model (GCM) simulations. We show that the simple model reproduces the GCM results accurately, for global-mean surface air temperature change and global-mean heat uptake projections from 9 GCMs in the fifth Coupled Model Intercomparison Project (CMIP5). This implies that understanding gained from idealised CO2 step experiments is applicable to policy-relevant scenario projections. Our approach is conceptually simple. It works by using the climate response to a CO2 step change taken directly from a GCM experiment. With radiative forcing from non-CO2 constituents obtained by adapting the Forster and Taylor method, we use our method to estimate results for CMIP5 representative concentration pathway (RCP) experiments for cases not run by the GCMs. We estimate differences between pairs of RCPs rather than RCP anomalies relative to the pre-industrial state. This gives better results because it makes greater use of available GCM projections. The GCMs exhibit differences in radiative forcing, which we incorporate in the simple model. We analyse the thus-completed ensemble of RCP projections. The ensemble-mean changes between 1986–2005 and 2080–2099 for global temperature (heat uptake) are, for RCP8.5: 3.8 K (2.3 × 10²⁴ J); for RCP6.0: 2.3 K (1.6 × 10²⁴ J); for RCP4.5: 2.0 K (1.6 × 10²⁴ J); for RCP2.6: 1.1 K (1.3 × 10²⁴ J). The relative spread (standard deviation/ensemble mean) for these scenarios is around 0.2 and 0.15 for temperature and heat uptake respectively. We quantify the relative effect of mitigation action, through reduced emissions, via the time-dependent ratios (change in RCPx)/(change in RCP8.5), using changes with respect to pre-industrial conditions. We find that the effects of mitigation on global-mean temperature change and heat uptake are very similar across these different GCMs.
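
A minimal sketch of the linear superposition idea underlying such step-response emulators. This is illustrative only: the function name, the idealised response shape and the forcing values are assumptions, and the paper's treatment of non-CO2 forcing and RCP differencing is not reproduced.

    import numpy as np

    def project(step_response, step_forcing, forcing):
        """Superpose scaled copies of the GCM's CO2-step temperature response R(t):
        T(t) = sum over s of R(t - s) * dF(s) / F_step."""
        dF = np.diff(forcing, prepend=forcing[0])     # forcing increments per step
        T = np.zeros_like(forcing)
        for s, df in enumerate(dF):
            T[s:] += step_response[: forcing.size - s] * df / step_forcing
        return T

    years = np.arange(200)
    R = 3.0 * (1.0 - np.exp(-years / 30.0))           # idealised step response, K (assumed)
    F_step = 3.7                                       # W m-2, assumed CO2-step forcing
    F = np.linspace(0.0, 4.5, years.size)             # assumed smooth forcing ramp
    T = project(R, F_step, F)
    print(round(T[-1], 2))                             # warming at the end of the ramp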

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a numerical model for predicting the evolution of the pattern of ionospheric convection in response to general time-dependent magnetic reconnection at the dayside magnetopause and in the cross-tail current sheet of the geomagnetic tail. The model quantifies the concepts of ionospheric flow excitation by Cowley and Lockwood (1992), assuming a uniform spatial distribution of ionospheric conductivity. The model is demonstrated using an example in which travelling reconnection pulses commence near noon and then move across the dayside magnetopause towards both dawn and dusk. Two such pulses, 8 min apart, are used, and each causes the reconnection to be active for 1 min at every magnetic local time (MLT) it passes over. This example demonstrates how the convection response to a given change in the interplanetary magnetic field (via the reconnection rate) depends on the previous reconnection history. The causes of this effect are explained. The inherent assumptions and the potential applications of the model are discussed.

Relevance:

30.00%

Publisher:

Abstract:

We report on the first real-time ionospheric predictions network and its capabilities to ingest a global database and forecast F-layer characteristics and in situ electron densities along the track of an orbiting spacecraft. A global network of ionosonde stations reported around-the-clock observations of F-region heights and densities, and an on-line library of models provided forecasting capabilities. Each model was tested against the incoming data; relative accuracies were intercompared to determine the best overall fit to the prevailing conditions; and the best-fit model was used to predict ionospheric conditions on an orbit-to-orbit basis for the 12-hour period following a twice-daily model test and validation procedure. It was found that the best-fit model often provided averaged (i.e., climatologically based) accuracies better than 5% in predicting the heights and critical frequencies of the F-region peaks in the latitudinal domain of the TSS-1R flight path. There was a sharp contrast, however, in model-measurement comparisons involving predictions of actual, unaveraged, along-track densities at the 295 km orbital altitude of TSS-1R. In this case, extrema in the first-principles models varied by as much as an order of magnitude in density predictions, and the best-fit models were found to disagree with the in situ observations of Ne by as much as 140%. The discrepancies are interpreted as a manifestation of difficulties in accurately and self-consistently modeling the external controls of solar and magnetospheric inputs and the spatial and temporal variabilities in electric fields, thermospheric winds, plasmaspheric fluxes, and chemistry.
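
A schematic of the twice-daily best-fit selection step described above (function and variable names are illustrative assumptions, not from the original network):

    # Score each candidate ionospheric model against the latest ionosonde
    # observations and return the name of the lowest-error model, which would
    # then be used for the following 12-hour prediction window. Sketch only.
    def select_best_model(models, observations):
        """models: dict mapping name -> predict(station, time) callable returning foF2.
        observations: iterable of (station, time, observed_foF2)."""
        def mean_abs_rel_error(predict):
            errs = [abs(predict(st, t) - obs) / obs for st, t, obs in observations]
            return sum(errs) / len(errs)
        return min(models, key=lambda name: mean_abs_rel_error(models[name]))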