937 results for Evaluation model


Relevance: 30.00%

Abstract:

Runoff fields over northern Africa (10–25°N, 20°W–30°E) derived from 17 atmospheric general circulation models (AGCMs) driven by identical 6 ka BP orbital forcing, sea surface temperatures, and CO2 concentration have been analyzed using a hydrological routing scheme (HYDRA) to simulate changes in lake area. The AGCM-simulated runoff produced six-fold differences in simulated lake area between models, although even the largest simulated changes considerably underestimate the observed changes in lake area during the mid-Holocene. The inter-model differences in simulated lake area are largely due to differences in simulated runoff (squared correlation coefficient R² = 0.84), and most of these differences can be attributed to differences in the simulated precipitation (R² = 0.83). The higher correlation between runoff and simulated lake area (R² = 0.92) implies that simulated differences in evaporation also contribute. When runoff is instead calculated using an offline land-surface scheme (BIOME3), the correlation between runoff and simulated lake area rises to R² = 0.94. Finally, the spatial distribution of simulated precipitation can exert an important control on the overall response.
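
For illustration, a minimal sketch of the kind of inter-model regression behind these R² values, using invented runoff and lake-area numbers rather than the PMIP simulations:

```python
import numpy as np

# Sketch of the inter-model regression behind the R^2 values quoted above:
# correlate each AGCM's simulated runoff with the lake area obtained when
# that runoff is routed through HYDRA. The 17 values below are invented
# for demonstration; they are not the PMIP simulations.
rng = np.random.default_rng(0)
runoff = rng.uniform(50, 300, size=17)                 # mean runoff per model (mm/yr)
lake_area = 1.2 * runoff + rng.normal(0, 25, size=17)  # routed lake area (10^3 km^2)

r = np.corrcoef(runoff, lake_area)[0, 1]
print(f"R^2 between simulated runoff and lake area: {r**2:.2f}")
```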

Relevance: 30.00%

Abstract:

Variations in lake area and depth reflect climatically induced changes in the water balance of overflowing as well as closed lakes. A new global database of lake status has been assembled, and is used to compare two simulations for 6 ka (6000 yr ago) made with successive R15 versions of the NCAR Community Climate Model (CCM). Simulated water balance was expressed as anomalies of annual precipitation minus evaporation (P-E); observed water balance as anomalies of lake status. Comparisons were made visually, by comparing regional averages, and by a statistic that compares the signs of simulated P-E anomalies (smoothly interpolated to the lake sites) with the status anomalies. Both CCM0 and CCM1 showed enhanced Northern Hemisphere monsoons at 6 ka. Both underestimated the effect, but CCM1 fitted the spatial patterns better. In the northern mid- and high latitudes the two versions differed more, and fitted the data less satisfactorily. CCM1 performed better than CCM0 in North America and central Eurasia, but not in Europe. Both models (especially CCM0) simulated excessive aridity in interior Eurasia. The models were systematically wrong in the southern mid-latitudes. Problems may have been caused by inadequate treatment of changes in sea-surface conditions in both models. Palaeolake status data will continue to provide a benchmark for the evaluation of modelling improvements.
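
A minimal sketch of the sign-comparison statistic, assuming invented anomalies at a handful of lake sites (the paper's interpolation and any weighting are omitted):

```python
import numpy as np

# Sign-comparison sketch: fraction of lake sites where the sign of the
# simulated P-E anomaly (interpolated to the site) matches the sign of the
# observed lake status anomaly. All values are invented for demonstration.
pe_anom = np.array([+0.4, +0.1, -0.3, +0.2, -0.5, +0.6])  # simulated P-E anomaly
status_anom = np.array([+1, -1, -1, +1, -1, +1])          # lake status anomaly sign

agreement = np.mean(np.sign(pe_anom) == np.sign(status_anom))
print(f"sign agreement: {agreement:.0%}")  # 5 of 6 sites agree here
```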

Relevance: 30.00%

Abstract:

This paper presents an in-depth critical discussion and derivation of a detailed small-signal analysis of the Phase-Shifted Full-Bridge (PSFB) converter. Circuit parasitics, resonant inductance and transformer turns ratio have all been taken into account in the evaluation of this topology’s open-loop control-to-output, line-to-output and load-to-output transfer functions. Accordingly, the significant impact of losses and resonant inductance on the converter’s transfer functions is highlighted. The enhanced dynamic model proposed in this paper enables the correct design of the converter compensator, including the effect of parasitics on the dynamic behavior of the PSFB converter. Detailed experimental results for a real-life 36V-to-14V/10A PSFB industrial application show excellent agreement with the predictions of the model proposed herein.
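
As a generic illustration (not the paper's derivation), the sketch below shows how a lumped parasitic resistance damps the resonant peak of a textbook second-order control-to-output transfer function; all component values are arbitrary placeholders:

```python
import numpy as np
from scipy import signal

# Generic second-order sketch: a buck-derived stage (series L with parasitic
# resistance r_par feeding a parallel RC load) has
#   Gvd(s) = Vin / (s^2*L*C + s*(L/R + r_par*C) + 1 + r_par/R),
# so parasitic resistance adds damping that flattens the resonant peak.
Vin, L, C, R = 36.0, 10e-6, 100e-6, 1.4
for r_par in (0.0, 0.2):  # lumped parasitic resistance (ohm)
    num = [Vin]
    den = [L * C, L / R + r_par * C, 1 + r_par / R]
    w, mag, _ = signal.bode(signal.TransferFunction(num, den))
    print(f"r_par = {r_par} ohm -> peak gain {mag.max():.1f} dB")
```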

Relevance: 30.00%

Abstract:

This paper evaluates environmental externality when the structure of the externality is cumulative. The evaluation exercise is based on the assumption that the agents in question form conjectural variations. A number of environments fall within this classification and have received due attention in the literature; these include environmental externality, oligopoly and the analysis of the private provision of public goods. Although heterogeneous, these environments possess considerable analytical homogeneity and permit a general model treatment. We highlight the general analytical approach by focusing on the latter context, in which debate centers on four issues: the existence of free-riding, the extent to which contributions are matched equally across individuals, the nature of conjectures consistent with equilibrium, and the allocative inefficiency of alternative regimes. This paper resolves each of these issues, with the following conclusions: a consistent-conjectures equilibrium exists in the private provision of public goods, and it is the monopolistic-conjectures equilibrium. Agents act identically, contributing positive amounts of the public good in an efficient allocation of resources. There is complete matching of contributions among agents, no free-riding, and the allocation is independent of the number of members within the community. Thus the Olson conjecture, that inefficiency is exacerbated by community size, has no foundation in a consistent-conjectures, cumulative-externality context.
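
For readers unfamiliar with the framework, here is the textbook conjectural-variations condition that the abstract's claims rest on, in our own notation (the paper's setup may differ in detail):

```latex
% Agent i chooses contribution g_i, consumes x_i = w_i - g_i of the private
% good, and enjoys the public good G = \sum_j g_j. Under the conjecture
% dG_{-i}/dg_i = \nu, agent i's first-order condition is
\[
  \frac{\partial u_i / \partial G}{\partial u_i / \partial x_i}
  = \frac{1}{1 + \nu}.
\]
% Nash conjectures (\nu = 0) give the usual free-riding result. Monopolistic
% conjectures (\nu = n - 1, i.e. full matching by the other n - 1 agents)
% give MRS_i = 1/n for every agent, so \sum_i MRS_i = 1 and the Samuelson
% condition holds: contributions are matched and the allocation is efficient,
% consistent with the conclusions stated above.
```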

Relevance: 30.00%

Abstract:

Population modelling is increasingly recognised as a useful tool for pesticide risk assessment. For vertebrates that may ingest pesticides with their food, such as the woodpigeon (Columba palumbus), population models that simulate foraging behaviour explicitly can help predict both exposure and population-level impact. Optimal foraging theory is often assumed to explain the individual-level decisions driving distributions of individuals in the field, but it may not adequately predict spatial and temporal characteristics of woodpigeon foraging because of the woodpigeons’ excellent memory, ability to fly long distances, and distinctive flocking behaviour. Here we present an individual-based model (IBM) of the woodpigeon. We used the model to predict distributions of foraging woodpigeons that use one of six alternative foraging strategies: optimal foraging, memory-based foraging and random foraging, each with or without flocking mechanisms. We used pattern-oriented modelling to determine which of the foraging strategies is best able to reproduce observed data patterns. Data used for model evaluation were gathered during a long-term woodpigeon study conducted between 1961 and 2004 and a radiotracking study conducted in 2003 and 2004, both in the UK, and are summarised here as three complex patterns: the distributions of foraging birds between vegetation types during the year, the number of fields visited daily by individuals, and the proportion of fields revisited by them on subsequent days. The model with a memory-based foraging strategy and a flocking mechanism was the only one to reproduce these three data patterns, and the optimal foraging model produced poor matches to all of them. The random foraging strategy reproduced two of the three patterns but was not able to guarantee population persistence. We conclude that, with the memory-based foraging strategy including a flocking mechanism, our model is realistic enough to estimate the potential exposure of woodpigeons to pesticides. We discuss how exposure can be linked to our model, and how the model could be used for risk assessment of pesticides, for example predicting exposure and effects in heterogeneous landscapes planted seasonally with a variety of crops, while accounting for differences in land use between landscapes.
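
A toy sketch of a memory-based field choice with a flocking bonus, in the spirit of the strategies compared above; the scoring rule, weights and field data are invented, not taken from the model description:

```python
# Illustrative memory-based foraging rule with a flocking bonus: a bird scores
# each candidate field by its remembered intake rate plus a bonus for birds
# already present, and picks the best. All numbers are made up.
def choose_field(memory, flock_counts, weight_flock=0.5):
    """memory: field -> remembered intake; flock_counts: field -> birds present."""
    def score(field):
        return memory.get(field, 0.0) + weight_flock * flock_counts.get(field, 0)
    fields = set(memory) | set(flock_counts)
    return max(fields, key=score)

memory = {"stubble_A": 3.2, "pasture_B": 1.1, "rape_C": 2.0}
flock_counts = {"pasture_B": 40, "rape_C": 5}
print(choose_field(memory, flock_counts))  # pasture_B: flockmates outweigh memory
```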

Relevance: 30.00%

Abstract:

A statistical–dynamical downscaling (SDD) approach for the regionalization of wind energy output (Eout) over Europe, with special focus on Germany, is proposed. SDD uses an extended circulation weather type (CWT) analysis of global daily mean sea level pressure fields, with the central point located over Germany. Seventy-seven weather classes based on the associated CWT and the intensity of the geostrophic flow are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. Using weather class frequencies from different data sets, the simulated representatives are recombined into probability density functions (PDFs) of near-surface wind speed and finally into Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, the results of SDD are compared to wind observations and to simulated Eout from purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10-m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to DD-simulated Eout. For decadal hindcasts, results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic and the Benelux countries, for which high correlations between annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlations are found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the Earth System Model of the Max Planck Institute (MPI-ESM) decadal prediction system. Long-term climate change projections of ECHAM5/MPI-OM under Special Report on Emissions Scenarios (SRES) forcing, as obtained by SDD, agree well with the results of other studies using DD methods, with increasing Eout over northern Europe and a negative trend over southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
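
A compact sketch of the recombination step, assuming invented class frequencies, lognormal class-conditional wind speed PDFs, and an idealized turbine power curve:

```python
import numpy as np

# Sketch of the recombination step: weather-class frequencies weight
# class-conditional wind speed PDFs, and an idealized turbine power curve
# turns the combined PDF into expected energy output. Class statistics and
# turbine parameters are invented placeholders, not values from the study.
v = np.linspace(0.0, 30.0, 301)  # wind speed grid (m/s)
dv = v[1] - v[0]

def lognorm_pdf(v, mu, sigma):
    out = np.zeros_like(v)
    m = v > 0
    out[m] = (np.exp(-(np.log(v[m]) - mu) ** 2 / (2 * sigma ** 2))
              / (v[m] * sigma * np.sqrt(2 * np.pi)))
    return out

classes = [(0.5, 1.6, 0.5), (0.3, 2.1, 0.4), (0.2, 2.5, 0.3)]  # (freq, mu, sigma)
pdf = sum(f * lognorm_pdf(v, mu, s) for f, mu, s in classes)

def power(v, cut_in=3.5, rated_v=13.0, cut_out=25.0, rated_p=2.0):  # MW
    p = np.where((v >= cut_in) & (v < rated_v),
                 rated_p * ((v - cut_in) / (rated_v - cut_in)) ** 3, 0.0)
    return np.where((v >= rated_v) & (v <= cut_out), rated_p, p)

e_out = np.sum(power(v) * pdf) * dv * 8760  # expected annual output (MWh)
print(f"Expected Eout: {e_out:.0f} MWh/yr")
```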

Relevance: 30.00%

Abstract:

Regional climate downscaling has arrived at an important juncture. Some in the research community favour continued refinement and evaluation of downscaling techniques within a broader framework of uncertainty characterisation and reduction. Others are calling for smarter use of downscaling tools, accepting that conventional, scenario-led strategies for adaptation planning have limited utility in practice. This paper sets out the rationale and new functionality of the Decision Centric (DC) version of the Statistical DownScaling Model (SDSM-DC). This tool enables synthesis of plausible daily weather series, exotic variables (such as tidal surge), and climate change scenarios guided, not determined, by climate model output. Two worked examples are presented. The first shows how SDSM-DC can be used to reconstruct and in-fill missing records based on calibrated predictor-predictand relationships. Daily temperature and precipitation series from sites in Africa, Asia and North America are deliberately degraded to show that SDSM-DC can reconstitute lost data. The second demonstrates the application of the new scenario generator for stress testing a specific adaptation decision. SDSM-DC is used to generate daily precipitation scenarios to simulate winter flooding in the Boyne catchment, Ireland. This sensitivity analysis reveals the conditions under which existing precautionary allowances for climate change might be insufficient. We conclude by discussing the wider implications of the proposed approach and research opportunities presented by the new tool.
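
A minimal sketch of the first worked example's logic, reconstructing deliberately removed days from a calibrated predictor-predictand regression; the data and the single-predictor linear form are ours, not SDSM-DC's:

```python
import numpy as np

# Regression-based in-filling sketch: calibrate a predictor-predictand
# relationship on days where the station record exists, then reconstruct the
# deliberately removed days. Synthetic data throughout; SDSM-DC itself uses
# richer predictor sets and stochastic weather-generator components.
rng = np.random.default_rng(1)
predictor = rng.normal(0, 1, 365)                     # e.g. reanalysis predictor
station = 12 + 4 * predictor + rng.normal(0, 0.8, 365)  # station temperature (degC)

missing = np.zeros(365, bool)
missing[100:160] = True                               # degrade the record
a, b = np.polyfit(predictor[~missing], station[~missing], 1)
reconstructed = a * predictor[missing] + b

rmse = np.sqrt(np.mean((reconstructed - station[missing]) ** 2))
print(f"in-fill RMSE: {rmse:.2f} degC")
```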

Relevance: 30.00%

Abstract:

Many of the next generation of global climate models will include aerosol schemes which explicitly simulate the microphysical processes that determine the particle size distribution. These models enable aerosol optical properties and cloud condensation nuclei (CCN) concentrations to be determined by fundamental aerosol processes, which should lead to a more physically based simulation of aerosol direct and indirect radiative forcings. This study examines the global variation in particle size distribution simulated by 12 global aerosol microphysics models to quantify model diversity and to identify any common biases against observations. Evaluation against size distribution measurements from a new European network of aerosol supersites shows that the mean model agrees quite well with the observations at many sites on the annual mean, but there are some seasonal biases common to many sites. In particular, at many of these European sites, the accumulation mode number concentration is biased low during winter and Aitken mode concentrations tend to be overestimated in winter and underestimated in summer. At high northern latitudes, the models strongly underpredict Aitken and accumulation particle concentrations compared to the measurements, consistent with previous studies that have highlighted the poor performance of global aerosol models in the Arctic. In the marine boundary layer, the models capture the observed meridional variation in the size distribution, which is dominated by the Aitken mode at high latitudes, with an increasing concentration of accumulation particles with decreasing latitude. Considering vertical profiles, the models reproduce the observed peak in total particle concentrations in the upper troposphere due to new particle formation, although modelled peak concentrations tend to be biased high over Europe. Overall, the multi-model-mean data set simulates the global variation of the particle size distribution with a good degree of skill, suggesting that most of the individual global aerosol microphysics models are performing well, although the large model diversity indicates that some models are in poor agreement with the observations. Further work is required to better constrain size-resolved primary and secondary particle number sources, and an improved understanding of nucleation and growth (e.g. the role of nitrate and secondary organics) will improve the fidelity of simulated particle size distributions.
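
For reference, the quantity being evaluated here is conventionally represented as a sum of lognormal modes; a sketch with generic textbook Aitken and accumulation mode parameters (not any model's output):

```python
import numpy as np

# Number size distribution as a sum of lognormal modes:
#   dN/dlnD = sum_i N_i / (sqrt(2 pi) ln(sigma_i))
#             * exp(-ln^2(D / Dg_i) / (2 ln^2(sigma_i)))
# Mode parameters below are generic textbook values, not any model's output.
D = np.logspace(0, 3, 200)  # particle diameter (nm)

def mode(D, N, Dg, sigma):
    return (N / (np.sqrt(2 * np.pi) * np.log(sigma))
            * np.exp(-np.log(D / Dg) ** 2 / (2 * np.log(sigma) ** 2)))

dNdlnD = mode(D, 800, 40, 1.6) + mode(D, 300, 150, 1.5)  # Aitken + accumulation (cm^-3)
dlnD = np.log(D[1] / D[0])                               # uniform in log space
print(f"total number concentration ~ {np.sum(dNdlnD) * dlnD:.0f} cm^-3")
```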

Relevance: 30.00%

Abstract:

This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations that deserves a dedicated future study. Furthermore, although the OA / OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a−1 (range 34–144 Tg a−1) and the median SOA source strength (natural and anthropogenic) is 19 Tg a−1 (range 13–121 Tg a−1). Among the models that take into account the semi-volatile SOA nature, the median source is calculated to be 51 Tg a−1 (range 16–121 Tg a−1), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a−1; range 13–20 Tg a−1, with one model at 37 Tg a−1). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days). In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a−1 (range 28–209 Tg a−1), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurements analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. 
The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of a high (negative) MNB and higher correlation at urban stations, compared with the low MNB and lower correlation at remote sites, suggests that knowledge of the processes that govern aerosol processing, transport and removal, in addition to knowledge of the sources, is important at the remote stations. There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, this complexity is needed in models in order to distinguish between anthropogenic and natural OA, as required for climate mitigation, and to calculate the impact of OA on climate accurately.
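
The two metrics quoted throughout, written out on synthetic station data:

```python
import numpy as np

# The two evaluation metrics used above: the mean normalized bias,
#   MNB = (1/N) * sum_i (M_i - O_i) / O_i,
# and the temporal correlation between modelled (M) and observed (O) series.
# The series below are synthetic stand-ins for station OC/OA measurements.
rng = np.random.default_rng(2)
obs = rng.lognormal(mean=0.5, sigma=0.6, size=365)  # observed OC (ug m^-3)
mod = 0.5 * obs * rng.lognormal(0, 0.3, 365)        # model biased low

mnb = np.mean((mod - obs) / obs)
r = np.corrcoef(mod, obs)[0, 1]
print(f"MNB = {mnb:+.2f}, temporal correlation r = {r:.2f}")
```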

Relevance: 30.00%

Abstract:

Longitudinal flow bursts observed by the European Incoherent Scatter (EISCAT) radar, in association with dayside auroral transients observed from Svalbard, have been interpreted as resulting from pulses of enhanced reconnection at the dayside magnetopause. However, an alternative model has recently been proposed for a steady rate of magnetopause reconnection, in which the bursts of longitudinal flow are due to increases in the field line curvature force associated with the B_y component of the magnetosheath field. Here we evaluate these two models using observations on January 20, 1990, by EISCAT and a 630-nm all-sky camera at Ny Ålesund. For both models, we predict the behavior of both the dayside flows and the 630-nm emissions on newly opened field lines. It is shown that the signatures of steady reconnection and magnetosheath B_y changes could possibly resemble the observed 630-nm auroral events, but only for certain locations of the observing site relative to the ionospheric projection of the reconnection X line; however, in such cases, the flow bursts would be seen between the 630-nm transients and not within them. In contrast, the model of reconnection rate pulses predicts that the flows will be enhanced within each 630-nm transient auroral event. The observations on January 20, 1990, are shown to be consistent with the model of enhanced reconnection rate pulses over a background level and inconsistent with the effects of periodic enhancements of the magnitude of the magnetosheath B_y component. We estimate that the reconnection rate within the pulses would have to be at least an order of magnitude larger than the background level between the pulses.

Relevance: 30.00%

Abstract:

Mechanistic catchment-scale phosphorus models appear to perform poorly where diffuse sources dominate. We investigate the reasons for this for one model, INCA-P, testing model output against 18 months of daily data in a small Scottish catchment. We examine key model processes and provide recommendations for model improvement and simplification. Improvements to the particulate phosphorus simulation are especially needed. The model evaluation procedure is then generalised to provide a checklist for identifying why model performance may be poor or unreliable, incorporating calibration, data, structural and conceptual challenges. There needs to be greater recognition that current models struggle to produce positive Nash–Sutcliffe statistics in agricultural catchments when evaluated against daily data. Phosphorus modelling is difficult, but models are not as useless as this might suggest. We found a combination of correlation coefficients, bias, a comparison of distributions, and a visual assessment of time series to be a better means of identifying realistic simulations.
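
For concreteness, the Nash–Sutcliffe statistic alongside a percent-bias diagnostic, on invented daily phosphorus values:

```python
import numpy as np

# Nash-Sutcliffe efficiency: NSE = 1 - sum((sim-obs)^2) / sum((obs-mean(obs))^2).
# NSE <= 0 means the model does no better than predicting the observed mean,
# the failure mode described above for daily data in agricultural catchments.
def nse(sim, obs):
    sim, obs = np.asarray(sim, float), np.asarray(obs, float)
    return 1.0 - np.sum((sim - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)

def pbias(sim, obs):
    return 100.0 * (np.sum(sim) - np.sum(obs)) / np.sum(obs)

obs = np.array([0.05, 0.04, 0.30, 0.22, 0.08, 0.06])  # daily TP (mg/L), invented
sim = np.array([0.06, 0.05, 0.12, 0.28, 0.10, 0.05])
print(f"NSE = {nse(sim, obs):.2f}, bias = {pbias(sim, obs):+.1f}%")
```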

Relevance: 30.00%

Abstract:

Current feed evaluation systems for ruminants are too imprecise to describe diets in terms of their acidosis risk. The dynamic mechanistic model described herein arises from the integration of a lactic acid (La) metabolism module into an extant model of whole-rumen function. The model was evaluated using published data from cows and sheep fed a range of diets or infused with various doses of La. The model performed well in simulating peak rumen La concentrations (coefficient of determination = 0.96; root mean square prediction error = 16.96% of observed mean), although the sampling frequency of the published data prevented a comprehensive comparison of predicted versus observed time to peak La accumulation. The model showed a tendency for increased La accumulation following feeding of diets rich in nonstructural carbohydrates, although less-soluble starch sources such as corn tended to limit rumen La concentration. Simulated La absorption from the rumen remained low throughout the feeding cycle. The competition between bacteria and protozoa for rumen La suggests a variable contribution of protozoa to total La utilization. However, the model was unable to simulate the effects of defaunation on rumen La metabolism, indicating a need for a more detailed description of protozoal metabolism. The model could form the basis of a feed evaluation system with regard to rumen La metabolism.
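
The two fit statistics quoted above, computed on invented peak lactic acid concentrations:

```python
import numpy as np

# Coefficient of determination (r^2) and root mean square prediction error
# expressed as a percentage of the observed mean, as quoted in the abstract.
# Values below are invented peak rumen La concentrations (mM), not the
# published evaluation data.
obs = np.array([4.0, 12.0, 35.0, 60.0, 18.0])
pred = np.array([5.0, 10.5, 38.0, 52.0, 20.0])

r2 = np.corrcoef(obs, pred)[0, 1] ** 2
rmspe = np.sqrt(np.mean((pred - obs) ** 2)) / obs.mean() * 100
print(f"r^2 = {r2:.2f}, RMSPE = {rmspe:.1f}% of observed mean")
```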

Relevance: 30.00%

Abstract:

While state-of-the-art models of Earth's climate system have improved tremendously over the last 20 years, nontrivial structural flaws still hinder their ability to forecast the decadal dynamics of the Earth system realistically. Contrasting the skill of these models not only with each other but also with empirical models can reveal the space and time scales on which simulation models exploit their physical basis effectively, and can quantify their ability to add information to operational forecasts. The skill of decadal probabilistic hindcasts of annual global-mean and regional-mean temperatures from the EU Ensemble-Based Predictions of Climate Changes and Their Impacts (ENSEMBLES) project is contrasted with that of several empirical models. Both the ENSEMBLES models and a “dynamic climatology” empirical model show probabilistic skill above that of a static climatology for global-mean temperature. The dynamic climatology model, however, often outperforms the ENSEMBLES models. The fact that empirical models display skill similar to that of today's state-of-the-art simulation models suggests that empirical forecasts can improve decadal forecasts for climate services, just as in weather, medium-range, and seasonal forecasting. It is suggested that the direct comparison of simulation models with empirical models become a regular component of large model forecast evaluations. Doing so would clarify the extent to which state-of-the-art simulation models provide information beyond that available from simpler empirical models, and would clarify current limitations in using simulation forecasting for decision support. Ultimately, the skill of simulation models based on physical principles is expected to surpass that of empirical models in a changing climate; their direct comparison provides information on progress toward that goal which is not available in model-model intercomparisons.
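
One simple reading of a "dynamic climatology" benchmark is sketched below, forecasting each year as the mean of the preceding k observations; the ENSEMBLES study's exact formulation may differ:

```python
import numpy as np

# Sketch: a "dynamic climatology" forecasts each year's global-mean
# temperature anomaly as the mean of the preceding k observed years, so the
# climatology drifts with the trend; a static climatology does not.
# Synthetic anomaly series, for illustration only.
rng = np.random.default_rng(3)
years = np.arange(1960, 2011)
anom = 0.015 * (years - 1960) + rng.normal(0, 0.1, years.size)  # trend + noise

k = 10
forecast = np.array([anom[i - k:i].mean() for i in range(k, anom.size)])
static = anom[:k].mean()  # static climatology from the first k years

err_dyn = np.mean((forecast - anom[k:]) ** 2)
err_static = np.mean((static - anom[k:]) ** 2)
print(f"MSE dynamic: {err_dyn:.3f}, static: {err_static:.3f}")
```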

Relevance: 30.00%

Abstract:

A new frontier in weather forecasting is emerging as operational forecast models are now run at convection-permitting resolutions by many national weather services. However, this is not a panacea; significant systematic errors remain in the character of convective storms and rainfall distributions. The DYMECS project (Dynamical and Microphysical Evolution of Convective Storms) is taking a fundamentally new approach to evaluating and improving such models: rather than relying on a limited number of cases, which may not be representative, we have gathered a large database of 3D storm structures on 40 convective days using the Chilbolton radar in southern England. We have related these structures to storm life cycles derived by tracking features in the rainfall from the UK radar network, and compared them statistically to storm structures in the Met Office model, which we ran at horizontal grid lengths between 1.5 km and 100 m, including simulations with different subgrid mixing lengths. We also evaluated the scale and intensity of convective updrafts using a new radar technique. We find that the horizontal size of simulated convective storms, and of the updrafts within them, is much too large at 1.5-km grid length, such that the convective mass flux of individual updrafts can be too large by an order of magnitude. The scale of precipitation cores and updrafts decreases steadily with decreasing grid length, as does the typical storm lifetime. The 200-m grid-length simulation with the standard mixing length performs best across all diagnostics, although a greater mixing length improves the representation of deep convective storms.

Relevance: 30.00%

Abstract:

Ground-based remote-sensing observations from Atmospheric Radiation Measurement (ARM) and Cloudnet sites are used to evaluate the clouds predicted by a weather forecasting and climate model. By evaluating the cloud predictions using separate measures for the errors in frequency of occurrence, amount when present, and timing, we provide a detailed assessment of model performance that is relevant to both weather and climate time-scales. Importantly, this methodology will be of great use when attempting to develop a cloud parametrization scheme, as it provides a clearer picture of the current deficiencies in the predicted clouds. Using the Met Office Unified Model, it is shown that when cloud fractions produced by a diagnostic and a prognostic cloud scheme are compared, the prognostic cloud scheme shows improvements to the biases in frequency of occurrence of low, medium and high cloud and to the frequency distributions of cloud amount when cloud is present. The mean cloud profiles are generally improved, although it is shown that in some cases the diagnostic scheme produced misleadingly good mean profiles as a result of compensating errors in frequency of occurrence and amount when present. Some biases remain when using the prognostic scheme, notably the underprediction of mean ice cloud fraction because the amount when present is too low, and the overprediction of mean liquid cloud fraction because the frequency of occurrence is too high.
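
A sketch of the error decomposition described above, on synthetic cloud fraction series; the threshold defining "present" is an arbitrary choice here:

```python
import numpy as np

# Decompose a cloud fraction time series into frequency of occurrence
# (how often cloud is present) and amount when present (mean fraction on
# cloudy occasions). Both series below are synthetic.
rng = np.random.default_rng(4)
obs = np.clip(rng.normal(0.3, 0.3, 1000), 0, 1)
model = np.clip(rng.normal(0.4, 0.25, 1000), 0, 1)

def decompose(cf, thresh=0.05):
    present = cf > thresh
    return present.mean(), cf[present].mean()

for name, cf in (("obs", obs), ("model", model)):
    freq, amount = decompose(cf)
    print(f"{name}: frequency of occurrence {freq:.2f}, amount when present {amount:.2f}")
```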