52 results for Teaching Evaluation and Assessment
Abstract:
In this paper, we propose a scenario framework that could provide a scenario “thread” through the different climate research communities (climate change; vulnerability, impact, and adaptation (VIA); and mitigation) in order to support assessment of mitigation and adaptation strategies and other VIA challenges. The scenario framework is organised around a matrix with two main axes: radiative forcing levels and socio-economic conditions. The radiative forcing levels (and the associated climate signal) are described by the new Representative Concentration Pathways. The second axis, socio-economic developments, comprises elements that affect the capacity for mitigation and adaptation, as well as the exposure to climate impacts. The proposed scenarios derived from this framework are limited in number, allow for comparison across various mitigation and adaptation levels, address a range of vulnerability characteristics, provide information across climate forcing and vulnerability states, and span a full century time scale. Assessments based on the proposed scenario framework would strengthen cooperation between integrated-assessment modelers, climate modelers, and vulnerability, impact and adaptation researchers, and most importantly, facilitate the development of more consistent and comparable research within and across communities.
Abstract:
An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models using comprehensive observations in the region of the European Alps. Four limited-area models and one variable-resolution global model are considered, all with a grid spacing of 50 km. The 15-year integrations were forced by reanalyses and observed sea surface temperature and sea ice (the global model by sea surface conditions only). The observational reference is based on 6400 rain gauge records (10–50 stations per grid box). Evaluation statistics encompass mean precipitation, wet-day frequency, precipitation intensity, and quantiles of the frequency distribution. For mean precipitation, the models reproduce the characteristics of the annual cycle and the spatial distribution. The domain-mean bias varies between −23% and +3% in winter and between −27% and −5% in summer. Larger errors are found for other statistics. In summer, all models underestimate precipitation intensity (by 16–42%) and the frequency of heavy events is too low. This bias reflects too-dry summer mean conditions in three of the models, while it is partly compensated by too many low-intensity events in the other two models. Similar intermodel differences are found for other European subregions. Interestingly, the model errors are very similar between the two models with the same dynamical core (but different parameterizations), and they differ considerably between the two models with similar parameterizations (but different dynamics). Despite considerable biases, the models reproduce prominent mesoscale features of heavy precipitation, which is a promising result for their use in climate change downscaling over complex topography.
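As a concrete illustration of the evaluation statistics named above (mean precipitation, wet-day frequency, wet-day intensity, quantiles, and relative bias), here is a minimal sketch for a single grid-box series. The 1 mm/day wet-day threshold is an assumed convention, not stated in the abstract, and the function names are illustrative only.

```python
import numpy as np

def precip_statistics(daily_mm, wet_threshold=1.0):
    """Basic daily-precipitation statistics for one grid box.

    daily_mm: sequence of daily precipitation totals (mm/day).
    wet_threshold: minimum total counted as a "wet day" (assumed 1 mm/day).
    """
    daily_mm = np.asarray(daily_mm, dtype=float)
    wet = daily_mm >= wet_threshold
    return {
        "mean": daily_mm.mean(),                  # mean precipitation
        "wet_day_frequency": wet.mean(),          # fraction of wet days
        "intensity": daily_mm[wet].mean() if wet.any() else 0.0,  # mean on wet days
        "q90": np.quantile(daily_mm, 0.9),        # upper quantile of the distribution
    }

def relative_bias_percent(model_mean, obs_mean):
    """Domain-mean bias expressed as a percentage of the observed mean."""
    return 100.0 * (model_mean - obs_mean) / obs_mean
```

With these definitions, a model whose domain mean is 7.7 mm/day against an observed 10 mm/day has a bias of −23%, matching the scale of the winter biases quoted above.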
Abstract:
The traditional plate count method and two real-time PCR systems, based on SYBR Green I and TaqMan technologies using a specific primer pair and probe for amplification of the iap-gene, were used for quantitative assay of Listeria monocytogenes in seven-step decimal serial dilution series of nutrient broth and milk samples containing 1.58 to 1.58×10⁷ cfu/ml, and the real-time PCR methods were compared with the plate count method with respect to accuracy and sensitivity. In this study, the plate count method was performed by surface-plating 0.1 ml of each sample on Palcam Agar. The lowest detectable level for this method was 1.58×10 cfu/ml for both nutrient broth and milk samples. Using purified DNA as a template for generation of standard curves, as few as four copies of the iap-gene could be detected per reaction with both real-time PCR assays, indicating that they were highly sensitive. When these real-time PCR assays were applied to quantification of L. monocytogenes in the decimal serial dilution series of nutrient broth and milk samples, 3.16×10 to 3.16×10⁵ copies per reaction (equivalent to 1.58×10³ to 1.58×10⁷ cfu/ml of L. monocytogenes) were detectable. Expressed as logarithmic cycles, the quantitative results of the detectable steps were similar to the inoculation levels for the plate count method and both molecular assays.
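The quantification step described above rests on the usual log-linear real-time PCR standard curve, which relates the threshold cycle (Ct) to the logarithm of the starting copy number. A minimal sketch of fitting and inverting such a curve follows; the function names and the example slope of −3.32 (the textbook value for 100% amplification efficiency) are illustrative assumptions, not values from the study.

```python
import math

def fit_standard_curve(copies, ct_values):
    """Least-squares fit of Ct = slope * log10(copies) + intercept,
    the standard log-linear real-time PCR calibration."""
    xs = [math.log10(c) for c in copies]
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ct_values) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ct_values))
    slope = sxy / sxx
    intercept = mean_y - slope * mean_x
    return slope, intercept

def copies_from_ct(ct, slope, intercept):
    """Invert the fitted standard curve to estimate copies per reaction."""
    return 10 ** ((ct - intercept) / slope)
```

Fitting the curve on the dilution series of purified DNA, then applying `copies_from_ct` to each unknown sample, reproduces the workflow of quantifying copies per reaction and converting to cfu/ml via the known inoculation ratio.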
Abstract:
A manageable, relatively inexpensive model was constructed to predict the loss of nitrogen and phosphorus from a complex catchment to its drainage system. The model used an export coefficient approach, calculating the total nitrogen (N) and total phosphorus (P) load delivered annually to a water body as the sum of the individual loads exported from each nutrient source in its catchment. The export coefficient modelling approach permits scaling up from plot-scale experiments to the catchment scale, allowing application of findings from field experimental studies at a suitable scale for catchment management. The catchment of the River Windrush, a tributary of the River Thames, UK, was selected as the initial study site. The Windrush model predicted nitrogen and phosphorus loading within 2% of observed total nitrogen load and 0.5% of observed total phosphorus load in 1989. The export coefficient modelling approach was then validated by application in a second research basin, the catchment of Slapton Ley, south Devon, which has markedly different catchment hydrology and land use. The Slapton model was calibrated within 2% of observed total nitrogen load and 2.5% of observed total phosphorus load in 1986. Both models proved sensitive to the impact of temporal changes in land use and management on water quality in both catchments, and were therefore used to evaluate the potential impact of proposed pollution control strategies on the nutrient loading delivered to the River Windrush and Slapton Ley.
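The export coefficient calculation described above is, at its core, a weighted sum over nutrient sources. A minimal sketch follows; the land-use classes, coefficient values, and areas are invented for illustration and are not taken from the Windrush or Slapton studies.

```python
def annual_nutrient_load(sources):
    """Total annual load to the water body as the sum of the loads
    exported from each nutrient source in the catchment
    (the export coefficient approach).

    sources: iterable of (export_coefficient kg/ha/yr, area ha) pairs.
    Returns the total load in kg/yr.
    """
    return sum(coef * area for coef, area in sources)

# Hypothetical catchment: coefficients and areas are illustrative only.
total_n = annual_nutrient_load([
    (30.0, 5000.0),   # arable land
    (10.0, 3000.0),   # permanent pasture
    (5.0, 2000.0),    # woodland
])
```

Because each source contributes independently, scenarios of land-use change or pollution control are evaluated simply by editing the coefficient or area of the affected source and recomputing the sum, which is how such models support catchment management.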
Abstract:
The intention of this review is to place crop albedo biogeoengineering in the wider picture of climate manipulation. Crop biogeoengineering is considered within the context of the long-term modification of the land surface for agriculture over several thousand years. Biogeoengineering is also critiqued in relation to other geoengineering schemes in terms of mitigation power and adherence to social principles for geoengineering. Although its impact is small and regional, crop biogeoengineering could be a useful and inexpensive component of an ensemble of geoengineering schemes to provide temperature mitigation. The method should not detrimentally affect food security and there may even be positive impacts on crop productivity, although more laboratory and field research is required in this area to understand the underlying mechanisms.
Abstract:
To bridge the gaps between traditional mesoscale modelling and microscale modelling, the National Center for Atmospheric Research, in collaboration with other agencies and research groups, has developed an integrated urban modelling system coupled to the Weather Research and Forecasting (WRF) model as a community tool to address urban environmental issues. The core of this WRF/urban modelling system consists of the following: (1) three methods with different degrees of freedom to parameterize urban surface processes, ranging from a simple bulk parameterization to a sophisticated multi-layer urban canopy model with an indoor–outdoor exchange sub-model that directly interacts with the atmospheric boundary layer, (2) coupling to fine-scale computational fluid dynamic Reynolds-averaged Navier–Stokes and Large-Eddy simulation models for transport and dispersion (T&D) applications, (3) procedures to incorporate high-resolution urban land use, building morphology, and anthropogenic heating data using the National Urban Database and Access Portal Tool (NUDAPT), and (4) an urbanized high-resolution land data assimilation system. This paper provides an overview of this modelling system; addresses the daunting challenges of initializing the coupled WRF/urban model and of specifying the potentially vast number of parameters required to execute the WRF/urban model; explores the model sensitivity to these urban parameters; and evaluates the ability of WRF/urban to capture urban heat islands, complex boundary-layer structures aloft, and urban plume T&D for several major metropolitan regions. Recent applications of this modelling system illustrate its promising utility, as a regional climate-modelling tool, to investigate impacts of future urbanization on regional meteorological conditions and on air quality under future climate change scenarios.
Abstract:
Advances in the science and observation of climate change are providing a clearer understanding of the inherent variability of Earth’s climate system and its likely response to human and natural influences. The implications of climate change for the environment and society will depend not only on the response of the Earth system to changes in radiative forcings, but also on how humankind responds through changes in technology, economies, lifestyle and policy. Extensive uncertainties exist in future forcings of and responses to climate change, necessitating the use of scenarios of the future to explore the potential consequences of different response options. To date, such scenarios have not adequately examined crucial possibilities, such as climate change mitigation and adaptation, and have relied on research processes that slowed the exchange of information among physical, biological and social scientists. Here we describe a new process for creating plausible scenarios to investigate some of the most challenging and important questions about climate change confronting the global community.
Abstract:
A statistical–dynamical downscaling (SDD) approach for the regionalization of wind energy output (Eout) over Europe with special focus on Germany is proposed. SDD uses an extended circulation weather type (CWT) analysis on global daily mean sea level pressure fields with the central point being located over Germany. Seventy-seven weather classes based on the associated CWT and the intensity of the geostrophic flow are identified. Representatives of these classes are dynamically downscaled with the regional climate model COSMO-CLM. By using weather class frequencies of different data sets, the simulated representatives are recombined to probability density functions (PDFs) of near-surface wind speed and finally to Eout of a sample wind turbine for present and future climate. This is performed for reanalysis, decadal hindcasts and long-term future projections. For evaluation purposes, results of SDD are compared to wind observations and to simulated Eout of purely dynamical downscaling (DD) methods. For the present climate, SDD is able to simulate realistic PDFs of 10-m wind speed for most stations in Germany. The resulting spatial Eout patterns are similar to DD-simulated Eout. In terms of decadal hindcasts, results of SDD are similar to DD-simulated Eout over Germany, Poland, the Czech Republic, and Benelux, for which high correlations between annual Eout time series of SDD and DD are detected for selected hindcasts. Lower correlation is found for other European countries. It is demonstrated that SDD can be used to downscale the full ensemble of the Earth System Model of the Max Planck Institute (MPI-ESM) decadal prediction system. Long-term climate change projections in Special Report on Emission Scenarios of ECHAM5/MPI-OM as obtained by SDD agree well with the results of other studies using DD methods, with increasing Eout over northern Europe and a negative trend over southern Europe. Despite some biases, it is concluded that SDD is an adequate tool to assess regional wind energy changes in large model ensembles.
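The recombination step described above amounts to weighting each weather class's downscaled wind-speed PDF by that class's frequency in the driving data set, then integrating a turbine power curve over the mixture. A minimal discretized sketch follows; the class counts, bin values, and power curve are invented for illustration.

```python
import numpy as np

def recombine_pdfs(class_pdfs, class_freqs):
    """Mixture of per-weather-class wind-speed PDFs, weighted by the
    class frequencies diagnosed from the large-scale data set.

    class_pdfs: array (n_classes, n_bins), each row a discretized PDF.
    class_freqs: array (n_classes,), relative class frequencies.
    """
    class_pdfs = np.asarray(class_pdfs, dtype=float)
    w = np.asarray(class_freqs, dtype=float)
    w = w / w.sum()            # normalize frequencies to weights
    return w @ class_pdfs      # weighted sum over classes

def expected_power(pdf, wind_bins, power_curve):
    """Expected turbine output: integrate the power curve over the PDF."""
    return float(np.sum(pdf * power_curve(np.asarray(wind_bins))))
```

Present and future Eout then differ only through the class frequencies supplied (reanalysis, hindcast, or projection), while the dynamically downscaled representatives, and hence `class_pdfs`, are computed once.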
Abstract:
This paper evaluates the current status of global modeling of the organic aerosol (OA) in the troposphere and analyzes the differences between models as well as between models and observations. Thirty-one global chemistry transport models (CTMs) and general circulation models (GCMs) have participated in this intercomparison, in the framework of AeroCom phase II. The simulation of OA varies greatly between models in terms of the magnitude of primary emissions, secondary OA (SOA) formation, the number of OA species used (2 to 62), the complexity of OA parameterizations (gas-particle partitioning, chemical aging, multiphase chemistry, aerosol microphysics), and the OA physical, chemical and optical properties. The diversity of the global OA simulation results has increased since earlier AeroCom experiments, mainly due to the increasing complexity of the SOA parameterization in models, and the implementation of new, highly uncertain, OA sources. Diversity of over one order of magnitude exists in the modeled vertical distribution of OA concentrations, a spread that deserves a dedicated future study. Furthermore, although the OA/OC ratio depends on OA sources and atmospheric processing, and is important for model evaluation against OA and OC observations, it is resolved only by a few global models. The median global primary OA (POA) source strength is 56 Tg a⁻¹ (range 34–144 Tg a⁻¹) and the median SOA source strength (natural and anthropogenic) is 19 Tg a⁻¹ (range 13–121 Tg a⁻¹). Among the models that take into account the semi-volatile nature of SOA, the median source is calculated to be 51 Tg a⁻¹ (range 16–121 Tg a⁻¹), much larger than the median value of the models that calculate SOA in a more simplistic way (19 Tg a⁻¹; range 13–20 Tg a⁻¹, with one model at 37 Tg a⁻¹). The median atmospheric burden of OA is 1.4 Tg (24 models in the range of 0.6–2.0 Tg and 4 between 2.0 and 3.8 Tg), with a median OA lifetime of 5.4 days (range 3.8–9.6 days).
In models that reported both OA and sulfate burdens, the median value of the OA/sulfate burden ratio is calculated to be 0.77; 13 models calculate a ratio lower than 1, and 9 models higher than 1. For 26 models that reported OA deposition fluxes, the median wet removal is 70 Tg a⁻¹ (range 28–209 Tg a⁻¹), which is on average 85% of the total OA deposition. Fine aerosol organic carbon (OC) and OA observations from continuous monitoring networks and individual field campaigns have been used for model evaluation. At urban locations, the model–observation comparison indicates missing knowledge on anthropogenic OA sources, both strength and seasonality. The combined model–measurement analysis suggests the existence of increased OA levels during summer due to biogenic SOA formation over large areas of the USA that can be of the same order of magnitude as the POA, even at urban locations, and contribute to the measured urban seasonal pattern. Global models are able to simulate the high secondary character of OA observed in the atmosphere as a result of SOA formation and POA aging, although the amount of OA present in the atmosphere remains largely underestimated, with a mean normalized bias (MNB) equal to −0.62 (−0.51) based on the comparison against OC (OA) urban data of all models at the surface, −0.15 (+0.51) when compared with remote measurements, and −0.30 for marine locations with OC data. The mean temporal correlations across all stations are low when compared with OC (OA) measurements: 0.47 (0.52) for urban stations, 0.39 (0.37) for remote stations, and 0.25 for marine stations with OC data. The combination of high (negative) MNB and higher correlation at urban stations when compared with the low MNB and lower correlation at remote sites suggests that knowledge about the processes that govern aerosol processing, transport and removal, in addition to their sources, is important at the remote stations.
There is no clear change in model skill with increasing model complexity with regard to OC or OA mass concentration. However, this complexity is needed in models in order to distinguish between anthropogenic and natural OA, as required for climate mitigation purposes, and to calculate the impact of OA on climate accurately.
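The mean normalized bias quoted above can be sketched as follows. This is the conventional MNB definition (the mean over stations of the relative model error); the abstract itself does not spell out the formula, so treat this as an assumption.

```python
def mean_normalized_bias(model, obs):
    """MNB = mean over stations of (model - obs) / obs.

    A value of -1 means the model simulates zero mass everywhere;
    0 means no bias on average; positive values mean overestimation.
    """
    pairs = list(zip(model, obs))
    return sum((m - o) / o for m, o in pairs) / len(pairs)
```

Under this definition, an MNB of −0.62 against urban OC data corresponds to the models underestimating observed concentrations by roughly 62% on average.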
Abstract:
A millimetre-wave scintillometer was paired with an infrared scintillometer, enabling estimation of large-area evapotranspiration across northern Swindon, a suburban area in the UK. Both sensible and latent heat fluxes can be obtained using this "two-wavelength" technique, as it is able to provide both temperature and humidity structure parameters, offering a major advantage over conventional single-wavelength scintillometry. The first paper of this two-part series presented the measurement theory and structure parameters. In this second paper, heat fluxes are obtained and analysed. These fluxes, estimated using two-wavelength scintillometry over an urban area, are the first of their kind. Source area modelling suggests the scintillometric fluxes are representative of 5–10 km². For comparison, local-scale (0.05–0.5 km²) fluxes were measured by an eddy covariance station. Similar responses to seasonal changes are evident at the different scales but the energy partitioning varies between source areas. The response to moisture availability is explored using data from 2 consecutive years with contrasting rainfall patterns (2011–2012). This extensive data set offers insight into urban surface-atmosphere interactions and demonstrates the potential for two-wavelength scintillometry to deliver fluxes over mixed land cover, typically representative of an area 1–2 orders of magnitude greater than for eddy covariance measurements. Fluxes at this scale are extremely valuable for hydro-meteorological model evaluation and assessment of satellite data products.
Abstract:
We investigated how adult readers evaluate and revise their situation model during reading by monitoring their eye movements as they read narrative texts and subsequent critical sentences. In each narrative text, a short introduction primed a knowledge-based inference, followed by a target concept that was either expected (e.g., “oven”) or unexpected (e.g., “grill”) in relation to the inferred concept. Eye movements showed that readers detected a mismatch between the new unexpected information and their prior interpretation, confirming their ability to evaluate inferential information. Just below the narrative text, a critical sentence included a target word that was either congruent (e.g., “roasted”) or incongruent (e.g., “barbecued”) with the expected but not the unexpected concept. Readers spent less time reading the congruent than the incongruent target word, reflecting the facilitation of prior information. In addition, when the unexpected (but not expected) concept had been presented, participants with lower verbal (but not visuospatial) working memory span exhibited longer reading times and made more regressions (from the critical sentence to previous information) on encountering congruent information, indicating difficulty in inhibiting their initial incorrect interpretation and revising their situation model.