819 results for Observational Methodology
Abstract:
The greenhouse effect of clouds may be quantified as the difference between outgoing longwave radiation (OLR) and its clear-sky component (OLRc). Clear-sky measurements from satellites preferentially sample drier, more stable conditions relative to the monthly-mean state. The resulting observational bias is evident when OLRc is stratified by vertical motion; differences from climate-model OLRc of 15 W m−2 occur over warm regions of strong ascent. Using data from the ECMWF 40-year reanalysis, an estimate of the cloud longwave radiative effect is made which is directly comparable with standard climate model diagnostics. The impact of this methodology on the cancellation of cloud longwave and shortwave radiative forcing in the tropics is estimated.
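The diagnostic described above reduces to a simple difference of fluxes. A minimal sketch of that calculation, with made-up flux values (the function name and numbers are illustrative, not from the paper):

```python
# Longwave cloud radiative effect as clear-sky minus all-sky OLR (a sketch).
import numpy as np

def longwave_cloud_effect(olr_clear, olr_all_sky):
    """Cloud longwave radiative effect, LWCRE = OLRc - OLR (W m-2)."""
    return np.asarray(olr_clear) - np.asarray(olr_all_sky)

# Illustrative (made-up) monthly-mean values over a tropical ascent region.
olr_clear = np.array([295.0, 290.0, 288.0])   # clear-sky OLR, W m-2
olr_all   = np.array([240.0, 235.0, 230.0])   # all-sky OLR, W m-2
print(longwave_cloud_effect(olr_clear, olr_all))  # ~55-58 W m-2
```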
Abstract:
We describe a new methodology for comparing satellite radiation budget data with a numerical weather prediction (NWP) model. This is applied to data from the Geostationary Earth Radiation Budget (GERB) instrument on Meteosat-8. The methodology brings together, in near-real time, GERB broadband shortwave and longwave fluxes with simulations based on analyses produced by the Met Office global NWP model. Results for the period May 2003 to February 2005 illustrate the progressive improvements in the data products as various initial problems were resolved. In most areas the comparisons reveal systematic errors in the model's representation of surface properties and clouds, which are discussed elsewhere. However, for clear-sky regions over the oceans the model simulations are believed to be sufficiently accurate to allow the quality of the GERB fluxes themselves to be assessed and any changes over time in the instrument's performance to be identified. Using model and radiosonde profiles of temperature and humidity as input to a single-column version of the model's radiation code, we conduct sensitivity experiments which provide estimates of the expected model errors over the ocean of about ±5–10 W m−2 in clear-sky outgoing longwave radiation (OLR) and ±0.01 in clear-sky albedo. For the more recent data the differences between the observed and modeled OLR and albedo are well within these error estimates. The close agreement between the observed and modeled values, particularly for the most recent period, illustrates the value of the methodology. It also contributes to the validation of the GERB products and increases confidence in the quality of the data prior to their release.
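The sensitivity experiments suggest a way of budgeting an expected model error from individual sources of uncertainty. A minimal sketch, assuming (purely for illustration) that independent contributions combine in quadrature; the categories and magnitudes are invented, not the Met Office results:

```python
# Combine assumed, independent error contributions to clear-sky OLR in
# quadrature to obtain an overall expected model error (illustrative only).
import math

contributions = {
    "SST uncertainty": 2.0,                 # W m-2, hypothetical
    "temperature profile": 3.0,             # W m-2, hypothetical
    "humidity profile": 4.0,                # W m-2, hypothetical
    "radiation-code approximations": 2.5,   # W m-2, hypothetical
}

total = math.sqrt(sum(v ** 2 for v in contributions.values()))
print(f"expected clear-sky OLR error ~ +/- {total:.1f} W m-2")  # ~5.9, within the 5-10 range quoted
```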
Abstract:
The uptake of metals by earthworms occurs predominantly via the soil pore water, or via an uptake route related to the soil pore water metal concentration. However, it has been suggested that the speciation of the metal is also important. A novel technique is described which exposes Eisenia andrei Bouché to contaminant-bearing solutions in which the chemical factors affecting speciation may be individually and systematically manipulated. In a preliminary experiment, the LC50 for copper nitrate was 0.046 mg l−1 (95% confidence interval: 0.03–0.07 mg l−1). There was a significant positive correlation between earthworm mortality and bulk copper concentration in solution (R² = 0.88, P ≤ 0.001), and a significant positive increase in earthworm tissue copper concentration with increasing copper concentration in solution (R² = 0.97, P ≤ 0.001). It is anticipated that quantifying the effect of soil solution chemical speciation on copper bioavailability will be a valuable aid to understanding the importance of chemical composition and metal speciation in the calculation of toxicological parameters.
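The LC50 quoted above is the kind of quantity commonly obtained by fitting a dose-response curve to mortality data. A minimal sketch assuming a two-parameter logistic model; the concentrations and mortality fractions are made up, and this is not the study's own analysis:

```python
# Estimate an LC50 by fitting a logistic dose-response curve to mortality data.
import numpy as np
from scipy.optimize import curve_fit

def mortality(logc, log_lc50, slope):
    """Fraction killed as a logistic function of log10 concentration."""
    return 1.0 / (1.0 + np.exp(-slope * (logc - log_lc50)))

# Illustrative (made-up) exposure data: copper in mg/l and fraction of worms killed.
conc = np.array([0.01, 0.02, 0.04, 0.08, 0.16])
dead_frac = np.array([0.05, 0.20, 0.45, 0.80, 0.95])

popt, pcov = curve_fit(mortality, np.log10(conc), dead_frac, p0=[np.log10(0.05), 3.0])
lc50 = 10 ** popt[0]
print(f"estimated LC50 ~ {lc50:.3f} mg/l")   # ~0.04-0.05 mg/l for these made-up numbers
```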
Abstract:
Crop irrigation has long been recognized as having been important for the evolution of social complexity in several parts of the world. Structural evidence for water management, such as wells, ditches and dams, is often difficult to interpret and may be a poor indicator of past irrigation that may have had no need for such constructions. It would therefore be of considerable value to be able to infer past irrigation directly from archaeobotanical remains, especially the types of archaeobotanical remains that are relatively abundant in the archaeological record, such as phytoliths. Building on the pioneering work of Rosen and Weiner (1994), this paper describes a crop-growing experiment designed to explore the impact of irrigation on the formation of phytoliths within cereals. If it can be shown that a systematic and consistent relationship exists between phytolith size, structure and the intensity of irrigation, and if various taphonomic and palaeoenvironmental processes can be controlled for, then past irrigation can feasibly be inferred from the phytoliths recovered from the archaeological record.
Abstract:
The conceptual and parameter uncertainty of the semi-distributed INCA-N (Integrated Nutrients in Catchments-Nitrogen) model was studied using the GLUE (Generalized Likelihood Uncertainty Estimation) methodology combined with quantitative experimental knowledge, an approach known as 'soft data'. Cumulative inorganic N leaching, annual plant N uptake and annual mineralization proved to be useful soft data for constraining the parameter space. The INCA-N model was able to simulate the seasonal and inter-annual variations in stream-water nitrate concentrations, although the lowest concentrations during the growing season were not reproduced. This suggested that there were retention processes or losses, either in peatland/wetland areas or in the river, that were not included in the INCA-N model. The results suggested that soft data offer a way to reduce parameter equifinality, and that the calibration and testing of distributed hydrological and nutrient leaching models should be based on runoff and/or nutrient concentration data as well as on the qualitative knowledge of experimentalists.
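A minimal sketch of the GLUE idea with a soft-data constraint, using a toy stand-in for INCA-N: parameter sets are sampled from prior ranges, scored against 'observed' nitrate with a Nash-Sutcliffe efficiency, and retained only if a soft-data quantity (here, annual N uptake) also falls within a plausible range. All names, ranges and thresholds are illustrative assumptions:

```python
# GLUE-style sampling with an additional soft-data acceptance criterion.
import numpy as np

rng = np.random.default_rng(0)

def run_model(params):
    """Toy stand-in for an INCA-N run: returns (simulated nitrate, annual N uptake)."""
    a, b = params
    t = np.arange(12)
    return a + b * np.cos(2 * np.pi * t / 12), 40.0 * a

obs_nitrate = 2.0 + 1.5 * np.cos(2 * np.pi * np.arange(12) / 12)   # synthetic "observations"
uptake_range = (60.0, 120.0)          # soft data: plausible annual N uptake, kg N ha-1 yr-1

behavioural = []
for _ in range(5000):
    params = rng.uniform([0.5, 0.5], [4.0, 3.0])       # sample from prior parameter ranges
    sim, uptake = run_model(params)
    nse = 1.0 - np.sum((sim - obs_nitrate) ** 2) / np.sum((obs_nitrate - obs_nitrate.mean()) ** 2)
    if nse > 0.7 and uptake_range[0] <= uptake <= uptake_range[1]:
        behavioural.append((params, nse))              # keep only behavioural parameter sets

print(f"{len(behavioural)} behavioural parameter sets retained")
```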
Abstract:
This paper investigates the impact of aerosol forcing uncertainty on the robustness of estimates of the twentieth-century warming attributable to anthropogenic greenhouse gas emissions. Attribution analyses on three coupled climate models with very different sensitivities and aerosol forcing are carried out. The Third Hadley Centre Coupled Ocean-Atmosphere GCM (HadCM3), Parallel Climate Model (PCM), and GFDL R30 models all provide good simulations of twentieth-century global mean temperature changes when they include both anthropogenic and natural forcings. Such good agreement could result from a fortuitous cancellation of errors, for example, by balancing too much (or too little) greenhouse warming by too much (or too little) aerosol cooling. Despite a very large uncertainty in estimates of the possible range of sulfate aerosol forcing obtained from measurement campaigns, results show that the spatial and temporal nature of observed twentieth-century temperature change constrains the component of past warming attributable to anthropogenic greenhouse gases to be significantly greater (at the 5% level) than the observed warming over the twentieth century. The cooling effects of aerosols are detected in all three models. Both spatial and temporal aspects of observed temperature change are responsible for constraining the relative roles of greenhouse warming and sulfate cooling over the twentieth century. This is because there are distinctive temporal structures in differential warming rates between the hemispheres, between land and ocean, and between mid- and low latitudes. As a result, consistent estimates of warming attributable to greenhouse gas emissions are obtained from all three models, and predictions are relatively robust to the use of more or less sensitive models. The transient climate response following a 1% yr−1 increase in CO2 is estimated to lie between 2.2 and 4 K century−1 (5–95th percentiles).
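Attribution studies of this kind regress observed change onto model-simulated response patterns and interpret the scaling factors as attributable contributions. A minimal, heavily simplified sketch using ordinary least squares on synthetic patterns (the paper uses optimal fingerprinting with estimated internal variability, which this does not reproduce):

```python
# Regress an "observed" change pattern onto GHG and aerosol response patterns;
# the fitted scaling factors give the contribution attributable to each forcing.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                    # space-time points (synthetic)

ghg_pattern = rng.normal(0.0, 1.0, n)      # stand-in model GHG response pattern
aer_pattern = rng.normal(0.0, 0.5, n)      # stand-in model aerosol response pattern
obs = 1.1 * ghg_pattern + 0.9 * aer_pattern + rng.normal(0.0, 0.3, n)  # synthetic observations

X = np.column_stack([ghg_pattern, aer_pattern])
beta, *_ = np.linalg.lstsq(X, obs, rcond=None)
print("scaling factors (GHG, aerosol):", np.round(beta, 2))
# attributable GHG warming = beta[0] times the model's GHG-induced warming
```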
Abstract:
We present the first observational evidence of the near-Sun distortion of the leading edge of a coronal mass ejection (CME) by the ambient solar wind into a concave structure. On 2007 November 14, a CME with a circular cross section was observed by coronagraphs onboard the STEREO-B spacecraft. Subsequently the CME passed through the field of view of the STEREO-B Heliospheric Imagers, where its leading edge was observed to distort into an increasingly concave structure. The CME observations are compared with an analytical flux rope model constrained by a magnetohydrodynamic solar wind solution. The resultant bimodal speed profile is used to kinematically distort a circular structure that replicates the initial shape of the CME. The CME morphology is found to change rapidly over a relatively short distance. This indicates an approximate radial distance in the heliosphere at which solar wind forces begin to dominate over the magnetic forces of the CME in shaping it.
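A minimal sketch of the kinematic-distortion idea: advect an initially circular leading edge with a latitude-dependent, bimodal solar-wind speed profile and watch the nose fall behind the flanks. The speed profile, distances and time step are illustrative assumptions, not the authors' constrained solution:

```python
# Kinematically distort a circular front with a bimodal wind speed profile.
import numpy as np

theta = np.linspace(-np.pi / 3, np.pi / 3, 181)        # position angle along the front
r0 = 10.0                                              # initial radial distance (solar radii)
x0, y0 = r0 * np.cos(theta), r0 * np.sin(theta)        # circular leading edge

def wind_speed(lat):
    """Toy bimodal profile: ~350 km/s slow wind at the equator, faster at the flanks."""
    return 350.0 + 300.0 * np.abs(np.sin(lat)) ** 2

dt_hours = 24.0
dr = wind_speed(theta) * dt_hours * 3600.0 / 6.96e5    # km travelled -> solar radii
x1, y1 = (r0 + dr) * np.cos(theta), (r0 + dr) * np.sin(theta)  # distorted front

# The slower equatorial wind leaves the nose behind the flanks: a concave front.
print(f"nose at {r0 + dr[90]:.1f} Rs, flanks at {r0 + dr[0]:.1f} Rs")
```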
Abstract:
From April 2010, the General Pharmaceutical Council (GPhC) will be responsible for the statutory regulation of pharmacists and pharmacy technicians in Great Britain (GB).[1] All statutorily regulated health professionals will need to periodically demonstrate their fitness-to-practise through a process of revalidation.[2] One option being considered in GB is that continuing professional development (CPD) records will form part of the evidence submitted for revalidation, similar to the system in New Zealand.[3] At present, under the Royal Pharmaceutical Society of Great Britain (RPSGB) CPD framework, pharmacy professionals must make a minimum of nine CPD entries per annum from 1 March 2009. Our aim was to explore the applicability of new revalidation standards within the current CPD framework. We also wanted to review the content of CPD portfolios to assess their strengths and qualities and to identify any information gaps for the purposes of revalidation.
Abstract:
Chemical and meteorological parameters measured on board the Facility for Airborne Atmospheric Measurements (FAAM) BAe 146 Atmospheric Research Aircraft during the African Monsoon Multidisciplinary Analysis (AMMA) campaign are presented to show the impact of NOx emissions from recently wetted soils in West Africa. NO emissions from soils have previously been observed in many geographical areas with different types of soil/vegetation cover during small-scale studies, and have been inferred at large scales from satellite measurements of NOx. This study is the first dedicated to showing the emissions of NOx at an intermediate scale between local surface sites and continental satellite measurements. The measurements reveal pronounced mesoscale variations in NOx concentrations closely linked to spatial patterns of antecedent rainfall. Fluxes required to maintain the NOx concentrations observed by the BAe-146 in a number of case studies, and for a range of assumed OH concentrations (1×10⁶ to 1×10⁷ molecules cm−3), are calculated to be in the range 8.4 to 36.1 ng N m−2 s−1. These values are comparable to the range of fluxes, 0.5 to 28 ng N m−2 s−1, reported from small-scale field studies in a variety of non-nutrient-rich tropical and subtropical locations in the review of Davidson and Kingerlee (1997). The fluxes calculated in the present study have been scaled up to cover the area of the Sahel bounded by 10–20° N and 10° E to 20° W, giving an estimated emission of 0.03 to 0.30 Tg N from this area for July and August 2006. The observed chemical data also suggest that the NOx emitted from soils is taking part in ozone formation, as ozone concentrations exhibit similar fine-scale structure to the NOx, with enhancements over the wet soils. Such variability cannot be explained on the basis of transport from other areas. A companion paper, Delon et al. (2008), models the impact of soil NOx emissions on NOx and ozone concentrations over West Africa during AMMA; it employs an artificial neural network to define the emissions of NOx from soils, integrated into a coupled chemistry-dynamics model, and compares the results to the observed data presented in this paper. Here we compare fluxes deduced from the observed data with the model-derived values from Delon et al. (2008).
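A back-of-envelope sketch of the kind of steady-state flux estimate described above: emissions are assumed to balance the loss of NO2 to HNO3 via reaction with OH through the depth of the boundary layer, and the result is then scaled over the Sahel box. The NO2 mixing ratio, boundary-layer depth, rate constant and area are assumed values for illustration, not the paper's inputs:

```python
# Steady-state soil NOx flux estimate and order-of-magnitude regional scale-up.
AVOGADRO = 6.022e23

def flux_ngN_m2_s(no2_ppbv, oh_cm3, blh_m=1000.0, k_oh_no2=1.0e-11, nair=2.5e19):
    """Flux (ng N m-2 s-1) balancing OH + NO2 loss through a boundary layer of depth blh_m."""
    no2_cm3 = no2_ppbv * 1e-9 * nair                    # NO2 number density, molecules cm-3
    loss = k_oh_no2 * oh_cm3 * no2_cm3                  # loss rate, molecules cm-3 s-1
    column_loss = loss * blh_m * 100.0                  # column loss, molecules cm-2 s-1
    return column_loss * (14.0 / AVOGADRO) * 1e9 * 1e4  # convert to ng N m-2 s-1

for oh in (1e6, 1e7):                                   # OH range used in the paper
    print(f"OH={oh:.0e}: flux ~ {flux_ngN_m2_s(0.5, oh):.1f} ng N m-2 s-1")  # ~3 to ~29

# Order-of-magnitude scale-up over a 10-20 N, 10 E-20 W box for ~60 days,
# ignoring the fraction of the box actually emitting:
area_m2 = 1.1e6 * 3.3e6                                 # ~1100 km x ~3300 km
seconds = 60 * 86400
print(f"~{20e-9 * area_m2 * seconds * 1e-12:.2f} Tg N for a 20 ng N m-2 s-1 flux")
```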
Abstract:
We advocate the use of systolic design techniques to create custom hardware for Custom Computing Machines. We have developed a hardware genetic algorithm based on systolic arrays to illustrate the feasibility of the approach. The architecture is independent of the lengths of chromosomes used and can be scaled in size to accommodate different population sizes. An FPGA prototype design can process 16 million genes per second.
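For orientation, a minimal software sketch of the per-generation operations (selection, crossover, mutation) that such a systolic-array design would pipeline one gene per clock cycle; the objective, population size and rates are arbitrary, and the sketch says nothing about the hardware architecture itself:

```python
# A toy software genetic algorithm: the same operations a systolic GA pipelines.
import random

GENES, POP = 32, 64                                    # chromosome length, population size
rng = random.Random(0)

def fitness(chrom):                                    # toy objective: count of 1-bits
    return sum(chrom)

pop = [[rng.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]

for generation in range(50):
    def pick():                                        # binary tournament selection
        a, b = rng.sample(pop, 2)
        return a if fitness(a) >= fitness(b) else b
    new_pop = []
    while len(new_pop) < POP:
        p1, p2 = pick(), pick()
        cut = rng.randrange(1, GENES)                  # single-point crossover
        child = p1[:cut] + p2[cut:]
        child = [g ^ (rng.random() < 0.01) for g in child]   # bit-flip mutation
        new_pop.append(child)
    pop = new_pop

print("best fitness after 50 generations:", max(fitness(c) for c in pop))
```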