38 results for Adenylates as Adenosine 5'-Triphosphate, standard deviation


Relevance:

100.00%

Publisher:

Abstract:

Within the SPARC Data Initiative, the first comprehensive assessment of the quality of 13 water vapor products from 11 limb-viewing satellite instruments (LIMS, SAGE II, UARS-MLS, HALOE, POAM III, SMR, SAGE III, MIPAS, SCIAMACHY, ACE-FTS, and Aura-MLS) obtained within the time period 1978-2010 has been performed. Each instrument's water vapor profile measurements were compiled into monthly zonal mean time series on a common latitude-pressure grid. These time series serve as the basis for the "climatological" validation approach used within the project. The evaluations include comparisons of monthly or annual zonal mean cross sections and seasonal cycles in the tropical and extratropical upper troposphere and lower stratosphere averaged over one or more years, comparisons of interannual variability, and a study of the time evolution of physical features in water vapor such as the tropical tape recorder and polar vortex dehydration. Our knowledge of the atmospheric mean state in water vapor is best in the lower and middle stratosphere of the tropics and midlatitudes, with a relative uncertainty of +/- 2-6% (as quantified by the standard deviation of the instruments' multiannual means). The uncertainty increases toward the polar regions (+/- 10-15%), the mesosphere (+/- 15%), and the upper troposphere/lower stratosphere below 100 hPa (+/- 30-50%), where sampling issues add uncertainty due to large gradients and high natural variability in water vapor. The minimum found in multiannual (1998-2008) mean water vapor in the tropical lower stratosphere is 3.5 ppmv (+/- 14%), with slightly larger uncertainties for monthly mean values. The frequently used HALOE water vapor data set shows consistently lower values than most other data sets throughout the atmosphere, with increasing deviations from the multi-instrument mean below 100 hPa in both the tropics and extratropics.
The knowledge gained from these comparisons and regarding the quality of the individual data sets in different regions of the atmosphere will help to improve model-measurement comparisons (e.g., for diagnostics such as the tropical tape recorder or seasonal cycles), data merging activities, and studies of climate variability.
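The relative uncertainty used in this abstract, the standard deviation of the instruments' multiannual means expressed relative to their mean, can be sketched as below; the instrument values are invented for illustration and are not taken from the assessment.

```python
from statistics import mean, stdev

def relative_uncertainty_pct(instrument_means):
    """Relative uncertainty (%) of a multi-instrument set of multiannual
    means: the standard deviation across instruments divided by their mean."""
    return 100.0 * stdev(instrument_means) / mean(instrument_means)

# Hypothetical multiannual-mean water vapor values (ppmv) reported by
# five instruments for the tropical lower stratosphere:
wv_means = [3.4, 3.5, 3.6, 3.5, 3.45]
print(f"{relative_uncertainty_pct(wv_means):.1f}%")
```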

Relevance:

100.00%

Publisher:

Abstract:

Resilience of rice cropping systems to potential global climate change will partly depend on the temperature tolerance of pollen germination (PG) and pollen tube growth (PTG). Germination of pollen of the high-temperature-susceptible Oryza glaberrima Steud. (cv. CG14) and O. sativa L. ssp. indica (cv. IR64) and the high-temperature-tolerant O. sativa ssp. aus (cv. N22) was assessed on a 5.6-45.4°C temperature gradient system. Mean maximum PG was 85% at 27°C, with 1488 μm PTG at 25°C. The hypothesis that in each pollen grain the minimum temperature requirement (Tn) and maximum temperature limit (Tx) for germination operate independently was accepted by comparing multiplicative and subtractive probability models. The maximum temperature limit for PG in 50% of grains (Tx(50)) was lowest (29.8°C) in IR64, compared with CG14 (34.3°C) and N22 (35.6°C). The standard deviation (sx) of Tx was also low in IR64 (2.3°C), suggesting that the mechanism of IR64's susceptibility to high temperatures may relate to PG. Optimum germination temperatures and thermal times for 1 mm PTG were not linked to tolerance of high temperatures at anthesis. However, the parameters Tx(50) and sx in the germination model define new pragmatic criteria for successful and resilient PG, preferable to the more traditional cardinal (maximum and minimum) temperatures.
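The competing probability models can be sketched with assumed normal distributions for the per-grain temperature limits; the Tx parameters below echo the Tx(50) and sx values reported for IR64, while the Tn distribution and the exact model forms are illustrative assumptions, not the fitted models of the study.

```python
from statistics import NormalDist

# Assumed per-grain temperature limits: Tx uses the Tx(50) and sx values
# reported for IR64; the Tn distribution is hypothetical.
TN = NormalDist(mu=14.0, sigma=2.0)    # minimum temperature requirement (°C)
TX = NormalDist(mu=29.8, sigma=2.3)    # maximum temperature limit (°C)

def pg_multiplicative(t):
    """Independent-limits model: a grain germinates at temperature t if
    t exceeds its Tn AND stays below its Tx, the two limits independent."""
    return TN.cdf(t) * (1.0 - TX.cdf(t))

def pg_subtractive(t):
    """Dependent alternative: the germinating fraction is the fraction of
    grains above their Tn minus the fraction already past their Tx."""
    return max(0.0, TN.cdf(t) - TX.cdf(t))
```

Comparing how well each curve fits observed germination fractions along the temperature gradient is what lets the independence hypothesis be accepted or rejected.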

Relevance:

100.00%

Publisher:

Abstract:

A fast, simple climate modelling approach is developed for predicting and helping to understand general circulation model (GCM) simulations. We show that the simple model accurately reproduces GCM results for global mean surface air temperature change and global mean heat uptake projections from 9 GCMs in the fifth Coupled Model Intercomparison Project (CMIP5). This implies that understanding gained from idealised CO2 step experiments is applicable to policy-relevant scenario projections. Our approach is conceptually simple: it works by using the climate response to a CO2 step change taken directly from a GCM experiment. With radiative forcing from non-CO2 constituents obtained by adapting the Forster and Taylor method, we use our method to estimate results for CMIP5 representative concentration pathway (RCP) experiments for cases not run by the GCMs. We estimate differences between pairs of RCPs rather than RCP anomalies relative to the pre-industrial state. This gives better results because it makes greater use of available GCM projections. The GCMs exhibit differences in radiative forcing, which we incorporate in the simple model. We analyse the thus-completed ensemble of RCP projections. The ensemble mean changes between 1986–2005 and 2080–2099 for global temperature (heat uptake) are, for RCP8.5: 3.8 K (2.3 × 10²⁴ J); for RCP6.0: 2.3 K (1.6 × 10²⁴ J); for RCP4.5: 2.0 K (1.6 × 10²⁴ J); for RCP2.6: 1.1 K (1.3 × 10²⁴ J). The relative spread (standard deviation/ensemble mean) for these scenarios is around 0.2 for temperature and 0.15 for heat uptake. We quantify the relative effect of mitigation action, through reduced emissions, via the time-dependent ratios (change in RCPx)/(change in RCP8.5), using changes with respect to pre-industrial conditions. We find that the effects of mitigation on global mean temperature change and heat uptake are very similar across these different GCMs.
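The step-response idea can be sketched as a superposition (convolution) of responses to yearly forcing increments; the two-exponential response shape below is an assumed illustrative form, not a response taken from any CMIP5 GCM.

```python
from math import exp

def step_response(t, fast=(0.6, 4.0), slow=(1.0, 250.0)):
    """Assumed warming (K) t years after a unit (1 W/m^2) forcing step:
    a fast and a slow exponential mode as (amplitude, timescale) pairs."""
    (a1, tau1), (a2, tau2) = fast, slow
    return a1 * (1 - exp(-t / tau1)) + a2 * (1 - exp(-t / tau2))

def emulate(forcing):
    """Temperature response to a yearly forcing series (W/m^2), built by
    superposing step responses to each year's forcing increment."""
    increments, prev = [], 0.0
    for f in forcing:
        increments.append(f - prev)
        prev = f
    return [sum(dF * step_response(t - s)
                for s, dF in enumerate(increments[:t + 1]))
            for t in range(len(forcing))]
```

Under a constant unit forcing the emulator simply reproduces the step response, which is the consistency property the method relies on.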

Relevance:

100.00%

Publisher:

Abstract:

In this study we examine the performance of 31 global-model radiative transfer schemes in cloud-free conditions with prescribed gaseous absorbers and no aerosols (a Rayleigh atmosphere), with prescribed scattering-only aerosols, and with more absorbing aerosols. Results are compared to benchmark results from high-resolution, multi-angular line-by-line radiation models. For purely scattering aerosols, model bias relative to the line-by-line models in the top-of-the-atmosphere aerosol radiative forcing ranges from roughly −10 to 20%, with over- and underestimates of radiative cooling at lower and higher solar zenith angle, respectively. Inter-model diversity (relative standard deviation) increases from ~10 to 15% as solar zenith angle decreases. Inter-model diversity in atmospheric and surface forcing decreases with increased aerosol absorption, indicating that the treatment of multiple scattering is more variable than that of aerosol absorption in the models considered. Aerosol radiative forcing results from multi-stream models are generally in better agreement with the line-by-line results than those from the simpler two-stream schemes. Considering radiative fluxes, model performance is generally the same as or slightly better than in previous radiation scheme intercomparisons. However, the inter-model diversity in aerosol radiative forcing remains large, primarily as a result of the treatment of multiple scattering. The results indicate that global models that estimate aerosol radiative forcing with two-stream radiation schemes may be subject to persistent biases introduced by these schemes, particularly for regional aerosol forcing.

Relevance:

100.00%

Publisher:

Abstract:

This study investigated serial (temporal) clustering of extra-tropical cyclones simulated by 17 climate models that participated in CMIP5. Clustering was estimated by calculating the dispersion (ratio of variance to mean) of 30 seasonal (December-February) counts of Atlantic storm tracks passing near each grid point. Results from single historical simulations of 1975-2005 were compared to those from ERA40 reanalyses for 1958-2001 and from single future model projections of 2069-2099 under the RCP4.5 climate change scenario. Models were generally able to capture the broad features reported previously in reanalyses: underdispersion/regularity (i.e. variance less than mean) in the western core of the Atlantic storm track, surrounded by overdispersion/clustering (i.e. variance greater than mean) to the north and south and over western Europe. Regression of counts onto North Atlantic Oscillation (NAO) indices revealed that much of the overdispersion in the historical reanalyses and model simulations can be accounted for by NAO variability. Future changes in dispersion were generally found to be small and not consistent across models. The overdispersion statistic, for any 30-year sample, is prone to large sampling uncertainty that obscures the climate change signal. For example, the projected increase in dispersion for storm counts near London in the CNRM-CM5 model is 0.1, compared to a standard deviation of 0.25. Projected changes in the mean and variance of the NAO are insufficient to create changes in overdispersion that are discernible above natural sampling variations.
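The dispersion statistic itself is simple to compute; the two count series below are invented to illustrate regularity versus clustering and are not data from the study.

```python
from statistics import mean, pvariance

def dispersion(counts):
    """Variance-to-mean ratio of seasonal storm counts: values below 1
    indicate regularity (underdispersion), values above 1 clustering."""
    return pvariance(counts) / mean(counts)

# Hypothetical December-February storm counts at two grid points:
regular = [10, 9, 11, 10, 10, 9, 11, 10, 10, 11]    # storm-track core
clustered = [2, 15, 4, 18, 3, 16, 2, 17, 5, 14]     # storm-track flank
```

A Poisson process would give a dispersion of exactly 1, which is why the variance-to-mean ratio is the natural clustering diagnostic here.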

Relevance:

100.00%

Publisher:

Abstract:

The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. 
However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with a standard deviation of 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
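The waterline-averaging step can be sketched as a moving mean along a section of waterline pixels; the window size and heights are illustrative, and this is a sketch of the quasi-contour idea rather than the processing chain used in the study.

```python
def correct_waterline_heights(heights, half_window=2):
    """Replace each DEM height along a waterline section with the mean of
    its neighbourhood, treating nearby waterline pixels as samples of a
    common water level (the quasi-contour assumption)."""
    corrected = []
    for i in range(len(heights)):
        lo = max(0, i - half_window)
        hi = min(len(heights), i + half_window + 1)
        window = heights[lo:hi]
        corrected.append(sum(window) / len(window))
    return corrected
```

Averaging N roughly independent samples of a common mean reduces the random height error by about a factor of sqrt(N), which is the source of the error reduction along the waterline.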

Relevance:

100.00%

Publisher:

Abstract:

Accurate knowledge of the location and magnitude of ocean heat content (OHC) variability and change is essential for understanding the processes that govern decadal variations in surface temperature, quantifying changes in the planetary energy budget, and developing constraints on the transient climate response to external forcings. We present an overview of the temporal and spatial characteristics of OHC variability and change as represented by an ensemble of dynamical and statistical ocean reanalyses (ORAs). Spatial maps of the 0–300 m layer show large regions of the Pacific and Indian Oceans where the interannual variability of the ensemble mean exceeds the ensemble spread, indicating that OHC variations are well constrained by the available observations over the period 1993–2009. At deeper levels, the ORAs are less well constrained by observations, with the largest differences across the ensemble mostly associated with areas of high eddy kinetic energy, such as the Southern Ocean and boundary current regions. Spatial patterns of OHC change for the period 1997–2009 show good agreement in the upper 300 m and are characterized by a strong dipole pattern in the Pacific Ocean. There is less agreement in the patterns of change at deeper levels, potentially linked to differences in the representation of ocean dynamics, such as water mass formation processes. However, the Atlantic and Southern Oceans are regions in which many ORAs show widespread warming below 700 m over the period 1997–2009. Annual time series of global and hemispheric OHC change for 0–700 m show the largest spread for the data-sparse Southern Hemisphere, and a number of ORAs seem to be subject to a large initialization ‘shock’ over the first few years. In agreement with previous studies, a number of ORAs exhibit enhanced ocean heat uptake below 300 and 700 m during the mid-1990s or early 2000s.
The ORA ensemble mean (±1 standard deviation) of rolling 5-year trends in full-depth OHC shows a relatively steady heat uptake of approximately 0.9 ± 0.8 W m⁻² (expressed relative to Earth’s surface area) between 1995 and 2002, which reduces to about 0.2 ± 0.6 W m⁻² between 2004 and 2006, in qualitative agreement with recent analysis of Earth’s energy imbalance. There is a marked reduction in the ensemble spread of OHC trends below 300 m as the Argo profiling float observations become available in the early 2000s. In general, we suggest that ORAs should be treated with caution when employed to understand past ocean warming trends—especially when considering the deeper ocean where there is little in the way of observational constraints. The current work emphasizes the need to better observe the deep ocean, both for providing observational constraints for future ocean state estimation efforts and also to develop improved models and data assimilation methods.
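Rolling 5-year trends of the kind quoted above can be computed as least-squares slopes over each 5-year window of an annual series; the linear test series is invented for illustration.

```python
def rolling_trends(annual_values, window=5):
    """Least-squares slope (units per year) over each rolling window of
    an annual series, e.g. 5-year trends in full-depth OHC."""
    n = window
    xbar = (n - 1) / 2.0
    denom = sum((x - xbar) ** 2 for x in range(n))
    slopes = []
    for start in range(len(annual_values) - n + 1):
        w = annual_values[start:start + n]
        ybar = sum(w) / n
        num = sum((x - xbar) * (y - ybar) for x, y in zip(range(n), w))
        slopes.append(num / denom)
    return slopes
```

Applied to each ORA and then averaged, slopes of this kind give the ensemble mean heat uptake, and their spread across the ensemble gives the quoted ±1 standard deviation.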

Relevance:

100.00%

Publisher:

Abstract:

This paper describes new advances in the exploitation of oxygen A-band measurements from the POLDER3 sensor onboard PARASOL, a satellite platform within the A-Train. These developments result not only from accounting for the dependence of the POLDER oxygen parameters on cloud optical thickness τ and on the scene's geometrical conditions but also, more importantly, from a finer understanding of the sensitivity of these parameters to cloud vertical extent. This sensitivity is accessible thanks to the multidirectional character of POLDER measurements. For monolayer clouds, which represent most cloudy conditions, new oxygen parameters are obtained and calibrated from POLDER3 data collocated with measurements from the two active sensors of the A-Train, CALIOP/CALIPSO and CPR/CloudSat. From a parameterization that depends on (μs, τ), with μs the cosine of the solar zenith angle, a cloud top oxygen pressure (CTOP) and a cloud middle oxygen pressure (CMOP) are obtained, which are estimates of the actual cloud top and middle pressures (CTP and CMP). The performance of CTOP and CMOP is presented by cloud class following the ISCCP classification. In 2008, the coefficient of correlation between CMOP and CMP is 0.81 for cirrostratus, 0.79 for stratocumulus, and 0.75 for deep convective clouds. The coefficient of correlation between CTOP and CTP is 0.75, 0.73, and 0.79 for the same cloud types. The score obtained by CTOP, defined as the confidence in the retrieval for a particular range of inferred values and a given error, is higher than that of the MODIS CTP estimate. CTOP scores are highest for the most populated CTP bins. For liquid (ice) clouds and an error of 30 hPa (50 hPa), the score of CTOP reaches 50% (70%). From the difference between CTOP and CMOP, a first estimate of the cloud vertical extent h is possible.
A second estimate of h comes from the correlation between the angular standard deviation of the POLDER oxygen pressure, σPO2, and the cloud vertical extent. This correlation is studied in detail for liquid clouds. It is shown to be spatially and temporally robust, except for clouds above land during winter months. Analysis of the correlation's dependence on the scene's characteristics leads to a parameterization providing h from σPO2. For liquid water clouds above ocean in 2008, the mean difference between the actual cloud vertical extent and the one retrieved from σPO2 (from the pressure difference) is 5 m (−12 m). The standard deviation of the mean difference is close to 1000 m for the two methods. POLDER estimates of the cloud geometrical thickness achieve a global score of 50% confidence for a relative error of 20% (40%) of the estimate for ice (liquid) clouds over ocean. These results need to be validated outside of the CALIPSO/CloudSat track.
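The first estimate of h, from the CTOP-CMOP difference, can be sketched with an isothermal scale-height conversion from pressure to altitude; the scale height and the factor of two (the middle pressure sitting halfway down the cloud) are simplifying assumptions for illustration, not the parameterization used in the paper.

```python
from math import log

SCALE_HEIGHT_M = 7400.0   # assumed isothermal atmospheric scale height (m)

def pressure_to_altitude(p_hpa, p0_hpa=1013.25):
    """Approximate altitude (m) of a pressure level in an isothermal
    scale-height atmosphere; a rough stand-in for the hypsometric equation."""
    return SCALE_HEIGHT_M * log(p0_hpa / p_hpa)

def vertical_extent_m(ctop_hpa, cmop_hpa):
    """First-cut cloud vertical extent: twice the altitude difference
    between the oxygen-derived top (CTOP) and middle (CMOP) pressures."""
    return 2.0 * (pressure_to_altitude(ctop_hpa) - pressure_to_altitude(cmop_hpa))
```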