15 results for Age, lower confidence level
in CentAUR: Central Archive University of Reading - UK
Abstract:
Transient and equilibrium sensitivity of Earth's climate has been calculated using global temperature, forcing and heating rate data for the period 1970–2010. We have assumed increased long-wave radiative forcing in the period due to the increase of the long-lived greenhouse gases. By assuming the change in aerosol forcing in the period to be zero, we calculate what we consider to be lower bounds to these sensitivities, as the magnitude of the negative aerosol forcing is unlikely to have diminished in this period. The radiation imbalance necessary to calculate equilibrium sensitivity is estimated from the rate of ocean heat accumulation as 0.37 ± 0.03 W m^−2 (all uncertainty estimates are 1σ). With these data, we obtain best estimates for transient climate sensitivity 0.39 ± 0.07 K (W m^−2)^−1 and equilibrium climate sensitivity 0.54 ± 0.14 K (W m^−2)^−1, equivalent to 1.5 ± 0.3 and 2.0 ± 0.5 K (3.7 W m^−2)^−1, respectively. The latter quantity is equal to the lower bound of the ‘likely’ range for this quantity given by the 2007 IPCC Assessment Report. The uncertainty attached to the lower-bound equilibrium sensitivity permits us to state, within the assumptions of this analysis, that the equilibrium sensitivity is greater than 0.31 K (W m^−2)^−1, equivalent to 1.16 K (3.7 W m^−2)^−1, at the 95% confidence level.
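The sensitivity calculation summarised in this abstract amounts to simple arithmetic: transient sensitivity is ΔT/ΔF, while equilibrium sensitivity first subtracts the ocean heat-uptake imbalance N from the forcing. A minimal sketch (only N = 0.37 W m^−2 is taken from the abstract; the ΔT and ΔF values are illustrative choices that reproduce the quoted best estimates, not the authors' data):

```python
def transient_sensitivity(delta_t, delta_f):
    """Transient climate sensitivity: realised warming per unit forcing."""
    return delta_t / delta_f

def equilibrium_sensitivity(delta_t, delta_f, imbalance):
    """Equilibrium climate sensitivity: the top-of-atmosphere radiation
    imbalance (heat still flowing into the ocean) is subtracted from the
    forcing, since that part of the warming has not yet been realised."""
    return delta_t / (delta_f - imbalance)

# illustrative values: delta_t and delta_f are NOT from the paper
s_tr = transient_sensitivity(0.52, 1.33)          # ~0.39 K (W m^-2)^-1
s_eq = equilibrium_sensitivity(0.52, 1.33, 0.37)  # ~0.54 K (W m^-2)^-1
per_doubling = s_eq * 3.7                         # ~2.0 K per CO2 doubling
```

Multiplying by 3.7 W m^−2, the canonical forcing from doubled CO2, converts both sensitivities to the per-doubling values quoted in the abstract.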
Abstract:
Combining satellite data, atmospheric reanalyses and climate model simulations, variability in the net downward radiative flux imbalance at the top of Earth's atmosphere (N) is reconstructed and linked to recent climate change. Over the 1985–1999 period mean N (0.34 ± 0.67 W m^−2) is lower than for the 2000–2012 period (0.62 ± 0.43 W m^−2, uncertainties at the 90% confidence level), despite the slower rate of surface temperature rise since 2000. While the precise magnitude of N remains uncertain, the reconstruction captures interannual variability, which is dominated by the eruption of Mt. Pinatubo in 1991 and the El Niño Southern Oscillation. Monthly deseasonalized interannual variability in N generated by an ensemble of nine climate model simulations using prescribed sea surface temperatures and radiative forcings and from the satellite-based reconstruction is significantly correlated (r ∼ 0.6) over the 1985–2012 period.
Abstract:
Aims. Although the time of the Maunder minimum (1645–1715) is widely known as a period of extremely low solar activity, it is still being debated whether solar activity during that period might have been moderate or even higher than the current solar cycle (number 24). We have revisited all existing evidence and datasets, both direct and indirect, to assess the level of solar activity during the Maunder minimum. Methods. We discuss the East Asian naked-eye sunspot observations, the telescopic solar observations, the fraction of sunspot active days, the latitudinal extent of sunspot positions, auroral sightings at high latitudes, cosmogenic radionuclide data as well as solar eclipse observations for that period. We also consider peculiar features of the Sun (very strong hemispheric asymmetry of the sunspot location, unusual differential rotation and the lack of the K-corona) that imply a special mode of solar activity during the Maunder minimum. Results. The level of solar activity during the Maunder minimum is reassessed on the basis of all available datasets. Conclusions. We conclude that solar activity was indeed at an exceptionally low level during the Maunder minimum. Although the exact level is still unclear, it was definitely lower than during the Dalton minimum of around 1800 and significantly below that of the current solar cycle #24. Claims of a moderate-to-high level of solar activity during the Maunder minimum are rejected with a high confidence level.
Abstract:
Changes to stratospheric sudden warmings (SSWs) over the coming century, as predicted by the Geophysical Fluid Dynamics Laboratory (GFDL) chemistry climate model [Atmospheric Model With Transport and Chemistry (AMTRAC)], are investigated in detail. Two sets of integrations, each a three-member ensemble, are analyzed. The first set is driven with observed climate forcings between 1960 and 2004; the second is driven with climate forcings from a coupled model run, including trace gas concentrations representing a midrange estimate of future anthropogenic emissions between 1990 and 2099. A small positive trend in the frequency of SSWs is found. This trend, amounting to 1 event/decade over a century, is statistically significant at the 90% confidence level and is consistent over the two sets of model integrations. Comparison of the model SSW climatology between the late 20th and 21st centuries shows that the increase is largest toward the end of the winter season. In contrast, the dynamical properties are not significantly altered in the coming century, despite the increase in SSW frequency. Owing to the intrinsic complexity of our model, the direct cause of the predicted trend in SSW frequency remains an open question.
Abstract:
The skill of numerical Lagrangian drifter trajectories in three numerical models is assessed by comparing these numerically obtained paths to the trajectories of drifting buoys in the real ocean. The skill assessment is performed using the two-sample Kolmogorov–Smirnov statistical test. To demonstrate the assessment procedure, it is applied to three different models of the Agulhas region. The test can either be performed using crossing positions of one-dimensional sections, in order to test model performance at specific locations, or using the total two-dimensional data set of trajectories. The test yields four quantities: a binary decision on model skill, a confidence level that can be used as a measure of goodness-of-fit of the model, a test statistic that can be used to determine the sensitivity of the confidence level, and cumulative distribution functions that aid in the qualitative analysis. The ordering of models by their confidence levels is the same as the ordering based on the qualitative analysis, which suggests that the method is suited for model validation. Only one of the three models, a 1/10° two-way nested regional ocean model, might have skill in the Agulhas region. The other two models, a 1/2° global model and a 1/8° assimilative model, might have skill only on some sections in the region.
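The two-sample Kolmogorov–Smirnov test at the heart of this skill assessment compares the empirical cumulative distribution functions of, for example, observed and simulated section-crossing positions. A self-contained sketch (the p-value uses a common asymptotic approximation; in practice `scipy.stats.ks_2samp` would be the usual choice):

```python
import math

def ks_2sample(x, y):
    """Two-sample Kolmogorov-Smirnov test.
    Returns (D, p): D is the maximum distance between the two empirical
    CDFs; p is an asymptotic approximation to the significance level."""
    x, y = sorted(x), sorted(y)
    n, m = len(x), len(y)
    cdf = lambda s, v: sum(1 for u in s if u <= v) / len(s)
    d = max(abs(cdf(x, v) - cdf(y, v)) for v in sorted(set(x + y)))
    if d == 0.0:
        return d, 1.0
    en = math.sqrt(n * m / (n + m))
    t = (en + 0.12 + 0.11 / en) * d          # effective-sample-size correction
    p = 2.0 * sum((-1) ** (k - 1) * math.exp(-2.0 * (k * t) ** 2)
                  for k in range(1, 101))
    return d, max(0.0, min(1.0, p))

# disjoint samples: the empirical CDFs separate completely, so D = 1
d, p = ks_2sample([1, 2, 3, 4, 5], [10, 11, 12, 13, 14])
```

A small p rejects the hypothesis that model and drifter crossings are drawn from the same distribution, i.e. the model lacks skill at that section; the confidence level 1 − p then serves as the goodness-of-fit measure the abstract describes.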
Abstract:
A multivariate fit to the variation in global mean surface air temperature anomaly over the past half century is presented. The fit procedure allows for the effect of response time on the waveform, amplitude and lag of each radiative forcing input, and each is allowed to have its own time constant. It is shown that the contribution of solar variability to the temperature trend since 1987 is small and downward; the best estimate is -1.3% and the 2σ confidence level sets the uncertainty range at -0.7 to -1.9%. The result is the same whether one quantifies the solar variation using galactic cosmic ray fluxes (for which the analysis can be extended back to 1953) or the most accurate total solar irradiance data composite. The rise in the global mean surface air temperature is predominantly associated with a linear increase that represents the combined effects of changes in anthropogenic well-mixed greenhouse gases and aerosols, although, in recent decades, there is also a considerable contribution from a relative lack of major volcanic eruptions. The best estimate is that the anthropogenic factors contribute 75% of the rise since 1987, with an uncertainty range (set by the 2σ confidence level using an AR(1) noise model) of 49–160%; thus, the uncertainty is large, but we can state that at least half of the temperature trend comes from the linear term and that this term could explain the entire rise. The results are consistent with the Intergovernmental Panel on Climate Change (IPCC) estimates of the changes in radiative forcing (given for 1961–1995) and are here combined with those estimates to find the response times, equilibrium climate sensitivities and pertinent heat capacities (i.e. the depth into the oceans to which a given radiative forcing variation penetrates) of the quasi-periodic (decadal-scale) input forcing variations.
As shown by previous studies, the decadal-scale variations do not penetrate as deeply into the oceans as the longer term drifts and have shorter response times. Hence, conclusions about the response to century-scale forcing changes (and hence the associated equilibrium climate sensitivity and the temperature rise commitment) cannot be made from studies of the response to shorter period forcing changes.
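The fit structure this abstract describes, in which each forcing input is convolved with its own exponential response before entering a multivariate least-squares regression, can be sketched as follows. All series and coefficients here are synthetic illustrations, not the paper's data:

```python
import numpy as np

def exp_response(forcing, tau):
    """Convolve a forcing series with a normalised exponential response of
    time constant tau (in time steps): this lags and smooths the input, as
    the response-time treatment in the fit requires."""
    t = np.arange(len(forcing), dtype=float)
    kernel = np.exp(-t / tau)
    kernel /= kernel.sum()
    return np.convolve(forcing, kernel)[: len(forcing)]

# synthetic example: temperature = a * lagged solar cycle + b * linear trend
years = np.arange(1960, 2011)
solar = np.sin(2 * np.pi * (years - 1960) / 11.0)   # ~11-yr cycle proxy
trend = (years - 1960) / 50.0                        # anthropogenic stand-in
temp = 0.02 * exp_response(solar, tau=1.0) + 0.5 * trend

# multivariate least-squares fit with an intercept column
X = np.column_stack([exp_response(solar, tau=1.0), trend,
                     np.ones_like(trend)])
coeffs, *_ = np.linalg.lstsq(X, temp, rcond=None)
# coeffs recovers the amplitudes 0.02 and 0.5, since no noise was added
```

In the actual analysis the time constants themselves are also fitted, which makes the problem nonlinear; this sketch fixes tau so as to show only the regression step.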
Abstract:
We have combined several key sample preparation steps for the use of a liquid matrix system to provide high analytical sensitivity in automated ultraviolet matrix-assisted laser desorption/ionisation mass spectrometry (UV-MALDI-MS). This new sample preparation protocol employs a matrix-mixture which is based on the glycerol matrix-mixture described by Sze et al. (J. Am. Soc. Mass Spectrom. 1998, 9, 166–174). The low-femtomole sensitivity that is achievable with this new preparation protocol enables proteomic analysis of protein digests comparable to solid-state matrix systems. For automated data acquisition and analysis, the MALDI performance of this liquid matrix surpasses that of the conventional solid-state MALDI matrices. Besides the inherent general advantages of liquid samples for automated sample preparation and data acquisition, the use of the presented liquid matrix significantly reduces the extent of unspecific ion signals in peptide mass fingerprints compared with typically used solid matrices, such as 2,5-dihydroxybenzoic acid (DHB) or alpha-cyano-hydroxycinnamic acid (CHCA). In particular, matrix and low-mass ion signals and ion signals resulting from cation adduct formation are dramatically reduced. Consequently, the confidence level of protein identification by peptide mass mapping of in-solution and in-gel digests is generally higher.
Abstract:
Global climate change results from a small yet persistent imbalance between the amount of sunlight absorbed by Earth and the thermal radiation emitted back to space. An apparent inconsistency has been diagnosed between interannual variations in the net radiation imbalance inferred from satellite measurements and the upper-ocean heating rate from in situ measurements, and this inconsistency has been interpreted as ‘missing energy’ in the system. Here we present a revised analysis of net radiation at the top of the atmosphere from satellite data, and we estimate ocean heat content based on three independent sources. We find that the difference between the heat balance at the top of the atmosphere and the upper-ocean heat content change is not statistically significant when accounting for observational uncertainties in ocean measurements, given transitions in instrumentation and sampling. Furthermore, variability in Earth's energy imbalance related to the El Niño-Southern Oscillation is found to be consistent within observational uncertainties among the satellite measurements, a reanalysis model simulation and one of the ocean heat content records. We combine satellite data with ocean measurements to depths of 1,800 m, and show that between January 2001 and December 2010 Earth has been steadily accumulating energy at a rate of 0.50 ± 0.43 W m^−2 (uncertainties at the 90% confidence level). We conclude that energy storage is continuing to increase in the sub-surface ocean.
Abstract:
Geophysical time series sometimes exhibit serial correlations that are stronger than can be captured by the commonly used first‐order autoregressive model. In this study we demonstrate that a power law statistical model serves as a useful upper bound for the persistence of total ozone anomalies on monthly to interannual timescales. Such a model is usually characterized by the Hurst exponent. We show that the estimation of the Hurst exponent in time series of total ozone is sensitive to various choices made in the statistical analysis, especially whether and how the deterministic (including periodic) signals are filtered from the time series, and the frequency range over which the estimation is made. In particular, care must be taken to ensure that the estimate of the Hurst exponent accurately represents the low‐frequency limit of the spectrum, which is the part that is relevant to long‐term correlations and the uncertainty of estimated trends. Otherwise, spurious results can be obtained. Based on this analysis, and using an updated equivalent effective stratospheric chlorine (EESC) function, we predict that an increase in total ozone attributable to EESC should be detectable at the 95% confidence level by 2015 at the latest in southern midlatitudes, and by 2020–2025 at the latest over 30°–45°N, with the time to detection increasing rapidly with latitude north of this range.
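The Hurst exponent H mentioned in this abstract is commonly estimated from the low-frequency slope of the power spectrum: for long-memory noise S(f) ∝ f^(1−2H), so a log-log fit of the periodogram yields a slope β = 1 − 2H, i.e. H = (1 − β)/2. A minimal sketch (the frequency cutoff and the white-noise test series are illustrative; as the abstract stresses, real total-ozone series first need the deterministic and periodic signals filtered out):

```python
import numpy as np

def hurst_periodogram(x, f_max=0.1):
    """Estimate the Hurst exponent from the low-frequency spectral slope.
    For long-memory noise the spectrum behaves as S(f) ~ f**(1 - 2H), so a
    least-squares fit of log S on log f over f <= f_max (the low-frequency
    limit relevant to long-term correlations) gives H = (1 - slope) / 2."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    freqs = np.fft.rfftfreq(len(x))[1:]        # drop the zero frequency
    power = np.abs(np.fft.rfft(x))[1:] ** 2    # raw periodogram
    keep = freqs <= f_max
    slope = np.polyfit(np.log(freqs[keep]), np.log(power[keep]), 1)[0]
    return (1.0 - slope) / 2.0

# white noise has a flat spectrum (slope ~0), hence H close to 0.5
rng = np.random.default_rng(0)
h = hurst_periodogram(rng.standard_normal(8192))
```

Values of H above 0.5 indicate persistence stronger than an AR(1) process can capture, which widens trend uncertainties and delays formal trend detection, the effect this abstract quantifies for total-ozone recovery.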
Abstract:
We have incorporated a semi-mechanistic isoprene emission module into the JULES land-surface scheme, as a first step towards a modelling tool that can be applied in studies of vegetation–atmospheric chemistry interactions, including chemistry–climate feedbacks. Here, we evaluate the coupled model against local above-canopy isoprene emission flux measurements from six flux tower sites as well as satellite-derived estimates of isoprene emission over tropical South America and east and south Asia. The model simulates diurnal variability well: correlation coefficients are significant (at the 95% level) for all flux tower sites. The model reproduces day-to-day variability with significant correlations (at the 95% confidence level) at four of the six flux tower sites. At the UMBS site, a complete set of seasonal observations is available for two years (2000 and 2002). The model reproduces the seasonal pattern of emission during 2002, but does less well in the year 2000. The model overestimates observed emissions at all sites, partly because it does not include isoprene loss through the canopy. Comparison with the satellite-derived isoprene-emission estimates suggests that the model captures the main spatial patterns and the seasonal and inter-annual variability over tropical regions. The model yields a global annual isoprene emission of 535 ± 9 TgC yr^−1 during the 1990s, 78% of which comes from forested areas.
Abstract:
1. The recent increase in planting of selected willow clones as energy crops for biomass production has resulted in a need to understand the relationship between commonly grown, clonally propagated genotypes and their pests.
2. For the first time, we present a study of the interactions of six willow clones and a previously unconsidered pest, the giant willow aphid Tuberolachnus salignus.
3. Tuberolachnus salignus alatae displayed no preference between the clones, but there was genetic variation in resistance between the clones; Q83 was the most resistant and led to the lowest reproductive performance in the aphid.
4. Maternal effects buffered changes in aphid performance. On four tested willow clones, the fecundity of first-generation aphids on the new host clone was intermediate between that of the second generation and that on the clone used to maintain the aphids in culture.
5. In the field, patterns of aphid infestation were highly variable between years, with the duration of attack being up to four times longer in 1999. In both years there was a significant effect of willow clone on the intensity of infestation. However, whereas Orm had the lowest intensity of infestation in the first year, Dasyclados supported a lower population level than the other monitored clones in the second year.
Abstract:
Decades of research attest that memory processes suffer under conditions of auditory distraction. What is less well understood, however, is whether people are able to modify how their memory processes are deployed in order to compensate for the disruptive effects of distraction. The metacognitive approach to memory describes a variety of ways people can exert control over their cognitive processes to optimize performance. Here we describe our recent investigations into how these control processes change under conditions of auditory distraction. We specifically looked at control of encoding, in the form of decisions about how long to study a word when it is presented, and control of memory reporting, in the form of decisions whether to volunteer or withhold retrieved details. Regarding control of encoding, we expected that people would compensate for the disruptive effects of distraction by extending study time under noise. Our results revealed, however, that when exposed to irrelevant speech, people curtail rather than extend study. Regarding control of memory reporting, we expected that people would compensate for the loss of access to memory records by volunteering responses held with lower confidence. Our results revealed, however, that people's reporting strategies do not differ when the memory task is performed in silence or under auditory distraction, although distraction seriously undermines people's confidence in their own responses. Together, our studies reveal novel avenues for investigating the psychological effects of auditory distraction within a metacognitive framework.
Abstract:
This chapter presents findings on English Language instruction at the lower primary level in the context of policies for curricular innovation at national, school and classroom levels. The focus is on policies which connect the national and school levels, and on how they might be interpreted when implemented in multiple schools within Singapore's educational system. Referring to case studies in two schools and to individual lesson observations in 10 schools, we found much agreement with national policies in terms of curriculum (i.e. lesson content and activity selection), leading to great uniformity in the lessons taught by different teachers in different schools. In addition, we found that schools had an important mediating influence on the implementation of national policies. However, adoptions and adaptations of policy innovations at the classroom level were somewhat superficial, as they were more related to changes in educational facilities and procedures than in philosophies.