179 results for Quasi-Arithmetic Mean
Abstract:
A method is proposed for merging different nadir-sounding climate data records using measurements from high-resolution limb sounders to provide a transfer function between the different nadir measurements. The two nadir-sounding records need not be overlapping so long as the limb-sounding record bridges between them. The method is applied to global-mean stratospheric temperatures from the NOAA Climate Data Records based on the Stratospheric Sounding Unit (SSU) and the Advanced Microwave Sounding Unit-A (AMSU), extending the SSU record forward in time to yield a continuous data set from 1979 to present, and providing a simple framework for extending the SSU record into the future using AMSU. SSU and AMSU are bridged using temperature measurements from the Michelson Interferometer for Passive Atmospheric Sounding (MIPAS), which is of high enough vertical resolution to accurately represent the weighting functions of both SSU and AMSU. For this application, a purely statistical approach is not viable since the different nadir channels are not sufficiently linearly independent, statistically speaking. The near-global-mean linear temperature trends for extended SSU for 1980–2012 are −0.63 ± 0.13, −0.71 ± 0.15 and −0.80 ± 0.17 K decade⁻¹ (95 % confidence) for channels 1, 2 and 3, respectively. The extended SSU temperature changes are in good agreement with those from the Microwave Limb Sounder (MLS) on the Aura satellite, with both exhibiting a cooling trend of ~0.6 ± 0.3 K decade⁻¹ in the upper stratosphere from 2004 to 2012. The extended SSU record is found to be in agreement with high-top coupled atmosphere–ocean models over the 1980–2012 period, including the continued cooling over the first decade of the 21st century.
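To make the bridging idea concrete, the sketch below (ours, not the authors' code) convolves a single high-resolution limb profile with two assumed, Gaussian-shaped weighting functions standing in for an SSU and an AMSU channel; applying this to every MIPAS profile would yield paired synthetic channel temperatures from which a transfer function between the two nadir records could be estimated.

```python
import numpy as np

def weighted_channel_mean(temp_profile, weights, dz):
    """Approximate a nadir channel brightness temperature as a
    weighting-function-weighted vertical mean of a high-resolution profile."""
    w = weights / np.sum(weights * dz)      # normalise the weighting function
    return np.sum(temp_profile * w * dz)

# Illustrative altitude grid and toy limb-sounder temperature profile (K).
z = np.arange(20.0, 60.0, 1.0)              # km
dz = 1.0
t_limb = 230.0 - 0.5 * (z - 40.0)

# Hypothetical Gaussian weighting functions peaking at different altitudes;
# the real SSU/AMSU kernels would be used in practice.
w_ssu = np.exp(-0.5 * ((z - 45.0) / 8.0) ** 2)
w_amsu = np.exp(-0.5 * ((z - 40.0) / 6.0) ** 2)

t_ssu_equiv = weighted_channel_mean(t_limb, w_ssu, dz)
t_amsu_equiv = weighted_channel_mean(t_limb, w_amsu, dz)
print(t_ssu_equiv, t_amsu_equiv)
```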
Abstract:
The subject of climate feedbacks focuses attention on global mean surface air temperature (GMST) as the key metric of climate change. But what does knowledge of past and future GMST tell us about the climate of specific regions? In the context of the ongoing UNFCCC process, this is an important question for policy-makers as well as for scientists. The answer depends on many factors, including the mechanisms causing changes, the timescale of the changes, and the variables and regions of interest. This paper provides a review and analysis of the relationship between changes in GMST and changes in local climate, first in observational records and then in a range of climate model simulations, which are used to interpret the observations. The focus is on decadal timescales, which are of particular interest in relation to recent and near-future anthropogenic climate change. It is shown that GMST primarily provides information about forced responses, but that understanding and quantifying internal variability is essential to projecting climate and climate impacts on regional-to-local scales. The relationship between local forced responses and GMST is often linear but may be nonlinear, and can be greatly complicated by competition between different forcing factors. Climate projections are limited not only by uncertainties in the signal of climate change but also by uncertainties in the characteristics of real-world internal variability. Finally, it is shown that the relationship between GMST and local climate provides a simple approach to climate change detection, and a useful guide to attribution studies.
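One standard way to quantify the (often linear) part of this relationship is pattern scaling: regressing local change on GMST across a simulation. The sketch below uses synthetic numbers purely for illustration; the scaling factor, noise levels, and years are assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic forced GMST trajectory (K) and a local temperature that follows
# it linearly plus internal variability (all numbers illustrative).
years = np.arange(1950, 2051)
gmst = 0.02 * (years - 1950) + 0.10 * rng.standard_normal(years.size)
local = 1.5 * gmst + 0.40 * rng.standard_normal(years.size)

# Least-squares pattern-scaling coefficient: local change per K of GMST.
scaling, intercept = np.polyfit(gmst, local, 1)
print(f"local response ~ {scaling:.2f} K per K of GMST")
```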
Abstract:
Considerable progress has been made in understanding the present and future regional and global sea level in the 2 years since the publication of the Fifth Assessment Report (AR5) of the Intergovernmental Panel on Climate Change. Here, we evaluate how the new results affect the AR5’s assessment of (i) historical sea level rise, including attribution of that rise and implications for the sea level budget, (ii) projections of the components and of total global mean sea level (GMSL), and (iii) projections of regional variability and emergence of the anthropogenic signal. In each of these cases, new work largely provides additional evidence in support of the AR5 assessment, providing greater confidence in those findings. Recent analyses confirm the twentieth century sea level rise, with some analyses showing a slightly smaller rate before 1990 and some a slightly larger value than reported in the AR5. There is now more evidence of an acceleration in the rate of rise. Ongoing ocean heat uptake and associated thermal expansion have continued since 2000, and are consistent with ocean thermal expansion reported in the AR5. A significant amount of heat is being stored deeper in the water column, with a larger rate of heat uptake since 2000 compared to the previous decades and with the largest storage in the Southern Ocean. The first formal detection studies for ocean thermal expansion and glacier mass loss since the AR5 have confirmed the AR5 finding of a significant anthropogenic contribution to sea level rise over the last 50 years. New projections of glacier loss from two regions suggest smaller contributions to GMSL rise from these regions than in studies assessed by the AR5; additional regional studies are required to further assess whether there are broader implications of these results. Mass loss from the Greenland Ice Sheet, primarily as a result of increased surface melting, and from the Antarctic Ice Sheet, primarily as a result of increased ice discharge, has accelerated. The largest estimates of acceleration in mass loss from the two ice sheets for 2003–2013 equal or exceed the acceleration of GMSL rise calculated from the satellite altimeter sea level record over the longer period of 1993–2014. However, when increased mass gain in land water storage and parts of East Antarctica, and decreased mass loss from glaciers in Alaska and some other regions are taken into account, the net acceleration in the ocean mass gain is consistent with the satellite altimeter record. New studies suggest that a marine ice sheet instability (MISI) may have been initiated in parts of the West Antarctic Ice Sheet (WAIS), but that it will affect only a limited number of ice streams in the twenty-first century. New projections of mass loss from the Greenland and Antarctic Ice Sheets by 2100, including a contribution from parts of WAIS undergoing unstable retreat, suggest a contribution that falls largely within the likely range (i.e., two thirds probability) of the AR5. These new results increase confidence in the AR5 likely range, indicating that there is a greater probability that sea level rise by 2100 will lie in this range with a corresponding decrease in the likelihood of an additional contribution of several tens of centimeters above the likely range. In view of the comparatively limited state of knowledge and understanding of rapid ice sheet dynamics, we continue to think that it is not yet possible to make reliable quantitative estimates of future GMSL rise outside the likely range. 
Projections of twenty-first century GMSL rise published since the AR5 depend on results from expert elicitation, but we have low confidence in conclusions based on these approaches. New work on regional projections and emergence of the anthropogenic signal suggests that the two commonly predicted features of future regional sea level change (the increasing tilt across the Antarctic Circumpolar Current and the dipole in the North Atlantic) are related to regional changes in wind stress and surface heat flux. Moreover, it is expected that sea level change in response to anthropogenic forcing, particularly in regions of relatively low unforced variability such as the low-latitude Atlantic, will be detectable over most of the ocean by 2040. The east-west contrast of sea level trends in the Pacific observed since the early 1990s cannot be satisfactorily accounted for by climate models, nor yet definitively attributed either to unforced variability or forced climate change.
Abstract:
We construct a quasi-sure version (in the sense of Malliavin) of geometric rough paths associated with a Gaussian process with long-time memory. As an application we establish a large deviation principle (LDP) for capacities for such Gaussian rough paths. Together with Lyons' universal limit theorem, our results yield immediately the corresponding results for pathwise solutions to stochastic differential equations driven by such Gaussian process in the sense of rough paths. Moreover, our LDP result implies the result of Yoshida on the LDP for capacities over the abstract Wiener space associated with such Gaussian process.
Abstract:
The topography of many floodplains in the developed world has now been surveyed with high resolution sensors such as airborne LiDAR (Light Detection and Ranging), giving accurate Digital Elevation Models (DEMs) that facilitate accurate flood inundation modelling. This is not always the case for remote rivers in developing countries. However, the accuracy of DEMs produced for modelling studies on such rivers should be enhanced in the near future by the high resolution TanDEM-X WorldDEM. In a parallel development, increasing use is now being made of flood extents derived from high resolution Synthetic Aperture Radar (SAR) images for calibrating, validating and assimilating observations into flood inundation models in order to improve these. This paper discusses an additional use of SAR flood extents, namely to improve the accuracy of the TanDEM-X DEM in the floodplain covered by the flood extents, thereby permanently improving this DEM for future flood modelling and other studies. The method is based on the fact that for larger rivers the water elevation generally changes only slowly along a reach, so that the boundary of the flood extent (the waterline) can be regarded locally as a quasi-contour. As a result, heights of adjacent pixels along a small section of waterline can be regarded as samples with a common population mean. The height of the central pixel in the section can be replaced with the average of these heights, leading to a more accurate estimate. While this will result in a reduction in the height errors along a waterline, the waterline is a linear feature in a two-dimensional space. However, improvements to the DEM heights between adjacent pairs of waterlines can also be made, because DEM heights enclosed by the higher waterline of a pair must in general be no higher than the corrected heights along the higher waterline, whereas DEM heights not enclosed by the lower waterline must in general be no lower than the corrected heights along the lower waterline. In addition, DEM heights between the higher and lower waterlines can also be assigned smaller errors because of the reduced errors on the corrected waterline heights. The method was tested on a section of the TanDEM-X Intermediate DEM (IDEM) covering an 11 km reach of the Warwickshire Avon, England. Flood extents from four COSMO-SkyMed images were available at various stages of a flood in November 2012, and a LiDAR DEM was available for validation. In the area covered by the flood extents, the original IDEM heights had a mean difference from the corresponding LiDAR heights of 0.5 m with a standard deviation of 2.0 m, while the corrected heights had a mean difference of 0.3 m with standard deviation 1.2 m. These figures show that significant reductions in IDEM height bias and error can be made using the method, with the corrected error being only 60% of the original. Even if only a single SAR image obtained near the peak of the flood was used, the corrected error was only 66% of the original. The method should also be capable of improving the final TanDEM-X DEM and other DEMs, and may also be of use with data from the SWOT (Surface Water and Ocean Topography) satellite.
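A minimal sketch of the central correction step, under the stated assumption that the waterline is locally a quasi-contour (so adjacent pixel heights share a common population mean); the window length and synthetic heights are illustrative, not values from the study. Heights lying between a pair of corrected waterlines would then be constrained from above and below as described in the abstract.

```python
import numpy as np

def correct_waterline_heights(heights, half_window=10):
    """Replace each waterline pixel height with the mean height of a short
    section centred on it; valid where the waterline is locally a
    quasi-contour, so the section shares a common population mean."""
    corrected = np.empty_like(heights, dtype=float)
    n = len(heights)
    for i in range(n):
        lo = max(0, i - half_window)
        hi = min(n, i + half_window + 1)
        corrected[i] = heights[lo:hi].mean()
    return corrected

# Illustrative noisy DEM heights (m) sampled along a waterline whose true
# water level is about 12 m; averaging reduces the random height error.
rng = np.random.default_rng(1)
raw = 12.0 + 2.0 * rng.standard_normal(200)
smoothed = correct_waterline_heights(raw)
print(f"std before: {raw.std():.2f} m, after: {smoothed.std():.2f} m")
```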
Abstract:
A generalization of Arakawa and Schubert's convective quasi-equilibrium principle is presented for a closure formulation of mass-flux convection parameterization. The original principle is based on the budget of the cloud work function. This principle is generalized by considering the budget for a vertical integral of an arbitrary convection-related quantity. The closure formulation includes Arakawa and Schubert's quasi-equilibrium, as well as both CAPE and moisture closures as special cases. The formulation also includes new possibilities for considering vertical integrals that are dependent on convective-scale variables, such as the moisture within convection. The generalized convective quasi-equilibrium is defined by a balance between large-scale forcing and convective response for a given vertically-integrated quantity. The latter takes the form of a convolution of a kernel matrix and a mass-flux spectrum, as in the original convective quasi-equilibrium. The kernel reduces to a scalar when either a bulk formulation is adopted, or only large-scale variables are considered within the vertical integral. Various physical implications of the generalized closure are discussed. These include the possibility that precipitation might be considered as a potentially-significant contribution to the large-scale forcing. Two dicta are proposed as guiding physical principles for specifying a suitable vertically-integrated quantity.
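Schematically (our notation, following the structure described above), the generalized closure balances the large-scale forcing of a chosen vertically-integrated quantity against a kernel–mass-flux convolution:

```latex
% I_i : vertical integral of an arbitrary convection-related quantity for
%       convection type i (cloud work function, CAPE, column moisture, ...)
% F_i : large-scale forcing of I_i;  K_{ij} : kernel matrix;  M_j : mass flux
\frac{dI_i}{dt} = F_i - \sum_j K_{ij}\, M_j \approx 0
\quad\Longrightarrow\quad
\sum_j K_{ij}\, M_j = F_i .
```

Under a bulk formulation, or when only large-scale variables enter the vertical integral, the kernel collapses to a scalar and the balance reduces to K M = F.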
Abstract:
Various studies show moral intuitions to be susceptible to framing effects. Many have argued that this susceptibility is a sign of unreliability and that this poses a methodological challenge for moral philosophy. Recently, doubt has been cast on this idea. It has been argued that extant evidence of framing effects does not show that moral intuitions have an unreliability problem. I argue that, even if the extant evidence suggests that moral intuitions are fairly stable with respect to what intuitions we have, the effect of framing on the strength of those intuitions still needs to be taken into account. I argue that this by itself poses a methodological challenge for moral philosophy.
Abstract:
The combined influences of the westerly phase of the quasi-biennial oscillation (QBO-W) and solar maximum (Smax) conditions on the Northern Hemisphere extratropical winter circulation are investigated using reanalysis data and Center for Climate System Research/National Institute for Environmental Studies chemistry climate model (CCM) simulations. The composite analysis for the reanalysis data indicates a strengthened polar vortex in December followed by a weakened polar vortex in February–March for QBO-W during Smax (QBO-W/Smax) conditions. This relationship need not be specific to QBO-W/Smax conditions but may just require a strengthened vortex in December, which is more likely under QBO-W/Smax. Both the reanalysis data and CCM simulations suggest that dynamical processes of planetary wave propagation and meridional circulation related to QBO-W around the polar vortex in December are similar in character to those related to Smax; furthermore, both processes may work in concert to maintain a stronger vortex during QBO-W/Smax. In the reanalysis data, the strengthened polar vortex in December is associated with the development of a north–south dipole tropospheric anomaly in the Atlantic sector similar to the North Atlantic oscillation (NAO) during December–January. The structure of the north–south dipole anomaly has a zonal wavenumber 1 (WN1) component, where the longitude of the anomalous ridge overlaps with that of the climatological ridge in the North Atlantic in January. This implies amplification of the WN1 wave and results in the enhancement of the upward WN1 propagation from the troposphere into the stratosphere in January, leading to the weakened polar vortex in February–March. Although WN2 waves do not play a direct role in forcing the stratospheric vortex evolution, their tropospheric response to QBO-W/Smax conditions appears to be related to the maintenance of the NAO-like anomaly in the high-latitude troposphere in January. These results may provide a possible explanation for the mechanisms underlying the seasonal evolution of wintertime polar vortex anomalies during QBO-W/Smax conditions and the role of the troposphere in this evolution.
Abstract:
In both the observational record and atmosphere-ocean general circulation model (AOGCM) simulations of the last ~150 years, short-lived negative radiative forcing due to volcanic aerosol, following explosive eruptions, causes sudden global-mean cooling of up to ~0.3 K. This is about five times smaller than expected from the transient climate response parameter (TCRP, K of global-mean surface air temperature change per W m⁻² of radiative forcing increase) evaluated under atmospheric CO2 concentration increasing at 1 % yr⁻¹. Using the step model (Good et al. in Geophys Res Lett 38:L01703, 2011. doi:10.1029/2010GL045208), we confirm the previous finding (Held et al. in J Clim 23:2418–2427, 2010. doi:10.1175/2009JCLI3466.1) that the main reason for the discrepancy is the damping of the response to short-lived forcing by the thermal inertia of the upper ocean. Although the step model includes this effect, it still overestimates the volcanic cooling simulated by AOGCMs by about 60 %. We show that this remaining discrepancy can be explained by the magnitude of the volcanic forcing, which may be smaller in AOGCMs (by 30 % for the HadCM3 AOGCM) than in off-line calculations that do not account for rapid cloud adjustment, and the climate sensitivity parameter, which may be smaller than for increasing CO2 (40 % smaller than for 4 × CO2 in HadCM3).
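The damping by upper-ocean thermal inertia can be illustrated with a one-timescale impulse-response sketch in the spirit of (but much simpler than) the step model; the sensitivity parameter, response timescale, and forcing pulse below are illustrative assumptions, not values from the paper.

```python
import numpy as np

lam_inv = 0.8   # K per (W m-2): illustrative climate sensitivity parameter
tau = 4.0       # years: illustrative upper-ocean adjustment timescale

dt = 0.1
t = np.arange(0.0, 20.0, dt)

# Short-lived volcanic-like forcing pulse: -3 W m-2 decaying over ~1 year.
forcing = -3.0 * np.exp(-t / 1.0)

# Temperature response = forcing convolved with the exponential impulse
# response (Green's function) of a one-box energy-balance model.
green = (lam_inv / tau) * np.exp(-t / tau) * dt
response = np.convolve(forcing, green)[: t.size]

peak_cooling = response.min()
no_inertia = lam_inv * forcing.min()   # cooling implied by peak forcing alone
print(f"peak cooling {peak_cooling:.2f} K vs {no_inertia:.2f} K without inertia")
```

With these numbers the simulated peak cooling is several times smaller than the equilibrium estimate, which is the effect the abstract attributes to upper-ocean thermal inertia.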
Abstract:
Using an international, multi-model suite of historical forecasts from the World Climate Research Programme (WCRP) Climate-system Historical Forecast Project (CHFP), we compare the seasonal prediction skill in boreal wintertime between models that resolve the stratosphere and its dynamics (“high-top”) and models that do not (“low-top”). We evaluate hindcasts that are initialized in November, and examine the model biases in the stratosphere and how they relate to boreal wintertime (Dec-Mar) seasonal forecast skill. We are unable to detect more skill in the high-top ensemble-mean than the low-top ensemble-mean in forecasting the wintertime North Atlantic Oscillation, but model performance varies widely. Increasing the ensemble size clearly increases the skill for a given model. We then examine two major processes involving stratosphere-troposphere interactions (the El Niño-Southern Oscillation/ENSO and the Quasi-biennial Oscillation/QBO) and how they relate to predictive skill on intra-seasonal to seasonal timescales, particularly over the North Atlantic and Eurasia regions. High-top models tend to have a more realistic stratospheric response to El Niño and the QBO compared to low-top models. Enhanced conditional wintertime skill over high-latitudes and the North Atlantic region during winters with El Niño conditions suggests a possible role for a stratospheric pathway.
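The benefit of ensemble size for a weakly predictable index such as the wintertime NAO can be illustrated with synthetic forecasts (the signal-to-noise ratios and member counts below are assumptions for illustration, not CHFP values):

```python
import numpy as np

rng = np.random.default_rng(2)
n_years, n_members = 30, 20

signal = rng.standard_normal(n_years)                  # predictable component
obs = signal + 1.0 * rng.standard_normal(n_years)      # "observed" NAO index
members = signal + 2.0 * rng.standard_normal((n_members, n_years))  # forecasts

# Correlation skill of the ensemble mean grows as members are averaged,
# because averaging suppresses the unpredictable noise in each member.
for m in (1, 5, 20):
    skill = np.corrcoef(members[:m].mean(axis=0), obs)[0, 1]
    print(f"{m:2d} members: r = {skill:.2f}")
```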
Abstract:
The effect of variations in land cover on mean radiant temperature (Tmrt) is explored through a simple scheme developed within the radiation model SOLWEIG. Outgoing longwave radiation is parameterised using surface temperature observations on a grass and an asphalt surface, whereas outgoing shortwave radiation is modelled through variations in albedo for the different surfaces. The influence of surface materials on Tmrt is small compared to the effects of shadowing. Nevertheless, altering ground surface materials could contribute to a reduction in Tmrt, reducing the radiant load during heat-wave episodes in locations where shadowing is not an option. Evaluation of the new scheme suggests that despite its simplicity it can simulate the outgoing fluxes well, especially during sunny conditions. However, it underestimates the fluxes at night and in shadowed locations. One grass surface used to develop the parameterisation had very different characteristics from the grass site used for evaluation, which caused Tmrt to be underestimated. The implications of using high temporal resolution (e.g. 15 min) forcing data under partly cloudy conditions are demonstrated even for fairly proximal sites.
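A minimal sketch of the two outgoing-flux terms for contrasting ground covers; the paper fits outgoing longwave to observed surface temperatures, whereas here a simple Stefan–Boltzmann estimate stands in, and the albedos, emissivities, and input fluxes are typical illustrative values rather than the fitted ones.

```python
SIGMA = 5.67e-8  # Stefan-Boltzmann constant (W m-2 K-4)

def outgoing_fluxes(t_surface_k, shortwave_down, albedo, emissivity):
    """Return (outgoing shortwave, outgoing longwave) in W m-2."""
    k_up = albedo * shortwave_down                  # reflected shortwave
    l_up = emissivity * SIGMA * t_surface_k ** 4    # emitted longwave
    return k_up, l_up

# Grass vs. asphalt on a sunny afternoon (illustrative inputs).
print("grass:  ", outgoing_fluxes(305.0, 800.0, albedo=0.21, emissivity=0.95))
print("asphalt:", outgoing_fluxes(320.0, 800.0, albedo=0.12, emissivity=0.95))
```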
Abstract:
Let E/Q be an elliptic curve and p a rational prime of good ordinary reduction. For every imaginary quadratic field K/Q satisfying the Heegner hypothesis for E we have a corresponding line in E(K) ⊗ Q_p, known as a shadow line. When E/Q has analytic rank 2 and E/K has analytic rank 3, shadow lines are expected to lie in E(Q) ⊗ Q_p. If, in addition, p splits in K/Q, then shadow lines can be determined using the anticyclotomic p-adic height pairing. We develop an algorithm to compute anticyclotomic p-adic heights which we then use to provide an algorithm to compute shadow lines. We conclude by illustrating these algorithms in a collection of examples.
Abstract:
The stratospheric mean-meridional circulation (MMC) and eddy mixing are compared among six meteorological reanalysis data sets: NCEP/NCAR, NCEP-CFSR, ERA-40, ERA-Interim, JRA-25, and JRA-55 for the period 1979–2012. The reanalysis data sets produced using advanced systems (i.e., NCEP-CFSR, ERA-Interim, and JRA-55) generally reveal a weaker MMC in the Northern Hemisphere (NH) compared with those produced using older systems (i.e., NCEP/NCAR, ERA-40, and JRA-25). The mean mixing strength differs considerably among the data products. In the NH lower stratosphere, the contribution of planetary-scale mixing is larger in the new data sets than in the old data sets, whereas that of small-scale mixing is weaker in the new data sets. Conventional data assimilation techniques introduce analysis increments without maintaining physical balance, which may have caused an overly strong MMC and spurious small-scale eddies in the old data sets. At the NH mid-latitudes, only ERA-Interim reveals a weakening MMC trend in the deep branch of the Brewer–Dobson circulation (BDC). The relative importance of the eddy mixing compared with the mean-meridional transport in the subtropical lower stratosphere shows increasing trends in ERA-Interim and JRA-55; this together with the weakened MMC in the deep branch may imply an increasing age-of-air (AoA) in the NH middle stratosphere in ERA-Interim. Overall, discrepancies between the different variables and trends therein as derived from the different reanalyses are still relatively large, suggesting that more investment in these products is needed in order to obtain a consolidated picture of observed changes in the BDC and the mechanisms that drive them.