982 results for Statistics of extremes
Abstract:
The statistics of cloud-base vertical velocity simulated by the non-hydrostatic mesoscale model AROME are compared with Cloudnet remote sensing observations at two locations: the ARM SGP site in Central Oklahoma, and the DWD observatory at Lindenberg, Germany. The results show that, as expected, AROME significantly underestimates the variability of vertical velocity at cloud base compared to observations at their nominal resolution; the standard deviation of vertical velocity in the model is typically 4-6 times smaller than observed, and even more so during the winter at Lindenberg. Averaging the observations to the horizontal scale corresponding to the physical grid spacing of AROME (2.5 km) explains 70-80% of the underestimation by the model. Further averaging of the observations in the horizontal is required to match the model values for the standard deviation in vertical velocity. This indicates an effective horizontal resolution for the AROME model of at least 4 times the physically-defined grid spacing. The results illustrate the need for special treatment of sub-grid scale variability of vertical velocities in kilometer-scale atmospheric models, if processes such as aerosol-cloud interactions are to be included in the future.
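A minimal sketch of the kind of coarse-graining comparison described in this abstract, under invented assumptions: a high-resolution cloud-base vertical-velocity series `w_obs` is taken to be sampled at a fixed along-wind spacing, block-averaged to the 2.5 km grid spacing and to coarser multiples, and the standard deviation recomputed at each scale. All names, the 100 m spacing and the synthetic data are placeholders, not values from the paper.

```python
import numpy as np

def block_average(w, samples_per_block):
    """Average consecutive samples into non-overlapping blocks."""
    n = (len(w) // samples_per_block) * samples_per_block
    return w[:n].reshape(-1, samples_per_block).mean(axis=1)

# Hypothetical example: w_obs sampled every 100 m along the wind, so 25 samples
# correspond to AROME's 2.5 km physical grid spacing.
rng = np.random.default_rng(0)
w_obs = rng.normal(0.0, 1.0, size=100_000)   # stand-in for observed cloud-base w (m/s)
dx = 100.0                                    # assumed along-wind sample spacing (m)

for scale_m in (2_500, 5_000, 10_000):        # 1x, 2x, 4x the physical grid spacing
    w_avg = block_average(w_obs, int(scale_m / dx))
    print(f"{scale_m / 1000:>5.1f} km: std(w) = {w_avg.std():.3f} m/s")
```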
Abstract:
This paper examines two hydrochemical time-series derived from stream samples taken in the Upper Hafren catchment, Plynlimon, Wales. One time-series comprises data collected at 7-hour intervals over 22 months (Neal et al., submitted, this issue), while the other is based on weekly sampling over 20 years. A subset of determinands (aluminium, calcium, chloride, conductivity, dissolved organic carbon, iron, nitrate, pH, silicon and sulphate) is examined within a framework of non-stationary time-series analysis to identify determinand trends, seasonality and short-term dynamics. The results demonstrate that both long-term and high-frequency monitoring provide valuable and unique insights into the hydrochemistry of a catchment. The long-term data allowed analysis of long-term trends, demonstrating continued increases in DOC concentrations accompanied by declining SO4 concentrations within the stream, and provided new insights into the changing amplitude and phase of the seasonality of determinands such as DOC and Al. Additionally, these data proved invaluable for placing the short-term variability seen in the high-frequency data in context. The 7-hour data highlighted complex diurnal cycles for NO3, Ca and Fe, with cycles displaying changes in phase and amplitude on a seasonal basis. The high-frequency data also demonstrated the need to consider the impact that the time of sample collection can have on the summary statistics of the data, and showed that sampling during the hours of darkness provides additional hydrochemical information for determinands which exhibit pronounced diurnal variability. Moving forward, this research demonstrates the need for both long-term and high-frequency monitoring to facilitate a full and accurate understanding of catchment hydrochemical dynamics.
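The paper's non-stationary framework is not specified in the abstract; purely as an illustration, a seasonal-trend decomposition of a weekly determinand series separates the long-term trend, the annual cycle and short-term variability. The synthetic "DOC" series and its parameters below are invented stand-ins, and STL is only one possible choice of decomposition, not necessarily the authors' method.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.seasonal import STL

# Synthetic weekly series standing in for a 20-year DOC record (mg/L); values are invented.
rng = np.random.default_rng(1)
t = pd.date_range("1990-01-01", periods=20 * 52, freq="W")
doc = (2.0 + 0.02 * np.arange(len(t)) / 52                   # slow upward trend
       + 0.5 * np.sin(2 * np.pi * np.arange(len(t)) / 52)    # annual cycle
       + rng.normal(0, 0.2, len(t)))                         # short-term variability
series = pd.Series(doc, index=t)

# STL separates trend, seasonal and residual components; period=52 for weekly data.
result = STL(series, period=52, robust=True).fit()
print("net trend change over record:", result.trend.iloc[-1] - result.trend.iloc[0])
```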
Abstract:
Increasingly, corporate occupiers seek more flexible ways of meeting their accommodation needs. One consequence of this process has been the growth of the executive suite, serviced office or business centre market. This paper, the final report of a research project funded by the Real Estate Research Institute, focuses upon the geographical distribution of business centers offering executive suites within the US. After a brief review of the development of the market, the paper examines the availability of data, provides basic descriptive statistics of the distribution of executive suites by state and by metropolitan statistical area and then attempts to model the distribution using demographic and socio-economic data at MSA level. The distribution reflects employment in key growth sectors and the position of the MSA in the urban hierarchy. An appendix presents a preliminary view of the global distribution of suites.
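The abstract does not state the model form used at MSA level; as a hedged illustration only, a count regression of the number of executive suites per MSA on demographic and socio-economic covariates could look like the following. All variable names and data are hypothetical.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

# Hypothetical MSA-level data: suite counts and two invented socio-economic covariates.
rng = np.random.default_rng(2)
n_msa = 300
df = pd.DataFrame({
    "service_employment": rng.lognormal(11, 1, n_msa),
    "population": rng.lognormal(13, 1, n_msa),
})
df["suites"] = rng.poisson(
    np.exp(-8 + 0.5 * np.log(df["service_employment"]) + 0.3 * np.log(df["population"]))
)

# Poisson regression of suite counts on log covariates.
X = sm.add_constant(np.log(df[["service_employment", "population"]]))
model = sm.GLM(df["suites"], X, family=sm.families.Poisson()).fit()
print(model.summary())
```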
Abstract:
A direct method is presented for determining the uncertainty in reservoir pressure, flow, and net present value (NPV) using the time-dependent, one phase, two- or three-dimensional equations of flow through a porous medium. The uncertainty in the solution is modelled as a probability distribution function and is computed from given statistical data for input parameters such as permeability. The method generates an expansion for the mean of the pressure about a deterministic solution to the system equations using a perturbation to the mean of the input parameters. Hierarchical equations that define approximations to the mean solution at each point and to the field covariance of the pressure are developed and solved numerically. The procedure is then used to find the statistics of the flow and the risked value of the field, defined by the NPV, for a given development scenario. This method involves only one (albeit complicated) solution of the equations and contrasts with the more usual Monte-Carlo approach where many such solutions are required. The procedure is applied easily to other physical systems modelled by linear or nonlinear partial differential equations with uncertain data.
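The hierarchical equations themselves are not reproduced in the abstract; the following is only a generic second-order perturbation expansion of the kind described, with $p$ the pressure, $k$ the uncertain input (e.g. permeability), $\bar{k}$ its mean and $k' = k - \bar{k}$ the fluctuation. It is a sketch of the idea, not the paper's exact hierarchy.

```latex
\begin{align}
  p(k) &\approx p(\bar{k})
        + \sum_i \frac{\partial p}{\partial k_i}\bigg|_{\bar{k}} k_i'
        + \frac{1}{2}\sum_{i,j} \frac{\partial^2 p}{\partial k_i \partial k_j}\bigg|_{\bar{k}} k_i' k_j' ,\\
  \mathbb{E}[p] &\approx p(\bar{k})
        + \frac{1}{2}\sum_{i,j} \frac{\partial^2 p}{\partial k_i \partial k_j}\bigg|_{\bar{k}}
          \operatorname{Cov}(k_i, k_j),\\
  \operatorname{Cov}\!\big(p(x), p(y)\big) &\approx
        \sum_{i,j} \frac{\partial p(x)}{\partial k_i}\bigg|_{\bar{k}}
                   \frac{\partial p(y)}{\partial k_j}\bigg|_{\bar{k}}
          \operatorname{Cov}(k_i, k_j).
\end{align}
```

The single (albeit more complicated) solve referred to in the abstract replaces the many forward solves a Monte-Carlo estimate of $\mathbb{E}[p]$ and $\operatorname{Cov}(p)$ would require.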
Abstract:
An evaluation is undertaken of the statistics of daily precipitation as simulated by five regional climate models using comprehensive observations in the region of the European Alps. Four limited-area models and one variable-resolution global model are considered, all with a grid spacing of 50 km. The 15-year integrations were forced from reanalyses and observed sea surface temperature and sea ice (the global model from the sea surface only). The observational reference is based on 6400 rain gauge records (10–50 stations per grid box). Evaluation statistics encompass mean precipitation, wet-day frequency, precipitation intensity, and quantiles of the frequency distribution. For mean precipitation, the models reproduce the characteristics of the annual cycle and the spatial distribution. The domain-mean bias varies between −23% and +3% in winter and between −27% and −5% in summer. Larger errors are found for other statistics. In summer, all models underestimate precipitation intensity (by 16–42%) and the frequency of heavy events is too low. This bias reflects overly dry summer mean conditions in three of the models, while in the other two it is partly compensated by too many low-intensity events. Similar intermodel differences are found for other European subregions. Interestingly, the model errors are very similar between the two models with the same dynamical core (but different parameterizations), and they differ considerably between the two models with similar parameterizations (but different dynamics). Despite considerable biases, the models reproduce prominent mesoscale features of heavy precipitation, which is a promising result for their use in climate change downscaling over complex topography.
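A minimal sketch of the evaluation statistics named above (mean precipitation, wet-day frequency, wet-day intensity, and distribution quantiles), computed here from a synthetic daily series; the 1 mm wet-day threshold and the gamma-distributed toy data are assumptions for illustration, not the paper's definitions.

```python
import numpy as np

rng = np.random.default_rng(3)
# Synthetic daily precipitation (mm/day) standing in for one grid box over 15 years.
precip = rng.gamma(shape=0.6, scale=6.0, size=15 * 365) * (rng.random(15 * 365) < 0.45)

wet_threshold = 1.0                       # assumed wet-day definition (mm/day)
wet = precip[precip >= wet_threshold]

stats = {
    "mean precipitation (mm/day)": precip.mean(),
    "wet-day frequency": len(wet) / len(precip),
    "wet-day intensity (mm/day)": wet.mean(),
    "q90 of wet days (mm/day)": np.quantile(wet, 0.90),
    "q99 of wet days (mm/day)": np.quantile(wet, 0.99),
}
for name, value in stats.items():
    print(f"{name:>30s}: {value:.2f}")
```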
Abstract:
The impending threat of global climate change and its regional manifestations is among the most important and urgent problems facing humanity. Society needs accurate and reliable estimates of changes in the probability of regional weather variations to develop science-based adaptation and mitigation strategies. Recent advances in weather prediction and in our understanding and ability to model the climate system suggest that it is both necessary and possible to revolutionize climate prediction to meet these societal needs. However, the scientific workforce and the computational capability required to bring about such a revolution are not available in any single nation. Motivated by the success of internationally funded infrastructure in other areas of science, this paper argues that, because of the complexity of the climate system, and because the regional manifestations of climate change are mainly through changes in the statistics of regional weather variations, the scientific and computational requirements to predict its behavior reliably are so enormous that the nations of the world should create a small number of multinational high-performance computing facilities dedicated to the grand challenges of developing the capabilities to predict climate variability and change on both global and regional scales over the coming decades. Such facilities will play a key role in the development of next-generation climate models, build global capacity in climate research, nurture a highly trained workforce, and engage the global user community, policy-makers, and stakeholders. We recommend the creation of a small number of multinational facilities with computer capability at each facility of about 20 petaflops in the near term, about 200 petaflops within five years, and 1 exaflop by the end of the next decade. Each facility should have sufficient scientific workforce to develop and maintain the software and data analysis infrastructure. Such facilities will make it possible to determine what horizontal and vertical resolution in atmospheric and ocean models is necessary for more confident predictions at the regional and local level. Current limits on computing power have placed severe constraints on such an investigation, which is now badly needed. These facilities will also provide the world's scientists with the computational laboratories for fundamental research on weather–climate interactions using 1-km resolution models and on atmospheric, terrestrial, cryospheric, and oceanic processes at even finer scales. Each facility should have enabling infrastructure including hardware, software, and data analysis support, and scientific capacity to interact with the national centers and other visitors. This will accelerate our understanding of how the climate system works and how to model it. It will ultimately enable the climate community to provide society with climate predictions based on our best knowledge of science and the most advanced technology.
Abstract:
This paper proposes a method for describing the distribution of observed temperatures on any day of the year such that the distribution, and summary statistics of interest derived from the distribution, vary smoothly through the year. The method removes the noise inherent in calculating summary statistics directly from the data, thus easing comparisons of distributions and summary statistics between different periods. The method is demonstrated using daily effective temperatures (DET) derived from observations of temperature and wind speed at De Bilt, Holland. Distributions and summary statistics are obtained for 1985 to 2009 and compared to the period 1904–1984. A two-stage process first obtains parameters of a theoretical probability distribution, in this case the generalized extreme value (GEV) distribution, which describes the distribution of DET on any day of the year. Second, linear models describe seasonal variation in the parameters. Model predictions provide parameters of the GEV distribution, and therefore summary statistics, that vary smoothly through the year. There is evidence of an increasing mean temperature, a decrease in the variability of temperatures mainly in the winter, and more positive skew (more warm days) in the summer. In the winter, the 2% point, the value below which 2% of observations are expected to fall, has risen by 1.2 °C; in the summer the 98% point has risen by 0.8 °C. Medians have risen by 1.1 and 0.9 °C in winter and summer, respectively. The method can be used to describe distributions of future climate projections and other climate variables. Further extensions to the methodology are suggested.
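A sketch of the two-stage procedure described above, under simplifying assumptions: a GEV is fitted to daily effective temperatures in a window around each sampled day of year, and a first-harmonic linear model is then fitted to the resulting location parameters so they vary smoothly through the year. The synthetic data, the ±10-day window and the single harmonic are invented choices, not the paper's.

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(4)
n_years = 30
day = np.tile(np.arange(365), n_years)
# Synthetic daily effective temperatures (deg C) with an annual cycle, standing in for DET.
det = 10 + 8 * np.sin(2 * np.pi * (day - 100) / 365) + rng.gumbel(0, 2, day.size)

# Stage 1: fit a GEV to the data in a +/-10-day window around each sampled day of year.
sampled_days = np.arange(0, 365, 15)
loc_hat = []
for d in sampled_days:
    circular_dist = np.minimum(np.abs(day - d), 365 - np.abs(day - d))
    shape, loc, scale = genextreme.fit(det[circular_dist <= 10])
    loc_hat.append(loc)

# Stage 2: linear model with a single annual harmonic for the location parameter.
X = np.column_stack([np.ones(sampled_days.size),
                     np.sin(2 * np.pi * sampled_days / 365),
                     np.cos(2 * np.pi * sampled_days / 365)])
coef, *_ = np.linalg.lstsq(X, np.array(loc_hat), rcond=None)
smooth_loc = X @ coef   # smoothly varying GEV location parameter at the sampled days
print(coef)
```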
Abstract:
Mesospheric temperature inversions are well established observed phenomena, yet their properties remain the subject of ongoing research. Comparisons between Rayleigh-scatter lidar temperature measurements obtained by the University of Western Ontario's Purple Crow Lidar (42.9°N, 81.4°W) and the Canadian Middle Atmosphere Model are used to quantify the statistics of inversions. In both model and measurements, inversions occur most frequently in the winter and exhibit an average amplitude of ∼10 K. The model exhibits virtually no inversions in the summer, while the measurements show a strongly reduced frequency of occurrence with an amplitude about half that in the winter. A simple theory of mesospheric inversions based on wave saturation is developed, with no adjustable parameters. It predicts that the environmental lapse rate must be less than half the adiabatic lapse rate for an inversion to form, and it predicts the ratio of the inversion amplitude and thickness as a function of environmental lapse rate. Comparison of this prediction to the actual amplitude/thickness ratio using the lidar measurements shows good agreement between theory and measurements.
Abstract:
A novel diagnostic tool is presented, based on polar-cap temperature anomalies, for visualizing daily variability of the Arctic stratospheric polar vortex over multiple decades. This visualization illustrates the ubiquity of extended-time-scale recoveries from stratospheric sudden warmings, termed here polar-night jet oscillation (PJO) events. These are characterized by an anomalously warm polar lower stratosphere that persists for several months. Following the initial warming, a cold anomaly forms in the middle stratosphere, as does an anomalously high stratopause, both of which descend while the lower-stratospheric anomaly persists. These events are characterized in four datasets: Microwave Limb Sounder (MLS) temperature observations; the 40-yr ECMWF Re-Analysis (ERA-40) and Modern Era Retrospective Analysis for Research and Applications (MERRA) reanalyses; and an ensemble of three 150-yr simulations from the Canadian Middle Atmosphere Model. The statistics of PJO events in the model are found to agree very closely with those of the observations and reanalyses. The time scale for the recovery of the polar vortex following sudden warmings correlates strongly with the depth to which the warming initially descends. PJO events occur following roughly half of all major sudden warmings and are associated with an extended period of suppressed wave-activity fluxes entering the polar vortex. They follow vortex splits more frequently than they do vortex displacements. They are also related to weak vortex events as identified by the northern annular mode; in particular, those weak vortex events followed by a PJO event show a stronger tropospheric response. The long time scales, predominantly radiative dynamics, and tropospheric influence of PJO events suggest that they represent an important source of conditional skill in seasonal forecasting.
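A sketch of a polar-cap temperature-anomaly diagnostic of the kind described above: area-weighted temperature averaged over the polar cap as a function of time and pressure, expressed as an anomaly from a daily climatology. The 60°N cap boundary, the synthetic data and the single-winter climatology are assumptions for illustration only.

```python
import numpy as np
import pandas as pd
import xarray as xr

rng = np.random.default_rng(9)
# Synthetic temperature field T(time, pressure, lat, lon) standing in for reanalysis data.
time = pd.date_range("2005-11-01", "2006-03-31", freq="D")
pressure = np.array([100.0, 30.0, 10.0, 1.0])              # hPa
lat = np.arange(50, 91, 10)
lon = np.arange(0, 360, 60)
t = xr.DataArray(
    230 + rng.normal(0, 5, (time.size, pressure.size, lat.size, lon.size)),
    coords={"time": time, "pressure": pressure, "lat": lat, "lon": lon},
    dims=("time", "pressure", "lat", "lon"),
    name="T",
)

# Area-weighted average over the polar cap (here assumed to be poleward of 60N).
cap = t.sel(lat=slice(60, 90))                 # assumes latitude is ascending
cap_mean = cap.weighted(np.cos(np.deg2rad(cap["lat"]))).mean(dim=("lat", "lon"))

# Anomaly with respect to the daily climatology of this record, vs. time and pressure.
clim = cap_mean.groupby("time.dayofyear").mean("time")
anom = cap_mean.groupby("time.dayofyear") - clim
print(anom.sel(pressure=100.0).max().item())
```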
Abstract:
Many modern statistical applications involve inference for complex stochastic models, where it is easy to simulate from the models, but impossible to calculate likelihoods. Approximate Bayesian computation (ABC) is a method of inference for such models. It replaces calculation of the likelihood by a step which involves simulating artificial data for different parameter values, and comparing summary statistics of the simulated data with summary statistics of the observed data. Here we show how to construct appropriate summary statistics for ABC in a semi-automatic manner. We aim for summary statistics which will enable inference about certain parameters of interest to be as accurate as possible. Theoretical results show that optimal summary statistics are the posterior means of the parameters. Although these cannot be calculated analytically, we use an extra stage of simulation to estimate how the posterior means vary as a function of the data; and we then use these estimates of our summary statistics within ABC. Empirical results show that our approach is a robust method for choosing summary statistics that can result in substantially more accurate ABC analyses than the ad hoc choices of summary statistics that have been proposed in the literature. We also demonstrate advantages over two alternative methods of simulation-based inference.
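A minimal sketch of the semi-automatic construction for a toy model in which the data are i.i.d. normal draws with unknown mean: a pilot set of simulations is used to regress the parameter on functions of the data, approximating the posterior mean, and the fitted linear predictor is then used as the summary statistic in rejection ABC. The model, prior, candidate data functions and tolerance are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

def simulate(theta, n=20):
    """Toy model: n i.i.d. N(theta, 1) observations."""
    return rng.normal(theta, 1.0, size=n)

def features(x):
    """Candidate data functions used in the pilot regression."""
    return np.array([1.0, x.mean(), np.median(x), x.std()])

# Observed data generated from an unknown theta (here 2.0, hidden from the inference).
x_obs = simulate(2.0)

# Pilot stage: regress theta on features of simulated data to approximate E[theta | x].
theta_pilot = rng.uniform(-5, 5, size=2000)             # draws from a uniform prior
F = np.array([features(simulate(t)) for t in theta_pilot])
beta, *_ = np.linalg.lstsq(F, theta_pilot, rcond=None)

def summary(x):
    """Semi-automatic summary statistic: fitted linear predictor of the posterior mean."""
    return features(x) @ beta

s_obs = summary(x_obs)

# Rejection ABC using the constructed summary.
theta_prop = rng.uniform(-5, 5, size=20000)
s_sim = np.array([summary(simulate(t)) for t in theta_prop])
accepted = theta_prop[np.abs(s_sim - s_obs) < 0.1]       # tolerance is arbitrary
print(accepted.mean(), accepted.std())
```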
Abstract:
Precipitation forecast data from the ERA-Interim reanalysis (33 years) are evaluated using the daily England and Wales Precipitation (EWP) observations obtained from a rain gauge network. Observed and reanalysis daily precipitation data are both described well by Weibull distributions with indistinguishable shapes but different scale parameters, such that the reanalysis underestimates the observations by 22% on average. The correlation between the observed and ERA-Interim time series of regional, daily precipitation is 0.91. ERA-Interim also captures the statistics of extreme precipitation, including a slightly lower likelihood of the heaviest precipitation events (>15 mm day⁻¹ for the regional average) than indicated by the Weibull fit. ERA-Interim is also closer to EWP for the high precipitation events. Since these carry weight in longer accumulations, a smaller underestimation of 19% is found for monthly mean precipitation. The partition between convective and stratiform precipitation in the ERA-Interim forecast is also examined. In summer both components contribute equally to the total precipitation amount, while in winter the stratiform precipitation is approximately double the convective. These results are expected to be relevant to other regions with low orography on the coast of a continent at the downstream end of mid-latitude storm tracks.
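A sketch of the Weibull comparison described above: fit a two-parameter Weibull (location fixed at zero) to wet-day amounts from each series and compare the shape and scale parameters. The two series here are synthetic stand-ins, and the 0.1 mm wet-day threshold is an assumption, not the paper's.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(6)
# Synthetic stand-ins for daily regional precipitation (mm/day): an "observed" series and
# a reanalysis-like series with the same shape but a smaller scale parameter.
obs = weibull_min.rvs(0.8, loc=0, scale=4.0, size=12000, random_state=rng)
rea = weibull_min.rvs(0.8, loc=0, scale=3.1, size=12000, random_state=rng)

def fit_wet_days(x, threshold=0.1):
    """Fit a two-parameter Weibull to wet-day amounts (location fixed at zero)."""
    shape, _, scale = weibull_min.fit(x[x > threshold], floc=0)
    return shape, scale

shape_obs, scale_obs = fit_wet_days(obs)
shape_rea, scale_rea = fit_wet_days(rea)
print(f"shape: obs {shape_obs:.2f} vs reanalysis {shape_rea:.2f}")
print(f"scale: obs {scale_obs:.2f} vs reanalysis {scale_rea:.2f} "
      f"({100 * (1 - scale_rea / scale_obs):.0f}% lower)")
```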
Abstract:
During the winter of 2013/14, much of the UK experienced repeated intense rainfall events and flooding. This had a considerable impact on property and transport infrastructure. A key question is whether the burning of fossil fuels is changing the frequency of extremes, and if so to what extent. We assess the scale of the winter flooding before reviewing a broad range of Earth system drivers affecting UK rainfall. Some drivers can be potentially disregarded for these specific storms whereas others are likely to have increased their risk of occurrence. We discuss the requirements of hydrological models to transform rainfall into river flows and flooding. To determine any general changing flood risk, we argue that accurate modelling needs to capture evolving understanding of UK rainfall interactions with a broad set of factors. This includes changes to multiscale atmospheric, oceanic, solar and sea-ice features, and land-use and demographics. Ensembles of such model simulations may be needed to build probability distributions of extremes for both pre-industrial and contemporary concentration levels of atmospheric greenhouse gases.
Abstract:
Upscaling ecological information to larger scales in space and downscaling remote sensing observations or model simulations to finer scales remain grand challenges in Earth system science. Downscaling often involves inferring subgrid information from coarse-scale data, and such ill-posed problems are classically addressed using regularization. Here, we apply two-dimensional Tikhonov Regularization (2DTR) to simulate subgrid surface patterns for ecological applications. Specifically, we test the ability of 2DTR to simulate the spatial statistics of high-resolution (4 m) remote sensing observations of the normalized difference vegetation index (NDVI) in a tundra landscape. We find that the 2DTR approach as applied here can capture the major mode of spatial variability of the high-resolution information, but not multiple modes of spatial variability, and that the Lagrange multiplier (γ) used to impose the condition of smoothness across space is related to the range of the experimental semivariogram. We used observed and 2DTR-simulated maps of NDVI to estimate landscape-level leaf area index (LAI) and gross primary productivity (GPP). NDVI maps simulated using a γ value that approximates the range of observed NDVI result in a landscape-level GPP estimate that differs by ca 2% from that obtained using observed NDVI. Following findings that GPP per unit LAI is lower near vegetation patch edges, we simulated vegetation patch edges using multiple approaches and found that simulated GPP declined by up to 12% as a result. 2DTR can generate random landscapes rapidly and can be applied to disaggregate ecological information and to compare spatial observations against simulated landscapes.
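A minimal sketch of the kind of 2D Tikhonov regularization described above: recover a fine-grid field x from block-averaged coarse observations y by minimizing ||Ax − y||² + γ||Lx||², with A a block-averaging operator and L a discrete Laplacian enforcing smoothness. The grid sizes, the synthetic field and the value of γ are arbitrary, and this is not the paper's exact formulation.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

def averaging_operator(n_fine, block):
    """Sparse operator that block-averages an n_fine x n_fine field to a coarse grid."""
    n_coarse = n_fine // block
    a1d = sp.kron(sp.eye(n_coarse), np.full((1, block), 1.0 / block))
    return sp.kron(a1d, a1d)      # 2-D averaging = row averaging (x) column averaging

def laplacian(n):
    """Sparse 2-D Laplacian built from a 1-D second-difference stencil in each direction."""
    d1 = sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    return sp.kron(sp.eye(n), d1) + sp.kron(d1, sp.eye(n))

n_fine, block, gamma = 64, 8, 10.0
A = averaging_operator(n_fine, block)
L = laplacian(n_fine)

# Synthetic "coarse observations": block averages of an invented smooth NDVI-like field.
x_true = np.sin(np.linspace(0, 3 * np.pi, n_fine))[:, None] * \
         np.cos(np.linspace(0, 2 * np.pi, n_fine))[None, :]
y = A @ x_true.ravel()

# Tikhonov solution of the ill-posed downscaling problem: (A^T A + gamma L^T L) x = A^T y.
x_hat = spsolve((A.T @ A + gamma * L.T @ L).tocsc(), A.T @ y)
print(np.abs(x_hat.reshape(n_fine, n_fine) - x_true).mean())
```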
Abstract:
The England and Wales precipitation (EWP) dataset is a homogeneous time series of daily accumulations from 1931 to 2014, composed from rain gauge observations spanning the region. The daily regional-average precipitation statistics are shown to be well described by a Weibull distribution, which is used to define extremes in terms of percentiles. Computed trends in annual and seasonal precipitation are sensitive to the period chosen, due to large variability on interannual and decadal timescales. Atmospheric circulation patterns associated with seasonal precipitation variability are identified. These patterns project onto known leading modes of variability, all of which involve displacements of the jet stream and storm-track over the eastern Atlantic. The intensity of daily precipitation for each calendar season is investigated by partitioning all observations into eight intensity categories contributing equally to the total precipitation in the dataset. Contrary to previous results based on shorter periods, no significant trends of the most intense categories are found between 1931 and 2014. The regional-average precipitation is found to share statistical properties common to the majority of individual stations across England and Wales used in previous studies. Statistics of the EWP data are examined for multi-day accumulations up to 10 days, which are more relevant for river flooding. Four recent years (2000, 2007, 2008 and 2012) have a greater number of extreme events in the 3- and 5-day accumulations than any previous year in the record. It is the duration of precipitation events in these years that is remarkable, rather than the magnitude of the daily accumulations.
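A sketch of the multi-day accumulation analysis described above, on a synthetic daily series standing in for EWP: rolling 3-, 5- and 10-day sums, with extremes counted per year above a high percentile of each accumulation. The 99th-percentile definition of "extreme" and the toy Weibull-like series are assumptions, not the paper's.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
dates = pd.date_range("1931-01-01", "2014-12-31", freq="D")
# Synthetic daily regional precipitation (mm/day) standing in for the EWP series.
ewp = pd.Series(rng.weibull(0.85, len(dates)) * 3.0, index=dates)

for window in (3, 5, 10):
    accum = ewp.rolling(window).sum().dropna()
    threshold = accum.quantile(0.99)                 # assumed definition of "extreme"
    counts = (accum > threshold).groupby(accum.index.year).sum()
    print(f"{window}-day accumulations, years with most extremes:")
    print(counts.sort_values(ascending=False).head(4))
```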
Abstract:
The mechanisms underlying the occurrence of temperature extremes in Iberia are analysed considering a Lagrangian perspective of the atmospheric flow, using 6-hourly ERA-Interim reanalysis data for the years 1979–2012. Daily 2-m minimum temperatures below the 1st percentile and 2-m maximum temperatures above the 99th percentile at each grid point over Iberia are selected separately for winter and summer. Four categories of extremes are analysed using 10-d backward trajectories initialized at the extreme temperature grid points close to the surface: winter cold (WCE) and warm extremes (WWE), and summer cold (SCE) and warm extremes (SWE). Air masses leading to temperature extremes are first transported from the North Atlantic towards Europe for all categories. While there is a clear relation to large-scale circulation patterns in winter, the Iberian thermal low is important in summer. Along the trajectories, air mass characteristics are significantly modified through adiabatic warming (air parcel descent), upper-air radiative cooling and near-surface warming (surface heat fluxes and radiation). High residence times over continental areas, such as over northern-central Europe for WCE and, to a lesser extent, over Iberia for SWE, significantly enhance these air mass modifications. Near-surface diabatic warming is particularly striking for SWE. WCE and SWE are responsible for the most extreme conditions in a given year. For WWE and SCE, strong temperature advection associated with substantial meridional air mass transport is the main driving mechanism, accompanied by comparatively minor changes in the air mass properties. These results permit a better understanding of the mechanisms leading to temperature extremes in Iberia.
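A sketch of the percentile-based selection of extreme days described above, on a synthetic (time, lat, lon) temperature array: per grid point, days below the 1st percentile of minimum temperature or above the 99th percentile of maximum temperature are flagged. The data are invented and the seasonal splitting used in the paper is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(8)
# Synthetic 2-m minimum and maximum temperatures (deg C) on a small (time, lat, lon) grid.
ntime, nlat, nlon = 3000, 10, 12
tmin = rng.normal(5, 6, size=(ntime, nlat, nlon))
tmax = tmin + np.abs(rng.normal(8, 2, size=(ntime, nlat, nlon)))

# Per-grid-point percentile thresholds along the time axis.
p01 = np.percentile(tmin, 1, axis=0)
p99 = np.percentile(tmax, 99, axis=0)

cold_extreme = tmin < p01      # boolean mask of cold-extreme days at each grid point
warm_extreme = tmax > p99      # boolean mask of warm-extreme days at each grid point
print(cold_extreme.mean(), warm_extreme.mean())   # each should be close to 0.01
```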