903 results for Bayesian inference on precipitation
Abstract:
The relationship between winter (DJF) rainfall over Portugal and large-scale atmospheric circulation variability is addressed. It is shown that the poles of the sea level pressure (SLP) field variability associated with rainfall variability are shifted about 15° northward with respect to those used in standard definitions of the North Atlantic Oscillation (NAO). It is suggested that the influence of the NAO on rainfall arises dominantly from the associated advection of humidity from the Atlantic Ocean. Rainfall is also related to different aspects of baroclinic wave activity, the variability of which is in turn largely dependent on the NAO.
A negative NAO index (leading to increased westerly surface geostrophic winds into Portugal) is associated with an increased number of deep (ps<980 hPa) surface lows over the central North Atlantic and of intermediate (980
Effects of temporal resolution of input precipitation on the performance of hydrological forecasting
Abstract:
Flood prediction systems rely on good quality precipitation input data and forecasts to drive hydrological models. Most precipitation data come from daily stations with good spatial coverage. However, some flood events occur on sub-daily time scales, and flood prediction systems could benefit from using models calibrated on the same time scale. This study compares precipitation data aggregated from hourly stations (HP) and data disaggregated from daily stations (DP) with 6-hourly forecasts from ECMWF over the period 1 October 2006–31 December 2009. The HP and DP data sets were then used to calibrate two hydrological models, LISFLOOD-RR and HBV, and the latter was used in a flood case study. The HP data set scored better than the DP data set when evaluated against the forecast for lead times up to 4 days. However, this advantage did not carry over directly to the hydrological modelling, where the models gave similar scores for simulated runoff with the two data sets. The flood forecasting study showed that both data sets gave similar hit rates, whereas the HP data set gave much smaller false alarm rates (FAR). This indicates that using sub-daily precipitation in the calibration and initialisation of hydrological models can improve flood forecasting.
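The HP/DP distinction can be illustrated with a minimal sketch (the synthetic rainfall series and the uniform disaggregation scheme below are assumptions for illustration; the study's actual disaggregation method is not described in the abstract):

```python
import numpy as np

rng = np.random.default_rng(3)

# Ten days of synthetic hourly rainfall (mm); a stand-in for gauge data.
hourly = rng.gamma(shape=0.3, scale=1.0, size=240)

# HP: aggregate hourly observations up to 6-hourly totals.
hp = hourly.reshape(-1, 6).sum(axis=1)

# DP: start from daily totals and disaggregate them uniformly into
# four equal 6-hour amounts (a deliberately simple assumed scheme).
daily = hourly.reshape(-1, 24).sum(axis=1)
dp = np.repeat(daily / 4.0, 4)

# Both series conserve the daily totals, but only HP preserves the
# sub-daily timing of the rainfall.
print(hp.shape == dp.shape, np.isclose(hp.sum(), dp.sum()))
```

The point of the comparison is visible even in this toy version: HP and DP agree on every daily total yet differ 6-hourly, which is exactly the information a sub-daily calibration can exploit.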
Abstract:
Catastrophe risk models used by the insurance industry are likely subject to significant uncertainty, but due to their proprietary nature and strict licensing conditions they are not available for experimentation. In addition, even if such experiments were conducted, these would not be repeatable by other researchers because commercial confidentiality issues prevent the details of proprietary catastrophe model structures from being described in public domain documents. However, such experimentation is urgently required to improve decision making in both insurance and reinsurance markets. In this paper we therefore construct our own catastrophe risk model for flooding in Dublin, Ireland, in order to assess the impact of typical precipitation data uncertainty on loss predictions. As we consider only a city region rather than a whole territory and have access to detailed data and computing resources typically unavailable to industry modellers, our model is significantly more detailed than most commercial products. The model consists of four components: a stochastic rainfall module, a hydrological and hydraulic flood hazard module, a vulnerability module, and a financial loss module. Using these we undertake a series of simulations to test the impact of driving the stochastic event generator with four different rainfall data sets: ground gauge data, gauge-corrected rainfall radar, meteorological reanalysis data (European Centre for Medium-Range Weather Forecasts Reanalysis-Interim; ERA-Interim) and a satellite rainfall product (the Climate Prediction Center morphing method; CMORPH). Catastrophe models are unusual because they use the upper three components of the modelling chain to generate a large synthetic database of unobserved and severe loss-driving events for which estimated losses are calculated.
We find the loss estimates to be more sensitive to uncertainties propagated from the driving precipitation data sets than to other uncertainties in the hazard and vulnerability modules, suggesting that the range of uncertainty within catastrophe model structures may be greater than commonly believed.
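The four-module chain can be caricatured in a few lines; every function and number below is an invented placeholder (this is not the Dublin model), but it shows how synthetic events propagate from rainfall through hazard and vulnerability to loss:

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy catastrophe-model chain over many synthetic events.
n_events = 100_000

# Stochastic rainfall module: heavy-tailed event rainfall (mm).
rain = rng.gamma(shape=2.0, scale=15.0, size=n_events)

# Hazard module: flood depth (m) once rainfall exceeds a threshold.
depth = np.clip((rain - 60.0) / 50.0, 0.0, None)

# Vulnerability module: damage ratio saturating with depth.
damage = 1.0 - np.exp(-2.0 * depth)

# Financial module: loss against an assumed insured portfolio value.
insured_value = 5e8  # EUR, invented for illustration
loss = damage * insured_value

# Exceedance-probability summary: e.g. the 1-in-100-event loss.
print(float(np.quantile(loss, 0.99)))
```

Swapping the `rain` array for a generator calibrated on a different rainfall data set, while holding the other modules fixed, is the kind of sensitivity experiment the paper performs at full scale.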
Abstract:
The asymmetries in the convective flows, current systems, and particle precipitation in the high-latitude dayside ionosphere which are related to the equatorial plane components of the interplanetary magnetic field (IMF) are discussed in relation to the results of several recent observational studies. It is argued that all of the effects reported to date which are ascribed to the y component of the IMF can be understood, at least qualitatively, in terms of a simple theoretical picture in which the effects result from the stresses exerted on the magnetosphere consequent on the interconnection of terrestrial and interplanetary fields. In particular, relaxation under the action of these stresses allows, in effect, a partial penetration of the IMF into the magnetospheric cavity, such that the sense of the expected asymmetry effects on closed field lines can be understood, to zeroth order, in terms of the “dipole plus uniform field” model. In particular, in response to IMF By, the dayside cusp should be displaced in longitude about noon in the same sense as By in the northern hemisphere, and in the opposite sense to By in the southern hemisphere, while simultaneously the auroral oval as a whole should be shifted in the dawn-dusk direction in the opposite sense with respect to By. These expected displacements are found to be consistent with recently published observations. Similar considerations lead to the suggestion that the auroral oval may also undergo displacements in the noon-midnight direction which are associated with the x component of the IMF. We show that a previously published study of the position of the auroral oval contains strong initial evidence for the existence of this effect. However, recent results on variations in the latitude of the cusp are more ambiguous. This topic therefore requires further study before definitive conclusions can be drawn.
Abstract:
We used a light-use efficiency model of photosynthesis coupled with a dynamic carbon allocation and tree-growth model to simulate annual growth of the gymnosperm Callitris columellaris in the semi-arid Great Western Woodlands, Western Australia, over the past 100 years. Parameter values were derived from independent observations except for sapwood specific respiration rate, fine-root turnover time, fine-root specific respiration rate and the ratio of fine-root mass to foliage area, which were estimated by Bayesian optimization. The model reproduced the general pattern of interannual variability in radial growth (tree-ring width), including the response to the shift in precipitation regimes that occurred in the 1960s. Simulated and observed responses to climate were consistent. Both showed a significant positive response of tree-ring width to total photosynthetically active radiation received and to the ratio of modeled actual to equilibrium evapotranspiration, and a significant negative response to vapour pressure deficit. However, the simulations showed an enhancement of radial growth in response to increasing atmospheric CO2 concentration ([CO2], in ppm) during recent decades that is not present in the observations. The discrepancy disappeared when the model was recalibrated on successive 30-year windows: the ratio of fine-root mass to foliage area then increased by 14% (from 0.127 to 0.144 kg C m-2) as [CO2] increased, while the other three estimated parameters remained constant. The absence of a signal of increasing [CO2] has been noted in many tree-ring records, despite the enhancement of photosynthetic rates and water-use efficiency resulting from increasing [CO2]. Our simulations suggest that this behaviour could be explained as a consequence of a shift towards below-ground carbon allocation.
Abstract:
Model projections of heavy precipitation and temperature extremes include large uncertainties. We demonstrate that the disagreement between individual simulations primarily arises from internal variability, whereas models agree remarkably well on the forced signal, i.e. the change in the absence of internal variability. Agreement is high on the spatial pattern of the forced heavy precipitation response, which shows an intensification over most land regions, in particular Eurasia and North America. The forced response of heavy precipitation is even more robust than that of annual mean precipitation. Likewise, models agree on the forced response pattern of hot extremes, which shows the greatest intensification over midlatitudinal land regions. Thus, confidence in the forced changes of temperature and precipitation extremes in response to a given warming is high. Although in reality internal variability will be superimposed on that pattern, it is the forced response that determines the changes in temperature and precipitation extremes from a risk perspective.
Abstract:
Satellite based top-of-atmosphere (TOA) and surface radiation budget observations are combined with mass corrected vertically integrated atmospheric energy divergence and tendency from reanalysis to infer the regional distribution of the TOA, atmospheric and surface energy budget terms over the globe. Hemispheric contrasts in the energy budget terms are used to determine the radiative and combined sensible and latent heat contributions to the cross-equatorial heat transports in the atmosphere (AHT_EQ) and ocean (OHT_EQ). The contrast in net atmospheric radiation implies an AHT_EQ from the northern hemisphere (NH) to the southern hemisphere (SH) (0.75 PW), while the hemispheric difference in sensible and latent heat implies an AHT_EQ in the opposite direction (0.51 PW), resulting in a net NH to SH AHT_EQ (0.24 PW). At the surface, the hemispheric contrast in the radiative component (0.95 PW) dominates, implying a 0.44 PW SH to NH OHT_EQ. Coupled Model Intercomparison Project phase 5 (CMIP5) models with excessive net downward surface radiation and surface-to-atmosphere sensible and latent heat transport in the SH relative to the NH exhibit anomalous northward AHT_EQ and overestimate SH tropical precipitation. The hemispheric bias in net surface radiative flux is due to too much longwave surface radiative cooling in the NH tropics in both clear and all-sky conditions and excessive shortwave surface radiation in the SH subtropics and extratropics due to an underestimation in reflection by clouds.
Abstract:
Climate models indicate a future wintertime precipitation reduction in the Mediterranean region, but there is large uncertainty in the amplitude of the projected change. We analyse CMIP5 climate model output to quantify the role of atmospheric circulation in the Mediterranean precipitation change. It is found that a simple circulation index, the 850 hPa zonal wind (U850) in North Africa, describes the year-to-year fluctuations in the area-averaged Mediterranean precipitation well, with positive (i.e. westerly) U850 anomalies in North Africa being associated with positive precipitation anomalies. Under climate change, U850 in North Africa and the Mediterranean precipitation are both projected to decrease, consistently with the relationship found in the inter-annual variability. This enables us to estimate that about 85% of the CMIP5 mean precipitation response and 80% of the variance in the inter-model spread are related to changes in the atmospheric circulation. In contrast, there is no significant correlation between the mean precipitation response and the global-mean surface warming across the models. It follows that the uncertainty in cold-season Mediterranean precipitation projections will not be narrowed unless the uncertainty in the atmospheric circulation response is reduced.
Abstract:
Approximate Bayesian computation (ABC) is a popular family of algorithms which perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values which produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and to accept when the difference between these is below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools, which provides methods for tuning ABC algorithms. This includes recent dimension reduction algorithms to tune the choice of summary statistics, and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.
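The accept/reject scheme described above can be sketched as follows (in Python rather than R, as a generic illustration; the toy Gaussian model, uniform prior and tolerance are assumptions, not part of abctools):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "observed" dataset; in practice this is the data whose
# generating parameter we want to infer (here, a mean near 3).
observed = rng.normal(3.0, 1.0, size=200)

def summary(x):
    """Reduce a dataset to a vector of summary statistics."""
    return np.array([x.mean(), x.std()])

s_obs = summary(observed)

def abc_rejection(n_draws=20000, epsilon=0.1):
    """Rejection ABC: keep prior draws whose simulated summaries
    fall within epsilon of the observed summaries."""
    accepted = []
    for _ in range(n_draws):
        mu = rng.uniform(-10.0, 10.0)          # draw from the prior
        sim = rng.normal(mu, 1.0, size=200)    # simulate from the model
        if np.linalg.norm(summary(sim) - s_obs) < epsilon:
            accepted.append(mu)
    return np.array(accepted)

posterior = abc_rejection()
print(len(posterior), posterior.mean())  # accepted draws concentrate near the true mean
```

The tuning problems abctools addresses are visible here: which statistics `summary` should return, and how small `epsilon` must be for the accepted sample to approximate the posterior without rejecting nearly everything.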
Abstract:
The potential impact of the abrupt 8.2 ka cold event on human demography, settlement patterns and culture in Europe and the Near East has emerged as a key theme in current discussion and debate. We test whether this event had an impact on the Mesolithic population of western Scotland, a case study located within the North Atlantic region where the environmental impact of the 8.2 ka event is likely to have been the most severe. By undertaking a Bayesian analysis of the radiocarbon record and using the number of activity events as a proxy for the size of the human population, we find evidence for a dramatic reduction in the Mesolithic population synchronous with the 8.2 ka event. We interpret this as reflecting the demographic collapse of a low density population that lacked the capability to adapt to the rapid onset of new environmental conditions. This impact of the 8.2 ka event in the North Atlantic region lends credence to the possibility of a similar impact on populations in Continental Europe and the Near East.
Abstract:
The distribution of masses for neutron stars is analysed using Bayesian statistical inference, evaluating the likelihood of the proposed Gaussian peaks by using 54 measured points obtained in a variety of systems. The results strongly suggest the existence of a bimodal distribution of the masses, with the first peak around 1.37 M☉ and a much wider second peak at 1.73 M☉. The results support earlier views related to the different evolutionary histories of the members of the first two peaks, which produce a natural separation (even if no attempt to 'label' the systems has been made here). They also accommodate the recent findings of ∼2 M☉ masses quite naturally. Finally, we explore the existence of a subgroup around 1.25 M☉, finding weak, if any, evidence for it. This recently claimed low-mass subgroup, possibly related to O-Mg-Ne core collapse events, has a monotonically decreasing likelihood and does not stand out clearly from the rest of the sample.
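The one-peak versus two-peak comparison can be sketched with synthetic masses (the 54 measured values are not reproduced in the abstract, so the data below are an invented stand-in with peaks near the quoted values; the paper's full Bayesian machinery is replaced here by a simple EM fit and a BIC comparison):

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented stand-in for the 54 neutron-star masses (solar masses),
# drawn from a bimodal mixture with peaks near 1.37 and 1.73.
n = 54
z = rng.random(n) < 0.6
masses = np.where(z, rng.normal(1.37, 0.08, n), rng.normal(1.73, 0.20, n))

def norm_pdf(x, mu, sd):
    return np.exp(-0.5 * ((x - mu) / sd) ** 2) / (sd * np.sqrt(2 * np.pi))

# One-Gaussian model: closed-form maximum likelihood.
ll1 = np.log(norm_pdf(masses, masses.mean(), masses.std())).sum()

# Two-Gaussian mixture fitted by a short EM loop.
w, mu, sd = 0.5, np.array([1.3, 1.8]), np.array([0.1, 0.1])
for _ in range(200):
    r1 = w * norm_pdf(masses, mu[0], sd[0])
    r2 = (1 - w) * norm_pdf(masses, mu[1], sd[1])
    g = r1 / (r1 + r2)                       # responsibilities (E-step)
    w = g.mean()                             # M-step updates
    mu = np.array([(g * masses).sum() / g.sum(),
                   ((1 - g) * masses).sum() / (1 - g).sum()])
    sd = np.sqrt([(g * (masses - mu[0]) ** 2).sum() / g.sum(),
                  ((1 - g) * (masses - mu[1]) ** 2).sum() / (1 - g).sum()])
ll2 = np.log(w * norm_pdf(masses, mu[0], sd[0])
             + (1 - w) * norm_pdf(masses, mu[1], sd[1])).sum()

# BIC penalizes extra parameters; lower is better.
bic1 = 2 * np.log(n) - 2 * ll1
bic2 = 5 * np.log(n) - 2 * ll2
print(bic1, bic2, sorted(mu))
```

With genuinely bimodal data the two-component model wins the BIC comparison despite its extra parameters, which is the qualitative conclusion the abstract reports for the real sample.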
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parameterizations of the deceleration parameter (constant, linear and abrupt transition) and two models explicitly parameterized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates are given for the deceleration and cosmic jerk parameters today (q0 and j0) and for the transition redshift (zt) between a past phase of cosmic deceleration and the current phase of acceleration. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernova data. For the most realistic kinematic models the 1σ confidence limits imply the following ranges of values: q0 ∈ [−0.96, −0.46], j0 ∈ [−3.2, −0.3] and zt ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions q0 = −0.57 ± 0.04, j0 = −1 and zt = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
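For reference, the quantities quoted above are the standard kinematic ones. With a(t) the cosmic scale factor, and using the jerk sign convention implied by the quoted ΛCDM value j0 = −1, they can be written as follows; the flat-ΛCDM expressions serve as a consistency check (q0 = −0.57 corresponds to Ωm ≈ 0.29 and hence zt ≈ 0.71, matching the values in the abstract):

```latex
q(z) = -\frac{\ddot{a}\,a}{\dot{a}^{2}}, \qquad
j(z) = -\frac{\dddot{a}\,a^{2}}{\dot{a}^{3}}, \qquad
q(z_t) = 0 .

% Flat \Lambda CDM:
q(z) = -1 + \frac{3}{2}\,
  \frac{\Omega_m (1+z)^{3}}{\Omega_m (1+z)^{3} + \Omega_\Lambda},
\qquad
q_0 = -1 + \tfrac{3}{2}\,\Omega_m,
\qquad
z_t = \left(\frac{2\,\Omega_\Lambda}{\Omega_m}\right)^{1/3} - 1 .
```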
Abstract:
For many learning tasks the duration of data collection can be greater than the time scale on which the underlying data distribution changes. The question we ask is how to incorporate the information that data are aging. Ad hoc methods to achieve this include the use of validity windows that prevent the learning machine from making inferences based on old data, which introduces the problem of how to define the size of the validity windows. In this brief, a new adaptive Bayesian-inspired algorithm is presented for learning drifting concepts. It uses the analogy of validity windows in an adaptive Bayesian way to incorporate changes in the data distribution over time. We apply a theoretical approach based on information geometry to the classification problem and measure its performance in simulations. The uncertainty about the appropriate size of the memory windows is dealt with in a Bayesian manner by integrating over the distribution of the adaptive window size; as a result, the posterior distribution of the weights may develop algebraic tails. The learning algorithm results from tracking the mean and variance of the posterior distribution of the weights. It was found that the algebraic tails of this posterior distribution give the learning algorithm the ability to cope with an evolving environment by permitting escape from local traps.
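A crude analogue of the adaptive validity window is exponential forgetting in an online Gaussian estimate of a drifting mean. This sketch illustrates the windowing idea only, not the paper's algorithm; the forgetting factor and the synthetic stream are invented:

```python
import numpy as np

rng = np.random.default_rng(2)

# A drifting data stream: the underlying mean shifts halfway through.
stream = np.concatenate([rng.normal(0.0, 1.0, 500),
                         rng.normal(4.0, 1.0, 500)])

# Exponential forgetting: old observations are down-weighted, giving
# an effective validity window of roughly 1 / (1 - lam) samples.
lam = 0.98                       # forgetting factor (assumed)
mean, var, weight = 0.0, 1.0, 1.0
for x in stream:
    weight = lam * weight + 1.0  # effective sample count, capped ~50
    lr = 1.0 / weight            # per-sample learning rate
    delta = x - mean
    mean += lr * delta           # update running mean estimate
    var = (1 - lr) * (var + lr * delta ** 2)  # running variance

print(mean)  # final estimate tracks the post-shift mean
```

A fixed window would force a hand-chosen size; the Bayesian treatment in the paper instead integrates over the window-size distribution, which is what produces the heavy-tailed posterior it describes.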
Abstract:
In this article, we introduce a semi-parametric Bayesian approach based on Dirichlet process priors for the discrete calibration problem in binomial regression models. A motivating application is the dosimetry problem related to dose-response models. A hierarchical formulation is provided, and a Markov chain Monte Carlo sampling approach is developed. The methodology is applied to simulated and real data.