910 results for Bayesian maximum entropy
Abstract:
We give an a posteriori analysis of a semidiscrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics, which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (2014, SIAM J. Math. Anal., 46, 3518–3539). This framework allows energy-type arguments to be applied to continuous functions. Since we advocate the use of discontinuous Galerkin methods, we make use of two families of reconstructions, one discrete and one elliptic, to apply the reduced relative entropy framework in this setting.
Abstract:
We give an a priori analysis of a semi-discrete discontinuous Galerkin scheme approximating solutions to a model of multiphase elastodynamics, which involves an energy density depending not only on the strain but also on the strain gradient. A key component in the analysis is the reduced relative entropy stability framework developed in Giesselmann (SIAM J Math Anal 46(5):3518–3539, 2014). The estimate we derive is optimal in the L∞(0,T;dG) norm for the strain and the L2(0,T;dG) norm for the velocity, where dG is an appropriate mesh-dependent H1-like space.
Abstract:
We describe the creation of a data set describing changes related to the presence of ice sheets, including ice-sheet extent and height, ice-shelf extent, and the distribution and elevation of ice-free land at the Last Glacial Maximum (LGM), which were used in LGM experiments conducted as part of the fifth phase of the Coupled Model Intercomparison Project (CMIP5) and the third phase of the Palaeoclimate Modelling Intercomparison Project (PMIP3). The CMIP5/PMIP3 data sets were created from reconstructions made by three different groups, which were all obtained using a model-inversion approach but differ in the assumptions used in the modelling and in the type of data used as constraints. The ice-sheet extent in the Northern Hemisphere (NH) does not vary substantially between the three individual data sources. The difference in the topography of the NH ice sheets is also moderate, and smaller than the differences between these reconstructions (and the resultant composite reconstruction) and ice-sheet reconstructions used in previous generations of PMIP. Only two of the individual reconstructions provide information for Antarctica. The discrepancy between these two reconstructions is larger than the difference for the NH ice sheets, although still less than the difference between the composite reconstruction and previous PMIP ice-sheet reconstructions. Although largely confined to the ice-covered regions, differences between the climate response to the individual LGM reconstructions extend over the North Atlantic Ocean and Northern Hemisphere continents, partly through atmospheric stationary waves. Differences between the climate response to the CMIP5/PMIP3 composite and any individual ice-sheet reconstruction are smaller than those between the CMIP5/PMIP3 composite and the ice sheet used in the last phase of PMIP (PMIP2).
Abstract:
The climates of the mid-Holocene (MH), 6,000 years ago, and of the Last Glacial Maximum (LGM), 21,000 years ago, have been extensively simulated, in particular in the framework of the Palaeoclimate Modelling Intercomparison Project. These periods are well documented by paleo-records, which can be used to evaluate model results for climates different from the present one. Here, we present new simulations of the MH and the LGM climates obtained with the IPSL_CM5A model and compare them to our previous results obtained with the IPSL_CM4 model. Compared to IPSL_CM4, IPSL_CM5A includes two new features: the interactive representation of plant phenology and of marine biogeochemistry. However, one of the most important differences between these models is the latitudinal resolution and vertical domain of their atmospheric component, which have been improved in IPSL_CM5A, resulting in a better representation of the mid-latitude jet streams. The representation of the Asian monsoon is also substantially improved. The global mean annual temperature simulated for the pre-industrial (PI) period is colder in IPSL_CM5A than in IPSL_CM4, but their climate sensitivity to a CO2 doubling is similar. Here we show that these differences in the simulated PI climate have an impact on the simulated MH and LGM climatic anomalies. The larger cooling response to LGM boundary conditions in IPSL_CM5A appears to be mainly due to differences between the PMIP3 and PMIP2 boundary conditions, as shown by a shortwave radiative forcing/feedback analysis based on a simplified perturbation method. The sensitivity computed from the LGM climate is found to be lower than that computed from 2 × CO2 simulations, confirming previous studies based on different models. For the MH, the Asian monsoon, which is stronger in the IPSL_CM5A PI simulation, is also more sensitive to the insolation changes. The African monsoon is further amplified in IPSL_CM5A due to the impact of the interactive phenology.
Finally, the changes in variability for both models and for the MH and LGM are presented, taking the example of the El Niño–Southern Oscillation (ENSO), which differs markedly between the two PI simulations. ENSO variability is damped in both model versions at the MH, whereas inconsistent responses are found between the two versions for the LGM. Part 2 of this paper examines whether these differences between IPSL_CM4 and IPSL_CM5A can be distinguished when comparing those results to palaeoclimatic reconstructions, and investigates new approaches for model–data comparisons made possible by the inclusion of new components in IPSL_CM5A.
Abstract:
The combined influences of the westerly phase of the quasi-biennial oscillation (QBO-W) and solar maximum (Smax) conditions on the Northern Hemisphere extratropical winter circulation are investigated using reanalysis data and Center for Climate System Research/National Institute for Environmental Studies chemistry climate model (CCM) simulations. The composite analysis of the reanalysis data indicates a strengthened polar vortex in December followed by a weakened polar vortex in February–March under QBO-W during Smax (QBO-W/Smax) conditions. This relationship need not be specific to QBO-W/Smax conditions but may only require a strengthened vortex in December, which is more likely under QBO-W/Smax. Both the reanalysis data and the CCM simulations suggest that the dynamical processes of planetary wave propagation and meridional circulation related to QBO-W around the polar vortex in December are similar in character to those related to Smax; furthermore, both processes may work in concert to maintain a stronger vortex during QBO-W/Smax. In the reanalysis data, the strengthened polar vortex in December is associated with the development of a north–south dipole tropospheric anomaly in the Atlantic sector, similar to the North Atlantic oscillation (NAO), during December–January. The north–south dipole anomaly has a zonal wavenumber 1 (WN1) component whose anomalous ridge overlaps in longitude with the climatological ridge in the North Atlantic in January. This implies amplification of the WN1 wave and results in enhanced upward WN1 propagation from the troposphere into the stratosphere in January, leading to the weakened polar vortex in February–March. Although WN2 waves do not play a direct role in forcing the stratospheric vortex evolution, their tropospheric response to QBO-W/Smax conditions appears to be related to the maintenance of the NAO-like anomaly in the high-latitude troposphere in January.
These results may provide a possible explanation for the mechanisms underlying the seasonal evolution of wintertime polar vortex anomalies during QBO-W/Smax conditions and the role of the troposphere in this evolution.
Abstract:
Models for which the likelihood function can be evaluated only up to a parameter-dependent unknown normalizing constant, such as Markov random field models, are widely used in computer science, statistical physics, spatial statistics, and network analysis. However, Bayesian analysis of these models using standard Monte Carlo methods is not possible due to the intractability of their likelihood functions. Several methods that permit exact, or close to exact, simulation from the posterior distribution have recently been developed. However, estimating the evidence and Bayes factors for these models remains challenging in general. This paper describes new random weight importance sampling and sequential Monte Carlo methods for estimating Bayes factors that use simulation to circumvent the evaluation of the intractable likelihood, and compares them to existing methods. An initial investigation into the theoretical and empirical properties of this class of methods is presented. In some cases we observe an advantage in the use of biased weight estimates, but we advocate caution in the use of such estimates.
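As an illustration of the evidence-estimation problem this abstract addresses, the following Python sketch estimates a marginal likelihood (and a Bayes factor against a fixed-parameter null) by plain importance sampling from the prior. The toy model, prior, and observation are invented for illustration; the paper's setting would replace the exact likelihood evaluation below with a simulation-based (possibly biased) weight estimate.

```python
import numpy as np

rng = np.random.default_rng(2)

def normal_pdf(x, mu, var):
    """Density of N(mu, var) at x."""
    return np.exp(-(x - mu) ** 2 / (2.0 * var)) / np.sqrt(2.0 * np.pi * var)

# Toy model M1: y ~ N(theta, 1) with prior theta ~ N(0, 1).
# Null model M0: y ~ N(0, 1), no free parameter.
y_obs = 0.7

def evidence_m1(n=200_000):
    """Importance-sampling estimate of p(y | M1) = E_prior[p(y | theta)],
    using the prior itself as the proposal distribution."""
    theta = rng.normal(0.0, 1.0, size=n)        # draws from the prior
    return normal_pdf(y_obs, theta, 1.0).mean()  # average likelihood weights

z1_hat = evidence_m1()
z0 = normal_pdf(y_obs, 0.0, 1.0)    # M0 evidence is just its likelihood
bf_10 = z1_hat / z0                 # Bayes factor of M1 against M0
z1_true = normal_pdf(y_obs, 0.0, 2.0)  # analytic check: y ~ N(0, 2) under M1
print(bf_10, z1_hat, z1_true)
```

Random weight variants of this scheme substitute an unbiased (or deliberately biased, cheaper) estimate of the likelihood for `normal_pdf(y_obs, theta, 1.0)`, which is exactly the point at issue in the abstract.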
Abstract:
Approximate Bayesian computation (ABC) is a popular family of algorithms that perform approximate parameter inference when numerical evaluation of the likelihood function is not possible but data can be simulated from the model. They return a sample of parameter values that produce simulations close to the observed dataset. A standard approach is to reduce the simulated and observed datasets to vectors of summary statistics and to accept when the difference between these is below a specified threshold. ABC can also be adapted to perform model choice. In this article, we present a new software package for R, abctools, which provides methods for tuning ABC algorithms. These include recent dimension reduction algorithms to tune the choice of summary statistics, and coverage methods to tune the choice of threshold. We provide several illustrations of these routines on applications taken from the ABC literature.
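For orientation, the basic ABC rejection step that such tuning targets (the choice of summary statistics and of the acceptance threshold) can be sketched in Python; the toy model, summaries, and threshold below are illustrative assumptions, not part of abctools.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    """Toy model: n i.i.d. draws from Normal(theta, 1)."""
    return rng.normal(theta, 1.0, size=n)

def summary(data):
    """Reduce a dataset to a vector of summary statistics (mean, std)."""
    return np.array([data.mean(), data.std()])

def abc_rejection(observed, prior_draw, n_draws=10_000, eps=0.2):
    """ABC rejection: keep prior draws whose simulated summaries fall
    within Euclidean distance eps of the observed summaries."""
    s_obs = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_draw()
        if np.linalg.norm(summary(simulate(theta)) - s_obs) < eps:
            accepted.append(theta)
    return np.array(accepted)

observed = simulate(2.0)                          # stand-in for real data
post = abc_rejection(observed, lambda: rng.uniform(-5.0, 5.0))
print(len(post), post.mean())                     # approximate posterior sample
```

Shrinking `eps` sharpens the approximation but lowers the acceptance rate, and a poor choice of `summary` can bias the result regardless of `eps`; those are precisely the trade-offs the package's dimension-reduction and coverage diagnostics address.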
Abstract:
Reconstructions of salinity are used to diagnose changes in the hydrological cycle and ocean circulation. A widely used method for determining past salinity uses seawater oxygen isotope (δ¹⁸Ow) residuals after the extraction of the global ice volume and temperature components. This method relies on a constant relationship between δ¹⁸Ow and salinity through time. Here we use the isotope-enabled fully coupled General Circulation Model (GCM) HadCM3 to test the application of spatially and temporally independent relationships in the reconstruction of past ocean salinity. Simulations of the Late Holocene (LH), Last Glacial Maximum (LGM), and Last Interglacial (LIG) climates are performed and benchmarked against existing compilations of stable oxygen isotopes in carbonates (δ¹⁸Oc), which primarily reflect δ¹⁸Ow and temperature. We find that HadCM3 produces an accurate representation of the surface ocean δ¹⁸Oc distribution for the LH and LGM. Our simulations show considerable variability in spatial and temporal δ¹⁸Ow–salinity relationships. Spatial gradients are generally shallower than, but within ∼50% of, the actual simulated LH-to-LGM and LH-to-LIG temporal gradients, and temporal gradients calculated from multi-decadal variability are generally shallower than both the spatial and the actual simulated gradients. The largest sources of uncertainty in salinity reconstructions are changes in regional freshwater budgets, ocean circulation, and sea ice regimes. These can cause errors in salinity estimates exceeding 4 psu. Our results suggest that paleosalinity reconstructions in the South Atlantic, Indian, and tropical Pacific Oceans should be most robust, since these regions exhibit relatively constant δ¹⁸Ow–salinity relationships across spatial and temporal scales. The largest uncertainties will affect North Atlantic and high-latitude paleosalinity reconstructions.
Finally, the results show that it is difficult to generate reliable salinity estimates for regions of dynamic oceanography, such as the North Atlantic, without additional constraints.
Abstract:
The potential impact of the abrupt 8.2 ka cold event on human demography, settlement patterns and culture in Europe and the Near East has emerged as a key theme in current discussion and debate. We test whether this event had an impact on the Mesolithic population of western Scotland, a case study located within the North Atlantic region where the environmental impact of the 8.2 ka event is likely to have been the most severe. By undertaking a Bayesian analysis of the radiocarbon record and using the number of activity events as a proxy for the size of the human population, we find evidence for a dramatic reduction in the Mesolithic population synchronous with the 8.2 ka event. We interpret this as reflecting the demographic collapse of a low density population that lacked the capability to adapt to the rapid onset of new environmental conditions. This impact of the 8.2 ka event in the North Atlantic region lends credence to the possibility of a similar impact on populations in Continental Europe and the Near East.
Abstract:
The Last Glacial Maximum (LGM) exhibits different large-scale atmospheric conditions from present-day climate due to altered boundary conditions. The regional atmospheric circulation and associated precipitation patterns over Europe are characterized for the first time with a weather typing approach (circulation weather types, CWT) for LGM paleoclimate simulations. The CWT approach is applied to four representative regions across Europe. While the CWTs over Western Europe are predominantly westerly for both present-day and LGM conditions, considerable differences are identified elsewhere: Southern Europe experienced more frequent westerly and cyclonic CWTs under LGM conditions, while Central and Eastern Europe were predominantly affected by southerly and easterly flow patterns. Under LGM conditions, rainfall is enhanced over Western Europe but reduced over most of Central and Eastern Europe. These differences are explained by changing CWT frequencies and evaporation patterns over the North Atlantic Ocean. The regional differences in the CWTs and precipitation patterns are linked to the North Atlantic storm track, which was stronger over Europe during the LGM in all considered models, explaining the overall increase of the cyclonic CWT. Enhanced evaporation over the North Atlantic leads to higher moisture availability over the ocean. Despite the overall cooling during the LGM, this explains the enhanced precipitation over southwestern Europe, particularly Iberia. This study links large-scale atmospheric dynamics to the regional circulation and associated precipitation patterns and provides an improved regional assessment of the European climate under LGM conditions.
Abstract:
Land cover data derived from satellites are commonly used to prescribe inputs to models of the land surface. Since such data inevitably contain errors, quantifying how uncertainties in the data affect a model’s output is important. To do so, a spatial distribution of possible land cover values is required to propagate through the model’s simulation. However, at large scales, such as those required for climate models, such spatial modelling can be difficult. Computer models also often require land cover proportions at sites larger than the original map scale as inputs, and it is the uncertainty in these proportions that this article discusses. This paper describes a Monte Carlo sampling scheme that generates realisations of land cover proportions from the posterior distribution implied by a Bayesian analysis combining spatial information in the land cover map and its associated confusion matrix. The technique is computationally simple and has previously been applied to the Land Cover Map 2000 for the region of England and Wales. This article demonstrates the ability of the technique to scale up to large (global) satellite-derived land cover maps and reports its application to the GlobCover 2009 data product. The results show that, in general, the GlobCover data possess only small biases, with the largest belonging to non-vegetated surfaces. Among vegetated surfaces, the most prominent area of uncertainty is Southern Africa, which represents a complex heterogeneous landscape. It is also clear from this study that greater resources need to be devoted to the construction of comprehensive confusion matrices.
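A minimal, non-spatial sketch of the Monte Carlo idea is given below, assuming an invented three-class confusion matrix: each mapped class's row yields a Dirichlet posterior over the true classes, and mixing the sampled rows by the mapped pixel proportions in a grid cell produces realisations of that cell's true land cover proportions. The published scheme additionally exploits spatial information in the map, which this sketch omits.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical 3-class confusion matrix: rows = mapped class,
# columns = true (reference) class, entries = validation pixel counts.
confusion = np.array([[80,  15,  5],
                      [10, 120, 20],
                      [ 5,  10, 85]])

# Pixels of each mapped class counted within one model grid cell.
mapped_counts = np.array([400, 250, 350])

def sample_true_proportions(confusion, mapped_counts, n_draws=1000):
    """Monte Carlo realisations of true land-cover proportions: for each
    mapped class, draw P(true | mapped) from its Dirichlet posterior
    (uniform prior, i.e. add 1 to each count), then mix the sampled rows
    by the mapped pixel proportions of the cell."""
    w = mapped_counts / mapped_counts.sum()
    draws = np.empty((n_draws, confusion.shape[1]))
    for i in range(n_draws):
        rows = np.array([rng.dirichlet(row + 1) for row in confusion])
        draws[i] = w @ rows          # mixture over mapped classes
    return draws

draws = sample_true_proportions(confusion, mapped_counts)
print(draws.mean(axis=0))            # posterior mean proportions, sum to 1
```

Each realisation can then be propagated through a land surface model to translate map uncertainty into output uncertainty, which is the use case the abstract describes.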
Abstract:
Evidence of jet precession in many galactic and extragalactic sources has been reported in the literature. Much of this evidence is based on studies of the kinematics of the jet knots, which depend on the correct identification of the components to determine their respective proper motions and position angles on the plane of the sky. Identification problems related to fitting procedures, as well as observations poorly sampled in time, may compromise the tracking of the components in time, which consequently might contribute to a misinterpretation of the data. In order to deal with these limitations, we introduce a powerful statistical tool for analysing jet precession: the cross-entropy method for continuous multi-extremal optimization. Based only on the raw data of the jet components (right ascension and declination offsets from the core), the cross-entropy method searches for the precession model parameters that best represent the data. In this work we present a large number of tests to validate this technique, using synthetic precessing jets built from a given set of precession parameters. With the aim of recovering these parameters, we applied the cross-entropy method to our precession model, varying exhaustively the quantities associated with the method. Our results show that even in the most challenging tests, the cross-entropy method was able to find the correct parameters to within the 1 per cent level. Even for a non-precessing jet, our optimization method successfully pointed out the lack of precession.
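The cross-entropy method for continuous optimisation proceeds by sampling candidate parameter vectors from a distribution, refitting that distribution to the lowest-cost ("elite") samples, and iterating until it concentrates on a minimiser. A minimal Python sketch follows; the least-squares cost and the linear toy model are invented for illustration and stand in for the authors' precession model.

```python
import numpy as np

rng = np.random.default_rng(1)

def cross_entropy_minimize(f, dim, n_samples=200, elite_frac=0.1,
                           n_iters=50, init_std=5.0):
    """Cross-entropy method: sample candidates from an independent Gaussian,
    refit its mean/std to the elite (lowest-cost) samples, and repeat."""
    mu = np.zeros(dim)
    sigma = np.full(dim, init_std)
    n_elite = max(1, int(elite_frac * n_samples))
    for _ in range(n_iters):
        samples = rng.normal(mu, sigma, size=(n_samples, dim))
        costs = np.apply_along_axis(f, 1, samples)
        elite = samples[np.argsort(costs)[:n_elite]]
        mu, sigma = elite.mean(axis=0), elite.std(axis=0) + 1e-8
    return mu

# Invented toy problem: recover intercept and slope of a noiseless line
# from "observed" offsets, by minimising the sum of squared residuals.
true = np.array([2.0, -0.5])
t = np.linspace(0.0, 10.0, 50)
data = true[0] + true[1] * t

cost = lambda p: np.sum((p[0] + p[1] * t - data) ** 2)
est = cross_entropy_minimize(cost, dim=2)
print(est)   # close to [2.0, -0.5]
```

In practice, smoothing of the mean/variance updates and multiple restarts are common refinements, and the same loop applies unchanged to a multi-extremal precession cost function; only the `cost` callable changes.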
Abstract:
We present a new technique for obtaining model fits to very long baseline interferometric images of astrophysical jets. The method minimizes a performance function proportional to the sum of the squared differences between the model and observed images. The model image is constructed by summing Nₛ elliptical Gaussian sources, each characterized by six parameters: two-dimensional peak position, peak intensity, eccentricity, amplitude, and orientation angle of the major axis. We present results for the fitting of two benchmark jets: the first constructed from three individual Gaussian sources, the second formed by five Gaussian sources. Both jets were analyzed by our cross-entropy technique in finite and infinite signal-to-noise regimes, with the background noise chosen to mimic that found in interferometric radio maps. The images were constructed to simulate most of the conditions encountered in interferometric images of active galactic nuclei. We show that the cross-entropy technique recovers the parameters of the sources with an accuracy similar to that obtained from the traditional Astronomical Image Processing System (AIPS) task IMFIT when the image is relatively simple (e.g., few components). For more complex interferometric maps, our method displays superior performance in recovering the parameters of the jet components. Our methodology can also quantify the number of individual components present in an image. An additional application of the cross-entropy technique to a real image of a BL Lac object is shown and discussed. Our results indicate that our cross-entropy model-fitting technique should be used in situations involving the analysis of complex emission regions with more than three sources, even though it is substantially slower than current model-fitting tasks (at least 10,000 times slower on a single processor, depending on the number of sources to be optimized).
As with any model fitting performed in the image plane, caution is required when analyzing images constructed from a poorly sampled (u, v) plane.
Abstract:
The kinematic expansion history of the universe is investigated using the 307 type Ia supernovae from the Union Compilation set. Three simple parameterizations of the deceleration parameter (constant, linear, and abrupt transition) and two models explicitly parameterized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates for the deceleration and cosmic jerk parameters today (q₀ and j₀) and for the transition redshift (zₜ) between a past phase of cosmic deceleration and the current phase of acceleration are given. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernova data. For the most realistic kinematic models the 1σ confidence limits imply the following ranges of values: q₀ ∈ [-0.96, -0.46], j₀ ∈ [-3.2, -0.3] and zₜ ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions q₀ = -0.57 ± 0.04, j₀ = -1 and zₜ = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
Abstract:
The phylogenetic placement of Kuhlmanniodendron Fiaschi & Groppo (Achariaceae) within Malpighiales was investigated with rbcL sequence data. This genus was recently created to accommodate Carpotroche apterocarpa Kuhlm., a poorly known species from the rainforests of Espírito Santo, Brazil. One rbcL sequence was obtained from Kuhlmanniodendron and analyzed together with 73 additional sequences from Malpighiales and eight from two closely related orders, Oxalidales and Celastrales, all available in GenBank. Phylogenetic analyses were carried out with maximum parsimony and Bayesian inference; bootstrap analyses were used in maximum parsimony to evaluate branch support. The results confirmed the placement of Kuhlmanniodendron together with Camptostylus, Lindackeria, Xylotheca, and Caloncoba in a strongly supported clade (posterior probability = 0.99) that corresponds to the tribe Lindackerieae of Achariaceae (Malpighiales). Kuhlmanniodendron also does not appear to be closely related to Oncoba (Salicaceae), an African genus with similar floral and fruit morphology that has traditionally been placed among the cyanogenic Flacourtiaceae (now Achariaceae). A picrosodic paper test was performed on dried herbarium leaves, and the presence of cyanogenic glycosides, a class of compounds usually found in Achariaceae, was detected. Pollen morphology and wood anatomy of Kuhlmanniodendron were also investigated, but both the pollen (3-colporate and microreticulate) and the wood, with solitary to multiple vessels, scalariform perforation plates, and other features, do not seem useful for distinguishing this genus from other members of the Achariaceae and are rather common among the eudicotyledons as a whole.
However, the perforated ray cells with scalariform plates present in Kuhlmanniodendron, an uncommon wood character, are similar to those found in Kiggelaria africana (Pangieae, Achariaceae); the occurrence of such cells has not been mapped across the angiosperms, however, and it is not clear how homoplastic this character may be.