26 results for Gaussian and t-copulas


Relevance: 30.00%

Abstract:

Fossil pollen data from stratigraphic cores are irregularly spaced in time due to non-linear age-depth relations. Moreover, their marginal distributions may vary over time. We address these features in a nonparametric regression model with errors that are monotone transformations of a latent continuous-time Gaussian process Z(T). Although Z(T) is unobserved, monotonicity implies that, under suitable regularity conditions, it can be recovered, facilitating further computations such as estimation of the long-memory parameter and the Hermite coefficients. The estimation of Z(T) itself involves estimating the marginal distribution function of the regression errors. These issues are addressed in a proposed plug-in algorithm for optimal bandwidth selection and in the construction of confidence bands for the trend function. High-resolution time series of pollen records from Lago di Origlio in Switzerland, which go back ca. 20,000 years, are used to illustrate the methods.
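
A minimal Python sketch of the latent-process recovery step described above (the rank-based empirical CDF estimator and the function name are illustrative assumptions, not the authors' exact procedure):

```python
import numpy as np
from scipy.stats import norm

def recover_latent_gaussian(residuals):
    """Estimate the latent Gaussian values Z from regression residuals.

    Because the errors are a monotone transformation of Z, composing the
    empirical (rank-based) CDF of the residuals with the standard normal
    quantile function recovers Z up to the unknown monotone transform.
    """
    n = len(residuals)
    # Double argsort yields 0-based ranks; rescale into (0, 1) to avoid
    # infinite quantiles at the boundaries.
    ranks = np.argsort(np.argsort(residuals)) + 1
    u = ranks / (n + 1)
    return norm.ppf(u)
```

On simulated data, the recovered values correlate almost perfectly with the true latent Gaussian draws, since only the marginal transform is unknown.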

Relevance: 30.00%

Abstract:

In the context of expensive numerical experiments, a promising solution for alleviating the computational costs consists of using partially converged simulations instead of exact solutions. The gain in computational time comes at the price of precision in the response. This work addresses the issue of fitting a Gaussian process model to partially converged simulation data for further use in prediction. The main challenge consists of adequately approximating the error due to partial convergence, which is correlated in both the design-variable and time directions. Here, we propose fitting a Gaussian process in the joint space of design parameters and computational time. The model is constructed by building a nonstationary covariance kernel that accurately reflects the actual structure of the error. Practical solutions are proposed for solving the parameter estimation issues associated with the proposed model. The method is applied to a computational fluid dynamics test case and shows significant improvement in prediction compared to a classical kriging model.
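
The joint-space idea can be illustrated with a toy covariance whose error component decays as computational time grows. This stationary-plus-separable-decay construction is a plausible sketch under stated assumptions, not the authors' actual nonstationary kernel:

```python
import numpy as np

def joint_kernel(x1, t1, x2, t2, theta_x=1.0, sigma2_conv=1.0,
                 sigma2_err=0.5, tau=10.0, theta_err=0.5):
    """Toy covariance on the joint (design variable, computational time) space.

    The first term models the fully converged response; the second models
    the partial-convergence error, with an amplitude that decays as the
    computational times t1, t2 grow.  The decay factor is a separable
    product g(t1) * g(t2), so the sum remains a valid (positive
    semi-definite) kernel.  All hyperparameter values are placeholders.
    """
    k_conv = sigma2_conv * np.exp(-0.5 * ((x1 - x2) / theta_x) ** 2)
    decay = np.exp(-(t1 + t2) / tau)  # error variance shrinks with convergence
    k_err = sigma2_err * decay * np.exp(-0.5 * ((x1 - x2) / theta_err) ** 2)
    return k_conv + k_err
```

Plugging such a kernel into any standard GP regression routine yields predictions whose uncertainty automatically accounts for how far each simulation was from convergence.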

Relevance: 30.00%

Abstract:

PURPOSE: Positron emission tomography (PET)/computed tomography (CT) measurements on small lesions are impaired by the partial volume effect, which is intrinsically tied to the point spread function of the actual imaging system, including the reconstruction algorithms. The variability resulting from different point spread functions hinders the assessment of quantitative measurements in clinical routine and especially degrades comparability within multicenter trials. To improve quantitative comparability there is a need for methods to match different PET/CT systems through elimination of this systemic variability. Consequently, a new method was developed and tested that transforms the image of an object as produced by one tomograph into another image of the same object as it would have been seen by a different tomograph. The proposed new method, termed Transconvolution, compensates for differing imaging properties of different tomographs and particularly aims at quantitative comparability of PET/CT in the context of multicenter trials.

METHODS: To solve the problem of image normalization, the theory of Transconvolution was mathematically established together with new methods to handle point spread functions of different PET/CT systems. Knowing the point spread functions of two different imaging systems allows determining a Transconvolution function to convert one image into the other. This function is calculated by convolving one point spread function with the inverse of the other point spread function, which, when adhering to certain boundary conditions such as the use of linear acquisition and image reconstruction methods, is a numerically accessible operation. For reliable measurement of such point spread functions characterizing different PET/CT systems, a dedicated solid-state phantom incorporating 68Ge/68Ga-filled spheres was developed. To iteratively determine and represent such point spread functions, exponential density functions in combination with a Gaussian distribution were introduced. Furthermore, simulation of a virtual PET system provided a standard imaging system with clearly defined properties to which the real PET systems were to be matched. A Hann window served as the modulation transfer function for the virtual PET. The Hann window's apodization properties suppressed high spatial frequencies above a certain critical frequency, thereby fulfilling the above-mentioned boundary conditions. The determined point spread functions were subsequently used by the novel Transconvolution algorithm to match different PET/CT systems onto the virtual PET system. Finally, the theoretically elaborated Transconvolution method was validated by transforming phantom images acquired on two different PET systems into nearly identical data sets, as they would be imaged by the virtual PET system.

RESULTS: The proposed Transconvolution method matched different PET/CT systems for an improved and reproducible determination of a normalized activity concentration. The highest difference in measured activity concentration between the two different PET systems, 18.2%, was found in spheres of 2 ml volume. Transconvolution reduced this difference to 1.6%. In addition to reestablishing comparability, the new method with its parameterization of point spread functions allowed a full characterization of the imaging properties of the examined tomographs.

CONCLUSIONS: By matching different tomographs to a virtual standardized imaging system, Transconvolution opens a new comprehensive method for cross-calibration in quantitative PET imaging. The use of a virtual PET system restores comparability between data sets from different PET systems by exerting a common, reproducible, and defined partial volume effect.
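
The core Transconvolution step (convolving with one system's inverse PSF and re-applying the virtual system's Hann-windowed response) can be sketched in 1-D in the Fourier domain; the dimensionality, cutoff choice, and tolerance are illustrative simplifications of the method described above:

```python
import numpy as np

def transconvolve_1d(signal, psf_measured, cutoff_frac=0.5):
    """1-D sketch of Transconvolution: re-image a signal acquired with a
    known PSF as if it had been acquired by a virtual system whose
    modulation transfer function (MTF) is a Hann window.

    Dividing by the measured system's transfer function inverts its PSF;
    multiplying by the Hann MTF re-applies the virtual system's
    band-limited blur, which suppresses the high frequencies where the
    inversion would otherwise amplify noise.
    """
    n = len(signal)
    freqs = np.fft.fftfreq(n)
    # Centre the PSF at index 0 before transforming.
    otf = np.fft.fft(np.fft.ifftshift(psf_measured))
    # Hann MTF: smooth roll-off below the critical frequency, zero above it.
    fc = cutoff_frac * 0.5
    mtf_hann = np.where(np.abs(freqs) < fc,
                        0.5 * (1.0 + np.cos(np.pi * freqs / fc)), 0.0)
    spec = np.fft.fft(signal)
    transfer = np.where(np.abs(otf) > 1e-8, mtf_hann / otf, 0.0)
    return np.real(np.fft.ifft(spec * transfer))
```

Applied to a point source imaged by the measured system (i.e. the measured PSF itself), the output is the virtual system's Hann PSF: total intensity is preserved and the peak stays in place.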

Relevance: 30.00%

Abstract:

Multi-objective optimization algorithms aim at finding Pareto-optimal solutions. Recovering Pareto fronts or Pareto sets from a limited number of function evaluations is a challenging problem. A popular approach in the case of expensive-to-evaluate functions is to appeal to metamodels. Kriging has been shown to be efficient as a basis for sequential multi-objective optimization, notably through infill sampling criteria balancing exploitation and exploration, such as the Expected Hypervolume Improvement. Here we consider Kriging metamodels not only for selecting new points, but as a tool for estimating the whole Pareto front and quantifying how much uncertainty remains on it at any stage of Kriging-based multi-objective optimization algorithms. Our approach relies on the Gaussian process interpretation of Kriging and builds upon conditional simulations. Using concepts from random set theory, we propose to adapt the Vorob'ev expectation and deviation to capture the variability of the set of non-dominated points. Numerical experiments illustrate the potential of the proposed workflow, and it is shown on examples how Gaussian process simulations and the estimated Vorob'ev deviation can be used to monitor the ability of Kriging-based multi-objective optimization algorithms to accurately learn the Pareto front.
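
The Vorob'ev expectation mentioned above can be estimated from conditional simulations discretised on a grid. A compact sketch, assuming each simulation is summarised as a boolean membership vector (the exhaustive threshold search over simulated coverage levels is an illustrative choice):

```python
import numpy as np

def vorobev_expectation(indicator_sims, grid_weights=None):
    """Vorob'ev expectation of a random closed set from Monte Carlo draws.

    `indicator_sims` is an (n_sims, n_points) 0/1 array: row i marks which
    grid points belong to the i-th simulated set (e.g. the region dominated
    by a simulated Pareto front).  The Vorob'ev expectation is the coverage
    excursion set {x : p(x) >= alpha}, with alpha chosen so that its volume
    matches the mean volume of the simulated sets.
    """
    sims = np.asarray(indicator_sims, dtype=float)
    if grid_weights is None:
        grid_weights = np.ones(sims.shape[1])
    coverage = sims.mean(axis=0)                        # p(x), pointwise
    mean_volume = (sims * grid_weights).sum(axis=1).mean()
    # Scan candidate thresholds (the distinct coverage levels, descending)
    # for the one whose excursion-set volume best matches the mean volume.
    best_alpha, best_gap = 1.0, np.inf
    for alpha in np.unique(coverage)[::-1]:
        vol = grid_weights[coverage >= alpha].sum()
        gap = abs(vol - mean_volume)
        if gap < best_gap:
            best_alpha, best_gap = alpha, gap
    return coverage >= best_alpha, best_alpha
```

The companion Vorob'ev deviation would then be the expected volume of the symmetric difference between each simulated set and this expectation, which is what monitors remaining uncertainty on the Pareto front.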

Relevance: 30.00%

Abstract:

We review various inequalities for Mills' ratio (1 − Φ)/φ, where φ and Φ denote the standard Gaussian density and distribution function, respectively. Elementary considerations involving finite continued fractions lead to a general approximation scheme which implies and refines several known bounds.
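
As an illustration of the kind of bounds discussed, the following checks the classical elementary inequalities x/(x² + 1) < R(x) < 1/x for x > 0; the paper's continued-fraction scheme refines sharper bounds than these:

```python
import numpy as np
from scipy.stats import norm

def mills_ratio(x):
    """Mills' ratio R(x) = (1 - Phi(x)) / phi(x) for the standard normal,
    computed via the survival function for numerical stability."""
    return norm.sf(x) / norm.pdf(x)

def mills_bounds(x):
    """Classical elementary bounds x/(x^2 + 1) < R(x) < 1/x, valid for
    x > 0; these are the crudest members of the families the paper treats."""
    return x / (x * x + 1.0), 1.0 / x
```

For example, at x = 1 the ratio is about 0.66, comfortably between the bounds 0.5 and 1.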

Relevance: 30.00%

Abstract:

During time-resolved optical stimulation (TR-OSL) experiments, one uses short light pulses to separate the stimulation and emission of luminescence in time. Experimental TR-OSL results show that the luminescence lifetime in quartz of sedimentary origin is independent of annealing temperature below 500 °C, but decreases monotonically thereafter. These results have previously been interpreted empirically on the basis of the existence of two separate luminescence centers, LH and LL, in quartz, each with its own distinct luminescence lifetime. Additional experimental evidence also supports the presence of a non-luminescent hole reservoir R, which plays a critical role in the predose effect in this material. This paper extends a recently published analytical model for thermal quenching in quartz to include the two luminescence centers LH and LL, as well as the hole reservoir R. The new extended model involves localized electronic transitions between energy states within the two luminescence centers and is described by a system of differential equations based on the Mott–Seitz mechanism of thermal quenching. It is shown that by using simplifying physical assumptions, one can obtain analytical solutions for the intensity of the light during a TR-OSL experiment carried out with previously annealed samples. These analytical expressions are found to be in good agreement with the numerical solutions of the equations. The results from the model are shown to be in quantitative agreement with published experimental data for commercially available quartz samples. Specifically, the model describes the variation of the luminescence lifetimes (a) with annealing temperatures between room temperature and 900 °C, and (b) with stimulation temperatures between 20 and 200 °C. This paper also reports new radioluminescence (RL) measurements carried out using the same commercially available quartz samples.
Gaussian deconvolution of the RL emission spectra was carried out using a total of seven emission bands between 1.5 and 4.5 eV, and the behavior of these bands was examined as a function of the annealing temperature. An emission band at ∼3.44 eV (360 nm) was found to be strongly enhanced when the annealing temperature was increased to 500 °C, and this band underwent a significant reduction in intensity with further increase in temperature. Furthermore, a new emission band at ∼3.73 eV (330 nm) became apparent for annealing temperatures in the range 600–700 °C. These new experimental results are discussed within the context of the model presented in this paper.
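
The Gaussian deconvolution of emission spectra can be sketched as a nonlinear least-squares fit of a sum of Gaussian bands; the two-band synthetic example below is seeded at the reported 3.44 eV and 3.73 eV positions purely for illustration, and the authors' fitting procedure may differ in detail:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian_sum(e, *params):
    """Sum of Gaussian emission bands evaluated at photon energies `e`.
    `params` holds (amplitude, centre, width) triples, one per band."""
    out = np.zeros_like(e, dtype=float)
    for a, mu, s in zip(params[0::3], params[1::3], params[2::3]):
        out += a * np.exp(-0.5 * ((e - mu) / s) ** 2)
    return out

def deconvolve_spectrum(energy, intensity, p0):
    """Least-squares Gaussian deconvolution of an emission spectrum.
    `p0` holds initial (amplitude, centre, width) triples, e.g. seeded at
    reported band positions such as 3.44 eV and 3.73 eV."""
    popt, _ = curve_fit(gaussian_sum, energy, intensity, p0=p0)
    return popt
```

On noiseless synthetic data with two overlapping bands, the fit recovers the band centres to well within 0.01 eV.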

Relevance: 30.00%

Abstract:

Air was sampled from the porous firn layer at the NEEM site in Northern Greenland. We use an ensemble of ten reference tracers of known atmospheric history to characterise the transport properties of the site. By analysing uncertainties in both the data and the reference gas atmospheric histories, we can objectively assign weights to each of the gases used for the depth-diffusivity reconstruction. We define an objective root mean square criterion that is minimised in the model tuning procedure. Each tracer constrains the firn profile differently through its unique atmospheric history and free-air diffusivity, making our multiple-tracer characterisation method a clear improvement over the commonly used single-tracer tuning. Six firn air transport models are tuned to the NEEM site; all models successfully reproduce the data within a 1σ Gaussian distribution. A comparison between two replicate boreholes drilled 64 m apart shows differences in measured mixing ratio profiles that exceed the experimental error. We find evidence that diffusivity does not vanish completely in the lock-in zone, as is commonly assumed. The ice age–gas age difference (Δage) at the firn-ice transition is calculated to be 182 (+3/−9) yr. We further present the first intercomparison study of firn air models, where we introduce diagnostic scenarios designed to probe specific aspects of the model physics. Our results show that there are major differences in the way the models handle advective transport. Furthermore, diffusive fractionation of isotopes in the firn is poorly constrained by the models, which has consequences for attempts to reconstruct the isotopic composition of trace gases back in time using firn air and ice core records.

Relevance: 30.00%

Abstract:

The adaptive response to extreme endurance exercise might involve transcriptional and translational regulation by microRNAs (miRNAs). Therefore, the objective of the present study was to perform an integrated analysis of the blood transcriptome and miRNome (using microarrays) in the horse before and after a 160 km endurance competition. A total of 2,453 differentially expressed genes and 167 differentially expressed microRNAs were identified when comparing pre- and post-ride samples. We used a hypergeometric test and its generalization to gain a better understanding of the biological functions regulated by the differentially expressed microRNAs. In particular, 44 differentially expressed microRNAs putatively regulated a total of 351 depleted differentially expressed genes variously involved in glucose metabolism, fatty acid oxidation, mitochondrion biogenesis, and immune response pathways. In an independent validation set of animals, graphical Gaussian models confirmed that miR-21-5p, miR-181b-5p and miR-505-5p are candidate regulatory molecules for the adaptation to endurance exercise in the horse. To the best of our knowledge, the present study is the first to provide a comprehensive, integrated overview of the microRNA-mRNA co-regulation networks that may have a key role in controlling post-transcriptional regulation during endurance exercise.
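
The hypergeometric over-representation test used for the target-set enrichment analysis can be sketched as follows; the gene counts in the usage example are invented for illustration and are not the study's actual numbers:

```python
from scipy.stats import hypergeom

def enrichment_pvalue(n_universe, n_category, n_drawn, n_overlap):
    """One-sided hypergeometric (over-representation) test.

    Probability of observing at least `n_overlap` predicted targets of a
    microRNA among `n_drawn` differentially expressed genes, when the
    miRNA has `n_category` predicted targets in a universe of
    `n_universe` genes.
    """
    # P(X >= n_overlap) is the survival function evaluated at n_overlap - 1.
    return hypergeom.sf(n_overlap - 1, n_universe, n_category, n_drawn)
```

For instance, with a (made-up) universe of 20,000 genes, 300 predicted targets, and 2,453 differentially expressed genes, one would ask how surprising a given overlap is under random draws.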

Relevance: 30.00%

Abstract:

Under contact metamorphic conditions, carbonate rocks in the direct vicinity of the Adamello pluton reflect a temperature-induced grain coarsening. Despite this large-scale trend, a considerable grain size scatter occurs on the outcrop scale, indicating local influence of second-order effects such as thermal perturbations, fluid flow and second-phase particles. Second-phase particles, whose sizes range from the nano- to the micron-scale, induce the most pronounced data scatter, resulting in grain sizes too small by up to a factor of 10 compared with theoretical grain growth in a pure system. Such values are restricted to relatively impure samples consisting of up to 10 vol.% micron-scale second-phase particles, or to samples containing a large number of nano-scale particles. The obtained data set suggests that the second phases induce a temperature-controlled reduction in calcite grain growth. The mean calcite grain size can therefore be expressed in the form D = C2 · e^(−Q*/RT) · (dp/fp)^m*, where C2 is a constant, Q* is an activation energy, R the gas constant, T the temperature, and m* the exponent of the ratio dp/fp, i.e. of the average size of the second phases divided by their volume fraction. However, more data are needed to obtain reliable values for C2 and Q*. Besides variations in the average grain size, the presence of second-phase particles generates crystal size distribution (CSD) shapes characterized by lognormal distributions, which differ from the Gaussian-type distributions of the pure samples. In contrast, fluid-enhanced grain growth does not change the shape of the CSDs, but due to enhanced transport properties, the average grain sizes increase by a factor of 2 and the variance of the distribution increases. Stable δ18O and δ13C isotope ratios in fluid-affected zones deviate only slightly from the host rock values, suggesting low fluid/rock ratios. Grain growth modelling indicates that the fluid-induced grain size variations can develop within several ka.
As inferred from a combination of thermal and grain growth modelling, dykes with widths of up to 1 m have only a restricted influence, with grain size deviations smaller than a factor of 1.1. To summarize, considerable grain size variations of up to one order of magnitude can result locally from second-order effects. Such effects require special attention when comparing experimentally derived grain growth kinetics with field studies.
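
The grain-size relation quoted above can be evaluated numerically. This sketch reads the Arrhenius factor as e^(−Q*/RT), which the garbled original leaves implicit, and uses placeholder parameter values, since the abstract stresses that reliable C2 and Q* are not yet available:

```python
import math

def mean_calcite_grain_size(T_kelvin, dp, fp, C2=1.0, Q_star=100e3, m_star=1.0):
    """Mean grain size D = C2 * exp(-Q*/(R*T)) * (dp/fp)**m* for a matrix
    whose growth is pinned by second-phase particles of mean size `dp`
    and volume fraction `fp`.  C2, Q* and m* here are placeholder values,
    not calibrated constants.
    """
    R = 8.314  # gas constant, J/(mol K)
    return C2 * math.exp(-Q_star / (R * T_kelvin)) * (dp / fp) ** m_star
```

The form captures the two trends the abstract describes: grain size grows with temperature, and a higher second-phase volume fraction (smaller dp/fp) pins the microstructure at a smaller grain size.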