972 results for Maximum entropy statistical estimate


Relevance: 100.00%

Abstract:

The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy for solving the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT is built to apply this strategy in a semiautomatic way. The application of UMESTRAT is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows the evaluation of this contribution for the K, L and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energy values at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle and energy, for all elements with Z = 1-92 in the energy range 1-150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung radiative contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of bremsstrahlung is created, and a new bremsstrahlung kernel is developed that allows the introduction of this contribution into the modified Boltzmann equation. An example of application to the simulation of a synchrotron experiment is shown.
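The maximum entropy strategy for inverse problems described above can be sketched in generic form: regularize a noisy linear inversion g = K f by maximizing the Shannon-Jaynes entropy of f against a flat default model. This is a minimal illustration with an invented smoothing kernel and a two-peak test spectrum, not the UMESTRAT code or its X-ray model.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical forward model g = K f: a Gaussian smoothing kernel blurs
# a two-peak "spectrum" f_true, and Gaussian noise is added.
n = 40
x = np.linspace(0.0, 1.0, n)
K = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2 * 0.05**2))
K /= K.sum(axis=1, keepdims=True)
f_true = np.exp(-((x - 0.3) ** 2) / 0.002) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.002)
sigma = 0.01
g = K @ f_true + sigma * rng.normal(size=n)

def neg_objective(u, alpha=0.05, m=1.0):
    """Minimize chi^2/1 - alpha*S; positivity enforced via f = exp(u)."""
    f = np.exp(u)
    resid = (K @ f - g) / sigma
    chi2 = 0.5 * resid @ resid
    entropy = np.sum(f - m - f * np.log(f / m))   # Shannon-Jaynes entropy
    grad_f = K.T @ resid / sigma + alpha * np.log(f / m)
    return chi2 - alpha * entropy, f * grad_f     # value and gradient in u

res = minimize(neg_objective, np.zeros(n), jac=True, method="L-BFGS-B")
f_hat = np.exp(res.x)   # MaxEnt estimate: positive and data-consistent
```

The exponential parametrization keeps the estimate strictly positive, which is the practical advantage of the entropic regularizer over plain least squares.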

Relevance: 100.00%

Abstract:

We report dramatic sensitivity enhancements in multidimensional MAS NMR spectra by the use of nonuniform sampling (NUS) and introduce maximum entropy interpolation (MINT) processing that assures the linearity between the time and frequency domains of the NUS acquired data sets. A systematic analysis of sensitivity and resolution in 2D and 3D NUS spectra reveals that with NUS, at least 1.5- to 2-fold sensitivity enhancement can be attained in each indirect dimension without compromising the spectral resolution. These enhancements are similar to or higher than those attained by the newest-generation commercial cryogenic probes. We explore the benefits of this NUS/MaxEnt approach in proteins and protein assemblies using 1-73-(U-C-13,N-15)/74-108-(U-N-15) Escherichia coli thioredoxin reassembly. We demonstrate that in thioredoxin reassembly, NUS permits acquisition of high-quality 3D-NCACX spectra, which are inaccessible with conventional sampling due to prohibitively long experiment times. Of critical importance, issues that hinder NUS-based SNR enhancement in 3D-NMR of liquids are mitigated in the study of solid samples, in which theoretical enhancements on the order of 3- to 4-fold are accessible by compounding the NUS-based SNR enhancement of each indirect dimension. NUS/MINT is anticipated to be widely applicable and advantageous for multidimensional heteronuclear MAS NMR spectroscopy of proteins, protein assemblies, and other biological systems.
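A non-uniform sampling schedule of the kind used above can be generated by biasing point selection toward early evolution times, where the decaying signal is strongest. This is a minimal sketch with illustrative parameters (grid size, T2, 50 % coverage), not the published schedule-construction procedure.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative 50% exponentially biased NUS schedule for one indirect dimension.
n_total = 256          # uniform grid points in t1
n_keep = 128           # 50% sampling
t2_points = 64.0       # assumed decay constant T2, in dwell-time units

t = np.arange(n_total)
weights = np.exp(-t / t2_points)        # "matched" exponential bias
weights /= weights.sum()

# Always retain t1 = 0, then draw the remaining points without replacement,
# favoring early evolution times.
rest = rng.choice(t[1:], size=n_keep - 1, replace=False,
                  p=weights[1:] / weights[1:].sum())
schedule = np.sort(np.concatenate(([0], rest)))
```

The resulting schedule concentrates measurements where the signal-to-noise per increment is highest, which is the origin of the per-dimension sensitivity gain.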

Relevance: 100.00%

Abstract:

Recent optimizations of NMR spectroscopy have focused on innovations in new hardware, such as novel probes and higher field strengths. Only recently has the potential to enhance the sensitivity of NMR through data acquisition strategies been investigated. This thesis has focused on the practice of enhancing the signal-to-noise ratio (SNR) of NMR using non-uniform sampling (NUS). After first establishing the concept and exact theory of compounding sensitivity enhancements in multiple non-uniformly sampled indirect dimensions, a new result was derived: NUS enhances both SNR and resolution at any given signal evolution time. In contrast, uniform sampling alternately optimizes SNR (t < 1.26T2) or resolution (t ~ 3T2), each at the expense of the other. Experiments were designed and conducted on a plant natural product to explore this behavior of NUS, in which the SNR and resolution continue to improve as acquisition time increases. Absolute sensitivity improvements of 1.5 and 1.9 are possible in each indirect dimension for matched and 2x-biased exponentially decaying sampling densities, respectively, at an acquisition time of ¿T2. Recommendations for breaking into the linear regime of maximum entropy (MaxEnt) reconstruction are proposed. Furthermore, examination of a novel sinusoidal sampling density resulted in improved line shapes in MaxEnt reconstructions of NUS data and enhancement comparable to a matched exponential sampling density. The Absolute Sample Sensitivity derived and demonstrated here for NUS holds great promise in expanding the adoption of non-uniform sampling.

Relevance: 100.00%

Abstract:

Non-uniform sampling (NUS) has been established as a route to obtaining true sensitivity enhancements when recording indirect dimensions of decaying signals in the same total experimental time as traditional uniform incrementation of the indirect evolution period. Theory and experiments have shown that NUS can yield up to two-fold improvements in the intrinsic signal-to-noise ratio (SNR) of each dimension, while even conservative protocols can yield 20-40 % improvements in the intrinsic SNR of NMR data. Applications of biological NMR that can benefit from these improvements are emerging, and in this work we develop some practical aspects of applying NUS nD-NMR to studies that approach the traditional detection limit of nD-NMR spectroscopy. Conditions for obtaining high NUS sensitivity enhancements are considered here in the context of enabling H-1,N-15-HSQC experiments on natural abundance protein samples and H-1,C-13-HMBC experiments on a challenging natural product. Through systematic studies we arrive at more precise guidelines to contrast sensitivity enhancements with reduced line shape constraints, and report an alternative sampling density based on a quarter-wave sinusoidal distribution that returns the highest fidelity we have seen to date in line shapes obtained by maximum entropy processing of non-uniformly sampled data.
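The quarter-wave sinusoidal sampling density mentioned above can be written down directly. This sketch compares it with a matched exponential density on an assumed 256-point grid with T2 = 64 points; these are illustrative values, not the acquisition parameters used in the work.

```python
import numpy as np

n = 256
t = np.arange(n)

# Quarter-wave sinusoidal density: follows cos(theta) over a quarter period,
# equal to 1 at t = 0 and falling to 0 at the last grid point.
quarter_wave = np.cos(0.5 * np.pi * t / (n - 1))

# Matched exponential density for an assumed T2 of 64 dwell units.
matched_exp = np.exp(-t / 64.0)

# Normalize both to probability densities over the grid.
quarter_wave /= quarter_wave.sum()
matched_exp /= matched_exp.sum()
```

Relative to the matched exponential, the quarter-wave density retains more weight at late evolution times, which is consistent with the improved line-shape fidelity reported for it.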

Relevance: 100.00%

Abstract:

Recently, we have demonstrated that considerable inherent sensitivity gains are attained in MAS NMR spectra acquired by nonuniform sampling (NUS) and introduced maximum entropy interpolation (MINT) processing that assures the linearity of transformation between the time and frequency domains. In this report, we examine the utility of the NUS/MINT approach in multidimensional datasets possessing high dynamic range, such as homonuclear C-13-C-13 correlation spectra. We demonstrate on model compounds and on 1-73-(U-C-13,N-15)/74-108-(U-N-15) E. coli thioredoxin reassembly, that with appropriately constructed 50 % NUS schedules inherent sensitivity gains of 1.7-2.1-fold are readily reached in such datasets. We show that both linearity and line width are retained under these experimental conditions throughout the entire dynamic range of the signals. Furthermore, we demonstrate that the reproducibility of the peak intensities is excellent in the NUS/MINT approach when experiments are repeated multiple times and identical experimental and processing conditions are employed. Finally, we discuss the principles for design and implementation of random exponentially biased NUS sampling schedules for homonuclear C-13-C-13 MAS correlation experiments that yield high-quality artifact-free datasets.

Relevance: 100.00%

Abstract:

We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the MEM. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally, we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimate at T = 2.33 T_C.
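The contrast with the MEM can be made explicit. In standard MEM notation the posterior combines the likelihood with an entropic prior built on the Shannon-Jaynes entropy; the improved prior described above replaces it with a functional that has no asymptotically flat directions in ρ. The second expression is a reconstruction from the abstract's description, not quoted from the paper itself.

```latex
% Standard MEM posterior: likelihood times entropic prior with the
% Shannon-Jaynes entropy (alpha is the hyperparameter, m the default model)
P[\rho\,|\,D,m,\alpha] \propto \exp\!\big(L[\rho,D] + \alpha\, S_{\mathrm{SJ}}[\rho,m]\big),
\qquad
S_{\mathrm{SJ}} = \int \mathrm{d}\omega \left(\rho - m - \rho \ln\frac{\rho}{m}\right).

% Improved prior functional without flat directions in \rho
% (reconstruction based on the description in the abstract):
S_{\mathrm{BR}} = \int \mathrm{d}\omega \left(1 - \frac{\rho}{m} + \ln\frac{\rho}{m}\right).
```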

Relevance: 100.00%

Abstract:

The extraction of the finite temperature heavy quark potential from lattice QCD relies on a spectral analysis of the Wilson loop. General arguments tell us that the lowest-lying spectral peak encodes, through its position and shape, the real and imaginary parts of this complex potential. Here we benchmark this extraction strategy using leading-order hard-thermal-loop (HTL) calculations. In other words, we analytically calculate the Wilson loop and determine the corresponding spectrum. By fitting its lowest-lying peak we obtain the real and imaginary parts and confirm that knowledge of the lowest peak alone is sufficient for obtaining the potential. Access to the full spectrum allows an investigation of spectral features that do not contribute to the potential but can pose a challenge to numerical attempts at analytic continuation from imaginary time data. Differences in these contributions between the Wilson loop and gauge-fixed Wilson line correlators are discussed. To better understand the difficulties in a numerical extraction, we deploy the maximum entropy method with an extended search space on HTL correlators in Euclidean time and observe how well the known spectral function and values for the real and imaginary parts are reproduced. Possible avenues for improvement of the extraction strategy are discussed.
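The peak-fitting step described above can be illustrated with mock data: a Lorentzian ansatz whose position plays the role of Re V and whose width plays the role of Im V. This is a common simplified parametrization; the actual HTL peak has a skewed shape, and the spectrum below is synthetic, not HTL data.

```python
import numpy as np
from scipy.optimize import curve_fit

def lorentzian(w, pos, gamma, amp):
    # Peak position "pos" ~ Re V, half-width "gamma" ~ Im V.
    return amp * gamma / ((w - pos) ** 2 + gamma ** 2)

# Mock spectrum with known "potential" plus small Gaussian noise.
omega = np.linspace(-2.0, 2.0, 400)
re_v, im_v = 0.3, 0.12
rng = np.random.default_rng(4)
data = lorentzian(omega, re_v, im_v, 1.0) + 0.01 * rng.normal(size=omega.size)

# Fit the peak and read off the real and imaginary parts.
popt, _ = curve_fit(lorentzian, omega, data, p0=(0.0, 0.5, 1.0))
fitted_re, fitted_im = popt[0], abs(popt[1])
```

With clean, well-resolved data the fit recovers both parameters accurately; the hard part in practice, as the abstract notes, is obtaining a reliable spectrum from Euclidean data in the first place.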

Relevance: 100.00%

Abstract:

We present a novel approach for the reconstruction of spectra from Euclidean correlator data that makes close contact with modern Bayesian concepts. It is based upon an axiomatically justified dimensionless prior distribution, which in the case of a constant prior function m(ω) imprints only smoothness on the reconstructed spectrum. In addition, we are able to analytically integrate out the only relevant overall hyperparameter α in the prior, removing the necessity for Gaussian approximations found e.g. in the maximum entropy method. Using a quasi-Newton minimizer and high-precision arithmetic, we are then able to find the unique global extremum of P[ρ|D] in the full Nω » Nτ dimensional search space. The method yields gradually improving reconstruction results as the quality of the supplied input data increases, without introducing the artificial peak structures often encountered in the MEM. To support these statements we present mock data analyses for the case of zero-width delta peaks and more realistic scenarios, based on the perturbative Euclidean Wilson loop as well as the Wilson line correlator in Coulomb gauge.

Relevance: 100.00%

Abstract:

This paper presents a shallow dialogue analysis model, aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for solving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
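A maximum entropy classifier of the kind used for dialogue acts is equivalent to multinomial logistic regression over indicator features. The following toy sketch, with invented utterances and act labels rather than the paper's data, trains one by gradient ascent on the conditional log-likelihood.

```python
import numpy as np

# Toy dialogue-act training data (illustrative, not the paper's corpus).
utterances = [
    ("are we agreed on the schedule", "question"),
    ("what time is the meeting", "question"),
    ("yes that works for me", "agreement"),
    ("i agree with that proposal", "agreement"),
    ("let us move to the next item", "statement"),
    ("the budget report is ready", "statement"),
]
acts = sorted({a for _, a in utterances})
vocab = sorted({w for u, _ in utterances for w in u.split()})

def featurize(text):
    # Bag-of-words indicator features, the classic maxent feature template.
    v = np.zeros(len(vocab))
    for w in text.split():
        if w in vocab:
            v[vocab.index(w)] = 1.0
    return v

X = np.array([featurize(u) for u, _ in utterances])
y = np.array([acts.index(a) for _, a in utterances])

# Gradient ascent on the conditional log-likelihood (softmax model).
W = np.zeros((len(acts), len(vocab)))
for _ in range(500):
    scores = X @ W.T
    p = np.exp(scores - scores.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    onehot = np.eye(len(acts))[y]
    W += 0.5 * (onehot - p).T @ X / len(y)

def classify(text):
    return acts[int(np.argmax(featurize(text) @ W.T))]
```

The maxent formulation makes it easy to add overlapping features (n-grams, prosodic cues, position in the meeting) without independence assumptions, which is why it suits dialogue-act tagging.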

Relevance: 100.00%

Abstract:

We compare six high-resolution Holocene sediment cores along a S-N transect on the Norwegian-Svalbard continental margin from ca 60°N to 77.4°N, northern North Atlantic. Planktonic foraminifera in the cores were investigated to show the changes in upper surface and subsurface water mass distribution and properties, including summer sea-surface temperatures (SST). The cores are located below the axis of the Norwegian Current and the West Spitsbergen Current, which today transport warm Atlantic Water to the Arctic. Sediment accumulation rates are generally high at all the core sites, allowing for a temporal resolution of 10-10² years. SST is reconstructed using different types of transfer functions, resulting in very similar SST trends, with deviations of no more than ±1.0-1.5 °C. A transfer function based on the maximum likelihood statistical approach is found to be most relevant. The reconstruction documents an abrupt change in planktonic foraminiferal faunal composition and an associated warming at the Younger Dryas-Preboreal transition. The earliest part of the Holocene was characterized by large temperature variability, including the Preboreal Oscillations and the 8.2 k event. In general, the early Holocene was characterized by SSTs similar to those of today in the south and warmer than today in the north, and a smaller S-N temperature gradient (0.23 °C/°N) compared to the present temperature gradient (0.46 °C/°N). The southern proxy records (60-69°N) were more strongly influenced by slightly cooler subsurface water, probably due to the seasonality of the orbital forcing and increased stratification due to freshening. The northern records (72-77.4°N) display a millennial-scale change associated with reduced insolation and a gradual weakening of the North Atlantic thermohaline circulation (THC).
The observed northwards amplification of the early Holocene warming is comparable to the pattern of recent global warming and future climate modelling, which predicts greater warming at higher latitudes. The overall trend during the mid- and late Holocene was a cooling in the north, stable or weak warming in the south, and a maximum S-N SST gradient of ca 0.7 °C/°N at 5000 cal. years BP. Superimposed on this trend were several abrupt temperature shifts. Four of these shifts, dated to 9000-8000, 5500-3000, 1000, and ~400 cal. years BP, appear to be global, as they correlate with periods of global climate change. In general, there is a good correlation between the northern North Atlantic temperature records and climate records from Norway and Svalbard.

Relevance: 100.00%

Abstract:

The purpose of this study was: (1) to make an attempt at finding a stratification of the snowpack in order to help remove ambiguities in dating the snow layers by standard methods; (2) to verify the depth at which the transition between firn and ice occurs. Clearly the first goal was missed, the structural information in a temperate firn being strongly smoothed out in time. Interesting details, such as horizontal ice lenses and layers of "cold snow", were nevertheless revealed. In spite of strong variations of density, the gravimetric density ρ_G and the ice density ρ_I, computed from point density, are identical for the firn pack between Z = 2.0 m and 6.0 m: ρ = 0.522 ± 0.034 × 10³ kg/m³. The ice density of 0.8 × 10³ kg/m³, the assumed transition between firn and ice, was found to occur at a depth of Z = 19 m. Even at this level, rather important variations in density may be localized: between Z = 19 m and 21 m, the ice density varies from 0.774 × 10³ to 0.860 × 10³ kg/m³.

Relevance: 100.00%

Abstract:

Fragilariopsis kerguelensis, a dominant diatom species throughout the Antarctic Circumpolar Current, is considered to be one of the main drivers of the biological silicate pump. Here, we study the distribution of this important species and the expected consequences of climate change upon it, using correlative species distribution modeling (SDM) and publicly available presence-only data. As experience with SDM is scarce for marine phytoplankton, this also serves as a pilot study for this organism group. We used the maximum entropy method to calculate distribution models for the diatom F. kerguelensis based on yearly and monthly environmental data (sea surface temperature, salinity, nitrate and silicate concentrations). Observation data were harvested from GBIF and the Global Diatom Database, and for further analyses also from the Hustedt Diatom Collection (BRM). The models were projected on current yearly and seasonal environmental data to study current distribution and its seasonality. Furthermore, we projected the seasonal model on future environmental data obtained from climate models for the year 2100. Projected on current yearly averaged environmental data, all models showed similar distribution patterns for F. kerguelensis. The monthly model showed seasonality, for example, a shift of the southern distribution boundary toward the north in the winter. Projections on future scenarios resulted in a moderately to negligibly shrinking distribution area and a change in seasonality. We found a substantial bias in the publicly available observation datasets, which could be reduced by additional observation records we obtained from the Hustedt Diatom Collection. Present-day distribution patterns inferred from the models coincided well with background knowledge and previous reports about F. kerguelensis distribution, showing that maximum entropy-based distribution models are suitable to map distribution patterns for oceanic planktonic organisms.
Our scenario projections indicate moderate effects of climate change upon the biogeography of F. kerguelensis.
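The maximum entropy idea behind such distribution models can be sketched as follows: over a grid of candidate cells, find the Gibbs distribution whose environmental feature expectations match those observed at presence cells. The features, presences, and parameters below are synthetic, not the F. kerguelensis data or the Maxent software itself.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic grid of ocean cells with two standardized environmental features.
n_cells = 500
sst = rng.uniform(-2.0, 20.0, n_cells)            # sea surface temperature
sal = rng.uniform(33.0, 36.0, n_cells)            # salinity
features = np.column_stack([sst, sal])
features = (features - features.mean(0)) / features.std(0)

# Synthetic presence records: the species prefers cold water.
presence = np.where(sst < 5.0)[0][:40]
target = features[presence].mean(axis=0)          # empirical feature means

# Gradient ascent on the maxent dual: adjust lambda until the Gibbs
# distribution's feature expectations match the presence-cell means.
lam = np.zeros(2)
for _ in range(2000):
    w = np.exp(features @ lam)
    p = w / w.sum()                               # Gibbs distribution on cells
    lam += 0.1 * (target - p @ features)

suitability = p                                   # relative habitat suitability
```

The fitted distribution concentrates probability on cells whose environment resembles the presence records, which is exactly how Maxent-style models turn presence-only data into a habitat suitability map.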

Relevance: 100.00%

Abstract:

The crabeater seal (Lobodon carcinophaga) is the most abundant Antarctic seal and inhabits the circumpolar pack ice zone of the Southern Ocean. Until now, information on important environmental factors affecting its distribution as well as on foraging behaviour has been limited. In austral summer 1998, 12 crabeater seals of both sexes and different age classes were equipped with satellite-linked dive recorders at Drescher Inlet (72.85°S, 19.26°E), eastern Weddell Sea. To identify suitable habitat conditions within the Weddell Sea, a maximum entropy (Maxent) modelling approach was implemented. The model revealed that the eastern and southern Weddell Sea is especially suitable for crabeater seals. Distance to the continental shelf break and sea ice concentration were the two most important parameters in modelling species distribution throughout the study period. Model predictions demonstrated that crabeater seals showed a dynamic response to their seasonally changing environment emphasized by the favoured sea ice conditions. Crabeater seals utilized ice-free waters substantially, which is potentially explained by the comparatively low sea ice cover of the Weddell Sea during summer 1998. Diving behaviour was characterized by short (>90 % between 0 and 4 min) and shallow (>90 % between 0 and 51 m) dives. This pattern reflects the typical summer and autumn foraging behaviour of crabeater seals. Both the distribution and foraging behaviour corresponded well with the life history of the Antarctic krill (Euphausia superba), the preferred prey of crabeater seals. In general, predicted suitable habitat conditions were congruent with probable habitats of krill, which emphasizes the strong dependence on their primary prey.

Relevance: 100.00%

Abstract:

With the Bonner sphere spectrometer, the neutron spectrum is obtained through an unfolding procedure. Monte Carlo methods, regularization, parametrization, least-squares, and maximum entropy are some of the techniques utilized for unfolding. In the last decade, methods based on artificial intelligence have been used; approaches based on genetic algorithms and artificial neural networks have been developed in order to overcome the drawbacks of previous techniques. Nevertheless, despite their advantages, artificial neural networks still have some drawbacks, mainly in the design process of the network, e.g. the optimum selection of the architectural and learning ANN parameters. In recent years, hybrid technologies combining artificial neural networks and genetic algorithms have been utilized. In this work, several ANN topologies were trained and tested using artificial neural networks and genetically evolved artificial neural networks, with the aim of unfolding neutron spectra from the count rates of a Bonner sphere spectrometer. A comparative study of both procedures has been carried out.
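The ANN-based unfolding idea can be sketched end to end: simulate count rates from a known response matrix, then train a small network to map count rates back to spectrum bins. The response matrix, training spectra, and network sizes below are invented for illustration, not a calibrated Bonner sphere response or the genetically evolved topologies studied in the work.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic 7-sphere, 10-energy-bin response matrix (illustrative only).
n_spheres, n_bins = 7, 10
R = rng.uniform(0.1, 1.0, (n_spheres, n_bins))

def random_spectra(n):
    # Smooth positive spectra built from two Gaussian bumps each.
    centers = rng.uniform(0, n_bins - 1, (n, 2))
    grid = np.arange(n_bins)
    return np.exp(-(grid[None, None, :] - centers[:, :, None]) ** 2 / 4.0).sum(axis=1)

phi = random_spectra(1000)          # training spectra
counts = phi @ R.T                  # simulated Bonner sphere count rates
scale = counts.max()
x = counts / scale                  # crude input scaling

# One hidden layer, trained by plain full-batch gradient descent on MSE.
h, lr = 32, 0.05
W1 = rng.normal(0.0, 0.3, (n_spheres, h)); b1 = np.zeros(h)
W2 = rng.normal(0.0, 0.3, (h, n_bins));    b2 = np.zeros(n_bins)
losses = []
for _ in range(2000):
    a = np.tanh(x @ W1 + b1)
    pred = a @ W2 + b2
    err = pred - phi
    losses.append(float((err ** 2).mean()))
    da = (err @ W2.T) * (1.0 - a ** 2)          # backprop through tanh
    W2 -= lr * (a.T @ err) / len(x); b2 -= lr * err.mean(axis=0)
    W1 -= lr * (x.T @ da) / len(x);  b1 -= lr * da.mean(axis=0)

def unfold(count_rates):
    # Map measured count rates to an estimated spectrum.
    a = np.tanh((count_rates / scale) @ W1 + b1)
    return a @ W2 + b2
```

A genetic algorithm, as described in the abstract, would sit on top of this: evolving the hidden-layer size, learning rate, and initialization instead of fixing them by hand as done here.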