960 results for "Maximum-entropy selection criterion"
Abstract:
The goal of this thesis is to analyze the possibility of using early-type galaxies (ETGs) to place evolutionary and cosmological constraints, both by disentangling whether mass or environment is the main driver of ETG evolution, and by developing a technique to constrain H(z) and the cosmological parameters from the ETG age-redshift relation. The (U-V) rest-frame color distribution is studied as a function of mass and environment for two samples of ETGs up to z=1, extracted from the zCOSMOS survey with a new selection criterion. The color distributions and the slopes of the color-mass and color-environment relations show a strong dependence on mass and a minor dependence on environment. A spectral analysis of the D4000 and Hδ features validates these results. The main driver of galaxy evolution is found to be galaxy mass, with environment playing a subdominant but non-negligible role. The age distribution of ETGs as a function of mass provides strong evidence supporting a downsizing scenario. The possibility of setting cosmological constraints from the age-redshift relation is then studied, discussing the relevant degeneracies and model dependencies. A new approach is developed to minimize the impact of systematics on the "cosmic chronometer" method. From theoretical models it is demonstrated that the D4000 break correlates almost linearly with age at fixed metallicity, with only a minor dependence on the assumed models or star formation history. The analysis of an SDSS sample of ETGs shows that the differential D4000 evolution of the galaxies can be used to place constraints on cosmological parameters in an almost model-independent way. The resulting values of the Hubble constant and of the dark energy equation-of-state parameter are not only fully compatible with the latest results but also have a comparable error budget.
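The cosmic-chronometer approach mentioned in this abstract rests on the relation H(z) = -1/(1+z) dz/dt, so that differential ages of passively evolving galaxies at nearby redshifts yield H(z) directly. A minimal numerical sketch of the differential estimator (the cosmology, redshift grid, and mock ages below are illustrative assumptions, not values from the thesis):

```python
import numpy as np

# Assumed flat-LCDM "truth" used only to generate mock galaxy ages.
H0 = 70.0                 # km/s/Mpc
Om = 0.3
KM_S_MPC_TO_GYR = 977.8   # converts 1/(km/s/Mpc) to Gyr

def hubble(z):
    return H0 * np.sqrt(Om * (1 + z) ** 3 + 1 - Om)

# Mock "cosmic chronometers": ages of passive galaxies sampled on a redshift grid.
z = np.linspace(0.1, 1.0, 10)
# dt/dz = -1 / ((1+z) H(z)); the zero-point of the ages cancels in differences.
dt_dz = -KM_S_MPC_TO_GYR / ((1 + z) * hubble(z))

# Differential estimator: H(z) = -1/(1+z) * dz/dt, from finite age differences.
z_mid = 0.5 * (z[1:] + z[:-1])
dz = np.diff(z)
dt = dt_dz[:-1] * dz                      # first-order mock age differences
H_est = -KM_S_MPC_TO_GYR / ((1 + z_mid) * (dt / dz))

print(H_est)                              # recovers H(z) up to discretization error
```

The point of the sketch is that only age *differences* enter, which is what makes the method nearly model-independent.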
Abstract:
The first part of this work deals with the solution of the inverse problem in the field of X-ray spectroscopy. An original strategy to solve the inverse problem using the maximum entropy principle is illustrated, and the code UMESTRAT is built to apply the described strategy in a semiautomatic way. The application of UMESTRAT is shown with a computational example. The second part of this work deals with the improvement of the X-ray Boltzmann model by studying two radiative interactions neglected in current photon models. First, the characteristic line emission due to Compton ionization is studied. A strategy is developed that allows the evaluation of this contribution for the K, L, and M shells of all elements with Z from 11 to 92. The single-shell Compton/photoelectric ratio is evaluated as a function of the primary photon energy, and the energies at which the Compton interaction becomes the prevailing ionization process for the considered shells are derived. Finally, a new kernel for XRF from Compton ionization is introduced. Second, the bremsstrahlung radiative contribution due to secondary electrons is characterized in terms of space, angle, and energy, for all elements with Z=1-92 in the energy range 1-150 keV, using the Monte Carlo code PENELOPE. It is demonstrated that the bremsstrahlung contribution can be well approximated by an isotropic point photon source. A data library comprising the energy distributions of the bremsstrahlung is created, and a new bremsstrahlung kernel is developed that allows the introduction of this contribution into the modified Boltzmann equation. An example application to the simulation of a synchrotron experiment is shown.
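Maximum-entropy regularization of a linear inverse problem, as invoked in this abstract, can be sketched generically: given data d = K f + noise, one minimizes χ²/2 - αS, where S is the Shannon-Jaynes entropy relative to a default model. The toy kernel, default model, and weight α below are invented for illustration; this is not the UMESTRAT implementation:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy linear inverse problem d = K f + noise (kernel and truth are illustrative).
n, m = 40, 60
xs = np.linspace(0, 1, m)
ts = np.linspace(0, 1, n)
K = np.exp(-((ts[:, None] - xs[None, :]) ** 2) / 0.01)          # smoothing kernel
f_true = np.exp(-((xs - 0.3) ** 2) / 0.005) \
       + 0.5 * np.exp(-((xs - 0.7) ** 2) / 0.002)
sigma = 0.05
d = K @ f_true + rng.normal(0, sigma, n)

mdl = np.full(m, f_true.mean())   # flat default model
alpha = 0.1                        # regularization weight (hand-picked)

def neg_posterior(u):
    f = np.exp(np.clip(u, -30, 10))       # log-space keeps f positive
    chi2 = np.sum((K @ f - d) ** 2) / sigma**2
    S = np.sum(f - mdl - f * np.log(f / mdl))   # Shannon-Jaynes entropy
    return 0.5 * chi2 - alpha * S

res = minimize(neg_posterior, np.log(mdl), method="L-BFGS-B")
f_map = np.exp(res.x)
print(float(np.sum((K @ f_map - d) ** 2) / (n * sigma**2)))     # reduced chi2
```

The entropy term pulls the solution toward the default model wherever the data do not constrain it, which is the defining behavior of a maximum-entropy inversion.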
Abstract:
We report dramatic sensitivity enhancements in multidimensional MAS NMR spectra by the use of nonuniform sampling (NUS) and introduce maximum entropy interpolation (MINT) processing that assures the linearity between the time and frequency domains of the NUS-acquired data sets. A systematic analysis of sensitivity and resolution in 2D and 3D NUS spectra reveals that with NUS, at least 1.5- to 2-fold sensitivity enhancement can be attained in each indirect dimension without compromising the spectral resolution. These enhancements are similar to or higher than those attained by the newest-generation commercial cryogenic probes. We explore the benefits of this NUS/MaxEnt approach in proteins and protein assemblies using 1-73-(U-C-13,N-15)/74-108-(U-N-15) Escherichia coli thioredoxin reassembly. We demonstrate that in thioredoxin reassembly, NUS permits acquisition of high-quality 3D-NCACX spectra, which are inaccessible with conventional sampling due to prohibitively long experiment times. Of critical importance, issues that hinder NUS-based SNR enhancement in 3D NMR of liquids are mitigated in the study of solid samples, in which theoretical enhancements on the order of 3-4-fold are accessible by compounding the NUS-based SNR enhancement of each indirect dimension. NUS/MINT is anticipated to be widely applicable and advantageous for multidimensional heteronuclear MAS NMR spectroscopy of proteins, protein assemblies, and other biological systems.
Abstract:
Recent optimizations of NMR spectroscopy have focused on innovations in hardware, such as novel probes and higher field strengths. Only recently has the potential to enhance the sensitivity of NMR through data acquisition strategies been investigated. This thesis has focused on enhancing the signal-to-noise ratio (SNR) of NMR using non-uniform sampling (NUS). After first establishing the concept and exact theory of compounding sensitivity enhancements in multiple non-uniformly sampled indirect dimensions, a new result was derived: NUS enhances both SNR and resolution at any given signal evolution time. In contrast, uniform sampling alternately optimizes SNR (t < 1.26T2) or resolution (t ~ 3T2), each at the expense of the other. Experiments were designed and conducted on a plant natural product to explore this behavior of NUS, in which the SNR and resolution continue to improve as acquisition time increases. Absolute sensitivity improvements of 1.5 and 1.9 are possible in each indirect dimension for matched and 2x-biased exponentially decaying sampling densities, respectively, at an acquisition time of ¿T2. Recommendations for breaking into the linear regime of maximum entropy (MaxEnt) reconstruction are proposed. Furthermore, examination of a novel sinusoidal sampling density resulted in improved line shapes in MaxEnt reconstructions of NUS data and enhancement comparable to a matched exponential sampling density. The Absolute Sample Sensitivity derived and demonstrated here for NUS holds great promise in expanding the adoption of non-uniform sampling.
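The t < 1.26T2 optimum quoted above can be checked in a few lines with a standard back-of-the-envelope model (not the thesis's full treatment): for a uniformly sampled exponential decay, the coherently added signal grows as the integral of the envelope while the accumulated noise grows as the square root of the acquisition time.

```python
import numpy as np

# Cumulative SNR of a signal decaying as exp(-t/T2), uniformly sampled to time t,
# with time measured in units of T2.
t = np.linspace(0.01, 5.0, 5000)
snr = (1.0 - np.exp(-t)) / np.sqrt(t)   # ∝ ∫_0^t e^{-τ} dτ / sqrt(t)

t_opt = t[np.argmax(snr)]
print(round(t_opt, 2))                  # ≈ 1.26, the quoted SNR optimum
```

Setting the derivative to zero gives e^x = 2x + 1 with x = t/T2, whose root is x ≈ 1.256, matching the grid search.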
Performance Tuning Non-Uniform Sampling for Sensitivity Enhancement of Signal-Limited Biological NMR
Abstract:
Non-uniform sampling (NUS) has been established as a route to obtaining true sensitivity enhancements when recording indirect dimensions of decaying signals in the same total experimental time as traditional uniform incrementation of the indirect evolution period. Theory and experiments have shown that NUS can yield up to two-fold improvements in the intrinsic signal-to-noise ratio (SNR) of each dimension, while even conservative protocols can yield 20-40 % improvements in the intrinsic SNR of NMR data. Applications of biological NMR that can benefit from these improvements are emerging, and in this work we develop some practical aspects of applying NUS nD-NMR to studies that approach the traditional detection limit of nD-NMR spectroscopy. Conditions for obtaining high NUS sensitivity enhancements are considered here in the context of enabling H-1,N-15-HSQC experiments on natural abundance protein samples and H-1,C-13-HMBC experiments on a challenging natural product. Through systematic studies we arrive at more precise guidelines to contrast sensitivity enhancements with reduced line shape constraints, and report an alternative sampling density based on a quarter-wave sinusoidal distribution that returns the highest fidelity we have seen to date in line shapes obtained by maximum entropy processing of non-uniformly sampled data.
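The exponentially biased and quarter-wave sinusoidal sampling densities discussed in these abstracts can be turned into concrete NUS schedules by weighted sampling without replacement. A hedged sketch (the grid size, T2, and the 50 % fraction are illustrative choices, not values from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def nus_schedule(n_total, fraction, density):
    """Pick `fraction` of the indirect-dimension increments 0..n_total-1,
    without replacement, with probability proportional to `density(t)`."""
    t = np.arange(n_total)
    p = density(t)
    p = p / p.sum()
    n_keep = int(round(fraction * n_total))
    return np.sort(rng.choice(t, size=n_keep, replace=False, p=p))

T2 = 64.0  # signal decay constant in units of the dwell time (assumed value)

# Matched exponential density: weight ~ exp(-t/T2), mirroring the signal decay.
exp_sched = nus_schedule(256, 0.5, lambda t: np.exp(-t / T2))

# Quarter-wave sinusoidal density: cos(pi*t/(2*n)), falling from 1 toward 0.
sin_sched = nus_schedule(256, 0.5, lambda t: np.cos(np.pi * t / (2 * 256)))

print(len(exp_sched), exp_sched[:8])
```

Both densities concentrate measurements at early evolution times where the signal is strongest, which is the origin of the intrinsic SNR gains described above.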
Abstract:
Recently, we have demonstrated that considerable inherent sensitivity gains are attained in MAS NMR spectra acquired by nonuniform sampling (NUS) and introduced maximum entropy interpolation (MINT) processing that assures the linearity of transformation between the time and frequency domains. In this report, we examine the utility of the NUS/MINT approach in multidimensional datasets possessing high dynamic range, such as homonuclear C-13-C-13 correlation spectra. We demonstrate on model compounds and on 1-73-(U-C-13,N-15)/74-108-(U-N-15) E. coli thioredoxin reassembly, that with appropriately constructed 50 % NUS schedules inherent sensitivity gains of 1.7-2.1-fold are readily reached in such datasets. We show that both linearity and line width are retained under these experimental conditions throughout the entire dynamic range of the signals. Furthermore, we demonstrate that the reproducibility of the peak intensities is excellent in the NUS/MINT approach when experiments are repeated multiple times and identical experimental and processing conditions are employed. Finally, we discuss the principles for design and implementation of random exponentially biased NUS sampling schedules for homonuclear C-13-C-13 MAS correlation experiments that yield high-quality artifact-free datasets.
Abstract:
We present a novel approach to the inference of spectral functions from Euclidean time correlator data that makes close contact with modern Bayesian concepts. Our method differs significantly from the maximum entropy method (MEM). A new set of axioms is postulated for the prior probability, leading to an improved expression which is devoid of the asymptotically flat directions present in the Shannon-Jaynes entropy. Hyperparameters are integrated out explicitly, liberating us from the Gaussian approximations underlying the evidence approach of the maximum entropy method. We present a realistic test of our method in the context of the nonperturbative extraction of the heavy quark potential. Based on hard-thermal-loop correlator mock data, we establish firm requirements on the number of data points and their accuracy for a successful extraction of the potential from lattice QCD. Finally, we reinvestigate quenched lattice QCD correlators from a previous study and provide an improved potential estimate at T ≈ 2.33 T_C.
Abstract:
The extraction of the finite temperature heavy quark potential from lattice QCD relies on a spectral analysis of the Wilson loop. General arguments tell us that the lowest-lying spectral peak encodes, through its position and shape, the real and imaginary parts of this complex potential. Here we benchmark this extraction strategy using leading-order hard-thermal-loop (HTL) calculations. In other words, we analytically calculate the Wilson loop and determine the corresponding spectrum. By fitting its lowest-lying peak we obtain the real and imaginary parts and confirm that knowledge of the lowest peak alone is sufficient for obtaining the potential. Access to the full spectrum allows an investigation of spectral features that do not contribute to the potential but can pose a challenge to numerical attempts at an analytic continuation from imaginary-time data. Differences in these contributions between the Wilson loop and gauge-fixed Wilson line correlators are discussed. To better understand the difficulties in a numerical extraction we deploy the maximum entropy method with extended search space to HTL correlators in Euclidean time and observe how well the known spectral function and values for the real and imaginary parts are reproduced. Possible avenues for improvement of the extraction strategy are discussed.
Abstract:
We present a novel approach for the reconstruction of spectra from Euclidean correlator data that makes close contact with modern Bayesian concepts. It is based upon an axiomatically justified dimensionless prior distribution which, in the case of a constant prior function m(ω), only imprints smoothness on the reconstructed spectrum. In addition, we are able to analytically integrate out the only relevant overall hyperparameter α in the prior, removing the necessity for the Gaussian approximations found, e.g., in the Maximum Entropy Method. Using a quasi-Newton minimizer and high-precision arithmetic, we are then able to find the unique global extremum of P[ρ|D] in the full Nω ≫ Nτ dimensional search space. The method yields gradually improving reconstruction results as the quality of the supplied input data increases, without introducing the artificial peak structures often encountered in the MEM. To support these statements we present mock data analyses for the case of zero-width delta peaks and for more realistic scenarios, based on the perturbative Euclidean Wilson loop as well as the Wilson line correlator in Coulomb gauge.
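For reference, the two regulating functionals contrasted across these abstracts can be written in common notation (reproduced from memory of the Bayesian-reconstruction literature, so signs and normalization conventions should be checked against the original papers). The MEM uses the Shannon-Jaynes entropy, while the improved dimensionless prior takes the form

```latex
S_{\mathrm{SJ}} = \int \mathrm{d}\omega \left( \rho(\omega) - m(\omega)
  - \rho(\omega)\,\ln\frac{\rho(\omega)}{m(\omega)} \right),
\qquad
S_{\mathrm{BR}} = \int \mathrm{d}\omega \left( 1 - \frac{\rho(\omega)}{m(\omega)}
  + \ln\frac{\rho(\omega)}{m(\omega)} \right),
```

with both entering the posterior as P[ρ|D,m] ∝ exp(-L + αS); the key difference is that S_BR penalizes ρ → 0 and ρ → ∞ symmetrically in ρ/m, removing the flat directions of S_SJ, and its hyperparameter α can be integrated out analytically.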
Abstract:
The stability of a triple helix formed between a DNA duplex and an incoming oligonucleotide strand strongly depends on the solvent conditions and on intrinsic chemical and conformational factors. Attempts to increase triple helix stability in the past included chemical modification of the backbone, sugar ring, and bases in the third strand. However, the predictive power of such modifications is still rather poor. We therefore developed a method that allows rapid screening of conformationally diverse third-strand oligonucleotides for triplex stability in the parallel pairing motif to a given DNA double helix sequence. Combinatorial libraries of oligonucleotides of the requisite (fixed) base composition and length that vary in their sugar unit (ribose or deoxyribose) at each position were generated. After affinity chromatography against their corresponding immobilized DNA target duplex, utilizing a temperature gradient as the selection criterion, the oligonucleotides forming the most stable triple helices were selected and characterized by physicochemical methods. Thus, a series of oligonucleotides was identified that allowed us to define basic rules for triple helix stability in this conformationally diverse system. It was found that ribocytidines in the third strand increase triplex stability relative to deoxyribocytidines, independently of the neighboring bases and of position along the strand. However, remarkable sequence-dependent differences in stability were found for (deoxy)thymidines and uridines.
Abstract:
Our knowledge of the many aspects of mammalian reproduction in general, and of equine reproduction in particular, has greatly increased during the last 15 years. Advances in the understanding of the physiology, cell biology, and biochemistry of reproduction have facilitated genetic analyses of fertility. Currently, more than 200 genes are known to be involved in the production of fertile sperm cells. The completion of a number of mammalian genome projects will aid the investigation of these genes in different species. Great progress has been made in understanding the genetic aberrations that lead to male infertility. Additionally, the first genetic mechanisms contributing to the quantitative variation of fertility traits in fertile male animals are being discovered. As artificial insemination (AI) is a widespread technology in horse breeding, semen quality traits may eventually become an additional selection criterion for breeding stallions. Current research aims to identify genetic markers that correlate with these semen quality traits. Here, we review the current state of genetic research in male fertility and offer some perspectives for future research in horses.
Abstract:
This paper presents a shallow dialogue analysis model aimed at human-human dialogues in the context of staff or business meetings. Four components of the model are defined, and several machine learning techniques are used to extract features from dialogue transcripts: maximum entropy classifiers for dialogue acts, latent semantic analysis for topic segmentation, and decision tree classifiers for discourse markers. A rule-based approach is proposed for resolving cross-modal references to meeting documents. The methods are trained and evaluated on a common data set and annotation format. The integration of the components into an automated shallow dialogue parser opens the way to multimodal meeting processing and retrieval applications.
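A maximum entropy classifier over sparse text features, as used here for dialogue acts, is mathematically equivalent to multinomial logistic regression. A toy sketch of that component (the utterances, labels, and bag-of-words features are invented placeholders, far simpler than the meeting-corpus features used in the paper):

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny hypothetical dialogue-act corpus (labels and sentences are made up).
utterances = [
    "could you open the report", "please send the file",
    "what time is the meeting", "who wrote this section",
    "i agree with that", "yes that sounds right",
    "the budget is ten thousand", "we met last tuesday",
]
acts = ["request", "request", "question", "question",
        "agreement", "agreement", "statement", "statement"]

# MaxEnt classifier = multinomial logistic regression over word-count features.
clf = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
clf.fit(utterances, acts)

print(clf.predict(["please open the meeting report"]))
```

In a real system the feature set would include lexical cues, prosody, and context windows rather than raw word counts.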
Abstract:
Dental caries, also known as tooth decay, is a disease of the oral cavity that affects the tooth structure and leads to the occurrence of cavities in teeth. Dental caries is one of the leading chronic diseases in the population and is very common in childhood. If not treated appropriately, it has a debilitating effect on the oral and general health of individuals.
Objectives. The aims of this review are to (1) analyze and elucidate the relationship between social and economic determinants of health, such as income, education, and race/ethnicity, and the prevalence of dental caries, and (2) identify and understand the pathways and underlying causes through which these factors affect the occurrence of dental caries. By identifying the key socioeconomic factors and pathways affecting the prevalence of dental caries, this review will provide a foundation for the formulation of better oral health policies; knowledge of these factors could be incorporated in the design of future policies and interventions to achieve greater benefits.
Methods. This review includes information from all pertinent articles, reviews, surveys, reports, peer-reviewed literature, and web sources published after 2000. The selection criterion includes literature focusing on individuals between the ages of 1 and 65 years and on individuals from different subgroups of the community based on income, education, and race/ethnicity. The analysis identifies whether a relationship between income, education, or race and the prevalence of dental caries exists by comparing caries prevalence across socioeconomic groups. Also included are articles relevant to the mechanisms and pathways through which these factors affect the prevalence of dental caries.
Results. Analyses of the available literature suggest that disparities in the prevalence of dental caries may be attributed to differences in income, education, and race/ethnicity. Higher prevalence of dental caries was observed in African-American and Mexican-American individuals and in people with low income and low education. The leading pathways through which socioeconomic factors affect the prevalence of dental caries are lack of access to dental care, lack of awareness about good oral hygiene beliefs and habits, inability to afford dental care, lack of social support to maintain oral health, and lack of dental insurance.
Conclusion. Disparities in the prevalence of dental caries exist across socioeconomic groups. The relationship between socioeconomic factors and caries prevalence should be considered in the development of future policies and interventions aimed at reducing the prevalence of dental caries and enhancing oral health status.
Abstract:
The objective of this work was to evaluate the dominant mean height (AMD) and its stability in 15 five-year-old poplar clones planted in three different environments of the rolling pampa, Argentina. The sites were Teodelina (Sites 1 and 2), Santa Fe (34° 12' S; 61° 43' W; 90 m a.s.l.) and Alberti (Site 3), Buenos Aires (34° 50' S; 60° 30' W; 55 m a.s.l.), and they were characterized on the basis of climate and soil variables. Analyses of variance were performed for the set of clones across sites and for the clones within each site. AMD comparisons were made with Tukey's test, and the clone-site interaction was analyzed. A selection criterion was constructed from genetic stability parameters for the trait (AMD), estimating the genetic gain from selecting the best clones, and broad-sense heritability was calculated. The AMD rankings differed significantly among clones within sites and among sites, and the clone-site interaction was significant. The AMD and ecovalence values made it possible to select genotypes with a broader range of adaptation and, consequently, to reconcile production with plasticity across the dissimilar evaluated sites.
Abstract:
Fragilariopsis kerguelensis, a dominant diatom species throughout the Antarctic Circumpolar Current, is considered one of the main drivers of the biological silicate pump. Here, we study the distribution of this important species and the expected consequences of climate change upon it, using correlative species distribution modeling (SDM) and publicly available presence-only data. As experience with SDM is scarce for marine phytoplankton, this also serves as a pilot study for this organism group. We used the maximum entropy method to calculate distribution models for the diatom F. kerguelensis based on yearly and monthly environmental data (sea surface temperature, salinity, nitrate and silicate concentrations). Observation data were harvested from GBIF and the Global Diatom Database, and for further analyses also from the Hustedt Diatom Collection (BRM). The models were projected on current yearly and seasonal environmental data to study the current distribution and its seasonality. Furthermore, we projected the seasonal model on future environmental data obtained from climate models for the year 2100. Projected on current yearly averaged environmental data, all models showed similar distribution patterns for F. kerguelensis. The monthly model showed seasonality, for example a shift of the southern distribution boundary toward the north in winter. Projections on future scenarios resulted in a moderately to negligibly shrinking distribution area and a change in seasonality. We found a substantial bias in the publicly available observation datasets, which could be reduced by additional observation records obtained from the Hustedt Diatom Collection. Present-day distribution patterns inferred from the models coincided well with background knowledge and previous reports about the distribution of F. kerguelensis, showing that maximum entropy-based distribution models are suitable for mapping the distribution patterns of oceanic planktonic organisms. Our scenario projections indicate moderate effects of climate change upon the biogeography of F. kerguelensis.
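The maximum entropy method used for presence-only SDM fits a Gibbs distribution over background cells whose feature expectations match those observed at presence sites. A minimal sketch with synthetic environmental layers (the two-feature setup, "true" preference, and all data are invented for illustration; real Maxent software adds regularization and richer feature classes):

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic environmental features on background grid cells (stand-ins for
# standardized sea surface temperature, salinity, etc.).
n_cells = 500
X = rng.normal(size=(n_cells, 2))

# Presence-only records drawn from an assumed "true" habitat preference.
true_w = np.array([-2.0, -1.0])
p_true = np.exp(X @ true_w)
p_true /= p_true.sum()
presence = rng.choice(n_cells, size=100, p=p_true)

# MaxEnt fit: p_i ∝ exp(w·x_i), with w chosen so model feature expectations
# match the empirical means over presence sites (the moment constraints).
target = X[presence].mean(axis=0)
w = np.zeros(2)
for _ in range(5000):
    p = np.exp(X @ w)
    p /= p.sum()
    w += 0.05 * (target - p @ X)   # gradient ascent on the MaxEnt likelihood

p = np.exp(X @ w)
p /= p.sum()
suitability = np.exp(X @ w)        # relative habitat suitability per cell
print(w, np.abs(p @ X - target).max())
```

Among all distributions satisfying the moment constraints, this exponential-family form is the one with maximum entropy, which is why the fitted `suitability` surface can be projected onto new (e.g. future-climate) environmental layers.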