869 results for Large-Scale Coherent Structure


Relevance:

100.00%

Publisher:

Abstract:

The function of a protein is generally determined by its three-dimensional (3D) structure. Thus, it would be useful to know the 3D structure of the thousands of protein sequences that are emerging from the many genome projects. To this end, fold assignment, comparative protein structure modeling, and model evaluation were completely automated. As an illustration, the method was applied to the proteins in the Saccharomyces cerevisiae (baker’s yeast) genome. It resulted in all-atom 3D models for substantial segments of 1,071 (17%) of the yeast proteins, only 40 of which have had their 3D structure determined experimentally. Of the 1,071 modeled yeast proteins, 236 were clearly related to a protein of known structure for the first time; 41 of these had not previously been characterized at all.

Relevance:

100.00%

Publisher:

Abstract:

Vela X–1 is the prototype of the class of wind-fed accreting pulsars in high-mass X-ray binaries hosting a supergiant donor. We have analysed in a systematic way 10 years of INTEGRAL data of Vela X–1 (22–50 keV) and found that, outside the X-ray eclipse, the source undergoes several luminosity drops in which the hard X-ray luminosity falls below ∼3 × 10³⁵ erg s⁻¹, becoming undetectable by INTEGRAL. These drops in the X-ray flux are usually referred to as ‘off-states’ in the literature. We have investigated the distribution of these off-states along the ∼8.9 d orbit of Vela X–1, finding that their orbital occurrence displays an asymmetric distribution, with a higher probability of observing an off-state near the pre-eclipse phase than during the post-eclipse phase. This asymmetry can be explained by scattering of hard X-rays in a region of ionized wind, able to reduce the hard X-ray brightness of the source preferentially near eclipse ingress. We associate this ionized large-scale wind structure with the photoionization wake produced by the interaction of the supergiant wind with the X-ray emission from the neutron star. We emphasize that this observational result could be obtained only thanks to the accumulation of a decade of INTEGRAL data, with observations covering the whole orbit several times, allowing us to detect the asymmetric pattern in the orbital distribution of off-states in Vela X–1.
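
As orientation only, a minimal sketch (with placeholder event epochs and an assumed ephemeris, not the authors' INTEGRAL pipeline) of how off-state times can be folded on the ∼8.9 d orbit to examine their orbital-phase distribution:

```python
import numpy as np

# Hypothetical inputs: mid-times of detected off-states (MJD) and an assumed ephemeris.
off_state_times = np.array([55012.3, 55021.1, 55047.6, 55056.4])  # placeholder values
t_ref = 55000.0   # reference epoch (MJD), assumed
p_orb = 8.964     # orbital period in days (approximate literature value)

# Fold the event times on the orbital period to obtain phases in [0, 1).
phases = ((off_state_times - t_ref) / p_orb) % 1.0

# Histogram of off-state occurrence versus orbital phase; an excess of counts
# near eclipse ingress would appear as an asymmetric distribution.
counts, edges = np.histogram(phases, bins=10, range=(0.0, 1.0))
for lo, hi, c in zip(edges[:-1], edges[1:], counts):
    print(f"phase {lo:.1f}-{hi:.1f}: {c} off-states")
```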

Relevance:

100.00%

Publisher:

Abstract:

Functionally relevant large-scale brain dynamics operates within the framework imposed by anatomical connectivity and by time delays due to finite transmission speeds. To gain insight into the reliability and comparability of large-scale brain network simulations, we investigate the effects of variations in the anatomical connectivity. Two different sets of detailed global connectivity structures are explored: the first extracted from the CoCoMac database and rescaled to the spatial extent of the human brain, the second derived from white-matter tractography applied to diffusion spectrum imaging (DSI) of a human subject. We use the combination of graph-theoretical measures of the connection matrices and numerical simulations to explicate the importance of both connectivity strength and delays in shaping dynamic behaviour. Our results demonstrate that the brain dynamics derived from the CoCoMac database are more complex and biologically more realistic than those based on the DSI database. We propose that the reason for this difference is the absence of directed weights in the DSI connectivity matrix.
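
As an illustrative sketch only (not the authors' simulation code), graph-theoretical summaries and conduction delays can be derived from a weighted connectivity matrix and a matrix of fibre-tract lengths, here with randomly generated placeholder data and an assumed uniform transmission speed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64                                                      # number of regions (placeholder)
weights = rng.random((n, n)) * (rng.random((n, n)) < 0.2)   # sparse directed weights
np.fill_diagonal(weights, 0.0)
lengths = rng.uniform(10.0, 150.0, (n, n))                  # tract lengths in mm (placeholder)

speed = 5.0                               # assumed transmission speed in m/s
delays = (lengths / 1000.0) / speed       # conduction delays in seconds

in_degree = (weights > 0).sum(axis=0)     # simple graph measures of the connection matrix
out_degree = (weights > 0).sum(axis=1)
node_strength = weights.sum(axis=0) + weights.sum(axis=1)

print("mean delay (ms):", 1000 * delays[weights > 0].mean())
print("mean in/out degree:", in_degree.mean(), out_degree.mean())
```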

Relevance:

100.00%

Publisher:

Abstract:

Various methods have been used to study Wolf-Rayet (WR) stars with the aim of understanding the varied physical phenomena that take place in their dense winds. To study variability that is not strictly periodic and whose characteristics differ from one epoch to another, one must observe over sufficiently long periods of time while adopting a high temporal sampling rate, in order to identify the underlying physical phenomena. In the summer of 2013, professional and amateur astronomers from around the world contributed to a 4-month observing campaign, mainly in spectroscopy but also in photometry, polarimetry and interferometry, to observe the first three Wolf-Rayet stars ever discovered: WR 134 (WN6b), WR 135 (WC8) and WR 137 (WC7pd + O9). Each of these stars is interesting in its own way, each presenting a different variety of structures in its wind. The spectroscopic data from this campaign were reduced and analysed for the presumably single star WR 134, in order to better understand the behaviour of its long-term periodic variability in the context of a study of the corotating interaction regions (CIRs) found in its wind. The results of this study are presented in this thesis.

Relevance:

100.00%

Publisher:

Abstract:

How the mathematical concept of coarse geometry is useful for analysing the Web.

Relevance:

100.00%

Publisher:

Abstract:

Evidence is presented of widespread changes in structure and species composition between the 1980s and 2003–2004 from surveys of 249 British broadleaved woodlands. Structural components examined include canopy cover, vertical vegetation profiles, field-layer cover and deadwood abundance. Woods were located in 13 geographical localities, and the patterns of change were examined for each locality as well as across all woods. Changes were not uniform across the localities; overall, there were significant decreases in canopy cover and increases in sub-canopy (2–10 m) cover. Changes in 0.5–2 m vegetation cover showed strong geographic patterns, increasing in western localities but declining or showing no change in eastern localities. There were significant increases in canopy ash Fraxinus excelsior and decreases in oak Quercus robur/petraea. Shrub-layer ash and honeysuckle Lonicera periclymenum increased, while birch Betula spp., hawthorn Crataegus monogyna and hazel Corylus avellana declined. Within the field layer, both bracken Pteridium aquilinum and herbs increased. Deadwood generally increased. The changes were consistent with reductions in active woodland management and with changes in grazing and browsing pressure. These findings have important implications for sustainable active management of British broadleaved woodlands to meet silvicultural and biodiversity objectives.

Relevance:

100.00%

Publisher:

Abstract:

We compare the characteristics of synthetic European droughts generated by the HiGEM [1] coupled climate model run with present-day atmospheric composition with observed drought events extracted from the CRU TS3 data set. The results demonstrate consistency in both the rate of drought occurrence and the spatiotemporal structure of the events. Estimates of the probability density functions for event area, duration and severity are shown to be similar with confidence > 90%. Encouragingly, HiGEM is shown to replicate the extreme tails of the observed distributions and thus the most damaging European drought events. The soil moisture state is shown to play an important role in drought development. Once a large-scale drought has been initiated, it is found to be 50% more likely to continue if the local soil moisture is below the 40th percentile. In response to increased concentrations of atmospheric CO2, the modelled droughts are found to increase in duration, area and severity. The drought response can be largely attributed to temperature-driven changes in relative humidity.

[1] HiGEM is based on the latest climate configuration of the Met Office Hadley Centre Unified Model (HadGEM1), with the horizontal resolution increased to 1.25 × 0.83 degrees in longitude and latitude in the atmosphere and 1/3 × 1/3 degrees in the ocean.
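
As a rough sketch of the kind of conditional persistence statistic quoted above (using entirely synthetic placeholder series, not CRU TS3 or HiGEM output), one can estimate the probability that a drought month is followed by another drought month, split by whether soil moisture sits below the 40th percentile:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical monthly series: drought indicator (True = drought month) and soil moisture.
drought = rng.random(600) < 0.2
soil_moisture = rng.random(600)

# Soil-moisture percentile rank of each month within the series.
percentile = soil_moisture.argsort().argsort() / (len(soil_moisture) - 1)

# P(drought continues next month | drought this month), split by soil-moisture state.
this_month = drought[:-1]
next_month = drought[1:]
dry_soil = percentile[:-1] < 0.40

p_dry = next_month[this_month & dry_soil].mean()
p_wet = next_month[this_month & ~dry_soil].mean()
print(f"continuation probability, soil < 40th pct:  {p_dry:.2f}")
print(f"continuation probability, soil >= 40th pct: {p_wet:.2f}")
print(f"relative increase: {100 * (p_dry / p_wet - 1):.0f}%")
```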

Relevance:

100.00%

Publisher:

Abstract:

We derive constraints on a simple quintessential inflation model, based on a spontaneously broken Φ⁴ theory, imposed by the Wilkinson Microwave Anisotropy Probe three-year data (WMAP3) and by galaxy clustering results from the Sloan Digital Sky Survey (SDSS). We find that the scale of symmetry breaking must be larger than about 3 Planck masses in order for inflation to generate acceptable values of the scalar spectral index and of the tensor-to-scalar ratio. We also show that the resulting quintessence equation of state can evolve rapidly at recent times and hence can potentially be distinguished from a simple cosmological constant in this parameter regime.
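
For orientation, a sketch of the generic spontaneously broken Φ⁴ potential and the standard slow-roll expressions used to confront such a model with data (the paper's full quintessential-inflation potential may differ in its quintessential tail):

V(\phi) = \frac{\lambda}{4}\left(\phi^{2} - v^{2}\right)^{2}, \qquad
\epsilon = \frac{M_{\rm Pl}^{2}}{2}\left(\frac{V'}{V}\right)^{2} = \frac{8 M_{\rm Pl}^{2}\,\phi^{2}}{\left(\phi^{2} - v^{2}\right)^{2}}, \qquad
\eta = M_{\rm Pl}^{2}\,\frac{V''}{V} = \frac{4 M_{\rm Pl}^{2}\left(3\phi^{2} - v^{2}\right)}{\left(\phi^{2} - v^{2}\right)^{2}},

with n_s ≃ 1 − 6ε + 2η and r = 16ε evaluated at horizon crossing; requiring acceptable values of n_s and r then translates into a lower bound on the symmetry-breaking scale v of a few Planck masses, as found in the abstract above.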

Relevance:

100.00%

Publisher:

Abstract:

We study soft limits of correlation functions for the density and velocity fields in the theory of structure formation. First, we re-derive the (resummed) consistency conditions at unequal times using the eikonal approximation. These are based solely on symmetry arguments and are therefore universal. Then, we explore the existence of equal-time relations in the soft limit which, on the other hand, depend on the interplay between soft and hard modes. We scrutinize two approaches in the literature: the time-flow formalism, and a background method where the soft mode is absorbed into a locally curved cosmology. The latter has recently been used to set up (angular-averaged) 'equal-time consistency relations'. We explicitly demonstrate that the time-flow relations and 'equal-time consistency conditions' are only fulfilled at the linear level, and fail at next-to-leading order for an Einstein-de Sitter universe. While both proposals break down beyond leading order when applied to the velocities, we find that the 'equal-time consistency conditions' quantitatively approximate the perturbative results for the density contrast. Thus, we generalize the background method to properly incorporate the effect of curvature in the density and velocity fluctuations on short scales, and discuss the reasons behind this discrepancy. We conclude with a few comments on practical implementations and future directions.
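
For orientation, the unequal-time consistency relation referred to here takes, in the notation commonly used in the literature, the schematic form

\lim_{q \to 0} \langle \delta_{\mathbf q}(\tau)\,\delta_{\mathbf k_1}(\tau_1)\cdots\delta_{\mathbf k_n}(\tau_n)\rangle'
= -P_\delta(q,\tau)\sum_{a=1}^{n} \frac{D(\tau_a)}{D(\tau)}\,\frac{\mathbf k_a\cdot\mathbf q}{q^{2}}\,
\langle \delta_{\mathbf k_1}(\tau_1)\cdots\delta_{\mathbf k_n}(\tau_n)\rangle',

where D is the linear growth factor, P_δ the soft-mode power spectrum, and primes denote correlators stripped of the momentum-conserving delta function. At equal times the growth factors drop out and momentum conservation cancels the leading soft pole, which is why equal-time relations require additional dynamical input such as the approaches scrutinized above.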

Relevance:

100.00%

Publisher:

Abstract:

Modeling the development of structure in the universe on galactic and larger scales is the challenge that drives the field of computational cosmology. Here, photorealism is used as a simple, yet expert, means of assessing the degree to which virtual worlds succeed in replicating our own.

Relevance:

100.00%

Publisher:

Abstract:

It is now straightforward to assemble large samples of very high redshift (z ∼ 3) field galaxies selected by their pronounced spectral discontinuity at the rest frame Lyman limit of hydrogen (at 912 Å). This makes possible both statistical analyses of the properties of the galaxies and the first direct glimpse of the progression of the growth of their large-scale distribution at such an early epoch. Here I present a summary of the progress made in these areas to date and some preliminary results of and future plans for a targeted redshift survey at z = 2.7–3.4. Also discussed is how the same discovery method may be used to obtain a “census” of star formation in the high redshift Universe, and the current implications for the history of galaxy formation as a function of cosmic epoch.
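
For orientation, the selection works because the rest-frame Lyman limit is redshifted into the near-ultraviolet: at z ≈ 3 the break falls at λ_obs = (1 + z) × 912 Å ≈ 4 × 912 Å ≈ 3650 Å, so such galaxies largely drop out of the U band while remaining detectable in redder bands, which is the basis of the colour-selection technique.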

Relevance:

100.00%

Publisher:

Abstract:

The hypothesis of relativistic flow on parsec scales, coupled with the symmetrical (and therefore subrelativistic) outer structure of extended radio sources, requires that jets decelerate on scales observable with the Very Large Array. The consequences of this idea for the appearances of FRI and FRII radio sources are explored.

Relevance:

100.00%

Publisher:

Abstract:

Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even with the huge increases in the value of n typically seen in many fields. The tendency in some areas of industry to dispense with traditional statistical analysis on the basis that "n = all" is therefore of little relevance outside of certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.

Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside of the standard problem of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.

One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
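
For context, the latent structure (PARAFAC-type) factorization referred to here can be written, in generic notation, as

\Pr(y_1 = c_1, \ldots, y_p = c_p) \;=\; \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h c_j},

where ν contains the latent class weights and λ^{(j)}_h is the probability vector of variable j within class h; this expresses the probability tensor as a nonnegative rank-k decomposition, whereas a log-linear model instead achieves parsimony through sparsity in its interaction terms.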

Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population structure inference problems is assessed in simulations and a real data application.

In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis–Ylvisaker priors for the parameters of log-linear models do not give rise to closed-form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis–Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
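
As a generic point of reference (the abstract does not spell out the optimality criterion used in Chapter 4), the Gaussian minimizing the forward Kullback-Leibler divergence from a posterior p is the moment-matching one,

\operatorname*{arg\,min}_{q = \mathcal N(m, \Sigma)} \mathrm{KL}\left(p \,\|\, q\right) \;=\; \mathcal N\!\left(\mathbb E_p[\theta],\ \mathrm{Cov}_p(\theta)\right),

so the divergence from the exact posterior to its best Gaussian approximation quantifies the posterior's non-Gaussianity.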

Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
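
A minimal sketch (on a synthetic placeholder series, not the climatological, financial or electrophysiology data used in the chapter) of the basic object in this framework, the waiting times between exceedances of a high threshold:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical time-indexed series (e.g. daily observations at one location).
x = rng.standard_t(df=3, size=5000)

# High threshold, here the empirical 99th percentile.
u = np.quantile(x, 0.99)

# Indices of threshold exceedances and the waiting times between them.
exceed_idx = np.flatnonzero(x > u)
waiting_times = np.diff(exceed_idx)

# Summaries of the waiting-time distribution, which in the proposed framework
# encodes both the strength and the temporal structure of tail dependence.
print("number of exceedances:", exceed_idx.size)
print("mean / median waiting time:", waiting_times.mean(), np.median(waiting_times))
```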

The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo (MCMC) method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
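
A deliberately simple sketch of one such approximating kernel (a random-walk Metropolis step whose log-likelihood is estimated from a random data subsample and rescaled to the full sample size, on a toy Gaussian-mean model; the chapter's framework concerns quantifying the error such approximations introduce, not this particular scheme):

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data: y_i ~ Normal(theta, 1); the log-likelihood is approximated
# on a random subsample of size m, rescaled by n / m.
y = rng.normal(1.0, 1.0, size=100_000)
n, m = y.size, 1_000

def approx_loglik(theta):
    sub = y[rng.integers(0, n, size=m)]
    return (n / m) * np.sum(-0.5 * (sub - theta) ** 2)

def log_prior(theta):
    return -0.5 * theta ** 2  # standard normal prior

theta, samples = 0.0, []
curr = approx_loglik(theta) + log_prior(theta)
for _ in range(2000):
    prop = theta + 0.05 * rng.standard_normal()
    cand = approx_loglik(prop) + log_prior(prop)
    if np.log(rng.random()) < cand - curr:   # Metropolis accept/reject driven by the
        theta, curr = prop, cand             # approximate (noisy) transition kernel
    samples.append(theta)

print("approximate posterior mean:", np.mean(samples[500:]))
```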

Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
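
A minimal sketch of the truncated-Normal (Albert-Chib type) data augmentation Gibbs sampler for probit regression, with simulated data and a flat prior on the coefficients (an illustration of the algorithm class discussed, not the chapter's experimental setup):

```python
import numpy as np
from scipy.stats import truncnorm

rng = np.random.default_rng(4)

# Hypothetical rare-event-style data: large n, few successes (negative intercept).
n, p = 5000, 3
X = np.column_stack([np.ones(n), rng.normal(size=(n, p - 1))])
beta_true = np.array([-2.5, 0.5, -0.3])
y = (X @ beta_true + rng.normal(size=n) > 0).astype(int)

XtX_inv = np.linalg.inv(X.T @ X)   # flat prior on beta: conditional covariance is (X'X)^-1
beta = np.zeros(p)
draws = []
for _ in range(1000):
    # 1. Draw latent z_i ~ N(x_i'beta, 1), truncated to (0, inf) if y_i = 1, (-inf, 0) if y_i = 0.
    mu = X @ beta
    lo = np.where(y == 1, -mu, -np.inf)
    hi = np.where(y == 1, np.inf, -mu)
    z = mu + truncnorm.rvs(lo, hi, size=n, random_state=rng)
    # 2. Draw beta from its Gaussian full conditional given z.
    beta = rng.multivariate_normal(XtX_inv @ (X.T @ z), XtX_inv)
    draws.append(beta)

print("posterior mean of beta:", np.mean(draws[200:], axis=0))
```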

Relevance:

100.00%

Publisher:

Abstract:

The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. Both DS problems are computationally complex: for large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. The combination of the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems for large-scale networks. Simulation results have shown that MEAN is able to find adequate restoration plans for a real DS with 3,860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN has shown a sublinear running time as a function of the system size. Tests with networks ranging from 632 to 5,166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
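
As a rough illustration of the node-depth encoding idea (a generic sketch of the data structure, not the paper's implementation): each tree of the radial forest is stored as a depth-first-ordered list of (node, depth) pairs, so a subtree is a contiguous block that can be cut from one feeder and grafted onto another while the representation remains a valid forest, avoiding explicit radiality constraint checks.

```python
# Node-depth encoding sketch: each feeder (tree) is a list of (node, depth) pairs in
# depth-first order; the subtree rooted at position i is the contiguous block of
# entries that follow it while their depth stays greater than depth[i].

tree_a = [(1, 0), (2, 1), (3, 2), (4, 1)]   # feeder rooted at node 1 (hypothetical)
tree_b = [(5, 0), (6, 1)]                   # feeder rooted at node 5 (hypothetical)

def subtree_block(tree, i):
    """Return the slice bounds [i, j) of the subtree rooted at position i."""
    root_depth = tree[i][1]
    j = i + 1
    while j < len(tree) and tree[j][1] > root_depth:
        j += 1
    return i, j

def transfer(src, dst, i, attach_pos):
    """Move the subtree at position i of src under the node at attach_pos of dst."""
    a, b = subtree_block(src, i)
    block = src[a:b]
    base = block[0][1]
    new_depth = dst[attach_pos][1] + 1
    moved = [(node, depth - base + new_depth) for node, depth in block]
    return src[:a] + src[b:], dst[:attach_pos + 1] + moved + dst[attach_pos + 1:]

tree_a, tree_b = transfer(tree_a, tree_b, 1, 1)   # move the subtree of node 2 under node 6
print(tree_a)   # [(1, 0), (4, 1)]
print(tree_b)   # [(5, 0), (6, 1), (2, 2), (3, 3)]
```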