882 results for formation of large scale structure
Abstract:
The Kolmogorov approach to turbulence is applied to Burgers turbulence in the stochastic adhesion model of large-scale structure formation. Because the perturbative approach to this model is unreliable, a new, non-perturbative approach based on a suitable formulation of Kolmogorov's scaling laws is proposed here. This approach suggests that the power-law exponent of the matter density two-point correlation function lies in the range 1–1.33, but it also suggests that the adhesion model neglects important aspects of the gravitational dynamics.
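Read against the standard definition of the matter two-point correlation function (the symbols below are supplied here only to make the quoted range concrete; they do not appear in the abstract), the claim is that

\[
\xi(r) \equiv \langle \delta(\mathbf{x})\,\delta(\mathbf{x}+\mathbf{r}) \rangle \propto r^{-\gamma}, \qquad 1 \lesssim \gamma \lesssim 1.33,
\]

where \( \delta = (\rho - \bar{\rho})/\bar{\rho} \) is the matter density contrast.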
Abstract:
A new proposal for the study of large-scale neural networks is reported. It is based on the use of graphs similar to Feynman diagrams. A first general theory is presented and some interpretations are given. A propagator, based on the Green's function of the neuron, is the basis of the method. An application to a simple case is reported.
Abstract:
Smart Grids are advanced power networks that introduce intelligent management, control, and operation systems to address the new challenges generated by growing energy demand and the emergence of renewable energies. In the literature, Smart Grids are presented as an exemplary System of Systems (SoS): a system composed of large, heterogeneous, and independent systems whose interaction gives rise to emergent behavior. Smart Grids are currently scaling up the electricity service to millions of customers; these are known as Large-Scale Smart Grids. Drawing on experience from several projects on Large-Scale Smart Grids, this paper defines a Large-Scale Smart Grid as an SoS that integrates a set of SoS and conceptualizes the properties of this SoS. In addition, the paper defines an architectural framework for deploying the software architectures of Large-Scale Smart Grid SoS.
Abstract:
With the growing body of research on traumatic brain injury and spinal cord injury, computational neuroscience has recently focused its modeling efforts on neuronal functional deficits following mechanical loading. However, in most of these efforts, cell damage is generally characterized only by purely mechanistic criteria, functions of quantities such as stress, strain, or their corresponding rates. The modeling of functional deficits in neurites as a consequence of macroscopic mechanical insults has rarely been explored. In particular, a quantitative, mechanically based model of electrophysiological impairment in neuronal cells has only very recently been proposed (Jerusalem et al., 2013). In this paper, we present the implementation details of Neurite: the finite difference parallel program used in this reference. Following the application of a macroscopic strain at a given strain rate produced by a mechanical insult, Neurite is able to simulate the resulting neuronal electrical signal propagation, and thus the corresponding functional deficits. The simulation of the coupled mechanical and electrophysiological behaviors requires computationally expensive calculations that increase in complexity as the network of simulated cells grows. The explicit and implicit solvers implemented in Neurite were therefore parallelized using graphics processing units in order to reduce the simulation costs of large-scale scenarios. Cable theory and Hodgkin-Huxley models were implemented to account for the electrophysiological passive and active regions of a neurite, respectively, whereas a coupled mechanical model accounting for the neurite's mechanical behavior within its surrounding medium was adopted as the link between electrophysiology and mechanics (Jerusalem et al., 2013). This paper provides the details of the parallel implementation of Neurite, along with three different application examples: a long myelinated axon, a segmented dendritic tree, and a damaged axon. The capabilities of the program to deal with large-scale scenarios, segmented neuronal structures, and functional deficits under mechanical loading are specifically highlighted.
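The abstract refers to cable theory solved with explicit and implicit finite-difference schemes. The following is a minimal, illustrative sketch of an explicit (forward-Euler) update of the passive cable equation in Python; it is not the Neurite code, and all parameter names and values are placeholder choices of typical magnitude.

import numpy as np

# Passive cable equation: c_m * dV/dt = a/(2*R_i) * d2V/dx2 - g_L*(V - E_L)
# Forward-Euler (explicit) finite-difference update; illustrative values only.
n, dx, dt = 200, 1e-3, 5e-7            # compartments, spatial step [cm], time step [s]
a, R_i = 1e-4, 100.0                   # fiber radius [cm], axial resistivity [ohm*cm]
c_m, g_L, E_L = 1e-6, 3e-4, -65e-3     # capacitance [F/cm^2], leak [S/cm^2], rest [V]
# Explicit stability needs dt <= dx^2/(2*D) with D = a/(2*R_i*c_m); here D = 0.5 cm^2/s.

V = np.full(n, E_L)                    # membrane potential along the neurite [V]

def step(V):
    lap = np.zeros_like(V)
    lap[1:-1] = (V[2:] - 2.0 * V[1:-1] + V[:-2]) / dx**2   # discrete d2V/dx2
    dVdt = (a / (2.0 * R_i) * lap - g_L * (V - E_L)) / c_m
    return V + dt * dVdt

for _ in range(2000):
    V[0] = -40e-3                      # hold a crude depolarizing stimulus at one end
    V = step(V)

An implicit (backward-Euler or Crank-Nicolson) scheme would instead solve a tridiagonal system at each step, which is the kind of solver the abstract says was also parallelized on GPUs; Hodgkin-Huxley terms would add voltage-dependent conductances alongside the leak term.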
Abstract:
In many biological membranes, the major lipids are “non-bilayer lipids,” which in purified form cannot be arranged in a lamellar structure. The structural and functional roles of these lipids are poorly understood. This work demonstrates that the in vitro association of the two main components of a membrane, the non-bilayer lipid monogalactosyldiacylglycerol (MGDG) and the chlorophyll-a/b light-harvesting antenna protein of photosystem II (LHCII) of pea thylakoids, leads to the formation of large, ordered lamellar structures: (i) thin-section electron microscopy and circular dichroism spectroscopy reveal that the addition of MGDG induces the transformation of isolated, disordered macroaggregates of LHCII into stacked lamellar aggregates with a long-range chiral order of the complexes; (ii) small-angle x-ray scattering discloses that LHCII perturbs the structure of the pure lipid and destroys the inverted hexagonal phase; and (iii) an analysis of electron micrographs of negatively stained 2D crystals indicates that in MGDG-LHCII the complexes are found in an ordered macroarray. It is proposed that, by limiting the space available for MGDG in the macroaggregate, LHCII inhibits formation of the inverted hexagonal phase of lipids; in thylakoids, a spatial limitation is likely to be imposed by the high concentration of membrane-associated proteins.
Abstract:
The recent massive growth of online media and the increase in user-generated content (for example, weblogs, Twitter, Facebook) pose challenges for accessing and interpreting multilingual data in an efficient, timely, and affordable way. The goal of the TrendMiner project is to develop innovative, portable, open-source methods that work in real time for summarization and cross-lingual mining of large-scale social media. The results are being validated in three use cases: decision support in the financial domain (with analysts, entrepreneurs, regulators, and economists), political monitoring and analysis (with journalists, economists, and politicians), and monitoring of social media about health in order to detect information on adverse drug reactions.
Abstract:
A parallel computing environment to support optimization of large-scale engineering systems is designed and implemented on Windows-based personal computer networks, using the master-worker model and the Parallel Virtual Machine (PVM). It involves decomposing a large engineering system into a number of smaller subsystems that are optimized in parallel on worker nodes, and coordinating the subsystem optimization results on the master node. The environment consists of six functional modules: the master control, the optimization model generator, the optimizer, the data manager, the monitor, and the post processor. Object-oriented design of these modules is presented. The environment supports all steps from the generation of optimization models to their solution and visualization on networks of computers. User-friendly graphical interfaces make it easy to define the problem and to monitor and steer the optimization process. The environment has been verified on a large space-truss optimization example. (C) 2004 Elsevier Ltd. All rights reserved.
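The decomposition-and-coordination pattern described here (subsystems optimized in parallel on worker nodes, results coordinated on the master) can be sketched as follows. The original environment uses PVM on Windows PCs; this sketch substitutes Python's multiprocessing and a made-up one-variable subproblem, so every name in it is hypothetical.

from multiprocessing import Pool

def optimize_subsystem(task):
    """Stand-in for one subsystem optimization run on a worker node."""
    sub_id, target = task
    x_opt = target                      # toy closed form: minimize (x - target)^2
    objective = 0.0
    return sub_id, x_opt, objective

def master(subsystem_targets):
    """Master node: dispatch subsystem problems, then coordinate the results."""
    with Pool() as workers:
        results = workers.map(optimize_subsystem, list(enumerate(subsystem_targets)))
    # A real coordinator would update coupling variables and re-dispatch
    # until the system-level optimum converges; here we only collect.
    return {sub_id: x for sub_id, x, _ in results}

if __name__ == "__main__":
    print(master([1.0, 2.5, -0.7]))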
Abstract:
We present 547 optical redshifts obtained for galaxies in the region of the Horologium-Reticulum supercluster (HRS) using the Six-Degree Field (6dF) multifiber spectrograph on the UK Schmidt Telescope at the Anglo-Australian Observatory. The HRS covers an area of more than 12° × 12° on the sky, centered at approximately α = 03h19m, δ = −50°02'. Our 6dF observations concentrate on the intercluster regions of the HRS, from which we describe four primary results. First, the HRS spans at least the redshift range from 17,000 to 22,500 km s^-1. Second, the overdensity of galaxies in the intercluster regions of the HRS in this redshift range is estimated to be 2.4, or δρ/ρ̄ ≈ 1.4. Third, we find a systematic trend of increasing redshift along a southeast-northwest spatial axis in the HRS, in that the mean redshift of HRS members increases by more than 1500 km s^-1 from southeast to northwest over a 12° region. Fourth, the HRS is bimodal in redshift, with a separation of ~2500 km s^-1 (35 Mpc) between the higher and lower redshift peaks. This is particularly evident if the above spatial-redshift trend is fitted and removed. In short, the HRS appears to consist of two components in redshift space, each exhibiting a similar systematic spatial-redshift trend along a southeast-northwest axis. Lastly, we compare these results from the HRS with the Shapley supercluster and find similar properties and large-scale features.
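As a rough consistency check on the quoted separation, converting the redshift-space split to a comoving distance with an assumed Hubble constant of H_0 ≈ 70 km s^-1 Mpc^-1 (the abstract does not state the value adopted) gives

\[
D \approx \frac{\Delta v}{H_0} \approx \frac{2500\ \mathrm{km\,s^{-1}}}{70\ \mathrm{km\,s^{-1}\,Mpc^{-1}}} \approx 36\ \mathrm{Mpc},
\]

in line with the ~35 Mpc quoted.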
Abstract:
Experimental and theoretical studies have shown the importance of stochastic processes in genetic regulatory networks and cellular processes. Cellular networks and genetic circuits often involve small numbers of key proteins such as transcription factors and signaling proteins. In recent years stochastic models have been used successfully for studying noise in biological pathways, and stochastic modelling of biological systems has become a very important research field in computational biology. One of the challenging problems in this field is the reduction of the huge computing time required by stochastic simulations. Based on the mitogen-activated protein kinase cascade activated by epidermal growth factor, this work gives a parallel implementation using OpenMP and parallelism across the simulation. Special attention is paid to the independence of the random numbers generated in parallel, which is a key criterion for the success of stochastic simulations. Numerical results indicate that parallel computers can be used as an efficient tool for simulating the dynamics of large-scale genetic regulatory networks and cellular processes.
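The abstract highlights the independence of random number streams across parallel runs as a key criterion. The original implementation uses OpenMP; purely as an illustration of the idea, the sketch below uses NumPy's SeedSequence to spawn non-overlapping streams for each parallel replicate, with a trivial stand-in for the actual MAPK stochastic simulation.

import numpy as np
from multiprocessing import Pool

def run_replicate(seed_seq):
    # Each replicate gets its own statistically independent generator,
    # so parallel simulations never share or overlap random numbers.
    rng = np.random.default_rng(seed_seq)
    # trivial stand-in for one stochastic simulation run (e.g. a Gillespie trajectory)
    return rng.exponential(scale=1.0, size=1000).sum()

if __name__ == "__main__":
    root = np.random.SeedSequence(12345)
    streams = root.spawn(8)            # 8 independent child streams
    with Pool(8) as pool:
        results = pool.map(run_replicate, streams)
    print(np.mean(results))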
Abstract:
Many modern applications fall into the category of "large-scale" statistical problems, in which both the number of observations n and the number of features or parameters p may be large. Many existing methods focus on point estimation, despite the continued relevance of uncertainty quantification in the sciences, where the number of parameters to estimate often exceeds the sample size even given the huge increases in n typically seen in many fields. Thus, the tendency in some areas of industry to dispense with traditional statistical analysis on the grounds that "n = all" is of little relevance outside certain narrow applications. The main result of the Big Data revolution in most fields has instead been to make computation much harder without reducing the importance of uncertainty quantification. Bayesian methods excel at uncertainty quantification, but often scale poorly relative to alternatives. This conflict between the statistical advantages of Bayesian procedures and their substantial computational disadvantages is perhaps the greatest challenge facing modern Bayesian statistics, and is the primary motivation for the work presented here.
Two general strategies for scaling Bayesian inference are considered. The first is the development of methods that lend themselves to faster computation, and the second is the design and characterization of computational algorithms that scale better in n or p. In the first instance, the focus is on joint inference outside the standard setting of multivariate continuous data that has been a major focus of previous theoretical work in this area. In the second area, we pursue strategies for improving the speed of Markov chain Monte Carlo algorithms, and for characterizing their performance in large-scale settings. Throughout, the focus is on rigorous theoretical evaluation combined with empirical demonstrations of performance and concordance with the theory.
One topic we consider is modeling the joint distribution of multivariate categorical data, often summarized in a contingency table. Contingency table analysis routinely relies on log-linear models, with latent structure analysis providing a common alternative. Latent structure models lead to a reduced rank tensor factorization of the probability mass function for multivariate categorical data, while log-linear models achieve dimensionality reduction through sparsity. Little is known about the relationship between these notions of dimensionality reduction in the two paradigms. In Chapter 2, we derive several results relating the support of a log-linear model to nonnegative ranks of the associated probability tensor. Motivated by these findings, we propose a new collapsed Tucker class of tensor decompositions, which bridge existing PARAFAC and Tucker decompositions, providing a more flexible framework for parsimoniously characterizing multivariate categorical data. Taking a Bayesian approach to inference, we illustrate empirical advantages of the new decompositions.
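For orientation, the two tensor factorizations being bridged can be written as follows, in generic notation not taken from the thesis: the PARAFAC (latent class) form uses a single latent index, while the Tucker form uses one latent index per variable and a core tensor.

\[
\text{PARAFAC:}\quad P(y_1,\dots,y_p) = \sum_{h=1}^{k} \nu_h \prod_{j=1}^{p} \lambda^{(j)}_{h\,y_j},
\qquad
\text{Tucker:}\quad P(y_1,\dots,y_p) = \sum_{h_1=1}^{k_1}\cdots\sum_{h_p=1}^{k_p} g_{h_1\cdots h_p}\prod_{j=1}^{p}\lambda^{(j)}_{h_j\,y_j}.
\]

A collapsed Tucker decomposition, as described above, sits between these two extremes.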
Latent class models for the joint distribution of multivariate categorical data, such as the PARAFAC decomposition, play an important role in the analysis of population structure. In this context, the number of latent classes is interpreted as the number of genetically distinct subpopulations of an organism, an important factor in the analysis of evolutionary processes and conservation status. Existing methods focus on point estimates of the number of subpopulations, and lack robust uncertainty quantification. Moreover, whether the number of latent classes in these models is even an identified parameter is an open question. In Chapter 3, we show that when the model is properly specified, the correct number of subpopulations can be recovered almost surely. We then propose an alternative method for estimating the number of latent subpopulations that provides good quantification of uncertainty, and provide a simple procedure for verifying that the proposed method is consistent for the number of subpopulations. The performance of the model in estimating the number of subpopulations and other common population-structure inference problems is assessed in simulations and a real data application.
In contingency table analysis, sparse data is frequently encountered for even modest numbers of variables, resulting in non-existence of maximum likelihood estimates. A common solution is to obtain regularized estimates of the parameters of a log-linear model. Bayesian methods provide a coherent approach to regularization, but are often computationally intensive. Conjugate priors ease computational demands, but the conjugate Diaconis--Ylvisaker priors for the parameters of log-linear models do not give rise to closed form credible regions, complicating posterior inference. In Chapter 4 we derive the optimal Gaussian approximation to the posterior for log-linear models with Diaconis--Ylvisaker priors, and provide convergence rate and finite-sample bounds for the Kullback-Leibler divergence between the exact posterior and the optimal Gaussian approximation. We demonstrate empirically in simulations and a real data application that the approximation is highly accurate, even in relatively small samples. The proposed approximation provides a computationally scalable and principled approach to regularized estimation and approximate Bayesian inference for log-linear models.
Another challenging and somewhat non-standard joint modeling problem is inference on tail dependence in stochastic processes. In applications where extreme dependence is of interest, data are almost always time-indexed. Existing methods for inference and modeling in this setting often cluster extreme events or choose window sizes with the goal of preserving temporal information. In Chapter 5, we propose an alternative paradigm for inference on tail dependence in stochastic processes with arbitrary temporal dependence structure in the extremes, based on the idea that the information on strength of tail dependence and the temporal structure in this dependence are both encoded in waiting times between exceedances of high thresholds. We construct a class of time-indexed stochastic processes with tail dependence obtained by endowing the support points in de Haan's spectral representation of max-stable processes with velocities and lifetimes. We extend Smith's model to these max-stable velocity processes and obtain the distribution of waiting times between extreme events at multiple locations. Motivated by this result, a new definition of tail dependence is proposed that is a function of the distribution of waiting times between threshold exceedances, and an inferential framework is constructed for estimating the strength of extremal dependence and quantifying uncertainty in this paradigm. The method is applied to climatological, financial, and electrophysiology data.
The remainder of this thesis focuses on posterior computation by Markov chain Monte Carlo. The Markov chain Monte Carlo method is the dominant paradigm for posterior computation in Bayesian analysis. It has long been common to control computation time by making approximations to the Markov transition kernel. Comparatively little attention has been paid to convergence and estimation error in these approximating Markov chains. In Chapter 6, we propose a framework for assessing when to use approximations in MCMC algorithms, and how much error in the transition kernel should be tolerated to obtain optimal estimation performance with respect to a specified loss function and computational budget. The results require only ergodicity of the exact kernel and control of the kernel approximation accuracy. The theoretical framework is applied to approximations based on random subsets of data, low-rank approximations of Gaussian processes, and a novel approximating Markov chain for discrete mixture models.
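One common way to formalize "control of the kernel approximation accuracy" of the kind mentioned here, stated only as an illustration and not necessarily the exact condition used in Chapter 6, is a uniform total-variation bound between the exact and approximating transition kernels,

\[
\sup_{x} \bigl\| P(x,\cdot) - P_{\epsilon}(x,\cdot) \bigr\|_{\mathrm{TV}} \le \epsilon,
\]

with the estimation error of ergodic averages computed from the approximating chain then traded off against the per-step computational savings.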
Data augmentation Gibbs samplers are arguably the most popular class of algorithm for approximately sampling from the posterior distribution for the parameters of generalized linear models. The truncated Normal and Polya-Gamma data augmentation samplers are standard examples for probit and logit links, respectively. Motivated by an important problem in quantitative advertising, in Chapter 7 we consider the application of these algorithms to modeling rare events. We show that when the sample size is large but the observed number of successes is small, these data augmentation samplers mix very slowly, with a spectral gap that converges to zero at a rate at least proportional to the reciprocal of the square root of the sample size up to a log factor. In simulation studies, moderate sample sizes result in high autocorrelations and small effective sample sizes. Similar empirical results are observed for related data augmentation samplers for multinomial logit and probit models. When applied to a real quantitative advertising dataset, the data augmentation samplers mix very poorly. Conversely, Hamiltonian Monte Carlo and a type of independence chain Metropolis algorithm show good mixing on the same dataset.
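For reference, the truncated-normal data augmentation for the probit model mentioned here is the standard Albert-Chib construction; one Gibbs sweep alternates the following two conditional draws, where the Gaussian prior β ~ N(b_0, B_0) and the notation are generic rather than taken from the thesis:

\[
z_i \mid \beta, y_i \sim \mathcal{N}(x_i^{\top}\beta,\, 1)\ \text{truncated to}\ (0,\infty)\ \text{if } y_i = 1\ \text{and to}\ (-\infty,0]\ \text{if } y_i = 0,
\]
\[
\beta \mid z \sim \mathcal{N}\!\bigl( (B_0^{-1} + X^{\top}X)^{-1}(B_0^{-1} b_0 + X^{\top} z),\; (B_0^{-1} + X^{\top}X)^{-1} \bigr).
\]

Chapter 7 shows that in the rare-event regime these two updates become highly autocorrelated, which is the slow mixing described above.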
Abstract:
The Himalayan orogen is the result of the collision between the Indian and Asian continents that began 55-50 Ma ago, causing intracontinental thrusting and nappe formation. Detailed mapping as well as structural and microfabric analyses on a traverse from the Tethyan Himalaya southwestward through the High Himalayan Crystalline and the Main Central Thrust zone (MCT zone) to the Lesser Himalayan Sequence in the Spiti-eastern Lahul-Parvati valley area reveal eight main phases of deformation, a series of late-stage phases and five stages of metamorphic crystallization. This sequence of events is integrated into a reconstruction of the tectonometamorphic evolution of the Himalayan orogen in northern Himachal Pradesh. The oldest phase D-1 is preserved as relics in the High Himalayan Crystalline. Its deformational conditions are poorly known, but the metamorphic evolution is well documented by a prograde metamorphism reaching peak conditions within the upper amphibolite facies. This indicates that D-1 was an important tectonometamorphic event including considerable crustal thickening. The structural, metamorphic and sedimentary record suggests that D-1 most probably represents an early stage of continental collision. The first event clearly attributed to the collision between India and Asia is documented by two converging nappe systems, the NE-verging Shikar Beh Nappe and the SW-verging north Himalayan nappes. The D-2 Shikar Beh Nappe is characterized by isoclinal folding and top-to-the-NE shearing, representing the main deformation in the High Himalayan Crystalline. D-2 also caused the main metamorphism in the High Himalayan Crystalline, which was of Barrovian type, reaching upper amphibolite facies peak conditions. The Shikar Beh Nappe is interpreted to have formed within the Indian crust SW of the subduction zone. Simultaneously with NE-directed nappe formation, incipient subduction of India below Asia caused stacking of the SW-verging north Himalayan nappes, which were thrust from the northern edge of the subducted continent toward the front of the Shikar Beh Nappe. As a result, the SW-verging folds of the D-3 Main Fold Zone formed in the Tethyan Himalaya below the front of the north Himalayan nappes. D-3 represents the main deformation in the Tethyan Himalaya, associated with a greenschist facies metamorphism. Folding within the Main Fold Zone subsequently propagated toward the SW into the High Himalayan Crystalline, where it overprinted the preexisting D-2 structures. After subduction at the base of the north Himalayan nappes, the subduction zone stepped to the base of the High Himalayan Crystalline, where D-3 folds were crosscut by SW-directed D-4 thrusting. During D-4, the Crystalline Nappe, comprising the Main Fold Zone and relics of the Shikar Beh Nappe, was thrust toward the SW over the Lesser Himalayan Sequence along the 4 to 5 km thick Main Central Thrust zone. Thrusting was related to a retrograde greenschist facies overprint at the base of the Crystalline Nappe and to prograde greenschist facies conditions in the Lesser Himalayan Sequence. Simultaneously with thrusting at the base of the Crystalline Nappe, higher crustal levels were affected by NE-directed D-5 normal extensional shearing and by dextral strike-slip motion, indicating that the high-grade metamorphic Crystalline Nappe was extruded between the low-grade metamorphic Lesser Himalayan Sequence at the base and the north Himalayan nappes at the top.
The upper boundary of the Crystalline Nappe is not clearly delimited and passes gradually into the low-grade rocks at the front of the north Himalayan nappes. Extrusion of the Crystalline Nappe was followed by phase D-6, characterized by large-scale, upright to steeply inclined, NE-verging folds, and by another series of normal and extensional structures, D-7 and D-8, that may be related to ongoing extrusion of the Crystalline Nappe. The late-stage evolution is represented by the phases D-A and D-B, which indicate shortening parallel to the axis of the mountain chain, and by D-C, which is interpreted to account for the formation of large-scale domes with NNW-SSE-trending axes, an example of which is exposed in the Larji-Kullu-Rampur tectonic window.
Abstract:
What drove the transition from small-scale human societies centred on kinship and personal exchange to large-scale societies comprising cooperation and division of labour among untold numbers of unrelated individuals? We propose that the unique human capacity to negotiate institutional rules that coordinate social actions was a key driver of this transition. By creating institutions, humans have been able to move from the default 'Hobbesian' rules of the 'game of life', determined by physical and environmental constraints, into self-created rules of social organization where cooperation can be individually advantageous even in large groups of unrelated individuals. Examples include rules of food sharing among hunter-gatherers, rules for the use of irrigation systems among agriculturalists, property rights, and systems for sharing reputation between mediaeval traders. Successful institutions create rules of interaction that are self-enforcing, providing direct benefits both to individuals who follow them and to individuals who sanction rule breakers. Forming institutions requires shared intentionality, language and other cognitive abilities largely absent in other primates. We explain how cooperative breeding likely selected for these abilities early in the Homo lineage. This allowed anatomically modern humans to create institutions that transformed the self-reliance of our primate ancestors into the division of labour of large-scale human social organization.
Abstract:
This paper shows how the rainfall distribution over the UK, in the three major events on 13-15 June, 25 June and 20 July 2007, was related to troughs in the upper-level flow, and investigates the relationship of these features to a persistent large-scale flow pattern which extended around the northern hemisphere and its possible origins. Remote influences can be mediated by the propagation of large-scale atmospheric waves across the northern hemisphere and also by the origins of the air-masses that are wrapped into the developing weather systems delivering the rain to the UK. These dynamical influences are examined using analyses and forecasts produced by a range of atmospheric models.
Abstract:
A connection is shown to exist between the mesoscale eddy activity around Madagascar and the large-scale interannual variability in the Indian Ocean. We use the combined TOPEX/Poseidon-ERS sea surface height (SSH) data for the period 1993–2003. The SSH fields in the Mozambique Channel and east of Madagascar exhibit a significant interannual oscillation. This is related to the arrival of large-scale anomalies that propagate westward along 10°–15°S in response to Indian Ocean dipole (IOD) events. Positive (negative) SSH anomalies associated with a positive (negative) IOD phase induce a shift in the intensity and position of the tropical and subtropical gyres. This results in a weakening (strengthening) of the intensity of the South Equatorial Current and its branches along east Madagascar. In addition, the flow through the narrows of the Mozambique Channel around 17°S increases (decreases) during periods of a stronger and northward (southward) extension of the subtropical (tropical) gyre. Interaction between the currents in the narrows and southward-propagating eddies from the northern Channel leads to interannual variability in the eddy kinetic energy of the central Channel, in phase with that in the SSH field.
Abstract:
Terminally protected acyclic tripeptides containing tyrosine residues at both termini self-assemble into nanotubes in crystals through various non-covalent interactions, including intermolecular hydrogen bonds. The nanotube has an average internal diameter of 5 Å (0.5 nm) and the tubular ensemble is developed through the hydrogen-bonded phenolic-OH side chains of the tyrosine (Tyr) residues [Org. Lett. 2004, 6, 4463]. We have synthesized and studied several tripeptides 3-6 to probe the role of tyrosine residues in nanotube structure formation. These peptides either have only one Tyr residue, at the N- or C-terminus, or they have one or two terminally located phenylalanine (Phe) residues. These tripeptides failed to form any kind of nanotubular structure in the solid state. Single-crystal X-ray diffraction studies of peptides 3-6 clearly demonstrate that substitution of either of the terminal Tyr residues in the Boc-Tyr-X-Tyr-OMe (X = Val or Ile) sequence disrupts the formation of the nanotubular structure, indicating that the presence of two terminally located Tyr residues is vital for nanotube formation. (c) 2006 Elsevier Ltd. All rights reserved.