64 results for LV Network Constraints
Abstract:
The kinematic approach to cosmological tests provides direct evidence for the present accelerating stage of the Universe that depends neither on the validity of general relativity nor on the matter-energy content of the Universe. In this context, we consider here a linear two-parameter expansion for the deceleration parameter, q(z) = q₀ + q₁z, where q₀ and q₁ are arbitrary constants to be constrained by the Union supernovae data. Assuming a flat Universe, we find that the best fit to the pair of free parameters is (q₀, q₁) = (-0.73, 1.5), whereas the transition redshift is z_t = 0.49 (+0.14, -0.07) at 1σ and (+0.54, -0.12) at 2σ. This kinematic result is in agreement with some independent analyses and more easily accommodates many dynamical flat models (like ΛCDM).
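A minimal numerical check of the parametrization above (my sketch, not code from the paper): the transition redshift z_t is the root of q(z) = q₀ + q₁z, and the quoted best fit reproduces the quoted central value.

```python
def transition_redshift(q0, q1):
    """Root of the linear deceleration parameter q(z) = q0 + q1*z.

    q(z_t) = 0 marks the switch from cosmic deceleration to
    acceleration, so z_t = -q0 / q1 (requires q1 != 0).
    """
    return -q0 / q1

# Best-fit pair quoted in the abstract: (q0, q1) = (-0.73, 1.5)
zt = transition_redshift(-0.73, 1.5)
print(round(zt, 2))  # 0.49, matching the quoted transition redshift
```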
Abstract:
In many real situations, randomness is regarded as uncertainty or even confusion that impedes human beings from making correct decisions. Here we study the combined role of randomness and determinism in particle dynamics for community detection in complex networks. In the proposed model, particles walk in the network and compete with each other, each trying to possess as many nodes as possible. Moreover, we introduce a rule to adjust the level of randomness of particle walking in the network, and we find that a portion of randomness can largely improve the community detection rate. Computer simulations show that the model has good community detection performance and at the same time presents low computational complexity. (C) 2008 American Institute of Physics.
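A toy sketch of the particle-competition idea, under my own simplified ownership rule (the names `particle_competition` and `p_random` are illustrative, not the authors'): each particle either moves to a random neighbor (exploration) or to the neighbor it already dominates most (defense), and each node is finally labeled by its dominating particle.

```python
import random

def particle_competition(adj, n_particles, steps=2000, p_random=0.3, seed=0):
    """Illustrative particle-competition community detection.

    With probability p_random a particle takes a random-walk step;
    otherwise it deterministically prefers the neighbor it already
    dominates. Each visit strengthens ownership of the visited node.
    """
    rng = random.Random(seed)
    nodes = list(adj)
    ownership = {v: [0] * n_particles for v in nodes}
    pos = [rng.choice(nodes) for _ in range(n_particles)]
    for _ in range(steps):
        for k in range(n_particles):
            nbrs = adj[pos[k]]
            if rng.random() < p_random:
                nxt = rng.choice(nbrs)                           # exploratory move
            else:
                nxt = max(nbrs, key=lambda v: ownership[v][k])   # defensive move
            ownership[nxt][k] += 1
            pos[k] = nxt
    return {v: max(range(n_particles), key=lambda k: ownership[v][k])
            for v in nodes}

# Two triangles joined by one edge; two competing particles.
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = particle_competition(adj, n_particles=2)
print(labels)
```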
Abstract:
This article focuses on the identification of the number of paths with different lengths between pairs of nodes in complex networks and how these paths can be used for characterization of topological properties of theoretical and real-world complex networks. This analysis revealed that the number of paths can provide a better discrimination of network models than traditional network measurements. In addition, the analysis of real-world networks suggests that the long-range connectivity tends to be limited in these networks and may be strongly related to network growth and organization.
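For context (my sketch, not the article's method): the closely related count of *walks* of length k between nodes i and j is the (i, j) entry of the k-th power of the adjacency matrix; counting simple paths, as the article does, additionally forbids repeated nodes.

```python
def matmul(X, Y):
    """Dense matrix product for square integer matrices."""
    n = len(X)
    return [[sum(X[i][t] * Y[t][j] for t in range(n)) for j in range(n)]
            for i in range(n)]

def walk_counts(A, k):
    """Return A^k; entry (i, j) counts walks of length k from i to j."""
    P = A
    for _ in range(k - 1):
        P = matmul(P, A)
    return P

# Triangle graph: two closed walks of length 2 at each node.
A = [[0, 1, 1], [1, 0, 1], [1, 1, 0]]
print(walk_counts(A, 2)[0][0])  # 2
```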
Abstract:
This paper reports results from a search for ν_μ → ν_e transitions by the MINOS experiment based on a 7×10²⁰ protons-on-target exposure. Our observation of 54 candidate ν_e events in the far detector, with a background of 49.1 ± 7.0(stat) ± 2.7(syst) events predicted by the measurements in the near detector, requires 2sin²(2θ₁₃)sin²θ₂₃ < 0.12 (0.20) at the 90% C.L. for the normal (inverted) mass hierarchy at δ_CP = 0. The experiment sets the tightest limits to date on the value of θ₁₃ for nearly all values of δ_CP for the normal neutrino mass hierarchy and maximal sin²(2θ₂₃).
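A worked example of what the quoted limit implies (my arithmetic, not from the paper): for maximal atmospheric mixing, sin²(2θ₂₃) = 1, i.e. θ₂₃ = 45° and sin²θ₂₃ = 0.5, so the constraint reduces to sin²(2θ₁₃) < 0.12.

```python
import math

# Limit quoted for the normal hierarchy at delta_CP = 0:
#   2 * sin^2(2*theta13) * sin^2(theta23) < 0.12
limit = 0.12
sin2_theta23 = 0.5                              # maximal mixing assumption
sin2_2theta13_max = limit / (2 * sin2_theta23)  # = 0.12
theta13_max = 0.5 * math.asin(math.sqrt(sin2_2theta13_max))
print(round(math.degrees(theta13_max), 1))      # ~10.1 degrees
```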
Abstract:
For Au+Au collisions at 200 GeV, we measure neutral pion production with good statistics for transverse momentum, p_T, up to 20 GeV/c. A fivefold suppression is found, which is essentially constant for 5 < p_T < 20 GeV/c. Experimental uncertainties are small enough to constrain any model-dependent parametrization of the transport coefficient of the medium, e.g., q̂ in the parton quenching model. The spectral shape is similar for all collision classes, and the suppression does not saturate in Au+Au collisions.
Abstract:
The PHENIX experiment has measured the suppression of semi-inclusive single high-transverse-momentum π⁰'s in Au+Au collisions at √s_NN = 200 GeV. The present understanding of this suppression is in terms of energy loss of the parent (fragmenting) parton in a dense color-charge medium. We have performed a quantitative comparison between various parton energy-loss models and our experimental data. Statistical point-to-point uncorrelated as well as correlated systematic uncertainties are taken into account in the comparison. We detail this methodology and the resulting constraints on model parameters such as the initial color-charge density dN_g/dy, the medium transport coefficient q̂, and the initial energy-loss parameter ε₀. We find that high-transverse-momentum π⁰ suppression in Au+Au collisions has sufficient precision to constrain these model-dependent parameters at the ±20-25% (one standard deviation) level. These constraints include only the experimental uncertainties, and further studies are needed to compute the corresponding theoretical uncertainties.
Abstract:
This work clarifies the relation between network circuit (topology) and behaviour (information transmission and synchronization) in active networks, e.g. neural networks. As an application, we show how one can find network topologies that are able to transmit a large amount of information, possess a large number of communication channels, and are robust under large variations of the network coupling configuration. This theoretical approach is general and does not depend on the particular dynamics of the elements forming the network, since the network topology can be determined by finding a Laplacian matrix (the matrix that describes the connections and the coupling strengths among the elements) whose eigenvalues satisfy some special conditions. To illustrate our ideas and theoretical approaches, we use neural networks of electrically connected chaotic Hindmarsh-Rose neurons.
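As a concrete reminder of the object involved (a minimal sketch of mine): the Laplacian matrix L = D - A encodes both the connections and the coupling strengths, and every row of L sums to zero.

```python
def laplacian(adj):
    """Build L = D - A from an adjacency dict {node: {neighbor: weight}}."""
    nodes = sorted(adj)
    idx = {v: i for i, v in enumerate(nodes)}
    n = len(nodes)
    L = [[0.0] * n for _ in range(n)]
    for v, nbrs in adj.items():
        for u, w in nbrs.items():
            L[idx[v]][idx[u]] -= w      # off-diagonal: -coupling strength
            L[idx[v]][idx[v]] += w      # diagonal: total coupling (degree)
    return L

# 3-node chain with unit couplings; every row of L sums to zero.
L = laplacian({0: {1: 1.0}, 1: {0: 1.0, 2: 1.0}, 2: {1: 1.0}})
print(all(abs(sum(row)) < 1e-12 for row in L))  # True
```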
Abstract:
We numerically study the dynamics of a discrete spring-block model introduced by Olami, Feder, and Christensen (OFC) to mimic earthquakes and investigate to what extent this simple model is able to reproduce the observed spatiotemporal clustering of seismicity. Following a recently proposed method to characterize such clustering by networks of recurrent events [J. Davidsen, P. Grassberger, and M. Paczuski, Geophys. Res. Lett. 33, L11304 (2006)], we find that for synthetic catalogs generated by the OFC model these networks have many nontrivial statistical properties. This includes characteristic degree distributions, very similar to what has been observed for real seismicity. There are, however, also significant differences between the OFC model and earthquake catalogs, indicating that this simple model is insufficient to account for certain aspects of the spatiotemporal clustering of seismicity.
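A minimal sketch of one OFC driving/avalanche cycle under standard conventions (threshold F_th = 1, stress-transfer fraction α; my implementation, on a small periodic lattice for brevity, whereas studies of the model typically use open boundaries):

```python
import random

def ofc_step(F, alpha, nbrs):
    """One driving/avalanche cycle of the Olami-Feder-Christensen model.

    Drive all blocks uniformly until the largest force reaches the
    threshold (taken as 1), then topple: an unstable site resets to 0
    and transfers alpha*F to each neighbor. alpha < 0.25 makes the
    model non-conservative. Returns the avalanche size ("earthquake").
    """
    drive = 1.0 - max(F.values()) + 1e-12   # epsilon guards float rounding
    for s in F:
        F[s] += drive
    size = 0
    unstable = [s for s in F if F[s] >= 1.0]
    while unstable:
        s = unstable.pop()
        if F[s] < 1.0:                      # already relaxed this avalanche
            continue
        size += 1
        f, F[s] = F[s], 0.0                 # relaxation of the block
        for t in nbrs[s]:
            F[t] += alpha * f               # partial stress transfer
            if F[t] >= 1.0:
                unstable.append(t)
    return size

# 8x8 periodic lattice, non-conservative coupling alpha = 0.2.
random.seed(1)
nbrs = {(i, j): [((i + di) % 8, (j + dj) % 8)
                 for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]
        for i in range(8) for j in range(8)}
F = {s: 0.9 * random.random() for s in nbrs}
sizes = [ofc_step(F, alpha=0.2, nbrs=nbrs) for _ in range(200)]
print(min(sizes) >= 1)  # every cycle topples at least one block
```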
Abstract:
Complex networks have been characterised by their specific connectivity patterns (network motifs), but their building blocks can also be identified and described by node-motifs, a combination of local network features. One technique to identify single node-motifs has been presented by Costa et al. (L. D. F. Costa, F. A. Rodrigues, C. C. Hilgetag, and M. Kaiser, Europhys. Lett., 87, 1, 2009). Here, we first suggest improvements to the method, including how its parameters can be determined automatically. Such automatic routines make high-throughput studies of many networks feasible. Second, the new routines are validated on different network series. Third, we provide an example of how the method can be used to analyse network time series. In conclusion, we provide a robust method for systematically discovering and classifying characteristic nodes of a network. In contrast to classical motif analysis, our approach can identify individual components (here: nodes) that are specific to a network. Such special nodes, like hubs before them, might be found to play critical roles in real-world networks.
Abstract:
We present rigorous upper and lower bounds for the momentum-space ghost propagator G(p) of Yang-Mills theories in terms of the smallest nonzero eigenvalue (and the corresponding eigenvector) of the Faddeev-Popov matrix. We apply our analysis to data from simulations of SU(2) lattice gauge theory in Landau gauge, using the largest lattice sizes to date. Our results suggest that, in three and in four space-time dimensions, the Landau-gauge ghost propagator is not enhanced compared to its tree-level behavior. This is also seen in plots and fits of the ghost dressing function. In the two-dimensional case, on the other hand, we find that G(p) diverges as p^(-2-2κ) with κ ≈ 0.15, in agreement with A. Maas, Phys. Rev. D 75, 116004 (2007). We note that our discussion is general, although we make an application only to pure gauge theory in Landau gauge. Our simulations were performed on the IBM supercomputer at the University of São Paulo.
Abstract:
We present rigorous upper and lower bounds for the zero-momentum gluon propagator D(0) of Yang-Mills theories in terms of the average value of the gluon field. This allows us to perform a controlled extrapolation of lattice data to infinite volume, showing that the infrared limit of the Landau-gauge gluon propagator in SU(2) gauge theory is finite and nonzero in three and in four space-time dimensions. In the two-dimensional case, we find D(0) = 0, in agreement with Maas. We suggest an explanation for these results. We note that our discussion is general, although we apply our analysis only to pure gauge theory in the Landau gauge. Simulations were performed on the IBM supercomputer at the University of São Paulo.
Abstract:
Chagas disease is still a major public health problem in Latin America. Its causative agent, Trypanosoma cruzi, can be typed into three major groups: T. cruzi I, T. cruzi II and hybrids. These groups each have specific genetic characteristics and epidemiological distributions. Several highly virulent strains are found in the hybrid group; their origin is still a matter of debate. The null hypothesis is that the hybrids are of polyphyletic origin, evolving independently from various hybridization events. The alternative hypothesis is that all extant hybrid strains originated from a single hybridization event. We sequenced both alleles of genes encoding EF-1α, actin and SSU rDNA of 26 T. cruzi strains, and DHFR-TS and TR of 12 strains. This information was used for network genealogy analysis and Bayesian phylogenies. We found T. cruzi I and T. cruzi II to be monophyletic, and that all hybrids had different combinations of T. cruzi I and T. cruzi II haplotypes plus hybrid-specific haplotypes. Bootstrap values (networks) and posterior probabilities (Bayesian phylogenies) of clades supporting the monophyly of hybrids were far below the 95% confidence interval, indicating that the hybrid group is polyphyletic. We hypothesize that T. cruzi I and T. cruzi II are two different species and that the hybrids are extant representatives of independent genome hybridization events, which sporadically have sufficient fitness to impact the epidemiology of Chagas disease.
Abstract:
In this article, we discuss school schedules and their implications in the context of chronobiological contemporary knowledge, arguing for the need to reconsider time planning in the school setting. We present anecdotal observations regarding chronobiological challenges imposed by the school system throughout different ages and discuss the effects of these schedules in terms of sleepiness and its deleterious consequences on learning, memory, and attention. Different settings (including urban vs. rural habitats) influence timing, which also depends on self-selected sleep schedules. Finally, we criticize the traditional view of a necessary strict stability of sleep-wake habits.
Abstract:
The power loss reduction in distribution systems (DSs) is a nonlinear and multiobjective problem. Service restoration in DSs is computationally even harder, since it additionally requires a solution in real time. For large-scale networks, the usual problem formulation has thousands of constraint equations. The node-depth encoding (NDE) enables a modeling of DS problems that eliminates several constraint equations from the usual formulation, making the problem solution simpler. On the other hand, a multiobjective evolutionary algorithm (EA) based on subpopulation tables adequately models several objectives and constraints, enabling a better exploration of the search space. Combining the multiobjective EA with NDE (MEAN) results in the proposed approach for solving DS problems on large-scale networks. Simulation results show that MEAN is able to find adequate restoration plans for a real DS with 3860 buses and 632 switches in a running time of 0.68 s. Moreover, MEAN exhibits a sublinear running time as a function of system size. Tests with networks ranging from 632 to 5166 switches indicate that MEAN can find network configurations corresponding to a power loss reduction of 27.64% for very large networks while requiring relatively low running time.
Abstract:
We propose a connection admission control (CAC) to monitor the traffic in a multi-rate WDM optical network. The CAC searches for the shortest path connecting the source and destination nodes, assigns wavelengths with enough bandwidth to serve the requests, supervises the traffic in the most requested nodes, and, if needed, activates a reserved wavelength to release bandwidth according to traffic demand. We use a scale-free network topology, which includes highly connected nodes (hubs), to enhance the monitoring procedure. Numerical results obtained from computational simulations show improved network performance evaluated in terms of blocking probability.
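The route-selection step of such a CAC can be sketched with a plain BFS shortest path on a hub-dominated topology (an illustrative fragment of mine; the paper's wavelength assignment and traffic supervision are omitted):

```python
from collections import deque

def shortest_path(adj, src, dst):
    """BFS shortest path by hop count; returns the node list or None."""
    prev = {src: None}
    q = deque([src])
    while q:
        v = q.popleft()
        if v == dst:                  # reconstruct path back to the source
            path = []
            while v is not None:
                path.append(v)
                v = prev[v]
            return path[::-1]
        for u in adj[v]:
            if u not in prev:
                prev[u] = v
                q.append(u)
    return None

# Hub-and-spoke toy topology (scale-free-like): node 0 is the hub.
adj = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0, 4], 4: [0, 3]}
print(shortest_path(adj, 1, 4))  # [1, 0, 4]
```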