919 results for power-law distributions
Abstract:
We present a statistical analysis of the time evolution of ground magnetic fluctuations in three period bands (12–48 s, 24–96 s, and 48–192 s) during nightside auroral activations. We use an independently derived auroral activation list, composed of both substorms and pseudo-breakups, to estimate the activation times of the nightside aurora during periods with comprehensive ground magnetometer coverage. In total, 181 events are studied to characterize the statistical time evolution of magnetic wave power during the ∼30 min surrounding auroral activations. We find that the magnetic wave power is approximately constant before an auroral activation, starts to grow up to 90 s prior to the optical onset time, maximizes a few minutes after the auroral activation, then decays slightly to a new, higher constant level. Importantly, magnetic ULF wave power always remains elevated after an auroral activation, whether it is a substorm or a pseudo-breakup. We subsequently divide the auroral activation list into events that formed part of ongoing auroral activity and events that had little preceding geomagnetic activity. We find that the wave power in the ∼10–200 s period band evolves in essentially the same manner through auroral onset, regardless of event type. The absolute power across ULF wave bands, however, displays a power-law-like dependence throughout a 30 min period centered on auroral onset. We also find evidence of a secondary maximum in wave power at high latitudes ∼10 min following isolated substorm activations. Most significantly, we demonstrate that elevated magnetic wave power persists for ∼10 min after auroral activations, consistent with recent findings of wave-driven auroral precipitation during substorms. This suggests that magnetic wave power and auroral particle precipitation are intimately linked and are key components of the substorm onset process.
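The abstract does not give implementation details, but a minimal Python sketch of the kind of band-limited wave-power estimate it describes might look as follows; the Butterworth filter order, the 30 s smoothing window, and the name band_power are illustrative assumptions, not the study's actual processing chain:

```python
# Hypothetical sketch: wave power of a ground magnetometer series in a ULF period band.
import numpy as np
from scipy.signal import butter, filtfilt

def band_power(b_field, fs, period_band, window_s=30.0):
    """Sliding-window mean squared amplitude of b_field in one period band.

    period_band: (t_min, t_max) in seconds, e.g. (12, 48); fs is in Hz.
    """
    t_min, t_max = period_band
    low, high = 1.0 / t_max, 1.0 / t_min          # convert periods to frequencies
    b, a = butter(3, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, b_field)            # zero-phase band-pass
    n = int(window_s * fs)
    kernel = np.ones(n) / n
    return np.convolve(filtered**2, kernel, mode="same")
```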
Abstract:
A discrete-time random process is described which can generate bursty sequences of events. A Bernoulli process, where the probability of an event occurring at time t is given by a fixed probability x, is modified to include a memory effect whereby the event probability is increased in proportion to the number of events that occurred within a given amount of time preceding t. For small values of x the interevent time distribution follows a power law with exponent −2 − x. We consider a dynamic network where each node forms and breaks connections according to this process. The value of x for each node depends on the fitness distribution, ρ(x), from which it is drawn; we find exact solutions for the expectation of the degree distribution for a variety of possible fitness distributions, both with and without the memory effect. This work can potentially lead to methods for uncovering hidden fitness distributions from rapidly changing temporal network data, such as online social communications and fMRI scans.
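A minimal sketch of the memory-modified Bernoulli process described above; the memory horizon and the proportionality constant (boost) are not specified in the abstract and are illustrative assumptions:

```python
# Hypothetical sketch of the bursty process: P(event at t) = x + boost * (number of
# events in the preceding `memory` steps), capped at 1.
import numpy as np

def bursty_events(x, n_steps, memory=100, boost=0.005, rng=None):
    rng = rng or np.random.default_rng(0)
    events = np.zeros(n_steps, dtype=bool)
    for t in range(n_steps):
        recent = events[max(0, t - memory):t].sum()
        p = min(1.0, x + boost * recent)
        events[t] = rng.random() < p
    return events

ev = bursty_events(x=0.01, n_steps=200_000)
gaps = np.diff(np.flatnonzero(ev))   # interevent times; ~ power law for small x
```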
Abstract:
Accurate high-resolution records of snow accumulation rates in Antarctica are crucial for estimating ice sheet mass balance and subsequent sea level change. Snowfall rates at Law Dome, East Antarctica, have been linked with regional atmospheric circulation to the mid-latitudes as well as regional Antarctic snowfall. Here, we extend the Law Dome accumulation record from 750 years to 2035 years, using recent annual layer dating that extends to 22 BCE. Accumulation rates were calculated as the ratio of measured to modelled layer thicknesses, multiplied by the long-term mean accumulation rate; the modelled layer thicknesses were based on a power-law vertical strain rate profile fitted to the observed annual layer thicknesses. The periods 380–442, 727–783 and 1970–2009 CE have above-average snow accumulation rates, while 663–704, 933–975 and 1429–1468 CE were below average, and decadal-scale snow accumulation anomalies were found to be relatively common (74 events in the 2035-year record). The calculated snow accumulation rates show good correlation with atmospheric reanalysis estimates, and significant spatial correlation over a wide expanse of East Antarctica, demonstrating that the Law Dome record captures variability well beyond the immediate vicinity of the Law Dome summit. Spectral analysis reveals periodicities in the snow accumulation record that may be related to El Niño–Southern Oscillation (ENSO) and Interdecadal Pacific Oscillation (IPO) frequencies.
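The accumulation-rate calculation quoted above reduces to a one-line formula; a hypothetical sketch follows, where the specific power-law thinning profile and its exponent p are illustrative assumptions standing in for the profile the study fits to observed layer thicknesses:

```python
# Hypothetical sketch: accumulation rate = (measured / modelled thickness) * long-term mean,
# with modelled thickness from an assumed power-law vertical strain profile.
def accumulation_rate(measured_thickness, depth, ice_thickness, p, long_term_mean):
    """All thicknesses/depths in the same units; profile normalised to 1 at the surface."""
    modelled = (1.0 - depth / ice_thickness) ** p   # power-law thinning with depth
    return measured_thickness / modelled * long_term_mean
```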
Abstract:
This work maps and analyses cross-citations in the areas of Biology, Mathematics, Physics and Medicine in the English version of Wikipedia, represented as an undirected complex network where entries correspond to nodes and citations among entries are mapped as edges. We found high clustering coefficients for Biology and Medicine, and small values for Mathematics and Physics. The topological organization also differs for each network, including a modular structure for Biology and Medicine, a sparse structure for Mathematics and a dense core for Physics. The networks have degree distributions that can be approximated by a power law with a cut-off. The assortativity of the isolated networks has also been investigated, and the results indicate distinct patterns for each subject. We estimated the betweenness centrality of each node considering the full Wikipedia network, which contains the nodes of the four subjects and the edges between them. In addition, the average shortest path length between the subjects revealed a close relationship between Biology and Physics, and also between Medicine and Physics. Our results indicate that analysis of the full Wikipedia network cannot predict the behavior of the isolated categories, since their properties can be very different from those observed in the full network.
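The measurements listed above are all standard; a minimal sketch with networkx shows how each could be computed, using a scale-free toy graph as a stand-in for the (not reproduced) Wikipedia citation data:

```python
# Hypothetical sketch of the network measurements; the BA graph is a toy stand-in.
import networkx as nx

G = nx.barabasi_albert_graph(1000, 3, seed=0)   # heavy-tailed degrees, connected

clustering = nx.average_clustering(G)                   # per-subject clustering coefficient
degrees = [d for _, d in G.degree()]                    # sequence for power-law fitting
assortativity = nx.degree_assortativity_coefficient(G)
betweenness = nx.betweenness_centrality(G)              # as on the full network
avg_path = nx.average_shortest_path_length(G)           # requires a connected graph
```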
Abstract:
The power-law size distributions obtained experimentally for neuronal avalanches are important evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent, τ ∼ 3/2. Models at criticality have been employed to mimic avalanche propagation and explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While a typical multielectrode array records hundreds of neurons, tens of thousands of neurons can be found in the same area of neuronal tissue. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world and random network) and three different dynamical regimes (subcritical, critical and supercritical). We found that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but the shape of the undersampled distributions is closer to that of a fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density due to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
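As a rough, hypothetical illustration of undersampling in a branching-process-like model, the sketch below uses random (annealed) connectivity, i.e. the topology the abstract finds insensitive to sampling density; the branching ratio sigma sets the regime (sigma < 1 subcritical, = 1 critical, > 1 supercritical), and all parameter values are illustrative:

```python
# Hypothetical sketch: observed avalanche size when only a random fraction of
# units ("electrodes") is recorded.
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma, n_units=10_000, sampled_frac=0.01):
    sampled = rng.random(n_units) < sampled_frac   # which units we can see
    active = np.zeros(n_units, dtype=bool)
    active[rng.integers(n_units)] = True           # seed a single unit
    observed = int(sampled[active].sum())
    while active.any():
        # Each active unit activates Poisson(sigma) randomly chosen units.
        n_new = rng.poisson(sigma * active.sum())
        nxt = np.zeros(n_units, dtype=bool)
        nxt[rng.integers(n_units, size=n_new)] = True
        active = nxt
        observed += int(sampled[active].sum())
    return observed

sizes = [avalanche_size(sigma=0.95) for _ in range(2000)]  # subcritical example
```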
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
The complex behavior of a wide variety of phenomena of interest to physicists, chemists, and engineers has been quantitatively characterized using the ideas of fractal and multifractal distributions, which correspond in a unique way to the geometrical shape and dynamical properties of the systems under study. In this thesis we present the space of fractals and the Hausdorff-Besicovitch, box-counting and scaling methods for calculating the fractal dimension of a set. We also investigate percolation phenomena in multifractal objects that are built in a simple way. The central object of our analysis is a multifractal object that we call Qmf, in which the multifractality comes directly from the geometric tiling. We identify some differences between percolation in the proposed multifractals and in a regular lattice. There are basically two sources of these differences: the first is related to the coordination number, c, which changes along the multifractal; the second comes from the way the weight of each cell in the multifractal affects the percolation cluster. We use many samples of finite-size lattices and draw the histogram of percolating lattices against the site occupation probability p. Depending on a parameter ρ characterizing the multifractal and the lattice size L, the histogram can have two peaks. We observe that the occupation probability at the percolation threshold, pc, is lower for the multifractal than for the square lattice. We compute the fractal dimension of the percolating cluster and the critical exponent β. Despite the topological differences, we find that percolation on a multifractal support is in the same universality class as standard percolation. The area and the number of neighbors of the blocks of Qmf show non-trivial behavior, and a general view of the object Qmf reveals an anisotropy. The value of pc is a function of ρ, which is related to this anisotropy; we investigate the relation between pc and the average number of neighbors of the blocks, as well as the anisotropy of Qmf. Likewise, we study the distribution of shortest paths in percolation systems at the percolation threshold in two dimensions (2D), considering paths from one given point to multiple other points. In oil recovery terminology, the single given point can be mapped to an injection well (injector) and the multiple other points to production wells (producers). In the standard case of one injection well and one production well separated by Euclidean distance r, the distribution of shortest paths l, P(l|r), shows power-law behavior with exponent g_l = 2.14 in 2D. Here we analyze the situation of one injector and an array A of producers. Symmetric arrays of producers lead to one peak in P(l|A), the probability that the shortest path between the injector and any of the producers is l, while asymmetric configurations lead to several peaks. We analyze configurations in which the injector is outside and inside the set of producers. The peak in P(l|A) for symmetric arrays decays faster than in the standard case. For very long paths, all the studied arrays exhibit power-law behavior with exponent g ≈ g_l.
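A minimal sketch of the injector/producer shortest-path measurement on ordinary site percolation at the 2D threshold; the lattice size, well placement, and the use of networkx for the chemical distance are illustrative assumptions (the thesis studies these paths on its multifractal support as well):

```python
# Hypothetical sketch: shortest chemical path from an injector to producer wells
# on a square-lattice site-percolation configuration at p_c ~ 0.5927.
import numpy as np
import networkx as nx

rng = np.random.default_rng(1)
L, p_c = 128, 0.5927
occupied = rng.random((L, L)) < p_c

G = nx.Graph()
for i in range(L):
    for j in range(L):
        if occupied[i, j]:
            if i + 1 < L and occupied[i + 1, j]:
                G.add_edge((i, j), (i + 1, j))
            if j + 1 < L and occupied[i, j + 1]:
                G.add_edge((i, j), (i, j + 1))

injector = (L // 2, L // 2)
producers = [(L // 2, L // 2 + 32), (L // 2 + 32, L // 2)]  # symmetric toy array
if injector in G:
    lengths = nx.single_source_shortest_path_length(G, injector)
    shortest = min((lengths[p] for p in producers if p in lengths), default=None)
```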
Abstract:
Within a QCD-based eikonal model with a dynamical infrared gluon mass scale, we discuss how the small-x behavior of the gluon distribution function at moderate Q² is directly related to the rise of total hadronic cross sections. In this model the rise of total cross sections is driven by gluon-gluon semihard scattering processes, where the small-x gluon distribution function exhibits the power-law behavior xg(x, Q²) = h(Q²) x^(−ε). Assuming that the Q² scale is proportional to the dynamical gluon mass scale, we show that the values of h(Q²) obtained in this model are compatible with an earlier result based on a specific nonperturbative Pomeron model. We discuss the implications of this picture for the behavior of input valence-like gluon distributions at low resolution scales.
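The quoted power law is simple enough to evaluate directly; in the hypothetical sketch below, the values of h(Q²) and ε are illustrative placeholders, not the model's fitted values:

```python
# Hypothetical sketch of the small-x rise xg(x, Q^2) = h(Q^2) * x**(-eps).
eps = 0.08           # assumed effective exponent (illustrative only)

def xg(x, h_q2=2.0):
    """Small-x gluon distribution at a fixed Q^2 scale."""
    return h_q2 * x ** (-eps)

for x in (1e-2, 1e-3, 1e-4):
    print(f"x = {x:.0e}: xg = {xg(x):.2f}")   # rises as x decreases
```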
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
The temperature and velocity distributions of the air inside the cabinet of domestic refrigerators affect the quality of food products. If the consumer knows the location of warm and cold zones in the refrigerator, products can be placed in the right zone. In addition, knowledge of the thickness of the thermal and hydrodynamic boundary layers near the evaporator and the other walls is also important: if a product is too close to the evaporator wall, freezing can occur, and if it is too close to warm walls, the product can deteriorate. The aim of the present work is to develop a steady-state computational fluid dynamics (CFD) model for domestic refrigerators operating in the natural convection regime. The finite volume method is chosen as the numerical procedure for discretizing the governing equations. The SIMPLE (Semi-Implicit Method for Pressure-Linked Equations) algorithm applied to a staggered mesh is used to solve the pressure-velocity coupling problem. The power-law scheme is employed as the interpolation function for the convective-diffusive terms, and the TDMA (Tri-Diagonal Matrix Algorithm) is used to solve the systems of algebraic equations. The model is applied to a commercial static refrigerator, where the cabinet is treated as an empty three-dimensional rectangular cavity with one drawer at the bottom but no shelves. To analyze the velocity and temperature fields of the air flow inside the cabinet, the evaporator temperature Te was varied from −20 °C to 0 °C, and nine different evaporator positions were evaluated at an evaporator temperature of −15 °C. The cooling capacity of the evaporator in the steady-state regime is also computed for each case. One can conclude that the vertical positioning of the evaporator inside the cabinet plays an important role in the temperature distribution inside the cabinet.
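The TDMA mentioned above is the standard Thomas algorithm; a minimal sketch (independent of this study's specific solver) is:

```python
# Sketch of the TDMA (Thomas algorithm) for a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i];
# a[0] and c[-1] are ignored.
import numpy as np

def tdma(a, b, c, d):
    n = len(d)
    cp, dp = np.empty(n), np.empty(n)
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = np.empty(n)
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

a = np.array([0., -1., -1.]); b = np.array([2., 2., 2.])
c = np.array([-1., -1., 0.]); d = np.array([1., 0., 1.])
print(tdma(a, b, c, d))                        # -> [1. 1. 1.]
```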
Abstract:
Within a QCD-based eikonal model with a dynamical infrared gluon mass scale, we discuss how the small-x behavior of the gluon distribution function at moderate Q² is directly related to the rise of total hadronic cross sections. In this model the rise of total cross sections is driven by gluon-gluon semihard scattering processes, where the small-x gluon distribution function exhibits the power-law behavior xg(x, Q²) = h(Q²) x^(−ε). Assuming that the Q² scale is proportional to the dynamical gluon mass scale, we show that the values of h(Q²) obtained in this model are compatible with an earlier result based on a specific nonperturbative Pomeron model. We discuss the implications of this picture for the behavior of input valence-like gluon distributions at low resolution scales.
Abstract:
Graduate Program in Physics - IGCE
Abstract:
Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)
Abstract:
Graduate Program in Agronomy (Plant Production) - FCAV
Abstract:
Verification assesses the quality of quantitative precipitation forecasts (QPFs) against observations and provides indications of systematic model errors. Using the feature-based technique SAL, simulated precipitation distributions are analysed with respect to (S)tructure, (A)mplitude and (L)ocation. For some years now, numerical weather prediction models have been run with grid spacings that allow deep convection to be simulated without parameterisation, and the question arises whether these models deliver better forecasts. The high-resolution hourly observational data set used in this work is a combination of radar and station measurements. First, using the German COSMO models as an example, it is shown that the latest generation of models simulates the mean diurnal cycle better, albeit with too weak a maximum that occurs somewhat too late; in contrast, the older generation of models produces too strong a maximum that occurs considerably too early. Second, the new model achieves a better simulation of the spatial distribution of precipitation by markedly reducing the windward/leeward problem. To quantify these subjective assessments, daily QPFs from four models for Germany over an eight-year period were examined using SAL as well as classical measures. The higher-resolution models simulate more realistic precipitation distributions (better in S), but the other components show hardly any difference. A further point is that the model with the coarsest resolution (ECMWF) is rated clearly best by the RMSE, which illustrates the 'double penalty' problem. Combining the three SAL components shows that the most finely resolved model (COSMO-DE) performs best, especially in summer, mainly owing to a more realistic structure; SAL thus provides helpful information and confirms the subjective assessment.
In 2007 the COPS and MAP D-PHASE projects took place, offering the opportunity to compare 19 models from three model categories with respect to their forecast performance in southwestern Germany for accumulation periods of 6 and 12 hours. The most notable results are that (i) the smaller the grid spacing of a model, the more realistic the simulated precipitation distributions; (ii) in terms of precipitation amount, the high-resolution models simulate less precipitation, i.e. usually too little; and (iii) the location component is simulated worst by all models. Analysing the forecast performance of these model types for convective situations reveals clear differences: in high-pressure situations, the models without convection parameterisation are unable to simulate the convection, whereas the models with convection parameterisation produce the right amount of precipitation but structures that are too widespread. For convective events associated with fronts, both model types are able to simulate the precipitation distribution, with the high-resolution models providing more realistic fields. This weather-regime-based investigation is carried out more systematically using the convective time scale. A climatology compiled for Germany for the first time shows that the frequency of this time scale decays towards larger values following a power law. The SAL results are dramatically different for the two regimes: for small values of the convective time scale they are good, whereas for large values both the structure and the amplitude are clearly overestimated.
For precipitation forecasts at very high temporal resolution, the influence of timing errors becomes increasingly important. These can be determined by optimising (minimising) the L component of SAL within a time window (±3 h) centred on the observation time. It is shown that, at the optimal time shift, the structure and amplitude of the COSMO-DE QPFs improve, which better demonstrates the model's fundamental ability to simulate the precipitation distribution realistically.
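The amplitude component of SAL is compact enough to sketch; the following hypothetical Python illustration uses the normalised difference of domain-mean precipitation for A (following the published SAL definition) together with a time-shift search over a ±3 h window as described above. Since the L component requires object identification not reproduced here, the sketch minimises |A| as a simplified stand-in, and all data-handling details (the fields_by_shift mapping, hourly steps) are assumptions:

```python
# Hypothetical sketch: SAL amplitude component and a +/-3 h time-shift search.
def amplitude(model_field, obs_field):
    """Normalised difference of domain-mean precipitation; A in [-2, 2]."""
    dm, do = model_field.mean(), obs_field.mean()
    return (dm - do) / (0.5 * (dm + do))

def best_time_shift(fields_by_shift, obs_field, window=3):
    """Shift (hours) in [-window, window] minimising |A|; a simplified stand-in
    for the L-component optimisation described in the abstract."""
    return min(range(-window, window + 1),
               key=lambda s: abs(amplitude(fields_by_shift[s], obs_field)))
```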