983 results for Power laws


Relevance:

60.00%

Publisher:

Abstract:

We obtain the Paris law of fatigue crack propagation in a fuse network model where the accumulated damage in each resistor increases with time as a power law of the local current amplitude. When a resistor reaches its fatigue threshold, it burns irreversibly. Over time, this drives cracks to grow until the system is fractured into two parts. We study the relation between the macroscopic exponent of the crack-growth rate, which enters the phenomenological Paris law, and the microscopic damage-accumulation exponent γ under the influence of disorder. How the jumps of the growing crack, Δa, and the waiting time between successive breaks, Δt, depend on the type of material, via γ, is also investigated. We find that the averages of these quantities, ⟨Δa⟩ and ⟨Δt⟩/⟨t_r⟩, scale as power laws of the crack length a: ⟨Δa⟩ ∝ a^α and ⟨Δt⟩/⟨t_r⟩ ∝ a^(-β), where ⟨t_r⟩ is the average rupture time. Strikingly, our results show, for small values of γ, a decrease in the exponent of the Paris law compared with the homogeneous case, leading to an increase in the lifetime of breaking materials. For the particular case γ = 0, when fatigue is ruled exclusively by disorder, an analytical treatment confirms the results obtained by simulation. Copyright (C) EPLA, 2012
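As a minimal sketch of how the scaling exponents reported above can be extracted from simulation output (the variable names and the synthetic data below are assumptions for illustration, not the authors' code), one can fit ⟨Δa⟩ ∝ a^α and ⟨Δt⟩/⟨t_r⟩ ∝ a^(-β) on log-log axes:

```python
import numpy as np

# Hypothetical simulation output: average crack jump <Delta a> and normalized
# waiting time <Delta t>/<t_r> recorded at several crack lengths a.
a = np.array([4.0, 8.0, 16.0, 32.0, 64.0, 128.0])
rng = np.random.default_rng(0)
mean_jump = 0.05 * a**1.3 * np.exp(0.02 * rng.standard_normal(a.size))   # ~ a^alpha
mean_wait = 2.00 * a**-0.8 * np.exp(0.02 * rng.standard_normal(a.size))  # ~ a^-beta

def fit_power_law(x, y):
    """Least-squares fit of y = C * x**k on log-log axes; returns the exponent k."""
    k, _ = np.polyfit(np.log(x), np.log(y), 1)
    return k

alpha = fit_power_law(a, mean_jump)
beta = -fit_power_law(a, mean_wait)
# The effective crack-growth rate <Delta a>/<Delta t> then scales as a^(alpha + beta),
# which plays the role of the macroscopic Paris exponent in this picture.
print(f"alpha ~ {alpha:.2f}, beta ~ {beta:.2f}, Paris-like exponent ~ {alpha + beta:.2f}")
```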

Relevance:

60.00%

Publisher:

Abstract:

The extension of Boltzmann-Gibbs thermostatistics, proposed by Tsallis, introduces an additional parameter q to the inverse temperature beta. Here, we show that a previously introduced generalized Metropolis dynamics to evolve spin models is not local and does not obey the detailed energy balance. In this dynamics, locality is only retrieved for q = 1, which corresponds to the standard Metropolis algorithm. Nonlocality implies very time-consuming computer calculations, since the energy of the whole system must be reevaluated when a single spin is flipped. To circumvent this costly calculation, we propose a generalized master equation, which gives rise to a local generalized Metropolis dynamics that obeys the detailed energy balance. To compare the different critical values obtained with other generalized dynamics, we perform Monte Carlo simulations in equilibrium for the Ising model. By using short-time nonequilibrium numerical simulations, we also calculate for this model the critical temperature and the static and dynamical critical exponents as functions of q. Even for q not equal 1, we show that suitable time-evolving power laws can be found for each initial condition. Our numerical experiments corroborate the literature results when we use nonlocal dynamics, showing that short-time parameter determination works also in this case. However, the dynamics governed by the new master equation leads to different results for critical temperatures and also the critical exponents affecting universality classes. We further propose a simple algorithm to optimize modeling the time evolution with a power law, considering in a log-log plot two successive refinements.
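A minimal sketch of a local, q-deformed Metropolis update for the 2D Ising model is given below. The acceptance rule min(1, exp_q(-βΔE)) with the Tsallis q-exponential is an assumption chosen for illustration; the dynamics derived from the generalized master equation in the paper may differ in detail. Only the local energy change of the flipped spin enters, which is the locality property discussed above.

```python
import numpy as np

def q_exp(x, q):
    """Tsallis q-exponential; reduces to exp(x) as q -> 1."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0.0 else 0.0

def sweep(spins, beta, q, rng):
    """One sweep of a q-deformed Metropolis rule for the 2D Ising model (J = 1).
    Only the local energy change of the flipped spin is evaluated."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn
        if dE <= 0.0 or rng.random() < q_exp(-beta * dE, q):
            spins[i, j] *= -1

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(32, 32))
for _ in range(200):
    sweep(spins, beta=0.5, q=1.1, rng=rng)
print("magnetization per spin:", spins.mean())
```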

Relevance:

60.00%

Publisher:

Abstract:

Precipitation and desert dust event occurrence time series measured in the Canary Islands region are examined with the primary intention of exploring their scaling characteristics as well as their spatial variability in terms of the islands' topography and geographical orientation. In particular, the desert dust intrusion regime in the islands is studied in terms of its relationship with visibility. Analysis of dust and rainfall events over the archipelago shows distributions in time that obey power laws. Results show that the rain process presents a highly clustered and irregular pattern on short timescales and a more scattered structure on long ones. In contrast, dustiness presents a more uniform and dense structure and, consequently, more persistent behaviour on short timescales. The fractal dimension of rainfall events shows an important spatial variability, which increases with altitude, as well as towards northern latitudes and western longitudes.
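As an illustration of the kind of scaling analysis described (the synthetic rain-day record and the range of scales below are assumptions, not the station data), a box-counting estimate of the fractal dimension of an event-occurrence series can be computed as follows:

```python
import numpy as np

def box_counting_dimension(event_times, scales):
    """Estimate the box-counting dimension of a set of event occurrence times
    by counting how many boxes of size s contain at least one event."""
    event_times = np.asarray(event_times)
    counts = [np.unique(event_times // s).size for s in scales]
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(scales, dtype=float)), np.log(counts), 1)
    return slope

# Hypothetical daily record: days (out of ten years) on which rain was observed.
rng = np.random.default_rng(2)
rain_days = np.sort(rng.choice(3650, size=400, replace=False))
scales = [1, 2, 4, 8, 16, 32, 64, 128]
print("box-counting dimension ~", round(box_counting_dimension(rain_days, scales), 2))
```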

Relevance:

60.00%

Publisher:

Abstract:

'Responsive' brush polymers. Brush polymers are macromolecules with defined branching, consisting of a main chain and many (covalently) grafted side chains; if the degree of grafting is high and the main chain is substantially longer than the side chains, they take the shape of semiflexible molecular cylinders. If the shape or extension of such a cylinder can be switched in a controlled way, these molecules could be used either as (nano)sensors for the corresponding environmental condition or as molecular motors. The idea of responsive brush polymers rests on the following consideration: the stretched conformation of the main chain is entropically disfavored compared with a corresponding coil, so brush polymers act as 'molecular springs' that respond to changes in the repulsive interaction between the side chains. This was investigated for the switch between good and poor solvents. A second concept for changing the molecular shape relies on the intramolecular phase separation ('segment formation') of mutually incompatible side chains in selective solvents, since the formation of microphases along the molecule should likewise force the main chain out of its stretched conformation. The third possibility for changing the conformation is the intramolecular cross-linking of side chains, which should also reduce the repulsion and thus shorten the cylinders. A further important part of this work was the study of the transition from a coiled main chain to a stretched brush as a function of grafting density. To address these questions, cylindrical brush polymers were synthesized by 'grafting through' and 'grafting onto' (PS as well as PI/PS and PnBMA/PMAA with core/shell and 'segment' architectures), and the grafting density, the degree of cross-linking (cross-linking by gamma irradiation) and the solution conditions were varied systematically. The possibility of deliberately triggering the conformational change by cross-linking was successfully confirmed after polymer-analogous modification of PI/PS brush polymers by photo-cross-linking and cross-linking complexation. AFM, light scattering and neutron scattering were used to characterize the sample series. The analyses consistently confirmed the changes in stiffness, cylinder cross-section and main-chain extension caused by variation of grafting density, cross-linking and solvent quality. As a function of grafting density, these parameters obey power laws.

Relevance:

60.00%

Publisher:

Abstract:

Spectral absorption coefficients of total particulate matter, ap(λ), were determined using the in vitro filter technique. The present analysis deals with a set of 1166 spectra, determined in various oceanic (case 1) waters, with field chl a concentrations ([chl]) spanning 3 orders of magnitude (0.02-25 mg/m^3). As previously shown [Bricaud et al., 1995, doi:10.1029/95JC00463] for the absorption coefficients of living phytoplankton, aφ(λ), the ap(λ) coefficients also increase nonlinearly with [chl]. The relationships (power laws) that link ap(λ) and aφ(λ) to [chl] show striking similarities. Despite large fluctuations, the relative contribution of nonalgal particles to total absorption oscillates around an average value of 25-30% throughout the [chl] range. The spectral dependence of absorption by these nonalgal particles follows an exponential increase toward short wavelengths, with a weakly variable slope (0.011 ± 0.0025/nm). The empirical relationships linking ap(λ) to [chl] can be used in bio-optical models. This parameterization based on in vitro measurements leads to good agreement with a former modeling of the diffuse attenuation coefficient based on in situ measurements. This agreement is worth noting, as independent methods and data sets are compared. It is stressed that, for a given [chl], the ap(λ) coefficients show large residual variability around the regression lines (for instance, by a factor of 3 at 440 nm). The consequences of such variability when predicting or interpreting the diffuse reflectance of the ocean are examined, according to whether or not these variations in ap are associated with concomitant variations in particle scattering. In most situations the deviations in ap are not compensated by those in particle scattering, so that the amplitude of the reflectance is affected by these variations.
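The structure of such a parameterization can be sketched as below; the coefficients A and E and the chlorophyll values are placeholders, not the regression coefficients tabulated in the paper, and only the non-algal spectral slope of roughly 0.011/nm is taken from the abstract:

```python
import numpy as np

S_NAP = 0.011   # spectral slope of non-algal absorption (1/nm), ~0.011 +/- 0.0025

def ap(chl, A=0.05, E=0.6):
    """Total particulate absorption at a given wavelength, modeled as a power law of
    the chlorophyll concentration: ap = A * [chl]**E.  A and E are placeholders."""
    return A * chl ** E

def a_nap(a_nap_440, wavelength):
    """Non-algal particle absorption, decreasing exponentially toward long wavelengths."""
    return a_nap_440 * np.exp(-S_NAP * (wavelength - 440.0))

for chl in (0.03, 0.3, 3.0, 25.0):
    total = ap(chl)                      # ap(440) for this [chl]
    nap = 0.27 * total                   # non-algal share oscillates around ~25-30%
    print(f"[chl] = {chl:5.2f} mg/m^3 -> ap(440) ~ {total:.3f} 1/m, "
          f"of which non-algal ~ {nap:.3f} 1/m, a_nap(550) ~ {a_nap(nap, 550.0):.4f} 1/m")
```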

Relevance:

60.00%

Publisher:

Abstract:

This work addresses the problem of modeling real dynamical systems from the study of their time series, using a standard formulation intended as a universal abstraction of dynamical systems, irrespective of their deterministic, stochastic or hybrid nature. Deterministic and stochastic models are developed separately and then merged into a hybrid model that allows the study of generic mixed systems, that is, systems presenting a combination of deterministic and random behavior. This model consists of two components: a deterministic one, a difference equation obtained from an autocorrelation study, and a stochastic one that models the error made by the first. The stochastic component is a universal generator of probability distributions, based on a process composed of random variables uniformly distributed within an interval that varies in time. This universal generator is derived in the thesis from a new theory of the supply and demand of a generic resource. The resulting model can be formulated conceptually as an entity with three fundamental elements: an engine generating deterministic dynamics, an internal noise source generating uncertainty, and an exposure to the environment that represents the interactions of the real system with the external world. In applications, these three elements are fitted to the history of the time series of the dynamical system. Once its components have been fitted, the model behaves adaptively, taking the new values of the system's time series as inputs and computing predictions of its future behavior. Each prediction is given as an interval within which any value is equiprobable, while any value outside the interval has zero probability. In this way the model computes the future behavior and its level of uncertainty from the current state of the system. The model is applied in this thesis to very different systems and proves flexible enough to address fields of quite disparate nature. The exchange of telephone traffic between telephony operators, the evolution of financial markets and the flow of information between Internet servers are studied in depth; all of these systems are successfully modeled in the same language despite being physically very different. The study of telephony networks shows that telephone traffic patterns exhibit a strong weekly pseudo-periodicity contaminated with a large amount of noise, especially in the case of international calls. The study of financial markets shows that their underlying nature is random, with a relatively bounded range of behavior. Part of the thesis is devoted to explaining some of the most important empirical observations in financial markets, such as fat tails, power laws and volatility clustering. Finally, it is shown that communication between Internet servers has, as in the case of financial markets, an underlying fully stochastic component with rather docile behavior, this docility becoming more pronounced as the distance between servers increases. Two aspects of the model stand out: its adaptability and its universality. The first is due to the fact that, once the general parameters have been fitted, the model is fed with the observable values of the system and is able to compute future behavior from them. Although the parameters are fixed, the variability of the observables used as inputs leads to a great richness of possible outputs. The second is due to the generic formulation of the hybrid model and to the fact that its parameters are fitted from external manifestations of the system under study rather than from its physical characteristics. These factors make the model applicable to a wide variety of fields. Finally, the thesis proposes, in its last part, other fields in which very promising preliminary results have been obtained, such as the modeling of financial risk, routing algorithms in telecommunication networks and climate change.
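A toy version of the hybrid scheme described above might look as follows; the AR(1)-type difference equation, the weekly sinusoidal test series and the 95% quantile used for the interval half-width are all illustrative assumptions, not the formulation developed in the thesis:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical observable: a weekly pseudo-periodic series contaminated with noise.
t = np.arange(500)
series = 10.0 + 3.0 * np.sin(2.0 * np.pi * t / 7.0) + rng.normal(0.0, 0.5, t.size)

# Deterministic engine: a one-step difference equation x_t ~ c0 + c1 * x_{t-1},
# fitted by least squares (a stand-in for the autocorrelation study).
X = np.column_stack([np.ones(series.size - 1), series[:-1]])
c0, c1 = np.linalg.lstsq(X, series[1:], rcond=None)[0]

# Internal noise source: residuals define a uniform interval of half-width w,
# so each prediction is an interval inside which every value is equiprobable.
residuals = series[1:] - (c0 + c1 * series[:-1])
w = np.quantile(np.abs(residuals), 0.95)

center = c0 + c1 * series[-1]
print(f"next value predicted in [{center - w:.2f}, {center + w:.2f}] (equiprobable inside, zero outside)")
```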

Relevance:

60.00%

Publisher:

Abstract:

The low Earth orbit (LEO) environment contains a large amount of artificial debris, a significant portion of which is due to dead satellites and to satellite fragments resulting from explosions and in-orbit collisions. Deorbiting defunct satellites at the end of their life can be achieved by successful operation of an electrodynamic tether (EDT) system. The effectiveness of an EDT greatly depends on the survivability of the tether, which can become debris itself if cut by debris particles; a tether can be completely cut by debris above some minimal diameter. The objective of this paper is to develop an accurate model, using power laws over the debris-size ranges of both the ORDEM2000 and MASTER2009 debris flux models, to calculate tape tether survivability. The analytical model, which depends on the tape dimensions (width, thickness) and the orbital parameters (inclination, altitude), is then verified against fully numerical results for different orbit inclinations, altitudes and tape widths for both ORDEM2000 and MASTER2009 flux data.
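A rough sketch of how a power-law debris flux translates into a tape survival estimate is given below; the flux coefficients, the lethal-diameter criterion and the Poisson survival assumption are illustrative placeholders, not the fitted ORDEM2000/MASTER2009 values of the paper:

```python
import math

# Cumulative debris flux approximated by a power law of particle diameter over a
# size range: F(>d) = F0 * (d / d0) ** (-alpha)  [impacts per m^2 per year].
# The coefficients below are placeholders, not fitted ORDEM2000/MASTER2009 values.
F0, d0, alpha = 3e-4, 1e-3, 2.3

def cumulative_flux(d):
    return F0 * (d / d0) ** (-alpha)

def tape_survival(width, length, years, lethality=0.3):
    """Probability that the tape survives `years`, assuming a particle larger than
    `lethality * width` severs it and that impacts follow a Poisson process."""
    d_min = lethality * width                    # minimal lethal diameter (assumption)
    effective_area = length * (width + d_min)    # area presented to lethal debris
    expected_cuts = cumulative_flux(d_min) * effective_area * years
    return math.exp(-expected_cuts)

print("survival probability:", round(tape_survival(width=0.02, length=5000.0, years=2.0), 3))
```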

Relevance:

60.00%

Publisher:

Abstract:

For taxonomic levels higher than species, the abundance distributions of the number of subtaxa per taxon tend to approximate power laws but often show strong deviations from such laws. Previously, these deviations were attributed to finite-time effects in a continuous-time branching process at the generic level. Instead, we describe herein a simple discrete branching process that generates the observed distributions and find that the distribution's deviation from power law form is not caused by disequilibration, but rather that it is time independent and determined by the evolutionary properties of the taxa of interest. Our model predicts—with no free parameters—the rank-frequency distribution of the number of families in fossil marine animal orders obtained from the fossil record. We find that near power law distributions are statistically almost inevitable for taxa higher than species. The branching model also sheds light on species-abundance patterns, as well as on links between evolutionary processes, self-organized criticality, and fractals.
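A minimal preferential-attachment (Yule-type) sketch shows how a simple branching rule yields near power law subtaxa-per-taxon distributions; this is an illustration of the general mechanism, not the specific discrete branching process of the paper:

```python
import numpy as np
from collections import Counter

def yule_like(n_families=20000, p_new_order=0.05, seed=0):
    """Each new family either founds a new order (probability p_new_order) or joins
    an existing order chosen with probability proportional to its current size."""
    rng = np.random.default_rng(seed)
    orders = [1]                                   # families per order
    for _ in range(n_families):
        if rng.random() < p_new_order:
            orders.append(1)
        else:
            sizes = np.asarray(orders, dtype=float)
            orders[rng.choice(sizes.size, p=sizes / sizes.sum())] += 1
    return orders

counts = Counter(yule_like())                      # how many orders have s families each
for s in sorted(counts)[:8]:
    print(f"{counts[s]:6d} orders with {s:3d} families")
```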

Relevance:

60.00%

Publisher:

Abstract:

We present an analysis of 100 ks of contiguous XMM-Newton data of the prototypical wind accretor Vela X-1. The observation covered eclipse egress between orbital phases 0.134 and 0.265, during which a giant flare took place, enabling us to study the spectral properties both outside and during the flare. This giant flare, with a peak luminosity of 3.92 (+0.42/-0.09) × 10^37 erg s^-1, allows estimates of the physical parameters of the accreted structure, which has a mass of ~10^21 g. We have been able to model several contributions to the observed spectrum with a phenomenological model formed by three absorbed power laws plus three emission lines. After analysing the variations with orbital phase of the column density of each component, as well as those in the Fe and Ni fluorescence lines, we provide a physical interpretation for each spectral component. While the first two components are two aspects of the principal accretion component from the surface of the neutron star, the third component seems to be the X-ray light echo formed in the stellar wind of the companion.
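The structure of such a phenomenological model can be sketched as below; the absorption law, the parameter values and the identification of the three lines as Fe Kα, Fe Kβ and Ni Kα are assumptions for illustration, not the fitted values of the analysis (real fits use tabulated photoabsorption models rather than the crude E^-3 cross-section used here):

```python
import numpy as np

E = np.linspace(0.5, 10.0, 500)              # photon energy grid (keV)

def absorbed_power_law(E, norm, gamma, nH):
    """Power law with a crude photoelectric absorption factor.  The E**-3
    cross-section is only a qualitative stand-in for tabulated models."""
    sigma = 2e-22 * E ** -3                   # cm^2 per H atom (rough)
    return norm * E ** (-gamma) * np.exp(-nH * sigma)

def gaussian_line(E, norm, center, width):
    return norm * np.exp(-0.5 * ((E - center) / width) ** 2)

# Hypothetical parameter values, chosen only to show the structure of the model:
model = (absorbed_power_law(E, 1.00, 1.0, 5e22)    # direct accretion component
         + absorbed_power_law(E, 0.30, 1.0, 5e23)  # heavily absorbed component
         + absorbed_power_law(E, 0.02, 1.0, 1e21)  # scattered / light-echo component
         + gaussian_line(E, 0.050, 6.4, 0.05)      # Fe K-alpha (assumed line identity)
         + gaussian_line(E, 0.010, 7.1, 0.05)      # Fe K-beta  (assumed)
         + gaussian_line(E, 0.005, 7.5, 0.05))     # Ni K-alpha (assumed)
print("model flux at 6.4 keV:", model[np.argmin(np.abs(E - 6.4))])
```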

Relevance:

60.00%

Publisher:

Abstract:

Understanding the exploration patterns of foragers in the wild provides fundamental insight into animal behavior. Recent experimental evidence has demonstrated that path lengths (distances between consecutive turns) taken by foragers are well fitted by a power law distribution. Numerous theoretical contributions have posited that “Lévy random walks”—which can produce power law path length distributions—are optimal for memoryless agents searching a sparse reward landscape. It is unclear, however, whether such a strategy is efficient for cognitively complex agents, from wild animals to humans. Here, we developed a model to explain the emergence of apparent power law path length distributions in animals that can learn about their environments. In our model, the agent’s goal during search is to build an internal model of the distribution of rewards in space that takes into account the cost of time to reach distant locations (i.e., temporally discounting rewards). For an agent with such a goal, we find that an optimal model of exploration in fact produces hyperbolic path lengths, which are well approximated by power laws. We then provide support for our model by showing that humans in a laboratory spatial exploration task search space systematically and modify their search patterns under a cost of time. In addition, we find that path length distributions in a large dataset obtained from free-ranging marine vertebrates are well described by our hyperbolic model. Thus, we provide a general theoretical framework for understanding spatial exploration patterns of cognitively complex foragers.
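A short sketch illustrates why hyperbolic path-length distributions are easily mistaken for power laws; the Lomax (Pareto type II) form and its parameters are assumptions for illustration, not the fitted model of the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hyperbolic (Lomax / Pareto type II) path lengths, p(x) ~ (1 + x/x0)**-(k+1),
# sampled by inverse transform.  For x >> x0 the tail looks like a pure power law.
x0, k, n = 5.0, 1.5, 100_000
paths = x0 * (rng.random(n) ** (-1.0 / k) - 1.0)

# Fit a straight line to the log-log histogram of the tail, as one would when
# claiming a power law path-length distribution.
edges = np.logspace(0.0, 3.0, 30)
hist, _ = np.histogram(paths, bins=edges, density=True)
centers = np.sqrt(edges[:-1] * edges[1:])
tail = (centers > 20.0) & (hist > 0.0)
slope, _ = np.polyfit(np.log(centers[tail]), np.log(hist[tail]), 1)
print(f"apparent power-law exponent of the tail: {slope:.2f} (hyperbolic shape has k + 1 = {k + 1})")
```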

Relevance:

60.00%

Publisher:

Abstract:

The power-law size distributions obtained experimentally for neuronal avalanches are important evidence of criticality in the brain. This evidence is supported by the fact that a critical branching process exhibits the same exponent, τ ≈ 3/2. Models at criticality have been employed to mimic avalanche propagation and explain the statistics observed experimentally. However, a crucial aspect of neuronal recordings has been almost completely neglected in the models: undersampling. While in a typical multielectrode array hundreds of neurons are recorded, tens of thousands of neurons can be found in the same area of neuronal tissue. Here we investigate the consequences of undersampling in models with three different topologies (two-dimensional, small-world and random network) and three different dynamical regimes (subcritical, critical and supercritical). We found that undersampling modifies avalanche size distributions, extinguishing the power laws observed in critical systems. Distributions from subcritical systems are also modified, but the shape of the undersampled distributions is more similar to that of a fully sampled system. Undersampled supercritical systems can recover the general characteristics of the fully sampled version, provided that enough neurons are measured. Undersampling in two-dimensional and small-world networks leads to similar effects, while the random network is insensitive to sampling density due to the lack of a well-defined neighborhood. We conjecture that neuronal avalanches recorded from local field potentials avoid undersampling effects due to the nature of this signal, but the same does not hold for spike avalanches. We conclude that undersampled branching-process-like models in these topologies fail to reproduce the statistics of spike avalanches.
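A minimal branching-process sketch of the undersampling effect is given below; the Poisson offspring rule and the sampling fraction are illustrative assumptions and do not reproduce the specific network topologies studied in the paper:

```python
import numpy as np

rng = np.random.default_rng(5)

def avalanche_sizes(n_avalanches=20000, sigma=1.0, f_sampled=0.05, max_size=10000):
    """Branching-process avalanches: each active unit activates Poisson(sigma) units
    in the next step.  Each activation is recorded only with probability f_sampled,
    mimicking an electrode array that sees a small fraction of the tissue."""
    full, sampled = [], []
    for _ in range(n_avalanches):
        active, size, seen = 1, 0, 0
        while active and size < max_size:
            size += active
            seen += rng.binomial(active, f_sampled)
            active = rng.poisson(sigma * active)
        full.append(size)
        sampled.append(seen)
    return np.array(full), np.array(sampled)

full, sampled = avalanche_sizes()
print("largest avalanche, fully sampled:", full.max())
print("largest avalanche, undersampled:", sampled.max())
print("fraction of avalanches invisible to the array:", np.mean(sampled == 0))
```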

Relevance:

60.00%

Publisher:

Abstract:

The present study provides a methodology that gives a predictive character to computer simulations based on detailed models of the geometry of a porous medium. We use the software FLUENT to investigate the flow of a viscous Newtonian fluid through a random fractal medium, a simplified two-dimensional disordered porous medium representing a petroleum reservoir. This fractal model is formed by obstacles of various sizes whose size distribution follows a power law, the exponent of which is defined as the fractal dimension of fractionation, Dff, of the model and characterizes the fragmentation process of these obstacles. The obstacles are randomly placed in a rectangular channel. The modeling process incorporates modern concepts, such as scaling laws, to analyze the influence of the heterogeneity found in the porosity and permeability fields and thereby characterize the medium in terms of its fractal properties. This procedure allows us to analyze numerically the permeability k and the drag coefficient Cd and to propose relationships, such as power laws, for these properties under various modeling schemes. The purpose of this research is to study the variability introduced by these heterogeneities, where the velocity field and other details of the viscous fluid dynamics are obtained by numerically solving the continuity and Navier-Stokes equations at the pore level, and to observe how the fractal dimension of fractionation of the model affects its hydrodynamic properties. Two classes of models were considered: models with constant porosity, MPC, and models with varying porosity, MPV. The results allowed us to find numerical relationships between the permeability, the drag coefficient and the fractal dimension of fractionation of the medium. Based on these numerical results we propose scaling relations and algebraic expressions involving the relevant parameters of the phenomenon. Analytical equations were determined for Dff as a function of the geometrical parameters of the models. We also found that the permeability and the drag coefficient are inversely proportional to one another. The difference in behavior is most striking in the MPV class of models; that is, the fact that the porosity varies in these models is an additional factor that plays a significant role in the flow analysis. Finally, the results proved satisfactory and consistent, which demonstrates the effectiveness of the methodology for all the applications analyzed in this study.
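As an illustration of how the fractal dimension of fractionation relates to the obstacle size distribution (the radii below are synthetic placeholders, not the geometries used in the FLUENT simulations), Dff can be recovered from a log-log fit of the cumulative size distribution:

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical obstacle radii drawn from a Pareto (power-law) size distribution,
# standing in for the fragmented obstacles of the fractal pore-geometry model.
D_ff_true, r_min, n = 1.6, 0.5, 5000
radii = r_min * rng.random(n) ** (-1.0 / D_ff_true)

# The fractal dimension of fractionation follows from the cumulative size
# distribution N(>r) ~ r**(-D_ff), estimated here by a log-log fit.
r_sorted = np.sort(radii)
N_greater = np.arange(n, 0, -1)
slope, _ = np.polyfit(np.log(r_sorted), np.log(N_greater), 1)
print(f"estimated D_ff ~ {-slope:.2f} (true value {D_ff_true})")
```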