919 results for power-law distributions
Abstract:
We study a generalized Hubbard model on the two-leg ladder at zero temperature, focusing on a parameter region with staggered flux (SF)/d-density wave (DDW) order. To guide our numerical calculations, we first investigate the location of a SF/DDW phase in the phase diagram of the half-filled weakly interacting ladder using a perturbative renormalization group (RG) and bosonization approach. For hole doping δ away from half-filling, finite-system density-matrix renormalization-group (DMRG) calculations are used to study ladders with up to 200 rungs for intermediate-strength interactions. In the doped SF/DDW phase, the staggered rung current and the rung electron density both show periodic spatial oscillations, with characteristic wavelengths 2/δ and 1/δ, respectively, corresponding to ordering wavevectors 2k_F and 4k_F for the currents and densities, where 2k_F = π(1 − δ). The density minima are located at the anti-phase domain walls of the staggered current. For sufficiently large dopings, SF/DDW order is suppressed. The rung density modulation also exists in neighboring phases where currents decay exponentially. We show that most of the DMRG results can be qualitatively understood from weak-coupling RG/bosonization arguments. However, while these arguments seem to suggest a crossover from non-decaying correlations to power-law decay at a length scale of order 1/δ, the DMRG results are consistent with a true long-range-order scenario for the currents and densities. © 2005 Elsevier Inc. All rights reserved.
Abstract:
The recurrence interval statistics for regional seismicity follow a universal distribution function, independent of the tectonic setting or average rate of activity (Corral, 2004). The universal function is a modified gamma distribution with power-law scaling of recurrence intervals shorter than the average rate of activity and exponential decay for larger intervals. We employ the method of Corral (2004) to examine the recurrence statistics of a range of cellular automaton earthquake models. The majority of models have an exponential distribution of recurrence intervals, the same as that of a Poisson process. One model, the Olami-Feder-Christensen automaton, has recurrence statistics consistent with regional seismicity for a certain range of the conservation parameter of that model. For conservation parameters in this range, the event size statistics are also consistent with regional seismicity. Models whose dynamics are dominated by characteristic earthquakes do not appear to display universality of recurrence statistics.
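As a rough illustration of the recurrence-interval statistic examined here (a sketch, not the paper's own code), rescaling intervals by their mean follows Corral's analysis; for a synthetic Poisson catalog, the rescaled intervals should be exponentially distributed, with unit mean and unit variance:

```python
import random

def rescaled_recurrence_intervals(event_times):
    """Recurrence intervals between successive events, rescaled by
    their mean -- the statistic examined in Corral (2004)."""
    times = sorted(event_times)
    intervals = [t2 - t1 for t1, t2 in zip(times, times[1:])]
    mean = sum(intervals) / len(intervals)
    return [dt / mean for dt in intervals]

# Synthetic Poisson catalog (the rate 2.5 is arbitrary): rescaled
# intervals should look exponential -- unit mean, variance near 1.
rng = random.Random(0)
t, events = 0.0, []
for _ in range(20_000):
    t += rng.expovariate(2.5)
    events.append(t)

x = rescaled_recurrence_intervals(events)
mean_x = sum(x) / len(x)
var_x = sum((v - mean_x) ** 2 for v in x) / len(x)
```

For models with the universal (modified gamma) statistics, the same rescaled distribution would instead show power-law behaviour at short intervals.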
Abstract:
Despite the insight gained from 2-D particle models, and given that the dynamics of crustal faults occur in 3-D space, the question remains, how do the 3-D fault gouge dynamics differ from those in 2-D? Traditionally, 2-D modeling has been preferred over 3-D simulations because of the computational cost of solving 3-D problems. However, modern high performance computing architectures, combined with a parallel implementation of the Lattice Solid Model (LSM), provide the opportunity to explore 3-D fault micro-mechanics and to advance understanding of effective constitutive relations of fault gouge layers. In this paper, macroscopic friction values from 2-D and 3-D LSM simulations, performed on an SGI Altix 3700 super-cluster, are compared. Two rectangular elastic blocks of bonded particles, with a rough fault plane and separated by a region of randomly sized non-bonded gouge particles, are sheared in opposite directions by normally-loaded driving plates. The results demonstrate that the gouge particles in the 3-D models undergo significant out-of-plane motion during shear. The 3-D models also exhibit a higher mean macroscopic friction than the 2-D models for varying values of interparticle friction. 2-D LSM gouge models have previously been shown to exhibit accelerating energy release in simulated earthquake cycles, supporting the Critical Point hypothesis. The 3-D models are shown to also display accelerating energy release, and good fits of power law time-to-failure functions to the cumulative energy release are obtained.
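The accelerating energy release mentioned above is commonly fitted with a power-law time-to-failure function; the sketch below uses the generic form ε(t) = A + B(tf − t)^m, with all parameter values illustrative rather than fitted to the LSM simulations:

```python
def cumulative_energy(t, A, B, tf, m):
    """Generic power-law time-to-failure form for cumulative energy
    release: eps(t) = A + B * (tf - t)**m.  With B < 0 and 0 < m < 1,
    the release accelerates as t approaches the failure time tf."""
    return A + B * (tf - t) ** m

# Illustrative parameters only -- not values fitted in the paper.
vals = [cumulative_energy(t, 10.0, -2.0, 100.0, 0.3) for t in (0, 50, 90, 99)]
```

A fit of this form to the simulated cumulative energy release is what supports the Critical Point interpretation of the earthquake cycle.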
Abstract:
Much research is currently centred on the detection of damage in structures using vibrational data. The work presented here examined several areas of interest in support of a practical technique for identifying and locating damage within bridge structures using apparent changes in their vibrational response to known excitation. The proposed goals of such a technique included the need for the measurement system to be operated on site by a minimum number of staff and that the procedure should be as non-invasive to the bridge traffic-flow as possible. Initially the research investigated changes in the vibrational bending characteristics of two series of large-scale model bridge-beams in the laboratory, which included ordinary-reinforced and post-tensioned, prestressed designs. Each beam was progressively damaged at predetermined positions and its vibrational response to impact excitation was analysed. For the load-regime utilised, the results suggested that the induced damage manifested itself as a function of the span of a beam rather than a localised area. A power law relating apparent damage to the applied loading and prestress levels was then proposed, together with a qualitative vibrational measure of structural damage. In parallel with the laboratory experiments, a series of tests were undertaken at the sites of a number of highway bridges. The bridges selected had differing types of construction and geometric design, including composite-concrete, concrete slab-and-beam, and concrete-slab with supporting steel-troughing constructions, together with regular-rectangular, skewed and heavily-skewed geometries. Initial investigations were made of the feasibility and reliability of various methods of structure excitation, including traffic and impulse methods. It was found that localised impact using a sledge-hammer was ideal for the purposes of this work and that a cartridge `bolt-gun' could be used in some specific cases.
Abstract:
An experimental investigation into the Acoustic Emission (AE) response of sand has been undertaken, and the use of AE as a method of yield point identification has been assessed. Dense, saturated samples of sand were tested in conventional triaxial apparatus. The measurements of stresses and strains were carried out according to current research practice. The AE monitoring system was integrated with the soil mechanics equipment in such a way that sample disturbance was minimised. During monotonically loaded, constant cell pressure tests the total number of events recorded was found to increase at an increasing rate in a manner which may be approximated by a power law. The AE response of the sand was found to be both stress level and stress path dependent. Undrained constant cell pressure tests showed that, unlike drained tests, the AE event rate increased at an increasing rate; this was shown to correlate with the mean effective stress variation. The stress path dependence was most noticeable in extension tests, where the number of events recorded was an order of magnitude less than that recorded in comparable compression tests. This stress path dependence was shown to be due to the differences in the work done by the external stresses. In constant cell pressure tests containing unload/reload cycles it was found that yield could be identified from a discontinuity in the event rate/time curve which occurred during reloading. Further tests involving complex stress paths showed that AE was a useful method of yield point identification. Some tests involving large stress reversals were carried out, and AE identified the inverse yield points more distinctly than conventional methods of yield point identification.
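The observation that the total event count rises at an increasing rate, approximated by a power law, can be sketched as follows (the constants are illustrative assumptions, not fitted values from the tests):

```python
def total_events(t, k=2.0, m=3.0):
    """Total AE event count approximated by a power law N(t) = k * t**m;
    any exponent m > 1 makes the count rise at an increasing rate, as
    observed in the monotonic constant-cell-pressure tests.  k and m
    here are arbitrary illustrative constants."""
    return k * t ** m

# The event rate (increments of the count) itself increases with time.
rates = [total_events(t + 1) - total_events(t) for t in range(5)]
```

A discontinuity in such an event-rate/time curve during reloading is what identifies yield in the cyclic tests.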
Abstract:
Fatigue crack initiation and propagation in aluminium butt welds have been investigated. It is shown that the initiation of cracks, from both buried defects and from the weld reinforcement, may be quantified by predictive laws based either on linear elastic fracture mechanics or on Neuber's rule of stress and strain concentrations. The former is preferable on the grounds of theoretical models of crack tip plasticity, although either may be used as the basis of an effective design criterion against crack initiation. Fatigue lives following initiation were found to follow predictions based on the integration of a Paris-type power law. The effect of residual stresses from the welding operation on both initiation and propagation was accounted for by a Forman-type equation, which incorporated the notional stress ratio produced by the residual stresses after various heat treatments. A fracture mechanics analysis was found to be useful in describing the fatigue behaviour of the weldments at increased temperatures up to 300°C. It is pointed out, however, that the complex interaction of residual stresses, frequency, and changes in fracture mode necessitates great caution in the application of any general design criteria against crack initiation and growth at elevated temperatures.
Abstract:
We present a stochastic agent-based model for the distribution of personal incomes in a developing economy. We start with the assumption that incomes are determined both by individual labour and by stochastic effects of trading and investment. The income from personal effort alone is distributed about a mean, while the income from trade, which may be positive or negative, is proportional to the trader's income. These assumptions lead to a Langevin model with multiplicative noise, from which we derive a Fokker-Planck (FP) equation for the income probability density function (IPDF) and its variation in time. We find that high earners have a power-law income distribution while the low-income groups have a Lévy IPDF. Comparing our analysis with Indian survey data (obtained from the World Bank website: http://go.worldbank.org/SWGZB45DN0) taken over many years, we obtain a near-perfect data collapse onto our model's equilibrium IPDF. Using survey data to relate the IPDF to actual food consumption, we define a poverty index (Sen A. K., Econometrica, 44 (1976) 219; Kakwani N. C., Econometrica, 48 (1980) 437) which is consistent with traditional indices but independent of an arbitrarily chosen "poverty line" and therefore less susceptible to manipulation. Copyright © EPLA, 2010.
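A Langevin model with multiplicative noise of the general kind described can be simulated with a simple Euler-Maruyama scheme. The sketch below is a minimal illustration of that class of model; every parameter name and value is an assumption for illustration, not taken from the paper:

```python
import math
import random

def simulate_incomes(n=1000, steps=2000, dt=0.01, j=0.5, sigma=0.4, seed=1):
    """Euler-Maruyama sketch of a Langevin model with multiplicative
    noise: each income relaxes toward the population mean (personal
    effort) and receives random shocks proportional to its size
    (trading/investment).  Parameters j (relaxation) and sigma (noise
    strength) are illustrative assumptions."""
    rng = random.Random(seed)
    m = [1.0] * n
    for _ in range(steps):
        mean = sum(m) / n
        m = [x + j * (mean - x) * dt + sigma * x * math.sqrt(dt) * rng.gauss(0.0, 1.0)
             for x in m]
        m = [max(x, 1e-9) for x in m]  # incomes kept non-negative
    return m

incomes = simulate_incomes()
```

Multiplicative noise of this kind generically produces a right-skewed stationary distribution with a heavy upper tail, consistent with the power-law behaviour for high earners.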
Abstract:
Particle breakage due to fluid flow through various geometries can have a major influence on the performance of particle/fluid processes and on the product quality characteristics of particle/fluid products. In this study, whey protein precipitate dispersions were used as a case study to investigate the effect of flow intensity and exposure time on the breakage of these precipitate particles. Computational fluid dynamic (CFD) simulations were performed to evaluate the turbulent eddy dissipation rate (TED) and associated exposure time along various flow geometries. The focus of this work is on the predictive modelling of particle breakage in particle/fluid systems. A number of breakage models were developed to relate TED and exposure time to particle breakage. The suitability of these breakage models was evaluated for their ability to predict the experimentally determined breakage of the whey protein precipitate particles. A "power-law threshold" breakage model was found to provide a satisfactory capability for predicting the breakage of the whey protein precipitate particles. The whey protein precipitate dispersions were propelled through a number of different geometries such as bends, tees and elbows, and the model accurately predicted the mean particle size attained after flow through these geometries. © 2005 Elsevier Ltd. All rights reserved.
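A "power-law threshold" breakage model of the kind named above can be sketched as follows; the functional form and every parameter value here are assumptions for illustration, not the paper's fitted model:

```python
def breakage_fraction(ted, t_exp, eps_c=1.0e4, k=1.0e-6, n=0.75):
    """Sketch of a 'power-law threshold' breakage model: no breakage
    below a critical turbulent eddy dissipation rate eps_c; above it,
    breakage grows as a power of the excess dissipation, scaled by
    exposure time.  All constants (eps_c, k, n) are hypothetical."""
    if ted <= eps_c:
        return 0.0
    return min(1.0, k * t_exp * (ted - eps_c) ** n)
```

Coupling such a model to CFD-predicted dissipation rates and residence times along a bend or tee is what allows the mean particle size after each geometry to be predicted.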
Abstract:
We suggest a model for data losses in a single node (memory buffer) of a packet-switched network (like the Internet) which reduces to one-dimensional discrete random walks with unusual boundary conditions. By construction, the model has critical behavior with a sharp transition from exponentially small to finite losses with increasing data arrival rate. We show that for a finite-capacity buffer at the critical point the loss rate exhibits strong fluctuations and non-Markovian power-law correlations in time, in spite of the Markovian character of the data arrival process.
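A toy version of such a buffer model, reduced to a one-dimensional random walk with a hard capacity boundary, shows the sharp transition from negligible to finite losses. This is a sketch of the critical behaviour described, not the paper's exact model or boundary conditions:

```python
import random

def loss_fraction(arrival_p, capacity=100, steps=200_000, seed=0):
    """Toy 1-D random-walk model of a finite memory buffer: each step
    a packet arrives with probability arrival_p and one departs with
    probability 0.5 (if the buffer is non-empty); arrivals to a full
    buffer are lost.  Capacity and rates are illustrative."""
    rng = random.Random(seed)
    q = lost = arrived = 0
    for _ in range(steps):
        if rng.random() < arrival_p:
            arrived += 1
            if q < capacity:
                q += 1
            else:
                lost += 1
        if q > 0 and rng.random() < 0.5:
            q -= 1
    return lost / max(arrived, 1)
```

Below the critical arrival rate (0.5 here, set by the departure probability) losses are exponentially small; above it they are finite, mirroring the sharp transition in the abstract.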
Abstract:
This paper resolves the long-standing debate as to the proper time scale τ of the onset of the immunological synapse bond, the noncovalent chemical bond defining the immune pathways involving T cells and antigen presenting cells. Results from our model calculations show τ to be of the order of seconds instead of minutes. Close to the linearly stable regime, we show that in between the two critical spatial thresholds defined by the integrin:ligand pair (Δ2 ∼ 40-45 nm) and the T-cell receptor TCR:peptide-major-histocompatibility-complex (pMHC) bond (Δ1 ∼ 14-15 nm), τ grows monotonically with increasing coreceptor bond length separation δ (= Δ2 − Δ1 ∼ 26-30 nm), while τ decays with Δ1 for fixed Δ2. The nonuniversal δ-dependent power-law structure of the probability density function further explains why only the TCR:pMHC bond is a likely candidate to form a stable synapse.
Abstract:
A range of physical and engineering systems exhibit an irregular complex dynamics featuring alternation of quiet and burst time intervals, called intermittency. The type of intermittency most popular in laser science is on-off intermittency [1]. On-off intermittency can be understood as a conversion of the noise in a system close to an instability threshold into effective time-dependent fluctuations which result in the alternation of stable and unstable periods. On-off intermittency has recently been demonstrated in semiconductor, Erbium-doped and Raman lasers [2-5]. The recently demonstrated random distributed feedback (random DFB) fiber laser has an irregular dynamics near the generation threshold [6,7]. Here we show intermittency in the cascaded random DFB fiber laser. We study intensity fluctuations in a random DFB fiber laser based on nitrogen-doped fiber. The laser generates first and second Stokes components at 1120 nm and 1180 nm, respectively, under appropriate pumping. We study the intermittency in the radiation of the second Stokes wave. The typical time trace near the generation threshold of the second Stokes wave (Pth) is shown in Fig. 1a. From a number of sufficiently long time traces we calculate the statistical distribution of intervals between major spikes in the time dynamics, Fig. 1b. To eliminate the contribution of high-frequency components of spikes, we use a low-pass filter along with a reference value of the output power. The experimental data are fitted by a power law.
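The statistic fitted here, intervals between major spikes in a time trace, can be extracted with a simple threshold-crossing routine; this is an illustrative sketch, not the authors' processing chain (which also includes the low-pass filtering step):

```python
def spike_intervals(trace, threshold):
    """Intervals between major spikes in a time trace: the gaps
    between successive upward crossings of a reference threshold."""
    crossings = [i for i in range(1, len(trace))
                 if trace[i] >= threshold > trace[i - 1]]
    return [b - a for a, b in zip(crossings, crossings[1:])]
```

A histogram of these intervals, on log-log axes, is the distribution to which the power law is fitted.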
Abstract:
The article analyzes the contribution of stochastic thermal fluctuations to the attachment times of the immature T-cell receptor TCR:peptide-major-histocompatibility-complex (pMHC) immunological synapse bond. The key question addressed here is the following: how does a synapse bond remain stabilized in the presence of high-frequency thermal noise that potentially equates to a strong detaching force? Focusing on the average time persistence of an immature synapse, we show that the high-frequency nodes accompanying large fluctuations are counterbalanced by low-frequency nodes that evolve over longer time periods, eventually leading to signaling of the immunological synapse bond primarily decided by nodes of the latter type. Our analysis shows that such counterintuitive behavior can be explained by the fact that the survival probability distribution is governed by two distinct phases, corresponding to two separate time exponents for the two different time regimes. The relatively shorter timescales correspond to the cohesion:adhesion induced immature bond formation, whereas the longer times correspond to the association:dissociation regime leading to TCR:pMHC signaling. From an estimate of the bond survival probability, we show that, at shorter timescales, this probability P_Δ(τ) scales with time τ as a universal function of a rescaled noise amplitude D/Δ², such that P_Δ(τ) ∼ τ^(−(Δ/D + 1/2)), Δ being the distance from the mean intermembrane (T cell:antigen presenting cell) separation distance. The crossover from this shorter to a longer time regime leads to a universality in the dynamics, at which point the survival probability shows a different power-law scaling compared to the one at shorter timescales. In biological terms, such a crossover indicates that the TCR:pMHC bond has a survival probability with a slower decay rate than the longer LFA-1:ICAM-1 bond, justifying its stability.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choice of initial sample number (N0), number of replicates (M), and number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H², is a useful measurement of the accuracy of Monte Carlo (MC) simulation, and can be related directly to N0, M, and n. Asymptotic approximations of H² are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, follows a power-law relationship, C = a·M·N0^b, with the CPU cost index, b, indicating the weighting of N0 in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, M × N0 determines the accuracy of the MC prediction; if b > 1, then the optimal solution strategy uses multiple replications and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394-2402, 2015
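The strategy rule follows directly from the cost law: holding accuracy fixed means holding M × N0 fixed, so C = a·M·N0^b ∝ N0^(b−1), which rises with N0 when b > 1 and falls when b < 1. A minimal numerical check (a and b values are arbitrary illustrations):

```python
def cpu_cost(n0, m, a=1.0, b=1.5):
    """CPU cost in the power-law form C = a * M * N0**b from the
    abstract; a and b here are arbitrary illustrative values."""
    return a * m * n0 ** b

# Fixed accuracy: M * N0 = K held constant while the split varies.
K = 1_000_000
many_small = cpu_cost(100, K // 100, b=1.5)    # many replicates, small N0
one_large = cpu_cost(K, 1, b=1.5)              # one replicate, large N0
```

With b = 1.5 (> 1) the many-replicates split is cheaper; rerunning with b = 0.5 (< 1) reverses the ordering, reproducing the abstract's rule of thumb.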
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analysing network failures caused by hardware faults or overload, where the network's reaction was modelled as rerouting of traffic away from failed or congested elements. Here we model the network's reaction to congestion on much shorter time scales, when the input traffic rate through congested routes is reduced. As an example we consider the Internet, where local mismatch between demand and capacity results in traffic losses. We describe the onset of congestion as a phase transition characterised by strong, albeit relatively short-lived, fluctuations of losses caused by noise in input traffic and exacerbated by the heterogeneous nature of the network, manifested in a power-law load distribution. The fluctuations may result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © 2013 IEEE.
Abstract:
Previous work has shown that human vision performs spatial integration of luminance contrast energy, where signals are squared and summed (with internal noise) over area at detection threshold. We tested that model here in an experiment using arrays of micro-pattern textures that varied in overall stimulus area and sparseness of their target elements, where the contrast of each element was normalised for sensitivity across the visual field. We found a power-law improvement in performance with stimulus area, and a decrease in sensitivity with sparseness. While the contrast integrator model performed well when target elements constituted 50–100% of the target area (replicating previous results), observers outperformed the model when texture elements were sparser than this. This result required the inclusion of further templates in our model, selective for grids of various regular texture densities. By assuming a MAX operation across these noisy mechanisms the model also accounted for the increase in the slope of the psychometric function that occurred as texture density decreased. Thus, for the first time, mechanisms that are selective for texture density have been revealed at contrast detection threshold. We suggest that these mechanisms have a role to play in the perception of visual textures.
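The first stage of the contrast-integrator model, squaring and summing contrast over area, already implies a power-law improvement with stimulus area. The sketch below illustrates that stage only; the internal noise, sensitivity normalisation across the visual field, and the MAX over density-selective templates are omitted, and the exponent shown is illustrative:

```python
import math

def contrast_energy(contrasts):
    """First stage of the contrast-integrator model: luminance
    contrast signals are squared and summed over area."""
    return sum(c * c for c in contrasts)

# If detection requires a fixed criterion energy, the contrast needed
# falls as area**-0.5 when area grows at fixed density -- a power-law
# improvement of the kind measured in the experiment.
criterion = 1.0
threshold = lambda area: math.sqrt(criterion / area)
```

Outperformance of this single-integrator account for sparse textures is what motivates the additional density-selective templates combined by a MAX operation.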