912 results for Power Law Distribution
Abstract:
Deposition of insoluble prion protein (PrP) in the brain in the form of protein aggregates or deposits is characteristic of the 'transmissible spongiform encephalopathies' (TSEs). Understanding the growth and development of PrP aggregates is important both in attempting to elucidate the pathogenesis of prion disease and in the development of treatments designed to inhibit the spread of prion pathology within the brain. Aggregation and disaggregation of proteins and the diffusion of substances into the developing aggregates (surface diffusion) are important factors in the development of protein deposits. Mathematical models suggest that if either aggregation/disaggregation or surface diffusion is the predominant factor, then the size frequency distribution of the resulting protein aggregates will be described by a power-law or a log-normal model, respectively. This study tested this hypothesis for two different populations of PrP deposits, viz., the diffuse and florid-type PrP deposits characteristic of patients with variant Creutzfeldt-Jakob disease (vCJD). The size distributions of the florid and diffuse deposits were fitted by a power-law function in 100% and 42% of brain areas studied, respectively. By contrast, the size distributions of both types of aggregate deviated significantly from a log-normal model in all areas. Hence, protein aggregation and disaggregation may be the predominant factor in the development of the florid deposits. A more complex combination of factors appears to be involved in the pathogenesis of the diffuse deposits. These results may be useful in the design of treatments to inhibit the development of PrP aggregates in vCJD.
Abstract:
This thesis focused on applying theoretical models of synchronization to cortical dynamics as measured by magnetoencephalography (MEG). Dynamical systems theory was used both in identifying relevant variables for brain coordination and in devising methods for their quantification. We presented a method for studying interactions of linear and chaotic neuronal sources using MEG beamforming techniques. We showed that such sources can be accurately reconstructed in terms of their location, temporal dynamics and possible interactions. Synchronization in low-dimensional nonlinear systems was studied to explore specific correlates of functional integration and segregation. In the case of interacting dissimilar systems, relevant coordination phenomena involved generalized and phase synchronization, which were often intermittent. Spatially extended systems were then studied. For locally coupled dissimilar systems, as in the case of cortical columns, clustering behaviour occurred. Synchronized clusters emerged at different frequencies and their boundaries were marked by oscillation death. The macroscopic mean field revealed sharp spectral peaks at the frequencies of the clusters and broader spectral drops at their boundaries. These results question existing models of Event Related Synchronization and Desynchronization. We re-examined the concept of the steady-state evoked response following an AM stimulus. We showed that very little variability in the AM following response could be accounted for by system noise. We presented a methodology for detecting local and global nonlinear interactions from MEG data in order to account for residual variability. We found cross-hemispheric nonlinear interactions of ongoing cortical rhythms concurrent with the stimulus, and interactions of these rhythms with the following AM responses.
Finally, we hypothesized that holistic spatial stimuli would be accompanied by the emergence of clusters in primary visual cortex, resulting in frequency-specific MEG oscillations. Indeed, we found different frequency distributions in induced gamma oscillations for different spatial stimuli, which was suggestive of temporal coding of these spatial stimuli. Further, we addressed the bursting character of these oscillations, which was suggestive of intermittent nonlinear dynamics. However, we did not observe the characteristic −3/2 power-law scaling in the distribution of interburst intervals. Further, this distribution was only seldom significantly different from the one obtained in surrogate data, where nonlinear structure was destroyed. In conclusion, the work presented in this thesis suggests that advances in dynamical systems theory, in conjunction with developments in magnetoencephalography, may facilitate a mapping between levels of description in the brain. This may potentially represent a major advancement in neuroscience.
Abstract:
We present a stochastic agent-based model for the distribution of personal incomes in a developing economy. We start with the assumption that incomes are determined both by individual labour and by stochastic effects of trading and investment. The income from personal effort alone is distributed about a mean, while the income from trade, which may be positive or negative, is proportional to the trader's income. These assumptions lead to a Langevin model with multiplicative noise, from which we derive a Fokker-Planck (FP) equation for the income probability density function (IPDF) and its variation in time. We find that high earners have a power-law income distribution while the low-income groups have a Lévy IPDF. Comparing our analysis with Indian survey data (obtained from the World Bank website: http://go.worldbank.org/SWGZB45DN0) taken over many years, we obtain a near-perfect data collapse onto our model's equilibrium IPDF. Using survey data to relate the IPDF to actual food consumption, we define a poverty index (Sen A. K., Econometrica, 44 (1976) 219; Kakwani N. C., Econometrica, 48 (1980) 437), which is consistent with traditional indices but independent of an arbitrarily chosen "poverty line" and therefore less susceptible to manipulation. Copyright © EPLA, 2010.
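The kind of Langevin dynamics described in this abstract can be illustrated with a minimal agent-based simulation. The drift and noise terms below are illustrative stand-ins (all parameter values are made up), not the authors' calibrated model: a mean-reverting "personal effort" term plus a trade term proportional to current income gives a stationary distribution with a power-law tail for high earners.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_incomes(n_agents=20000, n_steps=2000, dt=0.01,
                     mean_income=1.0, relax=0.5, sigma=0.5):
    """Euler-Maruyama integration of a Langevin equation with
    multiplicative noise: dx = relax*(mean_income - x)*dt + sigma*x*dW.
    The drift models income from personal effort (relaxation toward a
    mean), while the multiplicative noise models gains and losses from
    trade, which are proportional to the trader's current income."""
    x = np.full(n_agents, mean_income)
    for _ in range(n_steps):
        dW = rng.normal(0.0, np.sqrt(dt), n_agents)
        x += relax * (mean_income - x) * dt + sigma * x * dW
        x = np.maximum(x, 1e-6)  # incomes stay positive
    return x

incomes = simulate_incomes()
# The stationary solution of the corresponding Fokker-Planck equation
# has a Pareto-like power-law tail, so the sample maximum sits far
# above the sample mean.
```

For this linear SDE the stationary tail exponent is set by relax and sigma, so the heaviness of the high-earner tail can be tuned independently of the mean income.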
Abstract:
A range of physical and engineering systems exhibit irregular complex dynamics featuring an alternation of quiet and burst time intervals called intermittency. The type of intermittency most popular in laser science is on-off intermittency [1]. On-off intermittency can be understood as a conversion of the noise in a system close to an instability threshold into effective time-dependent fluctuations which result in the alternation of stable and unstable periods. On-off intermittency has recently been demonstrated in semiconductor, Erbium-doped and Raman lasers [2-5]. The recently demonstrated random distributed feedback (random DFB) fiber laser has irregular dynamics near the generation threshold [6,7]. Here we show intermittency in the cascaded random DFB fiber laser. We study intensity fluctuations in a random DFB fiber laser based on nitrogen-doped fiber. The laser generates first and second Stokes components at 1120 nm and 1180 nm, respectively, under appropriate pumping. We study the intermittency in the radiation of the second Stokes wave. A typical time trace near the generation threshold of the second Stokes wave (Pth) is shown in Fig. 1a. From a number of sufficiently long time traces we calculate the statistical distribution of intervals between major spikes in the time dynamics, Fig. 1b. To eliminate the contribution of high-frequency components of spikes we use a low-pass filter along with a reference value of the output power. The experimental data are well fitted by a power law.
Abstract:
The article analyzes the contribution of stochastic thermal fluctuations to the attachment times of the immature T-cell receptor (TCR):peptide-major-histocompatibility-complex (pMHC) immunological synapse bond. The key question addressed here is the following: how does a synapse bond remain stabilized in the presence of high-frequency thermal noise that potentially equates to a strong detaching force? Focusing on the average time persistence of an immature synapse, we show that the high-frequency nodes accompanying large fluctuations are counterbalanced by low-frequency nodes that evolve over longer time periods, eventually leading to signaling of the immunological synapse bond primarily decided by nodes of the latter type. Our analysis shows that such counterintuitive behavior can be explained by the fact that the survival probability distribution is governed by two distinct phases, corresponding to two separate time exponents, for the two different time regimes. The relatively shorter timescales correspond to the cohesion:adhesion induced immature bond formation whereas the longer times reciprocate the association:dissociation regime leading to TCR:pMHC signaling. From an estimate of the bond survival probability, we show that, at shorter timescales, this probability P_Δ(τ) scales with time τ as a universal function of a rescaled noise amplitude D/Δ², such that P_Δ(τ) ∼ τ^−(Δ/D + 1/2), Δ being the distance from the mean intermembrane (T cell:antigen-presenting cell) separation distance. The crossover from this shorter to a longer time regime leads to a universality in the dynamics, at which point the survival probability shows a different power-law scaling compared to the one at shorter timescales. In biological terms, such a crossover indicates that the TCR:pMHC bond has a survival probability with a slower decay rate than the longer LFA-1:ICAM-1 bond, justifying its stability.
Abstract:
Implementation of a Monte Carlo simulation for the solution of population balance equations (PBEs) requires choosing the initial sample number (N₀), the number of replicates (M), and the number of bins for probability distribution reconstruction (n). It is found that the squared Hellinger distance, H², is a useful measure of the accuracy of Monte Carlo (MC) simulation, and can be related directly to N₀, M, and n. Asymptotic approximations of H² are deduced and tested for both one-dimensional (1-D) and 2-D PBEs with coalescence. The central processing unit (CPU) cost, C, follows a power-law relationship, C = aMN₀^b, with the CPU cost index, b, indicating the weighting of N₀ in the total CPU cost. n must be chosen to balance accuracy and resolution. For fixed n, M × N₀ determines the accuracy of the MC prediction; if b > 1, then the optimal solution strategy uses multiple replicates and a small sample size. Conversely, if 0 < b < 1, one replicate and a large initial sample size are preferred. © 2015 American Institute of Chemical Engineers AIChE J, 61: 2394–2402, 2015
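The H² accuracy measure used above is straightforward to compute between a reconstructed histogram and a reference distribution on the same bins; a minimal sketch (the example histograms are made up):

```python
import numpy as np

def squared_hellinger(p, q):
    """Squared Hellinger distance H^2 between two discrete probability
    distributions p and q defined on the same bins:
    H^2 = 0.5 * sum((sqrt(p_i) - sqrt(q_i))**2).
    Both inputs are normalized first, so raw bin counts also work."""
    p = np.asarray(p, dtype=float)
    q = np.asarray(q, dtype=float)
    p = p / p.sum()
    q = q / q.sum()
    return 0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2)

# Example: compare an MC-reconstructed histogram against a reference.
reference = np.array([0.5, 0.3, 0.2])
mc_estimate = np.array([0.48, 0.32, 0.20])
h2 = squared_hellinger(reference, mc_estimate)
```

H² is 0 for identical distributions and 1 for distributions with disjoint support, which makes it a convenient bounded accuracy score to relate to N₀, M, and n.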
Abstract:
In studies of complex heterogeneous networks, particularly of the Internet, significant attention has been paid to analysing network failures caused by hardware faults or overload. There, the network reaction was modelled as rerouting of traffic away from failed or congested elements. Here we model the network reaction to congestion on much shorter time scales, when the input traffic rate through congested routes is reduced. As an example we consider the Internet, where a local mismatch between demand and capacity results in traffic losses. We describe the onset of congestion as a phase transition characterised by strong, albeit relatively short-lived, fluctuations of losses caused by noise in input traffic and exacerbated by the heterogeneous nature of the network, manifested in a power-law load distribution. The fluctuations may result in the network strongly overreacting to the first signs of congestion by significantly reducing input traffic along communication paths where congestion is utterly negligible. © 2013 IEEE.
Abstract:
In this work we have investigated some aspects of the two-dimensional flow of a viscous Newtonian fluid through a disordered porous medium, modeled by a random fractal system similar to the Sierpinski carpet. This fractal is formed by obstacles of various sizes, whose size distribution follows a power law, randomly disposed in a rectangular channel. The velocity field and other details of the fluid dynamics are obtained by numerically solving the Navier-Stokes and continuity equations at the pore level, where the flow of fluids in porous media actually occurs. The results of the numerical simulations allowed us to analyze the distribution of shear stresses developed at the solid-fluid interfaces, and to find algebraic relations between the viscous or frictional forces and the geometric parameters of the model, including its fractal dimension. Based on the numerical results, we proposed scaling relations involving the relevant parameters of the phenomenon, allowing us to quantify the fractions of these forces with respect to the size classes of obstacles. Finally, it was also possible to make inferences about the fluctuations in the shape of the distribution of viscous stresses developed on the surfaces of the obstacles.
Abstract:
We present a study of the Galactic Center region as a possible source of both secondary gamma-ray and neutrino fluxes from annihilating dark matter. We have studied the gamma-ray flux observed by the High Energy Stereoscopic System (HESS) from the J1745-290 Galactic Center source. The data are well fitted as annihilating dark matter in combination with an astrophysical background. The analysis was performed by means of simulated gamma spectra produced by Monte Carlo event generator packages. We analyze the differences between the spectra obtained with the various Monte Carlo codes developed so far in particle physics. We show that, within some uncertainty, the HESS data can be fitted as a signal from a heavy dark matter density distribution peaked at the Galactic Center, combined with a power-law background whose spectral index is compatible with the Fermi Large Area Telescope (Fermi-LAT) data from the same region. If this kind of dark matter distribution generates the gamma-ray flux observed by HESS, we also expect to observe a neutrino flux. We show prospective results for the observation of secondary neutrinos with the Astronomy with a Neutrino Telescope and Abyss environmental RESearch project (ANTARES), the IceCube Neutrino Observatory (IceCube) and the Cubic Kilometre Neutrino Telescope (KM3NeT). Prospects depend solely on the device's resolution angle once its effective area and minimum energy threshold are fixed.
Abstract:
Owing to their important roles in biogeochemical cycles, phytoplankton functional types (PFTs) have been the aim of an increasing number of ocean color algorithms. Yet, none of the existing methods are based on phytoplankton carbon (C) biomass, which is a fundamental biogeochemical and ecological variable and the "unit of accounting" in Earth system models. We present a novel bio-optical algorithm to retrieve size-partitioned phytoplankton carbon from ocean color satellite data. The algorithm is based on existing methods to estimate particle volume from a power-law particle size distribution (PSD). Volume is converted to carbon concentrations using a compilation of allometric relationships. We quantify absolute and fractional biomass in three PFTs based on size - picophytoplankton (0.5-2 µm in diameter), nanophytoplankton (2-20 µm) and microphytoplankton (20-50 µm). The mean spatial distributions of total phytoplankton C biomass and individual PFTs, derived from global SeaWiFS monthly ocean color data, are consistent with current understanding of oceanic ecosystems, i.e., oligotrophic regions are characterized by low biomass and dominance of picoplankton, whereas eutrophic regions have high biomass to which nanoplankton and microplankton contribute relatively larger fractions. Global climatological, spatially integrated phytoplankton carbon biomass standing stock estimates using our PSD-based approach yield ~0.25 Gt of C, consistent with analogous estimates from two other ocean color algorithms and several state-of-the-art Earth system models. Satisfactory in situ closure observed between PSD and POC measurements lends support to the theoretical basis of the PSD-based algorithm.
Uncertainty budget analyses indicate that absolute carbon concentration uncertainties are driven by the PSD parameter N₀, which determines particle number concentration to first order, while uncertainties in the PFTs' fractional contributions to total C biomass are mostly due to the allometric coefficients. The C algorithm presented here, which is not empirically constrained a priori, partitions biomass into size classes and improves on the assumptions of the other approaches. However, the range of phytoplankton C biomass spatial variability globally is larger than estimated by any other model considered here, which suggests that an empirical correction to the N₀ parameter is needed, based on PSD validation statistics. These corrected absolute carbon biomass concentrations validate well against in situ POC observations.
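The volume-partitioning step at the core of a PSD-based algorithm can be sketched analytically: for a differential power-law PSD N(D) = N₀·D^(−ξ) and spherical particles, the volume in a size class is the integral of (π/6)·D³·N(D). The parameter values below are placeholders (not retrieved values), and the allometric volume-to-carbon conversion is omitted:

```python
import numpy as np

def size_class_volume(n0, xi, dmin, dmax):
    """Particle volume in a size class [dmin, dmax] (diameters),
    assuming a differential power-law PSD N(D) = n0 * D**(-xi) and
    spherical particles: V = integral of (pi/6) * D**3 * N(D) dD,
    which integrates to (pi/6) * n0 * (dmax**(4-xi) - dmin**(4-xi)) / (4-xi)."""
    expo = 4.0 - xi
    return (np.pi / 6.0) * n0 * (dmax**expo - dmin**expo) / expo

# Hypothetical PSD parameters (n0, slope xi); the three size classes
# follow the abstract: pico 0.5-2 um, nano 2-20 um, micro 20-50 um.
n0, xi = 1.0e4, 4.2
classes = {"pico": (0.5, 2.0), "nano": (2.0, 20.0), "micro": (20.0, 50.0)}
volumes = {k: size_class_volume(n0, xi, *d) for k, d in classes.items()}
total = sum(volumes.values())
fractions = {k: v / total for k, v in volumes.items()}
```

With a slope steeper than 4, as in the made-up ξ here, the volume per logarithmic size interval decreases with diameter, so the pico fraction dominates, mimicking an oligotrophic regime.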
Abstract:
An RVE-based stochastic numerical model is used to calculate the permeability of randomly generated porous media at different values of the fiber volume fraction for the case of transverse flow in a unidirectional ply. Analysis of the numerical results shows that the permeability is not normally distributed. With the aim of proposing a new understanding of this particular topic, the permeability data are fitted using both a mixture model and a unimodal distribution. Our findings suggest that permeability can be fitted well using a mixture model based on the lognormal and power-law distributions. In the case of a unimodal distribution, it is found, using the maximum-likelihood estimation (MLE) method, that the generalized extreme value (GEV) distribution provides the best fit. Finally, an expression for the permeability as a function of the fiber volume fraction based on the GEV distribution is discussed in light of the previous results.
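The unimodal MLE fitting step can be reproduced with standard tools; a sketch using SciPy's genextreme on synthetic stand-in data (the real permeability samples come from the RVE model, so the lognormal draw below is only a skewed placeholder):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Stand-in for RVE permeability samples: a skewed, positive,
# non-normal sample (placeholder distribution and parameters).
samples = rng.lognormal(mean=-2.0, sigma=0.4, size=500)

# Maximum-likelihood fit of the generalized extreme value distribution.
shape, loc, scale = stats.genextreme.fit(samples)

# Compare candidate unimodal fits with the Kolmogorov-Smirnov
# statistic (smaller indicates a better fit).
ks_gev = stats.kstest(samples, "genextreme", args=(shape, loc, scale)).statistic
ks_norm = stats.kstest(samples, "norm", args=stats.norm.fit(samples)).statistic
```

The three-parameter GEV accommodates the skew that a normal fit cannot, which is the kind of comparison behind the abstract's "best fit" claim.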
Abstract:
Basal melting of floating ice shelves and iceberg calving constitute the two almost equal paths of freshwater flux between the Antarctic ice cap and the Southern Ocean. The largest icebergs (>100 km2) transport most of the ice volume, but their basal melting is small compared to their breaking into smaller icebergs, which thus constitute the major vector of freshwater. The archives of nine altimeters have been processed to create a database of small icebergs (<8 km2) within open water, containing the positions, sizes, and volumes spanning the 1992–2014 period. The intercalibrated monthly ice volumes from the different altimeters have been merged into a homogeneous 23 year climatology. The iceberg size distribution, covering the 0.1–10,000 km2 range and estimated by combining small and large iceberg size measurements, closely follows a power law of slope −1.52 ± 0.32, close to the −3/2 laws observed and modeled for brittle fragmentation. The global volume of ice and its distribution between the ocean basins present a very strong interannual variability only partially explained by the number of large icebergs. Indeed, vast zones of the Southern Ocean free of large icebergs are largely populated by small icebergs drifting over thousands of kilometers. The correlation between the global small and large iceberg volumes shows that small icebergs are mainly generated by the breaking of large ones. Drifting and trapping by sea ice can transport small icebergs over long periods and distances. Small icebergs act as a diffuse ice source along large-iceberg trajectories, while sea ice trapping acts as a buffer delaying melting.
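A −3/2 size-distribution slope like the one reported above can be estimated from a catalogue of iceberg areas with a maximum-likelihood (Hill-type) estimator; a sketch on synthetic data (the Pareto sample below is made up, standing in for the merged altimeter database):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "iceberg areas" (km^2) drawn from a power-law density
# p(A) ~ A**(-alpha) above a minimum area, with alpha = 3/2 as in
# brittle-fragmentation models.
a_min, alpha = 0.1, 1.5
u = 1.0 - rng.random(50_000)          # uniform in (0, 1]
areas = a_min * u ** (-1.0 / (alpha - 1.0))  # inverse-CDF sampling

# Maximum-likelihood (Hill) estimator of the power-law exponent;
# its standard error shrinks as (alpha - 1) / sqrt(n).
alpha_hat = 1.0 + len(areas) / np.sum(np.log(areas / a_min))
```

In practice the cutoff a_min matters: fitting below the sensor's detection limit biases the slope, which is one reason for merging small-iceberg and large-iceberg catalogues before estimating the exponent.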
Abstract:
Context. 1ES 1011+496 (z = 0.212) was discovered in very high-energy (VHE, E > 100 GeV) γ rays with MAGIC in 2007. The absence of simultaneous data at lower energies led to an incomplete characterization of the broadband spectral energy distribution (SED). Aims. We study the source properties and the emission mechanisms, probing whether a simple one-zone synchrotron self-Compton (SSC) scenario is able to explain the observed broadband spectrum. Methods. We analyzed data from 2011 and 2012, ranging from VHE to radio, collected by MAGIC, Fermi-LAT, Swift, KVA, OVRO, and Metsähovi, in addition to optical polarimetry data and radio maps from the Liverpool Telescope and MOJAVE. Results. The VHE spectrum was fit with a simple power law with a photon index of 3.69 ± 0.22 and a flux above 150 GeV of (1.46 ± 0.16) × 10^(−11) ph cm^(−2) s^(−1). The source 1ES 1011+496 was found to be in a generally quiescent state at all observed wavelengths, showing only moderate variability from radio to X-rays. A low degree of polarization of less than 10% was measured in the optical, while some bright features polarized up to 60% were observed in the radio jet. A similar trend in the rotation of the electric vector position angle was found in the optical and radio. The radio maps indicated a superluminal motion of 1.8 ± 0.4 c, the highest statistically significant speed measured so far in a high-frequency-peaked BL Lac. Conclusions. For the first time, the high-energy bump in the broadband SED of 1ES 1011+496 could be fully characterized from 0.1 GeV to 1 TeV, which permitted a more reliable interpretation within the one-zone SSC scenario. The polarimetry data suggest that at least part of the optical emission originates in some of the bright radio features, while the low polarization in the optical might be due to the contribution of parts of the radio jet with different orientations of the magnetic field with respect to the optical emission.
Abstract:
The present study provides a methodology that gives a predictive character to computer simulations based on detailed models of the geometry of a porous medium. We use the software FLUENT to investigate the flow of a viscous Newtonian fluid through a random fractal medium which simplifies a two-dimensional disordered porous medium representing a petroleum reservoir. This fractal model is formed by obstacles of various sizes, whose size distribution function follows a power law whose exponent is defined as the fractal dimension of fragmentation, Dff, of the model, characterizing the fragmentation process of these obstacles. They are randomly disposed in a rectangular channel. The modeling process incorporates modern concepts, such as scaling laws, to analyze the influence of the heterogeneity found in the porosity and permeability fields, so as to characterize the medium in terms of its fractal properties. This procedure allows us to numerically analyze measurements of the permeability k and the drag coefficient Cd, and to propose relationships, such as power laws, for these properties under various modeling schemes. The purpose of this research is to study the variability introduced by these heterogeneities, where the velocity field and other details of the viscous fluid dynamics are obtained by numerically solving the continuity and Navier-Stokes equations at the pore level, and to observe how the fractal dimension of fragmentation of the model affects its hydrodynamic properties. Two classes of models were considered: models with constant porosity (MPC) and models with varying porosity (MPV). The results allowed us to find numerical relationships between the permeability, the drag coefficient and the fractal dimension of fragmentation of the medium. Based on these numerical results, we have proposed scaling relations and algebraic expressions involving the relevant parameters of the phenomenon.
In this study, analytical equations were determined for Dff as a function of the geometrical parameters of the models. We also found that the permeability and the drag coefficient are inversely proportional to one another. The difference in behavior is most striking in the MPV class of models: the fact that the porosity varies in these models is an additional factor that plays a significant role in the flow analysis. Finally, the results proved satisfactory and consistent, demonstrating the effectiveness of the methodology for all the applications analyzed in this study.
Abstract:
(The Mark and Recapture Network: a Heliconius case study). The current pace of habitat destruction, especially in tropical landscapes, has increased the need to understand minimum patch requirements and patch distances as tools for conserving species in forest remnants. Mark-recapture and tagging studies have been instrumental in providing parameters for functional models. Because of their popularity, ease of manipulation and well-known biology, butterflies have become models in studies of spatial structure. Yet most studies of butterfly movement have focused on temperate species that live in open habitats, for which forest patches are barriers to movement. This study aimed to view and review mark-recapture data as a network in two butterfly species (Heliconius erato and Heliconius melpomene). Marking and recapture of the species was carried out in an Atlantic forest reserve located about 20 km from the city of Natal (RN). Mark-recapture studies were conducted in 3 weekly visits during January-February and July-August in 2007 and 2008. Captures were more common in two sections of the dirt road, with minimal collection on the forest trail. The spatial spread of captures was similar in the two species. Yet distances between recaptures seem to be greater for Heliconius erato than for Heliconius melpomene. In addition, the erato network is more disconnected, suggesting that this species travels shorter distances between patches. Turning to the networks, both species have a similar number of vertices (N) and unweighted links (L). However, melpomene has a weighted network with 50% more connections than erato. These network metrics suggest that erato has a more compartmentalized network and more restricted movement than melpomene. Thus, erato has a larger number of disconnected components, nC, in the network, and a smaller network diameter.
The frequency distribution of network connectivity for both species was better explained by a power law than by a random (Poisson) distribution. Moreover, the power-law fit is much better for erato than for melpomene, which may be linked to the small movements that erato makes in the network.
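The power-law versus Poisson comparison can be made explicit by comparing maximized log-likelihoods over a degree sequence; a sketch on a hypothetical degree sequence (the real one comes from the mark-recapture networks; the Zipf distribution stands in for a discrete power law):

```python
import numpy as np
from scipy import stats

def loglik_poisson(degrees):
    """Maximized Poisson log-likelihood (MLE: lambda = sample mean)."""
    lam = degrees.mean()
    return stats.poisson.logpmf(degrees, lam).sum()

def loglik_powerlaw(degrees):
    """Maximized Zipf (discrete power-law) log-likelihood, with the
    exponent found by a crude grid search over plausible values."""
    grid = np.linspace(1.2, 4.0, 281)
    return max(stats.zipf.logpmf(degrees, a).sum() for a in grid)

# Hypothetical heavy-tailed degree sequence: many weakly connected
# nodes plus a few hubs.
degrees = np.array([1] * 30 + [2] * 12 + [3] * 6 + [4] * 3 + [6] * 2 + [9, 15])

better_fit = ("power law"
              if loglik_powerlaw(degrees) > loglik_poisson(degrees)
              else "Poisson")
```

For heavy-tailed connectivity like this, the Poisson model pays a large likelihood penalty for the hubs, so the power law wins; comparable (AIC-style) comparisons underlie statements like the one in this abstract.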