919 results for power-law distributions
Abstract:
The antibunching and blinking of a single CdSe/ZnS nanocrystal with an emission wavelength of 655 nm were investigated under different excitation powers. The decay of the photoluminescence from the nanocrystal was fitted to a stretched exponential, and the short lifetime and small stretching exponent under high excitation power were explained using a nonradiative multi-channel model. The probability distribution of off-times from the photoluminescence intermittency was fitted to a power law, and the power exponents were explained using a tunneling model. At higher excitation power, the Auger-assisted tunneling model takes effect: the tunneling rate increases and the observed lifetime decreases. At weak excitation power, the electron tunnels directly between the nanocrystal and the trapping state without Auger assistance. The correlation between antibunching and blinking from the same nanocrystal was also analyzed.
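The stretched-exponential fit described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the trace is synthetic and the parameter values (tau = 10 ns, beta = 0.7) are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a stretched-exponential decay I(t) = I0 * exp(-(t/tau)**beta)
# to a synthetic, noiseless photoluminescence trace.
def stretched_exp(t, i0, tau, beta):
    return i0 * np.exp(-(t / tau) ** beta)

t = np.linspace(0.1, 50.0, 200)            # time axis in ns (assumed)
signal = stretched_exp(t, 1.0, 10.0, 0.7)  # assumed "true" parameters

popt, _ = curve_fit(stretched_exp, t, signal, p0=(1.0, 5.0, 1.0))
i0_fit, tau_fit, beta_fit = popt
print(tau_fit, beta_fit)
```

On real, noisy data the recovered stretching exponent would carry uncertainty; here the fit simply recovers the generating parameters.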
Abstract:
The perturbation expansion method is used to find the effective thermal conductivity of graded nonlinear composites having thermal contact resistance on the inclusion surface. As an example, we have studied graded composites with cylindrical inclusions immersed in a homogeneous matrix. The thermal conductivity of the cylindrical inclusion is assumed to have a power-law profile in the radial distance r measured from its origin. For weakly nonlinear constitutive relations between the heat flow density q and the temperature field T, namely q = -μ∇T - χ|∇T|²∇T, in both the inclusion and the matrix regions, we have derived the temperature distributions using the perturbation expansion method. A nonlinear effective medium approximation for graded composites is proposed to estimate the effective linear and nonlinear thermal conductivities by considering the temperature singularity on the inclusion surface due to the thermal contact resistance.
Abstract:
Recent work has shown the prevalence of small-world phenomena [28] in many networks. Small-world graphs exhibit a high degree of clustering, yet have typically short path lengths between arbitrary vertices. Internet AS-level graphs have been shown to exhibit small-world behaviors [9]. In this paper, we show that both Internet AS-level and router-level graphs exhibit small-world behavior. We attribute such behavior to two possible causes: the high variability of vertex degree distributions (which were found to follow approximately a power law [15]), and the preference of vertices for local connections. We show that both factors contribute, with different relative weights, to the small-world behavior of AS-level and router-level topologies. Our findings underscore the inefficacy of the Barabasi-Albert model [6] in explaining the growth process of the Internet, and provide a basis for more promising approaches to the development of Internet topology generators. We present such a generator and show the resemblance of the synthetic graphs it generates to real Internet AS-level and router-level graphs. Using these graphs, we have examined how small-world behaviors affect the scalability of end-system multicast. Our findings indicate that lower variability of vertex degree and stronger preference for local connectivity in small-world graphs result in slower network neighborhood expansion and in longer average path lengths between two arbitrary vertices, which in turn results in better scaling of end-system multicast.
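The two quantities that define small-world behavior, clustering and characteristic path length, can be computed with a few lines of plain Python. The ring-lattice example below is a standard toy graph, not one of the paper's Internet topologies.

```python
from collections import deque

# Clustering coefficient: average fraction of a vertex's neighbour
# pairs that are themselves connected.
def clustering(adj):
    total, n = 0.0, 0
    for v, nbrs in adj.items():
        k = len(nbrs)
        if k < 2:
            continue
        links = sum(1 for a in nbrs for b in nbrs if a < b and b in adj[a])
        total += 2.0 * links / (k * (k - 1))
        n += 1
    return total / n if n else 0.0

# Mean shortest-path length over all reachable ordered pairs, via BFS.
def mean_path_length(adj):
    total, pairs = 0, 0
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:
            u = q.popleft()
            for w in adj[u]:
                if w not in dist:
                    dist[w] = dist[u] + 1
                    q.append(w)
        total += sum(dist.values())
        pairs += len(dist) - 1
    return total / pairs

# Ring lattice of 12 vertices, each linked to 2 neighbours per side.
n, k = 12, 2
adj = {v: set() for v in range(n)}
for v in range(n):
    for d in range(1, k + 1):
        adj[v].add((v + d) % n)
        adj[(v + d) % n].add(v)

print(clustering(adj), mean_path_length(adj))
```

A small-world graph keeps clustering near the lattice value while random shortcuts shrink the mean path length.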
Abstract:
Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute studies have led some authors to conclude that the router graph of the Internet is a scale-free graph, or more generally a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. In this paper we argue that the evidence to date for this conclusion is at best insufficient. We show that graphs appearing to have power-law degree distributions can arise surprisingly easily when sampling graphs whose true degree distribution is not at all like a power law. For example, given a classical Erdős–Rényi sparse random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can easily appear to show a degree distribution remarkably like a power law. We explore the reasons for how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to distinguish measurements taken from Erdős–Rényi graphs from those taken from power-law random graphs. When we apply this distinction to a number of well-known datasets, we find that the evidence for sampling bias in these datasets is strong.
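The sampling effect the abstract describes can be reproduced in miniature: build a sparse Erdős–Rényi graph, take shortest-path (BFS) trees from a few sources, and inspect the degrees of the sampled subgraph. Graph size, mean degree, and the number of sources below are assumptions chosen for a quick demonstration.

```python
import random
from collections import deque, Counter

random.seed(1)
n, mean_deg = 1000, 6.0
p = mean_deg / (n - 1)
# Erdos-Renyi G(n, p): include each pair independently with probability p.
edges = {(u, v) for u in range(n) for v in range(u + 1, n)
         if random.random() < p}
adj = {v: [] for v in range(n)}
for u, v in edges:
    adj[u].append(v)
    adj[v].append(u)

# Edges of one BFS (shortest-path) tree rooted at src.
def bfs_tree_edges(src):
    parent, q = {src: None}, deque([src])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in parent:
                parent[w] = u
                q.append(w)
    return {(min(u, v), max(u, v)) for v, u in parent.items() if u is not None}

sampled = set()
for src in random.sample(range(n), 3):
    sampled |= bfs_tree_edges(src)

deg = Counter()
for u, v in sampled:
    deg[u] += 1
    deg[v] += 1
print("sampled edges:", len(sampled), "max sampled degree:", max(deg.values()))
```

Plotting the sampled degree counts on log-log axes typically shows a far heavier tail than the Poisson degrees of the underlying graph, which is the bias the paper analyzes.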
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior is considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources drops significantly after 2 from the perspective of interface, node, link and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
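The diminishing marginal utility of adding measurement sources has a simple coupon-collector flavour, sketched below with each source modelled as an independent random sample of the link set. The link count and per-source coverage fraction are made-up parameters, not the paper's measurements.

```python
import random

# Toy model: each new measurement source discovers a random 30% of all
# links; the marginal gain of source k is the count of newly seen links.
random.seed(7)
links = set(range(10000))

def probe(fraction=0.3):
    return {l for l in links if random.random() < fraction}

discovered, gains = set(), []
for k in range(1, 9):
    before = len(discovered)
    discovered |= probe()
    gains.append(len(discovered) - before)
print(gains)
```

The gains shrink geometrically (roughly by the undiscovered fraction each round), mirroring the paper's observation that utility drops sharply after the first few sources.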
Abstract:
Slowly-compressed single crystals, bulk metallic glasses (BMGs), rocks, granular materials, and the earth all deform via intermittent slips or "quakes". We find that although these systems span 12 decades in length scale, they all show the same scaling behavior for their slip size distributions and other statistical properties. Remarkably, the size distributions follow the same power law multiplied with the same exponential cutoff. The cutoff grows with applied force for materials spanning length scales from nanometers to kilometers. The tuneability of the cutoff with stress reflects "tuned critical" behavior, rather than self-organized criticality (SOC), which would imply stress-independence. A simple mean field model for avalanches of slipping weak spots explains the agreement across scales. It predicts the observed slip-size distributions and the observed stress-dependent cutoff function. The results enable extrapolations from one scale to another, and from one force to another, across different materials and structures, from nanocrystals to earthquakes.
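The distribution form the abstract names, a power law multiplied by an exponential cutoff, can be written down directly; the snippet below shows how growing the cutoff with applied force shifts the mean slip size. The exponent and cutoff values are illustrative, not the paper's fitted constants.

```python
import numpy as np

# Slip-size distribution D(S) ~ S**(-tau) * exp(-S / s_max), normalized
# over a discrete grid of sizes; s_max plays the role of the
# stress-dependent cutoff.
def slip_pdf(s, tau=1.5, s_max=100.0):
    w = s ** (-tau) * np.exp(-s / s_max)
    return w / w.sum()

s = np.arange(1.0, 10000.0)
mean_small = (s * slip_pdf(s, s_max=50.0)).sum()    # low applied force
mean_large = (s * slip_pdf(s, s_max=500.0)).sum()   # high applied force
print(mean_small, mean_large)
```

A stress-independent cutoff would leave the mean unchanged; its growth with force is what the abstract calls "tuned critical" behaviour.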
Abstract:
The validity of load estimates from intermittent, instantaneous grab sampling is dependent on adequate spatial coverage by monitoring networks and a sampling frequency that reflects the variability in the system under study. Catchments with a flashy hydrology due to surface runoff pose a particular challenge as intense short duration rainfall events may account for a significant portion of the total diffuse transfer of pollution from soil to water in any hydrological year. This can also be exacerbated by the presence of strong background pollution signals from point sources during low flows. In this paper, a range of sampling methodologies and load estimation techniques are applied to phosphorus data from such a surface water dominated river system, instrumented at three sub-catchments (ranging from 3 to 5 km² in area) with near-continuous monitoring stations. Systematic and Monte Carlo approaches were applied to simulate grab sampling using multiple strategies and to calculate an estimated load, Le, based on established load estimation methods. Comparison with the actual load, Lt, revealed significant average underestimation, of up to 60%, and high variability for all feasible sampling approaches. Further analysis of the time series provides an insight into these observations, revealing peak frequencies and power-law scaling in the distributions of P concentration, discharge and load associated with surface runoff and background transfers. Results indicate that only near-continuous monitoring that reflects the rapid temporal changes in these river systems is adequate for comparative monitoring and evaluation purposes. While the implications of this analysis may be more tenable to small-scale flashy systems, this represents an appropriate scale in terms of evaluating catchment mitigation strategies such as agri-environmental policies for managing diffuse P transfers in complex landscapes.
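The underestimation mechanism can be illustrated with a synthetic "flashy" record: a constant background load plus a few short, intense runoff events, estimated from fortnightly grab samples started at every possible offset. All parameters (event count, height, duration) are assumptions for the sketch, not the study's data.

```python
import random
from statistics import median

random.seed(3)
n = 24 * 365                                   # hourly record, one year
load = [0.1] * n                               # constant background load
for _ in range(10):                            # ten 6-hour runoff events
    t0 = random.randrange(n - 6)
    for dt in range(6):
        load[t0 + dt] += 100.0
true_load = sum(load)

# Grab-sample estimate: scale the mean of sparse samples to the record.
def grab_estimate(interval, offset):
    samples = load[offset::interval]
    return sum(samples) / len(samples) * n

interval = 24 * 14                             # fortnightly sampling
estimates = [grab_estimate(interval, off) for off in range(interval)]
print(round(median(estimates) / true_load, 2))
```

Most sampling offsets miss every event entirely, so the typical (median) estimate recovers only the background fraction of the true load, while the rare offsets that do hit an event overshoot badly; this is the high variability and underestimation the abstract reports.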
Abstract:
Species-area relationships (SAR) are fundamental in the understanding of biodiversity patterns and of critical importance for predicting species extinction risk worldwide. Despite the enormous attention given to SAR in the form of many individual analyses, little attempt has been made to synthesize these studies. We conducted a quantitative meta-analysis of 794 SAR, comprising a wide span of organisms, habitats and locations. We identified factors reflecting both pattern-based and dynamic approaches to SAR and tested whether these factors leave significant imprints on the slope and strength of SAR. Our analysis revealed that SAR are significantly affected by variables characterizing the sampling scheme, the spatial scale, and the types of organisms or habitats involved. We found that steeper SAR are generated at lower latitudes and by larger organisms. SAR varied significantly between nested and independent sampling schemes and between major ecosystem types, but not generally between the terrestrial and the aquatic realm. Both the fit and the slope of the SAR were scale-dependent. We conclude that factors dynamically regulating species richness at different spatial scales strongly affect the shape of SAR. We highlight important consequences of this systematic variation in SAR for ecological theory, conservation management and extinction risk predictions.
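The slope the meta-analysis compares across studies is the exponent z of the canonical power-law SAR, S = c * A**z, usually estimated by least squares in log-log space. The sketch below uses made-up richness data lying exactly on a power law, so the regression recovers z exactly.

```python
import math

# Synthetic species-area data on the curve S = c * A**z.
c, z = 5.0, 0.25
areas = [1, 10, 100, 1000, 10000]
richness = [c * a ** z for a in areas]

# Ordinary least-squares slope in log10-log10 space.
xs = [math.log10(a) for a in areas]
ys = [math.log10(s) for s in richness]
n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
print(round(slope, 3))
```

In the meta-analysis it is the variation of this fitted z (and the regression fit) with latitude, body size, sampling scheme, and scale that carries the ecological signal.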
Abstract:
In recent decades, a large number of processes have been described in terms of complex networks. Complex network theory has been used successfully to describe, model and characterize natural, artificial and social systems, such as ecosystems, protein interactions, the Internet, the WWW, and even interpersonal relations in society. In this doctoral thesis we present models of interacting agents on complex networks. We begin with a brief historical introduction (Chapter 1), followed by some basic notions about complex networks (Chapter 2) and a review of the works and models most relevant to this thesis (Chapter 3). In Chapter 4 we present the study of an opinion dynamics model in which consensus among the agents of a population is sought, followed by a study of the evolution of interacting agents in a spatially defined branching process (Chapter 5). In Chapter 6 we present a model of flow optimization on networks and a study of the emergence of scale-free networks from an optimization process. Finally, in Chapter 7, we present our conclusions and future perspectives.
Abstract:
Proteins are biochemical entities consisting of one or more blocks typically folded in a 3D pattern. Each block (a polypeptide) is a single linear sequence of amino acids that are biochemically bonded together. The amino acid sequence in a protein is defined by the sequence of a gene or several genes encoded in the DNA-based genetic code. This genetic code typically uses twenty amino acids, but in certain organisms the genetic code can also include two other amino acids. After linking the amino acids during protein synthesis, each amino acid becomes a residue in a protein, which is then chemically modified, ultimately changing and defining the protein function. In this study, the authors analyze the amino acid sequence using alignment-free methods, aiming to identify structural patterns in sets of proteins and in the proteome, without any other previous assumptions. The paper starts by analyzing amino acid sequence data by means of histograms using fixed length amino acid words (tuples). After creating the initial relative frequency histograms, they are transformed and processed in order to generate quantitative results for information extraction and graphical visualization. Selected samples from two reference datasets are used, and results reveal that the proposed method is able to generate relevant outputs in accordance with current scientific knowledge in domains like protein sequence/proteome analysis.
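The first processing step the abstract describes, a relative-frequency histogram of fixed-length amino acid words, is easy to sketch. The sequence below is a made-up toy example, not data from the study, and k = 2 is an arbitrary word length.

```python
from collections import Counter

# Alignment-free k-tuple histogram over a toy amino acid sequence.
AMINO = "ACDEFGHIKLMNPQRSTVWY"          # the 20 standard residues
seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"
assert set(seq) <= set(AMINO)

k = 2
counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
total = sum(counts.values())
hist = {t: c / total for t, c in counts.items()}
print(len(hist), round(max(hist.values()), 3))
```

The study then transforms such histograms for information extraction and visualization; larger k gives sparser histograms over the 20**k possible words, which is why proteome-scale data is needed for big k.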
Abstract:
Catastrophic events, such as wars and terrorist attacks, big tornadoes and hurricanes, huge earthquakes, tsunamis, floods, and landslides, are always accompanied by a large number of casualties. The size distributions of these casualties have separately been shown to follow approximate power-law (PL) distributions. In this paper, we analyze the number of victims of catastrophic phenomena, in particular terrorism, and find double PL behavior. This means that the data set is better approximated by two PLs instead of one. We have plotted the two PL parameters corresponding to all terrorist events that occurred in each year from 1980 to 2010. We observe an interesting pattern in the chart, where the lines connecting each pair of points that define the double PLs are roughly aligned with each other.
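A double PL means the log-log plot of the size distribution has two distinct linear regimes. The sketch below fits separate log-log slopes on the two ranges of a synthetic complementary CDF built to be exactly piecewise power-law; the breakpoint at 100 and the exponents are illustrative, not the paper's estimates.

```python
import math

# Synthetic CCDF with slope -0.8 below size 100 and -1.8 above,
# continuous at the breakpoint.
def ccdf(x):
    return x ** -0.8 if x < 100 else (100 ** 1.0) * x ** -1.8

# Least-squares slope of log(ccdf) against log(size) over a range.
def loglog_slope(xs):
    pts = [(math.log(x), math.log(ccdf(x))) for x in xs]
    n = len(pts)
    mx = sum(p[0] for p in pts) / n
    my = sum(p[1] for p in pts) / n
    return sum((x - mx) * (y - my) for x, y in pts) / \
           sum((x - mx) ** 2 for x, _ in pts)

print(round(loglog_slope(range(2, 100)), 2),
      round(loglog_slope(range(100, 1001)), 2))
```

Fitting a single PL to such data would split the difference between the two regimes and describe neither; the double PL resolves both.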
Abstract:
The Indian Ocean water that ends up in the Atlantic Ocean detaches from the Agulhas Current retroflection predominantly in the form of Agulhas rings and cyclones. Using numerical Lagrangian float trajectories in a high-resolution numerical ocean model, the fate of coherent structures near the Agulhas Current retroflection is investigated. It is shown that within the Agulhas Current, upstream of the retroflection, the spatial distributions of floats ending in the Atlantic Ocean and floats ending in the Indian Ocean are to a large extent similar. This indicates that Agulhas leakage occurs mostly through the detachment of Agulhas rings. After the floats detach from the Agulhas Current, the ambient water quickly loses its relative vorticity. The Agulhas rings thus seem to decay and lose much of their water in the Cape Basin. A cluster analysis reveals that most water in the Agulhas Current is within clusters of 180 km in diameter. Halfway in the Cape Basin there is an increase in the number of larger clusters with low relative vorticity, which carry the bulk of the Agulhas leakage transport through the Cape Basin. This upward cascade with respect to the length scales of the leakage, in combination with a power-law decay of the magnitude of relative vorticity, might be an indication that the decay of Agulhas rings is somewhat comparable to the decay of two-dimensional turbulence.
Abstract:
A finite element numerical study has been carried out on the isothermal flow of power law fluids in lid-driven cavities with axial throughflow. The effects of the tangential flow Reynolds number (Re-U), axial flow Reynolds number (Re-W), cavity aspect ratio and shear thinning property of the fluids on tangential and axial velocity distributions and the frictional pressure drop are studied. Where comparison is possible, very good agreement is found between current numerical results and published asymptotic and numerical results. For shear thinning materials in long thin cavities in the tangential flow dominated flow regime, the numerical results show that the frictional pressure drop lies between two extreme conditions, namely the results for duct flow and analytical results from lubrication theory. For shear thinning materials in a lid-driven cavity, the interaction between the tangential flow and axial flow is very complex because the flow is dependent on the flow Reynolds numbers and the ratio of the average axial velocity and the lid velocity. For both Newtonian and shear thinning fluids, the axial velocity peak is shifted and the frictional pressure drop is increased with increasing tangential flow Reynolds number. The results are highly relevant to industrial devices such as screw extruders and scraped surface heat exchangers.
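The "power law fluid" in this study refers to the Ostwald-de Waele constitutive relation, in which the apparent viscosity is mu = K * gamma_dot**(n - 1); shear thinning corresponds to n < 1. The consistency K and index n below are illustrative values, not the paper's fluids.

```python
# Apparent viscosity of a power-law (Ostwald-de Waele) fluid.
# n < 1: shear thinning; n = 1: Newtonian; n > 1: shear thickening.
def apparent_viscosity(gamma_dot, K=1.0, n=0.5):
    return K * gamma_dot ** (n - 1.0)

rates = [0.1, 1.0, 10.0, 100.0]          # shear rates (1/s), assumed
mus = [apparent_viscosity(g) for g in rates]
print(mus)
```

For n = 0.5 the viscosity falls by a factor of sqrt(10) per decade of shear rate, which is why the near-lid, high-shear regions of the cavity behave so differently from the core flow.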
Abstract:
Over recent years there has been an increasing deployment of renewable energy generation technologies, particularly large-scale wind farms. As wind farm deployment increases, it is vital to gain a good understanding of how the energy produced is affected by climate variations, over a wide range of time-scales, from short (hours to weeks) to long (months to decades) periods. By relating wind speed at specific sites in the UK to a large-scale climate pattern (the North Atlantic Oscillation or "NAO"), the power generated by a modelled wind turbine under three different NAO states is calculated. It was found that the wind conditions under these NAO states may yield a difference in the mean wind power output of up to 10%. A simple model is used to demonstrate that forecasts of future NAO states can potentially be used to improve month-ahead statistical forecasts of monthly-mean wind power generation. The results confirm that the NAO has a significant impact on the hourly-, daily- and monthly-mean power output distributions from the turbine with important implications for (a) the use of meteorological data (e.g. their relationship to large scale climate patterns) in wind farm site assessment and, (b) the utilisation of seasonal-to-decadal climate forecasts to estimate future wind farm power output. This suggests that further research into the links between large-scale climate variability and wind power generation is both necessary and valuable.
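The sensitivity of power output to climate-driven shifts in wind speed follows from the shape of a turbine power curve, roughly cubic between cut-in and rated speed. The curve and the two wind-speed samples below are idealised assumptions, not the paper's modelled turbine or NAO composites.

```python
# Idealised power curve: zero outside [cut_in, cut_out], cubic ramp up
# to rated speed, then flat at rated power. All numbers are assumed.
def power_kw(v, cut_in=3.0, rated_v=12.0, cut_out=25.0, rated_kw=2000.0):
    if v < cut_in or v > cut_out:
        return 0.0
    if v >= rated_v:
        return rated_kw
    return rated_kw * ((v - cut_in) / (rated_v - cut_in)) ** 3

# A modest shift in mean wind speed between two hypothetical NAO states
# produces a much larger shift in mean power.
low_nao = [5.0, 6.0, 7.0, 6.0, 5.0]
high_nao = [7.0, 8.0, 9.0, 8.0, 7.0]
p_low = sum(power_kw(v) for v in low_nao) / len(low_nao)
p_high = sum(power_kw(v) for v in high_nao) / len(high_nao)
print(round(p_high / p_low, 2))
```

The nonlinearity is the key point: because power responds cubically below rated speed, even the ~10% output differences between NAO states reported in the abstract can follow from small wind-speed shifts.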
Abstract:
We analyse in a common framework the properties of the Voronoi tessellations resulting from regular 2D and 3D crystals and those of tessellations generated by Poisson distributions of points, thus connecting symmetry-breaking processes with the approach to uniformly random distributions of seeds. We perturb crystalline structures in 2D and 3D with a spatial Gaussian noise whose adimensional strength is α and analyse the statistical properties of the cells of the resulting Voronoi tessellations using an ensemble approach. In 2D we consider triangular, square and hexagonal regular lattices, resulting in hexagonal, square and triangular tessellations, respectively. In 3D we consider the simple cubic (SC), body-centred cubic (BCC), and face-centred cubic (FCC) crystals, whose corresponding Voronoi cells are the cube, the truncated octahedron, and the rhombic dodecahedron, respectively. In 2D, for all values α>0, hexagons constitute the most common class of cells. Noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous at α=0. On the contrary, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α<0.12. Essentially the same happens in the 3D case, where only the tessellation of the BCC crystal is topologically stable even against noise of small but finite intensity. In both the 2D and 3D cases, already for a moderate amount of Gaussian noise (α>0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α>2, results converge to those of Poisson-Voronoi tessellations. In 2D, while the isoperimetric ratio increases with noise for the perturbed hexagonal tessellation, for the perturbed triangular and square tessellations it is optimised at a specific value of the noise intensity.
The same applies in 3D, where noise degrades the isoperimetric ratio for perturbed FCC and BCC lattices, whereas the opposite holds for perturbed SC lattices. This allows for formulating a weaker form of the Kelvin conjecture. By analysing jointly the statistical properties of the area and of the volume of the cells, we discover that the cell shapes also fluctuate strongly when noise is introduced into the system. In 2D, the geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α>2; in this limit the Desch law for perimeters is shown not to be valid and a square-root dependence on n is established, which agrees with exact asymptotic results. Anomalous scaling relations are observed between the perimeter and the area of the cells in 2D and between the areas and the volumes of the cells in 3D: except for the hexagonal (2D) and FCC (3D) structures, this applies even for infinitesimal noise. In the Poisson-Voronoi limit, the anomalous exponent is about 0.17 in both the 2D and 3D cases. A positive anomaly in the scaling indicates that large cells preferentially feature large isoperimetric quotients. As the number of faces is strongly correlated with the sphericity (cells with more faces are bulkier), in 3D it is shown that the anomalous scaling is heavily reduced when we perform power-law fits separately on cells with a specific number of faces.
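The 2D setup can be sketched with scipy: perturb a triangular lattice of seeds with Gaussian noise and count the sides of cells well inside the domain. The grid size and noise strength below are assumptions; per the abstract, for α below about 0.12 the interior cells should remain hexagonal.

```python
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
alpha = 0.05                               # noise strength, below 0.12

# Triangular lattice of unit spacing: alternate rows offset by 0.5.
pts = np.array([[i + 0.5 * (j % 2), j * np.sqrt(3) / 2]
                for i in range(12) for j in range(12)])
pts = pts + alpha * rng.normal(size=pts.shape)

vor = Voronoi(pts)
# Count sides of cells whose seeds sit at least two rows from the
# boundary, so finite-domain edge effects are excluded.
sides = [len(vor.regions[vor.point_region[i * 12 + j]])
         for i in range(2, 10) for j in range(2, 10)]
frac_hex = sides.count(6) / len(sides)
print(frac_hex)
```

Increasing alpha toward 2 and beyond would make the side-count statistics drift to the Poisson-Voronoi values, reproducing the loss-of-memory behaviour described above.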