920 results for power law
Abstract:
Space currents strongly affect the electromagnetic environment and are a scientific highlight of space research. As providers of momentum and energy to geospace storms, space currents disturb various parts of the geomagnetic field, distort the magnetospheric configuration, and control the coupling between the magnetosphere and the ionosphere. Motivated by the academic and commercial objectives above, we carry out geomagnetic inversion and theoretical studies of the space currents using geomagnetic data from INTERMAGNET. First, we apply the method of Natural Orthogonal Components (NOC) to decompose the solar daily variation, especially the Sq (solar quiet) variation. NOC is a form of eigenmode analysis; its main advantage is that the basis functions (BFs) are not designated in advance but emerge naturally from the original data, so the leading BFs usually correspond to processes that actually occurred and carry more physical meaning than traditional spectral analysis with fixed BFs such as Fourier trigonometric functions. The first two eigenmodes correspond to the quiet and disturbance daily variations, and their amplitudes show both seasonal and day-to-day trends, which will be useful for evaluating geomagnetic activity indices. Because the orthogonality constraints are too strict, we extend them to non-orthogonal constraints in order to give a more suitable and appropriate decomposition of the real processes when most components do not satisfy orthogonality. We introduce a mapping matrix that transforms the real physical space into a new mathematical space; in that space the modified components associated with the physical processes do satisfy orthogonality, so the NOC decomposition can still be applied there, after which all components are transformed back to the original physical space. This completes a non-orthogonal decomposition that is more general for the real world. Secondly, a geomagnetic inversion of the ring current's topology is conducted. Configurational changes of the ring current in the magnetosphere lead to different patterns of the disturbed ground field, so the global configuration of the ring current can be inferred from its geomagnetic perturbations. We take advantage of the worldwide network of geomagnetic observatories to investigate the disturbed geomagnetic field produced by the ring current. We find that the ring current is not always centered on the geomagnetic equator, and that it deviated significantly off the equator during several intense magnetic storms. The deviation, owing to tilting and latitudinal shifting of the ring current with respect to the Earth's dipole, can be estimated from a global geomagnetic survey. These two configurational factors, which give a quantitative description of the ring current configuration, will help improve the Dst calibration and clarify how the ring current's configuration depends on the plasma sheet location relative to the equator when the magnetotail field is warped. Thirdly, an energization and physical acceleration process of the ring current during magnetic storms is proposed. When the southward IMF Bz component increases, the enhanced convection electric field drives plasma injection into the inner magnetosphere. During this transport, dynamic heating occurs: the more deeply inward the injection, the hotter the particles become.
The energy gradient along the injection path is equivalent to a force that resists further earthward injection of the plasma, a diamagnetic effect by which the magnetosphere opposes and repels the externally injected plasma. The acceleration efficiency has a power-law form. We describe the dynamical process quantitatively in an analytical way by introducing a physical parameter, the energization index, which is useful for understanding how the particles are heated. At the end, we give a scheme for obtaining the energization index from storm-time geomagnetic data. During intense magnetic storms, the lognormal trend of the geomagnetic Dst decrease depends on the heating dynamics of the magnetosphere that control the ring current. The descending pattern of the main phase is governed by the magnetospheric configuration, which can be described by the energization index, while the amplitude of Dst correlates with the convection electric field, or the southward component of the solar wind magnetic field. Finally, the Dst index is predicted from upstream solar wind parameters. As is well known, space weather poses many challenges to technical systems, and geomagnetic indices are used to evaluate space weather activity. We review the most popular Dst prediction methods and reproduce the work of existing Dst forecasting models. A concise and convenient Key Points model of the polar region is also introduced for space weather applications. In summary, this work provides new quantitative, physical descriptions of the space currents, with a special focus on the ring current. Our aim throughout is to gain a better understanding of the natural world, particularly the space environment around Earth, through analytical deduction, algorithm design, and physical analysis, leading to quantitative interpretation. Applications of theoretical physics in conjunction with data analysis help us understand the basic physical processes governing the universe.
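The NOC decomposition described above is, at its core, an eigenmode (principal-component) expansion of the observed daily variation. The following is a minimal sketch of that idea, assuming the data are arranged as a days-by-time matrix; the variable names, preprocessing, and synthetic test data are illustrative assumptions, not taken from the thesis.

```python
# Minimal NOC (eigenmode) sketch: the basis functions are not fixed in
# advance but come out of the data matrix itself via an SVD.
import numpy as np

def noc_decompose(X, n_modes=2):
    """Return the leading basis functions and their day-to-day amplitudes."""
    X = X - X.mean(axis=0)                       # remove the mean daily curve
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    basis = Vt[:n_modes]                         # BFs emerge from the data
    amplitudes = U[:, :n_modes] * s[:n_modes]    # mode amplitude per day
    return basis, amplitudes

# Synthetic check: two latent daily waveforms with seasonal amplitude trends.
t = np.linspace(0, 2 * np.pi, 144)               # 10-minute sampling, one day
days = np.arange(365)
a1 = 1.0 + 0.3 * np.cos(2 * np.pi * days / 365)  # seasonal modulation
a2 = 0.5 + 0.2 * np.sin(2 * np.pi * days / 365)
X = np.outer(a1, np.sin(t)) + np.outer(a2, np.sin(2 * t))
bfs, amps = noc_decompose(X)
print(bfs.shape, amps.shape)                     # (2, 144) (365, 2)
```

The non-orthogonal extension would wrap this in the mapping matrix described above: transform the data, apply the same SVD step in the transformed space, and map the components back.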
Abstract:
Basin-scale heterogeneity carries information about past sedimentary cycles and tectonic processes, and has been a major concern of geophysicists because of its importance in resource exploration and development. In this paper, sonic data from 30 wells in the Sulige field are used to invert for the power-law spectral slope and the correlation length, which measure the heterogeneity of the velocity log, using fractal and statistical correlation methods. By interpolating the heterogeneity parameters of the different wells, we obtain contours of the power-law spectral slope and the correlation length that reflect the stratum heterogeneity. Then, using correlation and gradients, we invert for the transverse heterogeneity of the Sulige field. Reservoir-scale heterogeneity influences the distribution of remaining oil and hydrocarbon accumulation. We use the wavelet modulus maxima method on GR (gamma-ray) data to divide the sedimentary cycles, so that the heterogeneity parameters can be calculated in each layer of each log; we then obtain the heterogeneity distribution of each layer of the Sulige field. Finally, we analyze the relation between signal singularity and strata heterogeneity, and obtain two different singularity profiles in different areas.
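As a rough illustration of the first step, the power-law spectral slope of a single log can be estimated by fitting a line to the log-log power spectrum. This is a hedged sketch assuming a regularly sampled 1-D log; the sampling interval, detrending, and fitting band are assumptions, not the paper's exact procedure.

```python
# Estimate the power-law spectral slope of a 1-D well log.
import numpy as np

def spectral_slope(x, dz=0.125):
    """Fit log10 P(k) = a + b * log10 k to the log's power spectrum."""
    n = np.arange(x.size)
    x = x - np.polyval(np.polyfit(n, x, 1), n)   # remove linear trend
    P = np.abs(np.fft.rfft(x)) ** 2
    k = np.fft.rfftfreq(x.size, d=dz)
    m = k > 0                                    # drop the DC component
    b, _ = np.polyfit(np.log10(k[m]), np.log10(P[m]), 1)
    return b

rng = np.random.default_rng(0)
v = np.cumsum(rng.standard_normal(2048))         # random-walk "velocity" log
print(spectral_slope(v))                         # near -2 for a random walk
```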
Abstract:
Li, Xing, 'Transition region, coronal heating and the fast solar wind', Astronomy and Astrophysics (2003), 406, pp. 345-356.
Abstract:
Javier G. P. Gamarra and Ricard V. Sole (2002). Biomass-diversity responses and spatial dependencies in disturbed tallgrass prairies. Journal of Theoretical Biology, 215 (4), pp. 469-480.
Abstract:
High-intensity focused ultrasound (HIFU) is a form of therapeutic ultrasound that uses high-amplitude acoustic waves to heat and ablate tissue. HIFU employs acoustic amplitudes high enough that nonlinear propagation effects are important in the evolution of the sound field. A common model for HIFU beams is the Khokhlov-Zabolotskaya-Kuznetsov (KZK) equation, which accounts for nonlinearity, diffraction, and absorption. The KZK equation models diffraction using the parabolic or paraxial approximation. Many HIFU sources have an aperture diameter comparable to the focal length, in which case the paraxial approximation may not be appropriate. Here, results obtained using the "Texas code," a time-domain numerical solution to the KZK equation, were used to assess when the KZK equation can be employed. In a comparison with the O'Neil solution for the linear case in water, the KZK equation accurately predicts the pressure field in the focal region. The KZK equation was also compared to simulations of the exact fluid dynamics equations (no paraxial approximation). The exact equations were solved using the Fourier-Continuation (FC) method to approximate derivatives in the equations. Results have been obtained for a focused HIFU source in tissue. For a transducer with low focusing gain (focal length 50λ and radius 10λ), the KZK and FC models showed excellent agreement; however, as the source radius was increased to 30λ, discrepancies started to appear. Modeling was extended to the case of tissue with the appropriate power-law absorption, implemented using a relaxation model. The relaxation model resulted in a higher peak pressure and a shift in the location of the peak pressure, highlighting the importance of employing the correct attenuation model. Simulations from the code were also compared to experimental data in water and showed good agreement through the focal plane.
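For context, power-law absorption of the kind mentioned above acts on the propagated spectrum roughly as exp(-alpha0 * f**eta * z). The sketch below applies such a law to a synthetic pulse; alpha0, eta, and the distance are assumed tissue-like values, and none of this reproduces the Texas code or the FC solver.

```python
# Apply a power-law attenuation law to a pulse in the frequency domain.
import numpy as np

fs = 40e6                                    # sampling rate, Hz (assumed)
t = np.arange(4096) / fs
alpha0 = 0.5 / 8.686                         # ~0.5 dB/cm/MHz in Np/cm/MHz^eta
eta, z = 1.1, 5.0                            # tissue-like exponent, depth (cm)

pulse = np.exp(-(((t - t.mean()) / 0.5e-6) ** 2)) * np.sin(2 * np.pi * 1.5e6 * t)
f_mhz = np.fft.rfftfreq(t.size, d=1 / fs) / 1e6
spec = np.fft.rfft(pulse) * np.exp(-alpha0 * f_mhz ** eta * z)
attenuated = np.fft.irfft(spec, n=t.size)
print(f"peak before/after: {pulse.max():.3f} / {attenuated.max():.3f}")
```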
Abstract:
Sound propagation in shallow water is characterized by interaction with the ocean's surface, volume, and bottom. In many coastal margin regions, including the Eastern U.S. continental shelf and the coastal seas of China, the bottom is composed of a depositional sandy-silty top layer. Previous measurements of narrowband and broadband sound transmission at frequencies from 100 Hz to 1 kHz in these regions are consistent with waveguide calculations based on depth- and frequency-dependent sound speed, attenuation, and density profiles. Theoretical predictions for the frequency dependence of attenuation vary from quadratic, for the porous media model of M.A. Biot, to linear for various competing models. Results from experiments performed under known conditions with sandy bottoms, however, have agreed with attenuation proportional to f^1.84, which is slightly less than the theoretical value of f^2 [Zhou and Zhang, J. Acoust. Soc. Am. 117, 2494]. This dissertation presents a reexamination of the fundamental considerations in the Biot derivation and leads to a simplification of the theory that can be coupled with site-specific, depth-dependent attenuation and sound speed profiles to explain the observed frequency dependence. Long-range sound transmission measurements in a known waveguide can be used to estimate the site-specific sediment attenuation properties, but the costs and time associated with such at-sea experiments using traditional measurement techniques can be prohibitive. Here a new measurement tool consisting of an autonomous underwater vehicle and a small, low-noise, towed hydrophone array was developed and used to obtain accurate long-range sound transmission measurements efficiently and cost-effectively. To demonstrate this capability and to determine the modal and intrinsic attenuation characteristics, experiments were conducted in a carefully surveyed area in Nantucket Sound. A best-fit comparison between measured and calculated results, varying the attenuation parameters, yielded an estimated power-law exponent of 1.87 between 220.5 and 1228 Hz. These results demonstrate the utility of this new cost-effective and accurate measurement system. The sound transmission results, when compared with calculations based on the modified Biot theory, are shown to explain the observed frequency dependence.
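The exponent estimate quoted above amounts to a log-log fit of attenuation against frequency. A minimal sketch, with synthetic attenuation values standing in for the measured ones:

```python
# Fit alpha(f) = k * f**n in log-log space; data here are synthetic.
import numpy as np

freqs = np.array([220.5, 400.0, 700.0, 1228.0])    # Hz, band from the text
alpha = 3e-6 * freqs ** 1.87                       # synthetic "measured" values
n, log_k = np.polyfit(np.log(freqs), np.log(alpha), 1)
print(f"estimated power-law exponent n = {n:.2f}")  # ~1.87 by construction
```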
Abstract:
This paper explores reasons for the high degree of variability in the sizes of ASes that has recently been observed, and the processes by which this variable distribution develops. AS size distribution is important for a number of reasons. First, when modeling network topologies, an AS size distribution assists in labeling routers with an associated AS. Second, AS size has been found to be positively correlated with the degree of the AS (number of peering links), so understanding the distribution of AS sizes has implications for AS connectivity properties. Our model accounts for AS births, growth, and mergers. We analyze two models: one incorporates only the growth of hosts and ASes, and a second extends that model to include mergers of ASes. We show analytically that, given reasonable assumptions about the nature of mergers, the resulting size distribution exhibits a power-law tail with an exponent independent of the details of the merging process. We estimate parameters of the models from measurements obtained from Internet registries and from BGP tables. We then compare the models' solutions to empirical AS size distributions taken from the Mercator and Skitter datasets, and find that the simple growth-based model yields general agreement with the empirical data. Our analysis of the model in which mergers occur in a manner independent of the size of the merging ASes suggests that more detailed analysis of merger processes is needed.
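A toy simulation in the spirit of this model: AS birth times follow an exponentially growing arrival rate, each AS's host count grows exponentially after birth, and random pairs merge independently of size. All rates here are assumed for illustration, not the paper's estimates; the tail slope of the rank-size plot should sit near -mu/lambda regardless of the mergers.

```python
import numpy as np

rng = np.random.default_rng(1)
mu, lam, T, n = 0.04, 0.03, 300.0, 20_000   # assumed rates and horizon

# Birth times with density proportional to exp(mu * t) on [0, T]
# (inverse-transform sampling), then exponential individual growth.
u = rng.random(n)
birth = np.log(1 + u * (np.exp(mu * T) - 1)) / mu
sizes = np.exp(lam * (T - birth))

# Size-independent random mergers: one AS absorbs another.
alive = np.ones(n, dtype=bool)
for _ in range(n // 20):
    i, j = rng.choice(np.flatnonzero(alive), 2, replace=False)
    sizes[i] += sizes[j]
    alive[j] = False

s = np.sort(sizes[alive])[::-1]
k = s.size // 10                             # fit the largest 10%
slope, _ = np.polyfit(np.log(s[:k]), np.log(np.arange(1, k + 1)), 1)
print(f"rank-size tail slope ~ {slope:.2f} (theory: {-mu / lam:.2f})")
```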
Abstract:
Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity, meaning that the likelihood that a new node in a growing graph will connect to an existing node is proportional to the existing node's degree. In this paper we ask whether a more general explanation than the B-A model, one that does not assume preferential connectivity, is consistent with the empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet and in the number of ASes. We then show via analysis that such a model yields a size distribution exhibiting a power-law tail. In such a model, if an AS's link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate such a model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
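The core of the exponential-growth argument fits in a few lines. The following is a standard sketch of the derivation, with mu and lambda denoting the growth rates of the number of ASes and of an individual AS's size; the symbols are assumed here, not taken from the paper.

```latex
% Sketch: exponential growth alone yields a power-law size tail.
% N(t) ~ e^{\mu t} is the number of ASes; an AS of age a has size e^{\lambda a}.
\begin{align*}
  P(A \ge a) &= e^{-\mu a}
    && \text{(age distribution of an exponentially growing population)} \\
  P(S \ge s) &= P\!\left(A \ge \frac{\ln s}{\lambda}\right)
              = e^{-(\mu/\lambda)\ln s} = s^{-\mu/\lambda},
\end{align*}
% i.e. a Pareto tail whose exponent \mu/\lambda is the ratio of the population
% and individual growth rates, independent of the details of link formation.
```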
Abstract:
The explosion of WWW traffic necessitates an accurate picture of WWW use, and in particular requires a good understanding of client requests for WWW documents. To address this need, we have collected traces of actual executions of NCSA Mosaic, reflecting over half a million user requests for WWW documents. In this paper we describe the methods we used to collect our traces, and the formats of the collected data. Next, we present a descriptive statistical summary of the traces we collected, which identifies a number of trends and reference patterns in WWW use. In particular, we show that many characteristics of WWW use can be modelled using power-law distributions, including the distribution of document sizes, the popularity of documents as a function of size, the distribution of user requests for documents, and the number of references to documents as a function of their overall rank in popularity (Zipf's law). Finally, we show how the power-law distributions derived from our traces can be used to guide system designers interested in caching WWW documents.
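The Zipf's-law observation above is easy to reproduce on a synthetic request stream: count requests per document, sort by rank, and fit the log-log rank-frequency line (classic Zipf's law corresponds to a slope near -1). The stream below is generated with an assumed exponent, so this is a sketch of the method, not the paper's trace analysis.

```python
import collections
import numpy as np

rng = np.random.default_rng(2)
docs = rng.zipf(a=2.0, size=100_000)          # doc IDs with P(k) ~ k**-2
counts = np.array(sorted(collections.Counter(docs).values(), reverse=True))
ranks = np.arange(1, counts.size + 1)
head = slice(0, 100)                          # fit the well-populated head
slope, _ = np.polyfit(np.log(ranks[head]), np.log(counts[head]), 1)
print(f"rank-frequency slope ~ {slope:.2f}")  # near -2 for this synthetic stream
```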
Abstract:
Long-range dependence has been observed in many recent Internet traffic measurements. In addition, some recent studies have shown that under certain network conditions, TCP itself can produce traffic that exhibits dependence over limited timescales, even in the absence of higher-level variability. In this paper, we use a simple Markovian model to argue that when the loss rate is relatively high, TCP's adaptive congestion control mechanism indeed generates traffic with OFF periods exhibiting a power-law shape over several timescales, and thus introduces pseudo-long-range dependence into the overall traffic. Moreover, we observe that more variable initial retransmission timeout values for different packets introduce more variable packet inter-arrival times, which increases the burstiness of the overall traffic. We can thus explain why a single TCP connection can produce a time series that can be misidentified as self-similar using standard tests.
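The mechanism is easy to see in a toy simulation: with loss probability p per attempt and a doubling retransmission timeout, an OFF period of length roughly 2^n occurs with probability p^n, which is a power law with exponent ln(1/p)/ln 2. The loss rate and base RTO below are assumed values, not parameters from the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
p_loss, base_rto = 0.3, 1.0
off_periods = []
for _ in range(50_000):
    off, rto = 0.0, base_rto
    while rng.random() < p_loss:       # each consecutive loss doubles the RTO
        off += rto
        rto *= 2
    if off > 0:
        off_periods.append(off)

off = np.sort(off_periods)[::-1]
k = off.size // 10                     # fit the largest 10% of OFF periods
slope, _ = np.polyfit(np.log(off[:k]), np.log(np.arange(1, k + 1)), 1)
print(f"OFF-period tail slope ~ {slope:.2f}")   # theory: -ln(1/p)/ln 2 = -1.74
```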
Abstract:
Recent work has shown the prevalence of small-world phenomena [28] in many networks. Small-world graphs exhibit a high degree of clustering, yet have typically short path lengths between arbitrary vertices. Internet AS-level graphs have been shown to exhibit small-world behaviors [9]. In this paper, we show that both Internet AS-level and router-level graphs exhibit small-world behavior. We attribute such behavior to two possible causes: the high variability of vertex degree distributions (which were found to follow approximately a power law [15]), and the preference of vertices for local connections. We show that the two factors contribute in different relative degrees to the small-world behavior of AS-level and router-level topologies. Our findings underscore the inefficacy of the Barabasi-Albert model [6] in explaining the growth process of the Internet, and provide a basis for more promising approaches to the development of Internet topology generators. We present such a generator and show the resemblance of the synthetic graphs it generates to real Internet AS-level and router-level graphs. Using these graphs, we have examined how small-world behaviors affect the scalability of end-system multicast. Our findings indicate that lower variability of vertex degree and stronger preference for local connectivity in small-world graphs result in slower network neighborhood expansion and in longer average path lengths between arbitrary vertices, which in turn result in better scaling of end-system multicast.
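The two defining small-world metrics, clustering and characteristic path length, can be computed directly with networkx; the graph below is a stand-in Watts-Strogatz instance, not one of the paper's measured topologies or its generator.

```python
# Compute the two small-world metrics on a toy small-world graph.
import networkx as nx

G = nx.connected_watts_strogatz_graph(n=1000, k=6, p=0.05)
print("clustering coefficient:", nx.average_clustering(G))
print("average shortest path:", nx.average_shortest_path_length(G))
```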
Abstract:
Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute studies have led some authors to conclude that the router graph of the Internet is a scale-free graph, or more generally a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. In this paper we argue that the evidence to date for this conclusion is at best insufficient. We show that graphs appearing to have power-law degree distributions can arise surprisingly easily when sampling graphs whose true degree distribution is not at all like a power law. For example, given a classical Erdös-Rényi sparse random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can easily appear to show a degree distribution remarkably like a power law. We explore how this effect arises, and show that in such a setting, edges are sampled in a highly biased manner. This insight allows us to distinguish measurements taken from Erdös-Rényi graphs from those taken from power-law random graphs. When we apply this distinction to a number of well-known datasets, we find that the evidence for sampling bias in these datasets is strong.
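The sampling experiment is simple to reproduce in miniature: take a sparse G(n, p) graph, whose true degrees are approximately Poisson, union the shortest paths from a few sources to many destinations, and inspect the degrees of the sampled subgraph. Sizes below are assumed and far smaller than a real traceroute study.

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(4)
G = nx.gnp_random_graph(3000, 3.0 / 3000, seed=4)       # mean degree ~3
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()
nodes = list(G)
sources = rng.choice(nodes, 3, replace=False)
dests = rng.choice(nodes, 300, replace=False)

sampled = nx.Graph()
for s in sources:
    paths = nx.single_source_shortest_path(G, s)
    for d in dests:
        if d in paths:
            nx.add_path(sampled, paths[d])   # union of traceroute-like paths

deg = np.array([d for _, d in sampled.degree()])
print("true mean degree:", np.mean([d for _, d in G.degree()]))
print("sampled degree counts:", np.bincount(deg))   # biased toward hubs
```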
Abstract:
We consider the problem of delivering popular streaming media to a large number of asynchronous clients. We propose and evaluate a cache-and-relay end-system multicast approach, whereby a client joining a multicast session caches the stream, and if needed, relays that stream to neighboring clients which may join the multicast session at some later time. This cache-and-relay approach is fully distributed, scalable, and efficient in terms of network link cost. In this paper we analytically derive bounds on the network link cost of our cache-and-relay approach, and we evaluate its performance under assumptions of limited client bandwidth and limited client cache capacity. When client bandwidth is limited, we show that although finding an optimal solution is NP-hard, a simple greedy algorithm performs surprisingly well in that it incurs network link costs that are very close to a theoretical lower bound. When client cache capacity is limited, we show that our cache-and-relay approach can still significantly reduce network link cost. We have evaluated our cache-and-relay approach using simulations over large, synthetic random networks, power-law degree networks, and small-world networks, as well as over large real router-level Internet maps.
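One plausible reading of the greedy heuristic, sketched under assumptions (a fixed cache window in seconds and a fixed relay fan-out, neither taken from the paper): a new client attaches to the most recently arrived client whose cached prefix still covers the time gap and whose relay slots are free, and otherwise opens a fresh stream from the server.

```python
from dataclasses import dataclass, field

@dataclass
class Client:
    arrival: float
    children: list = field(default_factory=list)

def join(clients, t, cache, max_fanout):
    """Greedy parent choice for a client arriving at time t."""
    best = None
    for c in clients:
        if t - c.arrival <= cache and len(c.children) < max_fanout:
            if best is None or c.arrival > best.arrival:
                best = c                      # latest feasible parent
    new = Client(t)
    if best is not None:
        best.children.append(new)             # relay from a peer's cache
    clients.append(new)
    return best is not None                   # False => stream from server

clients = []
server_streams = sum(not join(clients, t, cache=10.0, max_fanout=2)
                     for t in [0, 3, 5, 14, 15, 16, 40])
print("streams served directly by the server:", server_streams)
```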
Abstract:
Temporal locality of reference in Web request streams emerges from two distinct phenomena: the popularity of Web objects and the temporal correlation of requests. Capturing these two elements of temporal locality is important because it enables cache replacement policies to adjust how they capitalize on temporal locality based on the relative prevalence of these phenomena. In this paper, we show that temporal locality metrics proposed in the literature are unable to delineate between these two sources of temporal locality. In particular, we show that the commonly used distribution of reference interarrival times is predominantly determined by the power law governing the popularity of documents in a request stream. To capture (and, more importantly, quantify) both sources of temporal locality in a request stream, we propose a new and robust metric that enables accurate delineation between locality due to popularity and that due to temporal correlation. Using this metric, we characterize the locality of reference in a number of representative proxy cache traces. Our findings show that there are measurable differences between the degrees (and sources) of temporal locality across these traces, and that these differences are effectively captured using our proposed metric. We illustrate the significance of our findings by summarizing the performance of a novel Web cache replacement policy, called GreedyDual*, which exploits both long-term popularity and short-term temporal correlation in an adaptive fashion. Our trace-driven simulation experiments (which are detailed in an accompanying Technical Report) show the superior performance of GreedyDual* when compared to other Web cache replacement policies.
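The claim about interarrival times can be checked against a stream with no temporal correlation at all: draw requests i.i.d. from a Zipf popularity law (the independent reference model) and measure reference interarrival times. A heavy-tailed gap distribution appears even though popularity is the only mechanism present. Parameters below are assumed, not taken from the traces.

```python
import collections
import numpy as np

rng = np.random.default_rng(5)
stream = rng.zipf(a=1.8, size=200_000)        # IRM: i.i.d. Zipf-popular docs
last_seen, gaps = {}, []
for t, doc in enumerate(stream):
    if doc in last_seen:
        gaps.append(t - last_seen[doc])       # reference interarrival time
    last_seen[doc] = t

hist = collections.Counter(np.floor(np.log2(np.array(gaps))).astype(int))
for b in sorted(hist):                        # counts per octave of gap size
    print(f"gap in [2^{b}, 2^{b + 1}): {hist[b]}")
```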
Abstract:
The cost and complexity of deploying measurement infrastructure in the Internet for the purpose of analyzing its structure and behavior are considerable. Basic questions about the utility of increasing the number of measurements and/or measurement sites have not yet been addressed, which has led to a "more is better" approach to wide-area measurements. In this paper, we quantify the marginal utility of performing wide-area measurements in the context of Internet topology discovery. We characterize topology in terms of nodes, links, node degree distribution, and end-to-end flows using statistical and information-theoretic techniques. We classify nodes discovered on the routes between a set of 8 sources and 1277 destinations to differentiate nodes which make up the so-called "backbone" from those which border the backbone and those on links between the border nodes and destination nodes. This process includes reducing nodes that advertise multiple interfaces to single IP addresses. We show that the utility of adding sources declines significantly after 2 from the perspective of interface, node, link, and node degree discovery. We show that the utility of adding destinations is constant for interfaces, nodes, links, and node degree, indicating that it is more important to add destinations than sources. Finally, we analyze paths through the backbone and show that shared link distributions approximate a power law, indicating that a small number of backbone links in our study are very heavily utilized.
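The marginal-utility measurement amounts to counting how much of the topology each additional source newly discovers. A hedged sketch on a synthetic topology, with shortest paths standing in for traceroutes (the 8-source, many-destination setup echoes the paper, but the graph and counts are assumed):

```python
import networkx as nx
import numpy as np

rng = np.random.default_rng(6)
G = nx.random_internet_as_graph(2000)         # toy topology, not Mercator data
nodes = list(G)
sources = rng.choice(nodes, 8, replace=False)
dests = rng.choice(nodes, 500, replace=False)

seen = set()
for k, s in enumerate(sources, 1):
    paths = nx.single_source_shortest_path(G, s)
    new = {frozenset(e) for d in dests if d in paths
           for e in zip(paths[d], paths[d][1:])} - seen
    print(f"source {k}: {len(new)} new links discovered")
    seen |= new                               # diminishing returns per source
```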