933 results for Power Series Distribution


Relevance:

30.00%

Publisher:

Abstract:

It has been known for decades that the metabolic rate of animals scales with body mass with an exponent that is almost always <1, >2/3, and often very close to 3/4. The 3/4 exponent emerges naturally from two models of resource distribution networks, radial explosion and hierarchically branched, which incorporate a minimum of specific details. Both models show that the exponent is 2/3 if velocity of flow remains constant, but can attain a maximum value of 3/4 if velocity scales with its maximum exponent, 1/12. Quarter-power scaling can arise even when there is no underlying fractality. The canonical “fourth dimension” in biological scaling relations can result from matching the velocity of flow through the network to the linear dimension of the terminal “service volume” where resources are consumed. These models have broad applicability for the optimal design of biological and engineered systems where energy, materials, or information are distributed from a single source.
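
As a worked restatement of the exponent arithmetic above (the symbols B for metabolic rate and M for body mass are mine, not quoted from the abstract):

```latex
% Constant flow velocity gives the surface-area-like exponent;
% letting velocity scale with mass at its maximal exponent 1/12 raises it to 3/4.
\[
  B \propto M^{2/3}\ \text{(constant velocity)},\qquad
  B \propto M^{2/3 + 1/12} = M^{3/4}\ \text{(velocity} \propto M^{1/12}\text{)}.
\]
```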

Relevance:

30.00%

Publisher:

Abstract:

A novel approach is presented for combining spatial and temporal detail from newly available TRMM-based data sets to derive hourly rainfall intensities at 1-km spatial resolution for hydrological modelling applications. Time series of rainfall intensities derived from 3-hourly 0.25° TRMM 3B42 data are merged with a 1-km gridded rainfall climatology based on TRMM 2B31 data to account for the sub-grid spatial distribution of rainfall intensities within coarse-scale 0.25° grid cells. The method is implemented for two dryland catchments in Tunisia and Senegal, and validated against gauge data. The outcomes of the validation show that the spatially disaggregated and intensity corrected TRMM time series more closely approximate ground-based measurements than non-corrected data. The method introduced here enables the generation of rainfall intensity time series with realistic temporal and spatial detail for dynamic modelling of runoff and infiltration processes that are especially important to water resource management in arid regions.
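
The core spatial-disaggregation step lends itself to a short sketch. The climatology-weighted scaling below is a common way to impose a fine-scale pattern while preserving the coarse-cell mean; it is an assumption for illustration, not the paper's exact merging procedure, and the function name is hypothetical.

```python
import numpy as np

def disaggregate_rainfall(coarse_series, fine_climatology):
    """Spatially disaggregate a coarse-cell rainfall time series onto a fine grid.

    coarse_series    : 1-D array of rainfall intensities for one 0.25-degree cell
                       (e.g. 3-hourly TRMM 3B42 values, mm/h).
    fine_climatology : 2-D array of mean rainfall (e.g. a 1-km TRMM 2B31 climatology)
                       covering the same cell; only its spatial pattern is used.

    Returns an array of shape (time, ny, nx) in which every time step preserves the
    coarse-cell mean while following the fine-scale climatological pattern.
    """
    weights = fine_climatology / fine_climatology.mean()   # unit-mean spatial weights
    return coarse_series[:, None, None] * weights[None, :, :]

# toy usage: 8 three-hourly values over a 25 x 25 fine grid
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    coarse = rng.gamma(shape=0.5, scale=4.0, size=8)        # mm/h, mostly light rain
    clim = rng.lognormal(mean=0.0, sigma=0.3, size=(25, 25))
    fine = disaggregate_rainfall(coarse, clim)
    assert np.allclose(fine.mean(axis=(1, 2)), coarse)      # coarse-cell mean preserved
```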

Relevance:

30.00%

Publisher:

Abstract:

The collection of wind speed time series by means of digital data loggers occurs in many domains, including civil engineering, environmental sciences and wind turbine technology. Since averaging intervals are often significantly larger than typical system time scales, the information lost has to be recovered in order to reconstruct the true dynamics of the system. In the present work we present a simple algorithm capable of generating a real-time wind speed time series from data logger records containing the average, maximum, and minimum values of the wind speed in a fixed interval, as well as the standard deviation. The signal is generated from a generalized random Fourier series. The spectrum can be matched to any desired theoretical or measured frequency distribution. Extreme values are specified through a postprocessing step based on the concept of constrained simulation. Applications of the algorithm to 10-min wind speed records logged at a test site at 60 m height above the ground show that the recorded 10-min values can be reproduced by the simulated time series to a high degree of accuracy.
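
A minimal sketch of the reconstruction idea, assuming a power-law amplitude spectrum and a crude stretch toward the recorded extremes in place of the constrained-simulation step (both are assumptions; function and parameter names are mine):

```python
import numpy as np

def synth_wind(mean, std, vmax, vmin, n=600, dt=1.0, slope=-5.0 / 3.0, seed=None):
    """Generate a synthetic within-interval wind speed series from logger statistics.

    A random-phase Fourier series with a power-law amplitude spectrum (the slope is
    an assumption) is rescaled to the recorded mean and standard deviation, then
    stretched about the mean so the recorded extremes are attained -- a crude
    stand-in for the constrained-simulation postprocessing described above.
    """
    rng = np.random.default_rng(seed)
    freqs = np.fft.rfftfreq(n, dt)
    amps = np.zeros_like(freqs)
    amps[1:] = freqs[1:] ** (slope / 2.0)            # amplitude ~ sqrt(power spectrum)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    x = np.fft.irfft(amps * np.exp(1j * phases), n=n)
    x = (x - x.mean()) / x.std() * std + mean         # match recorded mean and std
    hi = (vmax - mean) / max(x.max() - mean, 1e-12)    # stretch upper excursions to vmax
    lo = (mean - vmin) / max(mean - x.min(), 1e-12)    # stretch lower excursions to vmin
    return mean + np.where(x > mean, (x - mean) * hi, (x - mean) * lo)

# e.g. reconstruct a 10-min interval logged as mean 8.2, std 1.4, max 13.1, min 4.9 m/s
series = synth_wind(mean=8.2, std=1.4, vmax=13.1, vmin=4.9, n=600, seed=1)
```

Note that the final stretch slightly perturbs the matched mean and standard deviation; a full constrained simulation would avoid this.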

Relevance:

30.00%

Publisher:

Abstract:

The translation of an ensemble of model runs into a probability distribution is a common task in model-based prediction. Common methods for such ensemble interpretations proceed as if verification and ensemble were draws from the same underlying distribution, an assumption not viable for most, if any, real-world ensembles. An alternative is to consider an ensemble as merely a source of information rather than the possible scenarios of reality. This approach, which looks for maps between ensembles and probability distributions, is investigated and extended. Common methods are revisited, and an improvement to standard kernel dressing, called ‘affine kernel dressing’ (AKD), is introduced. AKD assumes an affine mapping between ensemble and verification, typically not acting on individual ensemble members but on the entire ensemble as a whole. The parameters of this mapping are determined in parallel with the other dressing parameters, including a weight assigned to the unconditioned (climatological) distribution. These amendments to standard kernel dressing, albeit simple, can improve performance significantly and are shown to be appropriate for both overdispersive and underdispersive ensembles, unlike standard kernel dressing, which exacerbates overdispersion. Studies are presented using operational numerical weather predictions for two locations and data from the Lorenz63 system, demonstrating both effectiveness given operational constraints and statistical significance given a large sample.
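
A simplified sketch of kernel dressing and its affine variant. The parameterisation here (a fixed affine map, a fixed kernel width and a fixed climatological weight) is deliberately reduced; in the paper these parameters are fitted jointly. All names are illustrative only.

```python
import numpy as np
from scipy.stats import norm

def kernel_dressing(y_grid, ensemble, sigma):
    """Standard Gaussian kernel dressing: one kernel per ensemble member."""
    return norm.pdf(y_grid[:, None], loc=ensemble[None, :], scale=sigma).mean(axis=1)

def affine_kernel_dressing(y_grid, ensemble, a, b, sigma, w_clim, clim_pdf):
    """Simplified affine kernel dressing (illustrative, not the paper's exact form).

    The whole ensemble is first mapped affinely (z = a * x + b), dressed with
    Gaussian kernels, and the result is mixed with a climatological density with
    weight w_clim.  In AKD proper, a, b, sigma and w_clim are estimated together
    (e.g. by minimising a proper score); here they are supplied by the caller.
    """
    z = a * ensemble + b
    dressed = norm.pdf(y_grid[:, None], loc=z[None, :], scale=sigma).mean(axis=1)
    return (1.0 - w_clim) * dressed + w_clim * clim_pdf(y_grid)

# toy usage with an underdispersive 24-member ensemble
y = np.linspace(-5.0, 5.0, 501)
ens = np.random.default_rng(0).normal(loc=0.8, scale=0.4, size=24)
clim = lambda v: norm.pdf(v, loc=0.0, scale=2.0)
p_akd = affine_kernel_dressing(y, ens, a=1.0, b=-0.3, sigma=0.6, w_clim=0.1, clim_pdf=clim)
```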

Relevance:

30.00%

Publisher:

Abstract:

This chapter examines the extent to which Britain's status as a global power in the twentieth century was underpinned by the existence of its empire. It suggests that, in a military sense, empire represented an uncertain resource. While the mobilization of the empire in the two world wars was ultimately crucial to British victory, its latent power in the years leading to those conflicts was poorly appreciated, not least by UK policy‐makers themselves. As such, it had limited value as a deterrent to Britain's enemies. Furthermore, the process of mobilizing the empire for war placed an almost intolerable strain on the fragile structures of imperial control. Britain's continuing aspirations to play the role of a global power following post‐war decolonization reflect the extent to which its overseas interests had always transcended the formal boundaries of empire. Meanwhile the Anglo‐American alliance provided Britain with a degree of security that its empire had never offered.

Relevance:

30.00%

Publisher:

Abstract:

In recent years, the potential role of planned, internal resettlement as a climate change adaptation measure has been highlighted by national governments and the international policy community. However, in many developing countries, resettlement is a deeply political process that often results in an unequal distribution of costs and benefits amongst relocated persons. This paper examines these tensions in Mozambique, drawing on a case study of flood-affected communities in the Lower Zambezi River valley. It takes a political ecology approach – focusing on discourses of human-environment interaction, as well as the power relationships that are supported by such discourses – to show how a dominant narrative of climate change-induced hazards for small-scale farmers is contributing to their involuntary resettlement to higher-altitude, less fertile areas of land. These forced relocations are buttressed by a series of wider economic and political interests in the Lower Zambezi River region, such as dam construction for hydroelectric power generation and the extension of control over rural populations, from which resettled people derive little direct benefit. Rather than engaging with these challenging issues, most international donors present in the country accept the ‘inevitability’ of extreme weather impacts and view resettlement as an unfortunate and, in some cases, necessary step to increase people’s ‘resilience’, thus rationalising the top-down imposition of unpopular social policies. The findings add weight to the argument that a depoliticised interpretation of climate change can deflect attention away from underlying drivers of vulnerability and poverty, as well as obscure the interests of governments that are intent on reordering poor and vulnerable populations.

Relevance:

30.00%

Publisher:

Abstract:

We study two-dimensional (2D) turbulence in a doubly periodic domain driven by a monoscale-like forcing and damped by various dissipation mechanisms of the form ν_μ(−Δ)^μ. By “monoscale-like” we mean that the forcing is applied over a finite range of wavenumbers k_min ≤ k ≤ k_max, and that the ratio of enstrophy injection η ≥ 0 to energy injection ε ≥ 0 is bounded by k_min²ε ≤ η ≤ k_max²ε. Such a forcing is frequently considered in theoretical and numerical studies of 2D turbulence. It is shown that for μ ≥ 0 the asymptotic behaviour satisfies ‖u‖₁² ≤ k_max²‖u‖², where ‖u‖² and ‖u‖₁² are the energy and enstrophy, respectively. If the condition of monoscale-like forcing holds only in a time-mean sense, then the inequality holds in the time mean. It is also shown that for Navier–Stokes turbulence (μ = 1), the time-mean enstrophy dissipation rate is bounded from above by 2ν₁k_max². These results place strong constraints on the spectral distribution of energy and enstrophy and of their dissipation, and thereby on the existence of energy and enstrophy cascades, in such systems. In particular, the classical dual-cascade picture is shown to be invalid for forced 2D Navier–Stokes turbulence (μ = 1) when it is forced in this manner. Inclusion of Ekman drag (μ = 0) along with molecular viscosity permits a dual cascade, but is incompatible with the log-modified −3 power law for the energy spectrum in the enstrophy-cascading inertial range. In order to achieve the latter, it is necessary to invoke an inverse viscosity (μ < 0). These constraints on permissible power laws apply for any spectrally localized forcing, not just for monoscale-like forcing.
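
For reference, a minimal LaTeX restatement of the setup and of the two bounds quoted above; the vorticity-equation form and sign convention are my assumptions, not text from the paper:

```latex
\[
  \partial_t \omega + \mathbf{u}\cdot\nabla\omega
    = f - \nu_\mu(-\Delta)^{\mu}\omega,
  \qquad
  k_{\min}^{2}\,\varepsilon \le \eta \le k_{\max}^{2}\,\varepsilon,
  \qquad
  \|u\|_{1}^{2} \le k_{\max}^{2}\,\|u\|^{2}\quad(\mu \ge 0).
\]
```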

Relevance:

30.00%

Publisher:

Abstract:

Global communication requirements and load imbalance of some parallel data mining algorithms are the major obstacles to exploiting the computational power of large-scale systems. This work investigates how non-uniform data distributions can be exploited to remove the global communication requirement and to reduce the communication cost in parallel data mining algorithms and, in particular, in the k-means algorithm for cluster analysis. In the straightforward parallel formulation of the k-means algorithm, data and computation loads are uniformly distributed over the processing nodes. This approach has excellent load balancing characteristics that may suggest it could scale up to large and extreme-scale parallel computing systems. However, at each iteration step the algorithm requires a global reduction operation, which hinders the scalability of the approach. This work studies a different parallel formulation of the algorithm in which the requirement of global communication is removed, while maintaining the same deterministic nature of the centralised algorithm. The proposed approach exploits a non-uniform data distribution which can either be found in real-world distributed applications or be induced by means of multi-dimensional binary search trees. The approach can also be extended to accommodate an approximation error, which allows a further reduction of the communication costs. The effectiveness of the exact and approximate methods has been tested in a parallel computing system with 64 processors and in simulations with 1024 processing elements.
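
To make the bottleneck concrete, here is a minimal sketch of one iteration of the straightforward parallel formulation discussed above, written with mpi4py; the Allreduce calls are the per-iteration global reduction that the locality-aware, non-uniform formulation removes. This is an illustration, not the paper's implementation.

```python
import numpy as np
from mpi4py import MPI

def parallel_kmeans_step(local_points, centroids, comm):
    """One iteration of the baseline parallel k-means on a uniform data partition."""
    k, d = centroids.shape
    # assign each local point to its nearest centroid
    dists = np.linalg.norm(local_points[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    # local partial sums and counts per cluster
    local_sums = np.zeros((k, d))
    local_counts = np.zeros(k)
    for j in range(k):
        mask = labels == j
        local_sums[j] = local_points[mask].sum(axis=0)
        local_counts[j] = mask.sum()
    # global reduction: every rank must synchronise here, once per iteration
    global_sums = np.empty_like(local_sums)
    global_counts = np.empty_like(local_counts)
    comm.Allreduce(local_sums, global_sums, op=MPI.SUM)
    comm.Allreduce(local_counts, global_counts, op=MPI.SUM)
    return global_sums / np.maximum(global_counts, 1)[:, None]

if __name__ == "__main__":
    comm = MPI.COMM_WORLD
    rng = np.random.default_rng(comm.Get_rank())
    pts = rng.normal(size=(1000, 2))                 # this rank's share of the data
    cents = np.array([[-1.0, 0.0], [1.0, 0.0]])
    for _ in range(10):
        cents = parallel_kmeans_step(pts, cents, comm)
```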

Relevance:

30.00%

Publisher:

Abstract:

Results from 1D Vlasov drift-kinetic plasma simulations reveal how and where auroral electrons are accelerated along Earth’s geomagnetic field. In the warm plasma sheet, electrons become trapped in shear Alfvén waves, preventing immediate wave damping. As waves move to regions with larger v_Te/v_A, their parallel electric field decreases, and the trapped electrons escape their influence. The resulting electron distribution functions compare favorably with in situ observations, demonstrating for the first time a self-consistent link between Alfvén waves and the electrons that form the aurora.

Relevance:

30.00%

Publisher:

Abstract:

The absorption spectra of phytoplankton in the visible domain hold implicit information on the phytoplankton community structure. Here we use this information to retrieve quantitative information on phytoplankton size structure by developing a novel method to compute the exponent of an assumed power-law for their particle-size spectrum. This quantity, in combination with total chlorophyll-a concentration, can be used to estimate the fractional concentration of chlorophyll in any arbitrarily defined size class of phytoplankton. We further define and derive expressions for two distinct measures of cell size of mixed populations, namely, the average spherical diameter of a bio-optically equivalent homogeneous population of cells of equal size, and the average equivalent spherical diameter of a population of cells that follow a power-law particle-size distribution. The method relies on measurements of two quantities of a phytoplankton sample: the concentration of chlorophyll-a, which is an operational index of phytoplankton biomass, and the total absorption coefficient of phytoplankton in the red peak of the visible spectrum at 676 nm. A sensitivity analysis confirms that the relative errors in the estimates of the exponent of particle size spectra are reasonably low. The exponents of phytoplankton size spectra, estimated for a large set of in situ data from a variety of oceanic environments (~2400 samples), are within a reasonable range; and the estimated fractions of chlorophyll in pico-, nano- and micro-phytoplankton are generally consistent with those obtained by an independent, indirect method based on diagnostic pigments determined using high-performance liquid chromatography. The estimates of cell size for in situ samples dominated by different phytoplankton types (diatoms, prymnesiophytes, Prochlorococcus, other cyanobacteria and green algae) yield nominal sizes consistent with the taxonomic classification. To estimate the same quantities from satellite-derived ocean-colour data, we combine our method with algorithms for obtaining inherent optical properties from remote sensing. The spatial distribution of the size-spectrum exponent and the chlorophyll fractions of pico-, nano- and micro-phytoplankton estimated from satellite remote sensing are in agreement with the current understanding of the biogeography of phytoplankton functional types in the global oceans. This study contributes to our understanding of the distribution and time evolution of phytoplankton size structure in the global oceans.
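
As an illustration of how a size-spectrum exponent maps onto chlorophyll size fractions, the sketch below assumes a power-law number size distribution and chlorophyll per cell proportional to cell volume. It is a generic calculation, not the paper's bio-optical retrieval of the exponent from chlorophyll-a and absorption at 676 nm, and the size-class boundaries are only nominal pico/nano/micro limits.

```python
import numpy as np

def chl_fraction(xi, d1, d2, dmin=0.2, dmax=200.0):
    """Fraction of total chlorophyll in the size class [d1, d2] (micrometres),
    assuming a power-law particle-size spectrum N(D) ~ D**(-xi) over [dmin, dmax]
    and chlorophyll per cell proportional to cell volume (~ D**3).  Both
    assumptions are for illustration only.
    """
    def integral(a, b):
        p = 4.0 - xi                       # integrand is D**(3 - xi)
        if np.isclose(p, 0.0):
            return np.log(b / a)
        return (b ** p - a ** p) / p
    return integral(d1, d2) / integral(dmin, dmax)

# e.g. pico- (<2 um), nano- (2-20 um) and micro- (>20 um) fractions for xi = 4.2
xi = 4.2
fractions = [chl_fraction(xi, lo, hi) for lo, hi in [(0.2, 2.0), (2.0, 20.0), (20.0, 200.0)]]
```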

Relevance:

30.00%

Publisher:

Abstract:

The use of pulse compression techniques to improve the sensitivity of meteorological radars has become increasingly common in recent years. An unavoidable side-effect of such techniques is the formation of ‘range sidelobes’ which lead to spreading of information across several range gates. These artefacts are particularly troublesome in regions where there is a sharp gradient in the power backscattered to the antenna as a function of range. In this article we present a simple method for identifying and correcting range sidelobe artefacts. We make use of the fact that meteorological targets produce an echo which fluctuates at random, and that this echo, like a fingerprint, is unique to each range gate. By cross-correlating the echo time series from pairs of gates, therefore, we can identify whether information from one gate has spread into another, and hence flag regions of contamination. In addition we show that the correlation coefficients contain quantitative information about the fraction of power leaked from one range gate to another, and we propose a simple algorithm to correct the corrupted reflectivity profile.
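
A minimal sketch of the identification step: because weather echoes decorrelate from gate to gate, strongly correlated gate pairs are flagged as candidates for sidelobe contamination. The fixed threshold and the use of a plain correlation of echo powers are simplifying assumptions, not the paper's calibrated correction algorithm.

```python
import numpy as np

def flag_sidelobe_contamination(echo_ts, threshold=0.3):
    """Identify range gates likely contaminated by pulse-compression range sidelobes.

    echo_ts : complex or real array of shape (n_gates, n_samples), the echo time
              series at each range gate.  Because meteorological echoes fluctuate
              randomly and independently between gates, a high correlation between
              two gates suggests power has leaked from one into the other.

    Returns the gate-to-gate correlation matrix and a boolean 'contaminated' flag.
    """
    power = np.abs(echo_ts) ** 2
    corr = np.corrcoef(power)                      # (n_gates, n_gates)
    np.fill_diagonal(corr, 0.0)                    # ignore self-correlation
    contaminated = (corr > threshold).any(axis=1)
    return corr, contaminated

# toy usage: inject leakage from gate 29 into gate 30 and flag it
rng = np.random.default_rng(0)
clean = rng.standard_normal((64, 256)) + 1j * rng.standard_normal((64, 256))
echo = clean.copy()
echo[30] += 0.8 * clean[29]
corr, flags = flag_sidelobe_contamination(echo)
```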

Relevance:

30.00%

Publisher:

Abstract:

Many key economic and financial series are bounded either by construction or through policy controls. Conventional unit root tests are potentially unreliable in the presence of bounds, since they tend to over-reject the null hypothesis of a unit root, even asymptotically. So far, very little work has been undertaken to develop unit root tests which can be applied to bounded time series. In this paper we address this gap in the literature by proposing unit root tests which are valid in the presence of bounds. We present new augmented Dickey–Fuller type tests as well as new versions of the modified ‘M’ tests developed by Ng and Perron [Ng, S., Perron, P., 2001. Lag length selection and the construction of unit root tests with good size and power. Econometrica 69, 1519–1554] and demonstrate how these tests, combined with a simulation-based method to retrieve the relevant critical values, make it possible to control size asymptotically. A Monte Carlo study suggests that the proposed tests perform well in finite samples. Moreover, the tests outperform the Phillips–Perron type tests originally proposed in Cavaliere [Cavaliere, G., 2005. Limited time series with a unit root. Econometric Theory 21, 907–945]. An illustrative application to U.S. interest rate data is provided.
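
A sketch of the simulation-based route to critical values for a bounded series: simulate a clamped random walk under the unit root null, compute the test statistic for each replication, and read off the empirical quantile. The statistic here is the standard ADF t-statistic from statsmodels rather than the paper's modified tests, and the bound handling is deliberately crude; all names are illustrative.

```python
import numpy as np
from statsmodels.tsa.stattools import adfuller

def bounded_random_walk(n, lower, upper, sigma, rng):
    """Random walk clamped at the bounds: a simple stand-in for a bounded I(1) process."""
    x = np.empty(n)
    x[0] = 0.5 * (lower + upper)
    for t in range(1, n):
        step = x[t - 1] + sigma * rng.standard_normal()
        x[t] = min(max(step, lower), upper)
    return x

def simulated_critical_value(n, lower, upper, sigma, reps=2000, alpha=0.05, seed=0):
    """Empirical left-tail critical value of the ADF t-statistic under the bounded null."""
    rng = np.random.default_rng(seed)
    stats = [adfuller(bounded_random_walk(n, lower, upper, sigma, rng),
                      regression="c", autolag="AIC")[0] for _ in range(reps)]
    return np.quantile(stats, alpha)

# usage: compare the observed ADF statistic with the bound-adjusted critical value
# observed = adfuller(my_bounded_series, regression="c", autolag="AIC")[0]
# reject_unit_root = observed < simulated_critical_value(len(my_bounded_series),
#                                                        lower=0.0, upper=0.1, sigma=0.002)
```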

Relevance:

30.00%

Publisher:

Abstract:

There are now many reports of imaging experiments with small cohorts of typical participants that precede large-scale, often multicentre studies of psychiatric and neurological disorders. Data from these calibration experiments are sufficient to make estimates of statistical power and predictions of sample size and minimum observable effect sizes. In this technical note, we suggest how previously reported voxel-based power calculations can support decision making in the design, execution and analysis of cross-sectional multicentre imaging studies. The choice of MRI acquisition sequence, distribution of recruitment across acquisition centres, and changes to the registration method applied during data analysis are considered as examples. The consequences of modification are explored in quantitative terms by assessing the impact on sample size for a fixed effect size and detectable effect size for a fixed sample size. The calibration experiment dataset used for illustration was a precursor to the now complete Medical Research Council Autism Imaging Multicentre Study (MRC-AIMS). Validation of the voxel-based power calculations is made by comparing the predicted values from the calibration experiment with those observed in MRC-AIMS. The effect of non-linear mappings during image registration to a standard stereotactic space on the prediction is explored with reference to the amount of local deformation. In summary, power calculations offer a validated, quantitative means of making informed choices on important factors that influence the outcome of studies that consume significant resources.
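
A minimal example of the kind of trade-off the note quantifies, using a generic two-sample t-test power calculation from statsmodels: solve for the per-group sample size at a fixed effect size, or for the minimum detectable effect at a fixed sample size. This is only an illustration, not the voxel-based procedure itself.

```python
# Generic power calculations with statsmodels; effect sizes are Cohen's d.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# sample size per group needed to detect a medium effect (d = 0.5) at 80% power
n_per_group = power_calc.solve_power(effect_size=0.5, alpha=0.05, power=0.8)

# minimum detectable effect size with 40 participants per group
min_effect = power_calc.solve_power(nobs1=40, alpha=0.05, power=0.8)

print(f"n per group for d = 0.5: {n_per_group:.1f}")
print(f"detectable d with n = 40 per group: {min_effect:.2f}")
```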

Relevance:

30.00%

Publisher:

Abstract:

Three rapid, poleward bursts of plasma flow, observed by the U.K.-POLAR EISCAT experiment, are studied in detail. In all three cases the large ion velocities (> 1 km s⁻¹) are shown to drive the ion velocity distribution into a non-Maxwellian form, identified by the characteristic shape of the observed spectra and the fact that analysis of the spectra with the assumption of a Maxwellian distribution leads to excessive rises in apparent ion temperature, and an anticorrelation of apparent electron and ion temperatures. For all three periods the total scattered power is shown to rise with apparent ion temperature by up to 6 dB more than is expected for an isotropic Maxwellian plasma of constant density and by an even larger factor than that expected for non-thermal plasma. The anomalous increases in power are only observed at the lower altitudes (< 300 km). At greater altitudes the rise in power is roughly consistent with that simulated numerically for homogeneous, anisotropic, non-Maxwellian plasma of constant density, viewed using the U.K.-POLAR aspect angle. The spectra at times of anomalously high power are found to be asymmetric, showing an enhancement near the downward Doppler-shifted ion-acoustic frequency. Although it is not possible to eliminate completely rapid plasma density fluctuations as a cause of these power increases, such effects cannot explain the observed spectra and the correlation of power and apparent ion temperature without an unlikely set of coincidences. The observations are made along a beam direction which is as much as 16.5° from orthogonality with the geomagnetic field. Nevertheless, some form of coherent-like echo contamination of the incoherent scatter spectrum is the most satisfactory explanation of these data.

Relevance:

30.00%

Publisher:

Abstract:

Industrial robotic manipulators can be found in most factories today. Their tasks are accomplished through actively moving, placing and assembling parts. This movement is facilitated by actuators that apply a torque in response to a command signal. The presence of friction, and possibly backlash, has instigated the development of sophisticated compensation and control methods in order to achieve the desired performance, be that accurate motion tracking, fast movement or indeed contact with the environment. This thesis presents a dual drive actuator design that is capable of physically linearising friction and hence eliminating the need for complex compensation algorithms. A number of mathematical models are derived that allow for the simulation of the actuator dynamics. The actuator may be constructed using geared dc motors, in which case the benefit of torque magnification is retained whilst the increased non-linear friction effects are also linearised. An additional benefit of the actuator is the high-quality, low-latency output position signal provided by the differencing of the two drive positions. Due to this and the linearised nature of friction, the actuator is well suited to low-velocity, stop-start applications, micro-manipulation and even hard-contact tasks. There are, however, disadvantages to its design. When idle, the device uses power whilst many other, single drive actuators do not. Also, the complexity of the models means that parameterisation is difficult. Management of start-up conditions still poses a challenge.