916 results for independent random variables with a common density
Abstract:
Kelp forests represent some of the most productive and diverse habitats on Earth. Understanding drivers of ecological patterns at large spatial scales is critical for effective management and conservation of marine habitats. We surveyed kelp forests dominated by Laminaria hyperborea (Gunnerus) Foslie 1884 across 9° latitude and >1000 km of coastline and measured a number of physical parameters at multiple scales to link ecological structure and standing stock of carbon with environmental variables. Kelp density, biomass, morphology and age were generally greater in exposed sites within regions, highlighting the importance of wave exposure in structuring L. hyperborea populations. At the regional scale, wave-exposed kelp canopies in the cooler regions (the north and west of Scotland) were greater in biomass, height and age than in warmer regions (southwest Wales and England). The range and maximal values of estimated standing stock of carbon contained within kelp forests were greater than in historical studies, suggesting that this ecosystem property may have been previously undervalued. Kelp canopy density was positively correlated with large-scale wave fetch and fine-scale water motion, whereas kelp canopy biomass and the standing stock of carbon were positively correlated with large-scale wave fetch and light levels and negatively correlated with temperature. As light availability and summer temperature were important drivers of kelp forest biomass, effective management of human activities that may affect coastal water quality is necessary to maintain ecosystem functioning, while increased temperatures related to anthropogenic climate change may impact the structure of kelp forests and the ecosystem services they provide.
Abstract:
In the recent past, one of the main concerns of research in the field of Hypercomplex Function Theory in Clifford Algebras has been the development of a variety of new tools for a deeper understanding of its true elementary roots in the Function Theory of one Complex Variable. The study of the space of monogenic (Clifford holomorphic) functions through its stratification via homogeneous monogenic polynomials is therefore a useful tool. In this paper we consider the structure of those polynomials of four real variables with binomial expansion. This allows a complete characterization of sequences of 4D generalized monogenic Appell polynomials by three different types of polynomials. A particularly important case is that of monogenic polynomials which are simply isomorphic to the integer powers of one complex variable and are therefore also called pseudo-complex powers.
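For context, the classical Appell property that these monogenic sequences generalize is worth recalling; the hypercomplex-derivative form below is the standard one in Clifford analysis, not a formula quoted from this paper:

```latex
% Classical Appell sequence: P_n has degree n and
P_n'(x) = n\,P_{n-1}(x), \qquad n \ge 1.
% Monogenic Appell sequences satisfy the analogue with the hypercomplex
% derivative \tfrac{1}{2}\overline{\partial} in place of d/dx:
\tfrac{1}{2}\overline{\partial}\,\mathcal{P}_n = n\,\mathcal{P}_{n-1}.
```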
Resumo:
This paper considers a stochastic SIR (susceptible-infective-removed) epidemic model in which individuals may make infectious contacts in two ways, both within 'households' (which for ease of exposition are assumed to have equal size) and along the edges of a random graph describing additional social contacts. Heuristically-motivated branching process approximations are described, which lead to a threshold parameter for the model and methods for calculating the probability of a major outbreak, given few initial infectives, and the expected proportion of the population who are ultimately infected by such a major outbreak. These approximate results are shown to be exact as the number of households tends to infinity by proving associated limit theorems. Moreover, simulation studies indicate that these asymptotic results provide good approximations for modestly-sized finite populations. The extension to unequal sized households is discussed briefly.
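A minimal discrete-time (Reed-Frost style) sketch can make the two-level mixing idea concrete: households plus a random graph of additional social contacts. All parameter values and the Erdős-Rényi choice of graph are illustrative assumptions, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy two-level mixing SIR: m equal-sized households plus extra contacts
# along an Erdos-Renyi graph (a stand-in for a general random graph).
m, h = 500, 4                      # households and household size (assumed)
n = m * h
p_house, p_graph = 0.4, 0.1        # per-generation infection probs (assumed)
household = np.repeat(np.arange(m), h)

A = rng.random((n, n)) < 3.0 / n   # ~3 graph neighbours each, on average
A = np.triu(A, 1)
A = A | A.T                        # symmetric adjacency, no self-loops

state = np.zeros(n, dtype=int)     # 0 = S, 1 = I, 2 = R
state[rng.integers(n)] = 1         # a single initial infective

while (state == 1).any():          # Reed-Frost generations
    escape = np.ones(n)            # prob. of escaping infection this step
    for i in np.where(state == 1)[0]:
        contact = np.where(household == household[i], p_house, 0.0)
        contact = np.maximum(contact, np.where(A[i], p_graph, 0.0))
        escape *= 1.0 - contact
    newly = (state == 0) & (rng.random(n) > escape)
    state[state == 1] = 2          # infectives recover after one generation
    state[newly] = 1

print("final proportion infected:", (state == 2).sum() / n)
```

Running the sketch repeatedly from a single initial infective shows the bimodal outcome the branching process approximation predicts: either a handful of cases or a major outbreak.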
Abstract:
Random Walk with Restart (RWR) is an appealing measure of proximity between nodes based on graph structure. Since real graphs are often large and subject to minor changes, it is prohibitively expensive to recompute proximities from scratch. Previous methods use LU decomposition and degree-reordering heuristics, entailing O(|V|^3) time and O(|V|^2) memory to compute all |V|^2 pairs of node proximities in a static graph. In this paper, a dynamic scheme to assess RWR proximities is proposed: (1) For a unit update, we characterize the changes to all-pairs proximities as the outer product of two vectors. We notice that the multiplication of an RWR matrix and its transition matrix, unlike traditional matrix multiplications, is commutative. This can greatly reduce the computation of all-pairs proximities from O(|V|^3) to O(|delta|) time for each update without loss of accuracy, where |delta| (<< |V|^2) is the number of affected proximities. (2) To avoid O(|V|^2) memory for all pairs of outputs, we also devise efficient partitioning techniques for our dynamic model, which can compute all pairs of proximities segment-wise within O(l|V|) memory and O(|V|/l) I/O costs, where 1 <= l <= |V| is a user-controlled trade-off between memory and I/O costs. (3) For bulk updates, we also devise aggregation and hashing methods, which can further discard many unnecessary updates and handle chunks of unit updates simultaneously. Our experimental results on various datasets demonstrate that our methods can be 1-2 orders of magnitude faster than other competitors while securing scalability and exactness.
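To illustrate why a unit update perturbs the proximity matrix by only a rank-1 (outer-product) term, here is a minimal dense-matrix sketch using the Sherman-Morrison identity. It follows the common RWR definition Q = (1-c)(I - cW)^{-1} with restart probability 1-c and column-stochastic W; the paper's scheme is far more memory-efficient than this toy:

```python
import numpy as np

def rwr_all_pairs(W, c=0.85):
    """All-pairs RWR proximities Q = (1-c) * (I - c*W)^{-1}
    for a column-stochastic transition matrix W."""
    n = W.shape[0]
    return (1 - c) * np.linalg.inv(np.eye(n) - c * W)

def unit_update(Q, W, j, new_col_j, c=0.85):
    """Refresh Q after column j of W changes (e.g. one edge is added).
    The change to (I - c*W) is rank-1, so by Sherman-Morrison the change
    to Q is the outer product of two vectors -- no full re-inversion."""
    u = c * (new_col_j - W[:, j])   # (I - c*W) changes by -u e_j^T
    Ainv = Q / (1 - c)
    v = Ainv @ u                    # left vector
    row = Ainv[j, :]                # right vector
    Ainv_new = Ainv + np.outer(v, row) / (1.0 - row @ u)
    return (1 - c) * Ainv_new
```

A full rebuild of Q after the change agrees with unit_update up to floating-point error, which makes the rank-1 structure easy to sanity-check on small graphs.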
Abstract:
A beam-column resting on a continuous Winkler foundation and discrete elastic supports is considered. The beam-column is of variable cross-section, and the variation of sectional properties along its axis is deterministic. Young's modulus, mass per unit length and distributed axial loadings of the beam-column have a stochastic distribution. The foundation stiffness coefficient of the Winkler model, the stiffnesses of the discrete elastic supports, the stiffnesses of the end springs and the end thrust are all considered as random parameters. The material property fluctuations and distributed axial loadings are considered to constitute independent, one-dimensional, univariate, homogeneous real stochastic fields in space. The foundation stiffness coefficient, the stiffnesses of the discrete elastic supports, the stiffnesses of the end springs and the end thrust are considered to constitute independent random variables. The static response, free vibration and stability behaviour of the beam-column are studied. Hamilton's principle is used to formulate the problem using the stochastic FEM. Sensitivity vectors of the response and stability parameters are evaluated. Using these, the statistics of free vibration frequencies, mode shapes, buckling parameters, etc. are evaluated. A numerical example is given.
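For reference, the variational statement invoked in the formulation is the extended Hamilton's principle (generic form, not the paper's specific functional):

```latex
\delta \int_{t_1}^{t_2} (T - \Pi)\,dt \;+\; \int_{t_1}^{t_2} \delta W_{nc}\,dt \;=\; 0,
```

where T is the kinetic energy, \Pi the total potential energy (strain energy plus the energy stored in the Winkler foundation and the discrete and end springs), and \delta W_{nc} the virtual work of non-conservative loads.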
Abstract:
The problem of identifying parameters of nonlinear vibrating systems using spatially incomplete, noisy, time-domain measurements is considered. The problem is formulated within the framework of dynamic state estimation formalisms that employ particle filters. The parameters of the system to be identified are treated as a set of random variables with a finite number of discrete states. The study develops a procedure that combines a bank of self-learning particle filters with a global iteration strategy to estimate the probability distribution of the system parameters to be identified. The individual particle filters are based on the sequential importance sampling filter algorithm readily available in the existing literature. The paper develops the requisite recursive formulations for evaluating the evolution of weights associated with the system parameter states. The correctness of the formulations is demonstrated first by applying the proposed procedure to a few linear vibrating systems for which an alternative solution using the adaptive Kalman filter method is possible. Subsequently, illustrative examples on three nonlinear vibrating systems, using synthetic vibration data, are presented to demonstrate the correct functioning of the method.
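A toy analogue of the weight-update recursion over discrete parameter states: below, a scalar linear system is used so that each parameter state can carry its own exact Kalman filter (the paper uses particle filters to handle the nonlinear cases); all numbers are illustrative, not from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear system: x_t = a*x_{t-1} + w, y_t = x_t + v, with true a = 0.8
a_true, q, r, T = 0.8, 0.1, 0.2, 200
x, ys = 0.0, []
for _ in range(T):
    x = a_true * x + rng.normal(0, np.sqrt(q))
    ys.append(x + rng.normal(0, np.sqrt(r)))

# Discrete parameter states for a, each paired with its own Kalman filter
grid = np.linspace(0.5, 1.0, 26)
w = np.full(grid.size, 1.0 / grid.size)      # weights over parameter states
m = np.zeros(grid.size)                      # per-state posterior mean of x
P = np.ones(grid.size)                       # per-state posterior variance

for y in ys:
    mp, Pp = grid * m, grid**2 * P + q       # predict, per state
    S = Pp + r                               # innovation variance
    like = np.exp(-0.5 * (y - mp)**2 / S) / np.sqrt(2 * np.pi * S)
    w = w * like
    w /= w.sum()                             # recursive weight update
    K = Pp / S                               # Kalman gain
    m, P = mp + K * (y - mp), (1 - K) * Pp   # update, per state

print("posterior mean of a:", (w * grid).sum())
```

The printed posterior mean concentrates near the true value 0.8 as more measurements arrive, mirroring how the weights on the parameter states evolve in the paper's recursion.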
Abstract:
In contemporary wideband orthogonal frequency division multiplexing (OFDM) systems, such as Long Term Evolution (LTE) and WiMAX, different subcarriers over which a codeword is transmitted may experience different signal-to-noise ratios (SNRs). Thus, adaptive modulation and coding (AMC) in these systems is driven by a vector of subcarrier SNRs experienced by the codeword, and is more involved. Exponential effective SNR mapping (EESM) simplifies the problem by mapping this vector into a single equivalent flat-fading SNR. Analysis of AMC using EESM is challenging owing to its non-linear nature and its dependence on the modulation and coding scheme. We first propose a novel statistical model for the EESM, which is based on the Beta distribution. It is motivated by the central limit approximation for random variables with a finite support. It is simpler and as accurate as the more involved ad hoc models proposed earlier. Using it, we develop novel expressions for the throughput of a point-to-point OFDM link with multi-antenna diversity that uses EESM for AMC. We then analyze a general, multi-cell OFDM deployment with co-channel interference for various frequency-domain schedulers. Extensive results based on LTE and WiMAX are presented to verify the model and analysis, and gain new insights.
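The EESM compression itself is a one-line formula; a minimal sketch follows (the calibration constant beta is modulation-and-coding dependent, and the value used below is just a placeholder):

```python
import numpy as np

def eesm(snr, beta):
    """Exponential effective SNR mapping: compress a vector of per-subcarrier
    SNRs (linear scale) into one equivalent flat-fading SNR."""
    snr = np.asarray(snr, dtype=float)
    return -beta * np.log(np.mean(np.exp(-snr / beta)))

# Example: Rayleigh-faded subcarrier SNRs with mean 10 (linear), beta assumed 5.0
rng = np.random.default_rng(0)
gains = rng.exponential(scale=1.0, size=600)
print(eesm(10.0 * gains, beta=5.0))
```

Because the mapping averages exp(-SNR/beta) over finitely many subcarriers, the effective SNR is a bounded average of i.i.d.-like terms, which is the intuition behind the Beta-distribution model proposed in the abstract.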
Abstract:
Climate change has resulted in substantial variations in annual extreme rainfall quantiles across different durations and return periods. Predicting future changes in extreme rainfall quantiles is essential for various water resources design, assessment and decision-making purposes. Current predictions of future rainfall extremes, however, exhibit large uncertainties. According to extreme value theory, rainfall extremes are random variables whose distributions vary with duration and return period; there are therefore uncertainties even under current climate conditions. Regarding future conditions, our large-scale knowledge is obtained using global climate models forced with certain emission scenarios. There are widely known deficiencies in climate models, particularly with respect to precipitation projections, and the limitations of emission scenarios in representing future global change are also well recognized. Apart from these large-scale uncertainties, downscaling methods add further uncertainty to estimates of future extreme rainfall when they convert the larger-scale projections to the local scale. The aim of this research is to address these uncertainties in future projections of extreme rainfall of different durations and return periods. We combined 3 emission scenarios with 2 global climate models and used LARS-WG, a well-known weather generator, to stochastically downscale the daily climate model projections for the city of Saskatoon, Canada, to 2100. The downscaled projections were further disaggregated into hourly resolution using our new stochastic and non-parametric rainfall disaggregator. The extreme rainfall quantiles can then be identified for different durations (1-hour, 2-hour, 4-hour, 6-hour, 12-hour, 18-hour and 24-hour) and return periods (2-year, 10-year, 25-year, 50-year, 100-year) using the Generalized Extreme Value (GEV) distribution. By providing multiple realizations of future rainfall, we attempt to measure the extent of the total predictive uncertainty contributed by climate models, emission scenarios, and the downscaling/disaggregation procedures. The results show different proportions of these contributors for different durations and return periods.
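Once the disaggregated series are in hand, the quantile step is standard GEV fitting; a minimal sketch with SciPy, with synthetic data standing in for the downscaled annual maxima:

```python
import numpy as np
from scipy.stats import genextreme

rng = np.random.default_rng(0)
annual_max = rng.gumbel(loc=20.0, scale=5.0, size=60)  # stand-in annual maxima (mm)

params = genextreme.fit(annual_max)                    # shape, loc, scale
for T in (2, 10, 25, 50, 100):
    q = genextreme.ppf(1.0 - 1.0 / T, *params)         # T-year return level
    print(f"{T}-year quantile: {q:.1f} mm")
```

Repeating the fit over many downscaled realizations, one per climate model / emission scenario / disaggregation run, yields the spread of quantiles that the study uses to apportion the predictive uncertainty.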
Abstract:
Bounds on the distribution function of the sum of two random variables with known marginal distributions obtained by Makarov (1981) can be used to bound the cumulative distribution function (c.d.f.) of individual treatment effects. Identification of the distribution of individual treatment effects is important for policy purposes if we are interested in functionals of that distribution, such as the proportion of individuals who gain from the treatment and the expected gain from the treatment for these individuals. Makarov bounds on the c.d.f. of the individual treatment effect distribution are pointwise sharp, i.e. they cannot be improved in any single point of the distribution. We show that the Makarov bounds are not uniformly sharp. Specifically, we show that the Makarov bounds on the region that contains the c.d.f. of the treatment effect distribution in two (or more) points can be improved, and we derive the smallest set for the c.d.f. of the treatment effect distribution in two (or more) points. An implication is that the Makarov bounds on a functional of the c.d.f. of the individual treatment effect distribution are not best possible.
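Pointwise in delta, the Makarov bounds are a sup/inf over the marginal c.d.f.s alone; a small numerical sketch with assumed Gaussian marginals:

```python
import numpy as np
from scipy.stats import norm

def makarov(F1, F0, delta, grid):
    """Pointwise Makarov bounds on P(Y1 - Y0 <= delta) from the marginals:
    lower = sup_y max(F1(y) - F0(y - delta), 0),
    upper = 1 + inf_y min(F1(y) - F0(y - delta), 0)."""
    d = F1(grid) - F0(grid - delta)
    return max(d.max(), 0.0), 1.0 + min(d.min(), 0.0)

grid = np.linspace(-10, 10, 4001)
F1 = lambda y: norm.cdf(y, loc=1.0, scale=1.0)   # assumed treated-outcome c.d.f.
F0 = lambda y: norm.cdf(y, loc=0.0, scale=1.0)   # assumed control-outcome c.d.f.
lo, hi = makarov(F1, F0, delta=0.5, grid=grid)
print(lo, hi)
```

The paper's point is that applying this pointwise construction at two or more values of delta jointly is conservative: the set of c.d.f.s consistent with the marginals at several points is strictly smaller than the product of the pointwise intervals.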
Abstract:
Let P be a probability distribution on q-dimensional space. The so-called Diaconis-Freedman effect means that for a fixed dimension d, most d-dimensional projections of P look like a scale mixture of spherically symmetric Gaussian distributions, provided that the dimension q is large. This phenomenon is studied in an asymptotic framework with increasing dimension q. It turns out that the conditions formulated by Diaconis and Freedman (1984) are not only sufficient but necessary as well. Moreover, letting P̂ be the empirical distribution of n independent random vectors with distribution P, we investigate the behavior of the empirical process √n(P̂ − P) under random projections, conditional on P̂.
Abstract:
For non-negative random variables with finite means we introduce an analogue of the equilibrium residual-lifetime distribution based on the quantile function. This allows us to construct new distributions with support (0, 1), and to obtain a new quantile-based version of the probabilistic generalization of Taylor's theorem. Similarly, for pairs of stochastically ordered random variables we arrive at a new quantile-based form of the probabilistic mean value theorem. The latter involves a distribution that generalizes the Lorenz curve. We investigate the special case of proportional quantile functions and apply the given results to various models based on classes of distributions and measures of risk theory. Motivated by some stochastic comparisons, we also introduce the "expected reversed proportional shortfall order" and a new characterization of random lifetimes involving the reversed hazard rate function.
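For orientation, the classical (distribution-function) objects that the paper recasts in quantile terms are the equilibrium residual-lifetime density and the probabilistic mean value theorem; the standard forms are:

```latex
% Equilibrium residual-lifetime density of a non-negative X with c.d.f. F:
f_e(x) = \frac{1 - F(x)}{\mathbb{E}[X]}, \qquad x > 0.
% Probabilistic mean value theorem (for X \le_{st} Y and differentiable g):
\mathbb{E}[g(Y)] - \mathbb{E}[g(X)]
  = \mathbb{E}[g'(Z)]\,\bigl(\mathbb{E}[Y] - \mathbb{E}[X]\bigr),
\qquad
f_Z(z) = \frac{F_X(z) - F_Y(z)}{\mathbb{E}[Y] - \mathbb{E}[X]}.
```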
Abstract:
The purpose of this work is to argue that engineers can be motivated to study statistical concepts through applications from their own experience that connect with statistical ideas. The main idea is to take data from a manufacturing facility (for example, output from a CMM machine) and point out that parts are used in production even when they do not meet exact specifications. By graphing the data one can show that the error is random but follows a distribution; that is, there is regularity in the data in the statistical sense. As the error distribution is continuous, we advocate that the concept of randomness be introduced starting with continuous random variables, with probabilities connected with areas under the density. Discrete random variables are then introduced in terms of decisions connected with the size of the errors, before generalizing to the abstract concept of probability. Using software, students can then be motivated to study statistical analysis of the data they encounter and the use of this analysis to make engineering and management decisions.
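A sketch of the kind of classroom exercise described: simulated CMM-style measurement errors, a fitted density, and a probability read off as an area under it (all numbers are invented for illustration):

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
errors = rng.normal(loc=0.0, scale=0.02, size=500)   # simulated part errors (mm)

mu, sigma = norm.fit(errors)                         # fit a density to the data
tol = 0.05                                           # assumed tolerance (mm)
within = norm.cdf(tol, mu, sigma) - norm.cdf(-tol, mu, sigma)
print(f"P(|error| < {tol} mm) = {within:.3f}  (area under the fitted density)")
```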
Abstract:
The Dirichlet distribution is a multivariate generalization of the Beta distribution and an important multivariate continuous distribution in probability and statistics. In this report, we review the Dirichlet distribution and study its properties, including statistical and information-theoretic quantities involving this distribution. Relationships between the Dirichlet distribution and other distributions are also discussed. There are several ways to generate random variables with a Dirichlet distribution; the stick-breaking approach and the Pólya urn method are discussed. In Bayesian statistics, the Dirichlet distribution and the generalized Dirichlet distribution can both serve as a conjugate prior for the Multinomial distribution. The Dirichlet distribution has many applications in different fields. We focus on the unsupervised learning of a finite mixture model based on the Dirichlet distribution. The Initialization Algorithm and the Dirichlet Mixture Estimation Algorithm are both reviewed for estimating the parameters of a Dirichlet mixture. Experimental results are shown for three tasks: the estimation of artificial histograms, the summarization of image databases, and human skin detection.
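A sketch of the stick-breaking construction mentioned above, for a finite Dirichlet(alpha_1, ..., alpha_K): at each step, break off a Beta-distributed fraction of the remaining stick,

```python
import numpy as np

def dirichlet_stick_breaking(alpha, rng=None):
    """Draw one sample from Dirichlet(alpha) by stick-breaking:
    V_k ~ Beta(alpha_k, sum_{j>k} alpha_j), X_k = V_k * remaining stick."""
    rng = np.random.default_rng() if rng is None else rng
    alpha = np.asarray(alpha, dtype=float)
    x, remaining = np.zeros(alpha.size), 1.0
    for k in range(alpha.size - 1):
        v = rng.beta(alpha[k], alpha[k + 1:].sum())
        x[k] = v * remaining
        remaining *= 1.0 - v
    x[-1] = remaining                 # the last component takes what is left
    return x

print(dirichlet_stick_breaking([2.0, 3.0, 5.0]))
```

The same construction with the stick broken infinitely often underlies the Dirichlet process; in the finite case here it is exactly equivalent to the usual normalized-Gamma sampler.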