908 results for Geometric Probability


Relevance:

60.00%

Publisher:

Abstract:

A new approach is proposed to simulate splash erosion on local soil surfaces. In the absence of wind and other raindrops, the impact of each free-falling raindrop was treated as an independent event from the stochastic viewpoint. The erosivity of a single raindrop, which depends on its kinetic energy, was computed by an empirical relationship in which the kinetic energy was expressed as a power function of the equivalent diameter of the raindrop. An empirical linear function combining the kinetic energy and soil shear strength was used to estimate the amount of soil particles detached by a single raindrop. Considering an ideal local soil surface with a size of 1 m x 1 m, the expected number of free-falling raindrops with different diameters received per unit time was described by combining the raindrop size distribution function with the terminal velocity of raindrops. The total splash amount was taken as the sum of the amounts detached by all raindrops in the rainfall event. The total splash amount per unit time was subdivided into three components: net splash amount, single impact amount and re-detachment amount. The re-detachment amount was obtained from a spatial geometric probability derived using the Poisson function, in which overlapping impact areas were considered. The net splash amount was defined as the mass of soil particles collected outside the splash dish. It was estimated by another spatial geometric probability in which the average splash distance, related to the median grain size of the soil, and the effects of other impacted soil particles and other free-falling raindrops were considered. Splash experiments under artificial rainfall were carried out to validate the applicability and accuracy of the model. Our simulated results suggested that the net splash amount and re-detachment amount were small parts of the total splash amount; their proportions were 0.15% and 2.6%, respectively.
The comparison of simulated data with measured data showed that this model can successfully simulate the soil-splash process, and that it needs only information on the rainfall intensity and the original soil properties, including initial bulk density, water content, median grain size and some empirical constants related to the soil surface shear strength, the raindrop size distribution function and the average splash distance. Copyright (c) 2007 John Wiley & Sons, Ltd.
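The Poisson-overlap idea behind the re-detachment component can be sketched in a few lines. This is only an illustration under assumed forms: the power-law coefficients `a` and `b` and the rate parameterization are placeholders, not the paper's fitted values.

```python
import math

def raindrop_kinetic_energy(d_mm, a=0.895, b=2.44):
    """Kinetic energy as a power function of the equivalent raindrop
    diameter (mm); a and b are hypothetical placeholder coefficients."""
    return a * d_mm ** b

def redetachment_probability(impact_rate, impact_area, duration):
    """Poisson sketch: lam is the expected number of impacts covering a
    surface point; re-detachment needs at least two overlapping impacts,
    so P = 1 - e^{-lam}(1 + lam)."""
    lam = impact_rate * impact_area * duration
    return 1.0 - math.exp(-lam) * (1.0 + lam)
```

For small `lam` this probability behaves like `lam**2 / 2`, which is why the re-detachment share of the total splash amount stays small at moderate rainfall intensities.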

Relevance:

60.00%

Publisher:

Abstract:

The critical process parameter for mineral separation is the degree of mineral liberation achieved by comminution. The degree of liberation provides an upper limit of efficiency for any physical separation process. The standard approach to measuring mineral liberation uses mineralogical analysis based on two-dimensional sections of particles, which may be acquired using a scanning electron microscope and back-scatter electron analysis, or from an analysis of an image acquired using an optical microscope. Over the last 100 years, mathematical techniques have been developed to use this two-dimensional information to infer three-dimensional information about the particles. For mineral processing, a particle that contains more than one mineral (a composite particle) may appear to be liberated (contain only one mineral) when analysed using only its revealed particle section. The mathematical techniques used to interpret three-dimensional information belong to a branch of mathematics called stereology. However, methods to obtain the full mineral liberation distribution of particles from particle sections are relatively new. To verify these adjustment methods, we require an experimental method which can accurately measure both sectional and three-dimensional properties. Micro cone beam tomography provides such a method for suitable particles and hence provides a way to validate methods used to convert two-dimensional measurements to three-dimensional estimates. For this study, ore particles from a well-characterised sample were subjected to conventional mineralogical analysis (using particle sections) to estimate three-dimensional properties of the particles. A subset of these particles was analysed using a micro cone beam tomograph. This paper presents a comparison of the three-dimensional properties predicted from measured two-dimensional sections with the measured three-dimensional properties.
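The stereological bias described above (composite particles appearing liberated in section) is easy to demonstrate with a toy model, not taken from the paper: a spherical particle that is a 50/50 composite (mineral A in the upper hemisphere, B in the lower), cut by isotropic uniform random planes.

```python
import math
import random

def apparently_liberated_fraction(trials=20000, seed=1):
    """Fraction of random plane sections of a 50/50 composite unit sphere
    (mineral boundary at z = 0) that show only one mineral, i.e. appear
    liberated in 2-D even though the particle is composite in 3-D."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        # Uniform direction via normalized Gaussians; uniform signed offset.
        n = [rng.gauss(0.0, 1.0) for _ in range(3)]
        norm = math.sqrt(sum(v * v for v in n))
        nz = n[2] / norm                       # z-component of plane normal
        p = rng.uniform(-1.0, 1.0)             # signed distance to centre
        rho = math.sqrt(max(0.0, 1.0 - p * p)) # radius of the section disk
        zc = p * nz                            # z of the disk centre
        halfspan = rho * math.sqrt(max(0.0, 1.0 - nz * nz))  # z-extent
        if zc - halfspan > 0.0 or zc + halfspan < 0.0:
            hits += 1
    return hits / trials
```

Working the geometry through, a section appears liberated exactly when p^2 + nz^2 > 1, so the true fraction is 1 - pi/4, about 21.5%: more than a fifth of sections of this fully composite particle look liberated, which is the bias the stereological adjustment methods must correct.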

Relevance:

60.00%

Publisher:

Abstract:

In this paper we propose a class for introducing the teaching of probability through the disc game, which is based on the concept of geometric probability and aims to determine the probability that a randomly thrown disc does not intersect the lines of a gridded surface. The problem was posed to a 3rd-year group at the Federal Institute of Education, Science and Technology of Rio Grande do Norte - João Câmara. The students were asked to build a grid board on which the players' success percentage had been defined for them in advance. Once the grid board was built, the students checked whether that theoretically predetermined percentage corresponded to the results obtained through experimentation. The results, and the students' attitude in subsequent classes, suggested greater engagement with the discipline, making the environment conducive to learning.

Relevance:

30.00%

Publisher:

Abstract:

Let n points be placed independently in d-dimensional space according to the density f(x) = A_d e^{-λ‖x‖^α}, with λ, α > 0, x ∈ R^d, d ≥ 2. Let d_n be the longest edge length of the nearest-neighbor graph on these points. We show that (λ^{-1} log n)^{1-1/α} d_n - b_n converges weakly to the Gumbel distribution, where b_n ~ ((d - 1)/(λα)) log log n. We also prove the following strong law for the normalized nearest-neighbor distance d̃_n = (λ^{-1} log n)^{1-1/α} d_n / log log n: (d - 1)/(αλ) ≤ lim inf_{n→∞} d̃_n ≤ lim sup_{n→∞} d̃_n ≤ d/(αλ) almost surely. Thus, the exponential rate of decay α = 1 is critical, in the sense that, for α > 1, d_n → 0, whereas, for α ≤ 1, d_n → ∞ almost surely as n → ∞.
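The quantity d_n is straightforward to observe empirically. The sketch below handles only the α = 1, d = 2 case, where radial symmetry makes the radius Gamma(2, 1/λ)-distributed; it is an illustration of the setup, not the paper's analysis.

```python
import math
import random

def sample_points(n, lam=1.0, dim=2, seed=0):
    """Sample n points in R^2 from the radially symmetric density
    f(x) ∝ e^{-lam*||x||} (the alpha = 1 case): radius ~ Gamma(dim, 1/lam),
    direction uniform on the circle."""
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        r = rng.gammavariate(dim, 1.0 / lam)
        theta = rng.uniform(0.0, 2.0 * math.pi)
        pts.append((r * math.cos(theta), r * math.sin(theta)))
    return pts

def longest_nn_edge(pts):
    """d_n: the largest nearest-neighbour distance over all points."""
    def nn(i):
        return min(math.dist(pts[i], pts[j])
                   for j in range(len(pts)) if j != i)
    return max(nn(i) for i in range(len(pts)))
```

Repeating this for growing n would exhibit the α ≤ 1 divergence of d_n claimed in the abstract.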

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes on a region in Euclidean space, e.g., the unit square. After deployment, the nodes self-organise into a mesh topology. In a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this paper, we analyse the performance of this approximation. We show that nodes with a certain hop distance from a fixed anchor node lie within a certain annulus with probability approaching unity as the number of nodes n → ∞. We take a uniform, i.i.d. deployment of n nodes on a unit square, and consider the geometric graph on these nodes with radius r(n) = c√(ln n / n). We show that, for a given hop distance h of a node from a fixed anchor on the unit square, the Euclidean distance lies within [(1−ε)(h−1)r(n), h·r(n)], for ε > 0, with probability approaching unity as n → ∞. This result shows that a node with hop distance h from the anchor is more likely to lie within this annulus, centred at the anchor location and of width roughly r(n), than close to a circle whose radius is exactly proportional to h. We show that if the radius r of the geometric graph is fixed, the convergence of the probability is exponentially fast. Similar results hold for a randomised lattice deployment. We provide simulation results that illustrate the theory, and serve to show how large n needs to be for the asymptotics to be useful.
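The upper end of the annulus, h·r(n), is a deterministic consequence of the triangle inequality along any h-hop path; only the lower end needs probability. A small sketch (not the paper's simulation code) that builds the geometric graph and computes hop counts by BFS:

```python
import math
import random
from collections import deque

def hop_distances(pts, r):
    """Geometric graph on pts with radius r; BFS hop counts from node 0.
    Unreached nodes get None."""
    n = len(pts)
    adj = [[] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if math.dist(pts[i], pts[j]) <= r:
                adj[i].append(j)
                adj[j].append(i)
    hops = [None] * n
    hops[0] = 0
    q = deque([0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if hops[v] is None:
                hops[v] = hops[u] + 1
                q.append(v)
    return hops
```

Plotting Euclidean distance against hop count from the anchor (node 0) shows the points filling an annulus of width about r(n), rather than concentrating on circles of radius h·r(n).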

Relevance:

30.00%

Publisher:

Abstract:

We consider evolving exponential random geometric graphs (RGGs) in one dimension and characterize the time-dependent behavior of some of their topological properties. We consider two evolution models, studying one of them in detail while providing a summary of the results for the other. In the first model, the inter-nodal gaps evolve according to an exponential AR(1) process that makes the stationary distribution of the node locations exponential. For this model we obtain the one-step conditional connectivity probabilities and extend them to the k-step case. Finite and asymptotic analyses are given. We then obtain the k-step connectivity probability conditioned on the network being disconnected. We also derive the pmf of the first passage time for a connected network to become disconnected. We then describe a random birth-death model where, at each instant, the node locations evolve according to an AR(1) process. In addition, a random node is allowed to die while giving birth to a node at another location. We derive properties similar to those above.
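One standard way to build an AR(1) process with an exponential stationary law is the classical Gaver-Lewis EAR(1) construction, sketched below; the paper's exact gap process may differ. In one dimension the network is connected precisely when every inter-nodal gap is at most the connection radius r.

```python
import random

def ear1_step(gaps, rho, lam, rng):
    """Gaver-Lewis EAR(1) update: G' = rho*G + E, where the innovation E is
    0 with probability rho and Exp(lam) otherwise; this preserves an
    Exp(lam) stationary distribution for each gap."""
    out = []
    for g in gaps:
        e = 0.0 if rng.random() < rho else rng.expovariate(lam)
        out.append(rho * g + e)
    return out

def connected(gaps, r):
    """A 1-D network is connected iff every inter-nodal gap is <= r."""
    return all(g <= r for g in gaps)
```

Iterating `ear1_step` and recording `connected(gaps, r)` at each step gives empirical estimates of the k-step connectivity probabilities and of the first passage time to disconnection.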

Relevance:

30.00%

Publisher:

Abstract:

Given two independent Poisson point processes Φ^(1), Φ^(2) in R^d, the AB Poisson Boolean model is the graph with the points of Φ^(1) as vertices and with edges between any pair of points for which the intersection of balls of radius 2r centered at these points contains at least one point of Φ^(2). This is a generalization of the AB percolation model on discrete lattices. We show the existence of percolation for all d ≥ 2 and derive bounds for a critical intensity. We also provide a characterization for this critical intensity when d = 2. To study the connectivity problem, we consider independent Poisson point processes of intensities n and τn in the unit cube. The AB random geometric graph is defined as above, but with balls of radius r. We derive a weak law result for the largest nearest-neighbor distance and almost-sure asymptotic bounds for the connectivity threshold.
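The edge rule of the AB random geometric graph is concrete enough to code directly: x and y (both in the first process) are joined when some point of the second process lies within distance r of both. A brute-force sketch for illustration:

```python
import math

def ab_rgg(phi1, phi2, r):
    """AB random geometric graph: vertices are the points of phi1; an edge
    joins i, j when some z in phi2 is within distance r of both points."""
    edges = []
    for i in range(len(phi1)):
        for j in range(i + 1, len(phi1)):
            if any(math.dist(phi1[i], z) <= r and math.dist(phi1[j], z) <= r
                   for z in phi2):
                edges.append((i, j))
    return edges
```

Note the "2r" in the Boolean-model formulation above and the "r" here reflect the two conventions used in the abstract: balls of radius r around both endpoints must share a Φ^(2) point, which is the same as that point lying in the intersection of the two balls.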

Relevance:

30.00%

Publisher:

Abstract:

Wireless sensor networks can often be viewed in terms of a uniform deployment of a large number of nodes in a region of Euclidean space. Following deployment, the nodes self-organize into a mesh topology, with a key aspect being self-localization. Having obtained a mesh topology in a dense, homogeneous deployment, a frequently used approximation is to take the hop distance between nodes to be proportional to the Euclidean distance between them. In this work, we analyze this approximation through two complementary analyses. We assume that the mesh topology is a random geometric graph on the nodes, and that some nodes are designated as anchors with known locations. First, we obtain high probability bounds on the Euclidean distances of all nodes that are h hops away from a fixed anchor node. In the second analysis, we provide a heuristic argument that leads to a direct approximation for the density function of the Euclidean distance between two nodes that are separated by a hop distance h. This approximation is shown, through simulation, to very closely match the true density function. Localization algorithms that draw upon the preceding analyses are then proposed and shown to perform better than some of the well-known algorithms present in the literature. Belief-propagation-based message-passing is then used to further enhance the performance of the proposed localization algorithms. To our knowledge, this is the first usage of message-passing for hop-count-based self-localization.
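Once hop counts to several anchors are converted to distance estimates (e.g. roughly h times the graph radius), a position estimate can be obtained by multilateration. The linear least-squares version below is a generic illustration of that step, not the paper's algorithm.

```python
def trilaterate(anchors, dists):
    """Linear least-squares multilateration in 2-D: subtracting the first
    range equation from the others gives a linear system in (x, y), solved
    here via 2x2 normal equations to stay dependency-free."""
    (x0, y0), d0 = anchors[0], dists[0]
    A, b = [], []
    for (xi, yi), di in zip(anchors[1:], dists[1:]):
        A.append([2.0 * (xi - x0), 2.0 * (yi - y0)])
        b.append(d0**2 - di**2 + xi**2 - x0**2 + yi**2 - y0**2)
    a11 = sum(r[0] * r[0] for r in A)
    a12 = sum(r[0] * r[1] for r in A)
    a22 = sum(r[1] * r[1] for r in A)
    b1 = sum(r[0] * t for r, t in zip(A, b))
    b2 = sum(r[1] * t for r, t in zip(A, b))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)
```

With exact ranges and three non-collinear anchors the solution is exact; with hop-count-based ranges the residual error reflects the width of the annulus discussed above.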

Relevance:

30.00%

Publisher:

Abstract:

We propose a distribution-free approach to the study of random geometric graphs. The distribution of vertices follows a Poisson point process with intensity function nf(·), where n ∈ N and f is a probability density function on R^d. A vertex located at x connects via directed edges to other vertices that are within a cut-off distance r_n(x). We prove strong law results for (i) the critical cut-off function such that, almost surely, the graph does not contain any node with out-degree zero for sufficiently large n, and (ii) the maximum and minimum vertex degrees. We also provide a characterization of the cut-off function for which the number of nodes with out-degree zero converges in distribution to a Poisson random variable. We illustrate this result for a class of densities with compact support that have at most polynomial rates of decay to zero. Finally, we state a sufficient condition for an enhanced version of the above graph to be almost surely connected eventually.
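The central object here, the count of nodes with out-degree zero under a location-dependent cut-off, is simple to compute for a given point set. A brute-force sketch:

```python
import math

def out_degree_zero_count(pts, cutoff):
    """Directed RGG: x -> y when dist(x, y) <= cutoff(x). Returns the
    number of nodes with no out-neighbour (out-degree zero)."""
    n = len(pts)
    count = 0
    for i in range(n):
        r = cutoff(pts[i])
        if not any(j != i and math.dist(pts[i], pts[j]) <= r
                   for j in range(n)):
            count += 1
    return count
```

Because `cutoff` is a function of location, the same code covers both the critical cut-off regime (count eventually zero) and the Poisson-limit regime described in the abstract.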

Relevance:

30.00%

Publisher:

Abstract:

Demixing is the task of identifying multiple signals given only their sum and prior information about their structures. Examples of demixing problems include (i) separating a signal that is sparse with respect to one basis from a signal that is sparse with respect to a second basis; (ii) decomposing an observed matrix into low-rank and sparse components; and (iii) identifying a binary codeword with impulsive corruptions. This thesis describes and analyzes a convex optimization framework for solving an array of demixing problems.

Our framework includes a random orientation model for the constituent signals that ensures the structures are incoherent. This work introduces a summary parameter, the statistical dimension, that reflects the intrinsic complexity of a signal. The main result indicates that the difficulty of demixing under this random model depends only on the total complexity of the constituent signals involved: demixing succeeds with high probability when the sum of the complexities is less than the ambient dimension; otherwise, it fails with high probability.

The fact that a phase transition between success and failure occurs in demixing is a consequence of a new inequality in conic integral geometry. Roughly speaking, this inequality asserts that a convex cone behaves like a subspace whose dimension is equal to the statistical dimension of the cone. When combined with a geometric optimality condition for demixing, this inequality provides precise quantitative information about the phase transition, including the location and width of the transition region.
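The statistical dimension has a direct Monte Carlo characterization: δ(C) = E‖Π_C(g)‖² for a standard Gaussian vector g. As a sketch, for the nonnegative orthant the projection just clips negative coordinates, and δ equals d/2 exactly (each coordinate contributes E[max(g, 0)²] = 1/2):

```python
import random

def statdim_orthant(dim, samples=20000, seed=0):
    """Monte Carlo estimate of the statistical dimension of the nonnegative
    orthant in R^dim: delta(C) = E ||Proj_C(g)||^2, g standard Gaussian.
    Projection onto the orthant clips negatives to zero."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(samples):
        total += sum(max(g, 0.0) ** 2
                     for g in (rng.gauss(0.0, 1.0) for _ in range(dim)))
    return total / samples
```

This is the sense in which a cone "behaves like a subspace" of that dimension: in the demixing phase transition, it is these δ values that are summed and compared with the ambient dimension.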

Relevance:

30.00%

Publisher:

Abstract:

A well-known paradigm for load balancing in distributed systems is the "power of two choices," whereby an item is stored at the less loaded of two (or more) random alternative servers. We investigate the power of two choices in natural settings for distributed computing where items and servers reside in a geometric space and each item is associated with the server that is its nearest neighbor. This is in fact the backdrop for distributed hash tables such as Chord, where the geometric space is determined by clockwise distance on a one-dimensional ring. Theoretically, we consider the following load balancing problem. Suppose that servers are initially hashed uniformly at random to points in the space. Sequentially, each item then considers d candidate insertion points, also chosen uniformly at random from the space, and selects the insertion point whose associated server has the least load. For the one-dimensional ring, and for Euclidean distance on the two-dimensional torus, we demonstrate that when n data items are hashed to n servers, the maximum load at any server is log log n / log d + O(1) with high probability. While our results match the well-known bounds in the standard setting in which each server is selected equiprobably, our applications do not have this feature, since the sizes of the nearest-neighbor regions around servers are non-uniform. Therefore, the novelty in our methods lies in developing appropriate tail bounds on the distribution of nearest-neighbor region sizes and in adapting previous arguments to this more general setting. In addition, we provide simulation results demonstrating the load balance that results as the system size scales into the millions.
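The one-dimensional ring version of this process can be sketched in a few lines. The "owner" rule below (first server clockwise from the sampled point, as in consistent hashing) is one concrete convention; the key feature is that ownership regions are non-uniform in size.

```python
import bisect
import random

def assign_items(n_servers, n_items, d, seed=0):
    """Power-of-d-choices on a 1-D ring: each item samples d uniform points
    and joins the least-loaded server owning one of them. Returns the final
    load vector."""
    rng = random.Random(seed)
    servers = sorted(rng.random() for _ in range(n_servers))
    loads = [0] * n_servers

    def owner(x):
        # Server responsible for point x: first server clockwise from x.
        i = bisect.bisect_left(servers, x)
        return i % n_servers  # wrap around the ring

    for _ in range(n_items):
        best = min((owner(rng.random()) for _ in range(d)),
                   key=loads.__getitem__)
        loads[best] += 1
    return loads
```

Comparing `max(assign_items(n, n, 1))` with `max(assign_items(n, n, 2))` for large n illustrates the drop from the single-choice maximum load to the log log n / log d + O(1) regime.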

Relevance:

30.00%

Publisher:

Abstract:

The present study concerns the characterization of probability distributions using the residual entropy function. The concept of entropy is extensively used in the literature as a quantitative measure of the uncertainty associated with a random phenomenon. The commonly used lifetime models in reliability theory are the exponential, Pareto, beta, Weibull and gamma distributions. Several characterization theorems are obtained for the above models using reliability concepts such as the failure rate, mean residual life function, vitality function and variance residual life function. Most of the work on the characterization of distributions in the reliability context centers around the failure rate or the residual life function. An important aspect of interest in the study of entropy is that of locating distributions for which Shannon's entropy is maximum subject to certain restrictions on the underlying random variable. We introduce the geometric vitality function and examine its properties; it is established that the geometric vitality function determines the distribution uniquely. The problem of averaging the residual entropy function is examined, and truncated versions of entropies of higher order are defined. In this study it is established that the residual entropy function determines the distribution uniquely and that its constancy is characteristic of the geometric distribution.
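The constancy property has a continuous analogue worth a quick check: for an exponential lifetime, memorylessness makes the residual life beyond any t again exponential, so the residual entropy H(t) = -∫_t^∞ (f(x)/S(t)) log(f(x)/S(t)) dx is constant in t, equal to 1 - log λ. A numeric sketch (my own illustration, not the study's code):

```python
import math

def residual_entropy_numeric(pdf, sf, t, upper, steps=50000):
    """Midpoint-rule approximation of the residual entropy
    H(t) = -integral_t^upper g(x) log g(x) dx, with g = pdf / sf(t)."""
    s = sf(t)
    h = (upper - t) / steps
    total = 0.0
    for k in range(steps):
        x = t + (k + 0.5) * h
        g = pdf(x) / s
        if g > 0.0:
            total -= g * math.log(g) * h
    return total
```

Running this for Exp(λ) at several truncation points t returns the same value 1 - log λ each time, mirroring the discrete result that constancy of the residual entropy characterizes the geometric distribution.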