10 results for Point Process

in CentAUR: Central Archive University of Reading - UK


Relevance:

100.00%

Publisher:

Abstract:

We bridge the properties of the regular triangular, square, and hexagonal honeycomb Voronoi tessellations of the plane to the Poisson-Voronoi case, thus analyzing in a common framework symmetry-breaking processes and the approach to uniform random distributions of tessellation-generating points. We resort to ensemble simulations of tessellations generated by points whose regular positions are perturbed through a Gaussian noise, whose variance is given by the parameter α² times the square of the inverse of the average density of points. We analyze the number of sides, the area, and the perimeter of the Voronoi cells. For all values α > 0, hexagons constitute the most common class of cells, and 2-parameter gamma distributions provide an efficient description of the statistical properties of the analyzed geometrical characteristics. The introduction of noise destroys the triangular and square tessellations, which are structurally unstable, as their topological properties are discontinuous in α = 0. By contrast, the honeycomb hexagonal tessellation is topologically stable and, experimentally, all Voronoi cells are hexagonal for small but finite noise with α < 0.12. For all tessellations and for small values of α, we observe a linear dependence on α of the ensemble mean of the standard deviation of the area and perimeter of the cells. Already for a moderate amount of Gaussian noise (α > 0.5), memory of the specific initial unperturbed state is lost, because the statistical properties of the three perturbed regular tessellations are indistinguishable. When α > 2, results converge to those of Poisson-Voronoi tessellations. The geometrical properties of n-sided cells change with α until the Poisson-Voronoi limit is reached for α > 2; in this limit the Desch law for perimeters is shown to be not valid and a square-root dependence on n is established. This law allows for an easy link to the Lewis law for areas and agrees with exact asymptotic results.
Finally, for α > 1, the ensemble means of the cell area and perimeter restricted to the hexagonal cells agree remarkably well with the full ensemble means; this reinforces the idea that hexagons, beyond their ubiquitous numerical prominence, can be interpreted as typical polygons in 2D Voronoi tessellations.
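The setup described above can be sketched numerically. The following is a minimal illustration, not the authors' code: it perturbs a triangular lattice of generating points (whose Voronoi cells form the honeycomb hexagonal tessellation) with Gaussian noise and counts the sides of the bounded cells. The lattice size, seed, and noise scaling convention are assumptions made for illustration, and scipy is assumed to be available.

```python
# Sketch: perturb a triangular lattice of generators with Gaussian noise
# (std alpha/rho, following the abstract's variance alpha^2 / rho^2) and
# tabulate the number of sides of the resulting bounded Voronoi cells.
import numpy as np
from scipy.spatial import Voronoi

rng = np.random.default_rng(0)
alpha = 0.1   # small noise: cells should remain almost all hexagonal
n = 20        # lattice is roughly n x n points (illustrative choice)

# Triangular lattice of generating points; its Voronoi cells are hexagons.
pts = np.array([(i + 0.5 * (j % 2), j * np.sqrt(3) / 2)
                for i in range(n) for j in range(n)])
density = len(pts) / (n * n * np.sqrt(3) / 2)  # average point density
pts = pts + rng.normal(scale=alpha / density, size=pts.shape)

vor = Voronoi(pts)
# Keep only bounded cells (index -1 marks a vertex at infinity).
sides = [len(vor.regions[r]) for r in vor.point_region
         if len(vor.regions[r]) > 0 and -1 not in vor.regions[r]]
```

For small α the modal side count of the bounded cells is 6, consistent with the topological stability of the honeycomb tessellation reported above.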

Relevance:

70.00%

Publisher:

Abstract:

There are various situations in which it is natural to ask whether a given collection of k functions, ρⱼ(r₁, …, rⱼ), j = 1, …, k, defined on a set X, are the first k correlation functions of a point process on X. Here we describe some necessary and sufficient conditions on the ρⱼ's for this to be true. Our primary examples are X = ℝᵈ, X = ℤᵈ, and X an arbitrary finite set. In particular, we extend a result by Ambartzumian and Sukiasian showing realizability at sufficiently small densities ρ₁(r). Typically, if any realizing process exists there will be many (even an uncountable number); in this case we prove, when X is a finite set, the existence of a realizing Gibbs measure with k-body potentials which maximizes the entropy among all realizing measures. We also investigate in detail a simple example in which a uniform density ρ and a translation-invariant ρ₂ are specified on ℤ; there is a gap between our best upper bound on possible values of ρ and the largest ρ for which realizability can be established.

Relevance:

70.00%

Publisher:

Abstract:

We give necessary and sufficient conditions for a pair of (generalized) functions ρ₁(r₁) and ρ₂(r₁, r₂), rᵢ ∈ X, to be the density and pair correlations of some point process in a topological space X, for example ℝᵈ, ℤᵈ or a subset of these. This is an infinite-dimensional version of the classical "truncated moment" problem. Standard techniques apply in the case in which there can be only a bounded number of points in any compact subset of X. Without this restriction we obtain, for compact X, strengthened conditions which are necessary and sufficient for the existence of a process satisfying a further requirement: the existence of a finite third-order moment. We generalize the latter conditions in two distinct ways when X is not compact.

Relevance:

60.00%

Publisher:

Abstract:

The clustering in time (seriality) of extratropical cyclones is responsible for large cumulative insured losses in western Europe, though surprisingly little scientific attention has been given to this important property. This study investigates and quantifies the seriality of extratropical cyclones in the Northern Hemisphere using a point-process approach. A possible mechanism for serial clustering is the time-varying effect of the large-scale flow on individual cyclone tracks. Another mechanism is the generation by one parent cyclone of one or more offspring through secondary cyclogenesis. A long cyclone-track database was constructed for extended October–March winters from 1950 to 2003 using 6-h analyses of 850-mb relative vorticity derived from the NCEP–NCAR reanalysis. A dispersion statistic based on the variance-to-mean ratio of monthly cyclone counts was used as a measure of clustering. It reveals extensive regions of statistically significant clustering in the European exit region of the North Atlantic storm track and over the central North Pacific. Monthly cyclone counts were regressed on time-varying teleconnection indices with a log-linear Poisson model. Five independent teleconnection patterns were found to be significant factors over Europe: the North Atlantic Oscillation (NAO), the east Atlantic pattern, the Scandinavian pattern, the east Atlantic–western Russian pattern, and the polar Eurasian pattern. The NAO alone is not sufficient to explain the variability of cyclone counts in the North Atlantic region and western Europe. Rate dependence on time-varying teleconnection indices accounts for the variability in monthly cyclone counts, and a cluster process did not need to be invoked.
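The dispersion statistic mentioned above has a simple form: the sample variance of the monthly counts divided by their mean. For a Poisson process it equals 1 in expectation; values above 1 indicate serial clustering (overdispersion). A minimal sketch, with made-up count data for illustration:

```python
# Sketch: variance-to-mean ratio of monthly cyclone counts as a clustering
# measure. Values > 1 indicate overdispersion relative to a Poisson process.
import numpy as np

def dispersion(counts):
    counts = np.asarray(counts, dtype=float)
    return counts.var(ddof=1) / counts.mean()

# Hypothetical monthly counts (not from the study's database):
poisson_like = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # steady occurrence rate
clustered    = [0, 9, 1, 8, 0, 10, 1, 7, 0, 9]  # bursts of cyclones
```

Here `dispersion(clustered)` exceeds 1 while `dispersion(poisson_like)` falls below it; the study applies this statistic gridpoint by gridpoint to map regions of significant clustering.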

Relevance:

30.00%

Publisher:

Abstract:

Point defects in metal oxides such as TiO2 are key to their applications in numerous technologies. The investigation of thermally induced nonstoichiometry in TiO2 is complicated by the difficulties in preparing and determining a desired degree of nonstoichiometry. We study controlled self-doping of TiO2 by adsorption of 1/8 and 1/16 monolayer Ti at the (110) surface using a combination of experimental and computational approaches to unravel the details of the adsorption process and the oxidation state of Ti. Upon adsorption of Ti, X-ray and ultraviolet photoemission spectroscopy (XPS and UPS) show formation of reduced Ti. Comparison of pure density functional theory (DFT) with experiment shows that pure DFT provides an inconsistent description of the electronic structure. To surmount this difficulty, we apply DFT corrected for on-site Coulomb interaction (DFT+U) to describe reduced Ti ions. The optimal value of U is 3 eV, determined from comparison of the computed Ti 3d electronic density of states with the UPS data. DFT+U and UPS show the appearance of a Ti 3d adsorbate-induced state at 1.3 eV above the valence band and 1.0 eV below the conduction band. The computations show that the adsorbed Ti atom is oxidized to Ti²⁺ and a fivefold-coordinated surface Ti atom is reduced to Ti³⁺, while the remaining electron is distributed among other surface Ti atoms. The UPS data are best fitted with reduced Ti²⁺ and Ti³⁺ ions. These results demonstrate that the complexity of doped metal oxides is best understood with a combination of experiment and appropriate computations.

Relevance:

30.00%

Publisher:

Abstract:

Despite the many models developed for phosphorus-concentration prediction at differing spatial and temporal scales, there has been little effort to quantify uncertainty in their predictions. Quantifying model prediction uncertainty is desirable for informed decision-making in river-systems management. An uncertainty analysis of the process-based model, the integrated catchment model of phosphorus (INCA-P), within the generalised likelihood uncertainty estimation (GLUE) framework is presented. The framework is applied to the Lugg catchment (1,077 km²), a River Wye tributary on the England–Wales border. Daily discharge and monthly phosphorus (total reactive and total), for a limited number of reaches, are used to initially assess the uncertainty and sensitivity of 44 model parameters identified as being most important for discharge and phosphorus predictions. This study demonstrates that parameter homogeneity assumptions (spatial heterogeneity is treated as land-use-type fractional areas) can achieve higher model fits than a previous expertly calibrated parameter set. The model is capable of reproducing the hydrology, but a threshold Nash-Sutcliffe coefficient of determination (E or R²) of 0.3 is not achieved when simulating observed total phosphorus (TP) data in the upland reaches or total reactive phosphorus (TRP) in any reach. Despite this, the model reproduces the general dynamics of TP and TRP in point-source-dominated lower reaches. This paper discusses why this application of INCA-P fails to find any parameter sets which simultaneously describe all observed data acceptably. The discussion focuses on the uncertainty of readily available input data, and on whether such process-based models should be used when there is not sufficient data to support their many parameters.
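The Nash-Sutcliffe efficiency used as the GLUE acceptance threshold above compares model error against the variance of the observations: E = 1 is a perfect fit, and E ≤ 0 means the model predicts no better than the observed mean. A minimal sketch with made-up observation and simulation series:

```python
# Sketch: Nash-Sutcliffe efficiency E = 1 - SSE / SS_about_obs_mean.
# E = 1 is perfect; E <= 0 is no better than predicting the observed mean.
import numpy as np

def nash_sutcliffe(obs, sim):
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

# Illustrative data, not from the Lugg application:
obs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
good = obs + 0.1                       # slightly biased simulation
mean_model = np.full(5, obs.mean())    # constant-mean "null" model
```

With these numbers `nash_sutcliffe(obs, good)` comfortably exceeds the 0.3 threshold used in the study, while the constant-mean model scores exactly 0.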

Relevance:

30.00%

Publisher:

Abstract:

A semi-distributed model, INCA, has been developed to determine the fate and distribution of nutrients in terrestrial and aquatic systems. The model simulates nitrogen and phosphorus processes in soils, groundwaters and river systems and can be applied in a semi-distributed manner at a range of scales. In this study, the model has been applied at field, sub-catchment and whole-catchment scales to evaluate the behaviour of biosolid-derived losses of P in agricultural systems. It is shown that process-based models such as INCA, applied across a wide range of scales, reproduce field and catchment behaviour satisfactorily. The INCA model can also be used to generate generic information for risk assessment. By adjusting three key variables within the model (biosolid application rates, the hydrological connectivity of the catchment, and the initial P-status of the soils), a matrix of P loss rates can be generated to evaluate the behaviour of the model and, hence, of the catchment system. The results, which indicate the sensitivity of the catchment to flow paths, application rates and initial soil conditions, have been incorporated into a Nutrient Export Risk Matrix (NERM).

Relevance:

30.00%

Publisher:

Abstract:

In industrial practice, constrained steady-state optimisation and predictive control are separate, albeit closely related, functions within the control hierarchy. This paper presents a method which integrates predictive control with on-line optimisation with economic objectives. A receding-horizon optimal control problem is formulated using linear state-space models. This optimal control problem is very similar to the one presented in many predictive control formulations, but the main difference is that it includes in its formulation a general steady-state objective depending on the magnitudes of manipulated and measured output variables. This steady-state objective may include the standard quadratic regulatory objective, together with economic objectives which are often linear. Assuming that the system settles to a steady-state operating point under receding-horizon control, conditions are given for the satisfaction of the necessary optimality conditions of the steady-state optimisation problem. The method is based on adaptive linear state-space models, which are obtained by using on-line identification techniques. The use of model adaptation is justified from a theoretical standpoint and its beneficial effects are shown in simulations. The method is tested with simulations of an industrial distillation column and a system of chemical reactors.
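The central idea, combining a quadratic regulatory objective with a linear economic term in one receding-horizon problem, can be illustrated on a toy scalar system. This is a minimal sketch under assumed numbers, not the paper's formulation (no constraints, no model adaptation): the finite-horizon cost is assembled in stacked form and minimised as an unconstrained quadratic programme.

```python
# Sketch: receding-horizon cost = quadratic regulation + linear economic term,
# for a scalar system x_{k+1} = a*x_k + b*u_k. All numbers are illustrative.
import numpy as np

a, b = 0.9, 0.5   # assumed state-space model
q, r = 1.0, 0.1   # regulatory weights on state and input
c = 0.05          # linear economic "price" on the input
N = 10            # prediction horizon
x0 = 2.0          # current state

# Stacked dynamics over the horizon: x = F*x0 + G @ u.
F = np.array([a ** (k + 1) for k in range(N)])
G = np.zeros((N, N))
for k in range(N):
    for j in range(k + 1):
        G[k, j] = a ** (k - j) * b

# J(u) = q*||F*x0 + G u||^2 + r*||u||^2 + c*sum(u); gradient H u + g = 0.
H = 2 * (q * G.T @ G + r * np.eye(N))
g = 2 * q * G.T @ (F * x0) + c * np.ones(N)
u = np.linalg.solve(H, -g)
# Receding horizon: apply u[0], measure the new state, and re-solve.
```

The economic term shifts the optimal steady state away from the purely regulatory one; the paper's contribution is giving conditions under which the closed loop settles at a point satisfying the steady-state optimality conditions.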

Relevance:

30.00%

Publisher:

Abstract:

If X is a stable process of index α ∈ (0, 2) whose Lévy measure has density cx^(−α−1) on (0, ∞), and S₁ = sup_{0≤t≤1} X_t, then it is known that P(S₁ > x) ∼ Aα^(−1)x^(−α) as x → ∞ and P(S₁ ≤ x) ∼ Bα^(−1)ρ^(−1)x^(αρ) as x ↓ 0. [Here ρ = P(X₁ > 0), and A and B are known constants.] It is also known that S₁ has a continuous density, m say. The main point of this note is to show that m(x) ∼ Ax^(−(α+1)) as x → ∞ and m(x) ∼ Bx^(αρ−1) as x ↓ 0. Similar results are obtained for related densities.

Relevance:

30.00%

Publisher:

Abstract:

Age-related decline in the integrity of mitochondria is an important contributor to the human ageing process. In a number of ageing stem cell populations, this decline in mitochondrial function is due to clonal expansion of individual mitochondrial DNA (mtDNA) point mutations within single cells. However, the dynamics of this process, and when these mtDNA mutations initially occur, are poorly understood. Using human colorectal epithelium as an exemplar tissue with a well-defined stem cell population, we analysed samples from 207 healthy participants aged 17-78 years using a combination of techniques (Random Mutation Capture, Next Generation Sequencing and mitochondrial enzyme histochemistry), and show that: 1) non-pathogenic mtDNA mutations are present from early embryogenesis or may be transmitted through the germline, whereas pathogenic mtDNA mutations are detected in the somatic cells, providing evidence for purifying selection in humans; 2) pathogenic mtDNA mutations are present from early adulthood (<20 years of age), at both low levels and as clonal expansions; 3) low-level mtDNA mutation frequency does not change significantly with age, suggesting that the mtDNA mutation rate does not increase significantly with age; and 4) clonally expanded mtDNA mutations increase dramatically with age. These data confirm that clonal expansion of mtDNA mutations, some of which are generated very early in life, is the major driving force behind the mitochondrial dysfunction associated with ageing of the human colorectal epithelium.