957 results for Pair distributions
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous-time Markov chain. When the solution is needed at a large time horizon, or when convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, the overall efficiency of solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems show that significant computational savings are achieved in practical applications.
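To make the algorithm concrete, here is a minimal NumPy sketch of the standard (exact) uniformization method that the paper relaxes; the function and parameter names are ours, and the paper's inexact matrix-vector product strategy is not reproduced.

```python
import numpy as np

def uniformization(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) of a CTMC via uniformization.

    Q   : (n, n) generator matrix (rows sum to zero)
    p0  : (n,) initial probability distribution
    t   : time at which the distribution is required
    tol : truncation tolerance for the Poisson series
    """
    Lam = np.max(-np.diag(Q))          # uniformization rate >= max_i |q_ii|
    P = np.eye(len(p0)) + Q / Lam      # DTMC transition matrix P = I + Q/Lambda
    w = np.exp(-Lam * t)               # Poisson(Lam*t) weight for k = 0
    v = p0.astype(float)
    pt = w * v
    k, mass = 0, w
    while 1.0 - mass > tol:            # stop once enough Poisson mass is covered
        k += 1
        v = v @ P                      # the matrix-vector product dominating the cost
        w *= Lam * t / k
        mass += w
        pt += w * v
    return pt

# Example: a two-state chain with rates 1 and 2, started in state 0.
Q = np.array([[-1.0, 1.0], [2.0, -2.0]])
print(uniformization(Q, np.array([1.0, 0.0]), t=5.0))
```

Note that for very large Lam*t the initial weight exp(-Lam*t) underflows; production codes start the summation near the Poisson mode instead.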
Abstract:
Purpose of review: This review provides an overview of the importance of characterising and considering insect distribution information when designing stored-commodity sampling protocols. Findings: Sampling protocols are influenced by a number of factors, including government regulations, management practices, new technology and current perceptions of the status of insect pest damage. The spatial distribution of insects in stored commodities influences the efficiency of sampling protocols; these distributions can vary in response to season, treatment and other factors. It is important to use sampling designs based on robust statistics suitable for the purpose. Future research: The development of sampling protocols based on flexible, robust statistics allows for accuracy across a range of spatial distributions. Additionally, power can be added to sampling protocols through the integration of external information such as treatment history and climate; Bayesian analysis provides a coherent and well-understood means to achieve this.
Abstract:
Over the past two decades, flat-plate particle collections have revealed the presence of a remarkable variety of both terrestrial and extraterrestrial material in the stratosphere [1-6]. The ratio of terrestrial to extraterrestrial material and the nature of the material collected may vary over observable time scales. Variations in particle number density can be important since the earth's atmospheric radiation balance, and therefore the earth's climate, can be influenced by particulate absorption and scattering of radiation from the sun and earth [7-9]. In order to assess the number density of solid particles in the stratosphere, we have examined a representative fraction of the solid particles from two flat-plate collection surfaces whose collection dates are separated in time by 5 years.
Abstract:
A mineralogical survey of chondritic interplanetary dust particles (IDPs) showed that these micrometeorites differ significantly in form and texture from components of carbonaceous chondrites and contain some mineral assemblages which do not occur in any meteorite class [1]. Models of chondritic IDP mineral evolution generally ignore the typical (ultra-)fine grain size of constituent minerals, which ranges between 0.002 and 0.1 µm [2]. The chondritic porous (CP) subset of chondritic IDPs is probably debris from short-period comets, although evidence for a cometary origin is still circumstantial [3]. If CP IDPs represent dust from regions of the Solar System in which comet accretion occurred, it can be argued that pervasive mineralogical evolution of IDP dust has been arrested by cryogenic storage in comet nuclei. Thus, the preservation in CP IDPs of "unusual meteorite minerals", such as oxides of tin, bismuth and titanium [4], should not be dismissed casually. These minerals may contain specific information about processes that occurred in the regions of the solar nebula, and early Solar System, which spawned the IDP parent bodies such as comets and C, P and D asteroids [6]. It is not fully appreciated that the apparent disparity between the mineralogy of CP IDPs and carbonaceous chondrite matrix may also be caused by the choice of electron-beam techniques with different analytical resolution. For example, Mg-Si-Fe distributions of CI matrix obtained by "defocussed beam" microprobe analyses are displaced towards lower Fe values when compared with analytical electron microscope (AEM) data, which resolve individual mineral grains of various layer silicates and magnetite in the same matrix [6,7]. In general, "unusual meteorite minerals" in chondritic IDPs, such as metallic titanium, TinO2n-1 (Magnéli phases) and anatase [8], add to the mineral database of fine-grained Solar System materials and provide constraints on processes that occurred in the early Solar System.
Abstract:
The first representative chemical, structural, and morphological analysis of the solid particles from a single collection surface has been performed. This collection surface sampled the stratosphere between 17 and 19 km in altitude in the summer of 1981, and therefore before the 1982 eruptions of El Chichón. A particle collection surface was washed free of all particles with rinses of Freon and hexane, and the resulting wash was directed through a series of vertically stacked Nuclepore filters. The size cutoff for the solid-particle collection process in the stratosphere is found to be considerably less than 1 µm. The total stratospheric number density of solid particles larger than 1 µm in diameter at the collection time is calculated to be about 2.7×10⁻¹ particles per cubic meter, of which approximately 95% are smaller than 5 µm in diameter. Previous classification schemes are expanded to explicitly recognize low-atomic-number material. With the single exception of the calcium-aluminum-silicate (CAS) spheres, all solid particle types show a logarithmic increase in number concentration with decreasing diameter. The aluminum-rich particles are unique in showing bimodal size distributions. In addition, spheres constitute only a minor fraction of the aluminum-rich material. About 2/3 of the particles examined were found to be shards of rhyolitic glass. This abundant volcanic material could not be correlated with any eruption plume known to have vented directly to the stratosphere. The micrometeorite number density calculated from this data set is 5×10⁻² micrometeorites per cubic meter of air, an order of magnitude greater than the best previous estimate. At the collection altitude, the maximum collision frequency of solid particles >5 µm in average diameter is calculated to be 6.91×10⁻¹⁶ collisions per second, which indicates negligible contamination of extraterrestrial particles in the stratosphere by solid anthropogenic particles.
Abstract:
Dose kernels may be used to calculate dose distributions in radiotherapy (as described by Ahnesjö et al., 1999). Their calculation requires the use of Monte Carlo methods, usually by forcing interactions to occur at a point. The Geant4 Monte Carlo toolkit provides a capability to force interactions to occur in a particular volume. We have modified this capability and created a Geant4 application to calculate dose kernels in cartesian, cylindrical, and spherical scoring systems. The simulation considers monoenergetic photons incident at the origin of a 3 m × 3 m × 3 m water volume. Photons interact via Compton scattering, the photoelectric effect, pair production, and Rayleigh scattering. By default, Geant4 models photon interactions by sampling a physical interaction length (PIL) for each process; the process returning the smallest PIL is then considered to occur. In order to force the interaction to occur within a given length, L_FIL, we scale each PIL according to the formula: PIL_forced = L_FIL × (1 - exp(-PIL/PIL0)), where PIL0 is a constant. This ensures that the process occurs within L_FIL, whilst correctly modelling the relative probability of each process. Dose kernels were produced for incident photon energies of 0.1, 1.0, and 10.0 MeV. In order to benchmark the code, dose kernels were also calculated using the EGSnrc Edknrc user code. Identical scoring systems were used; namely, the collapsed-cone approach of the Edknrc code. Relative dose difference images were then produced. Preliminary results demonstrate the ability of the Geant4 application to reproduce the shape of the dose kernels; median relative dose differences of 12.6%, 5.75%, and 12.6% were found for incident photon energies of 0.1, 1.0, and 10.0 MeV respectively.
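The PIL-scaling rule above is easy to demonstrate in isolation. The following sketch applies the stated formula to a set of competing processes; the process list and all numerical values are hypothetical, and this illustrates only the scaling formula, not the Geant4 implementation.

```python
import numpy as np

rng = np.random.default_rng()

def forced_pil(mean_free_paths, L_FIL, PIL0):
    """Force one of several competing photon processes to occur within L_FIL.

    mean_free_paths : physical mean free path of each process (same length unit)
    L_FIL           : length within which an interaction is forced to occur
    PIL0            : scaling constant from the paper's formula

    Each process samples a physical interaction length (PIL), which is then
    compressed into [0, L_FIL) via PIL_forced = L_FIL * (1 - exp(-PIL/PIL0)).
    The process with the smallest forced PIL wins, preserving the relative
    probabilities of the competing processes.
    """
    pil = rng.exponential(mean_free_paths)            # one PIL per process
    pil_forced = L_FIL * (1.0 - np.exp(-pil / PIL0))  # always < L_FIL
    winner = int(np.argmin(pil_forced))
    return winner, pil_forced[winner]

# Hypothetical mean free paths (cm) for four competing processes:
process, distance = forced_pil(np.array([10.0, 50.0, 200.0, 80.0]),
                               L_FIL=5.0, PIL0=20.0)
```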
Abstract:
We define a pair-correlation function that can be used to characterize spatiotemporal patterning in experimental images and snapshots from discrete simulations. Unlike previous pair-correlation functions, those developed here depend on both the location and the size of objects. The pair-correlation function can be used to indicate complete spatial randomness, aggregation or segregation over a range of length scales, and to quantify spatial structures such as the shape, size and distribution of clusters. Comparing pair-correlation data for various experimental and simulation images illustrates their potential use as a summary statistic for calibrating discrete models of various physical processes.
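As an illustration of the underlying idea, here is a minimal NumPy sketch of a conventional distance-binned pair-correlation estimate, assuming a rectangular window and ignoring edge effects; the paper's statistic additionally accounts for object size, which this simplified version omits.

```python
import numpy as np

def pair_correlation(points, domain_area, r_edges):
    """Estimate a planar pair-correlation function g(r) from point locations.

    points      : (n, 2) array of object centres
    domain_area : area of the observation window
    r_edges     : bin edges for pair distances

    g(r) = 1 indicates complete spatial randomness at scale r;
    g(r) > 1 indicates aggregation, g(r) < 1 segregation.
    """
    n = len(points)
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    d = d[np.triu_indices(n, k=1)]               # distinct pair distances
    counts, _ = np.histogram(d, bins=r_edges)
    # Expected pair counts under CSR in each annulus (edge effects ignored)
    shell = np.pi * (r_edges[1:] ** 2 - r_edges[:-1] ** 2)
    expected = n * (n - 1) / 2 * shell / domain_area
    return counts / expected

# Example: 500 uniformly random points on the unit square; g(r) should be near 1.
pts = np.random.default_rng(0).random((500, 2))
g = pair_correlation(pts, 1.0, np.linspace(0.01, 0.25, 13))
```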
Abstract:
A technique for analysing exhaust emission plumes from unmodified locomotives under real-world conditions is described and applied to the task of characterizing plumes from railway trains servicing an Australian shipping port. The method utilizes the simultaneous measurement, downwind of the railway line, of the following pollutants: particle number, PM2.5 mass fraction, SO2, NOx and CO2, with the last of these being used as an indicator of fuel combustion. Emission factors are then derived, in terms of the number of particles and mass of pollutant emitted per unit mass of fuel consumed. Particle number size distributions are also presented. The practical advantages of the method are discussed, including the capacity to routinely collect emission factor data for passing trains and thereby build up a comprehensive real-world database for a wide range of pollutants. Samples from 56 train movements were collected and analyzed, and the results are presented. The quantitative results for emission factors are: EF(N) = (1.7±1)×10¹⁶ kg⁻¹, EF(PM2.5) = (1.1±0.5) g·kg⁻¹, EF(NOx) = (28±14) g·kg⁻¹, and EF(SO2) = (1.4±0.4) g·kg⁻¹. The findings are compared with previously published work. Statistically significant (p < α, α = 0.05) correlations within the group of locomotives sampled were found between the emission factors for particle number and both SO2 and NOx.
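For readers unfamiliar with how CO2 serves as the fuel tracer, the sketch below shows the generic carbon-balance calculation that plume methods of this kind rely on; the diesel carbon fraction and all numbers are textbook assumptions of ours, not values taken from this study.

```python
# CO2 measured in the plume acts as a fuel tracer, so the emission factor of
# pollutant X per kg of fuel follows from the ratio of its background-subtracted
# plume excess to the CO2 excess. The diesel carbon fraction (~0.87) and the
# resulting ~3.19 kg CO2 per kg fuel are assumed textbook values.

CO2_PER_KG_FUEL = 0.87 * (44.01 / 12.011)  # kg CO2 emitted per kg diesel burned

def emission_factor(delta_x, delta_co2):
    """Emission factor of pollutant X per kg of fuel burned.

    delta_x   : plume excess of X (e.g. g/m3 above background)
    delta_co2 : plume excess of CO2 (kg/m3 above background)
    Returns X per kg fuel, in the mass units of delta_x.
    """
    return delta_x / delta_co2 * CO2_PER_KG_FUEL

# e.g. 0.9 mg/m3 excess PM2.5 against 1.2 g/m3 excess CO2:
ef_pm25 = emission_factor(delta_x=0.9e-3, delta_co2=1.2e-3)  # -> g PM2.5 per kg fuel
```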
Abstract:
The launch of the current series of My Kitchen Rules has undoubtedly been successful, both in terms of television ratings and in capturing a social media audience, clearly winning the battle for the Twitter audience on premiere night, and maintaining a lead over both The Block and The Biggest Loser since then. But it is the controversy surrounding Perth contestants Kelly Ramsay and Chloe James that has dominated media coverage today, detailing the abuse to which they have been subjected on social media.
Abstract:
Many cell types form clumps or aggregates when cultured in vitro, through a variety of mechanisms including rapid cell proliferation, chemotaxis, or direct cell-to-cell contact. In this paper we develop an agent-based model to explore the formation of aggregates in cultures where cells are initially distributed uniformly, at random, on a two-dimensional substrate. Our model includes unbiased random cell motion, together with two mechanisms which can produce cell aggregates: (i) rapid cell proliferation, and (ii) a biased cell motility mechanism whereby cells can sense other cells within a finite range and tend to move towards areas with higher numbers of cells. We then introduce a pair-correlation function which allows us to quantify aspects of the spatial patterns produced by our agent-based model. In particular, these pair-correlation functions are able to detect differences between domains populated uniformly at random (i.e. at the exclusion complete spatial randomness (ECSR) state) and those where the proliferation and biased-motion rules have been employed, even when such differences are not obvious to the naked eye. The pair-correlation function can also detect the emergence of a characteristic inter-aggregate distance, which occurs when the biased motion mechanism is dominant and is not observed when cell proliferation is the main mechanism of aggregate formation. This suggests that applying the pair-correlation function to experimental images of cell aggregates may provide information about the mechanism associated with the observed aggregates. As a proof of concept, we perform such an analysis for images of cancer cell aggregates, which are known to be associated with rapid proliferation. The results of our analysis are consistent with the predictions of the proliferation-based simulations, which supports the potential usefulness of pair-correlation functions for providing insight into the mechanisms of aggregate formation.
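A minimal off-lattice sketch of the two aggregation mechanisms described above follows; the parameter values, the sensing rule, and all names are illustrative assumptions rather than the paper's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(n0=100, steps=200, L=20.0, sense=2.0, bias=0.5,
             p_prolif=0.0, step_len=0.25):
    """Agents on an L x L periodic domain, starting uniformly at random.

    Each step, an agent moves in a uniformly random direction or, with
    probability `bias`, towards the crowding of neighbours within radius
    `sense`; with probability `p_prolif` it also places a daughter nearby.
    """
    pos = rng.random((n0, 2)) * L
    for _ in range(steps):
        if p_prolif > 0:                       # proliferation mechanism (i)
            born = rng.random(len(pos)) < p_prolif
            pos = np.vstack([pos, pos[born] + rng.normal(0, 0.1, (born.sum(), 2))])
        # minimal-image offsets between all agent pairs (periodic domain)
        diff = (pos[None, :, :] - pos[:, None, :] + L / 2) % L - L / 2
        dist = np.linalg.norm(diff, axis=-1)
        near = (dist < sense) & (dist > 0)
        drift = np.where(near[:, :, None], diff, 0.0).sum(axis=1)
        norm = np.linalg.norm(drift, axis=1, keepdims=True)
        drift = np.divide(drift, norm, out=np.zeros_like(drift), where=norm > 0)
        theta = rng.random(len(pos)) * 2 * np.pi
        rand_dir = np.column_stack([np.cos(theta), np.sin(theta)])
        use_bias = (rng.random(len(pos)) < bias)[:, None] & (norm > 0)
        pos = (pos + step_len * np.where(use_bias, drift, rand_dir)) % L  # (ii)
    return pos

cells = simulate(bias=0.8)   # biased-motility-dominant culture
```

Feeding the returned positions into a pair-correlation estimate then distinguishes the two mechanisms, as the abstract describes.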
Abstract:
An important aspect of decision support systems involves applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging because they are 'doubly stochastic', i.e. obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches to the problem either use simple models, which obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation but do use sophisticated 3D visualization for interpretation, or employ sophisticated non-parametric Bayesian Cox process models without using visualization to render complex spatio-temporal forecasts interpretable. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model on a large-scale discretized grid, and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
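To make the discretized-grid approximation concrete, here is a minimal NumPy sketch that simulates a log-Gaussian Cox process on a grid, i.e. the generative model such inference targets; the covariance choice and all parameter values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def simulate_lgcp_grid(n=32, mean_log=1.0, var=1.0, ls=0.2):
    """Simulate a log-Gaussian Cox process on an n x n unit-square grid.

    A Gaussian process with squared-exponential covariance is drawn at the
    grid-cell centres; exponentiating it gives the intensity surface, and
    each cell count is Poisson with mean intensity x cell area.
    """
    xs = (np.arange(n) + 0.5) / n
    xx, yy = np.meshgrid(xs, xs)
    pts = np.column_stack([xx.ravel(), yy.ravel()])
    d2 = ((pts[:, None, :] - pts[None, :, :]) ** 2).sum(-1)
    K = var * np.exp(-0.5 * d2 / ls**2) + 1e-8 * np.eye(n * n)  # jitter for stability
    f = mean_log + np.linalg.cholesky(K) @ rng.standard_normal(n * n)
    intensity = np.exp(f).reshape(n, n)
    counts = rng.poisson(intensity / (n * n))   # cell area = 1/n^2
    return intensity, counts

intensity, counts = simulate_lgcp_grid()
```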
Abstract:
We investigate the utility to computational Bayesian analyses of a particular family of recursive marginal likelihood estimators characterized by the (equivalent) algorithms known as "biased sampling" or "reverse logistic regression" in the statistics literature and "the density of states" in physics. Through a pair of numerical examples (including mixture modeling of the well-known galaxy dataset) we highlight the remarkable diversity of sampling schemes amenable to such recursive normalization, as well as the notable efficiency of the resulting pseudo-mixture distributions for gauging prior-sensitivity in the Bayesian model selection context. Our key theoretical contributions are to introduce a novel heuristic ("thermodynamic integration via importance sampling") for qualifying the role of the bridging sequence in this procedure, and to reveal various connections between these recursive estimators and the nested sampling technique.
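For orientation, the following sketch implements the generic self-consistency iteration behind the "biased sampling" / reverse logistic regression family of estimators the abstract studies, under our own naming conventions; it is a sketch of the estimator family, not the paper's implementation.

```python
import numpy as np

def recursive_log_z(log_q, counts, iters=500):
    """Recursive estimate of log normalizing constants from pooled samples.

    log_q  : (N, K) array; log_q[i, k] is the unnormalized log density of
             bridging distribution k evaluated at pooled sample i
    counts : (K,) number of samples drawn from each distribution

    Iterates the fixed point z_k = mean_i q_k(x_i) / sum_j (n_j / z_j) q_j(x_i),
    fixing z_0 = 1 for identifiability.
    """
    N, K = log_q.shape
    log_z = np.zeros(K)
    for _ in range(iters):
        # log of the pooled 'pseudo-mixture' density at each sample
        log_mix = np.logaddexp.reduce(np.log(counts) - log_z + log_q, axis=1)
        log_z = np.logaddexp.reduce(log_q - log_mix[:, None], axis=0) - np.log(N)
        log_z -= log_z[0]            # anchor the first constant at 1
    return log_z
```

With log_q built from, say, a tempered sequence of unnormalized posteriors, the returned log_z estimates the log normalizing-constant ratios along the bridging sequence, from which a marginal likelihood follows.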