966 results for INFLATED DISTRIBUTIONS
Abstract:
The uniformization method (also known as randomization) is a numerically stable algorithm for computing transient distributions of a continuous time Markov chain. When the solution is needed after a long run or when the convergence is slow, the uniformization method involves a large number of matrix-vector products. Despite this, the method remains very popular due to its ease of implementation and its reliability in many practical circumstances. Because calculating the matrix-vector product is the most time-consuming part of the method, overall efficiency in solving large-scale problems can be significantly enhanced if the matrix-vector product is made more economical. In this paper, we incorporate a new relaxation strategy into the uniformization method to compute the matrix-vector products only approximately. We analyze the error introduced by these inexact matrix-vector products and discuss strategies for refining the accuracy of the relaxation while reducing the execution cost. Numerical experiments drawn from computer systems and biological systems are given to show that significant computational savings are achieved in practical applications.
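As context for the abstract above, here is a minimal sketch of the standard (exact) uniformization recursion; the paper's contribution is a relaxed variant in which the repeated vector-matrix products are computed only approximately, which this sketch does not implement. The function name, tolerance handling and Poisson truncation rule are illustrative assumptions.

```python
import numpy as np
from scipy.stats import poisson

def uniformization(Q, p0, t, tol=1e-10):
    """Transient distribution p(t) = p0 * exp(Q t) of a CTMC via uniformization.

    Q   : generator matrix (rows sum to zero)
    p0  : initial probability row vector
    tol : probability mass allowed in the truncated tail of the Poisson series
    """
    q = float(np.max(-np.diag(Q)))        # uniformization rate
    P = np.eye(Q.shape[0]) + Q / q        # uniformized DTMC transition matrix
    K = int(poisson.isf(tol, q * t)) + 1  # series truncation point
    w = poisson.pmf(np.arange(K + 1), q * t)

    v = np.asarray(p0, dtype=float)
    p_t = w[0] * v
    for k in range(1, K + 1):
        v = v @ P                         # the dominant cost: one product per term
        p_t += w[k] * v
    return p_t
```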
Abstract:
Purpose of review: This review provides an overview on the importance of characterising and considering insect distribution information for designing stored commodity sampling protocols. Findings: Sampling protocols are influenced by a number of factors including government regulations, management practices, new technology and current perceptions of the status of insect pest damage. The spatial distribution of insects in stored commodities influences the efficiency of sampling protocols; these can vary in response to season, treatment and other factors. It is important to use sampling designs based on robust statistics suitable for the purpose. Future research: The development of sampling protocols based on flexible, robust statistics allows for accuracy across a range of spatial distributions. Additionally, power can be added to sampling protocols through the integration of external information such as treatment history and climate. Bayesian analysis provides a coherent and well-understood means to achieve this.
Abstract:
The Poisson distribution has often been used for count data such as accident counts. The Negative Binomial (NB) distribution has been adopted for count data to address the over-dispersion problem. However, the Poisson and NB distributions cannot account for unobserved heterogeneity arising from spatial and temporal effects in accident data. To overcome this problem, Random Effect models have been developed. Another challenge with existing traffic accident prediction models is the excess of zero-accident observations in some accident data. Although the Zero-Inflated Poisson (ZIP) model can handle the dual-state system in accident data with excess zero observations, it does not accommodate the within-location and between-location correlation heterogeneities that are the basic motivation for Random Effect models. This paper proposes an effective way of fitting a ZIP model with location-specific random effects, and Bayesian analysis is recommended for model calibration and assessment.
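For orientation only, one generic way to write a ZIP model with a location-specific random intercept (the abstract does not give the paper's exact specification; the symbols below are assumptions) is:

```latex
P(Y_{ij}=0) = p_j + (1-p_j)\,e^{-\lambda_{ij}}, \qquad
P(Y_{ij}=k) = (1-p_j)\,\frac{e^{-\lambda_{ij}}\lambda_{ij}^{k}}{k!}, \quad k \ge 1,
\qquad \log \lambda_{ij} = \mathbf{x}_{ij}^{\top}\boldsymbol{\beta} + u_j, \qquad u_j \sim N(0, \sigma_u^{2}),
```

where j indexes locations and u_j captures the between-location heterogeneity; in a Bayesian calibration, priors are placed on β, σ_u² and the zero-inflation probabilities p_j.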
Abstract:
Over the past two decades, flat-plate particle collections have revealed the presence of a remarkable variety of both terrestrial and extraterrestrial material in the stratosphere [1-6]. The ratio of terrestrial to extraterrestrial material and the nature of the material collected may vary over observable time scales. Variations in particle number density can be important since the earth’s atmospheric radiation balance, and therefore the earth’s climate, can be influenced by particulate absorption and scattering of radiation from the sun and earth [7-9]. In order to assess the number density of solid particles in the stratosphere, we have examined a representative fraction of the solid particles from two flat-plate collection surfaces whose collection dates are separated in time by 5 years.
Abstract:
A mineralogical survey of chondritic interplanetary dust particles (IDPs) showed that these micrometeorites differ significantly in form and texture from components of carbonaceous chondrites and contain some mineral assemblages which do not occur in any meteorite class1. Models of chondritic IDP mineral evolution generally ignore the typical (ultra-)fine grain size of constituent minerals, which ranges between 0.002 and 0.1 µm in size2. The chondritic porous (CP) subset of chondritic IDPs is probably debris from short-period comets, although evidence for a cometary origin is still circumstantial3. If CP IDPs represent dust from regions of the Solar System in which comet accretion occurred, it can be argued that pervasive mineralogical evolution of IDP dust has been arrested due to cryogenic storage in comet nuclei. Thus, preservation in CP IDPs of "unusual meteorite minerals", such as oxides of tin, bismuth and titanium4, should not be dismissed casually. These minerals may contain specific information about processes that occurred in regions of the solar nebula, and early Solar System, which spawned the IDP parent bodies such as comets and C, P and D asteroids6. It is not fully appreciated that the apparent disparity between the mineralogy of CP IDPs and carbonaceous chondrite matrix may also be caused by the choice of electron-beam techniques with different analytical resolution. For example, Mg-Si-Fe distributions of CI matrix obtained by "defocussed beam" microprobe analyses are displaced towards lower Fe-values when using analytical electron microscope (AEM) data which resolve individual mineral grains of various layer silicates and magnetite in the same matrix6,7. In general, "unusual meteorite minerals" in chondritic IDPs, such as metallic titanium, TinO2n-1 (Magnéli phases) and anatase8, add to the mineral database of fine-grained Solar System materials and provide constraints on processes that occurred in the early Solar System.
Abstract:
The first representative chemical, structural, and morphological analysis of the solid particles from a single collection surface has been performed. This collection surface sampled the stratosphere between 17 and 19 km in altitude in the summer of 1981, and therefore before the 1982 eruptions of El Chichón. A particle collection surface was washed free of all particles with rinses of Freon and hexane, and the resulting wash was directed through a series of vertically stacked Nucleopore filters. The size cutoff for the solid particle collection process in the stratosphere is found to be considerably less than 1 μm. The total stratospheric number density of solid particles larger than 1 μm in diameter at the collection time is calculated to be about 2.7×10^-1 particles per cubic meter, of which approximately 95% are smaller than 5 μm in diameter. Previous classification schemes are expanded to explicitly recognize low atomic number material. With the single exception of the calcium-aluminum-silicate (CAS) spheres, all solid particle types show a logarithmic increase in number concentration with decreasing diameter. The aluminum-rich particles are unique in showing bimodal size distributions. In addition, spheres constitute only a minor fraction of the aluminum-rich material. About 2/3 of the particles examined were found to be shards of rhyolitic glass. This abundant volcanic material could not be correlated with any eruption plume known to have vented directly to the stratosphere. The micrometeorite number density calculated from this data set is 5×10^-2 micrometeorites per cubic meter of air, an order of magnitude greater than the best previous estimate. At the collection altitude, the maximum collision frequency of solid particles >5 μm in average diameter is calculated to be 6.91×10^-16 collisions per second, which indicates negligible contamination of extraterrestrial particles in the stratosphere by solid anthropogenic particles.
Abstract:
A technique for analysing exhaust emission plumes from unmodified locomotives under real-world conditions is described and applied to the task of characterising plumes from railway trains servicing an Australian shipping port. The method utilizes the simultaneous measurement, downwind of the railway line, of the following pollutants: particle number, PM2.5 mass fraction, SO2, NOx and CO2, with the last of these being used as an indicator of fuel combustion. Emission factors are then derived, in terms of number of particles and mass of pollutant emitted per unit mass of fuel consumed. Particle number size distributions are also presented. The practical advantages of the method are discussed, including the capacity to routinely collect emission factor data for passing trains and thereby build up a comprehensive real-world database for a wide range of pollutants. Samples from 56 train movements were collected, analysed and presented. The quantitative results for the emission factors are: EF(N) = (1.7±1)×10^16 kg^-1, EF(PM2.5) = (1.1±0.5) g·kg^-1, EF(NOx) = (28±14) g·kg^-1, and EF(SO2) = (1.4±0.4) g·kg^-1. The findings are compared with previously published work. Statistically significant (p<α, α=0.05) correlations within the group of locomotives sampled were found between the emission factors for particle number and both SO2 and NOx.
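The abstract does not spell out the derivation, but fuel-based emission factors of this kind are typically obtained by a carbon-balance calculation from background-subtracted plume concentrations; the sketch below assumes that approach, and the carbon mass fraction f_c of diesel is a nominal assumed value.

```python
def emission_factor(delta_x, delta_co2, f_c=0.87):
    """Emission factor of pollutant X per kg of fuel burned (carbon balance).

    delta_x   : background-subtracted, plume-integrated quantity of X
                (e.g. particles m^-3 or g m^-3); sets the numerator unit.
    delta_co2 : background-subtracted, plume-integrated CO2 in g m^-3.
    f_c       : assumed carbon mass fraction of the fuel.
    """
    M_C, M_CO2 = 12.011, 44.009                     # molar masses, g/mol
    g_co2_per_kg_fuel = 1000.0 * f_c * M_CO2 / M_C  # ~3190 g CO2 per kg diesel
    return (delta_x / delta_co2) * g_co2_per_kg_fuel
```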
Abstract:
An important aspect of decision support systems involves applying sophisticated and flexible statistical models to real datasets and communicating the results to decision makers in interpretable ways. An important class of problem is the modelling of incidence, such as fire or disease. Models of incidence known as point processes or Cox processes are particularly challenging because they are ‘doubly stochastic’, i.e. obtaining the probability mass function of incidents requires two integrals to be evaluated. Existing approaches to the problem either use simple models that obtain predictions from plug-in point estimates and do not distinguish between Cox processes and density estimation, but do use sophisticated 3D visualization for interpretation; or they employ sophisticated non-parametric Bayesian Cox process models but do not use visualization to render interpretable, complex spatio-temporal forecasts. The contribution here is to fill this gap by inferring predictive distributions of log-Gaussian Cox processes and rendering them using state-of-the-art 3D visualization techniques. This requires performing inference on an approximation of the model on a large discretized grid and adapting an existing spatial-diurnal kernel to the log-Gaussian Cox process context.
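As a rough sketch of the discretized-grid formulation referred to above (the grid, covariance matrix K and cell area are placeholders, not the paper's implementation), a log-Gaussian Cox process reduces on a grid to Poisson cell counts driven by a latent Gaussian field:

```python
import numpy as np

def lgcp_grid_loglik(f, counts, cell_area):
    """Poisson log-likelihood of grid counts under a discretized LGCP:
    counts[i] ~ Poisson(cell_area * exp(f[i])), with f a latent Gaussian field
    (the log(counts!) constant is dropped)."""
    rate = cell_area * np.exp(f)
    return float(np.sum(counts * np.log(rate) - rate))

def gp_log_prior(f, K):
    """Log-density of a zero-mean Gaussian prior with covariance matrix K."""
    sign, logdet = np.linalg.slogdet(K)
    alpha = np.linalg.solve(K, f)
    return float(-0.5 * (f @ alpha + logdet + len(f) * np.log(2 * np.pi)))
```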
Abstract:
The nonlinear stability analysis introduced by Chen and Haughton [1] is employed to study the full nonlinear stability of the non-homogeneous spherically symmetric deformation of an elastic thick-walled sphere. The shell is composed of an arbitrary homogeneous, incompressible elastic material. The stability criterion ultimately requires the solution of a third-order nonlinear ordinary differential equation. Numerical calculations performed for a wide variety of well-known incompressible materials are then compared with existing bifurcation results and are found to be identical. Further analysis and comparison between stability and bifurcation are conducted for the case of thin shells and we prove by direct calculation that the two criteria are identical for all modes and all materials.
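The abstract does not state the differential equation itself; as a generic illustration of the final numerical step, a third-order nonlinear ODE y''' = g(r, y, y', y'') can be reduced to a first-order system and integrated, e.g. with SciPy (the right-hand side g below is purely hypothetical, as are the interval and initial conditions):

```python
import numpy as np
from scipy.integrate import solve_ivp

def g(r, y, dy, d2y):
    # placeholder nonlinearity, not the stability equation of the paper
    return -(dy * d2y + y**3) / r

def rhs(r, state):
    y, dy, d2y = state                 # state = (y, y', y'')
    return [dy, d2y, g(r, y, dy, d2y)]

# integrate across an illustrative wall thickness with illustrative conditions
sol = solve_ivp(rhs, (1.0, 2.0), [1.0, 0.0, 0.0], dense_output=True)
```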
Abstract:
The issue of using informative priors for estimation of mixtures at multiple time points is examined. Several different informative priors and an independent prior are compared using samples of actual and simulated aerosol particle size distribution (PSD) data. Measurements of aerosol PSDs refer to the concentration of aerosol particles in terms of their size, which is typically multimodal in nature and collected at frequent time intervals. The use of informative priors is found to better identify component parameters at each time point and more clearly establish patterns in the parameters over time. Some caveats to this finding are discussed.
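As an illustration of what an informative prior linking consecutive time points might look like for a K-component normal mixture (the paper's actual priors are not given in the abstract, and the hyperparameters c, τ², a and b below are hypothetical):

```latex
w_t \sim \mathrm{Dirichlet}(c\,\hat{w}_{t-1}), \qquad
\mu_{t,k} \sim N(\hat{\mu}_{t-1,k}, \tau^{2}), \qquad
\sigma_{t,k}^{2} \sim \mathrm{Inv\text{-}Gamma}(a, b),
```

where hats denote estimates from the previous time point; an independent prior would instead use fixed, data-agnostic hyperparameters at every time point.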
Abstract:
In this letter, the velocity distributions of charge carriers in high-mobility polymer thin-film transistors (TFTs) with a diketopyrrolopyrrole-naphthalene copolymer (PDPP-TNT) semiconductor active layer are reported. The velocity distributions are found to be strongly dependent on measurement temperatures as well as annealing conditions. Considerable inhomogeneity is evident at low measurement temperatures and for low annealing temperatures. Such transient transport measurements can provide additional information about charge carrier transport in TFTs which is unavailable using steady-state transport measurements.
Abstract:
We report charge-carrier velocity distributions in high-mobility polymer thin-film transistors (PTFTs) employing a dual-gate configuration. Our time-domain measurements of dual-gate PTFTs indicate higher effective mobility as well as fewer low-velocity carriers than in single-gate operation. Such nonquasi-static (NQS) measurements support and clarify the previously reported results of improved device performance in dual-gate devices by various groups. We believe that this letter demonstrates the utility of NQS measurements in studying charge-carrier transport in dual-gate thin-film transistors.