169 results for distribution (probability theory)
Abstract:
A new approach is developed to analyze the thermodynamic properties of a sub-critical fluid adsorbed in a slit pore of activated carbon. The approach is based on representing the adsorbed fluid as an ordered structure close to a smoothed solid surface, modelled as a collection of parallel molecular layers. Such a structure allows us to express the Helmholtz free energy of a molecular layer as the sum of the intrinsic Helmholtz free energy specific to that layer and the potential energy of interaction of that layer with all other layers and the solid surface. The intrinsic Helmholtz free energy of a molecular layer is a function (at given temperature) of its two-dimensional density and can be readily obtained from bulk-phase properties, while the interlayer potential energy of interaction is determined using the 10-4 Lennard-Jones potential. The positions of all layers close to the graphite surface or in a slit pore are taken to correspond to the minimum of the potential energy of the system. This model has led to accurate predictions of nitrogen and argon adsorption on carbon black at their normal boiling points. In the case of adsorption in slit pores, local isotherms are determined from the minimization of the grand potential. The model provides a reasonable description of the 0-1 monolayer transition, phase transitions and packing effects. The adsorption of nitrogen at 77.35 K and argon at 87.29 K on activated carbons is analyzed to illustrate the potential of this theory, and the derived pore-size distribution compares favourably with that obtained by density functional theory (DFT). The model is less time-consuming than methods such as DFT and Monte Carlo simulation and, most importantly, can be readily extended to the adsorption of mixtures and to capillary condensation phenomena.
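The 10-4 Lennard-Jones potential named above has a standard closed form, and summing it over parallel graphene planes behind each wall gives a simple slit-pore solid-fluid potential. The sketch below is a minimal illustration of that construction; the parameter values (eps_sf, sigma_sf, rho_s, and the interlayer spacing delta) are rough, commonly quoted figures for nitrogen on carbon assumed here for illustration, not the paper's fitted values.

```python
import numpy as np

def lj_10_4(z, eps_sf=53.22, sigma_sf=0.3494, rho_s=38.2):
    """10-4 Lennard-Jones potential (in Kelvin, via eps/k_B) between a fluid
    molecule and a single graphene layer at distance z (nm). Parameter values
    are rough literature figures for nitrogen on carbon, assumed for
    illustration only."""
    r = sigma_sf / z
    return 2.0 * np.pi * rho_s * eps_sf * sigma_sf**2 * (0.4 * r**10 - r**4)

def slit_pore_potential(z, H, n_layers=3, delta=0.335):
    """Total solid-fluid potential in a slit pore of width H (nm): 10-4
    contributions from n_layers graphene planes behind each wall, spaced by
    the graphite interlayer distance delta. A common simplification."""
    return sum(lj_10_4(z + k * delta) + lj_10_4(H - z + k * delta)
               for k in range(n_layers))

# Potential profile across a 1 nm slit pore:
for z in np.linspace(0.3, 0.7, 5):
    print(f"z = {z:.2f} nm  phi = {slit_pore_potential(z, 1.0):8.1f} K")
```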
Abstract:
The application of nonlocal density functional theory (NLDFT) to determine the pore size distribution (PSD) of activated carbons using a nongraphitized carbon black, instead of graphitized thermal carbon black, as a reference system is explored. We show that in this case nitrogen and argon adsorption isotherms in activated carbons are precisely correlated by the theory, and such an excellent correlation would never be possible if the pore wall surface were assumed to be identical to that of graphitized carbon black. This suggests that the pore wall surfaces of activated carbon are closer to those of amorphous solids because of defects in the crystalline lattice, finite pore length, the presence of active centers, etc. Application of the NLDFT adapted to amorphous solids resulted in a quantitative description of N2 and Ar adsorption isotherms on nongraphitized carbon black BP280 at their respective boiling points. In the present paper we determined solid-fluid potentials from experimental adsorption isotherms on nongraphitized carbon black and subsequently used those potentials to model adsorption in slit pores and generate a corresponding set of local isotherms, which we used to determine the PSD functions of different activated carbons.
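Once a set of local isotherms is in hand, the PSD step reduces to inverting the adsorption integral equation N_exp(P) = ∫ f(H) ρ(P, H) dH for the non-negative function f(H). Below is a minimal sketch of that inversion on a discretized grid; the kernel is a smooth toy pore-filling model standing in for real NLDFT local isotherms, and the "experimental" isotherm is synthetic, so everything here is illustrative rather than the paper's procedure.

```python
import numpy as np
from scipy.optimize import nnls

# Toy stand-in for the NLDFT local-isotherm kernel rho(P, H): each pore of
# width H fills smoothly around a width-dependent pressure p_fill(H).
# This placeholder is for illustration only; it is NOT an NLDFT kernel.
pressures = np.logspace(-6, 0, 60)          # relative pressure P/P0
widths = np.linspace(0.4, 4.0, 40)          # pore width grid (nm)
p_fill = np.exp(-2.0 / widths)              # smaller pores fill at lower P/P0
kernel = 1.0 / (1.0 + np.exp(-(np.log(pressures[:, None])
                               - np.log(p_fill[None, :])) / 0.3))

# Synthetic "experimental" isotherm generated from a known bimodal PSD.
true_psd = (np.exp(-(widths - 0.8)**2 / 0.02)
            + 0.5 * np.exp(-(widths - 2.0)**2 / 0.1))
n_exp = kernel @ true_psd + 0.01 * np.random.default_rng(0).standard_normal(60)

# Discretized adsorption integral equation N(P) = sum_H f(H) * rho(P, H):
# recover the non-negative PSD f(H) by non-negative least squares.
psd, _ = nnls(kernel, n_exp)
print("recovered PSD peaks near", widths[psd.argmax()], "nm")
```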
Abstract:
Quasi-birth-and-death (QBD) processes with infinite “phase spaces” can exhibit unusual and interesting behavior. One of the simplest examples of such a process is the two-node tandem Jackson network, with the “phase” giving the state of the first queue and the “level” giving the state of the second queue. In this paper, we undertake an extensive analysis of the properties of this QBD. In particular, we investigate the spectral properties of Neuts’s R-matrix and show that the decay rate of the stationary distribution of the “level” process is not always equal to the convergence norm of R. In fact, we show that we can obtain any decay rate within a certain range by controlling only the transition structure at level zero, which is independent of R. We also consider the sequence of tandem queues constructed by restricting the waiting room of the first queue to some finite capacity and then allowing this capacity to increase to infinity. We show that the decay rates for the finite truncations converge to a value that is not necessarily the decay rate in the infinite-waiting-room case. Finally, we show that the probability that the process hits level n before level 0, given that it starts in level 1, decays at a rate that is not necessarily the same as the decay rate of the stationary distribution.
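For a discrete-time QBD with finitely many phases, Neuts's R-matrix is the minimal non-negative solution of R = A0 + R·A1 + R²·A2, where A0, A1, A2 are the level-up, within-level and level-down transition blocks, and the stationary distribution satisfies π_{n+1} = π_n R. The sketch below computes R by plain fixed-point iteration on a small random example; it is a finite-phase illustration only, whereas the tandem-queue QBD of the abstract has an infinite phase space, which is exactly where the decay rate can detach from the convergence norm of R.

```python
import numpy as np

# Minimal sketch: Neuts's R-matrix for a discrete-time QBD with a small,
# finite phase space (the paper's tandem-queue QBD has an infinite one).
rng = np.random.default_rng(1)
m = 3
blocks = rng.random((3, m, m))
blocks /= blocks.sum(axis=(0, 2), keepdims=True)   # rows of A0+A1+A2 sum to 1
A0, A1, A2 = blocks                                # up / local / down blocks

# Fixed-point iteration for the minimal solution of R = A0 + R A1 + R^2 A2.
R = np.zeros((m, m))
for _ in range(2000):
    R_new = A0 + R @ A1 + R @ R @ A2
    if np.abs(R_new - R).max() < 1e-12:
        break
    R = R_new

# pi_{n+1} = pi_n R, so sp(R) governs the geometric decay in this finite-phase
# setting; the abstract shows this identification can fail with infinite phases.
print("spectral radius of R:", max(abs(np.linalg.eigvals(R))))
```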
Abstract:
Reviews the ecological status of the mahogany glider and describes its distribution, habitat and abundance, life history and threats to it. Three serial surveys of Brisbane residents provide data on the knowledge of respondents about the mahogany glider. The results provide information about the attitudes of respondents to the mahogany glider, to its conservation and to relevant public policies, and about variations in these factors as participants' knowledge of the mahogany glider alters. Similarly, data are provided and analysed on respondents' willingness to pay to conserve the mahogany glider. Population viability analysis is applied to estimate the habitat area required for a minimum viable population of the mahogany glider, ensuring at least a 95% probability of its survival for 100 years. Places are identified in Queensland where the requisite minimum area of critical habitat can be conserved. Using the survey results as a basis, the likely willingness of groups of Australians to pay for the conservation of the mahogany glider is estimated, and hence their willingness to pay for the minimum required area of its habitat. Methods for estimating the cost of protecting this habitat are outlined. Australia-wide benefits seem to exceed the costs. Establishing a national park containing the minimum viable population of the mahogany glider is an appealing management option. This would also be beneficial in conserving other endangered wildlife species. Therefore, economic benefits additional to those estimated for the mahogany glider itself can be obtained.
Abstract:
Polytomous Item Response Theory Models provides a unified, comprehensive introduction to the range of polytomous models available within item response theory (IRT). It begins by outlining the primary structural distinction between the two major types of polytomous IRT models. This focuses on the two types of response probability that are unique to polytomous models and their associated response functions, which are modeled differently by the different types of IRT model. It describes, both conceptually and mathematically, the major specific polytomous models, including the Nominal Response Model, the Partial Credit Model, the Rating Scale Model, and the Graded Response Model. Important variations, such as the Generalized Partial Credit Model, are also described, as are less common variations, such as the Rating Scale version of the Graded Response Model. Relationships among the models are also investigated, and the operation of measurement information is described for each major model. Practical examples of major models using real data are provided, as is a chapter on choosing an appropriate model. Figures are used throughout to illustrate important elements as they are described.
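As one concrete example of this model family, the Graded Response Model builds its category probabilities as differences of adjacent cumulative boundary response functions. The sketch below implements that construction in its usual logistic form with made-up item parameters (discrimination a and ordered boundaries b); it illustrates the model class, not anything specific to the book's examples.

```python
import numpy as np

def grm_category_probs(theta, a, b):
    """Graded Response Model: P(X = k | theta) as the difference of adjacent
    cumulative boundary curves P*(X >= k) = logistic(a * (theta - b_k)).
    a is the discrimination; b holds the ordered category boundaries."""
    b = np.asarray(b, dtype=float)
    p_star = 1.0 / (1.0 + np.exp(-a * (theta - b)))   # P(X >= 1), ..., P(X >= K-1)
    cum = np.concatenate(([1.0], p_star, [0.0]))      # P(X >= 0), ..., P(X >= K)
    return cum[:-1] - cum[1:]                         # category probabilities

# A 4-category item with illustrative (made-up) parameters:
probs = grm_category_probs(theta=0.5, a=1.2, b=[-1.0, 0.0, 1.5])
print(probs, probs.sum())   # the category probabilities sum to 1
```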
Abstract:
This paper presents a personal view of the interaction between the analysis of choice under uncertainty and the analysis of production under uncertainty. Interest in the foundations of the theory of choice under uncertainty was stimulated by applications of expected utility theory such as the Sandmo model of production under uncertainty. This interest led to the development of generalized models including rank-dependent expected utility theory. In turn, the development of generalized expected utility models raised the question of whether such models could be used in the analysis of applied problems such as those involving production under uncertainty. Finally, the revival of the state-contingent approach led to the recognition of a fundamental duality between choice problems and production problems.
Abstract:
The integral of the Wigner function of a quantum-mechanical system over a region of the classical phase plane, or over its boundary, is called a quasiprobability integral. Unlike a true probability integral, its value may lie outside the interval [0, 1]. It is characterized by a corresponding self-adjoint operator, called a region or contour operator as appropriate, which is determined by the characteristic function of that region or contour. The spectral problem is studied for commuting families of region and contour operators associated with concentric discs and circles of given radius a. Their respective eigenvalues are determined as functions of a, in terms of the Gauss-Laguerre polynomials. These polynomials provide a basis of vectors in a Hilbert space carrying the positive discrete series representation of the algebra su(1, 1) ≅ so(2, 1). The explicit relation between the spectra of operators associated with discs and circles with proportional radii is given in terms of the discrete-variable Meixner polynomials.
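Because a centred disc is rotation invariant in phase space, the corresponding region operator is diagonal in the Fock basis, and its eigenvalues are the integrals of the Fock-state Wigner functions over the disc. The sketch below evaluates these numerically under the common convention W_n(r) = ((-1)^n/π) e^{-r²} L_n(2r²); the paper's conventions and scale factors may differ, so treat this as an illustration of the Laguerre structure rather than a reproduction of its results.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import eval_laguerre

def disc_eigenvalue(n, a):
    """Quasiprobability integral of the n-th Fock state's Wigner function over
    a disc of radius a (the disc-operator eigenvalue, up to convention), using
    W_n(r) = ((-1)**n / pi) * exp(-r**2) * L_n(2 * r**2)."""
    integrand = lambda r: ((-1) ** n / np.pi) * np.exp(-r * r) \
                          * eval_laguerre(n, 2.0 * r * r) * 2.0 * np.pi * r
    value, _ = quad(integrand, 0.0, a)
    return value

# The n = 1 eigenvalue is negative for small a (about -0.104 at a = 1),
# illustrating why these integrals are only *quasi*probabilities.
for n in range(3):
    print(n, disc_eigenvalue(n, a=1.0))
```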
Abstract:
This paper combines insights from the literature on the economics of organisation with traditional models of market structure to construct a theory of equilibrium firm size heterogeneity under the assumption of a homogeneous product industry. Configurations consisting entirely of small firms (run by entrepreneurs with limited attention) or entirely of larger firms (using managerial techniques to substitute away these limits and so allow increasing-returns technologies to become profitable) can arise in equilibrium. However, there also exist equilibrium configurations in which large and small firms co-exist. The efficiency properties of these respective equilibria are discussed. Finally, the implications of an expanding market size are considered.
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after the index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting.
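To make the model class concrete, the sketch below runs EM for a plain two-component Weibull mixture on uncensored synthetic data: the E-step computes posterior component memberships and the M-step performs weighted Weibull maximum likelihood. It deliberately omits the paper's censoring and random hospital effects, so it is a simplified illustration rather than the proposed estimator.

```python
import numpy as np
from scipy.optimize import minimize

def weibull_logpdf(t, shape, scale):
    return (np.log(shape / scale) + (shape - 1.0) * np.log(t / scale)
            - (t / scale) ** shape)

def em_weibull_mixture(t, n_iter=50):
    """EM for a two-component Weibull mixture on uncensored data, without
    random effects: a simplified sketch of the model class in the abstract."""
    pi = 0.5
    params = [(1.0, 0.5 * np.median(t)), (1.0, 2.0 * np.median(t))]
    for _ in range(n_iter):
        # E-step: posterior probability each observation is from component 1.
        log_f = np.array([np.log(w) + weibull_logpdf(t, k, s)
                          for w, (k, s) in zip((pi, 1.0 - pi), params)])
        resp = np.exp(log_f[0] - np.logaddexp(log_f[0], log_f[1]))
        pi = resp.mean()
        # M-step: weighted Weibull maximum likelihood for each component,
        # optimized on the log scale to keep shape and scale positive.
        for j, w in enumerate((resp, 1.0 - resp)):
            nll = lambda p: -(w * weibull_logpdf(t, np.exp(p[0]), np.exp(p[1]))).sum()
            params[j] = tuple(np.exp(minimize(nll, np.log(params[j]),
                                              method="Nelder-Mead").x))
    return pi, params

# Synthetic two-phase survival times (acute-like and chronic-like components):
rng = np.random.default_rng(0)
t = np.concatenate([2.0 * rng.weibull(1.5, 300), 10.0 * rng.weibull(2.5, 700)])
print(em_weibull_mixture(t))
```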
Abstract:
We examine a problem with n players each facing the same binary choice. One choice is superior to the other. The simple assumption of competition (that an individual's payoff falls with a rise in the number of players making the same choice) guarantees the existence of a unique symmetric equilibrium involving mixed strategies. As n increases, there are two opposing effects. First, events in the middle of the distribution, where a player finds itself having made the same choice as many others, become more likely, but the payoffs in these events fall. In opposition, events in the tails of the distribution, where a player finds itself having made the same choice as few others, become less likely, but the payoffs in these events remain high. We provide a sufficient condition (strong competition) under which an increase in the number of players leads to a reduction in the equilibrium probability that the superior choice is made.
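In the symmetric mixed equilibrium, each player must be indifferent between the two choices when every opponent plays the superior choice with probability p, so the number of opponents matching a given choice is binomial. The sketch below solves that indifference condition numerically; the decreasing payoff functions, 1.5/(1+k) and 1/(1+k), are made up for illustration and are not from the paper.

```python
import numpy as np
from scipy.optimize import brentq
from scipy.stats import binom

def expected_payoff(payoff, p, n):
    """Expected payoff of a choice when each of the n - 1 opponents makes the
    same choice with probability p; payoff(k) is the payoff when k others
    match, and is decreasing in k (the competition assumption)."""
    k = np.arange(n)
    return (binom.pmf(k, n - 1, p) * payoff(k)).sum()

def symmetric_equilibrium(n,
                          payoff_good=lambda k: 1.5 / (1.0 + k),
                          payoff_bad=lambda k: 1.0 / (1.0 + k)):
    """p*: the probability of taking the superior choice at which players are
    indifferent between the two choices (illustrative payoffs)."""
    gap = lambda p: (expected_payoff(payoff_good, p, n)
                     - expected_payoff(payoff_bad, 1.0 - p, n))
    return brentq(gap, 1e-9, 1.0 - 1e-9)

for n in (2, 5, 20, 100):
    print(n, round(symmetric_equilibrium(n), 4))
```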
Abstract:
Hatschekia plectropomi, an ectoparasitic copepod found on the gills, infected Plectropomus leopardus from Heron Island Reef with 100% prevalence (n = 32) and a mean ± S.E. infection intensity of 131.9 ± 22.1. The distribution of 4222 adult female parasites across 32 individual host fish was investigated at several organizational levels, ranging from the level of holobranch pairs to that of individual filaments. Parasites demonstrated a site preference for the two central holobranchs (2 and 3). Along the lengths of hemibranchs, filaments near the dorsal and ventral ends and those in the proximity of the bend region were rarely occupied. The probability of coming into contact with a suitable attachment site and the ability to withstand ventilation forces at that site were proposed as the major factors affecting distribution. Two H. plectropomi morphotypes were identified based on the direction of body curvature. Regardless of morphotype, 99.9% of individuals were attached such that the convex side of the body was oriented towards the oncoming ventilating water currents. Further, 93.3% of individuals attached to the posterior faces of filaments, leading to a predictable pattern of attachment for this species. It is suggested that the direction of body curvature develops in response to the direction of the ventilating water currents.
Abstract:
Unauthorized access to digital content is a serious threat to international security and informatics. We propose an offline oblivious data distribution framework that preserves the sender's security and the receiver's privacy using tamper-proof smart cards. This framework provides persistent content protection against digital piracy and promises private content consumption.
Abstract:
Izenman and Sommer (1988) used a non-parametric kernel density estimation technique to fit a seven-component model to the paper thickness of the 1872 Hidalgo stamp issue of Mexico. They observed an apparent conflict when fitting a normal mixture model with three components with unequal variances. This conflict is examined further by investigating the most appropriate number of components when fitting a normal mixture of components with equal variances.
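A standard way to probe the "how many components" question is to fit equal-variance normal mixtures of increasing order and compare a penalized likelihood such as BIC (the paper's own analysis may use a different criterion, such as a likelihood ratio test). The sketch below does this with scikit-learn on synthetic thickness-like data; the Hidalgo measurements themselves are not reproduced here.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Synthetic stand-in for the stamp-thickness data (three overlapping groups).
rng = np.random.default_rng(0)
thickness = np.concatenate([rng.normal(0.072, 0.004, 200),
                            rng.normal(0.080, 0.004, 150),
                            rng.normal(0.090, 0.004, 120)]).reshape(-1, 1)

# Fit equal-variance ("tied" covariance) normal mixtures with g = 1..7
# components and compare BIC; lower is better.
for g in range(1, 8):
    gm = GaussianMixture(n_components=g, covariance_type="tied",
                         n_init=5, random_state=0).fit(thickness)
    print(g, round(gm.bic(thickness), 1))
```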
Abstract:
Experimental data for E. coli debris size reduction during high-pressure homogenisation at 55 MPa are presented. A mathematical model based on grinding theory is developed to describe the data. The model is based on first-order breakage and compensation conditions. It does not require any assumption of a specified distribution for debris size and can be used given information on the initial size distribution of whole cells and the disruption efficiency during homogenisation. The number of homogeniser passes is incorporated into the model and used to describe the size reduction of non-induced stationary and induced E. coli cells during homogenisation. Regressing the results against the model equations gave an excellent fit to the experimental data (> 98.7% of variance explained for both fermentations), confirming the model's potential for predicting size reduction during high-pressure homogenisation. This study provides a means to optimise both homogenisation and disc-stack centrifugation conditions for recombinant product recovery.
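The first-order breakage idea can be made concrete in a few lines: for each size x, the mass fraction of debris coarser than x decays exponentially with pass number at a size-dependent rate. The sketch below uses a made-up log-normal whole-cell size distribution and an assumed rate function k(x); the paper's model additionally imposes compensation conditions so that broken mass reappears at smaller sizes, which this sketch omits.

```python
import numpy as np
from scipy.stats import lognorm

def oversize_after_passes(sizes, initial_cdf, k, n_passes):
    """First-order breakage sketch: for each size x, the mass fraction of
    material coarser than x decays as exp(-k(x) * N) over N homogeniser
    passes, starting from the whole-cell oversize curve 1 - initial_cdf(x)."""
    return (1.0 - initial_cdf(sizes)) * np.exp(-k(sizes) * n_passes)

cells = lognorm(s=0.25, scale=1.6)   # assumed log-normal whole-cell sizes (um)
k = lambda x: 0.3 * x                # assumed: larger debris breaks faster
sizes = np.linspace(0.2, 3.0, 8)
for n in (1, 3, 5):
    print(n, np.round(oversize_after_passes(sizes, cells.cdf, k, n), 3))
```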
Abstract:
The probit model is a popular device for explaining binary choice decisions in econometrics. It has been used to describe choices such as labor force participation, travel mode, home ownership, and type of education. These and many more examples can be found in papers by Amemiya (1981) and Maddala (1983). Given the contribution of economics towards explaining such choices, and given the nature of the data that are collected, prior information on the relationship between a choice probability and several explanatory variables frequently exists. Bayesian inference is a convenient vehicle for including such prior information. Given the increasing popularity of Bayesian inference, it is useful to ask whether inferences from a probit model are sensitive to the choice between Bayesian and sampling theory techniques. Of interest is the sensitivity of inference on coefficients, probabilities, and elasticities. We consider these issues in a model designed to explain the choice between fixed and variable interest rate mortgages. Two Bayesian priors are employed: a uniform prior on the coefficients, designed to be noninformative, and an inequality-restricted prior on the signs of the coefficients. We often know, a priori, whether increasing the value of a particular explanatory variable will have a positive or negative effect on a choice probability. This knowledge can be captured by using a prior probability density function (pdf) that is truncated to be positive or negative. Thus, three sets of results are compared: those from maximum likelihood (ML) estimation, those from Bayesian estimation with an unrestricted uniform prior on the coefficients, and those from Bayesian estimation with a uniform prior truncated to accommodate inequality restrictions on the coefficients.
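A minimal sketch of the ML-versus-restricted-Bayes comparison is below: a probit fitted by maximum likelihood with statsmodels, and a random-walk Metropolis sampler for the posterior under a flat prior truncated to fixed coefficient signs. The data, the particular sign restrictions, and the tuning constants are all made up for illustration; this is not the paper's mortgage dataset or estimation code.

```python
import numpy as np
import statsmodels.api as sm
from scipy.stats import norm

# Synthetic binary-choice data standing in for the mortgage-choice application.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 2)))
beta_true = np.array([-0.3, 0.8, -0.5])
y = (X @ beta_true + rng.normal(size=500) > 0).astype(int)

# Sampling-theory benchmark: maximum likelihood probit.
ml = sm.Probit(y, X).fit(disp=0)

def log_post(b):
    """Log posterior under a flat prior truncated to b1 > 0 and b2 < 0
    (illustrative sign restrictions): just the probit log likelihood on
    the restricted support."""
    if b[1] <= 0 or b[2] >= 0:
        return -np.inf
    p = np.clip(norm.cdf(X @ b), 1e-12, 1 - 1e-12)
    return (y * np.log(p) + (1 - y) * np.log(1 - p)).sum()

# Random-walk Metropolis started at the ML estimates.
draws, b = [], ml.params.copy()
for _ in range(5000):
    prop = b + 0.1 * rng.normal(size=3)
    if np.log(rng.random()) < log_post(prop) - log_post(b):
        b = prop
    draws.append(b)

print("ML estimates:                   ", np.round(ml.params, 3))
print("posterior mean (sign-restricted):", np.round(np.mean(draws[1000:], axis=0), 3))
```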