114 results for semi-empirical methods
at University of Queensland eSpace - Australia
Abstract:
A semi-empirical linear equation has been developed to optimise the amount of maltodextrin additive (DE 6) required to successfully spray dry a sugar-rich product on the basis of its composition. Based on spray drying experiments, drying index values for individual sugars (sucrose, glucose, fructose) and citric acid were determined, and using these index values an equation for model mixtures of these components was established. This equation has been tested with two sugar-rich natural products, pineapple juice and honey. The relationship was found to be valid for these products.
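The drying-index bookkeeping described above can be sketched as follows. The index values and target below are illustrative placeholders, not the fitted values reported in the study; the sketch only shows the arithmetic of a mass-fraction-weighted index and solving for the additive fraction.

```python
# Illustrative sketch of a drying-index approach to sizing a
# maltodextrin additive. Index values and the target are placeholders,
# NOT the paper's fitted values.

def mixture_index(fractions, indices):
    """Mass-fraction-weighted drying index of a mixture."""
    return sum(x * i for x, i in zip(fractions, indices))

def maltodextrin_fraction(product_fractions, product_indices,
                          i_md, target=1.0):
    """Solve m*i_md + (1-m)*I_product = target for the maltodextrin
    mass fraction m (on a total-solids basis)."""
    i_prod = mixture_index(product_fractions, product_indices)
    return (target - i_prod) / (i_md - i_prod)

# Hypothetical composition: sucrose, glucose, fructose, citric acid.
x = [0.5, 0.2, 0.2, 0.1]
idx = [0.85, 0.55, 0.30, 0.60]   # placeholder drying indices
m = maltodextrin_fraction(x, idx, i_md=1.6)
```

With these placeholder numbers the required maltodextrin fraction is the value of m that brings the weighted mixture index up to the target.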
Abstract:
A review is given of fundamental studies of gas-carbon reactions using electronic structure methods over the last several decades. The three types of electronic structure methods (semi-empirical, ab initio and density functional theory) are briefly introduced first, followed by the studies on carbon reactions with hydrogen and oxygen-containing gases (non-catalysed and catalysed). The problems yet to be solved and possible promising directions are discussed. (C) 2002 Elsevier Science Ltd. All rights reserved.
Abstract:
Recent reviews of the desistance literature have advocated studying desistance as a process, yet current empirical methods continue to measure desistance as a discrete state. In this paper, we propose a framework for empirical research that recognizes desistance as a developmental process. This approach focuses on changes in the offending rate rather than on offending itself. We describe a statistical model to implement this approach and provide an empirical example. We conclude with several suggestions for future research endeavors that arise from our conceptualization of desistance.
Abstract:
Small area health statistics has assumed increasing importance as the focus of population and public health moves to a more individualised approach of smaller area populations. Small populations and low event occurrence produce difficulties in interpretation and require appropriate statistical methods, including methods for age adjustment. There are also statistical questions related to multiple comparisons. Privacy and confidentiality issues include the possibility of revealing information on individuals or health care providers by fine cross-tabulations. Interpretation of small area population differences in health status requires consideration of migrant and Indigenous composition, socio-economic status and rural-urban geography before assessment of the effects of physical environmental exposure and services and interventions. Burden of disease studies produce a single measure for morbidity and mortality - the disability adjusted life year (DALY) - which is the sum of the years of life lost (YLL) from premature mortality and the years lived with disability (YLD) for particular diseases (or all conditions). Calculation of YLD requires estimates of disease incidence (and complications) and duration, and weighting by severity. These procedures often involve problematic assumptions, as do the future discounting and age weighting of both YLL and YLD. Evaluation of the Victorian small area population disease burden study presents important cross-disciplinary challenges as it relies heavily on synthetic approaches of demography and economics rather than on the empirical methods of epidemiology. Both empirical and synthetic methods are used to compute small area mortality and morbidity, disease burden, and then attribution to risk factors. Readers need to examine the methodology and assumptions carefully before accepting the results.
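The DALY arithmetic described above can be made concrete with a minimal sketch (no age weighting or discounting, which the abstract notes are themselves contested assumptions). The input numbers are hypothetical.

```python
# Minimal sketch of the DALY arithmetic: DALY = YLL + YLD, with
#   YLL = deaths * standard life expectancy at age of death
#   YLD = incident cases * disability weight * average duration.
# No age weighting or future discounting is applied here.

def yll(deaths, life_expectancy_at_death):
    """Years of life lost from premature mortality."""
    return deaths * life_expectancy_at_death

def yld(incident_cases, disability_weight, avg_duration_years):
    """Years lived with disability, weighted by severity."""
    return incident_cases * disability_weight * avg_duration_years

def daly(deaths, life_expectancy_at_death,
         incident_cases, disability_weight, avg_duration_years):
    return (yll(deaths, life_expectancy_at_death)
            + yld(incident_cases, disability_weight, avg_duration_years))

# Hypothetical small-area example: 10 deaths each losing 20 years,
# 200 incident cases with disability weight 0.2 lasting 5 years.
burden = daly(10, 20, 200, 0.2, 5)   # 200 YLL + 200 YLD
```

Even this toy example shows why YLD estimates are fragile: incidence, duration and the severity weight all enter multiplicatively.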
Abstract:
Global biodiversity loss and its consequences for human welfare and sustainable development have become major concerns. Economists have, therefore, given increasing attention to the policy issues involved in the management of genetic resources. To do so, they often apply empirical methods developed in behavioral and experimental economics to estimate economic values placed on genetic resources. This trend away from almost exclusive dependence on axiomatic methods is welcomed. However, major valuation methods used in behavioral economics raise new scientific challenges. Possibly the most important of these include deficiencies in the knowledge of the public (and researchers) about genetic resources, implications for the formation of values of supplying information to focal individuals, and limits to rationality. These issues are explored for stated-preference techniques of valuation (e.g., contingent valuation) as well as revealed preference techniques, especially the travel cost method. They are illustrated by Australian and Asian examples. Taking into account behavioral and psychological models and empirical evidence, particular attention is given to how elicitation of preferences, and supply of information to individuals, influences their preferences about biodiversity. Policy consequences are outlined.
Abstract:
The Dubinin-Radushkevich (DR) equation is widely used for description of adsorption in microporous materials, especially those of a carbonaceous origin. The equation has a semi-empirical origin and is based on the assumptions of a change in the potential energy between the gas and adsorbed phases and a characteristic energy of a given solid. This equation yields a macroscopic behaviour of adsorption loading for a given pressure. In this paper, we apply a theory developed in our group to investigate the underlying mechanism of adsorption as an alternative to the macroscopic description using the DR equation. Using this approach, we are able to establish a detailed picture of the adsorption in the whole range of the micropore system. This is different from the DR equation, which provides an overall description of the process. (C) 2001 Elsevier Science Ltd. All rights reserved.
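For reference, the macroscopic DR description mentioned above can be evaluated directly; the standard form is W = W0 exp[-(A/(beta*E0))^2] with adsorption potential A = RT ln(P0/P). The parameter values below are illustrative only.

```python
import math

# Illustrative evaluation of the Dubinin-Radushkevich (DR) equation:
#   W = W0 * exp(-(A / (beta*E0))**2),  A = R*T*ln(P0/P)
# W: adsorbed volume, W0: limiting micropore volume, E0: characteristic
# energy, beta: affinity coefficient. Parameter values are hypothetical.

R = 8.314  # J/(mol K)

def dr_loading(p_rel, T, w0, beta_e0):
    """DR loading at relative pressure p_rel = P/P0."""
    A = R * T * math.log(1.0 / p_rel)      # adsorption potential, J/mol
    return w0 * math.exp(-(A / beta_e0) ** 2)

w0, beta_e0 = 0.45, 20_000.0   # cm3/g and J/mol, illustrative only
loadings = [dr_loading(p, 298.15, w0, beta_e0)
            for p in (0.01, 0.1, 0.5, 0.9)]
```

Loading rises monotonically toward W0 as relative pressure increases, which is the overall macroscopic behaviour the abstract contrasts with a pore-level description.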
Abstract:
This paper proposes an integrated methodology for modelling froth zone performance in batch and continuously operated laboratory flotation cells. The methodology is based on a semi-empirical approach which relates the overall flotation rate constant to the froth depth (FD) in the flotation cell; from this relationship, a froth zone recovery (Rf) can be extracted. Froth zone recovery, in turn, may be related to the froth retention time (FRT), defined as the ratio of froth volume to the volumetric flow rate of concentrate from the cell. An expansion of this relationship to account for particles recovered both by true flotation and entrainment provides a simple model that may be used to predict the froth performance in continuous tests from the results of laboratory batch experiments. Crown Copyright (C) 2002 Published by Elsevier Science B.V. All rights reserved.
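The bookkeeping above can be sketched as follows. The exponential dependence of froth zone recovery on FRT is a commonly assumed form used here purely for illustration; the paper's fitted relationship may differ, and the rate constant and decay parameter are hypothetical.

```python
import math

# Sketch of the froth-recovery bookkeeping: the overall flotation rate
# constant is the collection-zone constant scaled by froth recovery Rf.
# The exponential Rf(FRT) form and all parameter values are assumptions
# for illustration, not the paper's fitted model.

def froth_retention_time(froth_volume, concentrate_flow):
    """FRT = froth volume / volumetric concentrate flow rate."""
    return froth_volume / concentrate_flow

def froth_recovery(frt, beta):
    """Assumed form: Rf = exp(-beta * FRT)."""
    return math.exp(-beta * frt)

def overall_rate_constant(k_collection, rf):
    """Overall flotation rate constant scaled by froth recovery."""
    return k_collection * rf

frt = froth_retention_time(froth_volume=0.002, concentrate_flow=0.0005)
rf = froth_recovery(frt, beta=0.1)
k = overall_rate_constant(k_collection=1.2, rf=rf)
```

Deeper froth (longer FRT) lowers Rf and hence the observed rate constant, which is the qualitative trend the methodology exploits.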
Abstract:
A review of spontaneous rupture in thin films with tangentially immobile interfaces is presented that emphasizes the theoretical developments of film drainage and corrugation growth through the linearization of lubrication theory in a cylindrical geometry. Spontaneous rupture occurs when corrugations from adjacent interfaces become unstable and grow to a critical thickness. A corrugated interface is composed of a number of waveforms, and each waveform becomes unstable at a unique transition thickness. The onset of instability occurs at the maximum transition thickness, and it is shown that only upper and lower bounds of this thickness can be predicted from linear stability analysis. The upper bound is equivalent to the Frenkel criterion and is obtained from the zeroth order approximation of the H^3 term in the evolution equation. This criterion is determined solely by the film radius, interfacial tension and Hamaker constant. The lower bound is obtained from the first order approximation of the H^3 term in the evolution equation and is dependent on the film thinning velocity. A semi-empirical equation, referred to as the MTR equation, is obtained by combining the drainage theory of Manev et al. [J. Dispersion Sci. Technol. 18 (1997) 769] and the experimental measurements of Radoev et al. [J. Colloid Interface Sci. 95 (1983) 254], and is shown to provide accurate predictions of film thinning velocity near the critical thickness of rupture. The MTR equation permits the prediction of the lower bound of the maximum transition thickness based entirely on film radius, Plateau border radius, interfacial tension, temperature and Hamaker constant. The MTR equation extrapolates to the Reynolds equation under conditions when the Plateau border pressure is small, which provides a lower bound for the maximum transition thickness that is equivalent to the criterion of Gumerman and Homsy [Chem. Eng. Commun. 2 (1975) 27].
The relative accuracy of either bound is thought to depend on the amplitude of the hydrodynamic corrugations, and a semi-empirical correlation is also obtained that permits the amplitude to be calculated as a function of the upper and lower bounds of the maximum transition thickness. The relationship between the evolving theoretical developments is demonstrated by three film thickness master curves, which reduce to simple analytical expressions under limiting conditions when the drainage pressure drop is controlled by either the Plateau border capillary pressure or the van der Waals disjoining pressure. The master curves greatly simplify evaluation of the various theoretical predictions over the entire range of the linear approximation. Finally, it is shown that when the Frenkel criterion is used to assess film stability, recent studies reach conclusions that are contrary to the relevance of spontaneous rupture as a cell-opening mechanism in foams. (C) 2003 Elsevier Science B.V. All rights reserved.
Abstract:
The solubility of ethyl propionate, ethyl butyrate, and ethyl isovalerate in supercritical carbon dioxide was measured at temperatures ranging from 308.15 to 333.15 K and pressures ranging from 85 to 195 bar. At the same temperature, the solubility of these compounds increases with pressure. The crossover pressure region was also observed in this study. The experimental data were correlated by the semi-empirical Chrastil equation and the Peng-Robinson equation of state (EOS) using several mixing rules. The Peng-Robinson EOS gives better solubility predictions than the semi-empirical Chrastil equation. (C) 2002 Elsevier Science B.V. All rights reserved.
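The Chrastil correlation used above is linear in log coordinates, ln S = k ln(rho) + a/T + b, so fitting it is a simple least-squares problem. The sketch below generates synthetic data from assumed (k, a, b) values (all illustrative, not the paper's fitted parameters) and recovers the association number k.

```python
import math

# The semi-empirical Chrastil equation relates solute solubility S to
# solvent density rho and temperature T:
#   ln S = k*ln(rho) + a/T + b.
# Synthetic data generated with known (k, a, b) are refit by linear
# least squares; all parameter values are illustrative.

def chrastil_solubility(rho, T, k, a, b):
    return rho ** k * math.exp(a / T + b)

k_true, a_true, b_true = 5.2, -4000.0, -30.0
T = 318.15                                    # K
rhos = [300.0, 400.0, 500.0, 600.0, 700.0]    # kg/m3, illustrative
lnS = [math.log(chrastil_solubility(r, T, k_true, a_true, b_true))
       for r in rhos]
lnrho = [math.log(r) for r in rhos]

# At fixed T, the slope of ln S versus ln rho is k.
n = len(rhos)
mx = sum(lnrho) / n
my = sum(lnS) / n
k_fit = (sum((x - mx) * (y - my) for x, y in zip(lnrho, lnS))
         / sum((x - mx) ** 2 for x in lnrho))
```

Because the model is exactly linear in ln(rho) at fixed temperature, the fitted slope reproduces the assumed k.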
Abstract:
The reliability of measurement refers to unsystematic error in observed responses. Investigations of the prevalence of random error in stated estimates of willingness to pay (WTP) are important to an understanding of why tests of validity in contingent valuation (CV) can fail. However, published reliability studies have tended to adopt empirical methods that have practical and conceptual limitations when applied to WTP responses. This contention is supported in a review of contingent valuation reliability studies that demonstrates important limitations of existing approaches to WTP reliability. It is argued that empirical assessments of the reliability of contingent values may be better dealt with by using multiple indicators to measure the latent WTP distribution. This latent variable approach is demonstrated with data obtained from a WTP study for stormwater pollution abatement. Attitude variables were employed as a way of assessing the reliability of open-ended WTP (with benchmarked payment cards) for stormwater pollution abatement. The results indicated that participants' decisions to pay were reliably measured, but not the magnitude of the WTP bids. This finding highlights the need to better discern what is actually being measured in WTP studies. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
A review of thin film drainage models is presented in which the predictions of thinning velocities and drainage times are compared to reported values on foam and emulsion films found in the literature. Free standing films with tangentially immobile interfaces and suppressed electrostatic repulsion are considered, such as those studied in capillary cells. The experimental thinning velocities and drainage times of foams and emulsions are shown to be bounded by predictions from the Reynolds and the theoretical MTsR equations. The semi-empirical MTsR and the surface wave equations were the most consistently accurate with all of the films considered. These results are used in an accompanying paper to develop scaling laws that bound the critical film thickness of foam and emulsion films. (c) 2005 Elsevier B.V. All rights reserved.
Abstract:
This special issue represents a further exploration of some issues raised at a symposium entitled “Functional magnetic resonance imaging: From methods to madness” presented during the 15th annual Theoretical and Experimental Neuropsychology (TENNET XV) meeting in Montreal, Canada in June 2004. The special issue’s theme is methods and learning in functional magnetic resonance imaging (fMRI), and it comprises 6 articles (3 reviews and 3 empirical studies). The first (Amaro and Barker) provides a beginner’s guide to fMRI and the BOLD effect (perhaps an alternative title might have been “fMRI for dummies”). While fMRI is now commonplace, there are still researchers who have yet to employ it as an experimental method and need some basic questions answered before they venture into new territory. This article should serve them well. A key issue of interest at the symposium was how fMRI could be used to elucidate cerebral mechanisms responsible for new learning. The next 4 articles address this directly, with the first (Little and Thulborn) an overview of data from fMRI studies of category-learning, and the second from the same laboratory (Little, Shin, Siscol, and Thulborn) an empirical investigation of changes in brain activity occurring across different stages of learning. While a role for medial temporal lobe (MTL) structures in episodic memory encoding has been acknowledged for some time, the different experimental tasks and stimuli employed across neuroimaging studies have not surprisingly produced conflicting data in terms of the precise subregion(s) involved. The next paper (Parsons, Haut, Lemieux, Moran, and Leach) addresses this by examining effects of stimulus modality during verbal memory encoding.
Typically, BOLD fMRI studies of learning are conducted over short time scales; however, the fourth paper in this series (Olson, Rao, Moore, Wang, Detre, and Aguirre) describes an empirical investigation of learning occurring over a longer than usual period, achieved by employing a relatively novel technique called perfusion fMRI. This technique shows considerable promise for future studies. The final article in this special issue (de Zubicaray) represents a departure from the more familiar cognitive neuroscience applications of fMRI, instead describing how neuroimaging studies might be conducted to both inform and constrain information processing models of cognition.
Abstract:
In this paper we discuss implicit Taylor methods for stiff Ito stochastic differential equations. Based on the relationship between Ito stochastic integrals and backward stochastic integrals, we introduce three implicit Taylor methods: the implicit Euler-Taylor method with strong order 0.5, the implicit Milstein-Taylor method with strong order 1.0 and the implicit Taylor method with strong order 1.5. The mean-square stability properties of the implicit Euler-Taylor and Milstein-Taylor methods are much better than those of the corresponding semi-implicit Euler and Milstein methods and these two implicit methods can be used to solve stochastic differential equations which are stiff in both the deterministic and the stochastic components. Numerical results are reported to show the convergence properties and the stability properties of these three implicit Taylor methods. The stability analysis and numerical results show that the implicit Euler-Taylor and Milstein-Taylor methods are very promising methods for stiff stochastic differential equations.
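The appeal of implicit stepping for stiff SDEs can be illustrated with the standard drift-implicit Euler-Maruyama scheme on a linear test equation. Note this is only the general idea: the implicit Taylor methods discussed above additionally make the stochastic terms implicit via backward stochastic integrals, which is not reproduced here, and all parameter values are illustrative.

```python
import math
import random

# Drift-implicit Euler-Maruyama sketch for the stiff linear test SDE
#   dX = lam*X dt + mu*X dW,  with lam << 0.
# Each step solves X_{n+1} = X_n + lam*h*X_{n+1} + mu*X_n*dW
# for X_{n+1}, so the stiff drift is treated implicitly.

def implicit_euler_maruyama(x0, lam, mu, h, n_steps, rng):
    x = x0
    for _ in range(n_steps):
        dw = rng.gauss(0.0, math.sqrt(h))
        x = (x + mu * x * dw) / (1.0 - lam * h)
    return x

rng = random.Random(42)
# With mu = 0 the scheme reduces to deterministic implicit Euler, which
# remains stable even with a large step on a stiff problem (lam = -50,
# h = 0.5), where explicit Euler would blow up.
x_det = implicit_euler_maruyama(1.0, lam=-50.0, mu=0.0, h=0.5,
                                n_steps=20, rng=rng)
x_sde = implicit_euler_maruyama(1.0, lam=-50.0, mu=0.1, h=0.01,
                                n_steps=100, rng=rng)
```

For comparison, explicit Euler on the same deterministic problem multiplies the state by (1 + lam*h) = -24 per step, diverging immediately, while the implicit update divides by (1 - lam*h) = 26 and decays.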
Abstract:
Petrov-Galerkin methods are known to be versatile techniques for the solution of a wide variety of convection-dispersion transport problems, including those involving steep gradients, but have hitherto received little attention from chemical engineers. We illustrate the technique by means of the well-known problem of simultaneous diffusion and adsorption in a spherical sorbent pellet comprised of spherical, non-overlapping microparticles of uniform size and investigate the uptake dynamics. Solutions to adsorption problems exhibit steep gradients when macropore diffusion controls or micropore diffusion controls, and the application of classical numerical methods to such problems can present difficulties. In this paper, a semi-discrete Petrov-Galerkin finite element method for numerically solving adsorption problems with steep gradients in bidisperse solids is presented. The numerical solution was found to match the analytical solution when the adsorption isotherm is linear and the diffusivities are constant. Computed results for the Langmuir isotherm and non-constant diffusivity in the microparticle are numerically evaluated for comparison with results of a fitted-mesh collocation method, which was proposed by Liu and Bhatia (Comput. Chem. Engng. 23 (1999) 933-943). The new method is simple, highly efficient, and well-suited to a variety of adsorption and desorption problems involving steep gradients. (C) 2001 Elsevier Science Ltd. All rights reserved.
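The advantage of Petrov-Galerkin weighting on steep-gradient problems can be seen in a textbook one-dimensional example: streamline-upwind weighting of linear elements for steady convection-diffusion. This is a generic sketch of the technique, not the paper's bidisperse adsorption solver; the problem data are illustrative.

```python
import math

# Minimal 1D Petrov-Galerkin (streamline-upwind) finite element sketch
# for steady convection-diffusion  -eps*u'' + a*u' = 0, u(0)=0, u(1)=1,
# on a uniform mesh of linear elements. The upwinding parameter
# beta = coth(Pe) - 1/Pe, with Pe = a*h/(2*eps), makes the scheme
# nodally exact for this constant-coefficient problem.

def supg_solve(eps, a, n_elems):
    h = 1.0 / n_elems
    pe = a * h / (2.0 * eps)
    beta = 1.0 / math.tanh(pe) - 1.0 / pe
    eps_eff = eps + beta * a * h / 2.0      # added streamline diffusion
    d, c = eps_eff / h, a / 2.0
    # Interior stencil: (-d - c)*u[i-1] + 2d*u[i] + (c - d)*u[i+1] = 0.
    n = n_elems - 1                         # number of interior unknowns
    lower, diag, upper = -d - c, 2.0 * d, c - d
    rhs = [0.0] * n
    rhs[-1] -= upper * 1.0                  # boundary value u(1) = 1
    # Thomas algorithm for the tridiagonal system.
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = upper / diag
    dp[0] = rhs[0] / diag
    for i in range(1, n):
        m = diag - lower * cp[i - 1]
        cp[i] = upper / m
        dp[i] = (rhs[i] - lower * dp[i - 1]) / m
    u = [0.0] * n
    u[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        u[i] = dp[i] - cp[i] * u[i + 1]
    return [0.0] + u + [1.0]

def exact(x, eps, a):
    # u(x) = (exp(a*(x-1)/eps) - exp(-a/eps)) / (1 - exp(-a/eps))
    e = math.exp(-a / eps)
    return (math.exp(a * (x - 1.0) / eps) - e) / (1.0 - e)

eps, a, n = 0.01, 1.0, 10                   # mesh Peclet number Pe = 5
u_h = supg_solve(eps, a, n)
err = max(abs(ui - exact(i / n, eps, a)) for i, ui in enumerate(u_h))
```

On this coarse mesh (Pe = 5) a standard Galerkin discretization oscillates, while the upwinded scheme reproduces the sharp boundary layer at x = 1 without wiggles.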
Abstract:
This article describes a new test method for the assessment of the severity of environmental stress cracking of biomedical polyurethanes in a manner that minimizes the degree of subjectivity involved. The effect of applied strain and acetone pre-treatment on degradation of Pellethane 2363 80A and Pellethane 2363 55D polyurethanes under in vitro and in vivo conditions is studied. The results are presented using a magnification-weighted image rating system that allows the semi-quantitative rating of degradation based on distribution and severity of surface damage. Devices for applying controlled strain to both flat sheet and tubing samples are described. The new rating system consistently discriminated between the effects of acetone pre-treatments, strain and exposure times in both in vitro and in vivo experiments. As expected, P80A underwent considerable stress cracking compared with P55D. P80A produced similar stress crack ratings in both in vivo and in vitro experiments; however, P55D performed worse under in vitro conditions than in vivo. This result indicated that care must be taken when interpreting in vitro results in the absence of in vivo data. (C) 2001 Elsevier Science Ltd. All rights reserved.