11 results for Minimum Variance Model
in University of Queensland eSpace - Australia
Abstract:
Measurements of mean and fluctuating velocity and temperature and their self- and cross-products to the third order are presented for a heated axisymmetric air jet. Froude numbers in the range of 3500-13,190, Reynolds numbers in the range of 3470-8500 and non-dimensional streamwise distances, X*, from 0.27 to 1.98 are covered by the data. It was found that turbulence intensity decreases for the heated jet in the region between the inertia-dominated and the buoyancy-dominated regions, which is contrary to findings with helium jets mixing with ambient air to produce density fluctuations. The effects of heating on the turbulent kinetic energy budget and the temperature variance budget show small differences for the inertia-dominated region and the intermediate region, which help to explain the transition process to the far-field plume region. Constants are evaluated for the isotropic eddy diffusivity and generalised gradient hypothesis models as well as the scalar variance model. No significant effect of heating on the dissipation time-scale ratio was found. A novel wire array with an inclined cold wire was used. Measurements obtained with this probe are found to lead to asymmetries in some of the higher-order products. Further investigation suggested that the asymmetries are attributable to an as yet unreported interference effect produced by the leading prong of the inclined temperature wire. The effect may also have implications for inclined velocity wires which contain a temperature component when used in heated flows. (C) 2002 Elsevier Science Inc. All rights reserved.
Abstract:
The classical model of surface layering followed by capillary condensation during adsorption in mesopores is modified here by consideration of the adsorbate-solid interaction potential. The new theory accurately predicts the capillary coexistence curve as well as pore criticality, matching that predicted by density functional theory. The model also satisfactorily predicts the isotherm for nitrogen adsorption at 77.4 K on MCM-41 material of various pore sizes, synthesized and characterized in our laboratory, including the multilayer region, using only data on the variation of condensation pressures with pore diameter. The results indicate a minimum mesopore diameter of 14.1 Å for the surface layering model to hold, below which micropore filling must occur, and a minimum pore diameter of 34.2 Å for mechanical stability of the hemispherical meniscus during desorption. For pores between these two sizes, reversible condensation is predicted to occur, in accord with the experimental data for nitrogen adsorption on MCM-41 at 77.4 K.
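For orientation, the classical picture modified above combines a multilayer adsorbed film of thickness t(P) on the pore wall with Kelvin-type condensation in the remaining core. Schematically, for a cylindrical pore of radius r_p (this is the standard textbook relation, not the new theory developed in the paper, and the symbols are generic):

```latex
\ln\frac{P}{P_0} \;=\; -\,\frac{c\,\gamma V_L}{R T\,\bigl(r_p - t(P)\bigr)},
\qquad
c = \begin{cases} 1, & \text{cylindrical meniscus (condensation)}\\ 2, & \text{hemispherical meniscus (desorption)} \end{cases}
```

Here γ is the liquid surface tension and V_L its molar volume; the paper's modification accounts for the adsorbate-solid interaction potential, which becomes decisive at the small pore diameters quoted above.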
Abstract:
The coefficient of variance (CV; the standard deviation of response time divided by the mean response time) is a measure of response time variability that corrects for differences in mean response time (RT) (Segalowitz & Segalowitz, 1993). A positive correlation between decreasing mean RTs and CVs (rCV-RT) has been proposed as an indicator of L2 automaticity and, more generally, as an index of processing efficiency. The current study evaluates this claim by examining lexical decision performance by individuals from three levels of English proficiency (Intermediate ESL, Advanced ESL and L1 controls) on stimuli from four levels of item familiarity, as defined by frequency of occurrence. A three-phase model of skill development defined by changing rCV-RT values was tested. Results showed that RTs and CVs systematically decreased as a function of increasing proficiency and frequency levels, with the rCV-RT serving as a stable indicator of individual differences in lexical decision performance. The rCV-RT and automaticity/restructuring account is discussed in light of the findings. The CV is also evaluated as a more general quantitative index of processing efficiency in the L2.
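As a rough illustration of how the measure is computed (with invented data, not the study's materials or analysis), the following Python sketch derives each participant's mean RT and CV and then the rCV-RT correlation across participants:

```python
import numpy as np

# Hypothetical lexical-decision response times (ms): one row of trials per participant.
rng = np.random.default_rng(0)
rts = rng.lognormal(mean=6.5, sigma=0.25, size=(30, 80))  # 30 participants x 80 trials

mean_rt = rts.mean(axis=1)                # mean response time per participant
cv = rts.std(axis=1, ddof=1) / mean_rt    # CV = SD of RT / mean RT

# rCV-RT: correlation between participants' mean RTs and their CVs.
r_cv_rt = np.corrcoef(mean_rt, cv)[0, 1]
print(f"rCV-RT = {r_cv_rt:.2f}")
```

A positive rCV-RT computed this way across learners (faster responders also showing proportionally lower variability) is what the automaticity account above predicts.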
Abstract:
A mathematical model was developed to estimate HIV incidence in NSW prisons. Data included: duration of imprisonment; number of inmates using each needle; lower and higher numbers of shared injections per IDU per week; proportion of IDUs using bleach; efficacy of bleach; HIV prevalence and probability of infection. HIV prevalence in IDUs in prison was estimated to have risen from 0.8% to 5.7% (or 12.2%) over 180 weeks when using the lower (or higher) value for the frequency of shared injections. The estimated minimum (and maximum) number of IDU inmates infected with HIV in NSW prisons was 38 (and 152) in 1993 according to the model. These figures require confirmation by seroincidence studies. (C) 1998 Published by Elsevier Science Ireland Ltd. All rights reserved.
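The abstract lists the model's inputs but not its equations, so the following Python sketch is only a schematic weekly transmission update consistent with that list; every parameter value here is an invented placeholder, not a figure from the published model.

```python
# Schematic weekly HIV-transmission update among injecting drug users (IDUs)
# sharing needles in prison. Illustrative only; all values are hypothetical.
weeks = 180
prevalence = 0.008                 # initial HIV prevalence among IDU inmates
shared_per_week = 2                # shared injections per IDU per week ("lower" scenario)
p_transmit = 0.0067                # per-injection transmission probability
bleach_use, bleach_efficacy = 0.5, 0.9

for _ in range(weeks):
    # Chance that a shared injection uses contaminated equipment not rendered safe by bleach.
    p_risky = prevalence * (1 - bleach_use * bleach_efficacy)
    # Weekly infection probability for a susceptible IDU.
    p_infect = 1 - (1 - p_risky * p_transmit) ** shared_per_week
    prevalence += (1 - prevalence) * p_infect

print(f"Prevalence after {weeks} weeks: {prevalence:.1%}")
```

Running the same loop with the higher sharing frequency would correspond to the upper of the two scenarios described above.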
Abstract:
The conventional convection-dispersion (also called axial dispersion) model is widely used to interrelate hepatic availability (F) and clearance (Cl) with the morphology and physiology of the liver and to predict effects such as changes in liver blood flow on F and Cl. An extended form of the convection-dispersion model has been developed to adequately describe the outflow concentration-time profiles for vascular markers at both short and long times after bolus injections into perfused livers. The model, based on flux concentration and a convolution of catheters and large vessels, assumes that solute elimination in hepatocytes follows either fast distribution into, or radial diffusion within, hepatocytes. The model includes a secondary vascular compartment, postulated to be interconnecting sinusoids. Analysis of the mean hepatic transit time (MTT) and normalized variance (CV2) of solutes with extraction showed that the predictions of MTT and CV2 from the extended and conventional models are essentially identical, irrespective of the magnitude of the rate constants representing permeability, volume, and clearance parameters, provided that there is significant hepatic extraction. In conclusion, the application of the newly developed extended convection-dispersion model has shown that the unweighted conventional convection-dispersion model can be used to describe the disposition of extracted solutes and, in particular, to estimate hepatic availability and clearance in both experimental and clinical situations.
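For context, the conventional dispersion model referred to above is commonly written in terms of a dispersion number (D_N) and an efficiency number (R_N). The Python sketch below evaluates that standard closed-boundary expression with illustrative parameter values; it is not the extended model developed in the paper.

```python
import math

def hepatic_availability(dn: float, rn: float) -> float:
    """Conventional axial dispersion model (closed boundary conditions):
    F = 4a / [(1 + a)^2 exp((a - 1)/(2 Dn)) - (1 - a)^2 exp(-(a + 1)/(2 Dn))],
    where a = sqrt(1 + 4 Rn Dn)."""
    a = math.sqrt(1 + 4 * rn * dn)
    return 4 * a / ((1 + a) ** 2 * math.exp((a - 1) / (2 * dn))
                    - (1 - a) ** 2 * math.exp(-(a + 1) / (2 * dn)))

# Illustrative values: Dn = 0.17, Rn = 2.0.
f = hepatic_availability(0.17, 2.0)
print(f"F = {f:.3f}; extraction E = {1 - f:.3f}; Cl = Q * E for hepatic blood flow Q")
```

As D_N tends to zero this expression reduces to the parallel-tube model, and as D_N becomes large it approaches the well-stirred model.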
Abstract:
Experimental data for E. coli debris size reduction during high-pressure homogenisation at 55 MPa are presented. A mathematical model based on grinding theory is developed to describe the data. The model is based on first-order breakage and compensation conditions. It does not require any assumption of a specified distribution for debris size and can be used given information on the initial size distribution of whole cells and the disruption efficiency during homogenisation. The number of homogeniser passes is incorporated into the model and used to describe the size reduction of non-induced stationary and induced E. coli cells during homogenisation. Regressing the results to the model equations gave an excellent fit to experimental data (> 98.7% of variance explained for both fermentations), confirming the model's potential for predicting size reduction during high-pressure homogenisation. This study provides a means to optimise both homogenisation and disc-stack centrifugation conditions for recombinant product recovery. (C) 1997 Elsevier Science Ltd.
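The abstract does not reproduce the model equations. As a hedged illustration only, first-order breakage in the number of homogeniser passes N is often expressed by letting the cumulative fraction of debris coarser than each size decay exponentially with N; all sizes, fractions and rate constants below are invented.

```python
import numpy as np

# Hypothetical debris sizes (micrometres), initial cumulative oversize fractions,
# and size-dependent breakage rate constants per homogeniser pass.
d = np.array([0.2, 0.4, 0.6, 0.8])
f0 = np.array([0.90, 0.60, 0.30, 0.10])
k = np.array([0.10, 0.25, 0.40, 0.60])

def oversize_after(n_passes: int) -> np.ndarray:
    """First-order breakage in pass number: F(d, N) = F(d, 0) * exp(-k(d) * N)."""
    return f0 * np.exp(-k * n_passes)

for n in (1, 3, 5):
    print(f"N = {n}: " + ", ".join(f"{s:.1f} um -> {frac:.3f}"
                                   for s, frac in zip(d, oversize_after(n))))
```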
Abstract:
In the past century, the debate over whether or not density-dependent factors regulate populations has generally focused on changes in mean population density, ignoring the spatial variance around the mean as unimportant noise. In an attempt to provide a different framework for understanding population dynamics based on individual fitness, this paper discusses the crucial role of spatial variability itself on the stability of insect populations. The advantages of this method are the following: (1) it is founded on evolutionary principles rather than post hoc assumptions; (2) it erects hypotheses that can be tested; and (3) it links disparate ecological schools, including spatial dynamics, behavioral ecology, preference-performance, and plant apparency into an overall framework. At the core of this framework, habitat complexity governs insect spatial variance, which in turn determines population stability. First, the minimum risk distribution (MRD) is defined as the spatial distribution of individuals that results in the minimum number of premature deaths in a population given the distribution of mortality risk in the habitat (and, therefore, in maximized population growth). The greater the divergence of actual spatial patterns of individuals from the MRD, the greater the reduction of population growth and size from high, unstable levels. Then, based on extensive data from 29 populations of the processionary caterpillar, Ochrogaster lunifer, four steps are used to test the effect of habitat interference on population growth rates. (1) The costs (increasing the risk of scramble competition) and benefits (decreasing the risk of inverse density-dependent predation) of egg and larval aggregation are quantified. (2) These costs and benefits, along with the distribution of resources, are used to construct the MRD for each habitat. (3) The MRD is used as a benchmark against which the actual spatial pattern of individuals is compared. The degree of divergence of the actual spatial pattern from the MRD is quantified for each of the 29 habitats. (4) Finally, indices of habitat complexity are used to provide highly accurate predictions of spatial divergence from the MRD, showing that habitat interference reduces population growth rates from high, unstable levels. The reason for the divergence appears to be that high levels of background vegetation (vegetation other than host plants) interfere with female host-searching behavior. This leads to a spatial distribution of egg batches with high mortality risk, and therefore lower population growth. Knowledge of the MRD in other species should be a highly effective means of predicting trends in population dynamics. Species with high divergence between their actual spatial distribution and their MRD may display relatively stable dynamics at low population levels. In contrast, species with low divergence should experience high levels of intragenerational population growth leading to frequent habitat-wide outbreaks and unstable dynamics in the long term. Six hypotheses, erected under the framework of spatial interference, are discussed, and future tests are suggested.
Abstract:
We introduce a model for the dynamics of a patchy population in a stochastic environment and derive a criterion for its persistence. This criterion is based on the geometric mean (GM) through time of the spatial-arithmetic mean of growth rates. For the population to persist, the GM has to be greater than or equal to 1. The GM increases with the number of patches (because the sampling error is reduced) and decreases with both the variance and the spatial covariance of growth rates. We derive analytical expressions for the minimum number of patches (and the maximum harvesting rate) required for the persistence of the population. As the magnitude of environmental fluctuations increases, the number of patches required for persistence increases, and the fraction of individuals that can be harvested decreases. The novelty of our approach is that we focus on Malthusian local population dynamics with high dispersal and strong environmental variability from year to year. Unlike previous models of patchy populations that assume an infinite number of patches, we focus specifically on the effect that the number of patches has on population persistence. Our work is therefore directly relevant to patchily distributed organisms that are restricted to a small number of habitat patches.
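A small simulation, with invented parameter values, illustrates the persistence criterion stated above: the geometric mean over years of the spatial arithmetic mean of patch growth rates must be at least 1, and it rises as patches are added and falls as harvesting or environmental variance increases.

```python
import numpy as np

rng = np.random.default_rng(1)
years, harvest = 2000, 0.05              # fraction harvested each year (hypothetical)
mean_log, sd_log, rho = 0.0, 0.6, 0.3    # lognormal growth rates, correlated across patches

def gm_of_spatial_mean(n_patches: int) -> float:
    cov = sd_log ** 2 * (rho * np.ones((n_patches, n_patches))
                         + (1 - rho) * np.eye(n_patches))
    log_r = rng.multivariate_normal(np.full(n_patches, mean_log), cov, size=years)
    spatial_mean = np.exp(log_r).mean(axis=1) * (1 - harvest)  # arithmetic mean over patches
    return float(np.exp(np.log(spatial_mean).mean()))          # geometric mean over years

for n in (1, 2, 5, 20):
    gm = gm_of_spatial_mean(n)
    print(f"{n:>2} patches: GM = {gm:.3f} ({'persists' if gm >= 1 else 'declines'})")
```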
Abstract:
This article develops a weighted least squares version of Levene's test of homogeneity of variance for a general design, available for both univariate and multivariate situations. When the design is balanced, the univariate and two common multivariate test statistics turn out to be proportional to the corresponding ordinary least squares test statistics obtained from an analysis of variance of the absolute values of the standardized mean-based residuals from the original analysis of the data. The constant of proportionality is simply a design-dependent multiplier (which does not necessarily tend to unity). Explicit results are presented for randomized block and Latin square designs and are illustrated for factorial treatment designs and split-plot experiments. The distribution of the univariate test statistic is close to a standard F-distribution, although it can be slightly underdispersed. For a complex design, the test assesses homogeneity of variance across blocks, treatments, or treatment factors and offers an objective interpretation of the residual plot.
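For reference, the classical unweighted one-way form of Levene's test that the article generalises is simply an analysis of variance of the absolute deviations from the group means. A minimal Python sketch (hypothetical data) showing that equivalence with SciPy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
groups = [rng.normal(0, s, size=25) for s in (1.0, 1.0, 2.0)]  # third group has larger spread

# Levene's test = one-way ANOVA on |x_ij - mean_i| (center='mean' gives the classical form).
abs_dev = [np.abs(g - g.mean()) for g in groups]
f_anova, p_anova = stats.f_oneway(*abs_dev)
w, p = stats.levene(*groups, center='mean')

print(f"ANOVA on |residuals|: F = {f_anova:.2f}, p = {p_anova:.4f}")
print(f"scipy.stats.levene:   W = {w:.2f}, p = {p:.4f}")
```

The article's contribution is to replace this ordinary least squares step with a weighted least squares analysis appropriate to general (including multivariate and blocked) designs.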
Abstract:
Chest clapping, vibration, and shaking were studied in 10 physiotherapists who applied these techniques on an anesthetized animal model. Hemodynamic variables (such as heart rate, blood pressure, pulmonary artery pressure, and right atrial pressure) were measured during the application of these techniques to verify claims of adverse events. In addition, expired tidal volume and peak expiratory flow rate were measured to ascertain effects of these techniques. Physiotherapists in this study applied chest clapping at a rate of 6.2 +/- 0.9 Hz, vibration at 10.5 +/- 2.3 Hz, and shaking at 6.2 +/- 2.3 Hz. With the use of these rates, esophageal pressure swings of 8.8 +/- 5.0, 0.7 +/- 0.3, and 1.4 +/- 0.7 mmHg resulted from clapping, vibration, and shaking, respectively. Variability in the rates and forces generated by these techniques was related to physiotherapists' characteristics; in particular, clinical experience accounted for 80% of the variance in shaking force (P = 0.003). Application of these techniques by physiotherapists was found to have no significant effects on hemodynamic and most ventilatory variables in this study. From this study, we conclude that chest clapping, vibration, and shaking 1) can be consistently performed by physiotherapists; 2) are significantly related to physiotherapists' characteristics, particularly clinical experience; and 3) caused no significant hemodynamic effects.
Abstract:
This paper presents a new model, based on thermodynamics and molecular interactions between molecules, to describe the vapour-liquid phase equilibria and surface tension of pure components. The model assumes that the bulk fluid can be characterised as a set of parallel layers. Because of this molecular structure, we term the model the molecular layer structure theory (MLST). Each layer has two energetic components. One is the interaction energy of one molecule of that layer with all surrounding layers. The other component is the intra-layer Helmholtz free energy, which accounts for the internal energy and the entropy of that layer. The equilibrium between the two coexisting phases is derived from the minimum of the grand potential, and the surface tension is calculated as the excess of the Helmholtz energy of the system. We test this model with a number of components (argon, krypton, ethane, n-butane, iso-butane, ethylene and sulphur hexafluoride), and the results are very satisfactory. (C) 2002 Elsevier Science B.V. All rights reserved.
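In generic terms (the paper's actual expressions are not given in the abstract, so the symbols below are schematic placeholders), a layer model of this kind minimises a grand potential built from the intra-layer free energy and the inter-layer interactions, and reads the surface tension off as an excess free energy per unit area:

```latex
\Omega[\{\rho_i\}] \;=\; \sum_i \Big[\, a(\rho_i) \;+\; \tfrac{1}{2}\,\rho_i \sum_{j \neq i} u_{ij}\,\rho_j \;-\; \mu\,\rho_i \Big],
\qquad
\frac{\partial \Omega}{\partial \rho_i} = 0 \ \ \forall i,
\qquad
\gamma \;=\; \frac{F[\{\rho_i\}] - F_{\mathrm{bulk}}}{A}
```

where ρ_i is the density of layer i, a(ρ_i) its intra-layer Helmholtz free energy, u_ij the interaction between layers i and j, μ the chemical potential, and A the interfacial area.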