62 results for Empirical Flow Models


Relevance: 30.00%

Abstract:

The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences were evident in the sensitivity of different measures for detecting the genetic health of degraded populations. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation, which can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate the loss of genetic diversity and assist long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Suggested future research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and, most importantly, combining neutral marker, pollination dynamics, ecological consequences and progeny fitness assessments within single studies.

Relevance: 30.00%

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
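The two-component mixture calculation described above can be sketched in a few lines. The densities, the prior probability pi0, and the alternative parameters below are illustrative assumptions, not estimates from any real microarray data:

```python
import math

def norm_pdf(z, mu=0.0, sd=1.0):
    # Normal density, used for both mixture components here.
    return math.exp(-0.5 * ((z - mu) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def local_fdr(z, pi0=0.9, mu1=3.0, sd1=1.0):
    """Local false discovery rate under a two-component mixture
    f(z) = pi0*f0(z) + (1 - pi0)*f1(z): the posterior probability that a
    gene with test statistic z is NOT differentially expressed.
    pi0, mu1 and sd1 are hypothetical values for illustration."""
    f0 = norm_pdf(z)             # null density
    f1 = norm_pdf(z, mu1, sd1)   # assumed alternative density
    return pi0 * f0 / (pi0 * f0 + (1 - pi0) * f1)

# A gene with z near 0 is almost surely null; a large z gives a small local fdr.
print(local_fdr(0.0), local_fdr(4.0))
```

Averaging the local fdr over the genes declared significant gives the implied global false discovery rate of the decision rule, which is how the bound mentioned above can be enforced.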

Relevance: 30.00%

Abstract:

Subsequent to the influential paper of [Chan, K.C., Karolyi, G.A., Longstaff, F.A., Sanders, A.B., 1992. An empirical comparison of alternative models of the short-term interest rate. Journal of Finance 47, 1209-1227], the generalised method of moments (GMM) has been a popular technique for estimation and inference relating to continuous-time models of the short-term interest rate. GMM has been widely employed to estimate model parameters and to assess the goodness-of-fit of competing short-rate specifications. The current paper conducts a series of simulation experiments to document the bias and precision of GMM estimates of short-rate parameters, as well as the size and power of the J-test of over-identifying restrictions [Hansen, L.P., 1982. Large sample properties of generalised method of moments estimators. Econometrica 50, 1029-1054]. While the J-test appears to have appropriate size and good power in sample sizes commonly encountered in the short-rate literature, GMM estimates of the speed of mean reversion are shown to be severely biased. Consequently, it is dangerous to draw strong conclusions about the strength of mean reversion using GMM. In contrast, the parameter capturing the levels effect, which is important in differentiating between competing short-rate specifications, is estimated with little bias. (c) 2006 Elsevier B.V. All rights reserved.
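The small-sample bias in the speed of mean reversion can be reproduced with a much simpler estimator than GMM. The sketch below simulates a Vasicek-type short rate via its exact AR(1) discretisation and estimates the mean-reversion speed kappa by OLS; all parameter values are hypothetical, and OLS stands in for GMM only to illustrate the same phenomenon (the estimated speed of mean reversion is biased upward in samples of the length common in this literature):

```python
import math
import random

random.seed(1)
kappa_true, theta, sigma, dt = 0.5, 0.05, 0.02, 1.0 / 12  # hypothetical annualised values
n_obs, n_reps = 300, 400                                  # 25 years of monthly data

def simulate_path():
    """Exact AR(1) discretisation of dr = kappa*(theta - r)dt + sigma dW."""
    a = math.exp(-kappa_true * dt)
    sd = sigma * math.sqrt((1 - a * a) / (2 * kappa_true))
    r, path = theta, [theta]
    for _ in range(n_obs):
        r = theta + a * (r - theta) + random.gauss(0.0, sd)
        path.append(r)
    return path

def estimate_kappa(path):
    """OLS AR(1) slope on demeaned data; kappa_hat = -ln(a_hat)/dt."""
    m = sum(path) / len(path)
    x = [r - m for r in path[:-1]]
    y = [r - m for r in path[1:]]
    a_hat = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)
    return -math.log(a_hat) / dt

estimates = [estimate_kappa(simulate_path()) for _ in range(n_reps)]
mean_kappa = sum(estimates) / n_reps
print(mean_kappa)  # on average well above kappa_true: upward bias in mean reversion
```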

Relevance: 30.00%

Abstract:

We develop foreign bank technical, cost and profit efficiency models for particular application with data envelopment analysis (DEA). Key motivations for the paper are (a) the often-observed practice of choosing inputs and outputs where the selection process is poorly explained and linkages to theory are unclear, and (b) foreign bank productivity analysis, which has been neglected in DEA banking literature. The main aim is to demonstrate a process grounded in finance and banking theories for developing bank efficiency models, which can bring comparability and direction to empirical productivity studies. We expect this paper to foster empirical bank productivity studies.

Relevance: 30.00%

Abstract:

Network building and exchange of information by people within networks is crucial to the innovation process. Contrary to older models, in social networks the flow of information is noncontinuous and nonlinear. There are critical barriers to information flow that operate in a problematic manner. New models and new analytic tools are needed for these systems. This paper introduces the concept of virtual circuits and draws on recent concepts of network modelling and design to introduce a probabilistic switch theory that can be described using matrices. It can be used to model multistep information flow between people within organisational networks, to provide formal definitions of efficient and balanced networks and to describe distortion of information as it passes along human communication channels. The concept of multi-dimensional information space arises naturally from the use of matrices. The theory and the use of serial diagonal matrices have applications to organisational design and to the modelling of other systems. It is hypothesised that opinion leaders or creative individuals are more likely to emerge at information-rich nodes in networks. A mathematical definition of such nodes is developed and it does not invariably correspond with centrality as defined by early work on networks.
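The paper's probabilistic switch theory and serial diagonal matrices are its own formalism; as a generic illustration of the matrix idea, the sketch below propagates a message through a small organisational network using a row-stochastic pass-on matrix (all probabilities hypothetical):

```python
# Hypothetical 3-person network: P[i][j] is the probability that a message
# held by person i is passed to person j in one step (each row sums to 1).
P = [[0.0, 0.7, 0.3],
     [0.1, 0.0, 0.9],
     [0.5, 0.5, 0.0]]

def step(v, P):
    """One multiplication v*P: redistribute the probability mass describing
    where the message currently sits."""
    n = len(P)
    return [sum(v[i] * P[i][j] for i in range(n)) for j in range(n)]

v = [1.0, 0.0, 0.0]  # the message starts with person 0
for _ in range(3):   # three communication steps
    v = step(v, P)
print(v)
```

Information-rich nodes can then be characterised by how much probability mass repeatedly concentrates on them across steps, which, as the abstract notes, need not coincide with centrality as defined by early network work.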

Relevance: 30.00%

Abstract:

In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs; non-static output quantities, input prices and costs of adjustment; technological change; quasi-fixed inputs; and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.

Relevance: 30.00%

Abstract:

Water-sampler equilibrium partitioning coefficients and aqueous boundary layer mass transfer coefficients for atrazine, diuron, hexazinone and fluometuron onto C18 and SDB-RPS Empore disk-based aquatic passive samplers have been determined experimentally under a laminar flow regime (Re = 5400). The method involved accelerating the time to equilibrium of the samplers by exposing them to three water concentrations, decreasing stepwise to 50% and then 25% of the original concentration. Assuming first-order Fickian kinetics across a rate-limiting aqueous boundary layer, both parameters are determined computationally by unconstrained nonlinear optimization. In addition, a method of estimating mass transfer coefficients (and therefore sampling rates) using the dimensionless Sherwood correlation developed for laminar flow over a flat plate is applied. For each of the herbicides, this correlation is validated to within 40% of the experimental data. The study demonstrates that for trace concentrations (below 0.1 mu g/L) and these flow conditions, a naked Empore disk performs well as an integrative sampler over short deployments (up to 7 days) for the range of polar herbicides investigated. The SDB-RPS disk allows a longer integrative period than the C18 disk due to its higher sorbent mass and/or its more polar sorbent chemistry. This work also suggests that for certain passive sampler designs, empirical estimation of sampling rates may be possible using correlations that have been available in the chemical engineering literature for some time.
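The flat-plate correlation referred to above is the standard laminar result Sh = 0.664 Re^(1/2) Sc^(1/3). The sketch below turns it into a sampling-rate estimate; the diffusivity, disk dimension and free-stream velocity are illustrative assumptions chosen only so that Re lands near the paper's 5400, not values from the study:

```python
import math

# Hypothetical inputs for illustration.
D = 5.0e-10   # m^2/s, aqueous diffusivity of a polar herbicide (assumed)
nu = 1.0e-6   # m^2/s, kinematic viscosity of water
L = 0.047     # m, Empore disk diameter as characteristic length (assumed)
U = 0.115     # m/s, free-stream velocity chosen to give Re near 5400

Re = U * L / nu                          # Reynolds number
Sc = nu / D                              # Schmidt number
Sh = 0.664 * math.sqrt(Re) * Sc ** (1 / 3)  # laminar flat-plate correlation
k = Sh * D / L                           # boundary layer mass transfer coeff., m/s

A = math.pi * (L / 2) ** 2               # disk face area, m^2
Rs_L_per_day = k * A * 1000 * 86400      # sampling rate, litres per day
print(Re, Rs_L_per_day)
```

The resulting sampling rate of order 1 L/day is in the range typically reported for Empore-disk passive samplers, which is consistent with the abstract's point that such correlations can replace calibration in some designs.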

Relevance: 30.00%

Abstract:

The effect of acceleration skewness on the time-averaged sheet flow sediment transport rate q_s is analysed using new data which have acceleration skewness and superimposed currents but no boundary layer streaming. Sediment mobilising forces due to drag and to acceleration (similar to pressure gradients) are weighted by the cosine and sine, respectively, of the angle phi_tau. phi_tau = 0 thus corresponds to drag-dominated sediment transport, q_s ~ |u_inf| u_inf, while phi_tau = 90 degrees corresponds to total domination by the pressure gradients, q_s ~ du_inf/dt. Using the optimal angle, phi_tau = 51 degrees, based on those data, good agreement is subsequently found with data that have strong influence from boundary layer streaming. Good agreement is also maintained with the large body of U-tube data simulating sine waves with superimposed currents and second-order Stokes waves, all of which have zero acceleration skewness. The recommended model can be applied to irregular waves of arbitrary shape as long as the assumption of negligible time lag between forcing and sediment transport rate is valid. With respect to irregular waves, the model is much easier to apply than the competing wave-by-wave models. Issues for further model development are identified through a comprehensive data review.
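The cosine/sine weighting of drag against acceleration can be sketched schematically. This is only an illustration of the weighting idea, not the calibrated transport model; the timescale T that gives the acceleration term velocity-squared dimensions is a hypothetical placeholder:

```python
import math

def mobilising_forcing(u, dudt, phi_deg=51.0, T=0.2):
    """Schematic weighting of the drag term cos(phi)*|u|u against the
    acceleration/pressure-gradient term sin(phi)*T*du/dt.
    phi_deg = 0 recovers pure drag forcing; phi_deg = 90 leaves only the
    acceleration term. T (seconds) is a hypothetical scale that gives the
    second term velocity-squared dimensions."""
    phi = math.radians(phi_deg)
    return math.cos(phi) * abs(u) * u + math.sin(phi) * T * dudt

# At the optimal 51 degrees, both free-stream velocity u and its time
# derivative du/dt contribute, so acceleration-skewed waves transport
# more sediment than velocity-symmetric ones of the same amplitude.
print(mobilising_forcing(1.2, 3.0))
```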

Relevance: 30.00%

Abstract:

Numerical simulations of turbulence-driven flow in a dense medium cyclone with magnetite medium have been conducted using Fluent. The predicted air core shape and diameter were found to be close to the experimental results measured by gamma-ray tomography. It is possible that the large eddy simulation (LES) turbulence model with the Mixture multi-phase model can be used to predict the air/slurry interface accurately, although the LES may need a finer grid. Multi-phase simulations (air/water/medium) show appropriate medium segregation effects but over-predict the level of segregation compared to that measured by gamma-ray tomography, in particular over-predicting medium concentrations near the wall. Further, we investigated the prediction of axial segregation of magnetite using the LES turbulence model together with the multi-phase mixture model and viscosity corrections according to the feed particle loading factor. Addition of lift forces and viscosity corrections improved the predictions, especially near the wall. Predicted density profiles are very close to gamma-ray tomography data, showing a clear density drop near the wall. The effect of the size distribution of the magnetite has been studied in detail. It is interesting to note that the ultra-fine magnetite sizes (i.e. 2 and 7 mu m) are distributed uniformly throughout the cyclone. As the size of magnetite increases, more segregation of magnetite occurs close to the wall. The cut size (d(50)) of the magnetite segregation is 32 mu m, which is expected with the superfine magnetite feed size distribution. At higher feed densities the agreement between the [Dungilson, 1999; Wood, J.C., 1990. A performance model for coal-washing dense medium cyclones, Ph.D. Thesis, JKMRC, University of Queensland] correlations and the CFD is reasonably good, but the overflow density is lower than the model predictions. It is believed that the excessive underflow volumetric flow rates are responsible for the under-prediction of the overflow density. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

DEM modelling of the motion of coarse fractions of the charge inside SAG mills has now been well established for more than a decade. In these models the effect of slurry has broadly been ignored due to its complexity. Smoothed particle hydrodynamics (SPH) provides a particle based method for modelling complex free surface fluid flows and is well suited to modelling fluid flow in mills. Previous modelling has demonstrated the powerful ability of SPH to capture dynamic fluid flow effects such as lifters crashing into slurry pools, fluid draining from lifters, flow through grates and pulp lifter discharge. However, all these examples were limited by the ability to model only the slurry in the mill without the charge. In this paper, we represent the charge as a dynamic porous media through which the SPH fluid is then able to flow. The porous media properties (specifically the spatial distribution of porosity and velocity) are predicted by time averaging the mill charge predicted using a large scale DEM model. This allows prediction of transient and steady state slurry distributions in the mill and allows its variation with operating parameters, slurry viscosity and slurry volume, to be explored. (C) 2006 Published by Elsevier Ltd.

Relevance: 30.00%

Abstract:

This study investigated the impacts of operating conditions and liquid properties on the hydrodynamics and volumetric mass transfer coefficient in activated sludge air-lift reactors. Experiments were conducted in internal and external air-lift reactors. The activated sludge liquid displayed non-Newtonian rheological behavior. With an increase in the superficial gas velocity, the liquid circulation velocity, gas holdup and mass transfer coefficient increased, and the gas residence time decreased. The liquid circulation velocity, gas holdup and mass transfer coefficient decreased as the sludge loading increased. The flow regime in the activated sludge air-lift reactors had a significant effect on the liquid circulation velocity and the gas holdup, but appeared to have little impact on the mass transfer coefficient. The experimental results in this study were best described by empirical models in which the reactor geometry, superficial gas velocity and/or power consumption unit, and solid and fluid properties were employed. (c) 2006 Elsevier B.V. All rights reserved.

Relevance: 30.00%

Abstract:

Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete-event and discrete-time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into sub-components. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.

Relevance: 30.00%

Abstract:

We have developed a way to represent Mohr-Coulomb failure within a mantle-convection fluid dynamics code. We use a viscous model of deformation with an orthotropic viscoplasticity (a different viscosity is used for pure shear than for simple shear) to define a preferred plane for slip to occur given the local stress field. The simple-shear viscosity and the deformation can then be iterated to ensure that the yield criterion is always satisfied. We again assume the Boussinesq approximation, neglecting any effect of dilatancy on the stress field. An additional criterion is required to ensure that deformation occurs along the plane aligned with the maximum shear strain-rate rather than the perpendicular plane, which is formally equivalent in any symmetric formulation. We also allow for strain weakening of the material. The material can remember both the accumulated failure history and the direction of failure. We have included this capability in a Lagrangian-integration-point finite element code and show a number of examples of extension and compression of a crustal block with a Mohr-Coulomb failure criterion. The formulation itself is general and applies to 2- and 3-dimensional problems.
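The viscosity iteration described above, which rescales the simple-shear viscosity until the stress sits on the yield surface, can be sketched in its simplest one-step form; the cohesion and friction-angle values are hypothetical, and dependence of the yield stress on accumulated strain (strain weakening) is omitted:

```python
import math

def capped_shear_viscosity(eta, strain_rate, sigma_n,
                           cohesion=1.0e7, friction_deg=30.0):
    """Mohr-Coulomb capping of the simple-shear viscosity: if the trial
    shear stress tau = eta * strain_rate exceeds the yield stress
    tau_y = C + sigma_n * tan(phi), scale eta down so that tau = tau_y.
    Cohesion C (Pa) and friction angle phi are hypothetical values."""
    tau = eta * strain_rate
    tau_y = cohesion + sigma_n * math.tan(math.radians(friction_deg))
    return eta * tau_y / tau if tau > tau_y else eta

# Above yield the viscosity is reduced; below yield it is unchanged.
print(capped_shear_viscosity(1e22, 1e-14, 1e8),
      capped_shear_viscosity(1e22, 1e-15, 1e8))
```

In the full scheme this rescaling would be iterated with the flow solve, since reducing the viscosity changes the stress field and hence the yield check, which is the iteration the abstract refers to.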

Relevance: 30.00%

Abstract:

Irrigation practices that are profligate in their use of water have come under closer scrutiny by water managers and the public. Trickle irrigation has the propensity to increase water use efficiency, but only if the system is designed to meet the soil and plant conditions. Recently we have provided a software tool, WetUp (http://www.clw.csiro.au/products/wetup/), to calculate the wetting patterns from trickle irrigation emitters. WetUp uses an analytical solution to calculate the wetted perimeter for both buried and surface emitters. This analytical solution has a number of assumptions, two of which are that the wetting front is defined by the water content at which the hydraulic conductivity (K) is 1 mm day(-1), and that the flow occurs from a point source. Here we compare the wetting patterns calculated with a 2-dimensional numerical model, HYDRUS2D, which solves the water flow into typical soils, against those from the analytical solution. The results show that the wetting patterns are similar, except when the soil properties result in the point-source assumption no longer being a good description of the flow regime. Difficulties were also experienced in obtaining stable solutions with HYDRUS2D for soils with low hydraulic conductivities. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

Semantic data models provide a map of the components of an information system. The characteristics of these models affect their usefulness for various tasks (e.g., information retrieval). The quality of information retrieval has obvious important consequences, both economic and otherwise. Traditionally, database designers have produced parsimonious logical data models. In spite of their increased size, ontologically clearer conceptual models have been shown to facilitate better performance for both problem solving and information retrieval tasks in experimental settings. The experiments producing evidence of enhanced performance for ontologically clearer models have, however, used application domains of modest size. Data models in organizational settings are likely to be substantially larger than those used in these experiments. This research used an experiment to investigate whether the benefits of improved information retrieval performance associated with ontologically clearer models are robust as the size of the application domain increases. The experiment used an application domain approximately twice the size of those tested in prior experiments. The results indicate that, relative to users of the parsimonious implementation, end users of the ontologically clearer implementation made significantly more semantic errors, took significantly more time to compose their queries, and were significantly less confident in the accuracy of their queries.