47 results for agent based model


Relevance: 80.00%

Publisher:

Abstract:

Simulations of a complete reflected shock tunnel facility have been performed with the aim of providing a better understanding of the flow through these facilities. In particular, the analysis is focused on the premature contamination of the test flow with the driver gas. The axisymmetric simulations model the full geometry of the shock tunnel and incorporate an iris-based model of the primary diaphragm rupture mechanics, an ideal secondary diaphragm, and account for turbulence in the shock tube boundary layer with the Baldwin-Lomax eddy viscosity model. Two operating conditions were examined: one resulting in an over-tailored mode of operation and the other resulting in approximately tailored operation. The accuracy of the simulations is assessed through comparison with experimental measurements of static pressure, pitot pressure and stagnation temperature. It is shown that the widely accepted driver gas contamination mechanism, in which driver gas 'jets' along the walls through the action of the bifurcated foot of the reflected shock, does not directly transport the driver gas to the nozzle at these conditions. Instead, driver-gas-laden vortices are generated by the bifurcated reflected shock. These vortices prevent jetting of the driver gas along the walls and convect driver gas away from the shock tube wall and downstream into the nozzle. Additional vorticity generated by the interaction of the reflected shock and the contact surface enhances the process in the over-tailored case. However, the basic mechanism appears to operate in a similar way for both the over-tailored and the approximately tailored conditions.

Relevance: 80.00%

Publisher:

Abstract:

High-fidelity eye tracking is combined with a perceptual grouping task to provide insight into the likely mechanisms underlying the compensation of retinal image motion caused by movement of the eyes. The experiments describe the covert detection of minute temporal and spatial offsets incorporated into a test stimulus. Analysis of eye motion on individual trials indicates that the temporal offset sensitivity is actually due to motion of the eye inducing artificial spatial offsets in the briefly presented stimuli. The results have strong implications for two popular models of compensation for fixational eye movements, namely efference copy and image-based models. If an efference copy model is assumed, the results place constraints on the spatial accuracy and source of compensation. If an image-based model is assumed then limitations are placed on the integration time window over which motion estimates are calculated. (c) 2006 Elsevier Ltd. All rights reserved.

Relevance: 80.00%

Publisher:

Abstract:

As field determinations take much effort, it would be useful to be able to predict easily the coefficients describing the functional response of free-living predators, the function relating food intake rate to the abundance of food organisms in the environment. As a means of easily parameterising an individual-based model of shorebird Charadriiformes populations, we attempted this for shorebirds eating macro-invertebrates. Intake rate is measured as the ash-free dry mass (AFDM) per second of active foraging; i.e. excluding time spent on digestive pauses and other activities, such as preening. The present and previous studies show that the general shape of the functional response in shorebirds eating approximately the same size of prey across the full range of prey density is a decelerating rise to a plateau, thus approximating the Holling type II ('disc equation') formulation. But field studies confirmed that the asymptote was not set by handling time, as assumed by the disc equation, because only about half the foraging time was spent in successfully or unsuccessfully attacking and handling prey, the rest being devoted to searching. A review of 30 functional responses showed that intake rate in free-living shorebirds varied independently of prey density over a wide range, with the asymptote being reached at very low prey densities (< 150 m^-2). Accordingly, most of the many studies of shorebird intake rate have probably been conducted at or near the asymptote of the functional response, suggesting that equations that predict intake rate should also predict the asymptote. A multivariate analysis of 468 'spot' estimates of intake rates from 26 shorebirds identified ten variables, representing prey and shorebird characteristics, that accounted for 81% of the variance in logarithm-transformed intake rate.
But four variables accounted for almost as much (77.3%), these being bird size, prey size, whether the bird was an oystercatcher Haematopus ostralegus eating mussels Mytilus edulis, and whether it was breeding. The four-variable equation under-predicted, on average, the 30 observed estimates of the asymptote by 11.6%, but this discrepancy was reduced to 0.2% when two suspect estimates from one early study in the 1960s were removed. The equation therefore predicted the observed asymptote very successfully in 93% of cases. We conclude that the asymptote can be reliably predicted from just four easily measured variables. Indeed, if the birds are not breeding and are not oystercatchers eating mussels, reliable predictions can be obtained using just two variables, bird and prey sizes. A multivariate analysis of 23 estimates of the half-asymptote constant suggested they were smaller when prey were small but greater when the birds were large, especially in oystercatchers. The resulting equation could be used to predict the half-asymptote constant, but its predictive power has yet to be tested. As well as predicting the asymptote of the functional response, the equations will enable research workers engaged in many areas of shorebird ecology and behaviour to estimate intake rate without the need for conventional time-consuming field studies, including species for which it has not yet proved possible to measure intake rate in the field.
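The Holling type II ('disc equation') shape described above can be sketched in a few lines. The attack rate and handling time below are illustrative placeholders, not coefficients fitted in the study:

```python
# Sketch of the Holling type II ('disc equation') functional response:
# intake rate rises with prey density and saturates at an asymptote.
# Parameter values here are hypothetical, not from the shorebird data.

def holling_type_ii(prey_density, attack_rate, handling_time):
    """Intake rate as a function of prey density (prey per m^2)."""
    return (attack_rate * prey_density) / (
        1.0 + attack_rate * handling_time * prey_density
    )

a, h = 0.05, 2.0      # hypothetical attack rate and handling time
asymptote = 1.0 / h   # the plateau implied by the disc equation

# With a high attack rate relative to handling time, the curve is already
# close to its plateau at low prey densities, consistent with the review's
# finding that the asymptote is reached below ~150 prey per m^2.
rate_at_150 = holling_type_ii(150, a, h)
```

Note that the paper's field data contradict the disc equation's interpretation of the asymptote (handling time alone does not set it), so this form is a descriptive approximation rather than a mechanistic model.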

Relevance: 80.00%

Publisher:

Abstract:

Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population, because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency.
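The cost-aware trade-off described above can be illustrated with a toy projection matrix: divide the change in the dominant eigenvalue (the population growth rate) by the cost of producing that change. The matrix entries and management costs below are hypothetical, not the Helmeted Honeyeater or koala parameters:

```python
import numpy as np

# Toy two-stage projection matrix (entries are hypothetical):
# row 0 = fecundity, row 1 = juvenile and adult survival.
A = np.array([[0.0, 1.5],
              [0.4, 0.8]])

def growth_rate(M):
    """Dominant eigenvalue (lambda) of a projection matrix."""
    return max(np.linalg.eigvals(M).real)

def gain_per_cost(M, i, j, delta, cost):
    """Finite-difference change in lambda per unit cost of raising M[i, j]."""
    M2 = M.copy()
    M2[i, j] += delta
    return (growth_rate(M2) - growth_rate(M)) / cost

# Hypothetical costs: raising fecundity (e.g. nest protection) is cheap,
# raising adult survival is expensive. Classical elasticity analysis would
# favour adult survival; dividing by cost can reverse that ranking.
fecundity_gain = gain_per_cost(A, 0, 1, 0.01, cost=1.0)
survival_gain = gain_per_cost(A, 1, 1, 0.01, cost=5.0)
```

With these made-up numbers, fecundity management delivers more growth-rate gain per dollar even though lambda is more sensitive to adult survival in the raw (cost-negligent) sense.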

Relevance: 80.00%

Publisher:

Abstract:

Three important goals in describing software design patterns are: generality, precision, and understandability. To address these goals, this paper presents an integrated approach to specifying patterns using Object-Z and UML. To achieve the generality goal, we adopt a role-based metamodeling approach to define patterns. With this approach, each pattern is defined as a pattern role model. To achieve precision, we formalize role concepts using Object-Z (a role metamodel) and use these concepts to define patterns (pattern role models). To achieve understandability, we represent the role metamodel and pattern role models visually using UML. Our pattern role models provide a precise basis for pattern-based model transformations or refactoring approaches.

Relevance: 80.00%

Publisher:

Abstract:

The design, development, and use of complex systems models raises a unique class of challenges and potential pitfalls, many of which are commonly recurring problems. Over time, researchers gain experience in this form of modeling, choosing algorithms, techniques, and frameworks that improve the quality, confidence level, and speed of development of their models. This increasing collective experience of complex systems modellers is a resource that should be captured. Fields such as software engineering and architecture have benefited from the development of generic solutions to recurring problems, called patterns. Using pattern development techniques from these fields, insights from communities such as learning and information processing, data mining, bioinformatics, and agent-based modeling can be identified and captured. Collections of such 'pattern languages' would allow knowledge gained through experience to be readily accessible to less-experienced practitioners and to other domains. This paper proposes a methodology for capturing the wisdom of computational modelers by introducing example visualization patterns, and a pattern classification system for analyzing the relationship between micro and macro behaviour in complex systems models. We anticipate that a new field of complex systems patterns will provide an invaluable resource for both practicing and future generations of modelers.

Relevance: 40.00%

Publisher:

Abstract:

This paper reports on a system for automated agent negotiation, based on a formal and executable approach to capture the behavior of parties involved in a negotiation. It uses the JADE agent framework, and its major distinctive feature is the use of declarative negotiation strategies. The negotiation strategies are expressed in a declarative rules language, defeasible logic, and are applied using the implemented system DR-DEVICE. The key ideas and the overall system architecture are described, and a particular negotiation case is presented in detail.

Relevance: 40.00%

Publisher:

Abstract:

A sensitive, specific polymerase chain reaction-based assay was developed for the detection of the causal agent of ratoon stunting disease of sugarcane, Clavibacter xyli subsp. xyli. This assay uses oligonucleotide primers derived from the internal transcribed spacer region between the 16S and 23S rRNA genes of the bacterial rRNA operon. The assay is specific for C. xyli subsp. xyli and does not produce an amplification product from the template of the closely related bacterium C. xyli subsp. cynodontis, nor from other bacterial species. The assay was successfully applied to the detection of C. xyli subsp. xyli in fibrovascular fluid extracted from sugarcane and was sensitive to approximately 22 cells per PCR assay. A multiplex PCR test was also developed which identified and differentiated C. xyli subsp. xyli and C. xyli subsp. cynodontis in a single PCR assay.

Relevance: 40.00%

Publisher:

Abstract:

A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved, but for which its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated: a Bayesian estimator achieved through a Markov chain Monte Carlo (MCMC) sampling scheme, and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
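A minimal simulation of the multiplicative-scrambling setup makes the data structure concrete. The data below are simulated with made-up parameters, and the naive least-squares fit shown is a baseline, not the Bayesian or maximum-likelihood estimators the paper studies:

```python
import numpy as np

# Multiplicative scrambled responses (Eichhorn & Hayre style): each
# respondent multiplies the sensitive value y by a random scrambler S of
# known distribution, and only z = S * y is reported.
rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
beta0, beta1 = 2.0, 1.5                                  # hypothetical truth
y = beta0 + beta1 * x + rng.normal(scale=0.5, size=n)    # never observed
S = rng.uniform(0.5, 1.5, size=n)                        # scrambler, E[S] = 1
z = S * y                                                # what is observed

# Because S is independent of y and E[S] = 1, ordinary least squares of z
# on x is still consistent for beta, just with inflated residual noise;
# the likelihood-based estimators in the paper exploit the known
# distribution of S to do better.
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, z, rcond=None)
```

The "more scrambled" regime the paper refers to corresponds to a wider distribution for S, which inflates the residual variance of this naive fit and is where the Bayesian estimator's advantage grows.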

Relevance: 40.00%

Publisher:

Abstract:

In this paper we present a model of specification-based testing of interactive systems. This model provides the basis for a framework to guide such testing. Interactive systems are traditionally decomposed into a functionality component and a user interface component; this distinction is termed dialogue separation and is the underlying basis for conceptual and architectural models of such systems. Correctness involves both proper behaviour of the user interface and proper computation by the underlying functionality. Specification-based testing is one method used to increase confidence in correctness, but it has had limited application to interactive system development to date.

Relevance: 40.00%

Publisher:

Abstract:

The optimal dosing schedule for melphalan therapy of recurrent malignant melanoma in isolated limb perfusions has been examined using a physiological pharmacokinetic model with data from isolated rat hindlimb perfusions (IRHP). The study included a comparison of melphalan distribution in IRHP under hyperthermia and normothermia conditions. Rat hindlimbs were perfused with Krebs-Henseleit buffer containing 4.7% bovine serum albumin at 37 or 41.5 degrees C at a flow rate of 4 ml/min. Concentrations of melphalan in perfusate and tissues were determined by high performance liquid chromatography with fluorescence detection. The concentration of melphalan in perfusate and tissues was linearly related to the input concentration. The rate and amount of melphalan uptake into the different tissues was higher at 41.5 degrees C than at 37 degrees C. A physiological pharmacokinetic model was validated from the tissue and perfusate time course of melphalan after melphalan perfusion. Application of the model showed that the amount of melphalan exposure in the muscle, skin and fat in a recirculation system was related to the method of melphalan administration: single bolus > divided bolus > infusion. The peak concentration of melphalan in the perfusate was also related to the method of administration in the same order. Infusing the total dose of melphalan over 20 min during a 60 min perfusion optimized the exposure of tissues to melphalan whilst minimizing the peak perfusate concentration of melphalan. It is suggested that this method of melphalan administration may be preferable to other methods in terms of optimizing the efficacy of melphalan whilst minimizing the limb toxicity associated with its use in isolated limb perfusion.
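Why infusion lowers the peak perfusate concentration relative to a single bolus can be seen even in a drastically simplified model: one well-mixed perfusate compartment with first-order tissue uptake. This is a sketch under assumed parameters, not the physiological pharmacokinetic model of the study:

```python
import math

def perfusate_conc(t, dose, infusion_time, volume=20.0, k_uptake=0.05):
    """Perfusate concentration at time t (min) in a one-compartment
    recirculating sketch; volume and k_uptake are hypothetical."""
    if infusion_time == 0:
        # Single bolus at t = 0: concentration starts at dose/volume.
        return (dose / volume) * math.exp(-k_uptake * t)
    rate = dose / infusion_time  # zero-order input during the infusion
    if t <= infusion_time:
        return (rate / (volume * k_uptake)) * (1 - math.exp(-k_uptake * t))
    c_end = (rate / (volume * k_uptake)) * (
        1 - math.exp(-k_uptake * infusion_time)
    )
    return c_end * math.exp(-k_uptake * (t - infusion_time))

# Peak for a bolus is at t = 0; peak for an infusion is at its end.
peak_bolus = perfusate_conc(0.0, dose=100.0, infusion_time=0)
peak_infusion = perfusate_conc(20.0, dose=100.0, infusion_time=20.0)
```

Spreading the same dose over 20 min caps the perfusate concentration below the bolus peak while the tissues still receive the full dose, matching the ordering (single bolus > divided bolus > infusion) reported above.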

Relevance: 40.00%

Publisher:

Abstract:

Experimental data for E. coli debris size reduction during high-pressure homogenisation at 55 MPa are presented. A mathematical model based on grinding theory is developed to describe the data. The model is based on first-order breakage and compensation conditions. It does not require any assumption of a specified distribution for debris size and can be used given information on the initial size distribution of whole cells and the disruption efficiency during homogenisation. The number of homogeniser passes is incorporated into the model and used to describe the size reduction of non-induced stationary and induced E. coli cells during homogenisation. Regressing the results to the model equations gave an excellent fit to the experimental data (> 98.7% of variance explained for both fermentations), confirming the model's potential for predicting size reduction during high-pressure homogenisation. This study provides a means to optimise both homogenisation and disc-stack centrifugation conditions for recombinant product recovery. (C) 1997 Elsevier Science Ltd.
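The qualitative behaviour of first-order breakage with pass number can be sketched as an exponential decay of mean debris size toward a limiting size. The rate constant and sizes below are hypothetical, not the fitted values from this study:

```python
import math

def debris_size(n_passes, s0=1.0, s_inf=0.3, k=0.6):
    """Mean debris size (um) after n_passes under first-order breakage
    kinetics: exponential decay from s0 toward a limiting size s_inf.
    All parameter values are hypothetical."""
    return s_inf + (s0 - s_inf) * math.exp(-k * n_passes)

# Each additional homogeniser pass removes a fixed fraction of the
# remaining reducible size, so early passes do most of the work.
sizes = [debris_size(n) for n in range(6)]
```

This captures the diminishing returns of repeated passes that makes joint optimisation of homogenisation and downstream disc-stack centrifugation worthwhile; the paper's actual model works with the full debris size distribution rather than a single mean.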