27 results for Graph-Based Linear Programming Modelling
in University of Queensland eSpace - Australia
Abstract:
This paper presents an agent-based approach to modelling individual driver behaviour under the influence of real-time traffic information. The driver behaviour models developed in this study are based on a behavioural survey of drivers conducted on a congested commuting corridor in Brisbane, Australia. Commuters' responses to travel information were analysed and a number of discrete choice models were developed to determine the factors influencing drivers' behaviour and their propensity to change route and adjust travel patterns. Based on the results obtained from the behavioural survey, the agent behaviour parameters which define driver characteristics, knowledge and preferences were identified and their values determined. A case study implementing a simple agent-based route choice decision model within a microscopic traffic simulation tool is also presented. Driver-vehicle units (DVUs) were modelled as autonomous software components that could each be assigned a set of goals to achieve and a database of knowledge comprising certain beliefs, intentions and preferences concerning the driving task. Each DVU provided route choice decision-making capabilities, based on perception of its environment, that were similar to the described intentions of the driver it represented. The case study clearly demonstrated the feasibility of the approach and the potential to develop more complex driver behavioural dynamics based on the belief-desire-intention agent architecture. (C) 2002 Elsevier Science Ltd. All rights reserved.
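As a minimal illustration of the kind of agent decision logic this abstract describes, the sketch below shows a driver-vehicle unit choosing between its habitual route and a diversion from simple belief and preference attributes. All class names, attributes and parameter values are assumptions for illustration, not the authors' implementation.

```python
# Minimal sketch (not the authors' code) of a driver-vehicle unit (DVU)
# making a route-choice decision from its beliefs and preferences.
import random
from dataclasses import dataclass

@dataclass
class DVU:
    """Autonomous driver-vehicle unit with simple belief/preference state."""
    familiarity: float      # belief: knowledge of alternative routes, 0..1
    compliance: float       # preference: propensity to follow advice, 0..1
    delay_threshold: float  # minutes of reported delay tolerated before diverting

    def choose_route(self, reported_delay: float, has_alternative: bool) -> str:
        """Decide between the habitual route and a diversion."""
        if not has_alternative or reported_delay < self.delay_threshold:
            return "habitual"
        # Probability of diverting grows with familiarity and compliance.
        p_divert = self.familiarity * self.compliance
        return "divert" if random.random() < p_divert else "habitual"

driver = DVU(familiarity=0.8, compliance=0.6, delay_threshold=10.0)
print(driver.choose_route(reported_delay=15.0, has_alternative=True))
```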
Abstract:
Modelling and optimization of the power draw of large SAG/AG mills is important because of the large power that modern mills draw (5-10 MW). The cost of grinding is the single biggest cost within the entire process of mineral extraction. Traditionally, modelling of the mill power draw has been done using empirical models. Although these models are reliable, they cannot model mills and operating conditions which lie outside the boundaries of the model database. Also, due to their static nature, such models cannot determine the impact of changing conditions within the mill on the power draw. Despite advances in computing power, discrete element method (DEM) modelling of large mills with many thousands of particles can be a time-consuming task. The speed of computation is determined principally by two parameters: the number of particles involved and the material properties. The computational time step is determined by the size of the smallest particle present in the model and the material properties (stiffness). For small particles the computational time step will be short, whilst for large particles it will be longer. Hence, from the point of view of the time required for modelling (which usually corresponds to 3-4 mill revolutions), it is advantageous that the smallest particles in the model are not unnecessarily small. The objective of this work is to compare the net power draw of the mill when its charge is characterised by different size distributions, while preserving constant charge mass and mill speed. (C) 2004 Elsevier Ltd. All rights reserved.
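The dependence of the DEM time step on the smallest particle can be made concrete with the common mass-spring stability estimate dt_crit ~ 2*sqrt(m/k). The sketch below uses that assumed criterion with invented material values (not necessarily those used in the paper) to show how halving the particle radius sharply reduces the permissible time step.

```python
# A minimal sketch of why the smallest particle controls the DEM time step,
# using the linear-spring contact estimate dt_crit ~ 2*sqrt(m/k); the paper
# may use a different (e.g. Rayleigh) criterion, so treat this as illustrative.
import math

def critical_time_step(radius_m: float, density_kg_m3: float,
                       stiffness_n_m: float) -> float:
    """Critical integration time step for a linear-spring contact model."""
    mass = density_kg_m3 * (4.0 / 3.0) * math.pi * radius_m ** 3
    return 2.0 * math.sqrt(mass / stiffness_n_m)

# Halving the radius cuts the particle mass 8-fold and the permissible
# time step by ~2.8x, so more, smaller steps are needed per revolution.
for r in (0.05, 0.025):
    print(r, critical_time_step(r, density_kg_m3=2700.0, stiffness_n_m=1e6))
```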
Abstract:
Current physiologically based pharmacokinetic (PBPK) models are inductive. We present an additional, different approach that is based on the synthetic rather than the inductive approach to modeling and simulation. It relies on object-oriented programming. A model of the referent system in its experimental context is synthesized by assembling objects that represent components such as molecules, cells, aspects of tissue architecture, catheters, etc. The single-pass perfused rat liver has been well described in evaluating hepatic drug pharmacokinetics (PK) and is the system on which we focus. In silico experiments begin with administration of objects representing actual compounds. Data are collected in a manner analogous to that in the referent PK experiments. The synthetic modeling method allows for recognition and representation of discrete event and discrete time processes, as well as heterogeneity in organization, function, and spatial effects. An application is developed for sucrose and antipyrine, administered separately and together. PBPK modeling has made extensive progress in characterizing abstracted PK properties, but this has also been its limitation. Now, other important questions and possible extensions emerge. How are these PK properties and the observed behaviors generated? The inherent heuristic limitations of traditional models have hindered getting meaningful, detailed answers to such questions. Synthetic models of the type described here are specifically intended to help answer such questions. Analogous to wet-lab experimental models, they retain their applicability even when broken apart into subcomponents. Having and applying this new class of models along with traditional PK modeling methods is expected to increase the productivity of pharmaceutical research at all levels that make use of modeling and simulation.
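As a hedged illustration of the synthetic, object-oriented style described (assumed names and invented numbers, not the authors' code), the sketch below assembles a perfused-liver model from component objects and administers dose objects, counting what appears in the outflow as in a wet-lab single-pass experiment.

```python
# Illustrative sketch of a synthetic (object-assembled) single-pass liver
# model. Extraction values are assumed for demonstration only.
import random

class Sinusoid:
    """One hepatic flow path; may extract a compound object in transit."""
    def __init__(self, extraction_prob: float):
        self.extraction_prob = extraction_prob

    def transit(self) -> bool:
        """Return True if a dose object survives passage to the outflow."""
        return random.random() > self.extraction_prob

class Liver:
    """Referent system synthesized by assembling sinusoid components."""
    EXTRACTION = {"sucrose": 0.0, "antipyrine": 0.3}   # assumed ratios

    def __init__(self, n_sinusoids: int):
        self.n_sinusoids = n_sinusoids

    def single_pass(self, compound: str, n_doses: int) -> float:
        """Administer n_doses objects; return the fraction recovered."""
        paths = [Sinusoid(self.EXTRACTION[compound])
                 for _ in range(self.n_sinusoids)]
        recovered = sum(random.choice(paths).transit() for _ in range(n_doses))
        return recovered / n_doses

liver = Liver(n_sinusoids=50)
for compound in ("sucrose", "antipyrine"):
    print(compound, liver.single_pass(compound, n_doses=10_000))
```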
Abstract:
In a deregulated electricity market, optimizing dispatch capacity and transmission capacity are among the core concerns of market operators. Many market operators have capitalized on linear programming (LP) based methods to perform market dispatch operations, exploiting the computational efficiency of LP. In this paper, the search capability of genetic algorithms (GAs) is utilized to solve the market dispatch problem. The GA model is able to solve pool-based capacity dispatch while optimizing the interconnector transmission capacity. Case studies and corresponding analyses are performed to demonstrate the efficiency of the GA model.
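A toy sketch of the GA approach described, with invented generator costs, capacities, demand and interconnector limit: each chromosome encodes generator dispatch levels plus an interconnector flow, and a penalty term enforces the supply-demand balance. The encoding and operators are illustrative assumptions, not the paper's model.

```python
# Minimal GA sketch for pool-based dispatch with an interconnector limit.
# All figures are made up for demonstration.
import random

COSTS = [20.0, 35.0, 50.0]   # $/MW for three generators
CAPS = [100.0, 80.0, 60.0]   # generator capacity limits, MW
DEMAND = 180.0               # system demand, MW
TIE_LIMIT = 40.0             # interconnector transfer limit, MW

def fitness(genome):
    dispatch, tie_flow = genome[:-1], genome[-1]
    cost = sum(c * p for c, p in zip(COSTS, dispatch))
    imbalance = abs(sum(dispatch) + tie_flow - DEMAND)
    return cost + 1e4 * imbalance        # penalise unmet demand

def random_genome():
    return [random.uniform(0, cap) for cap in CAPS] + \
           [random.uniform(-TIE_LIMIT, TIE_LIMIT)]

def mutate(genome):
    g = genome[:]
    i = random.randrange(len(g))
    lo, hi = (-TIE_LIMIT, TIE_LIMIT) if i == len(g) - 1 else (0.0, CAPS[i])
    g[i] = min(hi, max(lo, g[i] + random.gauss(0, 5)))
    return g

pop = [random_genome() for _ in range(50)]
for _ in range(500):                     # elitist selection plus mutation
    pop.sort(key=fitness)
    pop = pop[:25] + [mutate(random.choice(pop[:25])) for _ in range(25)]
best = min(pop, key=fitness)
print(best, fitness(best))
```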
Prediction of slurry transport in SAG mills using SPH fluid flow in a dynamic DEM based porous media
Abstract:
DEM modelling of the motion of coarse fractions of the charge inside SAG mills has now been well established for more than a decade. In these models the effect of slurry has broadly been ignored due to its complexity. Smoothed particle hydrodynamics (SPH) provides a particle-based method for modelling complex free-surface fluid flows and is well suited to modelling fluid flow in mills. Previous modelling has demonstrated the powerful ability of SPH to capture dynamic fluid flow effects such as lifters crashing into slurry pools, fluid draining from lifters, flow through grates and pulp lifter discharge. However, all these examples were limited by the ability to model only the slurry in the mill, without the charge. In this paper, we represent the charge as a dynamic porous medium through which the SPH fluid is then able to flow. The porous medium properties (specifically the spatial distribution of porosity and velocity) are predicted by time averaging the mill charge predicted using a large-scale DEM model. This allows prediction of transient and steady-state slurry distributions in the mill and allows their variation with operating parameters, such as slurry viscosity and slurry volume, to be explored. (C) 2006 Published by Elsevier Ltd.
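The time-averaging step can be illustrated as follows: DEM particle snapshots are binned onto a grid, and the accumulated solid area per cell yields a porosity field for the SPH fluid. This 2D sketch uses invented data and a simplified solid-fraction estimate (particle overlap with cell boundaries is ignored); it is not the authors' code.

```python
# Sketch of turning time-averaged DEM charge data into a porosity field
# that an SPH fluid can flow through (illustrative 2D version).
import numpy as np

def porosity_field(snapshots, radii, bounds, n_cells):
    """Time-average solid fraction of DEM particles on a 2D grid.

    snapshots: list of (n_particles, 2) arrays of particle centres
    radii:     (n_particles,) array of particle radii
    """
    (xmin, xmax), (ymin, ymax) = bounds
    cell_area = ((xmax - xmin) / n_cells) * ((ymax - ymin) / n_cells)
    solid = np.zeros((n_cells, n_cells))
    areas = np.pi * radii ** 2
    for snap in snapshots:
        ix = ((snap[:, 0] - xmin) / (xmax - xmin) * n_cells).astype(int)
        iy = ((snap[:, 1] - ymin) / (ymax - ymin) * n_cells).astype(int)
        np.add.at(solid,
                  (ix.clip(0, n_cells - 1), iy.clip(0, n_cells - 1)), areas)
    solid_fraction = solid / (len(snapshots) * cell_area)
    return 1.0 - np.clip(solid_fraction, 0.0, 1.0)   # porosity per cell

rng = np.random.default_rng(0)
snaps = [rng.uniform(0, 1, (200, 2)) for _ in range(10)]
print(porosity_field(snaps, np.full(200, 0.02), ((0, 1), (0, 1)), 8))
```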
Abstract:
Models and model transformations are the core concepts of OMG's MDA(TM) approach. Within this approach, most models are derived from the MOF and have a graph-based nature. In contrast, most current model transformations are specified textually. To enable a graphical specification of model transformation rules, this paper proposes to use triple graph grammars as a declarative specification formalism. These triple graph grammars can be specified within the FUJABA tool, and we argue that such rules are more easily specified and become more understandable and maintainable. To show the practicability of our approach, we present how to generate Tefkat rules from triple graph grammar rules, which helps to integrate triple graph grammars with a state-of-the-art model transformation tool and shows the expressiveness of the concept.
Abstract:
The principal aim of this paper is to measure the amount by which the profit of a multi-input, multi-output firm deviates from maximum short-run profit, and then to decompose this profit gap into components that are of practical use to managers. In particular, our interest is in measuring the contribution of unused capacity, along with measures of technical inefficiency and allocative inefficiency, to this profit gap. We survey existing definitions of capacity and, after discussing their shortcomings, we propose a new ray economic capacity measure that involves short-run profit maximisation, with the output mix held constant. We go on to describe how the gap between observed profit and maximum profit can be calculated and decomposed using linear programming methods. The paper concludes with an empirical illustration, involving data on 28 international airline companies. The empirical results indicate that these airline companies achieve profit levels which are on average US$815m below potential levels, and that 70% of the gap may be attributed to unused capacity. (C) 2002 Elsevier Science B.V. All rights reserved.
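The profit-gap calculation can be sketched as a DEA-style LP: maximise p'y - w'x subject to (x, y) being feasible relative to the observed firms. The toy example below uses invented data and adds a variable-returns-to-scale constraint (intensity weights sum to one) to keep the LP bounded; the paper's ray economic capacity measure additionally holds the output mix constant, which is not modelled here.

```python
# Small worked LP sketch of short-run profit maximisation against a
# DEA-style technology built from observed firms. Data are invented.
import numpy as np
from scipy.optimize import linprog

Y = np.array([[10.0, 4.0], [8.0, 6.0], [12.0, 3.0]])  # outputs of 3 firms
X = np.array([[5.0, 2.0], [4.0, 3.0], [6.0, 2.0]])    # inputs of 3 firms
p = np.array([3.0, 2.0])   # output prices
w = np.array([1.0, 1.5])   # input prices

J, M = Y.shape
N = X.shape[1]
# Decision vector z = [lambda (J), y (M), x (N)]; minimise -(p.y - w.x).
c = np.concatenate([np.zeros(J), -p, w])
# Feasibility: y <= Y'lambda  and  X'lambda <= x.
A_ub = np.block([[-Y.T, np.eye(M), np.zeros((M, N))],
                 [X.T, np.zeros((N, M)), -np.eye(N)]])
b_ub = np.zeros(M + N)
# Variable returns to scale: sum of intensity weights equals one.
A_eq = np.concatenate([np.ones(J), np.zeros(M + N)]).reshape(1, -1)
res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * (J + M + N))
print("maximum short-run profit:", -res.fun)
```

Comparing this maximum against a firm's observed profit gives the profit gap that the paper then decomposes into capacity, technical, and allocative components.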
Abstract:
We tested direct and indirect measures of benthic metabolism as indicators of stream ecosystem health across a known agricultural land-use disturbance gradient in southeast Queensland, Australia. Gross primary production (GPP) and respiration (R₂₄) in benthic chambers in cobble and sediment habitats, algal biomass (as chlorophyll a) from cobbles and sediment cores, algal biomass accrual on artificial substrates, and stable carbon isotope ratios of aquatic plants and benthic sediments were measured at 53 stream sites, ranging from undisturbed subtropical rainforest to catchments where improved pasture and intensive cropping are major land-uses. Rates of benthic GPP and R₂₄ varied by more than two orders of magnitude across the study gradient. Generalised linear regression modelling explained 80% or more of the variation in these two indicators when sediment- and cobble-substrate dominated sites were considered separately, and both catchment- and reach-scale descriptors of the disturbance gradient were important in explaining this variation. Model fits were poor for net daily benthic metabolism (NDM) and the production to respiration ratio (P/R). Algal biomass accrual on artificial substrate and stable carbon isotope ratios of aquatic plants and benthic sediment were the best of the indirect indicators, with regression model R² values of 50% or greater. Model fits were poor for algal biomass on natural substrates for cobble sites and all sites. None of these indirect measures of benthic metabolism was a good surrogate for measured GPP. Direct measures of benthic metabolism, GPP and R₂₄, and several indirect measures were good indicators of stream ecosystem health and are recommended in assessing process-related responses to riparian and catchment land-use change and the success of ecosystem rehabilitation actions.
Abstract:
Systems biology is based on computational modelling and simulation of large networks of interacting components. Models may be intended to capture processes, mechanisms, components and interactions at different levels of fidelity. Input data are often large and geographically dispersed, and may require the computation to be moved to the data, not vice versa. In addition, complex system-level problems require collaboration across institutions and disciplines. Grid computing can offer robust, scalable solutions for distributed data, compute and expertise. We illustrate some of the range of computational and data requirements in systems biology with three case studies: one requiring large computation but small data (orthologue mapping in comparative genomics), a second involving complex terabyte data (the Visible Cell project) and a third that is both computationally and data-intensive (simulations at multiple temporal and spatial scales). Authentication, authorisation and audit systems are currently not well scalable and may present bottlenecks for distributed collaboration, particularly where outcomes may be commercialised. Challenges remain in providing lightweight standards to facilitate the penetration of robust, scalable grid-type computing into diverse user communities to meet the evolving demands of systems biology.
Abstract:
A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
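The mixture structure can be written as S(t) = pi * S_acute(t) + (1 - pi) * S_chronic(t) with Weibull components. The sketch below evaluates such a mixture with invented parameter values; it omits the random hospital effects and the EM estimation described in the abstract.

```python
# Minimal sketch of a two-component Weibull survival mixture: overall
# survival is a weighted sum of an "acute" and a "chronic" component.
# Parameter values are invented for illustration.
import math

def weibull_survival(t: float, scale: float, shape: float) -> float:
    """Weibull survival function S(t) = exp(-(t/scale)^shape)."""
    return math.exp(-((t / scale) ** shape))

def mixture_survival(t: float, pi_acute: float = 0.35) -> float:
    """S(t) = pi * S_acute(t) + (1 - pi) * S_chronic(t)."""
    s_acute = weibull_survival(t, scale=0.5, shape=0.8)    # high early hazard
    s_chronic = weibull_survival(t, scale=8.0, shape=1.2)  # slower decline
    return pi_acute * s_acute + (1.0 - pi_acute) * s_chronic

for t in (0.25, 1.0, 5.0):   # years after index stroke
    print(t, round(mixture_survival(t), 3))
```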
Abstract:
The anisotropic norm of a linear discrete time-invariant system measures system output sensitivity to stationary Gaussian input disturbances of bounded mean anisotropy. Mean anisotropy characterizes the degree of predictability (or colouredness) and spatial non-roundness of the noise. The anisotropic norm falls between the H₂ and H∞ norms and accommodates their loss of performance when the probability structure of input disturbances is not exactly known. This paper develops a method for numerical computation of the anisotropic norm which involves linked Riccati and Lyapunov equations and an associated equation of a special type.
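One building block of such computations is the discrete-time Lyapunov equation. As a hedged illustration, the sketch below computes the ordinary H₂ norm of an invented stable system from its controllability Gramian; the anisotropic norm itself requires the linked Riccati-Lyapunov system developed in the paper, which is not reproduced here.

```python
# Sketch: H2 norm of a stable discrete-time system (A, B, C) via the
# controllability Gramian. Example matrices are invented.
import numpy as np
from scipy.linalg import solve_discrete_lyapunov

A = np.array([[0.5, 0.1], [0.0, 0.3]])   # stable: eigenvalues inside unit disc
B = np.array([[1.0], [0.5]])
C = np.array([[1.0, 0.0]])

# Controllability Gramian P solves  A P A' - P + B B' = 0.
P = solve_discrete_lyapunov(A, B @ B.T)
h2_norm = np.sqrt(np.trace(C @ P @ C.T))
print("H2 norm:", h2_norm)
```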