960 results for "Interface absorption models"
Abstract:
Railway capacity determination and expansion are important topics. Prior research has ignored, poorly modelled, or assumed static the competition between entities such as train services and train types on different network corridors. In response, a comprehensive set of multi-objective models is formulated in this article to perform a trade-off analysis. These models determine the total absolute capacity of a railway network as the most equitable solution according to a clearly defined set of competing objectives, and they perform a sensitivity analysis of capacity with respect to those objectives. The models were extensively tested on a case study and their significant worth is shown. They were solved using a variety of techniques, of which an adaptive ε-constraint method proved superior. To identify only the best solution, a Simulated Annealing meta-heuristic was implemented and tested; a linearization technique based upon separable programming was also developed and shown to be superior in solution quality, though far less so in computational time.
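The ε-constraint idea mentioned in this abstract can be illustrated on a toy bi-objective problem (purely illustrative; the allocation variable and both objectives below are invented and are not the article's railway capacity model or its adaptive variant):

```python
# Toy epsilon-constraint sketch: maximise one objective while constraining
# the other, sweeping the constraint bound to trace the Pareto front.

def f1(x):
    """First objective, e.g. throughput for one train type (increasing in x)."""
    return x

def f2(x):
    """Competing objective with diminishing returns (decreasing in x)."""
    return (10 - x) ** 0.5

candidates = range(11)  # feasible allocations of 10 hypothetical train paths

front = []
for eps in sorted({f2(x) for x in candidates}, reverse=True):
    feasible = [x for x in candidates if f2(x) >= eps]
    best = max(feasible, key=f1)       # maximise f1 subject to f2 >= eps
    point = (best, f1(best), f2(best))
    if point not in front:             # keep each Pareto point once
        front.append(point)

print(front)  # non-dominated trade-off points between f1 and f2
```

Sweeping eps over the attainable values of f2 recovers every efficient allocation; an adaptive variant would choose the eps values on the fly rather than enumerating them.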
Abstract:
Traditional sensitivity and elasticity analyses of matrix population models have been used to inform management decisions, but they ignore the economic costs of manipulating vital rates. For example, the growth rate of a population is often most sensitive to changes in adult survival rate, but this does not mean that increasing that rate is the best option for managing the population because it may be much more expensive than other options. To explore how managers should optimize their manipulation of vital rates, we incorporated the cost of changing those rates into matrix population models. We derived analytic expressions for locations in parameter space where managers should shift between management of fecundity and survival, for the balance between fecundity and survival management at those boundaries, and for the allocation of management resources to sustain that optimal balance. For simple matrices, the optimal budget allocation can often be expressed as simple functions of vital rates and the relative costs of changing them. We applied our method to management of the Helmeted Honeyeater (Lichenostomus melanops cassidix; an endangered Australian bird) and the koala (Phascolarctos cinereus) as examples. Our method showed that cost-efficient management of the Helmeted Honeyeater should focus on increasing fecundity via nest protection, whereas optimal koala management should focus on manipulating both fecundity and survival simultaneously. These findings are contrary to the cost-negligent recommendations of elasticity analysis, which would suggest focusing on managing survival in both cases. A further investigation of Helmeted Honeyeater management options, based on an individual-based model incorporating density dependence, spatial structure, and environmental stochasticity, confirmed that fecundity management was the most cost-effective strategy. Our results demonstrate that decisions that ignore economic factors will reduce management efficiency. 
©2006 Society for Conservation Biology.
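The sensitivity and elasticity machinery this abstract argues against using in isolation can be sketched in a few lines. The 2x2 stage-structured matrix below is hypothetical; its values are not taken from the Helmeted Honeyeater or koala studies:

```python
import numpy as np

# Hypothetical 2x2 projection matrix: top row fecundities, bottom row
# survival transitions. Values are illustrative only.
A = np.array([[0.5, 1.2],
              [0.6, 0.8]])

# Population growth rate = dominant eigenvalue of A.
vals, vecs = np.linalg.eig(A)
i = np.argmax(vals.real)
lam = vals.real[i]
w = vecs[:, i].real                           # right eigenvector (stable stage structure)

vals_t, vecs_t = np.linalg.eig(A.T)
v = vecs_t[:, np.argmax(vals_t.real)].real    # left eigenvector (reproductive values)

# Sensitivities s_ij = v_i * w_j / <v, w>;
# elasticities e_ij = (a_ij / lambda) * s_ij, which sum to 1.
S = np.outer(v, w) / (v @ w)
E = (A / lam) * S
print(E)
```

The abstract's point is that the largest entry of E alone is a poor guide to management, because it ignores the relative cost of changing each vital rate.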
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly, via the optimization algorithm used when there is a management objective (for instance, in adaptive management), or explicitly, by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives: a pure management objective, a pure learning objective, and an objective that is a weighted mixture of the two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
Abstract:
This thesis focused on the development of improved capacity analysis and capacity planning techniques for railways. A number of innovations were made and tested on a case study of a real national railway. These techniques can reduce the time required for the decision-making activities that planners and managers need to perform. As all railways need to be expanded to meet increasing demand, the presumption that analytical capacity models can identify how best to improve an existing network at least cost was fully investigated. Track duplication was the mechanism used to expand a network's capacity, and two variant capacity expansion models were formulated. Another outcome of this thesis is the development and validation of bi-objective models for capacity analysis. These models regulate the competition for track access and perform a trade-off analysis. An opportunity to develop more general multi-objective approaches was identified.
Abstract:
The growth of APIs and Web services on the Internet, especially as larger enterprise systems are increasingly leveraged for Cloud and software-as-a-service opportunities, poses challenges for improving the efficiency of integration with these services. Compared to the fine-grained operations of contemporary interfaces, the interfaces of enterprise systems are typically larger, more complex and overloaded: single operations carry multiple data entities and parameter sets, support varying requests, and reflect versioning across different system releases. We propose a technique to support the refactoring of service interfaces by deriving business entities and their relationships. In this paper, we focus on the behavioural aspects of service interfaces, aiming to discover the sequential dependencies of operations (otherwise known as protocol extraction) based on the entities and relationships derived. Specifically, we propose heuristics based on these relationships and, in turn, derive permissible orders in which operations are invoked. As a result, service operations can be refactored along business-entity CRUD lines, with explicit behavioural protocols as part of an interface definition. This supports flexible service discovery, composition and integration. A prototypical implementation and an analysis of existing Web services, including those of commercial logistics systems (FedEx), are used to validate the algorithms proposed throughout the paper.
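One simple CRUD-ordering heuristic of the kind the abstract describes can be sketched as follows. The operation names and the single-rule heuristic are invented for illustration; the paper's actual heuristics and the FedEx interface are not reproduced here:

```python
# Hypothetical protocol-extraction heuristic: an operation may follow
# another if it acts on the same business entity at an equal-or-later
# CRUD stage (create before read/update before delete).

CRUD_ORDER = {"create": 0, "read": 1, "update": 1, "delete": 2}

# (operation name, entity it touches, CRUD kind) -- all names invented.
operations = [
    ("createShipment", "Shipment", "create"),
    ("trackShipment",  "Shipment", "read"),
    ("cancelShipment", "Shipment", "delete"),
]

def may_follow(a, b):
    """True if operation b may be invoked after operation a."""
    _, ent_a, kind_a = a
    _, ent_b, kind_b = b
    return ent_a == ent_b and CRUD_ORDER[kind_b] >= CRUD_ORDER[kind_a]

print(may_follow(operations[0], operations[1]))  # create -> read: allowed
print(may_follow(operations[2], operations[1]))  # delete -> read: not allowed
```

Applying such rules pairwise over a derived entity model yields the permissible invocation orders that form the extracted behavioural protocol.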
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
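The rejection flavour of ABC used in studies like this one can be sketched with a toy model. The "leading edge" summary statistic, the prior, and all parameter values below are invented assumptions, not the authors' cell-spreading model:

```python
import random
import statistics

# Toy rejection-ABC sketch: infer a diffusivity-like parameter D from a
# single noisy "leading edge position" observation.
random.seed(0)

def simulate_edge(D, t=10.0):
    """Toy forward model: edge position grows like sqrt(D * t), plus noise."""
    return (D * t) ** 0.5 + random.gauss(0.0, 0.05)

D_true = 0.4
observed = simulate_edge(D_true)   # pretend this is the experimental datum

accepted = []
for _ in range(20000):
    D = random.uniform(0.0, 1.0)                   # draw from a uniform prior
    if abs(simulate_edge(D) - observed) < 0.05:    # accept if close to data
        accepted.append(D)

posterior_mean = statistics.mean(accepted)
print(posterior_mean)  # approximate posterior mean of D
```

The spread of the accepted values approximates the posterior, which is what lets the authors quantify the uncertainty (e.g. a coefficient of variation) rather than report a point estimate.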
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). The approach was motivated by the need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought, so that simulations can be set up easily, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details of its implementation within MODAM (MODular Agent-based Model), a software framework applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are given for that domain throughout the paper. It is expected, however, that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users, for developers, and for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix and match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program.
Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation; verification and validation of models are also facilitated by quickly setting up alternative simulations.
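The composition idea can be sketched minimally. MODAM's actual API is not shown here; the class names, the electricity-flavoured components, and their behaviour are all invented for illustration:

```python
# Hypothetical sketch of dynamic agent composition: an agent is assembled
# at runtime from small, independently written components, so new behaviour
# can be added without touching existing code.

class Agent:
    def __init__(self, name):
        self.name = name
        self.load = 0.0
        self.components = []

    def add(self, component):
        self.components.append(component)   # mix-and-match at runtime

    def step(self):
        for c in self.components:           # each atomic unit contributes
            c.update(self)

class Consumption:
    """Invented component: a fixed electrical demand in kW."""
    def __init__(self, kw):
        self.kw = kw
    def update(self, agent):
        agent.load += self.kw

class SolarPV:
    """Invented component: rooftop generation offsetting 30% of peak."""
    def __init__(self, kw_peak):
        self.kw_peak = kw_peak
    def update(self, agent):
        agent.load -= 0.3 * self.kw_peak

house = Agent("house-1")
house.add(Consumption(2.0))
house.add(SolarPV(5.0))
house.step()
print(house.load)  # net load after both components have contributed
```

Adding a new component class (say, battery storage) extends the model without modifying `Agent`, `Consumption`, or `SolarPV`, which is the extensibility property the abstract emphasises.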
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models from a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the `right' action, i.e. the action with the best possible improvement of the detector.
Abstract:
Epigenetic changes correspond to heritable modifications of chromosome structure that do not involve alteration of the DNA sequence but do affect gene expression. These mechanisms play an important role in normal cell differentiation, but their aberration is also associated with several diseases, including cancer and neural disorders. Despite intensive study in recent years, the contribution of such modifications remains largely unquantified, due to overall system complexity and insufficient data. Computational models can provide powerful auxiliary tools to experimentation, not least because they can span scales from the sub-cellular level through cell populations (or networks of genes). In this paper, the challenges in developing realistic cross-scale models are discussed and illustrated with respect to current work.
Abstract:
The tumour microenvironment greatly influences cancer development and metastasis. Three-dimensional (3D) culture models which mimic the in vivo microenvironment can improve cancer biology studies and accelerate novel anticancer drug screening. Inspired by a systems biology approach, we have formed 3D in vitro bioengineered tumour angiogenesis microenvironments within a glycosaminoglycan-based hydrogel culture system. This microenvironment model can routinely recreate breast and prostate tumour vascularisation. The multiple cell types cultured within this model were less sensitive to chemotherapy than two-dimensional (2D) cultures, and displayed tumour regression comparable to that observed in vivo. These features highlight the use of our in vitro culture model as a complementary testing platform in conjunction with animal models, addressing key reduction and replacement goals of the future. We anticipate that this biomimetic model will provide a platform for the in-depth analysis of cancer development and the discovery of novel therapeutic targets.
Abstract:
GVHD remains the major complication of allo-HSCT. Murine models are the primary system used to understand GVHD and to develop potential therapies. Several factors are critical for GVHD in these models, including histocompatibility, conditioning regimen, and T-cell number. We serendipitously found that environmental factors such as the caging system and bedding also significantly affect the kinetics of GVHD in these models. This is important because such factors may influence the experimental conditions required to cause GVHD and how mice respond to various treatments. Consequently, they are likely to alter the interpretation of results between research groups, and the perceived effectiveness of experimental therapies.
Abstract:
The present study deals with a two-dimensional numerical simulation of a railway track supporting system subjected to a dynamic excitation force. Under plane strain conditions, coupled finite-infinite elements were used to represent the near- and far-field stress distribution, and a thin-layer interface element was employed to model the interfacial behavior between sleepers and ballast. To account for the relative debonding, slipping and crushing that could take place in the contact area between the sleepers and ballast, a modified Mohr-Coulomb criterion was adopted. Furthermore, an attempt has been made to consider the elasto-plastic material non-linearity of the railway track supporting media by employing different constitutive models to represent steel, concrete and supporting materials. Based on the proposed physical and constitutive modeling, a code has been developed for dynamic loads. The applicability of the developed F.E. code has been demonstrated by analyzing a real railway supporting structure.
Abstract:
The control of environmental factors in open-office environments, such as lighting and temperature, is becoming increasingly automated. This development means that office inhabitants are losing the ability to manually adjust environmental conditions according to their needs. In this paper we describe the design, use and evaluation of MiniOrb, a system that employs ambient and tangible interaction mechanisms to allow inhabitants of office environments to maintain awareness of environmental factors, report on their own subjectively perceived office comfort levels and see how these compare to group average preferences. The system is complemented by a mobile application, which enables users to see and set the same sensor values and preferences, but using a screen-based interface. We give an account of the system's design and outline the results of an in-situ trial and user study. Our results show that devices combining ambient and tangible interaction approaches are well suited to the task of recording indoor climate preferences and afford a rich set of possible interactions that can complement those enabled by more conventional screen-based interfaces.
Abstract:
The present contribution deals with the numerical modelling of railway track supporting systems, using coupled finite-infinite elements to represent the near- and far-field stress distribution, and employing a thin-layer interface element to account for the interfacial behaviour between sleepers and ballast. To simulate the relative debonding, slipping and crushing at the contact area between sleepers and ballast, a modified Mohr-Coulomb criterion was adopted. Furthermore, an attempt was made to consider the elasto-plastic material non-linearity of the railway track supporting media by employing different constitutive models to represent steel, concrete and other supporting materials. It is seen that, during an incremental-iterative mode of load application, yielding initially started from the edge of the sleepers, then flowed vertically downwards and spread towards the centre of the railway supporting system.
Abstract:
Draglines are extremely large machines that are widely used in open-cut coal mines for overburden stripping. Since 1994 we have been working toward the development of a computer control system capable of automatically driving a dragline for a large portion of its operating cycle. This has necessitated the development and experimental evaluation of sensor systems, machine models, closed-loop controllers, and an operator interface. This paper describes our steps toward this goal through scale-model and full-scale field experimentation.