716 results for Efficiency models
Abstract:
Introduced predators can have pronounced effects on naïve prey species; thus, predator control is often essential for conservation of threatened native species. Complete eradication of the predator, although desirable, may be elusive in budget-limited situations, whereas predator suppression is more feasible and may still achieve conservation goals. We used a stochastic predator-prey model based on a Lotka-Volterra system to investigate the cost-effectiveness of predator control to achieve prey conservation. We compared five control strategies: immediate eradication, removal of a constant number of predators (fixed-number control), removal of a constant proportion of predators (fixed-rate control), removal of predators that exceed a predetermined threshold (upper-trigger harvest), and removal of predators whenever their population falls below a lower predetermined threshold (lower-trigger harvest). We looked at the performance of these strategies when managers could always remove the full number of predators targeted by each strategy, subject to budget availability. Under this assumption immediate eradication reduced the threat to the prey population the most. We then examined the effect of reduced management success in meeting removal targets, assuming removal is more difficult at low predator densities. In this case there was a pronounced reduction in performance of the immediate eradication, fixed-number, and lower-trigger strategies. Although immediate eradication still yielded the highest expected minimum prey population size, upper-trigger harvest yielded the lowest probability of prey extinction and the greatest return on investment (as measured by improvement in expected minimum population size per amount spent). Upper-trigger harvest was relatively successful because it operated when predator density was highest, which is when predator removal targets can be more easily met and the effect of predators on the prey is most damaging. This suggests that controlling predators only when they are most abundant is the "best" strategy when financial resources are limited and eradication is unlikely. © 2008 Society for Conservation Biology.
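As a rough illustration of the comparison this abstract describes, the sketch below runs a discrete-time stochastic Lotka-Volterra model with Poisson demographic noise under three of the five removal strategies. All rates, thresholds, and population sizes are invented, and this toy is not expected to reproduce the paper's rankings; it only shows the machinery of comparing strategies by expected minimum prey population and probability of prey extinction.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(strategy, steps=200, prey0=500, pred0=50):
    """Discrete-time stochastic Lotka-Volterra with predator removal."""
    # Hypothetical rates: prey growth r, predation a, conversion c,
    # predator mortality m, prey carrying capacity K.
    r, a, c, m, K = 0.5, 0.004, 0.1, 0.2, 2000
    prey, pred = prey0, pred0
    min_prey = prey
    for _ in range(steps):
        births = rng.poisson(r * prey * max(1 - prey / K, 0.0))
        kills = rng.poisson(a * prey * pred)
        pred_births = rng.poisson(c * a * prey * pred)
        pred_deaths = rng.poisson(m * pred)
        prey = max(prey + births - kills, 0)
        pred = max(pred + pred_births - pred_deaths - strategy(pred), 0)
        min_prey = min(min_prey, prey)
        if prey == 0:
            break
    return min_prey, prey == 0

strategies = {
    "fixed-number": lambda p: min(p, 5),        # remove 5 predators per step
    "fixed-rate": lambda p: int(0.2 * p),       # remove 20% of predators
    "upper-trigger": lambda p: max(p - 40, 0),  # cull back down to threshold 40
}

for name, strat in strategies.items():
    runs = [simulate(strat) for _ in range(500)]
    emp = np.mean([m for m, _ in runs])  # expected minimum prey population
    ext = np.mean([e for _, e in runs])  # probability of prey extinction
    print(f"{name:13s}  E[min prey] = {emp:6.0f}   P(extinction) = {ext:.2f}")
```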
Abstract:
The quality of environmental decisions should be gauged according to managers' objectives. Management objectives generally seek to maximize quantifiable measures of system benefit, for instance, population growth rate. Reaching these goals often requires a certain degree of learning about the system. Learning can occur by using management action in combination with a monitoring system. Furthermore, actions can be chosen strategically to obtain specific kinds of information. Formal decision making tools can choose actions to favor such learning in two ways: implicitly via the optimization algorithm that is used when there is a management objective (for instance, when using adaptive management), or explicitly by quantifying knowledge and using it as the fundamental project objective, an approach new to conservation. This paper outlines three conservation project objectives - a pure management objective, a pure learning objective, and an objective that is a weighted mixture of these two. We use eight optimization algorithms to choose actions that meet project objectives and illustrate them in a simulated conservation project. The algorithms provide a taxonomy of decision making tools in conservation management when there is uncertainty surrounding competing models of system function. The algorithms build upon each other such that their differences are highlighted and practitioners may see where their decision making tools can be improved. © 2010 Elsevier Ltd.
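A minimal sketch of the three objective types the abstract names, assuming two hypothetical models (A and B), invented growth predictions, and a crude disagreement-based proxy for the value of information; the paper's eight algorithms are far more principled.

```python
# Two competing models of how the managed system responds to each action:
# predicted population growth rates under models A and B (invented numbers).
PREDICTIONS = {
    "burn":  {"A": 1.10, "B": 0.90},  # models disagree: informative action
    "graze": {"A": 1.03, "B": 1.03},  # models agree: safe but uninformative
}

def expected_growth(action, belief):
    return sum(belief[m] * PREDICTIONS[action][m] for m in belief)

def expected_info_gain(action):
    # Crude proxy for value of information: actions whose models disagree
    # help discriminate between them (the paper's algorithms quantify
    # learning properly, e.g. via expected reduction in model uncertainty).
    return abs(PREDICTIONS[action]["A"] - PREDICTIONS[action]["B"])

def score(action, belief, w):
    """w = 1: pure management; w = 0: pure learning; in between: mixed."""
    return w * expected_growth(action, belief) + (1 - w) * expected_info_gain(action)

belief = {"A": 0.5, "B": 0.5}  # current weight on each model
for w in (1.0, 0.5, 0.0):
    best = max(PREDICTIONS, key=lambda a: score(a, belief, w))
    print(f"w = {w:.1f}: choose '{best}'")
```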
Abstract:
Money is often a limiting factor in conservation, and attempting to conserve endangered species can be costly. Consequently, a framework for optimizing fiscally constrained conservation decisions for a single species is needed. In this paper we find the optimal budget allocation among isolated subpopulations of a threatened species to minimize local extinction probability. We solve the problem using stochastic dynamic programming, derive a useful and simple alternative guideline for allocating funds, and test its performance using forward simulation. The model considers subpopulations that persist in habitat patches of differing quality, which in our model is reflected in different relationships between money invested and extinction risk. We discover that, in most cases, subpopulations that are less efficient to manage should receive more money than those that are more efficient to manage, due to higher investment needed to reduce extinction risk. Our simple investment guideline performs almost as well as the exact optimal strategy. We illustrate our approach with a case study of the management of the Sumatran tiger, Panthera tigris sumatrae, in Kerinci Seblat National Park (KSNP), Indonesia. We find that different budgets should be allocated to the separate tiger subpopulations in KSNP. The subpopulation that is not at risk of extinction does not require any management investment. Based on the combination of risks of extinction and habitat quality, the optimal allocation for these particular tiger subpopulations is an unusual case: subpopulations that occur in higher-quality habitat (more efficient to manage) should receive more funds than the remaining subpopulation that is in lower-quality habitat. Because the yearly budget allocated to the KSNP for tiger conservation is small, to guarantee the persistence of all the subpopulations that are currently under threat we need to prioritize those that are easier to save. When allocating resources among subpopulations of a threatened species, the combined effects of differences in habitat quality, cost of action, and current subpopulation probability of extinction need to be integrated. We provide a useful guideline for allocating resources among isolated subpopulations of any threatened species. © 2010 by the Ecological Society of America.
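The sketch below solves a toy version of the allocation problem by brute-force enumeration rather than stochastic dynamic programming; the exponential risk-budget relationship and all numbers are assumptions, not the paper's fitted values.

```python
import math

# Two hypothetical subpopulations: baseline extinction risk p0 and management
# efficiency k, with risk decaying as p0 * exp(-k * budget). Invented values;
# the paper derives these relationships from habitat quality and solves the
# allocation with stochastic dynamic programming.
patches = [
    {"name": "high-quality habitat", "p0": 0.4, "k": 0.08},
    {"name": "low-quality habitat",  "p0": 0.6, "k": 0.03},
]
BUDGET = 20  # discrete budget units

def extinction_prob(patch, x):
    return patch["p0"] * math.exp(-patch["k"] * x)

best_alloc, best_risk = None, 1.0
for x in range(BUDGET + 1):
    alloc = (x, BUDGET - x)
    # probability that at least one subpopulation goes locally extinct
    p_any = 1 - math.prod(1 - extinction_prob(p, a) for p, a in zip(patches, alloc))
    if p_any < best_risk:
        best_alloc, best_risk = alloc, p_any

for patch, units in zip(patches, best_alloc):
    print(f"{patch['name']}: {units} units")
print(f"P(any local extinction) = {best_risk:.3f}")
```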
Abstract:
1. Strategic searching for invasive pests presents a formidable challenge for conservation managers. Limited funding can necessitate choosing between surveying many sites cursorily, or focussing intensively on fewer sites. While existing knowledge may help to target more likely sites, e.g. with species distribution models (maps), this knowledge is not flawless and improving it also requires management investment. 2. In a rare example of trading off action against knowledge gain, we combine search coverage and accuracy, and its future improvement, within a single optimisation framework. More specifically, we examine under which circumstances managers should adopt one of two search-and-control strategies (cursory or focussed), and when they should divert funding to improving knowledge, making better predictive maps that benefit future searches. 3. We use a family of Receiver Operating Characteristic curves to reflect the quality of maps that direct search efforts. We demonstrate our framework by linking these to a logistic model of invasive spread, such as that for the red imported fire ant Solenopsis invicta in south-east Queensland, Australia. 4. Cursory widespread searching is only optimal if the pest is already widespread or knowledge is poor; otherwise focussed searching exploiting the map is preferable. For longer management timeframes, eradication is more likely if funds are initially devoted to improving knowledge, even if this results in a short-term explosion of the pest population. 5. Synthesis and applications. By combining trade-offs between knowledge acquisition and utilisation, managers can better focus - and justify - their spending to achieve optimal results in invasive control efforts. This framework can improve the efficiency of any ecological management that relies on predicting occurrence. © 2010 The Authors. Journal of Applied Ecology © 2010 British Ecological Society.
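A hedged sketch of the central trade-off: a one-parameter ROC family (an assumed functional form, not the paper's curves) directs focussed search against logistic spread, so a better map removes a larger share of infestations each year.

```python
def tpr(fraction_surveyed, map_quality):
    """A one-parameter ROC family: surveying the top-ranked fraction f of
    sites covers tpr(f) of the truly occupied sites. Higher map_quality
    bends the curve above the diagonal (map_quality = 1 is a useless map).
    This functional form is an assumption standing in for the paper's
    ROC curves."""
    return fraction_surveyed ** (1.0 / map_quality)

def simulate(years, map_quality, search_fraction=0.1, r=1.2):
    occupied = 0.01  # fraction of the landscape infested
    for _ in range(years):
        found = occupied * tpr(search_fraction, map_quality)  # focussed search
        occupied -= found                                     # control removals
        occupied += r * occupied * (1 - occupied)             # logistic spread
    return occupied

print("poor map :", round(simulate(10, map_quality=1.2), 4))
print("good map :", round(simulate(10, map_quality=4.0), 4))
```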
Abstract:
Achieving high efficiency with improved power transfer range and misalignment tolerance is the major design challenge in realizing Wireless Power Transfer (WPT) systems for industrial applications. Resonant coils must be carefully designed to achieve the highest possible system performance by fully utilizing the available space. High quality factor and enhanced electromagnetic coupling are the key indices that determine system performance. In this paper, design parameter extraction and quality factor optimization of multi-layered helical coils are presented using finite element analysis (FEA) simulations. In addition, a novel Toroidal Shaped Spiral (TSS) coil is proposed to increase power transfer range and misalignment tolerance. The proposed shapes and recommendations can be used to design high-efficiency WPT resonators in a limited space.
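The abstract's two key indices can be made concrete with textbook formulas: the unloaded quality factor Q = ωL/R and the maximum link efficiency of a two-coil resonant system, η_max = k²Q₁Q₂ / (1 + √(1 + k²Q₁Q₂))². The coil values below are invented, not the paper's FEA-extracted parameters.

```python
import math

def quality_factor(freq_hz, inductance, resistance):
    """Unloaded coil quality factor Q = omega * L / R."""
    return 2 * math.pi * freq_hz * inductance / resistance

def max_link_efficiency(k, q1, q2):
    """Theoretical maximum efficiency of a two-coil resonant link:
    eta = (k^2 Q1 Q2) / (1 + sqrt(1 + k^2 Q1 Q2))^2."""
    u = k**2 * q1 * q2
    return u / (1 + math.sqrt(1 + u))**2

# Hypothetical coil: 85 kHz, 24 uH, 50 mOhm ESR (invented values).
q = quality_factor(85e3, 24e-6, 0.05)
for k in (0.05, 0.1, 0.2):  # coupling falls with distance and misalignment
    print(f"k = {k:.2f}: Q = {q:.0f}, eta_max = {max_link_efficiency(k, q, q):.2%}")
```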
Abstract:
This thesis focused on the development of improved capacity analysis and capacity planning techniques for railways. A number of innovations were made and were tested on a case study of a real national railway. These techniques can reduce the time required to perform the decision-making activities that planners and managers need to perform. As all railways need to be expanded to meet increasing demands, the presumption that analytical capacity models can be used to identify how best to improve an existing network at least cost was fully investigated. Track duplication was the mechanism used to expand a network's capacity, and two variant capacity expansion models were formulated. Another outcome of this thesis is the development and validation of bi-objective models for capacity analysis. These models regulate the competition for track access and perform a trade-off analysis. An opportunity to develop more general multi-objective approaches was identified.
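A toy, single-objective version of the capacity-expansion question (which single-track sections to duplicate under a budget), solved by enumeration; the thesis's expansion models and bi-objective trade-off analysis are far richer, and the section data here are invented.

```python
from itertools import combinations

# Hypothetical single-track sections: (duplication cost, capacity gain in
# extra trains/day). Invented numbers, not the case-study railway's data.
sections = {"A-B": (12, 30), "B-C": (8, 18), "C-D": (15, 26), "D-E": (5, 9)}
BUDGET = 25

best_combo, best_gain = (), 0
for r in range(len(sections) + 1):
    for combo in combinations(sections, r):
        cost = sum(sections[s][0] for s in combo)
        gain = sum(sections[s][1] for s in combo)
        if cost <= BUDGET and gain > best_gain:
            best_combo, best_gain = combo, gain

print(f"duplicate {best_combo} for +{best_gain} trains/day within budget {BUDGET}")
```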
Abstract:
Wound healing and tumour growth involve collective cell spreading, which is driven by individual motility and proliferation events within a population of cells. Mathematical models are often used to interpret experimental data and to estimate the parameters so that predictions can be made. Existing methods for parameter estimation typically assume that these parameters are constants and often ignore any uncertainty in the estimated values. We use approximate Bayesian computation (ABC) to estimate the cell diffusivity, D, and the cell proliferation rate, λ, from a discrete model of collective cell spreading, and we quantify the uncertainty associated with these estimates using Bayesian inference. We use a detailed experimental data set describing the collective cell spreading of 3T3 fibroblast cells. The ABC analysis is conducted for different combinations of initial cell densities and experimental times in two separate scenarios: (i) where collective cell spreading is driven by cell motility alone, and (ii) where collective cell spreading is driven by combined cell motility and cell proliferation. We find that D can be estimated precisely, with a small coefficient of variation (CV) of 2–6%. Our results indicate that D appears to depend on the experimental time, which is a feature that has been previously overlooked. Assuming that the values of D are the same in both experimental scenarios, we use the information about D from the first experimental scenario to obtain reasonably precise estimates of λ, with a CV between 4 and 12%. Our estimates of D and λ are consistent with previously reported values; however, our method is based on a straightforward measurement of the position of the leading edge whereas previous approaches have involved expensive cell counting techniques. Additional insights gained using a fully Bayesian approach justify the computational cost, especially since it allows us to accommodate information from different experiments in a principled way.
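A minimal ABC rejection sampler in the spirit of this abstract, assuming a Fisher-type front whose leading edge moves at speed 2√(Dλ) as a stand-in for the discrete spreading model; the priors, tolerance, and 'data' are all invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the discrete spreading model: the leading edge of a
# Fisher-type invasion moves at speed 2*sqrt(D*lambda). Note this summary
# statistic constrains only the product D*lambda, which mirrors why the
# paper first pins down D from a motility-only experiment.
def leading_edge(D, lam, t):
    return 2 * np.sqrt(D * lam) * t + rng.normal(0, 5, size=np.shape(t))

t_obs = np.array([12.0, 24.0, 36.0, 48.0])  # hours
obs = 2 * np.sqrt(1000 * 0.05) * t_obs      # 'data': D = 1000 um^2/h, lam = 0.05/h

# ABC rejection: draw from the priors, keep draws whose simulated edge
# positions lie within a tolerance of the observations.
accepted = []
for _ in range(50_000):
    D = rng.uniform(100, 3000)     # um^2/h
    lam = rng.uniform(0.001, 0.2)  # 1/h
    sim = leading_edge(D, lam, t_obs)
    if np.sqrt(np.mean((sim - obs) ** 2)) < 10.0:
        accepted.append((D, lam))

post = np.array(accepted)
print(f"accepted {len(post)} of 50000 draws")
print(f"posterior mean D = {post[:, 0].mean():.0f}, lam = {post[:, 1].mean():.3f}")
```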
Abstract:
PURPOSE: This paper describes dynamic agent composition, used to support the development of flexible and extensible large-scale agent-based models (ABMs). This approach was motivated by a need to extend and modify, with ease, an ABM with an underlying networked structure as more information becomes available. Flexibility was also sought so that simulations can be set up with ease, without the need to program. METHODS: The dynamic agent composition approach consists of having agents, whose implementation has been broken into atomic units, come together at runtime to form the complex system representation on which simulations are run. These components capture information at a fine level of detail and provide a vast range of combinations and options for a modeller to create ABMs. RESULTS: A description of dynamic agent composition is given in this paper, as well as details about its implementation within MODAM (MODular Agent-based Model), a software framework which is applied to the planning of the electricity distribution network. Illustrations of the implementation of dynamic agent composition are consequently given for that domain throughout the paper. It is, however, expected that this approach will be beneficial to other problem domains, especially those with a networked structure, such as water or gas networks. CONCLUSIONS: Dynamic agent composition has many advantages over the way agent-based models are traditionally built, for users and developers as well as for agent-based modelling as a scientific approach. Developers can extend the model without the need to access or modify previously written code; they can develop groups of entities independently and add them to those already defined to extend the model. Users can mix-and-match already implemented components to form large-scale ABMs, allowing them to quickly set up simulations and easily compare scenarios without the need to program. Dynamic agent composition provides a natural simulation space over which ABMs of networked structures are represented, facilitating their implementation, and verification and validation of models are facilitated by quickly setting up alternative simulations.
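A minimal sketch of the composition idea: atomic components registered independently are assembled into agents at runtime from a declarative specification. The class and component names are invented; MODAM's actual API is not shown.

```python
# Agents are assembled at runtime from atomic components looked up by name,
# so users compose models declaratively without programming.
class Agent:
    def __init__(self, name, components):
        self.name = name
        self.components = components

    def step(self, t):
        for c in self.components:
            c.step(self, t)

REGISTRY = {}

def component(name):
    """Decorator registering an independently developed atomic unit."""
    def register(cls):
        REGISTRY[name] = cls
        return cls
    return register

@component("solar_panel")
class SolarPanel:
    def step(self, agent, t):
        print(f"{agent.name}: generating at t={t}")

@component("battery")
class Battery:
    def step(self, agent, t):
        print(f"{agent.name}: charging at t={t}")

# A user mixes and matches components via a declarative specification:
spec = {"house_1": ["solar_panel", "battery"], "house_2": ["battery"]}
agents = [Agent(n, [REGISTRY[c]() for c in comps]) for n, comps in spec.items()]
for a in agents:
    a.step(t=0)
```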
Abstract:
We propose a method for learning specific object representations that can be applied (and reused) in visual detection and identification tasks. A machine learning technique called Cartesian Genetic Programming (CGP) is used to create these models based on a series of images. Our research investigates how manipulation actions might allow for the development of better visual models and therefore better robot vision. This paper describes how visual object representations can be learned and improved by performing object manipulation actions, such as poke, push, and pick-up, with a humanoid robot. The improvement can be measured and allows the robot to select and perform the 'right' action, i.e. the action with the best possible improvement of the detector.
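A hedged sketch of the action-selection loop the abstract describes: estimate how much each manipulation would improve the detector, then perform the best one. The detector scores here are simulated placeholders, not CGP outputs.

```python
import random

random.seed(3)

ACTIONS = ("poke", "push", "pick-up")

def detector_score(after_action=None):
    """Placeholder for scoring a CGP-evolved detector on a batch of images;
    the paper measures this on views captured before and after manipulation.
    The per-action gains are invented for illustration."""
    base = 0.60
    gains = {"poke": 0.02, "push": 0.05, "pick-up": 0.09, None: 0.0}
    return base + gains[after_action] + random.gauss(0, 0.01)

baseline = detector_score()
improvements = {a: detector_score(a) - baseline for a in ACTIONS}
best = max(improvements, key=improvements.get)
print(f"baseline = {baseline:.2f}; choose '{best}' "
      f"(estimated improvement {improvements[best]:+.3f})")
```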
Abstract:
In this paper, we investigate the effect of mobility constraints on epidemic broadcast mechanisms in DTNs (Delay-Tolerant Networks). The major factors affecting epidemic broadcast performance are the forwarding algorithm and node mobility. The impact of forwarding algorithm and node mobility on epidemic broadcast mechanisms has been actively studied in the literature, but those studies generally use unconstrained mobility models. The objective of this paper is therefore to quantitatively investigate the effect of mobility constraints on epidemic broadcast mechanisms. We evaluate the performance of three classes of epidemic broadcast mechanisms - P-BCAST (PUSH-based BroadCast), SA-BCAST (Self-Adaptive BroadCast), and HP-BCAST (History-based P-BCAST) - with a random waypoint mobility model with mobility constraints. Our findings include that the existence of mobility constraints significantly improves the reachability and dissemination speed of epidemic broadcast mechanisms while degrading their efficiency.
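A minimal PUSH-based (P-BCAST-style) gossip sketch; encounters are uniform random pairings standing in for meetings generated by a random waypoint model, constrained or otherwise, and the printout mirrors the reachability/efficiency metrics the paper evaluates.

```python
import random

random.seed(7)

def p_bcast(n_nodes=100, contact_prob=0.02, max_steps=500):
    """PUSH-based epidemic broadcast: every node holding the message
    forwards it at each encounter. Encounters are uniform random
    pairings, a stand-in for a (constrained) random waypoint model."""
    has_msg = [False] * n_nodes
    has_msg[0] = True
    transmissions = 0
    for step in range(1, max_steps + 1):
        for i in range(n_nodes):
            if has_msg[i] and random.random() < contact_prob:
                j = random.randrange(n_nodes)  # node encountered this step
                transmissions += 1
                has_msg[j] = True
        if all(has_msg):
            break
    return step, sum(has_msg), transmissions

steps, reached, tx = p_bcast()
print(f"reached {reached}/100 nodes in {steps} steps using {tx} transmissions")
print(f"efficiency = {(reached - 1) / tx:.2%} new deliveries per transmission")
```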
Abstract:
Most real-life data analysis problems are difficult to solve using exact methods, due to the size of the datasets and the nature of the underlying mechanisms of the system under investigation. As datasets grow even larger, finding the balance between the quality of the approximation and the computing time of the heuristic becomes non-trivial. One solution is to consider parallel methods, and to use the increased computational power to perform a deeper exploration of the solution space in a similar time. It is, however, difficult to estimate a priori whether parallelisation will provide the expected improvement. In this paper we consider a well-known method, genetic algorithms, and evaluate the behaviour of the classic and parallel implementations on two distinct problem types.
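A small island-style parallel genetic algorithm on the OneMax problem, using Python's multiprocessing; this is a generic sketch of the classic-versus-parallel comparison, not the paper's implementation or problem types.

```python
import random
from multiprocessing import Pool

def fitness(bits):
    return sum(bits)  # OneMax: maximise the number of 1-bits

def evolve(pop, generations=50):
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: len(pop) // 2]         # truncation selection
        children = []
        while len(children) < len(pop) - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(len(a))
            child = a[:cut] + b[cut:]          # one-point crossover
            i = random.randrange(len(child))
            child[i] ^= random.random() < 0.1  # occasional bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

def island(seed):
    """One independent GA run; parallel islands explore more of the
    solution space in similar wall-clock time."""
    random.seed(seed)
    pop = [[random.randint(0, 1) for _ in range(64)] for _ in range(40)]
    return evolve(pop)

if __name__ == "__main__":
    with Pool(4) as pool:
        best = max(pool.map(island, range(4)), key=fitness)
    print("best fitness:", fitness(best))
```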