103 results for Population viability analysis
Abstract:
Reviews the ecological status of the mahogany glider and describes its distribution, habitat and abundance, life history and the threats to it. Three serial surveys of Brisbane residents provide data on respondents' knowledge of the mahogany glider. The results provide information about respondents' attitudes to the mahogany glider, to its conservation and to relevant public policies, and about how these factors vary as participants' knowledge of the mahogany glider changes. Similarly, data are provided and analysed on respondents' willingness to pay to conserve the mahogany glider. Population viability analysis is applied to estimate the habitat area required for a minimum viable population of the mahogany glider, ensuring at least a 95% probability of its survival for 100 years. Places are identified in Queensland where the requisite minimum area of critical habitat can be conserved. Using the survey results as a basis, the likely willingness of groups of Australians to pay for the conservation of the mahogany glider is estimated, and consequently their willingness to pay for the minimum required area of its habitat. Methods for estimating the cost of protecting this habitat are outlined. Australia-wide benefits appear to exceed the costs. Establishing a national park containing the minimum viable population of the mahogany glider is an appealing management option. It would also help conserve other endangered wildlife species, yielding economic benefits additional to those estimated for the mahogany glider itself.
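The core PVA calculation summarised above (finding the population size, and hence habitat area, that gives at least a 95% probability of persistence over 100 years) can be illustrated with a minimal stochastic simulation. The sketch below is a generic illustration, not the model used in the study; the growth rate, environmental variance, carrying capacity and density figures are hypothetical.

import numpy as np

rng = np.random.default_rng(1)

def persistence_prob(n0, years=100, r_mean=0.02, r_sd=0.15,
                     carrying_capacity=5000, n_reps=2000):
    """Probability a population starting at n0 is still extant after `years`,
    under simple lognormal environmental stochasticity (hypothetical values)."""
    extant = 0
    for _ in range(n_reps):
        n = n0
        for _ in range(years):
            n *= np.exp(rng.normal(r_mean, r_sd))
            n = min(n, carrying_capacity)
            if n < 2:          # quasi-extinction threshold
                break
        else:
            extant += 1
    return extant / n_reps

# Search for the smallest initial population with >= 95% persistence over 100 yr,
# then convert it to a habitat area using a hypothetical density (gliders per ha).
density_per_ha = 0.15
for n0 in range(100, 3001, 100):
    p = persistence_prob(n0)
    if p >= 0.95:
        print(f"minimum viable population ~{n0}, "
              f"required habitat ~{n0 / density_per_ha:.0f} ha (p = {p:.2f})")
        break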
Abstract:
Realistic time frames in which management decisions are made often preclude the completion of the detailed analyses necessary for conservation planning. Under these circumstances, efficient alternatives may assist in approximating the results of more thorough studies that require extensive resources and time. We outline a set of concepts and formulas that may be used in lieu of detailed population viability analyses and habitat modeling exercises to estimate the protected areas required to provide desirable conservation outcomes for a suite of threatened plant species. We used expert judgment of parameters and assessment of a population size that results in a specified quasi-extinction risk based on simple dynamic models. The area required to support a population of this size is adjusted to take into account deterministic and stochastic human influences, including small-scale disturbance, deterministic trends such as habitat loss, and changes in population density through processes such as predation and competition. We set targets for different disturbance regimes and geographic regions. We applied our methods to Banksia cuneata, Boronia keysii, and Parsonsia dorrigoensis, resulting in target areas for conservation of 1102, 733, and 1084 ha, respectively. These results provide guidance on target areas and priorities for conservation strategies.
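The area-setting logic described above can be summarised in a few lines: choose a population size that meets a specified quasi-extinction risk, convert it to area via an estimated density, then inflate the area for disturbance, expected habitat loss and density-reducing processes. The adjustment fractions and species figures below are placeholders, not the values used for the three species.

def target_area_ha(viable_population, plants_per_ha,
                   disturbance_fraction=0.2,     # share of area affected by small-scale disturbance
                   habitat_loss_fraction=0.1,    # expected deterministic habitat loss
                   density_reduction=0.25):      # density lost to predation/competition
    """Protected area (ha) needed so the remaining, effective habitat still
    supports the viable population. All adjustment values are hypothetical."""
    effective_density = plants_per_ha * (1 - density_reduction)
    usable_fraction = (1 - disturbance_fraction) * (1 - habitat_loss_fraction)
    return viable_population / (effective_density * usable_fraction)

# e.g. a viable population of 5000 plants at 10 plants/ha (hypothetical numbers)
print(round(target_area_ha(5000, 10)), "ha")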
Abstract:
Models of population dynamics are commonly used to predict risks in ecology, particularly risks of population decline. There is often considerable uncertainty associated with these predictions. However, alternatives to predictions based on population models have not been assessed. We used simulation models of hypothetical species to generate the kinds of data that might typically be available to ecologists and then invited other researchers to predict risks of population declines using these data. The accuracy of the predictions was assessed by comparison with the forecasts of the original model. The researchers used either population models or subjective judgement to make their predictions. Predictions made using models were only slightly more accurate than subjective judgements of risk. However, predictions using models tended to be unbiased, while subjective judgements were biased towards over-estimation. Psychology literature suggests that the bias of subjective judgements is likely to vary somewhat unpredictably among people, depending on their stake in the outcome. This will make subjective predictions more uncertain and less transparent than those based on models. (C) 2004 Elsevier SAS. All rights reserved.
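The comparison of model-based and subjective risk predictions rests on two simple summaries computed against the "true" risk from the generating model: bias (mean error) and accuracy (mean absolute error). A minimal sketch of how such a comparison could be scored, with made-up prediction values chosen only to illustrate over-estimation by subjective judgement:

import statistics

def score(predictions, true_risk):
    """Mean error (bias) and mean absolute error (accuracy) of risk predictions."""
    errors = [p - true_risk for p in predictions]
    return {"bias": statistics.mean(errors),
            "mean_abs_error": statistics.mean(abs(e) for e in errors)}

true_risk = 0.30                                  # risk of decline from the generating model
model_based = [0.28, 0.35, 0.31, 0.25]            # hypothetical model-based predictions
subjective  = [0.45, 0.60, 0.38, 0.50]            # hypothetical subjective judgements

print("models:    ", score(model_based, true_risk))
print("judgement: ", score(subjective, true_risk))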
Abstract:
A decision theory framework can be a powerful technique for deriving optimal management decisions for endangered species. We built a spatially realistic stochastic metapopulation model for the Mount Lofty Ranges Southern Emu-wren (Stipiturus malachurus intermedius), a critically endangered Australian bird. Using discrete-time Markov chains to describe the dynamics of a metapopulation and stochastic dynamic programming (SDP) to find optimal solutions, we evaluated the following management decisions: enlarging existing patches, linking patches via corridors, and creating a new patch. This is the first application of SDP to optimal landscape reconstruction and one of the few times that landscape reconstruction dynamics have been integrated with population dynamics. SDP is a powerful tool that has advantages over standard Monte Carlo simulation methods because it can give the exact optimal strategy for every landscape configuration (combination of patch areas and presence of corridors) and pattern of metapopulation occupancy, as well as a trajectory of strategies. It is useful when a sequence of management actions can be performed over a given time horizon, as is the case for many endangered species recovery programs, where only fixed amounts of resources are available in each time step. However, it is generally limited by computational constraints to rather small networks of patches. The model shows that optimal metapopulation management decisions depend greatly on the current state of the metapopulation, and there is no strategy that is universally the best. The extinction probability over 30 yr for the optimal state-dependent management actions is 50-80% better than no management, whereas the best fixed state-independent sets of strategies are only 30% better than no management. This highlights the advantages of using a decision theory tool to investigate conservation strategies for metapopulations. It is clear from these results that the sequence of management actions is critical, and this can only be effectively derived from stochastic dynamic programming. The model illustrates the underlying difficulty in determining simple rules of thumb for the sequence of management actions for a metapopulation. This use of a decision theory framework extends the capacity of population viability analysis (PVA) to manage threatened species.
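The SDP idea described above can be sketched on a toy two-patch metapopulation: occupancy states form a Markov chain, actions change transition probabilities, and backward induction returns the state-dependent action that minimises extinction risk over the horizon. This is a minimal illustration in the spirit of that approach, not the published model; the probabilities, action set and horizon are hypothetical.

from itertools import product

T = 30                                             # planning horizon (years)
ACTIONS = {"none": 0.2, "corridor": 0.5}           # action -> colonisation prob per occupied neighbour
EXTINCT_P = 0.1                                    # annual local-extinction probability per occupied patch

def transition(state, action):
    """Return {next_state: probability} for a 2-patch occupancy state (tuple of 0/1)."""
    c = ACTIONS[action]
    per_patch = []
    for i, occ in enumerate(state):
        other = state[1 - i]
        if occ:
            per_patch.append({1: 1 - EXTINCT_P, 0: EXTINCT_P})
        else:
            p_col = c if other else 0.0            # colonisation only from an occupied neighbour
            per_patch.append({1: p_col, 0: 1 - p_col})
    out = {}
    for s0, s1 in product(per_patch[0], per_patch[1]):
        out[(s0, s1)] = out.get((s0, s1), 0.0) + per_patch[0][s0] * per_patch[1][s1]
    return out

# Backward induction: V[state] = probability the metapopulation is extinct by time T.
STATES = [(0, 0), (0, 1), (1, 0), (1, 1)]
V = {s: (1.0 if s == (0, 0) else 0.0) for s in STATES}
policy = {}
for t in reversed(range(T)):
    newV = {}
    for s in STATES:
        if s == (0, 0):                            # extinction is absorbing
            newV[s] = 1.0
            continue
        best_a, best_v = None, None
        for a in ACTIONS:
            v = sum(p * V[s2] for s2, p in transition(s, a).items())
            if best_v is None or v < best_v:
                best_a, best_v = a, v
        newV[s], policy[(t, s)] = best_v, best_a
    V = newV

print("P(extinct within 30 yr) from fully occupied state:", round(V[(1, 1)], 3))
print("Optimal first action when only one patch is occupied:", policy[(0, (1, 0))])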
Abstract:
This paper considers the economics of conserving a species with mainly non-use value, the endangered mahogany glider. Three serial surveys of Brisbane residents provide data on respondents' knowledge of the mahogany glider. The results supply information about respondents' attitudes to the mahogany glider, to its conservation and to relevant public policies, and about how these factors vary as participants' knowledge of the mahogany glider alters. Similarly, data are provided and analysed on respondents' willingness to pay to conserve the mahogany glider and how it changes. Population viability analysis is applied to estimate the habitat area required for a minimum viable population of the mahogany glider, ensuring at least a 95% probability of its survival for 100 years. Places are identified in Queensland where the requisite minimum area of critical habitat can be conserved. Using the survey results as a basis, the likely willingness of groups of Australians to pay for the conservation of the mahogany glider is estimated, and consequently their willingness to pay for the minimum required area of its habitat. Methods for estimating the cost of protecting this habitat are outlined. Australia-wide benefits are estimated to exceed the costs. Establishing a national park containing the minimum viable population of the mahogany glider is an appealing management option. This would also be beneficial in conserving other endangered wildlife species and ecosystems. Therefore, additional economic benefits to those estimated on account of the mahogany glider itself can be obtained. (C) 2004 Elsevier Ltd. All rights reserved.
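The benefit-cost comparison outlined in the abstract reduces to aggregating per-household willingness to pay across the relevant population and setting it against the cost of securing the minimum habitat area. The figures below are purely illustrative placeholders, not the survey or cost estimates from the study.

# All figures hypothetical: willingness to pay per household (AUD), number of
# households, required habitat area (ha) and land protection cost per hectare.
mean_wtp_per_household = 5.0
n_households = 7_000_000
required_area_ha = 60_000
cost_per_ha = 400.0

aggregate_benefit = mean_wtp_per_household * n_households
protection_cost = required_area_ha * cost_per_ha

print(f"aggregate benefit: ${aggregate_benefit:,.0f}")
print(f"protection cost:   ${protection_cost:,.0f}")
print("benefits exceed costs" if aggregate_benefit > protection_cost
      else "costs exceed benefits")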
Abstract:
Conservation planning is the process of locating and designing conservation areas to promote the persistence of biodiversity in situ. To do this, conservation areas must be able to mitigate at least some of the proximate threats to biodiversity. Information on threatening processes and the relative vulnerability of areas and natural features to these processes is therefore crucial for effective conservation planning. However, measuring and incorporating vulnerability into conservation planning have been problematic. We develop a conceptual framework of the role of vulnerability assessments in conservation planning and propose a definition of vulnerability that incorporates three dimensions: exposure, intensity, and impact. We review and categorize methods for assessing the vulnerability of areas and the features they contain and identify the relative strengths and weaknesses of each broad approach. Our review highlights the need for further development and evaluation of approaches to assess vulnerability and for comparisons of their relative effectiveness.
Abstract:
The first step in conservation planning is to identify objectives. Most stated objectives for conservation, such as to maximize biodiversity outcomes, are too vague to be useful within a decision-making framework. One way to clarify the issue is to define objectives in terms of the risk of extinction for multiple species. Although the assessment of extinction risk for single species is common, few researchers have formulated an objective function that combines the extinction risks of multiple species. We sought to translate the broad goal of maximizing the viability of species into explicit objectives for use in a decision-theoretic approach to conservation planning. We formulated several objective functions based on extinction risk across many species and illustrated the differences between these objectives with simple examples. Each objective function was the mathematical representation of an approach to conservation and emphasized different levels of threat. Our objectives included minimizing the joint probability of one or more extinctions, minimizing the expected number of extinctions, and minimizing the increase in risk of extinction from the best-case scenario. With objective functions based on joint probabilities of extinction across species, any correlations in extinction probabilities had to be known or the resultant decisions were potentially misleading. Additive objectives, such as the expected number of extinctions, did not produce the same anomalies. We demonstrated that the choice of objective function is central to the decision-making process because alternative objective functions can lead to a different ranking of management options. Therefore, decision makers need to think carefully in selecting and defining their conservation goals.
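The alternative objective functions discussed above are easy to state explicitly for a set of per-species extinction probabilities. The sketch below assumes independence between species, which, as the abstract notes, is exactly the assumption that can make the joint-probability objective misleading when risks are correlated; the probabilities are hypothetical and chosen so that the two main objectives rank the options differently.

from math import prod

def p_one_or_more_extinctions(p):
    """Joint probability that at least one species goes extinct (independence assumed)."""
    return 1 - prod(1 - pi for pi in p)

def expected_extinctions(p):
    """Expected number of extinctions; additive, so no correlation structure is needed."""
    return sum(p)

def increase_from_best_case(p, p_best_case):
    """One reading of the 'minimise increase from the best-case scenario' objective."""
    return expected_extinctions(p) - expected_extinctions(p_best_case)

# Two hypothetical management options for three species:
option_a = [0.01, 0.01, 0.50]
option_b = [0.20, 0.20, 0.20]
best_case = [0.01, 0.01, 0.20]

for name, p in (("A", option_a), ("B", option_b)):
    print(name,
          "joint:", round(p_one_or_more_extinctions(p), 3),
          "expected:", round(expected_extinctions(p), 3),
          "increase:", round(increase_from_best_case(p, best_case), 3))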
Abstract:
Species extinctions and the deterioration of other biodiversity features worldwide have led to the adoption of systematic conservation planning in many regions of the world. As a consequence, various software tools for conservation planning have been developed over the past twenty years. These tools implement algorithms designed to identify conservation area networks for the representation and persistence of biodiversity features. Budgetary, ethical, and other sociopolitical constraints dictate that the prioritized sites represent biodiversity with minimum impact on human interests. Planning tools are typically also used to satisfy these criteria. This chapter reviews both the concepts and technical choices that underlie the development of these tools. Conservation planning problems can be formulated as optimization problems, and we evaluate the suitability of different algorithms for their solution. Finally, we also review some key issues associated with the use of these tools, such as computational efficiency, the effectiveness of taxa and abiotic parameters as surrogates for biodiversity, the process of setting explicit targets of representation for biodiversity surrogates, and
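The core site-selection problem these tools address can be written as an optimisation: choose a set of planning units that meets a representation target for every biodiversity feature at minimum cost (a form of set covering). The sketch below uses a simple greedy heuristic, which is only one of the algorithm families such tools implement (others use integer programming or simulated annealing); the sites, features, costs and targets are hypothetical.

# Each planning unit: (cost, set of features it contains) - hypothetical data.
sites = {
    "A": (3.0, {"sp1", "sp2"}),
    "B": (2.0, {"sp2", "sp3"}),
    "C": (4.0, {"sp1", "sp3", "sp4"}),
    "D": (1.0, {"sp4"}),
}
targets = {"sp1", "sp2", "sp3", "sp4"}             # represent each feature at least once

selected, covered = [], set()
while covered < targets:
    # Pick the unselected site with the best ratio of newly covered features to cost.
    best = max(
        (s for s in sites if s not in selected),
        key=lambda s: len(sites[s][1] - covered) / sites[s][0],
    )
    if not sites[best][1] - covered:
        break                                      # remaining sites add nothing new
    selected.append(best)
    covered |= sites[best][1]

print("selected sites:", selected,
      "total cost:", sum(sites[s][0] for s in selected))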
Abstract:
Objectives: To compare the population modelling programs NONMEM and P-PHARM during investigation of the pharmacokinetics of tacrolimus in paediatric liver-transplant recipients. Methods: Population pharmacokinetic analysis was performed using NONMEM and P-PHARM on retrospective data from 35 paediatric liver-transplant patients receiving tacrolimus therapy. The same data were presented to both programs. Maximum likelihood estimates were sought for apparent clearance (CL/F) and apparent volume of distribution (V/F). Covariates screened for influence on these parameters were weight, age, gender, post-operative day, days of tacrolimus therapy, transplant type, biliary reconstructive procedure, liver function tests, creatinine clearance, haematocrit, corticosteroid dose, and potential interacting drugs. Results: A satisfactory model was developed in both programs with a single categorical covariate - transplant type - providing stable parameter estimates and small, normally distributed (weighted) residuals. In NONMEM, the continuous covariates - age and liver function tests - improved modelling further. Mean parameter estimates were CL/F (whole liver) = 16.3 l/h, CL/F (cut-down liver) = 8.5 l/h and V/F = 565 l in NONMEM, and CL/F = 8.3 l/h and V/F = 155 l in P-PHARM. Individual Bayesian parameter estimates were CL/F (whole liver) = 17.9 +/- 8.8 l/h, CL/F (cut-down liver) = 11.6 +/- 18.8 l/h and V/F = 712 +/- 792 l in NONMEM, and CL/F (whole liver) = 12.8 +/- 3.5 l/h, CL/F (cut-down liver) = 8.2 +/- 3.4 l/h and V/F = 221 +/- 164 l in P-PHARM. Marked interindividual kinetic variability (38-108%) and residual random error (approximately 3 ng/ml) were observed. P-PHARM was more user friendly and readily provided informative graphical presentation of results. NONMEM allowed a wider choice of error models for statistical modelling and coped better with complex covariate data sets. Conclusion: Results from parametric modelling programs can vary due to the different algorithms employed to estimate parameters, alternative methods of covariate analysis, and variations and limitations in the software itself.
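Reported population estimates such as these can be turned into predicted concentration-time profiles with a standard one-compartment model with first-order absorption, working directly with the apparent parameters CL/F and V/F. The sketch below is a generic illustration only: the absorption rate constant and dose are assumed values, and the abstract does not state the structural model actually used in the study.

import math

def conc_oral_1cpt(t_h, dose_mg, cl_f_l_h, v_f_l, ka_per_h):
    """Concentration (mg/l) after a single oral dose: one-compartment model,
    first-order absorption, parameterised with apparent CL/F and V/F."""
    ke = cl_f_l_h / v_f_l
    return (dose_mg * ka_per_h / (v_f_l * (ka_per_h - ke))
            * (math.exp(-ke * t_h) - math.exp(-ka_per_h * t_h)))

# Whole-liver vs cut-down-liver NONMEM mean estimates from the abstract (CL/F in l/h,
# V/F in l); ka = 0.5 /h and a 5 mg dose are assumed for illustration only.
for label, cl_f in (("whole liver", 16.3), ("cut-down liver", 8.5)):
    c12 = conc_oral_1cpt(12, dose_mg=5, cl_f_l_h=cl_f, v_f_l=565, ka_per_h=0.5)
    print(f"{label}: predicted C(12 h) = {c12 * 1000:.1f} ng/ml")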
Abstract:
The theoretical impacts of anthropogenic habitat degradation on genetic resources have been well articulated. Here we use a simulation approach to assess the magnitude of expected genetic change, and review 31 studies of 23 neotropical tree species to assess whether empirical case studies conform to theory. Major differences in the sensitivity of measures to detect the genetic health of degraded populations were obvious. Most studies employing genetic diversity (nine out of 13) found no significant consequences, yet most that assessed progeny inbreeding (six out of eight), reproductive output (seven out of 10) and fitness (all six) highlighted significant impacts. These observations are in line with theory, where inbreeding is observed immediately following impact, but genetic diversity is lost slowly over subsequent generations, which for trees may take decades. Studies also highlight the ecological, not just genetic, consequences of habitat degradation that can cause reduced seed set and progeny fitness. Unexpectedly, two studies examining pollen flow using paternity analysis highlight an extensive network of gene flow at smaller spatial scales (less than 10 km). Gene flow can thus mitigate the loss of genetic diversity and assist in long-term population viability, even in degraded landscapes. Unfortunately, the surveyed studies were too few and heterogeneous to examine concepts of population size thresholds and genetic resilience in relation to life history. Suggested future research priorities include undertaking integrated studies on a range of species in the same landscapes; better documentation of the extent and duration of impact; and, most importantly, combining neutral marker, pollination dynamics, ecological consequences, and progeny fitness assessment within single studies.
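The time-scale argument above (inbreeding appears immediately, while genetic diversity erodes only over subsequent generations) follows from the standard drift expectation that heterozygosity declines by a factor of (1 - 1/(2Ne)) per generation. A small worked sketch, with hypothetical effective population sizes and a generation time typical of long-lived trees:

def remaining_heterozygosity(ne, generations):
    """Fraction of initial expected heterozygosity retained under drift:
    H_t / H_0 = (1 - 1/(2*Ne))**t."""
    return (1 - 1 / (2 * ne)) ** generations

generation_time_years = 30          # hypothetical, long-lived tree
for ne in (500, 50, 10):            # hypothetical post-degradation effective sizes
    for years in (30, 90, 300):
        t = years // generation_time_years
        print(f"Ne={ne:>3}, {years:>3} yr ({t} gen): "
              f"{remaining_heterozygosity(ne, t):.2%} of H0 retained")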
Abstract:
Defining the pharmacokinetics of drugs in overdose is complicated. Deliberate self-poisoning is generally impulsive and associated with poor accuracy in dose history. In addition, early blood samples are rarely collected to characterize the whole plasma concentration-time profile, and the effect of decontamination on the pharmacokinetics is uncertain. The aim of this study was to explore a fully Bayesian methodology for population pharmacokinetic analysis of data arising from deliberate self-poisoning with citalopram. Prior information on the pharmacokinetic parameters was elicited from 14 published studies of citalopram taken in therapeutic doses. The data set included concentration-time data from 53 patients studied after 63 citalopram overdose events (dose range: 20-1700 mg). Activated charcoal was administered between 0.5 and 4 h after 17 overdose events. The clinical investigator graded the veracity of the patients' dosing history on a 5-point ordinal scale. Inclusion of informative priors stabilised the pharmacokinetic model, and the population mean values could be estimated well. There were no indications of non-linear clearance after excessive doses. The final model included an estimated uncertainty in the dose amount, which a simulation study showed did not affect the model's ability to characterise the effects of activated charcoal. The effect of activated charcoal on clearance and bioavailability was pronounced, resulting in a 72% increase and a 22% decrease, respectively. These findings suggest charcoal administration is potentially beneficial after citalopram overdose. The methodology explored seems promising for characterising the dose-exposure relationship in toxicological settings.
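The practical implication of the reported charcoal effect follows from the standard relationship AUC = F x dose / CL: a 72% increase in clearance combined with a 22% decrease in bioavailability leaves total exposure at roughly 45% of its value without charcoal. A one-line check of that arithmetic:

# AUC is proportional to F/CL, so the relative exposure after charcoal is:
relative_auc = (1 - 0.22) / (1 + 0.72)      # 22% lower F, 72% higher CL (figures from the abstract)
print(f"AUC with charcoal ~ {relative_auc:.0%} of AUC without (~{1 - relative_auc:.0%} reduction)")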