16 results for frequency estimation
in eResearch Archive - Queensland Department of Agriculture
Abstract:
Abstract of Macbeth, G. M., Broderick, D., Buckworth, R. & Ovenden, J. R. (In press, Feb 2013). Linkage disequilibrium estimation of effective population size with immigrants from divergent populations: a case study on Spanish mackerel (Scomberomorus commerson). G3: Genes, Genomes and Genetics. Estimates of genetic effective population size (Ne) using molecular markers are a potentially useful tool for the management of endangered through to commercial species. However, pitfalls are predicted when the effective size is large, because estimates require large numbers of samples from wild populations for statistical validity. Our simulations showed that linkage disequilibrium estimates of Ne up to 10,000 with finite confidence limits can be achieved with sample sizes around 5000. This was deduced from empirical allele frequencies of seven polymorphic microsatellite loci in a commercially harvested fisheries species, the narrow-barred Spanish mackerel (Scomberomorus commerson). As expected, the smallest standard deviation of Ne estimates occurred when low-frequency alleles were excluded. Additional simulations indicated that the linkage disequilibrium method was sensitive to small numbers of genotypes from cryptic species or conspecific immigrants. A correspondence analysis algorithm was developed to detect and remove outlier genotypes that may have been inadvertently sampled from cryptic species or non-breeding immigrants from genetically separate populations. Simulations demonstrated the value of this approach in Spanish mackerel data. When putative immigrants were removed from the empirical data, 95% of the Ne estimates from jackknife resampling were above 24,000.
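The single-sample linkage disequilibrium method referenced above rests on the approximation E[r²] ≈ 1/S + 1/(3Ne) for randomly mating populations, so a point estimate of Ne can be back-calculated from the mean squared gametic correlation across locus pairs. The sketch below illustrates that back-calculation; it omits the bias corrections and rare-allele screening used in published estimators, and the example numbers are hypothetical.

```python
import numpy as np

def ld_ne_estimate(r2_mean, sample_size):
    """Effective population size from the mean squared gametic correlation (r^2).

    Uses the classic approximation E[r^2] ~= 1/S + 1/(3*Ne) for random mating.
    Illustrative only: published LD estimators add bias corrections and screen
    out low-frequency alleles before averaging r^2 across locus pairs.
    """
    r2_drift = r2_mean - 1.0 / sample_size   # remove the sampling contribution
    if r2_drift <= 0:
        return np.inf                         # no detectable drift signal
    return 1.0 / (3.0 * r2_drift)

# With S = 5000 genotypes, a small excess of r^2 over 1/S still yields a
# finite estimate in the vicinity of Ne = 10,000 (hypothetical input).
print(ld_ne_estimate(r2_mean=0.000233, sample_size=5000))
```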
Abstract:
Understanding how aquatic species grow is fundamental in fisheries because stock assessment often relies on growth dependent statistical models. Length-frequency-based methods become important when more applicable data for growth model estimation are either not available or very expensive. In this article, we develop a new framework for growth estimation from length-frequency data using a generalized von Bertalanffy growth model (VBGM) framework that allows for time-dependent covariates to be incorporated. A finite mixture of normal distributions is used to model the length-frequency cohorts of each month with the means constrained to follow a VBGM. The variances of the finite mixture components are constrained to be a function of mean length, reducing the number of parameters and allowing for an estimate of the variance at any length. To optimize the likelihood, we use a minorization–maximization (MM) algorithm with a Nelder–Mead sub-step. This work was motivated by the decline in catches of the blue swimmer crab (BSC) (Portunus armatus) off the east coast of Queensland, Australia. We test the method with a simulation study and then apply it to the BSC fishery data.
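As a concrete illustration of the modelling idea above (a normal mixture over length frequencies whose component means follow a von Bertalanffy curve and whose standard deviations are tied to the mean length), here is a minimal Python sketch. The fixed coefficient of variation, the softmax parameterisation of the mixing proportions and the direct Nelder–Mead optimisation are simplifying assumptions; the paper's MM algorithm, covariate handling and variance model are more elaborate.

```python
import numpy as np
from scipy.stats import norm
from scipy.optimize import minimize

def vbgm_mean(age, L_inf, K, t0):
    """Von Bertalanffy mean length at age."""
    return L_inf * (1.0 - np.exp(-K * (age - t0)))

def neg_log_lik(params, lengths, cohort_ages, cv=0.1):
    """Negative log-likelihood of a normal mixture over length-frequency data.

    Component means are constrained to the VBGM; component standard deviations
    are tied to the mean length (here via a fixed CV), mirroring the idea of
    modelling variance as a function of mean length.
    """
    L_inf, K, t0 = params[:3]
    logits = params[3:]
    props = np.exp(logits) / np.sum(np.exp(logits))   # mixing proportions
    mus = vbgm_mean(np.asarray(cohort_ages), L_inf, K, t0)
    sds = cv * mus
    dens = np.sum(props * norm.pdf(lengths[:, None], mus, sds), axis=1)
    return -np.sum(np.log(dens + 1e-300))

# Toy example: two cohorts aged 1 and 2 years, simulated lengths (hypothetical).
rng = np.random.default_rng(0)
lengths = np.concatenate([rng.normal(80, 8, 200), rng.normal(120, 12, 150)])
ages = [1.0, 2.0]
res = minimize(neg_log_lik, x0=[150.0, 0.6, 0.0, 0.0, 0.0],
               args=(lengths, ages), method="Nelder-Mead")
print(res.x[:3])  # estimated L_inf, K, t0
```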
Abstract:
The appropriate frequency and precision for surveys of wildlife populations represent a trade-off between survey cost and the risk of making suboptimal management decisions because of poor survey data. The commercial harvest of kangaroos is primarily regulated through annual quotas set as proportions of absolute estimates of population size. Stochastic models were used to explore the effects of varying precision, survey frequency and harvest rate on the risk of quasiextinction for an arid-zone and a more mesic-zone kangaroo population. Quasiextinction probability increases in a sigmoidal fashion as survey frequency is reduced. The risk is greater in more arid regions and is highly sensitive to harvest rate. An appropriate management regime involves regular surveys in the major harvest areas where harvest rate can be set close to the maximum sustained yield. Outside these areas, survey frequency can be reduced in relatively mesic areas and reduced in arid regions when combined with lowered harvest rates. Relative to other factors, quasiextinction risk is only affected by survey precision (standard error/mean × 100) when it is >50%, partly reflecting the safety of the strategy of harvesting a proportion of a population estimate.
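A minimal Monte Carlo sketch of the kind of trade-off described above: quotas are set as a fixed proportion of a noisy survey estimate, surveys occur every few years, and quasiextinction is recorded when the population falls below a threshold. All dynamics and parameter values here are hypothetical placeholders, not the paper's kangaroo population models.

```python
import numpy as np

def quasiextinction_prob(r_mean=0.1, r_sd=0.3, harvest_rate=0.15,
                         survey_cv=0.2, survey_interval=1, years=50,
                         threshold_frac=0.1, n_sims=2000, seed=1):
    """Monte Carlo probability of quasiextinction for a harvested population.

    A deliberately simple stand-in: exponential growth with environmental
    noise, a quota set as a fixed proportion of the most recent (noisy)
    survey estimate, and surveys every `survey_interval` years.
    """
    rng = np.random.default_rng(seed)
    extinct = 0
    for _ in range(n_sims):
        n = 1.0                                    # population relative to start
        estimate = n                               # last survey estimate
        for year in range(years):
            if year % survey_interval == 0:        # survey year: re-estimate size
                estimate = max(n * rng.normal(1.0, survey_cv), 0.0)
            quota = harvest_rate * estimate        # quota held until next survey
            n = max(n - quota, 0.0)
            n *= np.exp(rng.normal(r_mean, r_sd))  # stochastic growth
            if n < threshold_frac:
                extinct += 1
                break
    return extinct / n_sims

# Less frequent surveys raise quasiextinction risk (toy illustration).
print(quasiextinction_prob(survey_interval=1))
print(quasiextinction_prob(survey_interval=5))
```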
Abstract:
Light interception is a major factor influencing plant development and biomass production. Several methods have been proposed to determine this variable, but its calculation remains difficult in artificial environments with heterogeneous light. We propose a method that uses 3D virtual plant modelling and directional light characterisation to estimate light interception in highly heterogeneous light environments such as growth chambers and glasshouses. Intercepted light was estimated by coupling an architectural model and a light model for different genotypes of the rosette species Arabidopsis thaliana (L.) Heynh and a sunflower crop. The model was applied to plants of contrasting architectures, cultivated in isolation or in canopy, in natural or artificial environments, and under contrasting light conditions. The model gave satisfactory results when compared with observed data and enabled calculation of light interception in situations where direct measurements or classical methods were inefficient, such as young crops, isolated plants or artificial conditions. Furthermore, the model revealed that A. thaliana increased its light interception efficiency when shaded. To conclude, the method can be used to calculate intercepted light at organ, plant and plot levels, in natural and artificial environments, and should be useful in the investigation of genotype-environment interactions for plant architecture and light interception efficiency. This paper originates from a presentation at the 5th International Workshop on Functional–Structural Plant Models, Napier, New Zealand, November 2007.
Abstract:
Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A, and a complex structure for the genotype by environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of the sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
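In standard notation for these models (assumed here, not quoted from the paper), the additive genetic effects across environments are given a factor analytic covariance structure combined with the numerator relationship matrix:

```latex
\operatorname{var}(\mathbf{u}_a) \;=\; \left( \Lambda \Lambda^{\top} + \Psi \right) \otimes A
```

where u_a stacks the additive genetic effects over the p environments, A is the numerator relationship matrix, Λ is a p × k matrix of environment loadings with k < p, and Ψ is a diagonal matrix of specific variances. Setting Ψ = 0 gives the reduced rank form referred to above, which accommodates between-environment genetic correlations close to one without forcing a non-positive definite estimate.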
Abstract:
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle, to help beef producers meet market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for energy needs for maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Huntsville, AL, USA, Xcellon). The DNA and fat deposition parameter coefficients were found to be important determinants of model function; the DNA parameter coefficients with days on feed >100 days and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients with 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient indicated a metabolic difference between frame sizes. The results look promising and the prototype Davis Growth Model has the potential to help the beef industry meet market specifications.
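To make the structure named above concrete, the sketch below integrates a logistic growth equation for a DNA pool and a first-order fat deposition equation with SciPy. The functional forms, the lumping of depots into a single fat pool and all parameter values are hypothetical stand-ins, not the Davis Growth Model's actual equations or fitted coefficients.

```python
import numpy as np
from scipy.integrate import solve_ivp

def growth_rhs(t, y, k_dna=0.03, dna_max=1.0, k_fat=0.015, fat_target=120.0):
    """Right-hand side of a simplified two-equation growth system.

    dDNA/dt follows logistic growth; dFat/dt is a first-order approach to a
    target fat mass. This mirrors the structure named in the abstract
    (logistic DNA growth, first-order fat deposition) with hypothetical
    forms and parameters.
    """
    dna, fat = y
    d_dna = k_dna * dna * (1.0 - dna / dna_max)
    d_fat = k_fat * (fat_target - fat)
    return [d_dna, d_fat]

sol = solve_ivp(growth_rhs, t_span=(0.0, 200.0), y0=[0.1, 20.0],
                t_eval=np.linspace(0.0, 200.0, 5))
print(sol.t)   # days on feed
print(sol.y)   # DNA pool (relative) and total fat mass (kg)
```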
Abstract:
Two prerequisites for realistically embarking upon an eradication programme are that cost-benefit analysis favours this strategy over other management options and that sufficient resources are available to carry the programme through to completion. These are not independent criteria, but it is our view that too little attention has been paid to estimating the investment required to complete weed eradication programmes. We deal with this problem by using a two-pronged approach: 1) developing a stochastic dynamic model that provides an estimation of programme duration; and 2) estimating the inputs required to delimit a weed incursion and to prevent weed reproduction over a sufficiently long period to allow extirpation of all infestations. The model is built upon relationships that capture the time-related detection of new infested areas, rates of progression of infestations from the active to the monitoring stage, rates of reversion of infestations from the monitoring to active stage, and the frequency distribution of time since last detection for all infestations. This approach is applied to the branched broomrape (Orobanche ramosa) eradication programme currently underway in South Australia. This programme commenced in 1999 and currently 7450 ha are known to be infested with the weed. To date none of the infestations have been eradicated. Given recent (2008) levels of investment and current eradication methods, model predictions are that it would take, on average, an additional 73 years to eradicate this weed at an average additional cost (NPV) of $AU67.9m. When the model was run for circumstances in 2003 and 2006, the average programme duration and total cost (NPV) were predicted to be 159 and 94 years, and $AU91.3m and $AU72.3m, respectively. The reduction in estimated programme length and cost may represent progress towards the eradication objective, although eradication of this species still remains a long term prospect.
Abstract:
Fisheries managers are becoming increasingly aware of the need to quantify all forms of harvest, including that by recreational fishers. This need has been driven by both a growing recognition of the potential impact that noncommercial fishers can have on exploited resources and the requirement to allocate catch limits between different sectors of the wider fishing community in many jurisdictions. Marine recreational fishers are rarely required to report any of their activity, and some form of survey technique is usually required to estimate levels of recreational catch and effort. In this review, we describe and discuss studies that have attempted to estimate the nature and extent of recreational harvests of marine fishes in New Zealand and Australia over the past 20 years. We compare studies by method to show how circumstances dictate their application and to highlight recent developments that other researchers may find of use. Although there has been some convergence of approach, we suggest that context is an important consideration, and many of the techniques discussed here have been adapted to suit local conditions and to address recognized sources of bias. Much of this experience, along with novel improvements to existing approaches, has been reported only in "gray" literature because of an emphasis on providing estimates for immediate management purposes. This paper brings much of that work together for the first time, and we discuss how others might benefit from our experience.
Abstract:
NeEstimator v2 is a completely revised and updated implementation of software that produces estimates of contemporary effective population size, using several different methods and a single input file. NeEstimator v2 includes three single-sample estimators (updated versions of the linkage disequilibrium and heterozygote-excess methods, and a new method based on molecular coancestry), as well as the two-sample (moment-based temporal) method. New features include the following: (i) an improved method for accounting for missing data; (ii) options for screening out rare alleles; (iii) confidence intervals for all methods; (iv) the ability to analyse data sets with large numbers of genetic markers (10,000 or more); (v) options for batch processing large numbers of different data sets, which will facilitate cross-method comparisons using simulated data; and (vi) correction for temporal estimates when individuals sampled are not removed from the population (Plan I sampling). The user is given considerable control over input data and over the composition and format of output files. The freely available software has a new Java interface and runs under MacOS, Linux and Windows.
Abstract:
Fire is a major driver of ecosystem change and can disproportionately affect the cycling of different nutrients. Thus, a stoichiometric approach to investigating the relationships between nutrient availability and microbial resource use during decomposition is likely to provide insight into the effects of fire on ecosystem functioning. We conducted a field litter bag experiment to investigate the long-term impact of repeated fire on the stoichiometry of leaf litter C, N and P pools, and nutrient-acquiring enzyme activities during decomposition in a wet sclerophyll eucalypt forest in Queensland, Australia. Fire frequency treatments have been maintained since 1972, including burning every two years (2yrB), burning every four years (4yrB) and no burning (NB). C:N ratios in freshly fallen litter were 29-42% higher and C:P ratios were 6-25% lower for 2yrB than NB during decomposition, with correspondingly lower N:P ratios for 2yrB (27-32) than for NB (34-49). Trends in litter soluble and microbial N:P ratios were similar to the overall litter N:P ratios across fire treatments. Consistent with these trends, the ratio of activities of N-acquiring to P-acquiring enzymes in litter was higher for 2yrB than NB, while 4yrB was generally intermediate between 2yrB and NB. Decomposition rates of freshly fallen litter were significantly lower for 2yrB (72±2% mass remaining at the end of the experiment) than for 4yrB (59±3%) and NB (62±3%), a difference that may be related to effects of N limitation, lower moisture content, and/or litter C quality. Results for older mixed-age litter were similar to those for freshly fallen litter, although treatment differences were less pronounced. Overall, these findings show that frequent fire (2yrB) decoupled N and P cycling, as manifested in litter C:N:P stoichiometry and in microbial biomass N:P ratio and enzymatic activities. These data indicate that fire induced a transient shift to N-limited ecosystem conditions during the post-fire recovery phase.
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual’s previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag–recapture data and tag–recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
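For illustration, the sketch below builds a size-transition matrix by projecting each size class forward with a von Bertalanffy increment and integrating a conditional length distribution over the destination bins. It uses a plain normal conditional distribution and hypothetical growth parameters and bin widths, whereas the paper moment-matches skew-normal distributions derived from individual variability in asymptotic length, relative age at tagging and measurement error.

```python
import numpy as np
from scipy.stats import norm

def size_transition_matrix(bin_edges, L_inf=55.0, K=1.2, dt=1.0 / 12.0, sd=1.5):
    """Size-transition matrix from a von Bertalanffy growth increment.

    Rows index the starting size class, columns the destination class; each
    entry is the probability of ending the time step dt in that class,
    obtained by integrating a conditional length distribution over the
    destination bin. A plain normal conditional distribution is used here
    for simplicity; negative growth across classes is excluded by zeroing
    the mass below the current class before normalising.
    """
    bin_edges = np.asarray(bin_edges, dtype=float)
    mids = 0.5 * (bin_edges[:-1] + bin_edges[1:])
    stm = np.zeros((len(mids), len(mids)))
    for i, L in enumerate(mids):
        mean_next = L + (L_inf - L) * (1.0 - np.exp(-K * dt))   # expected length
        probs = np.diff(norm.cdf(bin_edges, loc=mean_next, scale=sd))
        probs[:i] = 0.0                    # disallow shrinking to a smaller class
        stm[i] = probs / probs.sum()
    return stm

# Hypothetical 5 mm carapace-length bins for a prawn-like species.
print(size_transition_matrix(np.arange(10.0, 60.0, 5.0)).round(3))
```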
Abstract:
Near infrared (NIR) spectroscopy was investigated as a potential rapid method of estimating fish age from whole otoliths of Saddletail snapper (Lutjanus malabaricus). Whole otoliths from 209 Saddletail snapper were extracted and their NIR spectral characteristics were acquired over a spectral range of 800–2780 nm. Partial least-squares (PLS) models were developed from the diffuse reflectance spectra and reference-validated age estimates (based on traditional sectioned otolith increments) to predict age for independent otolith samples. Predictive models developed for a specific season and geographical location performed poorly against a different season and geographical location. However, the overall PLS regression statistics for predicting a combined population incorporating both geographic location and season variables were a coefficient of determination (R2) of 0.94 and a root mean square error of prediction (RMSEP) of 1.54 for age estimation, indicating that Saddletail age could be predicted to within 1.5 increment counts. This level of accuracy suggests the method warrants further development for Saddletail snapper and may have potential for other fish species. A rapid method of fish age estimation has the potential to greatly reduce both the time and material costs of assessing and managing commercial fisheries.
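The workflow described (PLS regression of reference ages on otolith NIR spectra, evaluated by R2 and RMSEP) can be prototyped with scikit-learn as below. The spectra here are simulated placeholders and the number of latent components is an arbitrary choice; in practice it would be tuned by cross-validation on real diffuse reflectance spectra and sectioned-otolith reference ages.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error, r2_score

# Simulated stand-in data: 209 otolith spectra x 500 wavelengths, with age
# weakly encoded in some bands. Real inputs would be diffuse reflectance
# spectra (800-2780 nm) and reference ages from sectioned otoliths.
rng = np.random.default_rng(0)
ages = rng.integers(1, 15, size=209).astype(float)
spectra = rng.normal(size=(209, 500)) + ages[:, None] * np.linspace(0, 0.05, 500)

X_train, X_test, y_train, y_test = train_test_split(
    spectra, ages, test_size=0.3, random_state=0)

pls = PLSRegression(n_components=8)     # component count would be tuned by CV
pls.fit(X_train, y_train)
pred = pls.predict(X_test).ravel()

print("R2    =", round(r2_score(y_test, pred), 2))
print("RMSEP =", round(float(np.sqrt(mean_squared_error(y_test, pred))), 2))
```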
Abstract:
Reliable age information is vital for effective fisheries management, yet age determinations are absent for many deepwater sharks because they cannot be aged using traditional methods of growth band counts. An alternative approach to ageing using near infrared spectroscopy (NIRS) was investigated using dorsal fin spines, vertebrae and fin clips of three species of deepwater sharks. Ages were successfully estimated for the two dogfish, Squalus megalops and Squalus montalbani, and NIRS spectra were correlated with body size in the catshark, Asymbolus pallidus. Correlations between the estimated ages of the dogfish dorsal fin spines and their NIRS spectra were good, with R2=0.82 for S. megalops and R2=0.73 for S. montalbani. NIRS spectra from S. megalops vertebrae and fin clips, which have no visible growth bands, were correlated with estimated ages, with R2=0.89 and 0.76, respectively. NIRS has the capacity to non-lethally estimate ages from fin spines and fin clips, and thus could significantly reduce the numbers of sharks that need to be lethally sampled for ageing studies. The detection of ageing materials by NIRS in poorly calcified deepwater shark vertebrae could potentially enable ageing of this group of sharks, which are vulnerable to exploitation.
Abstract:
Retrospective identification of fire severity can improve our understanding of fire behaviour and ecological responses. However, burnt area records for many ecosystems are non-existent or incomplete, and those that are documented rarely include fire severity data. Retrospective analysis using satellite remote sensing data captured over extended periods can provide better estimates of fire history. This study aimed to assess the relationship between the Landsat differenced normalised burn ratio (dNBR) and the field-measured geometrically structured composite burn index (GeoCBI) for retrospective analysis of fire severity over a 23 year period in sclerophyll woodland and heath ecosystems. Further, we assessed whether dNBR fire severity classification accuracy was reduced by vegetation regrowth as the time between ignition and image capture increased. This was achieved by assessing four Landsat images captured at increasing times since ignition of the most recent burnt area. We found significant linear GeoCBI–dNBR relationships (R2 = 0.81 and 0.71) for data collected across ecosystems and for Eucalyptus racemosa ecosystems, respectively. Non-significant and weak linear relationships were observed for heath and Melaleuca quinquenervia ecosystems, suggesting that GeoCBI–dNBR was not appropriate for fire severity classification in those ecosystems. Therefore, retrospective fire severity was classified across ecosystems. Landsat images captured within ~30 days after fire events were minimally affected by post-burn vegetation regrowth.
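For reference, the normalised burn ratio underlying the index above is computed from near-infrared and shortwave-infrared reflectance as NBR = (NIR − SWIR) / (NIR + SWIR), and the differenced index is dNBR = NBR_prefire − NBR_postfire, with higher dNBR indicating greater burn severity. A minimal sketch follows; the reflectance values are toy numbers, not the study's imagery, and severity class thresholds would be calibrated against field indices such as the GeoCBI.

```python
import numpy as np

def nbr(nir, swir):
    """Normalised burn ratio: (NIR - SWIR) / (NIR + SWIR)."""
    nir = np.asarray(nir, dtype=float)
    swir = np.asarray(swir, dtype=float)
    return (nir - swir) / (nir + swir + 1e-12)   # small constant avoids /0

def dnbr(nir_pre, swir_pre, nir_post, swir_post):
    """Differenced NBR: pre-fire NBR minus post-fire NBR."""
    return nbr(nir_pre, swir_pre) - nbr(nir_post, swir_post)

# Toy surface-reflectance values (0-1 scale, hypothetical pixel).
print(dnbr(nir_pre=0.35, swir_pre=0.12, nir_post=0.18, swir_post=0.20))
```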