15 results for Mark Estimation
in eResearch Archive - Queensland Department of Agriculture
Abstract:
Aerial surveys of kangaroos (Macropus spp.) in Queensland are used to make economically important judgements on the levels of viable commercial harvest. Previous analyses of aerial kangaroo surveys have used both mark-recapture methodologies and conventional distance-sampling analyses. Conventional distance sampling has the disadvantage that detection is assumed to be perfect on the transect line, while mark-recapture methods are notoriously sensitive to unmodelled heterogeneity in capture probabilities. We introduce three methodologies for combining mark-recapture and distance-sampling data, aimed at exploiting the strengths of both approaches and overcoming their weaknesses. Two of these methods are based on the assumption of full independence between observers in the mark-recapture component, and this appears to introduce more bias in density estimation than it resolves through allowing uncertain trackline detection. Both of these methods give lower density estimates than conventional distance sampling, indicating a clear failure of the independence assumption. The third method, termed point independence, appears to perform very well, giving credible density estimates and good properties in terms of goodness-of-fit and percentage coefficient of variation. Estimated densities of eastern grey kangaroos range from 21 to 36 individuals per km2, with estimated coefficients of variation between 11% and 14% and estimated trackline detection probabilities primarily between 0.7 and 0.9.
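The full-independence assumption discussed above can be illustrated with a minimal double-observer calculation: under independence, each observer's trackline detection probability follows from the overlap in sightings, Lincoln-Petersen style. This is only a sketch with hypothetical counts, not the paper's estimator:

```python
def trackline_detection(n1, n2, n_both):
    """Estimate per-observer trackline detection probabilities from a
    double-observer (mark-recapture) survey, assuming full independence.

    n1, n2  -- number of animals seen by observer 1 and observer 2
    n_both  -- number seen by both observers (the 'recaptures')
    """
    p1 = n_both / n2            # P(observer 1 detects), conditioned on observer 2
    p2 = n_both / n1            # P(observer 2 detects), conditioned on observer 1
    p_any = p1 + p2 - p1 * p2   # P(at least one observer detects)
    return p1, p2, p_any

# Hypothetical sighting counts from one transect:
p1, p2, p_any = trackline_detection(n1=80, n2=70, n_both=56)
```

Unmodelled dependence between observers inflates n_both relative to the independence expectation, biasing the detection probabilities upward and density downward, consistent with the bias the abstract reports.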
Abstract:
Genetic mark–recapture requires efficient methods of uniquely identifying individuals. 'Shadows' (individuals with the same genotype at the selected loci) become more likely with increasing sample size, and bias harvest rate estimates. Finding loci is costly, but better loci reduce analysis costs and improve power. Optimal microsatellite panels minimize shadows, but panel design is a complex optimization process. locuseater and shadowboxer permit power and cost analysis of this process and automate some aspects, by simulating the entire experiment from panel design to harvest rate estimation.
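The "shadow" risk described above can be quantified with the standard single-locus probability of identity, PI = 2(Σp_i^2)^2 − Σp_i^4, multiplied across independent loci. This is a textbook sketch of the kind of calculation such panel-design tools automate, not the actual locuseater or shadowboxer code; the allele frequencies are illustrative:

```python
def prob_identity(freqs):
    """Probability that two unrelated individuals share a genotype at one
    locus: PI = 2*(sum p_i^2)^2 - sum p_i^4."""
    s2 = sum(p * p for p in freqs)
    s4 = sum(p ** 4 for p in freqs)
    return 2 * s2 * s2 - s4

def panel_prob_identity(loci):
    """Multi-locus PI for a panel of independent loci (product over loci)."""
    pi = 1.0
    for freqs in loci:
        pi *= prob_identity(freqs)
    return pi

# A locus with four equally common alleles, and a 10-locus panel of them:
pi_locus = prob_identity([0.25, 0.25, 0.25, 0.25])
pi_panel = panel_prob_identity([[0.25] * 4] * 10)
```

With n sampled individuals, the expected number of shadow pairs is roughly n*(n-1)/2 * pi_panel, so panels are designed to drive pi_panel well below 1/n^2.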
Abstract:
Light interception is a major factor influencing plant development and biomass production. Several methods have been proposed to determine this variable, but its calculation remains difficult in artificial environments with heterogeneous light. We propose a method that uses 3D virtual plant modelling and directional light characterisation to estimate light interception in highly heterogeneous light environments such as growth chambers and glasshouses. Intercepted light was estimated by coupling an architectural model and a light model for different genotypes of the rosette species Arabidopsis thaliana (L.) Heynh and a sunflower crop. The model was applied to plants of contrasting architectures, cultivated in isolation or in canopy, in natural or artificial environments, and under contrasting light conditions. The model gave satisfactory results when compared with observed data and enabled calculation of light interception in situations where direct measurements or classical methods were inefficient, such as young crops, isolated plants or artificial conditions. Furthermore, the model revealed that A. thaliana increased its light interception efficiency when shaded. To conclude, the method can be used to calculate intercepted light at organ, plant and plot levels, in natural and artificial environments, and should be useful in the investigation of genotype-environment interactions for plant architecture and light interception efficiency. This paper originates from a presentation at the 5th International Workshop on Functional–Structural Plant Models, Napier, New Zealand, November 2007.
Abstract:
Genetic models partitioning additive and non-additive genetic effects for populations tested in replicated multi-environment trials (METs) in a plant breeding program have recently been presented in the literature. For these data, the variance model involves the direct product of a large numerator relationship matrix A, and a complex structure for the genotype by environment interaction effects, generally of a factor analytic (FA) form. With MET data, we expect a high correlation in genotype rankings between environments, leading to non-positive definite covariance matrices. Estimation methods for reduced rank models have been derived for the FA formulation with independent genotypes, and we employ these estimation methods for the more complex case involving the numerator relationship matrix. We examine the performance of differing genetic models for MET data with an embedded pedigree structure, and consider the magnitude of the non-additive variance. The capacity of existing software packages to fit these complex models is largely due to the use of the sparse matrix methodology and the average information algorithm. Here, we present an extension to the standard formulation necessary for estimation with a factor analytic structure across multiple environments.
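The factor analytic (FA) form mentioned above models the across-environment genetic covariance as Sigma = Lambda * Lambda' + Psi, where Lambda is a (possibly reduced-rank) matrix of environment loadings and Psi is diagonal. A minimal sketch with illustrative numbers, not the paper's estimates:

```python
def fa_covariance(loadings, psi):
    """Build Sigma = Lambda*Lambda' + diag(psi) for a factor analytic model
    of genotype-by-environment effects. `loadings` is a list of
    per-environment loading vectors; `psi` holds environment-specific
    variances."""
    k = len(loadings)
    return [[sum(a * b for a, b in zip(loadings[i], loadings[j]))
             + (psi[i] if i == j else 0.0)
             for j in range(k)] for i in range(k)]

# Three environments, a single common factor (rank-1 FA):
Sigma = fa_covariance([[1.0], [0.8], [0.9]], [0.1, 0.2, 0.1])
```

A rank-1 Lambda with small psi gives between-environment genetic correlations near 1, which is why highly correlated genotype rankings push estimated covariance matrices toward reduced rank.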
Abstract:
The Davis Growth Model (a dynamic steer growth model encompassing 4 fat deposition models) is currently being used by the phenotypic prediction program of the Cooperative Research Centre (CRC) for Beef Genetic Technologies to predict P8 fat (mm) in beef cattle, to assist beef producers to meet market specifications. The concepts of cellular hyperplasia and hypertrophy are integral components of the Davis Growth Model. The net synthesis of total body fat (kg) is calculated from the net energy available after accounting for energy needs for maintenance and protein synthesis. Total body fat (kg) is then partitioned into 4 fat depots (intermuscular, intramuscular, subcutaneous, and visceral). This paper reports on the parameter estimation and sensitivity analysis of the DNA (deoxyribonucleic acid) logistic growth equations and the fat deposition first-order differential equations in the Davis Growth Model using acslXtreme (Xcellon, Huntsville, AL, USA). The DNA and fat deposition parameter coefficients were found to be important determinants of model function: the DNA parameter coefficients for days on feed >100 days, and the fat deposition parameter coefficients for all days on feed. The generalized NL2SOL optimization algorithm had the fastest processing time and the minimum number of objective function evaluations when estimating the 4 fat deposition parameter coefficients from 2 observed values (initial and final fat). The subcutaneous fat parameter coefficient indicated a metabolic difference between frame sizes. The results look promising, and the prototype Davis Growth Model has the potential to assist the beef industry to meet market specifications.
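The DNA accretion component described above follows a logistic growth equation. A minimal Euler-integration sketch (all parameter values are illustrative placeholders, not the estimated coefficients) shows the general shape such a model produces:

```python
def logistic_growth(D0, r, K, dt, steps):
    """Euler integration of dD/dt = r*D*(1 - D/K), the logistic form used
    for DNA accretion; D0, r and K here are illustrative placeholders."""
    D = D0
    traj = [D]
    for _ in range(steps):
        D += dt * r * D * (1.0 - D / K)
        traj.append(D)
    return traj

# DNA mass rising from 1 unit towards a ceiling of 100 units:
traj = logistic_growth(D0=1.0, r=0.1, K=100.0, dt=1.0, steps=200)
```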
Abstract:
This project collates support and extension materials to ensure that recipients of Australian cattle have at least a minimum understanding of animal husbandry. As the number of destination markets increases, so too will the need to produce similar material that is relevant and locally sensitive for these new markets.
Abstract:
Abstract of Macbeth, G. M., Broderick, D., Buckworth, R. & Ovenden, J. R. (in press, Feb 2013). Linkage disequilibrium estimation of effective population size with immigrants from divergent populations: a case study on Spanish mackerel (Scomberomorus commerson). G3: Genes, Genomes and Genetics. Estimates of genetic effective population size (Ne) using molecular markers are a potentially useful tool for the management of species ranging from endangered to commercially harvested. However, pitfalls are predicted when the effective size is large, as estimates require large numbers of samples from wild populations for statistical validity. Our simulations showed that linkage disequilibrium estimates of Ne up to 10,000 with finite confidence limits can be achieved with sample sizes around 5000. This was deduced from empirical allele frequencies of seven polymorphic microsatellite loci in a commercially harvested fisheries species, the narrow-barred Spanish mackerel (Scomberomorus commerson). As expected, the smallest standard deviation of Ne estimates occurred when low-frequency alleles were excluded. Additional simulations indicated that the linkage disequilibrium method was sensitive to small numbers of genotypes from cryptic species or conspecific immigrants. A correspondence analysis algorithm was developed to detect and remove outlier genotypes that could have been inadvertently sampled from cryptic species or from non-breeding immigrants from genetically separate populations. Simulations demonstrated the value of this approach in the Spanish mackerel data. When putative immigrants were removed from the empirical data, 95% of the Ne estimates from jackknife resampling were above 24,000.
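The sample-size issue above can be seen in the textbook linkage disequilibrium estimator, via Hill's approximation E[r^2] ≈ 1/(3Ne) + 1/S. The sketch below uses that simple form with illustrative numbers; real software applies bias corrections and allele-frequency screening not shown here:

```python
def ld_ne(mean_r2, S):
    """Crude LD estimate of effective population size from the mean squared
    allelic correlation r^2 across locus pairs and sample size S, using
    E[r^2] ~ 1/(3*Ne) + 1/S  =>  Ne ~ 1 / (3 * (r^2 - 1/S))."""
    signal = mean_r2 - 1.0 / S       # LD in excess of the sampling expectation
    if signal <= 0:
        return float('inf')          # no resolvable drift signal
    return 1.0 / (3.0 * signal)

# With S = 5000 samples, a tiny excess of r^2 over 1/S resolves a large Ne:
ne = ld_ne(mean_r2=0.00021, S=5000)
```

When S is small, sampling noise of order 1/S swamps the drift signal 1/(3Ne) for large Ne, which is why resolving Ne around 10,000 required sample sizes near 5000.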
Abstract:
Fisheries managers are becoming increasingly aware of the need to quantify all forms of harvest, including that by recreational fishers. This need has been driven by both a growing recognition of the potential impact that noncommercial fishers can have on exploited resources and the requirement to allocate catch limits between different sectors of the wider fishing community in many jurisdictions. Marine recreational fishers are rarely required to report any of their activity, and some form of survey technique is usually required to estimate levels of recreational catch and effort. In this review, we describe and discuss studies that have attempted to estimate the nature and extent of recreational harvests of marine fishes in New Zealand and Australia over the past 20 years. We compare studies by method to show how circumstances dictate their application and to highlight recent developments that other researchers may find of use. Although there has been some convergence of approach, we suggest that context is an important consideration, and many of the techniques discussed here have been adapted to suit local conditions and to address recognized sources of bias. Much of this experience, along with novel improvements to existing approaches, has been reported only in "gray" literature because of an emphasis on providing estimates for immediate management purposes. This paper brings much of that work together for the first time, and we discuss how others might benefit from our experience.
Abstract:
NeEstimator v2 is a completely revised and updated implementation of software that produces estimates of contemporary effective population size, using several different methods and a single input file. NeEstimator v2 includes three single-sample estimators (updated versions of the linkage disequilibrium and heterozygote-excess methods, and a new method based on molecular coancestry), as well as the two-sample (moment-based temporal) method. New features include the following: (i) an improved method for accounting for missing data; (ii) options for screening out rare alleles; (iii) confidence intervals for all methods; (iv) the ability to analyse data sets with large numbers of genetic markers (10000 or more); (v) options for batch processing large numbers of different data sets, which will facilitate cross-method comparisons using simulated data; and (vi) correction for temporal estimates when individuals sampled are not removed from the population (Plan I sampling). The user is given considerable control over input data, and over the composition and format of output files. The freely available software has a new JAVA interface and runs under MacOS, Linux and Windows.
Abstract:
We derive a new method for determining size-transition matrices (STMs) that eliminates probabilities of negative growth and accounts for individual variability. STMs are an important part of size-structured models, which are used in the stock assessment of aquatic species. The elements of STMs represent the probability of growth from one size class to another, given a time step. The growth increment over this time step can be modelled with a variety of methods, but when a population construct is assumed for the underlying growth model, the resulting STM may contain entries that predict negative growth. To solve this problem, we use a maximum likelihood method that incorporates individual variability in the asymptotic length, relative age at tagging, and measurement error to obtain von Bertalanffy growth model parameter estimates. The statistical moments for the future length given an individual’s previous length measurement and time at liberty are then derived. We moment match the true conditional distributions with skewed-normal distributions and use these to accurately estimate the elements of the STMs. The method is investigated with simulated tag–recapture data and tag–recapture data gathered from the Australian eastern king prawn (Melicertus plebejus).
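A simplified version of the transition-probability calculation can be sketched as follows. This uses a plain normal approximation to the future-length distribution (the paper moment-matches skew-normal distributions and accounts for tagging age and measurement error), and all growth parameters are illustrative:

```python
import math

def stm_row(L_prev, dt, bounds, Linf_mean=60.0, Linf_sd=5.0, k=0.3):
    """One row of a size-transition matrix: probabilities of moving from
    length L_prev into each size class after dt years, under a von
    Bertalanffy mean increment with variability in asymptotic length.
    A normal approximation stands in for the paper's skew-normal."""
    frac = math.exp(-k * dt)
    mean = L_prev * frac + Linf_mean * (1.0 - frac)   # E[future length]
    sd = Linf_sd * (1.0 - frac)                       # spread from L-infinity variation
    cdf = lambda x: 0.5 * (1.0 + math.erf((x - mean) / (sd * math.sqrt(2.0))))
    probs = [cdf(hi) - cdf(lo) for lo, hi in zip(bounds[:-1], bounds[1:])]
    total = sum(probs)
    return [p / total for p in probs]                 # renormalise over the classes

# A 40 cm individual over half a year, with 5 cm size classes:
row = stm_row(L_prev=40.0, dt=0.5, bounds=[30, 35, 40, 45, 50, 55, 60])
```

Note that this simple normal sketch can still place mass below L_prev; eliminating such negative-growth entries is exactly what the paper's conditional skew-normal construction achieves.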
Abstract:
Common coral trout Plectropomus leopardus is an iconic fish of the Great Barrier Reef (GBR) and is the most important fish for the commercial fishery there. Most of the catch is exported live to Asia. This stock assessment was undertaken in response to falls in catch sizes and catch rates in recent years, in order to gauge the status of the stock. It is the first stock assessment ever conducted of coral trout on the GBR, and brings together a multitude of different data sources for the first time. The GBR is very large and was divided into a regional structure based on the Bioregions defined by expert committees appointed by the Great Barrier Reef Marine Park Authority (GBRMPA) as part of the 2004 rezoning of the GBR. The regional structure consists of six Regions, from the Far Northern Region in the north to the Swains and Capricorn–Bunker Regions in the south. Regions also closely follow the boundaries between Bioregions. Two of the northern Regions are split into Subregions on the basis of potential changes in fishing intensity between the Subregions; there are nine Subregions altogether, which include four Regions that are not split. Bioregions are split into Subbioregions along the Subregion boundaries. Finally, each Subbioregion is split into a “blue” population which is open to fishing and a “green” population which is closed to fishing. The fishery is unusual in that catch rates as an indicator of abundance of coral trout are heavily influenced by tropical cyclones. After a major cyclone, catch rates fall for two to three years, and rebound after that. This effect is well correlated with the times of occurrence of cyclones, and usually occurs in the same month that the cyclone strikes. However, statistical analyses correlating catch rates with cyclone wind energy did not provide significantly different catch rate trends. Alternative indicators of cyclone strength may explain more of the catch rate decline, and future work should investigate this. 
Another feature of catch rates is the phenomenon of social learning in coral trout populations, whereby when a population of coral trout is fished, individuals quickly learn not to take bait. Then the catch rate falls sharply even when the population size is still high. The social learning may take place by fish directly observing their fellows being hooked, or perhaps heeding a chemo-sensory cue emitted by fish that are hooked. As part of the assessment, analysis of data from replenishment closures of Boult Reef in the Capricorn–Bunker Region (closed 1983–86) and Bramble Reef in the Townsville Subregion (closed 1992–95) estimated a strong social learning effect. A major data source for the stock assessment was the large collection of underwater visual survey (UVS) data collected by divers who counted the coral trout that they sighted. This allowed estimation of the density of coral trout in the different Bioregions (expressed as a number of fish per hectare). Combined with mapping data of all the 3000 or so reefs making up the GBR, the UVS results provided direct estimates of the population size in each Subbioregion. A regional population dynamic model was developed to account for the intricacies of coral trout population dynamics and catch rates. Because the statistical analysis of catch rates did not attribute much of the decline to tropical cyclones, (and thereby implied “real” declines in biomass), and because in contrast the UVS data indicate relatively stable population sizes, model outputs were unduly influenced by the unlikely hypothesis that falling catch rates are real. The alternative hypothesis that UVS data are closer to the mark and declining catch rates are an artefact of spurious (e.g., cyclone impact) effects is much more probable. Judging by the population size estimates provided by the UVS data, there is no biological problem with the status of coral trout stocks. 
The estimate of the total number of Plectropomus leopardus on blue zones on the GBR in the mid-1980s (the time of the major UVS series) was 5.34 million legal-sized fish, or about 8400 t exploitable biomass, with an additional 3350 t in green zones (using the current zoning which was introduced on 1 July 2004). For the offshore regions favoured by commercial fishers, the figure was about 4.90 million legal-sized fish in blue zones, or about 7700 t exploitable biomass. There is, however, an economic problem, as indicated by relatively low catch rates and anecdotal information provided by commercial fishers. The costs of fishing the GBR by hook and line (the only method compatible with the GBR’s high conservation status) are high, and commercial fishers are unable to operate profitably when catch rates are depressed (e.g., from a tropical cyclone). The economic problem is compounded by the effect of social learning in coral trout, whereby catch rates fall rapidly if fishers keep returning to the same fishing locations. In response, commercial fishers tend to spread out over the GBR, including the Far Northern and Swains Regions which are far from port and incur higher travel costs. The economic problem provides some logic to a reduction in the TACC. Such a reduction during good times, such as when the fishery is rebounding after a major tropical cyclone, could provide a net benefit to the fishery, as it would provide a margin of stock safety and make the fishery more economically robust by providing higher catch rates during subsequent periods of depressed catches. During hard times when catch rates are low (e.g., shortly after a major tropical cyclone), a change to the TACC would have little effect as even a reduced TACC would not come close to being filled. Quota adjustments based on catch rates should take account of long-term trends in order to mitigate variability and cyclone effects in data.
Abstract:
Near infrared (NIR) spectroscopy was investigated as a potential rapid method of estimating fish age from whole otoliths of Saddletail snapper (Lutjanus malabaricus). Whole otoliths from 209 Saddletail snapper were extracted and the NIR spectral characteristics were acquired over a spectral range of 800–2780 nm. Partial least-squares (PLS) models were developed from the diffuse reflectance spectra and reference-validated age estimates (based on traditional sectioned otolith increments) to predict age for independent otolith samples. Predictive models developed for a specific season and geographical location performed poorly against a different season and geographical location. However, overall PLS regression statistics for predicting a combined population incorporating both geographic location and season variables were: coefficient of determination (R2) = 0.94, root mean square error of prediction (RMSEP) = 1.54 for age estimation, indicating that Saddletail age could be predicted to within 1.5 increment counts. This level of accuracy suggests the method warrants further development for Saddletail snapper and may have potential for other fish species. A rapid method of fish age estimation could greatly reduce both the time and material costs involved in the assessment and management of commercial fisheries.
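The quoted RMSEP is simply the root mean square difference between NIR-predicted and reference (sectioned-otolith) ages, and can be sketched directly; the ages below are hypothetical:

```python
import math

def rmsep(predicted, observed):
    """Root mean square error of prediction; an RMSEP of 1.54 means NIR age
    predictions typically land within about 1.5 increment counts."""
    n = len(predicted)
    return math.sqrt(sum((p - o) ** 2 for p, o in zip(predicted, observed)) / n)

# Hypothetical NIR-predicted vs. reference ages for five otoliths:
err = rmsep([4.2, 6.1, 7.8, 9.5, 12.3], [4, 6, 9, 10, 12])
```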
Abstract:
Reliable age information is vital for effective fisheries management, yet age determinations are absent for many deepwater sharks as they cannot be aged using traditional growth-band counting methods. An alternative approach to ageing using near infrared spectroscopy (NIRS) was investigated using dorsal fin spines, vertebrae and fin clips of three species of deepwater sharks. Ages were successfully estimated for the two dogfish, Squalus megalops and Squalus montalbani, and NIRS spectra were correlated with body size in the catshark, Asymbolus pallidus. Correlations between estimated ages of the dogfish dorsal fin spines and their NIRS spectra were good, with S. megalops R2=0.82 and S. montalbani R2=0.73. NIRS spectra from S. megalops vertebrae and fin clips that have no visible growth bands were correlated with estimated ages, with R2=0.89 and 0.76, respectively. NIRS has the capacity to non-lethally estimate ages from fin spines and fin clips, and thus could significantly reduce the numbers of sharks that need to be lethally sampled for ageing studies. The detection of ageing materials by NIRS in poorly calcified deepwater shark vertebrae could potentially enable ageing of this group of sharks that are vulnerable to exploitation.
Abstract:
Retrospective identification of fire severity can improve our understanding of fire behaviour and ecological responses. However, burnt area records for many ecosystems are non-existent or incomplete, and those that are documented rarely include fire severity data. Retrospective analysis using satellite remote sensing data captured over extended periods can provide better estimates of fire history. This study aimed to assess the relationship between the Landsat differenced normalised burn ratio (dNBR) and the field-measured geometrically structured composite burn index (GeoCBI) for retrospective analysis of fire severity over a 23-year period in sclerophyll woodland and heath ecosystems. Further, we assessed whether dNBR fire severity classification accuracy was reduced by vegetation regrowth as the time between ignition and image capture increased. This was achieved by assessing four Landsat images captured at increasing time since ignition of the most recent burnt area. We found significant linear GeoCBI–dNBR relationships (R2 = 0.81 and 0.71) for data collected across ecosystems and for Eucalyptus racemosa ecosystems, respectively. Non-significant and weak linear relationships were observed for heath and Melaleuca quinquenervia ecosystems, suggesting that GeoCBI–dNBR was not appropriate for fire severity classification in specific ecosystems. Therefore, retrospective fire severity was classified across ecosystems. Landsat images captured within ~30 days after fire events were minimally affected by post-burn vegetation regrowth.
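The spectral indices named above have standard definitions: NBR = (NIR − SWIR) / (NIR + SWIR) for each image, and dNBR is the pre-fire NBR minus the post-fire NBR. The reflectance values below are illustrative:

```python
def nbr(nir, swir):
    """Normalised burn ratio from near-infrared and shortwave-infrared
    surface reflectance (both in [0, 1])."""
    return (nir - swir) / (nir + swir)

def dnbr(pre_nir, pre_swir, post_nir, post_swir):
    """Differenced NBR: pre-fire NBR minus post-fire NBR.
    Larger values indicate more severe burns."""
    return nbr(pre_nir, pre_swir) - nbr(post_nir, post_swir)

# Healthy vegetation (high NIR) burning to char (high SWIR):
severity = dnbr(pre_nir=0.5, pre_swir=0.2, post_nir=0.3, post_swir=0.4)
```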
Abstract:
It is common to model the dynamics of fisheries using natural and fishing mortality rates estimated independently using two separate analyses. Fishing mortality is routinely estimated from widely available logbook data, whereas natural mortality estimations have often required more specific, less frequently available, data. However, in the case of the fishery for brown tiger prawn (Penaeus esculentus) in Moreton Bay, both fishing and natural mortality rates have been estimated from logbook data. The present work extended the fishing mortality model to incorporate an eco-physiological response of tiger prawn to temperature, and allowed recruitment timing to vary from year to year. These ecological characteristics of the dynamics of this fishery were ignored in the separate model that estimated natural mortality. Therefore, we propose to estimate both natural and fishing mortality rates within a single model using a consistent set of hypotheses. This approach was applied to Moreton Bay brown tiger prawn data collected between 1990 and 2010. Natural mortality was estimated by maximum likelihood to be equal to 0.032 ± 0.002 week−1, approximately 30% lower than the fixed value used in previous models of this fishery (0.045 week−1).
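A standard building block behind such combined-mortality models is the Baranov catch equation, which links catch to abundance through fishing (F) and natural (M) mortality acting simultaneously. This is a generic sketch, not the paper's likelihood; N and F are illustrative, while M uses the weekly estimate quoted above:

```python
import math

def baranov_catch(N, F, M, t=1.0):
    """Expected catch over time t from abundance N when fishing mortality F
    and natural mortality M (per week here) operate simultaneously:
    C = (F / Z) * (1 - exp(-Z * t)) * N, with Z = F + M."""
    Z = F + M
    return N * (F / Z) * (1.0 - math.exp(-Z * t))

# One week of fishing on 1000 prawns, with M = 0.032 per week (the abstract's
# estimate) and an illustrative F = 0.1 per week:
catch = baranov_catch(N=1000.0, F=0.1, M=0.032, t=1.0)
```

Fitting F and M jointly within one such model, rather than in two separate analyses, is the consistency the abstract argues for.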