858 results for estimating population-size
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described, making them accessible to practicing ecologists.
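The conventional-distance-sampling idea in point 4 — fit a detection function to perpendicular distances, convert it into an effective strip half-width, then scale up — can be sketched in a few lines. This is an illustrative Python sketch, not the Distance software itself: it assumes a half-normal detection function with no truncation (which gives the closed-form MLE sigma^2 = sum(x^2)/n), and the distances and transect length below are invented.

```python
import math

def halfnormal_density(distances, line_length):
    """Line-transect density under a half-normal detection function
    g(x) = exp(-x^2 / (2 sigma^2)) with no truncation.
    Closed-form MLE: sigma^2 = sum(x^2) / n.
    Effective strip half-width: mu = integral of g = sigma * sqrt(pi/2).
    Density: D = n / (2 * mu * L)."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n
    mu = math.sqrt(sigma2) * math.sqrt(math.pi / 2)
    return n / (2.0 * mu * line_length)

# hypothetical perpendicular distances (metres) from a 2000 m transect
dists = [0.5, 3.0, 7.5, 12.0, 1.0, 5.5, 9.0, 2.5, 4.0, 15.0]
density = halfnormal_density(dists, 2000.0)  # animals per square metre
```

In practice Distance handles truncation, covariates and variance estimation; this closed form only holds for the untruncated half-normal case.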
Abstract:
BACKGROUND: To plan and implement services for persons who inject drugs (PWID), knowing their number is essential. For the island of Montréal, Canada, the only estimate, of 11,700 PWID, was obtained in 1996 through a capture-recapture method. Thirteen years later, this study was undertaken to produce a new estimate. METHODS: PWID were defined as individuals aged 14-65 years, having injected recently and living on the island of Montréal. The study period was 07/01/2009 to 06/30/2010. An estimate was produced using a six-source capture-recapture log-linear regression method. The data sources were two epidemiological studies and four drug dependence treatment centres. Model selection was conducted in two steps, the first focusing on interactions between sources and the second on age group and gender as covariates and as modulators of interactions. RESULTS: A total of 1480 PWID were identified in the six capture sources, corresponding to 1132 different individuals. Based on the best-fitting model, which included age group and sex as covariates and six two-source interactions (some modulated by age), the estimated population was 3910 PWID (95% confidence interval (CI): 3180-4900), which represents a prevalence of 2.8 (95% CI: 2.3-3.5) PWID per 1000 persons aged 14-65 years. CONCLUSIONS: The 2009-2010 estimate represents a two-thirds reduction compared with the 1996 estimate. The multisource capture-recapture method is useful for producing estimates of the size of the PWID population. It is of particular interest when conducted at regular intervals, allowing close monitoring of the injection phenomenon.
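The study's six-source log-linear model is too involved for a short example, but the underlying capture-recapture logic can be illustrated with the simplest two-source case, using Chapman's bias-corrected Lincoln-Petersen estimator. All counts below are hypothetical, not taken from the study.

```python
def chapman_estimate(n1, n2, m):
    """Chapman's bias-corrected Lincoln-Petersen estimator for two
    capture sources: n1 individuals on list 1, n2 on list 2, m on both.
    N_hat = (n1 + 1)(n2 + 1) / (m + 1) - 1."""
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# hypothetical counts: 400 PWID identified by an epidemiological study,
# 600 by treatment records, 60 appearing on both lists
N_hat = chapman_estimate(400, 600, 60)
```

Two-source estimators assume independent sources; the log-linear approach in the study exists precisely to model the between-source interactions this sketch ignores.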
Abstract:
Since 1990, the issue of homelessness has become increasingly important in Hungary as a result of economic and structural changes. Various suggestions as to how the problem may be solved have always been preceded by the question "How many homeless people are there?" and there is still no official consensus as to the answer. Counting of the homeless is particularly difficult because of the bias in the initial sampling frame due to two factors that characterise this population: the definition of homelessness, and its 'hidden' nature. David aimed to estimate the size of the homeless population of Budapest by using two non-standard sampling methods: snowball sampling and the capture-recapture method. Her calculations are based on three data sets: one snowball data set and two independent list data sets. These estimators, supported by other statistical data, suggest that in 1999 there were about 8000-10000 homeless people in Budapest.
Abstract:
We propose a new model for estimating the size of a population from successive catches taken during a removal experiment. The data from these experiments often have excessive variation, known as overdispersion, compared with that predicted by the multinomial model. The new model allows catchability to vary randomly among samplings, which accounts for overdispersion. When the catchability is assumed to have a beta distribution, the likelihood function, referred to as beta-multinomial, is derived, and hence the maximum likelihood estimates can be evaluated. Simulations show that in the presence of extra variation in the data, the confidence intervals have been substantially underestimated in previous models (Leslie-DeLury, Moran) and that the new model provides more reliable confidence intervals. The performance of these methods was also demonstrated using two real data sets: one with overdispersion, from smallmouth bass (Micropterus dolomieu), and the other without overdispersion, from rat (Rattus rattus).
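For intuition about the removal estimators this model generalises, the classical two-pass special case (constant catchability, no overdispersion) has a closed form. The catches below are invented; this is the simple baseline, not the beta-multinomial model.

```python
def two_pass_removal(c1, c2):
    """Seber-Le Cren two-pass removal estimator. With constant
    catchability p, E[c1] = N*p and E[c2] = N*(1-p)*p, so
    N = c1^2 / (c1 - c2) and p = (c1 - c2) / c1 (requires c1 > c2)."""
    if c1 <= c2:
        raise ValueError("estimator undefined when c1 <= c2")
    return c1 * c1 / (c1 - c2), (c1 - c2) / c1

# hypothetical catches from two successive removal passes
N_hat, p_hat = two_pass_removal(120, 70)
```

Under overdispersion the point estimate may still be reasonable, but — as the abstract notes — confidence intervals built on the constant-catchability assumption are too narrow.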
Abstract:
Estimates of dolphin school sizes made by observers and crew members aboard tuna seiners or by observers on ship or aerial surveys are important components of population estimates of dolphins involved in the yellowfin tuna fishery in the eastern Pacific. Differences in past estimates made from tuna seiners and research ships and aircraft have been noted by Brazier (1978). To compare various methods of estimating dolphin school sizes, a research cruise was undertaken with the following major objectives: 1) compare estimates made by observers aboard a tuna seiner and in the ship's helicopter, from aerial photographs, and from counts made at the backdown channel; 2) compare estimates of observers who are told the count of the school size after making their estimate with those of observers who are not aware of the count, to determine if observers can learn to estimate more accurately; and 3) obtain movie and still photographs of dolphin schools of known size at various stages of chase, capture and release to be used for observer training. The secondary objectives of the cruise were to: 1) obtain life history specimens and data from any dolphins killed incidental to purse seining, to be analyzed by the U.S. National Marine Fisheries Service (NMFS); 2) record evasion tactics of dolphin schools by observing them from the helicopter while the seiner approached the school; 3) examine alternative methods for estimating the distance and bearing of schools when they were first sighted; and 4) collect the Commission's standard cetacean sighting, set log and daily activity data and expendable bathythermograph data. (PDF contains 31 pages.)
Abstract:
Crab traps have been used extensively in studies on the population dynamics of blue crabs to provide estimates of catch per unit of effort; however, these estimates have been determined without adequate consideration of escape rates. We examined the ability of the blue crab (Callinectes sapidus) to escape crab pots and the possibility that intraspecific crab interactions have an effect on catch rates. Approximately 85% of crabs that entered a pot escaped, and 83% of crabs escaped from the bait chamber (kitchen). Blue crabs exhibited few aggressive behavioral interactions in and around the crab pot and were documented to move freely in and out of the pot. Both the mean number and size of crabs caught were significantly smaller at deeper depths. Results from this study show that current estimates of catch per unit of effort may be biased given the high escape rate of blue crabs documented in this study. The results of this paper provide a mechanistic view of trap efficacy, and reveal crab behavior in and around commercial crab pots.
Abstract:
The time-of-detection method for aural avian point counts is a new method of estimating abundance, allowing for uncertain probability of detection. The method has been specifically designed to allow for variation in singing rates of birds. It involves dividing the time interval of the point count into several subintervals and recording the detection history of the subintervals when each bird sings. The method can be viewed as generating data equivalent to closed capture–recapture information. The method is different from the distance and multiple-observer methods in that it is not required that all the birds sing during the point count. As this method is new and there is some concern as to how well individual birds can be followed, we carried out a field test of the method using simulated known populations of singing birds, using a laptop computer to send signals to audio stations distributed around a point. The system mimics actual aural avian point counts, but also allows us to know the size and spatial distribution of the populations we are sampling. Fifty 8-min point counts (broken into four 2-min intervals) using eight species of birds were simulated. Singing rate of an individual bird of a species was simulated following a Markovian process (singing bouts followed by periods of silence), which we felt was more realistic than a truly random process. The main emphasis of our paper is to compare results from species singing at (high and low) homogeneous rates per interval with those singing at (high and low) heterogeneous rates. Population size was estimated accurately for the species simulated, with a high homogeneous probability of singing. Populations of simulated species with lower but homogeneous singing probabilities were somewhat underestimated. Populations of species simulated with heterogeneous singing probabilities were substantially underestimated.
Underestimation was caused by both the very low detection probabilities of all distant individuals and by individuals with low singing rates also having very low detection probabilities.
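The closed capture-recapture data that time-of-detection counts generate can be analysed under the simplest closed-population model, M0 (equal per-subinterval detection probability for every bird). The sketch below uses invented detection histories and is not the authors' estimator, just the simplest model such data support.

```python
def m0_abundance(histories, iters=200):
    """Closed-population M0 estimator: every bird shares one
    per-subinterval detection probability p. With n distinct birds,
    T subintervals and D total detections, solve
        N = n / (1 - (1 - p)**T),  p = D / (N * T)
    by fixed-point iteration."""
    n = len(histories)
    T = len(histories[0])
    D = sum(sum(h) for h in histories)
    N = float(n)
    for _ in range(iters):
        p = D / (N * T)
        N = n / (1.0 - (1.0 - p) ** T)
    return N

# invented detection histories: one 0/1 tuple per detected bird,
# one entry per 2-min subinterval of an 8-min count
hists = [(1, 0, 1, 1), (0, 1, 0, 0), (1, 1, 1, 1),
         (0, 0, 1, 0), (1, 0, 0, 1), (0, 1, 1, 0)]
N_hat = m0_abundance(hists)  # slightly above the 6 birds detected
```

The underestimation the field test found corresponds to violations of exactly this homogeneity assumption: birds with low singing rates contribute little to D but are also the birds most often missed entirely.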
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001 where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, . . . contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
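The Zelterman estimator itself is compact: with f1 and f2 the numbers of individuals observed exactly once and twice, lambda_hat = 2*f2/f1 and N_hat = n / (1 - exp(-lambda_hat)), where n is the number of observed individuals. A minimal sketch with invented contact counts:

```python
import math

def zelterman(counts):
    """Zelterman's estimator from a zero-truncated count list.
    f1, f2 = numbers of units seen exactly once and twice;
    lambda_hat = 2 * f2 / f1; N_hat = n / (1 - exp(-lambda_hat))."""
    n = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

# invented contact counts, one entry per observed drug user
counts = [1] * 120 + [2] * 40 + [3] * 15 + [4] * 5
N_hat = zelterman(counts)  # estimate including the zero-contact group
```

Because lambda_hat uses only f1 and f2, the estimator ignores the heavier tail of the count distribution, which is the source of its robustness to unobserved heterogeneity.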
Abstract:
This note considers the variance estimation for population size estimators based on capture–recapture experiments. Whereas a diversity of estimators of the population size has been suggested, the question of estimating the associated variances is less frequently addressed. This note points out that the technique of conditioning can be applied here successfully which also allows us to identify sources of variation: the variance due to estimation of the model parameters and the binomial variance due to sampling n units from a population of size N. It is applied to estimators typically used in capture–recapture experiments in continuous time including the estimators of Zelterman and Chao and improves upon previously used variance estimators. In addition, knowledge of the variances associated with the estimators by Zelterman and Chao allows the suggestion of a new estimator as the weighted sum of the two. The decomposition of the variance into the two sources allows also a new understanding of how resampling techniques like the Bootstrap could be used appropriately. Finally, the sample size question for capture–recapture experiments is addressed. Since the variance of population size estimators increases with the sample size, it is suggested to use relative measures such as the observed-to-hidden ratio or the completeness of identification proportion for approaching the question of sample size choice.
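As a concrete companion, here is Chao's lower-bound estimator together with a naive bootstrap variance obtained by resampling the observed units. The counts are invented, and this simple resampling scheme is only an illustration, not necessarily the bootstrap usage the note's variance decomposition recommends.

```python
import random

def chao(counts):
    """Chao's lower-bound estimator: N_hat = n + f1^2 / (2 * f2)."""
    n = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return float(n)  # guard against degenerate resamples
    return n + f1 * f1 / (2.0 * f2)

def bootstrap_variance(counts, reps=500, seed=1):
    """Naive bootstrap: resample observed units with replacement and
    take the empirical variance of the re-estimates."""
    rng = random.Random(seed)
    ests = []
    for _ in range(reps):
        resample = [rng.choice(counts) for _ in counts]
        ests.append(chao(resample))
    mean = sum(ests) / reps
    return sum((e - mean) ** 2 for e in ests) / (reps - 1)

counts = [1] * 120 + [2] * 40 + [3] * 15 + [4] * 5  # invented data
N_hat = chao(counts)
var_hat = bootstrap_variance(counts)
```

Note that resampling only the n observed units captures the binomial sampling component imperfectly; the note's point is that separating the parameter-estimation and binomial variance components clarifies what a bootstrap should actually resample.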
Abstract:
The feasibility of using an in-hardware implementation of a genetic algorithm (GA) to solve the computationally expensive travelling salesman problem (TSP) is explored, especially in regard to hardware resource requirements for problem and population sizes. We investigate via numerical experiments whether a small population size might prove sufficient to obtain reasonable quality solutions for the TSP, thereby permitting relatively resource efficient hardware implementation on field programmable gate arrays (FPGAs). Software experiments on two TSP benchmarks involving 48 and 532 cities were used to explore the extent to which population size can be reduced without compromising solution quality, and results show that a GA allowed to run for a large number of generations with a smaller population size can yield solutions of comparable quality to those obtained using a larger population. This finding is then used to investigate feasible problem sizes on a targeted Virtex-7 vx485T-2 FPGA platform via exploration of hardware resource requirements for memory and data flow operations.
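A small-population GA for the TSP of the kind discussed can be prototyped in software before committing to hardware. This toy Python version (tournament selection, order crossover, swap mutation, elitism of one, population size 8) uses six invented cities on a regular hexagon, so the optimal perimeter tour has length 6.0; it is a sketch of the algorithm class, not the FPGA design.

```python
import math
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]]
               for i in range(len(tour)))

def order_crossover(p1, p2, rng):
    """Order crossover (OX): copy a random slice from parent 1, then
    fill remaining positions in the order cities appear in parent 2."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b] = p1[a:b]
    fill = [c for c in p2 if c not in p1[a:b]]
    j = 0
    for i in range(n):
        if child[i] is None:
            child[i] = fill[j]
            j += 1
    return child

def ga_tsp(dist, pop_size=8, generations=300, seed=0):
    """Tiny GA: tournament selection (size 3), order crossover,
    swap mutation, elitism of one, deliberately small population."""
    rng = random.Random(seed)
    n = len(dist)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda t: tour_length(t, dist))
        next_pop = [pop[0]]                      # elitism: keep the best
        while len(next_pop) < pop_size:
            p1 = min(rng.sample(pop, 3), key=lambda t: tour_length(t, dist))
            p2 = min(rng.sample(pop, 3), key=lambda t: tour_length(t, dist))
            child = order_crossover(p1, p2, rng)
            i, j = rng.sample(range(n), 2)       # swap mutation
            child[i], child[j] = child[j], child[i]
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=lambda t: tour_length(t, dist))

# six invented cities on a regular hexagon inscribed in the unit circle:
# the optimal tour walks the perimeter, total length 6 * 1.0 = 6.0
pts = [(math.cos(2 * math.pi * k / 6), math.sin(2 * math.pi * k / 6))
       for k in range(6)]
dist = [[math.hypot(px - qx, py - qy) for qx, qy in pts] for px, py in pts]
best = ga_tsp(dist)
```

The small population compensates with many generations, mirroring the paper's finding; the selection, crossover and mutation loops here are also the operations an FPGA pipeline would parallelise.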
Abstract:
We consider the problem of estimating a population size from successive catches taken during a removal experiment and propose two estimating functions approaches, the traditional quasi-likelihood (TQL) approach for dependent observations and the conditional quasi-likelihood (CQL) approach using the conditional mean and conditional variance of the catch given previous catches. Asymptotic covariance of the estimates and the relationship between the two methods are derived. Simulation results and application to the catch data from smallmouth bass show that the proposed estimating functions perform better than other existing methods, especially in the presence of overdispersion.
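The conditional-mean structure exploited by the CQL approach — expected catch falls linearly with the total already removed — is also the basis of the classical Leslie regression, which makes a compact illustration (the proposed estimating-function methods themselves are more involved). The catches below are invented and exactly geometric, so the regression recovers the generating values N = 1000/3 and q = 0.3.

```python
def leslie_estimate(catches):
    """Leslie catch-effort regression for a constant-effort removal
    experiment: E[c_i] = q * (N - K_i), where K_i is the cumulative
    catch removed before pass i. OLS of c_i on K_i gives
    slope = -q and intercept = q * N, so q = -slope, N = intercept/q."""
    K, total = [], 0
    for c in catches:
        K.append(total)
        total += c
    n = len(catches)
    mK = sum(K) / n
    mC = sum(catches) / n
    slope = (sum((k - mK) * (c - mC) for k, c in zip(K, catches))
             / sum((k - mK) ** 2 for k in K))
    q = -slope
    N = (mC - slope * mK) / q   # intercept / q
    return N, q

# invented catches: N*p, N*(1-p)*p, N*(1-p)^2*p with N=1000/3, p=0.3
N_hat, q_hat = leslie_estimate([100, 70, 49])
```

Because real catch series are noisy and often overdispersed, this OLS fit understates uncertainty — which is the gap the TQL and CQL estimating functions address.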
Abstract:
Microsatellite markers were used to examine spatio-temporal genetic variation in the endangered eastern freshwater cod Maccullochella ikei in the Clarence River system, eastern Australia. High levels of population structure were detected. A model-based clustering analysis of multilocus genotypes identified four populations that were highly differentiated by F-statistics (FST = 0.09-0.49; P < 0.05), suggesting fragmentation and restricted dispersal, particularly among upstream sites. Hatchery breeding programmes were used to re-establish locally extirpated populations and to supplement remnant populations. Bayesian and frequency-based analyses of hatchery fingerling samples provided evidence for population admixture in the hatchery, with the majority of parental stock sourced from distinct upstream sites. Comparison between historical and contemporary wild-caught samples showed a significant loss of heterozygosity (21%) and allelic richness (24%) in the Mann and Nymboida Rivers since the commencement of stocking. Fragmentation may have been a causative factor; however, temporal shifts in allele frequencies suggest swamping with hatchery-produced M. ikei has contributed to the genetic decline in the largest wild population. This study demonstrates the importance of using information on genetic variation and population structure in the management of breeding and stocking programmes, particularly for threatened species.
Abstract:
Abstract of Macbeth, G. M., Broderick, D., Buckworth, R. & Ovenden, J. R. (In press, Feb 2013). Linkage disequilibrium estimation of effective population size with immigrants from divergent populations: a case study on Spanish mackerel (Scomberomorus commerson). G3: Genes, Genomes and Genetics. Estimates of genetic effective population size (Ne) using molecular markers are a potentially useful tool for the management of endangered through to commercial species. However, pitfalls are predicted when the effective size is large, as estimates require large numbers of samples from wild populations for statistical validity. Our simulations showed that linkage disequilibrium estimates of Ne up to 10,000 with finite confidence limits can be achieved with sample sizes around 5000. This was deduced from empirical allele frequencies of seven polymorphic microsatellite loci in a commercially harvested fisheries species, the narrow-barred Spanish mackerel (Scomberomorus commerson). As expected, the smallest standard deviation of Ne estimates occurred when low-frequency alleles were excluded. Additional simulations indicated that the linkage disequilibrium method was sensitive to small numbers of genotypes from cryptic species or conspecific immigrants. A correspondence analysis algorithm was developed to detect and remove outlier genotypes that could possibly be inadvertently sampled from cryptic species or non-breeding immigrants from genetically separate populations. Simulations demonstrated the value of this approach in Spanish mackerel data. When putative immigrants were removed from the empirical data, 95% of the Ne estimates from jackknife resampling were above 24,000.