5 results for Mate sampling
in DigitalCommons@University of Nebraska - Lincoln
Abstract:
Killer whale (Orcinus orca Linnaeus, 1758) abundance in the North Pacific is known only for a few populations for which extensive longitudinal data are available, with little quantitative data from more remote regions. Line-transect ship surveys were conducted in July and August of 2001–2003 in coastal waters of the western Gulf of Alaska and the Aleutian Islands. Conventional and Multiple Covariate Distance Sampling methods were used to estimate the abundance of different killer whale ecotypes, which were distinguished based upon morphological and genetic data. Abundance was calculated separately for two data sets that differed in the method by which killer whale group size data were obtained. Initial group size (IGS) data corresponded to estimates of group size at the time of first sighting, and post-encounter group size (PEGS) corresponded to estimates made after closely approaching sighted groups.
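The conventional distance sampling estimator referred to above can be sketched as follows. The half-normal detection function and the sighting distances here are illustrative assumptions, not values from the survey:

```python
import math

def halfnormal_esw(distances):
    """Effective strip half-width (ESW) under a half-normal detection function,
    using the closed-form MLE sigma^2 = sum(x^2)/n for perpendicular distances."""
    n = len(distances)
    sigma2 = sum(x * x for x in distances) / n
    # ESW is the integral of exp(-x^2 / (2 sigma^2)) over [0, inf)
    return math.sqrt(sigma2 * math.pi / 2.0)

def density_estimate(distances, line_length_km):
    """Conventional line-transect estimator: D = n / (2 * ESW * L)."""
    return len(distances) / (2.0 * halfnormal_esw(distances) * line_length_km)

# Hypothetical perpendicular sighting distances (km) pooled over transects
dists = [0.10, 0.05, 0.30, 0.02, 0.15, 0.25, 0.08, 0.12]
D_hat = density_estimate(dists, line_length_km=100.0)  # groups per km^2
```

Abundance of individuals would multiply a group-density estimate like this by an estimate of mean group size, which is where the IGS/PEGS distinction in the abstract enters.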
Abstract:
Classical sampling methods can be used to estimate the mean of a finite or infinite population. Block kriging also estimates the mean, but of an infinite population in a continuous spatial domain. In this paper, I consider a finite population version of block kriging (FPBK) for plot-based sampling. The data are assumed to come from a spatial stochastic process. Minimizing mean-squared-prediction errors yields best linear unbiased predictions that are a finite population version of block kriging. FPBK has versions comparable to simple random sampling and stratified sampling, and includes the general linear model. This method has been tested for several years for moose surveys in Alaska, and an example is given where results are compared to stratified random sampling. In general, assuming a spatial model gives three main advantages over classical sampling: (1) FPBK is usually more precise than simple or stratified random sampling, (2) FPBK allows small area estimation, and (3) FPBK allows nonrandom sampling designs.
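The FPBK idea can be sketched minimally under an assumed constant-mean model with exponential covariance: predict each unsampled plot by its best linear unbiased predictor, then add the predictions to the observed plot totals. All plot counts below are simulated; nothing comes from the Alaska moose surveys:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate 50 plot centers and spatially correlated plot counts
coords = rng.uniform(0, 10, size=(50, 2))
dist = np.linalg.norm(coords[:, None] - coords[None, :], axis=2)
Sigma = 4.0 * np.exp(-dist / 3.0)                      # assumed exponential covariance
z = rng.multivariate_normal(np.full(50, 20.0), Sigma)  # "true" counts on all plots

obs = np.arange(20)          # sampled plots
mis = np.arange(20, 50)      # unsampled plots
S_oo = Sigma[np.ix_(obs, obs)]
S_mo = Sigma[np.ix_(mis, obs)]

# GLS estimate of the constant mean from the sampled plots
ones = np.ones(len(obs))
Sinv = np.linalg.inv(S_oo)
mu_hat = (ones @ Sinv @ z[obs]) / (ones @ Sinv @ ones)

# BLUP of each unsampled plot, then the finite-population total
z_pred = mu_hat + S_mo @ Sinv @ (z[obs] - mu_hat)
total_hat = z[obs].sum() + z_pred.sum()
```

Because the target is a finite-population total, the observed plots enter the prediction exactly; only the unsampled remainder is predicted from the spatial model.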
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described, making them accessible to practicing ecologists.
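The step from conventional to multiple-covariate distance sampling can be illustrated with a toy half-normal model whose scale depends on a covariate. The coefficients and the sea-state covariate below are invented for illustration and are not Distance output:

```python
import math

def g(x, beaufort, beta0=-1.0, beta1=-0.3):
    """Half-normal detection probability at perpendicular distance x (km),
    with a covariate-dependent scale: rougher seas shrink sigma, so
    detectability falls faster with distance. Coefficients are hypothetical."""
    sigma = math.exp(beta0 + beta1 * beaufort)
    return math.exp(-x * x / (2.0 * sigma * sigma))

calm = g(0.2, beaufort=0)   # detection probability at 0.2 km in calm seas
rough = g(0.2, beaufort=4)  # same distance, Beaufort 4
```

Mark-recapture distance sampling would go one step further and drop the assumption that g(0) = 1, which this toy model still makes.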
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is also proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike’s information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
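The thinned-point-process view described above can be sketched with a small simulation: generate a homogeneous Poisson process, then keep each point with a probability that decays with its distance to the transect. The half-normal thinning and the intensity values are illustrative assumptions, not the Dubbo estimates:

```python
import numpy as np

rng = np.random.default_rng(1)
lam, side, sigma = 5.0, 10.0, 0.5             # intensity, square side, detection scale

# Homogeneous Poisson process on a side x side square
n = rng.poisson(lam * side * side)
pts = rng.uniform(0, side, size=(n, 2))

# Thin by a half-normal detection probability of the perpendicular
# distance to a single vertical transect at x = 5
d = np.abs(pts[:, 0] - 5.0)
p_detect = np.exp(-d**2 / (2 * sigma**2))
detected = pts[rng.uniform(size=n) < p_detect]

# Back out intensity from the thinned process via the effective
# strip half-width mu = sigma * sqrt(pi/2), counting both sides of the line
mu = sigma * np.sqrt(np.pi / 2)
lam_hat = len(detected) / (2 * mu * side)
```

In the model-based approach the abstract describes, detection and intensity parameters are instead estimated jointly from the thinned-process likelihood, which is what allows covariates on intensity and opportunistic transect layouts.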
Abstract:
"How large a sample is needed to survey the bird damage to corn in a county in Ohio or New Jersey or South Dakota?" Like those in the Bureau of Sport Fisheries and Wildlife and the U.S.D.A. who have been faced with a question of this sort we found only meager information on which to base an answer, whether the problem related to a county in Ohio or to one in New Jersey, or elsewhere. Many sampling methods and rates of sampling did yield reliable estimates but the judgment was often intuitive or based on the reasonableness of the resulting data. Later, when planning the next study or survey, little additional information was available on whether 40 samples of 5 ears each or 5 samples of 200 ears should be examined, i.e., examination of a large number of small samples or a small number of large samples. What information is needed to make a reliable decision? Those of us involved with the Agricultural Experiment Station regional project concerned with the problems of bird damage to crops, known as NE-49, thought we might supply an ans¬wer if we had a corn field in which all the damage was measured. If all the damage were known, we could then sample this field in various ways and see how the estimates from these samplings compared to the actual damage and pin-point the best and most accurate sampling procedure. Eventually the investigators in four states became involved in this work1 and instead of one field we were able to broaden the geographical base by examining all the corn ears in 2 half-acre sections of fields in each state, 8 sections in all. When the corn had matured well past the dough stage, damage on each corn ear was assessed, without removing the ear from the stalk, by visually estimating the percent of the kernel surface which had been destroyed and rating it in one of 5 damage categories. 
Measurements (in row-centimeters) of the rows of kernels pecked by birds were also made on selected ears representing all categories and all parts of each field section. These measurements provided conversion factors that, when fed into a computer, were applied to the more than 72,000 visually assessed ears. The computer thus held in memory, and could supply on demand, a map showing each ear, its location, and the intensity of the damage.
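The "many small versus few large samples" question posed above can be explored with a toy simulation on a synthetic field where every ear's damage is known and damage is spatially clustered by row. The field and the damage model are invented for illustration, not the NE-49 data:

```python
import random
import statistics

random.seed(42)

# Synthetic field: 100 rows of 250 ears; some rows are damage hot spots,
# so damage is clustered (intraclass correlation within rows).
field = []
for _ in range(100):
    hot = random.random() < 0.2
    field.append([random.betavariate(2, 8) * (5 if hot else 1) for _ in range(250)])

true_mean = statistics.mean(x for row in field for x in row)

def cluster_sample(n_samples, ears_each):
    """Estimate mean damage from n_samples runs of consecutive ears,
    each run taken within one randomly chosen row."""
    values = []
    for _ in range(n_samples):
        row = random.choice(field)
        start = random.randrange(len(row) - ears_each + 1)
        values.extend(row[start:start + ears_each])
    return statistics.mean(values)

# The two designs from the abstract: 40 samples of 5 ears vs 5 samples of 200
many_small = [cluster_sample(40, 5) for _ in range(500)]
few_large = [cluster_sample(5, 200) for _ in range(500)]
sd_small = statistics.stdev(many_small)
sd_large = statistics.stdev(few_large)
```

With clustered damage, the many-small design touches more distinct rows per survey, so its estimates vary less even though it examines far fewer ears in total, which is exactly the kind of trade-off a fully mapped field lets one quantify.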