6 results for mistimed covariates
in DigitalCommons@University of Nebraska - Lincoln
Abstract:
We monitored the haul-out behavior of 68 radio-tagged harbor seals (Phoca vitulina) during the molt season at two Alaskan haul-out sites (Grand Island, August-September 1994; Nanvak Bay, August-September 2000). For each site, we created a statistical model of the proportion of seals hauled out as a function of date, time of day, tide, and weather covariates. Using these models, we identified the conditions that would result in the greatest proportion of seals hauled out. Although those “ideal conditions” differed between sites, the proportion of seals predicted to be hauled out under those conditions was very similar (81.3% for Grand Island and 85.7% for Nanvak Bay). The similar estimates for both sites suggest that haul-out proportions under locally ideal conditions may be constant between years and geographic regions, at least during the molt season.
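The kind of model described above, the proportion of animals hauled out as a function of covariates, can be sketched as a binomial (logistic) regression fitted by maximum likelihood. This is a minimal illustration on synthetic data, not the paper's actual model: the covariates (hour of day, tide height), coefficients, and the "ideal" condition are all invented for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Synthetic haul-out data: hour of day and tide height as covariates
# (hypothetical stand-ins for the paper's date/time/tide/weather terms).
n = 500
hour = rng.uniform(0, 24, n)
tide = rng.uniform(-1, 1, n)
X = np.column_stack([np.ones(n), hour, tide])
true_beta = np.array([0.5, 0.02, -1.0])
p = 1 / (1 + np.exp(-X @ true_beta))
hauled = rng.binomial(1, p)        # 1 = seal observed hauled out

def neg_log_lik(beta):
    eta = X @ beta
    # Bernoulli log-likelihood on the logit scale (numerically stable form).
    return -np.sum(hauled * eta - np.logaddexp(0.0, eta))

beta_hat = minimize(neg_log_lik, np.zeros(3), method="BFGS").x

# Predicted proportion hauled out under a chosen "ideal" condition
# (midday, low tide; purely illustrative values).
ideal = np.array([1.0, 13.0, -1.0])
p_ideal = 1 / (1 + np.exp(-ideal @ beta_hat))
print(round(p_ideal, 3))
```

Scanning predictions like `p_ideal` over a grid of covariate values is one way to locate the conditions that maximize the expected haul-out proportion.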
Abstract:
The abundance of harbor seals (Phoca vitulina richardii) has declined in recent decades at several Alaska locations. The causes of these declines are unknown, but there is concern about the status of the populations, especially in the Gulf of Alaska. To assess the status of harbor seals in the Gulf of Alaska, we conducted aerial surveys of seals on their haul-out sites in August-September 1996. Many factors influence the propensity of seals to haul out, including tides, weather, time of day, and time of year. Because these “covariates” cannot simultaneously be controlled through survey design, we used a regression model to adjust the counts to an estimate of the number of seals that would have been ashore during a hypothetical survey conducted under ideal conditions for hauling out. The regression, a generalized additive model, not only provided an adjustment for the covariates, but also confirmed the nature and shape of the covariate effects on haul-out behavior. The number of seals hauled out was greatest at the beginning of the surveys (mid-August). There was a broad daily peak from about 1100-1400 local solar time. The greatest numbers were hauled out at low tide on terrestrial sites. Tidal state made little difference in the numbers hauled out on glacial ice, where the area available to seals did not fluctuate with the tide. Adjusting the survey counts to the ideal state for each covariate produced an estimate of 30,035 seals, about 1.8 times the total of the unadjusted counts (16,355 seals). To the adjusted count, we applied a correction factor of 1.198 from a separate study of two haul-out sites elsewhere in Alaska, to produce a total abundance estimate of 35,981 (SE 1,833). This estimate accounts both for the effect of covariates on survey counts and for the proportion of seals that remained in the water even under ideal conditions for hauling out.
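The final adjustment step reduces to simple arithmetic on the totals reported in the abstract: the covariate-adjusted count times the haul-out correction factor gives the abundance estimate.

```python
# Values taken directly from the abstract above.
unadjusted_total = 16355     # raw survey count
adjusted_total = 30035       # GAM-adjusted to ideal haul-out conditions
correction_factor = 1.198    # seals remaining in the water under ideal conditions

print(round(adjusted_total / unadjusted_total, 1))   # 1.8, "about 1.8 times"
abundance = adjusted_total * correction_factor
print(round(abundance))      # 35982, matching the reported 35,981 up to rounding
```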
Abstract:
Environmental data are spatial, temporal, and often come with many zeros. In this paper, we included space–time random effects in zero-inflated Poisson (ZIP) and ‘hurdle’ models to investigate haul-out patterns of harbor seals on glacial ice. The data consisted of counts, for 18 dates on a lattice grid of samples, of harbor seals hauled out on glacial ice in Disenchantment Bay, near Yakutat, Alaska. A hurdle model is similar to a ZIP model except it does not mix zeros from the binary and count processes. Both models can be used for zero-inflated data, and we compared space–time ZIP and hurdle models in a Bayesian hierarchical model. Space–time ZIP and hurdle models were constructed by using spatial conditional autoregressive (CAR) models and temporal first-order autoregressive (AR(1)) models as random effects in ZIP and hurdle regression models. We created maps of smoothed predictions for harbor seal counts based on ice density, other covariates, and spatio-temporal random effects. For both models, predictions around the edges appeared to be positively biased. The linex loss function is an asymmetric loss function that penalizes overprediction more than underprediction, and we used it to correct for prediction bias and obtain the best map for the space–time ZIP and hurdle models.
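The ZIP/hurdle distinction and the linex asymmetry can be made concrete with their probability mass and loss functions. This is a minimal sketch with illustrative parameter values, not the paper's fitted Bayesian hierarchical model (which adds the CAR and AR(1) random effects on top of these pieces).

```python
import math

def zip_pmf(k, lam, pi):
    """Zero-inflated Poisson: zeros come from both the point mass and the Poisson."""
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    return pi + (1 - pi) * pois if k == 0 else (1 - pi) * pois

def hurdle_pmf(k, lam, pi):
    """Hurdle: all zeros come from the binary part; positives follow a
    zero-truncated Poisson, so the two processes are not mixed."""
    if k == 0:
        return pi
    pois = math.exp(-lam) * lam**k / math.factorial(k)
    return (1 - pi) * pois / (1 - math.exp(-lam))

def linex_loss(pred, truth, a=1.0):
    """Linex loss: for a > 0, overprediction costs more than underprediction."""
    d = pred - truth
    return math.exp(a * d) - a * d - 1

lam, pi = 2.0, 0.3
total_zip = sum(zip_pmf(k, lam, pi) for k in range(50))
total_hurdle = sum(hurdle_pmf(k, lam, pi) for k in range(50))
print(round(total_zip, 6), round(total_hurdle, 6))     # both sum to 1
print(linex_loss(12, 10) > linex_loss(8, 10))          # True: asymmetric penalty
```

Note that with the same `pi`, the ZIP model puts more mass at zero than the hurdle model, since its count process can also generate zeros.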
Abstract:
We develop spatial statistical models for stream networks that can estimate relationships between a response variable and other covariates, make predictions at unsampled locations, and predict an average or total for a stream or a stream segment. There have been very few attempts to develop valid spatial covariance models that incorporate flow, stream distance, or both. Typical spatial autocovariance functions based on Euclidean distance, such as the spherical covariance model, are not valid when using stream distance. In this paper we develop a large class of valid models that incorporate flow and stream distance by using spatial moving averages. These methods integrate a moving average function, or kernel, against a white noise process. By running the moving average function upstream from a location, we develop models that use flow, and by construction they are valid models based on stream distance. We show that with proper weighting, many of the usual spatial models based on Euclidean distance have a counterpart for stream networks. Using sulfate concentrations from an example data set, the Maryland Biological Stream Survey (MBSS), we show that models using flow may be more appropriate than models that only use stream distance. For the MBSS data set, we use restricted maximum likelihood to fit a valid covariance matrix that uses flow and stream distance, and then we use this covariance matrix to estimate fixed effects and make kriging and block kriging predictions.
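A characteristic consequence of running the moving average upstream is that two sites whose kernels share no water (flow-unconnected sites) get zero covariance, while flow-connected sites get a covariance that decays with stream distance. The toy sketch below illustrates this on a Y-shaped network with an exponential form and flow-based weights; the distances, weights, and range parameter are all illustrative assumptions, not values from the MBSS analysis.

```python
import numpy as np

def flow_cov(h, connected, w, sigma2=1.0, alpha=10.0):
    """Illustrative exponential covariance for stream networks: nonzero only
    for flow-connected pairs, scaled by a flow-based weight w."""
    return sigma2 * w * np.exp(-h / alpha) if connected else 0.0

# Toy Y-shaped network: sites A and B sit on two upstream branches that
# join above site C. A-C and B-C are flow-connected; A-B is not.
sites = ["A", "B", "C"]
idx = {s: i for i, s in enumerate(sites)}
dist = {("A", "C"): 5.0, ("B", "C"): 7.0, ("A", "B"): 12.0}  # stream distances
conn = {("A", "C"): True, ("B", "C"): True, ("A", "B"): False}
wgt = {("A", "C"): 0.6 ** 0.5, ("B", "C"): 0.4 ** 0.5, ("A", "B"): 0.0}

C = np.eye(3)
for pair in dist:
    a, b = pair
    c = flow_cov(dist[pair], conn[pair], wgt[pair])
    C[idx[a], idx[b]] = C[idx[b], idx[a]] = c

# The flow-unconnected pair has zero covariance, yet the matrix remains
# positive definite, i.e. a valid covariance matrix.
print(C[idx["A"], idx["B"]], bool(np.all(np.linalg.eigvalsh(C) > 0)))
```

A covariance matrix like `C`, fitted by REML as in the abstract, is what feeds the fixed-effect estimates and kriging predictions.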
Abstract:
1. Distance sampling is a widely used technique for estimating the size or density of biological populations. Many distance sampling designs and most analyses use the software Distance. 2. We briefly review distance sampling and its assumptions, outline the history, structure and capabilities of Distance, and provide hints on its use. 3. Good survey design is a crucial prerequisite for obtaining reliable results. Distance has a survey design engine, with a built-in geographic information system, that allows properties of different proposed designs to be examined via simulation, and survey plans to be generated. 4. A first step in analysis of distance sampling data is modeling the probability of detection. Distance contains three increasingly sophisticated analysis engines for this: conventional distance sampling, which models detection probability as a function of distance from the transect and assumes all objects at zero distance are detected; multiple-covariate distance sampling, which allows covariates in addition to distance; and mark–recapture distance sampling, which relaxes the assumption of certain detection at zero distance. 5. All three engines allow estimation of density or abundance, stratified if required, with associated measures of precision calculated either analytically or via the bootstrap. 6. Advanced analysis topics covered include the use of multipliers to allow analysis of indirect surveys (such as dung or nest surveys), the density surface modeling analysis engine for spatial and habitat-modeling, and information about accessing the analysis engines directly from other software. 7. Synthesis and applications. Distance sampling is a key method for producing abundance and density estimates in challenging field conditions. The theory underlying the methods continues to expand to cope with realistic estimation situations. 
In step with theoretical developments, state-of-the-art software that implements these methods is described, making them accessible to practicing ecologists.
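The conventional distance sampling analysis described in point 4, fitting a detection function to perpendicular distances and converting it into an effective strip half-width and a density estimate, can be sketched with a half-normal model on simulated data. This is not the Distance software itself; the half-normal form, truncation distance, and transect length below are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import norm

rng = np.random.default_rng(1)

# Simulated perpendicular distances from a half-normal detection function
# g(x) = exp(-x^2 / (2 sigma^2)), which assumes certain detection on the
# line (g(0) = 1). sigma_true, w, and L are illustrative values.
sigma_true, w = 20.0, 60.0                      # w = truncation distance
x = np.abs(rng.normal(0.0, sigma_true, 2000))
x = x[x <= w]

def neg_log_lik(sigma):
    # Half-normal density for perpendicular distances, truncated at w.
    f = 2.0 * norm.pdf(x, 0.0, sigma) / (2.0 * norm.cdf(w, 0.0, sigma) - 1.0)
    return -np.sum(np.log(f))

sigma_hat = minimize_scalar(neg_log_lik, bounds=(1.0, 100.0), method="bounded").x

# Effective strip half-width: the integral of g(x) from 0 to w.
mu = np.sqrt(2.0 * np.pi) * sigma_hat * (norm.cdf(w, 0.0, sigma_hat) - 0.5)
L = 10_000.0                                    # total transect length
D = len(x) / (2.0 * mu * L)                     # estimated density
print(round(sigma_hat, 1))
```

The multiple-covariate and mark-recapture engines mentioned in the abstract generalize this same pipeline by letting `sigma` depend on covariates, or by relaxing the `g(0) = 1` assumption.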
Abstract:
We consider a fully model-based approach for the analysis of distance sampling data. Distance sampling has been widely used to estimate abundance (or density) of animals or plants in a spatially explicit study area. There is, however, no readily available method of making statistical inference on the relationships between abundance and environmental covariates. Spatial Poisson process likelihoods can be used to simultaneously estimate detection and intensity parameters by modeling distance sampling data as a thinned spatial point process. A model-based spatial approach to distance sampling data has three main benefits: it allows complex and opportunistic transect designs to be employed, it allows estimation of abundance in small subregions, and it provides a framework to assess the effects of habitat or experimental manipulation on density. We demonstrate the model-based methodology with a small simulation study and an analysis of the Dubbo weed data set. In addition, a simple ad hoc method for handling overdispersion is proposed. The simulation study showed that the model-based approach compared favorably to conventional distance sampling methods for abundance estimation. In addition, the overdispersion correction performed adequately when the number of transects was high. Analysis of the Dubbo data set indicated a transect effect on abundance via Akaike’s information criterion model selection. Further goodness-of-fit analysis, however, indicated some potential confounding of intensity with the detection function.
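The "thinned spatial point process" view can be sketched directly: simulate objects as a homogeneous Poisson process, then keep each point independently with a detection probability that decays with distance from the transect. The detected points then form a Poisson process with intensity equal to the true intensity times the detection function. All parameter values below are illustrative assumptions, not the Dubbo analysis.

```python
import numpy as np

rng = np.random.default_rng(2)

# Homogeneous Poisson process of objects in a rectangular study region;
# a single transect runs along the line y = 50 (illustrative setup).
lam = 0.02                       # true intensity (objects per unit area)
Lx, Ly = 1000.0, 100.0
n = rng.poisson(lam * Lx * Ly)
xs = rng.uniform(0, Lx, n)
ys = rng.uniform(0, Ly, n)

# Half-normal detection probability as a function of perpendicular distance.
sigma = 15.0
g = np.exp(-((ys - 50.0) ** 2) / (2 * sigma**2))
detected = rng.uniform(size=n) < g               # independent thinning

# Detected points are Poisson with intensity lam * g(y), so the expected
# count is approximately lam * Lx * (integral of g over y).
expected = lam * Lx * np.sqrt(2 * np.pi) * sigma
print(detected.sum(), round(expected))
```

Fitting `lam`, detection parameters, and covariate effects jointly to the detected points is what the spatial Poisson process likelihood in the abstract accomplishes.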