13 results for Forecast error variance

in Aquatic Commons


Relevance:

80.00%

Publisher:

Abstract:

Quantifying scientific uncertainty when setting total allowable catch limits for fish stocks is a major challenge, but it has been a requirement in the United States since changes to national fisheries legislation. Multiple sources of error are readily identifiable, including estimation error, model specification error, forecast error, and errors associated with the definition and estimation of reference points. Our focus here, however, is to quantify the influence of estimation error and model specification error on assessment outcomes. These are fundamental sources of uncertainty in developing scientific advice concerning appropriate catch levels, and although a study of these two factors may not be exhaustive, it is feasible with available information. For data-rich stock assessments conducted on the U.S. west coast we report approximate coefficients of variation in terminal biomass estimates, derived from inversion of each assessment model's Hessian matrix (i.e., the asymptotic standard error). To summarize variation “among” stock assessments, as a proxy for model specification error, we characterize variation among multiple historical assessments of the same stock. Results indicate that for 17 groundfish and coastal pelagic species, the mean coefficient of variation of terminal biomass is 18%. In contrast, the coefficient of variation ascribable to model specification error (i.e., pooled among-assessment variation) is 37%. We show that if managers adopt a precautionary probability of overfishing equal to 0.40 and only model specification error is considered, a 9% reduction in the overfishing catch level is indicated.
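The size of the reported catch reduction can be reproduced with a standard lognormal P* buffer calculation. The sketch below is our own illustration of that arithmetic under an assumed lognormal error distribution, not code from the study itself.

```python
# Minimal sketch (not from the study): lognormal P* buffer arithmetic.
import numpy as np
from scipy.stats import norm

def pstar_catch_fraction(cv, pstar=0.40):
    """Fraction of the overfishing limit that can be taken so that the
    probability of overfishing equals pstar, assuming lognormal error."""
    sigma = np.sqrt(np.log(1.0 + cv**2))    # lognormal sigma implied by a CV
    return np.exp(norm.ppf(pstar) * sigma)  # buffer multiplier, < 1 for pstar < 0.5

# With the pooled among-assessment CV of 37% and P* = 0.40:
print(pstar_catch_fraction(0.37))  # ~0.91, i.e. roughly a 9% reduction
```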

Relevance:

30.00%

Publisher:

Abstract:

We have formulated a model for analyzing the measurement error in marine survey abundance estimates by using data from parallel surveys (trawl haul or acoustic measurement). The measurement error is defined as the component of the variability that cannot be explained by covariates such as temperature, depth, bottom type, etc. The method presented is general, but we concentrate on bottom trawl catches of cod (Gadus morhua). Catches of cod from 10 parallel trawling experiments in the Barents Sea with a total of 130 paired hauls were used to estimate the measurement error in trawl hauls. Based on the experimental data, the measurement error is fairly constant in size on the logarithmic scale and is independent of location, time, and fish density. Compared with the total variability of the winter and autumn surveys in the Barents Sea, the measurement error is small (approximately 2–5%, on the log scale, in terms of variance of catch per towed distance). Thus, the cod catch rate is a fairly precise measure of fish density at a given site at a given time.
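As a rough sketch of how parallel hauls isolate measurement error (ignoring the covariate adjustments described above), the shared log density cancels within each pair, so the paired log-catch difference carries twice the per-haul error variance. Variable names below are our assumptions.

```python
import numpy as np

def log_scale_error_variance(c1, c2):
    """Per-haul measurement-error variance on the log scale, estimated
    from paired catches c1, c2 (e.g. catch per towed distance).
    Assumes each log catch = shared log density + independent error."""
    d = np.log(c1) - np.log(c2)      # the shared true density cancels
    return np.var(d, ddof=1) / 2.0   # Var(d) = 2 * per-haul error variance
```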

Relevance:

20.00%

Publisher:

Abstract:

Longline hook rates of bigeye and yellowfin tunas in the eastern Pacific Ocean were standardized by maximum depth of fishing, area, and season, using generalized linear models (GLMs). The annual trends of the standardized hook rates differ from the unstandardized trends and are more likely to represent changes in abundance of the age groups most vulnerable to longliners in the fishing grounds. For both species, all of the interactions in the GLMs involving year, depth of fishing, area, and season were significant, meaning that the annual trends in hook rates depend on which depths, areas, and seasons are considered. The overall average hook rates for each species were estimated by weighting each 5-degree quadrangle equally and each season by the number of months in it. Since the annual trends in hook rates for each fishing-depth category are roughly the same for bigeye, total average annual hook rate estimates are possible with the GLM. For yellowfin, the situation is less clear because of a preponderance of empty cells in the model. The full models explained 55% of the variation in bigeye hook rate and 33% of that of yellowfin. (PDF contains 19 pages.)
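A sketch of this kind of GLM standardization in Python is shown below; the file, the column names, and the Gamma/log-link error structure are our assumptions, not the authors' exact specification (their significant interactions would add terms such as C(year):C(depth_cat)).

```python
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

sets = pd.read_csv("longline_sets.csv")  # hypothetical: one row per longline set
fit = smf.glm(
    "hook_rate ~ C(year) + C(depth_cat) + C(area) + C(season)",
    data=sets,
    family=sm.families.Gamma(link=sm.families.links.Log()),
).fit()
# The year coefficients give the standardized annual trend with depth,
# area, and season held fixed.
print(fit.params.filter(like="year"))
```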

Relevance:

20.00%

Publisher:

Abstract:

High suspended sediment loads may be deleterious to adult salmonids and invertebrates in gravel-bedded streams. Furthermore, the accumulation of fine material in the interstices of the gravel may have an adverse impact on the recruitment of the young stages of salmonids. It is therefore important not only to quantify the rates and degrees of silting but also to identify sediment sources and to determine both the frequency of sediment inputs to the system and the duration of high sediment concentrations. This report explores the application of variance spectrum analysis to the isolation of sediment periodicities. The river chosen for examination is the regulated River Tees in Northern England, where variance spectrum analysis was applied to a series of over 4000 paired daily turbidity and discharge readings. For this river, the method demonstrated the essentially undisturbed nature of the catchment.
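A minimal sketch of such a spectral screen in Python (hypothetical input file; the original analysis predates these tools): peaks in the periodogram of the daily turbidity series would flag periodic sediment inputs, while a flat spectrum is consistent with an undisturbed catchment.

```python
import numpy as np
from scipy.signal import periodogram

turbidity = np.loadtxt("daily_turbidity.txt")          # hypothetical daily series
freqs, power = periodogram(np.log(turbidity), fs=1.0)  # frequencies in cycles/day
mask = freqs > 0                                       # drop the zero frequency
top = freqs[mask][np.argsort(power[mask])[-5:]]
print("Dominant periods (days):", np.sort(1.0 / top))
```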

Relevance:

20.00%

Publisher:

Abstract:

Research on assessment and monitoring methods has primarily focused on fisheries with long multivariate data sets. Less research exists on methods applicable to data-poor fisheries with univariate data sets of small sample size. In this study, we examine the capabilities of seasonal autoregressive integrated moving average (SARIMA) models to fit, forecast, and monitor the landings of such data-poor fisheries. We use a European fishery on meagre (Sciaenidae: Argyrosomus regius), for which only a short time series of landings was available to model (n=60 months), as our case study. We show that despite the limited sample size, a SARIMA model could be found that adequately fitted and forecasted the time series of meagre landings (12-month forecasts; mean error: 3.5 tons (t); annual absolute percentage error: 15.4%). We derive model-based prediction intervals and show how they can be used to detect problematic situations in the fishery. Our results indicate that over the course of one year the meagre landings remained within the prediction limits of the model and therefore indicated no need for urgent management intervention. We discuss the information that the SARIMA model structure conveys about the meagre life cycle and fishery, the methodological requirements of SARIMA forecasting of data-poor fisheries landings, and the potential of SARIMA models within current efforts to monitor the world's data-poorest resources.
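A sketch of this workflow with statsmodels is shown below; the file name and the particular (p,d,q)(P,D,Q)12 order are placeholders, since the paper would have selected its order by identification and diagnostics on the 60-month series.

```python
import pandas as pd
from statsmodels.tsa.statespace.sarimax import SARIMAX

landings = pd.read_csv("meagre_landings.csv",          # hypothetical monthly file
                       index_col=0, parse_dates=True)["tons"]
fit = SARIMAX(landings, order=(1, 0, 0),
              seasonal_order=(0, 1, 1, 12)).fit(disp=False)
forecast = fit.get_forecast(steps=12)                  # 12-month forecast
print(forecast.predicted_mean)
print(forecast.conf_int(alpha=0.05))  # prediction intervals for monitoring:
                                      # landings outside them flag the fishery
```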

Relevance:

20.00%

Publisher:

Abstract:

New technologies can be riddled with unforeseen sources of error, jeopardizing their validity and adoption. Bioelectrical impedance analysis (BIA) is a new technology in fisheries research that is capable of estimating proximate composition, condition, and energy content in fish quickly, cheaply, and (after calibration) without the need to sacrifice fish. Before BIA can be widely accepted in fisheries science, it is necessary to identify its sources of error and determine how to minimize them. We conducted controlled laboratory experiments to identify sources of error within BIA measurements. We concluded that electrode needle location, procedural deviations, user experience, time after death, and temperature can affect resistance and reactance measurements. Sensitivity analyses showed that errors in predictive estimates of composition can be large (>50%) when these errors occur. Adherence to a strict protocol can help avoid these sources of error and provide BIA estimates that are both accurate and precise in a field or laboratory setting.

Relevance:

20.00%

Publisher:

Abstract:

Body-size measurement errors are usually ignored in stock assessments but may be important when body-size data (e.g., from visual surveys) are imprecise. We used experiments and models to quantify measurement errors and their effects on assessment models for sea scallops (Placopecten magellanicus). Errors in size data obscured modes from strong year classes and increased the frequency and size of the largest and smallest size classes, potentially biasing growth, mortality, and biomass estimates. Modeling techniques developed for errors in age data proved useful for errors in size data. In terms of goodness of model fit to the assessment data, it was more important to accommodate variance than bias, and models that accommodated size errors fitted the size data substantially better. We recommend experimental quantification of errors along with a modeling approach that accommodates measurement errors, because a direct algebraic approach was not robust and because error parameters were difficult to estimate in our assessment model. The importance of measurement errors depends on many factors and should be evaluated on a case-by-case basis.
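The smearing effect is easy to demonstrate: in the toy simulation below (illustrative numbers, not the scallop data), adding normal measurement error to a bimodal size composition fills the trough between year-class modes and stretches both tails.

```python
import numpy as np

rng = np.random.default_rng(0)
# Two strong year classes at 80 mm and 110 mm (illustrative values)
true_sizes = np.concatenate([rng.normal(80, 5, 5000), rng.normal(110, 6, 5000)])
observed = true_sizes + rng.normal(0, 8, true_sizes.size)  # measurement error
bins = np.arange(40, 161, 5)
for label, x in [("true", true_sizes), ("observed", observed)]:
    counts, _ = np.histogram(x, bins=bins)
    trough = counts[(bins[:-1] >= 90) & (bins[:-1] < 100)].sum()
    print(f"{label}: range {x.min():.0f}-{x.max():.0f} mm, "
          f"fish counted between the modes: {trough}")
```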

Relevance:

20.00%

Publisher:

Abstract:

Abundance indices derived from fishery-independent surveys typically exhibit much higher interannual variability than is consistent with the within-survey variance or the life history of a species. This extra variability is essentially observation noise (i.e., measurement error); it probably reflects environmentally driven factors that affect catchability over time. Unfortunately, high observation noise reduces the ability to detect important changes in the underlying population abundance. In our study, we investigate a noise-reduction technique for uncorrelated observation noise based on autoregressive integrated moving average (ARIMA) time series modeling. The approach is applied to 18 time series of finfish abundance derived from trawl survey data from the U.S. northeast continental shelf. Although the a priori assumption of a random-walk-plus-uncorrelated-noise model generally yielded a smoothed result that is pleasing to the eye, we recommend that the most appropriate ARIMA model be identified for the observed time series if the smoothed series will be used for further analysis of the population dynamics of a species.
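The random-walk-plus-uncorrelated-noise model is the local-level structural model (equivalent to ARIMA(0,1,1)), so the smoothing step can be sketched as below; the input file is hypothetical.

```python
import pandas as pd
from statsmodels.tsa.statespace.structural import UnobservedComponents

index = pd.read_csv("survey_index.csv", index_col=0)["abundance"]  # hypothetical
fit = UnobservedComponents(index, level="local level").fit(disp=False)
smoothed = fit.smoothed_state[0]  # level with observation noise filtered out
```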

Relevance:

20.00%

Publisher:

Abstract:

We assayed allelic variation at 19 nuclear-encoded microsatellites among 1622 Gulf red snapper (Lutjanus campechanus) sampled from the 1995 and 1997 cohorts at each of three offshore localities in the northern Gulf of Mexico (Gulf). Localities represented western, central, and eastern subregions within the northern Gulf. The number of alleles per microsatellite per sample ranged from four to 23, and gene diversity ranged from 0.170 to 0.917. Tests of conformity to Hardy-Weinberg equilibrium expectations and of genotypic equilibrium between pairs of microsatellites were generally nonsignificant following Bonferroni correction. Significant genic or genotypic heterogeneity (or both) among samples was detected at four microsatellites and over all microsatellites. Levels of divergence among samples were low (FST ≤0.001). Pairwise exact tests revealed that six of seven “significant” comparisons involved temporal rather than spatial heterogeneity. Contemporaneous or variance effective size (NeV) was estimated from the temporal variance in allele frequencies by using a maximum-likelihood method. Estimates of NeV ranged between 1098 and >75,000 and differed significantly among localities; the NeV estimate for the sample from the north-central Gulf was >60 times as large as the estimates for the other two localities. The differences in variance effective size could reflect differences in the number of individuals reproducing successfully, differences in patterns and intensity of immigration, or both, and are consistent with the hypothesis, supported by life-history data, that different “demographic stocks” of red snapper are found in the northern Gulf. Estimates of NeV for red snapper in the northern Gulf were at least three orders of magnitude lower than current estimates of census size (N). The ratio of effective to census size (Ne/N) is far below that expected in an ideal population and may reflect high variance in individual reproductive success, high temporal and spatial variance in productivity among subregions, or a combination of the two.
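As a simpler stand-in for the maximum-likelihood method used in this study, the classical moment-based temporal estimator (Waples 1989, using Nei and Tajima's Fc) illustrates how NeV is obtained from the temporal variance in allele frequencies; the function below is our illustration only.

```python
import numpy as np

def temporal_ne(p0, pt, s0, st, t):
    """Moment-based temporal estimate of variance effective size.
    p0, pt: arrays of allele frequencies at the two sampling times;
    s0, st: numbers of individuals sampled; t: elapsed generations."""
    fc = np.mean((p0 - pt) ** 2 / ((p0 + pt) / 2.0 - p0 * pt))  # Nei-Tajima Fc
    return t / (2.0 * (fc - 1.0 / (2 * s0) - 1.0 / (2 * st)))   # Waples (1989)
```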

Relevance:

20.00%

Publisher:

Abstract:

We report a Monte Carlo representation of the long-term interannual variability of monthly snowfall on a detailed (1 km) grid of points throughout the Southwest. An extension of the local climate model of the southwestern United States (Stamm and Craig 1992) provides spatially based estimates of the mean and variance of monthly temperature and precipitation. The mean is the expected value from a canonical regression using independent variables that represent controls on climate in this area, including orography. The variance is computed as the standard error of the prediction and provides site-specific measures of (1) natural sources of variation and (2) errors due to limitations of the data and poor distribution of climate stations. Simulation of monthly temperature and precipitation over a sequence of years is achieved by drawing from a bivariate normal distribution. The conditional expectation of precipitation, given temperature in each month, is the basis of a numerical integration of the normal probability distribution of log precipitation below a threshold temperature (3°C) to determine snowfall as a percentage of total precipitation. Snowfall predictions are tested at stations for which long-term records are available. At Donner Memorial State Park (elevation 1811 meters), a 34-year simulation, matching the length of the instrumental record, is within 15 percent of the observed mean annual snowfall. We also compute the resulting snowpack using a variation of the model of Martinec et al. (1983), which allows additional tests by examining spatial patterns of predicted snowfall and snowpack and their hydrologic implications.
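The simulation step can be sketched as a draw from a bivariate normal followed by the snow/rain split at the 3°C threshold; all parameter values below are illustrative, not taken from the local climate model.

```python
import numpy as np

rng = np.random.default_rng(1)

def monthly_snow_fraction(mu_t, sd_t, mu_logp, sd_logp, rho, thresh=3.0, n=100000):
    """Monte Carlo sketch: draw (temperature, log precipitation) from a
    bivariate normal; snowfall is the share of total precipitation that
    falls in draws with temperature below the threshold."""
    cov = [[sd_t**2, rho * sd_t * sd_logp],
           [rho * sd_t * sd_logp, sd_logp**2]]
    t, logp = rng.multivariate_normal([mu_t, mu_logp], cov, size=n).T
    p = np.exp(logp)
    return p[t < thresh].sum() / p.sum()

print(monthly_snow_fraction(mu_t=1.0, sd_t=3.0, mu_logp=1.5, sd_logp=0.6, rho=-0.3))
```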

Relevance:

20.00%

Publisher:

Abstract:

The number of fishing trials required for comparing the efficiency of fishing gears was investigated. A unique solution to this problem did not appear to exist because of the heterogeneity of the experimental material. Sequential experimentation and analysis were found to be a practical approach: the experiment can be terminated after about 35 days of fishing, once the standard error per unit, expressed as a percentage of the mean, falls to about 30% or less (after logarithmic transformation). For data with mean catches of less than 1.5 kg, the analysis of variance approach does not appear to be meaningful.
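One reading of the stopping criterion, sketched below with an assumed log(x+1) transformation: after each day's fishing, compute the standard error as a percentage of the mean on the transformed scale and stop once it falls to about 30% or less.

```python
import numpy as np

def relative_se_percent(catches):
    """Standard error as a percentage of the mean, computed after a
    logarithmic transformation (log(x+1) assumed, to handle zero catches)."""
    x = np.log1p(np.asarray(catches, dtype=float))
    return 100.0 * x.std(ddof=1) / np.sqrt(x.size) / x.mean()

# Sequential rule: keep fishing until relative_se_percent(catches) <= 30.
```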

Relevance:

20.00%

Publisher:

Abstract:

A brief description is given of a program for carrying out two-way classification analysis of variance on the MICRO 2200, for use in fishery data processing.
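The MICRO 2200 program itself is not reproduced in the abstract; a modern equivalent of a two-way classification ANOVA (hypothetical file and column names) would be:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

catch = pd.read_csv("catch_data.csv")  # hypothetical: catch, gear, station
fit = smf.ols("catch ~ C(gear) + C(station)", data=catch).fit()
print(anova_lm(fit, typ=2))            # two-way classification ANOVA table
```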

Relevance:

20.00%

Publisher:

Abstract:

To bring out the relative efficiency of various types of fishing gear in the analysis of catch data, a combination of Tukey's test, consequent transformation, and graphical analysis for outlier elimination is introduced, which can be used to advantage before applying ANOVA techniques. Application of these procedures to actual sets of data showed that nonadditivity in the data was caused by the presence of outliers, the absence of a suitable transformation, or both. As a corollary, the concurrent model $X_{ij} = \mu + \alpha_i + \beta_j + \lambda \alpha_i \beta_j + E_{ij}$ adequately fits the data.
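A sketch of Tukey's one-degree-of-freedom test for nonadditivity, whose single-parameter interaction term is exactly the $\lambda \alpha_i \beta_j$ component of the concurrent model above (our own illustration, not the paper's code):

```python
import numpy as np

def tukey_nonadditivity_f(x):
    """Tukey's one-degree-of-freedom F statistic for nonadditivity in a
    two-way table x (rows x columns, one observation per cell).
    Compare with F(1, (r-1)*(c-1)-1)."""
    r, c = x.shape
    m = x.mean()
    a = x.mean(axis=1) - m                  # row effects alpha_i
    b = x.mean(axis=0) - m                  # column effects beta_j
    ss_nonadd = (a @ x @ b) ** 2 / ((a**2).sum() * (b**2).sum())
    resid = x - m - a[:, None] - b[None, :]
    ss_resid = (resid**2).sum()
    return ss_nonadd / ((ss_resid - ss_nonadd) / ((r - 1) * (c - 1) - 1))
```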