Abstract:
We experimentally demonstrate for the first time 1.55 μm vertical-cavity surface-emitting laser (VCSEL) transmission over 6.5 km of single-mode fiber (SMF) at 20 Gb/s for optical access networks. Device characterization is also presented. © 2009 IEEE.
Abstract:
Seasonal trawling was conducted randomly in coastal (depths of 4.6–17 m) waters from St. Augustine, Florida, (29.9°N) to Winyah Bay, South Carolina (33.1°N), during 2000–03, 2008–09, and 2011 to assess annual trends in the relative abundance of sea turtles. A total of 1262 loggerhead sea turtles (Caretta caretta) were captured in 23% (951) of 4207 sampling events. Capture rates (overall and among prevalent 5-cm size classes) were analyzed through the use of a generalized linear model with log link function for the 4097 events that had complete observations for all 25 model parameters. Final models explained 6.6% (70.1–75.0 cm minimum straight-line carapace length [SCLmin]) to 14.9% (75.1–80.0 cm SCLmin) of deviance in the data set. Sampling year, geographic subregion, and distance from shore were retained as significant terms in all final models, and these terms collectively accounted for 6.2% of overall model deviance (range: 4.5–11.7% of variance among 5-cm size classes). We retained 18 parameters only in a subset of final models: 4 as exclusively significant terms, 5 as a mixture of significant or nonsignificant terms, and 9 as exclusively nonsignificant terms. Four parameters also were dropped completely from all final models. The generalized linear model proved appropriate for monitoring trends for this data set that was laden with zero values for catches and was compiled for a globally protected species. Because we could not account for much model deviance, metrics other than those examined in our study may better explain catch variability and, once elucidated, their inclusion in the generalized linear model should improve model fits.
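The log-link generalized linear model used above can be sketched in miniature. This is a hypothetical illustration, not the authors' 25-parameter model: a single-covariate Poisson GLM with log link, fitted by Newton's method, with the covariate standing in for something like distance from shore and the capture counts invented for the example.

```python
import math

def fit_poisson_glm(x, y, iters=25):
    """Fit E[y] = exp(b0 + b1*x) by Newton's method on the Poisson log-likelihood."""
    b0 = math.log(sum(y) / len(y))  # start at the intercept-only fit
    b1 = 0.0
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for xi, yi in zip(x, y):
            mu = math.exp(b0 + b1 * xi)
            g0 += yi - mu           # gradient w.r.t. b0
            g1 += (yi - mu) * xi    # gradient w.r.t. b1
            h00 += mu               # Fisher information entries
            h01 += mu * xi
            h11 += mu * xi * xi
        det = h00 * h11 - h01 * h01
        b0 += (h11 * g0 - h01 * g1) / det   # Newton step, 2x2 solve
        b1 += (h00 * g1 - h01 * g0) / det
    return b0, b1

# Hypothetical capture counts rising with a covariate (e.g. distance from shore):
x = [0, 1, 2, 3, 4, 5]
y = [1, 2, 3, 5, 8, 12]
b0, b1 = fit_poisson_glm(x, y)
```

On the log-link scale each unit of the covariate multiplies the expected catch by exp(b1), which is why this family suits count data laden with zeros: the fitted mean stays strictly positive even when most sampling events catch nothing.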
Abstract:
Gulf of Mexico white shrimp (Litopenaeus setiferus) catch statistics have been collected by NOAA's National Marine Fisheries Service for over 50 years. Recent occurrences such as natural and manmade disasters have raised awareness of the need to publish these types of data. Here we report shrimp data collected from 1984 to 2011. These 28 years of catch history are the time series used in the most recent Gulf of Mexico white shrimp stock assessment. Fishing effort for this stock has fluctuated over the period reported, ranging from 54,675 to 162,952 days fished. Catch averaged 55.7 million pounds per year, increasing significantly over the time series. In addition, catch rates have been increasing in recent years, with CPUE levels ranging from 315 lb/day fished in 2002 to 1,175 lb/day fished in 2008. The high CPUEs we have measured are one indication that the stock was not in decline during this time period. Consequently, we believe the decline in effort levels is due purely to economic factors. Current stock assessments are now using these baseline data to provide managers with further insights into the Gulf L. setiferus stocks.
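Catch per unit effort (CPUE) figures like those above are a simple ratio of landings to effort. A minimal sketch; the annual catch and days-fished totals below are invented for illustration (only the resulting CPUE endpoints match the abstract), not the assessment's actual figures:

```python
def cpue(catch_lb: float, days_fished: float) -> float:
    """Catch per unit effort, in pounds per day fished."""
    return catch_lb / days_fished

# Hypothetical annual totals chosen to reproduce the quoted CPUE endpoints:
years = {2002: (31_500_000, 100_000), 2008: (70_500_000, 60_000)}
rates = {yr: cpue(c, d) for yr, (c, d) in years.items()}
```

Rising CPUE alongside falling days fished is the pattern that led the authors to attribute the effort decline to economics rather than to stock status.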
Abstract:
The recently revised Magnuson–Stevens Fishery Conservation and Management Act requires that U.S. fishery management councils avoid overfishing by setting annual catch limits (ACLs) not exceeding recommendations of the councils' scientific advisers. To meet that requirement, the scientific advisers will need to know the overfishing limit (OFL) estimated in each stock assessment, with OFL being the catch available from applying the limit fishing mortality rate to current or projected stock biomass. The advisers then will derive "acceptable biological catch" (ABC) from OFL by reducing OFL to allow for scientific uncertainty, and ABC becomes their recommendation to the council. We suggest methodology based on simple probability theory by which scientific advisers can compute ABC from OFL and the statistical distribution of OFL as estimated by a stock assessment. Our method includes approximations to the distribution of OFL if it is not known from the assessment; however, we find it preferable to have the assessment model estimate the distribution of OFL directly. Probability-based methods such as this one provide well-defined approaches to setting ABC and may be helpful to scientific advisers as they translate the new legal requirement into concrete advice.
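One common probability-based recipe of the kind the abstract describes treats ABC as a lower quantile (a "P*" percentile) of the OFL distribution. A sketch under the assumption that OFL is lognormally distributed with a known median and CV; the P* value and numbers are illustrative, not a management recommendation:

```python
import math
from statistics import NormalDist

def abc_from_ofl(ofl_median: float, cv: float, p_star: float) -> float:
    """P*-quantile of a lognormal OFL distribution with the given median and CV."""
    sigma = math.sqrt(math.log(1.0 + cv * cv))   # lognormal shape parameter from CV
    z = NormalDist().inv_cdf(p_star)             # standard-normal quantile
    return ofl_median * math.exp(z * sigma)

# Illustrative numbers: 1000 t median OFL, 30% CV, 40% acceptable risk of overfishing.
abc = abc_from_ofl(1000.0, 0.3, 0.40)
```

For any P* below 0.5 the buffer is automatic: the more uncertain the assessment (larger CV), the further ABC sits below OFL, which is exactly the "reduce OFL to allow for scientific uncertainty" step in the text.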
Abstract:
Concerns over climate change mean engineers need to understand the greenhouse gas emissions associated with infrastructure projects. Standard coefficients are increasingly used to calculate the embodied emissions of construction materials, but these are not generally appropriate to inherently variable earthworks. This paper describes a new tool that takes a bottom-up approach to calculating carbon dioxide emissions from earthworks operations. In the case of bulk earthworks this is predominantly from the fuel used by machinery moving materials already on site. Typical earthworks solutions are explored along with the impact of using manufactured materials such as lime.
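The bottom-up logic described, with emissions driven by machinery fuel burn rather than material coefficients, reduces to plain arithmetic. All rates below are hypothetical placeholders, not figures from the tool; the diesel factor of roughly 2.68 kg CO2 per litre is a commonly quoted value but should be checked against current guidance:

```python
def earthworks_co2_kg(volume_m3: float,
                      plant_rate_m3_per_h: float,
                      fuel_l_per_h: float,
                      kg_co2_per_litre: float = 2.68) -> float:
    """Bottom-up CO2 estimate: volume moved -> machine hours -> fuel -> emissions."""
    hours = volume_m3 / plant_rate_m3_per_h
    litres = hours * fuel_l_per_h
    return litres * kg_co2_per_litre

# Hypothetical bulk-earthworks task: 10,000 m3 moved at 200 m3/h, burning 30 L/h.
emissions = earthworks_co2_kg(10_000, 200, 30)
```

Substituting manufactured materials such as lime changes the answer mainly by adding an embodied-emissions term per tonne of material on top of this on-site fuel term.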
Abstract:
Our analyses of observer records reveal that abundance estimates are strongly influenced by the timing of longline operations in relation to dawn and dusk and soak time, the amount of time that baited hooks are available in the water. Catch data will underestimate the total mortality of several species because hooked animals are "lost at sea." They fall off, are removed, or escape from the hook before the longline is retrieved. For example, longline segments with soak times of 20 hours were retrieved with fewer skipjack tuna and seabirds than segments with soak times of 5 hours. The mortality of some seabird species is up to 45% higher than previously estimated. The effects of soak time and timing vary considerably between species. Soak time and exposure to dusk periods have strong positive effects on the catch rates of many species. In particular, the catch rates of most shark and billfish species increase with soak time. At the end of longline retrieval, for example, expected catch rates for broadbill swordfish are four times those at the beginning of retrieval. Survival of the animal while it is hooked on the longline appears to be an important factor determining whether it is eventually brought on board the vessel. Catch rates of species that survive being hooked (e.g. blue shark) increase with soak time. In contrast, skipjack tuna and seabirds are usually dead at the time of retrieval. Their catch rates decline with time, perhaps because scavengers can easily remove hooked animals that are dead. The results of our study have important implications for fishery management and assessments that rely on longline catch data. A reduction in soak time since longlining commenced in the 1950s has introduced a systematic bias in estimates of mortality levels and abundance. The abundance of species like seabirds has been overestimated in recent years.
Simple modifications to procedures for data collection, such as recording the number of hooks retrieved without baits, would greatly improve mortality estimates.
Abstract:
Over the past few years, pop-up satellite archival tags (PSATs) have been used to investigate the behavior, movements, thermal biology, and postrelease mortality of a wide range of large, highly migratory species including bluefin tuna (Block et al., 2001), swordfish (Sedberry and Loefer, 2001), blue marlin (Graves et al., 2002), striped marlin (Domeier and Dewar, 2003), and white sharks (Boustany et al., 2002). PSAT technology has improved rapidly, and current tag models are capable of collecting, processing, and storing large amounts of information on light level, temperature, and pressure (depth) for a predetermined length of time before the tags detach from the animals. After release, the tags float to the surface and transmit the stored data to passing satellites of the Argos system.
Abstract:
Recreational fisheries in the waters off the northeast U.S. target a variety of pelagic and demersal fish species, and catch and effort data sampled from recreational fisheries are a critical component of the information used in resource evaluation and management. Standardized indices of stock abundance developed from recreational fishery catch rates are routinely used in stock assessments. The statistical properties of both simulated and empirical recreational fishery catch-rate data such as those collected by the National Marine Fisheries Service (NMFS) Marine Recreational Fishery Statistics Survey (MRFSS) are examined, and the potential effects of different assumptions about the error structure of the catch-rate frequency distributions in computing indices of stock abundance are evaluated. Recreational fishery catch distributions sampled by the MRFSS are highly contagious and overdispersed in relation to the normal distribution and are generally best characterized by the Poisson or negative binomial distributions. The modeling of both the simulated and empirical MRFSS catch rates indicates that one may draw erroneous conclusions about stock trends by assuming the wrong error distribution in procedures used to develop standardized indices of stock abundance. The results demonstrate the importance of considering not only the overall model fit and significance of classification effects, but also the possible effects of model misspecification, when determining the most appropriate model construction.
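The overdispersion that makes the normal (and often the plain Poisson) error assumption misleading is easy to diagnose from two moments. A minimal sketch on invented catch counts, not MRFSS data: the variance-to-mean ratio far exceeds the value of 1 implied by the Poisson, and a method-of-moments negative binomial size parameter can be read off the same two quantities.

```python
from statistics import mean, variance

# Hypothetical zero-heavy catch counts of the kind angler intercepts produce:
catches = [0] * 30 + [1] * 5 + [2] * 3 + [5, 8, 12, 20]

m = mean(catches)
v = variance(catches)        # sample variance
dispersion = v / m           # Poisson implies a ratio of 1
nb_size = m * m / (v - m)    # method-of-moments negative binomial size k
```

A model assuming normal errors treats the many zeros and the occasional 20-fish trip symmetrically; the negative binomial instead absorbs the extra variance through a small size parameter, which is why it tends to fit contagious catch-rate data better.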
Abstract:
Although subsampling is a common method for describing the composition of large and diverse trawl catches, the accuracy of these techniques is often unknown. We determined the sampling errors generated from estimating the percentage of the total number of species recorded in catches, as well as the abundance of each species, at each increase in the proportion of the sorted catch. We completely partitioned twenty prawn trawl catches from tropical northern Australia into subsamples of about 10 kg each. All subsamples were then sorted, and species numbers recorded. Catch weights ranged from 71 to 445 kg, and the number of fish species in trawls ranged from 60 to 138, and invertebrate species from 18 to 63. Almost 70% of the species recorded in catches were “rare” in subsamples (less than one individual per 10 kg subsample or less than one in every 389 individuals). A matrix was used to show the increase in the total number of species that were recorded in each catch as the percentage of the sorted catch increased. Simulation modelling showed that sorting small subsamples (about 10% of catch weights) identified about 50% of the total number of species caught in a trawl. Larger subsamples (50% of catch weight on average) identified about 80% of the total species caught in a trawl. The accuracy of estimating the abundance of each species also increased with increasing subsample size. For the “rare” species, sampling error was around 80% after sorting 10% of catch weight and was just less than 50% after 40% of catch weight had been sorted. For the “abundant” species (five or more individuals per 10 kg subsample or five or more in every 389 individuals), sampling error was around 25% after sorting 10% of catch weight, but was reduced to around 10% after 40% of catch weight had been sorted.
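The subsampling question above, how much of the catch must be sorted before most species have been seen, can be mimicked with a toy simulation. The community below is invented to make the point extreme: with more singleton species than individuals in a small sorted fraction, that fraction cannot recover the full species list.

```python
import random

# Hypothetical catch: 150 singleton ("rare") species plus 5 abundant species
# of 170 individuals each, 1000 individuals and 155 species in total.
individuals = list(range(150)) + [150 + i for i in range(5) for _ in range(170)]
total_species = len(set(individuals))

random.seed(42)
random.shuffle(individuals)   # one shuffle; prefixes act as nested subsamples

def species_seen(fraction: float) -> int:
    """Distinct species in the first `fraction` of the shuffled catch."""
    n = int(len(individuals) * fraction)
    return len(set(individuals[:n]))

s10, s50 = species_seen(0.10), species_seen(0.50)
```

Because the prefixes are nested, observed richness can only grow as the sorted fraction increases, mirroring the cumulative species-count matrix described in the abstract; a 10% prefix of 100 individuals can never contain all 150 singletons.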