Abstract:
Bycatch and resultant discard mortality are issues of global concern. The groundfish demersal trawl fishery on the west coast of the United States is a multispecies fishery with significant catch of target and nontarget species. These catches are of particular concern in regard to species that have previously been declared overfished and are currently rebuilding biomass back to target levels. To understand these interactions better, we used data from the West Coast Groundfish Observer Program in a series of cluster analyses to evaluate 3 questions: 1) Are there identifiable associations between species caught in the bottom trawl fishery; 2) Do species that are undergoing population rebuilding toward target biomass levels (“rebuilding species”) cluster with targeted species in a consistent way; 3) Are the relationships between rebuilding bycatch species and target species more resolved at particular spatial scales or are relationships spatially consistent across the whole data set? Two strong species clusters emerged—a deepwater slope cluster and a shelf cluster—neither of which included rebuilding species. The likelihood of encountering rebuilding rockfish species is relatively low. To evaluate whether weak clustering of rebuilding rockfish was attributable to their low rate of occurrence, we specified null models of species occurrence. Results indicated that the ability to predict occurrence of rebuilding rockfish when target species were caught was low. Cluster analyses performed at a variety of spatial scales indicated that the most reliable clustering of rebuilding species was at the spatial scale of individual fishing ports. This finding underscores the value of spatially resolved data for fishery management.
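The species-association analysis described above rests on measuring how often species co-occur in the same hauls. A minimal sketch of that idea, using a toy haul-by-species presence matrix and Jaccard similarity (the species names and data here are illustrative, not taken from the observer-program data set):

```python
import numpy as np

# Toy haul-by-species presence/absence matrix (rows: hauls,
# columns: species). Names and values are hypothetical.
species = ["sablefish", "dover_sole", "thornyhead", "petrale_sole"]
hauls = np.array([
    [1, 1, 1, 0],
    [1, 1, 1, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 1, 0, 1],
    [0, 0, 0, 1],
])

def jaccard(a, b):
    """Jaccard similarity of two presence/absence vectors:
    |intersection| / |union| of the hauls each species appears in."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union if union else 0.0

# Pairwise similarity matrix; a cluster analysis would then group
# species whose occurrence patterns are most similar.
n = len(species)
sim = np.array([[jaccard(hauls[:, i], hauls[:, j]) for j in range(n)]
                for i in range(n)])
```

In this toy matrix, sablefish and thornyhead always co-occur (similarity 1.0), echoing the kind of deepwater-slope grouping the abstract reports, while a rarely caught species would show uniformly low similarity to every target species, which is why the authors turned to null models of occurrence.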
Abstract:
This paper investigates a method of automatic pronunciation scoring for use in computer-assisted language learning (CALL) systems. The method utilizes a likelihood-based `Goodness of Pronunciation' (GOP) measure which is extended to include individual thresholds for each phone based on both averaged native confidence scores and on rejection statistics provided by human judges. Further improvements are obtained by incorporating models of the subject's native language and by augmenting the recognition networks to include expected pronunciation errors. The various GOP measures are assessed using a specially recorded database of non-native speakers which has been annotated to mark phone-level pronunciation errors. Since pronunciation assessment is highly subjective, a set of four performance measures has been designed, each of them measuring different aspects of how well computer-derived phone-level scores agree with human scores. These performance measures are used to cross-validate the reference annotations and to assess the basic GOP algorithm and its refinements. The experimental results suggest that a likelihood-based pronunciation scoring metric can achieve usable performance, especially after applying the various enhancements.
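The core of the GOP measure described above is a duration-normalized log-likelihood ratio between the canonical phone (from a forced alignment) and the best-scoring competing phone (from an unconstrained phone loop), compared against a phone-specific threshold. A minimal sketch under those assumptions (function names, the toy log-likelihoods, and the threshold value are hypothetical, not from the paper):

```python
import numpy as np

def gop_score(frame_loglik_forced, frame_loglik_free):
    """Goodness of Pronunciation for one phone segment.

    frame_loglik_forced: per-frame log-likelihoods under the
        canonical phone model from the forced alignment.
    frame_loglik_free: per-frame log-likelihoods under the best
        competing phone from an unconstrained phone loop.
    Returns the absolute log-likelihood ratio normalized by the
    number of frames; scores near 0 suggest native-like
    pronunciation, large scores suggest mispronunciation.
    """
    forced = np.asarray(frame_loglik_forced, dtype=float)
    free = np.asarray(frame_loglik_free, dtype=float)
    return abs(forced.sum() - free.sum()) / len(forced)

def is_mispronounced(gop, phone_threshold):
    """Flag the phone if its GOP exceeds a per-phone threshold,
    e.g. one derived from native-speaker score statistics or
    human-judge rejection rates, as the abstract describes."""
    return gop > phone_threshold

# Toy segment: the forced alignment fits almost as well as the
# free phone loop, so the GOP stays small.
score = gop_score([-5.1, -4.8, -5.3], [-4.9, -4.6, -5.0])
```

Calibrating the threshold per phone, rather than globally, is the paper's first extension: phones differ in how reliably their acoustic models separate correct from incorrect productions, so a single cutoff over- or under-rejects depending on the phone.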