968 results for Unsupervised classification
Abstract:
Rockfishes (Sebastes spp.) tend to aggregate near rocky, cobble, or generally rugged areas that are difficult to survey with bottom trawls, and evidence indicates that assemblages of rockfish species may differ between areas accessible to trawling and those that are not. Consequently, it is important to determine which grounds are trawlable and which are untrawlable, so that the areas to which trawl survey results should be applied are accurately identified. To this end, we used multibeam echosounder data to generate metrics that describe the seafloor: backscatter strength at normal and oblique incidence angles, the variation of the angle-dependent backscatter strength within 10° of normal incidence, the scintillation of the acoustic intensity scattered from the seafloor, and the seafloor rugosity. We used these metrics to develop a binary classification scheme to estimate where the seafloor is expected to be trawlable. The multibeam echosounder data were verified through analyses of video and still images collected with a stereo drop camera and a remotely operated vehicle in a study at Snakehead Bank, ~100 km south of Kodiak Island in the Gulf of Alaska. Comparisons of different combinations of metrics derived from the multibeam data indicated that the oblique-incidence backscatter strength was the most accurate estimator of trawlability at Snakehead Bank and that the addition of other metrics provided only marginal improvements. If successful on a wider scale in the Gulf of Alaska, this acoustic remote-sensing technique, or a similar one, could help improve the accuracy of rockfish stock assessments.
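The core of the scheme described above is a binary decision on seafloor trawlability driven mainly by oblique-incidence backscatter strength. A minimal sketch of that idea, with an entirely illustrative threshold and synthetic backscatter values (the paper's actual decision rule and calibrated values are not given here), might look like:

```python
import numpy as np

def classify_trawlability(backscatter_db, threshold_db=-24.0):
    """Binary trawlability call from oblique-incidence backscatter strength.

    Rough, hard seafloor (untrawlable) tends to return more acoustic energy
    than soft sediment, so patches whose backscatter exceeds the threshold
    are flagged "untrawlable". The threshold value here is a placeholder,
    not the one derived in the study.
    """
    return np.where(backscatter_db > threshold_db, "untrawlable", "trawlable")

# Illustrative oblique-incidence backscatter values (dB) for five patches
patches = np.array([-30.5, -18.2, -27.9, -15.4, -22.0])
labels = classify_trawlability(patches)
# -> ['trawlable' 'untrawlable' 'trawlable' 'untrawlable' 'untrawlable']
```

In practice such a threshold would be tuned against ground truth, which is exactly the role the drop-camera and ROV imagery play in the study.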
Abstract:
The Ecological Society of America and NOAA's Offices of Habitat Conservation and Protected Resources sponsored a workshop to develop a national marine and estuarine ecosystem classification system. Among the 22 people involved were scientists who had developed various regional classification systems and managers from NOAA and other federal agencies who might ultimately use this system for conservation and management. The objectives were to: (1) review existing global and regional classification systems; (2) develop the framework of a national classification system; and (3) propose a plan to expand the framework into a comprehensive classification system. Although there has been progress in the development of marine classifications in recent years, these systems have been either regionally focused (e.g., Pacific islands) or restricted to specific habitats (e.g., wetlands; deep seafloor). Participants in the workshop looked for commonalities across existing classification systems and tried to link these using broad-scale factors important to ecosystem structure and function.
Abstract:
We have applied a number of objective statistical techniques to define homogeneous climatic regions for the Pacific Ocean, using COADS (Woodruff et al. 1987) monthly sea surface temperature (SST) for 1950-1989 as the key variable. The basic data comprised all global 4°×4° latitude/longitude boxes with enough data available to yield reliable long-term means of monthly mean SST. An R-mode principal components analysis of these data, following a technique first used by Stidd (1967), yields information about harmonics of the annual cycles of SST. We used the spatial coefficients (one for each 4° box and eigenvector) as input to a K-means cluster analysis to classify the gridbox SST data into 34 global regions, of which 20 cover the Pacific and Indian oceans. Seasonal time series were then produced for each of these regions. For comparison purposes, the variance spectrum of each regional anomaly time series was calculated. Most of the significant spectral peaks occur near the biennial (2.1-2.2 years) and ENSO (~3-6 years) time scales in the tropical regions. Decadal-scale fluctuations are important in the mid-latitude ocean regions.
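The PCA-then-K-means pipeline above can be sketched in a few lines. The synthetic "gridbox" climatologies below (weak-cycle tropics versus strong, oppositely phased Northern and Southern midlatitude annual cycles) are invented for illustration, as are the cluster count and component count; only the two-stage structure (R-mode PCA scores fed into K-means) mirrors the method described:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
months = np.arange(12)

def annual_cycle(mean, amplitude, phase, n):
    """n synthetic monthly-mean SST climatologies (°C) with a cosine
    annual cycle of the given amplitude and phase, plus small noise."""
    base = mean + amplitude * np.cos(2 * np.pi * (months - phase) / 12)
    return base + rng.normal(0.0, 0.2, size=(n, 12))

# Illustrative gridbox groups: tropics, N. midlatitudes, S. midlatitudes
boxes = np.vstack([
    annual_cycle(27.0, 1.0, 7, 30),   # tropics: warm, weak cycle
    annual_cycle(15.0, 6.0, 7, 30),   # N. midlatitudes: warm in Aug
    annual_cycle(15.0, 6.0, 1, 30),   # S. midlatitudes: warm in Feb
])

# R-mode PCA: spatial coefficients (scores) for the leading harmonics
scores = PCA(n_components=3).fit_transform(boxes)

# K-means on those scores groups gridboxes into "climatic regions"
regions = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(scores)
```

With separation this clean, each synthetic group lands in its own cluster; the study's real analysis works the same way but over all qualifying global gridboxes and 34 clusters.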
Abstract:
The use of L1 regularisation for sparse learning has generated immense research interest, with successful application in such diverse areas as signal acquisition, image coding, genomics and collaborative filtering. While existing work highlights the many advantages of L1 methods, in this paper we find that L1 regularisation often dramatically underperforms in terms of predictive performance when compared with other methods for inferring sparsity. We focus on unsupervised latent variable models, and develop L1-minimising factor models, Bayesian variants of "L1", and Bayesian models with a stronger L0-like sparsity induced through spike-and-slab distributions. These spike-and-slab Bayesian factor models encourage sparsity while accounting for uncertainty in a principled manner and avoiding unnecessary shrinkage of non-zero values. We demonstrate on a number of data sets that in practice spike-and-slab Bayesian methods outperform L1 minimisation, even under a limited computational budget. We thus highlight the need to re-assess the wide use of L1 methods in sparsity-reliant applications, particularly when we care about generalising to previously unseen data, and offer an alternative that, over many varying conditions, provides improved generalisation performance.
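The key contrast the abstract draws, that L1 penalties shrink even clearly non-zero coefficients while spike-and-slab priors select without shrinking, can be illustrated with a toy numpy sketch. The soft-thresholding function is the standard L1 proximal operator; the hard-selection function is only a crude stand-in for spike-and-slab MAP behaviour (the paper's actual models are full Bayesian factor models, not this rule):

```python
import numpy as np

def soft_threshold(w, lam):
    """Proximal operator of the L1 penalty: pulls every coefficient
    toward zero by lam, biasing even clearly non-zero values."""
    return np.sign(w) * np.maximum(np.abs(w) - lam, 0.0)

def spike_and_slab_map(w, lam):
    """L0-like hard selection (a toy stand-in for spike-and-slab MAP):
    small coefficients snap to the spike at zero, while surviving
    coefficients keep their full magnitude, i.e. no shrinkage."""
    return np.where(np.abs(w) > lam, w, 0.0)

w = np.array([5.0, 0.3, -2.0, 0.1])
lam = 0.5
l1 = soft_threshold(w, lam)      # -> [ 4.5  0.  -1.5  0. ]
l0 = spike_and_slab_map(w, lam)  # -> [ 5.   0.  -2.   0. ]
```

Both rules zero out the small coefficients, but only the L1 rule also biases the large ones (5.0 → 4.5, -2.0 → -1.5), which is the "unnecessary shrinkage of non-zero values" the abstract refers to.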