965 results for Catalan margin
Abstract:
Nezumia aequalis and Coelorinchus mediterraneus are abundant species on the upper and lower continental slopes, respectively, in the Mediterranean Sea. A study of the reproductive strategy of the two species was conducted on the Catalan margin (NW Mediterranean). The reproductive cycle of both species was investigated using visual analysis of gonads and histological screening. The shallower species, N. aequalis, showed continuous reproduction with a peak of spawning females in the winter months. In contrast, the deeper-living species, C. mediterraneus, showed semi-continuous reproduction with a regression period during the spring. Juveniles of N. aequalis were present in all seasons but were most abundant in the spring. Only two juveniles of C. mediterraneus were found. Both species had asynchronous oocyte development. The average fecundity of N. aequalis was 10,630 oocytes per individual, lower than that reported for the same species in the Atlantic Ocean. The fecundity of C. mediterraneus was measured for the first time in this study, with an average of 7,693 oocytes per individual. Males of both species appear to have semi-cystic spermatogenesis. © 2013 Elsevier Ltd.
Abstract:
The distribution, type and quantity of marine litter accumulated on the bathyal and abyssal Mediterranean seafloor have been studied in the framework of the Spanish national projects PROMETEO and DOS MARES and the ESF-EuroDEEP project BIOFUN. Litter was collected with an otter trawl and an Agassiz trawl while sampling for megafauna in the Blanes canyon and on the adjacent slope (Catalan margin, north-western Mediterranean) between 900 and 2700 m depth, and in the western, central and eastern Mediterranean basins at 1200, 2000 and 3000 m depth. All litter was sorted into eight categories (hard plastic, soft plastic, glass, metal, clinker, fabric, longlines and fishing nets) and weighed. The distribution of litter was analysed in relation to depth, geographic area and natural (bathymetry, currents and rivers) and anthropogenic (population density and shipping routes) processes. The most abundant litter types were plastic, glass, metal and clinker. Lost or discarded fishing gear was also commonly found. On the Catalan margin, although the data indicated an accumulation of litter with increasing depth, mean weight was not significantly different between depths or between the open slope and the canyon. We propose that litter accumulated in the canyon, with high proportions of plastics, has a predominantly coastal origin, while litter collected on the open slope, dominated by heavy litter, is mostly ship-originated, especially at sites under major shipping routes. Along the trans-Mediterranean transect, although a higher amount of litter seemed to be present in the Western Mediterranean, differences in mean weight were not significant among the three geographic areas and the three depths. Here, the shallower sites, also closer to the coast, had a higher proportion of plastics than the deeper sites, which had a higher proportion of heavy litter and were often affected by shipping routes. The weight of litter was also compared to the biomass of megafauna from the same samples.
On the Blanes slope, the biomass of megafauna was significantly higher than the weight of litter between 900 and 2000 m depth, and no significant differences were found at 2250 and 2700 m depth. Along the trans-Mediterranean transect, no significant differences were found between biomass and litter weight at any site except two: the Central Mediterranean at 1200 m depth, where biomass was higher than litter weight, and the Eastern Mediterranean at 1200 m depth, where litter weight was higher than biomass. The results are discussed in the framework of knowledge on marine litter accumulation, its potential impact on the habitat and fauna, and the legislation addressing these issues.
Abstract:
In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy comparable to that obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
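To make the prototype-based setting concrete, here is a minimal sketch of Kohonen's classical LVQ1 update (the baseline family being compared, not the paper's LMVQ gradient-ascent procedure; function names and hyperparameters are our own illustration):

```python
import numpy as np

def lvq1_train(X, y, prototypes, proto_labels, lr=0.1, epochs=20):
    """Minimal LVQ1 sketch: for each sample, move the nearest prototype
    toward the sample if their labels agree, away from it otherwise."""
    W = prototypes.astype(float).copy()
    for _ in range(epochs):
        for x, label in zip(X, y):
            j = np.argmin(np.linalg.norm(W - x, axis=1))  # nearest prototype
            if proto_labels[j] == label:
                W[j] += lr * (x - W[j])   # attract toward the sample
            else:
                W[j] -= lr * (x - W[j])   # repel away from the sample
    return W

def lvq_predict(W, proto_labels, X):
    """Classify each row of X by the label of its nearest prototype."""
    dists = np.linalg.norm(W[None, :, :] - X[:, None, :], axis=2)
    return proto_labels[np.argmin(dists, axis=1)]
```

LVQ2 and LVQ3 refine this rule by updating the two nearest prototypes when the sample falls inside a window around the decision boundary; the abstract's point is that such behaviour falls out of a single margin-maximisation objective.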
Abstract:
One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and is often observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
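The margin defined in this abstract is straightforward to compute from the (normalized) vote totals of a voting classifier; a small sketch, with a function name of our own choosing:

```python
import numpy as np

def voting_margin(votes, y_true):
    """Margin of each example under a voting classifier.

    votes:  (n_examples, n_classes) array of weighted vote totals
            (normalized here so each row sums to 1).
    y_true: (n_examples,) array of correct label indices.

    Margin = votes for the correct label minus the maximum votes received
    by any incorrect label. Positive means correctly classified; larger
    values mean a more "confident" vote.
    """
    votes = votes / votes.sum(axis=1, keepdims=True)
    n = votes.shape[0]
    correct = votes[np.arange(n), y_true]
    masked = votes.copy()
    masked[np.arange(n), y_true] = -np.inf   # exclude the correct label
    return correct - masked.max(axis=1)
```

The paper's observation is that boosting keeps pushing this whole distribution of margins upward even after every training margin is already positive (i.e. training error zero).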
Abstract:
Log-linear and maximum-margin models are two commonly used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
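The core primitive here, minimizing a convex function over the simplex via multiplicative updates, can be sketched generically (this is the textbook EG update, not the paper's specific log-linear or max-margin dual; the quadratic test objective and step size below are assumptions for illustration):

```python
import numpy as np

def eg_minimize(grad, d, eta=0.5, steps=500):
    """Exponentiated-gradient descent on the probability simplex.

    Minimizes a convex function over {u : u >= 0, sum(u) = 1}, given its
    gradient. The multiplicative update followed by renormalization keeps
    every iterate exactly on the simplex, with no projection step needed.
    """
    u = np.full(d, 1.0 / d)              # start at the uniform distribution
    for _ in range(steps):
        u = u * np.exp(-eta * grad(u))   # multiplicative (mirror) step
        u /= u.sum()                     # renormalize onto the simplex
    return u

# Illustration: minimize 0.5 * ||u - c||^2 over the simplex,
# where c itself lies in the simplex, so the minimizer is u = c.
c = np.array([0.2, 0.3, 0.5])
u_star = eg_minimize(lambda u: u - c, d=3)
```

In the paper's setting there is one such simplex per training example (the dual variables over labels), which is what lets the updates factor conveniently for structured problems.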
Abstract:
We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient, even in cases where the number of labels y is exponential in size, provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
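The "expectations under Gibbs distributions" that the abstract requires can be made concrete for a small, unstructured label set by brute-force enumeration (a sketch with names of our own; for structured labels the same quantity is instead computed by dynamic programming such as forward-backward or inside-outside, which is what makes the exponential label space tractable):

```python
import numpy as np

def gibbs_expectation(theta, feats):
    """Expected feature vector under the Gibbs distribution
    p(y) ∝ exp(theta · f(y)), computed by enumerating all labels.

    theta: (n_features,) parameter vector.
    feats: (n_labels, n_features) matrix whose rows are the feature
           vectors f(y) of each label y.
    """
    scores = feats @ theta
    scores -= scores.max()      # subtract max score for numerical stability
    p = np.exp(scores)
    p /= p.sum()                # normalize into a distribution over labels
    return p @ feats            # E_p[f(y)]
```

With theta = 0 the Gibbs distribution is uniform, so the expectation is just the average feature vector; as theta grows, the expectation concentrates on the highest-scoring labels.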
Abstract:
A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. In this paper we tackle the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition, which takes into account that learning within a small model can be much easier than within a large one. Requiring this "strong margin adaptivity" makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that only depend on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it does not demonstrate strong margin adaptivity.
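For reference, one common formulation of the Mammen–Tsybakov margin condition in binary classification (stated here from standard usage, not from this paper's weaker variant), with regression function η(x) = P(Y = 1 | X = x):

```latex
% Mammen--Tsybakov margin (low-noise) condition, binary classification,
% with regression function $\eta(x) = \mathbb{P}(Y = 1 \mid X = x)$:
\exists\, C > 0,\ \alpha \ge 0 \ \text{such that}\quad
\mathbb{P}\bigl( 0 < |2\eta(X) - 1| \le t \bigr) \le C\, t^{\alpha}
\quad \text{for all } t > 0.
```

Larger α means the regression function rarely hovers near the decision boundary value 1/2, which is what permits rates faster than the usual n^{-1/2}.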