950 results for British Columbia Margin


Relevance: 30.00%

Publisher:

Abstract:

Under present climate conditions, convection at high latitudes of the North Pacific is restricted to shallower depths than in the North Atlantic. To what extent this asymmetry between the two ocean basins was maintained over the past 20 kyr is poorly known because there are few unambiguous proxy records of ventilation from the North Pacific. We present new data for two sediment cores from the California margin at 800 and 1600 m depth to argue that the depth of ventilation shifted repeatedly in the northeast Pacific over the course of deglaciation. The evidence includes benthic foraminiferal Cd/Ca, 18O/16O, and 13C/12C data as well as radiocarbon age differences between benthic and planktonic foraminifera. A number of features in the shallower of the two cores, including an interval of laminated sediments, are consistent with changes in ventilation over the past 20 kyr suggested by alternations between laminated and bioturbated sediments in the Santa Barbara Basin and the Gulf of California [Keigwin and Jones, 1990 doi:10.1029/PA005i006p01009; Kennett and Ingram, 1995 doi:10.1038/377510a0; Behl and Kennett, 1996 doi:10.1038/379243a0]. Data from the deeper of the two California margin cores suggest that during times of reduced ventilation at 800 m, ventilation was enhanced at 1600 m depth, and vice versa. This pronounced depth dependence of ventilation needs to be taken into account when exploring potential teleconnections between the North Pacific and the North Atlantic.

Relevance: 30.00%

Publisher:

Abstract:

"A list of the sources and secondary works cited": p. [141]-145.

Relevance: 20.00%

Publisher:

Abstract:

Raman spectroscopic analyses of fragmented wall-painting specimens from a Romano-British villa dating from ca. 200 AD are reported. The predominant pigment is red haematite, to which carbon, chalk and sand have been added to produce colour variations, applied to a typical Roman limewash putty composition. Other pigment colours are identified as white chalk, yellow (goethite), grey (soot/chalk mixture) and violet. The latter pigment is ascribed to caput mortuum, a rare form of haematite, to which kaolinite (possibly from Cornwall) has been added, presumably in an effort to increase the adhesive properties of the pigment to the substratum. This is the first time that kaolinite has been reported in this context and could indicate the successful application of an ancient technology discovered by the Romano-British artists. Supporting evidence for the Raman data is provided by X-ray diffraction and SEM-EDAX analyses of the purple pigment.

Relevance: 20.00%

Publisher:

Abstract:

Thoracoscopic instrumented anterior spinal fusion for adolescent idiopathic scoliosis (AIS) has clinical benefits that include reduced pulmonary morbidity and postoperative pain, and improved cosmesis. However, quantitative data on radiological improvement of vertebral rotation using this method are lacking. This study's objectives were to use CT to measure preoperative and postoperative axial vertebral rotational deformity at the curve apex in endoscopically treated, anterior-instrumented scoliosis patients, and to assess the relevance of these findings to clinically measured chest wall rib hump deformity correction. This is the first quantitative CT study to confirm that endoscopic anterior instrumented fusion for AIS substantially improves axial vertebral body rotational deformity at the apex of the curve. The margin of correction of 43% compares favourably with historically published figures of 24% for patients with posterior all-hook-rod constructs. CT measurements correlated significantly with the clinical outcome of rib hump deformity correction.

Relevance: 20.00%

Publisher:

Abstract:

In this paper we describe the Large Margin Vector Quantization algorithm (LMVQ), which uses gradient ascent to maximise the margin of a radial basis function classifier. We present a derivation of the algorithm, which proceeds from an estimate of the class-conditional probability densities. We show that the key behaviour of Kohonen's well-known LVQ2 and LVQ3 algorithms emerges as a natural consequence of our formulation. We compare the performance of LMVQ with that of Kohonen's LVQ algorithms on an artificial classification problem and several well-known benchmark classification tasks. We find that the classifiers produced by LMVQ attain a level of accuracy that compares well with that obtained via LVQ1, LVQ2 and LVQ3, with reduced storage complexity. We indicate future directions of enquiry based on the large margin approach to Learning Vector Quantization.
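The LVQ family that LMVQ generalises can be sketched with Kohonen's classical LVQ1 rule; the function name, learning rate, and toy data below are illustrative, not the paper's notation:

```python
import numpy as np

def lvq1_update(prototypes, proto_labels, x, y, lr=0.1):
    """One LVQ1 step (Kohonen): move the winning (nearest) prototype
    toward x when its class matches y, and away from x otherwise."""
    j = int(np.argmin(np.linalg.norm(prototypes - x, axis=1)))
    sign = 1.0 if proto_labels[j] == y else -1.0
    prototypes[j] += sign * lr * (x - prototypes[j])
    return prototypes

# toy example: one prototype per class, one correctly labelled sample
protos = np.array([[0.0, 0.0], [1.0, 1.0]])
labels = np.array([0, 1])
protos = lvq1_update(protos, labels, x=np.array([0.2, 0.0]), y=0)
```

LVQ2 and LVQ3 refine this rule by updating the two nearest prototypes when the sample falls inside a window between them; the abstract's point is that such behaviour falls out of a margin-maximisation objective rather than being postulated.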

Relevance: 20.00%

Publisher:

Abstract:

One of the surprising recurring phenomena observed in experiments with boosting is that the test error of the generated classifier usually does not increase as its size becomes very large, and often is observed to decrease even after the training error reaches zero. In this paper, we show that this phenomenon is related to the distribution of margins of the training examples with respect to the generated voting classification rule, where the margin of an example is simply the difference between the number of correct votes and the maximum number of votes received by any incorrect label. We show that techniques used in the analysis of Vapnik's support vector classifiers and of neural networks with small weights can be applied to voting methods to relate the margin distribution to the test error. We also show theoretically and experimentally that boosting is especially effective at increasing the margins of the training examples. Finally, we compare our explanation to those based on the bias-variance decomposition.
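The margin definition in the abstract translates directly into code; this small helper (hypothetical, for illustration) computes it from raw vote counts:

```python
import numpy as np

def voting_margin(votes, correct):
    """Margin of one training example under a voting classifier:
    votes for the correct label minus the maximum number of votes
    received by any incorrect label (the abstract's definition)."""
    votes = np.asarray(votes)
    others = np.delete(votes, correct)
    return votes[correct] - others.max()

# 10 base classifiers voting over 3 labels; true label is 0
m = voting_margin([6, 3, 1], correct=0)  # 6 - 3 = 3
```

Normalizing by the total number of votes, a common convention in the boosting literature, rescales the margin to [-1, 1], with positive values corresponding to correctly classified examples.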

Relevance: 20.00%

Publisher:

Abstract:

Log-linear and maximum-margin models are two commonly-used methods in supervised machine learning, and are frequently used in structured prediction problems. Efficient learning of parameters in these models is therefore an important problem, and becomes a key factor when learning from very large data sets. This paper describes exponentiated gradient (EG) algorithms for training such models, where EG updates are applied to the convex dual of either the log-linear or max-margin objective function; the dual in both the log-linear and max-margin cases corresponds to minimizing a convex function with simplex constraints. We study both batch and online variants of the algorithm, and provide rates of convergence for both cases. In the max-margin case, O(1/ε) EG updates are required to reach a given accuracy ε in the dual; in contrast, for log-linear models only O(log(1/ε)) updates are required. For both the max-margin and log-linear cases, our bounds suggest that the online EG algorithm requires a factor of n less computation to reach a desired accuracy than the batch EG algorithm, where n is the number of training examples. Our experiments confirm that the online algorithms are much faster than the batch algorithms in practice. We describe how the EG updates factor in a convenient way for structured prediction problems, allowing the algorithms to be efficiently applied to problems such as sequence learning or natural language parsing. We perform extensive evaluation of the algorithms, comparing them to L-BFGS and stochastic gradient descent for log-linear models, and to SVM-Struct for max-margin models. The algorithms are applied to a multi-class problem as well as to a more complex large-scale parsing task. In all these settings, the EG algorithms presented here outperform the other methods.
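The core multiplicative EG step can be sketched as follows; the objective and step size are illustrative stand-ins, since the update applies to any convex function minimised over the probability simplex:

```python
import numpy as np

def eg_step(w, grad, eta):
    """One exponentiated-gradient update: multiplicative reweighting
    by exp(-eta * grad), then renormalization so that w stays a
    probability distribution (the simplex constraint for free)."""
    w = w * np.exp(-eta * grad)
    return w / w.sum()

# minimize f(w) = 0.5 * ||w - c||^2 over the simplex; since c is
# itself a distribution, the minimizer is c
c = np.array([0.7, 0.2, 0.1])
w = np.full(3, 1.0 / 3.0)
for _ in range(200):
    w = eg_step(w, w - c, eta=0.5)  # gradient of f is w - c
```

Because the update is multiplicative, every coordinate stays strictly positive and the normalization keeps the iterate on the simplex, which is exactly the constraint structure of the log-linear and max-margin duals described above.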

Relevance: 20.00%

Publisher:

Abstract:

We consider the problem of structured classification, where the task is to predict a label y from an input x, and y has meaningful internal structure. Our framework includes supervised training of Markov random fields and weighted context-free grammars as special cases. We describe an algorithm that solves the large-margin optimization problem defined in [12], using an exponential-family (Gibbs distribution) representation of structured objects. The algorithm is efficient—even in cases where the number of labels y is exponential in size—provided that certain expectations under Gibbs distributions can be calculated efficiently. The method for structured labels relies on a more general result, specifically the application of exponentiated gradient updates [7, 8] to quadratic programs.
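A toy version of exponentiated-gradient updates applied to a quadratic program over the simplex, the shape of problem the abstract describes; Q, b, and the step size are invented for illustration and are unrelated to the paper's actual large-margin dual:

```python
import numpy as np

# toy QP over the simplex: minimize 0.5 * a^T Q a - b^T a,
# with a constrained to be a probability vector
Q = np.array([[2.0, 0.5],
              [0.5, 1.0]])
b = np.array([1.2, 0.3])
a = np.full(2, 0.5)
for _ in range(500):
    g = Q @ a - b                # gradient of the QP objective
    a = a * np.exp(-0.2 * g)     # exponentiated-gradient reweighting
    a /= a.sum()                 # renormalize onto the simplex
```

For this choice of Q and b the constrained minimizer is a = (0.7, 0.3); in the structured setting the same update is tractable because the required expectations under the Gibbs distribution can be computed by dynamic programming rather than by enumerating the exponentially many labels.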

Relevance: 20.00%

Publisher:

Abstract:

A classical condition for fast learning rates is the margin condition, first introduced by Mammen and Tsybakov. We tackle in this paper the problem of adaptivity to this condition in the context of model selection, in a general learning framework. In fact, we consider a weaker version of this condition that allows one to take into account that learning within a small model can be much easier than within a large one. Requiring this "strong margin adaptivity" makes the model selection problem more challenging. We first prove, in a general framework, that some penalization procedures (including local Rademacher complexities) exhibit this adaptivity when the models are nested. Contrary to previous results, this holds with penalties that only depend on the data. Our second main result is that strong margin adaptivity is not always possible when the models are not nested: for every model selection procedure (even a randomized one), there is a problem for which it fails to achieve strong margin adaptivity.