904 results for encoding of measurement streams


Relevance: 100.00%

Abstract:

To date there is no documented procedure to extrapolate findings of an isometric nature to a whole body performance setting. The purpose of this study was to quantify the reliability of perceived exertion to control neuromuscular output during an isometric contraction. 21 varsity athletes completed a maximal voluntary contraction and a 2 min constant force contraction at both the start and end of the study. Between pre and post testing, all participants completed a 2 min constant perceived exertion contraction once a day for 4 days. The intra-class correlation coefficient (R = 0.949) and standard error of measurement (SEM = 5.12 Nm) indicated that the isometric contraction was reliable. Limits of agreement demonstrated only moderate initial reliability, yet with smaller limits towards the end of the 4 training sessions. In conclusion, athletes naïve to a constant effort isometric contraction will produce reliable and acceptably stable results after 1 familiarization session has been completed.
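The reported SEM is tied to the ICC through the standard relation SEM = SD·sqrt(1 − ICC); a minimal sketch of that calculation with hypothetical torque values (the study's raw data are not given here):

```python
import numpy as np

# Hypothetical peak-torque measurements (Nm) for a test-retest design:
# rows = participants, columns = sessions. Values are illustrative only.
torque = np.array([
    [120.0, 118.5],
    [135.2, 137.0],
    [110.8, 112.1],
    [142.6, 140.9],
    [128.3, 127.5],
])

# Between-subject standard deviation of session-averaged scores.
sd = torque.mean(axis=1).std(ddof=1)

# Take the ICC as reported in the abstract rather than re-estimating it here.
icc = 0.949

# Standard error of measurement: SEM = SD * sqrt(1 - ICC).
sem = sd * np.sqrt(1.0 - icc)
print(f"SEM = {sem:.2f} Nm")
```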

Relevance: 100.00%

Abstract:

As important social stimuli, faces play a critical role in our lives. Much of our interaction with other people depends on our ability to recognize faces accurately. It has been proposed that face processing consists of different stages and interacts with other systems (Bruce & Young, 1986). At a perceptual level, the initial two stages, namely structural encoding and face recognition, are particularly relevant and are the focus of this dissertation. Event-related potentials (ERPs) are averaged EEG signals time-locked to a particular event (such as the presentation of a face). With their excellent temporal resolution, ERPs can provide important timing information about neural processes. Previous research has identified several ERP components that are especially related to face processing, including the N170, the P2 and the N250. Their nature with respect to the stages of face processing is still unclear, and is examined in Studies 1 and 2. In Study 1, participants made gender decisions on a large set of female faces interspersed with a few male faces. The ERP responses to facial characteristics of the female faces indicated that the N170 amplitude from each side of the head was affected by information from the eye region and by facial layout: the right N170 was affected by eye color and by face width, while the left N170 was affected by eye size and by the relation between the sizes of the top and bottom parts of a face. In contrast, the P100 and the N250 components were largely unaffected by facial characteristics. These results thus provided direct evidence for the link between the N170 and structural encoding of faces. In Study 2, focusing on the face recognition stage, we manipulated face identity strength by morphing individual faces to an "average" face. Participants performed a face identification task. The effect of face identity strength was found on the late P2 and the N250 components: as identity strength decreased from an individual face to the "average" face, the late P2 increased and the N250 decreased. In contrast, the P100, the N170 and the early P2 components were not affected by face identity strength. These results suggest that face recognition occurs after 200 ms, but not earlier. Finally, because faces are often associated with social information, we investigated in Study 3 how group membership might affect ERP responses to faces. After participants learned in- and out-group memberships of the face stimuli based on arbitrarily assigned nationality and university affiliation, we found that the N170 latency differentiated in-group and out-group faces, taking longer to process the latter. In comparison, without group memberships, there was no difference in N170 latency among the faces. This dissertation provides evidence that at a neural level, structural encoding of faces, indexed by the N170, occurs within 200 ms. Face recognition, indexed by the late P2 and the N250, occurs shortly afterwards between 200 and 300 ms. Social cognitive factors can also influence face processing. The effect is already evident as early as 130-200 ms at the structural encoding stage.
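Since the abstract defines ERPs as averaged EEG signals time-locked to an event, a minimal sketch of that averaging step may help; the data, epoch parameters and noise level below are synthetic, not the dissertation's recording pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

fs = 500                       # sampling rate (Hz), illustrative
n_trials, epoch_len = 80, 300  # 300 samples = 600 ms epochs

# Synthetic single-trial EEG: an N170-like negative deflection around
# 170 ms after stimulus onset, buried in noise on every trial.
t = np.arange(epoch_len) / fs
signal = -2.0 * np.exp(-((t - 0.170) ** 2) / (2 * 0.015 ** 2))
trials = signal + rng.normal(0, 3.0, size=(n_trials, epoch_len))

# The ERP is the mean across time-locked trials: noise averages toward
# zero while the event-locked component survives.
erp = trials.mean(axis=0)
print("Peak latency: %.0f ms" % (t[erp.argmin()] * 1000))
```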

Relevance: 100.00%

Abstract:

Objective To determine overall, test–retest and inter-rater reliability of posture indices among persons with idiopathic scoliosis. Design A reliability study using two raters and two test sessions. Setting Tertiary care paediatric centre. Participants Seventy participants aged between 10 and 20 years with different types of idiopathic scoliosis (Cobb angle 15 to 60°) were recruited from the scoliosis clinic. Main outcome measures Based on the XY co-ordinates of natural reference points (e.g. eyes) as well as markers placed on several anatomical landmarks, 32 angular and linear posture indices taken from digital photographs in the standing position were calculated using specially developed software. Generalisability theory served to estimate the reliability and standard error of measurement (SEM) for the overall, test–retest and inter-rater designs. Bland and Altman's method was also used to document agreement between sessions and raters. Results In the random design, dependability coefficients demonstrated a moderate level of reliability for six posture indices (ϕ = 0.51 to 0.72) and a good level of reliability for 26 of the 32 posture indices (ϕ ≥ 0.79). Error attributable to marker placement was negligible for most indices. Limits of agreement and SEM values were larger for shoulder protraction, trunk list, Q angle, cervical lordosis and scoliosis angles. The most reproducible indices were waist angles and knee valgus and varus. Conclusions Posture can be assessed in a global fashion from photographs in persons with idiopathic scoliosis. Despite the good reliability of marker placement, other studies are needed to minimise measurement errors in order to provide a suitable tool for monitoring change in posture over time.
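Bland and Altman's agreement analysis mentioned above reduces to a mean difference and its 1.96 SD limits; a minimal sketch on hypothetical paired ratings (not the study's data):

```python
import numpy as np

# Hypothetical posture-index values (degrees) from two raters.
rater1 = np.array([23.1, 18.4, 30.2, 25.7, 21.9, 27.3])
rater2 = np.array([24.0, 17.8, 31.1, 25.0, 22.5, 26.8])

diff = rater1 - rater2
bias = diff.mean()        # systematic difference between raters
sd = diff.std(ddof=1)     # spread of the disagreements

# 95% limits of agreement: bias +/- 1.96 SD of the differences.
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
print(f"bias = {bias:.2f} deg, limits of agreement = {lower:.2f} to {upper:.2f} deg")
```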

Relevance: 100.00%

Abstract:

Study Design. Reliability study. Objectives. To assess between-acquisition reliability of new multilevel trunk cross-section measurements, in order to define what constitutes a real change when comparing 2 trunk surface acquisitions of the same patient, before and after surgery or throughout clinical monitoring. Summary of Background Data. Several cross-sectional surface measurements have been proposed in the literature for noninvasive assessment of trunk deformity in patients with adolescent idiopathic scoliosis (AIS). However, only the maximum values along the trunk are evaluated and used for monitoring progression and assessing treatment outcome. Methods. Back surface rotation (BSR), trunk rotation (TR), and coronal and sagittal trunk deviation are computed on 300 cross sections of the trunk. Each set of 300 measures is represented as a single functional datum, using a set of basis functions. To evaluate between-acquisition variability at all trunk levels, a test-retest reliability study was conducted on 35 patients with AIS. A functional correlation analysis was also carried out to evaluate any redundancy between the measurements. Results. Each set of 300 measures was successfully described using only 10 basis functions. The test-retest reliability of the functional measurements is good to very good over the whole trunk, except above shoulder level. The typical errors of measurement are between 1.2° and 2.2° for the rotational measures and between 2 and 6 mm for the deviation measures. There is a very strong correlation between BSR and TR all over the trunk, a moderate correlation between coronal trunk deviation and both BSR and TR, and no correlation between sagittal trunk deviation and any other measurement. Conclusion. This novel representation of trunk surface measurements allows for a global assessment of trunk surface deformity. Multilevel trunk measurements provide a broader perspective of the trunk deformity and allow reliable multilevel monitoring during clinical follow-up of patients with AIS and a reliable assessment of the esthetic outcome after surgery.
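The compression step at the heart of the abstract, describing 300 per-level measures with about 10 basis functions, can be sketched as an ordinary least-squares projection onto a small basis; the Fourier-plus-trend basis below is an assumption for illustration, as the paper's basis family is not stated here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical back surface rotation (degrees) sampled at 300 trunk levels.
levels = np.linspace(0.0, 1.0, 300)
bsr = 5 * np.sin(2 * np.pi * levels) + 2 * np.cos(4 * np.pi * levels)
bsr += rng.normal(0, 0.5, size=levels.size)

# Design matrix of 10 basis functions: constant, linear trend, and
# low-order Fourier terms.
cols = [np.ones_like(levels), levels]
for k in range(1, 5):
    cols += [np.sin(2 * np.pi * k * levels), np.cos(2 * np.pi * k * levels)]
basis = np.column_stack(cols)             # shape (300, 10)

# Least-squares projection: 300 noisy measures -> 10 coefficients.
coef, *_ = np.linalg.lstsq(basis, bsr, rcond=None)
smooth = basis @ coef                     # functional representation at all levels
print("RMS residual: %.3f deg" % np.sqrt(np.mean((smooth - bsr) ** 2)))
```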

Relevance: 100.00%

Abstract:

Photothermal effect refers to heating of a sample due to the absorption of electromagnetic radiation. Photothermal (PT) heat generation, an example of energy conversion, has in general three kinds of applications: (1) PT material probing, (2) PT material processing and (3) PT material destruction. The temperatures involved increase from (1) to (3). Of the three, PT material probing is the most important in making a significant contribution to the field of science and technology. Photothermal material characterization relies on high sensitivity detection techniques to monitor the effects caused by PT heating of a sample. The photothermal method is a powerful, high sensitivity, non-contact tool used for non-destructive thermal characterization of materials. The high sensitivity of photothermal methods has led to their application to the analysis of low absorbance samples. Laser calorimetry, photothermal radiometry, the pyroelectric technique, the photoacoustic technique, the photothermal beam deflection technique, etc. come under the broad class of photothermal techniques. However, the choice of a suitable technique depends upon the nature of the sample, the purpose of measurement, the nature of the light source used, etc. The present investigations are done on polymer thin films employing the photothermal beam deflection technique, for the successful determination of their thermal diffusivity. Here the sample is excited by a He-Ne laser (λ = 6328 Å) which acts as the pump beam. Due to the refractive index gradient established at the sample surface and in the adjacent coupling medium, another optical beam, called the probe beam (diode laser, λ = 6500 Å), when passed through this region experiences a deflection and is detected using a position sensitive detector whose output is fed to a lock-in amplifier, from which the amplitude and phase of the deflection can be directly obtained. The amplitude and phase of the signal are suitably analysed to determine the thermal diffusivity. The production of polymer thin film samples has gained considerable attention over the past few years. Plasma polymerization is an inexpensive tool for fabricating organic thin films. It refers to the formation of polymeric materials under the influence of plasma, which is generated by some kind of electric discharge. Here plasma of the monomer vapour is generated by employing radio frequency (MHz) techniques. The plasma polymerization technique results in homogeneous, highly adhesive, thermally stable, pinhole free, dielectric, highly branched and cross-linked polymer films. The possible linkage in the formation of the polymers is suggested by comparing the FTIR spectra of the monomer and the polymer. Near-IR overtone investigations on some organic molecules using the local mode model are also done. Higher vibrational overtones often provide spectral simplification and greater resolution of peaks corresponding to nonequivalent X-H bonds, where X is typically C, N or O. Vibrational overtone spectroscopy of molecules containing X-H oscillators is now a well established tool for molecular investigations. Conformational and steric differences between bonds and structural inequivalence of CH bonds (methyl, aryl, acetylenic, etc.) are resolvable in the higher overtone spectra. The local mode model, in which the X-H oscillators are considered to be loosely coupled anharmonic oscillators, has been widely used for the interpretation of overtone spectra.
If we are exciting a single local oscillator from the vibrational ground state to the vibrational state v, then the transition energy of the local mode overtone is given by ΔE(0→v) = Av + Bv². A plot of ΔE/v versus v will yield A, the local mode frequency, as the intercept and B, the local mode diagonal anharmonicity, as the slope. Here A − B gives the mechanical frequency X₁ of the oscillator and B = X₂ is the anharmonicity of the bond. The local mode parameters X₁ and X₂ vary for non-equivalent X-H bonds and are sensitive to the inter- and intramolecular environment of the X-H oscillator.
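A minimal sketch of extracting the local mode parameters from overtone band positions via the ΔE/v-versus-v plot described above; the band positions below are invented for illustration:

```python
import numpy as np

# Hypothetical X-H overtone transition energies (cm^-1) for v = 1..5.
v = np.array([1, 2, 3, 4, 5], dtype=float)
dE = np.array([2980.0, 5840.0, 8580.0, 11200.0, 13700.0])

# Fit dE/v = A + B*v: intercept A is the local mode frequency term,
# slope B the diagonal anharmonicity.
B, A = np.polyfit(v, dE / v, 1)

X1 = A - B   # mechanical frequency of the oscillator
X2 = B       # anharmonicity of the bond (negative for a Morse-like bond)
print(f"A = {A:.1f} cm^-1, B = {B:.1f} cm^-1, X1 = {X1:.1f}, X2 = {X2:.1f}")
```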

Relevance: 100.00%

Abstract:

The main objective of this thesis is to design and develop spectral signature based chipless RFID tags. Multiresonators are an essential component of spectral signature based chipless tags. Enhancing the data coding capacity of spectral signature based tags requires a large number of resonances in a limited bandwidth, with resonant frequencies close to each other. To achieve this condition, the quality factor of each resonance needs to be high. The thesis discusses various types of multiresonators, their practical implementation and how they can be used in design. Encoding data into the spectral domain is another challenge in chipless tag design. Here, the technique used is presence or absence encoding: the presence of a resonance encodes Logic 1 and the absence of a specific resonance encodes Logic 0. Different types of multiresonators such as open stub multiresonators, coupled bunch hairpin resonators and shorted slot ground ring resonators are proposed in this thesis.
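A minimal sketch of the presence/absence scheme: decode a bit string from a measured frequency response by checking each predefined resonance slot for a notch. The slot frequencies, threshold and response values are all illustrative, not taken from the thesis:

```python
import numpy as np

# Predefined resonance slots (GHz) that the tag may or may not implement.
slots = np.array([2.0, 2.5, 3.0, 3.5, 4.0, 4.5])

# Hypothetical measured insertion loss (dB) at each slot frequency; a deep
# notch means the corresponding resonator is present on the tag.
response_db = np.array([-18.0, -2.5, -15.2, -3.1, -2.8, -20.4])

THRESHOLD_DB = -10.0  # notch deeper than this counts as "resonance present"

# Presence -> Logic 1, absence -> Logic 0.
bits = (response_db < THRESHOLD_DB).astype(int)
print("decoded ID:", "".join(map(str, bits)))   # -> 101001
```

A real reader would also have to tolerate frequency drift within each slot, which is one reason the thesis stresses high quality factors for closely spaced resonances.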

Relevance: 100.00%

Abstract:

Agriculture plays a central role in the Earth system. It contributes to the greenhouse effect through emissions of CO2, CH4 and N2O, can cause soil degradation and eutrophication, can alter regional water cycles, and will itself be strongly affected by climate change. Since all these processes are closely linked through the underlying nutrient and water fluxes, they should be treated within a consistent modelling framework. Until recently, however, a lack of data and insufficient process understanding have prevented this at the global scale. This thesis presents the first version of such a consistent global modelling framework, with emphasis on the simulation of agricultural yields and the resulting N2O emissions. This emphasis was chosen because a correct representation of plant growth is an essential prerequisite for simulating all other processes, and because current and potential agricultural yields are important driving forces of land-use change and will be strongly affected by climate change. The second focus is the estimation of agricultural N2O emissions, since no process-based N2O model had previously been applied at the global scale. The existing agroecosystem model Daycent was chosen as the basis for the global modelling. In addition to creating the simulation environment, the required global data sets for soil parameters, climate and agricultural management were first compiled. Since no global database of planting dates is yet available, and since these will shift under climate change, a routine for computing planting dates was developed. The results agree well with the FAO crop calendars that are available for some crops and countries. The Daycent model was then parameterised and calibrated for yield simulation of wheat, rice, maize, soybean, millet, pulses, potato, cassava and cotton. The simulation results show that Daycent correctly captures the most important climate, soil and management effects on yield formation. Computed country averages agree well with FAO data (R² = 0.66 for wheat, rice and maize; R² = 0.32 for soybean), and spatial yield patterns largely correspond to the observed distribution of crops and to subnational statistics. The modelling of agricultural N2O emissions with Daycent was preceded by a statistical analysis of N2O and NO emission measurements from natural and agricultural ecosystems. The parameters identified as significant for N2O (fertilizer amount, soil carbon content, soil pH, texture, crop type, fertilizer type) and for NO (fertilizer amount, soil nitrogen content, climate) largely correspond to the results of an earlier analysis. For emissions from soils under natural vegetation, for which no such statistical study previously existed, soil carbon content, soil pH, bulk density, drainage and vegetation type have a significant influence on N2O emissions, while NO emissions depend significantly on soil carbon content and vegetation type. Based on the statistical models derived from these results, global emissions from arable soils amount to 3.3 Tg N/y for N2O and 1.4 Tg N/y for NO.
Such statistical models are useful for computing estimates and uncertainty ranges of N2O and NO emissions from a large number of measurements. The dynamics of soil nitrogen, influenced in particular by plant growth, climate change and land-use change, can however only be captured by applying process-oriented models. For modelling N2O emissions with Daycent, its trace gas module was first extended with a more detailed calculation of nitrification and denitrification and with the treatment of freeze-thaw emissions. This revised model version was then tested against N2O emission measurements under various climates and crops. Both the dynamics and the totals of the N2O emissions are reproduced satisfactorily, with model efficiencies for monthly means between 0.1 and 0.66 for most sites. Based on the revised model version, N2O emissions were computed for the previously parameterised crops. Emission rates and crop-specific differences largely agree with values reported in the literature. Fertilizer-induced emissions, currently estimated by the IPCC at 1.25 +/- 1% of the applied fertilizer amount, range from 0.77% (rice) to 2.76% (maize). The sum of the computed emissions from agricultural soils amounts to 2.1 Tg N2O-N/y for the mid-1990s, which agrees with estimates from other studies.
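The fertilizer-induced emission factors quoted above follow the usual definition, emissions from a fertilized plot minus those from an unfertilized control, divided by the nitrogen applied; a minimal sketch with invented numbers:

```python
# Hypothetical annual N2O-N fluxes (kg N/ha/y) from paired plots.
n2o_fertilized = 2.9   # plot receiving mineral fertilizer
n2o_control = 1.1      # unfertilized control plot
n_applied = 140.0      # fertilizer N applied (kg N/ha/y)

# Fertilizer-induced emission factor, as a percentage of applied N
# (the quantity the IPCC default of 1.25 +/- 1% refers to).
ef = 100.0 * (n2o_fertilized - n2o_control) / n_applied
print(f"EF = {ef:.2f}% of applied N")   # -> 1.29%
```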

Relevance: 100.00%

Abstract:

We present an unsupervised learning algorithm that acquires a natural-language lexicon from raw speech. The algorithm is based on the optimal encoding of symbol sequences in an MDL framework, and uses a hierarchical representation of language that overcomes many of the problems that have stymied previous grammar-induction procedures. The forward mapping from symbol sequences to the speech stream is modeled using features based on articulatory gestures. We present results on the acquisition of lexicons and language models from raw speech, text, and phonetic transcripts, and demonstrate that our algorithm compares very favorably to other reported results with respect to segmentation performance and statistical efficiency.
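A minimal sketch of the MDL idea behind the algorithm: score a candidate lexicon by the cost of spelling out the lexicon plus the cost of the corpus encoded as lexicon entries. This is a toy character-level version for illustration, not the paper's actual coding scheme:

```python
import math
from collections import Counter

def description_length(corpus: str, lexicon: list) -> float:
    """Bits to encode the lexicon (spelled out letter by letter) plus the
    corpus written as a sequence of lexicon entries."""
    # Lexicon cost: ~5 bits per character, plus one terminator per word.
    lex_cost = sum(5 * (len(w) + 1) for w in lexicon)

    # Greedy left-to-right parse of the corpus into lexicon entries.
    tokens, i = [], 0
    while i < len(corpus):
        match = max((w for w in lexicon if corpus.startswith(w, i)),
                    key=len, default=None)
        if match is None:
            return math.inf          # lexicon cannot parse the corpus
        tokens.append(match)
        i += len(match)

    # Corpus cost: Shannon code lengths from token frequencies.
    counts, total = Counter(tokens), len(tokens)
    corp_cost = -sum(c * math.log2(c / total) for c in counts.values())
    return lex_cost + corp_cost

corpus = "thedogthecatthedog"
print(description_length(corpus, ["the", "dog", "cat"]))  # word lexicon wins
print(description_length(corpus, list("thedogcat")))      # letters only: costlier
```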

Relevance: 100.00%

Abstract:

Several methods have been suggested to estimate non-linear models with interaction terms in the presence of measurement error. Structural equation models eliminate measurement error bias, but require large samples. Ordinary least squares regression on summated scales, regression on factor scores and partial least squares are appropriate for small samples but do not correct measurement error bias. Two-stage least squares regression does correct measurement error bias, but the results depend strongly on the choice of instrumental variables. This article discusses the old disattenuated regression method as an alternative for correcting measurement error in small samples. The method is extended to the case of interaction terms and is illustrated on a model that examines the interaction effect of innovation and style of use of budgets on business performance. Alternative reliability estimates that can be used to disattenuate the estimates are discussed, and a comparison is made with the alternative methods. Methods that do not correct for measurement error bias perform very similarly, and considerably worse than disattenuated regression.
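A minimal sketch of the classical disattenuation the article builds on: divide the observed correlation by the square root of the product of the scale reliabilities. The numbers are hypothetical, and the article's extension to interaction terms is more involved than this bivariate case:

```python
import math

# Hypothetical observed correlation between two summated scales and
# their reliability estimates (e.g., Cronbach's alpha).
r_xy_observed = 0.42
rel_x = 0.80
rel_y = 0.75

# Classical correction for attenuation due to measurement error.
r_xy_true = r_xy_observed / math.sqrt(rel_x * rel_y)
print(f"disattenuated correlation = {r_xy_true:.3f}")   # -> 0.542
```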

Relevance: 100.00%

Abstract:

Speaker(s): Jon Hare. Time: 25/06/2014 11:00-11:50. Location: B32/3077. Abstract: The aggregation of items from social media streams, such as Flickr photos and Twitter tweets, into meaningful groups can help users contextualise and effectively consume the torrents of information on the social web. This task is challenging due to the scale of the streams and the inherently multimodal nature of the information being contextualised. In this talk I'll describe some of our recent work on trend and event detection in multimedia data streams. We focus on scalable streaming algorithms that can be applied to multimedia data streams from the web and the social web. The talk will cover two particular aspects of our work: mining Twitter for trending images by detecting near duplicates; and detecting social events in multimedia data with streaming clustering algorithms. I'll describe our techniques in detail, and explore open questions and areas of potential future work in both these tasks.
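A minimal sketch of one generic way to detect near-duplicate images at stream scale, an average-hash fingerprint compared by Hamming distance; this is a standard technique shown for illustration, not necessarily the method presented in the talk:

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """64-bit perceptual fingerprint: shrink to size x size by block
    averaging, then threshold each cell against the global mean."""
    h, w = gray.shape
    small = gray[: h - h % size, : w - w % size]
    small = small.reshape(size, h // size, size, w // size).mean(axis=(1, 3))
    return (small > small.mean()).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

rng = np.random.default_rng(2)
img = rng.random((128, 128))
near_dup = np.clip(img + rng.normal(0, 0.02, img.shape), 0, 1)  # noisy copy
other = rng.random((128, 128))                                  # unrelated image

# Near duplicates land within a few bits; unrelated images differ widely.
print(hamming(average_hash(img), average_hash(near_dup)))  # small
print(hamming(average_hash(img), average_hash(other)))     # ~32 on average
```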

Relevance: 100.00%

Abstract:

Purpose: To examine the interrater reliability of the Alberta Infant Motor Scale (AIMS) in term and preterm born infants between 10 and 16 months of age from Talca province, Maule Region, Chile. Subjects: 115 infants between 10 and 16 months of age were incorporated into the study; 95 term born infants were attended at the local health centre in Talca city, and 20 preterm infants belonged to the Premature Infants Follow-Up Programme of Talca Regional Hospital. Methods: The motor behaviour of each infant was recorded and later assessed by two trained assessors using the AIMS. Total AIMS scores were obtained, as well as scores for the prone, supine, sitting and standing subscales. Interrater reliability was analysed using the intraclass correlation coefficient (ICC), the standard error of measurement (SEM) and 95% limits of agreement. Results: The ICCs obtained for total AIMS scores were greater than 0.94 (p<0.0002) for term and preterm born infants. The SEM of total scores was less than 3.1 points, higher than what was found in other similar studies. The 95% limits of agreement were +5.3 to -4.1 points and +7.7 to -3.9 points in term and preterm born infants, respectively, indicating interrater agreement. Conclusion: The AIMS showed adequate interrater reliability when applied to Chilean term and preterm born infants from 10 to 16 months of age.

Relevance: 100.00%

Abstract:

The atmospheric composition of the central North Atlantic region has been sampled using the FAAM BAe146 instrumented aircraft during the Intercontinental Transport of Ozone and Precursors (ITOP) campaign, part of the wider International Consortium for Atmospheric Research on Transport and Transformation (ICARTT). This paper presents an overview of the ITOP campaign. Between late July and early August 2004, twelve flights comprising 72 hours of measurement were made in a region from approximately 20 to 40°W and 33 to 47°N centered on Faial Island, Azores, ranging in altitude from 50 to 9000 m. The vertical profiles of O3 and CO are consistent with previous observations made in this region during 1997 and our knowledge of the seasonal cycles within the region. A cluster analysis technique is used to partition the data set into air mass types with distinct chemical signatures. Six clusters provide a suitable balance between cluster generality and specificity. The clusters are labeled as biomass burning, low level outflow, upper level outflow, moist lower troposphere, marine and upper troposphere. During this summer, boreal forest fire emissions from Alaska and northern Canada were found to provide a major perturbation of tropospheric composition in CO, PAN, organic compounds and aerosol. Anthropogenically influenced air from the continental boundary layer of the USA was clearly observed running above the marine boundary layer right across the mid-Atlantic, retaining high pollution levels in VOCs and sulfate aerosol. Upper level outflow events were found to have far lower sulfate aerosol, resulting from washout on ascent, but much higher PAN associated with the colder temperatures. Lagrangian links with flights of other aircraft over the USA and Europe show that such signatures are maintained many days downwind of emission regions. Some other features of the data set are highlighted, including the strong perturbations to many VOCs and OVOCs in this remote region.
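A minimal sketch of the kind of cluster analysis described, partitioning samples into air mass types by chemical signature with k-means; the tracer values are synthetic, and the campaign's actual feature set and clustering algorithm are not specified here:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)

# Hypothetical per-sample tracers: columns = O3 (ppbv), CO (ppbv), PAN (pptv).
clean_marine = rng.normal([35, 80, 50], [5, 10, 20], size=(200, 3))
biomass_burning = rng.normal([70, 250, 600], [10, 40, 150], size=(200, 3))
upper_outflow = rng.normal([80, 110, 800], [10, 15, 200], size=(200, 3))
X = np.vstack([clean_marine, biomass_burning, upper_outflow])

# Standardise so no tracer dominates the distance metric, then cluster.
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(
    StandardScaler().fit_transform(X))
print(np.bincount(labels))   # roughly 200 samples per recovered air mass type
```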

Relevance: 100.00%

Abstract:

Mega-scale glacial lineations (MSGLs) are longitudinally aligned corrugations (ridge-groove structures 6-100 km long) in sediment produced subglacially. They are indicators of fast flow and a common signature of ice-stream beds. We develop a qualitative theory that accounts for their formation, and use numerical modelling and observations of ice-stream beds to provide supporting evidence. Ice in contact with a rough (scale of 10-10³ m) bedrock surface will mimic the form of the bed. Because of flow acceleration and convergence in ice-stream onset zones, the ice-base roughness elements experience transverse strain, transforming them from irregular bumps into longitudinally aligned keels of ice protruding downwards. Where such keels slide across a soft sedimentary bed, they plough through the sediments, carving elongate grooves and deforming material up into intervening ridges. This explains MSGLs and has important implications for ice-stream mechanics. Groove ploughing provides the means to acquire new lubricating sediment and to transport large volumes of it downstream. Keels may provide basal drag in the force budget of ice streams, thereby playing a role in flow regulation and stability. We speculate that groove ploughing permits significant ice-stream widening, thus facilitating high-magnitude ice discharge.

Relevance: 100.00%

Abstract:

A collection of 24 seawaters from various worldwide locations and differing depths was compiled to measure their chlorine isotopic composition (δ³⁷Cl). These samples cover all the oceans and large seas: Atlantic, Pacific, Indian and Antarctic oceans, Mediterranean and Red seas. This collection includes nine seawaters from three depth profiles down to 4560 mbsl. The standard deviation (2σ) of the δ³⁷Cl of this collection is ±0.08‰, which is in fact as large as our precision of measurement (±0.10‰). Thus, within error, oceanic waters seem to be a homogeneous reservoir. According to our results, any seawater could be representative of Standard Mean Ocean Chloride (SMOC) and could be used as a reference standard. An extended international cross-calibration over a large range of δ³⁷Cl has been completed. For this purpose, geological fluid samples of various chemical compositions and a manufactured CH3Cl gas sample, with δ³⁷Cl from about -6‰ to +6‰, have been compared. Data were collected by gas source isotope ratio mass spectrometry (IRMS) at the Paris, Reading and Utrecht laboratories and by thermal ionization mass spectrometry (TIMS) at the Leeds laboratory. Comparison of IRMS values over the range -5.3‰ to +1.4‰ plots on the Y = X line, showing very good agreement between the three laboratories. On 11 samples, the trend line between the Paris and Reading laboratories is: δ³⁷Cl(Reading) = (1.007 ± 0.009) δ³⁷Cl(Paris) - (0.040 ± 0.025), with a correlation coefficient R² = 0.999. TIMS values from Leeds University have been compared to IRMS values from Paris University over the range -3.0‰ to +6.0‰. On six samples, the agreement between these two laboratories, using different techniques, is good: δ³⁷Cl(Leeds) = (1.052 ± 0.038) δ³⁷Cl(Paris) + (0.058 ± 0.099), with a correlation coefficient R² = 0.995. The present study completes a previous cross-calibration between the Leeds and Reading laboratories to compare TIMS and IRMS results (Anal. Chem. 72 (2000) 2261). Both studies allow a comparison of IRMS and TIMS techniques for δ³⁷Cl values from -4.4‰ to +6.0‰ and show good agreement: δ³⁷Cl(TIMS) = (1.039 ± 0.023) δ³⁷Cl(IRMS) + (0.059 ± 0.056), with a correlation coefficient R² = 0.996. Our study shows that, for fluid samples, if chlorine isotopic compositions are near 0‰, their measurements either by IRMS or TIMS will give comparable results within less than ±0.10‰, while for δ³⁷Cl values as far as 10‰ (either positive or negative) from SMOC, both techniques will agree within less than ±0.30‰.
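A minimal sketch of the laboratory cross-calibration fit, regressing one laboratory's δ³⁷Cl values on another's and checking against the ideal Y = X line; the paired values below are invented for illustration:

```python
import numpy as np

# Hypothetical paired delta37Cl measurements (per mil) of the same samples.
lab_a = np.array([-5.3, -3.1, -1.2, 0.0, 0.8, 1.4])
lab_b = np.array([-5.4, -3.0, -1.3, 0.1, 0.8, 1.5])

# Least-squares trend line: lab_b = slope * lab_a + intercept.
slope, intercept = np.polyfit(lab_a, lab_b, 1)

# R^2 from the correlation coefficient of the paired values.
r2 = np.corrcoef(lab_a, lab_b)[0, 1] ** 2

# A slope near 1 and an intercept near 0 indicate agreement with Y = X.
print(f"slope = {slope:.3f}, intercept = {intercept:.3f}, R^2 = {r2:.4f}")
```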

Relevance: 100.00%

Abstract:

Victoria Island lies at the north-western extremity of the region covered by the vast North American Laurentide Ice Sheet (LIS) in the Canadian Arctic Archipelago. This area is significant because it linked the interior of the LIS to the Arctic Ocean, probably via a number of ice streams. Victoria Island, however, exhibits a remarkably complex glacial landscape, with several successive generations of ice flow indicators superimposed on top of each other, often at abrupt (90 degrees) angles. This complexity represents a major challenge to those attempting to produce a detailed reconstruction of the glacial history of the region. This paper presents a map of the glacial geomorphology of Victoria Island. The map is based on analysis of Landsat Enhanced Thematic Mapper Plus (ETM+) satellite imagery and contains over 58,000 individual glacial features, which include: glacial lineations, moraines (terminal, lateral, subglacial shear margin), hummocky moraine, ribbed moraine, eskers, glaciofluvial deposits, large meltwater channels, and raised shorelines. The glacial features reveal marked changes in ice flow direction and vigour over time. Moreover, the glacial geomorphology indicates a non-steady withdrawal of ice during deglaciation, with rapidly flowing ice streams focussed into the inter-island troughs and several successively younger flow patterns superimposed on older ones. It is hoped that detailed analysis of this map will lead to an improved reconstruction of the glacial history of this area, which will provide other important insights, for example with respect to the interactions between ice streaming, deglaciation and Arctic Ocean meltwater events.