995 results for Depth of anesthesia


Relevance: 90.00%

Abstract:

Increasing the organic matter (OM) content of the soil is a main goal of arable soil management. The adoption of tillage systems with reduced tillage depth and/or frequency (reduced tillage), or of no-tillage, has been found to increase the concentration of soil OM compared to conventional tillage (CT; ploughing to 20-30 cm). However, the underlying processes are not yet clear and remain under debate. So far, few investigations have been conducted on tillage systems with a shallow tillage depth (minimum tillage = MT; maximum tillage depth of 10 cm). A better understanding of the interactions between MT implementation and changes in OM transformation in soils is essential in order to evaluate the possible contribution of MT to the sustainable management of arable soils. The objectives of the present thesis were (i) to compare OM concentrations, microbial biomass, water-stable aggregates, and particulate OM (POM) between CT and MT soils, (ii) to estimate the temporal variability of water-stable aggregate size classes occurring in the field and the dynamics of macroaggregate (>250 µm) formation and disruption under controlled conditions, (iii) to investigate whether a lower disruption rate or a higher formation rate accounts for the higher occurrence of macroaggregates under MT compared to CT, (iv) to determine which fraction is the major agent for storing the surplus of OM found under MT compared to CT, and (v) to observe the early OM transformation after residue incorporation in different simulated tillage systems. Two experimental sites (Garte-Süd and Hohes Feld) near Göttingen, Germany, were investigated. The soil type of both sites was a Haplic Luvisol. For about 40 years, both sites have received MT by a rotary harrow (to 5-8 cm depth) and CT by a plough (to 25 cm depth). 
Surface soils (0-5 cm) and subsoils (10-20 cm) from two sampling dates (after fallow and directly after tillage) were investigated for concentrations of organic C (Corg) and total N (N), different water-stable aggregate size classes, different density fractions (for the sampling date after fallow only), microbial biomass, and biochemically stabilized Corg and N (by acid hydrolysis; for the sampling date after tillage only). In addition, two laboratory incubations were performed under controlled conditions. Firstly, MT and CT soils were incubated (28 days at 22°C) as bulk soil and with destroyed macroaggregates in order to estimate the importance of macroaggregates for the physical protection of very labile OM against mineralization. Secondly, in a microcosm experiment simulating MT and CT systems with soil <250 µm and with 15N- and 13C-labelled maize straw incorporated to different depths, the mineralization, the formation of new macroaggregates, and the partitioning of the recently added C and N were followed (28 days at 15°C). Forty years of the MT regime led to higher concentrations of microbial biomass, Corg, and N compared to CT, especially in the surface soil. After fallow and directly after tillage, a higher proportion of water-stable macroaggregates rich in OM was found in the MT (36% and 66%, respectively) than in the CT (19% and 47%, respectively) surface soils of both sites (data shown are for the site Garte-Süd only). The subsoils followed the same trend. For the sampling date after fallow, no differences in the POM fractions were found, but more OM associated with the mineral fraction was detected in the MT soils. A large temporal variability was observed in the abundance of macroaggregates. In the field and in the microcosm simulations, macroaggregates had a higher formation rate after the incorporation of residues under MT than under CT. 
Thus, the lower occurrence of macroaggregates in CT soils cannot be attributed to a higher disruption rate but to a lower formation rate. The higher rate of macroaggregate formation in MT soils may be due to (i) the more concentrated input of residues in the surface soil and/or (ii) a higher abundance of fungal biomass compared to CT soils. Overall, water-stable macroaggregates were found to play a key role as the location of storage of the surplus of OM detected under MT compared to CT. In the incubation experiment, macroaggregates were not found to protect the very labile OM against mineralization. Nevertheless, the surplus of OM detected after tillage in the MT soil was biochemically degradable. MT simulations in the microcosm experiment showed a lower specific respiration and a less efficient translocation of recently added residues than the CT simulations. Differences in the early processes of OM translocation between CT and MT simulations were attributed to a higher residue-to-soil ratio and a higher proportion of fungal biomass in the MT simulations. Overall, MT was found to have several beneficial effects on soil structure and on the storage of OM, especially in the surface soil. Furthermore, it was concluded that the high concentration of residues in the surface soil under MT may alter the processes of storage and decomposition of OM. Further investigations should emphasise analysis of the residue-soil interface and of the effects of the depth of residue incorporation. Moreover, further evidence is needed on differences in the microbial community between CT and MT soils.

Relevance: 90.00%

Abstract:

In psycholinguistic research it is widely assumed that evaluating information for its truth or plausibility (epistemic validation; Richter, Schroeder & Wöhrmann, 2009) is a strategic, optional process that follows comprehension (e.g., Gilbert, 1991; Gilbert, Krull & Malone, 1990; Gilbert, Tafarodi & Malone, 1993; Herbert & Kübler, 2011). A growing number of studies, however, directly or indirectly challenge this two-step model of comprehension and validation. In particular, findings on Stroop-like stimulus-response compatibility effects, which arise when positive and negative responses must be given orthogonally to the task-irrelevant truth value of sentences (e.g., a positive response after reading a false sentence, or a negative response after reading a true sentence; the epistemic Stroop effect, Richter et al., 2009), suggest that readers non-strategically check the validity of information already during comprehension. Building on these findings, the aim of this dissertation was to further test the assumption that comprehension involves a non-strategic, routine, knowledge-based validation process (epistemic monitoring; Richter et al., 2009). To this end, three empirical studies with different emphases were conducted. Study 1 examined whether evidence of epistemic monitoring can also be found for information that is not unambiguously true or false but merely more or less plausible. Using the epistemic Stroop paradigm of Richter et al. (2009), a compatibility effect of task-irrelevant plausibility on the latencies of positive and negative responses was demonstrated in two different experimental tasks, indicating that epistemic monitoring is also sensitive to gradual differences in how well information agrees with world knowledge. Moreover, the results show that the epistemic Stroop effect is indeed based on plausibility and not on differences in the predictability of plausible and implausible information. Study 2 tested the hypothesis that epistemic monitoring does not require an evaluative mindset. In contrast to findings by other authors (Wiswede, Koranyi, Müller, Langner, & Rothermund, 2013), this study showed a compatibility effect of task-irrelevant truth value on response latencies in an entirely non-evaluative task. The results suggest that epistemic monitoring does not depend on an evaluative mindset, but possibly on the depth of processing. Study 3 examined the relationship between comprehension and validation by investigating online effects of plausibility and predictability on eye movements during the reading of short texts. In addition, the potential modulation of these effects by epistemic markers that signal the certainty of information (e.g., certainly or perhaps) was examined. Consistent with the assumption of a fast, non-strategic epistemic monitoring process, interactive effects of plausibility and the presence of epistemic markers emerged on indicators of early comprehension processes, suggesting that the communicated certainty of information is taken into account by the monitoring process. 
Overall, the findings argue against conceptualising comprehension and validation as non-overlapping stages of information processing. Rather, an evaluation of truth or plausibility based on world knowledge appears to be, at least to some extent, an obligatory and non-strategic component of language comprehension. Implications of the findings for current models of language comprehension and recommendations for further research on the relationship between comprehension and validation are outlined.

Relevance: 90.00%

Abstract:

This research project focuses on the contemporary eagle-taming falconry practice of the Altaic Kazakh animal-herding society in Bayan Ulgii Province in Western Mongolia. It aims to contribute both theoretical and empirical criteria for the cultural preservation of Asian falconry. This cultural as well as environmental discourse is illustrated with concentrated field research framed by ecological anthropology and ethno-ornithology from the viewpoint of “Human-Animal Interaction (HAI)” and “Human-Animal Behavior (HAB)”. Part I (Chapters 2 & 3) explores ethno-archaeological and ethno-ornithological dimensions through interpretive research on archaeological artefacts which trace the historical depth of Asian falconry culture. Part II (Chapters 4 & 5) provides an extensive ethnographic narrative of Altaic Kazakh falconry, which is the central part of this research project. The “Traditional Art and Knowledge (TAK)” in human-raptor interactions, comprising the entire cycle of capture, perch, feeding, training, hunting, and release, is presented with specific emphasis on its relation to the environmental and societal context. Traditional falconry as an integral part of a nomadic lifestyle nowadays faces critical problems, and the complete disappearance of this outstanding indigenous cultural heritage must be prevented. Part III (Chapters 6 & 7) thus focuses on the cultural sustainability of Altaic Kazakh falconry. Changing livelihoods, sedentarisation, and decontextualisation are identified as major threats. The role of Golden Eagle Festivals is critically analysed with regard to positive and negative impacts. This part also intends to contribute to the academic definition of eagle falconry as an intangible cultural heritage, to provide scientific criteria for a preservation master plan, and to stimulate local resilience by pointing to the successive actions needed for conservation. 
This research project concludes that the cultural sustainability of Altaic Kazakh falconry needs to be supported from the angles of three theoretical frameworks: (1) cultural affairs for protection based on the concept of nature-guardianship in its cultural domain, (2) sustainable development and improvement of animal-herding productivity and herders' livelihoods, and (3) natural resource management, especially supporting the population of Golden Eagles, their potential prey animals, and their nesting environment.

Relevance: 90.00%

Abstract:

This paper uses data from 1,338 rural households in the Northern Mountainous Region of Vietnam to examine the extent to which subsidised credit targets the poor, and its impacts. Principal Component Analysis and Propensity Score Matching were used to evaluate the depth of outreach and the income impact of credit. To address the problem of model uncertainty, Bayesian Model Averaging applied to the probit model was used. Results showed that subsidised credit successfully targeted poor households, with 24.10% and 69.20% of clients falling into the poorest group and the three bottom groups, respectively. Moreover, those who received subsidised credit make up 83% of ethnic minority households. These results indicate that governmental subsidies are necessary to reach poor and low-income households, who need capital but are normally bypassed by commercial banks. Analyses also showed that the ethnicity and age of household heads, the number of helpers, savings, and how strongly households were affected by shocks were all factors that further explained the probability of accessing subsidised credit. Furthermore, recipients obtained a 2.61% higher total income and a 5.93% higher farm income compared to non-recipients. However, these small effects are statistically insignificant at the 5% level. Although subsidised credit is insufficient to significantly improve the income of poor households, it may prevent these households from becoming even poorer.
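
The matching step described above can be sketched as follows. This is a minimal illustration of propensity score matching on synthetic data, not the authors' implementation; the single covariate, the effect size, and the logistic propensity model are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: covariate x raises both the probability of receiving
# credit (t) and income, so a naive treated-vs-control comparison is confounded.
n = 500
x = rng.normal(size=n)
t = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.8 * x - 0.2))))
income = 2.0 + 0.5 * x + 0.3 * t + rng.normal(scale=0.1, size=n)

# 1. Estimate propensity scores with a tiny logistic regression.
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(3000):
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    beta += 0.1 * X.T @ (t - p) / n          # gradient ascent on the log-likelihood
pscore = 1.0 / (1.0 + np.exp(-X @ beta))

# 2. Match each treated household to the nearest control on the score.
treated = np.where(t == 1)[0]
control = np.where(t == 0)[0]
matches = control[np.argmin(np.abs(pscore[treated, None] - pscore[None, control]), axis=1)]

# 3. Average treatment effect on the treated (ATT).
att = float(np.mean(income[treated] - income[matches]))
print(f"estimated ATT: {att:.2f}")
```

Matching on the score balances the confounder, so the estimate lands near the simulated effect of 0.3 rather than the inflated naive difference.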

Relevance: 90.00%

Abstract:

In the absence of cues for absolute depth measurement, such as binocular disparity, motion, or defocus, the absolute distance between the observer and a scene cannot be measured. The interpretation of shading, edges, and junctions may provide a 3D model of the scene, but it will not inform about the actual "size" of the space. One possible source of information for absolute depth estimation is the image size of known objects. However, this is computationally complex owing to the difficulty of the object recognition process. Here we propose a source of information for absolute depth estimation that does not rely on specific objects: we introduce a procedure for absolute depth estimation based on the recognition of the whole scene. The shape of the space of the scene and the structures present in the scene are strongly related to the scale of observation. We demonstrate that, by recognizing the properties of the structures present in the image, we can infer the scale of the scene, and therefore its absolute mean depth. We illustrate the usefulness of computing the mean depth of the scene with applications to scene recognition and object detection.
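
A minimal sketch of the idea, assuming a scene can be summarised by a global feature vector and that example scenes with known mean depths are available for comparison; the feature vectors, depth values, and kernel width below are invented for illustration.

```python
import numpy as np

# Toy global feature vectors for scenes with known mean depths (metres).
# Both the features and the depth values are invented for illustration.
train_features = np.array([
    [0.9, 0.1, 0.0],   # close-up / desk scene
    [0.4, 0.5, 0.1],   # indoor room
    [0.1, 0.3, 0.6],   # street
    [0.0, 0.1, 0.9],   # open landscape
])
train_log_depth = np.log([0.5, 4.0, 20.0, 300.0])

def mean_depth(feature):
    """Infer absolute mean depth from a global scene feature by
    distance-weighted regression in log-depth space."""
    d2 = np.sum((train_features - feature) ** 2, axis=1)
    w = np.exp(-d2 / 0.1)
    return float(np.exp(np.sum(w * train_log_depth) / np.sum(w)))

# A street-like feature yields a street-scale depth estimate; a
# landscape-like feature yields a much larger one.
print(mean_depth(np.array([0.1, 0.3, 0.6])))
print(mean_depth(np.array([0.0, 0.1, 0.9])))
```

Working in log-depth space reflects the fact that scene scale varies over several orders of magnitude, from desktops to landscapes.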

Relevance: 90.00%

Abstract:

Polydimethylsiloxane (PDMS) is the elastomer of choice for creating a variety of microfluidic devices by soft lithography techniques (e.g., [1], [2], [3], [4]). Accurate and reliable design, manufacture, and operation of microfluidic devices made from PDMS require a detailed characterization of the deformation and failure behavior of the material. This paper discusses progress in a recently initiated research project towards this goal. We have conducted large-deformation tension and compression experiments on traditional macroscale specimens, as well as microscale tension experiments on thin-film (≈ 50 µm thickness) specimens of PDMS with varying ratios of monomer to curing agent (5:1, 10:1, 20:1). We find that the stress-stretch response of these materials shows significant variability, even for nominally identically prepared specimens. A non-linear, large-deformation rubber-elasticity model [5], [6] is applied to represent the behavior of PDMS. The constitutive model has been implemented in a finite-element program [7] to aid the design of microfluidic devices made from this material. As a first attempt towards the goal of estimating the non-linear material parameters for PDMS from indentation experiments, we have conducted micro-indentation experiments using a spherical indenter tip, and carried out corresponding numerical simulations to verify how well the numerically predicted load versus indentation-depth (P-h) curves compare with the corresponding experimental measurements. The results are encouraging, and show the possibility of estimating the material parameters for PDMS from relatively simple micro-indentation experiments and corresponding numerical simulations.
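
As an illustration of fitting a rubber-elasticity model to tension data, the sketch below fits the single parameter (shear modulus) of an incompressible neo-Hookean model, which is simpler than the model of [5], [6]; the stress-stretch "data" and the modulus value it recovers are synthetic, not measurements from the paper.

```python
import numpy as np

# Uniaxial nominal stress of an incompressible neo-Hookean solid:
#   S(lam) = mu * (lam - lam**-2)
# where lam is the stretch and mu the shear modulus.
lam = np.linspace(1.0, 2.0, 11)            # stretch
S_meas = 0.25e6 * (lam - lam**-2)          # synthetic "data" in Pa

# Closed-form least-squares fit of the single parameter mu.
basis = lam - lam**-2
mu = float(np.sum(basis * S_meas) / np.sum(basis**2))
print(f"fitted shear modulus: {mu / 1e6:.2f} MPa")
```

Because the model is linear in mu, the fit reduces to one projection; multi-parameter models like Gent or Arruda-Boyce would require an iterative nonlinear least-squares solve instead.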

Relevance: 90.00%

Abstract:

Introduction: The choice of anaesthetic technique for any surgical procedure should be based on its safety, the speed of its administration, optimal patient recovery, and the minimisation of side effects. Spinal anaesthesia is a technique that can be used with good clinical results and minimal complications. Materials and methods: An observational study with prospective data collection was conducted in women classified as ASA I-II who subsequently underwent obstetric uterine curettage for a non-viable pregnancy during the first 12 weeks of gestation; the anaesthetic technique, spinal anaesthesia or intravenous general anaesthesia, depended on the choice made by the anaesthesiologist before the procedure. Haemodynamic variables, postoperative pain control, recovery time, and perioperative complications were measured in order to determine whether there were significant differences between these two anaesthetic techniques. Results: A total of 110 patients were included, 63.6% (n=70) with general anaesthesia and 36.4% (n=40) with spinal anaesthesia. The two populations were comparable. Fewer side effects occurred with the spinal technique, and there was a statistically significant difference in pain in favour of spinal anaesthesia (p<0.001). Discussion: Spinal anaesthesia is a viable, simple, and effective option for obstetric curettage; it can be performed with basic monitoring and its complications are minimal. Larger studies are required to determine which technique is best. Keywords: instrumented uterine curettage, spinal anaesthesia, intravenous general anaesthesia.

Relevance: 90.00%

Abstract:

Fibre-reinforced polymer (FRP) composites are emerging as an alternative to conventional steel reinforcement for concrete because of their greater corrosion resistance. This study investigates the serviceability behaviour of concrete beams reinforced with FRP bars through theoretical and experimental analysis. Experimental results are presented for twenty-six concrete beams reinforced with glass-fibre composite (GFRP) bars and one reinforced with steel, all tested in four-point bending. The experimental results are analysed and compared with some of the most significant prediction models for deflections and cracking; in general, the experimental behaviour is predicted adequately up to service loads. Cracked section analysis (CSA) estimates the ultimate load accurately, although an increase in the experimental deflection is recorded for loads above service levels. This difference is attributed to the influence of shear deformations and is quantified experimentally. The main aspects influencing the serviceability limit states are presented: material stresses, maximum crack width, and maximum allowable deflection. A methodology for the design of such members under service conditions is presented. The proposed procedure allows the section dimensions to be optimised relative to more general methodologies.
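
As an illustration of the serviceability calculations involved, the sketch below computes the midspan deflection of a four-point bending test from a cracked-section stiffness E·Icr; all dimensions and material values are invented, not those of the tested beams.

```python
# Midspan deflection of a four-point bending test from a cracked-section
# stiffness E_c * I_cr, as used in a cracked section analysis (CSA).
# All values below are illustrative assumptions, not the tested specimens.
E_c = 30e9      # concrete modulus of elasticity, Pa
I_cr = 2e-5     # cracked-section moment of inertia, m^4
L = 2.0         # span between supports, m
a = 0.7         # shear span (support to load point), m
P = 20e3        # load at each of the two loading points, N

# Standard elastic result for two symmetric point loads P at distance a
# from the supports:
delta = P * a * (3 * L**2 - 4 * a**2) / (24 * E_c * I_cr)
print(f"midspan deflection: {delta * 1000:.1f} mm")
```

In practice the computed deflection is then checked against an allowable limit (e.g. a span/250-type criterion), and, as the study notes, a shear-deformation contribution may need to be added above service loads.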

Relevance: 90.00%

Abstract:

The atmospheric circulation changes predicted by climate models are often described using sea level pressure, which generally shows a strengthening of the mid-latitude westerlies. Recent observed variability is dominated by the Northern Annular Mode (NAM), which is equivalent barotropic, so that wind variations of the same sign are seen at all levels. However, in model predictions of the response to anthropogenic forcing, there is a well-known enhanced warming at low levels over the northern polar cap in winter. This means that there is a strong baroclinic component to the response. The projection of the response onto a NAM-like zonal index varies with height. While at the surface most models project positively onto the zonal index, throughout most of the depth of the troposphere many of the models give negative projections. The response to anthropogenic forcing therefore has a distinctive baroclinic signature which is very different from the NAM.
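
The height-dependent projection described above can be sketched as a least-squares projection of the response at each level onto the mode pattern; the mode and response profiles below are synthetic illustrations, not model output.

```python
import numpy as np

# Project a response field onto a NAM-like index at each pressure level.
# The "mode" and the response amplitudes are invented: NAM-like (positive)
# at the surface, reversed in sign aloft, mimicking a baroclinic response.
lat = np.linspace(-90, 90, 37)
mode = np.cos(np.deg2rad(3 * lat))              # stand-in annular-mode pattern

levels = np.array([1000, 850, 500, 250])        # hPa
amplitude = np.array([1.0, 0.3, -0.4, -0.6])    # sign change with height
response = amplitude[:, None] * mode[None, :] + 0.05

proj = response @ mode / (mode @ mode)          # projection coefficient per level
for lev, c in zip(levels, proj):
    print(f"{lev:4d} hPa: projection {c:+.2f}")
```

A positive coefficient at 1000 hPa with negative coefficients aloft is the baroclinic signature: the surface resembles the NAM while the free troposphere does not.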

Relevance: 90.00%

Abstract:

The polar vortex of the Southern Hemisphere (SH) split dramatically during September 2002. The large-scale dynamical effects were manifest throughout the stratosphere and upper troposphere, corresponding to two distinct cyclonic centers in the upper troposphere–stratosphere system. High-resolution (T511) ECMWF analyses, supplemented by analyses from the Met Office, are used to present a detailed dynamical analysis of the event. First, the anomalous evolution of the SH polar vortex is placed in the context of the evolution that is usually witnessed during spring. Then high-resolution fields of potential vorticity (PV) from ECMWF are used to reveal several dynamical features of the split. Vortex fragments are rapidly sheared out into sheets of high (modulus) PV, which subsequently roll up into distinct synoptic-scale vortices. It is proposed that the stratospheric circulation becomes hydrodynamically unstable through a significant depth of the troposphere–stratosphere system as the polar vortex elongates.

Relevance: 90.00%

Abstract:

Aquatic sediments often remove hydrophobic contaminants from fresh waters. The subsequent distribution and concentration of contaminants in bed sediments determines their effect on benthic organisms and the risk of re-entry into the water and/or leaching to groundwater. This study examines the transport of simazine and lindane in aquatic bed sediments with the aim of understanding the processes that determine their depth distribution. Experiments in flume channels (water flow of 10 cm s(-1)) determined the persistence of the compounds in the absence of sediment with (a) de-ionised water and (b) a solution that had been in contact with river sediment. In further experiments with river bed sediments in light and dark conditions, measurements were made of the concentration of the compounds in the overlying water and the development of bacterial/algal biofilms and bioturbation activity. At the end of the experiments, concentrations in sediments and associated pore waters were determined in sections of the sediment at 1 mm resolution down to 5 mm and then at 10 mm resolution to 50 mm depth and these distributions analysed using a sorption-diffusion-degradation model. The fine resolution in the depth profile permitted the detection of a maximum in the concentration of the compounds in the pore water near the surface, whereas concentrations in the sediment increased to a maximum at the surface itself. Experimental distribution coefficients determined from the pore water and sediment concentrations indicated a gradient with depth that was partly explained by an increase in organic matter content and specific surface area of the solids near the interface. The modelling showed that degradation of lindane within the sediment was necessary to explain the concentration profiles, with the optimum agreement between the measured and theoretical profiles obtained with differential degradation in the oxic and anoxic zones. 
The compounds penetrated to a depth of 40-50 mm over a period of 42 days. (C) 2004 Society of Chemical Industry.
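
A minimal sketch of the kind of sorption-diffusion-degradation model used to analyse the depth profiles, assuming one spatial dimension, a constant overlying-water concentration, linear sorption (a retardation factor), and first-order degradation; all parameter values are illustrative, not those fitted in the study.

```python
import numpy as np

# Explicit finite-difference solution of
#   R * dC/dt = D * d2C/dz2 - k * C
# for pore-water concentration C(z, t), with the overlying water held at a
# fixed concentration and a no-flux bottom boundary.
nz, dz = 50, 1e-3            # 50 mm of sediment in 1 mm cells
dt, nsteps = 600.0, 6048     # 10-minute steps over 42 days
D = 2e-10                    # effective diffusion coefficient, m^2/s
R = 10.0                     # retardation factor from linear sorption
k = 1e-7                     # first-order degradation rate, 1/s

C = np.zeros(nz)             # pore-water concentration (relative units)
C_water = 1.0                # overlying-water concentration

for _ in range(nsteps):
    Cpad = np.concatenate(([C_water], C, [C[-1]]))       # fixed top, no-flux bottom
    lap = (Cpad[2:] - 2.0 * Cpad[1:-1] + Cpad[:-2]) / dz**2
    C = C + dt * (D * lap - k * C) / R

depth_mm = int(np.argmax(C < 0.01 * C_water))            # ~1 mm per cell
print(f"1% penetration depth after 42 days: ~{depth_mm} mm")
```

With these illustrative parameters the 1% front reaches a few tens of millimetres in 42 days, the same order as the observed penetration; sorption (R) slows the front and degradation (k) steepens the profile.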

Relevance: 90.00%

Abstract:

Bed-sediments are a sink for many micro-organic contaminants in aquatic environments. The impact of toxic contaminants on benthic fauna often depends on their spatial distribution, and on the fate of the parent compounds and their metabolites. The distribution of a synthetic pyrethroid, permethrin, a compound known to be toxic to aquatic invertebrates, was studied using river bed-sediments in lotic flume channels. trans/cis-Permethrin diagnostic ratios were used to quantify the photoisomerization of the trans isomer in water. Rates were affected by the presence of sediment particles and colloids when compared to distilled water alone. Two experiments in dark/light conditions with replicate channels were undertaken using natural sediment, previously contaminated with permethrin, to examine the effect of the growth of an algal biofilm at the sediment-water interface on diffusive fluxes of permethrin into the sediment. After 42 days, the bulk water was removed, allowing fine sectioning of the sediment bed (i.e., every mm down to 5 mm, then 5-10 mm, then every 10 mm down to 50 mm). Permethrin was detected in all cases down to a depth of 5-10 mm, in agreement with estimates from the Millington and Quirk model, and measurements of concentrations in pore water produced a distribution coefficient (K-d) for each section. High K-d's were observed for the top layers, mainly as a result of high organic matter content and specific surface area. Concentrations in the algal biofilm measured at the end of the experiment under light conditions, and increases in concentration in the top 1 mm of the sediment, demonstrated that algal/bacterial biofilm material was responsible for the high K-d's at the sediment surface, and for the retardation of permethrin diffusion. This specific partition of permethrin to fine sediment particles and algae may enhance its threat to benthic invertebrates. 
In addition, the analysis of trans/cis-permethrin isomer ratios in sediment showed greater losses of trans-permethrin in the experiment under light conditions, which may also have resulted from enhanced biological activity at the sediment surface.
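
The per-section distribution coefficients can be computed directly from paired sediment and pore-water concentrations, K_d = C_sed / C_pore; the numbers below are invented for illustration and simply mimic the reported pattern of higher K_d near the surface.

```python
import numpy as np

# K_d = C_sediment / C_porewater for each 1 mm section of the bed.
# Concentrations are invented for illustration (ug/kg and ug/L).
sections    = ["0-1", "1-2", "2-3", "3-4", "4-5"]          # depth, mm
c_sediment  = np.array([420.0, 250.0, 140.0, 60.0, 20.0])  # ug/kg dry wt
c_porewater = np.array([2.1, 1.9, 1.4, 0.8, 0.4])          # ug/L

kd = c_sediment / c_porewater                              # L/kg
for sec, k in zip(sections, kd):
    print(f"{sec} mm: K_d = {k:.0f} L/kg")
```

A K_d that decreases with depth, as in this toy profile, is the signature of organic-rich, high-surface-area material (such as biofilm) concentrated at the sediment surface.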

Relevance: 90.00%

Abstract:

Displacement studies on the leaching of potassium (K+) were conducted under unsaturated steady-state flow conditions in nine undisturbed soil columns (15.5 cm in diameter and 25 cm long). Pulses of K+ applied to the columns of undisturbed soil were leached with distilled water or calcium chloride (CaCl2) at a rate of 18 mm h(-1). The movement of K+ in gypsum-treated soil leached with distilled water was at a similar rate to that in the untreated soil leached with 15 mM CaCl2. The Ca2+ concentrations in the leachates were about 15 mM, the value expected for the dissolution of gypsum. When applied K+ was displaced with distilled water, K+ was retained in the top 10-12.5 cm of soil. In the undisturbed soil cores there is a possibility of preferential flow and a lack of K+ sorption. The application of gypsum and CaCl2 in the reclamation of sodic soils would therefore be expected to leach K+ from soils. It can also be concluded that the use of irrigation water sources with a high Ca2+ concentration can lead to leaching of K+ from soil. The average effluent concentration of K+ during the leaching period was 30.2 and 28.6 mg l(-1) for the gypsum- and CaCl2-treated soils, respectively. These concentrations are greater than the recommended guideline of the World Health Organisation (12 mg K+ l(-1)).

Relevance: 90.00%

Abstract:

Testing of the Integrated Nitrogen model for Catchments (INCA) in a wide range of ecosystem types across Europe has shown that the model underestimates N transformation processes to a large extent in northern catchments of Finland and Norway in winter and spring. It is found, and generally assumed, that microbial activity in soils at northern latitudes proceeds at low rates during winter, even at sub-zero temperatures. The INCA model was modified to improve the simulation of N transformation rates in northern catchments, characterised by cold climates and extensive snow accumulation and insulation in winter, by introducing an empirical function to simulate soil temperatures below the seasonal snow pack, and a degree-day model to calculate the depth of the snow pack. The proposed snow-correction factor improved the simulation of soil temperatures at Finnish and Norwegian field sites in winter, although soil temperature was still underestimated during periods with a thin snow cover. Finally, a comparison between the modified INCA version (v. 1.7) and the former version (v. 1.6) was made at the Simojoki river basin in northern Finland and at Dalelva Brook in northern Norway. The new modules did not lead to any significant changes in simulated NO3- concentration levels in the streams but improved the timing of simulated higher concentrations. The inclusion of a modified temperature response function and an empirical snow-correction factor improved the flexibility and applicability of the model for climate effect studies.
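
A minimal sketch of a degree-day snow-pack model together with an empirical snow-insulation correction of the kind described; the melt factor, threshold, and exponential damping form are assumptions for illustration, not the actual INCA v. 1.7 formulation.

```python
import math

# Degree-day snow pack plus empirical snow insulation of soil temperature.
DDF = 3.0       # degree-day melt factor, mm SWE per degC per day (assumed)
T_MELT = 0.0    # melt threshold, degC

def step_snowpack(swe, precip_mm, t_air):
    """Advance snow water equivalent (mm) by one day."""
    if t_air <= T_MELT:
        return swe + precip_mm                         # accumulate as snow
    return max(0.0, swe - DDF * (t_air - T_MELT))      # degree-day melt

def soil_temp(t_air, snow_depth_mm, damping=0.01):
    """Damp sub-zero air temperature exponentially with snow depth."""
    return t_air * math.exp(-damping * snow_depth_mm)

# Five days: two snowfall days, one cold dry day, two melt days.
swe = 0.0
for precip_mm, t_air in [(10, -5), (8, -8), (0, -2), (0, 3), (0, 5)]:
    swe = step_snowpack(swe, precip_mm, t_air)
print(f"SWE after 5 days: {swe:.1f} mm")
print(f"soil temperature under 200 mm of snow at -10 degC air: "
      f"{soil_temp(-10.0, 200.0):.2f} degC")
```

The damping term captures why a thick snow pack keeps soil near 0 degC and microbially active even in severe air frosts, which is exactly the behaviour the unmodified model missed.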

Relevance: 90.00%

Abstract:

During deglaciation of the North American Laurentide Ice Sheet, large proglacial lakes developed in positions where proglacial drainage was impeded by the ice margin. For some of these lakes, it is known that subsequent drainage had an abrupt and widespread impact on North Atlantic Ocean circulation and climate, but less is known about the impact that the lakes exerted on ice sheet dynamics. This paper reports palaeogeographic reconstructions of the evolution of proglacial lakes during deglaciation across the northwestern Canadian Shield, covering an area in excess of 1,000,000 km(2) as the ice sheet retreated some 600 km. The interactions between proglacial lakes and ice sheet flow are explored, with a particular emphasis on whether the disposition of lakes may have influenced the location of the Dubawnt Lake ice stream. This ice stream falls outside the existing paradigm for ice streams in the Laurentide Ice Sheet because it did not operate over fine-grained till or lie in a topographic trough. Ice margin positions and a digital elevation model are utilised to predict the geometry and depth of proglacial lakes impounded at the margin at 30-km increments during deglaciation. Palaeogeographic reconstructions match well with previous independent estimates of lake coverage inferred from field evidence, and results suggest that the development of a deep lake in the Thelon drainage basin may have been influential in initiating the ice stream by inducing calving, drawing down ice, and triggering fast ice flow. This is the only location alongside this sector of the ice sheet where large (>3000 km(2)), deep (~120 m) lakes are impounded for a significant length of time, and it exactly matches the location of the ice stream. 
It is speculated that the commencement of calving at the ice sheet margin may have taken the system beyond a threshold and was sufficient to trigger rapid motion but that once initiated, calving processes and losses were insignificant to the functioning of the ice stream. It is thus concluded that proglacial lakes are likely to have been an important control on ice sheet dynamics during deglaciation of the Laurentide Ice Sheet. (C) 2004 Elsevier B.V. All rights reserved.
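
The lake reconstruction step can be sketched as flooding a DEM below an impounded lake level on the unglaciated side of the ice margin; the tiny DEM, ice mask, and lake level below are invented for illustration, not the Canadian Shield data.

```python
import numpy as np

# Every unglaciated cell below the impounded lake level is flooded;
# depth = lake level minus ground elevation, clipped at zero.
dem = np.array([
    [120.0, 110.0,  95.0,  90.0],
    [115.0, 100.0,  85.0,  80.0],
    [110.0,  95.0,  88.0,  84.0],
])                                    # elevations, m a.s.l. (illustrative)
ice_mask = np.zeros_like(dem, dtype=bool)
ice_mask[:, 3] = True                 # rightmost column still ice-covered

lake_level = 100.0                    # spillway-controlled level, m a.s.l.
depth = np.where(~ice_mask, np.maximum(lake_level - dem, 0.0), 0.0)

cell_area_km2 = 1.0                   # assume 1 km x 1 km cells
print(f"lake area: {np.count_nonzero(depth) * cell_area_km2:.0f} km2, "
      f"max depth: {depth.max():.0f} m")
```

Repeating this for each reconstructed ice-margin position (every 30 km of retreat in the study) yields the time series of lake area and depth from which deep, long-lived lakes such as the one in the Thelon basin can be identified.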