Nitric Oxide in the Exhaled Breath Condensate of Healthy Volunteers Collected With a Reusable Device
Abstract:
Background: The analysis of exhaled breath condensate (EBC) is a non-invasive technique that enables the determination of several volatile and non-volatile substances produced in the respiratory tract, whose measurement may be useful for the diagnosis and monitoring of several respiratory diseases. Objective: The aim of this study was to produce a low-cost reusable device for sampling exhaled breath condensate in healthy adult volunteers, and to determine the concentration of nitric oxide in the samples collected. Material and methods: The apparatus was made with a U-shaped tube of borosilicate glass. The tube was placed in a container with ice, and unidirectional respiratory valves were fitted to the distal end. Nitric oxide was then measured in the EBC by chemiluminescence. Results: The total cost of the device was $120.20. EBC samples were obtained from 116 volunteers of both sexes, aged between 20 and 70. The mean volume of EBC collected over 10 minutes was 1.0 ± 0.6 mL, and the mean nitric oxide level was 12.99 ± 14.38 µM (median 8.72 µM). There was no correlation between nitric oxide levels in the EBC and age or sex. Conclusion: We demonstrate that it is possible to fabricate a low-cost, efficient, reusable device to collect EBC and determine its nitric oxide levels. We identified no correlation between the nitric oxide levels in the EBC obtained with this method and either age or sex.
Abstract:
OBJECTIVE: To analyze the association between noise levels in preschool institutions and vocal disorders among educators. METHODS: Cross-sectional study conducted in 2009 with 28 teachers from three preschool institutions located in the city of Sao Paulo (Southeastern Brazil). Sound pressure levels were measured with a sound level meter, according to the Brazilian Technical Standards Association. The averages were classified according to the levels of comfort, discomfort, and auditory damage proposed by the Pan American Health Organization. The educators underwent voice evaluation: self-assessment with a visual analogue scale, auditory-perceptual evaluation using the GRBAS scale, and acoustic analysis using the Praat program. To analyze the association between noise and the voice evaluations, descriptive statistics and the chi-square test were employed, with a significance level of 10% owing to the sample size. RESULTS: The teachers' ages ranged between 21 and 56 years. The average noise level was 72.7 dB, classified as damage level 2. The professionals' vocal self-assessment averaged 5.1 on the scale, corresponding to moderate alteration. In the auditory-perceptual assessment, 74% presented vocal alteration, especially hoarseness; of these, 52% were considered mild. In the acoustic assessment, the majority presented a fundamental frequency below the expected level, and the averages for jitter, shimmer, and harmonics-to-noise ratio were altered. An association was observed between the presence of noise between the harmonics and vocal disorders. CONCLUSIONS: There is an association between the presence of noise between the harmonics and vocal alteration under high noise levels. Although most teachers presented mild vocal alteration, the self-evaluation indicated moderate alteration, probably due to difficulty with vocal projection.
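As an illustration of the analysis described above, a chi-square test of association can be run as follows. This is a minimal sketch: the contingency counts are invented for illustration and are not the study's data.

```python
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table: noise exposure vs presence of vocal alteration.
# Counts are invented for illustration only.
table = [[10, 4],   # higher-noise rooms: altered / unaltered voices
         [6, 8]]    # lower-noise rooms:  altered / unaltered voices

chi2, p, dof, expected = chi2_contingency(table)
# The study adopted a 10% significance level because of the small sample.
print(f"chi2 = {chi2:.2f}, p = {p:.3f}, significant at 10%: {p < 0.10}")
```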
Abstract:
The forest-like characteristics of agroforestry systems create a unique opportunity to combine agricultural production with biodiversity conservation in human-modified tropical landscapes. The cacao-growing region in southern Bahia, Brazil, encompasses Atlantic forest remnants and large extensions of agroforests, locally known as cabrucas, and harbors several endemic large mammals. Based on the differences between cabrucas and forests, we hypothesized that: (1) non-native and non-arboreal mammals are more frequent, whereas exclusively arboreal and hunted mammals are less frequent, in cabrucas than in forests; (2) the two systems differ in mammal assemblage structure, but not in species richness; and (3) mammal assemblage structure is more variable among cabrucas than among forests. We used camera traps to sample mammals in nine pairs of cabruca-forest sites. The high conservation value of agroforests was supported by the presence of species of conservation concern in cabrucas, and by similar species richness and composition between forests and cabrucas. Arboreal species were recorded less frequently, however, and a non-native species and a terrestrial species adapted to open environments (Cerdocyon thous) were recorded more frequently in cabrucas. Factors that may lead to overestimating the conservation value of cabrucas are the high proportion of total forest cover in the study landscape, the impoverishment of the large-mammal fauna in forests, and uncertainty about the long-term maintenance of agroforestry systems. Our results highlight the importance of agroforests and forest remnants for providing connectivity in human-modified tropical forest landscapes, and the importance of controlling hunting and dogs to increase the value of agroforestry mosaics.
Abstract:
Theoretical and empirical studies demonstrate that the total amount of forest and the size and connectivity of fragments have nonlinear effects on species survival. We tested how habitat amount and configuration affect understory bird species richness and abundance. We used mist nets (almost 34,000 net hours) to sample birds in 53 Atlantic Forest fragments in southeastern Brazil. Fragments were distributed among three 10,800-ha landscapes. The remaining forest in these landscapes was below (10% forest cover), similar to (30%), and above (50%) the theoretical fragmentation threshold (approximately 30%) below which the effects of fragmentation should intensify. Species-richness estimates were significantly higher (F = 3715, p = 0.00) where 50% of the forest remained, suggesting a species-occurrence threshold at 30-50% forest cover, higher than usually reported (<30%). The relation between forest cover and species richness differed depending on species' sensitivity to forest conversion and fragmentation. For less sensitive species, species richness decreased as forest cover increased, whereas for highly sensitive species the opposite occurred. For sensitive species, species richness and the amount of forest cover were positively related, particularly when forest cover was 30-50%. Fragment size and connectivity were related to species richness and abundance in all landscapes, not just below the 30% threshold. Where 10% of the forest remained, fragment size was more strongly related to species richness and abundance than connectivity. However, the relation between connectivity and species richness and abundance was stronger where 30% of the landscape was forested. Where 50% of the landscape was forested, fragment size and connectivity were both related to species richness and abundance. Our results demonstrate a rapid loss of species at relatively high levels of forest cover (30-50%). Highly sensitive species were 3-4 times more common above the 30-50% threshold than below it; however, our results do not support a single, unique fragmentation threshold.
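The fragmentation-threshold idea, a breakpoint in the response of richness to forest cover, can be pictured with a simple segmented regression. The sketch below uses simulated data and is purely illustrative of the concept, not a reproduction of the study's analysis:

```python
import numpy as np
from scipy.optimize import curve_fit

def segmented(cover, threshold, slope_lo, slope_hi, intercept):
    """Piecewise-linear richness response with a breakpoint at `threshold`."""
    return np.where(cover < threshold,
                    intercept + slope_lo * cover,
                    intercept + slope_lo * threshold
                    + slope_hi * (cover - threshold))

# Simulated data: richness rises steeply only above ~30% forest cover.
rng = np.random.default_rng(0)
cover = rng.uniform(5, 60, 100)
richness = segmented(cover, 30, 0.05, 0.6, 5) + rng.normal(0, 0.5, 100)

params, _ = curve_fit(segmented, cover, richness, p0=[25, 0.1, 0.5, 5])
print(f"estimated breakpoint: {params[0]:.1f}% forest cover")
```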
Abstract:
Several activities were carried out during my PhD. For the NEMO experiment, a collaboration between the INFN/University groups of Catania and Bologna led to the development and production of a mixed-signal acquisition board for the NEMO Km3 telescope. The research concerned the feasibility study of an acquisition technique quite different from that adopted in the NEMO Phase 1 telescope. The DAQ board we realized exploits the LIRA06 front-end chip for the analog acquisition of the anodic and dynodic outputs of a PMT (Photo-Multiplier Tube). The low-power analog acquisition allows multiple channels of the PMT to be sampled simultaneously at different gain factors, in order to increase the linearity of the signal response over a wider dynamic range. The auto-triggering and self-event-classification features also help to improve acquisition performance and the knowledge of the neutrino event. A fully functional interface towards the first-level data concentrator, the Floor Control Module, has been integrated on the board as well, and specific firmware has been written to comply with the present communication protocols. This stage of the project foresees the use of an FPGA, a high-speed configurable device, to provide the board with a flexible digital logic control core. After the validation of the whole front-end architecture, this functionality will probably be integrated into a common mixed-signal ASIC (Application Specific Integrated Circuit). The volatile nature of the FPGA's configuration memory required the integration of a flash ISP (In-System Programming) memory and a robust architecture for its safe remote reconfiguration. All the integrated features of the board have been tested. At the Catania laboratory, the behavior of the LIRA chip was investigated in the digital environment of the DAQ board, and we succeeded in driving the acquisition with the FPGA. PMT pulses generated with an arbitrary waveform generator were correctly triggered and acquired by the analog chip, and subsequently digitized by the on-board ADC under the supervision of the FPGA. For the communication towards the data concentrator, a test bench was set up in Bologna where, thanks to equipment on loan from Roma University and INFN, a full readout chain equivalent to that of NEMO Phase 1 was installed. These tests showed good behavior of the digital electronics, which were able to receive and execute commands issued from the PC console and to answer back with a reply. The remotely configurable logic also behaved well and demonstrated, at least in principle, the validity of this technique. A new prototype board is now under development at the Catania laboratory as an evolution of the one described above. This board is going to be deployed within the NEMO Phase 2 tower, in one of its floors dedicated to new front-end proposals. It will integrate a new analog acquisition chip called SAS (Smart Auto-triggering Sampler), thus introducing a new analog front-end while inheriting most of the digital logic present in the current DAQ board discussed in this thesis.
As for the activity on high-resolution vertex detectors, I worked within the SLIM5 collaboration on the characterization of a MAPS (Monolithic Active Pixel Sensor) device called APSEL-4D. This chip is a matrix of 4096 active pixel sensors with deep N-well implantations meant for charge collection and for shielding the analog electronics from digital noise. The chip integrates the full-custom sensor matrix and the sparsification/readout logic, realized with standard cells in 130 nm STM CMOS technology. For the chip characterization, a test beam was set up on the 12 GeV PS (Proton Synchrotron) line facility at CERN, Geneva (CH). The collaboration prepared a silicon strip telescope and a DAQ system (hardware and software) for data acquisition and control of the telescope, which allowed about 90 million events to be stored in 7 equivalent days of beam live-time. My activities mainly concerned the realization of a firmware interface to and from the MAPS chip, in order to integrate it into the general DAQ system. Thereafter I worked on the DAQ software to implement a proper Slow Control interface for the APSEL-4D. Several APSEL-4D chips with different thinning were tested during the test beam. Those thinned to 100 and 300 µm presented an overall efficiency of about 90% at a threshold of 450 electrons. The test beam also allowed the resolution of the pixel sensor to be estimated, yielding good results consistent with the pitch/sqrt(12) formula. The MAPS intrinsic resolution was extracted from the width of the residual plot, taking the multiple-scattering effect into account.
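For reference, the pitch/sqrt(12) figure quoted above is the standard single-pixel (binary readout) resolution of a segmented detector: for hits distributed uniformly across a pixel of pitch p, the position variance is

$$\sigma^2 = \frac{1}{p}\int_{-p/2}^{p/2} x^2\,\mathrm{d}x = \frac{p^2}{12}, \qquad \sigma = \frac{p}{\sqrt{12}}.$$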
Abstract:
A system for the digital-holographic imaging of airborne objects, suitable for ground-based field measurements, was developed and constructed. Depending on the depth position, it allows the direct determination of the size of airborne objects above approx. 20 µm, as well as of their shape for sizes from approx. 100 µm up to the millimeter range. The development additionally included an algorithm for the automated improvement of hologram quality and for the semi-automatic distance determination of large objects. A way to intrinsically increase the efficiency of the depth-position determination by computing angle-averaged profiles was presented. Furthermore, a method was developed that, using an iterative approach for isolated objects, allows the recovery of the phase information and thus the removal of the twin image. In addition, the effects of various limitations of digital holography, such as the finite pixel size, were investigated and discussed by means of simulations. The appropriate display of the three-dimensional position information poses a particular problem in digital holography, since the three-dimensional light field is not physically reconstructed. A method was developed and implemented that, by constructing a stereoscopic representation of the numerically reconstructed measurement volume, allows a quasi-three-dimensional, magnified view. Selected digital holograms recorded during field campaigns on the Jungfraujoch were reconstructed. In some cases they showed a very high fraction of irregular crystal shapes, in particular as a result of massive riming. Objects down to the range of ≤20 µm were observed even during periods with formally ice-subsaturated conditions. Furthermore, by applying the theory of the "phase edge effect" developed here, an object of only approx. 40 µm in size could be identified as an ice platelet. The greatest drawback of digital holography compared with conventional photographic imaging techniques is the need for elaborate numerical reconstruction, which entails a high computational cost to achieve a result comparable to a photograph. On the other hand, digital holography has unique strengths: access to the three-dimensional position information can serve the local investigation of relative object distances. It also became apparent, however, that the constraints of digital holography currently hamper the observation of sufficiently large numbers of objects on the basis of individual holograms. It was demonstrated that complete object boundaries could be reconstructed even when an object lay partially or entirely outside the geometric measurement volume. Furthermore, the sub-pixel reconstruction first demonstrated in simulations was applied to real holograms; quasi-point-like objects could in part be localized with sub-pixel accuracy, and additional information could be gained for extended objects as well. Finally, interference patterns were observed on reconstructed ice crystals and in some cases tracked over time. At present, both internal reflection within the crystal and the existence of a (quasi-)liquid layer appear possible as explanations, with some of the evidence arguing for the latter.
As a result of this work, a system is now available that comprises a new measurement instrument and an extensive set of algorithms. S. M. F. Raupach, H.-J. Vössing, J. Curtius and S. Borrmann: Digital crossed-beam holography for in-situ imaging of atmospheric particles, J. Opt. A: Pure Appl. Opt. 8, 796-806 (2006); S. M. F. Raupach: A cascaded adaptive mask algorithm for twin image removal and its application to digital holograms of ice crystals, Appl. Opt. 48, 287-301 (2009); S. M. F. Raupach: Stereoscopic 3D visualization of particle fields reconstructed from digital inline holograms (accepted for publication, Optik - Int. J. Light El. Optics, 2009).
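The numerical reconstruction described in the abstract above is typically performed by propagating the recorded hologram to a chosen depth. The following is a minimal sketch of the angular spectrum method, a standard approach for in-line holograms; all parameter values and names are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

def angular_spectrum(hologram, wavelength, pixel_pitch, z):
    """Propagate a recorded hologram to depth z via the angular spectrum method."""
    ny, nx = hologram.shape
    fx = np.fft.fftfreq(nx, d=pixel_pitch)  # spatial frequencies [1/m]
    fy = np.fft.fftfreq(ny, d=pixel_pitch)
    FX, FY = np.meshgrid(fx, fy)
    # Free-space transfer function; evanescent components are suppressed.
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.where(arg > 0,
                 np.exp(2j * np.pi * z / wavelength * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(np.fft.fft2(hologram) * H)

# Illustrative values: 532 nm laser, 3.45 µm pixels, reconstruction at 5 cm.
holo = np.random.rand(512, 512)          # stand-in for a recorded hologram
field = angular_spectrum(holo, 532e-9, 3.45e-6, 0.05)
intensity = np.abs(field) ** 2           # amplitude image at the chosen depth
```

The computational cost mentioned in the abstract stems from repeating this propagation for every depth plane of interest.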
Abstract:
The aim of my thesis is to parallelize the Weighted Histogram Analysis Method (WHAM), a popular algorithm used to calculate the free energy of a molecular system in Molecular Dynamics simulations. WHAM works in post-processing, in combination with another technique called Umbrella Sampling. Umbrella Sampling adds a biasing term to the potential energy of the system in order to force the system to sample a specific region of configurational space. N independent simulations are performed in order to sample the whole region of interest, and the WHAM algorithm is then used to estimate the unbiased energy of the original system from the N atomic trajectories. The parallelization of WHAM was carried out with CUDA, a language for programming the GPUs of NVIDIA graphics cards, which have a parallel architecture. The parallel implementation can substantially speed up WHAM execution compared with previous serial CPU implementations; the CPU code, in particular, presents run-time criticalities for very large numbers of iterations. The algorithm was written in C++ and executed on UNIX systems equipped with NVIDIA graphics cards. The results were satisfactory, showing a performance increase when the model was executed on graphics cards of higher compute capability. Nonetheless, the GPUs used to test the algorithm were rather old and not designed for scientific computing, and a further performance increase is likely if the algorithm were run on clusters of GPUs with high computational efficiency. The thesis is organized as follows: I first describe the mathematical formulation of Umbrella Sampling and the WHAM algorithm, with their applications in the study of ionic channels and in molecular docking (Chapter 1); I then present the CUDA architectures used to implement the model (Chapter 2); finally, the results obtained on model systems are presented (Chapter 3).
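For context, the self-consistent equations that WHAM iterates (given here in their standard textbook form, not quoted from the thesis) are

$$P(\xi) = \frac{\sum_{i=1}^{N} n_i(\xi)}{\sum_{i=1}^{N} M_i\, e^{\beta\,[F_i - U_i(\xi)]}}, \qquad e^{-\beta F_i} = \sum_{\xi} P(\xi)\, e^{-\beta U_i(\xi)},$$

where n_i(ξ) is the histogram count of window i in bin ξ, M_i its total number of samples, U_i its bias potential, and F_i its free-energy shift. The two equations are iterated to self-consistency, and the free-energy profile then follows as F(ξ) = -k_B T ln P(ξ). Because the sums run independently over bins and windows, each iteration maps naturally onto GPU threads, which is what makes the algorithm a good candidate for CUDA parallelization.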
Abstract:
This thesis presents a detailed study of fundamental properties of the calcite CaCO3(10.4) and related mineral surfaces, made possible not only by the use of non-contact atomic force microscopy, but mainly by the measurement of force fields. The absolute surface orientation, as well as the underlying atomic-scale process, could be successfully identified for the calcite (10.4) surface. The adsorption of chiral molecules on calcite is relevant in the field of biomineralization, which makes an understanding of the surface symmetry indispensable. Measuring the surface force field at the atomic level is a central aspect of this. Such a force map not only sheds light on the interaction of the surface with molecules, which is important for biomineralization, but also offers the possibility of identifying atomic-scale processes and thus surface properties. The introduction of a highly flexible measurement protocol ensures the reliable measurement of the surface force field, which is not available commercially. The conversion of the raw ∆f data into the vertical force Fz, however, is not a trivial procedure, especially when smoothing of the data is considered. This thesis describes in detail how Fz can be computed correctly for the experimental conditions of this work. It is further described how the lateral forces Fy and the dissipation Γ were obtained, in order to exploit the full potential of this measurement method. To understand atomic-scale processes on surfaces, the short-range chemical forces Fz,SR are of utmost importance. For this, long-range contributions must be fitted to Fz and subtracted from it. This, however, is an error-prone task, which was mastered in this work by finding three independent criteria that determine the onset zcut of Fz,SR, a quantity of central importance for this task. A detailed error analysis shows that the deviation of the lateral forces from one another, used as a criterion, yields trustworthy Fz,SR. This is the first time that a study has provided a criterion for determining zcut, complete with a detailed error analysis. With the knowledge of Fz,SR and Fy it was possible to identify one of the fundamental properties of the CaCO3(10.4) surface: the absolute surface orientation. A strong tilt of the imaged objects
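As background (the following is standard in the NC-AFM literature and is not quoted from the thesis): a widely used recipe for the ∆f-to-Fz conversion discussed above is the Sader-Jarvis inversion,

$$F_z(z) = 2k \int_z^{\infty} \left[\left(1 + \frac{\sqrt{a}}{8\sqrt{\pi\,(t - z)}}\right)\Omega(t) - \frac{a^{3/2}}{\sqrt{2\,(t - z)}}\,\frac{\mathrm{d}\Omega(t)}{\mathrm{d}t}\right]\mathrm{d}t, \qquad \Omega(t) = \frac{\Delta f(t)}{f_0},$$

with k the cantilever stiffness, f_0 its resonance frequency, and a the oscillation amplitude. The derivative term amplifies measurement noise, which is one reason the question of smoothing raised above is delicate.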
Abstract:
Capuchin monkeys are notable among New World monkeys for their widespread use of tools. They use both hammer tools and insertion tools in the wild to acquire food that would be unobtainable otherwise. Evidence indicates that capuchins transport stones to anvil sites and use the most functionally efficient stones to crack nuts. We investigated capuchins’ assessment of functionality by testing their ability to select a tool that was appropriate for two different tool-use tasks: A stone for a hammer task and a stick for an insertion task. To select the appropriate tools, the monkeys investigated a baited tool-use apparatus (insertion or hammer), traveled to a location in their enclosure where they could no longer see the apparatus, made a selection between two tools (stick or stone), and then could transport the tool back to the apparatus to obtain a walnut. Four capuchins were first trained to select and use the appropriate tool for each apparatus. After training, they were then tested by allowing them to view a baited apparatus and then travel to a location 8 m distant where they could select a tool while out of view of the apparatus. All four monkeys chose the correct tool significantly more than expected and transported the tools back to the apparatus. Results confirm capuchins’ propensity for transporting tools, demonstrate their capacity to select the functionally appropriate tool for two different tool-use tasks, and indicate that they can retain the memory of the correct choice during a travel time of several seconds.
Abstract:
Micelle-forming bile salts have previously been shown to be effective pseudo-stationary phases for separating the chiral isomers of binaphthyl compounds with micellar electrokinetic capillary chromatography (MEKC). Here, cholate micelles are systematically investigated via electrophoretic separations and NMR using R,S-1,1′-binaphthyl-2,2′-diyl hydrogen phosphate (BNDHP) as a model chiral analyte. The pH, temperature, and concentration of BNDHP were systematically varied while monitoring the chiral resolution obtained with MEKC and the chemical shift of various protons in NMR. NMR data for each proton on BNDHP are monitored as a function of cholate concentration: as cholate monomers begin to aggregate and the analyte molecules begin to sample the micelle aggregate, we observe changes in the cholate methyl and S-BNDHP proton chemical shifts. From such NMR data, the apparent CMC of cholate at pH 12 is found to be about 13-14 mM, but this value decreases at higher pH, suggesting that more extreme pHs may give rise to more effective separations. In general, CMCs increase with temperature, indicating that one may be able to obtain better separations at lower temperatures. S-BNDHP concentrations ranging from 50 µM to 400 µM (pH 12.8) gave rise to apparent cholate CMC values from 10 mM to 8 mM, respectively, indicating that S-BNDHP, the chiral analyte molecule, may play an active role in stabilizing cholate aggregates. In all, these data show that NMR can be used to systematically investigate a complex multi-variable landscape of potential optimizations of chiral separations.
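A common way to extract an apparent CMC from such titration data (the standard pseudo-phase-separation treatment under fast exchange, not a formula specific to this study) is to model the observed chemical shift as a population-weighted average of the monomer and micelle environments:

$$\delta_{\mathrm{obs}} = \begin{cases} \delta_{\mathrm{mono}}, & C < \mathrm{CMC},\\[4pt] \delta_{\mathrm{mic}} + \dfrac{\mathrm{CMC}}{C}\left(\delta_{\mathrm{mono}} - \delta_{\mathrm{mic}}\right), & C \ge \mathrm{CMC}, \end{cases}$$

so that a plot of δ_obs against 1/C is linear above the CMC, and the apparent CMC is read off from the intersection of the two regimes.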
Abstract:
Measurements of NOx within the snowpack at Summit, Greenland were carried out from June 2008 to July 2010, using a novel system to sample firn air with minimal disruption of the snowpack. These long-term measurements were motivated by the need to improve the representation of air-snow interactions in global models. Results indicated that the NOx budget within the snowpack reached a maximum on the order of 550 pptv and consisted primarily of NO2. NOx production was observed within the first 50 cm of the snowpack during the sunlit season between February and August. The presence of NOx at greater depths was attributed to high wind speeds and vertical transport processes. NO production correlated with the seasonal profile of incoming radiation, while the NO2 maximum was observed in April. These measurements constitute the largest data set of NOx within the firn to date and will improve the representation of the processes driving snow photochemistry at Summit.
Abstract:
Streams and riparian areas can be intricately connected via physical and biotic interactions that influence habitat conditions and supply resource subsidies between these ecosystems. Streambed characteristics such as the size of substrate particles influence the composition and abundance of emergent aquatic insects, which can be an important resource for riparian breeding birds. We predicted that fine-sediment abundance in small headwater streams directly affects the composition and number of emergent insects, while it may indirectly affect riparian bird assemblages. Streams with abundant fine sediments that embed larger substrates should have lower emergence of large insects such as Ephemeroptera, Plecoptera, and Trichoptera. Streams with lower emergent insect abundance are in turn predicted to support fewer breeding birds and may lack certain bird species that specialize on aquatic insects. This study examined relationships between streambed characteristics, emergent insects (composition, abundance, and biomass), and riparian breeding birds (abundance and richness) along headwater streams of the Otter River Watershed. The streambed habitats of seven stream reaches were characterized using longitudinal surveys. Malaise traps were deployed to sample emergent aquatic insects, and riparian breeding birds were surveyed using fixed-radius point counts. Streams spanned a wide range of fine-sediment abundance. Total emergent aquatic insect abundance increased with the diameter of the instream substrates, while the bird community was unresponsive to insect or stream features. Knowledge of stream-riparian relationships is important for understanding food webs in these ecosystems, and it is useful for riparian forest conservation and for improving land-use management to reduce sediment pollution in these systems.
Abstract:
Kℓ4 decays are interesting for several reasons: they allow an accurate measurement of a combination of S-wave ππ scattering lengths, one form factor of the decay is connected to the chiral anomaly, and the decay is the best source for the determination of some low-energy constants of ChPT. We present a dispersive approach to Kℓ4 decays, which takes rescattering effects fully into account. Some fits to NA48/2 and E865 measurements and results of the matching to ChPT are shown.
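For readers unfamiliar with the notation, Kℓ4 denotes the four-body semileptonic kaon decay

$$K^{\pm} \rightarrow \pi^{+}\pi^{-}\,\ell^{\pm}\,\nu_{\ell} \qquad (\ell = e, \mu),$$

whose final-state ππ rescattering is what gives access to the S-wave scattering lengths.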
Abstract:
This study aimed to evaluate whether equine serum amyloid A (SAA) concentrations could be reliably measured in plasma with a turbidimetric immunoassay previously validated for equine SAA concentrations in serum. Paired serum and lithium-heparin samples obtained from 40 horses were evaluated. No difference was found in SAA concentrations between serum and plasma using a paired t test (P=0.48). The correlation between paired samples was 0.97 (Spearman's rank, P<0.0001; 95% confidence interval 0.95-0.99). Passing-Bablok regression analysis revealed no differences between paired samples. Bland-Altman plots revealed a positive bias in plasma compared with serum, but the difference was not considered clinically significant. The results indicate that lithium-heparin plasma samples are suitable for measuring equine SAA with this method. Use of either serum or plasma allows greater flexibility in sample collection, although care should be taken when comparing data between measurements from different sample types.
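As an illustration of the agreement analysis used here, a Bland-Altman computation can be sketched as follows (generic code with invented numbers; neither the data nor the software of the study):

```python
import numpy as np

def bland_altman(serum, plasma):
    """Return bias and 95% limits of agreement between two measurement methods."""
    serum, plasma = np.asarray(serum, float), np.asarray(plasma, float)
    diff = plasma - serum            # per-sample differences
    bias = diff.mean()               # systematic offset of plasma vs serum
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

# Hypothetical paired SAA values (mg/L), for illustration only.
serum  = [12.1, 250.0, 33.4, 5.0, 810.2]
plasma = [13.0, 255.5, 31.9, 5.4, 820.0]
print(bland_altman(serum, plasma))   # (bias, lower LoA, upper LoA)
```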
Abstract:
Glacier highstands since the Last Glacial Maximum are well documented for many regions, but little is known about glacier fluctuations and lowstands during the Holocene. This is because the traces of minimum extents are difficult to identify and at many places are still ice covered, limiting access to sample material. Here we report a new approach to assessing minimal glacier extent, using a 72-m long surface-to-bedrock ice core drilled on Khukh Nuru Uul, a glacier in the Tsambagarav mountain range of the Mongolian Altai (4130 m asl, 48°39.338′N, 90°50.826′E). The small ice cap has low ice temperatures and flat bedrock topography at the drill site, indicating minimal lateral glacier flow and therefore well-preserved climate signals. The upper two-thirds of the ice core contain 200 years of climate information at annual resolution, whereas the lower third is subject to strong thinning of the annual layers, with a basal ice age of approximately 6000 years before present (BP). We interpret the basal ice age as indicating ice-free conditions in the Tsambagarav mountain range at 4100 m asl prior to 6000 years BP. This age marks the onset of the Neoglaciation and the end of the Holocene Climate Optimum. The ice-free conditions allow the Equilibrium Line Altitude (ELA) to be adjusted and the glacier extent in the Mongolian Altai during the Holocene Climate Optimum to be derived. Based on the ELA shift, we conclude that most of the glaciers are not remnants of the Last Glacial Maximum but were formed during the second half of the Holocene. The accumulation reconstruction derived from the ice core suggests important changes in the precipitation pattern over the last 6000 years: during formation of the glacier, conditions were more humid than at present, followed by a long dry period from 5000 years BP until 250 years ago; present conditions are more humid than during the past millennia. This is consistent with the precipitation evolution derived from lake-sediment studies in the Altai.