952 results for inborn errors of metabolism
Abstract:
Since the beginning of human history, people have influenced their environment. Anthropogenic emissions change the composition of the atmosphere, with a growing impact on, among other things, atmospheric chemistry, the health of humans, flora and fauna, and the climate. The increasing number of huge, growing metropolises goes hand in hand with a spatial concentration of air-pollutant emissions, which above all affects the air quality of rural regions downwind. In this doctoral thesis, carried out within the MEGAPOLI project, the emission plume of the megacity Paris was investigated using the mobile aerosol research laboratory MoLa. MoLa is equipped with modern, highly time-resolved instruments for measuring the chemical composition and size distribution of aerosol particles as well as several trace gases. Mobile measurement strategies particularly suited to characterizing urban emissions were developed and applied. Cross-section drives through the emission plume and atmospheric background air masses allowed both the determination of the structure and homogeneity of the plume and the calculation of the contribution of urban emissions to the total atmospheric burden. Quasi-Lagrangian radial drives served to explore the spatial extent of the plume and the transformation processes of the advected pollutants. In combination with modelling, the structure of the plume could be examined in greater depth. Flexible stationary measurements complemented the data set and additionally permitted comparison measurements with other measurement stations. Data from a fixed measurement station were also used to describe the aging of the organic particle fraction. The analysis of the mobile measurement data required the development of a new method for cleaning the data set of local interference.
Furthermore, the possibilities, limitations and errors in the application of complex analysis programs for calculating the O/C ratio of the particles and for classifying the organic aerosol fraction were investigated. A validation of different methods for determining air-mass origin was also necessary for the analysis. The detailed investigation of the Paris emission plume showed that it can be identified by elevated concentrations of indicators of unprocessed air pollution relative to background values. Its rather homogeneous structure can usually be described by a Gaussian cross-section with an exponential decay of unprocessed pollutant concentrations with increasing distance from the city; turbulent mixing with ambient air masses is mainly responsible for this. It could be demonstrated that marked oxidation of the organic aerosol occurs in the advected plume in summer, whereas in winter this process was not observed during the measurements. In both seasons the plume consists mainly of soot and organic particle components in the PM1 size range, dominated by the sources traffic and cooking, with heating in addition during the cold season. Urban emissions increased the PM1 particle mass in the plume relative to the background value by on average 30% in summer and 10% in winter. Particularly strong increases were observed for polyaromatics, with a mean increase of 194% in summer and 131% in winter. Seasonal differences were also found in the size distribution of the plume particles: in winter, unlike in summer, no additional freshly nucleated small particles occurred, only particles grown by condensation and coagulation to between roughly 10 nm and 200 nm.
Trace-gas concentrations also differed, since chemical reactions depend on temperature and, in some cases, on radiation. Further applications of MoLa were demonstrated during a transfer drive from Germany to the Spanish Atlantic coast, which yielded a mapping of air quality along the route. It emerged that mainly urban agglomerations are affected by unprocessed air pollutants, whereas advected aged substances can influence any region. The investigation of air quality at sites with different exposure to anthropogenic sources extended this finding with an insight into the variation of air quality depending, among other things, on the weather situation and the proximity to emission sources. This showed that the measurement strategies and analysis methods developed can be applied not only to the investigation of a metropolitan emission plume but also to various other scientific and environmental monitoring questions.
Abstract:
"Silent mating type information regulation 2 type" 1 (SIRT1), the human homolog of the NAD+-dependent histone deacetylase Sir2 from yeast, has key functions in the regulation of metabolism, cellular aging and apoptosis. The latter is mediated mainly by deacetylation of p53 at Lys382 and the resulting reduced transcription of proapoptotic target genes. In the present work, the regulation of SIRT1 was investigated in the context of the DNA damage response. The serine/threonine kinase "homeodomain interacting protein kinase" 2 (HIPK2) plays a central role in the regulation of apoptosis, and therefore the modification and regulation of SIRT1 by HIPK2 was examined. By phosphorylating the tumor suppressor protein p53 at Ser46, HIPK2 activates its target protein and induces the transcription of proapoptotic p53 target genes. It has been reported that, after DNA damage, HIPK2 can potentiate the acetylation of p53 via a previously unknown mechanism. In the present work it could be shown that SIRT1 is phosphorylated by HIPK2 at serines 27 and 682 in vitro and in cells. Furthermore, the interaction of SIRT1 with HIPK2, as well as SIRT1 phosphorylation at serine 682, is increased by DNA-damaging adriamycin treatment. There is evidence that HIPK2 regulates the expression of SIRT1, since HIPK2 RNA interference leads to decreased SIRT1 protein and mRNA levels. A further interesting aspect is the observation that co-expression of PML-IV, which recruits SIRT1 as well as HIPK2 into PML nuclear bodies, enhances SIRT1 phosphorylation at serine 682. Phosphorylation of SIRT1 at serine 682 in turn interferes with SUMO-1 modification, which is important for localization in PML nuclear bodies. Remarkably, DNA damage-induced SIRT1 phosphorylation reduces the binding of the SIRT1 co-activator AROS but does not affect that of the inhibitor DBC1.
This leads to a reduction of the enzymatic activity of SIRT1 and consequently to less efficient deacetylation of the target protein p53. The results obtained in this doctoral thesis reveal a new molecular mechanism describing the HIPK2-modulated acetylation of p53 and the subsequent induction of apoptosis: HIPK2-mediated SIRT1 phosphorylation results in reduced SIRT1 deacetylase function and thus leads to enhanced acetylation-induced expression of proapoptotic p53 target genes.
Abstract:
Research on exact solutions of mixed integer problems is an active topic in the scientific community. State-of-the-art MIP solvers exploit a floating-point numerical representation, therefore introducing small approximations. Although such MIP solvers yield reliable results for the majority of problems, there are cases in which a higher accuracy is required. Indeed, it is known that for some applications floating-point solvers provide falsely feasible solutions, i.e. solutions marked as feasible because of approximations that would not pass a check with exact arithmetic and cannot be practically implemented. The framework of the current dissertation is SCIP, a mixed integer programming solver mainly developed at Zuse Institute Berlin. At the same institute we considered a new approach for exactly solving MIPs. Specifically, we developed a constraint handler to plug into SCIP, with the aim of analyzing the accuracy of provided floating-point solutions and computing exact primal solutions starting from floating-point ones. We conducted computational experiments to test the exact primal constraint handler under two main settings. Analysis mode allowed us to collect statistics about the reliability of current SCIP solutions. Our results confirm that floating-point solutions are accurate enough for many instances. However, our analysis highlighted the presence of numerical errors of varying magnitude. In enforce mode, our constraint handler is able to suggest exact solutions starting from the integer part of a floating-point solution. With the latter setting, results show a general improvement in the quality of the final solutions provided, without a significant loss of performance.
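The two modes described above can be illustrated in miniature with exact rational arithmetic. The sketch below is not SCIP's constraint-handler API; it only shows the underlying idea of checking a floating-point solution exactly and then repairing its continuous part after fixing the integers (all constraint data and tolerances are invented):

```python
from fractions import Fraction

# Toy constraint: 3x + (1/3)y <= 10, with x integer and y continuous.
# Exact rational coefficients avoid the rounding a floating-point
# solver introduces.
a_x, a_y, rhs = Fraction(3), Fraction(1, 3), Fraction(10)

def exactly_feasible(x, y):
    """Feasibility check in exact arithmetic: no tolerance at all."""
    return a_x * x + a_y * y <= rhs

# "Analysis mode" idea: a floating-point solver might return
# y = 3.0000000001 for x = 3, which passes a 1e-6 tolerance check
# but is not exactly feasible (3*3 + 3.0000000001/3 > 10).
x_fp, y_fp = 3, 3.0000000001

# "Enforce mode" idea: keep the integer part and recompute the
# continuous part exactly so the constraint holds with equality.
x_fix = Fraction(round(x_fp))
y_exact = (rhs - a_x * x_fix) / a_y   # solve 3x + y/3 = 10 for y

assert exactly_feasible(x_fix, y_exact)
```

In the real setting the repair step solves an exact LP over all continuous variables rather than a single equation, but the division of labour is the same: floating-point arithmetic proposes, rational arithmetic verifies.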
Abstract:
This study builds on a former student's work aimed at examining the influence of handedness on conference interpreting. In simultaneous interpreting (SI) both cerebral hemispheres participate in decoding the incoming message and in activating the motor functions for the production of the output signal. In right-handers language functions are mainly located in the left hemisphere, while left-handers have a more symmetrical representation of language functions. Given that with the development of interpreting skills and long work experience the interpreter's brain becomes less lateralized for language functions, in an initial phase left-handers may be «neurobiologically better suited for interpreting tasks» (Gran and Fabbro 1988: 37). To test this hypothesis, 9 students (5 right-handers and 4 left-handers) participated in a dual test of simultaneous and consecutive interpretation (CI) from English into Italian. The subjects were asked to interpret one text with their preferred ear and the other with the non-preferred one, since, according to neuropsychology, aural symmetry reflects cerebral symmetry. The aim of this study was to analyze: 1) the differences between the number of errors in consecutive and simultaneous interpretation with the preferred and non-preferred ear; 2) the differences in performance (in terms of number of errors) between right-handers and left-handers, both with the preferred and non-preferred ear; 3) the most frequent types of errors in right- and left-handers; 4) the influence of the degree of handedness on interpreting quality. The students' performances were analyzed in terms of errors of meaning, errors of numbers, omissions of text, omissions of numbers, inaccuracies, errors of nexus, and unfinished sentences. The results showed that: 1) in SI subjects committed fewer errors interpreting with the preferred ear, whereas in CI a slight advantage of the non-preferred ear was observed.
Moreover, in CI, right-handers committed fewer mistakes with the non-preferred ear than with the preferred one. 2) The total performance of left-handers proved to be better than that of right-handers. 3) In SI left-handers committed fewer errors of meaning and fewer errors of number than right-handers, whereas in CI left-handers committed fewer errors of meaning and more errors of number than right-handers. 4) As the degree of left-handedness increases, the number of errors committed also increases. Moreover, there is a statistically significant left-ear advantage for right-handers and a right-ear advantage for left-handers. Finally, those who interpreted with their right ear committed fewer errors of number than those who used their left ear or both ears.
Abstract:
The CBS-QB3 method was used to calculate the gas-phase free energy difference between 20 phenols and their respective anions, and the CPCM continuum solvation method was applied to calculate the free energy differences of solvation for the phenols and their anions. The CPCM solvation calculations were performed on both gas-phase and solvent-phase optimized structures. Absolute pKa calculations with solvent-phase optimized structures for the CPCM calculations yielded standard deviations and root-mean-square errors of less than 0.4 pKa unit. This study is the most accurate absolute determination of the pKa values of phenols, and is among the most accurate of any such calculations for any group of compounds. The ability to make accurate predictions of pKa values using a coherent, well-defined approach, without external approximations or fitting to experimental data, is of general importance to the chemical community. The solvent-phase optimized structures of the anions are absolutely critical to obtaining this level of accuracy, and yield a more realistic charge separation between the negatively charged oxygen and the ring system of the phenoxide anions.
Abstract:
Complete basis set and Gaussian-n methods were combined with Barone and Cossi's implementation of the polarizable conductor model (CPCM) continuum solvation methods to calculate pKa values for six carboxylic acids. Four different thermodynamic cycles were considered in this work. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol, to calculate pKa values with cycle 1. The complete basis set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. Thermodynamic cycles that include an explicit water in the cycle are not accurate when the free energy of solvation of a water molecule is used, but appear to become accurate when the experimental free energy of vaporization of water is used. This apparent improvement is an artifact of the standard state used in the calculation. Geometry relaxation in solution does not improve the results when using these latter cycles. The use of cycle 1 and the complete basis set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit. © 2001 John Wiley & Sons, Inc. Int J Quantum Chem, 2001
Abstract:
Complete Basis Set and Gaussian-n methods were combined with CPCM continuum solvation methods to calculate pKa values for six carboxylic acids. An experimental value of −264.61 kcal/mol for the free energy of solvation of H+, ΔGs(H+), was combined with a value for Ggas(H+) of −6.28 kcal/mol to calculate pKa values with Cycle 1. The Complete Basis Set gas-phase methods used to calculate gas-phase free energies are very accurate, with mean unsigned errors of 0.3 kcal/mol and standard deviations of 0.4 kcal/mol. The CPCM solvation calculations used to calculate condensed-phase free energies are slightly less accurate than the gas-phase models, and the best method has a mean unsigned error and standard deviation of 0.4 and 0.5 kcal/mol, respectively. The use of Cycle 1 and the Complete Basis Set models combined with the CPCM solvation methods yielded pKa values accurate to less than half a pKa unit.
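The arithmetic behind such a thermodynamic cycle is compact: the aqueous deprotonation free energy is assembled from the gas-phase value and the solvation terms, and pKa = ΔG_aq/(RT ln 10). A minimal sketch follows; only ΔGs(H+) = −264.61 kcal/mol is taken from the abstract, while the other free-energy values are illustrative placeholders, not results from the study:

```python
import math

R = 1.98720425864083e-3   # gas constant in kcal/(mol*K)
T = 298.15                # temperature in K

# Cycle: HA(g) -> A-(g) + H+(g), then solvate each species.
dG_gas = 340.0            # gas-phase deprotonation free energy (illustrative)
dGs_HA = -5.0             # solvation free energy of the neutral acid (illustrative)
dGs_A  = -75.0            # solvation free energy of the anion (illustrative)
dGs_H  = -264.61          # experimental solvation free energy of H+ (from the abstract)

# Aqueous deprotonation free energy assembled from the cycle
dG_aq = dG_gas + (dGs_A + dGs_H) - dGs_HA

# pKa from dG_aq = RT ln(10) * pKa
pKa = dG_aq / (R * T * math.log(10))
```

At 298 K the conversion factor RT ln 10 is about 1.364 kcal/mol, so an error of 1.4 kcal/mol in any free-energy term shifts the computed pKa by a full unit, which is why the sub-0.5 kcal/mol accuracies quoted above matter.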
Abstract:
Aims: Cardiac grafts from non-heartbeating donors (NHBDs) could significantly increase organ availability and reduce waiting-list mortality. Reluctance to exploit hearts from NHBDs arises from obligatory delays in procurement, leading to periods of warm ischemia and possible subsequent contractile dysfunction. Means for early prediction of graft suitability prior to transplantation are thus required for the development of heart transplantation programs with NHBDs. Methods and Results: Hearts (n = 31) isolated from male Wistar rats were perfused with modified Krebs-Henseleit buffer aerobically for 20 min, followed by global, no-flow ischemia (32°C) for 30, 50, 55 or 60 min. Reperfusion was unloaded for 20 min, and then loaded, in working mode, for 40 min. Left ventricular (LV) pressure was monitored using a micro-tip pressure catheter introduced via the mitral valve. Several hemodynamic parameters measured during early, unloaded reperfusion correlated significantly with LV work after 60 min reperfusion (p<0.001). Coronary flow and the production of lactate and lactate dehydrogenase (LDH) also correlated significantly with outcomes after 60 min reperfusion (p<0.05). Based on early reperfusion hemodynamic measures, a composite, weighted predictive parameter incorporating heart rate (HR), developed pressure (DP) and end-diastolic pressure was generated and evaluated against the HR×DP product after 60 min of reperfusion. Effective discriminating ability for this novel parameter was observed for four HR×DP cut-off values, particularly for ≥20×10³ mmHg·beats·min⁻¹ (p<0.01). Conclusion: Upon reperfusion of an NHBD heart, early evaluation, at the time of organ procurement, of cardiac hemodynamic parameters, as well as easily accessible markers of metabolism and necrosis, seems to accurately predict subsequent contractile recovery and could thus potentially be of use in guiding the decision to accept an ischemic heart for transplantation.
Abstract:
Ornithine transcarbamylase (OTC) deficiency is the most common inborn error of the urea cycle. The OTC locus is located on the short arm of the X chromosome. The authors report the case of a woman who gave birth to monozygotic male twins who later died of severe neonatal-onset hyperammonaemic encephalopathy caused by a novel mutation of the OTC gene. A post-mortem liver biopsy was taken from the second twin; afterwards, blood was drawn from the mother for examination. DNA sequence data showed that the mother was a carrier of the same novel mutation previously detected in her son. In OTC deficiency, detection of female carriers is important for genetic counselling and eventual prenatal diagnosis.
Abstract:
Background: For summative examinations, a minimum reliability of 0.8 is usually required; for practical examinations such as OSCEs, 0.7 is sometimes accepted (Downing 2004). But what does the precision of a measurement with a reliability of 0.7 or 0.8 actually mean? Method: Using statistical methods such as the standard error of measurement or generalizability theory, reliability can be translated into a confidence interval around an observed candidate score (Brennan 2003, Harvill 1991, McManus 2012). If, for example, a candidate scored 57 points on an examination, the true score fluctuates around this value because of the measurement imprecision of the examination (e.g. between 50 and 64 points). Measurement precision is particularly important near the pass mark. If the pass mark in our example were 60 points, the candidate with 57 points would formally have failed, but given the uncertainty around the measured score he might in truth have narrowly passed. Transferring this insight to all candidates of an examination, one can determine the number of borderline candidates: all those whose results lie so close to the pass mark that their individual result may be false positive or false negative. Results: The number of borderline candidates in an examination depends not only on the reliability but also on the performance of the candidates, the variance, the distance of the pass mark from the mean, and the skewness of the distribution. Using model data and real examination data, the relationship between reliability and the number of borderline candidates is presented in a way that is understandable even for non-statisticians.
It is shown why even a reliability of 0.8 will not provide satisfactory measurement precision in particular situations, while in some OSCEs the reliability can almost be ignored. Conclusions: Calculating or estimating the number of borderline candidates instead of the reliability improves, in an intuitive way, the understanding of an examination's precision. When deciding how many stations a summative OSCE needs or how long an MC examination should last, borderline candidates are a more valid decision criterion than reliability. References: Brennan, R.L. (2003) Generalizability Theory. New York: Springer. Downing, S.M. (2004) 'Reliability: on the reproducibility of assessment data', Medical Education, 38, 1006–12. Harvill, L.M. (1991) 'Standard Error of Measurement', Educational Measurement: Issues and Practice, 33–41. McManus, I.C. (2012) 'The misinterpretation of the standard error of measurement in medical education: a primer on the problems, pitfalls and peculiarities of the three different standard errors of measurement', Medical Teacher, 34, 569–76.
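The translation from reliability to an interval around an observed score runs through the standard error of measurement, SEM = SD·√(1 − r). A small sketch using the worked example from the abstract above; the score standard deviation is an assumed value, not taken from the text:

```python
import math

reliability = 0.80
sd = 7.5          # score standard deviation (illustrative assumption)
score = 57        # observed score from the abstract's example
pass_mark = 60

# Standard error of measurement: SEM = SD * sqrt(1 - reliability)
sem = sd * math.sqrt(1 - reliability)

# ~95% confidence interval around the observed score
lo, hi = score - 1.96 * sem, score + 1.96 * sem

# The candidate is "borderline" if the pass mark lies inside the
# interval: formally failed with 57 points, but may truly have passed.
borderline = lo <= pass_mark <= hi
```

With these numbers the interval spans roughly 50 to 64 points, matching the abstract's example; counting how many candidates' intervals contain the pass mark gives the number of borderline candidates.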
Abstract:
Neonatal and adult cardiomyocytes were isolated from rat hearts. Some of the adult myocytes were cultured to allow for cell dedifferentiation, a phenomenon thought to mimic the cell changes that occur in stressed myocardium, with myocytes regressing to a fetal pattern of metabolism and a stellate neonatal shape. Using fluorescence deconvolution microscopy, cells were probed with fluorescent markers and scanned for a number of proteins associated with ion control, calcium movements and cardiac function. Image analysis of deconvoluted image stacks and sequential real-time image recordings of calcium transients were made. All three myocyte groups were predominantly comprised of binucleate cells. Clustering of proteins to a single nucleus was a common observation, suggesting that one nucleus is active in protein synthesis pathways while the other assumes a 'dormant' or different role, and that cardiomyocytes might be mitotically active even in late development, or that specific protein syntheses could be targeted and regulated for reintroduction into the cell cycle. Such possibilities would extend cardiac disease-associated stem cell research and therapy options, while producing valuable insights into the developmental and death pathways of binucleate cardiomyocytes.
Abstract:
The acquisition of accurate information on the size of traits in animals is fundamental for the study of animal ecology and evolution and for their management. We demonstrate how morphological traits of free-ranging animals can be reliably estimated at very large observation distances of several hundred meters using ordinary digital photographic equipment and simple photogrammetric software. In our study, we estimated the length of horn annuli in free-ranging male Alpine ibex (Capra ibex) by taking already measured horn annuli of conspecifics on the same photographs as scaling units. Comparisons with hand-measured horn annuli lengths and repeatability analyses revealed a high accuracy of the photogrammetric estimates. If length estimates of specific horn annuli are based on multiple photographs, measurement errors of <5.5 mm can be expected. In the current study, the application of the described photogrammetric procedure increased the sample size of animals with known horn annuli length by an additional 104%. The presented photogrammetric procedure is broadly applicable and represents an easy, robust and cost-efficient method for measuring individuals in populations where animals are hard to capture or approach.
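The scaling trick described above is simple cross-multiplication: a structure of known real-world length on the same photograph converts pixels to millimetres. A minimal sketch, with all numbers invented for illustration:

```python
# A horn annulus of known, hand-measured length appears on the same
# photograph as the annulus we want to measure. Pixel lengths would
# come from photogrammetric software; the values here are invented.
known_length_mm = 62.0     # hand-measured reference annulus
known_length_px = 310.0    # its length on the photograph
target_length_px = 150.0   # unmeasured annulus on the same photo

# mm-per-pixel scale derived from the reference structure
scale = known_length_mm / known_length_px

# Estimated real-world length of the target annulus
target_length_mm = target_length_px * scale

# Averaging estimates from several photographs reduces the error
# (the study reports <5.5 mm when multiple photos are used).
estimates = [target_length_mm, 29.5, 30.4]
mean_estimate = sum(estimates) / len(estimates)
```

The single-photo estimate here is 30.0 mm; the multi-photo mean smooths out perspective and digitization error from any one image.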
Abstract:
After attending this presentation, attendees will: (1) understand how body height can be estimated from computed tomography data; and (2) gain knowledge about the accuracy and limitations of estimated body height. The presentation will impact the forensic science community by providing knowledge and competence that will enable attendees to develop formulas for single bones to reconstruct body height using postmortem Computed Tomography (p-CT) data. The estimation of Body Height (BH) is an important component of the identification of corpses and skeletal remains. Stature can be estimated with relative accuracy via the measurement of long bones, such as the femora. Compared to time-consuming maceration procedures, p-CT allows fast and simple measurements of bones. This study pursued four objectives concerning the accuracy of BH estimation via p-CT: (1) accuracy between measurements on native bone and p-CT-imaged bone (F1 according to Martin 1914); (2) intra-observer p-CT measurement precision; (3) accuracy between formula-based estimation of BH and conventional body length measurement during autopsy; and (4) accuracy of the different estimation formulas available.1 In a first step, the accuracy of measurements in the CT compared to those obtained using an osteometric board was evaluated on the basis of eight defleshed femora. Then the femora of 83 female and 144 male corpses of a Swiss population for which p-CTs had been performed were measured at the Institute of Forensic Medicine in Bern. After two months, 20 individuals were measured again in order to assess the intra-observer error. The mean age of the men was 53±17 years and that of the women 61±20 years. Additionally, the body length of the corpses was measured conventionally. The mean body length was 176.6±7.2 cm for men and 163.6±7.8 cm for women. The images, obtained using a six-slice CT, were reconstructed with a slice thickness of 1.25 mm.
Analysis and measurements of CT images were performed on a multipurpose workstation. As a forensic standard procedure, stature was estimated by means of the regression equations of Penning & Riepert, developed on a Southern German population, and, for comparison, also those referenced by Trotter & Gleser for “American White.”2,3 All statistical tests were performed with statistical software. No significant differences were found between the CT and osteometric board measurements. The double p-CT measurement of 20 individuals resulted in an absolute intra-observer difference of 0.4±0.3 mm. For both sexes, the correlation between the body length and the BH estimated from the F1 measurements was highly significant. The correlation coefficient was slightly higher for women. The differences in accuracy between the formulas were small. While the errors of BH estimation were generally ±4.5–5.0 cm, taking age into account increased accuracy by a few millimetres up to about 1 cm. BH estimations according to Penning & Riepert and Trotter & Gleser were slightly more accurate when age-at-death was taken into account.2,3 In this way, stature estimations in the group of individuals older than 60 years were improved by about 2.4 cm and 3.1 cm, respectively.2,3 The error of estimation is therefore about a third of the common ±4.7 cm error range. Femur measurements in p-CT allow very accurate BH estimations. Estimations according to Penning & Riepert led to good results that (barely) come closer to the true value than the frequently used formulas of Trotter & Gleser for “American White.”2,3 Therefore, the formulas of Penning & Riepert are also validated for this substantial recent Swiss population.
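Stature estimation from a single long bone is a linear regression of the form BH = a·(femur length) + b with sex-specific coefficients, optionally corrected for age-related height loss. The sketch below uses invented placeholder coefficients and an invented shrinkage rate, not the published Penning & Riepert or Trotter & Gleser values:

```python
# Hypothetical sex-specific regression coefficients: slope a
# (cm of stature per cm of femur length) and intercept b (cm).
# The published Penning & Riepert / Trotter & Gleser values differ.
COEFFS = {
    "male":   (2.30, 65.0),
    "female": (2.40, 58.0),
}

def estimate_stature(femur_cm, sex, age=None):
    """Estimate body height (cm) from femur length measured in p-CT."""
    a, b = COEFFS[sex]
    bh = a * femur_cm + b
    # The abstract notes that accounting for age improves accuracy;
    # a simple, hypothetical correction for post-60 height loss:
    if age is not None and age > 60:
        bh -= 0.06 * (age - 60)   # invented shrinkage rate, cm/year
    return bh

adult = estimate_stature(48.0, "male")          # 2.30*48 + 65
older = estimate_stature(48.0, "male", age=80)  # same, minus age correction
```

The age branch mirrors the abstract's finding that age-aware formulas gained a few millimetres to centimetres of accuracy in the over-60 group.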
Abstract:
A state-of-the-art inverse model, CarbonTracker Data Assimilation Shell (CTDAS), was used to optimize estimates of methane (CH4) surface fluxes using atmospheric observations of CH4 as a constraint. The model consists of the latest version of the TM5 atmospheric chemistry-transport model and an ensemble Kalman filter based data assimilation system. The model was constrained by atmospheric methane surface concentrations obtained from the World Data Centre for Greenhouse Gases (WDCGG). Prior methane emissions were specified for five sources: biosphere, anthropogenic, fire, termites and ocean, of which biosphere and anthropogenic emissions were optimized. Atmospheric CH4 mole fractions for 2007 from northern Finland calculated from prior and optimized emissions were compared with observations. It was found that the root mean squared errors of the posterior estimates were more than halved. Furthermore, inclusion of NOAA observations of CH4 from weekly discrete air samples collected at Pallas improved agreement between posterior CH4 mole fraction estimates and continuous observations, and resulted in reduced optimized biosphere emissions and their uncertainties in northern Finland.
Abstract:
Linear- and unimodal-based inference models for mean summer temperatures (partial least squares, weighted averaging, and weighted averaging partial least squares models) were applied to a high-resolution pollen and cladoceran stratigraphy from Gerzensee, Switzerland. The time-window of investigation included the Allerød, the Younger Dryas, and the Preboreal. Characteristic major and minor oscillations in the oxygen-isotope stratigraphy, such as the Gerzensee oscillation, the onset and end of the Younger Dryas stadial, and the Preboreal oscillation, were identified by isotope analysis of bulk-sediment carbonates of the same core and were used as independent indicators for hemispheric- or global-scale climatic change. In general, the pollen-inferred mean summer temperature reconstruction using all three inference models follows the oxygen-isotope curve more closely than the cladoceran curve. The cladoceran-inferred reconstruction suggests generally warmer summers than the pollen-based reconstructions, which may be an effect of terrestrial vegetation not being in equilibrium with climate due to migrational lags during the Late Glacial and early Holocene. Allerød summer temperatures range between 11 and 12°C based on pollen, whereas the cladoceran-inferred temperatures lie between 11 and 13°C. Pollen- and cladoceran-inferred reconstructions both suggest a drop to 9–10°C at the beginning of the Younger Dryas. Although the Allerød–Younger Dryas transition lasted 150–160 years in the oxygen-isotope stratigraphy, the pollen-inferred cooling took 180–190 years and the cladoceran-inferred cooling lasted 250–260 years. The pollen-inferred summer temperature rise to 11.5–12°C at the transition from the Younger Dryas to the Preboreal preceded the oxygen-isotope signal by several decades, whereas the cladoceran-inferred warming lagged.
Major discrepancies between the pollen- and cladoceran-inference models are observed for the Preboreal, where the cladoceran-inference model suggests mean summer temperatures of up to 14–15°C. Both pollen- and cladoceran-inferred reconstructions suggest a cooling that may be related to the Gerzensee oscillation, but there is no evidence for a cooling synchronous with the Preboreal oscillation as recorded in the oxygen-isotope record. For the Gerzensee oscillation the inferred cooling was ca. 1 and 0.5°C based on pollen and cladocera, respectively, which lies well within the inherent prediction errors of the inference models.