Abstract:
We describe a new method for identifying night-time clouds over the Pierre Auger Observatory using infrared data from the Imager instruments on the GOES-12 and GOES-13 satellites. We compare cloud identifications resulting from our method to those obtained by the Central Laser Facility of the Auger Observatory. Using our new method we can now produce cloud probability maps for the 3000 km² of the Pierre Auger Observatory twice per hour with a spatial resolution of ∼2.4 km by ∼5.5 km. Our method could also be applied to monitor cloud cover for other ground-based and space-based observatories.
Abstract:
In this work we aim to propose a new approach for preliminary epidemiological studies on Standardized Mortality Ratios (SMR) collected in many spatial regions. A preliminary study on SMRs aims to formulate hypotheses to be investigated via individual epidemiological studies that avoid the bias carried by aggregated analyses. Starting from observed disease counts and expected disease counts computed from reference-population disease rates, an SMR is derived in each area as the MLE under a Poisson assumption on each observation. Such estimators have high standard errors in small areas, i.e. where the expected count is low either because of the small population underlying the area or the rarity of the disease under study. Disease mapping models and other techniques for screening disease rates across the map, aiming to detect anomalies and possible high-risk areas, have been proposed in the literature under both the classical and the Bayesian paradigm. Our proposal approaches this issue with a decision-oriented method, which focuses on multiple testing control without, however, abandoning the preliminary-study perspective that an analysis of SMR indicators is meant to serve. We implement control of the FDR, a quantity largely used to address multiple comparison problems in the field of microarray data analysis but not usually employed in disease mapping. Controlling the FDR means providing an estimate of the FDR for a set of rejected null hypotheses. The small-areas issue raises difficulties in applying traditional methods for FDR estimation, which are usually based only on knowledge of the p-values (Benjamini and Hochberg, 1995; Storey, 2003). Tests evaluated by a traditional p-value provide weak power in small areas, where the expected number of disease cases is small. Moreover, tests cannot be assumed independent when spatial correlation between SMRs is expected, nor are they identically distributed when the population underlying the map is heterogeneous. The Bayesian paradigm offers a way to overcome the inappropriateness of p-value-based methods. Another peculiarity of the present work is to propose a fully Bayesian hierarchical model for FDR estimation in testing many null hypotheses of absence of risk. We will use concepts of Bayesian models for disease mapping, referring in particular to the Besag, York and Mollié model (1991), often used in practice for its flexible prior assumption on the distribution of risks across regions. The borrowing of strength between prior and likelihood typical of a hierarchical Bayesian model allows a single test (i.e. a test in a single area) to be evaluated by means of all the observations in the map under study, rather than just the single observation. This improves the test power in small areas and addresses more appropriately the spatial correlation issue, which suggests that relative risks are closer in spatially contiguous regions. The proposed model aims to estimate the FDR by means of the MCMC-estimated posterior probabilities $b_i$ of the null hypothesis (absence of risk) for each area. An estimate of the expected FDR conditional on the data ($\widehat{FDR}$) can be calculated for any set of $b_i$'s relative to areas declared high-risk (where the null hypothesis is rejected) by averaging the $b_i$'s themselves. The $\widehat{FDR}$ can be used to provide an easy decision rule for selecting high-risk areas, i.e. selecting as many areas as possible such that the $\widehat{FDR}$ does not exceed a prefixed value; we call these $\widehat{FDR}$-based decision (or selection) rules.
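To make the selection rule concrete, the following is a minimal Python sketch of an $\widehat{FDR}$-based decision rule, assuming the posterior null probabilities $b_i$ have already been extracted from the MCMC output; the function name and interface are illustrative, not taken from the thesis.

    import numpy as np

    def fdr_select(b, alpha=0.05):
        """Indices of areas declared high-risk by an FDR-hat rule.

        b     : NumPy array of posterior probabilities b_i of the null
                hypothesis (absence of risk), one per area (MCMC output).
        alpha : prefixed FDR level, e.g. 0.05 or 0.10.
        """
        order = np.argsort(b)                  # strongest rejections first
        # FDR-hat of each nested rejection set: running mean of sorted b_i's
        fdr_hat = np.cumsum(b[order]) / np.arange(1, b.size + 1)
        k = int(np.sum(fdr_hat <= alpha))      # largest set meeting alpha
        return order[:k]

    # Hypothetical usage:
    # fdr_select(np.array([0.01, 0.40, 0.03, 0.90]), alpha=0.10)
    # -> array([0, 2]); the selected set has FDR-hat = 0.02

Because the running mean of the sorted $b_i$'s is non-decreasing, taking the largest k with running mean at most alpha selects as many areas as possible while keeping $\widehat{FDR}$ below the prefixed value.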
The sensitivity and specificity of such a rule depend on the accuracy of the FDR estimate: over-estimation of the FDR causes a loss of power, while under-estimation produces a loss of specificity. Moreover, our model retains the interesting feature of providing an estimate of the relative risk values, as in the Besag, York and Mollié model (1991). A simulation study was set up to evaluate the model's performance in terms of FDR estimation accuracy, sensitivity and specificity of the decision rule, and goodness of the relative risk estimates. We chose a real map from which we generated several spatial scenarios whose disease counts vary according to the degree of spatial correlation, the area sizes, the number of areas where the null hypothesis is true, and the risk level in the remaining (high-risk) areas. In summarizing the simulation results we always consider the FDR estimation in sets constituted by all $b_i$'s lower than a threshold t. We show graphs of the $\widehat{FDR}$ and the true FDR (known by simulation) plotted against the threshold t to assess the FDR estimation. Varying the threshold, we can learn which FDR values can be accurately estimated by a practitioner willing to apply the model (by the closeness between $\widehat{FDR}$ and the true FDR). By plotting the calculated sensitivity and specificity (both known by simulation) against the $\widehat{FDR}$ we can check the sensitivity and specificity of the corresponding $\widehat{FDR}$-based decision rules. To investigate the over-smoothing of the relative risk estimates, we compare box-plots of such estimates in high-risk areas (known by simulation) obtained by both our model and the classic Besag, York and Mollié model. All the summary tools are worked out for all simulated scenarios (54 scenarios in total). Results show that the FDR is well estimated (in the worst case we get an over-estimation, hence a conservative FDR control) in the small-area, low-risk-level and spatially-correlated-risk scenarios that are our primary aims. In such scenarios we obtain good estimates of the FDR for all values less than or equal to 0.10, so a selection rule based on $\widehat{FDR} = 0.05$ or $\widehat{FDR} = 0.10$ can be suggested; the sensitivity of $\widehat{FDR}$-based decision rules is generally low, but specificity is high. In cases where the number of true alternative hypotheses (number of true high-risk areas) is small, FDR values up to 0.15 are also well estimated, and decision rules based on $\widehat{FDR} = 0.15$ gain power while maintaining a high specificity. On the other hand, in non-small-area and non-small-risk-level scenarios the FDR is under-estimated except for very small values (much lower than 0.05), resulting in a loss of specificity for a decision rule based on $\widehat{FDR} = 0.05$. In such scenarios, decision rules based on $\widehat{FDR} = 0.05$ or, even worse, $\widehat{FDR} = 0.10$ cannot be suggested because the true FDR is actually much higher. As regards the relative risk estimation, our model achieves almost the same results as the classic Besag, York and Mollié model. For this reason, our model is interesting for its ability to perform both the estimation of relative risk values and FDR control, except in non-small-area and large-risk-level scenarios. A case study is finally presented to show how the method can be used in epidemiology.
Abstract:
This work shows for the first time that native CSTB polymerizes upon addition of Cu2+ and DnaK (Hsp70). Cysteines are involved in the polymerization process; in particular, at least one cysteine is necessary. We propose that Cu2+ interacts with the thiol group of cysteine and oxidizes it. The oxidized cysteine modifies the CSTB structure, allowing interaction with DnaK/Hsp70 to occur. Thus, Cu2+ binding to CSTB exposes a site for DnaK, and this interaction allows the polymerization of CSTB. The polymers generated from native CSTB monomers are DTT-sensitive and may represent physiological polymers. Denatured CSTB does not require Cu2+ and polymerizes simply upon addition of DnaK. The polymers generated from denatured CSTB do not respond to DTT. They have characteristics similar to those of the toxic CSTB aggregates described in vivo in eukaryotic cells following CSTB over-expression. The interaction between CSTB and Hsp70 is shown by IP experiments. The interaction occurs with WT CSTB but not with the Δcys mutant, suggesting that disulphide bonds are involved. Metal-catalyzed oxidation of proteins involves reduction of the metal ion(s) bound to the protein itself and oxidation of neighboring amino acid residues, resulting in structural modification and destabilization of the molecule. In this work we propose that the cysteine thiol residue of CSTB is oxidized in the presence of Cu2+ and catalyzes the formation of disulphide bonds with Hsp70, which, once bound to CSTB, mediates its polymerization. In vivo, this molecular mechanism of CSTB polymerization could be regulated by the redox environment through the cysteine residue. This may imply that physiological CSTB polymers have a specific cellular function, different from the protease-inhibitor function known for the CSTB monomer. This hypothesis is interesting in relation to Progressive Myoclonus Epilepsy of type 1 (EPM1), a pathology usually caused by mutations in the CSTB gene. CSTB is a ubiquitous protein, yet EPM1 patients have problems only in the central nervous system. Perhaps physiological CSTB polymers have a specific function that is altered in people affected by EPM1.
Abstract:
153 offspring of a cross between the fungus-resistant grapevine cultivar 'Regent' and 'Lemberger', a classical fungus-susceptible cultivar, show quantitative trait variation with respect to resistance against Plasmopara viticola and Uncinula necator, as well as for further traits concerning, for example, the onset of berry ripening. Through genetic mapping with molecular markers and the localization of QTL effects, evidence for viticulturally relevant genome regions was obtained; this provides, for example, the basis for marker-assisted selection in breeding programs involving the resistance carrier 'Regent' (cf. also FISCHER et al., 2004). One major QTL for resistance against powdery mildew (Uncinula necator) and two major QTLs for resistance against the causal agent of downy mildew, Plasmopara viticola, appeared with high significance on three different linkage groups of 'Regent'. Regions relevant to the onset of berry ripening were also described. Through the isolation, sequencing, and subsequent bioinformatic analysis of individual marker fragments, a putative grapevine ortholog of T10P12.4 (a thioredoxin-like protein) was identified in close linkage to a major QTL maximum for Plasmopara viticola resistance, making it a candidate for involvement in the pathogen response. It was shown exemplarily that the mapping and QTL analysis methods employed, using PCR-based marker types such as SSR and AFLP together with accelerated analysis via computer-assisted capillary gel electrophoresis, can lead to the isolation of potential key genes within a reasonable time frame. The fundamental suitability of QTL analysis as an efficient tool for targeted breeding planning in viticulture was confirmed. Its application within this dissertation has laid the basis for the use of QTL information in the comparison of established cultivars and the development of new ones, and has contributed to the understanding of processes that may underlie the traits considered, such as fungal resistance. A large part of the data obtained also advances investigations of other cultivars and is transferable between varieties. Furthermore, opportunities have emerged for comparative studies between grapevine on the one hand and the model plant Arabidopsis thaliana as well as other crop plants on the other. The evidence for the central role and universal nature of redox signalling has opened up interesting perspectives for understanding physiological relationships across organisms, concerning, for example, the response to wounding or the pathogen response.
Abstract:
The Standard Model (SM) of particle physics describes the fundamental constituents of matter and their interactions very precisely. Despite this success, there are still open questions that the SM cannot answer. One not yet completed test is the measurement of the strength of the weak coupling between quarks. Neutral B and $\bar{B}$ mesons can transform into their antiparticles within their lifetime via a weak-interaction process. By measuring the B_s oscillation, the coupling V_td between the top (t) and down (d) quark flavors can be determined. All experiments performed up to the end of 2005 provided only a lower limit on the oscillation frequency, Δm_s > 14.4 ps⁻¹. The present thesis describes the measurement of the B_s oscillation frequency Δm_s in the semileptonic channel B_s → D_s⁻ μ⁺ X. The data come from proton-antiproton collisions recorded between April 2002 and March 2006 with the DØ detector at the Tevatron collider of the Fermi National Accelerator Laboratory at a center-of-mass energy of √s = 1.96 TeV, corresponding to an integrated luminosity of 1.3 fb⁻¹ (620 million events). For this oscillation measurement, the quark content of the B_s meson at production and at decay was determined, and the decay time was measured. After reconstruction and selection of the signal events, the charge of the muon fixes the quark content of the B_s meson at decay. In addition, the quark content at production was tagged: b quarks are produced in pairs in $p\bar{p}$ collisions, and the decay products of the second b hadron fix the quark content of the B_s meson at production. With a sensitivity of Δm_s = 14.5 ps⁻¹, a lower limit on the oscillation frequency of Δm_s > 15.5 ps⁻¹ was set. A maximum-likelihood fit yielded an oscillation frequency of Δm_s = 20 +2.5 −3.0 (stat+syst) ± 0.8 (syst, k) ps⁻¹ at a 90% confidence level, where the systematic uncertainty (syst, k) arises from the undetected neutrino momentum. Together with the corresponding oscillation of the B_d meson, this result yields a significant measurement of the coupling V_td, in agreement with other experiments on the weak quark couplings.
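For orientation, the textbook relations behind such a measurement (standard B-physics results, not quoted from the thesis) are the mixing probabilities and the frequency ratio connecting Δm_s to the CKM couplings. Neglecting CP violation and the width difference, a B_s produced at t = 0 is observed unmixed or mixed with probability densities

    P_{\mathrm{unmix/mix}}(t) = \frac{\Gamma_s}{2}\, e^{-\Gamma_s t}\,\left(1 \pm \cos \Delta m_s t\right),

and the ratio of the B_s and B_d oscillation frequencies,

    \frac{\Delta m_s}{\Delta m_d} = \frac{m_{B_s}}{m_{B_d}}\,\xi^2\,\left|\frac{V_{ts}}{V_{td}}\right|^2,
    \qquad \xi = \frac{f_{B_s}\sqrt{B_{B_s}}}{f_{B_d}\sqrt{B_{B_d}}},

constrains |V_td|, since most hadronic uncertainties cancel in the ratio ξ.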
Abstract:
Environmental computer models are deterministic models devoted to predicting environmental phenomena such as air pollution or meteorological events. Numerical model output is given in terms of averages over grid cells, usually at high spatial and temporal resolution. However, these outputs are often biased, of unknown calibration, and not equipped with any information about the associated uncertainty. Conversely, data collected at monitoring stations are more accurate, since they essentially provide the true levels. Given the leading role played by numerical models, it is now important to compare model output with observations. Statistical methods developed to combine numerical model output and station data are usually referred to as data fusion. In this work, we first combine ozone monitoring data with ozone predictions from the Eta-CMAQ air quality model in order to forecast in real time the current 8-hour average ozone level, defined as the average of the previous four hours, the current hour, and the predictions for the next three hours. We propose a Bayesian downscaler model based on first differences, with a flexible coefficient structure and an efficient computational strategy for fitting the model parameters. Model validation for the eastern United States shows a substantial improvement of our fully inferential approach over the current real-time forecasting system. Furthermore, we consider the introduction of temperature data from a weather forecast model into the downscaler, showing improved real-time ozone predictions. Finally, we introduce a hierarchical model to obtain the spatially varying uncertainty associated with numerical model output. We show how one can learn about such uncertainty through suitable stochastic data fusion modeling using some external validation data. We illustrate our Bayesian model by providing the uncertainty map associated with a temperature output over the northeastern United States.
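As a concrete illustration of this forecast target, here is a minimal Python sketch; the function name, inputs, and units are illustrative assumptions, not from the paper.

    import numpy as np

    def current_8h_avg_ozone(obs_prev4, obs_now, model_next3):
        """Real-time 'current 8-hour average' ozone as defined above:
        the four previous observed hourly values, the current observed
        hour, and numerical-model predictions for the next three hours.
        Inputs are hypothetical hourly concentrations (e.g. in ppb)."""
        window = np.concatenate([np.asarray(obs_prev4),
                                 [obs_now],
                                 np.asarray(model_next3)])
        if window.size != 8:
            raise ValueError("expected 4 past + 1 current + 3 forecast values")
        return float(window.mean())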
Abstract:
In this thesis, a systematic analysis of the $\bar{B} \to X_s\gamma$ photon spectrum in the endpoint region is presented. The endpoint region refers to a kinematic configuration of the final state in which the photon has a large energy, $m_b - 2E_\gamma = O(\Lambda_{QCD})$, while the jet has a large energy but small invariant mass. Using methods of soft-collinear effective theory and heavy-quark effective theory, it is shown that the spectrum can be factorized into hard, jet, and soft functions, each encoding the dynamics at a certain scale. The relevant scales in the endpoint region are the heavy-quark mass $m_b$, the hadronic energy scale $\Lambda_{QCD}$, and an intermediate scale $\sqrt{\Lambda_{QCD} m_b}$ associated with the invariant mass of the jet. It is found that the factorization formula contains two different types of contributions, distinguishable by the space-time structure of the underlying diagrams. On the one hand, there are the direct photon contributions, which correspond to diagrams with the photon emitted directly from the weak vertex. The resolved photon contributions, on the other hand, arise at $O(1/m_b)$ whenever the photon couples to light partons. In this work, these contributions are explicitly defined in terms of convolutions of jet functions with subleading shape functions. While the direct photon contributions can be expressed in terms of a local operator product expansion when the photon spectrum is integrated over a range larger than the endpoint region, the resolved photon contributions always remain non-local. Thus, they are responsible for a non-perturbative uncertainty in the partonic predictions. In this thesis, the effect of these uncertainties is estimated in two different phenomenological contexts. First, the hadronic uncertainties in the $\bar{B} \to X_s\gamma$ branching fraction, defined with a cut $E_\gamma > 1.6$ GeV, are discussed. It is found that the resolved photon contributions give rise to an irreducible theory uncertainty of approximately 5%. As a second application of the formalism, the influence of the long-distance effects on the direct CP asymmetry is considered. It is shown that these effects are dominant in the Standard Model and that a range of $-0.6\% < A_{CP}^{SM} < 2.8\%$ is possible for the asymmetry, if resolved photon contributions are taken into account.
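Schematically, the factorization described here takes the standard soft-collinear effective theory form (a sketch for orientation only; arguments, normalization, and the precise set of subleading terms are suppressed):

    \frac{d\Gamma}{dE_\gamma} \;\sim\; H(m_b,\mu)\; J\big(\sqrt{m_b\Lambda_{QCD}},\mu\big) \otimes S(\Lambda_{QCD},\mu)
        \;+\; \frac{1}{m_b} \sum_i \bar{J}_i \otimes \bar{s}_i \;+\; \dots

where $\otimes$ denotes a convolution, $H$, $J$ and $S$ are the hard, jet, and soft functions living at their respective scales, and the $1/m_b$ terms, convolutions of jet functions $\bar{J}_i$ with subleading shape functions $\bar{s}_i$, collect the resolved photon contributions.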
Abstract:
Monoclonal antibodies have emerged as some of the most promising therapeutics in oncology over the last decades. The generation of fully human tumor antigen-specific antibodies suitable for anti-tumor therapy is laborious and difficult to achieve. Autoreactive B cells expressing such antibodies are detectable in cancer patients and represent a suitable source of human antibodies. However, the isolation and cultivation of this cell type is challenging. A novel method was established to identify antigen-specific B cells. The method is based on the conversion of the antigen-independent CD40 signal into an antigen-specific one. To this end, the artificial fusion proteins ABCos1 and ABCos2 (Antigen-specific B cell co-stimulator) were generated, which consist of an extracellular association domain derived from the constant region of human immunoglobulin (Ig) G1, a transmembrane fragment, and an intracellular signal transducer domain derived from the cytoplasmic domain of the human CD40 receptor. Through association with endogenous Ig molecules, the heterodimeric complex allows antigen-specific stimulation of both the BCR and CD40. In this work, the ability of the ABCos constructs to associate with endogenous IgG molecules was demonstrated. Moreover, crosslinking of ABCos stimulates the activation of NF-κB in HEK293-lucNifty cells and induces proliferation in B cells. The stimulation of ABCos in transfected B cells results in an activation pattern different from that induced by the conventional CD40 signal: ABCos-activated B cells show a mainly IgG-isotype-specific activation of memory B cells and are characterized by high proliferation and differentiation into plasma cells. To validate the approach, a model system was set up: B cells were transfected with IVT-RNA encoding an anti-Plac1 B cell receptor (antigen-specific BCR), ABCos, or both. Stimulation with the BCR-specific Plac1 peptide induces proliferation only in the co-transfected B cell population. Moreover, we tested the method in human IgG+ memory B cells from CMV-infected blood donors, in which stimulation of ABCos-transfected B cells with a CMV peptide induces antigen-specific expansion. These findings show that challenging ABCos-transfected B cells with a specific antigen results in the activation and expansion of antigen-specific B cells, allowing not only the identification but also the cultivation of these B cells. The described method will help to identify antigen-specific B cells, can be used to characterize (tumor) autoantigen-specific B cells, and allows the generation of fully human antibodies that can be used as diagnostic tools as well as in cancer therapy.
Abstract:
Landslide hazard and risk are growing as a consequence of climate change and demographic pressure. Land-use planning represents a powerful tool to manage this socio-economic problem and build sustainable, landslide-resilient communities. Landslide inventory maps are a cornerstone of land-use planning and, consequently, their quality assessment is a burning issue. This work aimed to define the quality parameters of a landslide inventory and to assess its spatial and temporal accuracy with regard to its possible applications to land-use planning. To this end, I proceeded according to a two-step approach. An overall assessment of the accuracy of the geographic positioning of the data was performed on four case study sites located in the Italian Northern Apennines. The quantification of the overall spatial and temporal accuracy, instead, focused on the Dorgola Valley (Province of Reggio Emilia). The assessment of spatial accuracy involved a comparison between remotely sensed and field survey data, as well as an innovative fuzzy-like analysis of a multi-temporal landslide inventory map. Conversely, long- and short-term landslide temporal persistence was appraised over a period of 60 years with the aid of 18 remotely sensed image sets. These results were eventually compared with the current Territorial Plan for Provincial Coordination (PTCP) of the Province of Reggio Emilia. The outcome of this work suggests that geomorphologically detected and mapped landslides are a significant approximation of a more complex reality. In order to convey this intrinsic uncertainty to the end users, a new form of cartographic representation is needed; a fuzzy raster landslide map may be an option. With regard to land-use planning, landslide inventory maps, if appropriately updated, were confirmed to be essential decision-support tools. This research, however, proved that their spatial and temporal uncertainty discourages any direct use as zoning maps, especially when zoning itself is associated with statutory or advisory regulations.
Abstract:
This thesis deals with the control of the self-assembly and microstructure of organic semiconductors and their use in OFETs. In Chapters 3, 4 and 5, a new solution-based processing method, termed solvent vapor diffusion, is devised to control the self-assembly of semiconductor molecules on the surface. This method is a powerful tool that allows precise control over the microstructure, as demonstrated in Chapter 3 for a D-A dyad consisting of hexa-peri-hexabenzocoronene (HBC) as donor and perylene diimide (PDI) as acceptor. The combination of surface modification and solvent vapor can compensate for dewetting effects, so that the desired microstructure and molecular organization can be achieved on the surface. In Chapters 4 and 5, this method was employed to control the self-assembly of dithieno[2,3-d;2’,3’-d’]benzo[1,2-b;4,5-b’]dithiophene (DTBDT) and of a cyclopentadithiophene-benzothiadiazole copolymer (CDT-BTZ). The results may stimulate further studies and shed light on other high-performance conjugated polymers. In Chapters 6 and 7, monolayers, and subsequently the microstructure, of two conjugated polymers, poly(2,5-bis(3-alkylthiophen-2-yl)thieno[3,2-b]thiophene) (PBTTT) and poly{[N,N′-bis(2-octyldodecyl)naphthalene-1,4,5,8-bis(dicarboximide)-2,6-diyl]-alt-5,5′-(2,2′-bithiophene)} (P(NDI2OD-T2)), were deposited on rigid surfaces by dip-coating. This is the first time that polymer monolayers have been deposited from solution. This approach can be extended to a wide range of other conjugated polymers. In Chapter 8, PDI-CN2 films were successfully deposited as monolayers, bilayers and trilayers on surfaces of differing roughness. For the first time, the influence of surface roughness on solution-processed thin films was clearly described.
Abstract:
The long-term outcome of antiretroviral therapy (ART) cannot be assessed in controlled trials. We aimed to analyse trends in the population effectiveness of ART in the Swiss HIV Cohort Study over the last decade.
Abstract:
For safety reasons, intravenous thrombolysis with alteplase for ischemic stroke is capped at a maximal dose of 90 mg. Little is known about the clinical outcomes of stroke patients weighing >100 kg, who may benefit less from thrombolysis because of this dose limitation.
Abstract:
Since its discovery in Greenland ice cores, the millennial-scale climatic variability of the last glacial period has been increasingly documented at all latitudes, with studies focusing mainly on Marine Isotopic Stage 3 (MIS 3; 28–60 thousand years before present, hereafter ka), which is characterized by short Dansgaard-Oeschger (DO) events. Recent and new results obtained on the EPICA and NorthGRIP ice cores now precisely describe the rapid variations of Antarctic and Greenland temperature during MIS 5 (73.5–123 ka), a time period of relatively high sea level. The results display a succession of abrupt events associated with long Greenland Interstadial (GIS) phases, enabling us to highlight a sub-millennial-scale climatic variability characterized by (i) short-lived, abrupt warming events preceding some GIS (precursor-type events) and (ii) abrupt warming events at the end of some GIS (rebound-type events). The occurrence of these sub-millennial-scale events is suggested to be driven by insolation at high northern latitudes together with the internal forcing of ice sheets. Thanks to a recent NorthGRIP-EPICA Dronning Maud Land (EDML) common timescale over MIS 5, the bipolar sequence of climatic events can be established at millennial to sub-millennial timescales. This shows that, for extraordinarily long stadial durations, the accompanying Antarctic warming amplitude cannot be described by a simple linear relationship between the two, as expected from the bipolar seesaw concept. We also show that when ice sheets are extensive, Antarctica does not necessarily warm during the whole Greenland Stadial (GS), as the thermal bipolar seesaw model would predict, questioning the use of Greenland ice core temperature records as a proxy for AMOC changes throughout the glacial period.
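For reference, in its simplest form the thermal bipolar seesaw model (Stocker and Johnsen, 2003) predicts the Antarctic temperature anomaly $T_S$ as a sign-inverted, low-pass-filtered image of the Greenland anomaly $T_N$:

    T_S(t) = -\frac{1}{\tau} \int_0^{\infty} T_N(t - t')\, e^{-t'/\tau}\, dt'

where $\tau$ is the characteristic timescale of the Southern Ocean heat reservoir. It is against this prediction, Antarctic warming throughout each Greenland stadial with an amplitude growing with stadial duration, that the deviations reported above are measured.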