Analysis of the influence of epitope flanking regions on MHC class I restricted antigen presentation
Abstract:
Peptides presented by MHC class I molecules for CTL recognition are derived mainly from cytosolic proteins. For antigen presentation on the cell surface, epitopes require correct processing by cytosolic and ER proteases, efficient TAP transport, and sufficient MHC class I binding affinity. The efficiency of epitope generation depends not only on the epitope itself but also on its flanking regions. In this project, the influence of the C-terminal region of the model epitope SIINFEKL (S8L) from chicken ovalbumin (aa 257-264) on antigen processing was investigated. S8L is a well-characterized epitope presented on the murine MHC class I molecule H-2Kb. The Flp-In 293Kb cell line was transfected with different constructs, each expressing the S8L sequence with a different defined C-terminal flanking region. The constructs differed at the first two positions C-terminal of the S8L epitope, the so-called P1’ and P2’ positions. At each of these sites, all 20 amino acids were substituted in turn and tested for their influence on H-2Kb/S8L presentation on the surface of the Flp-In 293Kb cells. The complex was detected by immunostaining and flow cytometry. The prevailing assumption is that proteasomal cleavage is exclusively responsible for generating the final C-termini of CTL epitopes. However, recent publications showed that TPPII (tripeptidyl peptidase II) is required for generating the correct C-terminus of the HLA-A3-restricted HIV epitope Nef(73-82). Against this background, the dependence of S8L generation from the designed constructs on proteasomal cleavage was characterized using proteasome inhibitors. The results indicate that the amino acid flanking the C-terminus of an epitope is crucial for proteasomal cleavage. Furthermore, partially proteasome-independent S8L generation from specific S8L-precursor peptides was observed. Hence, the possibility that other endo- or carboxypeptidases in the cytosol are involved in the correct trimming of the C-terminus of antigenic peptides for MHC class I presentation was investigated, using specific knockdowns and inhibitors against the candidate peptidases. In parallel, a purification strategy to identify the novel peptidase was established. Purified fractions showing endopeptidase activity were further analyzed by mass spectrometry, and several candidate peptidases (e.g., Lon) were identified, which remain to be characterized.
Abstract:
In this thesis two related topics are investigated: first, the earliest stages of massive star formation, for which I study the physical conditions and properties of massive clumps in different evolutionary stages and their CO depletion; second, the influence that high-mass stars have on the nearby material and on star formation activity. I characterise the gas and dust temperature, mass, and density of a sample of massive clumps, and analyse how these properties vary from quiescent clumps, without any sign of active star formation, to clumps likely hosting a zero-age main-sequence star. I briefly discuss CO depletion and recent observations of several molecular species, tracers of hot cores and/or shocked gas, in a subsample of these clumps. The issue of CO depletion is addressed in more detail in a larger sample consisting of the brightest sources in the ATLASGAL survey: using a radiative transfer code, I investigate how the depletion changes from dark clouds to more evolved objects and compare its evolution to what happens in the low-mass regime. Finally, I derive the physical properties of the molecular gas in the photon-dominated region adjacent to the HII region G353.2+0.9 in the vicinity of Pismis 24, a young, massive cluster containing some of the most massive and hottest stars known in our Galaxy. I derive the initial mass function (IMF) of the cluster and study the star formation activity in its surroundings. Much of the data analysis is done with a Bayesian approach; a separate chapter is therefore dedicated to the concepts of Bayesian statistics.
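For orientation, the Bayesian parameter estimation underlying such an analysis rests on Bayes' theorem; the symbols below are generic placeholders (θ for clump parameters such as temperature and mass, D for the observed fluxes), not the specific model of the thesis:

$$ p(\theta \mid D) \;=\; \frac{p(D \mid \theta)\, p(\theta)}{\int p(D \mid \theta')\, p(\theta')\, \mathrm{d}\theta'} $$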
Abstract:
The aim of this work was to identify markers associated with production traits in the pig genome using different approaches. We focused on the Italian Large White pig breed, using genome-wide association studies (GWAS) and applying a selective genotyping approach to increase the power of the analyses. Furthermore, we searched the pig genome using Next Generation Sequencing (NGS) Ion Torrent technology, combining the selective genotyping approach with deep sequencing for SNP discovery. Two other studies were carried out with a different approach: allele frequency changes for SNPs affecting candidate genes, and at the genome-wide level, were analysed to identify selection signatures driven by the selection programme over the last 20 years. This approach confirmed that a great number of markers may affect production traits and that they are captured by classical selection programmes. GWAS revealed 123 significant or suggestively significant SNPs associated with back fat thickness and 229 associated with average daily gain. Sixteen copy number variant regions were more frequent in lean or fat pigs and showed that different copy numbers of those regions could have a limited impact on fatness; these regions often appear to be involved in food intake and behaviour, besides affecting genes involved in metabolic pathways and their expression. By combining NGS with the selective genotyping approach, new variants were discovered, of which at least 54 are worth analysing in association studies. The study of groups of pigs subjected to stringent selection showed that the allele frequencies of some loci can change drastically if those loci are associated with traits targeted by the selection schemes. These approaches could in the future be integrated into genomic selection plans.
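As a minimal sketch of the selective genotyping idea (comparing allele counts between the extreme phenotypic tails), consider the following; the function name and all counts are invented for illustration and are not taken from the study:

```python
# Minimal sketch of a selective-genotyping association test:
# compare a SNP's allele counts between the "lean" and "fat"
# phenotypic tails with a chi-square test of independence.
# All names and numbers here are illustrative, not from the study.
from scipy.stats import chi2_contingency

def tail_association(lean_counts, fat_counts):
    """Each argument is (count of allele A, count of allele B) in one tail."""
    table = [list(lean_counts), list(fat_counts)]
    chi2, p_value, dof, expected = chi2_contingency(table)
    return chi2, p_value

chi2, p = tail_association(lean_counts=(180, 20), fat_counts=(120, 80))
print(f"chi2 = {chi2:.2f}, p = {p:.2e}")
```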
Abstract:
The thesis investigates the nucleon structure as probed by the electromagnetic interaction. Among the most basic observables reflecting the electromagnetic structure of the nucleon are the form factors, which have been studied by means of elastic electron-proton scattering with ever-increasing precision for several decades. In the timelike region, corresponding to proton-antiproton annihilation into an electron-positron pair, the present experimental information is much less accurate; however, high-precision form factor measurements are planned for the near future. About 50 years after the first pioneering measurements of the electromagnetic form factors, polarization experiments stirred up the field, since their results were found to be in striking contradiction with the findings of previous form factor investigations based on unpolarized measurements. Triggered by the conflicting results, a whole new field emerged studying two-photon exchange corrections to elastic electron-proton scattering, which appeared to be the most likely explanation of the discrepancy. The main part of this thesis deals with theoretical studies of two-photon exchange, investigated particularly with regard to form factor measurements in the spacelike as well as in the timelike region. An extraction of the two-photon amplitudes in the spacelike region through a combined analysis of unpolarized cross-section measurements and polarization experiments is presented. Furthermore, predictions of the two-photon exchange effects on the e+p/e-p cross-section ratio are given for several new experiments that are currently ongoing. The two-photon exchange corrections are also investigated in the timelike region in the process $\bar{p}p \to e^+e^-$ by means of two factorization approaches; these corrections are found to be smaller than those obtained for the spacelike scattering process. The influence of the two-photon exchange corrections on cross-section measurements, as well as on asymmetries that allow direct access to the two-photon exchange contribution, is discussed. Furthermore, one of the factorization approaches is applied to investigate two-boson exchange effects in parity-violating electron-proton scattering. In the last part of this work, the process $\bar{p}p \to \pi^0 e^+e^-$ is analyzed with the aim of determining the form factors in the so-called unphysical timelike region below the two-nucleon production threshold. For this purpose, a phenomenological model is used that provides a good description of the available data for the real-photon process $\bar{p}p \to \pi^0\gamma$.
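For context, unpolarized (Rosenbluth) extractions rely on the one-photon-exchange form of the reduced cross section, which two-photon exchange corrects; this is the standard textbook expression, not a formula specific to this thesis:

$$ \sigma_R = G_E^2(Q^2) + \frac{\tau}{\varepsilon}\, G_M^2(Q^2), \qquad \tau = \frac{Q^2}{4M^2}, \qquad \varepsilon = \left[1 + 2(1+\tau)\tan^2\frac{\theta_e}{2}\right]^{-1}, $$

where $M$ is the proton mass and $\theta_e$ the electron scattering angle.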
Abstract:
In many industries, for example the automotive industry, digital mock-ups are used to verify the design and the function of a product on a virtual prototype. One application is the verification of safety clearances between individual components, the so-called clearance analysis. For selected components, engineers determine whether they maintain a prescribed safety clearance to the surrounding components, both in their rest position and during a motion. If components fall below the safety clearance, their shape or position must be changed. For this, it is important to know exactly which regions of the components violate the safety clearance.

In this thesis we present a solution for the real-time computation of all regions of two geometric objects that fall below the safety clearance. Each object is given as a set of primitives (e.g., triangles). For every instant at which a transformation is applied to one of the objects, we compute the set of all primitives that fall below the safety clearance and call it the set of all tolerance-violating primitives. We present a complete solution that can be divided into the following three major topics.

In the first part of this thesis we study algorithms that check whether two triangles are tolerance-violating. We present several approaches for triangle-triangle tolerance tests and show that dedicated tolerance tests are considerably faster than the distance computations used so far. The focus of our work is the development of a novel tolerance test that operates in dual space. In all our benchmarks for computing all tolerance-violating primitives, our dual-space approach proves to be the fastest.

The second part of this thesis deals with data structures and algorithms for the real-time computation of all tolerance-violating primitives between two geometric objects. We develop a combined data structure composed of a flat hierarchical data structure and several uniform grids. To guarantee efficient running times, it is particularly important to account for the required safety clearance in the design of the data structures and the query algorithms. We present solutions that quickly determine the set of primitive pairs to be tested. In addition, we develop strategies for recognizing primitives as tolerance-violating without computing an expensive primitive-primitive tolerance test. In our benchmarks we show that our solutions are able to compute, in real time, all tolerance-violating primitives between two complex geometric objects, each consisting of many hundreds of thousands of primitives.

In the third part we present a novel, memory-optimized data structure for managing the cell contents of the uniform grids used before; we call this data structure Shrubs. Previous approaches to reducing the memory footprint of uniform grids rely mainly on hashing methods, which, however, do not reduce the memory consumption of the cell contents. In our application, neighbouring cells often have similar contents. Exploiting these redundant cell contents, our approach losslessly compresses the cell contents of a uniform grid to one fifth of their original size and decompresses them at runtime.

Finally, we show how our solution for computing all tolerance-violating primitives can be applied in practice. Beyond pure clearance analysis, we present applications to various path-planning problems.
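As a minimal sketch of the early-accept/early-reject strategy described above, consider the following bounding-sphere filter; this is a generic illustration, not the dual-space test of the thesis, and bounding_sphere and exact_test are hypothetical helpers:

```python
# Early-accept/early-reject filter for a triangle-triangle tolerance
# test with safety clearance d_safe. Generic sketch, not the thesis's
# dual-space test; bounding_sphere and exact_test are hypothetical.
import math

def tolerance_violating(tri_a, tri_b, d_safe, bounding_sphere, exact_test):
    """True if the triangle pair comes closer than the clearance d_safe."""
    c_a, r_a = bounding_sphere(tri_a)   # (center, radius) of enclosing sphere
    c_b, r_b = bounding_sphere(tri_b)
    center_dist = math.dist(c_a, c_b)
    if center_dist - r_a - r_b > d_safe:
        return False  # early reject: even the closest points are farther apart
    if center_dist + r_a + r_b <= d_safe:
        return True   # early accept: even the farthest points are within d_safe
    return exact_test(tri_a, tri_b, d_safe)  # exact tolerance test only if needed
```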
Abstract:
The use of antibiotics is highest in primary care and is directly associated with antibiotic resistance in the community. We assessed regional variations in antibiotic use in primary care in Switzerland and explored prescription patterns in relation to the use of point of care tests. Defined daily doses of antibiotics per 1000 inhabitants per day (DDD(1000pd)) were calculated for the year 2007 from reimbursement data of the largest Swiss health insurer, based on the Anatomical Therapeutic Chemical (ATC) classification and the DDD methodology recommended by the WHO. We present ecological associations using descriptive and regression analyses. We analysed data from 1 067 934 adults, representing 17.1% of the Swiss population. The rate of outpatient antibiotic prescriptions in the entire population was 8.5 DDD(1000pd), and varied between 7.28 and 11.33 DDD(1000pd) for northwest Switzerland and the Lake Geneva region, respectively. The DDD(1000pd) values for the three most prescribed antibiotic classes were 2.90 for amoxicillin and amoxicillin-clavulanate, 1.77 for fluoroquinolones, and 1.34 for macrolides. Regions with higher DDD(1000pd) showed higher seasonal variability in antibiotic use and lower use of all point of care tests. In regression analysis for each class of antibiotics, the use of any point of care test was consistently associated with fewer antibiotic prescriptions. Prescription rates of primary care physicians varied between Swiss regions and were lower in northwest Switzerland and among physicians using point of care tests. Ecological studies are prone to bias; whether point of care tests reduce antibiotic use has to be investigated in pragmatic primary care trials.
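For orientation, the WHO metric used here is computed, in outline, as follows; this is the standard definition, not a formula quoted from the study:

$$ \mathrm{DDD}_{1000\,\mathrm{pd}} \;=\; \frac{\text{amount dispensed (g)} \,/\, \text{DDD value (g)}}{\text{population} \times \text{days observed}} \times 1000 $$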
Abstract:
In the past few decades the impacts of climate warming have been significant in alpine glaciated regions. Many valley glaciers formerly linked as distributary glaciers to high-level icecaps have decoupled at their icefalls, exposing major escarpments and generating a suite of dynamic landforms dominated by mass wasting. Ice-dominated landforms, here termed icy debris fans, develop rapidly by ice avalanching, rockfall, and icy debris flow. Field-based reconnaissance studies at two alpine settings, the Wrangell Mountains of Alaska and the Southern Alps of New Zealand, provide a preliminary morphogenetic model of the spatial and temporal evolution of icy debris fans in a range of alpine settings. The influence of these processes on landform evolution is largely unrecognized in the literature dealing with post-glacial landform adjustment, known as the paraglacial. A better understanding of these dynamic processes will be increasingly important because of the extreme geohazards characterizing these areas. Our field studies show that after glacier decoupling, icy debris fans begin to form along the base of bedrock escarpments at the mouths of catchments and prograde over valley glaciers. The presence of a distinct catchment, apex, and fan morphology distinguishes these landforms from other landforms common in periglacial hillslope settings that receive abundant clastic debris and ice. Ice avalanching is the most abundant process involved in icy debris fan formation. Fans developed below weakly incised catchments are dominated by ice avalanching and are composed primarily of ice with minor lithic detritus. Typically, avalanches fall into the fan catchments, where the sediments transform into grainflows that flow onto the fans. Once on the fans, avalanche deposits ablate rapidly, flattening and concentrating lithic fragments at the surface. Icy debris fans may become thick enough to become glaciers with splay crevasse systems. Fans developed below larger, more complex catchments are composed of higher proportions of lithic detritus resulting from temporary storage of ice and lithic detritus within the catchment. Episodic outbursts of meltwater from the icecap may mix with the stored sediments and mobilize icy debris flows (mixtures of ice and lithic clasts) onto the fans. Our observations indicate that the entire evolutionary cycle of icy debris fans probably occurs during an early paraglacial interval (i.e., decades to 100 years). Observations comparing avalanche frequency, volume, and fan morphologic evolution at the Alaska site between 2006 and 2010 illustrate a complex response among icy debris fans even within the same cirque, where one fan may be growing while others are downwasting because of differences in ice supply controlled by their respective catchments and icecap contributions. As ice supply from the icecap diminishes through time, icy debris fans rapidly downwaste and eventually evolve into talus cones that receive occasional but ephemeral ice avalanches.