980 results for Systematic errors


Relevance:

60.00%

Publisher:

Abstract:

Postgraduate Program in Cartographic Sciences - FCT

Relevance:

60.00%

Publisher:

Abstract:

In accelerating dark energy models, the estimates of the Hubble constant, H0, from the Sunyaev-Zel'dovich effect (SZE) and the X-ray surface brightness of galaxy clusters may depend on the matter content (Ω_M), the curvature (Ω_K) and the equation of state parameter ω. In this article, by using a sample of 25 angular diameter distances of galaxy clusters described by the elliptical β model obtained through the SZE/X-ray technique, we constrain H0 in the framework of a general ΛCDM model (arbitrary curvature) and a flat XCDM model with a constant equation of state parameter ω = p_x/ρ_x. In order to avoid the use of priors on the cosmological parameters, we apply a joint analysis involving the baryon acoustic oscillations (BAO) and the CMB shift parameter signature. By taking into account the statistical and systematic errors of the SZE/X-ray technique we obtain for the nonflat ΛCDM model H0 = 74^{+5.0}_{-4.0} km s^-1 Mpc^-1 (1σ), whereas for a flat universe with constant equation of state parameter we find H0 = 72^{+5.5}_{-4.0} km s^-1 Mpc^-1 (1σ). By assuming that galaxy clusters are described by a spherical β model these results change to H0 = 61^{+8.0}_{-7.0} and H0 = 59^{+9.0}_{-6.0} km s^-1 Mpc^-1 (1σ), respectively. The results from the elliptical description are in good agreement with independent studies from the Hubble Space Telescope key project and recent estimates based on the Wilkinson Microwave Anisotropy Probe, thereby suggesting that the combination of these three independent phenomena provides an interesting method to constrain the Hubble constant. As an extra bonus, the adoption of the elliptical description is revealed to be a quite realistic assumption. Finally, by comparing these results with a recent determination for a flat ΛCDM model using only the SZE/X-ray technique and BAO, we see that the geometry has a very weak influence on the H0 estimates for this combination of data.
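For orientation, the way H0 enters such an analysis can be written down from the standard FLRW relations (a textbook reconstruction, not formulas quoted from this paper): the SZE/X-ray angular diameter distances are compared with

```latex
D_A(z) = \frac{c}{(1+z)\,H_0\,\sqrt{|\Omega_K|}}\,
         \operatorname{sinn}\!\left(\sqrt{|\Omega_K|}\int_0^z \frac{dz'}{E(z')}\right),
\qquad
E(z) = \sqrt{\Omega_M (1+z)^3 + \Omega_K (1+z)^2 + \Omega_X (1+z)^{3(1+\omega)}},
```

where sinn(x) is sin(x) for Ω_K < 0 and sinh(x) for Ω_K > 0 (reducing to D_A = (c/((1+z)H0)) ∫ dz'/E(z') in the flat case), so every measured distance scales as 1/H0 and the H0 constraint inherits a dependence on Ω_M, Ω_K and ω.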

Relevance:

60.00%

Publisher:

Abstract:

We present the results of an operational use of experimentally measured optical tomograms to determine state characteristics (purity), avoiding any reconstruction of quasiprobabilities. We also develop a natural way to estimate the errors (including both statistical and systematic ones) from an analysis of the experimental data themselves. The precision of the experiment can be increased by postselecting the data with minimal (systematic) errors. We demonstrate these techniques by considering coherent and photon-added coherent states measured via time-domain improved homodyne detection. The operational use and precision of the data allowed us to check purity-dependent uncertainty relations and uncertainty relations for Shannon and Rényi entropies.
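For context, the Shannon-entropy relation referred to above has, for two conjugate quadratures x and p with [x, p] = i, the standard Białynicki-Birula–Mycielski form (a textbook statement, not a formula quoted from this paper):

```latex
H(x) + H(p) \ge \ln(\pi e),
\qquad
H(x) = -\int w(x)\,\ln w(x)\,dx,
```

where w(x) and w(p) are the marginal distributions read off from the optical tomogram at orthogonal phases, and the purity entering the purity-dependent relations is μ = Tr ρ².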

Relevance:

60.00%

Publisher:

Abstract:

Objective To evaluate and compare the intraobserver and interobserver reliability and agreement for the biparietal diameter (BPD), abdominal circumference (AC), femur length (FL) and estimated fetal weight (EFW) obtained by two-dimensional ultrasound (2D-US) and three-dimensional ultrasound (3D-US). Methods Women with singleton pregnancies between 24 and 40 weeks of gestation were invited to participate in this study. They were examined using 2D-US in a blinded manner, twice by one observer, with a scan by a second observer in between, to determine BPD, AC and FL. In each of the three examinations, three 3D-US datasets (head, abdomen and thigh) were acquired for measurements of the same parameters. We determined EFW using Hadlock's formula. Systematic errors between 3D-US and 2D-US were examined using the paired t-test. Reliability and agreement were assessed by intraclass correlation coefficients (ICCs), limits of agreement (LoA), the SD of the differences and the proportion of differences below arbitrary cut-off points. Results We evaluated 102 singleton pregnancies. No significant systematic error between 2D-US and 3D-US was observed. The ICC values were higher for 3D-US in both intra- and interobserver evaluations; however, only for FL was there no overlap in the 95% CI. The LoA values were wider for 2D-US, suggesting that random errors were smaller when using 3D-US. Additionally, the SD values determined from the 3D-US differences were smaller than those obtained for 2D-US, and higher proportions of differences fell below the arbitrarily defined cut-off points when using 3D-US. Conclusion 3D-US improved the reliability and agreement of fetal measurements and EFW compared with 2D-US.
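As an illustration of the agreement statistics named in this abstract (SD of differences, limits of agreement, proportion of differences below a cut-off), here is a minimal Python sketch; the input arrays and the cut-off are synthetic placeholders, not the study's data:

```python
# Hedged sketch (not the authors' code): Bland-Altman-style agreement
# statistics for paired repeated measurements.
import numpy as np

def agreement_stats(m1, m2, cutoff):
    """m1, m2: paired measurements (e.g. repeated femur length, mm)."""
    d = np.asarray(m1) - np.asarray(m2)
    bias = d.mean()                     # systematic error (paired-t numerator)
    sd = d.std(ddof=1)                  # SD of differences (random error)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)   # 95% limits of agreement
    prop = np.mean(np.abs(d) < cutoff)  # proportion of differences < cut-off
    return bias, sd, loa, prop

# Synthetic example: two blinded repeat measurements on 102 fetuses.
rng = np.random.default_rng(0)
truth = rng.uniform(45, 75, 102)        # FL in mm
obs1 = truth + rng.normal(0, 1.0, 102)
obs2 = truth + rng.normal(0, 1.0, 102)
print(agreement_stats(obs1, obs2, cutoff=2.0))
```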

Relevance:

60.00%

Publisher:

Abstract:

Photometric data in the UBV(RI)_C system have been acquired for 80 solar analog stars for which we have previously derived highly precise atmospheric parameters T_eff, log g, and [Fe/H] using high-resolution, high signal-to-noise ratio spectra. UBV and (RI)_C data for 46 and 76 of these stars, respectively, are published for the first time. Combining our data with those from the literature, colors in the UBV(RI)_C system, with ≈0.01 mag precision, are now available for 112 solar analogs. Multiple linear regression is used to derive the solar colors from these photometric data and the spectroscopically derived T_eff, log g, and [Fe/H] values. To minimize the impact of systematic errors in the model-dependent atmospheric parameters, we use only the data for the 10 stars that most closely resemble our Sun, i.e., the solar twins, and derive the following solar colors: (B − V)_⊙ = 0.653 ± 0.005, (U − B)_⊙ = 0.166 ± 0.022, (V − R)_⊙ = 0.352 ± 0.007, and (V − I)_⊙ = 0.702 ± 0.010. These colors are consistent, within the 1σ errors, with those derived using the entire sample of 112 solar analogs. We also derive the solar colors using the relation between spectral-line-depth ratios and observed stellar colors, i.e., with a completely model-independent approach, and without restricting the analysis to solar twins. We find (B − V)_⊙ = 0.653 ± 0.003, (U − B)_⊙ = 0.158 ± 0.009, (V − R)_⊙ = 0.356 ± 0.003, and (V − I)_⊙ = 0.701 ± 0.003, in excellent agreement with the model-dependent analysis.
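A minimal sketch of the model-dependent step, assuming synthetic data: regress a color on (T_eff, log g, [Fe/H]) with ordinary least squares and evaluate the fit at the adopted solar parameters (here T_eff = 5777 K, log g = 4.44, [Fe/H] = 0 are assumed reference values):

```python
# Hedged sketch (not the paper's code or data): multiple linear regression
# of an observed color on spectroscopic atmospheric parameters.
import numpy as np

rng = np.random.default_rng(1)
n = 112
params = np.column_stack([
    rng.normal(5777, 60, n),    # T_eff [K]
    rng.normal(4.44, 0.05, n),  # log g
    rng.normal(0.0, 0.05, n),   # [Fe/H]
])
# synthetic B-V colors with an assumed linear dependence plus 0.01 mag noise
bv = 0.653 - 3.0e-4 * (params[:, 0] - 5777) + rng.normal(0, 0.01, n)

X = np.column_stack([np.ones(n), params])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, bv, rcond=None)   # multiple linear regression

sun = np.array([1.0, 5777.0, 4.44, 0.0])        # adopted solar parameters
print("predicted (B - V)_sun:", sun @ coef)
```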

Relevance:

60.00%

Publisher:

Abstract:

Objectives The current study investigated to what extent task-specific practice can help reduce the adverse effects of high pressure on performance in a simulated penalty kick task. Based on the assumption that practice attenuates the required attentional resources, it was hypothesized that task-specific practice would enhance resilience against high pressure. Method Participants practiced a simulated penalty kick in which they had to move a lever to the side opposite to the goalkeeper's dive. The goalkeeper moved at different times before ball contact. Design Before and after task-specific practice, participants were tested on the same task under both low- and high-pressure conditions. Results Before practice, the performance of all participants worsened under high pressure; however, whereas one group of participants merely required more time to respond correctly to the goalkeeper's movement and showed a typical logistic relation between the percentage of correct responses and the time available to respond, a second group showed a linear relationship between the two, implying that they tended to make systematic errors for the shortest times available. Practice eliminated the debilitating effects of high pressure in the former group, whereas in the latter group high pressure continued to affect performance negatively. Conclusions Task-specific practice increased resilience to high pressure; however, the effect was a function of how participants initially responded to high pressure, that is, prior to practice. The results are discussed within the framework of attentional control theory (ACT).
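The logistic pattern described for the first group can be made concrete with a small curve-fitting sketch; the data points and parameter names below are invented for illustration, not taken from the study:

```python
# Hedged sketch: fit percentage of correct responses vs. time available
# with a logistic curve.
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, t50, slope):
    """P(correct) in % as a function of time available t (ms)."""
    return 100.0 / (1.0 + np.exp(-(t - t50) / slope))

t = np.array([100, 200, 300, 400, 500, 600], float)   # ms before ball contact
pct = np.array([8, 25, 55, 80, 93, 98], float)        # % correct (synthetic)

(t50, slope), _ = curve_fit(logistic, t, pct, p0=(300, 80))
print(f"time for 50% correct ~ {t50:.0f} ms, slope ~ {slope:.0f} ms")
```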

Relevance:

60.00%

Publisher:

Abstract:

Since the first underground nuclear explosion, carried out in 1958, the analysis of seismic signals generated by these sources has allowed seismologists to refine the travel times of seismic waves through the Earth and to verify the accuracy of location algorithms (the ground truth for these sources was often known). Long international negotiations have been devoted to limiting the proliferation and testing of nuclear weapons. In particular, the Comprehensive Nuclear-Test-Ban Treaty (CTBT) was opened for signature in 1996; although it has been signed by 178 States, it has not yet entered into force. The Treaty underlines the fundamental role of seismological observations in verifying compliance, by detecting and locating seismic events and identifying the nature of their sources. A precise definition of the hypocentral parameters represents the first step in discriminating whether a given seismic event is natural or not. In case a specific event is deemed suspicious by the majority of the State Parties, the Treaty contains provisions for conducting an on-site inspection (OSI) in the area surrounding the epicenter of the event, located through the International Monitoring System (IMS) of the CTBT Organization. An OSI is supposed to include the use of passive seismic techniques in the area of the suspected clandestine underground nuclear test; in fact, high-quality seismological systems are thought to be capable of detecting and locating the very weak aftershocks triggered by underground nuclear explosions in the first days or weeks following the test.

This PhD thesis deals with the development of two different seismic location techniques. The first, known as the double-difference joint hypocenter determination (DDJHD) technique, is aimed at locating closely spaced events at a global scale. The locations obtained by this method are characterized by a high relative accuracy, although the absolute location of the whole cluster remains uncertain; we eliminate this problem by introducing a priori information: the known location of a selected event. The second technique concerns reliable estimates of the back azimuth and apparent velocity of seismic waves from local events of very low magnitude recorded by a tripartite array at a very local scale. For both techniques, we have used cross-correlation among digital waveforms in order to minimize the errors linked with incorrect phase picking. The cross-correlation method relies on the similarity between the waveforms of a pair of events at the same station, at the global scale, and on the similarity between the waveforms of the same event at two different sensors of the tripartite array, at the local scale.

After preliminary tests of the reliability of our location techniques based on simulations, we applied both methodologies to real seismic events. The DDJHD technique was applied to a seismic sequence that occurred in the Turkey-Iran border region, using the data recorded by the IMS. At the beginning, the algorithm was applied to the differences among the original arrival times of the P phases, so the cross-correlation was not used. We found that the considerable geometrical spreading, noticeable in the standard locations (namely the locations produced by the analysts of the International Data Center (IDC) of the CTBT Organization, taken as our reference), was substantially reduced by the application of our technique. This is what we expected, since the methodology was applied to a sequence of events for which we can suppose a real closeness among the hypocenters, belonging to the same seismic structure. Our results point out the main advantage of this methodology: the systematic errors affecting the arrival times have been removed or at least reduced. The introduction of the cross-correlation did not bring evident improvements to our results: the two sets of locations (without and with the application of the cross-correlation technique) are very similar to each other. This suggests that the cross-correlation did not substantially improve the precision of the manual pickings; probably the pickings reported by the IDC are good enough to make the random picking error less important than the systematic error on the travel times. A further explanation for the modest contribution of the cross-correlation is that the events in our data set generally do not have a good signal-to-noise ratio (SNR): the selected sequence is composed of weak events (magnitude 4 or smaller) and the signals are strongly attenuated because of the large distance between the stations and the hypocentral area.

At the local scale, in addition to the cross-correlation, we performed a signal interpolation in order to improve the time resolution. The algorithm so developed was applied to the data collected during an experiment carried out in Israel between 1998 and 1999. The results pointed out the following relevant conclusions: a) it is necessary to correlate waveform segments corresponding to the same seismic phases; b) it is not essential to select the exact first arrivals; and c) relevant information can also be obtained from the maximum-amplitude wavelet of the waveforms (particularly in bad SNR conditions). Another remarkable point of our procedure is that its application does not demand a long processing time, so the user can immediately check the results. During a field survey, this feature will make possible a quasi real-time check, allowing immediate optimization of the array geometry if suggested by the results at an early stage.
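A minimal sketch of the relative-timing step described above, under the assumption that a parabolic fit around the correlation peak is an acceptable stand-in for the thesis' interpolation scheme (this is illustrative code, not the thesis software):

```python
# Hedged sketch: cross-correlate two digitized waveforms and refine the lag
# to sub-sample precision with a parabolic fit around the correlation peak.
import numpy as np
from scipy.signal import correlate

def relative_lag(w1, w2, dt):
    """Return the delay (s) of w1 relative to w2 (positive: w1 lags)."""
    c = correlate(w1 - w1.mean(), w2 - w2.mean(), mode="full")
    k = int(np.argmax(c))
    # parabolic interpolation of the peak for sub-sample resolution
    if 0 < k < len(c) - 1:
        denom = c[k - 1] - 2 * c[k] + c[k + 1]
        frac = 0.5 * (c[k - 1] - c[k + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    return ((k + frac) - (len(w2) - 1)) * dt

# toy example: the same wavelet delayed by 23.4 samples
dt = 0.01                                   # 100 Hz sampling
t = np.arange(0, 10, dt)
wavelet = np.exp(-((t - 5) / 0.3) ** 2) * np.sin(2 * np.pi * 3 * t)
shifted = np.interp(t - 23.4 * dt, t, wavelet, left=0.0, right=0.0)
print(relative_lag(shifted, wavelet, dt))   # ~ +0.234 s
```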

Relevance:

60.00%

Publisher:

Abstract:

In the context of a testing laboratory, one of the most important aspects to deal with is the measurement result. Whenever decisions are based on measurement results, it is important to have some indication of the quality of those results. Many standards are available in every area of noise measurement, but without an expression of uncertainty it is impossible to judge whether two results are in compliance or not. ISO/IEC 17025 is an international standard concerning the competence of calibration and testing laboratories. It contains the requirements that testing and calibration laboratories have to meet if they wish to demonstrate that they operate a quality system, are technically competent and are able to generate technically valid results. ISO/IEC 17025 deals specifically with the requirements for the competence of laboratories performing testing and calibration and for the reporting of the results, which may or may not contain opinions and interpretations. The standard requires appropriate methods of analysis to be used for estimating the uncertainty of measurement. From this point of view, for a testing laboratory performing sound power measurements according to specific ISO standards and European Directives, the evaluation of measurement uncertainty is the most important factor to deal with. A sound power level measurement according to ISO 3744:1994, performed with a limited number of microphones distributed over a surface enveloping a source, is affected by a certain systematic error and a related standard deviation. Comparing measurements carried out with different microphone arrays is difficult, because the results are affected by systematic errors and standard deviations that depend on the number of microphones arranged on the surface, their spatial positions and the complexity of the sound field. A statistical approach can give an overview of the differences between sound power levels evaluated with different microphone arrays and an evaluation of the errors that affect this kind of measurement. In contrast to the classical approach, which tends to follow the ISO GUM, this thesis presents a different point of view on the problem of comparing results obtained from different microphone arrays.
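For concreteness, a sketch of the ISO 3744-style computation being compared across arrays: the energetic average of the microphone levels plus the surface term 10·log10(S/S0). Background-noise and environmental corrections (K1, K2) are deliberately omitted, and all values are illustrative, not from the thesis:

```python
# Hedged sketch: sound power level from N microphone SPL readings on an
# enveloping measurement surface of area S (corrections omitted).
import numpy as np

def sound_power_level(spl_db, surface_area_m2, s0=1.0):
    """L_W = energetic mean SPL + 10 log10(S / S0)."""
    spl_db = np.asarray(spl_db, float)
    lp_mean = 10 * np.log10(np.mean(10 ** (spl_db / 10)))  # energetic mean
    return lp_mean + 10 * np.log10(surface_area_m2 / s0)

# ten microphones on a hemisphere of radius 2 m: S = 2*pi*r^2
spl = [78.2, 77.9, 78.5, 79.1, 78.0, 77.6, 78.8, 78.3, 77.7, 78.4]
print(f"L_W = {sound_power_level(spl, 2 * np.pi * 2.0**2):.1f} dB")
```

Different arrays sample the sound field at different points, so both lp_mean and its spread change with the array, which is exactly the comparison problem the statistical approach addresses.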

Relevance:

60.00%

Publisher:

Abstract:

This thesis is about three major aspects of the identification of top quarks. First comes the understanding of their production mechanism, their decay channels and how to translate theoretical formulae into programs that can simulate such physical processes using Monte Carlo techniques. In particular, the author has been involved in the introduction of the POWHEG generator in the framework of the ATLAS experiment. POWHEG is now fully used as the benchmark program for the simulation of ttbar pair production and decay, along with MC@NLO and AcerMC: this is shown in chapter one. The second chapter illustrates the ATLAS detector and its sub-units, such as the calorimeters and muon chambers. It is very important to evaluate their efficiency in order to fully understand what happens during the passage of radiation through the detector, and to use this knowledge in the calculation of final quantities such as the ttbar production cross section. The last part of this thesis concerns the evaluation of this quantity using the so-called "golden channel" of ttbar decays, yielding one energetic charged lepton, four particle jets and a significant amount of missing transverse energy due to the neutrino. The most important systematic errors arising from the various parts of the calculation are studied in detail. The jet energy scale, trigger efficiency, Monte Carlo models, reconstruction algorithms and luminosity measurement are examples of what can contribute to the uncertainty on the cross section.
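In its simplest counting form, such a cross-section evaluation reduces to σ = (N_obs − N_bkg)/(ε·L), with ε the signal efficiency and L the integrated luminosity. A hedged toy example with invented numbers, propagating two of the listed systematic sources in quadrature (a simplification; the thesis treats many more):

```python
# Hedged sketch: counting-experiment cross section with statistical and
# (simplified) systematic uncertainties. All numbers are placeholders.
import math

n_obs, n_bkg = 1200, 350.0          # observed events, estimated background
eff, d_eff = 0.15, 0.012            # signal efficiency and its syst. error
lumi, d_lumi = 35.0, 1.3            # integrated luminosity [pb^-1] and error

sigma = (n_obs - n_bkg) / (eff * lumi)
stat = math.sqrt(n_obs) / (eff * lumi)                 # Poisson statistics
syst = sigma * math.hypot(d_eff / eff, d_lumi / lumi)  # relative errors
print(f"sigma = {sigma:.1f} +- {stat:.1f} (stat) +- {syst:.1f} (syst) pb")
```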

Relevance:

60.00%

Publisher:

Abstract:

In experiments on laser-cooled Ca+ ions stored in a linear Paul trap, the lifetime of the metastable 3D_{5/2} level was determined to be 1100(18) ms by observing quantum jumps of single ions. Systematic errors due to quenching collisions or Stark mixing caused by the trapping field lie below the accuracy achieved. Deviations from earlier measurements could be explained by a previously neglected dependence of the lifetime on the power of the repumping laser. The final result is in good agreement with recent theoretical values. In further measurements on ten ions, some measurement series showed a clear reduction of the lifetime compared with a single ion; more coincident decays of two and three ions were observed than would be expected for independent particles. In an ion crystal, a spatial separation of atomic states was achieved: part of the ions of a crystal of a few hundred ions was pumped into the metastable state, which is completely decoupled from the cooling lasers. Owing to sympathetic cooling these ions continue to be cooled and the crystal does not melt. The light pressure exerted by the cooling lasers sorts the ions by atomic state, because the laser-cooled ions experience a recoil while the others do not. For future experiments, improvements to the experimental setup were initiated; in particular, methods and components for improved frequency stabilization of the diode lasers were developed.
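The quantum-jump method can be illustrated with a short sketch: dwell times in the metastable level are exponentially distributed, so the mean dwell time is the maximum-likelihood estimate of the lifetime, with relative statistical error 1/√N. The data below are simulated around the measured value of 1100(18) ms; this is not the thesis analysis code:

```python
# Hedged sketch: lifetime from quantum-jump dwell times (synthetic data).
import numpy as np

rng = np.random.default_rng(2)
dwell = rng.exponential(1.100, 4000)   # simulated dwell times [s]

tau = dwell.mean()                     # MLE for an exponential distribution
dtau = tau / np.sqrt(len(dwell))       # statistical uncertainty
print(f"tau = {tau*1e3:.0f} +- {dtau*1e3:.0f} ms")
```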

Relevance:

60.00%

Publisher:

Abstract:

In this work, calorimetric low-temperature detectors were used for the first time in accelerator mass spectrometry (AMS), a standard method for determining extremely small isotope ratios, to measure the isotope ratio of 236U to 238U. The uranium isotope 236U is produced in the neutron-capture reaction 235U(n,gamma)236U and can therefore be used as a monitor nuclide for neutron fluxes. The detectors consist of a sapphire absorber onto which a superconducting aluminium film, serving as thermistor, is evaporated. An energetic heavy ion deposits its kinetic energy as heat in the absorber, and the resulting temperature change is detected via the change in resistance of the superconductor. In previous experiments at GSI, such detectors achieved a relative energy resolution of (1-4) × 10^-3 in an energy range of E = 5-300 MeV/amu for a variety of ions from neon to uranium. The energy range typical for accelerator mass spectrometry is E = 0.1-1 MeV/amu. As a first step, the systematic investigation of the detector properties was therefore extended to this energy range. These investigations, as well as the AMS measurements, were carried out at the VERA tandem accelerator of the Institute for Isotope Research and Nuclear Physics of the University of Vienna. In an energy range of 10-60 MeV, a relative energy resolution of ΔE/E = 7 × 10^-3 was initially achieved for various ions (13C, 197Au, 238U). This surpasses the resolution of conventional ionization detectors by about an order of magnitude. By reducing thermal and electronic noise contributions, the resolution for uranium at an energy of 17 MeV was improved to ΔE/E = 4.6 × 10^-3 in a second experiment. The energy response of the detector was linear over the entire observed energy range and independent of the ion mass; no pulse-height defect was observed down to a level of 0.1%. These results show that such detectors are a valuable tool for heavy-ion physics at relatively low ion energies. With the achieved energy resolution it was possible to determine the 236U/238U isotope ratio for several samples of natural uranium. To establish a material standard for uranium in AMS, the 236U/238U isotope ratio was determined as precisely as possible for two samples from the mine ''K.u.K. Joachimsthal''. The results of this work agree well with earlier measurements performed with a conventional detector system; both the statistical and the systematic error could be reduced significantly. For a further sample, extracted from the water of a uranium-bearing spring in Bad Gastein, an isotope ratio of 6.1 × 10^-12 was measured. This is the smallest 236U/238U isotope ratio measured so far and represents an improvement in sensitivity by an order of magnitude. The achieved energy resolution also makes it possible to use the detectors for direct mass identification of heavy ions by means of a combined energy/time-of-flight measurement. In first test measurements within this work, a mass resolution of ΔM/M = (8.5-11.0) × 10^-3 was achieved. In a first test of the use of these detectors for the detection of so-called ''superheavy elements'' (Z >= 112), the large dynamic range allowed the reaction products and their subsequent alpha decays to be detected simultaneously and time-resolved with high energy resolution.
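A minimal sketch of the combined energy/time-of-flight mass identification mentioned above, using the non-relativistic relation E = Mv²/2, i.e. M = 2E(t/L)². The flight path and timing values are invented placeholders, not measured values from the thesis:

```python
# Hedged sketch: heavy-ion mass from measured energy and time of flight.
C_M_PER_NS = 0.299792458     # speed of light [m/ns]
AMU_MEV = 931.494            # atomic mass unit [MeV/c^2]

def mass_amu(energy_mev, tof_ns, flight_path_m):
    beta = (flight_path_m / tof_ns) / C_M_PER_NS   # v/c (non-relativistic)
    m_mev = 2.0 * energy_mev / beta**2             # M c^2 in MeV
    return m_mev / AMU_MEV

# e.g. a 17 MeV ion over an assumed 3 m flight path, arriving after ~808 ns
print(mass_amu(17.0, 808.0, 3.0))   # ~238 amu, i.e. consistent with 238U
```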

Relevance:

60.00%

Publisher:

Abstract:

The A4 collaboration at the Mainz Microtron MAMI investigates the structure of the proton by means of elastic scattering of polarized electrons off unpolarized hydrogen. For longitudinal polarization, a parity-violating asymmetry in the cross section is measured, which provides information about the strangeness contribution to the vector form factors of the proton. For transverse polarization, azimuthal asymmetries appear that arise from two-photon-exchange contributions to the cross section and give access to the imaginary part of the two-photon amplitude. In this work, measurements at two momentum transfers, with both longitudinal and transverse polarization, were carried out and analyzed. The main tasks were the extraction of the raw asymmetries from the data, the correction of the raw asymmetries for instrumental asymmetries, the estimation of the systematic error, and the determination of the strange form factors from the parity-violating asymmetries. For the measurements with longitudinal polarization, the asymmetries were determined to be A = (-5.59 +- 0.57stat +- 0.29syst) ppm at Q^2 = 0.23 (GeV/c)^2 and A = (-1.39 +- 0.29stat +- 0.12syst) ppm at Q^2 = 0.11 (GeV/c)^2. From these, the linear combinations of the strange form factors are obtained as GEs + 0.225 GMs = 0.029 +- 0.034 and GEs + 0.106 GMs = 0.070 +- 0.035, respectively. Both results are consistent with other experiments and point to a non-vanishing strangeness contribution to the form factors. For the measurements with transverse polarization, the azimuthal asymmetries were determined to be A = (-8.51 +- 2.31stat +- 0.89syst) ppm at E = 855 MeV and Q^2 = 0.23 (GeV/c)^2 and A = (-8.59 +- 0.89stat +- 0.83syst) ppm at E = 569 MeV and Q^2 = 0.11 (GeV/c)^2. The size of the measured asymmetries demonstrates that in two-photon exchange not only the proton ground state but above all excited intermediate states contribute substantially.
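For orientation, a raw counting asymmetry of this kind and its statistical error take the standard form A = (N+ − N−)/(N+ + N−) with ΔA ≈ 1/√(N+ + N−) for A ≪ 1; ppm-level asymmetries therefore require of order 10^12 events. A toy sketch with invented event counts (not A4 data):

```python
# Hedged sketch: helicity-correlated counting asymmetry and its
# statistical error. Event counts are invented placeholders.
import math

n_plus, n_minus = 4.0003e12, 4.0004e12   # events per helicity state

A = (n_plus - n_minus) / (n_plus + n_minus)
dA = 1.0 / math.sqrt(n_plus + n_minus)   # statistical error for A << 1
print(f"A = {A*1e6:.2f} +- {dA*1e6:.2f} ppm")
```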

Relevance:

60.00%

Publisher:

Abstract:

Quantitative branch determination in polyolefins by solid- and melt-state 13C NMR has been investigated. Both methods were optimised toward sensitivity per unit time. While solid-state NMR was shown to give quick albeit only qualitative results, melt-state NMR allowed highly time-efficient, accurate branch quantification. Comparison of spectra obtained using spectrometers operating at 300, 500 and 700 MHz 1H Larmor frequency, with 4 and 7 mm MAS probeheads, showed that the best sensitivity was achieved at 500 MHz using a 7 mm 13C-1H optimised high-temperature probehead. For materials available in large quantities, static melt-state NMR, using large-diameter detection coils and high coil filling at 300 MHz, was shown to produce results comparable to melt-state MAS measurements in less time. While the use of J-coupling-mediated polarisation transfer techniques was shown to be possible, direct polarisation via single-pulse excitation proved more suitable for branch quantification in the melt state. Artificial line broadening, introduced by FID truncation, could be reduced by the use of π pulse-train heteronuclear dipolar decoupling. This decoupling method, when combined with an extended duty cycle, allowed a significant improvement in resolution. Standard setup, processing and analysis techniques were developed to minimise systematic errors contributing to the measured branch contents. The final optimised melt-state MAS NMR method was shown to allow time-efficient quantification of comonomer content and distribution in both polyethylene- and polypropylene-co-α-olefins. The sensitivity of the technique was demonstrated by quantifying branch concentrations of 8 branches per 100,000 CH2 for an industrial 'linear' polyethylene in only 13 hours. Even lower degrees of 3-8 long-chain branches per 100,000 carbons could be estimated in just 24 hours for a series of γ-irradiated polypropylene homopolymers.
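Branch contents of the kind quoted above follow from per-carbon integral ratios, as in the sketch below. The integral values are invented, chosen only to reproduce the order of magnitude of 8 branches per 100,000 CH2; this is not the thesis' analysis procedure in detail:

```python
# Hedged sketch: branch content from 13C NMR integrals (per-carbon basis),
# branches per 100 000 CH2 = 1e5 * I_branch / I_backbone.
i_branch_site = 1.6e-5    # integral of a branch-site carbon (invented)
i_ch2_backbone = 0.20     # integral of the main-chain CH2 signal (invented)

branches_per_1e5_ch2 = 1e5 * i_branch_site / i_ch2_backbone
print(f"{branches_per_1e5_ch2:.0f} branches per 100 000 CH2")  # -> 8
```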

Relevance:

60.00%

Publisher:

Abstract:

Diseases of the skeletal system, such as osteoporosis or osteoarthritis, are, alongside cardiovascular diseases and tumors, among the most common human diseases. A better understanding of the formation and maintenance of bone and cartilage tissue is therefore of particular importance. Many previous approaches to identifying the relevant genes, their products and their interactions are based on the investigation of pathological situations, so the function of many genes has been described only in the context of disease. Studies targeting gene activity during the normal development of bone- and cartilage-forming tissues have been carried out far less often.

One of the developmentally most interesting tissues is the epiphyseal growth plate of the long bones. In this growth plate, particularly in fetal tissue, very high activity is expected of those genes involved in bone and cartilage formation. In the present work, RNA was therefore isolated from the growth plate of calf bones and a cDNA library was constructed, from which about 4000 clones were sequenced within a classical EST project. The analysis yielded an expression profile comprising roughly 900 genes, and many transcripts for components of the regulatory and structural constituents of bone and cartilage development were identified. In addition to the typical genes for components of bone development, components of embryonic developmental processes are also clearly represented. The former primarily include the collagens, above all collagen II alpha 1, by far the most highly expressed gene in the fetal growth plate. After the ribosomal proteins, the collagens, with about 10% of all evaluable sequences, constitute the second-largest gene group in the expression profile. Proteoglycans and other weakly expressed regulatory elements, such as transcription factors, were found only in very low copy numbers in the EST project owing to its low coverage. However, the EST analysis brought to light several interesting, previously unknown transcripts, which were examined in more detail. These include transcripts that could be assigned to LOC618319. In addition to the three previously described exon regions, a further exon was identified in the 3'-UTR. In the derived protein, which is at least 121 amino acids long, a signal peptide and a transmembrane domain were detected. Together with a possible glycosylation, the gene product can be classified among the proteoglycans. Although slightly deviating from the typical structures of bone- and cartilage-specific proteoglycans, a possible function of this gene product in the interaction with integrins and in cell-cell interaction, but also in signal transduction, is conceivable.

EST sequencing of about 4000 cDNA clones can, however, generally cover only a fraction of the possible transcripts of the tissue under investigation. The new next-generation sequencing technologies offer entirely new possibilities for sequencing and analyzing complete transcriptomes with very high coverage. To support the EST data and to broaden the data basis considerably, the transcriptome of the bovine fetal growth plate was sequenced using both the Roche 454/FLX and the Illumina Solexa technology. In the evaluation of the roughly 40,000 454 reads and 75 million Illumina reads, procedures were established for the general handling, quality control, clustering, annotation and quantitative evaluation of large amounts of sequence data. Comparison of the high-throughput BLAST analyses in the classical read-count approach with the EST expression profile showed good agreement. Deviations between the individual methods could not be explained methodologically in all cases; in some cases correlations between transcript length and read distribution are apparent. Although even simple methods such as normalization to RPKM (reads per kilobase of transcript per million mappable reads) improve the interpretation, systematic errors caused by the type of sequencing could not always be eliminated. Appropriate normalization of the data is therefore particularly important when comparing data sets generated in different ways.

The results discussed here from the various analyses show the new sequencing technologies to be a good complement to, and a potential replacement for, established methods of gene expression analysis.
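A minimal sketch of the RPKM normalization mentioned in the text, directly from its definition (reads per kilobase of transcript per million mappable reads); the example numbers are illustrative, not values from the thesis:

```python
# Hedged sketch: RPKM = reads mapped to a transcript
#                       / (transcript length in kb * total mapped reads in millions)
def rpkm(reads, transcript_length_bp, total_mapped_reads):
    return reads / ((transcript_length_bp / 1e3) *
                    (total_mapped_reads / 1e6))

# e.g. 1500 reads on a hypothetical 4.4 kb transcript out of 75e6 mapped reads
print(rpkm(1500, 4400, 75e6))   # ~4.5 RPKM
```

Dividing by transcript length is exactly what addresses the length-vs-read-count correlation noted above, although, as the text says, it does not remove all sequencing-type-specific systematic errors.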

Relevance:

60.00%

Publisher:

Abstract:

A permanent electric dipole moment of the neutron violates time-reversal as well as parity symmetry. Thus it also violates the combination of charge-conjugation and parity symmetry if the combination of all three symmetries is a symmetry of nature. The violation of these symmetries could help to explain the observed baryon content of the Universe. The prediction of the Standard Model of particle physics for the neutron electric dipole moment is only about 10^-32 e·cm. At the same time, the combined violation of charge-conjugation and parity symmetry in the Standard Model is insufficient to explain the observed baryon asymmetry of the Universe. Several extensions to the Standard Model can explain the observed baryon asymmetry and also predict values for the neutron electric dipole moment just below the current best experimental limit of |d_n| < 2.9 × 10^-26 e·cm (90% C.L.) obtained by the Sussex-RAL-ILL collaboration in 2006. The very same experiment that set the current best limit has been upgraded and moved to the Paul Scherrer Institute, where an international collaboration is now aiming to increase the sensitivity to an electric dipole moment by more than an order of magnitude. This thesis took place in the framework of this experiment and accompanied its commissioning up to the first data taking. After a short layout of the theoretical background in chapter 1, the experiment with all subsystems and their performance is described in detail in chapter 2. To reach the goal sensitivity, the control of systematic errors is as important as an increase in statistical sensitivity; known systematic effects are described and evaluated in chapter 3. During about ten days in 2012, a first set of data was measured with the experiment at the Paul Scherrer Institute. An analysis of this data is presented in chapter 4, together with general tools developed for future analysis efforts. The result for the upper limit of an electric dipole moment of the neutron is |d_n| ≤ 6.4 × 10^-25 e·cm (95% C.L.). Chapter 5 presents investigations for a next-generation experiment into building electrodes made partly from insulating material. Among other advantages, such electrodes would reduce the magnetic noise generated by the thermal movement of charge carriers. The last chapter summarizes this work and gives an outlook.
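For reference, the measurement principle of such Ramsey-type experiments can be stated in its standard form (a textbook reconstruction, not thesis-specific): with the electric field E parallel or antiparallel to the magnetic field B, the spin-precession frequencies are

```latex
\hbar\,\omega_{\uparrow\uparrow,\uparrow\downarrow} = 2\,\bigl|\mu_n B \pm d_n E\bigr|,
\qquad
d_n = \frac{\hbar\,(\omega_{\uparrow\uparrow} - \omega_{\uparrow\downarrow})}{4E},
```

so the EDM is extracted from the tiny frequency difference between the two field configurations, which is why the control of magnetic-field-related systematic errors and of magnetic noise dominates the error budget.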