947 results for method comparison
Abstract:
In the case of CPT and Lorentz symmetry violation, the minimal Standard Model Extension (SME) of Kostelecky and coworkers predicts sidereal modulations of atomic transition frequencies as the Earth rotates relative to a Lorentz-violating background field. One method to search for these modulations is the so-called clock-comparison experiment, in which the frequencies of co-located clocks are compared as they rotate with respect to the fixed stars. In this work an experiment is presented where polarized 3He and 129Xe gas samples in a glass cell serve as clocks, whose nuclear spin precession frequencies are detected with highly sensitive SQUID sensors inside a magnetically shielded room. The unique feature of this experiment is that the spins precess freely, with transverse relaxation times of up to 4.4 h for 129Xe and 14.1 h for 3He. To be sensitive to Lorentz-violating effects, the influence of external magnetic fields is canceled via the weighted difference of the 3He and 129Xe frequencies or phases. The Lorentz-violating SME parameters for the neutron are determined from a fit to the phase-difference data of seven spin precession measurements, each 12 to 16 hours long. The fit gives an upper limit on the equatorial component of the neutron parameter b_n of 3.7×10^(−32) GeV at the 95% confidence level. This value is not limited by the signal-to-noise ratio, but by the strong correlations between the fit parameters. To reduce these correlations and thereby improve the sensitivity of future experiments, it will be necessary to change the time structure of the weighted phase difference, which can be realized by increasing the 129Xe relaxation time.
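As a sketch of the field-suppression idea behind this weighted difference (generic notation, not taken verbatim from the thesis): both precession frequencies scale linearly with the same magnetic field B, so subtracting the 129Xe frequency scaled by the ratio of gyromagnetic ratios removes the field dependence and leaves only non-magnetic shifts, such as the sought Lorentz-violating ones. The same construction applies to the accumulated phases.

```latex
\omega_{\mathrm{He}} = \gamma_{\mathrm{He}} B + \delta\omega_{\mathrm{He}}, \qquad
\omega_{\mathrm{Xe}} = \gamma_{\mathrm{Xe}} B + \delta\omega_{\mathrm{Xe}}
\quad\Longrightarrow\quad
\Delta\omega \equiv \omega_{\mathrm{He}} - \frac{\gamma_{\mathrm{He}}}{\gamma_{\mathrm{Xe}}}\,\omega_{\mathrm{Xe}}
= \delta\omega_{\mathrm{He}} - \frac{\gamma_{\mathrm{He}}}{\gamma_{\mathrm{Xe}}}\,\delta\omega_{\mathrm{Xe}}
```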
Abstract:
Schroeder's backward integration method is the most widely used method for extracting the decay curve of an acoustic impulse response and calculating the reverberation time from that curve. The limits and possible improvements of this method are widely discussed in the literature. In this work a new method is proposed for evaluating the energy decay curve. The new method has been implemented in a Matlab toolbox, and its performance has been tested against the established literature method. The values of EDT and reverberation time extracted from the energy decay curves calculated with both methods have been compared, both in terms of the values themselves and in terms of their statistical representativeness. The main case study consists of nine Italian historical theatres in which acoustical measurements were performed. The comparison of the two extraction methods has also been applied to a critical case, i.e. the structural impulse responses of some building elements. The comparison shows that both methods return comparable values of T30. As the evaluation range is reduced, increasing differences are revealed; in particular, the main differences lie in the first part of the decay, where the EDT is evaluated. This is a consequence of the fact that the new method returns a "locally" defined energy decay curve, whereas Schroeder's method accumulates energy from the tail to the beginning of the impulse response. Another characteristic of the new energy decay extraction method is its independence from the background noise estimation. Finally, a statistical analysis is performed on the T30 and EDT values calculated from the impulse response measurements in the Italian historical theatres. The aim of this evaluation is to determine whether a subset of measurements can be considered representative for a complete characterization of these opera houses.
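A minimal sketch of Schroeder's backward integration and the decay-time fit it feeds (a generic implementation for illustration, not the Matlab toolbox described above; `ir` and `fs` are an assumed impulse response and sampling rate):

```python
import numpy as np

def schroeder_edc_db(ir):
    """Energy decay curve in dB: backward-integrated squared impulse response."""
    energy = np.cumsum(ir[::-1] ** 2)[::-1]   # integrate from the tail backwards
    return 10.0 * np.log10(energy / energy[0])

def decay_time(edc_db, fs, lo_db, hi_db):
    """Reverberation time from a linear fit between two dB levels,
    extrapolated to -60 dB (lo_db=-5, hi_db=-35 gives T30)."""
    idx = np.where((edc_db <= lo_db) & (edc_db >= hi_db))[0]
    t = idx / fs
    slope, _ = np.polyfit(t, edc_db[idx], 1)  # decay rate in dB per second
    return -60.0 / slope

# Usage (hypothetical impulse response `ir` sampled at `fs`):
# t30 = decay_time(schroeder_edc_db(ir), fs, -5.0, -35.0)
# edt = decay_time(schroeder_edc_db(ir), fs,  0.0, -10.0)
```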
Abstract:
The atmosphere exerts a global influence on the movement of heat and humidity between the continents, and thus significantly affects climate variability. Information about atmospheric circulation is of major importance for understanding different climatic conditions. Dust deposits from maar lakes and dry maars of the Eifel Volcanic Field (Germany) are therefore used as proxy data for the reconstruction of past aeolian dynamics.

In this thesis two sediment cores from the Eifel region are examined: core SM3 from Lake Schalkenmehren and core DE3 from the Dehner dry maar. Both cores contain the tephra of the Laacher See eruption, which is dated to 12,900 years before present. Taken together the cores cover the last 60,000 years: SM3 the Holocene, and DE3 the marine isotope stages MIS-3 and MIS-2, respectively. The frequencies of glacial dust storm events and their paleo wind directions are detected by high-resolution grain size and provenance analysis of the lake sediments. To this end two different methods are applied: geochemical measurements of the sediment using µXRF scanning, and the particle analysis method RADIUS (rapid particle analysis of digital images by ultra-high-resolution scanning of thin sections). It is shown that single dust layers in the lake sediment are characterized by an increased content of aeolian-transported carbonate particles. The limestone-bearing Eifel North-South zone is the most likely source of the carbonate-rich aeolian dust in the lake sediments of the Dehner dry maar. The dry maar is located on the western side of the Eifel North-South zone; carbonate-rich aeolian sediment is therefore most likely transported towards the Dehner dry maar by easterly winds. A methodology is developed which restricts the detection to the aeolian-transported carbonate particles in the sediment, the RADIUS-carbonate module.

In summary, during marine isotope stage MIS-3 both the storm frequency and the east wind frequency are increased in comparison to MIS-2. These results suggest that atmospheric circulation was affected by more turbulent conditions during MIS-3, in comparison to the more stable atmospheric circulation during the full glacial conditions of MIS-2.

The results of the investigations of the dust records are finally evaluated in relation to a study of atmospheric general circulation models for a comprehensive interpretation. Here, AGCM experiments (ECHAM3 and ECHAM4) with different prescribed SST patterns are used to develop a synoptic interpretation of long-persisting east wind conditions and of east wind storm events, which are suggested to lead to an enhanced accumulation of sediment transported by easterly winds to the proxy site of the Dehner dry maar.

The basic observations made on the proxy record are also reflected in the 10 m wind vectors of the different model experiments under glacial conditions with different prescribed sea surface temperature patterns. Furthermore, the analysis of long-persisting east wind conditions in the AGCM data shows a stronger seasonality under glacial conditions: all the different experiments are characterized by an increase in the relative importance of the LEWIC during spring and summer. The different glacial experiments consistently show a shift of a long-lasting high from over the Baltic Sea towards the northwest, directly above the Scandinavian Ice Sheet, together with a contemporaneously enhanced westerly circulation over the North Atlantic.

This thesis is a comprehensive analysis of atmospheric circulation patterns during the last glacial period. It has been possible to reconstruct important elements of the glacial paleoclimate in Central Europe. While the proxy data from the sediment cores yield a binary signal of wind direction changes (east versus west wind), a synoptic interpretation using atmospheric circulation models succeeds in showing a possible distribution of high- and low-pressure areas, and thus the direction and strength of the wind fields that have the capacity to transport dust. In conclusion, the combination of numerical models, which enhance the understanding of processes in the climate system, with proxy data from the environmental record is the key to a comprehensive approach to paleoclimatic reconstruction.
Abstract:
A substantial fraction of the organic carbon present in the atmosphere is found as volatile organic compounds (VOCs), which are predominantly released by the biosphere. Such biogenic emissions strongly influence the chemical and physical properties of the atmosphere by contributing to the formation of ground-level ozone and secondary organic aerosols. To better understand the formation of ground-level ozone and secondary organic aerosols, the technical capability to accurately measure the sum of these volatile organic compounds is necessary. Commonly used methods focus only on the detection of specific non-methane hydrocarbon compounds. The sum of these individual compounds, however, may represent only a lower limit of atmospheric organic carbon concentrations, since the available methods are not capable of analyzing all organic compounds in the atmosphere. A few studies are known that have addressed the total carbon determination of non-methane hydrocarbons in air, but measurements of the total non-methane organic compound exchange between vegetation and the atmosphere are lacking. We therefore investigated the total carbon determination of non-methane organic compounds from biogenic sources. The determination of total organic carbon was realized by collecting and enriching these compounds on a solid adsorbent. This first step was necessary to separate the stable gases CO, CO2, and CH4 from the organic carbon fraction. The organic compounds were thermally desorbed and oxidized to CO2. The CO2 produced by this oxidation was collected on a further enrichment unit and analyzed by thermal desorption and subsequent detection with an infrared gas analyzer. The major difficulties we identified were (i) the separation of ambient CO2 from the organic carbon fraction during enrichment, (ii) the recovery rates of the various non-methane hydrocarbons from the adsorbent, (iii) the choice of catalyst, and (iv) interferences at the detector of the total carbon analyzer. The choice of a Pt-Rh wire as catalyst led to significant progress regarding the correct determination of the CO2 background signal. This was necessary because small amounts of CO2 were also collected on the adsorption unit during the enrichment of the volatile organic compounds. Catalytic materials with high surface areas proved unsuitable for this application because, despite high temperatures, CO2 uptake and later release by the catalyst material was observed. The method was tested with various individual volatile organic compounds as well as in two plant chamber experiments with a selection of VOC species emitted by different plants. The plant chamber measurements were accompanied by GC-MS and PTR-MS measurements. In addition, calibration tests with various individual compounds from permeation/diffusion sources were carried out. The total carbon analyzer was able to confirm the diurnal course of the plant emissions. However, deviations of up to 50% in the total organic carbon mixing ratios were observed in comparison with the accompanying standard methods.
Abstract:
The major light-harvesting complex II (LHCII) of higher plants is the most abundant membrane protein in the world and is integrated into the chloroplast thylakoid membrane. LHCII can be used as a model system for better understanding how membrane proteins work, since 96% of its structure has been resolved crystallographically and it can be refolded in vitro in recombinant form. This yields a fully functional protein-pigment complex that is nearly identical to the in vivo variant.

Electron paramagnetic resonance (EPR) spectroscopy is a highly sensitive method, ideally suited to investigating the structural dynamics of proteins. It requires site-specific labeling with spin probes that bind covalently to cysteines. This is made possible by carefully selecting amino acids to be exchanged for cysteines without impairing the function of LHCII.

In this work, the stability of the spin label used and the sample quality were improved by examining all steps of the sample preparation. With these insights, both the risk of protein aggregation and the loss of EPR signal could be markedly reduced. In combination with the simultaneous establishment of Q-band EPR, considerably less concentrated samples can now be measured reliably. In addition, a reproducible method was developed for producing heterogeneous trimers. These consist of one doubly labeled monomer and two unlabeled monomers, and they make it possible to study the crystallographically incompletely resolved N-terminal domain in both the monomeric and the trimeric assembly state. The results confirmed the assumption that this domain is very flexible compared with the rigid protein core, and further showed that it is even more mobile in monomers than in trimers. The lumenal loop region was also examined at different pH values and with varying pigment composition, since this region is discussed very controversially. The measurements revealed that this region contains both rigid and more flexible sections. While the pH value had no influence on the conformation, the absence of neoxanthin was found to lead to a change in conformation. Further analyses of the structural dynamics of LHCII in a lipid membrane could not be carried out, however, since this requires directed insertion of the refolded protein into liposomes, which did not succeed despite intensive attempts.
Abstract:
This thesis deals with the development of a novel simulation technique for macromolecules in electrolyte solutions, with the aim of improving performance over current molecular-dynamics-based simulation methods. In solutions containing charged macromolecules and salt ions, it is the complex interplay of electrostatic interactions and hydrodynamics that determines the equilibrium and non-equilibrium behavior. However, the treatment of the solvent and dissolved ions makes up the major part of the computational effort, so efficient modeling of both components is essential for the performance of a method. In the novel method we treat the solvent in a coarse-grained fashion and replace the explicit-ion description by a dynamic mean-field treatment. We thus combine particle- and field-based descriptions in a hybrid method and thereby effectively solve the electrokinetic equations. The developed algorithm is tested extensively in terms of accuracy and performance, and suitable parameter sets are determined. As a first application we study charged polymer solutions (polyelectrolytes) in shear flow, with a focus on their viscoelastic properties; here we also include semidilute solutions, which are computationally demanding. Second, we study electro-osmotic flow over superhydrophobic surfaces, where we perform a detailed comparison with theoretical predictions.
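A minimal illustration of the dynamic mean-field idea, sketched here as a 1D explicit-Euler Nernst-Planck update in which an ion concentration field replaces explicit ions (grid, parameters, and the frozen potential are illustrative assumptions, not the scheme actually developed in the thesis):

```python
import numpy as np

def nernst_planck_step(c, phi, u, D, z, dx, dt, kT=1.0, e=1.0):
    """One explicit Euler update of
    dc/dt = -d/dx( -D dc/dx - D c z e/kT dphi/dx + u c )."""
    dcdx = np.gradient(c, dx)
    dphidx = np.gradient(phi, dx)
    flux = -D * dcdx - D * c * z * e / kT * dphidx + u * c
    return c - dt * np.gradient(flux, dx)

# Usage with made-up numbers: relax a counter-ion cloud in a fixed potential.
x = np.linspace(0.0, 10.0, 200)
c = np.full_like(x, 0.1)             # initially uniform ion concentration
phi = -np.exp(-(x - 5.0) ** 2)       # frozen attractive potential well
u = np.zeros_like(x)                 # no solvent flow in this toy case
for _ in range(2000):
    c = nernst_planck_step(c, phi, u, D=1.0, z=1, dx=x[1] - x[0], dt=1e-4)
```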
Abstract:
In recent years, Deep Learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that only work in the context of High Performance Computing with massive amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison of two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed by large corporations such as Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is an emerging paradigm and a new, mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts such as time, context, and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that in certain cases, with a smaller quantity of data, HTM can outperform the CNN.
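For concreteness, a minimal supervised CNN of the general kind compared against HTM, sketched in PyTorch (architecture and sizes are illustrative assumptions, not the models actually evaluated in the dissertation):

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 8 * 8, num_classes)  # for 32x32 inputs

    def forward(self, x):
        x = self.features(x)
        return self.classifier(x.flatten(1))

# One supervised training step on a dummy batch:
model = SmallCNN()
opt = torch.optim.SGD(model.parameters(), lr=0.01)
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
loss = nn.functional.cross_entropy(model(x), y)
loss.backward()
opt.step()
```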
Abstract:
This thesis aims to assess similarities and mismatches between the outputs of two independent methods for cloud cover quantification and classification, which rest on quite different physical bases. One is the SAFNWC software package, designed to process radiance data acquired by the SEVIRI sensor in the VIS/IR. The other is the MWCC algorithm, which uses the brightness temperatures acquired by the AMSU-B and MHS sensors in their channels centered on the MW water vapour absorption band. In a first stage, their cloud detection capability was tested by comparing the cloud masks they produced. These showed good agreement between the two methods, although some critical situations stand out: the MWCC fails to reveal clouds which, according to SAFNWC, are fractional, cirrus, very low, or high opaque clouds. In the second stage of the inter-comparison, the pixels classified as cloudy by both software packages were compared. The overall observed tendency of the MWCC method is an overestimation of the lower cloud classes. Conversely, the higher the cloud top, the larger the cloud portion that the MWCC fails to reveal but that is detected by the SAFNWC tool. This also emerges from a series of tests carried out using the cloud top height information to evaluate the height ranges in which each MWCC category is defined. Therefore, although the two methods are intended to provide the same kind of information, in reality they return quite different details on the same atmospheric column. The SAFNWC retrieval, being very sensitive to the cloud top temperature, captures the actual level reached by the cloud; the MWCC, by exploiting the penetration capability of microwaves, provides information about the levels located more deeply within the atmospheric column.
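A minimal sketch of the first-stage comparison, assuming both cloud masks have been regridded to a common boolean grid (the masks here are random placeholders for the SAFNWC and MWCC products):

```python
import numpy as np

def mask_agreement(mask_a, mask_b):
    """2x2 contingency counts and overall agreement for two cloud masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    both = np.sum(a & b)        # cloudy in both products
    only_a = np.sum(a & ~b)     # cloudy only in product A
    only_b = np.sum(~a & b)     # cloudy only in product B
    neither = np.sum(~a & ~b)   # clear in both
    return {
        "agreement": (both + neither) / a.size,
        "cloudy_in_both": int(both),
        "only_in_a": int(only_a),
        "only_in_b": int(only_b),
    }

# Usage with random placeholder masks on a common grid:
rng = np.random.default_rng(0)
stats = mask_agreement(rng.random((100, 100)) > 0.5,
                       rng.random((100, 100)) > 0.5)
```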
Abstract:
Systolic right ventricular (RV) function is an important predictor of the course of various congenital and acquired heart diseases, but its practical determination by echocardiography remains challenging. We compared routine assessment of lateral tricuspid annular systolic motion velocity (TV(lat), cm/s), using pulsed-wave tissue Doppler imaging from the apical 4-chamber view, with cardiac magnetic resonance (CMR) as the reference method.
Abstract:
Coronary late stent thrombosis, a rare but devastating complication, remains an important concern, in particular with the increasing use of drug-eluting stents. Notably, pathological studies have indicated that the proportion of uncovered coronary stent struts is the best morphometric predictor of late stent thrombosis. Intracoronary optical frequency domain imaging (OFDI), a novel second-generation optical coherence tomography (OCT)-derived imaging method, may allow rapid imaging for the detection of coronary stent strut coverage with markedly higher precision than intravascular ultrasound, owing to its microscopic resolution (axial resolution approximately 10-20 µm), and at a substantially increased speed of image acquisition compared with first-generation time-domain OCT. However, a histological validation of coronary OFDI for the evaluation of stent strut coverage in vivo is urgently needed. Hence, the present study was designed to evaluate the capacity of coronary OFDI, by scanning electron microscopy (SEM) and light microscopy (LM) analysis, to detect and evaluate stent strut coverage in a porcine model.
Abstract:
Bacteria are generally difficult specimens to prepare for conventional resin-section electron microscopy, and mycobacteria, with their thick and complex cell envelope layers, are especially prone to artefacts. Here we made a systematic comparison of different methods for preparing Mycobacterium smegmatis for thin-section electron microscopy analysis. These methods were: (1) conventional preparation with fixatives and epoxy resins at ambient temperature; (2) Tokuyasu cryo-sectioning of chemically fixed bacteria; (3) rapid freezing followed by freeze substitution and embedding in epoxy resin at room temperature, or (4) combined with Lowicryl HM20 embedding and ultraviolet (UV) polymerization at low temperature; and (5) CEMOVIS, or cryo-electron microscopy of vitreous sections. The best preservation of bacteria was obtained with the CEMOVIS method, as expected, especially with respect to the preservation of the cell envelope and lipid bodies. By comparison with cryo-electron microscopy of vitreous sections, both the conventional and Tokuyasu methods produced different, undesirable artefacts. The two freeze-substitution protocols showed variable preservation of the cell envelope but gave acceptable preservation of the cytoplasm and bacterial DNA, though not of lipid bodies. In conclusion, although cryo-electron microscopy of vitreous sections must be considered the 'gold standard' among sectioning methods for electron microscopy, because it avoids solvents and stains, optimally performed freeze substitution also offers advantages for the ultrastructural analysis of bacteria.
Abstract:
The analysis of samples from periodontal pockets is important in the diagnosis and therapy of periodontitis. In this study, three different sampling techniques were compared to determine whether one method yielded samples suitable for the reproducible and simultaneous determination of bacterial load, cytokines, neutrophil elastase, and arginine-specific gingipains (Rgps). Rgps are an important virulence factor of Porphyromonas gingivalis, whose exact concentration in gingival crevicular fluid (GCF) has not been quantified.
Abstract:
The impact of a semiquantitative, commercially available test based on DNA-strip technology (microIDent®, Hain Lifescience, Nehren, Germany) on the diagnosis and treatment of severe chronic periodontitis was evaluated in 25 periodontitis patients in comparison with a quantitative in-house real-time PCR. Subgingival plaque samples were collected at baseline as well as 3, 6, and 12 months later. After DNA extraction, Aggregatibacter actinomycetemcomitans, Porphyromonas gingivalis, Tannerella forsythia, Treponema denticola, and several other periodontopathogens were determined by both methods. The results obtained by DNA-strip technology were analyzed semiquantitatively and, additionally, quantitatively by densitometry. The results for the 4 major periodontopathogenic bacterial species correlated significantly between the 2 methods. Samples that showed a high bacterial load by one method but were negative by the other accounted for less than 2% of all samples. Both technologies showed the impact of treatment on the microflora. The semiquantitative DNA-strip technology in particular clearly resolved the different loads of periodontopathogens after therapy, and it is useful for microbial diagnostics of patients in dental practices.
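A minimal sketch of this kind of method-agreement analysis, assuming semiquantitative strip scores and quantitative PCR loads for the same samples (the data below are invented placeholders, and the thresholds are illustrative):

```python
import numpy as np
from scipy.stats import spearmanr

strip_score = np.array([0, 1, 2, 3, 2, 0, 1, 3])  # semiquantitative classes
pcr_load = np.array([2e3, 1e5, 8e5, 5e7, 2e6, 0, 4e4, 1e8])  # cells/sample

# Rank correlation between the two methods:
rho, p = spearmanr(strip_score, pcr_load)

# Strongly discordant fraction: high load by one method, negative by the other.
discordant = np.mean((strip_score == 0) & (pcr_load > 1e6) |
                     (strip_score == 3) & (pcr_load == 0))
```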
Abstract:
The G3, CBS-QB3, and CBS-APNO methods have been used to calculate ΔH and ΔG values for seventeen gas-phase deprotonation reactions whose experimental values are reported to be accurate to within one kcal/mol. For these reactions, the mean absolute deviation of the three methods from experiment is 0.84 to 1.26 kcal/mol, and the root-mean-square deviations for ΔG and ΔH are 1.43 and 1.49 kcal/mol for the CBS-QB3 method, 1.06 and 1.14 kcal/mol for the CBS-APNO method, and 1.16 and 1.28 kcal/mol for the G3 method. The high accuracy of these methods makes them reliable for calculating gas-phase deprotonation reactions, and allows them to serve as a valuable check on the accuracy of experimental data reported in the National Institute of Standards and Technology database.
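A minimal sketch of the quoted error statistics, i.e. mean absolute deviation and root-mean-square deviation of calculated values from experiment (the numbers below are placeholders, not the paper's data):

```python
import numpy as np

def mad_rmsd(calc, expt):
    """Mean absolute and root-mean-square deviation from experiment."""
    err = np.asarray(calc) - np.asarray(expt)
    return np.mean(np.abs(err)), np.sqrt(np.mean(err ** 2))

# Hypothetical deprotonation ΔG values (kcal/mol) for a few reactions:
calc = [335.2, 341.8, 350.1, 327.4]
expt = [334.5, 342.9, 349.0, 328.6]
mad, rmsd = mad_rmsd(calc, expt)
```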
Abstract:
Comparison of the crystal structure of a transition state analogue, used to raise catalytic antibodies for the benzoyl ester hydrolysis of cocaine, with structures calculated by ab initio, semiempirical, and solvation semiempirical methods reveals that modeling of solvation is crucial for replicating the crystal structure geometry. Both SM3 and SM2 calculations, starting from the crystal structure TSA I, converged on structures similar to the crystal structure. The HF/3-21G(*), HF/6-31G*, PM3, and AM1 calculations converged on structures similar to each other, but these gas-phase structures were significantly extended relative to the condensed-phase structures. Two transition states for the hydrolysis of the benzoyl ester of cocaine were located with the SM3 method, whereas the gas-phase calculations failed to locate reasonable transition state structures for this reaction. These results imply that accurate modeling of the potential energy surfaces for the hydrolysis of cocaine requires solvation methods.
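One common way to quantify how "similar to the crystal structure" a calculated geometry is would be an optimal-superposition RMSD via the Kabsch algorithm, sketched below (a generic illustration; the coordinate arrays are placeholders, and the paper does not necessarily use this particular metric):

```python
import numpy as np

def kabsch_rmsd(P, Q):
    """RMSD of two point sets P, Q (N x 3) after optimal rotation/translation."""
    P = P - P.mean(axis=0)                 # remove translation
    Q = Q - Q.mean(axis=0)
    V, S, Wt = np.linalg.svd(P.T @ Q)      # covariance SVD
    d = np.sign(np.linalg.det(V @ Wt))     # guard against improper rotation
    R = V @ np.diag([1.0, 1.0, d]) @ Wt
    return np.sqrt(np.mean(np.sum((P @ R - Q) ** 2, axis=1)))

# Usage with two hypothetical N x 3 coordinate arrays:
# rmsd = kabsch_rmsd(coords_calculated, coords_crystal)
```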